[jira] [Commented] (SOLR-7280) Load cores in sorted order and tweak coreLoadThread counts to improve cluster stability on restarts
[ https://issues.apache.org/jira/browse/SOLR-7280?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15704322#comment-15704322 ]

Damien Kamerman commented on SOLR-7280:
---

I've just been testing this and found a couple of issues:

* All the core registrations are done in background threads. This can flood the overseer queue; see CoreContainer.load(), which calls zkSys.registerInZk(core, true, false);
* I've increased leaderConflictResolveWait to 30min, but every 15s I can see:
  org.apache.solr.handler.admin.PrepRecoveryOp; After 15 seconds, core ip_1224_shard1_replica1 (shard1 of ip_1224) still does not have state: recovering; forcing ClusterState update from ZooKeeper
  Again, I think this can flood the overseer queue.

> Load cores in sorted order and tweak coreLoadThread counts to improve cluster
> stability on restarts
> ---
>
> Key: SOLR-7280
> URL: https://issues.apache.org/jira/browse/SOLR-7280
> Project: Solr
> Issue Type: Bug
> Components: SolrCloud
> Reporter: Shalin Shekhar Mangar
> Assignee: Noble Paul
> Fix For: 5.5.3, 6.2
>
> Attachments: SOLR-7280-5x.patch, SOLR-7280-5x.patch, SOLR-7280-5x.patch, SOLR-7280-test.patch, SOLR-7280.patch, SOLR-7280.patch
>
> In SOLR-7191, Damien mentioned that by loading solr cores in a sorted order
> and tweaking some of the coreLoadThread counts, he was able to improve the
> stability of a cluster with thousands of collections. We should explore some
> of these changes and fold them into Solr.

--
This message was sent by Atlassian JIRA (v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org
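For what it's worth, the sorted-order idea from the issue summary can be sketched in a few lines: sort the core names so every restarting node registers in the same deterministic order, and bound the load threads with a fixed pool so registrations trickle into the overseer queue rather than flooding it all at once. This is an illustrative sketch only; the class and method names below are made up, not the actual CoreContainer code:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.function.Consumer;

class SortedCoreLoader {

    // Deterministic load order: every restarting node walks its cores in the same sequence.
    static List<String> loadOrder(List<String> coreNames) {
        List<String> sorted = new ArrayList<>(coreNames);
        Collections.sort(sorted);
        return sorted;
    }

    // Register cores through a bounded pool so at most coreLoadThreads registrations
    // are in flight at once, instead of all of them hitting ZK in parallel.
    static void loadAll(List<String> coreNames, int coreLoadThreads,
                        Consumer<String> registerCore) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(coreLoadThreads);
        for (String name : loadOrder(coreNames)) {
            pool.submit(() -> registerCore.accept(name));
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.HOURS);
    }
}
```

Note the sort is plain lexicographic here; whatever ordering is chosen only needs to be the same on every node for the stability benefit to apply.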
[JENKINS] Lucene-Solr-6.x-Windows (64bit/jdk1.8.0_102) - Build # 589 - Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Windows/589/
Java: 64bit/jdk1.8.0_102 -XX:+UseCompressedOops -XX:+UseSerialGC

1 tests failed.
FAILED: org.apache.solr.handler.TestSolrConfigHandlerConcurrent.test

Error Message:

Stack Trace:
java.lang.NullPointerException
    at __randomizedtesting.SeedInfo.seed([7D4CD5B05AAB1158:F518EA6AF4577CA0]:0)
    at org.apache.solr.handler.TestSolrConfigHandlerConcurrent.test(TestSolrConfigHandlerConcurrent.java:109)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
    at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:992)
    at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:967)
    at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
    at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
    at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
    at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
    at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
    at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:811)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:462)
    at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
    at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
    at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
    at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
    at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
    at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
    at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
    at java.lang.Thread.run(Thread.java:745)

Build Log:
[...truncated 12500 lines...]
[junit4] Suite: org.apache.solr.handler.TestSolrConfigHandlerConcurrent
[junit4] 2> Creating
[jira] [Commented] (SOLR-9784) Refactor CloudSolrClient to eliminate direct dependency on ZK
[ https://issues.apache.org/jira/browse/SOLR-9784?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15703955#comment-15703955 ]

ASF subversion and git services commented on SOLR-9784:
---

Commit d89da61a6dfed2cec5b2232ae978ebd71f1216de in lucene-solr's branch refs/heads/branch_6x from [~noble.paul]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=d89da61 ]

SOLR-9784: added deprecation javadocs

> Refactor CloudSolrClient to eliminate direct dependency on ZK
> -
>
> Key: SOLR-9784
> URL: https://issues.apache.org/jira/browse/SOLR-9784
> Project: Solr
> Issue Type: Sub-task
> Components: SolrJ
> Reporter: Noble Paul
> Assignee: Noble Paul
> Attachments: SOLR-9584.patch
>
> CloudSolrClient should decouple itself from the ZK reading/write. This will
> help us provide alternate implementations w/o direct ZK dependency
[jira] [Commented] (SOLR-9784) Refactor CloudSolrClient to eliminate direct dependency on ZK
[ https://issues.apache.org/jira/browse/SOLR-9784?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15703949#comment-15703949 ]

ASF subversion and git services commented on SOLR-9784:
---

Commit 5b2594350df11ef54d52f417b34c6d082ad85e89 in lucene-solr's branch refs/heads/master from [~noble.paul]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=5b25943 ]

SOLR-9784: added deprecation javadocs

> Refactor CloudSolrClient to eliminate direct dependency on ZK
> -
>
> Key: SOLR-9784
> URL: https://issues.apache.org/jira/browse/SOLR-9784
> Project: Solr
> Issue Type: Sub-task
> Components: SolrJ
> Reporter: Noble Paul
> Assignee: Noble Paul
> Attachments: SOLR-9584.patch
>
> CloudSolrClient should decouple itself from the ZK reading/write. This will
> help us provide alternate implementations w/o direct ZK dependency
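To illustrate the decoupling the issue describes, with CloudSolrClient talking to an abstraction rather than to ZK directly, a rough sketch follows. The interface and class names here are hypothetical stand-ins, not the real SolrJ types; the point is only that the client compiles against the abstraction, so a ZK-backed and an HTTP-backed implementation become interchangeable:

```java
import java.util.Set;

// Hypothetical abstraction over cluster-state access (illustrative only).
interface ClusterStateProvider {
    Set<String> getLiveNodes();
    String getLeaderUrl(String collection, String shard);
}

// A client that depends only on the abstraction, never on ZooKeeper classes.
class DecoupledCloudClient {
    private final ClusterStateProvider provider;

    DecoupledCloudClient(ClusterStateProvider provider) {
        this.provider = provider;
    }

    // Route-selection logic can now be tested with a fake provider,
    // and shipped with a non-ZK provider, without touching this class.
    boolean isNodeLive(String node) {
        return provider.getLiveNodes().contains(node);
    }
}
```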
[JENKINS] Lucene-Solr-master-Linux (64bit/jdk1.8.0_102) - Build # 18397 - Still Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/18397/
Java: 64bit/jdk1.8.0_102 -XX:+UseCompressedOops -XX:+UseG1GC

1 tests failed.
FAILED: junit.framework.TestSuite.org.apache.solr.cloud.TestCloudDeleteByQuery

Error Message:
1 thread leaked from SUITE scope at org.apache.solr.cloud.TestCloudDeleteByQuery:
   1) Thread[id=211, name=OverseerHdfsCoreFailoverThread-97018240649068556-127.0.0.1:33090_solr-n_02, state=TIMED_WAITING, group=Overseer Hdfs SolrCore Failover Thread.]
        at java.lang.Thread.sleep(Native Method)
        at org.apache.solr.cloud.OverseerAutoReplicaFailoverThread.run(OverseerAutoReplicaFailoverThread.java:137)
        at java.lang.Thread.run(Thread.java:745)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE scope at org.apache.solr.cloud.TestCloudDeleteByQuery:
   1) Thread[id=211, name=OverseerHdfsCoreFailoverThread-97018240649068556-127.0.0.1:33090_solr-n_02, state=TIMED_WAITING, group=Overseer Hdfs SolrCore Failover Thread.]
        at java.lang.Thread.sleep(Native Method)
        at org.apache.solr.cloud.OverseerAutoReplicaFailoverThread.run(OverseerAutoReplicaFailoverThread.java:137)
        at java.lang.Thread.run(Thread.java:745)
    at __randomizedtesting.SeedInfo.seed([4AC34103DE4681B1]:0)

Build Log:
[...truncated 10684 lines...]
[junit4] Suite: org.apache.solr.cloud.TestCloudDeleteByQuery
[junit4] 2> Creating dataDir: /home/jenkins/workspace/Lucene-Solr-master-Linux/solr/build/solr-core/test/J2/temp/solr.cloud.TestCloudDeleteByQuery_4AC34103DE4681B1-001/init-core-data-001
[junit4] 2> 0 INFO (SUITE-TestCloudDeleteByQuery-seed#[4AC34103DE4681B1]-worker) [] o.e.j.u.log Logging initialized @2113ms
[junit4] 2> 16 INFO (SUITE-TestCloudDeleteByQuery-seed#[4AC34103DE4681B1]-worker) [] o.a.s.SolrTestCaseJ4 Randomized ssl (true) and clientAuth (false) via: @org.apache.solr.util.RandomizeSSL(reason=, value=NaN, ssl=NaN, clientAuth=NaN)
[junit4] 2> 243 INFO (SUITE-TestCloudDeleteByQuery-seed#[4AC34103DE4681B1]-worker) [] o.a.s.c.MiniSolrCloudCluster Starting cluster of 5 servers in /home/jenkins/workspace/Lucene-Solr-master-Linux/solr/build/solr-core/test/J2/temp/solr.cloud.TestCloudDeleteByQuery_4AC34103DE4681B1-001/tempDir-001
[junit4] 2> 255 INFO (SUITE-TestCloudDeleteByQuery-seed#[4AC34103DE4681B1]-worker) [] o.a.s.c.ZkTestServer STARTING ZK TEST SERVER
[junit4] 2> 276 INFO (Thread-1) [] o.a.s.c.ZkTestServer client port:0.0.0.0/0.0.0.0:0
[junit4] 2> 276 INFO (Thread-1) [] o.a.s.c.ZkTestServer Starting server
[junit4] 2> 375 INFO (SUITE-TestCloudDeleteByQuery-seed#[4AC34103DE4681B1]-worker) [] o.a.s.c.ZkTestServer start zk server on port:38555
[junit4] 2> 484 WARN (NIOServerCxn.Factory:0.0.0.0/0.0.0.0:0) [] o.a.z.s.NIOServerCnxn Exception causing close of session 0x0 due to java.io.IOException: ZooKeeperServer not running
[junit4] 2> 2753 INFO (jetty-launcher-1-thread-1) [] o.e.j.s.Server jetty-9.3.14.v20161028
[junit4] 2> 2753 INFO (jetty-launcher-1-thread-2) [] o.e.j.s.Server jetty-9.3.14.v20161028
[junit4] 2> 2753 INFO (jetty-launcher-1-thread-5) [] o.e.j.s.Server jetty-9.3.14.v20161028
[junit4] 2> 2753 INFO (jetty-launcher-1-thread-4) [] o.e.j.s.Server jetty-9.3.14.v20161028
[junit4] 2> 2753 INFO (jetty-launcher-1-thread-3) [] o.e.j.s.Server jetty-9.3.14.v20161028
[junit4] 2> 2784 INFO (jetty-launcher-1-thread-1) [] o.e.j.s.h.ContextHandler Started o.e.j.s.ServletContextHandler@35244d53{/solr,null,AVAILABLE}
[junit4] 2> 2785 INFO (jetty-launcher-1-thread-4) [] o.e.j.s.h.ContextHandler Started o.e.j.s.ServletContextHandler@4fab49a5{/solr,null,AVAILABLE}
[junit4] 2> 2784 INFO (jetty-launcher-1-thread-3) [] o.e.j.s.h.ContextHandler Started o.e.j.s.ServletContextHandler@2fc8375f{/solr,null,AVAILABLE}
[junit4] 2> 2784 INFO (jetty-launcher-1-thread-2) [] o.e.j.s.h.ContextHandler Started o.e.j.s.ServletContextHandler@64e1d480{/solr,null,AVAILABLE}
[junit4] 2> 2784 INFO (jetty-launcher-1-thread-5) [] o.e.j.s.h.ContextHandler Started o.e.j.s.ServletContextHandler@1b06f2dc{/solr,null,AVAILABLE}
[junit4] 2> 2964 INFO (jetty-launcher-1-thread-5) [] o.e.j.s.AbstractConnector Started ServerConnector@1ccc4720{SSL,[ssl, http/1.1]}{127.0.0.1:38222}
[junit4] 2> 2964 INFO (jetty-launcher-1-thread-5) [] o.e.j.s.Server Started @5088ms
[junit4] 2> 2970 INFO (jetty-launcher-1-thread-3) [] o.e.j.s.AbstractConnector Started ServerConnector@c6073b8{SSL,[ssl, http/1.1]}{127.0.0.1:33090}
[junit4] 2> 2970 INFO (jetty-launcher-1-thread-3) [] o.e.j.s.Server Started @5094ms
[junit4] 2> 2966 INFO (jetty-launcher-1-thread-1) [] o.e.j.s.AbstractConnector Started ServerConnector@1d08a6e4{SSL,[ssl, http/1.1]}{127.0.0.1:38323}
[jira] [Updated] (SOLR-9707) DeleteByQuery forward requests to down replicas and set it in LiR
[ https://issues.apache.org/jira/browse/SOLR-9707?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Varun Thacker updated SOLR-9707:
---
    Attachment: SOLR-9707.patch

Hi Jessica, I was trying to write a test case for this and I wasn't able to get it to fail without the patch. I realized that this is because we already filter out replicas which aren't on the live-node list ( https://github.com/apache/lucene-solr/blob/master/solr/solrj/src/java/org/apache/solr/common/cloud/ZkStateReader.java#L805 ). Attaching the updated patch with the test case for reference.

> DeleteByQuery forward requests to down replicas and set it in LiR
> -
>
> Key: SOLR-9707
> URL: https://issues.apache.org/jira/browse/SOLR-9707
> Project: Solr
> Issue Type: Bug
> Security Level: Public (Default Security Level. Issues are Public)
> Components: SolrCloud
> Reporter: Jessica Cheng Mallet
> Assignee: Varun Thacker
> Labels: solrcloud
> Attachments: SOLR-9707.diff, SOLR-9707.patch
>
> DeleteByQuery, unlike other requests, does not filter out the down replicas.
> Thus, the update is still forwarded to the down replica and fails, and the
> leader then sets the replica in LiR. In a cluster where there are lots of
> deleteByQuery requests, this can flood the /overseer/queue.
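The live-node filtering Varun points to (ZkStateReader dropping replicas whose node isn't in the live-nodes list) boils down to keeping only replicas hosted on live nodes, so no update is ever forwarded to a down replica in the first place. A simplified stand-alone sketch, not the actual ZkStateReader code, with replica locations reduced to node names:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Set;

class ReplicaFilter {

    // Keep only replicas whose hosting node is currently in the live-nodes list;
    // down replicas are skipped, so no request (and no LiR entry) results from them.
    static List<String> liveReplicas(List<String> replicaNodes, Set<String> liveNodes) {
        List<String> out = new ArrayList<>();
        for (String node : replicaNodes) {
            if (liveNodes.contains(node)) {
                out.add(node);
            }
        }
        return out;
    }
}
```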
[JENKINS] Lucene-Solr-master-Windows (32bit/jdk1.8.0_102) - Build # 6254 - Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/6254/
Java: 32bit/jdk1.8.0_102 -client -XX:+UseParallelGC

1 tests failed.
FAILED: org.apache.solr.handler.TestReplicationHandler.doTestStressReplication

Error Message:
[index.20161129002758850, index.20161129002759469, index.properties, replication.properties, snapshot_metadata] expected:<1> but was:<2>

Stack Trace:
java.lang.AssertionError: [index.20161129002758850, index.20161129002759469, index.properties, replication.properties, snapshot_metadata] expected:<1> but was:<2>
    at __randomizedtesting.SeedInfo.seed([C453BA5BA0539335:1FF8BA9DA57BFA86]:0)
    at org.junit.Assert.fail(Assert.java:93)
    at org.junit.Assert.failNotEquals(Assert.java:647)
    at org.junit.Assert.assertEquals(Assert.java:128)
    at org.junit.Assert.assertEquals(Assert.java:472)
    at org.apache.solr.handler.TestReplicationHandler.checkForSingleIndex(TestReplicationHandler.java:907)
    at org.apache.solr.handler.TestReplicationHandler.doTestStressReplication(TestReplicationHandler.java:874)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
    at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
    at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
    at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
    at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
    at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
    at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:811)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:462)
    at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
    at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
    at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
    at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
    at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
    at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
    at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
    at
[jira] [Updated] (LUCENE-7563) BKD index should compress unused leading bytes
[ https://issues.apache.org/jira/browse/LUCENE-7563?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Michael McCandless updated LUCENE-7563:
---
    Attachment: LUCENE-7563.patch

New patch, folding in [~jpountz]'s first idea. I like the second idea ... I'll try that next.

I tested on LatLonPoint and Geo3D with the ~60M document OpenStreetMaps geo benchmark, and it reduces heap usage from 2.29 MB to 1.79 MB (Geo3D) and from 2.29 MB to 1.77 MB (LatLonPoint), ~22% smaller.

> BKD index should compress unused leading bytes
> --
>
> Key: LUCENE-7563
> URL: https://issues.apache.org/jira/browse/LUCENE-7563
> Project: Lucene - Core
> Issue Type: Improvement
> Reporter: Michael McCandless
> Fix For: master (7.0), 6.4
>
> Attachments: LUCENE-7563.patch, LUCENE-7563.patch, LUCENE-7563.patch
>
> Today the BKD (points) in-heap index always uses {{dimensionNumBytes}} per
> dimension, but if e.g. you are indexing {{LongPoint}} yet only use the bottom
> two bytes in a given segment, we shouldn't store all those leading 0s in the
> index.
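The compression idea can be illustrated in miniature: compute, per dimension, how many leading bytes are identical across all values in the segment; only the remaining suffix bytes need to sit in the in-heap index. For a {{LongPoint}} that only uses the bottom two bytes, the six leading zero bytes are shared and need not be stored. An illustrative sketch, not the actual BKD writer code:

```java
class LeadingBytePrefix {

    // Number of leading bytes shared by every value; this prefix can be stored
    // once (or implied, when it is all zeros) instead of once per value.
    static int commonPrefixLength(byte[][] values, int numBytes) {
        int prefix = numBytes;
        byte[] first = values[0];
        for (byte[] v : values) {
            int i = 0;
            while (i < prefix && v[i] == first[i]) {
                i++;
            }
            prefix = i; // prefix only ever shrinks as values disagree earlier
        }
        return prefix;
    }
}
```

Per value, the index then needs only numBytes - prefix suffix bytes, which is where the reported ~22% heap reduction comes from.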
[jira] [Commented] (SOLR-9809) TrieField.createFields produces useless IndexableField instances when field is stored=false indexed=false
[ https://issues.apache.org/jira/browse/SOLR-9809?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15703727#comment-15703727 ]

Hoss Man commented on SOLR-9809:
---

Backstory... While working on SOLR-5944, I was really confused by a bit of code Ishan had in his patch, and by the associated explanation when I asked about it, relating to the results of calling {{field.getType().createFields(field, ...)}} in a code path where we'd already asserted that the field was a single-valued, docValues-only field...

{quote}
* if {{true==forInPlaceUpdate}} and {{createFields(...)}} returns anything that is *not* a NumericDocValuesField (or returns more than one), shouldn't that trip an assert or something? (ie: doesn't that mean this SchemaField isn't valid for use with an in-place update, and the code shouldn't have gotten this far?)
** {color:green}This is fine, since createFields() generates both NDV and non-NDV fields for an indexable field, and the intent is to drop the non-NDV one. Added a comment to this effect{color}
{quote}

...this response confused the hell out of me for a while, because I couldn't figure out any reason why createFields() should be returning "non-NDV fields", until I noticed that the way TrieField.createFields delegates to TrieField.createField is different from every other docValues-related FieldType: it expects the result to always be non-null, which (unlike every other FieldType) it always is.

> TrieField.createFields produces useless IndexableField instances when field
> is stored=false indexed=false
> -
>
> Key: SOLR-9809
> URL: https://issues.apache.org/jira/browse/SOLR-9809
> Project: Solr
> Issue Type: Bug
> Security Level: Public (Default Security Level. Issues are Public)
> Reporter: Hoss Man
>
> I'll provide more context in a jira comment below, but the important bit is this:
> * It seems that {{TrieField.createFields}} and/or {{TrieField.createField}}
> have a bug causing {{TrieField.createFields}} to return useless
> {{Legacy*Field}} instances when the field is _only_ using docValues (in
> addition to the important {{NumericDocValuesField}} instance which is also
> included in the list).
> * These useless IndexableField instances are passed along to the IndexWriter,
> where they are ultimately ignored because neither the stored nor indexed
> properties are set.
> * Other field types that support docValues (like StrField, BoolField and
> EnumField) don't seem to have this problem
> ** but EnumField may be including a useless {{null}} in the list? ... seems
> like a closely related bug.
> * The root of the bug seems to be that in most classes, {{createField}} returns
> null if the field is indexed=false AND stored=false, but that's not true in
> {{TrieField}}
> ** subsequently {{createFields}} seems to depend on {{createField}} not
> returning null, so it can reuse the already parsed numeric value
> * {{TrieField}} should be refactored to work the same as other fields that
> support docValues, and not produce useless IndexableField objects -- or at
> the very least, to not pass them up to the caller
> * We should add some low-level unit tests that loop over all the possible
> fieldTypes and sanity check that {{createFields}} returns an empty list when
> appropriate (no docValues, no stored, no indexed)
> ** We should also probably update the key consumers of
> {{FieldType.createFields}} to assert the values in the list are non-null --
> this wouldn't have caught this bug, but it might help catch similarly silly
> bugs in the future.
[jira] [Created] (SOLR-9809) TrieField.createFields produces useless IndexableField instances when field is stored=false indexed=false
Hoss Man created SOLR-9809:
---

    Summary: TrieField.createFields produces useless IndexableField instances when field is stored=false indexed=false
    Key: SOLR-9809
    URL: https://issues.apache.org/jira/browse/SOLR-9809
    Project: Solr
    Issue Type: Bug
    Security Level: Public (Default Security Level. Issues are Public)
    Reporter: Hoss Man

I'll provide more context in a jira comment below, but the important bit is this:

* It seems that {{TrieField.createFields}} and/or {{TrieField.createField}} have a bug causing {{TrieField.createFields}} to return useless {{Legacy*Field}} instances when the field is _only_ using docValues (in addition to the important {{NumericDocValuesField}} instance which is also included in the list).
* These useless IndexableField instances are passed along to the IndexWriter, where they are ultimately ignored because neither the stored nor indexed properties are set.
* Other field types that support docValues (like StrField, BoolField and EnumField) don't seem to have this problem
** but EnumField may be including a useless {{null}} in the list? ... seems like a closely related bug.
* The root of the bug seems to be that in most classes, {{createField}} returns null if the field is indexed=false AND stored=false, but that's not true in {{TrieField}}
** subsequently {{createFields}} seems to depend on {{createField}} not returning null, so it can reuse the already parsed numeric value
* {{TrieField}} should be refactored to work the same as other fields that support docValues, and not produce useless IndexableField objects -- or at the very least, to not pass them up to the caller
* We should add some low-level unit tests that loop over all the possible fieldTypes and sanity check that {{createFields}} returns an empty list when appropriate (no docValues, no stored, no indexed)
** We should also probably update the key consumers of {{FieldType.createFields}} to assert the values in the list are non-null -- this wouldn't have caught this bug, but it might help catch similarly silly bugs in the future.
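The proposed fix can be modeled in miniature: createFields should return an empty list when the field is neither stored, indexed, nor docValues, and should emit only the docValues field for a docValues-only field. The types below are simplified stand-ins (strings in place of IndexableField), not the actual TrieField code:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Simplified stand-in for a SchemaField's relevant properties.
class FieldProps {
    boolean stored, indexed, docValues;
}

class CreateFieldsSketch {

    // Return only useful fields: nothing when the field is neither stored,
    // indexed, nor docValues, and no Legacy*Field for a docValues-only field.
    static List<String> createFields(FieldProps f, long value) {
        if (!f.stored && !f.indexed && !f.docValues) {
            return Collections.emptyList(); // nothing useful to hand the IndexWriter
        }
        List<String> fields = new ArrayList<>();
        if (f.indexed || f.stored) {
            fields.add("LegacyLongField(" + value + ")");
        }
        if (f.docValues) {
            fields.add("NumericDocValuesField(" + value + ")");
        }
        return fields;
    }
}
```

The buggy behavior described above is the same logic without the guards: the indexed/stored field is created unconditionally and passed up even when both properties are false.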
[jira] [Commented] (LUCENE-7466) add axiomatic similarity
[ https://issues.apache.org/jira/browse/LUCENE-7466?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15703638#comment-15703638 ]

Michael McCandless commented on LUCENE-7466:
---

OK it was definitely in some bizarro state ;) But I think I fixed it by reopening and then resolving again!

> add axiomatic similarity
> -
>
> Key: LUCENE-7466
> URL: https://issues.apache.org/jira/browse/LUCENE-7466
> Project: Lucene - Core
> Issue Type: Improvement
> Components: core/search
> Affects Versions: master (7.0)
> Reporter: Peilin Yang
> Assignee: Tommaso Teofili
> Labels: patch
> Fix For: master (7.0), 6.4
>
> Add axiomatic similarity approaches to the similarity family.
> More details can be found at http://dl.acm.org/citation.cfm?id=1076116 and
> https://www.eecis.udel.edu/~hfang/pubs/sigir05-axiom.pdf
> There are in total six similarity models. All of them are based on BM25,
> Pivoted Document Length Normalization or Language Model with Dirichlet prior.
> We think it is worth adding these models as part of Lucene.
[jira] [Resolved] (LUCENE-7466) add axiomatic similarity
[ https://issues.apache.org/jira/browse/LUCENE-7466?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Michael McCandless resolved LUCENE-7466.
---
    Resolution: Fixed
    Fix Version/s: master (7.0)

> add axiomatic similarity
> -
>
> Key: LUCENE-7466
> URL: https://issues.apache.org/jira/browse/LUCENE-7466
> Project: Lucene - Core
> Issue Type: Improvement
> Components: core/search
> Affects Versions: master (7.0)
> Reporter: Peilin Yang
> Assignee: Tommaso Teofili
> Labels: patch
> Fix For: master (7.0), 6.4
>
> Add axiomatic similarity approaches to the similarity family.
> More details can be found at http://dl.acm.org/citation.cfm?id=1076116 and
> https://www.eecis.udel.edu/~hfang/pubs/sigir05-axiom.pdf
> There are in total six similarity models. All of them are based on BM25,
> Pivoted Document Length Normalization or Language Model with Dirichlet prior.
> We think it is worth adding these models as part of Lucene.
[jira] [Reopened] (LUCENE-7466) add axiomatic similarity
[ https://issues.apache.org/jira/browse/LUCENE-7466?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Michael McCandless reopened LUCENE-7466:
---

> add axiomatic similarity
> -
>
> Key: LUCENE-7466
> URL: https://issues.apache.org/jira/browse/LUCENE-7466
> Project: Lucene - Core
> Issue Type: Improvement
> Components: core/search
> Affects Versions: master (7.0)
> Reporter: Peilin Yang
> Assignee: Tommaso Teofili
> Labels: patch
> Fix For: 6.4
>
> Add axiomatic similarity approaches to the similarity family.
> More details can be found at http://dl.acm.org/citation.cfm?id=1076116 and
> https://www.eecis.udel.edu/~hfang/pubs/sigir05-axiom.pdf
> There are in total six similarity models. All of them are based on BM25,
> Pivoted Document Length Normalization or Language Model with Dirichlet prior.
> We think it is worth adding these models as part of Lucene.
[jira] [Commented] (SOLR-7105) Running Solr as a windows service
[ https://issues.apache.org/jira/browse/SOLR-7105?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15703621#comment-15703621 ]

Alex Crome commented on SOLR-7105:
---

Unfortunately you can't run any old executable as a Windows service - it needs to understand how to communicate with the Service Control Manager. Thus some wrapper (nssm or the Commons Daemon wrappers) is required.

I have a script that does a basic install of Solr at https://gist.github.com/afscrome/e6c4f3c8e9ca89e9882b1b77fde1e2c0 which should be a good starting point. Specifically it:

* Sets up Solr as a Windows service (through nssm)
* Works around an issue with nssm being unable to gracefully stop Solr (due to Windows stupidity)
* Avoids double logging in both nssm logs and Solr logs
* Sets up Solr with the least required privileges (service-specific virtual account + minimal permissions). (Needs more work, as {{solr.cmd}} doesn't support everything needed to provide a clean split between app files and data files, like the opt/var split supported in the bash scripts)
* Works around SOLR-9760 when running with minimal permissions
* Adds some diagnostics to make troubleshooting service installation failures easier (specifically, determining whether to look at the nssm, Solr or Windows event logs to troubleshoot further).

Work needed:

* Probably needs to allow for more configuration options (Solr home, log directory, etc. - check what the Linux script supports)
* Should probably add a {{NoStartService}} option that installs the service but doesn't start it afterwards.
* Any problems with using nssm over Commons Daemon? I'm not familiar with the latter.

This script supports Win 8 / Server 2012 and up. Win 7 and Server 2008 R2 will work, but will require PowerShell 3 or higher to be installed. It might be possible to support older versions of PowerShell, but that would need more investigation.

> Running Solr as a windows service
> -
>
> Key: SOLR-7105
> URL: https://issues.apache.org/jira/browse/SOLR-7105
> Project: Solr
> Issue Type: Improvement
> Reporter: Varun Thacker
> Fix For: 6.0
>
> Since we moved away from shipping a war, it's useful to have scripts to start
> Solr as a service.
> In 5.0 we already added a script for unix systems; we should also add one for
> windows.
> The Commons Daemon project seems like a good way to implement it -
> http://commons.apache.org/proper/commons-daemon/procrun.html
[JENKINS-EA] Lucene-Solr-6.x-Linux (32bit/jdk-9-ea+140) - Build # 2293 - Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/2293/ Java: 32bit/jdk-9-ea+140 -server -XX:+UseG1GC 1 tests failed. FAILED: org.apache.solr.handler.TestReplicationHandlerBackup.doTestBackup Error Message: no segments* file found in SimpleFSDirectory@/home/jenkins/workspace/Lucene-Solr-6.x-Linux/solr/build/solr-core/test/J1/temp/solr.handler.TestReplicationHandlerBackup_5A0F4F581134B450-001/solr-instance-001/collection1/data/snapshot.svig lockFactory=org.apache.lucene.store.NativeFSLockFactory@56f237: files: [_0.si] Stack Trace: org.apache.lucene.index.IndexNotFoundException: no segments* file found in SimpleFSDirectory@/home/jenkins/workspace/Lucene-Solr-6.x-Linux/solr/build/solr-core/test/J1/temp/solr.handler.TestReplicationHandlerBackup_5A0F4F581134B450-001/solr-instance-001/collection1/data/snapshot.svig lockFactory=org.apache.lucene.store.NativeFSLockFactory@56f237: files: [_0.si] at __randomizedtesting.SeedInfo.seed([5A0F4F581134B450:1B846F3D368A471F]:0) at org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfos.java:680) at org.apache.lucene.index.StandardDirectoryReader.open(StandardDirectoryReader.java:77) at org.apache.lucene.index.DirectoryReader.open(DirectoryReader.java:63) at org.apache.solr.handler.TestReplicationHandlerBackup.verify(TestReplicationHandlerBackup.java:150) at org.apache.solr.handler.TestReplicationHandlerBackup.doTestBackup(TestReplicationHandlerBackup.java:214) at jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(java.base@9-ea/Native Method) at jdk.internal.reflect.NativeMethodAccessorImpl.invoke(java.base@9-ea/NativeMethodAccessorImpl.java:62) at jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(java.base@9-ea/DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(java.base@9-ea/Method.java:535) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907) 
at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:811) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:462) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at
[jira] [Commented] (SOLR-9779) Basic auth in not supported in Streaming Expressions
[ https://issues.apache.org/jira/browse/SOLR-9779?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15703603#comment-15703603 ] Kevin Risden commented on SOLR-9779: Thanks [~wiredcity11] for checking! There should be a better way to handle this than the custom httpclient. I'll leave this open for that. > Basic auth in not supported in Streaming Expressions > > > Key: SOLR-9779 > URL: https://issues.apache.org/jira/browse/SOLR-9779 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) > Components: clients - java, security >Affects Versions: 6.0 >Reporter: Sandeep Mukherjee >Assignee: Kevin Risden > Labels: features, security > Fix For: 6.4 > > > I'm creating a StreamFactory object like the following code: > {code} > new StreamFactory().withDefaultZkHost(solrConfig.getConnectString()) > .withFunctionName("gatherNodes", GatherNodesStream.class); > {code} > However once I create the StreamFactory there is no way provided to set the > CloudSolrClient object which can be used to set Basic Auth headers. > In StreamContext object there is a way to set the SolrClientCache object > which keep reference to all the CloudSolrClient where I can set a reference > to HttpClient which sets the Basic Auth header. However the problem is, > inside the SolrClientCache there is no way to set your own version of > CloudSolrClient with BasicAuth enabled. > I think we should expose method in StreamContext where I can specify > basic-auth enabled CloudSolrClient to use. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
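For background on the workaround being discussed: whatever client is plugged into SolrClientCache ultimately has to attach a standard HTTP Basic Authorization header to each request. A self-contained sketch of how that header value is formed (plain RFC 7617 Basic scheme; the solr/SolrRocks credentials are placeholders, not anything from this issue):

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class BasicAuthHeader {
    /** Builds the value a basic-auth-enabled HttpClient must send as the Authorization header. */
    static String basicAuthValue(String user, String password) {
        // Basic scheme: base64("user:password"), prefixed with "Basic ".
        String token = Base64.getEncoder()
                .encodeToString((user + ":" + password).getBytes(StandardCharsets.UTF_8));
        return "Basic " + token;
    }

    public static void main(String[] args) {
        // Placeholder credentials, as in Solr's BasicAuthPlugin examples.
        System.out.println(basicAuthValue("solr", "SolrRocks"));
    }
}
```

Exposing a hook for a pre-configured client (rather than re-deriving this header per request) is the cleaner fix the comment above is pointing at.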
[jira] [Comment Edited] (SOLR-9779) Basic auth in not supported in Streaming Expressions
[ https://issues.apache.org/jira/browse/SOLR-9779?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15703603#comment-15703603 ] Kevin Risden edited comment on SOLR-9779 at 11/28/16 11:50 PM: --- Thanks [~wiredcity11] for checking! There should be a better way to handle this than the custom solrclientcache. I'll leave this open for that. was (Author: risdenk): Thanks [~wiredcity11] for checking! There should be a better way to handle this than the custom httpclient. I'll leave this open for that. > Basic auth in not supported in Streaming Expressions > > > Key: SOLR-9779 > URL: https://issues.apache.org/jira/browse/SOLR-9779 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) > Components: clients - java, security >Affects Versions: 6.0 >Reporter: Sandeep Mukherjee >Assignee: Kevin Risden > Labels: features, security > Fix For: 6.4 > > > I'm creating a StreamFactory object like the following code: > {code} > new StreamFactory().withDefaultZkHost(solrConfig.getConnectString()) > .withFunctionName("gatherNodes", GatherNodesStream.class); > {code} > However once I create the StreamFactory there is no way provided to set the > CloudSolrClient object which can be used to set Basic Auth headers. > In StreamContext object there is a way to set the SolrClientCache object > which keep reference to all the CloudSolrClient where I can set a reference > to HttpClient which sets the Basic Auth header. However the problem is, > inside the SolrClientCache there is no way to set your own version of > CloudSolrClient with BasicAuth enabled. > I think we should expose method in StreamContext where I can specify > basic-auth enabled CloudSolrClient to use. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-9779) Basic auth in not supported in Streaming Expressions
[ https://issues.apache.org/jira/browse/SOLR-9779?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15703595#comment-15703595 ] Sandeep Mukherjee commented on SOLR-9779: - That works! Thanks a bunch. > Basic auth in not supported in Streaming Expressions > > > Key: SOLR-9779 > URL: https://issues.apache.org/jira/browse/SOLR-9779 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) > Components: clients - java, security >Affects Versions: 6.0 >Reporter: Sandeep Mukherjee >Assignee: Kevin Risden > Labels: features, security > Fix For: 6.4 > > > I'm creating a StreamFactory object like the following code: > {code} > new StreamFactory().withDefaultZkHost(solrConfig.getConnectString()) > .withFunctionName("gatherNodes", GatherNodesStream.class); > {code} > However once I create the StreamFactory there is no way provided to set the > CloudSolrClient object which can be used to set Basic Auth headers. > In StreamContext object there is a way to set the SolrClientCache object > which keep reference to all the CloudSolrClient where I can set a reference > to HttpClient which sets the Basic Auth header. However the problem is, > inside the SolrClientCache there is no way to set your own version of > CloudSolrClient with BasicAuth enabled. > I think we should expose method in StreamContext where I can specify > basic-auth enabled CloudSolrClient to use. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[JENKINS] Lucene-Solr-6.x-Solaris (64bit/jdk1.8.0) - Build # 521 - Still Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Solaris/521/ Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC 1 tests failed. FAILED: org.apache.solr.schema.TestCloudSchemaless.test Error Message: QUERY FAILED: xpath=/response/arr[@name='fields']/lst/str[@name='name'][.='newTestFieldInt448'] request=/schema/fields?wt=xml response=
[jira] [Commented] (SOLR-9779) Basic auth in not supported in Streaming Expressions
[ https://issues.apache.org/jira/browse/SOLR-9779?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15703504#comment-15703504 ] Sandeep Mukherjee commented on SOLR-9779: - Thanks for that Kevin. I'm going to try it out. > Basic auth in not supported in Streaming Expressions > > > Key: SOLR-9779 > URL: https://issues.apache.org/jira/browse/SOLR-9779 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) > Components: clients - java, security >Affects Versions: 6.0 >Reporter: Sandeep Mukherjee >Assignee: Kevin Risden > Labels: features, security > Fix For: 6.4 > > > I'm creating a StreamFactory object like the following code: > {code} > new StreamFactory().withDefaultZkHost(solrConfig.getConnectString()) > .withFunctionName("gatherNodes", GatherNodesStream.class); > {code} > However once I create the StreamFactory there is no way provided to set the > CloudSolrClient object which can be used to set Basic Auth headers. > In StreamContext object there is a way to set the SolrClientCache object > which keep reference to all the CloudSolrClient where I can set a reference > to HttpClient which sets the Basic Auth header. However the problem is, > inside the SolrClientCache there is no way to set your own version of > CloudSolrClient with BasicAuth enabled. > I think we should expose method in StreamContext where I can specify > basic-auth enabled CloudSolrClient to use. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-9767) Add/Remove Role in Collection API does not pass role parameter to SolrServer
[ https://issues.apache.org/jira/browse/SOLR-9767?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15703377#comment-15703377 ] Kevin Risden commented on SOLR-9767: [~daisy_yu] - Can you add a test for this? > Add/Remove Role in Collection API does not pass role parameter to SolrServer > > > Key: SOLR-9767 > URL: https://issues.apache.org/jira/browse/SOLR-9767 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: SolrJ >Affects Versions: 6.2 > Environment: Linux Suse And Windows 7 >Reporter: Daisy.Yuan >Priority: Minor > Attachments: SOLR-9767.patch > > > // CollectionAdminResponse response = > CollectionAdminRequest.addRole("192.168.1.2:21104_solr", > "overseer").process(cloudSolrClient); > CollectionAdminResponse response = > CollectionAdminRequest.removeRole("192.168.1.3:21104", > "overseer").process(cloudSolrClient); > if (response.getStatus() != 0) { > System.out.println(response.getErrorMessages()); > } > When I add or remove role, it throw exception " Missing required parameter: > role" -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
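For reference while reproducing: the Collections API call that addRole/removeRole should ultimately issue carries both a node and a role parameter, and the report above is that the role parameter never reaches the server. A stdlib-only sketch of the expected request URL (host and node values are illustrative, taken loosely from the description):

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

public class RoleRequestUrl {
    /** The Collections API URL an ADDROLE call is expected to produce; "role" is the parameter reported missing. */
    static String addRoleUrl(String baseUrl, String node, String role) throws Exception {
        return baseUrl + "/admin/collections?action=ADDROLE"
                + "&node=" + URLEncoder.encode(node, StandardCharsets.UTF_8.name())
                + "&role=" + URLEncoder.encode(role, StandardCharsets.UTF_8.name());
    }

    public static void main(String[] args) throws Exception {
        // Illustrative node name in Solr's host:port_context form.
        System.out.println(addRoleUrl("http://192.168.1.2:21104/solr", "192.168.1.2:21104_solr", "overseer"));
    }
}
```

A test for the SolrJ fix would assert the same thing at the request level: that the generated request's parameter map contains a "role" entry.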
[jira] [Assigned] (SOLR-9767) Add/Remove Role in Collection API does not pass role parameter to SolrServer
[ https://issues.apache.org/jira/browse/SOLR-9767?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kevin Risden reassigned SOLR-9767: -- Assignee: Kevin Risden > Add/Remove Role in Collection API does not pass role parameter to SolrServer > > > Key: SOLR-9767 > URL: https://issues.apache.org/jira/browse/SOLR-9767 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: SolrJ >Affects Versions: 6.2 > Environment: Linux Suse And Windows 7 >Reporter: Daisy.Yuan >Assignee: Kevin Risden >Priority: Minor > Attachments: SOLR-9767.patch > > > // CollectionAdminResponse response = > CollectionAdminRequest.addRole("192.168.1.2:21104_solr", > "overseer").process(cloudSolrClient); > CollectionAdminResponse response = > CollectionAdminRequest.removeRole("192.168.1.3:21104", > "overseer").process(cloudSolrClient); > if (response.getStatus() != 0) { > System.out.println(response.getErrorMessages()); > } > When I add or remove role, it throw exception " Missing required parameter: > role" -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (SOLR-9759) Admin UI should post streaming expressions
[ https://issues.apache.org/jira/browse/SOLR-9759?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kevin Risden updated SOLR-9759: --- Description: Haven't had the chance to test this in 6.3, but in 6.2.1 I just ran into request entity too large, when I pasted an expression into the admin ui to begin debugging it... Furthermore, the UI gives no indication of any error at all, leading one to sit, waiting for the response. Firefox JavaScript console shows a 413 response and this: {code} 11:01:11.095 Error: JSON.parse: unexpected character at line 1 column 1 of the JSON data $scope.doStream/<@http://localhost:8984/solr/js/angular/controllers/stream.js:48:24 v/http://localhost:8984/solr/libs/angular-resource.min.js:33:133 processQueue@http://localhost:8984/solr/libs/angular.js:13193:27 scheduleProcessQueue/<@http://localhost:8984/solr/libs/angular.js:13209:27 $RootScopeProvider/this.$gethttp://localhost:8984/solr/libs/angular.js:14406:16 $RootScopeProvider/this.$gethttp://localhost:8984/solr/libs/angular.js:14222:15 $RootScopeProvider/this.$gethttp://localhost:8984/solr/libs/angular.js:14511:13 done@http://localhost:8984/solr/libs/angular.js:9669:36 completeRequest@http://localhost:8984/solr/libs/angular.js:9859:7 requestLoaded@http://localhost:8984/solr/libs/angular.js:9800:9 1angular.js:11617:18 consoleLog/<()angular.js:11617 $ExceptionHandlerProvider/this.$gethttp://localhost:8984/solr/js/angular/controllers/stream.js:48:24 v/http://localhost:8984/solr/libs/angular-resource.min.js:33:133 processQueue@http://localhost:8984/solr/libs/angular.js:13193:27 scheduleProcessQueue/<@http://localhost:8984/solr/libs/angular.js:13209:27 $RootScopeProvider/this.$gethttp://localhost:8984/solr/libs/angular.js:14406:16 $RootScopeProvider/this.$gethttp://localhost:8984/solr/libs/angular.js:14222:15 $RootScopeProvider/this.$gethttp://localhost:8984/solr/libs/angular.js:14511:13 done@http://localhost:8984/solr/libs/angular.js:9669:36 
completeRequest@http://localhost:8984/solr/libs/angular.js:9859:7 requestLoaded@http://localhost:8984/solr/libs/angular.js:9800:9 1angular.js:11617:18 consoleLog/<()angular.js:11617 $ExceptionHandlerProvider/this.$get Admin UI should post streaming expressions > -- > > Key: SOLR-9759 > URL: https://issues.apache.org/jira/browse/SOLR-9759 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: UI >Affects Versions: 6.2.1 >Reporter: Gus Heck > > Haven't had the chance to test this in 6.3, but in 6.2.1 I just ran into > request entity too large, when I pasted an expression into the admin ui to > begin debugging it... > Furthermore, the UI gives no indication of any error at all, leading one to > sit, waiting for the response. Firefox JavaScript console shows a 413 > response and this: > {code} > 11:01:11.095 Error: JSON.parse: unexpected character at line 1 column 1 of > the JSON data > $scope.doStream/<@http://localhost:8984/solr/js/angular/controllers/stream.js:48:24 > v/http://localhost:8984/solr/libs/angular-resource.min.js:33:133 > processQueue@http://localhost:8984/solr/libs/angular.js:13193:27 > scheduleProcessQueue/<@http://localhost:8984/solr/libs/angular.js:13209:27 > $RootScopeProvider/this.$gethttp://localhost:8984/solr/libs/angular.js:14406:16 > $RootScopeProvider/this.$gethttp://localhost:8984/solr/libs/angular.js:14222:15 > $RootScopeProvider/this.$gethttp://localhost:8984/solr/libs/angular.js:14511:13 > done@http://localhost:8984/solr/libs/angular.js:9669:36 > completeRequest@http://localhost:8984/solr/libs/angular.js:9859:7 > requestLoaded@http://localhost:8984/solr/libs/angular.js:9800:9 > 1angular.js:11617:18 > consoleLog/<()angular.js:11617 > $ExceptionHandlerProvider/this.$get processQueue()angular.js:13201 > scheduleProcessQueue/<()angular.js:13209 > $RootScopeProvider/this.$get $RootScopeProvider/this.$get $RootScopeProvider/this.$get done()angular.js:9669 > 
completeRequest()angular.js:9859 > requestLoaded()angular.js:9800 > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
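On the fix direction the summary implies (the UI should POST rather than GET): the stream handler itself accepts the expression in a POST body, which sidesteps URL-length limits and the 413 entirely. A hedged command-line sketch, assuming a collection named collection1 on localhost:8984 as in the log above (collection name and expression are illustrative):

```shell
# POST the streaming expression instead of putting it on the query string,
# avoiding "request entity too large" for long expressions.
curl --data-urlencode 'expr=search(collection1, q="*:*", fl="id", sort="id asc")' \
     'http://localhost:8984/solr/collection1/stream'
```

The Admin UI's stream screen would need the equivalent change: send expr as form data in the request body rather than appending it to the URL.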
[jira] [Created] (SOLR-9808) Update ivy dependencies for Hadoop 3 Minikdc
Hrishikesh Gadre created SOLR-9808: -- Summary: Update ivy dependencies for Hadoop 3 Minikdc Key: SOLR-9808 URL: https://issues.apache.org/jira/browse/SOLR-9808 Project: Solr Issue Type: Sub-task Security Level: Public (Default Security Level. Issues are Public) Reporter: Hrishikesh Gadre Priority: Minor -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[JENKINS] Lucene-Solr-master-Linux (32bit/jdk1.8.0_102) - Build # 18396 - Still Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/18396/ Java: 32bit/jdk1.8.0_102 -server -XX:+UseConcMarkSweepGC 1 tests failed. FAILED: org.apache.solr.cloud.TestCloudRecovery.corruptedLogTest Error Message: Timeout waiting for all live and active Stack Trace: java.lang.AssertionError: Timeout waiting for all live and active at __randomizedtesting.SeedInfo.seed([FEBFA134F5479CF0:7DC9FEC6233E9251]:0) at org.junit.Assert.fail(Assert.java:93) at org.junit.Assert.assertTrue(Assert.java:43) at org.apache.solr.cloud.TestCloudRecovery.corruptedLogTest(TestCloudRecovery.java:155) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:811) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:462) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367) at java.lang.Thread.run(Thread.java:745) Build Log: [...truncated 12146 lines...] [junit4] Suite: org.apache.solr.cloud.TestCloudRecovery [junit4] 2> Creating dataDir:
[jira] [Comment Edited] (LUCENE-7466) add axiomatic similarity
[ https://issues.apache.org/jira/browse/LUCENE-7466?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15703302#comment-15703302 ] Tommaso Teofili edited comment on LUCENE-7466 at 11/28/16 10:14 PM: well .. that's weird, I had set it to resolved back on Nov 20th (click on the 'All' tab), but then when you commented I saw it was still unresolved and therefore assumed it was reopened by someone else. Now it looks resolved because you can close and reopen, but also unresolved as per current resolution value ... was (Author: teofili): well .. that's weird, I had set it to resolved back on Nov 20th (click on the 'All' tab), but then when you commented I saw it was still unresolved and therefore assumed it was reopened by someone else. Now it looksresolved because you can close and reopen, but also unresolved as per current resolution value ... > add axiomatic similarity > - > > Key: LUCENE-7466 > URL: https://issues.apache.org/jira/browse/LUCENE-7466 > Project: Lucene - Core > Issue Type: Improvement > Components: core/search >Affects Versions: master (7.0) >Reporter: Peilin Yang >Assignee: Tommaso Teofili > Labels: patch > Fix For: 6.4 > > > Add axiomatic similarity approaches to the similarity family. > More details can be found at http://dl.acm.org/citation.cfm?id=1076116 and > https://www.eecis.udel.edu/~hfang/pubs/sigir05-axiom.pdf > There are in total six similarity models. All of them are based on BM25, > Pivoted Document Length Normalization or Language Model with Dirichlet prior. > We think it is worthy to add the models as part of Lucene. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Comment Edited] (LUCENE-7466) add axiomatic similarity
[ https://issues.apache.org/jira/browse/LUCENE-7466?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15703302#comment-15703302 ] Tommaso Teofili edited comment on LUCENE-7466 at 11/28/16 10:14 PM: well .. that's weird, I had set it to resolved back on Nov 20th (click on the 'All' tab), but then when you commented I saw it was still unresolved and therefore assumed it was reopened by someone else. Now it looksresolved because you can close and reopen, but also unresolved as per current resolution value ... was (Author: teofili): well .. that's weird, I had set it to resolved back on Nov 20th (click on the 'All' tab), but then when you commented I saw it was still unresolved and therefore assumed it was reopened by someone else. Now it looks fixed resolved because you can close and reopen, but also unresolved as per current resolution value ... > add axiomatic similarity > - > > Key: LUCENE-7466 > URL: https://issues.apache.org/jira/browse/LUCENE-7466 > Project: Lucene - Core > Issue Type: Improvement > Components: core/search >Affects Versions: master (7.0) >Reporter: Peilin Yang >Assignee: Tommaso Teofili > Labels: patch > Fix For: 6.4 > > > Add axiomatic similarity approaches to the similarity family. > More details can be found at http://dl.acm.org/citation.cfm?id=1076116 and > https://www.eecis.udel.edu/~hfang/pubs/sigir05-axiom.pdf > There are in total six similarity models. All of them are based on BM25, > Pivoted Document Length Normalization or Language Model with Dirichlet prior. > We think it is worthy to add the models as part of Lucene. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-7466) add axiomatic similarity
[ https://issues.apache.org/jira/browse/LUCENE-7466?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15703302#comment-15703302 ] Tommaso Teofili commented on LUCENE-7466: - well .. that's weird, I had set it to resolved back on Nov 20th (click on the 'All' tab), but then when you commented I saw it was still unresolved and therefore assumed it was reopened by someone else. Now it looks fixed resolved because you can close and reopen, but also unresolved as per current resolution value ... > add axiomatic similarity > - > > Key: LUCENE-7466 > URL: https://issues.apache.org/jira/browse/LUCENE-7466 > Project: Lucene - Core > Issue Type: Improvement > Components: core/search >Affects Versions: master (7.0) >Reporter: Peilin Yang >Assignee: Tommaso Teofili > Labels: patch > Fix For: 6.4 > > > Add axiomatic similarity approaches to the similarity family. > More details can be found at http://dl.acm.org/citation.cfm?id=1076116 and > https://www.eecis.udel.edu/~hfang/pubs/sigir05-axiom.pdf > There are in total six similarity models. All of them are based on BM25, > Pivoted Document Length Normalization or Language Model with Dirichlet prior. > We think it is worthy to add the models as part of Lucene. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[JENKINS] Lucene-Solr-master-Solaris (64bit/jdk1.8.0) - Build # 977 - Still unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Solaris/977/ Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseG1GC 1 tests failed. FAILED: junit.framework.TestSuite.org.apache.solr.cloud.hdfs.HdfsRecoveryZkTest Error Message: ObjectTracker found 1 object(s) that were not released!!! [HdfsTransactionLog] org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException at org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:43) at org.apache.solr.update.HdfsTransactionLog.<init>(HdfsTransactionLog.java:130) at org.apache.solr.update.HdfsUpdateLog.init(HdfsUpdateLog.java:202) at org.apache.solr.update.UpdateHandler.<init>(UpdateHandler.java:137) at org.apache.solr.update.UpdateHandler.<init>(UpdateHandler.java:94) at org.apache.solr.update.DirectUpdateHandler2.<init>(DirectUpdateHandler2.java:102) at sun.reflect.GeneratedConstructorAccessor146.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.solr.core.SolrCore.createInstance(SolrCore.java:723) at org.apache.solr.core.SolrCore.createUpdateHandler(SolrCore.java:785) at org.apache.solr.core.SolrCore.initUpdateHandler(SolrCore.java:1024) at org.apache.solr.core.SolrCore.<init>(SolrCore.java:889) at org.apache.solr.core.SolrCore.<init>(SolrCore.java:793) at org.apache.solr.core.CoreContainer.create(CoreContainer.java:868) at org.apache.solr.core.CoreContainer.lambda$load$0(CoreContainer.java:517) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:229) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) Stack Trace: java.lang.AssertionError: ObjectTracker found 1 object(s) that were not
released!!! [HdfsTransactionLog] org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException at org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:43) at org.apache.solr.update.HdfsTransactionLog.<init>(HdfsTransactionLog.java:130) at org.apache.solr.update.HdfsUpdateLog.init(HdfsUpdateLog.java:202) at org.apache.solr.update.UpdateHandler.<init>(UpdateHandler.java:137) at org.apache.solr.update.UpdateHandler.<init>(UpdateHandler.java:94) at org.apache.solr.update.DirectUpdateHandler2.<init>(DirectUpdateHandler2.java:102) at sun.reflect.GeneratedConstructorAccessor146.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.solr.core.SolrCore.createInstance(SolrCore.java:723) at org.apache.solr.core.SolrCore.createUpdateHandler(SolrCore.java:785) at org.apache.solr.core.SolrCore.initUpdateHandler(SolrCore.java:1024) at org.apache.solr.core.SolrCore.<init>(SolrCore.java:889) at org.apache.solr.core.SolrCore.<init>(SolrCore.java:793) at org.apache.solr.core.CoreContainer.create(CoreContainer.java:868) at org.apache.solr.core.CoreContainer.lambda$load$0(CoreContainer.java:517) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:229) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) at __randomizedtesting.SeedInfo.seed([82E6C06A2B1C9E3B]:0) at org.junit.Assert.fail(Assert.java:93) at org.junit.Assert.assertTrue(Assert.java:43) at org.junit.Assert.assertNull(Assert.java:551) at org.apache.solr.SolrTestCaseJ4.teardownTestCases(SolrTestCaseJ4.java:266) at sun.reflect.GeneratedMethodAccessor30.invoke(Unknown Source) at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:870) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at
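The ObjectTracker failure above comes from a leak-detection pattern used in Solr's test infrastructure: resources register themselves on creation and deregister on close, and anything still registered at test teardown is reported as a leak together with its allocation stack. A simplified sketch of that pattern (an illustration, not Solr's actual ObjectReleaseTracker code):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Minimal leak tracker: track() records where an object was created,
// release() forgets it, and unreleasedCount() reports leftovers at teardown.
public class ReleaseTrackerSketch {
    private final Map<Object, Exception> tracked = new ConcurrentHashMap<>();

    void track(Object o) {
        // Capture the allocation stack so a leak report can show who created it.
        tracked.put(o, new Exception("allocation site"));
    }

    void release(Object o) {
        tracked.remove(o);
    }

    int unreleasedCount() {
        return tracked.size();
    }
}
```

In the failure above, an `HdfsTransactionLog` was tracked in its constructor but never released, so the teardown assertion (`assertNull` on the leak report) fired.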
[jira] [Commented] (LUCENE-7466) add axiomatic similarity
[ https://issues.apache.org/jira/browse/LUCENE-7466?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15703298#comment-15703298 ] Michael McCandless commented on LUCENE-7466: I'm confused: why does the issue say it's Open yet I only see Reopen Issue and Close Issue buttons here? > add axiomatic similarity > - > > Key: LUCENE-7466 > URL: https://issues.apache.org/jira/browse/LUCENE-7466 > Project: Lucene - Core > Issue Type: Improvement > Components: core/search >Affects Versions: master (7.0) >Reporter: Peilin Yang >Assignee: Tommaso Teofili > Labels: patch > Fix For: 6.4 > > > Add axiomatic similarity approaches to the similarity family. > More details can be found at http://dl.acm.org/citation.cfm?id=1076116 and > https://www.eecis.udel.edu/~hfang/pubs/sigir05-axiom.pdf > There are in total six similarity models. All of them are based on BM25, > Pivoted Document Length Normalization or Language Model with Dirichlet prior. > We think it is worthy to add the models as part of Lucene. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Created] (SOLR-9807) Update TestSolrCloudWithKerberosAlt to work with Hadoop 3 Minikdc
Hrishikesh Gadre created SOLR-9807: -- Summary: Update TestSolrCloudWithKerberosAlt to work with Hadoop 3 Minikdc Key: SOLR-9807 URL: https://issues.apache.org/jira/browse/SOLR-9807 Project: Solr Issue Type: Sub-task Security Level: Public (Default Security Level. Issues are Public) Reporter: Hrishikesh Gadre Priority: Minor The Minikdc provided by Hadoop 3 uses the Apache Kerby library for Kerberos authentication (instead of the ApacheDS library used in the 2.x versions). This JIRA is to track the changes required to get TestSolrCloudWithKerberosAlt to pass with the Minikdc shipped as part of Hadoop 3. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Comment Edited] (SOLR-9764) Design a memory efficient DocSet if a query returns all docs
[ https://issues.apache.org/jira/browse/SOLR-9764?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15703181#comment-15703181 ] Michael Sun edited comment on SOLR-9764 at 11/28/16 9:27 PM: - Uploaded a new patch with all tests passed. bq. What is the issue with intDocSet? Basic in DocSetBase.equals(), both DocSet are converted to FixedBitSet and then both FixedBitSet are compared. However, both DocSet may go through different code path and resize differently in conversion even these two DocSet are equal. The result is that one FixedBitSet has more zero paddings than the other which makes FixedBitSet.equals() think they are different. The fix is to resize both FixedBitSet to the same larger size before comparison in DocSetBase.equals(). Since DocSetBase.equals() is marked for test purpose only, the efficiency of the extra sizing would not be a problem. was (Author: michael.sun): Uploaded a new patch with all tests passed. bq. What is the issue with intDocSet? Basic in DocSetBase.equals(), both DocSet are converted to FixedBitSet and then both FixedBitSet are compared. However, both DocSet may go through different code path and resize differently in conversion even these two DocSet are equal. The result is taht one FixedBitSet has more zero paddings than the other which makes FixedBitSet.equals() think they are different. The fix is to resize both FixedBitSet to the same larger size before comparison in DocSetBase.equals(). Since DocSetBase.equals() is marked for test purpose only, the efficiency of the extra sizing would not be a problem. > Design a memory efficient DocSet if a query returns all docs > > > Key: SOLR-9764 > URL: https://issues.apache.org/jira/browse/SOLR-9764 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. 
Issues are Public) >Reporter: Michael Sun > Attachments: SOLR-9764.patch, SOLR-9764.patch, SOLR-9764.patch, > SOLR-9764.patch, SOLR-9764.patch, SOLR_9764_no_cloneMe.patch > > > In some use cases, particularly use cases with time series data, using > collection alias and partitioning data into multiple small collections using > timestamp, a filter query can match all documents in a collection. Currently > BitDocSet is used which contains a large array of long integers with every > bits set to 1. After querying, the resulted DocSet saved in filter cache is > large and becomes one of the main memory consumers in these use cases. > For example. suppose a Solr setup has 14 collections for data in last 14 > days, each collection with one day of data. A filter query for last one week > data would result in at least six DocSet in filter cache which matches all > documents in six collections respectively. > This is to design a new DocSet that is memory efficient for such a use case. > The new DocSet removes the large array, reduces memory usage and GC pressure > without losing advantage of large filter cache. > In particular, for use cases when using time series data, collection alias > and partition data into multiple small collections using timestamp, the gain > can be large. > For further optimization, it may be helpful to design a DocSet with run > length encoding. Thanks [~mmokhtar] for suggestion. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-9764) Design a memory efficient DocSet if a query returns all docs
[ https://issues.apache.org/jira/browse/SOLR-9764?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15703181#comment-15703181 ] Michael Sun commented on SOLR-9764: --- Uploaded a new patch with all tests passing. bq. What is the issue with intDocSet? Basically, in DocSetBase.equals(), both DocSets are converted to FixedBitSets and then the two FixedBitSets are compared. However, the two DocSets may go through different code paths and resize differently during conversion even when they are equal. The result is that one FixedBitSet has more zero padding than the other, which makes FixedBitSet.equals() think they are different. The fix is to resize both FixedBitSets to the same larger size before the comparison in DocSetBase.equals(). Since DocSetBase.equals() is marked for test purposes only, the cost of the extra resizing is not a problem. > Design a memory efficient DocSet if a query returns all docs > > > Key: SOLR-9764 > URL: https://issues.apache.org/jira/browse/SOLR-9764 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Michael Sun > Attachments: SOLR-9764.patch, SOLR-9764.patch, SOLR-9764.patch, > SOLR-9764.patch, SOLR-9764.patch, SOLR_9764_no_cloneMe.patch > > > In some use cases, particularly use cases with time series data, using > collection alias and partitioning data into multiple small collections using > timestamp, a filter query can match all documents in a collection. Currently > BitDocSet is used which contains a large array of long integers with every > bits set to 1. After querying, the resulted DocSet saved in filter cache is > large and becomes one of the main memory consumers in these use cases. > For example. suppose a Solr setup has 14 collections for data in last 14 > days, each collection with one day of data. 
A filter query for last one week > data would result in at least six DocSet in filter cache which matches all > documents in six collections respectively. > This is to design a new DocSet that is memory efficient for such a use case. > The new DocSet removes the large array, reduces memory usage and GC pressure > without losing advantage of large filter cache. > In particular, for use cases when using time series data, collection alias > and partition data into multiple small collections using timestamp, the gain > can be large. > For further optimization, it may be helpful to design a DocSet with run > length encoding. Thanks [~mmokhtar] for suggestion. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
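The equals() issue discussed in the comment above can be sketched without the Lucene API: two bit sets holding exactly the same bits can end up with different numbers of trailing zero words, so comparing the raw word arrays reports a spurious difference. Growing both arrays to a common size first makes the comparison see only the bits (a hypothetical illustration, not the actual DocSetBase patch):

```java
import java.util.Arrays;

public class BitSetEqualsSketch {
    // Zero-pad the word array out to numWords, like growing a fixed bit set.
    static long[] grow(long[] words, int numWords) {
        return Arrays.copyOf(words, numWords);
    }

    // Compare two bit sets after padding both to the larger size,
    // so differing amounts of trailing zero padding are ignored.
    static boolean sameBits(long[] a, long[] b) {
        int n = Math.max(a.length, b.length);
        return Arrays.equals(grow(a, n), grow(b, n));
    }

    public static void main(String[] args) {
        long[] a = {0b1011L};           // bits 0, 1, 3 set; no padding
        long[] b = {0b1011L, 0L, 0L};   // same bits; two words of zero padding
        System.out.println(Arrays.equals(a, b)); // false: lengths differ
        System.out.println(sameBits(a, b));      // true: padding ignored
    }
}
```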
[JENKINS] Lucene-Solr-Tests-6.x - Build # 568 - Unstable
Build: https://builds.apache.org/job/Lucene-Solr-Tests-6.x/568/ 3 tests failed. FAILED: junit.framework.TestSuite.org.apache.solr.core.TestLazyCores Error Message: ObjectTracker found 5 object(s) that were not released!!! [MockDirectoryWrapper, MockDirectoryWrapper, MockDirectoryWrapper, SolrCore, MDCAwareThreadPoolExecutor] org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException at org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:43) at org.apache.solr.core.CachingDirectoryFactory.get(CachingDirectoryFactory.java:347) at org.apache.solr.core.SolrCore.initSnapshotMetaDataManager(SolrCore.java:434) at org.apache.solr.core.SolrCore.(SolrCore.java:841) at org.apache.solr.core.SolrCore.(SolrCore.java:775) at org.apache.solr.core.CoreContainer.create(CoreContainer.java:842) at org.apache.solr.core.CoreContainer.lambda$load$0(CoreContainer.java:498) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:229) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException at org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:43) at org.apache.solr.core.CachingDirectoryFactory.get(CachingDirectoryFactory.java:347) at org.apache.solr.core.SolrCore.getNewIndexDir(SolrCore.java:332) at org.apache.solr.core.SolrCore.initIndex(SolrCore.java:641) at org.apache.solr.core.SolrCore.(SolrCore.java:847) at org.apache.solr.core.SolrCore.(SolrCore.java:775) at org.apache.solr.core.CoreContainer.create(CoreContainer.java:842) at org.apache.solr.core.CoreContainer.lambda$load$0(CoreContainer.java:498) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:229) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException at org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:43) at org.apache.solr.core.CachingDirectoryFactory.get(CachingDirectoryFactory.java:347) at org.apache.solr.update.SolrIndexWriter.create(SolrIndexWriter.java:66) at org.apache.solr.core.SolrCore.initIndex(SolrCore.java:673) at org.apache.solr.core.SolrCore.(SolrCore.java:847) at org.apache.solr.core.SolrCore.(SolrCore.java:775) at org.apache.solr.core.CoreContainer.create(CoreContainer.java:842) at org.apache.solr.core.CoreContainer.lambda$load$0(CoreContainer.java:498) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:229) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException at org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:43) at org.apache.solr.core.SolrCore.(SolrCore.java:937) at org.apache.solr.core.SolrCore.(SolrCore.java:775) at org.apache.solr.core.CoreContainer.create(CoreContainer.java:842) at org.apache.solr.core.CoreContainer.lambda$load$0(CoreContainer.java:498) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:229) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException at org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:43) at org.apache.solr.core.SolrCore.(SolrCore.java:798) at org.apache.solr.core.SolrCore.(SolrCore.java:775) at org.apache.solr.core.CoreContainer.create(CoreContainer.java:842) at org.apache.solr.core.CoreContainer.lambda$load$0(CoreContainer.java:498) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:229) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at
[jira] [Updated] (SOLR-9764) Design a memory efficient DocSet if a query returns all docs
[ https://issues.apache.org/jira/browse/SOLR-9764?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Michael Sun updated SOLR-9764: -- Attachment: SOLR-9764.patch > Design a memory efficient DocSet if a query returns all docs > > > Key: SOLR-9764 > URL: https://issues.apache.org/jira/browse/SOLR-9764 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Michael Sun > Attachments: SOLR-9764.patch, SOLR-9764.patch, SOLR-9764.patch, > SOLR-9764.patch, SOLR-9764.patch, SOLR_9764_no_cloneMe.patch > > > In some use cases, particularly use cases with time series data, using > collection alias and partitioning data into multiple small collections using > timestamp, a filter query can match all documents in a collection. Currently > BitDocSet is used which contains a large array of long integers with every > bits set to 1. After querying, the resulted DocSet saved in filter cache is > large and becomes one of the main memory consumers in these use cases. > For example. suppose a Solr setup has 14 collections for data in last 14 > days, each collection with one day of data. A filter query for last one week > data would result in at least six DocSet in filter cache which matches all > documents in six collections respectively. > This is to design a new DocSet that is memory efficient for such a use case. > The new DocSet removes the large array, reduces memory usage and GC pressure > without losing advantage of large filter cache. > In particular, for use cases when using time series data, collection alias > and partition data into multiple small collections using timestamp, the gain > can be large. > For further optimization, it may be helpful to design a DocSet with run > length encoding. Thanks [~mmokhtar] for suggestion. 
-- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[JENKINS-EA] Lucene-Solr-master-Linux (64bit/jdk-9-ea+140) - Build # 18395 - Still Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/18395/ Java: 64bit/jdk-9-ea+140 -XX:+UseCompressedOops -XX:+UseSerialGC 1 tests failed. FAILED: org.apache.solr.schema.PreAnalyzedFieldManagedSchemaCloudTest.testAdd2Fields Error Message: No live SolrServers available to handle this request:[https://127.0.0.1:44408/solr/managed-preanalyzed, https://127.0.0.1:43213/solr/managed-preanalyzed] Stack Trace: org.apache.solr.client.solrj.SolrServerException: No live SolrServers available to handle this request:[https://127.0.0.1:44408/solr/managed-preanalyzed, https://127.0.0.1:43213/solr/managed-preanalyzed] at __randomizedtesting.SeedInfo.seed([3D35BA5B82C8000F:952008B05ECE93F9]:0) at org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:418) at org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1340) at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:1091) at org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:1033) at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:149) at org.apache.solr.schema.PreAnalyzedFieldManagedSchemaCloudTest.addField(PreAnalyzedFieldManagedSchemaCloudTest.java:61) at org.apache.solr.schema.PreAnalyzedFieldManagedSchemaCloudTest.testAdd2Fields(PreAnalyzedFieldManagedSchemaCloudTest.java:52) at jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(java.base@9-ea/Native Method) at jdk.internal.reflect.NativeMethodAccessorImpl.invoke(java.base@9-ea/NativeMethodAccessorImpl.java:62) at jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(java.base@9-ea/DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(java.base@9-ea/Method.java:535) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907) at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:811) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:462) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at
[jira] [Commented] (SOLR-9546) There is a lot of unnecessary boxing/unboxing going on in {{SolrParams}} class
[ https://issues.apache.org/jira/browse/SOLR-9546?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15702936#comment-15702936 ] Pushkar Raste commented on SOLR-9546: - Looks like we stepped on each other's toes when I was fixing the {{CloudMLTQParser}} class. Please check the updated patch. > There is a lot of unnecessary boxing/unboxing going on in {{SolrParams}} class > -- > > Key: SOLR-9546 > URL: https://issues.apache.org/jira/browse/SOLR-9546 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Pushkar Raste >Assignee: Noble Paul >Priority: Minor > Attachments: SOLR-9546.patch, SOLR-9546_CloudMLTQParser.patch > > > Here is an excerpt > {code} > public Long getLong(String param, Long def) { > String val = get(param); > try { > return val== null ? def : Long.parseLong(val); > } > catch( Exception ex ) { > throw new SolrException( SolrException.ErrorCode.BAD_REQUEST, > ex.getMessage(), ex ); > } > } > {code} > {{Long.parseLong()}} returns a primitive type but since method expect to > return a {{Long}}, it needs to be wrapped. There are many more method like > that. We might be creating a lot of unnecessary objects here. > I am not sure if JVM catches upto it and somehow optimizes it if these > methods are called enough times (or may be compiler does some modifications > at compile time) > Let me know if I am thinking of some premature optimization -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
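The boxing cost in the excerpt quoted above can be sketched side by side with the kind of primitive-default overload the issue argues for. The method names below are hypothetical, not Solr's actual SolrParams API:

```java
public class ParamsSketch {
    // Boxed variant, as in the quoted excerpt: Long.parseLong returns a
    // primitive long that must be auto-boxed to satisfy the Long return type.
    static Long getLongBoxed(String val, Long def) {
        return val == null ? def : Long.parseLong(val);
    }

    // Primitive-default variant: no boxing whether the param is present or not.
    static long getLongPrimitive(String val, long def) {
        return val == null ? def : Long.parseLong(val);
    }

    public static void main(String[] args) {
        System.out.println(getLongPrimitive("42", 0L)); // 42, no Long allocated
        System.out.println(getLongPrimitive(null, 7L)); // 7, default used
    }
}
```

Whether the allocation matters in practice depends on escape analysis and the Long cache for small values, which is exactly the "premature optimization" question the reporter raises.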
[jira] [Updated] (SOLR-9546) There is a lot of unnecessary boxing/unboxing going on in {{SolrParams}} class
[ https://issues.apache.org/jira/browse/SOLR-9546?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Pushkar Raste updated SOLR-9546: Attachment: SOLR-9546_CloudMLTQParser.patch > There is a lot of unnecessary boxing/unboxing going on in {{SolrParams}} class > -- > > Key: SOLR-9546 > URL: https://issues.apache.org/jira/browse/SOLR-9546 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Pushkar Raste >Assignee: Noble Paul >Priority: Minor > Attachments: SOLR-9546.patch, SOLR-9546_CloudMLTQParser.patch > > > Here is an excerpt > {code} > public Long getLong(String param, Long def) { > String val = get(param); > try { > return val== null ? def : Long.parseLong(val); > } > catch( Exception ex ) { > throw new SolrException( SolrException.ErrorCode.BAD_REQUEST, > ex.getMessage(), ex ); > } > } > {code} > {{Long.parseLong()}} returns a primitive type but since method expect to > return a {{Long}}, it needs to be wrapped. There are many more method like > that. We might be creating a lot of unnecessary objects here. > I am not sure if JVM catches upto it and somehow optimizes it if these > methods are called enough times (or may be compiler does some modifications > at compile time) > Let me know if I am thinking of some premature optimization -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (SOLR-9546) There is a lot of unnecessary boxing/unboxing going on in {{SolrParams}} class
[ https://issues.apache.org/jira/browse/SOLR-9546?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Pushkar Raste updated SOLR-9546: Attachment: (was: SOLR-9546_CloudMLTQParser.patch) > There is a lot of unnecessary boxing/unboxing going on in {{SolrParams}} class > -- > > Key: SOLR-9546 > URL: https://issues.apache.org/jira/browse/SOLR-9546 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Pushkar Raste >Assignee: Noble Paul >Priority: Minor > Attachments: SOLR-9546.patch > > > Here is an excerpt > {code} > public Long getLong(String param, Long def) { > String val = get(param); > try { > return val== null ? def : Long.parseLong(val); > } > catch( Exception ex ) { > throw new SolrException( SolrException.ErrorCode.BAD_REQUEST, > ex.getMessage(), ex ); > } > } > {code} > {{Long.parseLong()}} returns a primitive type but since method expect to > return a {{Long}}, it needs to be wrapped. There are many more method like > that. We might be creating a lot of unnecessary objects here. > I am not sure if JVM catches upto it and somehow optimizes it if these > methods are called enough times (or may be compiler does some modifications > at compile time) > Let me know if I am thinking of some premature optimization -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Created] (SOLR-9806) page wise probabilistic elevation of result documents based on combination of signal values
sachin created SOLR-9806: Summary: page wise probabilistic elevation of result documents based on combination of signal values Key: SOLR-9806 URL: https://issues.apache.org/jira/browse/SOLR-9806 Project: Solr Issue Type: Bug Security Level: Public (Default Security Level. Issues are Public) Reporter: sachin We had a requirement to elevate result documents based on individual signal values and, after searching thoroughly through the available Solr modules, we couldn't find anything and therefore decided to write our own module. Problem Description: We wanted to boost results in a probabilistic fashion, i.e., x% of the results on each page must have a signal value above a threshold. One immediate use case is recent results: we want to boost results which are above a certain threshold such that x% of the results on each page belong to these recent documents. Solution: We extend Solr's QueryElevationComponent to support our probabilistic elevation component. An additional comparator is added and the overall runtime changes from O(n log n) to O(2 n log n). -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
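The per-page guarantee described above (x% of each page from above-threshold documents) can be sketched as a quota-based merge of two ranked lists. This is our illustration of the idea with hypothetical names, not the attached patch:

```java
import java.util.ArrayList;
import java.util.List;

public class PageElevationSketch {
    // Build one page of `pageSize` results, reserving ceil(ratio * pageSize)
    // slots for documents whose signal exceeds the threshold (`elevated`),
    // and filling the remaining slots from the normally ranked list (`normal`).
    static List<String> buildPage(List<String> elevated, List<String> normal,
                                  int pageSize, double ratio) {
        int quota = (int) Math.ceil(ratio * pageSize);
        List<String> page = new ArrayList<>();
        int e = 0, n = 0;
        while (page.size() < pageSize && (e < elevated.size() || n < normal.size())) {
            // Take an elevated doc while its quota is unfilled, or when the
            // normal list is exhausted; otherwise take the next normal doc.
            boolean takeElevated = e < elevated.size()
                    && (e < quota || n >= normal.size());
            if (takeElevated) {
                page.add(elevated.get(e++));
            } else {
                page.add(normal.get(n++));
            }
        }
        return page;
    }

    public static void main(String[] args) {
        List<String> page = buildPage(List.of("E1", "E2", "E3"),
                List.of("D1", "D2", "D3", "D4"), 4, 0.5);
        System.out.println(page); // [E1, E2, D1, D2]
    }
}
```

The extra sort of the elevated list by signal value is what accounts for the second n log n term the reporter mentions.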
[jira] [Closed] (SOLR-4922) When i create a SolrCore by CoreAdmin ,a SolrException has occur
[ https://issues.apache.org/jira/browse/SOLR-4922?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Cassandra Targett closed SOLR-4922. --- Resolution: Cannot Reproduce Fix Version/s: (was: 4.3) > When i create a SolrCore by CoreAdmin ,a SolrException has occur > > > Key: SOLR-4922 > URL: https://issues.apache.org/jira/browse/SOLR-4922 > Project: Solr > Issue Type: Bug > Components: SolrCloud >Affects Versions: 4.3 > Environment: Windows7 Home Basic ,jdk1.6.0_45 >Reporter: zengjie > Labels: LeaderElector, PathUtils, SolrCloud > > Full stack is over here: > 1714364 [qtp29239443-20] ERROR org.apache.solr.core.SolrCore – > org.apache.solr.common.SolrException: Error CREATEing SolrCore 'pconline_cms': > at > org.apache.solr.handler.admin.CoreAdminHandler.handleCreateAction(CoreAdminHandler.java:524) > at > org.apache.solr.handler.admin.CoreAdminHandler.handleRequestBody(CoreAdminHandler.java:144) > at > org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135) > at > org.apache.solr.servlet.SolrDispatchFilter.handleAdminRequest(SolrDispatchFilter.java:608) > at > org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:206) > at > org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:155) > at > org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1307) > at > org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:453) > at > org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137) > at > org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:560) > at > org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:231) > at > org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1072) > at > org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:382) > at > org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:193) > at > 
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1006) > at > org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135) > at > org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:255) > at > org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:154) > at > org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:116) > at org.eclipse.jetty.server.Server.handle(Server.java:365) > at > org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:485) > at > org.eclipse.jetty.server.BlockingHttpConnection.handleRequest(BlockingHttpConnection.java:53) > at > org.eclipse.jetty.server.AbstractHttpConnection.headerComplete(AbstractHttpConnection.java:926) > at > org.eclipse.jetty.server.AbstractHttpConnection$RequestHandler.headerComplete(AbstractHttpConnection.java:988) > at org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:635) > at > org.eclipse.jetty.http.HttpParser.parseAvailable(HttpParser.java:235) > at > org.eclipse.jetty.server.BlockingHttpConnection.handle(BlockingHttpConnection.java:72) > at > org.eclipse.jetty.server.bio.SocketConnector$ConnectorEndPoint.run(SocketConnector.java:264) > at > org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:608) > at > org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:543) > at java.lang.Thread.run(Unknown Source) > Caused by: org.apache.solr.common.cloud.ZooKeeperException: > at > org.apache.solr.core.CoreContainer.registerInZk(CoreContainer.java:853) > at > org.apache.solr.core.CoreContainer.registerCore(CoreContainer.java:814) > at org.apache.solr.core.CoreContainer.register(CoreContainer.java:869) > at > org.apache.solr.handler.admin.CoreAdminHandler.handleCreateAction(CoreAdminHandler.java:520) > ... 
30 more > Caused by: java.lang.IllegalArgumentException: Invalid path string > "/collections/pconline_cms/leader_elect//election" caused by empty node name > specified @39 > at > org.apache.zookeeper.common.PathUtils.validatePath(PathUtils.java:99) > at org.apache.zookeeper.ZooKeeper.exists(ZooKeeper.java:1020) > at > org.apache.solr.common.cloud.SolrZkClient$4.execute(SolrZkClient.java:201) > at > org.apache.solr.common.cloud.SolrZkClient$4.execute(SolrZkClient.java:198) > at >
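The root cause in the stack trace above — an empty shardId producing "/collections/pconline_cms/leader_elect//election", where the adjacent slashes mean an empty node name — is exactly what ZooKeeper's path validation rejects. A minimal self-contained sketch of that kind of check (a simplified illustration, not ZooKeeper's actual PathUtils implementation):

```java
// Simplified sketch of an empty-node-name path check; class and method
// names are illustrative, not ZooKeeper's real code.
public class PathCheck {
    static void validatePath(String path) {
        if (path == null || path.isEmpty() || path.charAt(0) != '/') {
            throw new IllegalArgumentException("Path must start with / : " + path);
        }
        // An empty shardId yields ".../leader_elect//election" -- the two
        // adjacent slashes denote an empty node name, which is invalid.
        if (path.contains("//")) {
            throw new IllegalArgumentException(
                "Invalid path string \"" + path + "\" caused by empty node name");
        }
    }

    public static void main(String[] args) {
        try {
            validatePath("/collections/pconline_cms/leader_elect//election");
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage());
        }
    }
}
```

With a proper shardId (e.g. shard1) the path has no empty segment and the check passes.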
[jira] [Commented] (SOLR-8593) Integrate Apache Calcite into the SQLHandler
[ https://issues.apache.org/jira/browse/SOLR-8593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15702925#comment-15702925 ] Joel Bernstein commented on SOLR-8593: -- Ok thanks, that should work nicely for us. > Integrate Apache Calcite into the SQLHandler > > > Key: SOLR-8593 > URL: https://issues.apache.org/jira/browse/SOLR-8593 > Project: Solr > Issue Type: Improvement >Reporter: Joel Bernstein >Assignee: Joel Bernstein > Attachments: SOLR-8593.patch > > >The Presto SQL Parser was perfect for phase one of the SQLHandler. It was > nicely split off from the larger Presto project and it did everything that > was needed for the initial implementation. > Phase two of the SQL work though will require an optimizer. Here is where > Apache Calcite comes into play. It has a battle tested cost based optimizer > and has been integrated into Apache Drill and Hive. > This work can begin in trunk following the 6.0 release. The final query plans > will continue to be translated to Streaming API objects (TupleStreams), so > continued work on the JDBC driver should plug in nicely with the Calcite work. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-7563) BKD index should compress unused leading bytes
[ https://issues.apache.org/jira/browse/LUCENE-7563?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15702891#comment-15702891 ] Adrien Grand commented on LUCENE-7563: -- bq. Hmm I think I am already doing that? You are right, I had not read the code correctly. bq. Oooh that's a great idea! Saves 1 byte per inner node. We need 5 bits for the prefix I think since it can range 0 .. 16 inclusive, and 3 bits for the splitDim since it's 0 .. 7 inclusive. I have been thinking about it more and I think we can make it more general. The first two bytes that differ are likely close to each other, so if we call their difference {{firstByteDelta}}, we could pack {{firstByteDelta}}, {{splitDim}} and {{prefix}} into a single vint (eg. {{(firstByteDelta * (1 + bytesPerDim) + prefix) * numDims + splitDim}}) that would sometimes only take one byte (quite often when {{numDims}} and {{bytesPerDim}} are small and rarely in the opposite case). bq. but it felt wrong to just pass these packed bytes to the simple text format ... Agreed. Maybe we should duplicate the current BKDReader/BKDWriter into a new impl that would be specific to SimpleText and would not need all those optimizations so that both impls can evolve separately. > BKD index should compress unused leading bytes > -- > > Key: LUCENE-7563 > URL: https://issues.apache.org/jira/browse/LUCENE-7563 > Project: Lucene - Core > Issue Type: Improvement >Reporter: Michael McCandless > Fix For: master (7.0), 6.4 > > Attachments: LUCENE-7563.patch, LUCENE-7563.patch > > > Today the BKD (points) in-heap index always uses {{dimensionNumBytes}} per > dimension, but if e.g. you are indexing {{LongPoint}} yet only use the bottom > two bytes in a given segment, we shouldn't store all those leading 0s in the > index.
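The single-vint packing Adrien sketches can be written out as follows. This is a sketch of the proposed encoding only; the class and method names are hypothetical, not Lucene APIs. The decode works because prefix ranges over 0..bytesPerDim inclusive and splitDim over 0..numDims-1:

```java
// Sketch of packing (firstByteDelta, prefix, splitDim) into one integer
// suitable for vint encoding, per the scheme in the comment above.
public class SplitPacking {

    // prefix is in 0..bytesPerDim inclusive, splitDim in 0..numDims-1.
    static int pack(int firstByteDelta, int prefix, int splitDim,
                    int bytesPerDim, int numDims) {
        return (firstByteDelta * (1 + bytesPerDim) + prefix) * numDims + splitDim;
    }

    // Reverse the mixed-radix packing: peel off splitDim, then prefix.
    static int[] unpack(int code, int bytesPerDim, int numDims) {
        int splitDim = code % numDims;
        int rest = code / numDims;
        int prefix = rest % (1 + bytesPerDim);
        int firstByteDelta = rest / (1 + bytesPerDim);
        return new int[] { firstByteDelta, prefix, splitDim };
    }

    public static void main(String[] args) {
        // Small deltas and dims keep the code small, so the vint is often 1 byte.
        int code = pack(3, 5, 2, 8, 3);
        int[] parts = unpack(code, 8, 3);
        System.out.println(parts[0] + " " + parts[1] + " " + parts[2]); // prints 3 5 2
    }
}
```

For small {{numDims}} and {{bytesPerDim}} with a small {{firstByteDelta}}, the packed value stays under 128 and the vint takes a single byte, which matches the space saving discussed above.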
[jira] [Resolved] (SOLR-5790) SolrException: Unknown document router '{name=compositeId}'.
[ https://issues.apache.org/jira/browse/SOLR-5790?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shalin Shekhar Mangar resolved SOLR-5790. - Resolution: Not A Problem Thanks for confirming Ishan. I'll close this as "Not a problem". > SolrException: Unknown document router '{name=compositeId}'. > > > Key: SOLR-5790 > URL: https://issues.apache.org/jira/browse/SOLR-5790 > Project: Solr > Issue Type: Bug > Components: SolrCloud >Affects Versions: 4.6.1 > Environment: Windows 7 64 Bit >Reporter: Günther Ruck >Priority: Minor > > I tried to use the CloudServerClass of the SolrJ-Api. SolrJ and Solr-Server > both in version 4.6.1. > {{serverCloud = new CloudSolrServer(zkHost);}} > My JUnit starts with a deleteByQuery. In DocRouter.java:46 a SolrException is > thrown because > {{routerMap.get(routerSpec);}} > finds no entry. > _Hints:_ > routerSpec is an instance of LinkedHashMapwith one entry (key:"name", > value:"compositeId"). > routerMap is a HashMap holding 4 entries, especially key:"compositeId" > has value: " org.apache.solr.common.cloud.CompositeIdRouter". > Probably there is a type mismatch at the routerMap.get call. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
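The type-mismatch hint in the report can be reproduced in isolation: a router map keyed by String finds nothing when handed the whole spec map as the key. The names below are illustrative, not Solr's actual code:

```java
import java.util.HashMap;
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrates the reported symptom: Map.get(Object) accepts any object,
// so passing a LinkedHashMap like {name=compositeId} compiles but finds
// no entry in a map keyed by the router name String.
public class RouterLookup {

    static String lookup(Map<String, String> routerMap, Map<String, String> routerSpec) {
        // The intended lookup uses the "name" value as the key.
        return routerMap.get(routerSpec.get("name"));
    }

    public static void main(String[] args) {
        Map<String, String> routerMap = new HashMap<>();
        routerMap.put("compositeId", "org.apache.solr.common.cloud.CompositeIdRouter");

        Map<String, String> routerSpec = new LinkedHashMap<>();
        routerSpec.put("name", "compositeId");

        System.out.println(routerMap.get(routerSpec));   // prints null
        System.out.println(lookup(routerMap, routerSpec));
    }
}
```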
[jira] [Commented] (LUCENE-7466) add axiomatic similarity
[ https://issues.apache.org/jira/browse/LUCENE-7466?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15702674#comment-15702674 ] Tommaso Teofili commented on LUCENE-7466: - sure, thanks Mike. > add axiomatic similarity > - > > Key: LUCENE-7466 > URL: https://issues.apache.org/jira/browse/LUCENE-7466 > Project: Lucene - Core > Issue Type: Improvement > Components: core/search >Affects Versions: master (7.0) >Reporter: Peilin Yang >Assignee: Tommaso Teofili > Labels: patch > Fix For: 6.4 > > > Add axiomatic similarity approaches to the similarity family. > More details can be found at http://dl.acm.org/citation.cfm?id=1076116 and > https://www.eecis.udel.edu/~hfang/pubs/sigir05-axiom.pdf > There are in total six similarity models. All of them are based on BM25, > Pivoted Document Length Normalization or Language Model with Dirichlet prior. > We think it is worthy to add the models as part of Lucene. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-7466) add axiomatic similarity
[ https://issues.apache.org/jira/browse/LUCENE-7466?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15702676#comment-15702676 ] Peilin Yang commented on LUCENE-7466: - Sure. Please feel free to close the issue. On Mon, Nov 28, 2016 at 1:05 PM Michael McCandless (JIRA)> add axiomatic similarity > - > > Key: LUCENE-7466 > URL: https://issues.apache.org/jira/browse/LUCENE-7466 > Project: Lucene - Core > Issue Type: Improvement > Components: core/search >Affects Versions: master (7.0) >Reporter: Peilin Yang >Assignee: Tommaso Teofili > Labels: patch > Fix For: 6.4 > > > Add axiomatic similarity approaches to the similarity family. > More details can be found at http://dl.acm.org/citation.cfm?id=1076116 and > https://www.eecis.udel.edu/~hfang/pubs/sigir05-axiom.pdf > There are in total six similarity models. All of them are based on BM25, > Pivoted Document Length Normalization or Language Model with Dirichlet prior. > We think it is worthy to add the models as part of Lucene. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-9660) in GroupingSpecification factor [group](sort|offset|limit) into [group](sortSpec)
[ https://issues.apache.org/jira/browse/SOLR-9660?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15702673#comment-15702673 ] Judith Silverman commented on SOLR-9660: Hi, Christine. I finally have a chunk of time and would like to help out here, but I don't want to get in the way of your program. Is there something in particular you would like me to look into? BTW, not sure whether you saw my question in this thread about changes to groupOffset from 26Oct16; maybe it should be asked/addressed separately. Thanks, Judith > in GroupingSpecification factor [group](sort|offset|limit) into > [group](sortSpec) > - > > Key: SOLR-9660 > URL: https://issues.apache.org/jira/browse/SOLR-9660 > Project: Solr > Issue Type: Task > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Christine Poerschke >Assignee: Christine Poerschke >Priority: Minor > Attachments: SOLR-9660.patch, SOLR-9660.patch, SOLR-9660.patch > > > This is split out and adapted from and towards the SOLR-6203 changes. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-7466) add axiomatic similarity
[ https://issues.apache.org/jira/browse/LUCENE-7466?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15702670#comment-15702670 ] Michael McCandless commented on LUCENE-7466: Can this issue be resolved now? > add axiomatic similarity > - > > Key: LUCENE-7466 > URL: https://issues.apache.org/jira/browse/LUCENE-7466 > Project: Lucene - Core > Issue Type: Improvement > Components: core/search >Affects Versions: master (7.0) >Reporter: Peilin Yang >Assignee: Tommaso Teofili > Labels: patch > Fix For: 6.4 > > > Add axiomatic similarity approaches to the similarity family. > More details can be found at http://dl.acm.org/citation.cfm?id=1076116 and > https://www.eecis.udel.edu/~hfang/pubs/sigir05-axiom.pdf > There are in total six similarity models. All of them are based on BM25, > Pivoted Document Length Normalization or Language Model with Dirichlet prior. > We think it is worthy to add the models as part of Lucene. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-4922) When i create a SolrCore by CoreAdmin ,a SolrException has occur
[ https://issues.apache.org/jira/browse/SOLR-4922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15702559#comment-15702559 ] Ishan Chattopadhyaya commented on SOLR-4922: It seems your shardId here is empty here (can you confirm?). I couldn't reproduce it when my shardIds were shard1, shard2. I will test separately to see if I can somehow create a collection with empty shard names (but that deserves a separate issue). However, the code for the ElectionContext seems correct here. My feeling is that we should close this as a "cannot reproduce" until we can verify that you saw this issue in spite of having proper shardIds for your collection. > When i create a SolrCore by CoreAdmin ,a SolrException has occur > > > Key: SOLR-4922 > URL: https://issues.apache.org/jira/browse/SOLR-4922 > Project: Solr > Issue Type: Bug > Components: SolrCloud >Affects Versions: 4.3 > Environment: Windows7 Home Basic ,jdk1.6.0_45 >Reporter: zengjie > Labels: LeaderElector, PathUtils, SolrCloud > Fix For: 4.3 > > > Full stack is over here: > 1714364 [qtp29239443-20] ERROR org.apache.solr.core.SolrCore – > org.apache.solr.common.SolrException: Error CREATEing SolrCore 'pconline_cms': > at > org.apache.solr.handler.admin.CoreAdminHandler.handleCreateAction(CoreAdminHandler.java:524) > at > org.apache.solr.handler.admin.CoreAdminHandler.handleRequestBody(CoreAdminHandler.java:144) > at > org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135) > at > org.apache.solr.servlet.SolrDispatchFilter.handleAdminRequest(SolrDispatchFilter.java:608) > at > org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:206) > at > org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:155) > at > org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1307) > at > org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:453) > at > 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137) > at > org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:560) > at > org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:231) > at > org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1072) > at > org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:382) > at > org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:193) > at > org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1006) > at > org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135) > at > org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:255) > at > org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:154) > at > org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:116) > at org.eclipse.jetty.server.Server.handle(Server.java:365) > at > org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:485) > at > org.eclipse.jetty.server.BlockingHttpConnection.handleRequest(BlockingHttpConnection.java:53) > at > org.eclipse.jetty.server.AbstractHttpConnection.headerComplete(AbstractHttpConnection.java:926) > at > org.eclipse.jetty.server.AbstractHttpConnection$RequestHandler.headerComplete(AbstractHttpConnection.java:988) > at org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:635) > at > org.eclipse.jetty.http.HttpParser.parseAvailable(HttpParser.java:235) > at > org.eclipse.jetty.server.BlockingHttpConnection.handle(BlockingHttpConnection.java:72) > at > org.eclipse.jetty.server.bio.SocketConnector$ConnectorEndPoint.run(SocketConnector.java:264) > at > org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:608) > at > org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:543) > at 
java.lang.Thread.run(Unknown Source) > Caused by: org.apache.solr.common.cloud.ZooKeeperException: > at > org.apache.solr.core.CoreContainer.registerInZk(CoreContainer.java:853) > at > org.apache.solr.core.CoreContainer.registerCore(CoreContainer.java:814) > at org.apache.solr.core.CoreContainer.register(CoreContainer.java:869) > at > org.apache.solr.handler.admin.CoreAdminHandler.handleCreateAction(CoreAdminHandler.java:520) > ... 30 more > Caused by: java.lang.IllegalArgumentException: Invalid path string >
[jira] [Commented] (SOLR-5790) SolrException: Unknown document router '{name=compositeId}'.
[ https://issues.apache.org/jira/browse/SOLR-5790?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15702515#comment-15702515 ] Ishan Chattopadhyaya commented on SOLR-5790: I tried this on 6.3, both on Linux and Windows. Created a collection each with compositeid router. I was able to issue a DBQ using a CloudSolrClient. I think this was temporary, due to reason that Shalin mentioned. > SolrException: Unknown document router '{name=compositeId}'. > > > Key: SOLR-5790 > URL: https://issues.apache.org/jira/browse/SOLR-5790 > Project: Solr > Issue Type: Bug > Components: SolrCloud >Affects Versions: 4.6.1 > Environment: Windows 7 64 Bit >Reporter: Günther Ruck >Priority: Minor > > I tried to use the CloudServerClass of the SolrJ-Api. SolrJ and Solr-Server > both in version 4.6.1. > {{serverCloud = new CloudSolrServer(zkHost);}} > My JUnit starts with a deleteByQuery. In DocRouter.java:46 a SolrException is > thrown because > {{routerMap.get(routerSpec);}} > finds no entry. > _Hints:_ > routerSpec is an instance of LinkedHashMapwith one entry (key:"name", > value:"compositeId"). > routerMap is a HashMap holding 4 entries, especially key:"compositeId" > has value: " org.apache.solr.common.cloud.CompositeIdRouter". > Probably there is a type mismatch at the routerMap.get call. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-8029) Modernize and standardize Solr APIs
[ https://issues.apache.org/jira/browse/SOLR-8029?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15702509#comment-15702509 ] ASF subversion and git services commented on SOLR-8029: --- Commit 47fd4929e60359e3df86966451ce9372dae74fd8 in lucene-solr's branch refs/heads/apiv2 from [~noble.paul] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=47fd492 ] SOLR-8029: more spec refinements for schema read > Modernize and standardize Solr APIs > --- > > Key: SOLR-8029 > URL: https://issues.apache.org/jira/browse/SOLR-8029 > Project: Solr > Issue Type: Improvement >Affects Versions: 6.0 >Reporter: Noble Paul >Assignee: Noble Paul > Labels: API, EaseOfUse > Fix For: 6.0 > > Attachments: SOLR-8029.patch, SOLR-8029.patch, SOLR-8029.patch, > SOLR-8029.patch > > > Solr APIs have organically evolved and they are sometimes inconsistent with > each other or not in sync with the widely followed conventions of HTTP > protocol. Trying to make incremental changes to make them modern is like > applying band-aid. So, we have done a complete rethink of what the APIs > should be. The most notable aspects of the API are as follows: > The new set of APIs will be placed under a new path {{/solr2}}. The legacy > APIs will continue to work under the {{/solr}} path as they used to and they > will be eventually deprecated. > There are 4 types of requests in the new API > * {{/v2//*}} : Hit a collection directly or manage > collections/shards/replicas > * {{/v2//*}} : Hit a core directly or manage cores > * {{/v2/cluster/*}} : Operations on cluster not pertaining to any collection > or core. e.g: security, overseer ops etc > This will be released as part of a major release. Check the link given below > for the full specification. 
Your comments are welcome > [Solr API version 2 Specification | http://bit.ly/1JYsBMQ]
[JENKINS-EA] Lucene-Solr-master-Linux (64bit/jdk-9-ea+140) - Build # 18394 - Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/18394/ Java: 64bit/jdk-9-ea+140 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC 2 tests failed. FAILED: org.apache.solr.cloud.ShardSplitTest.testSplitAfterFailedSplit Error Message: expected:<1> but was:<2> Stack Trace: java.lang.AssertionError: expected:<1> but was:<2> at __randomizedtesting.SeedInfo.seed([882A591F544E1798:7167CAB0683B5A12]:0) at org.junit.Assert.fail(Assert.java:93) at org.junit.Assert.failNotEquals(Assert.java:647) at org.junit.Assert.assertEquals(Assert.java:128) at org.junit.Assert.assertEquals(Assert.java:472) at org.junit.Assert.assertEquals(Assert.java:456) at org.apache.solr.cloud.ShardSplitTest.testSplitAfterFailedSplit(ShardSplitTest.java:283) at jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(java.base@9-ea/Native Method) at jdk.internal.reflect.NativeMethodAccessorImpl.invoke(java.base@9-ea/NativeMethodAccessorImpl.java:62) at jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(java.base@9-ea/DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(java.base@9-ea/Method.java:535) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957) at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:985) at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:960) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:811) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:462) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
[jira] [Resolved] (SOLR-7191) Improve stability and startup performance of SolrCloud with thousands of collections
[ https://issues.apache.org/jira/browse/SOLR-7191?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Noble Paul resolved SOLR-7191. -- Resolution: Fixed Assignee: Noble Paul (was: Shalin Shekhar Mangar) Fix Version/s: 6.3 > Improve stability and startup performance of SolrCloud with thousands of > collections > > > Key: SOLR-7191 > URL: https://issues.apache.org/jira/browse/SOLR-7191 > Project: Solr > Issue Type: Bug > Components: SolrCloud >Affects Versions: 5.0 >Reporter: Shawn Heisey >Assignee: Noble Paul > Labels: performance, scalability > Fix For: 6.3 > > Attachments: SOLR-7191.patch, SOLR-7191.patch, SOLR-7191.patch, > SOLR-7191.patch, SOLR-7191.patch, SOLR-7191.patch, SOLR-7191.patch, > lots-of-zkstatereader-updates-branch_5x.log > > > A user on the mailing list with thousands of collections (5000 on 4.10.3, > 4000 on 5.0) is having severe problems with getting Solr to restart. > I tried as hard as I could to duplicate the user setup, but I ran into many > problems myself even before I was able to get 4000 collections created on a > 5.0 example cloud setup. Restarting Solr takes a very long time, and it is > not very stable once it's up and running. > This kind of setup is very much pushing the envelope on SolrCloud performance > and scalability. It doesn't help that I'm running both Solr nodes on one > machine (I started with 'bin/solr -e cloud') and that ZK is embedded. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-8593) Integrate Apache Calcite into the SQLHandler
[ https://issues.apache.org/jira/browse/SOLR-8593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15702473#comment-15702473 ] Julian Hyde commented on SOLR-8593: --- Calcite is an algebra, not an executor. When it converts a HAVING clause to a SolrFilter you are more than welcome to run those filters in parallel. I suppose it would mean SolrAggregate producing parallel output streams. > Integrate Apache Calcite into the SQLHandler > > > Key: SOLR-8593 > URL: https://issues.apache.org/jira/browse/SOLR-8593 > Project: Solr > Issue Type: Improvement >Reporter: Joel Bernstein >Assignee: Joel Bernstein > Attachments: SOLR-8593.patch > > >The Presto SQL Parser was perfect for phase one of the SQLHandler. It was > nicely split off from the larger Presto project and it did everything that > was needed for the initial implementation. > Phase two of the SQL work though will require an optimizer. Here is where > Apache Calcite comes into play. It has a battle tested cost based optimizer > and has been integrated into Apache Drill and Hive. > This work can begin in trunk following the 6.0 release. The final query plans > will continue to be translated to Streaming API objects (TupleStreams), so > continued work on the JDBC driver should plug in nicely with the Calcite work.
[jira] [Commented] (SOLR-9727) solr.in.sh properties does not set the correct values.
[ https://issues.apache.org/jira/browse/SOLR-9727?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15702468#comment-15702468 ] Kevin Risden commented on SOLR-9727: [~michaelsuzuki] - Can you verify this is still happening after SOLR-9728 and SOLR-9801? You need to set at least SOLR_SSL_KEY_STORE for SOLR_SSL_*_CLIENT_AUTH to take place based on the logic. Another thing that happened was Jetty was upgraded to 9.3.14 from 9.3.8 with SOLR-9801 recently. > solr.in.sh properties does not set the correct values. > -- > > Key: SOLR-9727 > URL: https://issues.apache.org/jira/browse/SOLR-9727 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: Server >Affects Versions: master (7.0) >Reporter: Michael Suzuki >Assignee: Kevin Risden > Attachments: SOLR-9727.patch > > > When setting values to true on SOLR_SSL_NEED_CLIENT_AUTH or > SOLR_SSL_WANT_CLIENT_AUTH, jetty starts with these values as set to false. > {code} > SOLR_SSL_NEED_CLIENT_AUTH=true > SOLR_SSL_WANT_CLIENT_AUTH=false > {code} > To recreate the issue: > 1) Edit solr.ini.sh to enable ssl and set SOLR_SSL_NEED_CLIENT_AUTH to true. > 2) Start solr with remote debugging. > 3) Set a debug point in SSLContextFactory.java, on setNeedClientAuth(...) > Expected value for needClientAuth should be true instead its false. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
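For reference, a minimal solr.in.sh fragment reflecting Kevin's point above — the client-auth flags only take effect once a keystore is configured. Paths and passwords are placeholders, not recommended values:

```shell
# Hedged example solr.in.sh SSL fragment (paths/passwords are placeholders).
# SOLR_SSL_NEED_CLIENT_AUTH / SOLR_SSL_WANT_CLIENT_AUTH are only applied
# when SOLR_SSL_KEY_STORE is set.
SOLR_SSL_KEY_STORE=/path/to/solr-ssl.keystore.jks
SOLR_SSL_KEY_STORE_PASSWORD=secret
SOLR_SSL_TRUST_STORE=/path/to/solr-ssl.keystore.jks
SOLR_SSL_TRUST_STORE_PASSWORD=secret
SOLR_SSL_NEED_CLIENT_AUTH=true
SOLR_SSL_WANT_CLIENT_AUTH=false
```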
[jira] [Commented] (SOLR-8593) Integrate Apache Calcite into the SQLHandler
[ https://issues.apache.org/jira/browse/SOLR-8593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15702457#comment-15702457 ] Julian Hyde commented on SOLR-8593: --- Calcite rewrites {{SELECT DISTINCT ...}} to {{SELECT ... GROUP BY ...}}. So if you just deal with {{GROUP BY}} (i.e. Calcite's Aggregate operator) you should be fine. > Integrate Apache Calcite into the SQLHandler > > > Key: SOLR-8593 > URL: https://issues.apache.org/jira/browse/SOLR-8593 > Project: Solr > Issue Type: Improvement >Reporter: Joel Bernstein >Assignee: Joel Bernstein > Attachments: SOLR-8593.patch > > >The Presto SQL Parser was perfect for phase one of the SQLHandler. It was > nicely split off from the larger Presto project and it did everything that > was needed for the initial implementation. > Phase two of the SQL work though will require an optimizer. Here is where > Apache Calcite comes into play. It has a battle tested cost based optimizer > and has been integrated into Apache Drill and Hive. > This work can begin in trunk following the 6.0 release. The final query plans > will continue to be translated to Streaming API objects (TupleStreams), so > continued work on the JDBC driver should plug in nicely with the Calcite work. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (LUCENE-7576) RegExp automaton causes NPE on Terms.intersect
[ https://issues.apache.org/jira/browse/LUCENE-7576?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Michael McCandless updated LUCENE-7576: --- Attachment: LUCENE-7576.patch Patch w/ test (thank you!) and fix. This is unfortunately a confusing expert API; other terms dicts were checking that the provided compiled automaton is {{NORMAL}} and throwing a clearer exception if not, so I carried that same check over to the default terms dict. I also added a note to the javadocs for {{Terms.intersect}}. > RegExp automaton causes NPE on Terms.intersect > -- > > Key: LUCENE-7576 > URL: https://issues.apache.org/jira/browse/LUCENE-7576 > Project: Lucene - Core > Issue Type: Bug > Components: core/codecs, core/index >Affects Versions: 6.2.1 > Environment: java version "1.8.0_77" macOS 10.12.1 >Reporter: Tom Mortimer >Assignee: Michael McCandless >Priority: Minor > Attachments: LUCENE-7576.patch > > > Calling org.apache.lucene.index.Terms.intersect(automaton, null) causes an > NPE: > String index_path = > String term = > Directory directory = FSDirectory.open(Paths.get(index_path)); > IndexReader reader = DirectoryReader.open(directory); > Fields fields = MultiFields.getFields(reader); > Terms terms = fields.terms(args[1]); > CompiledAutomaton automaton = new CompiledAutomaton( > new RegExp("do_not_match_anything").toAutomaton()); > TermsEnum te = terms.intersect(automaton, null); > throws: > Exception in thread "main" java.lang.NullPointerException > at > org.apache.lucene.codecs.blocktree.IntersectTermsEnum.(IntersectTermsEnum.java:127) > at > org.apache.lucene.codecs.blocktree.FieldReader.intersect(FieldReader.java:185) > at org.apache.lucene.index.MultiTerms.intersect(MultiTerms.java:85) > ... -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-8029) Modernize and standardize Solr APIs
[ https://issues.apache.org/jira/browse/SOLR-8029?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15702387#comment-15702387 ] ASF subversion and git services commented on SOLR-8029: --- Commit 5ef717bd97cc1f479dfdf2bdd210f32406c8224d in lucene-solr's branch refs/heads/apiv2 from [~noble.paul] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=5ef717b ] SOLR-8029: more spec refinements for schema edit > Modernize and standardize Solr APIs > --- > > Key: SOLR-8029 > URL: https://issues.apache.org/jira/browse/SOLR-8029 > Project: Solr > Issue Type: Improvement >Affects Versions: 6.0 >Reporter: Noble Paul >Assignee: Noble Paul > Labels: API, EaseOfUse > Fix For: 6.0 > > Attachments: SOLR-8029.patch, SOLR-8029.patch, SOLR-8029.patch, > SOLR-8029.patch > > > Solr APIs have organically evolved and they are sometimes inconsistent with > each other or not in sync with the widely followed conventions of HTTP > protocol. Trying to make incremental changes to make them modern is like > applying band-aid. So, we have done a complete rethink of what the APIs > should be. The most notable aspects of the API are as follows: > The new set of APIs will be placed under a new path {{/solr2}}. The legacy > APIs will continue to work under the {{/solr}} path as they used to and they > will be eventually deprecated. > There are 4 types of requests in the new API > * {{/v2//*}} : Hit a collection directly or manage > collections/shards/replicas > * {{/v2//*}} : Hit a core directly or manage cores > * {{/v2/cluster/*}} : Operations on cluster not pertaining to any collection > or core. e.g: security, overseer ops etc > This will be released as part of a major release. Check the link given below > for the full specification. 
Your comments are welcome > [Solr API version 2 Specification | http://bit.ly/1JYsBMQ] -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-7576) RegExp automaton causes NPE on Terms.intersect
[ https://issues.apache.org/jira/browse/LUCENE-7576?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15702374#comment-15702374 ] Alan Woodward commented on LUCENE-7576: --- TermsEnum.intersect() doesn't work with single-string automata, apparently; we need to use CompiledAutomaton.getTermsEnum() instead. It would be nice to have a better error message in FilterReader though. Or maybe check for the automaton type, and delegate through if need be? > RegExp automaton causes NPE on Terms.intersect > -- > > Key: LUCENE-7576 > URL: https://issues.apache.org/jira/browse/LUCENE-7576 > Project: Lucene - Core > Issue Type: Bug > Components: core/codecs, core/index >Affects Versions: 6.2.1 > Environment: java version "1.8.0_77" macOS 10.12.1 >Reporter: Tom Mortimer >Assignee: Michael McCandless >Priority: Minor > > Calling org.apache.lucene.index.Terms.intersect(automaton, null) causes an > NPE: > String index_path = > String term = > Directory directory = FSDirectory.open(Paths.get(index_path)); > IndexReader reader = DirectoryReader.open(directory); > Fields fields = MultiFields.getFields(reader); > Terms terms = fields.terms(args[1]); > CompiledAutomaton automaton = new CompiledAutomaton( > new RegExp("do_not_match_anything").toAutomaton()); > TermsEnum te = terms.intersect(automaton, null); > throws: > Exception in thread "main" java.lang.NullPointerException > at > org.apache.lucene.codecs.blocktree.IntersectTermsEnum.(IntersectTermsEnum.java:127) > at > org.apache.lucene.codecs.blocktree.FieldReader.intersect(FieldReader.java:185) > at org.apache.lucene.index.MultiTerms.intersect(MultiTerms.java:85) > ... -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
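[Editorial sketch of the workaround described in the comment above — not the committed fix. It assumes the Lucene 6.x API; `reader` and the field name "body" are placeholders. `CompiledAutomaton.getTermsEnum` inspects the automaton type itself and only delegates to `Terms.intersect` for NORMAL-type automata, so it is safe for single-string and match-nothing automata:]

```java
// Sketch of the suggested workaround: let CompiledAutomaton pick the right
// enum instead of calling Terms.intersect directly, which only supports
// NORMAL-type compiled automata.
CompiledAutomaton automaton = new CompiledAutomaton(
    new RegExp("do_not_match_anything").toAutomaton());
Terms terms = MultiFields.getFields(reader).terms("body");  // "body" is a placeholder field
TermsEnum te = automaton.getTermsEnum(terms);  // handles NONE/SINGLE/ALL/NORMAL cases
```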
[jira] [Assigned] (LUCENE-7576) RegExp automaton causes NPE on Terms.intersect
[ https://issues.apache.org/jira/browse/LUCENE-7576?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Michael McCandless reassigned LUCENE-7576: -- Assignee: Michael McCandless > RegExp automaton causes NPE on Terms.intersect > -- > > Key: LUCENE-7576 > URL: https://issues.apache.org/jira/browse/LUCENE-7576 > Project: Lucene - Core > Issue Type: Bug > Components: core/codecs, core/index >Affects Versions: 6.2.1 > Environment: java version "1.8.0_77" macOS 10.12.1 >Reporter: Tom Mortimer >Assignee: Michael McCandless >Priority: Minor > > Calling org.apache.lucene.index.Terms.intersect(automaton, null) causes an > NPE: > String index_path = > String term = > Directory directory = FSDirectory.open(Paths.get(index_path)); > IndexReader reader = DirectoryReader.open(directory); > Fields fields = MultiFields.getFields(reader); > Terms terms = fields.terms(args[1]); > CompiledAutomaton automaton = new CompiledAutomaton( > new RegExp("do_not_match_anything").toAutomaton()); > TermsEnum te = terms.intersect(automaton, null); > throws: > Exception in thread "main" java.lang.NullPointerException > at > org.apache.lucene.codecs.blocktree.IntersectTermsEnum.(IntersectTermsEnum.java:127) > at > org.apache.lucene.codecs.blocktree.FieldReader.intersect(FieldReader.java:185) > at org.apache.lucene.index.MultiTerms.intersect(MultiTerms.java:85) > ... -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-7576) RegExp automaton causes NPE on Terms.intersect
[ https://issues.apache.org/jira/browse/LUCENE-7576?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15702364#comment-15702364 ] Michael McCandless commented on LUCENE-7576: I'll look... > RegExp automaton causes NPE on Terms.intersect > -- > > Key: LUCENE-7576 > URL: https://issues.apache.org/jira/browse/LUCENE-7576 > Project: Lucene - Core > Issue Type: Bug > Components: core/codecs, core/index >Affects Versions: 6.2.1 > Environment: java version "1.8.0_77" macOS 10.12.1 >Reporter: Tom Mortimer >Priority: Minor > > Calling org.apache.lucene.index.Terms.intersect(automaton, null) causes an > NPE: > String index_path = > String term = > Directory directory = FSDirectory.open(Paths.get(index_path)); > IndexReader reader = DirectoryReader.open(directory); > Fields fields = MultiFields.getFields(reader); > Terms terms = fields.terms(args[1]); > CompiledAutomaton automaton = new CompiledAutomaton( > new RegExp("do_not_match_anything").toAutomaton()); > TermsEnum te = terms.intersect(automaton, null); > throws: > Exception in thread "main" java.lang.NullPointerException > at > org.apache.lucene.codecs.blocktree.IntersectTermsEnum.(IntersectTermsEnum.java:127) > at > org.apache.lucene.codecs.blocktree.FieldReader.intersect(FieldReader.java:185) > at org.apache.lucene.index.MultiTerms.intersect(MultiTerms.java:85) > ... -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Assigned] (SOLR-9727) solr.in.sh properties does not set the correct values.
[ https://issues.apache.org/jira/browse/SOLR-9727?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kevin Risden reassigned SOLR-9727: -- Assignee: Kevin Risden > solr.in.sh properties does not set the correct values. > -- > > Key: SOLR-9727 > URL: https://issues.apache.org/jira/browse/SOLR-9727 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: Server >Affects Versions: master (7.0) >Reporter: Michael Suzuki >Assignee: Kevin Risden > Attachments: SOLR-9727.patch > > > When setting SOLR_SSL_NEED_CLIENT_AUTH or > SOLR_SSL_WANT_CLIENT_AUTH to true, Jetty starts with these values set to false. > {code} > SOLR_SSL_NEED_CLIENT_AUTH=true > SOLR_SSL_WANT_CLIENT_AUTH=false > {code} > To recreate the issue: > 1) Edit solr.in.sh to enable SSL and set SOLR_SSL_NEED_CLIENT_AUTH to true. > 2) Start Solr with remote debugging. > 3) Set a breakpoint in SSLContextFactory.java on setNeedClientAuth(...). > The expected value for needClientAuth is true, but instead it is false. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
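[Editorial sketch for context: the start script has to translate these solr.in.sh variables into Jetty system properties. A minimal, hypothetical version of that mapping is below; the property names `solr.jetty.ssl.needClientAuth` / `solr.jetty.ssl.wantClientAuth` are taken from Solr's bundled jetty-ssl.xml, and the real bin/solr logic may differ:]

```shell
# Hypothetical sketch: forward the solr.in.sh booleans to Jetty as system
# properties instead of silently dropping them.
SOLR_SSL_NEED_CLIENT_AUTH=true
SOLR_SSL_WANT_CLIENT_AUTH=false
SOLR_SSL_OPTS=""
if [ -n "$SOLR_SSL_NEED_CLIENT_AUTH" ]; then
  SOLR_SSL_OPTS="$SOLR_SSL_OPTS -Dsolr.jetty.ssl.needClientAuth=$SOLR_SSL_NEED_CLIENT_AUTH"
fi
if [ -n "$SOLR_SSL_WANT_CLIENT_AUTH" ]; then
  SOLR_SSL_OPTS="$SOLR_SSL_OPTS -Dsolr.jetty.ssl.wantClientAuth=$SOLR_SSL_WANT_CLIENT_AUTH"
fi
echo "$SOLR_SSL_OPTS"
```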
[jira] [Commented] (SOLR-8029) Modernize and standardize Solr APIs
[ https://issues.apache.org/jira/browse/SOLR-8029?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15702319#comment-15702319 ] ASF subversion and git services commented on SOLR-8029: --- Commit f3b14aebd817e922afc0268d05a8cbbaf6b8a985 in lucene-solr's branch refs/heads/apiv2 from [~noble.paul] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=f3b14ae ] SOLR-8029: more spec refinements > Modernize and standardize Solr APIs > --- > > Key: SOLR-8029 > URL: https://issues.apache.org/jira/browse/SOLR-8029 > Project: Solr > Issue Type: Improvement >Affects Versions: 6.0 >Reporter: Noble Paul >Assignee: Noble Paul > Labels: API, EaseOfUse > Fix For: 6.0 > > Attachments: SOLR-8029.patch, SOLR-8029.patch, SOLR-8029.patch, > SOLR-8029.patch > > > Solr APIs have organically evolved and they are sometimes inconsistent with > each other or not in sync with the widely followed conventions of HTTP > protocol. Trying to make incremental changes to make them modern is like > applying band-aid. So, we have done a complete rethink of what the APIs > should be. The most notable aspects of the API are as follows: > The new set of APIs will be placed under a new path {{/solr2}}. The legacy > APIs will continue to work under the {{/solr}} path as they used to and they > will be eventually deprecated. > There are 4 types of requests in the new API > * {{/v2//*}} : Hit a collection directly or manage > collections/shards/replicas > * {{/v2//*}} : Hit a core directly or manage cores > * {{/v2/cluster/*}} : Operations on cluster not pertaining to any collection > or core. e.g: security, overseer ops etc > This will be released as part of a major release. Check the link given below > for the full specification. 
Your comments are welcome > [Solr API version 2 Specification | http://bit.ly/1JYsBMQ] -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Resolved] (SOLR-9728) Ability to specify Key Store type in solr.in.sh file for SSL
[ https://issues.apache.org/jira/browse/SOLR-9728?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kevin Risden resolved SOLR-9728. Resolution: Fixed Fix Version/s: 6.4 master (7.0) > Ability to specify Key Store type in solr.in.sh file for SSL > > > Key: SOLR-9728 > URL: https://issues.apache.org/jira/browse/SOLR-9728 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) > Components: Server >Affects Versions: master (7.0) >Reporter: Michael Suzuki >Assignee: Kevin Risden > Fix For: master (7.0), 6.4 > > Attachments: SOLR-9728.patch, SOLR-9728.patch, SOLR-9728.patch > > > At present when ssl is enabled we can't set the SSL type. It currently > defaults to JCK. > As a user I would like to configure the SSL type via the solr.in file. > For instance "JCEKS" would be configured as: > {code} > SOLR_SSL_KEYSTORE_TYPE=JCEKS > SOLR_SSL_TRUSTSTORE_TYPE=JCEKS > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-9728) Ability to specify Key Store type in solr.in.sh file for SSL
[ https://issues.apache.org/jira/browse/SOLR-9728?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15702293#comment-15702293 ] ASF subversion and git services commented on SOLR-9728: --- Commit ec385708c6e0c47440127410c1223f14703c24e1 in lucene-solr's branch refs/heads/branch_6x from [~risdenk] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=ec38570 ] SOLR-9728: Ability to specify Key Store type in solr.in file for SSL > Ability to specify Key Store type in solr.in.sh file for SSL > > > Key: SOLR-9728 > URL: https://issues.apache.org/jira/browse/SOLR-9728 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) > Components: Server >Affects Versions: master (7.0) >Reporter: Michael Suzuki >Assignee: Kevin Risden > Attachments: SOLR-9728.patch, SOLR-9728.patch, SOLR-9728.patch > > > At present when ssl is enabled we can't set the SSL type. It currently > defaults to JCK. > As a user I would like to configure the SSL type via the solr.in file. > For instance "JCEKS" would be configured as: > {code} > SOLR_SSL_KEYSTORE_TYPE=JCEKS > SOLR_SSL_TRUSTSTORE_TYPE=JCEKS > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-8029) Modernize and standardize Solr APIs
[ https://issues.apache.org/jira/browse/SOLR-8029?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15702289#comment-15702289 ] ASF subversion and git services commented on SOLR-8029: --- Commit 509db5805748e9d8e825f70058d92f6c251aa0f4 in lucene-solr's branch refs/heads/apiv2 from [~noble.paul] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=509db58 ] SOLR-8029: more spec refinements > Modernize and standardize Solr APIs > --- > > Key: SOLR-8029 > URL: https://issues.apache.org/jira/browse/SOLR-8029 > Project: Solr > Issue Type: Improvement >Affects Versions: 6.0 >Reporter: Noble Paul >Assignee: Noble Paul > Labels: API, EaseOfUse > Fix For: 6.0 > > Attachments: SOLR-8029.patch, SOLR-8029.patch, SOLR-8029.patch, > SOLR-8029.patch > > > Solr APIs have organically evolved and they are sometimes inconsistent with > each other or not in sync with the widely followed conventions of HTTP > protocol. Trying to make incremental changes to make them modern is like > applying band-aid. So, we have done a complete rethink of what the APIs > should be. The most notable aspects of the API are as follows: > The new set of APIs will be placed under a new path {{/solr2}}. The legacy > APIs will continue to work under the {{/solr}} path as they used to and they > will be eventually deprecated. > There are 4 types of requests in the new API > * {{/v2//*}} : Hit a collection directly or manage > collections/shards/replicas > * {{/v2//*}} : Hit a core directly or manage cores > * {{/v2/cluster/*}} : Operations on cluster not pertaining to any collection > or core. e.g: security, overseer ops etc > This will be released as part of a major release. Check the link given below > for the full specification. 
Your comments are welcome > [Solr API version 2 Specification | http://bit.ly/1JYsBMQ] -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-9728) Ability to specify Key Store type in solr.in.sh file for SSL
[ https://issues.apache.org/jira/browse/SOLR-9728?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15702292#comment-15702292 ] ASF subversion and git services commented on SOLR-9728: --- Commit bf424d1ec1602dffeb33ab0acc8f470e351a6959 in lucene-solr's branch refs/heads/master from [~risdenk] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=bf424d1 ] SOLR-9728: Ability to specify Key Store type in solr.in file for SSL > Ability to specify Key Store type in solr.in.sh file for SSL > > > Key: SOLR-9728 > URL: https://issues.apache.org/jira/browse/SOLR-9728 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) > Components: Server >Affects Versions: master (7.0) >Reporter: Michael Suzuki >Assignee: Kevin Risden > Attachments: SOLR-9728.patch, SOLR-9728.patch, SOLR-9728.patch > > > At present when ssl is enabled we can't set the SSL type. It currently > defaults to JCK. > As a user I would like to configure the SSL type via the solr.in file. > For instance "JCEKS" would be configured as: > {code} > SOLR_SSL_KEYSTORE_TYPE=JCEKS > SOLR_SSL_TRUSTSTORE_TYPE=JCEKS > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-7563) BKD index should compress unused leading bytes
[ https://issues.apache.org/jira/browse/LUCENE-7563?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15702280#comment-15702280 ] Michael McCandless commented on LUCENE-7563: bq. It seems we are always delta coding with the split value of the parent level, but for the multi-dimensional case, I think it would be better to delta-code with the last split value that was on the same dimension? Hmm I think I am already doing that? Note that the {{splitValuesStack}} in {{BKDReader.PackedIndexTree}} holds all dimensions' last split values, and then when I read the suffix bytes in, I copy them into the packed values for the current split dimension: {noformat} in.readBytes(splitValuesStack[level], splitDim*bytesPerDim+prefix, suffix); {noformat} I think? I'll test on the OpenStreetMaps geo benchmark to measure the impact ... I'll also run the 2B tests to make sure nothing broke. bq. For instance we use whole bytes to store the split dimension or the prefix length while they only need 3 and 4 bits? In the multi-dimensional case we could store both on a single byte. Oooh that's a great idea! Saves 1 byte per inner node. We need 5 bits for the prefix I think since it can range 0 .. 16 inclusive, and 3 bits for the {{splitDim}} since it's 0 .. 7 inclusive. bq. It doesn't need to be done in the same patch, but it would also be nice for SimpleText to not use the legacy format of the index. I'm not sure how to proceed however. Yeah I'm not sure what to do here either ... but it felt wrong to just pass these packed bytes to the simple text format ... that packed form is even further from "simple" than the two arrays we have now. 
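[Editorial sketch of the single-byte packing discussed above, using plain bit arithmetic — illustrative only, not the actual BKD writer/reader code. The prefix length needs 5 bits (0..16 inclusive) and the split dimension 3 bits (0..7 inclusive), so both fit in one byte:]

```java
// Illustrative sketch: pack a prefix length (0..16, 5 bits) and a split
// dimension (0..7, 3 bits) into one byte, saving a byte per inner node.
public class BkdPackSketch {
    static byte pack(int prefixLen, int splitDim) {
        // prefixLen occupies the high 5 bits, splitDim the low 3 bits
        return (byte) ((prefixLen << 3) | splitDim);
    }
    static int unpackPrefixLen(byte packed) {
        return (packed & 0xFF) >>> 3;
    }
    static int unpackSplitDim(byte packed) {
        return packed & 0x07;
    }
    public static void main(String[] args) {
        byte b = pack(16, 7); // worst case for both fields still fits
        System.out.println(unpackPrefixLen(b) + " " + unpackSplitDim(b)); // prints "16 7"
    }
}
```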
> BKD index should compress unused leading bytes > -- > > Key: LUCENE-7563 > URL: https://issues.apache.org/jira/browse/LUCENE-7563 > Project: Lucene - Core > Issue Type: Improvement >Reporter: Michael McCandless > Fix For: master (7.0), 6.4 > > Attachments: LUCENE-7563.patch, LUCENE-7563.patch > > > Today the BKD (points) in-heap index always uses {{dimensionNumBytes}} per > dimension, but if e.g. you are indexing {{LongPoint}} yet only use the bottom > two bytes in a given segment, we shouldn't store all those leading 0s in the > index. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (SOLR-9728) Ability to specify Key Store type in solr.in.sh file for SSL
[ https://issues.apache.org/jira/browse/SOLR-9728?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kevin Risden updated SOLR-9728: --- Summary: Ability to specify Key Store type in solr.in.sh file for SSL (was: Ability to specify Key Store type in solr.in file for SSL) > Ability to specify Key Store type in solr.in.sh file for SSL > > > Key: SOLR-9728 > URL: https://issues.apache.org/jira/browse/SOLR-9728 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) > Components: Server >Affects Versions: master (7.0) >Reporter: Michael Suzuki >Assignee: Kevin Risden > Attachments: SOLR-9728.patch, SOLR-9728.patch, SOLR-9728.patch > > > At present when ssl is enabled we can't set the SSL type. It currently > defaults to JCK. > As a user I would like to configure the SSL type via the solr.in file. > For instance "JCEKS" would be configured as: > {code} > SOLR_SSL_KEYSTORE_TYPE=JCEKS > SOLR_SSL_TRUSTSTORE_TYPE=JCEKS > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-7191) Improve stability and startup performance of SolrCloud with thousands of collections
[ https://issues.apache.org/jira/browse/SOLR-7191?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15702278#comment-15702278 ] Erick Erickson commented on SOLR-7191: -- [~noble.paul] Can we close this since SOLR-7280 has been committed? > Improve stability and startup performance of SolrCloud with thousands of > collections > > > Key: SOLR-7191 > URL: https://issues.apache.org/jira/browse/SOLR-7191 > Project: Solr > Issue Type: Bug > Components: SolrCloud >Affects Versions: 5.0 >Reporter: Shawn Heisey >Assignee: Shalin Shekhar Mangar > Labels: performance, scalability > Attachments: SOLR-7191.patch, SOLR-7191.patch, SOLR-7191.patch, > SOLR-7191.patch, SOLR-7191.patch, SOLR-7191.patch, SOLR-7191.patch, > lots-of-zkstatereader-updates-branch_5x.log > > > A user on the mailing list with thousands of collections (5000 on 4.10.3, > 4000 on 5.0) is having severe problems with getting Solr to restart. > I tried as hard as I could to duplicate the user setup, but I ran into many > problems myself even before I was able to get 4000 collections created on a > 5.0 example cloud setup. Restarting Solr takes a very long time, and it is > not very stable once it's up and running. > This kind of setup is very much pushing the envelope on SolrCloud performance > and scalability. It doesn't help that I'm running both Solr nodes on one > machine (I started with 'bin/solr -e cloud') and that ZK is embedded. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-8029) Modernize and standardize Solr APIs
[ https://issues.apache.org/jira/browse/SOLR-8029?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15702266#comment-15702266 ] ASF subversion and git services commented on SOLR-8029: --- Commit cc21a767b92df6430cd46e2d07253ef50229c61f in lucene-solr's branch refs/heads/apiv2 from [~noble.paul] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=cc21a76 ] SOLR-8029: more spec > Modernize and standardize Solr APIs > --- > > Key: SOLR-8029 > URL: https://issues.apache.org/jira/browse/SOLR-8029 > Project: Solr > Issue Type: Improvement >Affects Versions: 6.0 >Reporter: Noble Paul >Assignee: Noble Paul > Labels: API, EaseOfUse > Fix For: 6.0 > > Attachments: SOLR-8029.patch, SOLR-8029.patch, SOLR-8029.patch, > SOLR-8029.patch > > > Solr APIs have organically evolved and they are sometimes inconsistent with > each other or not in sync with the widely followed conventions of HTTP > protocol. Trying to make incremental changes to make them modern is like > applying band-aid. So, we have done a complete rethink of what the APIs > should be. The most notable aspects of the API are as follows: > The new set of APIs will be placed under a new path {{/solr2}}. The legacy > APIs will continue to work under the {{/solr}} path as they used to and they > will be eventually deprecated. > There are 4 types of requests in the new API > * {{/v2//*}} : Hit a collection directly or manage > collections/shards/replicas > * {{/v2//*}} : Hit a core directly or manage cores > * {{/v2/cluster/*}} : Operations on cluster not pertaining to any collection > or core. e.g: security, overseer ops etc > This will be released as part of a major release. Check the link given below > for the full specification. Your comments are welcome > [Solr API version 2 Specification | http://bit.ly/1JYsBMQ] -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-8029) Modernize and standardize Solr APIs
[ https://issues.apache.org/jira/browse/SOLR-8029?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15702245#comment-15702245 ] ASF subversion and git services commented on SOLR-8029: --- Commit 5e179a18aa4bcb3697e9f4964ff2a01b9f6b082e in lucene-solr's branch refs/heads/apiv2 from [~noble.paul] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=5e179a1 ] SOLR-8029: reuse add-fieldtype spec > Modernize and standardize Solr APIs > --- > > Key: SOLR-8029 > URL: https://issues.apache.org/jira/browse/SOLR-8029 > Project: Solr > Issue Type: Improvement >Affects Versions: 6.0 >Reporter: Noble Paul >Assignee: Noble Paul > Labels: API, EaseOfUse > Fix For: 6.0 > > Attachments: SOLR-8029.patch, SOLR-8029.patch, SOLR-8029.patch, > SOLR-8029.patch > > > Solr APIs have organically evolved and they are sometimes inconsistent with > each other or not in sync with the widely followed conventions of HTTP > protocol. Trying to make incremental changes to make them modern is like > applying band-aid. So, we have done a complete rethink of what the APIs > should be. The most notable aspects of the API are as follows: > The new set of APIs will be placed under a new path {{/solr2}}. The legacy > APIs will continue to work under the {{/solr}} path as they used to and they > will be eventually deprecated. > There are 4 types of requests in the new API > * {{/v2//*}} : Hit a collection directly or manage > collections/shards/replicas > * {{/v2//*}} : Hit a core directly or manage cores > * {{/v2/cluster/*}} : Operations on cluster not pertaining to any collection > or core. e.g: security, overseer ops etc > This will be released as part of a major release. Check the link given below > for the full specification. 
Your comments are welcome > [Solr API version 2 Specification | http://bit.ly/1JYsBMQ] -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-4735) Improve Solr metrics reporting
[ https://issues.apache.org/jira/browse/SOLR-4735?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15702044#comment-15702044 ] Andrzej Bialecki commented on SOLR-4735: - [~jwartes] thanks for the pointer to your PR, I borrowed parts of your code and updated my PR: * simplified and renamed {{SolrMetricManager}} to {{SolrCoreMetricManager}} as it really is specific to managing metrics related to {{SolrCore}}. * added a global component for registry management {{SolrMetricManager}}, which mostly offers useful syntactic sugar for working with {{SharedMetricRegistries}} Next step: I'm going to merge my work into [~shalinmangar]'s branch. > Improve Solr metrics reporting > -- > > Key: SOLR-4735 > URL: https://issues.apache.org/jira/browse/SOLR-4735 > Project: Solr > Issue Type: Improvement >Reporter: Alan Woodward >Assignee: Andrzej Bialecki >Priority: Minor > Attachments: SOLR-4735.patch, SOLR-4735.patch, SOLR-4735.patch, > SOLR-4735.patch > > > Following on from a discussion on the mailing list: > http://search-lucene.com/m/IO0EI1qdyJF1/codahale=Solr+metrics+in+Codahale+metrics+and+Graphite+ > It would be good to make Solr play more nicely with existing devops > monitoring systems, such as Graphite or Ganglia. Stats monitoring at the > moment is poll-only, either via JMX or through the admin stats page. I'd > like to refactor things a bit to make this more pluggable. > This patch is a start. It adds a new interface, InstrumentedBean, which > extends SolrInfoMBean to return a > [[Metrics|http://metrics.codahale.com/manual/core/]] MetricRegistry, and a > couple of MetricReporters (which basically just duplicate the JMX and admin > page reporting that's there at the moment, but which should be more > extensible). The patch includes a change to RequestHandlerBase showing how > this could work. The idea would be to eventually replace the getStatistics() > call on SolrInfoMBean with this instead. 
> The next step would be to allow more MetricReporters to be defined in > solrconfig.xml. The Metrics library comes with ganglia and graphite > reporting modules, and we can add contrib plugins for both of those. > There's some more general cleanup that could be done around SolrInfoMBean > (we've got two plugin handlers at /mbeans and /plugins that basically do the > same thing, and the beans themselves have some weirdly inconsistent data on > them - getVersion() returns different things for different impls, and > getSource() seems pretty useless), but maybe that's for another issue. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Created] (LUCENE-7576) RegExp automaton causes NPE on Terms.intersect
Tom Mortimer created LUCENE-7576: Summary: RegExp automaton causes NPE on Terms.intersect Key: LUCENE-7576 URL: https://issues.apache.org/jira/browse/LUCENE-7576 Project: Lucene - Core Issue Type: Bug Components: core/codecs, core/index Affects Versions: 6.2.1 Environment: java version "1.8.0_77" macOS 10.12.1 Reporter: Tom Mortimer Priority: Minor Calling org.apache.lucene.index.Terms.intersect(automaton, null) causes an NPE: String index_path = String term = Directory directory = FSDirectory.open(Paths.get(index_path)); IndexReader reader = DirectoryReader.open(directory); Fields fields = MultiFields.getFields(reader); Terms terms = fields.terms(args[1]); CompiledAutomaton automaton = new CompiledAutomaton( new RegExp("do_not_match_anything").toAutomaton()); TermsEnum te = terms.intersect(automaton, null); throws: Exception in thread "main" java.lang.NullPointerException at org.apache.lucene.codecs.blocktree.IntersectTermsEnum.<init>(IntersectTermsEnum.java:127) at org.apache.lucene.codecs.blocktree.FieldReader.intersect(FieldReader.java:185) at org.apache.lucene.index.MultiTerms.intersect(MultiTerms.java:85) ... -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-8029) Modernize and standardize Solr APIs
[ https://issues.apache.org/jira/browse/SOLR-8029?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15702022#comment-15702022 ] ASF subversion and git services commented on SOLR-8029: --- Commit 84695ff71f33716774b85413eced4d42fb26ad09 in lucene-solr's branch refs/heads/apiv2 from [~noble.paul] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=84695ff ] SOLR-8029: add support for #include in spec files > Modernize and standardize Solr APIs > --- > > Key: SOLR-8029 > URL: https://issues.apache.org/jira/browse/SOLR-8029 > Project: Solr > Issue Type: Improvement >Affects Versions: 6.0 >Reporter: Noble Paul >Assignee: Noble Paul > Labels: API, EaseOfUse > Fix For: 6.0 > > Attachments: SOLR-8029.patch, SOLR-8029.patch, SOLR-8029.patch, > SOLR-8029.patch > > > Solr APIs have organically evolved and they are sometimes inconsistent with > each other or not in sync with the widely followed conventions of HTTP > protocol. Trying to make incremental changes to make them modern is like > applying band-aid. So, we have done a complete rethink of what the APIs > should be. The most notable aspects of the API are as follows: > The new set of APIs will be placed under a new path {{/solr2}}. The legacy > APIs will continue to work under the {{/solr}} path as they used to and they > will be eventually deprecated. > There are 4 types of requests in the new API > * {{/v2//*}} : Hit a collection directly or manage > collections/shards/replicas > * {{/v2//*}} : Hit a core directly or manage cores > * {{/v2/cluster/*}} : Operations on cluster not pertaining to any collection > or core. e.g: security, overseer ops etc > This will be released as part of a major release. Check the link given below > for the full specification. 
Your comments are welcome > [Solr API version 2 Specification | http://bit.ly/1JYsBMQ] -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[JENKINS] Lucene-Solr-NightlyTests-6.x - Build # 213 - Still unstable
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-6.x/213/ 4 tests failed. FAILED: org.apache.solr.cloud.CdcrReplicationDistributedZkTest.testReplicationStartStop Error Message: Timeout while trying to assert number of documents @ target_collection Stack Trace: java.lang.AssertionError: Timeout while trying to assert number of documents @ target_collection at __randomizedtesting.SeedInfo.seed([11CF307BDB5B8AC9:920CEEE8CF830140]:0) at org.apache.solr.cloud.BaseCdcrDistributedZkTest.assertNumDocs(BaseCdcrDistributedZkTest.java:271) at org.apache.solr.cloud.CdcrReplicationDistributedZkTest.testReplicationStartStop(CdcrReplicationDistributedZkTest.java:173) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957) at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:992) at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:967) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:811) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:462) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at
[jira] [Created] (SOLR-9805) Use metrics-jvm library to instrument jvm internals
Shalin Shekhar Mangar created SOLR-9805: --- Summary: Use metrics-jvm library to instrument jvm internals Key: SOLR-9805 URL: https://issues.apache.org/jira/browse/SOLR-9805 Project: Solr Issue Type: New Feature Security Level: Public (Default Security Level. Issues are Public) Reporter: Shalin Shekhar Mangar Assignee: Shalin Shekhar Mangar Fix For: master (7.0), 6.4 See http://metrics.dropwizard.io/3.1.0/manual/jvm/
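For context, a stdlib-only sketch (class name {{JvmStats}} is mine, not from the issue) of the JVM internals the metrics-jvm gauge sets read; the library's MemoryUsageGaugeSet, GarbageCollectorMetricSet and ThreadStatesGaugeSet wrap these same java.lang.management MX beans behind Gauge objects:

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;

// Rough sketch of the data points a metrics-jvm integration would surface.
public class JvmStats {
    static long heapUsedBytes() {
        MemoryMXBean mem = ManagementFactory.getMemoryMXBean();
        return mem.getHeapMemoryUsage().getUsed();
    }

    static int liveThreadCount() {
        return ManagementFactory.getThreadMXBean().getThreadCount();
    }

    static long totalGcCount() {
        long count = 0;
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            long c = gc.getCollectionCount(); // -1 if the collector reports no counts
            if (c > 0) count += c;
        }
        return count;
    }

    public static void main(String[] args) {
        System.out.println("heap.used=" + heapUsedBytes()
                + " threads=" + liveThreadCount()
                + " gc.count=" + totalGcCount());
    }
}
```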
[jira] [Commented] (SOLR-4735) Improve Solr metrics reporting
[ https://issues.apache.org/jira/browse/SOLR-4735?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15701901#comment-15701901 ] Shalin Shekhar Mangar commented on SOLR-4735: - I have created a new branch called "feature/metrics" for SOLR-9788, this and other future metrics enhancements -- http://git-wip-us.apache.org/repos/asf/lucene-solr/commit/497212e0 Let's use this for integration between different patches. > Improve Solr metrics reporting > -- > > Key: SOLR-4735 > URL: https://issues.apache.org/jira/browse/SOLR-4735 > Project: Solr > Issue Type: Improvement >Reporter: Alan Woodward >Assignee: Andrzej Bialecki >Priority: Minor > Attachments: SOLR-4735.patch, SOLR-4735.patch, SOLR-4735.patch, > SOLR-4735.patch > > > Following on from a discussion on the mailing list: > http://search-lucene.com/m/IO0EI1qdyJF1/codahale=Solr+metrics+in+Codahale+metrics+and+Graphite+ > It would be good to make Solr play more nicely with existing devops > monitoring systems, such as Graphite or Ganglia. Stats monitoring at the > moment is poll-only, either via JMX or through the admin stats page. I'd > like to refactor things a bit to make this more pluggable. > This patch is a start. It adds a new interface, InstrumentedBean, which > extends SolrInfoMBean to return a > [[Metrics|http://metrics.codahale.com/manual/core/]] MetricRegistry, and a > couple of MetricReporters (which basically just duplicate the JMX and admin > page reporting that's there at the moment, but which should be more > extensible). The patch includes a change to RequestHandlerBase showing how > this could work. The idea would be to eventually replace the getStatistics() > call on SolrInfoMBean with this instead. > The next step would be to allow more MetricReporters to be defined in > solrconfig.xml. The Metrics library comes with ganglia and graphite > reporting modules, and we can add contrib plugins for both of those. 
> There's some more general cleanup that could be done around SolrInfoMBean > (we've got two plugin handlers at /mbeans and /plugins that basically do the > same thing, and the beans themselves have some weirdly inconsistent data on > them - getVersion() returns different things for different impls, and > getSource() seems pretty useless), but maybe that's for another issue.
[jira] [Updated] (SOLR-2242) Get distinct count of names for a facet field
[ https://issues.apache.org/jira/browse/SOLR-2242?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shawn Heisey updated SOLR-2242: --- Fix Version/s: (was: 4.9) 5.2 > Get distinct count of names for a facet field > - > > Key: SOLR-2242 > URL: https://issues.apache.org/jira/browse/SOLR-2242 > Project: Solr > Issue Type: New Feature > Components: Response Writers >Affects Versions: 4.0-ALPHA >Reporter: Bill Bell >Priority: Minor > Fix For: 5.2, 6.0 > > Attachments: SOLR-2242-3x.patch, SOLR-2242-3x_5_tests.patch, > SOLR-2242-solr40-3.patch, SOLR-2242.patch, SOLR-2242.patch, SOLR-2242.patch, > SOLR-2242.shard.withtests.patch, SOLR-2242.solr3.1-fix.patch, > SOLR-2242.solr3.1.patch, SOLR.2242.solr3.1.patch > > > When returning facet.field= you will get a list of matches for > distinct values. This is normal behavior. This patch tells you how many > distinct values you have (# of rows). Use with limit=-1 and mincount=1. > The feature is called "namedistinct". Here is an example: > Parameters: > facet.numTerms or f..facet.numTerms = true (default is false) - turn > on distinct counting of terms > facet.field - the field to count the terms > It creates a new section in the facet section... > http://localhost:8983/solr/select?shards=localhost:8983/solr,localhost:7574/solr=true=*:*=true=1=true=-1=price > http://localhost:8983/solr/select?shards=localhost:8983/solr,localhost:7574/solr=true=*:*=true=1=false=-1=price > http://localhost:8983/solr/select?shards=localhost:8983/solr,localhost:7574/solr=true=*:*=true=1=true=-1=price > This currently only works on facet.field. > {code} > > > ... > > > 14 > > > 14 > > > > > > OR with no sharding- > > 14 > > {code} > Several people use this to get the group.field count (the # of groups). -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-9788) Use instrumented jetty classes
[ https://issues.apache.org/jira/browse/SOLR-9788?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15701876#comment-15701876 ] ASF subversion and git services commented on SOLR-9788: --- Commit 497212e05451c11088fb4f04d1c8e6092915dc40 in lucene-solr's branch refs/heads/feature/metrics from [~shalinmangar] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=497212e ] SOLR-9788: Use instrumented jetty classes provided by the dropwizard metric library. This also introduces a new /admin/metrics API endpoint to return all registered metrics in JSON format > Use instrumented jetty classes > -- > > Key: SOLR-9788 > URL: https://issues.apache.org/jira/browse/SOLR-9788 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Shalin Shekhar Mangar >Assignee: Shalin Shekhar Mangar > Fix For: master (7.0), 6.4 > > Attachments: SOLR_9788.patch, SOLR_9788.patch, SOLR_9788.patch > > > Dropwizard metrics library integrated in SOLR-8785 provides a set of > instrumented equivalents of Jetty classes. This allows us to collect > statistics on Jetty's connector, thread pool and handlers.
[jira] [Updated] (SOLR-9788) Use instrumented jetty classes
[ https://issues.apache.org/jira/browse/SOLR-9788?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shalin Shekhar Mangar updated SOLR-9788: Attachment: SOLR_9788.patch Changed the metric registry name to "/jetty" as per suggestion in SOLR-4735 I am going to create a branch called "origin/feature/metrics" so that we can keep SOLR-9788, SOLR-4735 and other metrics improvements in sync.
[JENKINS] Lucene-Solr-6.x-Solaris (64bit/jdk1.8.0) - Build # 520 - Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Solaris/520/ Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC 3 tests failed. FAILED: junit.framework.TestSuite.org.apache.solr.core.TestLazyCores Error Message: ObjectTracker found 5 object(s) that were not released!!! [SolrCore, MockDirectoryWrapper, MDCAwareThreadPoolExecutor, MockDirectoryWrapper, MockDirectoryWrapper] org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException at org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:43) at org.apache.solr.core.SolrCore.(SolrCore.java:937) at org.apache.solr.core.SolrCore.(SolrCore.java:775) at org.apache.solr.core.CoreContainer.create(CoreContainer.java:842) at org.apache.solr.core.CoreContainer.lambda$load$0(CoreContainer.java:498) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:229) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException at org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:43) at org.apache.solr.core.CachingDirectoryFactory.get(CachingDirectoryFactory.java:347) at org.apache.solr.core.SolrCore.getNewIndexDir(SolrCore.java:332) at org.apache.solr.core.SolrCore.initIndex(SolrCore.java:641) at org.apache.solr.core.SolrCore.(SolrCore.java:847) at org.apache.solr.core.SolrCore.(SolrCore.java:775) at org.apache.solr.core.CoreContainer.create(CoreContainer.java:842) at org.apache.solr.core.CoreContainer.lambda$load$0(CoreContainer.java:498) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:229) at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException at org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:43) at org.apache.solr.core.SolrCore.(SolrCore.java:798) at org.apache.solr.core.SolrCore.(SolrCore.java:775) at org.apache.solr.core.CoreContainer.create(CoreContainer.java:842) at org.apache.solr.core.CoreContainer.lambda$load$0(CoreContainer.java:498) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:229) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException at org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:43) at org.apache.solr.core.CachingDirectoryFactory.get(CachingDirectoryFactory.java:347) at org.apache.solr.update.SolrIndexWriter.create(SolrIndexWriter.java:66) at org.apache.solr.core.SolrCore.initIndex(SolrCore.java:673) at org.apache.solr.core.SolrCore.(SolrCore.java:847) at org.apache.solr.core.SolrCore.(SolrCore.java:775) at org.apache.solr.core.CoreContainer.create(CoreContainer.java:842) at org.apache.solr.core.CoreContainer.lambda$load$0(CoreContainer.java:498) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:229) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at 
java.lang.Thread.run(Thread.java:745) org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException at org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:43) at org.apache.solr.core.CachingDirectoryFactory.get(CachingDirectoryFactory.java:347) at org.apache.solr.core.SolrCore.initSnapshotMetaDataManager(SolrCore.java:434) at org.apache.solr.core.SolrCore.(SolrCore.java:841) at org.apache.solr.core.SolrCore.(SolrCore.java:775) at org.apache.solr.core.CoreContainer.create(CoreContainer.java:842) at org.apache.solr.core.CoreContainer.lambda$load$0(CoreContainer.java:498) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:229) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at
[jira] [Commented] (LUCENE-7563) BKD index should compress unused leading bytes
[ https://issues.apache.org/jira/browse/LUCENE-7563?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15701587#comment-15701587 ] Adrien Grand commented on LUCENE-7563: -- It seems we are always delta coding with the split value of the parent level, but for the multi-dimensional case, I think it would be better to delta-code with the last split value that was on the same dimension? Otherwise compression would be very poor if both dimensions store a very different range of values? Something else I was wondering is whether we can make bigger gains. For instance we use whole bytes to store the split dimension or the prefix length while they only need 3 and 4 bits? In the multi-dimensional case we could store both on a single byte. Maybe we can do even better, I haven't thought much about it. It doesn't need to be done in the same patch, but it would also be nice for SimpleText to not use the legacy format of the index. I'm not sure how to proceed however. > BKD index should compress unused leading bytes > -- > > Key: LUCENE-7563 > URL: https://issues.apache.org/jira/browse/LUCENE-7563 > Project: Lucene - Core > Issue Type: Improvement >Reporter: Michael McCandless > Fix For: master (7.0), 6.4 > > Attachments: LUCENE-7563.patch, LUCENE-7563.patch > > > Today the BKD (points) in-heap index always uses {{dimensionNumBytes}} per > dimension, but if e.g. you are indexing {{LongPoint}} yet only use the bottom > two bytes in a given segment, we shouldn't store all those leading 0s in the > index.
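A hypothetical sketch (names are mine, not Lucene's) of the two ideas in the comment: a common run of leading bytes between a value and a reference split value can be stored once as a prefix length, and a split dimension (at most 8 dims, so 3 bits) and a prefix length (up to 8 bytes for a long, so 4 bits) can share a single byte:

```java
// Not the actual BKD encoding -- an illustration of prefix sharing and bit packing.
public class BkdPacking {
    // Number of leading bytes `value` shares with `reference`;
    // only the remaining suffix would need to be written.
    static int sharedPrefixLength(byte[] reference, byte[] value) {
        int n = Math.min(reference.length, value.length);
        int i = 0;
        while (i < n && reference[i] == value[i]) i++;
        return i;
    }

    // Pack dim (0..7, 3 bits) into the high bits and prefixLen (0..15, 4 bits)
    // into the low bits of one byte instead of using a whole byte for each.
    static byte pack(int dim, int prefixLen) {
        return (byte) ((dim << 4) | prefixLen);
    }

    static int unpackDim(byte b) { return (b >> 4) & 0x07; }

    static int unpackPrefixLen(byte b) { return b & 0x0F; }
}
```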
[JENKINS] Lucene-Solr-master-Solaris (64bit/jdk1.8.0) - Build # 976 - Failure!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Solaris/976/ Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseSerialGC 4 tests failed. FAILED: junit.framework.TestSuite.org.apache.solr.core.TestLazyCores Error Message: ObjectTracker found 5 object(s) that were not released!!! [SolrCore, MockDirectoryWrapper, MockDirectoryWrapper, MockDirectoryWrapper, MDCAwareThreadPoolExecutor] org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException at org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:43) at org.apache.solr.core.SolrCore.(SolrCore.java:955) at org.apache.solr.core.SolrCore.(SolrCore.java:793) at org.apache.solr.core.CoreContainer.create(CoreContainer.java:868) at org.apache.solr.core.CoreContainer.getCore(CoreContainer.java:1139) at org.apache.solr.core.TestLazyCores.testCachingLimit(TestLazyCores.java:203) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:811) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:462) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367) at java.lang.Thread.run(Thread.java:745) org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException at org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:43) at org.apache.solr.core.CachingDirectoryFactory.get(CachingDirectoryFactory.java:347) at
[jira] [Commented] (SOLR-8793) Fix stale commit files' size computation in LukeRequestHandler
[ https://issues.apache.org/jira/browse/SOLR-8793?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15701538#comment-15701538 ] Tim Owen commented on SOLR-8793: We get this using Solr 6.3.0 because it's still logged at WARN level, which seems a bit alarmist to me. For indexes that are changing rapidly, it happens a lot. We're going to increase our logging threshold for that class to ERROR, because these messages are just filling up the logs and there's no action we can actually take to prevent it, because they're expected to happen sometimes. Personally I would make this message INFO level. > Fix stale commit files' size computation in LukeRequestHandler > -- > > Key: SOLR-8793 > URL: https://issues.apache.org/jira/browse/SOLR-8793 > Project: Solr > Issue Type: Bug > Components: Server >Affects Versions: 5.5 >Reporter: Shai Erera >Assignee: Shai Erera >Priority: Minor > Fix For: 5.5.1, 6.0 > > Attachments: SOLR-8793.patch > > > SOLR-8587 added segments file information and its size to core admin status > API. However in case of stale commits, calling that API may result in > {{FileNotFoundException}} or {{NoSuchFileException}}, if the segments file no > longer exists due to a new commit. We should fix that by returning a proper > value for the file's length in this case, maybe -1. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
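The per-class threshold change Tim describes might look like this in a Solr 6.x log4j.properties (a hedged sketch of the workaround, not part of the patch):

```properties
# server/resources/log4j.properties: silence the stale-commit WARN messages
# from LukeRequestHandler by raising that class to ERROR.
log4j.logger.org.apache.solr.handler.admin.LukeRequestHandler=ERROR
```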
[JENKINS] Lucene-Solr-Tests-master - Build # 1507 - Still Unstable
Build: https://builds.apache.org/job/Lucene-Solr-Tests-master/1507/ 4 tests failed. FAILED: org.apache.solr.uima.processor.UIMAUpdateRequestProcessorTest.testMultiplierProcessing Error Message: unknown UpdateRequestProcessorChain: 2312312321312SpellCheckComponent got improvement related to recent Lucene changes.Add support for specifying Spelling SuggestWord Comparator to Lucene spell checkers for SpellCheckComponent. Issue SOLR-2053 is already fixed, patch is attached if you need it, but it is also committed to trunk and 3_x branch. Last Lucene European Conference has been held in Prague. Stack Trace: org.apache.solr.common.SolrException: unknown UpdateRequestProcessorChain: 2312312321312SpellCheckComponent got improvement related to recent Lucene changes. Add support for specifying Spelling SuggestWord Comparator to Lucene spell checkers for SpellCheckComponent. Issue SOLR-2053 is already fixed, patch is attached if you need it, but it is also committed to trunk and 3_x branch. Last Lucene European Conference has been held in Prague. 
at __randomizedtesting.SeedInfo.seed([C1296BB03982C13C:FA28A11D3ED6EDC8]:0) at org.apache.solr.core.SolrCore.getUpdateProcessingChain(SolrCore.java:1214) at org.apache.solr.core.SolrCore.getUpdateProcessorChain(SolrCore.java:1222) at org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:52) at org.apache.solr.SolrTestCaseJ4.addDoc(SolrTestCaseJ4.java:1031) at org.apache.solr.uima.processor.UIMAUpdateRequestProcessorTest.testProcessing(UIMAUpdateRequestProcessorTest.java:83) at org.apache.solr.uima.processor.UIMAUpdateRequestProcessorTest.testMultiplierProcessing(UIMAUpdateRequestProcessorTest.java:177) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:811) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:462) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at
[jira] [Commented] (SOLR-8871) Classification Update Request Processor Improvements
[ https://issues.apache.org/jira/browse/SOLR-8871?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15701299#comment-15701299 ] Tommaso Teofili commented on SOLR-8871: --- thanks Alan and Alessandro, I've applied Alessandro's patch which seems to fix the mentioned issue. I've also removed the forbidden API call, as per [~steve_rowe]'s suggestion. > Classification Update Request Processor Improvements > > > Key: SOLR-8871 > URL: https://issues.apache.org/jira/browse/SOLR-8871 > Project: Solr > Issue Type: Improvement > Components: update >Affects Versions: 6.1 >Reporter: Alessandro Benedetti >Assignee: Tommaso Teofili > Labels: classification, classifier, update, update.chain > Attachments: SOLR_8871.patch, SOLR_8871_UIMA_processor_test_fix.patch > > > This task will group a set of modifications to the classification update > request processor ( and Lucene classification module), based on user's > feedback ( thanks [~teofili] and Александър Цветанов ) : > - include boosting support for inputFields in the solrconfig.xml for the > classification update request processor > e.g. > field1^2, field2^5 ... > - multi class assignment ( introduce a parameter, default 1, for the max > number of classes to assign) > - separate the classField in : > classTrainingField > classOutputField > Default when classOutputField is not defined, is classTrainingField . > - add support for the classification query, to use only a subset of the > entire index to classify. > - Improve Related Tests
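A hedged sketch of what a solrconfig.xml chain using this processor could look like; the parameter names follow the 6.1 ClassificationUpdateProcessorFactory, while the per-field boost syntax (title^2) and field/class names are illustrative, taken from the proposal in this issue rather than from a released config:

```xml
<!-- Illustrative update chain; "classification" name and field names are examples. -->
<updateRequestProcessorChain name="classification">
  <processor class="solr.ClassificationUpdateProcessorFactory">
    <!-- fields to classify on, with the boosts proposed in this issue -->
    <str name="inputFields">title^2,content^5</str>
    <!-- field holding the training class / receiving the assigned class -->
    <str name="classField">category</str>
    <str name="algorithm">knn</str>
  </processor>
  <processor class="solr.RunUpdateProcessorFactory"/>
</updateRequestProcessorChain>
```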
[jira] [Commented] (SOLR-8871) Classification Update Request Processor Improvements
[ https://issues.apache.org/jira/browse/SOLR-8871?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15701297#comment-15701297 ] ASF subversion and git services commented on SOLR-8871: --- Commit c36ec0b75e06295143601e76de9b71c20295fb7d in lucene-solr's branch refs/heads/master from [~teofili] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=c36ec0b ] SOLR-8871 - removed suppress for forbidden API, added locale to toUpperCase
[jira] [Commented] (SOLR-8871) Classification Update Request Processor Improvements
[ https://issues.apache.org/jira/browse/SOLR-8871?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15701296#comment-15701296 ]

ASF subversion and git services commented on SOLR-8871:
-------------------------------------------------------

Commit 641294a967b0cc030f5fccdaf07514cf8a2e2ed0 in lucene-solr's branch refs/heads/master from [~teofili]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=641294a ]

SOLR-8871 - adjusted UIMA processor test, patch from Alessandro Benedetti
Re: lucene-solr:master: SOLR-8871 - added suppress forbidden for toUpperCase usage
Thanks Steve for pointing it out, I've fixed it as per your suggestion.

Regards,
Tommaso

On Sun, Nov 27, 2016 at 05:29 Steve Rowe wrote:

> Hi Tommaso,
>
> Rather than suppressing the forbidden API failure for the no-arg version
> of String.toUpperCase(), I think you should be using the version that takes
> a Locale, e.g.:
>
>     algorithmString.toUpperCase(Locale.ROOT);
>
> --
> Steve
> www.lucidworks.com
>
> > On Nov 24, 2016, at 7:12 PM, tomm...@apache.org wrote:
> >
> > Repository: lucene-solr
> > Updated Branches:
> >   refs/heads/master 96489d238 -> a4573fe7f
> >
> > SOLR-8871 - added suppress forbidden for toUpperCase usage
> >
> > Project: http://git-wip-us.apache.org/repos/asf/lucene-solr/repo
> > Commit: http://git-wip-us.apache.org/repos/asf/lucene-solr/commit/a4573fe7
> > Tree: http://git-wip-us.apache.org/repos/asf/lucene-solr/tree/a4573fe7
> > Diff: http://git-wip-us.apache.org/repos/asf/lucene-solr/diff/a4573fe7
> >
> > Branch: refs/heads/master
> > Commit: a4573fe7f45ba4c84c46d8e7e72c7353164a2696
> > Parents: 96489d2
> > Author: Tommaso Teofili
> > Authored: Fri Nov 25 01:12:03 2016 +0100
> > Committer: Tommaso Teofili
> > Committed: Fri Nov 25 01:12:03 2016 +0100
> >
> > ----------------------------------------------------------------------
> >  .../update/processor/ClassificationUpdateProcessorFactory.java | 2 ++
> >  1 file changed, 2 insertions(+)
> > ----------------------------------------------------------------------
> >
> > http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/a4573fe7/solr/core/src/java/org/apache/solr/update/processor/ClassificationUpdateProcessorFactory.java
> > ----------------------------------------------------------------------
> > diff --git a/solr/core/src/java/org/apache/solr/update/processor/ClassificationUpdateProcessorFactory.java b/solr/core/src/java/org/apache/solr/update/processor/ClassificationUpdateProcessorFactory.java
> > index 19e0dfe..cbe571b 100644
> > --- a/solr/core/src/java/org/apache/solr/update/processor/ClassificationUpdateProcessorFactory.java
> > +++ b/solr/core/src/java/org/apache/solr/update/processor/ClassificationUpdateProcessorFactory.java
> > @@ -22,6 +22,7 @@ import org.apache.lucene.search.Query;
> >  import org.apache.solr.common.SolrException;
> >  import org.apache.solr.common.params.SolrParams;
> >  import org.apache.solr.common.util.NamedList;
> > +import org.apache.solr.common.util.SuppressForbidden;
> >  import org.apache.solr.request.SolrQueryRequest;
> >  import org.apache.solr.response.SolrQueryResponse;
> >  import org.apache.solr.schema.IndexSchema;
> > @@ -59,6 +60,7 @@ public class ClassificationUpdateProcessorFactory extends UpdateRequestProcessor
> >    private SolrParams params;
> >    private ClassificationUpdateProcessorParams classificationParams;
> >
> > +  @SuppressForbidden(reason = "Need toUpperCase to match algorithm enum value")
> >    @Override
> >    public void init(final NamedList args) {
> >      if (args != null) {
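[Editor's note] Steve's suggestion matters because the no-arg String.toUpperCase() uses the JVM's default locale, and under Turkish casing rules 'i' uppercases to dotted capital İ (U+0130), breaking comparisons against ASCII enum constants. A minimal standalone sketch of the pitfall (the class name and the algorithm value "bayesian" are illustrative, not from the Solr code):

```java
import java.util.Locale;

public class LocaleUpperDemo {

    // Uppercases an algorithm name for enum matching; the Locale choice matters.
    static String upper(String s, Locale locale) {
        return s.toUpperCase(locale);
    }

    public static void main(String[] args) {
        String algorithm = "bayesian"; // illustrative value containing 'i'

        // Under Turkish rules 'i' becomes dotted İ (U+0130): "BAYESİAN"
        System.out.println(upper(algorithm, Locale.forLanguageTag("tr")));

        // Locale.ROOT is locale-neutral and matches the ASCII constant: "BAYESIAN"
        System.out.println(upper(algorithm, Locale.ROOT));
    }
}
```

Running with a Turkish default locale (e.g. -Duser.language=tr) is exactly the environment in which the no-arg call silently fails, which is why the forbidden-apis checker bans it in the first place.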