[JENKINS] Lucene-Solr-repro - Build # 1757 - Unstable

2018-10-22 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-repro/1757/

[...truncated 28 lines...]
[repro] Jenkins log URL: 
https://builds.apache.org/job/Lucene-Solr-BadApples-Tests-master/189/consoleText

[repro] Revision: c9776d88f90df2ae77a9e37e36d87e069bfde6ed

[repro] Repro line:  ant test  -Dtestcase=TestSimExtremeIndexing 
-Dtests.seed=10541E1DE2A749DF -Dtests.multiplier=2 -Dtests.slow=true 
-Dtests.badapples=true -Dtests.locale=pt -Dtests.timezone=Asia/Aqtau 
-Dtests.asserts=true -Dtests.file.encoding=UTF-8

[repro] Repro line:  ant test  -Dtestcase=SystemLogListenerTest 
-Dtests.method=test -Dtests.seed=10541E1DE2A749DF -Dtests.multiplier=2 
-Dtests.slow=true -Dtests.badapples=true -Dtests.locale=nn-NO 
-Dtests.timezone=GB -Dtests.asserts=true -Dtests.file.encoding=UTF-8

[repro] git rev-parse --abbrev-ref HEAD
[repro] git rev-parse HEAD
[repro] Initial local git branch/revision: 
7512cd9425319fb620c1992053a5d4be7cd9229d
[repro] git fetch
[repro] git checkout c9776d88f90df2ae77a9e37e36d87e069bfde6ed

[...truncated 2 lines...]
[repro] git merge --ff-only

[...truncated 1 lines...]
[repro] ant clean

[...truncated 6 lines...]
[repro] Test suites by module:
[repro]    solr/core
[repro]   SystemLogListenerTest
[repro]   TestSimExtremeIndexing
[repro] ant compile-test

[...truncated 3423 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=10 
-Dtests.class="*.SystemLogListenerTest|*.TestSimExtremeIndexing" 
-Dtests.showOutput=onerror  -Dtests.seed=10541E1DE2A749DF -Dtests.multiplier=2 
-Dtests.slow=true -Dtests.badapples=true -Dtests.locale=nn-NO 
-Dtests.timezone=GB -Dtests.asserts=true -Dtests.file.encoding=UTF-8

[...truncated 2248 lines...]
[repro] Setting last failure code to 256

[repro] Failures:
[repro]   0/5 failed: 
org.apache.solr.cloud.autoscaling.sim.TestSimExtremeIndexing
[repro]   2/5 failed: org.apache.solr.cloud.autoscaling.SystemLogListenerTest
[repro] git checkout 7512cd9425319fb620c1992053a5d4be7cd9229d

[...truncated 2 lines...]
[repro] Exiting with code 256

[...truncated 6 lines...]

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[jira] [Commented] (LUCENE-8538) Add Simple WKT Shape Parser

2018-10-22 Thread Lucene/Solr QA (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8538?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16660088#comment-16660088
 ] 

Lucene/Solr QA commented on LUCENE-8538:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
33s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 27s{color} 
| {color:red} lucene_sandbox generated 26 new + 5 unchanged - 0 fixed = 31 
total (was 5) {color} |
| {color:green}+1{color} | {color:green} Release audit (RAT) {color} | 
{color:green}  0m 27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Check forbidden APIs {color} | 
{color:green}  0m 27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Validate source patterns {color} | 
{color:green}  0m 27s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
39s{color} | {color:green} sandbox in the patch passed. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}  6m 25s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | LUCENE-8538 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12945094/LUCENE-8538.patch |
| Optional Tests |  compile  javac  unit  ratsources  checkforbiddenapis  
validatesourcepatterns  |
| uname | Linux lucene1-us-west 4.4.0-137-generic #163~14.04.1-Ubuntu SMP Mon 
Sep 24 17:14:57 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | ant |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-LUCENE-Build/sourcedir/dev-tools/test-patch/lucene-solr-yetus-personality.sh
 |
| git revision | master / 7512cd9 |
| ant | version: Apache Ant(TM) version 1.9.3 compiled on July 24 2018 |
| Default Java | 1.8.0_172 |
| javac | 
https://builds.apache.org/job/PreCommit-LUCENE-Build/109/artifact/out/diff-compile-javac-lucene_sandbox.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-LUCENE-Build/109/testReport/ |
| modules | C: lucene lucene/sandbox U: lucene |
| Console output | 
https://builds.apache.org/job/PreCommit-LUCENE-Build/109/console |
| Powered by | Apache Yetus 0.7.0   http://yetus.apache.org |


This message was automatically generated.



> Add Simple WKT Shape Parser
> ---
>
> Key: LUCENE-8538
> URL: https://issues.apache.org/jira/browse/LUCENE-8538
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Nicholas Knize
>Priority: Major
> Attachments: LUCENE-8538.patch
>
>
> Similar to {{SimpleGeoJSONPolygonParser}} for creating {{Polygon}} objects 
> from GeoJSON, it would be helpful to have a {{SimpleWKTParser}} for creating 
> lucene geometries from WKT. Not only is this useful for simple tests, but it 
> also helps with benchmarking against real-world data (e.g., PlanetOSM).
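The kind of convenience the issue asks for can be illustrated with a deliberately minimal sketch. Everything here (the class and method names, the restriction to {{POINT}}) is an assumption for illustration only, not the API of the attached patch:

```java
// Hypothetical sketch: parse only WKT "POINT (x y)" strings, to show the
// shape of the convenience LUCENE-8538 asks for. Names are illustrative.
import java.util.Locale;

public class WktPointSketch {
    /** Parses "POINT (x y)" into a double[]{x, y}; rejects anything else. */
    public static double[] parsePoint(String wkt) {
        String s = wkt.trim().toUpperCase(Locale.ROOT);
        if (!s.startsWith("POINT")) {
            throw new IllegalArgumentException("not a POINT: " + wkt);
        }
        int open = s.indexOf('('), close = s.indexOf(')');
        if (open < 0 || close < open) {
            throw new IllegalArgumentException("malformed WKT: " + wkt);
        }
        // Coordinates are whitespace-separated inside the parentheses.
        String[] coords = s.substring(open + 1, close).trim().split("\\s+");
        return new double[] {
            Double.parseDouble(coords[0]), Double.parseDouble(coords[1])
        };
    }

    public static void main(String[] args) {
        double[] p = parsePoint("POINT (10.5 -3.25)");
        System.out.println(p[0] + "," + p[1]); // prints 10.5,-3.25
    }
}
```

A real parser would also cover LINESTRING, POLYGON, nesting, and coordinate validation, which is the scope of the attached LUCENE-8538.patch.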



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)




[JENKINS] Lucene-Solr-master-Windows (32bit/jdk1.8.0_172) - Build # 7579 - Unstable!

2018-10-22 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/7579/
Java: 32bit/jdk1.8.0_172 -client -XX:+UseG1GC

3 tests failed.
FAILED:  
org.apache.solr.cloud.DeleteReplicaTest.raceConditionOnDeleteAndRegisterReplica

Error Message:
Expected new active leader null Live Nodes: [127.0.0.1:57144_solr, 
127.0.0.1:57147_solr, 127.0.0.1:57154_solr] Last available state: 
DocCollection(raceDeleteReplica_false//collections/raceDeleteReplica_false/state.json/14)={
   "pullReplicas":"0",   "replicationFactor":"2",   "shards":{"shard1":{   
"range":"8000-7fff",   "state":"active",   "replicas":{ 
"core_node3":{   "core":"raceDeleteReplica_false_shard1_replica_n1",
   "base_url":"http://127.0.0.1:57157/solr;,   
"node_name":"127.0.0.1:57157_solr",   "state":"down",   
"type":"NRT",   "force_set_state":"false",   "leader":"true"},  
   "core_node6":{   
"core":"raceDeleteReplica_false_shard1_replica_n5",   
"base_url":"http://127.0.0.1:57157/solr;,   
"node_name":"127.0.0.1:57157_solr",   "state":"down",   
"type":"NRT",   "force_set_state":"false",   
"router":{"name":"compositeId"},   "maxShardsPerNode":"1",   
"autoAddReplicas":"false",   "nrtReplicas":"2",   "tlogReplicas":"0"}

Stack Trace:
java.lang.AssertionError: Expected new active leader
null
Live Nodes: [127.0.0.1:57144_solr, 127.0.0.1:57147_solr, 127.0.0.1:57154_solr]
Last available state: 
DocCollection(raceDeleteReplica_false//collections/raceDeleteReplica_false/state.json/14)={
  "pullReplicas":"0",
  "replicationFactor":"2",
  "shards":{"shard1":{
  "range":"8000-7fff",
  "state":"active",
  "replicas":{
"core_node3":{
  "core":"raceDeleteReplica_false_shard1_replica_n1",
  "base_url":"http://127.0.0.1:57157/solr;,
  "node_name":"127.0.0.1:57157_solr",
  "state":"down",
  "type":"NRT",
  "force_set_state":"false",
  "leader":"true"},
"core_node6":{
  "core":"raceDeleteReplica_false_shard1_replica_n5",
  "base_url":"http://127.0.0.1:57157/solr;,
  "node_name":"127.0.0.1:57157_solr",
  "state":"down",
  "type":"NRT",
  "force_set_state":"false",
  "router":{"name":"compositeId"},
  "maxShardsPerNode":"1",
  "autoAddReplicas":"false",
  "nrtReplicas":"2",
  "tlogReplicas":"0"}
at 
__randomizedtesting.SeedInfo.seed([653CE8C6004CF848:F2A891668BEB282]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.cloud.SolrCloudTestCase.waitForState(SolrCloudTestCase.java:280)
at 
org.apache.solr.cloud.DeleteReplicaTest.raceConditionOnDeleteAndRegisterReplica(DeleteReplicaTest.java:328)
at 
org.apache.solr.cloud.DeleteReplicaTest.raceConditionOnDeleteAndRegisterReplica(DeleteReplicaTest.java:224)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830)
at 

[JENKINS] Lucene-Solr-7.x-Linux (64bit/jdk-9.0.4) - Build # 2962 - Still Unstable!

2018-10-22 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/2962/
Java: 64bit/jdk-9.0.4 -XX:-UseCompressedOops -XX:+UseSerialGC

5 tests failed.
FAILED:  org.apache.lucene.index.TestIndexWriter.testSoftUpdateDocuments

Error Message:
expected:<0> but was:<2>

Stack Trace:
java.lang.AssertionError: expected:<0> but was:<2>
at 
__randomizedtesting.SeedInfo.seed([C7671FC216FCFD1:C9BF670EF90F9566]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at org.junit.Assert.assertEquals(Assert.java:456)
at 
org.apache.lucene.index.TestIndexWriter.testSoftUpdateDocuments(TestIndexWriter.java:3171)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:844)


FAILED:  org.apache.solr.cloud.TestTlogReplica.testRecovery

Error Message:
Can not find doc 7 in https://127.0.0.1:35819/solr

Stack Trace:
java.lang.AssertionError: Can not find doc 7 in https://127.0.0.1:35819/solr
at 
__randomizedtesting.SeedInfo.seed([503E3D04A6D02A5:C4F39A7C673DC802]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertNotNull(Assert.java:526)
at 

[jira] [Commented] (SOLR-12754) Solr UnifiedHighlighter support flag WEIGHT_MATCHES

2018-10-22 Thread David Smiley (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12754?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16660072#comment-16660072
 ] 

David Smiley commented on SOLR-12754:
-

This patch adds a new boolean {{hl.weightMatches}} parameter, defaulting to 
false.  I also improved the ref guide a bit on the UH, covering not just this 
parameter but also other language pertaining to this highlighter and the 
original highlighter.
I'd like to make this setting default to true in 8.0; perhaps in a new patch on 
this issue.

> Solr UnifiedHighlighter support flag WEIGHT_MATCHES
> ---
>
> Key: SOLR-12754
> URL: https://issues.apache.org/jira/browse/SOLR-12754
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: highlighter
>Reporter: David Smiley
>Priority: Major
> Attachments: SOLR-12754.patch
>
>
> Solr should support the WEIGHT_MATCHES flag of the UnifiedHighlighter.  It 
> supports best/perfect highlighting accuracy and nicer phrase snippets.






[jira] [Updated] (SOLR-12754) Solr UnifiedHighlighter support flag WEIGHT_MATCHES

2018-10-22 Thread David Smiley (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12754?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley updated SOLR-12754:

Attachment: SOLR-12754.patch

> Solr UnifiedHighlighter support flag WEIGHT_MATCHES
> ---
>
> Key: SOLR-12754
> URL: https://issues.apache.org/jira/browse/SOLR-12754
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: highlighter
>Reporter: David Smiley
>Priority: Major
> Attachments: SOLR-12754.patch
>
>
> Solr should support the WEIGHT_MATCHES flag of the UnifiedHighlighter.  It 
> supports best/perfect highlighting accuracy and nicer phrase snippets.






[jira] [Created] (SOLR-12895) SurroundQParserPlugin support for UnifiedHighlighter

2018-10-22 Thread David Smiley (JIRA)
David Smiley created SOLR-12895:
---

 Summary: SurroundQParserPlugin support for UnifiedHighlighter
 Key: SOLR-12895
 URL: https://issues.apache.org/jira/browse/SOLR-12895
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: David Smiley
Assignee: David Smiley


The "surround" QParser doesn't work with the UnififedHighlighter -- 
LUCENE-8492.  However I think we can overcome this by having Solr's QParser 
extend getHighlightQuery and rewrite itself.






[jira] [Commented] (SOLR-11812) Remove backward compatibility of old LIR implementation in 8.0

2018-10-22 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-11812?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16659989#comment-16659989
 ] 

ASF subversion and git services commented on SOLR-11812:


Commit 7512cd9425319fb620c1992053a5d4be7cd9229d in lucene-solr's branch 
refs/heads/master from [~caomanhdat]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=7512cd9 ]

SOLR-11812: Remove LIROnShardRestartTest since the transition from old lir to 
new lir is no longer supported


> Remove backward compatibility of old LIR implementation in 8.0
> --
>
> Key: SOLR-11812
> URL: https://issues.apache.org/jira/browse/SOLR-11812
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Cao Manh Dat
>Assignee: Cao Manh Dat
>Priority: Blocker
> Fix For: master (8.0)
>
> Attachments: SOLR-11812.patch
>
>
> My plan is to commit SOLR-11702 in the next 7.x release. We have to support both 
> the old and the new design so users can do rolling updates. 
> This makes the code base very complex. In 8.0 we do not have to support rolling 
> updates, so this issue is created to remind us to remove all of the old LIR 
> implementation in 8.0.






[JENKINS] Lucene-Solr-Tests-7.x - Build # 968 - Still Unstable

2018-10-22 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-7.x/968/

2 tests failed.
FAILED:  org.apache.solr.cloud.MoveReplicaTest.test

Error Message:


Stack Trace:
java.lang.AssertionError
at 
__randomizedtesting.SeedInfo.seed([A336EF25B0857371:2B62D0FF1E791E89]:0)
at org.junit.Assert.fail(Assert.java:92)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertFalse(Assert.java:68)
at org.junit.Assert.assertFalse(Assert.java:79)
at org.apache.solr.cloud.MoveReplicaTest.test(MoveReplicaTest.java:146)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)


FAILED:  
org.apache.solr.cloud.api.collections.ShardSplitTest.testSplitShardWithRule

Error Message:
Error from server at http://127.0.0.1:38322/xaid/hf: Could not find collection 
: shardSplitWithRule_rewrite

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at http://127.0.0.1:38322/xaid/hf: 

[GitHub] lucene-solr pull request #464: WIP SOLR-12555: refactor tests in package org...

2018-10-22 Thread gerlowskija
Github user gerlowskija commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/464#discussion_r227202749
  
--- Diff: solr/core/src/test/org/apache/solr/search/TestRealTimeGet.java ---
@@ -414,56 +391,44 @@ public void testOptimisticLocking() throws Exception {
 version2 = addAndGetVersion(sdoc("id","1", "_version_", 
Long.toString(version)), null);
 assertTrue(version2 > version);
 
-try {
-  // overwriting the previous version should now fail
-  version2 = addAndGetVersion(sdoc("id","1"), params("_version_", 
Long.toString(version)));
-  fail();
-} catch (SolrException se) {
-  assertEquals(409, se.code());
-}
+// overwriting the previous version should now fail
+se = expectThrows(SolrException.class, "overwriting previous version 
should fail",
+() -> addAndGetVersion(sdoc("id","1"), params("_version_", 
Long.toString(version))));
+assertEquals(409, se.code());
 
-try {
-  // deleting the previous version should now fail
-  version2 = deleteAndGetVersion("1", params("_version_", 
Long.toString(version)));
-  fail();
-} catch (SolrException se) {
-  assertEquals(409, se.code());
-}
+// deleting the previous version should now fail
+se = expectThrows(SolrException.class, "deleting the previous version 
should now fail",
+() -> deleteAndGetVersion("1", params("_version_", 
Long.toString(version))));
+assertEquals(409, se.code());
 
-version = version2;
+final long prevVersion = version2;
--- End diff --

Oof, yeah good point, that would be a problem.  No need to revert. I'll 
just need to pay particular attention to this class when testing things out.
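For readers unfamiliar with the idiom this diff migrates to: the try/fail/catch pattern collapses into a single call that both asserts the throw and returns the typed exception for further assertions. The standalone helper below mimics LuceneTestCase.expectThrows so the sketch compiles without the test framework; the names are illustrative, not the framework's exact code:

```java
// Minimal stand-in for the expectThrows helper used in the diff above.
public class ExpectThrowsSketch {
    public interface ThrowingRunnable { void run() throws Throwable; }

    /** Runs r, asserting it throws an instance of expected, and returns it. */
    public static <T extends Throwable> T expectThrows(Class<T> expected,
                                                       ThrowingRunnable r) {
        try {
            r.run();
        } catch (Throwable t) {
            if (expected.isInstance(t)) {
                return expected.cast(t);  // caller can assert on the exception
            }
            throw new AssertionError("unexpected exception type: " + t, t);
        }
        throw new AssertionError("expected " + expected.getSimpleName()
            + " was not thrown");
    }

    public static void main(String[] args) {
        IllegalArgumentException e = expectThrows(IllegalArgumentException.class,
            () -> { throw new IllegalArgumentException("409"); });
        System.out.println(e.getMessage()); // prints 409
    }
}
```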


---




[jira] [Commented] (SOLR-11522) Suggestions/recommendations to rebalance replicas

2018-10-22 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-11522?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16659917#comment-16659917
 ] 

ASF subversion and git services commented on SOLR-11522:


Commit 576d28f643a89de832b59a783ce729402d70fb9f in lucene-solr's branch 
refs/heads/master from noble
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=576d28f ]

SOLR-11522: Moved the _get methods to a separate interafce and keep MapWriter 
clean


> Suggestions/recommendations to rebalance replicas
> -
>
> Key: SOLR-11522
> URL: https://issues.apache.org/jira/browse/SOLR-11522
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling
>Reporter: Noble Paul
>Priority: Major
>
> It is possible for a cluster to be unbalanced even if it is not breaking any of 
> the policy rules: some nodes may have very little load while others are 
> heavily loaded. In that case replicas can be moved around so that the 
> load is more evenly distributed. This is going to be driven by preferences. 
> The way we arrive at these suggestions is as follows:
>  # Sort the nodes according to the given preferences
>  # Choose a replica from the most loaded node ({{source-node}})
>  # Try adding it to the least loaded node ({{target-node}})
>  # See if that breaks any policy rules. If yes, try another {{target-node}} 
> (go to #3)
>  # If no policy rules are broken, present this as a {{suggestion}}. 
> The suggestion contains the following information:
>  #* The {{source-node}} and {{target-node}} names
>  #* The actual v2 command that can be run to effect the operation
>  # Go to step #1
>  # Stop once no replica can be moved without making the {{target-node}} 
> more loaded than the {{source-node}}
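The greedy loop in the issue description can be sketched as follows. The node names, the use of a plain replica count as the "load", and the elided policy-rule check are all assumptions for illustration, not Solr's actual autoscaling code:

```java
// Illustrative sketch of the greedy rebalance-suggestion loop; "load" here
// is just a replica count, and the policy check from step 4 is elided.
import java.util.ArrayList;
import java.util.Comparator;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class RebalanceSketch {
    /** Suggest "source->target" moves until a further move would leave the
     *  target-node at least as loaded as the source-node. */
    public static List<String> suggest(Map<String, Integer> replicasPerNode) {
        Map<String, Integer> load = new HashMap<>(replicasPerNode);
        List<String> suggestions = new ArrayList<>();
        while (true) {
            // Step 1: sort nodes by load (stand-in for "preferences").
            List<String> nodes = new ArrayList<>(load.keySet());
            nodes.sort(Comparator.comparingInt(load::get));
            String target = nodes.get(0);                 // least loaded
            String source = nodes.get(nodes.size() - 1);  // most loaded
            // Stop condition: the move would not reduce the imbalance.
            if (load.get(target) + 1 >= load.get(source)) {
                break;
            }
            // Steps 2-5: move one replica and record the suggestion.
            load.merge(source, -1, Integer::sum);
            load.merge(target, 1, Integer::sum);
            suggestions.add(source + "->" + target);
        }
        return suggestions;
    }

    public static void main(String[] args) {
        // Two moves from n1 to n2 even things out: 5/1/3 -> 3/3/3.
        System.out.println(suggest(Map.of("n1", 5, "n2", 1, "n3", 3)));
    }
}
```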






[JENKINS] Lucene-Solr-7.x-Linux (64bit/jdk-11) - Build # 2961 - Unstable!

2018-10-22 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/2961/
Java: 64bit/jdk-11 -XX:+UseCompressedOops -XX:+UseG1GC

28 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.TestCloudRecovery

Error Message:
Could not find collection:collection1

Stack Trace:
java.lang.AssertionError: Could not find collection:collection1
at __randomizedtesting.SeedInfo.seed([1AC49BF800F18470]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertNotNull(Assert.java:526)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:155)
at 
org.apache.solr.cloud.TestCloudRecovery.setupCluster(TestCloudRecovery.java:70)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:875)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:834)


FAILED:  
org.apache.solr.client.solrj.impl.CloudSolrClientTest.testVersionsAreReturned

Error Message:
Error from server at http://127.0.0.1:37487/solr/collection1_shard2_replica_n3: 
Expected mime type application/octet-stream but got text/html.

Error 404 Can not find: /solr/collection1_shard2_replica_n3/update

HTTP ERROR 404
Problem accessing /solr/collection1_shard2_replica_n3/update.
Reason: Can not find: /solr/collection1_shard2_replica_n3/update
Powered by Jetty 9.4.11.v20180605
  

Stack Trace:
org.apache.solr.client.solrj.impl.CloudSolrClient$RouteException: Error from 
server at http://127.0.0.1:37487/solr/collection1_shard2_replica_n3: Expected 
mime type application/octet-stream but got text/html. 


Error 404 Can not find: /solr/collection1_shard2_replica_n3/update

HTTP ERROR 404
Problem accessing /solr/collection1_shard2_replica_n3/update.
Reason: Can not find: /solr/collection1_shard2_replica_n3/update
Powered by Jetty 9.4.11.v20180605




at 
__randomizedtesting.SeedInfo.seed([FB146D3BC8D04F5D:3D2940E5BC9D795]:0)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.directUpdate(CloudSolrClient.java:551)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1016)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:884)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:949)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:949)
at 

[jira] [Commented] (SOLR-11522) Suggestions/recommendations to rebalance replicas

2018-10-22 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-11522?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16659911#comment-16659911
 ] 

ASF subversion and git services commented on SOLR-11522:


Commit e28cd0cad15f378ebfcdc85c7ff40009fb21cd2d in lucene-solr's branch 
refs/heads/branch_7x from noble
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=e28cd0c ]

SOLR-11522: Moved the _get methods to a separate interface and keep MapWriter 
clean


> Suggestions/recommendations to rebalance replicas
> -
>
> Key: SOLR-11522
> URL: https://issues.apache.org/jira/browse/SOLR-11522
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling
>Reporter: Noble Paul
>Priority: Major
>
> It is possible for a cluster to be unbalanced even when it is not breaking 
> any of the policy rules: some nodes may have very little load while others 
> are heavily loaded, so replicas can be moved around to distribute the load 
> more evenly. This is going to be driven by preferences. The way we arrive at 
> these suggestions is as follows:
>  # Sort the nodes according to the given preferences
>  # Choose a replica from the most loaded node ({{source-node}})
>  # Try adding it to the least loaded node ({{target-node}})
>  # See if it breaks any policy rules. If yes, try another {{target-node}} 
> (go to #3)
>  # If no policy rules are broken, present this as a {{suggestion}}. 
> The suggestion contains the following information:
>  #* The {{source-node}} and {{target-node}} names
>  #* The actual v2 command that can be run to effect the operation
>  # Go to step #1
>  # Repeat until no replica can be moved without leaving the {{target-node}} 
> more loaded than the {{source-node}}
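Read purely as an illustration, the loop described above might be sketched as 
follows. The node names, the use of integer replica counts as the "load", and 
the stopping condition are stand-ins; the real preference sort and policy 
checks from the description are omitted.

```java
import java.util.*;

// Rough sketch of the suggestion loop described above. Replica counts stand in
// for the real preference-based load ordering, and policy checks are omitted.
public class RebalanceSketch {
    record Suggestion(String source, String target) {}

    static List<Suggestion> suggest(Map<String, Integer> loads) {
        Map<String, Integer> l = new HashMap<>(loads);
        List<Suggestion> out = new ArrayList<>();
        while (true) {
            // 1. sort the nodes by load (the "preferences")
            List<String> sorted = new ArrayList<>(l.keySet());
            sorted.sort((a, b) -> l.get(a) - l.get(b));
            String target = sorted.get(0);                  // least loaded
            String source = sorted.get(sorted.size() - 1);  // most loaded
            // stop once a move would leave the target more loaded than the source
            if (l.get(target) + 1 > l.get(source) - 1) break;
            // 2-5. move one replica and record the suggestion (policy checks omitted)
            l.merge(source, -1, Integer::sum);
            l.merge(target, 1, Integer::sum);
            out.add(new Suggestion(source, target));
        }
        return out;
    }

    public static void main(String[] args) {
        // a 5-replica node and a 1-replica node even out after two moves
        System.out.println(suggest(Map.of("n1", 5, "n2", 1)));
    }
}
```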



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12746) Ref Guide HTML output should adhere to more standard HTML5

2018-10-22 Thread Cassandra Targett (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12746?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16659870#comment-16659870
 ] 

Cassandra Targett commented on SOLR-12746:
--

I've now updated the {{jira/solr-12746}} branch to master as of last night, and 
added a couple more CSS fixes, added a license reference to NOTICE.txt [1], and 
updated the README and {{dev-tools/scripts/jenkins.build.sh}} scripts for the 
proper Slim version as mentioned in the earlier comment [2].

[~arafalov], I think you were interested in this issue last week?

I think this is ready to go. I'll check it out a bit more before committing - 
thoughts/reviews are welcome.

[1] - I'm not sure if I really needed to include the license for 3 reasons: 1) 
we aren't distributing the templates at all, just the output of the templates; 
2) I borrowed only the templates while the project they are from includes much 
more; and 3) I also modified the templates to make integration easier, so they 
aren't the same as the originals. Out of an abundance of caution and respect 
for the original author, I included a mention in NOTICE.txt anyway.

[2] - The need to define the Slim version is temporary. After I mentioned to 
the Asciidoctor project that I had the problem and that downgrading Slim fixed 
it, they were able to identify the Slim API changes in Slim's v4.0 release that 
caused the problem. Asciidoctor's future 1.5.8 release (which we'll consume in 
some way, eventually) will include the fix. This is the issue that has the fix: 
https://github.com/asciidoctor/asciidoctor/issues/2928. The error is harmless, 
just alarming, so if anyone is using Slim 4.x and sees the error, they can 
continue without any problems. Downgrading just allows us to avoid having to 
see it 30+ times for every HTML build.

> Ref Guide HTML output should adhere to more standard HTML5
> --
>
> Key: SOLR-12746
> URL: https://issues.apache.org/jira/browse/SOLR-12746
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Reporter: Cassandra Targett
>Assignee: Cassandra Targett
>Priority: Major
>
> The default HTML produced by Jekyll/Asciidoctor adds a lot of extra {{}} 
> tags to the content which break up our content into very small chunks. This 
> is acceptable to a casual website reader as far as it goes, but any Reader 
> view in a browser or another type of content extraction system that uses a 
> similar "readability" scoring algorithm is going to either miss a lot of 
> content or fail to display the page entirely.
> To see what I mean, take a page like 
> https://lucene.apache.org/solr/guide/7_4/language-analysis.html and enable 
> Reader View in your browser (I used Firefox; Steve Rowe told me offline 
> Safari would not even offer the option on the page for him). You will notice 
> a lot of missing content. It's almost like someone selected sentences at 
> random.
> Asciidoctor has a long-standing issue to provide a better, more 
> semantic-oriented HTML5 output, but it has not been resolved yet: 
> https://github.com/asciidoctor/asciidoctor/issues/242
> Asciidoctor does provide a way to override the default output templates by 
> providing your own in Slim, HAML, ERB or any other template language 
> supported by Tilt (none of which I know yet). There are some samples 
> available via the Asciidoctor project which we can borrow, but it's otherwise 
> unknown as yet what parts of the output are causing the worst of the 
> problems. This issue is to explore how to improve this part of the HTML 
> reading experience.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-Tests-master - Build # 2893 - Unstable

2018-10-22 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-master/2893/

3 tests failed.
FAILED:  org.apache.solr.cloud.TestCloudPivotFacet.test

Error Message:
Error from server at 
http://127.0.0.1:33678/pj_hvr/n/collection1_shard1_replica_n43: ERROR: [doc=1] 
multiple values encountered for non multiValued field pivot_i: [42, 441972300, 
21]

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at http://127.0.0.1:33678/pj_hvr/n/collection1_shard1_replica_n43: 
ERROR: [doc=1] multiple values encountered for non multiValued field pivot_i: 
[42, 441972300, 21]
at 
__randomizedtesting.SeedInfo.seed([19463963CD9DAC57:911206B96361C1AF]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:643)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:483)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:413)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.directUpdate(CloudSolrClient.java:561)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1016)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:884)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:817)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:177)
at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:138)
at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:156)
at 
org.apache.solr.cloud.TestCloudPivotFacet.test(TestCloudPivotFacet.java:134)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:1010)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:985)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Commented] (SOLR-12839) add a 'resort' option to JSON faceting

2018-10-22 Thread Hoss Man (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12839?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16659854#comment-16659854
 ] 

Hoss Man commented on SOLR-12839:
-

{quote}"foo desc, bar asc 50" was an example of a single sort with tiebreak and 
a limit (no resort). If one wanted a single string version ";" would be the 
divider. For example adding a resort with a tiebreak: "foo desc, bar asc 50; 
baz desc, qux asc 10"
{quote}
Ok ... i realize now that you were discussing 2 diff ideas and giving 2 diff 
examples and i was conflating them – but i'm still not certain what you're 
saying the *behavior* of these examples would be, particularly because 
(independent of the idea of resorting *AND* independent of the idea of 
supporting tiebreakers on sort/resort syntax) you *ALSO* seem to be suggesting 
a numeric "limit" that would be inlined as part of the sort/resort syntax – and 
this confuses me in 2 orthogonal ways:
 * are you suggesting this would be an alternative for the existing {{limit}} 
param on these facets?
 ** if so, what would the behavior be if someone tried to do both: use the 
"inline limit" and use a "limit" param?
 ** if not, then what do you mean by "limit" in the above sentence?
 * assuming you did mean as a replacement/override of the existing {{limit}} 
param, i don't understand your example, or what the value add is of asking solr 
to resort the "top 10" by criteria "baz desc, qux asc" if we're already 
returning the "top 50"

{quote}If there are use cases for starting with N sorted things and reducing 
that to K with a different sort, then it's just sort of recursive. Why would 
there be use cases for one resort and not two resorts?
 ...
 One use case that comes to mind are stock screens I've seen that consist of 
multiple sorting and "take top N" steps.

Example: Sort by current dividend yield and take the top 100, then sort those 
by low PE and take the top 50, then sort those by total return 1 year and take 
the top 10.
{quote}
...again: if this is a situation where solr is returning the top 100 buckets, 
what's the value add in having solr resort the top 50 (and then the top 10 
again) instead of just letting the client manipulate & re-order those same 
buckets?

I feel like maybe there is a disconnect in the _principle_ of the ideas we are 
discussing?

As I mentioned when i created this issue, the overall goal i'm trying to 
address is to mirror the concept of the "reranking query" at a facet bucket 
level ... for addressing the performance cost of sorting by something 
complex/expensive.
 * Today you can ask solr:
 ** "Compute {{expensive_function()}} for every bucket that exists, and sort 
all the buckets by that function – then return the top {{$limit}} buckets"
 * I want to be able to tell solr:
 ** "Compute {{cheaper_approximation_of_expensive_function()}} for every bucket 
that exists, sort all the buckets by that function, and compute 
{{expensive_function()}} only for the top candidate buckets – then (once 
refinement/merging is complete) resort just the fully populated buckets by 
{{expensive_function()}}"

...note in particular that I'm not even suggesting any sort of new 
{{resort_limit}} option or any hard and fast guarantees on the number of 
buckets that are "resorted" – just a way to tell solr "during the first pass, 
you can use this cheap function instead of the final expensive function i 
really care about" ... in essence just a "performance hint" or "save some CPU 
cycles" type feature
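That two-pass idea can be sketched generically. Nothing below is real JSON 
Facet machinery; it is a hypothetical illustration of ranking every bucket by a 
cheap score and applying the expensive score only to the top candidates.

```java
import java.util.*;
import java.util.function.ToDoubleFunction;

// Hypothetical illustration of the "performance hint" idea above: rank every
// bucket with a cheap approximation, then apply the expensive function only
// to the top candidates. This is not the real JSON Facet implementation.
public class ResortSketch {
    static <B> List<B> resort(List<B> buckets,
                              ToDoubleFunction<B> cheap,
                              ToDoubleFunction<B> expensive,
                              int limit) {
        // first pass: cheap function over every bucket
        List<B> first = new ArrayList<>(buckets);
        first.sort(Comparator.comparingDouble(cheap).reversed());
        // second pass: expensive function over only the top 'limit' candidates
        List<B> top = new ArrayList<>(first.subList(0, Math.min(limit, first.size())));
        top.sort(Comparator.comparingDouble(expensive).reversed());
        return top;
    }

    public static void main(String[] args) {
        // the cheap score picks {9, 6, 5}; the expensive score then reorders them
        System.out.println(resort(List.of(3, 1, 4, 1, 5, 9, 2, 6),
                x -> x, x -> 10 - x, 3));
    }
}
```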

What you're describing, on the other hand, seems to be more akin to an "i want 
specific operations to be performed on my buckets" type feature ... the 
examples you're describing sound almost like a subset of a more robust 
scripting type functionality, or at the very least a multi-stage "post 
processing" that might include filtering or collapsing of buckets?

...Lemme come back to this conceptual disconnect in a minute...

{quote}Anyway we don't have to worry about multiple resorts now as long as we 
can unambiguously upgrade if desired later (i.e. whatever the resort spec looks 
like, if we can unambiguously wrap an array around it later and specify 
multiple of them, then we're good)
{quote}
right ... but if you're trying to future-proof the API, there's also the 
question of what "tiebreakers" look like when using the (existing) JSON object 
syntax for sorting instead of just the shorthand string syntax.

ie, if you completely ignore the concept of "resorting", today we support 
this...
{noformat}
json.facet={
  categories : {
type : terms,
field : cat,
limit : 5,
facet : { x : "sum(div(popularity,price))" },
// can use short hand of "x desc"
sort : { x : desc }, 
  }
}
{noformat}
...and if you assume you want to have multiple tiebreaker sorts then that would 
be something like...
{noformat}
json.facet={
  categories : {
type : terms,
field : cat,
limit : 5,
facet : { x : 

[JENKINS] Lucene-Solr-master-Linux (64bit/jdk-9.0.4) - Build # 23076 - Still Unstable!

2018-10-22 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/23076/
Java: 64bit/jdk-9.0.4 -XX:+UseCompressedOops -XX:+UseParallelGC

1 tests failed.
FAILED:  
org.apache.solr.cloud.autoscaling.sim.TestSimPolicyCloud.testCreateCollectionAddReplica

Error Message:
Timeout waiting for collection to become active Live Nodes: 
[127.0.0.1:10004_solr, 127.0.0.1:10006_solr, 127.0.0.1:10008_solr, 
127.0.0.1:10005_solr, 127.0.0.1:10007_solr] Last available state: 
DocCollection(testCreateCollectionAddReplica//clusterstate.json/5)={   
"replicationFactor":"1",   "pullReplicas":"0",   
"router":{"name":"compositeId"},   "maxShardsPerNode":"1",   
"autoAddReplicas":"false",   "nrtReplicas":"1",   "tlogReplicas":"0",   
"autoCreated":"true",   "policy":"c1",   "shards":{"shard1":{   
"replicas":{"core_node1":{   
"core":"testCreateCollectionAddReplica_shard1_replica_n1",   
"SEARCHER.searcher.maxDoc":0,   "SEARCHER.searcher.deletedDocs":0,  
 "INDEX.sizeInBytes":10240,   "node_name":"127.0.0.1:10008_solr",   
"state":"active",   "type":"NRT",   
"INDEX.sizeInGB":9.5367431640625E-6,   "SEARCHER.searcher.numDocs":0}}, 
  "range":"8000-7fff",   "state":"active"}}}

Stack Trace:
java.lang.AssertionError: Timeout waiting for collection to become active
Live Nodes: [127.0.0.1:10004_solr, 127.0.0.1:10006_solr, 127.0.0.1:10008_solr, 
127.0.0.1:10005_solr, 127.0.0.1:10007_solr]
Last available state: 
DocCollection(testCreateCollectionAddReplica//clusterstate.json/5)={
  "replicationFactor":"1",
  "pullReplicas":"0",
  "router":{"name":"compositeId"},
  "maxShardsPerNode":"1",
  "autoAddReplicas":"false",
  "nrtReplicas":"1",
  "tlogReplicas":"0",
  "autoCreated":"true",
  "policy":"c1",
  "shards":{"shard1":{
  "replicas":{"core_node1":{
  "core":"testCreateCollectionAddReplica_shard1_replica_n1",
  "SEARCHER.searcher.maxDoc":0,
  "SEARCHER.searcher.deletedDocs":0,
  "INDEX.sizeInBytes":10240,
  "node_name":"127.0.0.1:10008_solr",
  "state":"active",
  "type":"NRT",
  "INDEX.sizeInGB":9.5367431640625E-6,
  "SEARCHER.searcher.numDocs":0}},
  "range":"8000-7fff",
  "state":"active"}}}
at 
__randomizedtesting.SeedInfo.seed([BDB0FC9208B37A44:3D9099BC19F092E2]:0)
at 
org.apache.solr.cloud.CloudTestUtils.waitForState(CloudTestUtils.java:70)
at 
org.apache.solr.cloud.autoscaling.sim.TestSimPolicyCloud.testCreateCollectionAddReplica(TestSimPolicyCloud.java:123)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985)
at 
org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:110)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891)
   

[jira] [Commented] (SOLR-11522) Suggestions/recommendations to rebalance replicas

2018-10-22 Thread Noble Paul (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-11522?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16659852#comment-16659852
 ] 

Noble Paul commented on SOLR-11522:
---

bq. why aren't all the callers of _get just using toMap ...

{{toMap()}} is extremely expensive and must be avoided if possible.

bq. converting the entire object to a Map would be just as efficient as only 
"writing" that single entry,

NO.
A Map is a very expensive object. The writeMap() is just multiple method calls 
(no Objects are created); it doesn't necessarily "write" to anything. 
Essentially, the cost of a {{MapWriter._get("key")}} is the same as a 
{{NamedList#get("key")}}.

bq. ...at least 3 times slower than if the test just did something like...

Yes, but the cost is negligible. get operations are pretty cheap (they are only 
as costly as a {{NamedList.get()}}); there are no new Objects created. It was 
done for readability of the tests.

bq.If the answer is: "Because we want impls of MapWriter to be able to provide 
a more efficient impl." then why have such a terrible inefficient default impl 
at all?

The default impl is generic (it is not "inefficient"; it is actually quite 
performant). If the MapWriter is backed by a Map, the lookup is slightly 
faster: O(log n) vs O(n).

bq. At the very least, this method should have a more descriptive name and 
better javadocs (as should Utils.getObjectByPath) that make it clear what the 
performance tradeoffs are here.

The better solution is to move the {{_get*}} methods to another interface and 
have {{MapWriter}} implement it. Yes, better javadocs are definitely required.
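A minimal sketch of the idea being discussed might look like the following. 
{{SimpleMapWriter}} and its {{_get}} are simplified stand-ins, not Solr's real 
{{MapWriter}} API; the point is only to show how a default lookup can stream 
entries through a callback without ever allocating a Map.

```java
import java.util.function.BiConsumer;

// Simplified stand-in for the MapWriter idea discussed above: the default
// _get streams entries through a callback, so no Map is ever allocated
// (only a one-element array and a lambda). Not Solr's actual interface.
interface SimpleMapWriter {
    void writeMap(BiConsumer<String, Object> sink);

    // default lookup: an O(n) scan over the written entries
    default Object _get(String key) {
        Object[] found = new Object[1];
        writeMap((k, v) -> { if (k.equals(key)) found[0] = v; });
        return found[0];
    }
}

public class MapWriterSketch {
    public static void main(String[] args) {
        SimpleMapWriter w = sink -> {
            sink.accept("name", "collection1");
            sink.accept("replicas", 3);
        };
        System.out.println(w._get("replicas"));  // found without building a Map
    }
}
```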


> Suggestions/recommendations to rebalance replicas
> -
>
> Key: SOLR-11522
> URL: https://issues.apache.org/jira/browse/SOLR-11522
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling
>Reporter: Noble Paul
>Priority: Major
>
> It is possible for a cluster to be unbalanced even when it is not breaking 
> any of the policy rules: some nodes may have very little load while others 
> are heavily loaded, so replicas can be moved around to distribute the load 
> more evenly. This is going to be driven by preferences. The way we arrive at 
> these suggestions is as follows:
>  # Sort the nodes according to the given preferences
>  # Choose a replica from the most loaded node ({{source-node}})
>  # Try adding it to the least loaded node ({{target-node}})
>  # See if it breaks any policy rules. If yes, try another {{target-node}} 
> (go to #3)
>  # If no policy rules are broken, present this as a {{suggestion}}. 
> The suggestion contains the following information:
>  #* The {{source-node}} and {{target-node}} names
>  #* The actual v2 command that can be run to effect the operation
>  # Go to step #1
>  # Repeat until no replica can be moved without leaving the {{target-node}} 
> more loaded than the {{source-node}}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12894) Solr documentation for Java Vendors

2018-10-22 Thread Shawn Heisey (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12894?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16659849#comment-16659849
 ] 

Shawn Heisey commented on SOLR-12894:
-

Since Oracle has changed the license with version 11, we should probably start 
recommending OpenJDK, while stating that Oracle Java will work.


> Solr documentation for Java Vendors
> -
>
> Key: SOLR-12894
> URL: https://issues.apache.org/jira/browse/SOLR-12894
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Reporter: Varun Thacker
>Priority: Major
>
> I was asked a question recently - "Is using OpenJDK safe with Solr 7.4?" - to 
> which my answer was yes. This was after I checked with Steve about which 
> OpenJDK version runs on his Jenkins.
> For reference it currently uses -
> {code:java}
> openjdk version "1.8.0_171"
> OpenJDK Runtime Environment (build 1.8.0_171-8u171-b11-1~bpo8+1-b11)
> OpenJDK 64-Bit Server VM (build 25.171-b11, mixed mode){code}
>  
> Solr's ref guide (  
> [https://lucene.apache.org/solr/guide/installing-solr.html#got-java|https://lucene.apache.org/solr/guide/6_6/installing-solr.html#got-java]
>  ) mentions using Oracle 1.8 or higher .
>  
> We should mention that both Oracle JDKs and OpenJDKs are tested. Perhaps we 
> should even have a compatibility matrix.
>  
> Also, we should note that Java 9 and 10 are short-term releases, and hence 
> replace the "Java 8+" wording with more specific versions.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12894) Solr documentation for Java Vendors

2018-10-22 Thread Varun Thacker (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12894?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16659840#comment-16659840
 ] 

Varun Thacker commented on SOLR-12894:
--

Files we'd need to change :
 * SYSTEM_REQUIREMENTS.mdtext
 * solr-system-requirements.adoc

 

> Solr documentation for Java Vendors
> -
>
> Key: SOLR-12894
> URL: https://issues.apache.org/jira/browse/SOLR-12894
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Reporter: Varun Thacker
>Priority: Major
>
> I was asked a question recently - "Is using OpenJDK safe with Solr 7.4?" - to 
> which my answer was yes. This was after I checked with Steve about which 
> OpenJDK version runs on his Jenkins.
> For reference it currently uses -
> {code:java}
> openjdk version "1.8.0_171"
> OpenJDK Runtime Environment (build 1.8.0_171-8u171-b11-1~bpo8+1-b11)
> OpenJDK 64-Bit Server VM (build 25.171-b11, mixed mode){code}
>  
> Solr's ref guide (  
> [https://lucene.apache.org/solr/guide/installing-solr.html#got-java|https://lucene.apache.org/solr/guide/6_6/installing-solr.html#got-java]
>  ) mentions using Oracle 1.8 or higher .
>  
> We should mention that both Oracle JDKs and OpenJDKs are tested. Perhaps we 
> should even have a compatibility matrix.
>  
> Also, we should note that Java 9 and 10 are short-term releases, and hence 
> replace the "Java 8+" wording with more specific versions.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-Solaris (64bit/jdk1.8.0) - Build # 2118 - Unstable!

2018-10-22 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Solaris/2118/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

7 tests failed.
FAILED:  org.apache.solr.cloud.MoveReplicaTest.test

Error Message:


Stack Trace:
java.lang.AssertionError
at 
__randomizedtesting.SeedInfo.seed([9BDEAA54A11634A5:138A958E0FEA595D]:0)
at org.junit.Assert.fail(Assert.java:92)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertFalse(Assert.java:68)
at org.junit.Assert.assertFalse(Assert.java:79)
at org.apache.solr.cloud.MoveReplicaTest.test(MoveReplicaTest.java:148)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935)
at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971)
at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944)
at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830)
at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:880)
at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)


FAILED:  org.apache.solr.cloud.autoscaling.SystemLogListenerTest.test

Error Message:
wrong number of events added to .system expected:<9> but was:<4>

Stack Trace:
java.lang.AssertionError: wrong number of events added to .system expected:<9> but was:<4>
at 

[jira] [Comment Edited] (SOLR-12884) Admin UI, admin/luke and *Point fields

2018-10-22 Thread Christopher Ball (JIRA)


[ https://issues.apache.org/jira/browse/SOLR-12884?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16657963#comment-16657963 ]

Christopher Ball edited comment on SOLR-12884 at 10/22/18 11:14 PM:


Could this be an opportunity . . . for Solr to eat its own dog food?

How about using Streaming Expressions - for example the following expression 
provides a frequency table for a numeric field:
{code:java}
let(a=search(MyCollection, q="*:*", fl="myWordCount_l",
             fq="myWordCount_l:[0 TO *]", rows=1000, sort="myWordCount_l asc"),
    b=col(a, myWordCount_l),
    c=freqTable(b)){code}
With the addition of a filter function (either an exponential function or just 
a list of step points), it would be on par with the data being provided from 
Luke. 

@[~joel.bernstein] - thoughts?


was (Author: christopherball):
Could this be an opportunity . . . for Solr to eat its own dog food?

How about using Streaming Expressions - for example the following expression 
provides a frequency table for a numeric field:
{code:java}
let(a=search(MyCollection, q="*:*", fl="myWordCount_l",
             fq="myWordCount_l:[0 TO *]", rows=1000, sort="myWordCount_l asc"),
    b=col(a, myWordCount_l),
    c=freqTable(b)){code}
With the addition of a filter function (either an exponential function or just 
a list of step points), it would be on par with the data being provided from 
Luke. 

@[~joel.bernstein] - thoughts?

> Admin UI, admin/luke and *Point fields
> --
>
> Key: SOLR-12884
> URL: https://issues.apache.org/jira/browse/SOLR-12884
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: master (8.0)
>Reporter: Erick Erickson
>Priority: Major
>
> One of the conference attendees noted that if you go to the schema browser and 
> click on, say, a pint field, then click "load term info", nothing is shown.
> admin/luke similarly doesn't show much of interest; here's the response for a 
> pint vs. a tint field:
> "popularity":\{ "type":"pint", "schema":"I-SD-OF--"},
> "popularityt":{ "type":"tint", "schema":"I-S--OF--",
>                        "index":"-TS--", "docs":15},
>  
> What, if anything, should we do in these two cases? Since the points-based 
> numerics don't have terms like Trie* fields, I don't think we _can_ show much 
> more, so the above makes sense; it's just jarring to end users and looks like 
> a bug.
> WDYT about putting in some useful information, though? Say, for the Admin UI 
> for points-based fields: "terms cannot be shown for points-based fields" or some such?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-12884) Admin UI, admin/luke and *Point fields

2018-10-22 Thread Christopher Ball (JIRA)


[ https://issues.apache.org/jira/browse/SOLR-12884?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16657963#comment-16657963 ]

Christopher Ball edited comment on SOLR-12884 at 10/22/18 11:13 PM:


Could this be an opportunity . . . for Solr to eat its own dog food?

How about using Streaming Expressions - for example the following expression 
provides a frequency table for a numeric field:
{code:java}
let(a=search(MyCollection, q="*:*", fl="myWordCount_l",
             fq="myWordCount_l:[0 TO *]", rows=1000, sort="myWordCount_l asc"),
    b=col(a, myWordCount_l),
    c=freqTable(b)){code}
With the addition of a filter function (either an exponential function or just 
a list of step points), it would be on par with the data being provided from 
Luke. 

@[~joel.bernstein] - thoughts?


was (Author: christopherball):
Could this be an opportunity . . . for Solr to eat its own dog food?

How about using Streaming Expressions - for example the following expression 
provides a frequency table for a numeric field:

let (a=search(MyCollection,
 q="*:*",
 fl="myWordCount_l",
 fq="myWordCount_l:[0 TO *]",
 rows=1000,
 sort="myWordCount_l asc"),
 b=col(a, myWordCount_l),
 c=freqTable(b))

With the addition of a filter function (either an exponential function or just 
a list of step points), it would be on par with the data being provided from 
Luke. 

@[~joel.bernstein] - thoughts?

> Admin UI, admin/luke and *Point fields
> --
>
> Key: SOLR-12884
> URL: https://issues.apache.org/jira/browse/SOLR-12884
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: master (8.0)
>Reporter: Erick Erickson
>Priority: Major
>
> One of the conference attendees noted that if you go to the schema browser and 
> click on, say, a pint field, then click "load term info", nothing is shown.
> admin/luke similarly doesn't show much of interest; here's the response for a 
> pint vs. a tint field:
> "popularity":\{ "type":"pint", "schema":"I-SD-OF--"},
> "popularityt":{ "type":"tint", "schema":"I-S--OF--",
>                        "index":"-TS--", "docs":15},
>  
> What, if anything, should we do in these two cases? Since the points-based 
> numerics don't have terms like Trie* fields, I don't think we _can_ show much 
> more, so the above makes sense; it's just jarring to end users and looks like 
> a bug.
> WDYT about putting in some useful information, though? Say, for the Admin UI 
> for points-based fields: "terms cannot be shown for points-based fields" or some such?






[GitHub] lucene-solr pull request #482: LUCENE-8539: fix some typos and improve style

2018-10-22 Thread diegoceccarelli
GitHub user diegoceccarelli opened a pull request:

https://github.com/apache/lucene-solr/pull/482

LUCENE-8539: fix some typos and improve style



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/diegoceccarelli/lucene-solr LUCENE-8539

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/lucene-solr/pull/482.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #482


commit b007debb2ea995c6878c0d74385bf31951710b5a
Author: Diego Ceccarelli 
Date:   2018-10-22T23:08:51Z

LUCENE-8539: fix some typos and improve style




---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-BadApples-Tests-master - Build # 189 - Still Unstable

2018-10-22 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-BadApples-Tests-master/189/

2 tests failed.
FAILED:  org.apache.solr.cloud.autoscaling.SystemLogListenerTest.test

Error Message:
Trigger was not fired 

Stack Trace:
java.lang.AssertionError: Trigger was not fired 
at __randomizedtesting.SeedInfo.seed([10541E1DE2A749DF:980021C74C5B2427]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.apache.solr.cloud.autoscaling.SystemLogListenerTest.test(SystemLogListenerTest.java:151)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935)
at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971)
at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944)
at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830)
at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:880)
at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)


FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.autoscaling.sim.TestSimExtremeIndexing

Error Message:
1 thread leaked from SUITE scope at org.apache.solr.cloud.autoscaling.sim.TestSimExtremeIndexing: 1) Thread[id=8514, name=SolrRrdBackendFactory-3700-thread-1, state=WAITING, group=TGRP-TestSimExtremeIndexing] at 

[jira] [Created] (LUCENE-8539) Fix typos and style in TestStopFilter

2018-10-22 Thread Diego Ceccarelli (JIRA)
Diego Ceccarelli created LUCENE-8539:


 Summary: Fix typos and style in TestStopFilter
 Key: LUCENE-8539
 URL: https://issues.apache.org/jira/browse/LUCENE-8539
 Project: Lucene - Core
  Issue Type: Improvement
  Components: core/other
Reporter: Diego Ceccarelli


This patch fixes some typos in TestStopFilter; it also contains some 
refactoring of the tests to make them clearer. 






[jira] [Commented] (SOLR-12879) Query Parser for MinHash/LSH

2018-10-22 Thread Andy Hind (JIRA)


[ https://issues.apache.org/jira/browse/SOLR-12879?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16659793#comment-16659793 ]

Andy Hind commented on SOLR-12879:
--

I don't think there is any reason the patch would not go back to 7.x. It has no 
dependencies other than the analyser. It started life on 6.x, where it needed 
to disable query coordination.

The parser is mostly intended to be used with the q and fq parameters. A default 
wire-up would be great.

I would not be surprised if someone comes up with a use in streaming, as it 
provides another distance measure.

I will look at adding the docs. The analyser should also have some explanation. 
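
As a purely illustrative sketch (the parser name and parameter names below are 
assumptions for discussion, not taken from the attached patch), a query parser 
like this would normally be invoked through Solr's local-params syntax in the q 
or fq parameter:
{code}
q={!min_hash field="min_hash_sig" sim="0.8"}text of the document to compare
{code}
Here {{field}} would name the MinHash-analysed field and {{sim}} a minimum 
Jaccard similarity threshold; the actual names and defaults are whatever the 
patch defines.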

 

 

> Query Parser for MinHash/LSH
> 
>
> Key: SOLR-12879
> URL: https://issues.apache.org/jira/browse/SOLR-12879
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: query parsers
>Affects Versions: master (8.0)
>Reporter: Andy Hind
>Assignee: Tommaso Teofili
>Priority: Major
> Fix For: master (8.0)
>
> Attachments: minhash.patch
>
>
> Following on from https://issues.apache.org/jira/browse/LUCENE-6968, provide 
> a query parser that builds queries that provide a measure of Jaccard 
> similarity. The initial patch includes banded queries that were also proposed 
> on the original issue.
>  
> I have one outstanding question:
>  * Should the score from the overall query be normalised?
> Note that the band count is currently approximate and may be one less than 
> expected in practice.






[jira] [Commented] (SOLR-11522) Suggestions/recommendations to rebalance replicas

2018-10-22 Thread Hoss Man (JIRA)


[ https://issues.apache.org/jira/browse/SOLR-11522?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16659780#comment-16659780 ]

Hoss Man commented on SOLR-11522:
-

{quote}I am mostly worried about the suitability of "get" declared on this 
interface, and I think the costly lookup is symptomatic of this misalignment. 
This interface should be limited to writing IMO.
{quote}
I'm having trouble understanding the purpose/use of these {{_get}} methods 
(they seem to be purely for use in tests???) but in general I'm with David that 
they seem very confusing and badly suited for this interface/name.

If these are purely "utility" methods intended for the special case where 
tests want to do a "lookup" of a value in one of these "Map like" MapWriter 
instances, why do they need to be in the interface itself? Since the bulk of 
the work to do the "pretend we are writing out the map but really just look 
for one key" is in {{Utils.getVal}} – hidden away as an implementation detail 
behind {{Utils.getObjectByPath}}, which already does a lot of instanceof checks 
– why does {{MapWriter._get}} need to exist at all? Why can't the caller just 
use {{Utils.getObjectByPath}} directly?

If the answer is "Because we want impls of {{MapWriter}} to be able to provide 
a more efficient impl", then why have such a terribly inefficient default impl 
at all?

At the very least, this method should have a more descriptive name and better 
javadocs (as should {{Utils.getObjectByPath}}) that make it clear what the 
performance tradeoffs are here.

Frankly, looking at the actual uses of {{_get}} in the tests makes me question 
the entire "value add" of these methods – why aren't all the callers of 
{{_get}} just using {{toMap}} (or {{Utils.getObjectByPath}}) and then making 
multiple assertions about the resulting map? AFAICT the way {{Utils.getVal}} 
works means that in the case of a test that does a single assert on a single 
entry, converting the entire object to a Map would be just as efficient as only 
"writing" that single entry, but in many cases tests are calling {{_get}} on 
several sub-elements in a row, which would be much faster if the test just 
dumped the whole map and then called {{get}} on the keys it wants to assert.

Specifically, isn't this existing test snippet...
{code:java}
CoreAdminResponse status = CoreAdminRequest.getStatus(corename, coreclient);
assertEquals(collectionName, status._get(asList("status", corename, "cloud", "collection"), null));
assertNotNull(status._get(asList("status", corename, "cloud", "shard"), null));
assertNotNull(status._get(asList("status", corename, "cloud", "replica"), null));
{code}
...at least 3 times slower than if the test just did something like...
{code:java}
CoreAdminResponse status = CoreAdminRequest.getStatus(corename, coreclient);
Map coreMap = Utils.getObjectByPath(status, false, asList("status", corename, "cloud"));
assertEquals(collectionName, coreMap.get("collection"));
assertNotNull(coreMap.get("shard"));
assertNotNull(coreMap.get("replica"));
{code}
...?

I don't see how the existing test code is any better/faster/more readable than 
the second (which seems like a much simpler approach, without the need to 
pollute the {{MapWriter}} API with a confusing default method).

> Suggestions/recommendations to rebalance replicas
> -
>
> Key: SOLR-11522
> URL: https://issues.apache.org/jira/browse/SOLR-11522
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling
>Reporter: Noble Paul
>Priority: Major
>
> It is possible that a cluster is unbalanced even if it is not breaking any of 
> the policy rules. Some nodes may have very little load while others may 
> be heavily loaded, so it is possible to move replicas around so that the 
> load is more evenly distributed. This is going to be driven by preferences. 
> The way we arrive at these suggestions is as follows:
>  # Sort the nodes according to the given preferences
>  # Choose a replica from the most loaded node ({{source-node}})
>  # Try adding it to the least loaded node ({{target-node}})
>  # See if it breaks any policy rules. If yes, try another {{target-node}} 
> (go to #3)
>  # If no policy rules are being broken, present this as a {{suggestion}}. 
> The suggestion contains the following information:
>  #* The {{source-node}} and {{target-node}} names
>  #* The actual v2 command that can be run to effect the operation
>  # Go to step #1
>  # Repeat until no replica can be moved without making the {{target-node}} 
> more loaded than the {{source-node}}





[jira] [Updated] (LUCENE-8538) Add Simple WKT Shape Parser

2018-10-22 Thread Nicholas Knize (JIRA)


 [ 
https://issues.apache.org/jira/browse/LUCENE-8538?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicholas Knize updated LUCENE-8538:
---
Lucene Fields: New,Patch Available  (was: New)

> Add Simple WKT Shape Parser
> ---
>
> Key: LUCENE-8538
> URL: https://issues.apache.org/jira/browse/LUCENE-8538
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Nicholas Knize
>Priority: Major
> Attachments: LUCENE-8538.patch
>
>
> Similar to {{SimpleGeoJSONPolygonParser}} for creating {{Polygon}} objects 
> from GeoJSON, it would be helpful to have a {{SimpleWKTParser}} for creating 
> lucene geometries from WKT. Not only is this useful for simple tests, but 
> also helps for benchmarking from real world data (e.g., PlanetOSM).






[jira] [Commented] (SOLR-12884) Admin UI, admin/luke and *Point fields

2018-10-22 Thread Joel Bernstein (JIRA)


[ https://issues.apache.org/jira/browse/SOLR-12884?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16659768#comment-16659768 ]

Joel Bernstein commented on SOLR-12884:
---

I think you would want to sample:
{code:java}
let (a=random(MyCollection, q="*:*", fl="myWordCount_l", rows=10),
 b=col(a, myWordCount_l),
 c=freqTable(b)){code}
And if you wanted a step function I think you could probably use the *hist* 
function with a set number of bins:
{code:java}
let (a=random(MyCollection, q="*:*", fl="myWordCount_l", rows=10),
 b=col(a, myWordCount_l),
 c=hist(b, 11)){code}

> Admin UI, admin/luke and *Point fields
> --
>
> Key: SOLR-12884
> URL: https://issues.apache.org/jira/browse/SOLR-12884
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: master (8.0)
>Reporter: Erick Erickson
>Priority: Major
>
> One of the conference attendees noted that if you go to the schema browser and 
> click on, say, a pint field, then click "load term info", nothing is shown.
> admin/luke similarly doesn't show much of interest; here's the response for a 
> pint vs. a tint field:
> "popularity":\{ "type":"pint", "schema":"I-SD-OF--"},
> "popularityt":{ "type":"tint", "schema":"I-S--OF--",
>                        "index":"-TS--", "docs":15},
>  
> What, if anything, should we do in these two cases? Since the points-based 
> numerics don't have terms like Trie* fields, I don't think we _can_ show much 
> more, so the above makes sense; it's just jarring to end users and looks like 
> a bug.
> WDYT about putting in some useful information, though? Say, for the Admin UI 
> for points-based fields: "terms cannot be shown for points-based fields" or some such?






[jira] [Updated] (SOLR-12894) Solr documentation for Java Vendors

2018-10-22 Thread Varun Thacker (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12894?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Thacker updated SOLR-12894:
-
Description: 
I was asked a question recently - "Is using OpenJDK safe with Solr 7.4?" - to 
which my answer was yes. This was after I checked with Steve on which OpenJDK 
version runs on his jenkins.

For reference, it currently uses -
{code:java}
openjdk version "1.8.0_171"
OpenJDK Runtime Environment (build 1.8.0_171-8u171-b11-1~bpo8+1-b11)
OpenJDK 64-Bit Server VM (build 25.171-b11, mixed mode){code}
 

Solr's ref guide (  
[https://lucene.apache.org/solr/guide/installing-solr.html#got-java|https://lucene.apache.org/solr/guide/6_6/installing-solr.html#got-java]
 ) mentions using Oracle 1.8 or higher.

 

We should mention that both Oracle JDKs and OpenJDKs are tested. Perhaps even 
have a compatibility matrix.

 

Also, we should note that Java 9 and 10 are short-term releases; hence we 
should replace the "Java 8+" wording with more specific versions.

  was:
I was asked a question recently - "Is using OpenJDK safe with Solr 7.4" . To 
which my answer was yes . This was after I checked with Steve on which OpenJDK 
version runs on his jenkins

For refrerence it currently uses -
{code:java}
openjdk version "1.8.0_171"
OpenJDK Runtime Environment (build 1.8.0_171-8u171-b11-1~bpo8+1-b11)
OpenJDK 64-Bit Server VM (build 25.171-b11, mixed mode){code}
 

Solr's ref guide (  
[https://lucene.apache.org/solr/guide/installing-solr.html#got-java|https://lucene.apache.org/solr/guide/6_6/installing-solr.html#got-java]
 ) mentions using Oracle 1.8 or higher .

 

We should mention that both Oracle JDKs and Open JDKs are tested. Perhaps even 
have a compatibility matrix


> Solr documentation for Java Vendors
> -
>
> Key: SOLR-12894
> URL: https://issues.apache.org/jira/browse/SOLR-12894
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Reporter: Varun Thacker
>Priority: Major
>
> I was asked a question recently - "Is using OpenJDK safe with Solr 7.4?" - to 
> which my answer was yes. This was after I checked with Steve on which 
> OpenJDK version runs on his jenkins.
> For reference, it currently uses -
> {code:java}
> openjdk version "1.8.0_171"
> OpenJDK Runtime Environment (build 1.8.0_171-8u171-b11-1~bpo8+1-b11)
> OpenJDK 64-Bit Server VM (build 25.171-b11, mixed mode){code}
>  
> Solr's ref guide (  
> [https://lucene.apache.org/solr/guide/installing-solr.html#got-java|https://lucene.apache.org/solr/guide/6_6/installing-solr.html#got-java]
>  ) mentions using Oracle 1.8 or higher.
>  
> We should mention that both Oracle JDKs and OpenJDKs are tested. Perhaps 
> even have a compatibility matrix.
>  
> Also, we should note that Java 9 and 10 are short-term releases; hence we 
> should replace the "Java 8+" wording with more specific versions.






[jira] [Updated] (SOLR-5004) Allow a shard to be split into 'n' sub-shards

2018-10-22 Thread Anshum Gupta (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-5004?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anshum Gupta updated SOLR-5004:
---
Fix Version/s: master (8.0)
   7.6

> Allow a shard to be split into 'n' sub-shards
> -
>
> Key: SOLR-5004
> URL: https://issues.apache.org/jira/browse/SOLR-5004
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrCloud
>Affects Versions: 4.3, 4.3.1
>Reporter: Anshum Gupta
>Assignee: Anshum Gupta
>Priority: Major
> Fix For: 7.6, master (8.0)
>
> Attachments: SOLR-5004.01.patch, SOLR-5004.02.patch, 
> SOLR-5004.03.patch, SOLR-5004.04.patch, SOLR-5004.patch, SOLR-5004.patch
>
>
> As of now, a SPLITSHARD call is hardcoded to create 2 sub-shards from the 
> parent one. Accept a parameter to split into n sub-shards.
> Default it to 2 and perhaps also have an upper bound to it.
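
A sketch of what the parameterised Collections API call could look like (the 
{{numSubShards}} parameter name here is illustrative and should be checked 
against the committed patch):
{code}
/admin/collections?action=SPLITSHARD&collection=myCollection&shard=shard1&numSubShards=3
{code}
Omitting the parameter would presumably keep the current default of two 
sub-shards.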






[jira] [Commented] (SOLR-5004) Allow a shard to be split into 'n' sub-shards

2018-10-22 Thread ASF subversion and git services (JIRA)


[ https://issues.apache.org/jira/browse/SOLR-5004?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16659762#comment-16659762 ]

ASF subversion and git services commented on SOLR-5004:
---

Commit 97f6e23ff26e43b2f5b9412c4a01629737a92e43 in lucene-solr's branch 
refs/heads/branch_7x from [~anshum]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=97f6e23 ]

SOLR-5004: Allow a shard to be split into 'n' sub-shards using the collections 
API


> Allow a shard to be split into 'n' sub-shards
> -
>
> Key: SOLR-5004
> URL: https://issues.apache.org/jira/browse/SOLR-5004
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrCloud
>Affects Versions: 4.3, 4.3.1
>Reporter: Anshum Gupta
>Assignee: Anshum Gupta
>Priority: Major
> Attachments: SOLR-5004.01.patch, SOLR-5004.02.patch, 
> SOLR-5004.03.patch, SOLR-5004.04.patch, SOLR-5004.patch, SOLR-5004.patch
>
>
> As of now, a SPLITSHARD call is hardcoded to create 2 sub-shards from the 
> parent one. Accept a parameter to split into n sub-shards.
> Default it to 2 and perhaps also have an upper bound to it.






[JENKINS] Lucene-Solr-7.x-Linux (64bit/jdk-11) - Build # 2960 - Failure!

2018-10-22 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/2960/
Java: 64bit/jdk-11 -XX:+UseCompressedOops -XX:+UseG1GC

4 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.servlet.HttpSolrCallGetCoreTest

Error Message:
Could not find collection:collection1

Stack Trace:
java.lang.AssertionError: Could not find collection:collection1
at __randomizedtesting.SeedInfo.seed([5114C39249EA89C4]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertNotNull(Assert.java:526)
at org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:155)
at org.apache.solr.servlet.HttpSolrCallGetCoreTest.setupCluster(HttpSolrCallGetCoreTest.java:53)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:875)
at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:834)


FAILED:  
org.apache.solr.cloud.TestMiniSolrCloudClusterSSL.testSslAndNoClientAuth

Error Message:
Could not find collection:second_collection

Stack Trace:
java.lang.AssertionError: Could not find collection:second_collection
at 
__randomizedtesting.SeedInfo.seed([5114C39249EA89C4:ADAE17A6B1CA380E]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertNotNull(Assert.java:526)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:155)
at 
org.apache.solr.cloud.TestMiniSolrCloudClusterSSL.checkCreateCollection(TestMiniSolrCloudClusterSSL.java:263)
at 
org.apache.solr.cloud.TestMiniSolrCloudClusterSSL.checkClusterWithCollectionCreations(TestMiniSolrCloudClusterSSL.java:249)
at 
org.apache.solr.cloud.TestMiniSolrCloudClusterSSL.checkClusterWithNodeReplacement(TestMiniSolrCloudClusterSSL.java:157)
at 
org.apache.solr.cloud.TestMiniSolrCloudClusterSSL.testSslAndNoClientAuth(TestMiniSolrCloudClusterSSL.java:121)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935)
at 

[jira] [Created] (SOLR-12894) Solr documentation for Java Vendors

2018-10-22 Thread Varun Thacker (JIRA)
Varun Thacker created SOLR-12894:


 Summary: Solr documentation for Java Vendors
 Key: SOLR-12894
 URL: https://issues.apache.org/jira/browse/SOLR-12894
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
  Components: documentation
Reporter: Varun Thacker


I was asked a question recently: "Is using OpenJDK safe with Solr 7.4?" My 
answer was yes. This was after I checked with Steve on which OpenJDK version 
runs on his Jenkins.

For reference, it currently uses:
{code:java}
openjdk version "1.8.0_171"
OpenJDK Runtime Environment (build 1.8.0_171-8u171-b11-1~bpo8+1-b11)
OpenJDK 64-Bit Server VM (build 25.171-b11, mixed mode){code}
 

Solr's ref guide 
([https://lucene.apache.org/solr/guide/installing-solr.html#got-java|https://lucene.apache.org/solr/guide/6_6/installing-solr.html#got-java])
mentions using Oracle 1.8 or higher.

 

We should mention that both Oracle JDKs and OpenJDKs are tested. Perhaps we 
should even have a compatibility matrix.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5004) Allow a shard to be split into 'n' sub-shards

2018-10-22 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-5004?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16659745#comment-16659745
 ] 

ASF subversion and git services commented on SOLR-5004:
---

Commit d799fd53c7cd3a83442d6010dc48802d2fd8c7fb in lucene-solr's branch 
refs/heads/master from [~anshum]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=d799fd5 ]

SOLR-5004: Allow a shard to be split into 'n' sub-shards using the collections 
API


> Allow a shard to be split into 'n' sub-shards
> -
>
> Key: SOLR-5004
> URL: https://issues.apache.org/jira/browse/SOLR-5004
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrCloud
>Affects Versions: 4.3, 4.3.1
>Reporter: Anshum Gupta
>Assignee: Anshum Gupta
>Priority: Major
> Attachments: SOLR-5004.01.patch, SOLR-5004.02.patch, 
> SOLR-5004.03.patch, SOLR-5004.04.patch, SOLR-5004.patch, SOLR-5004.patch
>
>
> As of now, a SPLITSHARD call is hardcoded to create 2 sub-shards from the 
> parent one. Accept a parameter to split into n sub-shards.
> Default it to 2 and perhaps also have an upper bound to it.






[JENKINS] Lucene-Solr-master-Linux (64bit/jdk-11) - Build # 23075 - Still Unstable!

2018-10-22 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/23075/
Java: 64bit/jdk-11 -XX:-UseCompressedOops -XX:+UseParallelGC

26 tests failed.
FAILED:  
org.apache.solr.cloud.TestMiniSolrCloudClusterSSL.testSslWithInvalidPeerName

Error Message:
Could not find collection:second_collection

Stack Trace:
java.lang.AssertionError: Could not find collection:second_collection
at 
__randomizedtesting.SeedInfo.seed([7FF669394CE75C44:28472C828C1BA355]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertNotNull(Assert.java:526)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:155)
at 
org.apache.solr.cloud.TestMiniSolrCloudClusterSSL.checkCreateCollection(TestMiniSolrCloudClusterSSL.java:263)
at 
org.apache.solr.cloud.TestMiniSolrCloudClusterSSL.checkClusterWithCollectionCreations(TestMiniSolrCloudClusterSSL.java:249)
at 
org.apache.solr.cloud.TestMiniSolrCloudClusterSSL.testSslWithInvalidPeerName(TestMiniSolrCloudClusterSSL.java:185)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 

[JENKINS] Lucene-Solr-repro - Build # 1755 - Unstable

2018-10-22 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-repro/1755/

[...truncated 28 lines...]
[repro] Jenkins log URL: 
https://builds.apache.org/job/Lucene-Solr-BadApples-Tests-7.x/194/consoleText

[repro] Revision: 4332b0aa6e467c1ae18246fa25301ae9410b4d7f

[repro] Repro line:  ant test  -Dtestcase=ScheduledMaintenanceTriggerTest 
-Dtests.method=testInactiveShardCleanup -Dtests.seed=B3143940EEE10292 
-Dtests.multiplier=2 -Dtests.slow=true -Dtests.badapples=true 
-Dtests.locale=es-BO -Dtests.timezone=America/Pangnirtung -Dtests.asserts=true 
-Dtests.file.encoding=ISO-8859-1

[repro] Repro line:  ant test  -Dtestcase=AutoAddReplicasIntegrationTest 
-Dtests.method=testSimple -Dtests.seed=B3143940EEE10292 -Dtests.multiplier=2 
-Dtests.slow=true -Dtests.badapples=true -Dtests.locale=ar-SA 
-Dtests.timezone=America/Cuiaba -Dtests.asserts=true 
-Dtests.file.encoding=ISO-8859-1

[repro] Repro line:  ant test  -Dtestcase=CloudSolrClientTest 
-Dtests.method=testVersionsAreReturned -Dtests.seed=BBBA7C223892F907 
-Dtests.multiplier=2 -Dtests.slow=true -Dtests.badapples=true -Dtests.locale=hu 
-Dtests.timezone=EST -Dtests.asserts=true -Dtests.file.encoding=UTF-8

[repro] Repro line:  ant test  -Dtestcase=CloudSolrClientTest 
-Dtests.method=testRouting -Dtests.seed=BBBA7C223892F907 -Dtests.multiplier=2 
-Dtests.slow=true -Dtests.badapples=true -Dtests.locale=hu -Dtests.timezone=EST 
-Dtests.asserts=true -Dtests.file.encoding=UTF-8

[repro] git rev-parse --abbrev-ref HEAD
[repro] git rev-parse HEAD
[repro] Initial local git branch/revision: 
fcaea07f3c8cba34906ca02f40fb1d2c40badc08
[repro] git fetch
[repro] git checkout 4332b0aa6e467c1ae18246fa25301ae9410b4d7f

[...truncated 2 lines...]
[repro] git merge --ff-only

[...truncated 1 lines...]
[repro] ant clean

[...truncated 6 lines...]
[repro] Test suites by module:
[repro]solr/core
[repro]   AutoAddReplicasIntegrationTest
[repro]   ScheduledMaintenanceTriggerTest
[repro]solr/solrj
[repro]   CloudSolrClientTest
[repro] ant compile-test

[...truncated 3436 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=10 
-Dtests.class="*.AutoAddReplicasIntegrationTest|*.ScheduledMaintenanceTriggerTest"
 -Dtests.showOutput=onerror  -Dtests.seed=B3143940EEE10292 -Dtests.multiplier=2 
-Dtests.slow=true -Dtests.badapples=true -Dtests.locale=ar-SA 
-Dtests.timezone=America/Cuiaba -Dtests.asserts=true 
-Dtests.file.encoding=ISO-8859-1

[...truncated 7687 lines...]
[repro] Setting last failure code to 256

[repro] ant compile-test

[...truncated 454 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=5 
-Dtests.class="*.CloudSolrClientTest" -Dtests.showOutput=onerror  
-Dtests.seed=BBBA7C223892F907 -Dtests.multiplier=2 -Dtests.slow=true 
-Dtests.badapples=true -Dtests.locale=hu -Dtests.timezone=EST 
-Dtests.asserts=true -Dtests.file.encoding=UTF-8

[...truncated 2133 lines...]
[repro] Setting last failure code to 256

[repro] Failures:
[repro]   0/5 failed: 
org.apache.solr.cloud.autoscaling.AutoAddReplicasIntegrationTest
[repro]   2/5 failed: org.apache.solr.client.solrj.impl.CloudSolrClientTest
[repro]   5/5 failed: 
org.apache.solr.cloud.autoscaling.ScheduledMaintenanceTriggerTest

[repro] Re-testing 100% failures at the tip of branch_7x
[repro] git fetch
[repro] git checkout branch_7x

[...truncated 4 lines...]
[repro] git merge --ff-only

[...truncated 1 lines...]
[repro] ant clean

[...truncated 8 lines...]
[repro] Test suites by module:
[repro]solr/core
[repro]   ScheduledMaintenanceTriggerTest
[repro] ant compile-test

[...truncated 3318 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=5 
-Dtests.class="*.ScheduledMaintenanceTriggerTest" -Dtests.showOutput=onerror  
-Dtests.seed=B3143940EEE10292 -Dtests.multiplier=2 -Dtests.slow=true 
-Dtests.badapples=true -Dtests.locale=es-BO 
-Dtests.timezone=America/Pangnirtung -Dtests.asserts=true 
-Dtests.file.encoding=ISO-8859-1

[...truncated 1144 lines...]
[repro] Setting last failure code to 256

[repro] Failures at the tip of branch_7x:
[repro]   2/5 failed: 
org.apache.solr.cloud.autoscaling.ScheduledMaintenanceTriggerTest
[repro] git checkout fcaea07f3c8cba34906ca02f40fb1d2c40badc08

[...truncated 8 lines...]
[repro] Exiting with code 256

[...truncated 6 lines...]


[jira] [Commented] (LUCENE-8538) Add Simple WKT Shape Parser

2018-10-22 Thread Nicholas Knize (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8538?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16659574#comment-16659574
 ] 

Nicholas Knize commented on LUCENE-8538:


[~dsmiley] since this builds Lucene-specific geometry ({{Polygon}}, {{Line}}, 
{{Rectangle}}), I assume it would end up in core or wherever {{LatLonShape}} 
lands. The attached patch keeps it in sandbox for now, since that's where 
{{LatLonShape}} currently resides.

> Add Simple WKT Shape Parser
> ---
>
> Key: LUCENE-8538
> URL: https://issues.apache.org/jira/browse/LUCENE-8538
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Nicholas Knize
>Priority: Major
> Attachments: LUCENE-8538.patch
>
>
> Similar to {{SimpleGeoJSONPolygonParser}} for creating {{Polygon}} objects 
> from GeoJSON, it would be helpful to have a {{SimpleWKTParser}} for creating 
> lucene geometries from WKT. Not only is this useful for simple tests, but 
> also helps for benchmarking from real world data (e.g., PlanetOSM).






[GitHub] lucene-solr pull request #464: WIP SOLR-12555: refactor tests in package org...

2018-10-22 Thread barrotsteindev
Github user barrotsteindev commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/464#discussion_r227038483
  
--- Diff: solr/core/src/test/org/apache/solr/search/TestRealTimeGet.java ---
@@ -414,56 +391,44 @@ public void testOptimisticLocking() throws Exception {
 version2 = addAndGetVersion(sdoc("id","1", "_version_", 
Long.toString(version)), null);
 assertTrue(version2 > version);
 
-try {
-  // overwriting the previous version should now fail
-  version2 = addAndGetVersion(sdoc("id","1"), params("_version_", 
Long.toString(version)));
-  fail();
-} catch (SolrException se) {
-  assertEquals(409, se.code());
-}
+// overwriting the previous version should now fail
+se = expectThrows(SolrException.class, "overwriting previous version 
should fail",
+() -> addAndGetVersion(sdoc("id","1"), params("_version_", 
Long.toString(version;
+assertEquals(409, se.code());
 
-try {
-  // deleting the previous version should now fail
-  version2 = deleteAndGetVersion("1", params("_version_", 
Long.toString(version)));
-  fail();
-} catch (SolrException se) {
-  assertEquals(409, se.code());
-}
+// deleting the previous version should now fail
+se = expectThrows(SolrException.class, "deleting the previous version 
should now fail",
+() -> deleteAndGetVersion("1", params("_version_", 
Long.toString(version;
+assertEquals(409, se.code());
 
-version = version2;
+final long prevVersion = version2;
--- End diff --

The only problem is that if the variables are not final, they cannot be 
used inside the lambda that is passed to expectThrows.
Should I just revert these changes?
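
For context, Java only lets a lambda capture local variables that are final or effectively final. A minimal, hypothetical sketch of that constraint and of the copy-to-a-final-local workaround discussed above (the `expectThrows` here is a simplified stand-in, not LuceneTestCase's actual method):

```java
import java.util.concurrent.Callable;

public class EffectivelyFinalDemo {
    // Simplified stand-in for the test framework's expectThrows: runs the
    // callable, returns the thrown exception, and fails if nothing is thrown.
    static <T extends Throwable> T expectThrows(Class<T> type, Callable<?> call) {
        try {
            call.call();
        } catch (Throwable t) {
            if (type.isInstance(t)) {
                return type.cast(t);
            }
            throw new AssertionError("unexpected exception type: " + t, t);
        }
        throw new AssertionError("expected " + type.getName() + " was not thrown");
    }

    public static void main(String[] args) {
        long version = 7L;
        // Reassigning 'version' after this point would make it no longer
        // effectively final, and the lambda below would fail to compile.
        // Copying the value into a final local is the usual workaround:
        final long prevVersion = version;
        IllegalStateException e = expectThrows(IllegalStateException.class,
                () -> { throw new IllegalStateException("stale version " + prevVersion); });
        System.out.println(e.getMessage()); // prints "stale version 7"
    }
}
```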


---




[jira] [Commented] (LUCENE-8538) Add Simple WKT Shape Parser

2018-10-22 Thread David Smiley (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8538?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16659528#comment-16659528
 ] 

David Smiley commented on LUCENE-8538:
--

Where would this live?  "spatial" module?  I'm not sure where things go anymore.
Would you like to fork/copy the WKT parsing code out of Spatial4j? 
https://github.com/locationtech/spatial4j/blob/master/src/main/java/org/locationtech/spatial4j/io/WktShapeParser.java
Parts of that were written by [~cmale].

> Add Simple WKT Shape Parser
> ---
>
> Key: LUCENE-8538
> URL: https://issues.apache.org/jira/browse/LUCENE-8538
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Nicholas Knize
>Priority: Major
>
> Similar to {{SimpleGeoJSONPolygonParser}} for creating {{Polygon}} objects 
> from GeoJSON, it would be helpful to have a {{SimpleWKTParser}} for creating 
> lucene geometries from WKT. Not only is this useful for simple tests, but 
> also helps for benchmarking from real world data (e.g., PlanetOSM).






[jira] [Commented] (SOLR-12829) Add plist (parallel list) Streaming Expression

2018-10-22 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12829?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16659520#comment-16659520
 ] 

ASF subversion and git services commented on SOLR-12829:


Commit 319ba2dcbc2bb9f62e7b42dc1cbb8d42a81f392e in lucene-solr's branch 
refs/heads/branch_7x from [~joel.bernstein]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=319ba2d ]

SOLR-12829: Add plist (parallel list) Streaming Expression


> Add plist (parallel list) Streaming Expression
> --
>
> Key: SOLR-12829
> URL: https://issues.apache.org/jira/browse/SOLR-12829
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>Priority: Major
> Attachments: SOLR-12829.patch, SOLR-12829.patch
>
>
> The *plist* Streaming Expression wraps any number of streaming expressions 
> and opens them in parallel. The results of each of the streams are then 
> iterated in the order they appear in the list. Since many streams perform 
> heavy pushed down operations when opened, like the FacetStream, this will 
> result in the parallelization of these operations. For example plist could 
> wrap several facet() expressions and open them each in parallel, which would 
> cause the facets to be run in parallel, on different replicas in the cluster. 
> Here is sample syntax:
> {code:java}
> plist(tuple(facet1=facet(...)), 
>   tuple(facet2=facet(...)),
>   tuple(facet3=facet(...))) {code}






[GitHub] lucene-solr pull request #464: WIP SOLR-12555: refactor tests in package org...

2018-10-22 Thread gerlowskija
Github user gerlowskija commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/464#discussion_r227030813
  
--- Diff: solr/core/src/test/org/apache/solr/search/TestRealTimeGet.java ---
@@ -414,56 +391,44 @@ public void testOptimisticLocking() throws Exception {
 version2 = addAndGetVersion(sdoc("id","1", "_version_", 
Long.toString(version)), null);
 assertTrue(version2 > version);
 
-try {
-  // overwriting the previous version should now fail
-  version2 = addAndGetVersion(sdoc("id","1"), params("_version_", 
Long.toString(version)));
-  fail();
-} catch (SolrException se) {
-  assertEquals(409, se.code());
-}
+// overwriting the previous version should now fail
+se = expectThrows(SolrException.class, "overwriting previous version 
should fail",
+() -> addAndGetVersion(sdoc("id","1"), params("_version_", 
Long.toString(version;
+assertEquals(409, se.code());
 
-try {
-  // deleting the previous version should now fail
-  version2 = deleteAndGetVersion("1", params("_version_", 
Long.toString(version)));
-  fail();
-} catch (SolrException se) {
-  assertEquals(409, se.code());
-}
+// deleting the previous version should now fail
+se = expectThrows(SolrException.class, "deleting the previous version 
should now fail",
+() -> deleteAndGetVersion("1", params("_version_", 
Long.toString(version;
+assertEquals(409, se.code());
 
-version = version2;
+final long prevVersion = version2;
--- End diff --

[0] I'm still somewhat leery of changing how the version variables are used 
here.  I agree with what seems like your intent here - that `final` variables 
often make it much easier to reason about Java code.  But with how flaky the 
tests are, I'd rather not introduce such changes here.  


---




[GitHub] lucene-solr pull request #464: WIP SOLR-12555: refactor tests in package org...

2018-10-22 Thread gerlowskija
Github user gerlowskija commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/464#discussion_r227022552
  
--- Diff: solr/core/src/test/org/apache/solr/search/QueryEqualityTest.java 
---
@@ -1214,29 +1208,23 @@ public void testPayloadScoreQuery() throws 
Exception {
 // I don't see a precedent to test query inequality in here, so doing 
a `try`
 // There was a bug with PayloadScoreQuery's .equals() method that said 
two queries were equal with different includeSpanScore settings
 
-try {
-  assertQueryEquals
-  ("payload_score"
-  , "{!payload_score f=foo_dpf v=query func=min 
includeSpanScore=false}"
-  , "{!payload_score f=foo_dpf v=query func=min 
includeSpanScore=true}"
-  );
-  fail("queries should not have been equal");
-} catch(AssertionFailedError e) {
-  assertTrue("queries were not equal, as expected", true);
-}
+expectThrows(AssertionFailedError.class, "queries were not equal, as 
expected",
+() -> assertQueryEquals
+("payload_score"
+, "{!payload_score f=foo_dpf v=query func=min 
includeSpanScore=false}"
+, "{!payload_score f=foo_dpf v=query func=min 
includeSpanScore=true}"
+)
+);
   }
 
   public void testPayloadCheckQuery() throws Exception {
-try {
-  assertQueryEquals
-  ("payload_check"
-  , "{!payload_check f=foo_dpf payloads=2}one"
-  , "{!payload_check f=foo_dpf payloads=2}two"
-  );
-  fail("queries should not have been equal");
-} catch(AssertionFailedError e) {
-  assertTrue("queries were not equal, as expected", true);
-}
+expectThrows(AssertionFailedError.class, "queries were not equal, as 
expected",
--- End diff --

[-1] I think this exception message here is backwards.  This assertion 
fails if the queries _were_ equal, but the message indicates that the problem 
is that they were !=.  Using the message from the original `fail()` invocation 
would probably work better here.
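
To make the point concrete, a self-contained, hypothetical sketch of why the wording matters: the message given to an expectThrows-style helper is only surfaced when the expected exception is *not* thrown, so it should describe that failure. The helper below is a simplified stand-in, not Solr's actual test harness:

```java
import java.util.concurrent.Callable;

public class ExpectThrowsMessageDemo {
    // Simplified stand-in for expectThrows: the failure message is used only
    // on the path where the expected exception never arrives, so it should
    // read like "queries should not have been equal", not the opposite.
    static <T extends Throwable> T expectThrows(Class<T> type, String failureMessage,
                                                Callable<?> call) {
        try {
            call.call();
        } catch (Throwable t) {
            if (type.isInstance(t)) {
                return type.cast(t);
            }
            throw new AssertionError(failureMessage, t);
        }
        throw new AssertionError(failureMessage);
    }

    public static void main(String[] args) {
        // The lambda simulates an equality assertion on two queries that
        // wrongly compare equal: it completes without throwing, so
        // expectThrows fails with the (correctly worded) message.
        try {
            expectThrows(AssertionError.class, "queries should not have been equal",
                    () -> "queries compared equal");
            throw new IllegalStateException("unreachable");
        } catch (AssertionError expected) {
            System.out.println(expected.getMessage());
            // prints "queries should not have been equal"
        }
    }
}
```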


---




[GitHub] lucene-solr pull request #464: WIP SOLR-12555: refactor tests in package org...

2018-10-22 Thread gerlowskija
Github user gerlowskija commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/464#discussion_r227021582
  
--- Diff: solr/core/src/test/org/apache/solr/search/QueryEqualityTest.java 
---
@@ -1190,14 +1190,8 @@ public void testCompares() throws Exception {
 assertFuncEquals("gte(foo_i,2)", "gte(foo_i,2)");
 assertFuncEquals("eq(foo_i,2)", "eq(foo_i,2)");
 
-boolean equals = false;
-try {
-  assertFuncEquals("eq(foo_i,2)", "lt(foo_i,2)");
-  equals = true;
-} catch (AssertionError e) {
-  //expected
-}
-assertFalse(equals);
+expectThrows(AssertionError.class, "expected error, functions are not 
equal",
--- End diff --

[0] Not suggesting you change it here, but it's kind of weird that there's 
just not an `assertFuncNotEquals`


---




[GitHub] lucene-solr pull request #464: WIP SOLR-12555: refactor tests in package org...

2018-10-22 Thread gerlowskija
Github user gerlowskija commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/464#discussion_r227024512
  
--- Diff: solr/core/src/test/org/apache/solr/search/QueryEqualityTest.java 
---
@@ -1260,16 +1248,14 @@ public void testBoolQuery() throws Exception {
 "must='{!lucene}foo_s:c' filter='{!lucene}foo_s:d' 
filter='{!lucene}foo_s:e'}",
 "{!bool must='{!lucene}foo_s:c' filter='{!lucene}foo_s:d' " +
 "must_not='{!lucene}foo_s:a' should='{!lucene}foo_s:b' 
filter='{!lucene}foo_s:e'}");
-try {
-  assertQueryEquals
-  ("bool"
-  , "{!bool must='{!lucene}foo_s:a'}"
-  , "{!bool should='{!lucene}foo_s:a'}"
-  );
-  fail("queries should not have been equal");
-} catch(AssertionFailedError e) {
-  assertTrue("queries were not equal, as expected", true);
-}
+
+expectThrows(AssertionFailedError.class, "queries were not equal, as 
expected",
--- End diff --

[-1] ditto re: wrong String message in `expectThrows` here


---




[GitHub] lucene-solr pull request #464: WIP SOLR-12555: refactor tests in package org...

2018-10-22 Thread gerlowskija
Github user gerlowskija commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/464#discussion_r227024256
  
--- Diff: solr/core/src/test/org/apache/solr/search/QueryEqualityTest.java 
---
@@ -1214,29 +1208,23 @@ public void testPayloadScoreQuery() throws 
Exception {
 // I don't see a precedent to test query inequality in here, so doing 
a `try`
 // There was a bug with PayloadScoreQuery's .equals() method that said 
two queries were equal with different includeSpanScore settings
 
-try {
-  assertQueryEquals
-  ("payload_score"
-  , "{!payload_score f=foo_dpf v=query func=min 
includeSpanScore=false}"
-  , "{!payload_score f=foo_dpf v=query func=min 
includeSpanScore=true}"
-  );
-  fail("queries should not have been equal");
-} catch(AssertionFailedError e) {
-  assertTrue("queries were not equal, as expected", true);
-}
+expectThrows(AssertionFailedError.class, "queries were not equal, as 
expected",
--- End diff --

[-1] I think this exception message here is backwards.  This assertion 
fails if the queries _were_ equal, but the message implies that the problem is 
that they were !=.  Using the message from the original `fail()` invocation 
would probably work better here ("queries should not have been equal")


---




[GitHub] lucene-solr pull request #464: WIP SOLR-12555: refactor tests in package org...

2018-10-22 Thread gerlowskija
Github user gerlowskija commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/464#discussion_r227025974
  
--- Diff: 
solr/core/src/test/org/apache/solr/search/TestExtendedDismaxParser.java ---
@@ -656,45 +656,38 @@ public void testAliasingBoost() throws Exception {
   public void testCyclicAliasing() throws Exception {
 try {
   ignoreException(".*Field aliases lead to a cycle.*");
-  try {
-h.query(req("defType","edismax", "q","blarg", "qf","who", 
"f.who.qf","name","f.name.qf","who"));
-fail("Simple cyclic alising not detected");
-  } catch (SolrException e) {
-assertTrue(e.getCause().getMessage().contains("Field aliases lead 
to a cycle"));
-  }
-  
-  try {
-h.query(req("defType","edismax", "q","blarg", "qf","who", 
"f.who.qf","name","f.name.qf","myalias", "f.myalias.qf","who"));
-fail("Cyclic alising not detected");
-  } catch (SolrException e) {
-assertTrue(e.getCause().getMessage().contains("Field aliases lead 
to a cycle"));
-  }
-  
+
+  SolrException e = expectThrows(SolrException.class, "Simple cyclic 
alising not detected",
+  () -> h.query(req("defType","edismax", "q","blarg", "qf","who", 
"f.who.qf","name","f.name.qf","who")));
+  assertCyclicDetectionErrorMessage(e);
+
+  e = expectThrows(SolrException.class, "Cyclic alising not detected",
+  () -> h.query(req("defType","edismax", "q","blarg", "qf","who", 
"f.who.qf","name","f.name.qf","myalias", "f.myalias.qf","who")));
+  assertCyclicDetectionErrorMessage(e);
+
   try {
 h.query(req("defType","edismax", "q","blarg", "qf","field1", 
"f.field1.qf","field2 field3","f.field2.qf","field4 field5", 
"f.field4.qf","field5", "f.field5.qf","field6", "f.field3.qf","field6"));
-  } catch (SolrException e) {
-assertFalse("This is not cyclic alising", 
e.getCause().getMessage().contains("Field aliases lead to a cycle"));
-assertTrue(e.getCause().getMessage().contains("not a valid field 
name"));
-  }
-  
-  try {
-h.query(req("defType","edismax", "q","blarg", "qf","field1", 
"f.field1.qf","field2 field3", "f.field2.qf","field4 field5", 
"f.field4.qf","field5", "f.field5.qf","field4"));
-fail("Cyclic alising not detected");
-  } catch (SolrException e) {
-assertTrue(e.getCause().getMessage().contains("Field aliases lead 
to a cycle"));
-  }
-  
-  try {
-h.query(req("defType","edismax", "q","who:(Zapp Pig)", 
"qf","text", "f.who.qf","name","f.name.qf","myalias", "f.myalias.qf","who"));
-fail("Cyclic alising not detected");
-  } catch (SolrException e) {
-assertTrue(e.getCause().getMessage().contains("Field aliases lead 
to a cycle"));
+  } catch (SolrException ex) {
--- End diff --

[Q] Is there a reason that this example also couldn't be changed into an 
`expectThrows`?


---




[jira] [Created] (LUCENE-8538) Add Simple WKT Shape Parser

2018-10-22 Thread Nicholas Knize (JIRA)
Nicholas Knize created LUCENE-8538:
--

 Summary: Add Simple WKT Shape Parser
 Key: LUCENE-8538
 URL: https://issues.apache.org/jira/browse/LUCENE-8538
 Project: Lucene - Core
  Issue Type: New Feature
Reporter: Nicholas Knize


Similar to {{SimpleGeoJSONPolygonParser}} for creating {{Polygon}} objects from 
GeoJSON, it would be helpful to have a {{SimpleWKTParser}} for creating lucene 
geometries from WKT. Not only is this useful for simple tests, but also helps 
for benchmarking from real world data (e.g., PlanetOSM).






[jira] [Commented] (SOLR-12829) Add plist (parallel list) Streaming Expression

2018-10-22 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12829?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16659489#comment-16659489
 ] 

ASF subversion and git services commented on SOLR-12829:


Commit fcaea07f3c8cba34906ca02f40fb1d2c40badc08 in lucene-solr's branch 
refs/heads/master from [~joel.bernstein]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=fcaea07 ]

SOLR-12829: Add plist (parallel list) Streaming Expression


> Add plist (parallel list) Streaming Expression
> --
>
> Key: SOLR-12829
> URL: https://issues.apache.org/jira/browse/SOLR-12829
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>Priority: Major
> Attachments: SOLR-12829.patch, SOLR-12829.patch
>
>
> The *plist* Streaming Expression wraps any number of streaming expressions 
> and opens them in parallel. The results of each of the streams are then 
> iterated in the order they appear in the list. Since many streams perform 
> heavy pushed down operations when opened, like the FacetStream, this will 
> result in the parallelization of these operations. For example plist could 
> wrap several facet() expressions and open them each in parallel, which would 
> cause the facets to be run in parallel, on different replicas in the cluster. 
> Here is sample syntax:
> {code:java}
> plist(tuple(facet1=facet(...)), 
>   tuple(facet2=facet(...)),
>   tuple(facet3=facet(...))) {code}
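The mechanics described above — open all wrapped streams in parallel, then iterate their results in list order — can be sketched generically (illustrative Java with plain suppliers, not Solr's actual TupleStream API):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.function.Supplier;

public class PlistSketch {
  // Opens every source concurrently (the expensive "open" work happens in
  // parallel), then concatenates results in the order the sources were given,
  // regardless of which source finished first.
  public static <T> List<T> plist(List<Supplier<List<T>>> sources) throws Exception {
    ExecutorService pool = Executors.newFixedThreadPool(Math.max(1, sources.size()));
    try {
      List<Future<List<T>>> opened = new ArrayList<>();
      for (Supplier<List<T>> s : sources) {
        opened.add(pool.submit(s::get)); // kick off all opens in parallel
      }
      List<T> out = new ArrayList<>();
      for (Future<List<T>> f : opened) {
        out.addAll(f.get()); // consume in list order
      }
      return out;
    } finally {
      pool.shutdown();
    }
  }
}
```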






[jira] [Commented] (LUCENE-8374) Reduce reads for sparse DocValues

2018-10-22 Thread David Smiley (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16659478#comment-16659478
 ] 

David Smiley commented on LUCENE-8374:
--

Feel free to post version-specific patches and/or create feature branches if it 
suits you.  I'm just telling you how to use the JIRA "fix version" field; 
that's all.  I know you're new to community development here so I'm just trying 
to help out.

> Reduce reads for sparse DocValues
> -
>
> Key: LUCENE-8374
> URL: https://issues.apache.org/jira/browse/LUCENE-8374
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/codecs
>Affects Versions: 7.5, master (8.0)
>Reporter: Toke Eskildsen
>Priority: Major
>  Labels: performance
> Attachments: LUCENE-8374.patch, LUCENE-8374.patch, LUCENE-8374.patch, 
> LUCENE-8374.patch, LUCENE-8374.patch, LUCENE-8374_branch_7_3.patch, 
> LUCENE-8374_branch_7_3.patch.20181005, LUCENE-8374_branch_7_4.patch, 
> LUCENE-8374_branch_7_5.patch
>
>
> The {{Lucene70DocValuesProducer}} has the internal classes 
> {{SparseNumericDocValues}} and {{BaseSortedSetDocValues}} (sparse code path), 
> which again uses {{IndexedDISI}} to handle the docID -> value-ordinal lookup. 
> The value-ordinal is the index of the docID assuming an abstract tightly 
> packed monotonically increasing list of docIDs: If the docIDs with 
> corresponding values are {{[0, 4, 1432]}}, their value-ordinals will be {{[0, 
> 1, 2]}}.
> h2. Outer blocks
> The lookup structure of {{IndexedDISI}} consists of blocks of 2^16 values 
> (65536), where each block can be either {{ALL}}, {{DENSE}} (2^12 to 2^16 
> values) or {{SPARSE}} (< 2^12 values ~= 6%). Consequently blocks vary quite a 
> lot in size and ordinal resolving strategy.
> When a sparse Numeric DocValue is needed, the code first locates the block 
> containing the wanted docID flag. It does so by iterating blocks one-by-one 
> until it reaches the needed one, where each iteration requires a lookup in 
> the underlying {{IndexSlice}}. For a common memory mapped index, this 
> translates to either a cached request or a read operation. If a segment has 
6M documents, worst-case is 91 lookups. In our web archive, our segments have 
~300M values: a worst-case of 4577 lookups!
> One obvious solution is to use a lookup-table for blocks: A long[]-array with 
> an entry for each block. For 6M documents, that is < 1KB and would allow for 
> direct jumping (a single lookup) in all instances. Unfortunately this 
> lookup-table cannot be generated upfront when the writing of values is purely 
> streaming. It can be appended to the end of the stream before it is closed, 
> but without knowing the position of the lookup-table the reader cannot seek 
> to it.
> One strategy for creating such a lookup-table would be to generate it during 
> reads and cache it for next lookup. This does not fit directly into how 
> {{IndexedDISI}} currently works (it is created anew for each invocation), but 
> could probably be added with a little work. An advantage to this is that this 
> does not change the underlying format and thus could be used with existing 
> indexes.
> h2. The lookup structure inside each block
> If {{ALL}} of the 2^16 values are defined, the structure is empty and the 
> ordinal is simply the requested docID with some modulo and multiply math. 
> Nothing to improve there.
> If the block is {{DENSE}} (2^12 to 2^16 values are defined), a bitmap is used 
> and the number of set bits up to the wanted index (the docID modulo the block 
> origo) are counted. That bitmap is a long[1024], meaning that worst case is 
> to lookup and count all set bits for 1024 longs!
> One known solution to this is to use a [rank 
> structure|https://en.wikipedia.org/wiki/Succinct_data_structure]. I 
> [implemented 
> it|https://github.com/tokee/lucene-solr/blob/solr5894/solr/core/src/java/org/apache/solr/search/sparse/count/plane/RankCache.java]
>  for a related project and with that, the rank-overhead for a {{DENSE}} 
> block would be long[32] and would ensure a maximum of 9 lookups. It is not 
> trivial to build the rank-structure and caching it (assuming all blocks are 
> dense) for 6M documents would require 22 KB (3.17% overhead). It would be far 
> better to generate the rank-structure at index time and store it immediately 
> before the bitset (this is possible with streaming as each block is fully 
> resolved before flushing), but of course that would require a change to the 
> codec.
> If {{SPARSE}} (< 2^12 values ~= 6%) are defined, the docIDs are simply in the 
> form of a list. As a comment in the code suggests, a binary search through 
> these would be faster, although that would mean seeking backwards. If that is 
> not acceptable, I don't have any immediate idea for avoiding 
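The rank idea for {{DENSE}} blocks can be illustrated outside Lucene: precompute cumulative popcounts at fixed intervals over the long[] bitmap so a rank query touches only a few words instead of scanning from the block start. A simplified sketch — the interval size and layout here are arbitrary, not IndexedDISI's actual format:

```java
public class RankSketch {
  static final int INTERVAL = 8; // one cached cumulative count per 8 longs (sketch value)

  // rank[i] = number of set bits in bits[0 .. i*INTERVAL - 1]
  public static int[] buildRank(long[] bits) {
    int[] rank = new int[(bits.length + INTERVAL - 1) / INTERVAL];
    int sum = 0;
    for (int i = 0; i < bits.length; i++) {
      if (i % INTERVAL == 0) {
        rank[i / INTERVAL] = sum;
      }
      sum += Long.bitCount(bits[i]);
    }
    return rank;
  }

  // Number of set bits strictly before bit position index: start from the
  // nearest cached cumulative count, then popcount at most INTERVAL-1 whole
  // words plus the partial final word.
  public static int rank(long[] bits, int[] rank, int index) {
    int word = index >>> 6;
    int count = rank[word / INTERVAL];
    for (int i = word - (word % INTERVAL); i < word; i++) {
      count += Long.bitCount(bits[i]);
    }
    count += Long.bitCount(bits[word] & ((1L << (index & 63)) - 1));
    return count;
  }
}
```

With INTERVAL tuned to the block size, the worst case drops from counting all 1024 longs to a single cached read plus a handful of popcounts, which is the effect described above.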

[JENKINS] Lucene-Solr-Tests-7.x - Build # 967 - Unstable

2018-10-22 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-7.x/967/

1 tests failed.
FAILED:  
org.apache.solr.cloud.api.collections.CollectionsAPIDistributedZkTest.addReplicaTest

Error Message:
Error from server at http://127.0.0.1:45903/solr: KeeperErrorCode = NoNode for 
/overseer/collection-queue-work/qnr-26

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at http://127.0.0.1:45903/solr: KeeperErrorCode = NoNode for 
/overseer/collection-queue-work/qnr-26
at 
__randomizedtesting.SeedInfo.seed([2A96EC61ED923D17:B936BB47639A76C6]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:643)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:483)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:413)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1107)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:884)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:817)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:211)
at 
org.apache.solr.cloud.api.collections.CollectionsAPIDistributedZkTest.addReplicaTest(CollectionsAPIDistributedZkTest.java:669)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
 

[GitHub] lucene-solr issue #462: LUCENE-8523: Fix typo for JapaneseNumberFilterFactor...

2018-10-22 Thread ajhalani
Github user ajhalani commented on the issue:

https://github.com/apache/lucene-solr/pull/462
  
Merged via commit e14bacfac48501f827997ba0ac8cb20702834fef. Thanks Alan. 


---




[GitHub] lucene-solr pull request #462: LUCENE-8523: Fix typo for JapaneseNumberFilte...

2018-10-22 Thread ajhalani
Github user ajhalani closed the pull request at:

https://github.com/apache/lucene-solr/pull/462


---




Re: precommit, java8 and Solr 5.x

2018-10-22 Thread Erick Erickson
Thanks. I think that's also the javac.source, but specifying it fully
doesn't change the behavior. I did manage to get it to work just now
so I'm good.

Best,
Erick
On Mon, Oct 22, 2018 at 11:14 AM Alexandre Rafalovitch
 wrote:
>
> Have you tried -Dant.build.javac.source=1.8? Based on
> http://ant.apache.org/manual/running.html#sysprops (though maybe those
> are environmental properties).
>
> Regards,
>Alex.
> On Mon, 22 Oct 2018 at 11:03, Erick Erickson  wrote:
> >
> > Is there a magic flag to convince the precommit step to allow Java 8
> > constructs (lambdas in this case) when compiling a 5x version? I need
> > to backport some code.
> >
> > I tried this:
> > ant -Djavac.source=1.8 -Djavac.target=1.8 -Dsource=1.8 precommit
> > and
> > ant -Djavac.source=1.8 -Djavac.target=1.8 -Dsource=8 precommit
> >
> > to no avail. It all compiles fine, but generates this error:
> >
> > -ecj-javadoc-lint-src:
> > .
> > .
> > .
> > [ecj-lint] Lambda expressions are allowed only at source level 1.8 or above
> >
> > Meanwhile I'll dig of course.
> >
> > Thanks,
> > Erick
> >



[jira] [Commented] (SOLR-5004) Allow a shard to be split into 'n' sub-shards

2018-10-22 Thread Anshum Gupta (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-5004?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16659348#comment-16659348
 ] 

Anshum Gupta commented on SOLR-5004:


Nevermind, seems like I updated that patch on Friday and forgot. The Jira page 
had been open over the weekend and I didn't refresh it.

There's a failing core test that has nothing to do with this change so I'll 
just beast that out and commit if all looks good.

> Allow a shard to be split into 'n' sub-shards
> -
>
> Key: SOLR-5004
> URL: https://issues.apache.org/jira/browse/SOLR-5004
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrCloud
>Affects Versions: 4.3, 4.3.1
>Reporter: Anshum Gupta
>Assignee: Anshum Gupta
>Priority: Major
> Attachments: SOLR-5004.01.patch, SOLR-5004.02.patch, 
> SOLR-5004.03.patch, SOLR-5004.04.patch, SOLR-5004.patch, SOLR-5004.patch
>
>
> As of now, a SPLITSHARD call is hardcoded to create 2 sub-shards from the 
> parent one. Accept a parameter to split into n sub-shards.
> Default it to 2 and perhaps also have an upper bound to it.






[jira] [Commented] (SOLR-5004) Allow a shard to be split into 'n' sub-shards

2018-10-22 Thread Anshum Gupta (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-5004?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16659341#comment-16659341
 ] 

Anshum Gupta commented on SOLR-5004:


[~cpoerschke] - Here's an updated patch that accounts for the suggestions you 
had. There were a few things that I've fixed e.g. the section about core admin 
in the documentation shouldn't have any changes, and error messages w.r.t. 
using multiple conflicting params in parallel and a test for the same. I'll 
commit it once the test run completes.

> Allow a shard to be split into 'n' sub-shards
> -
>
> Key: SOLR-5004
> URL: https://issues.apache.org/jira/browse/SOLR-5004
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrCloud
>Affects Versions: 4.3, 4.3.1
>Reporter: Anshum Gupta
>Assignee: Anshum Gupta
>Priority: Major
> Attachments: SOLR-5004.01.patch, SOLR-5004.02.patch, 
> SOLR-5004.03.patch, SOLR-5004.04.patch, SOLR-5004.patch, SOLR-5004.patch
>
>
> As of now, a SPLITSHARD call is hardcoded to create 2 sub-shards from the 
> parent one. Accept a parameter to split into n sub-shards.
> Default it to 2 and perhaps also have an upper bound to it.






[jira] [Updated] (SOLR-5004) Allow a shard to be split into 'n' sub-shards

2018-10-22 Thread Anshum Gupta (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-5004?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anshum Gupta updated SOLR-5004:
---
Attachment: SOLR-5004.04.patch

> Allow a shard to be split into 'n' sub-shards
> -
>
> Key: SOLR-5004
> URL: https://issues.apache.org/jira/browse/SOLR-5004
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrCloud
>Affects Versions: 4.3, 4.3.1
>Reporter: Anshum Gupta
>Assignee: Anshum Gupta
>Priority: Major
> Attachments: SOLR-5004.01.patch, SOLR-5004.02.patch, 
> SOLR-5004.03.patch, SOLR-5004.04.patch, SOLR-5004.patch, SOLR-5004.patch
>
>
> As of now, a SPLITSHARD call is hardcoded to create 2 sub-shards from the 
> parent one. Accept a parameter to split into n sub-shards.
> Default it to 2 and perhaps also have an upper bound to it.






[jira] [Commented] (SOLR-12866) Reproducing TestLocalFSCloudBackupRestore and TestHdfsCloudBackupRestore failures

2018-10-22 Thread Steve Rowe (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16659229#comment-16659229
 ] 

Steve Rowe commented on SOLR-12866:
---

Another reproducing seed for TestLocalFSCloudBackupRestore, from 
[https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-MacOSX/890/] (reproduced for 
me on Linux on branch_7x with java8):

{noformat}
Checking out Revision 36ce83bc9add02a900e38b396b42c3c729846598 
(refs/remotes/origin/branch_7x)
[...]
   [junit4]   2> NOTE: reproduce with: ant test  
-Dtestcase=TestLocalFSCloudBackupRestore -Dtests.method=test 
-Dtests.seed=1F865FE108D99C42 -Dtests.slow=true -Dtests.locale=nnh 
-Dtests.timezone=Canada/Saskatchewan -Dtests.asserts=true 
-Dtests.file.encoding=ISO-8859-1
   [junit4] FAILURE 26.7s J1 | TestLocalFSCloudBackupRestore.test <<<
   [junit4]> Throwable #1: java.lang.AssertionError: Node 
127.0.0.1:62709_solr has 7 replicas. Expected num replicas : 6. state: 
   [junit4]> 
DocCollection(backuprestore_restored//collections/backuprestore_restored/state.json/38)={
   [junit4]>   "pullReplicas":1,
   [junit4]>   "replicationFactor":2,
   [junit4]>   "shards":{
   [junit4]> "shard2":{
   [junit4]>   "range":"0-7fff",
   [junit4]>   "state":"active",
   [junit4]>   "replicas":{
   [junit4]> "core_node122":{
   [junit4]>   "core":"backuprestore_restored_shard2_replica_n121",
   [junit4]>   "base_url":"http://127.0.0.1:62709/solr",
   [junit4]>   "node_name":"127.0.0.1:62709_solr",
   [junit4]>   "state":"active",
   [junit4]>   "type":"NRT",
   [junit4]>   "force_set_state":"false",
   [junit4]>   "leader":"true"},
   [junit4]> "core_node128":{
   [junit4]>   "core":"backuprestore_restored_shard2_replica_n127",
   [junit4]>   "base_url":"http://127.0.0.1:62709/solr",
   [junit4]>   "node_name":"127.0.0.1:62709_solr",
   [junit4]>   "state":"active",
   [junit4]>   "type":"NRT",
   [junit4]>   "force_set_state":"false"},
   [junit4]> "core_node130":{
   [junit4]>   "core":"backuprestore_restored_shard2_replica_t129",
   [junit4]>   "base_url":"http://127.0.0.1:62709/solr",
   [junit4]>   "node_name":"127.0.0.1:62709_solr",
   [junit4]>   "state":"active",
   [junit4]>   "type":"TLOG",
   [junit4]>   "force_set_state":"false"},
   [junit4]> "core_node132":{
   [junit4]>   "core":"backuprestore_restored_shard2_replica_p131",
   [junit4]>   "base_url":"http://127.0.0.1:62710/solr",
   [junit4]>   "node_name":"127.0.0.1:62710_solr",
   [junit4]>   "state":"active",
   [junit4]>   "type":"PULL",
   [junit4]>   "force_set_state":"false"}},
   [junit4]>   "stateTimestamp":"1540067298009810113"},
   [junit4]> "shard1_1":{
   [junit4]>   "range":"c000-",
   [junit4]>   "state":"active",
   [junit4]>   "replicas":{
   [junit4]> "core_node124":{
   [junit4]>   
"core":"backuprestore_restored_shard1_1_replica_n123",
   [junit4]>   "base_url":"http://127.0.0.1:62710/solr",
   [junit4]>   "node_name":"127.0.0.1:62710_solr",
   [junit4]>   "state":"active",
   [junit4]>   "type":"NRT",
   [junit4]>   "force_set_state":"false",
   [junit4]>   "leader":"true"},
   [junit4]> "core_node134":{
   [junit4]>   
"core":"backuprestore_restored_shard1_1_replica_n133",
   [junit4]>   "base_url":"http://127.0.0.1:62709/solr",
   [junit4]>   "node_name":"127.0.0.1:62709_solr",
   [junit4]>   "state":"active",
   [junit4]>   "type":"NRT",
   [junit4]>   "force_set_state":"false"},
   [junit4]> "core_node136":{
   [junit4]>   
"core":"backuprestore_restored_shard1_1_replica_t135",
   [junit4]>   "base_url":"http://127.0.0.1:62709/solr",
   [junit4]>   "node_name":"127.0.0.1:62709_solr",
   [junit4]>   "state":"active",
   [junit4]>   "type":"TLOG",
   [junit4]>   "force_set_state":"false"},
   [junit4]> "core_node138":{
   [junit4]>   
"core":"backuprestore_restored_shard1_1_replica_p137",
   [junit4]>   "base_url":"http://127.0.0.1:62710/solr",
   [junit4]>   "node_name":"127.0.0.1:62710_solr",
   [junit4]>   "state":"active",
   [junit4]>   "type":"PULL",
   [junit4]>   "force_set_state":"false"}},
   [junit4]>   "stateTimestamp":"1540067298009839609"},
   [junit4]> "shard1_0":{
   [junit4]>   

[jira] [Commented] (SOLR-9425) TestSolrConfigHandlerConcurrent failure: NullPointerException

2018-10-22 Thread Christine Poerschke (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-9425?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16659217#comment-16659217
 ] 

Christine Poerschke commented on SOLR-9425:
---

I think this is a problem with the test where two threads could do 
{{collectErrors.add(...);}} at the same time. Attached proposed patch, which 
somehow seems too easy though?

> TestSolrConfigHandlerConcurrent failure: NullPointerException
> -
>
> Key: SOLR-9425
> URL: https://issues.apache.org/jira/browse/SOLR-9425
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Steve Rowe
>Priority: Major
> Attachments: Lucene-Solr-tests-master.7870.consoleText.txt, 
> SOLR-9425.patch
>
>
> From my Jenkins - does not reproduce for me - I'll attach the Jenkins log: 
> {noformat}
> Checking out Revision f8536ce72606af6c75cf9137f354da57bb0f3dbc 
> (refs/remotes/origin/master)
> [...]
>   [junit4]   2> NOTE: reproduce with: ant test  
> -Dtestcase=TestSolrConfigHandlerConcurrent -Dtests.method=test 
> -Dtests.seed=7B5B674C5C76D216 -Dtests.slow=true -Dtests.locale=lt 
> -Dtests.timezone=Antarctica/Troll -Dtests.asserts=true 
> -Dtests.file.encoding=UTF-8
>   [junit4] ERROR   19.9s J6  | TestSolrConfigHandlerConcurrent.test <<<
>   [junit4]> Throwable #1: java.lang.NullPointerException
>   [junit4]>   at 
> __randomizedtesting.SeedInfo.seed([7B5B674C5C76D216:F30F5896F28ABFEE]:0)
>   [junit4]>   at 
> org.apache.solr.handler.TestSolrConfigHandlerConcurrent.test(TestSolrConfigHandlerConcurrent.java:109)
>   [junit4]>   at 
> org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:985)
>   [junit4]>   at 
> org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:960)
>   [junit4]>   at java.lang.Thread.run(Thread.java:745)
> {noformat}
> Looking through email notifications, this happens fairly regularly on ASF and 
> Policeman Jenkins in addition to mine - the earliest I found was on February 
> 12, 2015.






[jira] [Updated] (SOLR-9425) TestSolrConfigHandlerConcurrent failure: NullPointerException

2018-10-22 Thread Christine Poerschke (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-9425?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Christine Poerschke updated SOLR-9425:
--
Attachment: SOLR-9425.patch

> TestSolrConfigHandlerConcurrent failure: NullPointerException
> -
>
> Key: SOLR-9425
> URL: https://issues.apache.org/jira/browse/SOLR-9425
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Steve Rowe
>Priority: Major
> Attachments: Lucene-Solr-tests-master.7870.consoleText.txt, 
> SOLR-9425.patch
>
>
> From my Jenkins - does not reproduce for me - I'll attach the Jenkins log: 
> {noformat}
> Checking out Revision f8536ce72606af6c75cf9137f354da57bb0f3dbc 
> (refs/remotes/origin/master)
> [...]
>   [junit4]   2> NOTE: reproduce with: ant test  
> -Dtestcase=TestSolrConfigHandlerConcurrent -Dtests.method=test 
> -Dtests.seed=7B5B674C5C76D216 -Dtests.slow=true -Dtests.locale=lt 
> -Dtests.timezone=Antarctica/Troll -Dtests.asserts=true 
> -Dtests.file.encoding=UTF-8
>   [junit4] ERROR   19.9s J6  | TestSolrConfigHandlerConcurrent.test <<<
>   [junit4]> Throwable #1: java.lang.NullPointerException
>   [junit4]>   at 
> __randomizedtesting.SeedInfo.seed([7B5B674C5C76D216:F30F5896F28ABFEE]:0)
>   [junit4]>   at 
> org.apache.solr.handler.TestSolrConfigHandlerConcurrent.test(TestSolrConfigHandlerConcurrent.java:109)
>   [junit4]>   at 
> org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:985)
>   [junit4]>   at 
> org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:960)
>   [junit4]>   at java.lang.Thread.run(Thread.java:745)
> {noformat}
> Looking through email notifications, this happens fairly regularly on ASF and 
> Policeman Jenkins in addition to mine - the earliest I found was on February 
> 12, 2015.






[JENKINS] Lucene-Solr-7.x-Windows (32bit/jdk1.8.0_172) - Build # 846 - Unstable!

2018-10-22 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Windows/846/
Java: 32bit/jdk1.8.0_172 -client -XX:+UseSerialGC

1 tests failed.
FAILED:  org.apache.solr.cloud.MoveReplicaTest.test

Error Message:


Stack Trace:
java.lang.AssertionError
at 
__randomizedtesting.SeedInfo.seed([B3941D605F5F4EFE:3BC022BAF1A32306]:0)
at org.junit.Assert.fail(Assert.java:92)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertFalse(Assert.java:68)
at org.junit.Assert.assertFalse(Assert.java:79)
at org.apache.solr.cloud.MoveReplicaTest.test(MoveReplicaTest.java:146)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)




Build Log:
[...truncated 13198 lines...]
   [junit4] Suite: org.apache.solr.cloud.MoveReplicaTest
   [junit4]   2> 1718510 INFO  
(SUITE-MoveReplicaTest-seed#[B3941D605F5F4EFE]-worker) [] 
o.a.s.SolrTestCaseJ4 SecureRandom sanity checks: 
test.solr.allowed.securerandom=null & 

Re: Lucene/Solr 8.0

2018-10-22 Thread Cassandra Targett
I'm a bit delayed, but +1 on the 7.6 and 8.0 plan from me too.

On Fri, Oct 19, 2018 at 7:18 AM Erick Erickson 
wrote:

> +1, this gives us all a chance to prioritize getting the blockers out
> of the way in a careful manner.
> On Fri, Oct 19, 2018 at 7:56 AM jim ferenczi 
> wrote:
> >
> > +1 too. With this new perspective we could create the branch just after
> the 7.6 release and target the 8.0 release for January 2019 which gives
> almost 3 months to finish the blockers?
> >
> > On Thu, Oct 18, 2018 at 23:56, David Smiley  wrote:
> >>
> >> +1 to a 7.6 —lots of stuff in there
> >> On Thu, Oct 18, 2018 at 4:47 PM Nicholas Knize 
> wrote:
> >>>
> >>> If we're planning to postpone cutting an 8.0 branch until a few weeks
> from now then I'd like to propose (and volunteer to RM) a 7.6 release
> targeted for late November or early December (following the typical 2 month
> release pattern). It feels like this might give a little breathing room for
> finishing up 8.0 blockers? And looking at the change log, there appears to be
> a healthy list of features, bug fixes, and improvements to both Solr and
> Lucene that warrant a 7.6 release? Personally I wouldn't mind releasing the
> LatLonShape encoding changes in LUCENE-8521 and selective indexing work
> done in LUCENE-8496. Any objections or thoughts?
> >>>
> >>> - Nick
> >>>
> >>>
> >>> On Thu, Oct 18, 2018 at 5:32 AM Đạt Cao Mạnh 
> wrote:
> 
>  Thanks Cassandra and Jim,
> 
>  I created a blocker issue for Solr 8.0, SOLR-12883. Currently the
> jira/http2 branch has a draft, not-yet-mature implementation of SPNEGO
> authentication that is enough to make the tests pass; it will be removed
> when SOLR-12883 is resolved. Therefore I don't see any problem with
> merging jira/http2 into master next week.
> 
>  On Thu, Oct 18, 2018 at 2:33 AM jim ferenczi 
> wrote:
> >
> > > But if you're working with a different assumption - that just the
> existence of the branch does not stop Dat from still merging his work and
> the work being included in 8.0 - then I agree, waiting for him to merge
> doesn't need to stop the creation of the branch.
> >
> > Yes that's my reasoning. This issue is a blocker so we won't release
> without it but we can work on the branch in the meantime and let other
> people work on new features that are not targeted to 8.
> >
> > On Wed, Oct 17, 2018 at 20:51, Cassandra Targett <
> casstarg...@gmail.com> wrote:
> >>
> >> OK - I was making an assumption that the timeline for the first 8.0
> RC would be ASAP after the branch is created.
> >>
> >> It's a common perception that making a branch freezes adding new
> features to the release, perhaps in an unofficial way (more of a courtesy
> than a rule). But if you're working with a different assumption -
> that just the existence of the branch does not stop Dat from still merging
> his work and the work being included in 8.0 - then I agree, waiting for him
> to merge doesn't need to stop the creation of the branch.
> >>
> >> If, however, once the branch is there people object to Dat merging
> his work because it's "too late", then the branch shouldn't be created yet
> because we want to really try to clear that blocker for 8.0.
> >>
> >> Cassandra
> >>
> >> On Wed, Oct 17, 2018 at 12:13 PM jim ferenczi <
> jim.feren...@gmail.com> wrote:
> >>>
> >>> Ok thanks for answering.
> >>>
> >>> > - I think Solr needs a couple more weeks since the work Dat is
> doing isn't quite done yet.
> >>>
> >>> We can wait a few more weeks to create the branch but I don't
> think that one action (creating the branch) prevents the other (the work
> Dat is doing).
> >>> HTTP/2 is one of the blockers for the release, but it can be done in
> master and backported to the appropriate branch like any other feature? We
> just need an issue with the blocker label to ensure that
> >>> we don't miss it ;). Creating the branch early would also help in
> case you don't want to release all the work at once in 8.0.0.
> >>> Next week was just a proposal; what I meant was soon, because we
> target a release in a few months.
> >>>
> >>>
> >>> On Wed, Oct 17, 2018 at 17:52, Cassandra Targett <
> casstarg...@gmail.com> wrote:
> 
>  IMO next week is a bit too soon for the branch - I think Solr
> needs a couple more weeks since the work Dat is doing isn't quite done yet.
> 
>  Solr needs the HTTP/2 work Dat has been doing, and he told me
> yesterday he feels it is nearly ready to be merged into master. However, it
> does require a new release of Jetty so that Solr is able to retain Kerberos
> authentication support (Dat has been working with that team to help test
> the changes Jetty needs to support Kerberos with HTTP/2). They should get
> that release out soon, but we are dependent on them a little bit.
> 
>  He can hopefully reply with 

[JENKINS-EA] Lucene-Solr-master-Linux (64bit/jdk-12-ea+12) - Build # 23074 - Still Unstable!

2018-10-22 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/23074/
Java: 64bit/jdk-12-ea+12 -XX:+UseCompressedOops -XX:+UseParallelGC

20 tests failed.
FAILED:  org.apache.solr.TestTolerantSearch.testGetFieldsPhaseError

Error Message:
Error from server at https://127.0.0.1:37367/solr/collection1: 
java.lang.NullPointerException  at 
org.apache.solr.handler.component.MoreLikeThisComponent.handleResponses(MoreLikeThisComponent.java:162)
  at 
org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:426)
  at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:199)
  at org.apache.solr.core.SolrCore.execute(SolrCore.java:2541)  at 
org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:709)  at 
org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:515)  at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:377)
  at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:323)
  at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1642)
  at 
org.apache.solr.client.solrj.embedded.JettySolrRunner$DebugFilter.doFilter(JettySolrRunner.java:139)
  at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1642)
  at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:533) 
 at 
org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:255)
  at 
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1595)
  at 
org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:255)
  at 
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1317)
  at 
org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:203)
  at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:473)  
at 
org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1564)
  at 
org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:201)
  at 
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1219)
  at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:144)  
at 
org.eclipse.jetty.server.handler.gzip.GzipHandler.handle(GzipHandler.java:724)  
at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132) 
 at org.eclipse.jetty.server.Server.handle(Server.java:531)  at 
org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:352)  at 
org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:260)  at 
org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:281)
  at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:102)  at 
org.eclipse.jetty.io.ssl.SslConnection.onFillable(SslConnection.java:291)  at 
org.eclipse.jetty.io.ssl.SslConnection$3.succeeded(SslConnection.java:151)  at 
org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:102)  at 
org.eclipse.jetty.io.ChannelEndPoint$2.run(ChannelEndPoint.java:118)  at 
org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.runTask(EatWhatYouKill.java:333)
  at 
org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:310)
  at 
org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:168)
  at 
org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.run(EatWhatYouKill.java:126)
  at 
org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:366)
  at 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:762)
  at 
org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:680) 
 at java.base/java.lang.Thread.run(Thread.java:835) 

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at https://127.0.0.1:37367/solr/collection1: 
java.lang.NullPointerException
at 
org.apache.solr.handler.component.MoreLikeThisComponent.handleResponses(MoreLikeThisComponent.java:162)
at 
org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:426)
at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:199)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:2541)
at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:709)
at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:515)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:377)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:323)
at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1642)
at 
org.apache.solr.client.solrj.embedded.JettySolrRunner$DebugFilter.doFilter(JettySolrRunner.java:139)
at 

[jira] [Commented] (SOLR-12879) Query Parser for MinHash/LSH

2018-10-22 Thread Christine Poerschke (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12879?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16659170#comment-16659170
 ] 

Christine Poerschke commented on SOLR-12879:


Late to the party here. Hello.

* Would it be possible to backport to branch_7x too? LUCENE-6968 mentioned 
above appears to be in 7.0, but perhaps there are other dependencies? During the 
Lucene Hackday in Montreal, [~andyhind] explained a little about what this logic 
does, and I think it could be of interest to folks on the upcoming 7.6 
release too.

* Is the intended use case for this query parser primarily direct e.g. via the 
{{q}} and {{fq}} parameters or indirect somehow e.g. via streaming expressions? 
If the use case is direct:
** the parser could potentially be given a default name of (say) {{minhash}} 
and included in the standard plugins i.e. 
https://github.com/apache/lucene-solr/blob/releases/lucene-solr/7.5.0/solr/core/src/java/org/apache/solr/search/QParserPlugin.java#L46
*** Users (and tests) would not need to configure {{}} then.
** the parser could be included in the Solr Reference Guide e.g. the 
http://lucene.apache.org/solr/guide/7_5/other-parsers.html section which is 
https://github.com/apache/lucene-solr/blob/releases/lucene-solr/7.5.0/solr/solr-ref-guide/src/other-parsers.adoc
 in version control.

* The solr/CHANGES.txt entry lacks the customary attribution, just an oversight 
I'm sure and easily fixed.

> Query Parser for MinHash/LSH
> 
>
> Key: SOLR-12879
> URL: https://issues.apache.org/jira/browse/SOLR-12879
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: query parsers
>Affects Versions: master (8.0)
>Reporter: Andy Hind
>Assignee: Tommaso Teofili
>Priority: Major
> Fix For: master (8.0)
>
> Attachments: minhash.patch
>
>
> Following on from https://issues.apache.org/jira/browse/LUCENE-6968, provide 
> a query parser that builds queries that provide a measure of Jaccard 
> similarity. The initial patch includes banded queries that were also proposed 
> on the original issue.
>  
> I have one outstanding question:
>  * Should the score from the overall query be normalised?
> Note that the band count is currently approximate and may be one less in
> practice.
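
The banded MinHash/LSH approach the issue refers to can be sketched in plain
Python. This is purely illustrative of the general technique from LUCENE-6968,
not Solr's actual implementation; the function names, the XOR-salted hash, and
the parameter choices below are all stand-ins:

```python
import random

def minhash_signature(tokens, num_hashes=100, seed=42):
    """Approximate a set with num_hashes min-hash values; the fraction of
    matching positions between two signatures estimates Jaccard similarity."""
    rng = random.Random(seed)
    # Each "hash function" is simulated by salting Python's hash with a mask.
    masks = [rng.getrandbits(64) for _ in range(num_hashes)]
    return [min(hash(t) ^ m for t in tokens) for m in masks]

def lsh_bands(signature, band_size=5):
    """Split a signature into bands; two sets are candidates iff all rows in
    some band match, which is the 'banded query' idea from the patch."""
    return [tuple(signature[i:i + band_size])
            for i in range(0, len(signature), band_size)]

a = minhash_signature({"the", "quick", "brown", "fox"})
b = minhash_signature({"the", "quick", "brown", "dog"})
est = sum(x == y for x, y in zip(a, b)) / len(a)  # estimated Jaccard similarity
shared = set(lsh_bands(a)) & set(lsh_bands(b))    # bands that would collide
```

A banded query then becomes a disjunction over band collisions, which is why
the band count affects scoring and why normalisation of the overall score is
an open question.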



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8462) New Arabic snowball stemmer

2018-10-22 Thread Jim Ferenczi (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8462?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16659120#comment-16659120
 ] 

Jim Ferenczi commented on LUCENE-8462:
--

[~thetaphi] [~rcmuir] it seems that the patch is ready to be merged? What do 
you think? Updating snowball seems like a bigger task, so I agree with [~ryadh] 
that it could be done separately.

> New Arabic snowball stemmer
> ---
>
> Key: LUCENE-8462
> URL: https://issues.apache.org/jira/browse/LUCENE-8462
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Ryadh Dahimene
>Priority: Trivial
>  Labels: Arabic, snowball, stemmer
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Added a new Arabic snowball stemmer based on 
> [https://github.com/snowballstem/snowball/blob/master/algorithms/arabic.sbl]
> As well as an Arabic test dataset in `TestSnowballVocabData.zip` from the 
> -snowball-data- generated from the input file available here 
> -[https://github.com/snowballstem/snowball-data/tree/master/arabic]-
> [https://github.com/ibnmalik/golden-corpus-arabic/blob/develop/core/words.txt]
>  
> It also updates the {{ant patch-snowball}} target to be compatible with
> the java classes generated by the last snowball version (tree:
> 1964ce688cbeca505263c8f77e16ed923296ce7a). The {{ant patch-snowball}} target
> is backward-compatible with the version of snowball stemmers used in
> Lucene 7.x and ignores already-patched classes.
>  
> Link to the corresponding Github PR:
> [https://github.com/apache/lucene-solr/pull/449]
>  Edited: updated the corpus link, PR link and description
>  






[jira] [Updated] (SOLR-7864) timeAllowed causing ClassCastException

2018-10-22 Thread Isabelle Giguere (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-7864?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Isabelle Giguere updated SOLR-7864:
---
Attachment: SOLR-7864_tag_7.5.0.patch

> timeAllowed causing ClassCastException
> --
>
> Key: SOLR-7864
> URL: https://issues.apache.org/jira/browse/SOLR-7864
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 5.2
>Reporter: Markus Jelsma
>Priority: Major
> Attachments: SOLR-7864.patch, SOLR-7864.patch, SOLR-7864_extra.patch, 
> SOLR-7864_tag_7.5.0.patch
>
>
> If timeAllowed kicks in, following exception is thrown and user gets HTTP 500.
> {code}
> 65219 [qtp2096057945-19] ERROR org.apache.solr.servlet.SolrDispatchFilter  [  
>  search] – null:java.lang.ClassCastException: 
> org.apache.solr.response.ResultContext cannot be cast to 
> org.apache.solr.common.SolrDocumentList
> at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:275)
> at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:143)
> at org.apache.solr.core.SolrCore.execute(SolrCore.java:2064)
> at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:654)
> at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:450)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:227)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:196)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
> at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:577)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:223)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1127)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1061)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
> at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:215)
> at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:110)
> at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
> at org.eclipse.jetty.server.Server.handle(Server.java:497)
> at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:310)
> at 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:257)
> at 
> org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:540)
> at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635)
> at 
> org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555)
> at java.lang.Thread.run(Thread.java:745)
> {code}






[jira] [Updated] (SOLR-7864) timeAllowed causing ClassCastException

2018-10-22 Thread Isabelle Giguere (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-7864?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Isabelle Giguere updated SOLR-7864:
---
Attachment: (was: SOLR-7864_tag_7.5.0.patch)

> timeAllowed causing ClassCastException
> --
>
> Key: SOLR-7864
> URL: https://issues.apache.org/jira/browse/SOLR-7864
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 5.2
>Reporter: Markus Jelsma
>Priority: Major
> Attachments: SOLR-7864.patch, SOLR-7864.patch, SOLR-7864_extra.patch, 
> SOLR-7864_tag_7.5.0.patch
>
>
> If timeAllowed kicks in, following exception is thrown and user gets HTTP 500.
> {code}
> 65219 [qtp2096057945-19] ERROR org.apache.solr.servlet.SolrDispatchFilter  [  
>  search] – null:java.lang.ClassCastException: 
> org.apache.solr.response.ResultContext cannot be cast to 
> org.apache.solr.common.SolrDocumentList
> at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:275)
> at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:143)
> at org.apache.solr.core.SolrCore.execute(SolrCore.java:2064)
> at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:654)
> at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:450)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:227)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:196)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
> at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:577)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:223)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1127)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1061)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
> at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:215)
> at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:110)
> at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
> at org.eclipse.jetty.server.Server.handle(Server.java:497)
> at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:310)
> at 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:257)
> at 
> org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:540)
> at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635)
> at 
> org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555)
> at java.lang.Thread.run(Thread.java:745)
> {code}






[JENKINS] Lucene-Solr-repro - Build # 1753 - Unstable

2018-10-22 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-repro/1753/

[...truncated 28 lines...]
[repro] Jenkins log URL: 
https://builds.apache.org/job/Lucene-Solr-NightlyTests-master/1675/consoleText

[repro] Revision: 5de63322098e21438e734dc918040dc8d78122ac

[repro] Ant options: -Dtests.multiplier=2 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/test-data/enwiki.random.lines.txt
[repro] Repro line:  ant test  -Dtestcase=LIROnShardRestartTest 
-Dtests.method=testAllReplicasInLIR -Dtests.seed=A704C8B907BBB14C 
-Dtests.multiplier=2 -Dtests.nightly=true -Dtests.slow=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/test-data/enwiki.random.lines.txt
 -Dtests.locale=ar-IQ -Dtests.timezone=America/Rankin_Inlet 
-Dtests.asserts=true -Dtests.file.encoding=US-ASCII

[repro] Repro line:  ant test  -Dtestcase=LIROnShardRestartTest 
-Dtests.method=testSeveralReplicasInLIR -Dtests.seed=A704C8B907BBB14C 
-Dtests.multiplier=2 -Dtests.nightly=true -Dtests.slow=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/test-data/enwiki.random.lines.txt
 -Dtests.locale=ar-IQ -Dtests.timezone=America/Rankin_Inlet 
-Dtests.asserts=true -Dtests.file.encoding=US-ASCII

[repro] Repro line:  ant test  -Dtestcase=CloudSolrClientTest 
-Dtests.method=testParallelUpdateQTime -Dtests.seed=CDE850A40F69C4BC 
-Dtests.multiplier=2 -Dtests.nightly=true -Dtests.slow=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/test-data/enwiki.random.lines.txt
 -Dtests.locale=sr-BA -Dtests.timezone=Africa/Khartoum -Dtests.asserts=true 
-Dtests.file.encoding=US-ASCII

[repro] git rev-parse --abbrev-ref HEAD
[repro] git rev-parse HEAD
[repro] Initial local git branch/revision: 
c31a95d26170c7ffbd7e3177288891d6a14f4ab1
[repro] git fetch
[repro] git checkout 5de63322098e21438e734dc918040dc8d78122ac

[...truncated 2 lines...]
[repro] git merge --ff-only

[...truncated 1 lines...]
[repro] ant clean

[...truncated 6 lines...]
[repro] Test suites by module:
[repro]solr/solrj
[repro]   CloudSolrClientTest
[repro]solr/core
[repro]   LIROnShardRestartTest
[repro] ant compile-test

[...truncated 2559 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=5 
-Dtests.class="*.CloudSolrClientTest" -Dtests.showOutput=onerror 
-Dtests.multiplier=2 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/test-data/enwiki.random.lines.txt
 -Dtests.seed=CDE850A40F69C4BC -Dtests.multiplier=2 -Dtests.nightly=true 
-Dtests.slow=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/test-data/enwiki.random.lines.txt
 -Dtests.locale=sr-BA -Dtests.timezone=Africa/Khartoum -Dtests.asserts=true 
-Dtests.file.encoding=US-ASCII

[...truncated 143 lines...]
[repro] ant compile-test

[...truncated 1352 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=5 
-Dtests.class="*.LIROnShardRestartTest" -Dtests.showOutput=onerror 
-Dtests.multiplier=2 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/test-data/enwiki.random.lines.txt
 -Dtests.seed=A704C8B907BBB14C -Dtests.multiplier=2 -Dtests.nightly=true 
-Dtests.slow=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/test-data/enwiki.random.lines.txt
 -Dtests.locale=ar-IQ -Dtests.timezone=America/Rankin_Inlet 
-Dtests.asserts=true -Dtests.file.encoding=US-ASCII

[...truncated 10395 lines...]
[repro] Setting last failure code to 256

[repro] Failures:
[repro]   0/5 failed: org.apache.solr.client.solrj.impl.CloudSolrClientTest
[repro]   5/5 failed: org.apache.solr.cloud.LIROnShardRestartTest

[repro] Re-testing 100% failures at the tip of master
[repro] git fetch
[repro] git checkout master

[...truncated 4 lines...]
[repro] git merge --ff-only

[...truncated 20 lines...]
[repro] ant clean

[...truncated 8 lines...]
[repro] Test suites by module:
[repro]solr/core
[repro]   LIROnShardRestartTest
[repro] ant compile-test

[...truncated 3423 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=5 
-Dtests.class="*.LIROnShardRestartTest" -Dtests.showOutput=onerror 
-Dtests.multiplier=2 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/test-data/enwiki.random.lines.txt
 -Dtests.seed=A704C8B907BBB14C -Dtests.multiplier=2 -Dtests.nightly=true 
-Dtests.slow=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/test-data/enwiki.random.lines.txt
 -Dtests.locale=ar-IQ -Dtests.timezone=America/Rankin_Inlet 
-Dtests.asserts=true -Dtests.file.encoding=US-ASCII

[...truncated 10182 lines...]
[repro] Setting last failure code to 256

[repro] Failures at the tip of master:
[repro]   5/5 failed: org.apache.solr.cloud.LIROnShardRestartTest

[repro] Re-testing 100% 

[jira] [Comment Edited] (SOLR-12884) Admin UI, admin/luke and *Point fields

2018-10-22 Thread Christopher Ball (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12884?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16657963#comment-16657963
 ] 

Christopher Ball edited comment on SOLR-12884 at 10/22/18 3:22 PM:
---

Could this be an opportunity . . . for Solr to eat its own dog food?

How about using Streaming Expressions - for example the following expression 
provides a frequency table for a numeric field:

let (a=search(MyCollection,
 q="*:*",
 fl="myWordCount_l",
 fq="myWordCount_l:[0 TO *]",
 rows=1000,
 sort="myWordCount_l asc"),
 b=col(a, myWordCount_l),
 c=freqTable(b))

With the addition of a filter function (either an exponential function or just 
a list of step points), it would be on par with the data being provided from 
Luke. 

@[~joel.bernstein] - thoughts?
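
As a side note for readers unfamiliar with {{freqTable}}: over the collected
column it performs an ordinary value-frequency count. A plain-Python
equivalent of the {{col}}/{{freqTable}} steps (illustrative only; the sample
values below are made up, and {{myWordCount_l}} is the field from the
expression above):

```python
from collections import Counter

# Stand-in for the values col() would extract from the search results
# for the myWordCount_l field.
word_counts = [3, 7, 3, 12, 7, 3, 0]

# freqTable(b) corresponds to counting how often each distinct value occurs.
freq_table = sorted(Counter(word_counts).items())
# -> [(0, 1), (3, 3), (7, 2), (12, 1)]
```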



> Admin UI, admin/luke and *Point fields
> --
>
> Key: SOLR-12884
> URL: https://issues.apache.org/jira/browse/SOLR-12884
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: master (8.0)
>Reporter: Erick Erickson
>Priority: Major
>
> One of the conference attendees noted that if you go to the schema browser 
> and click on, say, a pint field, then click "load term info", nothing is shown.
> admin/luke similarly doesn't show much interesting, here's the response for a 
> pint .vs. a tint field:
> "popularity":\{ "type":"pint", "schema":"I-SD-OF--"},
> "popularityt":{ "type":"tint", "schema":"I-S--OF--",
>                        "index":"-TS--", "docs":15},
>  
> What, if anything, should we do in these two cases? Since the points-based 
> numerics don't have terms like Trie* fields, I don't think we _can_ show much 
> more, so the above makes sense; it's just jarring to end users and looks like 
> a bug.
> WDYT about putting in some useful information, though? Say, for the Admin UI 
> for points-based fields, "terms cannot be shown for points-based fields" or some such?






Re: precommit, java8 and Solr 5.x

2018-10-22 Thread Alexandre Rafalovitch
Have you tried -Dant.build.javac.source=1.8? Based on
http://ant.apache.org/manual/running.html#sysprops (though maybe those
are environmental properties).

Regards,
   Alex.
On Mon, 22 Oct 2018 at 11:03, Erick Erickson  wrote:
>
> Is there a magic flag to convince the precommit step to allow Java 8
> constructs (lambdas in this case) when compiling a 5x version? I need
> to backport some code.
>
> I tried this:
> ant -Djavac.source=1.8 -Djavac.target=1.8 -Dsource=1.8 precommit
> and
> ant -Djavac.source=1.8 -Djavac.target=1.8 -Dsource=8 precommit
>
> to no avail. It all compiles fine, but generates this error:
>
> -ecj-javadoc-lint-src:
> .
> .
> .
> [ecj-lint] Lambda expressions are allowed only at source level 1.8 or above
>
> Meanwhile I'll dig of course.
>
> Thanks,
> Erick
>
>




[JENKINS] Lucene-Solr-BadApples-Tests-7.x - Build # 194 - Still Unstable

2018-10-22 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-BadApples-Tests-7.x/194/

4 tests failed.
FAILED:  
org.apache.solr.cloud.autoscaling.AutoAddReplicasIntegrationTest.testSimple

Error Message:
Error from server at https://127.0.0.1:41849/solr: create the collection time 
out:180s

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at https://127.0.0.1:41849/solr: create the collection time out:180s
at 
__randomizedtesting.SeedInfo.seed([B3143940EEE10292:8BA71DBEC912D643]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:643)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:483)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:413)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1107)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:884)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:817)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:211)
at 
org.apache.solr.cloud.autoscaling.AutoAddReplicasIntegrationTest.testSimple(AutoAddReplicasIntegrationTest.java:80)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)

[jira] [Commented] (SOLR-7642) Should launching Solr in cloud mode using a ZooKeeper chroot create the chroot znode if it doesn't exist?

2018-10-22 Thread Isabelle Giguere (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-7642?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16659065#comment-16659065
 ] 

Isabelle Giguere commented on SOLR-7642:


Attached a proposition (not tested ;) ), blending both [~smolloy]'s and 
[~janhoy]'s ideas.

It might be overkill ... what does everyone think?
First, creating the chroot must be allowed (if createZkRoot=true); then the ZK 
root must match the authorized root (set by solr.zkChroot).
The name 'createZkRoot' could be changed to solr.createZkRoot, for uniformity.

> Should launching Solr in cloud mode using a ZooKeeper chroot create the 
> chroot znode if it doesn't exist?
> -
>
> Key: SOLR-7642
> URL: https://issues.apache.org/jira/browse/SOLR-7642
> Project: Solr
>  Issue Type: Improvement
>Reporter: Timothy Potter
>Priority: Minor
> Attachments: SOLR-7642.patch, SOLR-7642.patch, 
> SOLR-7642_tag_7.5.0.patch, SOLR-7642_tag_7.5.0_proposition.patch
>
>
> Launching Solr for the first time in cloud mode using a ZooKeeper 
> connection string that includes a chroot leads to the following 
> initialization error:
> {code}
> ERROR - 2015-06-05 17:15:50.410; [   ] org.apache.solr.common.SolrException; 
> null:org.apache.solr.common.cloud.ZooKeeperException: A chroot was specified 
> in ZkHost but the znode doesn't exist. localhost:2181/lan
> at 
> org.apache.solr.core.ZkContainer.initZooKeeper(ZkContainer.java:113)
> at org.apache.solr.core.CoreContainer.load(CoreContainer.java:339)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.createCoreContainer(SolrDispatchFilter.java:140)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.init(SolrDispatchFilter.java:110)
> at 
> org.eclipse.jetty.servlet.FilterHolder.initialize(FilterHolder.java:138)
> at 
> org.eclipse.jetty.servlet.ServletHandler.initialize(ServletHandler.java:852)
> at 
> org.eclipse.jetty.servlet.ServletContextHandler.startContext(ServletContextHandler.java:298)
> at 
> org.eclipse.jetty.webapp.WebAppContext.startWebapp(WebAppContext.java:1349)
> at 
> org.eclipse.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1342)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doStart(ContextHandler.java:741)
> at 
> org.eclipse.jetty.webapp.WebAppContext.doStart(WebAppContext.java:505)
> {code}
> The work-around for this is to use the scripts/cloud-scripts/zkcli.sh script 
> to create the chroot znode (bootstrap action does this).
> I'm wondering if we shouldn't just create the znode if it doesn't exist? Or 
> is that some violation of using a chroot?
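The chroot being discussed is the path suffix of the ZkHost connection string. As a minimal sketch (a hypothetical helper, not the actual Solr code), splitting a ZkHost string into its server list and chroot, which any create-the-chroot logic would need to do first, might look like:

```java
// Hypothetical sketch: split a ZkHost connection string into the server
// list and the chroot suffix, as chroot-handling code must do before it
// can decide whether the chroot znode needs to be created.
// Not the actual Solr implementation.
public class ZkHostParser {
    // Returns {serverList, chroot}; chroot is "" when none is present.
    static String[] splitChroot(String zkHost) {
        int idx = zkHost.indexOf('/');
        if (idx < 0) {
            return new String[] { zkHost, "" };
        }
        return new String[] { zkHost.substring(0, idx), zkHost.substring(idx) };
    }

    public static void main(String[] args) {
        String[] parts = splitChroot("zk1:2181,zk2:2181/lan");
        System.out.println(parts[0]); // zk1:2181,zk2:2181
        System.out.println(parts[1]); // /lan
    }
}
```

With such a split in hand, the proposal amounts to: if the chroot part is non-empty and the znode is absent, create it (guarded by a flag such as the createZkRoot option discussed above).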



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-2562) Make Luke a Lucene/Solr Module

2018-10-22 Thread Tomoko Uchida (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-2562?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16659061#comment-16659061
 ] 

Tomoko Uchida commented on LUCENE-2562:
---

bq. Please don't wait for a release; Lucene/Solr development happens 
independently from release cycles.

I understand, thanks!

> Make Luke a Lucene/Solr Module
> --
>
> Key: LUCENE-2562
> URL: https://issues.apache.org/jira/browse/LUCENE-2562
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Mark Miller
>Priority: Major
>  Labels: gsoc2014
> Attachments: LUCENE-2562-Ivy.patch, LUCENE-2562-Ivy.patch, 
> LUCENE-2562-Ivy.patch, LUCENE-2562-ivy.patch, LUCENE-2562.patch, 
> LUCENE-2562.patch, Luke-ALE-1.png, Luke-ALE-2.png, Luke-ALE-3.png, 
> Luke-ALE-4.png, Luke-ALE-5.png, luke-javafx1.png, luke-javafx2.png, 
> luke-javafx3.png, luke1.jpg, luke2.jpg, luke3.jpg, lukeALE-documents.png
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> see
> "RE: Luke - in need of maintainer": 
> http://markmail.org/message/m4gsto7giltvrpuf
> "Web-based Luke": http://markmail.org/message/4xwps7p7ifltme5q
> I think it would be great if there was a version of Luke that always worked 
> with trunk - and it would also be great if it was easier to match Luke jars 
> with Lucene versions.
> While I'd like to get GWT Luke into the mix as well, I think the easiest 
> starting point is to straight port Luke to another UI toolkit before 
> abstracting out DTO objects that both GWT Luke and Pivot Luke could share.
> I've started slowly converting Luke's use of thinlet to Apache Pivot. I 
> haven't/don't have a lot of time for this at the moment, but I've plugged 
> away here and there over the past week or two. There is still a *lot* to do.






precommit, java8 and Solr 5.x

2018-10-22 Thread Erick Erickson
Is there a magic flag to convince the precommit step to allow Java 8
constructs (lambdas in this case) when compiling a 5x version? I need
to backport some code.

I tried this:
ant -Djavac.source=1.8 -Djavac.target=1.8 -Dsource=1.8 precommit
and
ant -Djavac.source=1.8 -Djavac.target=1.8 -Dsource=8 precommit

to no avail. It all compiles fine, but generates this error:

-ecj-javadoc-lint-src:
.
.
.
[ecj-lint] Lambda expressions are allowed only at source level 1.8 or above

Meanwhile I'll dig of course.

Thanks,
Erick




[jira] [Updated] (SOLR-7642) Should launching Solr in cloud mode using a ZooKeeper chroot create the chroot znode if it doesn't exist?

2018-10-22 Thread Isabelle Giguere (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-7642?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Isabelle Giguere updated SOLR-7642:
---
Attachment: SOLR-7642_tag_7.5.0_proposition.patch

> Should launching Solr in cloud mode using a ZooKeeper chroot create the 
> chroot znode if it doesn't exist?
> -
>
> Key: SOLR-7642
> URL: https://issues.apache.org/jira/browse/SOLR-7642
> Project: Solr
>  Issue Type: Improvement
>Reporter: Timothy Potter
>Priority: Minor
> Attachments: SOLR-7642.patch, SOLR-7642.patch, 
> SOLR-7642_tag_7.5.0.patch, SOLR-7642_tag_7.5.0_proposition.patch
>
>
> Launching Solr for the first time in cloud mode using a ZooKeeper 
> connection string that includes a chroot leads to the following 
> initialization error:
> {code}
> ERROR - 2015-06-05 17:15:50.410; [   ] org.apache.solr.common.SolrException; 
> null:org.apache.solr.common.cloud.ZooKeeperException: A chroot was specified 
> in ZkHost but the znode doesn't exist. localhost:2181/lan
> at 
> org.apache.solr.core.ZkContainer.initZooKeeper(ZkContainer.java:113)
> at org.apache.solr.core.CoreContainer.load(CoreContainer.java:339)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.createCoreContainer(SolrDispatchFilter.java:140)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.init(SolrDispatchFilter.java:110)
> at 
> org.eclipse.jetty.servlet.FilterHolder.initialize(FilterHolder.java:138)
> at 
> org.eclipse.jetty.servlet.ServletHandler.initialize(ServletHandler.java:852)
> at 
> org.eclipse.jetty.servlet.ServletContextHandler.startContext(ServletContextHandler.java:298)
> at 
> org.eclipse.jetty.webapp.WebAppContext.startWebapp(WebAppContext.java:1349)
> at 
> org.eclipse.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1342)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doStart(ContextHandler.java:741)
> at 
> org.eclipse.jetty.webapp.WebAppContext.doStart(WebAppContext.java:505)
> {code}
> The work-around for this is to use the scripts/cloud-scripts/zkcli.sh script 
> to create the chroot znode (bootstrap action does this).
> I'm wondering if we shouldn't just create the znode if it doesn't exist? Or 
> is that some violation of using a chroot?






[jira] [Commented] (LUCENE-2562) Make Luke a Lucene/Solr Module

2018-10-22 Thread Steve Rowe (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-2562?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16659057#comment-16659057
 ] 

Steve Rowe commented on LUCENE-2562:


bq. I will create a pull request for review after the next release, Luke 7.6.0. 
(or 8.0.0?) 

Please don't wait for a release; Lucene/Solr development happens independently 
from release cycles.

> Make Luke a Lucene/Solr Module
> --
>
> Key: LUCENE-2562
> URL: https://issues.apache.org/jira/browse/LUCENE-2562
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Mark Miller
>Priority: Major
>  Labels: gsoc2014
> Attachments: LUCENE-2562-Ivy.patch, LUCENE-2562-Ivy.patch, 
> LUCENE-2562-Ivy.patch, LUCENE-2562-ivy.patch, LUCENE-2562.patch, 
> LUCENE-2562.patch, Luke-ALE-1.png, Luke-ALE-2.png, Luke-ALE-3.png, 
> Luke-ALE-4.png, Luke-ALE-5.png, luke-javafx1.png, luke-javafx2.png, 
> luke-javafx3.png, luke1.jpg, luke2.jpg, luke3.jpg, lukeALE-documents.png
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> see
> "RE: Luke - in need of maintainer": 
> http://markmail.org/message/m4gsto7giltvrpuf
> "Web-based Luke": http://markmail.org/message/4xwps7p7ifltme5q
> I think it would be great if there was a version of Luke that always worked 
> with trunk - and it would also be great if it was easier to match Luke jars 
> with Lucene versions.
> While I'd like to get GWT Luke into the mix as well, I think the easiest 
> starting point is to straight port Luke to another UI toolkit before 
> abstracting out DTO objects that both GWT Luke and Pivot Luke could share.
> I've started slowly converting Luke's use of thinlet to Apache Pivot. I 
> haven't/don't have a lot of time for this at the moment, but I've plugged 
> away here and there over the past week or two. There is still a *lot* to do.






[JENKINS] Lucene-Solr-7.x-Linux (64bit/jdk-10.0.1) - Build # 2958 - Still Unstable!

2018-10-22 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/2958/
Java: 64bit/jdk-10.0.1 -XX:-UseCompressedOops -XX:+UseG1GC

1 tests failed.
FAILED:  org.apache.solr.client.solrj.impl.CloudSolrClientTest.testRouting

Error Message:
Error from server at http://127.0.0.1:35823/solr/collection1_shard2_replica_n2: 
Expected mime type application/octet-stream but got text/html.
[Jetty HTTP ERROR 404 page: "Problem accessing 
/solr/collection1_shard2_replica_n2/update. Reason: Can not find: 
/solr/collection1_shard2_replica_n2/update" - Powered by Jetty 9.4.11.v20180605]

Stack Trace:
org.apache.solr.client.solrj.impl.CloudSolrClient$RouteException: Error from 
server at http://127.0.0.1:35823/solr/collection1_shard2_replica_n2: Expected 
mime type application/octet-stream but got text/html.
[Jetty HTTP ERROR 404 page: "Problem accessing 
/solr/collection1_shard2_replica_n2/update. Reason: Can not find: 
/solr/collection1_shard2_replica_n2/update" - Powered by Jetty 9.4.11.v20180605]




at 
__randomizedtesting.SeedInfo.seed([1CE4563C1A91FE4D:DE536A5419D10E35]:0)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.directUpdate(CloudSolrClient.java:551)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1016)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:884)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:949)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:949)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:949)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:949)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:949)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:817)
at 
org.apache.solr.client.solrj.impl.CloudSolrClientTest.testRouting(CloudSolrClientTest.java:238)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985)
at 
org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:110)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)

[jira] [Commented] (LUCENE-2562) Make Luke a Lucene/Solr Module

2018-10-22 Thread Tomoko Uchida (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-2562?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16659056#comment-16659056
 ] 

Tomoko Uchida commented on LUCENE-2562:
---

Hi Steve,
 thank you for the quick response.
{quote}Is there something specific you're concerned about?
{quote}
No, I was just wondering whether there is something I have to consider.
 I will create a pull request for review after the next release, Luke 7.6.0. 
(or 8.0.0?) :)

> Make Luke a Lucene/Solr Module
> --
>
> Key: LUCENE-2562
> URL: https://issues.apache.org/jira/browse/LUCENE-2562
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Mark Miller
>Priority: Major
>  Labels: gsoc2014
> Attachments: LUCENE-2562-Ivy.patch, LUCENE-2562-Ivy.patch, 
> LUCENE-2562-Ivy.patch, LUCENE-2562-ivy.patch, LUCENE-2562.patch, 
> LUCENE-2562.patch, Luke-ALE-1.png, Luke-ALE-2.png, Luke-ALE-3.png, 
> Luke-ALE-4.png, Luke-ALE-5.png, luke-javafx1.png, luke-javafx2.png, 
> luke-javafx3.png, luke1.jpg, luke2.jpg, luke3.jpg, lukeALE-documents.png
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> see
> "RE: Luke - in need of maintainer": 
> http://markmail.org/message/m4gsto7giltvrpuf
> "Web-based Luke": http://markmail.org/message/4xwps7p7ifltme5q
> I think it would be great if there was a version of Luke that always worked 
> with trunk - and it would also be great if it was easier to match Luke jars 
> with Lucene versions.
> While I'd like to get GWT Luke into the mix as well, I think the easiest 
> starting point is to straight port Luke to another UI toolkit before 
> abstracting out DTO objects that both GWT Luke and Pivot Luke could share.
> I've started slowly converting Luke's use of thinlet to Apache Pivot. I 
> haven't/don't have a lot of time for this at the moment, but I've plugged 
> away here and there over the past week or two. There is still a *lot* to do.






[jira] [Commented] (LUCENE-2562) Make Luke a Lucene/Solr Module

2018-10-22 Thread Steve Rowe (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-2562?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16659047#comment-16659047
 ] 

Steve Rowe commented on LUCENE-2562:


bq. as we announced in the Lucene/Solr mailing lists, Luke was re-implemented 
on top of Swing.

Wow, that was fast, congratulations!

bq. The code is licensed under ALv2 and Swing is the part of JDK, so I think 
there is no obstacle to make it a Lucene submodule.

+1

bq. The draft patch will be ready for review in the next few weeks or so but I 
am not sure about when I should cut the feature branch for it. (Seems like 
Lucene 8.0 release workflow will be kicked off soon.) Don't I have to care 
about the major version release procedure?

Once you put up a patch here I can commit it to the branch I already cut for 
this issue: {{jira/lucene-2562}} (mentioned in a previous comment of mine on 
this issue).  Depending on when everything's wrapped up, it may or may not make 
the 8.0 release; if not, then it will be part of whichever release is upcoming. 
 So no, I don't think you need to worry about release procedures for this 
issue.  Is there something specific you're concerned about?

> Make Luke a Lucene/Solr Module
> --
>
> Key: LUCENE-2562
> URL: https://issues.apache.org/jira/browse/LUCENE-2562
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Mark Miller
>Priority: Major
>  Labels: gsoc2014
> Attachments: LUCENE-2562-Ivy.patch, LUCENE-2562-Ivy.patch, 
> LUCENE-2562-Ivy.patch, LUCENE-2562-ivy.patch, LUCENE-2562.patch, 
> LUCENE-2562.patch, Luke-ALE-1.png, Luke-ALE-2.png, Luke-ALE-3.png, 
> Luke-ALE-4.png, Luke-ALE-5.png, luke-javafx1.png, luke-javafx2.png, 
> luke-javafx3.png, luke1.jpg, luke2.jpg, luke3.jpg, lukeALE-documents.png
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> see
> "RE: Luke - in need of maintainer": 
> http://markmail.org/message/m4gsto7giltvrpuf
> "Web-based Luke": http://markmail.org/message/4xwps7p7ifltme5q
> I think it would be great if there was a version of Luke that always worked 
> with trunk - and it would also be great if it was easier to match Luke jars 
> with Lucene versions.
> While I'd like to get GWT Luke into the mix as well, I think the easiest 
> starting point is to straight port Luke to another UI toolkit before 
> abstracting out DTO objects that both GWT Luke and Pivot Luke could share.
> I've started slowly converting Luke's use of thinlet to Apache Pivot. I 
> haven't/don't have a lot of time for this at the moment, but I've plugged 
> away here and there over the past week or two. There is still a *lot* to do.






[jira] [Commented] (LUCENE-8374) Reduce reads for sparse DocValues

2018-10-22 Thread Toke Eskildsen (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16659044#comment-16659044
 ] 

Toke Eskildsen commented on LUCENE-8374:


[~dsmiley] the idea for making patches for older versions was to make it easier 
to measure their effect on existing setups: testing an existing Solr 7.3 vs. a 
patched Solr 7.3 is much cleaner than testing Solr 7.3 vs. a patched master.

If that collides with the established workflow here, can you suggest how I can 
support easy testing?

> Reduce reads for sparse DocValues
> -
>
> Key: LUCENE-8374
> URL: https://issues.apache.org/jira/browse/LUCENE-8374
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/codecs
>Affects Versions: 7.5, master (8.0)
>Reporter: Toke Eskildsen
>Priority: Major
>  Labels: performance
> Attachments: LUCENE-8374.patch, LUCENE-8374.patch, LUCENE-8374.patch, 
> LUCENE-8374.patch, LUCENE-8374.patch, LUCENE-8374_branch_7_3.patch, 
> LUCENE-8374_branch_7_3.patch.20181005, LUCENE-8374_branch_7_4.patch, 
> LUCENE-8374_branch_7_5.patch
>
>
> The {{Lucene70DocValuesProducer}} has the internal classes 
> {{SparseNumericDocValues}} and {{BaseSortedSetDocValues}} (sparse code path), 
> which again uses {{IndexedDISI}} to handle the docID -> value-ordinal lookup. 
> The value-ordinal is the index of the docID assuming an abstract tightly 
> packed monotonically increasing list of docIDs: If the docIDs with 
> corresponding values are {{[0, 4, 1432]}}, their value-ordinals will be {{[0, 
> 1, 2]}}.
> h2. Outer blocks
> The lookup structure of {{IndexedDISI}} consists of blocks of 2^16 values 
> (65536), where each block can be either {{ALL}}, {{DENSE}} (2^12 to 2^16 
> values) or {{SPARSE}} (< 2^12 values ~= 6%). Consequently blocks vary quite a 
> lot in size and ordinal resolving strategy.
> When a sparse Numeric DocValue is needed, the code first locates the block 
> containing the wanted docID flag. It does so by iterating blocks one-by-one 
> until it reaches the needed one, where each iteration requires a lookup in 
> the underlying {{IndexSlice}}. For a common memory mapped index, this 
> translates to either a cached request or a read operation. If a segment has 
> 6M documents, worst-case is 91 lookups. In our web archive, our segments have 
> ~300M values: A worst-case of 4577 lookups!
> One obvious solution is to use a lookup-table for blocks: A long[]-array with 
> an entry for each block. For 6M documents, that is < 1KB and would allow for 
> direct jumping (a single lookup) in all instances. Unfortunately this 
> lookup-table cannot be generated upfront when the writing of values is purely 
> streaming. It can be appended to the end of the stream before it is closed, 
> but without knowing the position of the lookup-table the reader cannot seek 
> to it.
> One strategy for creating such a lookup-table would be to generate it during 
> reads and cache it for next lookup. This does not fit directly into how 
> {{IndexedDISI}} currently works (it is created anew for each invocation), but 
> could probably be added with a little work. An advantage to this is that this 
> does not change the underlying format and thus could be used with existing 
> indexes.
> h2. The lookup structure inside each block
> If {{ALL}} of the 2^16 values are defined, the structure is empty and the 
> ordinal is simply the requested docID with some modulo and multiply math. 
> Nothing to improve there.
> If the block is {{DENSE}} (2^12 to 2^16 values are defined), a bitmap is used 
> and the number of set bits up to the wanted index (the docID modulo the block 
> origo) are counted. That bitmap is a long[1024], meaning that worst case is 
> to lookup and count all set bits for 1024 longs!
> One known solution to this is to use a [rank 
> structure|https://en.wikipedia.org/wiki/Succinct_data_structure]. I 
> [implemented 
> it|https://github.com/tokee/lucene-solr/blob/solr5894/solr/core/src/java/org/apache/solr/search/sparse/count/plane/RankCache.java]
>  for a related project; with that, the rank-overhead for a {{DENSE}} 
> block would be long[32] and would ensure a maximum of 9 lookups. It is not 
> trivial to build the rank-structure and caching it (assuming all blocks are 
> dense) for 6M documents would require 22 KB (3.17% overhead). It would be far 
> better to generate the rank-structure at index time and store it immediately 
> before the bitset (this is possible with streaming as each block is fully 
> resolved before flushing), but of course that would require a change to the 
> codec.
> If {{SPARSE}} (< 2^12 values ~= 6%) are defined, the docIDs are simply in the 
> form of a list. As a comment in the code suggests, a binary search through 
> these would be faster, although that would mean 

[jira] [Comment Edited] (SOLR-10894) Streaming expressions handling of escaped special characters bug

2018-10-22 Thread Gus Heck (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-10894?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16659039#comment-16659039
 ] 

Gus Heck edited comment on SOLR-10894 at 10/22/18 2:27 PM:
---

My testing seems to indicate that this is now working at the expression parser 
level, though the annotate suggests the relevant code came in back in 2015, so 
maybe this crops up as a URL-encoding issue when we distribute the query (note 
the '+')? Also note that the supplied slash-escaped expression doesn't work in 
Java, since \+ isn't a legal escape sequence. Attaching a patch verifying this 
works at the expression level in Java (minus the illegal escape sequence).


was (Author: gus_heck):
My testing seems to indicate that this is now working at the expression parser 
level, though the annotate seems to say the relevant code came in in 2015, so 
maybe this crops up as a url encoding issue when we distribute the query (note 
the '+' )? Attaching patch verifying this works at the expression level in java 
(minus the illegal escape sequence) Also note that the supplied slash escaped 
expression doesn't work in java since \+ isn't a legal escape sequence

> Streaming expressions handling of escaped special characters bug
> 
>
> Key: SOLR-10894
> URL: https://issues.apache.org/jira/browse/SOLR-10894
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: streaming expressions
>Reporter: Houston Putman
>Priority: Major
> Attachments: SOLR-10894.patch
>
>
> Streaming expressions expect all special characters in named parameter values 
> to be singly escaped. Since queries can contain strings surrounded by double 
> quotes, double-escaping is necessary.
> Given the following query: 
> {{summary:"\"This is a summary\"\+"}}
> A streaming expression would require surrounding the query with double 
> quotes, therefore every special character in the query should be escaped: 
> {{select(collection,q="\"\\\"This is a summary\\\"\\\+\"",)}}
> Streaming expressions should unescape the strings contained within double 
> quotes, however currently they are only unescaping {{\" -> "}}. Therefore it 
> is impossible to query for text fields containing double quotes. Also other 
> special characters are not unescaped; this inconsistency causes confusion.
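The double-escaping the description refers to can be sketched as follows: to embed a query inside a streaming expression's double-quoted parameter, every backslash and double quote already in the query must be escaped once more. This is a hypothetical helper for illustration, not part of the Solr streaming API.

```java
// Hypothetical sketch of double-escaping a query string so it can sit
// inside the q="..." parameter of a streaming expression: each backslash
// and double quote gains one extra level of escaping.
public class StreamEscape {
    static String escapeForExpression(String query) {
        StringBuilder sb = new StringBuilder();
        for (char c : query.toCharArray()) {
            if (c == '\\' || c == '"') {
                sb.append('\\'); // add one escaping level
            }
            sb.append(c);
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        // Raw query (actual characters): summary:"\"a summary\""
        String query = "summary:\"\\\"a summary\\\"\"";
        String escaped = escapeForExpression(query);
        System.out.println("select(collection,q=\"" + escaped + "\")");
    }
}
```

The bug described above is the inverse direction: the parser only undoes this escaping for {{\" -> "}}, leaving other doubly-escaped characters in place.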






[jira] [Commented] (SOLR-10894) Streaming expressions handling of escaped special characters bug

2018-10-22 Thread Gus Heck (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-10894?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16659039#comment-16659039
 ] 

Gus Heck commented on SOLR-10894:
-

My testing seems to indicate that this is now working at the expression parser 
level, though the annotate seems to say the relevant code came in in 2015, so 
maybe this crops up as a url encoding issue when we distribute the query (note 
the '+' )? Attaching patch verifying this works at the expression level in java 
(minus the illegal escape sequence) Also note that the supplied slash escaped 
expression doesn't work in java since \+ isn't a legal escape sequence

> Streaming expressions handling of escaped special characters bug
> 
>
> Key: SOLR-10894
> URL: https://issues.apache.org/jira/browse/SOLR-10894
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: streaming expressions
>Reporter: Houston Putman
>Priority: Major
> Attachments: SOLR-10894.patch
>
>
> Streaming expressions expect all special characters in named parameter values 
> to be singly escaped. Since queries can contain strings surrounded by double 
> quotes, double-escaping is necessary.
> Given the following query: 
> {{summary:"\"This is a summary\"\+"}}
> A streaming expression would require surrounding the query with double 
> quotes, therefore every special character in the query should be escaped: 
> {{select(collection,q="\"\\\"This is a summary\\\"\\\+\"",)}}
> Streaming expressions should unescape the strings contained within double 
> quotes, however currently they are only unescaping {{\" -> "}}. Therefore it 
> is impossible to query for text fields containing double quotes. Also other 
> special characters are not unescaped; this inconsistency causes confusion.






[jira] [Updated] (SOLR-10894) Streaming expressions handling of escaped special characters bug

2018-10-22 Thread Gus Heck (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-10894?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gus Heck updated SOLR-10894:

Attachment: SOLR-10894.patch







[jira] [Comment Edited] (LUCENE-2562) Make Luke a Lucene/Solr Module

2018-10-22 Thread Tomoko Uchida (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-2562?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16659008#comment-16659008
 ] 

Tomoko Uchida edited comment on LUCENE-2562 at 10/22/18 2:13 PM:
-

Hi,

As we announced on the Lucene/Solr mailing lists, Luke was re-implemented on 
top of Swing.
[https://github.com/DmitryKey/luke]

The code is licensed under ALv2 and Swing is part of the JDK, so I think there 
is no obstacle to making it a Lucene submodule.

I would like to create another patch and restart this issue after just fixing 
styles and colors.
The draft patch should be ready for review in the next few weeks or so, but I 
am not sure when I should cut the feature branch for it. (It seems the Lucene 
8.0 release workflow will be kicked off soon.) Do I need to take the 
major-version release procedure into account?
Can you please give me some advice?

Thanks.


was (Author: tomoko uchida):
Hi,

as we announced in the Lucene/Solr mailing lists, Luke was re-implemented on 
top of Swing.
[https://github.com/DmitryKey/luke|https://github.com/DmitryKey/luke]

The code is licensed under ALv2 and Swing is the part of JDK, so I think there 
is no obstacle to make it a Lucene submodule.

I would like to create another patch and restart this issue, after just fixing 
styles and colors.
The draft patch will be ready for review in the next few weeks or so but I am 
not sure about when I should cut the feature branch for it. (Seems like Lucene 
8.0 release workflow will be kicked off soon.)
Can you please give me some advice?

Thanks.

> Make Luke a Lucene/Solr Module
> --
>
> Key: LUCENE-2562
> URL: https://issues.apache.org/jira/browse/LUCENE-2562
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Mark Miller
>Priority: Major
>  Labels: gsoc2014
> Attachments: LUCENE-2562-Ivy.patch, LUCENE-2562-Ivy.patch, 
> LUCENE-2562-Ivy.patch, LUCENE-2562-ivy.patch, LUCENE-2562.patch, 
> LUCENE-2562.patch, Luke-ALE-1.png, Luke-ALE-2.png, Luke-ALE-3.png, 
> Luke-ALE-4.png, Luke-ALE-5.png, luke-javafx1.png, luke-javafx2.png, 
> luke-javafx3.png, luke1.jpg, luke2.jpg, luke3.jpg, lukeALE-documents.png
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> see
> "RE: Luke - in need of maintainer": 
> http://markmail.org/message/m4gsto7giltvrpuf
> "Web-based Luke": http://markmail.org/message/4xwps7p7ifltme5q
> I think it would be great if there was a version of Luke that always worked 
> with trunk - and it would also be great if it was easier to match Luke jars 
> with Lucene versions.
> While I'd like to get GWT Luke into the mix as well, I think the easiest 
> starting point is to straight port Luke to another UI toolkit before 
> abstracting out DTO objects that both GWT Luke and Pivot Luke could share.
> I've started slowly converting Luke's use of thinlet to Apache Pivot. I 
> haven't/don't have a lot of time for this at the moment, but I've plugged 
> away here and there over the past week or two. There is still a *lot* to do.






[jira] [Commented] (SOLR-12732) TestLogWatcher failure on Jenkins

2018-10-22 Thread Erick Erickson (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12732?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16659013#comment-16659013
 ] 

Erick Erickson commented on SOLR-12732:
---

Rats, I still see one failure on Oct 8.

> TestLogWatcher failure on Jenkins
> -
>
> Key: SOLR-12732
> URL: https://issues.apache.org/jira/browse/SOLR-12732
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: logging
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Minor
> Attachments: SOLR-12732.patch
>
>
> I'm 99% certain this is a test artifact, and I think I see the problem. It'll 
> take me a lot of beasting to nail it down though.
> Working hypothesis: when we test whether the new searcher has no messages, we 
> can end up checking for no messages being logged against the watcher before 
> the new one _really_ gets active.






[jira] [Comment Edited] (LUCENE-2562) Make Luke a Lucene/Solr Module

2018-10-22 Thread Tomoko Uchida (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-2562?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16659008#comment-16659008
 ] 

Tomoko Uchida edited comment on LUCENE-2562 at 10/22/18 1:51 PM:
-

Hi,

as we announced in the Lucene/Solr mailing lists, Luke was re-implemented on 
top of Swing.
[https://github.com/DmitryKey/luke|https://github.com/DmitryKey/luke]

The code is licensed under ALv2 and Swing is part of the JDK, so I think there 
is no obstacle to making it a Lucene submodule.

I would like to create another patch and restart this issue, after just fixing 
styles and colors.
The draft patch will be ready for review in the next few weeks or so but I am 
not sure about when I should cut the feature branch for it. (Seems like Lucene 
8.0 release workflow will be kicked off soon.)
Can you please give me some advice?

Thanks.


was (Author: tomoko uchida):
Hi,

as we announced in the Lucene/Solr mailing lists, Luke was re-implemented on 
top of Swing.
[https://github.com/DmitryKey/luke]

The code is licensed under ALv2 and Swing is the part of JDK, so I think there 
is no obstacle to make it a Lucene submodule.

I would like to create another patch and restart this issue, after just fixing 
styles and colors.
The draft patch will be ready for review in the next few weeks or so but I am 
not sure about when I should cut the feature branch for it. (Seems like Lucene 
8.0 release workflow will be kicked off soon.)
Can you please give me some advice?

Thanks.







[jira] [Commented] (LUCENE-2562) Make Luke a Lucene/Solr Module

2018-10-22 Thread Tomoko Uchida (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-2562?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16659008#comment-16659008
 ] 

Tomoko Uchida commented on LUCENE-2562:
---

Hi,

as we announced in the Lucene/Solr mailing lists, Luke was re-implemented on 
top of Swing.
[https://github.com/DmitryKey/luke]

The code is licensed under ALv2 and Swing is part of the JDK, so I think there 
is no obstacle to making it a Lucene submodule.

I would like to create another patch and restart this issue, after just fixing 
styles and colors.
The draft patch will be ready for review in the next few weeks or so but I am 
not sure about when I should cut the feature branch for it. (Seems like Lucene 
8.0 release workflow will be kicked off soon.)
Can you please give me some advice?

Thanks.







[jira] [Commented] (SOLR-12638) Support atomic updates of nested/child documents for nested-enabled schema

2018-10-22 Thread mosh (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12638?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16658983#comment-16658983
 ] 

mosh commented on SOLR-12638:
-

{quote}Or, perhaps we insist the user send a _route_ parameter to /update which 
is otherwise only used in searches?{quote}
I like this option a lot better, since it makes the updated doc look cleaner.
Adding another field to the update command seems a little confusing IMO,
since that field is not used to update the document in any way.
I just pushed a commit implementing this.

> Support atomic updates of nested/child documents for nested-enabled schema
> --
>
> Key: SOLR-12638
> URL: https://issues.apache.org/jira/browse/SOLR-12638
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: mosh
>Priority: Major
> Attachments: SOLR-12638-delete-old-block-no-commit.patch, 
> SOLR-12638-nocommit.patch
>
>  Time Spent: 4h 50m
>  Remaining Estimate: 0h
>
> I have been toying with the thought of using this transformer in conjunction 
> with NestedUpdateProcessor and AtomicUpdate to allow SOLR to completely 
> re-index the entire nested structure. This is just a thought, I am still 
> thinking about implementation details. Hopefully I will be able to post a 
> more concrete proposal soon.






[JENKINS] Lucene-Solr-master-Linux (64bit/jdk-11) - Build # 23073 - Still Unstable!

2018-10-22 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/23073/
Java: 64bit/jdk-11 -XX:-UseCompressedOops -XX:+UseSerialGC

39 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.TestCloudRecovery

Error Message:
Could not find collection:collection1

Stack Trace:
java.lang.AssertionError: Could not find collection:collection1
at __randomizedtesting.SeedInfo.seed([25466C0BE7E4BFF2]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertNotNull(Assert.java:526)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:155)
at 
org.apache.solr.cloud.TestCloudRecovery.setupCluster(TestCloudRecovery.java:70)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:875)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:834)


FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.TestCloudRecovery

Error Message:
Could not find collection:collection1

Stack Trace:
java.lang.AssertionError: Could not find collection:collection1
at __randomizedtesting.SeedInfo.seed([25466C0BE7E4BFF2]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertNotNull(Assert.java:526)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:155)
at 
org.apache.solr.cloud.TestCloudRecovery.setupCluster(TestCloudRecovery.java:70)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:875)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 

[jira] [Commented] (SOLR-12846) Policy rules do not support host variable

2018-10-22 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12846?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16658967#comment-16658967
 ] 

ASF subversion and git services commented on SOLR-12846:


Commit 4332b0aa6e467c1ae18246fa25301ae9410b4d7f in lucene-solr's branch 
refs/heads/branch_7x from [~noble.paul]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=4332b0a ]

SOLR-12846: Added support for "host" variable in autoscaling policy rules


> Policy rules do not support host variable
> -
>
> Key: SOLR-12846
> URL: https://issues.apache.org/jira/browse/SOLR-12846
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling
>Affects Versions: 7.5
>Reporter: Shalin Shekhar Mangar
>Assignee: Noble Paul
>Priority: Major
> Fix For: 7.6, master (8.0)
>
>
> The policy documentation says that there is a host variable supported in 
> policy rules at 
> https://lucene.apache.org/solr/guide/7_5/solrcloud-autoscaling-policy-preferences.html#node-selection-attributes
> But there is no mention of it in the code. Perhaps it got lost during 
> refactorings and there were no tests for it? In any case, we should add it 
> back. It'd be great if we could support #EACH for host so that one can write a 
> rule to distribute replicas across hosts and not just nodes. This would be 
> very useful when one runs multiple Solr JVMs on the same physical node.






[jira] [Commented] (LUCENE-8374) Reduce reads for sparse DocValues

2018-10-22 Thread David Smiley (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16658966#comment-16658966
 ] 

David Smiley commented on LUCENE-8374:
--

Toke, the JIRA "fix version" should reflect the Git branches you commit to, or 
plan to commit to. 7.5 and 7.3 are not branches you are allowed to commit to, 
since this is not a bug fix. Once you do commit to branch_7x, you can put a fix 
version of "7.6" if the 7.6 release branch hasn't been created yet. If the 
release branch has been created, you can use 7.7, even if 7.7 is never 
ultimately released. We don't have a "trunk", but we do have a "master". BTW, I 
don't bother adding "master" to the fix version if I'm also going to commit to 
another branch, since it's implied: if the fix version is 7.6, the feature/fix 
will be in 8.0 as well.

> Reduce reads for sparse DocValues
> -
>
> Key: LUCENE-8374
> URL: https://issues.apache.org/jira/browse/LUCENE-8374
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/codecs
>Affects Versions: 7.5, master (8.0)
>Reporter: Toke Eskildsen
>Priority: Major
>  Labels: performance
> Attachments: LUCENE-8374.patch, LUCENE-8374.patch, LUCENE-8374.patch, 
> LUCENE-8374.patch, LUCENE-8374.patch, LUCENE-8374_branch_7_3.patch, 
> LUCENE-8374_branch_7_3.patch.20181005, LUCENE-8374_branch_7_4.patch, 
> LUCENE-8374_branch_7_5.patch
>
>
> The {{Lucene70DocValuesProducer}} has the internal classes 
> {{SparseNumericDocValues}} and {{BaseSortedSetDocValues}} (sparse code path), 
> which in turn use {{IndexedDISI}} to handle the docID -> value-ordinal lookup. 
> The value-ordinal is the index of the docID assuming an abstract tightly 
> packed monotonically increasing list of docIDs: If the docIDs with 
> corresponding values are {{[0, 4, 1432]}}, their value-ordinals will be {{[0, 
> 1, 2]}}.
> h2. Outer blocks
> The lookup structure of {{IndexedDISI}} consists of blocks of 2^16 values 
> (65536), where each block can be either {{ALL}}, {{DENSE}} (2^12 to 2^16 
> values) or {{SPARSE}} (< 2^12 values ~= 6%). Consequently blocks vary quite a 
> lot in size and ordinal resolving strategy.
> When a sparse Numeric DocValue is needed, the code first locates the block 
> containing the wanted docID flag. It does so by iterating blocks one-by-one 
> until it reaches the needed one, where each iteration requires a lookup in 
> the underlying {{IndexSlice}}. For a common memory mapped index, this 
> translates to either a cached request or a read operation. If a segment has 
> 6M documents, the worst case is 91 lookups. In our web archive, our segments 
> have ~300M values: a worst case of 4577 lookups!
> One obvious solution is to use a lookup-table for blocks: A long[]-array with 
> an entry for each block. For 6M documents, that is < 1KB and would allow for 
> direct jumping (a single lookup) in all instances. Unfortunately this 
> lookup-table cannot be generated upfront when the writing of values is purely 
> streaming. It can be appended to the end of the stream before it is closed, 
> but without knowing the position of the lookup-table the reader cannot seek 
> to it.
> One strategy for creating such a lookup-table would be to generate it during 
> reads and cache it for next lookup. This does not fit directly into how 
> {{IndexedDISI}} currently works (it is created anew for each invocation), but 
> could probably be added with a little work. An advantage to this is that this 
> does not change the underlying format and thus could be used with existing 
> indexes.
> h2. The lookup structure inside each block
> If {{ALL}} of the 2^16 values are defined, the structure is empty and the 
> ordinal is simply the requested docID with some modulo and multiply math. 
> Nothing to improve there.
> If the block is {{DENSE}} (2^12 to 2^16 values are defined), a bitmap is used 
> and the number of set bits up to the wanted index (the docID modulo the block 
> origo) are counted. That bitmap is a long[1024], meaning that worst case is 
> to lookup and count all set bits for 1024 longs!
> One known solution to this is to use a [rank 
> structure|https://en.wikipedia.org/wiki/Succinct_data_structure]. I 
> [implemented 
> it|https://github.com/tokee/lucene-solr/blob/solr5894/solr/core/src/java/org/apache/solr/search/sparse/count/plane/RankCache.java]
>  for a related project, and with that the rank overhead for a {{DENSE}} 
> block would be long[32] and would ensure a maximum of 9 lookups. It is not 
> trivial to build the rank-structure and caching it (assuming all blocks are 
> dense) for 6M documents would require 22 KB (3.17% overhead). It would be far 
> better to generate the rank-structure at index time and store it immediately 
> before the bitset 
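The DENSE-block rank idea described above can be sketched in plain Java. This is an illustrative assumption, not the layout discussed in the issue: the class and method names (DenseRank, buildRank, rankOf) are made up, and a cache spacing of one entry per 8 longs is chosen for readability rather than the long[32] overhead the issue mentions.

```java
public class DenseRank {
    // words: the long[1024] bitmap of one DENSE block (2^16 bits).
    // rank[i] caches the number of set bits in words[0 .. i*8), so a
    // rank query never popcounts more than 8 longs plus one partial word.
    static int[] buildRank(long[] words) {
        int[] rank = new int[words.length / 8];
        int acc = 0;
        for (int i = 0; i < words.length; i++) {
            if (i % 8 == 0) rank[i / 8] = acc;
            acc += Long.bitCount(words[i]);
        }
        return rank;
    }

    // Number of set bits strictly below bitIndex: one cached entry, then
    // at most 8 full popcounts, then a masked popcount of the last word,
    // instead of scanning up to 1024 longs from the block start.
    static int rankOf(long[] words, int[] rank, int bitIndex) {
        int word = bitIndex >>> 6;                 // which long holds the bit
        int r = rank[word >>> 3];                  // cached prefix count
        for (int i = (word >>> 3) << 3; i < word; i++) {
            r += Long.bitCount(words[i]);          // longs after the cache point
        }
        // bits below bitIndex inside its own word
        return r + Long.bitCount(words[word] & ((1L << (bitIndex & 63)) - 1));
    }
}
```

The trade-off sketched here is the same one the issue weighs: a small per-block array bounds the number of memory accesses per lookup, and it can either be built lazily at query time and cached, or written at index time ahead of the bitset at the cost of a codec change.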

[jira] [Commented] (SOLR-12846) Policy rules do not support host variable

2018-10-22 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12846?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16658963#comment-16658963
 ] 

ASF subversion and git services commented on SOLR-12846:


Commit c31a95d26170c7ffbd7e3177288891d6a14f4ab1 in lucene-solr's branch 
refs/heads/master from [~noble.paul]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=c31a95d ]

SOLR-12846: Added support for "host" variable in autoscaling policy rules








[jira] [Updated] (LUCENE-8374) Reduce reads for sparse DocValues

2018-10-22 Thread David Smiley (JIRA)


 [ 
https://issues.apache.org/jira/browse/LUCENE-8374?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley updated LUCENE-8374:
-
Fix Version/s: (was: 7.5)
   (was: 7.3.1)
   (was: trunk)

> Reduce reads for sparse DocValues
> -
>
> Key: LUCENE-8374
> URL: https://issues.apache.org/jira/browse/LUCENE-8374
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/codecs
>Affects Versions: 7.5, master (8.0)
>Reporter: Toke Eskildsen
>Priority: Major
>  Labels: performance
> Attachments: LUCENE-8374.patch, LUCENE-8374.patch, LUCENE-8374.patch, 
> LUCENE-8374.patch, LUCENE-8374.patch, LUCENE-8374_branch_7_3.patch, 
> LUCENE-8374_branch_7_3.patch.20181005, LUCENE-8374_branch_7_4.patch, 
> LUCENE-8374_branch_7_5.patch
>
>
> The {{Lucene70DocValuesProducer}} has the internal classes 
> {{SparseNumericDocValues}} and {{BaseSortedSetDocValues}} (sparse code path), 
> which in turn use {{IndexedDISI}} to handle the docID -> value-ordinal lookup. 
> The value-ordinal is the index of the docID assuming an abstract tightly 
> packed monotonically increasing list of docIDs: If the docIDs with 
> corresponding values are {{[0, 4, 1432]}}, their value-ordinals will be {{[0, 
> 1, 2]}}.
> h2. Outer blocks
> The lookup structure of {{IndexedDISI}} consists of blocks of 2^16 values 
> (65536), where each block can be either {{ALL}}, {{DENSE}} (2^12 to 2^16 
> values) or {{SPARSE}} (< 2^12 values ~= 6%). Consequently blocks vary quite a 
> lot in size and ordinal resolving strategy.
> When a sparse Numeric DocValue is needed, the code first locates the block 
> containing the wanted docID flag. It does so by iterating blocks one-by-one 
> until it reaches the needed one, where each iteration requires a lookup in 
> the underlying {{IndexSlice}}. For a common memory mapped index, this 
> translates to either a cached request or a read operation. If a segment has 
> 6M documents, the worst case is 91 lookups. In our web archive, our segments 
> have ~300M values: a worst case of 4577 lookups!
> One obvious solution is to use a lookup-table for blocks: A long[]-array with 
> an entry for each block. For 6M documents, that is < 1KB and would allow for 
> direct jumping (a single lookup) in all instances. Unfortunately this 
> lookup-table cannot be generated upfront when the writing of values is purely 
> streaming. It can be appended to the end of the stream before it is closed, 
> but without knowing the position of the lookup-table the reader cannot seek 
> to it.
> One strategy for creating such a lookup-table would be to generate it during 
> reads and cache it for next lookup. This does not fit directly into how 
> {{IndexedDISI}} currently works (it is created anew for each invocation), but 
> could probably be added with a little work. An advantage to this is that this 
> does not change the underlying format and thus could be used with existing 
> indexes.
> h2. The lookup structure inside each block
> If {{ALL}} of the 2^16 values are defined, the structure is empty and the 
> ordinal is simply the requested docID with some modulo and multiply math. 
> Nothing to improve there.
> If the block is {{DENSE}} (2^12 to 2^16 values are defined), a bitmap is used 
> and the number of set bits up to the wanted index (the docID modulo the block 
> origo) are counted. That bitmap is a long[1024], meaning that worst case is 
> to lookup and count all set bits for 1024 longs!
> One known solution to this is to use a [rank 
> structure|https://en.wikipedia.org/wiki/Succinct_data_structure]. I 
> [implemented 
> it|https://github.com/tokee/lucene-solr/blob/solr5894/solr/core/src/java/org/apache/solr/search/sparse/count/plane/RankCache.java]
>  for a related project, and with that the rank overhead for a {{DENSE}} 
> block would be long[32] and would ensure a maximum of 9 lookups. It is not 
> trivial to build the rank-structure and caching it (assuming all blocks are 
> dense) for 6M documents would require 22 KB (3.17% overhead). It would be far 
> better to generate the rank-structure at index time and store it immediately 
> before the bitset (this is possible with streaming as each block is fully 
> resolved before flushing), but of course that would require a change to the 
> codec.
> If {{SPARSE}} (< 2^12 values ~= 6%) are defined, the docIDs are simply in the 
> form of a list. As a comment in the code suggests, a binary search through 
> these would be faster, although that would mean seeking backwards. If that is 
> not acceptable, I don't have any immediate idea for avoiding the full 
> iteration.
> I propose implementing query-time caching of both block-jumps and inner-block 
> lookups for {{DENSE}} (using rank) as first improvement and an index-time 
> 

[JENKINS] Lucene-Solr-repro - Build # 1750 - Unstable

2018-10-22 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-repro/1750/

[...truncated 34 lines...]
[repro] Jenkins log URL: 
https://builds.apache.org/job/Lucene-Solr-Tests-master/2890/consoleText

[repro] Revision: 5de63322098e21438e734dc918040dc8d78122ac

[repro] Repro line:  ant test  -Dtestcase=DeleteReplicaTest 
-Dtests.method=raceConditionOnDeleteAndRegisterReplica 
-Dtests.seed=B8C5D9E3DF2CE8A3 -Dtests.multiplier=2 -Dtests.slow=true 
-Dtests.locale=uk-UA -Dtests.timezone=Asia/Novosibirsk -Dtests.asserts=true 
-Dtests.file.encoding=ISO-8859-1

[repro] Repro line:  ant test  -Dtestcase=DeleteReplicaTest 
-Dtests.method=deleteLiveReplicaTest -Dtests.seed=B8C5D9E3DF2CE8A3 
-Dtests.multiplier=2 -Dtests.slow=true -Dtests.locale=uk-UA 
-Dtests.timezone=Asia/Novosibirsk -Dtests.asserts=true 
-Dtests.file.encoding=ISO-8859-1

[repro] git rev-parse --abbrev-ref HEAD
[repro] git rev-parse HEAD
[repro] Initial local git branch/revision: 
8c70811f3a2a4deab8186b187909ac5c3615e6fb
[repro] git fetch
[repro] git checkout 5de63322098e21438e734dc918040dc8d78122ac

[...truncated 2 lines...]
[repro] git merge --ff-only

[...truncated 1 lines...]
[repro] ant clean

[...truncated 6 lines...]
[repro] Test suites by module:
[repro]solr/core
[repro]   DeleteReplicaTest
[repro] ant compile-test

[...truncated 3423 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=5 
-Dtests.class="*.DeleteReplicaTest" -Dtests.showOutput=onerror  
-Dtests.seed=B8C5D9E3DF2CE8A3 -Dtests.multiplier=2 -Dtests.slow=true 
-Dtests.locale=uk-UA -Dtests.timezone=Asia/Novosibirsk -Dtests.asserts=true 
-Dtests.file.encoding=ISO-8859-1

[...truncated 25078 lines...]
[repro] Setting last failure code to 256

[repro] Failures:
[repro]   2/5 failed: org.apache.solr.cloud.DeleteReplicaTest
[repro] git checkout 8c70811f3a2a4deab8186b187909ac5c3615e6fb

[...truncated 2 lines...]
[repro] Exiting with code 256

[...truncated 6 lines...]

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[JENKINS] Lucene-Solr-Tests-master - Build # 2891 - Still Unstable

2018-10-22 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-master/2891/

1 tests failed.
FAILED:  
org.apache.solr.cloud.autoscaling.sim.TestSimPolicyCloud.testCreateCollectionAddReplica

Error Message:
Timeout waiting for collection to become active Live Nodes: 
[127.0.0.1:10001_solr, 127.0.0.1:10004_solr, 127.0.0.1:1_solr, 
127.0.0.1:10002_solr, 127.0.0.1:10003_solr] Last available state: 
DocCollection(testCreateCollectionAddReplica//clusterstate.json/52)={   
"replicationFactor":"1",   "pullReplicas":"0",   
"router":{"name":"compositeId"},   "maxShardsPerNode":"1",   
"autoAddReplicas":"false",   "nrtReplicas":"1",   "tlogReplicas":"0",   
"autoCreated":"true",   "policy":"c1",   "shards":{"shard1":{   
"replicas":{"core_node1":{   
"core":"testCreateCollectionAddReplica_shard1_replica_n1",   
"SEARCHER.searcher.maxDoc":0,   "SEARCHER.searcher.deletedDocs":0,  
 "INDEX.sizeInBytes":10240,   "node_name":"127.0.0.1:10001_solr",   
"state":"active",   "type":"NRT",   
"INDEX.sizeInGB":9.5367431640625E-6,   "SEARCHER.searcher.numDocs":0}}, 
  "range":"8000-7fff",   "state":"active"}}}

Stack Trace:
java.lang.AssertionError: Timeout waiting for collection to become active
Live Nodes: [127.0.0.1:10001_solr, 127.0.0.1:10004_solr, 127.0.0.1:1_solr, 
127.0.0.1:10002_solr, 127.0.0.1:10003_solr]
Last available state: 
DocCollection(testCreateCollectionAddReplica//clusterstate.json/52)={
  "replicationFactor":"1",
  "pullReplicas":"0",
  "router":{"name":"compositeId"},
  "maxShardsPerNode":"1",
  "autoAddReplicas":"false",
  "nrtReplicas":"1",
  "tlogReplicas":"0",
  "autoCreated":"true",
  "policy":"c1",
  "shards":{"shard1":{
  "replicas":{"core_node1":{
  "core":"testCreateCollectionAddReplica_shard1_replica_n1",
  "SEARCHER.searcher.maxDoc":0,
  "SEARCHER.searcher.deletedDocs":0,
  "INDEX.sizeInBytes":10240,
  "node_name":"127.0.0.1:10001_solr",
  "state":"active",
  "type":"NRT",
  "INDEX.sizeInGB":9.5367431640625E-6,
  "SEARCHER.searcher.numDocs":0}},
  "range":"8000-7fff",
  "state":"active"}}}
at 
__randomizedtesting.SeedInfo.seed([DDFB0580FC464535:5DDB60AEED05AD93]:0)
at 
org.apache.solr.cloud.CloudTestUtils.waitForState(CloudTestUtils.java:70)
at 
org.apache.solr.cloud.autoscaling.sim.TestSimPolicyCloud.testCreateCollectionAddReplica(TestSimPolicyCloud.java:123)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985)
at 
org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:110)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Commented] (SOLR-10465) setIdField should be deprecated in favor of SolrClientBuilder methods

2018-10-22 Thread Charles Sanders (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-10465?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16658931#comment-16658931
 ] 

Charles Sanders commented on SOLR-10465:


[~gerlowskija]  No problem.  Thanks for the update.

> setIdField should be deprecated in favor of SolrClientBuilder methods
> -
>
> Key: SOLR-10465
> URL: https://issues.apache.org/jira/browse/SOLR-10465
> Project: Solr
>  Issue Type: Sub-task
>  Components: SolrJ
>Reporter: Jason Gerlowski
>Priority: Minor
> Fix For: 7.0
>
> Attachments: SOLR-10465.patch, SOLR-10465.patch
>
>
> Now that builders are in place for {{SolrClients}}, the setters used in each 
> {{SolrClient}} can be deprecated, and their functionality moved over to the 
> Builders. This change brings a few benefits:
> - unifies {{SolrClient}} configuration under the new Builders. It'll be nice 
> to have all the knobs and levers used to tweak {{SolrClient}}s available in 
> a single place (the Builders).
> - reduces {{SolrClient}} thread-safety concerns. Currently, clients are 
> mutable. Using some {{SolrClient}} setters can result in erratic and "trappy" 
> behavior when the clients are used across multiple threads.
> This subtask endeavors to change this behavior for the {{setIdField}} setter 
> on all {{SolrClient}} implementations.
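The thread-safety argument can be illustrated with a minimal builder sketch. This is not the actual SolrJ API; the class and method names below are hypothetical:

```java
// Illustrative sketch only, not the actual SolrJ API: hypothetical names
// showing why moving a setter like setIdField onto a Builder removes the
// thread-safety trap. The built client is immutable, so no thread can
// change the id field while another thread is using the client.
final class SketchClient {
  private final String idField; // final: safe to share across threads

  private SketchClient(Builder b) {
    this.idField = b.idField;
  }

  String idField() {
    return idField;
  }

  static final class Builder {
    private String idField = "id"; // default, overridable before build()

    Builder withIdField(String idField) {
      this.idField = idField;
      return this;
    }

    SketchClient build() {
      return new SketchClient(this);
    }
  }
}
```

All configuration happens before `build()`; afterwards there is nothing left to mutate, which is exactly the "erratic and trappy" multi-threaded behavior the deprecation aims to remove.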



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)




[JENKINS] Lucene-Solr-7.x-Linux (64bit/jdk1.8.0_172) - Build # 2957 - Still Unstable!

2018-10-22 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/2957/
Java: 64bit/jdk1.8.0_172 -XX:-UseCompressedOops -XX:+UseG1GC

2 tests failed.
FAILED:  org.apache.solr.client.solrj.impl.CloudSolrClientTest.testRouting

Error Message:
Error from server at 
https://127.0.0.1:41465/solr/collection1_shard2_replica_n3: Expected mime type 
application/octet-stream but got text/html.

Error 404: Can not find: /solr/collection1_shard2_replica_n3/update
HTTP ERROR 404
Problem accessing /solr/collection1_shard2_replica_n3/update. Reason:
Can not find: /solr/collection1_shard2_replica_n3/update
Powered by Jetty // 9.4.11.v20180605

Stack Trace:
org.apache.solr.client.solrj.impl.CloudSolrClient$RouteException: Error from 
server at https://127.0.0.1:41465/solr/collection1_shard2_replica_n3: Expected 
mime type application/octet-stream but got text/html. 


Error 404 Can not find: 
/solr/collection1_shard2_replica_n3/update

HTTP ERROR 404
Problem accessing /solr/collection1_shard2_replica_n3/update. Reason:
Can not find: /solr/collection1_shard2_replica_n3/update
Powered by Jetty // 9.4.11.v20180605




at 
__randomizedtesting.SeedInfo.seed([C0D83D9531B75D40:26F01FD32F7AD38]:0)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.directUpdate(CloudSolrClient.java:551)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1016)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:884)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:949)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:949)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:949)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:949)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:949)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:817)
at 
org.apache.solr.client.solrj.impl.CloudSolrClientTest.testRouting(CloudSolrClientTest.java:238)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985)
at 
org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:110)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 

[jira] [Resolved] (SOLR-12729) SplitShardCmd should lock the parent shard to prevent parallel splitting requests

2018-10-22 Thread Andrzej Bialecki (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12729?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrzej Bialecki  resolved SOLR-12729.
--
Resolution: Fixed

> SplitShardCmd should lock the parent shard to prevent parallel splitting 
> requests
> -
>
> Key: SOLR-12729
> URL: https://issues.apache.org/jira/browse/SOLR-12729
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling
>Reporter: Andrzej Bialecki 
>Assignee: Andrzej Bialecki 
>Priority: Major
> Fix For: 7.6, master (8.0)
>
>
> This scenario was discovered by the simulation framework, but it exists also 
> in the non-simulated code.
> When {{IndexSizeTrigger}} requests SPLITSHARD, which is then successfully 
> started and “completed” from the point of view of {{ExecutePlanAction}}, in 
> reality it can still take a significant amount of time before the new 
> replicas fully recover and cause the switch of shard states (parent to 
> INACTIVE, children from RECOVERY to ACTIVE).
> If this time is longer than the trigger's {{waitFor}}, the trigger will issue 
> the same SPLITSHARD request again. {{SplitShardCmd}} doesn't prevent this new 
> request from being processed because the parent shard is still ACTIVE. 
> However, a section of the code in {{SplitShardCmd}} will realize that 
> sub-slices with the target names already exist and they are not active, at 
> which point it will delete the new sub-slices ({{SplitShardCmd:182}}).
> The end result is an infinite loop, where {{IndexSizeTrigger}} will keep 
> generating SPLITSHARD, and {{SplitShardCmd}} will keep deleting the 
> recovering sub-slices created by the previous command.
> A simple solution is to mark the parent shard to indicate that it’s in the 
> process of splitting, so that no other split is attempted on the same 
> shard. Furthermore, {{IndexSizeTrigger}} could temporarily exclude such 
> shards from monitoring.
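The proposed guard can be sketched as a per-parent-shard lock. All names here are hypothetical, not the actual {{SplitShardCmd}} implementation:

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Hedged sketch of the proposed fix, with hypothetical names (not the
// actual SplitShardCmd code): record an in-progress split per parent shard
// so a repeated SPLITSHARD on the same shard is rejected instead of
// deleting the still-recovering sub-slices.
final class ShardSplitLock {
  private final Set<String> splitting = ConcurrentHashMap.newKeySet();

  // Returns true only for the first caller; a concurrent or repeated
  // request for the same parent shard gets false and should be rejected.
  boolean tryLock(String collection, String shard) {
    return splitting.add(collection + "/" + shard);
  }

  // Release when the split completes (children ACTIVE, parent INACTIVE)
  // or fails, so a later legitimate split can proceed.
  void unlock(String collection, String shard) {
    splitting.remove(collection + "/" + shard);
  }
}
```

Because the marker survives until the children are ACTIVE, the trigger firing again within {{waitFor}} no longer produces the infinite split/delete loop described above.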






[jira] [Commented] (SOLR-12729) SplitShardCmd should lock the parent shard to prevent parallel splitting requests

2018-10-22 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12729?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16658864#comment-16658864
 ] 

ASF subversion and git services commented on SOLR-12729:


Commit f47acc4588346843f3a20d1e973fcfe3fdbe10c2 in lucene-solr's branch 
refs/heads/branch_7x from Andrzej Bialecki
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=f47acc4 ]

SOLR-12729: SplitShardCmd should lock the parent shard to prevent parallel 
splitting requests.


> SplitShardCmd should lock the parent shard to prevent parallel splitting 
> requests
> -
>
> Key: SOLR-12729
> URL: https://issues.apache.org/jira/browse/SOLR-12729
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling
>Reporter: Andrzej Bialecki 
>Assignee: Andrzej Bialecki 
>Priority: Major
> Fix For: 7.6, master (8.0)
>
>
> This scenario was discovered by the simulation framework, but it exists also 
> in the non-simulated code.
> When {{IndexSizeTrigger}} requests SPLITSHARD, which is then successfully 
> started and “completed” from the point of view of {{ExecutePlanAction}}, in 
> reality it can still take a significant amount of time before the new 
> replicas fully recover and cause the switch of shard states (parent to 
> INACTIVE, children from RECOVERY to ACTIVE).
> If this time is longer than the trigger's {{waitFor}}, the trigger will issue 
> the same SPLITSHARD request again. {{SplitShardCmd}} doesn't prevent this new 
> request from being processed because the parent shard is still ACTIVE. 
> However, a section of the code in {{SplitShardCmd}} will realize that 
> sub-slices with the target names already exist and they are not active, at 
> which point it will delete the new sub-slices ({{SplitShardCmd:182}}).
> The end result is an infinite loop, where {{IndexSizeTrigger}} will keep 
> generating SPLITSHARD, and {{SplitShardCmd}} will keep deleting the 
> recovering sub-slices created by the previous command.
> A simple solution is to mark the parent shard to indicate that it’s in the 
> process of splitting, so that no other split is attempted on the same 
> shard. Furthermore, {{IndexSizeTrigger}} could temporarily exclude such 
> shards from monitoring.





