[JENKINS] Lucene-Solr-master-Linux (32bit/jdk1.8.0_172) - Build # 23200 - Still Unstable!

2018-11-12 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/23200/
Java: 32bit/jdk1.8.0_172 -client -XX:+UseG1GC

1 tests failed.
FAILED:  org.apache.solr.cloud.autoscaling.sim.TestSimPolicyCloud.testCreateCollectionAddReplica

Error Message:
Timeout waiting for collection to become active
Live Nodes: [127.0.0.1:10001_solr, 127.0.0.1:10004_solr, 127.0.0.1:1_solr, 127.0.0.1:10002_solr, 127.0.0.1:10003_solr]
Last available state: DocCollection(testCreateCollectionAddReplica//clusterstate.json/42)={
  "replicationFactor":"1",
  "pullReplicas":"0",
  "router":{"name":"compositeId"},
  "maxShardsPerNode":"1",
  "autoAddReplicas":"false",
  "nrtReplicas":"1",
  "tlogReplicas":"0",
  "autoCreated":"true",
  "policy":"c1",
  "shards":{"shard1":{
    "replicas":{"core_node1":{
      "core":"testCreateCollectionAddReplica_shard1_replica_n1",
      "SEARCHER.searcher.maxDoc":0,
      "SEARCHER.searcher.deletedDocs":0,
      "INDEX.sizeInBytes":10240,
      "node_name":"127.0.0.1:10004_solr",
      "state":"active",
      "type":"NRT",
      "INDEX.sizeInGB":9.5367431640625E-6,
      "SEARCHER.searcher.numDocs":0}},
    "range":"8000-7fff",
    "state":"active"}}}

Stack Trace:
java.lang.AssertionError: Timeout waiting for collection to become active
Live Nodes: [127.0.0.1:10001_solr, 127.0.0.1:10004_solr, 127.0.0.1:1_solr, 
127.0.0.1:10002_solr, 127.0.0.1:10003_solr]
Last available state: 
DocCollection(testCreateCollectionAddReplica//clusterstate.json/42)={
  "replicationFactor":"1",
  "pullReplicas":"0",
  "router":{"name":"compositeId"},
  "maxShardsPerNode":"1",
  "autoAddReplicas":"false",
  "nrtReplicas":"1",
  "tlogReplicas":"0",
  "autoCreated":"true",
  "policy":"c1",
  "shards":{"shard1":{
  "replicas":{"core_node1":{
  "core":"testCreateCollectionAddReplica_shard1_replica_n1",
  "SEARCHER.searcher.maxDoc":0,
  "SEARCHER.searcher.deletedDocs":0,
  "INDEX.sizeInBytes":10240,
  "node_name":"127.0.0.1:10004_solr",
  "state":"active",
  "type":"NRT",
  "INDEX.sizeInGB":9.5367431640625E-6,
  "SEARCHER.searcher.numDocs":0}},
  "range":"8000-7fff",
  "state":"active"}}}
    at __randomizedtesting.SeedInfo.seed([98DE92908512C12A:18FEF7BE9451298C]:0)
    at org.apache.solr.cloud.CloudTestUtils.waitForState(CloudTestUtils.java:70)
    at org.apache.solr.cloud.autoscaling.sim.TestSimPolicyCloud.testCreateCollectionAddReplica(TestSimPolicyCloud.java:123)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985)
    at org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:110)
    at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
    at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
    at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
    at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
    at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
    at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
    at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:880)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891)
    at

[JENKINS] Lucene-Solr-SmokeRelease-master - Build # 1181 - Failure

2018-11-12 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-master/1181/

No tests ran.

Build Log:
[...truncated 23412 lines...]
[asciidoctor:convert] asciidoctor: ERROR: about-this-guide.adoc: line 1: invalid part, must have at least one section (e.g., chapter, appendix, etc.)
[asciidoctor:convert] asciidoctor: ERROR: solr-glossary.adoc: line 1: invalid part, must have at least one section (e.g., chapter, appendix, etc.)
 [java] Processed 2437 links (1990 relative) to 3211 anchors in 248 files
 [echo] Validated Links & Anchors via: /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/build/solr-ref-guide/bare-bones-html/

-dist-changes:
 [copy] Copying 4 files to /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/package/changes

package:

-unpack-solr-tgz:

-ensure-solr-tgz-exists:
[mkdir] Created dir: /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/build/solr.tgz.unpacked
[untar] Expanding: /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/package/solr-8.0.0.tgz into /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/build/solr.tgz.unpacked

generate-maven-artifacts:

resolve:

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

[...repeated identical resolve / ivy-configure cycles omitted...]


[JENKINS] Lucene-Solr-repro - Build # 1931 - Unstable

2018-11-12 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-repro/1931/

[...truncated 28 lines...]
[repro] Jenkins log URL: https://builds.apache.org/job/Lucene-Solr-NightlyTests-7.x/375/consoleText

[repro] Revision: 6273f696fc427d411c670d3e062548ed71957b94

[repro] Ant options: -Dtests.multiplier=2 -Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-7.x/test-data/enwiki.random.lines.txt
[repro] Repro line:  ant test  -Dtestcase=HdfsRestartWhileUpdatingTest -Dtests.method=test -Dtests.seed=94744E740BFC960B -Dtests.multiplier=2 -Dtests.nightly=true -Dtests.slow=true -Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-7.x/test-data/enwiki.random.lines.txt -Dtests.locale=sv -Dtests.timezone=Antarctica/Syowa -Dtests.asserts=true -Dtests.file.encoding=ISO-8859-1

[repro] Repro line:  ant test  -Dtestcase=HdfsRestartWhileUpdatingTest -Dtests.seed=94744E740BFC960B -Dtests.multiplier=2 -Dtests.nightly=true -Dtests.slow=true -Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-7.x/test-data/enwiki.random.lines.txt -Dtests.locale=sv -Dtests.timezone=Antarctica/Syowa -Dtests.asserts=true -Dtests.file.encoding=ISO-8859-1

[repro] git rev-parse --abbrev-ref HEAD
[repro] git rev-parse HEAD
[repro] Initial local git branch/revision: e81dd4e870d2a9b27e1f4366e92daa6dba054da8
[repro] git fetch
[repro] git checkout 6273f696fc427d411c670d3e062548ed71957b94

[...truncated 2 lines...]
[repro] git merge --ff-only

[...truncated 1 lines...]
[repro] ant clean

[...truncated 6 lines...]
[repro] Test suites by module:
[repro]    solr/core
[repro]   HdfsRestartWhileUpdatingTest
[repro] ant compile-test

[...truncated 3580 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=5 -Dtests.class="*.HdfsRestartWhileUpdatingTest" -Dtests.showOutput=onerror -Dtests.multiplier=2 -Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-7.x/test-data/enwiki.random.lines.txt -Dtests.seed=94744E740BFC960B -Dtests.multiplier=2 -Dtests.nightly=true -Dtests.slow=true -Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-7.x/test-data/enwiki.random.lines.txt -Dtests.locale=sv -Dtests.timezone=Antarctica/Syowa -Dtests.asserts=true -Dtests.file.encoding=ISO-8859-1

[...truncated 41924 lines...]
[repro] Setting last failure code to 256

[repro] Failures:
[repro]   3/5 failed: org.apache.solr.cloud.hdfs.HdfsRestartWhileUpdatingTest
[repro] git checkout e81dd4e870d2a9b27e1f4366e92daa6dba054da8

[...truncated 2 lines...]
[repro] Exiting with code 256

[...truncated 6 lines...]

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[jira] [Updated] (SOLR-12983) JavabinLoader should avoid creating String Objects and create string fields from byte[]

2018-11-12 Thread Noble Paul (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12983?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul updated SOLR-12983:
--
Description: Javabin strings already contain Strings in UTF8 bbyte[] format. String fields can be created directly from those

> JavabinLoader should avoid creating String Objects and create string fields 
> from byte[]
> ---
>
> Key: SOLR-12983
> URL: https://issues.apache.org/jira/browse/SOLR-12983
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Noble Paul
>Priority: Major
>
> Javabin strings already contain Strings in UTF8 bbyte[] format. String fields
> can be created directly from those



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)




[jira] [Updated] (SOLR-12983) JavabinLoader should avoid creating String Objects and create string fields from byte[]

2018-11-12 Thread Noble Paul (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12983?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul updated SOLR-12983:
--
Description: Javabin strings already contain Strings in UTF8 byte[] format. String fields can be created directly from those  (was: Javabin strings already contain Strings in UTF8 bbyte[] format. String fields can be created directly from those)

> JavabinLoader should avoid creating String Objects and create string fields 
> from byte[]
> ---
>
> Key: SOLR-12983
> URL: https://issues.apache.org/jira/browse/SOLR-12983
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Noble Paul
>Priority: Major
>
> Javabin strings already contain Strings in UTF8 byte[] format. String fields
> can be created directly from those
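The idea in the description above can be sketched in a few lines. This is an illustrative, standalone example with hypothetical names (Utf8Field is not Solr's actual JavabinLoader API): since javabin already carries each string as UTF-8 bytes on the wire, a field value could hold onto the byte[] directly and decode it to a java.lang.String only if one is actually needed.

```java
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

public class Utf8FieldSketch {
    // Hypothetical field value that keeps the wire-format UTF-8 bytes as-is.
    static final class Utf8Field {
        final byte[] utf8;
        Utf8Field(byte[] utf8) { this.utf8 = utf8; }
        // Decode lazily, only when a String is really required.
        @Override public String toString() { return new String(utf8, StandardCharsets.UTF_8); }
    }

    public static void main(String[] args) {
        // Bytes as they would arrive from the javabin stream.
        byte[] wire = "hello".getBytes(StandardCharsets.UTF_8);
        // Today: new String(wire, UTF_8) allocates before the field is built.
        // Proposed: hand the byte[] straight to the field, no decode, no copy.
        Utf8Field f = new Utf8Field(wire);
        System.out.println(Arrays.equals(f.utf8, wire)); // prints true: same bytes, no copy
        System.out.println(f);                           // prints hello: decoded only on demand
    }
}
```

The saving is the avoided per-value String allocation and UTF-8 decode on the indexing path; the decode still happens if and when a String form is requested.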







[jira] [Created] (SOLR-12983) JavabinLoader should avoid creating String Objects and create string fields from byte[]

2018-11-12 Thread Noble Paul (JIRA)
Noble Paul created SOLR-12983:
-

 Summary: JavabinLoader should avoid creating String Objects and 
create string fields from byte[]
 Key: SOLR-12983
 URL: https://issues.apache.org/jira/browse/SOLR-12983
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Noble Paul










[JENKINS] Lucene-Solr-7.x-Linux (64bit/jdk-9.0.4) - Build # 3082 - Still Unstable!

2018-11-12 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/3082/
Java: 64bit/jdk-9.0.4 -XX:-UseCompressedOops -XX:+UseG1GC

4 tests failed.
FAILED:  org.apache.solr.cloud.TestTlogReplica.testRecovery

Error Message:
Can not find doc 7 in https://127.0.0.1:39175/solr

Stack Trace:
java.lang.AssertionError: Can not find doc 7 in https://127.0.0.1:39175/solr
    at __randomizedtesting.SeedInfo.seed([984351F4ED4C4BA3:59B32858C01C8104]:0)
    at org.junit.Assert.fail(Assert.java:93)
    at org.junit.Assert.assertTrue(Assert.java:43)
    at org.junit.Assert.assertNotNull(Assert.java:526)
    at org.apache.solr.cloud.TestTlogReplica.checkRTG(TestTlogReplica.java:902)
    at org.apache.solr.cloud.TestTlogReplica.testRecovery(TestTlogReplica.java:567)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.base/java.lang.reflect.Method.invoke(Method.java:564)
    at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985)
    at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
    at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
    at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
    at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
    at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
    at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
    at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:880)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
    at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
    at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
    at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
    at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
    at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
    at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
    at java.base/java.lang.Thread.run(Thread.java:844)


FAILED:  org.apache.solr.cloud.TestTlogReplica.testRecovery

Error Message:
Can not find doc 7 in https://127.0.0.1:35579/solr

Stack Trace:
java.lang.AssertionError: 

[JENKINS] Lucene-Solr-BadApples-Tests-7.x - Build # 215 - Still Unstable

2018-11-12 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-BadApples-Tests-7.x/215/

1 tests failed.
FAILED:  org.apache.solr.client.solrj.impl.CloudSolrClientTest.testRouting

Error Message:
Error from server at http://127.0.0.1:43393/solr/collection1_shard2_replica_n3: Expected mime type application/octet-stream but got text/html.
HTTP ERROR 404: Problem accessing /solr/collection1_shard2_replica_n3/update. Reason: Can not find: /solr/collection1_shard2_replica_n3/update (Jetty 9.4.11.v20180605)

Stack Trace:
org.apache.solr.client.solrj.impl.CloudSolrClient$RouteException: Error from server at http://127.0.0.1:43393/solr/collection1_shard2_replica_n3: Expected mime type application/octet-stream but got text/html.
HTTP ERROR 404: Problem accessing /solr/collection1_shard2_replica_n3/update. Reason: Can not find: /solr/collection1_shard2_replica_n3/update (Jetty 9.4.11.v20180605)

    at __randomizedtesting.SeedInfo.seed([7D89772062C22325:BF3E4B486182D35D]:0)
    at org.apache.solr.client.solrj.impl.CloudSolrClient.directUpdate(CloudSolrClient.java:551)
    at org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1016)
    at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:884)
    at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:949)
    at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:949)
    at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:949)
    at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:949)
    at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:949)
    at org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:817)
    at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
    at org.apache.solr.client.solrj.request.UpdateRequest.commit(UpdateRequest.java:237)
    at org.apache.solr.client.solrj.impl.CloudSolrClientTest.testRouting(CloudSolrClientTest.java:269)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985)
    at org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:110)
    at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
    at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
    at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
    at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
    at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
    at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
    at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:880)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at

[JENKINS] Lucene-Solr-master-Linux (64bit/jdk1.8.0_172) - Build # 23199 - Still Unstable!

2018-11-12 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/23199/
Java: 64bit/jdk1.8.0_172 -XX:-UseCompressedOops -XX:+UseParallelGC

1 tests failed.
FAILED:  org.apache.solr.cloud.api.collections.TestHdfsCloudBackupRestore.test

Error Message:
Error from server at http://127.0.0.1:41447/solr: create the collection time out:180s

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error from server at http://127.0.0.1:41447/solr: create the collection time out:180s
    at __randomizedtesting.SeedInfo.seed([F18D3CB4E3862CCD:79D9036E4D7A4135]:0)
    at org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:643)
    at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255)
    at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244)
    at org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:483)
    at org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:413)
    at org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1107)
    at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:884)
    at org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:817)
    at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
    at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:211)
    at org.apache.solr.cloud.api.collections.AbstractCloudBackupRestoreTestCase.test(AbstractCloudBackupRestoreTestCase.java:125)
    at org.apache.solr.cloud.api.collections.TestHdfsCloudBackupRestore.test(TestHdfsCloudBackupRestore.java:213)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985)
    at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
    at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
    at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
    at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
    at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
    at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
    at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:880)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
    at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
    at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
    at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at

[JENKINS] Lucene-Solr-SmokeRelease-7.6 - Build # 2 - Still Failing

2018-11-12 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-7.6/2/

No tests ran.

Build Log:
[...truncated 23437 lines...]
[asciidoctor:convert] asciidoctor: ERROR: about-this-guide.adoc: line 1: invalid part, must have at least one section (e.g., chapter, appendix, etc.)
[asciidoctor:convert] asciidoctor: ERROR: solr-glossary.adoc: line 1: invalid part, must have at least one section (e.g., chapter, appendix, etc.)
 [java] Processed 2436 links (1989 relative) to 3199 anchors in 247 files
 [echo] Validated Links & Anchors via: /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.6/solr/build/solr-ref-guide/bare-bones-html/

-dist-changes:
 [copy] Copying 4 files to /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.6/solr/package/changes

package:

-unpack-solr-tgz:

-ensure-solr-tgz-exists:
[mkdir] Created dir: /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.6/solr/build/solr.tgz.unpacked
[untar] Expanding: /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.6/solr/package/solr-7.6.0.tgz into /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.6/solr/build/solr.tgz.unpacked

generate-maven-artifacts:

resolve:

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.6/lucene/top-level-ivy-settings.xml

[...repeated resolve / ivy-availability-check / ivy-configure blocks omitted...]

[JENKINS-EA] Lucene-Solr-7.x-Linux (64bit/jdk-12-ea+12) - Build # 3081 - Still Unstable!

2018-11-12 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/3081/
Java: 64bit/jdk-12-ea+12 -XX:+UseCompressedOops -XX:+UseG1GC

21 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.MultiSolrCloudTestCaseTest

Error Message:
Could not find collection:collection1_in_cloud1

Stack Trace:
java.lang.AssertionError: Could not find collection:collection1_in_cloud1
at __randomizedtesting.SeedInfo.seed([A04762486379210D]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertNotNull(Assert.java:526)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:155)
at 
org.apache.solr.cloud.MultiSolrCloudTestCase$DefaultClusterInitFunction.doAccept(MultiSolrCloudTestCase.java:80)
at 
org.apache.solr.cloud.MultiSolrCloudTestCaseTest$2.accept(MultiSolrCloudTestCaseTest.java:68)
at 
org.apache.solr.cloud.MultiSolrCloudTestCaseTest$2.accept(MultiSolrCloudTestCaseTest.java:61)
at 
org.apache.solr.cloud.MultiSolrCloudTestCase.doSetupClusters(MultiSolrCloudTestCase.java:95)
at 
org.apache.solr.cloud.MultiSolrCloudTestCaseTest.setupClusters(MultiSolrCloudTestCaseTest.java:53)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:875)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:835)


FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.MultiSolrCloudTestCaseTest

Error Message:
ObjectTracker found 10 object(s) that were not released!!! [InternalHttpClient, 
SolrZkClient, SolrZkClient, Overseer, InternalHttpClient, ZkController, 
SolrZkClient, InternalHttpClient, ZkCollectionTerms, InternalHttpClient] 
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException: 
org.apache.http.impl.client.InternalHttpClient  at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:42)
  at 
org.apache.solr.client.solrj.impl.HttpClientUtil.createClient(HttpClientUtil.java:321)
  at 
org.apache.solr.client.solrj.impl.HttpClientUtil.createClient(HttpClientUtil.java:330)
  at 
org.apache.solr.client.solrj.impl.HttpClientUtil.createClient(HttpClientUtil.java:268)
  at 
org.apache.solr.client.solrj.impl.HttpClientUtil.createClient(HttpClientUtil.java:255)
  at 
org.apache.solr.client.solrj.impl.CloudSolrClient.<init>(CloudSolrClient.java:280)
  at 
org.apache.solr.client.solrj.impl.CloudSolrClient$Builder.build(CloudSolrClient.java:1597)
  at 
org.apache.solr.cloud.MiniSolrCloudCluster.buildSolrClient(MiniSolrCloudCluster.java:512)
  at 
org.apache.solr.cloud.MiniSolrCloudCluster.<init>(MiniSolrCloudCluster.java:265)
  

[JENKINS] Lucene-Solr-master-Linux (64bit/jdk-11) - Build # 23198 - Still Unstable!

2018-11-12 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/23198/
Java: 64bit/jdk-11 -XX:-UseCompressedOops -XX:+UseParallelGC

32 tests failed.
FAILED:  org.apache.solr.cloud.DeleteReplicaTest.deleteLiveReplicaTest

Error Message:
Error from server at https://127.0.0.1:42687/solr: Cannot create collection 
delLiveColl. Value of maxShardsPerNode is 1, and the number of nodes currently 
live or live and part of your createNodeSet is 3. This allows a maximum of 3 to 
be created. Value of numShards is 2, value of nrtReplicas is 2, value of 
tlogReplicas is 0 and value of pullReplicas is 0. This requires 4 shards to be 
created (higher than the allowed number)

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at https://127.0.0.1:42687/solr: Cannot create collection 
delLiveColl. Value of maxShardsPerNode is 1, and the number of nodes currently 
live or live and part of your createNodeSet is 3. This allows a maximum of 3 to 
be created. Value of numShards is 2, value of nrtReplicas is 2, value of 
tlogReplicas is 0 and value of pullReplicas is 0. This requires 4 shards to be 
created (higher than the allowed number)
at 
__randomizedtesting.SeedInfo.seed([2E93FE3F13A4D6DC:83F34A340E9B7EA9]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:643)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:483)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:413)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1107)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:884)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:817)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:211)
at 
org.apache.solr.cloud.DeleteReplicaTest.deleteLiveReplicaTest(DeleteReplicaTest.java:73)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 

[JENKINS] Lucene-Solr-repro - Build # 1929 - Unstable

2018-11-12 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-repro/1929/

[...truncated 39 lines...]
[repro] Jenkins log URL: 
https://builds.apache.org/job/Lucene-Solr-Tests-7.6/3/consoleText

[repro] Revision: 70fe7e69329be8551d6a13538930aed7b1718a18

[repro] Repro line:  ant test  -Dtestcase=CloudSolrClientTest 
-Dtests.method=testRouting -Dtests.seed=C380751CC11FA9D5 -Dtests.multiplier=2 
-Dtests.slow=true -Dtests.locale=es-CU -Dtests.timezone=Africa/Bangui 
-Dtests.asserts=true -Dtests.file.encoding=US-ASCII

[repro] git rev-parse --abbrev-ref HEAD
[repro] git rev-parse HEAD
[repro] Initial local git branch/revision: 
e81dd4e870d2a9b27e1f4366e92daa6dba054da8
[repro] git fetch
[repro] git checkout 70fe7e69329be8551d6a13538930aed7b1718a18

[...truncated 2 lines...]
[repro] git merge --ff-only

[...truncated 1 lines...]
[repro] ant clean

[...truncated 6 lines...]
[repro] Test suites by module:
[repro]solr/solrj
[repro]   CloudSolrClientTest
[repro] ant compile-test

[...truncated 2716 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=5 
-Dtests.class="*.CloudSolrClientTest" -Dtests.showOutput=onerror  
-Dtests.seed=C380751CC11FA9D5 -Dtests.multiplier=2 -Dtests.slow=true 
-Dtests.locale=es-CU -Dtests.timezone=Africa/Bangui -Dtests.asserts=true 
-Dtests.file.encoding=US-ASCII

[...truncated 1383 lines...]
[repro] Setting last failure code to 256

[repro] Failures:
[repro]   3/5 failed: org.apache.solr.client.solrj.impl.CloudSolrClientTest
[repro] git checkout e81dd4e870d2a9b27e1f4366e92daa6dba054da8

[...truncated 2 lines...]
[repro] Exiting with code 256

[...truncated 6 lines...]

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[GitHub] lucene-solr pull request #497: LUCENE-8026: ExitableDirectoryReader does not...

2018-11-12 Thread mikemccand
Github user mikemccand commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/497#discussion_r232832055
  
--- Diff: 
lucene/core/src/java/org/apache/lucene/index/ExitableDirectoryReader.java ---
@@ -100,13 +109,150 @@ public CacheHelper getCoreCacheHelper() {
 
   }
 
+  /**
+   * Wrapper class for another PointValues implementation that is used by 
ExitableFields.
+   */
+  public static class ExitablePointValues extends PointValues {
+
+private final PointValues in;
+private final QueryTimeout queryTimeout;
+
+public ExitablePointValues(PointValues in, QueryTimeout queryTimeout) {
+  this.in = in;
+  this.queryTimeout = queryTimeout;
+  checkAndThrow();
+}
+
+/**
+ * Throws {@link ExitingReaderException} if {@link 
QueryTimeout#shouldExit()} returns true,
+ * or if {@link Thread#interrupted()} returns true.
+ */
+private void checkAndThrow() {
+  if (queryTimeout.shouldExit()) {
+throw new ExitingReaderException("The request took too long to 
iterate over point values. Timeout: "
++ queryTimeout.toString()
++ ", PointValues=" + in
+);
+  } else if (Thread.interrupted()) {
+throw new ExitingReaderException("Interrupted while iterating over 
point values. PointValues=" + in);
+  }
+}
+
+@Override
+public void intersect(IntersectVisitor visitor) throws IOException {
+  checkAndThrow();
+  in.intersect(new ExitableIntersectVisitor(visitor, queryTimeout));
+}
+
+@Override
+public long estimatePointCount(IntersectVisitor visitor) {
+  checkAndThrow();
+  return in.estimatePointCount(visitor);
+}
+
+@Override
+public byte[] getMinPackedValue() throws IOException {
+  checkAndThrow();
+  return in.getMinPackedValue();
+}
+
+@Override
+public byte[] getMaxPackedValue() throws IOException {
+  checkAndThrow();
+  return in.getMaxPackedValue();
+}
+
+@Override
+public int getNumDataDimensions() throws IOException {
+  checkAndThrow();
+  return in.getNumDataDimensions();
+}
+
+@Override
+public int getNumIndexDimensions() throws IOException {
+  checkAndThrow();
+  return in.getNumIndexDimensions();
+}
+
+@Override
+public int getBytesPerDimension() throws IOException {
+  checkAndThrow();
+  return in.getBytesPerDimension();
+}
+
+@Override
+public long size() {
+  checkAndThrow();
+  return in.size();
+}
+
+@Override
+public int getDocCount() {
+  checkAndThrow();
+  return in.getDocCount();
+}
+  }
+
+  public static class ExitableIntersectVisitor implements 
PointValues.IntersectVisitor {
+
+public static final int MAX_CALLS_BEFORE_QUERY_TIMEOUT_CHECK = 10;
+
+private final PointValues.IntersectVisitor in;
+private final QueryTimeout queryTimeout;
+private int calls = 0;
--- End diff --

Minor: you don't need the `= 0` -- java does that for you by default.
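Both points from the diff can be seen in a hypothetical standalone sketch (not the Lucene classes themselves): instance `int` fields default to 0 without an explicit `= 0`, and the `calls` counter lets the visitor consult the timeout only once every N calls instead of on every visit.

```java
public class PeriodicTimeoutCheck {

  static final int MAX_CALLS_BEFORE_CHECK = 10;

  interface Timeout {
    boolean shouldExit();
  }

  private final Timeout timeout;
  private int calls; // defaults to 0; an explicit "= 0" would be redundant

  PeriodicTimeoutCheck(Timeout timeout) {
    this.timeout = timeout;
  }

  /** Returns true only on the calls where the timeout was actually consulted. */
  boolean maybeCheck() {
    if (calls++ < MAX_CALLS_BEFORE_CHECK) {
      return false;             // cheap path: skip the (possibly costly) clock read
    }
    calls = 0;                  // reset the counter and do the real check
    if (timeout.shouldExit()) {
      throw new RuntimeException("query timed out");
    }
    return true;
  }

  public static void main(String[] args) {
    PeriodicTimeoutCheck c = new PeriodicTimeoutCheck(() -> false);
    int checks = 0;
    for (int i = 0; i < 22; i++) {
      if (c.maybeCheck()) checks++;
    }
    System.out.println(checks); // only 2 real checks across 22 calls
  }
}
```

The trade-off is responsiveness versus overhead: a larger `MAX_CALLS_BEFORE_CHECK` makes each visit cheaper but lets the query overrun its timeout by up to that many visits before it is noticed.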


---




[jira] [Commented] (SOLR-12927) Ref Guide: Upgrade Notes for 7.6

2018-11-12 Thread Cassandra Targett (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12927?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16684405#comment-16684405
 ] 

Cassandra Targett commented on SOLR-12927:
--

I've attached a patch which has upgrade notes for 7.6. I have these committed 
locally, but I'm on an airplane so I'll push what's in the attached patch 
either later on when I land or tomorrow morning.

> Ref Guide: Upgrade Notes for 7.6
> 
>
> Key: SOLR-12927
> URL: https://issues.apache.org/jira/browse/SOLR-12927
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Reporter: Cassandra Targett
>Assignee: Cassandra Targett
>Priority: Blocker
> Fix For: 7.6
>
> Attachments: SOLR-12927.patch
>
>
> Add Upgrade Notes from CHANGES and any other relevant changes worth 
> mentioning.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)




[jira] [Updated] (SOLR-12927) Ref Guide: Upgrade Notes for 7.6

2018-11-12 Thread Cassandra Targett (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12927?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cassandra Targett updated SOLR-12927:
-
Attachment: SOLR-12927.patch

> Ref Guide: Upgrade Notes for 7.6
> 
>
> Key: SOLR-12927
> URL: https://issues.apache.org/jira/browse/SOLR-12927
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Reporter: Cassandra Targett
>Assignee: Cassandra Targett
>Priority: Blocker
> Fix For: 7.6
>
> Attachments: SOLR-12927.patch
>
>
> Add Upgrade Notes from CHANGES and any other relevant changes worth 
> mentioning.







[GitHub] lucene-solr issue #495: LUCENE-8464: Implement ConstantScoreScorer#setMinCom...

2018-11-12 Thread cbismuth
Github user cbismuth commented on the issue:

https://github.com/apache/lucene-solr/pull/495
  
I think I've understood my mistake and it should be fixed now.

I've added a random condition in tests to exercise the second 
`ConstantScoreScorer` constructor. Even though the tests pass, I'm not very 
confident in my [two phase iterator 
implementation](https://github.com/apache/lucene-solr/pull/495/files#diff-de7ae2ad20402371f11166a7486dade7R119),
 could you please review it? Thank you.
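For context on what a two-phase iterator contract looks like (this is a hypothetical standalone simplification, not cbismuth's actual change or Lucene's `TwoPhaseIterator` API): a cheap approximation yields candidate docs, and a `matches()`-style predicate performs the expensive confirmation only on those candidates.

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;
import java.util.function.IntPredicate;

public class TwoPhaseSketch {

  // First phase: "approximation" iterates a cheap superset of candidates.
  // Second phase: "matches" runs the costly per-doc check on each candidate.
  static List<Integer> collect(Iterator<Integer> approximation, IntPredicate matches) {
    List<Integer> hits = new ArrayList<>();
    while (approximation.hasNext()) {
      int doc = approximation.next();   // cheap first phase
      if (matches.test(doc)) {          // expensive second phase, per candidate only
        hits.add(doc);
      }
    }
    return hits;
  }

  public static void main(String[] args) {
    // Candidates 1..5; only even docs actually match the costly check.
    System.out.println(collect(List.of(1, 2, 3, 4, 5).iterator(), d -> d % 2 == 0)); // [2, 4]
  }
}
```

The point of the split is that the confirmation step never runs on docs the approximation already skipped, which is why getting the two phases consistent is the tricky part of such a change.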


---




[JENKINS] Lucene-Solr-7.x-Linux (64bit/jdk-10.0.1) - Build # 3080 - Still Unstable!

2018-11-12 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/3080/
Java: 64bit/jdk-10.0.1 -XX:+UseCompressedOops -XX:+UseParallelGC

4 tests failed.
FAILED:  org.apache.solr.cloud.TestTlogReplica.testRecovery

Error Message:
Can not find doc 7 in https://127.0.0.1:37425/solr

Stack Trace:
java.lang.AssertionError: Can not find doc 7 in https://127.0.0.1:37425/solr
at 
__randomizedtesting.SeedInfo.seed([F42593772E24C615:35D5EADB03740CB2]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertNotNull(Assert.java:526)
at 
org.apache.solr.cloud.TestTlogReplica.checkRTG(TestTlogReplica.java:902)
at 
org.apache.solr.cloud.TestTlogReplica.testRecovery(TestTlogReplica.java:567)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:844)


FAILED:  org.apache.solr.cloud.TestTlogReplica.testRecovery

Error Message:
Can not find doc 7 in https://127.0.0.1:44177/solr

Stack Trace:

Re: [DISCUSS] Solr separate deprecation log

2018-11-12 Thread Varun Thacker
+1. That's a good idea!

On Thu, Nov 8, 2018 at 10:26 AM Jan Høydahl  wrote:

> Hi,
>
> When instructing people in what to do before upgrading to a new version,
> we often tell them to check for deprecation log messages and fix those
> before upgrading. Normally you'll see the most important logs as WARN level
> in the Admin UI log tab just after startup and first use. But I'm wondering
> if it also makes sense to introduce a separate DeprecationLogger.log(foo)
> that is configured in log4j2.xml to log to a separate logs/deprecation.log
> to make it easier to check this from the command line. If the file is
> non-empty you have work to do :)
>
> --
> Jan Høydahl, search solution architect
> Cominvent AS - www.cominvent.com
>
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>
>
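A separate deprecation log along the lines Jan describes could be wired up in `log4j2.xml` with a dedicated logger name. The fragment below is only an illustrative sketch: the logger name `org.apache.solr.DEPRECATION`, the file path, and the pattern are assumptions, not an existing Solr configuration.

```xml
<!-- Illustrative log4j2.xml fragment: route a hypothetical
     "org.apache.solr.DEPRECATION" logger to logs/deprecation.log -->
<Appenders>
  <File name="DeprecationFile" fileName="${sys:solr.log.dir}/deprecation.log">
    <PatternLayout pattern="%d{ISO8601} %-5p %c{1.} %m%n"/>
  </File>
</Appenders>
<Loggers>
  <!-- additivity="false" keeps deprecation messages out of the main solr.log -->
  <Logger name="org.apache.solr.DEPRECATION" level="warn" additivity="false">
    <AppenderRef ref="DeprecationFile"/>
  </Logger>
</Loggers>
```

Code would then log through that named logger, e.g. `LogManager.getLogger("org.apache.solr.DEPRECATION").warn(...)`, and a non-empty `logs/deprecation.log` would be the command-line signal that there is work to do before upgrading.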


[GitHub] lucene-solr pull request #496: LUCENE-8463: Early-terminate queries sorted b...

2018-11-12 Thread jimczi
Github user jimczi commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/496#discussion_r232800133
  
--- Diff: 
lucene/core/src/java/org/apache/lucene/search/TopFieldCollector.java ---
@@ -68,6 +68,18 @@ public void setScorer(Scorable scorer) throws 
IOException {
   }
 
   static boolean canEarlyTerminate(Sort searchSort, Sort indexSort) {
--- End diff --

This function is called only when `indexSort` is 
[non-null](https://github.com/apache/lucene-solr/pull/496/files#diff-3cc08a69c598981a1dca4cb3dca2d59dR114),
 however `canEarlyTerminateOnDocId` doesn't require the index to be sorted. I 
think you can move the `indexSort == null` check into  
`canEarlyTerminateOnPrefix` ?


---




[GitHub] lucene-solr pull request #496: LUCENE-8463: Early-terminate queries sorted b...

2018-11-12 Thread jimczi
Github user jimczi commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/496#discussion_r232800760
  
--- Diff: 
lucene/core/src/java/org/apache/lucene/search/TopFieldCollector.java ---
@@ -68,6 +68,18 @@ public void setScorer(Scorable scorer) throws 
IOException {
   }
 
   static boolean canEarlyTerminate(Sort searchSort, Sort indexSort) {
+return canEarlyTerminateOnDocId(searchSort, indexSort) ||
+   canEarlyTerminateOnPrefix(searchSort, indexSort);
+  }
+
+  private static boolean canEarlyTerminateOnDocId(Sort searchSort, Sort 
indexSort) {
--- End diff --

The early termination is not based on the `indexSort` so we should only 
check the `searchSort`. 
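The shape jimczi is suggesting can be sketched with a hypothetical simplification that uses plain field-name lists in place of Lucene's `Sort` (this is not the actual `TopFieldCollector` code): sorting by doc id never needs an index sort, so the `indexSort == null` check belongs only in the prefix case.

```java
import java.util.List;

public class EarlyTerminate {

  static boolean canEarlyTerminate(List<String> searchSort, List<String> indexSort) {
    return canEarlyTerminateOnDocId(searchSort)
        || canEarlyTerminateOnPrefix(searchSort, indexSort);
  }

  // Doc-id order always matches index order, so only the search sort matters here.
  static boolean canEarlyTerminateOnDocId(List<String> searchSort) {
    return List.of("_doc").equals(searchSort);
  }

  // Prefix-based termination requires a non-null index sort that the
  // search sort is a prefix of, so the null check lives here.
  static boolean canEarlyTerminateOnPrefix(List<String> searchSort, List<String> indexSort) {
    if (indexSort == null || searchSort.size() > indexSort.size()) {
      return false;
    }
    return searchSort.equals(indexSort.subList(0, searchSort.size()));
  }
}
```

With this split, `canEarlyTerminate(List.of("_doc"), null)` is true even though no index sort exists, while the prefix path safely returns false for a null index sort.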


---




[jira] [Commented] (SOLR-12639) Umbrella JIRA for adding support HTTP/2, jira/http2

2018-11-12 Thread Shawn Heisey (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12639?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16684305#comment-16684305
 ] 

Shawn Heisey commented on SOLR-12639:
-

There was an email today on the Jetty list about the latest release.  The 
important part of that email said this:

{noformat}
Users on Java 11 runtimes and users of HTTP/2 are encouraged to upgrade as soon 
as they are able.
{noformat}

Here's the list of changes:

https://github.com/eclipse/jetty.project/releases/tag/jetty-9.4.13.v2018


> Umbrella JIRA for adding support HTTP/2, jira/http2
> ---
>
> Key: SOLR-12639
> URL: https://issues.apache.org/jira/browse/SOLR-12639
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: http2
>Reporter: Cao Manh Dat
>Assignee: Cao Manh Dat
>Priority: Major
>
> This ticket aims to replace/add HTTP/2 support by using the Jetty HTTP 
> client in Solr. All the work will be committed to the jira/http2 branch. This 
> branch will serve as a stepping stone between the master branch and 
> Mark Miller's starburst branch. I will try to keep jira/http2 as close to 
> master as possible (this will make merging easier in the future). At the same 
> time, changes in the starburst branch will be split into smaller/testable parts 
> and pushed to the jira/http2 branch. 
> Anyone interested in HTTP/2 for Solr can use the jira/http2 branch, but there is 
> no backward-compatibility guarantee.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)




[JENKINS] Lucene-Solr-master-Linux (64bit/jdk-9.0.4) - Build # 23197 - Still Unstable!

2018-11-12 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/23197/
Java: 64bit/jdk-9.0.4 -XX:+UseCompressedOops -XX:+UseG1GC

12 tests failed.
FAILED:  
org.apache.solr.cloud.DeleteReplicaTest.raceConditionOnDeleteAndRegisterReplica

Error Message:
Expected new active leader null Live Nodes: [127.0.0.1:37991_solr, 
127.0.0.1:38173_solr, 127.0.0.1:39707_solr] Last available state: 
DocCollection(raceDeleteReplica_false//collections/raceDeleteReplica_false/state.json/12)={
   "pullReplicas":"0",   "replicationFactor":"2",   "shards":{"shard1":{   
"range":"8000-7fff",   "state":"active",   "replicas":{ 
"core_node3":{   "core":"raceDeleteReplica_false_shard1_replica_n1",
"base_url":"https://127.0.0.1:46459/solr",   
"node_name":"127.0.0.1:46459_solr",   "state":"down",   
"type":"NRT",   "force_set_state":"false",   "leader":"true"},  
   "core_node6":{   
"core":"raceDeleteReplica_false_shard1_replica_n5",   
"base_url":"https://127.0.0.1:46459/solr",   
"node_name":"127.0.0.1:46459_solr",   "state":"down",   
"type":"NRT",   "force_set_state":"false",   
"router":{"name":"compositeId"},   "maxShardsPerNode":"1",   
"autoAddReplicas":"false",   "nrtReplicas":"2",   "tlogReplicas":"0"}

Stack Trace:
java.lang.AssertionError: Expected new active leader
null
Live Nodes: [127.0.0.1:37991_solr, 127.0.0.1:38173_solr, 127.0.0.1:39707_solr]
Last available state: 
DocCollection(raceDeleteReplica_false//collections/raceDeleteReplica_false/state.json/12)={
  "pullReplicas":"0",
  "replicationFactor":"2",
  "shards":{"shard1":{
  "range":"8000-7fff",
  "state":"active",
  "replicas":{
"core_node3":{
  "core":"raceDeleteReplica_false_shard1_replica_n1",
  "base_url":"https://127.0.0.1:46459/solr",
  "node_name":"127.0.0.1:46459_solr",
  "state":"down",
  "type":"NRT",
  "force_set_state":"false",
  "leader":"true"},
"core_node6":{
  "core":"raceDeleteReplica_false_shard1_replica_n5",
  "base_url":"https://127.0.0.1:46459/solr",
  "node_name":"127.0.0.1:46459_solr",
  "state":"down",
  "type":"NRT",
  "force_set_state":"false",
  "router":{"name":"compositeId"},
  "maxShardsPerNode":"1",
  "autoAddReplicas":"false",
  "nrtReplicas":"2",
  "tlogReplicas":"0"}
at 
__randomizedtesting.SeedInfo.seed([C22CBB72D50A95FD:A83ADAA2BDF8DF37]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.cloud.SolrCloudTestCase.waitForState(SolrCloudTestCase.java:280)
at 
org.apache.solr.cloud.DeleteReplicaTest.raceConditionOnDeleteAndRegisterReplica(DeleteReplicaTest.java:328)
at 
org.apache.solr.cloud.DeleteReplicaTest.raceConditionOnDeleteAndRegisterReplica(DeleteReplicaTest.java:224)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944)
at 

[jira] [Comment Edited] (SOLR-12639) Umbrella JIRA for adding support HTTP/2, jira/http2

2018-11-12 Thread Shawn Heisey (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12639?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16684305#comment-16684305
 ] 

Shawn Heisey edited comment on SOLR-12639 at 11/12/18 7:54 PM:
---

There was an email today on the Jetty list about the latest release.  The 
important part of that email (for this issue) said this:

{noformat}
Users on Java 11 runtimes and users of HTTP/2 are encouraged to upgrade as soon 
as they are able.
{noformat}

Here's the list of changes:

https://github.com/eclipse/jetty.project/releases/tag/jetty-9.4.13.v2018



was (Author: elyograg):
There was an email today on the Jetty list about the latest release.  The 
important part of that email said this:

{noformat}
Users on Java 11 runtimes and users of HTTP/2 are encouraged to upgrade as soon 
as they are able.
{noformat}

Here's the list of changes:

https://github.com/eclipse/jetty.project/releases/tag/jetty-9.4.13.v2018


> Umbrella JIRA for adding support HTTP/2, jira/http2
> ---
>
> Key: SOLR-12639
> URL: https://issues.apache.org/jira/browse/SOLR-12639
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: http2
>Reporter: Cao Manh Dat
>Assignee: Cao Manh Dat
>Priority: Major
>
> This ticket aims to add/replace HTTP/2 support in Solr by using the Jetty HTTP 
> client. All the work will be committed to the jira/http2 branch. This 
> branch will serve as a stepping stone between the master branch and 
> Mark Miller's starburst branch. I will try to keep jira/http2 as close to 
> master as possible (this will make merging in the future easier). At the same 
> time, changes in the starburst branch will be split into smaller, testable parts 
> and pushed to the jira/http2 branch. 
> Anyone interested in HTTP/2 for Solr can use the jira/http2 branch, but there is 
> no backward-compatibility guarantee.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [DISCUSS] Solr separate deprecation log

2018-11-12 Thread Shawn Heisey

On 11/8/2018 6:06 AM, Jan Høydahl wrote:

But I'm wondering if it also makes sense to introduce a separate 
DeprecationLogger.log(foo) that is configured in log4j2.xml to log to a 
separate logs/deprecation.log to make it easier to check this from the command 
line. If the file is non-empty you have work to do :)


+1

I think we need to leave additivity set to true on this one so that the 
logs are duplicated in solr.log.  Perhaps explicitly set additivity to 
true in the config so that it's easy for somebody to disable it if it's 
important to them to not have log duplication.
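A minimal sketch of what such a configuration could look like in log4j2.xml. The logger name, appender details, and file path here are illustrative assumptions, not a settled design:

```xml
<!-- Hypothetical appender: deprecation warnings get their own rolling file -->
<RollingFile name="DeprecationFile"
             fileName="${sys:solr.log.dir}/deprecation.log"
             filePattern="${sys:solr.log.dir}/deprecation.log.%i">
  <PatternLayout pattern="%d{yyyy-MM-dd HH:mm:ss.SSS} %-5p (%t) [%c{1.}] %m%n"/>
  <Policies>
    <SizeBasedTriggeringPolicy size="32 MB"/>
  </Policies>
  <DefaultRolloverStrategy max="10"/>
</RollingFile>

<!-- additivity="true" spelled out explicitly, so the duplication into
     solr.log is obvious and easy for an operator to switch off -->
<Logger name="org.apache.solr.common.DeprecationLog"
        level="warn" additivity="true">
  <AppenderRef ref="DeprecationFile"/>
</Logger>
```

With additivity left true the messages flow to both files; flipping it to false would confine them to deprecation.log only.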


I have an implementation question, but that can wait until there's an 
issue in Jira.


Thanks,
Shawn





[GitHub] lucene-solr issue #497: LUCENE-8026: ExitableDirectoryReader does not instru...

2018-11-12 Thread cbismuth
Github user cbismuth commented on the issue:

https://github.com/apache/lucene-solr/pull/497
  
Thanks a lot! PR is up-to-date with your comments @mikemccand and @jpountz 
:+1:


---




[GitHub] lucene-solr pull request #497: LUCENE-8026: ExitableDirectoryReader does not...

2018-11-12 Thread cbismuth
Github user cbismuth commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/497#discussion_r232780211
  
--- Diff: 
lucene/core/src/test/org/apache/lucene/index/TestExitableDirectoryReader.java 
---
@@ -152,13 +150,130 @@ public void testExitableFilterIndexReader() throws 
Exception {
 // Set a negative time allowed and expect the query to complete 
(should disable timeouts)
 // Not checking the validity of the result, all we are bothered about 
in this test is the timing out.
 directoryReader = DirectoryReader.open(directory);
-exitableDirectoryReader = new ExitableDirectoryReader(directoryReader, 
new QueryTimeoutImpl(-189034L));
+exitableDirectoryReader = new ExitableDirectoryReader(directoryReader, 
disabledQueryTimeout());
 reader = new TestReader(getOnlyLeafReader(exitableDirectoryReader));
 searcher = new IndexSearcher(reader);
 searcher.search(query, 10);
 reader.close();
 
 directory.close();
   }
+
+  /**
+   * Tests timing out of PointValues queries
+   *
+   * @throws Exception on error
+   */
+  public void testExitablePointValuesIndexReader() throws Exception {
+Directory directory = newDirectory();
+IndexWriter writer = new IndexWriter(directory, 
newIndexWriterConfig(new MockAnalyzer(random())));
+
+Document d1 = new Document();
+d1.add(new IntPoint("default", 10));
+writer.addDocument(d1);
+
+Document d2 = new Document();
+d2.add(new IntPoint("default", 100));
+writer.addDocument(d2);
+
+Document d3 = new Document();
+d3.add(new IntPoint("default", 1000));
+writer.addDocument(d3);
+
+writer.forceMerge(1);
+writer.commit();
+writer.close();
+
+DirectoryReader directoryReader;
+DirectoryReader exitableDirectoryReader;
+IndexReader reader;
+IndexSearcher searcher;
+
+Query query = IntPoint.newRangeQuery("default", 10, 20);
+
+// Set a fairly high timeout value (1 second) and expect the query to 
complete in that time frame.
+// Not checking the validity of the result, all we are bothered about 
in this test is the timing out.
+directoryReader = DirectoryReader.open(directory);
+exitableDirectoryReader = new ExitableDirectoryReader(directoryReader, 
infiniteQueryTimeout());
+reader = new TestReader(getOnlyLeafReader(exitableDirectoryReader));
+searcher = new IndexSearcher(reader);
+searcher.search(query, 10);
+reader.close();
+
+// Set a really low timeout value (1 millisecond) and expect an 
Exception
--- End diff --

Totally, comments fixed in 45996c885d3e52edcc21d47e40db443895982af5.


---




[GitHub] lucene-solr pull request #497: LUCENE-8026: ExitableDirectoryReader does not...

2018-11-12 Thread cbismuth
Github user cbismuth commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/497#discussion_r232779864
  
--- Diff: 
lucene/core/src/java/org/apache/lucene/index/ExitableDirectoryReader.java ---
@@ -100,13 +109,157 @@ public CacheHelper getCoreCacheHelper() {
 
   }
 
+  /**
+   * Wrapper class for another PointValues implementation that is used by 
ExitableFields.
+   */
+  public static class ExitablePointValues extends PointValues {
+
+private final PointValues in;
+private final QueryTimeout queryTimeout;
+
+public ExitablePointValues(PointValues in, QueryTimeout queryTimeout) {
+  this.in = in;
+  this.queryTimeout = queryTimeout;
+  checkAndThrow();
+}
+
+/**
+ * Throws {@link ExitingReaderException} if {@link 
QueryTimeout#shouldExit()} returns true,
+ * or if {@link Thread#interrupted()} returns true.
+ */
+private void checkAndThrow() {
+  if (queryTimeout.shouldExit()) {
+throw new ExitingReaderException("The request took too long to 
iterate over point values. Timeout: "
++ queryTimeout.toString()
++ ", PointValues=" + in
+);
+  } else if (Thread.interrupted()) {
+throw new ExitingReaderException("Interrupted while iterating over 
point values. PointValues=" + in);
+  }
+}
+
+@Override
+public void intersect(IntersectVisitor visitor) throws IOException {
+  checkAndThrow();
+  in.intersect(new ExitableIntersectVisitor(visitor, queryTimeout));
+}
+
+@Override
+public long estimatePointCount(IntersectVisitor visitor) {
+  checkAndThrow();
+  return in.estimatePointCount(visitor);
+}
+
+@Override
+public byte[] getMinPackedValue() throws IOException {
+  checkAndThrow();
+  return in.getMinPackedValue();
+}
+
+@Override
+public byte[] getMaxPackedValue() throws IOException {
+  checkAndThrow();
+  return in.getMaxPackedValue();
+}
+
+@Override
+public int getNumDataDimensions() throws IOException {
+  checkAndThrow();
+  return in.getNumDataDimensions();
+}
+
+@Override
+public int getNumIndexDimensions() throws IOException {
+  checkAndThrow();
+  return in.getNumIndexDimensions();
+}
+
+@Override
+public int getBytesPerDimension() throws IOException {
+  checkAndThrow();
+  return in.getBytesPerDimension();
+}
+
+@Override
+public long size() {
+  checkAndThrow();
+  return in.size();
+}
+
+@Override
+public int getDocCount() {
+  checkAndThrow();
+  return in.getDocCount();
+}
+  }
+
+  public static class ExitableIntersectVisitor implements 
PointValues.IntersectVisitor {
+
+public static final int DEFAULT_MAX_CALLS_BEFORE_QUERY_TIMEOUT_CHECK = 
10;
+
+private final PointValues.IntersectVisitor in;
+private final QueryTimeout queryTimeout;
+private final int maxCallsBeforeQueryTimeoutCheck;
+private int calls = 0;
+
+public ExitableIntersectVisitor(PointValues.IntersectVisitor in, 
QueryTimeout queryTimeout) {
+  this(in, queryTimeout, DEFAULT_MAX_CALLS_BEFORE_QUERY_TIMEOUT_CHECK);
+}
+
+public ExitableIntersectVisitor(PointValues.IntersectVisitor in,
+QueryTimeout queryTimeout, int 
maxCallsBeforeQueryTimeoutCheck) {
+  this.in = in;
+  this.queryTimeout = queryTimeout;
+  this.maxCallsBeforeQueryTimeoutCheck = 
maxCallsBeforeQueryTimeoutCheck;
+}
+
+/**
+ * Throws {@link ExitingReaderException} if {@link 
QueryTimeout#shouldExit()} returns true,
+ * or if {@link Thread#interrupted()} returns true.
+ */
+private void checkAndThrow() {
+  if (calls++ % maxCallsBeforeQueryTimeoutCheck == 0 && 
queryTimeout.shouldExit()) {
+throw new ExitingReaderException("The request took too long to 
intersect point values. Timeout: "
++ queryTimeout.toString()
++ ", PointValues=" + in
+);
+  } else if (Thread.interrupted()) {
--- End diff --

My bad, outer condition fixed in 9e11f76b2ac7b9cda9f1c78ad0c1f3a5be09f435 
and only sample `visit` methods fixed in 
3d09426a81277a5344435f43a1942570c1a85f37.
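The sampling idea under discussion — only pay for the timeout check once every N calls — can be sketched in isolation. This is an illustrative toy, not the PR's actual ExitableIntersectVisitor; the class name and BooleanSupplier stand-in for QueryTimeout are invented, and SAMPLE_INTERVAL merely mirrors the DEFAULT_MAX_CALLS_BEFORE_QUERY_TIMEOUT_CHECK constant in the diff:

```java
import java.util.function.BooleanSupplier;

public class SampledTimeoutCheck {
  // Mirrors DEFAULT_MAX_CALLS_BEFORE_QUERY_TIMEOUT_CHECK in the diff above.
  static final int SAMPLE_INTERVAL = 10;

  private final BooleanSupplier shouldExit; // stands in for QueryTimeout#shouldExit
  private int calls = 0;

  SampledTimeoutCheck(BooleanSupplier shouldExit) {
    this.shouldExit = shouldExit;
  }

  /**
   * Returns true when this call is one of the sampled ones AND either the
   * timeout has elapsed or the thread was interrupted. Keeping the interrupt
   * test inside the sampled branch is one plausible shape for the "outer
   * condition" fix: neither check runs on the other 9 of every 10 calls.
   */
  boolean checkSampled() {
    if (calls++ % SAMPLE_INTERVAL == 0) {
      return shouldExit.getAsBoolean() || Thread.currentThread().isInterrupted();
    }
    return false;
  }

  public static void main(String[] args) {
    SampledTimeoutCheck check = new SampledTimeoutCheck(() -> true);
    System.out.println(check.checkSampled()); // call 1 is sampled -> prints true
  }
}
```

The trade-off is latency of detection: with an interval of 10, a timeout can go unnoticed for up to 9 cheap calls before the next sampled check fires.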


---




[GitHub] lucene-solr pull request #497: LUCENE-8026: ExitableDirectoryReader does not...

2018-11-12 Thread cbismuth
Github user cbismuth commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/497#discussion_r232779517
  
--- Diff: 
lucene/core/src/java/org/apache/lucene/index/ExitableDirectoryReader.java ---
@@ -100,13 +109,157 @@ public CacheHelper getCoreCacheHelper() {
 
   }
 
+  /**
+   * Wrapper class for another PointValues implementation that is used by 
ExitableFields.
+   */
+  public static class ExitablePointValues extends PointValues {
+
+private final PointValues in;
+private final QueryTimeout queryTimeout;
+
+public ExitablePointValues(PointValues in, QueryTimeout queryTimeout) {
+  this.in = in;
+  this.queryTimeout = queryTimeout;
+  checkAndThrow();
+}
+
+/**
+ * Throws {@link ExitingReaderException} if {@link 
QueryTimeout#shouldExit()} returns true,
+ * or if {@link Thread#interrupted()} returns true.
+ */
+private void checkAndThrow() {
+  if (queryTimeout.shouldExit()) {
+throw new ExitingReaderException("The request took too long to 
iterate over point values. Timeout: "
++ queryTimeout.toString()
++ ", PointValues=" + in
+);
+  } else if (Thread.interrupted()) {
+throw new ExitingReaderException("Interrupted while iterating over 
point values. PointValues=" + in);
+  }
+}
+
+@Override
+public void intersect(IntersectVisitor visitor) throws IOException {
+  checkAndThrow();
+  in.intersect(new ExitableIntersectVisitor(visitor, queryTimeout));
+}
+
+@Override
+public long estimatePointCount(IntersectVisitor visitor) {
+  checkAndThrow();
+  return in.estimatePointCount(visitor);
+}
+
+@Override
+public byte[] getMinPackedValue() throws IOException {
+  checkAndThrow();
+  return in.getMinPackedValue();
+}
+
+@Override
+public byte[] getMaxPackedValue() throws IOException {
+  checkAndThrow();
+  return in.getMaxPackedValue();
+}
+
+@Override
+public int getNumDataDimensions() throws IOException {
+  checkAndThrow();
+  return in.getNumDataDimensions();
+}
+
+@Override
+public int getNumIndexDimensions() throws IOException {
+  checkAndThrow();
+  return in.getNumIndexDimensions();
+}
+
+@Override
+public int getBytesPerDimension() throws IOException {
+  checkAndThrow();
+  return in.getBytesPerDimension();
+}
+
+@Override
+public long size() {
+  checkAndThrow();
+  return in.size();
+}
+
+@Override
+public int getDocCount() {
+  checkAndThrow();
+  return in.getDocCount();
+}
+  }
+
+  public static class ExitableIntersectVisitor implements 
PointValues.IntersectVisitor {
+
+public static final int DEFAULT_MAX_CALLS_BEFORE_QUERY_TIMEOUT_CHECK = 
10;
+
+private final PointValues.IntersectVisitor in;
+private final QueryTimeout queryTimeout;
+private final int maxCallsBeforeQueryTimeoutCheck;
+private int calls = 0;
+
+public ExitableIntersectVisitor(PointValues.IntersectVisitor in, 
QueryTimeout queryTimeout) {
+  this(in, queryTimeout, DEFAULT_MAX_CALLS_BEFORE_QUERY_TIMEOUT_CHECK);
+}
+
+public ExitableIntersectVisitor(PointValues.IntersectVisitor in,
+QueryTimeout queryTimeout, int 
maxCallsBeforeQueryTimeoutCheck) {
+  this.in = in;
+  this.queryTimeout = queryTimeout;
+  this.maxCallsBeforeQueryTimeoutCheck = 
maxCallsBeforeQueryTimeoutCheck;
+}
--- End diff --

Yes, fixed in dfd1403e02b270ba0712c0a40ee14dbf9e7719fd.


---




Re: [DISCUSS] Solr separate deprecation log

2018-11-12 Thread Tomas Fernandez Lobbe
+1 Sounds like a good idea to me.

> On Nov 8, 2018, at 5:06 AM, Jan Høydahl  wrote:
> 
> Hi,
> 
> When instructing people in what to do before upgrading to a new version, we 
> often tell them to check for deprecation log messages and fix those before 
> upgrading. Normally you'll see the most important logs as WARN level in the 
> Admin UI log tab just after startup and first use. But I'm wondering if it 
> also makes sense to introduce a separate DeprecationLogger.log(foo) that is 
> configured in log4j2.xml to log to a separate logs/deprecation.log to make it 
> easier to check this from the command line. If the file is non-empty you have 
> work to do :)
> 
> --
> Jan Høydahl, search solution architect
> Cominvent AS - www.cominvent.com
> 
> 
> 





[JENKINS-MAVEN] Lucene-Solr-Maven-7.x #360: POMs out of sync

2018-11-12 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Maven-7.x/360/

No tests ran.

Build Log:
[...truncated 19704 lines...]
BUILD FAILED
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-7.x/build.xml:672: The 
following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-7.x/build.xml:209: The 
following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-7.x/lucene/build.xml:411: 
The following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-7.x/lucene/common-build.xml:2261:
 The following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-7.x/lucene/common-build.xml:1719:
 The following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-7.x/lucene/common-build.xml:650:
 Error deploying artifact 'org.apache.lucene:lucene-queries:jar': Error 
installing artifact's metadata: Error while deploying metadata: Error 
transferring file

Total time: 9 minutes 47 seconds
Build step 'Invoke Ant' marked build as failure
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any


[jira] [Commented] (SOLR-7381) Improve Debuggability of SolrCloud using MDC

2018-11-12 Thread Shalin Shekhar Mangar (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-7381?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16684166#comment-16684166
 ] 

Shalin Shekhar Mangar commented on SOLR-7381:
-

[~yuzhih...@gmail.com] - No, our implementation explicitly sets MDC context 
from parent threads and unsets/restores old context once the submitted task 
finishes. See ExecutorUtil.MDCAwareThreadPoolExecutor for the implementation.

> Improve Debuggability of SolrCloud using MDC
> 
>
> Key: SOLR-7381
> URL: https://issues.apache.org/jira/browse/SOLR-7381
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrCloud
>Reporter: Shalin Shekhar Mangar
>Assignee: Shalin Shekhar Mangar
>Priority: Critical
> Fix For: 5.2, 6.0
>
> Attachments: SOLR-7381-forbid-threadpoolexecutor.patch, 
> SOLR-7381-submitter-stacktrace.patch, SOLR-7381-thread-names.patch, 
> SOLR-7381-thread-names.patch, SOLR-7381-thread-names.patch, SOLR-7381.patch, 
> SOLR-7381.patch
>
>
> SOLR-6673 added MDC based logging in a few places but we have a lot of ground 
> to cover.
> # Threads created via thread pool executors do not inherit MDC values and 
> those are some of the most interesting places to log MDC context.
> # We must expose node names (in tests) so that we can debug faster
> # We can expose more information via thread names so that a thread dump has 
> enough context to help debug problems in production
> This is critical to help debug SolrCloud failures.







[JENKINS] Lucene-Solr-7.x-Linux (32bit/jdk1.8.0_172) - Build # 3079 - Unstable!

2018-11-12 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/3079/
Java: 32bit/jdk1.8.0_172 -server -XX:+UseG1GC

2 tests failed.
FAILED:  
org.apache.solr.cloud.autoscaling.sim.TestSimPolicyCloud.testCreateCollectionAddReplica

Error Message:
Timeout waiting for collection to become active Live Nodes: 
[127.0.0.1:10006_solr, 127.0.0.1:10008_solr, 127.0.0.1:10005_solr, 
127.0.0.1:10009_solr, 127.0.0.1:10007_solr] Last available state: 
DocCollection(testCreateCollectionAddReplica//clusterstate.json/60)={   
"replicationFactor":"1",   "pullReplicas":"0",   
"router":{"name":"compositeId"},   "maxShardsPerNode":"1",   
"autoAddReplicas":"false",   "nrtReplicas":"1",   "tlogReplicas":"0",   
"autoCreated":"true",   "policy":"c1",   "shards":{"shard1":{   
"replicas":{"core_node1":{   
"core":"testCreateCollectionAddReplica_shard1_replica_n1",   
"SEARCHER.searcher.maxDoc":0,   "SEARCHER.searcher.deletedDocs":0,  
 "INDEX.sizeInBytes":10240,   "node_name":"127.0.0.1:10009_solr",   
"state":"active",   "type":"NRT",   
"INDEX.sizeInGB":9.5367431640625E-6,   "SEARCHER.searcher.numDocs":0}}, 
  "range":"8000-7fff",   "state":"active"}}}

Stack Trace:
java.lang.AssertionError: Timeout waiting for collection to become active
Live Nodes: [127.0.0.1:10006_solr, 127.0.0.1:10008_solr, 127.0.0.1:10005_solr, 
127.0.0.1:10009_solr, 127.0.0.1:10007_solr]
Last available state: 
DocCollection(testCreateCollectionAddReplica//clusterstate.json/60)={
  "replicationFactor":"1",
  "pullReplicas":"0",
  "router":{"name":"compositeId"},
  "maxShardsPerNode":"1",
  "autoAddReplicas":"false",
  "nrtReplicas":"1",
  "tlogReplicas":"0",
  "autoCreated":"true",
  "policy":"c1",
  "shards":{"shard1":{
  "replicas":{"core_node1":{
  "core":"testCreateCollectionAddReplica_shard1_replica_n1",
  "SEARCHER.searcher.maxDoc":0,
  "SEARCHER.searcher.deletedDocs":0,
  "INDEX.sizeInBytes":10240,
  "node_name":"127.0.0.1:10009_solr",
  "state":"active",
  "type":"NRT",
  "INDEX.sizeInGB":9.5367431640625E-6,
  "SEARCHER.searcher.numDocs":0}},
  "range":"8000-7fff",
  "state":"active"}}}
at 
__randomizedtesting.SeedInfo.seed([238C51B8C57B2AB9:A3AC3496D438C21F]:0)
at 
org.apache.solr.cloud.CloudTestUtils.waitForState(CloudTestUtils.java:70)
at 
org.apache.solr.cloud.autoscaling.sim.TestSimPolicyCloud.testCreateCollectionAddReplica(TestSimPolicyCloud.java:123)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985)
at 
org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:110)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891)
at 

[JENKINS] Lucene-Solr-repro - Build # 1927 - Unstable

2018-11-12 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-repro/1927/

[...truncated 28 lines...]
[repro] Jenkins log URL: 
https://builds.apache.org/job/Lucene-Solr-BadApples-Tests-7.x/214/consoleText

[repro] Revision: 2fc689fbf9f8600baaeed385fac4bc678fd2cb18

[repro] Repro line:  ant test  -Dtestcase=ScheduledMaintenanceTriggerTest 
-Dtests.method=testInactiveShardCleanup -Dtests.seed=DC3E47525EBC3475 
-Dtests.multiplier=2 -Dtests.slow=true -Dtests.badapples=true -Dtests.locale=fr 
-Dtests.timezone=Atlantic/Faeroe -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8

[repro] Repro line:  ant test  -Dtestcase=TestSimPolicyCloud 
-Dtests.method=testCreateCollectionSplitShard -Dtests.seed=DC3E47525EBC3475 
-Dtests.multiplier=2 -Dtests.slow=true -Dtests.badapples=true -Dtests.locale=lv 
-Dtests.timezone=Antarctica/Vostok -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8

[repro] Repro line:  ant test  -Dtestcase=TestRecovery 
-Dtests.method=testExistOldBufferLog -Dtests.seed=DC3E47525EBC3475 
-Dtests.multiplier=2 -Dtests.slow=true -Dtests.badapples=true 
-Dtests.locale=sr-BA -Dtests.timezone=America/Danmarkshavn -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8

[repro] git rev-parse --abbrev-ref HEAD
[repro] git rev-parse HEAD
[repro] Initial local git branch/revision: 
e81dd4e870d2a9b27e1f4366e92daa6dba054da8
[repro] git fetch
[repro] git checkout 2fc689fbf9f8600baaeed385fac4bc678fd2cb18

[...truncated 2 lines...]
[repro] git merge --ff-only

[...truncated 1 lines...]
[repro] ant clean

[...truncated 6 lines...]
[repro] Test suites by module:
[repro]solr/core
[repro]   TestSimPolicyCloud
[repro]   TestRecovery
[repro]   ScheduledMaintenanceTriggerTest
[repro] ant compile-test

[...truncated 3580 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=15 
-Dtests.class="*.TestSimPolicyCloud|*.TestRecovery|*.ScheduledMaintenanceTriggerTest"
 -Dtests.showOutput=onerror  -Dtests.seed=DC3E47525EBC3475 -Dtests.multiplier=2 
-Dtests.slow=true -Dtests.badapples=true -Dtests.locale=lv 
-Dtests.timezone=Antarctica/Vostok -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8

[...truncated 8649 lines...]
[repro] Setting last failure code to 256

[repro] Failures:
[repro]   0/5 failed: org.apache.solr.search.TestRecovery
[repro]   1/5 failed: org.apache.solr.cloud.autoscaling.sim.TestSimPolicyCloud
[repro]   5/5 failed: 
org.apache.solr.cloud.autoscaling.ScheduledMaintenanceTriggerTest

[repro] Re-testing 100% failures at the tip of branch_7x
[repro] git fetch
[repro] git checkout branch_7x

[...truncated 4 lines...]
[repro] git merge --ff-only

[...truncated 1 lines...]
[repro] ant clean

[...truncated 8 lines...]
[repro] Test suites by module:
[repro]solr/core
[repro]   ScheduledMaintenanceTriggerTest
[repro] ant compile-test

[...truncated 3318 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=5 
-Dtests.class="*.ScheduledMaintenanceTriggerTest" -Dtests.showOutput=onerror  
-Dtests.seed=DC3E47525EBC3475 -Dtests.multiplier=2 -Dtests.slow=true 
-Dtests.badapples=true -Dtests.locale=fr -Dtests.timezone=Atlantic/Faeroe 
-Dtests.asserts=true -Dtests.file.encoding=UTF-8

[...truncated 1137 lines...]
[repro] Setting last failure code to 256

[repro] Failures at the tip of branch_7x:
[repro]   2/5 failed: 
org.apache.solr.cloud.autoscaling.ScheduledMaintenanceTriggerTest
[repro] git checkout e81dd4e870d2a9b27e1f4366e92daa6dba054da8

[...truncated 8 lines...]
[repro] Exiting with code 256

[...truncated 6 lines...]


[JENKINS] Lucene-Solr-NightlyTests-master - Build # 1696 - Failure

2018-11-12 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-master/1696/

4 tests failed.
FAILED:  org.apache.lucene.document.TestLatLonLineShapeQueries.testRandomBig

Error Message:
Java heap space

Stack Trace:
java.lang.OutOfMemoryError: Java heap space
at 
__randomizedtesting.SeedInfo.seed([31B9F311A979761D:B6EE8E9E38200A9D]:0)
at org.apache.lucene.store.RAMFile.newBuffer(RAMFile.java:84)
at org.apache.lucene.store.RAMFile.addBuffer(RAMFile.java:57)
at 
org.apache.lucene.store.RAMOutputStream.switchCurrentBuffer(RAMOutputStream.java:168)
at 
org.apache.lucene.store.RAMOutputStream.writeBytes(RAMOutputStream.java:154)
at 
org.apache.lucene.store.MockIndexOutputWrapper.writeBytes(MockIndexOutputWrapper.java:141)
at 
org.apache.lucene.util.bkd.OfflinePointReader.split(OfflinePointReader.java:215)
at org.apache.lucene.util.bkd.BKDWriter.build(BKDWriter.java:1843)
at org.apache.lucene.util.bkd.BKDWriter.build(BKDWriter.java:1870)
at org.apache.lucene.util.bkd.BKDWriter.finish(BKDWriter.java:1022)
at 
org.apache.lucene.index.RandomCodec$1$1.writeField(RandomCodec.java:140)
at 
org.apache.lucene.codecs.PointsWriter.mergeOneField(PointsWriter.java:62)
at org.apache.lucene.codecs.PointsWriter.merge(PointsWriter.java:191)
at 
org.apache.lucene.codecs.lucene60.Lucene60PointsWriter.merge(Lucene60PointsWriter.java:145)
at 
org.apache.lucene.codecs.asserting.AssertingPointsFormat$AssertingPointsWriter.merge(AssertingPointsFormat.java:142)
at 
org.apache.lucene.index.SegmentMerger.mergePoints(SegmentMerger.java:201)
at org.apache.lucene.index.SegmentMerger.merge(SegmentMerger.java:161)
at 
org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:4453)
at org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:4075)
at 
org.apache.lucene.index.SerialMergeScheduler.merge(SerialMergeScheduler.java:40)
at org.apache.lucene.index.IndexWriter.maybeMerge(IndexWriter.java:2178)
at org.apache.lucene.index.IndexWriter.forceMerge(IndexWriter.java:2011)
at org.apache.lucene.index.IndexWriter.forceMerge(IndexWriter.java:1962)
at 
org.apache.lucene.document.BaseLatLonShapeTestCase.indexRandomShapes(BaseLatLonShapeTestCase.java:268)
at 
org.apache.lucene.document.BaseLatLonShapeTestCase.verify(BaseLatLonShapeTestCase.java:232)
at 
org.apache.lucene.document.BaseLatLonShapeTestCase.doTestRandom(BaseLatLonShapeTestCase.java:213)
at 
org.apache.lucene.document.BaseLatLonShapeTestCase.testRandomBig(BaseLatLonShapeTestCase.java:189)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935)


FAILED:  org.apache.solr.cloud.hdfs.HdfsRestartWhileUpdatingTest.test

Error Message:
There are still nodes recoverying - waited for 320 seconds

Stack Trace:
java.lang.AssertionError: There are still nodes recoverying - waited for 320 seconds
at __randomizedtesting.SeedInfo.seed([85C7EB197BF0F6AB:D93D4C3D50C9B53]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:185)
at org.apache.solr.cloud.AbstractFullDistribZkTestBase.waitForRecoveriesToFinish(AbstractFullDistribZkTestBase.java:920)
at org.apache.solr.cloud.AbstractFullDistribZkTestBase.waitForThingsToLevelOut(AbstractFullDistribZkTestBase.java:1477)
at org.apache.solr.cloud.RestartWhileUpdatingTest.test(RestartWhileUpdatingTest.java:145)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935)
at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971)
at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985)
at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:1010)
at 

[jira] [Commented] (SOLR-12833) Use timed-out lock in DistributedUpdateProcessor

2018-11-12 Thread Mark Miller (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16684131#comment-16684131
 ] 

Mark Miller commented on SOLR-12833:


Cool, I'll try and get this in soon.

> Use timed-out lock in DistributedUpdateProcessor
> ------------------------------------------------
>
> Key: SOLR-12833
> URL: https://issues.apache.org/jira/browse/SOLR-12833
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: update, UpdateRequestProcessors
>Affects Versions: 7.5, master (8.0)
>Reporter: jefferyyuan
>Assignee: Mark Miller
>Priority: Minor
> Fix For: master (8.0)
>
>
> There is a synchronized block that blocks other update requests whose IDs 
> fall in the same hash bucket. An update waits forever until it gets the lock 
> at the synchronized block, which can be a problem in some cases.
>  
> Some add/update requests (for example updates with spatial/shape analysis) 
> may take a long time (30+ seconds or even more), which causes the request to 
> time out and fail.
> The client may then retry the same request multiple times over several 
> minutes, which makes things worse.
> The server receives all the update requests, but all except one can do 
> nothing and have to wait, which wastes precious memory and CPU resources.
> We have seen cases where 2000+ threads were blocked at the synchronized lock 
> while only a few updates made progress. Each thread takes 3+ MB of memory, 
> which causes OOM.
> Also, if an update can't get the lock within the expected time range, it is 
> better to fail fast.
>  
> We can add one configuration option in solrconfig.xml, 
> updateHandler/versionLock/timeInMill, so users can specify how long they 
> want to wait for the version bucket lock.
> The default value can be -1, so the behavior stays the same: wait forever 
> until the lock is acquired.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8563) Remove k1+1 from the numerator of BM25Similarity

2018-11-12 Thread Adrien Grand (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8563?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16684107#comment-16684107
 ] 

Adrien Grand commented on LUCENE-8563:
--------------------------------------

Agreed [~softwaredoug], I was assuming a single similarity. This would also 
change ordering if other fields use different similarities.

> Remove k1+1 from the numerator of  BM25Similarity
> -------------------------------------------------
>
> Key: LUCENE-8563
> URL: https://issues.apache.org/jira/browse/LUCENE-8563
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Priority: Minor
>
> Our current implementation of BM25 does
> {code:java}
> boost * IDF * (k1+1) * tf / (tf + norm)
> {code}
> As (k1+1) is a constant, it is the same for every term and doesn't modify 
> ordering. It is often omitted, and I found that the "Probabilistic 
> Relevance Framework: BM25 and Beyond" paper by Robertson (BM25's author) and 
> Zaragoza even describes adding (k1+1) to the numerator as a variant whose 
> benefit is to be more comparable with Robertson/Sparck-Jones weighting, which 
> we don't care about.
> {quote}A common variant is to add a (k1 + 1) component to the
>  numerator of the saturation function. This is the same for all
>  terms, and therefore does not affect the ranking produced.
>  The reason for including it was to make the final formula
>  more compatible with the RSJ weight used on its own
> {quote}
> Should we remove it from BM25Similarity as well?
> A side-effect that I'm interested in is that integrating other score 
> contributions (eg. via oal.document.FeatureField) would be a bit easier to 
> reason about. For instance a weight of 3 in FeatureField#newSaturationQuery 
> would have a similar impact as a term whose IDF is 3 (and thus docFreq ~= 5%) 
> rather than a term whose IDF is 3/(k1 + 1).
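The constant-factor claim is easy to verify numerically: for any (idf, tf, norm), the score with the (k1+1) factor is exactly (k1+1) times the score without it, so single-field ordering is unchanged. A plain-Java check (not Lucene code):

```java
// Sanity check that the (k1 + 1) factor in
//   boost * IDF * (k1+1) * tf / (tf + norm)
// multiplies every score by the same constant, so dropping it
// cannot change ordering within a single field.
public class Bm25K1Factor {
  static double withK1(double idf, double tf, double norm, double k1) {
    return idf * (k1 + 1) * tf / (tf + norm);
  }

  static double withoutK1(double idf, double tf, double norm) {
    return idf * tf / (tf + norm);
  }

  public static void main(String[] args) {
    double k1 = 1.2;
    // a few {idf, tf, norm} combinations
    double[][] docs = { {3.0, 1, 1.1}, {3.0, 4, 0.9}, {1.5, 10, 1.3} };
    for (double[] d : docs) {
      double ratio = withK1(d[0], d[1], d[2], k1) / withoutK1(d[0], d[1], d[2]);
      // the ratio is always k1 + 1, independent of idf, tf, and norm
      System.out.printf("ratio = %.1f%n", ratio);
    }
  }
}
```

As the later comments note, this equivalence only holds when every field uses the same k1; with per-field k1 values, the (k1+1) factors differ and dropping them does affect the combined ranking.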






[jira] [Commented] (SOLR-10217) Add a query for the background set to the significantTerms streaming expression

2018-11-12 Thread Joel Bernstein (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-10217?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16684085#comment-16684085
 ] 

Joel Bernstein commented on SOLR-10217:
---------------------------------------

I think Gethin pretty much sums it up. If I remember correctly there was 
something about the patch that needed more work, but I'd have to review it 
again very closely to understand the issue. I think it's a nice feature though 
and would likely pick it back up again sometime in the future. [~janhoy], if 
there is a particularly strategic use case for the background query that 
you're looking at, feel free to explain it in the ticket and we can perhaps 
pick the ticket back up sooner.

> Add a query for the background set to the significantTerms streaming 
> expression
> ---------------------------------------------------------------------------------
>
> Key: SOLR-10217
> URL: https://issues.apache.org/jira/browse/SOLR-10217
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Gethin James
>Assignee: Joel Bernstein
>Priority: Major
> Attachments: SOLR-10217.patch, SOLR-10217.patch, SOLR-20217.patch
>
>
> Following the work on SOLR-10156 we now have a significantTerms expression.
> Currently, the background set is always the full index.  It would be great if 
> we could use a query to define the background set.
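As a rough sketch of what the proposal might look like in a streaming expression (the `backgroundQuery` parameter name is hypothetical; the attached patches may use a different one):

```
significantTerms(
    collection1,
    q="body:solr",                  // foreground set
    backgroundQuery="year:2018",    // proposed: background set, instead of the full index
    field="author",
    limit="20"
)
```

The idea is simply that term significance would be computed against documents matching `backgroundQuery` rather than against the whole index.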






[GitHub] lucene-solr pull request #497: LUCENE-8026: ExitableDirectoryReader does not...

2018-11-12 Thread jpountz
Github user jpountz commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/497#discussion_r232738909
  
--- Diff: 
lucene/core/src/java/org/apache/lucene/index/ExitableDirectoryReader.java ---
@@ -100,13 +109,157 @@ public CacheHelper getCoreCacheHelper() {
 
   }
 
+  /**
+   * Wrapper class for another PointValues implementation that is used by 
ExitableFields.
+   */
+  public static class ExitablePointValues extends PointValues {
+
+private final PointValues in;
+private final QueryTimeout queryTimeout;
+
+public ExitablePointValues(PointValues in, QueryTimeout queryTimeout) {
+  this.in = in;
+  this.queryTimeout = queryTimeout;
+  checkAndThrow();
+}
+
+/**
+ * Throws {@link ExitingReaderException} if {@link 
QueryTimeout#shouldExit()} returns true,
+ * or if {@link Thread#interrupted()} returns true.
+ */
+private void checkAndThrow() {
+  if (queryTimeout.shouldExit()) {
+throw new ExitingReaderException("The request took too long to 
iterate over point values. Timeout: "
++ queryTimeout.toString()
++ ", PointValues=" + in
+);
+  } else if (Thread.interrupted()) {
+throw new ExitingReaderException("Interrupted while iterating over 
point values. PointValues=" + in);
+  }
+}
+
+@Override
+public void intersect(IntersectVisitor visitor) throws IOException {
+  checkAndThrow();
+  in.intersect(new ExitableIntersectVisitor(visitor, queryTimeout));
+}
+
+@Override
+public long estimatePointCount(IntersectVisitor visitor) {
+  checkAndThrow();
+  return in.estimatePointCount(visitor);
+}
+
+@Override
+public byte[] getMinPackedValue() throws IOException {
+  checkAndThrow();
+  return in.getMinPackedValue();
+}
+
+@Override
+public byte[] getMaxPackedValue() throws IOException {
+  checkAndThrow();
+  return in.getMaxPackedValue();
+}
+
+@Override
+public int getNumDataDimensions() throws IOException {
+  checkAndThrow();
+  return in.getNumDataDimensions();
+}
+
+@Override
+public int getNumIndexDimensions() throws IOException {
+  checkAndThrow();
+  return in.getNumIndexDimensions();
+}
+
+@Override
+public int getBytesPerDimension() throws IOException {
+  checkAndThrow();
+  return in.getBytesPerDimension();
+}
+
+@Override
+public long size() {
+  checkAndThrow();
+  return in.size();
+}
+
+@Override
+public int getDocCount() {
+  checkAndThrow();
+  return in.getDocCount();
+}
+  }
+
+  public static class ExitableIntersectVisitor implements 
PointValues.IntersectVisitor {
+
+public static final int DEFAULT_MAX_CALLS_BEFORE_QUERY_TIMEOUT_CHECK = 
10;
+
+private final PointValues.IntersectVisitor in;
+private final QueryTimeout queryTimeout;
+private final int maxCallsBeforeQueryTimeoutCheck;
+private int calls = 0;
+
+public ExitableIntersectVisitor(PointValues.IntersectVisitor in, 
QueryTimeout queryTimeout) {
+  this(in, queryTimeout, DEFAULT_MAX_CALLS_BEFORE_QUERY_TIMEOUT_CHECK);
+}
+
+public ExitableIntersectVisitor(PointValues.IntersectVisitor in,
+QueryTimeout queryTimeout, int 
maxCallsBeforeQueryTimeoutCheck) {
+  this.in = in;
+  this.queryTimeout = queryTimeout;
+  this.maxCallsBeforeQueryTimeoutCheck = 
maxCallsBeforeQueryTimeoutCheck;
+}
+
+/**
+ * Throws {@link ExitingReaderException} if {@link 
QueryTimeout#shouldExit()} returns true,
+ * or if {@link Thread#interrupted()} returns true.
+ */
+private void checkAndThrow() {
+  if (calls++ % maxCallsBeforeQueryTimeoutCheck == 0 && 
queryTimeout.shouldExit()) {
--- End diff --

:+1: to sampling checks to reduce overhead.


---




[GitHub] lucene-solr pull request #497: LUCENE-8026: ExitableDirectoryReader does not...

2018-11-12 Thread jpountz
Github user jpountz commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/497#discussion_r232735208
  
--- Diff: 
lucene/core/src/java/org/apache/lucene/index/ExitableDirectoryReader.java ---
@@ -100,13 +109,157 @@ public CacheHelper getCoreCacheHelper() {
 
   }
 
+  /**
+   * Wrapper class for another PointValues implementation that is used by 
ExitableFields.
+   */
+  public static class ExitablePointValues extends PointValues {
+
+private final PointValues in;
+private final QueryTimeout queryTimeout;
+
+public ExitablePointValues(PointValues in, QueryTimeout queryTimeout) {
+  this.in = in;
+  this.queryTimeout = queryTimeout;
+  checkAndThrow();
+}
+
+/**
+ * Throws {@link ExitingReaderException} if {@link 
QueryTimeout#shouldExit()} returns true,
+ * or if {@link Thread#interrupted()} returns true.
+ */
+private void checkAndThrow() {
+  if (queryTimeout.shouldExit()) {
+throw new ExitingReaderException("The request took too long to 
iterate over point values. Timeout: "
++ queryTimeout.toString()
++ ", PointValues=" + in
+);
+  } else if (Thread.interrupted()) {
+throw new ExitingReaderException("Interrupted while iterating over 
point values. PointValues=" + in);
+  }
+}
+
+@Override
+public void intersect(IntersectVisitor visitor) throws IOException {
+  checkAndThrow();
+  in.intersect(new ExitableIntersectVisitor(visitor, queryTimeout));
+}
+
+@Override
+public long estimatePointCount(IntersectVisitor visitor) {
+  checkAndThrow();
+  return in.estimatePointCount(visitor);
+}
+
+@Override
+public byte[] getMinPackedValue() throws IOException {
+  checkAndThrow();
+  return in.getMinPackedValue();
+}
+
+@Override
+public byte[] getMaxPackedValue() throws IOException {
+  checkAndThrow();
+  return in.getMaxPackedValue();
+}
+
+@Override
+public int getNumDataDimensions() throws IOException {
+  checkAndThrow();
+  return in.getNumDataDimensions();
+}
+
+@Override
+public int getNumIndexDimensions() throws IOException {
+  checkAndThrow();
+  return in.getNumIndexDimensions();
+}
+
+@Override
+public int getBytesPerDimension() throws IOException {
+  checkAndThrow();
+  return in.getBytesPerDimension();
+}
+
+@Override
+public long size() {
+  checkAndThrow();
+  return in.size();
+}
+
+@Override
+public int getDocCount() {
+  checkAndThrow();
+  return in.getDocCount();
+}
+  }
+
+  public static class ExitableIntersectVisitor implements 
PointValues.IntersectVisitor {
+
+public static final int DEFAULT_MAX_CALLS_BEFORE_QUERY_TIMEOUT_CHECK = 
10;
+
+private final PointValues.IntersectVisitor in;
+private final QueryTimeout queryTimeout;
+private final int maxCallsBeforeQueryTimeoutCheck;
+private int calls = 0;
+
+public ExitableIntersectVisitor(PointValues.IntersectVisitor in, 
QueryTimeout queryTimeout) {
+  this(in, queryTimeout, DEFAULT_MAX_CALLS_BEFORE_QUERY_TIMEOUT_CHECK);
+}
+
+public ExitableIntersectVisitor(PointValues.IntersectVisitor in,
+QueryTimeout queryTimeout, int 
maxCallsBeforeQueryTimeoutCheck) {
+  this.in = in;
+  this.queryTimeout = queryTimeout;
+  this.maxCallsBeforeQueryTimeoutCheck = 
maxCallsBeforeQueryTimeoutCheck;
+}
--- End diff --

Let's not make the `maxCallsBeforeQueryTimeoutCheck` configurable and just 
rely on the constant? 


---




[GitHub] lucene-solr pull request #497: LUCENE-8026: ExitableDirectoryReader does not...

2018-11-12 Thread jpountz
Github user jpountz commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/497#discussion_r232740066
  
--- Diff: 
lucene/core/src/test/org/apache/lucene/index/TestExitableDirectoryReader.java 
---
@@ -152,13 +150,130 @@ public void testExitableFilterIndexReader() throws 
Exception {
 // Set a negative time allowed and expect the query to complete 
(should disable timeouts)
 // Not checking the validity of the result, all we are bothered about 
in this test is the timing out.
 directoryReader = DirectoryReader.open(directory);
-exitableDirectoryReader = new ExitableDirectoryReader(directoryReader, 
new QueryTimeoutImpl(-189034L));
+exitableDirectoryReader = new ExitableDirectoryReader(directoryReader, 
disabledQueryTimeout());
 reader = new TestReader(getOnlyLeafReader(exitableDirectoryReader));
 searcher = new IndexSearcher(reader);
 searcher.search(query, 10);
 reader.close();
 
 directory.close();
   }
+
+  /**
+   * Tests timing out of PointValues queries
+   *
+   * @throws Exception on error
+   */
+  public void testExitablePointValuesIndexReader() throws Exception {
+Directory directory = newDirectory();
+IndexWriter writer = new IndexWriter(directory, 
newIndexWriterConfig(new MockAnalyzer(random(;
+
+Document d1 = new Document();
+d1.add(new IntPoint("default", 10));
+writer.addDocument(d1);
+
+Document d2 = new Document();
+d2.add(new IntPoint("default", 100));
+writer.addDocument(d2);
+
+Document d3 = new Document();
+d3.add(new IntPoint("default", 1000));
+writer.addDocument(d3);
+
+writer.forceMerge(1);
+writer.commit();
+writer.close();
+
+DirectoryReader directoryReader;
+DirectoryReader exitableDirectoryReader;
+IndexReader reader;
+IndexSearcher searcher;
+
+Query query = IntPoint.newRangeQuery("default", 10, 20);
+
+// Set a fairly high timeout value (1 second) and expect the query to 
complete in that time frame.
+// Not checking the validity of the result, all we are bothered about 
in this test is the timing out.
+directoryReader = DirectoryReader.open(directory);
+exitableDirectoryReader = new ExitableDirectoryReader(directoryReader, 
inifiniteQueryTimeout());
+reader = new TestReader(getOnlyLeafReader(exitableDirectoryReader));
+searcher = new IndexSearcher(reader);
+searcher.search(query, 10);
+reader.close();
+
+// Set a really low timeout value (1 millisecond) and expect an 
Exception
+directoryReader = DirectoryReader.open(directory);
+exitableDirectoryReader = new ExitableDirectoryReader(directoryReader, 
immediateQueryTimeout());
+reader = new TestReader(getOnlyLeafReader(exitableDirectoryReader));
+IndexSearcher slowSearcher = new IndexSearcher(reader);
+expectThrows(ExitingReaderException.class, () -> {
+  slowSearcher.search(query, 10);
+});
+reader.close();
+
+// Set maximum time out and expect the query to complete.
+// Not checking the validity of the result, all we are bothered about 
in this test is the timing out.
+directoryReader = DirectoryReader.open(directory);
+exitableDirectoryReader = new ExitableDirectoryReader(directoryReader, 
inifiniteQueryTimeout());
+reader = new TestReader(getOnlyLeafReader(exitableDirectoryReader));
+searcher = new IndexSearcher(reader);
+searcher.search(query, 10);
+reader.close();
+
+// Set a negative time allowed and expect the query to complete 
(should disable timeouts)
+// Not checking the validity of the result, all we are bothered about 
in this test is the timing out.
+directoryReader = DirectoryReader.open(directory);
+exitableDirectoryReader = new ExitableDirectoryReader(directoryReader, 
disabledQueryTimeout());
+reader = new TestReader(getOnlyLeafReader(exitableDirectoryReader));
+searcher = new IndexSearcher(reader);
+searcher.search(query, 10);
+reader.close();
+
+directory.close();
+  }
+
+  private static QueryTimeout disabledQueryTimeout() {
+return new QueryTimeout() {
+
+  @Override
+  public boolean shouldExit() {
+return false;
+  }
+
+  @Override
+  public boolean isTimeoutEnabled() {
+return false;
+  }
+};
+  }
+
+  private static QueryTimeout inifiniteQueryTimeout() {
+return new QueryTimeout() {
+
+  @Override
+  public boolean shouldExit() {
+return false;
+  }
+
+  @Override
+  public boolean 

[GitHub] lucene-solr pull request #497: LUCENE-8026: ExitableDirectoryReader does not...

2018-11-12 Thread jpountz
Github user jpountz commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/497#discussion_r232738340
  
--- Diff: 
lucene/core/src/java/org/apache/lucene/index/ExitableDirectoryReader.java ---
@@ -100,13 +109,157 @@ public CacheHelper getCoreCacheHelper() {
 
   }
 
+  /**
+   * Wrapper class for another PointValues implementation that is used by 
ExitableFields.
+   */
+  public static class ExitablePointValues extends PointValues {
+
+private final PointValues in;
+private final QueryTimeout queryTimeout;
+
+public ExitablePointValues(PointValues in, QueryTimeout queryTimeout) {
+  this.in = in;
+  this.queryTimeout = queryTimeout;
+  checkAndThrow();
+}
+
+/**
+ * Throws {@link ExitingReaderException} if {@link 
QueryTimeout#shouldExit()} returns true,
+ * or if {@link Thread#interrupted()} returns true.
+ */
+private void checkAndThrow() {
+  if (queryTimeout.shouldExit()) {
+throw new ExitingReaderException("The request took too long to 
iterate over point values. Timeout: "
++ queryTimeout.toString()
++ ", PointValues=" + in
+);
+  } else if (Thread.interrupted()) {
+throw new ExitingReaderException("Interrupted while iterating over 
point values. PointValues=" + in);
+  }
+}
+
+@Override
+public void intersect(IntersectVisitor visitor) throws IOException {
+  checkAndThrow();
+  in.intersect(new ExitableIntersectVisitor(visitor, queryTimeout));
+}
+
+@Override
+public long estimatePointCount(IntersectVisitor visitor) {
+  checkAndThrow();
+  return in.estimatePointCount(visitor);
+}
+
+@Override
+public byte[] getMinPackedValue() throws IOException {
+  checkAndThrow();
+  return in.getMinPackedValue();
+}
+
+@Override
+public byte[] getMaxPackedValue() throws IOException {
+  checkAndThrow();
+  return in.getMaxPackedValue();
+}
+
+@Override
+public int getNumDataDimensions() throws IOException {
+  checkAndThrow();
+  return in.getNumDataDimensions();
+}
+
+@Override
+public int getNumIndexDimensions() throws IOException {
+  checkAndThrow();
+  return in.getNumIndexDimensions();
+}
+
+@Override
+public int getBytesPerDimension() throws IOException {
+  checkAndThrow();
+  return in.getBytesPerDimension();
+}
+
+@Override
+public long size() {
+  checkAndThrow();
+  return in.size();
+}
+
+@Override
+public int getDocCount() {
+  checkAndThrow();
+  return in.getDocCount();
+}
+  }
+
+  public static class ExitableIntersectVisitor implements 
PointValues.IntersectVisitor {
+
+public static final int DEFAULT_MAX_CALLS_BEFORE_QUERY_TIMEOUT_CHECK = 
10;
+
+private final PointValues.IntersectVisitor in;
+private final QueryTimeout queryTimeout;
+private final int maxCallsBeforeQueryTimeoutCheck;
+private int calls = 0;
+
+public ExitableIntersectVisitor(PointValues.IntersectVisitor in, 
QueryTimeout queryTimeout) {
+  this(in, queryTimeout, DEFAULT_MAX_CALLS_BEFORE_QUERY_TIMEOUT_CHECK);
+}
+
+public ExitableIntersectVisitor(PointValues.IntersectVisitor in,
+QueryTimeout queryTimeout, int 
maxCallsBeforeQueryTimeoutCheck) {
+  this.in = in;
+  this.queryTimeout = queryTimeout;
+  this.maxCallsBeforeQueryTimeoutCheck = 
maxCallsBeforeQueryTimeoutCheck;
+}
+
+/**
+ * Throws {@link ExitingReaderException} if {@link 
QueryTimeout#shouldExit()} returns true,
+ * or if {@link Thread#interrupted()} returns true.
+ */
+private void checkAndThrow() {
+  if (calls++ % maxCallsBeforeQueryTimeoutCheck == 0 && 
queryTimeout.shouldExit()) {
+throw new ExitingReaderException("The request took too long to 
intersect point values. Timeout: "
++ queryTimeout.toString()
++ ", PointValues=" + in
+);
+  } else if (Thread.interrupted()) {
--- End diff --

shouldn't this check also be sampled rather than run on every call, i.e. moved 
under the `if (calls++ % maxCallsBeforeQueryTimeoutCheck == 0)` test?
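A self-contained sketch of the sampling pattern being discussed (illustrative names, not the PR's code): both the timeout check and the interrupt check run only on every SAMPLE-th visit, so hot per-value callbacks stay cheap.

```java
import java.util.function.BooleanSupplier;

// Hypothetical SampledTimeoutCheck: consult the (possibly expensive)
// timeout and the interrupt flag only once per SAMPLE calls.
public class SampledTimeoutCheck {
  static final int SAMPLE = 10; // like DEFAULT_MAX_CALLS_BEFORE_QUERY_TIMEOUT_CHECK

  private final BooleanSupplier shouldExit;
  private int calls = 0;
  int timeoutChecks = 0; // how many visits actually consulted the timeout

  SampledTimeoutCheck(BooleanSupplier shouldExit) {
    this.shouldExit = shouldExit;
  }

  void checkAndThrow() {
    if (calls++ % SAMPLE == 0) {            // only every SAMPLE-th call pays for...
      timeoutChecks++;
      if (shouldExit.getAsBoolean()) {      // ...the timeout check
        throw new RuntimeException("query timed out");
      } else if (Thread.interrupted()) {    // ...and the interrupt check
        throw new RuntimeException("interrupted");
      }
    }
  }

  public static void main(String[] args) {
    SampledTimeoutCheck check = new SampledTimeoutCheck(() -> false);
    for (int i = 0; i < 100; i++) {
      check.checkAndThrow();
    }
    System.out.println("timeout consulted " + check.timeoutChecks + " times over 100 visits");
  }
}
```

The trade-off is latency: a timeout or interrupt is noticed at most SAMPLE - 1 calls late, in exchange for removing the check from the hot path.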


---




[jira] [Commented] (LUCENE-8563) Remove k1+1 from the numerator of BM25Similarity

2018-11-12 Thread Doug Turnbull (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8563?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16684091#comment-16684091
 ] 

Doug Turnbull commented on LUCENE-8563:
---------------------------------------

For the sake of this discussion, here's a desmos graph with BM25 with/without 
k1 in the numerator 

https://www.desmos.com/calculator/cklb27fcn9 

> Remove k1+1 from the numerator of  BM25Similarity
> -------------------------------------------------
>
> Key: LUCENE-8563
> URL: https://issues.apache.org/jira/browse/LUCENE-8563
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Priority: Minor
>
> Our current implementation of BM25 does
> {code:java}
> boost * IDF * (k1+1) * tf / (tf + norm)
> {code}
> As (k1+1) is a constant, it is the same for every term and doesn't modify 
> ordering. It is often omitted, and I found that the "Probabilistic 
> Relevance Framework: BM25 and Beyond" paper by Robertson (BM25's author) and 
> Zaragoza even describes adding (k1+1) to the numerator as a variant whose 
> benefit is to be more comparable with Robertson/Sparck-Jones weighting, which 
> we don't care about.
> {quote}A common variant is to add a (k1 + 1) component to the
>  numerator of the saturation function. This is the same for all
>  terms, and therefore does not affect the ranking produced.
>  The reason for including it was to make the final formula
>  more compatible with the RSJ weight used on its own
> {quote}
> Should we remove it from BM25Similarity as well?
> A side-effect that I'm interested in is that integrating other score 
> contributions (eg. via oal.document.FeatureField) would be a bit easier to 
> reason about. For instance a weight of 3 in FeatureField#newSaturationQuery 
> would have a similar impact as a term whose IDF is 3 (and thus docFreq ~= 5%) 
> rather than a term whose IDF is 3/(k1 + 1).






[jira] [Comment Edited] (LUCENE-8563) Remove k1+1 from the numerator of BM25Similarity

2018-11-12 Thread Doug Turnbull (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8563?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16684080#comment-16684080
 ] 

Doug Turnbull edited comment on LUCENE-8563 at 11/12/18 5:01 PM:
-----------------------------------------------------------------

It would modify ordering when dealing with multiple fields. Consider one field 
with a different k1 than another because the impact of term frequency is 
calibrated differently. If one calibrates one field to saturate term freq 
faster, and another slower, then ordering would be impacted


was (Author: softwaredoug):
It would modify ordering when dealing with multiple fields. Consider one field 
with a different k1 than another because the impact of term frequency is 
calibrated differently. If one calibrates one field to saturate term freq 
faster, and another slower, then ordering would be impacted

Additionally, currently k1=0 is the only way to disable term frequency without 
also disabling positions.

> Remove k1+1 from the numerator of  BM25Similarity
> -------------------------------------------------
>
> Key: LUCENE-8563
> URL: https://issues.apache.org/jira/browse/LUCENE-8563
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Priority: Minor
>
> Our current implementation of BM25 does
> {code:java}
> boost * IDF * (k1+1) * tf / (tf + norm)
> {code}
> As (k1+1) is a constant, it is the same for every term and doesn't modify 
> ordering. It is often omitted, and I found that the "Probabilistic 
> Relevance Framework: BM25 and Beyond" paper by Robertson (BM25's author) and 
> Zaragoza even describes adding (k1+1) to the numerator as a variant whose 
> benefit is to be more comparable with Robertson/Sparck-Jones weighting, which 
> we don't care about.
> {quote}A common variant is to add a (k1 + 1) component to the
>  numerator of the saturation function. This is the same for all
>  terms, and therefore does not affect the ranking produced.
>  The reason for including it was to make the final formula
>  more compatible with the RSJ weight used on its own
> {quote}
> Should we remove it from BM25Similarity as well?
> A side-effect that I'm interested in is that integrating other score 
> contributions (eg. via oal.document.FeatureField) would be a bit easier to 
> reason about. For instance a weight of 3 in FeatureField#newSaturationQuery 
> would have a similar impact as a term whose IDF is 3 (and thus docFreq ~= 5%) 
> rather than a term whose IDF is 3/(k1 + 1).






[jira] [Comment Edited] (SOLR-10217) Add a query for the background set to the significantTerms streaming expression

2018-11-12 Thread Joel Bernstein (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-10217?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16684085#comment-16684085
 ] 

Joel Bernstein edited comment on SOLR-10217 at 11/12/18 5:00 PM:
-----------------------------------------------------------------

I think Gethin pretty much sums it up. If I remember correctly there was 
something about the patch that needed more work. But I'd have to review again 
very closely to understand the issue. I think it's a nice feature though and 
would likely pick it back up again sometime in the future. [~janhoy], if there 
is a particularly strategic use case for the background query that you're 
looking at, feel free to explain it in the ticket and I can perhaps pick the 
ticket back up sooner.


was (Author: joel.bernstein):
I think Gethin pretty much sums it up. If I remember correctly there was 
something about the patch that needed more work. But I'd have to review again 
very closely to understand the issue. I think it's a nice feature though and 
would likely pick it back up again sometime in the future. [~janhoy], if there 
is a particularly strategic use case for the background query that you're 
looking at, feel free explain it in ticket and we can perhaps pick the ticket 
back up sooner.

> Add a query for the background set to the significantTerms streaming 
> expression
> ---------------------------------------------------------------------------------
>
> Key: SOLR-10217
> URL: https://issues.apache.org/jira/browse/SOLR-10217
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Gethin James
>Assignee: Joel Bernstein
>Priority: Major
> Attachments: SOLR-10217.patch, SOLR-10217.patch, SOLR-20217.patch
>
>
> Following the work on SOLR-10156 we now have a significantTerms expression.
> Currently, the background set is always the full index.  It would be great if 
> we could use a query to define the background set.






[jira] [Commented] (LUCENE-8563) Remove k1+1 from the numerator of BM25Similarity

2018-11-12 Thread Doug Turnbull (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8563?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16684080#comment-16684080
 ] 

Doug Turnbull commented on LUCENE-8563:
---------------------------------------

It would modify ordering when dealing with multiple fields. Consider one field 
with a different k1 than another because the impact of term frequency is 
calibrated differently. If one calibrates one field to saturate term freq 
faster, and another slower, then ordering would be impacted

Additionally, currently k1=0 is the only way to disable term frequency without 
also disabling positions.

> Remove k1+1 from the numerator of  BM25Similarity
> -------------------------------------------------
>
> Key: LUCENE-8563
> URL: https://issues.apache.org/jira/browse/LUCENE-8563
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Priority: Minor
>
> Our current implementation of BM25 does
> {code:java}
> boost * IDF * (k1+1) * tf / (tf + norm)
> {code}
> As (k1+1) is a constant, it is the same for every term and doesn't modify 
> ordering. It is often omitted, and I found that the "Probabilistic 
> Relevance Framework: BM25 and Beyond" paper by Robertson (BM25's author) and 
> Zaragoza even describes adding (k1+1) to the numerator as a variant whose 
> benefit is to be more comparable with Robertson/Sparck-Jones weighting, which 
> we don't care about.
> {quote}A common variant is to add a (k1 + 1) component to the
>  numerator of the saturation function. This is the same for all
>  terms, and therefore does not affect the ranking produced.
>  The reason for including it was to make the final formula
>  more compatible with the RSJ weight used on its own
> {quote}
> Should we remove it from BM25Similarity as well?
> A side-effect that I'm interested in is that integrating other score 
> contributions (eg. via oal.document.FeatureField) would be a bit easier to 
> reason about. For instance a weight of 3 in FeatureField#newSaturationQuery 
> would have a similar impact as a term whose IDF is 3 (and thus docFreq ~= 5%) 
> rather than a term whose IDF is 3/(k1 + 1).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12976) Unify RedactionUtils and metrics hiddenSysProps settings

2018-11-12 Thread Gus Heck (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12976?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16684075#comment-16684075
 ] 

Gus Heck commented on SOLR-12976:
-

Another point: anyone who has access to the blob store and config-overlay 
modification can potentially load any code they want to run inside a handler 
of their own creation, including code that logs all the sysprops (which would 
then show up on the logging page in the admin UI). Of course, access to the 
config-overlay modification capability plus the blob store is fundamentally full 
trust already, and sysprop gleaning is the least of one's worries, so maybe 
this isn't viewed as a problem for this ticket. However, if SOLR-9175 is 
implemented, blob store + schema access will have the same capability.
 

> Unify RedactionUtils and metrics hiddenSysProps settings
> 
>
> Key: SOLR-12976
> URL: https://issues.apache.org/jira/browse/SOLR-12976
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: security
>Reporter: Jan Høydahl
>Priority: Major
>
> System properties can contain sensitive data, and they are easily available 
> from the Admin UI (/admin/info/system) and also from the Metrics API 
> (/admin/metrics).
> By default the {{/admin/info/system}} redacts any sys prop with a key 
> containing *password*. This can be configured with sysprop 
> {{-Dsolr.redaction.system.pattern=}}
> The metrics API by default hides these sysprops from the API output:
> {code:java}
> "javax.net.ssl.keyStorePassword",
> "javax.net.ssl.trustStorePassword",
> "basicauth",
> "zkDigestPassword",
> "zkDigestReadonlyPassword"
> {code}
> You can redefine these by adding a section to {{solr.xml}}: 
> ([https://lucene.apache.org/solr/guide/7_5/metrics-reporting.html#the-metrics-hiddensysprops-element])
> {code:xml}
> <metrics>
>   <hiddenSysProps>
>     <str>foo</str>
>     <str>bar</str>
>     <str>baz</str>
>   </hiddenSysProps>
> </metrics>
> {code}
> h2. Unifying the two
> It is not very user friendly to have two different systems for redacting 
> system properties and two sets of defaults. The goals of this issue are
>  * Keep only one set of defaults
>  * Both metrics and system info handler will use the same source
>  * It should be possible to change and persist the list without a full 
> cluster restart, preferably through some API
> Note that the {{solr.redaction.system.pattern}} property is not documented in 
> the ref guide, so this Jira should also fix documentation!
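For illustration, the unification could reduce to a single key-pattern filter shared by both handlers. The sketch below is a hypothetical standalone class, not Solr's actual RedactionUtils; the class name, method name, and "--REDACTED--" placeholder are assumptions.

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.regex.Pattern;

// Hypothetical sketch of pattern-based sysprop redaction, in the spirit of
// -Dsolr.redaction.system.pattern; names here are illustrative, not Solr's.
public class SysPropRedactor {
    private final Pattern pattern;

    public SysPropRedactor(String regex) {
        this.pattern = Pattern.compile(regex, Pattern.CASE_INSENSITIVE);
    }

    /** Returns a copy of props with values of matching keys replaced. */
    public Map<String, String> redact(Map<String, String> props) {
        Map<String, String> out = new LinkedHashMap<>();
        for (Map.Entry<String, String> e : props.entrySet()) {
            // Substring match on the key, case-insensitively.
            boolean hidden = pattern.matcher(e.getKey()).find();
            out.put(e.getKey(), hidden ? "--REDACTED--" : e.getValue());
        }
        return out;
    }

    public static void main(String[] args) {
        Map<String, String> props = new LinkedHashMap<>();
        props.put("javax.net.ssl.keyStorePassword", "hunter2");
        props.put("solr.home", "/var/solr");
        System.out.println(new SysPropRedactor("password").redact(props));
    }
}
```

A single filter like this, sourced from one configurable list, would give the metrics API and the system-info handler the same behavior.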






[jira] [Commented] (LUCENE-8563) Remove k1+1 from the numerator of BM25Similarity

2018-11-12 Thread Adrien Grand (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8563?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16684051#comment-16684051
 ] 

Adrien Grand commented on LUCENE-8563:
--

bq. There will be cases where this affects relative scoring and ranking

I don't think this is correct. All scores would be divided by the same 
constant, so ordering would be preserved.

> Remove k1+1 from the numerator of  BM25Similarity
> -
>
> Key: LUCENE-8563
> URL: https://issues.apache.org/jira/browse/LUCENE-8563
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Priority: Minor
>
> Our current implementation of BM25 does
> {code:java}
> boost * IDF * (k1+1) * tf / (tf + norm)
> {code}
> As (k1+1) is a constant, it is the same for every term and doesn't modify 
> ordering. It is often omitted and I found out that the "The Probabilistic 
> Relevance Framework: BM25 and Beyond" paper by Robertson (BM25's author) and 
> Zaragoza even describes adding (k1+1) to the numerator as a variant whose 
> benefit is to be more comparable with Robertson/Spärck Jones weighting, which 
> we don't care about.
> {quote}A common variant is to add a (k1 + 1) component to the
>  numerator of the saturation function. This is the same for all
>  terms, and therefore does not affect the ranking produced.
>  The reason for including it was to make the final formula
>  more compatible with the RSJ weight used on its own
> {quote}
> Should we remove it from BM25Similarity as well?
> A side-effect that I'm interested in is that integrating other score 
> contributions (eg. via oal.document.FeatureField) would be a bit easier to 
> reason about. For instance a weight of 3 in FeatureField#newSaturationQuery 
> would have a similar impact as a term whose IDF is 3 (and thus docFreq ~= 5%) 
> rather than a term whose IDF is 3/(k1 + 1).






[JENKINS-EA] Lucene-Solr-master-Linux (64bit/jdk-12-ea+12) - Build # 23196 - Unstable!

2018-11-12 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/23196/
Java: 64bit/jdk-12-ea+12 -XX:-UseCompressedOops -XX:+UseG1GC

56 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.DocValuesNotIndexedTest

Error Message:
Collection not found: dv_coll

Stack Trace:
org.apache.solr.common.SolrException: Collection not found: dv_coll
at __randomizedtesting.SeedInfo.seed([14FF44185C819A4C]:0)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:851)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:817)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
at 
org.apache.solr.cloud.DocValuesNotIndexedTest.createCluster(DocValuesNotIndexedTest.java:154)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:875)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:835)


FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.DocValuesNotIndexedTest

Error Message:
Collection not found: dv_coll

Stack Trace:
org.apache.solr.common.SolrException: Collection not found: dv_coll
at __randomizedtesting.SeedInfo.seed([14FF44185C819A4C]:0)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:851)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:817)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
at 
org.apache.solr.cloud.DocValuesNotIndexedTest.createCluster(DocValuesNotIndexedTest.java:154)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:875)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 

[jira] [Comment Edited] (LUCENE-8563) Remove k1+1 from the numerator of BM25Similarity

2018-11-12 Thread Elizabeth Haubert (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8563?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16683976#comment-16683976
 ] 

Elizabeth Haubert edited comment on LUCENE-8563 at 11/12/18 4:00 PM:
-

The boost*IDF is not particularly important; this is about the handling of the 
TF component relative to the norms. 

Pull that out as 
{code:java}
(tf + k1*tf) / (tf + k1*length_norms)
{code}

Removing it only from the numerator produces 
{code:java}
tf / (tf + k1*length_norms)
{code}

At a minimum, that will need a new empirical default for k1. 

Changing k1 in the numerator is the knob to adjust the ratio of tf and norms. 
In cases where document length does not follow standard models, it can be 
helpful to damp down b. This is not the standard use case, but it is not 
unusual, either. At the extreme, when b=0, this component reduces to 
{code:java}
(tf * (k1+1)) / (tf + k1)
{code}

Removing the (k1+1) from the numerator only produces 
{code:java}
tf / (tf + k1)
{code}

There will be cases where this affects relative scoring and ranking, and I 
don't understand the statement that it doesn't modify ordering.

If there is a need to remove it in the normal case, then perhaps the numerator 
and denominator should be split into two distinct constants.








was (Author: ehaubert):
The boost*IDF is not particularly important, this is about the handling of the 
TF component relative to the norms. 

Pull that out as 
{code:java}
(tf + tf*k1) / (tf + k1*length_norms)
{code}

Removing it only from the numerator produces 
{code:java}
 tf / (tf +k1* length norms) 
{code}

At a minimum, that will need a new empirical default for k1. 

Changing k1 in the numerator is the knob to adjust the ratio of tf and norms.   
In the case where document length does not follow standard models, it can be 
helpful to damp down b.  This is not the standard use case, but is not unusual, 
either.  At the extreme,  b=0 then this component reduces to 
{code:java}
(tf * (k1 +1)) / (tf + k1)
{code}

Removing the (k1 +1) from the numerator only produces 
{code:java}
tf / (tf + k1)
{code}

There will be cases where this affects relative scoring and ranking, and I 
don't understand the statement that it doesn't modify ordering.

If there is a need to remove it in the normal case, then perhaps the numerator 
and denominator should be split into two distinct constants.







> Remove k1+1 from the numerator of  BM25Similarity
> -
>
> Key: LUCENE-8563
> URL: https://issues.apache.org/jira/browse/LUCENE-8563
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Priority: Minor
>
> Our current implementation of BM25 does
> {code:java}
> boost * IDF * (k1+1) * tf / (tf + norm)
> {code}
> As (k1+1) is a constant, it is the same for every term and doesn't modify 
> ordering. It is often omitted and I found out that the "The Probabilistic 
> Relevance Framework: BM25 and Beyond" paper by Robertson (BM25's author) and 
> Zaragoza even describes adding (k1+1) to the numerator as a variant whose 
> benefit is to be more comparable with Robertson/Spärck Jones weighting, which 
> we don't care about.
> {quote}A common variant is to add a (k1 + 1) component to the
>  numerator of the saturation function. This is the same for all
>  terms, and therefore does not affect the ranking produced.
>  The reason for including it was to make the final formula
>  more compatible with the RSJ weight used on its own
> {quote}
> Should we remove it from BM25Similarity as well?
> A side-effect that I'm interested in is that integrating other score 
> contributions (eg. via oal.document.FeatureField) would be a bit easier to 
> reason about. For instance a weight of 3 in FeatureField#newSaturationQuery 
> would have a similar impact as a term whose IDF is 3 (and thus docFreq ~= 5%) 
> rather than a term whose IDF is 3/(k1 + 1).






[jira] [Commented] (LUCENE-8563) Remove k1+1 from the numerator of BM25Similarity

2018-11-12 Thread Elizabeth Haubert (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8563?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16683976#comment-16683976
 ] 

Elizabeth Haubert commented on LUCENE-8563:
---

The boost*IDF is not particularly important; this is about the handling of the 
TF component relative to the norms. 

Pull that out as 
{code:java}
(tf + tf*k1) / (tf + k1*length_norms)
{code}

Removing it only from the numerator produces 
{code:java}
tf / (tf + k1*length_norms)
{code}

At a minimum, that will need a new empirical default for k1. 

Changing k1 in the numerator is the knob to adjust the ratio of tf and norms. 
In cases where document length does not follow standard models, it can be 
helpful to damp down b. This is not the standard use case, but it is not 
unusual, either. At the extreme, when b=0, this component reduces to 
{code:java}
(tf * (k1+1)) / (tf + k1)
{code}

Removing the (k1+1) from the numerator only produces 
{code:java}
tf / (tf + k1)
{code}

There will be cases where this affects relative scoring and ranking, and I 
don't understand the statement that it doesn't modify ordering.

If there is a need to remove it in the normal case, then perhaps the numerator 
and denominator should be split into two distinct constants.
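The effect being debated above is easy to tabulate. This standalone sketch (plain Java; class and method names are hypothetical, and b=0 so the norms drop out) prints both saturation curves at Lucene's default k1=1.2: with the (k1+1) numerator the curve runs from 1.0 at tf=1 up toward k1+1, while the bare curve runs from 1/(1+k1) up toward 1.0, which is why absolute scores need rescaling across the two variants even though single-field ordering is unchanged.

```java
// Hypothetical demo class; b=0 so the length norm drops out of the formula.
public class TfSaturationDemo {
    // BM25 tf saturation with and without the (k1+1) numerator, at b=0
    static double withNumerator(double tf, double k1)    { return (k1 + 1) * tf / (tf + k1); }
    static double withoutNumerator(double tf, double k1) { return tf / (tf + k1); }

    public static void main(String[] args) {
        double k1 = 1.2; // Lucene's default k1
        for (double tf : new double[] {1, 2, 4, 8, 1000}) {
            System.out.printf("tf=%6.0f  with=%.4f  without=%.4f%n",
                tf, withNumerator(tf, k1), withoutNumerator(tf, k1));
        }
        // The "with" column climbs from 1.0 toward k1+1 = 2.2;
        // the "without" column climbs from ~0.4545 toward 1.0.
    }
}
```

Note that a single occurrence (tf=1) scores exactly 1.0 with the numerator regardless of k1, but 1/(1+k1) without it, which is one reason a recalibrated default for k1 might be needed.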







> Remove k1+1 from the numerator of  BM25Similarity
> -
>
> Key: LUCENE-8563
> URL: https://issues.apache.org/jira/browse/LUCENE-8563
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Priority: Minor
>
> Our current implementation of BM25 does
> {code:java}
> boost * IDF * (k1+1) * tf / (tf + norm)
> {code}
> As (k1+1) is a constant, it is the same for every term and doesn't modify 
> ordering. It is often omitted and I found out that the "The Probabilistic 
> Relevance Framework: BM25 and Beyond" paper by Robertson (BM25's author) and 
> Zaragoza even describes adding (k1+1) to the numerator as a variant whose 
> benefit is to be more comparable with Robertson/Spärck Jones weighting, which 
> we don't care about.
> {quote}A common variant is to add a (k1 + 1) component to the
>  numerator of the saturation function. This is the same for all
>  terms, and therefore does not affect the ranking produced.
>  The reason for including it was to make the final formula
>  more compatible with the RSJ weight used on its own
> {quote}
> Should we remove it from BM25Similarity as well?
> A side-effect that I'm interested in is that integrating other score 
> contributions (eg. via oal.document.FeatureField) would be a bit easier to 
> reason about. For instance a weight of 3 in FeatureField#newSaturationQuery 
> would have a similar impact as a term whose IDF is 3 (and thus docFreq ~= 5%) 
> rather than a term whose IDF is 3/(k1 + 1).






[jira] [Commented] (SOLR-12955) Refactor DistributedUpdateProcessor

2018-11-12 Thread Bar Rotstein (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12955?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16683961#comment-16683961
 ] 

Bar Rotstein commented on SOLR-12955:
-

New patch.
DistributedUpdateProcessor#fetchFullDocumentFromLeader now throws an exception 
if a leader URL cannot be resolved, as it did prior to this refactor.

> Refactor DistributedUpdateProcessor
> ---
>
> Key: SOLR-12955
> URL: https://issues.apache.org/jira/browse/SOLR-12955
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Bar Rotstein
>Priority: Major
> Attachments: SOLR-12955.patch, SOLR-12955.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Lately, as I was skimming through Solr's code base, I noticed that 
> DistributedUpdateProcessor has a lot of nested if-else statements, which 
> hampers code readability.






[jira] [Updated] (SOLR-12955) Refactor DistributedUpdateProcessor

2018-11-12 Thread Bar Rotstein (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12955?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bar Rotstein updated SOLR-12955:

Attachment: SOLR-12955.patch

> Refactor DistributedUpdateProcessor
> ---
>
> Key: SOLR-12955
> URL: https://issues.apache.org/jira/browse/SOLR-12955
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Bar Rotstein
>Priority: Major
> Attachments: SOLR-12955.patch, SOLR-12955.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Lately, as I was skimming through Solr's code base, I noticed that 
> DistributedUpdateProcessor has a lot of nested if-else statements, which 
> hampers code readability.






[jira] [Commented] (LUCENE-8294) KeywordTokenizer hangs with user misconfigured inputs

2018-11-12 Thread Christophe Bismuth (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8294?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16683907#comment-16683907
 ] 

Christophe Bismuth commented on LUCENE-8294:


Issue can be closed as fixed in 
[906679adc80f0fad1e5c311b03023c7bd95633d7|https://github.com/apache/lucene-solr/commit/906679adc80f0fad1e5c311b03023c7bd95633d7].

> KeywordTokenizer hangs with user misconfigured inputs
> -
>
> Key: LUCENE-8294
> URL: https://issues.apache.org/jira/browse/LUCENE-8294
> Project: Lucene - Core
>  Issue Type: Bug
>Affects Versions: 2.1
>Reporter: John Doe
>Priority: Minor
>
> When a user configures the bufferSize to be 0, the while loop in the 
> KeywordTokenizer.next() function hangs endlessly. Here is the code snippet.
> {code:java}
>   public KeywordTokenizer(Reader input, int bufferSize) {
> super(input);
> this.buffer = new char[bufferSize];//bufferSize is misconfigured with 0
> this.done = false;
>   }
>   public Token next() throws IOException {
> if (!done) {
>   done = true;
>   StringBuffer buffer = new StringBuffer();
>   int length;
>   while (true) {
> length = input.read(this.buffer); // length is always 0 when 
> bufferSize == 0
> if (length == -1) break;
> buffer.append(this.buffer, 0, length);
>   }
>   String text = buffer.toString();
>   return new Token(text, 0, text.length());
> }
> return null;
>   }
> {code}
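A minimal guard illustrates the fix direction. This is a sketch under assumed names (SafeKeywordReader is not the Lucene class): validating bufferSize up front turns the silent infinite loop into an immediate error, since Reader.read(char[]) returns 0 forever, never -1, when handed a zero-length buffer.

```java
import java.io.IOException;
import java.io.Reader;
import java.io.StringReader;

// Sketch of the guard that avoids the hang: reject a non-positive
// bufferSize up front instead of looping forever on read(...) == 0.
// (Illustrative names; the historical Lucene 2.x class differs in detail.)
public class SafeKeywordReader {
    private final char[] buffer;

    public SafeKeywordReader(int bufferSize) {
        if (bufferSize <= 0) {
            throw new IllegalArgumentException("bufferSize must be > 0, got " + bufferSize);
        }
        this.buffer = new char[bufferSize];
    }

    /** Reads the whole reader into one string, as KeywordTokenizer does. */
    public String readAll(Reader input) throws IOException {
        StringBuilder sb = new StringBuilder();
        int length;
        // Reader.read(char[]) returns -1 at EOF; with a zero-length buffer it
        // would return 0 forever, which is what caused the original hang.
        while ((length = input.read(buffer)) != -1) {
            sb.append(buffer, 0, length);
        }
        return sb.toString();
    }

    public static void main(String[] args) throws IOException {
        System.out.println(new SafeKeywordReader(64).readAll(new StringReader("hello world")));
    }
}
```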






[JENKINS] Lucene-Solr-SmokeRelease-7.x - Build # 372 - Failure

2018-11-12 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-7.x/372/

No tests ran.

Build Log:
[...truncated 23437 lines...]
[asciidoctor:convert] asciidoctor: ERROR: about-this-guide.adoc: line 1: 
invalid part, must have at least one section (e.g., chapter, appendix, etc.)
[asciidoctor:convert] asciidoctor: ERROR: solr-glossary.adoc: line 1: invalid 
part, must have at least one section (e.g., chapter, appendix, etc.)
 [java] Processed 2436 links (1989 relative) to 3199 anchors in 247 files
 [echo] Validated Links & Anchors via: 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/solr/build/solr-ref-guide/bare-bones-html/

-dist-changes:
 [copy] Copying 4 files to 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/solr/package/changes

package:

-unpack-solr-tgz:

-ensure-solr-tgz-exists:
[mkdir] Created dir: 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/solr/build/solr.tgz.unpacked
[untar] Expanding: 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/solr/package/solr-7.7.0.tgz
 into 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/solr/build/solr.tgz.unpacked

generate-maven-artifacts:

resolve:

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/top-level-ivy-settings.xml

resolve:

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:

Re: Lucene/Solr 7.6

2018-11-12 Thread Cassandra Targett
On Sat, Nov 10, 2018 at 9:50 AM Steve Rowe  wrote:

> Hi Cassandra,
>
> > On Nov 9, 2018, at 3:47 PM, Cassandra Targett 
> wrote:
> >
> > I don't know if it's on the Release ToDo list, but we need a Jenkins job
> for the Ref Guide to be built from branch_7x  also.
>
> I assume you mean a branch_7_6 ref guide job, since there already is one
> for branch_7x; I created it along with the others.
>
>
Right, I meant branch_7_6! Thanks Steve.


[jira] [Commented] (LUCENE-8563) Remove k1+1 from the numerator of BM25Similarity

2018-11-12 Thread Robert Muir (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8563?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16683868#comment-16683868
 ] 

Robert Muir commented on LUCENE-8563:
-

+1 to nuke it. Currently the explain() goes out of its way to try to separate 
out this scaling factor to make it easier to see. It's unnecessary.

> Remove k1+1 from the numerator of  BM25Similarity
> -
>
> Key: LUCENE-8563
> URL: https://issues.apache.org/jira/browse/LUCENE-8563
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Priority: Minor
>
> Our current implementation of BM25 does
> {code:java}
> boost * IDF * (k1+1) * tf / (tf + norm)
> {code}
> As (k1+1) is a constant, it is the same for every term and doesn't modify 
> ordering. It is often omitted and I found out that the "The Probabilistic 
> Relevance Framework: BM25 and Beyond" paper by Robertson (BM25's author) and 
> Zaragoza even describes adding (k1+1) to the numerator as a variant whose 
> benefit is to be more comparable with Robertson/Spärck Jones weighting, which 
> we don't care about.
> {quote}A common variant is to add a (k1 + 1) component to the
>  numerator of the saturation function. This is the same for all
>  terms, and therefore does not affect the ranking produced.
>  The reason for including it was to make the final formula
>  more compatible with the RSJ weight used on its own
> {quote}
> Should we remove it from BM25Similarity as well?
> A side-effect that I'm interested in is that integrating other score 
> contributions (eg. via oal.document.FeatureField) would be a bit easier to 
> reason about. For instance a weight of 3 in FeatureField#newSaturationQuery 
> would have a similar impact as a term whose IDF is 3 (and thus docFreq ~= 5%) 
> rather than a term whose IDF is 3/(k1 + 1).






[GitHub] lucene-solr pull request #497: LUCENE-8026: ExitableDirectoryReader does not...

2018-11-12 Thread cbismuth
Github user cbismuth commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/497#discussion_r232681060
  
--- Diff: 
lucene/core/src/test/org/apache/lucene/index/TestExitableDirectoryReader.java 
---
@@ -160,5 +160,78 @@ public void testExitableFilterIndexReader() throws 
Exception {
 
 directory.close();
   }
+
+  /**
+   * Tests timing out of PointValues queries
+   *
+   * @throws Exception on error
+   */
+  @Ignore("this test relies on wall clock time and sometimes false fails")
--- End diff --

Fixed in 2b3dfbcba8944dd48c22c341699da528c8525f91.


---




[GitHub] lucene-solr pull request #497: LUCENE-8026: ExitableDirectoryReader does not...

2018-11-12 Thread cbismuth
Github user cbismuth commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/497#discussion_r232681097
  
--- Diff: 
lucene/core/src/java/org/apache/lucene/index/ExitableDirectoryReader.java ---
@@ -100,13 +109,97 @@ public CacheHelper getCoreCacheHelper() {
 
   }
 
+  /**
+   * Wrapper class for another PointValues implementation that is used by 
ExitableFields.
+   */
+  public static class ExitablePointValues extends PointValues {
+
+private final PointValues in;
+private final QueryTimeout queryTimeout;
+
+public ExitablePointValues(PointValues in, QueryTimeout queryTimeout) {
+  this.in = in;
+  this.queryTimeout = queryTimeout;
+  checkAndThrow();
+}
+
+/**
+ * Throws {@link ExitingReaderException} if {@link 
QueryTimeout#shouldExit()} returns true,
+ * or if {@link Thread#interrupted()} returns true.
+ */
+private void checkAndThrow() {
+  if (queryTimeout.shouldExit()) {
+throw new ExitingReaderException("The request took too long to 
iterate over terms. Timeout: "
++ queryTimeout.toString()
++ ", PointValues=" + in
+);
+  } else if (Thread.interrupted()) {
+throw new ExitingReaderException("Interrupted while iterating over 
terms. PointValues=" + in);
+  }
+}
+
+@Override
+public void intersect(IntersectVisitor visitor) throws IOException {
+  checkAndThrow();
+  in.intersect(visitor);
--- End diff --

Fixed in 5587aa2c904ba135da11a89f47cfa035901b046d.


---




[GitHub] lucene-solr pull request #497: LUCENE-8026: ExitableDirectoryReader does not...

2018-11-12 Thread cbismuth
Github user cbismuth commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/497#discussion_r232681035
  
--- Diff: lucene/core/src/java/org/apache/lucene/index/ExitableDirectoryReader.java ---
@@ -100,13 +109,97 @@ public CacheHelper getCoreCacheHelper() {
 
   }
 
+  /**
+   * Wrapper class for another PointValues implementation that is used by ExitableFields.
+   */
+  public static class ExitablePointValues extends PointValues {
+
+    private final PointValues in;
+    private final QueryTimeout queryTimeout;
+
+    public ExitablePointValues(PointValues in, QueryTimeout queryTimeout) {
+      this.in = in;
+      this.queryTimeout = queryTimeout;
+      checkAndThrow();
+    }
+
+    /**
+     * Throws {@link ExitingReaderException} if {@link QueryTimeout#shouldExit()} returns true,
+     * or if {@link Thread#interrupted()} returns true.
+     */
+    private void checkAndThrow() {
+      if (queryTimeout.shouldExit()) {
+        throw new ExitingReaderException("The request took too long to iterate over terms. Timeout: "
+            + queryTimeout.toString()
+            + ", PointValues=" + in
+        );
+      } else if (Thread.interrupted()) {
+        throw new ExitingReaderException("Interrupted while iterating over terms. PointValues=" + in);
--- End diff --

Fixed in 6c6e45be3b645814b9871db3b33b799a7c31296d.


---




[GitHub] lucene-solr pull request #497: LUCENE-8026: ExitableDirectoryReader does not...

2018-11-12 Thread cbismuth
Github user cbismuth commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/497#discussion_r232681011
  
--- Diff: lucene/core/src/java/org/apache/lucene/index/ExitableDirectoryReader.java ---
@@ -100,13 +109,97 @@ public CacheHelper getCoreCacheHelper() {
 
   }
 
+  /**
+   * Wrapper class for another PointValues implementation that is used by ExitableFields.
+   */
+  public static class ExitablePointValues extends PointValues {
+
+    private final PointValues in;
+    private final QueryTimeout queryTimeout;
+
+    public ExitablePointValues(PointValues in, QueryTimeout queryTimeout) {
+      this.in = in;
+      this.queryTimeout = queryTimeout;
+      checkAndThrow();
+    }
+
+    /**
+     * Throws {@link ExitingReaderException} if {@link QueryTimeout#shouldExit()} returns true,
+     * or if {@link Thread#interrupted()} returns true.
+     */
+    private void checkAndThrow() {
+      if (queryTimeout.shouldExit()) {
+        throw new ExitingReaderException("The request took too long to iterate over terms. Timeout: "
--- End diff --

Fixed in 6c6e45be3b645814b9871db3b33b799a7c31296d.


---




[jira] [Commented] (LUCENE-8563) Remove k1+1 from the numerator of BM25Similarity

2018-11-12 Thread Adrien Grand (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8563?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16683858#comment-16683858
 ] 

Adrien Grand commented on LUCENE-8563:
--

[~ehaubert] The change I'm suggesting would divide every BM25 score by (k1+1), 
which doesn't affect ranking. Setting k1 to 0 would have the undesirable 
side-effect of disabling the impact of term frequency and document length: the 
formula I wrote was a bit simplified, as {{norm}} actually depends on 
{{k1}}; expanding {{norm}} gives:

{code:java}
boost * IDF * (k1+1) * tf / (tf + k1 * (1 - b + b * len / avgLen))
{code}
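To make the invariance concrete, here is a small standalone sketch of the two variants (illustrative only; the class and method names are ad hoc, not Lucene's actual BM25Similarity code). Dividing by (k1+1) rescales every score by the same constant, so document order is unchanged:

```java
// Illustrative sketch of the two BM25 variants discussed above; names are
// ad hoc, not Lucene's API.
public class Bm25Sketch {

    // Length normalization as in the expanded formula: k1 * (1 - b + b * len / avgLen)
    static double norm(double k1, double b, double len, double avgLen) {
        return k1 * (1 - b + b * len / avgLen);
    }

    // Current Lucene form: boost * IDF * (k1+1) * tf / (tf + norm)
    static double scoreClassic(double boost, double idf, double k1, double tf, double norm) {
        return boost * idf * (k1 + 1) * tf / (tf + norm);
    }

    // Proposed form with the constant (k1+1) numerator factor removed
    static double scoreSimplified(double boost, double idf, double tf, double norm) {
        return boost * idf * tf / (tf + norm);
    }

    public static void main(String[] args) {
        double k1 = 1.2, b = 0.75;
        double n = norm(k1, b, 12, 10);
        // The ratio between the two variants is the constant (k1 + 1) for every
        // term and document, so rankings are identical.
        System.out.println(scoreClassic(1, 3.0, k1, 2, n) / scoreSimplified(1, 3.0, 2, n));
    }
}
```

The ratio printed is always k1 + 1, independent of tf, idf, or document length, which is exactly why removing the factor cannot reorder results.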

> Remove k1+1 from the numerator of  BM25Similarity
> -
>
> Key: LUCENE-8563
> URL: https://issues.apache.org/jira/browse/LUCENE-8563
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Priority: Minor
>
> Our current implementation of BM25 does
> {code:java}
> boost * IDF * (k1+1) * tf / (tf + norm)
> {code}
> As (k1+1) is a constant, it is the same for every term and doesn't modify 
> ordering. It is often omitted and I found out that the "The Probabilistic 
> Relevance Framework: BM25 and Beyond" paper by Robertson (BM25's author) and 
> Zaragova even describes adding (k1+1) to the numerator as a variant whose 
> benefit is to be more comparable with Robertson/Sparck-Jones weighting, which 
> we don't care about.
> {quote}A common variant is to add a (k1 + 1) component to the
>  numerator of the saturation function. This is the same for all
>  terms, and therefore does not affect the ranking produced.
>  The reason for including it was to make the final formula
>  more compatible with the RSJ weight used on its own
> {quote}
> Should we remove it from BM25Similarity as well?
> A side-effect that I'm interested in is that integrating other score 
> contributions (eg. via oal.document.FeatureField) would be a bit easier to 
> reason about. For instance a weight of 3 in FeatureField#newSaturationQuery 
> would have a similar impact as a term whose IDF is 3 (and thus docFreq ~= 5%) 
> rather than a term whose IDF is 3/(k1 + 1).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)




[jira] [Commented] (SOLR-12881) Remove unneeded import statements

2018-11-12 Thread Uwe Schindler (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12881?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16683817#comment-16683817
 ] 

Uwe Schindler commented on SOLR-12881:
--

Hi,

- Forbiddenapis does not work at the source level, so it won't see imports.
- check-source-pattern.groovy would work, just add a regex there.

About ECJ, we can read the documentation. The setting should be visible in the 
Eclipse GUI. If you can't find it there, we can't configure it!

> Remove unneeded import statements
> -
>
> Key: SOLR-12881
> URL: https://issues.apache.org/jira/browse/SOLR-12881
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: master (8.0)
>Reporter: Peter Somogyi
>Assignee: Erick Erickson
>Priority: Trivial
> Attachments: SOLR-12881.patch, SOLR-12881.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> There are unnecessary import statements:
>  * import from java.lang
>  * import from same package
>  * unused import







[GitHub] lucene-solr pull request #496: LUCENE-8463: Early-terminate queries sorted b...

2018-11-12 Thread cbismuth
Github user cbismuth commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/496#discussion_r232666957
  
--- Diff: lucene/core/src/java/org/apache/lucene/search/TopFieldCollector.java ---
@@ -68,6 +68,20 @@ public void setScorer(Scorable scorer) throws IOException {
   }
 
   static boolean canEarlyTerminate(Sort searchSort, Sort indexSort) {
+    return canEarlyTerminateOnDocId(searchSort, indexSort) ||
+        canEarlyTerminateOnPrefix(searchSort, indexSort);
+  }
+
+  private static boolean canEarlyTerminateOnDocId(Sort searchSort, Sort indexSort) {
+    final SortField[] fields1 = searchSort.getSort();
+    final SortField[] fields2 = indexSort.getSort();
+    return fields1.length == 1 &&
--- End diff --

Fixed in 7ef9b96746503ca0672d9e65b41555bc4ae567b3.


---




[GitHub] lucene-solr pull request #496: LUCENE-8463: Early-terminate queries sorted b...

2018-11-12 Thread cbismuth
Github user cbismuth commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/496#discussion_r232665140
  
--- Diff: lucene/core/src/java/org/apache/lucene/search/TopFieldCollector.java ---
@@ -68,6 +68,20 @@ public void setScorer(Scorable scorer) throws IOException {
   }
 
   static boolean canEarlyTerminate(Sort searchSort, Sort indexSort) {
+    return canEarlyTerminateOnDocId(searchSort, indexSort) ||
+        canEarlyTerminateOnPrefix(searchSort, indexSort);
+  }
+
+  private static boolean canEarlyTerminateOnDocId(Sort searchSort, Sort indexSort) {
+    final SortField[] fields1 = searchSort.getSort();
+    final SortField[] fields2 = indexSort.getSort();
+    return fields1.length == 1 &&
--- End diff --

Oh, yes, we don't need it, I'll remove these conditions.


---




[GitHub] lucene-solr pull request #496: LUCENE-8463: Early-terminate queries sorted b...

2018-11-12 Thread mikemccand
Github user mikemccand commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/496#discussion_r232664496
  
--- Diff: lucene/core/src/java/org/apache/lucene/search/TopFieldCollector.java ---
@@ -68,6 +68,20 @@ public void setScorer(Scorable scorer) throws IOException {
   }
 
   static boolean canEarlyTerminate(Sort searchSort, Sort indexSort) {
+    return canEarlyTerminateOnDocId(searchSort, indexSort) ||
+        canEarlyTerminateOnPrefix(searchSort, indexSort);
+  }
+
+  private static boolean canEarlyTerminateOnDocId(Sort searchSort, Sort indexSort) {
+    final SortField[] fields1 = searchSort.getSort();
+    final SortField[] fields2 = indexSort.getSort();
+    return fields1.length == 1 &&
--- End diff --

It'd be weird, but we can also safely early terminate even if there are 
other sort fields after `docid` right (since `docid` is a total sort)?  So we 
don't need to insist the length is 1?
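A sketch of the relaxation suggested here, using minimal stand-in types rather than Lucene's real Sort/SortField classes (names are hypothetical): since `docid` is a total order, only the primary sort field has to be inspected, and any trailing tie-breaker fields can be ignored because ties never occur.

```java
// Hypothetical sketch of the relaxed check; SortFieldSketch stands in for
// Lucene's SortField, it is not the actual API.
public class EarlyTerminateSketch {

    enum SortType { DOC, FIELD }

    static class SortFieldSketch {
        final SortType type;
        final boolean reverse;
        SortFieldSketch(SortType type, boolean reverse) {
            this.type = type;
            this.reverse = reverse;
        }
    }

    static boolean canEarlyTerminateOnDocId(SortFieldSketch[] searchSort) {
        // docid is a total order: once it is the primary sort, later sort
        // fields can never break a tie, so only the first entry matters and
        // there is no need to require searchSort.length == 1.
        return searchSort.length > 0
            && searchSort[0].type == SortType.DOC
            && !searchSort[0].reverse;
    }
}
```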


---




[GitHub] lucene-solr pull request #497: LUCENE-8026: ExitableDirectoryReader does not...

2018-11-12 Thread cbismuth
Github user cbismuth commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/497#discussion_r232664251
  
--- Diff: lucene/core/src/test/org/apache/lucene/index/TestExitableDirectoryReader.java ---
@@ -160,5 +160,78 @@ public void testExitableFilterIndexReader() throws Exception {
 
     directory.close();
   }
+
+  /**
+   * Tests timing out of PointValues queries
+   *
+   * @throws Exception on error
+   */
+  @Ignore("this test relies on wall clock time and sometimes false fails")
--- End diff --

I'll try to do so, yes, thanks :+1:


---




[GitHub] lucene-solr pull request #497: LUCENE-8026: ExitableDirectoryReader does not...

2018-11-12 Thread cbismuth
Github user cbismuth commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/497#discussion_r232664071
  
--- Diff: lucene/core/src/java/org/apache/lucene/index/ExitableDirectoryReader.java ---
@@ -100,13 +109,97 @@ public CacheHelper getCoreCacheHelper() {
 
   }
 
+  /**
+   * Wrapper class for another PointValues implementation that is used by ExitableFields.
+   */
+  public static class ExitablePointValues extends PointValues {
+
+    private final PointValues in;
+    private final QueryTimeout queryTimeout;
+
+    public ExitablePointValues(PointValues in, QueryTimeout queryTimeout) {
+      this.in = in;
+      this.queryTimeout = queryTimeout;
+      checkAndThrow();
+    }
+
+    /**
+     * Throws {@link ExitingReaderException} if {@link QueryTimeout#shouldExit()} returns true,
+     * or if {@link Thread#interrupted()} returns true.
+     */
+    private void checkAndThrow() {
+      if (queryTimeout.shouldExit()) {
+        throw new ExitingReaderException("The request took too long to iterate over terms. Timeout: "
+            + queryTimeout.toString()
+            + ", PointValues=" + in
+        );
+      } else if (Thread.interrupted()) {
+        throw new ExitingReaderException("Interrupted while iterating over terms. PointValues=" + in);
+      }
+    }
+
+    @Override
+    public void intersect(IntersectVisitor visitor) throws IOException {
+      checkAndThrow();
+      in.intersect(visitor);
--- End diff --

I think it would be better, yes. I'll suggest a change like this, thank you.


---




[jira] [Commented] (LUCENE-8563) Remove k1+1 from the numerator of BM25Similarity

2018-11-12 Thread Elizabeth Haubert (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8563?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16683815#comment-16683815
 ] 

Elizabeth Haubert commented on LUCENE-8563:
---

Mathematically, it changes the ratio of 
{code:java}
tf * idf / (tf + norm)
{code}

which determines the relative importance of the norms parameter. It seems 
like that should affect ranking, at least for low values of tf. Why not just 
set the parameter to 0 for the cases you are looking at?




> Remove k1+1 from the numerator of  BM25Similarity
> -
>
> Key: LUCENE-8563
> URL: https://issues.apache.org/jira/browse/LUCENE-8563
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Priority: Minor
>
> Our current implementation of BM25 does
> {code:java}
> boost * IDF * (k1+1) * tf / (tf + norm)
> {code}
> As (k1+1) is a constant, it is the same for every term and doesn't modify 
> ordering. It is often omitted and I found out that the "The Probabilistic 
> Relevance Framework: BM25 and Beyond" paper by Robertson (BM25's author) and 
> Zaragoza even describes adding (k1+1) to the numerator as a variant whose 
> benefit is to be more comparable with Robertson/Sparck-Jones weighting, which 
> we don't care about.
> {quote}A common variant is to add a (k1 + 1) component to the
>  numerator of the saturation function. This is the same for all
>  terms, and therefore does not affect the ranking produced.
>  The reason for including it was to make the final formula
>  more compatible with the RSJ weight used on its own
> {quote}
> Should we remove it from BM25Similarity as well?
> A side-effect that I'm interested in is that integrating other score 
> contributions (eg. via oal.document.FeatureField) would be a bit easier to 
> reason about. For instance a weight of 3 in FeatureField#newSaturationQuery 
> would have a similar impact as a term whose IDF is 3 (and thus docFreq ~= 5%) 
> rather than a term whose IDF is 3/(k1 + 1).







[GitHub] lucene-solr pull request #497: LUCENE-8026: ExitableDirectoryReader does not...

2018-11-12 Thread cbismuth
Github user cbismuth commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/497#discussion_r232663580
  
--- Diff: lucene/core/src/java/org/apache/lucene/index/ExitableDirectoryReader.java ---
@@ -100,13 +109,97 @@ public CacheHelper getCoreCacheHelper() {
 
   }
 
+  /**
+   * Wrapper class for another PointValues implementation that is used by ExitableFields.
+   */
+  public static class ExitablePointValues extends PointValues {
+
+    private final PointValues in;
+    private final QueryTimeout queryTimeout;
+
+    public ExitablePointValues(PointValues in, QueryTimeout queryTimeout) {
+      this.in = in;
+      this.queryTimeout = queryTimeout;
+      checkAndThrow();
+    }
+
+    /**
+     * Throws {@link ExitingReaderException} if {@link QueryTimeout#shouldExit()} returns true,
+     * or if {@link Thread#interrupted()} returns true.
+     */
+    private void checkAndThrow() {
+      if (queryTimeout.shouldExit()) {
+        throw new ExitingReaderException("The request took too long to iterate over terms. Timeout: "
+            + queryTimeout.toString()
+            + ", PointValues=" + in
+        );
+      } else if (Thread.interrupted()) {
+        throw new ExitingReaderException("Interrupted while iterating over terms. PointValues=" + in);
--- End diff --

Same here.


---




[GitHub] lucene-solr pull request #497: LUCENE-8026: ExitableDirectoryReader does not...

2018-11-12 Thread cbismuth
Github user cbismuth commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/497#discussion_r232663501
  
--- Diff: lucene/core/src/java/org/apache/lucene/index/ExitableDirectoryReader.java ---
@@ -100,13 +109,97 @@ public CacheHelper getCoreCacheHelper() {
 
   }
 
+  /**
+   * Wrapper class for another PointValues implementation that is used by ExitableFields.
+   */
+  public static class ExitablePointValues extends PointValues {
+
+    private final PointValues in;
+    private final QueryTimeout queryTimeout;
+
+    public ExitablePointValues(PointValues in, QueryTimeout queryTimeout) {
+      this.in = in;
+      this.queryTimeout = queryTimeout;
+      checkAndThrow();
+    }
+
+    /**
+     * Throws {@link ExitingReaderException} if {@link QueryTimeout#shouldExit()} returns true,
+     * or if {@link Thread#interrupted()} returns true.
+     */
+    private void checkAndThrow() {
+      if (queryTimeout.shouldExit()) {
+        throw new ExitingReaderException("The request took too long to iterate over terms. Timeout: "
--- End diff --

I missed this one ... yes :+1:


---




[GitHub] lucene-solr pull request #497: LUCENE-8026: ExitableDirectoryReader does not...

2018-11-12 Thread mikemccand
Github user mikemccand commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/497#discussion_r232661641
  
--- Diff: lucene/core/src/java/org/apache/lucene/index/ExitableDirectoryReader.java ---
@@ -100,13 +109,97 @@ public CacheHelper getCoreCacheHelper() {
 
   }
 
+  /**
+   * Wrapper class for another PointValues implementation that is used by ExitableFields.
+   */
+  public static class ExitablePointValues extends PointValues {
+
+    private final PointValues in;
+    private final QueryTimeout queryTimeout;
+
+    public ExitablePointValues(PointValues in, QueryTimeout queryTimeout) {
+      this.in = in;
+      this.queryTimeout = queryTimeout;
+      checkAndThrow();
+    }
+
+    /**
+     * Throws {@link ExitingReaderException} if {@link QueryTimeout#shouldExit()} returns true,
+     * or if {@link Thread#interrupted()} returns true.
+     */
+    private void checkAndThrow() {
+      if (queryTimeout.shouldExit()) {
+        throw new ExitingReaderException("The request took too long to iterate over terms. Timeout: "
+            + queryTimeout.toString()
+            + ", PointValues=" + in
+        );
+      } else if (Thread.interrupted()) {
+        throw new ExitingReaderException("Interrupted while iterating over terms. PointValues=" + in);
--- End diff --

Same here?


---




[GitHub] lucene-solr pull request #497: LUCENE-8026: ExitableDirectoryReader does not...

2018-11-12 Thread mikemccand
Github user mikemccand commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/497#discussion_r232661519
  
--- Diff: lucene/core/src/java/org/apache/lucene/index/ExitableDirectoryReader.java ---
@@ -100,13 +109,97 @@ public CacheHelper getCoreCacheHelper() {
 
   }
 
+  /**
+   * Wrapper class for another PointValues implementation that is used by ExitableFields.
+   */
+  public static class ExitablePointValues extends PointValues {
+
+    private final PointValues in;
+    private final QueryTimeout queryTimeout;
+
+    public ExitablePointValues(PointValues in, QueryTimeout queryTimeout) {
+      this.in = in;
+      this.queryTimeout = queryTimeout;
+      checkAndThrow();
+    }
+
+    /**
+     * Throws {@link ExitingReaderException} if {@link QueryTimeout#shouldExit()} returns true,
+     * or if {@link Thread#interrupted()} returns true.
+     */
+    private void checkAndThrow() {
+      if (queryTimeout.shouldExit()) {
+        throw new ExitingReaderException("The request took too long to iterate over terms. Timeout: "
--- End diff --

Maybe change `to iterate over terms` to `to iterate over points`?


---




[GitHub] lucene-solr pull request #497: LUCENE-8026: ExitableDirectoryReader does not...

2018-11-12 Thread mikemccand
Github user mikemccand commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/497#discussion_r232661927
  
--- Diff: lucene/core/src/test/org/apache/lucene/index/TestExitableDirectoryReader.java ---
@@ -160,5 +160,78 @@ public void testExitableFilterIndexReader() throws Exception {
 
     directory.close();
   }
+
+  /**
+   * Tests timing out of PointValues queries
+   *
+   * @throws Exception on error
+   */
+  @Ignore("this test relies on wall clock time and sometimes false fails")
--- End diff --

I wonder if we could make a mock clock and then have deterministic control 
and be able to re-enable these tests?
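A sketch of what such a mock clock could look like (hypothetical code; QueryTimeout is reduced here to its shouldExit() method so the example stays self-contained): the test advances the clock explicitly instead of sleeping against wall-clock time, so the timeout fires deterministically.

```java
// Hypothetical sketch of a deterministic, manually-advanced clock driving a
// timeout check; not Lucene's actual QueryTimeout implementation.
public class MockClockTimeout {

    interface QueryTimeout {
        boolean shouldExit();
    }

    // A clock that only moves when the test tells it to.
    static class FakeClock {
        private long nanos;
        void advance(long n) { nanos += n; }
        long nanoTime() { return nanos; }
    }

    // A timeout that expires when the fake clock passes the deadline.
    static QueryTimeout timeoutAt(FakeClock clock, long deadlineNanos) {
        return () -> clock.nanoTime() >= deadlineNanos;
    }

    public static void main(String[] args) {
        FakeClock clock = new FakeClock();
        QueryTimeout t = timeoutAt(clock, 1_000);
        System.out.println(t.shouldExit()); // false: clock has not advanced yet
        clock.advance(2_000);
        System.out.println(t.shouldExit()); // true: deadline passed deterministically
    }
}
```

Because nothing depends on System.nanoTime(), the test can assert both the non-timeout and timeout paths without sleeps or flaky wall-clock margins.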


---




[GitHub] lucene-solr pull request #497: LUCENE-8026: ExitableDirectoryReader does not...

2018-11-12 Thread mikemccand
Github user mikemccand commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/497#discussion_r232662145
  
--- Diff: lucene/core/src/java/org/apache/lucene/index/ExitableDirectoryReader.java ---
@@ -100,13 +109,97 @@ public CacheHelper getCoreCacheHelper() {
 
   }
 
+  /**
+   * Wrapper class for another PointValues implementation that is used by ExitableFields.
+   */
+  public static class ExitablePointValues extends PointValues {
+
+    private final PointValues in;
+    private final QueryTimeout queryTimeout;
+
+    public ExitablePointValues(PointValues in, QueryTimeout queryTimeout) {
+      this.in = in;
+      this.queryTimeout = queryTimeout;
+      checkAndThrow();
+    }
+
+    /**
+     * Throws {@link ExitingReaderException} if {@link QueryTimeout#shouldExit()} returns true,
+     * or if {@link Thread#interrupted()} returns true.
+     */
+    private void checkAndThrow() {
+      if (queryTimeout.shouldExit()) {
+        throw new ExitingReaderException("The request took too long to iterate over terms. Timeout: "
+            + queryTimeout.toString()
+            + ", PointValues=" + in
+        );
+      } else if (Thread.interrupted()) {
+        throw new ExitingReaderException("Interrupted while iterating over terms. PointValues=" + in);
+      }
+    }
+
+    @Override
+    public void intersect(IntersectVisitor visitor) throws IOException {
+      checkAndThrow();
+      in.intersect(visitor);
--- End diff --

A lot of time/effort can be spent in this (recursive) intersect call -- 
should we also wrap the `IntersectVisitor` and sometimes check for timeout?
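One possible shape for that wrapper, sketched with minimal local stand-ins for Lucene's PointValues.IntersectVisitor and QueryTimeout (these are not the real interfaces, and the patch may differ): count visited values and re-check the timeout only every N calls so the check stays cheap inside the hot recursion.

```java
// Hypothetical sketch of a timeout-checking IntersectVisitor wrapper; the
// interfaces below are simplified stand-ins, not Lucene's actual API.
public class ExitableVisitorSketch {

    interface QueryTimeout {
        boolean shouldExit();
    }

    interface IntersectVisitor {
        void visit(int docID);
    }

    static class TimeoutException extends RuntimeException {}

    static class ExitableIntersectVisitor implements IntersectVisitor {
        private static final int CHECK_EVERY = 1024; // amortize the timeout check
        private final IntersectVisitor in;
        private final QueryTimeout timeout;
        private int calls;

        ExitableIntersectVisitor(IntersectVisitor in, QueryTimeout timeout) {
            this.in = in;
            this.timeout = timeout;
        }

        @Override
        public void visit(int docID) {
            // Check the timeout once every CHECK_EVERY visited values, so a
            // long recursive intersect() can still bail out mid-traversal.
            if (++calls % CHECK_EVERY == 0 && timeout.shouldExit()) {
                throw new TimeoutException();
            }
            in.visit(docID);
        }
    }
}
```

The periodic check keeps the per-value overhead to an increment and a modulo in the common case, while still bounding how long an expired query can keep visiting points.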


---




[jira] [Commented] (SOLR-12881) Remove unneeded import statements

2018-11-12 Thread Christine Poerschke (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12881?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16683807#comment-16683807
 ] 

Christine Poerschke commented on SOLR-12881:


bq. ... cleanup ...

+1

Could we perhaps also do something to prevent re-introductions i.e. to keep 
things shiny after cleanup?
* Perhaps 
https://github.com/apache/lucene-solr/blob/master/lucene/tools/src/groovy/check-source-patterns.groovy
 (or [~thetaphi]'s 
https://github.com/apache/lucene-solr/tree/master/lucene/tools/forbiddenApis ?) 
could guard against {{import java.lang}} re-introduction?
* In 
https://github.com/apache/lucene-solr/blob/releases/lucene-solr/7.5.0/dev-tools/eclipse/dot.settings/org.eclipse.jdt.core.prefs#L34
 and 
https://github.com/apache/lucene-solr/blob/releases/lucene-solr/7.5.0/lucene/tools/javadoc/ecj.javadocs.prefs#L85
 we currently have
{{org.eclipse.jdt.core.compiler.problem.unusedImport=error}} and the latter 
file is used by precommit. Not sure if something equivalent specifically 
looking for same-package imports exists?
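For illustration, both patterns could be matched with plain regexes along these lines (a hypothetical sketch of the matching logic only, not the actual check-source-patterns.groovy rule or the ECJ setting):

```java
// Hypothetical sketch of regex-based import checks in the spirit of
// check-source-patterns.groovy; not the actual build rule.
import java.util.regex.Pattern;

public class ImportCheckSketch {

    // Matches direct java.lang imports like "import java.lang.String;"
    // but not subpackages such as "import java.lang.reflect.Method;".
    static final Pattern JAVA_LANG_IMPORT =
        Pattern.compile("^import\\s+java\\.lang\\.[A-Za-z]+\\s*;", Pattern.MULTILINE);

    static boolean hasJavaLangImport(String source) {
        return JAVA_LANG_IMPORT.matcher(source).find();
    }

    // Flags "import org.foo.Bar;" when the file declares "package org.foo;".
    static boolean hasSamePackageImport(String source, String pkg) {
        Pattern p = Pattern.compile(
            "^import\\s+" + Pattern.quote(pkg) + "\\.[A-Za-z]+\\s*;",
            Pattern.MULTILINE);
        return p.matcher(source).find();
    }
}
```

Unused imports are harder to catch with a regex alone, which is presumably why the ECJ {{unusedImport=error}} setting handles that case in precommit.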



> Remove unneeded import statements
> -
>
> Key: SOLR-12881
> URL: https://issues.apache.org/jira/browse/SOLR-12881
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: master (8.0)
>Reporter: Peter Somogyi
>Assignee: Erick Erickson
>Priority: Trivial
> Attachments: SOLR-12881.patch, SOLR-12881.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> There are unnecessary import statements:
>  * import from java.lang
>  * import from same package
>  * unused import







[jira] [Commented] (LUCENE-8552) optimize getMergedFieldInfos for one-segment FieldInfos

2018-11-12 Thread Christophe Bismuth (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8552?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16683801#comment-16683801
 ] 

Christophe Bismuth commented on LUCENE-8552:


Is the underlying idea to limit the number of {{FieldInfos}} instances added to 
the {{FieldInfos.Builder}} for performance purposes?

> optimize getMergedFieldInfos for one-segment FieldInfos
> ---
>
> Key: LUCENE-8552
> URL: https://issues.apache.org/jira/browse/LUCENE-8552
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: David Smiley
>Priority: Minor
>
> FieldInfos.getMergedFieldInfos could trivially return the FieldInfos of the 
> first and only LeafReader if there is only one LeafReader.
> Also... if there is more than one LeafReader, and if FieldInfos & FieldInfo 
> implemented equals() & hashCode() (including a cached hashCode), maybe we 
> could also call equals() iterating through the FieldInfos to see if we should 
> bother adding it to the FieldInfos.Builder?  Admittedly this is speculative; 
> may not be worth the bother.
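The single-segment fast path described in the issue can be sketched as follows (minimal stand-in types and hypothetical names, not Lucene's actual FieldInfos API):

```java
// Hypothetical sketch of the proposed fast path: with exactly one leaf there
// is nothing to merge, so return that leaf's FieldInfos directly.
import java.util.List;

public class MergedFieldInfosSketch {

    static class FieldInfos {}

    static FieldInfos getMergedFieldInfos(List<FieldInfos> leaves) {
        if (leaves.size() == 1) {
            return leaves.get(0); // one segment: skip the builder entirely
        }
        // Multi-segment case: fall back to merging via a builder
        // (elided here; this is where the speculative equals()/hashCode()
        // short-circuit from the issue would also apply).
        return new FieldInfos();
    }
}
```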







[jira] [Created] (LUCENE-8563) Remove k1+1 from the numerator of BM25Similarity

2018-11-12 Thread Adrien Grand (JIRA)
Adrien Grand created LUCENE-8563:


 Summary: Remove k1+1 from the numerator of  BM25Similarity
 Key: LUCENE-8563
 URL: https://issues.apache.org/jira/browse/LUCENE-8563
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Adrien Grand


Our current implementation of BM25 does
{code:java}
boost * IDF * (k1+1) * tf / (tf + norm)
{code}
As (k1+1) is a constant, it is the same for every term and doesn't modify 
ordering. It is often omitted and I found out that the "The Probabilistic 
Relevance Framework: BM25 and Beyond" paper by Robertson (BM25's author) and 
Zaragoza even describes adding (k1+1) to the numerator as a variant whose 
benefit is to be more comparable with Robertson/Sparck-Jones weighting, which 
we don't care about.
{quote}A common variant is to add a (k1 + 1) component to the
 numerator of the saturation function. This is the same for all
 terms, and therefore does not affect the ranking produced.
 The reason for including it was to make the final formula
 more compatible with the RSJ weight used on its own
{quote}
Should we remove it from BM25Similarity as well?

A side-effect that I'm interested in is that integrating other score 
contributions (eg. via oal.document.FeatureField) would be a bit easier to 
reason about. For instance a weight of 3 in FeatureField#newSaturationQuery 
would have a similar impact as a term whose IDF is 3 (and thus docFreq ~= 5%) 
rather than a term whose IDF is 3/(k1 + 1).







[jira] [Commented] (SOLR-12977) Autoscaling policy initialisation tries to fetch metrics from dead nodes

2018-11-12 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12977?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16683787#comment-16683787
 ] 

ASF subversion and git services commented on SOLR-12977:


Commit e81dd4e870d2a9b27e1f4366e92daa6dba054da8 in lucene-solr's branch 
refs/heads/master from [~noble.paul]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=e81dd4e ]

SOLR-12977: fixed bug


> Autoscaling policy initialisation tries to fetch metrics from dead nodes
> 
>
> Key: SOLR-12977
> URL: https://issues.apache.org/jira/browse/SOLR-12977
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling
>Reporter: Shalin Shekhar Mangar
>Assignee: Noble Paul
>Priority: Minor
> Fix For: 7.6, master (8.0)
>
>
> Autoscaling policy initialisation tries to fetch metrics for each node during 
> construction. However, it does not skip the known dead nodes causing a 
> timeout to be logged. We should skip such requests entirely.
> {code}
> 20579 WARN  (AutoscalingActionExecutor-37-thread-1) [] 
> o.a.s.c.s.i.SolrClientNodeStateProvider could not get tags from node 
> 127.0.0.1:63255_solr
> org.apache.solr.client.solrj.SolrServerException: Server refused connection 
> at: http://127.0.0.1:63255/solr
>   at 
> org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:650)
>  ~[java/:?]
>   at 
> org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255)
>  ~[java/:?]
>   at 
> org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244)
>  ~[java/:?]
>   at 
> org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1260) 
> ~[java/:?]
>   at 
> org.apache.solr.client.solrj.impl.SolrClientNodeStateProvider$ClientSnitchCtx.invoke(SolrClientNodeStateProvider.java:342)
>  ~[java/:?]
>   at 
> org.apache.solr.client.solrj.impl.SolrClientNodeStateProvider.fetchReplicaMetrics(SolrClientNodeStateProvider.java:195)
>  [java/:?]
>   at 
> org.apache.solr.client.solrj.impl.SolrClientNodeStateProvider.fetchReplicaMetrics(SolrClientNodeStateProvider.java:186)
>  [java/:?]
>   at 
> org.apache.solr.client.solrj.impl.SolrClientNodeStateProvider.getReplicaInfo(SolrClientNodeStateProvider.java:169)
>  [java/:?]
>   at 
> org.apache.solr.client.solrj.cloud.autoscaling.Row.<init>(Row.java:60) 
> [java/:?]
>   at 
> org.apache.solr.client.solrj.cloud.autoscaling.Suggester.getSuggestion(Suggester.java:181)
>  [java/:?]
>   at 
> org.apache.solr.cloud.autoscaling.ComputePlanAction.process(ComputePlanAction.java:114)
>  [java/:?]
>   at 
> org.apache.solr.cloud.autoscaling.ScheduledTriggers.lambda$null$419(ScheduledTriggers.java:308)
>  [java/:?]
> {code}
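The fix direction can be sketched as a simple live-nodes filter applied before any metrics request is issued (illustrative names only, not Solr's actual NodeStateProvider API):

```java
// Hypothetical sketch: drop known-dead nodes before fetching metrics, so no
// HTTP request (and no connection-refused timeout) is ever issued to them.
import java.util.List;
import java.util.Set;
import java.util.stream.Collectors;

public class SkipDeadNodesSketch {

    static List<String> nodesToQuery(List<String> candidates, Set<String> liveNodes) {
        return candidates.stream()
            .filter(liveNodes::contains) // dead nodes never reach the HTTP client
            .collect(Collectors.toList());
    }
}
```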







[jira] [Commented] (SOLR-12977) Autoscaling policy initialisation tries to fetch metrics from dead nodes

2018-11-12 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12977?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16683786#comment-16683786
 ] 

ASF subversion and git services commented on SOLR-12977:


Commit 988462b9ea3cdf8137a478fc7f81da03355aea4c in lucene-solr's branch 
refs/heads/branch_7x from [~noble.paul]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=988462b ]

SOLR-12977: fixed bug


> Autoscaling policy initialisation tries to fetch metrics from dead nodes
> 
>
> Key: SOLR-12977
> URL: https://issues.apache.org/jira/browse/SOLR-12977
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling
>Reporter: Shalin Shekhar Mangar
>Assignee: Noble Paul
>Priority: Minor
> Fix For: 7.6, master (8.0)
>
>
> Autoscaling policy initialisation tries to fetch metrics for each node during 
> construction. However, it does not skip the known dead nodes causing a 
> timeout to be logged. We should skip such requests entirely.
> {code}
> 20579 WARN  (AutoscalingActionExecutor-37-thread-1) [] 
> o.a.s.c.s.i.SolrClientNodeStateProvider could not get tags from node 
> 127.0.0.1:63255_solr
> org.apache.solr.client.solrj.SolrServerException: Server refused connection 
> at: http://127.0.0.1:63255/solr
>   at 
> org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:650)
>  ~[java/:?]
>   at 
> org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255)
>  ~[java/:?]
>   at 
> org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244)
>  ~[java/:?]
>   at 
> org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1260) 
> ~[java/:?]
>   at 
> org.apache.solr.client.solrj.impl.SolrClientNodeStateProvider$ClientSnitchCtx.invoke(SolrClientNodeStateProvider.java:342)
>  ~[java/:?]
>   at 
> org.apache.solr.client.solrj.impl.SolrClientNodeStateProvider.fetchReplicaMetrics(SolrClientNodeStateProvider.java:195)
>  [java/:?]
>   at 
> org.apache.solr.client.solrj.impl.SolrClientNodeStateProvider.fetchReplicaMetrics(SolrClientNodeStateProvider.java:186)
>  [java/:?]
>   at 
> org.apache.solr.client.solrj.impl.SolrClientNodeStateProvider.getReplicaInfo(SolrClientNodeStateProvider.java:169)
>  [java/:?]
>   at 
> org.apache.solr.client.solrj.cloud.autoscaling.Row.&lt;init&gt;(Row.java:60) 
> [java/:?]
>   at 
> org.apache.solr.client.solrj.cloud.autoscaling.Suggester.getSuggestion(Suggester.java:181)
>  [java/:?]
>   at 
> org.apache.solr.cloud.autoscaling.ComputePlanAction.process(ComputePlanAction.java:114)
>  [java/:?]
>   at 
> org.apache.solr.cloud.autoscaling.ScheduledTriggers.lambda$null$419(ScheduledTriggers.java:308)
>  [java/:?]
> {code}
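The skip described in the issue can be sketched independently of Solr's actual classes. A minimal sketch, assuming a plain map of per-node tags; the method name and node-name shapes below are illustrative, not SolrClientNodeStateProvider's real API:

```java
import java.util.*;

public class DeadNodeFilter {
    // Hypothetical stand-in for the node state provider: fetch tags only
    // from nodes that are currently live, so no connection timeout is
    // logged for nodes already known to be dead.
    static Map<String, Map<String, Object>> fetchTags(
            Collection<String> nodes, Set<String> liveNodes) {
        Map<String, Map<String, Object>> tags = new HashMap<>();
        for (String node : nodes) {
            if (!liveNodes.contains(node)) {
                continue; // skip known dead nodes entirely
            }
            // Placeholder metric value; a real provider would do an HTTP call.
            tags.put(node, Map.of("freedisk", 100));
        }
        return tags;
    }

    public static void main(String[] args) {
        Set<String> live = Set.of("127.0.0.1:10001_solr");
        Map<String, Map<String, Object>> tags = fetchTags(
                List.of("127.0.0.1:10001_solr", "127.0.0.1:63255_solr"), live);
        System.out.println(tags.keySet()); // only the live node remains
    }
}
```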






[jira] [Commented] (SOLR-12977) Autoscaling policy initialisation tries to fetch metrics from dead nodes

2018-11-12 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12977?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16683783#comment-16683783
 ] 

ASF subversion and git services commented on SOLR-12977:


Commit 5cee6e467bf272beea7055c72c3bbc6ba89ac591 in lucene-solr's branch 
refs/heads/branch_7_6 from [~noble.paul]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=5cee6e4 ]

SOLR-12977: fixed bug


> Autoscaling policy initialisation tries to fetch metrics from dead nodes
> 
>
> Key: SOLR-12977
> URL: https://issues.apache.org/jira/browse/SOLR-12977
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling
>Reporter: Shalin Shekhar Mangar
>Assignee: Noble Paul
>Priority: Minor
> Fix For: 7.6, master (8.0)
>
>
> Autoscaling policy initialisation tries to fetch metrics for each node during 
> construction. However, it does not skip known dead nodes, causing a 
> timeout to be logged. We should skip such requests entirely.






[jira] [Commented] (LUCENE-8552) optimize getMergedFieldInfos for one-segment FieldInfos

2018-11-12 Thread Christophe Bismuth (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8552?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16683779#comment-16683779
 ] 

Christophe Bismuth commented on LUCENE-8552:


Hi, I'd like to work on this one.

> optimize getMergedFieldInfos for one-segment FieldInfos
> ---
>
> Key: LUCENE-8552
> URL: https://issues.apache.org/jira/browse/LUCENE-8552
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: David Smiley
>Priority: Minor
>
> FieldInfos.getMergedFieldInfos could trivially return the FieldInfos of the 
> first and only LeafReader if there is only one LeafReader.
> Also... if there is more than one LeafReader, and if FieldInfos & FieldInfo 
> implemented equals() & hashCode() (including a cached hashCode), maybe we 
> could also call equals() iterating through the FieldInfos to see if we should 
> bother adding it to the FieldInfos.Builder?  Admittedly this is speculative; 
> may not be worth the bother.
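The trivial single-segment shortcut suggested above can be sketched with plain collections standing in for FieldInfos (all names here are illustrative, not Lucene's actual API):

```java
import java.util.*;

public class MergedInfos {
    // Each leaf's field infos modeled as a list of field names.
    static List<String> getMergedFieldInfos(List<List<String>> leaves) {
        if (leaves.size() == 1) {
            return leaves.get(0); // fast path: one segment, reuse as-is
        }
        // Otherwise merge with de-duplication, roughly the speculative
        // equals() idea from the issue: skip entries already seen instead
        // of re-adding them to a builder.
        LinkedHashSet<String> merged = new LinkedHashSet<>();
        for (List<String> leaf : leaves) merged.addAll(leaf);
        return new ArrayList<>(merged);
    }

    public static void main(String[] args) {
        List<String> single = List.of("id", "title");
        // Single-segment case returns the same object, no merge work done.
        System.out.println(getMergedFieldInfos(List.of(single)) == single);
    }
}
```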






[jira] [Commented] (LUCENE-8026) ExitableDirectoryReader does not instrument points

2018-11-12 Thread Christophe Bismuth (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8026?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16683763#comment-16683763
 ] 

Christophe Bismuth commented on LUCENE-8026:


Hi, I've opened PR [#497|https://github.com/apache/lucene-solr/pull/497] to fix 
this bug.

> ExitableDirectoryReader does not instrument points
> --
>
> Key: LUCENE-8026
> URL: https://issues.apache.org/jira/browse/LUCENE-8026
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Adrien Grand
>Priority: Trivial
>  Labels: newdev
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> This means it cannot interrupt range or geo queries.
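The missing instrumentation amounts to a deadline check while point values are visited. A minimal standalone sketch of that idea, with illustrative names rather than Lucene's ExitableDirectoryReader API:

```java
// Sketch of the "exitable" wrapping idea applied to point values: check a
// deadline periodically while visiting values and throw if the query has
// run too long. Names are illustrative, not Lucene's.
public class ExitablePointsSketch {
    static class TimeExceededException extends RuntimeException {}

    interface Visitor { void visit(long value); }

    static void visitAll(long[] values, Visitor v, long deadlineNanos) {
        int sampleInterval = 1024; // check the clock only every N values
        for (int i = 0; i < values.length; i++) {
            if (i % sampleInterval == 0 && System.nanoTime() > deadlineNanos) {
                throw new TimeExceededException();
            }
            v.visit(values[i]);
        }
    }

    public static void main(String[] args) {
        long[] sum = {0};
        long deadline = System.nanoTime() + 1_000_000_000L; // 1s budget
        visitAll(new long[]{1, 2, 3}, x -> sum[0] += x, deadline);
        System.out.println("visited without timeout, sum=" + sum[0]);
    }
}
```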






[GitHub] lucene-solr pull request #497: LUCENE-8026: ExitableDirectoryReader does not...

2018-11-12 Thread cbismuth
GitHub user cbismuth opened a pull request:

https://github.com/apache/lucene-solr/pull/497

LUCENE-8026: ExitableDirectoryReader does not instrument points

> This means it cannot interrupt range or geo queries.

See [LUCENE-8026](https://issues.apache.org/jira/browse/LUCENE-8026).

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/cbismuth/lucene-solr LUCENE-8026

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/lucene-solr/pull/497.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #497


commit 6aca2ab24f55604a74baa0cf7ec1459640098d59
Author: Christophe Bismuth 
Date:   2018-11-12T12:53:21Z

LUCENE-8026: ExitableDirectoryReader does not instrument points




---




[JENKINS] Lucene-Solr-NightlyTests-7.x - Build # 375 - Still Unstable

2018-11-12 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-7.x/375/

3 tests failed.
FAILED:  org.apache.solr.cloud.hdfs.HdfsRestartWhileUpdatingTest.test

Error Message:
There are still nodes recoverying - waited for 320 seconds

Stack Trace:
java.lang.AssertionError: There are still nodes recoverying - waited for 320 
seconds
at 
__randomizedtesting.SeedInfo.seed([94744E740BFC960B:1C2071AEA500FBF3]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:185)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.waitForRecoveriesToFinish(AbstractFullDistribZkTestBase.java:920)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.waitForThingsToLevelOut(AbstractFullDistribZkTestBase.java:1477)
at 
org.apache.solr.cloud.RestartWhileUpdatingTest.test(RestartWhileUpdatingTest.java:145)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:1010)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:985)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 

[jira] [Commented] (SOLR-12977) Autoscaling policy initialisation tries to fetch metrics from dead nodes

2018-11-12 Thread Eric Pugh (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12977?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16683744#comment-16683744
 ] 

Eric Pugh commented on SOLR-12977:
--

Looking at 
https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;a=blob;f=solr/solrj/src/java/org/apache/solr/client/solrj/impl/SolrClientNodeStateProvider.java;hb=605c3f6f1a8d14ad3933d2ea225ec5ee66a631d9#l321,
 shouldn't it return `false` on line 321?

> Autoscaling policy initialisation tries to fetch metrics from dead nodes
> 
>
> Key: SOLR-12977
> URL: https://issues.apache.org/jira/browse/SOLR-12977
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling
>Reporter: Shalin Shekhar Mangar
>Assignee: Noble Paul
>Priority: Minor
> Fix For: 7.6, master (8.0)
>
>
> Autoscaling policy initialisation tries to fetch metrics for each node during 
> construction. However, it does not skip known dead nodes, causing a 
> timeout to be logged. We should skip such requests entirely.






[jira] [Commented] (SOLR-12313) TestInjection#waitForInSyncWithLeader needs improvement.

2018-11-12 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12313?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16683653#comment-16683653
 ] 

ASF subversion and git services commented on SOLR-12313:


Commit 397b88aefa39d66d1310dfdea6b6d344ce1c9ce5 in lucene-solr's branch 
refs/heads/jira/http2 from [~caomanhdat]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=397b88a ]

SOLR-12313: No need to wait for in-sync with leader in 
RecoveryAfterSoftCommitTest since we only care about recovery


> TestInjection#waitForInSyncWithLeader needs improvement.
> 
>
> Key: SOLR-12313
> URL: https://issues.apache.org/jira/browse/SOLR-12313
> Project: Solr
>  Issue Type: Test
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mark Miller
>Priority: Major
>
> This really should have some doc explaining why it would be used.
> I also think it sometimes causes BasicDistributedZkTest, and perhaps other 
> tests, to take forever.
> I think checking for uncommitted data is probably a race condition and should 
> be removed.
> Checking index versions should follow the same rules replication does: if 
> the slave's version is higher than the leader's, it is in sync; being equal is 
> not required. If equality is expected by a test, that should be a specific 
> test that fails. This just introduces massive delays.
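The replication-style in-sync rule described above is simple to state in code. A sketch with hypothetical names, not TestInjection's actual API:

```java
public class InSyncCheck {
    // In-sync rule from the comment above: a follower whose index version
    // is at least the leader's counts as in sync; exact equality is not
    // required.
    static boolean inSyncWithLeader(long leaderVersion, long followerVersion) {
        return followerVersion >= leaderVersion;
    }

    public static void main(String[] args) {
        System.out.println(inSyncWithLeader(5, 5)); // true: equal is in sync
        System.out.println(inSyncWithLeader(5, 7)); // true: ahead is in sync
        System.out.println(inSyncWithLeader(5, 4)); // false: behind the leader
    }
}
```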






[jira] [Commented] (SOLR-12977) Autoscaling policy initialisation tries to fetch metrics from dead nodes

2018-11-12 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12977?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16683656#comment-16683656
 ] 

ASF subversion and git services commented on SOLR-12977:


Commit 605c3f6f1a8d14ad3933d2ea225ec5ee66a631d9 in lucene-solr's branch 
refs/heads/jira/http2 from [~noble.paul]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=605c3f6 ]

SOLR-12977: Autoscaling tries to fetch metrics from dead nodes


> Autoscaling policy initialisation tries to fetch metrics from dead nodes
> 
>
> Key: SOLR-12977
> URL: https://issues.apache.org/jira/browse/SOLR-12977
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling
>Reporter: Shalin Shekhar Mangar
>Assignee: Noble Paul
>Priority: Minor
> Fix For: 7.6, master (8.0)
>
>
> Autoscaling policy initialisation tries to fetch metrics for each node during 
> construction. However, it does not skip known dead nodes, causing a 
> timeout to be logged. We should skip such requests entirely.






[jira] [Commented] (SOLR-12965) Add JSON faceting support to SolrJ

2018-11-12 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12965?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16683655#comment-16683655
 ] 

ASF subversion and git services commented on SOLR-12965:


Commit 52998fa50e60ce9c7f49167b1ab107347c30d8d6 in lucene-solr's branch 
refs/heads/jira/http2 from [~gerlowskija]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=52998fa ]

SOLR-12965: Add facet support to JsonQueryRequest


> Add JSON faceting support to SolrJ
> --
>
> Key: SOLR-12965
> URL: https://issues.apache.org/jira/browse/SOLR-12965
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: clients - java, SolrJ
>Affects Versions: 7.5, master (8.0)
>Reporter: Jason Gerlowski
>Assignee: Jason Gerlowski
>Priority: Major
> Attachments: SOLR-12965.patch, SOLR-12965.patch, SOLR-12965.patch, 
> SOLR-12965.patch
>
>
> SOLR-12947 created {{JsonQueryRequest}}, a SolrJ class that makes it easier 
> for users to make JSON-api requests in their Java/SolrJ code.  Currently this 
> class is missing any sort of faceting capabilities (I'd held off on adding 
> this as a part of SOLR-12947 just to keep the issues smaller).
> This JIRA covers adding that missing faceting capability.
> There are a few ways we could handle it, but my first attempt at adding 
> faceting support will probably have users specify a Map<String, Object> for 
> each facet that they wish to add, similar to how complex queries were 
> supported in SOLR-12947.  This approach has some pros and cons:
> The benefit is how general the approach is: our interface stays resilient to 
> any future changes to the syntax of the JSON API, and users can build facets 
> that I'd never thought to explicitly test.  The downside is that this doesn't 
> offer much abstraction for users who are unfamiliar with our JSON syntax: 
> they still have to know the JSON "schema" to build a map representing their 
> facet.  But in practice we can probably mitigate this downside by providing 
> "facet builders" or some other helper classes to provide this abstraction in 
> the common case.
> Hope to have a skeleton patch up soon.
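The map-per-facet shape described above might look like this from user code. This is only a sketch: the `withFacet` call in the comment below is hypothetical, and only the raw-Map approach comes from the issue:

```java
import java.util.*;

public class JsonFacetSketch {
    // Build one facet as a raw map mirroring the JSON Facet API syntax:
    // { "type": "terms", "field": <field>, "limit": <limit> }
    static Map<String, Object> termsFacet(String field, int limit) {
        Map<String, Object> facet = new LinkedHashMap<>();
        facet.put("type", "terms");
        facet.put("field", field);
        facet.put("limit", limit);
        return facet;
    }

    public static void main(String[] args) {
        // In the proposed SolrJ API the map would be attached to the request,
        // e.g. request.withFacet("categories", termsFacet("cat", 5));
        // the method name is hypothetical.
        System.out.println(termsFacet("cat", 5));
    }
}
```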






[jira] [Commented] (LUCENE-8537) ant test command fails under lucene/tools

2018-11-12 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8537?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16683652#comment-16683652
 ] 

ASF subversion and git services commented on LUCENE-8537:
-

Commit efd3f17f9a98aa9544e8af5126ae892fbc14728c in lucene-solr's branch 
refs/heads/jira/http2 from [~thetaphi]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=efd3f17 ]

LUCENE-8537: ant test command fails under lucene/tools


> ant test command fails under lucene/tools
> -
>
> Key: LUCENE-8537
> URL: https://issues.apache.org/jira/browse/LUCENE-8537
> Project: Lucene - Core
>  Issue Type: Bug
>Affects Versions: master (8.0)
>Reporter: Peter Somogyi
>Assignee: Uwe Schindler
>Priority: Minor
> Fix For: master (8.0), 7.7
>
> Attachments: LUCENE-8537.patch
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> The {{ant test}} command executed under {{lucene/tools}} folder fails because 
> it does not have {{junit.classpath}} property. Since the module does not have 
> any test folder we could override the {{-test}} and {{-check-totals}} targets.
> {noformat}
> bash-3.2$ pwd
> /Users/peter.somogyi/repos/lucene-solr/lucene/tools
> bash-3.2$ ant test
> Buildfile: /Users/peter.somogyi/repos/lucene-solr/lucene/tools/build.xml
> ...
> -test:
>[junit4]  says ciao! Master seed: 9A2ACC9B4A3C8553
> BUILD FAILED
> /Users/peter.somogyi/repos/lucene-solr/lucene/common-build.xml:1567: The 
> following error occurred while executing this line:
> /Users/peter.somogyi/repos/lucene-solr/lucene/common-build.xml:1092: 
> Reference junit.classpath not found.
> Total time: 1 second
> {noformat}
> I ran into this issue when I uploaded a patch that removed an import from 
> this module. That triggered a module-level build during precommit, which 
> failed with this error.
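The suggested fix can be sketched as no-op overrides in the module's build file, assuming the `-test` and `-check-totals` target names quoted in the issue:

```xml
<!-- In lucene/tools/build.xml: override the test-related targets as no-ops,
     since this module has no tests and therefore never defines the
     junit.classpath reference that common-build.xml expects. -->
<target name="-test"/>
<target name="-check-totals"/>
```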






[jira] [Commented] (SOLR-12977) Autoscaling policy initialisation tries to fetch metrics from dead nodes

2018-11-12 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12977?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16683657#comment-16683657
 ] 

ASF subversion and git services commented on SOLR-12977:


Commit e6e6ad2c833591028ca9f504571cf26e9585fdda in lucene-solr's branch 
refs/heads/jira/http2 from [~noble.paul]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=e6e6ad2 ]

SOLR-12977: Autoscaling tries to fetch metrics from dead nodes


> Autoscaling policy initialisation tries to fetch metrics from dead nodes
> 
>
> Key: SOLR-12977
> URL: https://issues.apache.org/jira/browse/SOLR-12977
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling
>Reporter: Shalin Shekhar Mangar
>Assignee: Noble Paul
>Priority: Minor
> Fix For: 7.6, master (8.0)
>
>
> Autoscaling policy initialisation tries to fetch metrics for each node during 
> construction. However, it does not skip known dead nodes, causing a 
> timeout to be logged. We should skip such requests entirely.






[jira] [Commented] (LUCENE-8560) TestByteBuffersDirectory.testSeekPastEOF() failures with ByteArrayIndexInput

2018-11-12 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16683654#comment-16683654
 ] 

ASF subversion and git services commented on LUCENE-8560:
-

Commit 4e2481b04b31ee0e5fb368fb69b47bb3da389030 in lucene-solr's branch 
refs/heads/jira/http2 from [~dawid.weiss]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=4e2481b ]

LUCENE-8560: TestByteBuffersDirectory.testSeekPastEOF() failures with 
ByteArrayIndexInput. ByteArrayIndexInput removed entirely, without a 
replacement.


> TestByteBuffersDirectory.testSeekPastEOF() failures with ByteArrayIndexInput
> 
>
> Key: LUCENE-8560
> URL: https://issues.apache.org/jira/browse/LUCENE-8560
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/index
>Reporter: Steve Rowe
>Assignee: Dawid Weiss
>Priority: Minor
> Fix For: master (8.0)
>
> Attachments: LUCENE-8560.patch, LUCENE-8560.patch
>
>
> Two reproducing seeds below.  In both cases:
> * the {{IndexInput}} implementation is {{ByteArrayIndexInput}}
> * seeking to exactly EOF does not throw an exception
> * {{ByteArrayIndexInput.readByte()}} throws AIOOBE instead of the expected 
> EOFException
> From [https://jenkins.thetaphi.de/job/Lucene-Solr-master-MacOSX/4903]:
> {noformat}
> Checking out Revision 856e28d8cf07cc34bc1361784bf00e7aceb3af97 
> (refs/remotes/origin/master)
> [...]
>[junit4]   2> NOTE: reproduce with: ant test  
> -Dtestcase=TestByteBuffersDirectory -Dtests.method=testSeekPastEOF 
> -Dtests.seed=BDFA8CEDB7C93AC1 -Dtests.slow=true -Dtests.locale=sr-RS 
> -Dtests.timezone=Europe/Astrakhan -Dtests.asserts=true 
> -Dtests.file.encoding=US-ASCII
>[junit4] FAILURE 0.00s J0 | TestByteBuffersDirectory.testSeekPastEOF 
> {impl=byte array (heap)} <<<
>[junit4]> Throwable #1: junit.framework.AssertionFailedError: 
> Unexpected exception type, expected EOFException but got 
> java.lang.ArrayIndexOutOfBoundsException: 1770
>[junit4]>  at 
> __randomizedtesting.SeedInfo.seed([BDFA8CEDB7C93AC1:5DBC4714B74C4450]:0)
>[junit4]>  at 
> org.apache.lucene.util.LuceneTestCase.expectThrows(LuceneTestCase.java:2680)
>[junit4]>  at 
> org.apache.lucene.util.LuceneTestCase.expectThrows(LuceneTestCase.java:2669)
>[junit4]>  at 
> org.apache.lucene.store.BaseDirectoryTestCase.testSeekPastEOF(BaseDirectoryTestCase.java:516)
>[junit4]>  at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>[junit4]>  at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>[junit4]>  at 
> java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>[junit4]>  at 
> java.base/java.lang.reflect.Method.invoke(Method.java:564)
>[junit4]>  at java.base/java.lang.Thread.run(Thread.java:844)
>[junit4]> Caused by: java.lang.ArrayIndexOutOfBoundsException: 1770
>[junit4]>  at 
> org.apache.lucene.store.ByteArrayIndexInput.readByte(ByteArrayIndexInput.java:145)
>[junit4]>  at 
> org.apache.lucene.store.BaseDirectoryTestCase.lambda$testSeekPastEOF$12(BaseDirectoryTestCase.java:518)
>[junit4]>  at 
> org.apache.lucene.util.LuceneTestCase.expectThrows(LuceneTestCase.java:2675)
>[junit4]>  ... 37 more
> [...]
>[junit4]   2> NOTE: test params are: codec=Lucene80, 
> sim=Asserting(org.apache.lucene.search.similarities.AssertingSimilarity@2c972cf9),
>  locale=sr-RS, timezone=Europe/Astrakhan
>[junit4]   2> NOTE: Mac OS X 10.11.6 x86_64/Oracle Corporation 9 
> (64-bit)/cpus=3,threads=1,free=157933784,total=235929600
> {noformat}
> Also (older) from 
> [https://builds.apache.org/job/Lucene-Solr-NightlyTests-master/1645]:
> {noformat}
>   [junit4]   2> NOTE: reproduce with: ant test  
> -Dtestcase=TestByteBuffersDirectory -Dtests.method=testSeekPastEOF 
> -Dtests.seed=90B07B6267E63464 -Dtests.multiplier=2 -Dtests.nightly=true 
> -Dtests.slow=true 
> -Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/test-data/enwiki.random.lines.txt
>  -Dtests.locale=es-PR -Dtests.timezone=Australia/Currie -Dtests.asserts=true 
> -Dtests.file.encoding=UTF-8
>   [junit4] FAILURE 0.01s J1 | TestByteBuffersDirectory.testSeekPastEOF 
> {impl=byte array (heap)} <<<
>   [junit4]> Throwable #1: junit.framework.AssertionFailedError: 
> Unexpected exception type, expected EOFException but got 
> java.lang.ArrayIndexOutOfBoundsException: 1881
>   [junit4]>   at 
> __randomizedtesting.SeedInfo.seed([90B07B6267E63464:70F6B09B67634AF5]:0)
>   [junit4]>   at 
> 
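Both seeds fail the same way: after a seek past end-of-file, the next read surfaces as ArrayIndexOutOfBoundsException instead of the EOFException that BaseDirectoryTestCase.testSeekPastEOF expects. A minimal standalone sketch of the expected bounds check (hypothetical class and names, not Lucene's actual ByteArrayIndexInput internals):

```java
import java.io.EOFException;

// Minimal standalone sketch (hypothetical class, not Lucene's actual
// ByteArrayIndexInput): seeking past EOF is allowed, but the subsequent
// read must surface EOFException rather than ArrayIndexOutOfBoundsException.
class ByteArrayInput {
    private final byte[] bytes;
    private int pos;

    ByteArrayInput(byte[] bytes) { this.bytes = bytes; }

    void seek(long p) { pos = (int) p; } // no check here: seeking past EOF is legal

    byte readByte() throws EOFException {
        if (pos >= bytes.length) { // bounds check before touching the array
            throw new EOFException("read past EOF: pos=" + pos + " length=" + bytes.length);
        }
        return bytes[pos++];
    }
}

public class SeekPastEofDemo {
    public static void main(String[] args) {
        ByteArrayInput in = new ByteArrayInput(new byte[] {1, 2, 3});
        in.seek(10); // well past the 3-byte buffer
        try {
            in.readByte();
        } catch (EOFException e) {
            System.out.println("caught: " + e.getMessage());
        }
    }
}
```

The point is only where the check lives: the seek itself stays unchecked (seeking past EOF is legal per the Directory contract), and the read performs the guard.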

[jira] [Commented] (SOLR-12978) Autoscaling Suggester tries to test metrics for dead nodes

2018-11-12 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12978?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16683658#comment-16683658
 ] 

ASF subversion and git services commented on SOLR-12978:


Commit cd1e829732157399f7e38d810a38df3f4c2e0792 in lucene-solr's branch 
refs/heads/jira/http2 from [~noble.paul]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=cd1e829 ]

SOLR-12978: In autoscaling NPE thrown for nodes where value is absent


> Autoscaling Suggester tries to test metrics for dead nodes
> --
>
> Key: SOLR-12978
> URL: https://issues.apache.org/jira/browse/SOLR-12978
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling, SolrCloud
>Reporter: Shalin Shekhar Mangar
>Assignee: Noble Paul
>Priority: Major
> Fix For: 7.6, master (8.0)
>
>
> The Suggester tests clauses in the applyRules phase for each row, regardless of 
> whether the row is live. When the node is not live, no metrics have been 
> fetched, and testing the clause triggers an NPE.
> {code}
> 20586 WARN  (AutoscalingActionExecutor-37-thread-1) [] 
> o.a.s.c.a.ScheduledTriggers Exception executing actions
> org.apache.solr.cloud.autoscaling.TriggerActionException: Error processing 
> action for trigger event: {
>   "id":"21d1e96fd8737T4ighk35ce6gv7f6h5zbndib4n",
>   "source":"node_lost_trigger",
>   "eventTime":594967172843319,
>   "eventType":"NODELOST",
>   "properties":{
> "eventTimes":[594967172843319],
> "preferredOperation":"movereplica",
> "_enqueue_time_":594968181417909,
> "nodeNames":["127.0.0.1:63255_solr"]}}
>   at 
> org.apache.solr.cloud.autoscaling.ScheduledTriggers.lambda$null$419(ScheduledTriggers.java:311)
>  [java/:?]
>   at 
> org.apache.solr.cloud.autoscaling.ScheduledTriggers$$Lambda$498/1669229711.run(Unknown
>  Source) [java/:?]
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> [?:1.8.0_51]
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:1.8.0_51]
>   at 
> org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$328(ExecutorUtil.java:209)
>  [java/:?]
>   at 
> org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor$$Lambda$10/1568754952.run(Unknown
>  Source) [java/:?]
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  [?:1.8.0_51]
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  [?:1.8.0_51]
>   at java.lang.Thread.run(Thread.java:745) [?:1.8.0_51]
> Caused by: org.apache.solr.common.SolrException: Unexpected exception while 
> processing event: {
>   "id":"21d1e96fd8737T4ighk35ce6gv7f6h5zbndib4n",
>   "source":"node_lost_trigger",
>   "eventTime":594967172843319,
>   "eventType":"NODELOST",
>   "properties":{
> "eventTimes":[594967172843319],
> "preferredOperation":"movereplica",
> "_enqueue_time_":594968181417909,
> "nodeNames":["127.0.0.1:63255_solr"]}}
>   at 
> org.apache.solr.cloud.autoscaling.ComputePlanAction.process(ComputePlanAction.java:160)
>  ~[java/:?]
>   at 
> org.apache.solr.cloud.autoscaling.ScheduledTriggers.lambda$null$419(ScheduledTriggers.java:308)
>  ~[java/:?]
>   ... 8 more
> Caused by: java.lang.NullPointerException
>   at 
> org.apache.solr.client.solrj.cloud.autoscaling.RangeVal.match(RangeVal.java:34)
>  ~[java/:?]
>   at 
> org.apache.solr.client.solrj.cloud.autoscaling.Operand$2.match(Operand.java:43)
>  ~[java/:?]
>   at 
> org.apache.solr.client.solrj.cloud.autoscaling.Variable.match(Variable.java:46)
>  ~[java/:?]
>   at 
> org.apache.solr.client.solrj.cloud.autoscaling.Variable$Type.match(Variable.java:358)
>  ~[java/:?]
>   at 
> org.apache.solr.client.solrj.cloud.autoscaling.Condition.isPass(Condition.java:71)
>  ~[java/:?]
>   at 
> org.apache.solr.client.solrj.cloud.autoscaling.Condition.isPass(Condition.java:76)
>  ~[java/:?]
>   at 
> org.apache.solr.client.solrj.cloud.autoscaling.Clause.test(Clause.java:531) 
> ~[java/:?]
>   at 
> org.apache.solr.client.solrj.cloud.autoscaling.Policy$Session.applyRules(Policy.java:635)
>  ~[java/:?]
>   at 
> org.apache.solr.client.solrj.cloud.autoscaling.Suggester.getSuggestion(Suggester.java:185)
>  ~[java/:?]
>   at 
> org.apache.solr.cloud.autoscaling.ComputePlanAction.process(ComputePlanAction.java:114)
>  ~[java/:?]
>   at 
> org.apache.solr.cloud.autoscaling.ScheduledTriggers.lambda$null$419(ScheduledTriggers.java:308)
>  ~[java/:?]
>   ... 8 more
> {code}
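The NPE above comes from matching a range clause against a metric value that was never fetched because the node is dead. A hedged standalone sketch of the guard the fix implies (a hypothetical Range class, not Solr's actual RangeVal API): treat an absent value as a non-match instead of dereferencing it.

```java
// Hedged sketch (hypothetical, not Solr's RangeVal): when a node is dead its
// metric values are absent, so a range check must treat null as "no match"
// instead of dereferencing it and throwing NullPointerException.
public class RangeMatchDemo {
    static final class Range {
        final double min, max;

        Range(double min, double max) { this.min = min; this.max = max; }

        boolean match(Number value) {
            if (value == null) return false; // absent metric (e.g. dead node): never matches
            double v = value.doubleValue();
            return v >= min && v <= max;
        }
    }

    public static void main(String[] args) {
        Range freeDisk = new Range(0, 10);
        System.out.println(freeDisk.match(5));    // true
        System.out.println(freeDisk.match(null)); // false: dead node, metric absent
    }
}
```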



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[jira] [Commented] (SOLR-12969) Inconsistency with leader when PeerSync return ALREADY_IN_SYNC

2018-11-12 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16683659#comment-16683659
 ] 

ASF subversion and git services commented on SOLR-12969:


Commit f357c06276139defa26d0569fe5903cfd3d66cdb in lucene-solr's branch 
refs/heads/jira/http2 from [~caomanhdat]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=f357c06 ]

SOLR-12969: Inconsistency with leader when PeerSync return ALREADY_IN_SYNC


> Inconsistency with leader when PeerSync return ALREADY_IN_SYNC
> --
>
> Key: SOLR-12969
> URL: https://issues.apache.org/jira/browse/SOLR-12969
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: replication (java)
>Affects Versions: 6.6.5, 7.5
>Reporter: Jeremy Smith
>Assignee: Cao Manh Dat
>Priority: Major
> Attachments: SOLR-12969.patch, SOLR-12969.patch, SOLR-12969.patch
>
>
> Under certain circumstances, replication fails between a leader and follower. 
>  The follower will not receive updates from the leader, even though the 
> leader has a newer version.  If the leader is restarted, it will get the 
> older version from the follower.
>  
> This was discussed on the [mailing 
> list|https://mail-archives.apache.org/mod_mbox/lucene-solr-user/201810.mbox/%3CBYAPR04MB4406710795EA07E93BF80913ADCD0%40BYAPR04MB4406.namprd04.prod.outlook.com%3E]
>  and [~risdenk] [wrote a 
> script|https://github.com/risdenk/test-solr-start-stop-replica-consistency] 
> that demonstrates this error.  He also verified that the error occurs when 
> the script is run outside of docker.
>  
> Here is the failure scenario:
>  * A collection with 1 shard and 2 replicas
>  * Stop the non-leader replica (B)
>  * Index more than 100 documents to the collection
>  * Start replica B; it fails PeerSync and falls back to segment replication
>  * Index the 101st document to the collection
>  ** Leader's tlog: [1, 2, 3, ..., 100, 101]
>  ** Replica's tlog: [101]
>  * Stop replica B
>  * Index the 102nd document to the collection
>  * Start replica B, which begins PeerSync
>  ** Leader's tlog: [1, 2, 3, ..., 100, 101, 102]
>  ** Replica's tlog: [101]
>  ** Leader's high (80th version): 80
>  ** Replica's low: 101
>  ** Comparison: replica's low > leader's high => ALREADY_IN_SYNC (incorrectly)
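The faulty comparison in that last step reduces to a few lines. This is a hedged simplification (hypothetical method, not PeerSync's real signature): when the replica's tlog holds only the newest updates, its lowest version exceeds the leader's "high" watermark, so the check wrongly concludes the replica is already in sync.

```java
import java.util.ArrayList;
import java.util.List;

public class AlreadyInSyncSketch {
    // Hypothetical reduction of the buggy check: compare the replica's lowest
    // version against a "high" watermark drawn from the leader's versions.
    static boolean alreadyInSync(List<Long> leaderVersions, List<Long> replicaVersions) {
        long leaderHigh = leaderVersions.get((int) (leaderVersions.size() * 0.8)); // ~80th percentile
        long replicaLow = replicaVersions.get(0);
        // False positive: a replica holding only the newest updates passes this
        // test even though it is missing versions 1..100.
        return replicaLow > leaderHigh;
    }

    public static void main(String[] args) {
        List<Long> leader = new ArrayList<>();
        for (long v = 1; v <= 102; v++) leader.add(v); // leader's tlog: [1..102]
        List<Long> replica = List.of(101L);            // replica's tlog: [101]
        System.out.println(alreadyInSync(leader, replica)); // true: wrongly "in sync"
    }
}
```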






[jira] [Commented] (LUCENE-8463) Early-terminate queries sorted by SortField.DOC

2018-11-12 Thread Christophe Bismuth (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8463?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16683584#comment-16683584
 ] 

Christophe Bismuth commented on LUCENE-8463:


Hi, I've opened PR [#496|https://github.com/apache/lucene-solr/pull/496] to 
implement this improvement.

> Early-terminate queries sorted by SortField.DOC
> ---
>
> Key: LUCENE-8463
> URL: https://issues.apache.org/jira/browse/LUCENE-8463
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Priority: Minor
>  Labels: newdev
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Currently TopFieldCollector only early-terminates when the search sort is a 
> prefix of the index sort, but it could also early-terminate when sorting by 
> doc id.
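The improvement is mechanical: a segment delivers documents in increasing docID order, so a top-N sort on docID is complete after N documents have been collected. A standalone sketch of the idea (a hypothetical class, not the actual TopFieldCollector change in PR #496, where Lucene would instead throw CollectionTerminatedException):

```java
import java.util.ArrayList;
import java.util.List;

// Hedged sketch: docIDs arrive in sorted order, so the first numHits collected
// documents are already the top-N; collect() returns false to signal the
// caller to stop early.
public class DocIdEarlyTermination {
    private final int numHits;
    private final List<Integer> hits = new ArrayList<>();

    public DocIdEarlyTermination(int numHits) { this.numHits = numHits; }

    /** Returns false once enough hits are gathered, signalling early termination. */
    public boolean collect(int docId) {
        if (hits.size() >= numHits) return false; // already terminated
        hits.add(docId); // in-order arrival makes these the top-N by docID
        return hits.size() < numHits;
    }

    public List<Integer> topDocs() { return hits; }

    public static void main(String[] args) {
        DocIdEarlyTermination c = new DocIdEarlyTermination(3);
        int doc = 0;
        while (c.collect(doc)) doc += 2; // simulate in-order docID traversal
        System.out.println(c.topDocs()); // [0, 2, 4]
    }
}
```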






[GitHub] lucene-solr pull request #496: LUCENE-8463: Early-terminate queries sorted b...

2018-11-12 Thread cbismuth
GitHub user cbismuth opened a pull request:

https://github.com/apache/lucene-solr/pull/496

LUCENE-8463: Early-terminate queries sorted by SortField.DOC

> Currently TopFieldCollector only early-terminates when the search sort is 
a prefix of the index sort, but it could also early-terminate when sorting by 
doc id.

See [LUCENE-8463](https://issues.apache.org/jira/browse/LUCENE-8463).

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/cbismuth/lucene-solr LUCENE-8463

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/lucene-solr/pull/496.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #496


commit 91ec76c8f6f337c4616021a68b858f912447c871
Author: Christophe Bismuth 
Date:   2018-11-12T10:00:20Z

LUCENE-8463: Early-terminate queries sorted by SortField.DOC




---




[JENKINS] Lucene-Solr-Tests-master - Build # 2947 - Still Unstable

2018-11-12 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-master/2947/

1 tests failed.
FAILED:  
org.apache.solr.cloud.autoscaling.sim.TestSimNodeLostTrigger.testTrigger

Error Message:
[127.0.0.1:10001_solr] doesn't contain 127.0.0.1:10004_solr

Stack Trace:
java.lang.AssertionError: [127.0.0.1:10001_solr] doesn't contain 
127.0.0.1:10004_solr
at 
__randomizedtesting.SeedInfo.seed([4E1B05A356F84DCA:2DD03321CF373EE7]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.cloud.autoscaling.sim.TestSimNodeLostTrigger.testTrigger(TestSimNodeLostTrigger.java:121)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)




Build Log:
[...truncated 13195 lines...]
   [junit4] Suite: org.apache.solr.cloud.autoscaling.sim.TestSimNodeLostTrigger
   [junit4]   2> Creating dataDir: 

[jira] [Commented] (SOLR-10217) Add a query for the background set to the significantTerms streaming expression

2018-11-12 Thread Gethin James (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-10217?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16683512#comment-16683512
 ] 

Gethin James commented on SOLR-10217:
-

[~joel.bernstein] and I worked on this while we were at Alfresco; we have both 
left now and, to the best of my knowledge, no progress has been made. If I 
remember correctly, the patch needed a different approach. It is unlikely to 
make it into a future release unless someone picks it up :(. Any update, Joel?

> Add a query for the background set to the significantTerms streaming 
> expression
> ---
>
> Key: SOLR-10217
> URL: https://issues.apache.org/jira/browse/SOLR-10217
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Gethin James
>Assignee: Joel Bernstein
>Priority: Major
> Attachments: SOLR-10217.patch, SOLR-10217.patch, SOLR-20217.patch
>
>
> Following the work on SOLR-10156 we now have a significantTerms expression.
> Currently, the background set is always the full index.  It would be great if 
> we could use a query to define the background set.






[jira] [Commented] (SOLR-12969) Inconsistency with leader when PeerSync return ALREADY_IN_SYNC

2018-11-12 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16683505#comment-16683505
 ] 

ASF subversion and git services commented on SOLR-12969:


Commit f357c06276139defa26d0569fe5903cfd3d66cdb in lucene-solr's branch 
refs/heads/master from [~caomanhdat]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=f357c06 ]

SOLR-12969: Inconsistency with leader when PeerSync return ALREADY_IN_SYNC


> Inconsistency with leader when PeerSync return ALREADY_IN_SYNC
> --
>
> Key: SOLR-12969
> URL: https://issues.apache.org/jira/browse/SOLR-12969
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: replication (java)
>Affects Versions: 6.6.5, 7.5
>Reporter: Jeremy Smith
>Assignee: Cao Manh Dat
>Priority: Major
> Attachments: SOLR-12969.patch, SOLR-12969.patch, SOLR-12969.patch
>
>






Re: Lucene/Solr 7.6

2018-11-12 Thread Đạt Cao Mạnh
Hi guys,

Is it ok to backport SOLR-12969 to branch_7_6?

On Sat, Nov 10, 2018 at 3:50 PM Steve Rowe wrote:

> Hi Cassandra,
>
> > On Nov 9, 2018, at 3:47 PM, Cassandra Targett 
> wrote:
> >
> > I don't know if it's on the Release ToDo list, but we need a Jenkins job
> for the Ref Guide to be built from branch_7x  also.
>
> I assume you mean a branch_7_6 ref guide job, since there already is one
> for branch_7x; I created it along with the others.
>
> FYI the ref guide job is listed among those to create on
> https://wiki.apache.org/lucene-java/JenkinsReleaseBuilds , which is
> linked from
> https://wiki.apache.org/lucene-java/ReleaseTodo#Jenkins_Release_Builds .
>
> Steve
>
>
>
>

