[JENKINS] Lucene-Solr-repro - Build # 783 - Still unstable

2018-06-08 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-repro/783/

[...truncated 28 lines...]
[repro] Jenkins log URL: 
https://builds.apache.org/job/Lucene-Solr-BadApples-Tests-7.x/78/consoleText

[repro] Revision: c01287d7b34293d9ae7b0abcd1bf66334f9d5138

[repro] Repro line:  ant test  -Dtestcase=IndexSizeTriggerTest 
-Dtests.method=testSplitIntegration -Dtests.seed=151641B485075F8D 
-Dtests.multiplier=2 -Dtests.slow=true -Dtests.badapples=true 
-Dtests.locale=vi-VN -Dtests.timezone=Mexico/General -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8

[repro] Encountered IncompleteRead exception, pausing and then retrying...

[repro] Repro line:  ant test  -Dtestcase=TestStressInPlaceUpdates 
-Dtests.method=stressTest -Dtests.seed=151641B485075F8D -Dtests.multiplier=2 
-Dtests.slow=true -Dtests.badapples=true -Dtests.locale=ja 
-Dtests.timezone=Antarctica/DumontDUrville -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8

[repro] Repro line:  ant test  -Dtestcase=SolrRrdBackendFactoryTest 
-Dtests.method=testBasic -Dtests.seed=151641B485075F8D -Dtests.multiplier=2 
-Dtests.slow=true -Dtests.badapples=true -Dtests.locale=ar-YE 
-Dtests.timezone=Africa/Ceuta -Dtests.asserts=true -Dtests.file.encoding=UTF-8

[repro] Repro line:  ant test  -Dtestcase=SearchRateTriggerIntegrationTest 
-Dtests.method=testDeleteNode -Dtests.seed=151641B485075F8D 
-Dtests.multiplier=2 -Dtests.slow=true -Dtests.badapples=true 
-Dtests.locale=id-ID -Dtests.timezone=America/Port-au-Prince 
-Dtests.asserts=true -Dtests.file.encoding=UTF-8

[repro] Repro line:  ant test  -Dtestcase=TestRecovery 
-Dtests.method=testExistOldBufferLog -Dtests.seed=151641B485075F8D 
-Dtests.multiplier=2 -Dtests.slow=true -Dtests.badapples=true -Dtests.locale=sl 
-Dtests.timezone=Antarctica/South_Pole -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8

[repro] Repro line:  ant test  -Dtestcase=TestPullReplica 
-Dtests.method=testCreateDelete -Dtests.seed=151641B485075F8D 
-Dtests.multiplier=2 -Dtests.slow=true -Dtests.badapples=true 
-Dtests.locale=el-CY -Dtests.timezone=Asia/Ust-Nera -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8


[repro] Repro line:  ant test  -Dtestcase=TestLargeCluster 
-Dtests.method=testAddNode -Dtests.seed=151641B485075F8D -Dtests.multiplier=2 
-Dtests.slow=true -Dtests.badapples=true -Dtests.locale=ja-JP 
-Dtests.timezone=Africa/Kigali -Dtests.asserts=true -Dtests.file.encoding=UTF-8

[repro] git rev-parse --abbrev-ref HEAD
[repro] git rev-parse HEAD
[repro] Initial local git branch/revision: 
c524dc2606719876004f7a9fade6fa0cc4741db9
[repro] git fetch

[...truncated 2 lines...]
[repro] git checkout c01287d7b34293d9ae7b0abcd1bf66334f9d5138

[...truncated 2 lines...]
[repro] git merge --ff-only

[...truncated 1 lines...]
[repro] ant clean

[...truncated 6 lines...]
[repro] Test suites by module:
[repro]    solr/core
[repro]   SolrRrdBackendFactoryTest
[repro]   TestPullReplica
[repro]   SearchRateTriggerIntegrationTest
[repro]   TestLargeCluster
[repro]   TestStressInPlaceUpdates
[repro]   IndexSizeTriggerTest
[repro]   TestRecovery
[repro] ant compile-test

[...truncated 3318 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=35 
-Dtests.class="*.SolrRrdBackendFactoryTest|*.TestPullReplica|*.SearchRateTriggerIntegrationTest|*.TestLargeCluster|*.TestStressInPlaceUpdates|*.IndexSizeTriggerTest|*.TestRecovery"
 -Dtests.showOutput=onerror  -Dtests.seed=151641B485075F8D -Dtests.multiplier=2 
-Dtests.slow=true -Dtests.badapples=true -Dtests.locale=ar-YE 
-Dtests.timezone=Africa/Ceuta -Dtests.asserts=true -Dtests.file.encoding=UTF-8

[...truncated 21366 lines...]
[repro] Setting last failure code to 256

[repro] Failures:
[repro]   0/5 

[GitHub] lucene-solr pull request #398: Lucene 8343 data type migration

2018-06-08 Thread nvnmandadhi
Github user nvnmandadhi commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/398#discussion_r194210584
  
--- Diff: 
lucene/suggest/src/java/org/apache/lucene/search/suggest/analyzing/BlendedInfixSuggester.java
 ---
@@ -200,7 +201,13 @@ protected FieldType getTextFieldType() {
   textDV.advance(fd.doc);
 
   final String text = textDV.binaryValue().utf8ToString();
-  long weight = (Long) fd.fields[0];
+
+  NumericDocValues weightDV = 
MultiDocValues.getNumericValues(searcher.getIndexReader(), WEIGHT_FIELD_NAME);
--- End diff --

Could you please make the local variables final to prevent reassignment?
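
As a plain-Java illustration of the reviewer's suggestion (not the actual BlendedInfixSuggester code; the names below are hypothetical), declaring locals `final` turns any accidental reassignment into a compile-time error:

```java
// Hypothetical sketch of the review comment: mark locals final so the compiler
// rejects later reassignment. Loosely mirrors the weight-lookup in the diff,
// but is NOT the real Lucene code.
public class FinalLocalsExample {

    static long lookupWeight(long[] weights, int doc) {
        // 'final' documents that these bindings never change after initialization
        final int target = doc;
        final long weight = weights[target];
        // target = target + 1;  // would not compile: cannot assign to a final variable
        return weight;
    }

    public static void main(String[] args) {
        System.out.println(lookupWeight(new long[] {10L, 20L, 30L}, 1)); // prints 20
    }
}
```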


---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-SmokeRelease-7.4 - Build # 1 - Failure

2018-06-08 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-7.4/1/

No tests ran.

Build Log:
[...truncated 24204 lines...]
[asciidoctor:convert] asciidoctor: ERROR: about-this-guide.adoc: line 1: 
invalid part, must have at least one section (e.g., chapter, appendix, etc.)
[asciidoctor:convert] asciidoctor: ERROR: solr-glossary.adoc: line 1: invalid 
part, must have at least one section (e.g., chapter, appendix, etc.)
 [java] Processed 2207 links (1760 relative) to 2985 anchors in 230 files
 [echo] Validated Links & Anchors via: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.4/solr/build/solr-ref-guide/bare-bones-html/

-dist-changes:
 [copy] Copying 4 files to 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.4/solr/package/changes

-dist-keys:
  [get] Getting: http://home.apache.org/keys/group/lucene.asc
  [get] To: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.4/solr/package/KEYS

package:

-unpack-solr-tgz:

-ensure-solr-tgz-exists:
[mkdir] Created dir: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.4/solr/build/solr.tgz.unpacked
[untar] Expanding: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.4/solr/package/solr-7.4.0.tgz
 into 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.4/solr/build/solr.tgz.unpacked

generate-maven-artifacts:

resolve:

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.4/lucene/top-level-ivy-settings.xml

[...repeated resolve / ivy-availability-check / ivy-configure blocks omitted...]

[GitHub] lucene-solr pull request #399: fix explicit type declaration

2018-06-08 Thread nvnmandadhi
GitHub user nvnmandadhi opened a pull request:

https://github.com/apache/lucene-solr/pull/399

fix explicit type declaration

Cleaned up explicit type declaration

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/nvnmandadhi/lucene-solr master

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/lucene-solr/pull/399.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #399


commit 0ca082b54154192ce7feaca25068881047b91228
Author: Naveen Mandadhi 
Date:   2018-06-09T01:47:32Z

fix explicit type declaration




---




[JENKINS] Lucene-Solr-SmokeRelease-7.x - Build # 238 - Still Failing

2018-06-08 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-7.x/238/

No tests ran.

Build Log:
[...truncated 24200 lines...]
[asciidoctor:convert] asciidoctor: ERROR: about-this-guide.adoc: line 1: 
invalid part, must have at least one section (e.g., chapter, appendix, etc.)
[asciidoctor:convert] asciidoctor: ERROR: solr-glossary.adoc: line 1: invalid 
part, must have at least one section (e.g., chapter, appendix, etc.)
 [java] Processed 2207 links (1760 relative) to 2985 anchors in 230 files
 [echo] Validated Links & Anchors via: 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/solr/build/solr-ref-guide/bare-bones-html/

-dist-changes:
 [copy] Copying 4 files to 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/solr/package/changes

-dist-keys:
  [get] Getting: http://home.apache.org/keys/group/lucene.asc
  [get] To: 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/solr/package/KEYS

package:

-unpack-solr-tgz:

-ensure-solr-tgz-exists:
[mkdir] Created dir: 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/solr/build/solr.tgz.unpacked
[untar] Expanding: 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/solr/package/solr-7.5.0.tgz
 into 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/solr/build/solr.tgz.unpacked

generate-maven-artifacts:

resolve:

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/top-level-ivy-settings.xml

[...repeated resolve / ivy-availability-check / ivy-configure blocks omitted...]

[JENKINS] Lucene-Solr-7.x-Linux (64bit/jdk-10.0.1) - Build # 2083 - Still Unstable!

2018-06-08 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/2083/
Java: 64bit/jdk-10.0.1 -XX:+UseCompressedOops -XX:+UseG1GC

1 tests failed.
FAILED:  org.apache.solr.cloud.cdcr.CdcrBidirectionalTest.testBiDir

Error Message:
Captured an uncaught exception in thread: Thread[id=12864, 
name=cdcr-replicator-5667-thread-1, state=RUNNABLE, 
group=TGRP-CdcrBidirectionalTest]

Stack Trace:
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught 
exception in thread: Thread[id=12864, name=cdcr-replicator-5667-thread-1, 
state=RUNNABLE, group=TGRP-CdcrBidirectionalTest]
Caused by: java.lang.AssertionError
at __randomizedtesting.SeedInfo.seed([F4B55509DC446C5]:0)
at 
org.apache.solr.update.CdcrUpdateLog$CdcrLogReader.forwardSeek(CdcrUpdateLog.java:611)
at org.apache.solr.handler.CdcrReplicator.run(CdcrReplicator.java:125)
at 
org.apache.solr.handler.CdcrReplicatorScheduler.lambda$start$0(CdcrReplicatorScheduler.java:81)
at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:209)
at 
java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1135)
at 
java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
at java.base/java.lang.Thread.run(Thread.java:844)




Build Log:
[...truncated 13680 lines...]
   [junit4] Suite: org.apache.solr.cloud.cdcr.CdcrBidirectionalTest
   [junit4]   2> 896392 INFO  
(SUITE-CdcrBidirectionalTest-seed#[F4B55509DC446C5]-worker) [] 
o.a.s.SolrTestCaseJ4 SecureRandom sanity checks: 
test.solr.allowed.securerandom=null & java.security.egd=file:/dev/./urandom
   [junit4]   2> Creating dataDir: 
/home/jenkins/workspace/Lucene-Solr-7.x-Linux/solr/build/solr-core/test/J2/temp/solr.cloud.cdcr.CdcrBidirectionalTest_F4B55509DC446C5-001/init-core-data-001
   [junit4]   2> 896393 WARN  
(SUITE-CdcrBidirectionalTest-seed#[F4B55509DC446C5]-worker) [] 
o.a.s.SolrTestCaseJ4 startTrackingSearchers: numOpens=4 numCloses=4
   [junit4]   2> 896393 INFO  
(SUITE-CdcrBidirectionalTest-seed#[F4B55509DC446C5]-worker) [] 
o.a.s.SolrTestCaseJ4 Using TrieFields (NUMERIC_POINTS_SYSPROP=false) 
w/NUMERIC_DOCVALUES_SYSPROP=true
   [junit4]   2> 896393 INFO  
(SUITE-CdcrBidirectionalTest-seed#[F4B55509DC446C5]-worker) [] 
o.a.s.SolrTestCaseJ4 Randomized ssl (true) and clientAuth (true) via: 
@org.apache.solr.util.RandomizeSSL(reason="", ssl=0.0/0.0, value=0.0/0.0, 
clientAuth=0.0/0.0)
   [junit4]   2> 896395 INFO  
(TEST-CdcrBidirectionalTest.testBiDir-seed#[F4B55509DC446C5]) [] 
o.a.s.SolrTestCaseJ4 ###Starting testBiDir
   [junit4]   2> 896395 INFO  
(TEST-CdcrBidirectionalTest.testBiDir-seed#[F4B55509DC446C5]) [] 
o.a.s.c.MiniSolrCloudCluster Starting cluster of 1 servers in 
/home/jenkins/workspace/Lucene-Solr-7.x-Linux/solr/build/solr-core/test/J2/temp/solr.cloud.cdcr.CdcrBidirectionalTest_F4B55509DC446C5-001/cdcr-cluster2-001
   [junit4]   2> 896395 INFO  
(TEST-CdcrBidirectionalTest.testBiDir-seed#[F4B55509DC446C5]) [] 
o.a.s.c.ZkTestServer STARTING ZK TEST SERVER
   [junit4]   2> 896403 INFO  (Thread-2850) [] o.a.s.c.ZkTestServer client 
port:0.0.0.0/0.0.0.0:0
   [junit4]   2> 896403 INFO  (Thread-2850) [] o.a.s.c.ZkTestServer 
Starting server
   [junit4]   2> 896424 ERROR (Thread-2850) [] o.a.z.s.ZooKeeperServer 
ZKShutdownHandler is not registered, so ZooKeeper server won't take any action 
on ERROR or SHUTDOWN server state changes
   [junit4]   2> 896507 INFO  
(TEST-CdcrBidirectionalTest.testBiDir-seed#[F4B55509DC446C5]) [] 
o.a.s.c.ZkTestServer start zk server on port:43705
   [junit4]   2> 896508 INFO  (zkConnectionManagerCallback-2904-thread-1) [
] o.a.s.c.c.ConnectionManager zkClient has connected
   [junit4]   2> 896512 INFO  (jetty-launcher-2901-thread-1) [] 
o.e.j.s.Server jetty-9.4.10.v20180503; built: 2018-05-03T15:56:21.710Z; git: 
daa59876e6f384329b122929e70a80934569428c; jvm 10.0.1+10
   [junit4]   2> 896575 INFO  (jetty-launcher-2901-thread-1) [] 
o.e.j.s.session DefaultSessionIdManager workerName=node0
   [junit4]   2> 896575 INFO  (jetty-launcher-2901-thread-1) [] 
o.e.j.s.session No SessionScavenger set, using defaults
   [junit4]   2> 896575 INFO  (jetty-launcher-2901-thread-1) [] 
o.e.j.s.session node0 Scavenging every 60ms
   [junit4]   2> 896576 INFO  (jetty-launcher-2901-thread-1) [] 
o.e.j.s.h.ContextHandler Started 
o.e.j.s.ServletContextHandler@6276f4af{/solr,null,AVAILABLE}
   [junit4]   2> 896577 INFO  (jetty-launcher-2901-thread-1) [] 
o.e.j.s.AbstractConnector Started ServerConnector@345e42c6{SSL,[ssl, 
http/1.1]}{127.0.0.1:35155}
   [junit4]   2> 896577 INFO  (jetty-launcher-2901-thread-1) [] 
o.e.j.s.Server Started @896601ms
   [junit4]   2> 896577 INFO  (jetty-launcher-2901-thread-1) [] 
o.a.s.c.s.e.JettySolrRunner Jetty properties: {hostContext=/solr, 
hostPort=35155}
   [junit4]   2> 896577 

[jira] [Commented] (LUCENE-8344) TokenStreamToAutomaton doesn't ignore trailing posInc when preservePositionIncrements=false

2018-06-08 Thread David Smiley (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8344?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16506649#comment-16506649
 ] 

David Smiley commented on LUCENE-8344:
--

The patch may be hard to review as a diff.  There are 3 tests now in 
TestPrefixCompletionQuery that are the same in data and queries but differ in 
expected results based on different CompletionAnalyzer settings.  I think it 
may be hard to maintain them as such... they ought to be one test so we don't 
have so much duplication, and it may become easier to understand how the change 
in settings adjusts the expectations.  But hopefully you all think it's fine as is.

After some reflection, I figured that if preserveSep=false, then 
preservePositionIncrement is irrelevant, and so that's why we have one fewer 
test method than 2x2 would suggest.  This ought to throw an exception to the 
user.  Perhaps 3 factory methods would be better than the one constructor with 
two booleans?  There's likely an analogous situation with AnalyzingSuggester's 
long constructor.  Anyway this proposal doesn't belong in this issue.

Suggested CHANGES.txt notes:
* LUCENE-8344: TokenStreamToAutomaton (used by some suggesters) was not 
ignoring a trailing position increment when the preservePositionIncrement 
setting is false.  (David Smiley, Jim Ferenczi)

Upgrading _(a new section)_
*  LUCENE-8344: If you are using the AnalyzingSuggester or FuzzySuggester 
subclass, and if you explicitly use the preservePositionIncrements=false 
setting (not the default), then you ought to rebuild your suggester index.  If 
you don't, queries or indexed data with trailing position gaps (e.g. stop 
words) may not work correctly.

> TokenStreamToAutomaton doesn't ignore trailing posInc when 
> preservePositionIncrements=false
> ---
>
> Key: LUCENE-8344
> URL: https://issues.apache.org/jira/browse/LUCENE-8344
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/suggest
>Reporter: David Smiley
>Priority: Major
> Attachments: LUCENE-8344.patch, LUCENE-8344.patch, LUCENE-8344.patch
>
>
> TokenStreamToAutomaton in Lucene core is used by the AnalyzingSuggester 
> (incl. FuzzySuggester subclass ) and NRT Document Suggester and soon the 
> SolrTextTagger.  It has a setting {{preservePositionIncrements}} defaulting 
> to true.  If it's set to false (e.g. to ignore stopwords) and if there is a 
> _trailing_ position increment greater than 1, TS2A will _still_ add position 
> increments (holes) into the automata even though it was configured not to.
> I'm filing this issue separate from LUCENE-8332 where I first found it.  The 
> fix is very simple but I'm concerned about back-compat ramifications so I'm 
> filing it separately.  I'll attach a patch to show the problem.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)




[JENKINS] Lucene-Solr-NightlyTests-7.x - Build # 238 - Still Unstable

2018-06-08 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-7.x/238/

1 tests failed.
FAILED:  
org.apache.lucene.analysis.core.TestRandomChains.testRandomChainsWithLargeStrings

Error Message:
first posInc must be > 0

Stack Trace:
java.lang.IllegalStateException: first posInc must be > 0
at 
__randomizedtesting.SeedInfo.seed([681223C11B803DE5:2499CD042CE1D16]:0)
at 
org.apache.lucene.analysis.ValidatingTokenFilter.incrementToken(ValidatingTokenFilter.java:76)
at 
org.apache.lucene.analysis.BaseTokenStreamTestCase.checkAnalysisConsistency(BaseTokenStreamTestCase.java:748)
at 
org.apache.lucene.analysis.BaseTokenStreamTestCase.checkRandomData(BaseTokenStreamTestCase.java:659)
at 
org.apache.lucene.analysis.BaseTokenStreamTestCase.checkRandomData(BaseTokenStreamTestCase.java:561)
at 
org.apache.lucene.analysis.core.TestRandomChains.testRandomChainsWithLargeStrings(TestRandomChains.java:888)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)




Build Log:
[...truncated 3361 lines...]
   [junit4] Suite: org.apache.lucene.analysis.core.TestRandomChains
   [junit4]   2> TEST FAIL: useCharFilter=true 
text='\u0b56\u0b68\u0b76\u0b07\u0b71\u0b4b\u0b64  \u01f2  viapv  
\u2de0\u2dea\u2df2\u2de5\u2df2 \ueb27\uf378 zwafpehc hsfbniwohmv ii 
\u2024\u2039\u2049\u2008\u201f\u204c 

[jira] [Updated] (LUCENE-8344) TokenStreamToAutomaton doesn't ignore trailing posInc when preservePositionIncrements=false

2018-06-08 Thread David Smiley (JIRA)


 [ 
https://issues.apache.org/jira/browse/LUCENE-8344?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley updated LUCENE-8344:
-
Attachment: LUCENE-8344.patch

> TokenStreamToAutomaton doesn't ignore trailing posInc when 
> preservePositionIncrements=false
> ---
>
> Key: LUCENE-8344
> URL: https://issues.apache.org/jira/browse/LUCENE-8344
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/suggest
>Reporter: David Smiley
>Priority: Major
> Attachments: LUCENE-8344.patch, LUCENE-8344.patch, LUCENE-8344.patch
>
>
> TokenStreamToAutomaton in Lucene core is used by the AnalyzingSuggester 
> (incl. FuzzySuggester subclass ) and NRT Document Suggester and soon the 
> SolrTextTagger.  It has a setting {{preservePositionIncrements}} defaulting 
> to true.  If it's set to false (e.g. to ignore stopwords) and if there is a 
> _trailing_ position increment greater than 1, TS2A will _still_ add position 
> increments (holes) into the automata even though it was configured not to.
> I'm filing this issue separate from LUCENE-8332 where I first found it.  The 
> fix is very simple but I'm concerned about back-compat ramifications so I'm 
> filing it separately.  I'll attach a patch to show the problem.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)




[jira] [Updated] (LUCENE-7976) Make TieredMergePolicy respect maxSegmentSizeMB and allow singleton merges of very large segments

2018-06-08 Thread Erick Erickson (JIRA)


 [ 
https://issues.apache.org/jira/browse/LUCENE-7976?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson updated LUCENE-7976:
---
Attachment: SOLR-7976.patch

> Make TieredMergePolicy respect maxSegmentSizeMB and allow singleton merges of 
> very large segments
> -
>
> Key: LUCENE-7976
> URL: https://issues.apache.org/jira/browse/LUCENE-7976
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Major
> Attachments: LUCENE-7976.patch, LUCENE-7976.patch, LUCENE-7976.patch, 
> LUCENE-7976.patch, LUCENE-7976.patch, LUCENE-7976.patch, LUCENE-7976.patch, 
> LUCENE-7976.patch, LUCENE-7976.patch, LUCENE-7976.patch, LUCENE-7976.patch, 
> SOLR-7976.patch
>
>
> We're seeing situations "in the wild" where there are very large indexes (on 
> disk) handled quite easily in a single Lucene index. This is particularly 
> true as features like docValues move data into MMapDirectory space. The 
> current TMP algorithm allows on the order of 50% deleted documents as per a 
> dev list conversation with Mike McCandless (and his blog here:  
> https://www.elastic.co/blog/lucenes-handling-of-deleted-documents).
> Especially in the current era of very large indexes in aggregate, (think many 
> TB) solutions like "you need to distribute your collection over more shards" 
> become very costly. Additionally, the tempting "optimize" button exacerbates 
> the issue since once you form, say, a 100G segment (by 
> optimizing/forceMerging) it is not eligible for merging until 97.5G of the 
> docs in it are deleted (current default 5G max segment size).
> The proposal here would be to add a new parameter to TMP, something like 
>  (no, that's not a serious name, suggestions 
> welcome) which would default to 100 (or the same behavior we have now).
> So if I set this parameter to, say, 20%, and the max segment size stays at 
> 5G, the following would happen when segments were selected for merging:
> > any segment with > 20% deleted documents would be merged or rewritten NO 
> > MATTER HOW LARGE. There are two cases,
> >> the segment has < 5G "live" docs. In that case it would be merged with 
> >> smaller segments to bring the resulting segment up to 5G. If no smaller 
> >> segments exist, it would just be rewritten
> >> The segment has > 5G "live" docs (the result of a forceMerge or optimize). 
> >> It would be rewritten into a single segment removing all deleted docs no 
> >> matter how big it is to start. The 100G example above would be rewritten 
> >> to an 80G segment for instance.
> Of course this would lead to potentially much more I/O which is why the 
> default would be the same behavior we see now. As it stands now, though, 
> there's no way to recover from an optimize/forceMerge except to re-index from 
> scratch. We routinely see 200G-300G Lucene indexes at this point "in the 
> wild" with 10s of  shards replicated 3 or more times. And that doesn't even 
> include having these over HDFS.
> Alternatives welcome! Something like the above seems minimally invasive. A 
> new merge policy is certainly an alternative.
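The proposal above reduces to a simple eligibility rule plus a singleton-rewrite case. A minimal Python sketch, assuming a percentage parameter whose name was lost from the text (the default of 100 preserves today's behavior):

```python
def merge_eligible(del_docs, max_doc, pct_deleted_allowed=100.0):
    """Proposed TMP rule (sketch): with the default of 100 nothing is
    forced; at e.g. 20, any segment over 20% deleted docs becomes
    eligible for merging or rewriting regardless of its size."""
    return 100.0 * del_docs / max_doc > pct_deleted_allowed

def singleton_rewrite(live_size_gb, max_segment_gb=5.0):
    """A segment whose live docs alone exceed the cap (e.g. a
    forceMerged 100G segment) is rewritten by itself: 100G with 20%
    deletes becomes a single 80G segment."""
    return live_size_gb > max_segment_gb

print(merge_eligible(25, 100, pct_deleted_allowed=20))  # True
print(singleton_rewrite(80.0))                          # True
```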



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7976) Make TieredMergePolicy respect maxSegmentSizeMB and allow singleton merges of very large segments

2018-06-08 Thread Erick Erickson (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-7976?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16506594#comment-16506594
 ] 

Erick Erickson commented on LUCENE-7976:


[~simonw]

bq. should we check here if the segDelDocs is less than the threshold rather 
than checking if there is at least one delete?

Not unless we redefine what forceMerge does. It's perfectly possible to have a 
segment at this point that's 4.999G with one document deleted. It'll be 
horribly wasteful, but it's no worse than what has always happened with 
forceMerge.

Outside of forceMerge, segments won't be eligible unless they have 10% deleted 
docs.

In the case of findMerge, I'm counting on the scoring mechanism to keep this 
from being a problem.

bq. No, if we have not seen a too-large merge but the best one is too large we 
still add it? Is this correct? Don't we want to prevent that?

This is awkward at present in that it preserves the old behavior: 
findForcedDeletesMerges has always allowed multiple large merges; I'm leaving 
that for a later JIRA.

In the other cases, this will prevent multiple large merges because the first 
time we get a large merge, haveOneLargeMerge == false and bestTooLarge == true 
so we create a large merge.

Thereafter, if bestTooLarge == true we'll avoid adding it.
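The guard being described can be sketched as follows. This is a Python sketch of the control flow only, not the actual TieredMergePolicy code:

```python
def plan_merges(best_merge_sizes, max_merged_bytes):
    """At most one over-sized merge per pass: the first time the best
    candidate is too large, haveOneLargeMerge flips to True and later
    too-large candidates are skipped."""
    have_one_large_merge = False
    planned = []
    for size in best_merge_sizes:  # size of the "best" merge each round
        best_too_large = size > max_merged_bytes
        if best_too_large and have_one_large_merge:
            continue  # thereafter, avoid adding further large merges
        planned.append(size)
        have_one_large_merge = have_one_large_merge or best_too_large
    return planned

print(plan_merges([6, 3, 7, 2], 5))  # [6, 3, 2]
```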

bq. I do wonder about the naming here: why is this named maxDoc? Should it be 
named delCount or so?

Brain fart, changed. I started out doing one thing then changed it without 
noticing that.

bq. can I suggest to remove the seg prefix? It's obvious from the name. I also 
think it should be delCount instead.
Done

bq. can you please use parentheses around this?
Done

bq. in SegmentsInfoRequestHandler Solr reads the SegmentInfos from disk, which 
will not result in accurate counts.

Good to know, is there a better way to go? I don't think total accuracy is 
necessary here.

bq. ...I would love to see them work without index writer. Do you think you 
can still fix that easily?

I have no idea ;) I saw the discussion at 8330 but didn't see any test 
conversions I could copy. I'll put up another version of this patch 
momentarily; if you could show me the pattern to use, I'll see what I can do. 
That said, if it's involved at all I'd like to put it in a follow-on JIRA.

[~mikemccand] This set of changes is purely style, no code changes. So unless 
there are objections, I'll commit it sometime next week. 




> Make TieredMergePolicy respect maxSegmentSizeMB and allow singleton merges of 
> very large segments
> -
>
> Key: LUCENE-7976
> URL: https://issues.apache.org/jira/browse/LUCENE-7976
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Major
> Attachments: LUCENE-7976.patch, LUCENE-7976.patch, LUCENE-7976.patch, 
> LUCENE-7976.patch, LUCENE-7976.patch, LUCENE-7976.patch, LUCENE-7976.patch, 
> LUCENE-7976.patch, LUCENE-7976.patch, LUCENE-7976.patch, LUCENE-7976.patch, 
> SOLR-7976.patch
>
>
> We're seeing situations "in the wild" where there are very large indexes (on 
> disk) handled quite easily in a single Lucene index. This is particularly 
> true as features like docValues move data into MMapDirectory space. The 
> current TMP algorithm allows on the order of 50% deleted documents as per a 
> dev list conversation with Mike McCandless (and his blog here:  
> https://www.elastic.co/blog/lucenes-handling-of-deleted-documents).
> Especially in the current era of very large indexes in aggregate, (think many 
> TB) solutions like "you need to distribute your collection over more shards" 
> become very costly. Additionally, the tempting "optimize" button exacerbates 
> the issue since once you form, say, a 100G segment (by 
> optimizing/forceMerging) it is not eligible for merging until 97.5G of the 
> docs in it are deleted (current default 5G max segment size).
> The proposal here would be to add a new parameter to TMP, something like 
>  (no, that's not a serious name, suggestions 
> welcome) which would default to 100 (or the same behavior we have now).
> So if I set this parameter to, say, 20%, and the max segment size stays at 
> 5G, the following would happen when segments were selected for merging:
> > any segment with > 20% deleted documents would be merged or rewritten NO 
> > MATTER HOW LARGE. There are two cases,
> >> the segment has < 5G "live" docs. In that case it would be merged with 
> >> smaller segments to bring the resulting segment up to 5G. If no smaller 
> >> segments exist, it would just be rewritten
> >> The segment has > 5G "live" docs (the result of a forceMerge or optimize). 
> >> It would be rewritten into a single segment removing all deleted docs no 
> >> matter how big it is to start. The 100G 

[jira] [Commented] (SOLR-12469) Use the term TLS instead of SSL in documentation

2018-06-08 Thread JIRA


[ 
https://issues.apache.org/jira/browse/SOLR-12469?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16506570#comment-16506570
 ] 

Jan Høydahl commented on SOLR-12469:


I was simply pinging you, not assuming you would do any work for me :)

It is no secret that SSL is outdated and deprecated and that TLS is the 
protocol we all use for HTTPS these days. However, after reading a bit more 
about the terminology confusion (see this article: 
[https://certsimple.com/blog/ssl-or-tls]), I think I've changed my mind about 
the priority. Since "everyone" is still saying SSL, we could keep SSL as the 
main term but add TLS in addition for findability. So perhaps the page title 
should be "Enabling SSL/TLS", and we should also mention TLS elsewhere on the page?

> Use the term TLS instead of SSL in documentation
> 
>
> Key: SOLR-12469
> URL: https://issues.apache.org/jira/browse/SOLR-12469
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Reporter: Jan Høydahl
>Priority: Minor
>  Labels: ssl, tls
>
> In our documentation we should use the correct term TLS instead of SSL. We 
> could still mention SSL in the text for searchability. We should probably not 
> rename the refguide page file name in 
> [https://lucene.apache.org/solr/guide/7_3/enabling-ssl.html] and the title of 
> this page could be "Enabling TLS / SSL" since our refguide search is 
> title-only right now :) 
> I'm not proposing to rename the {{solr.in.sh}} environment variables for SSL 
> or java code.
> [~ctargett]






[jira] [Commented] (SOLR-12018) Ref Guide: Comment system is offline

2018-06-08 Thread Cassandra Targett (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12018?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16506555#comment-16506555
 ] 

Cassandra Targett commented on SOLR-12018:
--

bq. Another simple idea I had was to include a link to the GitHub version of 
the page, from where the user (if he/she is logged in to GitHub) can click the 
edit button (pencil icon), do the suggested changes and submit a PR directly 
from that page

+1. I've seen it done and it wouldn't take much work at all. I think our GitHub 
PR workflow is fundamentally broken (in the sense that we can't just merge the 
PR from the GH interface, but instead have to download a diff or patch, etc.), 
but it shouldn't stop us from making it easier for non-committers to contribute.

> Ref Guide: Comment system is offline
> 
>
> Key: SOLR-12018
> URL: https://issues.apache.org/jira/browse/SOLR-12018
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Reporter: Cassandra Targett
>Assignee: Cassandra Targett
>Priority: Major
> Fix For: 7.4, master (8.0)
>
> Attachments: RefGuideCommentsBroken.png, SOLR-12018.patch
>
>
> The Ref Guide uses comments.apache.org to allow user comments. Sometime in 
> December/early January, it was taken offline. 
> I filed INFRA-15947 to ask about its long-term status, and recently got an 
> answer that the ETA is mid-March for a permanent INFRA-hosted system. 
> However, it's of course possible changes in priorities or other factors will 
> delay that timeline.
> Every Ref Guide page currently invites users to leave comments, but since the 
> whole Comments area is pulled in via JavaScript from a non-existent server, 
> there's no space to do so (see attached screenshot). While we wait for the 
> permanent server to be online, we have a couple of options:
> # Leave it the way it is and hopefully by mid-March it will be back
> # Change the text to tell users it's not working temporarily on all published 
> versions
> # Remove it from all the published versions and put it back when it's back
> I'm not a great fan of #2 or #3, because it'd be a bit of work for me to 
> backport changes to 4 branches and republish every guide just to fix it again 
> in a month or so. I'm fine with option #1 since I've known about it for about 
> a month at least and as far as I can tell no one else has noticed. But if 
> people feel strongly about it now that they know about it, we can figure 
> something out.
> If for some reason it takes longer than mid-March to get it back, or INFRA 
> chooses to stop supporting it entirely, this issue can morph into what we 
> should do for an alternative permanent solution.






[jira] [Commented] (SOLR-12018) Ref Guide: Comment system is offline

2018-06-08 Thread JIRA


[ 
https://issues.apache.org/jira/browse/SOLR-12018?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16506542#comment-16506542
 ] 

Jan Høydahl commented on SOLR-12018:


Another simple idea I had was to include a link to the GitHub version of the 
page, from where the user (if he/she is logged in to GitHub) can click the edit 
button (pencil icon), do the suggested changes and submit a PR directly from 
that page. Example for the Analyzers page: 
[https://github.com/apache/lucene-solr/blob/master/solr/solr-ref-guide/src/analyzers.adoc]
 

> Ref Guide: Comment system is offline
> 
>
> Key: SOLR-12018
> URL: https://issues.apache.org/jira/browse/SOLR-12018
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Reporter: Cassandra Targett
>Assignee: Cassandra Targett
>Priority: Major
> Fix For: 7.4, master (8.0)
>
> Attachments: RefGuideCommentsBroken.png, SOLR-12018.patch
>
>
> The Ref Guide uses comments.apache.org to allow user comments. Sometime in 
> December/early January, it was taken offline. 
> I filed INFRA-15947 to ask about its long-term status, and recently got an 
> answer that the ETA is mid-March for a permanent INFRA-hosted system. 
> However, it's of course possible changes in priorities or other factors will 
> delay that timeline.
> Every Ref Guide page currently invites users to leave comments, but since the 
> whole Comments area is pulled in via JavaScript from a non-existent server, 
> there's no space to do so (see attached screenshot). While we wait for the 
> permanent server to be online, we have a couple of options:
> # Leave it the way it is and hopefully by mid-March it will be back
> # Change the text to tell users it's not working temporarily on all published 
> versions
> # Remove it from all the published versions and put it back when it's back
> I'm not a great fan of #2 or #3, because it'd be a bit of work for me to 
> backport changes to 4 branches and republish every guide just to fix it again 
> in a month or so. I'm fine with option #1 since I've known about it for about 
> a month at least and as far as I can tell no one else has noticed. But if 
> people feel strongly about it now that they know about it, we can figure 
> something out.
> If for some reason it takes longer than mid-March to get it back, or INFRA 
> chooses to stop supporting it entirely, this issue can morph into what we 
> should do for an alternative permanent solution.






[jira] [Commented] (LUCENE-7619) Add WordDelimiterGraphFilter

2018-06-08 Thread David Smiley (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-7619?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16506535#comment-16506535
 ] 

David Smiley commented on LUCENE-7619:
--

RE TokenStreamToAutomaton and finalOffsetGapAsHole, shouldn't this be ignored 
when preservePositionIncrements==false?  In other words, I think 
finalOffsetGapAsHole should only have an effect when preservePositionIncrements 
is true.
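The suggested interaction reduces to a one-line guard. A minimal sketch with hypothetical snake_case names mirroring the Lucene settings:

```python
def emit_trailing_hole(final_offset_gap_as_hole, preserve_position_increments):
    """Sketch of the suggested condition: finalOffsetGapAsHole only
    takes effect when position increments are preserved."""
    return final_offset_gap_as_hole and preserve_position_increments

print(emit_trailing_hole(True, False))  # False: gap ignored when not preserving
```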

> Add WordDelimiterGraphFilter
> 
>
> Key: LUCENE-7619
> URL: https://issues.apache.org/jira/browse/LUCENE-7619
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Michael McCandless
>Assignee: Michael McCandless
>Priority: Major
> Fix For: 6.5, 7.0
>
> Attachments: LUCENE-7619.patch, LUCENE-7619.patch, LUCENE-7619.patch, 
> after.png, before.png
>
>
> Currently, {{WordDelimiterFilter}} doesn't try to set the {{posLen}} 
> attribute and so it creates graphs like this:
> !before.png!
> but with this patch (still a work in progress) it creates this graph instead:
> !after.png!
> This means (today) positional queries when using WDF at search time are 
> buggy, but since we fixed LUCENE-7603, with this change here you should be 
> able to use positional queries with WDGF.
> I'm also trying to produce holes properly (this removes logic from the current 
> WDF that swallows a hole when the whole token is just delimiters).
> Surprisingly, it's actually quite easy to tweak WDF to create a graph (unlike 
> e.g. {{SynonymGraphFilter}}) because it's already creating the necessary new 
> positions, and its output graph never has side paths, except for single 
> tokens that skip nodes because they have {{posLen > 1}}.  I.e. the only fix 
> to make, I think, is to set {{posLen}} properly.  And it really helps that it 
> does its own "new token buffering + sorting" already.
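The role of posLen can be illustrated with plain data: a token starting at position p with positionLength n ends at graph node p + n, which is how a catenated token spans the parts it was built from. Illustrative values only, not actual WDGF output:

```python
# (term, position, positionLength) for an input like "wi-fi" with
# catenation on: "wifi" spans both positions instead of colliding
# with "wi" at position 0.
tokens = [("wi", 0, 1), ("wifi", 0, 2), ("fi", 1, 1)]

def end_node(position, pos_len):
    # the graph node a token arrives at
    return position + pos_len

# "wifi" and "fi" rejoin at the same node, so positional queries line up
print(end_node(0, 2) == end_node(1, 1))  # True
```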






[jira] [Commented] (SOLR-11982) Add support for indicating preferred replica types for queries

2018-06-08 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-11982?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16506523#comment-16506523
 ] 

ASF subversion and git services commented on SOLR-11982:


Commit 4dacf9081240076dd421bdd9819c2f13aec19b8c in lucene-solr's branch 
refs/heads/master from [~ctargett]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=4dacf90 ]

SOLR-11982: Ref Guide: remove deprecated content; break up long paragraphs


> Add support for indicating preferred replica types for queries
> --
>
> Key: SOLR-11982
> URL: https://issues.apache.org/jira/browse/SOLR-11982
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Affects Versions: 7.4, master (8.0)
>Reporter: Ere Maijala
>Assignee: Tomás Fernández Löbbe
>Priority: Minor
>  Labels: patch-available, patch-with-test
> Fix For: 7.4, master (8.0)
>
> Attachments: SOLR-11982-preferReplicaTypes.patch, 
> SOLR-11982-preferReplicaTypes.patch, SOLR-11982.patch, SOLR-11982.patch, 
> SOLR-11982.patch, SOLR-11982.patch, SOLR-11982.patch, SOLR-11982.patch, 
> SOLR-11982.patch, SOLR-11982.patch, SOLR-11982.patch
>
>
> It would be nice to have the possibility to easily sort the shards in the 
> preferred order e.g. by replica type. The attached patch adds support for 
> {{shards.sort}} parameter that allows one to sort e.g. PULL and TLOG replicas 
> first with {{shards.sort=replicaType:PULL|TLOG}} (which would mean that NRT 
> replicas wouldn't be hit with queries unless they're the only ones available) 
> and/or to sort by replica location (like preferLocalShards=true but more 
> versatile).
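The ordering this parameter implies can be sketched as a stable sort by type preference (hypothetical helper, not the patch's code):

```python
def sort_replicas(replica_types, preference=("PULL", "TLOG")):
    """shards.sort=replicaType:PULL|TLOG, sketched: types listed earlier
    sort first; unlisted types (e.g. NRT) sort last, so they are only
    queried when nothing preferred is available."""
    def rank(replica_type):
        if replica_type in preference:
            return preference.index(replica_type)
        return len(preference)
    return sorted(replica_types, key=rank)

print(sort_replicas(["NRT", "TLOG", "PULL"]))  # ['PULL', 'TLOG', 'NRT']
```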






[jira] [Commented] (SOLR-11982) Add support for indicating preferred replica types for queries

2018-06-08 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-11982?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16506527#comment-16506527
 ] 

ASF subversion and git services commented on SOLR-11982:


Commit b3ec7510735e5f4a34e63a4977cdd5139d7135f8 in lucene-solr's branch 
refs/heads/branch_7_4 from [~ctargett]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=b3ec751 ]

SOLR-11982: Ref Guide: remove deprecated content; break up long paragraphs


> Add support for indicating preferred replica types for queries
> --
>
> Key: SOLR-11982
> URL: https://issues.apache.org/jira/browse/SOLR-11982
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Affects Versions: 7.4, master (8.0)
>Reporter: Ere Maijala
>Assignee: Tomás Fernández Löbbe
>Priority: Minor
>  Labels: patch-available, patch-with-test
> Fix For: 7.4, master (8.0)
>
> Attachments: SOLR-11982-preferReplicaTypes.patch, 
> SOLR-11982-preferReplicaTypes.patch, SOLR-11982.patch, SOLR-11982.patch, 
> SOLR-11982.patch, SOLR-11982.patch, SOLR-11982.patch, SOLR-11982.patch, 
> SOLR-11982.patch, SOLR-11982.patch, SOLR-11982.patch
>
>
> It would be nice to have the possibility to easily sort the shards in the 
> preferred order e.g. by replica type. The attached patch adds support for 
> {{shards.sort}} parameter that allows one to sort e.g. PULL and TLOG replicas 
> first with {{shards.sort=replicaType:PULL|TLOG}} (which would mean that NRT 
> replicas wouldn't be hit with queries unless they're the only ones available) 
> and/or to sort by replica location (like preferLocalShards=true but more 
> versatile).






[jira] [Commented] (SOLR-11982) Add support for indicating preferred replica types for queries

2018-06-08 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-11982?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16506526#comment-16506526
 ] 

ASF subversion and git services commented on SOLR-11982:


Commit a05234f77739878215109e4959fb435486a85fb9 in lucene-solr's branch 
refs/heads/branch_7x from [~ctargett]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=a05234f ]

SOLR-11982: Ref Guide: remove deprecated content; break up long paragraphs


> Add support for indicating preferred replica types for queries
> --
>
> Key: SOLR-11982
> URL: https://issues.apache.org/jira/browse/SOLR-11982
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Affects Versions: 7.4, master (8.0)
>Reporter: Ere Maijala
>Assignee: Tomás Fernández Löbbe
>Priority: Minor
>  Labels: patch-available, patch-with-test
> Fix For: 7.4, master (8.0)
>
> Attachments: SOLR-11982-preferReplicaTypes.patch, 
> SOLR-11982-preferReplicaTypes.patch, SOLR-11982.patch, SOLR-11982.patch, 
> SOLR-11982.patch, SOLR-11982.patch, SOLR-11982.patch, SOLR-11982.patch, 
> SOLR-11982.patch, SOLR-11982.patch, SOLR-11982.patch
>
>
> It would be nice to have the possibility to easily sort the shards in the 
> preferred order e.g. by replica type. The attached patch adds support for 
> {{shards.sort}} parameter that allows one to sort e.g. PULL and TLOG replicas 
> first with {{shards.sort=replicaType:PULL|TLOG}} (which would mean that NRT 
> replicas wouldn't be hit with queries unless they're the only ones available) 
> and/or to sort by replica location (like preferLocalShards=true but more 
> versatile).






[jira] [Updated] (LUCENE-7976) Make TieredMergePolicy respect maxSegmentSizeMB and allow singleton merges of very large segments

2018-06-08 Thread Erick Erickson (JIRA)


 [ 
https://issues.apache.org/jira/browse/LUCENE-7976?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson updated LUCENE-7976:
---
Attachment: LUCENE-7976.patch

> Make TieredMergePolicy respect maxSegmentSizeMB and allow singleton merges of 
> very large segments
> -
>
> Key: LUCENE-7976
> URL: https://issues.apache.org/jira/browse/LUCENE-7976
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Major
> Attachments: LUCENE-7976.patch, LUCENE-7976.patch, LUCENE-7976.patch, 
> LUCENE-7976.patch, LUCENE-7976.patch, LUCENE-7976.patch, LUCENE-7976.patch, 
> LUCENE-7976.patch, LUCENE-7976.patch, LUCENE-7976.patch, LUCENE-7976.patch
>
>
> We're seeing situations "in the wild" where there are very large indexes (on 
> disk) handled quite easily in a single Lucene index. This is particularly 
> true as features like docValues move data into MMapDirectory space. The 
> current TMP algorithm allows on the order of 50% deleted documents as per a 
> dev list conversation with Mike McCandless (and his blog here:  
> https://www.elastic.co/blog/lucenes-handling-of-deleted-documents).
> Especially in the current era of very large indexes in aggregate, (think many 
> TB) solutions like "you need to distribute your collection over more shards" 
> become very costly. Additionally, the tempting "optimize" button exacerbates 
> the issue since once you form, say, a 100G segment (by 
> optimizing/forceMerging) it is not eligible for merging until 97.5G of the 
> docs in it are deleted (current default 5G max segment size).
> The proposal here would be to add a new parameter to TMP, something like 
>  (no, that's not a serious name, suggestions 
> welcome) which would default to 100 (or the same behavior we have now).
> So if I set this parameter to, say, 20%, and the max segment size stays at 
> 5G, the following would happen when segments were selected for merging:
> > any segment with > 20% deleted documents would be merged or rewritten NO 
> > MATTER HOW LARGE. There are two cases,
> >> the segment has < 5G "live" docs. In that case it would be merged with 
> >> smaller segments to bring the resulting segment up to 5G. If no smaller 
> >> segments exist, it would just be rewritten
> >> The segment has > 5G "live" docs (the result of a forceMerge or optimize). 
> >> It would be rewritten into a single segment removing all deleted docs no 
> >> matter how big it is to start. The 100G example above would be rewritten 
> >> to an 80G segment for instance.
> Of course this would lead to potentially much more I/O which is why the 
> default would be the same behavior we see now. As it stands now, though, 
> there's no way to recover from an optimize/forceMerge except to re-index from 
> scratch. We routinely see 200G-300G Lucene indexes at this point "in the 
> wild" with 10s of  shards replicated 3 or more times. And that doesn't even 
> include having these over HDFS.
> Alternatives welcome! Something like the above seems minimally invasive. A 
> new merge policy is certainly an alternative.






[JENKINS] Lucene-Solr-7.x-Linux (64bit/jdk-10) - Build # 2082 - Still Unstable!

2018-06-08 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/2082/
Java: 64bit/jdk-10 -XX:-UseCompressedOops -XX:+UseSerialGC

2 tests failed.
FAILED:  org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest.testTrigger

Error Message:
number of ops expected:<2> but was:<1>

Stack Trace:
java.lang.AssertionError: number of ops expected:<2> but was:<1>
at 
__randomizedtesting.SeedInfo.seed([575206002C131A6A:34993082B5DC6947]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at 
org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest.testTrigger(IndexSizeTriggerTest.java:188)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:844)


FAILED:  
org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest.testSplitIntegration

Error Message:
last state: 

[jira] [Commented] (SOLR-4793) Solr Cloud can't upload large config files ( > 1MB) to Zookeeper

2018-06-08 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-4793?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16506493#comment-16506493
 ] 

ASF subversion and git services commented on SOLR-4793:
---

Commit c35edc8fc394f4b88185fe24b83f748b7793dde9 in lucene-solr's branch 
refs/heads/branch_7x from [~ctargett]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=c35edc8 ]

SOLR-4793: Ref Guide: shorten the section heading & fix refs


> Solr Cloud can't upload large config files ( > 1MB)  to Zookeeper
> -
>
> Key: SOLR-4793
> URL: https://issues.apache.org/jira/browse/SOLR-4793
> Project: Solr
>  Issue Type: Improvement
>Reporter: Son Nguyen
>Assignee: Steve Rowe
>Priority: Major
> Fix For: 7.4, master (8.0)
>
> Attachments: SOLR-4793.patch
>
>
> ZooKeeper sets the znode size limit to 1MB by default, so we can't start
> SolrCloud with some large config files, like synonyms.txt.
> Jan Høydahl has a good idea:
> "SolrCloud is designed with an assumption that you should be able to upload 
> your whole disk-based conf folder into ZK, and that you should be able to add 
> an empty Solr node to a cluster and it would download all config from ZK. So 
> immediately a splitting strategy automatically handled by ZkSolrResourceLoader 
> for large files could be one way forward, i.e. store synonyms.txt as e.g. 
> __001_synonyms.txt __002_synonyms.txt"
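The chunking scheme sketched in that quote could look roughly like the helper below. This is purely illustrative: `ConfigChunker`, `chunkNames`, and `split` are hypothetical names, not the actual ZkSolrResourceLoader code; only the 1MB limit and the `__001_` naming come from the discussion above.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class ConfigChunker {
    // ZooKeeper's default znode size limit is 1MB.
    static final int DEFAULT_LIMIT = 1024 * 1024;

    /** Names the chunks __001_synonyms.txt, __002_synonyms.txt, ... */
    static List<String> chunkNames(String fileName, int fileSize, int limit) {
        int chunks = Math.max(1, (fileSize + limit - 1) / limit); // ceiling division
        List<String> names = new ArrayList<>();
        for (int i = 1; i <= chunks; i++) {
            names.add(String.format("__%03d_%s", i, fileName));
        }
        return names;
    }

    /** Splits the file content into pieces that each fit under the znode limit. */
    static List<byte[]> split(byte[] data, int limit) {
        List<byte[]> parts = new ArrayList<>();
        for (int off = 0; off < data.length; off += limit) {
            parts.add(Arrays.copyOfRange(data, off, Math.min(off + limit, data.length)));
        }
        return parts;
    }
}
```

A resource loader following this idea would write each part to its own znode and concatenate the `__NNN_`-prefixed znodes in order when reading the file back.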



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4793) Solr Cloud can't upload large config files ( > 1MB) to Zookeeper

2018-06-08 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-4793?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16506495#comment-16506495
 ] 

ASF subversion and git services commented on SOLR-4793:
---

Commit 4031935f3ba008a6beb8216b94e67548bfbf9ec2 in lucene-solr's branch 
refs/heads/branch_7_4 from [~ctargett]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=4031935 ]

SOLR-4793: Ref Guide: shorten the section heading & fix refs








[jira] [Commented] (SOLR-4793) Solr Cloud can't upload large config files ( > 1MB) to Zookeeper

2018-06-08 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-4793?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16506491#comment-16506491
 ] 

ASF subversion and git services commented on SOLR-4793:
---

Commit 4c7b7c0063b2fd194d2037fd769c2b0e5fcf in lucene-solr's branch 
refs/heads/master from [~ctargett]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=4c7b7c0 ]

SOLR-4793: Ref Guide: shorten the section heading & fix refs








[jira] [Comment Edited] (SOLR-7536) adding fields to newly created managed-schema could sometimes cause error

2018-06-08 Thread Christopher Jackson (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-7536?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16506461#comment-16506461
 ] 

Christopher Jackson edited comment on SOLR-7536 at 6/8/18 7:12 PM:
---

I have also just encountered this issue on version 5.5.2 when using SolrCloud 
and a ManagedIndexSchemaFactory. The configset included a schema.xml, which was 
used to produce a managed-schema file in the config folder on ZK; however, the 
modification to the schema failed because it was still looking for schema.xml.

To work around the issue I took the managed-schema that was introduced, placed 
it in my initial configset, dropped the collection, recreated it, and tried the 
schema update again, which worked without issue.


was (Author: jackson):
I have also just encountered this issue on version 5.5.2. The configset 
included a schema.xml, which was used to produce a managed-schema file in the 
config folder on ZK; however, the modification to the schema failed because it 
was still looking for schema.xml.

To work around the issue I took the managed-schema that was introduced, placed 
it in my initial configset, dropped the collection, recreated it, and tried the 
schema update again, which worked without issue.

>  adding fields to newly created managed-schema could sometimes cause error
> --
>
> Key: SOLR-7536
> URL: https://issues.apache.org/jira/browse/SOLR-7536
> Project: Solr
>  Issue Type: Bug
>Reporter: Zilo Zongh
>Assignee: Steve Rowe
>Priority: Major
>
> When using managed schema in SolrCloud, adding fields into schema would 
> SOMETIMES end up prompting "Can't find resource 'schema.xml' in classpath or 
> '/configs/collectionName', cwd=/export/solr/solr-5.1.0/server", there is of 
> course no schema.xml in configs, but 'schema.xml.bak' and 'managed-schema'
> Code to upload configs and create collection:
> {code:java}
> Path tempPath = getConfigPath();
> // customized configs with solrconfig.xml using ManagedIndexSchemaFactory
> client.uploadConfig(tempPath, name);
>
> if (numShards == 0) {
>     numShards = getNumNodes(client);
> }
>
> Create request = new CollectionAdminRequest.Create();
> request.setCollectionName(name);
> request.setNumShards(numShards);
> replicationFactor = (replicationFactor == 0 ? DEFAULT_REPLICA_FACTOR : replicationFactor);
> request.setReplicationFactor(replicationFactor);
> request.setMaxShardsPerNode(maxShardsPerNode == 0 ? replicationFactor : maxShardsPerNode);
> CollectionAdminResponse response = request.process(client);
> {code}
>  adding fields to schema, either by curl or by httpclient,  would sometimes 
> yield the following error, but the error can be fixed by RELOADING the newly 
> created collection once or several times:
> INFO  - [{  "responseHeader":{"status":500,"QTime":5},  
> "errors":["Error reading input String Can't find resource 'schema.xml' in 
> classpath or '/configs/collectionName', cwd=/export/solr/solr-5.1.0/server"], 
>  "error":{"msg":"Can't find resource 'schema.xml' in classpath or 
> '/configs/collectionName', cwd=/export/solr/solr-5.1.0/server",
> "trace":"java.io.IOException: Can't find resource 'schema.xml' in classpath 
> or '/configs/collectionName', cwd=/export/solr/solr-5.1.0/server
>
> 	at org.apache.solr.cloud.ZkSolrResourceLoader.openResource(ZkSolrResourceLoader.java:98)
> 	at org.apache.solr.schema.SchemaManager.getFreshManagedSchema(SchemaManager.java:421)
> 	at org.apache.solr.schema.SchemaManager.doOperations(SchemaManager.java:104)
> 	at org.apache.solr.schema.SchemaManager.performOperations(SchemaManager.java:94)
> 	at org.apache.solr.handler.SchemaHandler.handleRequestBody(SchemaHandler.java:57)
> 	at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:143)
> 	at org.apache.solr.core.SolrCore.execute(SolrCore.java:1984)
> 	at org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:829)
> 	at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:446)
> 	at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:220)
> 	at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1419)
> 	at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:455)
> 	at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137)
> 	at 

[jira] [Commented] (SOLR-12378) Support missing versionField on indexed docs in DocBasedVersionConstraintsURP

2018-06-08 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12378?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16506465#comment-16506465
 ] 

ASF subversion and git services commented on SOLR-12378:


Commit 72022c293ef82eb2e69949c803fa7889e070286d in lucene-solr's branch 
refs/heads/branch_7_4 from [~ctargett]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=72022c2 ]

SOLR-12378: Ref Guide: reformat parameter list; break up big paragraph; fix 
typos


> Support missing versionField on indexed docs in DocBasedVersionConstraintsURP
> -
>
> Key: SOLR-12378
> URL: https://issues.apache.org/jira/browse/SOLR-12378
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public (Default Security Level. Issues are Public) 
>  Components: UpdateRequestProcessors
>Affects Versions: master (8.0)
>Reporter: Oliver Bates
>Assignee: Mark Miller
>Priority: Minor
>  Labels: features, patch
> Fix For: 7.4, master (8.0)
>
> Attachments: SOLR-12378.patch, SOLR-12378.patch, 
> supportMissingVersionOnOldDocs-v1.patch
>
>
> -If we want to start using DocBasedVersionConstraintsUpdateRequestProcessor 
> on an existing index, we have to reindex everything to set value for the 
> 'versionField' field, otherwise- We can't start using 
> DocBasedVersionConstraintsUpdateRequestProcessor on an existing index because 
> we get this line throwing shade:
> {code:java}
> throw new SolrException(SERVER_ERROR,
> "Doc exists in index, but has null versionField: "
> + versionFieldName);
> {code}
> We have to reindex everything into a new collection, which isn't always 
> practical/possible. The proposal here is to have an option to allow the 
> existing docs to be missing this field and to simply treat those docs as 
> older than anything coming in with that field set.
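The proposed semantics can be sketched in a few lines. This is a toy model with hypothetical names (`VersionConstraint`, `acceptUpdate`, `supportMissing`), not the actual DocBasedVersionConstraintsUpdateRequestProcessor code; it only illustrates "a missing version loses to anything" versus the current error-throwing behavior.

```java
public class VersionConstraint {
    /**
     * Decide whether an update may proceed. With supportMissing enabled, an
     * indexed doc whose versionField is null is treated as older than any
     * incoming version; otherwise it is an error, as in the current code.
     */
    static boolean acceptUpdate(Long indexedVersion, long newVersion, boolean supportMissing) {
        if (indexedVersion == null) {
            if (!supportMissing) {
                // Current behavior quoted above: null versionField is fatal.
                throw new IllegalStateException(
                        "Doc exists in index, but has null versionField");
            }
            return true; // proposed behavior: missing version always loses
        }
        return newVersion > indexedVersion;
    }
}
```

With the flag on, existing docs never block an update; with it off, the sketch reproduces the SolrException path quoted in the issue.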






[jira] [Commented] (SOLR-12378) Support missing versionField on indexed docs in DocBasedVersionConstraintsURP

2018-06-08 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12378?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16506463#comment-16506463
 ] 

ASF subversion and git services commented on SOLR-12378:


Commit b47cb38d63d0c9d8518f81a83845ebe61a517ce1 in lucene-solr's branch 
refs/heads/branch_7x from [~ctargett]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=b47cb38 ]

SOLR-12378: Ref Guide: reformat parameter list; break up big paragraph; fix 
typos








[jira] [Commented] (SOLR-7536) adding fields to newly created managed-schema could sometimes cause error

2018-06-08 Thread Christopher Jackson (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-7536?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16506461#comment-16506461
 ] 

Christopher Jackson commented on SOLR-7536:
---

I have also just encountered this issue on version 5.5.2. The configset 
included a schema.xml, which was used to produce a managed-schema file in the 
config folder on ZK; however, the modification to the schema failed because it 
was still looking for schema.xml.

To work around the issue I took the managed-schema that was introduced, placed 
it in my initial configset, dropped the collection, recreated it, and tried the 
schema update again, which worked without issue.

>  adding fields to newly created managed-schema could sometimes cause error
> --
>
> Key: SOLR-7536
> URL: https://issues.apache.org/jira/browse/SOLR-7536
> Project: Solr
>  Issue Type: Bug
>Reporter: Zilo Zongh
>Assignee: Steve Rowe
>Priority: Major
>
> When using managed schema in SolrCloud, adding fields into schema would 
> SOMETIMES end up prompting "Can't find resource 'schema.xml' in classpath or 
> '/configs/collectionName', cwd=/export/solr/solr-5.1.0/server", there is of 
> course no schema.xml in configs, but 'schema.xml.bak' and 'managed-schema'
> Code to upload configs and create collection:
> {code:java}
> Path tempPath = getConfigPath();
> // customized configs with solrconfig.xml using ManagedIndexSchemaFactory
> client.uploadConfig(tempPath, name);
>
> if (numShards == 0) {
>     numShards = getNumNodes(client);
> }
>
> Create request = new CollectionAdminRequest.Create();
> request.setCollectionName(name);
> request.setNumShards(numShards);
> replicationFactor = (replicationFactor == 0 ? DEFAULT_REPLICA_FACTOR : replicationFactor);
> request.setReplicationFactor(replicationFactor);
> request.setMaxShardsPerNode(maxShardsPerNode == 0 ? replicationFactor : maxShardsPerNode);
> CollectionAdminResponse response = request.process(client);
> {code}
>  adding fields to schema, either by curl or by httpclient,  would sometimes 
> yield the following error, but the error can be fixed by RELOADING the newly 
> created collection once or several times:
> INFO  - [{  "responseHeader":{"status":500,"QTime":5},  
> "errors":["Error reading input String Can't find resource 'schema.xml' in 
> classpath or '/configs/collectionName', cwd=/export/solr/solr-5.1.0/server"], 
>  "error":{"msg":"Can't find resource 'schema.xml' in classpath or 
> '/configs/collectionName', cwd=/export/solr/solr-5.1.0/server",
> "trace":"java.io.IOException: Can't find resource 'schema.xml' in classpath 
> or '/configs/collectionName', cwd=/export/solr/solr-5.1.0/server
>
> 	at org.apache.solr.cloud.ZkSolrResourceLoader.openResource(ZkSolrResourceLoader.java:98)
> 	at org.apache.solr.schema.SchemaManager.getFreshManagedSchema(SchemaManager.java:421)
> 	at org.apache.solr.schema.SchemaManager.doOperations(SchemaManager.java:104)
> 	at org.apache.solr.schema.SchemaManager.performOperations(SchemaManager.java:94)
> 	at org.apache.solr.handler.SchemaHandler.handleRequestBody(SchemaHandler.java:57)
> 	at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:143)
> 	at org.apache.solr.core.SolrCore.execute(SolrCore.java:1984)
> 	at org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:829)
> 	at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:446)
> 	at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:220)
> 	at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1419)
> 	at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:455)
> 	at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137)
> 	at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:557)
> 	at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:231)
> 	at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1075)
> 	at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:384)
> 	at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:193)
> 	at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1009)
> 	at 

[jira] [Commented] (SOLR-12378) Support missing versionField on indexed docs in DocBasedVersionConstraintsURP

2018-06-08 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12378?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16506460#comment-16506460
 ] 

ASF subversion and git services commented on SOLR-12378:


Commit 9b5dd15471a979ef4e5f197c6673e0e324b2f24d in lucene-solr's branch 
refs/heads/master from [~ctargett]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=9b5dd15 ]

SOLR-12378: Ref Guide: reformat parameter list; break up big paragraph; fix 
typos








Re: Propose hiding/removing JIRA Environment text input

2018-06-08 Thread Steve Rowe
+1 to try to fix the form ourselves, thanks Cassandra.  I think putting 
Description above Environment will do the trick.  (I just created an issue and 
put the description in the environment field…)

--
Steve
www.lucidworks.com

> On Jun 8, 2018, at 8:44 AM, Cassandra Targett  wrote:
> 
> I've been debating saying something about this too - I think it happened when 
> INFRA added some text to direct users to use the mailing list or IRC if they 
> really have a support question instead of a bug (INFRA-16507).
> 
> The most basic solution is a simple re-ordering of the form, which in JIRA is 
> really easy to do. We could put the environment field near the bottom and if 
> someone is paying attention to the form and wants to fill it in, fine, but 
> the rest of us can get at the most commonly used/needed fields quicker.
> 
> As I was writing that I thought I'd refresh my memory of where screen editing 
> is done in JIRA, and it looks like those of us with committer status have 
> access to edit that form. So we can solve this quickly, and probably we can 
> do it on our own without asking INFRA.
> 
> If we come to consensus on either burying or removing the field, I'd be happy 
> to be the one that makes the change.
> 
> On Fri, Jun 8, 2018 at 7:24 AM David Smiley  wrote:
> Many of us have accidentally added a long-form description of our JIRA issues 
> into the Environment field of JIRA instead of the Description.  I think we 
> can agree this is pretty annoying.  It seems to have been happening more 
> lately with a change to JIRA that for whatever reason has made it more 
> visually tempting to start typing there.  I want to arrange for some sort of 
> fix with infra.  I'm willing to work with them to explore what can be done.  
> But what should we propose infra do exactly?  I'd like to get a sense of that 
> here with our community first.
> 
> IMO, I don't think a dedicated Environment input field is useful when someone 
> could just as easily type anything pertinent into the description field of a 
> bug report.  Fewer input fields mean a simpler JIRA UI -- a good thing IMO.  
> But since it's been used in the past, it may be impossible to actually remove 
> it while keeping the text on old issues.  Nonetheless I'm ambivalent if it 
> were to be outright removed and others here want this since I think it's of 
> such low value that data loss wouldn't bother me.
> 
> Can it be retained as read-only on display but otherwise not editable? 
> I'd like that.
> 
> Perhaps the path of least change and thus "safest" path is for it to be 
> removed from the "Create Issue" screen, yet retain it on other screens for 
> those that are fans of adding/editing it?
> 
> ~ David
> -- 
> Lucene/Solr Search Committer, Consultant, Developer, Author, Speaker
> LinkedIn: http://linkedin.com/in/davidwsmiley | Book: 
> http://www.solrenterprisesearchserver.com





[jira] [Created] (SOLR-12471) TestSolr4Spatial2.testLLPDecodeIsStableAndPrecise() reproducing failure

2018-06-08 Thread Steve Rowe (JIRA)
Steve Rowe created SOLR-12471:
-

 Summary: TestSolr4Spatial2.testLLPDecodeIsStableAndPrecise() 
reproducing failure
 Key: SOLR-12471
 URL: https://issues.apache.org/jira/browse/SOLR-12471
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
  Components: Tests
 Environment: From 
[https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/2081/] - reproduced 5/5 
iterations:

{noformat}
   [junit4]   2> NOTE: reproduce with: ant test  -Dtestcase=TestSolr4Spatial2 
-Dtests.method=testLLPDecodeIsStableAndPrecise -Dtests.seed=6CFF59F4465B6A88 
-Dtests.multiplier=3 -Dtests.slow=true 
-Dtests.locale=ja-JP-u-ca-japanese-x-lvariant-JP 
-Dtests.timezone=Indian/Maldives -Dtests.asserts=true 
-Dtests.file.encoding=ISO-8859-1
   [junit4] FAILURE 0.02s J1 | 
TestSolr4Spatial2.testLLPDecodeIsStableAndPrecise 
{seed=[6CFF59F4465B6A88:689387D5F6F69C45]} <<<
   [junit4]> Throwable #1: java.lang.AssertionError: deltaCm too high: 
1.3856821572729467
   [junit4]>at 
__randomizedtesting.SeedInfo.seed([6CFF59F4465B6A88:689387D5F6F69C45]:0)
   [junit4]>at 
org.apache.solr.search.TestSolr4Spatial2.testLLPDecodeIsStableAndPrecise(TestSolr4Spatial2.java:171)
   [junit4]>at java.lang.Thread.run(Thread.java:748)
{noformat}
Reporter: Steve Rowe









[jira] [Updated] (SOLR-12471) TestSolr4Spatial2.testLLPDecodeIsStableAndPrecise() reproducing failure

2018-06-08 Thread Steve Rowe (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12471?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Rowe updated SOLR-12471:
--
Environment: (was: From 
[https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/2081/] - reproduced 5/5 
iterations:

{noformat}
   [junit4]   2> NOTE: reproduce with: ant test  -Dtestcase=TestSolr4Spatial2 
-Dtests.method=testLLPDecodeIsStableAndPrecise -Dtests.seed=6CFF59F4465B6A88 
-Dtests.multiplier=3 -Dtests.slow=true 
-Dtests.locale=ja-JP-u-ca-japanese-x-lvariant-JP 
-Dtests.timezone=Indian/Maldives -Dtests.asserts=true 
-Dtests.file.encoding=ISO-8859-1
   [junit4] FAILURE 0.02s J1 | 
TestSolr4Spatial2.testLLPDecodeIsStableAndPrecise 
{seed=[6CFF59F4465B6A88:689387D5F6F69C45]} <<<
   [junit4]> Throwable #1: java.lang.AssertionError: deltaCm too high: 
1.3856821572729467
   [junit4]>at 
__randomizedtesting.SeedInfo.seed([6CFF59F4465B6A88:689387D5F6F69C45]:0)
   [junit4]>at 
org.apache.solr.search.TestSolr4Spatial2.testLLPDecodeIsStableAndPrecise(TestSolr4Spatial2.java:171)
   [junit4]>at java.lang.Thread.run(Thread.java:748)
{noformat})

> TestSolr4Spatial2.testLLPDecodeIsStableAndPrecise() reproducing failure
> ---
>
> Key: SOLR-12471
> URL: https://issues.apache.org/jira/browse/SOLR-12471
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public (Default Security Level. Issues are Public) 
>  Components: Tests
>Reporter: Steve Rowe
>Priority: Major
>







[jira] [Updated] (SOLR-12471) TestSolr4Spatial2.testLLPDecodeIsStableAndPrecise() reproducing failure

2018-06-08 Thread Steve Rowe (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12471?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Rowe updated SOLR-12471:
--
Description: 
From [https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/2081/] - 
reproduced 5/5 iterations:

{noformat}
   [junit4]   2> NOTE: reproduce with: ant test  -Dtestcase=TestSolr4Spatial2 
-Dtests.method=testLLPDecodeIsStableAndPrecise -Dtests.seed=6CFF59F4465B6A88 
-Dtests.multiplier=3 -Dtests.slow=true 
-Dtests.locale=ja-JP-u-ca-japanese-x-lvariant-JP 
-Dtests.timezone=Indian/Maldives -Dtests.asserts=true 
-Dtests.file.encoding=ISO-8859-1
   [junit4] FAILURE 0.02s J1 | 
TestSolr4Spatial2.testLLPDecodeIsStableAndPrecise 
{seed=[6CFF59F4465B6A88:689387D5F6F69C45]} <<<
   [junit4]> Throwable #1: java.lang.AssertionError: deltaCm too high: 
1.3856821572729467
   [junit4]>at 
__randomizedtesting.SeedInfo.seed([6CFF59F4465B6A88:689387D5F6F69C45]:0)
   [junit4]>at 
org.apache.solr.search.TestSolr4Spatial2.testLLPDecodeIsStableAndPrecise(TestSolr4Spatial2.java:171)
   [junit4]>at java.lang.Thread.run(Thread.java:748)
{noformat}

> TestSolr4Spatial2.testLLPDecodeIsStableAndPrecise() reproducing failure
> ---
>
> Key: SOLR-12471
> URL: https://issues.apache.org/jira/browse/SOLR-12471
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public (Default Security Level. Issues are Public) 
>  Components: Tests
>Reporter: Steve Rowe
>Priority: Major
>
> From [https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/2081/] - 
> reproduced 5/5 iterations:
> {noformat}
>[junit4]   2> NOTE: reproduce with: ant test  -Dtestcase=TestSolr4Spatial2 
> -Dtests.method=testLLPDecodeIsStableAndPrecise -Dtests.seed=6CFF59F4465B6A88 
> -Dtests.multiplier=3 -Dtests.slow=true 
> -Dtests.locale=ja-JP-u-ca-japanese-x-lvariant-JP 
> -Dtests.timezone=Indian/Maldives -Dtests.asserts=true 
> -Dtests.file.encoding=ISO-8859-1
>[junit4] FAILURE 0.02s J1 | 
> TestSolr4Spatial2.testLLPDecodeIsStableAndPrecise 
> {seed=[6CFF59F4465B6A88:689387D5F6F69C45]} <<<
>[junit4]> Throwable #1: java.lang.AssertionError: deltaCm too high: 
> 1.3856821572729467
>[junit4]>  at 
> __randomizedtesting.SeedInfo.seed([6CFF59F4465B6A88:689387D5F6F69C45]:0)
>[junit4]>  at 
> org.apache.solr.search.TestSolr4Spatial2.testLLPDecodeIsStableAndPrecise(TestSolr4Spatial2.java:171)
>[junit4]>  at java.lang.Thread.run(Thread.java:748)
> {noformat}






[jira] [Commented] (SOLR-12470) Search Rate Trigger created more than 3 replicas

2018-06-08 Thread Andrzej Bialecki (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16506437#comment-16506437
 ] 

Andrzej Bialecki  commented on SOLR-12470:
--

I'll look into it more closely - a quick comment for now:

bq. My first expectation was that I'd see 3 docs but I saw 4 docs. Curious why 
it's 4 ( the docs are attached as 4_docs.json )

These are the STARTED and SUCCEEDED events produced by two listeners - your 
listener and the default one that is always created alongside a trigger. 
Admittedly, there should be an event property that contains the listener name; 
at the moment it's impossible to figure out which listener produced which event.

Two replicas on the same node could be caused by the cluster layout that exists 
just before each operation, combined with the autoscaling policy. 
AddReplicaSuggester simulates how adding a replica affects the number of cores 
and the disk space on each node, and assigns replicas to nodes so as to 
minimize the violations.
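A toy version of that "assign to minimize violations" idea, reduced to a single metric, might look like the sketch below. `ToyPlacement`, `pickNode`, and `addReplica` are hypothetical names; the real AddReplicaSuggester evaluates the full autoscaling policy (cores, disk space, and user-defined rules), not just core counts.

```java
import java.util.Map;

public class ToyPlacement {
    /** Pick the node with the fewest cores, i.e. the one where adding a
     *  replica causes the least imbalance under a cores-only policy. */
    static String pickNode(Map<String, Integer> coresPerNode) {
        return coresPerNode.entrySet().stream()
                .min(Map.Entry.comparingByValue())
                .orElseThrow(IllegalStateException::new)
                .getKey();
    }

    /** Simulate adding a replica: choose a node, then update the layout so
     *  the next suggestion sees the new state. */
    static String addReplica(Map<String, Integer> coresPerNode) {
        String node = pickNode(coresPerNode);
        coresPerNode.merge(node, 1, Integer::sum);
        return node;
    }
}
```

Note the importance of updating the layout between suggestions: if compute_plan is re-run against a stale layout (as can happen when execute_plan never applies the changes), the same "emptiest" node keeps being suggested, which would explain repeated suggestions for one node.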

> Search Rate Trigger created more than 3 replicas
> 
>
> Key: SOLR-12470
> URL: https://issues.apache.org/jira/browse/SOLR-12470
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public (Default Security Level. Issues are Public) 
>  Components: AutoScaling
>Reporter: Varun Thacker
>Assignee: Andrzej Bialecki 
>Priority: Major
> Attachments: 4_docs.json, bug_report.txt, graph_view.png, 
> multiple_replicas.zip, system_docs.json
>
>
> Here's the trigger that I created . At this point the collection was one 
> shard and one replica ( on node3 )
> {code:java}
> curl -X POST -H 'Content-type:application/json' --data-binary '{
> "set-trigger": {
> "name" : "search_rate_trigger",
> "event" : "searchRate",
> "collection" : "test_rate_trigger",
> "rate" : 1.0,
> "waitFor" : "1m",
> "enabled" : true,
> "actions" : [
> {
> "name" : "compute_plan",
> "class": "solr.ComputePlanAction"
> },
> {
> "name" : "execute_plan",
> "class": "solr.ExecutePlanAction"
> }
> ]
> }
> }' http://localhost:8983/solr/admin/autoscaling{code}
>  
> I also had a trigger listener setup as I was testing the listener feature
> {code:java}
> curl -X POST -H 'Content-type:application/json' --data-binary '{
> "set-listener": {
> "name": "search_rate_listener",
> "trigger": "search_rate_trigger",
> "stage": ["STARTED", "ABORTED", "SUCCEEDED", "FAILED"],
> "class": "solr.SystemLogListener"
> }
> }' http://localhost:8983/solr/admin/autoscaling{code}
>  
> I ran a script to fire queries every 100 ms. The index didn't have any docs, 
> so it's a simple match-all query
> {code:java}
> while [ 1 ]
> do
> curl -s "http://localhost:8984/solr/test_rate_trigger/select/?q=*:*" > /dev/null
> sleep .1
> done{code}
> After a few minutes I see 4 replicas being created.
> Attaching logs from all 4 nodes. It should be fairly easy to reproduce with 
> the above-mentioned steps.
> Also attaching all the docs from the .system collection for reference.
> Here's another interesting thing I noticed: I re-created the setup, but this 
> time removed the execute_plan part.
> Now every minute the compute plan action tries to create 3 replicas. What I 
> found interesting is that it was trying to create two replicas on the same 
> node.
> Does this look like a separate bug?
> {code:java}
> INFO - 2018-06-08 03:41:32.586; [ ] org.apache.solr.servlet.HttpSolrCall; 
> [admin] webapp=null path=/admin/metrics 
> params={wt=javabin=2=solr.jvm:os.processCpuLoad=solr.node:CONTAINER.fs.coreRoot.usableSpace=solr.jvm:os.systemLoadAverage=solr.jvm:memory.heap.used}
>  status=0 QTime=0
> INFO - 2018-06-08 03:41:40.909; [ ] org.apache.solr.servlet.HttpSolrCall; 
> [admin] webapp=null path=/admin/metrics 
> params={prefix=CONTAINER.fs.usableSpace,CORE.coreName=javabin=2=solr.node,solr.core}
>  status=0 QTime=1
> INFO - 2018-06-08 03:41:40.932; [ ] 
> org.apache.solr.cloud.autoscaling.ComputePlanAction; Computed Plan: 
> action=ADDREPLICA=test_rate_trigger=shard1=127.94.0.1:8984_solr=NRT
> INFO - 2018-06-08 03:41:40.933; [ ] 
> org.apache.solr.cloud.autoscaling.ComputePlanAction; Computed Plan: 
> action=ADDREPLICA=test_rate_trigger=shard1=127.94.0.1:8983_solr=NRT
> INFO - 2018-06-08 03:41:40.934; [ ] 
> org.apache.solr.cloud.autoscaling.ComputePlanAction; Computed Plan: 
> action=ADDREPLICA=test_rate_trigger=shard1=127.94.0.1:8983_solr=NRT
> INFO - 2018-06-08 03:41:40.934; [ ] 
> org.apache.solr.client.solrj.cloud.autoscaling.PolicyHelper; returnSession, 
> curr-time 9184331 sessionWrapper.createTime 9184324085271, 
> this.sessionWrapper.createTime 9184324085271
> INFO - 2018-06-08 03:42:32.604; [ ] org.apache.solr.servlet.HttpSolrCall; 
> [admin] webapp=null path=/admin/metrics 
> 

[jira] [Assigned] (SOLR-12470) Search Rate Trigger created more than 3 replicas

2018-06-08 Thread Andrzej Bialecki (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12470?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrzej Bialecki  reassigned SOLR-12470:


Assignee: Andrzej Bialecki 


[jira] [Resolved] (SOLR-12438) Improve status reporting of metrics history API

2018-06-08 Thread Andrzej Bialecki (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12438?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrzej Bialecki  resolved SOLR-12438.
--
Resolution: Fixed

> Improve status reporting of metrics history API
> ---
>
> Key: SOLR-12438
> URL: https://issues.apache.org/jira/browse/SOLR-12438
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: metrics
>Reporter: Andrzej Bialecki 
>Assignee: Andrzej Bialecki 
>Priority: Major
> Fix For: 7.4, master (8.0)
>
> Attachments: SOLR-12438.patch, SOLR-12438.patch
>
>
> In an offline conversation with [~janhoy] we identified the following areas 
> of improvement to the metrics history API in order to increase its 
> user-friendliness and provide more details about its status.
>  
> * there are three possible states for the API: inactive (when not in cloud 
> mode), in-memory only (when {{.system}} collection doesn’t exist), and 
> persisted when it’s both active and persisted in Solr. The 
> /admin/metrics/history endpoint should give some hint about this status, such 
> as "mode":"memory/index", "active": true|false. Or a separate action=status 
> just to poll status? Currently when the API is inactive it simply returns 404 
> Not Found.
> * when in "memory" mode a call to /admin/metrics/history on a non-overseer 
> node should forward the request to the overseer, so that the client does not 
> need to care what mode it is in - kind of like how a query works in 
> distributed mode.
> * better documentation for the API behavior in each mode.
> * perhaps if mode=memory, there could also be a "message":"Warning, metrics 
> history is not being persisted. Please create the .system collection to start 
> persisting history"
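A hypothetical shape for the proposed status payload (field names are illustrative only, not a committed API), covering the "memory" mode together with the suggested warning message:

```json
{
  "responseHeader": { "status": 0, "QTime": 1 },
  "status": {
    "active": true,
    "mode": "memory",
    "message": "Warning: metrics history is not being persisted. Create the .system collection to start persisting history."
  }
}
```

In "index" mode the `message` field would be absent, and when not in cloud mode `active` would be `false` instead of the current 404 Not Found.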



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12438) Improve status reporting of metrics history API

2018-06-08 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12438?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16506424#comment-16506424
 ] 

ASF subversion and git services commented on SOLR-12438:


Commit 943e78e7e45e29661fb5ef1a3cb2d315ab348165 in lucene-solr's branch 
refs/heads/branch_7_4 from Andrzej Bialecki
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=943e78e ]

SOLR-12438: Improve status reporting of metrics history API.





[jira] [Updated] (SOLR-12470) Search Rate Trigger created more than 3 replicas

2018-06-08 Thread Varun Thacker (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12470?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Thacker updated SOLR-12470:
-
Attachment: system_docs.json
multiple_replicas.zip
graph_view.png
bug_report.txt
4_docs.json


[jira] [Created] (SOLR-12470) Search Rate Trigger created more than 3 replicas

2018-06-08 Thread Varun Thacker (JIRA)
Varun Thacker created SOLR-12470:


 Summary: Search Rate Trigger created more than 3 replicas
 Key: SOLR-12470
 URL: https://issues.apache.org/jira/browse/SOLR-12470
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
  Components: AutoScaling
Reporter: Varun Thacker
 Attachments: 4_docs.json, bug_report.txt, graph_view.png, 
multiple_replicas.zip, system_docs.json

Here's the trigger that I created. At this point the collection had one shard 
and one replica (on node3)
{code:java}
curl -X POST -H 'Content-type:application/json' --data-binary '{
"set-trigger": {
"name" : "search_rate_trigger",
"event" : "searchRate",
"collection" : "test_rate_trigger",
"rate" : 1.0,
"waitFor" : "1m",
"enabled" : true,
"actions" : [
{
"name" : "compute_plan",
"class": "solr.ComputePlanAction"
},
{
"name" : "execute_plan",
"class": "solr.ExecutePlanAction"
}
]
}
}' http://localhost:8983/solr/admin/autoscaling{code}
 

I also had a trigger listener set up, as I was testing the listener feature
{code:java}
curl -X POST -H 'Content-type:application/json' --data-binary '{
"set-listener": {
"name": "search_rate_listener",
"trigger": "search_rate_trigger",
"stage": ["STARTED", "ABORTED", "SUCCEEDED", "FAILED"],
"class": "solr.SystemLogListener"
}
}' http://localhost:8983/solr/admin/autoscaling{code}
 

I ran a script to fire queries every 100 ms. The index didn't have any docs, 
so it's a simple match-all query
{code:java}
while [ 1 ]
do
curl -s "http://localhost:8984/solr/test_rate_trigger/select/?q=*:*" > /dev/null
sleep .1
done{code}

After a few minutes I see 4 replicas being created.

Attaching logs from all 4 nodes. It should be fairly easy to reproduce with the 
above-mentioned steps.

Also attaching all the docs from the .system collection for reference.

Here's another interesting thing I noticed: I re-created the setup, but this 
time removed the execute_plan part.

Now every minute the compute plan action tries to create 3 replicas. What I 
found interesting is that it was trying to create two replicas on the same 
node.
Does this look like a separate bug?
{code:java}
INFO - 2018-06-08 03:41:32.586; [ ] org.apache.solr.servlet.HttpSolrCall; 
[admin] webapp=null path=/admin/metrics 
params={wt=javabin=2=solr.jvm:os.processCpuLoad=solr.node:CONTAINER.fs.coreRoot.usableSpace=solr.jvm:os.systemLoadAverage=solr.jvm:memory.heap.used}
 status=0 QTime=0
INFO - 2018-06-08 03:41:40.909; [ ] org.apache.solr.servlet.HttpSolrCall; 
[admin] webapp=null path=/admin/metrics 
params={prefix=CONTAINER.fs.usableSpace,CORE.coreName=javabin=2=solr.node,solr.core}
 status=0 QTime=1
INFO - 2018-06-08 03:41:40.932; [ ] 
org.apache.solr.cloud.autoscaling.ComputePlanAction; Computed Plan: 
action=ADDREPLICA=test_rate_trigger=shard1=127.94.0.1:8984_solr=NRT
INFO - 2018-06-08 03:41:40.933; [ ] 
org.apache.solr.cloud.autoscaling.ComputePlanAction; Computed Plan: 
action=ADDREPLICA=test_rate_trigger=shard1=127.94.0.1:8983_solr=NRT
INFO - 2018-06-08 03:41:40.934; [ ] 
org.apache.solr.cloud.autoscaling.ComputePlanAction; Computed Plan: 
action=ADDREPLICA=test_rate_trigger=shard1=127.94.0.1:8983_solr=NRT
INFO - 2018-06-08 03:41:40.934; [ ] 
org.apache.solr.client.solrj.cloud.autoscaling.PolicyHelper; returnSession, 
curr-time 9184331 sessionWrapper.createTime 9184324085271, 
this.sessionWrapper.createTime 9184324085271
INFO - 2018-06-08 03:42:32.604; [ ] org.apache.solr.servlet.HttpSolrCall; 
[admin] webapp=null path=/admin/metrics 
params={wt=javabin=2=solr.jvm:os.processCpuLoad=solr.node:CONTAINER.fs.coreRoot.usableSpace=solr.jvm:os.systemLoadAverage=solr.jvm:memory.heap.used}
 status=0 QTime=0
INFO - 2018-06-08 03:42:41.525; [ ] org.apache.solr.servlet.HttpSolrCall; 
[admin] webapp=null path=/admin/metrics 
params={prefix=CONTAINER.fs.usableSpace,CORE.coreName=javabin=2=solr.node,solr.core}
 status=0 QTime=0
INFO - 2018-06-08 03:42:41.559; [ ] 
org.apache.solr.cloud.autoscaling.ComputePlanAction; Computed Plan: 
action=ADDREPLICA=test_rate_trigger=shard1=127.94.0.1:8984_solr=NRT
INFO - 2018-06-08 03:42:41.560; [ ] 
org.apache.solr.cloud.autoscaling.ComputePlanAction; Computed Plan: 
action=ADDREPLICA=test_rate_trigger=shard1=127.94.0.1:8983_solr=NRT
INFO - 2018-06-08 03:42:41.560; [ ] 
org.apache.solr.cloud.autoscaling.ComputePlanAction; Computed Plan: 
action=ADDREPLICA=test_rate_trigger=shard1=127.94.0.1:8983_solr=NRT
INFO - 2018-06-08 03:42:41.561; [ ] 
org.apache.solr.client.solrj.cloud.autoscaling.PolicyHelper; returnSession, 
curr-time 9244959 sessionWrapper.createTime 9244956725861, 
this.sessionWrapper.createTime 9244956725861
INFO - 2018-06-08 03:43:32.622; [ ] org.apache.solr.servlet.HttpSolrCall; 
[admin] webapp=null path=/admin/metrics 

[jira] [Commented] (SOLR-12361) Change _childDocuments to Map

2018-06-08 Thread David Smiley (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12361?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16506274#comment-16506274
 ] 

David Smiley commented on SOLR-12361:
-

New patch:

* removed the overloaded getLuceneDocument(isInplaceUpdate), since we can 
simply call isInPlaceUpdate() internally
* added an "ignoreNestedDocs" flag to DocumentBuilder.toDocument() to toggle 
whether it throws an exception on nested documents or ignores them.  Further, 
AddUpdateCommand.getLuceneDocsIfNested() now checks this early and returns null 
instead of throwing an exception later in the method.

I left the document equality issue for another time, commenting on SOLR-5265: 
https://issues.apache.org/jira/browse/SOLR-5265?focusedCommentId=16506268&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16506268

I think this is ready.  The tests pass, and precommit is running now.  What do 
you think [~moshebla]?  How should I refer to you in CHANGES.txt?

> Change _childDocuments to Map
> -
>
> Key: SOLR-12361
> URL: https://issues.apache.org/jira/browse/SOLR-12361
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: mosh
>Priority: Major
> Attachments: SOLR-12361.patch, SOLR-12361.patch, SOLR-12361.patch, 
> SOLR-12361.patch
>
>  Time Spent: 10h
>  Remaining Estimate: 0h
>
> During the discussion on SOLR-12298, there was a proposal to change 
> _childDocuments in SolrDocumentBase to a Map, to incorporate the relationship 
> between the parent and its child documents.






[JENKINS] Lucene-Solr-7.x-Linux (32bit/jdk1.8.0_172) - Build # 2081 - Unstable!

2018-06-08 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/2081/
Java: 32bit/jdk1.8.0_172 -client -XX:+UseParallelGC

11 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.SaslZkACLProviderTest

Error Message:
5 threads leaked from SUITE scope at 
org.apache.solr.cloud.SaslZkACLProviderTest: 1) Thread[id=17132, 
name=changePwdReplayCache.data, state=TIMED_WAITING, 
group=TGRP-SaslZkACLProviderTest] at sun.misc.Unsafe.park(Native 
Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
 at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
at java.lang.Thread.run(Thread.java:748)2) Thread[id=17133, 
name=kdcReplayCache.data, state=TIMED_WAITING, 
group=TGRP-SaslZkACLProviderTest] at sun.misc.Unsafe.park(Native 
Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
 at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
at java.lang.Thread.run(Thread.java:748)3) Thread[id=17134, 
name=ou=system.data, state=TIMED_WAITING, group=TGRP-SaslZkACLProviderTest] 
at sun.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
 at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
at java.lang.Thread.run(Thread.java:748)4) Thread[id=17130, 
name=apacheds, state=WAITING, group=TGRP-SaslZkACLProviderTest] at 
java.lang.Object.wait(Native Method) at 
java.lang.Object.wait(Object.java:502) at 
java.util.TimerThread.mainLoop(Timer.java:526) at 
java.util.TimerThread.run(Timer.java:505)5) Thread[id=17131, 
name=groupCache.data, state=TIMED_WAITING, group=TGRP-SaslZkACLProviderTest]
 at sun.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
 at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
at java.lang.Thread.run(Thread.java:748)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 5 threads leaked from SUITE 
scope at org.apache.solr.cloud.SaslZkACLProviderTest: 
   1) Thread[id=17132, name=changePwdReplayCache.data, state=TIMED_WAITING, 
group=TGRP-SaslZkACLProviderTest]
at sun.misc.Unsafe.park(Native Method)
at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
at 

[jira] [Updated] (SOLR-12361) Change _childDocuments to Map

2018-06-08 Thread David Smiley (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12361?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley updated SOLR-12361:

Attachment: SOLR-12361.patch




[jira] [Commented] (SOLR-5265) Add backward compatibility tests to JavaBinCodec's format.

2018-06-08 Thread David Smiley (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-5265?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16506268#comment-16506268
 ] 

David Smiley commented on SOLR-5265:


[~varunthacker] can you explain why you removed equals/hashCode from 
SolrDocument (today, it'd be SolrDocumentBase)? (related to SOLR-6048.) When 
writing tests it's not obvious that equals() should not be used, nor that 
{{SolrTestCaseJ4.compareSolrInputDocument}} exists. Furthermore, 
SolrDocumentBase implements Map, and Map spells out equals & hashCode semantics 
that we don't comply with (we don't meet the interface's contract). Sure, we 
_get away with it_, but :-/
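To illustrate the contract issue: java.util.Map specifies that two maps with equal entry sets must compare equal. The class below is a hypothetical stand-in (not Solr's SolrDocumentBase) whose equals() falls back to object identity, which is exactly the kind of violation described above.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical Map implementation whose equals()/hashCode() use object
// identity, violating the java.util.Map contract that two maps with equal
// entry sets must be equal.
class IdentityEqualsMap<K, V> extends HashMap<K, V> {
    @Override public boolean equals(Object o) { return this == o; }
    @Override public int hashCode() { return System.identityHashCode(this); }
}

public class MapContractDemo {
    public static void main(String[] args) {
        Map<String, String> a = new IdentityEqualsMap<>();
        Map<String, String> b = new IdentityEqualsMap<>();
        a.put("id", "1");
        b.put("id", "1");
        // A contract-honoring Map treats the same entries as equal...
        System.out.println(new HashMap<>(a).equals(new HashMap<>(b))); // true
        // ...but the identity-based map does not:
        System.out.println(a.equals(b));                               // false
    }
}
```

This is why test helpers such as compareSolrInputDocument become necessary once equals() stops following the interface's semantics.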

> Add backward compatibility tests to JavaBinCodec's format.
> --
>
> Key: SOLR-5265
> URL: https://issues.apache.org/jira/browse/SOLR-5265
> Project: Solr
>  Issue Type: Test
>Reporter: Adrien Grand
>Assignee: Noble Paul
>Priority: Blocker
> Fix For: 4.8, 6.0
>
> Attachments: SOLR-5265.patch, SOLR-5265.patch, SOLR-5265.patch, 
> SOLR-5265.patch, javabin_backcompat.bin
>
>
> Since Solr guarantees backward compatibility of JavaBinCodec's format between 
> releases, we should have tests for it.






[jira] [Commented] (SOLR-6734) Standalone solr as *two* applications -- Solr and a controlling agent

2018-06-08 Thread Shawn Heisey (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-6734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16506261#comment-16506261
 ] 

Shawn Heisey commented on SOLR-6734:


bq. If you want the agent process (service) to be able to do e.g. a 
cluster-wide shutdown or rolling restart, would it not then need to listen to 
commands from the network?

I did think about this, although admittedly it didn't occur to me early on.

As I thought about your question just now, I was initially thinking of having 
the agent use the Solr API.  But then I thought of cases where that wouldn't 
work.  Communicating cluster-wide operations through the Solr API would work 
for shutdown or restart, but after a full shutdown it would not be possible 
for one agent to start the whole cluster back up.  Also, if a Solr node died, 
only the agent on that machine would be able to start it back up.

So the answer to your question is yes, it must be able to listen to commands 
from the network.  If we use a symmetric cipher with the authentication 
password as a pre-shared key, that should take care of encryption and 
authentication in one step, without having to worry about keystores.

And speaking of keystores ... when things progress to that point, I have a few 
thoughts about making life easier for users who want Solr to use https.


> Standalone solr as *two* applications -- Solr and a controlling agent
> -
>
> Key: SOLR-6734
> URL: https://issues.apache.org/jira/browse/SOLR-6734
> Project: Solr
>  Issue Type: Sub-task
>Reporter: Shawn Heisey
>Priority: Major
>
> In a message to the dev list outlining reasons to switch from a webapp to a 
> standalone app, Mark Miller included the idea of making Solr into two 
> applications, rather than just one.  There would be Solr itself, and an agent 
> to control Solr.
> http://mail-archives.apache.org/mod_mbox/lucene-dev/201305.mbox/%3C807476C6-E4C3-4E7E-9F67-2BECB63990DE%40gmail.com%3E






Re: Propose hiding/removing JIRA Environment text input

2018-06-08 Thread Shawn Heisey
On 6/8/2018 6:24 AM, David Smiley wrote:
> Many of us have accidentally added a long-form description of our JIRA
> issues into the Environment field of JIRA instead of the Description. 
> I think we can agree this is pretty annoying.  It seems to have been
> happening more lately with a change to JIRA that for whatever reason
> has made it more visually tempting to start typing there.  I want to
> arrange for some sort of fix with infra.  I'm willing to work with
> them to explore what can be done.  But what should we propose infra do
> exactly?  I'd like to get a sense of that here with our community first.

I think a free-form text box for environment is important, and I
wouldn't want to get rid of it entirely.

Reordering things so description is the first large text box is one
option, and might be the best idea.  Other ideas:

* Put some kind of button or other control in place so the user has to
click on something to open up an environment text box.

* Use a visual cue of some kind to emphasize the description box and
draw the eye to it.  Borders, colors, etc.

* Provide a series of radio buttons, checkboxes, and/or dropdowns for a
user to choose various aspects of their environment from pre-defined
choices.  The disadvantage is that this would require considerable
maintenance as the project and possible environments change.  Also, a user
would not be able to describe an aspect of their environment that might be
a little unusual.

I have no idea how much control Infra actually has over this.  It might
require an upstream change in Jira.

Thanks,
Shawn





[JENKINS] Lucene-Solr-Tests-master - Build # 2559 - Still Unstable

2018-06-08 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-master/2559/

1 tests failed.
FAILED:  
org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest.testMergeIntegration

Error Message:
did not finish processing in time

Stack Trace:
java.lang.AssertionError: did not finish processing in time
at 
__randomizedtesting.SeedInfo.seed([E4C977A370959EA8:B770351392840B52]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest.testMergeIntegration(IndexSizeTriggerTest.java:425)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)




Build Log:
[...truncated 13437 lines...]
   [junit4] Suite: org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest
   [junit4]   2> 1235044 INFO  
(SUITE-IndexSizeTriggerTest-seed#[E4C977A370959EA8]-worker) [] 
o.a.s.SolrTestCaseJ4 SecureRandom sanity checks: 
test.solr.allowed.securerandom=null & 

[jira] [Commented] (SOLR-12338) Replay buffering tlog in parallel

2018-06-08 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12338?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16506172#comment-16506172
 ] 

ASF subversion and git services commented on SOLR-12338:


Commit d1dbef5e4d1a1b2bfac75a59496f86d6edbbc16f in lucene-solr's branch 
refs/heads/branch_7_4 from [~ctargett]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=d1dbef5 ]

SOLR-12338: State default value more directly


> Replay buffering tlog in parallel
> -
>
> Key: SOLR-12338
> URL: https://issues.apache.org/jira/browse/SOLR-12338
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Cao Manh Dat
>Assignee: Cao Manh Dat
>Priority: Major
> Fix For: 7.4, master (8.0)
>
> Attachments: SOLR-12338.patch, SOLR-12338.patch, SOLR-12338.patch, 
> SOLR-12338.patch, SOLR-12338.patch, SOLR-12338.patch
>
>
> Since updates with different ids are independent, it is safe to replay 
> them in parallel. This will significantly reduce the recovery time of 
> replicas in high-load indexing environments. 
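The invariant behind this change is that same-id updates must keep their original order, while different ids may interleave freely. An illustrative sketch (not Solr's implementation): hash each document id onto a fixed worker so per-id order is preserved, then drain the workers concurrently.

```python
# Partition buffered updates by document id, then replay the
# partitions in parallel. Updates sharing an id land in the same
# bucket, so their relative order is preserved.
from concurrent.futures import ThreadPoolExecutor

def replay_parallel(updates, apply_fn, workers=4):
    buckets = [[] for _ in range(workers)]
    for update in updates:
        buckets[hash(update["id"]) % workers].append(update)

    def drain(bucket):
        # Each bucket is applied sequentially, keeping per-id order.
        for u in bucket:
            apply_fn(u)

    with ThreadPoolExecutor(max_workers=workers) as pool:
        list(pool.map(drain, buckets))

applied = []
replay_parallel([{"id": i % 3, "value": i} for i in range(9)],
                applied.append)
assert len(applied) == 9
```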






[jira] [Commented] (SOLR-12338) Replay buffering tlog in parallel

2018-06-08 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12338?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16506166#comment-16506166
 ] 

ASF subversion and git services commented on SOLR-12338:


Commit eb7bb2d90654ec15d25ba947e287bf7d96e07900 in lucene-solr's branch 
refs/heads/master from [~ctargett]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=eb7bb2d ]

SOLR-12338: State default value more directly


> Replay buffering tlog in parallel
> -
>
> Key: SOLR-12338
> URL: https://issues.apache.org/jira/browse/SOLR-12338
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Cao Manh Dat
>Assignee: Cao Manh Dat
>Priority: Major
> Fix For: 7.4, master (8.0)
>
> Attachments: SOLR-12338.patch, SOLR-12338.patch, SOLR-12338.patch, 
> SOLR-12338.patch, SOLR-12338.patch, SOLR-12338.patch
>
>
> Since updates with different ids are independent, it is safe to replay 
> them in parallel. This will significantly reduce the recovery time of 
> replicas in high-load indexing environments. 






[jira] [Commented] (SOLR-12338) Replay buffering tlog in parallel

2018-06-08 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12338?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16506171#comment-16506171
 ] 

ASF subversion and git services commented on SOLR-12338:


Commit 13cad54a3efb179fdb4da7528d3448b03989c75e in lucene-solr's branch 
refs/heads/branch_7x from [~ctargett]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=13cad54 ]

SOLR-12338: State default value more directly


> Replay buffering tlog in parallel
> -
>
> Key: SOLR-12338
> URL: https://issues.apache.org/jira/browse/SOLR-12338
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Cao Manh Dat
>Assignee: Cao Manh Dat
>Priority: Major
> Fix For: 7.4, master (8.0)
>
> Attachments: SOLR-12338.patch, SOLR-12338.patch, SOLR-12338.patch, 
> SOLR-12338.patch, SOLR-12338.patch, SOLR-12338.patch
>
>
> Since updates with different ids are independent, it is safe to replay 
> them in parallel. This will significantly reduce the recovery time of 
> replicas in high-load indexing environments. 






[jira] [Commented] (SOLR-12469) Use the term TLS instead of SSL in documentation

2018-06-08 Thread Cassandra Targett (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12469?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16506157#comment-16506157
 ] 

Cassandra Targett commented on SOLR-12469:
--

I guess you pinged me because you want me to make the changes you propose? I 
don't know enough about TLS or SSL or whatever to really understand what you're 
saying should be done. Or did you want to dig at me about search for the Ref 
Guide?

I will suggest instead that it seems you have a better grasp on what the page 
should say, so you could go ahead and change it if you think that's what it 
needs.

> Use the term TLS instead of SSL in documentation
> 
>
> Key: SOLR-12469
> URL: https://issues.apache.org/jira/browse/SOLR-12469
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Reporter: Jan Høydahl
>Priority: Minor
>  Labels: ssl, tls
>
> In our documentation we should use the correct term TLS instead of SSL. We 
> could still mention SSL in the text for searchability. We should probably not 
> rename the refguide page file name in 
> [https://lucene.apache.org/solr/guide/7_3/enabling-ssl.html] and the title of 
> this page could be "Enabling TLS / SSL" since our refguide search is 
> title-only right now :) 
> I'm not proposing to rename the {{solr.in.sh}} environment variables for SSL 
> or java code.
> [~ctargett]






[jira] [Commented] (SOLR-12458) ADLS support for SOLR

2018-06-08 Thread Mike Wingert (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16506131#comment-16506131
 ] 

Mike Wingert commented on SOLR-12458:
-

Updated patch with documentation

> ADLS support for SOLR
> -
>
> Key: SOLR-12458
> URL: https://issues.apache.org/jira/browse/SOLR-12458
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Hadoop Integration
>Affects Versions: master (8.0)
>Reporter: Mike Wingert
>Priority: Minor
> Fix For: master (8.0)
>
> Attachments: SOLR-12458.patch, SOLR-12458.patch
>
>
> This is to track ADLS support for SOLR.
> ADLS is an HDFS-like API available in Microsoft Azure.   
>  






[jira] [Updated] (SOLR-12458) ADLS support for SOLR

2018-06-08 Thread Mike Wingert (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12458?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Wingert updated SOLR-12458:

Attachment: SOLR-12458.patch

> ADLS support for SOLR
> -
>
> Key: SOLR-12458
> URL: https://issues.apache.org/jira/browse/SOLR-12458
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Hadoop Integration
>Affects Versions: master (8.0)
>Reporter: Mike Wingert
>Priority: Minor
> Fix For: master (8.0)
>
> Attachments: SOLR-12458.patch, SOLR-12458.patch
>
>
> This is to track ADLS support for SOLR.
> ADLS is an HDFS-like API available in Microsoft Azure.   
>  






[jira] [Commented] (LUCENE-8343) BlendedInfixSuggester bad score calculus for certain suggestion weights

2018-06-08 Thread Alessandro Benedetti (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8343?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16506129#comment-16506129
 ] 

Alessandro Benedetti commented on LUCENE-8343:
--

First of all, thanks again Adrien for your time.
I have done the work for the data type migration approach here:
[https://github.com/apache/lucene-solr/pull/398]
As expected, the patch touches many more files, but the 
BlendedInfixSuggester fix itself is much more elegant.

The drawbacks are:
- much more attention is needed to review the new patch
- it should be pretty safe, but I never feel fully comfortable introducing 
nulls unless I am super confident in my tests

If this approach is preferred and someone from the community commits to 
taking care of the review process, I am more than happy to spend more effort 
on this and make it production ready!

> BlendedInfixSuggester bad score calculus for certain suggestion weights
> ---
>
> Key: LUCENE-8343
> URL: https://issues.apache.org/jira/browse/LUCENE-8343
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/search
>Affects Versions: 7.3.1
>Reporter: Alessandro Benedetti
>Priority: Major
> Attachments: LUCENE-8343.patch, LUCENE-8343.patch, LUCENE-8343.patch
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Currently the BlendedInfixSuggester returns a (long) score to rank the 
> suggestions.
> This score is calculated as the product of:
> long *Weight*: the suggestion weight, taken from a document field; it can 
> be any long value (including 0, 1, ...)
> double *Coefficient*: 0 <= x <= 1, calculated from the position of the 
> match (the earlier, the better)
> The resulting score is a long, which means that at the moment any 
> weight < 10 can produce inconsistencies.
> *Edge cases* 
> Weight = 1
> Score = 1 (if we have a match at the beginning of the suggestion) or 0 
> (for any other match)
> Weight = 0
> Score = 0 (independently of the position match coefficient)
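The arithmetic behind the edge cases can be reproduced in a few lines (a sketch, not Lucene's code): casting the weight-times-coefficient product back to an integer discards the fractional positional signal whenever the weight is small.

```python
# Minimal reproduction of the LUCENE-8343 truncation: a long-valued
# score computed as weight * coefficient loses the positional
# coefficient entirely for small weights.
def blended_score_long(weight: int, coefficient: float) -> int:
    # Mirrors the lossy long-typed score described in the issue.
    return int(weight * coefficient)

def blended_score_double(weight: int, coefficient: float) -> float:
    # The proposed fix: keep the score as a double.
    return weight * coefficient

assert blended_score_long(1, 0.5) == 0      # positional signal lost
assert blended_score_long(0, 1.0) == 0      # weight 0 always scores 0
assert blended_score_double(1, 0.5) == 0.5  # positional signal kept
```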






[JENKINS] Lucene-Solr-repro - Build # 781 - Still Unstable

2018-06-08 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-repro/781/

[...truncated 29 lines...]
[repro] Jenkins log URL: 
https://builds.apache.org/job/Lucene-Solr-NightlyTests-7.x/237/consoleText

[repro] Revision: e691bf734270296ae31cd9d330f3ce0137ec5124

[repro] Ant options: -Dtests.multiplier=2 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-7.x/test-data/enwiki.random.lines.txt
[repro] Repro line:  ant test  -Dtestcase=RollingRestartTest 
-Dtests.method=test -Dtests.seed=A505572E8A8C81FF -Dtests.multiplier=2 
-Dtests.nightly=true -Dtests.slow=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-7.x/test-data/enwiki.random.lines.txt
 -Dtests.locale=ar-TN -Dtests.timezone=America/Virgin -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8

[repro] Repro line:  ant test  
-Dtestcase=TestTolerantUpdateProcessorRandomCloud 
-Dtests.method=testRandomUpdates -Dtests.seed=A505572E8A8C81FF 
-Dtests.multiplier=2 -Dtests.nightly=true -Dtests.slow=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-7.x/test-data/enwiki.random.lines.txt
 -Dtests.locale=he-IL -Dtests.timezone=Asia/Baghdad -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8

[repro] Repro line:  ant test  -Dtestcase=TestSolrCLIRunExample 
-Dtests.seed=A505572E8A8C81FF -Dtests.multiplier=2 -Dtests.nightly=true 
-Dtests.slow=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-7.x/test-data/enwiki.random.lines.txt
 -Dtests.locale=ar-SY -Dtests.timezone=Asia/Seoul -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8

[repro] Repro line:  ant test  -Dtestcase=SolrSlf4jReporterTest 
-Dtests.method=testReporter -Dtests.seed=A505572E8A8C81FF -Dtests.multiplier=2 
-Dtests.nightly=true -Dtests.slow=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-7.x/test-data/enwiki.random.lines.txt
 -Dtests.locale=fi-FI -Dtests.timezone=America/Phoenix -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8

[repro] Repro line:  ant test  -Dtestcase=UpdateRequestProcessorFactoryTest 
-Dtests.method=testUpdateDistribChainSkipping -Dtests.seed=A505572E8A8C81FF 
-Dtests.multiplier=2 -Dtests.nightly=true -Dtests.slow=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-7.x/test-data/enwiki.random.lines.txt
 -Dtests.locale=es-SV -Dtests.timezone=Asia/Famagusta -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8

[repro] Repro line:  ant test  -Dtestcase=ShardSplitTest 
-Dtests.method=testSplitShardWithRule -Dtests.seed=A505572E8A8C81FF 
-Dtests.multiplier=2 -Dtests.nightly=true -Dtests.slow=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-7.x/test-data/enwiki.random.lines.txt
 -Dtests.locale=ga-IE -Dtests.timezone=AST -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8

[repro] Repro line:  ant test  -Dtestcase=TestLargeCluster 
-Dtests.method=testAddNode -Dtests.seed=A505572E8A8C81FF -Dtests.multiplier=2 
-Dtests.nightly=true -Dtests.slow=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-7.x/test-data/enwiki.random.lines.txt
 -Dtests.locale=es-CR -Dtests.timezone=Australia/Victoria -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8

[repro] git rev-parse --abbrev-ref HEAD
[repro] git rev-parse HEAD
[repro] Initial local git branch/revision: 
01aeb3aa4a9311d7e1b06cf1153059fef22994a6
[repro] git fetch
[repro] git checkout e691bf734270296ae31cd9d330f3ce0137ec5124

[...truncated 2 lines...]
[repro] git merge --ff-only

[...truncated 1 lines...]
[repro] ant clean

[...truncated 6 lines...]
[repro] Test suites by module:
[repro]solr/core
[repro]   RollingRestartTest
[repro]   TestSolrCLIRunExample
[repro]   TestLargeCluster
[repro]   UpdateRequestProcessorFactoryTest
[repro]   SolrSlf4jReporterTest
[repro]   ShardSplitTest
[repro]   TestTolerantUpdateProcessorRandomCloud
[repro] ant compile-test

[...truncated 3318 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=35 
-Dtests.class="*.RollingRestartTest|*.TestSolrCLIRunExample|*.TestLargeCluster|*.UpdateRequestProcessorFactoryTest|*.SolrSlf4jReporterTest|*.ShardSplitTest|*.TestTolerantUpdateProcessorRandomCloud"
 -Dtests.showOutput=onerror -Dtests.multiplier=2 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-7.x/test-data/enwiki.random.lines.txt
 -Dtests.seed=A505572E8A8C81FF -Dtests.multiplier=2 -Dtests.nightly=true 
-Dtests.slow=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-7.x/test-data/enwiki.random.lines.txt
 -Dtests.locale=ar-TN -Dtests.timezone=America/Virgin -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8

[...truncated 33025 lines...]
[repro] Setting last failure code to 256

[repro] Failures:
[repro]   0/5 failed: 
org.apache.solr.cloud.TestTolerantUpdateProcessorRandomCloud
[repro]   0/5 failed: 

[jira] [Commented] (SOLR-12018) Ref Guide: Comment system is offline

2018-06-08 Thread Cassandra Targett (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12018?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16506117#comment-16506117
 ] 

Cassandra Targett commented on SOLR-12018:
--

bq. I guess the Apache one will eventually come back online

I honestly don't think it's going to. We were apparently the only project 
officially using it, so I think it will sit on the backburner forever.

I had an idea that maybe we should replace it with something like an annotation 
system where people can comment on specific lines or paragraphs instead of the 
entire page. Five minutes of vague and interrupted "research" led me to 
https://web.hypothes.is/, which appears to share our values - free, nonprofit, 
open source - but I really haven't thought about it more than just looking at 
their website for a couple minutes and watching a demo (which looked cool, but 
the devil is in the details).

> Ref Guide: Comment system is offline
> 
>
> Key: SOLR-12018
> URL: https://issues.apache.org/jira/browse/SOLR-12018
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Reporter: Cassandra Targett
>Assignee: Cassandra Targett
>Priority: Major
> Fix For: 7.4, master (8.0)
>
> Attachments: RefGuideCommentsBroken.png, SOLR-12018.patch
>
>
> The Ref Guide uses comments.apache.org to allow user comments. Sometime in 
> December/early January, it was taken offline. 
> I filed INFRA-15947 to ask about its long-term status, and recently got an 
> answer that the ETA is mid-March for a permanent INFRA-hosted system. 
> However, it's of course possible changes in priorities or other factors will 
> delay that timeline.
> Every Ref Guide page currently invites users to leave comments, but since the 
> whole Comments area is pulled in via JavaScript from a non-existent server, 
> there's no space to do so (see attached screenshot). While we wait for the 
> permanent server to be online, we have a couple of options:
> # Leave it the way it is and hopefully by mid-March it will be back
> # Change the text to tell users it's not working temporarily on all published 
> versions
> # Remove it from all the published versions and put it back when it's back
> I'm not a great fan of #2 or #3, because it'd be a bit of work for me to 
> backport changes to 4 branches and republish every guide just to fix it again 
> in a month or so. I'm fine with option #1 since I've known about it for about 
> a month at least and as far as I can tell no one else has noticed. But if 
> people feel strongly about it now that they know about it, we can figure 
> something out.
> If for some reason it takes longer than mid-March to get it back, or INFRA 
> chooses to stop supporting it entirely, this issue can morph into what we 
> should do for an alternative permanent solution.






[GitHub] lucene-solr pull request #398: Lucene 8343 data type migration

2018-06-08 Thread alessandrobenedetti
GitHub user alessandrobenedetti opened a pull request:

https://github.com/apache/lucene-solr/pull/398

Lucene 8343 data type migration

A different approach: data type migration to fix the bugs:


1) Weight for the Document dictionary moved to Long from long
2) Suggestion score moved to double from long

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/SeaseLtd/lucene-solr 
LUCENE-8343-dataTypeMigration

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/lucene-solr/pull/398.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #398


commit e83e8ee1a42388606fffd10330ed1aeec9518098
Author: Alessandro Benedetti 
Date:   2018-06-01T11:52:41Z

[LUCENE-8343] introduced weight 0 check and positional coefficient scaling 
+ tests

commit 17cfa634798f96539c2535dca2e9a8f2cc0bff45
Author: Alessandro Benedetti 
Date:   2018-06-06T18:42:08Z

[LUCENE-8343] documentation fix

commit cef9a2283e30a297b3add8e73ee6dba9492dcc57
Author: Alessandro Benedetti 
Date:   2018-06-07T15:50:58Z

Merge remote-tracking branch 'upstream/master' into LUCENE-8343

commit 2b636e8c3adb879f0cd2cff45824e226d747b5f0
Author: Alessandro Benedetti 
Date:   2018-06-07T15:51:38Z

[LUCENE-8343] minor documentation fixes

commit e0232f104509f28126d9ce060663f87508366338
Author: Alessandro Benedetti 
Date:   2018-06-07T17:57:30Z

[LUCENE-8343] weight long overflow fix + test

commit cd4ad3b3be64edaf554cb3795a3a21a2da93de6f
Author: Alessandro Benedetti 
Date:   2018-06-08T13:59:39Z

Merge remote-tracking branch 'upstream/master' into 
LUCENE-8343-dataTypeMigration

commit 484a85df9b707e0a82723650f1f60531e3cc39bb
Author: Alessandro Benedetti 
Date:   2018-06-08T14:37:54Z

[LUCENE-8343] data type migration approach for weight not defined - weight 
too small Blended Infix Suggestion Score calculus bug




---




[jira] [Commented] (SOLR-12392) IndexSizeTriggerTest fails too frequently.

2018-06-08 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12392?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16506093#comment-16506093
 ] 

ASF subversion and git services commented on SOLR-12392:


Commit 6e55d2f2651e0763c1134348a8e0db78f059dd12 in lucene-solr's branch 
refs/heads/branch_7_4 from [~jpountz]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=6e55d2f ]

SOLR-12392: Bad-apple IndexSizeTriggerTest.test(Split|Merge)Integration.


> IndexSizeTriggerTest fails too frequently.
> --
>
> Key: SOLR-12392
> URL: https://issues.apache.org/jira/browse/SOLR-12392
> Project: Solr
>  Issue Type: Test
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mark Miller
>Assignee: Andrzej Bialecki 
>Priority: Major
> Fix For: 7.4, master (8.0)
>
>







[jira] [Commented] (SOLR-12392) IndexSizeTriggerTest fails too frequently.

2018-06-08 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12392?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16506092#comment-16506092
 ] 

ASF subversion and git services commented on SOLR-12392:


Commit b8c4f34b25281531e762f2991b716c940db4dcda in lucene-solr's branch 
refs/heads/branch_7x from [~jpountz]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=b8c4f34 ]

SOLR-12392: Bad-apple IndexSizeTriggerTest.test(Split|Merge)Integration.


> IndexSizeTriggerTest fails too frequently.
> --
>
> Key: SOLR-12392
> URL: https://issues.apache.org/jira/browse/SOLR-12392
> Project: Solr
>  Issue Type: Test
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mark Miller
>Assignee: Andrzej Bialecki 
>Priority: Major
> Fix For: 7.4, master (8.0)
>
>







[jira] [Commented] (SOLR-12392) IndexSizeTriggerTest fails too frequently.

2018-06-08 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12392?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16506088#comment-16506088
 ] 

ASF subversion and git services commented on SOLR-12392:


Commit 15078ccc83df5e21fce63a596444d4af53f9e158 in lucene-solr's branch 
refs/heads/master from [~jpountz]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=15078cc ]

SOLR-12392: Bad-apple IndexSizeTriggerTest.test(Split|Merge)Integration.


> IndexSizeTriggerTest fails too frequently.
> --
>
> Key: SOLR-12392
> URL: https://issues.apache.org/jira/browse/SOLR-12392
> Project: Solr
>  Issue Type: Test
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mark Miller
>Assignee: Andrzej Bialecki 
>Priority: Major
> Fix For: 7.4, master (8.0)
>
>







[jira] [Commented] (SOLR-12075) TestLargeCluster is too flaky

2018-06-08 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12075?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16506084#comment-16506084
 ] 

ASF subversion and git services commented on SOLR-12075:


Commit d6d24ecfd2be057a85a7b558a8a0aeb3fc66c32e in lucene-solr's branch 
refs/heads/master from [~jpountz]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=d6d24ec ]

SOLR-12075: Disable TestLargeCluster again.


> TestLargeCluster is too flaky
> -
>
> Key: SOLR-12075
> URL: https://issues.apache.org/jira/browse/SOLR-12075
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling
>Reporter: Andrzej Bialecki 
>Assignee: Andrzej Bialecki 
>Priority: Major
>
> This test is failing a lot in jenkins builds, with two types of failures:
>  * specific test method failures - this may be caused by either bugs in the 
> autoscaling code, bugs in the simulator or timing issues. It should be 
> possible to narrow down the cause by using different speeds of simulated time.
>  * suite-level failures due to leaked threads - most of these failures 
> indicate the ongoing Policy calculations, eg:
> {code}
> com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from 
> SUITE scope at org.apache.solr.cloud.autoscaling.sim.TestLargeCluster: 
>   1) Thread[id=21406, name=AutoscalingActionExecutor-7277-thread-1, 
> state=RUNNABLE, group=TGRP-TestLargeCluster]
>at java.util.ArrayList.iterator(ArrayList.java:834)
>at org.apache.solr.common.util.Utils.getDeepCopy(Utils.java:131)
>at org.apache.solr.common.util.Utils.makeDeepCopy(Utils.java:110)
>at org.apache.solr.common.util.Utils.getDeepCopy(Utils.java:92)
>at org.apache.solr.common.util.Utils.makeDeepCopy(Utils.java:108)
>at org.apache.solr.common.util.Utils.getDeepCopy(Utils.java:92)
>at org.apache.solr.common.util.Utils.getDeepCopy(Utils.java:74)
>at org.apache.solr.client.solrj.cloud.autoscaling.Row.copy(Row.java:91)
>at 
> org.apache.solr.client.solrj.cloud.autoscaling.Policy$Session.lambda$getMatrixCopy$1(Policy.java:297)
>at 
> org.apache.solr.client.solrj.cloud.autoscaling.Policy$Session$$Lambda$466/1757323495.apply(Unknown
>  Source)
>at 
> java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193)
>at 
> java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1374)
>at 
> java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481)
>at 
> java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471)
>at 
> java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:708)
>at 
> java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
>at 
> java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:499)
>at 
> org.apache.solr.client.solrj.cloud.autoscaling.Policy$Session.getMatrixCopy(Policy.java:298)
>at 
> org.apache.solr.client.solrj.cloud.autoscaling.Policy$Session.copy(Policy.java:287)
>at 
> org.apache.solr.client.solrj.cloud.autoscaling.Row.removeReplica(Row.java:156)
>at 
> org.apache.solr.client.solrj.cloud.autoscaling.MoveReplicaSuggester.tryEachNode(MoveReplicaSuggester.java:60)
>at 
> org.apache.solr.client.solrj.cloud.autoscaling.MoveReplicaSuggester.init(MoveReplicaSuggester.java:34)
>at 
> org.apache.solr.client.solrj.cloud.autoscaling.Suggester.getSuggestion(Suggester.java:129)
>at 
> org.apache.solr.cloud.autoscaling.ComputePlanAction.process(ComputePlanAction.java:98)
>at 
> org.apache.solr.cloud.autoscaling.ScheduledTriggers.lambda$null$3(ScheduledTriggers.java:307)
>at 
> org.apache.solr.cloud.autoscaling.ScheduledTriggers$$Lambda$439/951218654.run(Unknown
>  Source)
>at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>at 
> org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:188)
>at 
> org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor$$Lambda$9/1677458082.run(Unknown
>  Source)
>at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>at java.lang.Thread.run(Thread.java:748)
>   at __randomizedtesting.SeedInfo.seed([C6FA0364D13DAFCC]:0)
> {code}
> It's possible that somewhere an InterruptedException is caught and not 
> propagated so that the Policy calculations don't terminate when the thread is 
> interrupted when closing parent 
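[Editor's note] The suspected failure mode (an InterruptedException caught without re-asserting the interrupt flag, so the loop never observes the shutdown request) can be reproduced in isolation. The worker below is illustrative only, not Solr's actual executor code:

```java
public class InterruptDemo {

    /** Returns true if the worker observed the interrupt and exited promptly. */
    static boolean terminatesOnInterrupt(boolean restoreFlag) throws InterruptedException {
        Thread worker = new Thread(() -> {
            while (!Thread.currentThread().isInterrupted()) {
                try {
                    // sleep() throws InterruptedException and CLEARS the interrupt flag
                    Thread.sleep(10);
                } catch (InterruptedException e) {
                    if (restoreFlag) {
                        Thread.currentThread().interrupt(); // correct: re-assert the flag
                    }
                    // otherwise the exception is swallowed and the loop keeps running,
                    // which is exactly what a suite-level thread-leak check reports
                }
            }
        });
        worker.setDaemon(true); // a leaked worker must not block JVM exit in this demo
        worker.start();
        Thread.sleep(50);       // let the worker enter its loop
        worker.interrupt();
        worker.join(500);       // bounded wait, mirroring the leak detector's timeout
        return !worker.isAlive();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("flag restored  -> terminates: " + terminatesOnInterrupt(true));
        System.out.println("flag swallowed -> terminates: " + terminatesOnInterrupt(false));
    }
}
```

Running this prints true for the variant that restores the flag and false for the one that swallows it, matching the leaked AutoscalingActionExecutor thread in the report above.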

[jira] [Commented] (SOLR-12392) IndexSizeTriggerTest fails too frequently.

2018-06-08 Thread Adrien Grand (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12392?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16506082#comment-16506082
 ] 

Adrien Grand commented on SOLR-12392:
-

I have seen smoketest builds fail because of it too, eg. 
https://builds.apache.org/view/L/view/Lucene/job/Lucene-Solr-SmokeRelease-7.x/237/consoleFull.
 I'll badapple testMergeIntegration and testSplitIntegration.

> IndexSizeTriggerTest fails too frequently.
> --
>
> Key: SOLR-12392
> URL: https://issues.apache.org/jira/browse/SOLR-12392
> Project: Solr
>  Issue Type: Test
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mark Miller
>Assignee: Andrzej Bialecki 
>Priority: Major
> Fix For: 7.4, master (8.0)
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6734) Standalone solr as *two* applications -- Solr and a controlling agent

2018-06-08 Thread JIRA


[ 
https://issues.apache.org/jira/browse/SOLR-6734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16506079#comment-16506079
 ] 

Jan Høydahl commented on SOLR-6734:
---

If you want the agent process (service) to be able to do e.g. a cluster-wide 
shutdown or rolling restart, would it not then need to listen to commands from 
the network? Or is your plan that such commands will be received by the main 
Solr APIs, that will in turn ask the local daemon to restart itself through a 
socket command on localhost?

> Standalone solr as *two* applications -- Solr and a controlling agent
> -
>
> Key: SOLR-6734
> URL: https://issues.apache.org/jira/browse/SOLR-6734
> Project: Solr
>  Issue Type: Sub-task
>Reporter: Shawn Heisey
>Priority: Major
>
> In a message to the dev list outlining reasons to switch from a webapp to a 
> standalone app, Mark Miller included the idea of making Solr into two 
> applications, rather than just one.  There would be Solr itself, and an agent 
> to control Solr.
> http://mail-archives.apache.org/mod_mbox/lucene-dev/201305.mbox/%3C807476C6-E4C3-4E7E-9F67-2BECB63990DE%40gmail.com%3E



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12018) Ref Guide: Comment system is offline

2018-06-08 Thread JIRA


[ 
https://issues.apache.org/jira/browse/SOLR-12018?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16506076#comment-16506076
 ] 

Jan Høydahl commented on SOLR-12018:


We could set up our own with [https://github.com/adtac/commento] but I guess the 
Apache one will eventually come back online

> Ref Guide: Comment system is offline
> 
>
> Key: SOLR-12018
> URL: https://issues.apache.org/jira/browse/SOLR-12018
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Reporter: Cassandra Targett
>Assignee: Cassandra Targett
>Priority: Major
> Fix For: 7.4, master (8.0)
>
> Attachments: RefGuideCommentsBroken.png, SOLR-12018.patch
>
>
> The Ref Guide uses comments.apache.org to allow user comments. Sometime in 
> December/early January, it was taken offline. 
> I filed INFRA-15947 to ask after its long-term status, and recently got an 
> answer that the ETA is mid-March for a permanent INFRA-hosted system. 
> However, it's of course possible that changes in priorities or other factors 
> will delay that timeline.
> Every Ref Guide page currently invites users to leave comments, but since the 
> whole Comments area is pulled in via JavaScript from a non-existent server, 
> there's no space to do so (see attached screenshot). While we wait for the 
> permanent server to be online, we have a couple of options:
> # Leave it the way it is and hopefully by mid-March it will be back
> # Change the text to tell users it's not working temporarily on all published 
> versions
> # Remove it from all the published versions and put it back when it's back
> I'm not a great fan of #2 or #3, because it'd be a bit of work for me to 
> backport changes to 4 branches and republish every guide just to fix it again 
> in a month or so. I'm fine with option #1 since I've known about it for about 
> a month at least and as far as I can tell no one else has noticed. But if 
> people feel strongly about it now that they know about it, we can figure 
> something out.
> If for some reason it takes longer than mid-March to get it back, or INFRA 
> chooses to stop supporting it entirely, this issue can morph into what we 
> should do for an alternative permanent solution.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (LUCENE-8351) TestLargeCluster fails often

2018-06-08 Thread Adrien Grand (JIRA)


 [ 
https://issues.apache.org/jira/browse/LUCENE-8351?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrien Grand resolved LUCENE-8351.
--
Resolution: Invalid

Typed too quickly. 

> TestLargeCluster fails often
> 
>
> Key: LUCENE-8351
> URL: https://issues.apache.org/jira/browse/LUCENE-8351
> Project: Lucene - Core
>  Issue Type: Test
>Reporter: Adrien Grand
>Priority: Minor
>
> This test failed 3 of the last 10 smoke-release builds.
>  - 
> https://builds.apache.org/view/L/view/Lucene/job/Lucene-Solr-SmokeRelease-7.x/237/consoleFull
>  - 
> https://builds.apache.org/view/L/view/Lucene/job/Lucene-Solr-SmokeRelease-7.x/235/consoleFull
>  - 
> https://builds.apache.org/job/Lucene-Solr-SmokeRelease-master/1043/consoleFull



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12075) TestLargeCluster is too flaky

2018-06-08 Thread Adrien Grand (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12075?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16506072#comment-16506072
 ] 

Adrien Grand commented on SOLR-12075:
-

This test failed 3 of the last 10 smoke-release builds.

https://builds.apache.org/view/L/view/Lucene/job/Lucene-Solr-SmokeRelease-7.x/237/consoleFull

https://builds.apache.org/view/L/view/Lucene/job/Lucene-Solr-SmokeRelease-7.x/235/consoleFull

https://builds.apache.org/job/Lucene-Solr-SmokeRelease-master/1043/consoleFull

I'm going to badapple it again.



> TestLargeCluster is too flaky
> -
>
> Key: SOLR-12075
> URL: https://issues.apache.org/jira/browse/SOLR-12075
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling
>Reporter: Andrzej Bialecki 
>Assignee: Andrzej Bialecki 
>Priority: Major
>
> This test is failing a lot in jenkins builds, with two types of failures:
>  * specific test method failures - this may be caused by either bugs in the 
> autoscaling code, bugs in the simulator or timing issues. It should be 
> possible to narrow down the cause by using different speeds of simulated time.
>  * suite-level failures due to leaked threads - most of these failures 
> indicate the ongoing Policy calculations, eg:
> {code}
> com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from 
> SUITE scope at org.apache.solr.cloud.autoscaling.sim.TestLargeCluster: 
>   1) Thread[id=21406, name=AutoscalingActionExecutor-7277-thread-1, 
> state=RUNNABLE, group=TGRP-TestLargeCluster]
>at java.util.ArrayList.iterator(ArrayList.java:834)
>at org.apache.solr.common.util.Utils.getDeepCopy(Utils.java:131)
>at org.apache.solr.common.util.Utils.makeDeepCopy(Utils.java:110)
>at org.apache.solr.common.util.Utils.getDeepCopy(Utils.java:92)
>at org.apache.solr.common.util.Utils.makeDeepCopy(Utils.java:108)
>at org.apache.solr.common.util.Utils.getDeepCopy(Utils.java:92)
>at org.apache.solr.common.util.Utils.getDeepCopy(Utils.java:74)
>at org.apache.solr.client.solrj.cloud.autoscaling.Row.copy(Row.java:91)
>at 
> org.apache.solr.client.solrj.cloud.autoscaling.Policy$Session.lambda$getMatrixCopy$1(Policy.java:297)
>at 
> org.apache.solr.client.solrj.cloud.autoscaling.Policy$Session$$Lambda$466/1757323495.apply(Unknown
>  Source)
>at 
> java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193)
>at 
> java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1374)
>at 
> java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481)
>at 
> java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471)
>at 
> java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:708)
>at 
> java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
>at 
> java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:499)
>at 
> org.apache.solr.client.solrj.cloud.autoscaling.Policy$Session.getMatrixCopy(Policy.java:298)
>at 
> org.apache.solr.client.solrj.cloud.autoscaling.Policy$Session.copy(Policy.java:287)
>at 
> org.apache.solr.client.solrj.cloud.autoscaling.Row.removeReplica(Row.java:156)
>at 
> org.apache.solr.client.solrj.cloud.autoscaling.MoveReplicaSuggester.tryEachNode(MoveReplicaSuggester.java:60)
>at 
> org.apache.solr.client.solrj.cloud.autoscaling.MoveReplicaSuggester.init(MoveReplicaSuggester.java:34)
>at 
> org.apache.solr.client.solrj.cloud.autoscaling.Suggester.getSuggestion(Suggester.java:129)
>at 
> org.apache.solr.cloud.autoscaling.ComputePlanAction.process(ComputePlanAction.java:98)
>at 
> org.apache.solr.cloud.autoscaling.ScheduledTriggers.lambda$null$3(ScheduledTriggers.java:307)
>at 
> org.apache.solr.cloud.autoscaling.ScheduledTriggers$$Lambda$439/951218654.run(Unknown
>  Source)
>at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>at 
> org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:188)
>at 
> org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor$$Lambda$9/1677458082.run(Unknown
>  Source)
>at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>at java.lang.Thread.run(Thread.java:748)
>   at __randomizedtesting.SeedInfo.seed([C6FA0364D13DAFCC]:0)
> {code}
> It's possible that somewhere an InterruptedException is caught and not 
> propagated 

[jira] [Commented] (SOLR-10189) Add a solr zk clusterprop command

2018-06-08 Thread JIRA


[ 
https://issues.apache.org/jira/browse/SOLR-10189?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16506067#comment-16506067
 ] 

Jan Høydahl commented on SOLR-10189:


Anyone interested in this? It will make setting up TLS a bit easier :)

> Add a solr zk clusterprop command
> -
>
> Key: SOLR-10189
> URL: https://issues.apache.org/jira/browse/SOLR-10189
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: scripts and tools
>Reporter: Jan Høydahl
>Priority: Major
>
> The bin/solr script should support setting cluster properties. Proposal:
> {code}
> solr zk clusterprop # lists all props
> solr zk clusterprop urlscheme   # shows a single prop
> solr zk clusterprop urlscheme=https # sets a prop
> solr zk clusterprop urlscheme=  # deletes a prop
> {code}
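[Editor's note] The four argument forms in the proposal can be distinguished purely syntactically: no argument lists, a bare name shows, name=value sets, and a trailing "=" deletes. A small dispatch sketch (hypothetical helper names, not the actual bin/solr code) illustrates the mapping:

```java
public class ClusterPropArg {
    enum Op { LIST, SHOW, SET, DELETE }

    /** Classifies a single clusterprop argument per the proposed syntax. */
    static Op classify(String arg) {
        if (arg == null || arg.isEmpty()) return Op.LIST;       // solr zk clusterprop
        int eq = arg.indexOf('=');
        if (eq < 0) return Op.SHOW;                             // clusterprop urlscheme
        if (eq == arg.length() - 1) return Op.DELETE;           // clusterprop urlscheme=
        return Op.SET;                                          // clusterprop urlscheme=https
    }

    public static void main(String[] args) {
        System.out.println(classify(null));              // LIST
        System.out.println(classify("urlscheme"));       // SHOW
        System.out.println(classify("urlscheme=https")); // SET
        System.out.println(classify("urlscheme="));      // DELETE
    }
}
```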



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-8351) TestLargeCluster fails often

2018-06-08 Thread Adrien Grand (JIRA)
Adrien Grand created LUCENE-8351:


 Summary: TestLargeCluster fails often
 Key: LUCENE-8351
 URL: https://issues.apache.org/jira/browse/LUCENE-8351
 Project: Lucene - Core
  Issue Type: Test
Reporter: Adrien Grand


This test failed 3 of the last 10 smoke-release builds.
 - 
https://builds.apache.org/view/L/view/Lucene/job/Lucene-Solr-SmokeRelease-7.x/237/consoleFull
 - 
https://builds.apache.org/view/L/view/Lucene/job/Lucene-Solr-SmokeRelease-7.x/235/consoleFull
 - 
https://builds.apache.org/job/Lucene-Solr-SmokeRelease-master/1043/consoleFull





--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9006) RealTime Get (RTG) does not return child documents from the transaction log

2018-06-08 Thread David Smiley (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-9006?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16506046#comment-16506046
 ] 

David Smiley commented on SOLR-9006:


This looks like a nasty oversight.  I figured RTG didn't support nested docs; 
I was just looking through the RTG code and see no mention of "child" or 
"nested", and some code that should be doing child/nested handling but is not.

> RealTime Get (RTG) does not return child documents from the transaction log
> ---
>
> Key: SOLR-9006
> URL: https://issues.apache.org/jira/browse/SOLR-9006
> Project: Solr
>  Issue Type: Bug
>  Components: SearchComponents - other
>Affects Versions: 5.5, 6.0
>Reporter: Ariel Lieberman
>Priority: Major
>  Labels: RTK, RealTimeGet, TransactionLog
> Attachments: SOLR-9006(6.0).patch
>
>
> The {{RealTimeGet}} component does not retrieve child documents from the 
> transaction (update) log.  There is also no mechanism, using {{/get}}, to 
> retrieve a parent document with all its children.  Note that the 
> {{\_version\_}} field appears only in the parent document, and an update is 
> only applied as a whole (parent with all children).  Therefore, I think the 
> capability (e.g. an additional flag) to get a parent with all its children is 
> very important.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-12469) Use the term TLS instead of SSL in documentation

2018-06-08 Thread JIRA
Jan Høydahl created SOLR-12469:
--

 Summary: Use the term TLS instead of SSL in documentation
 Key: SOLR-12469
 URL: https://issues.apache.org/jira/browse/SOLR-12469
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
  Components: documentation
Reporter: Jan Høydahl


In our documentation we should use the correct term TLS instead of SSL. We 
could still mention SSL in the text for searchability. We should probably not 
rename the refguide page file name in 
[https://lucene.apache.org/solr/guide/7_3/enabling-ssl.html] and the title of 
this page could be "Enabling TLS / SSL" since our refguide search is title-only 
right now :) 

I'm not proposing to rename the {{solr.in.sh}} environment variables for SSL or 
java code.

[~ctargett]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-repro - Build # 780 - Still Unstable

2018-06-08 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-repro/780/

[...truncated 49 lines...]
[repro] Jenkins log URL: 
https://builds.apache.org/job/Lucene-Solr-Tests-master/2558/consoleText

[repro] Revision: a4fa16896225e08b72bf64fba97a216bb6a83fbb

[repro] Repro line:  ant test  -Dtestcase=TestTriggerIntegration 
-Dtests.method=testTriggerThrottling -Dtests.seed=FF551A54CE11CD1D 
-Dtests.multiplier=2 -Dtests.slow=true -Dtests.locale=el-GR 
-Dtests.timezone=Africa/Bangui -Dtests.asserts=true 
-Dtests.file.encoding=ISO-8859-1

[repro] git rev-parse --abbrev-ref HEAD
[repro] git rev-parse HEAD
[repro] Initial local git branch/revision: 
01aeb3aa4a9311d7e1b06cf1153059fef22994a6
[repro] git fetch
[repro] git checkout a4fa16896225e08b72bf64fba97a216bb6a83fbb

[...truncated 2 lines...]
[repro] git merge --ff-only

[...truncated 1 lines...]
[repro] ant clean

[...truncated 6 lines...]
[repro] Test suites by module:
[repro]solr/core
[repro]   TestTriggerIntegration
[repro] ant compile-test

[...truncated 3300 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=5 
-Dtests.class="*.TestTriggerIntegration" -Dtests.showOutput=onerror  
-Dtests.seed=FF551A54CE11CD1D -Dtests.multiplier=2 -Dtests.slow=true 
-Dtests.locale=el-GR -Dtests.timezone=Africa/Bangui -Dtests.asserts=true 
-Dtests.file.encoding=ISO-8859-1

[...truncated 4731 lines...]
[repro] Setting last failure code to 256

[repro] Failures:
[repro]   3/5 failed: 
org.apache.solr.cloud.autoscaling.sim.TestTriggerIntegration
[repro] git checkout 01aeb3aa4a9311d7e1b06cf1153059fef22994a6

[...truncated 2 lines...]
[repro] Exiting with code 256

[...truncated 5 lines...]

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[JENKINS] Lucene-Solr-SmokeRelease-master - Build # 1043 - Still Failing

2018-06-08 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-master/1043/

No tests ran.

Build Log:
[...truncated 24156 lines...]
[asciidoctor:convert] asciidoctor: ERROR: about-this-guide.adoc: line 1: 
invalid part, must have at least one section (e.g., chapter, appendix, etc.)
[asciidoctor:convert] asciidoctor: ERROR: solr-glossary.adoc: line 1: invalid 
part, must have at least one section (e.g., chapter, appendix, etc.)
 [java] Processed 2220 links (1772 relative) to 3116 anchors in 247 files
 [echo] Validated Links & Anchors via: 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/build/solr-ref-guide/bare-bones-html/

-dist-changes:
 [copy] Copying 4 files to 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/package/changes

-dist-keys:
  [get] Getting: http://home.apache.org/keys/group/lucene.asc
  [get] To: 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/package/KEYS

package:

-unpack-solr-tgz:

-ensure-solr-tgz-exists:
[mkdir] Created dir: 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/build/solr.tgz.unpacked
[untar] Expanding: 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/package/solr-8.0.0.tgz
 into 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/build/solr.tgz.unpacked

generate-maven-artifacts:

resolve:

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 

[jira] [Updated] (SOLR-8207) Modernise cloud tab on Admin UI

2018-06-08 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/SOLR-8207?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl updated SOLR-8207:
--
Fix Version/s: 7.5
   master (8.0)

> Modernise cloud tab on Admin UI
> ---
>
> Key: SOLR-8207
> URL: https://issues.apache.org/jira/browse/SOLR-8207
> Project: Solr
>  Issue Type: Improvement
>  Components: Admin UI
>Affects Versions: 5.3
>Reporter: Upayavira
>Assignee: Jan Høydahl
>Priority: Major
> Fix For: master (8.0), 7.5
>
> Attachments: node-compact.png, node-details.png, node-hostcolumn.png, 
> node-toggle-row-numdocs.png, nodes-tab-real.png, nodes-tab.png
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The various sub-tabs of the "Cloud tab" were designed before anyone was 
> making real use of SolrCloud, and when we didn't really know the use-cases we 
> would need to support. I would argue that, whilst they are pretty (and 
> clever) they aren't really fit for purpose (with the exception of tree view).
> Issues:
> * Radial view doesn't scale beyond a small number of nodes/collections
> * Paging on the graph view is based on collections - so a collection with 
> many replicas won't be subject to pagination
> * The Dump feature is kinda redundant and should be removed
> * There is now a major overlap in functionality with the new Collections tab
> What I'd propose is that we:
>  * promote the tree tab to top level
>  * remove the graph views and the dump tab
>  * add a new Nodes tab
> This nodes tab would complement the collections tab - showing nodes, and 
> their associated replicas/collections. From this view, it would be possible 
> to add/remove replicas and to see the status of nodes. It would also be 
> possible to filter nodes by status: "show me only up nodes", "show me nodes 
> that are in trouble", "show me nodes that have leaders on them", etc.
> Presumably, if we have APIs to support it, we might have a "decommission 
> node" option, that would ensure that no replicas on this node are leaders, 
> and then remove all replicas from the node, ready for it to be removed from 
> the cluster.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-12131) Authorization plugin support for getting user's roles from the outside

2018-06-08 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/SOLR-12131?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl updated SOLR-12131:
---
Fix Version/s: (was: 7.4)
   7.5

> Authorization plugin support for getting user's roles from the outside
> --
>
> Key: SOLR-12131
> URL: https://issues.apache.org/jira/browse/SOLR-12131
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: security
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
>Priority: Major
> Fix For: master (8.0), 7.5
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Currently the {{RuleBasedAuthorizationPlugin}} relies on explicitly mapping 
> users to roles. However, when users are authenticated by an external Identity 
> service (e.g. JWT as implemented in SOLR-12121), that external service keeps 
> track of the user's roles, and will pass that as a "claim" in the token (JWT).
> In order for Solr to be able to authorise requests based on those roles, the 
> Authorization plugin should be able to accept (verified) roles from the 
> request instead of an explicit mapping.
> The suggested approach is to create a new interface {{VerifiedUserRoles}} and 
> a {{PrincipalWithUserRoles}} class which implements it. The Authorization 
> plugin can then pull the roles from the request. By piggy-backing on the 
> Principal, we have a seamless way to transfer extra external information, and 
> there is also a natural relationship:
> {code:java}
> User Authentication -> Role validation -> Creating a Principal{code}
> I plan to add the interface, the custom Principal class, and restructure 
> {{RuleBasedAuthorizationPlugin}} into an abstract base class and two 
> implementations: {{RuleBasedAuthorizationPlugin}} (as today) and a new 
> {{ExternalRoleRuleBasedAuthorizationPlugin}}.
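[Editor's note] A minimal sketch of the suggested shapes (an assumption about the eventual API, not the committed code) shows how the authorization side can pull verified roles off any Principal via a simple instanceof check:

```java
import java.security.Principal;
import java.util.Collections;
import java.util.Set;

/** Marker interface: a Principal whose roles were verified upstream. */
interface VerifiedUserRoles {
    Set<String> getVerifiedRoles();
}

/** A Principal carrying externally verified roles along with the user name. */
class PrincipalWithUserRoles implements Principal, VerifiedUserRoles {
    private final String name;
    private final Set<String> roles;

    PrincipalWithUserRoles(String name, Set<String> roles) {
        this.name = name;
        this.roles = Collections.unmodifiableSet(roles);
    }

    @Override public String getName() { return name; }
    @Override public Set<String> getVerifiedRoles() { return roles; }
}

public class RolePrincipalDemo {
    public static void main(String[] args) {
        Principal p = new PrincipalWithUserRoles("jan", Set.of("admin", "dev"));
        // The authorization plugin only needs an instanceof check to get roles:
        if (p instanceof VerifiedUserRoles) {
            System.out.println(((VerifiedUserRoles) p).getVerifiedRoles().contains("admin"));
        }
    }
}
```

Because the roles ride on the Principal, no extra plumbing is needed between the authentication and authorization layers.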



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-12121) JWT Authentication plugin

2018-06-08 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/SOLR-12121?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl updated SOLR-12121:
---
Fix Version/s: (was: 7.4)
   7.5

> JWT Authentication plugin
> -
>
> Key: SOLR-12121
> URL: https://issues.apache.org/jira/browse/SOLR-12121
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Authentication
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
>Priority: Major
> Fix For: master (8.0), 7.5
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> A new Authentication plugin that will accept a [JSON Web 
> Token|https://en.wikipedia.org/wiki/JSON_Web_Token] (JWT) in the 
> Authorization header and validate it by checking the cryptographic signature. 
> The plugin will not perform the authentication itself but assert that the 
> user was authenticated by the service that issued the JWT.
> JWT defines a number of standard claims; the user principal can be fetched 
> from the {{sub}} (subject) claim and passed on to Solr. The plugin will 
> always check the {{exp}} (expiry) claim and optionally enforce checks on the 
> {{iss}} (issuer) and {{aud}} (audience) claims.
> The first version of the plugin will only support RSA signing keys and will 
> support fetching the public key of the issuer through a [JSON Web 
> Key|https://tools.ietf.org/html/rfc7517] (JWK) file, either from an HTTPS URL 
> or from a local file.
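For illustration, a hedged JDK-only sketch of the token handling described above: it decodes the base64url payload and checks {{exp}}. Real RSA signature verification against a JWK is omitted, and the helper names are invented for this example:

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class JwtClaimsSketch {
    /** A JWT is header.payload.signature; the payload is base64url-encoded JSON. */
    public static String decodePayload(String jwt) {
        String[] parts = jwt.split("\\.");
        return new String(Base64.getUrlDecoder().decode(parts[1]), StandardCharsets.UTF_8);
    }

    /** The exp claim is seconds since the epoch; the token is valid only before it. */
    public static boolean notExpired(long expSeconds, long nowSeconds) {
        return nowSeconds < expSeconds;
    }

    public static void main(String[] args) {
        // Build an unsigned demo token just to show the decoding step;
        // a real plugin would verify the signature before trusting any claim.
        Base64.Encoder enc = Base64.getUrlEncoder().withoutPadding();
        String header = enc.encodeToString(
            "{\"alg\":\"none\"}".getBytes(StandardCharsets.UTF_8));
        String payload = enc.encodeToString(
            "{\"sub\":\"solruser\",\"exp\":4102444800}".getBytes(StandardCharsets.UTF_8));
        String jwt = header + "." + payload + ".";

        System.out.println(decodePayload(jwt)); // the JSON claims, including sub and exp
        System.out.println(notExpired(4102444800L, System.currentTimeMillis() / 1000));
    }
}
```

The key point the sketch captures: claims are plain JSON once decoded, so {{sub}}, {{iss}}, and {{aud}} checks are simple string comparisons; all of the security lives in the signature check this sketch leaves out.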



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-12350) Do not use docValues as stored for _str (copy)fields in _default configset

2018-06-08 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/SOLR-12350?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl resolved SOLR-12350.

Resolution: Fixed

> Do not use docValues as stored for _str (copy)fields in _default configset
> --
>
> Key: SOLR-12350
> URL: https://issues.apache.org/jira/browse/SOLR-12350
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Data-driven Schema
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
>Priority: Major
> Fix For: master (8.0), 7.5
>
> Attachments: SOLR-12350.patch, SOLR-12350.patch
>
>
> When improving data-driven mode in SOLR-9526 we discussed back and forth 
> whether to set {{useDocValuesAsStored}} for the {{*_str}} copy of text 
> fields. This dynamic field is currently defined as
> {code:xml}
> <dynamicField name="*_str" type="strings" docValues="true" stored="false" 
> indexed="false" />{code}
> Having lived with the current setting since 7.0, I think it is too noisy to 
> return all the _str fields since this is redundant content from the analysed 
> original field. Thus I propose to do as [~hossman] initially suggested, and 
> explicitly set it to false starting from 7.4:
> {code:xml}
> <dynamicField name="*_str" type="strings" stored="false" indexed="false" 
> docValues="true" useDocValuesAsStored="false" />
> {code}
> Note that this does not change how things are stored, only whether to display 
> these by default. The {{*_str}} fields will still be available for sorting, 
> faceting etc.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8343) BlendedInfixSuggester bad score calculus for certain suggestion weights

2018-06-08 Thread Lucene/Solr QA (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8343?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16506007#comment-16506007
 ] 

Lucene/Solr QA commented on LUCENE-8343:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
52s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Release audit (RAT) {color} | 
{color:green}  0m 51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Check forbidden APIs {color} | 
{color:green}  0m 47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Validate source patterns {color} | 
{color:green}  0m 47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Validate ref guide {color} | 
{color:green}  0m 47s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
36s{color} | {color:green} suggest in the patch passed. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 10m  2s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | LUCENE-8343 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12926941/LUCENE-8343.patch |
| Optional Tests |  compile  javac  unit  ratsources  checkforbiddenapis  
validatesourcepatterns  validaterefguide  |
| uname | Linux lucene1-us-west 3.13.0-88-generic #135-Ubuntu SMP Wed Jun 8 
21:10:42 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | ant |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-LUCENE-Build/sourcedir/dev-tools/test-patch/lucene-solr-yetus-personality.sh
 |
| git revision | master / 36b7cdd |
| ant | version: Apache Ant(TM) version 1.9.3 compiled on April 8 2014 |
| Default Java | 1.8.0_172 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-LUCENE-Build/30/testReport/ |
| modules | C: lucene/suggest solr/solr-ref-guide U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-LUCENE-Build/30/console |
| Powered by | Apache Yetus 0.7.0   http://yetus.apache.org |


This message was automatically generated.



> BlendedInfixSuggester bad score calculus for certain suggestion weights
> ---
>
> Key: LUCENE-8343
> URL: https://issues.apache.org/jira/browse/LUCENE-8343
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/search
>Affects Versions: 7.3.1
>Reporter: Alessandro Benedetti
>Priority: Major
> Attachments: LUCENE-8343.patch, LUCENE-8343.patch, LUCENE-8343.patch
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Currently the BlendedInfixSuggester returns a (long) score to rank the 
> suggestions.
> This score is calculated as a multiplication of:
> long *Weight*: the suggestion weight, coming from a document field; it can 
> be any long value (including 0 and 1)
> double *Coefficient*: 0<=x<=1, calculated from the position of the match 
> (earlier is better)
> The resulting score is a long, which means that at the moment any weight<10 
> can bring inconsistencies.
> *Edge cases*
> Weight=1
> Score = 1 (if we have a match at the beginning of the suggestion) or 0 (for 
> any other match)
> Weight=0
> Score = 0 (independently of the position match coefficient)
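The truncation can be reproduced in a few lines. This is a simplified illustration of the long-valued calculation the issue describes, not the actual suggester code:

```java
public class BlendedScoreSketch {
    /** Mirrors the problematic long-valued score: the cast drops the fraction. */
    public static long scoreAsLong(long weight, double coefficient) {
        return (long) (weight * coefficient);
    }

    /** A double-valued score would preserve the position coefficient. */
    public static double scoreAsDouble(long weight, double coefficient) {
        return weight * coefficient;
    }

    public static void main(String[] args) {
        // Weight 1, match not at the beginning (coefficient 0.5):
        System.out.println(scoreAsLong(1, 0.5));   // prints 0 -- coefficient lost
        System.out.println(scoreAsDouble(1, 0.5)); // prints 0.5 -- coefficient kept
    }
}
```

With weight 1, every non-leading match truncates to 0 and all such suggestions tie; with weight 0, the score is 0 regardless of position, exactly the edge cases listed above.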



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-12350) Do not use docValues as stored for _str (copy)fields in _default configset

2018-06-08 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/SOLR-12350?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl updated SOLR-12350:
---
Fix Version/s: (was: 7.4)
   7.5

> Do not use docValues as stored for _str (copy)fields in _default configset
> --
>
> Key: SOLR-12350
> URL: https://issues.apache.org/jira/browse/SOLR-12350
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Data-driven Schema
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
>Priority: Major
> Fix For: master (8.0), 7.5
>
> Attachments: SOLR-12350.patch, SOLR-12350.patch
>
>
> When improving data-driven mode in SOLR-9526 we discussed back and forth 
> whether to set {{useDocValuesAsStored}} for the {{*_str}} copy of text 
> fields. This dynamic field is currently defined as
> {code:xml}
> <dynamicField name="*_str" type="strings" docValues="true" stored="false" 
> indexed="false" />{code}
> Having lived with the current setting since 7.0, I think it is too noisy to 
> return all the _str fields since this is redundant content from the analysed 
> original field. Thus I propose to do as [~hossman] initially suggested, and 
> explicitly set it to false starting from 7.4:
> {code:xml}
> <dynamicField name="*_str" type="strings" stored="false" indexed="false" 
> docValues="true" useDocValuesAsStored="false" />
> {code}
> {code}
> Note that this does not change how things are stored, only whether to display 
> these by default. The {{*_str}} fields will still be available for sorting, 
> faceting etc.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-BadApples-Tests-master - Build # 74 - Still Unstable

2018-06-08 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-BadApples-Tests-master/74/

3 tests failed.
FAILED:  org.apache.solr.cloud.MoveReplicaHDFSTest.testNormalMove

Error Message:
Collection not found: MoveReplicaHDFSTest_coll_false

Stack Trace:
org.apache.solr.common.SolrException: Collection not found: 
MoveReplicaHDFSTest_coll_false
at 
__randomizedtesting.SeedInfo.seed([EA2FB78042EDA802:4CF70027C70C6A18]:0)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:853)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:819)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:173)
at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:138)
at 
org.apache.solr.cloud.MoveReplicaTest.addDocs(MoveReplicaTest.java:374)
at org.apache.solr.cloud.MoveReplicaTest.test(MoveReplicaTest.java:111)
at 
org.apache.solr.cloud.MoveReplicaHDFSTest.testNormalMove(MoveReplicaHDFSTest.java:64)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 

[jira] [Commented] (SOLR-12467) allow to change the autoscaling configuration via SolrJ

2018-06-08 Thread Shalin Shekhar Mangar (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12467?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16505983#comment-16505983
 ] 

Shalin Shekhar Mangar commented on SOLR-12467:
--

No, we should not be writing to ZooKeeper directly. But we should add SolrJ 
request classes to modify the autoscaling configuration, the way we do for the 
collection APIs etc.

> allow to change the autoscaling configuration via SolrJ
> ---
>
> Key: SOLR-12467
> URL: https://issues.apache.org/jira/browse/SOLR-12467
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrJ
>Affects Versions: 7.3.1
>Reporter: Hendrik Haddorp
>Priority: Minor
>
> Using SolrJ's CloudSolrClient it is possible to read the autoscaling 
> configuration:
> cloudSolrClient.getZkStateReader().getAutoScalingConfig()
> There is however no way to update it. One can only read out the list of live 
> nodes and then make a call to Solr using, for example, the LBHttpSolrClient. 
> Given that the config is stored in ZooKeeper, and thus could be written 
> directly even when no Solr instance is running, this is not optimal.
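As a sketch of what such a request might carry, the body below uses the autoscaling API's {{set-trigger}} command. The command name and the /cluster/autoscaling endpoint exist in Solr 7.x, but the helper here is hypothetical hand-built JSON, which is exactly the gap this issue asks SolrJ to fill:

```java
public class AutoscalingPayloadSketch {
    /** Builds a set-trigger command body for the autoscaling endpoint.
     *  Naive string concatenation for illustration only; a real request
     *  class would build and escape the JSON properly. */
    public static String setTriggerCommand(String name, String event, String waitFor) {
        return "{\"set-trigger\": {"
             + "\"name\": \"" + name + "\", "
             + "\"event\": \"" + event + "\", "
             + "\"waitFor\": \"" + waitFor + "\"}}";
    }

    public static void main(String[] args) {
        // This body would be POSTed to the autoscaling endpoint, e.g. via a
        // generic SolrJ request (assumption -- no dedicated class exists yet).
        System.out.println(setTriggerCommand("node_lost_trigger", "nodeLost", "120s"));
    }
}
```

A dedicated SolrJ request class would wrap exactly this kind of payload, the way CollectionAdminRequest wraps the collection API commands.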



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Propose hiding/removing JIRA Environment text input

2018-06-08 Thread Cassandra Targett
I've been debating saying something about this too - I think it happened
when INFRA added some text to direct users to use the mailing list or IRC
if they really have a support question instead of a bug (INFRA-16507).

The most basic solution is a simple re-ordering of the form, which in JIRA
is really easy to do. We could put the environment field near the bottom
and if someone is paying attention to the form and wants to fill it in,
fine, but the rest of us can get at the most commonly used/needed fields
quicker.

As I was writing that I thought I'd refresh my memory of where screen
editing is done in JIRA, and it looks like those of us with committer
status have access to edit that form. So we can solve this quickly, and
probably we can do it on our own without asking INFRA.

If we come to consensus on either burying or removing the field, I'd be
happy to be the one that makes the change.

On Fri, Jun 8, 2018 at 7:24 AM David Smiley wrote:

> Many of us have accidentally added a long-form description of our JIRA
> issues into the Environment field of JIRA instead of the Description.  I
> think we can agree this is pretty annoying.  It seems to have been
> happening more lately with a change to JIRA that for whatever reason has
> made it more visually tempting to start typing there.  I want to arrange
> for some sort of fix with infra.  I'm willing to work with them to explore
> what can be done.  But what should we propose infra do exactly?  I'd like
> to get a sense of that here with our community first.
>
> IMO, I don't think a dedicated Environment input field is useful when
> someone could just as easily type anything pertinent into the description
> field of a bug report.  Less input fields means a simpler JIRA UI -- a good
> thing IMO.  But since it's been used in the past, it may be impossible to
> actually remove it while keeping the text on old issues.  Nonetheless I'm
> ambivalent if it were to be outright removed and others here want this
> since I think it's of such low value that data loss wouldn't bother me.
>
> Can it be retained as a purely read-only on display but otherwise can't
> edit? I'd like that.
>
> Perhaps the path of least change and thus "safest" path is for it to be
> removed from the "Create Issue" screen, yet retain it on other screens for
> those that are fans of adding/editing it?
>
> ~ David
> --
> Lucene/Solr Search Committer, Consultant, Developer, Author, Speaker
> LinkedIn: http://linkedin.com/in/davidwsmiley | Book:
> http://www.solrenterprisesearchserver.com
>


Re: Propose hiding/removing JIRA Environment text input

2018-06-08 Thread Jan Høydahl
+1 David
Just moving it below the description would also help, or making the input text 
box for environment much smaller or something.

--
Jan Høydahl, search solution architect
Cominvent AS - www.cominvent.com

> On 8 Jun 2018, at 14:24, David Smiley wrote:
> 
> Many of us have accidentally added a long-form description of our JIRA issues 
> into the Environment field of JIRA instead of the Description.  I think we 
> can agree this is pretty annoying.  It seems to have been happening more 
> lately with a change to JIRA that for whatever reason has made it more 
> visually tempting to start typing there.  I want to arrange for some sort of 
> fix with infra.  I'm willing to work with them to explore what can be done.  
> But what should we propose infra do exactly?  I'd like to get a sense of that 
> here with our community first.
> 
> IMO, I don't think a dedicated Environment input field is useful when someone 
> could just as easily type anything pertinent into the description field of a 
> bug report.  Less input fields means a simpler JIRA UI -- a good thing IMO.  
> But since it's been used in the past, it may be impossible to actually remove 
> it while keeping the text on old issues.  Nonetheless I'm ambivalent if it 
> were to be outright removed and others here want this since I think it's of 
> such low value that data loss wouldn't bother me.
> 
> Can it be retained as a purely read-only on display but otherwise can't edit? 
> I'd like that.
> 
> Perhaps the path of least change and thus "safest" path is for it to be 
> removed from the "Create Issue" screen, yet retain it on other screens for 
> those that are fans of adding/editing it?
> 
> ~ David
> -- 
> Lucene/Solr Search Committer, Consultant, Developer, Author, Speaker
> LinkedIn: http://linkedin.com/in/davidwsmiley 
>  | Book: 
> http://www.solrenterprisesearchserver.com 
> 


[jira] [Commented] (SOLR-12283) Unable To Load ZKPropertiesWriter when dih.jar is added as runtimelib BLOB in .system collection

2018-06-08 Thread Shawn Heisey (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12283?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16505982#comment-16505982
 ] 

Shawn Heisey commented on SOLR-12283:
-

bq. So, the solution is : to add local .jar files locally on all servers?

I would call it a workaround more than a solution.

But yes, if you go to the solr home (usually where your core directories are), 
create a lib directory there, and place all extra jars in it, they will be 
available to all cores.  It is my recommendation when using this method to 
remove all the <lib> config elements from solrconfig.xml and make sure that 
all the jars you need are in the new directory.

I honestly have no idea whether it will be possible to load the DIH jar from 
.system like you're trying to do.  I would think it SHOULD be possible, but 
classloading in Java is not always straightforward.  Somebody else is going to 
have to look into the problem.


> Unable To Load ZKPropertiesWriter when dih.jar is added as runtimelib BLOB in 
> .system collection
> 
>
> Key: SOLR-12283
> URL: https://issues.apache.org/jira/browse/SOLR-12283
> Project: Solr
>  Issue Type: Test
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: contrib - DataImportHandler
>Affects Versions: 6.6.1, 7.3
> Environment: Debian
> SolrCloud
>Reporter: Maxence SAUNIER
>Priority: Blocker
> Attachments: modified-DIH.zip, modified-DIH.zip, 
> mysql-connector-java-5.1.46-bin.jar, mysql-connector-java-5.1.46.jar, 
> request_handler_config.json, solr-core-7.3.0.jar, 
> solr-dataimporthandler-7.3.0.jar, solr-dataimporthandler-extras-7.3.0.jar, 
> solr-solrj-7.3.0.jar, solr.log, solr.log, solr.log, solr.log
>
>
> Hello,
> I have been trying to fix this problem for 2 weeks with the solr-user 
> community, but with no success. I seriously wonder if this is not a problem 
> in the code. I do not have the impression that many people use DIH with 
> Solr's cloud version.
> On the Internet, there is no similar problem reported.
> For information, the following DIH configuration comes from DIHs that work 
> in production on a single Solr server. The connections to the databases are 
> therefore correct.
> *Errors messages:*
> {panel:title=DataImporter}
> {code:java}
> Full Import 
> failed:org.apache.solr.handler.dataimport.DataImportHandlerException: Unable 
> to PropertyWriter implementation:ZKPropertiesWriter
>   at 
> org.apache.solr.handler.dataimport.DataImporter.createPropertyWriter(DataImporter.java:339)
>   at 
> org.apache.solr.handler.dataimport.DataImporter.doFullImport(DataImporter.java:420)
>   at 
> org.apache.solr.handler.dataimport.DataImporter.runCmd(DataImporter.java:483)
>   at 
> org.apache.solr.handler.dataimport.DataImportHandler.handleRequestBody(DataImportHandler.java:183)
>   at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:195)
>   at org.apache.solr.core.SolrCore.execute(SolrCore.java:2503)
>   at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:711)
>   at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:517)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:384)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:330)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1629)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:533)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
>   at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
>   at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:190)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1595)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:188)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1253)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:168)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:473)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1564)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:166)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1155)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
>   at 
> 

Re: discuss: stop adding 'via' from CHANGES.txt entries (take two)

2018-06-08 Thread Jan Høydahl
I agree with Mark that it is a huge and important part of keeping Lucene/Solr 
a welcoming community that the existing committers take time to guide 
contributors. Keeping the "via" part of the changelog also makes it very easy 
to spot potential candidates for committership, and avoids the "I thought 
he/she was already a committer" type of comments we've seen :)

--
Jan Høydahl, search solution architect
Cominvent AS - www.cominvent.com

> On 7 Jun 2018, at 14:59, Joel Bernstein wrote:
> 
> I agree with Mark's position on this; the information of who committed has 
> significant value.
> 
> Joel Bernstein
> http://joelsolr.blogspot.com/ 
> 
> On Wed, Jun 6, 2018 at 10:14 PM, Mark Miller wrote:
> I  have the same opinion as last time. Taking ownership of actually 
> committing something to the code base is an important attribution and that is 
> why it has been included in CHANGES. I don't agree that it takes away credit 
> at all - via means the commit went through you, which is an accurate 
> reflection of things. Committing others work is a major contribution and 
> should be called out, for the positives that it creates as well as the 
> responsibility for that change you have undertaken by being a very key part 
> of the via route.
> 
> - Mark
> 
> On Wed, Jun 6, 2018 at 8:10 AM Yonik Seeley wrote:
> I don't have much of an opinion about "via" one way or the other,
> however I think we should avoid using the mental model of authorship
> for CHANGES.txt.
> We've generally been listing people who made meaningful contributions
> to the patch, including sometimes the person who opened the issue for
> example.
> 
> -Yonik
> 
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org 
> 
> For additional commands, e-mail: dev-h...@lucene.apache.org 
> 
> 
> -- 
> - Mark 
> about.me/markrmiller 



Propose hiding/removing JIRA Environment text input

2018-06-08 Thread David Smiley
Many of us have accidentally added a long-form description of our JIRA
issues into the Environment field of JIRA instead of the Description.  I
think we can agree this is pretty annoying.  It seems to have been
happening more lately with a change to JIRA that for whatever reason has
made it more visually tempting to start typing there.  I want to arrange
for some sort of fix with infra.  I'm willing to work with them to explore
what can be done.  But what should we propose infra do exactly?  I'd like
to get a sense of that here with our community first.

IMO, I don't think a dedicated Environment input field is useful when
someone could just as easily type anything pertinent into the description
field of a bug report.  Fewer input fields mean a simpler JIRA UI -- a good
thing IMO.  But since it's been used in the past, it may be impossible to
actually remove it while keeping the text on old issues.  Nonetheless I'm
ambivalent if it were to be outright removed and others here want this
since I think it's of such low value that data loss wouldn't bother me.

Can it be retained as a purely read-only on display but otherwise can't
edit? I'd like that.

Perhaps the path of least change and thus "safest" path is for it to be
removed from the "Create Issue" screen, yet retain it on other screens for
those that are fans of adding/editing it?

~ David
-- 
Lucene/Solr Search Committer, Consultant, Developer, Author, Speaker
LinkedIn: http://linkedin.com/in/davidwsmiley | Book:
http://www.solrenterprisesearchserver.com


[jira] [Commented] (SOLR-12467) allow to change the autoscaling configuration via SolrJ

2018-06-08 Thread Hendrik Haddorp (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12467?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16505921#comment-16505921
 ] 

Hendrik Haddorp commented on SOLR-12467:


If updating ZooKeeper directly is not desired, it would still be nice if SolrJ 
offered an update request. Given that this is not done all the time, that 
would be fine for me as well. Right now the SolrJ API just looks a bit 
incomplete ;-)

> allow to change the autoscaling configuration via SolrJ
> ---
>
> Key: SOLR-12467
> URL: https://issues.apache.org/jira/browse/SOLR-12467
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrJ
>Affects Versions: 7.3.1
>Reporter: Hendrik Haddorp
>Priority: Minor
>
> Using SolrJ's CloudSolrClient it is possible to read the autoscaling 
> configuration:
> cloudSolrClient.getZkStateReader().getAutoScalingConfig()
> There is however no way to update it. One can only read out the list of live 
> nodes and then make a call to Solr using, for example, the LBHttpSolrClient. 
> Given that the config is stored in ZooKeeper, and thus could be written 
> directly even when no Solr instance is running, this is not optimal.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-12468) Update Jetty to 9.4.11.v20180605

2018-06-08 Thread Michael Braun (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12468?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Braun updated SOLR-12468:
-
Environment: (was: Summary of changes from 
https://github.com/eclipse/jetty.project/blob/jetty-9.4.x/VERSION.txt

{code}
jetty-9.4.11.v20180605 - 05 June 2018
 + 1785 Support for vhost@connectorname syntax of virtual hosts
 + 2346 Revert stack trace logging for HTTPChannel.onException
 + 2439 Remove HTTP/2 data copy
 + 2472 central.maven.org doesn't work with https
 + 2484 Repeated null check in MimeTypes.getDefaultMimeByExtension
 + 2496 Jetty Maven Plugin should skip execution on projects it cannot support
 + 2516 NPE at SslClientConnectionFactory.newConnection()
 + 2518 HttpClient cannot handle bad servers that report multiple 100-continue
   responses in the same conversation
 + 2525 Deprecate BlockingTimeout mechanism for removal in future release
 + 2529 HttpParser cleanup
 + 2532 Improve parser handing of tokens
 + 2545 Slow HTTP2 per-stream download performance
 + 2546 Incorrect parsing of PROXY protocol v2
 + 2548 Possible deadlock failing HTTP/2 stream creation
 + 2549 ConsumeAll and requestRecycle
 + 2550 Coalesce overlapping HTTP requested byte ranges
 + 2556 "file:" prefix in jetty.base variable
 + 2559 Use Configurator declared in ServerEndpointConfig over one declared in
   the @ServerEndpoint annotation
 + 2560 PathResource exception handling
 + 2568 QueuedThreadPool.getBusyThreads() should take into account
   ReservedThreadExecutor.getAvailable()
 + 2571 Jetty Client 9.4.x incorrectly handles too large fields from nginx 1.14
   server
 + 2574 Clarify max request queued exception message
 + 2575 Work around broken OSGi implementations Bundle.getEntry() behavior 
returning
   with unescaped URLs
 + 2580 Stop creating unnecessary exceptions with MultiException
 + 2586 Update to asm 6.2
 + 2603 WebSocket ByteAccumulator initialized with wrong maximum
 + 2604 WebSocket ByteAccumulator should report sizes in
   MessageTooLargeException
 + 2616 Trailers preventing client from processing all the data
 + 2619 QueuedThreadPool race can shrink newly created idle threads before use

{code})
Description: 
Summary of changes from 
https://github.com/eclipse/jetty.project/blob/jetty-9.4.x/VERSION.txt

{code}
jetty-9.4.11.v20180605 - 05 June 2018
 + 1785 Support for vhost@connectorname syntax of virtual hosts
 + 2346 Revert stack trace logging for HTTPChannel.onException
 + 2439 Remove HTTP/2 data copy
 + 2472 central.maven.org doesn't work with https
 + 2484 Repeated null check in MimeTypes.getDefaultMimeByExtension
 + 2496 Jetty Maven Plugin should skip execution on projects it cannot support
 + 2516 NPE at SslClientConnectionFactory.newConnection()
 + 2518 HttpClient cannot handle bad servers that report multiple 100-continue
   responses in the same conversation
 + 2525 Deprecate BlockingTimeout mechanism for removal in future release
 + 2529 HttpParser cleanup
 + 2532 Improve parser handling of tokens
 + 2545 Slow HTTP2 per-stream download performance
 + 2546 Incorrect parsing of PROXY protocol v2
 + 2548 Possible deadlock failing HTTP/2 stream creation
 + 2549 ConsumeAll and requestRecycle
 + 2550 Coalesce overlapping HTTP requested byte ranges
 + 2556 "file:" prefix in jetty.base variable
 + 2559 Use Configurator declared in ServerEndpointConfig over one declared in
   the @ServerEndpoint annotation
 + 2560 PathResource exception handling
 + 2568 QueuedThreadPool.getBusyThreads() should take into account
   ReservedThreadExecutor.getAvailable()
 + 2571 Jetty Client 9.4.x incorrectly handles too large fields from nginx 1.14
   server
 + 2574 Clarify max request queued exception message
 + 2575 Work around broken OSGi implementations Bundle.getEntry() behavior
   returning with unescaped URLs
 + 2580 Stop creating unnecessary exceptions with MultiException
 + 2586 Update to asm 6.2
 + 2603 WebSocket ByteAccumulator initialized with wrong maximum
 + 2604 WebSocket ByteAccumulator should report sizes in
   MessageTooLargeException
 + 2616 Trailers preventing client from processing all the data
 + 2619 QueuedThreadPool race can shrink newly created idle threads before use

{code}
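Item 2550 above ("Coalesce overlapping HTTP requested byte ranges") refers to merging overlapping or adjacent ranges before answering a Range request. A generic, self-contained sketch of the coalescing idea, assuming inclusive [start, end] pairs; this is illustrative only, not Jetty's actual implementation:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;

public class RangeCoalesce {
    // Merge overlapping or adjacent inclusive [start, end] ranges, as a
    // server might before serving a multipart Range response.
    static List<long[]> coalesce(List<long[]> ranges) {
        List<long[]> sorted = new ArrayList<>(ranges);
        sorted.sort(Comparator.comparingLong(r -> r[0]));
        List<long[]> out = new ArrayList<>();
        for (long[] r : sorted) {
            if (!out.isEmpty() && r[0] <= out.get(out.size() - 1)[1] + 1) {
                // Overlaps or abuts the previous range: extend it.
                long[] last = out.get(out.size() - 1);
                last[1] = Math.max(last[1], r[1]);
            } else {
                out.add(new long[]{r[0], r[1]});
            }
        }
        return out;
    }

    public static void main(String[] args) {
        List<long[]> merged = coalesce(Arrays.asList(
                new long[]{0, 99}, new long[]{50, 149}, new long[]{300, 399}));
        // [0,99] and [50,149] overlap and merge into [0,149]; [300,399] stays.
        System.out.println(merged.size());  // 2
    }
}
```

Coalescing first avoids reading the same bytes twice and bounds the number of parts in the response.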

> Update Jetty to 9.4.11.v20180605
> 
>
> Key: SOLR-12468
> URL: https://issues.apache.org/jira/browse/SOLR-12468
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public (Default Security Level. Issues are Public)
>Reporter: Michael Braun
>Priority: Major
>

[jira] [Created] (SOLR-12468) Update Jetty to 9.4.11.v20180605

2018-06-08 Thread Michael Braun (JIRA)
Michael Braun created SOLR-12468:


 Summary: Update Jetty to 9.4.11.v20180605
 Key: SOLR-12468
 URL: https://issues.apache.org/jira/browse/SOLR-12468
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
 Environment: Summary of changes from 
https://github.com/eclipse/jetty.project/blob/jetty-9.4.x/VERSION.txt

Reporter: Michael Braun






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (LUCENE-8350) RandomPolygonTest times out

2018-06-08 Thread Ignacio Vera (JIRA)


 [ 
https://issues.apache.org/jira/browse/LUCENE-8350?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ignacio Vera resolved LUCENE-8350.
--
   Resolution: Fixed
Fix Version/s: 7.5
   6.6.5
   master (8.0)
   7.4

> RandomPolygonTest times out
> ---
>
> Key: LUCENE-8350
> URL: https://issues.apache.org/jira/browse/LUCENE-8350
> Project: Lucene - Core
>  Issue Type: Test
>Reporter: Adrien Grand
>Assignee: Ignacio Vera
>Priority: Minor
> Fix For: 7.4, master (8.0), 6.6.5, 7.5
>
> Attachments: LUCENE-8350.patch
>
>
> I saw a failure on the Elastic CI that I can reproduce locally. The test 
> either never finishes or is very very slow. This is due to the fact that the 
> {{do ... polygon = ... 
> while(polygon.getClass().equals(largePolygon.getClass()));}} never returns. 
> NOTE: the test result below was on branch_7x
> {noformat}
> 21:13:34[junit4] Suite: 
> org.apache.lucene.spatial3d.geom.RandomGeoPolygonTest
> 21:13:34[junit4]   2> jun 08, 2018 12:13:10 AM 
> com.carrotsearch.randomizedtesting.ThreadLeakControl$2 evaluate
> 21:13:34[junit4]   2> ADVERTENCIA: Suite execution timed out: 
> org.apache.lucene.spatial3d.geom.RandomGeoPolygonTest
> 21:13:34[junit4]   2>1) Thread[id=23, 
> name=TEST-RandomGeoPolygonTest.testCompareBigPolygons-seed#[E636FFE9E01130D7],
>  state=RUNNABLE, group=TGRP-RandomGeoPolygonTest]
> 21:13:34[junit4]   2> at 
> java.util.TimSort.mergeLo(TimSort.java:730)
> 21:13:34[junit4]   2> at 
> java.util.TimSort.mergeAt(TimSort.java:514)
> 21:13:34[junit4]   2> at 
> java.util.TimSort.mergeCollapse(TimSort.java:439)
> 21:13:34[junit4]   2> at java.util.TimSort.sort(TimSort.java:245)
> 21:13:34[junit4]   2> at java.util.Arrays.sort(Arrays.java:1438)
> 21:13:34[junit4]   2> at 
> org.apache.lucene.spatial3d.geom.GeoComplexPolygon$Tree.<init>(GeoComplexPolygon.java:918)
> 21:13:34[junit4]   2> at 
> org.apache.lucene.spatial3d.geom.GeoComplexPolygon$XTree.<init>(GeoComplexPolygon.java:1048)
> 21:13:34[junit4]   2> at 
> org.apache.lucene.spatial3d.geom.GeoComplexPolygon.<init>(GeoComplexPolygon.java:129)
> 21:13:34[junit4]   2> at 
> org.apache.lucene.spatial3d.geom.GeoPolygonFactory$BestShape.createGeoComplexPolygon(GeoPolygonFactory.java:463)
> 21:13:34[junit4]   2> at 
> org.apache.lucene.spatial3d.geom.GeoPolygonFactory.makeLargeGeoPolygon(GeoPolygonFactory.java:389)
> 21:13:34[junit4]   2> at 
> org.apache.lucene.spatial3d.geom.GeoPolygonFactory.makeGeoPolygon(GeoPolygonFactory.java:226)
> 21:13:34[junit4]   2> at 
> org.apache.lucene.spatial3d.geom.GeoPolygonFactory.makeGeoPolygon(GeoPolygonFactory.java:142)
> 21:13:34[junit4]   2> at 
> org.apache.lucene.spatial3d.geom.RandomGeoPolygonTest.testComparePolygons(RandomGeoPolygonTest.java:156)
> 21:13:34[junit4]   2> at 
> org.apache.lucene.spatial3d.geom.RandomGeoPolygonTest.testCompareBigPolygons(RandomGeoPolygonTest.java:98)
> 21:13:34[junit4]   2> at 
> sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 21:13:34[junit4]   2> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> 21:13:34[junit4]   2> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 21:13:34[junit4]   2> at 
> java.lang.reflect.Method.invoke(Method.java:498)
> 21:13:34[junit4]   2> at 
> com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
> 21:13:34[junit4]   2> at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
> 21:13:34[junit4]   2> at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
> 21:13:34[junit4]   2> at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
> 21:13:34[junit4]   2> at 
> org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
> 21:13:34[junit4]   2> at 
> org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
> 21:13:34[junit4]   2> at 
> org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
> 21:13:34[junit4]   2> at 
> org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
> 21:13:34[junit4]   2> at 
> org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
> 21:13:34[junit4]   2> at 
> 
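The failure quoted above is a retry loop of the form {{do ... polygon = ... while(polygon.getClass().equals(largePolygon.getClass()));}}, which spins forever once the factory can only ever produce one class. A minimal Java sketch of that pattern and the usual bounded-attempts fix; the Shape/ComplexShape types and makeShape factory are hypothetical stand-ins, not the Lucene spatial3d classes:

```java
import java.util.Random;

public class RetryLoopSketch {
    // Illustrative stand-ins for two possible factory results.
    static class Shape {}
    static class ComplexShape extends Shape {}

    static final Random RND = new Random();

    // A factory that, for some inputs, can only produce ComplexShape --
    // the situation that made the original do/while never terminate.
    static Shape makeShape(boolean alwaysComplex) {
        if (alwaysComplex || RND.nextBoolean()) {
            return new ComplexShape();
        }
        return new Shape();
    }

    // Bounded variant: give up after maxAttempts instead of hanging the suite.
    static Shape makeDifferentClass(Shape large, int maxAttempts, boolean alwaysComplex) {
        for (int i = 0; i < maxAttempts; i++) {
            Shape polygon = makeShape(alwaysComplex);
            if (!polygon.getClass().equals(large.getClass())) {
                return polygon;  // found a shape built by a different code path
            }
        }
        return null;  // signal failure instead of looping forever
    }

    public static void main(String[] args) {
        Shape large = new ComplexShape();
        // When only ComplexShape is possible, the original loop never returns;
        // the bounded loop returns null.
        if (makeDifferentClass(large, 1000, true) != null) {
            throw new AssertionError("expected null when no other class is possible");
        }
        // When another class is possible, a result is found quickly.
        if (makeDifferentClass(large, 1000, false) == null) {
            throw new AssertionError("expected a non-ComplexShape result");
        }
        System.out.println("ok");
    }
}
```

Bounding (or skipping) such retry loops is the standard way to keep randomized tests from timing out the whole suite.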

[jira] [Commented] (LUCENE-8350) RandomPolygonTest times out

2018-06-08 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8350?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16505912#comment-16505912
 ] 

ASF subversion and git services commented on LUCENE-8350:
-

Commit 63d7f5d902d8f6ca22dd442df505b47660cc450d in lucene-solr's branch 
refs/heads/branch_6x from ivera
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=63d7f5d ]

LUCENE-8350: Fix for time-out in RandomGeoPolygonTests


> RandomPolygonTest times out
> ---
>
> Key: LUCENE-8350
> URL: https://issues.apache.org/jira/browse/LUCENE-8350
> Project: Lucene - Core
>  Issue Type: Test
>Reporter: Adrien Grand
>Assignee: Ignacio Vera
>Priority: Minor
> Attachments: LUCENE-8350.patch
>
>

[jira] [Commented] (LUCENE-8350) RandomPolygonTest times out

2018-06-08 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8350?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16505913#comment-16505913
 ] 

ASF subversion and git services commented on LUCENE-8350:
-

Commit 0e7b4c950d6a120abccca8cf8845e00d0b70c79d in lucene-solr's branch 
refs/heads/branch_7_4 from ivera
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=0e7b4c9 ]

LUCENE-8350: Fix for time-out in RandomGeoPolygonTests


> RandomPolygonTest times out
> ---
>
> Key: LUCENE-8350
> URL: https://issues.apache.org/jira/browse/LUCENE-8350
> Project: Lucene - Core
>  Issue Type: Test
>Reporter: Adrien Grand
>Assignee: Ignacio Vera
>Priority: Minor
> Attachments: LUCENE-8350.patch
>
>

[jira] [Commented] (LUCENE-8350) RandomPolygonTest times out

2018-06-08 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8350?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16505908#comment-16505908
 ] 

ASF subversion and git services commented on LUCENE-8350:
-

Commit 36b7cdde06d711e7d0691e1d5bb10c458d83fb11 in lucene-solr's branch 
refs/heads/master from ivera
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=36b7cdd ]

LUCENE-8350: Fix for time-out in RandomGeoPolygonTests


> RandomPolygonTest times out
> ---
>
> Key: LUCENE-8350
> URL: https://issues.apache.org/jira/browse/LUCENE-8350
> Project: Lucene - Core
>  Issue Type: Test
>Reporter: Adrien Grand
>Assignee: Ignacio Vera
>Priority: Minor
> Attachments: LUCENE-8350.patch
>
>

[jira] [Commented] (LUCENE-8350) RandomPolygonTest times out

2018-06-08 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8350?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16505909#comment-16505909
 ] 

ASF subversion and git services commented on LUCENE-8350:
-

Commit e4cbbcfb594baf5e9cd01d0cad808f6b5cac4f0f in lucene-solr's branch 
refs/heads/branch_7x from ivera
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=e4cbbcf ]

LUCENE-8350: Fix for time-out in RandomGeoPolygonTests


> RandomPolygonTest times out
> ---
>
> Key: LUCENE-8350
> URL: https://issues.apache.org/jira/browse/LUCENE-8350
> Project: Lucene - Core
>  Issue Type: Test
>Reporter: Adrien Grand
>Assignee: Ignacio Vera
>Priority: Minor
> Attachments: LUCENE-8350.patch
>
>

[jira] [Commented] (LUCENE-8350) RandomPolygonTest times out

2018-06-08 Thread Karl Wright (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8350?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16505901#comment-16505901
 ] 

Karl Wright commented on LUCENE-8350:
-

[~ivera], please feel free to commit this to the appropriate branches, which I 
think are 6x, 7x, master, and 7.4.


> RandomPolygonTest times out
> ---
>
> Key: LUCENE-8350
> URL: https://issues.apache.org/jira/browse/LUCENE-8350
> Project: Lucene - Core
>  Issue Type: Test
>Reporter: Adrien Grand
>Assignee: Karl Wright
>Priority: Minor
> Attachments: LUCENE-8350.patch
>
>

[jira] [Assigned] (LUCENE-8350) RandomPolygonTest times out

2018-06-08 Thread Karl Wright (JIRA)


 [ 
https://issues.apache.org/jira/browse/LUCENE-8350?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karl Wright reassigned LUCENE-8350:
---

Assignee: Ignacio Vera  (was: Karl Wright)

> RandomPolygonTest times out
> ---
>
> Key: LUCENE-8350
> URL: https://issues.apache.org/jira/browse/LUCENE-8350
> Project: Lucene - Core
>  Issue Type: Test
>Reporter: Adrien Grand
>Assignee: Ignacio Vera
>Priority: Minor
> Attachments: LUCENE-8350.patch
>
>
> I saw a failure on the Elastic CI that I can reproduce locally. The test 
> either never finishes or is very, very slow. This is because the 
> {{do ... polygon = ... 
> while(polygon.getClass().equals(largePolygon.getClass()));}} loop never 
> returns. 
> NOTE: the test result below was on branch_7x
> {noformat}
> 21:13:34[junit4] Suite: 
> org.apache.lucene.spatial3d.geom.RandomGeoPolygonTest
> 21:13:34[junit4]   2> jun 08, 2018 12:13:10 AM 
> com.carrotsearch.randomizedtesting.ThreadLeakControl$2 evaluate
> 21:13:34[junit4]   2> ADVERTENCIA: Suite execution timed out: 
> org.apache.lucene.spatial3d.geom.RandomGeoPolygonTest
> 21:13:34[junit4]   2>1) Thread[id=23, 
> name=TEST-RandomGeoPolygonTest.testCompareBigPolygons-seed#[E636FFE9E01130D7],
>  state=RUNNABLE, group=TGRP-RandomGeoPolygonTest]
> 21:13:34[junit4]   2> at 
> java.util.TimSort.mergeLo(TimSort.java:730)
> 21:13:34[junit4]   2> at 
> java.util.TimSort.mergeAt(TimSort.java:514)
> 21:13:34[junit4]   2> at 
> java.util.TimSort.mergeCollapse(TimSort.java:439)
> 21:13:34[junit4]   2> at java.util.TimSort.sort(TimSort.java:245)
> 21:13:34[junit4]   2> at java.util.Arrays.sort(Arrays.java:1438)
> 21:13:34[junit4]   2> at 
> org.apache.lucene.spatial3d.geom.GeoComplexPolygon$Tree.<init>(GeoComplexPolygon.java:918)
> 21:13:34[junit4]   2> at 
> org.apache.lucene.spatial3d.geom.GeoComplexPolygon$XTree.<init>(GeoComplexPolygon.java:1048)
> 21:13:34[junit4]   2> at 
> org.apache.lucene.spatial3d.geom.GeoComplexPolygon.<init>(GeoComplexPolygon.java:129)
> 21:13:34[junit4]   2> at 
> org.apache.lucene.spatial3d.geom.GeoPolygonFactory$BestShape.createGeoComplexPolygon(GeoPolygonFactory.java:463)
> 21:13:34[junit4]   2> at 
> org.apache.lucene.spatial3d.geom.GeoPolygonFactory.makeLargeGeoPolygon(GeoPolygonFactory.java:389)
> 21:13:34[junit4]   2> at 
> org.apache.lucene.spatial3d.geom.GeoPolygonFactory.makeGeoPolygon(GeoPolygonFactory.java:226)
> 21:13:34[junit4]   2> at 
> org.apache.lucene.spatial3d.geom.GeoPolygonFactory.makeGeoPolygon(GeoPolygonFactory.java:142)
> 21:13:34[junit4]   2> at 
> org.apache.lucene.spatial3d.geom.RandomGeoPolygonTest.testComparePolygons(RandomGeoPolygonTest.java:156)
> 21:13:34[junit4]   2> at 
> org.apache.lucene.spatial3d.geom.RandomGeoPolygonTest.testCompareBigPolygons(RandomGeoPolygonTest.java:98)
> 21:13:34[junit4]   2> at 
> sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 21:13:34[junit4]   2> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> 21:13:34[junit4]   2> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 21:13:34[junit4]   2> at 
> java.lang.reflect.Method.invoke(Method.java:498)
> 21:13:34[junit4]   2> at 
> com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
> 21:13:34[junit4]   2> at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
> 21:13:34[junit4]   2> at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
> 21:13:34[junit4]   2> at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
> 21:13:34[junit4]   2> at 
> org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
> 21:13:34[junit4]   2> at 
> org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
> 21:13:34[junit4]   2> at 
> org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
> 21:13:34[junit4]   2> at 
> org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
> 21:13:34[junit4]   2> at 
> org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
> 21:13:34[junit4]   2> at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
> 21:13:34[junit4]   2> at 
> 

[jira] [Commented] (LUCENE-8343) BlendedInfixSuggester bad score calculus for certain suggestion weights

2018-06-08 Thread Adrien Grand (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8343?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16505893#comment-16505893
 ] 

Adrien Grand commented on LUCENE-8343:
--

[~mikemccand] What do you think would be the right fix?

> BlendedInfixSuggester bad score calculus for certain suggestion weights
> ---
>
> Key: LUCENE-8343
> URL: https://issues.apache.org/jira/browse/LUCENE-8343
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/search
>Affects Versions: 7.3.1
>Reporter: Alessandro Benedetti
>Priority: Major
> Attachments: LUCENE-8343.patch, LUCENE-8343.patch, LUCENE-8343.patch
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Currently the BlendedInfixSuggester returns a (long) score to rank the 
> suggestions.
> This score is calculated as the product of:
> long *Weight*: the suggestion weight, coming from a document field; it can 
> be any long value (including 0, 1, ...)
> double *Coefficient*: 0<=x<=1, calculated based on the position of the 
> match; the earlier, the better
> The resulting score is a long, which means that at the moment any weight<10 
> can bring inconsistencies.
> *Edge cases*
> Weight=1
> Score = 1 (if we have a match at the beginning of the suggestion) or 0 (for 
> any other match)
> Weight=0
> Score = 0 (independently of the position match coefficient)
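For illustration, the truncation described in the report can be reproduced with a few lines of plain Java. This is a sketch only; the class and method names below are illustrative, not the actual BlendedInfixSuggester code.

```java
// Sketch of the truncation described above -- not the actual Lucene code.
public class BlendedScoreSketch {

    // Current behavior: the long * double product is cast back to a long,
    // so the fractional part contributed by the position coefficient is lost.
    static long scoreAsLong(long weight, double coefficient) {
        return (long) (weight * coefficient);
    }

    // Proposed direction: keep the blended score as a double.
    static double scoreAsDouble(long weight, double coefficient) {
        return weight * coefficient;
    }

    public static void main(String[] args) {
        // weight = 1: any coefficient < 1 truncates the score to 0,
        // so the position of the match no longer matters.
        System.out.println(scoreAsLong(1, 0.5));    // 0
        System.out.println(scoreAsDouble(1, 0.5));  // 0.5
        // Larger weights mask the bug because the integer part dominates.
        System.out.println(scoreAsLong(100, 0.5));  // 50
    }
}
```

This makes the edge cases above concrete: with weight 1, every non-prefix match collapses to score 0.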



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8343) BlendedInfixSuggester bad score calculus for certain suggestion weights

2018-06-08 Thread Alessandro Benedetti (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8343?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16505885#comment-16505885
 ] 

Alessandro Benedetti commented on LUCENE-8343:
--

Hi Adrien,
In theory I agree with you.
I structured the patch this way because, in my experience so far, a 
contribution is much more likely to be reviewed and accepted if it fixes a 
bug with as little impact as possible, touching as few classes as possible.

The problem here is indeed related to the data types of:
- the suggestion score (should be a double)
- the weight (should be a Long, as 0 must be considered different from null)

I would be more than happy to contribute that, but my feeling is that a patch 
spanning many different classes and areas would be ignored, with the end 
result that the bug(s) remain. I am happy to be contradicted by you (or the 
community in general), in which case I will proceed with the data-type change 
approach :)

> BlendedInfixSuggester bad score calculus for certain suggestion weights
> ---
>
> Key: LUCENE-8343
> URL: https://issues.apache.org/jira/browse/LUCENE-8343
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/search
>Affects Versions: 7.3.1
>Reporter: Alessandro Benedetti
>Priority: Major
> Attachments: LUCENE-8343.patch, LUCENE-8343.patch, LUCENE-8343.patch
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Currently the BlendedInfixSuggester returns a (long) score to rank the 
> suggestions.
> This score is calculated as the product of:
> long *Weight*: the suggestion weight, coming from a document field; it can 
> be any long value (including 0, 1, ...)
> double *Coefficient*: 0<=x<=1, calculated based on the position of the 
> match; the earlier, the better
> The resulting score is a long, which means that at the moment any weight<10 
> can bring inconsistencies.
> *Edge cases*
> Weight=1
> Score = 1 (if we have a match at the beginning of the suggestion) or 0 (for 
> any other match)
> Weight=0
> Score = 0 (independently of the position match coefficient)






[jira] [Commented] (SOLR-12467) allow to change the autoscaling configuration via SolrJ

2018-06-08 Thread Andrzej Bialecki (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12467?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16505877#comment-16505877
 ] 

Andrzej Bialecki  commented on SOLR-12467:
--

To clarify: we could add a method for updating the config from SolrJ directly 
to ZK, even when Solr is not running, but then we would lose important parts 
of the validation. In the rare cases where this kind of access is required, 
the above workaround seems sufficient.

> allow to change the autoscaling configuration via SolrJ
> ---
>
> Key: SOLR-12467
> URL: https://issues.apache.org/jira/browse/SOLR-12467
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrJ
>Affects Versions: 7.3.1
>Reporter: Hendrik Haddorp
>Priority: Minor
>
> Using SolrJ's CloudSolrClient it is possible to read the autoscaling 
> configuration:
> cloudSolrClient.getZkStateReader().getAutoScalingConfig()
> There is, however, no way to update it. One can only read out the list of 
> live nodes and then make a call to Solr, for example using the 
> LBHttpSolrClient. Given that the config is stored in ZooKeeper, and thus 
> could be written directly even when no Solr instance is running, this is 
> not optimal.
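A minimal sketch of the workaround discussed here, under stated assumptions: SolrJ exposes no write API for this config, so the update is POSTed as JSON to a live node's autoscaling endpoint over plain HTTP, letting Solr validate before writing to ZooKeeper. The base URL and the example policy are illustrative; the endpoint path and command name follow the Solr autoscaling API.

```java
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

// Sketch of the workaround: POST an autoscaling command to a running node
// instead of writing to ZooKeeper directly, so server-side validation runs.
public class AutoScalingUpdateSketch {

    // Wrap a command name and its JSON argument into the request payload,
    // e.g. {"set-cluster-policy": [...]}.
    static String command(String name, String jsonArgument) {
        return "{\"" + name + "\": " + jsonArgument + "}";
    }

    static int post(String baseUrl, String payload) throws Exception {
        HttpURLConnection conn = (HttpURLConnection)
                new URL(baseUrl + "/admin/autoscaling").openConnection();
        conn.setRequestMethod("POST");
        conn.setRequestProperty("Content-Type", "application/json");
        conn.setDoOutput(true);
        try (OutputStream out = conn.getOutputStream()) {
            out.write(payload.getBytes(StandardCharsets.UTF_8));
        }
        return conn.getResponseCode();
    }

    public static void main(String[] args) throws Exception {
        String payload = command("set-cluster-policy",
                "[{\"replica\": \"<2\", \"shard\": \"#EACH\", \"node\": \"#ANY\"}]");
        System.out.println(payload);
        // post("http://localhost:8983/solr", payload);  // needs a running node
    }
}
```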






[jira] [Commented] (SOLR-11911) TestLargeCluster.testSearchRate() failure

2018-06-08 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-11911?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16505871#comment-16505871
 ] 

ASF subversion and git services commented on SOLR-11911:


Commit 933ea923529858f9f0c7376dce73252534a0c75b in lucene-solr's branch 
refs/heads/branch_7x from Andrzej Bialecki
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=933ea923 ]

SOLR-11911: this is still failing too often, add BadApple again.


> TestLargeCluster.testSearchRate() failure
> -
>
> Key: SOLR-11911
> URL: https://issues.apache.org/jira/browse/SOLR-11911
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Steve Rowe
>Assignee: Andrzej Bialecki 
>Priority: Major
>
> My Jenkins found a branch_7x seed that reproduced 4/5 times for me:
> {noformat}
> Checking out Revision af9706cb89335a5aa04f9bcae0c2558a61803b50 
> (refs/remotes/origin/branch_7x)
> [...]
>[junit4]   2> NOTE: reproduce with: ant test  -Dtestcase=TestLargeCluster 
> -Dtests.method=testSearchRate -Dtests.seed=2D7724685882A83D -Dtests.slow=true 
> -Dtests.locale=be-BY -Dtests.timezone=Africa/Ouagadougou -Dtests.asserts=true 
> -Dtests.file.encoding=UTF-8
>[junit4] FAILURE 1.24s J0  | TestLargeCluster.testSearchRate <<<
>[junit4]> Throwable #1: java.lang.AssertionError: The trigger did not 
> fire at all
>[junit4]>  at 
> __randomizedtesting.SeedInfo.seed([2D7724685882A83D:703F3AE197440E72]:0)
>[junit4]>  at 
> org.apache.solr.cloud.autoscaling.sim.TestLargeCluster.testSearchRate(TestLargeCluster.java:547)
>[junit4]>  at java.lang.Thread.run(Thread.java:748)
> [...]
>[junit4]   2> NOTE: test params are: codec=CheapBastard, 
> sim=RandomSimilarity(queryNorm=true): {}, locale=be-BY, 
> timezone=Africa/Ouagadougou
>[junit4]   2> NOTE: Linux 4.1.0-custom2-amd64 amd64/Oracle Corporation 
> 1.8.0_151 (64-bit)/cpus=16,threads=1,free=388243840,total=502267904
> {noformat}






[jira] [Commented] (LUCENE-8350) RandomPolygonTest times out

2018-06-08 Thread Adrien Grand (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8350?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16505872#comment-16505872
 ] 

Adrien Grand commented on LUCENE-8350:
--

Thanks for jumping on it [~ivera]! +1
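For context, the non-terminating pattern from the issue description can be sketched as follows. This is illustrative code, not the Lucene test; it shows how the quoted do/while spins forever when the factory keeps producing the same concrete class, and how a bounded retry count turns the hang into a detectable give-up.

```java
import java.util.function.Supplier;

// Sketch of the retry loop from the issue: keep asking the factory for an
// object until its class differs from 'other'. Unbounded, this hangs when
// the factory always returns the same class; bounded, it gives up instead.
public class RetryLoopSketch {

    // Returns an object whose class differs from other's, or null after
    // maxAttempts tries (where the unbounded original would hang).
    static Object pickDifferentClass(Supplier<Object> factory, Object other,
                                     int maxAttempts) {
        Object candidate;
        int attempts = 0;
        do {
            if (attempts++ >= maxAttempts) {
                return null;  // give up instead of looping forever
            }
            candidate = factory.get();
        } while (candidate.getClass().equals(other.getClass()));
        return candidate;
    }

    public static void main(String[] args) {
        // A factory that always produces the same class reproduces the hang.
        Supplier<Object> alwaysString = () -> "polygon";
        System.out.println(pickDifferentClass(alwaysString, "large", 100)); // null
        // A factory that can produce a different class terminates normally.
        Supplier<Object> integerFactory = () -> 42;
        System.out.println(pickDifferentClass(integerFactory, "large", 100)); // 42
    }
}
```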

> RandomPolygonTest times out
> ---
>
> Key: LUCENE-8350
> URL: https://issues.apache.org/jira/browse/LUCENE-8350
> Project: Lucene - Core
>  Issue Type: Test
>Reporter: Adrien Grand
>Assignee: Karl Wright
>Priority: Minor
> Attachments: LUCENE-8350.patch
>
>
> I saw a failure on the Elastic CI that I can reproduce locally. The test 
> either never finishes or is very, very slow. This is because the 
> {{do ... polygon = ... 
> while(polygon.getClass().equals(largePolygon.getClass()));}} loop never 
> returns. 
> NOTE: the test result below was on branch_7x
> {noformat}
> 21:13:34[junit4] Suite: 
> org.apache.lucene.spatial3d.geom.RandomGeoPolygonTest
> 21:13:34[junit4]   2> jun 08, 2018 12:13:10 AM 
> com.carrotsearch.randomizedtesting.ThreadLeakControl$2 evaluate
> 21:13:34[junit4]   2> ADVERTENCIA: Suite execution timed out: 
> org.apache.lucene.spatial3d.geom.RandomGeoPolygonTest
> 21:13:34[junit4]   2>1) Thread[id=23, 
> name=TEST-RandomGeoPolygonTest.testCompareBigPolygons-seed#[E636FFE9E01130D7],
>  state=RUNNABLE, group=TGRP-RandomGeoPolygonTest]
> 21:13:34[junit4]   2> at 
> java.util.TimSort.mergeLo(TimSort.java:730)
> 21:13:34[junit4]   2> at 
> java.util.TimSort.mergeAt(TimSort.java:514)
> 21:13:34[junit4]   2> at 
> java.util.TimSort.mergeCollapse(TimSort.java:439)
> 21:13:34[junit4]   2> at java.util.TimSort.sort(TimSort.java:245)
> 21:13:34[junit4]   2> at java.util.Arrays.sort(Arrays.java:1438)
> 21:13:34[junit4]   2> at 
> org.apache.lucene.spatial3d.geom.GeoComplexPolygon$Tree.<init>(GeoComplexPolygon.java:918)
> 21:13:34[junit4]   2> at 
> org.apache.lucene.spatial3d.geom.GeoComplexPolygon$XTree.<init>(GeoComplexPolygon.java:1048)
> 21:13:34[junit4]   2> at 
> org.apache.lucene.spatial3d.geom.GeoComplexPolygon.<init>(GeoComplexPolygon.java:129)
> 21:13:34[junit4]   2> at 
> org.apache.lucene.spatial3d.geom.GeoPolygonFactory$BestShape.createGeoComplexPolygon(GeoPolygonFactory.java:463)
> 21:13:34[junit4]   2> at 
> org.apache.lucene.spatial3d.geom.GeoPolygonFactory.makeLargeGeoPolygon(GeoPolygonFactory.java:389)
> 21:13:34[junit4]   2> at 
> org.apache.lucene.spatial3d.geom.GeoPolygonFactory.makeGeoPolygon(GeoPolygonFactory.java:226)
> 21:13:34[junit4]   2> at 
> org.apache.lucene.spatial3d.geom.GeoPolygonFactory.makeGeoPolygon(GeoPolygonFactory.java:142)
> 21:13:34[junit4]   2> at 
> org.apache.lucene.spatial3d.geom.RandomGeoPolygonTest.testComparePolygons(RandomGeoPolygonTest.java:156)
> 21:13:34[junit4]   2> at 
> org.apache.lucene.spatial3d.geom.RandomGeoPolygonTest.testCompareBigPolygons(RandomGeoPolygonTest.java:98)
> 21:13:34[junit4]   2> at 
> sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 21:13:34[junit4]   2> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> 21:13:34[junit4]   2> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 21:13:34[junit4]   2> at 
> java.lang.reflect.Method.invoke(Method.java:498)
> 21:13:34[junit4]   2> at 
> com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
> 21:13:34[junit4]   2> at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
> 21:13:34[junit4]   2> at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
> 21:13:34[junit4]   2> at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
> 21:13:34[junit4]   2> at 
> org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
> 21:13:34[junit4]   2> at 
> org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
> 21:13:34[junit4]   2> at 
> org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
> 21:13:34[junit4]   2> at 
> org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
> 21:13:34[junit4]   2> at 
> org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
> 21:13:34[junit4]   2> at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
> 21:13:34[junit4]   2> at 
> 

[jira] [Commented] (SOLR-11911) TestLargeCluster.testSearchRate() failure

2018-06-08 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-11911?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16505873#comment-16505873
 ] 

ASF subversion and git services commented on SOLR-11911:


Commit f3aa19583ccba4ff46b365402683ab9c8c8e3d81 in lucene-solr's branch 
refs/heads/branch_7_4 from Andrzej Bialecki
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=f3aa195 ]

SOLR-11911: this is still failing too often, add BadApple again.


> TestLargeCluster.testSearchRate() failure
> -
>
> Key: SOLR-11911
> URL: https://issues.apache.org/jira/browse/SOLR-11911
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Steve Rowe
>Assignee: Andrzej Bialecki 
>Priority: Major
>
> My Jenkins found a branch_7x seed that reproduced 4/5 times for me:
> {noformat}
> Checking out Revision af9706cb89335a5aa04f9bcae0c2558a61803b50 
> (refs/remotes/origin/branch_7x)
> [...]
>[junit4]   2> NOTE: reproduce with: ant test  -Dtestcase=TestLargeCluster 
> -Dtests.method=testSearchRate -Dtests.seed=2D7724685882A83D -Dtests.slow=true 
> -Dtests.locale=be-BY -Dtests.timezone=Africa/Ouagadougou -Dtests.asserts=true 
> -Dtests.file.encoding=UTF-8
>[junit4] FAILURE 1.24s J0  | TestLargeCluster.testSearchRate <<<
>[junit4]> Throwable #1: java.lang.AssertionError: The trigger did not 
> fire at all
>[junit4]>  at 
> __randomizedtesting.SeedInfo.seed([2D7724685882A83D:703F3AE197440E72]:0)
>[junit4]>  at 
> org.apache.solr.cloud.autoscaling.sim.TestLargeCluster.testSearchRate(TestLargeCluster.java:547)
>[junit4]>  at java.lang.Thread.run(Thread.java:748)
> [...]
>[junit4]   2> NOTE: test params are: codec=CheapBastard, 
> sim=RandomSimilarity(queryNorm=true): {}, locale=be-BY, 
> timezone=Africa/Ouagadougou
>[junit4]   2> NOTE: Linux 4.1.0-custom2-amd64 amd64/Oracle Corporation 
> 1.8.0_151 (64-bit)/cpus=16,threads=1,free=388243840,total=502267904
> {noformat}






[jira] [Commented] (LUCENE-8343) BlendedInfixSuggester bad score calculus for certain suggestion weights

2018-06-08 Thread Adrien Grand (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8343?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16505868#comment-16505868
 ] 

Adrien Grand commented on LUCENE-8343:
--

Thanks for the explanations. I'm not familiar enough with suggesters to move 
this forward, but this patch still feels hacky to me. It looks like it is 
working around index-time issues and the fact that the weight is a long 
rather than a double.

> BlendedInfixSuggester bad score calculus for certain suggestion weights
> ---
>
> Key: LUCENE-8343
> URL: https://issues.apache.org/jira/browse/LUCENE-8343
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/search
>Affects Versions: 7.3.1
>Reporter: Alessandro Benedetti
>Priority: Major
> Attachments: LUCENE-8343.patch, LUCENE-8343.patch, LUCENE-8343.patch
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Currently the BlendedInfixSuggester returns a (long) score to rank the 
> suggestions.
> This score is calculated as the product of:
> long *Weight*: the suggestion weight, coming from a document field; it can 
> be any long value (including 0, 1, ...)
> double *Coefficient*: 0<=x<=1, calculated based on the position of the 
> match; the earlier, the better
> The resulting score is a long, which means that at the moment any weight<10 
> can bring inconsistencies.
> *Edge cases*
> Weight=1
> Score = 1 (if we have a match at the beginning of the suggestion) or 0 (for 
> any other match)
> Weight=0
> Score = 0 (independently of the position match coefficient)





