[JENKINS] Lucene-Solr-7.6-Windows (64bit/jdk-10.0.1) - Build # 19 - Still unstable!

2018-12-12 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.6-Windows/19/
Java: 64bit/jdk-10.0.1 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

2 tests failed.
FAILED:  
org.apache.solr.cloud.autoscaling.TriggerSetPropertiesIntegrationTest.testSetProperties

Error Message:


Stack Trace:
java.lang.AssertionError
at 
__randomizedtesting.SeedInfo.seed([907891F3BC7F84AE:FB1C46BA0F52112A]:0)
at org.junit.Assert.fail(Assert.java:92)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertTrue(Assert.java:54)
at 
org.apache.solr.cloud.autoscaling.TriggerSetPropertiesIntegrationTest.testSetProperties(TriggerSetPropertiesIntegrationTest.java:111)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:844)


FAILED:  
org.apache.solr.cloud.autoscaling.TriggerSetPropertiesIntegrationTest.testSetProperties

Error Message:


Stack Trace:
java.lang.AssertionError
at 
__randomizedtesting.SeedInfo.seed([907891F3BC7F84AE:FB1C46BA0F52112A]:0)
at 

[JENKINS] Lucene-Solr-master-Windows (64bit/jdk-11) - Build # 7646 - Still Unstable!

2018-12-12 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/7646/
Java: 64bit/jdk-11 -XX:+UseCompressedOops -XX:+UseG1GC

5 tests failed.
FAILED:  
org.apache.solr.cloud.autoscaling.TriggerSetPropertiesIntegrationTest.testSetProperties

Error Message:
conf(four sec delay): Delta between timestamps (154467987345800ns - 
154467986961500ns = 384300ns) is not at least as much as min expected 
delay: 40ns

Stack Trace:
java.lang.AssertionError: conf(four sec delay): Delta between timestamps 
(154467987345800ns - 154467986961500ns = 384300ns) is not at least 
as much as min expected delay: 40ns
at 
__randomizedtesting.SeedInfo.seed([E694227529A10EC4:8DF0F53C9A8C9B40]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.cloud.autoscaling.TriggerSetPropertiesIntegrationTest.waitForAndDiffTimestamps(TriggerSetPropertiesIntegrationTest.java:241)
at 
org.apache.solr.cloud.autoscaling.TriggerSetPropertiesIntegrationTest.testSetProperties(TriggerSetPropertiesIntegrationTest.java:110)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 

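An editor's note on the failure above: the assertion in waitForAndDiffTimestamps compares two System.nanoTime() readings against a minimum expected delay, and the reported delta (384300ns) is actually larger than the printed minimum (40ns), so the units in the failure message likely do not match what the assertion really compared. A minimal standalone sketch of the same measurement pattern (class and variable names are mine, not the test's):

```java
public class DelayDeltaSketch {
    // Returns the elapsed nanoseconds around a sleep, mirroring the shape
    // of the waitForAndDiffTimestamps check that failed above.
    static long measureDelta(long sleepMs) throws InterruptedException {
        long before = System.nanoTime();
        Thread.sleep(sleepMs);
        long after = System.nanoTime();
        return after - before;
    }

    public static void main(String[] args) throws InterruptedException {
        long minDelayNs = 40; // the failing run's printed minimum
        long delta = measureDelta(1);
        if (delta < minDelayNs) {
            throw new AssertionError("Delta between timestamps (" + delta
                + "ns) is not at least as much as min expected delay: "
                + minDelayNs + "ns");
        }
        System.out.println("delta=" + delta + "ns, minimum satisfied");
    }
}
```

Since nanoTime() is monotonic, a 1ms sleep should always yield a delta far above 40ns; a failure of this shape usually points at a units or configuration mix-up rather than a timer problem.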
[JENKINS] Lucene-Solr-http2-Linux (64bit/jdk1.8.0_172) - Build # 66 - Failure!

2018-12-12 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-http2-Linux/66/
Java: 64bit/jdk1.8.0_172 -XX:-UseCompressedOops -XX:+UseSerialGC

2 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.SaslZkACLProviderTest

Error Message:
3 threads leaked from SUITE scope at 
org.apache.solr.cloud.SaslZkACLProviderTest: 1) Thread[id=23454, 
name=TEST-SaslZkACLProviderTest.testSaslZkACLProvider-seed#[1E6BCDB945967733]-EventThread,
 state=WAITING, group=TGRP-SaslZkACLProviderTest] at 
sun.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
 at 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) 
at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:504)
2) Thread[id=23449, 
name=TEST-SaslZkACLProviderTest.testSaslZkACLProvider-seed#[1E6BCDB945967733]-EventThread,
 state=WAITING, group=TGRP-SaslZkACLProviderTest] at 
sun.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
 at 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) 
at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:504)
3) Thread[id=23457, 
name=TEST-SaslZkACLProviderTest.testSaslZkACLProvider-seed#[1E6BCDB945967733]-EventThread,
 state=WAITING, group=TGRP-SaslZkACLProviderTest] at 
sun.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
 at 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) 
at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:504)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 3 threads leaked from SUITE 
scope at org.apache.solr.cloud.SaslZkACLProviderTest: 
   1) Thread[id=23454, 
name=TEST-SaslZkACLProviderTest.testSaslZkACLProvider-seed#[1E6BCDB945967733]-EventThread,
 state=WAITING, group=TGRP-SaslZkACLProviderTest]
at sun.misc.Unsafe.park(Native Method)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
at 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:504)
   2) Thread[id=23449, 
name=TEST-SaslZkACLProviderTest.testSaslZkACLProvider-seed#[1E6BCDB945967733]-EventThread,
 state=WAITING, group=TGRP-SaslZkACLProviderTest]
at sun.misc.Unsafe.park(Native Method)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
at 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:504)
   3) Thread[id=23457, 
name=TEST-SaslZkACLProviderTest.testSaslZkACLProvider-seed#[1E6BCDB945967733]-EventThread,
 state=WAITING, group=TGRP-SaslZkACLProviderTest]
at sun.misc.Unsafe.park(Native Method)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
at 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:504)
at __randomizedtesting.SeedInfo.seed([1E6BCDB945967733]:0)


FAILED:  org.apache.solr.cloud.SaslZkACLProviderTest.testSaslZkACLProvider

Error Message:
KeeperErrorCode = AuthFailed for /solr

Stack Trace:
org.apache.zookeeper.KeeperException$AuthFailedException: KeeperErrorCode = 
AuthFailed for /solr
at 
__randomizedtesting.SeedInfo.seed([1E6BCDB945967733:1A9F9B61DCD2D43D]:0)
at org.apache.zookeeper.KeeperException.create(KeeperException.java:126)
at org.apache.zookeeper.KeeperException.create(KeeperException.java:54)
at org.apache.zookeeper.ZooKeeper.create(ZooKeeper.java:792)
at 
org.apache.solr.common.cloud.SolrZkClient.lambda$makePath$8(SolrZkClient.java:545)
at 
org.apache.solr.common.cloud.ZkCmdExecutor.retryOperation(ZkCmdExecutor.java:71)
at 
org.apache.solr.common.cloud.SolrZkClient.makePath(SolrZkClient.java:544)
at 
org.apache.solr.common.cloud.SolrZkClient.makePath(SolrZkClient.java:436)
at 

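The three leaked threads in the SaslZkACLProviderTest failure above are all ZooKeeper ClientCnxn$EventThreads parked in LinkedBlockingQueue.take(). ZooKeeper's real EventThread is shut down by a sentinel "event of death" posted from ZooKeeper.close(), so the practical fix is ensuring every SolrZkClient/ZooKeeper handle the test opens is closed in teardown. The general failure mode can be reproduced with plain JDK classes; the interrupt below is just the simplest stand-in for a shutdown signal:

```java
import java.util.concurrent.LinkedBlockingQueue;

public class EventThreadLeakSketch {
    // Starts a thread parked in LinkedBlockingQueue.take() -- the same
    // WAITING state as the leaked EventThreads above -- then shuts it
    // down and reports whether it actually terminated.
    static boolean eventThreadTerminates() throws InterruptedException {
        LinkedBlockingQueue<Runnable> queue = new LinkedBlockingQueue<>();
        Thread eventThread = new Thread(() -> {
            try {
                while (true) {
                    queue.take().run(); // parks here indefinitely
                }
            } catch (InterruptedException e) {
                // shutdown signal ends the loop
            }
        }, "demo-EventThread");
        eventThread.start();

        // Without an explicit shutdown, the thread outlives its test and
        // randomizedtesting reports a ThreadLeakError, as in this build.
        eventThread.interrupt();
        eventThread.join(5000);
        return !eventThread.isAlive();
    }

    public static void main(String[] args) throws InterruptedException {
        if (!eventThreadTerminates()) {
            throw new AssertionError("demo-EventThread leaked");
        }
        System.out.println("event thread terminated cleanly");
    }
}
```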
[JENKINS] Lucene-Solr-7.x-Linux (32bit/jdk1.8.0_172) - Build # 3208 - Failure!

2018-12-12 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/3208/
Java: 32bit/jdk1.8.0_172 -server -XX:+UseG1GC

2 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.SaslZkACLProviderTest

Error Message:
3 threads leaked from SUITE scope at 
org.apache.solr.cloud.SaslZkACLProviderTest: 1) Thread[id=25080, 
name=TEST-SaslZkACLProviderTest.testSaslZkACLProvider-seed#[F78FC817E862A34D]-EventThread,
 state=WAITING, group=TGRP-SaslZkACLProviderTest] at 
sun.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
 at 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) 
at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:504)
2) Thread[id=25085, 
name=TEST-SaslZkACLProviderTest.testSaslZkACLProvider-seed#[F78FC817E862A34D]-EventThread,
 state=WAITING, group=TGRP-SaslZkACLProviderTest] at 
sun.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
 at 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) 
at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:504)
3) Thread[id=25089, 
name=TEST-SaslZkACLProviderTest.testSaslZkACLProvider-seed#[F78FC817E862A34D]-EventThread,
 state=WAITING, group=TGRP-SaslZkACLProviderTest] at 
sun.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
 at 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) 
at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:504)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 3 threads leaked from SUITE 
scope at org.apache.solr.cloud.SaslZkACLProviderTest: 
   1) Thread[id=25080, 
name=TEST-SaslZkACLProviderTest.testSaslZkACLProvider-seed#[F78FC817E862A34D]-EventThread,
 state=WAITING, group=TGRP-SaslZkACLProviderTest]
at sun.misc.Unsafe.park(Native Method)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
at 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:504)
   2) Thread[id=25085, 
name=TEST-SaslZkACLProviderTest.testSaslZkACLProvider-seed#[F78FC817E862A34D]-EventThread,
 state=WAITING, group=TGRP-SaslZkACLProviderTest]
at sun.misc.Unsafe.park(Native Method)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
at 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:504)
   3) Thread[id=25089, 
name=TEST-SaslZkACLProviderTest.testSaslZkACLProvider-seed#[F78FC817E862A34D]-EventThread,
 state=WAITING, group=TGRP-SaslZkACLProviderTest]
at sun.misc.Unsafe.park(Native Method)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
at 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:504)
at __randomizedtesting.SeedInfo.seed([F78FC817E862A34D]:0)


FAILED:  org.apache.solr.cloud.SaslZkACLProviderTest.testSaslZkACLProvider

Error Message:
KeeperErrorCode = AuthFailed for /solr

Stack Trace:
org.apache.zookeeper.KeeperException$AuthFailedException: KeeperErrorCode = 
AuthFailed for /solr
at 
__randomizedtesting.SeedInfo.seed([F78FC817E862A34D:F37B9ECF71260043]:0)
at org.apache.zookeeper.KeeperException.create(KeeperException.java:126)
at org.apache.zookeeper.KeeperException.create(KeeperException.java:54)
at org.apache.zookeeper.ZooKeeper.create(ZooKeeper.java:792)
at 
org.apache.solr.common.cloud.SolrZkClient.lambda$makePath$8(SolrZkClient.java:545)
at 
org.apache.solr.common.cloud.ZkCmdExecutor.retryOperation(ZkCmdExecutor.java:71)
at 
org.apache.solr.common.cloud.SolrZkClient.makePath(SolrZkClient.java:544)
at 
org.apache.solr.common.cloud.SolrZkClient.makePath(SolrZkClient.java:436)
at 

[JENKINS] Lucene-Solr-repro - Build # 2414 - Unstable

2018-12-12 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-repro/2414/

[...truncated 28 lines...]
[repro] Jenkins log URL: 
https://builds.apache.org/job/Lucene-Solr-SmokeRelease-7.x/398/consoleText

[repro] Revision: 51a80fb5e19ddffbb9495f4bad1d6e6ed5a954d5

[repro] Ant options: -DsmokeTestRelease.java9=/home/jenkins/tools/java/latest1.9
[repro] Repro line:  ant test  -Dtestcase=TestSimLargeCluster 
-Dtests.method=testAddNode -Dtests.seed=8C44EBBABDEBC66F -Dtests.multiplier=2 
-Dtests.locale=zh-CN -Dtests.timezone=Africa/Djibouti -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8

[repro] git rev-parse --abbrev-ref HEAD
[repro] git rev-parse HEAD
[repro] Initial local git branch/revision: 
42f13731b3a037ee9682df49bb946ca0b4ca8544
[repro] git fetch
[repro] git checkout 51a80fb5e19ddffbb9495f4bad1d6e6ed5a954d5

[...truncated 2 lines...]
[repro] git merge --ff-only

[...truncated 1 lines...]
[repro] ant clean

[...truncated 6 lines...]
[repro] Test suites by module:
[repro]   solr/core
[repro]   TestSimLargeCluster
[repro] ant compile-test

[...truncated 3605 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=5 
-Dtests.class="*.TestSimLargeCluster" -Dtests.showOutput=onerror 
-DsmokeTestRelease.java9=/home/jenkins/tools/java/latest1.9 
-Dtests.seed=8C44EBBABDEBC66F -Dtests.multiplier=2 -Dtests.locale=zh-CN 
-Dtests.timezone=Africa/Djibouti -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8

[...truncated 26092 lines...]
[repro] Setting last failure code to 256

[repro] Failures:
[repro]   3/5 failed: org.apache.solr.cloud.autoscaling.sim.TestSimLargeCluster
[repro] git checkout 42f13731b3a037ee9682df49bb946ca0b4ca8544

[...truncated 2 lines...]
[repro] Exiting with code 256

[...truncated 6 lines...]

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[JENKINS] Lucene-Solr-http2-Solaris (64bit/jdk1.8.0) - Build # 16 - Still Failing!

2018-12-12 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-http2-Solaris/16/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseParallelGC

4 tests failed.
FAILED:  
org.apache.solr.cloud.autoscaling.NodeMarkersRegistrationTest.testNodeMarkersRegistration

Error Message:
Path /autoscaling/nodeLost/127.0.0.1:7_solr exists

Stack Trace:
java.lang.AssertionError: Path /autoscaling/nodeLost/127.0.0.1:7_solr exists
at 
__randomizedtesting.SeedInfo.seed([C47B50F5B60B071C:DCC1D8F9B83ECAF3]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertFalse(Assert.java:68)
at 
org.apache.solr.cloud.autoscaling.NodeMarkersRegistrationTest.testNodeMarkersRegistration(NodeMarkersRegistrationTest.java:152)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)


FAILED:  
org.apache.solr.client.solrj.io.stream.StreamDecoratorTest.testParallelCommitStream

Error Message:

[JENKINS] Lucene-Solr-7.6-Linux (64bit/jdk-9.0.4) - Build # 93 - Still Unstable!

2018-12-12 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.6-Linux/93/
Java: 64bit/jdk-9.0.4 -XX:-UseCompressedOops -XX:+UseG1GC

2 tests failed.
FAILED:  
org.apache.solr.cloud.DeleteReplicaTest.raceConditionOnDeleteAndRegisterReplica

Error Message:
Expected new active leader null Live Nodes: [127.0.0.1:33719_solr, 
127.0.0.1:39597_solr, 127.0.0.1:42097_solr] Last available state: 
DocCollection(raceDeleteReplica_true//collections/raceDeleteReplica_true/state.json/12)={
   "pullReplicas":"0",   "replicationFactor":"2",   "shards":{"shard1":{   
"range":"8000-7fff",   "state":"active",   "replicas":{ 
"core_node3":{   "core":"raceDeleteReplica_true_shard1_replica_n1", 
"base_url":"https://127.0.0.1:46733/solr",   
"node_name":"127.0.0.1:46733_solr",   "state":"down",   
"type":"NRT",   "leader":"true"}, "core_node6":{   
"core":"raceDeleteReplica_true_shard1_replica_n5",   
"base_url":"https://127.0.0.1:46733/solr",   
"node_name":"127.0.0.1:46733_solr",   "state":"down",   
"type":"NRT"}, "core_node4":{   
"core":"raceDeleteReplica_true_shard1_replica_n2",   
"base_url":"https://127.0.0.1:42097/solr",   
"node_name":"127.0.0.1:42097_solr",   "state":"down",   
"type":"NRT",   "router":{"name":"compositeId"},   "maxShardsPerNode":"1",  
 "autoAddReplicas":"false",   "nrtReplicas":"2",   "tlogReplicas":"0"}

Stack Trace:
java.lang.AssertionError: Expected new active leader
null
Live Nodes: [127.0.0.1:33719_solr, 127.0.0.1:39597_solr, 127.0.0.1:42097_solr]
Last available state: 
DocCollection(raceDeleteReplica_true//collections/raceDeleteReplica_true/state.json/12)={
  "pullReplicas":"0",
  "replicationFactor":"2",
  "shards":{"shard1":{
  "range":"8000-7fff",
  "state":"active",
  "replicas":{
"core_node3":{
  "core":"raceDeleteReplica_true_shard1_replica_n1",
  "base_url":"https://127.0.0.1:46733/solr",
  "node_name":"127.0.0.1:46733_solr",
  "state":"down",
  "type":"NRT",
  "leader":"true"},
"core_node6":{
  "core":"raceDeleteReplica_true_shard1_replica_n5",
  "base_url":"https://127.0.0.1:46733/solr",
  "node_name":"127.0.0.1:46733_solr",
  "state":"down",
  "type":"NRT"},
"core_node4":{
  "core":"raceDeleteReplica_true_shard1_replica_n2",
  "base_url":"https://127.0.0.1:42097/solr",
  "node_name":"127.0.0.1:42097_solr",
  "state":"down",
  "type":"NRT",
  "router":{"name":"compositeId"},
  "maxShardsPerNode":"1",
  "autoAddReplicas":"false",
  "nrtReplicas":"2",
  "tlogReplicas":"0"}
at 
__randomizedtesting.SeedInfo.seed([4B10C50CBB75F0AB:2106A4DCD387BA61]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.cloud.SolrCloudTestCase.waitForState(SolrCloudTestCase.java:280)
at 
org.apache.solr.cloud.DeleteReplicaTest.raceConditionOnDeleteAndRegisterReplica(DeleteReplicaTest.java:334)
at 
org.apache.solr.cloud.DeleteReplicaTest.raceConditionOnDeleteAndRegisterReplica(DeleteReplicaTest.java:229)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 

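SolrCloudTestCase.waitForState, where the DeleteReplicaTest assertion above fires, is a timed poll over cluster state that fails with the last observed state when the predicate never passes, which is why the message carries the full "Last available state" DocCollection dump. A generic stand-in showing the pattern (a hypothetical helper, not Solr's actual API):

```java
import java.util.function.Predicate;
import java.util.function.Supplier;

public class WaitForStateSketch {
    // Polls stateSource until the predicate passes or the timeout elapses;
    // on timeout, fails with the last observed state, like the trace above.
    static <T> void waitForState(String message, long timeoutMs,
                                 Supplier<T> stateSource,
                                 Predicate<T> predicate)
            throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMs;
        T last = null;
        while (System.currentTimeMillis() < deadline) {
            last = stateSource.get();
            if (predicate.test(last)) {
                return;
            }
            Thread.sleep(100);
        }
        throw new AssertionError(message + "\nLast available state: " + last);
    }

    public static void main(String[] args) throws InterruptedException {
        // Trivial usage: the state becomes "active" after a few polls.
        long start = System.currentTimeMillis();
        waitForState("Expected new active leader", 5000,
            () -> System.currentTimeMillis() - start > 300 ? "active" : "down",
            "active"::equals);
        System.out.println("state reached: active");
    }
}
```

In the failing run the replicas never left the "down" state before the timeout, so the helper surfaced the stale DocCollection shown in the error message.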
[JENKINS] Lucene-Solr-Tests-master - Build # 3052 - Still Unstable

2018-12-12 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-master/3052/

1 tests failed.
FAILED:  org.apache.solr.cloud.autoscaling.sim.TestSimLargeCluster.testAddNode

Error Message:
unexpected number of MOVEREPLICA ops

Stack Trace:
java.lang.AssertionError: unexpected number of MOVEREPLICA ops
at 
__randomizedtesting.SeedInfo.seed([AE28E86C45BF8A88:9C7F5CF8AF20590]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.cloud.autoscaling.sim.TestSimLargeCluster.testAddNode(TestSimLargeCluster.java:381)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)




Build Log:
[...truncated 14704 lines...]
   [junit4] Suite: org.apache.solr.cloud.autoscaling.sim.TestSimLargeCluster
   [junit4]   2> Creating dataDir: 

[jira] [Commented] (SOLR-13062) SimpleBlockJoinUpdateRequestProcessorFactory for Atomic update and adding child doc

2018-12-12 Thread Lucky Sharma (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13062?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16719663#comment-16719663
 ] 

Lucky Sharma commented on SOLR-13062:
-

[~mkhludnev] what is the expected release month for Solr 8?

 

> SimpleBlockJoinUpdateRequestProcessorFactory for Atomic update and adding 
> child doc 
> 
>
> Key: SOLR-13062
> URL: https://issues.apache.org/jira/browse/SOLR-13062
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: UpdateRequestProcessors
>Reporter: Lucky Sharma
>Priority: Minor
> Fix For: 6.6.3, 7.2.1
>
> Attachments: SimpleBlockJoinUpdateProcessor.patch
>
>
> This processor will be responsible for block join updates/creates of 
> documents in one update request.
>  It will fetch the complete block, update the documents in the block that 
> need changes, and push the block back into Solr.
> The whole block is updated based on the block key. The document must contain:
>  # Parent_ID
>  # Block_Level_Key (default is _root_)
>  # LevelField, which is 0 for the root, 1 for a first-level child, 2 for a 
> grandchild, and so on
>  # Primary Field (i.e. the ID)
> Input is a doc wrapper: the parent holds only the block key and an 
> update-only option. Its child docs specify which doc in the block to update 
> and with what values. This is always an atomic update.
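As an illustration only (the issue attaches a patch but does not spell out a concrete input format), the wrapper document described above might look like the following plain-Java sketch. Every field name here (updateOnly, children, level, price) is a hypothetical example, not part of the patch:

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class BlockJoinUpdateShape {
    // Builds the hypothetical wrapper document described in the issue:
    // parent carries only the block key, children name the docs to patch.
    public static Map<String, Object> exampleWrapper() {
        Map<String, Object> child = new LinkedHashMap<>();
        child.put("id", "child-7");               // primary field of the doc to patch
        child.put("level", 1);                    // 0 = root, 1 = child, 2 = grandchild...
        child.put("price", Map.of("set", 19.99)); // atomic-update payload

        Map<String, Object> wrapper = new LinkedHashMap<>();
        wrapper.put("_root_", "parent-42");       // block-level key identifying the block
        wrapper.put("updateOnly", true);          // option: only update, never create
        wrapper.put("children", List.of(child));  // docs inside the block to update
        return wrapper;
    }
}
```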



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-13042) Miscellaneous JSON Facet API docs improvements

2018-12-12 Thread Jason Gerlowski (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13042?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gerlowski updated SOLR-13042:
---
Attachment: SOLR-13042.patch

> Miscellaneous JSON Facet API docs improvements
> --
>
> Key: SOLR-13042
> URL: https://issues.apache.org/jira/browse/SOLR-13042
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Affects Versions: 7.5, master (8.0)
>Reporter: Jason Gerlowski
>Assignee: Jason Gerlowski
>Priority: Minor
> Attachments: SOLR-13042.patch, SOLR-13042.patch
>
>
> While working on SOLR-12965 I noticed a few minor issues with the JSON 
> faceting ref-guide pages.  Nothing serious, just a few annoyances.  Tweaks 
> include:
> * missing/insufficient description of some params for Heatmap facets
> * Weird formatting on "Domain Filters" example
> * missing "fields"/"fl" in the "Parameters Mapping" table
> Figured I'd just create a JIRA and fix these before I forgot about them.






[jira] [Commented] (SOLR-13066) A failure while reloading a SolrCore can result in the SolrCore not being closed.

2018-12-12 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13066?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16719645#comment-16719645
 ] 

ASF subversion and git services commented on SOLR-13066:


Commit 4bcad18084c8b09486cc071b14e031062c6f927e in lucene-solr's branch 
refs/heads/branch_7x from markrmiller
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=4bcad18 ]

SOLR-13066: A failure while reloading a SolrCore can result in the SolrCore not 
being closed.


> A failure while reloading a SolrCore can result in the SolrCore not being 
> closed.
> -
>
> Key: SOLR-13066
> URL: https://issues.apache.org/jira/browse/SOLR-13066
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mark Miller
>Assignee: Mark Miller
>Priority: Major
>







[jira] [Commented] (SOLR-13067) Harden BasicAuthIntegrationTest.

2018-12-12 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13067?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16719646#comment-16719646
 ] 

ASF subversion and git services commented on SOLR-13067:


Commit ca0ded6f878fcd57e0640ed056e0b63b92ed78c2 in lucene-solr's branch 
refs/heads/branch_7x from markrmiller
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=ca0ded6 ]

SOLR-13067: Harden BasicAuthIntegrationTest.

# Conflicts:
#   
solr/core/src/test/org/apache/solr/security/BasicAuthIntegrationTest.java
#   
solr/test-framework/src/java/org/apache/solr/cloud/SolrCloudAuthTestCase.java


> Harden BasicAuthIntegrationTest.
> 
>
> Key: SOLR-13067
> URL: https://issues.apache.org/jira/browse/SOLR-13067
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Tests
>Reporter: Mark Miller
>Assignee: Mark Miller
>Priority: Major
>







[jira] [Commented] (SOLR-12801) Fix the tests, remove BadApples and AwaitsFix annotations, improve env for test development.

2018-12-12 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12801?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16719647#comment-16719647
 ] 

ASF subversion and git services commented on SOLR-12801:


Commit 3d6a09e9d96a57637293ccde795bf170ec410621 in lucene-solr's branch 
refs/heads/branch_7x from markrmiller
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=3d6a09e ]

SOLR-12801: Harden SimSolrCloudTests.


> Fix the tests, remove BadApples and AwaitsFix annotations, improve env for 
> test development.
> 
>
> Key: SOLR-12801
> URL: https://issues.apache.org/jira/browse/SOLR-12801
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mark Miller
>Assignee: Mark Miller
>Priority: Critical
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> A single issue to counteract the single issue adding tons of annotations, the 
> continued addition of new flakey tests, and the continued addition of 
> flakiness to existing tests.
> Lots more to come.






[jira] [Commented] (SOLR-13067) Harden BasicAuthIntegrationTest.

2018-12-12 Thread Mark Miller (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13067?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16719633#comment-16719633
 ] 

Mark Miller commented on SOLR-13067:


Yeah, I went for relaxing it for now as well. I don't like it, but it's better 
to have a passing test than a flaky one, or to lose coverage. These changes 
survived my difficult beasting settings (which slow envs tend to hit surprisingly often).

> Harden BasicAuthIntegrationTest.
> 
>
> Key: SOLR-13067
> URL: https://issues.apache.org/jira/browse/SOLR-13067
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Tests
>Reporter: Mark Miller
>Assignee: Mark Miller
>Priority: Major
>







[jira] [Commented] (SOLR-12801) Fix the tests, remove BadApples and AwaitsFix annotations, improve env for test development.

2018-12-12 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12801?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16719619#comment-16719619
 ] 

ASF subversion and git services commented on SOLR-12801:


Commit 42f13731b3a037ee9682df49bb946ca0b4ca8544 in lucene-solr's branch 
refs/heads/master from markrmiller
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=42f1373 ]

SOLR-12801: Harden SimSolrCloudTests.


> Fix the tests, remove BadApples and AwaitsFix annotations, improve env for 
> test development.
> 
>
> Key: SOLR-12801
> URL: https://issues.apache.org/jira/browse/SOLR-12801
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mark Miller
>Assignee: Mark Miller
>Priority: Critical
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> A single issue to counteract the single issue adding tons of annotations, the 
> continued addition of new flakey tests, and the continued addition of 
> flakiness to existing tests.
> Lots more to come.






[jira] [Assigned] (SOLR-13068) many cloud/autoscaling tests are using System.currentTimeMillis() for timing comparisons (under the covers)

2018-12-12 Thread Hoss Man (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13068?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man reassigned SOLR-13068:
---

  Assignee: Hoss Man
Attachment: SOLR-13068.patch

Better patch, passes precommit. Still hammering on tests.

> many cloud/autoscaling tests are using System.currentTimeMillis() for timing 
> comparisons (under the covers)
> ---
>
> Key: SOLR-13068
> URL: https://issues.apache.org/jira/browse/SOLR-13068
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>Assignee: Hoss Man
>Priority: Major
> Attachments: SOLR-13068.patch, SOLR-13068.patch
>
>
> After rewriting TriggerSetPropertiesIntegrationTest in SOLR-13054 to use 
> better concurrency handling/signalling and log the timestamps the triggers 
> were firing at, I noticed we still got a failure from Uwe's "Windows" jenkins 
> machine (on the http2 branch, but after my fix was merged to that branch).  
> The nature of the failure seemed to suggest that the JVM's 
> {{ScheduledExecutorService.scheduleWithFixedDelay}} wasn't living up to its 
> contract -- and was running successive iterations before the full delay had 
> elapsed.
> But then I realized that in spite of using {{timeSource.getTimeNs()}} in the 
> test, the TimeSource (being used in the test) was a lie -- under the 
> covers {{System.currentTimeMillis}} is being used (via 
> {{TimeSource.CURRENT_TIME}}), which IIUC is susceptible to clock drift, 
> particularly in VMs like those used on Uwe's jenkins machines...
> Any code in the following tests that relies on the TimeSource for doing 
> comparisons or delta calculations should be suspect...
> {noformat}
> hossman@tray:~/lucene/dev [master] $ find solr/core/src/test -name \*.java | 
> xargs grep CURRENT_TIME
> solr/core/src/test/org/apache/solr/cloud/autoscaling/NodeAddedTriggerTest.java:
>   private static final TimeSource timeSource = TimeSource.CURRENT_TIME;
> solr/core/src/test/org/apache/solr/cloud/autoscaling/ExecutePlanActionTest.java:
>   "mock_trigger_name", 
> Collections.singletonList(TimeSource.CURRENT_TIME.getTimeNs()),
> solr/core/src/test/org/apache/solr/cloud/autoscaling/TriggerIntegrationTest.java:
>   static final TimeSource timeSource = TimeSource.CURRENT_TIME;
> solr/core/src/test/org/apache/solr/cloud/autoscaling/NodeLostTriggerTest.java:
>   private final TimeSource timeSource = TimeSource.CURRENT_TIME;
> solr/core/src/test/org/apache/solr/cloud/autoscaling/sim/TestSimExecutePlanAction.java:
>   "mock_trigger_name", 
> Collections.singletonList(TimeSource.CURRENT_TIME.getTimeNs()),
> solr/core/src/test/org/apache/solr/cloud/HttpPartitionTest.java:TimeOut 
> timeOut = new TimeOut(ms, TimeUnit.MILLISECONDS, TimeSource.CURRENT_TIME);
> solr/core/src/test/org/apache/solr/cloud/TestCloudConsistency.java:
> TimeOut timeOut = new TimeOut(10, TimeUnit.SECONDS, TimeSource.CURRENT_TIME);
> solr/core/src/test/org/apache/solr/cloud/TestCloudConsistency.java:
> TimeOut timeOut = new TimeOut(10, TimeUnit.SECONDS, TimeSource.CURRENT_TIME);
> hossman@tray:~/lucene/dev [master] $ find -name \*.java | xargs grep 
> TriggerIntegrationTest.timeSource
> ./solr/core/src/test/org/apache/solr/cloud/autoscaling/SearchRateTriggerIntegrationTest.java:import
>  static org.apache.solr.cloud.autoscaling.TriggerIntegrationTest.timeSource;
> ./solr/core/src/test/org/apache/solr/cloud/autoscaling/TriggerSetPropertiesIntegrationTest.java:import
>  static org.apache.solr.cloud.autoscaling.TriggerIntegrationTest.timeSource;
> ./solr/core/src/test/org/apache/solr/cloud/autoscaling/RestoreTriggerStateTest.java:import
>  static org.apache.solr.cloud.autoscaling.TriggerIntegrationTest.timeSource;
> ./solr/core/src/test/org/apache/solr/cloud/autoscaling/NodeLostTriggerIntegrationTest.java:
>   long currentTimeNanos = 
> TriggerIntegrationTest.timeSource.getTimeNs();
> ./solr/core/src/test/org/apache/solr/cloud/autoscaling/MetricTriggerIntegrationTest.java:import
>  static org.apache.solr.cloud.autoscaling.TriggerIntegrationTest.timeSource;
> ./solr/core/src/test/org/apache/solr/cloud/autoscaling/NodeAddedTriggerIntegrationTest.java:
>   long currentTimeNanos = 
> TriggerIntegrationTest.timeSource.getTimeNs();
> ./solr/core/src/test/org/apache/solr/cloud/autoscaling/TriggerCooldownIntegrationTest.java:import
>  static org.apache.solr.cloud.autoscaling.TriggerIntegrationTest.timeSource;
> {noformat}
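The clock-drift concern above can be shown with plain JDK calls. This sketch only illustrates why delta timing should come from a monotonic clock rather than the wall clock; it is not Solr's TimeSource implementation:

```java
// Sketch, plain JDK only: deltas taken from the wall clock
// (System.currentTimeMillis) can go backwards when NTP or a VM host steps
// the clock; System.nanoTime is monotonic within a single JVM, which is
// what a NANO_TIME-style TimeSource provides.
public class MonotonicDeltaSketch {
    // Elapsed time of a task measured on the monotonic clock.
    public static long elapsedNanos(Runnable task) {
        long start = System.nanoTime();
        task.run();
        return System.nanoTime() - start; // non-negative within one JVM
    }
}
```

The same measurement against the wall clock carries no such guarantee, which is exactly what makes it unsuitable for test timing comparisons.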




[jira] [Commented] (SOLR-13067) Harden BasicAuthIntegrationTest.

2018-12-12 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13067?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16719618#comment-16719618
 ] 

ASF subversion and git services commented on SOLR-13067:


Commit 44b51cd041371051d0b73b54afebc99fc0fa4862 in lucene-solr's branch 
refs/heads/master from markrmiller
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=44b51cd ]

SOLR-13067: Harden BasicAuthIntegrationTest.


> Harden BasicAuthIntegrationTest.
> 
>
> Key: SOLR-13067
> URL: https://issues.apache.org/jira/browse/SOLR-13067
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Tests
>Reporter: Mark Miller
>Assignee: Mark Miller
>Priority: Major
>







[jira] [Commented] (SOLR-13066) A failure while reloading a SolrCore can result in the SolrCore not being closed.

2018-12-12 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13066?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16719617#comment-16719617
 ] 

ASF subversion and git services commented on SOLR-13066:


Commit 7de72c9bc7069dd4f59c54924fe8435f524023bd in lucene-solr's branch 
refs/heads/master from markrmiller
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=7de72c9 ]

SOLR-13066: A failure while reloading a SolrCore can result in the SolrCore not 
being closed.


> A failure while reloading a SolrCore can result in the SolrCore not being 
> closed.
> -
>
> Key: SOLR-13066
> URL: https://issues.apache.org/jira/browse/SOLR-13066
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mark Miller
>Assignee: Mark Miller
>Priority: Major
>







[jira] [Commented] (SOLR-12373) DocBasedVersionConstraintsProcessor doesn't work when schema has required fields

2018-12-12 Thread JIRA


[ 
https://issues.apache.org/jira/browse/SOLR-12373?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16719616#comment-16719616
 ] 

Tomás Fernández Löbbe commented on SOLR-12373:
--

Thanks for your review [~hossman], and sorry for the super late response, I got 
dragged into a million other things. Your comments make total sense (especially 
about breaking the behavior of {{DocBasedVersionConstraintsProcessorFactory}}, 
I didn’t think about it). I like your suggestion of the tombstone config. In my 
use case, I want to be able to set a config that works and I don’t want to 
worry about changes in the schema (new required fields added, etc), which is 
why I suggested the per-field-type approach. I could extend the tombstone 
config idea to support not only field-to-value mapping but also field-type-to-value 
mapping (probably in different sections), though the field-type-to-value mapping 
should only apply to fields that are required in the schema (otherwise the 
tombstone will be full of values).

I’m also wondering if what I’m trying to solve is unique to my use case and 
deserves to be a custom URP instead of complicating the config too much for 
everyone. Honestly, the {{protected createTombstoneDocument(…)}} hook is 
already a big win, and I could just override that single method in a custom 
plugin. I’m more inclined toward this now.

> DocBasedVersionConstraintsProcessor doesn't work when schema has required 
> fields
> 
>
> Key: SOLR-12373
> URL: https://issues.apache.org/jira/browse/SOLR-12373
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Tomás Fernández Löbbe
>Assignee: Tomás Fernández Löbbe
>Priority: Minor
> Attachments: SOLR-12373.patch, SOLR-12373.patch
>
>
> DocBasedVersionConstraintsProcessor creates tombstones when processing a 
> delete by id. Those tombstones only have id (or whatever the unique key name 
> is) and version field(s), however, if the schema defines some required 
> fields, adding the tombstone will fail.
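A minimal sketch, assuming the "tombstone config" idea discussed in the comments: fill schema-required fields from a configured field-to-default map so the tombstone passes required-field validation. The names REQUIRED_DEFAULTS and makeTombstone, and all field names below, are illustrative, not Solr's API:

```java
import java.util.HashMap;
import java.util.Map;

public class TombstoneSketch {
    // Hypothetical configured defaults for fields the schema marks required.
    static final Map<String, Object> REQUIRED_DEFAULTS =
        Map.<String, Object>of("category", "deleted", "timestamp_l", 0L);

    // Build a tombstone that carries the unique key and version field,
    // plus defaults for every required field so indexing it cannot fail.
    static Map<String, Object> makeTombstone(String id, long version) {
        Map<String, Object> doc = new HashMap<>(REQUIRED_DEFAULTS);
        doc.put("id", id);                 // unique key field
        doc.put("my_version_l", version);  // version field the processor tracks
        return doc;
    }
}
```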






Proposed Additional Hooks in SolrEventListener

2018-12-12 Thread Kevin Jia
Hi Everyone,


I'm looking to add Prospective Search functionality to Solr, similar to what 
Luwak (https://github.com/flaxsearch/luwak) does - see the existing JIRA ticket: 
https://issues.apache.org/jira/browse/SOLR-4587.
 To do this I need to maintain an in-memory cache that is in sync with the 
document index (a custom cache that is not a straightforward field cache).

To maintain my in-memory cache, I wanted to add functionality after updates (in 
DirectUpdateHandler2) and after SolrCore instantiation. Instead of changing the 
code directly, I wanted to add more hooks to SolrEventListener, namely these:


public void postCoreConstruct(SolrCore core);
public void preAddDoc(AddUpdateCommand cmd);
public void postAddDoc(AddUpdateCommand cmd);
public void preDelete(DeleteUpdateCommand cmd);
public void postDelete(DeleteUpdateCommand cmd);


I also made a ticket: https://issues.apache.org/jira/browse/SOLR-4587. If 
anyone ever needs similar custom behavior, they would be able to use these 
hooks as well.

Does anyone have any thoughts or suggestions on my proposed changes? Is there a 
better way to do this? If not, I can submit a patch soon.
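Assuming hooks along these lines were added, here is a self-contained sketch (with stand-in command classes, since none of these methods exist in SolrEventListener today) of how a listener could keep a custom in-memory cache in lockstep with the update stream:

```java
import java.util.HashSet;
import java.util.Set;

public class UpdateHookSketch {
    // Stand-ins for Solr's AddUpdateCommand / DeleteUpdateCommand, reduced
    // to the one field this sketch needs.
    public static class AddCmd { public final String id; public AddCmd(String id) { this.id = id; } }
    public static class DeleteCmd { public final String id; public DeleteCmd(String id) { this.id = id; } }

    // The proposed hook surface, as default no-op methods.
    public interface UpdateHooks {
        default void preAddDoc(AddCmd cmd) {}
        default void postAddDoc(AddCmd cmd) {}
        default void preDelete(DeleteCmd cmd) {}
        default void postDelete(DeleteCmd cmd) {}
    }

    // A listener that mirrors live doc ids into an in-memory set - the kind
    // of cache a prospective-search (percolator) component would need.
    public static class CacheSyncListener implements UpdateHooks {
        public final Set<String> liveIds = new HashSet<>();
        @Override public void postAddDoc(AddCmd cmd) { liveIds.add(cmd.id); }
        @Override public void postDelete(DeleteCmd cmd) { liveIds.remove(cmd.id); }
    }
}
```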


Best,

Kevin




[jira] [Resolved] (SOLR-12791) Add Metrics reporting for AuthenticationPlugin

2018-12-12 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/SOLR-12791?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl resolved SOLR-12791.

Resolution: Fixed

> Add Metrics reporting for AuthenticationPlugin
> --
>
> Key: SOLR-12791
> URL: https://issues.apache.org/jira/browse/SOLR-12791
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Authentication, metrics
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
>Priority: Major
> Fix For: master (8.0)
>
>  Time Spent: 2h 20m
>  Remaining Estimate: 0h
>
> Propose to add Metrics support for all Auth plugins. Will let abstract 
> {{AuthenticationPlugin}} base class implement {{SolrMetricProducer}} and keep 
> the counters, such as:
>  * requests
>  * req authenticated
>  * req pass-through (no credentials and blockUnknown false)
>  * req with auth failures due to wrong or malformed credentials
>  * req auth failures due to missing credentials
>  * errors
>  * timeouts
>  * timing stats
> Each implementation still needs to increment the counters etc.
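A rough sketch of the counters listed above, using plain AtomicLongs for illustration; the actual patch registers real metrics through SolrMetricProducer, and the class and method names below are invented for this example:

```java
import java.util.concurrent.atomic.AtomicLong;

public class AuthMetricsSketch {
    public final AtomicLong requests = new AtomicLong();
    public final AtomicLong authenticated = new AtomicLong();
    public final AtomicLong passThrough = new AtomicLong();       // no credentials, blockUnknown=false
    public final AtomicLong wrongCredentials = new AtomicLong();  // wrong/malformed credentials
    public final AtomicLong missingCredentials = new AtomicLong();
    public final AtomicLong errors = new AtomicLong();

    // A concrete auth plugin would call exactly one of these per request,
    // matching "Each implementation still needs to increment the counters".
    public void recordAuthenticated() { requests.incrementAndGet(); authenticated.incrementAndGet(); }
    public void recordPassThrough()   { requests.incrementAndGet(); passThrough.incrementAndGet(); }
    public void recordFailure(boolean credentialsMissing) {
        requests.incrementAndGet();
        (credentialsMissing ? missingCredentials : wrongCredentials).incrementAndGet();
    }
}
```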






[jira] [Commented] (SOLR-12791) Add Metrics reporting for AuthenticationPlugin

2018-12-12 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12791?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16719538#comment-16719538
 ] 

ASF subversion and git services commented on SOLR-12791:


Commit 9728dbc1675bb7fd4ca84071d40ae3c0528e424c in lucene-solr's branch 
refs/heads/master from [~janhoy]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=9728dbc ]

SOLR-12791, SOLR-13067: Fix test failure for BasicAuthIntegrationTest
Make PkiAuthenticationIntegrationTest beast-able


> Add Metrics reporting for AuthenticationPlugin
> --
>
> Key: SOLR-12791
> URL: https://issues.apache.org/jira/browse/SOLR-12791
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Authentication, metrics
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
>Priority: Major
> Fix For: master (8.0)
>
>  Time Spent: 2h 20m
>  Remaining Estimate: 0h
>
> Propose to add Metrics support for all Auth plugins. Will let abstract 
> {{AuthenticationPlugin}} base class implement {{SolrMetricProducer}} and keep 
> the counters, such as:
>  * requests
>  * req authenticated
>  * req pass-through (no credentials and blockUnknown false)
>  * req with auth failures due to wrong or malformed credentials
>  * req auth failures due to missing credentials
>  * errors
>  * timeouts
>  * timing stats
> Each implementation still needs to increment the counters etc.






[jira] [Commented] (SOLR-13067) Harden BasicAuthIntegrationTest.

2018-12-12 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13067?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16719540#comment-16719540
 ] 

ASF subversion and git services commented on SOLR-13067:


Commit 9728dbc1675bb7fd4ca84071d40ae3c0528e424c in lucene-solr's branch 
refs/heads/master from [~janhoy]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=9728dbc ]

SOLR-12791, SOLR-13067: Fix test failure for BasicAuthIntegrationTest
Make PkiAuthenticationIntegrationTest beast-able


> Harden BasicAuthIntegrationTest.
> 
>
> Key: SOLR-13067
> URL: https://issues.apache.org/jira/browse/SOLR-13067
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Tests
>Reporter: Mark Miller
>Assignee: Mark Miller
>Priority: Major
>







[jira] [Reopened] (SOLR-12791) Add Metrics reporting for AuthenticationPlugin

2018-12-12 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/SOLR-12791?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl reopened SOLR-12791:


Reopening to fix test failures

> Add Metrics reporting for AuthenticationPlugin
> --
>
> Key: SOLR-12791
> URL: https://issues.apache.org/jira/browse/SOLR-12791
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Authentication, metrics
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
>Priority: Major
> Fix For: master (8.0)
>
>  Time Spent: 2h 20m
>  Remaining Estimate: 0h
>
> Propose to add Metrics support for all Auth plugins. Will let abstract 
> {{AuthenticationPlugin}} base class implement {{SolrMetricProducer}} and keep 
> the counters, such as:
>  * requests
>  * req authenticated
>  * req pass-through (no credentials and blockUnknown false)
>  * req with auth failures due to wrong or malformed credentials
>  * req auth failures due to missing credentials
>  * errors
>  * timeouts
>  * timing stats
> Each implementation still needs to increment the counters etc.






[jira] [Commented] (SOLR-13067) Harden BasicAuthIntegrationTest.

2018-12-12 Thread JIRA


[ 
https://issues.apache.org/jira/browse/SOLR-13067?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16719534#comment-16719534
 ] 

Jan Høydahl commented on SOLR-13067:


Retrying on failure does not help, even after several seconds. Simply relaxing 
the expected metric count from 4 to 3 (and 7 to 6) seems to be better medicine, 
at least short term. I will commit a fix under SOLR-12791 to make Jenkins happy, 
and then we can work on further improvements.

> Harden BasicAuthIntegrationTest.
> 
>
> Key: SOLR-13067
> URL: https://issues.apache.org/jira/browse/SOLR-13067
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Tests
>Reporter: Mark Miller
>Assignee: Mark Miller
>Priority: Major
>







[JENKINS] Lucene-Solr-http2-MacOSX (64bit/jdk-9) - Build # 9 - Failure!

2018-12-12 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-http2-MacOSX/9/
Java: 64bit/jdk-9 -XX:+UseCompressedOops -XX:+UseG1GC

29 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.analytics.facet.QueryFacetTest

Error Message:
Error starting up MiniSolrCloudCluster

Stack Trace:
java.lang.Exception: Error starting up MiniSolrCloudCluster
at __randomizedtesting.SeedInfo.seed([E386BCA701DA5FAE]:0)
at 
org.apache.solr.cloud.MiniSolrCloudCluster.checkForExceptions(MiniSolrCloudCluster.java:630)
at 
org.apache.solr.cloud.MiniSolrCloudCluster.(MiniSolrCloudCluster.java:276)
at 
org.apache.solr.cloud.SolrCloudTestCase$Builder.build(SolrCloudTestCase.java:206)
at 
org.apache.solr.cloud.SolrCloudTestCase$Builder.configure(SolrCloudTestCase.java:198)
at 
org.apache.solr.analytics.SolrAnalyticsTestCase.setupCollection(SolrAnalyticsTestCase.java:60)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:875)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:844)
Suppressed: java.lang.RuntimeException: Jetty/Solr unresponsive
at 
org.apache.solr.client.solrj.embedded.JettySolrRunner.start(JettySolrRunner.java:493)
at 
org.apache.solr.client.solrj.embedded.JettySolrRunner.start(JettySolrRunner.java:451)
at 
org.apache.solr.cloud.MiniSolrCloudCluster.startJettySolrRunner(MiniSolrCloudCluster.java:443)
at 
org.apache.solr.cloud.MiniSolrCloudCluster.lambda$new$0(MiniSolrCloudCluster.java:272)
at 
java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:209)
at 
java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1167)
at 
java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:641)
... 1 more


FAILED:  
junit.framework.TestSuite.org.apache.solr.analytics.facet.QueryFacetTest

Error Message:
9 threads leaked from SUITE scope at 
org.apache.solr.analytics.facet.QueryFacetTest: 1) Thread[id=54, 
name=qtp97125182-54-acceptor-0@316dd3c4-ServerConnector@3faa3b99{HTTP/1.1,[http/1.1,
 h2c]}{127.0.0.1:60798}, state=RUNNABLE, group=TGRP-QueryFacetTest] at 
java.base@9/sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) 
at 
java.base@9/sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:424)
 at 
java.base@9/sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:252)
 at 

[jira] [Commented] (SOLR-13068) many cloud/autoscaling tests are using System.currentTimeMillis() for timing comparisons (under the covers)

2018-12-12 Thread Hoss Man (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13068?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16719522#comment-16719522
 ] 

Hoss Man commented on SOLR-13068:
-

The biggest lie of all...

{noformat}
hossman@tray:~/lucene/dev [master] $ grep -B1 CURRENT_TIME 
solr/core/src/test/org/apache/solr/cloud/autoscaling/TriggerIntegrationTest.java
 
  // use the same time source as triggers use
  static final TimeSource timeSource = TimeSource.CURRENT_TIME;
{noformat}

> many cloud/autoscaling tests are using System.currentTimeMillis() for timing 
> comparisons (under the covers)
> ---
>
> Key: SOLR-13068
> URL: https://issues.apache.org/jira/browse/SOLR-13068
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public (Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>Priority: Major
>
> After rewriting TriggerSetPropertiesIntegrationTest in SOLR-13054 to use 
> better concurrency handling/signalling and log the timestamps the triggers 
> were firing at, I noticed we still got a failure from Uwe's "Windows" jenkins 
> machine (on the http2 branch, but after my fix was merged to that branch).  
> The nature of the failure seemed to suggest that the JVM's 
> {{ScheduledExecutorService.scheduleWithFixedDelay}} wasn't living up to its 
> contract -- and was running successive iterations before the full delay had 
> elapsed.
> But then I realized that in spite of using {{timeSource.getTimeNs()}} in the 
> test, the TimeSource (being used in the test) was a lie -- and under the 
> covers {{System.currentTimeMillis()}} is being used (via 
> {{TimeSource.CURRENT_TIME}}) ... which IIUC is susceptible to clock drift, 
> particularly in VMs like those used on Uwe's jenkins machines...
> Any code in the following tests that relies on the TimeSource for doing 
> comparisons or delta calculations should be suspect...
> {noformat}
> hossman@tray:~/lucene/dev [master] $ find solr/core/src/test -name \*.java | 
> xargs grep CURRENT_TIME
> solr/core/src/test/org/apache/solr/cloud/autoscaling/NodeAddedTriggerTest.java:
>   private static final TimeSource timeSource = TimeSource.CURRENT_TIME;
> solr/core/src/test/org/apache/solr/cloud/autoscaling/ExecutePlanActionTest.java:
>   "mock_trigger_name", 
> Collections.singletonList(TimeSource.CURRENT_TIME.getTimeNs()),
> solr/core/src/test/org/apache/solr/cloud/autoscaling/TriggerIntegrationTest.java:
>   static final TimeSource timeSource = TimeSource.CURRENT_TIME;
> solr/core/src/test/org/apache/solr/cloud/autoscaling/NodeLostTriggerTest.java:
>   private final TimeSource timeSource = TimeSource.CURRENT_TIME;
> solr/core/src/test/org/apache/solr/cloud/autoscaling/sim/TestSimExecutePlanAction.java:
>   "mock_trigger_name", 
> Collections.singletonList(TimeSource.CURRENT_TIME.getTimeNs()),
> solr/core/src/test/org/apache/solr/cloud/HttpPartitionTest.java:TimeOut 
> timeOut = new TimeOut(ms, TimeUnit.MILLISECONDS, TimeSource.CURRENT_TIME);
> solr/core/src/test/org/apache/solr/cloud/TestCloudConsistency.java:
> TimeOut timeOut = new TimeOut(10, TimeUnit.SECONDS, TimeSource.CURRENT_TIME);
> solr/core/src/test/org/apache/solr/cloud/TestCloudConsistency.java:
> TimeOut timeOut = new TimeOut(10, TimeUnit.SECONDS, TimeSource.CURRENT_TIME);
> hossman@tray:~/lucene/dev [master] $ find -name \*.java | xargs grep 
> TriggerIntegrationTest.timeSource
> ./solr/core/src/test/org/apache/solr/cloud/autoscaling/SearchRateTriggerIntegrationTest.java:import
>  static org.apache.solr.cloud.autoscaling.TriggerIntegrationTest.timeSource;
> ./solr/core/src/test/org/apache/solr/cloud/autoscaling/TriggerSetPropertiesIntegrationTest.java:import
>  static org.apache.solr.cloud.autoscaling.TriggerIntegrationTest.timeSource;
> ./solr/core/src/test/org/apache/solr/cloud/autoscaling/RestoreTriggerStateTest.java:import
>  static org.apache.solr.cloud.autoscaling.TriggerIntegrationTest.timeSource;
> ./solr/core/src/test/org/apache/solr/cloud/autoscaling/NodeLostTriggerIntegrationTest.java:
>   long currentTimeNanos = 
> TriggerIntegrationTest.timeSource.getTimeNs();
> ./solr/core/src/test/org/apache/solr/cloud/autoscaling/MetricTriggerIntegrationTest.java:import
>  static org.apache.solr.cloud.autoscaling.TriggerIntegrationTest.timeSource;
> ./solr/core/src/test/org/apache/solr/cloud/autoscaling/NodeAddedTriggerIntegrationTest.java:
>   long currentTimeNanos = 
> TriggerIntegrationTest.timeSource.getTimeNs();
> ./solr/core/src/test/org/apache/solr/cloud/autoscaling/TriggerCooldownIntegrationTest.java:import
>  static org.apache.solr.cloud.autoscaling.TriggerIntegrationTest.timeSource;
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (SOLR-13068) many cloud/autoscaling tests are using System.currentTimeMillis() for timing comparisons (under the covers)

2018-12-12 Thread Hoss Man (JIRA)
Hoss Man created SOLR-13068:
---

 Summary: many cloud/autoscaling tests are using 
System.currentTimeMillis() for timing comparisons (under the covers)
 Key: SOLR-13068
 URL: https://issues.apache.org/jira/browse/SOLR-13068
 Project: Solr
  Issue Type: Sub-task
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Hoss Man


After rewriting TriggerSetPropertiesIntegrationTest in SOLR-13054 to use better 
concurrency handling/signalling and log the timestamps the triggers were firing 
at, I noticed we still got a failure from Uwe's "Windows" jenkins machine (on 
the http2 branch, but after my fix was merged to that branch).  The nature of 
the failure seemed to suggest that the JVM's 
{{ScheduledExecutorService.scheduleWithFixedDelay}} wasn't living up to its 
contract -- and was running successive iterations before the full delay had 
elapsed.

But then I realized that in spite of using {{timeSource.getTimeNs()}} in the 
test, the TimeSource (being used in the test) was a lie -- and under the covers 
{{System.currentTimeMillis()}} is being used (via {{TimeSource.CURRENT_TIME}}) 
... which IIUC is susceptible to clock drift, particularly in VMs like those 
used on Uwe's jenkins machines...

Any code in the following tests that relies on the TimeSource for doing 
comparisons or delta calculations should be suspect...

{noformat}
hossman@tray:~/lucene/dev [master] $ find solr/core/src/test -name \*.java | 
xargs grep CURRENT_TIME
solr/core/src/test/org/apache/solr/cloud/autoscaling/NodeAddedTriggerTest.java: 
 private static final TimeSource timeSource = TimeSource.CURRENT_TIME;
solr/core/src/test/org/apache/solr/cloud/autoscaling/ExecutePlanActionTest.java:
  "mock_trigger_name", 
Collections.singletonList(TimeSource.CURRENT_TIME.getTimeNs()),
solr/core/src/test/org/apache/solr/cloud/autoscaling/TriggerIntegrationTest.java:
  static final TimeSource timeSource = TimeSource.CURRENT_TIME;
solr/core/src/test/org/apache/solr/cloud/autoscaling/NodeLostTriggerTest.java:  
private final TimeSource timeSource = TimeSource.CURRENT_TIME;
solr/core/src/test/org/apache/solr/cloud/autoscaling/sim/TestSimExecutePlanAction.java:
  "mock_trigger_name", 
Collections.singletonList(TimeSource.CURRENT_TIME.getTimeNs()),
solr/core/src/test/org/apache/solr/cloud/HttpPartitionTest.java:TimeOut 
timeOut = new TimeOut(ms, TimeUnit.MILLISECONDS, TimeSource.CURRENT_TIME);
solr/core/src/test/org/apache/solr/cloud/TestCloudConsistency.java:TimeOut 
timeOut = new TimeOut(10, TimeUnit.SECONDS, TimeSource.CURRENT_TIME);
solr/core/src/test/org/apache/solr/cloud/TestCloudConsistency.java:TimeOut 
timeOut = new TimeOut(10, TimeUnit.SECONDS, TimeSource.CURRENT_TIME);
hossman@tray:~/lucene/dev [master] $ find -name \*.java | xargs grep 
TriggerIntegrationTest.timeSource
./solr/core/src/test/org/apache/solr/cloud/autoscaling/SearchRateTriggerIntegrationTest.java:import
 static org.apache.solr.cloud.autoscaling.TriggerIntegrationTest.timeSource;
./solr/core/src/test/org/apache/solr/cloud/autoscaling/TriggerSetPropertiesIntegrationTest.java:import
 static org.apache.solr.cloud.autoscaling.TriggerIntegrationTest.timeSource;
./solr/core/src/test/org/apache/solr/cloud/autoscaling/RestoreTriggerStateTest.java:import
 static org.apache.solr.cloud.autoscaling.TriggerIntegrationTest.timeSource;
./solr/core/src/test/org/apache/solr/cloud/autoscaling/NodeLostTriggerIntegrationTest.java:
  long currentTimeNanos = TriggerIntegrationTest.timeSource.getTimeNs();
./solr/core/src/test/org/apache/solr/cloud/autoscaling/MetricTriggerIntegrationTest.java:import
 static org.apache.solr.cloud.autoscaling.TriggerIntegrationTest.timeSource;
./solr/core/src/test/org/apache/solr/cloud/autoscaling/NodeAddedTriggerIntegrationTest.java:
  long currentTimeNanos = TriggerIntegrationTest.timeSource.getTimeNs();
./solr/core/src/test/org/apache/solr/cloud/autoscaling/TriggerCooldownIntegrationTest.java:import
 static org.apache.solr.cloud.autoscaling.TriggerIntegrationTest.timeSource;
{noformat}
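For illustration, here is a minimal sketch (plain Java, not Solr's actual {{TimeSource}} API) of why a wall-clock-backed source is unsafe for the kind of delta comparisons these tests do, and what a monotonic alternative looks like:

```java
// Minimal sketch (plain Java, not Solr's TimeSource API) of the distinction
// above: System.currentTimeMillis() is a wall clock that NTP can step or slew,
// so deltas computed from it can shrink or even go negative; System.nanoTime()
// is monotonic within a single JVM, so deltas are always non-negative.
public class MonotonicTimingSketch {

    // Wall-clock delta: may be distorted if the clock is adjusted mid-measurement.
    static long wallClockDeltaMs(long startMs) {
        return System.currentTimeMillis() - startMs;
    }

    // Monotonic delta: safe for "did at least N ms elapse?" style assertions.
    static long monotonicDeltaNs(long startNs) {
        return System.nanoTime() - startNs;
    }

    public static void main(String[] args) throws InterruptedException {
        long startNs = System.nanoTime();
        Thread.sleep(50);
        // A scheduleWithFixedDelay-style check should use the monotonic delta,
        // so the measured delay can never appear shorter than it really was.
        System.out.println("elapsed ns: " + monotonicDeltaNs(startNs));
    }
}
```

If a nanoTime-backed source is available (Solr's {{TimeSource}} does expose a {{NANO_TIME}} constant), pointing these tests at it instead of {{CURRENT_TIME}} would make the delta checks immune to clock steps.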





-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-13040) Harden TestSQLHandler.

2018-12-12 Thread Joel Bernstein (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13040?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16719475#comment-16719475
 ] 

Joel Bernstein edited comment on SOLR-13040 at 12/12/18 9:41 PM:
-

I committed the annotations to suppress older codecs and by mistake attached it 
to the parent ticket.

Here is the link to the master commit:

[https://github.com/apache/lucene-solr/commit/1e687268316369102306085f8c5410d62b5dafaf]

And the branch_7x commit:

https://github.com/apache/lucene-solr/commit/7ac559df9add1beac4d5c4102a0c48895ce074ef

 


was (Author: joel.bernstein):
I committed the annotations to suppress older codecs and by mistake attached it 
to the parent ticket. Here is the link to the master commit:

https://github.com/apache/lucene-solr/commit/1e687268316369102306085f8c5410d62b5dafaf

> Harden TestSQLHandler.
> --
>
> Key: SOLR-13040
> URL: https://issues.apache.org/jira/browse/SOLR-13040
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public (Default Security Level. Issues are Public) 
>Reporter: Mark Miller
>Assignee: Joel Bernstein
>Priority: Major
>







[jira] [Commented] (SOLR-13040) Harden TestSQLHandler.

2018-12-12 Thread Joel Bernstein (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13040?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16719475#comment-16719475
 ] 

Joel Bernstein commented on SOLR-13040:
---

I committed the annotations to suppress older codecs and by mistake attached it 
to the parent ticket. Here is the link to the master commit:

https://github.com/apache/lucene-solr/commit/1e687268316369102306085f8c5410d62b5dafaf

> Harden TestSQLHandler.
> --
>
> Key: SOLR-13040
> URL: https://issues.apache.org/jira/browse/SOLR-13040
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public (Default Security Level. Issues are Public) 
>Reporter: Mark Miller
>Assignee: Joel Bernstein
>Priority: Major
>







[jira] [Commented] (LUCENE-8585) Create jump-tables for DocValues at index-time

2018-12-12 Thread Toke Eskildsen (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8585?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16719464#comment-16719464
 ] 

Toke Eskildsen commented on LUCENE-8585:


Just a quick note: Running {{ant test -Dtestcase=TestLucene80DocValuesFormat}} 
took 1:19 minutes with 300 documents and 7:30 minutes with 200K. There are 
about 120 tests in {{BaseDocValuesFormatTestCase}} and 14 of them call 
{{doTestNumericsVsStoredFields}}. Back-of-the-envelope: Increasing to 200K 
documents adds half a minute for each of the 14 tests. They all pass BTW.
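A quick check of that estimate (plain arithmetic, nothing Lucene-specific):

```java
// Back-of-the-envelope check for the timings quoted above: the extra wall
// time with 200K documents, spread across the 14 tests that call
// doTestNumericsVsStoredFields, comes out to roughly half a minute each.
public class EnvelopeMath {
    public static void main(String[] args) {
        int smallRunSec = 1 * 60 + 19;   // 1:19 with 300 documents
        int largeRunSec = 7 * 60 + 30;   // 7:30 with 200K documents
        double extraPerTest = (largeRunSec - smallRunSec) / 14.0;
        System.out.printf("~%.1f s extra per test%n", extraPerTest); // ~26.5 s
    }
}
```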

> Create jump-tables for DocValues at index-time
> --
>
> Key: LUCENE-8585
> URL: https://issues.apache.org/jira/browse/LUCENE-8585
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/codecs
>Affects Versions: master (8.0)
>Reporter: Toke Eskildsen
>Priority: Minor
>  Labels: performance
> Attachments: LUCENE-8585.patch, LUCENE-8585.patch, 
> make_patch_lucene8585.sh
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> As noted in LUCENE-7589, lookup of DocValues should use jump-tables to avoid 
> long iterative walks. This is implemented in LUCENE-8374 at search-time 
> (first request for DocValues from a field in a segment), with the benefit of 
> working without changes to existing Lucene 7 indexes and the downside of 
> introducing a startup time penalty and a memory overhead.
> As discussed in LUCENE-8374, the codec should be updated to create these 
> jump-tables at index time. This eliminates the segment-open time & memory 
> penalties, with the potential downside of increasing index-time for DocValues.
> The three elements of LUCENE-8374 should be transferable to index-time 
> without much alteration of the core structures:
>  * {{IndexedDISI}} block offset and index skips: A {{long}} (64 bits) for 
> every 65536 documents, containing the offset of the block in 33 bits and the 
> index (number of set bits) up to the block in 31 bits.
>  It can be built sequentially and should be stored as a simple sequence of 
> consecutive longs for caching of lookups.
>  As it is fairly small, relative to document count, it might be better to 
> simply memory cache it?
>  * {{IndexedDISI}} DENSE (> 4095, < 65536 set bits) blocks: A {{short}} (16 
> bits) for every 8 {{longs}} (512 bits) for a total of 256 bytes/DENSE_block. 
> Each {{short}} represents the number of set bits up to right before the 
> corresponding sub-block of 512 docIDs.
>  The {{shorts}} can be computed sequentially or when the DENSE block is 
> flushed (probably the easiest). They should be stored as a simple sequence of 
> consecutive shorts for caching of lookups, one logically independent sequence 
> for each DENSE block. The logical position would be one sequence at the start 
> of every DENSE block.
>  Whether it is best to read all the 16 {{shorts}} up front when a DENSE block 
> is accessed or whether it is best to only read any individual {{short}} when 
> needed is not clear at this point.
>  * Variable Bits Per Value: A {{long}} (64 bits) for every 16384 numeric 
> values. Each {{long}} holds the offset to the corresponding block of values.
>  The offsets can be computed sequentially and should be stored as a simple 
> sequence of consecutive {{longs}} for caching of lookups.
>  The vBPV-offsets have the largest space overhead of the 3 jump-tables and a 
> lot of the 64 bits in each long are not used for most indexes. They could be 
> represented as a simple {{PackedInts}} sequence or {{MonotonicLongValues}}, 
> with the downsides of a potential lookup-time overhead and the need for doing 
> the compression after all offsets have been determined.
> I have no experience with the codec-parts responsible for creating 
> index-structures. I'm quite willing to take a stab at this, although I 
> probably won't do much about it before January 2019. Should anyone else wish 
> to adopt this JIRA-issue or co-work on it, I'll be happy to share.
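The packed {{IndexedDISI}} entry from the first bullet (33-bit block offset plus 31-bit rank in one {{long}}) can be sketched like this; the field order and names are illustrative assumptions, not the actual Lucene code:

```java
// Sketch of the jump-table entry layout described above: one long per 65536
// docs, holding the block's file offset in 33 bits and the cumulative set-bit
// count (rank) up to the block in 31 bits. Illustrative only -- the real
// IndexedDISI implementation may order and name these fields differently.
public class DisiJumpEntry {

    // Pack a 33-bit offset and a 31-bit rank into a single long.
    static long pack(long blockOffset, int rank) {
        assert blockOffset >= 0 && blockOffset < (1L << 33) && rank >= 0;
        return (blockOffset << 31) | (rank & 0x7FFFFFFFL);
    }

    static long offset(long entry) {                 // upper 33 bits
        return entry >>> 31;
    }

    static int rank(long entry) {                    // lower 31 bits
        return (int) (entry & 0x7FFFFFFFL);
    }

    public static void main(String[] args) {
        long e = pack(123_456_789L, 42);
        System.out.println(offset(e) + " " + rank(e)); // 123456789 42
    }
}
```

Stored as consecutive longs, lookups become a single array read per 65536-doc block, which is the caching-friendly layout the description argues for.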






[jira] [Commented] (SOLR-13067) Harden BasicAuthIntegrationTest.

2018-12-12 Thread JIRA


[ 
https://issues.apache.org/jira/browse/SOLR-13067?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16719461#comment-16719461
 ] 

Jan Høydahl commented on SOLR-13067:


There are three test classes that currently use the metric counting assertion 
from {{SolrCloudAuthTestCase}}, and I think you may be right that this is a 
timing issue which is more likely to appear on some fast servers than on a 
laptop etc.

I'll try adding a timeout to the assert in 
{{SolrCloudAuthTestCase.assertExpectedMetrics()}}.
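A hedged sketch of that kind of timeout-based assertion (the helper name and polling interval are hypothetical, not {{SolrCloudAuthTestCase}}'s actual API):

```java
// Hypothetical shape of the proposed fix: instead of asserting a freshly
// written metric value once, poll the condition until it holds or a deadline
// passes. Names and the 100ms back-off are illustrative assumptions.
import java.util.function.BooleanSupplier;

public class WaitingAssert {

    static void assertTrueWithin(BooleanSupplier condition, long timeoutMs)
            throws InterruptedException {
        long deadline = System.nanoTime() + timeoutMs * 1_000_000L;
        while (!condition.getAsBoolean()) {
            if (System.nanoTime() > deadline) {
                throw new AssertionError("condition not met within " + timeoutMs + "ms");
            }
            Thread.sleep(100); // back off between metric reads
        }
    }

    public static void main(String[] args) throws InterruptedException {
        long start = System.nanoTime();
        // Simulated metric that only becomes visible ~300ms after the write.
        assertTrueWithin(() -> System.nanoTime() - start > 300_000_000L, 5_000);
        System.out.println("metric reached expected count");
    }
}
```

This absorbs the read/write lag on fast servers while still failing promptly when the metric truly never arrives.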

> Harden BasicAuthIntegrationTest.
> 
>
> Key: SOLR-13067
> URL: https://issues.apache.org/jira/browse/SOLR-13067
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public (Default Security Level. Issues are Public) 
>  Components: Tests
>Reporter: Mark Miller
>Assignee: Mark Miller
>Priority: Major
>







[JENKINS] Lucene-Solr-7.6-Linux (64bit/jdk-11) - Build # 92 - Unstable!

2018-12-12 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.6-Linux/92/
Java: 64bit/jdk-11 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

13 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.handler.admin.AutoscalingHistoryHandlerTest

Error Message:
Could not find collection : AutoscalingHistoryHandlerTest_collection

Stack Trace:
org.apache.solr.common.SolrException: Could not find collection : 
AutoscalingHistoryHandlerTest_collection
at __randomizedtesting.SeedInfo.seed([2FA2C7F3CA097991]:0)
at 
org.apache.solr.common.cloud.ClusterState.getCollection(ClusterState.java:118)
at 
org.apache.solr.cloud.SolrCloudTestCase.getCollectionState(SolrCloudTestCase.java:258)
at 
org.apache.solr.handler.admin.AutoscalingHistoryHandlerTest.waitForRecovery(AutoscalingHistoryHandlerTest.java:403)
at 
org.apache.solr.handler.admin.AutoscalingHistoryHandlerTest.setupCluster(AutoscalingHistoryHandlerTest.java:97)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:875)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:834)


FAILED:  
junit.framework.TestSuite.org.apache.solr.handler.admin.AutoscalingHistoryHandlerTest

Error Message:
Could not find collection : AutoscalingHistoryHandlerTest_collection

Stack Trace:
org.apache.solr.common.SolrException: Could not find collection : 
AutoscalingHistoryHandlerTest_collection
at __randomizedtesting.SeedInfo.seed([2FA2C7F3CA097991]:0)
at 
org.apache.solr.common.cloud.ClusterState.getCollection(ClusterState.java:118)
at 
org.apache.solr.cloud.SolrCloudTestCase.getCollectionState(SolrCloudTestCase.java:258)
at 
org.apache.solr.handler.admin.AutoscalingHistoryHandlerTest.waitForRecovery(AutoscalingHistoryHandlerTest.java:403)
at 
org.apache.solr.handler.admin.AutoscalingHistoryHandlerTest.setupCluster(AutoscalingHistoryHandlerTest.java:97)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:875)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891)
at 

Re: [JENKINS] Solr-reference-guide-master - Build # 12309 - Still Failing

2018-12-12 Thread Steve Rowe
Looks like the newly updated "stable" version of RVM has been signed by a new 
release manager, and their public key hasn't been installed on the websites1 
Jenkins VM yet.  I (temporarily) changed the build script to download the key: 
first using the documented mechanism printed in the log, which failed on manual 
job kickoff, and then by downloading the key gpg says the release is signed 
by; built manually (success!); and then commented out the key download lines.

Steve

> On Dec 12, 2018, at 1:04 PM, Apache Jenkins Server 
>  wrote:
> 
> Build: https://builds.apache.org/job/Solr-reference-guide-master/12309/
> 
> Log: 
> Started by timer
> [EnvInject] - Loading node environment variables.
> Building remotely on websites1 (git-websites svn-websites) in workspace 
> /home/jenkins/jenkins-slave/workspace/Solr-reference-guide-master
>> git rev-parse --is-inside-work-tree # timeout=10
> Fetching changes from the remote Git repository
>> git config remote.origin.url git://git.apache.org/lucene-solr.git # 
>> timeout=10
> Cleaning workspace
>> git rev-parse --verify HEAD # timeout=10
> Resetting working tree
>> git reset --hard # timeout=10
>> git clean -fdx # timeout=10
> Fetching upstream changes from git://git.apache.org/lucene-solr.git
>> git --version # timeout=10
>> git fetch --tags --progress git://git.apache.org/lucene-solr.git 
>> +refs/heads/*:refs/remotes/origin/*
>> git rev-parse refs/remotes/origin/master^{commit} # timeout=10
>> git rev-parse refs/remotes/origin/origin/master^{commit} # timeout=10
> Checking out Revision 1e687268316369102306085f8c5410d62b5dafaf 
> (refs/remotes/origin/master)
>> git config core.sparsecheckout # timeout=10
>> git checkout -f 1e687268316369102306085f8c5410d62b5dafaf
> Commit message: "SOLR-12801: Suppress SSL and older codecs"
>> git rev-list --no-walk 1e687268316369102306085f8c5410d62b5dafaf # timeout=10
> No emails were triggered.
> [Solr-reference-guide-master] $ /bin/bash -xe 
> /tmp/jenkins9204071449082796145.sh
> + gpg2 --recv-keys 409B6B1796C275462A1703113804BB82D39DC0E3
> /tmp/jenkins9204071449082796145.sh: line 2: gpg2: command not found
> + command curl -sSL https://rvm.io/mpapis.asc
> + curl -sSL https://rvm.io/mpapis.asc
> + gpg --import -
> gpg: key D39DC0E3: "Michal Papis (RVM signing) " not changed
> gpg: Total number processed: 1
> gpg:  unchanged: 1
> + bash dev-tools/scripts/jenkins.build.ref.guide.sh
> + set -e
> + RVM_PATH=/home/jenkins/.rvm
> + RUBY_VERSION=ruby-2.3.3
> + GEMSET=solr-refguide-gemset
> + curl -sSL https://get.rvm.io
> + bash -s -- --ignore-dotfiles stable
> Turning on ignore dotfiles mode.
> Downloading https://github.com/rvm/rvm/archive/1.29.5.tar.gz
> Downloading 
> https://github.com/rvm/rvm/releases/download/1.29.5/1.29.5.tar.gz.asc
> gpg: Signature made Wed 12 Dec 2018 11:25:22 AM UTC using RSA key ID 39499BDB
> gpg: Can't check signature: public key not found
> Warning, RVM 1.26.0 introduces signed releases and automated check of 
> signatures when GPG software found. Assuming you trust Michal Papis import 
> the mpapis public key (downloading the signatures).
> 
> GPG signature verification failed for 
> '/home/jenkins/shared/.rvm/archives/rvm-1.29.5.tgz' - 
> 'https://github.com/rvm/rvm/releases/download/1.29.5/1.29.5.tar.gz.asc'! Try 
> to install GPG v2 and then fetch the public key:
> 
>gpg2 --recv-keys 409B6B1796C275462A1703113804BB82D39DC0E3
> 
> or if it fails:
> 
>command curl -sSL https://rvm.io/mpapis.asc | gpg --import -
> 
> the key can be compared with:
> 
>https://rvm.io/mpapis.asc
>https://keybase.io/mpapis
> 
> NOTE: GPG version 2.1.17 have a bug which cause failures during fetching keys 
> from remote server. Please downgrade or upgrade to newer version (if 
> available) or use the second method described above.
> 
> Build step 'Execute shell' marked build as failure
> Archiving artifacts
> Publishing Javadoc
> Email was triggered for: Failure - Any
> Sending email for trigger: Failure - Any
> 
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13065) Harden TestSimExecuteActionPlan

2018-12-12 Thread Jason Gerlowski (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13065?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16719443#comment-16719443
 ] 

Jason Gerlowski commented on SOLR-13065:


When I disable SimClusterStateProvider's caching, the error disappears in a 
beast run of {{-Dbeast.iters=400 -Dtests.dupes=30 -Dtests.iters=20}}, which 
implies that the cluster state caching is the only issue, and we'll need to 
follow a similar fix to SOLR-13045. 

> Harden TestSimExecuteActionPlan
> ---
>
> Key: SOLR-13065
> URL: https://issues.apache.org/jira/browse/SOLR-13065
> Project: Solr
>  Issue Type: Test
>  Security Level: Public (Default Security Level. Issues are Public) 
>Affects Versions: master (8.0)
>Reporter: Jason Gerlowski
>Assignee: Jason Gerlowski
>Priority: Major
>
> TestSimExecuteActionPlan is a serial offender in our failed Jenkins jobs.  
> Would like to look into improving it.






[jira] [Commented] (SOLR-13067) Harden BasicAuthIntegrationTest.

2018-12-12 Thread Mark Miller (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13067?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16719436#comment-16719436
 ] 

Mark Miller commented on SOLR-13067:


I see lots of other fails in beasting today, including due to a core reload 
cleanup bug, but largely around a bad assumption about metric read/write 
timing. I'm going to commit a hack fix for that, but someone should really make 
the metric count checking method wait for a timeout to see expected counts or 
something.

> Harden BasicAuthIntegrationTest.
> 
>
> Key: SOLR-13067
> URL: https://issues.apache.org/jira/browse/SOLR-13067
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public (Default Security Level. Issues are Public) 
>  Components: Tests
>Reporter: Mark Miller
>Assignee: Mark Miller
>Priority: Major
>







[JENKINS] Solr-reference-guide-master - Build # 12309 - Still Failing

2018-12-12 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Solr-reference-guide-master/12309/

Log: 
Started by timer
[EnvInject] - Loading node environment variables.
Building remotely on websites1 (git-websites svn-websites) in workspace 
/home/jenkins/jenkins-slave/workspace/Solr-reference-guide-master
 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url git://git.apache.org/lucene-solr.git # 
 > timeout=10
Cleaning workspace
 > git rev-parse --verify HEAD # timeout=10
Resetting working tree
 > git reset --hard # timeout=10
 > git clean -fdx # timeout=10
Fetching upstream changes from git://git.apache.org/lucene-solr.git
 > git --version # timeout=10
 > git fetch --tags --progress git://git.apache.org/lucene-solr.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/master^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/master^{commit} # timeout=10
Checking out Revision 1e687268316369102306085f8c5410d62b5dafaf 
(refs/remotes/origin/master)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 1e687268316369102306085f8c5410d62b5dafaf
Commit message: "SOLR-12801: Suppress SSL and older codecs"
 > git rev-list --no-walk 1e687268316369102306085f8c5410d62b5dafaf # timeout=10
No emails were triggered.
[Solr-reference-guide-master] $ /bin/bash -xe /tmp/jenkins9204071449082796145.sh
+ gpg2 --recv-keys 409B6B1796C275462A1703113804BB82D39DC0E3
/tmp/jenkins9204071449082796145.sh: line 2: gpg2: command not found
+ command curl -sSL https://rvm.io/mpapis.asc
+ curl -sSL https://rvm.io/mpapis.asc
+ gpg --import -
gpg: key D39DC0E3: "Michal Papis (RVM signing) " not changed
gpg: Total number processed: 1
gpg:  unchanged: 1
+ bash dev-tools/scripts/jenkins.build.ref.guide.sh
+ set -e
+ RVM_PATH=/home/jenkins/.rvm
+ RUBY_VERSION=ruby-2.3.3
+ GEMSET=solr-refguide-gemset
+ curl -sSL https://get.rvm.io
+ bash -s -- --ignore-dotfiles stable
Turning on ignore dotfiles mode.
Downloading https://github.com/rvm/rvm/archive/1.29.5.tar.gz
Downloading 
https://github.com/rvm/rvm/releases/download/1.29.5/1.29.5.tar.gz.asc
gpg: Signature made Wed 12 Dec 2018 11:25:22 AM UTC using RSA key ID 39499BDB
gpg: Can't check signature: public key not found
Warning, RVM 1.26.0 introduces signed releases and automated check of 
signatures when GPG software found. Assuming you trust Michal Papis import the 
mpapis public key (downloading the signatures).

GPG signature verification failed for 
'/home/jenkins/shared/.rvm/archives/rvm-1.29.5.tgz' - 
'https://github.com/rvm/rvm/releases/download/1.29.5/1.29.5.tar.gz.asc'! Try to 
install GPG v2 and then fetch the public key:

gpg2 --recv-keys 409B6B1796C275462A1703113804BB82D39DC0E3

or if it fails:

command curl -sSL https://rvm.io/mpapis.asc | gpg --import -

the key can be compared with:

https://rvm.io/mpapis.asc
https://keybase.io/mpapis

NOTE: GPG version 2.1.17 have a bug which cause failures during fetching keys 
from remote server. Please downgrade or upgrade to newer version (if available) 
or use the second method described above.

Build step 'Execute shell' marked build as failure
Archiving artifacts
Publishing Javadoc
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any


[jira] [Commented] (SOLR-13067) Harden BasicAuthIntegrationTest.

2018-12-12 Thread JIRA


[ 
https://issues.apache.org/jira/browse/SOLR-13067?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16719434#comment-16719434
 ] 

Jan Høydahl commented on SOLR-13067:


Hmm, this is caused by SOLR-12791, which was committed today. The restarting of 
Jetty instances in the MiniSolrCloudCluster probably causes this, but I only saw this 
error when beasting with >3 parallel runs. Will try to reproduce.

> Harden BasicAuthIntegrationTest.
> 
>
> Key: SOLR-13067
> URL: https://issues.apache.org/jira/browse/SOLR-13067
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Tests
>Reporter: Mark Miller
>Assignee: Mark Miller
>Priority: Major
>







[JENKINS] Solr-reference-guide-master - Build # 12308 - Still Failing

2018-12-12 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Solr-reference-guide-master/12308/

Log: 
Started by user sarowe
[EnvInject] - Loading node environment variables.
Building remotely on websites1 (git-websites svn-websites) in workspace 
/home/jenkins/jenkins-slave/workspace/Solr-reference-guide-master
 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url git://git.apache.org/lucene-solr.git # 
 > timeout=10
Cleaning workspace
 > git rev-parse --verify HEAD # timeout=10
Resetting working tree
 > git reset --hard # timeout=10
 > git clean -fdx # timeout=10
Fetching upstream changes from git://git.apache.org/lucene-solr.git
 > git --version # timeout=10
 > git fetch --tags --progress git://git.apache.org/lucene-solr.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/master^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/master^{commit} # timeout=10
Checking out Revision 1e687268316369102306085f8c5410d62b5dafaf 
(refs/remotes/origin/master)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 1e687268316369102306085f8c5410d62b5dafaf
Commit message: "SOLR-12801: Suppress SSL and older codecs"
 > git rev-list --no-walk 7e4555a2fdb863d6aac2f785116f8f13e51bf16b # timeout=10
No emails were triggered.
[Solr-reference-guide-master] $ /bin/bash -xe /tmp/jenkins8743295472418087611.sh
+ gpg2 --recv-keys 409B6B1796C275462A1703113804BB82D39DC0E3
/tmp/jenkins8743295472418087611.sh: line 2: gpg2: command not found
+ command curl -sSL https://rvm.io/mpapis.asc
+ curl -sSL https://rvm.io/mpapis.asc
+ gpg --import -
gpg: key D39DC0E3: "Michal Papis (RVM signing) " not changed
gpg: Total number processed: 1
gpg:  unchanged: 1
+ bash dev-tools/scripts/jenkins.build.ref.guide.sh
+ set -e
+ RVM_PATH=/home/jenkins/.rvm
+ RUBY_VERSION=ruby-2.3.3
+ GEMSET=solr-refguide-gemset
+ curl -sSL https://get.rvm.io
+ bash -s -- --ignore-dotfiles stable
Turning on ignore dotfiles mode.
Downloading https://github.com/rvm/rvm/archive/1.29.5.tar.gz
Downloading 
https://github.com/rvm/rvm/releases/download/1.29.5/1.29.5.tar.gz.asc
gpg: Signature made Wed 12 Dec 2018 11:25:22 AM UTC using RSA key ID 39499BDB
gpg: Can't check signature: public key not found
Warning, RVM 1.26.0 introduces signed releases and automated check of 
signatures when GPG software found. Assuming you trust Michal Papis import the 
mpapis public key (downloading the signatures).

GPG signature verification failed for 
'/home/jenkins/shared/.rvm/archives/rvm-1.29.5.tgz' - 
'https://github.com/rvm/rvm/releases/download/1.29.5/1.29.5.tar.gz.asc'! Try to 
install GPG v2 and then fetch the public key:

gpg2 --recv-keys 409B6B1796C275462A1703113804BB82D39DC0E3

or if it fails:

command curl -sSL https://rvm.io/mpapis.asc | gpg --import -

the key can be compared with:

https://rvm.io/mpapis.asc
https://keybase.io/mpapis

NOTE: GPG version 2.1.17 have a bug which cause failures during fetching keys 
from remote server. Please downgrade or upgrade to newer version (if available) 
or use the second method described above.

Build step 'Execute shell' marked build as failure
Archiving artifacts
Publishing Javadoc
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any


[jira] [Commented] (SOLR-12801) Fix the tests, remove BadApples and AwaitsFix annotations, improve env for test development.

2018-12-12 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12801?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16719422#comment-16719422
 ] 

ASF subversion and git services commented on SOLR-12801:


Commit 7ac559df9add1beac4d5c4102a0c48895ce074ef in lucene-solr's branch 
refs/heads/branch_7x from [~joel.bernstein]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=7ac559d ]

SOLR-12801: Suppress SSL and older codecs


> Fix the tests, remove BadApples and AwaitsFix annotations, improve env for 
> test development.
> 
>
> Key: SOLR-12801
> URL: https://issues.apache.org/jira/browse/SOLR-12801
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mark Miller
>Assignee: Mark Miller
>Priority: Critical
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> A single issue to counteract the single issue adding tons of annotations, the 
> continued addition of new flakey tests, and the continued addition of 
> flakiness to existing tests.
> Lots more to come.






[jira] [Commented] (SOLR-12801) Fix the tests, remove BadApples and AwaitsFix annotations, improve env for test development.

2018-12-12 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12801?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16719417#comment-16719417
 ] 

ASF subversion and git services commented on SOLR-12801:


Commit 1e687268316369102306085f8c5410d62b5dafaf in lucene-solr's branch 
refs/heads/master from [~joel.bernstein]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=1e68726 ]

SOLR-12801: Suppress SSL and older codecs


> Fix the tests, remove BadApples and AwaitsFix annotations, improve env for 
> test development.
> 
>
> Key: SOLR-12801
> URL: https://issues.apache.org/jira/browse/SOLR-12801
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mark Miller
>Assignee: Mark Miller
>Priority: Critical
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> A single issue to counteract the single issue adding tons of annotations, the 
> continued addition of new flakey tests, and the continued addition of 
> flakiness to existing tests.
> Lots more to come.






[JENKINS] Lucene-Solr-NightlyTests-7.6 - Build # 24 - Still unstable

2018-12-12 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-7.6/24/

5 tests failed.
FAILED:  org.apache.solr.cloud.RestartWhileUpdatingTest.test

Error Message:
There are still nodes recoverying - waited for 320 seconds

Stack Trace:
java.lang.AssertionError: There are still nodes recoverying - waited for 320 
seconds
at 
__randomizedtesting.SeedInfo.seed([21858C97DB913823:A9D1B34D756D55DB]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:185)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.waitForRecoveriesToFinish(AbstractFullDistribZkTestBase.java:920)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.waitForThingsToLevelOut(AbstractFullDistribZkTestBase.java:1477)
at 
org.apache.solr.cloud.RestartWhileUpdatingTest.test(RestartWhileUpdatingTest.java:145)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:1010)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:985)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 

[JENKINS] Solr-reference-guide-master - Build # 12307 - Still Failing

2018-12-12 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Solr-reference-guide-master/12307/

Log: 
Started by timer
[EnvInject] - Loading node environment variables.
Building remotely on websites1 (git-websites svn-websites) in workspace 
/home/jenkins/jenkins-slave/workspace/Solr-reference-guide-master
 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url git://git.apache.org/lucene-solr.git # 
 > timeout=10
Cleaning workspace
 > git rev-parse --verify HEAD # timeout=10
Resetting working tree
 > git reset --hard # timeout=10
 > git clean -fdx # timeout=10
Fetching upstream changes from git://git.apache.org/lucene-solr.git
 > git --version # timeout=10
 > git fetch --tags --progress git://git.apache.org/lucene-solr.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/master^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/master^{commit} # timeout=10
Checking out Revision 7e4555a2fdb863d6aac2f785116f8f13e51bf16b 
(refs/remotes/origin/master)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 7e4555a2fdb863d6aac2f785116f8f13e51bf16b
Commit message: "SOLR-13057: Allow search, facet and timeseries Streaming 
Expressions to accept a comma delimited list of collections"
 > git rev-list --no-walk 7e4555a2fdb863d6aac2f785116f8f13e51bf16b # timeout=10
No emails were triggered.
[Solr-reference-guide-master] $ /bin/bash -xe /tmp/jenkins6281264741346905833.sh
+ bash dev-tools/scripts/jenkins.build.ref.guide.sh
+ set -e
+ RVM_PATH=/home/jenkins/.rvm
+ RUBY_VERSION=ruby-2.3.3
+ GEMSET=solr-refguide-gemset
+ curl -sSL https://get.rvm.io
+ bash -s -- --ignore-dotfiles stable
Turning on ignore dotfiles mode.
Downloading https://github.com/rvm/rvm/archive/1.29.5.tar.gz
Downloading 
https://github.com/rvm/rvm/releases/download/1.29.5/1.29.5.tar.gz.asc
gpg: Signature made Wed 12 Dec 2018 11:25:22 AM UTC using RSA key ID 39499BDB
gpg: Can't check signature: public key not found
Warning, RVM 1.26.0 introduces signed releases and automated check of 
signatures when GPG software found. Assuming you trust Michal Papis import the 
mpapis public key (downloading the signatures).

GPG signature verification failed for 
'/home/jenkins/shared/.rvm/archives/rvm-1.29.5.tgz' - 
'https://github.com/rvm/rvm/releases/download/1.29.5/1.29.5.tar.gz.asc'! Try to 
install GPG v2 and then fetch the public key:

gpg2 --recv-keys 409B6B1796C275462A1703113804BB82D39DC0E3

or if it fails:

command curl -sSL https://rvm.io/mpapis.asc | gpg --import -

the key can be compared with:

https://rvm.io/mpapis.asc
https://keybase.io/mpapis

NOTE: GPG version 2.1.17 have a bug which cause failures during fetching keys 
from remote server. Please downgrade or upgrade to newer version (if available) 
or use the second method described above.

Build step 'Execute shell' marked build as failure
Archiving artifacts
Publishing Javadoc
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any


[JENKINS] Lucene-Solr-http2-Solaris (64bit/jdk1.8.0) - Build # 15 - Failure!

2018-12-12 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-http2-Solaris/15/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

2 tests failed.
FAILED:  org.apache.solr.cloud.SaslZkACLProviderTest.testSaslZkACLProvider

Error Message:
KeeperErrorCode = AuthFailed for /solr

Stack Trace:
org.apache.zookeeper.KeeperException$AuthFailedException: KeeperErrorCode = 
AuthFailed for /solr
at 
__randomizedtesting.SeedInfo.seed([BEE0CC1E85AD51EE:BA149AC61CE9F2E0]:0)
at org.apache.zookeeper.KeeperException.create(KeeperException.java:126)
at org.apache.zookeeper.KeeperException.create(KeeperException.java:54)
at org.apache.zookeeper.ZooKeeper.create(ZooKeeper.java:792)
at 
org.apache.solr.common.cloud.SolrZkClient.lambda$makePath$8(SolrZkClient.java:545)
at 
org.apache.solr.common.cloud.ZkCmdExecutor.retryOperation(ZkCmdExecutor.java:71)
at 
org.apache.solr.common.cloud.SolrZkClient.makePath(SolrZkClient.java:544)
at 
org.apache.solr.common.cloud.SolrZkClient.makePath(SolrZkClient.java:436)
at 
org.apache.solr.cloud.SaslZkACLProviderTest.setUp(SaslZkACLProviderTest.java:83)
at sun.reflect.GeneratedMethodAccessor137.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:969)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

Re: [VOTE] Release Lucene/Solr 7.6.0 RC2

2018-12-12 Thread Nicholas Knize
The vote has passed. Huge thanks to all that voted and helped in the
process. I will begin the remaining steps and announce the release either
tomorrow or Friday.

Thanks again!

On Wed, Dec 12, 2018 at 6:47 AM Dawid Weiss  wrote:

> SUCCESS! [1:07:17.725159]
>
> +1 to release.
>
> On Wed, Dec 12, 2018 at 12:49 AM Uwe Schindler  wrote:
> >
> > Hi,
> >
> >
> >
> > I also did some local tests with Solr (on Windows, with whitespaces in
> path name): Starts and works fine with Java 8, Java 9, Java 10 and also
> Java 11.
> >
> >
> >
> > +1 to release (the Smoke tester was run with Java 8 and Java 9 using
> Policeman Jenkins, see mail before)!
> >
> >
> >
> > Uwe
> >
> >
> >
> > -
> >
> > Uwe Schindler
> >
> > Achterdiek 19, D-28357 Bremen
> >
> > http://www.thetaphi.de
> >
> > eMail: u...@thetaphi.de
> >
> >
> >
> > From: Uwe Schindler 
> > Sent: Tuesday, December 11, 2018 9:30 AM
> > To: dev@lucene.apache.org
> > Subject: RE: [VOTE] Release Lucene/Solr 7.6.0 RC2
> >
> >
> >
> > Hi,
> >
> >
> >
> > Policeman Jenkins checked the release on my request, results are here:
> https://jenkins.thetaphi.de/job/Lucene-Solr-Release-Tester/10/console
> >
> >
> >
> > In short:
> >
> >
> >
> > SUCCESS! [2:26:45.209399]
> >
> >
> >
> > Finished: SUCCESS
> >
> >
> >
> > This run took twice as long as most others here, because it tested
> with Java 8 along with the “--test-java9” parameter! So this verified that the
> release works both with Java 8 and with Java 9+ (MR-JAR, Unsafe,…):
> >
> >
> >
> > + python3 -u dev-tools/scripts/smokeTestRelease.py --test-java9
> /home/jenkins/tools/java/64bit/latest-jdk9 --tmp-dir
> /var/lib/jenkins/workspace/Lucene-Solr-Release-Tester/smoketmp
> https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-7.6.0-RC2-rev719cde97f84640faa1e3525690d262946571245f/
> >
> > Revision: 719cde97f84640faa1e3525690d262946571245f
> >
> > Java 1.8 JAVA_HOME=/home/jenkins/tools/java/64bit/latest-jdk8
> >
> > Java 9 JAVA_HOME=/home/jenkins/tools/java/64bit/latest-jdk9
> >
> >
> >
> > Here is Policeman’s +1
> >
> >
> >
> > Uwe Schindler will do some quick checks on the binary ZIP with Windows
> on his local computer (to see if Solr starts and all scripts can handle
> whitespace in pathnames) and give his +1 a bit later!
> >
> >
> >
> > Uwe
> >
> >
> >
> > -
> >
> > Uwe Schindler
> >
> > Achterdiek 19, D-28357 Bremen
> >
> > http://www.thetaphi.de
> >
> > eMail: u...@thetaphi.de
> >
> >
> >
> > From: Nicholas Knize 
> > Sent: Friday, December 7, 2018 11:47 PM
> > To: Lucene/Solr dev 
> > Subject: [VOTE] Release Lucene/Solr 7.6.0 RC2
> >
> >
> >
> > Please vote for release candidate 2 for Lucene/Solr 7.6.0
> >
> >
> >
> > The artifacts can be downloaded from:
> >
> >
> https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-7.6.0-RC2-rev719cde97f84640faa1e3525690d262946571245f/
> >
> >
> >
> > You can run the smoke tester directly with this command:
> >
> >
> >
> > python3 -u dev-tools/scripts/smokeTestRelease.py \
> >
> >
> https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-7.6.0-RC2-rev719cde97f84640faa1e3525690d262946571245f/
> >
> >
> >
> > Here's my +1
> >
> > SUCCESS! [0:50:22.047749]
> >
> > --
> >
> > Nicholas Knize, Ph.D., GISP
> > Geospatial Software Guy  |  Elasticsearch
> > Apache Lucene Committer
> > nkn...@apache.org
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>
> --

Nicholas Knize, Ph.D., GISP
Geospatial Software Guy  |  Elasticsearch
Apache Lucene Committer
nkn...@apache.org


[jira] [Commented] (SOLR-13040) Harden TestSQLHandler.

2018-12-12 Thread Joel Bernstein (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13040?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16719357#comment-16719357
 ] 

Joel Bernstein commented on SOLR-13040:
---

I looked into the deleteCore calls and they were certainly wrong. deleteCore 
was being called after each test method ran. Strangely, this didn't break the 
test in normal runs, but it did break it completely when beasted.

I will beast all new tests going forward.

I plan on reinstating this test after adding the suppress annotations. 
[~markrmil...@gmail.com], thanks for fixing this issue and all the work you've 
been doing on the tests. 
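The pattern behind that fix can be sketched in plain Java (a hypothetical illustration of the test lifecycle, not the actual TestSQLHandler code): teardown that destroys state shared by all test methods, like deleteCore, belongs in a once-per-suite hook rather than a per-method one.

```java
import java.util.ArrayList;
import java.util.List;

public class TeardownScopeDemo {

    static List<String> core; // stands in for a SolrCore shared by all test methods

    static void createCore() { core = new ArrayList<>(List.of("doc1", "doc2")); }
    static void deleteCore() { core = null; } // must run ONCE, after the whole suite

    static void testQueryOne() { if (core.isEmpty()) throw new AssertionError(); }
    static void testQueryTwo() { if (core.size() != 2) throw new AssertionError(); }

    public static void main(String[] args) {
        createCore();
        testQueryOne();
        // Bug pattern: calling deleteCore() here (i.e. after each test method)
        // leaves core == null for the next method, an intermittent failure
        // that beasting (many repeated, parallel runs) surfaces reliably.
        testQueryTwo();
        deleteCore(); // correct: suite-level teardown
        System.out.println("ok");
    }
}
```

In a JUnit test the same distinction is the choice between a per-method teardown hook and a once-per-class one.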

> Harden TestSQLHandler.
> --
>
> Key: SOLR-13040
> URL: https://issues.apache.org/jira/browse/SOLR-13040
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mark Miller
>Assignee: Joel Bernstein
>Priority: Major
>







[JENKINS] Solr-reference-guide-master - Build # 12306 - Still Failing

2018-12-12 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Solr-reference-guide-master/12306/

Log: 
Started by timer
[EnvInject] - Loading node environment variables.
Building remotely on websites1 (git-websites svn-websites) in workspace 
/home/jenkins/jenkins-slave/workspace/Solr-reference-guide-master
 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url git://git.apache.org/lucene-solr.git # 
 > timeout=10
Cleaning workspace
 > git rev-parse --verify HEAD # timeout=10
Resetting working tree
 > git reset --hard # timeout=10
 > git clean -fdx # timeout=10
Fetching upstream changes from git://git.apache.org/lucene-solr.git
 > git --version # timeout=10
 > git fetch --tags --progress git://git.apache.org/lucene-solr.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/master^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/master^{commit} # timeout=10
Checking out Revision 7e4555a2fdb863d6aac2f785116f8f13e51bf16b 
(refs/remotes/origin/master)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 7e4555a2fdb863d6aac2f785116f8f13e51bf16b
Commit message: "SOLR-13057: Allow search, facet and timeseries Streaming 
Expressions to accept a comma delimited list of collections"
 > git rev-list --no-walk 7e4555a2fdb863d6aac2f785116f8f13e51bf16b # timeout=10
No emails were triggered.
[Solr-reference-guide-master] $ /bin/bash -xe /tmp/jenkins2543464749245206512.sh
+ bash dev-tools/scripts/jenkins.build.ref.guide.sh
+ set -e
+ RVM_PATH=/home/jenkins/.rvm
+ RUBY_VERSION=ruby-2.3.3
+ GEMSET=solr-refguide-gemset
+ curl -sSL https://get.rvm.io
+ bash -s -- --ignore-dotfiles stable
Turning on ignore dotfiles mode.
Downloading https://github.com/rvm/rvm/archive/1.29.5.tar.gz
Downloading 
https://github.com/rvm/rvm/releases/download/1.29.5/1.29.5.tar.gz.asc
gpg: Signature made Wed 12 Dec 2018 11:25:22 AM UTC using RSA key ID 39499BDB
gpg: Can't check signature: public key not found
Warning, RVM 1.26.0 introduces signed releases and automated check of 
signatures when GPG software found. Assuming you trust Michal Papis import the 
mpapis public key (downloading the signatures).

GPG signature verification failed for 
'/home/jenkins/shared/.rvm/archives/rvm-1.29.5.tgz' - 
'https://github.com/rvm/rvm/releases/download/1.29.5/1.29.5.tar.gz.asc'! Try to 
install GPG v2 and then fetch the public key:

gpg2 --recv-keys 409B6B1796C275462A1703113804BB82D39DC0E3

or if it fails:

command curl -sSL https://rvm.io/mpapis.asc | gpg --import -

the key can be compared with:

https://rvm.io/mpapis.asc
https://keybase.io/mpapis

NOTE: GPG version 2.1.17 have a bug which cause failures during fetching keys 
from remote server. Please downgrade or upgrade to newer version (if available) 
or use the second method described above.

Build step 'Execute shell' marked build as failure
Archiving artifacts
Publishing Javadoc
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any


[jira] [Created] (SOLR-13067) Harden BasicAuthIntegrationTest.

2018-12-12 Thread Mark Miller (JIRA)
Mark Miller created SOLR-13067:
--

 Summary: Harden BasicAuthIntegrationTest.
 Key: SOLR-13067
 URL: https://issues.apache.org/jira/browse/SOLR-13067
 Project: Solr
  Issue Type: Sub-task
  Security Level: Public (Default Security Level. Issues are Public)
  Components: Tests
Reporter: Mark Miller
Assignee: Mark Miller









[jira] [Created] (SOLR-13066) A failure while reloading a SolrCore can result in the SolrCore not being closed.

2018-12-12 Thread Mark Miller (JIRA)
Mark Miller created SOLR-13066:
--

 Summary: A failure while reloading a SolrCore can result in the 
SolrCore not being closed.
 Key: SOLR-13066
 URL: https://issues.apache.org/jira/browse/SOLR-13066
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Mark Miller
Assignee: Mark Miller









[jira] [Comment Edited] (SOLR-13014) URI Too Long with large streaming expressions in SolrJ

2018-12-12 Thread JIRA


[ 
https://issues.apache.org/jira/browse/SOLR-13014?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16719171#comment-16719171
 ] 

Jan Høydahl edited comment on SOLR-13014 at 12/12/18 6:14 PM:
--

Thanks Joel. Are you aware of other expressions that are likely to become too 
large, e.g. because they can take a query string?


was (Author: janhoy):
Thanks Joel

> URI Too Long with large streaming expressions in SolrJ
> --
>
> Key: SOLR-13014
> URL: https://issues.apache.org/jira/browse/SOLR-13014
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrJ, streaming expressions
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
>Priority: Major
> Fix For: master (8.0), 7.7
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> For very large expressions (e.g. with a complex search string) we'll hit the 
> max HTTP GET limit since SolrJ does not enforce POST for all expressions. 
> This goes at least for {{FacetStream}}, {{StatsStream}} and 
> {{TimeSeriesStream}}, and I'll link a Pull Request fixing these three.
> Here is an example of a stack trace when using TimeSeriesStream with a very 
> large expression: [https://gist.github.com/ea626cf1ec579daaf253aeb805d1532c]
> The fix is simply to use {{new QueryRequest(parameters, 
> SolrRequest.METHOD.POST);}} to explicitly force POST.
> See also solr-user thread 
> [http://lucene.472066.n3.nabble.com/Streaming-Expressions-GET-vs-POST-td4415044.html]
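The size problem is easy to reproduce with nothing but the JDK. A rough,
stdlib-only illustration (the class name and generated expression are made up;
8192 bytes is Jetty's default request header limit, which fronts Solr):

```java
import java.io.UnsupportedEncodingException;
import java.net.URLEncoder;

public class UriLengthDemo {
    // Build the GET URI for a streaming expression with many query clauses.
    static String buildUri(int clauses) throws UnsupportedEncodingException {
        StringBuilder expr = new StringBuilder("search(collection1, q=\"");
        for (int i = 0; i < clauses; i++) {
            expr.append("field_").append(i).append(":value_").append(i).append(" OR ");
        }
        expr.append("id:0\")");
        return "http://localhost:8983/solr/collection1/stream?expr="
                + URLEncoder.encode(expr.toString(), "UTF-8");
    }

    public static void main(String[] args) throws Exception {
        // A 2000-clause query blows well past an 8 KB request-line budget.
        System.out.println(buildUri(2000).length() > 8192); // prints "true"
    }
}
```

The actual fix stays on the SolrJ side, as described above: constructing the
request as {{new QueryRequest(parameters, SolrRequest.METHOD.POST)}} moves the
expression into the request body, where no such limit applies.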






[jira] [Commented] (LUCENE-8464) Implement ConstantScoreScorer#setMinCompetitiveScore

2018-12-12 Thread Christophe Bismuth (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8464?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16719272#comment-16719272
 ] 

Christophe Bismuth commented on LUCENE-8464:


Thanks a lot [~romseygeek], you made my day :D
 [~jim.ferenczi] provided some really great mentoring on this one (y) I hope to 
find some other great issues to work on!

> Implement ConstantScoreScorer#setMinCompetitiveScore
> 
>
> Key: LUCENE-8464
> URL: https://issues.apache.org/jira/browse/LUCENE-8464
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Priority: Minor
>  Labels: newdev
> Fix For: master (8.0)
>
>  Time Spent: 8h
>  Remaining Estimate: 0h
>
> We should make it so the iterator returns NO_MORE_DOCS after 
> setMinCompetitiveScore is called with a value that is greater than the 
> constant score.
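For readers following along, the requested contract can be sketched with
stand-in types (this is not Lucene's actual Scorer API, just the behavior the
issue describes):

```java
public class ConstantScoreSketch {
    static final int NO_MORE_DOCS = Integer.MAX_VALUE; // mirrors DocIdSetIterator.NO_MORE_DOCS

    private final float constantScore;
    private final int[] docs;
    private int cursor = -1;
    private boolean exhausted = false;

    ConstantScoreSketch(float constantScore, int[] docs) {
        this.constantScore = constantScore;
        this.docs = docs;
    }

    // Called by the collector when only scores above minScore still matter.
    void setMinCompetitiveScore(float minScore) {
        // No document scored here can ever beat minScore: stop iterating.
        if (minScore > constantScore) {
            exhausted = true;
        }
    }

    int nextDoc() {
        if (exhausted || ++cursor >= docs.length) {
            return NO_MORE_DOCS;
        }
        return docs[cursor];
    }
}
```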






[JENKINS] Solr-reference-guide-master - Build # 12305 - Still Failing

2018-12-12 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Solr-reference-guide-master/12305/

Log: 
Started by timer
[EnvInject] - Loading node environment variables.
Building remotely on websites1 (git-websites svn-websites) in workspace 
/home/jenkins/jenkins-slave/workspace/Solr-reference-guide-master
 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url git://git.apache.org/lucene-solr.git # 
 > timeout=10
Cleaning workspace
 > git rev-parse --verify HEAD # timeout=10
Resetting working tree
 > git reset --hard # timeout=10
 > git clean -fdx # timeout=10
Fetching upstream changes from git://git.apache.org/lucene-solr.git
 > git --version # timeout=10
 > git fetch --tags --progress git://git.apache.org/lucene-solr.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/master^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/master^{commit} # timeout=10
Checking out Revision 7e4555a2fdb863d6aac2f785116f8f13e51bf16b 
(refs/remotes/origin/master)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 7e4555a2fdb863d6aac2f785116f8f13e51bf16b
Commit message: "SOLR-13057: Allow search, facet and timeseries Streaming 
Expressions to accept a comma delimited list of collections"
 > git rev-list --no-walk 7e4555a2fdb863d6aac2f785116f8f13e51bf16b # timeout=10
No emails were triggered.
[Solr-reference-guide-master] $ /bin/bash -xe /tmp/jenkins4740626152705739613.sh
+ bash dev-tools/scripts/jenkins.build.ref.guide.sh
+ set -e
+ RVM_PATH=/home/jenkins/.rvm
+ RUBY_VERSION=ruby-2.3.3
+ GEMSET=solr-refguide-gemset
+ curl -sSL https://get.rvm.io
+ bash -s -- --ignore-dotfiles stable
Turning on ignore dotfiles mode.
Downloading https://github.com/rvm/rvm/archive/1.29.5.tar.gz
Downloading 
https://github.com/rvm/rvm/releases/download/1.29.5/1.29.5.tar.gz.asc
gpg: Signature made Wed 12 Dec 2018 11:25:22 AM UTC using RSA key ID 39499BDB
gpg: Can't check signature: public key not found
Warning, RVM 1.26.0 introduces signed releases and automated check of 
signatures when GPG software found. Assuming you trust Michal Papis import the 
mpapis public key (downloading the signatures).

GPG signature verification failed for 
'/home/jenkins/shared/.rvm/archives/rvm-1.29.5.tgz' - 
'https://github.com/rvm/rvm/releases/download/1.29.5/1.29.5.tar.gz.asc'! Try to 
install GPG v2 and then fetch the public key:

gpg2 --recv-keys 409B6B1796C275462A1703113804BB82D39DC0E3

or if it fails:

command curl -sSL https://rvm.io/mpapis.asc | gpg --import -

the key can be compared with:

https://rvm.io/mpapis.asc
https://keybase.io/mpapis

NOTE: GPG version 2.1.17 have a bug which cause failures during fetching keys 
from remote server. Please downgrade or upgrade to newer version (if available) 
or use the second method described above.

Build step 'Execute shell' marked build as failure
Archiving artifacts
Publishing Javadoc
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any


[jira] [Commented] (SOLR-13065) Harden TestSimExecuteActionPlan

2018-12-12 Thread Jason Gerlowski (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13065?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16719267#comment-16719267
 ] 

Jason Gerlowski commented on SOLR-13065:


At first glance, this looks like a similar problem to what I recently saw in 
SOLR-13045.  The test fails in a {{waitForState}} block, but there's some 
indication that we're using an outdated (cached?) copy of the clusterstatus 
info.

Here's a partial stack from a recent failure I got:

{code}
  [beaster]   2> NOTE: reproduce with: ant test  
-Dtestcase=TestSimExecutePlanAction -Dtests.method=testIntegration 
-Dtests.seed=18902C9108C137F1 -Dtests.slow=true -Dtests.badapples=true 
-Dtests.locale=es-GT -Dtests.timezone=Asia/Rangoon -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8
  [beaster]   2> 24745 INFO  (simCloudManagerPool-112-thread-8) [] 
o.a.s.c.CloudTestUtils -- wrong number of active replicas in slice shard1, 
expected=1, found=2
  [beaster] [12:26:46.105] FAILURE 2.13s | 
TestSimExecutePlanAction.testIntegration 
{seed=[18902C9108C137F1:7163CC06353074F9]} <<< 
  [beaster]> Throwable #1: java.lang.AssertionError: Timed out waiting for 
replicas of collection to be 2 again
  [beaster]> Live Nodes: [127.0.0.1:10016_solr]
  [beaster]> Last available state: 
DocCollection(testIntegration//clusterstate.json/444)={
 ...
  [beaster]>  at 
__randomizedtesting.SeedInfo.seed([18902C9108C137F1:7163CC06353074F9]:0)
  [beaster]>  at 
org.apache.solr.cloud.CloudTestUtils.waitForState(CloudTestUtils.java:70)
  [beaster]>  at 
org.apache.solr.cloud.autoscaling.sim.TestSimExecutePlanAction.testIntegration(TestSimExecutePlanAction.java:200
...
  [beaster]> Caused by: java.util.concurrent.TimeoutException: last 
ClusterState: znodeVersion: 445
{code}

Note the different reported "last" clusterstate versions.  We see that there's 
a clusterstate.json version 445, but the failing assertion only has 444.  
That's not to say definitively that version 445 would pass the assertion, but 
it's a place to start. 
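For context, {{waitForState}} is essentially a poll-until-predicate-or-timeout
loop; a stdlib-only stand-in (names hypothetical) makes the suspected failure
mode concrete: if the state supplier hands back a stale cached snapshot, the
loop can time out even though a newer state might satisfy the predicate.

```java
import java.util.concurrent.TimeoutException;
import java.util.function.Predicate;
import java.util.function.Supplier;

public class PollUntil {
    // Poll the supplier until the predicate matches or the timeout expires.
    static <T> T waitFor(Supplier<T> state, Predicate<T> done,
                         long timeoutMs, long pollMs) throws Exception {
        long deadline = System.currentTimeMillis() + timeoutMs;
        T last = null;
        while (System.currentTimeMillis() < deadline) {
            last = state.get();          // a stale/cached snapshot here ...
            if (done.test(last)) {
                return last;
            }
            Thread.sleep(pollMs);
        }
        // ... means this can fire even though newer state exists.
        throw new TimeoutException("timed out; last observed state: " + last);
    }
}
```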

> Harden TestSimExecuteActionPlan
> ---
>
> Key: SOLR-13065
> URL: https://issues.apache.org/jira/browse/SOLR-13065
> Project: Solr
>  Issue Type: Test
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: master (8.0)
>Reporter: Jason Gerlowski
>Assignee: Jason Gerlowski
>Priority: Major
>
> TestSimExecuteActionPlan is a serial offender in our failed Jenkins jobs.  
> Would like to look into improving it.






[jira] [Comment Edited] (LUCENE-8581) Change LatLonShape encoding to use 4 BYTES Per Dimension

2018-12-12 Thread Ignacio Vera (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8581?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16719248#comment-16719248
 ] 

Ignacio Vera edited comment on LUCENE-8581 at 12/12/18 5:44 PM:


{quote}Is my assumption correct that with your changes to tests, whether we 
pick CW or CCW doesn't matter and is just a matter of convention?
{quote}
Yes, that is the idea because the differences are only numeric and they were 
showing in the tests for sub-atomic values. A good example is 
{{TestLatLonShape.testLUCENE8454}}, which will be a hit in CW and a non-hit in 
CCW.
{quote}simplify the encoding
{quote}
I got you, new patch rotates edges and indeed simplifies the logic.


was (Author: ivera):
{quote}
Is my assumption correct that with your changes to tests, whether we pick CW or 
CCW doesn't matter and is just a matter of convention?
{quote}

Yes, that is the idea because the differences are only numeric and they were 
showing in the tests for sub-atomic values.

{quote}
simplify the encoding
{quote}

I got you, new patch rotates edges and indeed simplifies the logic.


> Change LatLonShape encoding to use 4 BYTES Per Dimension
> 
>
> Key: LUCENE-8581
> URL: https://issues.apache.org/jira/browse/LUCENE-8581
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Nicholas Knize
>Assignee: Ignacio Vera
>Priority: Major
> Attachments: LUCENE-8581.patch, LUCENE-8581.patch, LUCENE-8581.patch, 
> LUCENE-8581.patch, LUCENE-8581.patch, LUCENE-8581.patch
>
>
> {{LatLonShape}} tessellated triangles currently use a relatively naive 
> encoding with the first four dimensions as the bounding box of the triangle 
> and the last three dimensions as the vertices of the triangle. To encode the 
> {{x,y}} vertices in the last three dimensions requires {{bytesPerDim}} to be 
> set to 8, with 4 bytes for the x & y axis, respectively. We can reduce 
> {{bytesPerDim}} to 4 by encoding the index(es) of the vertices shared by the 
> bounding box along with the orientation of the triangle. This also opens the 
> door for supporting {{CONTAINS}} queries.






[jira] [Commented] (LUCENE-8585) Create jump-tables for DocValues at index-time

2018-12-12 Thread Toke Eskildsen (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8585?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16719252#comment-16719252
 ] 

Toke Eskildsen commented on LUCENE-8585:


Thank you for reviewing, [~jpountz].

I'll try and run an index-upgrade on one of our large shards and measure the 
difference. I'll also take another look at {{doTestNumericsVsStoredFields}} to 
see if anything can be done there.

> Create jump-tables for DocValues at index-time
> --
>
> Key: LUCENE-8585
> URL: https://issues.apache.org/jira/browse/LUCENE-8585
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/codecs
>Affects Versions: master (8.0)
>Reporter: Toke Eskildsen
>Priority: Minor
>  Labels: performance
> Attachments: LUCENE-8585.patch, LUCENE-8585.patch, 
> make_patch_lucene8585.sh
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> As noted in LUCENE-7589, lookup of DocValues should use jump-tables to avoid 
> long iterative walks. This is implemented in LUCENE-8374 at search-time 
> (first request for DocValues from a field in a segment), with the benefit of 
> working without changes to existing Lucene 7 indexes and the downside of 
> introducing a startup time penalty and a memory overhead.
> As discussed in LUCENE-8374, the codec should be updated to create these 
> jump-tables at index time. This eliminates the segment-open time & memory 
> penalties, with the potential downside of increasing index-time for DocValues.
> The three elements of LUCENE-8374 should be transferable to index-time 
> without much alteration of the core structures:
>  * {{IndexedDISI}} block offset and index skips: A {{long}} (64 bits) for 
> every 65536 documents, containing the offset of the block in 33 bits and the 
> index (number of set bits) up to the block in 31 bits.
 It can be built sequentially and should be stored as a simple sequence of 
> consecutive longs for caching of lookups.
>  As it is fairly small, relative to document count, it might be better to 
> simply memory cache it?
>  * {{IndexedDISI}} DENSE (> 4095, < 65536 set bits) blocks: A {{short}} (16 
> bits) for every 8 {{longs}} (512 bits) for a total of 256 bytes/DENSE_block. 
> Each {{short}} represents the number of set bits up to right before the 
> corresponding sub-block of 512 docIDs.
 The {{shorts}} can be computed sequentially or when the DENSE block is 
> flushed (probably the easiest). They should be stored as a simple sequence of 
> consecutive shorts for caching of lookups, one logically independent sequence 
> for each DENSE block. The logical position would be one sequence at the start 
> of every DENSE block.
>  Whether it is best to read all the 16 {{shorts}} up front when a DENSE block 
> is accessed or whether it is best to only read any individual {{short}} when 
> needed is not clear at this point.
>  * Variable Bits Per Value: A {{long}} (64 bits) for every 16384 numeric 
> values. Each {{long}} holds the offset to the corresponding block of values.
>  The offsets can be computed sequentially and should be stored as a simple 
> sequence of consecutive {{longs}} for caching of lookups.
 The vBPV-offsets have the largest space overhead of the 3 jump-tables and a 
> lot of the 64 bits in each long are not used for most indexes. They could be 
> represented as a simple {{PackedInts}} sequence or {{MonotonicLongValues}}, 
> with the downsides of a potential lookup-time overhead and the need for doing 
> the compression after all offsets have been determined.
> I have no experience with the codec-parts responsible for creating 
> index-structures. I'm quite willing to take a stab at this, although I 
> probably won't do much about it before January 2019. Should anyone else wish 
> to adopt this JIRA-issue or co-work on it, I'll be happy to share.
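A hedged sketch of the packing described above (the exact bit layout in the
eventual patch may well differ; the point is only that a 33-bit offset and a
31-bit index fit in one long, and that the DENSE ranks are cumulative bit
counts over 512-bit sub-blocks):

```java
public class BlockJump {
    // One long per 65536-doc block: offset in the top 33 bits, index in the low 31.
    static long pack(long offset, int index) {
        return (offset << 31) | (index & 0x7FFF_FFFFL);
    }

    static long offsetOf(long packed) {
        return packed >>> 31;
    }

    static int indexOf(long packed) {
        return (int) (packed & 0x7FFF_FFFFL);
    }

    // DENSE-block rank table: one short per 8 longs (512 bits), holding the
    // number of set bits *before* that sub-block.
    static short[] denseRanks(long[] bits) {
        short[] ranks = new short[bits.length / 8];
        int setBits = 0;
        for (int i = 0; i < bits.length; i++) {
            if (i % 8 == 0) {
                ranks[i / 8] = (short) setBits;
            }
            setBits += Long.bitCount(bits[i]);
        }
        return ranks;
    }
}
```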






[jira] [Commented] (SOLR-13063) Open file limit warning when starting solr

2018-12-12 Thread Dawid Weiss (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13063?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16719219#comment-16719219
 ] 

Dawid Weiss commented on SOLR-13063:


This is operating-system specific, and Jira is for tracking bugs in Solr. The 
page I referred you to has a section explaining what insufficient file handles 
may result in (see the "File Handles and Processes (ulimit settings)" chapter) 
-- that covers what you asked about.

> Open file limit warning when starting solr
> --
>
> Key: SOLR-13063
> URL: https://issues.apache.org/jira/browse/SOLR-13063
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCLI
>Affects Versions: 7.5
>Reporter: Rony
>Priority: Major
>
> Hello, When launching solr (Ubuntu 16.04) I'm getting:
>  *** [WARN] *** Your open file limit is currently 1024.
>   It should be set to 65000 to avoid operational disruption.
>   If you no longer wish to see this warning, set SOLR_ULIMIT_CHECKS to false 
> in your profile or solr.in.sh
>  *** [WARN] ***  Your Max Processes Limit is currently 15058.
>   It should be set to 65000 to avoid operational disruption.
>   If you no longer wish to see this warning, set SOLR_ULIMIT_CHECKS to false 
> in your profile or solr.in.sh
> This appears to be related to a known bug in [#Ubuntu] 
> [https://blog.jayway.com/2012/02/11/how-to-really-fix-the-too-many-open-files-problem-for-tomcat-in-ubuntu/]
> I was wondering if you have some workaround. I followed the solutions in the 
> following threads:
> [https://vufind.org/jira/browse/VUFIND-1290]
> [https://underyx.me/2015/05/18/raising-the-maximum-number-of-file-descriptors]
> and was able to resolve Max Processes Limit but not File limit:
>  *** [WARN] *** Your open file limit is currently 1024.
>   It should be set to 65000 to avoid operational disruption. 
>   If you no longer wish to see this warning, set SOLR_ULIMIT_CHECKS to false 
> in your profile or solr.in.sh
>  Waiting up to 180 seconds to see Solr running on port 8983 []  
>  Started Solr server on port 8983 (pid=2843). Happy searching!
> $ cat /proc/2843/limits
>  Max processes     65000     65000     processes
>  Max open files    4096      4096      files
> The problem persisted after upgrade to Ubuntu 18.10
> Any other solution would be appreciated.
> Otherwise can you please tell me what are the likely consequences of the open 
> file limit? 






[jira] [Commented] (SOLR-13040) Harden TestSQLHandler.

2018-12-12 Thread Joel Bernstein (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13040?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16719218#comment-16719218
 ] 

Joel Bernstein commented on SOLR-13040:
---

The commits last night did resolve the issues that were appearing with beasting 
this test. Thanks [~markrmil...@gmail.com] for making the fix.

I'll add a few suppress annotations for this test which have been needed in 
other streaming expression related test cases. And if it's ok with everyone 
I'll remove the AwaitsFix notation and put the test back in rotation. 

I'll also look into how the delete core calls were causing the schema problems 
and report back.

> Harden TestSQLHandler.
> --
>
> Key: SOLR-13040
> URL: https://issues.apache.org/jira/browse/SOLR-13040
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mark Miller
>Assignee: Joel Bernstein
>Priority: Major
>







[jira] [Commented] (LUCENE-8606) ConstantScoreQuery looses explain details of wrapped query

2018-12-12 Thread Christian Ziech (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8606?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16719221#comment-16719221
 ] 

Christian Ziech commented on LUCENE-8606:
-

Those two failing tests are now also fixed, but I'm not sure if I did so in the 
proper way. Someone with more insight on what the SpanWeight should return in 
explain() if the simScorer field is null should have a look.

> ConstantScoreQuery looses explain details of wrapped query
> --
>
> Key: LUCENE-8606
> URL: https://issues.apache.org/jira/browse/LUCENE-8606
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Christian Ziech
>Priority: Major
> Attachments: 
> 0001-LUCENE-8606-adding-a-constructor-for-the-ConstantSco.patch, 
> 0001-LUCENE-8606-overwriting-the-explain-method-for-Cachi.patch
>
>
> Right now the ConstantScoreWeigth used by the ConstantScoreQuery is not 
> adding the details of the wrapped query to the explanation. 
> {code}
> if (exists) {
> return Explanation.match(score, getQuery().toString() + (score == 1f ? "" 
> : "^" + score));
> } else {
> return Explanation.noMatch(getQuery().toString() + " doesn't match id " + 
> doc);
> }
> {code}
> This is kind of inconvenient as it makes it kind of hard to figure out which 
> term finally really matched when one e.g. puts a BooleanQuery into the FILTER 
> clause of another BooleanQuery.
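The intent of the attached patches can be illustrated with a stand-in
explanation type (Lucene's real {{Explanation.match(score, description,
details...)}} already supports nested details): keep the wrapped query's
explanation as a child instead of flattening it to {{toString()}}.

```java
import java.util.Arrays;
import java.util.List;

public class Expl {
    final float value;
    final String description;
    final List<Expl> details;

    Expl(float value, String description, Expl... details) {
        this.value = value;
        this.description = description;
        this.details = Arrays.asList(details);
    }

    // The requested behavior: wrap, but keep the inner explanation as a detail
    // so callers can still see which term actually matched.
    static Expl constantScore(float score, Expl inner) {
        return new Expl(score, "ConstantScore(" + inner.description + ")", inner);
    }
}
```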






[jira] [Comment Edited] (SOLR-13040) Harden TestSQLHandler.

2018-12-12 Thread Joel Bernstein (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13040?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16719218#comment-16719218
 ] 

Joel Bernstein edited comment on SOLR-13040 at 12/12/18 5:16 PM:
-

The commits last night did resolve the issues that were appearing when beasting 
this test. Thanks [~markrmil...@gmail.com] for making the fix.

I'll add a few suppress annotations for this test which have been needed in 
other streaming expression related test cases. And if it's ok with everyone 
I'll remove the AwaitsFix notation and put the test back in rotation. 

I'll also look into how the delete core calls were causing the schema problems 
during beasting and report back.


was (Author: joel.bernstein):
The commits last night did resolve the issues that were appearing with beasting 
this test. Thanks [~markrmil...@gmail.com] for making the fix.

I'll add a few suppress annotations for this test which have been needed in 
other streaming expression related test cases. And if it's ok with everyone 
I'll remove the AwaitsFix notation and put the test back in rotation. 

I'll also look into how the delete core calls were causing the schema problems 
during beasting and report back.

> Harden TestSQLHandler.
> --
>
> Key: SOLR-13040
> URL: https://issues.apache.org/jira/browse/SOLR-13040
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mark Miller
>Assignee: Joel Bernstein
>Priority: Major
>







[jira] [Comment Edited] (SOLR-13040) Harden TestSQLHandler.

2018-12-12 Thread Joel Bernstein (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13040?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16719218#comment-16719218
 ] 

Joel Bernstein edited comment on SOLR-13040 at 12/12/18 5:14 PM:
-

The commits last night did resolve the issues that were appearing with beasting 
this test. Thanks [~markrmil...@gmail.com] for making the fix.

I'll add a few suppress annotations for this test which have been needed in 
other streaming expression related test cases. And if it's ok with everyone 
I'll remove the AwaitsFix notation and put the test back in rotation. 

I'll also look into how the delete core calls were causing the schema problems 
during beasting and report back.


was (Author: joel.bernstein):
The commits last night did resolve the issues that were appearing with beasting 
this test. Thanks [~markrmil...@gmail.com] for making the fix.

I'll add a few suppress annotations for this test which have been needed in 
other streaming expression related test cases. And if it's ok with everyone 
I'll remove the AwaitsFix notation and put the test back in rotation. 

I'll also look into how the delete core calls were causing the schema problems 
and report back.

> Harden TestSQLHandler.
> --
>
> Key: SOLR-13040
> URL: https://issues.apache.org/jira/browse/SOLR-13040
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mark Miller
>Assignee: Joel Bernstein
>Priority: Major
>







[jira] [Updated] (LUCENE-8606) ConstantScoreQuery looses explain details of wrapped query

2018-12-12 Thread Christian Ziech (JIRA)


 [ 
https://issues.apache.org/jira/browse/LUCENE-8606?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Christian Ziech updated LUCENE-8606:

Attachment: (was: 
0001-LUCENE-8606-overwriting-the-explain-method-for-Cachi.patch)

> ConstantScoreQuery looses explain details of wrapped query
> --
>
> Key: LUCENE-8606
> URL: https://issues.apache.org/jira/browse/LUCENE-8606
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Christian Ziech
>Priority: Major
> Attachments: 
> 0001-LUCENE-8606-adding-a-constructor-for-the-ConstantSco.patch, 
> 0001-LUCENE-8606-overwriting-the-explain-method-for-Cachi.patch
>
>
> Right now the ConstantScoreWeigth used by the ConstantScoreQuery is not 
> adding the details of the wrapped query to the explanation. 
> {code}
> if (exists) {
> return Explanation.match(score, getQuery().toString() + (score == 1f ? "" 
> : "^" + score));
> } else {
> return Explanation.noMatch(getQuery().toString() + " doesn't match id " + 
> doc);
> }
> {code}
> This is kind of inconvenient as it makes it kind of hard to figure out which 
> term finally really matched when one e.g. puts a BooleanQuery into the FILTER 
> clause of another BooleanQuery.






[jira] [Updated] (LUCENE-8606) ConstantScoreQuery looses explain details of wrapped query

2018-12-12 Thread Christian Ziech (JIRA)


 [ 
https://issues.apache.org/jira/browse/LUCENE-8606?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Christian Ziech updated LUCENE-8606:

Attachment: 0001-LUCENE-8606-overwriting-the-explain-method-for-Cachi.patch

> ConstantScoreQuery looses explain details of wrapped query
> --
>
> Key: LUCENE-8606
> URL: https://issues.apache.org/jira/browse/LUCENE-8606
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Christian Ziech
>Priority: Major
> Attachments: 
> 0001-LUCENE-8606-adding-a-constructor-for-the-ConstantSco.patch, 
> 0001-LUCENE-8606-overwriting-the-explain-method-for-Cachi.patch
>
>
> Right now the ConstantScoreWeigth used by the ConstantScoreQuery is not 
> adding the details of the wrapped query to the explanation. 
> {code}
> if (exists) {
> return Explanation.match(score, getQuery().toString() + (score == 1f ? "" 
> : "^" + score));
> } else {
> return Explanation.noMatch(getQuery().toString() + " doesn't match id " + 
> doc);
> }
> {code}
> This is kind of inconvenient as it makes it kind of hard to figure out which 
> term finally really matched when one e.g. puts a BooleanQuery into the FILTER 
> clause of another BooleanQuery.






[JENKINS] Solr-reference-guide-master - Build # 12304 - Still Failing

2018-12-12 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Solr-reference-guide-master/12304/

Log: 
Started by timer
[EnvInject] - Loading node environment variables.
Building remotely on websites1 (git-websites svn-websites) in workspace 
/home/jenkins/jenkins-slave/workspace/Solr-reference-guide-master
 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url git://git.apache.org/lucene-solr.git # 
 > timeout=10
Cleaning workspace
 > git rev-parse --verify HEAD # timeout=10
Resetting working tree
 > git reset --hard # timeout=10
 > git clean -fdx # timeout=10
Fetching upstream changes from git://git.apache.org/lucene-solr.git
 > git --version # timeout=10
 > git fetch --tags --progress git://git.apache.org/lucene-solr.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/master^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/master^{commit} # timeout=10
Checking out Revision 7e4555a2fdb863d6aac2f785116f8f13e51bf16b 
(refs/remotes/origin/master)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 7e4555a2fdb863d6aac2f785116f8f13e51bf16b
Commit message: "SOLR-13057: Allow search, facet and timeseries Streaming 
Expressions to accept a comma delimited list of collections"
 > git rev-list --no-walk 7e4555a2fdb863d6aac2f785116f8f13e51bf16b # timeout=10
No emails were triggered.
[Solr-reference-guide-master] $ /bin/bash -xe /tmp/jenkins6412811184932493787.sh
+ bash dev-tools/scripts/jenkins.build.ref.guide.sh
+ set -e
+ RVM_PATH=/home/jenkins/.rvm
+ RUBY_VERSION=ruby-2.3.3
+ GEMSET=solr-refguide-gemset
+ curl -sSL https://get.rvm.io
+ bash -s -- --ignore-dotfiles stable
Turning on ignore dotfiles mode.
Downloading https://github.com/rvm/rvm/archive/1.29.5.tar.gz
Downloading 
https://github.com/rvm/rvm/releases/download/1.29.5/1.29.5.tar.gz.asc
gpg: Signature made Wed 12 Dec 2018 11:25:22 AM UTC using RSA key ID 39499BDB
gpg: Can't check signature: public key not found
Warning, RVM 1.26.0 introduces signed releases and automated check of 
signatures when GPG software found. Assuming you trust Michal Papis import the 
mpapis public key (downloading the signatures).

GPG signature verification failed for 
'/home/jenkins/shared/.rvm/archives/rvm-1.29.5.tgz' - 
'https://github.com/rvm/rvm/releases/download/1.29.5/1.29.5.tar.gz.asc'! Try to 
install GPG v2 and then fetch the public key:

gpg2 --recv-keys 409B6B1796C275462A1703113804BB82D39DC0E3

or if it fails:

command curl -sSL https://rvm.io/mpapis.asc | gpg --import -

the key can be compared with:

https://rvm.io/mpapis.asc
https://keybase.io/mpapis

NOTE: GPG version 2.1.17 have a bug which cause failures during fetching keys 
from remote server. Please downgrade or upgrade to newer version (if available) 
or use the second method described above.

Build step 'Execute shell' marked build as failure
Archiving artifacts
Publishing Javadoc
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any


[jira] [Commented] (LUCENE-8585) Create jump-tables for DocValues at index-time

2018-12-12 Thread Adrien Grand (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8585?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16719167#comment-16719167
 ] 

Adrien Grand commented on LUCENE-8585:
--

Thanks Toke, I'll give it a look by the end of the week.

bq. I could make it switch from 300 to 200,000 when running Nightly or I could 
hand-pick some of the tests and increase documents for them, which would mean 
worse coverage but better speed?

This trade-off is hard indeed. We should try to optimize coverage while keeping 
the test reasonably fast; I'd guess it should run in under 10 seconds or so. 
Maybe there are things we can improve in tests dedicated to the sparse case, 
such as avoiding tiny flushes or overly aggressive merge settings.

> Create jump-tables for DocValues at index-time
> --
>
> Key: LUCENE-8585
> URL: https://issues.apache.org/jira/browse/LUCENE-8585
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/codecs
>Affects Versions: master (8.0)
>Reporter: Toke Eskildsen
>Priority: Minor
>  Labels: performance
> Attachments: LUCENE-8585.patch, LUCENE-8585.patch, 
> make_patch_lucene8585.sh
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> As noted in LUCENE-7589, lookup of DocValues should use jump-tables to avoid 
> long iterative walks. This is implemented in LUCENE-8374 at search-time 
> (first request for DocValues from a field in a segment), with the benefit of 
> working without changes to existing Lucene 7 indexes and the downside of 
> introducing a startup time penalty and a memory overhead.
> As discussed in LUCENE-8374, the codec should be updated to create these 
> jump-tables at index time. This eliminates the segment-open time & memory 
> penalties, with the potential downside of increasing index-time for DocValues.
> The three elements of LUCENE-8374 should be transferable to index-time 
> without much alteration of the core structures:
>  * {{IndexedDISI}} block offset and index skips: A {{long}} (64 bits) for 
> every 65536 documents, containing the offset of the block in 33 bits and the 
> index (number of set bits) up to the block in 31 bits.
>  It can be built sequentially and should be stored as a simple sequence of 
> consecutive longs for caching of lookups.
>  As it is fairly small relative to document count, it might be better to 
> simply cache it in memory?
>  * {{IndexedDISI}} DENSE (> 4095, < 65536 set bits) blocks: A {{short}} (16 
> bits) for every 8 {{longs}} (512 bits) for a total of 256 bytes/DENSE_block. 
> Each {{short}} represents the number of set bits up to right before the 
> corresponding sub-block of 512 docIDs.
>  The {{shorts}} can be computed sequentially or when the DENSE block is 
> flushed (probably the easiest). They should be stored as a simple sequence of 
> consecutive shorts for caching of lookups, one logically independent sequence 
> for each DENSE block. The logical position would be one sequence at the start 
> of every DENSE block.
>  Whether it is best to read all the 16 {{shorts}} up front when a DENSE block 
> is accessed or whether it is best to only read any individual {{short}} when 
> needed is not clear at this point.
>  * Variable Bits Per Value: A {{long}} (64 bits) for every 16384 numeric 
> values. Each {{long}} holds the offset to the corresponding block of values.
>  The offsets can be computed sequentially and should be stored as a simple 
> sequence of consecutive {{longs}} for caching of lookups.
>  The vBPV-offsets have the largest space overhead of the 3 jump-tables and a 
> lot of the 64 bits in each long are not used for most indexes. They could be 
> represented as a simple {{PackedInts}} sequence or {{MonotonicLongValues}}, 
> with the downsides of a potential lookup-time overhead and the need for doing 
> the compression after all offsets have been determined.
> I have no experience with the codec-parts responsible for creating 
> index-structures. I'm quite willing to take a stab at this, although I 
> probably won't do much about it before January 2019. Should anyone else wish 
> to adopt this JIRA-issue or co-work on it, I'll be happy to share.
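The 33-bit-offset / 31-bit-index packing proposed above for the block jump-table can be sketched as follows. This is an illustrative standalone sketch under the layout the issue describes, not the actual Lucene codec code; the class and method names are hypothetical:

```java
public class BlockJumpEntry {
    // Pack a block's offset (33 bits, stored in the upper bits of the entry)
    // and the number of set bits before the block (31 bits) into one long,
    // one entry per 65536 documents.
    static long pack(long offset, int index) {
        return (offset << 31) | (index & 0x7FFFFFFFL);
    }

    // Upper 33 bits: the block's offset in the underlying slice.
    static long offset(long entry) {
        return entry >>> 31;
    }

    // Lower 31 bits: set-bit count (index) up to the block.
    static int index(long entry) {
        return (int) (entry & 0x7FFFFFFFL);
    }

    public static void main(String[] args) {
        long entry = pack(123_456_789L, 42);
        System.out.println(offset(entry)); // 123456789
        System.out.println(index(entry));  // 42
    }
}
```

With one such long per 65536 documents, a reader can jump directly to the block holding a target docID instead of iterating over blocks one by one.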



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8606) ConstantScoreQuery loses explain details of wrapped query

2018-12-12 Thread Christian Ziech (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8606?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16719183#comment-16719183
 ] 

Christian Ziech commented on LUCENE-8606:
-

Attached a new patch that fixes all but 2 test failures:
{noformat}
   [junit4] Tests with failures [seed: 9F8CCC24EB4194B4]:
   [junit4]   - org.apache.lucene.search.TestComplexExplanations.test2
   [junit4]   - 
org.apache.lucene.search.TestComplexExplanationsOfNonMatches.test2
{noformat}
Those two failures both stem from an NPE in the LeafSimScorer, triggered by the 
SpanWeight trying to explain a result with a "null" scorer.

I also had to include a somewhat controversial change in the patch which removes 
the assertion "assert scoreMode.needsScores()" from the score() method of the 
AssertingScorer. The problem is that the explain method of the BooleanQuery 
invokes the score() function to fill the value of the Explanation object, and if 
that BooleanQuery is explained in the context of a ConstantScoreQuery, this 
assertion would fire. 
I first tried to compute the value of the Explanation based on the detail 
explanations in the BooleanQuery, but that didn't quite add up due to 
double/float inaccuracies.


> ConstantScoreQuery loses explain details of wrapped query
> --
>
> Key: LUCENE-8606
> URL: https://issues.apache.org/jira/browse/LUCENE-8606
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Christian Ziech
>Priority: Major
> Attachments: 
> 0001-LUCENE-8606-adding-a-constructor-for-the-ConstantSco.patch, 
> 0001-LUCENE-8606-overwriting-the-explain-method-for-Cachi.patch
>
>
> Right now the ConstantScoreWeight used by the ConstantScoreQuery does not 
> add the details of the wrapped query to the explanation. 
> {code}
> if (exists) {
> return Explanation.match(score, getQuery().toString() + (score == 1f ? "" 
> : "^" + score));
> } else {
> return Explanation.noMatch(getQuery().toString() + " doesn't match id " + 
> doc);
> }
> {code}
> This is inconvenient, as it makes it hard to figure out which 
> term actually matched when one e.g. puts a BooleanQuery into the FILTER 
> clause of another BooleanQuery.






[jira] [Commented] (SOLR-13040) Harden TestSQLHandler.

2018-12-12 Thread Joel Bernstein (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13040?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16719168#comment-16719168
 ] 

Joel Bernstein commented on SOLR-13040:
---

Beasting right now following the changes Mark made last night. So far, it looks 
good.

If it does turn out to resolve the problem, I'll dig in and try to understand why 
removing the delete-core calls resolves the issue.

> Harden TestSQLHandler.
> --
>
> Key: SOLR-13040
> URL: https://issues.apache.org/jira/browse/SOLR-13040
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mark Miller
>Assignee: Joel Bernstein
>Priority: Major
>







[jira] [Commented] (SOLR-13014) URI Too Long with large streaming expressions in SolrJ

2018-12-12 Thread JIRA


[ 
https://issues.apache.org/jira/browse/SOLR-13014?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16719171#comment-16719171
 ] 

Jan Høydahl commented on SOLR-13014:


Thanks Joel

> URI Too Long with large streaming expressions in SolrJ
> --
>
> Key: SOLR-13014
> URL: https://issues.apache.org/jira/browse/SOLR-13014
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrJ, streaming expressions
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
>Priority: Major
> Fix For: master (8.0), 7.7
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> For very large expressions (e.g. with a complex search string) we'll hit the 
> max HTTP GET limit since SolrJ does not enforce POST for all expressions. 
> This goes at least for {{FacetStream}}, {{StatsStream}} and 
> {{TimeSeriesStream}}, and I'll link a Pull Request fixing these three.
> Here is an example of a stack trace when using TimeSeriesStream with a very 
> large expression: [https://gist.github.com/ea626cf1ec579daaf253aeb805d1532c]
> The fix is simply to use {{new QueryRequest(parameters, 
> SolrRequest.METHOD.POST);}} to explicitly force POST.
> See also solr-user thread 
> [http://lucene.472066.n3.nabble.com/Streaming-Expressions-GET-vs-POST-td4415044.html]






[jira] [Updated] (LUCENE-8606) ConstantScoreQuery loses explain details of wrapped query

2018-12-12 Thread Christian Ziech (JIRA)


 [ 
https://issues.apache.org/jira/browse/LUCENE-8606?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Christian Ziech updated LUCENE-8606:

Attachment: (was: 
0001-LUCENE-8606-overwriting-the-explain-method-for-Cachi.patch)

> ConstantScoreQuery loses explain details of wrapped query
> --
>
> Key: LUCENE-8606
> URL: https://issues.apache.org/jira/browse/LUCENE-8606
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Christian Ziech
>Priority: Major
> Attachments: 
> 0001-LUCENE-8606-adding-a-constructor-for-the-ConstantSco.patch, 
> 0001-LUCENE-8606-overwriting-the-explain-method-for-Cachi.patch
>
>
> Right now the ConstantScoreWeight used by the ConstantScoreQuery does not 
> add the details of the wrapped query to the explanation. 
> {code}
> if (exists) {
> return Explanation.match(score, getQuery().toString() + (score == 1f ? "" 
> : "^" + score));
> } else {
> return Explanation.noMatch(getQuery().toString() + " doesn't match id " + 
> doc);
> }
> {code}
> This is inconvenient, as it makes it hard to figure out which 
> term actually matched when one e.g. puts a BooleanQuery into the FILTER 
> clause of another BooleanQuery.






[jira] [Updated] (LUCENE-8606) ConstantScoreQuery loses explain details of wrapped query

2018-12-12 Thread Christian Ziech (JIRA)


 [ 
https://issues.apache.org/jira/browse/LUCENE-8606?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Christian Ziech updated LUCENE-8606:

Attachment: 0001-LUCENE-8606-overwriting-the-explain-method-for-Cachi.patch

> ConstantScoreQuery loses explain details of wrapped query
> --
>
> Key: LUCENE-8606
> URL: https://issues.apache.org/jira/browse/LUCENE-8606
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Christian Ziech
>Priority: Major
> Attachments: 
> 0001-LUCENE-8606-adding-a-constructor-for-the-ConstantSco.patch, 
> 0001-LUCENE-8606-overwriting-the-explain-method-for-Cachi.patch
>
>
> Right now the ConstantScoreWeight used by the ConstantScoreQuery does not 
> add the details of the wrapped query to the explanation. 
> {code}
> if (exists) {
> return Explanation.match(score, getQuery().toString() + (score == 1f ? "" 
> : "^" + score));
> } else {
> return Explanation.noMatch(getQuery().toString() + " doesn't match id " + 
> doc);
> }
> {code}
> This is inconvenient, as it makes it hard to figure out which 
> term actually matched when one e.g. puts a BooleanQuery into the FILTER 
> clause of another BooleanQuery.






[jira] [Commented] (LUCENE-8600) DocValuesFieldUpdates should use a better sort

2018-12-12 Thread Dawid Weiss (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8600?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16719161#comment-16719161
 ] 

Dawid Weiss commented on LUCENE-8600:
-

Yup, sounds good to me.

> DocValuesFieldUpdates should use a better sort
> --
>
> Key: LUCENE-8600
> URL: https://issues.apache.org/jira/browse/LUCENE-8600
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Priority: Minor
> Attachments: LUCENE-8600.patch
>
>
> This is a follow-up to LUCENE-8598: Simon identified that swaps are a 
> bottleneck to applying doc-value updates, in particular due to the overhead 
> of packed ints. It turns out that InPlaceMergeSorter does LOTS of swaps in 
> order to perform in-place. Replacing with a more efficient sort should help.






[jira] [Commented] (LUCENE-8607) Allow MatchAllDocsQuery to skip counting hits

2018-12-12 Thread Adrien Grand (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8607?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16719145#comment-16719145
 ] 

Adrien Grand commented on LUCENE-8607:
--

+1

> Allow MatchAllDocsQuery to skip counting hits
> -
>
> Key: LUCENE-8607
> URL: https://issues.apache.org/jira/browse/LUCENE-8607
> Project: Lucene - Core
>  Issue Type: Task
>Affects Versions: master (8.0)
>Reporter: Alan Woodward
>Assignee: Alan Woodward
>Priority: Major
> Attachments: LUCENE-8607.patch, LUCENE-8607.patch
>
>
> MatchAllDocsQuery currently uses a private bulk scorer with no 
> specialisations for setMinCompetitiveScore().  We've seen what looks to be 
> something like a halving of the performance of MatchAllDocsQuery in 
> elasticsearch benchmarks running on 8.0 snapshots, and it looks as though 
> this is because it's paying the price of keeping track of competitive scores, 
> but not actually making use of the new infrastructure.  We should modify the 
> bulk scorer to early-terminate if setMinCompetitiveScore() is called with a 
> value greater than the query's boost.






[jira] [Updated] (LUCENE-8607) Allow MatchAllDocsQuery to skip counting hits

2018-12-12 Thread Alan Woodward (JIRA)


 [ 
https://issues.apache.org/jira/browse/LUCENE-8607?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Woodward updated LUCENE-8607:
--
Attachment: LUCENE-8607.patch

> Allow MatchAllDocsQuery to skip counting hits
> -
>
> Key: LUCENE-8607
> URL: https://issues.apache.org/jira/browse/LUCENE-8607
> Project: Lucene - Core
>  Issue Type: Task
>Affects Versions: master (8.0)
>Reporter: Alan Woodward
>Assignee: Alan Woodward
>Priority: Major
> Attachments: LUCENE-8607.patch, LUCENE-8607.patch
>
>
> MatchAllDocsQuery currently uses a private bulk scorer with no 
> specialisations for setMinCompetitiveScore().  We've seen what looks to be 
> something like a halving of the performance of MatchAllDocsQuery in 
> elasticsearch benchmarks running on 8.0 snapshots, and it looks as though 
> this is because it's paying the price of keeping track of competitive scores, 
> but not actually making use of the new infrastructure.  We should modify the 
> bulk scorer to early-terminate if setMinCompetitiveScore() is called with a 
> value greater than the query's boost.






[jira] [Created] (SOLR-13065) Harden TestSimExecuteActionPlan

2018-12-12 Thread Jason Gerlowski (JIRA)
Jason Gerlowski created SOLR-13065:
--

 Summary: Harden TestSimExecuteActionPlan
 Key: SOLR-13065
 URL: https://issues.apache.org/jira/browse/SOLR-13065
 Project: Solr
  Issue Type: Test
  Security Level: Public (Default Security Level. Issues are Public)
Affects Versions: master (8.0)
Reporter: Jason Gerlowski
Assignee: Jason Gerlowski


TestSimExecuteActionPlan is a serial offender in our failed Jenkins jobs.  
I'd like to look into improving it.






[jira] [Commented] (LUCENE-8374) Reduce reads for sparse DocValues

2018-12-12 Thread Adrien Grand (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16719143#comment-16719143
 ] 

Adrien Grand commented on LUCENE-8374:
--

Thanks, Toke.

> Reduce reads for sparse DocValues
> -
>
> Key: LUCENE-8374
> URL: https://issues.apache.org/jira/browse/LUCENE-8374
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/codecs
>Affects Versions: 7.5, master (8.0)
>Reporter: Toke Eskildsen
>Priority: Major
>  Labels: performance
> Attachments: LUCENE-8374.patch, LUCENE-8374.patch, LUCENE-8374.patch, 
> LUCENE-8374.patch, LUCENE-8374.patch, LUCENE-8374.patch, LUCENE-8374.patch, 
> LUCENE-8374_branch_7_3.patch, LUCENE-8374_branch_7_3.patch.20181005, 
> LUCENE-8374_branch_7_4.patch, LUCENE-8374_branch_7_5.patch, 
> LUCENE-8374_part_1.patch, LUCENE-8374_part_2.patch, LUCENE-8374_part_3.patch, 
> LUCENE-8374_part_4.patch, entire_index_logs.txt, 
> image-2018-10-24-07-30-06-663.png, image-2018-10-24-07-30-56-962.png, 
> single_vehicle_logs.txt, 
> start-2018-10-24-1_snapshot___Users_tim_Snapshots__-_YourKit_Java_Profiler_2017_02-b75_-_64-bit.png,
>  
> start-2018-10-24_snapshot___Users_tim_Snapshots__-_YourKit_Java_Profiler_2017_02-b75_-_64-bit.png
>
>
> The {{Lucene70DocValuesProducer}} has the internal classes 
> {{SparseNumericDocValues}} and {{BaseSortedSetDocValues}} (sparse code path), 
> which again uses {{IndexedDISI}} to handle the docID -> value-ordinal lookup. 
> The value-ordinal is the index of the docID assuming an abstract tightly 
> packed monotonically increasing list of docIDs: If the docIDs with 
> corresponding values are {{[0, 4, 1432]}}, their value-ordinals will be {{[0, 
> 1, 2]}}.
> h2. Outer blocks
> The lookup structure of {{IndexedDISI}} consists of blocks of 2^16 values 
> (65536), where each block can be either {{ALL}}, {{DENSE}} (2^12 to 2^16 
> values) or {{SPARSE}} (< 2^12 values ~= 6%). Consequently blocks vary quite a 
> lot in size and ordinal resolving strategy.
> When a sparse Numeric DocValue is needed, the code first locates the block 
> containing the wanted docID flag. It does so by iterating blocks one-by-one 
> until it reaches the needed one, where each iteration requires a lookup in 
> the underlying {{IndexSlice}}. For a common memory mapped index, this 
> translates to either a cached request or a read operation. If a segment has 
> 6M documents, worst-case is 91 lookups. In our web archive, our segments have 
> ~300M values: a worst-case of 4577 lookups!
> One obvious solution is to use a lookup-table for blocks: A long[]-array with 
> an entry for each block. For 6M documents, that is < 1KB and would allow for 
> direct jumping (a single lookup) in all instances. Unfortunately this 
> lookup-table cannot be generated upfront when the writing of values is purely 
> streaming. It can be appended to the end of the stream before it is closed, 
> but without knowing the position of the lookup-table the reader cannot seek 
> to it.
> One strategy for creating such a lookup-table would be to generate it during 
> reads and cache it for next lookup. This does not fit directly into how 
> {{IndexedDISI}} currently works (it is created anew for each invocation), but 
> could probably be added with a little work. An advantage to this is that this 
> does not change the underlying format and thus could be used with existing 
> indexes.
> h2. The lookup structure inside each block
> If {{ALL}} of the 2^16 values are defined, the structure is empty and the 
> ordinal is simply the requested docID with some modulo and multiply math. 
> Nothing to improve there.
> If the block is {{DENSE}} (2^12 to 2^16 values are defined), a bitmap is used 
> and the number of set bits up to the wanted index (the docID modulo the block 
> origin) is counted. That bitmap is a long[1024], meaning that the worst case is 
> to look up and count all set bits for 1024 longs!
> One known solution to this is to use a [rank 
> structure|https://en.wikipedia.org/wiki/Succinct_data_structure]. I 
> [implemented 
> it|https://github.com/tokee/lucene-solr/blob/solr5894/solr/core/src/java/org/apache/solr/search/sparse/count/plane/RankCache.java]
>  for a related project and with that, the rank-overhead for a {{DENSE}} 
> block would be long[32] and would ensure a maximum of 9 lookups. It is not 
> trivial to build the rank-structure and caching it (assuming all blocks are 
> dense) for 6M documents would require 22 KB (3.17% overhead). It would be far 
> better to generate the rank-structure at index time and store it immediately 
> before the bitset (this is possible with streaming as each block is fully 
> resolved before flushing), but of course that would require a change to the 
> codec.
> If {{SPARSE}} (< 2^12 values ~= 6%) are defined, the 
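The rank idea for DENSE blocks described above can be sketched in isolation as follows. This is a hypothetical illustration of bounding the popcount work with a per-8-longs rank table, not Lucene's actual IndexedDISI code:

```java
public class DenseRank {
    // A DENSE block is a long[1024] bitmap covering 65536 docIDs. The rank
    // table stores, for every 8 longs (512 docIDs), the number of set bits
    // before that sub-block, bounding a lookup to at most 8 popcounts.
    static int[] buildRank(long[] bits) {
        int[] rank = new int[bits.length / 8];
        int count = 0;
        for (int i = 0; i < bits.length; i++) {
            if ((i & 7) == 0) {
                rank[i >> 3] = count; // set bits before this sub-block
            }
            count += Long.bitCount(bits[i]);
        }
        return rank;
    }

    // Number of set bits strictly before 'index' (the in-block ordinal).
    static int rankLookup(long[] bits, int[] rank, int index) {
        int word = index >>> 6;
        int count = rank[word >>> 3];           // jump to the sub-block
        for (int i = (word >>> 3) << 3; i < word; i++) {
            count += Long.bitCount(bits[i]);    // at most 7 full words
        }
        // Partial word: mask off bits at and above 'index'.
        count += Long.bitCount(bits[word] & ((1L << (index & 63)) - 1));
        return count;
    }
}
```

Without the rank table, the same lookup would have to popcount up to 1024 longs; with it, the per-lookup work is bounded regardless of where in the block the docID falls.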

[jira] [Commented] (SOLR-13040) Harden TestSQLHandler.

2018-12-12 Thread Yonik Seeley (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13040?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16719139#comment-16719139
 ] 

Yonik Seeley commented on SOLR-13040:
-

It's pretty strange... that error message "can not sort on a field..." is from 
a schema check and has nothing to do with what is in the index.
I tried looping the test overnight but couldn't reproduce it.
If I were to guess, it might be an issue in the test framework occasionally 
picking up the wrong schema or something?

> Harden TestSQLHandler.
> --
>
> Key: SOLR-13040
> URL: https://issues.apache.org/jira/browse/SOLR-13040
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mark Miller
>Assignee: Joel Bernstein
>Priority: Major
>







[jira] [Commented] (LUCENE-8607) Allow MatchAllDocsQuery to skip counting hits

2018-12-12 Thread Alan Woodward (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8607?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16719130#comment-16719130
 ] 

Alan Woodward commented on LUCENE-8607:
---

Here's the benchmark results using Adrien's idea:
{code:java}
Task      QPS baseline  StdDev   QPS my_modified_version  StdDev        Pct diff
MatchAll        503.00 (13.5%)                  12164.40 (188.7%)  2318.4% (1864% - 2914%)
{code}
Not quite as much speedup, but it's much simpler in terms of implementation and 
is guaranteed not to hurt performance when we're collecting all docs, while still 
being much better than today, so I think this is the way to go.
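The early-termination idea for a constant-score bulk scorer can be sketched as follows; the class and method names here are illustrative placeholders, not Lucene's actual scorer API:

```java
// Sketch: a constant-score bulk scorer can stop collecting as soon as the
// collector's minimum competitive score exceeds the constant score that
// every remaining hit would receive.
class ConstantBulkScorer {
    private final float constantScore;
    private float minCompetitiveScore = 0f;

    ConstantBulkScorer(float constantScore) {
        this.constantScore = constantScore;
    }

    // Called by the collector once it has gathered enough competitive hits.
    void setMinCompetitiveScore(float min) {
        this.minCompetitiveScore = min;
    }

    // Returns how many of the maxDoc documents were actually collected.
    int score(int maxDoc) {
        int collected = 0;
        for (int doc = 0; doc < maxDoc; doc++) {
            if (minCompetitiveScore > constantScore) {
                break; // every remaining doc is non-competitive
            }
            collected++;
        }
        return collected;
    }
}
```

The key property is that the check is a single comparison per document, so when the collector never raises the bar (collecting all docs), the scorer behaves exactly as before.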

> Allow MatchAllDocsQuery to skip counting hits
> -
>
> Key: LUCENE-8607
> URL: https://issues.apache.org/jira/browse/LUCENE-8607
> Project: Lucene - Core
>  Issue Type: Task
>Affects Versions: master (8.0)
>Reporter: Alan Woodward
>Assignee: Alan Woodward
>Priority: Major
> Attachments: LUCENE-8607.patch
>
>
> MatchAllDocsQuery currently uses a private bulk scorer with no 
> specialisations for setMinCompetitiveScore().  We've seen what looks to be 
> something like a halving of the performance of MatchAllDocsQuery in 
> elasticsearch benchmarks running on 8.0 snapshots, and it looks as though 
> this is because it's paying the price of keeping track of competitive scores, 
> but not actually making use of the new infrastructure.  We should modify the 
> bulk scorer to early-terminate if setMinCompetitiveScore() is called with a 
> value greater than the query's boost.






[JENKINS] Solr-reference-guide-master - Build # 12303 - Still Failing

2018-12-12 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Solr-reference-guide-master/12303/

Log: 
Started by timer
[EnvInject] - Loading node environment variables.
Building remotely on websites1 (git-websites svn-websites) in workspace 
/home/jenkins/jenkins-slave/workspace/Solr-reference-guide-master
 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url git://git.apache.org/lucene-solr.git # 
 > timeout=10
Cleaning workspace
 > git rev-parse --verify HEAD # timeout=10
Resetting working tree
 > git reset --hard # timeout=10
 > git clean -fdx # timeout=10
Fetching upstream changes from git://git.apache.org/lucene-solr.git
 > git --version # timeout=10
 > git fetch --tags --progress git://git.apache.org/lucene-solr.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/master^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/master^{commit} # timeout=10
Checking out Revision 7e4555a2fdb863d6aac2f785116f8f13e51bf16b 
(refs/remotes/origin/master)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 7e4555a2fdb863d6aac2f785116f8f13e51bf16b
Commit message: "SOLR-13057: Allow search, facet and timeseries Streaming 
Expressions to accept a comma delimited list of collections"
 > git rev-list --no-walk 7e4555a2fdb863d6aac2f785116f8f13e51bf16b # timeout=10
No emails were triggered.
[Solr-reference-guide-master] $ /bin/bash -xe /tmp/jenkins1987498850901524845.sh
+ bash dev-tools/scripts/jenkins.build.ref.guide.sh
+ set -e
+ RVM_PATH=/home/jenkins/.rvm
+ RUBY_VERSION=ruby-2.3.3
+ GEMSET=solr-refguide-gemset
+ curl -sSL https://get.rvm.io
+ bash -s -- --ignore-dotfiles stable
Turning on ignore dotfiles mode.
Downloading https://github.com/rvm/rvm/archive/1.29.5.tar.gz
Downloading 
https://github.com/rvm/rvm/releases/download/1.29.5/1.29.5.tar.gz.asc
gpg: Signature made Wed 12 Dec 2018 11:25:22 AM UTC using RSA key ID 39499BDB
gpg: Can't check signature: public key not found
Warning, RVM 1.26.0 introduces signed releases and automated check of 
signatures when GPG software found. Assuming you trust Michal Papis import the 
mpapis public key (downloading the signatures).

GPG signature verification failed for 
'/home/jenkins/shared/.rvm/archives/rvm-1.29.5.tgz' - 
'https://github.com/rvm/rvm/releases/download/1.29.5/1.29.5.tar.gz.asc'! Try to 
install GPG v2 and then fetch the public key:

gpg2 --recv-keys 409B6B1796C275462A1703113804BB82D39DC0E3

or if it fails:

command curl -sSL https://rvm.io/mpapis.asc | gpg --import -

the key can be compared with:

https://rvm.io/mpapis.asc
https://keybase.io/mpapis

NOTE: GPG version 2.1.17 have a bug which cause failures during fetching keys 
from remote server. Please downgrade or upgrade to newer version (if available) 
or use the second method described above.

Build step 'Execute shell' marked build as failure
Archiving artifacts
Publishing Javadoc
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any


[jira] [Created] (LUCENE-8608) Extract utility class to iterate over terms docs

2018-12-12 Thread Simon Willnauer (JIRA)
Simon Willnauer created LUCENE-8608:
---

 Summary: Extract utility class to iterate over terms docs
 Key: LUCENE-8608
 URL: https://issues.apache.org/jira/browse/LUCENE-8608
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Simon Willnauer
 Fix For: master (8.0), 7.7


Today we re-implement the same algorithm in various places
when we want to consume all docs for a set/list of terms. This
caused serious slowdowns, for instance in the case of applying
updates, fixed in LUCENE-8602. This change extracts the common
usage and shares the iteration code, including logic to reuse
Terms and PostingsEnum instances as much as possible, and adds
tests for it.






[jira] [Comment Edited] (SOLR-12883) Upgrade to Jetty 9.4.14

2018-12-12 Thread Erick Erickson (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12883?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16719111#comment-16719111
 ] 

Erick Erickson edited comment on SOLR-12883 at 12/12/18 4:02 PM:
-

Jetty was upgraded to Jetty 9.4.14.v20181114 as part of SOLR-13030. This was 
done as part of test hardening.


was (Author: erickerickson):
Jetty was upgraded to Jetty 9.4.14.v20181114 as part of SOLR-12=3030. This was 
done as part of test hardening,

> Upgrade to Jetty 9.4.14
> ---
>
> Key: SOLR-12883
> URL: https://issues.apache.org/jira/browse/SOLR-12883
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Cao Manh Dat
>Assignee: Cao Manh Dat
>Priority: Blocker
> Fix For: master (8.0), 7.7
>
>
> Start from 9.4.13 Jetty Client started to support SPNEGO authentication, 
> therefore this is the crucial missed part for jira/http2 branch which 
> switched internal communication from Apache HttpClient to Jetty Client.






Re: Java 11.

2018-12-12 Thread Erick Erickson
Thanks for the explanation Uwe!

I just tracked this down and Jetty has been upgraded to Jetty
9.4.14.v20181114 in 7.7 as well as master, in SOLR-13030 as part of
Mark's bug swatting work.

So to recap: Java 9, 10, and 11 are OK for production use as long as TLS
1.3 is _not_ in the picture. The (seemingly) increased number of test
failures are timing-related test issues, not relevant to production
use of Solr/Lucene.  Solr 7.7 is the minimum version that _may_
support TLS 1.3, since Jetty has been upgraded to 9.4.14 in that
version, but that has not been verified yet.

Is the above accurate? I'm assuming this includes OpenJDK as well
(people will ask).

I'm trying to get a succinct statement here for clients...

Thanks again,
Erick

On Wed, Dec 12, 2018 at 12:54 AM Uwe Schindler  wrote:
>
> Hi Erick,
>
> according to Jetty release logs: For Java 11 support, Jetty should be updated 
> to latest version released this November: 9.4.14 - which we have already done 
> in master (not sure about 7.x). Unfortunately the release candidate of 7.6 is 
> on Jetty 9.4.11 which has no Java 11 support for TLS 1.3 at all. IMHO, we 
> should update this ASAP, but this only affects support for TLS 1.3 - so it's 
> more a security fix.
>
> From what I see, the issues are more test issues, because failures happen randomly 
> with errors like "no live nodes available" or connection timeouts. I think the issue 
> here is that timing changed slightly, so tests fail more often (there seems 
> to be the problem that tests don't really detect the correct moment when Jetty 
> has fully started and accepts connections - this is at least how it was 
> explained to me at Berlin Buzzwords). So it depends on timing (e.g., if Jetty is slow 
> in starting up, it's hammered with requests already). The new TLS 
> functionality in JDK 11 seems to slow down startup times - no idea. Maybe Mark 
> Miller can explain better what's sometimes wrong with the timing. The previous 
> statement may be nonsense; it was just my understanding when talking 
> with other committers at Berlin Buzzwords.
>
> But the test failures seemed to happen inside HttpClient. So not only the 
> server can be the problem; maybe it's the client, too. I can say: those 
> issues do not happen in production. I had no problem connecting to a Jetty 
> server with a standard TLS browser on Java 11.
>
> The HTTP2 branch has some problems, but tests pass almost 100% of the time and it 
> looks much more stable. Recently we just had some other test problems 
> regarding which HTTP version to randomly support.
>
> Uwe
>
> -
> Uwe Schindler
> Achterdiek 19, D-28357 Bremen
> http://www.thetaphi.de
> eMail: u...@thetaphi.de
>
> > -Original Message-
> > From: Erick Erickson 
> > Sent: Wednesday, December 12, 2018 1:45 AM
> > To: dev@lucene.apache.org
> > Subject: Re: Java 11.
> >
> > Well, you've done a lot more thorough testing of the different
> > versions of Java than I have. I regularly see many more tests failing
> > that have JDK11 in the title, but perhaps it's all the TLS stuff.
> >
> > bq.  ...think, if you want to use SSL/TLS, the status in Java 11 is 
> > undefined
> >
> > This is a pretty important caveat; I have numerous clients that insist
> > on TLS. So are you saying that this problem is in the _tests_, or in
> > enabling SSL/TLS with JDK 11 in general?
> > On Tue, Dec 11, 2018 at 3:57 PM Uwe Schindler  wrote:
> > >
> > > Hi Erick,
> > >
> > > > I just noticed that Solr's CHANGES.txt has this at the beginning:
> > > >
> > > > You need a Java 1.8 VM or later installed.
> > > >
> > > > Is this still what we want to say between now and whenever we
> > > > understand the various failures on jdk 9, 10, 11 and 12? Do we want to
> > > > specifically say that 9 and 10 are not recommended?
> > >
> > > I think, if you want to use SSL/TLS, the status in Java 11 is
> > > undefined. The error rate with Java 11 is higher than with Java 8 to
> > > Java 10 (because of support for TLS 1.3, which seems to cause some
> > > SSL-related tests to fail). But standard Solr usage is perfectly
> > > possible with Java 8 to 11, and some of my customers (none of whom use
> > > SSL) have already switched without any problems.
> > >
> > > As far as I can see, the HTTP2 branch is also in good shape (with Java
> > > 11), so the statements in CHANGES.txt and SYSTEM_REQUIREMENTS.txt are
> > > accurate. I have no idea what you are talking about - why do you think
> > > that Java 9 or 10 does not work?
> > >
> > > On my machines, the smoke tester on Policeman Jenkins running with
> > > Java 9 finished on the first run (it ran tests with both Java 8 and
> > > Java 9). SUCCESS!
> > >
> > > The issues with tests are less about the Java version than about flaky
> > > tests. I cannot see any significant dependence on the Java version -
> > > sorry!
> > >
> > > Uwe
> > >
> > >
> > > -
> > > To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> > > For additional commands, 

[jira] [Resolved] (SOLR-12883) Upgrade to Jetty 9.4.14

2018-12-12 Thread Erick Erickson (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12883?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson resolved SOLR-12883.
---
   Resolution: Fixed
Fix Version/s: 7.7
   master (8.0)

Upgraded as part of SOLR-13030.

Dat: let me know if you disagree with closing this. Note also that I changed 
the version to correspond to the one actually in the code now.

> Upgrade to Jetty 9.4.14
> ---
>
> Key: SOLR-12883
> URL: https://issues.apache.org/jira/browse/SOLR-12883
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Cao Manh Dat
>Assignee: Cao Manh Dat
>Priority: Blocker
> Fix For: master (8.0), 7.7
>
>
> Starting from 9.4.13, the Jetty client supports SPNEGO authentication; this 
> was the crucial missing piece for the jira/http2 branch, which switched 
> internal communication from Apache HttpClient to the Jetty client.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13037) Harden TestSimGenericDistributedQueue.

2018-12-12 Thread Jason Gerlowski (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13037?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16719102#comment-16719102
 ] 

Jason Gerlowski commented on SOLR-13037:


I've attached a patch which takes approach #2 above.  With it, I haven't seen 
any GDQ test failures, though I'll be more confident after more beasting.  I'll 
run some tests in the background for the rest of today and then commit tonight 
if things still look good.

> Harden TestSimGenericDistributedQueue.
> --
>
> Key: SOLR-13037
> URL: https://issues.apache.org/jira/browse/SOLR-13037
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Tests
>Reporter: Mark Miller
>Assignee: Jason Gerlowski
>Priority: Major
> Attachments: SOLR-13037.patch, repro-log.txt
>
>







[jira] [Commented] (SOLR-12883) Upgrade to Jetty 9.4.14

2018-12-12 Thread Erick Erickson (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12883?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16719111#comment-16719111
 ] 

Erick Erickson commented on SOLR-12883:
---

Jetty was upgraded to Jetty 9.4.14.v20181114 as part of SOLR-13030. This was 
done as part of test hardening.

> Upgrade to Jetty 9.4.14
> ---
>
> Key: SOLR-12883
> URL: https://issues.apache.org/jira/browse/SOLR-12883
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Cao Manh Dat
>Assignee: Cao Manh Dat
>Priority: Blocker
>
> Starting from 9.4.13, the Jetty client supports SPNEGO authentication; this 
> was the crucial missing piece for the jira/http2 branch, which switched 
> internal communication from Apache HttpClient to the Jetty client.






[jira] [Commented] (LUCENE-8607) Allow MatchAllDocsQuery to skip counting hits

2018-12-12 Thread Adrien Grand (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8607?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16719108#comment-16719108
 ] 

Adrien Grand commented on LUCENE-8607:
--

I'm wondering what the slowdown is for someone who collects all matches 
(e.g. to compute facets across the whole index), as this patch adds some 
overhead to the bulk scorer. If it's not negligible, maybe an alternative 
would be to do something like {{if (scoreMode == TOP_SCORES) return 
super.bulkScorer();}} in Weight#bulkScorer? Otherwise, this is great, let's 
push. :)

> Allow MatchAllDocsQuery to skip counting hits
> -
>
> Key: LUCENE-8607
> URL: https://issues.apache.org/jira/browse/LUCENE-8607
> Project: Lucene - Core
>  Issue Type: Task
>Affects Versions: master (8.0)
>Reporter: Alan Woodward
>Assignee: Alan Woodward
>Priority: Major
> Attachments: LUCENE-8607.patch
>
>
> MatchAllDocsQuery currently uses a private bulk scorer with no 
> specialisations for setMinCompetitiveScore().  We've seen what looks to be 
> something like a halving of the performance of MatchAllDocsQuery in 
> elasticsearch benchmarks running on 8.0 snapshots, and it looks as though 
> this is because it's paying the price of keeping track of competitive scores, 
> but not actually making use of the new infrastructure.  We should modify the 
> bulk scorer to early-terminate if setMinCompetitiveScore() is called with a 
> value greater than the query's boost.






[jira] [Updated] (SOLR-12883) Upgrade to Jetty 9.4.14

2018-12-12 Thread Erick Erickson (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12883?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson updated SOLR-12883:
--
Summary: Upgrade to Jetty 9.4.14  (was: Upgrade to Jetty 9.4.13)

> Upgrade to Jetty 9.4.14
> ---
>
> Key: SOLR-12883
> URL: https://issues.apache.org/jira/browse/SOLR-12883
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Cao Manh Dat
>Assignee: Cao Manh Dat
>Priority: Blocker
>
> Starting from 9.4.13, the Jetty client supports SPNEGO authentication; this 
> was the crucial missing piece for the jira/http2 branch, which switched 
> internal communication from Apache HttpClient to the Jetty client.






[GitHub] lucene-solr pull request #526: LUCENE-8608: Extract utility class to iterate...

2018-12-12 Thread s1monw
GitHub user s1monw opened a pull request:

https://github.com/apache/lucene-solr/pull/526

LUCENE-8608: Extract utility class to iterate over terms docs

Today we re-implement the same algorithm in various places
when we want to consume all docs for a set/list of terms. This
caused serious slowdowns, for instance in the case of applying
updates, fixed in LUCENE-8602. This change extracts the common
usage and shares the iteration code, including logic to reuse
Terms and PostingsEnum instances as much as possible, and adds
tests for it.
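The pattern the PR describes can be sketched with a self-contained toy model: one shared "postings cursor" is repositioned per term instead of allocating a new one each time. The class and method names here (TermDocsIterator, seek, docsForTerms) are illustrative only, not the actual Lucene API introduced by this change.

```java
import java.util.*;

// Toy model of "consume all docs for a set/list of terms" with a reused
// cursor. A real Lucene implementation would reuse TermsEnum/PostingsEnum;
// here a plain array plays the role of the postings.
public class TermDocsIterator {
    private final Map<String, int[]> postings; // term -> sorted doc ids
    private int[] current;                     // the reused "postings" cursor
    private int pos;

    public TermDocsIterator(Map<String, int[]> postings) {
        this.postings = postings;
    }

    // Position the shared cursor on a term; returns false if the term is absent.
    public boolean seek(String term) {
        current = postings.get(term);
        pos = 0;
        return current != null;
    }

    // -1 plays the role of DocIdSetIterator.NO_MORE_DOCS.
    public int nextDoc() {
        return (current != null && pos < current.length) ? current[pos++] : -1;
    }

    // Collect all docs matching any of the given terms, deduplicated.
    public SortedSet<Integer> docsForTerms(Collection<String> terms) {
        SortedSet<Integer> docs = new TreeSet<>();
        for (String t : terms) {
            if (!seek(t)) continue;
            for (int d = nextDoc(); d != -1; d = nextDoc()) docs.add(d);
        }
        return docs;
    }

    public static void main(String[] args) {
        Map<String, int[]> idx = new HashMap<>();
        idx.put("apache", new int[] {0, 2, 5});
        idx.put("lucene", new int[] {2, 3});
        TermDocsIterator it = new TermDocsIterator(idx);
        // "solr" is absent and is simply skipped.
        System.out.println(it.docsForTerms(Arrays.asList("apache", "lucene", "solr")));
    }
}
```

The point of factoring this out is that every caller gets the enum-reuse logic for free instead of re-deriving it (and sometimes getting it wrong, as in the LUCENE-8602 slowdown).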

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/s1monw/lucene-solr extract_terms_seeker

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/lucene-solr/pull/526.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #526


commit ed7f8531c0274c36d7cacf1abc4894d27592167c
Author: Simon Willnauer 
Date:   2018-12-07T21:17:26Z

LUCENE-8608: Extract utility class to iterate over terms docs

Today we re-implement the same algorithm in various places
when we want to consume all docs for a set/list of terms. This
caused serious slowdowns, for instance in the case of applying
updates, fixed in LUCENE-8602. This change extracts the common
usage and shares the iteration code, including logic to reuse
Terms and PostingsEnum instances as much as possible, and adds
tests for it.




---




[jira] [Updated] (SOLR-13037) Harden TestSimGenericDistributedQueue.

2018-12-12 Thread Jason Gerlowski (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13037?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gerlowski updated SOLR-13037:
---
Attachment: SOLR-13037.patch

> Harden TestSimGenericDistributedQueue.
> --
>
> Key: SOLR-13037
> URL: https://issues.apache.org/jira/browse/SOLR-13037
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Tests
>Reporter: Mark Miller
>Assignee: Jason Gerlowski
>Priority: Major
> Attachments: SOLR-13037.patch, repro-log.txt
>
>







[jira] [Commented] (LUCENE-8464) Implement ConstantScoreScorer#setMinCompetitiveScore

2018-12-12 Thread Alan Woodward (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8464?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16719100#comment-16719100
 ] 

Alan Woodward commented on LUCENE-8464:
---

[~cbismuth] thought you'd like to know that this looks to have made an 
impressive change to the performance of Wildcard and Prefix queries.  Nicely 
done!

[https://home.apache.org/~mikemccand/lucenebench/Wildcard.html]

[https://home.apache.org/~mikemccand/lucenebench/Prefix3.html]

 

> Implement ConstantScoreScorer#setMinCompetitiveScore
> 
>
> Key: LUCENE-8464
> URL: https://issues.apache.org/jira/browse/LUCENE-8464
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Priority: Minor
>  Labels: newdev
> Fix For: master (8.0)
>
>  Time Spent: 8h
>  Remaining Estimate: 0h
>
> We should make it so the iterator returns NO_MORE_DOCS after 
> setMinCompetitiveScore is called with a value that is greater than the 
> constant score.
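The behavior the issue asks for can be modeled in a few lines. This is a simplified stand-in, not the real ConstantScoreScorer: the class name, the plain int[] of docs, and the collector call sequence are all illustrative.

```java
// Toy constant-score scorer: once the collector declares a minimum
// competitive score above the constant score, the iterator behaves as if
// exhausted and returns NO_MORE_DOCS immediately.
public class ConstantScoreModel {
    static final int NO_MORE_DOCS = Integer.MAX_VALUE;

    private final float score;  // the constant score of every match
    private final int[] docs;
    private int pos = -1;
    private boolean exhausted;

    public ConstantScoreModel(float score, int[] docs) {
        this.score = score;
        this.docs = docs;
    }

    // Called by the collector: if even the constant score cannot compete,
    // there is no point in iterating the remaining docs.
    public void setMinCompetitiveScore(float min) {
        if (min > score) exhausted = true;
    }

    public int nextDoc() {
        if (exhausted || ++pos >= docs.length) return NO_MORE_DOCS;
        return docs[pos];
    }

    public static void main(String[] args) {
        ConstantScoreModel s = new ConstantScoreModel(1f, new int[] {3, 7, 9});
        System.out.println(s.nextDoc());                  // 3
        s.setMinCompetitiveScore(2f);                     // can no longer compete
        System.out.println(s.nextDoc() == NO_MORE_DOCS);  // true
    }
}
```

This is the same trick that produced the Wildcard/Prefix speedups linked above: top-k collection can abandon a constant-score clause as soon as the k-th best hit already beats it.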






[jira] [Commented] (LUCENE-8581) Change LatLonShape encoding to use 4 BYTES Per Dimension

2018-12-12 Thread Adrien Grand (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8581?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16719097#comment-16719097
 ] 

Adrien Grand commented on LUCENE-8581:
--

bq. The last thing remaining is which orientation should we use in the encoding 
(currently CW).

Is my assumption correct that, with your changes to the tests, whether we pick 
CW or CCW doesn't matter and is just a matter of convention?

I still suspect that we could greatly simplify the encoding logic by first 
rotating the vertices of the triangles so that we always have e.g. ax == minX, 
so that we never need to check this condition later on?
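The rotation suggested here is just a cyclic shift of the three vertices, which preserves the triangle's orientation (CW stays CW). A minimal sketch, with plain ints standing in for encoded lat/lon values and the flat `{ax, ay, bx, by, cx, cy}` layout assumed for illustration:

```java
// Cyclically shift a triangle's vertices so the vertex with the minimum x
// ends up in position a. Later encoding steps can then rely on ax == minX
// instead of branching on which vertex holds the minimum.
public class TriangleRotate {
    // p = {ax, ay, bx, by, cx, cy}; returns a rotated copy.
    public static int[] rotateToMinX(int[] p) {
        int start = 0;
        if (p[2] < p[start * 2]) start = 1;  // bx smaller so far?
        if (p[4] < p[start * 2]) start = 2;  // cx smallest?
        int[] out = new int[6];
        for (int i = 0; i < 3; i++) {
            int src = (start + i) % 3;       // cyclic shift keeps orientation
            out[i * 2] = p[src * 2];
            out[i * 2 + 1] = p[src * 2 + 1];
        }
        return out;
    }

    public static void main(String[] args) {
        int[] t = rotateToMinX(new int[] {5, 0, 1, 2, 3, 4});
        System.out.println(java.util.Arrays.toString(t)); // [1, 2, 3, 4, 5, 0]
    }
}
```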

> Change LatLonShape encoding to use 4 BYTES Per Dimension
> 
>
> Key: LUCENE-8581
> URL: https://issues.apache.org/jira/browse/LUCENE-8581
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Nicholas Knize
>Assignee: Ignacio Vera
>Priority: Major
> Attachments: LUCENE-8581.patch, LUCENE-8581.patch, LUCENE-8581.patch, 
> LUCENE-8581.patch, LUCENE-8581.patch
>
>
> {{LatLonShape}} tessellated triangles currently use a relatively naive 
> encoding with the first four dimensions as the bounding box of the triangle 
> and the last three dimensions as the vertices of the triangle. To encode the 
> {{x,y}} vertices in the last three dimensions requires {{bytesPerDim}} to be 
> set to 8, with 4 bytes for the x & y axis, respectively. We can reduce 
> {{bytesPerDim}} to 4 by encoding the index(es) of the vertices shared by the 
> bounding box along with the orientation of the triangle. This also opens the 
> door for supporting {{CONTAINS}} queries.






[jira] [Commented] (LUCENE-8607) Allow MatchAllDocsQuery to skip counting hits

2018-12-12 Thread Alan Woodward (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8607?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16719094#comment-16719094
 ] 

Alan Woodward commented on LUCENE-8607:
---

Here's a patch implementing early termination.  I modified the wikimedium 
benchmark tasks to include a MatchAllDocsQuery, and got the following:
{code:java}
TaskQPS baseline StdDevQPS my_modified_version StdDev Pct diff

MatchAll 504.27 (12.8%) 17273.74 (375.3%) 3325.5% (2603% - 4260%){code}
Not too shabby :)

> Allow MatchAllDocsQuery to skip counting hits
> -
>
> Key: LUCENE-8607
> URL: https://issues.apache.org/jira/browse/LUCENE-8607
> Project: Lucene - Core
>  Issue Type: Task
>Affects Versions: master (8.0)
>Reporter: Alan Woodward
>Assignee: Alan Woodward
>Priority: Major
> Attachments: LUCENE-8607.patch
>
>
> MatchAllDocsQuery currently uses a private bulk scorer with no 
> specialisations for setMinCompetitiveScore().  We've seen what looks to be 
> something like a halving of the performance of MatchAllDocsQuery in 
> elasticsearch benchmarks running on 8.0 snapshots, and it looks as though 
> this is because it's paying the price of keeping track of competitive scores, 
> but not actually making use of the new infrastructure.  We should modify the 
> bulk scorer to early-terminate if setMinCompetitiveScore() is called with a 
> value greater than the query's boost.






[jira] [Updated] (LUCENE-8607) Allow MatchAllDocsQuery to skip counting hits

2018-12-12 Thread Alan Woodward (JIRA)


 [ 
https://issues.apache.org/jira/browse/LUCENE-8607?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Woodward updated LUCENE-8607:
--
Attachment: LUCENE-8607.patch

> Allow MatchAllDocsQuery to skip counting hits
> -
>
> Key: LUCENE-8607
> URL: https://issues.apache.org/jira/browse/LUCENE-8607
> Project: Lucene - Core
>  Issue Type: Task
>Affects Versions: master (8.0)
>Reporter: Alan Woodward
>Assignee: Alan Woodward
>Priority: Major
> Attachments: LUCENE-8607.patch
>
>
> MatchAllDocsQuery currently uses a private bulk scorer with no 
> specialisations for setMinCompetitiveScore().  We've seen what looks to be 
> something like a halving of the performance of MatchAllDocsQuery in 
> elasticsearch benchmarks running on 8.0 snapshots, and it looks as though 
> this is because it's paying the price of keeping track of competitive scores, 
> but not actually making use of the new infrastructure.  We should modify the 
> bulk scorer to early-terminate if setMinCompetitiveScore() is called with a 
> value greater than the query's boost.






[jira] [Created] (LUCENE-8607) Allow MatchAllDocsQuery to skip counting hits

2018-12-12 Thread Alan Woodward (JIRA)
Alan Woodward created LUCENE-8607:
-

 Summary: Allow MatchAllDocsQuery to skip counting hits
 Key: LUCENE-8607
 URL: https://issues.apache.org/jira/browse/LUCENE-8607
 Project: Lucene - Core
  Issue Type: Task
Affects Versions: master (8.0)
Reporter: Alan Woodward
Assignee: Alan Woodward


MatchAllDocsQuery currently uses a private bulk scorer with no specialisations 
for setMinCompetitiveScore().  We've seen what looks to be something like a 
halving of the performance of MatchAllDocsQuery in elasticsearch benchmarks 
running on 8.0 snapshots, and it looks as though this is because it's paying 
the price of keeping track of competitive scores, but not actually making use 
of the new infrastructure.  We should modify the bulk scorer to early-terminate 
if setMinCompetitiveScore() is called with a value greater than the query's 
boost.
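The early termination described above can be sketched with a self-contained toy (not the actual Lucene patch; the class name and the `raiseMinAfter` simulation hook are invented for illustration): a match-all bulk scorer that stops the moment the minimum competitive score exceeds the query's constant boost.

```java
// Toy match-all bulk scorer: scores docs in order, but once the collector
// raises the minimum competitive score above the query's boost, no remaining
// doc can compete, so the loop exits instead of visiting every doc.
public class MatchAllBulkScorerModel {
    private final float boost;
    private float minCompetitiveScore; // raised by the collector over time
    private int docsScored;

    public MatchAllBulkScorerModel(float boost) { this.boost = boost; }

    public void setMinCompetitiveScore(float min) { minCompetitiveScore = min; }

    // Score docs [0, maxDoc); returns how many docs were actually visited.
    // raiseMinAfter simulates the collector's top hits filling up after that
    // many collected docs.
    public int score(int maxDoc, int raiseMinAfter) {
        for (int doc = 0; doc < maxDoc; doc++) {
            if (minCompetitiveScore > boost) break; // early termination
            docsScored++;
            if (doc + 1 == raiseMinAfter) setMinCompetitiveScore(boost + 1f);
        }
        return docsScored;
    }

    public static void main(String[] args) {
        MatchAllBulkScorerModel s = new MatchAllBulkScorerModel(1f);
        System.out.println(s.score(1_000_000, 10)); // visits 10, not 1,000,000
    }
}
```

This mirrors why the benchmark numbers below jump so dramatically: for a top-k query the scorer only needs to visit roughly k docs before the min competitive score passes the constant boost.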






[JENKINS] Lucene-Solr-7.x-MacOSX (64bit/jdk-9) - Build # 970 - Still Unstable!

2018-12-12 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-MacOSX/970/
Java: 64bit/jdk-9 -XX:+UseCompressedOops -XX:+UseParallelGC

88 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.analytics.facet.PivotFacetTest

Error Message:
Error starting up MiniSolrCloudCluster

Stack Trace:
java.lang.Exception: Error starting up MiniSolrCloudCluster
at __randomizedtesting.SeedInfo.seed([3C16CC045188ABE5]:0)
at 
org.apache.solr.cloud.MiniSolrCloudCluster.checkForExceptions(MiniSolrCloudCluster.java:630)
at 
org.apache.solr.cloud.MiniSolrCloudCluster.(MiniSolrCloudCluster.java:276)
at 
org.apache.solr.cloud.SolrCloudTestCase$Builder.build(SolrCloudTestCase.java:206)
at 
org.apache.solr.cloud.SolrCloudTestCase$Builder.configure(SolrCloudTestCase.java:198)
at 
org.apache.solr.analytics.SolrAnalyticsTestCase.setupCollection(SolrAnalyticsTestCase.java:60)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:875)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:844)
Suppressed: java.lang.RuntimeException: Jetty/Solr unresponsive
at 
org.apache.solr.client.solrj.embedded.JettySolrRunner.start(JettySolrRunner.java:459)
at 
org.apache.solr.client.solrj.embedded.JettySolrRunner.start(JettySolrRunner.java:417)
at 
org.apache.solr.cloud.MiniSolrCloudCluster.startJettySolrRunner(MiniSolrCloudCluster.java:443)
at 
org.apache.solr.cloud.MiniSolrCloudCluster.lambda$new$0(MiniSolrCloudCluster.java:272)
at 
java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:209)
at 
java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1167)
at 
java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:641)
... 1 more
Suppressed: java.lang.RuntimeException: Jetty/Solr unresponsive
at 
org.apache.solr.client.solrj.embedded.JettySolrRunner.start(JettySolrRunner.java:459)
at 
org.apache.solr.client.solrj.embedded.JettySolrRunner.start(JettySolrRunner.java:417)
at 
org.apache.solr.cloud.MiniSolrCloudCluster.startJettySolrRunner(MiniSolrCloudCluster.java:443)
at 
org.apache.solr.cloud.MiniSolrCloudCluster.lambda$new$0(MiniSolrCloudCluster.java:272)
at 
java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at 

[jira] [Comment Edited] (SOLR-13040) Harden TestSQLHandler.

2018-12-12 Thread Joel Bernstein (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13040?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16719066#comment-16719066
 ] 

Joel Bernstein edited comment on SOLR-13040 at 12/12/18 3:07 PM:
-

I'll dig into this more today. I'll start testing with the delete-cores calls 
removed. The failures I'm seeing would be a strange side effect of the 
delete-cores calls, if removing them cleans up the issue.

I think this test could fail without beasting because nothing at all was being 
suppressed. So there may be unrelated test failures that are reproducible by 
seed.

The test failures I was seeing were not reproducible with the seed shown in 
the beast failure output. So that does lead me to think that it is something 
related to a side effect that occurs while beasting.


was (Author: joel.bernstein):
I'll dig into this more today. I'll start testing with the delete cores 
removed. The failures I'm seeing would be a strange side effect of the delete 
cores calls if that cleans up the issue. 

I think this test could fail without beasting because nothing at all was being 
suppressed. So there may to be unrelated, reproducible by seed test failures.

The test failures I was seeing were not reproducible by the seed shown in the 
beast failure output. So that does lead me to think that it is something 
related to side effect that occurs while beasting.

> Harden TestSQLHandler.
> --
>
> Key: SOLR-13040
> URL: https://issues.apache.org/jira/browse/SOLR-13040
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mark Miller
>Assignee: Joel Bernstein
>Priority: Major
>







[jira] [Commented] (LUCENE-8606) ConstantScoreQuery loses explain details of wrapped query

2018-12-12 Thread Christian Ziech (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8606?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16719068#comment-16719068
 ] 

Christian Ziech commented on LUCENE-8606:
-

Getting the tests you mentioned to work is the easy part. The harder part is 
that explaining a BooleanWeight that was created with "needsScores == false" 
runs into assertions...

> ConstantScoreQuery loses explain details of wrapped query
> --
>
> Key: LUCENE-8606
> URL: https://issues.apache.org/jira/browse/LUCENE-8606
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Christian Ziech
>Priority: Major
> Attachments: 
> 0001-LUCENE-8606-adding-a-constructor-for-the-ConstantSco.patch, 
> 0001-LUCENE-8606-overwriting-the-explain-method-for-Cachi.patch
>
>
> Right now the ConstantScoreWeight used by the ConstantScoreQuery does not 
> add the details of the wrapped query to the explanation.
> {code}
> if (exists) {
> return Explanation.match(score, getQuery().toString() + (score == 1f ? "" 
> : "^" + score));
> } else {
> return Explanation.noMatch(getQuery().toString() + " doesn't match id " + 
> doc);
> }
> {code}
> This is inconvenient, as it makes it hard to figure out which term actually 
> matched when one e.g. puts a BooleanQuery into the FILTER clause of another 
> BooleanQuery.
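The attached patches essentially attach the wrapped query's explanation as a nested detail instead of discarding it. A minimal self-contained model of that idea (the Explanation shape here only mirrors Lucene's match/value/description/details quadruple; it is not the real class, and `constantScore` is an invented helper):

```java
import java.util.*;

// Simplified Explanation: a match flag, a score value, a description, and
// nested sub-explanations. The constant-score wrapper keeps the constant
// score but carries the inner explanation along as a detail.
public class ExplainModel {
    final boolean match;
    final float value;
    final String description;
    final List<ExplainModel> details;

    ExplainModel(boolean match, float value, String description, ExplainModel... details) {
        this.match = match;
        this.value = value;
        this.description = description;
        this.details = Arrays.asList(details);
    }

    static ExplainModel constantScore(float score, ExplainModel inner) {
        return inner.match
            ? new ExplainModel(true, score, "ConstantScore(" + inner.description + ")", inner)
            : new ExplainModel(false, 0f, inner.description + " doesn't match", inner);
    }

    public static void main(String[] args) {
        ExplainModel term = new ExplainModel(true, 4.2f, "weight(body:lucene)");
        ExplainModel e = ExplainModel.constantScore(1f, term);
        System.out.println(e.description);               // the constant-score wrapper
        System.out.println(e.details.get(0).description); // the clause that matched
    }
}
```

With the detail retained, drilling into a FILTER clause's BooleanQuery shows which term actually matched, which is exactly what the flat explanation loses.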






[jira] [Commented] (SOLR-13040) Harden TestSQLHandler.

2018-12-12 Thread Joel Bernstein (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13040?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16719066#comment-16719066
 ] 

Joel Bernstein commented on SOLR-13040:
---

I'll dig into this more today. I'll start testing with the delete-cores calls 
removed. The failures I'm seeing would be a strange side effect of the 
delete-cores calls, if removing them cleans up the issue.

I think this test could fail without beasting because nothing at all was being 
suppressed. So there may be unrelated test failures that are reproducible by 
seed.

The test failures I was seeing were not reproducible with the seed shown in 
the beast failure output. So that does lead me to think that it is something 
related to a side effect that occurs while beasting.

> Harden TestSQLHandler.
> --
>
> Key: SOLR-13040
> URL: https://issues.apache.org/jira/browse/SOLR-13040
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mark Miller
>Assignee: Joel Bernstein
>Priority: Major
>







[JENKINS] Solr-reference-guide-master - Build # 12302 - Still Failing

2018-12-12 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Solr-reference-guide-master/12302/

Log: 
Started by timer
[EnvInject] - Loading node environment variables.
Building remotely on websites1 (git-websites svn-websites) in workspace 
/home/jenkins/jenkins-slave/workspace/Solr-reference-guide-master
 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url git://git.apache.org/lucene-solr.git # 
 > timeout=10
Cleaning workspace
 > git rev-parse --verify HEAD # timeout=10
Resetting working tree
 > git reset --hard # timeout=10
 > git clean -fdx # timeout=10
Fetching upstream changes from git://git.apache.org/lucene-solr.git
 > git --version # timeout=10
 > git fetch --tags --progress git://git.apache.org/lucene-solr.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/master^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/master^{commit} # timeout=10
Checking out Revision 7e4555a2fdb863d6aac2f785116f8f13e51bf16b 
(refs/remotes/origin/master)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 7e4555a2fdb863d6aac2f785116f8f13e51bf16b
Commit message: "SOLR-13057: Allow search, facet and timeseries Streaming 
Expressions to accept a comma delimited list of collections"
 > git rev-list --no-walk ce9a8012c080dbf2a96a6755a0b7048ab5739419 # timeout=10
No emails were triggered.
[Solr-reference-guide-master] $ /bin/bash -xe /tmp/jenkins8055666091360977223.sh
+ bash dev-tools/scripts/jenkins.build.ref.guide.sh
+ set -e
+ RVM_PATH=/home/jenkins/.rvm
+ RUBY_VERSION=ruby-2.3.3
+ GEMSET=solr-refguide-gemset
+ curl -sSL https://get.rvm.io
+ bash -s -- --ignore-dotfiles stable
Turning on ignore dotfiles mode.
Downloading https://github.com/rvm/rvm/archive/1.29.5.tar.gz
Downloading 
https://github.com/rvm/rvm/releases/download/1.29.5/1.29.5.tar.gz.asc
gpg: Signature made Wed 12 Dec 2018 11:25:22 AM UTC using RSA key ID 39499BDB
gpg: Can't check signature: public key not found
Warning, RVM 1.26.0 introduces signed releases and automated check of 
signatures when GPG software found. Assuming you trust Michal Papis import the 
mpapis public key (downloading the signatures).

GPG signature verification failed for 
'/home/jenkins/shared/.rvm/archives/rvm-1.29.5.tgz' - 
'https://github.com/rvm/rvm/releases/download/1.29.5/1.29.5.tar.gz.asc'! Try to 
install GPG v2 and then fetch the public key:

gpg2 --recv-keys 409B6B1796C275462A1703113804BB82D39DC0E3

or if it fails:

command curl -sSL https://rvm.io/mpapis.asc | gpg --import -

the key can be compared with:

https://rvm.io/mpapis.asc
https://keybase.io/mpapis

NOTE: GPG version 2.1.17 have a bug which cause failures during fetching keys 
from remote server. Please downgrade or upgrade to newer version (if available) 
or use the second method described above.

Build step 'Execute shell' marked build as failure
Archiving artifacts
Publishing Javadoc
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any


[jira] [Commented] (SOLR-13014) URI Too Long with large streaming expressions in SolrJ

2018-12-12 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13014?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16719045#comment-16719045
 ] 

ASF subversion and git services commented on SOLR-13014:


Commit f2702a0b57420588bb99fe7a0f17bdd5894036b8 in lucene-solr's branch 
refs/heads/branch_7x from [~joel.bernstein]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=f2702a0 ]

SOLR-13014: Fixed SearchStream in branch_7x. This was needed due to a 
confusing backport situation: SearchStream was originally slated for 8.0 and 
was only backported to 7x later, after SOLR-13014 had already changed 
SearchStream in master and backported those changes to 7x while SearchStream 
hadn't yet been moved to 7x.


> URI Too Long with large streaming expressions in SolrJ
> --
>
> Key: SOLR-13014
> URL: https://issues.apache.org/jira/browse/SOLR-13014
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrJ, streaming expressions
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
>Priority: Major
> Fix For: master (8.0), 7.7
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> For very large expressions (e.g. with a complex search string) we'll hit the 
> maximum HTTP GET URI length, since SolrJ does not enforce POST for all 
> expressions. 
> This goes at least for {{FacetStream}}, {{StatsStream}} and 
> {{TimeSeriesStream}}, and I'll link a Pull Request fixing these three.
> Here is an example of a stack trace when using TimeSeriesStream with a very 
> large expression: [https://gist.github.com/ea626cf1ec579daaf253aeb805d1532c]
> The fix is simply to use {{new QueryRequest(parameters, 
> SolrRequest.METHOD.POST);}} to explicitly force POST.
> See also solr-user thread 
> [http://lucene.472066.n3.nabble.com/Streaming-Expressions-GET-vs-POST-td4415044.html]
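The GET-versus-POST breaking point can be illustrated with a stdlib-only sketch 
(the 8192-byte threshold mirrors a typical servlet-container default and is an 
assumption, as is the helper class below; this is not SolrJ code, and the 
actual fix is the {{QueryRequest}} constructor quoted above):

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

public class UriLengthDemo {
    // Illustrative limit: many servlet containers reject request lines
    // beyond a few KB; 8192 is an assumed threshold, not a SolrJ constant.
    static final int MAX_GET_URI = 8192;

    // Decide whether an expression still fits in a GET URI once encoded.
    static String method(String baseUrl, String expr) {
        String uri = baseUrl + "?expr="
                + URLEncoder.encode(expr, StandardCharsets.UTF_8);
        return uri.length() > MAX_GET_URI ? "POST" : "GET";
    }

    public static void main(String[] args) {
        String small = "search(coll, q=\"*:*\", fl=\"id\", sort=\"id asc\")";
        // Simulate a complex search string of the kind the issue describes.
        StringBuilder big = new StringBuilder("search(coll, q=\"");
        for (int i = 0; i < 2000; i++) {
            big.append("field:term").append(i).append(" OR ");
        }
        big.append("field:last\")");

        System.out.println(method("http://localhost:8983/solr/coll/stream", small));
        System.out.println(method("http://localhost:8983/solr/coll/stream", big.toString()));
    }
}
```

Forcing POST unconditionally, as the linked pull request does for the three 
stream classes, sidesteps the length check entirely.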



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)




[jira] [Assigned] (SOLR-13040) Harden TestSQLHandler.

2018-12-12 Thread Joel Bernstein (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13040?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein reassigned SOLR-13040:
-

Assignee: Joel Bernstein

> Harden TestSQLHandler.
> --
>
> Key: SOLR-13040
> URL: https://issues.apache.org/jira/browse/SOLR-13040
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mark Miller
>Assignee: Joel Bernstein
>Priority: Major
>







[jira] [Commented] (LUCENE-8585) Create jump-tables for DocValues at index-time

2018-12-12 Thread Toke Eskildsen (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8585?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16719050#comment-16719050
 ] 

Toke Eskildsen commented on LUCENE-8585:


I have cleaned up, moved old lucene70 codec classes and in general tried to 
finish the job. For a change of pace and ease of review, I have created a 
pull-request instead of a patch. I'll create a patch, should anyone want that 
instead.

I have a single pending issue with unit-testing: the method 
{{BaseDocValuesFormatTestCase.doTestNumericsVsStoredFields}} is used a lot and 
currently operates with 300 documents, which is far from enough when testing 
jumps. Upping it to 200,000 means that jumping can be implicitly tested for all 
the different test cases in {{BaseDocValuesFormatTestCase}}, but that increases 
processing time a lot. I could make it switch from 300 to 200,000 when running 
{{Nightly}}, or I could hand-pick some of the tests and increase the document 
count for just those, which would mean worse coverage but better speed. Which 
trade-off is preferable?

> Create jump-tables for DocValues at index-time
> --
>
> Key: LUCENE-8585
> URL: https://issues.apache.org/jira/browse/LUCENE-8585
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/codecs
>Affects Versions: master (8.0)
>Reporter: Toke Eskildsen
>Priority: Minor
>  Labels: performance
> Attachments: LUCENE-8585.patch, LUCENE-8585.patch, 
> make_patch_lucene8585.sh
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> As noted in LUCENE-7589, lookup of DocValues should use jump-tables to avoid 
> long iterative walks. This is implemented in LUCENE-8374 at search-time 
> (first request for DocValues from a field in a segment), with the benefit of 
> working without changes to existing Lucene 7 indexes and the downside of 
> introducing a startup time penalty and a memory overhead.
> As discussed in LUCENE-8374, the codec should be updated to create these 
> jump-tables at index time. This eliminates the segment-open time & memory 
> penalties, with the potential downside of increasing index-time for DocValues.
> The three elements of LUCENE-8374 should be transferable to index-time 
> without much alteration of the core structures:
>  * {{IndexedDISI}} block offset and index skips: A {{long}} (64 bits) for 
> every 65536 documents, containing the offset of the block in 33 bits and the 
> index (number of set bits) up to the block in 31 bits.
>  It can be built sequentially and should be stored as a simple sequence of 
> consecutive longs for caching of lookups.
>  As it is fairly small, relative to document count, it might be better to 
> simply memory cache it?
>  * {{IndexedDISI}} DENSE (> 4095, < 65536 set bits) blocks: A {{short}} (16 
> bits) for every 8 {{longs}} (512 bits) for a total of 256 bytes/DENSE_block. 
> Each {{short}} represents the number of set bits up to right before the 
> corresponding sub-block of 512 docIDs.
>  The {{shorts}} can be computed sequentially or when the DENSE block is 
> flushed (probably the easiest). They should be stored as a simple sequence of 
> consecutive shorts for caching of lookups, one logically independent sequence 
> for each DENSE block. The logical position would be one sequence at the start 
> of every DENSE block.
>  Whether it is best to read all the 16 {{shorts}} up front when a DENSE block 
> is accessed or whether it is best to only read any individual {{short}} when 
> needed is not clear at this point.
>  * Variable Bits Per Value: A {{long}} (64 bits) for every 16384 numeric 
> values. Each {{long}} holds the offset to the corresponding block of values.
>  The offsets can be computed sequentially and should be stored as a simple 
> sequence of consecutive {{longs}} for caching of lookups.
>  The vBPV-offsets have the largest space overhead of the 3 jump-tables, and a 
> lot of the 64 bits in each long are not used for most indexes. They could be 
> represented as a simple {{PackedInts}} sequence or {{MonotonicLongValues}}, 
> with the downsides of a potential lookup-time overhead and the need for doing 
> the compression after all offsets have been determined.
> I have no experience with the codec-parts responsible for creating 
> index-structures. I'm quite willing to take a stab at this, although I 
> probably won't do much about it before January 2019. Should anyone else wish 
> to adopt this JIRA-issue or co-work on it, I'll be happy to share.
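The first bullet's 33/31-bit packing can be sketched with plain bit operations 
(placing the offset in the high bits is an illustrative assumption and the 
class name is hypothetical; only the bit widths come from the description 
above):

```java
public class DisiJumpEntry {
    // One jump-table entry per 65536-document block: the block's file
    // offset in 33 bits and the cumulative set-bit count (index) up to
    // the block in 31 bits, packed into a single long. Field order
    // (offset in the high bits) is an assumption for illustration.
    static long pack(long blockOffset, int index) {
        if (blockOffset < 0 || blockOffset >= (1L << 33) || index < 0) {
            throw new IllegalArgumentException("value out of range");
        }
        return (blockOffset << 31) | index;
    }

    static long blockOffset(long entry) {
        return entry >>> 31;                  // top 33 bits
    }

    static int index(long entry) {
        return (int) (entry & 0x7FFF_FFFFL);  // low 31 bits
    }

    public static void main(String[] args) {
        long entry = pack(123_456_789L, 65_000);
        System.out.println(blockOffset(entry) + " " + index(entry));
    }
}
```

Stored as a plain array of such longs, one per block, a lookup for document N 
reads entry N >>> 16 and jumps straight to the block, which is the point of the 
proposal.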





