[JENKINS] Lucene-Solr-NightlyTests-7.x - Build # 57 - Still unstable

2017-10-06 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-7.x/57/

9 tests failed.
FAILED:  org.apache.lucene.spatial3d.TestGeo3DPoint.testRandomBig

Error Message:
Test abandoned because suite timeout was reached.

Stack Trace:
java.lang.Exception: Test abandoned because suite timeout was reached.
at __randomizedtesting.SeedInfo.seed([75F854A4F550C671]:0)


FAILED:  junit.framework.TestSuite.org.apache.lucene.spatial3d.TestGeo3DPoint

Error Message:
Suite timeout exceeded (>= 720 msec).

Stack Trace:
java.lang.Exception: Suite timeout exceeded (>= 720 msec).
at __randomizedtesting.SeedInfo.seed([75F854A4F550C671]:0)


FAILED:  org.apache.solr.cloud.ChaosMonkeySafeLeaderWithPullReplicasTest.test

Error Message:
The Monkey ran for over 45 seconds and no jetties were stopped - this is worth 
investigating!

Stack Trace:
java.lang.AssertionError: The Monkey ran for over 45 seconds and no jetties 
were stopped - this is worth investigating!
at 
__randomizedtesting.SeedInfo.seed([1187D051961EF9D9:99D3EF8B38E29421]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.apache.solr.cloud.ChaosMonkey.stopTheMonkey(ChaosMonkey.java:587)
at 
org.apache.solr.cloud.ChaosMonkeySafeLeaderWithPullReplicasTest.test(ChaosMonkeySafeLeaderWithPullReplicasTest.java:174)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:993)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:968)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)

[JENKINS] Lucene-Solr-7.x-Linux (32bit/jdk1.8.0_144) - Build # 559 - Unstable!

2017-10-06 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/559/
Java: 32bit/jdk1.8.0_144 -server -XX:+UseSerialGC

1 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.TestSolrCloudWithSecureImpersonation

Error Message:
1 thread leaked from SUITE scope at 
org.apache.solr.cloud.TestSolrCloudWithSecureImpersonation: 1) 
Thread[id=16492, name=jetty-launcher-3606-thread-1-EventThread, 
state=TIMED_WAITING, group=TGRP-TestSolrCloudWithSecureImpersonation] 
at sun.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1037)
 at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1328)
 at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:277)  
   at 
org.apache.curator.CuratorZookeeperClient.internalBlockUntilConnectedOrTimedOut(CuratorZookeeperClient.java:323)
 at org.apache.curator.RetryLoop.callWithRetry(RetryLoop.java:105)  
   at 
org.apache.curator.framework.imps.GetDataBuilderImpl.pathInForeground(GetDataBuilderImpl.java:288)
 at 
org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:279)
 at 
org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:41)
 at 
org.apache.curator.framework.recipes.shared.SharedValue.readValue(SharedValue.java:244)
 at 
org.apache.curator.framework.recipes.shared.SharedValue.access$100(SharedValue.java:44)
 at 
org.apache.curator.framework.recipes.shared.SharedValue$1.process(SharedValue.java:61)
 at 
org.apache.curator.framework.imps.NamespaceWatcher.process(NamespaceWatcher.java:67)
 at 
org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:530)   
  at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:505)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE 
scope at org.apache.solr.cloud.TestSolrCloudWithSecureImpersonation: 
   1) Thread[id=16492, name=jetty-launcher-3606-thread-1-EventThread, 
state=TIMED_WAITING, group=TGRP-TestSolrCloudWithSecureImpersonation]
at sun.misc.Unsafe.park(Native Method)
at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1037)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1328)
at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:277)
at 
org.apache.curator.CuratorZookeeperClient.internalBlockUntilConnectedOrTimedOut(CuratorZookeeperClient.java:323)
at org.apache.curator.RetryLoop.callWithRetry(RetryLoop.java:105)
at 
org.apache.curator.framework.imps.GetDataBuilderImpl.pathInForeground(GetDataBuilderImpl.java:288)
at 
org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:279)
at 
org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:41)
at 
org.apache.curator.framework.recipes.shared.SharedValue.readValue(SharedValue.java:244)
at 
org.apache.curator.framework.recipes.shared.SharedValue.access$100(SharedValue.java:44)
at 
org.apache.curator.framework.recipes.shared.SharedValue$1.process(SharedValue.java:61)
at 
org.apache.curator.framework.imps.NamespaceWatcher.process(NamespaceWatcher.java:67)
at 
org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:530)
at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:505)
at __randomizedtesting.SeedInfo.seed([AF3C106B69AC3C5B]:0)




Build Log:
[...truncated 12708 lines...]
   [junit4] Suite: org.apache.solr.cloud.TestSolrCloudWithSecureImpersonation
   [junit4]   2> Creating dataDir: 
/home/jenkins/workspace/Lucene-Solr-7.x-Linux/solr/build/solr-core/test/J1/temp/solr.cloud.TestSolrCloudWithSecureImpersonation_AF3C106B69AC3C5B-001/init-core-data-001
   [junit4]   2> 958122 WARN  
(SUITE-TestSolrCloudWithSecureImpersonation-seed#[AF3C106B69AC3C5B]-worker) [   
 ] o.a.s.SolrTestCaseJ4 startTrackingSearchers: numOpens=1 numCloses=1
   [junit4]   2> 958122 INFO  
(SUITE-TestSolrCloudWithSecureImpersonation-seed#[AF3C106B69AC3C5B]-worker) [   
 ] o.a.s.SolrTestCaseJ4 Using PointFields (NUMERIC_POINTS_SYSPROP=true) 
w/NUMERIC_DOCVALUES_SYSPROP=true
   [junit4]   2> 958123 INFO  
(SUITE-TestSolrCloudWithSecureImpersonation-seed#[AF3C106B69AC3C5B]-worker) [   
 ] o.a.s.SolrTestCaseJ4 Randomized ssl (false) and clientAuth (true) via: 
@org.apache.solr.util.RandomizeSSL(reason=, ssl=NaN, value=NaN, clientAuth=NaN)
   [junit4]   2> 958123 INFO  

[JENKINS] Lucene-Solr-master-Linux (64bit/jdk-9) - Build # 20621 - Still Failing!

2017-10-06 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/20621/
Java: 64bit/jdk-9 -XX:-UseCompressedOops -XX:+UseSerialGC --illegal-access=deny

1 tests failed.
FAILED:  org.apache.solr.handler.admin.AutoscalingHistoryHandlerTest.testHistory

Error Message:
replica never fully recovered

Stack Trace:
java.lang.AssertionError: replica never fully recovered
at 
__randomizedtesting.SeedInfo.seed([1657712D937E3CD7:7BABD5D02936C3D0]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.handler.admin.AutoscalingHistoryHandlerTest.waitForRecovery(AutoscalingHistoryHandlerTest.java:303)
at 
org.apache.solr.handler.admin.AutoscalingHistoryHandlerTest.testHistory(AutoscalingHistoryHandlerTest.java:255)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:844)




Build Log:
[...truncated 14036 lines...]
   [junit4] Suite: org.apache.solr.handler.admin.AutoscalingHistoryHandlerTest
   [junit4]   2> 

[JENKINS] Lucene-Solr-master-Linux (64bit/jdk1.8.0_144) - Build # 20620 - Still Failing!

2017-10-06 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/20620/
Java: 64bit/jdk1.8.0_144 -XX:+UseCompressedOops -XX:+UseG1GC

1 tests failed.
FAILED:  org.apache.solr.handler.admin.AutoscalingHistoryHandlerTest.testHistory

Error Message:
replica never fully recovered

Stack Trace:
java.lang.AssertionError: replica never fully recovered
at 
__randomizedtesting.SeedInfo.seed([8E102D73904FB4A1:E3EC898E2A074BA6]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.handler.admin.AutoscalingHistoryHandlerTest.waitForRecovery(AutoscalingHistoryHandlerTest.java:303)
at 
org.apache.solr.handler.admin.AutoscalingHistoryHandlerTest.testHistory(AutoscalingHistoryHandlerTest.java:255)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)




Build Log:
[...truncated 13967 lines...]
   [junit4] Suite: org.apache.solr.handler.admin.AutoscalingHistoryHandlerTest
   [junit4]  

[JENKINS] Lucene-Solr-SmokeRelease-7.x - Build # 55 - Still Failing

2017-10-06 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-7.x/55/

No tests ran.

Build Log:
[...truncated 28018 lines...]
prepare-release-no-sign:
[mkdir] Created dir: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/build/smokeTestRelease/dist
 [copy] Copying 476 files to 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/build/smokeTestRelease/dist/lucene
 [copy] Copying 215 files to 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/build/smokeTestRelease/dist/solr
   [smoker] Java 1.8 JAVA_HOME=/home/jenkins/tools/java/latest1.8
   [smoker] NOTE: output encoding is UTF-8
   [smoker] 
   [smoker] Load release URL 
"file:/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/build/smokeTestRelease/dist/"...
   [smoker] 
   [smoker] Test Lucene...
   [smoker]   test basics...
   [smoker]   get KEYS
   [smoker] 0.2 MB in 0.08 sec (2.8 MB/sec)
   [smoker]   check changes HTML...
   [smoker]   download lucene-7.1.0-src.tgz...
   [smoker] 30.9 MB in 0.07 sec (459.4 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download lucene-7.1.0.tgz...
   [smoker] 69.5 MB in 0.06 sec (1129.4 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download lucene-7.1.0.zip...
   [smoker] 79.9 MB in 0.13 sec (625.4 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   unpack lucene-7.1.0.tgz...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] test demo with 1.8...
   [smoker]   got 6221 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] check Lucene's javadoc JAR
   [smoker]   unpack lucene-7.1.0.zip...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] test demo with 1.8...
   [smoker]   got 6221 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] check Lucene's javadoc JAR
   [smoker]   unpack lucene-7.1.0-src.tgz...
   [smoker] make sure no JARs/WARs in src dist...
   [smoker] run "ant validate"
   [smoker] run tests w/ Java 8 and testArgs='-Dtests.slow=false'...
   [smoker] test demo with 1.8...
   [smoker]   got 213 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] generate javadocs w/ Java 8...
   [smoker] 
   [smoker] Crawl/parse...
   [smoker] 
   [smoker] Verify...
   [smoker]   confirm all releases have coverage in TestBackwardsCompatibility
   [smoker] find all past Lucene releases...
   [smoker] run TestBackwardsCompatibility..
   [smoker] success!
   [smoker] 
   [smoker] Test Solr...
   [smoker]   test basics...
   [smoker]   get KEYS
   [smoker] 0.2 MB in 0.01 sec (26.7 MB/sec)
   [smoker]   check changes HTML...
   [smoker]   download solr-7.1.0-src.tgz...
   [smoker] 52.6 MB in 1.09 sec (48.3 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download solr-7.1.0.tgz...
   [smoker] 143.6 MB in 1.86 sec (77.2 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download solr-7.1.0.zip...
   [smoker] 144.6 MB in 3.66 sec (39.5 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   unpack solr-7.1.0.tgz...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] unpack lucene-7.1.0.tgz...
   [smoker]   **WARNING**: skipping check of 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/build/smokeTestRelease/tmp/unpack/solr-7.1.0/contrib/dataimporthandler-extras/lib/javax.mail-1.5.1.jar:
 it has javax.* classes
   [smoker]   **WARNING**: skipping check of 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/build/smokeTestRelease/tmp/unpack/solr-7.1.0/contrib/dataimporthandler-extras/lib/activation-1.1.1.jar:
 it has javax.* classes
   [smoker] copying unpacked distribution for Java 8 ...
   [smoker] test solr example w/ Java 8...
   [smoker]   start Solr instance 
(log=/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/build/smokeTestRelease/tmp/unpack/solr-7.1.0-java8/solr-example.log)...
   [smoker] No process found for Solr node running on port 8983
   [smoker]   Running techproducts example on port 8983 from 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/build/smokeTestRelease/tmp/unpack/solr-7.1.0-java8
   [smoker] Creating Solr home directory 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/build/smokeTestRelease/tmp/unpack/solr-7.1.0-java8/example/techproducts/solr
   [smoker] 
   [smoker] Starting up Solr on port 8983 using command:
   [smoker] "bin/solr" start -p 8983 -s "example/techproducts/solr"
   [smoker] 
   [smoker] Waiting up to 180 seconds to see Solr running on port 8983 [|]  
 [/]   [-]   [\]   [|]   [/]   [-]   
[\]   [|]   [/]   [-]   [\]   [|]   [/]  

[jira] [Commented] (SOLR-11426) TestLazyCores fails too often

2017-10-06 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11426?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16195472#comment-16195472
 ] 

ASF subversion and git services commented on SOLR-11426:


Commit f0a4b2dafe13e2b372e33ce13d552f169187a44e in lucene-solr's branch 
refs/heads/master from [~erickerickson]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=f0a4b2d ]

Revert "SOLR-11426: TestLazyCores fails too often. Adding debugging code MASTER 
ONLY since I can't get it to fail locally"

This reverts commit 37fb60d


> TestLazyCores fails too often
> -
>
> Key: SOLR-11426
> URL: https://issues.apache.org/jira/browse/SOLR-11426
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>
> Rather than re-opening SOLR-10101 I thought I'd start a new issue. I may have 
> to put some code up on Jenkins to test; the last time I tried to get this to 
> fail locally I couldn't.






[jira] [Commented] (SOLR-11426) TestLazyCores fails too often

2017-10-06 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11426?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16195466#comment-16195466
 ] 

Erick Erickson commented on SOLR-11426:
---

Well, rats. I was thinking the issue might be related to SOLR-11035, but 
apparently not. The code I put in the failing method (check10) tries to add 
another document to the core if it doesn't find the 10 it should the first 
time. If it had then come back with 11 documents, I could (possibly) have 
pointed to SOLR-11035 and the like.

But the single doc I added is the only one found, which shows the core itself 
is OK: I can see the new doc added to the core, but the 10 docs added before 
the core was closed aren't there. Have to try something else...

> TestLazyCores fails too often
> -
>
> Key: SOLR-11426
> URL: https://issues.apache.org/jira/browse/SOLR-11426
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>
> Rather than re-opening SOLR-10101 I thought I'd start a new issue. I may have 
> to put some code up on Jenkins to test; the last time I tried to get this to 
> fail locally I couldn't.






[jira] [Resolved] (SOLR-11306) Solr example schemas inaccurate comments on docValues and StrField

2017-10-06 Thread Varun Thacker (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11306?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Thacker resolved SOLR-11306.
--
   Resolution: Fixed
Fix Version/s: master (8.0)
   7.1

Thanks everyone! 

> Solr example schemas inaccurate comments on  docValues and StrField
> ---
>
> Key: SOLR-11306
> URL: https://issues.apache.org/jira/browse/SOLR-11306
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: examples
>Affects Versions: 6.6, 7.0
>Reporter: Tom Burton-West
>Assignee: Varun Thacker
>Priority: Minor
> Fix For: 7.1, master (8.0)
>
> Attachments: SOLR-11306.patch
>
>
> Several of the example managed-schema files have an outdated comment about 
> docValues and StrField.  In Solr 6.6.0 these are under solr-6.6.0/solr/server 
> and the lines where the comment starts for each file are:
> solr/configsets/basic_configs/conf/managed-schema:216:   
> solr/configsets/data_driven_schema_configs/conf/managed-schema:221:
> solr/configsets/sample_techproducts_configs/conf/managed-schema:317
> In the case of 
> Solr-6.6.0/server/solr/configsets/basic_configs/conf/managed-schema, shortly 
> after the comment  are some lines which seem to directly contradict the 
> comment:
> 216 <!-- The StrField type is not analyzed, but indexed/stored verbatim.
>        It supports doc values but in that case the field needs to be
>        single-valued and either required or have a default value. -->
> On line 221 a StrField is declared with docValues that is multiValued:
> 221 <fieldType name="strings" class="solr.StrField" sortMissingLast="true" 
> multiValued="true" docValues="true" />
> Also note that the comments above say that the field must either be required 
> or have a default value, but line 221 appears to satisfy neither condition.
> The JavaDocs indicate that StrField can be multi-valued 
> https://lucene.apache.org/core/6_6_0//core/org/apache/lucene/index/DocValuesType.html






[jira] [Commented] (SOLR-11306) Solr example schemas inaccurate comments on docValues and StrField

2017-10-06 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11306?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16195403#comment-16195403
 ] 

ASF subversion and git services commented on SOLR-11306:


Commit e30171397e54ad7214a8ff743871c97d55775a7f in lucene-solr's branch 
refs/heads/master from [~varunthacker]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=e301713 ]

SOLR-11306: Fix inaccurate comments on docValues and StrField in the example 
schemas


> Solr example schemas inaccurate comments on  docValues and StrField
> ---
>
> Key: SOLR-11306
> URL: https://issues.apache.org/jira/browse/SOLR-11306
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: examples
>Affects Versions: 6.6, 7.0
>Reporter: Tom Burton-West
>Priority: Minor
> Attachments: SOLR-11306.patch
>
>
> Several of the example managed-schema files have an outdated comment about 
> docValues and StrField.  In Solr 6.6.0 these are under solr-6.6.0/solr/server 
> and the lines where the comment starts for each file are:
> solr/configsets/basic_configs/conf/managed-schema:216:   
> solr/configsets/data_driven_schema_configs/conf/managed-schema:221:
> solr/configsets/sample_techproducts_configs/conf/managed-schema:317
> In the case of 
> Solr-6.6.0/server/solr/configsets/basic_configs/conf/managed-schema, shortly 
> after the comment  are some lines which seem to directly contradict the 
> comment:
> 216 <!-- The StrField type is not analyzed, but indexed/stored verbatim.
>        It supports doc values but in that case the field needs to be
>        single-valued and either required or have a default value. -->
> On line 221 a StrField is declared with docValues that is multiValued:
> 221 <fieldType name="strings" class="solr.StrField" sortMissingLast="true" 
> multiValued="true" docValues="true" />
> Also note that the comments above say that the field must either be required 
> or have a default value, but line 221 appears to satisfy neither condition.
> The JavaDocs indicate that StrField can be multi-valued 
> https://lucene.apache.org/core/6_6_0//core/org/apache/lucene/index/DocValuesType.html






[jira] [Assigned] (SOLR-11306) Solr example schemas inaccurate comments on docValues and StrField

2017-10-06 Thread Varun Thacker (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11306?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Thacker reassigned SOLR-11306:


Assignee: Varun Thacker

> Solr example schemas inaccurate comments on  docValues and StrField
> ---
>
> Key: SOLR-11306
> URL: https://issues.apache.org/jira/browse/SOLR-11306
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: examples
>Affects Versions: 6.6, 7.0
>Reporter: Tom Burton-West
>Assignee: Varun Thacker
>Priority: Minor
> Attachments: SOLR-11306.patch
>
>
> Several of the example managed-schema files have an outdated comment about 
> docValues and StrField.  In Solr 6.6.0 these are under solr-6.6.0/solr/server 
> and the lines where the comment starts for each file are:
> solr/configsets/basic_configs/conf/managed-schema:216:   
> solr/configsets/data_driven_schema_configs/conf/managed-schema:221:
> solr/configsets/sample_techproducts_configs/conf/managed-schema:317
> In the case of 
> Solr-6.6.0/server/solr/configsets/basic_configs/conf/managed-schema, shortly 
> after the comment  are some lines which seem to directly contradict the 
> comment:
> 216 <!-- The StrField type is not analyzed, but indexed/stored verbatim.
>        It supports doc values but in that case the field needs to be
>        single-valued and either required or have a default value. -->
> On line 221 a StrField is declared with docValues that is multiValued:
> 221 <fieldType name="strings" class="solr.StrField" sortMissingLast="true" 
> multiValued="true" docValues="true" />
> Also note that the comments above say that the field must either be required 
> or have a default value, but line 221 appears to satisfy neither condition.
> The JavaDocs indicate that StrField can be multi-valued 
> https://lucene.apache.org/core/6_6_0//core/org/apache/lucene/index/DocValuesType.html






[jira] [Commented] (SOLR-11306) Solr example schemas inaccurate comments on docValues and StrField

2017-10-06 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11306?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16195404#comment-16195404
 ] 

ASF subversion and git services commented on SOLR-11306:


Commit 6bceced607a6f0cd1d93f54361b39f23a5b94e7f in lucene-solr's branch 
refs/heads/branch_7x from [~varunthacker]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=6bceced ]

SOLR-11306: Fix inaccurate comments on docValues and StrField in the example 
schemas


> Solr example schemas inaccurate comments on  docValues and StrField
> ---
>
> Key: SOLR-11306
> URL: https://issues.apache.org/jira/browse/SOLR-11306
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: examples
>Affects Versions: 6.6, 7.0
>Reporter: Tom Burton-West
>Priority: Minor
> Attachments: SOLR-11306.patch
>
>
> Several of the example managed-schema files have an outdated comment about 
> docValues and StrField.  In Solr 6.6.0 these are under solr-6.6.0/solr/server 
> and the lines where the comment starts for each file are:
> solr/configsets/basic_configs/conf/managed-schema:216:   
> solr/configsets/data_driven_schema_configs/conf/managed-schema:221:
> solr/configsets/sample_techproducts_configs/conf/managed-schema:317
> In the case of 
> Solr-6.6.0/server/solr/configsets/basic_configs/conf/managed-schema, shortly 
> after the comment  are some lines which seem to directly contradict the 
> comment:
> 216 <!-- The StrField type is not analyzed, but indexed/stored verbatim.
>        It supports doc values but in that case the field needs to be
>        single-valued and either required or have a default value. -->
> On line 221 a StrField is declared with docValues that is multiValued:
> 221 <fieldType name="strings" class="solr.StrField" sortMissingLast="true" 
> multiValued="true" docValues="true" />
> Also note that the comments above say that the field must either be required 
> or have a default value, but line 221 appears to satisfy neither condition.
> The JavaDocs indicate that StrField can be multi-valued 
> https://lucene.apache.org/core/6_6_0//core/org/apache/lucene/index/DocValuesType.html






[jira] [Comment Edited] (SOLR-6205) Make SolrCloud Data-center, rack or zone aware

2017-10-06 Thread jefferyyuan (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16191698#comment-16191698
 ] 

jefferyyuan edited comment on SOLR-6205 at 10/6/17 9:56 PM:


It seems this functionality (at least in part) is already available in Solr as 
Rule-based Replica Placement:
http://lucene.apache.org/solr/guide/7_0/rule-based-replica-placement.html
https://issues.apache.org/jira/browse/SOLR-6220



was (Author: yuanyun.cn):
Making Solr rack-aware can help prevent data loss and improve query 
performance.
Elasticsearch already supports it:
https://www.elastic.co/guide/en/elasticsearch/reference/5.4/allocation-awareness.html

Many other projects support this as well: Hadoop, Cassandra, Kafka, etc.

> Make SolrCloud Data-center, rack or zone aware
> --
>
> Key: SOLR-6205
> URL: https://issues.apache.org/jira/browse/SOLR-6205
> Project: Solr
>  Issue Type: New Feature
>  Components: SolrCloud
>Affects Versions: 4.8.1
>Reporter: Arcadius Ahouansou
>Assignee: Noble Paul
>
> Use case:
> Let's say we have SolrCloud deployed across 2 Datacenters, racks or zones A 
> and B
> There is a need to have a SolrCloud deployment that will make it possible to 
> have a working system even if one of the Datacenter/rack/zone A or B is lost.
> - This has been discussed on the mailing list at
> http://lucene.472066.n3.nabble.com/SolrCloud-multiple-data-center-support-td4115097.html
> and there are many workarounds that require adding more moving parts to the 
> system.
> - On the above thread, Daniel Collins mentioned  
> https://issues.apache.org/jira/browse/ZOOKEEPER-107 
>  which could help solve this issue.
> - Note that this is a very important feature that is overlooked most of the 
> time.
> - Note that this feature is available in ElasticSearch.
> See 
> http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/modules-cluster.html#allocation-awareness
> and
> http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/modules-cluster.html#forced-awareness






[jira] [Created] (SOLR-11445) Overseer.processQueueItem().... zkStateWriter.enqueueUpdate might ideally have a try{}catch{} around it

2017-10-06 Thread Greg Harris (JIRA)
Greg Harris created SOLR-11445:
--

 Summary: Overseer.processQueueItem()  
zkStateWriter.enqueueUpdate might ideally have a try{}catch{} around it
 Key: SOLR-11445
 URL: https://issues.apache.org/jira/browse/SOLR-11445
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
Affects Versions: 7.0, 6.6.1, master (8.0)
Reporter: Greg Harris



So we had the following stack trace with a customer:

2017-10-04 11:25:30.339 ERROR () [ ] o.a.s.c.Overseer Exception in Overseer 
main queue loop
org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = NoNode 
for /collections//state.json
at org.apache.zookeeper.KeeperException.create(KeeperException.java:111)
at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
at org.apache.zookeeper.ZooKeeper.create(ZooKeeper.java:783)
at 
org.apache.solr.common.cloud.SolrZkClient$9.execute(SolrZkClient.java:391)
at 
org.apache.solr.common.cloud.SolrZkClient$9.execute(SolrZkClient.java:388)
at 
org.apache.solr.common.cloud.ZkCmdExecutor.retryOperation(ZkCmdExecutor.java:60)
at org.apache.solr.common.cloud.SolrZkClient.create(SolrZkClient.java:388)
at 
org.apache.solr.cloud.overseer.ZkStateWriter.writePendingUpdates(ZkStateWriter.java:235)
at 
org.apache.solr.cloud.overseer.ZkStateWriter.enqueueUpdate(ZkStateWriter.java:152)
at 
org.apache.solr.cloud.Overseer$ClusterStateUpdater.processQueueItem(Overseer.java:271)
at org.apache.solr.cloud.Overseer$ClusterStateUpdater.run(Overseer.java:199)
at java.lang.Thread.run(Thread.java:748)

I want to highlight: 
  at 
org.apache.solr.cloud.overseer.ZkStateWriter.enqueueUpdate(ZkStateWriter.java:152)
at 
org.apache.solr.cloud.Overseer$ClusterStateUpdater.processQueueItem(Overseer.java:271)

This ends up coming from Overseer:
while (data != null) {
  final ZkNodeProps message = ZkNodeProps.load(data);
  log.debug("processMessage: workQueueSize: {}, message = {}",
      workQueue.getStats().getQueueLength(), message);
  // force flush to ZK after each message because there is no fallback if workQueue items
  // are removed from workQueue but fail to be written to ZK
  *clusterState = processQueueItem(message, clusterState, zkStateWriter, false, null);
  workQueue.poll(); // poll-ing removes the element we got by peek-ing*
  data = workQueue.peek();
}

Note: processQueueItem() runs before the poll, so when it throws, the message 
that failed to process stays at the head of the work queue and the Overseer 
keeps failing on it. This left a large cluster unable to come up on its own 
until the problem node was deleted.
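
For illustration, here is a minimal, self-contained sketch of the guarded-loop 
pattern the summary asks for. It is plain Java, not the actual Overseer code or 
a proposed patch; the class and the simplified process() stand-in for 
processQueueItem() are made up for this example. The point is that if 
processing the head item throws, the item is still removed, so a single bad 
message cannot wedge the whole loop.

{code}
import java.util.ArrayDeque;
import java.util.List;
import java.util.Queue;

public class PoisonTolerantQueueLoop {

    // Stand-in for processQueueItem(): fails on the "bad" item, much like the
    // NoNodeException thrown from zkStateWriter.enqueueUpdate() above.
    static void process(String item) {
        if (item.isEmpty()) {
            throw new IllegalStateException("cannot process item");
        }
        System.out.println("processed " + item);
    }

    public static void main(String[] args) {
        Queue<String> workQueue = new ArrayDeque<>(List.of("a", "", "b"));
        String data = workQueue.peek();
        while (data != null) {
            try {
                process(data);
            } catch (Exception e) {
                // Hypothetical policy: log and drop the poisoned item instead of
                // leaving it at the head of the queue to be retried forever.
                System.err.println("skipping bad item: " + e.getMessage());
            }
            workQueue.poll(); // remove the peek-ed element even if processing failed
            data = workQueue.peek();
        }
    }
}
{code}

Whether the real fix should drop the item, park it somewhere for inspection, or 
keep retrying is exactly the policy question that adding the try/catch raises.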








[JENKINS] Lucene-Solr-master-Linux (64bit/jdk-9) - Build # 20619 - Still Failing!

2017-10-06 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/20619/
Java: 64bit/jdk-9 -XX:+UseCompressedOops -XX:+UseSerialGC --illegal-access=deny

4 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.core.TestLazyCores

Error Message:
1 thread leaked from SUITE scope at org.apache.solr.core.TestLazyCores: 1) 
Thread[id=5802, name=searcherExecutor-2805-thread-1, state=WAITING, 
group=TGRP-TestLazyCores] at 
java.base@9/jdk.internal.misc.Unsafe.park(Native Method) at 
java.base@9/java.util.concurrent.locks.LockSupport.park(LockSupport.java:194)   
  at 
java.base@9/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2062)
 at 
java.base@9/java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:435)
 at 
java.base@9/java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1092)
 at 
java.base@9/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1152)
 at 
java.base@9/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:641)
 at java.base@9/java.lang.Thread.run(Thread.java:844)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE 
scope at org.apache.solr.core.TestLazyCores: 
   1) Thread[id=5802, name=searcherExecutor-2805-thread-1, state=WAITING, 
group=TGRP-TestLazyCores]
at java.base@9/jdk.internal.misc.Unsafe.park(Native Method)
at 
java.base@9/java.util.concurrent.locks.LockSupport.park(LockSupport.java:194)
at 
java.base@9/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2062)
at 
java.base@9/java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:435)
at 
java.base@9/java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1092)
at 
java.base@9/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1152)
at 
java.base@9/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:641)
at java.base@9/java.lang.Thread.run(Thread.java:844)
at __randomizedtesting.SeedInfo.seed([BFA7602D472C9803]:0)


FAILED:  junit.framework.TestSuite.org.apache.solr.core.TestLazyCores

Error Message:
There are still zombie threads that couldn't be terminated:1) 
Thread[id=5802, name=searcherExecutor-2805-thread-1, state=WAITING, 
group=TGRP-TestLazyCores] at 
java.base@9/jdk.internal.misc.Unsafe.park(Native Method) at 
java.base@9/java.util.concurrent.locks.LockSupport.park(LockSupport.java:194)   
  at 
java.base@9/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2062)
 at 
java.base@9/java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:435)
 at 
java.base@9/java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1092)
 at 
java.base@9/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1152)
 at 
java.base@9/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:641)
 at java.base@9/java.lang.Thread.run(Thread.java:844)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: There are still zombie 
threads that couldn't be terminated:
   1) Thread[id=5802, name=searcherExecutor-2805-thread-1, state=WAITING, 
group=TGRP-TestLazyCores]
at java.base@9/jdk.internal.misc.Unsafe.park(Native Method)
at 
java.base@9/java.util.concurrent.locks.LockSupport.park(LockSupport.java:194)
at 
java.base@9/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2062)
at 
java.base@9/java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:435)
at 
java.base@9/java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1092)
at 
java.base@9/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1152)
at 
java.base@9/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:641)
at java.base@9/java.lang.Thread.run(Thread.java:844)
at __randomizedtesting.SeedInfo.seed([BFA7602D472C9803]:0)


FAILED:  
org.apache.solr.cloud.TestTlogReplica.testOutOfOrderDBQWithInPlaceUpdates

Error Message:
Can not find doc 1 in https://127.0.0.1:34245/solr

Stack Trace:
java.lang.AssertionError: Can not find doc 1 in https://127.0.0.1:34245/solr
at 
__randomizedtesting.SeedInfo.seed([BFA7602D472C9803:396698C0187D4EE3]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertNotNull(Assert.java:526)
at 
org.apache.solr.cloud.TestTlogReplica.checkRTG(TestTlogReplica.java:861)
at 

[jira] [Reopened] (LUCENE-7983) Make IndexReaderWarmer a functional interface

2017-10-06 Thread Dawid Weiss (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7983?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Weiss reopened LUCENE-7983:
-

> Make IndexReaderWarmer a functional interface
> -
>
> Key: LUCENE-7983
> URL: https://issues.apache.org/jira/browse/LUCENE-7983
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Dawid Weiss
>Assignee: Dawid Weiss
>Priority: Trivial
> Fix For: 7.1
>
>
> {{IndexReaderWarmer}} has a single method but is an abstract class with a 
> confusing protected constructor. Can we make it a proper functional interface 
> instead? This is marked as {{lucene.experimental}} API and while it would be 
> a binary incompatibility, everything remains the same at the source level, 
> even for existing implementations.
> {code}
> public static abstract class IndexReaderWarmer {
>   /** Sole constructor. (For invocation by subclass
>    *  constructors, typically implicit.) */
>   protected IndexReaderWarmer() {
>   }
>
>   /** Invoked on the {@link LeafReader} for the newly
>    *  merged segment, before that segment is made visible
>    *  to near-real-time readers. */
>   public abstract void warm(LeafReader reader) throws IOException;
> }
> {code}
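
For reference, a minimal sketch of what the functional-interface version could 
look like (the shape is inferred from the snippet above; the exact final form 
is whatever the committed change ends up being):

{code}
import java.io.IOException;
import org.apache.lucene.index.LeafReader;

@FunctionalInterface
public interface IndexReaderWarmer {

  /** Invoked on the {@link LeafReader} for a newly merged segment,
   *  before that segment is made visible to near-real-time readers. */
  void warm(LeafReader reader) throws IOException;
}
{code}

Existing implementations stay source compatible, and new code could then pass a 
lambda, e.g. {{iwc.setMergedSegmentWarmer(reader -> warmCaches(reader))}}, 
assuming an {{IndexWriterConfig}} named {{iwc}} and a caller-side 
{{warmCaches}} helper.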






[jira] [Commented] (LUCENE-7983) Make IndexReaderWarmer a functional interface

2017-10-06 Thread Dawid Weiss (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7983?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16195202#comment-16195202
 ] 

Dawid Weiss commented on LUCENE-7983:
-

My bad, Steve. Thanks for pointing this out. I'll cherry pick tomorrow 
(terribly late now, I'd probably screw up something).

> Make IndexReaderWarmer a functional interface
> -
>
> Key: LUCENE-7983
> URL: https://issues.apache.org/jira/browse/LUCENE-7983
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Dawid Weiss
>Assignee: Dawid Weiss
>Priority: Trivial
> Fix For: 7.1
>
>
> {{IndexReaderWarmer}} has a single method but is an abstract class with a 
> confusing protected constructor. Can we make it a proper functional interface 
> instead? This is marked as {{lucene.experimental}} API and while it would be 
> a binary incompatibility, everything remains the same at the source level, 
> even for existing implementations.
> {code}
> public static abstract class IndexReaderWarmer {
>   /** Sole constructor. (For invocation by subclass
>    *  constructors, typically implicit.) */
>   protected IndexReaderWarmer() {
>   }
>
>   /** Invoked on the {@link LeafReader} for the newly
>    *  merged segment, before that segment is made visible
>    *  to near-real-time readers. */
>   public abstract void warm(LeafReader reader) throws IOException;
> }
> {code}






[ANNOUNCE] Apache Lucene 7.0.1 released

2017-10-06 Thread Steve Rowe
6 October 2017, Apache Lucene™ 7.0.1 available 

The Lucene PMC is pleased to announce the release of Apache Lucene 7.0.1 

Apache Lucene is a high-performance, full-featured text search engine 
library written entirely in Java. It is a technology suitable for nearly 
any application that requires full-text search, especially cross-platform. 

This release contains 1 bug fix since the 7.0.0 release: 

* ConjunctionScorer.getChildren was failing to return all child scorers 

The release is available for immediate download at: 

http://www.apache.org/dyn/closer.lua/lucene/java/7.0.1 

Please read CHANGES.txt for a full list of new features and changes: 

https://lucene.apache.org/core/7_0_1/changes/Changes.html 

Please report any feedback to the mailing lists 
(http://lucene.apache.org/core/discussion.html) 

Note: The Apache Software Foundation uses an extensive mirroring network 
for distributing releases. It is possible that the mirror you are using 
may not have replicated the release yet. If that is the case, please 
try another mirror. This also goes for Maven access.



[ANNOUNCE] Apache Solr 7.0.1 released

2017-10-06 Thread Steve Rowe
6 October 2017, Apache Solr™ 7.0.1 available 

Solr is the popular, blazing fast, open source NoSQL search platform from the 
Apache Lucene project. Its major features include powerful full-text search, 
hit highlighting, faceted search and analytics, rich document parsing, 
geospatial search, extensive REST APIs as well as parallel SQL. Solr is 
enterprise grade, secure and highly scalable, providing fault tolerant 
distributed search and indexing, and powers the search and navigation 
features of many of the world's largest internet sites. 

This release includes 2 bug fixes since the 7.0.0 release: 

* Solr 7.0 cannot read indexes from 6.x versions. 

* Message "Lock held by this virtual machine" during startup. 
Solr is trying to start some cores twice. 

Furthermore, this release includes Apache Lucene 7.0.1 which includes 1 bug 
fix since the 7.0.0 release. 

The release is available for immediate download at: 

http://www.apache.org/dyn/closer.lua/lucene/solr/7.0.1 

Please read CHANGES.txt for a detailed list of changes: 

https://lucene.apache.org/solr/7_0_1/changes/Changes.html 

Please report any feedback to the mailing lists 
(http://lucene.apache.org/solr/discussion.html) 

Note: The Apache Software Foundation uses an extensive mirroring 
network for distributing releases. It is possible that the mirror you 
are using may not have replicated the release yet. If that is the 
case, please try another mirror. This also goes for Maven access.



[jira] [Commented] (SOLR-11425) SolrClientBuilder does not allow infinite timeout (value 0)

2017-10-06 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11425?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16195039#comment-16195039
 ] 

Shawn Heisey commented on SOLR-11425:
-

I can understand the instinct that led the design to exclude a value of zero, 
and even though infinite timeouts can be problematic, I think the patch for 
this issue is a good change -- it's not our job to enforce a finite timeout.

I do wonder if perhaps the javadoc should have a note about zero being an 
infinite timeout, which could result in client operations that never return.
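
A rough sketch of what such a relaxed check plus javadoc note might look like 
(the class, field, and default below are simplified assumptions for 
illustration, not the actual SolrClientBuilder code or the attached patch):

{code}
// Simplified, illustrative builder; the real change would live in SolrClientBuilder.
public class TimeoutBuilderSketch {
  private int connectionTimeoutMillis = 15000; // some finite default, for illustration

  /**
   * Sets the connection timeout in milliseconds.
   * A value of 0 means an infinite timeout, so requests may block indefinitely.
   */
  public TimeoutBuilderSketch withConnectionTimeout(int connectionTimeoutMillis) {
    if (connectionTimeoutMillis < 0) {
      throw new IllegalArgumentException("connectionTimeoutMillis must not be negative");
    }
    this.connectionTimeoutMillis = connectionTimeoutMillis;
    return this;
  }

  public int connectionTimeoutMillis() {
    return connectionTimeoutMillis;
  }
}
{code}

A value of 0 then flows through to HttpClient's RequestConfig, which, per its 
javadoc, already interprets a zero timeout as infinite.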


> SolrClientBuilder does not allow infinite timeout (value 0)
> ---
>
> Key: SOLR-11425
> URL: https://issues.apache.org/jira/browse/SOLR-11425
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrJ
>Affects Versions: 7.0
>Reporter: Peter Szantai-Kis
>Assignee: Mark Miller
> Fix For: 7.1, master (8.0)
>
> Attachments: SOLR-11425.patch, SOLR-11425.patch
>
>
> [org.apache.solr.client.solrj.impl.SolrClientBuilder#withConnectionTimeout|https://github.com/apache/lucene-solr/blob/master/solr/solrj/src/java/org/apache/solr/client/solrj/impl/SolrClientBuilder.java#L53]
>  does not allow setting a value of 0, which means an infinite timeout, even though 
> [RequestConfig|https://hc.apache.org/httpcomponents-client-ga/httpclient/apidocs/org/apache/http/client/config/RequestConfig.html#getConnectTimeout()]
>  (where the value will actually be used) does allow it.






[JENKINS] Lucene-Solr-master-Linux (32bit/jdk1.8.0_144) - Build # 20618 - Still Failing!

2017-10-06 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/20618/
Java: 32bit/jdk1.8.0_144 -client -XX:+UseParallelGC

1 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.TestSolrCloudWithSecureImpersonation

Error Message:
2 threads leaked from SUITE scope at 
org.apache.solr.cloud.TestSolrCloudWithSecureImpersonation: 1) 
Thread[id=18936, name=jetty-launcher-3143-thread-2-EventThread, 
state=TIMED_WAITING, group=TGRP-TestSolrCloudWithSecureImpersonation] 
at sun.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1037)
 at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1328)
 at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:277)  
   at 
org.apache.curator.CuratorZookeeperClient.internalBlockUntilConnectedOrTimedOut(CuratorZookeeperClient.java:323)
 at org.apache.curator.RetryLoop.callWithRetry(RetryLoop.java:105)  
   at 
org.apache.curator.framework.imps.GetDataBuilderImpl.pathInForeground(GetDataBuilderImpl.java:288)
 at 
org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:279)
 at 
org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:41)
 at 
org.apache.curator.framework.recipes.shared.SharedValue.readValue(SharedValue.java:244)
 at 
org.apache.curator.framework.recipes.shared.SharedValue.access$100(SharedValue.java:44)
 at 
org.apache.curator.framework.recipes.shared.SharedValue$1.process(SharedValue.java:61)
 at 
org.apache.curator.framework.imps.NamespaceWatcher.process(NamespaceWatcher.java:67)
 at 
org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:530)   
  at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:505)   
 2) Thread[id=18948, name=jetty-launcher-3143-thread-1-EventThread, 
state=TIMED_WAITING, group=TGRP-TestSolrCloudWithSecureImpersonation] 
at sun.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1037)
 at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1328)
 at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:277)  
   at 
org.apache.curator.CuratorZookeeperClient.internalBlockUntilConnectedOrTimedOut(CuratorZookeeperClient.java:323)
 at org.apache.curator.RetryLoop.callWithRetry(RetryLoop.java:105)  
   at 
org.apache.curator.framework.imps.GetDataBuilderImpl.pathInForeground(GetDataBuilderImpl.java:288)
 at 
org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:279)
 at 
org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:41)
 at 
org.apache.curator.framework.recipes.shared.SharedValue.readValue(SharedValue.java:244)
 at 
org.apache.curator.framework.recipes.shared.SharedValue.access$100(SharedValue.java:44)
 at 
org.apache.curator.framework.recipes.shared.SharedValue$1.process(SharedValue.java:61)
 at 
org.apache.curator.framework.imps.NamespaceWatcher.process(NamespaceWatcher.java:67)
 at 
org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:530)   
  at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:505)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 2 threads leaked from SUITE 
scope at org.apache.solr.cloud.TestSolrCloudWithSecureImpersonation: 
   1) Thread[id=18936, name=jetty-launcher-3143-thread-2-EventThread, 
state=TIMED_WAITING, group=TGRP-TestSolrCloudWithSecureImpersonation]
at sun.misc.Unsafe.park(Native Method)
at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1037)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1328)
at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:277)
at 
org.apache.curator.CuratorZookeeperClient.internalBlockUntilConnectedOrTimedOut(CuratorZookeeperClient.java:323)
at org.apache.curator.RetryLoop.callWithRetry(RetryLoop.java:105)
at 
org.apache.curator.framework.imps.GetDataBuilderImpl.pathInForeground(GetDataBuilderImpl.java:288)
at 
org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:279)
at 

[JENKINS] Lucene-Solr-NightlyTests-7.0 - Build # 49 - Still Unstable

2017-10-06 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-7.0/49/

10 tests failed.
FAILED:  org.apache.lucene.index.TestIndexSorting.testRandom3

Error Message:
Test abandoned because suite timeout was reached.

Stack Trace:
java.lang.Exception: Test abandoned because suite timeout was reached.
at __randomizedtesting.SeedInfo.seed([D74B63DC9702FD62]:0)


FAILED:  junit.framework.TestSuite.org.apache.lucene.index.TestIndexSorting

Error Message:
Suite timeout exceeded (>= 720 msec).

Stack Trace:
java.lang.Exception: Suite timeout exceeded (>= 720 msec).
at __randomizedtesting.SeedInfo.seed([D74B63DC9702FD62]:0)


FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.ChaosMonkeySafeLeaderTest

Error Message:
5 threads leaked from SUITE scope at 
org.apache.solr.cloud.ChaosMonkeySafeLeaderTest: 1) Thread[id=19428, 
name=zkCallback-2169-thread-3, state=TIMED_WAITING, 
group=TGRP-ChaosMonkeySafeLeaderTest] at sun.misc.Unsafe.park(Native 
Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
at 
java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460)
 at 
java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362)
 at 
java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941) 
at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
at java.lang.Thread.run(Thread.java:748)2) Thread[id=19318, 
name=StoppableIndexingThread-EventThread, state=WAITING, 
group=TGRP-ChaosMonkeySafeLeaderTest] at sun.misc.Unsafe.park(Native 
Method) at 
java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
 at 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) 
at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:501)
3) Thread[id=19427, name=zkCallback-2169-thread-2, state=TIMED_WAITING, 
group=TGRP-ChaosMonkeySafeLeaderTest] at sun.misc.Unsafe.park(Native 
Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
at 
java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460)
 at 
java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362)
 at 
java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941) 
at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
at java.lang.Thread.run(Thread.java:748)4) Thread[id=19321, 
name=zkCallback-2169-thread-1, state=TIMED_WAITING, 
group=TGRP-ChaosMonkeySafeLeaderTest] at sun.misc.Unsafe.park(Native 
Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
at 
java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460)
 at 
java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362)
 at 
java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941) 
at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
at java.lang.Thread.run(Thread.java:748)5) Thread[id=19317, 
name=StoppableIndexingThread-SendThread(127.0.0.1:41078), state=TIMED_WAITING, 
group=TGRP-ChaosMonkeySafeLeaderTest] at java.lang.Thread.sleep(Native 
Method) at 
org.apache.zookeeper.client.StaticHostProvider.next(StaticHostProvider.java:101)
 at 
org.apache.zookeeper.ClientCnxn$SendThread.startConnect(ClientCnxn.java:997)
 at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1060)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 5 threads leaked from SUITE 
scope at org.apache.solr.cloud.ChaosMonkeySafeLeaderTest: 
   1) Thread[id=19428, name=zkCallback-2169-thread-3, state=TIMED_WAITING, 
group=TGRP-ChaosMonkeySafeLeaderTest]
at sun.misc.Unsafe.park(Native Method)
at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
at 
java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460)
at 

[jira] [Commented] (SOLR-7733) remove/rename "optimize" references in the UI.

2017-10-06 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7733?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16194969#comment-16194969
 ] 

David Smiley commented on SOLR-7733:


I vote for getting rid of this button altogether, though I don't object to it 
staying with the proposed warnings.  I've seen the optimize button cause failures 
in a sharded SolrCloud environment, though I forget the details.

> remove/rename "optimize" references in the UI.
> --
>
> Key: SOLR-7733
> URL: https://issues.apache.org/jira/browse/SOLR-7733
> Project: Solr
>  Issue Type: Improvement
>  Components: Admin UI
>Affects Versions: 5.3, 6.0
>Reporter: Erick Erickson
>Assignee: Upayavira
>Priority: Minor
> Attachments: SOLR-7733.patch
>
>
> Since optimizing indexes is kind of a special circumstance thing, what do we 
> think about removing (or renaming) optimize-related stuff on the core admin 
> and core overview pages? The "optimize" button is already gone from the core 
> admin screen (was this intentional?).
> My personal feeling is that we should remove this entirely as it's too easy 
> to think "Of course I want my index optimized" and "look, this screen says my 
> index isn't optimized, that must mean I should optimize it".
> The core admin screen and the core overview page both have an "optimized" 
> checkmark; I propose just removing it from the "overview" page and, on the 
> "core admin" page, changing it to "Segment Count #". NOTE: the "overview" page 
> already has a "Segment Count" entry.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10265) Overseer can become the bottleneck in very large clusters

2017-10-06 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10265?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16194970#comment-16194970
 ] 

Shawn Heisey commented on SOLR-10265:
-

Guava has something really cool that I think we could use for a multi-threaded 
overseer, and it is available in the 14.0.1 version that Solr currently 
includes:

https://google.github.io/guava/releases/14.0/api/docs/com/google/common/util/concurrent/Striped.html

If we use the name of the collection as the stripe input, then we will have 
individual locks for each collection, so we can guarantee order of operations.  
If there are any Overseer operations that are cluster-wide and not applicable 
to one collection, we probably need to come up with a special name for those.

NB: It's possible that different collection names might hash to the same stripe 
and therefore block each other, but that would hopefully be a rare occurrence.

Probably what should happen is that each thread grabs an operation from the 
queue and attempts to acquire the lock for the collection named in that 
operation.  If a ton of operations all apply to one collection, it would still 
behave like the current single-threaded implementation.  One thing I do not know 
is whether the Java Lock implementation (which is what the Striped class hands 
out) guarantees the order in which waiting threads are granted the lock.  The 
hope, of course, is that locks are granted in the order they are requested when 
several threads are all contending for the same lock.
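
To make the idea concrete, here is a minimal, hypothetical sketch (not actual 
Overseer code) of striping on the collection name; the CLUSTER_WIDE sentinel and 
the process() method are placeholders of my own, not existing Solr APIs:

{code}
import java.util.concurrent.locks.Lock;

import com.google.common.util.concurrent.Striped;

public class StripedOverseerSketch {
  /** Sentinel key for operations that are not tied to a single collection. */
  private static final String CLUSTER_WIDE = "__cluster__";

  /** 256 lock stripes shared across all collection names. */
  private final Striped<Lock> locks = Striped.lock(256);

  public void process(String collection, Runnable operation) {
    String key = (collection == null) ? CLUSTER_WIDE : collection;
    Lock lock = locks.get(key);   // the same collection name always maps to the same Lock
    lock.lock();
    try {
      operation.run();            // serialized per collection
    } finally {
      lock.unlock();
    }
  }
}
{code}

Note that this sketch does not answer the fairness question above: the locks 
returned by Striped.lock() are reentrant and, as far as I know, are not created 
in fair mode, so grant order under contention is not guaranteed.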

> Overseer can become the bottleneck in very large clusters
> -
>
> Key: SOLR-10265
> URL: https://issues.apache.org/jira/browse/SOLR-10265
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Varun Thacker
>
> Let's say we have a large cluster. Some numbers:
> - To ingest the data at the volume we want, I need roughly a 600-shard 
> collection.
> - Index into the collection for 1 hour and then create a new collection 
> - For a 30-day retention window with these numbers we would end up with 
> ~400k cores in the cluster
> - Just a rolling restart of this cluster can take hours because the overseer 
> queue gets backed up. If a few nodes lose connectivity to ZooKeeper we can 
> also end up with lots of messages in the Overseer queue
> With some tests here are the two high level problems we have identified:
> 1> How fast can the overseer process operations:
> The rate at which the overseer processes events is too slow at this scale. 
> I ran {{OverseerTest#testPerformance}} which creates 10 collections ( 1 shard 
> 1 replica ) and generates 20k state change events. The test took 119 seconds 
> to run on my machine, which means ~170 events a second. Let's say a server can 
> process 5x my machine, so 1k events a second. 
> Total events generated by a 400k-replica cluster = 400k * 4 (state changes 
> until a replica becomes active) = 1.6M events; at 1k events a second that is 
> ~1,600 seconds (roughly 27 minutes).
> The second observation was that the rate at which the overseer can process 
> events slows down as the number of items in the queue gets larger.
> I ran the same {{OverseerTest#testPerformance}} but changed the number of 
> events generated to 2000 instead. The test took only 5 seconds to run. So it 
> was a lot faster than the test run which generated 20k events
> 2> State changes overwhelming ZK:
> For every state change Solr is writing out a big state.json to zookeeper. 
> This can lead to the zookeeper transaction logs going out of control even 
> with auto purging etc. set.
> I haven't debugged why the transaction logs ran into terabytes without taking 
> snapshots into account, but this was my assumption based on the other problems 
> we observed.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-7988) RandomGeoShapeRelationshipTest.testRandomContains() failure

2017-10-06 Thread Steve Rowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7988?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Rowe updated LUCENE-7988:
---
Component/s: modules/spatial3d

> RandomGeoShapeRelationshipTest.testRandomContains() failure
> ---
>
> Key: LUCENE-7988
> URL: https://issues.apache.org/jira/browse/LUCENE-7988
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/spatial3d
>Reporter: Steve Rowe
>
> Reproduces for me.  From 
> [https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/555]:
> {noformat}
>[junit4] Suite: 
> org.apache.lucene.spatial3d.geom.RandomGeoShapeRelationshipTest
>[junit4]   2> NOTE: reproduce with: ant test  
> -Dtestcase=RandomGeoShapeRelationshipTest -Dtests.method=testRandomContains 
> -Dtests.seed=B276D90A4C724311 -Dtests.multiplier=3 -Dtests.slow=true 
> -Dtests.locale=no -Dtests.timezone=Europe/Oslo -Dtests.asserts=true 
> -Dtests.file.encoding=UTF-8
>[junit4] FAILURE 0.88s J0 | 
> RandomGeoShapeRelationshipTest.testRandomContains <<<
>[junit4]> Throwable #1: java.lang.AssertionError: geoAreaShape: 
> GeoExactCircle: {planetmodel=PlanetModel.WGS84, 
> center=[lat=0.02123571392201587, 
> lon=2.320149787902387([X=-0.6817728874503795, Y=0.732782197038459, 
> Z=0.021257843476414247])], radius=3.0750485329959063(176.1873027385607), 
> accuracy=1.363030071996312E-4}
>[junit4]> shape: GeoRectangle: {planetmodel=PlanetModel.WGS84, 
> toplat=1.0536304186599388(60.36857615581647), 
> bottomlat=-1.0245136525145786(-58.70030834261794), 
> leftlon=-2.1970388932576568(-125.8810560097571), 
> rightlon=0.4079910742650278(23.37616663439463)} expected:<0> but was:<2>
>[junit4]>  at 
> __randomizedtesting.SeedInfo.seed([B276D90A4C724311:85901780E92DE45B]:0)
>[junit4]>  at 
> org.apache.lucene.spatial3d.geom.RandomGeoShapeRelationshipTest.testRandomContains(RandomGeoShapeRelationshipTest.java:225)
>[junit4]>  at java.lang.Thread.run(Thread.java:748)
>[junit4]   2> NOTE: test params are: codec=Lucene70, 
> sim=RandomSimilarity(queryNorm=false): {}, locale=no, timezone=Europe/Oslo
>[junit4]   2> NOTE: Linux 4.10.0-33-generic i386/Oracle Corporation 
> 1.8.0_144 (32-bit)/cpus=8,threads=1,free=4893368,total=16252928
>[junit4]   2> NOTE: All tests run in this JVM: 
> [RandomGeoShapeRelationshipTest]
>[junit4] Completed [9/15 (1!)] on J0 in 8.00s, 26 tests, 1 failure <<< 
> FAILURES!
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-7988) RandomGeoShapeRelationshipTest.testRandomContains() failure

2017-10-06 Thread Steve Rowe (JIRA)
Steve Rowe created LUCENE-7988:
--

 Summary: RandomGeoShapeRelationshipTest.testRandomContains() 
failure
 Key: LUCENE-7988
 URL: https://issues.apache.org/jira/browse/LUCENE-7988
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Steve Rowe


Reproduces for me.  From 
[https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/555]:

{noformat}
   [junit4] Suite: 
org.apache.lucene.spatial3d.geom.RandomGeoShapeRelationshipTest
   [junit4]   2> NOTE: reproduce with: ant test  
-Dtestcase=RandomGeoShapeRelationshipTest -Dtests.method=testRandomContains 
-Dtests.seed=B276D90A4C724311 -Dtests.multiplier=3 -Dtests.slow=true 
-Dtests.locale=no -Dtests.timezone=Europe/Oslo -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8
   [junit4] FAILURE 0.88s J0 | 
RandomGeoShapeRelationshipTest.testRandomContains <<<
   [junit4]> Throwable #1: java.lang.AssertionError: geoAreaShape: 
GeoExactCircle: {planetmodel=PlanetModel.WGS84, 
center=[lat=0.02123571392201587, lon=2.320149787902387([X=-0.6817728874503795, 
Y=0.732782197038459, Z=0.021257843476414247])], 
radius=3.0750485329959063(176.1873027385607), accuracy=1.363030071996312E-4}
   [junit4]> shape: GeoRectangle: {planetmodel=PlanetModel.WGS84, 
toplat=1.0536304186599388(60.36857615581647), 
bottomlat=-1.0245136525145786(-58.70030834261794), 
leftlon=-2.1970388932576568(-125.8810560097571), 
rightlon=0.4079910742650278(23.37616663439463)} expected:<0> but was:<2>
   [junit4]>at 
__randomizedtesting.SeedInfo.seed([B276D90A4C724311:85901780E92DE45B]:0)
   [junit4]>at 
org.apache.lucene.spatial3d.geom.RandomGeoShapeRelationshipTest.testRandomContains(RandomGeoShapeRelationshipTest.java:225)
   [junit4]>at java.lang.Thread.run(Thread.java:748)
   [junit4]   2> NOTE: test params are: codec=Lucene70, 
sim=RandomSimilarity(queryNorm=false): {}, locale=no, timezone=Europe/Oslo
   [junit4]   2> NOTE: Linux 4.10.0-33-generic i386/Oracle Corporation 
1.8.0_144 (32-bit)/cpus=8,threads=1,free=4893368,total=16252928
   [junit4]   2> NOTE: All tests run in this JVM: 
[RandomGeoShapeRelationshipTest]
   [junit4] Completed [9/15 (1!)] on J0 in 8.00s, 26 tests, 1 failure <<< 
FAILURES!
{noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11444) Improve Aliases.java and comma delimited collection list handling

2017-10-06 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11444?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16194919#comment-16194919
 ] 

Joel Bernstein commented on SOLR-11444:
---

I also believe the getSlices logic was in a number of different places in the 
streaming code, with slightly different implementations. It was then moved to 
the TupleStream, where it's shared by all the stream sources.  So when that 
consolidation was done it probably took the most up-to-date version from 
CloudSolrClient.

I think it does make sense to use the CloudSolrClient version rather than 
having the streaming code carry this logic.  

> Improve Aliases.java and comma delimited collection list handling
> -
>
> Key: SOLR-11444
> URL: https://issues.apache.org/jira/browse/SOLR-11444
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Reporter: David Smiley
>Assignee: David Smiley
> Attachments: SOLR_11444_Aliases.patch
>
>
> While starting to look at SOLR-11299 I noticed some brittleness in 
> assumptions about Strings that refer to a collection.  Sometimes they are in 
> fact references to comma-separated lists, which it appears was added with the 
> introduction of collection aliases (an alias can refer to a comma-delimited 
> list).  So Java's type system kind of goes out the window when we do this.  
> In one case this leads to a bug -- CloudSolrClient will throw an NPE if you 
> try to write to such an alias.  Sending an update via HTTP will allow it and 
> send it to the first in the list.
> So this issue is about refactoring and some little improvements pertaining to 
> Aliases.java plus certain key spots that deal with collection references.  I 
> don't think I want to go as far as changing the public SolrJ API except to 
> add documentation on what's possible.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-11444) Improve Aliases.java and comma delimited collection list handling

2017-10-06 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11444?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16194919#comment-16194919
 ] 

Joel Bernstein edited comment on SOLR-11444 at 10/6/17 5:37 PM:


I also believe the getSlices logic was in a number of different places in the 
streaming code, with slightly different implementations. It was then moved to 
the TupleStream, where it's shared by all the stream sources.  So when that 
consolidation was done it probably took the most up-to-date version from 
CloudSolrClient.

I think it does make sense to use the CloudSolrClient version rather than 
having the TupleStream carry this logic.  


was (Author: joel.bernstein):
I also believe the getSlices logic was in a number of different places in the 
streaming code, with slightly different implementations. It was then moved to 
the TupleStream, where it's shared by all the stream sources.  So when that 
consolidation was done it probably took the most up-to-date version from 
CloudSolrClient.

I think it does make sense to use the CloudSolrClient version rather than 
having the streaming code carry this logic.  

> Improve Aliases.java and comma delimited collection list handling
> -
>
> Key: SOLR-11444
> URL: https://issues.apache.org/jira/browse/SOLR-11444
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Reporter: David Smiley
>Assignee: David Smiley
> Attachments: SOLR_11444_Aliases.patch
>
>
> While starting to look at SOLR-11299 I noticed some brittleness in 
> assumptions about Strings that refer to a collection.  Sometimes they are in 
> fact references to comma-separated lists, which it appears was added with the 
> introduction of collection aliases (an alias can refer to a comma-delimited 
> list).  So Java's type system kind of goes out the window when we do this.  
> In one case this leads to a bug -- CloudSolrClient will throw an NPE if you 
> try to write to such an alias.  Sending an update via HTTP will allow it and 
> send it to the first in the list.
> So this issue is about refactoring and some little improvements pertaining to 
> Aliases.java plus certain key spots that deal with collection references.  I 
> don't think I want to go as far as changing the public SolrJ API except to 
> add documentation on what's possible.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11444) Improve Aliases.java and comma delimited collection list handling

2017-10-06 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11444?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16194909#comment-16194909
 ] 

Joel Bernstein commented on SOLR-11444:
---

I suspect what happened was that the getSlices implementation was different at 
one point in time and was later made the same. I'm not sure whether I'm the one 
who did this, as it's been refactored a number of times, I believe. If they are 
identical now, it makes sense to just use the CloudSolrClient version.

> Improve Aliases.java and comma delimited collection list handling
> -
>
> Key: SOLR-11444
> URL: https://issues.apache.org/jira/browse/SOLR-11444
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Reporter: David Smiley
>Assignee: David Smiley
> Attachments: SOLR_11444_Aliases.patch
>
>
> While starting to look at SOLR-11299 I noticed some brittleness in 
> assumptions about Strings that refer to a collection.  Sometimes they are in 
> fact references to comma-separated lists, which it appears was added with the 
> introduction of collection aliases (an alias can refer to a comma-delimited 
> list).  So Java's type system kind of goes out the window when we do this.  
> In one case this leads to a bug -- CloudSolrClient will throw an NPE if you 
> try to write to such an alias.  Sending an update via HTTP will allow it and 
> send it to the first in the list.
> So this issue is about refactoring and some little improvements pertaining to 
> Aliases.java plus certain key spots that deal with collection references.  I 
> don't think I want to go as far as changing the public SolrJ API except to 
> add documentation on what's possible.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11423) Overseer queue needs a hard cap (maximum size) that clients respect

2017-10-06 Thread Scott Blum (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11423?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16194900#comment-16194900
 ] 

Scott Blum commented on SOLR-11423:
---

Perhaps you're right.  In our cluster, though, the Overseer is not able to chew 
through 20k entries very fast; it takes a long time (many minutes) to get 
through that number of items.  Our current escape valve is a tiny separate tool 
that literally just goes through the queue and deletes everything. :D
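
A tool like that can be very small.  The sketch below is my own illustration of 
the idea, not the actual tool; the /overseer/queue path and the connect string 
are assumptions.  It simply lists and deletes the queue's children with the 
plain ZooKeeper client:

{code}
import java.util.List;
import java.util.concurrent.CountDownLatch;

import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooKeeper;

public class PurgeOverseerQueueSketch {
  public static void main(String[] args) throws Exception {
    String connectString = args.length > 0 ? args[0] : "localhost:2181";
    String queuePath = "/overseer/queue";            // assumed queue znode
    CountDownLatch connected = new CountDownLatch(1);
    ZooKeeper zk = new ZooKeeper(connectString, 30000, event -> {
      if (event.getState() == Watcher.Event.KeeperState.SyncConnected) {
        connected.countDown();
      }
    });
    try {
      connected.await();                             // wait for the session to be established
      List<String> children = zk.getChildren(queuePath, false);
      for (String child : children) {
        zk.delete(queuePath + "/" + child, -1);      // -1 = ignore the znode version
      }
      System.out.println("Deleted " + children.size() + " queued items");
    } finally {
      zk.close();
    }
  }
}
{code}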

> Overseer queue needs a hard cap (maximum size) that clients respect
> ---
>
> Key: SOLR-11423
> URL: https://issues.apache.org/jira/browse/SOLR-11423
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Reporter: Scott Blum
>Assignee: Scott Blum
>
> When Solr gets into pathological GC thrashing states, it can fill the 
> overseer queue with literally thousands and thousands of queued state 
> changes.  Many of these end up being duplicated up/down state updates.  Our 
> production cluster has gotten to the 100k queued items level many times, and 
> there's nothing useful you can do at this point except manually purge the 
> queue in ZK.  Recently, it hit 3 million queued items, at which point our 
> entire ZK cluster exploded.
> I propose a hard cap.  Any client trying to enqueue a item when a queue is 
> full would throw an exception.  I was thinking maybe 10,000 items would be a 
> reasonable limit.  Thoughts?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-7.x-Linux (32bit/jdk1.8.0_144) - Build # 555 - Still Unstable!

2017-10-06 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/555/
Java: 32bit/jdk1.8.0_144 -client -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  
org.apache.lucene.spatial3d.geom.RandomGeoShapeRelationshipTest.testRandomContains

Error Message:
geoAreaShape: GeoExactCircle: {planetmodel=PlanetModel.WGS84, 
center=[lat=0.02123571392201587, lon=2.320149787902387([X=-0.6817728874503795, 
Y=0.732782197038459, Z=0.021257843476414247])], 
radius=3.0750485329959063(176.1873027385607), accuracy=1.363030071996312E-4} 
shape: GeoRectangle: {planetmodel=PlanetModel.WGS84, 
toplat=1.0536304186599388(60.36857615581647), 
bottomlat=-1.0245136525145786(-58.70030834261794), 
leftlon=-2.1970388932576568(-125.8810560097571), 
rightlon=0.4079910742650278(23.37616663439463)} expected:<0> but was:<2>

Stack Trace:
java.lang.AssertionError: geoAreaShape: GeoExactCircle: 
{planetmodel=PlanetModel.WGS84, center=[lat=0.02123571392201587, 
lon=2.320149787902387([X=-0.6817728874503795, Y=0.732782197038459, 
Z=0.021257843476414247])], radius=3.0750485329959063(176.1873027385607), 
accuracy=1.363030071996312E-4}
shape: GeoRectangle: {planetmodel=PlanetModel.WGS84, 
toplat=1.0536304186599388(60.36857615581647), 
bottomlat=-1.0245136525145786(-58.70030834261794), 
leftlon=-2.1970388932576568(-125.8810560097571), 
rightlon=0.4079910742650278(23.37616663439463)} expected:<0> but was:<2>
at 
__randomizedtesting.SeedInfo.seed([B276D90A4C724311:85901780E92DE45B]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at 
org.apache.lucene.spatial3d.geom.RandomGeoShapeRelationshipTest.testRandomContains(RandomGeoShapeRelationshipTest.java:225)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 

[jira] [Commented] (LUCENE-5753) Refresh UAX29URLEmailTokenizer's TLD list

2017-10-06 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5753?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16194864#comment-16194864
 ] 

ASF subversion and git services commented on LUCENE-5753:
-

Commit d2e0905ebd64f8af21277b621bb9327382b106ff in lucene-solr's branch 
refs/heads/master from [~steve_rowe]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=d2e0905e ]

LUCENE-5753: Refresh UAX29URLEmailTokenizer's TLD list


> Refresh UAX29URLEmailTokenizer's TLD list
> -
>
> Key: LUCENE-5753
> URL: https://issues.apache.org/jira/browse/LUCENE-5753
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/analysis
>Reporter: Steve Merritt
> Attachments: LUCENE-5753.patch
>
>
> uax_url_email analyzer appears unable to recognize the ".local" TLD among 
> others. Bug can be reproduced by
> curl -XGET 
> "ADDRESS/INDEX/_analyze?text=First%20Last%20lname@section.mycorp.local=uax_url_email"
> will parse "ln...@section.my" and "corp.local" as separate tokens, as opposed 
> to
> curl -XGET 
> "ADDRESS/INDEX/_analyze?text=first%20last%20ln...@section.mycorp.org=uax_url_email"
> which will recognize "ln...@section.mycorp.org".
> Can this be fixed by updating to a newer version? I am running ElasticSearch 
> 0.90.5 and whatever Lucene version sits underneath that. My suspicion is that 
> the TLD list the analyzer relies on (http://www.internic.net/zones/root.zone, 
> I think?) is incomplete and needs updating. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5753) Refresh UAX29URLEmailTokenizer's TLD list

2017-10-06 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5753?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16194862#comment-16194862
 ] 

ASF subversion and git services commented on LUCENE-5753:
-

Commit 432c61f95e6d2b2dc31533344b776c59efb6f89b in lucene-solr's branch 
refs/heads/branch_7x from [~steve_rowe]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=432c61f ]

LUCENE-5753: Refresh UAX29URLEmailTokenizer's TLD list


> Refresh UAX29URLEmailTokenizer's TLD list
> -
>
> Key: LUCENE-5753
> URL: https://issues.apache.org/jira/browse/LUCENE-5753
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/analysis
>Reporter: Steve Merritt
> Attachments: LUCENE-5753.patch
>
>
> uax_url_email analyzer appears unable to recognize the ".local" TLD among 
> others. Bug can be reproduced by
> curl -XGET 
> "ADDRESS/INDEX/_analyze?text=First%20Last%20lname@section.mycorp.local=uax_url_email"
> will parse "ln...@section.my" and "corp.local" as separate tokens, as opposed 
> to
> curl -XGET 
> "ADDRESS/INDEX/_analyze?text=first%20last%20ln...@section.mycorp.org=uax_url_email"
> which will recognize "ln...@section.mycorp.org".
> Can this be fixed by updating to a newer version? I am running ElasticSearch 
> 0.90.5 and whatever Lucene version sits underneath that. My suspicion is that 
> the TLD list the analyzer relies on (http://www.internic.net/zones/root.zone, 
> I think?) is incomplete and needs updating. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7983) Make IndexReaderWarmer a functional interface

2017-10-06 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7983?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16194860#comment-16194860
 ] 

Steve Rowe commented on LUCENE-7983:


[~dweiss], you set the fix version at 7.1, but you didn't push the change to 
branch_7x - I'm guessing one of these was a mistake?

> Make IndexReaderWarmer a functional interface
> -
>
> Key: LUCENE-7983
> URL: https://issues.apache.org/jira/browse/LUCENE-7983
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Dawid Weiss
>Assignee: Dawid Weiss
>Priority: Trivial
> Fix For: 7.1
>
>
> {{IndexReaderWarmer}} has a single method but is an abstract class with a 
> confusing protected constructor. Can we make it a proper functional interface 
> instead? This is marked as {{lucene.experimental}} API and while it would be 
> a binary incompatibility, everything remains the same at the source level, 
> even for existing implementations.
> {code}
> public static abstract class IndexReaderWarmer {
> /** Sole constructor. (For invocation by subclass 
>  *  constructors, typically implicit.) */
> protected IndexReaderWarmer() {
> }
> /** Invoked on the {@link LeafReader} for the newly
>  *  merged segment, before that segment is made visible
>  *  to near-real-time readers. */
> public abstract void warm(LeafReader reader) throws IOException;
>   }
> {code}
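
For reference, a sketch of the proposed shape (my reading of the proposal, not 
the committed change), which would let callers pass a lambda such as 
{{iwc.setMergedSegmentWarmer(reader -> warmFields(reader))}}, where 
{{warmFields}} is whatever warming logic the application already has:

{code}
import java.io.IOException;

import org.apache.lucene.index.LeafReader;

/** The abstract class above collapses into a functional interface with the
 *  same single method; existing subclasses keep compiling at the source level. */
@FunctionalInterface
public interface IndexReaderWarmer {
  /** Invoked on the {@link LeafReader} for the newly merged segment, before
   *  that segment is made visible to near-real-time readers. */
  void warm(LeafReader reader) throws IOException;
}
{code}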



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11426) TestLazyCores fails too often

2017-10-06 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11426?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16194851#comment-16194851
 ] 

ASF subversion and git services commented on SOLR-11426:


Commit 37fb60d0f1188c3399232fe0240f53d2f4743bb0 in lucene-solr's branch 
refs/heads/master from [~erickerickson]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=37fb60d ]

SOLR-11426: TestLazyCores fails too often. Adding debugging code MASTER ONLY 
since I can't get it to fail locally


> TestLazyCores fails too often
> -
>
> Key: SOLR-11426
> URL: https://issues.apache.org/jira/browse/SOLR-11426
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>
> Rather than re-opening SOLR-10101 I thought I'd start a new issue. I may have 
> to put some code up on Jenkins to test; last time I tried to get this to fail 
> locally I couldn't.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-5753) Refresh UAX29URLEmailTokenizer's TLD list

2017-10-06 Thread Steve Rowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5753?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Rowe updated LUCENE-5753:
---
Summary: Refresh UAX29URLEmailTokenizer's TLD list  (was: Domain lists for 
UAX_URL_EMAIL analyzer are incomplete - cannot recognize ".local" among others)

> Refresh UAX29URLEmailTokenizer's TLD list
> -
>
> Key: LUCENE-5753
> URL: https://issues.apache.org/jira/browse/LUCENE-5753
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/analysis
>Reporter: Steve Merritt
> Attachments: LUCENE-5753.patch
>
>
> uax_url_email analyzer appears unable to recognize the ".local" TLD among 
> others. Bug can be reproduced by
> curl -XGET 
> "ADDRESS/INDEX/_analyze?text=First%20Last%20lname@section.mycorp.local=uax_url_email"
> will parse "ln...@section.my" and "corp.local" as separate tokens, as opposed 
> to
> curl -XGET 
> "ADDRESS/INDEX/_analyze?text=first%20last%20ln...@section.mycorp.org=uax_url_email"
> which will recognize "ln...@section.mycorp.org".
> Can this be fixed by updating to a newer version? I am running ElasticSearch 
> 0.90.5 and whatever Lucene version sits underneath that. My suspicion is that 
> the TLD list the analyzer relies on (http://www.internic.net/zones/root.zone, 
> I think?) is incomplete and needs updating. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-5753) Domain lists for UAX_URL_EMAIL analyzer are incomplete - cannot recognize ".local" among others

2017-10-06 Thread Steve Rowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5753?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Rowe updated LUCENE-5753:
---
Attachment: LUCENE-5753.patch

Patch updating TLDs for {{UAX29URLEmailTokenizer}}.  Precommit and all tests 
pass.

{{ASCIITLD.jflex-macro}} increases from 342 to 1543 TLDs. 

I had to use {{ANT_OPTS=-Xmx8g ant regenerate}} to give enough memory to JFlex. 
 ({{-Xmx4g}} didn't work, but maybe something between 4g and 8g would - I 
didn't try.)

With this patch, the lucene-analyzers-common jar goes from 1.5MB to 1.6MB; I 
think the size increase is acceptable.

Committing shortly.

> Domain lists for UAX_URL_EMAIL analyzer are incomplete - cannot recognize 
> ".local" among others
> ---
>
> Key: LUCENE-5753
> URL: https://issues.apache.org/jira/browse/LUCENE-5753
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/analysis
>Reporter: Steve Merritt
> Attachments: LUCENE-5753.patch
>
>
> uax_url_email analyzer appears unable to recognize the ".local" TLD among 
> others. Bug can be reproduced by
> curl -XGET 
> "ADDRESS/INDEX/_analyze?text=First%20Last%20lname@section.mycorp.local=uax_url_email"
> will parse "ln...@section.my" and "corp.local" as separate tokens, as opposed 
> to
> curl -XGET 
> "ADDRESS/INDEX/_analyze?text=first%20last%20ln...@section.mycorp.org=uax_url_email"
> which will recognize "ln...@section.mycorp.org".
> Can this be fixed by updating to a newer version? I am running ElasticSearch 
> 0.90.5 and whatever Lucene version sits underneath that. My suspicion is that 
> the TLD list the analyzer relies on (http://www.internic.net/zones/root.zone, 
> I think?) is incomplete and needs updating. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-11444) Improve Aliases.java and comma delimited collection list handling

2017-10-06 Thread David Smiley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11444?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley updated SOLR-11444:

Attachment: SOLR_11444_Aliases.patch

This patch is a WIP; I know I broke something and I'm working out what it was.

Some random notes:
* CloudSolrClient: send writes to the first collection in the alias list
* More consistently use StrUtils.splitSmart instead of String.split (see the 
sketch below)
* [~joel.bernstein] {{TupleStream.getSlices}} looks identical to 
{{CloudSolrClient.getSlices}}.  Why did you copy code and commit it unmodified? 
 Perhaps there is more duplicated code; I didn't check. 
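
A minimal sketch of the splitSmart point above (my own illustration, not part of 
the attached patch): StrUtils.splitSmart is not regex-based and, per its 
javadocs, honors backslash-escaped separators, which String.split(",") does not.

{code}
import java.util.List;

import org.apache.solr.common.util.StrUtils;

public class SplitAliasListSketch {
  public static void main(String[] args) {
    // A comma-delimited collection list, as stored for a collection alias.
    String aliasValue = "collection1,collection2,collection3";
    List<String> collections = StrUtils.splitSmart(aliasValue, ',');
    System.out.println(collections);  // [collection1, collection2, collection3]
  }
}
{code}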



> Improve Aliases.java and comma delimited collection list handling
> -
>
> Key: SOLR-11444
> URL: https://issues.apache.org/jira/browse/SOLR-11444
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Reporter: David Smiley
>Assignee: David Smiley
> Attachments: SOLR_11444_Aliases.patch
>
>
> While starting to look at SOLR-11299 I noticed some brittleness in 
> assumptions about Strings that refer to a collection.  Sometimes they are in 
> fact references to comma-separated lists, which it appears was added with the 
> introduction of collection aliases (an alias can refer to a comma-delimited 
> list).  So Java's type system kind of goes out the window when we do this.  
> In one case this leads to a bug -- CloudSolrClient will throw an NPE if you 
> try to write to such an alias.  Sending an update via HTTP will allow it and 
> send it to the first in the list.
> So this issue is about refactoring and some little improvements pertaining to 
> Aliases.java plus certain key spots that deal with collection references.  I 
> don't think I want to go as far as changing the public SolrJ API except to 
> add documentation on what's possible.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-11444) Improve Aliases.java and comma delimited collection list handling

2017-10-06 Thread David Smiley (JIRA)
David Smiley created SOLR-11444:
---

 Summary: Improve Aliases.java and comma delimited collection list 
handling
 Key: SOLR-11444
 URL: https://issues.apache.org/jira/browse/SOLR-11444
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
  Components: SolrCloud
Reporter: David Smiley
Assignee: David Smiley


While starting to look at SOLR-11299 I noticed some brittleness in assumptions 
about Strings that refer to a collection.  Sometimes they are in fact 
references to comma-separated lists, which it appears was added with the 
introduction of collection aliases (an alias can refer to a comma-delimited 
list).  So Java's type system kind of goes out the window when we do this.  In 
one case this leads to a bug -- CloudSolrClient will throw an NPE if you try to 
write to such an alias.  Sending an update via HTTP will allow it and send it 
to the first in the list.

So this issue is about refactoring and some little improvements pertaining to 
Aliases.java plus certain key spots that deal with collection references.  I 
don't think I want to go as far as changing the public SolrJ API except to 
add documentation on what's possible.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-Tests-master - Build # 2114 - Still Failing

2017-10-06 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-master/2114/

6 tests failed.
FAILED:  org.apache.solr.cloud.ShardSplitTest.test

Error Message:
Could not load collection from ZK: routeFieldColl

Stack Trace:
org.apache.solr.common.SolrException: Could not load collection from ZK: 
routeFieldColl
at 
__randomizedtesting.SeedInfo.seed([B84BCE15B927B69B:301FF1CF17DBDB63]:0)
at 
org.apache.solr.common.cloud.ZkStateReader.getCollectionLive(ZkStateReader.java:1170)
at 
org.apache.solr.common.cloud.ZkStateReader$LazyCollectionRef.get(ZkStateReader.java:690)
at 
org.apache.solr.common.cloud.ClusterState.getCollectionOrNull(ClusterState.java:130)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:154)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:140)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:135)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.waitForRecoveriesToFinish(AbstractFullDistribZkTestBase.java:908)
at 
org.apache.solr.cloud.ShardSplitTest.splitByRouteFieldTest(ShardSplitTest.java:736)
at org.apache.solr.cloud.ShardSplitTest.test(ShardSplitTest.java:101)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:993)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:968)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[JENKINS] Lucene-Solr-master-Linux (64bit/jdk-9) - Build # 20617 - Still Failing!

2017-10-06 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/20617/
Java: 64bit/jdk-9 -XX:-UseCompressedOops -XX:+UseSerialGC --illegal-access=deny

All tests passed

Build Log:
[...truncated 53708 lines...]
BUILD FAILED
/home/jenkins/workspace/Lucene-Solr-master-Linux/build.xml:826: The following 
error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-master-Linux/build.xml:706: The following 
error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-master-Linux/build.xml:693: Source checkout 
is dirty (unversioned/missing files) after running tests!!! Offending files:
* lucene/licenses/morfologik-ukrainian-search-3.7.5.jar.sha1

Total time: 64 minutes 46 seconds
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
[WARNINGS] Skipping publisher since build result is FAILURE
Recording test results
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[jira] [Commented] (SOLR-11299) Time partitioned collections (umbrella issue)

2017-10-06 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11299?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16194736#comment-16194736
 ] 

David Smiley commented on SOLR-11299:
-

Hi Gus.

bq. One thought that comes to mind is that with deletions of old collections, 
we could more or less think of it as solr collection based ring buffer...

Perhaps in an abstract sense, but I don't think modeling it physically (creating 
X collections with some ordinal-suffixed name in advance) makes sense. I don't 
think it's a big deal to delete collections and create new ones.  That approach 
stays flexible as the retention settings change, whereas an actual ring buffer 
design is rigid.

bq. The implicit assumption seems to be that writes are "mostly ordered" and 
that severely out of order writes might be rejected? I think that that's 
probably a critical assumption since I imagine that we'll have an alias that's 
moving from collection to collection for writes.  ...

My proposed design does not call for a so-called write alias, which would be a 
limitation for out-of-order writes.  Instead there is a URP (or an add-on to the 
DistributedURP) that can route each document to the proper partition.  For fixed 
time-based partitions, it shouldn't be a big deal to add data out of order.  For 
size-capped partitions, it's definitely incompatible.  For documents far in the 
future, instead of creating too many intermediate collections, we may well 
reject them.
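
As a purely illustrative sketch of the routing idea (my own example: it assumes 
hourly partitions named "<alias>_yyyyMMdd_HH", which is not the actual proposed 
naming), the target partition for a fixed time-based scheme could be derived 
from the document's timestamp like this:

{code}
import java.time.Instant;
import java.time.ZoneOffset;
import java.time.format.DateTimeFormatter;
import java.time.temporal.ChronoUnit;

public class TimePartitionRoutingSketch {
  private static final DateTimeFormatter FMT =
      DateTimeFormatter.ofPattern("yyyyMMdd_HH").withZone(ZoneOffset.UTC);

  /** Maps a document timestamp onto the hourly partition collection it belongs to. */
  static String targetCollection(String alias, Instant docTimestamp) {
    Instant bucket = docTimestamp.truncatedTo(ChronoUnit.HOURS);
    return alias + "_" + FMT.format(bucket);
  }

  public static void main(String[] args) {
    // -> logs_20171006_17
    System.out.println(targetCollection("logs", Instant.parse("2017-10-06T17:37:00Z")));
  }
}
{code}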

bq. Thoughts on the possible URP/DURP maybe it's always present by default, but 
a silent no-op unless it sees that a time partitioned collection is being 
accessed, and only then does it do anything?  ...

Yeah, maybe; more investigation is needed to help us pick. Perhaps collections 
involved in a time series could have a boolean piece of metadata denoting that 
they are part of a time series?  Or a string back-reference to the alias?

bq. Another thought is that while date/time is the objective here, it would 
seem that any numeric field should work...

I've thought of this, but I think the time-based use case is so prevalent that I 
have doubts it's worth bothering to add non-time support.  It could theoretically 
be added in the future.  And such a user could always treat their number as a 
time in order to use this feature.

> Time partitioned collections (umbrella issue)
> -
>
> Key: SOLR-11299
> URL: https://issues.apache.org/jira/browse/SOLR-11299
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Reporter: David Smiley
>Assignee: David Smiley
>
> Solr ought to have the ability to manage large-scale time-series data (think 
> logs or sensor data / IOT) itself without a lot of manual/external work.  The 
> most naive and painless approach today is to create a collection with a high 
> numShards with hash routing but this isn't as good as partitioning the 
> underlying indexes by time for these reasons:
> * Easy to scale up/down horizontally as data/requirements change.  (No need 
> to over-provision, use shard splitting, or re-index with different config)
> * Faster queries: 
> ** can search fewer shards, reducing overall load
> ** realtime search is more tractable (since most shards are stable -- 
> good caches)
> ** "recent" shards (that might be queried more) can be allocated to 
> faster hardware
> ** aged out data is simply removed, not marked as deleted.  Deleted docs 
> still have search overhead.
> * Outages of a shard result in a degraded but sometimes a useful system 
> nonetheless (compare to random subset missing)
> Ideally you could set this up once and then simply work with a collection 
> (potentially actually an alias) in a normal way (search or update), letting 
> Solr handle the addition of new partitions, removing of old ones, and 
> appropriate routing of requests depending on their nature.
> This issue is an umbrella issue for the particular tasks that will make it 
> all happen -- either subtasks or issue linking.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11032) Update solrj tutorial

2017-10-06 Thread Cassandra Targett (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11032?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16194711#comment-16194711
 ] 

Cassandra Targett commented on SOLR-11032:
--

bq. Would anyone be opposed to updating the ref-guide content, and then 
figuring out a way to build/test the Java snippets afterwards?

+1, updating the content without tests is still good progress.

> Update solrj tutorial
> -
>
> Key: SOLR-11032
> URL: https://issues.apache.org/jira/browse/SOLR-11032
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation, SolrJ, website
>Reporter: Karl Richter
>
> The [solrj tutorial](https://wiki.apache.org/solr/Solrj) has the following 
> issues:
>   * It refers to 1.4.0 whereas the current release is 6.x; some classes are 
> deprecated or no longer exist.
>   * Document-object-binding is a crucial feature [which should be working in 
> the meantime](https://issues.apache.org/jira/browse/SOLR-1945) and thus 
> should be covered in the tutorial.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-11442) Slightly prettier table of contents in ref guide

2017-10-06 Thread Cassandra Targett (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11442?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cassandra Targett resolved SOLR-11442.
--
   Resolution: Fixed
Fix Version/s: 7.1

Thanks [~gus_heck]!

> Slightly prettier table of contents in ref guide
> 
>
> Key: SOLR-11442
> URL: https://issues.apache.org/jira/browse/SOLR-11442
> Project: Solr
>  Issue Type: Improvement
>Reporter: Gus Heck
>Assignee: Cassandra Targett
> Fix For: 7.1
>
> Attachments: prettier_toc.patch, Screen Shot 2017-10-05 at 9.13.13 
> PM.png, Screen Shot 2017-10-05 at 9.13.37 PM.png, SOLR-11442.patch
>
>
> This has been irking me, and the fix is dead simple... The table of contents 
> in the ref guide is silly-skinny, taking up only 300px and leading to things 
> like the attached screen shots...



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11442) Slightly prettier table of contents in ref guide

2017-10-06 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11442?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16194709#comment-16194709
 ] 

ASF subversion and git services commented on SOLR-11442:


Commit c5eaf31b789b4439820957b5f7bd256391addb9b in lucene-solr's branch 
refs/heads/branch_7x from [~ctargett]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=c5eaf31 ]

SOLR-11442: fix width of in-page TOC


> Slightly prettier table of contents in ref guide
> 
>
> Key: SOLR-11442
> URL: https://issues.apache.org/jira/browse/SOLR-11442
> Project: Solr
>  Issue Type: Improvement
>Reporter: Gus Heck
>Assignee: Cassandra Targett
> Attachments: prettier_toc.patch, Screen Shot 2017-10-05 at 9.13.13 
> PM.png, Screen Shot 2017-10-05 at 9.13.37 PM.png, SOLR-11442.patch
>
>
> This has been irking me, and the fix is dead simple... The table of contents 
> in the ref guide is silly skinny taking up only 300px leading to things like 
> the attached screen shots...



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11442) Slightly prettier table of contents in ref guide

2017-10-06 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11442?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16194706#comment-16194706
 ] 

ASF subversion and git services commented on SOLR-11442:


Commit c5f9a6f221c24911701786daf6e16c102124752b in lucene-solr's branch 
refs/heads/master from [~ctargett]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=c5f9a6f ]

SOLR-11442: fix width of in-page TOC


> Slightly prettier table of contents in ref guide
> 
>
> Key: SOLR-11442
> URL: https://issues.apache.org/jira/browse/SOLR-11442
> Project: Solr
>  Issue Type: Improvement
>Reporter: Gus Heck
>Assignee: Cassandra Targett
> Attachments: prettier_toc.patch, Screen Shot 2017-10-05 at 9.13.13 
> PM.png, Screen Shot 2017-10-05 at 9.13.37 PM.png, SOLR-11442.patch
>
>
> This has been irking me, and the fix is dead simple... The table of contents 
> in the ref guide is silly skinny taking up only 300px leading to things like 
> the attached screen shots...



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11442) Slightly prettier table of contents in ref guide

2017-10-06 Thread Cassandra Targett (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11442?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16194700#comment-16194700
 ] 

Cassandra Targett commented on SOLR-11442:
--

This is a change I've considered making a few times too, so I'm overall +1.

The reason I never made the change is that I'm not a huge fan of how the gray 
background spans the whole width of the page taken up by the TOC (I'd prefer it 
to scale dynamically with the size of the TOC inside it), but I'll live with 
that for now until I have time to look at it in more detail and see if I can 
get it to scale.

SOLR-10612 added the ability to put the TOC on the right side of the page (see 
something like 
https://builds.apache.org/view/L/view/Lucene/job/Solr-reference-guide-master/javadoc/collections-api.html
 for an example), and the change in the proposed patch makes it possible for 
that right-hand TOC to severely crowd out the content that's supposed to float 
next to it, since it will span as much space as it needs. Adding a 
{{max-width: 300px;}} to the definition for {{toc-right}} confines it back to a 
reasonable space. New patch attached to show the change, but I'll commit both 
changes in a moment.

> Slightly prettier table of contents in ref guide
> 
>
> Key: SOLR-11442
> URL: https://issues.apache.org/jira/browse/SOLR-11442
> Project: Solr
>  Issue Type: Improvement
>Reporter: Gus Heck
>Assignee: Cassandra Targett
> Attachments: prettier_toc.patch, Screen Shot 2017-10-05 at 9.13.13 
> PM.png, Screen Shot 2017-10-05 at 9.13.37 PM.png, SOLR-11442.patch
>
>
> This has been irking me, and the fix is dead simple... The table of contents 
> in the ref guide is silly skinny taking up only 300px leading to things like 
> the attached screen shots...



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-11442) Slightly prettier table of contents in ref guide

2017-10-06 Thread Cassandra Targett (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11442?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cassandra Targett updated SOLR-11442:
-
Attachment: SOLR-11442.patch

> Slightly prettier table of contents in ref guide
> 
>
> Key: SOLR-11442
> URL: https://issues.apache.org/jira/browse/SOLR-11442
> Project: Solr
>  Issue Type: Improvement
>Reporter: Gus Heck
>Assignee: Cassandra Targett
> Attachments: prettier_toc.patch, Screen Shot 2017-10-05 at 9.13.13 
> PM.png, Screen Shot 2017-10-05 at 9.13.37 PM.png, SOLR-11442.patch
>
>
> This has been irking me, and the fix is dead simple... The table of contents 
> in the ref guide is silly skinny taking up only 300px leading to things like 
> the attached screen shots...



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11306) Solr example schemas inaccurate comments on docValues and StrField

2017-10-06 Thread Varun Thacker (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11306?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16194688#comment-16194688
 ] 

Varun Thacker commented on SOLR-11306:
--

Thanks, Jason, for the reminder. I'll fix it today.

> Solr example schemas inaccurate comments on  docValues and StrField
> ---
>
> Key: SOLR-11306
> URL: https://issues.apache.org/jira/browse/SOLR-11306
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: examples
>Affects Versions: 6.6, 7.0
>Reporter: Tom Burton-West
>Priority: Minor
> Attachments: SOLR-11306.patch
>
>
> Several of the example managed-schema files have an outdated comment about 
> docValues and StrField.  In Solr 6.6.0 these are under solr-6.6.0/solr/server 
> and the lines where the comment starts for each file are:
> solr/configsets/basic_configs/conf/managed-schema:216:   
> solr/configsets/data_driven_schema_configs/conf/managed-schema:221:
> solr/configsets/sample_techproducts_configs/conf/managed-schema:317
> In the case of 
> Solr-6.6.0/server/solr/configsets/basic_configs/conf/managed-schema, shortly 
> after the comment  are some lines which seem to directly contradict the 
> comment:
> 216  
> On line 221 a StrField is declared with docValues that is multiValued:
> 221   sortMissingLast="true" multiValued="true" docValues="true" />
> Also note that the comments above say that the field must either be required 
> or have a default value, but line 221 appears to satisfy neither condition.
> The JavaDocs indicate that StrField can be multi-valued 
> https://lucene.apache.org/core/6_6_0//core/org/apache/lucene/index/DocValuesType.html



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11306) Solr example schemas inaccurate comments on docValues and StrField

2017-10-06 Thread Jason Gerlowski (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11306?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16194673#comment-16194673
 ] 

Jason Gerlowski commented on SOLR-11306:


This isn't hugely important, but it would be a nice documentation fix.  If 
people don't have bandwidth, that's cool.  Just wanted to make sure it didn't 
get lost because people didn't know about or see it.

> Solr example schemas inaccurate comments on  docValues and StrField
> ---
>
> Key: SOLR-11306
> URL: https://issues.apache.org/jira/browse/SOLR-11306
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: examples
>Affects Versions: 6.6, 7.0
>Reporter: Tom Burton-West
>Priority: Minor
> Attachments: SOLR-11306.patch
>
>
> Several of the example managed-schema files have an outdated comment about 
> docValues and StrField.  In Solr 6.6.0 these are under solr-6.6.0/solr/server 
> and the lines where the comment starts for each file are:
> solr/configsets/basic_configs/conf/managed-schema:216:   
> solr/configsets/data_driven_schema_configs/conf/managed-schema:221:
> solr/configsets/sample_techproducts_configs/conf/managed-schema:317
> In the case of 
> Solr-6.6.0/server/solr/configsets/basic_configs/conf/managed-schema, shortly 
> after the comment  are some lines which seem to directly contradict the 
> comment:
> 216  
> On line 221 a StrField is declared with docValues that is multiValued:
> 221   sortMissingLast="true" multiValued="true" docValues="true" />
> Also note that the comments above say that the field must either be required 
> or have a default value, but line 221 appears to satisfy neither condition.
> The JavaDocs indicate that StrField can be multi-valued 
> https://lucene.apache.org/core/6_6_0//core/org/apache/lucene/index/DocValuesType.html



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (SOLR-11442) Slightly prettier table of contents in ref guide

2017-10-06 Thread Cassandra Targett (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11442?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cassandra Targett reassigned SOLR-11442:


Assignee: Cassandra Targett

> Slightly prettier table of contents in ref guide
> 
>
> Key: SOLR-11442
> URL: https://issues.apache.org/jira/browse/SOLR-11442
> Project: Solr
>  Issue Type: Improvement
>Reporter: Gus Heck
>Assignee: Cassandra Targett
> Attachments: prettier_toc.patch, Screen Shot 2017-10-05 at 9.13.13 
> PM.png, Screen Shot 2017-10-05 at 9.13.37 PM.png
>
>
> This has been irking me, and the fix is dead simple... The table of contents 
> in the ref guide is silly skinny taking up only 300px leading to things like 
> the attached screen shots...



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11032) Update solrj tutorial

2017-10-06 Thread Jason Gerlowski (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11032?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16194651#comment-16194651
 ] 

Jason Gerlowski commented on SOLR-11032:


Seems like there are two things being talked about in this JIRA:
1. Updating the content of the SolrJ tutorial (and the SolrJ ref-guide page: 
{{solr/solr-ref-guide/src/using-solrj.adoc}})
2. Ensuring the content stays up to date (with some sort of build-time 
enforcement).

We 100% _should_ do both of these things.  But I'm worried that (2) will hold 
up (1) longer than necessary.  Or, to be more explicit, I'm willing to work on 
both of these, but I don't want my current lack of knowledge about the 
ref-guide build to stand in the way of updating some doc content that could be 
useful right away.

Would anyone be opposed to updating the ref-guide content, and then figuring 
out a way to build/test the Java snippets afterwards?
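
As a rough sketch of the kind of snippet such an updated page might carry 
(assuming a local Solr running the techproducts example; the class name, field 
values, and URL below are illustrative placeholders, not anything proposed in 
this issue):

{code}
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.response.QueryResponse;
import org.apache.solr.common.SolrDocument;
import org.apache.solr.common.SolrInputDocument;

public class SolrJExample {
  public static void main(String[] args) throws Exception {
    // Assumes the local techproducts example; adjust the base URL/core as needed.
    try (HttpSolrClient client =
        new HttpSolrClient.Builder("http://localhost:8983/solr/techproducts").build()) {

      // Index a document.
      SolrInputDocument doc = new SolrInputDocument();
      doc.addField("id", "solrj-doc-1");
      doc.addField("name", "A document added via SolrJ");
      client.add(doc);
      client.commit();

      // Query it back.
      SolrQuery query = new SolrQuery("name:SolrJ");
      query.setRows(10);
      QueryResponse response = client.query(query);
      for (SolrDocument d : response.getResults()) {
        System.out.println(d.getFieldValue("id") + " -> " + d.getFieldValue("name"));
      }
    }
  }
}
{code}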

> Update solrj tutorial
> -
>
> Key: SOLR-11032
> URL: https://issues.apache.org/jira/browse/SOLR-11032
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation, SolrJ, website
>Reporter: Karl Richter
>
> The [solrj tutorial](https://wiki.apache.org/solr/Solrj) has the following 
> issues:
>   * It refers to 1.4.0 whereas the current release is 6.x, some classes are 
> deprecated or no longer exist.
>   * Document-object-binding is a crucial feature [which should be working in 
> the meantime](https://issues.apache.org/jira/browse/SOLR-1945) and thus 
> should be covered in the tutorial.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-7.x-Linux (64bit/jdk-9) - Build # 554 - Unstable!

2017-10-06 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/554/
Java: 64bit/jdk-9 -XX:+UseCompressedOops -XX:+UseSerialGC --illegal-access=deny

3 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.ChaosMonkeyNothingIsSafeTest

Error Message:
8 threads leaked from SUITE scope at 
org.apache.solr.cloud.ChaosMonkeyNothingIsSafeTest: 1) Thread[id=18546, 
name=TEST-ChaosMonkeyNothingIsSafeTest.test-seed#[53BF4670AAAD2607]-SendThread(127.0.0.1:39589),
 state=TIMED_WAITING, group=TGRP-ChaosMonkeyNothingIsSafeTest] at 
java.base@9/java.lang.Thread.sleep(Native Method) at 
app//org.apache.zookeeper.client.StaticHostProvider.next(StaticHostProvider.java:101)
 at 
app//org.apache.zookeeper.ClientCnxn$SendThread.startConnect(ClientCnxn.java:997)
 at 
app//org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1060)2) 
Thread[id=18547, 
name=TEST-ChaosMonkeyNothingIsSafeTest.test-seed#[53BF4670AAAD2607]-EventThread,
 state=WAITING, group=TGRP-ChaosMonkeyNothingIsSafeTest] at 
java.base@9/jdk.internal.misc.Unsafe.park(Native Method) at 
java.base@9/java.util.concurrent.locks.LockSupport.park(LockSupport.java:194)   
  at 
java.base@9/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2062)
 at 
java.base@9/java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:435)
 at 
app//org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:501)3) 
Thread[id=18673, name=zkCallback-3444-thread-4, state=TIMED_WAITING, 
group=TGRP-ChaosMonkeyNothingIsSafeTest] at 
java.base@9/jdk.internal.misc.Unsafe.park(Native Method) at 
java.base@9/java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:234)
 at 
java.base@9/java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:462)
 at 
java.base@9/java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:361)
 at 
java.base@9/java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:937)
 at 
java.base@9/java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1091)
 at 
java.base@9/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1152)
 at 
java.base@9/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:641)
 at java.base@9/java.lang.Thread.run(Thread.java:844)4) 
Thread[id=18672, name=zkCallback-3444-thread-3, state=TIMED_WAITING, 
group=TGRP-ChaosMonkeyNothingIsSafeTest] at 
java.base@9/jdk.internal.misc.Unsafe.park(Native Method) at 
java.base@9/java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:234)
 at 
java.base@9/java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:462)
 at 
java.base@9/java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:361)
 at 
java.base@9/java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:937)
 at 
java.base@9/java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1091)
 at 
java.base@9/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1152)
 at 
java.base@9/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:641)
 at java.base@9/java.lang.Thread.run(Thread.java:844)5) 
Thread[id=18545, name=Connection evictor, state=TIMED_WAITING, 
group=TGRP-ChaosMonkeyNothingIsSafeTest] at 
java.base@9/java.lang.Thread.sleep(Native Method) at 
app//org.apache.http.impl.client.IdleConnectionEvictor$1.run(IdleConnectionEvictor.java:66)
 at java.base@9/java.lang.Thread.run(Thread.java:844)6) 
Thread[id=18671, name=zkCallback-3444-thread-2, state=TIMED_WAITING, 
group=TGRP-ChaosMonkeyNothingIsSafeTest] at 
java.base@9/jdk.internal.misc.Unsafe.park(Native Method) at 
java.base@9/java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:234)
 at 
java.base@9/java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:462)
 at 
java.base@9/java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:361)
 at 
java.base@9/java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:937)
 at 
java.base@9/java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1091)
 at 
java.base@9/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1152)
 at 
java.base@9/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:641)
 at java.base@9/java.lang.Thread.run(Thread.java:844)7) 
Thread[id=18548, name=zkCallback-3444-thread-1, state=TIMED_WAITING, 
group=TGRP-ChaosMonkeyNothingIsSafeTest] at 
java.base@9/jdk.internal.misc.Unsafe.park(Native 

[jira] [Comment Edited] (SOLR-11443) Remove the usage of workqueue for Overseer

2017-10-06 Thread Cao Manh Dat (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16194627#comment-16194627
 ] 

Cao Manh Dat edited comment on SOLR-11443 at 10/6/17 2:13 PM:
--

Patch for this ticket. The idea here is simple: we peek at 1000 messages in 
the queue, process them, write the new clusterstate to ZK, then poll out these 
messages. So we only poll out processed messages once the new clusterstate is 
written.
In case the Overseer gets restarted, all the uncommitted messages are still in 
the queue (no need for the workqueue; it is only kept for backward 
compatibility), so we will reprocess them and still reach the desired state.

Here are some benchmark numbers (OverseerTest.testPerformance()):
Before optimization: {{avgRequestsPerSecond: 1551.8934622998179}}
After optimization: {{avgRequestsPerSecond: 3425.594762960455}}


was (Author: caomanhdat):
Patch for this ticket. The idea here is simple, we peek for 1000 messages in 
the queue, processed them, write new clusterstate to ZK, then poll out these 
messages. So we only poll out processed messages when new clusterstate is 
written.

In case of Overseer get restarted, all the uncommitted messages still in the 
queue, we will reprocess them and still achieve the desired state.
Here are some benchmark number ( OverseerTest.testPerformance() )
Before optimize : {{avgRequestsPerSecond: 1551.8934622998179}}
After optmize : {{avgRequestsPerSecond: 3425.594762960455}}
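
A minimal sketch of the batching described above, using placeholder interfaces 
rather than the actual Overseer/DistributedQueue code (the type and method 
names here are assumptions for illustration only):

{code}
import java.util.List;

public class BatchedStateUpdater {

  interface MessageQueue {
    List<byte[]> peekBatch(int max) throws Exception; // look without removing
    void removeFirst(int count) throws Exception;     // drop processed heads
  }

  interface StateStore {
    ClusterState read() throws Exception;
    void write(ClusterState state) throws Exception;  // single ZK write per batch
  }

  interface ClusterState {
    ClusterState apply(byte[] message);               // returns the updated state
  }

  private static final int BATCH_SIZE = 1000;

  void runOnce(MessageQueue queue, StateStore store) throws Exception {
    List<byte[]> batch = queue.peekBatch(BATCH_SIZE);
    if (batch.isEmpty()) {
      return;
    }
    ClusterState state = store.read();
    for (byte[] message : batch) {
      state = state.apply(message);
    }
    // Persist first; if the Overseer dies before this point, the messages are
    // still in the queue and are simply reprocessed after restart.
    store.write(state);
    queue.removeFirst(batch.size());
  }
}
{code}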

> Remove the usage of workqueue for Overseer
> --
>
> Key: SOLR-11443
> URL: https://issues.apache.org/jira/browse/SOLR-11443
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Cao Manh Dat
>Assignee: Cao Manh Dat
> Attachments: SOLR-11443.patch
>
>
> If we can remove the usage of workqueue, We can save a lot of IO blocking in 
> Overseer, hence boost performance a lot.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-11443) Remove the usage of workqueue for Overseer

2017-10-06 Thread Cao Manh Dat (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11443?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cao Manh Dat updated SOLR-11443:

Attachment: SOLR-11443.patch

Patch for this ticket. The idea here is simple: we peek at 1000 messages in 
the queue, process them, write the new clusterstate to ZK, then poll out these 
messages. So we only poll out processed messages once the new clusterstate is 
written.

In case the Overseer gets restarted, all the uncommitted messages are still in 
the queue, so we will reprocess them and still reach the desired state.
Here are some benchmark numbers (OverseerTest.testPerformance()):
Before optimization: {{avgRequestsPerSecond: 1551.8934622998179}}
After optimization: {{avgRequestsPerSecond: 3425.594762960455}}

> Remove the usage of workqueue for Overseer
> --
>
> Key: SOLR-11443
> URL: https://issues.apache.org/jira/browse/SOLR-11443
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Cao Manh Dat
>Assignee: Cao Manh Dat
> Attachments: SOLR-11443.patch
>
>
> If we can remove the usage of workqueue, We can save a lot of IO blocking in 
> Overseer, hence boost performance a lot.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-11441) windows shell splits args on "=" so we should consider updating our docs to always quote args like -Dfoo=bar or improve bin/solr.cmd to account for this

2017-10-06 Thread Jason Gerlowski (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11441?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gerlowski updated SOLR-11441:
---
Attachment: SOLR-11441.patch

I was able to reproduce the issue the same way that Dawid described: I can 
reproduce it in PowerShell, but not in the "standard" Command Prompt.

This does seem like something that would be nice to fix outright, but in case 
no one has the time to get to it soon, I've attached a doc-patch which updates 
the relevant {{bin/solr}} commands in the ref-guide to use quoting and adds a 
short blurb about the use of double-quotes to the main page for the 
{{bin/solr}} scripts.

It'd be better to actually fix the issue, but failing that, this patch might be 
useful.

> windows shell splits args on "=" so we should consider updating our docs to 
> always quote args like -Dfoo=bar or improve bin/solr.cmd to account for this
> 
>
> Key: SOLR-11441
> URL: https://issues.apache.org/jira/browse/SOLR-11441
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation, scripts and tools
> Environment: Windows 10, possible other versions as well (presumably 
> not when running cygwin?)
>Reporter: Hoss Man
> Attachments: SOLR-11441.patch
>
>
> confusing exchange with a user on freenode#solr led to this discovery...
> {noformat}
> 14:07 < sara_:#solr> New question: bin/solr start -e techproducts 
> -Dsolr.ltr.enabled=true
> 14:07 < sara_:#solr> gave me invalid command-line option:true
> 14:07 < sara_:#solr> anyone knows why?
> ...
> 15:02 < sara_:#solr> i have 6.6.1 @elyograg
> 15:03 < sara_:#solr> mine is a windows 10 machine
> ...
> 15:28 < sara_:#solr> @elyograg i just downloaded solr-7.0.0 and ran bin/solr 
> start -e techproducts -Dsolr.ltr.enabled=true
> 15:28 < sara_:#solr> it still gave me invalid command-line
> ...
> 15:29 <@hoss:#solr> sara_: the only thing i can think of is that windows 10 
> is splitting your command line on '=' ? ... can you try 
> quoting the entire command line arg so the script gets 
> the entire -Dsolr.ltr.enabled=true ? (not sure how to quote 
> strings in the windows command shell -- i would assume 
> "-Dsolr.ltr.enabled=true"
> 15:32 <@hoss:#solr> sigh ... yes, aparently windows things "=" is a shell 
> delimiter: https://ss64.com/nt/syntax-esc.html
> 15:33 <@hoss:#solr> s/shell delimiter/parameter delimiter in shell commands/
> 15:33 < sara_:#solr> you are genius!
> 15:34 < sara_:#solr> you and elyograg. you guys are fantastic. Saving me from 
> looking at the cmd script or shell script
> 15:34 <@hoss:#solr> sara_: do i have your permission to copy/paste this 
> exchange into our bug tracker as a note about updating our docs 
> (and maybe making the solr.cmd smart enough to handle 
> this) ?
> 15:45 < sara_:#solr> sure of course
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-11443) Remove the usage of workqueue for Overseer

2017-10-06 Thread Cao Manh Dat (JIRA)
Cao Manh Dat created SOLR-11443:
---

 Summary: Remove the usage of workqueue for Overseer
 Key: SOLR-11443
 URL: https://issues.apache.org/jira/browse/SOLR-11443
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Cao Manh Dat
Assignee: Cao Manh Dat


If we can remove the usage of the workqueue, we can save a lot of IO blocking 
in the Overseer, and hence boost performance a lot.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11442) Slightly prettier table of contents in ref guide

2017-10-06 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11442?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16194611#comment-16194611
 ] 

Steve Rowe commented on SOLR-11442:
---

bq. argh can someone migrate this to solr

done

> Slightly prettier table of contents in ref guide
> 
>
> Key: SOLR-11442
> URL: https://issues.apache.org/jira/browse/SOLR-11442
> Project: Solr
>  Issue Type: Improvement
>Reporter: Gus Heck
> Attachments: prettier_toc.patch, Screen Shot 2017-10-05 at 9.13.13 
> PM.png, Screen Shot 2017-10-05 at 9.13.37 PM.png
>
>
> This has been irking me, and the fix is dead simple... The table of contents 
> in the ref guide is silly skinny taking up only 300px leading to things like 
> the attached screen shots...



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Moved] (SOLR-11442) Slightly prettier table of contents in ref guide

2017-10-06 Thread Steve Rowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11442?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Rowe moved LUCENE-7987 to SOLR-11442:
---

Lucene Fields:   (was: New)
  Key: SOLR-11442  (was: LUCENE-7987)
  Project: Solr  (was: Lucene - Core)

> Slightly prettier table of contents in ref guide
> 
>
> Key: SOLR-11442
> URL: https://issues.apache.org/jira/browse/SOLR-11442
> Project: Solr
>  Issue Type: Improvement
>Reporter: Gus Heck
> Attachments: prettier_toc.patch, Screen Shot 2017-10-05 at 9.13.13 
> PM.png, Screen Shot 2017-10-05 at 9.13.37 PM.png
>
>
> This has been irking me, and the fix is dead simple... The table of contents 
> in the ref guide is silly skinny taking up only 300px leading to things like 
> the attached screen shots...



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11299) Time partitioned collections (umbrella issue)

2017-10-06 Thread Gus Heck (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11299?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16194601#comment-16194601
 ] 

Gus Heck commented on SOLR-11299:
-

One thought that comes to mind is that, with deletion of old collections, we 
could more or less think of this as a Solr-collection-based ring buffer...

The implicit assumption seems to be that writes are "mostly ordered" and that 
severely out-of-order writes might be rejected? I think that's probably a 
critical assumption, since I imagine we'll have an alias that moves from 
collection to collection for writes. Even if CloudSolrClient is able to write 
to the first collection in a multi-collection alias, this still applies, since 
we would need to reject a write not appropriate for that partition. And if that 
change is made, does it have the potential to surprise folks who write to an 
alias and find all the docs in only one collection? Some sort of 
collection-level routing will be needed if pre-allocation is to be useful in 
catching "early" or "late" writes near partition boundaries...

Thoughts on the possible URP/DURP: maybe it's always present by default, but a 
silent no-op unless it sees that a time-partitioned collection is being 
accessed, and only then does it do anything? This would require some highly 
efficient way of checking whether something is a time-series collection. Maybe 
a mandatory suffix/prefix on the collection name (".tpc" or "TPC-" or some 
such) so that there's no need to look anything up in ZooKeeper etc. to know 
whether it's a time series...? The downside is the potential for accidentally 
triggering it, so maybe a second, more expensive check (attempt to parse out 
date-ness from the name, ask ZooKeeper... whatever) could then revert to a 
no-op if it failed, so that slowdown rather than failure is the impact of an 
inadvertent suffix/prefix? The suffix/prefix denoting time-series collections 
could be configurable in solr.xml to make it possible to escape from naming 
clashes.
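
A minimal sketch of that two-stage check, using placeholder types rather than 
Solr's actual URP API (the suffix, the expensive-check hook, and the class name 
are illustrative assumptions): a cheap name test gates ordinary collections out 
almost for free, and a slower confirmation can still demote a false positive to 
a no-op, so an unlucky name costs a slowdown rather than a failure.

{code}
import java.util.function.Predicate;

public class TimePartitionGate {

  private final String suffix;                    // e.g. ".tpc", configurable (solr.xml in the proposal)
  private final Predicate<String> expensiveCheck; // e.g. a ZooKeeper lookup

  public TimePartitionGate(String suffix, Predicate<String> expensiveCheck) {
    this.suffix = suffix;
    this.expensiveCheck = expensiveCheck;
  }

  /** Returns true only if the update should be handled as time-partitioned. */
  public boolean applies(String collectionName) {
    if (!collectionName.endsWith(suffix)) {
      return false;                               // fast path: plain collections pay almost nothing
    }
    return expensiveCheck.test(collectionName);   // confirm; revert to no-op if it fails
  }
}
{code}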

Another thought is that while date/time is the objective here, it would seem 
that any numeric field should work...

> Time partitioned collections (umbrella issue)
> -
>
> Key: SOLR-11299
> URL: https://issues.apache.org/jira/browse/SOLR-11299
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Reporter: David Smiley
>Assignee: David Smiley
>
> Solr ought to have the ability to manage large-scale time-series data (think 
> logs or sensor data / IOT) itself without a lot of manual/external work.  The 
> most naive and painless approach today is to create a collection with a high 
> numShards with hash routing but this isn't as good as partitioning the 
> underlying indexes by time for these reasons:
> * Easy to scale up/down horizontally as data/requirements change.  (No need 
> to over-provision, use shard splitting, or re-index with different config)
> * Faster queries: 
> ** can search fewer shards, reducing overall load
> ** realtime search is more tractable (since most shards are stable -- 
> good caches)
> ** "recent" shards (that might be queried more) can be allocated to 
> faster hardware
> ** aged out data is simply removed, not marked as deleted.  Deleted docs 
> still have search overhead.
> * Outages of a shard result in a degraded but sometimes a useful system 
> nonetheless (compare to random subset missing)
> Ideally you could set this up once and then simply work with a collection 
> (potentially actually an alias) in a normal way (search or update), letting 
> Solr handle the addition of new partitions, removing of old ones, and 
> appropriate routing of requests depending on their nature.
> This issue is an umbrella issue for the particular tasks that will make it 
> all happen -- either subtasks or issue linking.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5753) Domain lists for UAX_URL_EMAIL analyzer are incomplete - cannot recognize ".local" among others

2017-10-06 Thread Quentin BIOJOUT (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5753?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16194579#comment-16194579
 ] 

Quentin BIOJOUT commented on LUCENE-5753:
-

Hi,

Any update?

Thx.

> Domain lists for UAX_URL_EMAIL analyzer are incomplete - cannot recognize 
> ".local" among others
> ---
>
> Key: LUCENE-5753
> URL: https://issues.apache.org/jira/browse/LUCENE-5753
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/analysis
>Reporter: Steve Merritt
>
> uax_url_email analyzer appears unable to recognize the ".local" TLD among 
> others. Bug can be reproduced by
> curl -XGET 
> "ADDRESS/INDEX/_analyze?text=First%20Last%20lname@section.mycorp.local=uax_url_email"
> will parse "ln...@section.my" and "corp.local" as separate tokens, as opposed 
> to
> curl -XGET 
> "ADDRESS/INDEX/_analyze?text=first%20last%20ln...@section.mycorp.org=uax_url_email"
> which will recognize "ln...@section.mycorp.org".
> Can this be fixed by updating to a newer version? I am running ElasticSearch 
> 0.90.5 and whatever Lucene version sits underneath that. My suspicion is that 
> the TLD list the analyzer relies on (http://www.internic.net/zones/root.zone, 
> I think?) is incomplete and needs updating. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-7.0-Linux (64bit/jdk1.8.0_144) - Build # 429 - Unstable!

2017-10-06 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.0-Linux/429/
Java: 64bit/jdk1.8.0_144 -XX:+UseCompressedOops -XX:+UseG1GC

3 tests failed.
FAILED:  org.apache.solr.cloud.TestStressCloudBlindAtomicUpdates.test_dv

Error Message:
java.lang.RuntimeException: Error from server at 
http://127.0.0.1:46745/solr/test_col: Async exception during distributed 
update: Error from server at 
http://127.0.0.1:37643/solr/test_col_shard2_replica_n1: Server Error
request: 
http://127.0.0.1:37643/solr/test_col_shard2_replica_n1/update?update.distrib=TOLEADER=http%3A%2F%2F127.0.0.1%3A46745%2Fsolr%2Ftest_col_shard2_replica_n2%2F=javabin=2
 Remote error message: Failed synchronous update on shard StdNode: 
http://127.0.0.1:46745/solr/test_col_shard2_replica_n2/ update: 
org.apache.solr.client.solrj.request.UpdateRequest@5c536e33

Stack Trace:
java.util.concurrent.ExecutionException: java.lang.RuntimeException: Error from 
server at http://127.0.0.1:46745/solr/test_col: Async exception during 
distributed update: Error from server at 
http://127.0.0.1:37643/solr/test_col_shard2_replica_n1: Server Error



request: 
http://127.0.0.1:37643/solr/test_col_shard2_replica_n1/update?update.distrib=TOLEADER=http%3A%2F%2F127.0.0.1%3A46745%2Fsolr%2Ftest_col_shard2_replica_n2%2F=javabin=2
Remote error message: Failed synchronous update on shard StdNode: 
http://127.0.0.1:46745/solr/test_col_shard2_replica_n2/ update: 
org.apache.solr.client.solrj.request.UpdateRequest@5c536e33
at 
__randomizedtesting.SeedInfo.seed([25F5C08871C40AB7:13E1A2CEFB9930A6]:0)
at java.util.concurrent.FutureTask.report(FutureTask.java:122)
at java.util.concurrent.FutureTask.get(FutureTask.java:192)
at 
org.apache.solr.cloud.TestStressCloudBlindAtomicUpdates.checkField(TestStressCloudBlindAtomicUpdates.java:283)
at 
org.apache.solr.cloud.TestStressCloudBlindAtomicUpdates.test_dv(TestStressCloudBlindAtomicUpdates.java:195)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 

[JENKINS] Lucene-Solr-master-Linux (32bit/jdk1.8.0_144) - Build # 20616 - Still Failing!

2017-10-06 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/20616/
Java: 32bit/jdk1.8.0_144 -server -XX:+UseG1GC

All tests passed

Build Log:
[...truncated 56645 lines...]
BUILD FAILED
/home/jenkins/workspace/Lucene-Solr-master-Linux/build.xml:826: The following 
error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-master-Linux/build.xml:706: The following 
error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-master-Linux/build.xml:693: Source checkout 
is dirty (unversioned/missing files) after running tests!!! Offending files:
* lucene/licenses/morfologik-ukrainian-search-3.7.5.jar.sha1

Total time: 78 minutes 51 seconds
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
[WARNINGS] Skipping publisher since build result is FAILURE
Recording test results
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[JENKINS] Lucene-Solr-Tests-7.0 - Build # 145 - Still Unstable

2017-10-06 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-7.0/145/

2 tests failed.
FAILED:  org.apache.solr.cloud.ForceLeaderTest.testReplicasInLIRNoLeader

Error Message:
Doc with id=1 not found in http://127.0.0.1:40020/forceleader_test_collection 
due to: Path not found: /id; rsp={doc=null}

Stack Trace:
java.lang.AssertionError: Doc with id=1 not found in 
http://127.0.0.1:40020/forceleader_test_collection due to: Path not found: /id; 
rsp={doc=null}
at 
__randomizedtesting.SeedInfo.seed([F2324FBD33C2251C:14A57B7D0A40DC7D]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.cloud.HttpPartitionTest.assertDocExists(HttpPartitionTest.java:603)
at 
org.apache.solr.cloud.HttpPartitionTest.assertDocsExistInAllReplicas(HttpPartitionTest.java:556)
at 
org.apache.solr.cloud.ForceLeaderTest.testReplicasInLIRNoLeader(ForceLeaderTest.java:142)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:993)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:968)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 

[jira] [Commented] (SOLR-10265) Overseer can become the bottleneck in very large clusters

2017-10-06 Thread Cao Manh Dat (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10265?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16194319#comment-16194319
 ] 

Cao Manh Dat commented on SOLR-10265:
-

Maybe the problem here is that the Overseer processes all messages in a single 
thread (with a lot of blocking IO every time we peek, poll messages, and write 
the new clusterstate)? So a powerful machine dedicated just to the Overseer is 
largely wasted.
The idea here is that each collection has its own {{states.json}}, so messages 
for different collections can be processed and updated in parallel. It can be 
tricky to implement, but if we want a cluster with 400k cores, we cannot just 
use a single thread to process 1.6M messages.
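
A rough sketch of that per-collection parallelism, with placeholder types 
rather than the real Overseer code (the interface names and the 
executor-per-collection layout are assumptions for illustration): messages for 
different collections run on different single-threaded executors, so 
collections update in parallel while updates within one collection stay 
strictly ordered.

{code}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class PerCollectionDispatcher {

  interface StateMessage {
    String collection();
    void applyAndPersist();   // e.g. rewrite that collection's own state file
  }

  private final Map<String, ExecutorService> executors = new ConcurrentHashMap<>();

  public void dispatch(StateMessage message) {
    executors
        .computeIfAbsent(message.collection(), c -> Executors.newSingleThreadExecutor())
        .submit(message::applyAndPersist);
  }

  public void shutdown() {
    executors.values().forEach(ExecutorService::shutdown);
  }
}
{code}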

> Overseer can become the bottleneck in very large clusters
> -
>
> Key: SOLR-10265
> URL: https://issues.apache.org/jira/browse/SOLR-10265
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Varun Thacker
>
> Let's say we have a large cluster. Some numbers:
> - To ingest the data at the volume we want to I need roughly a 600 shard 
> collection.
> - Index into the collection for 1 hour and then create a new collection 
> - For a 30 days retention window with these numbers we would end up wth  
> ~400k cores in the cluster
> - Just a rolling restart of this cluster can take hours because the overseer 
> queue gets backed up. If a few nodes looses connectivity to ZooKeeper then 
> also we can end up with lots of messages in the Overseer queue
> With some tests here are the two high level problems we have identified:
> 1> How fast can the overseer process operations:
> The rate at which the overseer processes events is too slow at this scale. 
> I ran {{OverseerTest#testPerformance}} which creates 10 collections ( 1 shard 
> 1 replica ) and generates 20k state change events. The test took 119 seconds 
> to run on my machine which means ~170 events a second. Let's say a server can 
> process 5x of my machine so 1k events a second. 
> Total events generated by a 400k replica cluster = 400k * 4 ( state changes 
> till replica become active ) = 1.6M / 1k events a second will be 1600 minutes.
> Second observation was that the rate at which the overseer can process events 
> slows down when the number of items in the queue gets larger
> I ran the same {{OverseerTest#testPerformance}} but changed the number of 
> events generated to 2000 instead. The test took only 5 seconds to run. So it 
> was a lot faster than the test run which generated 20k events
> 2> State changes overwhelming ZK:
> For every state change Solr is writing out a big state.json to zookeeper. 
> This can lead to the zookeeper transaction logs going out of control even 
> with auto purging etc set . 
> I haven't debugged why the transaction logs ran into terabytes without taking 
> into snapshots but this was my assumption based on the other problems we 
> observed



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11441) windows shell splits args on "=" so we should consider updating our docs to always quote args like -Dfoo=bar or improve bin/solr.cmd to account for this

2017-10-06 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11441?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16194265#comment-16194265
 ] 

Uwe Schindler commented on SOLR-11441:
--

Reading the IRC chat, the problem seems much simpler. On Windows the forward 
slash is a separator, so to execute the command you need to use a backslash 
after "bin".

Also, the user does not say which shell she uses.

> windows shell splits args on "=" so we should consider updating our docs to 
> always quote args like -Dfoo=bar or improve bin/solr.cmd to account for this
> 
>
> Key: SOLR-11441
> URL: https://issues.apache.org/jira/browse/SOLR-11441
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation, scripts and tools
> Environment: Windows 10, possible other versions as well (presumably 
> not when running cygwin?)
>Reporter: Hoss Man
>
> confusing exchange with a user on freenode#solr led to this discovery...
> {noformat}
> 14:07 < sara_:#solr> New question: bin/solr start -e techproducts 
> -Dsolr.ltr.enabled=true
> 14:07 < sara_:#solr> gave me invalid command-line option:true
> 14:07 < sara_:#solr> anyone knows why?
> ...
> 15:02 < sara_:#solr> i have 6.6.1 @elyograg
> 15:03 < sara_:#solr> mine is a windows 10 machine
> ...
> 15:28 < sara_:#solr> @elyograg i just downloaded solr-7.0.0 and ran bin/solr 
> start -e techproducts -Dsolr.ltr.enabled=true
> 15:28 < sara_:#solr> it still gave me invalid command-line
> ...
> 15:29 <@hoss:#solr> sara_: the only thing i can think of is that windows 10 
> is splitting your command line on '=' ? ... can you try 
> quoting the entire command line arg so the script gets 
> the entire -Dsolr.ltr.enabled=true ? (not sure how to quote 
> strings in the windows command shell -- i would assume 
> "-Dsolr.ltr.enabled=true"
> 15:32 <@hoss:#solr> sigh ... yes, aparently windows things "=" is a shell 
> delimiter: https://ss64.com/nt/syntax-esc.html
> 15:33 <@hoss:#solr> s/shell delimiter/parameter delimiter in shell commands/
> 15:33 < sara_:#solr> you are genius!
> 15:34 < sara_:#solr> you and elyograg. you guys are fantastic. Saving me from 
> looking at the cmd script or shell script
> 15:34 <@hoss:#solr> sara_: do i have your permission to copy/paste this 
> exchange into our bug tracker as a note about updating our docs 
> (and maybe making the solr.cmd smart enough to handle 
> this) ?
> 15:45 < sara_:#solr> sure of course
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-Linux (64bit/jdk-9) - Build # 20615 - Still Failing!

2017-10-06 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/20615/
Java: 64bit/jdk-9 -XX:+UseCompressedOops -XX:+UseG1GC --illegal-access=deny

All tests passed

Build Log:
[...truncated 53745 lines...]
BUILD FAILED
/home/jenkins/workspace/Lucene-Solr-master-Linux/build.xml:826: The following 
error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-master-Linux/build.xml:706: The following 
error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-master-Linux/build.xml:693: Source checkout 
is dirty (unversioned/missing files) after running tests!!! Offending files:
* lucene/licenses/morfologik-ukrainian-search-3.7.5.jar.sha1

Total time: 80 minutes 24 seconds
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
[WARNINGS] Skipping publisher since build result is FAILURE
Recording test results
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[jira] [Commented] (SOLR-11441) windows shell splits args on "=" so we should consider updating our docs to always quote args like -Dfoo=bar or improve bin/solr.cmd to account for this

2017-10-06 Thread Dawid Weiss (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11441?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16194260#comment-16194260
 ] 

Dawid Weiss commented on SOLR-11441:


So, to summarize -- it's a problem somewhere in how {{solr.cmd}} 
parses/processes the command line in PowerShell. PowerShell alone accepts 
{{-Dfoo=bar}} just fine and passes it to Java as a single argument.

> windows shell splits args on "=" so we should consider updating our docs to 
> always quote args like -Dfoo=bar or improve bin/solr.cmd to account for this
> 
>
> Key: SOLR-11441
> URL: https://issues.apache.org/jira/browse/SOLR-11441
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation, scripts and tools
> Environment: Windows 10, possible other versions as well (presumably 
> not when running cygwin?)
>Reporter: Hoss Man
>
> confusing exchange with a user on freenode#solr led to this discovery...
> {noformat}
> 14:07 < sara_:#solr> New question: bin/solr start -e techproducts 
> -Dsolr.ltr.enabled=true
> 14:07 < sara_:#solr> gave me invalid command-line option:true
> 14:07 < sara_:#solr> anyone knows why?
> ...
> 15:02 < sara_:#solr> i have 6.6.1 @elyograg
> 15:03 < sara_:#solr> mine is a windows 10 machine
> ...
> 15:28 < sara_:#solr> @elyograg i just downloaded solr-7.0.0 and ran bin/solr 
> start -e techproducts -Dsolr.ltr.enabled=true
> 15:28 < sara_:#solr> it still gave me invalid command-line
> ...
> 15:29 <@hoss:#solr> sara_: the only thing i can think of is that windows 10 
> is splitting your command line on '=' ? ... can you try 
> quoting the entire command line arg so the script gets 
> the entire -Dsolr.ltr.enabled=true ? (not sure how to quote 
> strings in the windows command shell -- i would assume 
> "-Dsolr.ltr.enabled=true"
> 15:32 <@hoss:#solr> sigh ... yes, aparently windows things "=" is a shell 
> delimiter: https://ss64.com/nt/syntax-esc.html
> 15:33 <@hoss:#solr> s/shell delimiter/parameter delimiter in shell commands/
> 15:33 < sara_:#solr> you are genius!
> 15:34 < sara_:#solr> you and elyograg. you guys are fantastic. Saving me from 
> looking at the cmd script or shell script
> 15:34 <@hoss:#solr> sara_: do i have your permission to copy/paste this 
> exchange into our bug tracker as a note about updating our docs 
> (and maybe making the solr.cmd smart enough to handle 
> this) ?
> 15:45 < sara_:#solr> sure of course
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11441) windows shell splits args on "=" so we should consider updating our docs to always quote args like -Dfoo=bar or improve bin/solr.cmd to account for this

2017-10-06 Thread Dawid Weiss (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11441?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16194254#comment-16194254
 ] 

Dawid Weiss commented on SOLR-11441:


I successfully ran the command mentioned in the issue -- no problems (see 
below). 

{code}
C:\_tmp\solr-7.0.0>bin\solr start -e techproducts -Dsolr.ltr.enabled=true
Creating Solr home directory C:\_tmp\solr-7.0.0\example\techproducts\solr

Starting up Solr on port 8983 using command:
"C:\_tmp\solr-7.0.0\bin\solr.cmd" start -p 8983 -s 
"C:\_tmp\solr-7.0.0\example\techproducts\solr" -Dsolr.ltr.enabled=true

Waiting up to 30 to see Solr running on port 8983

Copying configuration to new core instance directory:
C:\_tmp\solr-7.0.0\example\techproducts\solr\techproducts

Creating new core 'techproducts' using command:
http://localhost:8983/solr/admin/cores?action=CREATE=techproducts=techproducts

Started Solr server on port 8983. Happy searching!
{
  "responseHeader":{
"status":0,
"QTime":10199},
  "core":"techproducts"}


Indexing tech product example docs from C:\_tmp\solr-7.0.0\example\exampledocs
SimplePostTool version 5.0.0
Posting files to [base] url http://localhost:8983/solr/techproducts/update 
using content-type application/xml...
POSTing file gb18030-example.xml to [base]
POSTing file hd.xml to [base]
POSTing file ipod_other.xml to [base]
POSTing file ipod_video.xml to [base]
POSTing file manufacturers.xml to [base]
POSTing file mem.xml to [base]
POSTing file money.xml to [base]
POSTing file monitor.xml to [base]
POSTing file monitor2.xml to [base]
POSTing file mp500.xml to [base]
POSTing file sd500.xml to [base]
POSTing file solr.xml to [base]
POSTing file utf8-example.xml to [base]
POSTing file vidcard.xml to [base]
14 files indexed.
COMMITting Solr index changes to 
http://localhost:8983/solr/techproducts/update...
Time spent: 0:00:00.545

Solr techproducts example launched successfully. Direct your Web browser to 
http://localhost:8983/solr to visit the Solr Admin UI
{code}

The user has a forward slash in her {{solr/bin}} path; this won't find the solr 
command when executed in {{cmd}}. I think she was using PowerShell instead, and 
indeed the script then fails:

{code}
PS C:\_tmp\solr-7.0.0> bin/solr start -e techproducts -Dsolr.ltr.enabled=true

Invalid command-line option: true
{code}









[jira] [Commented] (SOLR-11441) windows shell splits args on "=" so we should consider updating our docs to always quote args like -Dfoo=bar or improve bin/solr.cmd to account for this

2017-10-06 Thread Dawid Weiss (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11441?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16194248#comment-16194248
 ] 

Dawid Weiss commented on SOLR-11441:


I don't know what that page says, but on my machine(s) arguments are 
definitely not split on '=' when passed to a raw Java process.
{code}
public class Test {
  public static void main(String[] args) {
for (int i = 0; i < args.length; i++) {
  System.out.println(i + ": " + args[i]);
}
  }
}
{code}

Windows 10, with latest patches.

When using {{cmd}}:
{code}
C:\_tmp>java -cp . Test -Dfoo=bar
0: -Dfoo=bar
{code}

When using {{powershell}}:
{code}
PS C:\_tmp> java -cp . Test -Dfoo=bar
0: -Dfoo=bar
{code}







Re: min-should-match with slop and phrases?

2017-10-06 Thread Dawid Weiss
Hi Hoss,

Thanks for the feedback -- it sucks to know so much about deep internals
like index merging and so little about end-user-facing stuff like
query parsers; I should educate myself on those fronts.

> Maybe i'm misunderstanding the objective,

I can't say for sure, but it seems like the customer wishes to favor
precision over recall and be able to search for longer phrases, while
allowing certain distortions. Think: "Lebron James nike shoes" and
allowing some term reorderings and some missing terms (a document with
just "lebron's nike shoes" should match).

This isn't generic search where just scoring better matches and
pulling them up to the top is enough. We also want to get rid of
documents without enough key terms (hence the need for
minShouldMatch).
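
For concreteness, the minShouldMatch half on its own is easy to express
with a plain BooleanQuery (a minimal sketch against the Lucene 7.x API;
the helper name, field name and terms below are invented for
illustration):

{code}
import org.apache.lucene.index.Term;
import org.apache.lucene.search.BooleanClause;
import org.apache.lucene.search.BooleanQuery;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.TermQuery;

public class MinShouldMatchSketch {
  /** Matches documents containing at least minShouldMatch of the given terms. */
  public static Query atLeastNTerms(String field, int minShouldMatch, String... terms) {
    BooleanQuery.Builder builder = new BooleanQuery.Builder();
    for (String term : terms) {
      // Every term becomes a SHOULD clause; minimumNumberShouldMatch only counts these.
      builder.add(new TermQuery(new Term(field, term)), BooleanClause.Occur.SHOULD);
    }
    builder.setMinimumNumberShouldMatch(minShouldMatch);
    return builder.build();
  }
}
{code}

Something like {{atLeastNTerms("body", 3, "lebron", "james", "nike",
"shoes")}} would keep a document containing just "lebron's nike shoes"
(assuming the analyzer reduces the possessive to "lebron"), but it
treats documents as bags of words, so it says nothing about order or
proximity.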

This is all my guess, but it seems like a valid intent/use case to me,
and I was a bit surprised that I couldn't help that person formulate a
single query (or a set of Solr edismax parameters) that could do it.

> how you could do what you want (or at least what i think you want) with the 
> existing span queries?

At the code level I was thinking of a custom query subclassing
FilterSpans, applying a minShouldMatch constraint on top of a
single-term-based "should" SpanNearQuery (in order, with a slop
factor). There are other alternatives, but they can lead to an
explosion of query clauses, so I don't think they are a good way to
go. Other ideas welcome.
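
For reference, the ordered, sloppy building block I have in mind is
roughly the following (just a sketch against the Lucene 7.x spans API,
with an invented helper name; the FilterSpans-based wrapper enforcing a
minShouldMatch-like constraint on top of it is the part that would
still need to be written):

{code}
import org.apache.lucene.index.Term;
import org.apache.lucene.search.spans.SpanNearQuery;
import org.apache.lucene.search.spans.SpanQuery;
import org.apache.lucene.search.spans.SpanTermQuery;

public class SloppyPhraseSketch {
  /** An in-order near query over the individual terms with the given slop. */
  public static SpanQuery orderedNear(String field, int slop, String... terms) {
    // ordered = true keeps the original term order; slop allows gaps between the
    // terms (pass false instead to also tolerate re-orderings).
    SpanNearQuery.Builder builder = new SpanNearQuery.Builder(field, true);
    builder.setSlop(slop);
    for (String term : terms) {
      builder.addClause(new SpanTermQuery(new Term(field, term)));
    }
    return builder.build();
  }
}
{code}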

Dawid

On Fri, Oct 6, 2017 at 12:06 AM, Chris Hostetter
 wrote:
>
> : I've been asked today about whether there is a way to express a query like:
> :
> : q="foo bar baz"
>
>
> : My tentative answer based on the code is that mm (min should match)
> : only applies to Boolean queries (clauses), so there is no way to mix
>
> that is correct -- minNrShouldMatch is only a BQ concept (the "should"
> literally refers to the SHOULD clause type)
>
> : it with phrase queries... One could simulate this with span queries,
> : but there is no query parser available that would permit creating such
> : a query from user input.
>
> Maybe i'm misunderstanding the objective, because i can't at all imagine
> how you could do what you want (or at least what i think you want) with
> the existing span queries?
>
>
> -Hoss
> http://www.lucidworks.com/
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org