[JENKINS] Lucene-Solr-7.x-Linux (64bit/jdk1.8.0_172) - Build # 3191 - Still Unstable!

2018-12-07 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/3191/
Java: 64bit/jdk1.8.0_172 -XX:-UseCompressedOops -XX:+UseParallelGC

4 tests failed.
FAILED:  org.apache.solr.cloud.BasicDistributedZkTest.test

Error Message:
Error from server at http://127.0.0.1:41151/collection1: Async exception during 
distributed update: Error from server at 
http://127.0.0.1:38475/collection1_shard1_replica_n1: Can not find: 
/collection1_shard1_replica_n1/update

request: 
http://127.0.0.1:38475/collection1_shard1_replica_n1/update?update.chain=distrib-dup-test-chain-explicit&update.distrib=TOLEADER&distrib.from=http%3A%2F%2F127.0.0.1%3A41151%2Fcollection1_shard2_replica_n2%2F&wt=javabin&version=2

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at http://127.0.0.1:41151/collection1: Async exception during 
distributed update: Error from server at 
http://127.0.0.1:38475/collection1_shard1_replica_n1: Can not find: 
/collection1_shard1_replica_n1/update



request: 
http://127.0.0.1:38475/collection1_shard1_replica_n1/update?update.chain=distrib-dup-test-chain-explicit&update.distrib=TOLEADER&distrib.from=http%3A%2F%2F127.0.0.1%3A41151%2Fcollection1_shard2_replica_n2%2F&wt=javabin&version=2
at 
__randomizedtesting.SeedInfo.seed([72CF6DD6AA989198:FA9B520C0464FC60]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:643)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:211)
at 
org.apache.solr.BaseDistributedSearchTestCase.add(BaseDistributedSearchTestCase.java:557)
at 
org.apache.solr.cloud.BasicDistributedZkTest.testUpdateProcessorsRunOnlyOnce(BasicDistributedZkTest.java:746)
at 
org.apache.solr.cloud.BasicDistributedZkTest.test(BasicDistributedZkTest.java:424)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:1063)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:1035)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 

[JENKINS-EA] Lucene-Solr-7.6-Linux (64bit/jdk-12-ea+12) - Build # 75 - Still Unstable!

2018-12-07 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.6-Linux/75/
Java: 64bit/jdk-12-ea+12 -XX:+UseCompressedOops -XX:+UseParallelGC

7 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.client.solrj.request.TestV2Request

Error Message:
1 thread leaked from SUITE scope at 
org.apache.solr.client.solrj.request.TestV2Request: 1) Thread[id=177, 
name=Connection evictor, state=TIMED_WAITING, group=TGRP-TestV2Request] 
at java.base@12-ea/java.lang.Thread.sleep(Native Method) at 
app//org.apache.http.impl.client.IdleConnectionEvictor$1.run(IdleConnectionEvictor.java:66)
 at java.base@12-ea/java.lang.Thread.run(Thread.java:835)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE 
scope at org.apache.solr.client.solrj.request.TestV2Request: 
   1) Thread[id=177, name=Connection evictor, state=TIMED_WAITING, 
group=TGRP-TestV2Request]
at java.base@12-ea/java.lang.Thread.sleep(Native Method)
at 
app//org.apache.http.impl.client.IdleConnectionEvictor$1.run(IdleConnectionEvictor.java:66)
at java.base@12-ea/java.lang.Thread.run(Thread.java:835)
at __randomizedtesting.SeedInfo.seed([5300DB420A857595]:0)


FAILED:  
junit.framework.TestSuite.org.apache.solr.client.solrj.request.TestV2Request

Error Message:
1 thread leaked from SUITE scope at 
org.apache.solr.client.solrj.request.TestV2Request: 1) Thread[id=479, 
name=Connection evictor, state=TIMED_WAITING, group=TGRP-TestV2Request] 
at java.base@12-ea/java.lang.Thread.sleep(Native Method) at 
app//org.apache.http.impl.client.IdleConnectionEvictor$1.run(IdleConnectionEvictor.java:66)
 at java.base@12-ea/java.lang.Thread.run(Thread.java:835)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE 
scope at org.apache.solr.client.solrj.request.TestV2Request: 
   1) Thread[id=479, name=Connection evictor, state=TIMED_WAITING, 
group=TGRP-TestV2Request]
at java.base@12-ea/java.lang.Thread.sleep(Native Method)
at 
app//org.apache.http.impl.client.IdleConnectionEvictor$1.run(IdleConnectionEvictor.java:66)
at java.base@12-ea/java.lang.Thread.run(Thread.java:835)
at __randomizedtesting.SeedInfo.seed([5300DB420A857595]:0)


FAILED:  
junit.framework.TestSuite.org.apache.solr.client.solrj.request.TestV2Request

Error Message:
1 thread leaked from SUITE scope at 
org.apache.solr.client.solrj.request.TestV2Request: 1) Thread[id=958, 
name=Connection evictor, state=TIMED_WAITING, group=TGRP-TestV2Request] 
at java.base@12-ea/java.lang.Thread.sleep(Native Method) at 
app//org.apache.http.impl.client.IdleConnectionEvictor$1.run(IdleConnectionEvictor.java:66)
 at java.base@12-ea/java.lang.Thread.run(Thread.java:835)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE 
scope at org.apache.solr.client.solrj.request.TestV2Request: 
   1) Thread[id=958, name=Connection evictor, state=TIMED_WAITING, 
group=TGRP-TestV2Request]
at java.base@12-ea/java.lang.Thread.sleep(Native Method)
at 
app//org.apache.http.impl.client.IdleConnectionEvictor$1.run(IdleConnectionEvictor.java:66)
at java.base@12-ea/java.lang.Thread.run(Thread.java:835)
at __randomizedtesting.SeedInfo.seed([5300DB420A857595]:0)


FAILED:  org.apache.solr.client.solrj.request.TestV2Request.testHttpSolrClient

Error Message:
Error from server at https://127.0.0.1:41765/solr: no such collection or alias

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteExecutionException: 
Error from server at https://127.0.0.1:41765/solr: no such collection or alias
at 
__randomizedtesting.SeedInfo.seed([5300DB420A857595:8B18C368ABB167B2]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteExecutionException.create(HttpSolrClient.java:830)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:620)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244)
at org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1260)
at 
org.apache.solr.client.solrj.request.TestV2Request.assertSuccess(TestV2Request.java:49)
at 
org.apache.solr.client.solrj.request.TestV2Request.doTest(TestV2Request.java:96)
at 
org.apache.solr.client.solrj.request.TestV2Request.testHttpSolrClient(TestV2Request.java:70)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at 

[JENKINS] Lucene-Solr-Tests-7.x - Build # 1100 - Unstable

2018-12-07 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-7.x/1100/

2 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.ChaosMonkeySafeLeaderWithPullReplicasTest

Error Message:
ObjectTracker found 4 object(s) that were not released!!! [InternalHttpClient, 
MMapDirectory, MMapDirectory, SolrCore] 
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException: 
org.apache.http.impl.client.InternalHttpClient  at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:42)
  at 
org.apache.solr.client.solrj.impl.HttpClientUtil.createClient(HttpClientUtil.java:321)
  at 
org.apache.solr.client.solrj.impl.HttpClientUtil.createClient(HttpClientUtil.java:330)
  at 
org.apache.solr.handler.IndexFetcher.createHttpClient(IndexFetcher.java:225)  
at org.apache.solr.handler.IndexFetcher.<init>(IndexFetcher.java:267)  at 
org.apache.solr.handler.ReplicationHandler.doFetch(ReplicationHandler.java:421) 
 at org.apache.solr.cloud.RecoveryStrategy.replicate(RecoveryStrategy.java:237) 
 at 
org.apache.solr.cloud.RecoveryStrategy.doReplicateOnlyRecovery(RecoveryStrategy.java:382)
  at 
org.apache.solr.cloud.RecoveryStrategy.doRecovery(RecoveryStrategy.java:328)  
at org.apache.solr.cloud.RecoveryStrategy.run(RecoveryStrategy.java:307)  at 
com.codahale.metrics.InstrumentedExecutorService$InstrumentedRunnable.run(InstrumentedExecutorService.java:176)
  at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)  
at java.util.concurrent.FutureTask.run(FutureTask.java:266)  at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:209)
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) 
 at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
 at java.lang.Thread.run(Thread.java:748)  
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException: 
org.apache.lucene.store.MMapDirectory  at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:42)
  at 
org.apache.solr.core.CachingDirectoryFactory.get(CachingDirectoryFactory.java:348)
  at org.apache.solr.update.SolrIndexWriter.create(SolrIndexWriter.java:95)  at 
org.apache.solr.update.DefaultSolrCoreState.createMainIndexWriter(DefaultSolrCoreState.java:257)
  at 
org.apache.solr.update.DefaultSolrCoreState.getIndexWriter(DefaultSolrCoreState.java:131)
  at org.apache.solr.core.SolrCore.openNewSearcher(SolrCore.java:2096)  at 
org.apache.solr.core.SolrCore.getSearcher(SolrCore.java:2255)  at 
org.apache.solr.core.SolrCore.initSearcher(SolrCore.java:1104)  at 
org.apache.solr.core.SolrCore.<init>(SolrCore.java:991)  at 
org.apache.solr.core.SolrCore.<init>(SolrCore.java:874)  at 
org.apache.solr.core.CoreContainer.createFromDescriptor(CoreContainer.java:1177)
  at org.apache.solr.core.CoreContainer.lambda$load$13(CoreContainer.java:689)  
at 
com.codahale.metrics.InstrumentedExecutorService$InstrumentedCallable.call(InstrumentedExecutorService.java:197)
  at java.util.concurrent.FutureTask.run(FutureTask.java:266)  at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:209)
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) 
 at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
 at java.lang.Thread.run(Thread.java:748)  
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException: 
org.apache.lucene.store.MMapDirectory  at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:42)
  at 
org.apache.solr.core.CachingDirectoryFactory.get(CachingDirectoryFactory.java:348)
  at 
org.apache.solr.handler.IndexFetcher.fetchLatestIndex(IndexFetcher.java:503)  
at org.apache.solr.handler.IndexFetcher.fetchLatestIndex(IndexFetcher.java:346) 
 at 
org.apache.solr.handler.ReplicationHandler.doFetch(ReplicationHandler.java:425) 
 at 
org.apache.solr.handler.ReplicationHandler.lambda$setupPolling$13(ReplicationHandler.java:1171)
  at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)  
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)  at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
  at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) 
 at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
 at java.lang.Thread.run(Thread.java:748)  
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException: 
org.apache.solr.core.SolrCore  at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:42)
  at org.apache.solr.core.SolrCore.<init>(SolrCore.java:1052)  at 
org.apache.solr.core.SolrCore.<init>(SolrCore.java:874)  at 

[jira] [Assigned] (SOLR-13041) SolrJ autoscaling Condition class has equals but no hashCode

2018-12-07 Thread Noble Paul (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13041?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul reassigned SOLR-13041:
-

Assignee: Noble Paul

> SolrJ autoscaling Condition class has equals but no hashCode
> 
>
> Key: SOLR-13041
> URL: https://issues.apache.org/jira/browse/SOLR-13041
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public (Default Security Level. Issues are Public) 
>  Components: clients - java
>Affects Versions: 7.5
>Reporter: Zsolt Gyulavari
>Assignee: Noble Paul
>Priority: Major
>  Labels: patch-available
> Attachments: SOLR-13041.patch
>
>
> SolrJ autoscaling Condition class has equals but no hashCode implementation.
> Instances are being used in a HashSet in Clause.testGroupNodes method, so 
> this could lead to unreliable behavior or increased memory consumption.
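
As a side note, the HashSet hazard is easy to demonstrate in isolation. The sketch 
below uses a hypothetical stand-in class (not the real SolrJ Condition) that 
overrides equals but not hashCode:

{code:java}
import java.util.HashSet;
import java.util.Objects;
import java.util.Set;

// Hypothetical stand-in, not the real SolrJ Condition class: equals() is
// overridden but hashCode() is not, so Object's identity hash is used.
class ConditionLike {
  final String name;
  final Object val;

  ConditionLike(String name, Object val) {
    this.name = name;
    this.val = val;
  }

  @Override
  public boolean equals(Object o) {
    if (this == o) return true;
    if (!(o instanceof ConditionLike)) return false;
    ConditionLike that = (ConditionLike) o;
    return Objects.equals(name, that.name) && Objects.equals(val, that.val);
  }
  // no hashCode() override -- breaks the equals/hashCode contract
}

public class HashSetContractDemo {
  public static void main(String[] args) {
    Set<ConditionLike> set = new HashSet<>();
    set.add(new ConditionLike("replica", 1));
    set.add(new ConditionLike("replica", 1)); // equal object, different identity hash

    // Typically prints 2 and then false: equal instances land in different
    // buckets, so the set keeps duplicates and lookups miss -- the unreliable
    // behavior / extra memory the issue description mentions.
    System.out.println(set.size());
    System.out.println(set.contains(new ConditionLike("replica", 1)));
  }
}
{code}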



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-http2-Linux (64bit/jdk-11) - Build # 48 - Still unstable!

2018-12-07 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-http2-Linux/48/
Java: 64bit/jdk-11 -XX:-UseCompressedOops -XX:+UseSerialGC

53 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.response.transform.TestSubQueryTransformerDistrib

Error Message:
1 thread leaked from SUITE scope at 
org.apache.solr.response.transform.TestSubQueryTransformerDistrib: 1) 
Thread[id=50686, name=qtp1446605275-50686, state=TIMED_WAITING, 
group=TGRP-TestSubQueryTransformerDistrib] at 
java.base@11/jdk.internal.misc.Unsafe.park(Native Method) at 
java.base@11/java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:234)
 at 
java.base@11/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2211)
 at 
app//org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.reservedWait(ReservedThreadExecutor.java:292)
 at 
app//org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:357)
 at 
app//org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:765)
 at 
app//org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:683)
 at java.base@11/java.lang.Thread.run(Thread.java:834)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE 
scope at org.apache.solr.response.transform.TestSubQueryTransformerDistrib: 
   1) Thread[id=50686, name=qtp1446605275-50686, state=TIMED_WAITING, 
group=TGRP-TestSubQueryTransformerDistrib]
at java.base@11/jdk.internal.misc.Unsafe.park(Native Method)
at 
java.base@11/java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:234)
at 
java.base@11/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2211)
at 
app//org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.reservedWait(ReservedThreadExecutor.java:292)
at 
app//org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:357)
at 
app//org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:765)
at 
app//org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:683)
at java.base@11/java.lang.Thread.run(Thread.java:834)
at __randomizedtesting.SeedInfo.seed([63BC6600B0696F1A]:0)


FAILED:  
junit.framework.TestSuite.org.apache.solr.response.transform.TestSubQueryTransformerDistrib

Error Message:
There are still zombie threads that couldn't be terminated:1) 
Thread[id=50686, name=qtp1446605275-50686, state=TIMED_WAITING, 
group=TGRP-TestSubQueryTransformerDistrib] at 
java.base@11/jdk.internal.misc.Unsafe.park(Native Method) at 
java.base@11/java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:234)
 at 
java.base@11/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2211)
 at 
app//org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.reservedWait(ReservedThreadExecutor.java:292)
 at 
app//org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:357)
 at 
app//org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:765)
 at 
app//org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:683)
 at java.base@11/java.lang.Thread.run(Thread.java:834)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: There are still zombie 
threads that couldn't be terminated:
   1) Thread[id=50686, name=qtp1446605275-50686, state=TIMED_WAITING, 
group=TGRP-TestSubQueryTransformerDistrib]
at java.base@11/jdk.internal.misc.Unsafe.park(Native Method)
at 
java.base@11/java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:234)
at 
java.base@11/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2211)
at 
app//org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.reservedWait(ReservedThreadExecutor.java:292)
at 
app//org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:357)
at 
app//org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:765)
at 
app//org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:683)
at java.base@11/java.lang.Thread.run(Thread.java:834)
at __randomizedtesting.SeedInfo.seed([63BC6600B0696F1A]:0)


FAILED:  
junit.framework.TestSuite.org.apache.solr.analytics.legacy.LegacyNoFacetCloudTest

Error Message:
1 thread leaked from SUITE scope at 
org.apache.solr.analytics.legacy.LegacyNoFacetCloudTest: 1) Thread[id=1671, 
name=qtp663904321-1671, state=TIMED_WAITING, 

[JENKINS] Lucene-Solr-BadApples-master-Linux (64bit/jdk-11) - Build # 131 - Unstable!

2018-12-07 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-BadApples-master-Linux/131/
Java: 64bit/jdk-11 -XX:+UseCompressedOops -XX:+UseSerialGC

21 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.TestStressInPlaceUpdates

Error Message:
45 threads leaked from SUITE scope at 
org.apache.solr.cloud.TestStressInPlaceUpdates: 1) Thread[id=223, 
name=recoveryExecutor-60-thread-1, state=TIMED_WAITING, 
group=TGRP-TestStressInPlaceUpdates] at 
java.base@11/jdk.internal.misc.Unsafe.park(Native Method) at 
java.base@11/java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:234)
 at 
java.base@11/java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:462)
 at 
java.base@11/java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:361)
 at 
java.base@11/java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:937)
 at 
java.base@11/java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1053)
 at 
java.base@11/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1114)
 at 
java.base@11/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
 at java.base@11/java.lang.Thread.run(Thread.java:834)2) 
Thread[id=135, name=qtp899728942-135, state=TIMED_WAITING, 
group=TGRP-TestStressInPlaceUpdates] at 
java.base@11/jdk.internal.misc.Unsafe.park(Native Method) at 
java.base@11/java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:234)
 at 
java.base@11/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2123)
 at 
app//org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:392)
 at 
app//org.eclipse.jetty.util.thread.QueuedThreadPool.idleJobPoll(QueuedThreadPool.java:656)
 at 
app//org.eclipse.jetty.util.thread.QueuedThreadPool.access$800(QueuedThreadPool.java:46)
 at 
app//org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:720)
 at java.base@11/java.lang.Thread.run(Thread.java:834)3) 
Thread[id=186, name=SolrRrdBackendFactory-54-thread-1, state=TIMED_WAITING, 
group=TGRP-TestStressInPlaceUpdates] at 
java.base@11/jdk.internal.misc.Unsafe.park(Native Method) at 
java.base@11/java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:234)
 at 
java.base@11/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2123)
 at 
java.base@11/java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1182)
 at 
java.base@11/java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:899)
 at 
java.base@11/java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1054)
 at 
java.base@11/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1114)
 at 
java.base@11/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
 at java.base@11/java.lang.Thread.run(Thread.java:834)4) 
Thread[id=229, name=updateExecutor-45-thread-1, state=TIMED_WAITING, 
group=TGRP-TestStressInPlaceUpdates] at 
java.base@11/jdk.internal.misc.Unsafe.park(Native Method) at 
java.base@11/java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:234)
 at 
java.base@11/java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:462)
 at 
java.base@11/java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:361)
 at 
java.base@11/java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:937)
 at 
java.base@11/java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1053)
 at 
java.base@11/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1114)
 at 
java.base@11/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
 at java.base@11/java.lang.Thread.run(Thread.java:834)5) 
Thread[id=109, name=qtp738822242-109, state=TIMED_WAITING, 
group=TGRP-TestStressInPlaceUpdates] at 
java.base@11/jdk.internal.misc.Unsafe.park(Native Method) at 
java.base@11/java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:234)
 at 
java.base@11/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2123)
 at 
app//org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:392)
 at 
app//org.eclipse.jetty.util.thread.QueuedThreadPool.idleJobPoll(QueuedThreadPool.java:656)
 at 
app//org.eclipse.jetty.util.thread.QueuedThreadPool.access$800(QueuedThreadPool.java:46)
   

[jira] [Commented] (LUCENE-8527) Upgrade JFlex to 1.7.0

2018-12-07 Thread Robert Muir (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8527?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16713510#comment-16713510
 ] 

Robert Muir commented on LUCENE-8527:
-

It would be really nice. I don't think the tricky part is really segmentation 
at all (as far as finding breaks) but instead the problem of assigning the 
proper "label" to the token (tag it as an emoji type). 

So the stuff in the ICU tokenizer uses some properties to tag the "stuff 
between breaks" as emoji token type versus something else. I looked at the latest 
jflex, it seems it would need those props? And it's a little tricky, e.g. 
ordinary ascii digit 7 is [:Emoji:] in Unicode. So that's why the isEmoji there 
is a bit crazy.
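
For illustration only (this is not the tokenizer's actual code), a small ICU4J 
check of the properties involved, assuming the com.ibm.icu dependency the icu 
module already uses; it shows why a plain Emoji property test over-matches on an 
ordinary digit:

{code:java}
import com.ibm.icu.lang.UCharacter;
import com.ibm.icu.lang.UProperty;

public class EmojiPropertyDemo {
  public static void main(String[] args) {
    int digitSeven = '7';   // U+0037
    int fire = 0x1F525;     // the fire emoji

    // '7' has Emoji=Yes (because of keycap sequences) but Emoji_Presentation=No,
    // so a naive "hasBinaryProperty(cp, EMOJI)" check would tag plain digits.
    System.out.println(UCharacter.hasBinaryProperty(digitSeven, UProperty.EMOJI));              // true
    System.out.println(UCharacter.hasBinaryProperty(digitSeven, UProperty.EMOJI_PRESENTATION)); // false

    System.out.println(UCharacter.hasBinaryProperty(fire, UProperty.EMOJI));              // true
    System.out.println(UCharacter.hasBinaryProperty(fire, UProperty.EMOJI_PRESENTATION)); // true
  }
}
{code}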


> Upgrade JFlex to 1.7.0
> --
>
> Key: LUCENE-8527
> URL: https://issues.apache.org/jira/browse/LUCENE-8527
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: general/build, modules/analysis
>Reporter: Steve Rowe
>Priority: Minor
>
> JFlex 1.7.0, supporting Unicode 9.0, was released recently: 
> [http://jflex.de/changelog.html#jflex-1.7.0].  We should upgrade.






[JENKINS-EA] Lucene-Solr-7.x-Linux (64bit/jdk-12-ea+12) - Build # 3190 - Unstable!

2018-12-07 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/3190/
Java: 64bit/jdk-12-ea+12 -XX:-UseCompressedOops -XX:+UseSerialGC

15 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.handler.TestConfigReload

Error Message:
57 threads leaked from SUITE scope at org.apache.solr.handler.TestConfigReload: 
1) Thread[id=995, name=Scheduler-1129187295, state=TIMED_WAITING, 
group=TGRP-TestConfigReload] at 
java.base@12-ea/jdk.internal.misc.Unsafe.park(Native Method) at 
java.base@12-ea/java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:234)
 at 
java.base@12-ea/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2123)
 at 
java.base@12-ea/java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1182)
 at 
java.base@12-ea/java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:899)
 at 
java.base@12-ea/java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1054)
 at 
java.base@12-ea/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1114)
 at 
java.base@12-ea/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
 at java.base@12-ea/java.lang.Thread.run(Thread.java:835)2) 
Thread[id=1019, 
name=qtp39194080-1019-acceptor-0@43ef3718-ServerConnector@21bc5d2e{SSL,[ssl, 
http/1.1]}{127.0.0.1:40115}, state=RUNNABLE, group=TGRP-TestConfigReload]   
  at java.base@12-ea/sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method)  
   at 
java.base@12-ea/sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:525)
 at 
java.base@12-ea/sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:277)
 at 
app//org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:385)  
   at 
app//org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:648)
 at 
app//org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:765)
 at 
app//org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:683)
 at java.base@12-ea/java.lang.Thread.run(Thread.java:835)3) 
Thread[id=977, name=SolrRrdBackendFactory-267-thread-1, state=TIMED_WAITING, 
group=TGRP-TestConfigReload] at 
java.base@12-ea/jdk.internal.misc.Unsafe.park(Native Method) at 
java.base@12-ea/java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:234)
 at 
java.base@12-ea/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2123)
 at 
java.base@12-ea/java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1182)
 at 
java.base@12-ea/java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:899)
 at 
java.base@12-ea/java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1054)
 at 
java.base@12-ea/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1114)
 at 
java.base@12-ea/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
 at java.base@12-ea/java.lang.Thread.run(Thread.java:835)4) 
Thread[id=948, name=qtp2076860498-948, state=TIMED_WAITING, 
group=TGRP-TestConfigReload] at 
java.base@12-ea/jdk.internal.misc.Unsafe.park(Native Method) at 
java.base@12-ea/java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:234)
 at 
java.base@12-ea/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2123)
 at 
app//org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:392)
 at 
app//org.eclipse.jetty.util.thread.QueuedThreadPool.idleJobPoll(QueuedThreadPool.java:656)
 at 
app//org.eclipse.jetty.util.thread.QueuedThreadPool.access$800(QueuedThreadPool.java:46)
 at 
app//org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:720)
 at java.base@12-ea/java.lang.Thread.run(Thread.java:835)5) 
Thread[id=1049, name=MetricsHistoryHandler-288-thread-1, state=TIMED_WAITING, 
group=TGRP-TestConfigReload] at 
java.base@12-ea/jdk.internal.misc.Unsafe.park(Native Method) at 
java.base@12-ea/java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:234)
 at 
java.base@12-ea/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2123)
 at 
java.base@12-ea/java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1182)
 at 

[jira] [Commented] (LUCENE-8527) Upgrade JFlex to 1.7.0

2018-12-07 Thread Steve Rowe (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8527?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16713437#comment-16713437
 ] 

Steve Rowe commented on LUCENE-8527:


[~rcmuir] mentioned on LUCENE-8125 that StandardTokenizer should give such 
sequences the {{<EMOJI>}} token type - see the logic in the {{icu}} module's 
{{BreakIteratorWrapper}}.

JFlex 1.7.0 supports Unicode 9.0, which, if I'm interpreting the discussion at 
http://www.unicode.org/L2/L2016/16315r-handling-seg-emoji.pdf properly, does 
not (fully) include Emoji sequence support (though customized rules that would 
do that properly in Unicode 9.0 are listed in that doc).

Should we include the (post-9.0) customized rules for Unicode 9.0?


> Upgrade JFlex to 1.7.0
> --
>
> Key: LUCENE-8527
> URL: https://issues.apache.org/jira/browse/LUCENE-8527
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: general/build, modules/analysis
>Reporter: Steve Rowe
>Priority: Minor
>
> JFlex 1.7.0, supporting Unicode 9.0, was released recently: 
> [http://jflex.de/changelog.html#jflex-1.7.0].  We should upgrade.






[jira] [Comment Edited] (LUCENE-8527) Upgrade JFlex to 1.7.0

2018-12-07 Thread Steve Rowe (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8527?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16713437#comment-16713437
 ] 

Steve Rowe edited comment on LUCENE-8527 at 12/8/18 12:22 AM:
--

[~rcmuir] mentioned on LUCENE-8125 that StandardTokenizer should give Emoji 
sequences the {{<EMOJI>}} token type - see the logic in the {{icu}} module's 
{{BreakIteratorWrapper}}.

JFlex 1.7.0 supports Unicode 9.0, which, if I'm interpreting the discussion at 
http://www.unicode.org/L2/L2016/16315r-handling-seg-emoji.pdf properly, does 
not (fully) include Emoji sequence support (though customized rules that would 
do that properly in Unicode 9.0 are listed in that doc).

Should we include the (post-9.0) customized rules for Unicode 9.0?



was (Author: steve_rowe):
[~rcmuir] mentioned on LUCENE-8125 that StandardTokenizer should give such 
sequences the {{<EMOJI>}} token type - see the logic in the {{icu}} module's 
{{BreakIteratorWrapper}}.

JFlex 1.7.0 supports Unicode 9.0, which, if I'm interpreting the discussion at 
http://www.unicode.org/L2/L2016/16315r-handling-seg-emoji.pdf properly, does 
not (fully) include Emoji sequence support (though customized rules that would 
do that properly in Unicode 9.0 are listed in that doc).

Should we include the (post-9.0) customized rules for Unicode 9.0?


> Upgrade JFlex to 1.7.0
> --
>
> Key: LUCENE-8527
> URL: https://issues.apache.org/jira/browse/LUCENE-8527
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: general/build, modules/analysis
>Reporter: Steve Rowe
>Priority: Minor
>
> JFlex 1.7.0, supporting Unicode 9.0, was released recently: 
> [http://jflex.de/changelog.html#jflex-1.7.0].  We should upgrade.






[JENKINS-EA] Lucene-Solr-7.6-Linux (64bit/jdk-12-ea+12) - Build # 74 - Still Unstable!

2018-12-07 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.6-Linux/74/
Java: 64bit/jdk-12-ea+12 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

5 tests failed.
FAILED:  org.apache.solr.cloud.DeleteReplicaTest.deleteLiveReplicaTest

Error Message:
Could not find collection : delLiveColl

Stack Trace:
org.apache.solr.common.SolrException: Could not find collection : delLiveColl
at 
__randomizedtesting.SeedInfo.seed([DA8B4C380BA7A12E:77EBF8331698095B]:0)
at 
org.apache.solr.common.cloud.ClusterState.getCollection(ClusterState.java:118)
at 
org.apache.solr.cloud.SolrCloudTestCase.getCollectionState(SolrCloudTestCase.java:258)
at 
org.apache.solr.cloud.DeleteReplicaTest.deleteLiveReplicaTest(DeleteReplicaTest.java:77)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:835)


FAILED:  org.apache.solr.cloud.DeleteReplicaTest.deleteLiveReplicaTest

Error Message:
Could not find collection : delLiveColl

Stack Trace:
org.apache.solr.common.SolrException: Could not find 

[VOTE] Release Lucene/Solr 7.6.0 RC2

2018-12-07 Thread Nicholas Knize
Please vote for release candidate 2 for Lucene/Solr 7.6.0

The artifacts can be downloaded from:
https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-7.6.0-RC2-rev719cde97f84640faa1e3525690d262946571245f/

You can run the smoke tester directly with this command:

python3 -u dev-tools/scripts/smokeTestRelease.py \
https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-7.6.0-RC2-rev719cde97f84640faa1e3525690d262946571245f/

Here's my +1
SUCCESS! [0:50:22.047749]
-- 

Nicholas Knize, Ph.D., GISP
Geospatial Software Guy  |  Elasticsearch
Apache Lucene Committer
nkn...@apache.org


[jira] [Commented] (SOLR-12697) pure DocValues support for FieldValueFeature

2018-12-07 Thread Christine Poerschke (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12697?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16713365#comment-16713365
 ] 

Christine Poerschke commented on SOLR-12697:


bq. ... a new patch where I had migrated the FieldValueFeature on using 
SolrDocumentFetcher#solrDoc introduced in patch SOLR-12625. [~erickerickson] 
can you please take a look at it? ... a couple of additional code changes ... 
[~cpoerschke] please take a look at the patch and described changes. WDYT?

Thanks for creating a new patch [~slivotov]!

I've so far only looked at the first additional change w.r.t. the default value 
(and moved it to SOLR-13049 as a new feature).

[~erickerickson] if you could perhaps look at the {{SolrDocumentFetcher}} side 
of the patch that would be great.


> pure DocValues support for FieldValueFeature
> 
>
> Key: SOLR-12697
> URL: https://issues.apache.org/jira/browse/SOLR-12697
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public (Default Security Level. Issues are Public) 
>  Components: contrib - LTR
>Reporter: Stanislav Livotov
>Priority: Major
> Attachments: SOLR-12697.patch, SOLR-12697.patch
>
>
> [~slivotov] wrote in SOLR-12688:
> bq. ... FieldValueFeature doesn't support pure DocValues fields (Stored 
> false). Please also note that for fields which are both stored and DocValues 
> it is working not optimal because it is extracting just one field from the 
> stored document. DocValues are obviously faster for such usecases. ...
> (Please see SOLR-12688 description for overall context and analysis results.)
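
As a rough illustration of the stored-vs-DocValues trade-off described above 
(hypothetical helper methods sketched against the Lucene 7.x API, not the 
attached patch):

{code:java}
import java.io.IOException;
import java.util.Collections;

import org.apache.lucene.document.Document;
import org.apache.lucene.index.DocValues;
import org.apache.lucene.index.IndexableField;
import org.apache.lucene.index.LeafReaderContext;
import org.apache.lucene.index.NumericDocValues;

// Hypothetical helpers contrasting the two ways of reading one numeric field.
public class FieldValueAccessSketch {

  // Stored-field route: decodes (part of) the stored document for every hit.
  static Long fromStored(LeafReaderContext ctx, int docId, String field) throws IOException {
    Document doc = ctx.reader().document(docId, Collections.singleton(field));
    IndexableField f = doc.getField(field);
    return (f == null || f.numericValue() == null) ? null : f.numericValue().longValue();
  }

  // DocValues route: columnar per-segment access with no stored-document decode;
  // it also works for fields indexed with docValues=true and stored=false.
  static Long fromDocValues(LeafReaderContext ctx, int docId, String field) throws IOException {
    NumericDocValues dv = DocValues.getNumeric(ctx.reader(), field);
    if (dv.advanceExact(docId)) {
      return dv.longValue();
    }
    return null;
  }
}
{code}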






[JENKINS] Lucene-Solr-http2-Linux (64bit/jdk1.8.0_172) - Build # 47 - Still Failing!

2018-12-07 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-http2-Linux/47/
Java: 64bit/jdk1.8.0_172 -XX:+UseCompressedOops -XX:+UseSerialGC

11 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.analytics.legacy.LegacyNoFacetCloudTest

Error Message:
1 thread leaked from SUITE scope at 
org.apache.solr.analytics.legacy.LegacyNoFacetCloudTest: 1) Thread[id=956, 
name=qtp1250557763-956, state=TIMED_WAITING, group=TGRP-LegacyNoFacetCloudTest] 
at sun.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2163)
 at 
org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.reservedWait(ReservedThreadExecutor.java:292)
 at 
org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:357)
 at 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:765)
 at 
org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:683) 
at java.lang.Thread.run(Thread.java:748)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE 
scope at org.apache.solr.analytics.legacy.LegacyNoFacetCloudTest: 
   1) Thread[id=956, name=qtp1250557763-956, state=TIMED_WAITING, 
group=TGRP-LegacyNoFacetCloudTest]
at sun.misc.Unsafe.park(Native Method)
at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2163)
at 
org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.reservedWait(ReservedThreadExecutor.java:292)
at 
org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:357)
at 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:765)
at 
org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:683)
at java.lang.Thread.run(Thread.java:748)
at __randomizedtesting.SeedInfo.seed([249704BC4496C04F]:0)


FAILED:  
junit.framework.TestSuite.org.apache.solr.analytics.legacy.LegacyNoFacetCloudTest

Error Message:
There are still zombie threads that couldn't be terminated:1) 
Thread[id=956, name=qtp1250557763-956, state=TIMED_WAITING, 
group=TGRP-LegacyNoFacetCloudTest] at sun.misc.Unsafe.park(Native 
Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2163)
 at 
org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.reservedWait(ReservedThreadExecutor.java:292)
 at 
org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:357)
 at 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:765)
 at 
org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:683) 
at java.lang.Thread.run(Thread.java:748)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: There are still zombie 
threads that couldn't be terminated:
   1) Thread[id=956, name=qtp1250557763-956, state=TIMED_WAITING, 
group=TGRP-LegacyNoFacetCloudTest]
at sun.misc.Unsafe.park(Native Method)
at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2163)
at 
org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.reservedWait(ReservedThreadExecutor.java:292)
at 
org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:357)
at 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:765)
at 
org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:683)
at java.lang.Thread.run(Thread.java:748)
at __randomizedtesting.SeedInfo.seed([249704BC4496C04F]:0)


FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.TestMiniSolrCloudClusterSSL

Error Message:
1 thread leaked from SUITE scope at 
org.apache.solr.cloud.TestMiniSolrCloudClusterSSL: 1) Thread[id=13963, 
name=qtp2007763703-13963, state=TIMED_WAITING, 
group=TGRP-TestMiniSolrCloudClusterSSL] at sun.misc.Unsafe.park(Native 
Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2163)
 at 
org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.reservedWait(ReservedThreadExecutor.java:292)
 

[jira] [Created] (SOLR-13050) SystemLogListener can "lose" record of nodeLost event when node lost is/was .system collection leader

2018-12-07 Thread Hoss Man (JIRA)
Hoss Man created SOLR-13050:
---

 Summary: SystemLogListener can "lose" record of nodeLost event 
when node lost is/was .system collection leader
 Key: SOLR-13050
 URL: https://issues.apache.org/jira/browse/SOLR-13050
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Hoss Man


A chicken/egg issue of the way the autoscaling SystemLogListener uses the 
{{.system}} collection to record event history is that in the case of a 
{{nodeLost}} event for the {{.system}} collection's leader, there is a window 
of time during leader election where trying to add the "Document" representing 
that {{nodeLost}} event to the {{.system}} collection can fail.

This isn't a silent failure: the SystemLogListener, acting in the role of a Solr 
client, is informed that the "add" failed, but it doesn't/can't do much to deal 
with this situation other than to "log" (to the slf4j Logger) that it wasn't 
able to add the doc.
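
A minimal sketch of that failure mode from the listener's point of view 
(illustrative code using plain SolrJ calls, not the actual SystemLogListener 
source):

{code:java}
import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.common.SolrInputDocument;

// Illustrative only: during the .system leader election window the add below
// fails, and the caller can do little more than log and drop the event.
class SystemLogSketch {
  void recordEvent(SolrClient client, SolrInputDocument eventDoc) {
    try {
      client.add(".system", eventDoc);   // fails while .system has no leader
      client.commit(".system");
    } catch (Exception e) {
      // In the scenario above the event history entry is simply lost here;
      // the real listener logs via slf4j rather than System.err.
      System.err.println("Could not record event in .system: " + e);
    }
  }
}
{code}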



I'm not sure how much of a "real world" impact this has on users, but I noticed 
the issue while diagnosing a jenkins test failure and wanted to track it.






[jira] [Updated] (SOLR-13049) make contrib/ltr Feature.defaultValue configurable

2018-12-07 Thread Christine Poerschke (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13049?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Christine Poerschke updated SOLR-13049:
---
Attachment: SOLR-13049.patch

> make contrib/ltr Feature.defaultValue configurable
> --
>
> Key: SOLR-13049
> URL: https://issues.apache.org/jira/browse/SOLR-13049
> Project: Solr
>  Issue Type: New Feature
>  Components: contrib - LTR
>Reporter: Stanislav Livotov
>Priority: Major
> Attachments: SOLR-13049.patch
>
>
> [~slivotov] wrote in SOLR-12697:
> {quote}
> I had also done a couple of additional code changes:
> 1. fixed small issue with defaultValue(previously it was impossible to set it 
> from feature.json, and the tests were written where Feature was created 
> manually, and not by parsing json). Tests are added which are validating 
> defaultValue from schema field configuration and from a feature default value.
> {quote}
> (Please see 
> https://issues.apache.org/jira/browse/SOLR-12697?focusedCommentId=16708618&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16708618
>  for more context.)






[jira] [Commented] (SOLR-13049) make contrib/ltr Feature.defaultValue configurable

2018-12-07 Thread Christine Poerschke (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13049?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16713360#comment-16713360
 ] 

Christine Poerschke commented on SOLR-13049:


{quote}... small issue with defaultValue ... impossible to set it from 
feature.json ...
{quote}
That's a good find, thank you [~slivotov]! And quite an interesting code change 
actually, because if a {{defaultValue}} was configured then it (of course) 
needs to be persisted but if no default value was configured then it might be 
confusing to include the default default value in the parameters map.

Attached patch started off with parts of your Dec 4th SOLR-12697 patch and then 
combined the three {{String/Double/Float Feature.setDefaultValue}} accessors 
into one {{setDefaultValue(Object)}} accessor which is similar to the 
{{ValueFeature.setValue(Object)}} accessor. I've then also included the default 
value in the {{paramsToMap()}} implementation of all features and added tests 
to check that parameters are correctly included in the parameter map. What do 
you think?

Potential next steps:
 * The patch in its current form includes no javadoc or documentation changes; 
I'm unsure on if/how best to document the default value feature parameter.
 * The patch started off with only parts of your Dec 4th SOLR-12697 patch; I 
think it would be okay to combine the two patches without issues, but I have not 
yet tried to do so.
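
For reference, a rough sketch of the single-accessor shape described above (the 
field and flag names are made up for illustration; this is not the attached 
patch):

{code:java}
// Hypothetical sketch: one setDefaultValue(Object) accepting the value types
// that feature.json parsing can supply, replacing the three typed setters.
abstract class FeatureSketch {
  protected float defaultValue = 0.0f;
  protected boolean defaultValueConfigured = false;

  public void setDefaultValue(Object obj) {
    if (obj instanceof String) {
      defaultValue = Float.parseFloat((String) obj);
    } else if (obj instanceof Number) {
      defaultValue = ((Number) obj).floatValue();
    } else {
      throw new IllegalArgumentException(
          "unsupported defaultValue type: " + (obj == null ? "null" : obj.getClass()));
    }
    // Remember that a value was explicitly configured, so paramsToMap() can
    // decide whether to persist it or omit the "default default value".
    defaultValueConfigured = true;
  }
}
{code}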

> make contrib/ltr Feature.defaultValue configurable
> --
>
> Key: SOLR-13049
> URL: https://issues.apache.org/jira/browse/SOLR-13049
> Project: Solr
>  Issue Type: New Feature
>  Components: contrib - LTR
>Reporter: Stanislav Livotov
>Priority: Major
> Attachments: SOLR-13049.patch
>
>
> [~slivotov] wrote in SOLR-12697:
> {quote}
> I had also done a couple of additional code changes:
> 1. fixed small issue with defaultValue(previously it was impossible to set it 
> from feature.json, and the tests were written where Feature was created 
> manually, and not by parsing json). Tests are added which are validating 
> defaultValue from schema field configuration and from a feature default value.
> {quote}
> (Please see 
> https://issues.apache.org/jira/browse/SOLR-12697?focusedCommentId=16708618&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16708618
>  for more context.)






[jira] [Updated] (SOLR-13049) make contrib/ltr Feature.defaultValue configurable

2018-12-07 Thread Christine Poerschke (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13049?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Christine Poerschke updated SOLR-13049:
---
Reporter: Stanislav Livotov  (was: Christine Poerschke)

> make contrib/ltr Feature.defaultValue configurable
> --
>
> Key: SOLR-13049
> URL: https://issues.apache.org/jira/browse/SOLR-13049
> Project: Solr
>  Issue Type: New Feature
>  Components: contrib - LTR
>Reporter: Stanislav Livotov
>Priority: Major
>
> [~slivotov] wrote in SOLR-12697:
> {quote}
> I had also done a couple of additional code changes:
> 1. fixed small issue with defaultValue(previously it was impossible to set it 
> from feature.json, and the tests were written where Feature was created 
> manually, and not by parsing json). Tests are added which are validating 
> defaultValue from schema field configuration and from a feature default value.
> {quote}
> (Please see 
> https://issues.apache.org/jira/browse/SOLR-12697?focusedCommentId=16708618&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16708618
>  for more context.)






[jira] [Created] (SOLR-13049) make contrib/ltr Feature.defaultValue configurable

2018-12-07 Thread Christine Poerschke (JIRA)
Christine Poerschke created SOLR-13049:
--

 Summary: make contrib/ltr Feature.defaultValue configurable
 Key: SOLR-13049
 URL: https://issues.apache.org/jira/browse/SOLR-13049
 Project: Solr
  Issue Type: New Feature
  Components: contrib - LTR
Reporter: Christine Poerschke


[~slivotov] wrote in SOLR-12697:

{quote}
I had also done a couple of additional code changes:
1. fixed small issue with defaultValue(previously it was impossible to set it 
from feature.json, and the tests were written where Feature was created 
manually, and not by parsing json). Tests are added which are validating 
defaultValue from schema field configuration and from a feature default value.
{quote}

(Please see 
https://issues.apache.org/jira/browse/SOLR-12697?focusedCommentId=16708618&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16708618
 for more context.)






[jira] [Commented] (SOLR-13027) Harden LeaderTragicEventTest.

2018-12-07 Thread Steve Rowe (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13027?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16713336#comment-16713336
 ] 

Steve Rowe commented on SOLR-13027:
---

This seed reproduces for me 10/10 iterations on Java8, from 
[https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/23304/]:

{noformat}
Checking out Revision aaa64d7015998f28aaffac031c4032abf73bebd6 
(refs/remotes/origin/master)
[...]
[java-info] java version "10.0.1"
[java-info] OpenJDK Runtime Environment (10.0.1+10, Oracle Corporation)
[java-info] OpenJDK 64-Bit Server VM (10.0.1+10, Oracle Corporation)
[java-info] Test args: [-XX:+UseCompressedOops -XX:+UseSerialGC]
[...]
   [junit4]   2> NOTE: reproduce with: ant test  
-Dtestcase=LeaderTragicEventTest -Dtests.method=test 
-Dtests.seed=482611237BA22E39 -Dtests.multiplier=3 -Dtests.slow=true 
-Dtests.locale=fo-FO -Dtests.timezone=Asia/Kolkata -Dtests.asserts=true 
-Dtests.file.encoding=US-ASCII
   [junit4] FAILURE 47.8s J1 | LeaderTragicEventTest.test <<<
   [junit4]> Throwable #1: java.lang.AssertionError: Timeout waiting for 
new replica become leader
   [junit4]> Timeout waiting to see state for collection=collection1 
:DocCollection(collection1//collections/collection1/state.json/5)={
   [junit4]>   "pullReplicas":"0",
   [junit4]>   "replicationFactor":"2",
   [junit4]>   "shards":{"shard1":{
   [junit4]>   "range":"8000-7fff",
   [junit4]>   "state":"active",
   [junit4]>   "replicas":{
   [junit4]> "core_node3":{
   [junit4]>   "core":"collection1_shard1_replica_n1",
   [junit4]>   "base_url":"http://127.0.0.1:39173/solr;,
   [junit4]>   "node_name":"127.0.0.1:39173_solr",
   [junit4]>   "state":"active",
   [junit4]>   "type":"NRT",
   [junit4]>   "force_set_state":"false",
   [junit4]>   "leader":"true"},
   [junit4]> "core_node4":{
   [junit4]>   "core":"collection1_shard1_replica_n2",
   [junit4]>   "base_url":"http://127.0.0.1:40623/solr;,
   [junit4]>   "node_name":"127.0.0.1:40623_solr",
   [junit4]>   "state":"active",
   [junit4]>   "type":"NRT",
   [junit4]>   "force_set_state":"false",
   [junit4]>   "router":{"name":"compositeId"},
   [junit4]>   "maxShardsPerNode":"1",
   [junit4]>   "autoAddReplicas":"false",
   [junit4]>   "nrtReplicas":"2",
   [junit4]>   "tlogReplicas":"0"}
   [junit4]> Live Nodes: [127.0.0.1:39173_solr, 127.0.0.1:40623_solr]
   [junit4]> Last available state: 
DocCollection(collection1//collections/collection1/state.json/5)={
   [junit4]>   "pullReplicas":"0",
   [junit4]>   "replicationFactor":"2",
   [junit4]>   "shards":{"shard1":{
   [junit4]>   "range":"8000-7fff",
   [junit4]>   "state":"active",
   [junit4]>   "replicas":{
   [junit4]> "core_node3":{
   [junit4]>   "core":"collection1_shard1_replica_n1",
   [junit4]>   "base_url":"http://127.0.0.1:39173/solr;,
   [junit4]>   "node_name":"127.0.0.1:39173_solr",
   [junit4]>   "state":"active",
   [junit4]>   "type":"NRT",
   [junit4]>   "force_set_state":"false",
   [junit4]>   "leader":"true"},
   [junit4]> "core_node4":{
   [junit4]>   "core":"collection1_shard1_replica_n2",
   [junit4]>   "base_url":"http://127.0.0.1:40623/solr;,
   [junit4]>   "node_name":"127.0.0.1:40623_solr",
   [junit4]>   "state":"active",
   [junit4]>   "type":"NRT",
   [junit4]>   "force_set_state":"false",
   [junit4]>   "router":{"name":"compositeId"},
   [junit4]>   "maxShardsPerNode":"1",
   [junit4]>   "autoAddReplicas":"false",
   [junit4]>   "nrtReplicas":"2",
   [junit4]>   "tlogReplicas":"0"}
   [junit4]>at 
__randomizedtesting.SeedInfo.seed([482611237BA22E39:C0722EF9D55E43C1]:0)
   [junit4]>at 
org.apache.solr.cloud.SolrCloudTestCase.waitForState(SolrCloudTestCase.java:289)
   [junit4]>at 
org.apache.solr.cloud.SolrCloudTestCase.waitForState(SolrCloudTestCase.java:267)
   [junit4]>at 
org.apache.solr.cloud.LeaderTragicEventTest.test(LeaderTragicEventTest.java:84)
   [junit4]>at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   [junit4]>at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
   [junit4]>at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   [junit4]>at 
java.base/java.lang.reflect.Method.invoke(Method.java:564)
   [junit4]>at java.base/java.lang.Thread.run(Thread.java:844)

[JENKINS-EA] Lucene-Solr-master-Linux (64bit/jdk-12-ea+12) - Build # 23308 - Still Unstable!

2018-12-07 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/23308/
Java: 64bit/jdk-12-ea+12 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

15 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.handler.TestSolrConfigHandlerCloud

Error Message:
61 threads leaked from SUITE scope at 
org.apache.solr.handler.TestSolrConfigHandlerCloud: 1) Thread[id=32755, 
name=qtp993258067-32755, state=RUNNABLE, group=TGRP-TestSolrConfigHandlerCloud] 
at java.base@12-ea/sun.nio.ch.EPoll.wait(Native Method) at 
java.base@12-ea/sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:120)
 at 
java.base@12-ea/sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:124)  
   at java.base@12-ea/sun.nio.ch.SelectorImpl.select(SelectorImpl.java:141) 
at 
app//org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:423)
 at 
app//org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:360)
 at 
app//org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:357)
 at 
app//org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:181)
 at 
app//org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:168)
 at 
app//org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:132)
 at 
app//org.eclipse.jetty.io.ManagedSelector$$Lambda$177/0x7fee88b8f858.run(Unknown
 Source) at 
app//org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:765)
 at 
app//org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:683)
 at java.base@12-ea/java.lang.Thread.run(Thread.java:835)2) 
Thread[id=32748, name=qtp1206521228-32748, state=TIMED_WAITING, 
group=TGRP-TestSolrConfigHandlerCloud] at 
java.base@12-ea/jdk.internal.misc.Unsafe.park(Native Method) at 
java.base@12-ea/java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:234)
 at 
java.base@12-ea/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2123)
 at 
app//org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:392)
 at 
app//org.eclipse.jetty.util.thread.QueuedThreadPool.idleJobPoll(QueuedThreadPool.java:656)
 at 
app//org.eclipse.jetty.util.thread.QueuedThreadPool.access$800(QueuedThreadPool.java:46)
 at 
app//org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:720)
 at java.base@12-ea/java.lang.Thread.run(Thread.java:835)3) 
Thread[id=32783, name=Thread-4300, state=WAITING, 
group=TGRP-TestSolrConfigHandlerCloud] at 
java.base@12-ea/java.lang.Object.wait(Native Method) at 
java.base@12-ea/java.lang.Object.wait(Object.java:328) at 
app//org.apache.solr.core.CloserThread.run(CoreContainer.java:1899)4) 
Thread[id=32756, name=qtp993258067-32756, state=RUNNABLE, 
group=TGRP-TestSolrConfigHandlerCloud] at 
java.base@12-ea/sun.nio.ch.EPoll.wait(Native Method) at 
java.base@12-ea/sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:120)
 at 
java.base@12-ea/sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:124)  
   at java.base@12-ea/sun.nio.ch.SelectorImpl.select(SelectorImpl.java:141) 
at 
app//org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:423)
 at 
app//org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:360)
 at 
app//org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:357)
 at 
app//org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:181)
 at 
app//org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:168)
 at 
app//org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:132)
 at 
app//org.eclipse.jetty.io.ManagedSelector$$Lambda$177/0x7fee88b8f858.run(Unknown
 Source) at 
app//org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:765)
 at 
app//org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:683)
 at java.base@12-ea/java.lang.Thread.run(Thread.java:835)5) 
Thread[id=32842, 
name=qtp1254459529-32842-acceptor-0@4c3c341e-ServerConnector@6fa6d380{SSL,[ssl, 
http/1.1]}{127.0.0.1:37367}, state=RUNNABLE, 
group=TGRP-TestSolrConfigHandlerCloud] at 
java.base@12-ea/sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method)   
  at 
java.base@12-ea/sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:525)
 at 
java.base@12-ea/sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:277)
 at 

[jira] [Commented] (LUCENE-8527) Upgrade JFlex to 1.7.0

2018-12-07 Thread Uwe Schindler (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8527?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16713295#comment-16713295
 ] 

Uwe Schindler commented on LUCENE-8527:
---

+1

> Upgrade JFlex to 1.7.0
> --
>
> Key: LUCENE-8527
> URL: https://issues.apache.org/jira/browse/LUCENE-8527
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: general/build, modules/analysis
>Reporter: Steve Rowe
>Priority: Minor
>
> JFlex 1.7.0, supporting Unicode 9.0, was released recently: 
> [http://jflex.de/changelog.html#jflex-1.7.0].  We should upgrade.






[jira] [Updated] (LUCENE-8527) Upgrade JFlex to 1.7.0

2018-12-07 Thread Steve Rowe (JIRA)


 [ 
https://issues.apache.org/jira/browse/LUCENE-8527?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Rowe updated LUCENE-8527:
---
Component/s: modules/analysis
 general/build

> Upgrade JFlex to 1.7.0
> --
>
> Key: LUCENE-8527
> URL: https://issues.apache.org/jira/browse/LUCENE-8527
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: general/build, modules/analysis
>Reporter: Steve Rowe
>Priority: Minor
>
> JFlex 1.7.0, supporting Unicode 9.0, was released recently: 
> [http://jflex.de/changelog.html#jflex-1.7.0].  We should upgrade.






[jira] [Commented] (SOLR-12727) Upgrade ZooKeeper dependency to 3.4.13

2018-12-07 Thread Erick Erickson (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12727?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16713279#comment-16713279
 ] 

Erick Erickson commented on SOLR-12727:
---

Great, thanks! I'll be able to look at this over the weekend.

> Upgrade ZooKeeper dependency to 3.4.13
> --
>
> Key: SOLR-12727
> URL: https://issues.apache.org/jira/browse/SOLR-12727
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Affects Versions: 7.4
>Reporter: Shawn Heisey
>Assignee: Erick Erickson
>Priority: Major
> Attachments: SOLR-12727.patch, SOLR-12727.patch, SOLR-12727.patch, 
> SOLR-12727.patch
>
>
> Upgrade ZK dependency to 3.4.13.  This fixes ZOOKEEPER-2184 which will make 
> the ZK client re-resolve the server hostnames when a connection fails.  This 
> will fix issues where a failed ZK container is replaced with a new one that 
> has a different IP address and DNS gets updated with the new address.
> Typically these upgrades do not require code changes, but that should be 
> verified.






Re: [VOTE] Release Lucene/Solr 7.6.0 RC1

2018-12-07 Thread Nicholas Knize
Thanks all. And no problem. Respinning now.

On Fri, Dec 7, 2018 at 12:35 PM jim ferenczi  wrote:

> The fix has been backported to 7.6. Sorry for the trouble Nick and thanks
> Mike for reviewing.
>
> On Fri, Dec 7, 2018 at 19:01, Nicholas Knize  wrote:
>
>> No worries. The RC build is now fairly stable. I'll keep an eye out and
>> respin when the fix lands.
>>
>> Thanks!
>>
>> On Fri, Dec 7, 2018, 11:40 AM jim ferenczi 
>> wrote:
>>
>>> +1 too, the patch is almost ready, just need a validation from Mike and
>>> I can push.
>>>
>>> On Fri, Dec 7, 2018 at 18:38, Michael McCandless <
>>> luc...@mikemccandless.com> wrote:
>>>
 Yeah +1 to respin with this fix -- this is possibly a fairly common bug
 resulting in index corruption.

 Mike McCandless

 http://blog.mikemccandless.com


 On Fri, Dec 7, 2018 at 12:27 PM Simon Willnauer <
 simon.willna...@gmail.com> wrote:

> Nick, nobody wants to be the one asking for a respin but I think this
> bug here [1] is pretty terrible and we should do another round once
> it's resolved and backported. @jim / @mike what do you think?
>
> [1] https://issues.apache.org/jira/browse/LUCENE-8592
> On Fri, Dec 7, 2018 at 5:29 PM Nicholas Knize 
> wrote:
> >
> > Please vote for release candidate 1 for Lucene/Solr 7.6.0
> >
> > The artifacts can be downloaded from:
> >
> https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-7.6.0-RC1-rev2d4435162774ad43b66ce0e7847bf8c1558e20a9/
> >
> > You can run the smoke tester directly with this command:
> >
> > python3 -u dev-tools/scripts/smokeTestRelease.py \
> >
> https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-7.6.0-RC1-rev2d4435162774ad43b66ce0e7847bf8c1558e20a9/
> >
> > Here's my +1
> > SUCCESS! [0:50:36.294057]
> > --
> >
> > Nicholas Knize, Ph.D., GISP
> > Geospatial Software Guy  |  Elasticsearch
> > Apache Lucene Committer
> > nkn...@apache.org
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>
> --
>>
>> Nicholas Knize, Ph.D., GISP
>> Geospatial Software Guy  |  Elasticsearch
>> Apache Lucene Committer
>> nkn...@apache.org
>>
> --

Nicholas Knize, Ph.D., GISP
Geospatial Software Guy  |  Elasticsearch
Apache Lucene Committer
nkn...@apache.org


[JENKINS] Lucene-Solr-7.6-Linux (64bit/jdk-11) - Build # 73 - Still Unstable!

2018-12-07 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.6-Linux/73/
Java: 64bit/jdk-11 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

8 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.handler.admin.AutoscalingHistoryHandlerTest

Error Message:
Could not find collection : AutoscalingHistoryHandlerTest_collection

Stack Trace:
org.apache.solr.common.SolrException: Could not find collection : 
AutoscalingHistoryHandlerTest_collection
at __randomizedtesting.SeedInfo.seed([2C9C6F5335B238C0]:0)
at 
org.apache.solr.common.cloud.ClusterState.getCollection(ClusterState.java:118)
at 
org.apache.solr.cloud.SolrCloudTestCase.getCollectionState(SolrCloudTestCase.java:258)
at 
org.apache.solr.handler.admin.AutoscalingHistoryHandlerTest.waitForRecovery(AutoscalingHistoryHandlerTest.java:403)
at 
org.apache.solr.handler.admin.AutoscalingHistoryHandlerTest.setupCluster(AutoscalingHistoryHandlerTest.java:97)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:875)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:834)


FAILED:  
junit.framework.TestSuite.org.apache.solr.handler.admin.AutoscalingHistoryHandlerTest

Error Message:
Could not find collection : AutoscalingHistoryHandlerTest_collection

Stack Trace:
org.apache.solr.common.SolrException: Could not find collection : 
AutoscalingHistoryHandlerTest_collection
at __randomizedtesting.SeedInfo.seed([2C9C6F5335B238C0]:0)
at 
org.apache.solr.common.cloud.ClusterState.getCollection(ClusterState.java:118)
at 
org.apache.solr.cloud.SolrCloudTestCase.getCollectionState(SolrCloudTestCase.java:258)
at 
org.apache.solr.handler.admin.AutoscalingHistoryHandlerTest.waitForRecovery(AutoscalingHistoryHandlerTest.java:403)
at 
org.apache.solr.handler.admin.AutoscalingHistoryHandlerTest.setupCluster(AutoscalingHistoryHandlerTest.java:97)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:875)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891)
at 

[jira] [Commented] (LUCENE-8592) MultiSorter#sort incorrectly sort Integer/Long#MIN_VALUE when the natural sort is reversed

2018-12-07 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8592?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16713176#comment-16713176
 ] 

ASF subversion and git services commented on LUCENE-8592:
-

Commit 719cde97f84640faa1e3525690d262946571245f in lucene-solr's branch 
refs/heads/branch_7_6 from [~jim.ferenczi]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=719cde9 ]

LUCENE-8592: switch the corrupted sorted index to a 7.6 version (instead of 7x).


> MultiSorter#sort incorrectly sort Integer/Long#MIN_VALUE when the natural 
> sort is reversed
> --
>
> Key: LUCENE-8592
> URL: https://issues.apache.org/jira/browse/LUCENE-8592
> Project: Lucene - Core
>  Issue Type: Bug
>Affects Versions: 7.5, master (8.0)
>Reporter: Jim Ferenczi
>Priority: Blocker
> Fix For: 7.6, master (8.0)
>
> Attachments: LUCENE-8592.patch, LUCENE-8592.patch
>
>
> MultiSorter#getComparableProviders on an integer or long field doesn't handle 
> MIN_VALUE correctly when the natural order is reversed. To handle reverse 
> sort we use the negation of the value, but there is no overflow check, so 
> MIN_VALUE for ints and longs is always sorted first (even if the natural 
> order is reversed).
> This method is used by index sorting when merging already sorted segments 
> together. This means that a sorted index can be incorrectly sorted if it uses 
> a reverse sort and a missing value set to MIN_VALUE (long or int), or values 
> inside the segment that are equal to MIN_VALUE.
> This is a bad bug because it affects the document order inside segments, and 
> only a reindex can restore the correct sort order.
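
To make the overflow concrete, here is a minimal, self-contained Java sketch (plain JDK 
code, not Lucene's MultiSorter) showing why a negation-based reverse comparison mis-sorts 
MIN_VALUE and what an overflow-safe comparison looks like:

{noformat}
// Minimal illustration (not Lucene code) of the overflow described above.
public class ReverseSortOverflowSketch {
  public static void main(String[] args) {
    // Negating Long.MIN_VALUE overflows back to Long.MIN_VALUE in two's complement:
    System.out.println(-Long.MIN_VALUE == Long.MIN_VALUE);      // true
    // So a reverse comparator built on negation still sorts MIN_VALUE first:
    java.util.Comparator<Long> negating = java.util.Comparator.comparingLong(v -> -v);
    System.out.println(negating.compare(Long.MIN_VALUE, 0L));   // < 0: MIN_VALUE still "smallest"
    // An overflow-safe reverse comparator swaps the arguments instead of negating:
    java.util.Comparator<Long> safe = (a, b) -> Long.compare(b, a);
    System.out.println(safe.compare(Long.MIN_VALUE, 0L));       // > 0: MIN_VALUE sorts last
  }
}
{noformat}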






[jira] [Commented] (LUCENE-8592) MultiSorter#sort incorrectly sort Integer/Long#MIN_VALUE when the natural sort is reversed

2018-12-07 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8592?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16713174#comment-16713174
 ] 

ASF subversion and git services commented on LUCENE-8592:
-

Commit d9cd9f78b1182125a7fb02d724608d5f355df785 in lucene-solr's branch 
refs/heads/branch_7x from [~jim.ferenczi]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=d9cd9f7 ]

LUCENE-8592: switch the corrupted sorted index to a 7.6 version (instead of 7x).


> MultiSorter#sort incorrectly sort Integer/Long#MIN_VALUE when the natural 
> sort is reversed
> --
>
> Key: LUCENE-8592
> URL: https://issues.apache.org/jira/browse/LUCENE-8592
> Project: Lucene - Core
>  Issue Type: Bug
>Affects Versions: 7.5, master (8.0)
>Reporter: Jim Ferenczi
>Priority: Blocker
> Fix For: 7.6, master (8.0)
>
> Attachments: LUCENE-8592.patch, LUCENE-8592.patch
>
>
> MultiSorter#getComparableProviders on an integer or long field doesn't handle 
> MIN_VALUE correctly when the natural order is reversed. To handle reverse 
> sort we use the negation of the value, but there is no overflow check, so 
> MIN_VALUE for ints and longs is always sorted first (even if the natural 
> order is reversed).
> This method is used by index sorting when merging already sorted segments 
> together. This means that a sorted index can be incorrectly sorted if it uses 
> a reverse sort and a missing value set to MIN_VALUE (long or int), or values 
> inside the segment that are equal to MIN_VALUE.
> This is a bad bug because it affects the document order inside segments, and 
> only a reindex can restore the correct sort order.






Re: [VOTE] Release Lucene/Solr 7.6.0 RC1

2018-12-07 Thread jim ferenczi
The fix has been backported to 7.6. Sorry for the trouble Nick and thanks
Mike for reviewing.

On Fri, Dec 7, 2018 at 19:01, Nicholas Knize  wrote:

> No worries. The RC build is now fairly stable. I'll keep an eye out and
> respin when the fix lands.
>
> Thanks!
>
> On Fri, Dec 7, 2018, 11:40 AM jim ferenczi  wrote:
>
>> +1 too, the patch is almost ready, just need a validation from Mike and I
>> can push.
>>
>> On Fri, Dec 7, 2018 at 18:38, Michael McCandless <
>> luc...@mikemccandless.com> wrote:
>>
>>> Yeah +1 to respin with this fix -- this is possibly a fairly common bug
>>> resulting in index corruption.
>>>
>>> Mike McCandless
>>>
>>> http://blog.mikemccandless.com
>>>
>>>
>>> On Fri, Dec 7, 2018 at 12:27 PM Simon Willnauer <
>>> simon.willna...@gmail.com> wrote:
>>>
 Nick, nobody wants to be the one asking for a respin but I think this
 bug here [1] is pretty terrible and we should do another round once
 it's resolved and backported. @jim / @mike what do you think?

 [1] https://issues.apache.org/jira/browse/LUCENE-8592
 On Fri, Dec 7, 2018 at 5:29 PM Nicholas Knize  wrote:
 >
 > Please vote for release candidate 1 for Lucene/Solr 7.6.0
 >
 > The artifacts can be downloaded from:
 >
 https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-7.6.0-RC1-rev2d4435162774ad43b66ce0e7847bf8c1558e20a9/
 >
 > You can run the smoke tester directly with this command:
 >
 > python3 -u dev-tools/scripts/smokeTestRelease.py \
 >
 https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-7.6.0-RC1-rev2d4435162774ad43b66ce0e7847bf8c1558e20a9/
 >
 > Here's my +1
 > SUCCESS! [0:50:36.294057]
 > --
 >
 > Nicholas Knize, Ph.D., GISP
 > Geospatial Software Guy  |  Elasticsearch
 > Apache Lucene Committer
 > nkn...@apache.org

 -
 To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
 For additional commands, e-mail: dev-h...@lucene.apache.org

 --
>
> Nicholas Knize, Ph.D., GISP
> Geospatial Software Guy  |  Elasticsearch
> Apache Lucene Committer
> nkn...@apache.org
>


[jira] [Resolved] (LUCENE-8592) MultiSorter#sort incorrectly sort Integer/Long#MIN_VALUE when the natural sort is reversed

2018-12-07 Thread Jim Ferenczi (JIRA)


 [ 
https://issues.apache.org/jira/browse/LUCENE-8592?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jim Ferenczi resolved LUCENE-8592.
--
Resolution: Fixed

> MultiSorter#sort incorrectly sort Integer/Long#MIN_VALUE when the natural 
> sort is reversed
> --
>
> Key: LUCENE-8592
> URL: https://issues.apache.org/jira/browse/LUCENE-8592
> Project: Lucene - Core
>  Issue Type: Bug
>Affects Versions: 7.5, master (8.0)
>Reporter: Jim Ferenczi
>Priority: Blocker
> Fix For: 7.6, master (8.0)
>
> Attachments: LUCENE-8592.patch, LUCENE-8592.patch
>
>
> MultiSorter#getComparableProviders on an integer or long field doesn't handle 
> MIN_VALUE correctly when the natural order is reversed. To handle reverse 
> sort we use the negation of the value, but there is no overflow check, so 
> MIN_VALUE for ints and longs is always sorted first (even if the natural 
> order is reversed).
> This method is used by index sorting when merging already sorted segments 
> together. This means that a sorted index can be incorrectly sorted if it uses 
> a reverse sort and a missing value set to MIN_VALUE (long or int), or values 
> inside the segment that are equal to MIN_VALUE.
> This is a bad bug because it affects the document order inside segments, and 
> only a reindex can restore the correct sort order.






[jira] [Commented] (LUCENE-8592) MultiSorter#sort incorrectly sort Integer/Long#MIN_VALUE when the natural sort is reversed

2018-12-07 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8592?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16713175#comment-16713175
 ] 

ASF subversion and git services commented on LUCENE-8592:
-

Commit 3098af2b76e447e72f11d5de509ae154e0cec644 in lucene-solr's branch 
refs/heads/branch_7_6 from [~jim.ferenczi]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=3098af2 ]

LUCENE-8592: Fix index sorting corruption due to numeric overflow

The merge sort of sorted segments can produce an invalid sort if the sort field 
is an Integer/Long that uses reverse order and contains values equal to 
Integer/Long#MIN_VALUE. These values are always sorted first during a merge 
(instead of last because of the reverse order) due to this bug.
Indices affected by the bug can be detected by running the CheckIndex command 
on a distribution that contains the fix (7.6+).
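
For anyone who wants to verify an existing sorted index, below is a minimal sketch of 
driving Lucene's CheckIndex programmatically (the index path is a placeholder, and it 
assumes a 7.6+ lucene-core on the classpath); running the CheckIndex command-line tool 
from a 7.6+ distribution against the index directory achieves the same thing.

{noformat}
// Minimal sketch: open an index directory and run CheckIndex over it.
// Assumes a Lucene 7.6+ lucene-core jar on the classpath; the path is a placeholder.
import java.nio.file.Paths;
import org.apache.lucene.index.CheckIndex;
import org.apache.lucene.store.FSDirectory;

public class CheckSortedIndexSketch {
  public static void main(String[] args) throws Exception {
    try (FSDirectory dir = FSDirectory.open(Paths.get("/path/to/index"));
         CheckIndex checker = new CheckIndex(dir)) {
      CheckIndex.Status status = checker.checkIndex();
      System.out.println("index clean? " + status.clean);
    }
  }
}
{noformat}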


> MultiSorter#sort incorrectly sort Integer/Long#MIN_VALUE when the natural 
> sort is reversed
> --
>
> Key: LUCENE-8592
> URL: https://issues.apache.org/jira/browse/LUCENE-8592
> Project: Lucene - Core
>  Issue Type: Bug
>Affects Versions: 7.5, master (8.0)
>Reporter: Jim Ferenczi
>Priority: Blocker
> Fix For: 7.6, master (8.0)
>
> Attachments: LUCENE-8592.patch, LUCENE-8592.patch
>
>
> MultiSorter#getComparableProviders on an integer or long field doesn't handle 
> MIN_VALUE correctly when the natural order is reversed. To handle reverse 
> sort we use the negation of the value, but there is no overflow check, so 
> MIN_VALUE for ints and longs is always sorted first (even if the natural 
> order is reversed).
> This method is used by index sorting when merging already sorted segments 
> together. This means that a sorted index can be incorrectly sorted if it uses 
> a reverse sort and a missing value set to MIN_VALUE (long or int), or values 
> inside the segment that are equal to MIN_VALUE.
> This is a bad bug because it affects the document order inside segments, and 
> only a reindex can restore the correct sort order.






[jira] [Commented] (LUCENE-8592) MultiSorter#sort incorrectly sort Integer/Long#MIN_VALUE when the natural sort is reversed

2018-12-07 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8592?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16713172#comment-16713172
 ] 

ASF subversion and git services commented on LUCENE-8592:
-

Commit 16a76883eba7a6c70a70b634fbe6cf89712f2d97 in lucene-solr's branch 
refs/heads/branch_7x from [~jim.ferenczi]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=16a7688 ]

LUCENE-8592: Fix index sorting corruption due to numeric overflow

The merge sort of sorted segments can produce an invalid sort if the sort field 
is an Integer/Long that uses reverse order and contains values equal to 
Integer/Long#MIN_VALUE. These values are always sorted first during a merge 
(instead of last because of the reverse order) due to this bug.
Indices affected by the bug can be detected by running the CheckIndex command 
on a distribution that contains the fix (7.6+).


> MultiSorter#sort incorrectly sort Integer/Long#MIN_VALUE when the natural 
> sort is reversed
> --
>
> Key: LUCENE-8592
> URL: https://issues.apache.org/jira/browse/LUCENE-8592
> Project: Lucene - Core
>  Issue Type: Bug
>Affects Versions: 7.5, master (8.0)
>Reporter: Jim Ferenczi
>Priority: Blocker
> Fix For: 7.6, master (8.0)
>
> Attachments: LUCENE-8592.patch, LUCENE-8592.patch
>
>
> MultiSorter#getComparableProviders on an integer or long field doesn't handle 
> MIN_VALUE correctly when the natural order is reversed. To handle reverse 
> sort we use the negation of the value, but there is no overflow check, so 
> MIN_VALUE for ints and longs is always sorted first (even if the natural 
> order is reversed).
> This method is used by index sorting when merging already sorted segments 
> together. This means that a sorted index can be incorrectly sorted if it uses 
> a reverse sort and a missing value set to MIN_VALUE (long or int), or values 
> inside the segment that are equal to MIN_VALUE.
> This is a bad bug because it affects the document order inside segments, and 
> only a reindex can restore the correct sort order.






[jira] [Commented] (LUCENE-8592) MultiSorter#sort incorrectly sort Integer/Long#MIN_VALUE when the natural sort is reversed

2018-12-07 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8592?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16713169#comment-16713169
 ] 

ASF subversion and git services commented on LUCENE-8592:
-

Commit df84a3c9815a85aa140e5013b44488f45de9a203 in lucene-solr's branch 
refs/heads/master from [~jim.ferenczi]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=df84a3c ]

LUCENE-8592: switch the corrupted sorted index to a 7.6 version (instead of 7x).


> MultiSorter#sort incorrectly sort Integer/Long#MIN_VALUE when the natural 
> sort is reversed
> --
>
> Key: LUCENE-8592
> URL: https://issues.apache.org/jira/browse/LUCENE-8592
> Project: Lucene - Core
>  Issue Type: Bug
>Affects Versions: 7.5, master (8.0)
>Reporter: Jim Ferenczi
>Priority: Blocker
> Fix For: 7.6, master (8.0)
>
> Attachments: LUCENE-8592.patch, LUCENE-8592.patch
>
>
> MultiSorter#getComparableProviders on an integer or long field doesn't handle 
> MIN_VALUE correctly when the natural order is reversed. To handle reverse 
> sort we use the negation of the value, but there is no overflow check, so 
> MIN_VALUE for ints and longs is always sorted first (even if the natural 
> order is reversed).
> This method is used by index sorting when merging already sorted segments 
> together. This means that a sorted index can be incorrectly sorted if it uses 
> a reverse sort and a missing value set to MIN_VALUE (long or int), or values 
> inside the segment that are equal to MIN_VALUE.
> This is a bad bug because it affects the document order inside segments, and 
> only a reindex can restore the correct sort order.






[jira] [Commented] (LUCENE-8592) MultiSorter#sort incorrectly sort Integer/Long#MIN_VALUE when the natural sort is reversed

2018-12-07 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8592?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16713160#comment-16713160
 ] 

ASF subversion and git services commented on LUCENE-8592:
-

Commit 9f29ed0757eae12d8311ffd6891f7032370ea39a in lucene-solr's branch 
refs/heads/master from [~jim.ferenczi]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=9f29ed0 ]

LUCENE-8592: Fix index sorting corruption due to numeric overflow

The merge sort of sorted segments can produce an invalid sort if the sort field 
is an Integer/Long that uses reverse order and contains values equal to 
Integer/Long#MIN_VALUE. These values are always sorted first during a merge 
(instead of last because of the reverse order) due to this bug.
Indices affected by the bug can be detected by running the CheckIndex command 
on a distribution that contains the fix (7.6+).


> MultiSorter#sort incorrectly sort Integer/Long#MIN_VALUE when the natural 
> sort is reversed
> --
>
> Key: LUCENE-8592
> URL: https://issues.apache.org/jira/browse/LUCENE-8592
> Project: Lucene - Core
>  Issue Type: Bug
>Affects Versions: 7.5, master (8.0)
>Reporter: Jim Ferenczi
>Priority: Blocker
> Fix For: 7.6, master (8.0)
>
> Attachments: LUCENE-8592.patch, LUCENE-8592.patch
>
>
> MultiSorter#getComparableProviders on an integer or long field doesn't handle 
> MIN_VALUE correctly when the natural order is reversed. To handle reverse 
> sort we use the negation of the value, but there is no overflow check, so 
> MIN_VALUE for ints and longs is always sorted first (even if the natural 
> order is reversed).
> This method is used by index sorting when merging already sorted segments 
> together. This means that a sorted index can be incorrectly sorted if it uses 
> a reverse sort and a missing value set to MIN_VALUE (long or int), or values 
> inside the segment that are equal to MIN_VALUE.
> This is a bad bug because it affects the document order inside segments, and 
> only a reindex can restore the correct sort order.






Re: [VOTE] Release Lucene/Solr 7.6.0 RC1

2018-12-07 Thread Nicholas Knize
No worries. The RC build is now fairly stable. I'll keep an eye out and
respin when the fix lands.

Thanks!

On Fri, Dec 7, 2018, 11:40 AM jim ferenczi  wrote:

> +1 too, the patch is almost ready, just need a validation from Mike and I
> can push.
>
> On Fri, Dec 7, 2018 at 18:38, Michael McCandless wrote:
>
>> Yeah +1 to respin with this fix -- this is possibly a fairly common bug
>> resulting in index corruption.
>>
>> Mike McCandless
>>
>> http://blog.mikemccandless.com
>>
>>
>> On Fri, Dec 7, 2018 at 12:27 PM Simon Willnauer <
>> simon.willna...@gmail.com> wrote:
>>
>>> Nick, nobody wants to be the one asking for a respin but I think this
>>> bug here [1] is pretty terrible and we should do another round once
>>> it's resolved and backported. @jim / @mike what do you think?
>>>
>>> [1] https://issues.apache.org/jira/browse/LUCENE-8592
>>> On Fri, Dec 7, 2018 at 5:29 PM Nicholas Knize  wrote:
>>> >
>>> > Please vote for release candidate 1 for Lucene/Solr 7.6.0
>>> >
>>> > The artifacts can be downloaded from:
>>> >
>>> https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-7.6.0-RC1-rev2d4435162774ad43b66ce0e7847bf8c1558e20a9/
>>> >
>>> > You can run the smoke tester directly with this command:
>>> >
>>> > python3 -u dev-tools/scripts/smokeTestRelease.py \
>>> >
>>> https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-7.6.0-RC1-rev2d4435162774ad43b66ce0e7847bf8c1558e20a9/
>>> >
>>> > Here's my +1
>>> > SUCCESS! [0:50:36.294057]
>>> > --
>>> >
>>> > Nicholas Knize, Ph.D., GISP
>>> > Geospatial Software Guy  |  Elasticsearch
>>> > Apache Lucene Committer
>>> > nkn...@apache.org
>>>
>>> -
>>> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>>> For additional commands, e-mail: dev-h...@lucene.apache.org
>>>
>>> --

Nicholas Knize, Ph.D., GISP
Geospatial Software Guy  |  Elasticsearch
Apache Lucene Committer
nkn...@apache.org


[JENKINS] Lucene-Solr-http2-Linux (64bit/jdk1.8.0_172) - Build # 46 - Still Failing!

2018-12-07 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-http2-Linux/46/
Java: 64bit/jdk1.8.0_172 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

9 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.client.solrj.TestLBHttp2SolrClient

Error Message:
3 threads leaked from SUITE scope at 
org.apache.solr.client.solrj.TestLBHttp2SolrClient: 1) Thread[id=1937, 
name=aliveCheckExecutor-299-thread-1, state=TIMED_WAITING, 
group=TGRP-TestLBHttp2SolrClient] at sun.misc.Unsafe.park(Native 
Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
 at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
at java.lang.Thread.run(Thread.java:748)2) Thread[id=1693, 
name=aliveCheckExecutor-282-thread-1, state=TIMED_WAITING, 
group=TGRP-TestLBHttp2SolrClient] at sun.misc.Unsafe.park(Native 
Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
 at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
at java.lang.Thread.run(Thread.java:748)3) Thread[id=2182, 
name=aliveCheckExecutor-316-thread-1, state=TIMED_WAITING, 
group=TGRP-TestLBHttp2SolrClient] at sun.misc.Unsafe.park(Native 
Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
 at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
at java.lang.Thread.run(Thread.java:748)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 3 threads leaked from SUITE 
scope at org.apache.solr.client.solrj.TestLBHttp2SolrClient: 
   1) Thread[id=1937, name=aliveCheckExecutor-299-thread-1, 
state=TIMED_WAITING, group=TGRP-TestLBHttp2SolrClient]
at sun.misc.Unsafe.park(Native Method)
at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
   2) Thread[id=1693, name=aliveCheckExecutor-282-thread-1, 
state=TIMED_WAITING, group=TGRP-TestLBHttp2SolrClient]
at sun.misc.Unsafe.park(Native Method)
at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
at 

[jira] [Commented] (LUCENE-8592) MultiSorter#sort incorrectly sort Integer/Long#MIN_VALUE when the natural sort is reversed

2018-12-07 Thread Michael McCandless (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8592?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16713130#comment-16713130
 ] 

Michael McCandless commented on LUCENE-8592:


+1, patch looks great!  Thanks [~jim.ferenczi].

> MultiSorter#sort incorrectly sort Integer/Long#MIN_VALUE when the natural 
> sort is reversed
> --
>
> Key: LUCENE-8592
> URL: https://issues.apache.org/jira/browse/LUCENE-8592
> Project: Lucene - Core
>  Issue Type: Bug
>Affects Versions: 7.5, master (8.0)
>Reporter: Jim Ferenczi
>Priority: Blocker
> Fix For: 7.6, master (8.0)
>
> Attachments: LUCENE-8592.patch, LUCENE-8592.patch
>
>
> MultiSorter#getComparableProviders on an integer or long field doesn't handle 
> MIN_VALUE correctly when the natural order is reversed. To handle reverse 
> sort we use the negation of the value, but there is no overflow check, so 
> MIN_VALUE for ints and longs is always sorted first (even if the natural 
> order is reversed).
> This method is used by index sorting when merging already sorted segments 
> together. This means that a sorted index can be incorrectly sorted if it uses 
> a reverse sort and a missing value set to MIN_VALUE (long or int), or values 
> inside the segment that are equal to MIN_VALUE.
> This is a bad bug because it affects the document order inside segments, and 
> only a reindex can restore the correct sort order.






[jira] [Commented] (LUCENE-8592) MultiSorter#sort incorrectly sort Integer/Long#MIN_VALUE when the natural sort is reversed

2018-12-07 Thread Michael McCandless (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8592?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16713127#comment-16713127
 ] 

Michael McCandless commented on LUCENE-8592:


Thanks [~jim.ferenczi], I'll look now!

> MultiSorter#sort incorrectly sort Integer/Long#MIN_VALUE when the natural 
> sort is reversed
> --
>
> Key: LUCENE-8592
> URL: https://issues.apache.org/jira/browse/LUCENE-8592
> Project: Lucene - Core
>  Issue Type: Bug
>Affects Versions: 7.5, master (8.0)
>Reporter: Jim Ferenczi
>Priority: Blocker
> Fix For: 7.6, master (8.0)
>
> Attachments: LUCENE-8592.patch, LUCENE-8592.patch
>
>
> MultiSorter#getComparableProviders on an integer or long field doesn't handle 
> MIN_VALUE correctly when the natural order is reversed. To handle reverse 
> sort we use the negation of the value, but there is no overflow check, so 
> MIN_VALUE for ints and longs is always sorted first (even if the natural 
> order is reversed).
> This method is used by index sorting when merging already sorted segments 
> together. This means that a sorted index can be incorrectly sorted if it uses 
> a reverse sort and a missing value set to MIN_VALUE (long or int), or values 
> inside the segment that are equal to MIN_VALUE.
> This is a bad bug because it affects the document order inside segments, and 
> only a reindex can restore the correct sort order.






Re: [VOTE] Release Lucene/Solr 7.6.0 RC1

2018-12-07 Thread jim ferenczi
+1 too, the patch is almost ready, just need a validation from Mike and I
can push.

On Fri, Dec 7, 2018 at 18:38, Michael McCandless wrote:

> Yeah +1 to respin with this fix -- this is possibly a fairly common bug
> resulting in index corruption.
>
> Mike McCandless
>
> http://blog.mikemccandless.com
>
>
> On Fri, Dec 7, 2018 at 12:27 PM Simon Willnauer 
> wrote:
>
>> Nick, nobody wants to be the one asking for a respin but I think this
>> bug here [1] is pretty terrible and we should do another round once
>> it's resolved and backported. @jim / @mike what do you think?
>>
>> [1] https://issues.apache.org/jira/browse/LUCENE-8592
>> On Fri, Dec 7, 2018 at 5:29 PM Nicholas Knize  wrote:
>> >
>> > Please vote for release candidate 1 for Lucene/Solr 7.6.0
>> >
>> > The artifacts can be downloaded from:
>> >
>> https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-7.6.0-RC1-rev2d4435162774ad43b66ce0e7847bf8c1558e20a9/
>> >
>> > You can run the smoke tester directly with this command:
>> >
>> > python3 -u dev-tools/scripts/smokeTestRelease.py \
>> >
>> https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-7.6.0-RC1-rev2d4435162774ad43b66ce0e7847bf8c1558e20a9/
>> >
>> > Here's my +1
>> > SUCCESS! [0:50:36.294057]
>> > --
>> >
>> > Nicholas Knize, Ph.D., GISP
>> > Geospatial Software Guy  |  Elasticsearch
>> > Apache Lucene Committer
>> > nkn...@apache.org
>>
>> -
>> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>> For additional commands, e-mail: dev-h...@lucene.apache.org
>>
>>


Re: [VOTE] Release Lucene/Solr 7.6.0 RC1

2018-12-07 Thread Michael McCandless
Yeah +1 to respin with this fix -- this is possibly a fairly common bug
resulting in index corruption.

Mike McCandless

http://blog.mikemccandless.com


On Fri, Dec 7, 2018 at 12:27 PM Simon Willnauer 
wrote:

> Nick, nobody wants to be the one asking for a respin but I think this
> bug here [1] is pretty terrible and we should do another round once
> it's resolved and backported. @jim / @mike what do you think?
>
> [1] https://issues.apache.org/jira/browse/LUCENE-8592
> On Fri, Dec 7, 2018 at 5:29 PM Nicholas Knize  wrote:
> >
> > Please vote for release candidate 1 for Lucene/Solr 7.6.0
> >
> > The artifacts can be downloaded from:
> >
> https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-7.6.0-RC1-rev2d4435162774ad43b66ce0e7847bf8c1558e20a9/
> >
> > You can run the smoke tester directly with this command:
> >
> > python3 -u dev-tools/scripts/smokeTestRelease.py \
> >
> https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-7.6.0-RC1-rev2d4435162774ad43b66ce0e7847bf8c1558e20a9/
> >
> > Here's my +1
> > SUCCESS! [0:50:36.294057]
> > --
> >
> > Nicholas Knize, Ph.D., GISP
> > Geospatial Software Guy  |  Elasticsearch
> > Apache Lucene Committer
> > nkn...@apache.org
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>
>


[jira] [Updated] (LUCENE-8592) MultiSorter#sort incorrectly sort Integer/Long#MIN_VALUE when the natural sort is reversed

2018-12-07 Thread Simon Willnauer (JIRA)


 [ 
https://issues.apache.org/jira/browse/LUCENE-8592?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Simon Willnauer updated LUCENE-8592:

Affects Version/s: master (8.0)
   7.5
 Priority: Blocker  (was: Major)
Fix Version/s: master (8.0)
   7.6

> MultiSorter#sort incorrectly sort Integer/Long#MIN_VALUE when the natural 
> sort is reversed
> --
>
> Key: LUCENE-8592
> URL: https://issues.apache.org/jira/browse/LUCENE-8592
> Project: Lucene - Core
>  Issue Type: Bug
>Affects Versions: 7.5, master (8.0)
>Reporter: Jim Ferenczi
>Priority: Blocker
> Fix For: 7.6, master (8.0)
>
> Attachments: LUCENE-8592.patch
>
>
> MultiSorter#getComparableProviders on an integer or long field doesn't handle 
> MIN_VALUE correctly when the natural order is reversed. To handle reverse 
> sort we use the negation of the value, but there is no overflow check, so 
> MIN_VALUE for ints and longs is always sorted first (even if the natural 
> order is reversed).
> This method is used by index sorting when merging already sorted segments 
> together. This means that a sorted index can be incorrectly sorted if it uses 
> a reverse sort and a missing value set to MIN_VALUE (long or int), or values 
> inside the segment that are equal to MIN_VALUE.
> This is a bad bug because it affects the document order inside segments, and 
> only a reindex can restore the correct sort order.






Re: [VOTE] Release Lucene/Solr 7.6.0 RC1

2018-12-07 Thread Simon Willnauer
Nick, nobody wants to be the one asking for a respin but I think this
bug here [1] is pretty terrible and we should do another round once
it's resolved and backported. @jim / @mike what do you think?

[1] https://issues.apache.org/jira/browse/LUCENE-8592
On Fri, Dec 7, 2018 at 5:29 PM Nicholas Knize  wrote:
>
> Please vote for release candidate 1 for Lucene/Solr 7.6.0
>
> The artifacts can be downloaded from:
> https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-7.6.0-RC1-rev2d4435162774ad43b66ce0e7847bf8c1558e20a9/
>
> You can run the smoke tester directly with this command:
>
> python3 -u dev-tools/scripts/smokeTestRelease.py \
> https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-7.6.0-RC1-rev2d4435162774ad43b66ce0e7847bf8c1558e20a9/
>
> Here's my +1
> SUCCESS! [0:50:36.294057]
> --
>
> Nicholas Knize, Ph.D., GISP
> Geospatial Software Guy  |  Elasticsearch
> Apache Lucene Committer
> nkn...@apache.org




Fwd: [NOTICE] Mandatory relocation of Apache git repositories on git-wip-us.apache.org

2018-12-07 Thread Steve Rowe


> Begin forwarded message:
> 
> From: Daniel Gruno 
> Subject: [NOTICE] Mandatory relocation of Apache git repositories on 
> git-wip-us.apache.org
> Date: December 7, 2018 at 11:52:36 AM EST
> To: "us...@infra.apache.org" 
> Reply-To: "us...@infra.apache.org" 
> 
> [IF YOUR PROJECT DOES NOT HAVE GIT REPOSITORIES ON GIT-WIP-US PLEASE
> DISREGARD THIS EMAIL; IT WAS MASS-MAILED TO ALL APACHE PROJECTS]
> 
> Hello Apache projects,
> 
> I am writing to you because you may have git repositories on the
> git-wip-us server, which is slated to be decommissioned in the coming
> months. All repositories will be moved to the new gitbox service which
> includes direct write access on github as well as the standard ASF
> commit access via gitbox.apache.org.
> 
> ## Why this move? ##
> The move comes as a result of retiring the git-wip service, as the
> hardware it runs on is longing for retirement. In lieu of this, we
> have decided to consolidate the two services (git-wip and gitbox), to
> ease the management of our repository systems and future-proof the
> underlying hardware. The move is fully automated, and ideally, nothing
> will change in your workflow other than added features and access to
> GitHub.
> 
> ## Timeframe for relocation ##
> Initially, we are asking that projects voluntarily request to move
> their repositories to gitbox, hence this email. The voluntary
> timeframe is between now and January 9th 2019, during which projects
> are free to either move over to gitbox or stay put on git-wip. After
> this phase, we will be requiring the remaining projects to move within
> one month, after which we will move the remaining projects over.
> 
> To have your project moved in this initial phase, you will need:
> 
> - Consensus in the project (documented via the mailing list)
> - File a JIRA ticket with INFRA to voluntarily move your project repos
>  over to gitbox (as stated, this is highly automated and will take
>  between a minute and an hour, depending on the size and number of
>  your repositories)
> 
> To sum up the preliminary timeline;
> 
> - December 9th 2018 -> January 9th 2019: Voluntary (coordinated)
>  relocation
> - January 9th -> February 6th: Mandated (coordinated) relocation
> - February 7th: All remaining repositories are mass migrated.
> 
> This timeline may change to accommodate various scenarios.
> 
> ## Using GitHub with ASF repositories ##
> When your project has moved, you are free to use either the ASF
> repository system (gitbox.apache.org) OR GitHub for your development
> and code pushes. To be able to use GitHub, please follow the primer
> at: https://reference.apache.org/committer/github
> 
> 
> We appreciate your understanding of this issue, and hope that your
> project can coordinate voluntarily moving your repositories in a
> timely manner.
> 
> All settings, such as commit mail targets, issue linking, PR
> notification schemes etc will automatically be migrated to gitbox as
> well.
> 
> With regards, Daniel on behalf of ASF Infra.
> 
> PS:For inquiries, please reply to us...@infra.apache.org, not your project's 
> dev list :-).
> 
> 



[jira] [Commented] (SOLR-12727) Upgrade ZooKeeper dependency to 3.4.13

2018-12-07 Thread Kevin Risden (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12727?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16713079#comment-16713079
 ] 

Kevin Risden commented on SOLR-12727:
-

It reverts some of the HTTP/localhost changes since those aren't necessary for 
this. 

> Upgrade ZooKeeper dependency to 3.4.13
> --
>
> Key: SOLR-12727
> URL: https://issues.apache.org/jira/browse/SOLR-12727
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Affects Versions: 7.4
>Reporter: Shawn Heisey
>Assignee: Erick Erickson
>Priority: Major
> Attachments: SOLR-12727.patch, SOLR-12727.patch, SOLR-12727.patch, 
> SOLR-12727.patch
>
>
> Upgrade ZK dependency to 3.4.13.  This fixes ZOOKEEPER-2184 which will make 
> the ZK client re-resolve the server hostnames when a connection fails.  This 
> will fix issues where a failed ZK container is replaced with a new one that 
> has a different IP address and DNS gets updated with the new address.
> Typically these upgrades do not require code changes, but that should be 
> verified.






[jira] [Updated] (SOLR-12727) Upgrade ZooKeeper dependency to 3.4.13

2018-12-07 Thread Kevin Risden (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12727?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden updated SOLR-12727:

Attachment: SOLR-12727.patch

> Upgrade ZooKeeper dependency to 3.4.13
> --
>
> Key: SOLR-12727
> URL: https://issues.apache.org/jira/browse/SOLR-12727
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Affects Versions: 7.4
>Reporter: Shawn Heisey
>Assignee: Erick Erickson
>Priority: Major
> Attachments: SOLR-12727.patch, SOLR-12727.patch, SOLR-12727.patch, 
> SOLR-12727.patch
>
>
> Upgrade ZK dependency to 3.4.13.  This fixes ZOOKEEPER-2184 which will make 
> the ZK client re-resolve the server hostnames when a connection fails.  This 
> will fix issues where a failed ZK container is replaced with a new one that 
> has a different IP address and DNS gets updated with the new address.
> Typically these upgrades do not require code changes, but that should be 
> verified.






[jira] [Commented] (LUCENE-8585) Create jump-tables for DocValues at index-time

2018-12-07 Thread Adrien Grand (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8585?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16713081#comment-16713081
 ] 

Adrien Grand commented on LUCENE-8585:
--

bq. Since the norms-classes also uses IndexedDISI, I expect it would be best to 
upgrade them too. This would leave the core lucene70 folder empty of active 
code.

+1 to improve norms at the same time
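
For reference, the first of those jump-tables (one {{long}} per 65536 documents 
holding a 33-bit block offset and a 31-bit rank) could be packed roughly along 
these lines; the class and method names below are illustrative only, not taken 
from the attached patch:

{code:java}
// Illustrative sketch only: pack the per-65536-docs entry described in the
// issue (33-bit block offset + 31-bit "set bits before this block" rank)
// into a single long.
final class JumpTableEntrySketch {
  private static final int OFFSET_BITS = 33;
  private static final long OFFSET_MASK = (1L << OFFSET_BITS) - 1;

  static long pack(long blockOffset, int rankBeforeBlock) {
    assert blockOffset >= 0 && blockOffset <= OFFSET_MASK;
    assert rankBeforeBlock >= 0;
    return ((long) rankBeforeBlock << OFFSET_BITS) | blockOffset;
  }

  static long blockOffset(long packed) {
    return packed & OFFSET_MASK;            // low 33 bits: offset of the block
  }

  static int rankBeforeBlock(long packed) {
    return (int) (packed >>> OFFSET_BITS);  // high 31 bits: set bits before the block
  }
}
{code}

Reading an entry back is then a single long read plus two bit operations.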

> Create jump-tables for DocValues at index-time
> --
>
> Key: LUCENE-8585
> URL: https://issues.apache.org/jira/browse/LUCENE-8585
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/codecs
>Affects Versions: master (8.0)
>Reporter: Toke Eskildsen
>Priority: Minor
>  Labels: performance
> Attachments: LUCENE-8585.patch, make_patch_lucene8585.sh
>
>
> As noted in LUCENE-7589, lookup of DocValues should use jump-tables to avoid 
> long iterative walks. This is implemented in LUCENE-8374 at search-time 
> (first request for DocValues from a field in a segment), with the benefit of 
> working without changes to existing Lucene 7 indexes and the downside of 
> introducing a startup time penalty and a memory overhead.
> As discussed in LUCENE-8374, the codec should be updated to create these 
> jump-tables at index time. This eliminates the segment-open time & memory 
> penalties, with the potential downside of increasing index-time for DocValues.
> The three elements of LUCENE-8374 should be transferable to index-time 
> without much alteration of the core structures:
>  * {{IndexedDISI}} block offset and index skips: A {{long}} (64 bits) for 
> every 65536 documents, containing the offset of the block in 33 bits and the 
> index (number of set bits) up to the block in 31 bits.
>  It can be built sequentially and should be stored as a simple sequence of 
> consecutive longs for caching of lookups.
>  As it is fairly small, relative to document count, it might be better to 
> simply memory cache it?
>  * {{IndexedDISI}} DENSE (> 4095, < 65536 set bits) blocks: A {{short}} (16 
> bits) for every 8 {{longs}} (512 bits) for a total of 256 bytes/DENSE_block. 
> Each {{short}} represents the number of set bits up to right before the 
> corresponding sub-block of 512 docIDs.
>  The {{shorts}} can be computed sequentially or when the DENSE block is 
> flushed (probably the easiest). They should be stored as a simple sequence of 
> consecutive shorts for caching of lookups, one logically independent sequence 
> for each DENSE block. The logical position would be one sequence at the start 
> of every DENSE block.
>  Whether it is best to read all the 16 {{shorts}} up front when a DENSE block 
> is accessed or whether it is best to only read any individual {{short}} when 
> needed is not clear at this point.
>  * Variable Bits Per Value: A {{long}} (64 bits) for every 16384 numeric 
> values. Each {{long}} holds the offset to the corresponding block of values.
>  The offsets can be computed sequentially and should be stored as a simple 
> sequence of consecutive {{longs}} for caching of lookups.
>  The vBPV-offsets have the largest space overhead of the 3 jump-tables, and a 
> lot of the 64 bits in each long are not used for most indexes. They could be 
> represented as a simple {{PackedInts}} sequence or {{MonotonicLongValues}}, 
> with the downsides of a potential lookup-time overhead and the need for doing 
> the compression after all offsets have been determined.
> I have no experience with the codec-parts responsible for creating 
> index-structures. I'm quite willing to take a stab at this, although I 
> probably won't do much about it before January 2019. Should anyone else wish 
> to adopt this JIRA-issue or co-work on it, I'll be happy to share.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[NOTICE] Mandatory relocation of Apache git repositories on git-wip-us.apache.org

2018-12-07 Thread Daniel Gruno

[IF YOUR PROJECT DOES NOT HAVE GIT REPOSITORIES ON GIT-WIP-US PLEASE
 DISREGARD THIS EMAIL; IT WAS MASS-MAILED TO ALL APACHE PROJECTS]

Hello Apache projects,

I am writing to you because you may have git repositories on the
git-wip-us server, which is slated to be decommissioned in the coming
months. All repositories will be moved to the new gitbox service which
includes direct write access on github as well as the standard ASF
commit access via gitbox.apache.org.

## Why this move? ##
The move comes as a result of retiring the git-wip service, as the
hardware it runs on is longing for retirement. In light of this, we
have decided to consolidate the two services (git-wip and gitbox), to
ease the management of our repository systems and future-proof the
underlying hardware. The move is fully automated, and ideally, nothing
will change in your workflow other than added features and access to
GitHub.

## Timeframe for relocation ##
Initially, we are asking that projects voluntarily request to move
their repositories to gitbox, hence this email. The voluntary
timeframe is between now and January 9th 2019, during which projects
are free to either move over to gitbox or stay put on git-wip. After
this phase, we will require the remaining projects to move within
one month, after which we will migrate any remaining repositories
ourselves.

To have your project moved in this initial phase, you will need:

- Consensus in the project (documented via the mailing list)
- A JIRA ticket filed with INFRA to voluntarily move your project repos
  over to gitbox (as stated, this is highly automated and will take
  between a minute and an hour, depending on the size and number of
  your repositories)

To sum up, the preliminary timeline:

- December 9th 2018 -> January 9th 2019: Voluntary (coordinated)
  relocation
- January 9th -> February 6th: Mandated (coordinated) relocation
- February 7th: All remaining repositories are mass migrated.

This timeline may change to accommodate various scenarios.

## Using GitHub with ASF repositories ##
When your project has moved, you are free to use either the ASF
repository system (gitbox.apache.org) OR GitHub for your development
and code pushes. To be able to use GitHub, please follow the primer
at: https://reference.apache.org/committer/github


We appreciate your understanding of this issue, and hope that your
project can coordinate voluntarily moving your repositories in a
timely manner.

All settings, such as commit mail targets, issue linking, PR
notification schemes, etc., will automatically be migrated to gitbox as
well.

With regards, Daniel on behalf of ASF Infra.

PS: For inquiries, please reply to us...@infra.apache.org, not your 
project's dev list :-).




-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12727) Upgrade ZooKeeper dependency to 3.4.13

2018-12-07 Thread Kevin Risden (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12727?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16713078#comment-16713078
 ] 

Kevin Risden commented on SOLR-12727:
-

[~erickerickson] - I just uploaded a patch that passed all Solr tests. 

{code:java}
ant clean clean-jars jar-checksums compile
cd solr
ant test
{code}


> Upgrade ZooKeeper dependency to 3.4.13
> --
>
> Key: SOLR-12727
> URL: https://issues.apache.org/jira/browse/SOLR-12727
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Affects Versions: 7.4
>Reporter: Shawn Heisey
>Assignee: Erick Erickson
>Priority: Major
> Attachments: SOLR-12727.patch, SOLR-12727.patch, SOLR-12727.patch, 
> SOLR-12727.patch
>
>
> Upgrade ZK dependency to 3.4.13.  This fixes ZOOKEEPER-2184 which will make 
> the ZK client re-resolve the server hostnames when a connection fails.  This 
> will fix issues where a failed ZK container is replaced with a new one that 
> has a different IP address and DNS gets updated with the new address.
> Typically these upgrades do not require code changes, but that should be 
> verified.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS-EA] Lucene-Solr-master-Linux (64bit/jdk-12-ea+12) - Build # 23307 - Still Unstable!

2018-12-07 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/23307/
Java: 64bit/jdk-12-ea+12 -XX:-UseCompressedOops -XX:+UseG1GC

26 tests failed.
FAILED:  
org.apache.solr.client.solrj.io.stream.MathExpressionTest.testGammaDistribution

Error Message:
0.8100353276359745 0.8392832433176233

Stack Trace:
java.lang.AssertionError: 0.8100353276359745 0.8392832433176233
at 
__randomizedtesting.SeedInfo.seed([C106A0D31E80844F:FC7C8B7D3DF82E58]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.client.solrj.io.stream.MathExpressionTest.testGammaDistribution(MathExpressionTest.java:4363)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:835)


FAILED:  
org.apache.solr.cloud.TestMiniSolrCloudClusterSSL.testSslWithInvalidPeerName

Error Message:
No live SolrServers available to handle this 
request:[https://127.0.0.1:45365/solr/second_collection, 
https://127.0.0.1:42783/solr/second_collection]

Stack 

[VOTE] Release Lucene/Solr 7.6.0 RC1

2018-12-07 Thread Nicholas Knize
Please vote for release candidate 1 for Lucene/Solr 7.6.0

The artifacts can be downloaded from:
https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-7.6.0-RC1-rev2d4435162774ad43b66ce0e7847bf8c1558e20a9/


You can run the smoke tester directly with this command:

python3 -u dev-tools/scripts/smokeTestRelease.py \
https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-7.6.0-RC1-rev2d4435162774ad43b66ce0e7847bf8c1558e20a9/


Here's my +1
SUCCESS! [0:50:36.294057]
-- 

Nicholas Knize, Ph.D., GISP
Geospatial Software Guy  |  Elasticsearch
Apache Lucene Committer
nkn...@apache.org


[jira] [Commented] (SOLR-13045) Harden TestSimPolicyCloud

2018-12-07 Thread Jason Gerlowski (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13045?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16713056#comment-16713056
 ] 

Jason Gerlowski commented on SOLR-13045:


I've attached a proposed fix for this. With it, all tests in 
{{TestSimPolicyCloud}} looked good. I ran them ~5000 times. I'm going to do some 
beast runs to try to trigger failures that way, but otherwise things look good 
here.

> Harden TestSimPolicyCloud
> -
>
> Key: SOLR-13045
> URL: https://issues.apache.org/jira/browse/SOLR-13045
> Project: Solr
>  Issue Type: Test
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling
>Affects Versions: master (8.0)
>Reporter: Jason Gerlowski
>Assignee: Jason Gerlowski
>Priority: Major
> Attachments: SOLR-13045.patch
>
>
> Several tests in TestSimPolicyCloud, but especially 
> {{testCreateCollectionAddReplica}}, have some flaky behavior, even after 
> Mark's recent test-fix commit.  This JIRA covers looking into and (hopefully) 
> fixing this test failure.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-13045) Harden TestSimPolicyCloud

2018-12-07 Thread Jason Gerlowski (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13045?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gerlowski updated SOLR-13045:
---
Attachment: SOLR-13045.patch

> Harden TestSimPolicyCloud
> -
>
> Key: SOLR-13045
> URL: https://issues.apache.org/jira/browse/SOLR-13045
> Project: Solr
>  Issue Type: Test
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling
>Affects Versions: master (8.0)
>Reporter: Jason Gerlowski
>Assignee: Jason Gerlowski
>Priority: Major
> Attachments: SOLR-13045.patch
>
>
> Several tests in TestSimPolicyCloud, but especially 
> {{testCreateCollectionAddReplica}}, have some flaky behavior, even after 
> Mark's recent test-fix commit.  This JIRA covers looking into and (hopefully) 
> fixing this test failure.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (SOLR-13045) Harden TestSimPolicyCloud

2018-12-07 Thread Jason Gerlowski (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13045?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gerlowski reassigned SOLR-13045:
--

Assignee: Jason Gerlowski

> Harden TestSimPolicyCloud
> -
>
> Key: SOLR-13045
> URL: https://issues.apache.org/jira/browse/SOLR-13045
> Project: Solr
>  Issue Type: Test
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling
>Affects Versions: master (8.0)
>Reporter: Jason Gerlowski
>Assignee: Jason Gerlowski
>Priority: Major
>
> Several tests in TestSimPolicyCloud, but especially 
> {{testCreateCollectionAddReplica}}, have some flaky behavior, even after 
> Mark's recent test-fix commit.  This JIRA covers looking into and (hopefully) 
> fixing this test failure.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13028) Harden AutoAddReplicasPlanActionTest#testSimple

2018-12-07 Thread Mark Miller (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13028?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16713029#comment-16713029
 ] 

Mark Miller commented on SOLR-13028:


{quote}Ironically the error it throws if it retries on SocketException multiple 
times w/o succeeding misleadingly claims it encountered a 
NoHttpResponseException
{quote}
Just evolution - a different exception can be thrown depending on the OS and 
JVM, so NoHttpResponseException is too specific. I'll clean that up a bit.

> Harden AutoAddReplicasPlanActionTest#testSimple
> ---
>
> Key: SOLR-13028
> URL: https://issues.apache.org/jira/browse/SOLR-13028
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mark Miller
>Priority: Major
> Attachments: sarowe__Lucene-Solr-BadApple-tests-master__229.log.txt
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13045) Harden TestSimPolicyCloud

2018-12-07 Thread Jason Gerlowski (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13045?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16713023#comment-16713023
 ] 

Jason Gerlowski commented on SOLR-13045:


I believe I found the race condition causing these failures. It looks like an 
issue between the {{waitForState}} polling, which occurs in the main test 
thread, and the leader-election execution, which occurs in a {{Future}} 
submitted to {{SimCloudManager}}'s ExecutorService.

The {{waitForState}} thread repeatedly asks for the cluster state, which looks 
a bit like this:
 * [return cached value, if any. Otherwise 
continue|https://github.com/apache/lucene-solr/blob/75b183196798232aa6f2dcb117f309119053/solr/core/src/test/org/apache/solr/cloud/autoscaling/sim/SimClusterStateProvider.java#L2090]
 * [Grab 
lock|https://github.com/apache/lucene-solr/blob/75b183196798232aa6f2dcb117f309119053/solr/core/src/test/org/apache/solr/cloud/autoscaling/sim/SimClusterStateProvider.java#L2093]
 * [Clear 
cache|https://github.com/apache/lucene-solr/blob/75b183196798232aa6f2dcb117f309119053/solr/core/src/test/org/apache/solr/cloud/autoscaling/sim/SimClusterStateProvider.java#L2094]
 * [Build Map to store in 
cache|https://github.com/apache/lucene-solr/blob/75b183196798232aa6f2dcb117f309119053/solr/core/src/test/org/apache/solr/cloud/autoscaling/sim/SimClusterStateProvider.java#L2126]
 * [Set cache with 
Map|https://github.com/apache/lucene-solr/blob/75b183196798232aa6f2dcb117f309119053/solr/core/src/test/org/apache/solr/cloud/autoscaling/sim/SimClusterStateProvider.java#L2141]
 * [Release 
lock|https://github.com/apache/lucene-solr/blob/75b183196798232aa6f2dcb117f309119053/solr/core/src/test/org/apache/solr/cloud/autoscaling/sim/SimClusterStateProvider.java#L2144]

The Leader Election Future looks a bit like this:
 * [Give a ReplicaInfo 
"leader=true"|https://github.com/apache/lucene-solr/blob/75b183196798232aa6f2dcb117f309119053/solr/core/src/test/org/apache/solr/cloud/autoscaling/sim/SimClusterStateProvider.java#L756]
 * [Clear 
cache|https://github.com/apache/lucene-solr/blob/75b183196798232aa6f2dcb117f309119053/solr/core/src/test/org/apache/solr/cloud/autoscaling/sim/SimClusterStateProvider.java#L766]

Note that the leader election Future does this without acquiring the lock. Now 
imagine the following interleaving of these two threads:
 * [Thread-Test] Grab lock
 * [Thread-Test] Clear cache
 * [Thread-Test] Build Map to store in cache
 * [Thread-LeaderElection] Give ReplicaInfo "leader=true"
 * [Thread-LeaderElection] Clear cache
 * [Thread-Test] Set cache with Map

At the end of this interleaving the cache has a value that's missing the latest 
"leader=true" changes, and nothing will ever clear it. So the {{waitForState}} 
polling will go on to fail.

We should be able to fix this by having the leader election code use the same 
Lock used elsewhere. I've actually got this change staged locally and am 
running tests on it currently. If all looks well I should have this uploaded 
soon. One thing I'll be curious to see is whether this affects any of the other 
TestSim* failures we've seen recently. If we're lucky we may get 2 (or more) 
birds with this one stone.
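
The rough shape of that change, heavily simplified (the names below only 
approximate the {{SimClusterStateProvider}} code linked above; this is a 
sketch, not the actual patch):

{code:java}
import java.util.concurrent.locks.ReentrantLock;

// Heavily simplified sketch: both the polling path and the leader-election
// path mutate the cached state under the same lock, so a cache invalidation
// can no longer be lost between another thread's "clear" and "set" steps.
class ClusterStateCacheSketch {
  private final ReentrantLock lock = new ReentrantLock();
  private volatile Object cachedState;          // stands in for the cached state map

  Object getClusterState() {                    // waitForState polling path
    Object cached = cachedState;
    if (cached != null) {
      return cached;                            // return cached value, if any
    }
    lock.lock();
    try {
      cachedState = null;                       // clear cache
      Object fresh = buildStateSnapshot();      // build map to store in cache
      cachedState = fresh;                      // set cache
      return fresh;
    } finally {
      lock.unlock();
    }
  }

  void electLeader(Object replicaInfo) {        // leader-election path
    lock.lock();                                // previously ran without the lock
    try {
      markLeader(replicaInfo);                  // give ReplicaInfo "leader=true"
      cachedState = null;                       // clear cache while holding the lock
    } finally {
      lock.unlock();
    }
  }

  private Object buildStateSnapshot() { return new Object(); }
  private void markLeader(Object replicaInfo) { /* no-op in this sketch */ }
}
{code}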

> Harden TestSimPolicyCloud
> -
>
> Key: SOLR-13045
> URL: https://issues.apache.org/jira/browse/SOLR-13045
> Project: Solr
>  Issue Type: Test
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling
>Affects Versions: master (8.0)
>Reporter: Jason Gerlowski
>Assignee: Jason Gerlowski
>Priority: Major
>
> Several tests in TestSimPolicyCloud, but especially 
> {{testCreateCollectionAddReplica}}, have some flaky behavior, even after 
> Mark's recent test-fix commit.  This JIRA covers looking into and (hopefully) 
> fixing this test failure.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13048) Connections issue between nodes when using SSL + Java 11

2018-12-07 Thread Cao Manh Dat (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13048?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16712990#comment-16712990
 ] 

Cao Manh Dat commented on SOLR-13048:
-

I'm not sure whether this relates to this issue or not:
https://bugs.openjdk.java.net/browse/JDK-8207009

> Connections issue between nodes when using SSL + Java 11
> 
>
> Key: SOLR-13048
> URL: https://issues.apache.org/jira/browse/SOLR-13048
> Project: Solr
>  Issue Type: Test
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Cao Manh Dat
>Priority: Major
> Attachments: jenkins.log (5).txt.gz, jenkins.log (6).txt.gz
>
>
> When I looked into test failures recently, I saw a common pattern of failures 
> related to Java 11 + SSL. 
> {code}
> 24580 ERROR 
> (OverseerThreadFactory-166-thread-1-processing-n:127.0.0.1:40151_solr) 
> [n:127.0.0.1:40151_solr] o.a.s.c.a.c.OverseerCollectionMessageHandler 
> Error from shard: https://127.0.0.1:40151/solr
> org.apache.solr.client.solrj.SolrServerException: IOException occured when 
> talking to server at: https://127.0.0.1:40151/solr
>   at 
> org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:657)
>  ~[java/:?]
>   at 
> org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255)
>  ~[java/:?]
>   at 
> org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244)
>  ~[java/:?]
>   at 
> org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1260) 
> ~[java/:?]
>   at 
> org.apache.solr.handler.component.HttpShardHandler.lambda$submit$0(HttpShardHandler.java:172)
>  ~[java/:?]
>   at java.util.concurrent.FutureTask.run(FutureTask.java:264) ~[?:?]
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515) ~[?:?]
>   at java.util.concurrent.FutureTask.run(FutureTask.java:264) ~[?:?]
>   at 
> com.codahale.metrics.InstrumentedExecutorService$InstrumentedRunnable.run(InstrumentedExecutorService.java:176)
>  ~[metrics-core-3.2.6.jar:3.2.6]
>   at 
> org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:209)
>  [java/:?]
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
>  [?:?]
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
>  [?:?]
>   at java.lang.Thread.run(Thread.java:834) [?:?]
> Caused by: javax.net.ssl.SSLHandshakeException: Remote host terminated the 
> handshake
>   at sun.security.ssl.SSLSocketImpl.handleEOF(SSLSocketImpl.java:1321) 
> ~[?:?]
>   at sun.security.ssl.SSLSocketImpl.decode(SSLSocketImpl.java:1160) ~[?:?]
>   at 
> sun.security.ssl.SSLSocketImpl.readHandshakeRecord(SSLSocketImpl.java:1063) 
> ~[?:?]
>   at 
> sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:402) ~[?:?]
>   at 
> org.apache.http.conn.ssl.SSLConnectionSocketFactory.createLayeredSocket(SSLConnectionSocketFactory.java:396)
>  ~[httpclient-4.5.6.jar:4.5.6]
>   at 
> org.apache.http.conn.ssl.SSLConnectionSocketFactory.connectSocket(SSLConnectionSocketFactory.java:355)
>  ~[httpclient-4.5.6.jar:4.5.6]
>   at 
> org.apache.http.impl.conn.DefaultHttpClientConnectionOperator.connect(DefaultHttpClientConnectionOperator.java:142)
>  ~[httpclient-4.5.6.jar:4.5.6]
>   at 
> org.apache.http.impl.conn.PoolingHttpClientConnectionManager.connect(PoolingHttpClientConnectionManager.java:373)
>  ~[httpclient-4.5.6.jar:4.5.6]
>   at 
> org.apache.http.impl.execchain.MainClientExec.establishRoute(MainClientExec.java:394)
>  ~[httpclient-4.5.6.jar:4.5.6]
>   at 
> org.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:237)
>  ~[httpclient-4.5.6.jar:4.5.6]
>   at 
> org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:185) 
> ~[httpclient-4.5.6.jar:4.5.6]
>   at org.apache.http.impl.execchain.RetryExec.execute(RetryExec.java:89) 
> ~[httpclient-4.5.6.jar:4.5.6]
>   at 
> org.apache.http.impl.execchain.RedirectExec.execute(RedirectExec.java:110) 
> ~[httpclient-4.5.6.jar:4.5.6]
>   at 
> org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:185)
>  ~[httpclient-4.5.6.jar:4.5.6]
>   at 
> org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:83)
>  ~[httpclient-4.5.6.jar:4.5.6]
>   at 
> org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:56)
>  ~[httpclient-4.5.6.jar:4.5.6]
>   at 
> org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:542)
>  ~[java/:?]
>   ... 12 more
> Caused by: java.io.EOFException: SSL peer shut down incorrectly
>   at 
> 

[jira] [Commented] (SOLR-12727) Upgrade ZooKeeper dependency to 3.4.13

2018-12-07 Thread Kevin Risden (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12727?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16713013#comment-16713013
 ] 

Kevin Risden commented on SOLR-12727:
-

Taking a look - they don't seem to fail in my IDE, but the failures reproduce 
from the command line. It looks like there might be some unrelated changes 
added to the patch that cause this. I am looking into it a bit further.

> Upgrade ZooKeeper dependency to 3.4.13
> --
>
> Key: SOLR-12727
> URL: https://issues.apache.org/jira/browse/SOLR-12727
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Affects Versions: 7.4
>Reporter: Shawn Heisey
>Assignee: Erick Erickson
>Priority: Major
> Attachments: SOLR-12727.patch, SOLR-12727.patch, SOLR-12727.patch
>
>
> Upgrade ZK dependency to 3.4.13.  This fixes ZOOKEEPER-2184 which will make 
> the ZK client re-resolve the server hostnames when a connection fails.  This 
> will fix issues where a failed ZK container is replaced with a new one that 
> has a different IP address and DNS gets updated with the new address.
> Typically these upgrades do not require code changes, but that should be 
> verified.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8566) Deprecate methods in CustomAnalyzer.Builder which take factory classes

2018-12-07 Thread Tomoko Uchida (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8566?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16713006#comment-16713006
 ] 

Tomoko Uchida commented on LUCENE-8566:
---

Thanks for the comment. I'd like to start with this,
{quote} - Add a "NAME" static public final String field to all factories{quote}
and document the SPI names in all factories' Javadoc.

Also, we might need a code validator, callable from the {{precommit}} build 
task, to make sure that each factory has the "NAME" static field.
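
To illustrate the kind of check such a validator could perform (class and 
method names here are hypothetical, just to sketch the idea):

{code:java}
import java.lang.reflect.Field;
import java.lang.reflect.Modifier;

// Hypothetical sketch: verify that a factory class declares
// "public static final String NAME", which a precommit-time check
// could run over every discovered factory.
final class SpiNameFieldCheck {
  static boolean hasNameConstant(Class<?> factoryClass) {
    try {
      Field f = factoryClass.getDeclaredField("NAME");
      int mods = f.getModifiers();
      return f.getType() == String.class
          && Modifier.isPublic(mods)
          && Modifier.isStatic(mods)
          && Modifier.isFinal(mods);
    } catch (NoSuchFieldException e) {
      return false;  // factory is missing the proposed NAME field
    }
  }
}
{code}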

> Deprecate methods in CustomAnalyzer.Builder which take factory classes
> --
>
> Key: LUCENE-8566
> URL: https://issues.apache.org/jira/browse/LUCENE-8566
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/analysis
>Reporter: Tomoko Uchida
>Assignee: Uwe Schindler
>Priority: Minor
>
> CustomAnalyzer.Builder has methods which take implementation classes as 
> follows.
>  - withTokenizer(Class factory, String... params)
>  - withTokenizer(Class factory, 
> Map params)
>  - addTokenFilter(Class factory, String... 
> params)
>  - addTokenFilter(Class factory, 
> Map params)
>  - addCharFilter(Class factory, String... params)
>  - addCharFilter(Class factory, 
> Map params)
> Since the builder also has methods which take service names, it seems that 
> the above methods are unnecessary and a little bit misleading. Using symbolic 
> names is preferable to using implementation factory classes, but for now, 
> users can write code depending on implementation classes.
> What do you think about deprecating those methods (adding {{@Deprecated}} 
> annotations) and deleting them in future releases? They are called only by 
> test cases, so deleting them should have no impact on the current lucene/solr 
> codebase.
> If this proposal gains your consent, I will create a patch. (Let me know if I 
> missed some point; in that case I'll close the issue.)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-7896) Add a login page for Solr Administrative Interface

2018-12-07 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/SOLR-7896?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl updated SOLR-7896:
--
Attachment: login-screen-2.png

> Add a login page for Solr Administrative Interface
> --
>
> Key: SOLR-7896
> URL: https://issues.apache.org/jira/browse/SOLR-7896
> Project: Solr
>  Issue Type: New Feature
>  Components: Admin UI, Authentication, security
>Affects Versions: 5.2.1
>Reporter: Aaron Greenspan
>Assignee: Jan Høydahl
>Priority: Major
>  Labels: authentication, login, password
> Fix For: master (8.0)
>
> Attachments: dispatchfilter-code.png, login-page.png, 
> login-screen-2.png, logout.png, unknown_scheme.png
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> Now that Solr supports Authentication plugins, the missing piece is being 
> able to access the Admin UI when authentication is enabled. For this we need:
>  * Some plumbing in the Admin UI that allows the UI to detect 401 responses 
> and redirect to a login page
>  * The possibility to have multiple login pages depending on the auth method, 
> and redirecting to the correct one
>  * [AngularJS HTTP 
> interceptors|https://docs.angularjs.org/api/ng/service/$http#interceptors] to 
> add the correct HTTP headers on all requests when the user is logged in
> This issue should aim to implement some of the plumbing mentioned above, and 
> make it work with Basic Auth.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-13048) Connections issue between nodes when using SSL + Java 11

2018-12-07 Thread Cao Manh Dat (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13048?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cao Manh Dat updated SOLR-13048:

Attachment: jenkins.log (5).txt.gz
jenkins.log (6).txt.gz

> Connections issue between nodes when using SSL + Java 11
> 
>
> Key: SOLR-13048
> URL: https://issues.apache.org/jira/browse/SOLR-13048
> Project: Solr
>  Issue Type: Test
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Cao Manh Dat
>Priority: Major
> Attachments: jenkins.log (5).txt.gz, jenkins.log (6).txt.gz
>
>
> When I looked into test failures recently, I saw a common pattern of failures 
> related to Java 11 + SSL. 
> {code}
> 24580 ERROR 
> (OverseerThreadFactory-166-thread-1-processing-n:127.0.0.1:40151_solr) 
> [n:127.0.0.1:40151_solr] o.a.s.c.a.c.OverseerCollectionMessageHandler 
> Error from shard: https://127.0.0.1:40151/solr
> org.apache.solr.client.solrj.SolrServerException: IOException occured when 
> talking to server at: https://127.0.0.1:40151/solr
>   at 
> org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:657)
>  ~[java/:?]
>   at 
> org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255)
>  ~[java/:?]
>   at 
> org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244)
>  ~[java/:?]
>   at 
> org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1260) 
> ~[java/:?]
>   at 
> org.apache.solr.handler.component.HttpShardHandler.lambda$submit$0(HttpShardHandler.java:172)
>  ~[java/:?]
>   at java.util.concurrent.FutureTask.run(FutureTask.java:264) ~[?:?]
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515) ~[?:?]
>   at java.util.concurrent.FutureTask.run(FutureTask.java:264) ~[?:?]
>   at 
> com.codahale.metrics.InstrumentedExecutorService$InstrumentedRunnable.run(InstrumentedExecutorService.java:176)
>  ~[metrics-core-3.2.6.jar:3.2.6]
>   at 
> org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:209)
>  [java/:?]
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
>  [?:?]
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
>  [?:?]
>   at java.lang.Thread.run(Thread.java:834) [?:?]
> Caused by: javax.net.ssl.SSLHandshakeException: Remote host terminated the 
> handshake
>   at sun.security.ssl.SSLSocketImpl.handleEOF(SSLSocketImpl.java:1321) 
> ~[?:?]
>   at sun.security.ssl.SSLSocketImpl.decode(SSLSocketImpl.java:1160) ~[?:?]
>   at 
> sun.security.ssl.SSLSocketImpl.readHandshakeRecord(SSLSocketImpl.java:1063) 
> ~[?:?]
>   at 
> sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:402) ~[?:?]
>   at 
> org.apache.http.conn.ssl.SSLConnectionSocketFactory.createLayeredSocket(SSLConnectionSocketFactory.java:396)
>  ~[httpclient-4.5.6.jar:4.5.6]
>   at 
> org.apache.http.conn.ssl.SSLConnectionSocketFactory.connectSocket(SSLConnectionSocketFactory.java:355)
>  ~[httpclient-4.5.6.jar:4.5.6]
>   at 
> org.apache.http.impl.conn.DefaultHttpClientConnectionOperator.connect(DefaultHttpClientConnectionOperator.java:142)
>  ~[httpclient-4.5.6.jar:4.5.6]
>   at 
> org.apache.http.impl.conn.PoolingHttpClientConnectionManager.connect(PoolingHttpClientConnectionManager.java:373)
>  ~[httpclient-4.5.6.jar:4.5.6]
>   at 
> org.apache.http.impl.execchain.MainClientExec.establishRoute(MainClientExec.java:394)
>  ~[httpclient-4.5.6.jar:4.5.6]
>   at 
> org.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:237)
>  ~[httpclient-4.5.6.jar:4.5.6]
>   at 
> org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:185) 
> ~[httpclient-4.5.6.jar:4.5.6]
>   at org.apache.http.impl.execchain.RetryExec.execute(RetryExec.java:89) 
> ~[httpclient-4.5.6.jar:4.5.6]
>   at 
> org.apache.http.impl.execchain.RedirectExec.execute(RedirectExec.java:110) 
> ~[httpclient-4.5.6.jar:4.5.6]
>   at 
> org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:185)
>  ~[httpclient-4.5.6.jar:4.5.6]
>   at 
> org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:83)
>  ~[httpclient-4.5.6.jar:4.5.6]
>   at 
> org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:56)
>  ~[httpclient-4.5.6.jar:4.5.6]
>   at 
> org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:542)
>  ~[java/:?]
>   ... 12 more
> Caused by: java.io.EOFException: SSL peer shut down incorrectly
>   at 
> sun.security.ssl.SSLSocketInputRecord.decode(SSLSocketInputRecord.java:167) 
> ~[?:?]
>   at 

[jira] [Created] (SOLR-13048) Connections issue between nodes when using SSL + Java 11

2018-12-07 Thread Cao Manh Dat (JIRA)
Cao Manh Dat created SOLR-13048:
---

 Summary: Connections issue between nodes when using SSL + Java 11
 Key: SOLR-13048
 URL: https://issues.apache.org/jira/browse/SOLR-13048
 Project: Solr
  Issue Type: Test
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Cao Manh Dat
 Attachments: jenkins.log (5).txt.gz, jenkins.log (6).txt.gz

When I looked into test failures recently, I saw a common pattern of failures 
related to Java 11 + SSL. 
{code}
24580 ERROR 
(OverseerThreadFactory-166-thread-1-processing-n:127.0.0.1:40151_solr) 
[n:127.0.0.1:40151_solr] o.a.s.c.a.c.OverseerCollectionMessageHandler Error 
from shard: https://127.0.0.1:40151/solr
org.apache.solr.client.solrj.SolrServerException: IOException occured when 
talking to server at: https://127.0.0.1:40151/solr
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:657)
 ~[java/:?]
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255)
 ~[java/:?]
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244)
 ~[java/:?]
at 
org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1260) ~[java/:?]
at 
org.apache.solr.handler.component.HttpShardHandler.lambda$submit$0(HttpShardHandler.java:172)
 ~[java/:?]
at java.util.concurrent.FutureTask.run(FutureTask.java:264) ~[?:?]
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515) ~[?:?]
at java.util.concurrent.FutureTask.run(FutureTask.java:264) ~[?:?]
at 
com.codahale.metrics.InstrumentedExecutorService$InstrumentedRunnable.run(InstrumentedExecutorService.java:176)
 ~[metrics-core-3.2.6.jar:3.2.6]
at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:209)
 [java/:?]
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) 
[?:?]
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) 
[?:?]
at java.lang.Thread.run(Thread.java:834) [?:?]
Caused by: javax.net.ssl.SSLHandshakeException: Remote host terminated the 
handshake
at sun.security.ssl.SSLSocketImpl.handleEOF(SSLSocketImpl.java:1321) 
~[?:?]
at sun.security.ssl.SSLSocketImpl.decode(SSLSocketImpl.java:1160) ~[?:?]
at 
sun.security.ssl.SSLSocketImpl.readHandshakeRecord(SSLSocketImpl.java:1063) 
~[?:?]
at 
sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:402) ~[?:?]
at 
org.apache.http.conn.ssl.SSLConnectionSocketFactory.createLayeredSocket(SSLConnectionSocketFactory.java:396)
 ~[httpclient-4.5.6.jar:4.5.6]
at 
org.apache.http.conn.ssl.SSLConnectionSocketFactory.connectSocket(SSLConnectionSocketFactory.java:355)
 ~[httpclient-4.5.6.jar:4.5.6]
at 
org.apache.http.impl.conn.DefaultHttpClientConnectionOperator.connect(DefaultHttpClientConnectionOperator.java:142)
 ~[httpclient-4.5.6.jar:4.5.6]
at 
org.apache.http.impl.conn.PoolingHttpClientConnectionManager.connect(PoolingHttpClientConnectionManager.java:373)
 ~[httpclient-4.5.6.jar:4.5.6]
at 
org.apache.http.impl.execchain.MainClientExec.establishRoute(MainClientExec.java:394)
 ~[httpclient-4.5.6.jar:4.5.6]
at 
org.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:237) 
~[httpclient-4.5.6.jar:4.5.6]
at 
org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:185) 
~[httpclient-4.5.6.jar:4.5.6]
at org.apache.http.impl.execchain.RetryExec.execute(RetryExec.java:89) 
~[httpclient-4.5.6.jar:4.5.6]
at 
org.apache.http.impl.execchain.RedirectExec.execute(RedirectExec.java:110) 
~[httpclient-4.5.6.jar:4.5.6]
at 
org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:185)
 ~[httpclient-4.5.6.jar:4.5.6]
at 
org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:83)
 ~[httpclient-4.5.6.jar:4.5.6]
at 
org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:56)
 ~[httpclient-4.5.6.jar:4.5.6]
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:542)
 ~[java/:?]
... 12 more
Caused by: java.io.EOFException: SSL peer shut down incorrectly
at 
sun.security.ssl.SSLSocketInputRecord.decode(SSLSocketInputRecord.java:167) 
~[?:?]
at sun.security.ssl.SSLTransport.decode(SSLTransport.java:108) ~[?:?]
at sun.security.ssl.SSLSocketImpl.decode(SSLSocketImpl.java:1152) ~[?:?]
at 
sun.security.ssl.SSLSocketImpl.readHandshakeRecord(SSLSocketImpl.java:1063) 
~[?:?]
at 
sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:402) ~[?:?]
at 

[jira] [Resolved] (SOLR-7095) Disaster Recovery native online cross-site replication for NRT SolrCloud

2018-12-07 Thread Erick Erickson (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-7095?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson resolved SOLR-7095.
--
Resolution: Won't Fix

Yeah, I'll close this. I think any improvements ought to be new JIRAs built on 
CDCR.

> Disaster Recovery native online cross-site replication for NRT SolrCloud
> 
>
> Key: SOLR-7095
> URL: https://issues.apache.org/jira/browse/SOLR-7095
> Project: Solr
>  Issue Type: New Feature
>Affects Versions: 4.10
>Reporter: Hari Sekhon
>Priority: Major
>
> Feature request to add native online cross-site DR support for NRT SolrCloud.
> Currently NRT DR recovery requires taking down the recovering cluster, 
> including halting any new indexing, changing zookeeper ensembles to the other 
> datacenter for one node per shard to replicate, then taking the cluster down 
> again to switch back to the local DC zookeeper ensemble after the shard has 
> caught up. This is a relatively difficult/tedious manual operation to 
> perform, and it seems impossible to catch up completely in scenarios where 
> new update requests keep arriving during the downtime of switching back to 
> the local DC's zookeeper ensemble, therefore preventing 100% accurate 
> catch-up.
> There will be trade-offs such as making cross-site replication async to avoid 
> update latency penalty, and may require a last-write-wins type scenario like 
> Cassandra.
> Regards,
> Hari Sekhon
> http://www.linkedin.com/in/harisekhon



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-13047) Add facet2D Streaming Expression

2018-12-07 Thread Joel Bernstein (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13047?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-13047:
--
Description: 
The current facet expression is a generic tool for creating multi-dimension 
aggregations. The *facet2D* Streaming Expression has semantics specific for 2 
dimensional facets which are designed to be *pivoted* into a matrix and 
operated on by *Math Expressions*. 

facet2D will use the json facet API under the covers. 

Proposed syntax:
{code:java}
facet2D(medrecords, q=*:*, x=diseases, y=symptoms, dimensions="300, 10", 
count(*)){code}
The example above will return tuples containing the top 300 diseases and the 
top ten symptoms for each disease. 

Using math expression the tuples can be pivoted into a matrix and the rows of 
the matrix can be clustered. 
{code:java}
let(a=facet2D(medrecords, q=*:*, x=diseases, y=symptoms, dimensions="300, 10", 
count(*)),
b=pivot(a, diseases, symptoms, count(*)),
c=kmeans(b, 10)){code}

  was:
The current facet expression is a generic tool for creating multi-dimension 
aggregations. The *facet2D* Streaming Expression has semantics specific for 2 
dimensional facets which are designed to be pivoted into a matrix and operated 
on by Math Expressions. 

facet2D will use the json facet API under the covers. 

Proposed syntax:
{code:java}
facet2D(medrecords, q=*:*, x=diseases, y=symptoms, dimensions="300, 10", 
count(*)){code}
The example above will return tuples containing the top 300 diseases and the 
top ten symptoms for each disease. 

Using math expression the tuples can be pivoted into a matrix and the rows of 
the matrix can be clustered. 
{code:java}
let(a=facet2D(medrecords, q=*:*, x=diseases, y=symptoms, dimensions="300, 10", 
count(*)),
b=pivot(a, diseases, symptoms, count(*)),
c=kmeans(b, 10)){code}


> Add facet2D Streaming Expression
> 
>
> Key: SOLR-13047
> URL: https://issues.apache.org/jira/browse/SOLR-13047
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>Priority: Major
>
> The current facet expression is a generic tool for creating multi-dimension 
> aggregations. The *facet2D* Streaming Expression has semantics specific for 2 
> dimensional facets which are designed to be *pivoted* into a matrix and 
> operated on by *Math Expressions*. 
> facet2D will use the json facet API under the covers. 
> Proposed syntax:
> {code:java}
> facet2D(medrecords, q=*:*, x=diseases, y=symptoms, dimensions="300, 10", 
> count(*)){code}
> The example above will return tuples containing the top 300 diseases and the 
> top ten symptoms for each disease. 
> Using math expression the tuples can be pivoted into a matrix and the rows of 
> the matrix can be clustered. 
> {code:java}
> let(a=facet2D(medrecords, q=*:*, x=diseases, y=symptoms, dimensions="300, 
> 10", count(*)),
> b=pivot(a, diseases, symptoms, count(*)),
> c=kmeans(b, 10)){code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-13047) Add facet2D Streaming Expression

2018-12-07 Thread Joel Bernstein (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13047?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-13047:
--
Description: 
The current facet expression is a generic tool for creating multi-dimension 
aggregations. The *facet2D* Streaming Expression has semantics specific for 2 
dimensional facets which are designed to be pivoted into a matrix and operated 
on by Math Expressions. 

facet2D will use the json facet API under the covers. 

Proposed syntax:
{code:java}
facet2D(medrecords, q=*:*, x=diseases, y=symptoms, dimensions="300, 10", 
count(*)){code}
The example above will return tuples containing the top 300 diseases and the 
top ten symptoms for each disease. 

Using math expression the tuples can be pivoted into a matrix and the rows of 
the matrix can be clustered. 
{code:java}
let(a=facet2D(medrecords, q=*:*, x=diseases, y=symptoms, dimensions="300, 10", 
count(*)),
b=pivot(a, diseases, symptoms, count(*)),
c=kmeans(b, 10)){code}

  was:
The current facet expression is a generic tool for creating multi-dimension 
aggregations. The *facet2D* Streaming Expression has semantics specific for 2 
dimensional facets which are designed to be pivoted into a matrix and operated 
on by Math Expressions. 

facet2D will use the json facet API under the covers. 

Proposed syntax:
{code:java}
facet2D(medrecords, q=*:*, x=diseases, y=symptoms, dimensions="300, 10", 
count(*)){code}
The example above will return tuples containing the top 300 diseases and the 
top ten symptoms for each disease. 

Using math expression the tuples can be pivoted into a matrix and the rows of 
the matrix and clustered. 
{code:java}
let(a=facet2D(medrecords, q=*:*, x=diseases, y=symptoms, dimensions="300, 10", 
count(*)),
b=pivot(a, diseases, symptoms, count(*)),
c=kmeans(b, 10)){code}


> Add facet2D Streaming Expression
> 
>
> Key: SOLR-13047
> URL: https://issues.apache.org/jira/browse/SOLR-13047
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>Priority: Major
>
> The current facet expression is a generic tool for creating multi-dimension 
> aggregations. The *facet2D* Streaming Expression has semantics specific for 2 
> dimensional facets which are designed to be pivoted into a matrix and 
> operated on by Math Expressions. 
> facet2D will use the json facet API under the covers. 
> Proposed syntax:
> {code:java}
> facet2D(medrecords, q=*:*, x=diseases, y=symptoms, dimensions="300, 10", 
> count(*)){code}
> The example above will return tuples containing the top 300 diseases and the 
> top ten symptoms for each disease. 
> Using math expression the tuples can be pivoted into a matrix and the rows of 
> the matrix can be clustered. 
> {code:java}
> let(a=facet2D(medrecords, q=*:*, x=diseases, y=symptoms, dimensions="300, 
> 10", count(*)),
> b=pivot(a, diseases, symptoms, count(*)),
> c=kmeans(b, 10)){code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-13047) Add facet2D Streaming Expression

2018-12-07 Thread Joel Bernstein (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13047?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-13047:
--
Description: 
The current facet expression is a generic tool for creating multi-dimension 
aggregations. The *facet2D* Streaming Expression has semantics specific for 2 
dimensional facets which are designed to be pivoted into a matrix and operated 
on by Math Expressions. 

facet2D will use the json facet API under the covers. 

Proposed syntax:
{code:java}
facet2D(medrecords, q=*:*, x=diseases, y=symptoms, dimensions="300, 10", 
count(*)){code}
The example above will return tuples containing the top 300 diseases and the 
top ten symptoms for each disease. 

Using math expression the tuples can be pivoted into a matrix and the rows of 
the matrix and clustered. 
{code:java}
let(a=facet2D(medrecords, q=*:*, x=diseases, y=symptoms, dimensions="300, 10", 
count(*)),
b=pivot(a, diseases, symptoms, count(*)),
c=kmeans(b, 10)){code}

  was:
The current facet expression is a generic tool for creating multi-dimension 
aggregations. The *facet2D* Streaming Expression has semantics specific for 2 
dimensional facets which are designed to be pivoted into a matrix and operated 
on by Math Expressions. 

facet2D will use the json facet API under the covers. 

Proposed syntax:
{code:java}
facet2D(medrecords, q=*:*, x=diseases, y=symptom, dimensions="300, 10", 
count(*)){code}
The example above will return tuples containing the top 300 diseases and the 
top ten symptoms for each disease. 


> Add facet2D Streaming Expression
> 
>
> Key: SOLR-13047
> URL: https://issues.apache.org/jira/browse/SOLR-13047
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>Priority: Major
>
> The current facet expression is a generic tool for creating multi-dimension 
> aggregations. The *facet2D* Streaming Expression has semantics specific for 2 
> dimensional facets which are designed to be pivoted into a matrix and 
> operated on by Math Expressions. 
> facet2D will use the json facet API under the covers. 
> Proposed syntax:
> {code:java}
> facet2D(medrecords, q=*:*, x=diseases, y=symptoms, dimensions="300, 10", 
> count(*)){code}
> The example above will return tuples containing the top 300 diseases and the 
> top ten symptoms for each disease. 
> Using math expression the tuples can be pivoted into a matrix and the rows of 
> the matrix and clustered. 
> {code:java}
> let(a=facet2D(medrecords, q=*:*, x=diseases, y=symptoms, dimensions="300, 
> 10", count(*)),
> b=pivot(a, diseases, symptoms, count(*)),
> c=kmeans(b, 10)){code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (SOLR-13047) Add facet2D Streaming Expression

2018-12-07 Thread Joel Bernstein (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13047?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein reassigned SOLR-13047:
-

Assignee: Joel Bernstein

> Add facet2D Streaming Expression
> 
>
> Key: SOLR-13047
> URL: https://issues.apache.org/jira/browse/SOLR-13047
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>Priority: Major
>
> The current facet expression is a generic tool for creating multi-dimension 
> aggregations. The *facet2D* Streaming Expression has semantics specific to 2 
> dimensional facets which are designed to be pivoted into a matrix and 
> operated on by Math Expressions. 
> facet2D will use the json facet API under the covers. 
> Proposed syntax:
> {code:java}
> facet2D(medrecords, q=*:*, x=diseases, y=symptom, dimensions="300, 10", 
> count(*)){code}
> The example above will return tuples containing the top 300 diseases and the 
> top ten symptoms for each disease. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-13047) Add facet2D Streaming Expression

2018-12-07 Thread Joel Bernstein (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13047?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-13047:
--
Description: 
The current facet expression is a generic tool for creating multi-dimension 
aggregations. The *facet2D* Streaming Expression has semantics specific for 2 
dimensional facets which are designed to be pivoted into a matrix and operated 
on by Math Expressions. 

facet2D will use the json facet API under the covers. 

Proposed syntax:
{code:java}
facet2D(medrecords, q=*:*, x=diseases, y=symptom, dimensions="300, 10", 
count(*)){code}
The example above will return tuples containing the top 300 diseases and the 
top ten symptoms for each disease. 

  was:
The current facet expression is a generic tool for creating multi-dimension 
aggregations. The *facet2D* Streaming Expression has semantics specific to 2 
dimensional facets which are designed to be pivoted into a matrix and operated 
on by Math Expressions. 

facet2D will use the json facet API under the covers. 

Proposed syntax:
{code:java}
facet2D(medrecords, q=*:*, x=diseases, y=symptom, dimensions="300, 10", 
count(*)){code}
The example above will return tuples containing the top 300 diseases and the 
top ten symptoms for each disease. 


> Add facet2D Streaming Expression
> 
>
> Key: SOLR-13047
> URL: https://issues.apache.org/jira/browse/SOLR-13047
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>Priority: Major
>
> The current facet expression is a generic tool for creating multi-dimension 
> aggregations. The *facet2D* Streaming Expression has semantics specific for 2 
> dimensional facets which are designed to be pivoted into a matrix and 
> operated on by Math Expressions. 
> facet2D will use the json facet API under the covers. 
> Proposed syntax:
> {code:java}
> facet2D(medrecords, q=*:*, x=diseases, y=symptom, dimensions="300, 10", 
> count(*)){code}
> The example above will return tuples containing the top 300 diseases and the 
> top ten symptoms for each disease. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8592) MultiSorter#sort incorrectly sort Integer/Long#MIN_VALUE when the natural sort is reversed

2018-12-07 Thread Michael McCandless (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8592?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16712958#comment-16712958
 ] 

Michael McCandless commented on LUCENE-8592:


Phew, this is evil; nice catch [~jim.ferenczi]! Thank you for adding reverse 
true/false testing through all the test cases. The patch looks good – it moves 
the {{reverseMul}} logic to after the comparison.

This may have high impact? Any time the user sorts by an int field, reversed, 
and has missing values, their index will now fail {{CheckIndex}} on upgrade and 
require reindexing from their original docs.

Whether this fix is itself safe depends on the return values of e.g. 
{{Integer.compareTo}}, {{String.compareTo}}, etc.: do those methods ever 
return {{Integer.MIN_VALUE}}? Looking at [Integer.java in at least 
JDK8|http://hg.openjdk.java.net/jdk8/jdk8/jdk/file/687fd7c7986d/src/share/classes/java/lang/Integer.java#l1233]
 it seems we are good – it returns 0, 1, -1. And [String.java returns the 
difference of two 
{{chars}}|http://hg.openjdk.java.net/jdk8/jdk8/jdk/file/687fd7c7986d/src/share/classes/java/lang/String.java#l1140].
 Hopefully all other native {{compareTo}} impls are similar.

Should we add a test case confirming {{CheckIndex}} detects this bug? Create a 
broken index and zip it up and commit that, with a test that unzips and 
confirms {{CheckIndex}} fails on it?

Instead of the (int) casts e.g. in {{return (int) 
globalOrds.get(readerValues.ordValue())}}, maybe we should switch to 
{{Math.toIntExact}}?  Because these are single values, it should never happen 
that the long ord exceeds positive int space, so the conversion should always 
work safely.
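
For readers following along, here is a minimal, self-contained sketch (not 
taken from the patch) of the overflow being discussed and of the two patterns 
mentioned above, applying {{reverseMul}} after the comparison and using 
{{Math.toIntExact}} instead of a plain cast; the variable names are 
illustrative only:
{code:java}
public class ReverseSortSketch {
  public static void main(String[] args) {
    // The original bug: negating Long.MIN_VALUE overflows back to itself,
    // so "negate the value for reverse sort" silently misplaces MIN_VALUE.
    System.out.println(-Long.MIN_VALUE == Long.MIN_VALUE); // prints: true

    // Safer pattern: compare first, then apply the reverse multiplier to the
    // bounded result. Long.compare / Integer.compare return -1, 0 or 1 in the
    // JDK sources referenced above, so the multiplication cannot overflow.
    int reverseMul = -1; // -1 for reversed sort, 1 for natural order
    long a = Long.MIN_VALUE, b = 42L;
    int cmp = reverseMul * Long.compare(a, b);
    System.out.println(cmp); // prints: 1, i.e. MIN_VALUE sorts last when reversed

    // Math.toIntExact instead of an (int) cast: it throws ArithmeticException
    // on overflow rather than silently wrapping.
    long ord = 123L; // stands in for a value from a long-valued ordinal lookup
    int intOrd = Math.toIntExact(ord);
    System.out.println(intOrd); // prints: 123
  }
}
{code}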

> MultiSorter#sort incorrectly sort Integer/Long#MIN_VALUE when the natural 
> sort is reversed
> --
>
> Key: LUCENE-8592
> URL: https://issues.apache.org/jira/browse/LUCENE-8592
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Jim Ferenczi
>Priority: Major
> Attachments: LUCENE-8592.patch
>
>
> MultiSorter#getComparableProviders on an integer or long field doesn't handle 
> MIN_VALUE correctly when the natural order is reversed. To handle reverse 
> sort we use the negation of the value but there is no check for overflows so 
> MIN_VALUE for ints and longs are always sorted first (even if the natural 
> order is reversed). 
> This method is used by index sorting when merging already sorted segments 
> together. This means that a sorted index can be incorrectly sorted if it uses 
> a reverse sort and a missing value set to MIN_VALUE (long or int or values 
> inside the segment that are equal to MIN_VALUE).
> This is a bad bug because it affects the document order inside segments and 
> only a reindex can restore the correct sort order. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-13047) Add facet2D Streaming Expression

2018-12-07 Thread Joel Bernstein (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13047?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-13047:
--
Description: 
The current facet expression is a generic tool for creating multi-dimension 
aggregations. The *facet2D* Streaming Expression has semantics specific to 2 
dimensional facets which are designed to be pivoted into a matrix and operated 
on by Math Expressions. 

facet2D will use the json facet API under the covers. 

Proposed syntax:
{code:java}
facet2D(medrecords, q=*:*, x=diseases, y=symptom, dimensions="300, 10", 
count(*)){code}
The example above will return tuples containing the top 300 diseases and the 
top ten symptoms for each disease. 

  was:
The current facet expression is a generic tool for creating multi-dimension 
aggregations. The *facet2D* Streaming Expression has semantics specific to 2 
dimensional facets which are designed to be pivoted into a matrix and operated 
on by Math Expressions. 

facet2D will use the json facet API under the covers. 

Proposed syntax:
{code:java}
facet2D(collection1, q=*:*, x=cars, y=color, dimensions="1000, 10", 
count(*)){code}


> Add facet2D Streaming Expression
> 
>
> Key: SOLR-13047
> URL: https://issues.apache.org/jira/browse/SOLR-13047
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Priority: Major
>
> The current facet expression is a generic tool for creating multi-dimension 
> aggregations. The *facet2D* Streaming Expression has semantics specific to 2 
> dimensional facets which are designed to be pivoted into a matrix and 
> operated on by Math Expressions. 
> facet2D will use the json facet API under the covers. 
> Proposed syntax:
> {code:java}
> facet2D(medrecords, q=*:*, x=diseases, y=symptom, dimensions="300, 10", 
> count(*)){code}
> The example above will return tuples containing the top 300 diseases and the 
> top ten symptoms for each disease. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-13047) Add facet2D Streaming Expression

2018-12-07 Thread Joel Bernstein (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13047?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-13047:
--
Description: 
The current facet expression is a generic tool for creating multi-dimension 
aggregations. The *facet2D* Streaming Expression has semantics specific to 2 
dimensional facets which are designed to be pivoted into a matrix and operated 
on by Math Expressions. 

facet2D will use the json facet API under the covers. 

Proposed syntax:
{code:java}
facet2D(collection1, q=*:*, x=cars, y=color, dimensions="1000, 10", 
count(*)){code}

  was:
The current facet expression is a generic tool for creating multi-dimension 
aggregations. The *facet2D* Streaming Expression has semantics specific to 2 
dimensional facets which are designed to be pivoted into a matrix and operated 
on by Math Expressions. 

facet2D will use the json facet API under the covers. 


> Add facet2D Streaming Expression
> 
>
> Key: SOLR-13047
> URL: https://issues.apache.org/jira/browse/SOLR-13047
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Priority: Major
>
> The current facet expression is a generic tool for creating multi-dimension 
> aggregations. The *facet2D* Streaming Expression has semantics specific to 2 
> dimensional facets which are designed to be pivoted into a matrix and 
> operated on by Math Expressions. 
> facet2D will use the json facet API under the covers. 
> Proposed syntax:
> {code:java}
> facet2D(collection1, q=*:*, x=cars, y=color, dimensions="1000, 10", 
> count(*)){code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-13047) Add facet2D Streaming Expression

2018-12-07 Thread Joel Bernstein (JIRA)
Joel Bernstein created SOLR-13047:
-

 Summary: Add facet2D Streaming Expression
 Key: SOLR-13047
 URL: https://issues.apache.org/jira/browse/SOLR-13047
 Project: Solr
  Issue Type: New Feature
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Joel Bernstein


The current facet expression is a generic tool for creating multi-dimension 
aggregations. The *facet2D* Streaming Expression has semantics specific to 2 
dimensional facets which are designed to be pivoted into a matrix and operated 
on by Math Expressions. 

facet2D will use the json facet API under the covers. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8592) MultiSorter#sort incorrectly sort Integer/Long#MIN_VALUE when the natural sort is reversed

2018-12-07 Thread Michael McCandless (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8592?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16712938#comment-16712938
 ] 

Michael McCandless commented on LUCENE-8592:


I'll have a look; this is sneaky.

> MultiSorter#sort incorrectly sort Integer/Long#MIN_VALUE when the natural 
> sort is reversed
> --
>
> Key: LUCENE-8592
> URL: https://issues.apache.org/jira/browse/LUCENE-8592
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Jim Ferenczi
>Priority: Major
> Attachments: LUCENE-8592.patch
>
>
> MultiSorter#getComparableProviders on an integer or long field doesn't handle 
> MIN_VALUE correctly when the natural order is reversed. To handle reverse 
> sort we use the negation of the value but there is no check for overflows so 
> MIN_VALUE for ints and longs are always sorted first (even if the natural 
> order is reversed). 
> This method is used by index sorting when merging already sorted segments 
> together. This means that a sorted index can be incorrectly sorted if it uses 
> a reverse sort and a missing value set to MIN_VALUE (long or int or values 
> inside the segment that are equal to MIN_VALUE).
> This is a bad bug because it affects the document order inside segments and 
> only a reindex can restore the correct sort order. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13046) Suppress SSL for StreamingTest

2018-12-07 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13046?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16712912#comment-16712912
 ] 

ASF subversion and git services commented on SOLR-13046:


Commit 17fca051c56977a0fcc9ab79506dbdb89fce2722 in lucene-solr's branch 
refs/heads/master from [~joel.bernstein]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=17fca05 ]

SOLR-13046: Suppress SSL for StreamingTest


> Suppress SSL for StreamingTest
> --
>
> Key: SOLR-13046
> URL: https://issues.apache.org/jira/browse/SOLR-13046
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Priority: Major
>
> Currently the StreamingTest fails every time when run under SSL. First step 
> is to suppress SSL and then another ticket can be created to understand what 
> the SSL issues are.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-http2-Linux (64bit/jdk1.8.0_172) - Build # 45 - Still Failing!

2018-12-07 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-http2-Linux/45/
Java: 64bit/jdk1.8.0_172 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

9 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.client.solrj.TestLBHttp2SolrClient

Error Message:
3 threads leaked from SUITE scope at 
org.apache.solr.client.solrj.TestLBHttp2SolrClient: 1) Thread[id=195, 
name=aliveCheckExecutor-10-thread-1, state=TIMED_WAITING, 
group=TGRP-TestLBHttp2SolrClient] at sun.misc.Unsafe.park(Native 
Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
 at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
at java.lang.Thread.run(Thread.java:748)2) Thread[id=683, 
name=aliveCheckExecutor-44-thread-1, state=TIMED_WAITING, 
group=TGRP-TestLBHttp2SolrClient] at sun.misc.Unsafe.park(Native 
Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
 at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
at java.lang.Thread.run(Thread.java:748)3) Thread[id=438, 
name=aliveCheckExecutor-27-thread-1, state=TIMED_WAITING, 
group=TGRP-TestLBHttp2SolrClient] at sun.misc.Unsafe.park(Native 
Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
 at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
at java.lang.Thread.run(Thread.java:748)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 3 threads leaked from SUITE 
scope at org.apache.solr.client.solrj.TestLBHttp2SolrClient: 
   1) Thread[id=195, name=aliveCheckExecutor-10-thread-1, state=TIMED_WAITING, 
group=TGRP-TestLBHttp2SolrClient]
at sun.misc.Unsafe.park(Native Method)
at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
   2) Thread[id=683, name=aliveCheckExecutor-44-thread-1, state=TIMED_WAITING, 
group=TGRP-TestLBHttp2SolrClient]
at sun.misc.Unsafe.park(Native Method)
at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
at 

[jira] [Commented] (SOLR-13046) Suppress SSL for StreamingTest

2018-12-07 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13046?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16712914#comment-16712914
 ] 

ASF subversion and git services commented on SOLR-13046:


Commit f1759301cac1fb134258b2160f62b501bae41552 in lucene-solr's branch 
refs/heads/branch_7x from [~joel.bernstein]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=f175930 ]

SOLR-13046: Suppress SSL for StreamingTest


> Suppress SSL for StreamingTest
> --
>
> Key: SOLR-13046
> URL: https://issues.apache.org/jira/browse/SOLR-13046
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Priority: Major
>
> Currently the StreamingTest fails every time when run under SSL. First step 
> is to suppress SSL and then another ticket can be created to understand what 
> the SSL issues are.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-13046) Suppress SSL for StreamingTest

2018-12-07 Thread Joel Bernstein (JIRA)
Joel Bernstein created SOLR-13046:
-

 Summary: Suppress SSL for StreamingTest
 Key: SOLR-13046
 URL: https://issues.apache.org/jira/browse/SOLR-13046
 Project: Solr
  Issue Type: New Feature
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Joel Bernstein


Currently the StreamingTest fails every time when run under SSL. First step is 
to suppress SSL and then another ticket can be created to understand what the 
SSL issues are.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-repro - Build # 2296 - Unstable

2018-12-07 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-repro/2296/

[...truncated 34 lines...]
[repro] Jenkins log URL: 
https://builds.apache.org/job/Lucene-Solr-Tests-http2/54/consoleText

[repro] Revision: cb3ebcb12b0e45a3b86792a21aecf18abb163070

[repro] Repro line:  ant test  -Dtestcase=TestDistributedSearch 
-Dtests.method=test -Dtests.seed=444256D610B24AD7 -Dtests.multiplier=2 
-Dtests.slow=true -Dtests.locale=ar-TN -Dtests.timezone=Pacific/Easter 
-Dtests.asserts=true -Dtests.file.encoding=US-ASCII

[repro] Repro line:  ant test  
-Dtestcase=DistributedQueryComponentOptimizationTest 
-Dtests.method=testOptimizations -Dtests.seed=444256D610B24AD7 
-Dtests.multiplier=2 -Dtests.slow=true -Dtests.locale=sr-BA 
-Dtests.timezone=Africa/Monrovia -Dtests.asserts=true 
-Dtests.file.encoding=US-ASCII

[repro] Repro line:  ant test  
-Dtestcase=DistributedQueryComponentOptimizationTest 
-Dtests.method=testMissingFieldListWithSort -Dtests.seed=444256D610B24AD7 
-Dtests.multiplier=2 -Dtests.slow=true -Dtests.locale=sr-BA 
-Dtests.timezone=Africa/Monrovia -Dtests.asserts=true 
-Dtests.file.encoding=US-ASCII

[repro] Repro line:  ant test  
-Dtestcase=DistributedQueryComponentOptimizationTest 
-Dtests.method=testMultipleFlParams -Dtests.seed=444256D610B24AD7 
-Dtests.multiplier=2 -Dtests.slow=true -Dtests.locale=sr-BA 
-Dtests.timezone=Africa/Monrovia -Dtests.asserts=true 
-Dtests.file.encoding=US-ASCII

[repro] Repro line:  ant test  
-Dtestcase=DistributedQueryComponentOptimizationTest 
-Dtests.method=testScoreAlwaysReturned -Dtests.seed=444256D610B24AD7 
-Dtests.multiplier=2 -Dtests.slow=true -Dtests.locale=sr-BA 
-Dtests.timezone=Africa/Monrovia -Dtests.asserts=true 
-Dtests.file.encoding=US-ASCII

[repro] Repro line:  ant test  
-Dtestcase=DistributedQueryComponentOptimizationTest 
-Dtests.method=testWildcardFieldList -Dtests.seed=444256D610B24AD7 
-Dtests.multiplier=2 -Dtests.slow=true -Dtests.locale=sr-BA 
-Dtests.timezone=Africa/Monrovia -Dtests.asserts=true 
-Dtests.file.encoding=US-ASCII

[repro] Repro line:  ant test  -Dtestcase=TestTrackingShardHandlerFactory 
-Dtests.method=testRequestTracking -Dtests.seed=444256D610B24AD7 
-Dtests.multiplier=2 -Dtests.slow=true -Dtests.locale=es-US 
-Dtests.timezone=America/Rankin_Inlet -Dtests.asserts=true 
-Dtests.file.encoding=US-ASCII

[repro] Repro line:  ant test  -Dtestcase=TestLBHttp2SolrClient 
-Dtests.seed=D19AC4EB473DD913 -Dtests.multiplier=2 -Dtests.slow=true 
-Dtests.locale=es-NI -Dtests.timezone=Europe/Sofia -Dtests.asserts=true 
-Dtests.file.encoding=US-ASCII

[repro] git rev-parse --abbrev-ref HEAD
[repro] git rev-parse HEAD
[repro] Initial local git branch/revision: 
b24af10d59b15d4b79418c9d7af958aa0ac7c39a
[repro] git fetch
[repro] git checkout cb3ebcb12b0e45a3b86792a21aecf18abb163070

[...truncated 2 lines...]
[repro] git merge --ff-only

[...truncated 1 lines...]
[repro] ant clean

[...truncated 6 lines...]
[repro] Test suites by module:
[repro]solr/core
[repro]   TestTrackingShardHandlerFactory
[repro]   DistributedQueryComponentOptimizationTest
[repro]   TestDistributedSearch
[repro]solr/solrj
[repro]   TestLBHttp2SolrClient
[repro] ant compile-test

[...truncated 3575 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=15 
-Dtests.class="*.TestTrackingShardHandlerFactory|*.DistributedQueryComponentOptimizationTest|*.TestDistributedSearch"
 -Dtests.showOutput=onerror  -Dtests.seed=444256D610B24AD7 -Dtests.multiplier=2 
-Dtests.slow=true -Dtests.locale=es-US -Dtests.timezone=America/Rankin_Inlet 
-Dtests.asserts=true -Dtests.file.encoding=US-ASCII

[...truncated 12542 lines...]
[repro] Setting last failure code to 256

[repro] ant compile-test

[...truncated 454 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=5 
-Dtests.class="*.TestLBHttp2SolrClient" -Dtests.showOutput=onerror  
-Dtests.seed=D19AC4EB473DD913 -Dtests.multiplier=2 -Dtests.slow=true 
-Dtests.locale=es-NI -Dtests.timezone=Europe/Sofia -Dtests.asserts=true 
-Dtests.file.encoding=US-ASCII

[...truncated 478 lines...]
[repro] Setting last failure code to 256

[repro] Failures:
[repro]   3/5 failed: org.apache.solr.client.solrj.TestLBHttp2SolrClient
[repro]   5/5 failed: org.apache.solr.TestDistributedSearch
[repro]   5/5 failed: 
org.apache.solr.handler.component.DistributedQueryComponentOptimizationTest
[repro]   5/5 failed: 
org.apache.solr.handler.component.TestTrackingShardHandlerFactory

[repro] Re-testing 100% failures at the tip of jira/http2
[repro] git fetch
[repro] git checkout jira/http2

[...truncated 3 lines...]
[repro] git merge --ff-only

[...truncated 1 lines...]
[repro] ant clean

[...truncated 8 lines...]
[repro] Test suites by module:
[repro]solr/core
[repro]   TestTrackingShardHandlerFactory
[repro]   DistributedQueryComponentOptimizationTest
[repro]   TestDistributedSearch
[repro] ant compile-test

[...truncated 3575 lines...]
[repro] ant test-nocompile -Dtests.dups=5 

[jira] [Commented] (LUCENE-8585) Create jump-tables for DocValues at index-time

2018-12-07 Thread Toke Eskildsen (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8585?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16712878#comment-16712878
 ] 

Toke Eskildsen commented on LUCENE-8585:


Thank you for the clarifications, [~jpountz].

Regarding where to put the jump-data:
{quote}If the access pattern is sequential, which I assume would be the case in 
both cases, then it's fine to keep them on storage.
{quote}
Well, that really depends on the access pattern from the outside ;). But since 
the jump-entries are stored sequentially, a request hitting a subset of the 
documents small enough to benefit from jumps will access the jump-entries in 
increasing order. They won't be used at all if a jump stays within the current 
block or goes to the block immediately following the current one.
{quote}We can also move the 7.0 format to lucene/backward-codecs since 
lucene/core only keeps formats that are used for the current codec.
{quote}
Before I began there was a single file {{Lucene80Codec.java}} in the 
{{lucene80}} package, picking codec-parts from 50, 60 and 70. After having 
implemented the jumps, I have not touched the {{Lucene70Norms*}} part. I 
_guess_ I should move the {{Lucene70DocValues*}} files from {{lucene70}} to 
{{backward-codecs}}, leaving the norms-classes behind?

Since the norms-classes also use {{IndexedDISI}}, I expect it would be best to 
upgrade them too. This would leave the core {{lucene70}} folder empty of active 
code.
{quote}If you move the 7.0 format to lucene/backward-codecs, then you'll need 
to move it to 
lucene/backward-codecs/src/resources/META-INF/services/org.apache.lucene.codecs.DocValuesFormat.
{quote}
That makes sense, thanks!
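
For anyone not familiar with the SPI registration being discussed: the services 
file is just a plain-text list of fully qualified format class names, so the 
move essentially carries an entry like the one below over to the backward-codecs 
module (shown with the class name as it exists today; the final location and 
name may differ):
{noformat}
# lucene/backward-codecs/src/resources/META-INF/services/org.apache.lucene.codecs.DocValuesFormat
org.apache.lucene.codecs.lucene70.Lucene70DocValuesFormat
{noformat}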

> Create jump-tables for DocValues at index-time
> --
>
> Key: LUCENE-8585
> URL: https://issues.apache.org/jira/browse/LUCENE-8585
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/codecs
>Affects Versions: master (8.0)
>Reporter: Toke Eskildsen
>Priority: Minor
>  Labels: performance
> Attachments: LUCENE-8585.patch, make_patch_lucene8585.sh
>
>
> As noted in LUCENE-7589, lookup of DocValues should use jump-tables to avoid 
> long iterative walks. This is implemented in LUCENE-8374 at search-time 
> (first request for DocValues from a field in a segment), with the benefit of 
> working without changes to existing Lucene 7 indexes and the downside of 
> introducing a startup time penalty and a memory overhead.
> As discussed in LUCENE-8374, the codec should be updated to create these 
> jump-tables at index time. This eliminates the segment-open time & memory 
> penalties, with the potential downside of increasing index-time for DocValues.
> The three elements of LUCENE-8374 should be transferable to index-time 
> without much alteration of the core structures:
>  * {{IndexedDISI}} block offset and index skips: A {{long}} (64 bits) for 
> every 65536 documents, containing the offset of the block in 33 bits and the 
> index (number of set bits) up to the block in 31 bits.
>  It can be built sequentially and should be stored as a simple sequence of 
> consecutive longs for caching of lookups.
>  As it is fairly small, relative to document count, it might be better to 
> simply memory cache it?
>  * {{IndexedDISI}} DENSE (> 4095, < 65536 set bits) blocks: A {{short}} (16 
> bits) for every 8 {{longs}} (512 bits) for a total of 256 bytes/DENSE_block. 
> Each {{short}} represents the number of set bits up to right before the 
> corresponding sub-block of 512 docIDs.
>  The {{shorts}} can be computed sequentially or when the DENSE block is 
> flushed (probably the easiest). They should be stored as a simple sequence of 
> consecutive shorts for caching of lookups, one logically independent sequence 
> for each DENSE block. The logical position would be one sequence at the start 
> of every DENSE block.
>  Whether it is best to read all the 16 {{shorts}} up front when a DENSE block 
> is accessed or whether it is best to only read any individual {{short}} when 
> needed is not clear at this point.
>  * Variable Bits Per Value: A {{long}} (64 bits) for every 16384 numeric 
> values. Each {{long}} holds the offset to the corresponding block of values.
>  The offsets can be computed sequentially and should be stored as a simple 
> sequence of consecutive {{longs}} for caching of lookups.
>  The vBPV-offsets have the largest space overhead of the 3 jump-tables, and a 
> lot of the 64 bits in each long are not used for most indexes. They could be 
> represented as a simple {{PackedInts}} sequence or {{MonotonicLongValues}}, 
> with the downsides of a potential lookup-time overhead and the need to do 
> the compression after all offsets have been determined.
> I have no experience with the codec-parts responsible 
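
A minimal sketch of the 33-bit offset / 31-bit index packing described in the 
first bullet above; the bit split is taken from the description, but the actual 
on-disk layout in the patch may order the fields differently:
{code:java}
public class JumpEntrySketch {
  // Pack a block offset (up to 33 bits) and an index/set-bit count (up to 31
  // bits) into a single long, one entry per 65536 documents.
  static long pack(long blockOffset, int index) {
    return (blockOffset << 31) | (index & 0x7FFFFFFFL);
  }

  static long offset(long entry) {
    return entry >>> 31;                 // upper 33 bits
  }

  static int index(long entry) {
    return (int) (entry & 0x7FFFFFFFL);  // lower 31 bits
  }

  public static void main(String[] args) {
    long entry = pack(123456789L, 65000);
    System.out.println(offset(entry) + " " + index(entry)); // 123456789 65000
  }
}
{code}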

[jira] [Updated] (SOLR-12315) the number of docs in each group depends on rows

2018-12-07 Thread Cassandra Targett (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12315?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cassandra Targett updated SOLR-12315:
-
Component/s: (was: CDCR)
 search

> the number of docs in each group depends on rows
> 
>
> Key: SOLR-12315
> URL: https://issues.apache.org/jira/browse/SOLR-12315
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: search
>Affects Versions: 7.1
>Reporter: Duo Chen
>Priority: Critical
> Attachments: difference.jpeg
>
>
> Hi, 
> We used SolrCloud 7.1.0 (3 nodes, 3 shards with 2 replicas). When we used a 
> group query, we found that the number of docs in each group depends on the 
> rows number (group number). 
> When rows is bigger than 5, the returned docs are correct and stable; 
> otherwise, the number of docs is smaller than the actual result. 
> Could you please explain why and give some suggestions on how to decide 
> the rows number? 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-12380) Support CDCR operation in the implicit routing mode cluster

2018-12-07 Thread Cassandra Targett (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12380?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cassandra Targett updated SOLR-12380:
-
Issue Type: Improvement  (was: Bug)

> Support CDCR operation in the implicit routing mode cluster
> ---
>
> Key: SOLR-12380
> URL: https://issues.apache.org/jira/browse/SOLR-12380
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: CDCR
>Reporter: Atita Arora
>Priority: Major
> Attachments: Gmail - CDCR setup with Custom Document Routing.pdf
>
>
> Would like to explore to see if we can fix CDC replication in the custom 
> document / implicit routing mode cluster.
>  
> Attaching mail for reference.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-11959) CDCR unauthorized to replicate to a target collection that is update protected in security.json

2018-12-07 Thread Cassandra Targett (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-11959?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cassandra Targett updated SOLR-11959:
-
Component/s: (was: SolrCloud)
 Authentication

> CDCR unauthorized to replicate to a target collection that is update 
> protected in security.json
> ---
>
> Key: SOLR-11959
> URL: https://issues.apache.org/jira/browse/SOLR-11959
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Authentication, CDCR
>Affects Versions: 7.2
>Reporter: Donny Andrews
>Priority: Major
> Attachments: SOLR-11959.patch
>
>
> Steps to reproduce: 
>  # Create a source and a target collection in their respective clusters. 
>  # Update security.json to require a non-admin role to read and write. 
>  # Index to source collection 
> Expected: 
> The target collection should receive the update
> Actual:
> {code:java}
> org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
> from server at http://redacted/solr/redacted: Expected mime type 
> application/octet-stream but got text/html. 
>  
>  
>  Error 401 Unauthorized request, Response code: 401
>  
>  HTTP ERROR 401
>  Problem accessing /solr/redacted/update. Reason:
>   Unauthorized request, Response code: 401
>  
>  at 
> org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:607)
>  at 
> org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255)
>  at 
> org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244)
>  at 
> org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:483)
>  at 
> org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:413)
>  at 
> org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1103)
>  at 
> org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:883)
>  at 
> org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:816)
>  at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
>  at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:211)
>  at 
> org.apache.solr.handler.CdcrReplicator.sendRequest(CdcrReplicator.java:140)
>  at org.apache.solr.handler.CdcrReplicator.run(CdcrReplicator.java:104)
>  at 
> org.apache.solr.handler.CdcrReplicatorScheduler.lambda$null$0(CdcrReplicatorScheduler.java:81)
>  at 
> org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:188)
>  at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  at java.lang.Thread.run(Thread.java:748){code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8592) MultiSorter#sort incorrectly sort Integer/Long#MIN_VALUE when the natural sort is reversed

2018-12-07 Thread Simon Willnauer (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8592?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16712870#comment-16712870
 ] 

Simon Willnauer commented on LUCENE-8592:
-

The patch looks good to me. That said, I am not 100% on top of this code, so I 
cannot say whether there are other places that need to be fixed. Still +1 to 
commit.

> MultiSorter#sort incorrectly sort Integer/Long#MIN_VALUE when the natural 
> sort is reversed
> --
>
> Key: LUCENE-8592
> URL: https://issues.apache.org/jira/browse/LUCENE-8592
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Jim Ferenczi
>Priority: Major
> Attachments: LUCENE-8592.patch
>
>
> MultiSorter#getComparableProviders on an integer or long field doesn't handle 
> MIN_VALUE correctly when the natural order is reversed. To handle reverse 
> sort we use the negation of the value but there is no check for overflows so 
> MIN_VALUE for ints and longs are always sorted first (even if the natural 
> order is reversed). 
> This method is used by index sorting when merging already sorted segments 
> together. This means that a sorted index can be incorrectly sorted if it uses 
> a reverse sort and a missing value set to MIN_VALUE (long or int or values 
> inside the segment that are equal to MIN_VALUE).
> This is a bad bug because it affects the document order inside segments and 
> only a reindex can restore the correct sort order. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7095) Disaster Recovery native online cross-site replication for NRT SolrCloud

2018-12-07 Thread Cassandra Targett (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-7095?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16712868#comment-16712868
 ] 

Cassandra Targett commented on SOLR-7095:
-

It feels to me like this has been implemented with CDCR, especially now that it 
supports bidirectional updates?

> Disaster Recovery native online cross-site replication for NRT SolrCloud
> 
>
> Key: SOLR-7095
> URL: https://issues.apache.org/jira/browse/SOLR-7095
> Project: Solr
>  Issue Type: New Feature
>Affects Versions: 4.10
>Reporter: Hari Sekhon
>Priority: Major
>
> Feature request to add native online cross-site DR support for NRT SolrCloud.
> Currently NRT DR recovery requires taking down the recovering cluster 
> including halting any new indexing, changing zookeeper ensembles to the other 
> datacenter for one node per shard to replicate, then taking down again to 
> switch back to local DC zookeeper ensemble after shard has caught up. This is 
> a relatively difficult/tedious manual operation to perform and seems 
> impossible to get completely up to date in scenarios with constant new update 
> requests arriving during downtime of switching back to local DC's zookeeper 
> ensemble, therefore preventing 100% accurate catch up.
> There will be trade-offs such as making cross-site replication async to avoid 
> update latency penalty, and may require a last-write-wins type scenario like 
> Cassandra.
> Regards,
> Hari Sekhon
> http://www.linkedin.com/in/harisekhon



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-Tests-http2 - Build # 55 - Still Failing

2018-12-07 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-http2/55/

11 tests failed.
FAILED:  org.apache.solr.cloud.HttpPartitionOnCommitTest.test

Error Message:
Could not find collection : c8n_2x2_commits

Stack Trace:
org.apache.solr.common.SolrException: Could not find collection : 
c8n_2x2_commits
at 
__randomizedtesting.SeedInfo.seed([69678AEA075C42BC:E133B530A9A02F44]:0)
at 
org.apache.solr.common.cloud.ClusterState.getCollection(ClusterState.java:118)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.ensureAllReplicasAreActive(AbstractFullDistribZkTestBase.java:2085)
at 
org.apache.solr.cloud.HttpPartitionOnCommitTest.multiShardTest(HttpPartitionOnCommitTest.java:80)
at 
org.apache.solr.cloud.HttpPartitionOnCommitTest.test(HttpPartitionOnCommitTest.java:67)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:1070)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:1042)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 

[jira] [Commented] (SOLR-13025) SchemaSimilarityFactory fallback to LegacyBM25Similarity

2018-12-07 Thread JIRA


[ 
https://issues.apache.org/jira/browse/SOLR-13025?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16712836#comment-16712836
 ] 

Jan Høydahl commented on SOLR-13025:


I changed my implementation to not try to be clever if people have explicitly 
chosen {{BM25SimilarityFactory}} in the schema. Please see [GitHub Pull Request 
#518|https://github.com/apache/lucene-solr/pull/518] to review the changes:
 * {{BM25SimilarityFactory}} always creates instances of the new {{BM25Similarity}}
 * New {{LegacyBM25SimilarityFactory}} to be able to explicitly fall back
 * {{SchemaSimilarityFactory}} creates {{BM25Similarity}} from 
luceneMatchVersion >= 8.0, else {{LegacyBM25Similarity}}
 * Update tests relying on exact scores

The upgrade note reads:
{noformat}
* If you explicitly use BM25SimilarityFactory in your schema the absolute 
scoring will be lower, see SOLR-13025.
 But ordering of documents will not change in the normal case. Use 
LegacyBM25SimilarityFactory if you need to force
 the old 6.x/7.x scoring. Note that if you have not specified any similarity in 
schema or use the default
 SchemaSimilarityFactory, then LegacyBM25Similarity is automatically selected 
for 'luceneMatchVersion' < 8.0.0.
 See also explanation in Reference Guide chapter "Other Schema 
Elements".{noformat}
Precommit passes, as does the Solr test suite (incredible!).

Reviews welcome. I plan to commit on Wednesday.
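
A simplified sketch of the fallback rule from the bullet list above (the real 
change lives in PR #518); the version handling here is illustrative and not 
Solr's actual {{luceneMatchVersion}} parsing:
{code:java}
public class SimilarityFallbackSketch {
  // Default similarity when the schema defines none, or defines
  // SchemaSimilarityFactory without an explicit similarity.
  static String pickDefaultSimilarity(int luceneMatchMajorVersion) {
    // 8.0 and later get the new BM25 scoring; older match versions keep the
    // legacy (k1 + 1)-scaled scores for back-compat.
    return luceneMatchMajorVersion >= 8 ? "BM25Similarity" : "LegacyBM25Similarity";
  }

  public static void main(String[] args) {
    System.out.println(pickDefaultSimilarity(7)); // LegacyBM25Similarity
    System.out.println(pickDefaultSimilarity(8)); // BM25Similarity
  }
}
{code}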

> SchemaSimilarityFactory fallback to LegacyBM25Similarity
> 
>
> Key: SOLR-13025
> URL: https://issues.apache.org/jira/browse/SOLR-13025
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: search
>Affects Versions: master (8.0)
>Reporter: Adrien Grand
>Assignee: Jan Høydahl
>Priority: Blocker
> Fix For: master (8.0)
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> This is a follow-up of LUCENE-8563: Lucene changed its BM25Similarity 
> implementation to no longer multiply all scores by (k1 + 1). Solr was left 
> unchanged by replacing uses of BM25Similarity with LegacyBM25Similarity which 
> returns the same scores as in 7.x.
> This Jira makes the default similarity depend on {{luceneMatchVersion}} for 
> back-compat if the schema either does not define a similarity or defines 
> {{SchemaSimilarityFactory}}. If a user has explicitly defined 
> {{BM25SimilarityFactory}} then the new one will be used, and she will need to 
> replace it with {{LegacyBM25SimilarityFactory}} if she wants to keep the old 
> absolute scores (most often not necessary).
> This change is also described in RefGuide and CHANGES.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8566) Deprecate methods in CustomAnalyzer.Builder which take factory classes

2018-12-07 Thread Uwe Schindler (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8566?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16712767#comment-16712767
 ] 

Uwe Schindler commented on LUCENE-8566:
---

Hi, in fact there is no difference between the two calls. Yes, the class name 
is an implementation detail if you purely see it from the standpoint of 
somebody using "configuration" files. But those people get an error message on 
startup of the server.

For people building a custom analyzer from source code, using class names or 
constants helps them when using their IDE's autocompletion. To them it does not 
matter if they write ".class" or ".NAME" or just use a "string" as is.

About the implementation - my proposal would be:
- Add a "NAME" static public final String field to all factories (similar to 
what Elasticsearch is doing).
- In the SPI code, use reflection to look up the static field named "NAME" 
for every class we discover, and use the found name to register the factory 
class for lookup in "Factory.forName()".
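
A rough sketch of that proposal, assuming only plain reflection; the registry 
and method names here are placeholders rather than Lucene's actual SPI classes:
{code:java}
import java.lang.reflect.Field;
import java.lang.reflect.Modifier;
import java.util.HashMap;
import java.util.Map;

public class FactoryNameLookupSketch {
  private static final Map<String, Class<?>> REGISTRY = new HashMap<>();

  // Read the public static final String NAME from a discovered factory class
  // and register the class under that symbolic name.
  static void register(Class<?> factoryClass) throws ReflectiveOperationException {
    Field nameField = factoryClass.getField("NAME");
    if (!Modifier.isStatic(nameField.getModifiers())
        || nameField.getType() != String.class) {
      throw new IllegalArgumentException(factoryClass + " lacks a static String NAME");
    }
    String name = (String) nameField.get(null); // static field: no instance needed
    REGISTRY.put(name, factoryClass);
  }

  // Counterpart of the Factory.forName() style lookup mentioned above.
  static Class<?> forName(String name) {
    return REGISTRY.get(name);
  }
}
{code}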

> Deprecate methods in CustomAnalyzer.Builder which take factory classes
> --
>
> Key: LUCENE-8566
> URL: https://issues.apache.org/jira/browse/LUCENE-8566
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/analysis
>Reporter: Tomoko Uchida
>Assignee: Uwe Schindler
>Priority: Minor
>
> CustomAnalyzer.Builder has methods which take implementation classes as 
> follows.
>  - withTokenizer(Class factory, String... params)
>  - withTokenizer(Class factory, 
> Map params)
>  - addTokenFilter(Class factory, String... 
> params)
>  - addTokenFilter(Class factory, 
> Map params)
>  - addCharFilter(Class factory, String... params)
>  - addCharFilter(Class factory, 
> Map params)
> Since the builder also has methods which take service names, it seems like 
> that above methods are unnecessary and a little bit misleading. Giving 
> symbolic names is preferable to implementation factory classes, but for now, 
> users can write code depending on implementation classes.
> What do you think about deprecating those methods (adding {{@Deprecated}} 
> annotations) and deleting them in the future releases? Those are called by 
> only test cases so deleting them should have no impact on current lucene/solr 
> codebase.
> If this proposal gains your consent, I will create a patch. (Let me know if I 
> missed some point. I'll close it.)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS-EA] Lucene-Solr-7.6-Linux (64bit/jdk-12-ea+12) - Build # 72 - Unstable!

2018-12-07 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.6-Linux/72/
Java: 64bit/jdk-12-ea+12 -XX:+UseCompressedOops -XX:+UseSerialGC

7 tests failed.
FAILED:  
org.apache.solr.client.solrj.io.stream.StreamingTest.testNonePartitionKeys

Error Message:
java.util.concurrent.ExecutionException: java.io.IOException: --> 
https://127.0.0.1:9/solr/streams_shard2_replica_n3/:java.util.concurrent.ExecutionException:
 java.io.IOException: params 
q=*:*=id,a_s,a_i,a_f=a_s+asc,a_f+asc=none=javabin=false

Stack Trace:
java.io.IOException: java.util.concurrent.ExecutionException: 
java.io.IOException: --> 
https://127.0.0.1:9/solr/streams_shard2_replica_n3/:java.util.concurrent.ExecutionException:
 java.io.IOException: params 
q=*:*=id,a_s,a_i,a_f=a_s+asc,a_f+asc=none=javabin=false
at 
__randomizedtesting.SeedInfo.seed([4C5E76F19090F564:2BF71B90CBCDD93F]:0)
at 
org.apache.solr.client.solrj.io.stream.CloudSolrStream.openStreams(CloudSolrStream.java:400)
at 
org.apache.solr.client.solrj.io.stream.CloudSolrStream.open(CloudSolrStream.java:275)
at 
org.apache.solr.client.solrj.io.stream.StreamingTest.getTuples(StreamingTest.java:2428)
at 
org.apache.solr.client.solrj.io.stream.StreamingTest.testNonePartitionKeys(StreamingTest.java:188)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 

[jira] [Commented] (LUCENE-8591) LegacyBM25Similarity doesn't expose getDiscountOverlaps

2018-12-07 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8591?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16712704#comment-16712704
 ] 

ASF subversion and git services commented on LUCENE-8591:
-

Commit b24af10d59b15d4b79418c9d7af958aa0ac7c39a in lucene-solr's branch 
refs/heads/master from [~lucacavanna]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=b24af10 ]

LUCENE-8591: add LegacyBM25Similarity#getDiscountOverlaps

Signed-off-by: Adrien Grand 


> LegacyBM25Similarity doesn't expose getDiscountOverlaps
> ---
>
> Key: LUCENE-8591
> URL: https://issues.apache.org/jira/browse/LUCENE-8591
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Luca Cavanna
>Assignee: Luca Cavanna
>Priority: Minor
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> When I worked on LUCENE-8563 I intended to expose all the needed public 
> methods that BM25Similarity exposes, but I forgot to add getDiscountOverlaps.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] lucene-solr pull request #514: LUCENE-8591: add LegacyBM25Similarity#getDisc...

2018-12-07 Thread javanna
Github user javanna closed the pull request at:

https://github.com/apache/lucene-solr/pull/514


---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] lucene-solr issue #514: LUCENE-8591: add LegacyBM25Similarity#getDiscountOve...

2018-12-07 Thread javanna
Github user javanna commented on the issue:

https://github.com/apache/lucene-solr/pull/514
  
This has been merged.


---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (LUCENE-8591) LegacyBM25Similarity doesn't expose getDiscountOverlaps

2018-12-07 Thread Adrien Grand (JIRA)


 [ 
https://issues.apache.org/jira/browse/LUCENE-8591?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrien Grand resolved LUCENE-8591.
--
   Resolution: Fixed
Fix Version/s: master (8.0)

Thank you [~lucacavanna].

> LegacyBM25Similarity doesn't expose getDiscountOverlaps
> ---
>
> Key: LUCENE-8591
> URL: https://issues.apache.org/jira/browse/LUCENE-8591
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Luca Cavanna
>Assignee: Luca Cavanna
>Priority: Minor
> Fix For: master (8.0)
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> When I worked on LUCENE-8563 I intended to expose all the needed public 
> methods that BM25Similarity exposes, but I forgot to add getDiscountOverlaps.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8374) Reduce reads for sparse DocValues

2018-12-07 Thread Adrien Grand (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16712684#comment-16712684
 ] 

Adrien Grand commented on LUCENE-8374:
--

I'm in favor of 4. LUCENE-8585 is a much better option to me and I hope that we 
never release this doc-value format.

> Reduce reads for sparse DocValues
> -
>
> Key: LUCENE-8374
> URL: https://issues.apache.org/jira/browse/LUCENE-8374
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/codecs
>Affects Versions: 7.5, master (8.0)
>Reporter: Toke Eskildsen
>Priority: Major
>  Labels: performance
> Attachments: LUCENE-8374.patch, LUCENE-8374.patch, LUCENE-8374.patch, 
> LUCENE-8374.patch, LUCENE-8374.patch, LUCENE-8374.patch, LUCENE-8374.patch, 
> LUCENE-8374_branch_7_3.patch, LUCENE-8374_branch_7_3.patch.20181005, 
> LUCENE-8374_branch_7_4.patch, LUCENE-8374_branch_7_5.patch, 
> LUCENE-8374_part_1.patch, LUCENE-8374_part_2.patch, LUCENE-8374_part_3.patch, 
> LUCENE-8374_part_4.patch, entire_index_logs.txt, 
> image-2018-10-24-07-30-06-663.png, image-2018-10-24-07-30-56-962.png, 
> single_vehicle_logs.txt, 
> start-2018-10-24-1_snapshot___Users_tim_Snapshots__-_YourKit_Java_Profiler_2017_02-b75_-_64-bit.png,
>  
> start-2018-10-24_snapshot___Users_tim_Snapshots__-_YourKit_Java_Profiler_2017_02-b75_-_64-bit.png
>
>
> The {{Lucene70DocValuesProducer}} has the internal classes 
> {{SparseNumericDocValues}} and {{BaseSortedSetDocValues}} (sparse code path), 
> which in turn use {{IndexedDISI}} to handle the docID -> value-ordinal lookup. 
> The value-ordinal is the index of the docID, assuming an abstract, tightly 
> packed, monotonically increasing list of docIDs: if the docIDs with 
> corresponding values are {{[0, 4, 1432]}}, their value-ordinals will be 
> {{[0, 1, 2]}}.
> h2. Outer blocks
> The lookup structure of {{IndexedDISI}} consists of blocks of 2^16 values 
> (65536), where each block can be either {{ALL}}, {{DENSE}} (2^12 to 2^16 
> values) or {{SPARSE}} (< 2^12 values, roughly 6%). Consequently, blocks vary 
> quite a lot in size and in ordinal-resolving strategy.
> When a sparse numeric DocValue is needed, the code first locates the block 
> containing the wanted docID. It does so by iterating blocks one by one until 
> it reaches the needed one, where each iteration requires a lookup in the 
> underlying {{IndexSlice}}. For a common memory-mapped index, this translates 
> to either a cached request or a read operation. If a segment has 6M documents, 
> the worst case is 91 lookups. In our web archive, our segments have ~300M 
> values: a worst case of 4577 lookups!
> One obvious solution is a lookup table for blocks: a long[] array with an 
> entry for each block. For 6M documents, that is < 1 KB and would allow direct 
> jumping (a single lookup) in all instances. Unfortunately this lookup table 
> cannot be generated upfront when the writing of values is purely streaming. It 
> can be appended to the end of the stream before it is closed, but without 
> knowing the position of the lookup table the reader cannot seek to it.
> One strategy for creating such a lookup table would be to generate it during 
> reads and cache it for subsequent lookups. This does not fit directly into how 
> {{IndexedDISI}} currently works (it is created anew for each invocation), but 
> could probably be added with a little work. An advantage of this approach is 
> that it does not change the underlying format and thus could be used with 
> existing indexes.
> h2. The lookup structure inside each block
> If {{ALL}} of the 2^16 values are defined, the structure is empty and the 
> ordinal is simply the requested docID with some modulo and multiply math. 
> Nothing to improve there.
> If the block is {{DENSE}} (2^12 to 2^16 values are defined), a bitmap is used 
> and the number of set bits up to the wanted index (the docID modulo the block 
> origin) is counted. That bitmap is a long[1024], meaning that the worst case 
> is to look up and count all set bits in 1024 longs!
> One known solution to this is a [rank 
> structure|https://en.wikipedia.org/wiki/Succinct_data_structure]. I 
> [implemented 
> it|https://github.com/tokee/lucene-solr/blob/solr5894/solr/core/src/java/org/apache/solr/search/sparse/count/plane/RankCache.java] 
> for a related project, and with that the rank overhead for a {{DENSE}} block 
> would be long[32], ensuring a maximum of 9 lookups. It is not trivial to build 
> the rank structure, and caching it (assuming all blocks are dense) for 6M 
> documents would require 22 KB (3.17% overhead). It would be far better to 
> generate the rank structure at index time and store it immediately before the 
> bitset (this is possible with streaming, as each block is fully resolved 
> before flushing), but of course 
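The two ideas above (a per-block offset table for direct jumps, and a rank structure over a {{DENSE}} block's bitmap) can be sketched in a few lines. The layout below is illustrative, deliberately coarser than the long[32] structure mentioned, and is not the actual IndexedDISI or RankCache code.

{code:java}
// Sketch of the two ideas from the description above. Block layout and names
// are simplified for illustration; this is NOT the real IndexedDISI code.
public class DenseRankSketch {

  // Idea 1: a per-block table of file offsets allows jumping directly to the
  // block that contains a docID instead of iterating blocks one by one.
  static long blockOffset(long[] blockOffsets, int docID) {
    return blockOffsets[docID >>> 16]; // one lookup instead of O(#blocks)
  }

  // Idea 2: for a DENSE block (65536 bits = 1024 longs), precompute cumulative
  // set-bit counts every RANK_INTERVAL longs. rank(i) then needs one table
  // read plus at most RANK_INTERVAL popcounts instead of up to 1024.
  static final int RANK_INTERVAL = 8; // 8 longs = 512 bits per rank entry

  static int[] buildRank(long[] bitmap) {            // bitmap.length == 1024
    int[] rank = new int[bitmap.length / RANK_INTERVAL];
    int sum = 0;
    for (int i = 0; i < bitmap.length; i++) {
      if (i % RANK_INTERVAL == 0) rank[i / RANK_INTERVAL] = sum;
      sum += Long.bitCount(bitmap[i]);
    }
    return rank;
  }

  // Number of set bits in bitmap[0..bitIndex), i.e. the value-ordinal offset
  // of bitIndex inside the block.
  static int rank(long[] bitmap, int[] rank, int bitIndex) {
    int word = bitIndex >>> 6;
    int count = rank[word / RANK_INTERVAL];
    for (int i = (word / RANK_INTERVAL) * RANK_INTERVAL; i < word; i++) {
      count += Long.bitCount(bitmap[i]);
    }
    // partial word: bits below bitIndex within bitmap[word]
    count += Long.bitCount(bitmap[word] & ((1L << (bitIndex & 63)) - 1));
    return count;
  }

  public static void main(String[] args) {
    long[] bitmap = new long[1024];
    bitmap[0] = 0b1011;                        // docIDs 0, 1, 3 exist in block
    int[] rank = buildRank(bitmap);
    System.out.println(rank(bitmap, rank, 4)); // 3 values below bit 4
  }
}
{code}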

[JENKINS] Lucene-Solr-http2-Linux (32bit/jdk1.8.0_172) - Build # 44 - Failure!

2018-12-07 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-http2-Linux/44/
Java: 32bit/jdk1.8.0_172 -client -XX:+UseSerialGC

11 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.client.solrj.impl.Http2SolrClientCompatibilityTest

Error Message:
13 threads leaked from SUITE scope at 
org.apache.solr.client.solrj.impl.Http2SolrClientCompatibilityTest: 1) 
Thread[id=1980, name=qtp26808737-1980, state=RUNNABLE, 
group=TGRP-Http2SolrClientCompatibilityTest] at 
sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) at 
sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) at 
sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) at 
sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) at 
sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) at 
sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) at 
org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:423)
 at 
org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:360)
 at 
org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:357)
 at 
org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:181)
 at 
org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:168)
 at 
org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:132)
 at 
org.eclipse.jetty.io.ManagedSelector$$Lambda$27/30690409.run(Unknown Source)
 at 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:765)
 at 
org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:683) 
at java.lang.Thread.run(Thread.java:748)2) Thread[id=1983, 
name=qtp26808737-1983, state=TIMED_WAITING, 
group=TGRP-Http2SolrClientCompatibilityTest] at 
sun.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
 at 
org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:392) 
at 
org.eclipse.jetty.util.thread.QueuedThreadPool.idleJobPoll(QueuedThreadPool.java:656)
 at 
org.eclipse.jetty.util.thread.QueuedThreadPool.access$800(QueuedThreadPool.java:46)
 at 
org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:720) 
at java.lang.Thread.run(Thread.java:748)3) Thread[id=1984, 
name=qtp26808737-1984, state=TIMED_WAITING, 
group=TGRP-Http2SolrClientCompatibilityTest] at 
sun.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
 at 
org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:392) 
at 
org.eclipse.jetty.util.thread.QueuedThreadPool.idleJobPoll(QueuedThreadPool.java:656)
 at 
org.eclipse.jetty.util.thread.QueuedThreadPool.access$800(QueuedThreadPool.java:46)
 at 
org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:720) 
at java.lang.Thread.run(Thread.java:748)4) Thread[id=1985, 
name=Scheduler-4707742, state=TIMED_WAITING, 
group=TGRP-Http2SolrClientCompatibilityTest] at 
sun.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
 at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
at java.lang.Thread.run(Thread.java:748)5) Thread[id=1977, 
name=qtp26808737-1977, state=RUNNABLE, 
group=TGRP-Http2SolrClientCompatibilityTest] at 
sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) at 
sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) at 
sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) at 
sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) at 
sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) at 
sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) at 

[jira] [Comment Edited] (SOLR-12697) pure DocValues support for FieldValueFeature

2018-12-07 Thread Stanislav Livotov (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12697?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16708618#comment-16708618
 ] 

Stanislav Livotov edited comment on SOLR-12697 at 12/7/18 11:10 AM:


Hi all, 

I've created a new patch that migrates FieldValueFeature to the 
SolrDocumentFetcher#solrDoc approach introduced in SOLR-12625. [~erickerickson] 
can you please take a look at it?

I also made a couple of additional code changes:
 # Fixed a small issue with defaultValue (previously it was impossible to set 
it from feature.json, and the existing tests created the Feature manually 
rather than by parsing JSON). Tests are added that validate defaultValue both 
from the schema field configuration and from the feature's default value. 
 # Added parsing of numbers persisted as StrFields (tests added). Note that I 
first check whether the value is a boolean string (T/F) in order to preserve 
the previous behavior. Note also that there is a difference in behavior: 
previously, an unsupported field silently returned defaultValue; now a 
FeatureException is thrown. 

Since the code now uses SolrDocumentFetcher to retrieve field values, it is no 
longer possible to test FieldValueFeature with LuceneTestCase, so I had to 
migrate TestLTRReRankingPipeline to SolrTestCaseJ4 (I tried to keep the changes 
there minimal).

[~cpoerschke] please take a look at the patch and the described changes. WDYT?



> pure DocValues support for FieldValueFeature
> 
>
> Key: SOLR-12697
> URL: https://issues.apache.org/jira/browse/SOLR-12697
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: contrib - LTR
>Reporter: Stanislav Livotov
>Priority: Major
> Attachments: SOLR-12697.patch, SOLR-12697.patch
>
>
> [~slivotov] wrote in SOLR-12688:
> bq. ... FieldValueFeature doesn't support pure DocValues fields (Stored 
> false). Please also note that for fields which are both stored and DocValues 
> it is working not optimal because it is extracting just one field from the 
> stored document. DocValues are obviously faster for such usecases. ...
> (Please see SOLR-12688 description for overall context and analysis results.)
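A minimal sketch of the value-resolution order described in the comment above (boolean-looking strings first, then numeric parsing, otherwise a failure): the names and exact exception handling in the actual patch may differ, so treat this as an illustration only.

{code:java}
// Illustrative only: mirrors the described order of checks when a feature
// value is stored as a string. Not the actual SOLR-12697 patch code.
public final class FieldValueParsingSketch {

  /** Stand-in for the FeatureException behavior described in the comment. */
  static class FeatureException extends RuntimeException {
    FeatureException(String msg) { super(msg); }
  }

  static float parseFeatureValue(String raw) {
    // 1) Preserve the previous behavior: "T"/"F" are treated as booleans.
    if ("T".equals(raw)) return 1f;
    if ("F".equals(raw)) return 0f;
    // 2) Otherwise try to parse the string as a number.
    try {
      return Float.parseFloat(raw);
    } catch (NumberFormatException e) {
      // 3) Previously an unsupported value silently fell back to defaultValue;
      //    the described new behavior is to fail loudly instead.
      throw new FeatureException("Unsupported field value: " + raw);
    }
  }

  public static void main(String[] args) {
    System.out.println(parseFeatureValue("T"));   // 1.0
    System.out.println(parseFeatureValue("3.5")); // 3.5
  }
}
{code}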






[jira] [Updated] (SOLR-13025) SchemaSimilarityFactory fallback to LegacyBM25Similarity

2018-12-07 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/SOLR-13025?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl updated SOLR-13025:
---
Description: 
This is a follow-up of LUCENE-8563: Lucene changed its BM25Similarity 
implementation to no longer multiply all scores by (k1 + 1). Solr was left 
unchanged by replacing uses of BM25Similarity with LegacyBM25Similarity which 
returns the same scores as in 7.x.

This Jira makes the default similarity depend on {{luceneMatchVersion}} for 
back-compat if the schema either does not define a similarity or defines 
{{SchemaSimilarityFactory}}. If a user has explicitly defined 
{{BM25SimilarityFactory}}, then the new similarity will be used and she will 
need to replace it with {{LegacyBM25SimilarityFactory}} if she wants to keep 
the old absolute scores (most often not necessary).

This change is also described in RefGuide and CHANGES.

  was:This is a follow-up of LUCENE-8563: Lucene changed its BM25Similarity 
implementation to no longer multiply all scores by (k1 + 1). Solr was left 
unchanged by replacing uses of BM25Similarity with LegacyBM25Similarity which 
returns the same scores as in 7.x. However it would be nice to switch back to 
BM25Similarity, either all the time with a note in the migration guide, or 
based on the luceneMatchVersion of the collection.


> SchemaSimilarityFactory fallback to LegacyBM25Similarity
> 
>
> Key: SOLR-13025
> URL: https://issues.apache.org/jira/browse/SOLR-13025
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: search
>Affects Versions: master (8.0)
>Reporter: Adrien Grand
>Assignee: Jan Høydahl
>Priority: Blocker
> Fix For: master (8.0)
>
>
> This is a follow-up of LUCENE-8563: Lucene changed its BM25Similarity 
> implementation to no longer multiply all scores by (k1 + 1). Solr was left 
> unchanged by replacing uses of BM25Similarity with LegacyBM25Similarity which 
> returns the same scores as in 7.x.
> This Jira makes the default similarity depend on {{luceneMatchVersion}} for 
> back-compat if the schema either does not define a similarity or defines 
> {{SchemaSimilarityFactory}}. If a user has explicitly defined 
> {{BM25SimilarityFactory}}, then the new similarity will be used and she will 
> need to replace it with {{LegacyBM25SimilarityFactory}} if she wants to keep 
> the old absolute scores (most often not necessary).
> This change is also described in RefGuide and CHANGES.
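The gist of the version-gated fallback can be sketched with Lucene's {{Version}} utility. This is an illustration of the described behavior, not the actual SchemaSimilarityFactory change in PR #518, and it deliberately returns similarity names as strings rather than guessing at LegacyBM25Similarity's package.

{code:java}
import org.apache.lucene.util.Version;

// Sketch of the luceneMatchVersion-based default described above; not the
// actual SchemaSimilarityFactory code.
public class DefaultSimilaritySketch {

  /** Which BM25 flavor a schema without an explicit similarity should get. */
  static String defaultBm25For(Version luceneMatchVersion) {
    // On or after 8.0 the new BM25 (without the (k1 + 1) scaling factor) is
    // used; older luceneMatchVersion values keep 7.x-compatible scores.
    return luceneMatchVersion.onOrAfter(Version.LUCENE_8_0_0)
        ? "BM25Similarity"
        : "LegacyBM25Similarity";
  }

  public static void main(String[] args) throws Exception {
    System.out.println(defaultBm25For(Version.parse("7.6.0"))); // LegacyBM25Similarity
    System.out.println(defaultBm25For(Version.LUCENE_8_0_0));   // BM25Similarity
  }
}
{code}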






[GitHub] lucene-solr pull request #518: SOLR-13025: SchemaSimilarityFactory fallback ...

2018-12-07 Thread janhoy
GitHub user janhoy opened a pull request:

https://github.com/apache/lucene-solr/pull/518

SOLR-13025: SchemaSimilarityFactory fallback to LegacyBM25Similarity

See https://issues.apache.org/jira/browse/SOLR-13025

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/cominvent/lucene-solr solr13025-newBM25

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/lucene-solr/pull/518.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #518


commit 5f5df0089fa5755b4259d25d8260e13f5a19a57e
Author: Jan Høydahl 
Date:   2018-11-30T13:49:19Z

SOLR-13025: First cut of back compat for BM25

commit c9367484280bc7c056852520ed5ca3bc5e6cebcd
Author: Jan Høydahl 
Date:   2018-12-07T09:23:22Z

Merge branch 'master' into solr13025-newBM25

commit 7e2ef9ba7a84f5e8f14ab24d1e0769f21cfc7a2a
Author: Jan Høydahl 
Date:   2018-12-07T10:07:23Z

SOLR-13025: Force back-compat only through SchemaSimilarityFactory, keep 
BM25SimilarityFactory explicitly for the new sim







[jira] [Commented] (LUCENE-8592) MultiSorter#sort incorrectly sort Integer/Long#MIN_VALUE when the natural sort is reversed

2018-12-07 Thread Jim Ferenczi (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8592?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16712613#comment-16712613
 ] 

Jim Ferenczi commented on LUCENE-8592:
--

{quote}
Does CheckIndex fail on broken indices? That could be helpful to know whether 
users can trust it to check whether they are affected and need to reindex.
{quote}

Yes, with this patch CheckIndex will fail on indices affected by this bug.

> MultiSorter#sort incorrectly sort Integer/Long#MIN_VALUE when the natural 
> sort is reversed
> --
>
> Key: LUCENE-8592
> URL: https://issues.apache.org/jira/browse/LUCENE-8592
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Jim Ferenczi
>Priority: Major
> Attachments: LUCENE-8592.patch
>
>
> MultiSorter#getComparableProviders on an integer or long field doesn't handle 
> MIN_VALUE correctly when the natural order is reversed. To handle a reverse 
> sort we use the negation of the value, but there is no overflow check, so 
> MIN_VALUE for ints and longs always sorts first (even when the natural order 
> is reversed). 
> This method is used by index sorting when merging already sorted segments 
> together. This means that a sorted index can be incorrectly sorted if it uses 
> a reverse sort and a missing value set to MIN_VALUE (long or int, or values 
> inside the segment that are equal to MIN_VALUE).
> This is a bad bug because it affects the document order inside segments, and 
> only a reindex can restore the correct sort order. 
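The overflow at the heart of this bug, and the overflow-free alternative of applying the reverse multiplier to the comparison result (the approach discussed further down in the thread), can be shown in isolation. This is an illustrative sketch, not the MultiSorter code.

{code:java}
// Demonstrates why negating values to implement a reversed sort is unsafe for
// Long.MIN_VALUE / Integer.MIN_VALUE, and shows the overflow-free alternative
// of flipping the sign of the comparison result instead.
public class ReverseSortOverflow {

  // Buggy pattern: -Long.MIN_VALUE overflows back to Long.MIN_VALUE, so
  // MIN_VALUE still sorts first even though the order is reversed.
  static long negatedKey(long value) {
    return -value;
  }

  // Safe pattern: compare natural values, then apply the reverse multiplier.
  static int compare(long a, long b, int reverseMul) { // reverseMul is 1 or -1
    return reverseMul * Long.compare(a, b);
  }

  public static void main(String[] args) {
    System.out.println(negatedKey(Long.MIN_VALUE) == Long.MIN_VALUE); // true: overflow
    System.out.println(compare(Long.MIN_VALUE, 0L, -1)); // 1: MIN_VALUE sorts last when reversed
  }
}
{code}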






[jira] [Updated] (SOLR-13025) SchemaSimilarityFactory fallback to LegacyBM25Similarity

2018-12-07 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/SOLR-13025?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl updated SOLR-13025:
---
Summary: SchemaSimilarityFactory fallback to LegacyBM25Similarity  (was: 
Replace usage of LegacyBM25Similarity with BM25Similarity)

> SchemaSimilarityFactory fallback to LegacyBM25Similarity
> 
>
> Key: SOLR-13025
> URL: https://issues.apache.org/jira/browse/SOLR-13025
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: search
>Affects Versions: master (8.0)
>Reporter: Adrien Grand
>Assignee: Jan Høydahl
>Priority: Blocker
> Fix For: master (8.0)
>
>
> This is a follow-up of LUCENE-8563: Lucene changed its BM25Similarity 
> implementation to no longer multiply all scores by (k1 + 1). Solr was left 
> unchanged by replacing uses of BM25Similarity with LegacyBM25Similarity which 
> returns the same scores as in 7.x. However it would be nice to switch back to 
> BM25Similarity, either all the time with a note in the migration guide, or 
> based on the luceneMatchVersion of the collection.






[jira] [Commented] (LUCENE-8592) MultiSorter#sort incorrectly sort Integer/Long#MIN_VALUE when the natural sort is reversed

2018-12-07 Thread Adrien Grand (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8592?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16712609#comment-16712609
 ] 

Adrien Grand commented on LUCENE-8592:
--

I see: the patch applies reverseMul on top of the comparison result rather than 
on the values themselves, like the search-time sort does. That is still subject 
to bugs, but much less likely, I guess.

+1 to merge. Let's get someone else to have a look at it first, but it would be 
nice to have it in 7.6 too.

Does CheckIndex fail on broken indices? That could be helpful to know whether 
users can trust it to check whether they are affected and need to reindex.

> MultiSorter#sort incorrectly sort Integer/Long#MIN_VALUE when the natural 
> sort is reversed
> --
>
> Key: LUCENE-8592
> URL: https://issues.apache.org/jira/browse/LUCENE-8592
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Jim Ferenczi
>Priority: Major
> Attachments: LUCENE-8592.patch
>
>
> MultiSorter#getComparableProviders on an integer or long field doesn't handle 
> MIN_VALUE correctly when the natural order is reversed. To handle a reverse 
> sort we use the negation of the value, but there is no overflow check, so 
> MIN_VALUE for ints and longs always sorts first (even when the natural order 
> is reversed). 
> This method is used by index sorting when merging already sorted segments 
> together. This means that a sorted index can be incorrectly sorted if it uses 
> a reverse sort and a missing value set to MIN_VALUE (long or int, or values 
> inside the segment that are equal to MIN_VALUE).
> This is a bad bug because it affects the document order inside segments, and 
> only a reindex can restore the correct sort order. 






[jira] [Commented] (SOLR-13040) Harden TestSQLHandler.

2018-12-07 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13040?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16712577#comment-16712577
 ] 

ASF subversion and git services commented on SOLR-13040:


Commit 38cfd0e25974e9a4cd676a25d373934e1ea8a528 in lucene-solr's branch 
refs/heads/jira/http2 from markrmiller
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=38cfd0e ]

SOLR-13040: Add AwaitsFix annotation to TestSQLHandler and improve exception 
information related to that test.
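For reference, muting a suite with the test framework's {{AwaitsFix}} annotation looks roughly like the sketch below; the base class and the exact annotation text added by the commit are not shown in this thread, so treat them as assumptions.

{code:java}
import org.apache.lucene.util.LuceneTestCase;
import org.apache.solr.SolrTestCaseJ4;

// Sketch of how a flaky suite is typically disabled until fixed; the real
// TestSQLHandler's base class and annotation may differ from this example.
@LuceneTestCase.AwaitsFix(bugUrl = "https://issues.apache.org/jira/browse/SOLR-13040")
public class TestSQLHandlerSketch extends SolrTestCaseJ4 {
  // test methods omitted
}
{code}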


> Harden TestSQLHandler.
> --
>
> Key: SOLR-13040
> URL: https://issues.apache.org/jira/browse/SOLR-13040
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mark Miller
>Priority: Major
>






