[jira] [Commented] (SOLR-7452) json facet api returning inconsistent counts in cloud set up

2016-08-10 Thread Dorit Carmon (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7452?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15416599#comment-15416599
 ] 

Dorit Carmon commented on SOLR-7452:


Hi, are there any updates on this issue?
This is very significant for our analysis; hoping there is some good news on
progress :-)
Thank you,
Dorit.

> json facet api returning inconsistent counts in cloud set up
> 
>
> Key: SOLR-7452
> URL: https://issues.apache.org/jira/browse/SOLR-7452
> Project: Solr
>  Issue Type: Bug
>  Components: Facet Module
>Affects Versions: 5.1
>Reporter: Vamsi Krishna D
>  Labels: count, facet, sort
> Fix For: 5.2
>
> Attachments: SOLR-7452.patch
>
>   Original Estimate: 96h
>  Remaining Estimate: 96h
>
> While using the newly added JSON term facet API 
> (http://yonik.com/json-facet-api/#TermsFacet) I am encountering inconsistent 
> counts for faceted values (note: I am running Solr in cloud mode). For 
> example, consider that I have txns_id (a unique field, or key), 
> consumer_number, and amount. Now, for 10 million such records, let's say I 
> query for 
> q=*:*&rows=0&
> json.facet={
>   biskatoo:{
>     type : terms,
>     field : consumer_number,
>     limit : 20,
>     sort : {y:desc},
>     numBuckets : true,
>     facet:{
>       y : "sum(amount)"
>     }
>   }
> }
> the results are as follows (some are omitted):
> "facets":{
>   "count":6641277,
>   "biskatoo":{
>     "numBuckets":3112708,
>     "buckets":[{
>         "val":"surya",
>         "count":4,
>         "y":2.264506},
>       {
>         "val":"raghu",
>         "COUNT":3,   // capitalised for recognition
>         "y":1.8},
>       {
>         "val":"malli",
>         "count":4,
>         "y":1.78}]}}}
> But if I restrict the query to 
> q=consumer_number:raghu&rows=0&
> json.facet={
>   biskatoo:{
>     type : terms,
>     field : consumer_number,
>     limit : 20,
>     sort : {y:desc},
>     numBuckets : true,
>     facet:{
>       y : "sum(amount)"
>     }
>   }
> }
> I get:
> "facets":{
>   "count":4,
>   "biskatoo":{
>     "numBuckets":1,
>     "buckets":[{
>         "val":"raghu",
>         "COUNT":4,
>         "y":2429708.24}]}}}
> One can see the count results are inconsistent (and I found many occasions 
> of inconsistency).
> I have tried the patch from https://issues.apache.org/jira/browse/SOLR-7412, 
> but the issue still seems unresolved.
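A plausible mechanism for a mismatch like the one reported (a general property of distributed top-N faceting, not a confirmed diagnosis for this thread) is that each shard returns only its top `limit` buckets, so a term that falls outside the top N on some shard silently loses that shard's contribution in the merged count. A toy sketch, not Solr's code:

```python
# Toy model of unrefined distributed term faceting (NOT Solr's code):
# each shard reports only its top-N buckets, and the coordinator sums
# whatever it received, so merged counts can come up short.
from collections import Counter

def shard_facet(docs, limit):
    """Per-shard facet: the top `limit` terms by count."""
    return dict(Counter(docs).most_common(limit))

def merge(shard_results):
    merged = Counter()
    for res in shard_results:
        merged.update(res)
    return merged

shard1 = ["surya"] * 5 + ["malli"] * 4 + ["raghu"] * 3
shard2 = ["surya"] * 2 + ["raghu"] * 1

true_total = Counter(shard1 + shard2)                      # raghu -> 4
limited = merge([shard_facet(shard1, 2), shard_facet(shard2, 2)])
# "raghu" misses shard1's top-2, so that shard's 3 occurrences are lost:
print(true_total["raghu"], limited["raghu"])               # 4 vs 1
```

Restricting the query to a single term (as in the second request above) forces every shard to report that bucket, which is why the restricted count can exceed the unrestricted one.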



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-6.x-Solaris (64bit/jdk1.8.0) - Build # 320 - Unstable!

2016-08-10 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Solaris/320/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

2 tests failed.
FAILED:  org.apache.solr.cloud.LeaderFailoverAfterPartitionTest.test

Error Message:
org.apache.solr.client.solrj.SolrServerException: No live SolrServers available 
to handle this 
request:[http://127.0.0.1:53657/ny_nfs/c8n_1x3_lf_shard1_replica1]

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: 
org.apache.solr.client.solrj.SolrServerException: No live SolrServers available 
to handle this 
request:[http://127.0.0.1:53657/ny_nfs/c8n_1x3_lf_shard1_replica1]
at 
__randomizedtesting.SeedInfo.seed([5EA5C491DD462F85:D6F1FB4B73BA427D]:0)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.directUpdate(CloudSolrClient.java:774)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1172)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:1061)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:997)
at org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1219)
at 
org.apache.solr.cloud.HttpPartitionTest.sendDoc(HttpPartitionTest.java:592)
at 
org.apache.solr.cloud.HttpPartitionTest.sendDoc(HttpPartitionTest.java:578)
at 
org.apache.solr.cloud.LeaderFailoverAfterPartitionTest.testRf3WithLeaderFailover(LeaderFailoverAfterPartitionTest.java:174)
at 
org.apache.solr.cloud.LeaderFailoverAfterPartitionTest.test(LeaderFailoverAfterPartitionTest.java:55)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:992)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:967)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)

[JENKINS] Lucene-Solr-master-Linux (64bit/jdk1.8.0_102) - Build # 17528 - Failure!

2016-08-10 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/17528/
Java: 64bit/jdk1.8.0_102 -XX:-UseCompressedOops -XX:+UseG1GC

All tests passed

Build Log:
[...truncated 12561 lines...]
   [junit4] JVM J2: stdout was not empty, see: 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/build/solr-core/test/temp/junit4-J2-20160811_040533_578.sysout
   [junit4] >>> JVM J2 emitted unexpected output (verbatim) 
   [junit4] java.lang.OutOfMemoryError: Java heap space
   [junit4] Dumping heap to 
/home/jenkins/workspace/Lucene-Solr-master-Linux/heapdumps/java_pid8624.hprof 
...
   [junit4] Heap dump file created [463504543 bytes in 1.227 secs]
   [junit4] <<< JVM J2: EOF 

[...truncated 11049 lines...]
BUILD FAILED
/home/jenkins/workspace/Lucene-Solr-master-Linux/build.xml:763: The following 
error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-master-Linux/build.xml:715: Some of the 
tests produced a heap dump, but did not fail. Maybe a suppressed 
OutOfMemoryError? Dumps created:
* java_pid8624.hprof

Total time: 68 minutes 23 seconds
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
[WARNINGS] Skipping publisher since build result is FAILURE
Recording test results
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any




[JENKINS] Lucene-Solr-master-Solaris (64bit/jdk1.8.0) - Build # 772 - Unstable!

2016-08-10 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-Solaris/772/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  org.apache.solr.cloud.TestHdfsCloudBackupRestore.test

Error Message:
expected:<{shard1_0=19, shard1_1=19, shard2=26}> but was:<{shard1_0=0, 
shard1_1=0, shard2=26}>

Stack Trace:
java.lang.AssertionError: expected:<{shard1_0=19, shard1_1=19, shard2=26}> but 
was:<{shard1_0=0, shard1_1=0, shard2=26}>
at 
__randomizedtesting.SeedInfo.seed([1CFFEF775F761B5B:94ABD0ADF18A76A3]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:147)
at 
org.apache.solr.cloud.AbstractCloudBackupRestoreTestCase.testBackupAndRestore(AbstractCloudBackupRestoreTestCase.java:242)
at 
org.apache.solr.cloud.AbstractCloudBackupRestoreTestCase.test(AbstractCloudBackupRestoreTestCase.java:126)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)

[jira] [Commented] (SOLR-9405) ConcurrentModificationException in ZkStateReader.getStateWatchers

2016-08-10 Thread Shalin Shekhar Mangar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9405?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15416374#comment-15416374
 ] 

Shalin Shekhar Mangar commented on SOLR-9405:
-

FYI [~romseygeek]

> ConcurrentModificationException in ZkStateReader.getStateWatchers
> -
>
> Key: SOLR-9405
> URL: https://issues.apache.org/jira/browse/SOLR-9405
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Affects Versions: 6.1
>Reporter: Shalin Shekhar Mangar
> Fix For: 6.2, master (7.0)
>
>
> Jenkins found this: http://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/1432/
> {code}
> Stack Trace:
> java.util.ConcurrentModificationException
> at 
> __randomizedtesting.SeedInfo.seed([FA459DF725097EFF:A77E52876204E1C1]:0)
> at 
> java.util.HashMap$HashIterator.nextNode(java.base@9-ea/HashMap.java:1489)
> at 
> java.util.HashMap$KeyIterator.next(java.base@9-ea/HashMap.java:1513)
> at 
> java.util.AbstractCollection.addAll(java.base@9-ea/AbstractCollection.java:351)
> at java.util.HashSet.<init>(java.base@9-ea/HashSet.java:119)
> at 
> org.apache.solr.common.cloud.ZkStateReader.getStateWatchers(ZkStateReader.java:1279)
> at 
> org.apache.solr.common.cloud.TestCollectionStateWatchers.testSimpleCollectionWatch(TestCollectionStateWatchers.java:116)
> {code}
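For context, a ConcurrentModificationException of this shape is typically raised while copying a map's key set (here, the `new HashSet<>(...)` inside getStateWatchers) at the same time as another thread mutates the map. A minimal Python analogue of the hazard and the usual locking fix (illustrative only, not the Solr code):

```python
import threading

# The hazard: copying a dict's keys while the dict is being mutated.
# Python raises RuntimeError where Java raises ConcurrentModificationException.
watchers = {"collection%d" % i: object() for i in range(5)}

def unsafe_copy(d):
    out = set()
    for k in d:                        # iteration breaks if d is resized mid-loop
        out.add(k)
        d["late-watcher"] = object()   # stands in for a concurrent registration
    return out

try:
    unsafe_copy(watchers)
except RuntimeError as e:
    print("copy failed:", e)           # dict changed size during iteration

# The usual fix: take the snapshot under the same lock that guards mutation.
_lock = threading.Lock()

def safe_copy(d):
    with _lock:
        return set(d)
```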






[jira] [Updated] (SOLR-9405) ConcurrentModificationException in ZkStateReader.getStateWatchers

2016-08-10 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9405?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar updated SOLR-9405:

Summary: ConcurrentModificationException in ZkStateReader.getStateWatchers  
(was: ConcurrentModifcationException in ZkStateReader.getStateWatchers)

> ConcurrentModificationException in ZkStateReader.getStateWatchers
> -
>
> Key: SOLR-9405
> URL: https://issues.apache.org/jira/browse/SOLR-9405
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Affects Versions: 6.1
>Reporter: Shalin Shekhar Mangar
> Fix For: 6.2, master (7.0)
>
>
> Jenkins found this: http://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/1432/
> {code}
> Stack Trace:
> java.util.ConcurrentModificationException
> at 
> __randomizedtesting.SeedInfo.seed([FA459DF725097EFF:A77E52876204E1C1]:0)
> at 
> java.util.HashMap$HashIterator.nextNode(java.base@9-ea/HashMap.java:1489)
> at 
> java.util.HashMap$KeyIterator.next(java.base@9-ea/HashMap.java:1513)
> at 
> java.util.AbstractCollection.addAll(java.base@9-ea/AbstractCollection.java:351)
> at java.util.HashSet.<init>(java.base@9-ea/HashSet.java:119)
> at 
> org.apache.solr.common.cloud.ZkStateReader.getStateWatchers(ZkStateReader.java:1279)
> at 
> org.apache.solr.common.cloud.TestCollectionStateWatchers.testSimpleCollectionWatch(TestCollectionStateWatchers.java:116)
> {code}






[jira] [Created] (SOLR-9405) ConcurrentModifcationException in ZkStateReader.getStateWatchers

2016-08-10 Thread Shalin Shekhar Mangar (JIRA)
Shalin Shekhar Mangar created SOLR-9405:
---

 Summary: ConcurrentModifcationException in 
ZkStateReader.getStateWatchers
 Key: SOLR-9405
 URL: https://issues.apache.org/jira/browse/SOLR-9405
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
  Components: SolrCloud
Affects Versions: 6.1
Reporter: Shalin Shekhar Mangar
 Fix For: 6.2, master (7.0)


Jenkins found this: http://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/1432/

{code}
Stack Trace:
java.util.ConcurrentModificationException
at 
__randomizedtesting.SeedInfo.seed([FA459DF725097EFF:A77E52876204E1C1]:0)
at 
java.util.HashMap$HashIterator.nextNode(java.base@9-ea/HashMap.java:1489)
at java.util.HashMap$KeyIterator.next(java.base@9-ea/HashMap.java:1513)
at 
java.util.AbstractCollection.addAll(java.base@9-ea/AbstractCollection.java:351)
at java.util.HashSet.<init>(java.base@9-ea/HashSet.java:119)
at 
org.apache.solr.common.cloud.ZkStateReader.getStateWatchers(ZkStateReader.java:1279)
at 
org.apache.solr.common.cloud.TestCollectionStateWatchers.testSimpleCollectionWatch(TestCollectionStateWatchers.java:116)
{code}






[JENKINS-EA] Lucene-Solr-6.x-Linux (64bit/jdk-9-ea+129) - Build # 1432 - Unstable!

2016-08-10 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/1432/
Java: 64bit/jdk-9-ea+129 -XX:+UseCompressedOops -XX:+UseSerialGC

1 tests failed.
FAILED:  
org.apache.solr.common.cloud.TestCollectionStateWatchers.testSimpleCollectionWatch

Error Message:


Stack Trace:
java.util.ConcurrentModificationException
at 
__randomizedtesting.SeedInfo.seed([FA459DF725097EFF:A77E52876204E1C1]:0)
at 
java.util.HashMap$HashIterator.nextNode(java.base@9-ea/HashMap.java:1489)
at java.util.HashMap$KeyIterator.next(java.base@9-ea/HashMap.java:1513)
at 
java.util.AbstractCollection.addAll(java.base@9-ea/AbstractCollection.java:351)
at java.util.HashSet.<init>(java.base@9-ea/HashSet.java:119)
at 
org.apache.solr.common.cloud.ZkStateReader.getStateWatchers(ZkStateReader.java:1279)
at 
org.apache.solr.common.cloud.TestCollectionStateWatchers.testSimpleCollectionWatch(TestCollectionStateWatchers.java:116)
at 
jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(java.base@9-ea/Native 
Method)
at 
jdk.internal.reflect.NativeMethodAccessorImpl.invoke(java.base@9-ea/NativeMethodAccessorImpl.java:62)
at 
jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(java.base@9-ea/DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(java.base@9-ea/Method.java:533)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)

[JENKINS] Lucene-Solr-Tests-6.x - Build # 396 - Unstable

2016-08-10 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-6.x/396/

1 tests failed.
FAILED:  org.apache.solr.update.AutoCommitTest.testMaxDocs

Error Message:
Exception during query

Stack Trace:
java.lang.RuntimeException: Exception during query
at 
__randomizedtesting.SeedInfo.seed([D50454635379F4AB:6C8582BC7F93F021]:0)
at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:783)
at 
org.apache.solr.update.AutoCommitTest.testMaxDocs(AutoCommitTest.java:198)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.RuntimeException: REQUEST FAILED: 
xpath=//result[@numFound=1]
xml response was: 
00
request was: q=id:14&qt=standard&start=0&rows=20&version=2.2
	at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:776)
	... 40 more




Build Log:
[...truncated 11320 lines...]
   [junit4] Suite: org.apache.solr.update.AutoCommitTest
   [junit4]   2> Creating dataDir: 

[GitHub] lucene-solr pull request #68: [SOLR-9269] Refactor the snapshot cleanup mech...

2016-08-10 Thread hgadre
GitHub user hgadre opened a pull request:

https://github.com/apache/lucene-solr/pull/68

[SOLR-9269] Refactor the snapshot cleanup mechanism to rely on Lucene

The current snapshot cleanup mechanism is based on reference counting
the index files shared between multiple segments. Since this mechanism
completely skips the Lucene APIs, it is not portable (e.g. it does not
work on the 4.10.x versions).

This patch provides an alternate implementation which relies exclusively
on Lucene IndexWriter (+ IndexDeletionPolicy) for cleanup.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/hgadre/lucene-solr SOLR-9269_update

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/lucene-solr/pull/68.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #68


commit d329f03830b88d4214790b7c85e19c55e947e918
Author: Hrishikesh Gadre 
Date:   2016-08-10T23:59:31Z

[SOLR-9269] Refactor the snapshot cleanup mechanism to rely on Lucene

The current snapshot cleanup mechanism is based on reference counting
the index files shared between multiple segments. Since this mechanism
completely skips the Lucene APIs, it is not portable (e.g. it does not
work on the 4.10.x versions).

This patch provides an alternate implementation which relies exclusively
on Lucene IndexWriter (+ IndexDeletionPolicy) for cleanup.
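The delegation described above can be pictured with a toy version of the snapshot/deletion-policy contract (illustrative Python, not the actual Lucene or Solr API): snapshotted commit points are simply excluded when old commits are purged, so no per-file reference counting is needed.

```python
# Toy sketch of a snapshot-aware deletion policy (NOT the Lucene API):
# instead of refcounting files shared between segments, the policy keeps
# any commit point that is currently snapshotted, plus the newest commit.
class ToySnapshotDeletionPolicy:
    def __init__(self):
        self._snapshots = set()

    def snapshot(self, commit):
        self._snapshots.add(commit)

    def release(self, commit):
        self._snapshots.discard(commit)

    def purgeable(self, commits):
        """Given commits oldest-to-newest, return the ones safe to delete."""
        keep = set(commits[-1:]) | self._snapshots
        return [c for c in commits if c not in keep]

policy = ToySnapshotDeletionPolicy()
policy.snapshot("commit-1")
print(policy.purgeable(["commit-1", "commit-2", "commit-3"]))  # ['commit-2']
```

Lucene's real SnapshotDeletionPolicy wraps another IndexDeletionPolicy and is installed via IndexWriterConfig, which is what makes this approach portable across index-format versions.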




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---




[jira] [Commented] (SOLR-9055) Make collection backup/restore extensible

2016-08-10 Thread Hrishikesh Gadre (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15416116#comment-15416116
 ] 

Hrishikesh Gadre commented on SOLR-9055:


[~varunthacker] [~markrmil...@gmail.com] Please take a look at this pull 
request. I think we should check the compatibility of the backup index version 
during the restore operation. But I am not quite sure about the compatibility 
guidelines for Solr (which include the Lucene index format + Solr config files + 
other collection-level metadata).

> Make collection backup/restore extensible
> -
>
> Key: SOLR-9055
> URL: https://issues.apache.org/jira/browse/SOLR-9055
> Project: Solr
>  Issue Type: Task
>Reporter: Hrishikesh Gadre
>Assignee: Mark Miller
> Attachments: SOLR-9055.patch, SOLR-9055.patch, SOLR-9055.patch
>
>
> SOLR-5750 implemented a backup/restore API for Solr. This JIRA is to track the 
> code cleanup/refactoring. Specifically, the following improvements should be made:
> - Add Solr/Lucene version to check the compatibility between the backup 
> version and the version of Solr on which it is being restored.
> - Add a backup implementation version to check the compatibility between the 
> "restore" implementation and backup format.
> - Introduce a Strategy interface to define how the Solr index data is backed 
> up (e.g. using file copy approach).
> - Introduce a Repository interface to define the file-system used to store 
> the backup data. (currently works only with local file system but can be 
> extended). This should be enhanced to introduce support for "registering" 
> repositories (e.g. HDFS, S3 etc.)
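The Strategy and Repository split proposed above can be sketched as two small interfaces (names and signatures are assumptions for illustration; they are not Solr's actual API):

```python
from abc import ABC, abstractmethod

# Assumed names for illustration only -- not Solr's actual interfaces.
class BackupRepository(ABC):
    """Where backup data lives (local FS, HDFS, S3, ...)."""
    @abstractmethod
    def write(self, path, data): ...
    @abstractmethod
    def read(self, path): ...

class BackupStrategy(ABC):
    """How index data is backed up (file copy, skip-index, ...)."""
    @abstractmethod
    def backup(self, index_files, repo, dest): ...

class InMemoryRepository(BackupRepository):
    def __init__(self):
        self.store = {}
    def write(self, path, data):
        self.store[path] = data
    def read(self, path):
        return self.store[path]

class CopyFilesStrategy(BackupStrategy):
    """File-copy approach; a snapshot-based strategy could be a no-op here."""
    def backup(self, index_files, repo, dest):
        for name, data in index_files.items():
            repo.write("%s/%s" % (dest, name), data)

repo = InMemoryRepository()
CopyFilesStrategy().backup({"segments_1": b"\x00"}, repo, "backup-1")
```

Registering new repositories (HDFS, S3) then only requires a new BackupRepository implementation, leaving the backup strategy untouched.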






[jira] [Commented] (SOLR-9055) Make collection backup/restore extensible

2016-08-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15416107#comment-15416107
 ] 

ASF GitHub Bot commented on SOLR-9055:
--

GitHub user hgadre opened a pull request:

https://github.com/apache/lucene-solr/pull/67

[SOLR-9055] Make collection backup/restore extensible

- Introduced a Strategy interface to define how the Solr index data is backed up.
- Two concrete implementations of this strategy interface are defined:
  - One uses the core Admin API (BACKUPCORE).
  - The other skips the backup of index data altogether. This is useful when
    the index data is copied via an external mechanism in combination with
    named snapshots (please refer to SOLR-9038 for details).
  - In the future we can add additional implementations of this interface
    (e.g. based on HDFS snapshots).
- Added a backup property to record the Solr version. This helps to check the
  compatibility of a backup with respect to the current version during the
  restore operation. The compatibility check itself is not added, since it's
  unclear what the Solr-level compatibility guidelines are, but at least having
  version information as part of the backup would be very useful.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/hgadre/lucene-solr SOLR-9055_fix

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/lucene-solr/pull/67.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #67


commit ee07e54a36989637c39b110f1cba19c8af14a0fb
Author: Hrishikesh Gadre 
Date:   2016-08-10T21:41:12Z

[SOLR-9055] Make collection backup/restore extensible

- Introduced a Strategy interface to define how the Solr index data is backed up.
- Two concrete implementations of this strategy interface are defined:
  - One uses the core Admin API (BACKUPCORE).
  - The other skips the backup of index data altogether. This is useful when
    the index data is copied via an external mechanism in combination with
    named snapshots (please refer to SOLR-9038 for details).
  - In the future we can add additional implementations of this interface
    (e.g. based on HDFS snapshots).
- Added a backup property to record the Solr version. This helps to check the
  compatibility of a backup with respect to the current version during the
  restore operation. The compatibility check itself is not added, since it's
  unclear what the Solr-level compatibility guidelines are, but at least having
  version information as part of the backup would be very useful.




> Make collection backup/restore extensible
> -
>
> Key: SOLR-9055
> URL: https://issues.apache.org/jira/browse/SOLR-9055
> Project: Solr
>  Issue Type: Task
>Reporter: Hrishikesh Gadre
>Assignee: Mark Miller
> Attachments: SOLR-9055.patch, SOLR-9055.patch, SOLR-9055.patch
>
>
> SOLR-5750 implemented a backup/restore API for Solr. This JIRA is to track the 
> code cleanup/refactoring. Specifically, the following improvements should be made:
> - Add Solr/Lucene version to check the compatibility between the backup 
> version and the version of Solr on which it is being restored.
> - Add a backup implementation version to check the compatibility between the 
> "restore" implementation and backup format.
> - Introduce a Strategy interface to define how the Solr index data is backed 
> up (e.g. using file copy approach).
> - Introduce a Repository interface to define the file-system used to store 
> the backup data. (currently works only with local file system but can be 
> extended). This should be enhanced to introduce support for "registering" 
> repositories (e.g. HDFS, S3 etc.)






[GitHub] lucene-solr pull request #67: [SOLR-9055] Make collection backup/restore ext...

2016-08-10 Thread hgadre
GitHub user hgadre opened a pull request:

https://github.com/apache/lucene-solr/pull/67

[SOLR-9055] Make collection backup/restore extensible

- Introduced a Strategy interface to define how the Solr index data is backed up.
- Two concrete implementations of this strategy interface are defined:
  - One uses the core Admin API (BACKUPCORE).
  - The other skips the backup of index data altogether. This is useful when
    the index data is copied via an external mechanism in combination with
    named snapshots (please refer to SOLR-9038 for details).
  - In the future we can add additional implementations of this interface
    (e.g. based on HDFS snapshots).
- Added a backup property to record the Solr version. This helps to check the
  compatibility of a backup with respect to the current version during the
  restore operation. The compatibility check itself is not added, since it's
  unclear what the Solr-level compatibility guidelines are, but at least having
  version information as part of the backup would be very useful.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/hgadre/lucene-solr SOLR-9055_fix

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/lucene-solr/pull/67.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #67


commit ee07e54a36989637c39b110f1cba19c8af14a0fb
Author: Hrishikesh Gadre 
Date:   2016-08-10T21:41:12Z

[SOLR-9055] Make collection backup/restore extensible

- Introduced a Strategy interface to define how the Solr index data is backed up.
- Two concrete implementations of this strategy interface are defined:
  - One uses the core Admin API (BACKUPCORE).
  - The other skips the backup of index data altogether. This is useful when
    the index data is copied via an external mechanism in combination with
    named snapshots (please refer to SOLR-9038 for details).
  - In the future we can add additional implementations of this interface
    (e.g. based on HDFS snapshots).
- Added a backup property to record the Solr version. This helps to check the
  compatibility of a backup with respect to the current version during the
  restore operation. The compatibility check itself is not added, since it's
  unclear what the Solr-level compatibility guidelines are, but at least having
  version information as part of the backup would be very useful.




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---




[JENKINS] Lucene-Solr-master-Linux (64bit/jdk1.8.0_102) - Build # 17525 - Still Unstable!

2016-08-10 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/17525/
Java: 64bit/jdk1.8.0_102 -XX:-UseCompressedOops -XX:+UseSerialGC

1 tests failed.
FAILED:  org.apache.solr.handler.TestSolrConfigHandlerConcurrent.test

Error Message:


Stack Trace:
java.lang.NullPointerException
at 
__randomizedtesting.SeedInfo.seed([89F354064A3C5DD5:1A76BDCE4C0302D]:0)
at 
org.apache.solr.handler.TestSolrConfigHandlerConcurrent.test(TestSolrConfigHandlerConcurrent.java:109)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:985)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:960)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)




Build Log:
[...truncated 12269 lines...]
   [junit4] Suite: org.apache.solr.handler.TestSolrConfigHandlerConcurrent
   [junit4]   2> Creating 

[JENKINS] Lucene-Solr-Tests-master - Build # 1338 - Unstable

2016-08-10 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-master/1338/

1 tests failed.
FAILED:  org.apache.solr.security.BasicAuthIntegrationTest.testBasics

Error Message:
expected:<200> but was:<403>

Stack Trace:
java.lang.AssertionError: expected:<200> but was:<403>
at 
__randomizedtesting.SeedInfo.seed([26B58018653B4F9F:1B6D2E345DD511EF]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at org.junit.Assert.assertEquals(Assert.java:456)
at 
org.apache.solr.security.BasicAuthIntegrationTest.executeCommand(BasicAuthIntegrationTest.java:230)
at 
org.apache.solr.security.BasicAuthIntegrationTest.doExtraTests(BasicAuthIntegrationTest.java:145)
at 
org.apache.solr.cloud.TestMiniSolrCloudClusterBase.testCollectionCreateSearchDelete(TestMiniSolrCloudClusterBase.java:196)
at 
org.apache.solr.cloud.TestMiniSolrCloudClusterBase.testBasics(TestMiniSolrCloudClusterBase.java:79)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 

[jira] [Updated] (LUCENE-7344) Deletion by query of uncommitted docs not working with DV updates

2016-08-10 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7344?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man updated LUCENE-7344:
-
Attachment: LUCENE-7344.patch


bq. I briefly reviewed the test, but not thoroughly (I intend to). However, 
notice that committing (hard/soft ; commit/NRT) completely avoids the problem 
because a commit/NRT already means flushing DV updates. So if that's what this 
test does, I don't think it's going to expose the problem.

Understood -- but I was trying to write a generally robust randomized test that 
could sometimes commit (in case that uncovered other problems), not just a test 
targeting this specific problem (we already have that).

Reading through the output, I realized the reason I wasn't seeing any failures 
even after many, many runs was that the spread of unique values in the DV 
field (Long.MIN_VALUE to Long.MAX_VALUE) was just too large relative to the size 
of the ranges I was using for the deletes.

I refactored the code into a helper method that is now called from multiple 
tests -- so {{testBiasedMixOfRandomUpdates}} should now be functionally 
equivalent to what it was before this patch, but now we also have new test 
methods like {{testBiasedMixOfRandomUpdatesWithNarrowValuesAndDeletes}} 
(-1000L to 1000L), {{testBiasedMixOfRandomUpdatesWithDeletesAndCommits}} (using 
the full spectrum of valid Longs), 
{{testBiasedMixOfRandomUpdatesWithCommitsAndLotsOfDeletes}} (using the full 
spectrum of valid Longs, but really hammering with lots of deletes), etc.

I also added more checks of the expected values using NRT readers 
periodically (every 100 ops).

It's now easy to get failures (and AFAICT they are failures due to the 
actual bug, not just silly test mistakes).



What this doesn't yet have (because it didn't occur to me, since it hasn't 
come up yet in how Solr is trying to use updateNumericDocValue()) is tests that 
interleave multiple updateNumericDocValue() calls that affect _multiple_ 
overlapping sets of docs with deleteDocuments() calls that affect _subsets_ of 
those documents (e.g. delete by a BooleanQuery that wraps a DocValuesRangeQuery 
and another mandatory clause) ... I'll try tackling that next.


> Deletion by query of uncommitted docs not working with DV updates
> -
>
> Key: LUCENE-7344
> URL: https://issues.apache.org/jira/browse/LUCENE-7344
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Ishan Chattopadhyaya
> Attachments: LUCENE-7344.patch, LUCENE-7344.patch, LUCENE-7344.patch, 
> LUCENE-7344.patch
>
>
> When DVs are updated, delete by query doesn't work with the updated DV value.
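The reported behavior can be illustrated with a toy simulation (plain Java, not Lucene code): a range delete that is evaluated only against committed docvalues misses a document whose buffered, unflushed update would match. All class, field, and method names below are invented for the illustration.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Set;
import java.util.TreeSet;

// Toy model of the LUCENE-7344 failure mode (NOT Lucene code): a delete-by-query
// that consults only committed docvalues does not see a buffered DV update.
public class DvUpdateDeleteSim {
    static Map<Integer, Long> committed = new HashMap<>(); // docId -> committed DV value
    static Map<Integer, Long> pending   = new HashMap<>(); // docId -> buffered DV update

    // Buggy behavior: the range query sees only the committed values.
    static Set<Integer> deleteByRangeBuggy(long lo, long hi) {
        Set<Integer> deleted = new TreeSet<>();
        for (Map.Entry<Integer, Long> e : committed.entrySet()) {
            if (e.getValue() >= lo && e.getValue() <= hi) deleted.add(e.getKey());
        }
        return deleted;
    }

    // Expected behavior: a pending update takes precedence over the committed value.
    static Set<Integer> deleteByRangeFixed(long lo, long hi) {
        Set<Integer> deleted = new TreeSet<>();
        for (Integer doc : committed.keySet()) {
            long v = pending.getOrDefault(doc, committed.get(doc));
            if (v >= lo && v <= hi) deleted.add(doc);
        }
        return deleted;
    }

    public static void main(String[] args) {
        committed.put(1, 5L);
        committed.put(2, 500L);
        pending.put(1, 100L); // doc 1 updated to 100, update not yet flushed
        System.out.println(deleteByRangeBuggy(50, 200)); // [] -- misses doc 1
        System.out.println(deleteByRangeFixed(50, 200)); // [1]
    }
}
```

This is why a commit (hard or soft) hides the bug in tests: committing flushes the pending updates, collapsing the two maps into one.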






[jira] [Commented] (SOLR-5944) Support updates of numeric DocValues

2016-08-10 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5944?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15415981#comment-15415981
 ] 

Hoss Man commented on SOLR-5944:


bq. I'd still suggest that instead of introducing yet another delay mechanism 
via the DebugFilter in JettySolrRunner, we should use the fault injection 
facilities that we already have in Solr.

I'm not sure how that would really work via TestInjection -- what Ishan's patch 
currently adds to DebugFilter allows for targeting specific requests (by 
sequence in the future) _on specific nodes_ with delays.  The way TestInjection 
is designed is to be used at very low levels in the (production) code as 
asserts and completely randomized based on static "odds" that test methods can 
set.

I have no idea how you would tell TestInjection "The HTTP request should be 
stalled for 10 seconds" and have that only apply to a _single_ specific node.



> Support updates of numeric DocValues
> 
>
> Key: SOLR-5944
> URL: https://issues.apache.org/jira/browse/SOLR-5944
> Project: Solr
>  Issue Type: New Feature
>Reporter: Ishan Chattopadhyaya
>Assignee: Shalin Shekhar Mangar
> Attachments: DUP.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> TestStressInPlaceUpdates.eb044ac71.beast-167-failure.stdout.txt, 
> TestStressInPlaceUpdates.eb044ac71.beast-587-failure.stdout.txt, 
> TestStressInPlaceUpdates.eb044ac71.failures.tar.gz, 
> hoss.62D328FA1DEA57FD.fail.txt, hoss.62D328FA1DEA57FD.fail2.txt, 
> hoss.62D328FA1DEA57FD.fail3.txt, hoss.D768DD9443A98DC.fail.txt, 
> hoss.D768DD9443A98DC.pass.txt
>
>
> LUCENE-5189 introduced support for updates to numeric docvalues. It would be 
> really nice to have Solr support this.






[JENKINS] Lucene-Solr-master-Windows (64bit/jdk1.8.0_102) - Build # 6040 - Still Unstable!

2016-08-10 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/6040/
Java: 64bit/jdk1.8.0_102 -XX:+UseCompressedOops -XX:+UseSerialGC

8 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.lucene.store.TestRAFDirectory

Error Message:
Could not remove the following files (in the order of attempts):
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\misc\test\J1\temp\lucene.store.TestRAFDirectory_D76D909DA49DB3CC-001\testThreadSafety-001:
 java.nio.file.AccessDeniedException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\misc\test\J1\temp\lucene.store.TestRAFDirectory_D76D909DA49DB3CC-001\testThreadSafety-001

C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\misc\test\J1\temp\lucene.store.TestRAFDirectory_D76D909DA49DB3CC-001:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\misc\test\J1\temp\lucene.store.TestRAFDirectory_D76D909DA49DB3CC-001
 

Stack Trace:
java.io.IOException: Could not remove the following files (in the order of 
attempts):
   
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\misc\test\J1\temp\lucene.store.TestRAFDirectory_D76D909DA49DB3CC-001\testThreadSafety-001:
 java.nio.file.AccessDeniedException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\misc\test\J1\temp\lucene.store.TestRAFDirectory_D76D909DA49DB3CC-001\testThreadSafety-001
   
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\misc\test\J1\temp\lucene.store.TestRAFDirectory_D76D909DA49DB3CC-001:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\misc\test\J1\temp\lucene.store.TestRAFDirectory_D76D909DA49DB3CC-001

at __randomizedtesting.SeedInfo.seed([D76D909DA49DB3CC]:0)
at org.apache.lucene.util.IOUtils.rm(IOUtils.java:323)
at 
org.apache.lucene.util.TestRuleTemporaryFilesCleanup.afterAlways(TestRuleTemporaryFilesCleanup.java:216)
at 
com.carrotsearch.randomizedtesting.rules.TestRuleAdapter$1.afterAlways(TestRuleAdapter.java:31)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:43)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)


FAILED:  org.apache.solr.cloud.ForceLeaderTest.testReplicasInLIRNoLeader

Error Message:
org.apache.solr.client.solrj.SolrServerException: No live SolrServers available 
to handle this 
request:[http://127.0.0.1:57948/forceleader_test_collection_shard1_replica1]

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: 
org.apache.solr.client.solrj.SolrServerException: No live SolrServers available 
to handle this 
request:[http://127.0.0.1:57948/forceleader_test_collection_shard1_replica1]
at 
__randomizedtesting.SeedInfo.seed([4559B5DD49F8D4E9:A3CE811D707A2D88]:0)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.directUpdate(CloudSolrClient.java:774)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1172)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:1061)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:997)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.sendDocsWithRetry(AbstractFullDistribZkTestBase.java:753)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.sendDocsWithRetry(AbstractFullDistribZkTestBase.java:741)
at 
org.apache.solr.cloud.ForceLeaderTest.sendDoc(ForceLeaderTest.java:424)
at 
org.apache.solr.cloud.ForceLeaderTest.assertSendDocFails(ForceLeaderTest.java:315)
at 
org.apache.solr.cloud.ForceLeaderTest.testReplicasInLIRNoLeader(ForceLeaderTest.java:110)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 

How hard would a "wipe all deletes" operation be?

2016-08-10 Thread Shawn Heisey
My question is in the context of Solr, but I think it would probably be
best implemented in Lucene, for the benefit of all Lucene-based
software.  I'm describing it here to decide whether I should raise an issue.

I'm after something that would simply rewrite any segment containing
deleted documents, without actually merging the segments.  It would be
*like* a merge, except that it would usually merge one segment to one
segment, instead of many to one.

If the deleted documents are evenly scattered across the whole index
(shard), simply doing forceMerge might be just as efficient, assuming
disk space is not a concern.  A use case with highly-bunched deletes and
a relatively large number of segments would only need to work on some of
the segments, and would complete faster.  I suspect that bunched deletes
are probably common in actual user indexes, at least for the ones where
most deletes are related to document updates.

I don't know what this operation would be called.  I can start the
bikeshedding with something like wipeDeletes.  Using expungeDeletes
would be awesome, but this name is already used as a parameter for
another operation, at least in Solr.

I can imagine two methods, one which has no arguments and one that takes
two float percentage thresholds.

For the second method, the thresholds would control what happens when the
space used by segments with deletes is above or below each threshold. 
The first threshold, which might be called "mergeThreshold", would merge
the segments with deletes into a single segment IF the space used by the
segments with deletes is less than or equal to that percentage of the
whole index.  The second threshold, which might be called
"forceMergeThreshold", would change the request into a forceMerge if the
amount of space used by the segments with deletes is greater than or
equal to that percentage of the whole index.

The no-arg method could go two ways:  Either it *only* rewrites segments
one to one (maybe calling the other method with Float.MIN_VALUE for both
arguments), or it assigns reasonable default values to the two
thresholds, perhaps 30 and 90 percent.
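The two-threshold proposal above can be sketched as a small dispatch function. The names wipeDeletes, mergeThreshold, and forceMergeThreshold are the ones floated in this thread, not an existing Lucene or Solr API; the logic below only illustrates the proposed decision, using the suggested 30/90 defaults.

```java
// Sketch of the proposed wipeDeletes dispatch. The operation and parameter
// names come from this thread's proposal; none of this is real Lucene API.
public class WipeDeletesSketch {
    enum Action { REWRITE_SEGMENTS_ONE_TO_ONE, MERGE_DELETED_SEGMENTS, FORCE_MERGE }

    /**
     * @param deletedSegmentsPct   percent of total index bytes held by segments
     *                             containing at least one deleted document
     * @param mergeThreshold       at or below this, merge affected segments into one
     * @param forceMergeThreshold  at or above this, escalate to a full forceMerge
     */
    static Action wipeDeletes(double deletedSegmentsPct,
                              double mergeThreshold,
                              double forceMergeThreshold) {
        if (deletedSegmentsPct >= forceMergeThreshold) {
            return Action.FORCE_MERGE;            // deletes everywhere: full optimize
        }
        if (deletedSegmentsPct <= mergeThreshold) {
            return Action.MERGE_DELETED_SEGMENTS; // few affected segments: merge just those
        }
        return Action.REWRITE_SEGMENTS_ONE_TO_ONE; // middle ground: rewrite in place
    }

    public static void main(String[] args) {
        // Using the 30 / 90 percent defaults suggested for the no-arg variant.
        System.out.println(wipeDeletes(10, 30, 90)); // MERGE_DELETED_SEGMENTS
        System.out.println(wipeDeletes(50, 30, 90)); // REWRITE_SEGMENTS_ONE_TO_ONE
        System.out.println(wipeDeletes(95, 30, 90)); // FORCE_MERGE
    }
}
```

With highly bunched deletes (the common update-heavy case described above), the input percentage stays small, so the cheap per-segment paths are taken and the hour-long forceMerge is avoided.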

On my dev server, optimizing a 33GB index shard takes over 3500 seconds
-- close to an hour.  I only do the optimize (forceMerge in Lucene) to
clean out deletes so they don't accumulate.  Any performance increase
that I obtain is a nice bonus -- not the reason for the optimize.

I would expect the operation I am describing here to take a fraction of
that time, if it is run on an index that has never been optimized.  My
TMP settings are roughly equivalent to a mergeFactor of 35.  I have the
potential for many segments.


<mergePolicy class="org.apache.lucene.index.TieredMergePolicy">
  <int name="maxMergeAtOnce">35</int>
  <int name="segmentsPerTier">35</int>
  <int name="maxMergeAtOnceExplicit">105</int>
</mergePolicy>


Most of my deletes are concentrated in the most recently added
documents.  Normal merging will eliminate some of them, and most of what
is left will be in the first tier of merged segments, which should be
pretty small.  Getting rid of deleted documents should be very efficient
on my indexes with this operation.

Thanks,
Shawn





[JENKINS] Lucene-Solr-master-Linux (32bit/jdk1.8.0_102) - Build # 17524 - Still Unstable!

2016-08-10 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/17524/
Java: 32bit/jdk1.8.0_102 -server -XX:+UseG1GC

2 tests failed.
FAILED:  
org.apache.solr.common.cloud.TestCollectionStateWatchers.testSimpleCollectionWatch

Error Message:
CollectionStateWatcher wasn't cleared after completion

Stack Trace:
java.lang.AssertionError: CollectionStateWatcher wasn't cleared after completion
at 
__randomizedtesting.SeedInfo.seed([971A591F836C3573:CA21966FC461AA4D]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.common.cloud.TestCollectionStateWatchers.testSimpleCollectionWatch(TestCollectionStateWatchers.java:117)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)


FAILED:  org.apache.solr.security.BasicAuthIntegrationTest.testBasics

Error Message:
IOException occured when talking to server at: 
http://127.0.0.1:43876/solr/testSolrCloudCollection_shard1_replica2


[JENKINS-EA] Lucene-Solr-6.x-Linux (64bit/jdk-9-ea+129) - Build # 1429 - Still Unstable!

2016-08-10 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/1429/
Java: 64bit/jdk-9-ea+129 -XX:-UseCompressedOops -XX:+UseSerialGC

1 tests failed.
FAILED:  org.apache.solr.cloud.LeaderFailoverAfterPartitionTest.test

Error Message:
org.apache.solr.client.solrj.SolrServerException: No live SolrServers available 
to handle this 
request:[http://127.0.0.1:39162/kjn/bw/c8n_1x3_lf_shard1_replica2]

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: 
org.apache.solr.client.solrj.SolrServerException: No live SolrServers available 
to handle this 
request:[http://127.0.0.1:39162/kjn/bw/c8n_1x3_lf_shard1_replica2]
at 
__randomizedtesting.SeedInfo.seed([8BB4FBC8D2E2F470:3E0C4127C1E9988]:0)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.directUpdate(CloudSolrClient.java:774)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1172)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:1061)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:997)
at org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1219)
at 
org.apache.solr.cloud.HttpPartitionTest.sendDoc(HttpPartitionTest.java:592)
at 
org.apache.solr.cloud.HttpPartitionTest.sendDoc(HttpPartitionTest.java:578)
at 
org.apache.solr.cloud.LeaderFailoverAfterPartitionTest.testRf3WithLeaderFailover(LeaderFailoverAfterPartitionTest.java:174)
at 
org.apache.solr.cloud.LeaderFailoverAfterPartitionTest.test(LeaderFailoverAfterPartitionTest.java:55)
at 
jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(java.base@9-ea/Native 
Method)
at 
jdk.internal.reflect.NativeMethodAccessorImpl.invoke(java.base@9-ea/NativeMethodAccessorImpl.java:62)
at 
jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(java.base@9-ea/DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(java.base@9-ea/Method.java:533)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:992)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:967)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)

[jira] [Updated] (SOLR-9092) Add safety checks to delete replica/shard/collection commands

2016-08-10 Thread Varun Thacker (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9092?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Thacker updated SOLR-9092:

Attachment: SOLR-9092.patch

I started to revisit this issue and took a quick crack at it.

The approach I've taken is: before sending the core UNLOAD command, verify that 
the core is part of a live node. If the node isn't part of live_nodes then the 
command would fail anyway, so this check won't break back-compat. The cluster 
state info still gets deleted. It will protect us from the second scenario 
mentioned on the ticket:

bq. Another situation where I saw this can be a problem - A second solr cluster 
cloned from the first but the script didn't correctly change the hostnames in 
the state.json file. When a delete command was issued against the second 
cluster Solr deleted the replica from the first cluster. In the above case the 
script was buggy obviously but if we verify against live_nodes then Solr 
wouldn't have gone ahead and deleted replicas not belonging to its cluster.

Just wanted to get some feedback before writing tests etc.
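For discussion's sake, the proposed check boils down to something like the
following sketch. This is hypothetical illustration code, not the attached
patch: the names (LiveNodeCheck, isDeletable) are made up, and the real patch
operates on the cluster state rather than a plain Set.

```java
import java.util.Set;

// Hypothetical sketch of the proposed safety check, not the actual patch:
// verify a replica's node against live_nodes before issuing core UNLOAD.
public class LiveNodeCheck {

    /** True only if the replica lives on a node this cluster currently knows. */
    public static boolean isDeletable(Set<String> liveNodes, String replicaNode) {
        return liveNodes.contains(replicaNode);
    }

    public static void main(String[] args) {
        Set<String> liveNodes = Set.of("192.168.1.100:8983_solr");

        // Node is live: safe to proceed with the UNLOAD command.
        System.out.println(isDeletable(liveNodes, "192.168.1.100:8983_solr"));

        // Node is down, or belongs to another cluster (the cloned-cluster
        // scenario above): fail fast instead of mutating cluster state first.
        System.out.println(isDeletable(liveNodes, "192.168.1.101:7574_solr"));
    }
}
```

Failing fast here keeps the state.json entry intact, so a retry of the delete
command can still succeed once the node comes back.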

> Add safety checks to delete replica/shard/collection commands
> -
>
> Key: SOLR-9092
> URL: https://issues.apache.org/jira/browse/SOLR-9092
> Project: Solr
>  Issue Type: Improvement
>Reporter: Varun Thacker
>Assignee: Varun Thacker
>Priority: Minor
> Attachments: SOLR-9092.patch
>
>
> We should verify the delete commands against live_nodes to make sure the API 
> can at least be executed correctly.
> If we have a two node cluster and a collection with 1 shard and 2 replicas, 
> call the delete replica command for the replica whose node is currently down.
> You get an exception:
> {code}
> <response>
>   <lst name="responseHeader">
>     <int name="status">0</int>
>     <int name="QTime">5173</int>
>   </lst>
>   <lst name="failure">
>     <str 
> name="192.168.1.101:7574_solr">org.apache.solr.client.solrj.SolrServerException:Server
>  refused connection at: http://192.168.1.101:7574/solr</str>
>   </lst>
> </response>
> {code}
> At this point the entry for the replica is gone from state.json. The client 
> application retries since an error was thrown, but the delete command will 
> never succeed now, and an error like this will be seen:
> {code}
> <response>
>   <lst name="responseHeader">
>     <int name="status">400</int>
>     <int name="QTime">137</int>
>   </lst>
>   <str 
> name="failure">org.apache.solr.common.SolrException:org.apache.solr.common.SolrException:
>  Invalid replica : core_node3 in shard/collection : shard1/gettingstarted 
> available replicas are core_node1</str>
>   <lst name="exception">
>     <str name="msg">Invalid replica : core_node3 in shard/collection : 
> shard1/gettingstarted available replicas are core_node1</str>
>     <int name="rspCode">400</int>
>   </lst>
>   <lst name="error">
>     <lst name="metadata">
>       <str name="error-class">org.apache.solr.common.SolrException</str>
>       <str name="root-error-class">org.apache.solr.common.SolrException</str>
>     </lst>
>     <str name="msg">Invalid replica : core_node3 in shard/collection : 
> shard1/gettingstarted available replicas are core_node1</str>
>     <int name="code">400</int>
>   </lst>
> </response>
> {code}
> For create collection/add-replica we check the "createNodeSet" and "node" 
> params respectively against live_nodes to make sure the command has a chance 
> of succeeding.
> We should add a check against live_nodes for the delete commands as well.
> Another situation where I saw this can be a problem - A second solr cluster 
> cloned from the first but the script didn't correctly change the hostnames in 
> the state.json file. When a delete command was issued against the second 
> cluster Solr deleted the replica from the first cluster.
> In the above case the script was obviously buggy, but if we verify against 
> live_nodes then Solr wouldn't have gone ahead and deleted replicas not 
> belonging to its cluster.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)




[JENKINS] Lucene-Solr-master-Linux (64bit/jdk1.8.0_102) - Build # 17523 - Unstable!

2016-08-10 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/17523/
Java: 64bit/jdk1.8.0_102 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  org.apache.solr.cloud.MultiThreadedOCPTest.test

Error Message:


Stack Trace:
java.lang.AssertionError
at 
__randomizedtesting.SeedInfo.seed([E404532D8FF401D0:6C506CF721086C28]:0)
at org.junit.Assert.fail(Assert.java:92)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertTrue(Assert.java:54)
at 
org.apache.solr.cloud.MultiThreadedOCPTest.testFillWorkQueue(MultiThreadedOCPTest.java:111)
at 
org.apache.solr.cloud.MultiThreadedOCPTest.test(MultiThreadedOCPTest.java:66)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:985)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:960)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)

[JENKINS] Lucene-Solr-NightlyTests-6.x - Build # 142 - Still Failing

2016-08-10 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-6.x/142/

1 tests failed.
FAILED:  org.apache.solr.cloud.CollectionsAPIDistributedZkTest.test

Error Message:
Captured an uncaught exception in thread: Thread[id=134092, name=collection1, 
state=RUNNABLE, group=TGRP-CollectionsAPIDistributedZkTest]

Stack Trace:
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught 
exception in thread: Thread[id=134092, name=collection1, state=RUNNABLE, 
group=TGRP-CollectionsAPIDistributedZkTest]
Caused by: 
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at https://127.0.0.1:34073: collection already exists: 
awholynewstresscollection_collection1_3
at __randomizedtesting.SeedInfo.seed([A64F024E870E254C]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:592)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:261)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:250)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:403)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:356)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1291)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:1061)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:997)
at org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1219)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.createCollection(AbstractFullDistribZkTestBase.java:1599)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.createCollection(AbstractFullDistribZkTestBase.java:1620)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest$1CollectionThread.run(CollectionsAPIDistributedZkTest.java:987)




Build Log:
[...truncated 11969 lines...]
   [junit4] Suite: org.apache.solr.cloud.CollectionsAPIDistributedZkTest
   [junit4]   2> Creating dataDir: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-6.x/solr/build/solr-core/test/J1/temp/solr.cloud.CollectionsAPIDistributedZkTest_A64F024E870E254C-001/init-core-data-001
   [junit4]   2> 2249274 INFO  
(SUITE-CollectionsAPIDistributedZkTest-seed#[A64F024E870E254C]-worker) [] 
o.a.s.SolrTestCaseJ4 Randomized ssl (true) and clientAuth (false) via: 
@org.apache.solr.util.RandomizeSSL(reason=, ssl=NaN, value=NaN, clientAuth=NaN)
   [junit4]   2> 2249275 INFO  
(SUITE-CollectionsAPIDistributedZkTest-seed#[A64F024E870E254C]-worker) [] 
o.a.s.BaseDistributedSearchTestCase Setting hostContext system property: /
   [junit4]   2> 2249290 INFO  
(TEST-CollectionsAPIDistributedZkTest.test-seed#[A64F024E870E254C]) [] 
o.a.s.c.ZkTestServer STARTING ZK TEST SERVER
   [junit4]   2> 2249293 INFO  (Thread-12227) [] o.a.s.c.ZkTestServer 
client port:0.0.0.0/0.0.0.0:0
   [junit4]   2> 2249293 INFO  (Thread-12227) [] o.a.s.c.ZkTestServer 
Starting server
   [junit4]   2> 2249393 INFO  
(TEST-CollectionsAPIDistributedZkTest.test-seed#[A64F024E870E254C]) [] 
o.a.s.c.ZkTestServer start zk server on port:37694
   [junit4]   2> 2249393 INFO  
(TEST-CollectionsAPIDistributedZkTest.test-seed#[A64F024E870E254C]) [] 
o.a.s.c.c.SolrZkClient Using default ZkCredentialsProvider
   [junit4]   2> 2249393 INFO  
(TEST-CollectionsAPIDistributedZkTest.test-seed#[A64F024E870E254C]) [] 
o.a.s.c.c.ConnectionManager Waiting for client to connect to ZooKeeper
   [junit4]   2> 2249395 INFO  (zkCallback-32724-thread-1) [] 
o.a.s.c.c.ConnectionManager Watcher 
org.apache.solr.common.cloud.ConnectionManager@58994e50 
name:ZooKeeperConnection Watcher:127.0.0.1:37694 got event WatchedEvent 
state:SyncConnected type:None path:null path:null type:None
   [junit4]   2> 2249395 INFO  
(TEST-CollectionsAPIDistributedZkTest.test-seed#[A64F024E870E254C]) [] 
o.a.s.c.c.ConnectionManager Client is connected to ZooKeeper
   [junit4]   2> 2249395 INFO  
(TEST-CollectionsAPIDistributedZkTest.test-seed#[A64F024E870E254C]) [] 
o.a.s.c.c.SolrZkClient Using default ZkACLProvider
   [junit4]   2> 2249395 INFO  
(TEST-CollectionsAPIDistributedZkTest.test-seed#[A64F024E870E254C]) [] 
o.a.s.c.c.SolrZkClient makePath: /solr
   [junit4]   2> 2249397 INFO  
(TEST-CollectionsAPIDistributedZkTest.test-seed#[A64F024E870E254C]) [] 
o.a.s.c.c.SolrZkClient Using default ZkCredentialsProvider
   [junit4]   2> 2249397 WARN  (NIOServerCxn.Factory:0.0.0.0/0.0.0.0:0) [] 
o.a.z.s.NIOServerCnxn caught end of stream exception
   [junit4]   2> EndOfStreamException: Unable to read additional data from 
client sessionid 0x15674ff350f, likely client has closed socket
   [junit4]   2>at 
org.apache.zookeeper.server.NIOServerCnxn.doIO(NIOServerCnxn.java:228)
   

[JENKINS] Lucene-Solr-6.x-Linux (64bit/jdk1.8.0_102) - Build # 1428 - Unstable!

2016-08-10 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/1428/
Java: 64bit/jdk1.8.0_102 -XX:+UseCompressedOops -XX:+UseSerialGC

1 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.schema.TestManagedSchemaAPI

Error Message:
ObjectTracker found 10 object(s) that were not released!!! 
[MockDirectoryWrapper, MockDirectoryWrapper, TransactionLog, 
MockDirectoryWrapper, TransactionLog, MockDirectoryWrapper, 
MDCAwareThreadPoolExecutor, MockDirectoryWrapper, MockDirectoryWrapper, 
MDCAwareThreadPoolExecutor]

Stack Trace:
java.lang.AssertionError: ObjectTracker found 10 object(s) that were not 
released!!! [MockDirectoryWrapper, MockDirectoryWrapper, TransactionLog, 
MockDirectoryWrapper, TransactionLog, MockDirectoryWrapper, 
MDCAwareThreadPoolExecutor, MockDirectoryWrapper, MockDirectoryWrapper, 
MDCAwareThreadPoolExecutor]
at __randomizedtesting.SeedInfo.seed([86CE0CA0EA55C638]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertNull(Assert.java:551)
at 
org.apache.solr.SolrTestCaseJ4.teardownTestCases(SolrTestCaseJ4.java:258)
at sun.reflect.GeneratedMethodAccessor14.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:834)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)




Build Log:
[...truncated 11447 lines...]
   [junit4] Suite: org.apache.solr.schema.TestManagedSchemaAPI
   [junit4]   2> Creating dataDir: 
/home/jenkins/workspace/Lucene-Solr-6.x-Linux/solr/build/solr-core/test/J2/temp/solr.schema.TestManagedSchemaAPI_86CE0CA0EA55C638-001/init-core-data-001
   [junit4]   2> 816297 INFO  
(SUITE-TestManagedSchemaAPI-seed#[86CE0CA0EA55C638]-worker) [] 
o.a.s.SolrTestCaseJ4 Randomized ssl (true) and clientAuth (true) via: 
@org.apache.solr.util.RandomizeSSL(reason=, ssl=NaN, value=NaN, clientAuth=NaN)
   [junit4]   2> 816299 INFO  
(SUITE-TestManagedSchemaAPI-seed#[86CE0CA0EA55C638]-worker) [] 
o.a.s.c.ZkTestServer STARTING ZK TEST SERVER
   [junit4]   2> 816299 INFO  (Thread-736) [] o.a.s.c.ZkTestServer client 
port:0.0.0.0/0.0.0.0:0
   [junit4]   2> 816299 INFO  (Thread-736) [] o.a.s.c.ZkTestServer Starting 
server
   [junit4]   2> 816399 INFO  
(SUITE-TestManagedSchemaAPI-seed#[86CE0CA0EA55C638]-worker) [] 
o.a.s.c.ZkTestServer start zk server on port:42899
   [junit4]   2> 816399 INFO  
(SUITE-TestManagedSchemaAPI-seed#[86CE0CA0EA55C638]-worker) [] 
o.a.s.c.c.SolrZkClient Using default ZkCredentialsProvider
   [junit4]   2> 816400 INFO  
(SUITE-TestManagedSchemaAPI-seed#[86CE0CA0EA55C638]-worker) [] 
o.a.s.c.c.ConnectionManager Waiting for client to connect to ZooKeeper
   [junit4]   2> 816401 INFO  (zkCallback-322-thread-1) [] 
o.a.s.c.c.ConnectionManager Watcher 
org.apache.solr.common.cloud.ConnectionManager@8fae5a0 name:ZooKeeperConnection 

[jira] [Resolved] (SOLR-8566) umbrella ticket: various initialCapacity tweaks (Fix Versions: trunk/master 5.5)

2016-08-10 Thread Christine Poerschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8566?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Christine Poerschke resolved SOLR-8566.
---
Resolution: Fixed

> umbrella ticket: various initialCapacity tweaks (Fix Versions: trunk/master 
> 5.5)
> 
>
> Key: SOLR-8566
> URL: https://issues.apache.org/jira/browse/SOLR-8566
> Project: Solr
>  Issue Type: Task
>Reporter: Christine Poerschke
>Priority: Minor
> Fix For: 6.0, 5.5
>
> Attachments: ManagedResource-buildMapToStore.patch, 
> SyncStrategy-syncWithReplicas.patch, TransformerFactory-defaultFactories.patch
>
>
> Everyone is welcome to use/reference this ticket to make small, 
> uncontroversial initialCapacity changes.






[jira] [Closed] (SOLR-9336) ReRankQuery.(hashCode|equalsTo) to consider length

2016-08-10 Thread Christine Poerschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9336?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Christine Poerschke closed SOLR-9336.
-
Resolution: Not A Problem

Resolving in favour of SOLR-9331 change.

> ReRankQuery.(hashCode|equalsTo) to consider length
> --
>
> Key: SOLR-9336
> URL: https://issues.apache.org/jira/browse/SOLR-9336
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
>Priority: Minor
> Attachments: SOLR-9336.patch
>
>







[jira] [Commented] (SOLR-9331) Can we remove ReRankQuery's length constructor argument?

2016-08-10 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9331?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15415448#comment-15415448
 ] 

ASF subversion and git services commented on SOLR-9331:
---

Commit bc25a565d23a7f791272be02685e71217234704b in lucene-solr's branch 
refs/heads/master from [~cpoerschke]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=bc25a56 ]

SOLR-9331: Remove ReRankQuery's length constructor argument and member.


> Can we remove ReRankQuery's length constructor argument?
> 
>
> Key: SOLR-9331
> URL: https://issues.apache.org/jira/browse/SOLR-9331
> Project: Solr
>  Issue Type: Wish
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
>Priority: Minor
> Attachments: SOLR-9331.patch, SOLR-9331.patch
>
>
> Can we remove ReRankQuery's length constructor argument? It is a 
> ReRankQParserPlugin private class.
> proposed patch summary:
> * change ReRankQuery.getTopDocsCollector to use its len argument (instead of 
> the length member)
> * remove ReRankQuery's length member and constructor argument
> * remove ReRankQParser.parse's use of the rows and start parameters
> motivation: towards ReRankQParserPlugin and LTRQParserPlugin (SOLR-8542) 
> sharing (more) code






[jira] [Commented] (SOLR-9142) Improve JSON nested facets efficiency

2016-08-10 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9142?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15415431#comment-15415431
 ] 

David Smiley commented on SOLR-9142:


I filed SOLR-9404 to do the simple refactorings.  I'll add a modicum of 
javadocs too.

[~yo...@apache.org] I looked at your refactoring in SOLR-7452 and it appears it 
won't conflict with SOLR-9404 or this as it doesn't touch FacetFieldProcessor 
-- at least not yet.

> Improve JSON nested facets efficiency
> -
>
> Key: SOLR-9142
> URL: https://issues.apache.org/jira/browse/SOLR-9142
> Project: Solr
>  Issue Type: Bug
>Reporter: Varun Thacker
>
> I indexed a dataset of 2M docs.
> {{top_facet_s}} has a cardinality of 1000 and is the top-level facet.
> For nested facets it has two fields, {{sub_facet_unique_s}} and 
> {{sub_facet_unique_td}}, which are string and double and have cardinality 2M.
> The nested query for the double field always returns around the 1s mark. The 
> nested query for the string field takes roughly 10s to execute.
> {code:title=nested string facet|borderStyle=solid}
> q=*:*&rows=0&json.facet=
>   {
>   "top_facet_s": {
>   "type": "terms",
>   "limit": -1,
>   "field": "top_facet_s",
>   "mincount": 1,
>   "excludeTags": "ANY",
>   "facet": {
>   "sub_facet_unique_s": {
>   "type": "terms",
>   "limit": 1,
>   "field": "sub_facet_unique_s",
>   "mincount": 1
>   }
>   }
>   }
>   }
> {code}
> {code:title=nested double facet|borderStyle=solid}
> q=*:*&rows=0&json.facet=
>   {
>   "top_facet_s": {
>   "type": "terms",
>   "limit": -1,
>   "field": "top_facet_s",
>   "mincount": 1,
>   "excludeTags": "ANY",
>   "facet": {
>   "sub_facet_unique_s": {
>   "type": "terms",
>   "limit": 1,
>   "field": "sub_facet_unique_td",
>   "mincount": 1
>   }
>   }
>   }
>   }
> {code}
> I tried to dig deeper to understand why are string nested faceting that slow 
> compared to numeric field
> Since the top facet has a cardinality of 1000 we have to calculate sub facets 
> on each of them. Now the key difference was in the implementation of the two .
> For the string field, In {{FacetField#getFieldCacheCounts}} we call 
> {{createCollectAcc}} with nDocs=0 and numSlots=2M . This then initializes an 
> array of 2M. So we create a 2M array 1000 times for this one query which from 
> what I understand makes this query slow.
> For numeric fields, {{FacetFieldProcessorNumeric#calcFacets}} uses a 
> CountSlotAcc which doesn't allocate a huge array. In this query it calls 
> {{createCollectAcc}} with numDocs=2k and numSlots=1024.
> In string faceting, we create the 2M array because the cardinality is 2M and 
> we use the array position as the ordinal and the value as the count. Could we 
> improve on this to speed things up significantly? For sub-facets we know the 
> cardinality can be at most the top-level bucket count.
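To make the memory tradeoff concrete, here is a toy Java sketch (not Solr code; class and method names are illustrative only) contrasting a per-ordinal count array sized to the field's cardinality with a hash map sized to the bucket's contents:

```java
import java.util.HashMap;
import java.util.Map;

/**
 * Toy model of the two counting strategies discussed above. "Ordinals"
 * stand in for string-field ords; FIELD_CARDINALITY for the 2M unique
 * values. None of these names are real Solr APIs.
 */
public class SlotCountingDemo {

    static final int FIELD_CARDINALITY = 2_000_000;

    // Array-based accumulation: one slot per possible ordinal, like the
    // 2M-entry array allocated per top-level bucket in string faceting.
    static int[] countWithArray(int[] ordsInBucket) {
        int[] counts = new int[FIELD_CARDINALITY]; // huge even for tiny buckets
        for (int ord : ordsInBucket) {
            counts[ord]++;
        }
        return counts;
    }

    // Hash-based accumulation: memory proportional to the bucket, not the field.
    static Map<Integer, Integer> countWithHash(int[] ordsInBucket) {
        Map<Integer, Integer> counts = new HashMap<>();
        for (int ord : ordsInBucket) {
            counts.merge(ord, 1, Integer::sum);
        }
        return counts;
    }

    public static void main(String[] args) {
        int[] bucket = {7, 7, 123_456}; // a "bucket" holding just three docs
        int[] byArray = countWithArray(bucket);
        Map<Integer, Integer> byHash = countWithHash(bucket);
        System.out.println(byArray[7] + " " + byHash.get(7));        // 2 2
        System.out.println(byArray.length + " vs " + byHash.size()); // 2000000 vs 2
    }
}
```

Both strategies produce identical counts; the difference is that the array allocation is paid 1000 times (once per top-level bucket) regardless of how few docs each bucket holds.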



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-9404) JSON FacetFieldProcessor subclass rename/moves

2016-08-10 Thread David Smiley (JIRA)
David Smiley created SOLR-9404:
--

 Summary: JSON FacetFieldProcessor subclass rename/moves
 Key: SOLR-9404
 URL: https://issues.apache.org/jira/browse/SOLR-9404
 Project: Solr
  Issue Type: Task
  Security Level: Public (Default Security Level. Issues are Public)
  Components: Facet Module
Reporter: David Smiley
Assignee: David Smiley


... spinoff of my comment on 
https://issues.apache.org/jira/browse/SOLR-9142?focusedCommentId=15408535&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15408535
 ...


* taste: the fact that some FFPs are declared within FacetField.java and some 
are top-level is bad IMO; they should all be top-level once any subclasses 
start becoming so.
* FFPFCBase:  This is basically the base class for _array based_ accumulator 
implementations -- i.e. direct slot/value accumulators.  I suggest renaming it 
FFPArray.  It can handle terms (strings) -- not numbers directly, but numbers 
encoded as terms -- and is multi-valued capable.
* FFPDV: Rename to FFPArrayDV: accesses terms from DocValues
* FFPUIF: Rename to FFPArrayUIF: accesses terms via UIF, kind of a pseudo-DV
* FFPNumeric: Rename to FFPHashDV:  Currently this is expressly for 
single-valued numeric DocValues.  _In SOLR-9142 (not here) I intend to make 
this generic to handle terms by global ordinal._
* FFPStream: Rename to FFPEnumTerms:  This does enumeration (not hash or array 
accumulation), and it gets data from Terms.  Perhaps Stream could also go in 
the name, but I think Enum is more pertinent.  One day, once we have 
PointValues in Solr, we might add a FFPEnumPoints.  Note that such a thing 
wouldn't stream, since that API is callback-based rather than iterator-style.






[jira] [Commented] (SOLR-9142) Improve JSON nested facets efficiency

2016-08-10 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9142?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15415406#comment-15415406
 ] 

Yonik Seeley commented on SOLR-9142:


Yep, lots of refactorings still make sense (which is why, at the Java level, 
I've been considering the entire thing *experimental*).

I've also done refactoring as part of SOLR-7452; it probably makes sense for me 
to get those refactorings committed, to try to minimize potential collisions. 

> Improve JSON nested facets efficiency
> -
>
> Key: SOLR-9142
> URL: https://issues.apache.org/jira/browse/SOLR-9142
> Project: Solr
>  Issue Type: Bug
>Reporter: Varun Thacker
>
> I indexed a dataset of 2M docs.
> {{top_facet_s}}, the top-level facet field, has a cardinality of 1000.
> For nested facets there are two fields, {{sub_facet_unique_s}} and 
> {{sub_facet_unique_td}} (string and double respectively), each with cardinality 2M.
> The nested query for the double field consistently returns in about 1 second; 
> the nested query for the string field takes roughly 10 seconds to execute.
> {code:title=nested string facet|borderStyle=solid}
> q=*:*&rows=0&json.facet=
>   {
>   "top_facet_s": {
>   "type": "terms",
>   "limit": -1,
>   "field": "top_facet_s",
>   "mincount": 1,
>   "excludeTags": "ANY",
>   "facet": {
>   "sub_facet_unique_s": {
>   "type": "terms",
>   "limit": 1,
>   "field": "sub_facet_unique_s",
>   "mincount": 1
>   }
>   }
>   }
>   }
> {code}
> {code:title=nested double facet|borderStyle=solid}
> q=*:*&rows=0&json.facet=
>   {
>   "top_facet_s": {
>   "type": "terms",
>   "limit": -1,
>   "field": "top_facet_s",
>   "mincount": 1,
>   "excludeTags": "ANY",
>   "facet": {
>   "sub_facet_unique_s": {
>   "type": "terms",
>   "limit": 1,
>   "field": "sub_facet_unique_td",
>   "mincount": 1
>   }
>   }
>   }
>   }
> {code}
> I tried to dig deeper to understand why string nested faceting is so slow 
> compared to the numeric field.
> Since the top facet has a cardinality of 1000 we have to calculate sub-facets 
> on each of them. The key difference is in the implementations of the two.
> For the string field, in {{FacetField#getFieldCacheCounts}} we call 
> {{createCollectAcc}} with nDocs=0 and numSlots=2M. This then initializes an 
> array of 2M entries. So we create a 2M-entry array 1000 times for this one 
> query, which from what I understand is what makes it slow.
> For numeric fields, {{FacetFieldProcessorNumeric#calcFacets}} uses a 
> CountSlotAcc which doesn't allocate a huge array. In this query it calls 
> {{createCollectAcc}} with numDocs=2k and numSlots=1024.
> In string faceting, we create the 2M array because the cardinality is 2M and 
> we use the array position as the ordinal and the value as the count. Could we 
> improve on this to speed things up significantly? For sub-facets we know the 
> cardinality can be at most the top-level bucket count.






[jira] [Resolved] (SOLR-9397) config API does not support adding/removing cache

2016-08-10 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9397?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul resolved SOLR-9397.
--
   Resolution: Fixed
Fix Version/s: master (7.0)
   6.2

> config API does not support adding/removing cache
> -
>
> Key: SOLR-9397
> URL: https://issues.apache.org/jira/browse/SOLR-9397
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Noble Paul
>Assignee: Noble Paul
> Fix For: 6.2, master (7.0)
>
> Attachments: SOLR-9397.patch
>
>
> example command
> {code}
> {"add-cache" : {"name":"lfuCacheDecayFalse", "class":"solr.search.LFUCache", 
> "size":10, "initialSize":9, "timeDecay":false }}
> {code}






[jira] [Updated] (SOLR-9331) Can we remove ReRankQuery's length constructor argument?

2016-08-10 Thread Christine Poerschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9331?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Christine Poerschke updated SOLR-9331:
--
Attachment: SOLR-9331.patch

Previous patch rebased against latest master with conflicts resolved.

> Can we remove ReRankQuery's length constructor argument?
> 
>
> Key: SOLR-9331
> URL: https://issues.apache.org/jira/browse/SOLR-9331
> Project: Solr
>  Issue Type: Wish
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
>Priority: Minor
> Attachments: SOLR-9331.patch, SOLR-9331.patch
>
>
> Can we remove ReRankQuery's length constructor argument? ReRankQuery is a 
> private class of ReRankQParserPlugin.
> proposed patch summary:
> * change ReRankQuery.getTopDocsCollector to use its len argument (instead of 
> the length member)
> * remove ReRankQuery's length member and constructor argument
> * remove ReRankQParser.parse's use of the rows and start parameters
> motivation: towards ReRankQParserPlugin and LTRQParserPlugin (SOLR-8542) 
> sharing (more) code






[jira] [Updated] (SOLR-9397) config API does not support adding/removing cache

2016-08-10 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9397?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul updated SOLR-9397:
-
Description: 
example command

{code}
{"add-cache" : {"name":"lfuCacheDecayFalse", "class":"solr.search.LFUCache", 
"size":10, "initialSize":9, "timeDecay":false }}
{code}

> config API does not support adding/removing cache
> -
>
> Key: SOLR-9397
> URL: https://issues.apache.org/jira/browse/SOLR-9397
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Noble Paul
>Assignee: Noble Paul
> Attachments: SOLR-9397.patch
>
>
> example command
> {code}
> {"add-cache" : {"name":"lfuCacheDecayFalse", "class":"solr.search.LFUCache", 
> "size":10, "initialSize":9, "timeDecay":false }}
> {code}






[jira] [Commented] (SOLR-9397) config API does not support adding/removing cache

2016-08-10 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9397?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15415392#comment-15415392
 ] 

ASF subversion and git services commented on SOLR-9397:
---

Commit 64c99293d7d73c798c794cc647cf19636f62b2d6 in lucene-solr's branch 
refs/heads/master from [~noble.paul]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=64c9929 ]

SOLR-9397: Config API does not support adding caches


> config API does not support adding/removing cache
> -
>
> Key: SOLR-9397
> URL: https://issues.apache.org/jira/browse/SOLR-9397
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Noble Paul
>Assignee: Noble Paul
> Attachments: SOLR-9397.patch
>
>







[jira] [Commented] (SOLR-9397) config API does not support adding/removing cache

2016-08-10 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9397?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15415389#comment-15415389
 ] 

ASF subversion and git services commented on SOLR-9397:
---

Commit 74b470a7a9516a799ec8f28fac579c461fc8e286 in lucene-solr's branch 
refs/heads/branch_6x from [~noble.paul]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=74b470a ]

SOLR-9397: Config API does not support adding caches


> config API does not support adding/removing cache
> -
>
> Key: SOLR-9397
> URL: https://issues.apache.org/jira/browse/SOLR-9397
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Noble Paul
>Assignee: Noble Paul
> Attachments: SOLR-9397.patch
>
>







[jira] [Resolved] (SOLR-9385) add QParser.getParser(String,SolrQueryRequest) variant

2016-08-10 Thread Christine Poerschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9385?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Christine Poerschke resolved SOLR-9385.
---
   Resolution: Fixed
Fix Version/s: 6.x
   master (7.0)

> add QParser.getParser(String,SolrQueryRequest) variant
> --
>
> Key: SOLR-9385
> URL: https://issues.apache.org/jira/browse/SOLR-9385
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
>Priority: Minor
> Fix For: master (7.0), 6.x
>
> Attachments: SOLR-9385.patch, SOLR-9385.patch
>
>
> For a relative majority (~32) of callers, this variant eliminates the "What 
> do I pass in for the default (since I do not have one)?" question, compared 
> to a relative minority (~19) of callers that pass in a default other than the 
> default.
> (Noticed as part of SOLR-8542 work.)






[jira] [Commented] (LUCENE-7344) Deletion by query of uncommitted docs not working with DV updates

2016-08-10 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7344?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15415339#comment-15415339
 ] 

Yonik Seeley commented on LUCENE-7344:
--

bq. Here, "doc-1" should not be deleted, because the DBQ is submitted before 
the DV update, but because we resolve all DV updates before DBQ (in this 
patch), it ends up deleted.

For Solr, this can never happen...  All updates are blocked during a DBQ, and 
then we re-open the reader before releasing the lock.
All we need at the Solr level is to resolve DV updates *before* resolving 
deletes (since we know that the DBQ will always be last).

> Deletion by query of uncommitted docs not working with DV updates
> -
>
> Key: LUCENE-7344
> URL: https://issues.apache.org/jira/browse/LUCENE-7344
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Ishan Chattopadhyaya
> Attachments: LUCENE-7344.patch, LUCENE-7344.patch, LUCENE-7344.patch
>
>
> When DVs are updated, delete by query doesn't work with the updated DV value.






[jira] [Comment Edited] (LUCENE-7344) Deletion by query of uncommitted docs not working with DV updates

2016-08-10 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7344?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15415339#comment-15415339
 ] 

Yonik Seeley edited comment on LUCENE-7344 at 8/10/16 2:29 PM:
---

bq. Here, "doc-1" should not be deleted, because the DBQ is submitted before 
the DV update, but because we resolve all DV updates before DBQ (in this 
patch), it ends up deleted.

For Solr, this can never happen (in solr-cloud mode at least)...  All updates 
are blocked during a DBQ, and then we re-open the reader before releasing the 
lock.
All we need at the Solr level is to resolve DV updates *before* resolving 
deletes (since we know that the DBQ will always be last).


was (Author: ysee...@gmail.com):
bq. Here, "doc-1" should not be deleted, because the DBQ is submitted before 
the DV update, but because we resolve all DV updates before DBQ (in this 
patch), it ends up deleted.

For Solr, this can never happen...  All updates are blocked during a DBQ, and 
then we re-open the reader before releasing the lock.
All we need at the Solr level is to resolve DV updates *before* resolving 
deletes (since we know that the DBQ will always be last).

> Deletion by query of uncommitted docs not working with DV updates
> -
>
> Key: LUCENE-7344
> URL: https://issues.apache.org/jira/browse/LUCENE-7344
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Ishan Chattopadhyaya
> Attachments: LUCENE-7344.patch, LUCENE-7344.patch, LUCENE-7344.patch
>
>
> When DVs are updated, delete by query doesn't work with the updated DV value.






[jira] [Commented] (SOLR-5944) Support updates of numeric DocValues

2016-08-10 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5944?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15415326#comment-15415326
 ] 

Yonik Seeley commented on SOLR-5944:


bq. Document that when inplace updates are configured/requested, hard commits 
may happen

s/hard/soft commits?

Thinking about solr-cloud mode only (and the DBQ issue w/ updates that don't 
change the internal doc id): solr already blocks updates, issues the DBQ, and 
then forces open a new realtime searcher.
The easiest correctness workaround would perhaps just be to force open a 
realtime searcher before the DBQ as well.

Basically, in DUH2.deleteByQuery():
{code}
synchronized (solrCoreState.getUpdateLock()) {
  ulog.openRealtimeSearcher();
{code}

If Lucene could tell us whether there were any unresolved updates (of the 
variety that change the doc without changing its internal docid), then we 
could conditionally call that.

Lucene throwing an exception may be a non-starter... at the point in time that 
a DBQ is issued, there may not be any numeric updates?
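As a toy illustration of the ordering argument (a simplified model, not Solr's actual update log; all names here are made up), resolving in-place DV updates before the delete-by-query means the DBQ matches against the updated values:

```java
import java.util.LinkedHashMap;
import java.util.Map;

/**
 * Toy model: documents are id -> numeric docValue. Pending in-place DV
 * updates are resolved first, then the delete-by-query runs against the
 * updated values (since, in Solr, the DBQ is always ordered last).
 */
public class DbqOrderingDemo {

    static Map<String, Integer> resolve(Map<String, Integer> docs,
                                        Map<String, Integer> pendingDvUpdates,
                                        int deleteIfBelow) {
        Map<String, Integer> result = new LinkedHashMap<>(docs);
        // Step 1: resolve in-place DV updates first.
        result.putAll(pendingDvUpdates);
        // Step 2: then apply the DBQ ("value < deleteIfBelow") to updated values.
        result.values().removeIf(v -> v < deleteIfBelow);
        return result;
    }

    public static void main(String[] args) {
        Map<String, Integer> docs = new LinkedHashMap<>();
        docs.put("doc-1", 1);
        docs.put("doc-2", 9);
        // doc-1's value is bumped to 10 before the DBQ "value < 5" runs,
        // so it survives; resolving in the other order would delete it.
        Map<String, Integer> after = resolve(docs, Map.of("doc-1", 10), 5);
        System.out.println(after.keySet()); // [doc-1, doc-2]
    }
}
```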

> Support updates of numeric DocValues
> 
>
> Key: SOLR-5944
> URL: https://issues.apache.org/jira/browse/SOLR-5944
> Project: Solr
>  Issue Type: New Feature
>Reporter: Ishan Chattopadhyaya
>Assignee: Shalin Shekhar Mangar
> Attachments: DUP.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> TestStressInPlaceUpdates.eb044ac71.beast-167-failure.stdout.txt, 
> TestStressInPlaceUpdates.eb044ac71.beast-587-failure.stdout.txt, 
> TestStressInPlaceUpdates.eb044ac71.failures.tar.gz, 
> hoss.62D328FA1DEA57FD.fail.txt, hoss.62D328FA1DEA57FD.fail2.txt, 
> hoss.62D328FA1DEA57FD.fail3.txt, hoss.D768DD9443A98DC.fail.txt, 
> hoss.D768DD9443A98DC.pass.txt
>
>
> LUCENE-5189 introduced support for updates to numeric docvalues. It would be 
> really nice to have Solr support this.






[jira] [Commented] (SOLR-5944) Support updates of numeric DocValues

2016-08-10 Thread Shalin Shekhar Mangar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5944?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15415219#comment-15415219
 ] 

Shalin Shekhar Mangar commented on SOLR-5944:
-

I hadn't had a chance to review the patch after Ishan incorporated my last few 
review comments. So I took a look again. 

# Hoss's suggestion of 
`bucket.wait(waitTimeout.timeLeft(TimeUnit.MILLISECONDS));` instead of 
`wait(5000)` is much better and should be implemented.
# Under no circumstances should we be calling `forceUpdateCollection` in the 
indexing code path. It is just too dangerous in the face of high indexing 
rates. Instead we should do what the DUP is already doing, i.e. calling 
getLeaderRetry to figure out the current leader. If the current replica is 
partitioned from the leader then we have other mechanisms for taking care of 
it, and the replica has no business trying to determine this.
# I'd still suggest that instead of introducing yet another delay mechanism via 
the DebugFilter in JettySolrRunner, we should use the fault injection 
facilities that we already have in Solr.
# The executor service in the new tests should be shut down in a finally 
clause. You can use ExecutorUtil.shutdownAndAwaitTermination for that.

Hoss has given some excellent comments on the new tests that you added. Please 
incorporate them into your next patch.
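For the executor-shutdown point above, a minimal standard-library sketch of shutting an executor down in a finally clause (Solr's ExecutorUtil.shutdownAndAwaitTermination wraps this pattern; the code below is an illustration, not code from the patch):

```java
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;

/** Illustrative only: run tasks, then guarantee shutdown in a finally clause. */
public class ExecutorShutdownDemo {

    public static int runTasks() throws Exception {
        ExecutorService executor = Executors.newFixedThreadPool(4);
        try {
            List<Callable<Integer>> tasks = List.of(() -> 1, () -> 2, () -> 3);
            int sum = 0;
            for (Future<Integer> f : executor.invokeAll(tasks)) {
                sum += f.get();
            }
            return sum;
        } finally {
            // The finally clause guarantees shutdown even if a task failed.
            executor.shutdown();
            if (!executor.awaitTermination(10, TimeUnit.SECONDS)) {
                executor.shutdownNow();
            }
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(runTasks()); // 6
    }
}
```

Without the finally clause, a test that throws mid-way leaks threads, which the test framework's thread-leak checker then reports.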

> Support updates of numeric DocValues
> 
>
> Key: SOLR-5944
> URL: https://issues.apache.org/jira/browse/SOLR-5944
> Project: Solr
>  Issue Type: New Feature
>Reporter: Ishan Chattopadhyaya
>Assignee: Shalin Shekhar Mangar
> Attachments: DUP.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> TestStressInPlaceUpdates.eb044ac71.beast-167-failure.stdout.txt, 
> TestStressInPlaceUpdates.eb044ac71.beast-587-failure.stdout.txt, 
> TestStressInPlaceUpdates.eb044ac71.failures.tar.gz, 
> hoss.62D328FA1DEA57FD.fail.txt, hoss.62D328FA1DEA57FD.fail2.txt, 
> hoss.62D328FA1DEA57FD.fail3.txt, hoss.D768DD9443A98DC.fail.txt, 
> hoss.D768DD9443A98DC.pass.txt
>
>
> LUCENE-5189 introduced support for updates to numeric docvalues. It would be 
> really nice to have Solr support this.






[jira] [Commented] (SOLR-9003) New Admin UI does not display DIH Debug output

2016-08-10 Thread Alexandre Rafalovitch (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9003?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15415188#comment-15415188
 ] 

Alexandre Rafalovitch commented on SOLR-9003:
-

I did not see any guidance on this, so I added it next to the other Admin UI 
issues. I will make sure to put it at the bottom for future issues. Thank 
you for pointing it out.

> New Admin UI does not display DIH Debug output
> --
>
> Key: SOLR-9003
> URL: https://issues.apache.org/jira/browse/SOLR-9003
> Project: Solr
>  Issue Type: Bug
>  Components: UI
>Affects Versions: 6.0
>Reporter: Alexandre Rafalovitch
>Assignee: Alexandre Rafalovitch
> Fix For: 6.2, master (7.0)
>
> Attachments: New Admin-UI.png, Old Admin-UI.png
>
>
> When enabling *Debug* flag in DIH Dataimport screen, a new section *Raw 
> Debug-Response* is added.
> In the new Admin UI, it does not seem to show the output, just *No Request 
> executed*
> This was tested using the *db* core of the example-DIH setup.






[jira] [Commented] (SOLR-9003) New Admin UI does not display DIH Debug output

2016-08-10 Thread Varun Thacker (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9003?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15415183#comment-15415183
 ] 

Varun Thacker commented on SOLR-9003:
-

Hi Alexandre,

I guess it's not a big deal, but new CHANGES entries are generally added 
toward the bottom of the list. I think you inserted it in sorted order of the 
JIRA numbers.

> New Admin UI does not display DIH Debug output
> --
>
> Key: SOLR-9003
> URL: https://issues.apache.org/jira/browse/SOLR-9003
> Project: Solr
>  Issue Type: Bug
>  Components: UI
>Affects Versions: 6.0
>Reporter: Alexandre Rafalovitch
>Assignee: Alexandre Rafalovitch
> Fix For: 6.2, master (7.0)
>
> Attachments: New Admin-UI.png, Old Admin-UI.png
>
>
> When enabling *Debug* flag in DIH Dataimport screen, a new section *Raw 
> Debug-Response* is added.
> In the new Admin UI, it does not seem to show the output, just *No Request 
> executed*
> This was tested using the *db* core of the example-DIH setup.






[JENKINS] Lucene-Solr-NightlyTests-master - Build # 1094 - Still Failing

2016-08-10 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-master/1094/

11 tests failed.
FAILED:  
org.apache.solr.cloud.CdcrReplicationDistributedZkTest.testReplicationAfterRestart

Error Message:
Timeout waiting for CDCR replication to complete @source_collection:shard1

Stack Trace:
java.lang.RuntimeException: Timeout waiting for CDCR replication to complete 
@source_collection:shard1
at 
__randomizedtesting.SeedInfo.seed([1E3A7EBF76241B4E:4227993EB804AF78]:0)
at 
org.apache.solr.cloud.BaseCdcrDistributedZkTest.waitForReplicationToComplete(BaseCdcrDistributedZkTest.java:794)
at 
org.apache.solr.cloud.CdcrReplicationDistributedZkTest.testReplicationAfterRestart(CdcrReplicationDistributedZkTest.java:235)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:985)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:960)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Resolved] (SOLR-9003) New Admin UI does not display DIH Debug output

2016-08-10 Thread Alexandre Rafalovitch (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9003?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexandre Rafalovitch resolved SOLR-9003.
-
   Resolution: Fixed
 Assignee: Alexandre Rafalovitch  (was: Upayavira)
Fix Version/s: master (7.0)
   6.2

Fixed and tested against DIH DB example.

> New Admin UI does not display DIH Debug output
> --
>
> Key: SOLR-9003
> URL: https://issues.apache.org/jira/browse/SOLR-9003
> Project: Solr
>  Issue Type: Bug
>  Components: UI
>Affects Versions: 6.0
>Reporter: Alexandre Rafalovitch
>Assignee: Alexandre Rafalovitch
> Fix For: 6.2, master (7.0)
>
> Attachments: New Admin-UI.png, Old Admin-UI.png
>
>
> When enabling *Debug* flag in DIH Dataimport screen, a new section *Raw 
> Debug-Response* is added.
> In the new Admin UI, it does not seem to show the output, just *No Request 
> executed*
> This was tested using the *db* core of the example-DIH setup.






[jira] [Commented] (SOLR-9003) New Admin UI does not display DIH Debug output

2016-08-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9003?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15415162#comment-15415162
 ] 

ASF GitHub Bot commented on SOLR-9003:
--

Github user arafalov closed the pull request at:

https://github.com/apache/lucene-solr/pull/61


> New Admin UI does not display DIH Debug output
> --
>
> Key: SOLR-9003
> URL: https://issues.apache.org/jira/browse/SOLR-9003
> Project: Solr
>  Issue Type: Bug
>  Components: UI
>Affects Versions: 6.0
>Reporter: Alexandre Rafalovitch
>Assignee: Upayavira
> Attachments: New Admin-UI.png, Old Admin-UI.png
>
>
> When enabling *Debug* flag in DIH Dataimport screen, a new section *Raw 
> Debug-Response* is added.
> In the new Admin UI, it does not seem to show the output, just *No Request 
> executed*
> This was tested using the *db* core of the example-DIH setup.






[GitHub] lucene-solr pull request #61: SOLR-9003: Fix various UI/DIH debug features

2016-08-10 Thread arafalov
Github user arafalov closed the pull request at:

https://github.com/apache/lucene-solr/pull/61


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9003) New Admin UI does not display DIH Debug output

2016-08-10 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9003?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15415156#comment-15415156
 ] 

ASF subversion and git services commented on SOLR-9003:
---

Commit e58c83f0aba8832eabc786a3a8dadd89099c8f61 in lucene-solr's branch 
refs/heads/branch_6x from [~arafalov]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=e58c83f ]

SOLR-9003: DIH Debug now works in new Admin UI
This resolves #61





[JENKINS] Lucene-Solr-master-Linux (64bit/jdk1.8.0_102) - Build # 17521 - Still Unstable!

2016-08-10 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/17521/
Java: 64bit/jdk1.8.0_102 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  org.apache.solr.cloud.DistributedQueueTest.testPeekElements

Error Message:


Stack Trace:
java.lang.AssertionError
at 
__randomizedtesting.SeedInfo.seed([A6195951497C99AB:5B37E3709945CDB6]:0)
at org.junit.Assert.fail(Assert.java:92)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertTrue(Assert.java:54)
at 
org.apache.solr.cloud.DistributedQueueTest.testPeekElements(DistributedQueueTest.java:181)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)




Build Log:
[...truncated 12041 lines...]
   [junit4] Suite: org.apache.solr.cloud.DistributedQueueTest
   [junit4]   2> Creating dataDir: 

[jira] [Commented] (SOLR-9003) New Admin UI does not display DIH Debug output

2016-08-10 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9003?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15415141#comment-15415141
 ] 

ASF subversion and git services commented on SOLR-9003:
---

Commit dd03d39dd6624a5d41325397ca246e01b12ec71d in lucene-solr's branch 
refs/heads/master from [~arafalov]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=dd03d39 ]

SOLR-9003: DIH Debug now works in new Admin UI





[jira] [Updated] (LUCENE-7410) Make cache keys and closed listeners less trappy

2016-08-10 Thread Adrien Grand (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7410?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrien Grand updated LUCENE-7410:
-
Attachment: LUCENE-7410.patch

I have a patch which I think makes the situation better:
 - the ability to remove a listener is gone
 - there is no "core" cache key or listener on {{IndexReader}} anymore, only on 
{{LeafReader}}
 - cache key and listener registration have moved to the 
{{IndexReader.CacheHelper}} class so that it is clear which key relates to 
which listener. It also makes it very hard to propagate the cache key from a 
filtered reader without propagating the listener registration, or vice versa; 
you cannot do it by mistake anymore.
 - {{IndexReader.addReaderClosedListener}} and 
{{IndexReader.getCombinedCoreAndDeletesKey}} have been replaced by 
{{IndexReader.getReaderCacheHelper}}, which returns null by default, meaning 
that the reader is not suited for caching
 - {{IndexReader.addCoreClosedListener}} and {{IndexReader.getCoreCacheKey()}} 
have been replaced by {{LeafReader.getCoreCacheHelper}}, which returns null by 
default, meaning that this leaf reader has no concept of "core" data
 - there is only one impl that actually implements 
{{LeafReader.getCoreCacheHelper}}: {{SegmentReader}}. All other impls either 
delegate to it or do not support caching on a core key.
 - there are only two impls that actually implement 
{{IndexReader.getReaderCacheHelper}}: {{SegmentReader}} and 
{{StandardDirectoryReader}}. All other impls either delegate to it or do not 
support caching.
 - the query cache and BitSetProducer for joins skip caching when 
{{LeafReader.getCoreCacheHelper}} returns null. Other APIs like segment-based 
replication or FieldCache fail with an exception since not being able to cache 
is a problem/performance trap in those cases.
 - IndexReader.CacheKey is used as a cache key to avoid type safety issues.

On the cons side, I removed insanity checking since I could not implement it 
anymore with the new API, but maybe that is not much of an issue since cache 
insanity is hardly possible anymore unless you really want it. I also found 
some usage of cache keys in Solr which can be dangerous, since cache keys are 
shared between readers that have different content. I *think* it should be fine 
given how these readers are used (I left notes in the patch), but that is 
something we might still want to look into since it could cause very subtle 
bugs.
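The core idea of the patch — coupling the cache key and the close notification in one object so a filtered reader cannot propagate one without the other — can be sketched with minimal, hypothetical types (the names and shapes below are illustrative only, not the actual Lucene API):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

// A helper that couples the cache key with close notification: a consumer
// either gets both (the key and listener registration) or neither, so the
// two can no longer diverge by accident.
final class CacheHelper {
    // Opaque key type instead of a plain Object, for type safety.
    static final class CacheKey {}

    private final CacheKey key = new CacheKey();
    private final List<Consumer<CacheKey>> closedListeners = new ArrayList<>();

    CacheKey getKey() { return key; }

    void addClosedListener(Consumer<CacheKey> listener) {
        closedListeners.add(listener);
    }

    // Called once when the underlying reader "core" is closed.
    void notifyClosed() {
        for (Consumer<CacheKey> l : closedListeners) l.accept(key);
    }
}

public class CacheHelperSketch {
    public static void main(String[] args) {
        Map<CacheHelper.CacheKey, String> cache = new HashMap<>();
        // In the real design the helper would come from a reader, and a
        // null helper would mean "this reader is not suited for caching".
        CacheHelper helper = new CacheHelper();

        // Populate the cache and register eviction against the *same* key.
        cache.put(helper.getKey(), "costly per-segment structure");
        helper.addClosedListener(cache::remove);

        helper.notifyClosed();  // reader closed: entry evicted, no leak
        System.out.println(cache.isEmpty());  // true
    }
}
```

Because callers can only ever see the key and the listener registration together, the memory-leak scenario described in the issue (eviction firing on a different key than the one used for insertion) becomes impossible to write by mistake.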

> Make cache keys and closed listeners less trappy
> 
>
> Key: LUCENE-7410
> URL: https://issues.apache.org/jira/browse/LUCENE-7410
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Adrien Grand
> Attachments: LUCENE-7410.patch
>
>
> IndexReader currently exposes getCoreCacheKey(), 
> getCombinedCoreAndDeletesKey(), addCoreClosedListener() and 
> addReaderClosedListener(). They are typically used to manage resources whose 
> lifetime needs to mimic the lifetime of segments/indexes, typically caches.
> I think this is trappy for various reasons:
> h3. Memory leaks
> When maintaining a cache, entries are added to the cache based on the cache 
> key and then evicted using the cache key that is given back by the close 
> listener, so it is very important that both keys are the same.
> But if a filter reader happens to delegate get*Key() and not 
> add*ClosedListener() or vice-versa then there is potential for a memory leak 
> since the closed listener will be called on a different key and entries will 
> never be evicted from the cache.
> h3. Lifetime expectations
> The expectation of using the core cache key is that it will not change in 
> case of deletions, but this is only true on SegmentReader and LeafReader 
> impls that delegate to it. Other implementations such as composite readers or 
> parallel leaf readers use the same key for "core" and "combined core and 
> deletes".
> h3. Throw-away wrappers cause cache trashing
> An application might want to either expose more (with a ParallelReader or 
> MultiReader) or less information (by filtering fields/docs that can be seen) 
> depending on the user who is logged in. In that case the application would 
> typically maintain a DirectoryReader and then wrap it per request depending 
> on the logged user and throw away the wrapper once the request is completed.
> The problem is that these wrappers have their own cache keys and the 
> application may build something costly and put it in a cache to throw it away 
> a couple milliseconds later. I would rather like for such readers to have a 
> way to opt out from caching in order to avoid this performance trap.
> h3. Type safety
> The keys that are exposed are plain java.lang.Object instances, which 
> requires caches to look like a {{Map}} which makes it very easy to 
> either try to get, put or remove on the wrong object since any object would 
> be accepted.

[jira] [Commented] (SOLR-9252) Feature selection and logistic regression on text

2016-08-10 Thread Cao Manh Dat (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9252?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15415096#comment-15415096
 ] 

Cao Manh Dat commented on SOLR-9252:


I mean we should ignore those documents inside the training for loop.

So it will change from
{code}
for (Map.Entry entry : docVectors.entrySet()) {
  ...
}
{code}
to
{code}
for (Map.Entry entry : docVectors.entrySet()) {
  if (isZeros(vector)) continue;
  ...
}
{code}

because otherwise we will have identical zero vectors that carry different 
labels (both positive and negative).
I will submit a patch soon to include this change and regularization.
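A minimal sketch of that guard (the `isZeros` helper and the vector/label types here are assumptions for illustration; the actual patch may differ):

```java
import java.util.HashMap;
import java.util.Map;

public class SkipZeroVectors {
    // A document whose feature vector is all zeros carries no signal: two
    // identical zero vectors with conflicting labels would only add noise
    // to the gradient, so such documents are skipped during training.
    static boolean isZeros(double[] vector) {
        for (double v : vector) {
            if (v != 0.0) return false;
        }
        return true;
    }

    public static void main(String[] args) {
        Map<Integer, double[]> docVectors = new HashMap<>();
        docVectors.put(1, new double[] {0.0, 0.0, 0.0});  // no features matched
        docVectors.put(2, new double[] {0.3, 0.0, 1.2});

        int trained = 0;
        for (Map.Entry<Integer, double[]> entry : docVectors.entrySet()) {
            if (isZeros(entry.getValue())) continue;  // the proposed guard
            trained++;  // ... gradient update would happen here ...
        }
        System.out.println(trained);  // 1
    }
}
```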


> Feature selection and logistic regression on text
> -
>
> Key: SOLR-9252
> URL: https://issues.apache.org/jira/browse/SOLR-9252
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: search, SolrCloud, SolrJ
>Reporter: Cao Manh Dat
>Assignee: Joel Bernstein
>  Labels: Streaming
> Fix For: 6.2
>
> Attachments: SOLR-9252.patch, SOLR-9252.patch, SOLR-9252.patch, 
> SOLR-9252.patch, SOLR-9252.patch, SOLR-9252.patch, SOLR-9252.patch, 
> SOLR-9252.patch, SOLR-9252.patch, SOLR-9252.patch
>
>
> This ticket adds two new streaming expressions: *features* and *train*
> These two functions work together to train a logistic regression model on 
> text, from a training set stored in a SolrCloud collection.
> The syntax is as follows:
> {code}
> train(collection1, q="*:*",
>   features(collection1, 
>q="*:*",  
>field="body", 
>outcome="out_i", 
>positiveLabel=1, 
>numTerms=100),
>   field="body",
>   outcome="out_i",
>   maxIterations=100)
> {code}
> The *features* function extracts the feature terms from a training set using 
> *information gain* to score the terms. 
> http://www.jiliang.xyz/publication/feature_selection_for_classification.pdf
> The *train* function uses the extracted features to train a logistic 
> regression model on a text field in the training set.
> For both *features* and *train* the training set is defined by a query. The 
> doc vectors in the *train* function use tf-idf to represent the terms in the 
> document. The idf is calculated for the specific training set, allowing 
> multiple training sets to be stored in the same collection without polluting 
> the idf. 
> In the *train* function a batch gradient descent approach is used to 
> iteratively train the model.
> Both the *features* and the *train* function are embedded in Solr using the 
> AnalyticsQuery framework. So only the model is transported across the network 
> with each iteration.
> Both the features and the models can be stored in a SolrCloud collection. 
> Using this approach Solr can hold millions of models which can be selectively 
> deployed. For example a model could be trained for each user, to personalize 
> ranking and recommendations.
> Below is the final iteration of a model trained on the Enron Ham/Spam 
> dataset. The model includes the terms and their idfs and weights as well as a 
> classification evaluation describing the accuracy of model on the training 
> set. 
> {code}
> {
>   "idfs_ds": [1.2627703388716238, 1.2043595767152093, 
> 1.3886172425360304, 1.5488587854881268, 1.6127302558747882, 
> 2.1359177807201526, 1.514866246141212, 1.7375701403808523, 
> 1.6166175299631897, 1.756428159015249, 1.7929202354640175, 
> 1.2834893120635762, 1.899442866302021, 1.8639061320252337, 
> 1.7631697575821685, 1.6820002892260415, 1.4411352768194767, 
> 2.103708877350535, 1.2225773869965861, 2.208893321170597, 1.878981794430681, 
> 2.043737027506736, 2.2819184561854864, 2.3264563106163885, 
> 1.9336117619172708, 2.0467265663551024, 1.7386696457142692, 
> 2.468795829515302, 2.069437610615317, 2.6294363202479327, 3.7388303845193307, 
> 2.5446615802900157, 1.7430797961918219, 3.0787440662202736, 
> 1.9579702057493114, 2.289523055570706, 1.5362003886162032, 
> 2.7549569891263763, 3.955894889757158, 2.587435396273302, 3.945844553903657, 
> 1.003513057076781, 3.0416264032637708, 2.248395764146843, 4.018415246738492, 
> 2.2876164773001246, 3.3636289340509933, 1.2438124251270097, 
> 2.733903579928544, 3.439026951535205, 0.6709665389201712, 0.9546224358275518, 
> 2.8080115520822657, 2.477970205791343, 2.2631561797299637, 
> 3.2378087608499606, 0.36177021415584676, 4.1083634834014315, 
> 4.120197941048435, 2.471081544796158, 2.424147775633, 2.92339362620, 
> 2.9269972337044097, 3.2987413118451183, 2.383498249003407, 4.168988105217867, 
> 2.877691472720256, 4.233526626355437, 3.8505343740993316, 2.3264563106163885, 
> 2.6429318017228174, 4.260555298743357, 3.0058372954121855, 
> 

[GitHub] lucene-solr pull request #:

2016-08-10 Thread mikemccand
Github user mikemccand commented on the pull request:


https://github.com/apache/lucene-solr/commit/3816a0eb2bbd1929523ae27db3c90d0942ed5f5f#commitcomment-18587727
  
Thank you for raising it @makeyang!





[JENKINS] Lucene-Solr-master-Windows (64bit/jdk1.8.0_102) - Build # 6039 - Still unstable!

2016-08-10 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/6039/
Java: 64bit/jdk1.8.0_102 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

4 tests failed.
FAILED:  org.apache.solr.handler.TestReplicationHandler.doTestStressReplication

Error Message:
[index.20160810085743940, index.20160810085755855, index.properties, 
replication.properties, snapshot_metadata] expected:<1> but was:<2>

Stack Trace:
java.lang.AssertionError: [index.20160810085743940, index.20160810085755855, 
index.properties, replication.properties, snapshot_metadata] expected:<1> but 
was:<2>
at 
__randomizedtesting.SeedInfo.seed([ED1152266F24BEDE:36BA52E06A0CD76D]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at 
org.apache.solr.handler.TestReplicationHandler.checkForSingleIndex(TestReplicationHandler.java:907)
at 
org.apache.solr.handler.TestReplicationHandler.doTestStressReplication(TestReplicationHandler.java:874)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 

[GitHub] lucene-solr pull request #:

2016-08-10 Thread makeyang
Github user makeyang commented on the pull request:


https://github.com/apache/lucene-solr/commit/3816a0eb2bbd1929523ae27db3c90d0942ed5f5f#commitcomment-18587267
  
@mikemccand  
I am the guy who asked this question on discuss.elastic.co.
Thanks again for helping clarify the question.





[JENKINS-EA] Lucene-Solr-master-Linux (32bit/jdk-9-ea+129) - Build # 17520 - Unstable!

2016-08-10 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/17520/
Java: 32bit/jdk-9-ea+129 -client -XX:+UseG1GC

1 tests failed.
FAILED:  org.apache.solr.security.BasicAuthIntegrationTest.testBasics

Error Message:
IOException occured when talking to server at: 
http://127.0.0.1:40834/solr/testSolrCloudCollection_shard1_replica2

Stack Trace:
org.apache.solr.client.solrj.impl.CloudSolrClient$RouteException: IOException 
occured when talking to server at: 
http://127.0.0.1:40834/solr/testSolrCloudCollection_shard1_replica2
at 
__randomizedtesting.SeedInfo.seed([C609FCCAF63D4073:FBD152E6CED31E03]:0)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.directUpdate(CloudSolrClient.java:760)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1172)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:1061)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:997)
at org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1219)
at 
org.apache.solr.security.BasicAuthIntegrationTest.doExtraTests(BasicAuthIntegrationTest.java:193)
at 
org.apache.solr.cloud.TestMiniSolrCloudClusterBase.testCollectionCreateSearchDelete(TestMiniSolrCloudClusterBase.java:196)
at 
org.apache.solr.cloud.TestMiniSolrCloudClusterBase.testBasics(TestMiniSolrCloudClusterBase.java:79)
at 
jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(java.base@9-ea/Native 
Method)
at 
jdk.internal.reflect.NativeMethodAccessorImpl.invoke(java.base@9-ea/NativeMethodAccessorImpl.java:62)
at 
jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(java.base@9-ea/DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(java.base@9-ea/Method.java:533)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Commented] (LUCENE-7410) Make cache keys and closed listeners less trappy

2016-08-10 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7410?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15414994#comment-15414994
 ] 

Adrien Grand commented on LUCENE-7410:
--

Agreed. I'm also looking into removing the ability to remove closed listeners.




[jira] [Commented] (LUCENE-7410) Make cache keys and closed listeners less trappy

2016-08-10 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7410?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15414942#comment-15414942
 ] 

Robert Muir commented on LUCENE-7410:
-

{{getCombinedCoreAndDeletesKey()}}: what uses this one? Can we remove it?

> Make cache keys and closed listeners less trappy
> 
>
> Key: LUCENE-7410
> URL: https://issues.apache.org/jira/browse/LUCENE-7410
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Adrien Grand
>
> IndexReader currently exposes getCoreCacheKey(), 
> getCombinedCoreAndDeletesKey(), addCoreClosedListener() and 
> addReaderClosedListener(). They are typically used to manage resources whose 
> lifetime needs to mimic the lifetime of segments/indexes, typically caches.
> I think this is trappy for various reasons:
> h3. Memory leaks
> When maintaining a cache, entries are added to the cache based on the cache 
> key and then evicted using the cache key that is given back by the close 
> listener, so it is very important that both keys are the same.
> But if a filter reader happens to delegate get*Key() and not 
> add*ClosedListener() or vice-versa then there is potential for a memory leak 
> since the closed listener will be called on a different key and entries will 
> never be evicted from the cache.
> h3. Lifetime expectations
> The expectation of using the core cache key is that it will not change in 
> case of deletions, but this is only true on SegmentReader and LeafReader 
> impls that delegate to it. Other implementations such as composite readers or 
> parallel leaf readers use the same key for "core" and "combined core and 
> deletes".
> h3. Throw-away wrappers cause cache thrashing
> An application might want to either expose more (with a ParallelReader or 
> MultiReader) or less information (by filtering fields/docs that can be seen) 
> depending on the user who is logged in. In that case the application would 
> typically maintain a DirectoryReader and then wrap it per request depending 
> on the logged user and throw away the wrapper once the request is completed.
> The problem is that these wrappers have their own cache keys, so the 
> application may build something costly and put it in a cache only to throw it 
> away a couple of milliseconds later. I would like such readers to have a way 
> to opt out of caching in order to avoid this performance trap.
> h3. Type safety
> The keys that are exposed are plain java.lang.Object instances, which 
> requires caches to look like a {{Map}}; this makes it very easy to get, put 
> or remove with the wrong object, since any object is accepted.






[jira] [Created] (LUCENE-7410) Make cache keys and closed listeners less trappy

2016-08-10 Thread Adrien Grand (JIRA)
Adrien Grand created LUCENE-7410:


 Summary: Make cache keys and closed listeners less trappy
 Key: LUCENE-7410
 URL: https://issues.apache.org/jira/browse/LUCENE-7410
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Adrien Grand


IndexReader currently exposes getCoreCacheKey(), 
getCombinedCoreAndDeletesKey(), addCoreClosedListener() and 
addReaderClosedListener(). They are typically used to manage resources whose 
lifetime needs to mimic the lifetime of segments/indexes, typically caches.

I think this is trappy for various reasons:

h3. Memory leaks

When maintaining a cache, entries are added to the cache based on the cache key 
and then evicted using the cache key that is given back by the close listener, 
so it is very important that both keys are the same.

But if a filter reader happens to delegate get*Key() and not 
add*ClosedListener() or vice-versa then there is potential for a memory leak 
since the closed listener will be called on a different key and entries will 
never be evicted from the cache.
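A toy sketch of this trap (plain Java, not the real Lucene API; `ToyReader`, `BuggyFilterReader`, and all other names are invented for illustration): the wrapper delegates the cache key but not the listener registration, so the close notification fires with a different key than the one the entry was stored under, and the entry is never evicted.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

public class CacheLeakDemo {
    /** Toy reader: exposes a cache key and close notifications. */
    static class ToyReader {
        final Object ownKey = new Object();
        final List<Consumer<Object>> closedListeners = new ArrayList<>();

        Object getCoreCacheKey() { return ownKey; }
        void addCoreClosedListener(Consumer<Object> l) { closedListeners.add(l); }
        void close() {
            // Notifies listeners with this instance's OWN key.
            for (Consumer<Object> l : closedListeners) l.accept(ownKey);
        }
    }

    /** Buggy wrapper: delegates the key but NOT the listener registration. */
    static class BuggyFilterReader extends ToyReader {
        final ToyReader in;
        BuggyFilterReader(ToyReader in) { this.in = in; }
        @Override Object getCoreCacheKey() { return in.getCoreCacheKey(); }
        // addCoreClosedListener() is not delegated: the mismatch described above.
    }

    static int leakedEntries() {
        Map<Object, String> cache = new HashMap<>();
        ToyReader reader = new BuggyFilterReader(new ToyReader());
        // Store an expensive structure under the (delegated) inner key...
        cache.put(reader.getCoreCacheKey(), "expensive per-segment structure");
        // ...and hope the close notification evicts it.
        reader.addCoreClosedListener(cache::remove);
        reader.close();
        // close() fired with the wrapper's own key, not the inner key the
        // entry was stored under, so nothing was evicted.
        return cache.size();
    }

    public static void main(String[] args) {
        System.out.println("leaked entries: " + leakedEntries()); // prints 1, not 0
    }
}
```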

h3. Lifetime expectations

The expectation of using the core cache key is that it will not change in case 
of deletions, but this is only true on SegmentReader and LeafReader impls that 
delegate to it. Other implementations such as composite readers or parallel 
leaf readers use the same key for "core" and "combined core and deletes".

h3. Throw-away wrappers cause cache thrashing

An application might want to either expose more (with a ParallelReader or 
MultiReader) or less information (by filtering fields/docs that can be seen) 
depending on the user who is logged in. In that case the application would 
typically maintain a DirectoryReader and then wrap it per request depending on 
the logged user and throw away the wrapper once the request is completed.

The problem is that these wrappers have their own cache keys, so the 
application may build something costly and put it in a cache only to throw it 
away a couple of milliseconds later. I would like such readers to have a way 
to opt out of caching in order to avoid this performance trap.

h3. Type safety

The keys that are exposed are plain java.lang.Object instances, which requires 
caches to look like a {{Map}}; this makes it very easy to get, put or remove 
with the wrong object, since any object is accepted.
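One possible direction, sketched with invented names (Lucene 7 eventually addressed this with IndexReader.CacheHelper/CacheKey, but the code below is only an illustration, not the actual API): expose an opaque key type and wrap the map. Note that a raw {{Map}}'s get/remove accept any Object regardless of the key type parameter, so the wrapper is what actually restores compile-time checking.

```java
import java.util.HashMap;
import java.util.Map;

public class TypedKeyDemo {
    /** Opaque, identity-based key type; only reader code would mint these. */
    public static final class CacheKey {}

    /**
     * Map.get/remove take Object, so even a Map&lt;CacheKey, V&gt; accepts
     * lookups with arbitrary objects; this wrapper types all three operations.
     */
    public static final class SegmentCache<V> {
        private final Map<CacheKey, V> map = new HashMap<>();
        public void put(CacheKey key, V value) { map.put(key, value); }
        public V get(CacheKey key) { return map.get(key); }
        public V remove(CacheKey key) { return map.remove(key); }
    }

    public static void main(String[] args) {
        SegmentCache<String> cache = new SegmentCache<>();
        CacheKey key = new CacheKey();
        cache.put(key, "per-segment structure");
        // cache.get("not a key");  // would not compile: wrong key type
        System.out.println(cache.get(key));
    }
}
```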







[jira] [Commented] (LUCENE-7344) Deletion by query of uncommitted docs not working with DV updates

2016-08-10 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7344?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15414906#comment-15414906
 ] 

Robert Muir commented on LUCENE-7344:
-

The performance of this seems really trappy; I am not sure we should do this.

Maybe we should just document the limitation unless there is a cleaner way.

> Deletion by query of uncommitted docs not working with DV updates
> -
>
> Key: LUCENE-7344
> URL: https://issues.apache.org/jira/browse/LUCENE-7344
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Ishan Chattopadhyaya
> Attachments: LUCENE-7344.patch, LUCENE-7344.patch, LUCENE-7344.patch
>
>
> When DVs are updated, delete by query doesn't work with the updated DV value.






[jira] [Commented] (LUCENE-6968) LSH Filter

2016-08-10 Thread Varun Thacker (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6968?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15414895#comment-15414895
 ] 

Varun Thacker commented on LUCENE-6968:
---

Hi Tommaso,

I think we need to fix the CHANGES entry by moving it to the 6.2 section; 
it's under the 7.0 section currently.

> LSH Filter
> --
>
> Key: LUCENE-6968
> URL: https://issues.apache.org/jira/browse/LUCENE-6968
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/analysis
>Reporter: Cao Manh Dat
>Assignee: Tommaso Teofili
> Fix For: master (7.0), 6.2
>
> Attachments: LUCENE-6968.4.patch, LUCENE-6968.5.patch, 
> LUCENE-6968.6.patch, LUCENE-6968.patch, LUCENE-6968.patch, LUCENE-6968.patch
>
>
> I'm planning to implement LSH, which supports queries like this:
> {quote}
> Find similar documents that have a similarity score of 0.8 or higher with a 
> given document. The similarity measure can be cosine, Jaccard, Euclidean, etc.
> {quote}
> For example, given the following corpus:
> {quote}
> 1. Solr is an open source search engine based on Lucene
> 2. Solr is an open source enterprise search engine based on Lucene
> 3. Solr is a popular open source enterprise search engine based on Lucene
> 4. Apache Lucene is a high-performance, full-featured text search engine 
> library written entirely in Java
> {quote}
> We want to find documents that have a Jaccard similarity of at least 0.6 with 
> this doc:
> {quote}
> Solr is an open source search engine
> {quote}
> It will return only docs 1, 2 and 3 (MoreLikeThis would also return doc 4).
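For reference, exact token-set Jaccard similarity over the example above can be computed as below (a plain-Java sketch with whitespace tokenization; real LSH approximates this with hashed signatures rather than exact set intersection):

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

public class JaccardDemo {
    static Set<String> tokens(String text) {
        // Naive whitespace tokenizer, lowercased; punctuation is kept as-is.
        return new HashSet<>(Arrays.asList(text.toLowerCase().split("\\s+")));
    }

    /** Jaccard similarity = |A intersect B| / |A union B| over token sets. */
    static double jaccard(String a, String b) {
        Set<String> sa = tokens(a), sb = tokens(b);
        Set<String> inter = new HashSet<>(sa);
        inter.retainAll(sb);
        Set<String> union = new HashSet<>(sa);
        union.addAll(sb);
        return (double) inter.size() / union.size();
    }

    public static void main(String[] args) {
        String query = "Solr is an open source search engine";
        String doc1 = "Solr is an open source search engine based on Lucene";
        String doc4 = "Apache Lucene is a high-performance, full-featured text "
                + "search engine library written entirely in Java";
        System.out.printf("doc1: %.3f%n", jaccard(query, doc1)); // 7/10 = 0.700
        System.out.printf("doc4: %.3f%n", jaccard(query, doc4)); // 3/18, well below 0.6
    }
}
```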






[jira] [Commented] (LUCENE-7344) Deletion by query of uncommitted docs not working with DV updates

2016-08-10 Thread Shai Erera (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7344?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15414871#comment-15414871
 ] 

Shai Erera commented on LUCENE-7344:


bq. I don't understand most of what you're saying

To clarify the problem, both for you and in the interest of writing a detailed 
plan of the proposed solution: currently, when a DBQ is processed, it uses the 
LeafReader *without* the NDV updates, and therefore has no knowledge 
of the updated values. This is relatively easily solved in the patch I 
uploaded, by applying the DV updates before the DBQ is processed. That way, the 
DBQ uses a LeafReader which is already aware of the updates and all works well.

However, there is an order to the update operations that occur in IndexWriter; 
in our case it can be a mix of DBQs and NDV updates. So if we apply *all* the 
DV updates before any of the DBQs, we'll get incorrect results where a DBQ 
either deletes a document it shouldn't (see the code example above, and also 
what your {{testDeleteFollowedByUpdateOfDeletedValue}} shows), or fails to 
delete a document that it should.

To properly solve this problem, we need to apply the DV updates and DBQs in the 
order they were received (as opposed to applying them in bulk in current code). 
Meaning if the order of operations is NDVU1, NDVU2, DBQ1, NDVU3, DBQ2, DBQ3, 
NDVU4, then we need to:
# Apply NDVU1 + NDVU2; this will cause a new LeafReader to be created
# Apply DBQ1; using the already updated LeafReader
# Apply NDVU3; another LeafReader will be created, now reflecting all 3 NDV 
updates
# Apply DBQ2 and DBQ3; using the updated LeafReader from above
# Apply NDVU4; this will cause another LeafReader to be created

The adverse effect in this case is that we cause 3 LeafReader reopens, each 
time (due to how NDV updates are currently implemented) writing the full DV 
field to a new stack. If you have many documents, this is going to be very 
expensive, and with a longer sequence of interleaving updates and deletes it 
gets worse and worse.
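The ordering argument can be modeled with a toy in-memory sketch (plain Java, unrelated to IndexWriter internals; {{Op.update}} and {{Op.deleteByValue}} are invented stand-ins for NDV updates and DBQs): applying all updates before all deletes gives a different surviving set than applying the same operations in arrival order.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class ApplyOrderDemo {
    /** An operation: either a single-doc value update or a delete-by-value. */
    static final class Op {
        final boolean isUpdate;
        final int doc;     // only meaningful for updates
        final long value;  // new value for updates, match value for deletes
        private Op(boolean isUpdate, int doc, long value) {
            this.isUpdate = isUpdate; this.doc = doc; this.value = value;
        }
        static Op update(int doc, long value) { return new Op(true, doc, value); }
        static Op deleteByValue(long value) { return new Op(false, -1, value); }
    }

    /** Applies ops strictly in arrival order (the correct semantics). */
    static Map<Integer, Long> inOrder(Map<Integer, Long> docs, List<Op> ops) {
        Map<Integer, Long> live = new HashMap<>(docs);
        for (Op op : ops) {
            if (op.isUpdate) {
                if (live.containsKey(op.doc)) live.put(op.doc, op.value);
            } else {
                live.values().removeIf(v -> v == op.value);
            }
        }
        return live;
    }

    /** Applies ALL updates first, then all deletes (the incorrect bulk order). */
    static Map<Integer, Long> bulk(Map<Integer, Long> docs, List<Op> ops) {
        Map<Integer, Long> live = new HashMap<>(docs);
        for (Op op : ops)
            if (op.isUpdate && live.containsKey(op.doc)) live.put(op.doc, op.value);
        for (Op op : ops)
            if (!op.isUpdate) live.values().removeIf(v -> v == op.value);
        return live;
    }

    public static void main(String[] args) {
        // Doc 0 starts with value 2; a DBQ on value 1 arrives BEFORE the
        // update that sets doc 0's value to 1.
        Map<Integer, Long> docs = Map.of(0, 2L);
        List<Op> ops = List.of(Op.deleteByValue(1L), Op.update(0, 1L));
        System.out.println(inOrder(docs, ops)); // {0=1}: the doc survives
        System.out.println(bulk(docs, ops));    // {}  : the doc is wrongly deleted
    }
}
```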

And so here comes the optimization that Mike and I discussed above. Since the 
NDV updates are held in memory until they're applied, we can avoid flushing 
them to disk and instead create a LeafReader which reads the original DV field 
plus the in-memory DV updates. Note though: not *all* DV updates, but only the 
ones that are relevant up to this point. So in the case above, that LeafReader 
will view only NDVU1 and NDVU2, and later it will be updated to view NDVU3 as 
well.

This is purely an optimization step and has nothing to do with correctness (of 
course, that optimization is tricky and needs to be implemented correctly!). 
Therefore my plan of attack in this case is:

# Have enough tests that try different cases before any of this is implemented. 
For example, Mike proposed above to have the LeafReader + DV field "view" use 
docIdUpto. I need to check the code again, but I want to make sure that if 
NDVU2, NDVU3 and NDVU4 (with the interleaving DBQs) all affect the *same* 
document, everything still works.
# Implement the less-efficient approach, i.e. flush the DV updates to disk 
before each DBQ is processed. This ensures that we have a proper solution 
implemented, and we leave the optimization to a later step (either literally a 
later commit, or just a different patch or whatever). I think this is 
complicated enough to start with.
# Improve the solution to avoid flushing DV updates between the DBQs, as 
proposed above.

bq. testBiasedMixOfRandomUpdates

I briefly reviewed the test, though not thoroughly (I intend to). However, 
notice that committing (hard/soft; commit/NRT) completely avoids the problem, 
because a commit/NRT already flushes DV updates. So if that's what this test 
does, I don't think it's going to expose the problem. Perhaps with the 
explanation I wrote above, you can revisit the test and make it fail, though.

> Deletion by query of uncommitted docs not working with DV updates
> -
>
> Key: LUCENE-7344
> URL: https://issues.apache.org/jira/browse/LUCENE-7344
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Ishan Chattopadhyaya
> Attachments: LUCENE-7344.patch, LUCENE-7344.patch, LUCENE-7344.patch
>
>
> When DVs are updated, delete by query doesn't work with the updated DV value.






[JENKINS-EA] Lucene-Solr-master-Linux (64bit/jdk-9-ea+129) - Build # 17519 - Failure!

2016-08-10 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/17519/
Java: 64bit/jdk-9-ea+129 -XX:-UseCompressedOops -XX:+UseG1GC

All tests passed

Build Log:
[...truncated 12559 lines...]
   [junit4] JVM J0: stdout was not empty, see: 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/build/solr-core/test/temp/junit4-J0-20160810_064221_524.sysout
   [junit4] >>> JVM J0 emitted unexpected output (verbatim) 
   [junit4] java.lang.OutOfMemoryError: Java heap space
   [junit4] Dumping heap to 
/home/jenkins/workspace/Lucene-Solr-master-Linux/heapdumps/java_pid22233.hprof 
...
   [junit4] Heap dump file created [398641078 bytes in 1.287 secs]
   [junit4] <<< JVM J0: EOF 

[...truncated 11032 lines...]
BUILD FAILED
/home/jenkins/workspace/Lucene-Solr-master-Linux/build.xml:763: The following 
error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-master-Linux/build.xml:715: Some of the 
tests produced a heap dump, but did not fail. Maybe a suppressed 
OutOfMemoryError? Dumps created:
* java_pid22233.hprof

Total time: 60 minutes 59 seconds
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
[WARNINGS] Skipping publisher since build result is FAILURE
Recording test results
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[jira] [Resolved] (LUCENE-7408) Geo3d test failure

2016-08-10 Thread Karl Wright (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7408?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karl Wright resolved LUCENE-7408.
-
   Resolution: Fixed
Fix Version/s: master (7.0)
   6.x

> Geo3d test failure
> --
>
> Key: LUCENE-7408
> URL: https://issues.apache.org/jira/browse/LUCENE-7408
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Karl Wright
>Assignee: Karl Wright
> Fix For: 6.x, master (7.0)
>
>
> FAILED:  org.apache.lucene.spatial3d.TestGeo3DPoint.testRandomMedium
> Error Message:
> FAIL: id=20600 should have matched but did not   shape=GeoStandardCircle: 
> {planetmodel=PlanetModel.WGS84, center=[lat=-2.7574435614238194E-13, 
> lon=0.0([X=1.0011188539924791, Y=0.0, Z=-2.760528738161554E-13])], 
> radius=1.5887859182593391(91.03072766607713)}   bounds=XYZBounds: 
> [xmin=-0.01779006715405413 xmax=1.0011188549924792 ymin=-1.0011188549924792 
> ymax=1.0011188549924792 zmin=-0.9977622930221051 zmax=0.9977622930221051]   
> world bounds=( minX=-1.0011188539924791 maxX=1.0011188539924791 
> minY=-1.0011188539924791 maxY=1.0011188539924791 minZ=-0.9977622920221051 
> maxZ=0.9977622920221051   quantized point=[X=-0.017929931093965013, 
> Y=0.6974607560008638, Z=0.7155524064776803] within shape? true within bounds? 
> false   unquantized point=[lat=0.7980359504429014, 
> lon=1.5964981068121482([X=-0.017929931150094086, Y=0.6974607557894967, 
> Z=0.7155524064918857])] within shape? true within bounds? false   docID=20241 
> deleted?=false   query=PointInGeo3DShapeQuery: field=point: Shape: 
> GeoStandardCircle: {planetmodel=PlanetModel.WGS84, 
> center=[lat=-2.7574435614238194E-13, lon=0.0([X=1.0011188539924791, Y=0.0, 
> Z=-2.760528738161554E-13])], radius=1.5887859182593391(91.03072766607713)}   
> explanation: target is in leaf _1(6.2.0):C40793 of full reader 
> StandardDirectoryReader(segments:5:nrt _1(6.2.0):C40793) full BKD path to 
> target doc:   Cell(x=-1.0011188543037526 TO 1.0011188543037524 
> y=-1.0011188510404756 TO 1.0011188543037524 z=-0.9977622923536127 TO 
> 0.9977622923536126); Shape relationship = OVERLAPS; Quantized point within 
> cell = true; Unquantized point within cell = true   
> Cell(x=-1.0011188543037526 TO 7.390138260796596E-4 y=-1.0011188510404756 TO 
> 1.0011188543037524 z=-0.9977622923536127 TO 0.9977622923536126); Shape 
> relationship = OVERLAPS; Quantized point within cell = true; Unquantized 
> point within cell = true   Cell(x=-1.0011188543037526 TO 
> 7.390138260796596E-4 y=-1.0011188510404756 TO 1.0011188543037524 z=0.0 TO 
> 0.9977622923536126); Shape relationship = OVERLAPS; Quantized point within 
> cell = true; Unquantized point within cell = true   
> Cell(x=-0.36164092093242806 TO 7.390138260796596E-4 y=-1.0011188510404756 TO 
> 1.0011188543037524 z=0.0 TO 0.9977622923536126); Shape relationship = 
> OVERLAPS; Quantized point within cell = true; Unquantized point within cell = 
> true   Cell(x=-0.36164092093242806 TO 7.390138260796596E-4 
> y=-1.0011188510404756 TO 1.0011188543037524 z=0.0 TO 0.9913910577870918); 
> Shape relationship = OVERLAPS; Quantized point within cell = true; 
> Unquantized point within cell = true   Cell(x=-0.36164092093242806 TO 
> 7.390138260796596E-4 y=-1.0011188510404756 TO 1.0011188543037524 z=0.0 TO 
> 0.8703133879741732); Shape relationship = OVERLAPS; Quantized point within 
> cell = true; Unquantized point within cell = true   
> Cell(x=-0.36164092093242806 TO 7.390138260796596E-4 y=-1.0011188510404756 TO 
> 1.0011188543037524 z=0.4397491496027799 TO 0.8703133879741732); Shape 
> relationship = OVERLAPS; Quantized point within cell = true; Unquantized 
> point within cell = true   Cell(x=-0.36164092093242806 TO 
> 7.390138260796596E-4 y=-1.0011188510404756 TO 1.0011188543037524 
> z=0.4397491496027799 TO 0.7233329727533392); Shape relationship = OVERLAPS; 
> Quantized point within cell = true; Unquantized point within cell = true  
>  Cell(x=-0.2013357411633494 TO 7.390138260796596E-4 y=-1.0011188510404756 TO 
> 1.0011188543037524 z=0.4397491496027799 TO 0.7233329727533392); Shape 
> relationship = OVERLAPS; Quantized point within cell = true; Unquantized 
> point within cell = true on cell Cell(x=-1.0011188543037526 TO 
> 1.0011188543037524 y=-1.0011188510404756 TO 1.0011188543037524 
> z=-0.9977622923536127 TO 0.9977622923536126); Shape relationship = OVERLAPS; 
> Quantized point within cell = true; Unquantized point within cell = true, 
> wrapped visitor returned CELL_CROSSES_QUERY on cell 
> Cell(x=-1.0011188543037526 TO 7.390138260796596E-4 y=-1.0011188510404756 TO 
> 1.0011188543037524 z=-0.9977622923536127 TO 0.9977622923536126); Shape 
> relationship = OVERLAPS; Quantized point within cell = true; Unquantized 
> point within cell = true, wrapped visitor returned 

[jira] [Commented] (LUCENE-7408) Geo3d test failure

2016-08-10 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7408?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15414805#comment-15414805
 ] 

ASF subversion and git services commented on LUCENE-7408:
-

Commit d6bd6bbcbe1c7315537cb6376fa4a36af62b6fa9 in lucene-solr's branch 
refs/heads/branch_6x from [~kwri...@metacarta.com]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=d6bd6bb ]

LUCENE-7408: Detect degenerate case in lagrangian bounds computation when it 
pops up.


> Geo3d test failure
> --
>
> Key: LUCENE-7408
> URL: https://issues.apache.org/jira/browse/LUCENE-7408
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Karl Wright
>Assignee: Karl Wright
>
> FAILED:  org.apache.lucene.spatial3d.TestGeo3DPoint.testRandomMedium
> Error Message:
> FAIL: id=20600 should have matched but did not   shape=GeoStandardCircle: 
> {planetmodel=PlanetModel.WGS84, center=[lat=-2.7574435614238194E-13, 
> lon=0.0([X=1.0011188539924791, Y=0.0, Z=-2.760528738161554E-13])], 
> radius=1.5887859182593391(91.03072766607713)}   bounds=XYZBounds: 
> [xmin=-0.01779006715405413 xmax=1.0011188549924792 ymin=-1.0011188549924792 
> ymax=1.0011188549924792 zmin=-0.9977622930221051 zmax=0.9977622930221051]   
> world bounds=( minX=-1.0011188539924791 maxX=1.0011188539924791 
> minY=-1.0011188539924791 maxY=1.0011188539924791 minZ=-0.9977622920221051 
> maxZ=0.9977622920221051   quantized point=[X=-0.017929931093965013, 
> Y=0.6974607560008638, Z=0.7155524064776803] within shape? true within bounds? 
> false   unquantized point=[lat=0.7980359504429014, 
> lon=1.5964981068121482([X=-0.017929931150094086, Y=0.6974607557894967, 
> Z=0.7155524064918857])] within shape? true within bounds? false   docID=20241 
> deleted?=false   query=PointInGeo3DShapeQuery: field=point: Shape: 
> GeoStandardCircle: {planetmodel=PlanetModel.WGS84, 
> center=[lat=-2.7574435614238194E-13, lon=0.0([X=1.0011188539924791, Y=0.0, 
> Z=-2.760528738161554E-13])], radius=1.5887859182593391(91.03072766607713)}   
> explanation: target is in leaf _1(6.2.0):C40793 of full reader 
> StandardDirectoryReader(segments:5:nrt _1(6.2.0):C40793) full BKD path to 
> target doc:   Cell(x=-1.0011188543037526 TO 1.0011188543037524 
> y=-1.0011188510404756 TO 1.0011188543037524 z=-0.9977622923536127 TO 
> 0.9977622923536126); Shape relationship = OVERLAPS; Quantized point within 
> cell = true; Unquantized point within cell = true   
> Cell(x=-1.0011188543037526 TO 7.390138260796596E-4 y=-1.0011188510404756 TO 
> 1.0011188543037524 z=-0.9977622923536127 TO 0.9977622923536126); Shape 
> relationship = OVERLAPS; Quantized point within cell = true; Unquantized 
> point within cell = true   Cell(x=-1.0011188543037526 TO 
> 7.390138260796596E-4 y=-1.0011188510404756 TO 1.0011188543037524 z=0.0 TO 
> 0.9977622923536126); Shape relationship = OVERLAPS; Quantized point within 
> cell = true; Unquantized point within cell = true   
> Cell(x=-0.36164092093242806 TO 7.390138260796596E-4 y=-1.0011188510404756 TO 
> 1.0011188543037524 z=0.0 TO 0.9977622923536126); Shape relationship = 
> OVERLAPS; Quantized point within cell = true; Unquantized point within cell = 
> true   Cell(x=-0.36164092093242806 TO 7.390138260796596E-4 
> y=-1.0011188510404756 TO 1.0011188543037524 z=0.0 TO 0.9913910577870918); 
> Shape relationship = OVERLAPS; Quantized point within cell = true; 
> Unquantized point within cell = true   Cell(x=-0.36164092093242806 TO 
> 7.390138260796596E-4 y=-1.0011188510404756 TO 1.0011188543037524 z=0.0 TO 
> 0.8703133879741732); Shape relationship = OVERLAPS; Quantized point within 
> cell = true; Unquantized point within cell = true   
> Cell(x=-0.36164092093242806 TO 7.390138260796596E-4 y=-1.0011188510404756 TO 
> 1.0011188543037524 z=0.4397491496027799 TO 0.8703133879741732); Shape 
> relationship = OVERLAPS; Quantized point within cell = true; Unquantized 
> point within cell = true   Cell(x=-0.36164092093242806 TO 
> 7.390138260796596E-4 y=-1.0011188510404756 TO 1.0011188543037524 
> z=0.4397491496027799 TO 0.7233329727533392); Shape relationship = OVERLAPS; 
> Quantized point within cell = true; Unquantized point within cell = true  
>  Cell(x=-0.2013357411633494 TO 7.390138260796596E-4 y=-1.0011188510404756 TO 
> 1.0011188543037524 z=0.4397491496027799 TO 0.7233329727533392); Shape 
> relationship = OVERLAPS; Quantized point within cell = true; Unquantized 
> point within cell = true on cell Cell(x=-1.0011188543037526 TO 
> 1.0011188543037524 y=-1.0011188510404756 TO 1.0011188543037524 
> z=-0.9977622923536127 TO 0.9977622923536126); Shape relationship = OVERLAPS; 
> Quantized point within cell = true; Unquantized point within cell = true, 
> wrapped visitor returned CELL_CROSSES_QUERY on cell 
> Cell(x=-1.0011188543037526 TO 

[jira] [Commented] (LUCENE-7408) Geo3d test failure

2016-08-10 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7408?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15414801#comment-15414801
 ] 

ASF subversion and git services commented on LUCENE-7408:
-

Commit 5d06ca3da08f6904ca8151c05e384491f5278641 in lucene-solr's branch 
refs/heads/master from [~kwri...@metacarta.com]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=5d06ca3 ]

LUCENE-7408: Detect degenerate case in lagrangian bounds computation when it 
pops up.


> Geo3d test failure
> --
>
> Key: LUCENE-7408
> URL: https://issues.apache.org/jira/browse/LUCENE-7408
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Karl Wright
>Assignee: Karl Wright
>
> FAILED:  org.apache.lucene.spatial3d.TestGeo3DPoint.testRandomMedium
> Error Message:
> FAIL: id=20600 should have matched but did not   shape=GeoStandardCircle: 
> {planetmodel=PlanetModel.WGS84, center=[lat=-2.7574435614238194E-13, 
> lon=0.0([X=1.0011188539924791, Y=0.0, Z=-2.760528738161554E-13])], 
> radius=1.5887859182593391(91.03072766607713)}   bounds=XYZBounds: 
> [xmin=-0.01779006715405413 xmax=1.0011188549924792 ymin=-1.0011188549924792 
> ymax=1.0011188549924792 zmin=-0.9977622930221051 zmax=0.9977622930221051]   
> world bounds=( minX=-1.0011188539924791 maxX=1.0011188539924791 
> minY=-1.0011188539924791 maxY=1.0011188539924791 minZ=-0.9977622920221051 
> maxZ=0.9977622920221051   quantized point=[X=-0.017929931093965013, 
> Y=0.6974607560008638, Z=0.7155524064776803] within shape? true within bounds? 
> false   unquantized point=[lat=0.7980359504429014, 
> lon=1.5964981068121482([X=-0.017929931150094086, Y=0.6974607557894967, 
> Z=0.7155524064918857])] within shape? true within bounds? false   docID=20241 
> deleted?=false   query=PointInGeo3DShapeQuery: field=point: Shape: 
> GeoStandardCircle: {planetmodel=PlanetModel.WGS84, 
> center=[lat=-2.7574435614238194E-13, lon=0.0([X=1.0011188539924791, Y=0.0, 
> Z=-2.760528738161554E-13])], radius=1.5887859182593391(91.03072766607713)}   
> explanation: target is in leaf _1(6.2.0):C40793 of full reader 
> StandardDirectoryReader(segments:5:nrt _1(6.2.0):C40793) full BKD path to 
> target doc:   Cell(x=-1.0011188543037526 TO 1.0011188543037524 
> y=-1.0011188510404756 TO 1.0011188543037524 z=-0.9977622923536127 TO 
> 0.9977622923536126); Shape relationship = OVERLAPS; Quantized point within 
> cell = true; Unquantized point within cell = true   
> Cell(x=-1.0011188543037526 TO 7.390138260796596E-4 y=-1.0011188510404756 TO 
> 1.0011188543037524 z=-0.9977622923536127 TO 0.9977622923536126); Shape 
> relationship = OVERLAPS; Quantized point within cell = true; Unquantized 
> point within cell = true   Cell(x=-1.0011188543037526 TO 
> 7.390138260796596E-4 y=-1.0011188510404756 TO 1.0011188543037524 z=0.0 TO 
> 0.9977622923536126); Shape relationship = OVERLAPS; Quantized point within 
> cell = true; Unquantized point within cell = true   
> Cell(x=-0.36164092093242806 TO 7.390138260796596E-4 y=-1.0011188510404756 TO 
> 1.0011188543037524 z=0.0 TO 0.9977622923536126); Shape relationship = 
> OVERLAPS; Quantized point within cell = true; Unquantized point within cell = 
> true   Cell(x=-0.36164092093242806 TO 7.390138260796596E-4 
> y=-1.0011188510404756 TO 1.0011188543037524 z=0.0 TO 0.9913910577870918); 
> Shape relationship = OVERLAPS; Quantized point within cell = true; 
> Unquantized point within cell = true   Cell(x=-0.36164092093242806 TO 
> 7.390138260796596E-4 y=-1.0011188510404756 TO 1.0011188543037524 z=0.0 TO 
> 0.8703133879741732); Shape relationship = OVERLAPS; Quantized point within 
> cell = true; Unquantized point within cell = true   
> Cell(x=-0.36164092093242806 TO 7.390138260796596E-4 y=-1.0011188510404756 TO 
> 1.0011188543037524 z=0.4397491496027799 TO 0.8703133879741732); Shape 
> relationship = OVERLAPS; Quantized point within cell = true; Unquantized 
> point within cell = true   Cell(x=-0.36164092093242806 TO 
> 7.390138260796596E-4 y=-1.0011188510404756 TO 1.0011188543037524 
> z=0.4397491496027799 TO 0.7233329727533392); Shape relationship = OVERLAPS; 
> Quantized point within cell = true; Unquantized point within cell = true  
>  Cell(x=-0.2013357411633494 TO 7.390138260796596E-4 y=-1.0011188510404756 TO 
> 1.0011188543037524 z=0.4397491496027799 TO 0.7233329727533392); Shape 
> relationship = OVERLAPS; Quantized point within cell = true; Unquantized 
> point within cell = true on cell Cell(x=-1.0011188543037526 TO 
> 1.0011188543037524 y=-1.0011188510404756 TO 1.0011188543037524 
> z=-0.9977622923536127 TO 0.9977622923536126); Shape relationship = OVERLAPS; 
> Quantized point within cell = true; Unquantized point within cell = true, 
> wrapped visitor returned CELL_CROSSES_QUERY on cell 
> Cell(x=-1.0011188543037526 TO 7.390138260796596E-4 

[JENKINS] Lucene-Solr-master-Solaris (64bit/jdk1.8.0) - Build # 770 - Failure!

2016-08-10 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-Solaris/770/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseParallelGC

1 tests failed.
FAILED:  
org.apache.solr.common.cloud.TestCollectionStateWatchers.testDeletionsTriggerWatches

Error Message:
Error from server at http://127.0.0.1:38352/solr: Could not find collection : 
tobedeleted

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at http://127.0.0.1:38352/solr: Could not find collection : 
tobedeleted
at 
__randomizedtesting.SeedInfo.seed([92D33ED9CAB39D02:3012F65648D04B6F]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:608)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:261)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:250)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:413)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:366)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1291)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:1061)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:997)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:149)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:166)
at 
org.apache.solr.common.cloud.TestCollectionStateWatchers.testDeletionsTriggerWatches(TestCollectionStateWatchers.java:252)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)