[jira] [Commented] (LUCENE-7508) [smartcn] tokens are not correctly created if text length > 1024

2016-12-15 Thread Chang KaiShin (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7508?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15753733#comment-15753733
 ] 

Chang KaiShin commented on LUCENE-7508:
---

After looking into the internal handling of the input text, I found the source 
already does what I was trying to do: it takes the input text as a stream and 
uses BUFFERMAX (length 1024) to loop through the entire text. The isSafeEnd 
method you mentioned previously is the key method for deciding token 
boundaries. Currently Lucene does not recognize Chinese sentence breakers, so 
it truncates possible tokens.
With isSafeEnd overridden, the tokenizer finds the last possible break 
position and carries the text remaining after that position over to the next 
loop, so sentences stay intact and are processed correctly.
To Michael McCandless: additional sentence-ending characters are necessary. I 
would name some Chinese breakers such as:
';'
'。'
','
'、'
'~'
'('
')'
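
A minimal sketch of such an override (an illustration only, assuming 
HMMChineseTokenizer is subclassable and using exactly the characters listed 
above; the attached patch may differ):

{code}
import org.apache.lucene.analysis.cn.smart.HMMChineseTokenizer;

// Illustrative subclass only; the actual patch edits HMMChineseTokenizer
// itself. The break characters are the ones listed in the comment above.
public class ChineseBreakAwareTokenizer extends HMMChineseTokenizer {
  @Override
  protected boolean isSafeEnd(char ch) {
    switch (ch) {
      case ';': case '。': case ',': case '、':
      case '~': case '(': case ')':
        return true;  // safe place to cut the 1024-char buffer
      default:
        return super.isSafeEnd(ch);  // default: newlines and line separators
    }
  }
}
{code}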

> [smartcn] tokens are not correctly created if text length > 1024
> 
>
> Key: LUCENE-7508
> URL: https://issues.apache.org/jira/browse/LUCENE-7508
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/analysis
>Affects Versions: 6.2.1
> Environment: Mac OS X 10.10
>Reporter: peina
>  Labels: chinese, tokenization
> Attachments: lucene-7508-test.patch, lucene-7508.patch
>
>
> If the text length is > 1024, HMMChineseTokenizer fails to split sentences 
> correctly.
> Test Sample:
> public static void main(String[] args) throws IOException {
>   Analyzer analyzer = new SmartChineseAnalyzer(); /* will load stopwords */
>   // String sentence = 
> "“七八个物管工作人员对我一个文弱书生拳打脚踢,我极力躲避时还被追打。”前天,微信网友爆料称,一名50多岁的江西教师在昆明被物管群殴,手指骨折,向网友求助。教师为何会被物管殴打?事情的真相又是如何?昨天,记者来到圣世一品小区,通过调查了解,事情的起因源于这名教师在小区里帮女儿散发汗蒸馆广告单,被物管保安发现后,引发冲突。对于群殴教师的说法,该小区物管保安队长称:“保安在追的过程中,确实有拉扯,但并没有殴打教师,至于手指骨折是他自己摔伤的。”爆料江西教师在昆明被物管殴打记者注意到,消息于8月27日发出,爆料者称,自己是江西宜丰崇文中学的一名中年教师黄敏。暑假期间来昆明的女儿家度假。他女儿在昆明与人合伙开了一家汗蒸馆,7月30日开业。8月9日下午6点30分许,他到昆明东二环圣世一品小区为女儿的汗蒸馆散发宣传小广告。小区物管前来制止,他就停止发放行为。黄敏称,小区物管保安人员要求他收回散发出去的广告单,他就去收了。物管要求他到办公室里去接受处理,他也配合了。让他没有想到的是,在处理的过程中,七八个年轻的物管人员突然对他拳打脚踢,他极力躲避时还被追着打,而且这一切,是在小区物管领导的注视下发生的。黄敏说,被打后,他立即报了警。除身上多处软组织挫伤外,伤得最严重的是右手大拇指粉碎性骨折,一掌骨骨折。他到云南省第三人民医院住了7天院,医生说无法手术,只能用夹板固定,也不吃药,待其自然修复,至少要3个月以上,右手大拇指还有可能伤残。为证明自己的说法,黄敏还拿出了官渡区公安分局菊花派出所出具的伤情鉴定委托书。他的伤情被鉴定为轻伤二级。说法帮女儿发宣传小广告教师在小区里被殴打昨日,记者者拨通了黄敏的电话。他说,当时他看见该小区的大门没有关,也没有保安值班。于是,他就进到了小区里帮女儿的汗蒸馆发广告单。在楼栋值班的保安没有阻止的前提下,他乘电梯来到了楼上,为了不影响住户,他将名片放在了房门的把手上。被保安发现时,他才发了四五十张。保安问他干什么?他回答,家里开了汗蒸馆,来宣传一下。两名保安叫他不要发了,并要求他到物管办公室等待领导处理。交谈中,由于对方一直在说方言,黄敏只能听清楚的一句话是,物管叫他去收回小广告。他当即同意了,准备去收。这时,小区的七八名工作人员就殴打了他,其中有穿保安服装的,也有身着便衣的。让他气愤的是,他试图逃跑躲起来,依然被追着殴打。黄敏说,女儿将他被打又维权无门的遭遇发到了微信上,希望找到相关视频和照片,还原事件真相。。";
>   String sentence = 
> "“七八个物管工作人员对我一个文弱书生拳打脚踢,我极力躲避时还被追打。”前天,微信网友爆料称,一名50多岁的江西教师在昆明被物管群殴,手指骨折,向网友求助。教师为何会被物管殴打?事情的真相又是如何?昨天,记者来到圣世一品小区,通过调查了解,事情的起因源于这名教师在小区里帮女儿散发汗蒸馆广告单,被物管保安发现后,引发冲突。对于群殴教师的说法,该小区物管保安队长称:“保安在追的过程中,确实有拉扯,但并没有殴打教师,至于手指骨折是他自己摔伤的。”爆料江西教师在昆明被物管殴打记者注意到,消息于8月27日发出,爆料者称,自己是江西宜丰崇文中学的一名中年教师黄敏。暑假期间来昆明的女儿家度假。他女儿在昆明与人合伙开了一家汗蒸馆,7月30日开业。8月9日下午6点30分许,他到昆明东二环圣世一品小区为女儿的汗蒸馆散发宣传小广告。小区物管前来制止,他就停止发放行为。黄敏称,小区物管保安人员要求他收回散发出去的广告单,他就去收了。物管要求他到办公室里去接受处理,他也配合了。让他没有想到的是,在处理的过程中,七八个年轻的物管人员突然对他拳打脚踢,他极力躲避时还被追着打,而且这一切,是在小区物管领导的注视下发生的。黄敏说,被打后,他立即报了警。除身上多处软组织挫伤外,伤得最严重的是右手大拇指粉碎性骨折,一掌骨骨折。他到云南省第三人民医院住了7天院,医生说无法手术,只能用夹板固定,也不吃药,待其自然修复,至少要3个月以上,右手大拇指还有可能伤残。为证明自己的说法,黄敏还拿出了官渡区公安分局菊花派出所出具的伤情鉴定委托书。他的伤情被鉴定为轻伤二级。说法帮女儿发宣传小广告教师在小区里被殴打昨日,记者者拨通了黄敏的电话。他说,当时他看见该小区的大门没有关,也没有保安值班。于是,他就进到了小区里帮女儿的汗蒸馆发广告单。在楼栋值班的保安没有阻止的前提下,他乘电梯来到了楼上,为了不影响住户,他将名片放在了房门的把手上。被保安发现时,他才发了四五十张。保安问他干什么?他回答,家里开了汗蒸馆,来宣传一下。两名保安叫他不要发了,并要求他到物管办公室等待领导处理。交谈中,由于对方一直在说方言,黄敏只能听清楚的一句话是,物管叫他去收回小广告。他当即同意了,准备去收。这时,小区的七八名工作人员就殴打了他,其中有穿保安服装的,也有身着便衣的。让他气愤的是,他试图逃跑躲起来,依然被追着殴打。黄敏说,女儿将他被打又维权无门的遭遇发到了微信上,希望找到相关视频和照片,还原事件真相";
>   System.out.println(sentence.length());
>   // String sentence = "女儿将他被打又维权无门的遭遇发到了微信上,希望找到相关视频和照片,还原事件真相。";
>   TokenStream tokens = analyzer.tokenStream("dummyfield", sentence);
>   tokens.reset();
>   CharTermAttribute termAttr = (CharTermAttribute) tokens.getAttribute(CharTermAttribute.class);
>   while (tokens.incrementToken()) {
>     // System.out.println(termAttr.toString());
>   }
>   analyzer.close();
> }
> The text length in the above sample is 1027. With this sample, the detected 
> sentences are:
> .
> Sentence:黄敏说,女儿将他被打又维权无门的遭遇发到了微信上,希望找到相关视频和照片,还原事
> Sentence:件真相
> The last 3 characters are detected as a separate sentence, so 还原事件真相 is 
> tokenized as 还原|事|件|真相, when the correct tokens should be 还原|事件|真相.
> Overriding the isSafeEnd method in HMMChineseTokenizer fixes this issue by 
> considering ',' or '。' a safe end of text:
> public class HMMChineseTokenizer extends SegmentingTokenizerBase {
> 
>  /** For sentence tokenization, these are the unambiguous break positions. */
>   protected 

[jira] [Commented] (SOLR-9317) ADDREPLICA command should be more flexible and add 'n' replicas to a collection,shard

2016-12-15 Thread Anshum Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9317?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15753476#comment-15753476
 ] 

Anshum Gupta commented on SOLR-9317:


[~noble.paul] are you working on this one?

> ADDREPLICA command should be more flexible and add 'n' replicas to a 
> collection,shard
> -
>
> Key: SOLR-9317
> URL: https://issues.apache.org/jira/browse/SOLR-9317
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Reporter: Noble Paul
> Fix For: 6.1
>
>
> It should automatically identify the nodes where these replicas should be 
> created as well



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-6.x-Solaris (64bit/jdk1.8.0) - Build # 557 - Still Unstable!

2016-12-15 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Solaris/557/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseParallelGC

1 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.hdfs.HdfsRecoveryZkTest

Error Message:
ObjectTracker found 1 object(s) that were not released!!! [HdfsTransactionLog] 
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException  at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:43)
  at 
org.apache.solr.update.HdfsTransactionLog.<init>(HdfsTransactionLog.java:130)  
at org.apache.solr.update.HdfsUpdateLog.init(HdfsUpdateLog.java:202)  at 
org.apache.solr.update.UpdateHandler.<init>(UpdateHandler.java:137)  at 
org.apache.solr.update.UpdateHandler.<init>(UpdateHandler.java:94)  at 
org.apache.solr.update.DirectUpdateHandler2.<init>(DirectUpdateHandler2.java:102)
  at sun.reflect.GeneratedConstructorAccessor156.newInstance(Unknown Source)  
at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
  at java.lang.reflect.Constructor.newInstance(Constructor.java:423)  at 
org.apache.solr.core.SolrCore.createInstance(SolrCore.java:704)  at 
org.apache.solr.core.SolrCore.createUpdateHandler(SolrCore.java:766)  at 
org.apache.solr.core.SolrCore.initUpdateHandler(SolrCore.java:1005)  at 
org.apache.solr.core.SolrCore.<init>(SolrCore.java:870)  at 
org.apache.solr.core.SolrCore.<init>(SolrCore.java:774)  at 
org.apache.solr.core.CoreContainer.create(CoreContainer.java:842)  at 
org.apache.solr.core.CoreContainer.lambda$load$0(CoreContainer.java:498)  at 
java.util.concurrent.FutureTask.run(FutureTask.java:266)  at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:229)
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
 at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
 at java.lang.Thread.run(Thread.java:745)  

Stack Trace:
java.lang.AssertionError: ObjectTracker found 1 object(s) that were not 
released!!! [HdfsTransactionLog]
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException
at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:43)
at 
org.apache.solr.update.HdfsTransactionLog.<init>(HdfsTransactionLog.java:130)
at org.apache.solr.update.HdfsUpdateLog.init(HdfsUpdateLog.java:202)
at org.apache.solr.update.UpdateHandler.<init>(UpdateHandler.java:137)
at org.apache.solr.update.UpdateHandler.<init>(UpdateHandler.java:94)
at 
org.apache.solr.update.DirectUpdateHandler2.<init>(DirectUpdateHandler2.java:102)
at sun.reflect.GeneratedConstructorAccessor156.newInstance(Unknown 
Source)
at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.solr.core.SolrCore.createInstance(SolrCore.java:704)
at org.apache.solr.core.SolrCore.createUpdateHandler(SolrCore.java:766)
at org.apache.solr.core.SolrCore.initUpdateHandler(SolrCore.java:1005)
at org.apache.solr.core.SolrCore.<init>(SolrCore.java:870)
at org.apache.solr.core.SolrCore.<init>(SolrCore.java:774)
at org.apache.solr.core.CoreContainer.create(CoreContainer.java:842)
at 
org.apache.solr.core.CoreContainer.lambda$load$0(CoreContainer.java:498)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:229)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)


at __randomizedtesting.SeedInfo.seed([7AC57E5DEEE16784]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertNull(Assert.java:551)
at 
org.apache.solr.SolrTestCaseJ4.teardownTestCases(SolrTestCaseJ4.java:266)
at sun.reflect.GeneratedMethodAccessor22.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:870)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 

[jira] [Comment Edited] (SOLR-8593) Integrate Apache Calcite into the SQLHandler

2016-12-15 Thread Julian Hyde (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15753387#comment-15753387
 ] 

Julian Hyde edited comment on SOLR-8593 at 12/16/16 4:21 AM:
-

Would it be correct to say that you have a physical operator which is a 
combination of Aggregate and TopN? This physical operator would have a sorted 
list of grouping fields and also a parameter N (which affects the cost 
estimate). Maybe it's a sub-class of Aggregate with some extra fields. It could 
be created by a planner rule that matches a Sort (with limit) on top of an 
Aggregate and also looks at estimated cardinality of the fields in order to 
sort them.
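
A hypothetical sketch of such a rule (all names are illustrative; only the 
Sort-over-Aggregate match and the limit check come from the description 
above):

{code}
import org.apache.calcite.plan.RelOptRule;
import org.apache.calcite.plan.RelOptRuleCall;
import org.apache.calcite.rel.core.Aggregate;
import org.apache.calcite.rel.core.Sort;

// Fires on a Sort (with limit) on top of an Aggregate, where a combined
// Aggregate+TopN physical operator could be substituted.
public class AggregateTopNRule extends RelOptRule {
  public AggregateTopNRule() {
    super(operand(Sort.class, operand(Aggregate.class, any())));
  }

  @Override
  public void onMatch(RelOptRuleCall call) {
    Sort sort = call.rel(0);
    Aggregate agg = call.rel(1);
    if (sort.fetch == null) {
      return;  // no limit, so this is not a TopN; leave the plan alone
    }
    // Construct the combined operator here, using sort.fetch (N) and the
    // estimated cardinality of agg's grouping fields to order them.
    // call.transformTo(...);  // hypothetical Aggregate+TopN operator
  }
}
{code}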


was (Author: julianhyde):
Would it be correct to say that you have a physical operator which is a 
combination of Aggregate and TopN? This physical operator would have a sorted 
list of grouping fields and also a parameter N (which affects the cost 
estimate). Maybe it's a sub-class of Aggregate with some extra fields.

> Integrate Apache Calcite into the SQLHandler
> 
>
> Key: SOLR-8593
> URL: https://issues.apache.org/jira/browse/SOLR-8593
> Project: Solr
>  Issue Type: Improvement
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Attachments: SOLR-8593.patch, SOLR-8593.patch
>
>
>The Presto SQL Parser was perfect for phase one of the SQLHandler. It was 
> nicely split off from the larger Presto project and it did everything that 
> was needed for the initial implementation.
> Phase two of the SQL work though will require an optimizer. Here is where 
> Apache Calcite comes into play. It has a battle tested cost based optimizer 
> and has been integrated into Apache Drill and Hive.
> This work can begin in trunk following the 6.0 release. The final query plans 
> will continue to be translated to Streaming API objects (TupleStreams), so 
> continued work on the JDBC driver should plug in nicely with the Calcite work.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8593) Integrate Apache Calcite into the SQLHandler

2016-12-15 Thread Julian Hyde (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15753387#comment-15753387
 ] 

Julian Hyde commented on SOLR-8593:
---

Would it be correct to say that you have a physical operator which is a 
combination of Aggregate and TopN? This physical operator would have a sorted 
list of grouping fields and also a parameter N (which affects the cost 
estimate). Maybe it's a sub-class of Aggregate with some extra fields.

> Integrate Apache Calcite into the SQLHandler
> 
>
> Key: SOLR-8593
> URL: https://issues.apache.org/jira/browse/SOLR-8593
> Project: Solr
>  Issue Type: Improvement
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Attachments: SOLR-8593.patch, SOLR-8593.patch
>
>
>The Presto SQL Parser was perfect for phase one of the SQLHandler. It was 
> nicely split off from the larger Presto project and it did everything that 
> was needed for the initial implementation.
> Phase two of the SQL work though will require an optimizer. Here is where 
> Apache Calcite comes into play. It has a battle tested cost based optimizer 
> and has been integrated into Apache Drill and Hive.
> This work can begin in trunk following the 6.0 release. The final query plans 
> will continue to be translated to Streaming API objects (TupleStreams), so 
> continued work on the JDBC driver should plug in nicely with the Calcite work.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8593) Integrate Apache Calcite into the SQLHandler

2016-12-15 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15753373#comment-15753373
 ] 

Joel Bernstein commented on SOLR-8593:
--

If we have two grouping fields *A, B*, nested facets will be gathered using 
the following approach:

1) Gather the *top N* facets for field A.
2) For each of the *top N* facets of field A, find the top N sub-facets for 
field B.

This avoids the exhaustive processing of all the unique combinations of A, B.
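
A toy model of the nested approach, outside Solr (all names and data are 
illustrative):

{code}
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// Take the top N values of field A, then the top N values of field B within
// each, without enumerating every unique (A, B) combination up front.
public class NestedTopN {

  static List<String> topN(List<String[]> rows, int field, int n) {
    Map<String, Long> counts = rows.stream()
        .collect(Collectors.groupingBy(r -> r[field], Collectors.counting()));
    return counts.entrySet().stream()
        .sorted(Map.Entry.<String, Long>comparingByValue().reversed())
        .limit(n)
        .map(Map.Entry::getKey)
        .collect(Collectors.toList());
  }

  public static void main(String[] args) {
    List<String[]> rows = List.of(
        new String[] {"x", "p"}, new String[] {"x", "p"},
        new String[] {"x", "q"}, new String[] {"y", "r"});
    int n = 2;
    for (String a : topN(rows, 0, n)) {                      // top N of A
      List<String[]> bucket = rows.stream()
          .filter(r -> r[0].equals(a))
          .collect(Collectors.toList());
      System.out.println(a + " -> " + topN(bucket, 1, n));   // top N of B
    }
  }
}
{code}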

This is very performant (sub-second) when N is a relatively small number and 
the cardinality of A, B is not too high.

In high cardinality scenarios we can switch to MapReduce mode which sorts the 
Tuples on the GROUP BY fields and shuffles them to worker nodes. In MapReduce 
mode the order of the GROUP BY fields is not important.

Having the ability to use faceting or MapReduce depending on cardinality is one 
of the key features of Solr's SQL implementation.

> Integrate Apache Calcite into the SQLHandler
> 
>
> Key: SOLR-8593
> URL: https://issues.apache.org/jira/browse/SOLR-8593
> Project: Solr
>  Issue Type: Improvement
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Attachments: SOLR-8593.patch, SOLR-8593.patch
>
>
>The Presto SQL Parser was perfect for phase one of the SQLHandler. It was 
> nicely split off from the larger Presto project and it did everything that 
> was needed for the initial implementation.
> Phase two of the SQL work though will require an optimizer. Here is where 
> Apache Calcite comes into play. It has a battle tested cost based optimizer 
> and has been integrated into Apache Drill and Hive.
> This work can begin in trunk following the 6.0 release. The final query plans 
> will continue to be translated to Streaming API objects (TupleStreams), so 
> continued work on the JDBC driver should plug in nicely with the Calcite work.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9712) Saner default for maxWarmingSearchers

2016-12-15 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9712?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15753369#comment-15753369
 ] 

David Smiley commented on SOLR-9712:


+1 good advice

> Saner default for maxWarmingSearchers
> -
>
> Key: SOLR-9712
> URL: https://issues.apache.org/jira/browse/SOLR-9712
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: search
>Reporter: Shalin Shekhar Mangar
> Fix For: master (7.0), 6.4
>
> Attachments: SOLR-9712.patch, SOLR-9712.patch, SOLR-9712.patch
>
>
> As noted in SOLR-9710, the default for maxWarmingSearchers is 
> Integer.MAX_VALUE which is just crazy. Let's have a saner default. Today we 
> log a performance warning when the number of on deck searchers goes over 1. 
> What if we had the default as 1 that expert users can increase if needed?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9712) Saner default for maxWarmingSearchers

2016-12-15 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9712?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15753353#comment-15753353
 ] 

Yonik Seeley commented on SOLR-9712:


OK, how about this?
{quote}
maxWarmingSearchers now defaults to 1, and more importantly commits will now 
*block* if this limit is exceeded instead of throwing an exception (a good 
thing). Consequently there is no longer a risk in overlapping commits. 
Nonetheless users should continue to avoid excessive committing.  Users are 
advised to remove any pre-existing maxWarmingSearchers entries from their 
solrconfig.xml files.
{quote}

> Saner default for maxWarmingSearchers
> -
>
> Key: SOLR-9712
> URL: https://issues.apache.org/jira/browse/SOLR-9712
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: search
>Reporter: Shalin Shekhar Mangar
> Fix For: master (7.0), 6.4
>
> Attachments: SOLR-9712.patch, SOLR-9712.patch, SOLR-9712.patch
>
>
> As noted in SOLR-9710, the default for maxWarmingSearchers is 
> Integer.MAX_VALUE which is just crazy. Let's have a saner default. Today we 
> log a performance warning when the number of on deck searchers goes over 1. 
> What if we had the default as 1 that expert users can increase if needed?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9836) Add more graceful recovery steps when failing to create SolrCore

2016-12-15 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9836?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15753330#comment-15753330
 ] 

Mark Miller commented on SOLR-9836:
---

bq. Possibly best to have two options 

The third option is not very difficult. Lucene already loads the last segments 
file it can. So if we get a corrupt index, we can just sanity-check that the 
segments file can be loaded. If it can't, we can't fix things anyway, so 
recover. If the segments file looks fine, don't recover, because the index 
could be correct.
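
A rough sketch of that sanity check (illustrative only, not taken from the 
attached patches):

{code}
import java.io.IOException;

import org.apache.lucene.index.SegmentInfos;
import org.apache.lucene.store.Directory;

// Decide whether to recover from a replica by checking whether the latest
// segments_N file Lucene can find is actually loadable.
public class SegmentsSanityCheck {
  static boolean shouldRecoverFromReplica(Directory dir) {
    try {
      SegmentInfos.readLatestCommit(dir);  // last segments file Lucene loads
      return false;  // segments file looks fine; the index could be correct
    } catch (IOException | RuntimeException e) {
      return true;   // can't load any segments file; nothing to fix locally
    }
  }
}
{code}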

> Add more graceful recovery steps when failing to create SolrCore
> 
>
> Key: SOLR-9836
> URL: https://issues.apache.org/jira/browse/SOLR-9836
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Reporter: Mike Drob
> Attachments: SOLR-9836.patch, SOLR-9836.patch
>
>
> I have seen several cases where there is a zero-length segments_n file. We 
> haven't identified the root cause of these issues (possibly a poorly timed 
> crash during replication?) but if there is another node available then Solr 
> should be able to recover from this situation. Currently, we log and give up 
> on loading that core, leaving the user to manually intervene.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9836) Add more graceful recovery steps when failing to create SolrCore

2016-12-15 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9836?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15753324#comment-15753324
 ] 

Mark Miller commented on SOLR-9836:
---

bq. Fall back to earlier segments file implementation is missing

This should already be Lucene's behavior. I assume that if it's not falling 
back, it's because there is no previous segments file to fall back to.

> Add more graceful recovery steps when failing to create SolrCore
> 
>
> Key: SOLR-9836
> URL: https://issues.apache.org/jira/browse/SOLR-9836
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Reporter: Mike Drob
> Attachments: SOLR-9836.patch, SOLR-9836.patch
>
>
> I have seen several cases where there is a zero-length segments_n file. We 
> haven't identified the root cause of these issues (possibly a poorly timed 
> crash during replication?) but if there is another node available then Solr 
> should be able to recover from this situation. Currently, we log and give up 
> on loading that core, leaving the user to manually intervene.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9712) Saner default for maxWarmingSearchers

2016-12-15 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9712?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15753323#comment-15753323
 ] 

Yonik Seeley commented on SOLR-9712:


bq. If the CHANGES.txt is no more clear than the title of this issue then users 
won't realize what this is all about

Heh... yep.  I often ignore the original JIRA title for the commit message + 
CHANGES entry and try to pick something that means the most to developers for 
the former and users for the latter.  This change should probably go under the 
"change of behavior" section.

> Saner default for maxWarmingSearchers
> -
>
> Key: SOLR-9712
> URL: https://issues.apache.org/jira/browse/SOLR-9712
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: search
>Reporter: Shalin Shekhar Mangar
> Fix For: master (7.0), 6.4
>
> Attachments: SOLR-9712.patch, SOLR-9712.patch, SOLR-9712.patch
>
>
> As noted in SOLR-9710, the default for maxWarmingSearchers is 
> Integer.MAX_VALUE which is just crazy. Let's have a saner default. Today we 
> log a performance warning when the number of on deck searchers goes over 1. 
> What if we had the default as 1 that expert users can increase if needed?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9712) Saner default for maxWarmingSearchers

2016-12-15 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9712?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15753275#comment-15753275
 ] 

David Smiley commented on SOLR-9712:


This is wonderful and a big deal; thanks Yonik!  I didn't notice a CHANGES.txt 
in the patch, so naturally you'll do that when you commit (I do the same 
approach).  If the CHANGES.txt is no more clear than the title of this issue 
then users won't realize what this is all about, I think.  Here's my attempt to 
word it:

bq. SOLR-9712: maxWarmingSearchers now defaults to 1, and more importantly 
commits will now \*block\* if this limit is reached instead of throwing an 
error (a good thing).  Consequently there is no longer a risk in overlapping 
commits.  Nonetheless users should continue to avoid excessive committing.

> Saner default for maxWarmingSearchers
> -
>
> Key: SOLR-9712
> URL: https://issues.apache.org/jira/browse/SOLR-9712
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: search
>Reporter: Shalin Shekhar Mangar
> Fix For: master (7.0), 6.4
>
> Attachments: SOLR-9712.patch, SOLR-9712.patch, SOLR-9712.patch
>
>
> As noted in SOLR-9710, the default for maxWarmingSearchers is 
> Integer.MAX_VALUE which is just crazy. Let's have a saner default. Today we 
> log a performance warning when the number of on deck searchers goes over 1. 
> What if we had the default as 1 that expert users can increase if needed?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-9712) Saner default for maxWarmingSearchers

2016-12-15 Thread Yonik Seeley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9712?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yonik Seeley updated SOLR-9712:
---
Attachment: SOLR-9712.patch

OK, here's the final patch that I'll probably commit tomorrow. Most 
maxWarmingSearchers specifications have been removed from test configurations. 
This doesn't really reduce coverage much, because maxWarmingSearchers=1 (the 
default) is not special-cased, so all code paths will still be exercised with 
concurrent commits.

> Saner default for maxWarmingSearchers
> -
>
> Key: SOLR-9712
> URL: https://issues.apache.org/jira/browse/SOLR-9712
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: search
>Reporter: Shalin Shekhar Mangar
> Fix For: master (7.0), 6.4
>
> Attachments: SOLR-9712.patch, SOLR-9712.patch, SOLR-9712.patch
>
>
> As noted in SOLR-9710, the default for maxWarmingSearchers is 
> Integer.MAX_VALUE which is just crazy. Let's have a saner default. Today we 
> log a performance warning when the number of on deck searchers goes over 1. 
> What if we had the default as 1 that expert users can increase if needed?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9867) The schemaless example can not be started after being stopped.

2016-12-15 Thread Alexandre Rafalovitch (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9867?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15753154#comment-15753154
 ] 

Alexandre Rafalovitch commented on SOLR-9867:
-

Is this with trunk or a particular version?

> The schemaless example can not be started after being stopped.
> --
>
> Key: SOLR-9867
> URL: https://issues.apache.org/jira/browse/SOLR-9867
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mark Miller
>
> I'm having trouble when I start up the schemaless example after shutting it 
> down.
> I first tracked this down to the fact that the run example tool gets an 
> error when it tries to create the SolrCore (again; it already exists), and 
> so it deletes the core's instance dir, which leads to tlog and index lock 
> errors in Solr.
> The reason it seems to be trying to create the core when it already exists 
> is that the run example tool uses a core status call to check existence, and 
> because the core is loading, we don't consider it as existing. I added a 
> check to look for core.properties.
> That seemed to let me start up, but my first requests failed because the 
> core was still loading. It appears CoreContainer#getCore is supposed to be 
> blocking so you don't have this problem, but there must be an issue, because 
> it is not blocking.
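
A sketch of the kind of existence check described above (the path layout and 
names are illustrative assumptions, not the committed change):

{code}
import java.nio.file.Files;
import java.nio.file.Path;

// Treat a core as existing if its core.properties is on disk, so a core that
// is merely still loading is not mistaken for a missing core.
public class CoreExistenceCheck {
  static boolean coreExists(Path solrHome, String coreName) {
    return Files.isRegularFile(
        solrHome.resolve(coreName).resolve("core.properties"));
  }
}
{code}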



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7592) EOFException while opening index should be rethrown as CorruptIndexException

2016-12-15 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7592?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15753056#comment-15753056
 ] 

Michael McCandless commented on LUCENE-7592:


bq. Did you forget to assign to yourself?

I tend not to bother doing this :)  Just an extra seemingly useless step.

> EOFException while opening index should be rethrown as CorruptIndexException
> 
>
> Key: LUCENE-7592
> URL: https://issues.apache.org/jira/browse/LUCENE-7592
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/index
>Reporter: Mike Drob
> Fix For: master (7.0), 6.4
>
> Attachments: LUCENE-7592.patch, LUCENE-7592.patch
>
>
> When opening an index, if some files were previously truncated then this 
> should throw the more general CorruptIndexException instead of the specific 
> EOFException to indicate to a consumer that this is not a transient or 
> internally recoverable state.
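
A sketch of the rethrow this asks for (illustrative, not the attached patch):

{code}
import java.io.EOFException;
import java.io.IOException;

import org.apache.lucene.index.CorruptIndexException;
import org.apache.lucene.store.IndexInput;

// Report a truncated file as index corruption rather than a plain EOF.
public class RethrowSketch {
  static long readHeaderValue(IndexInput in) throws IOException {
    try {
      return in.readLong();  // stand-in for any header/metadata read
    } catch (EOFException e) {
      throw new CorruptIndexException("file truncated", in, e);
    }
  }
}
{code}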



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS-EA] Lucene-Solr-master-Linux (64bit/jdk-9-ea+147) - Build # 18538 - Unstable!

2016-12-15 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/18538/
Java: 64bit/jdk-9-ea+147 -XX:-UseCompressedOops -XX:+UseParallelGC

1 tests failed.
FAILED:  org.apache.solr.handler.component.SpellCheckComponentTest.test

Error Message:
List size mismatch @ spellcheck/suggestions

Stack Trace:
java.lang.RuntimeException: List size mismatch @ spellcheck/suggestions
at 
__randomizedtesting.SeedInfo.seed([5DDC36407C02487C:D588099AD2FE2584]:0)
at org.apache.solr.SolrTestCaseJ4.assertJQ(SolrTestCaseJ4.java:906)
at org.apache.solr.SolrTestCaseJ4.assertJQ(SolrTestCaseJ4.java:853)
at 
org.apache.solr.handler.component.SpellCheckComponentTest.test(SpellCheckComponentTest.java:147)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:538)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:811)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:462)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.base/java.lang.Thread.run(Thread.java:844)




Build Log:
[...truncated 11776 lines...]
   [junit4] Suite: org.apache.solr.handler.component.SpellCheckComponentTest
   

[jira] [Commented] (SOLR-9712) Saner default for maxWarmingSearchers

2016-12-15 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9712?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15752939#comment-15752939
 ] 

Yonik Seeley commented on SOLR-9712:


OK, I have clean runs now (with no further changes)... looks like it was a bad 
checkout.

> Saner default for maxWarmingSearchers
> -
>
> Key: SOLR-9712
> URL: https://issues.apache.org/jira/browse/SOLR-9712
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: search
>Reporter: Shalin Shekhar Mangar
> Fix For: master (7.0), 6.4
>
> Attachments: SOLR-9712.patch, SOLR-9712.patch
>
>
> As noted in SOLR-9710, the default for maxWarmingSearchers is 
> Integer.MAX_VALUE which is just crazy. Let's have a saner default. Today we 
> log a performance warning when the number of on deck searchers goes over 1. 
> What if we had the default as 1 that expert users can increase if needed?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-8593) Integrate Apache Calcite into the SQLHandler

2016-12-15 Thread Julian Hyde (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15752805#comment-15752805
 ] 

Julian Hyde edited comment on SOLR-8593 at 12/15/16 11:15 PM:
--

I wasn't familiar with faceting, but I quickly read 
https://wiki.apache.org/solr/SolrFacetingOverview.

Suppose table T has fields a, b, c, d, and you want to do a faceted search on 
b, a. If you issue the query {{select b, a, count\(*) from t group by b, a}} 
then you will end up with

{code}
Project($1, $0, $2)
  Aggregate({0, 1}, COUNT(*))
Scan(table=T)
{code}

and as you correctly say, {{0, 1}} represents {{a, b}} because that is the 
physical order of the columns.

Can you explain why the faceting algorithm is interested in the order of the 
columns? Is it because it needs to produce the output ordered or nested on 
those columns? If so, we can rephrase the SQL query so that we are accurately 
expressing in relational algebra what we need.


was (Author: julianhyde):
I wasn't familiar with faceting, but I quickly read 
https://wiki.apache.org/solr/SolrFacetingOverview.

Suppose table T has fields a, b, c, d, and you want to do a faceted search on 
b, a. If you issue the query {{select b, a, count\(*) from t group by b, a}} 
then you will end up with

{code}
Project($1, $0, $2)
  Aggregate({0, 1}, COUNT(*))
Scan(table=T)
{code}

and as you correctly say, {{ \{0, 1\} }} represents {{ \{a, b\} }} because that 
is the physical order of the columns.

Can you explain why the faceting algorithm is interested in the order of the 
columns? Is it because it needs to produce the output ordered or nested on 
those columns? If so, we can rephrase the SQL query so that we are accurately 
expressing in relational algebra what we need.

> Integrate Apache Calcite into the SQLHandler
> 
>
> Key: SOLR-8593
> URL: https://issues.apache.org/jira/browse/SOLR-8593
> Project: Solr
>  Issue Type: Improvement
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Attachments: SOLR-8593.patch, SOLR-8593.patch
>
>
>The Presto SQL Parser was perfect for phase one of the SQLHandler. It was 
> nicely split off from the larger Presto project and it did everything that 
> was needed for the initial implementation.
> Phase two of the SQL work though will require an optimizer. Here is where 
> Apache Calcite comes into play. It has a battle tested cost based optimizer 
> and has been integrated into Apache Drill and Hive.
> This work can begin in trunk following the 6.0 release. The final query plans 
> will continue to be translated to Streaming API objects (TupleStreams), so 
> continued work on the JDBC driver should plug in nicely with the Calcite work.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-8593) Integrate Apache Calcite into the SQLHandler

2016-12-15 Thread Julian Hyde (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15752805#comment-15752805
 ] 

Julian Hyde edited comment on SOLR-8593 at 12/15/16 11:14 PM:
--

I wasn't familiar with faceting, but I quickly read 
https://wiki.apache.org/solr/SolrFacetingOverview.

Suppose table T has fields a, b, c, d, and you want to do a faceted search on 
b, a. If you issue the query {{select b, a, count\(*) from t group by b, a}} 
then you will end up with

{code}
Project($1, $0, $2)
  Aggregate({0, 1}, COUNT(*))
Scan(table=T)
{code}

and as you correctly say, {{ \{0, 1\} }} represents {{ \{a, b\} }} because that 
is the physical order of the columns.

Can you explain why the faceting algorithm is interested in the order of the 
columns? Is it because it needs to produce the output ordered or nested on 
those columns? If so, we can rephrase the SQL query so that we are accurately 
expressing in relational algebra what we need.


was (Author: julianhyde):
I wasn't familiar with faceting, but I quickly read 
https://wiki.apache.org/solr/SolrFacetingOverview.

Suppose table T has fields a, b, c, d, and you want to do a faceted search on 
b, a. If you issue the query {{select b, a, count\(*) from t group by b, a}} 
then you will end up with

{code}
Project($1, $0, $2)
  Aggregate({0, 1}, COUNT(*))
Scan(table=T)
{code}

and as you correctly say, {0, 1} represents {a, b} because that is the physical 
order of the columns.

Can you explain why the faceting algorithm is interested in the order of the 
columns? Is it because it needs to produce the output ordered or nested on 
those columns? If so, we can rephrase the SQL query so that we are accurately 
expressing in relational algebra what we need.

> Integrate Apache Calcite into the SQLHandler
> 
>
> Key: SOLR-8593
> URL: https://issues.apache.org/jira/browse/SOLR-8593
> Project: Solr
>  Issue Type: Improvement
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Attachments: SOLR-8593.patch, SOLR-8593.patch
>
>
>The Presto SQL Parser was perfect for phase one of the SQLHandler. It was 
> nicely split off from the larger Presto project and it did everything that 
> was needed for the initial implementation.
> Phase two of the SQL work though will require an optimizer. Here is where 
> Apache Calcite comes into play. It has a battle tested cost based optimizer 
> and has been integrated into Apache Drill and Hive.
> This work can begin in trunk following the 6.0 release. The final query plans 
> will continue to be translated to Streaming API objects (TupleStreams), so 
> continued work on the JDBC driver should plug in nicely with the Calcite work.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-Tests-6.x - Build # 603 - Still Unstable

2016-12-15 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-6.x/603/

1 tests failed.
FAILED:  
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testCollectionsAPI

Error Message:
expected:<3> but was:<1>

Stack Trace:
java.lang.AssertionError: expected:<3> but was:<1>
at 
__randomizedtesting.SeedInfo.seed([71B5672EFBEAD997:39C0139AFDD9F602]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at org.junit.Assert.assertEquals(Assert.java:456)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testCollectionsAPI(CollectionsAPIDistributedZkTest.java:516)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:811)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:462)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)




Build Log:
[...truncated 11032 lines...]
   [junit4] Suite: org.apache.solr.cloud.CollectionsAPIDistributedZkTest
   [junit4]   2> Creating dataDir: 

[jira] [Commented] (SOLR-8593) Integrate Apache Calcite into the SQLHandler

2016-12-15 Thread Julian Hyde (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15752805#comment-15752805
 ] 

Julian Hyde commented on SOLR-8593:
---

I wasn't familiar with faceting, but I quickly read 
https://wiki.apache.org/solr/SolrFacetingOverview.

Suppose table T has fields a, b, c, d, and you want to do a faceted search on 
b, a. If you issue the query {{select b, a, count(*) from t group by b, a}} 
then you will end up with

{code}
Project($1, $0, $2)
  Aggregate({0, 1}, COUNT(*))
Scan(table=T)
{code}

and as you correctly say, {0, 1} represents {a, b} because that is the physical 
order of the columns.

Can you explain why the faceting algorithm is interested in the order of the 
columns? Is it because it needs to produce the output ordered or nested on 
those columns? If so, we can rephrase the SQL query so that we are accurately 
expressing in relational algebra what we need.

> Integrate Apache Calcite into the SQLHandler
> 
>
> Key: SOLR-8593
> URL: https://issues.apache.org/jira/browse/SOLR-8593
> Project: Solr
>  Issue Type: Improvement
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Attachments: SOLR-8593.patch, SOLR-8593.patch
>
>
>The Presto SQL Parser was perfect for phase one of the SQLHandler. It was 
> nicely split off from the larger Presto project and it did everything that 
> was needed for the initial implementation.
> Phase two of the SQL work though will require an optimizer. Here is where 
> Apache Calcite comes into play. It has a battle tested cost based optimizer 
> and has been integrated into Apache Drill and Hive.
> This work can begin in trunk following the 6.0 release. The final query plans 
> will continue to be translated to Streaming API objects (TupleStreams), so 
> continued work on the JDBC driver should plug in nicely with the Calcite work.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-8593) Integrate Apache Calcite into the SQLHandler

2016-12-15 Thread Julian Hyde (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15752805#comment-15752805
 ] 

Julian Hyde edited comment on SOLR-8593 at 12/15/16 11:13 PM:
--

I wasn't familiar with faceting, but I quickly read 
https://wiki.apache.org/solr/SolrFacetingOverview.

Suppose table T has fields a, b, c, d, and you want to do a faceted search on 
b, a. If you issue the query {{select b, a, count\(*) from t group by b, a}} 
then you will end up with

{code}
Project($1, $0, $2)
  Aggregate({0, 1}, COUNT(*))
Scan(table=T)
{code}

and as you correctly say, {0, 1} represents {a, b} because that is the physical 
order of the columns.

Can you explain why the faceting algorithm is interested in the order of the 
columns? Is it because it needs to produce the output ordered or nested on 
those columns? If so, we can rephrase the SQL query so that we are accurately 
expressing in relational algebra what we need.


was (Author: julianhyde):
I wasn't familiar with faceting, but I quickly read 
https://wiki.apache.org/solr/SolrFacetingOverview.

Suppose table T has fields a, b, c, d, and you want to do a faceted search on 
b, a. If you issue the query {{select b, a, count(*) from t group by b, a}} 
then you will end up with

{code}
Project($1, $0, $2)
  Aggregate({0, 1}, COUNT(*))
Scan(table=T)
{code}

and as you correctly say, {0, 1} represents {a, b} because that is the physical 
order of the columns.

Can you explain why the faceting algorithm is interested in the order of the 
columns? Is it because it needs to produce the output ordered or nested on 
those columns? If so, we can rephrase the SQL query so that we are accurately 
expressing in relational algebra what we need.

> Integrate Apache Calcite into the SQLHandler
> 
>
> Key: SOLR-8593
> URL: https://issues.apache.org/jira/browse/SOLR-8593
> Project: Solr
>  Issue Type: Improvement
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Attachments: SOLR-8593.patch, SOLR-8593.patch
>
>
>The Presto SQL Parser was perfect for phase one of the SQLHandler. It was 
> nicely split off from the larger Presto project and it did everything that 
> was needed for the initial implementation.
> Phase two of the SQL work though will require an optimizer. Here is where 
> Apache Calcite comes into play. It has a battle tested cost based optimizer 
> and has been integrated into Apache Drill and Hive.
> This work can begin in trunk following the 6.0 release. The final query plans 
> will continue to be translated to Streaming API objects (TupleStreams), so 
> continued work on the JDBC driver should plug in nicely with the Calcite work.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8538) Kerberos ticket is not renewed automatically when storing index on secured HDFS

2016-12-15 Thread Kevin Risden (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8538?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15752795#comment-15752795
 ] 

Kevin Risden commented on SOLR-8538:


The related HWX community post: 
https://community.hortonworks.com/questions/9394/kerberos-ticket-isnt-being-renewed-by-solr-when-st.html

> Kerberos ticket is not renewed automatically when storing index on secured 
> HDFS
> ---
>
> Key: SOLR-8538
> URL: https://issues.apache.org/jira/browse/SOLR-8538
> Project: Solr
>  Issue Type: Bug
>  Components: Hadoop Integration, hdfs, security
>Affects Versions: 5.2.1
> Environment: HDP 2.3
>Reporter: Andrew Bumstead
>
> It seems that when Solr is configured to store its index files on a 
> Kerberized HDFS, there is no built-in mechanism by which Solr will renew its 
> Kerberos ticket before it expires.
> The impact is that after the default ticket lifetime has elapsed (typically 
> 24 hours), Solr becomes unable to connect to HDFS to read/write and requires 
> a restart or a manual kinit command to be run.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9599) DocValues performance regression with new iterator API

2016-12-15 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9599?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15752788#comment-15752788
 ] 

Yonik Seeley commented on SOLR-9599:


No, the list of subtasks isn't comprehensive or anything... I added some as 
needed so I could commit some progress.

> DocValues performance regression with new iterator API
> --
>
> Key: SOLR-9599
> URL: https://issues.apache.org/jira/browse/SOLR-9599
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: master (7.0)
>Reporter: Yonik Seeley
> Fix For: master (7.0)
>
>
> I did a quick performance comparison of faceting indexed fields (i.e. 
> docvalues are not stored) using method=dv before and after the new docvalues 
> iterator went in (LUCENE-7407).
> 5M document index, 21 segments, single valued string fields w/ no missing 
> values.
> || field cardinality || new_time / old_time ||
> |10|2.01|
> |1000|2.02|
> |1|1.85|
> |10|1.56|
> |100|1.31|
> So unfortunately, often twice as slow.
> See followup messages for tests using real docvalues as well.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8593) Integrate Apache Calcite into the SQLHandler

2016-12-15 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15752746#comment-15752746
 ] 

ASF GitHub Bot commented on SOLR-8593:
--

Github user joel-bernstein commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/104#discussion_r92718063
  
--- Diff: 
solr/solrj/src/java/org/apache/solr/client/solrj/io/ops/GreaterThanOperation.java
 ---
@@ -0,0 +1,70 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.solr.client.solrj.io.ops;
+
+import java.io.IOException;
+import java.util.UUID;
+
+import org.apache.solr.client.solrj.io.Tuple;
+import org.apache.solr.client.solrj.io.stream.expr.Explanation;
+import 
org.apache.solr.client.solrj.io.stream.expr.Explanation.ExpressionType;
+import org.apache.solr.client.solrj.io.stream.expr.StreamExpression;
+import org.apache.solr.client.solrj.io.stream.expr.StreamFactory;
+
+public class GreaterThanOperation extends LeafOperation {
+
+  private static final long serialVersionUID = 1;
+  private UUID operationNodeId = UUID.randomUUID();
+
+  public void operate(Tuple tuple) {
+    this.tuple = tuple;
+  }
+
+  public GreaterThanOperation(String field, double val) {
--- End diff --

This is the implementation of Tuple.getDouble():

public Double getDouble(Object key) {
  Object o = this.fields.get(key);

  if(o == null) {
    return null;
  }

  if(o instanceof Double) {
    return (Double)o;
  } else {
    //Attempt to parse the double
    return Double.parseDouble(o.toString());
  }
}

So any number will be translated to a double for comparison. I think we can 
implement a String comparison in a separate operation.
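
For illustration, here's a hypothetical snippet of how that plays out (the 
field values are made up, assuming the {{Tuple(Map)}} constructor):

{code}
import java.util.HashMap;
import java.util.Map;

import org.apache.solr.client.solrj.io.Tuple;

public class GetDoubleDemo {
  public static void main(String[] args) {
    // Made-up values, just to illustrate the parsing behavior above.
    Map fields = new HashMap();
    fields.put("a", 10L);      // a Long
    fields.put("b", "3.14");   // a numeric String
    fields.put("c", "foo");    // a non-numeric String

    Tuple t = new Tuple(fields);
    System.out.println(t.getDouble("a"));  // 10.0, via Double.parseDouble("10")
    System.out.println(t.getDouble("b"));  // 3.14
    t.getDouble("c");                      // throws NumberFormatException
  }
}
{code}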


> Integrate Apache Calcite into the SQLHandler
> 
>
> Key: SOLR-8593
> URL: https://issues.apache.org/jira/browse/SOLR-8593
> Project: Solr
>  Issue Type: Improvement
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Attachments: SOLR-8593.patch, SOLR-8593.patch
>
>
> The Presto SQL Parser was perfect for phase one of the SQLHandler. It was 
> nicely split off from the larger Presto project and it did everything that 
> was needed for the initial implementation.
> Phase two of the SQL work, though, will require an optimizer. Here is where 
> Apache Calcite comes into play. It has a battle-tested, cost-based optimizer 
> and has been integrated into Apache Drill and Hive.
> This work can begin in trunk following the 6.0 release. The final query plans 
> will continue to be translated to Streaming API objects (TupleStreams), so 
> continued work on the JDBC driver should plug in nicely with the Calcite work.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] lucene-solr pull request #104: SOLR-8593 - WIP

2016-12-15 Thread joel-bernstein
Github user joel-bernstein commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/104#discussion_r92718063
  
--- Diff: 
solr/solrj/src/java/org/apache/solr/client/solrj/io/ops/GreaterThanOperation.java
 ---
@@ -0,0 +1,70 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.solr.client.solrj.io.ops;
+
+import java.io.IOException;
+import java.util.UUID;
+
+import org.apache.solr.client.solrj.io.Tuple;
+import org.apache.solr.client.solrj.io.stream.expr.Explanation;
+import org.apache.solr.client.solrj.io.stream.expr.Explanation.ExpressionType;
+import org.apache.solr.client.solrj.io.stream.expr.StreamExpression;
+import org.apache.solr.client.solrj.io.stream.expr.StreamFactory;
+
+public class GreaterThanOperation extends LeafOperation {
+
+  private static final long serialVersionUID = 1;
+  private UUID operationNodeId = UUID.randomUUID();
+
+  public void operate(Tuple tuple) {
+    this.tuple = tuple;
+  }
+
+  public GreaterThanOperation(String field, double val) {
--- End diff --

This is the implementation of Tuple.getDouble():

public Double getDouble(Object key) {
  Object o = this.fields.get(key);

  if(o == null) {
    return null;
  }

  if(o instanceof Double) {
    return (Double)o;
  } else {
    //Attempt to parse the double
    return Double.parseDouble(o.toString());
  }
}

So any number will be translated to a double for comparison. I think we can 
implement a String comparison in a separate operation.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-9836) Add more graceful recovery steps when failing to create SolrCore

2016-12-15 Thread Mike Drob (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9836?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Drob updated SOLR-9836:

Attachment: SOLR-9836.patch

Current WIP patch.

* Moved {{modifyIndexProps}} to {{SolrCore}}
* Added a system property toggle for controlling the desired behaviour here.
** Property name and values are shots in the dark and by no means final.
** Used an enum because it made sense logically at the time; not sure if this 
actually matters.
* Switched to looking for CorruptIndexException.

* Falling back to an earlier segments file is not implemented yet, pending some 
questions below (there's a unit test, though).
** It's very hard to tell whether it was actually the segments file that was 
corrupt, or something else.
** Is it sufficient to delete {{segments_n}} and let Lucene try to read from 
the new "latest" commit? Will this screw up replication? Do we need to update 
the generation anywhere else? And I'm still nervous about indiscriminately 
deleting files where recovery might be possible. I guess that's the point of 
the config options.
** Another option is to hack a FilterDirectory on the index that would hide the 
latest {{segments_n}} file instead of deleting it (see the sketch below). That 
might work for opening the index, but we will likely end up with write 
conflicts the next time we commit.

The more I toss this idea around, the more it feels like something that would 
be more cleanly handled at the Lucene level. It is possibly best to have two 
options (recover from leader, do nothing) instead of the initial three proposed 
by [~markrmil...@gmail.com] and expand on them later.
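
For the FilterDirectory idea, a rough sketch of what I have in mind (the names 
are made up, the generation handling is hand-waved, and this is not in the 
attached patch):

{code}
import java.io.IOException;
import java.util.Arrays;

import org.apache.lucene.store.Directory;
import org.apache.lucene.store.FilterDirectory;

/** Sketch only: hides one segments_N file from directory listings so that
 *  opening the index resolves to the previous commit instead. */
public class HideSegmentsFileDirectory extends FilterDirectory {
  private final String hiddenSegmentsFile;

  public HideSegmentsFileDirectory(Directory in, String hiddenSegmentsFile) {
    super(in);
    this.hiddenSegmentsFile = hiddenSegmentsFile;
  }

  @Override
  public String[] listAll() throws IOException {
    // Filter the suspect segments_N out so the older generation is found.
    return Arrays.stream(in.listAll())
        .filter(f -> !f.equals(hiddenSegmentsFile))
        .toArray(String[]::new);
  }
}
{code}

The write-conflict worry on the next commit still applies, of course.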

> Add more graceful recovery steps when failing to create SolrCore
> 
>
> Key: SOLR-9836
> URL: https://issues.apache.org/jira/browse/SOLR-9836
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Reporter: Mike Drob
> Attachments: SOLR-9836.patch, SOLR-9836.patch
>
>
> I have seen several cases where there is a zero-length segments_n file. We 
> haven't identified the root cause of these issues (possibly a poorly timed 
> crash during replication?) but if there is another node available then Solr 
> should be able to recover from this situation. Currently, we log and give up 
> on loading that core, leaving the user to manually intervene.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6989) Implement MMapDirectory unmapping for coming Java 9 changes

2016-12-15 Thread Sanne Grinovero (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6989?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15752649#comment-15752649
 ] 

Sanne Grinovero commented on LUCENE-6989:
-

We have more recent releases of Hibernate Search using Lucene 5.5.x, but we 
typically aim to support older releases as well, for some reasonable time. It 
just so happens that Lucene 5.3 isn't that old yet from our perspective. While I 
constantly work to motivate people to move to the latest, for many Lucene 5.3 
is working just great.

The OSS communities we target typically will not expect API changes in a 
maintenance release, and we happen to (proudly) expose Lucene as public API, as 
I believe that hiding it all under some wrapping layer would not be as 
powerful. Exposing Lucene as public API implies I can't really update my Lucene 
dependency by more than a micro (bugfix) release when doing a micro/bugfix 
release myself: people are used to a Lucene major/minor update happening only 
in a Hibernate Search major/minor update.

Of course if that's not feasible, we might have to advise that those older 
releases won't be compatible with Java 9; that's a possible outcome. I guess 
we'll see how the final Java 9 release will make this doable. See you at 
FOSDEM, hopefully with my colleague Andrew Haley as well ;-)

> Implement MMapDirectory unmapping for coming Java 9 changes
> ---
>
> Key: LUCENE-6989
> URL: https://issues.apache.org/jira/browse/LUCENE-6989
> Project: Lucene - Core
>  Issue Type: Task
>  Components: core/store
>Reporter: Uwe Schindler
>Assignee: Uwe Schindler
>  Labels: Java9
> Fix For: 6.0, 6.4
>
> Attachments: LUCENE-6989-disable5x.patch, 
> LUCENE-6989-disable5x.patch, LUCENE-6989-fixbuild148.patch, 
> LUCENE-6989-v2.patch, LUCENE-6989-v3-post-b148.patch, LUCENE-6989.patch, 
> LUCENE-6989.patch, LUCENE-6989.patch, LUCENE-6989.patch
>
>
> Originally, the sun.misc.Cleaner interface was declared as "critical API" in 
> [JEP 260|http://openjdk.java.net/jeps/260]
> Unfortunately the decision was changed in favor of an officially supported 
> {{java.lang.ref.Cleaner}} API. A side effect of this change is to move all 
> existing {{sun.misc.Cleaner}} APIs into a non-exported package. This causes 
> our forceful unmapping to no longer work, because we can get the cleaner 
> instance via reflection, but trying to invoke it will throw one of the new 
> Jigsaw RuntimeExceptions because it is completely inaccessible. This will make 
> our forceful unmapping fail. There are also no changes in the garbage 
> collector, so the problem still exists.
> For more information see this [mailing list 
> thread|http://mail.openjdk.java.net/pipermail/core-libs-dev/2016-January/thread.html#38243].
> This commit will likely be done, making our unmapping efforts no longer 
> work. Alan Bateman is aware of this issue and will open a new issue at 
> OpenJDK to allow forceful unmapping without using the now private 
> sun.misc.Cleaner. The idea is to let the internal class sun.misc.Cleaner 
> implement the Runnable interface, so we can simply cast to Runnable and call 
> the run() method to unmap. The code would then work. This will lead to minor 
> changes in our unmapper in MMapDirectory: an instanceof check and casting if 
> possible.
> I opened this issue to keep track and implement the changes as soon as 
> possible, so people will have working unmapping when Java 9 comes out. 
> Current Lucene versions will no longer work with Java 9.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7592) EOFException while opening index should be rethrown as CorruptIndexException

2016-12-15 Thread Mike Drob (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7592?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15752634#comment-15752634
 ] 

Mike Drob commented on LUCENE-7592:
---

Thanks, [~mikemccand]!

Did you forget to assign to yourself?

> EOFException while opening index should be rethrown as CorruptIndexException
> 
>
> Key: LUCENE-7592
> URL: https://issues.apache.org/jira/browse/LUCENE-7592
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/index
>Reporter: Mike Drob
> Fix For: master (7.0), 6.4
>
> Attachments: LUCENE-7592.patch, LUCENE-7592.patch
>
>
> When opening an index, if some files were previously truncated then this 
> should throw the more general CorruptIndexException instead of the specific 
> EOFException to indicate to a consumer that this is not a transient or 
> internally recoverable state.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-9870) Typos in SolrCore

2016-12-15 Thread Mike Drob (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9870?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Drob updated SOLR-9870:

Priority: Minor  (was: Major)

> Typos in SolrCore
> -
>
> Key: SOLR-9870
> URL: https://issues.apache.org/jira/browse/SOLR-9870
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Reporter: Mike Drob
>Priority: Minor
> Attachments: SOLR-9870.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-9870) Typos in SolrCore

2016-12-15 Thread Mike Drob (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9870?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Drob updated SOLR-9870:

Attachment: SOLR-9870.patch

> Typos in SolrCore
> -
>
> Key: SOLR-9870
> URL: https://issues.apache.org/jira/browse/SOLR-9870
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Reporter: Mike Drob
> Attachments: SOLR-9870.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-9870) Typos in SolrCore

2016-12-15 Thread Mike Drob (JIRA)
Mike Drob created SOLR-9870:
---

 Summary: Typos in SolrCore
 Key: SOLR-9870
 URL: https://issues.apache.org/jira/browse/SOLR-9870
 Project: Solr
  Issue Type: Task
  Security Level: Public (Default Security Level. Issues are Public)
  Components: documentation
Reporter: Mike Drob






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-9869) MiniSolrCloudCluster does not always remove jettys from running list after stopping them

2016-12-15 Thread Mike Drob (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9869?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Drob updated SOLR-9869:

Attachment: SOLR-9869.patch

Patch available.

> MiniSolrCloudCluster does not always remove jettys from running list after 
> stopping them
> 
>
> Key: SOLR-9869
> URL: https://issues.apache.org/jira/browse/SOLR-9869
> Project: Solr
>  Issue Type: Test
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Tests
>Reporter: Mike Drob
> Attachments: SOLR-9869.patch
>
>
> MiniSolrCloudCluster has two {{stopJettySolrRunner}} methods that behave 
> differently.
> The {{int}} version calls {{jettys.remove(index);}} to remove the now stopped 
> jetty from the list of running jettys.
> The version that takes a {{JettySolrRunner}}, however, does not modify the 
> running list.
> This can cause calls to {{getReplicaJetty}} to fail after a call to {{stop}} 
> because we will try to get the base url of a stopped jetty.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-9869) MiniSolrCloudCluster does not always remove jettys from running list after stopping them

2016-12-15 Thread Mike Drob (JIRA)
Mike Drob created SOLR-9869:
---

 Summary: MiniSolrCloudCluster does not always remove jettys from 
running list after stopping them
 Key: SOLR-9869
 URL: https://issues.apache.org/jira/browse/SOLR-9869
 Project: Solr
  Issue Type: Test
  Security Level: Public (Default Security Level. Issues are Public)
  Components: Tests
Reporter: Mike Drob


MiniSolrCloudCluster has two {{stopJettySolrRunner}} methods that behave 
differently.

The {{int}} version calls {{jettys.remove(index);}} to remove the now stopped 
jetty from the list of running jettys.
The version that takes a {{JettySolrRunner}}, however, does not modify the 
running list.

This can cause calls to {{getReplicaJetty}} to fail after a call to {{stop}} 
because we will try to get the base url of a stopped jetty.
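
A minimal sketch of the fix (the exact body is a guess; see the attached patch 
for the real change):

{code}
// In MiniSolrCloudCluster: mirror what the int-indexed variant already does.
public JettySolrRunner stopJettySolrRunner(JettySolrRunner jetty) throws Exception {
  jetty.stop();
  jettys.remove(jetty);  // keep the running list consistent with reality
  return jetty;
}
{code}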



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9599) DocValues performance regression with new iterator API

2016-12-15 Thread Otis Gospodnetic (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9599?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15752588#comment-15752588
 ] 

Otis Gospodnetic commented on SOLR-9599:


[~ysee...@gmail.com] All sub-tasks seem to be done/resolved; should this 
then be resolved, too?

> DocValues performance regression with new iterator API
> --
>
> Key: SOLR-9599
> URL: https://issues.apache.org/jira/browse/SOLR-9599
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: master (7.0)
>Reporter: Yonik Seeley
> Fix For: master (7.0)
>
>
> I did a quick performance comparison of faceting indexed fields (i.e. 
> docvalues are not stored) using method=dv before and after the new docvalues 
> iterator went in (LUCENE-7407).
> 5M document index, 21 segments, single valued string fields w/ no missing 
> values.
> || field cardinality || new_time / old_time ||
> |10|2.01|
> |1000|2.02|
> |1|1.85|
> |10|1.56|
> |100|1.31|
> So unfortunately, often twice as slow.
> See followup messages for tests using real docvalues as well.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (LUCENE-7253) Make sparse doc values and segments merging more efficient

2016-12-15 Thread Otis Gospodnetic (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7253?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Otis Gospodnetic resolved LUCENE-7253.
--
Resolution: Duplicate
  Assignee: Michael McCandless

LUCENE-7457 and many others actually took care of the issue reported here.

> Make sparse doc values and segments merging more efficient 
> ---
>
> Key: LUCENE-7253
> URL: https://issues.apache.org/jira/browse/LUCENE-7253
> Project: Lucene - Core
>  Issue Type: Improvement
>Affects Versions: 5.5, 6.0
>Reporter: Pawel Rog
>Assignee: Michael McCandless
>  Labels: performance
> Fix For: master (7.0)
>
>
> Doc Values were optimized recently to efficiently store sparse data. 
> Unfortunately there is still a big problem with Doc Values merges for sparse 
> fields. When we imagine a 1-billion-document index, it seems it doesn't matter 
> whether all documents have a value for this field or there is only 1 document 
> with a value. Segment merge time is the same for both cases. In most cases this 
> is not a problem, but there are several cases in which one can expect having 
> many fields with sparse doc values.
> I can describe an example. During performance tests of a system with a large 
> number of sparse fields, I realized that Doc Values merges are a bottleneck. I 
> had hundreds of different numeric fields. Each document contained only a small 
> subset of all fields. The average document contained 5-7 different numeric 
> values. As you can see, the data was very sparse in these fields. It turned out 
> that the ingestion process was CPU-bound. Most of the CPU time was spent in 
> DocValues related methods (SingletonSortedNumericDocValues#setDocument, 
> DocValuesConsumer$10$1#next, DocValuesConsumer#isSingleValued, 
> DocValuesConsumer$4$1#setNext, ...) - mostly during merging segments.
> Adrien Grand suggested reducing the number of sparse fields and replacing them 
> with a smaller number of denser fields. This helped a lot but complicated 
> field naming.
> I am not very familiar with the Doc Values source code, but I have a small 
> suggestion for improving Doc Values merges for sparse fields. I realized 
> that Doc Values producers and consumers use Iterators. Let's take the example 
> of numeric Doc Values. Would it be possible to replace the Iterator which 
> "travels" through all documents with an Iterator over the collection of 
> non-empty values? Of course this would require storing an object (instead of a 
> numeric) which contains the value and the document ID. Such an iterator could 
> significantly improve merge time of sparse Doc Values fields. IMHO this won't 
> cause big overhead for dense structures but it can be a game changer for 
> sparse structures.
> This is what happens in NumericDocValuesWriter on flush:
> {code}
> dvConsumer.addNumericField(fieldInfo,
>                            new Iterable<Number>() {
>                              @Override
>                              public Iterator<Number> iterator() {
>                                return new NumericIterator(maxDoc, values, docsWithField);
>                              }
>                            });
> {code}
> Before this happens, during addValue, this loop is executed to fill holes:
> {code}
> // Fill in any holes:
> for (int i = (int)pending.size(); i < docID; ++i) {
>   pending.add(MISSING);
> }
> {code}
> It turns out that the variable called pending is used only internally in 
> NumericDocValuesWriter. I know pending is PackedLongValues and it wouldn't be 
> good to replace it with a different class (some kind of list) because this may 
> break DV performance for dense fields. I hope someone can suggest interesting 
> solutions for this problem :).
> It would be great if a discussion about sparse Doc Values merge performance 
> could start here.
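
As a rough illustration of the proposal quoted above, iterating only the 
non-missing (docID, value) pairs might look like the sketch below (illustrative 
names, not Lucene's API; merge cost then scales with the number of values 
rather than maxDoc):

{code}
import java.util.Iterator;
import java.util.NoSuchElementException;

import org.apache.lucene.util.FixedBitSet;

/** Sketch only: walks (docID, value) pairs for documents that actually
 *  have a value, skipping the "holes" entirely. */
final class SparseValueIterator implements Iterator<long[]> {
  private final long[] values;             // one entry per document with a value
  private final FixedBitSet docsWithField; // which documents have a value
  private int doc = -1;
  private int idx = 0;

  SparseValueIterator(long[] values, FixedBitSet docsWithField) {
    this.values = values;
    this.docsWithField = docsWithField;
  }

  @Override
  public boolean hasNext() {
    return idx < values.length;
  }

  @Override
  public long[] next() {
    if (!hasNext()) {
      throw new NoSuchElementException();
    }
    doc = docsWithField.nextSetBit(doc + 1); // jump straight to the next doc with a value
    return new long[] { doc, values[idx++] };
  }
}
{code}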



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-8593) Integrate Apache Calcite into the SQLHandler

2016-12-15 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15752570#comment-15752570
 ] 

Joel Bernstein edited comment on SOLR-8593 at 12/15/16 9:33 PM:


I just pushed out a commit to the solr/jira-8593 branch:

https://github.com/apache/lucene-solr/commit/37fdc37fc3d88054634482d39b5774893751f91f

This is a pretty large refactoring of the SolrTable class which includes 
implementations for aggregationMode=facet for both GROUP BY aggregations and 
SELECT DISTINCT aggregations.

All tests in TestSQLHandler are passing.

There is only one thing that I'm not quite happy about in this patch which is 
specific to Calcite. I am wondering if [~julianhyde] has any thoughts on the 
issue. The specific issue deals with how the set of GROUP BY fields are handled 
in Calcite. From what I can see there isn't an easy way to get the ordering of 
the GROUP BY fields preserved from the query. The Solr faceting implementation 
requires the correct order of the GROUP BY fields to return a correct response. 
So, I'm getting the ordering from the field list of the query instead. This may 
actually be the correct approach from a SQL standpoint but I was wondering what 
Julian thought about this issue.


 


was (Author: joel.bernstein):
I just pushed out a commit to the solr/jira-8593 branch:

https://github.com/apache/lucene-solr/commit/37fdc37fc3d88054634482d39b5774893751f91f

This is a pretty large refactoring of the SolrTable class which includes 
implementations for aggregationMode=facet for both GROUP BY aggregations and 
SELECT DISTINCT aggregations.

All tests in TestSQLHandler are passing.

There is only one thing that I'm not quite happy about in this patch which is 
specific to Calcite. I am wondering if [~julianhyde] has any thoughts on the 
issue. The specific issue deals with how the set of GROUP BY fields is dealt 
with in Calcite. From what I can see there isn't an easy way to get the 
ordering of the GROUP BY fields preserved from the query. The Solr faceting 
implementation requires the correct order of the GROUP BY fields to return a 
correct response. So, I'm getting the ordering from the field list of the query 
instead. This may actually be the correct approach from a SQL standpoint but I 
was wondering what Julian thought about this issue.


 

> Integrate Apache Calcite into the SQLHandler
> 
>
> Key: SOLR-8593
> URL: https://issues.apache.org/jira/browse/SOLR-8593
> Project: Solr
>  Issue Type: Improvement
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Attachments: SOLR-8593.patch, SOLR-8593.patch
>
>
> The Presto SQL Parser was perfect for phase one of the SQLHandler. It was 
> nicely split off from the larger Presto project and it did everything that 
> was needed for the initial implementation.
> Phase two of the SQL work, though, will require an optimizer. Here is where 
> Apache Calcite comes into play. It has a battle-tested, cost-based optimizer 
> and has been integrated into Apache Drill and Hive.
> This work can begin in trunk following the 6.0 release. The final query plans 
> will continue to be translated to Streaming API objects (TupleStreams), so 
> continued work on the JDBC driver should plug in nicely with the Calcite work.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-8593) Integrate Apache Calcite into the SQLHandler

2016-12-15 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15752570#comment-15752570
 ] 

Joel Bernstein edited comment on SOLR-8593 at 12/15/16 9:32 PM:


I just pushed out a commit to the solr/jira-8593 branch:

https://github.com/apache/lucene-solr/commit/37fdc37fc3d88054634482d39b5774893751f91f

This is a pretty large refactoring of the SolrTable class which includes 
implementations for aggregationMode=facet for both GROUP BY aggregations and 
SELECT DISTINCT aggregations.

All tests in TestSQLHandler are passing.

There is only one thing that I'm not quite happy about in this patch which is 
specific to Calcite. I am wondering if [~julianhyde] has any thoughts on the 
issue. The specific issue deals with how the set of GROUP BY fields is dealt 
with in Calcite. From what I can see there isn't an easy way to get the 
ordering of the GROUP BY fields preserved from the query. The Solr faceting 
implementation requires the correct order of the GROUP BY fields to return a 
correct response. So, I'm getting the ordering from the field list of the query 
instead. This may actually be the correct approach from a SQL standpoint but I 
was wondering what Julian thought about this issue.


 


was (Author: joel.bernstein):
I just pushed out a commit to the solr/jira-8593 branch:

https://github.com/apache/lucene-solr/commit/37fdc37fc3d88054634482d39b5774893751f91f

This is a pretty large refactoring of the SolrTable class which includes 
implementations for aggregationMode=facet for both GROUP BY aggregations and 
SELECT DISTINCT aggregations.

All tests in TestSQLHandler are passing.

There is only one thing that I'm not quite happy about in this patch which is 
specific to Calcite. I am wondering if [~julianhyde] has any thoughts on the 
issue. The specific issue deals with how the set of GROUP BY fields is dealt 
with in Calcite. From what I can see there isn't an easy way to get the 
ordering of the GROUP BY fields preserved from the query. The Solr faceting 
implementation requires the correct order of the GROUP BY fields to return a 
correct response. So, I'm getting the ordering from the field list of the query 
instead. This may actually be the correct approach from a SQL standpoint but I 
was wondering what Julian thought about this issue.


 

> Integrate Apache Calcite into the SQLHandler
> 
>
> Key: SOLR-8593
> URL: https://issues.apache.org/jira/browse/SOLR-8593
> Project: Solr
>  Issue Type: Improvement
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Attachments: SOLR-8593.patch, SOLR-8593.patch
>
>
> The Presto SQL Parser was perfect for phase one of the SQLHandler. It was 
> nicely split off from the larger Presto project and it did everything that 
> was needed for the initial implementation.
> Phase two of the SQL work, though, will require an optimizer. Here is where 
> Apache Calcite comes into play. It has a battle-tested, cost-based optimizer 
> and has been integrated into Apache Drill and Hive.
> This work can begin in trunk following the 6.0 release. The final query plans 
> will continue to be translated to Streaming API objects (TupleStreams), so 
> continued work on the JDBC driver should plug in nicely with the Calcite work.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-8593) Integrate Apache Calcite into the SQLHandler

2016-12-15 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15752570#comment-15752570
 ] 

Joel Bernstein edited comment on SOLR-8593 at 12/15/16 9:32 PM:


I just pushed out a commit to the solr/jira-8593 branch:

https://github.com/apache/lucene-solr/commit/37fdc37fc3d88054634482d39b5774893751f91f

This is a pretty large refactoring of the SolrTable class which includes 
implementations for aggregationMode=facet for both GROUP BY aggregations and 
SELECT DISTINCT aggregations.

All tests in TestSQLHandler are passing.

There is only one thing that I'm not quite happy about in this patch which is 
specific to Calcite. I am wondering if [~julianhyde] has any thoughts on the 
issue. The specific issue deals with how the set of GROUP BY fields is dealt 
with in Calcite. From what I can see there isn't an easy way to get the 
ordering of the GROUP BY fields preserved from the query. The Solr faceting 
implementation requires the correct order of the GROUP BY fields to return a 
correct response. So, I'm getting the ordering from the field list of the query 
instead. This may actually be the correct approach from a SQL standpoint but I 
was wondering what Julian thought about this issue.


 


was (Author: joel.bernstein):
I just pushed out a commit to the solr/jira-8593 branch:

https://github.com/apache/lucene-solr/commit/37fdc37fc3d88054634482d39b5774893751f91f

This is a pretty large refactoring of the SolrTable class which includes 
implementations for aggregationMode=facet for both GROUP BY aggregations and 
SELECT DISTINCT aggregations.

All tests in TestSQLHandler are passing.

There is only one thing that I'm not quite happy about in this patch which is 
specific to Calcite. I am wondering if [~julianhyde] has any thoughts on the 
issue. The specific issue deals with how the set of GROUP BY fields is dealt 
with in Calcite. From what I can see there isn't an easy way to get the 
ordering of the GROUP BY fields preserved from the query. The Solr faceting 
implementation requires the correct order of the GROUP BY fields to return a 
correct response. So, I'm getting the ordering from the field list of the query 
instead. This may actually be the correct approach from a SQL standpoint but I 
was wondering what Julian thought about this issue.


 

> Integrate Apache Calcite into the SQLHandler
> 
>
> Key: SOLR-8593
> URL: https://issues.apache.org/jira/browse/SOLR-8593
> Project: Solr
>  Issue Type: Improvement
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Attachments: SOLR-8593.patch, SOLR-8593.patch
>
>
> The Presto SQL Parser was perfect for phase one of the SQLHandler. It was 
> nicely split off from the larger Presto project and it did everything that 
> was needed for the initial implementation.
> Phase two of the SQL work, though, will require an optimizer. Here is where 
> Apache Calcite comes into play. It has a battle-tested, cost-based optimizer 
> and has been integrated into Apache Drill and Hive.
> This work can begin in trunk following the 6.0 release. The final query plans 
> will continue to be translated to Streaming API objects (TupleStreams), so 
> continued work on the JDBC driver should plug in nicely with the Calcite work.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8593) Integrate Apache Calcite into the SQLHandler

2016-12-15 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15752570#comment-15752570
 ] 

Joel Bernstein commented on SOLR-8593:
--

I just pushed out a commit to the solr/jira-8593 branch:

https://github.com/apache/lucene-solr/commit/37fdc37fc3d88054634482d39b5774893751f91f

This is a pretty large refactoring of the SolrTable class which includes 
implementations for aggregationMode=facet for both GROUP BY aggregations and 
SELECT DISTINCT aggregations.

All tests in TestSQLHandler are passing.

There is only one thing that I'm not quite happy about in this patch which is 
specific to Calcite. I am wondering if [~julianhyde] has any thoughts on the 
issue. The specific issue deals with how the set of GROUP BY fields is dealt 
with in Calcite. From what I can see there isn't an easy way to get the 
ordering of the GROUP BY fields preserved from the query. The Solr faceting 
implementation requires the correct order of the GROUP BY fields to return a 
correct response. So, I'm getting the ordering from the field list of the query 
instead. This may actually be the correct approach from a SQL standpoint but I 
was wondering what Julian thought about this issue.


 

> Integrate Apache Calcite into the SQLHandler
> 
>
> Key: SOLR-8593
> URL: https://issues.apache.org/jira/browse/SOLR-8593
> Project: Solr
>  Issue Type: Improvement
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Attachments: SOLR-8593.patch, SOLR-8593.patch
>
>
> The Presto SQL Parser was perfect for phase one of the SQLHandler. It was 
> nicely split off from the larger Presto project and it did everything that 
> was needed for the initial implementation.
> Phase two of the SQL work, though, will require an optimizer. Here is where 
> Apache Calcite comes into play. It has a battle-tested, cost-based optimizer 
> and has been integrated into Apache Drill and Hive.
> This work can begin in trunk following the 6.0 release. The final query plans 
> will continue to be translated to Streaming API objects (TupleStreams), so 
> continued work on the JDBC driver should plug in nicely with the Calcite work.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8593) Integrate Apache Calcite into the SQLHandler

2016-12-15 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15752564#comment-15752564
 ] 

ASF GitHub Bot commented on SOLR-8593:
--

Github user risdenk commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/104#discussion_r92703928
  
--- Diff: 
solr/solrj/src/java/org/apache/solr/client/solrj/io/ops/GreaterThanOperation.java
 ---
@@ -0,0 +1,70 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.solr.client.solrj.io.ops;
+
+import java.io.IOException;
+import java.util.UUID;
+
+import org.apache.solr.client.solrj.io.Tuple;
+import org.apache.solr.client.solrj.io.stream.expr.Explanation;
+import org.apache.solr.client.solrj.io.stream.expr.Explanation.ExpressionType;
+import org.apache.solr.client.solrj.io.stream.expr.StreamExpression;
+import org.apache.solr.client.solrj.io.stream.expr.StreamFactory;
+
+public class GreaterThanOperation extends LeafOperation {
+
+  private static final long serialVersionUID = 1;
+  private UUID operationNodeId = UUID.randomUUID();
+
+  public void operate(Tuple tuple) {
+    this.tuple = tuple;
+  }
+
+  public GreaterThanOperation(String field, double val) {
--- End diff --

@joel-bernstein - Does this mean that string/int/long comparisons won't 
work? I noticed that this assumes the fields are doubles.


> Integrate Apache Calcite into the SQLHandler
> 
>
> Key: SOLR-8593
> URL: https://issues.apache.org/jira/browse/SOLR-8593
> Project: Solr
>  Issue Type: Improvement
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Attachments: SOLR-8593.patch, SOLR-8593.patch
>
>
> The Presto SQL Parser was perfect for phase one of the SQLHandler. It was 
> nicely split off from the larger Presto project and it did everything that 
> was needed for the initial implementation.
> Phase two of the SQL work, though, will require an optimizer. Here is where 
> Apache Calcite comes into play. It has a battle-tested, cost-based optimizer 
> and has been integrated into Apache Drill and Hive.
> This work can begin in trunk following the 6.0 release. The final query plans 
> will continue to be translated to Streaming API objects (TupleStreams), so 
> continued work on the JDBC driver should plug in nicely with the Calcite work.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] lucene-solr pull request #104: SOLR-8593 - WIP

2016-12-15 Thread risdenk
Github user risdenk commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/104#discussion_r92703928
  
--- Diff: 
solr/solrj/src/java/org/apache/solr/client/solrj/io/ops/GreaterThanOperation.java
 ---
@@ -0,0 +1,70 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.solr.client.solrj.io.ops;
+
+import java.io.IOException;
+import java.util.UUID;
+
+import org.apache.solr.client.solrj.io.Tuple;
+import org.apache.solr.client.solrj.io.stream.expr.Explanation;
+import org.apache.solr.client.solrj.io.stream.expr.Explanation.ExpressionType;
+import org.apache.solr.client.solrj.io.stream.expr.StreamExpression;
+import org.apache.solr.client.solrj.io.stream.expr.StreamFactory;
+
+public class GreaterThanOperation extends LeafOperation {
+
+  private static final long serialVersionUID = 1;
+  private UUID operationNodeId = UUID.randomUUID();
+
+  public void operate(Tuple tuple) {
+    this.tuple = tuple;
+  }
+
+  public GreaterThanOperation(String field, double val) {
--- End diff --

@joel-bernstein - Does this mean that string/int/long comparisons won't 
work? I noticed that this assumes the fields are doubles.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (SOLR-7466) Allow optional leading wildcards in complexphrase

2016-12-15 Thread Mikhail Khludnev (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7466?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Khludnev reassigned SOLR-7466:
--

Assignee: Mikhail Khludnev

> Allow optional leading wildcards in complexphrase
> -
>
> Key: SOLR-7466
> URL: https://issues.apache.org/jira/browse/SOLR-7466
> Project: Solr
>  Issue Type: Improvement
>  Components: query parsers
>Affects Versions: 4.8
>Reporter: Andy hardin
>Assignee: Mikhail Khludnev
>  Labels: complexPhrase, query-parser, wildcards
> Attachments: SOLR-7466.patch
>
>
> Currently ComplexPhraseQParser (SOLR-1604) allows trailing wildcards on terms 
> in a phrase, but does not allow leading wildcards.  I would like the option 
> to be able to search for terms with both trailing and leading wildcards.  
> For example with:
> {!complexphrase allowLeadingWildcard=true} "j* *th"
> would match "John Smith", "Jim Smith", but not "John Schmitt"



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7588) A parallel DrillSideways implementation

2016-12-15 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7588?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15752528#comment-15752528
 ] 

Michael McCandless commented on LUCENE-7588:


Thanks [~ekeller]: this is an impressive change!

Can you add minimal javadocs to {{ParallelDrillSideways}}, and include 
{{\@lucene.experimental}}?

Can you fix the indent to 2 spaces, and change your IDE to not use
wildcard imports?  (Most of the new classes seem to do so, but at
least one didn't).  Or we can fix this up before pushing...

Should {{CallableCollector}} be renamed to {{CallableCollectorManager}}?

I assume you're using this for your QWAZR search server built on Lucene 
(https://github.com/qwazr/QWAZR)?  Thank you for giving back!

There are quite a few new abstractions here,
{{MultiCollectorManager}}, {{FacetsCollectorManager}}; must they be
public?  Can you explain what they do?

It seems like this change opens up concurrency in 2 ways; the first
way is it uses the {{IndexSearcher.search}} API that takes a
{{CollectorManager}} such that if you had created that
{{IndexSearcher}} with an executor, you get concurrency across the
segments in the index.  In general I'm not a huge fan of this
concurrency since you are at the whim of how the segments are
structured, and, confusingly, running {{forceMerge(1)}} on your index
removes all concurrency.  But it's better than nothing: progress not
perfection!

The second way is that the new {{ParallelDrillSideways}} takes its own
executor and then runs the N {{DrillDown}} queries concurrently (to
compute the sideways counts), which is very different from the current
doc-at-a-time computation.  Have you compared the performance, using a
single thread? ... I'm curious how "doc at a time" vs "query at a
time" (which is also Solr's approach) compare.  But, still, the fact
that this "query at a time" approach enables concurrency is a big win.

I wonder if we could absorb {{ParallelDrillSideways}} under
{{DrillSideways}} such that if you pass an executor it uses the
concurrent implementation?  It's really an implementation/execution
detail I think?  Similar to how {{IndexSearcher}} takes an optional
executor.
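
For reference, a tiny sketch of that first mode ({{TotalHitCountCollector}} 
stands in for the facet collectors, and {{dir}} is assumed to be some open 
{{Directory}}; this is not code from the patch):

{code}
import java.util.Collection;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.search.CollectorManager;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.MatchAllDocsQuery;
import org.apache.lucene.search.TotalHitCountCollector;
import org.apache.lucene.store.Directory;

public class SliceConcurrencyDemo {
  // An IndexSearcher built with an executor runs a CollectorManager search
  // concurrently across segment slices, then reduces the per-slice collectors.
  static int countAllDocs(Directory dir) throws Exception {
    ExecutorService pool = Executors.newFixedThreadPool(4);
    IndexSearcher searcher = new IndexSearcher(DirectoryReader.open(dir), pool);
    try {
      return searcher.search(new MatchAllDocsQuery(),
          new CollectorManager<TotalHitCountCollector, Integer>() {
            @Override
            public TotalHitCountCollector newCollector() {
              return new TotalHitCountCollector();  // one collector per slice
            }
            @Override
            public Integer reduce(Collection<TotalHitCountCollector> collectors) {
              int sum = 0;
              for (TotalHitCountCollector c : collectors) {
                sum += c.getTotalHits();
              }
              return sum;
            }
          });
    } finally {
      pool.shutdown();
    }
  }
}
{code}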


> A parallel DrillSideways implementation
> ---
>
> Key: LUCENE-7588
> URL: https://issues.apache.org/jira/browse/LUCENE-7588
> Project: Lucene - Core
>  Issue Type: Improvement
>Affects Versions: master (7.0), 6.3.1
>Reporter: Emmanuel Keller
>Priority: Minor
>  Labels: facet, faceting
> Fix For: master (7.0), 6.3.1
>
> Attachments: LUCENE-7588.patch
>
>
> Currently the DrillSideways implementation is based on the single-threaded 
> IndexSearcher.search(Query query, Collector results).
> On a large document set, the single-threaded collection can be really slow.
> The ParallelDrillSideways implementation could:
> 1. Use the CollectorManager-based method IndexSearcher.search(Query query, 
> CollectorManager collectorManager) to get the benefits of multithreading on 
> index segments,
> 2. Compute each DrillSideways subquery on a single thread.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6989) Implement MMapDirectory unmapping for coming Java 9 changes

2016-12-15 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6989?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15752483#comment-15752483
 ] 

Uwe Schindler commented on LUCENE-6989:
---

That's my plan: if it detects Java 9, I will disable it.

> Implement MMapDirectory unmapping for coming Java 9 changes
> ---
>
> Key: LUCENE-6989
> URL: https://issues.apache.org/jira/browse/LUCENE-6989
> Project: Lucene - Core
>  Issue Type: Task
>  Components: core/store
>Reporter: Uwe Schindler
>Assignee: Uwe Schindler
>  Labels: Java9
> Fix For: 6.0, 6.4
>
> Attachments: LUCENE-6989-disable5x.patch, 
> LUCENE-6989-disable5x.patch, LUCENE-6989-fixbuild148.patch, 
> LUCENE-6989-v2.patch, LUCENE-6989-v3-post-b148.patch, LUCENE-6989.patch, 
> LUCENE-6989.patch, LUCENE-6989.patch, LUCENE-6989.patch
>
>
> Originally, the sun.misc.Cleaner interface was declared as "critical API" in 
> [JEP 260|http://openjdk.java.net/jeps/260]
> Unfortunately the decision was changed in favor of an officially supported 
> {{java.lang.ref.Cleaner}} API. A side effect of this change is to move all 
> existing {{sun.misc.Cleaner}} APIs into a non-exported package. This causes 
> our forceful unmapping to no longer work, because we can get the cleaner 
> instance via reflection, but trying to invoke it will throw one of the new 
> Jigsaw RuntimeExceptions because it is completely inaccessible. This will make 
> our forceful unmapping fail. There are also no changes in the garbage 
> collector, so the problem still exists.
> For more information see this [mailing list 
> thread|http://mail.openjdk.java.net/pipermail/core-libs-dev/2016-January/thread.html#38243].
> This commit will likely be done, making our unmapping efforts no longer 
> work. Alan Bateman is aware of this issue and will open a new issue at 
> OpenJDK to allow forceful unmapping without using the now private 
> sun.misc.Cleaner. The idea is to let the internal class sun.misc.Cleaner 
> implement the Runnable interface, so we can simply cast to Runnable and call 
> the run() method to unmap. The code would then work. This will lead to minor 
> changes in our unmapper in MMapDirectory: an instanceof check and casting if 
> possible.
> I opened this issue to keep track and implement the changes as soon as 
> possible, so people will have working unmapping when Java 9 comes out. 
> Current Lucene versions will no longer work with Java 9.
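
A sketch of the proposed unmap path, assuming the JDK change lands as described 
above (this is not Lucene's current code):

{code}
import java.lang.reflect.Method;
import java.nio.ByteBuffer;

public class UnmapSketch {
  // Fetch the buffer's cleaner reflectively (as today), but instead of
  // invoking clean() inside the now-inaccessible sun.misc package, cast to
  // Runnable and run it -- the change proposed for the JDK above.
  static void unmap(ByteBuffer mappedBuffer) throws Exception {
    Method cleanerMethod = mappedBuffer.getClass().getMethod("cleaner");
    cleanerMethod.setAccessible(true);
    Object cleaner = cleanerMethod.invoke(mappedBuffer);
    if (cleaner instanceof Runnable) {
      ((Runnable) cleaner).run();  // unmap without touching sun.misc.Cleaner
    }
  }
}
{code}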



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9513) Introduce a generic authentication plugin which delegates all functionality to Hadoop authentication framework

2016-12-15 Thread Hrishikesh Gadre (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9513?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15752438#comment-15752438
 ] 

Hrishikesh Gadre commented on SOLR-9513:


[~ichattopadhyaya] I have updated the PR. Can you please take a look?



> Introduce a generic authentication plugin which delegates all functionality 
> to Hadoop authentication framework
> --
>
> Key: SOLR-9513
> URL: https://issues.apache.org/jira/browse/SOLR-9513
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hrishikesh Gadre
>
> Currently the Solr Kerberos authentication plugin delegates the core logic to 
> the Hadoop authentication framework. But the configuration parameters required 
> by the Hadoop authentication framework are hardcoded in the plugin code itself. 
> https://github.com/apache/lucene-solr/blob/5b770b56d012279d334f41e4ef7fe652480fd3cf/solr/core/src/java/org/apache/solr/security/KerberosPlugin.java#L119
> The problem with this approach is that we need to make code changes in Solr 
> to expose new capabilities added in the Hadoop authentication framework, e.g. 
> HADOOP-12082.
> We should implement a generic Solr authentication plugin which will accept 
> configuration parameters via security.json (in ZooKeeper) and delegate them 
> to the Hadoop authentication framework. This will allow us to utilize new 
> features in Hadoop without code changes in Solr.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-Tests-6.x - Build # 602 - Still Unstable

2016-12-15 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-6.x/602/

1 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.client.solrj.io.stream.StreamExpressionTest

Error Message:
1 thread leaked from SUITE scope at 
org.apache.solr.client.solrj.io.stream.StreamExpressionTest: 1) 
Thread[id=3070, 
name=OverseerHdfsCoreFailoverThread-97113268209647625-127.0.0.1:55199_solr-n_01,
 state=TIMED_WAITING, group=Overseer Hdfs SolrCore Failover Thread.] at 
java.lang.Thread.sleep(Native Method) at 
org.apache.solr.cloud.OverseerAutoReplicaFailoverThread.run(OverseerAutoReplicaFailoverThread.java:139)
 at java.lang.Thread.run(Thread.java:745)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE 
scope at org.apache.solr.client.solrj.io.stream.StreamExpressionTest: 
   1) Thread[id=3070, 
name=OverseerHdfsCoreFailoverThread-97113268209647625-127.0.0.1:55199_solr-n_01,
 state=TIMED_WAITING, group=Overseer Hdfs SolrCore Failover Thread.]
at java.lang.Thread.sleep(Native Method)
at 
org.apache.solr.cloud.OverseerAutoReplicaFailoverThread.run(OverseerAutoReplicaFailoverThread.java:139)
at java.lang.Thread.run(Thread.java:745)
at __randomizedtesting.SeedInfo.seed([D9300CFD5BF131EE]:0)




Build Log:
[...truncated 13391 lines...]
   [junit4] Suite: org.apache.solr.client.solrj.io.stream.StreamExpressionTest
   [junit4]   2> Creating dataDir: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-6.x/solr/build/solr-solrj/test/J1/temp/solr.client.solrj.io.stream.StreamExpressionTest_D9300CFD5BF131EE-001/init-core-data-001
   [junit4]   2> 1INFO  
(SUITE-StreamExpressionTest-seed#[D9300CFD5BF131EE]-worker) [] o.e.j.u.log 
Logging initialized @5914ms
   [junit4]   2> 21   INFO  
(SUITE-StreamExpressionTest-seed#[D9300CFD5BF131EE]-worker) [] 
o.a.s.SolrTestCaseJ4 Randomized ssl (false) and clientAuth (true) via: 
@org.apache.solr.util.RandomizeSSL(reason=, ssl=NaN, value=NaN, clientAuth=NaN)
   [junit4]   2> 66   INFO  
(SUITE-StreamExpressionTest-seed#[D9300CFD5BF131EE]-worker) [] 
o.a.s.c.MiniSolrCloudCluster Starting cluster of 4 servers in 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-6.x/solr/build/solr-solrj/test/J1/temp/solr.client.solrj.io.stream.StreamExpressionTest_D9300CFD5BF131EE-001/tempDir-001
   [junit4]   2> 77   INFO  
(SUITE-StreamExpressionTest-seed#[D9300CFD5BF131EE]-worker) [] 
o.a.s.c.ZkTestServer STARTING ZK TEST SERVER
   [junit4]   2> 79   INFO  (Thread-1) [] o.a.s.c.ZkTestServer client 
port:0.0.0.0/0.0.0.0:0
   [junit4]   2> 80   INFO  (Thread-1) [] o.a.s.c.ZkTestServer Starting 
server
   [junit4]   2> 192  INFO  
(SUITE-StreamExpressionTest-seed#[D9300CFD5BF131EE]-worker) [] 
o.a.s.c.ZkTestServer start zk server on port:57942
   [junit4]   2> 439  WARN  (NIOServerCxn.Factory:0.0.0.0/0.0.0.0:0) [] 
o.a.z.s.NIOServerCnxn Exception causing close of session 0x0 due to 
java.io.IOException: ZooKeeperServer not running
   [junit4]   2> 2153 WARN  (NIOServerCxn.Factory:0.0.0.0/0.0.0.0:0) [] 
o.a.z.s.NIOServerCnxn caught end of stream exception
   [junit4]   2> EndOfStreamException: Unable to read additional data from 
client sessionid 0x1590400be9d, likely client has closed socket
   [junit4]   2>at 
org.apache.zookeeper.server.NIOServerCnxn.doIO(NIOServerCnxn.java:228)
   [junit4]   2>at 
org.apache.zookeeper.server.NIOServerCnxnFactory.run(NIOServerCnxnFactory.java:208)
   [junit4]   2>at java.lang.Thread.run(Thread.java:745)
   [junit4]   2> 2583 INFO  (jetty-launcher-1-thread-2) [] o.e.j.s.Server 
jetty-9.3.14.v20161028
   [junit4]   2> 2592 INFO  (jetty-launcher-1-thread-3) [] o.e.j.s.Server 
jetty-9.3.14.v20161028
   [junit4]   2> 2592 INFO  (jetty-launcher-1-thread-4) [] o.e.j.s.Server 
jetty-9.3.14.v20161028
   [junit4]   2> 2592 INFO  (jetty-launcher-1-thread-1) [] o.e.j.s.Server 
jetty-9.3.14.v20161028
   [junit4]   2> 2714 INFO  (jetty-launcher-1-thread-2) [] 
o.e.j.s.h.ContextHandler Started 
o.e.j.s.ServletContextHandler@706867f2{/solr,null,AVAILABLE}
   [junit4]   2> 2717 INFO  (jetty-launcher-1-thread-1) [] 
o.e.j.s.h.ContextHandler Started 
o.e.j.s.ServletContextHandler@25670404{/solr,null,AVAILABLE}
   [junit4]   2> 2715 INFO  (jetty-launcher-1-thread-4) [] 
o.e.j.s.h.ContextHandler Started 
o.e.j.s.ServletContextHandler@704b77db{/solr,null,AVAILABLE}
   [junit4]   2> 2715 INFO  (jetty-launcher-1-thread-3) [] 
o.e.j.s.h.ContextHandler Started 
o.e.j.s.ServletContextHandler@7f5db696{/solr,null,AVAILABLE}
   [junit4]   2> 2730 INFO  (jetty-launcher-1-thread-3) [] 
o.e.j.s.AbstractConnector Started 
ServerConnector@42e95371{HTTP/1.1,[http/1.1]}{127.0.0.1:41342}
   [junit4]   2> 2730 INFO  (jetty-launcher-1-thread-3) [] o.e.j.s.Server 
Started @8648ms
   [junit4]   2> 2731 INFO  (jetty-launcher-1-thread-3) [] 
o.a.s.c.s.e.JettySolrRunner Jetty 

[jira] [Updated] (SOLR-9859) replication.properties cannot be updated after being written and neither replication.properties or index.properties are durable in the face of a crash

2016-12-15 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9859?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller updated SOLR-9859:
--
Attachment: SOLR-9859.patch

> replication.properties cannot be updated after being written and neither 
> replication.properties or index.properties are durable in the face of a crash
> --
>
> Key: SOLR-9859
> URL: https://issues.apache.org/jira/browse/SOLR-9859
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 5.5.3, 6.3
>Reporter: Pushkar Raste
>Assignee: Mark Miller
>Priority: Minor
> Attachments: SOLR-9859.patch, SOLR-9859.patch, SOLR-9859.patch, 
> SOLR-9859.patch, SOLR-9859.patch
>
>
> If a shard recovers via replication (vs PeerSync) a file named 
> {{replication.properties}} gets created. If the same shard recovers once more 
> via replication, IndexFetcher fails to write the latest replication information 
> as it tries to create {{replication.properties}}, but the file already exists. 
> Here is the stack trace I saw:
> {code}
> java.nio.file.FileAlreadyExistsException: 
> \shard-3-001\cores\collection1\data\replication.properties
>   at sun.nio.fs.WindowsException.translateToIOException(Unknown Source)
>   at sun.nio.fs.WindowsException.rethrowAsIOException(Unknown Source)
>   at sun.nio.fs.WindowsException.rethrowAsIOException(Unknown Source)
>   at sun.nio.fs.WindowsFileSystemProvider.newByteChannel(Unknown Source)
>   at java.nio.file.spi.FileSystemProvider.newOutputStream(Unknown Source)
>   at java.nio.file.Files.newOutputStream(Unknown Source)
>   at 
> org.apache.lucene.store.FSDirectory$FSIndexOutput.<init>(FSDirectory.java:413)
>   at 
> org.apache.lucene.store.FSDirectory$FSIndexOutput.<init>(FSDirectory.java:409)
>   at 
> org.apache.lucene.store.FSDirectory.createOutput(FSDirectory.java:253)
>   at 
> org.apache.solr.handler.IndexFetcher.logReplicationTimeAndConfFiles(IndexFetcher.java:689)
>   at 
> org.apache.solr.handler.IndexFetcher.fetchLatestIndex(IndexFetcher.java:501)
>   at 
> org.apache.solr.handler.IndexFetcher.fetchLatestIndex(IndexFetcher.java:265)
>   at 
> org.apache.solr.handler.ReplicationHandler.doFetch(ReplicationHandler.java:397)
>   at 
> org.apache.solr.cloud.RecoveryStrategy.replicate(RecoveryStrategy.java:157)
>   at 
> org.apache.solr.cloud.RecoveryStrategy.doRecovery(RecoveryStrategy.java:409)
>   at org.apache.solr.cloud.RecoveryStrategy.run(RecoveryStrategy.java:222)
>   at java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source)
>   at java.util.concurrent.FutureTask.run(Unknown Source)
>   at 
> org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$0(ExecutorUtil.java:229)
>   at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
>   at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
>   at java.lang.Thread.run(Unknown Source)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-6.x-Solaris (64bit/jdk1.8.0) - Build # 556 - Still Unstable!

2016-12-15 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Solaris/556/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

4 tests failed.
FAILED:  org.apache.solr.cloud.TestCloudPivotFacet.test

Error Message:
java.util.concurrent.TimeoutException: Could not connect to ZooKeeper 
127.0.0.1:53510 within 30000 ms

Stack Trace:
org.apache.solr.common.SolrException: java.util.concurrent.TimeoutException: 
Could not connect to ZooKeeper 127.0.0.1:53510 within 30000 ms
at 
__randomizedtesting.SeedInfo.seed([A18C22F895F66719:29D81D223B0A0AE1]:0)
at 
org.apache.solr.common.cloud.SolrZkClient.<init>(SolrZkClient.java:182)
at 
org.apache.solr.common.cloud.SolrZkClient.<init>(SolrZkClient.java:116)
at 
org.apache.solr.common.cloud.SolrZkClient.<init>(SolrZkClient.java:111)
at 
org.apache.solr.common.cloud.SolrZkClient.<init>(SolrZkClient.java:98)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.printLayout(AbstractDistribZkTestBase.java:322)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.distribTearDown(AbstractFullDistribZkTestBase.java:1500)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:969)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:811)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:462)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.util.concurrent.TimeoutException: Could not connect to 
ZooKeeper 127.0.0.1:53510 within 30000 ms
at 
org.apache.solr.common.cloud.ConnectionManager.waitForConnected(ConnectionManager.java:233)
at 
org.apache.solr.common.cloud.SolrZkClient.<init>(SolrZkClient.java:174)
... 37 more


FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.TestCloudPivotFacet


[jira] [Updated] (SOLR-9712) Saner default for maxWarmingSearchers

2016-12-15 Thread Yonik Seeley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9712?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yonik Seeley updated SOLR-9712:
---
Attachment: SOLR-9712.patch

> Saner default for maxWarmingSearchers
> -
>
> Key: SOLR-9712
> URL: https://issues.apache.org/jira/browse/SOLR-9712
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: search
>Reporter: Shalin Shekhar Mangar
> Fix For: master (7.0), 6.4
>
> Attachments: SOLR-9712.patch, SOLR-9712.patch
>
>
> As noted in SOLR-9710, the default for maxWarmingSearchers is 
> Integer.MAX_VALUE, which is just crazy. Let's have a saner default. Today we 
> log a performance warning when the number of on-deck searchers goes over 1. 
> What if we made the default 1, which expert users can increase if needed?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9712) Saner default for maxWarmingSearchers

2016-12-15 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9712?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15752122#comment-15752122
 ] 

Yonik Seeley commented on SOLR-9712:


Here's an updated patch that changes the default maxWarmingSearchers to 1.
It looks like I didn't need to create any additional tests... by changing 
maxWarmingSearchers to 1 first, a bunch of the stress tests started failing 
since they also test concurrent commits.  TestStressVersions, 
TestStressReorder, and TestRealTimeGet all failed (not an exhaustive list; 
tested by hand) w/o the blocking patch and succeeded with it.

Unfortunately, running the full test suite results in some errors.  Not sure 
what's going on there yet.
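
For readers following along, a minimal sketch of the concurrent-commit pattern 
those stress tests exercise (the URL, core name, and thread count are 
illustrative, not taken from the tests). With the blocking patch and 
maxWarmingSearchers=1, overlapping commits should block instead of tripping 
the on-deck-searchers limit:
{code}
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.impl.HttpSolrClient;

public class ConcurrentCommitSketch {
  public static void main(String[] args) throws Exception {
    ExecutorService pool = Executors.newFixedThreadPool(4);
    try (SolrClient client =
        new HttpSolrClient.Builder("http://localhost:8983/solr/collection1").build()) {
      for (int i = 0; i < 4; i++) {
        pool.submit(() -> {
          try {
            client.commit(); // each commit opens (and warms) a new searcher
          } catch (Exception e) {
            e.printStackTrace();
          }
        });
      }
      pool.shutdown();
      pool.awaitTermination(1, TimeUnit.MINUTES);
    }
  }
}
{code}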

> Saner default for maxWarmingSearchers
> -
>
> Key: SOLR-9712
> URL: https://issues.apache.org/jira/browse/SOLR-9712
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: search
>Reporter: Shalin Shekhar Mangar
> Fix For: master (7.0), 6.4
>
> Attachments: SOLR-9712.patch
>
>
> As noted in SOLR-9710, the default for maxWarmingSearchers is 
> Integer.MAX_VALUE, which is just crazy. Let's have a saner default. Today we 
> log a performance warning when the number of on-deck searchers goes over 1. 
> What if we made the default 1, which expert users can increase if needed?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9860) Enable configuring invariantParams via HttpSolrClient.Builder

2016-12-15 Thread Hrishikesh Gadre (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9860?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15752091#comment-15752091
 ] 

Hrishikesh Gadre commented on SOLR-9860:


[~ichattopadhyaya] I have updated the PR. Can you please take a look?

> Enable configuring invariantParams via HttpSolrClient.Builder
> -
>
> Key: SOLR-9860
> URL: https://issues.apache.org/jira/browse/SOLR-9860
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 6.3
>Reporter: Hrishikesh Gadre
>Priority: Minor
>
> HttpSolrClient provides a facility to add default parameters for every 
> request via the invariantParams attribute. Currently HttpSolrClient.Builder 
> does not provide any option to configure this attribute. This jira is to add 
> this functionality.
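
A hypothetical sketch of what such a builder option could look like (the 
{{withInvariantParams}} method name is illustrative only; the final API is 
whatever the PR settles on):
{code}
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.common.params.ModifiableSolrParams;

ModifiableSolrParams invariants = new ModifiableSolrParams();
invariants.set("wt", "json"); // pinned on every request, overriding callers

HttpSolrClient client = new HttpSolrClient.Builder("http://localhost:8983/solr")
    .withInvariantParams(invariants) // proposed by this issue, not in 6.3
    .build();
{code}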



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-9859) replication.properties cannot be updated after being written and neither replication.properties or index.properties are durable in the face of a crash

2016-12-15 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9859?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller updated SOLR-9859:
--
Summary: replication.properties cannot be updated after being written and 
neither replication.properties or index.properties are durable in the face of a 
crash  (was: replication.properties cannot be updated after being written on 
Windows and neither replication.properties or index.properties are durable in 
the face of a crash.)

> replication.properties cannot be updated after being written and neither 
> replication.properties or index.properties are durable in the face of a crash
> --
>
> Key: SOLR-9859
> URL: https://issues.apache.org/jira/browse/SOLR-9859
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 5.5.3, 6.3
>Reporter: Pushkar Raste
>Assignee: Mark Miller
>Priority: Minor
> Attachments: SOLR-9859.patch, SOLR-9859.patch, SOLR-9859.patch
>
>
> If a shard recovers via replication (vs PeerSync), a file named 
> {{replication.properties}} gets created. If the same shard recovers once more 
> via replication, IndexFetcher fails to write the latest replication 
> information as it tries to create {{replication.properties}}, but the file 
> already exists. Here is the stack trace I saw: 
> {code}
> java.nio.file.FileAlreadyExistsException: 
> \shard-3-001\cores\collection1\data\replication.properties
>   at sun.nio.fs.WindowsException.translateToIOException(Unknown Source)
>   at sun.nio.fs.WindowsException.rethrowAsIOException(Unknown Source)
>   at sun.nio.fs.WindowsException.rethrowAsIOException(Unknown Source)
>   at sun.nio.fs.WindowsFileSystemProvider.newByteChannel(Unknown Source)
>   at java.nio.file.spi.FileSystemProvider.newOutputStream(Unknown Source)
>   at java.nio.file.Files.newOutputStream(Unknown Source)
>   at 
> org.apache.lucene.store.FSDirectory$FSIndexOutput.<init>(FSDirectory.java:413)
>   at 
> org.apache.lucene.store.FSDirectory$FSIndexOutput.<init>(FSDirectory.java:409)
>   at 
> org.apache.lucene.store.FSDirectory.createOutput(FSDirectory.java:253)
>   at 
> org.apache.solr.handler.IndexFetcher.logReplicationTimeAndConfFiles(IndexFetcher.java:689)
>   at 
> org.apache.solr.handler.IndexFetcher.fetchLatestIndex(IndexFetcher.java:501)
>   at 
> org.apache.solr.handler.IndexFetcher.fetchLatestIndex(IndexFetcher.java:265)
>   at 
> org.apache.solr.handler.ReplicationHandler.doFetch(ReplicationHandler.java:397)
>   at 
> org.apache.solr.cloud.RecoveryStrategy.replicate(RecoveryStrategy.java:157)
>   at 
> org.apache.solr.cloud.RecoveryStrategy.doRecovery(RecoveryStrategy.java:409)
>   at org.apache.solr.cloud.RecoveryStrategy.run(RecoveryStrategy.java:222)
>   at java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source)
>   at java.util.concurrent.FutureTask.run(Unknown Source)
>   at 
> org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$0(ExecutorUtil.java:229)
>   at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
>   at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
>   at java.lang.Thread.run(Unknown Source)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-9859) replication.properties cannot be updated after being written on Windows and neither replication.properties or index.properties are durable in the face of a crash.

2016-12-15 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9859?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller updated SOLR-9859:
--
Attachment: SOLR-9859.patch

Patch polished up a bit.

> replication.properties cannot be updated after being written on Windows and 
> neither replication.properties or index.properties are durable in the face of 
> a crash.
> --
>
> Key: SOLR-9859
> URL: https://issues.apache.org/jira/browse/SOLR-9859
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 5.5.3, 6.3
>Reporter: Pushkar Raste
>Assignee: Mark Miller
>Priority: Minor
> Attachments: SOLR-9859.patch, SOLR-9859.patch, SOLR-9859.patch
>
>
> If a shard recovers via replication (vs PeerSync), a file named 
> {{replication.properties}} gets created. If the same shard recovers once more 
> via replication, IndexFetcher fails to write the latest replication 
> information as it tries to create {{replication.properties}}, but the file 
> already exists. Here is the stack trace I saw: 
> {code}
> java.nio.file.FileAlreadyExistsException: 
> \shard-3-001\cores\collection1\data\replication.properties
>   at sun.nio.fs.WindowsException.translateToIOException(Unknown Source)
>   at sun.nio.fs.WindowsException.rethrowAsIOException(Unknown Source)
>   at sun.nio.fs.WindowsException.rethrowAsIOException(Unknown Source)
>   at sun.nio.fs.WindowsFileSystemProvider.newByteChannel(Unknown Source)
>   at java.nio.file.spi.FileSystemProvider.newOutputStream(Unknown Source)
>   at java.nio.file.Files.newOutputStream(Unknown Source)
>   at 
> org.apache.lucene.store.FSDirectory$FSIndexOutput.<init>(FSDirectory.java:413)
>   at 
> org.apache.lucene.store.FSDirectory$FSIndexOutput.<init>(FSDirectory.java:409)
>   at 
> org.apache.lucene.store.FSDirectory.createOutput(FSDirectory.java:253)
>   at 
> org.apache.solr.handler.IndexFetcher.logReplicationTimeAndConfFiles(IndexFetcher.java:689)
>   at 
> org.apache.solr.handler.IndexFetcher.fetchLatestIndex(IndexFetcher.java:501)
>   at 
> org.apache.solr.handler.IndexFetcher.fetchLatestIndex(IndexFetcher.java:265)
>   at 
> org.apache.solr.handler.ReplicationHandler.doFetch(ReplicationHandler.java:397)
>   at 
> org.apache.solr.cloud.RecoveryStrategy.replicate(RecoveryStrategy.java:157)
>   at 
> org.apache.solr.cloud.RecoveryStrategy.doRecovery(RecoveryStrategy.java:409)
>   at org.apache.solr.cloud.RecoveryStrategy.run(RecoveryStrategy.java:222)
>   at java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source)
>   at java.util.concurrent.FutureTask.run(Unknown Source)
>   at 
> org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$0(ExecutorUtil.java:229)
>   at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
>   at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
>   at java.lang.Thread.run(Unknown Source)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9859) replication.properties cannot be updated after being written on Windows and neither replication.properties or index.properties are durable in the face of a crash.

2016-12-15 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15752042#comment-15752042
 ] 

Mark Miller commented on SOLR-9859:
---

That is the fallback behavior. See the overrides.

> replication.properties cannot be updated after being written on Windows and 
> neither replication.properties or index.properties are durable in the face of 
> a crash.
> --
>
> Key: SOLR-9859
> URL: https://issues.apache.org/jira/browse/SOLR-9859
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 5.5.3, 6.3
>Reporter: Pushkar Raste
>Assignee: Mark Miller
>Priority: Minor
> Attachments: SOLR-9859.patch, SOLR-9859.patch
>
>
> If a shard recovers via replication (vs PeerSync), a file named 
> {{replication.properties}} gets created. If the same shard recovers once more 
> via replication, IndexFetcher fails to write the latest replication 
> information as it tries to create {{replication.properties}}, but the file 
> already exists. Here is the stack trace I saw: 
> {code}
> java.nio.file.FileAlreadyExistsException: 
> \shard-3-001\cores\collection1\data\replication.properties
>   at sun.nio.fs.WindowsException.translateToIOException(Unknown Source)
>   at sun.nio.fs.WindowsException.rethrowAsIOException(Unknown Source)
>   at sun.nio.fs.WindowsException.rethrowAsIOException(Unknown Source)
>   at sun.nio.fs.WindowsFileSystemProvider.newByteChannel(Unknown Source)
>   at java.nio.file.spi.FileSystemProvider.newOutputStream(Unknown Source)
>   at java.nio.file.Files.newOutputStream(Unknown Source)
>   at 
> org.apache.lucene.store.FSDirectory$FSIndexOutput.<init>(FSDirectory.java:413)
>   at 
> org.apache.lucene.store.FSDirectory$FSIndexOutput.<init>(FSDirectory.java:409)
>   at 
> org.apache.lucene.store.FSDirectory.createOutput(FSDirectory.java:253)
>   at 
> org.apache.solr.handler.IndexFetcher.logReplicationTimeAndConfFiles(IndexFetcher.java:689)
>   at 
> org.apache.solr.handler.IndexFetcher.fetchLatestIndex(IndexFetcher.java:501)
>   at 
> org.apache.solr.handler.IndexFetcher.fetchLatestIndex(IndexFetcher.java:265)
>   at 
> org.apache.solr.handler.ReplicationHandler.doFetch(ReplicationHandler.java:397)
>   at 
> org.apache.solr.cloud.RecoveryStrategy.replicate(RecoveryStrategy.java:157)
>   at 
> org.apache.solr.cloud.RecoveryStrategy.doRecovery(RecoveryStrategy.java:409)
>   at org.apache.solr.cloud.RecoveryStrategy.run(RecoveryStrategy.java:222)
>   at java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source)
>   at java.util.concurrent.FutureTask.run(Unknown Source)
>   at 
> org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$0(ExecutorUtil.java:229)
>   at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
>   at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
>   at java.lang.Thread.run(Unknown Source)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4587) Implement Saved Searches a la ElasticSearch Percolator

2016-12-15 Thread Otis Gospodnetic (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4587?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15752039#comment-15752039
 ] 

Otis Gospodnetic commented on SOLR-4587:


http://search-lucene.com/m/Solr/eHNlnz4JxwIMSo1?subj=Deep+dive+on+the+topic+streaming+expression
 for anyone who wants to follow.

> Implement Saved Searches a la ElasticSearch Percolator
> --
>
> Key: SOLR-4587
> URL: https://issues.apache.org/jira/browse/SOLR-4587
> Project: Solr
>  Issue Type: New Feature
>  Components: SearchComponents - other, SolrCloud
>Reporter: Otis Gospodnetic
> Fix For: 6.0
>
>
> Use Lucene MemoryIndex for this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7594) Float/DoublePoint should not recommend using Math.nextUp/nextDown for exclusive ranges

2016-12-15 Thread Dawid Weiss (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7594?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15752029#comment-15752029
 ] 

Dawid Weiss commented on LUCENE-7594:
-

I checked, and javac doesn't recognize it as a constant expression; for example:
{code}
  10: ldc   #31 // float -0.0f
  12: invokestatic  #27 // Method 
java/lang/Float.floatToIntBits:(F)I
{code}
Very likely it'd be optimized away later in HotSpot, but any of the 
alternatives I mentioned are just as good.
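
If we wanted to sidestep the constant-folding question entirely, a minimal 
sketch of the hoisting alternative (class and method names illustrative):
{code}
class NegZeroSketch {
  // Computed once at class initialization; no repeated conversion on the
  // query path, regardless of what javac or HotSpot decide to fold.
  private static final int NEGATIVE_ZERO_FLOAT_BITS = Float.floatToIntBits(-0.0f);

  static boolean isNegativeZero(float f) {
    return Float.floatToIntBits(f) == NEGATIVE_ZERO_FLOAT_BITS;
  }
}
{code}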


> Float/DoublePoint should not recommend using Math.nextUp/nextDown for 
> exclusive ranges
> --
>
> Key: LUCENE-7594
> URL: https://issues.apache.org/jira/browse/LUCENE-7594
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Adrien Grand
>Priority: Minor
> Attachments: LUCENE-7594.patch
>
>
> Float/Double points are supposed to be consistent with Double/Float.compare, 
> so +0 is supposed to compare greater than -0. However Math.nextUp/nextDown is 
> not consistent with Double/Float.compare and returns MIN_VALUE for nextUp(-0).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-9859) replication.properties cannot be updated after being written on Windows and neither replication.properties or index.properties are durable in the face of a crash.

2016-12-15 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9859?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller updated SOLR-9859:
--
Summary: replication.properties cannot be updated after being written on 
Windows and neither replication.properties or index.properties are durable in 
the face of a crash.  (was: replication.properties does not get updated the 
second time around if index recovers via replication)

> replication.properties cannot be updated after being written on Windows and 
> neither replication.properties or index.properties are durable in the face of 
> a crash.
> --
>
> Key: SOLR-9859
> URL: https://issues.apache.org/jira/browse/SOLR-9859
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 5.5.3, 6.3
>Reporter: Pushkar Raste
>Assignee: Mark Miller
>Priority: Minor
> Attachments: SOLR-9859.patch, SOLR-9859.patch
>
>
> If a shard recovers via replication (vs PeerSync), a file named 
> {{replication.properties}} gets created. If the same shard recovers once more 
> via replication, IndexFetcher fails to write the latest replication 
> information as it tries to create {{replication.properties}}, but the file 
> already exists. Here is the stack trace I saw: 
> {code}
> java.nio.file.FileAlreadyExistsException: 
> \shard-3-001\cores\collection1\data\replication.properties
>   at sun.nio.fs.WindowsException.translateToIOException(Unknown Source)
>   at sun.nio.fs.WindowsException.rethrowAsIOException(Unknown Source)
>   at sun.nio.fs.WindowsException.rethrowAsIOException(Unknown Source)
>   at sun.nio.fs.WindowsFileSystemProvider.newByteChannel(Unknown Source)
>   at java.nio.file.spi.FileSystemProvider.newOutputStream(Unknown Source)
>   at java.nio.file.Files.newOutputStream(Unknown Source)
>   at 
> org.apache.lucene.store.FSDirectory$FSIndexOutput.<init>(FSDirectory.java:413)
>   at 
> org.apache.lucene.store.FSDirectory$FSIndexOutput.<init>(FSDirectory.java:409)
>   at 
> org.apache.lucene.store.FSDirectory.createOutput(FSDirectory.java:253)
>   at 
> org.apache.solr.handler.IndexFetcher.logReplicationTimeAndConfFiles(IndexFetcher.java:689)
>   at 
> org.apache.solr.handler.IndexFetcher.fetchLatestIndex(IndexFetcher.java:501)
>   at 
> org.apache.solr.handler.IndexFetcher.fetchLatestIndex(IndexFetcher.java:265)
>   at 
> org.apache.solr.handler.ReplicationHandler.doFetch(ReplicationHandler.java:397)
>   at 
> org.apache.solr.cloud.RecoveryStrategy.replicate(RecoveryStrategy.java:157)
>   at 
> org.apache.solr.cloud.RecoveryStrategy.doRecovery(RecoveryStrategy.java:409)
>   at org.apache.solr.cloud.RecoveryStrategy.run(RecoveryStrategy.java:222)
>   at java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source)
>   at java.util.concurrent.FutureTask.run(Unknown Source)
>   at 
> org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$0(ExecutorUtil.java:229)
>   at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
>   at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
>   at java.lang.Thread.run(Unknown Source)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-7466) Allow optional leading wildcards in complexphrase

2016-12-15 Thread Mikhail Khludnev (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7466?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Khludnev updated SOLR-7466:
---
Attachment: SOLR-7466.patch

What would you think about [^SOLR-7466.patch]? We can add SolrQP as a mixin to 
Lucene's ComplexPhraseQP and delegate wildcards to the former! 

> Allow optional leading wildcards in complexphrase
> -
>
> Key: SOLR-7466
> URL: https://issues.apache.org/jira/browse/SOLR-7466
> Project: Solr
>  Issue Type: Improvement
>  Components: query parsers
>Affects Versions: 4.8
>Reporter: Andy hardin
>  Labels: complexPhrase, query-parser, wildcards
> Attachments: SOLR-7466.patch
>
>
> Currently ComplexPhraseQParser (SOLR-1604) allows trailing wildcards on terms 
> in a phrase, but does not allow leading wildcards.  I would like the option 
> to be able to search for terms with both trailing and leading wildcards.  
> For example with:
> {!complexphrase allowLeadingWildcard=true} "j* *th"
> would match "John Smith", "Jim Smith", but not "John Schmitt"
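
As a purely hypothetical SolrJ illustration of the option this issue proposes 
(the {{allowLeadingWildcard}} local param is the addition; nothing else here 
is new):
{code}
import org.apache.solr.client.solrj.SolrQuery;

SolrQuery q =
    new SolrQuery("{!complexphrase allowLeadingWildcard=true}\"j* *th\"");
{code}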



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7594) Float/DoublePoint should not recommend using Math.nextUp/nextDown for exclusive ranges

2016-12-15 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7594?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15752009#comment-15752009
 ] 

Adrien Grand commented on LUCENE-7594:
--

Thanks for having a look Dawid. I'll apply your suggestion when pushing.

> Float/DoublePoint should not recommend using Math.nextUp/nextDown for 
> exclusive ranges
> --
>
> Key: LUCENE-7594
> URL: https://issues.apache.org/jira/browse/LUCENE-7594
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Adrien Grand
>Priority: Minor
> Attachments: LUCENE-7594.patch
>
>
> Float/Double points are supposed to be consistent with Double/Float.compare, 
> so +0 is supposed to compare greater than -0. However Math.nextUp/nextDown is 
> not consistent with Double/Float.compare and returns MIN_VALUE for nextUp(-0).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9859) replication.properties does not get updated the second time around if index recovers via replication

2016-12-15 Thread Pushkar Raste (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15751997#comment-15751997
 ] 

Pushkar Raste commented on SOLR-9859:
-

[~markrmil...@gmail.com] looks like in `atomicRename` you are deleting the 
existing file and then renaming the temp file. How is this better than just 
deleting the file and writing a new file, if we crash at the wrong time (as you 
mentioned above)?

Would we need to manually rename the temp file in such a scenario?

> replication.properties does not get updated the second time around if index 
> recovers via replication
> 
>
> Key: SOLR-9859
> URL: https://issues.apache.org/jira/browse/SOLR-9859
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 5.5.3, 6.3
>Reporter: Pushkar Raste
>Assignee: Mark Miller
>Priority: Minor
> Attachments: SOLR-9859.patch, SOLR-9859.patch
>
>
> If a shard recovers via replication (vs PeerSync), a file named 
> {{replication.properties}} gets created. If the same shard recovers once more 
> via replication, IndexFetcher fails to write the latest replication 
> information as it tries to create {{replication.properties}}, but the file 
> already exists. Here is the stack trace I saw: 
> {code}
> java.nio.file.FileAlreadyExistsException: 
> \shard-3-001\cores\collection1\data\replication.properties
>   at sun.nio.fs.WindowsException.translateToIOException(Unknown Source)
>   at sun.nio.fs.WindowsException.rethrowAsIOException(Unknown Source)
>   at sun.nio.fs.WindowsException.rethrowAsIOException(Unknown Source)
>   at sun.nio.fs.WindowsFileSystemProvider.newByteChannel(Unknown Source)
>   at java.nio.file.spi.FileSystemProvider.newOutputStream(Unknown Source)
>   at java.nio.file.Files.newOutputStream(Unknown Source)
>   at 
> org.apache.lucene.store.FSDirectory$FSIndexOutput.<init>(FSDirectory.java:413)
>   at 
> org.apache.lucene.store.FSDirectory$FSIndexOutput.<init>(FSDirectory.java:409)
>   at 
> org.apache.lucene.store.FSDirectory.createOutput(FSDirectory.java:253)
>   at 
> org.apache.solr.handler.IndexFetcher.logReplicationTimeAndConfFiles(IndexFetcher.java:689)
>   at 
> org.apache.solr.handler.IndexFetcher.fetchLatestIndex(IndexFetcher.java:501)
>   at 
> org.apache.solr.handler.IndexFetcher.fetchLatestIndex(IndexFetcher.java:265)
>   at 
> org.apache.solr.handler.ReplicationHandler.doFetch(ReplicationHandler.java:397)
>   at 
> org.apache.solr.cloud.RecoveryStrategy.replicate(RecoveryStrategy.java:157)
>   at 
> org.apache.solr.cloud.RecoveryStrategy.doRecovery(RecoveryStrategy.java:409)
>   at org.apache.solr.cloud.RecoveryStrategy.run(RecoveryStrategy.java:222)
>   at java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source)
>   at java.util.concurrent.FutureTask.run(Unknown Source)
>   at 
> org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$0(ExecutorUtil.java:229)
>   at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
>   at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
>   at java.lang.Thread.run(Unknown Source)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7594) Float/DoublePoint should not recommend using Math.nextUp/nextDown for exclusive ranges

2016-12-15 Thread Dawid Weiss (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7594?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15751994#comment-15751994
 ] 

Dawid Weiss commented on LUCENE-7594:
-

Ah, that is trappy... In a way it makes sense, as -0.0f is 0x8000_0000 and min 
value is 0x0000_0001, so if you disregard the sign bit there's some logic there. 

Patch looks good. I don't know if the compiler will be smart enough to avoid 
recomputing the static {{Float.floatToIntBits(-0f)}}. An alternative would be 
to:
{code}
if (Float.compare(f, -0.0f) == 0)
{code}
or simply compare the int representation (Float.floatToIntBits(f) == 0x8000_0000). 
Either way, looks good to me.


> Float/DoublePoint should not recommend using Math.nextUp/nextDown for 
> exclusive ranges
> --
>
> Key: LUCENE-7594
> URL: https://issues.apache.org/jira/browse/LUCENE-7594
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Adrien Grand
>Priority: Minor
> Attachments: LUCENE-7594.patch
>
>
> Float/Double points are supposed to be consistent with Double/Float.compare, 
> so +0 is supposed to compare greater than -0. However Math.nextUp/nextDown is 
> not consistent with Double/Float.compare and returns MIN_VALUE for nextUp(-0).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7593) FastVectorHighlighter Overlapping Queries Do Not Highlight

2016-12-15 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15751985#comment-15751985
 ] 

David Smiley commented on LUCENE-7593:
--

Hi Luo; thanks for reporting this problem.  If you can supply a patch that 
fixes the problem with a test, I'd take a look.

I've been working on the UnifiedHighlighter (introduced in Lucene 6.3) and it 
doesn't suffer from this problem.  Might you ascertain if the UH addresses your 
highlighting needs, and if not let me/us know what's needed?

> FastVectorHighlighter Overlapping Queries Do Not Highlight
> --
>
> Key: LUCENE-7593
> URL: https://issues.apache.org/jira/browse/LUCENE-7593
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/highlighter
>Affects Versions: 5.x, 5.5.2
>Reporter: Luo Ji
>  Labels: fastvectorhighlighter
>
> Example Text:
> ABCDEF
> Example Query:
> AB or ABC
> I got two term hits: (AB, startOffset=0, endOffset=2, weight=4) and (ABC, 
> startOffset=0, endOffset=3, weight=5). In FieldPhraseList's constructor, line 
> 103, addIfNoOverlap, only (AB) was highlighted; (ABC), with the higher weight, 
> was dropped because of the offset overlap.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9859) replication.properties does not get updated the second time around if index recovers via replication

2016-12-15 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15751963#comment-15751963
 ] 

Mark Miller commented on SOLR-9859:
---

bq. but whether that is supported is implementation dependent, and it wouldn't 
work for the arbitrary FileSystems we support

I've read this should actually work across the major operating systems (I'd 
expect it on Unix systems, but it seems Windows should be fine too).

We can support it on HDFS as well it seems. So perhaps something like this is 
the easiest solution.
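
A minimal sketch of that temp-file-plus-atomic-move idea (class and method 
names are illustrative, not the patch itself). The javadoc caveat matches the 
concern above: with ATOMIC_MOVE, whether an existing target is replaced is 
implementation specific.
{code}
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

class AtomicWriteSketch {
  static void writeDurably(Path dir, String name, byte[] contents) throws IOException {
    Path tmp = dir.resolve(name + ".tmp");
    Files.write(tmp, contents);            // write the new state aside first
    Files.move(tmp, dir.resolve(name),     // then swap it into place
        StandardCopyOption.ATOMIC_MOVE, StandardCopyOption.REPLACE_EXISTING);
  }
}
{code}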

> replication.properties does not get updated the second time around if index 
> recovers via replication
> 
>
> Key: SOLR-9859
> URL: https://issues.apache.org/jira/browse/SOLR-9859
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 5.5.3, 6.3
>Reporter: Pushkar Raste
>Assignee: Mark Miller
>Priority: Minor
> Attachments: SOLR-9859.patch, SOLR-9859.patch
>
>
> If a shard recovers via replication (vs PeerSync), a file named 
> {{replication.properties}} gets created. If the same shard recovers once more 
> via replication, IndexFetcher fails to write the latest replication 
> information as it tries to create {{replication.properties}}, but the file 
> already exists. Here is the stack trace I saw: 
> {code}
> java.nio.file.FileAlreadyExistsException: 
> \shard-3-001\cores\collection1\data\replication.properties
>   at sun.nio.fs.WindowsException.translateToIOException(Unknown Source)
>   at sun.nio.fs.WindowsException.rethrowAsIOException(Unknown Source)
>   at sun.nio.fs.WindowsException.rethrowAsIOException(Unknown Source)
>   at sun.nio.fs.WindowsFileSystemProvider.newByteChannel(Unknown Source)
>   at java.nio.file.spi.FileSystemProvider.newOutputStream(Unknown Source)
>   at java.nio.file.Files.newOutputStream(Unknown Source)
>   at 
> org.apache.lucene.store.FSDirectory$FSIndexOutput.<init>(FSDirectory.java:413)
>   at 
> org.apache.lucene.store.FSDirectory$FSIndexOutput.<init>(FSDirectory.java:409)
>   at 
> org.apache.lucene.store.FSDirectory.createOutput(FSDirectory.java:253)
>   at 
> org.apache.solr.handler.IndexFetcher.logReplicationTimeAndConfFiles(IndexFetcher.java:689)
>   at 
> org.apache.solr.handler.IndexFetcher.fetchLatestIndex(IndexFetcher.java:501)
>   at 
> org.apache.solr.handler.IndexFetcher.fetchLatestIndex(IndexFetcher.java:265)
>   at 
> org.apache.solr.handler.ReplicationHandler.doFetch(ReplicationHandler.java:397)
>   at 
> org.apache.solr.cloud.RecoveryStrategy.replicate(RecoveryStrategy.java:157)
>   at 
> org.apache.solr.cloud.RecoveryStrategy.doRecovery(RecoveryStrategy.java:409)
>   at org.apache.solr.cloud.RecoveryStrategy.run(RecoveryStrategy.java:222)
>   at java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source)
>   at java.util.concurrent.FutureTask.run(Unknown Source)
>   at 
> org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$0(ExecutorUtil.java:229)
>   at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
>   at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
>   at java.lang.Thread.run(Unknown Source)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-9859) replication.properties does not get updated the second time around if index recovers via replication

2016-12-15 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9859?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller updated SOLR-9859:
--
Attachment: SOLR-9859.patch

From what I've read, this is one possible solution. Would still want to test a 
little on Windows, I think.

> replication.properties does not get updated the second time around if index 
> recovers via replication
> 
>
> Key: SOLR-9859
> URL: https://issues.apache.org/jira/browse/SOLR-9859
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 5.5.3, 6.3
>Reporter: Pushkar Raste
>Assignee: Mark Miller
>Priority: Minor
> Attachments: SOLR-9859.patch, SOLR-9859.patch
>
>
> If a shard recovers via replication (vs PeerSync), a file named 
> {{replication.properties}} gets created. If the same shard recovers once more 
> via replication, IndexFetcher fails to write the latest replication 
> information as it tries to create {{replication.properties}}, but the file 
> already exists. Here is the stack trace I saw: 
> {code}
> java.nio.file.FileAlreadyExistsException: 
> \shard-3-001\cores\collection1\data\replication.properties
>   at sun.nio.fs.WindowsException.translateToIOException(Unknown Source)
>   at sun.nio.fs.WindowsException.rethrowAsIOException(Unknown Source)
>   at sun.nio.fs.WindowsException.rethrowAsIOException(Unknown Source)
>   at sun.nio.fs.WindowsFileSystemProvider.newByteChannel(Unknown Source)
>   at java.nio.file.spi.FileSystemProvider.newOutputStream(Unknown Source)
>   at java.nio.file.Files.newOutputStream(Unknown Source)
>   at 
> org.apache.lucene.store.FSDirectory$FSIndexOutput.<init>(FSDirectory.java:413)
>   at 
> org.apache.lucene.store.FSDirectory$FSIndexOutput.<init>(FSDirectory.java:409)
>   at 
> org.apache.lucene.store.FSDirectory.createOutput(FSDirectory.java:253)
>   at 
> org.apache.solr.handler.IndexFetcher.logReplicationTimeAndConfFiles(IndexFetcher.java:689)
>   at 
> org.apache.solr.handler.IndexFetcher.fetchLatestIndex(IndexFetcher.java:501)
>   at 
> org.apache.solr.handler.IndexFetcher.fetchLatestIndex(IndexFetcher.java:265)
>   at 
> org.apache.solr.handler.ReplicationHandler.doFetch(ReplicationHandler.java:397)
>   at 
> org.apache.solr.cloud.RecoveryStrategy.replicate(RecoveryStrategy.java:157)
>   at 
> org.apache.solr.cloud.RecoveryStrategy.doRecovery(RecoveryStrategy.java:409)
>   at org.apache.solr.cloud.RecoveryStrategy.run(RecoveryStrategy.java:222)
>   at java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source)
>   at java.util.concurrent.FutureTask.run(Unknown Source)
>   at 
> org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$0(ExecutorUtil.java:229)
>   at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
>   at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
>   at java.lang.Thread.run(Unknown Source)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (LUCENE-7588) A parallel DrillSideways implementation

2016-12-15 Thread Emmanuel Keller (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7588?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15751916#comment-15751916
 ] 

Emmanuel Keller edited comment on LUCENE-7588 at 12/15/16 5:18 PM:
---

Thanks for your feedback guys, it's pretty clear. FYI, the patch includes unit 
tests derived from the already existing test on facets.


was (Author: ekeller):
Thanks for your feedback guys, it's pretty clear. FYI, the patch includes unit 
tests derived for the already existing test on facets.

> A parallel DrillSideways implementation
> ---
>
> Key: LUCENE-7588
> URL: https://issues.apache.org/jira/browse/LUCENE-7588
> Project: Lucene - Core
>  Issue Type: Improvement
>Affects Versions: master (7.0), 6.3.1
>Reporter: Emmanuel Keller
>Priority: Minor
>  Labels: facet, faceting
> Fix For: master (7.0), 6.3.1
>
> Attachments: LUCENE-7588.patch
>
>
> Currently the DrillSideways implementation is based on the single-threaded 
> IndexSearcher.search(Query query, Collector results).
> On a large document set, the single-threaded collection can be really slow.
> The ParallelDrillSideways implementation could:
> 1. Use the CollectorManager-based method IndexSearcher.search(Query query, 
> CollectorManager collectorManager) to get the benefits of multithreading on 
> index segments,
> 2. Compute each DrillSideways subquery on a single thread.
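
A minimal, self-contained sketch (not the patch itself) of the 
segment-parallel entry point the proposal builds on; the index path and query 
are assumptions for illustration:
{code}
import java.nio.file.Paths;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.MatchAllDocsQuery;
import org.apache.lucene.search.TopDocs;
import org.apache.lucene.store.FSDirectory;

public class ParallelSearchSketch {
  public static void main(String[] args) throws Exception {
    ExecutorService pool = Executors.newFixedThreadPool(4);
    try (DirectoryReader reader =
        DirectoryReader.open(FSDirectory.open(Paths.get(args[0])))) {
      // With a non-null executor, the CollectorManager-based search path
      // collects each index slice on its own thread and merges the results.
      IndexSearcher searcher = new IndexSearcher(reader, pool);
      TopDocs hits = searcher.search(new MatchAllDocsQuery(), 10);
      System.out.println("totalHits=" + hits.totalHits);
    } finally {
      pool.shutdown();
    }
  }
}
{code}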



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-7594) Float/DoublePoint should not recommend using Math.nextUp/nextDown for exclusive ranges

2016-12-15 Thread Adrien Grand (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7594?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrien Grand updated LUCENE-7594:
-
Attachment: LUCENE-7594.patch

Here is a patch that adds an alternative nextUp/nextDown impl that treats -0 
and +0 as different values.
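
The gist, as a small sketch (the actual patch lives in Float/DoublePoint and 
also covers the double and nextDown cases):
{code}
/** Like Math.nextUp, but consistent with Float.compare: the next value up
 *  from -0.0f is +0.0f, not Float.MIN_VALUE. */
static float nextUp(float f) {
  if (Float.floatToIntBits(f) == Float.floatToIntBits(-0.0f)) {
    return +0.0f;
  }
  return Math.nextUp(f);
}
{code}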

> Float/DoublePoint should not recommend using Math.nextUp/nextDown for 
> exclusive ranges
> --
>
> Key: LUCENE-7594
> URL: https://issues.apache.org/jira/browse/LUCENE-7594
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Adrien Grand
>Priority: Minor
> Attachments: LUCENE-7594.patch
>
>
> Float/Double points are supposed to be consistent with Double/Float.compare, 
> so +0 is supposed to compare greater than -0. However Math.nextUp/nextDown is 
> not consistent with Double/Float.compare and returns MIN_VALUE for nextUp(-0).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7588) A parallel DrillSideways implementation

2016-12-15 Thread Emmanuel Keller (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7588?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15751916#comment-15751916
 ] 

Emmanuel Keller commented on LUCENE-7588:
-

Thanks for your feedback guys, it's pretty clear. FYI, the patch includes unit 
tests derived for the already existing test on facets.

> A parallel DrillSideways implementation
> ---
>
> Key: LUCENE-7588
> URL: https://issues.apache.org/jira/browse/LUCENE-7588
> Project: Lucene - Core
>  Issue Type: Improvement
>Affects Versions: master (7.0), 6.3.1
>Reporter: Emmanuel Keller
>Priority: Minor
>  Labels: facet, faceting
> Fix For: master (7.0), 6.3.1
>
> Attachments: LUCENE-7588.patch
>
>
> Currently the DrillSideways implementation is based on the single-threaded 
> IndexSearcher.search(Query query, Collector results).
> On a large document set, the single-threaded collection can be really slow.
> The ParallelDrillSideways implementation could:
> 1. Use the CollectorManager-based method IndexSearcher.search(Query query, 
> CollectorManager collectorManager) to get the benefits of multithreading on 
> index segments,
> 2. Compute each DrillSideways subquery on a single thread.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-7594) Float/DoublePoint should not recommend using Math.nextUp/nextDown for exclusive ranges

2016-12-15 Thread Adrien Grand (JIRA)
Adrien Grand created LUCENE-7594:


 Summary: Float/DoublePoint should not recommend using 
Math.nextUp/nextDown for exclusive ranges
 Key: LUCENE-7594
 URL: https://issues.apache.org/jira/browse/LUCENE-7594
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Adrien Grand
Priority: Minor


Float/Double points are supposed to be consistent with Double/Float.compare, so 
+0 is supposed to compare greater than -0. However Math.nextUp/nextDown is not 
consistent with Double/Float.compare and returns MIN_VALUE for nextUp(-0).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7587) New FacetQuery and MultiFacetQuery

2016-12-15 Thread Emmanuel Keller (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7587?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15751868#comment-15751868
 ] 

Emmanuel Keller commented on LUCENE-7587:
-

True. I will do that and submit a new patch. A review of my approximate 
English prose will probably still be required.

> New FacetQuery and MultiFacetQuery
> --
>
> Key: LUCENE-7587
> URL: https://issues.apache.org/jira/browse/LUCENE-7587
> Project: Lucene - Core
>  Issue Type: Improvement
>Affects Versions: master (7.0), 6.3.1
>Reporter: Emmanuel Keller
>Priority: Minor
>  Labels: facet, faceting
> Fix For: master (7.0), 6.3.1
>
> Attachments: LUCENE-7587.patch
>
>
> This patch introduces two convenient queries: FacetQuery and MultiFacetQuery.
> It can be useful to be able to filter a complex query on one or many facet 
> values.
> - FacetQuery acts as a TermQuery on a FacetField.
> - MultiFacetQuery acts as a MultiTermQuery on a FacetField.
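
A hypothetical usage sketch, with the API shape taken from this description 
rather than from any released version:
{code}
// Filter a larger query down to one facet value, or to any of several.
Query byAuthor  = new FacetQuery("Author", "Bob");
Query byAuthors = new MultiFacetQuery("Author", "Bob", "Lisa");
{code}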



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS-EA] Lucene-Solr-6.x-Linux (64bit/jdk-9-ea+147) - Build # 2428 - Unstable!

2016-12-15 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/2428/
Java: 64bit/jdk-9-ea+147 -XX:+UseCompressedOops -XX:+UseSerialGC

1 tests failed.
FAILED:  org.apache.solr.cloud.LeaderFailoverAfterPartitionTest.test

Error Message:
Expected 2 of 3 replicas to be active but only found 1; 
[core_node3:{"core":"c8n_1x3_lf_shard1_replica2","base_url":"http://127.0.0.1:36452/id_g/wr","node_name":"127.0.0.1:36452_id_g%2Fwr","state":"active","leader":"true"}];
 clusterState: 
DocCollection(c8n_1x3_lf//collections/c8n_1x3_lf/state.json/17)={   
"replicationFactor":"3",   "shards":{"shard1":{   
"range":"80000000-7fffffff",   "state":"active",   "replicas":{ 
"core_node1":{   "state":"down",   
"base_url":"http://127.0.0.1:33243/id_g/wr",   
"core":"c8n_1x3_lf_shard1_replica1",   
"node_name":"127.0.0.1:33243_id_g%2Fwr"}, "core_node2":{   
"core":"c8n_1x3_lf_shard1_replica3",   
"base_url":"http://127.0.0.1:34544/id_g/wr",   
"node_name":"127.0.0.1:34544_id_g%2Fwr",   "state":"down"}, 
"core_node3":{   "core":"c8n_1x3_lf_shard1_replica2",   
"base_url":"http://127.0.0.1:36452/id_g/wr",   
"node_name":"127.0.0.1:36452_id_g%2Fwr",   "state":"active",   
"leader":"true"}}},   "router":{"name":"compositeId"},   
"maxShardsPerNode":"1",   "autoAddReplicas":"false"}

Stack Trace:
java.lang.AssertionError: Expected 2 of 3 replicas to be active but only found 
1; 
[core_node3:{"core":"c8n_1x3_lf_shard1_replica2","base_url":"http://127.0.0.1:36452/id_g/wr","node_name":"127.0.0.1:36452_id_g%2Fwr","state":"active","leader":"true"}];
 clusterState: DocCollection(c8n_1x3_lf//collections/c8n_1x3_lf/state.json/17)={
  "replicationFactor":"3",
  "shards":{"shard1":{
      "range":"80000000-7fffffff",
      "state":"active",
      "replicas":{
        "core_node1":{
          "state":"down",
          "base_url":"http://127.0.0.1:33243/id_g/wr",
          "core":"c8n_1x3_lf_shard1_replica1",
          "node_name":"127.0.0.1:33243_id_g%2Fwr"},
        "core_node2":{
          "core":"c8n_1x3_lf_shard1_replica3",
          "base_url":"http://127.0.0.1:34544/id_g/wr",
          "node_name":"127.0.0.1:34544_id_g%2Fwr",
          "state":"down"},
        "core_node3":{
          "core":"c8n_1x3_lf_shard1_replica2",
          "base_url":"http://127.0.0.1:36452/id_g/wr",
          "node_name":"127.0.0.1:36452_id_g%2Fwr",
          "state":"active",
          "leader":"true"}}},
  "router":{"name":"compositeId"},
  "maxShardsPerNode":"1",
  "autoAddReplicas":"false"}
at 
__randomizedtesting.SeedInfo.seed([3732C7D7F59DF470:BF66F80D5B619988]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.cloud.LeaderFailoverAfterPartitionTest.testRf3WithLeaderFailover(LeaderFailoverAfterPartitionTest.java:168)
at 
org.apache.solr.cloud.LeaderFailoverAfterPartitionTest.test(LeaderFailoverAfterPartitionTest.java:55)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:538)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:992)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:967)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Commented] (SOLR-9859) replication.properties does not get updated the second time around if index recovers via replication

2016-12-15 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15751827#comment-15751827
 ] 

Mark Miller commented on SOLR-9859:
---

bq. Is there a way we can write a temp file and do a mv to rename/overwrite 
replication.properties

Nothing great. Java 7 gives us an atomic move that can overwrite an existing 
file, but whether that is supported is implementation dependent, and it wouldn't 
work for the arbitrary FileSystems we support. We would still need some startup 
logic that could address a crashed state.

bq. Alternate solution would be to keep appending to existing file and read the 
latest stats from the file.

The problem is that we use this same strategy for index.properties, which is 
not so straightforward to handle this way.

bq.  I think we can simply delete the exist replication.properties before write 
a new one.

That is the easy fix I mentioned above, but it's fragile and, like 
index.properties, not robust in a crash.
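
For reference, a minimal sketch of the write-temp-then-move approach under 
discussion, assuming a plain local NIO filesystem (the paths and contents are 
illustrative; as noted above, atomic-move behavior is implementation dependent, 
and this says nothing about Solr's pluggable Directory/FileSystem abstractions):

{code}
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.*;

public class AtomicPropertiesWrite {
  public static void main(String[] args) throws IOException {
    Path dir = Paths.get("data");                         // hypothetical data dir
    Path tmp = dir.resolve("replication.properties.tmp");
    Path dest = dir.resolve("replication.properties");

    // Write the new contents to a temp file first.
    Files.write(tmp, "indexReplicatedAt=...".getBytes(StandardCharsets.UTF_8));

    try {
      // ATOMIC_MOVE: whether an existing destination is replaced is
      // implementation specific (on POSIX this is an atomic rename(2)).
      Files.move(tmp, dest, StandardCopyOption.ATOMIC_MOVE);
    } catch (AtomicMoveNotSupportedException e) {
      // Fallback for providers without atomic moves: plain replace,
      // which loses the crash-safety that motivated the temp file.
      Files.move(tmp, dest, StandardCopyOption.REPLACE_EXISTING);
    }
  }
}
{code}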

> replication.properties does not get updated the second time around if index 
> recovers via replication
> 
>
> Key: SOLR-9859
> URL: https://issues.apache.org/jira/browse/SOLR-9859
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 5.5.3, 6.3
>Reporter: Pushkar Raste
>Assignee: Mark Miller
>Priority: Minor
> Attachments: SOLR-9859.patch
>
>
> If a shard recovers via replication (vs PeerSync), a file named 
> {{replication.properties}} gets created. If the same shard recovers once more 
> via replication, IndexFetcher fails to write the latest replication information 
> because it tries to create {{replication.properties}} but the file already exists. 
> Here is the stack trace I saw 
> {code}
> java.nio.file.FileAlreadyExistsException: 
> \shard-3-001\cores\collection1\data\replication.properties
>   at sun.nio.fs.WindowsException.translateToIOException(Unknown Source)
>   at sun.nio.fs.WindowsException.rethrowAsIOException(Unknown Source)
>   at sun.nio.fs.WindowsException.rethrowAsIOException(Unknown Source)
>   at sun.nio.fs.WindowsFileSystemProvider.newByteChannel(Unknown Source)
>   at java.nio.file.spi.FileSystemProvider.newOutputStream(Unknown Source)
>   at java.nio.file.Files.newOutputStream(Unknown Source)
>   at 
> org.apache.lucene.store.FSDirectory$FSIndexOutput.<init>(FSDirectory.java:413)
>   at 
> org.apache.lucene.store.FSDirectory$FSIndexOutput.<init>(FSDirectory.java:409)
>   at 
> org.apache.lucene.store.FSDirectory.createOutput(FSDirectory.java:253)
>   at 
> org.apache.solr.handler.IndexFetcher.logReplicationTimeAndConfFiles(IndexFetcher.java:689)
>   at 
> org.apache.solr.handler.IndexFetcher.fetchLatestIndex(IndexFetcher.java:501)
>   at 
> org.apache.solr.handler.IndexFetcher.fetchLatestIndex(IndexFetcher.java:265)
>   at 
> org.apache.solr.handler.ReplicationHandler.doFetch(ReplicationHandler.java:397)
>   at 
> org.apache.solr.cloud.RecoveryStrategy.replicate(RecoveryStrategy.java:157)
>   at 
> org.apache.solr.cloud.RecoveryStrategy.doRecovery(RecoveryStrategy.java:409)
>   at org.apache.solr.cloud.RecoveryStrategy.run(RecoveryStrategy.java:222)
>   at java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source)
>   at java.util.concurrent.FutureTask.run(Unknown Source)
>   at 
> org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$0(ExecutorUtil.java:229)
>   at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
>   at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
>   at java.lang.Thread.run(Unknown Source)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6989) Implement MMapDirectory unmapping for coming Java 9 changes

2016-12-15 Thread Dawid Weiss (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6989?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15751820#comment-15751820
 ] 

Dawid Weiss commented on LUCENE-6989:
-

This is part of {{StaticFieldsInvariantRule}}; if you remove it from 
{{LuceneTestCase}} the rest should work fine.

> Implement MMapDirectory unmapping for coming Java 9 changes
> ---
>
> Key: LUCENE-6989
> URL: https://issues.apache.org/jira/browse/LUCENE-6989
> Project: Lucene - Core
>  Issue Type: Task
>  Components: core/store
>Reporter: Uwe Schindler
>Assignee: Uwe Schindler
>  Labels: Java9
> Fix For: 6.0, 6.4
>
> Attachments: LUCENE-6989-disable5x.patch, 
> LUCENE-6989-disable5x.patch, LUCENE-6989-fixbuild148.patch, 
> LUCENE-6989-v2.patch, LUCENE-6989-v3-post-b148.patch, LUCENE-6989.patch, 
> LUCENE-6989.patch, LUCENE-6989.patch, LUCENE-6989.patch
>
>
> Originally, the sun.misc.Cleaner interface was declared as "critical API" in 
> [JEP 260|http://openjdk.java.net/jeps/260].
> Unfortunately the decision was changed in favor of an officially supported 
> {{java.lang.ref.Cleaner}} API. A side effect of this change is to move all 
> existing {{sun.misc.Cleaner}} APIs into a non-exported package. This causes 
> our forceful unmapping to no longer work: we can get the cleaner 
> instance via reflection, but trying to invoke it will throw one of the new 
> Jigsaw RuntimeExceptions because it is completely inaccessible. This will make 
> our forceful unmapping fail. There are also no changes in the garbage 
> collector; the problem still exists.
> For more information see this [mailing list 
> thread|http://mail.openjdk.java.net/pipermail/core-libs-dev/2016-January/thread.html#38243].
> This change will likely be done, leaving our unmapping efforts no longer 
> working. Alan Bateman is aware of this issue and will open a new issue at 
> OpenJDK to allow forceful unmapping without using the now private 
> sun.misc.Cleaner. The idea is to let the internal class sun.misc.Cleaner 
> implement the Runnable interface, so we can simply cast to Runnable and call 
> the run() method to unmap. The code would then work. This will lead to minor 
> changes in our unmapper in MMapDirectory: an instanceof check and casting if 
> possible.
> I opened this issue to keep track and implement the changes as soon as 
> possible, so people will have working unmapping when Java 9 comes out. 
> Current Lucene versions will no longer work with Java 9.
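
To make the proposal concrete, a minimal sketch of the instanceof-and-cast idea 
described above (a hypothetical helper, not the actual MMapDirectory code):

{code}
public class CleanerSketch {
  /**
   * If the JDK lets sun.misc.Cleaner implement Runnable, the cleaner object
   * obtained reflectively from a mapped buffer can be invoked through public
   * API only, even though its own class is inaccessible under Jigsaw.
   */
  static boolean tryUnmap(Object cleaner) {
    if (cleaner instanceof Runnable) {
      ((Runnable) cleaner).run();   // Runnable.run() is public, exported API
      return true;
    }
    return false;                   // caller falls back to reflective clean()
  }
}
{code}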



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9712) Saner default for maxWarmingSearchers

2016-12-15 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9712?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15751816#comment-15751816
 ] 

Yonik Seeley commented on SOLR-9712:


OK, I'm going to create a stress test for concurrent commits...

> Saner default for maxWarmingSearchers
> -
>
> Key: SOLR-9712
> URL: https://issues.apache.org/jira/browse/SOLR-9712
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: search
>Reporter: Shalin Shekhar Mangar
> Fix For: master (7.0), 6.4
>
> Attachments: SOLR-9712.patch
>
>
> As noted in SOLR-9710, the default for maxWarmingSearchers is 
> Integer.MAX_VALUE, which is just crazy. Let's have a saner default. Today we 
> log a performance warning when the number of on-deck searchers goes over 1. 
> What if we had a default of 1 that expert users can increase if needed?
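
For context, the setting lives in the <query> section of solrconfig.xml; a 
snippet showing the proposed default of 1 (illustrative, not a committed change):

{code}
<!-- solrconfig.xml: cap the number of searchers that may warm concurrently.
     The proposal is to default this to 1 instead of Integer.MAX_VALUE;
     expert users can raise it if their commit/warming pattern needs it. -->
<query>
  <maxWarmingSearchers>1</maxWarmingSearchers>
</query>
{code}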



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (LUCENE-7572) Cache the hashcode of the doc values terms queries

2016-12-15 Thread Adrien Grand (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7572?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrien Grand resolved LUCENE-7572.
--
   Resolution: Fixed
Fix Version/s: 6.4
   master (7.0)

> Cache the hashcode of the doc values terms queries
> --
>
> Key: LUCENE-7572
> URL: https://issues.apache.org/jira/browse/LUCENE-7572
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Priority: Minor
> Fix For: master (7.0), 6.4
>
> Attachments: LUCENE-7572.patch
>
>
> DocValuesNumbersQuery and DocValuesTermsQuery can potentially wrap a large 
> number of terms so it would help if we cached their hashcode.
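
The optimization itself is the standard lazily-computed hash code; a minimal 
sketch of the pattern (an illustrative class, not the actual patch):

{code}
import java.util.Arrays;

public final class CachedHashTermsQuery {
  private final long[] terms;   // potentially very large
  private int hash;             // 0 means "not computed yet" (sketch convention)

  CachedHashTermsQuery(long[] terms) {
    this.terms = terms.clone();
  }

  @Override
  public int hashCode() {
    // Compute the O(n) hash once and reuse it; the benign race is fine
    // because recomputation always yields the same value.
    if (hash == 0) {
      hash = 31 * getClass().hashCode() + Arrays.hashCode(terms);
    }
    return hash;
  }

  @Override
  public boolean equals(Object other) {
    return other instanceof CachedHashTermsQuery
        && Arrays.equals(terms, ((CachedHashTermsQuery) other).terms);
  }
}
{code}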



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (LUCENE-7589) Prevent outliers from raising the number of bits of everyone with numeric doc values

2016-12-15 Thread Adrien Grand (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7589?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrien Grand resolved LUCENE-7589.
--
   Resolution: Fixed
Fix Version/s: master (7.0)

> Prevent outliers from raising the number of bits of everyone with numeric doc 
> values
> 
>
> Key: LUCENE-7589
> URL: https://issues.apache.org/jira/browse/LUCENE-7589
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Assignee: Adrien Grand
>Priority: Minor
> Fix For: master (7.0)
>
> Attachments: LUCENE-7589.patch
>
>
> Today we encode entire segments with a single number of bits per value. It 
> was done this way because it was faster, but it also means a single outlier 
> can significantly increase the space requirements. I think we should have 
> protection against that.
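
A quick worked example of the cost: with a single bits-per-value for the whole 
segment, the widest value sets the width for every document (a sketch; the 
computation mirrors what PackedInts.bitsRequired does):

{code}
public class BpvOutlier {
  static int bitsRequired(long maxValue) {
    return Math.max(1, 64 - Long.numberOfLeadingZeros(maxValue));
  }

  public static void main(String[] args) {
    long typical = 200;                  // fits in 8 bits
    long outlier = Long.MAX_VALUE / 2;   // needs 62 bits
    System.out.println(bitsRequired(typical));   // 8
    System.out.println(bitsRequired(outlier));   // 62
    // With one bpv per segment, a single outlier forces ~62 bits per
    // document where ~8 would otherwise do.
  }
}
{code}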



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7572) Cache the hashcode of the doc values terms queries

2016-12-15 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7572?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15751795#comment-15751795
 ] 

ASF subversion and git services commented on LUCENE-7572:
-

Commit 9a72bd871ec684f186c7818ff1582fc1d1fe5f3f in lucene-solr's branch 
refs/heads/branch_6x from [~jpountz]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=9a72bd8 ]

LUCENE-7572: Cache the hash code of doc values queries.


> Cache the hashcode of the doc values terms queries
> --
>
> Key: LUCENE-7572
> URL: https://issues.apache.org/jira/browse/LUCENE-7572
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Priority: Minor
> Attachments: LUCENE-7572.patch
>
>
> DocValuesNumbersQuery and DocValuesTermsQuery can potentially wrap a large 
> number of terms so it would help if we cached their hashcode.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7589) Prevent outliers from raising the number of bits of everyone with numeric doc values

2016-12-15 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7589?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15751785#comment-15751785
 ] 

ASF subversion and git services commented on LUCENE-7589:
-

Commit 3b182aa2fb3e4062f6ec5be819f3aa70aa2e523d in lucene-solr's branch 
refs/heads/master from [~jpountz]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=3b182aa ]

LUCENE-7589: Prevent outliers from raising the bpv for everyone.


> Prevent outliers from raising the number of bits of everyone with numeric doc 
> values
> 
>
> Key: LUCENE-7589
> URL: https://issues.apache.org/jira/browse/LUCENE-7589
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Assignee: Adrien Grand
>Priority: Minor
> Attachments: LUCENE-7589.patch
>
>
> Today we encode entire segments with a single number of bits per value. It 
> was done this way because it was faster, but it also means a single outlier 
> can significantly increase the space requirements. I think we should have 
> protection against that.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7572) Cache the hashcode of the doc values terms queries

2016-12-15 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7572?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15751784#comment-15751784
 ] 

ASF subversion and git services commented on LUCENE-7572:
-

Commit ea1569e2914f9ba914b582a0801d6cb83a29529b in lucene-solr's branch 
refs/heads/master from [~jpountz]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=ea1569e ]

LUCENE-7572: Cache the hash code of doc values queries.


> Cache the hashcode of the doc values terms queries
> --
>
> Key: LUCENE-7572
> URL: https://issues.apache.org/jira/browse/LUCENE-7572
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Priority: Minor
> Attachments: LUCENE-7572.patch
>
>
> DocValuesNumbersQuery and DocValuesTermsQuery can potentially wrap a large 
> number of terms so it would help if we cached their hashcode.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Problem with fetchindex perhaps? Or at least a scary message

2016-12-15 Thread Erick Erickson
OK, this is SOLR-9859 I think, so we can ignore it.

On Wed, Oct 26, 2016 at 9:42 PM, Erick Erickson  wrote:
> Setup:
>
> I have a 5.3.1 techproducts example (renamed to "tech"). Start the
> techproducts example on Solr 6x on another port. Index some stuff into my
> 5.3.1 instance and then issue:
>
> http://localhost:8983/solr/techproducts/replication?command=fetchindex&masterUrl=http://localhost:8981/solr/tech
>
> So far, so good, the index is replicated just fine.
>
> Now index docs on "tech" and re-issue the fetchindex command. The
> index replicates, but then the stack trace below comes out in the
> logs. I don't know whether this happens in earlier 6x versions. Should
> we be looking at this before we cut 6.3? Is it worth a JIRA? One thing
> that concerns me is that the state of the last replication won't be
> written. Of course it could be something wonky with my testing; I ran
> across this while testing something totally different (fetchindex into a
> SolrCloud replica, if you must know).
>
> WARN  - 2016-10-27 04:00:22.924; [   x:techproducts]
> org.apache.solr.handler.IndexFetcher; Exception while updating
> statistics
>
> java.nio.file.FileAlreadyExistsException:
> /Users/Erick/apache/solr/lucene-solr-6x/solr/example/techproducts/solr/techproducts/data/replication.properties
>
> at sun.nio.fs.UnixException.translateToIOException(UnixException.java:88)
>
> at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
>
> at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)
>
> at 
> sun.nio.fs.UnixFileSystemProvider.newByteChannel(UnixFileSystemProvider.java:214)
>
> at 
> java.nio.file.spi.FileSystemProvider.newOutputStream(FileSystemProvider.java:434)
>
> at java.nio.file.Files.newOutputStream(Files.java:216)
>
> at 
> org.apache.lucene.store.FSDirectory$FSIndexOutput.<init>(FSDirectory.java:413)
>
> at 
> org.apache.lucene.store.FSDirectory$FSIndexOutput.<init>(FSDirectory.java:409)
>
> at org.apache.lucene.store.FSDirectory.createOutput(FSDirectory.java:253)
>
> at 
> org.apache.lucene.store.NRTCachingDirectory.createOutput(NRTCachingDirectory.java:157)
>
> at 
> org.apache.solr.handler.IndexFetcher.logReplicationTimeAndConfFiles(IndexFetcher.java:675)
>
> at 
> org.apache.solr.handler.IndexFetcher.fetchLatestIndex(IndexFetcher.java:487)
>
> at 
> org.apache.solr.handler.IndexFetcher.fetchLatestIndex(IndexFetcher.java:251)
>
> at 
> org.apache.solr.handler.ReplicationHandler.doFetch(ReplicationHandler.java:397)
>
> at 
> org.apache.solr.handler.ReplicationHandler.lambda$handleRequestBody$146(ReplicationHandler.java:279)
>
> at org.apache.solr.handler.ReplicationHandler$$Lambda$61/140045219.run(Unknown
> Source)
>
> at java.lang.Thread.run(Thread.java:745)
>
> INFO  - 2016-10-27 04:00:22.925; [   x:techproducts]
> org.apache.solr.handler.IndexFetcher; removing old index directory
> NRTCachingDirectory(MMapDirectory@/Users/Erick/apache/solr/lucene-solr-6x/solr/example/techproducts/solr/techproducts/data/index.20161027035949763
> lockFactory=org.apache.lucene.store.NativeFSLockFactory@7e50cd1;
> maxCacheMB=48.0 maxMergeSizeMB=4.0)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7587) New FacetQuery and MultiFacetQuery

2016-12-15 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7587?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15751753#comment-15751753
 ] 

Michael McCandless commented on LUCENE-7587:


These helper classes look great, thank you [~ekeller].

Maybe add javadocs explaining that this is an alternative to 
{{DrillDownQuery}}, especially in cases where you don't intend to use 
{{DrillSideways}}?

> New FacetQuery and MultiFacetQuery
> --
>
> Key: LUCENE-7587
> URL: https://issues.apache.org/jira/browse/LUCENE-7587
> Project: Lucene - Core
>  Issue Type: Improvement
>Affects Versions: master (7.0), 6.3.1
>Reporter: Emmanuel Keller
>Priority: Minor
>  Labels: facet, faceting
> Fix For: master (7.0), 6.3.1
>
> Attachments: LUCENE-7587.patch
>
>
> This patch introduces two convenience queries: FacetQuery and MultiFacetQuery.
> It can be useful to be able to filter a complex query on one or many facet 
> values.
> - FacetQuery acts as a TermQuery on a FacetField.
> - MultiFacetQuery acts as a MultiTermQuery on a FacetField.
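
For comparison, the existing way to express such a filter with DrillDownQuery 
(the alternative mentioned above); a sketch assuming a default FacetsConfig and 
an indexed "Author" facet dimension:

{code}
import org.apache.lucene.facet.DrillDownQuery;
import org.apache.lucene.facet.FacetsConfig;
import org.apache.lucene.search.Query;

public class DrillDownSketch {
  public static Query authorFilter() {
    FacetsConfig config = new FacetsConfig();
    DrillDownQuery q = new DrillDownQuery(config);
    q.add("Author", "Bob");     // like a TermQuery on a FacetField
    q.add("Author", "Alice");   // same dim again ORs the values together,
                                // similar to what MultiFacetQuery proposes
    return q;
  }
}
{code}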



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7588) A parallel DrillSideways implementation

2016-12-15 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7588?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15751726#comment-15751726
 ] 

Michael McCandless commented on LUCENE-7588:


Sorry, I have been meaning to have a look at this cool idea/patch, and what 
you've done (open issue, put patch up, gently nudge) is exactly the right 
process!  Thank you [~ekeller] ... I'll have a look soon.

> A parallel DrillSideways implementation
> ---
>
> Key: LUCENE-7588
> URL: https://issues.apache.org/jira/browse/LUCENE-7588
> Project: Lucene - Core
>  Issue Type: Improvement
>Affects Versions: master (7.0), 6.3.1
>Reporter: Emmanuel Keller
>Priority: Minor
>  Labels: facet, faceting
> Fix For: master (7.0), 6.3.1
>
> Attachments: LUCENE-7588.patch
>
>
> Currently the DrillSideways implementation is based on the single-threaded 
> IndexSearcher.search(Query query, Collector results).
> On large document sets, the single-threaded collection can be really slow.
> The ParallelDrillSideways implementation could:
> 1. Use the CollectorManager based method IndexSearcher.search(Query query, 
> CollectorManager collectorManager) to get the benefits of multithreading on 
> index segments,
> 2. Compute each DrillSideways subquery on a single thread.
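
For reference, the entry point named in point 1 already exists; a minimal 
sketch of a hit-counting CollectorManager run against an executor-backed 
IndexSearcher (the index path and pool size are illustrative):

{code}
import java.nio.file.Paths;
import java.util.Collection;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.search.*;
import org.apache.lucene.store.FSDirectory;

public class ParallelCount {
  public static void main(String[] args) throws Exception {
    ExecutorService pool = Executors.newFixedThreadPool(4);
    try (DirectoryReader reader =
             DirectoryReader.open(FSDirectory.open(Paths.get("index")))) {
      // The executor lets search() fan out over groups of index segments.
      IndexSearcher searcher = new IndexSearcher(reader, pool);
      int total = searcher.search(new MatchAllDocsQuery(),
          new CollectorManager<TotalHitCountCollector, Integer>() {
            @Override
            public TotalHitCountCollector newCollector() {
              return new TotalHitCountCollector();   // one per slice
            }
            @Override
            public Integer reduce(Collection<TotalHitCountCollector> cs) {
              int sum = 0;
              for (TotalHitCountCollector c : cs) sum += c.getTotalHits();
              return sum;                            // merge per-slice counts
            }
          });
      System.out.println("hits: " + total);
    } finally {
      pool.shutdown();
    }
  }
}
{code}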



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7588) A parallel DrillSideways implementation

2016-12-15 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7588?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15751716#comment-15751716
 ] 

Erick Erickson commented on LUCENE-7588:


Well, for DrillSideways I'd ping [~mikemccand] for the quickest read if he has 
the time.

Basically you gently prompt the JIRA from time to time and see if you can get 
someone's attention.

Perhaps a short description of the approach the patch takes would help orient 
someone who's looking at it.



> A parallel DrillSideways implementation
> ---
>
> Key: LUCENE-7588
> URL: https://issues.apache.org/jira/browse/LUCENE-7588
> Project: Lucene - Core
>  Issue Type: Improvement
>Affects Versions: master (7.0), 6.3.1
>Reporter: Emmanuel Keller
>Priority: Minor
>  Labels: facet, faceting
> Fix For: master (7.0), 6.3.1
>
> Attachments: LUCENE-7588.patch
>
>
> Currently the DrillSideways implementation is based on the single-threaded 
> IndexSearcher.search(Query query, Collector results).
> On large document sets, the single-threaded collection can be really slow.
> The ParallelDrillSideways implementation could:
> 1. Use the CollectorManager based method IndexSearcher.search(Query query, 
> CollectorManager collectorManager) to get the benefits of multithreading on 
> index segments,
> 2. Compute each DrillSideways subquery on a single thread.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-Tests-6.x - Build # 601 - Still Unstable

2016-12-15 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-6.x/601/

1 tests failed.
FAILED:  org.apache.solr.cloud.LeaderInitiatedRecoveryOnCommitTest.test

Error Message:
Captured an uncaught exception in thread: Thread[id=3086, 
name=SocketProxy-Response-58341:44485, state=RUNNABLE, 
group=TGRP-LeaderInitiatedRecoveryOnCommitTest]

Stack Trace:
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught 
exception in thread: Thread[id=3086, name=SocketProxy-Response-58341:44485, 
state=RUNNABLE, group=TGRP-LeaderInitiatedRecoveryOnCommitTest]
at 
__randomizedtesting.SeedInfo.seed([978136A83E311B2D:1FD5097290CD76D5]:0)
Caused by: java.lang.RuntimeException: java.net.SocketException: Socket is 
closed
at __randomizedtesting.SeedInfo.seed([978136A83E311B2D]:0)
at 
org.apache.solr.cloud.SocketProxy$Bridge$Pump.run(SocketProxy.java:347)
Caused by: java.net.SocketException: Socket is closed
at java.net.Socket.setSoTimeout(Socket.java:1137)
at 
org.apache.solr.cloud.SocketProxy$Bridge$Pump.run(SocketProxy.java:344)




Build Log:
[...truncated 10995 lines...]
   [junit4] Suite: org.apache.solr.cloud.LeaderInitiatedRecoveryOnCommitTest
   [junit4]   2> Creating dataDir: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-6.x/solr/build/solr-core/test/J1/temp/solr.cloud.LeaderInitiatedRecoveryOnCommitTest_978136A83E311B2D-001/init-core-data-001
   [junit4]   2> 275131 INFO  
(SUITE-LeaderInitiatedRecoveryOnCommitTest-seed#[978136A83E311B2D]-worker) [
] o.a.s.SolrTestCaseJ4 Randomized ssl (false) and clientAuth (false) via: 
@org.apache.solr.SolrTestCaseJ4$SuppressSSL(bugUrl=https://issues.apache.org/jira/browse/SOLR-5776)
   [junit4]   2> 275131 INFO  
(SUITE-LeaderInitiatedRecoveryOnCommitTest-seed#[978136A83E311B2D]-worker) [
] o.a.s.BaseDistributedSearchTestCase Setting hostContext system property: /
   [junit4]   2> 275150 INFO  
(TEST-LeaderInitiatedRecoveryOnCommitTest.test-seed#[978136A83E311B2D]) [] 
o.a.s.c.ZkTestServer STARTING ZK TEST SERVER
   [junit4]   2> 275158 INFO  (Thread-643) [] o.a.s.c.ZkTestServer client 
port:0.0.0.0/0.0.0.0:0
   [junit4]   2> 275158 INFO  (Thread-643) [] o.a.s.c.ZkTestServer Starting 
server
   [junit4]   2> 275257 INFO  
(TEST-LeaderInitiatedRecoveryOnCommitTest.test-seed#[978136A83E311B2D]) [] 
o.a.s.c.ZkTestServer start zk server on port:46421
   [junit4]   2> 275531 INFO  
(TEST-LeaderInitiatedRecoveryOnCommitTest.test-seed#[978136A83E311B2D]) [] 
o.a.s.c.AbstractZkTestCase put 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-6.x/solr/core/src/test-files/solr/collection1/conf/solrconfig-tlog.xml
 to /configs/conf1/solrconfig.xml
   [junit4]   2> 275534 INFO  
(TEST-LeaderInitiatedRecoveryOnCommitTest.test-seed#[978136A83E311B2D]) [] 
o.a.s.c.AbstractZkTestCase put 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-6.x/solr/core/src/test-files/solr/collection1/conf/schema.xml
 to /configs/conf1/schema.xml
   [junit4]   2> 275537 INFO  
(TEST-LeaderInitiatedRecoveryOnCommitTest.test-seed#[978136A83E311B2D]) [] 
o.a.s.c.AbstractZkTestCase put 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-6.x/solr/core/src/test-files/solr/collection1/conf/solrconfig.snippet.randomindexconfig.xml
 to /configs/conf1/solrconfig.snippet.randomindexconfig.xml
   [junit4]   2> 275538 INFO  
(TEST-LeaderInitiatedRecoveryOnCommitTest.test-seed#[978136A83E311B2D]) [] 
o.a.s.c.AbstractZkTestCase put 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-6.x/solr/core/src/test-files/solr/collection1/conf/stopwords.txt
 to /configs/conf1/stopwords.txt
   [junit4]   2> 275540 INFO  
(TEST-LeaderInitiatedRecoveryOnCommitTest.test-seed#[978136A83E311B2D]) [] 
o.a.s.c.AbstractZkTestCase put 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-6.x/solr/core/src/test-files/solr/collection1/conf/protwords.txt
 to /configs/conf1/protwords.txt
   [junit4]   2> 275548 INFO  
(TEST-LeaderInitiatedRecoveryOnCommitTest.test-seed#[978136A83E311B2D]) [] 
o.a.s.c.AbstractZkTestCase put 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-6.x/solr/core/src/test-files/solr/collection1/conf/currency.xml
 to /configs/conf1/currency.xml
   [junit4]   2> 275562 INFO  
(TEST-LeaderInitiatedRecoveryOnCommitTest.test-seed#[978136A83E311B2D]) [] 
o.a.s.c.AbstractZkTestCase put 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-6.x/solr/core/src/test-files/solr/collection1/conf/enumsConfig.xml
 to /configs/conf1/enumsConfig.xml
   [junit4]   2> 275564 INFO  
(TEST-LeaderInitiatedRecoveryOnCommitTest.test-seed#[978136A83E311B2D]) [] 
o.a.s.c.AbstractZkTestCase put 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-6.x/solr/core/src/test-files/solr/collection1/conf/open-exchange-rates.json
 to /configs/conf1/open-exchange-rates.json
   [junit4]   2> 275565 INFO  
(TEST-LeaderInitiatedRecoveryOnCommitTest.test-seed#[978136A83E311B2D]) [] 
o.a.s.c.AbstractZkTestCase put 

[jira] [Commented] (LUCENE-7588) A parallel DrillSideways implementation

2016-12-15 Thread Emmanuel Keller (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7588?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15751561#comment-15751561
 ] 

Emmanuel Keller commented on LUCENE-7588:
-

Hi, I was wondering what is the current process for this kind of patch 
proposal. I suppose there is a review process. Let me know how I can help. 
Thanks.

> A parallel DrillSideways implementation
> ---
>
> Key: LUCENE-7588
> URL: https://issues.apache.org/jira/browse/LUCENE-7588
> Project: Lucene - Core
>  Issue Type: Improvement
>Affects Versions: master (7.0), 6.3.1
>Reporter: Emmanuel Keller
>Priority: Minor
>  Labels: facet, faceting
> Fix For: master (7.0), 6.3.1
>
> Attachments: LUCENE-7588.patch
>
>
> Currently the DrillSideways implementation is based on the single-threaded 
> IndexSearcher.search(Query query, Collector results).
> On large document sets, the single-threaded collection can be really slow.
> The ParallelDrillSideways implementation could:
> 1. Use the CollectorManager based method IndexSearcher.search(Query query, 
> CollectorManager collectorManager) to get the benefits of multithreading on 
> index segments,
> 2. Compute each DrillSideways subquery on a single thread.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7579) Sorting on flushed segment

2016-12-15 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7579?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15751527#comment-15751527
 ] 

Adrien Grand commented on LUCENE-7579:
--

Some questions/comments:
 * CompressingStoredFieldsWriter.sort should always have a 
CompressingStoredFieldsReader as an input, since the codec cannot change in the 
middle of the flush, so I think we should be able to skip the instanceof check?
 * It would personally help me to have comments, e.g. in 
MergeState.maybeSortReaders, that the {{indexSort==null}} case may only happen 
for bwc reasons. Maybe we should also assert that if index sorting is 
configured, then the non-sorted segments can only have 6.2 or 6.3 as a version.

Thanks for working on this change!

> Sorting on flushed segment
> --
>
> Key: LUCENE-7579
> URL: https://issues.apache.org/jira/browse/LUCENE-7579
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Ferenczi Jim
>
> Today flushed segments built by an index writer with an index sort specified 
> are not sorted. The merge is responsible for sorting these segments, 
> potentially with others that are already sorted (resulting from another 
> merge). 
> I'd like to investigate the cost of sorting the segment directly during the 
> flush. This could make the merge faster since there are some cheap 
> optimizations that can be done only if all segments to be merged are sorted.
>  For instance the merge of the points could use the bulk merge instead of 
> rebuilding the points from scratch.
> I made a small prototype which sorts the segment on flush here:
> https://github.com/apache/lucene-solr/compare/master...jimczi:flush_sort
> The idea is simple: for points, norms, docvalues and terms I use the 
> SortingLeafReader implementation to translate the values that we have in RAM 
> into a sorted enumeration for the writers.
> For stored fields I use a two-pass scheme where the documents are first 
> written to disk unsorted and then copied to another file with the correct 
> sorting. I use the same stored field format for the two steps and just remove 
> the file produced by the first pass at the end of the process.
> This prototype has no implementation for index sorting that uses term vectors 
> yet. I'll add this later if the tests are good enough.
> Speaking of testing, I tried this branch on [~mikemccand]'s benchmark scripts 
> and compared master with index sorting against my branch with index sorting 
> on flush. I tried with sparsetaxis and wikipedia and the first results are 
> weird. When I use the SerialScheduler and only one thread to write the docs, 
> index sorting on flush is slower. But when I use two threads the sorting on 
> flush is much faster even with the SerialScheduler. I'll continue to run the 
> tests in order to be able to share something more meaningful.
> The tests are passing except one about concurrent DV updates. I don't know 
> this part at all so I did not fix the test yet. I don't even know if we can 
> make it work with index sorting ;).
> [~mikemccand] I would love to have your feedback about the prototype. Could 
> you please take a look? I am sure there are plenty of bugs, ... but I think 
> it's a good start to evaluate the feasibility of this feature.
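
For readers following along, the index sort that this flush-time sorting 
targets is configured on the writer; a minimal sketch assuming a hypothetical 
"timestamp" doc-values field:

{code}
import java.nio.file.Paths;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.search.Sort;
import org.apache.lucene.search.SortField;
import org.apache.lucene.store.FSDirectory;

public class IndexSortSetup {
  public static void main(String[] args) throws Exception {
    IndexWriterConfig iwc = new IndexWriterConfig(new StandardAnalyzer());
    // Segments are kept in this order. Today the sort is applied on merge;
    // the prototype above applies it at flush time as well.
    iwc.setIndexSort(new Sort(new SortField("timestamp", SortField.Type.LONG)));
    try (IndexWriter writer =
             new IndexWriter(FSDirectory.open(Paths.get("index")), iwc)) {
      // add documents carrying a "timestamp" NumericDocValuesField here
    }
  }
}
{code}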



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-MacOSX (64bit/jdk1.8.0) - Build # 3713 - Unstable!

2016-12-15 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-MacOSX/3713/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseParallelGC

1 tests failed.
FAILED:  
org.apache.solr.update.SoftAutoCommitTest.testSoftAndHardCommitMaxTimeMixedAdds

Error Message:
soft529 wasn't fast enough

Stack Trace:
java.lang.AssertionError: soft529 wasn't fast enough
at 
__randomizedtesting.SeedInfo.seed([5D9E29CB098AE9D5:C4AD04BB8F9D972]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertNotNull(Assert.java:526)
at 
org.apache.solr.update.SoftAutoCommitTest.testSoftAndHardCommitMaxTimeMixedAdds(SoftAutoCommitTest.java:111)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:811)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:462)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)




Build Log:
[...truncated 10668 lines...]
   [junit4] Suite: org.apache.solr.update.SoftAutoCommitTest
   [junit4]   2> Creating dataDir: 

[jira] [Commented] (LUCENE-6989) Implement MMapDirectory unmapping for coming Java 9 changes

2016-12-15 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6989?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15751454#comment-15751454
 ] 

Uwe Schindler commented on LUCENE-6989:
---

To conclude: The RAMUsageEstimator/Tester/whatever that sums up all static 
fields before and after a test no longer works with Java 9.

> Implement MMapDirectory unmapping for coming Java 9 changes
> ---
>
> Key: LUCENE-6989
> URL: https://issues.apache.org/jira/browse/LUCENE-6989
> Project: Lucene - Core
>  Issue Type: Task
>  Components: core/store
>Reporter: Uwe Schindler
>Assignee: Uwe Schindler
>  Labels: Java9
> Fix For: 6.0, 6.4
>
> Attachments: LUCENE-6989-disable5x.patch, 
> LUCENE-6989-disable5x.patch, LUCENE-6989-fixbuild148.patch, 
> LUCENE-6989-v2.patch, LUCENE-6989-v3-post-b148.patch, LUCENE-6989.patch, 
> LUCENE-6989.patch, LUCENE-6989.patch, LUCENE-6989.patch
>
>
> Originally, the sun.misc.Cleaner interface was declared as "critical API" in 
> [JEP 260|http://openjdk.java.net/jeps/260].
> Unfortunately the decision was changed in favor of an officially supported 
> {{java.lang.ref.Cleaner}} API. A side effect of this change is to move all 
> existing {{sun.misc.Cleaner}} APIs into a non-exported package. This causes 
> our forceful unmapping to no longer work: we can get the cleaner 
> instance via reflection, but trying to invoke it will throw one of the new 
> Jigsaw RuntimeExceptions because it is completely inaccessible. This will make 
> our forceful unmapping fail. There are also no changes in the garbage 
> collector; the problem still exists.
> For more information see this [mailing list 
> thread|http://mail.openjdk.java.net/pipermail/core-libs-dev/2016-January/thread.html#38243].
> This change will likely be done, leaving our unmapping efforts no longer 
> working. Alan Bateman is aware of this issue and will open a new issue at 
> OpenJDK to allow forceful unmapping without using the now private 
> sun.misc.Cleaner. The idea is to let the internal class sun.misc.Cleaner 
> implement the Runnable interface, so we can simply cast to Runnable and call 
> the run() method to unmap. The code would then work. This will lead to minor 
> changes in our unmapper in MMapDirectory: an instanceof check and casting if 
> possible.
> I opened this issue to keep track and implement the changes as soon as 
> possible, so people will have working unmapping when Java 9 comes out. 
> Current Lucene versions will no longer work with Java 9.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6989) Implement MMapDirectory unmapping for coming Java 9 changes

2016-12-15 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6989?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15751445#comment-15751445
 ] 

Uwe Schindler commented on LUCENE-6989:
---

Unsafe is not the problem. The problem is RUE!

> Implement MMapDirectory unmapping for coming Java 9 changes
> ---
>
> Key: LUCENE-6989
> URL: https://issues.apache.org/jira/browse/LUCENE-6989
> Project: Lucene - Core
>  Issue Type: Task
>  Components: core/store
>Reporter: Uwe Schindler
>Assignee: Uwe Schindler
>  Labels: Java9
> Fix For: 6.0, 6.4
>
> Attachments: LUCENE-6989-disable5x.patch, 
> LUCENE-6989-disable5x.patch, LUCENE-6989-fixbuild148.patch, 
> LUCENE-6989-v2.patch, LUCENE-6989-v3-post-b148.patch, LUCENE-6989.patch, 
> LUCENE-6989.patch, LUCENE-6989.patch, LUCENE-6989.patch
>
>
> Originally, the sun.misc.Cleaner interface was declared as "critical API" in 
> [JEP 260|http://openjdk.java.net/jeps/260].
> Unfortunately the decision was changed in favor of an officially supported 
> {{java.lang.ref.Cleaner}} API. A side effect of this change is to move all 
> existing {{sun.misc.Cleaner}} APIs into a non-exported package. This causes 
> our forceful unmapping to no longer work: we can get the cleaner 
> instance via reflection, but trying to invoke it will throw one of the new 
> Jigsaw RuntimeExceptions because it is completely inaccessible. This will make 
> our forceful unmapping fail. There are also no changes in the garbage 
> collector; the problem still exists.
> For more information see this [mailing list 
> thread|http://mail.openjdk.java.net/pipermail/core-libs-dev/2016-January/thread.html#38243].
> This change will likely be done, leaving our unmapping efforts no longer 
> working. Alan Bateman is aware of this issue and will open a new issue at 
> OpenJDK to allow forceful unmapping without using the now private 
> sun.misc.Cleaner. The idea is to let the internal class sun.misc.Cleaner 
> implement the Runnable interface, so we can simply cast to Runnable and call 
> the run() method to unmap. The code would then work. This will lead to minor 
> changes in our unmapper in MMapDirectory: an instanceof check and casting if 
> possible.
> I opened this issue to keep track and implement the changes as soon as 
> possible, so people will have working unmapping when Java 9 comes out. 
> Current Lucene versions will no longer work with Java 9.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6989) Implement MMapDirectory unmapping for coming Java 9 changes

2016-12-15 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6989?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15751437#comment-15751437
 ] 

Uwe Schindler commented on LUCENE-6989:
---

I am not sure if you have a clone of RAMUsageEstimator inside RR; this is why I 
asked the question. The RAMUsageEstimator (as it is in Lucene) does not work 
at all with Jigsaw anymore, because you can no longer look into String.class or 
whatever!

> Implement MMapDirectory unmapping for coming Java 9 changes
> ---
>
> Key: LUCENE-6989
> URL: https://issues.apache.org/jira/browse/LUCENE-6989
> Project: Lucene - Core
>  Issue Type: Task
>  Components: core/store
>Reporter: Uwe Schindler
>Assignee: Uwe Schindler
>  Labels: Java9
> Fix For: 6.0, 6.4
>
> Attachments: LUCENE-6989-disable5x.patch, 
> LUCENE-6989-disable5x.patch, LUCENE-6989-fixbuild148.patch, 
> LUCENE-6989-v2.patch, LUCENE-6989-v3-post-b148.patch, LUCENE-6989.patch, 
> LUCENE-6989.patch, LUCENE-6989.patch, LUCENE-6989.patch
>
>
> Originally, the sun.misc.Cleaner interface was declared as "critical API" in 
> [JEP 260|http://openjdk.java.net/jeps/260].
> Unfortunately the decision was changed in favor of an officially supported 
> {{java.lang.ref.Cleaner}} API. A side effect of this change is to move all 
> existing {{sun.misc.Cleaner}} APIs into a non-exported package. This causes 
> our forceful unmapping to no longer work: we can get the cleaner 
> instance via reflection, but trying to invoke it will throw one of the new 
> Jigsaw RuntimeExceptions because it is completely inaccessible. This will make 
> our forceful unmapping fail. There are also no changes in the garbage 
> collector; the problem still exists.
> For more information see this [mailing list 
> thread|http://mail.openjdk.java.net/pipermail/core-libs-dev/2016-January/thread.html#38243].
> This change will likely be done, leaving our unmapping efforts no longer 
> working. Alan Bateman is aware of this issue and will open a new issue at 
> OpenJDK to allow forceful unmapping without using the now private 
> sun.misc.Cleaner. The idea is to let the internal class sun.misc.Cleaner 
> implement the Runnable interface, so we can simply cast to Runnable and call 
> the run() method to unmap. The code would then work. This will lead to minor 
> changes in our unmapper in MMapDirectory: an instanceof check and casting if 
> possible.
> I opened this issue to keep track and implement the changes as soon as 
> possible, so people will have working unmapping when Java 9 comes out. 
> Current Lucene versions will no longer work with Java 9.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (SOLR-9835) Create another replication mode for SolrCloud

2016-12-15 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9835?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar reassigned SOLR-9835:
---

Assignee: Shalin Shekhar Mangar

> Create another replication mode for SolrCloud
> -
>
> Key: SOLR-9835
> URL: https://issues.apache.org/jira/browse/SOLR-9835
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Cao Manh Dat
>Assignee: Shalin Shekhar Mangar
> Attachments: SOLR-9835.patch, SOLR-9835.patch
>
>
> The current replication mechanism of SolrCloud is called state machine: 
> replicas start in the same initial state and, for each input, the input is 
> distributed across replicas so all replicas end up in the same next state. 
> But this type of replication has some drawbacks:
> - The commit (which is costly) has to run on all replicas
> - Slow recovery, because if a replica misses more than N updates during its 
> downtime, it has to download the entire index from its leader.
> So we create another replication mode for SolrCloud called state 
> transfer, which acts like master/slave replication. Basically:
> - The leader distributes the update to other replicas, but only the leader 
> applies the update to the IW; other replicas just store the update in the 
> UpdateLog (acting like replication).
> - Replicas frequently poll the latest segments from the leader.
> Pros:
> - Lightweight indexing, because only the leader runs the commits and 
> updates.
> - Very fast recovery: replicas just have to download the missing segments.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9863) Write some fundamental micro benchmark algorithms for Solr.

2016-12-15 Thread Shalin Shekhar Mangar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9863?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15751398#comment-15751398
 ] 

Shalin Shekhar Mangar commented on SOLR-9863:
-

No, this is Lucidworks internal only at the moment. But we do plan to publish 
it once we have stable hardware running it nightly.

> Write some fundamental micro benchmark algorithms for Solr.
> ---
>
> Key: SOLR-9863
> URL: https://issues.apache.org/jira/browse/SOLR-9863
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mark Miller
> Attachments: indexing.html
>
>
> Once SOLR-2646 is committed it becomes much easier to start looking at 
> tracking some basic performance metrics over time like Lucene does.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-9835) Create another replication mode for SolrCloud

2016-12-15 Thread Cao Manh Dat (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9835?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cao Manh Dat updated SOLR-9835:
---
Attachment: SOLR-9835.patch

Updated patch for this issue:
- When a new tlog is created, it copies all old updates that have not been made 
to its local index.

> Create another replication mode for SolrCloud
> -
>
> Key: SOLR-9835
> URL: https://issues.apache.org/jira/browse/SOLR-9835
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Cao Manh Dat
> Attachments: SOLR-9835.patch, SOLR-9835.patch
>
>
> The current replication mechanism of SolrCloud is called state machine: 
> replicas start in the same initial state and, for each input, the input is 
> distributed across replicas so all replicas end up in the same next state. 
> But this type of replication has some drawbacks:
> - The commit (which is costly) has to run on all replicas
> - Slow recovery, because if a replica misses more than N updates during its 
> downtime, it has to download the entire index from its leader.
> So we create another replication mode for SolrCloud called state 
> transfer, which acts like master/slave replication. Basically:
> - The leader distributes the update to other replicas, but only the leader 
> applies the update to the IW; other replicas just store the update in the 
> UpdateLog (acting like replication).
> - Replicas frequently poll the latest segments from the leader.
> Pros:
> - Lightweight indexing, because only the leader runs the commits and 
> updates.
> - Very fast recovery: replicas just have to download the missing segments.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS-EA] Lucene-Solr-master-Linux (64bit/jdk-9-ea+147) - Build # 18534 - Still Unstable!

2016-12-15 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/18534/
Java: 64bit/jdk-9-ea+147 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  
org.apache.solr.handler.component.SpellCheckComponentTest.testThresholdTokenFrequency

Error Message:
Path not found: /spellcheck/suggestions/[1]/suggestion

Stack Trace:
java.lang.RuntimeException: Path not found: 
/spellcheck/suggestions/[1]/suggestion
at 
__randomizedtesting.SeedInfo.seed([27BF835D8D4CDFDE:AD180CAC02A7E6A5]:0)
at org.apache.solr.SolrTestCaseJ4.assertJQ(SolrTestCaseJ4.java:906)
at org.apache.solr.SolrTestCaseJ4.assertJQ(SolrTestCaseJ4.java:853)
at 
org.apache.solr.handler.component.SpellCheckComponentTest.testThresholdTokenFrequency(SpellCheckComponentTest.java:277)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:538)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:811)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:462)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.base/java.lang.Thread.run(Thread.java:844)




Build Log:
[...truncated 1590 lines...]
   

[jira] [Commented] (LUCENE-7590) Add DocValues statistics helpers

2016-12-15 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7590?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15751275#comment-15751275
 ] 

ASF subversion and git services commented on LUCENE-7590:
-

Commit 2a0814fc34b76d8031938d09e11bedc7f604f543 in lucene-solr's branch 
refs/heads/branch_6x from [~shaie]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=2a0814f ]

LUCENE-7590: add sum, variance and stdev stats to NumericDVStats


> Add DocValues statistics helpers
> 
>
> Key: LUCENE-7590
> URL: https://issues.apache.org/jira/browse/LUCENE-7590
> Project: Lucene - Core
>  Issue Type: New Feature
>  Components: modules/misc
>Reporter: Shai Erera
>Assignee: Shai Erera
> Attachments: LUCENE-7590-2.patch, LUCENE-7590.patch, 
> LUCENE-7590.patch, LUCENE-7590.patch, LUCENE-7590.patch, LUCENE-7590.patch, 
> LUCENE-7590.patch, LUCENE-7590.patch
>
>
> I think it can be useful to have DocValues statistics helpers that allow 
> users to query for the min/max/avg etc. stats of a DV field. In this issue 
> I'd like to cover numeric DVs, but there's no reason not to add it to other 
> DV types too.
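
Absent these helpers, computing the stats by hand over a numeric DV field in 
6.x looks roughly like this (a sketch with a hypothetical "price" field; the 
patch's actual API may differ):

{code}
import java.nio.file.Paths;
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.index.LeafReaderContext;
import org.apache.lucene.index.NumericDocValues;
import org.apache.lucene.store.FSDirectory;
import org.apache.lucene.util.Bits;

public class DvStatsByHand {
  public static void main(String[] args) throws Exception {
    long min = Long.MAX_VALUE, max = Long.MIN_VALUE, sum = 0, count = 0;
    try (DirectoryReader reader =
             DirectoryReader.open(FSDirectory.open(Paths.get("index")))) {
      for (LeafReaderContext ctx : reader.leaves()) {
        NumericDocValues dv = ctx.reader().getNumericDocValues("price");
        if (dv == null) continue;                    // segment has no values
        Bits docsWithField = ctx.reader().getDocsWithField("price");
        for (int doc = 0; doc < ctx.reader().maxDoc(); doc++) {
          if (docsWithField != null && !docsWithField.get(doc)) continue;
          long v = dv.get(doc);                      // random access in 6.x
          min = Math.min(min, v);
          max = Math.max(max, v);
          sum += v;
          count++;
        }
      }
    }
    if (count > 0) {
      System.out.printf("min=%d max=%d avg=%.2f%n", min, max, (double) sum / count);
    }
  }
}
{code}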



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7590) Add DocValues statistics helpers

2016-12-15 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7590?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15751272#comment-15751272
 ] 

ASF subversion and git services commented on LUCENE-7590:
-

Commit 295cab7216ca76debaf4d354409741058a8641a1 in lucene-solr's branch 
refs/heads/master from [~shaie]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=295cab7 ]

LUCENE-7590: add sum, variance and stdev stats to NumericDVStats


> Add DocValues statistics helpers
> 
>
> Key: LUCENE-7590
> URL: https://issues.apache.org/jira/browse/LUCENE-7590
> Project: Lucene - Core
>  Issue Type: New Feature
>  Components: modules/misc
>Reporter: Shai Erera
>Assignee: Shai Erera
> Attachments: LUCENE-7590-2.patch, LUCENE-7590.patch, 
> LUCENE-7590.patch, LUCENE-7590.patch, LUCENE-7590.patch, LUCENE-7590.patch, 
> LUCENE-7590.patch, LUCENE-7590.patch
>
>
> I think it can be useful to have DocValues statistics helpers that allow 
> users to query for the min/max/avg etc. stats of a DV field. In this issue 
> I'd like to cover numeric DVs, but there's no reason not to add it to other 
> DV types too.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-Tests-6.x - Build # 600 - Unstable

2016-12-15 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-6.x/600/

2 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.hdfs.HdfsRecoveryZkTest

Error Message:
ObjectTracker found 1 object(s) that were not released!!! [HdfsTransactionLog] 
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException  at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:43)
  at 
org.apache.solr.update.HdfsTransactionLog.<init>(HdfsTransactionLog.java:130)  
at org.apache.solr.update.HdfsUpdateLog.init(HdfsUpdateLog.java:202)  at 
org.apache.solr.update.UpdateHandler.<init>(UpdateHandler.java:137)  at 
org.apache.solr.update.UpdateHandler.<init>(UpdateHandler.java:94)  at 
org.apache.solr.update.DirectUpdateHandler2.<init>(DirectUpdateHandler2.java:102)
  at sun.reflect.GeneratedConstructorAccessor139.newInstance(Unknown Source)  
at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
  at java.lang.reflect.Constructor.newInstance(Constructor.java:423)  at 
org.apache.solr.core.SolrCore.createInstance(SolrCore.java:704)  at 
org.apache.solr.core.SolrCore.createUpdateHandler(SolrCore.java:766)  at 
org.apache.solr.core.SolrCore.initUpdateHandler(SolrCore.java:1005)  at 
org.apache.solr.core.SolrCore.<init>(SolrCore.java:870)  at 
org.apache.solr.core.SolrCore.<init>(SolrCore.java:774)  at 
org.apache.solr.core.CoreContainer.create(CoreContainer.java:842)  at 
org.apache.solr.core.CoreContainer.lambda$load$0(CoreContainer.java:498)  at 
java.util.concurrent.FutureTask.run(FutureTask.java:266)  at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:229)
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
 at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
 at java.lang.Thread.run(Thread.java:745)  

Stack Trace:
java.lang.AssertionError: ObjectTracker found 1 object(s) that were not 
released!!! [HdfsTransactionLog]
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException
at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:43)
at 
org.apache.solr.update.HdfsTransactionLog.<init>(HdfsTransactionLog.java:130)
at org.apache.solr.update.HdfsUpdateLog.init(HdfsUpdateLog.java:202)
at org.apache.solr.update.UpdateHandler.<init>(UpdateHandler.java:137)
at org.apache.solr.update.UpdateHandler.<init>(UpdateHandler.java:94)
at 
org.apache.solr.update.DirectUpdateHandler2.<init>(DirectUpdateHandler2.java:102)
at sun.reflect.GeneratedConstructorAccessor139.newInstance(Unknown 
Source)
at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.solr.core.SolrCore.createInstance(SolrCore.java:704)
at org.apache.solr.core.SolrCore.createUpdateHandler(SolrCore.java:766)
at org.apache.solr.core.SolrCore.initUpdateHandler(SolrCore.java:1005)
at org.apache.solr.core.SolrCore.<init>(SolrCore.java:870)
at org.apache.solr.core.SolrCore.<init>(SolrCore.java:774)
at org.apache.solr.core.CoreContainer.create(CoreContainer.java:842)
at 
org.apache.solr.core.CoreContainer.lambda$load$0(CoreContainer.java:498)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:229)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)


at __randomizedtesting.SeedInfo.seed([4B7B51A7A6DB8FF2]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertNull(Assert.java:551)
at 
org.apache.solr.SolrTestCaseJ4.teardownTestCases(SolrTestCaseJ4.java:266)
at sun.reflect.GeneratedMethodAccessor32.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:870)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
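
For readers unfamiliar with this failure mode: ObjectReleaseTracker is Solr's
test-time leak detector. Closeable resources register themselves on creation
and deregister on close, and test teardown asserts that nothing is still
registered. A minimal sketch of that track/release pattern, with illustrative
names rather than the Solr source:

import java.util.concurrent.ConcurrentHashMap;

// Illustrative sketch of a release tracker: track() records the allocation
// site, release() forgets it, checkEmpty() reports anything left as a leak.
final class ReleaseTrackerSketch {
  private static final ConcurrentHashMap<Object, Exception> TRACKED = new ConcurrentHashMap<>();

  static boolean track(Object o) {
    TRACKED.put(o, new Exception("allocation site")); // stack trace shows where it was opened
    return true;  // returns true so callers can wrap it in an assert
  }

  static boolean release(Object o) {
    TRACKED.remove(o);
    return true;
  }

  // Returns null when nothing leaked; teardown asserts the result is null.
  static String checkEmpty() {
    if (TRACKED.isEmpty()) return null;
    return "found " + TRACKED.size() + " object(s) that were not released: " + TRACKED.keySet();
  }
}

In the failure above, an HdfsTransactionLog was tracked at construction but
never released before teardown, so the assertNull in
SolrTestCaseJ4.teardownTestCases fails.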

[JENKINS] Lucene-Solr-master-Windows (64bit/jdk1.8.0_112) - Build # 6290 - Still Unstable!

2016-12-15 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/6290/
Java: 64bit/jdk1.8.0_112 -XX:+UseCompressedOops -XX:+UseG1GC

2 tests failed.
FAILED:  org.apache.lucene.replicator.http.HttpReplicatorTest.testBasic

Error Message:
Could not remove the following files (in the order of attempts):
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\replicator\test\J0\temp\lucene.replicator.http.HttpReplicatorTest_5F469629B2DC8AFB-001\httpReplicatorTest-001\2:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\replicator\test\J0\temp\lucene.replicator.http.HttpReplicatorTest_5F469629B2DC8AFB-001\httpReplicatorTest-001\2
 

Stack Trace:
java.io.IOException: Could not remove the following files (in the order of 
attempts):
   
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\replicator\test\J0\temp\lucene.replicator.http.HttpReplicatorTest_5F469629B2DC8AFB-001\httpReplicatorTest-001\2:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\replicator\test\J0\temp\lucene.replicator.http.HttpReplicatorTest_5F469629B2DC8AFB-001\httpReplicatorTest-001\2

at 
__randomizedtesting.SeedInfo.seed([5F469629B2DC8AFB:F4BC8B3C6D000CD5]:0)
at org.apache.lucene.util.IOUtils.rm(IOUtils.java:323)
at 
org.apache.lucene.replicator.PerSessionDirectoryFactory.cleanupSession(PerSessionDirectoryFactory.java:58)
at 
org.apache.lucene.replicator.ReplicationClient.doUpdate(ReplicationClient.java:259)
at 
org.apache.lucene.replicator.ReplicationClient.updateNow(ReplicationClient.java:401)
at 
org.apache.lucene.replicator.http.HttpReplicatorTest.testBasic(HttpReplicatorTest.java:121)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:811)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:462)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
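
The DirectoryNotEmptyException above is the usual Windows failure mode: a file
inside the temp directory is still open, Windows refuses to delete open files,
and so the depth-first removal that IOUtils.rm performs cannot empty the
directory. A minimal sketch of such a recursive delete using only java.nio, as
an illustration of the technique rather than the IOUtils source:

import java.io.IOException;
import java.nio.file.FileVisitResult;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.SimpleFileVisitor;
import java.nio.file.attribute.BasicFileAttributes;

// Deletes a directory tree depth-first: files first, then each directory
// once it is empty. On Windows an open file keeps its parent un-deletable,
// which surfaces as DirectoryNotEmptyException.
final class RecursiveDeleteSketch {
  static void rm(Path root) throws IOException {
    Files.walkFileTree(root, new SimpleFileVisitor<Path>() {
      @Override
      public FileVisitResult visitFile(Path file, BasicFileAttributes attrs) throws IOException {
        Files.delete(file);
        return FileVisitResult.CONTINUE;
      }
      @Override
      public FileVisitResult postVisitDirectory(Path dir, IOException exc) throws IOException {
        if (exc != null) throw exc;
        Files.delete(dir); // throws DirectoryNotEmptyException if a child survived
        return FileVisitResult.CONTINUE;
      }
    });
  }
}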

[JENKINS] Lucene-Solr-master-Linux (64bit/jdk1.8.0_112) - Build # 18533 - Unstable!

2016-12-15 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/18533/
Java: 64bit/jdk1.8.0_112 -XX:+UseCompressedOops -XX:+UseParallelGC

3 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.core.TestCoreDiscovery

Error Message:
ObjectTracker found 5 object(s) that were not released!!! [SolrCore, 
MockDirectoryWrapper, MDCAwareThreadPoolExecutor, MockDirectoryWrapper, 
MockDirectoryWrapper] 
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException  at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:43)
  at org.apache.solr.core.SolrCore.<init>(SolrCore.java:954)  at 
org.apache.solr.core.SolrCore.<init>(SolrCore.java:792)  at 
org.apache.solr.core.CoreContainer.create(CoreContainer.java:868)  at 
org.apache.solr.core.CoreContainer.getCore(CoreContainer.java:1139)  at 
org.apache.solr.core.TestCoreDiscovery.testTooManyTransientCores(TestCoreDiscovery.java:211)
  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)  at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)  
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
  at java.lang.reflect.Method.invoke(Method.java:498)  at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
  at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
  at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
  at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
  at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
  at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
  at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
  at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
  at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
  at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
  at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
  at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
  at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:811)
  at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:462)
  at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
  at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
  at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
  at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
  at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
  at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
  at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
  at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
  at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
  at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
  at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
  at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
  at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
  at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
  at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
  at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
  at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
  at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
  at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
  at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
  at java.lang.Thread.run(Thread.java:745)  
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException  at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:43)
  at 
