[JENKINS] Lucene-Solr-trunk-Linux (32bit/jdk1.8.0_51) - Build # 13953 - Failure!

2015-08-21 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/13953/
Java: 32bit/jdk1.8.0_51 -server -XX:+UseSerialGC

2 tests failed.
FAILED:  org.apache.solr.cloud.CdcrReplicationHandlerTest.doTest

Error Message:
There are still nodes recoverying - waited for 330 seconds

Stack Trace:
java.lang.AssertionError: There are still nodes recoverying - waited for 330 seconds
at __randomizedtesting.SeedInfo.seed([B9FD6ED57C71335F:1EB9D67111CA20E6]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:172)
at org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:133)
at org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:128)
at org.apache.solr.cloud.BaseCdcrDistributedZkTest.waitForRecoveriesToFinish(BaseCdcrDistributedZkTest.java:465)
at org.apache.solr.cloud.BaseCdcrDistributedZkTest.clearSourceCollection(BaseCdcrDistributedZkTest.java:319)
at org.apache.solr.cloud.CdcrReplicationHandlerTest.doTestPartialReplication(CdcrReplicationHandlerTest.java:86)
at org.apache.solr.cloud.CdcrReplicationHandlerTest.doTest(CdcrReplicationHandlerTest.java:51)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:963)
at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:938)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845)
at com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747)
at com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781)
at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at ...

[jira] [Updated] (SOLR-7795) Fold Interval Faceting into Range Faceting

2015-08-21 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/SOLR-7795?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tomás Fernández Löbbe updated SOLR-7795:

Attachment: SOLR-7795.patch

The distributed case was not working correctly yet. Fixed that and added a test 
case. Still need to validate pivot faceting.

 Fold Interval Faceting into Range Faceting
 --

 Key: SOLR-7795
 URL: https://issues.apache.org/jira/browse/SOLR-7795
 Project: Solr
  Issue Type: Task
Reporter: Tomás Fernández Löbbe
 Fix For: 5.3, Trunk

 Attachments: SOLR-7795.patch, SOLR-7795.patch


 Now that range faceting supports a filter and a dv method, and that 
 interval faceting is supported on fields with {{docValues=false}}, I think we 
 should make it so that interval faceting is just a different way of 
 specifying ranges in range faceting, allowing users to indicate specific 
 ranges.
 I propose we use the same syntax for intervals, but under the range 
 parameter family:
 {noformat}
 facet.range=price
 f.price.facet.range.set=[0,10]
 f.price.facet.range.set=(10,100]
 {noformat}
 The counts for those ranges would also come in the response, inside the 
 facet_ranges section. I'm not sure if it's better to include the ranges in 
 the counts section or in a different section (intervals? sets? buckets?). 
 I'm open to suggestions. 
 {code}
 facet_ranges:{
   price:{
 counts:[
   [0,10],3,
   (10,100],2]
}
 }
 {code}
 or…
 {code}
 facet_ranges:{
   price:{
 intervals:[
   [0,10],3,
   (10,100],2]
}
 }
 {code}
 We should support people specifying both things on the same field.
 Once this is done, interval faceting could be deprecated, as all its 
 functionality is now possible through range queries. 
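 For illustration, a request under the proposed syntax could mix the regular 
 range parameters with explicit sets on the same field (hypothetical example; 
 parameter names follow the proposal above, and the response shape is still open):
 {noformat}
 facet=true
 facet.range=price
 f.price.facet.range.start=0
 f.price.facet.range.end=100
 f.price.facet.range.gap=50
 f.price.facet.range.set=[0,10]
 f.price.facet.range.set=(10,100]
 {noformat}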



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-trunk-MacOSX (64bit/jdk1.8.0) - Build # 2658 - Failure!

2015-08-21 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-MacOSX/2658/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

2 tests failed.
FAILED:  org.apache.solr.cloud.CdcrRequestHandlerTest.doTest

Error Message:
expected:<[dis]abled> but was:<[en]abled>

Stack Trace:
org.junit.ComparisonFailure: expected:<[dis]abled> but was:<[en]abled>
at __randomizedtesting.SeedInfo.seed([28644B592B51A1DD:8F20F3FD46EAB264]:0)
at org.junit.Assert.assertEquals(Assert.java:125)
at org.junit.Assert.assertEquals(Assert.java:147)
at org.apache.solr.cloud.BaseCdcrDistributedZkTest.assertState(BaseCdcrDistributedZkTest.java:289)
at org.apache.solr.cloud.CdcrRequestHandlerTest.doTestBufferActions(CdcrRequestHandlerTest.java:138)
at org.apache.solr.cloud.CdcrRequestHandlerTest.doTest(CdcrRequestHandlerTest.java:40)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:963)
at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:938)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845)
at com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747)
at com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781)
at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at ...

[jira] [Comment Edited] (SOLR-7954) Exception while using {!cardinality=1.0}.

2015-08-21 Thread Modassar Ather (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7954?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14706525#comment-14706525
 ] 

Modassar Ather edited comment on SOLR-7954 at 8/21/15 10:35 AM:


The following test method can be used to add data with which the exception can be 
reproduced. Please make the necessary changes: change ZKHOST:ZKPORT to point to an 
available ZooKeeper host, and COLECTION to the available collection.
{noformat}
public void index() throws SolrServerException, IOException {
  CloudSolrClient s = new CloudSolrClient("ZKHOST:ZKPORT");
  int count = 0;
  s.setDefaultCollection("COLECTION");
  List<SolrInputDocument> documents = new ArrayList<SolrInputDocument>();
  for (int i = 1; i <= 100; i++) {
    SolrInputDocument doc = new SolrInputDocument();
    doc.addField("field1", i);
    doc.addField("colid", "val!" + i + "!-" + "ref" + i);
    doc.addField("field", "DATA" + (12345 + i));
    documents.add(doc);
    if ((documents.size() % 1) == 0) { // % 1 is always 0, so each doc is sent immediately
      count = count + 1;
      s.add(documents);
      System.out.println(System.currentTimeMillis() + " - Indexed document # "
          + NumberFormat.getInstance().format(count));
      documents = new ArrayList<SolrInputDocument>();
    }
  }

  System.out.println("Committing.");
  s.commit(true, true);

  System.out.println("Optimizing.");
  s.optimize(true, true, 1);
  s.close();
  System.out.println("Done.");
}
{noformat}


was (Author: modassar):
The following test method can be used to add data with which the exception can be 
reproduced. Please make the necessary changes: change ZKHOST:ZKPORT to point to an 
available ZooKeeper host, and COLECTION to the available collection.
{noformat}
public void index() throws SolrServerException, IOException {
  CloudSolrClient s = new CloudSolrClient("ZKHOST:ZKPORT");
  int count = 0;
  s.setDefaultCollection("COLECTION");
  List<SolrInputDocument> documents = new ArrayList<SolrInputDocument>();
  for (int i = 1; i <= 100; i++) {
    SolrInputDocument doc = new SolrInputDocument();
    doc.addField("field1", i);
    doc.addField("colid", "val!" + i + "!-" + "ref" + i);
    doc.addField("field", "DATA" + (12345 + i));
    documents.add(doc);
    if ((documents.size() % 1) == 0) { // % 1 is always 0, so each doc is sent immediately
      count = count + 1;
      s.add(documents);
      System.out.println(System.currentTimeMillis() + " - Indexed document # "
          + NumberFormat.getInstance().format(count));
      documents = new ArrayList<SolrInputDocument>();
    }
  }

  System.out.println("Committing.");
  s.commit(true, true);

  System.out.println("Optimizing.");
  s.close();
  System.out.println("Done.");
}
{noformat}

 Exception while using {!cardinality=1.0}.
 -

 Key: SOLR-7954
 URL: https://issues.apache.org/jira/browse/SOLR-7954
 Project: Solr
  Issue Type: Bug
Affects Versions: 5.2.1
 Environment: SolrCloud 4 node cluster.
 Ubuntu 12.04
 OS Type 64 bit
Reporter: Modassar Ather

 The following exception is thrown for the query: 
 bq. q=field:query&stats=true&stats.field={!cardinality=1.0}field
 The exception is not seen once the cardinality is set to 0.9 or less.
 The field is docValues enabled and indexed=false. I tried to reproduce the same 
 exception on a non-docValues field but could not.
 ERROR - 2015-08-11 12:24:00.222; [core] org.apache.solr.common.SolrException; null:java.lang.ArrayIndexOutOfBoundsException: 3
 at net.agkn.hll.serialization.BigEndianAscendingWordSerializer.writeWord(BigEndianAscendingWordSerializer.java:152)
 at net.agkn.hll.util.BitVector.getRegisterContents(BitVector.java:247)
 at net.agkn.hll.HLL.toBytes(HLL.java:917)
 at net.agkn.hll.HLL.toBytes(HLL.java:869)
 at org.apache.solr.handler.component.AbstractStatsValues.getStatsValues(StatsValuesFactory.java:348)
 at org.apache.solr.handler.component.StatsComponent.convertToResponse(StatsComponent.java:151)
 at org.apache.solr.handler.component.StatsComponent.process(StatsComponent.java:62)
 at org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:255)
 at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:143)
 at org.apache.solr.core.SolrCore.execute(SolrCore.java:2064)
 at ...

[jira] [Created] (SOLR-7955) Auto create .system collection on first start if it does not exist

2015-08-21 Thread JIRA
Jan Høydahl created SOLR-7955:
-

 Summary: Auto create .system collection on first start if it does 
not exist
 Key: SOLR-7955
 URL: https://issues.apache.org/jira/browse/SOLR-7955
 Project: Solr
  Issue Type: Improvement
Reporter: Jan Høydahl


Why should a user need to create the {{.system}} collection manually? It would 
simplify instructions related to the BLOB store if the user could assume it is 
always there.






Re: VOTE: RC1 release of apache-solr-ref-guide-5.3.pdf

2015-08-21 Thread Jan Høydahl
Comments

General:
* The line wrapping for linked and monospaced text breaks mid-word, e.g. p6 
“Ta-king”. I realize this is desirable for long http links etc., but can we do 
better?
* Long CODE blocks start on one page with only blank lines and continue on the 
next. White space would be better than the grey background.
* The ref guide should not link to SearchHub.org (which now redirects to 
LucidWorks.com) (p225, 263, 264, 266, 440). We could instead refer to 
lucene.apache.org/solr/resources for more info.
* In some places “Windows” is written with a lowercase “w”.

Specific:
* P5: In the “Got Java” chapter, should we start recommending (not requiring) 
Java 8, and update the -version command output to match (since 7 is not supported)?
* P222, 4th paragraph in Overview of Searching in Solr: “The default query 
parser is the DisMax query parser.” This is wrong; the default query parser is 
“lucene”!
* P222, 3rd paragraph from bottom: “Search parameters may also specify a query 
filter.” The common terminology is “filter query”, not “query filter”.
* P223: Is the CNET screenshot properly attributed? It should probably be © CBS 
Interactive Inc. Also, it is old; they have a completely new layout.
* P282: The example for pivot+range uses the localparam {!query=q1} from the 
previous example, but the correct one is {!range=r1}.
* P386: Reformat the code block with new line wrappings for readability.
* P411: The font size of some of the URProcessor links is 11 instead of 10 :)
* P577: The “Errata” URL link links to itself instead of opening a browser to 
cwiki.

-0 to release as is

--
Jan Høydahl, search solution architect
Cominvent AS - www.cominvent.com

 20. aug. 2015 kl. 18.07 skrev Cassandra Targett casstarg...@gmail.com:
 
 Please VOTE to release the following as apache-solr-ref-guide-5.3.pdf
 
 https://dist.apache.org/repos/dist/dev/lucene/solr/ref-guide/apache-solr-ref-guide-5.3-RC1/
 
 $cat apache-solr-ref-guide-5.3.pdf.sha1 
 1255cba4413023e30aff345d30bce33846189975  apache-solr-ref-guide-5.3.pdf
 
 
 
 Here's my +1.
 
 Thanks,
 
 Cassandra
 



[jira] [Commented] (SOLR-7955) Auto create .system collection on first start if it does not exist

2015-08-21 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7955?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14706565#comment-14706565
 ] 

Noble Paul commented on SOLR-7955:
--

Yes, this was the original plan. I was waiting for the feature to settle.

* In a cluster, when collection creation is invoked, check whether the 
{{.system}} collection exists. If not, create the {{.system}} collection with 
shards=1 and replicationFactor=2.
* If the first collection to be created is {{.system}}, then it's fine.
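A minimal SolrJ-style sketch of that check (hypothetical names and APIs; the real 
change would live in the collection-creation path on the server side):

{code}
// Hypothetical sketch only -- not the actual implementation.
// Before creating any other collection, make sure .system exists.
if (!zkStateReader.getClusterState().hasCollection(".system")) {
  CollectionAdminRequest.Create create = new CollectionAdminRequest.Create();
  create.setCollectionName(".system");
  create.setNumShards(1);
  create.setReplicationFactor(2);
  create.process(cloudClient);
}
{code}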

 Auto create .system collection on first start if it does not exist
 --

 Key: SOLR-7955
 URL: https://issues.apache.org/jira/browse/SOLR-7955
 Project: Solr
  Issue Type: Improvement
Reporter: Jan Høydahl

 Why should a user need to create the {{.system}} collection manually? It 
 would simplify instructions related to the BLOB store if the user could assume 
 it is always there.






[jira] [Commented] (SOLR-7949) Thers is a xss issue in plugins/stats page of Admin Web UI.

2015-08-21 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-7949?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14706572#comment-14706572
 ] 

Jan Høydahl commented on SOLR-7949:
---

[~davidchiu] thanks for your bug reports. I don't know whether you do all your 
research in Firebug or whether you download the full Solr source code and build 
it yourself. If the latter, please consider uploading your findings as a patch 
file. See more at https://wiki.apache.org/solr/HowToContribute

 Thers is a xss issue in plugins/stats page of Admin Web UI.
 ---

 Key: SOLR-7949
 URL: https://issues.apache.org/jira/browse/SOLR-7949
 Project: Solr
  Issue Type: Bug
  Components: web gui
Affects Versions: 4.9, 4.10.4, 5.2.1
Reporter: davidchiu
Assignee: Jan Høydahl
 Fix For: Trunk, 5.4, 5.3.1


 Open the Solr Admin Web UI, select a core (such as collection1), click 
 Plugins/Stats, and enter a URL like 
 http://127.0.0.1:8983/solr/#/collection1/plugins/cache?entry=score=<img 
 src=1 onerror=alert(1);> in the browser address bar; you will get an alert box 
 showing "1".
 I changed the following code to resolve this problem.
 The original code:
   for( var i = 0; i < entry_count; i++ )
   {
 $( 'a[data-bean="' + entries[i] + '"]', frame_element )
   .parent().addClass( 'expanded' );
   }
 The changed code:
   for( var i = 0; i < entry_count; i++ )
   {
 $( 'a[data-bean="' + entries[i].esc() + '"]', frame_element )
   .parent().addClass( 'expanded' );
   }






[jira] [Commented] (SOLR-7954) Exception while using {!cardinality=1.0}.

2015-08-21 Thread Modassar Ather (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7954?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14706525#comment-14706525
 ] 

Modassar Ather commented on SOLR-7954:
--

The following test method can be used to add data with which the exception can be 
reproduced. Please make the necessary changes: change ZKHOST:ZKPORT to point to an 
available ZooKeeper host, and COLECTION to the available collection.
{noformat}
public void index() throws SolrServerException, IOException {
  CloudSolrClient s = new CloudSolrClient("ZKHOST:ZKPORT");
  int count = 0;
  s.setDefaultCollection("COLECTION");
  List<SolrInputDocument> documents = new ArrayList<SolrInputDocument>();
  for (int i = 1; i <= 100; i++) {
    SolrInputDocument doc = new SolrInputDocument();
    doc.addField("field1", i);
    doc.addField("colid", "val!" + i + "!-" + "ref" + i);
    doc.addField("field", "DATA" + (12345 + i));
    documents.add(doc);
    if ((documents.size() % 1) == 0) { // % 1 is always 0, so each doc is sent immediately
      count = count + 1;
      s.add(documents);
      System.out.println(System.currentTimeMillis() + " - Indexed document # "
          + NumberFormat.getInstance().format(count));
      documents = new ArrayList<SolrInputDocument>();
    }
  }

  System.out.println("Committing.");
  s.commit(true, true);

  System.out.println("Optimizing.");
  s.close();
  System.out.println("Done.");
}
{noformat}

 Exception while using {!cardinality=1.0}.
 -

 Key: SOLR-7954
 URL: https://issues.apache.org/jira/browse/SOLR-7954
 Project: Solr
  Issue Type: Bug
Affects Versions: 5.2.1
 Environment: SolrCloud 4 node cluster.
 Ubuntu 12.04
 OS Type 64 bit
Reporter: Modassar Ather

 The following exception is thrown for the query: 
 bq. q=field:query&stats=true&stats.field={!cardinality=1.0}field
 The exception is not seen once the cardinality is set to 0.9 or less.
 The field is docValues enabled and indexed=false. I tried to reproduce the same 
 exception on a non-docValues field but could not.
 ERROR - 2015-08-11 12:24:00.222; [core] org.apache.solr.common.SolrException; null:java.lang.ArrayIndexOutOfBoundsException: 3
 at net.agkn.hll.serialization.BigEndianAscendingWordSerializer.writeWord(BigEndianAscendingWordSerializer.java:152)
 at net.agkn.hll.util.BitVector.getRegisterContents(BitVector.java:247)
 at net.agkn.hll.HLL.toBytes(HLL.java:917)
 at net.agkn.hll.HLL.toBytes(HLL.java:869)
 at org.apache.solr.handler.component.AbstractStatsValues.getStatsValues(StatsValuesFactory.java:348)
 at org.apache.solr.handler.component.StatsComponent.convertToResponse(StatsComponent.java:151)
 at org.apache.solr.handler.component.StatsComponent.process(StatsComponent.java:62)
 at org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:255)
 at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:143)
 at org.apache.solr.core.SolrCore.execute(SolrCore.java:2064)
 at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:654)
 at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:450)
 at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:227)
 at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:196)
 at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)
 at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585)
 at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
 at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:577)
 at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:223)
 at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1127)
 at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515)
 at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
 at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1061)
 at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
 at org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:215)
 at org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:110)
 at ...


[JENKINS] Lucene-Solr-trunk-Linux (32bit/jdk1.8.0_51) - Build # 13943 - Still Failing!

2015-08-21 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/13943/
Java: 32bit/jdk1.8.0_51 -server -XX:+UseSerialGC

1 tests failed.
FAILED:  org.apache.solr.cloud.CdcrReplicationHandlerTest.doTest

Error Message:
There are still nodes recoverying - waited for 330 seconds

Stack Trace:
java.lang.AssertionError: There are still nodes recoverying - waited for 330 seconds
at __randomizedtesting.SeedInfo.seed([3D704534180BEC4:A493BCF72C3BAD7D]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:172)
at org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:133)
at org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:128)
at org.apache.solr.cloud.BaseCdcrDistributedZkTest.waitForRecoveriesToFinish(BaseCdcrDistributedZkTest.java:465)
at org.apache.solr.cloud.BaseCdcrDistributedZkTest.clearSourceCollection(BaseCdcrDistributedZkTest.java:319)
at org.apache.solr.cloud.CdcrReplicationHandlerTest.doTestPartialReplication(CdcrReplicationHandlerTest.java:86)
at org.apache.solr.cloud.CdcrReplicationHandlerTest.doTest(CdcrReplicationHandlerTest.java:51)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:963)
at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:938)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845)
at com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747)
at com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781)
at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at ...

Re: Lucene/Solr release notes

2015-08-21 Thread Shalin Shekhar Mangar
I have added some of the interesting changes to the Lucene release
notes. Please feel free to expand/edit/correct.

On Fri, Aug 21, 2015 at 12:01 AM, Noble Paul noble.p...@gmail.com wrote:
 I didn't know what the really important features were, so the Lucene
 release note is just a TODO page. Please pitch in and fill it in.

 https://wiki.apache.org/lucene-java/ReleaseNote53

 On Fri, Aug 21, 2015 at 12:00 AM, Noble Paul noble.p...@gmail.com wrote:
 I’ve made drafts for the Lucene and Solr release notes - please feel
 free to edit or suggest edits:

 Lucene: https://wiki.apache.org/lucene-java/ReleaseNote53

 Solr: http://wiki.apache.org/solr/ReleaseNote53


 --
 -
 Noble Paul



 --
 -
 Noble Paul

 -
 To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
 For additional commands, e-mail: dev-h...@lucene.apache.org




-- 
Regards,
Shalin Shekhar Mangar.




[jira] [Commented] (SOLR-7955) Auto create .system collection on first start if it does not exist

2015-08-21 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-7955?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14706548#comment-14706548
 ] 

Jan Høydahl commented on SOLR-7955:
---

The reference guide page https://cwiki.apache.org/confluence/display/solr/Blob+Store+API 
currently requires you to create the collection manually. If instead the Overseer 
created it on startup with shards=1 and replicationFactor=2, the ref guide could 
document that if you require more shards or replicas you should modify or 
recreate the collection manually.

 Auto create .system collection on first start if it does not exist
 --

 Key: SOLR-7955
 URL: https://issues.apache.org/jira/browse/SOLR-7955
 Project: Solr
  Issue Type: Improvement
Reporter: Jan Høydahl

 Why should a user need to create the {{.system}} collection manually? It 
 would simplify instructions related to BLOB store if user could assume it is 
 always there.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)




[jira] [Commented] (LUCENE-6756) Give MatchAllDocsQuery a dedicated BulkScorer

2015-08-21 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14706714#comment-14706714
 ] 

Robert Muir commented on LUCENE-6756:
-

Can you check that the specialization does not make hotspot crazy? AFAIK it's 
already crazy around this stuff...

 Give MatchAllDocsQuery a dedicated BulkScorer
 -

 Key: LUCENE-6756
 URL: https://issues.apache.org/jira/browse/LUCENE-6756
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Adrien Grand
Assignee: Adrien Grand
Priority: Minor
 Attachments: LUCENE-6756.patch, MABench.java


 MatchAllDocsQuery currently uses the default BulkScorer, which creates a 
 Scorer and iterates over matching doc IDs up to NO_MORE_DOCS. I tried to 
 build a dedicated BulkScorer, which seemed to help remove abstractions as it 
 helped improve throughput by a ~2x factor with simple collectors.






[jira] [Updated] (SOLR-7734) MapReduce Indexer can error when using collection

2015-08-21 Thread Mike Drob (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7734?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Drob updated SOLR-7734:

Attachment: SOLR-7734.branch5x.patch

Attached is an addendum patch for branch_5x to be applied on top of the 
original patch.

 MapReduce Indexer can error when using collection
 -

 Key: SOLR-7734
 URL: https://issues.apache.org/jira/browse/SOLR-7734
 Project: Solr
  Issue Type: Bug
  Components: contrib - MapReduce
Affects Versions: 5.2.1
Reporter: Mike Drob
Assignee: Gregory Chanan
 Fix For: Trunk, 5.4

 Attachments: SOLR-7734.branch5x.patch, SOLR-7734.patch, 
 SOLR-7734.patch, SOLR-7734.patch, SOLR-7734.patch, SOLR-7734.patch, 
 SOLR-7734.patch


 When running the MapReduceIndexerTool, it will usually pull a 
 {{solrconfig.xml}} from ZK for the collection that it is running against. 
 This can be problematic for several reasons:
 * Performance: The configuration in ZK will likely have several query 
 handlers, and lots of other components that don't make sense in an 
 indexing-only use of EmbeddedSolrServer (ESS).
 * Classpath Resources: If the Solr services are using some kind of additional 
 service (such as Sentry for auth) then the indexer will not have access to 
 the necessary configurations without the user jumping through several hoops.
 * Distinct Configuration Needs: Enabling Soft Commits on the ESS doesn't make 
 sense. There's other configurations that 
 * Update Chain Behaviours: I'm under the impression that UpdateChains may 
 behave differently in ESS than a SolrCloud cluster. Is it safe to depend on 
 consistent behaviour here?






Re: Ant/JUnit/Bash bug with RTL languages?

2015-08-21 Thread Dawid Weiss
https://github.com/carrotsearch/randomizedtesting/blob/master/junit4-ant/src/main/java/com/carrotsearch/ant/tasks/junit4/JUnit4.java#L137-L159

I think it's correct in the source. Must be an issue with the console.

Dawid


On Fri, Aug 21, 2015 at 4:22 PM, Robert Muir rcm...@gmail.com wrote:
 Yes exactly, Benson is correct. Look at an Arabic text file with cat
 on your same console and see if it's correct: usually it's not.

 On Fri, Aug 21, 2015 at 10:13 AM, Benson Margulies
 bimargul...@gmail.com wrote:
 Isn't this all about how your console does the Unicode bidi algo, and
 not about anything in the code?


 On Fri, Aug 21, 2015 at 10:12 AM, Mike Drob mad...@cloudera.com wrote:
 Hello,

 I noticed that when running tests, if the language selected is RTL then the
 JUnit says hello output is backwards. However, if I copy the output and
 try to paste it into firefox or gedit then the text is properly
 right-to-left.

 For example, when selecting hebrew, on my system it prints
 JUnit4 says [shin-lamed-vav-mem] instead of starting with [shin] on the
 right.

 This shouldn't be a high priority, since the tests themselves still pass,
 but I was wondering if that's something that we can fix or if the error is
 in a lower level - like the junit libs, or maybe even bash. Anybody have any
 ideas?

 Mike




[jira] [Updated] (SOLR-7890) By default require admin rights to access /security.json in ZK

2015-08-21 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/SOLR-7890?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl updated SOLR-7890:
--
Attachment: SOLR-7890.patch

First patch with tests that succeed. It requires the solr backend credentials 
for ZK in order to show content in the ZK tree browser for the protected nodes 
(configurable).

If a non-backend user tries to access it, the node will still be visible, but 
{{*** ZNODE DATA PROTECTED ***}} will be displayed in place of its content.

 By default require admin rights to access /security.json in ZK
 --

 Key: SOLR-7890
 URL: https://issues.apache.org/jira/browse/SOLR-7890
 Project: Solr
  Issue Type: Sub-task
  Components: security
Reporter: Jan Høydahl
 Fix For: Trunk

 Attachments: SOLR-7890.patch


 Perhaps {{VMParamsAllAndReadonlyDigestZkACLProvider}} should by default 
 require admin access for read/write of {{/security.json}}, and other 
 sensitive paths. Today this is left to the user to implement.
 Also, perhaps factor out the already-known sensitive paths into a separate 
 class, so that various {{ACLProvider}} implementations can get a list of 
 paths that should be admin-only, read-only etc from one central place. Then 
 3rd party impls pulling ZK creds from elsewhere will still do the right thing 
 in the future if we introduce other sensitive Znodes...
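The factored-out central class proposed above could be sketched as follows; this is a hedged illustration with assumed class and method names, not Solr's actual API, and only {{/security.json}} is taken from the issue text:

```java
import java.util.Arrays;
import java.util.List;

// Hypothetical central registry of sensitive ZK paths, so every
// ACLProvider implementation can agree on which znodes need admin ACLs.
public class SecurityNodes {
    // Paths that should require admin credentials by default.
    static final List<String> ADMIN_ONLY = Arrays.asList("/security.json");

    static boolean isAdminOnly(String zkPath) {
        // Match the path itself or anything underneath it.
        return ADMIN_ONLY.stream().anyMatch(
                p -> zkPath.equals(p) || zkPath.startsWith(p + "/"));
    }
}
```

With this, a third-party ACL provider would call one predicate instead of hard-coding its own path list.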






[jira] [Assigned] (SOLR-7890) By default require admin rights to access /security.json in ZK

2015-08-21 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/SOLR-7890?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl reassigned SOLR-7890:
-

Assignee: Jan Høydahl

 By default require admin rights to access /security.json in ZK
 --

 Key: SOLR-7890
 URL: https://issues.apache.org/jira/browse/SOLR-7890
 Project: Solr
  Issue Type: Sub-task
  Components: security
Reporter: Jan Høydahl
Assignee: Jan Høydahl
 Fix For: Trunk

 Attachments: SOLR-7890.patch


 Perhaps {{VMParamsAllAndReadonlyDigestZkACLProvider}} should by default 
 require admin access for read/write of {{/security.json}}, and other 
 sensitive paths. Today this is left to the user to implement.
 Also, perhaps factor out the already-known sensitive paths into a separate 
 class, so that various {{ACLProvider}} implementations can get a list of 
 paths that should be admin-only, read-only etc from one central place. Then 
 3rd party impls pulling ZK creds from elsewhere will still do the right thing 
 in the future if we introduce other sensitive Znodes...






Re: Ant/JUnit/Bash bug with RTL languages?

2015-08-21 Thread Benson Margulies
Isn't this all about how your console does the Unicode bidi algo, and
not about anything in the code?


On Fri, Aug 21, 2015 at 10:12 AM, Mike Drob mad...@cloudera.com wrote:
 Hello,

 I noticed that when running tests, if the language selected is RTL then the
 JUnit says hello output is backwards. However, if I copy the output and
 try to paste it into firefox or gedit then the text is properly
 right-to-left.

 For example, when selecting hebrew, on my system it prints
 JUnit4 says [shin-lamed-vav-mem] instead of starting with [shin] on the
 right.

 This shouldn't be a high priority, since the tests themselves still pass,
 but I was wondering if that's something that we can fix or if the error is
 in a lower level - like the junit libs, or maybe even bash. Anybody have any
 ideas?

 Mike




Re: Ant/JUnit/Bash bug with RTL languages?

2015-08-21 Thread Benson Margulies
On Fri, Aug 21, 2015 at 10:18 AM, Mike Drob mad...@cloudera.com wrote:
 Yea, I'm fully ready to hear that it's an issue with bash. Didn't mean to
 cast any aspersions on the test framework, mostly was curious if anybody had
 ever thought about this before.

Not bash, X windows or whatever implements your actual UI output :-)


 On Fri, Aug 21, 2015 at 9:13 AM, Benson Margulies bimargul...@gmail.com
 wrote:

 Isn't this all about how your console does the Unicode bidi algo, and
 not about anything in the code?


 On Fri, Aug 21, 2015 at 10:12 AM, Mike Drob mad...@cloudera.com wrote:
  Hello,
 
  I noticed that when running tests, if the language selected is RTL then
  the
  JUnit says hello output is backwards. However, if I copy the output
  and
  try to paste it into firefox or gedit then the text is properly
  right-to-left.
 
  For example, when selecting hebrew, on my system it prints
  JUnit4 says [shin-lamed-vav-mem] instead of starting with [shin] on
  the
  right.
 
  This shouldn't be a high priority, since the tests themselves still
  pass,
  but I was wondering if that's something that we can fix or if the error
  is
  in a lower level - like the junit libs, or maybe even bash. Anybody have
  any
  ideas?
 
  Mike




[jira] [Commented] (SOLR-7569) Create an API to force a leader election between nodes

2015-08-21 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7569?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14706723#comment-14706723
 ] 

Mark Miller commented on SOLR-7569:
---

I don't really like the idea of choosing a leader. It seems to me this feature 
should force a new election and address the state that prevents someone from 
becoming leader somehow. You still want the sync stage and the system to pick 
the best leader though. This should just get you out of the state that is 
preventing a leader from being elected.

 Create an API to force a leader election between nodes
 --

 Key: SOLR-7569
 URL: https://issues.apache.org/jira/browse/SOLR-7569
 Project: Solr
  Issue Type: New Feature
  Components: SolrCloud
Reporter: Shalin Shekhar Mangar
  Labels: difficulty-medium, impact-high
 Attachments: SOLR-7569.patch, SOLR-7569.patch


 There are many reasons why Solr will not elect a leader for a shard e.g. all 
 replicas' last published state was recovery or due to bugs which cause a 
 leader to be marked as 'down'. While the best solution is that they never get 
 into this state, we need a manual way to fix this when it does get into this  
 state. Right now we can do a complicated dance involving bouncing the node 
 (since recovery paths between bouncing and REQUESTRECOVERY are different), 
 but that is difficult when running a large cluster. Although such a manual 
 API may lead to some data loss, in some cases it is the only option to 
 restore availability.
 This issue proposes to build a new collection API which can be used to force 
 replicas into recovering a leader while avoiding data loss on a best effort 
 basis.






Re: [JENKINS] Lucene-Solr-5.x-MacOSX (64bit/jdk1.7.0) - Build # 2604 - Failure!

2015-08-21 Thread Michael McCandless
Hmm, fun, this is a new test I just added, and I thought only RAMDir
was buggy ... I'll dig.

Mike McCandless

http://blog.mikemccandless.com


On Fri, Aug 21, 2015 at 3:52 AM, Policeman Jenkins Server
jenk...@thetaphi.de wrote:
 Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-MacOSX/2604/
 Java: 64bit/jdk1.7.0 -XX:+UseCompressedOops -XX:+UseParallelGC

 1 tests failed.
 FAILED:  org.apache.lucene.store.TestMultiMMap.testCloneThreadSafety

 Error Message:
 Captured an uncaught exception in thread: Thread[id=587, name=Thread-428, 
 state=RUNNABLE, group=TGRP-TestMultiMMap]

 Stack Trace:
 com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an 
 uncaught exception in thread: Thread[id=587, name=Thread-428, state=RUNNABLE, 
 group=TGRP-TestMultiMMap]
 at 
 __randomizedtesting.SeedInfo.seed([F4E5EB2F2F531D5D:7603B17ACE521C97]:0)
 Caused by: java.lang.AssertionError: java.io.EOFException: seek past EOF: 
 MMapIndexInput(path=/Users/jenkins/workspace/Lucene-Solr-5.x-MacOSX/lucene/build/core/test/J1/temp/lucene.store.TestMultiMMap_F4E5EB2F2F531D5D-001/tempDir-005/randombytes)
 at __randomizedtesting.SeedInfo.seed([F4E5EB2F2F531D5D]:0)
 at 
 org.apache.lucene.store.ByteBufferIndexInput.clone(ByteBufferIndexInput.java:259)
 at 
 org.apache.lucene.store.ByteBufferIndexInput$MultiBufferImpl.clone(ByteBufferIndexInput.java:487)
 at 
 org.apache.lucene.store.BaseDirectoryTestCase$1.run(BaseDirectoryTestCase.java:1200)
 Caused by: java.io.EOFException: seek past EOF: 
 MMapIndexInput(path=/Users/jenkins/workspace/Lucene-Solr-5.x-MacOSX/lucene/build/core/test/J1/temp/lucene.store.TestMultiMMap_F4E5EB2F2F531D5D-001/tempDir-005/randombytes)
 at 
 org.apache.lucene.store.ByteBufferIndexInput.seek(ByteBufferIndexInput.java:174)
 at 
 org.apache.lucene.store.ByteBufferIndexInput$MultiBufferImpl.seek(ByteBufferIndexInput.java:504)
 at 
 org.apache.lucene.store.ByteBufferIndexInput.clone(ByteBufferIndexInput.java:257)
 ... 2 more




 Build Log:
 [...truncated 1150 lines...]
[junit4] Suite: org.apache.lucene.store.TestMultiMMap
[junit4]   2 aug 21, 2015 9:50:09 AM 
 com.carrotsearch.randomizedtesting.RandomizedRunner$QueueUncaughtExceptionsHandler
  uncaughtException
[junit4]   2 WARNING: Uncaught exception in thread: 
 Thread[Thread-428,5,TGRP-TestMultiMMap]
[junit4]   2 java.lang.AssertionError: java.io.EOFException: seek past 
 EOF: 
 MMapIndexInput(path=/Users/jenkins/workspace/Lucene-Solr-5.x-MacOSX/lucene/build/core/test/J1/temp/lucene.store.TestMultiMMap_F4E5EB2F2F531D5D-001/tempDir-005/randombytes)
[junit4]   2at 
 __randomizedtesting.SeedInfo.seed([F4E5EB2F2F531D5D]:0)
[junit4]   2at 
 org.apache.lucene.store.ByteBufferIndexInput.clone(ByteBufferIndexInput.java:259)
[junit4]   2at 
 org.apache.lucene.store.ByteBufferIndexInput$MultiBufferImpl.clone(ByteBufferIndexInput.java:487)
[junit4]   2at 
 org.apache.lucene.store.BaseDirectoryTestCase$1.run(BaseDirectoryTestCase.java:1200)
[junit4]   2 Caused by: java.io.EOFException: seek past EOF: 
 MMapIndexInput(path=/Users/jenkins/workspace/Lucene-Solr-5.x-MacOSX/lucene/build/core/test/J1/temp/lucene.store.TestMultiMMap_F4E5EB2F2F531D5D-001/tempDir-005/randombytes)
[junit4]   2at 
 org.apache.lucene.store.ByteBufferIndexInput.seek(ByteBufferIndexInput.java:174)
[junit4]   2at 
 org.apache.lucene.store.ByteBufferIndexInput$MultiBufferImpl.seek(ByteBufferIndexInput.java:504)
[junit4]   2at 
 org.apache.lucene.store.ByteBufferIndexInput.clone(ByteBufferIndexInput.java:257)
[junit4]   2... 2 more
[junit4]   2
[junit4]   2 NOTE: reproduce with: ant test  -Dtestcase=TestMultiMMap 
 -Dtests.method=testCloneThreadSafety -Dtests.seed=F4E5EB2F2F531D5D 
 -Dtests.slow=true -Dtests.locale=da -Dtests.timezone=Europe/Zurich 
 -Dtests.asserts=true -Dtests.file.encoding=UTF-8
[junit4] ERROR   0.42s J1 | TestMultiMMap.testCloneThreadSafety 
[junit4] Throwable #1: 
 com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an 
 uncaught exception in thread: Thread[id=587, name=Thread-428, state=RUNNABLE, 
 group=TGRP-TestMultiMMap]
[junit4]at 
 __randomizedtesting.SeedInfo.seed([F4E5EB2F2F531D5D:7603B17ACE521C97]:0)
[junit4] Caused by: java.lang.AssertionError: java.io.EOFException: 
 seek past EOF: 
 MMapIndexInput(path=/Users/jenkins/workspace/Lucene-Solr-5.x-MacOSX/lucene/build/core/test/J1/temp/lucene.store.TestMultiMMap_F4E5EB2F2F531D5D-001/tempDir-005/randombytes)
[junit4]at 
 __randomizedtesting.SeedInfo.seed([F4E5EB2F2F531D5D]:0)
[junit4]at 
 org.apache.lucene.store.ByteBufferIndexInput.clone(ByteBufferIndexInput.java:259)
[junit4]at 
 org.apache.lucene.store.ByteBufferIndexInput$MultiBufferImpl.clone(ByteBufferIndexInput.java:487)
[junit4]at 
 

Re: [JENKINS] Lucene-Solr-5.x-MacOSX (64bit/jdk1.7.0) - Build # 2604 - Failure!

2015-08-21 Thread Michael McCandless
OK I reopened https://issues.apache.org/jira/browse/LUCENE-6745 for this ...

Mike McCandless

http://blog.mikemccandless.com


On Fri, Aug 21, 2015 at 9:51 AM, Michael McCandless
luc...@mikemccandless.com wrote:
 Hmm, fun, this is a new test I just added, and I thought only RAMDir
 was buggy ... I'll dig.

 Mike McCandless

 http://blog.mikemccandless.com


 On Fri, Aug 21, 2015 at 3:52 AM, Policeman Jenkins Server
 jenk...@thetaphi.de wrote:
 Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-MacOSX/2604/
 Java: 64bit/jdk1.7.0 -XX:+UseCompressedOops -XX:+UseParallelGC

 1 tests failed.
 FAILED:  org.apache.lucene.store.TestMultiMMap.testCloneThreadSafety

 Error Message:
 Captured an uncaught exception in thread: Thread[id=587, name=Thread-428, 
 state=RUNNABLE, group=TGRP-TestMultiMMap]

 Stack Trace:
 com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an 
 uncaught exception in thread: Thread[id=587, name=Thread-428, 
 state=RUNNABLE, group=TGRP-TestMultiMMap]
 at 
 __randomizedtesting.SeedInfo.seed([F4E5EB2F2F531D5D:7603B17ACE521C97]:0)
 Caused by: java.lang.AssertionError: java.io.EOFException: seek past EOF: 
 MMapIndexInput(path=/Users/jenkins/workspace/Lucene-Solr-5.x-MacOSX/lucene/build/core/test/J1/temp/lucene.store.TestMultiMMap_F4E5EB2F2F531D5D-001/tempDir-005/randombytes)
 at __randomizedtesting.SeedInfo.seed([F4E5EB2F2F531D5D]:0)
 at 
 org.apache.lucene.store.ByteBufferIndexInput.clone(ByteBufferIndexInput.java:259)
 at 
 org.apache.lucene.store.ByteBufferIndexInput$MultiBufferImpl.clone(ByteBufferIndexInput.java:487)
 at 
 org.apache.lucene.store.BaseDirectoryTestCase$1.run(BaseDirectoryTestCase.java:1200)
 Caused by: java.io.EOFException: seek past EOF: 
 MMapIndexInput(path=/Users/jenkins/workspace/Lucene-Solr-5.x-MacOSX/lucene/build/core/test/J1/temp/lucene.store.TestMultiMMap_F4E5EB2F2F531D5D-001/tempDir-005/randombytes)
 at 
 org.apache.lucene.store.ByteBufferIndexInput.seek(ByteBufferIndexInput.java:174)
 at 
 org.apache.lucene.store.ByteBufferIndexInput$MultiBufferImpl.seek(ByteBufferIndexInput.java:504)
 at 
 org.apache.lucene.store.ByteBufferIndexInput.clone(ByteBufferIndexInput.java:257)
 ... 2 more




 Build Log:
 [...truncated 1150 lines...]
[junit4] Suite: org.apache.lucene.store.TestMultiMMap
[junit4]   2 aug 21, 2015 9:50:09 AM 
 com.carrotsearch.randomizedtesting.RandomizedRunner$QueueUncaughtExceptionsHandler
  uncaughtException
[junit4]   2 WARNING: Uncaught exception in thread: 
 Thread[Thread-428,5,TGRP-TestMultiMMap]
[junit4]   2 java.lang.AssertionError: java.io.EOFException: seek past 
 EOF: 
 MMapIndexInput(path=/Users/jenkins/workspace/Lucene-Solr-5.x-MacOSX/lucene/build/core/test/J1/temp/lucene.store.TestMultiMMap_F4E5EB2F2F531D5D-001/tempDir-005/randombytes)
[junit4]   2at 
 __randomizedtesting.SeedInfo.seed([F4E5EB2F2F531D5D]:0)
[junit4]   2at 
 org.apache.lucene.store.ByteBufferIndexInput.clone(ByteBufferIndexInput.java:259)
[junit4]   2at 
 org.apache.lucene.store.ByteBufferIndexInput$MultiBufferImpl.clone(ByteBufferIndexInput.java:487)
[junit4]   2at 
 org.apache.lucene.store.BaseDirectoryTestCase$1.run(BaseDirectoryTestCase.java:1200)
[junit4]   2 Caused by: java.io.EOFException: seek past EOF: 
 MMapIndexInput(path=/Users/jenkins/workspace/Lucene-Solr-5.x-MacOSX/lucene/build/core/test/J1/temp/lucene.store.TestMultiMMap_F4E5EB2F2F531D5D-001/tempDir-005/randombytes)
[junit4]   2at 
 org.apache.lucene.store.ByteBufferIndexInput.seek(ByteBufferIndexInput.java:174)
[junit4]   2at 
 org.apache.lucene.store.ByteBufferIndexInput$MultiBufferImpl.seek(ByteBufferIndexInput.java:504)
[junit4]   2at 
 org.apache.lucene.store.ByteBufferIndexInput.clone(ByteBufferIndexInput.java:257)
[junit4]   2... 2 more
[junit4]   2
[junit4]   2 NOTE: reproduce with: ant test  -Dtestcase=TestMultiMMap 
 -Dtests.method=testCloneThreadSafety -Dtests.seed=F4E5EB2F2F531D5D 
 -Dtests.slow=true -Dtests.locale=da -Dtests.timezone=Europe/Zurich 
 -Dtests.asserts=true -Dtests.file.encoding=UTF-8
[junit4] ERROR   0.42s J1 | TestMultiMMap.testCloneThreadSafety 
[junit4] Throwable #1: 
 com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an 
 uncaught exception in thread: Thread[id=587, name=Thread-428, 
 state=RUNNABLE, group=TGRP-TestMultiMMap]
[junit4]at 
 __randomizedtesting.SeedInfo.seed([F4E5EB2F2F531D5D:7603B17ACE521C97]:0)
[junit4] Caused by: java.lang.AssertionError: java.io.EOFException: 
 seek past EOF: 
 MMapIndexInput(path=/Users/jenkins/workspace/Lucene-Solr-5.x-MacOSX/lucene/build/core/test/J1/temp/lucene.store.TestMultiMMap_F4E5EB2F2F531D5D-001/tempDir-005/randombytes)
[junit4]at 
 __randomizedtesting.SeedInfo.seed([F4E5EB2F2F531D5D]:0)
[junit4]at 
 

[jira] [Commented] (LUCENE-6745) RAMInputStream.clone is not thread safe

2015-08-21 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6745?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14706779#comment-14706779
 ] 

Robert Muir commented on LUCENE-6745:
-

{quote}
Instead, I think the bug is in BKDDVFormat, because it's doing real stuff 
with the original IndexInput it opened, instead of always using clones to do so 
...
{quote}

In my opinion the API is trappy here. There are two ways i see this currently 
done today:
* codec opens an immutable IndexInput and clone()s off that. It's OK without sync 
because it does not modify the original.
* codec protects clone() with sync.

Long term, maybe we should explore enforcing the first one with the API to 
prevent crazy stuff. In other words, Directory would return a Handle or 
Descriptor, that represents the actual FD opened. And it really has no 
conceptual state, you cannot really do anything with it except close it and get 
IndexInputs from it.

I do hate the additional abstraction; however, the more I think about it, the 
more I think it's better: it would just be representing what actually happens 
today, but without the thread safety bugs of using IndexInput for both purposes.
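The first convention above can be sketched without any Lucene dependency; here Lucene's IndexInput is replaced by a toy Input class so the example is self-contained (all names are illustrative, not Lucene's API):

```java
// Clone-per-thread pattern: the opened original is never read directly,
// so clone() needs no synchronization, because the backing data is
// immutable and each clone carries its own read position.
public class ClonePerThread {
    static class Input implements Cloneable {
        final byte[] data;   // shared, immutable
        int pos;             // per-instance mutable state
        Input(byte[] data) { this.data = data; }
        byte readByte() { return data[pos++]; }
        @Override public Input clone() { return new Input(data); }
    }

    public static void main(String[] args) throws Exception {
        Input original = new Input(new byte[] {1, 2, 3, 4});
        Runnable reader = () -> {
            Input in = original.clone(); // clone; never touch the original
            int sum = 0;
            for (int i = 0; i < 4; i++) sum += in.readByte();
            if (sum != 10) throw new AssertionError("bad read");
        };
        Thread t1 = new Thread(reader), t2 = new Thread(reader);
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println("ok");
    }
}
```

A Handle/Descriptor abstraction would make this convention the only one the type system allows: the descriptor has no position at all, so there is nothing to race on.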

 RAMInputStream.clone is not thread safe
 ---

 Key: LUCENE-6745
 URL: https://issues.apache.org/jira/browse/LUCENE-6745
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Michael McCandless
Assignee: Michael McCandless
 Fix For: Trunk, 5.4

 Attachments: LUCENE-6745.patch


 This took some time to track down ... it's the root cause of the RangeTree 
 failures that [~steve_rowe] found at 
 https://issues.apache.org/jira/browse/LUCENE-6697?focusedCommentId=14696999&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14696999
 The problem happens when one thread is using the original IndexInput 
 (RAMInputStream) from a RAMDirectory, but other threads are also cloning  
 that IndexInput at the same time.






Re: Ant/JUnit/Bash bug with RTL languages?

2015-08-21 Thread Robert Muir
Yes exactly, Benson is correct. Look at an Arabic text file with cat
on your same console and see if it's correct: usually it's not.

On Fri, Aug 21, 2015 at 10:13 AM, Benson Margulies
bimargul...@gmail.com wrote:
 Isn't this all about how your console does the Unicode bidi algo, and
 not about anything in the code?


 On Fri, Aug 21, 2015 at 10:12 AM, Mike Drob mad...@cloudera.com wrote:
 Hello,

 I noticed that when running tests, if the language selected is RTL then the
 JUnit says hello output is backwards. However, if I copy the output and
 try to paste it into firefox or gedit then the text is properly
 right-to-left.

 For example, when selecting hebrew, on my system it prints
 JUnit4 says [shin-lamed-vav-mem] instead of starting with [shin] on the
 right.

 This shouldn't be a high priority, since the tests themselves still pass,
 but I was wondering if that's something that we can fix or if the error is
 in a lower level - like the junit libs, or maybe even bash. Anybody have any
 ideas?

 Mike




[jira] [Created] (LUCENE-6756) Give MatchAllDocsQuery a dedicated BulkScorer

2015-08-21 Thread Adrien Grand (JIRA)
Adrien Grand created LUCENE-6756:


 Summary: Give MatchAllDocsQuery a dedicated BulkScorer
 Key: LUCENE-6756
 URL: https://issues.apache.org/jira/browse/LUCENE-6756
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Adrien Grand
Assignee: Adrien Grand
Priority: Minor


MatchAllDocsQuery currently uses the default BulkScorer, which creates a Scorer 
and iterates over matching doc IDs up to NO_MORE_DOCS. I tried to build a 
dedicated BulkScorer, which seemed to help remove abstractions as it helped 
improve throughput by a ~2x factor with simple collectors.
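The idea can be illustrated without Lucene's actual BulkScorer/LeafCollector classes; this is a simplified, self-contained sketch (interface and names are stand-ins, not the Lucene API):

```java
// A dedicated match-all bulk scorer is just a tight loop over doc ids:
// no Scorer object, no iterator advancing to NO_MORE_DOCS, no per-doc
// virtual calls beyond collect() itself.
public class MatchAllSketch {
    interface Collector { void collect(int doc); }

    // Collect every doc in [0, maxDoc) directly.
    static void scoreAll(int maxDoc, Collector collector) {
        for (int doc = 0; doc < maxDoc; doc++) {
            collector.collect(doc);
        }
    }

    public static void main(String[] args) {
        int[] count = new int[1];
        scoreAll(5, doc -> count[0]++);
        System.out.println(count[0]); // prints "5"
    }
}
```

Removing the iterator abstraction is what lets simple collectors run roughly twice as fast in the benchmark attached to the issue.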






[jira] [Commented] (SOLR-7955) Auto create .system collection on first start if it does not exist

2015-08-21 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-7955?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14706713#comment-14706713
 ] 

Jan Høydahl commented on SOLR-7955:
---

OK, understood that the blob store is just a generic key/value store and knows 
nothing about the data. So any smartness here would need to be added in a layer 
above. Let's defer that to another JIRA.

 Auto create .system collection on first start if it does not exist
 --

 Key: SOLR-7955
 URL: https://issues.apache.org/jira/browse/SOLR-7955
 Project: Solr
  Issue Type: Improvement
Reporter: Jan Høydahl

 Why should a user need to create the {{.system}} collection manually? It 
 would simplify instructions related to BLOB store if user could assume it is 
 always there.






Ant/JUnit/Bash bug with RTL languages?

2015-08-21 Thread Mike Drob
Hello,

I noticed that when running tests, if the language selected is RTL then the
JUnit says hello output is backwards. However, if I copy the output and
try to paste it into firefox or gedit then the text is properly
right-to-left.

For example, when selecting hebrew, on my system it prints
JUnit4 says [shin-lamed-vav-mem] instead of starting with [shin] on the
right.

This shouldn't be a high priority, since the tests themselves still pass,
but I was wondering if that's something that we can fix or if the error is
in a lower level - like the junit libs, or maybe even bash. Anybody have
any ideas?

Mike
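A minimal check of the explanation that later emerged in this thread (the JVM emits the string in logical order, and it is the console's bidi handling that determines how it looks): each character of the Hebrew greeting carries the Unicode RIGHT_TO_LEFT directionality class, so a terminal that skips the bidi algorithm prints it in storage order.

```java
// Verifies that every char of "shalom" (shin-lamed-vav-mem) is bidi
// class R; terminals without bidi support render such text in logical
// (storage) order, which looks reversed.
public class BidiCheck {
    static boolean allRightToLeft(String s) {
        for (int i = 0; i < s.length(); i++) {
            if (Character.getDirectionality(s.charAt(i))
                    != Character.DIRECTIONALITY_RIGHT_TO_LEFT) {
                return false;
            }
        }
        return true;
    }

    public static void main(String[] args) {
        String shalom = "\u05E9\u05DC\u05D5\u05DD"; // shin-lamed-vav-mem
        System.out.println(allRightToLeft(shalom)); // prints "true"
    }
}
```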


[jira] [Commented] (SOLR-7734) MapReduce Indexer can error when using collection

2015-08-21 Thread Mike Drob (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14706781#comment-14706781
 ] 

Mike Drob commented on SOLR-7734:
-

The patch applied cleanly to branch_5x for me, and the tests ran without issue. 
Is there something specific I can check?

 MapReduce Indexer can error when using collection
 -

 Key: SOLR-7734
 URL: https://issues.apache.org/jira/browse/SOLR-7734
 Project: Solr
  Issue Type: Bug
  Components: contrib - MapReduce
Affects Versions: 5.2.1
Reporter: Mike Drob
Assignee: Gregory Chanan
 Fix For: Trunk, 5.4

 Attachments: SOLR-7734.patch, SOLR-7734.patch, SOLR-7734.patch, 
 SOLR-7734.patch, SOLR-7734.patch, SOLR-7734.patch


 When running the MapReduceIndexerTool, it will usually pull a 
 {{solrconfig.xml}} from ZK for the collection that it is running against. 
 This can be problematic for several reasons:
 * Performance: The configuration in ZK will likely have several query 
 handlers, and lots of other components that don't make sense in an 
 indexing-only use of EmbeddedSolrServer (ESS).
 * Classpath Resources: If the Solr services are using some kind of additional 
 service (such as Sentry for auth) then the indexer will not have access to 
 the necessary configurations without the user jumping through several hoops.
 * Distinct Configuration Needs: Enabling Soft Commits on the ESS doesn't make 
 sense. There's other configurations that 
 * Update Chain Behaviours: I'm under the impression that UpdateChains may 
 behave differently in ESS than a SolrCloud cluster. Is it safe to depend on 
 consistent behaviour here?






[jira] [Commented] (LUCENE-6745) RAMInputStream.clone is not thread safe

2015-08-21 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6745?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14706787#comment-14706787
 ] 

Michael McCandless commented on LUCENE-6745:


+1 for a separate abstraction making this convention strongly typed.

For this issue I'll just revert the commit, and fix the BKD/RangeTree producers.

 RAMInputStream.clone is not thread safe
 ---

 Key: LUCENE-6745
 URL: https://issues.apache.org/jira/browse/LUCENE-6745
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Michael McCandless
Assignee: Michael McCandless
 Fix For: Trunk, 5.4

 Attachments: LUCENE-6745.patch


 This took some time to track down ... it's the root cause of the RangeTree 
 failures that [~steve_rowe] found at 
 https://issues.apache.org/jira/browse/LUCENE-6697?focusedCommentId=14696999&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14696999
 The problem happens when one thread is using the original IndexInput 
 (RAMInputStream) from a RAMDirectory, but other threads are also cloning  
 that IndexInput at the same time.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7951) LBHttpSolrClient wraps ALL exceptions in "No live SolrServers available to handle this request" exception, even usage errors

2015-08-21 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7951?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14706798#comment-14706798
 ] 

Mark Miller commented on SOLR-7951:
---

Thanks for looking at this!

I suspect the problem is around 

{code}
  // we retry on 404 or 403 or 503 or 500
  // unless it's an update - then we only retry on connect exception
{code}

Do you know what the HTTP code for the response was? Unfortunately, I'm not 
sure we can easily differentiate between a response we should retry and one 
that is likely to get the same result on a retry.

If this is the case, the above patch looks like a possible solution, but we 
probably want to add some comments to help future devs understand what happened 
here.

We also want to make sure we are covering the following case correctly:

{code}
 catch (SolrServerException e) {
   Throwable rootCause = e.getRootCause();
   if (!isUpdate && rootCause instanceof IOException) {
     ex = (!isZombie) ? addZombie(client, e) : e;
   } else if (isUpdate && rootCause instanceof ConnectException) {
     ex = (!isZombie) ? addZombie(client, e) : e;
   } else {
     throw e;
   }
{code}
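As a minimal standalone sketch of the distinction that catch block encodes (the class and method names here are mine for illustration, not SolrJ's): connection-level root causes are worth trying on another server, while anything else would likely fail the same way everywhere.

```java
import java.io.IOException;
import java.net.ConnectException;

// Hypothetical sketch, not SolrJ code: classify a failure's root cause as
// "retriable on another server" vs. "an error that will repeat on retry".
public class RetrySketch {
    /** Walk to the innermost cause, like SolrServerException.getRootCause(). */
    static Throwable rootCause(Throwable t) {
        while (t.getCause() != null) {
            t = t.getCause();
        }
        return t;
    }

    /** Updates retry only on connect problems; queries retry on any IO problem. */
    static boolean shouldTryNextServer(boolean isUpdate, Throwable error) {
        Throwable root = rootCause(error);
        if (isUpdate) {
            return root instanceof ConnectException;
        }
        return root instanceof IOException;
    }
}
```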

 LBHttpSolrClient wraps ALL exceptions in "No live SolrServers available to 
 handle this request" exception, even usage errors
 

 Key: SOLR-7951
 URL: https://issues.apache.org/jira/browse/SOLR-7951
 Project: Solr
  Issue Type: Bug
  Components: SolrJ
Affects Versions: 5.2.1
Reporter: Elaine Cario
Priority: Minor
 Attachments: SOLR-7951.patch


 We were experiencing many "No live SolrServers available to handle this 
 request" exceptions, even though we saw no outages with any of our servers.  
 It turned out the actual exceptions were related to the use of wildcards in 
 span queries (and in some cases other invalid queries or usage-type issues). 
 Traced it back to LBHttpSolrClient, which was wrapping all exceptions, even 
 plain SolrExceptions, in that outer exception.  
 Instead, wrapping in the outer exception should be reserved for true 
 communication issues in SolrCloud, and usage exceptions should be thrown as 
 is.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-6756) Give MatchAllDocsQuery a dedicated BulkScorer

2015-08-21 Thread Adrien Grand (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6756?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrien Grand updated LUCENE-6756:
-
Attachment: MABench.java
LUCENE-6756.patch

Here are a patch and the simplistic/non-realistic/terrible benchmark I used.

 Give MatchAllDocsQuery a dedicated BulkScorer
 -

 Key: LUCENE-6756
 URL: https://issues.apache.org/jira/browse/LUCENE-6756
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Adrien Grand
Assignee: Adrien Grand
Priority: Minor
 Attachments: LUCENE-6756.patch, MABench.java


 MatchAllDocsQuery currently uses the default BulkScorer, which creates a 
 Scorer and iterates over matching doc IDs up to NO_MORE_DOCS. I tried to 
 build a dedicated BulkScorer, which seemed to help remove abstractions as it 
 helped improve throughput by a ~2x factor with simple collectors.
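A toy illustration (not the actual Lucene API or the attached patch) of why a dedicated bulk scorer can help here: scoring match-all through a generic doc-ID iterator pays an abstraction cost per document, while a dedicated path is just a counted loop.

```java
// Toy sketch: contrast iterating a generic doc-ID abstraction with a
// dedicated "match all docs" bulk-score loop.
public class BulkScoreSketch {
    interface DocIdIterator { int nextDoc(); }   // generic abstraction
    interface Collector { void collect(int doc); }

    static final int NO_MORE_DOCS = Integer.MAX_VALUE;

    /** Default path: pull doc IDs one by one through the iterator. */
    static int scoreViaIterator(DocIdIterator it, Collector c) {
        int count = 0;
        for (int doc = it.nextDoc(); doc != NO_MORE_DOCS; doc = it.nextDoc()) {
            c.collect(doc);
            count++;
        }
        return count;
    }

    /** Dedicated match-all path: a tight counted loop, no iterator calls. */
    static int scoreMatchAll(int maxDoc, Collector c) {
        for (int doc = 0; doc < maxDoc; doc++) {
            c.collect(doc);
        }
        return maxDoc;
    }
}
```

Both paths visit the same documents; the dedicated one simply removes a layer of indirection, which matches the ~2x throughput observation with simple collectors.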



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-6745) RAMInputStream.clone is not thread safe

2015-08-21 Thread Michael McCandless (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6745?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael McCandless updated LUCENE-6745:
---
Attachment: LUCENE-6745.patch

Patch, reverting the original commit, and fixing BKD to always clone before 
using the original IndexInput.

I'll open a new issue to make this strongly typed.

 RAMInputStream.clone is not thread safe
 ---

 Key: LUCENE-6745
 URL: https://issues.apache.org/jira/browse/LUCENE-6745
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Michael McCandless
Assignee: Michael McCandless
 Fix For: Trunk, 5.4

 Attachments: LUCENE-6745.patch, LUCENE-6745.patch


 This took some time to track down ... it's the root cause of the RangeTree 
 failures that [~steve_rowe] found at 
 https://issues.apache.org/jira/browse/LUCENE-6697?focusedCommentId=14696999&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14696999
 The problem happens when one thread is using the original IndexInput 
 (RAMInputStream) from a RAMDirectory, but other threads are also cloning  
 that IndexInput at the same time.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7789) Introduce a ConfigSet management API

2015-08-21 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7789?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14706749#comment-14706749
 ] 

Mark Miller commented on SOLR-7789:
---

I still can't apply this patch cleanly against 1696834. Do you have an svn 
version of the patch?

 Introduce a ConfigSet management API
 

 Key: SOLR-7789
 URL: https://issues.apache.org/jira/browse/SOLR-7789
 Project: Solr
  Issue Type: New Feature
Reporter: Gregory Chanan
Assignee: Gregory Chanan
 Attachments: SOLR-7789.patch, SOLR-7789.patch


 SOLR-5955 describes a feature to automatically create a ConfigSet, based on 
 another one, from a collection API call (i.e. one step collection creation).  
 Discussion there yielded SOLR-7742, Immutable ConfigSet support.  To close 
 the loop, we need support for a ConfigSet management API.
 The simplest ConfigSet API could have one operation:
 create a new config set, based on an existing one, possibly modifying the 
 ConfigSet properties.  Note you need to be able to modify the ConfigSet 
 properties at creation time because otherwise the Immutable property could not be changed.
 Another logical operation to support is ConfigSet deletion; that may be more 
 complicated to implement than creation because you need to handle the case 
 where a collection is already using the configuration.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Reopened] (LUCENE-6745) RAMInputStream.clone is not thread safe

2015-08-21 Thread Michael McCandless (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6745?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael McCandless reopened LUCENE-6745:


Reopening this, I think we should revert it:

I dug into this test failure: 
http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-MacOSX/2604/

{noformat}
[junit4] Suite: org.apache.lucene.store.TestMultiMMap
   [junit4]   2> aug 21, 2015 9:50:09 AM com.carrotsearch.randomizedtesting.RandomizedRunner$QueueUncaughtExceptionsHandler uncaughtException
   [junit4]   2> WARNING: Uncaught exception in thread: Thread[Thread-428,5,TGRP-TestMultiMMap]
   [junit4]   2> java.lang.AssertionError: java.io.EOFException: seek past EOF: MMapIndexInput(path=/Users/jenkins/workspace/Lucene-Solr-5.x-MacOSX/lucene/build/core/test/J1/temp/lucene.store.TestMultiMMap_F4E5EB2F2F531D5D-001/tempDir-005/randombytes)
   [junit4]   2>    at __randomizedtesting.SeedInfo.seed([F4E5EB2F2F531D5D]:0)
   [junit4]   2>    at org.apache.lucene.store.ByteBufferIndexInput.clone(ByteBufferIndexInput.java:259)
   [junit4]   2>    at org.apache.lucene.store.ByteBufferIndexInput$MultiBufferImpl.clone(ByteBufferIndexInput.java:487)
   [junit4]   2>    at org.apache.lucene.store.BaseDirectoryTestCase$1.run(BaseDirectoryTestCase.java:1200)
   [junit4]   2> Caused by: java.io.EOFException: seek past EOF: MMapIndexInput(path=/Users/jenkins/workspace/Lucene-Solr-5.x-MacOSX/lucene/build/core/test/J1/temp/lucene.store.TestMultiMMap_F4E5EB2F2F531D5D-001/tempDir-005/randombytes)
   [junit4]   2>    at org.apache.lucene.store.ByteBufferIndexInput.seek(ByteBufferIndexInput.java:174)
   [junit4]   2>    at org.apache.lucene.store.ByteBufferIndexInput$MultiBufferImpl.seek(ByteBufferIndexInput.java:504)
   [junit4]   2>    at org.apache.lucene.store.ByteBufferIndexInput.clone(ByteBufferIndexInput.java:257)
   [junit4]   2>    ... 2 more
   [junit4]   2>
   [junit4]   2> NOTE: reproduce with: ant test -Dtestcase=TestMultiMMap -Dtests.method=testCloneThreadSafety -Dtests.seed=F4E5EB2F2F531D5D -Dtests.slow=true -Dtests.locale=da -Dtests.timezone=Europe/Zurich -Dtests.asserts=true -Dtests.file.encoding=UTF-8
{noformat}

The failure is easy to reproduce, and indeed MMap's IndexInput.clone is not 
thread safe.

But thinking about it more ... I think this is too much to expect from 
Directory impls.

Instead, I think the bug is in BKDDVFormat, because it's doing real stuff 
with the original IndexInput it opened, instead of always using clones to do so 
...
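A standalone toy (not Lucene code) of the convention being described: the opened input carries a mutable position, so each consumer takes its own clone and nothing ever reads through the shared original.

```java
// Toy sketch of the clone-before-use convention. The original input is only
// ever cloned; all seeks and reads happen on thread-private clones.
public class CloneSketch {
    static class Input {
        final byte[] data;
        int pos;                        // mutable, unsynchronized state
        Input(byte[] data) { this.data = data; }
        void seek(int p) { pos = p; }
        byte readByte() { return data[pos++]; }
        @Override public Input clone() {
            Input c = new Input(data);  // shares the bytes...
            c.pos = pos;                // ...but gets a private position
            return c;
        }
    }

    /** Safe pattern: clone first, then do the actual IO on the clone. */
    static byte readAt(Input original, int offset) {
        Input mine = original.clone(); // thread-private view
        mine.seek(offset);
        return mine.readByte();
    }
}
```

If `readAt` seeked on `original` instead, two threads calling it concurrently would race on `pos`, which is exactly the BKD/RangeTree failure mode above.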

 RAMInputStream.clone is not thread safe
 ---

 Key: LUCENE-6745
 URL: https://issues.apache.org/jira/browse/LUCENE-6745
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Michael McCandless
Assignee: Michael McCandless
 Fix For: Trunk, 5.4

 Attachments: LUCENE-6745.patch


 This took some time to track down ... it's the root cause of the RangeTree 
 failures that [~steve_rowe] found at 
 https://issues.apache.org/jira/browse/LUCENE-6697?focusedCommentId=14696999&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14696999
 The problem happens when one thread is using the original IndexInput 
 (RAMInputStream) from a RAMDirectory, but other threads are also cloning  
 that IndexInput at the same time.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-6757) Directory.openInput should not directly return IndexInput

2015-08-21 Thread Michael McCandless (JIRA)
Michael McCandless created LUCENE-6757:
--

 Summary: Directory.openInput should not directly return IndexInput
 Key: LUCENE-6757
 URL: https://issues.apache.org/jira/browse/LUCENE-6757
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Michael McCandless


Spinoff from LUCENE-6745 suggested by [~rcmuir].

It's dangerous today that Directory.openInput returns an IndexInput which you 
can then use for IO but also clone so other threads can do thread-private IO.

We could instead make this strongly typed, e.g. Directory.openInput returns a 
thingy (Handle, Descriptor, something) whose sole purpose is to 1) produce 
IndexInput for thread-private use, and 2) close.

In the meantime, we could add some simple asserts to MDW to detect if the 
original IndexInput is ever used for anything but cloning, when other threads 
have cloned / do clone in the future.  I'll explore that first ... it's a start.
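A hypothetical sketch of the strongly-typed shape proposed here; the names (`InputHandle`, `openThreadPrivateReader`) are invented for illustration and are not part of any patch. The returned handle can only produce per-thread readers or be closed, so no thread can accidentally do IO through a shared original.

```java
import java.io.Closeable;

// Hypothetical sketch of the proposed abstraction: Directory.openInput would
// return a handle whose sole purposes are (1) producing thread-private
// readers and (2) closing.
public class HandleSketch {
    interface Reader { byte readByte(); }

    static class InputHandle implements Closeable {
        private final byte[] data;
        private boolean closed;
        InputHandle(byte[] data) { this.data = data; }

        /** Each call returns a reader with its own private position. */
        Reader openThreadPrivateReader() {
            if (closed) throw new IllegalStateException("already closed");
            final int[] pos = {0};      // state private to this reader
            return () -> data[pos[0]++];
        }

        @Override public void close() { closed = true; }
    }
}
```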



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6745) RAMInputStream.clone is not thread safe

2015-08-21 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6745?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14706808#comment-14706808
 ] 

Michael McCandless commented on LUCENE-6745:


I opened LUCENE-6757 for the longer term fix.

 RAMInputStream.clone is not thread safe
 ---

 Key: LUCENE-6745
 URL: https://issues.apache.org/jira/browse/LUCENE-6745
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Michael McCandless
Assignee: Michael McCandless
 Fix For: Trunk, 5.4

 Attachments: LUCENE-6745.patch, LUCENE-6745.patch


 This took some time to track down ... it's the root cause of the RangeTree 
 failures that [~steve_rowe] found at 
 https://issues.apache.org/jira/browse/LUCENE-6697?focusedCommentId=14696999&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14696999
 The problem happens when one thread is using the original IndexInput 
 (RAMInputStream) from a RAMDirectory, but other threads are also cloning  
 that IndexInput at the same time.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7951) LBHttpSolrClient wraps ALL exceptions in "No live SolrServers available to handle this request" exception, even usage errors

2015-08-21 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7951?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14706819#comment-14706819
 ] 

Mark Miller commented on SOLR-7951:
---

For the new test, isn't this particular cause still supposed to actually say "No 
live SolrServers available to handle this request"?

The root problem is a socket timeout exception to that server in the test - 
when we have connection problems, we try the other server options and finally 
return 'no live solr servers' with the root cause.

We should avoid that message only when the problem is not connection related.

 LBHttpSolrClient wraps ALL exceptions in "No live SolrServers available to 
 handle this request" exception, even usage errors
 

 Key: SOLR-7951
 URL: https://issues.apache.org/jira/browse/SOLR-7951
 Project: Solr
  Issue Type: Bug
  Components: SolrJ
Affects Versions: 5.2.1
Reporter: Elaine Cario
Priority: Minor
 Attachments: SOLR-7951.patch


 We were experiencing many "No live SolrServers available to handle this 
 request" exceptions, even though we saw no outages with any of our servers.  
 It turned out the actual exceptions were related to the use of wildcards in 
 span queries (and in some cases other invalid queries or usage-type issues). 
 Traced it back to LBHttpSolrClient, which was wrapping all exceptions, even 
 plain SolrExceptions, in that outer exception.  
 Instead, wrapping in the outer exception should be reserved for true 
 communication issues in SolrCloud, and usage exceptions should be thrown as 
 is.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7955) Auto create .system collection on first start if it does not exist

2015-08-21 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7955?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14706680#comment-14706680
 ] 

Noble Paul commented on SOLR-7955:
--

bq. What if admin chooses to upload a bunch of jars to blob store right after 
install, and then someone else, perhaps an application, will later create a 
bunch of collections. Then the .system collection must exist independent of 
collection CREATE requests.

If the admin wants to upload a bunch of jars before collection CREATE, go ahead 
and create the {{.system}} collection first and upload all the jars.

bq. Likewise, it would be nice if a collection could be created with a flag 
"bind all global runtime libs",

I don't think you understand the current feature fully. 

There is nothing called "all runtime libs". The blob store has more things than 
just jars, and there are many versions of each jar. The user MUST specify which 
version of the jar he wants to use. 
The idea of "latest" is NOT possible; it could mean that the same collection on 
two different nodes may run different versions of a library.



 Auto create .system collection on first start if it does not exist
 --

 Key: SOLR-7955
 URL: https://issues.apache.org/jira/browse/SOLR-7955
 Project: Solr
  Issue Type: Improvement
Reporter: Jan Høydahl

 Why should a user need to create the {{.system}} collection manually? It 
 would simplify instructions related to BLOB store if user could assume it is 
 always there.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-trunk-Linux (32bit/jdk1.8.0_51) - Build # 13946 - Failure!

2015-08-21 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/13946/
Java: 32bit/jdk1.8.0_51 -server -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  org.apache.lucene.store.TestMultiMMap.testCloneThreadSafety

Error Message:
Captured an uncaught exception in thread: Thread[id=424, name=Thread-343, 
state=RUNNABLE, group=TGRP-TestMultiMMap]

Stack Trace:
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught 
exception in thread: Thread[id=424, name=Thread-343, state=RUNNABLE, 
group=TGRP-TestMultiMMap]
at 
__randomizedtesting.SeedInfo.seed([540669469019BF76:D6E033137118BEBC]:0)
Caused by: java.lang.AssertionError: java.io.EOFException: seek past EOF: 
MMapIndexInput(path=/home/jenkins/workspace/Lucene-Solr-trunk-Linux/lucene/build/core/test/J2/temp/lucene.store.TestMultiMMap_540669469019BF76-001/tempDir-002/randombytes)
at __randomizedtesting.SeedInfo.seed([540669469019BF76]:0)
at 
org.apache.lucene.store.ByteBufferIndexInput.clone(ByteBufferIndexInput.java:259)
at 
org.apache.lucene.store.ByteBufferIndexInput$MultiBufferImpl.clone(ByteBufferIndexInput.java:488)
at 
org.apache.lucene.store.BaseDirectoryTestCase$1.run(BaseDirectoryTestCase.java:1200)
Caused by: java.io.EOFException: seek past EOF: 
MMapIndexInput(path=/home/jenkins/workspace/Lucene-Solr-trunk-Linux/lucene/build/core/test/J2/temp/lucene.store.TestMultiMMap_540669469019BF76-001/tempDir-002/randombytes)
at 
org.apache.lucene.store.ByteBufferIndexInput.seek(ByteBufferIndexInput.java:174)
at 
org.apache.lucene.store.ByteBufferIndexInput$MultiBufferImpl.seek(ByteBufferIndexInput.java:505)
at 
org.apache.lucene.store.ByteBufferIndexInput.clone(ByteBufferIndexInput.java:257)
... 2 more




Build Log:
[...truncated 694 lines...]
   [junit4] Suite: org.apache.lucene.store.TestMultiMMap
   [junit4]   2> aug 21, 2015 11:45:11 AM com.carrotsearch.randomizedtesting.RandomizedRunner$QueueUncaughtExceptionsHandler uncaughtException
   [junit4]   2> WARNING: Uncaught exception in thread: Thread[Thread-343,5,TGRP-TestMultiMMap]
   [junit4]   2> java.lang.AssertionError: java.io.EOFException: seek past EOF: MMapIndexInput(path=/home/jenkins/workspace/Lucene-Solr-trunk-Linux/lucene/build/core/test/J2/temp/lucene.store.TestMultiMMap_540669469019BF76-001/tempDir-002/randombytes)
   [junit4]   2>    at __randomizedtesting.SeedInfo.seed([540669469019BF76]:0)
   [junit4]   2>    at org.apache.lucene.store.ByteBufferIndexInput.clone(ByteBufferIndexInput.java:259)
   [junit4]   2>    at org.apache.lucene.store.ByteBufferIndexInput$MultiBufferImpl.clone(ByteBufferIndexInput.java:488)
   [junit4]   2>    at org.apache.lucene.store.BaseDirectoryTestCase$1.run(BaseDirectoryTestCase.java:1200)
   [junit4]   2> Caused by: java.io.EOFException: seek past EOF: MMapIndexInput(path=/home/jenkins/workspace/Lucene-Solr-trunk-Linux/lucene/build/core/test/J2/temp/lucene.store.TestMultiMMap_540669469019BF76-001/tempDir-002/randombytes)
   [junit4]   2>    at org.apache.lucene.store.ByteBufferIndexInput.seek(ByteBufferIndexInput.java:174)
   [junit4]   2>    at org.apache.lucene.store.ByteBufferIndexInput$MultiBufferImpl.seek(ByteBufferIndexInput.java:505)
   [junit4]   2>    at org.apache.lucene.store.ByteBufferIndexInput.clone(ByteBufferIndexInput.java:257)
   [junit4]   2>    ... 2 more
   [junit4]   2>
   [junit4]   2> NOTE: reproduce with: ant test -Dtestcase=TestMultiMMap -Dtests.method=testCloneThreadSafety -Dtests.seed=540669469019BF76 -Dtests.multiplier=3 -Dtests.slow=true -Dtests.locale=sk -Dtests.timezone=America/Fortaleza -Dtests.asserts=true -Dtests.file.encoding=ISO-8859-1
   [junit4] ERROR   1.01s J2 | TestMultiMMap.testCloneThreadSafety <<<
   [junit4]    > Throwable #1: com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught exception in thread: Thread[id=424, name=Thread-343, state=RUNNABLE, group=TGRP-TestMultiMMap]
   [junit4]    >    at __randomizedtesting.SeedInfo.seed([540669469019BF76:D6E033137118BEBC]:0)
   [junit4]    > Caused by: java.lang.AssertionError: java.io.EOFException: seek past EOF: MMapIndexInput(path=/home/jenkins/workspace/Lucene-Solr-trunk-Linux/lucene/build/core/test/J2/temp/lucene.store.TestMultiMMap_540669469019BF76-001/tempDir-002/randombytes)
   [junit4]    >    at __randomizedtesting.SeedInfo.seed([540669469019BF76]:0)
   [junit4]    >    at org.apache.lucene.store.ByteBufferIndexInput.clone(ByteBufferIndexInput.java:259)
   [junit4]    >    at org.apache.lucene.store.ByteBufferIndexInput$MultiBufferImpl.clone(ByteBufferIndexInput.java:488)
   [junit4]    >    at org.apache.lucene.store.BaseDirectoryTestCase$1.run(BaseDirectoryTestCase.java:1200)
   [junit4]    > Caused by: java.io.EOFException: seek past EOF: 

[jira] [Updated] (LUCENE-6747) FingerprintFilter - a TokenFilter for clustering/linking purposes

2015-08-21 Thread Mark Harwood (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6747?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Harwood updated LUCENE-6747:
-
Attachment: fingerprintv3.patch

Updated patch - removed the instanceof check and added an entry to CHANGES.txt.

Will commit to trunk and 5.x in a day or two if there are no objections

 FingerprintFilter - a TokenFilter for clustering/linking purposes
 -

 Key: LUCENE-6747
 URL: https://issues.apache.org/jira/browse/LUCENE-6747
 Project: Lucene - Core
  Issue Type: New Feature
  Components: modules/analysis
Reporter: Mark Harwood
Priority: Minor
 Attachments: fingerprintv1.patch, fingerprintv2.patch, 
 fingerprintv3.patch


 A TokenFilter that emits a single token which is a sorted, de-duplicated set 
 of the input tokens.
 This approach to normalizing text is used in tools like OpenRefine[1] and 
 elsewhere [2] to help in clustering or linking texts.
 The implementation proposed here has an upper limit on the size of the 
 combined token which is output.
 [1] https://github.com/OpenRefine/OpenRefine/wiki/Clustering-In-Depth
 [2] 
 https://rajmak.wordpress.com/2013/04/27/clustering-text-map-reduce-in-python/
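The fingerprinting idea described above can be sketched in a few lines (this is just the idea, not the patch's actual TokenFilter; the size cap behavior is a simplification):

```java
import java.util.TreeSet;

// Minimal sketch of text fingerprinting: sort and de-duplicate the input
// tokens, join them into a single token, and give up past a size cap.
public class FingerprintSketch {
    /** Returns the fingerprint token, or null if it would exceed maxSize chars. */
    static String fingerprint(String[] tokens, String sep, int maxSize) {
        TreeSet<String> sorted = new TreeSet<>();  // sorted + de-duplicated
        for (String t : tokens) {
            sorted.add(t);
        }
        String joined = String.join(sep, sorted);
        return joined.length() <= maxSize ? joined : null;
    }
}
```

Two inputs that differ only in word order or repetition produce the same fingerprint, which is what makes this useful for clustering and record linkage.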



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6745) RAMInputStream.clone is not thread safe

2015-08-21 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6745?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14706862#comment-14706862
 ] 

ASF subversion and git services commented on LUCENE-6745:
-

Commit 1697010 from [~mikemccand] in branch 'dev/trunk'
[ https://svn.apache.org/r1697010 ]

LUCENE-6745: IndexInput.clone is not thread-safe; fix BKD/RangeTree to respect 
that

 RAMInputStream.clone is not thread safe
 ---

 Key: LUCENE-6745
 URL: https://issues.apache.org/jira/browse/LUCENE-6745
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Michael McCandless
Assignee: Michael McCandless
 Fix For: Trunk, 5.4

 Attachments: LUCENE-6745.patch, LUCENE-6745.patch


 This took some time to track down ... it's the root cause of the RangeTree 
 failures that [~steve_rowe] found at 
 https://issues.apache.org/jira/browse/LUCENE-6697?focusedCommentId=14696999&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14696999
 The problem happens when one thread is using the original IndexInput 
 (RAMInputStream) from a RAMDirectory, but other threads are also cloning  
 that IndexInput at the same time.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7775) support SolrCloud collection as fromIndex param in query-time join

2015-08-21 Thread Timothy Potter (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7775?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14706902#comment-14706902
 ] 

Timothy Potter commented on SOLR-7775:
--

Sorry for the delay [~mkhludnev] on getting back to you on this ... patch looks 
good! +1 for commit

 support SolrCloud collection as fromIndex param in query-time join
 --

 Key: SOLR-7775
 URL: https://issues.apache.org/jira/browse/SOLR-7775
 Project: Solr
  Issue Type: Sub-task
  Components: query parsers
Reporter: Mikhail Khludnev
Assignee: Mikhail Khludnev
 Fix For: 5.3

 Attachments: SOLR-7775.patch, SOLR-7775.patch


 it's an allusion to SOLR-4905; will be addressed right after SOLR-6234



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6711) Instead of docCount(), maxDoc() is used for numberOfDocuments in SimilarityBase

2015-08-21 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6711?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14706980#comment-14706980
 ] 

Robert Muir commented on LUCENE-6711:
-

I don't think it's a bug with this.

These are likely the typical bugs from the crappy, useless querynorm, exposed 
by shaking things up.

 Instead of docCount(), maxDoc() is used for numberOfDocuments in 
 SimilarityBase
 ---

 Key: LUCENE-6711
 URL: https://issues.apache.org/jira/browse/LUCENE-6711
 Project: Lucene - Core
  Issue Type: Bug
  Components: core/search
Reporter: Ahmet Arslan
Assignee: Robert Muir
Priority: Minor
 Fix For: Trunk

 Attachments: LUCENE-6711.patch, LUCENE-6711.patch, LUCENE-6711.patch, 
 LUCENE-6711.patch


 {{SimilarityBase.java}} has the following line :
 {code}
  long numberOfDocuments = collectionStats.maxDoc();
 {code}
 It seems like {{collectionStats.docCount()}}, which returns the total number 
 of documents that have at least one term for this field, is a more appropriate 
 statistic here. 
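To make the maxDoc()-vs-docCount() point concrete with made-up numbers: per-document averages over a sparse field get diluted when divided by the whole index size instead of by the documents that actually have the field.

```java
// Illustration with invented numbers, not Lucene code: a SimilarityBase-style
// average field length is total tokens divided by "numberOfDocuments".
public class DocCountSketch {
    static double avgFieldLength(long sumTotalTermFreq, long numberOfDocuments) {
        return (double) sumTotalTermFreq / numberOfDocuments;
    }
}
```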



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Ant/JUnit/Bash bug with RTL languages?

2015-08-21 Thread Mike Drob
Yea, I'm fully ready to hear that it's an issue with bash. Didn't mean to
cast any aspersions on the test framework, mostly was curious if anybody
had ever thought about this before.

On Fri, Aug 21, 2015 at 9:13 AM, Benson Margulies bimargul...@gmail.com
wrote:

 Isn't this all about how your console does the Unicode bidi algo, and
 not about anything in the code?


 On Fri, Aug 21, 2015 at 10:12 AM, Mike Drob mad...@cloudera.com wrote:
  Hello,
 
  I noticed that when running tests, if the language selected is RTL then
 the
  "JUnit says hello" output is backwards. However, if I copy the output and
  try to paste it into firefox or gedit then the text is properly
  right-to-left.
 
  For example, when selecting hebrew, on my system it prints
  "JUnit4 says [shin-lamed-vav-mem]" instead of starting with [shin] on
 the
  right.
 
  This shouldn't be a high priority, since the tests themselves still pass,
  but I was wondering if that's something that we can fix or if the error
 is
  in a lower level - like the junit libs, or maybe even bash. Anybody have
 any
  ideas?
 
  Mike

 -
 To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
 For additional commands, e-mail: dev-h...@lucene.apache.org




Re: Ant/JUnit/Bash bug with RTL languages?

2015-08-21 Thread Mike Drob
Oh, those welcome messages don't correlate to the random locale used for
testing? Moving on, then...

Thanks for the info, Dawid, Robert, and Benson.

On Fri, Aug 21, 2015 at 9:44 AM, Dawid Weiss dawid.we...@gmail.com wrote:


 https://github.com/carrotsearch/randomizedtesting/blob/master/junit4-ant/src/main/java/com/carrotsearch/ant/tasks/junit4/JUnit4.java#L137-L159

 I think it's correct in the source. Must be an issue with the console.

 Dawid


 On Fri, Aug 21, 2015 at 4:22 PM, Robert Muir rcm...@gmail.com wrote:
  Yes exactly, Benson is correct. Look at an arabic text file with cat
  on your same console and see if its correct: usually its not.
 
  On Fri, Aug 21, 2015 at 10:13 AM, Benson Margulies
  bimargul...@gmail.com wrote:
  Isn't this all about how your console does the Unicode bidi algo, and
  not about anything in the code?
 
 
  On Fri, Aug 21, 2015 at 10:12 AM, Mike Drob mad...@cloudera.com
 wrote:
  Hello,
 
  I noticed that when running tests, if the language selected is RTL
 then the
  "JUnit says hello" output is backwards. However, if I copy the output
 and
  try to paste it into firefox or gedit then the text is properly
  right-to-left.
 
  For example, when selecting hebrew, on my system it prints
  "JUnit4 says [shin-lamed-vav-mem]" instead of starting with [shin]
 on the
  right.
 
  This shouldn't be a high priority, since the tests themselves still
 pass,
  but I was wondering if that's something that we can fix or if the
 error is
  in a lower level - like the junit libs, or maybe even bash. Anybody
 have any
  ideas?
 
  Mike
 
  -
  To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
  For additional commands, e-mail: dev-h...@lucene.apache.org
 
 
  -
  To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
  For additional commands, e-mail: dev-h...@lucene.apache.org
 

 -
 To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
 For additional commands, e-mail: dev-h...@lucene.apache.org




Re: 4.10 no longer buildable from source?

2015-08-21 Thread Yonik Seeley
Hmmm, OK, I blew away my ivy cache and tried again, and it worked!

-Yonik


On Fri, Aug 21, 2015 at 12:04 PM, Erick Erickson
erickerick...@gmail.com wrote:
 Worked fine on a fresh checkout for me with Java 1.7

 On Fri, Aug 21, 2015 at 8:41 AM, Yonik Seeley ysee...@gmail.com wrote:
 Is it something specific to my setup, or a general issue now?
 -Yonik


 resolve:
 [ivy:retrieve]
 [ivy:retrieve] :: problems summary ::
 [ivy:retrieve]  WARNINGS
 [ivy:retrieve] [FAILED ]
 org.restlet.jee#org.restlet.ext.servlet;2.1.1!org.restlet.ext.servlet.jar:
  (0ms)
 [ivy:retrieve]  shared: tried
 [ivy:retrieve]
 /Users/yonik/.ivy2/shared/org.restlet.jee/org.restlet.ext.servlet/2.1.1/jars/org.restlet.ext.servlet.jar
 [ivy:retrieve]  public: tried
 [ivy:retrieve]
 http://repo1.maven.org/maven2/org/restlet/jee/org.restlet.ext.servlet/2.1.1/org.restlet.ext.servlet-2.1.1.jar
 [ivy:retrieve] ::
 [ivy:retrieve] ::  FAILED DOWNLOADS::
 [ivy:retrieve] :: ^ see resolution messages for details  ^ ::
 [ivy:retrieve] ::
 [ivy:retrieve] ::
 org.restlet.jee#org.restlet.ext.servlet;2.1.1!org.restlet.ext.servlet.jar
 [ivy:retrieve] ::
 [ivy:retrieve]
 [ivy:retrieve] :: USE VERBOSE OR DEBUG MESSAGE LEVEL FOR MORE DETAILS

 BUILD FAILED

 /opt/code/lusolr410/build.xml:119: The following error occurred while
 executing this line:
 /opt/code/lusolr410/solr/common-build.xml:365: The following error
 occurred while executing this line:
 /opt/code/lusolr410/solr/core/build.xml:65: impossible to resolve 
 dependencies:
 resolve failed - see output for details

 Total time: 39 seconds


 -Yonik




Re: [JENKINS] Lucene-Solr-trunk-Linux (32bit/jdk1.8.0_51) - Build # 13946 - Failure!

2015-08-21 Thread Michael McCandless
This is LUCENE-6745 ... I'm committing a fix shortly.

Mike McCandless

http://blog.mikemccandless.com


On Fri, Aug 21, 2015 at 10:49 AM, Policeman Jenkins Server
jenk...@thetaphi.de wrote:
 Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/13946/
 Java: 32bit/jdk1.8.0_51 -server -XX:+UseConcMarkSweepGC

 1 tests failed.
 FAILED:  org.apache.lucene.store.TestMultiMMap.testCloneThreadSafety

 Error Message:
 Captured an uncaught exception in thread: Thread[id=424, name=Thread-343, 
 state=RUNNABLE, group=TGRP-TestMultiMMap]

 Stack Trace:
 com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an 
 uncaught exception in thread: Thread[id=424, name=Thread-343, state=RUNNABLE, 
 group=TGRP-TestMultiMMap]
 at 
 __randomizedtesting.SeedInfo.seed([540669469019BF76:D6E033137118BEBC]:0)
 Caused by: java.lang.AssertionError: java.io.EOFException: seek past EOF: 
 MMapIndexInput(path=/home/jenkins/workspace/Lucene-Solr-trunk-Linux/lucene/build/core/test/J2/temp/lucene.store.TestMultiMMap_540669469019BF76-001/tempDir-002/randombytes)
 at __randomizedtesting.SeedInfo.seed([540669469019BF76]:0)
 at 
 org.apache.lucene.store.ByteBufferIndexInput.clone(ByteBufferIndexInput.java:259)
 at 
 org.apache.lucene.store.ByteBufferIndexInput$MultiBufferImpl.clone(ByteBufferIndexInput.java:488)
 at 
 org.apache.lucene.store.BaseDirectoryTestCase$1.run(BaseDirectoryTestCase.java:1200)
 Caused by: java.io.EOFException: seek past EOF: 
 MMapIndexInput(path=/home/jenkins/workspace/Lucene-Solr-trunk-Linux/lucene/build/core/test/J2/temp/lucene.store.TestMultiMMap_540669469019BF76-001/tempDir-002/randombytes)
 at 
 org.apache.lucene.store.ByteBufferIndexInput.seek(ByteBufferIndexInput.java:174)
 at 
 org.apache.lucene.store.ByteBufferIndexInput$MultiBufferImpl.seek(ByteBufferIndexInput.java:505)
 at 
 org.apache.lucene.store.ByteBufferIndexInput.clone(ByteBufferIndexInput.java:257)
 ... 2 more




 Build Log:
 [...truncated 694 lines...]
[junit4] Suite: org.apache.lucene.store.TestMultiMMap
[junit4]   2 aug 21, 2015 11:45:11 AM 
 com.carrotsearch.randomizedtesting.RandomizedRunner$QueueUncaughtExceptionsHandler
  uncaughtException
[junit4]   2 WARNING: Uncaught exception in thread: 
 Thread[Thread-343,5,TGRP-TestMultiMMap]
[junit4]   2 java.lang.AssertionError: java.io.EOFException: seek past 
 EOF: 
 MMapIndexInput(path=/home/jenkins/workspace/Lucene-Solr-trunk-Linux/lucene/build/core/test/J2/temp/lucene.store.TestMultiMMap_540669469019BF76-001/tempDir-002/randombytes)
[junit4]   2at 
 __randomizedtesting.SeedInfo.seed([540669469019BF76]:0)
[junit4]   2at 
 org.apache.lucene.store.ByteBufferIndexInput.clone(ByteBufferIndexInput.java:259)
[junit4]   2at 
 org.apache.lucene.store.ByteBufferIndexInput$MultiBufferImpl.clone(ByteBufferIndexInput.java:488)
[junit4]   2at 
 org.apache.lucene.store.BaseDirectoryTestCase$1.run(BaseDirectoryTestCase.java:1200)
[junit4]   2 Caused by: java.io.EOFException: seek past EOF: 
 MMapIndexInput(path=/home/jenkins/workspace/Lucene-Solr-trunk-Linux/lucene/build/core/test/J2/temp/lucene.store.TestMultiMMap_540669469019BF76-001/tempDir-002/randombytes)
[junit4]   2at 
 org.apache.lucene.store.ByteBufferIndexInput.seek(ByteBufferIndexInput.java:174)
[junit4]   2at 
 org.apache.lucene.store.ByteBufferIndexInput$MultiBufferImpl.seek(ByteBufferIndexInput.java:505)
[junit4]   2at 
 org.apache.lucene.store.ByteBufferIndexInput.clone(ByteBufferIndexInput.java:257)
[junit4]   2... 2 more
[junit4]   2
[junit4]   2 NOTE: reproduce with: ant test  -Dtestcase=TestMultiMMap 
 -Dtests.method=testCloneThreadSafety -Dtests.seed=540669469019BF76 
 -Dtests.multiplier=3 -Dtests.slow=true -Dtests.locale=sk 
 -Dtests.timezone=America/Fortaleza -Dtests.asserts=true 
 -Dtests.file.encoding=ISO-8859-1
[junit4] ERROR   1.01s J2 | TestMultiMMap.testCloneThreadSafety 
[junit4] Throwable #1: 
 com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an 
 uncaught exception in thread: Thread[id=424, name=Thread-343, state=RUNNABLE, 
 group=TGRP-TestMultiMMap]
[junit4]at 
 __randomizedtesting.SeedInfo.seed([540669469019BF76:D6E033137118BEBC]:0)
[junit4] Caused by: java.lang.AssertionError: java.io.EOFException: 
 seek past EOF: 
 MMapIndexInput(path=/home/jenkins/workspace/Lucene-Solr-trunk-Linux/lucene/build/core/test/J2/temp/lucene.store.TestMultiMMap_540669469019BF76-001/tempDir-002/randombytes)
[junit4]at 
 __randomizedtesting.SeedInfo.seed([540669469019BF76]:0)
[junit4]at 
 org.apache.lucene.store.ByteBufferIndexInput.clone(ByteBufferIndexInput.java:259)
[junit4]at 
 org.apache.lucene.store.ByteBufferIndexInput$MultiBufferImpl.clone(ByteBufferIndexInput.java:488)
[junit4]at 
 

[jira] [Commented] (SOLR-7951) LBHttpSolrClient wraps ALL exceptions in No live SolrServers available to handle this request exception, even usage errors

2015-08-21 Thread Elaine Cario (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7951?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14706872#comment-14706872
 ] 

Elaine Cario commented on SOLR-7951:


I was able to get an environment to test this on a little sooner, although it 
was a 4.10 environment (the issue was reproducible there). I needed to manually 
apply the change, as LBHttpSolrServer was used in 4.10 and was later refactored 
to LBHttpSolrClient in 5.x. The issue still occurred after the modification, 
though, so I needed to make one additional change in both conditions:

...else if (ex instanceof SolrException)...   // (was SolrServerException in 
the original patch)

...as the HttpSolrServer.RemoteSolrException that was being thrown is a 
SolrException, not a SolrServerException. Then it worked as expected.

I'm attaching a patch for 4.10.x with the corrected condition 
(SOLR-7951-4.x.patch)
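[Editor's note: the distinction described above can be sketched with a small 
stand-alone model. The class names below mirror SolrJ's, but they are local 
stand-ins, not the real SolrJ classes.]

```java
// Model of the exception hierarchy involved: RemoteSolrException extends
// SolrException (a RuntimeException), while SolrServerException is a separate
// checked exception. An `instanceof SolrServerException` test therefore
// misses RemoteSolrException; `instanceof SolrException` catches it.
public class WrapCheckDemo {
    static class SolrException extends RuntimeException {
        SolrException(String m) { super(m); }
    }
    static class RemoteSolrException extends SolrException {
        RemoteSolrException(String m) { super(m); }
    }
    static class SolrServerException extends Exception {
        SolrServerException(String m) { super(m); }
    }

    // Original patch's condition: rethrow only SolrServerExceptions as-is.
    static boolean rethrownAsIsOriginal(Throwable ex) {
        return ex instanceof SolrServerException;
    }

    // Corrected condition: rethrow any SolrException (usage error) as-is.
    static boolean rethrownAsIs(Throwable ex) {
        return ex instanceof SolrException;
    }

    public static void main(String[] args) {
        Throwable remote = new RemoteSolrException("wildcard in span query");
        System.out.println(rethrownAsIsOriginal(remote)); // false: still wrapped
        System.out.println(rethrownAsIs(remote));         // true: rethrown as-is
    }
}
```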


 LBHttpSolrClient wraps ALL exceptions in No live SolrServers available to 
 handle this request exception, even usage errors
 

 Key: SOLR-7951
 URL: https://issues.apache.org/jira/browse/SOLR-7951
 Project: Solr
  Issue Type: Bug
  Components: SolrJ
Affects Versions: 5.2.1
Reporter: Elaine Cario
Priority: Minor
 Attachments: SOLR-7951.patch


 We were experiencing many "No live SolrServers available to handle this 
 request" exceptions, even though we saw no outages with any of our servers.  
 It turned out the actual exceptions were related to the use of wildcards in 
 span queries (and in some cases other invalid queries or usage-type issues). 
 Traced it back to LBHttpSolrClient, which was wrapping all exceptions, even 
 plain SolrExceptions, in that outer exception.  
 Instead, wrapping in the outer exception should be reserved for true 
 communication issues in SolrCloud, and usage exceptions should be thrown as 
 is.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)




4.10 no longer buildable from source?

2015-08-21 Thread Yonik Seeley
Is it something specific to my setup, or a general issue now?
-Yonik


resolve:
[ivy:retrieve]
[ivy:retrieve] :: problems summary ::
[ivy:retrieve]  WARNINGS
[ivy:retrieve] [FAILED ]
org.restlet.jee#org.restlet.ext.servlet;2.1.1!org.restlet.ext.servlet.jar:
 (0ms)
[ivy:retrieve]  shared: tried
[ivy:retrieve]
/Users/yonik/.ivy2/shared/org.restlet.jee/org.restlet.ext.servlet/2.1.1/jars/org.restlet.ext.servlet.jar
[ivy:retrieve]  public: tried
[ivy:retrieve]
http://repo1.maven.org/maven2/org/restlet/jee/org.restlet.ext.servlet/2.1.1/org.restlet.ext.servlet-2.1.1.jar
[ivy:retrieve] ::
[ivy:retrieve] ::  FAILED DOWNLOADS::
[ivy:retrieve] :: ^ see resolution messages for details  ^ ::
[ivy:retrieve] ::
[ivy:retrieve] ::
org.restlet.jee#org.restlet.ext.servlet;2.1.1!org.restlet.ext.servlet.jar
[ivy:retrieve] ::
[ivy:retrieve]
[ivy:retrieve] :: USE VERBOSE OR DEBUG MESSAGE LEVEL FOR MORE DETAILS

BUILD FAILED

/opt/code/lusolr410/build.xml:119: The following error occurred while
executing this line:
/opt/code/lusolr410/solr/common-build.xml:365: The following error
occurred while executing this line:
/opt/code/lusolr410/solr/core/build.xml:65: impossible to resolve dependencies:
resolve failed - see output for details

Total time: 39 seconds


-Yonik




[jira] [Updated] (LUCENE-6758) Adding a SHOULD clause to a BQ over an empty field clears the score when using DefaultSimilarity

2015-08-21 Thread Terry Smith (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6758?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Terry Smith updated LUCENE-6758:

Attachment: LUCENE-6758.patch

Run this unit test a few times and you'll hit a failure when DefaultSimilarity 
is picked.

The method testBQHitOrEmpty() will fail because the score is zero. Its friend 
testBQHitOrMiss() has a non-zero score.

The difference between the two is that the field "empty" is unused, whereas the 
field "test" has one token ("hit").


 Adding a SHOULD clause to a BQ over an empty field clears the score when 
 using DefaultSimilarity
 

 Key: LUCENE-6758
 URL: https://issues.apache.org/jira/browse/LUCENE-6758
 Project: Lucene - Core
  Issue Type: Bug
Affects Versions: Trunk
Reporter: Terry Smith
 Attachments: LUCENE-6758.patch


 Patch with unit test to show the bug will be attached.
 I've narrowed this change in behavior with git bisect to the following commit:
 {noformat}
 commit 698b4b56f0f2463b21c9e3bc67b8b47d635b7d1f
 Author: Robert Muir rm...@apache.org
 Date:   Thu Aug 13 17:37:15 2015 +
 LUCENE-6711: Use CollectionStatistics.docCount() for IDF and average 
 field length computations
 
 git-svn-id: https://svn.apache.org/repos/asf/lucene/dev/trunk@1695744 
 13f79535-47bb-0310-9956-ffa450edef68
 {noformat}






[jira] [Commented] (LUCENE-6756) Give MatchAllDocsQuery a dedicated BulkScorer

2015-08-21 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14706873#comment-14706873
 ] 

Robert Muir commented on LUCENE-6756:
-

+1

 Give MatchAllDocsQuery a dedicated BulkScorer
 -

 Key: LUCENE-6756
 URL: https://issues.apache.org/jira/browse/LUCENE-6756
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Adrien Grand
Assignee: Adrien Grand
Priority: Minor
 Attachments: LUCENE-6756.patch, MABench.java


 MatchAllDocsQuery currently uses the default BulkScorer, which creates a 
 Scorer and iterates over matching doc IDs up to NO_MORE_DOCS. I tried to 
 build a dedicated BulkScorer; removing those abstractions improved 
 throughput by a ~2x factor with simple collectors.
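[Editor's note: the abstraction being removed can be sketched with a minimal 
stand-alone model. The types below are simplified stand-ins, not Lucene's real 
BulkScorer/DocIdSetIterator API.]

```java
// Contrast between driving a per-document iterator (the default BulkScorer
// path) and a dedicated bulk loop for a query that matches every document.
public class BulkScoreDemo {
    interface SimpleCollector {
        void collect(int doc);
    }

    static final int NO_MORE_DOCS = Integer.MAX_VALUE;

    // Stand-in for advancing a doc-id iterator that matches every document.
    static int nextDoc(int current, int maxDoc) {
        int next = current + 1;
        return next < maxDoc ? next : NO_MORE_DOCS;
    }

    // Default-style path: one virtual advance per document.
    static int iteratorScore(int maxDoc, SimpleCollector c) {
        int count = 0;
        for (int doc = nextDoc(-1, maxDoc); doc != NO_MORE_DOCS; doc = nextDoc(doc, maxDoc)) {
            c.collect(doc);
            count++;
        }
        return count;
    }

    // Dedicated bulk path: every doc matches, so just run a tight loop.
    static int bulkScore(int maxDoc, SimpleCollector c) {
        for (int doc = 0; doc < maxDoc; doc++) {
            c.collect(doc);
        }
        return maxDoc;
    }

    public static void main(String[] args) {
        int[] visited = new int[2];
        int a = iteratorScore(1000, doc -> visited[0]++);
        int b = bulkScore(1000, doc -> visited[1]++);
        System.out.println(a == b && visited[0] == visited[1]); // true: same docs collected
    }
}
```

Both paths collect the same documents; the bulk loop just skips the iterator 
indirection, which is where the reported speedup for simple collectors comes from.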






[jira] [Updated] (SOLR-7951) LBHttpSolrClient wraps ALL exceptions in No live SolrServers available to handle this request exception, even usage errors

2015-08-21 Thread Elaine Cario (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7951?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elaine Cario updated SOLR-7951:
---
Attachment: SOLR-7951-4.x.patch

Patch applies to LBHttpSolrServer in the 4.x line. A similar change can be made 
to the 5.x LBHttpSolrClient.

 LBHttpSolrClient wraps ALL exceptions in No live SolrServers available to 
 handle this request exception, even usage errors
 

 Key: SOLR-7951
 URL: https://issues.apache.org/jira/browse/SOLR-7951
 Project: Solr
  Issue Type: Bug
  Components: SolrJ
Affects Versions: 5.2.1
Reporter: Elaine Cario
Priority: Minor
 Attachments: SOLR-7951-4.x.patch, SOLR-7951.patch


 We were experiencing many "No live SolrServers available to handle this 
 request" exceptions, even though we saw no outages with any of our servers.  
 It turned out the actual exceptions were related to the use of wildcards in 
 span queries (and in some cases other invalid queries or usage-type issues). 
 Traced it back to LBHttpSolrClient, which was wrapping all exceptions, even 
 plain SolrExceptions, in that outer exception.  
 Instead, wrapping in the outer exception should be reserved for true 
 communication issues in SolrCloud, and usage exceptions should be thrown as 
 is.






[JENKINS] Lucene-Solr-5.3-Linux (64bit/jdk1.7.0_80) - Build # 112 - Failure!

2015-08-21 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.3-Linux/112/
Java: 64bit/jdk1.7.0_80 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  org.apache.solr.cloud.TestRebalanceLeaders.test

Error Message:
No live SolrServers available to handle this request:[http://127.0.0.1:40220, 
http://127.0.0.1:52815, http://127.0.0.1:37870, http://127.0.0.1:35749, 
http://127.0.0.1:52251]

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: No live SolrServers available 
to handle this request:[http://127.0.0.1:40220, http://127.0.0.1:52815, 
http://127.0.0.1:37870, http://127.0.0.1:35749, http://127.0.0.1:52251]
at 
__randomizedtesting.SeedInfo.seed([E08398054DDBE675:68D7A7DFE3278B8D]:0)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:355)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1085)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:856)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:799)
at org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1220)
at 
org.apache.solr.cloud.TestRebalanceLeaders.issueCommands(TestRebalanceLeaders.java:281)
at 
org.apache.solr.cloud.TestRebalanceLeaders.rebalanceLeaderTest(TestRebalanceLeaders.java:108)
at 
org.apache.solr.cloud.TestRebalanceLeaders.test(TestRebalanceLeaders.java:74)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:963)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:938)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Commented] (LUCENE-6756) Give MatchAllDocsQuery a dedicated BulkScorer

2015-08-21 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14706850#comment-14706850
 ] 

Adrien Grand commented on LUCENE-6756:
--

I added a MatchAll task to wikimedium1m and hotspot looks happy:

{noformat}
TaskQPS baseline  StdDev   QPS patch  StdDev
Pct diff
  Fuzzy2  103.55 (32.6%)   95.61 (35.5%)   
-7.7% ( -57% -   89%)
  Fuzzy1  139.81 (13.1%)  132.03 (17.0%)   
-5.6% ( -31% -   28%)
 Prefix3  374.46  (8.7%)  368.62  (7.4%)   
-1.6% ( -16% -   15%)
   OrHighLow  322.32  (7.0%)  320.66  (5.9%)   
-0.5% ( -12% -   13%)
   OrHighMed  257.31  (8.7%)  256.59  (4.7%)   
-0.3% ( -12% -   14%)
  OrHighHigh  202.24  (8.1%)  201.80  (6.2%)   
-0.2% ( -13% -   15%)
  HighPhrase  155.66  (4.3%)  155.48  (5.2%)   
-0.1% (  -9% -9%)
 LowSpanNear  200.83  (5.5%)  200.68  (4.5%)   
-0.1% (  -9% -   10%)
  AndHighLow 1806.85  (5.2%) 1806.05  (8.9%)   
-0.0% ( -13% -   14%)
HighTerm  573.21  (7.8%)  573.11  (6.6%)   
-0.0% ( -13% -   15%)
 LowSloppyPhrase  132.99  (4.6%)  132.98  (5.7%)   
-0.0% (  -9% -   10%)
 AndHighHigh  401.82  (4.2%)  402.76  (4.3%)
0.2% (  -7% -9%)
HighSloppyPhrase  271.61  (5.7%)  273.46  (7.3%)
0.7% ( -11% -   14%)
HighSpanNear  107.11  (6.2%)  107.85  (5.2%)
0.7% ( -10% -   12%)
   MedPhrase  186.57  (4.5%)  187.88  (4.9%)
0.7% (  -8% -   10%)
   LowPhrase  402.46  (4.4%)  406.53  (3.5%)
1.0% (  -6% -9%)
 MedSloppyPhrase  233.49  (5.0%)  236.66  (3.4%)
1.4% (  -6% -   10%)
 MedTerm 1278.37  (8.9%) 1302.62  (6.4%)
1.9% ( -12% -   18%)
Wildcard  339.31  (8.8%)  346.33  (6.5%)
2.1% ( -12% -   19%)
 Respell  152.28  (9.2%)  155.51  (8.8%)
2.1% ( -14% -   22%)
  AndHighMed  396.54  (8.1%)  407.13  (3.7%)
2.7% (  -8% -   15%)
 MedSpanNear  565.97  (6.9%)  581.61  (5.3%)
2.8% (  -8% -   16%)
 LowTerm 3143.46 (14.2%) 3244.12  (8.8%)
3.2% ( -17% -   30%)
  IntNRQ   90.11 (11.4%)   93.16  (8.0%)
3.4% ( -14% -   25%)
MatchAll  117.18  (3.7%)  211.95 (30.9%)   
80.9% (  44% -  119%)
{noformat}

The fuzzy queries are a bit off, but I see a lot of variance with these queries 
anyway, even without the change.

 Give MatchAllDocsQuery a dedicated BulkScorer
 -

 Key: LUCENE-6756
 URL: https://issues.apache.org/jira/browse/LUCENE-6756
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Adrien Grand
Assignee: Adrien Grand
Priority: Minor
 Attachments: LUCENE-6756.patch, MABench.java


 MatchAllDocsQuery currently uses the default BulkScorer, which creates a 
 Scorer and iterates over matching doc IDs up to NO_MORE_DOCS. I tried to 
 build a dedicated BulkScorer; removing those abstractions improved 
 throughput by a ~2x factor with simple collectors.






[jira] [Commented] (LUCENE-6745) RAMInputStream.clone is not thread safe

2015-08-21 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6745?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14706864#comment-14706864
 ] 

ASF subversion and git services commented on LUCENE-6745:
-

Commit 1697011 from [~mikemccand] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1697011 ]

LUCENE-6745: IndexInput.clone is not thread-safe; fix BKD/RangeTree to respect 
that

 RAMInputStream.clone is not thread safe
 ---

 Key: LUCENE-6745
 URL: https://issues.apache.org/jira/browse/LUCENE-6745
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Michael McCandless
Assignee: Michael McCandless
 Fix For: Trunk, 5.4

 Attachments: LUCENE-6745.patch, LUCENE-6745.patch


 This took some time to track down ... it's the root cause of the RangeTree 
 failures that [~steve_rowe] found at 
 https://issues.apache.org/jira/browse/LUCENE-6697?focusedCommentId=14696999page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14696999
 The problem happens when one thread is using the original IndexInput 
 (RAMInputStream) from a RAMDirectory, but other threads are also cloning  
 that IndexInput at the same time.
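[Editor's note: the pattern the fix enforces can be sketched with a minimal 
stand-alone model. ModelInput below is a simplified stand-in, not Lucene's 
real IndexInput/RAMInputStream.]

```java
// clone() copies mutable position state, so clones must all be taken on the
// thread that owns the original, before any worker thread starts seeking.
public class CloneSafetyDemo {
    static class ModelInput implements Cloneable {
        long pos; // mutable read position: the state a racing clone() can tear

        void seek(long p) { pos = p; }

        @Override
        public ModelInput clone() {
            try {
                return (ModelInput) super.clone();
            } catch (CloneNotSupportedException e) {
                throw new AssertionError(e);
            }
        }
    }

    // Safe pattern: take one clone per worker up front, on the owning thread.
    static ModelInput[] cloneForThreads(ModelInput original, int n) {
        ModelInput[] clones = new ModelInput[n];
        for (int i = 0; i < n; i++) {
            clones[i] = original.clone();
        }
        return clones;
    }

    public static void main(String[] args) throws InterruptedException {
        ModelInput original = new ModelInput();
        ModelInput[] perThread = cloneForThreads(original, 4);
        Thread[] workers = new Thread[perThread.length];
        for (int i = 0; i < workers.length; i++) {
            final ModelInput in = perThread[i];
            final long target = (i + 1) * 100L;
            // Each worker seeks only its own clone; nobody clones concurrently.
            workers[i] = new Thread(() -> in.seek(target));
            workers[i].start();
        }
        for (Thread t : workers) {
            t.join();
        }
        System.out.println(original.pos); // 0: the original was never touched
    }
}
```

The unsafe variant (workers calling original.clone() themselves while another 
thread seeks the original) is exactly the race the issue describes.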






[jira] [Resolved] (LUCENE-6745) RAMInputStream.clone is not thread safe

2015-08-21 Thread Michael McCandless (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6745?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael McCandless resolved LUCENE-6745.

Resolution: Fixed

 RAMInputStream.clone is not thread safe
 ---

 Key: LUCENE-6745
 URL: https://issues.apache.org/jira/browse/LUCENE-6745
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Michael McCandless
Assignee: Michael McCandless
 Fix For: Trunk, 5.4

 Attachments: LUCENE-6745.patch, LUCENE-6745.patch


 This took some time to track down ... it's the root cause of the RangeTree 
 failures that [~steve_rowe] found at 
 https://issues.apache.org/jira/browse/LUCENE-6697?focusedCommentId=14696999page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14696999
 The problem happens when one thread is using the original IndexInput 
 (RAMInputStream) from a RAMDirectory, but other threads are also cloning  
 that IndexInput at the same time.






[jira] [Commented] (LUCENE-6758) Adding a SHOULD clause to a BQ over an empty field clears the score when using DefaultSimilarity

2015-08-21 Thread Terry Smith (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6758?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14706963#comment-14706963
 ] 

Terry Smith commented on LUCENE-6758:
-

Explain output for the failing query (testBQHitOrEmpty):

{noformat}
0.0 = product of:
  0.0 = sum of:
0.0 = weight(test:hit in 0) [DefaultSimilarity], result of:
  0.0 = score(doc=0,freq=1.0), product of:
0.0 = queryWeight, product of:
  0.30685282 = idf(docFreq=1, docCount=1)
  0.0 = queryNorm
0.30685282 = fieldWeight in 0, product of:
  1.0 = tf(freq=1.0), with freq of:
1.0 = termFreq=1.0
  0.30685282 = idf(docFreq=1, docCount=1)
  1.0 = fieldNorm(doc=0)
  0.5 = coord(1/2)
{noformat}

Explain output for the variant against a populated field  (testBQHitOrMiss):

{noformat}
0.04500804 = product of:
  0.09001608 = sum of:
0.09001608 = weight(test:hit in 0) [DefaultSimilarity], result of:
  0.09001608 = score(doc=0,freq=1.0), product of:
0.29335263 = queryWeight, product of:
  0.30685282 = idf(docFreq=1, docCount=1)
  0.9560043 = queryNorm
0.30685282 = fieldWeight in 0, product of:
  1.0 = tf(freq=1.0), with freq of:
1.0 = termFreq=1.0
  0.30685282 = idf(docFreq=1, docCount=1)
  1.0 = fieldNorm(doc=0)
  0.5 = coord(1/2)
{noformat}
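[Editor's note: one plausible mechanism for the zero queryNorm, sketched with 
DefaultSimilarity-style formulas. This is an editorial hypothesis, not an 
analysis from the thread: if idf now uses per-field docCount(), an unused 
field has docCount == 0, and the idf term goes to negative infinity.]

```java
// How a docCount of 0 for the unused "empty" field could zero out queryNorm:
// the infinite idf poisons sumOfSquaredWeights, and 1/sqrt(+Infinity) == 0.
public class QueryNormDemo {
    // Classic tf-idf formula: 1 + ln(docCount / (docFreq + 1)).
    static double idf(long docCount, long docFreq) {
        return 1.0 + Math.log((double) docCount / (docFreq + 1));
    }

    static double queryNorm(double sumOfSquaredWeights) {
        return 1.0 / Math.sqrt(sumOfSquaredWeights);
    }

    public static void main(String[] args) {
        double idfEmpty = idf(0, 0); // unused field: docCount == 0 -> -Infinity
        double idfHit = idf(1, 1);   // populated field: ~0.30685, as in the explain
        // The infinite weight dominates the sum of squared weights...
        double sum = idfEmpty * idfEmpty + idfHit * idfHit; // +Infinity
        // ...so queryNorm collapses to 0 and every clause scores 0.
        System.out.println(queryNorm(sum)); // 0.0
    }
}
```

This would match both explain outputs above: the hit clause's idf (0.30685282) 
is unchanged, but queryNorm is 0.0 only when the empty-field clause is present.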



 Adding a SHOULD clause to a BQ over an empty field clears the score when 
 using DefaultSimilarity
 

 Key: LUCENE-6758
 URL: https://issues.apache.org/jira/browse/LUCENE-6758
 Project: Lucene - Core
  Issue Type: Bug
Affects Versions: Trunk
Reporter: Terry Smith
 Attachments: LUCENE-6758.patch


 Patch with unit test to show the bug will be attached.
 I've narrowed this change in behavior with git bisect to the following commit:
 {noformat}
 commit 698b4b56f0f2463b21c9e3bc67b8b47d635b7d1f
 Author: Robert Muir rm...@apache.org
 Date:   Thu Aug 13 17:37:15 2015 +
 LUCENE-6711: Use CollectionStatistics.docCount() for IDF and average 
 field length computations
 
 git-svn-id: https://svn.apache.org/repos/asf/lucene/dev/trunk@1695744 
 13f79535-47bb-0310-9956-ffa450edef68
 {noformat}






[jira] [Commented] (LUCENE-6711) Instead of docCount(), maxDoc() is used for numberOfDocuments in SimilarityBase

2015-08-21 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6711?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14706970#comment-14706970
 ] 

Hoss Man commented on LUCENE-6711:
--

possible bug identified by Terry Smith in LUCENE-6758

 Instead of docCount(), maxDoc() is used for numberOfDocuments in 
 SimilarityBase
 ---

 Key: LUCENE-6711
 URL: https://issues.apache.org/jira/browse/LUCENE-6711
 Project: Lucene - Core
  Issue Type: Bug
  Components: core/search
Reporter: Ahmet Arslan
Assignee: Robert Muir
Priority: Minor
 Fix For: Trunk

 Attachments: LUCENE-6711.patch, LUCENE-6711.patch, LUCENE-6711.patch, 
 LUCENE-6711.patch


 {{SimilarityBase.java}} has the following line :
 {code}
  long numberOfDocuments = collectionStats.maxDoc();
 {code}
 It seems like {{collectionStats.docCount()}}, which returns the total number 
 of documents that have at least one term for this field, is a more appropriate 
 statistic here. 
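[Editor's note: the numeric effect of the proposed change can be illustrated 
with toy numbers (assumed for illustration, not taken from the issue).]

```java
// With maxDoc(), documents that lack the field inflate the idf denominator's
// numerator; with docCount(), idf is scoped to documents that have the field.
public class DocCountDemo {
    // Classic idf formula: 1 + ln(numDocs / (docFreq + 1)).
    static double idf(long numDocs, long docFreq) {
        return 1.0 + Math.log((double) numDocs / (docFreq + 1));
    }

    public static void main(String[] args) {
        long maxDoc = 100;   // all documents in the index
        long docCount = 60;  // documents that actually have this field
        long docFreq = 10;   // documents containing the term
        System.out.println(idf(maxDoc, docFreq));   // inflated by field-less docs
        System.out.println(idf(docCount, docFreq)); // scoped to the field
    }
}
```

With these numbers the maxDoc-based idf is strictly larger, i.e. terms look 
rarer than they are within the field that actually contains them.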






[JENKINS] Lucene-Solr-5.x-Windows (64bit/jdk1.8.0_51) - Build # 5051 - Failure!

2015-08-21 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Windows/5051/
Java: 64bit/jdk1.8.0_51 -XX:-UseCompressedOops -XX:+UseSerialGC

1 tests failed.
FAILED:  org.apache.solr.cloud.HttpPartitionTest.test

Error Message:
Captured an uncaught exception in thread: Thread[id=8909, 
name=SocketProxy-Response-51477:51704, state=RUNNABLE, 
group=TGRP-HttpPartitionTest]

Stack Trace:
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught 
exception in thread: Thread[id=8909, name=SocketProxy-Response-51477:51704, 
state=RUNNABLE, group=TGRP-HttpPartitionTest]
at 
__randomizedtesting.SeedInfo.seed([D8F4B9C6662308ED:50A0861CC8DF6515]:0)
Caused by: java.lang.RuntimeException: java.net.SocketException: Socket is 
closed
at __randomizedtesting.SeedInfo.seed([D8F4B9C6662308ED]:0)
at 
org.apache.solr.cloud.SocketProxy$Bridge$Pump.run(SocketProxy.java:347)
Caused by: java.net.SocketException: Socket is closed
at java.net.Socket.setSoTimeout(Socket.java:1137)
at 
org.apache.solr.cloud.SocketProxy$Bridge$Pump.run(SocketProxy.java:344)




Build Log:
[...truncated 10440 lines...]
   [junit4] Suite: org.apache.solr.cloud.HttpPartitionTest
   [junit4]   2 Creating dataDir: 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\test\J1\temp\solr.cloud.HttpPartitionTest_D8F4B9C6662308ED-001\init-core-data-001
   [junit4]   2 1284324 INFO  
(SUITE-HttpPartitionTest-seed#[D8F4B9C6662308ED]-worker) [] 
o.a.s.BaseDistributedSearchTestCase Setting hostContext system property: /ir_/hr
   [junit4]   2 1284328 INFO  
(TEST-HttpPartitionTest.test-seed#[D8F4B9C6662308ED]) [] 
o.a.s.c.ZkTestServer STARTING ZK TEST SERVER
   [junit4]   2 1284328 INFO  (Thread-2798) [] o.a.s.c.ZkTestServer client 
port:0.0.0.0/0.0.0.0:0
   [junit4]   2 1284328 INFO  (Thread-2798) [] o.a.s.c.ZkTestServer 
Starting server
   [junit4]   2 1284429 INFO  
(TEST-HttpPartitionTest.test-seed#[D8F4B9C6662308ED]) [] 
o.a.s.c.ZkTestServer start zk server on port:51422
   [junit4]   2 1284429 INFO  
(TEST-HttpPartitionTest.test-seed#[D8F4B9C6662308ED]) [] 
o.a.s.c.c.SolrZkClient Using default ZkCredentialsProvider
   [junit4]   2 1284430 INFO  
(TEST-HttpPartitionTest.test-seed#[D8F4B9C6662308ED]) [] 
o.a.s.c.c.ConnectionManager Waiting for client to connect to ZooKeeper
   [junit4]   2 1284433 INFO  (zkCallback-1647-thread-1) [] 
o.a.s.c.c.ConnectionManager Watcher 
org.apache.solr.common.cloud.ConnectionManager@511ca950 
name:ZooKeeperConnection Watcher:127.0.0.1:51422 got event WatchedEvent 
state:SyncConnected type:None path:null path:null type:None
   [junit4]   2 1284433 INFO  
(TEST-HttpPartitionTest.test-seed#[D8F4B9C6662308ED]) [] 
o.a.s.c.c.ConnectionManager Client is connected to ZooKeeper
   [junit4]   2 1284433 INFO  
(TEST-HttpPartitionTest.test-seed#[D8F4B9C6662308ED]) [] 
o.a.s.c.c.SolrZkClient Using default ZkACLProvider
   [junit4]   2 1284433 INFO  
(TEST-HttpPartitionTest.test-seed#[D8F4B9C6662308ED]) [] 
o.a.s.c.c.SolrZkClient makePath: /solr
   [junit4]   2 1284436 WARN  (NIOServerCxn.Factory:0.0.0.0/0.0.0.0:0) [] 
o.a.z.s.NIOServerCnxn caught end of stream exception
   [junit4]   2 EndOfStreamException: Unable to read additional data from 
client sessionid 0x14f51117ca5, likely client has closed socket
   [junit4]   2at 
org.apache.zookeeper.server.NIOServerCnxn.doIO(NIOServerCnxn.java:228)
   [junit4]   2at 
org.apache.zookeeper.server.NIOServerCnxnFactory.run(NIOServerCnxnFactory.java:208)
   [junit4]   2at java.lang.Thread.run(Thread.java:745)
   [junit4]   2 1284437 INFO  
(TEST-HttpPartitionTest.test-seed#[D8F4B9C6662308ED]) [] 
o.a.s.c.c.SolrZkClient Using default ZkCredentialsProvider
   [junit4]   2 1284439 INFO  
(TEST-HttpPartitionTest.test-seed#[D8F4B9C6662308ED]) [] 
o.a.s.c.c.ConnectionManager Waiting for client to connect to ZooKeeper
   [junit4]   2 1284440 INFO  (zkCallback-1648-thread-1) [] 
o.a.s.c.c.ConnectionManager Watcher 
org.apache.solr.common.cloud.ConnectionManager@16660d81 
name:ZooKeeperConnection Watcher:127.0.0.1:51422/solr got event WatchedEvent 
state:SyncConnected type:None path:null path:null type:None
   [junit4]   2 1284440 INFO  
(TEST-HttpPartitionTest.test-seed#[D8F4B9C6662308ED]) [] 
o.a.s.c.c.ConnectionManager Client is connected to ZooKeeper
   [junit4]   2 1284440 INFO  
(TEST-HttpPartitionTest.test-seed#[D8F4B9C6662308ED]) [] 
o.a.s.c.c.SolrZkClient Using default ZkACLProvider
   [junit4]   2 1284440 INFO  
(TEST-HttpPartitionTest.test-seed#[D8F4B9C6662308ED]) [] 
o.a.s.c.c.SolrZkClient makePath: /collections/collection1
   [junit4]   2 1284443 INFO  
(TEST-HttpPartitionTest.test-seed#[D8F4B9C6662308ED]) [] 
o.a.s.c.c.SolrZkClient makePath: /collections/collection1/shards
   [junit4]   2 1284445 INFO  
(TEST-HttpPartitionTest.test-seed#[D8F4B9C6662308ED]) [] 
o.a.s.c.c.SolrZkClient makePath: 

Re: 4.10 no longer buildable from source?

2015-08-21 Thread Erick Erickson
Worked fine on a fresh checkout for me with Java 1.7

On Fri, Aug 21, 2015 at 8:41 AM, Yonik Seeley ysee...@gmail.com wrote:
 Is it something specific to my setup, or a general issue now?
 -Yonik


 resolve:
 [ivy:retrieve]
 [ivy:retrieve] :: problems summary ::
 [ivy:retrieve]  WARNINGS
 [ivy:retrieve] [FAILED ]
 org.restlet.jee#org.restlet.ext.servlet;2.1.1!org.restlet.ext.servlet.jar:
  (0ms)
 [ivy:retrieve]  shared: tried
 [ivy:retrieve]
 /Users/yonik/.ivy2/shared/org.restlet.jee/org.restlet.ext.servlet/2.1.1/jars/org.restlet.ext.servlet.jar
 [ivy:retrieve]  public: tried
 [ivy:retrieve]
 http://repo1.maven.org/maven2/org/restlet/jee/org.restlet.ext.servlet/2.1.1/org.restlet.ext.servlet-2.1.1.jar
 [ivy:retrieve] ::
 [ivy:retrieve] ::  FAILED DOWNLOADS::
 [ivy:retrieve] :: ^ see resolution messages for details  ^ ::
 [ivy:retrieve] ::
 [ivy:retrieve] ::
 org.restlet.jee#org.restlet.ext.servlet;2.1.1!org.restlet.ext.servlet.jar
 [ivy:retrieve] ::
 [ivy:retrieve]
 [ivy:retrieve] :: USE VERBOSE OR DEBUG MESSAGE LEVEL FOR MORE DETAILS

 BUILD FAILED

 /opt/code/lusolr410/build.xml:119: The following error occurred while
 executing this line:
 /opt/code/lusolr410/solr/common-build.xml:365: The following error
 occurred while executing this line:
 /opt/code/lusolr410/solr/core/build.xml:65: impossible to resolve 
 dependencies:
 resolve failed - see output for details

 Total time: 39 seconds


 -Yonik

 -
 To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
 For additional commands, e-mail: dev-h...@lucene.apache.org


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-6758) Adding a SHOULD clause to a BQ over an empty field clears the score when using DefaultSimilarity

2015-08-21 Thread Terry Smith (JIRA)
Terry Smith created LUCENE-6758:
---

 Summary: Adding a SHOULD clause to a BQ over an empty field clears 
the score when using DefaultSimilarity
 Key: LUCENE-6758
 URL: https://issues.apache.org/jira/browse/LUCENE-6758
 Project: Lucene - Core
  Issue Type: Bug
Affects Versions: Trunk
Reporter: Terry Smith


A patch with a unit test showing the bug will be attached.

I've narrowed this change in behavior with git bisect to the following commit:

{noformat}
commit 698b4b56f0f2463b21c9e3bc67b8b47d635b7d1f
Author: Robert Muir rm...@apache.org
Date:   Thu Aug 13 17:37:15 2015 +

LUCENE-6711: Use CollectionStatistics.docCount() for IDF and average field 
length computations

git-svn-id: https://svn.apache.org/repos/asf/lucene/dev/trunk@1695744 
13f79535-47bb-0310-9956-ffa450edef68
{noformat}




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (LUCENE-6752) include Math.random() into forbiddenAPI

2015-08-21 Thread Uwe Schindler (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6752?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uwe Schindler resolved LUCENE-6752.
---
Resolution: Fixed

 include Math.random() into forbiddenAPI
 ---

 Key: LUCENE-6752
 URL: https://issues.apache.org/jira/browse/LUCENE-6752
 Project: Lucene - Core
  Issue Type: Bug
  Components: general/build
Reporter: Andrei Beliakov
Assignee: Uwe Schindler
Priority: Minor
 Fix For: Trunk, 5.4

 Attachments: LUCENE-6752.patch


 Math.random() should be included into forbiddenAPI
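For context (an aside not stated in the issue, but the usual rationale): Math.random() draws from a global, unseeded generator, so a randomized test that uses it cannot be replayed from a seed the way the test framework's seeded generators can. A minimal illustration of seed-based reproducibility, in Python rather than Java:

```python
import random

def draws(seed, n=5):
    # A seeded RNG instance replays the same sequence for the same seed,
    # which is what makes a randomized-test failure reproducible.
    rng = random.Random(seed)
    return [rng.randint(0, 9) for _ in range(n)]

# Same seed, same sequence -- unlike an unseeded global generator.
assert draws(42) == draws(42)
```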



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7789) Introduce a ConfigSet management API

2015-08-21 Thread Gregory Chanan (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7789?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14707411#comment-14707411
 ] 

Gregory Chanan commented on SOLR-7789:
--

Looks like there are some small semantic conflicts with SOLR-6760, which I'm 
working to address.

 Introduce a ConfigSet management API
 

 Key: SOLR-7789
 URL: https://issues.apache.org/jira/browse/SOLR-7789
 Project: Solr
  Issue Type: New Feature
Reporter: Gregory Chanan
Assignee: Gregory Chanan
 Attachments: SOLR-7789.patch, SOLR-7789.patch, SOLR-7789.patch


 SOLR-5955 describes a feature to automatically create a ConfigSet, based on 
 another one, from a collection API call (i.e. one step collection creation).  
 Discussion there yielded SOLR-7742, Immutable ConfigSet support.  To close 
 the loop, we need support for a ConfigSet management API.
 The simplest ConfigSet API could have one operation:
 create a new config set, based on an existing one, possibly modifying the 
 ConfigSet properties.  Note you need to be able to modify the ConfigSet 
 properties at creation time, because an Immutable ConfigSet could not otherwise be changed.
 Another logical operation to support is ConfigSet deletion; that may be more 
 complicated to implement than creation because you need to handle the case 
 where a collection is already using the configuration.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7955) Auto create .system collection on first start if it does not exist

2015-08-21 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7955?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14707462#comment-14707462
 ] 

Mark Miller commented on SOLR-7955:
---

Personally, I like the idea that the majority of people not using this feature 
don't have to deal with this internal collection. If they decide they want to 
use the blob api, they create the collection - simple.

 Auto create .system collection on first start if it does not exist
 --

 Key: SOLR-7955
 URL: https://issues.apache.org/jira/browse/SOLR-7955
 Project: Solr
  Issue Type: Improvement
Reporter: Jan Høydahl

 Why should a user need to create the {{.system}} collection manually? It 
 would simplify instructions related to the BLOB store if the user could assume it is 
 always there.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6752) include Math.random() into forbiddenAPI

2015-08-21 Thread Dawid Weiss (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6752?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14707302#comment-14707302
 ] 

Dawid Weiss commented on LUCENE-6752:
-

Looks good. +1

 include Math.random() into forbiddenAPI
 ---

 Key: LUCENE-6752
 URL: https://issues.apache.org/jira/browse/LUCENE-6752
 Project: Lucene - Core
  Issue Type: Bug
  Components: general/build
Reporter: Andrei Beliakov
Assignee: Uwe Schindler
Priority: Minor
 Fix For: Trunk, 5.4

 Attachments: LUCENE-6752.patch


 Math.random() should be included into forbiddenAPI



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-6629) Watch /collections zk node on all nodes

2015-08-21 Thread Scott Blum (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6629?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Scott Blum updated SOLR-6629:
-
Attachment: SOLR-6629.patch

Removed lazy collection caching behavior.  All tests pass on trunk for me.

 Watch /collections zk node on all nodes
 ---

 Key: SOLR-6629
 URL: https://issues.apache.org/jira/browse/SOLR-6629
 Project: Solr
  Issue Type: Improvement
  Components: SolrCloud
Reporter: Shalin Shekhar Mangar
Assignee: Noble Paul
 Fix For: Trunk

 Attachments: SOLR-6629.patch, SOLR-6629.patch, SOLR-6629.patch, 
 SOLR-6629.patch


 The main clusterstate.json is refreshed/used as a poor substitute for 
 informing all nodes about new or deleted collections even when the collection 
 being created or deleted has state format > 1. When we move away from state 
 format 1 then we should do away with this workaround and start watching the 
 /collections zk node on all nodes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7951) LBHttpSolrClient wraps ALL exceptions in No live SolrServers available to handle this request exception, even usage errors

2015-08-21 Thread Elaine Cario (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7951?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14707402#comment-14707402
 ] 

Elaine Cario commented on SOLR-7951:


[~eribeiro], Well, you got me there; I was asking myself the same question, 
because I had it both ways (returning SolrException, and casting it to 
SolrServerException) and couldn't figure out why the compiler or the runtime 
wasn't complaining. I went with the cast in case something downstream might 
complain about a SolrException coming out of a method that declared 
SolrServerException. Having pondered this, though, I think it is because 
SolrException (and RemoteSolrException) descend from RuntimeException, so you 
can throw them without declaring them. So I think you are right: it is more 
proper not to cast. I will test this some more next week to be absolutely 
sure, and I can also add appropriate comments around this.

 LBHttpSolrClient wraps ALL exceptions in No live SolrServers available to 
 handle this request exception, even usage errors
 

 Key: SOLR-7951
 URL: https://issues.apache.org/jira/browse/SOLR-7951
 Project: Solr
  Issue Type: Bug
  Components: SolrJ
Affects Versions: 5.2.1
Reporter: Elaine Cario
Priority: Minor
 Attachments: SOLR-7951-4.x.patch, SOLR-7951.patch


 We were experiencing many "No live SolrServers available to handle this 
 request" exceptions, even though we saw no outages with any of our servers.  
 It turned out the actual exceptions were related to the use of wildcards in 
 span queries (and in some cases other invalid queries or usage-type issues). 
 Traced it back to LBHttpSolrClient, which was wrapping all exceptions, even 
 plain SolrExceptions, in that outer exception.  
 Instead, wrapping in the outer exception should be reserved for true 
 communication issues in SolrCloud, and usage exceptions should be thrown as 
 is.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6760) New optimized DistributedQueue implementation for overseer

2015-08-21 Thread Gregory Chanan (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6760?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14707457#comment-14707457
 ] 

Gregory Chanan commented on SOLR-6760:
--

The naming here conflicts a bit with what I was trying to accomplish with 
SOLR-7789, so I'm asking here for suggestions on how to resolve.  In SOLR-7789 
I am trying to accomplish the following:
#1 Add another OverseerMessageHandler (OverseerConfigSetMessageHandler) to 
handle ConfigSet-related operations.
#2 From the perspective of the non-overseer (i.e. the ConfigSetsHandler), it 
looks like the operations are written to a separate queue from the collection 
queue, i.e.  getOverseerConfigSetQueue()
#3 Since the ConfigSet operations are most likely rare and fast, it made sense 
to just use the existing collections queue under the covers and handle the 
dispatch separately.  The naming here breaks the illusion of #2, i.e. if I 
return an OverseerCollectionQueue it's pretty obvious to the non-overseer 
what's going on.

So, here's my plan:
Short term: rename OverseerCollectionQueue to something more 
generic...DistributedTaskQueue?  DistributedAsyncAwareQueue?  There's nothing 
in there that is actually collection specific (which is why it works for the 
ConfigSet operations)

Longer term:  I have some more suggestions for the queue interface in 
SOLR-7789.  For example, on the insertion side the queue should be ZKNodeProps 
based rather than byte [] based so we can return different queue types that 
understand the semantics of the operations being inserted (hard to do that with 
a byte []).  In particular, I want to prefix all operation names to the 
ConfigSetQueue with "configsets:" automatically, to simplify the dispatching to 
the correct OverseerMessageHandler.  The ConfigSetsHandler needs to do this now (so 
sort of breaks the illusion of #2) because of the interface.  There's probably 
a lot more room to break things up for client vs processing side as well -- 
i.e. why does the CollectionsHandler / ConfigSetsHandler get access to an 
object that lets it remove something from the queue?
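The prefix-based dispatch described above can be sketched as follows (a hypothetical illustration of the idea only; the real Overseer handler classes, names, and message format differ):

```python
def dispatch(operation, handlers, default):
    """Route a queued operation name to a message handler by prefix.

    handlers maps a prefix such as "configsets" to a handler function;
    operations without a known prefix fall through to the default
    (collection) handler.
    """
    if ":" in operation:
        prefix, rest = operation.split(":", 1)
        if prefix in handlers:
            return handlers[prefix](rest)
    return default(operation)

# Usage: "configsets:create" goes to the ConfigSet handler; everything
# else goes to the collection handler (names here are hypothetical).
handlers = {"configsets": lambda op: ("configset", op)}
default = lambda op: ("collection", op)
```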

 New optimized DistributedQueue implementation for overseer
 --

 Key: SOLR-6760
 URL: https://issues.apache.org/jira/browse/SOLR-6760
 Project: Solr
  Issue Type: Improvement
  Components: SolrCloud
Reporter: Noble Paul
Assignee: Shalin Shekhar Mangar
 Fix For: Trunk, 5.4

 Attachments: SOLR-6760-branch_5x.patch, SOLR-6760.patch, 
 SOLR-6760.patch, SOLR-6760.patch, SOLR-6760.patch, deadlock.patch


 Currently the DQ works as follows
 * read all items in the directory
 * sort them all 
 * take the head and return it and discard everything else
 * rinse and repeat
 This works well when we have only a handful of items in the queue. If the 
 number of items in the queue is much larger (in the tens of thousands), this is 
 counterproductive.
 As the overseer queue is a multiple-producer, single-consumer queue, we can 
 read them all in bulk, and before processing each item just do a 
 zk.exists(itemname); if all is well, we don't need to do the fetch-all + 
 sort thing again.
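A self-contained sketch of the proposed consume loop (hedged: `znodes` is a plain dict standing in for the queue's ZooKeeper children, and the membership check stands in for zk.exists; the real DistributedQueue API differs):

```python
def drain_in_bulk(znodes):
    """Simulate the optimized overseer queue consumer described above.

    znodes maps item name -> payload.  One bulk read plus one sort, then a
    cheap existence check per item, instead of re-fetching and re-sorting
    the whole directory for every item consumed.
    """
    processed = []
    for name in sorted(znodes.keys()):   # bulk read + single sort
        if name not in znodes:           # stands in for zk.exists(itemname)
            continue                     # node removed mid-drain; skip it
        processed.append(znodes.pop(name))
    return processed
```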



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6752) include Math.random() into forbiddenAPI

2015-08-21 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6752?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14707357#comment-14707357
 ] 

ASF subversion and git services commented on LUCENE-6752:
-

Commit 1697052 from [~thetaphi] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1697052 ]

Merged revision(s) 1697050 from lucene/dev/trunk:
LUCENE-6752: Add Math#random() to forbiddenapis

 include Math.random() into forbiddenAPI
 ---

 Key: LUCENE-6752
 URL: https://issues.apache.org/jira/browse/LUCENE-6752
 Project: Lucene - Core
  Issue Type: Bug
  Components: general/build
Reporter: Andrei Beliakov
Assignee: Uwe Schindler
Priority: Minor
 Fix For: Trunk, 5.4

 Attachments: LUCENE-6752.patch


 Math.random() should be included into forbiddenAPI



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-7789) Introduce a ConfigSet management API

2015-08-21 Thread Gregory Chanan (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7789?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gregory Chanan updated SOLR-7789:
-
Attachment: SOLR-7789.patch

Generated an svn patch, let me know if this works, Mark.

 Introduce a ConfigSet management API
 

 Key: SOLR-7789
 URL: https://issues.apache.org/jira/browse/SOLR-7789
 Project: Solr
  Issue Type: New Feature
Reporter: Gregory Chanan
Assignee: Gregory Chanan
 Attachments: SOLR-7789.patch, SOLR-7789.patch, SOLR-7789.patch


 SOLR-5955 describes a feature to automatically create a ConfigSet, based on 
 another one, from a collection API call (i.e. one step collection creation).  
 Discussion there yielded SOLR-7742, Immutable ConfigSet support.  To close 
 the loop, we need support for a ConfigSet management API.
 The simplest ConfigSet API could have one operation:
 create a new config set, based on an existing one, possibly modifying the 
 ConfigSet properties.  Note you need to be able to modify the ConfigSet 
 properties at creation time, because an Immutable ConfigSet could not otherwise be changed.
 Another logical operation to support is ConfigSet deletion; that may be more 
 complicated to implement than creation because you need to handle the case 
 where a collection is already using the configuration.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6699) Integrate lat/lon BKD and spatial3d

2015-08-21 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14707412#comment-14707412
 ] 

Michael McCandless commented on LUCENE-6699:


Another failure:

{noformat}
   [junit4] Suite: org.apache.lucene.bkdtree3d.TestGeo3DPointField
   [junit4]   2 Ogos 21, 2015 8:29:29 PM 
com.carrotsearch.randomizedtesting.RandomizedRunner$QueueUncaughtExceptionsHandler
 uncaughtException
   [junit4]   2 WARNING: Uncaught exception in thread: 
Thread[T3,5,TGRP-TestGeo3DPointField]
   [junit4]   2 java.lang.AssertionError: T3: iter=341 id=70337 docID=417 
lat=-0.002164069780096702 lon=0.007505617500830066 expected false but got: true 
deleted?=false
   [junit4]   2   point1=[lat=-0.002164069780096702, 
lon=0.007505617500830066], iswithin=false
   [junit4]   2   point2=[X=1.0010882593761607, Y=0.007513926205930265, 
Z=-0.0021664888729185277], iswithin=false
   [junit4]   2   query=PointInGeo3DShapeQuery: field=point:PlanetModel: 
PlanetModel.WGS84 Shape: GeoCircle: {planetmodel=PlanetModel.WGS84, 
center=[lat=-0.006450320645814321, lon=0.004660694205115142], 
radius=0.00489710732634323(0.28058358162206176)}
   [junit4]   2at 
__randomizedtesting.SeedInfo.seed([9CF59027DCD28E6D]:0)
   [junit4]   2at org.junit.Assert.fail(Assert.java:93)
   [junit4]   2at 
org.apache.lucene.bkdtree3d.TestGeo3DPointField$4._run(TestGeo3DPointField.java:624)
   [junit4]   2at 
org.apache.lucene.bkdtree3d.TestGeo3DPointField$4.run(TestGeo3DPointField.java:520)
   [junit4]   2 
   [junit4]   2 NOTE: reproduce with: ant test  -Dtestcase=TestGeo3DPointField 
-Dtests.method=testRandomMedium -Dtests.seed=9CF59027DCD28E6D 
-Dtests.multiplier=5 -Dtests.slow=true 
-Dtests.linedocsfile=/lucenedata/hudson.enwiki.random.lines.txt.fixed 
-Dtests.locale=ms -Dtests.timezone=Africa/Nouakchott -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8
   [junit4] ERROR   9.55s | TestGeo3DPointField.testRandomMedium 
   [junit4] Throwable #1: 
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught 
exception in thread: Thread[id=50, name=T3, state=RUNNABLE, group=TGRP-Test
{noformat}

 Integrate lat/lon BKD and spatial3d
 ---

 Key: LUCENE-6699
 URL: https://issues.apache.org/jira/browse/LUCENE-6699
 Project: Lucene - Core
  Issue Type: New Feature
Reporter: Michael McCandless
Assignee: Michael McCandless
 Attachments: Geo3DPacking.java, LUCENE-6699.patch, LUCENE-6699.patch, 
 LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, 
 LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, 
 LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, 
 LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, 
 LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, 
 LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch


 I'm opening this for discussion, because I'm not yet sure how to do
 this integration, because of my ignorance about spatial in general and
 spatial3d in particular :)
 Our BKD tree impl is very fast at doing lat/lon shape intersection
 (bbox, polygon, soon distance: LUCENE-6698) against previously indexed
 points.
 I think to integrate with spatial3d, we would first need to record
 lat/lon/z into doc values.  Somewhere I saw discussion about how we
 could stuff all 3 into a single long value with acceptable precision
 loss?  Or, we could use BinaryDocValues?  We need all 3 dims available
 to do the fast per-hit query time filtering.
 But, second: what do we index into the BKD tree?  Can we just index
 earth surface lat/lon, and then at query time is spatial3d able to
 give me an enclosing surface lat/lon bbox for a 3d shape?  Or
 ... must we index all 3 dimensions into the BKD tree (seems like this
 could be somewhat wasteful)?
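The "stuff all 3 into a single long value with acceptable precision loss" idea mentioned above can be sketched by quantizing each coordinate to 21 bits (a hedged illustration; Lucene's actual encoding, value ranges, and bit budget may differ):

```python
BITS = 21                        # 3 * 21 = 63 bits fit in a signed long
SCALE = (1 << BITS) - 1

def pack(x, y, z, lo=-1.1, hi=1.1):
    """Quantize three coordinates in [lo, hi] into one 63-bit integer."""
    def q(v):
        return round((v - lo) / (hi - lo) * SCALE)
    return (q(x) << (2 * BITS)) | (q(y) << BITS) | q(z)

def unpack(packed, lo=-1.1, hi=1.1):
    """Invert pack(); recovered values differ by at most one quantum."""
    def d(bits):
        return lo + (bits / SCALE) * (hi - lo)
    return (d(packed >> (2 * BITS)),
            d((packed >> BITS) & SCALE),
            d(packed & SCALE))
```

With 21 bits over a span of 2.2, the quantization error is about 1e-6 per coordinate, which is the "acceptable precision loss" trade-off in question.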



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7789) Introduce a ConfigSet management API

2015-08-21 Thread Gregory Chanan (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7789?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14707459#comment-14707459
 ] 

Gregory Chanan commented on SOLR-7789:
--

Commented in SOLR-6760 with a plan for how to address: 
https://issues.apache.org/jira/browse/SOLR-6760?focusedCommentId=14707457&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14707457

 Introduce a ConfigSet management API
 

 Key: SOLR-7789
 URL: https://issues.apache.org/jira/browse/SOLR-7789
 Project: Solr
  Issue Type: New Feature
Reporter: Gregory Chanan
Assignee: Gregory Chanan
 Attachments: SOLR-7789.patch, SOLR-7789.patch, SOLR-7789.patch


 SOLR-5955 describes a feature to automatically create a ConfigSet, based on 
 another one, from a collection API call (i.e. one step collection creation).  
 Discussion there yielded SOLR-7742, Immutable ConfigSet support.  To close 
 the loop, we need support for a ConfigSet management API.
 The simplest ConfigSet API could have one operation:
 create a new config set, based on an existing one, possibly modifying the 
 ConfigSet properties.  Note you need to be able to modify the ConfigSet 
 properties at creation time, because an Immutable ConfigSet could not otherwise be changed.
 Another logical operation to support is ConfigSet deletion; that may be more 
 complicated to implement than creation because you need to handle the case 
 where a collection is already using the configuration.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6752) include Math.random() into forbiddenAPI

2015-08-21 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6752?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14707337#comment-14707337
 ] 

ASF subversion and git services commented on LUCENE-6752:
-

Commit 1697050 from [~thetaphi] in branch 'dev/trunk'
[ https://svn.apache.org/r1697050 ]

LUCENE-6752: Add Math#random() to forbiddenapis

 include Math.random() into forbiddenAPI
 ---

 Key: LUCENE-6752
 URL: https://issues.apache.org/jira/browse/LUCENE-6752
 Project: Lucene - Core
  Issue Type: Bug
  Components: general/build
Reporter: Andrei Beliakov
Assignee: Uwe Schindler
Priority: Minor
 Fix For: Trunk, 5.4

 Attachments: LUCENE-6752.patch


 Math.random() should be included into forbiddenAPI



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-7955) Auto create .system collection on first start if it does not exist

2015-08-21 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7955?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14707362#comment-14707362
 ] 

Shawn Heisey edited comment on SOLR-7955 at 8/21/15 7:51 PM:
-

Devil's advocate thoughts:

If this is auto-created when the first node is started, then it will be a 
non-redundant collection.  If we wait for two nodes, then a single-node test 
install will not have it.  If we automatically add replicas as new nodes are 
brought up, the user might be very unhappy with that decision.

I do like the idea of creating this collection automatically, but its behavior 
must be configurable, with sensible defaults.  Any actions taken (or explicitly 
NOT taken) should probably be logged at WARN so that someone looking in the 
logging tab of the admin UI can see them.



was (Author: elyograg):
Devil's advocate thoughts:

If this is auto-created when the first node is started, then it will be a 
non-redundant collection.  If we wait for two nodes, then a single-host test 
install will not have it.  If we automatically add replicas as new nodes are 
brought up, the user might be very unhappy with that decision.

I do like the idea of creating this collection automatically, but its behavior 
must be configurable, with sensible defaults.  Any actions taken (or explicitly 
NOT taken) should probably be logged at WARN so that someone looking in the 
logging tab of the admin UI can see them.


 Auto create .system collection on first start if it does not exist
 --

 Key: SOLR-7955
 URL: https://issues.apache.org/jira/browse/SOLR-7955
 Project: Solr
  Issue Type: Improvement
Reporter: Jan Høydahl

 Why should a user need to create the {{.system}} collection manually? It 
 would simplify instructions related to the BLOB store if the user could assume it is 
 always there.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7955) Auto create .system collection on first start if it does not exist

2015-08-21 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7955?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14707362#comment-14707362
 ] 

Shawn Heisey commented on SOLR-7955:


Devil's advocate thoughts:

If this is auto-created when the first node is started, then it will be a 
non-redundant collection.  If we wait for two nodes, then a single-host test 
install will not have it.  If we automatically add replicas as new nodes are 
brought up, the user might be very unhappy with that decision.

I do like the idea of creating this collection automatically, but its behavior 
must be configurable, with sensible defaults.  Any actions taken (or explicitly 
NOT taken) should probably be logged at WARN so that someone looking in the 
logging tab of the admin UI can see them.


 Auto create .system collection on first start if it does not exist
 --

 Key: SOLR-7955
 URL: https://issues.apache.org/jira/browse/SOLR-7955
 Project: Solr
  Issue Type: Improvement
Reporter: Jan Høydahl

 Why should a user need to create the {{.system}} collection manually? It 
 would simplify instructions related to the BLOB store if the user could assume it is 
 always there.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-7795) Fold Interval Faceting into Range Faceting

2015-08-21 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/SOLR-7795?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tomás Fernández Löbbe updated SOLR-7795:

Attachment: SOLR-7795.patch

Thanks for the update, and sorry for the late response [~liahsuan]. I'm 
uploading a new patch with your changes and a couple of minor changes I did. I 
think the patch is mostly ready, but I still have to check how this can affect 
the integration with pivot faceting. 

If anyone has any comments on the API, please let me know:
The requests will look like this:
{noformat}
facet=true
facet.range=price
facet.range.start=0
facet.range.end=50
facet.range.gap=10
rows=0
f.price.facet.range.set=[0,2]
wt=json
{noformat}

and the response like:
{noformat}
"facet_counts":{
  "facet_queries":{},
  "facet_fields":{},
  "facet_dates":{},
  "facet_ranges":{
    "price":{
      "counts":[
        "0.0",0,
        "10.0",0,
        "20.0",0,
        "30.0",0,
        "40.0",0],
      "gap":10.0,
      "start":0.0,
      "end":50.0,
      "intervals":{
        "[0,2]":0}}},
  "facet_intervals":{},
  "facet_heatmaps":{}}}
{noformat}
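For reference, the request above can be assembled into a query string as follows (a hedged sketch; the host, port, and core name are hypothetical):

```python
from urllib.parse import urlencode

# The range-facet parameters shown above, as (name, value) pairs.
params = [
    ("facet", "true"),
    ("facet.range", "price"),
    ("facet.range.start", "0"),
    ("facet.range.end", "50"),
    ("facet.range.gap", "10"),
    ("rows", "0"),
    ("f.price.facet.range.set", "[0,2]"),
    ("wt", "json"),
]
query = urlencode(params)  # percent-encodes the [0,2] interval value
url = "http://localhost:8983/solr/collection1/select?" + query
```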

 Fold Interval Faceting into Range Faceting
 --

 Key: SOLR-7795
 URL: https://issues.apache.org/jira/browse/SOLR-7795
 Project: Solr
  Issue Type: Task
Reporter: Tomás Fernández Löbbe
 Fix For: 5.3, Trunk

 Attachments: SOLR-7795.patch


 Now that range faceting supports a "filter" and a "dv" method, and that 
 interval faceting is supported on fields with {{docValues=false}}, I think we 
 should make it so that interval faceting is just a different way of 
 specifying ranges in range faceting, allowing users to indicate specific 
 ranges.
 I propose we use the same syntax for intervals, but under the range 
 parameter family:
 {noformat}
 facet.range=price
 f.price.facet.range.set=[0,10]
 f.price.facet.range.set=(10,100]
 {noformat}
 The counts for those ranges would come in the response also inside of the 
 range_facets section. I'm not sure if it's better to include the ranges in 
 the counts section, or in a different section (intervals?sets?buckets?). 
 I'm open to suggestions. 
 {code}
 "facet_ranges":{
   "price":{
     "counts":[
       "[0,10]",3,
       "(10,100]",2]
   }
 }
 {code}
 or…
 {code}
 "facet_ranges":{
   "price":{
     "intervals":[
       "[0,10]",3,
       "(10,100]",2]
   }
 }
 {code}
 We should support people specifying both things on the same field.
 Once this is done, interval faceting could be deprecated, as all its 
 functionality is now possible through range queries. 
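The bracket notation in the proposal ([0,10] inclusive on both ends, (10,100] exclusive lower bound) can be illustrated with a small membership check (an illustration of the syntax only, not Solr's actual interval parser):

```python
def interval_contains(interval, value):
    """Return True if value falls in an interval written like "[0,10]"
    or "(10,100]": square bracket = inclusive, parenthesis = exclusive."""
    lo_inclusive = interval[0] == "["
    hi_inclusive = interval[-1] == "]"
    lo, hi = (float(x) for x in interval[1:-1].split(","))
    above = value > lo or (lo_inclusive and value == lo)
    below = value < hi or (hi_inclusive and value == hi)
    return above and below
```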



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7789) Introduce a ConfigSet management API

2015-08-21 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7789?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14707386#comment-14707386
 ] 

Mark Miller commented on SOLR-7789:
---

Clean apply, thanks!

 Introduce a ConfigSet management API
 

 Key: SOLR-7789
 URL: https://issues.apache.org/jira/browse/SOLR-7789
 Project: Solr
  Issue Type: New Feature
Reporter: Gregory Chanan
Assignee: Gregory Chanan
 Attachments: SOLR-7789.patch, SOLR-7789.patch, SOLR-7789.patch


 SOLR-5955 describes a feature to automatically create a ConfigSet, based on 
 another one, from a collection API call (i.e. one step collection creation).  
 Discussion there yielded SOLR-7742, Immutable ConfigSet support.  To close 
 the loop, we need support for a ConfigSet management API.
 The simplest ConfigSet API could have one operation:
 create a new config set, based on an existing one, possibly modifying the 
 ConfigSet properties.  Note you need to be able to modify the ConfigSet 
 properties at creation time, because an Immutable ConfigSet could not otherwise be changed.
 Another logical operation to support is ConfigSet deletion; that may be more 
 complicated to implement than creation because you need to handle the case 
 where a collection is already using the configuration.






[JENKINS] Lucene-Solr-NightlyTests-5.3 - Build # 12 - Still Failing

2015-08-21 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-5.3/12/

3 tests failed.
FAILED:  
org.apache.solr.handler.TestReplicationHandler.doTestReplicateAfterCoreReload

Error Message:
expected:[{indexVersion=1440187414141,generation=2,filelist=[_6t.fdt, _6t.fdx, 
_6t.fnm, _6t.nvd, _6t.nvm, _6t.si, _6t_Memory_0.ram, _6z.cfe, _6z.cfs, _6z.si, 
_70.cfe, _70.cfs, _70.si, _71.cfe, _71.cfs, _71.si, _72.cfe, _72.cfs, _72.si, 
_73.cfe, _73.cfs, _73.si, segments_2]}] but 
was:[{indexVersion=1440187414141,generation=3,filelist=[_6t.fdt, _6t.fdx, 
_6t.fnm, _6t.nvd, _6t.nvm, _6t.si, _6t_Memory_0.ram, _74.cfe, _74.cfs, _74.si, 
segments_3]}, {indexVersion=1440187414141,generation=2,filelist=[_6t.fdt, 
_6t.fdx, _6t.fnm, _6t.nvd, _6t.nvm, _6t.si, _6t_Memory_0.ram, _6z.cfe, _6z.cfs, 
_6z.si, _70.cfe, _70.cfs, _70.si, _71.cfe, _71.cfs, _71.si, _72.cfe, _72.cfs, 
_72.si, _73.cfe, _73.cfs, _73.si, segments_2]}]

Stack Trace:
java.lang.AssertionError: 
expected:[{indexVersion=1440187414141,generation=2,filelist=[_6t.fdt, _6t.fdx, 
_6t.fnm, _6t.nvd, _6t.nvm, _6t.si, _6t_Memory_0.ram, _6z.cfe, _6z.cfs, _6z.si, 
_70.cfe, _70.cfs, _70.si, _71.cfe, _71.cfs, _71.si, _72.cfe, _72.cfs, _72.si, 
_73.cfe, _73.cfs, _73.si, segments_2]}] but 
was:[{indexVersion=1440187414141,generation=3,filelist=[_6t.fdt, _6t.fdx, 
_6t.fnm, _6t.nvd, _6t.nvm, _6t.si, _6t_Memory_0.ram, _74.cfe, _74.cfs, _74.si, 
segments_3]}, {indexVersion=1440187414141,generation=2,filelist=[_6t.fdt, 
_6t.fdx, _6t.fnm, _6t.nvd, _6t.nvm, _6t.si, _6t_Memory_0.ram, _6z.cfe, _6z.cfs, 
_6z.si, _70.cfe, _70.cfs, _70.si, _71.cfe, _71.cfs, _71.si, _72.cfe, _72.cfs, 
_72.si, _73.cfe, _73.cfs, _73.si, segments_2]}]
at 
__randomizedtesting.SeedInfo.seed([ABBC68D25DC3779C:8E6B73E22D8B799F]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:147)
at 
org.apache.solr.handler.TestReplicationHandler.doTestReplicateAfterCoreReload(TestReplicationHandler.java:1138)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 

[jira] [Commented] (SOLR-7928) Improve CheckIndex to work against HdfsDirectory

2015-08-21 Thread Mike Drob (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7928?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14707470#comment-14707470
 ] 

Mike Drob commented on SOLR-7928:
-

Robert, thanks for looking. I'll put up a new patch without subclassing 
shortly. I can also make CheckIndex final in my patch, or save that for a 
separate issue to minimize/contain the changeset.

As I'm working on this, it looks like I'm going to have to increase the 
visibility on a bunch of things in CheckIndex to avoid code duplication. Let me 
know if you think this is going to be a problem.

 Improve CheckIndex to work against HdfsDirectory
 

 Key: SOLR-7928
 URL: https://issues.apache.org/jira/browse/SOLR-7928
 Project: Solr
  Issue Type: New Feature
  Components: hdfs
Reporter: Mike Drob
 Fix For: Trunk, 5.4

 Attachments: SOLR-7928.patch


 CheckIndex is very useful for testing an index for corruption. However, it 
 can only work with an index on an FSDirectory, meaning that if you need to 
 check an HDFS index, you have to download it to local disk (which can be 
 very large).
 We should have a way to natively check an index on HDFS for corruption.






[jira] [Commented] (LUCENE-6697) Use 1D KD tree for alternative to postings based numeric range filters

2015-08-21 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6697?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14707297#comment-14707297
 ] 

ASF subversion and git services commented on LUCENE-6697:
-

Commit 1697046 from [~mikemccand] in branch 'dev/branches/lucene6699'
[ https://svn.apache.org/r1697046 ]

LUCENE-6697: never use the original IndexInput for real IO

 Use 1D KD tree for alternative to postings based numeric range filters
 --

 Key: LUCENE-6697
 URL: https://issues.apache.org/jira/browse/LUCENE-6697
 Project: Lucene - Core
  Issue Type: New Feature
Reporter: Michael McCandless
Assignee: Michael McCandless
 Fix For: 5.3, Trunk

 Attachments: LUCENE-6697.patch, LUCENE-6697.patch, LUCENE-6697.patch


 Today Lucene uses postings to index a numeric value at multiple
 precision levels for fast range searching.  It's somewhat costly: each
 numeric value is indexed with multiple terms (4 terms by default)
 ... I think a dedicated 1D BKD tree should be more compact and perform
 better.
 It should also easily generalize beyond 64 bits to arbitrary byte[],
 e.g. for LUCENE-5596, but I haven't explored that here.
 A 1D BKD tree just sorts all values, and then indexes adjacent leaf
 blocks of size 512-1024 (by default) values per block, and their
 docIDs, into a fully balanced binary tree.  Building the range filter
 is then just a recursive walk through this tree.
 It's the same structure we use for 2D lat/lon BKD tree, just with 1D
 instead.  I implemented it as a DocValuesFormat that also writes the
 numeric tree on the side.
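The description above (sort all values once, cut them into leaf blocks, then walk the structure so only boundary blocks are scanned value-by-value) can be sketched in plain Java. This is a toy model, not Lucene's BKD implementation; a real tree reaches the boundary blocks through a balanced binary tree rather than a linear block scan:

```java
import java.util.Arrays;

// Toy model of the 1D idea: sorted values, fixed-size leaf blocks, and a
// range count that takes fully-contained blocks wholesale while scanning
// only the boundary blocks value-by-value.
public class Sorted1D {
    private final long[] sorted;
    private final int blockSize;

    public Sorted1D(long[] values, int blockSize) {
        this.sorted = values.clone();
        Arrays.sort(this.sorted);
        this.blockSize = blockSize;
    }

    /** Counts values in [min, max] (both inclusive). */
    public long rangeCount(long min, long max) {
        long count = 0;
        for (int start = 0; start < sorted.length; start += blockSize) {
            int end = Math.min(start + blockSize, sorted.length);
            long blockMin = sorted[start], blockMax = sorted[end - 1];
            if (blockMin > max || blockMax < min) {
                continue;                         // block entirely outside the range
            } else if (blockMin >= min && blockMax <= max) {
                count += end - start;             // block entirely inside: no per-value work
            } else {
                for (int i = start; i < end; i++) {  // boundary block: scan values
                    if (sorted[i] >= min && sorted[i] <= max) count++;
                }
            }
        }
        return count;
    }
}
```

The payoff is that for a wide range, almost all matching blocks are counted (or, with docIDs, collected) without inspecting individual values.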






[jira] [Commented] (LUCENE-6699) Integrate lat/lon BKD and spatial3d

2015-08-21 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14707313#comment-14707313
 ] 

ASF subversion and git services commented on LUCENE-6699:
-

Commit 1697048 from [~mikemccand] in branch 'dev/branches/lucene6699'
[ https://svn.apache.org/r1697048 ]

LUCENE-6699: throw IllegalArgExc for too-small GeoCircles; increase 
MINIMUM_RESOLUTION again

 Integrate lat/lon BKD and spatial3d
 ---

 Key: LUCENE-6699
 URL: https://issues.apache.org/jira/browse/LUCENE-6699
 Project: Lucene - Core
  Issue Type: New Feature
Reporter: Michael McCandless
Assignee: Michael McCandless
 Attachments: Geo3DPacking.java, LUCENE-6699.patch, LUCENE-6699.patch, 
 LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, 
 LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, 
 LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, 
 LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, 
 LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, 
 LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch


 I'm opening this for discussion, because I'm not yet sure how to do
 this integration, because of my ignorance about spatial in general and
 spatial3d in particular :)
 Our BKD tree impl is very fast at doing lat/lon shape intersection
 (bbox, polygon, soon distance: LUCENE-6698) against previously indexed
 points.
 I think to integrate with spatial3d, we would first need to record
 lat/lon/z into doc values.  Somewhere I saw discussion about how we
 could stuff all 3 into a single long value with acceptable precision
 loss?  Or, we could use BinaryDocValues?  We need all 3 dims available
 to do the fast per-hit query time filtering.
 But, second: what do we index into the BKD tree?  Can we just index
 earth surface lat/lon, and then at query time is spatial3d able to
 give me an enclosing surface lat/lon bbox for a 3d shape?  Or
 ... must we index all 3 dimensions into the BKD tree (seems like this
 could be somewhat wasteful)?
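The "stuff all 3 into a single long value with acceptable precision loss" idea floated above can be sketched as follows. The 21-bit budget per coordinate and the layout are purely illustrative assumptions, not what Lucene or spatial3d actually does:

```java
// Hedged sketch: quantize each of three coordinates to 21 bits over an
// assumed [-1, 1] range and pack them side by side into one 64-bit long.
public class PackedXYZ {
    private static final int BITS = 21;
    private static final long MASK = (1L << BITS) - 1;

    /** Quantizes v in [min, max] to BITS bits. */
    private static long quantize(double v, double min, double max) {
        return Math.round((v - min) / (max - min) * MASK);
    }

    private static double dequantize(long q, double min, double max) {
        return min + (q & MASK) * (max - min) / MASK;
    }

    /** Packs three coordinates, each assumed to lie in [-1, 1]. */
    public static long pack(double x, double y, double z) {
        return (quantize(x, -1, 1) << (2 * BITS))
             | (quantize(y, -1, 1) << BITS)
             |  quantize(z, -1, 1);
    }

    public static double unpackX(long p) { return dequantize(p >>> (2 * BITS), -1, 1); }
    public static double unpackY(long p) { return dequantize(p >>> BITS, -1, 1); }
    public static double unpackZ(long p) { return dequantize(p, -1, 1); }
}
```

With 21 bits per dimension the round-trip error is on the order of 2/2^21 ≈ 1e-6 of the range, which is the "acceptable precision loss" trade-off being discussed.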






Re: Ant/JUnit/Bash bug with RTL languages?

2015-08-21 Thread Dawid Weiss
 Oh, those welcome messages don't correlate to the random locale used for
 testing? Moving on, then...

No, they don't. They're actually printed from a different (master)
JVM; the locale you pass for test execution is propagated to the forked
JVM(s).

The welcome message is a tiny joke. It is derived and fully
reproducible based on the master seed (so for the same master seed
you'll get exactly the same welcome message). And I thought it'd be a
nice test of the console's support for exotic unicode languages...
maybe it wasn't the best idea.

Dawid
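The "fully reproducible based on the master seed" point is just deterministic pseudo-randomness; a tiny sketch (the message list is invented for the example, this is not the actual build code):

```java
import java.util.Random;

// Anything derived from the master seed is reproducible: seeding
// java.util.Random with the same value always yields the same pick.
public class SeededPick {
    private static final String[] MESSAGES = {"hello", "witaj", "ahoj", "salut"};

    public static String welcome(long masterSeed) {
        return MESSAGES[new Random(masterSeed).nextInt(MESSAGES.length)];
    }
}
```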




[jira] [Commented] (LUCENE-6689) Odd analysis problem with WDF, appears to be triggered by preceding analysis components

2015-08-21 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6689?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14707408#comment-14707408
 ] 

Shawn Heisey commented on LUCENE-6689:
--

I have simplified the analysis chain, removing the ICU tokenizer and replacing 
it with the whitespace tokenizer.  The root problem appears to be an 
interaction between PatternReplaceFilter and WordDelimiterFilter.

With the following Solr analysis chain, an indexed value of "aaa-bbb: ccc" will 
not be found by a phrase search for "aaa bbb", because the positions of the two 
query terms don't match what's in the index.  The positions go wrong at the 
WordDelimiterFilter step.

{code}
<fieldType name="genText2" class="solr.TextField" sortMissingLast="true"
           positionIncrementGap="100">
  <analyzer>
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    <filter class="solr.PatternReplaceFilterFactory"
            pattern="^(\p{Punct}*)(.*?)(\p{Punct}*)$"
            replacement="$2"/>
    <filter class="solr.WordDelimiterFilterFactory"
            splitOnCaseChange="1"
            splitOnNumerics="1"
            stemEnglishPossessive="1"
            generateWordParts="1"
            generateNumberParts="1"
            catenateWords="1"
            catenateNumbers="1"
            catenateAll="0"
            preserveOriginal="1"/>
  </analyzer>
</fieldType>
{code}

If I remove PRFF from the above chain, the problem goes away.  This filter is 
in the chain so that leading and trailing punctuation are removed from all 
terms, leaving punctuation inside the term for WDF to handle.

An additional problem with the analysis quoted above is that the "aaabbb" term 
is indexed at position 2.  I believe it should be at position 1.  This problem 
is also fixed by removing PRFF.
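The PatternReplaceFilter pattern in the chain above can be exercised directly with the JDK regex engine to see what it feeds into WDF. A small self-contained check (this reproduces only the regex behavior, not the filter chain itself):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Stdlib check of the PatternReplaceFilter regex used in the chain: it trims
// leading and trailing punctuation from a token while keeping interior
// punctuation intact for WDF to handle.
public class PunctTrim {
    private static final Pattern TRIM =
        Pattern.compile("^(\\p{Punct}*)(.*?)(\\p{Punct}*)$");

    public static String trim(String token) {
        Matcher m = TRIM.matcher(token);
        return m.matches() ? m.group(2) : token;
    }

    public static void main(String[] args) {
        System.out.println(trim("aaa-bbb:")); // aaa-bbb  (interior '-' survives)
        System.out.println(trim("(test)"));   // test
    }
}
```

So after PRFF, "aaa-bbb:" reaches WordDelimiterFilter as "aaa-bbb"; the position discrepancy then arises in how WDF assigns increments to the split parts.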


 Odd analysis problem with WDF, appears to be triggered by preceding analysis 
 components
 ---

 Key: LUCENE-6689
 URL: https://issues.apache.org/jira/browse/LUCENE-6689
 Project: Lucene - Core
  Issue Type: Bug
Affects Versions: 4.8
Reporter: Shawn Heisey

 This problem shows up for me in Solr, but I believe the issue is down at the 
 Lucene level, so I've opened the issue in the LUCENE project.  We can move it 
 if necessary.
 I've boiled the problem down to this minimum Solr fieldType:
 {noformat}
 <fieldType name="testType" class="solr.TextField"
            sortMissingLast="true" positionIncrementGap="100">
   <analyzer type="index">
     <tokenizer class="org.apache.lucene.analysis.icu.segmentation.ICUTokenizerFactory"
                rulefiles="Latn:Latin-break-only-on-whitespace.rbbi"/>
     <filter class="solr.PatternReplaceFilterFactory"
             pattern="^(\p{Punct}*)(.*?)(\p{Punct}*)$"
             replacement="$2"/>
     <filter class="solr.WordDelimiterFilterFactory"
             splitOnCaseChange="1"
             splitOnNumerics="1"
             stemEnglishPossessive="1"
             generateWordParts="1"
             generateNumberParts="1"
             catenateWords="1"
             catenateNumbers="1"
             catenateAll="0"
             preserveOriginal="1"/>
   </analyzer>
   <analyzer type="query">
     <tokenizer class="org.apache.lucene.analysis.icu.segmentation.ICUTokenizerFactory"
                rulefiles="Latn:Latin-break-only-on-whitespace.rbbi"/>
     <filter class="solr.PatternReplaceFilterFactory"
             pattern="^(\p{Punct}*)(.*?)(\p{Punct}*)$"
             replacement="$2"/>
     <filter class="solr.WordDelimiterFilterFactory"
             splitOnCaseChange="1"
             splitOnNumerics="1"
             stemEnglishPossessive="1"
             generateWordParts="1"
             generateNumberParts="1"
             catenateWords="0"
             catenateNumbers="0"
             catenateAll="0"
             preserveOriginal="0"/>
   </analyzer>
 </fieldType>
 {noformat}
 On Solr 4.7, if this type is given the input "aaa-bbb: ccc", then index 
 analysis puts "aaa" at term position 1 and "bbb" at term position 2.  This seems 
 perfectly reasonable to me.  In Solr 4.9, both terms end up at position 2.  
 This causes phrase queries which used to work to return zero hits.  The exact 
 text of the phrase query is in the original documents that match on 4.7.
 If the custom rbbi (which is included unmodified from the lucene icu analysis 
 source code) is not used, then the problem doesn't happen, because the 
 punctuation doesn't make it to the PRF.  If the PatternReplaceFilterFactory 
 is not present, then the problem doesn't happen.
 I can work around the problem by setting luceneMatchVersion to 4.7, but I 
 think the behavior is a bug, and I would rather not continue to use 4.7 
 analysis when I upgrade to 5.x, which I hope to do soon.
 Whether luceneMatchVersion is LUCENE_47 or LUCENE_4_9, query analysis puts 
 "aaa" at term position 1 and "bbb" at term position 2.





[jira] [Commented] (SOLR-7928) Improve CheckIndex to work against HdfsDirectory

2015-08-21 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7928?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14707187#comment-14707187
 ] 

Robert Muir commented on SOLR-7928:
---

I'm against this. It's a mistake that CheckIndex is not a final class.

Just use its API, please; there is no need to subclass.

{code}
  /** Create a new CheckIndex on the directory. */
  public CheckIndex(Directory dir) throws IOException {
{code}

 Improve CheckIndex to work against HdfsDirectory
 

 Key: SOLR-7928
 URL: https://issues.apache.org/jira/browse/SOLR-7928
 Project: Solr
  Issue Type: New Feature
  Components: hdfs
Reporter: Mike Drob
 Fix For: Trunk, 5.4

 Attachments: SOLR-7928.patch


 CheckIndex is very useful for testing an index for corruption. However, it 
 can only work with an index on an FSDirectory, meaning that if you need to 
 check an HDFS index, you have to download it to local disk (which can be 
 very large).
 We should have a way to natively check an index on HDFS for corruption.






[jira] [Commented] (SOLR-7948) MapReduceIndexerTool of solr 5.2.1 doesn't work with hadoop 2.7.1

2015-08-21 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7948?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14707061#comment-14707061
 ] 

Shawn Heisey commented on SOLR-7948:


bq. By the way, there is a bug in httpclient 4.4.1

Is there an issue filed in Jira for this bug?  Has it been fixed in the 4.5 
version?


 MapReduceIndexerTool of solr 5.2.1 doesn't work with hadoop 2.7.1
 -

 Key: SOLR-7948
 URL: https://issues.apache.org/jira/browse/SOLR-7948
 Project: Solr
  Issue Type: Bug
  Components: contrib - MapReduce
Affects Versions: 5.2.1
 Environment: OS:suse 11
 JDK:java version 1.7.0_65 
 Java(TM) SE Runtime Environment (build 1.7.0_65-b17)
 Java HotSpot(TM) 64-Bit Server VM (build 24.65-b04, mixed mode)
 HADOOP:hadoop 2.7.1 
 SOLR:5.2.1
Reporter: davidchiu
Assignee: Mark Miller

 When I used the MapReduceIndexerTool of 5.2.1 to index files, I got the 
 following errors; but when I used 4.9.0's MapReduceIndexerTool, it did work with hadoop 2.7.1.
 The exception was as follows:
 INFO  - 2015-08-20 11:44:45.155; [   ] org.apache.solr.hadoop.HeartBeater; 
 Heart beat reporting class is 
 org.apache.hadoop.mapreduce.task.TaskAttemptContextImpl
 INFO  - 2015-08-20 11:44:45.161; [   ] 
 org.apache.solr.hadoop.SolrRecordWriter; Using this unpacked directory as 
 solr home: 
 /usr/local/hadoop/tmp/nm-local-dir/usercache/root/appcache/application_1440040092614_0004/container_1440040092614_0004_01_04/82f2eca9-d6eb-483b-960f-0d3b3b93788c.solr.zip
 INFO  - 2015-08-20 11:44:45.162; [   ] 
 org.apache.solr.hadoop.SolrRecordWriter; Creating embedded Solr server with 
 solrHomeDir: 
 /usr/local/hadoop/tmp/nm-local-dir/usercache/root/appcache/application_1440040092614_0004/container_1440040092614_0004_01_04/82f2eca9-d6eb-483b-960f-0d3b3b93788c.solr.zip,
  fs: 
 DFS[DFSClient[clientName=DFSClient_attempt_1440040092614_0004_r_01_0_1678264055_1,
  ugi=root (auth:SIMPLE)]], outputShardDir: 
 hdfs://127.0.0.1:9000/tmp/outdir/reducers/_temporary/1/_temporary/attempt_1440040092614_0004_r_01_0/part-r-1
 INFO  - 2015-08-20 11:44:45.194; [   ] 
 org.apache.solr.core.SolrResourceLoader; new SolrResourceLoader for 
 directory: 
 '/usr/local/hadoop/tmp/nm-local-dir/usercache/root/appcache/application_1440040092614_0004/container_1440040092614_0004_01_04/82f2eca9-d6eb-483b-960f-0d3b3b93788c.solr.zip/'
 INFO  - 2015-08-20 11:44:45.206; [   ] org.apache.solr.hadoop.HeartBeater; 
 HeartBeat thread running
 INFO  - 2015-08-20 11:44:45.207; [   ] org.apache.solr.hadoop.HeartBeater; 
 Issuing heart beat for 1 threads
 INFO  - 2015-08-20 11:44:45.418; [   ] 
 org.apache.solr.hadoop.SolrRecordWriter; Constructed instance information 
 solr.home 
 /usr/local/hadoop/tmp/nm-local-dir/usercache/root/appcache/application_1440040092614_0004/container_1440040092614_0004_01_04/82f2eca9-d6eb-483b-960f-0d3b3b93788c.solr.zip
  
 (/usr/local/hadoop/tmp/nm-local-dir/usercache/root/appcache/application_1440040092614_0004/container_1440040092614_0004_01_04/82f2eca9-d6eb-483b-960f-0d3b3b93788c.solr.zip),
  instance dir 
 /usr/local/hadoop/tmp/nm-local-dir/usercache/root/appcache/application_1440040092614_0004/container_1440040092614_0004_01_04/82f2eca9-d6eb-483b-960f-0d3b3b93788c.solr.zip/,
  conf dir 
 /usr/local/hadoop/tmp/nm-local-dir/usercache/root/appcache/application_1440040092614_0004/container_1440040092614_0004_01_04/82f2eca9-d6eb-483b-960f-0d3b3b93788c.solr.zip/conf/,
  writing index to solr.data.dir 
 hdfs://127.0.0.1:9000/tmp/outdir/reducers/_temporary/1/_temporary/attempt_1440040092614_0004_r_01_0/part-r-1/data,
  with permdir 
 hdfs://127.0.0.10:9000/tmp/outdir/reducers/_temporary/1/_temporary/attempt_1440040092614_0004_r_01_0/part-r-1
 INFO  - 2015-08-20 11:44:45.426; [   ] org.apache.solr.core.SolrXmlConfig; 
 Loading container configuration from 
 /usr/local/hadoop/tmp/nm-local-dir/usercache/root/appcache/application_1440040092614_0004/container_1440040092614_0004_01_04/82f2eca9-d6eb-483b-960f-0d3b3b93788c.solr.zip/solr.xml
 INFO  - 2015-08-20 11:44:45.474; [   ] 
 org.apache.solr.core.CorePropertiesLocator; Config-defined core root 
 directory: 
 /usr/local/hadoop/tmp/nm-local-dir/usercache/root/appcache/application_1440040092614_0004/container_1440040092614_0004_01_04/82f2eca9-d6eb-483b-960f-0d3b3b93788c.solr.zip
 INFO  - 2015-08-20 11:44:45.503; [   ] org.apache.solr.core.CoreContainer; 
 New CoreContainer 1656436773
 INFO  - 2015-08-20 11:44:45.503; [   ] org.apache.solr.core.CoreContainer; 
 Loading cores into CoreContainer 
 [instanceDir=/usr/local/hadoop/tmp/nm-local-dir/usercache/root/appcache/application_1440040092614_0004/container_1440040092614_0004_01_04/82f2eca9-d6eb-483b-960f-0d3b3b93788c.solr.zip/]
 INFO  - 2015-08-20 11:44:45.503; [   ] 

[jira] [Comment Edited] (SOLR-7948) MapReduceIndexerTool of solr 5.2.1 doesn't work with hadoop 2.7.1

2015-08-21 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7948?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14707061#comment-14707061
 ] 

Shawn Heisey edited comment on SOLR-7948 at 8/21/15 5:20 PM:
-

bq. By the way, there is a bug in httpclient 4.4.1

Is there an issue filed in Jira for this bug?  Has it been fixed in the 4.5 
version?

edit:  I don't see anything that looks like what you described in the 4.5 
release notes.



was (Author: elyograg):
bq. By the way, there is a bug in httpclient 4.4.1

Is there an issue filed in Jira for this bug?  Has it been fixed in the 4.5 
version?


 MapReduceIndexerTool of solr 5.2.1 doesn't work with hadoop 2.7.1
 -

 Key: SOLR-7948
 URL: https://issues.apache.org/jira/browse/SOLR-7948
 Project: Solr
  Issue Type: Bug
  Components: contrib - MapReduce
Affects Versions: 5.2.1
 Environment: OS:suse 11
 JDK:java version 1.7.0_65 
 Java(TM) SE Runtime Environment (build 1.7.0_65-b17)
 Java HotSpot(TM) 64-Bit Server VM (build 24.65-b04, mixed mode)
 HADOOP:hadoop 2.7.1 
 SOLR:5.2.1
Reporter: davidchiu
Assignee: Mark Miller

 When I used the MapReduceIndexerTool of 5.2.1 to index files, I got the 
 following errors; but when I used 4.9.0's MapReduceIndexerTool, it did work with hadoop 2.7.1.
 The exception was as follows:
 INFO  - 2015-08-20 11:44:45.155; [   ] org.apache.solr.hadoop.HeartBeater; 
 Heart beat reporting class is 
 org.apache.hadoop.mapreduce.task.TaskAttemptContextImpl
 INFO  - 2015-08-20 11:44:45.161; [   ] 
 org.apache.solr.hadoop.SolrRecordWriter; Using this unpacked directory as 
 solr home: 
 /usr/local/hadoop/tmp/nm-local-dir/usercache/root/appcache/application_1440040092614_0004/container_1440040092614_0004_01_04/82f2eca9-d6eb-483b-960f-0d3b3b93788c.solr.zip
 INFO  - 2015-08-20 11:44:45.162; [   ] 
 org.apache.solr.hadoop.SolrRecordWriter; Creating embedded Solr server with 
 solrHomeDir: 
 /usr/local/hadoop/tmp/nm-local-dir/usercache/root/appcache/application_1440040092614_0004/container_1440040092614_0004_01_04/82f2eca9-d6eb-483b-960f-0d3b3b93788c.solr.zip,
  fs: 
 DFS[DFSClient[clientName=DFSClient_attempt_1440040092614_0004_r_01_0_1678264055_1,
  ugi=root (auth:SIMPLE)]], outputShardDir: 
 hdfs://127.0.0.1:9000/tmp/outdir/reducers/_temporary/1/_temporary/attempt_1440040092614_0004_r_01_0/part-r-1
 INFO  - 2015-08-20 11:44:45.194; [   ] 
 org.apache.solr.core.SolrResourceLoader; new SolrResourceLoader for 
 directory: 
 '/usr/local/hadoop/tmp/nm-local-dir/usercache/root/appcache/application_1440040092614_0004/container_1440040092614_0004_01_04/82f2eca9-d6eb-483b-960f-0d3b3b93788c.solr.zip/'
 INFO  - 2015-08-20 11:44:45.206; [   ] org.apache.solr.hadoop.HeartBeater; 
 HeartBeat thread running
 INFO  - 2015-08-20 11:44:45.207; [   ] org.apache.solr.hadoop.HeartBeater; 
 Issuing heart beat for 1 threads
 INFO  - 2015-08-20 11:44:45.418; [   ] 
 org.apache.solr.hadoop.SolrRecordWriter; Constructed instance information 
 solr.home 
 /usr/local/hadoop/tmp/nm-local-dir/usercache/root/appcache/application_1440040092614_0004/container_1440040092614_0004_01_04/82f2eca9-d6eb-483b-960f-0d3b3b93788c.solr.zip
  
 (/usr/local/hadoop/tmp/nm-local-dir/usercache/root/appcache/application_1440040092614_0004/container_1440040092614_0004_01_04/82f2eca9-d6eb-483b-960f-0d3b3b93788c.solr.zip),
  instance dir 
 /usr/local/hadoop/tmp/nm-local-dir/usercache/root/appcache/application_1440040092614_0004/container_1440040092614_0004_01_04/82f2eca9-d6eb-483b-960f-0d3b3b93788c.solr.zip/,
  conf dir 
 /usr/local/hadoop/tmp/nm-local-dir/usercache/root/appcache/application_1440040092614_0004/container_1440040092614_0004_01_04/82f2eca9-d6eb-483b-960f-0d3b3b93788c.solr.zip/conf/,
  writing index to solr.data.dir 
 hdfs://127.0.0.1:9000/tmp/outdir/reducers/_temporary/1/_temporary/attempt_1440040092614_0004_r_01_0/part-r-1/data,
  with permdir 
 hdfs://127.0.0.10:9000/tmp/outdir/reducers/_temporary/1/_temporary/attempt_1440040092614_0004_r_01_0/part-r-1
 INFO  - 2015-08-20 11:44:45.426; [   ] org.apache.solr.core.SolrXmlConfig; 
 Loading container configuration from 
 /usr/local/hadoop/tmp/nm-local-dir/usercache/root/appcache/application_1440040092614_0004/container_1440040092614_0004_01_04/82f2eca9-d6eb-483b-960f-0d3b3b93788c.solr.zip/solr.xml
 INFO  - 2015-08-20 11:44:45.474; [   ] 
 org.apache.solr.core.CorePropertiesLocator; Config-defined core root 
 directory: 
 /usr/local/hadoop/tmp/nm-local-dir/usercache/root/appcache/application_1440040092614_0004/container_1440040092614_0004_01_04/82f2eca9-d6eb-483b-960f-0d3b3b93788c.solr.zip
 INFO  - 2015-08-20 11:44:45.503; [   ] org.apache.solr.core.CoreContainer; 
 New CoreContainer 1656436773
 INFO  - 2015-08-20 11:44:45.503; [   ] 

[jira] [Commented] (SOLR-7928) Improve CheckIndex to work against HdfsDirectory

2015-08-21 Thread Mike Drob (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7928?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14707081#comment-14707081
 ] 

Mike Drob commented on SOLR-7928:
-

Bump.

 Improve CheckIndex to work against HdfsDirectory
 

 Key: SOLR-7928
 URL: https://issues.apache.org/jira/browse/SOLR-7928
 Project: Solr
  Issue Type: New Feature
  Components: hdfs
Reporter: Mike Drob
 Fix For: Trunk, 5.4

 Attachments: SOLR-7928.patch


 CheckIndex is very useful for testing an index for corruption. However, it 
 can only work with an index on an FSDirectory, meaning that if you need to 
 check an Hdfs Index, then you have to download it to local disk (which can be 
 very large).
 We should have a way to natively check index on hdfs for corruption.






[jira] [Commented] (SOLR-7789) Introduce a ConfigSet management API

2015-08-21 Thread Gregory Chanan (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7789?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14707076#comment-14707076
 ] 

Gregory Chanan commented on SOLR-7789:
--

Sorry about that, I don't know what the issue is.  I'll generate an svn version.

I'm not sure how much we care about this, but one issue I realized with the 
patch is that it is a little aggressive about cleaning up failed CREATE 
attempts.  If someone is concurrently using zkcli to add a config and using 
CREATE via the API (so exclusivity checking doesn't apply), the following can 
happen:
- CREATE call checks that config doesn't exist
- zkcli adds config
- CREATE tries to create and fails
- CREATE removes config as part of failure cleanup

First, we should recommend that people not use zkcli and the ConfigSet 
API concurrently (I would argue people shouldn't use zkcli at all).  But we 
could be a little smarter about this case, e.g. track whether the CREATE call 
actually wrote anything and only clean up if something was actually written.
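The "track if the CREATE call actually wrote anything" idea can be sketched with an in-memory stand-in. Everything here is hypothetical (the Map stands in for ZooKeeper; none of these names are Solr's); it only shows the shape of the fix being discussed:

```java
import java.util.HashMap;
import java.util.Map;

// Hedged sketch: a failed CREATE only rolls back what *this* call wrote, so
// a config that a concurrent writer (e.g. zkcli) put there is never deleted
// by our failure cleanup.
public class ConfigSetCreator {
    private final Map<String, String> store = new HashMap<>();

    public boolean exists(String name) {
        return store.containsKey(name);
    }

    public boolean create(String name, String payload) {
        if (store.containsKey(name)) return false;        // already exists: refuse
        boolean wroteAnything = false;
        try {
            // A concurrent writer may slip in between the check above and
            // this write; putIfAbsent tells us whether *we* actually wrote.
            wroteAnything = store.putIfAbsent(name, payload) == null;
            if (!wroteAnything) return false;
            validate(payload);                            // may fail after the write
            return true;
        } catch (RuntimeException e) {
            if (wroteAnything) store.remove(name);        // roll back only our own write
            return false;
        }
    }

    private void validate(String payload) {
        if (payload.isEmpty()) throw new IllegalArgumentException("empty config");
    }
}
```

Without the `wroteAnything` flag, the cleanup branch would delete the config even when the concurrent zkcli client was the one that created it.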

 Introduce a ConfigSet management API
 

 Key: SOLR-7789
 URL: https://issues.apache.org/jira/browse/SOLR-7789
 Project: Solr
  Issue Type: New Feature
Reporter: Gregory Chanan
Assignee: Gregory Chanan
 Attachments: SOLR-7789.patch, SOLR-7789.patch


 SOLR-5955 describes a feature to automatically create a ConfigSet, based on 
 another one, from a collection API call (i.e. one step collection creation).  
 Discussion there yielded SOLR-7742, Immutable ConfigSet support.  To close 
 the loop, we need support for a ConfigSet management API.
 The simplest ConfigSet API could have one operation:
 create a new ConfigSet, based on an existing one, possibly modifying the 
 ConfigSet properties.  Note that you need to be able to modify the ConfigSet 
 properties at creation time, because otherwise the Immutable property could never be changed.
 Another logical operation to support is ConfigSet deletion; that may be more 
 complicated to implement than creation because you need to handle the case 
 where a collection is already using the configuration.






[jira] [Commented] (LUCENE-6752) include Math.random() into forbiddenAPI

2015-08-21 Thread Mikhail Khludnev (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6752?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14707098#comment-14707098
 ] 

Mikhail Khludnev commented on LUCENE-6752:
--

+1.

Would you like me to commit? 

 include Math.random() into forbiddenAPI
 ---

 Key: LUCENE-6752
 URL: https://issues.apache.org/jira/browse/LUCENE-6752
 Project: Lucene - Core
  Issue Type: Bug
  Components: general/build
Reporter: Andrei Beliakov
Assignee: Uwe Schindler
Priority: Minor
 Fix For: Trunk, 5.4

 Attachments: LUCENE-6752.patch


 Math.random() should be included into forbiddenAPI






[jira] [Comment Edited] (SOLR-7951) LBHttpSolrClient wraps ALL exceptions in No live SolrServers available to handle this request exception, even usage errors

2015-08-21 Thread Edward Ribeiro (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7951?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14707205#comment-14707205
 ] 

Edward Ribeiro edited comment on SOLR-7951 at 8/21/15 6:23 PM:
---

Cool, [~ecario]! :) The only thing I'm confused about in the new patch is that it 
still casts to SolrServerException. *Shouldn't it be cast to SolrException 
now?*

{code}
} else if (ex instanceof SolrException) {
  throw (SolrServerException) ex;
{code}

Also, it would be nice to document this as [~markrmil...@gmail.com] suggested 
above too for future devs.

Mark, let us know if you think this patch is okay, please?

ps: So, the test I included was flawed. I wish we had a test to validate this 
fix. Any ideas?


was (Author: eribeiro):
Cool, [~ecario]! :) The only thing that confused me about the new patch is that 
it still casts to SolrServerException. Shouldn't it be cast to SolrException now?

{code}
+} else if (ex instanceof SolrException) {
+  throw (SolrServerException) ex;
{code}

Also, it would be nice to document this as [~markrmil...@gmail.com] suggested 
above too for future devs.

Mark, let us know if you think this patch is okay, please?

ps: So, the test I included was flawed. I wish we had a test to validate this 
fix. Any ideas?

 LBHttpSolrClient wraps ALL exceptions in No live SolrServers available to 
 handle this request exception, even usage errors
 

 Key: SOLR-7951
 URL: https://issues.apache.org/jira/browse/SOLR-7951
 Project: Solr
  Issue Type: Bug
  Components: SolrJ
Affects Versions: 5.2.1
Reporter: Elaine Cario
Priority: Minor
 Attachments: SOLR-7951-4.x.patch, SOLR-7951.patch


 We were experiencing many "No live SolrServers available to handle this 
 request" exceptions, even though we saw no outages on any of our servers.  
 It turned out the actual exceptions were related to the use of wildcards in 
 span queries (and in some cases other invalid queries or usage-type issues). 
 We traced it back to LBHttpSolrClient, which was wrapping all exceptions, even 
 plain SolrExceptions, in that outer exception.  
 Instead, wrapping in the outer exception should be reserved for true 
 communication issues in SolrCloud, and usage exceptions should be thrown as 
 is.
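
 To make the cast question above concrete, here is a small self-contained 
 sketch.  The exception classes are stand-ins (in real SolrJ, SolrException and 
 SolrServerException are unrelated types, as modeled here), and the rethrow 
 branch is what the patch discussion appears to intend, not the actual SolrJ 
 source: a usage error is rethrown as-is, because casting it to the narrower 
 server-exception type would fail at runtime.

{code}
// Stand-ins for the SolrJ classes (hypothetical simplification; the two
// real types are likewise unrelated in the class hierarchy).
class SolrException extends RuntimeException {
  SolrException(String msg) { super(msg); }
}
class SolrServerException extends Exception {
  SolrServerException(String msg) { super(msg); }
}

public class RethrowDemo {
  // Pattern the discussion suggests: rethrow usage errors unchanged
  // instead of casting them to SolrServerException.
  static void rethrow(Throwable ex) throws SolrServerException {
    if (ex instanceof SolrServerException) {
      throw (SolrServerException) ex;   // genuine communication error
    } else if (ex instanceof SolrException) {
      throw (SolrException) ex;         // usage error: surface as-is
    }
    throw new SolrServerException("No live SolrServers: " + ex);
  }

  public static void main(String[] args) throws Exception {
    Throwable usage = new SolrException("bad query");
    // Casting a SolrException to SolrServerException fails at runtime:
    boolean cceThrown = false;
    try {
      SolrServerException bad = (SolrServerException) usage;
    } catch (ClassCastException e) {
      cceThrown = true;
    }
    System.out.println(cceThrown);          // prints "true"
    // The corrected branch preserves the original exception type:
    try {
      rethrow(usage);
    } catch (SolrException e) {
      System.out.println(e.getMessage());   // prints "bad query"
    }
  }
}
{code}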






[jira] [Commented] (SOLR-7951) LBHttpSolrClient wraps ALL exceptions in No live SolrServers available to handle this request exception, even usage errors

2015-08-21 Thread Edward Ribeiro (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7951?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14707205#comment-14707205
 ] 

Edward Ribeiro commented on SOLR-7951:
--

Cool, [~ecario]! :) The only thing that confused me about the new patch is that 
it still casts to SolrServerException. Shouldn't it be cast to SolrException now?

{code}
+} else if (ex instanceof SolrException) {
+  throw (SolrServerException) ex;
{code}

Also, it would be nice to document this as [~markrmil...@gmail.com] suggested 
above too for future devs.

Mark, let us know if you think this patch is okay, please?

ps: So, the test I included was flawed. I wish we had a test to validate this 
fix. Any ideas?







[jira] [Commented] (SOLR-7746) Ping requests stopped working with distrib=true in Solr 5.2.1

2015-08-21 Thread Michael Sun (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7746?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14707039#comment-14707039
 ] 

Michael Sun commented on SOLR-7746:
---

Thanks [~gchanan] for reviewing. Here are my answers.

1. Will add the space and other formatting.
2. The qt parameter is no longer used at that point, since delegation has 
completed. It doesn't matter whether qt is removed or set to the default handler.
3. The logic is correct for the non-IS_SHARD use case, and it adds an OK status 
to rsp via the ! in the last if condition.




 Ping requests stopped working with distrib=true in Solr 5.2.1
 -

 Key: SOLR-7746
 URL: https://issues.apache.org/jira/browse/SOLR-7746
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 5.2.1
Reporter: Alexey Serba
 Attachments: SOLR-7746.patch, SOLR-7746.patch, SOLR-7746.patch


 {noformat:title=steps to reproduce}
 # start 1 node SolrCloud cluster
 sh ./bin/solr -c -p 
 # create a test collection (we won’t use it; I just want it to load 
 solr configs to Zk)
 ./bin/solr create_collection -c test -d sample_techproducts_configs -p 
 # create another test collection with 2 shards
 curl 
 'http://localhost:/solr/admin/collections?action=CREATE&name=test2&numShards=2&replicationFactor=1&maxShardsPerNode=2&collection.configName=test'
 # try distrib ping request
 curl 
 'http://localhost:/solr/test2/admin/ping?wt=json&distrib=true&indent=true'
 ...
   "error":{
 "msg":"Ping query caused exception: Error from server at 
 http://192.168.59.3:/solr/test2_shard2_replica1: Cannot execute the 
 PingRequestHandler recursively"
 ...
 {noformat}
 {noformat:title=Exception}
 2116962 [qtp599601600-13] ERROR org.apache.solr.core.SolrCore  [test2 shard2 
 core_node1 test2_shard2_replica1] – org.apache.solr.common.SolrException: 
 Cannot execute the PingRequestHandler recursively
   at 
 org.apache.solr.handler.PingRequestHandler.handlePing(PingRequestHandler.java:246)
   at 
 org.apache.solr.handler.PingRequestHandler.handleRequestBody(PingRequestHandler.java:211)
   at 
 org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:143)
   at org.apache.solr.core.SolrCore.execute(SolrCore.java:2064)
   at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:654)
   at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:450)
   at 
 org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:227)
   at 
 org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:196)
 {noformat}






[jira] [Commented] (LUCENE-6571) Javadoc error when run in private access level

2015-08-21 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6571?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14707279#comment-14707279
 ] 

Michael McCandless commented on LUCENE-6571:


+1 to the FST fixes!  Thanks [~cpoerschke].

 Javadoc error when run in private access level
 --

 Key: LUCENE-6571
 URL: https://issues.apache.org/jira/browse/LUCENE-6571
 Project: Lucene - Core
  Issue Type: Bug
Affects Versions: 5.2
Reporter: Cao Manh Dat
Assignee: Christine Poerschke
Priority: Trivial
 Attachments: LUCENE-6571.patch


 Javadoc error when run in private access level.






[jira] [Commented] (SOLR-7746) Ping requests stopped working with distrib=true in Solr 5.2.1

2015-08-21 Thread Gregory Chanan (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7746?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14707066#comment-14707066
 ] 

Gregory Chanan commented on SOLR-7746:
--

on #3, sorry I didn't write that correctly.  Let me try to put it another way 
-- why not just simplify the code I listed above and get rid of the if/else 
handling.  Does anything break if you do that?

Also, it would be great to add a test to demonstrate the problem you are fixing 
so it doesn't show up again.  Ideally the test should demonstrate IS_SHARD and 
non-IS_SHARD cases and the IS_SHARD test case should fail without your patch 
applied.







[jira] [Commented] (SOLR-7951) LBHttpSolrClient wraps ALL exceptions in No live SolrServers available to handle this request exception, even usage errors

2015-08-21 Thread Edward Ribeiro (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7951?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14707209#comment-14707209
 ] 

Edward Ribeiro commented on SOLR-7951:
--

Yeah, you're right. I couldn't think of a better test to include. :(









[jira] [Comment Edited] (SOLR-7951) LBHttpSolrClient wraps ALL exceptions in No live SolrServers available to handle this request exception, even usage errors

2015-08-21 Thread Edward Ribeiro (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7951?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14707205#comment-14707205
 ] 

Edward Ribeiro edited comment on SOLR-7951 at 8/21/15 6:42 PM:
---

Cool, [~ecario]! :) The only thing that confused me about the new patch is that 
it still casts to SolrServerException. --Shouldn't it be cast to SolrException 
now?-- I see that the method signature throws SolrServerException, but I didn't 
get why it didn't throw ClassCastException... 

{code}
} else if (ex instanceof SolrException) {
  throw (SolrServerException) ex;
{code}

Also, it would be nice to document this as [~markrmil...@gmail.com] suggested 
above too for future devs.

Mark, let us know if you think this patch is okay, please? I mean, if it 
addresses your other  concerns with my original patch.

ps: So, the test I included was flawed. I wish we had a test to validate this 
fix. Any ideas?


was (Author: eribeiro):
Cool, [~ecario]! :) The only thing that confused me about the new patch is that 
it still casts to SolrServerException. *Shouldn't it be cast to SolrException 
now?*

{code}
} else if (ex instanceof SolrException) {
  throw (SolrServerException) ex;
{code}

Also, it would be nice to document this as [~markrmil...@gmail.com] suggested 
above too for future devs.

Mark, let us know if you think this patch is okay, please?

ps: So, the test I included was flawed. I wish we had a test to validate this 
fix. Any ideas?







[jira] [Commented] (LUCENE-6743) Allow Ivy lockStrategy to be overridden by system property.

2015-08-21 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6743?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14707547#comment-14707547
 ] 

ASF subversion and git services commented on LUCENE-6743:
-

Commit 1697060 from [~markrmil...@gmail.com] in branch 'dev/trunk'
[ https://svn.apache.org/r1697060 ]

LUCENE-6743: Allow Ivy lockStrategy to be overridden by system property.

 Allow Ivy lockStrategy to be overridden by system property.
 ---

 Key: LUCENE-6743
 URL: https://issues.apache.org/jira/browse/LUCENE-6743
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Mark Miller
Assignee: Mark Miller
Priority: Minor
 Fix For: Trunk, 5.4

 Attachments: LUCENE-6743.patch


 The current hard-coded lock strategy is imperfect and can fail under parallel 
 load. With Ivy 2.4 there is a better option in artifact-lock-nio. We should 
 allow the lock strategy to be overridden, like the resolutionCacheDir.






[jira] [Commented] (LUCENE-6699) Integrate lat/lon BKD and spatial3d

2015-08-21 Thread Karl Wright (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14707562#comment-14707562
 ] 

Karl Wright commented on LUCENE-6699:
-

[~mikemccand]: It seems like you drilled down to a point which BKD thought 
should be in the result set but which geo3d says should not.  How can this 
happen?  Specifically, what information from geo3d must be incorrect for this 
situation to occur?


 Integrate lat/lon BKD and spatial3d
 ---

 Key: LUCENE-6699
 URL: https://issues.apache.org/jira/browse/LUCENE-6699
 Project: Lucene - Core
  Issue Type: New Feature
Reporter: Michael McCandless
Assignee: Michael McCandless
 Attachments: Geo3DPacking.java, LUCENE-6699.patch, LUCENE-6699.patch, 
 LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, 
 LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, 
 LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, 
 LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, 
 LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, 
 LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch


 I'm opening this for discussion, because I'm not yet sure how to do
 this integration, because of my ignorance about spatial in general and
 spatial3d in particular :)
 Our BKD tree impl is very fast at doing lat/lon shape intersection
 (bbox, polygon, soon distance: LUCENE-6698) against previously indexed
 points.
 I think to integrate with spatial3d, we would first need to record
 lat/lon/z into doc values.  Somewhere I saw discussion about how we
 could stuff all 3 into a single long value with acceptable precision
 loss?  Or, we could use BinaryDocValues?  We need all 3 dims available
 to do the fast per-hit query time filtering.
 But, second: what do we index into the BKD tree?  Can we just index
 earth surface lat/lon, and then at query time is spatial3d able to
 give me an enclosing surface lat/lon bbox for a 3d shape?  Or
 ... must we index all 3 dimensions into the BKD tree (seems like this
 could be somewhat wasteful)?
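
 The "stuff all 3 into a single long" idea from the description can be 
 sketched as follows.  This is purely illustrative (not committed code, and 
 all names are made up): three unit-sphere coordinates in [-1, 1] are 
 quantized to 21 bits each, fitting in 63 bits with a round-trip error 
 bounded by half a quantum, about 4.8e-7.

{code}
public class PackXYZDemo {
  // Hypothetical packing: 21 bits per dimension, 63 bits total.
  static final int BITS = 21;
  static final long MASK = (1L << BITS) - 1;       // 2^21 - 1
  static final double SCALE = MASK / 2.0;          // maps [-1,1] -> [0,MASK]

  static long quantize(double v) {
    return Math.round((v + 1.0) * SCALE) & MASK;
  }
  static double dequantize(long q) {
    return q / SCALE - 1.0;
  }
  static long pack(double x, double y, double z) {
    return (quantize(x) << (2 * BITS)) | (quantize(y) << BITS) | quantize(z);
  }
  static double unpackX(long p) { return dequantize((p >>> (2 * BITS)) & MASK); }
  static double unpackY(long p) { return dequantize((p >>> BITS) & MASK); }
  static double unpackZ(long p) { return dequantize(p & MASK); }

  public static void main(String[] args) {
    long p = pack(0.123456, -0.654321, 0.999999);
    // Quantum is 2/MASK ~ 9.5e-7, so half a quantum ~ 4.8e-7 < 1e-6.
    System.out.println(Math.abs(unpackX(p) - 0.123456) < 1e-6);  // prints "true"
    System.out.println(Math.abs(unpackY(p) + 0.654321) < 1e-6);  // prints "true"
    System.out.println(Math.abs(unpackZ(p) - 0.999999) < 1e-6);  // prints "true"
  }
}
{code}

 Whether 21 bits per dimension is "acceptable precision loss" for the query-time 
 filtering discussed above is exactly the open question in this issue.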






[jira] [Commented] (LUCENE-6699) Integrate lat/lon BKD and spatial3d

2015-08-21 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14707594#comment-14707594
 ] 

Michael McCandless commented on LUCENE-6699:


Oh I see, the failure is the opposite of before (expected=false but actual=true).

This can happen if we incorrectly got CELL_INSIDE_SHAPE (GeoArea.CONTAINS from 
geo3d) when we asked the shape to compare itself to an x,y,z rect.

I added a new assert to confirm this:

{noformat}
java.lang.AssertionError
at __randomizedtesting.SeedInfo.seed([9CF59027DCD28E6D]:0)
at 
org.apache.lucene.bkdtree3d.BKD3DTreeReader.addAll(BKD3DTreeReader.java:153)
at 
org.apache.lucene.bkdtree3d.BKD3DTreeReader.intersect(BKD3DTreeReader.java:191)
at 
org.apache.lucene.bkdtree3d.BKD3DTreeReader.intersect(BKD3DTreeReader.java:317)
at 
org.apache.lucene.bkdtree3d.BKD3DTreeReader.intersect(BKD3DTreeReader.java:317)
at 
org.apache.lucene.bkdtree3d.BKD3DTreeReader.intersect(BKD3DTreeReader.java:293)
at 
org.apache.lucene.bkdtree3d.BKD3DTreeReader.intersect(BKD3DTreeReader.java:307)
at 
org.apache.lucene.bkdtree3d.BKD3DTreeReader.intersect(BKD3DTreeReader.java:283)
at 
org.apache.lucene.bkdtree3d.BKD3DTreeReader.intersect(BKD3DTreeReader.java:317)
at 
org.apache.lucene.bkdtree3d.BKD3DTreeReader.intersect(BKD3DTreeReader.java:293)
at 
org.apache.lucene.bkdtree3d.BKD3DTreeReader.intersect(BKD3DTreeReader.java:268)
at 
org.apache.lucene.bkdtree3d.BKD3DTreeReader.intersect(BKD3DTreeReader.java:258)
at 
org.apache.lucene.bkdtree3d.BKD3DTreeReader.intersect(BKD3DTreeReader.java:293)
at 
org.apache.lucene.bkdtree3d.BKD3DTreeReader.intersect(BKD3DTreeReader.java:307)
at 
org.apache.lucene.bkdtree3d.BKD3DTreeReader.intersect(BKD3DTreeReader.java:115)
at 
org.apache.lucene.bkdtree3d.PointInGeo3DShapeQuery$1.scorer(PointInGeo3DShapeQuery.java:114)
at 
org.apache.lucene.search.LRUQueryCache$CachingWrapperWeight.scorer(LRUQueryCache.java:581)
at org.apache.lucene.search.Weight.bulkScorer(Weight.java:135)
at 
org.apache.lucene.search.AssertingWeight.bulkScorer(AssertingWeight.java:69)
at 
org.apache.lucene.search.AssertingWeight.bulkScorer(AssertingWeight.java:69)
at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:618)
at 
org.apache.lucene.search.AssertingIndexSearcher.search(AssertingIndexSearcher.java:92)
at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:425)
at 
org.apache.lucene.bkdtree3d.TestGeo3DPointField$4._run(TestGeo3DPointField.java:586)
at 
org.apache.lucene.bkdtree3d.TestGeo3DPointField$4.run(TestGeo3DPointField.java:520)
{noformat}







