[jira] [Commented] (LUCENE-6507) NativeFSLock.close() can invalidate other locks

2015-05-29 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6507?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14564367#comment-14564367
 ] 

Michael McCandless commented on LUCENE-6507:


Argh, thank you for fixing the HDFSLockFactory failure.

 NativeFSLock.close() can invalidate other locks
 ---

 Key: LUCENE-6507
 URL: https://issues.apache.org/jira/browse/LUCENE-6507
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Simon Willnauer
Priority: Blocker
 Fix For: 4.10.5, 5.2

 Attachments: LUCENE-6507.patch, LUCENE-6507.patch, LUCENE-6507.patch, 
 LUCENE-6507.patch, LUCENE-6507.patch, LUCENE-6507.patch, LUCENE-6507.patch, 
 LUCENE-6507.patch, LUCENE-6507.patch, LUCENE-6507.patch, LUCENE-6507.patch


 The lock API in Lucene is super trappy: the lock we return from this API must 
 first be obtained, and if we can't obtain it, the lock should not be closed, 
 since closing it might, e.g. in the NativeLock case, close the underlying 
 channel, which on some operating systems releases all locks for this file. I 
 think the makeLock method should try to obtain the lock and only return it if 
 it was successfully obtained. Not sure if that's possible everywhere, but we 
 should at least make the documentation clear here.
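
To make the trap concrete, here is a minimal, purely illustrative sketch of the 
acquire-or-fail pattern the description argues for (this is not the actual 
Lucene API; the class and method names are hypothetical): the factory only 
hands out a lock it has actually obtained, so close() can never release an 
OS-level lock that other code still holds on the same file.

{code:java}
// Illustrative sketch only (hypothetical names, not the Lucene lock API):
// either return a successfully obtained lock, or throw. Never hand the caller
// an un-obtained lock whose close() would close the shared channel and, on
// some operating systems, drop every lock held on that file.
import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.channels.FileLock;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

final class FileBasedLock implements AutoCloseable {
  private final FileChannel channel;
  private final FileLock lock;

  private FileBasedLock(FileChannel channel, FileLock lock) {
    this.channel = channel;
    this.lock = lock;
  }

  static FileBasedLock acquireLock(Path path) throws IOException {
    FileChannel channel = FileChannel.open(path,
        StandardOpenOption.CREATE, StandardOpenOption.WRITE);
    FileLock lock = null;
    try {
      lock = channel.tryLock();
      if (lock == null) {
        throw new IOException("Lock held by another process: " + path);
      }
      return new FileBasedLock(channel, lock);
    } finally {
      if (lock == null) {
        channel.close(); // we never obtained the lock, so closing is safe
      }
    }
  }

  @Override
  public void close() throws IOException {
    try {
      lock.release();
    } finally {
      channel.close();
    }
  }
}
{code}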






[jira] [Issue Comment Deleted] (SOLR-5743) Faceting with BlockJoin support

2015-05-29 Thread Dr Oleg Savrasov (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5743?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dr Oleg Savrasov updated SOLR-5743:
---
Comment: was deleted

(was: We call this kind of request, which mixes and matches fields from 
different related entities, a deep search. To handle such requests we need to 
compose a Boolean query, which provides the linguistic matching, with a Block 
Join query, which returns the top-level document when the match happens on a 
nested document. This topic is worth its own JIRA (or a few of them). Here, we 
are focusing on faceting rather than matching.)

 Faceting with BlockJoin support
 ---

 Key: SOLR-5743
 URL: https://issues.apache.org/jira/browse/SOLR-5743
 Project: Solr
  Issue Type: New Feature
Reporter: abipc
  Labels: features
 Attachments: SOLR-5743.patch, SOLR-5743.patch, SOLR-5743.patch, 
 SOLR-5743.patch, SOLR-5743.patch


 For a sample inventory(note - nested documents) like this -   
  <doc>
    <field name="id">10</field>
    <field name="type_s">parent</field>
    <field name="BRAND_s">Nike</field>
    <doc>
      <field name="id">11</field>
      <field name="COLOR_s">Red</field>
      <field name="SIZE_s">XL</field>
    </doc>
    <doc>
      <field name="id">12</field>
      <field name="COLOR_s">Blue</field>
      <field name="SIZE_s">XL</field>
    </doc>
  </doc>
 Faceting results must contain - 
 Red(1)
 XL(1) 
 Blue(1) 
 for a q=* query. 
 PS : The inventory example has been taken from this blog - 
 http://blog.griddynamics.com/2013/09/solr-block-join-support.html
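
As a purely illustrative SolrJ sketch of the sample inventory above (the core 
URL and the use of HttpSolrClient are assumptions; the child-level faceting 
syntax is the subject of this issue and is deliberately not shown), indexing 
the nested documents and running a standard block-join parent query looks 
roughly like this:

{code:java}
// Sketch: index the sample parent/child block and query it with the standard
// {!parent} block-join query parser. Core name and URL are assumptions.
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.common.SolrInputDocument;

public class BlockJoinSample {
  public static void main(String[] args) throws Exception {
    try (HttpSolrClient client =
             new HttpSolrClient("http://localhost:8983/solr/collection1")) {
      SolrInputDocument parent = new SolrInputDocument();
      parent.addField("id", "10");
      parent.addField("type_s", "parent");
      parent.addField("BRAND_s", "Nike");

      SolrInputDocument child1 = new SolrInputDocument();
      child1.addField("id", "11");
      child1.addField("COLOR_s", "Red");
      child1.addField("SIZE_s", "XL");

      SolrInputDocument child2 = new SolrInputDocument();
      child2.addField("id", "12");
      child2.addField("COLOR_s", "Blue");
      child2.addField("SIZE_s", "XL");

      // Children are attached to the parent so they are indexed as one block.
      parent.addChildDocument(child1);
      parent.addChildDocument(child2);
      client.add(parent);
      client.commit();

      // Parents whose children match COLOR_s:Red.
      SolrQuery q = new SolrQuery("{!parent which='type_s:parent'}COLOR_s:Red");
      System.out.println(client.query(q).getResults());
    }
  }
}
{code}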






[jira] [Updated] (LUCENE-6508) Simplify Directory/lock api

2015-05-29 Thread Uwe Schindler (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6508?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uwe Schindler updated LUCENE-6508:
--
Attachment: LUCENE-6508.patch

I like your proposal. I already worked with it and implemented the lock factory 
tester:

ant test-lock-factory passes successfully.

I also changed the javadocs a bit and renamed the ValidatingDirectoryWrapper to 
LockValidatingDirectoryWrapper. We have way too many validating wrappers, so we 
should have the term Lock in the name :-)

While implementing the lock stress tester, I noticed that it is now very hard 
to differentiate between a conventional I/O error and a failure to obtain the 
lock. Maybe we should still preserve LockObtainFailedException. I am not happy 
with no longer having an exception that clearly states that the lock was not 
successfully obtained (also for users).
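
For example, keeping a dedicated exception type makes the distinction trivial 
for callers such as the stress tester. A rough sketch, assuming an 
acquire-or-fail method (called obtainLock here; a proposal, not the current 
API) that throws the existing org.apache.lucene.store.LockObtainFailedException 
on contention:

{code:java}
// Sketch: contention is an expected outcome, any other IOException is a real
// failure. obtainLock() is hypothetical; LockObtainFailedException exists.
import java.io.IOException;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.Lock;
import org.apache.lucene.store.LockObtainFailedException;

final class LockProbe {
  /** Returns true if the lock was obtained, false if another process holds
   *  it; genuine I/O errors propagate to the caller. */
  static boolean tryOnce(Directory dir, String lockName) throws IOException {
    try (Lock lock = dir.obtainLock(lockName)) { // hypothetical acquire-or-fail API
      // the lock is held here; do the protected work, then close() releases it
      return true;
    } catch (LockObtainFailedException e) {
      return false; // expected contention, not an error
    }
  }
}
{code}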

 Simplify Directory/lock api
 ---

 Key: LUCENE-6508
 URL: https://issues.apache.org/jira/browse/LUCENE-6508
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Robert Muir
Assignee: Uwe Schindler
 Attachments: LUCENE-6508-deadcode1.patch, LUCENE-6508.patch, 
 LUCENE-6508.patch


 See LUCENE-6507 for some background. In general it would be great if you can 
 just acquire an immutable lock (or you get a failure) and then you close that 
 to release it.
 Today the API might be too much for what is needed by IW.






[jira] [Commented] (SOLR-5743) Faceting with BlockJoin support

2015-05-29 Thread Dr Oleg Savrasov (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5743?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14564401#comment-14564401
 ] 

Dr Oleg Savrasov commented on SOLR-5743:


We call this kind of request, which mixes and matches fields from different 
related entities, a deep search. To handle such requests we need to compose a 
Boolean query, which provides the linguistic matching, with a Block Join query, 
which returns the top-level document when the match happens on a nested 
document. This topic is worth its own JIRA (or a few of them). Here, we are 
focusing on faceting rather than matching. 

 Faceting with BlockJoin support
 ---

 Key: SOLR-5743
 URL: https://issues.apache.org/jira/browse/SOLR-5743
 Project: Solr
  Issue Type: New Feature
Reporter: abipc
  Labels: features
 Attachments: SOLR-5743.patch, SOLR-5743.patch, SOLR-5743.patch, 
 SOLR-5743.patch, SOLR-5743.patch


 For a sample inventory(note - nested documents) like this -   
  <doc>
    <field name="id">10</field>
    <field name="type_s">parent</field>
    <field name="BRAND_s">Nike</field>
    <doc>
      <field name="id">11</field>
      <field name="COLOR_s">Red</field>
      <field name="SIZE_s">XL</field>
    </doc>
    <doc>
      <field name="id">12</field>
      <field name="COLOR_s">Blue</field>
      <field name="SIZE_s">XL</field>
    </doc>
  </doc>
 Faceting results must contain - 
 Red(1)
 XL(1) 
 Blue(1) 
 for a q=* query. 
 PS : The inventory example has been taken from this blog - 
 http://blog.griddynamics.com/2013/09/solr-block-join-support.html






[jira] [Commented] (SOLR-5743) Faceting with BlockJoin support

2015-05-29 Thread Dr Oleg Savrasov (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5743?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14564400#comment-14564400
 ] 

Dr Oleg Savrasov commented on SOLR-5743:


We call this kind of request, which mixes and matches fields from different 
related entities, a deep search. To handle such requests we need to compose a 
Boolean query, which provides the linguistic matching, with a Block Join query, 
which returns the top-level document when the match happens on a nested 
document. This topic is worth its own JIRA (or a few of them). Here, we are 
focusing on faceting rather than matching. 

 Faceting with BlockJoin support
 ---

 Key: SOLR-5743
 URL: https://issues.apache.org/jira/browse/SOLR-5743
 Project: Solr
  Issue Type: New Feature
Reporter: abipc
  Labels: features
 Attachments: SOLR-5743.patch, SOLR-5743.patch, SOLR-5743.patch, 
 SOLR-5743.patch, SOLR-5743.patch


 For a sample inventory(note - nested documents) like this -   
  <doc>
    <field name="id">10</field>
    <field name="type_s">parent</field>
    <field name="BRAND_s">Nike</field>
    <doc>
      <field name="id">11</field>
      <field name="COLOR_s">Red</field>
      <field name="SIZE_s">XL</field>
    </doc>
    <doc>
      <field name="id">12</field>
      <field name="COLOR_s">Blue</field>
      <field name="SIZE_s">XL</field>
    </doc>
  </doc>
 Faceting results must contain - 
 Red(1)
 XL(1) 
 Blue(1) 
 for a q=* query. 
 PS : The inventory example has been taken from this blog - 
 http://blog.griddynamics.com/2013/09/solr-block-join-support.html






[jira] [Commented] (SOLR-7570) Config APIs should not modify the ConfigSet

2015-05-29 Thread Alan Woodward (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7570?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14564398#comment-14564398
 ] 

Alan Woodward commented on SOLR-7570:
-

bq. will the LOCAL changes make sense for SolrCloud mode?

I was thinking it might come in useful for things like PeerSync, or possibly 
per-core roles.

I'll change the collection-specific znode to conform with the existing setup.

Back-compatibility shouldn't be a problem, as existing installations will have 
their overlays read from the shared config, up until they make a change, at 
which point an overlay will be written to the collection config, which takes 
precedence.  I still need to work out how this works with ConfListeners though.

Changes shared between collections should be done through a different API, I 
think.  Something like the configset API being discussed on SOLR-5955 would be 
more appropriate for that.

 Config APIs should not modify the ConfigSet
 ---

 Key: SOLR-7570
 URL: https://issues.apache.org/jira/browse/SOLR-7570
 Project: Solr
  Issue Type: Improvement
Reporter: Tomás Fernández Löbbe
 Attachments: SOLR-7570.patch


 Originally discussed here: 
 http://mail-archives.apache.org/mod_mbox/lucene-dev/201505.mbox/%3CCAMJgJxSXCHxDzJs5-C-pKFDEBQD6JbgxB=-xp7u143ekmgp...@mail.gmail.com%3E
 The ConfigSet used to create a collection should be read-only. Changes made 
 via any of the Config APIs should only be applied to the collection where the 
 operation is done, and not to other collections that may be using the same 
 ConfigSet. As discussed in the dev list: 
 When a collection is created we should have two things, an immutable part 
 (the ConfigSet) and a mutable part (configoverlay, generated schema, etc). 
 The ConfigSet will still be placed in ZooKeeper under /configs but the 
 mutable part should be placed under /collections/$COLLECTION_NAME/…
 [~romseygeek] suggested: 
 {quote}
 A nice way of doing it would be to make it part of the SolrResourceLoader 
 interface.  The ZK resource loader could check in the collection-specific 
 zknode first, and then under configs/, and we could add a writeResource() 
 method that writes to the collection-specific node as well.  Then all config 
 I/O goes via the resource loader, and we have a way of keeping certain parts 
 immutable.
 {quote}
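
 A very rough sketch of that lookup order, with entirely hypothetical class and 
 method names (the real SolrResourceLoader and ZooKeeper APIs are not shown): 
 collection-specific overlays win, shared configsets stay read-only.

{code:java}
// Hypothetical sketch: resolve a config resource from the collection-specific
// znode first, then fall back to the shared configset under /configs; writes
// only ever go to the collection-specific node.
import java.io.IOException;
import java.io.InputStream;

interface ZkStore {
  boolean exists(String path) throws IOException;
  InputStream read(String path) throws IOException;
  void write(String path, byte[] data) throws IOException;
}

final class CollectionAwareResourceLoader {
  private final ZkStore zk;
  private final String collection;
  private final String configSet;

  CollectionAwareResourceLoader(ZkStore zk, String collection, String configSet) {
    this.zk = zk;
    this.collection = collection;
    this.configSet = configSet;
  }

  InputStream openResource(String name) throws IOException {
    String overlayPath = "/collections/" + collection + "/" + name;
    if (zk.exists(overlayPath)) {
      return zk.read(overlayPath);       // mutable, collection-specific overlay
    }
    return zk.read("/configs/" + configSet + "/" + name); // immutable configset
  }

  void writeResource(String name, byte[] data) throws IOException {
    zk.write("/collections/" + collection + "/" + name, data);
  }
}
{code}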






[jira] [Comment Edited] (SOLR-7576) Implement RequestHandler in Javascript

2015-05-29 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7576?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14564284#comment-14564284
 ] 

Noble Paul edited comment on SOLR-7576 at 5/29/15 7:49 AM:
---

 I have missed SOLR-5005.
We will merge the work done in both tickets into this one.

It also should have the security mechanisms that loading executable code into 
Solr must adhere to. I'll add security to this before committing.


was (Author: noble.paul):
 I have missed SOLR-5005.
I'm mostly done with this and planning to commit it soon.

Is there anything missing in this patch that you wish to include?

The objective is not just to make a JS handler; the idea is to provide a 
comprehensive API set that the functional nature of Javascript can leverage.

It also should have the security mechanisms that loading executable code into 
Solr must adhere to. I'll add security to this before committing.

 Implement RequestHandler in Javascript
 --

 Key: SOLR-7576
 URL: https://issues.apache.org/jira/browse/SOLR-7576
 Project: Solr
  Issue Type: New Feature
Reporter: Noble Paul
 Attachments: SOLR-7576.patch


 Solr now supports dynamic loading of components (SOLR-7073), and it is secured 
 in SOLR-7126.
 We can extend the same functionality to JS as well.
 Example of creating a RequestHandler: 
 {code:javascript}
 curl http://localhost:8983/solr/collection1/config -H 
 'Content-type:application/json'  -d '{
 "create-requesthandler" : {"name": "jshandler",
 "class": "solr.JSRequestHandler", 
 "defaults": {
 "js": "myreqhandlerjs", //this is the name of the blob in .system collection
 "version": 3,
 "sig": "mW1Gwtz2QazjfVdrLFHfbGwcr8xzFYgUOLu68LHqWRDvLG0uLcy1McQ+AzVmeZFBf1yLPDEHBWJb5KXr8bdbHN/PYgUB1nsr9pk4EFyD9KfJ8TqeH/ijQ9waa/vjqyiKEI9U550EtSzruLVZ32wJ7smvV0fj2YYhrUaaPzOn9g0="
 }
  }  
 }'
 {code}
 To make this work
 * Solr should be started with {{-Denable.runtime.lib=true}}
 * The javascript must be loaded to the {{.system}} collection using the blob 
 store API
 * Configure the requesthandler with the JS blob name and version
 * Sign the javascript and configure the signature if security is enabled
 The {{JSRequestHandler}} is implicitly defined and it can be accessed by 
 hitting {{/js/jsname/version}} 






[jira] [Commented] (SOLR-7576) Implement RequestHandler in Javascript

2015-05-29 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7576?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14564284#comment-14564284
 ] 

Noble Paul commented on SOLR-7576:
--

 I have missed SOLR-5005.
I'm mostly done with this and planning to commit it soon.

Is there anything missing in this patch that you wish to include?

The objective is not just to make a JS handler; the idea is to provide a 
comprehensive API set that the functional nature of Javascript can leverage.

It also should have the security mechanisms that loading executable code into 
Solr must adhere to. I'll add security to this before committing.

 Implement RequestHandler in Javascript
 --

 Key: SOLR-7576
 URL: https://issues.apache.org/jira/browse/SOLR-7576
 Project: Solr
  Issue Type: New Feature
Reporter: Noble Paul
 Attachments: SOLR-7576.patch


 Solr now supports dynamic loading of components (SOLR-7073), and it is secured 
 in SOLR-7126.
 We can extend the same functionality to JS as well.
 Example of creating a RequestHandler: 
 {code:javascript}
 curl http://localhost:8983/solr/collection1/config -H 
 'Content-type:application/json'  -d '{
 "create-requesthandler" : {"name": "jshandler",
 "class": "solr.JSRequestHandler", 
 "defaults": {
 "js": "myreqhandlerjs", //this is the name of the blob in .system collection
 "version": 3,
 "sig": "mW1Gwtz2QazjfVdrLFHfbGwcr8xzFYgUOLu68LHqWRDvLG0uLcy1McQ+AzVmeZFBf1yLPDEHBWJb5KXr8bdbHN/PYgUB1nsr9pk4EFyD9KfJ8TqeH/ijQ9waa/vjqyiKEI9U550EtSzruLVZ32wJ7smvV0fj2YYhrUaaPzOn9g0="
 }
  }  
 }'
 {code}
 To make this work
 * Solr should be started with {{-Denable.runtime.lib=true}}
 * The javascript must be loaded to the {{.system}} collection using the blob 
 store API
 * Configure the requesthandler with the JS blob name and version
 * Sign the javascript and configure the signature if security is enabled
 The {{JSRequestHandler}} is implicitly defined and it can be accessed by 
 hitting {{/js/jsname/version}} 






[jira] [Commented] (SOLR-7605) TestCloudPivotFacet failures: Must not add duplicate PivotFacetValue with redundent inner value

2015-05-29 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7605?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14564265#comment-14564265
 ] 

Hoss Man commented on SOLR-7605:


These reproduce for me currently, and I'll dig into them in the AM...

{noformat}
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-5.x/861/
Updating http://svn.apache.org/repos/asf/lucene/dev/branches/branch_5x at 
revision '2015-05-28T16:29:15.428 -0400'
At revision 1682323

[java-info] java version 1.7.0_72
[java-info] Java(TM) SE Runtime Environment (1.7.0_72-b14, Oracle Corporation)
[java-info] Java HotSpot(TM) 64-Bit Server VM (24.72-b04, Oracle Corporation)
[java-info] Test args: []

   [junit4]   2> NOTE: reproduce with: ant test  -Dtestcase=TestCloudPivotFacet 
-Dtests.method=test -Dtests.seed=22F85D14F0CCB183 -Dtests.multiplier=2 
-Dtests.nightly=true -Dtests.slow=true 
-Dtests.linedocsfile=/x1/jenkins/lucene-data/enwiki.random.lines.txt 
-Dtests.locale=ar_MA -Dtests.timezone=IST -Dtests.asserts=true 
-Dtests.file.encoding=US-ASCII
{noformat}

{noformat}
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/12854/
Updating http://svn.apache.org/repos/asf/lucene/dev/trunk at revision 
'2015-05-28T09:12:47.737 +'
At revision 1682179

[java-info] java version 1.9.0-ea
[java-info] Java(TM) SE Runtime Environment (1.9.0-ea-b60, Oracle Corporation)
[java-info] Java HotSpot(TM) Server VM (1.9.0-ea-b60, Oracle Corporation)
[java-info] Test args: [-client -XX:+UseConcMarkSweepGC]

   [junit4]   2> NOTE: reproduce with: ant test  -Dtestcase=TestCloudPivotFacet 
-Dtests.method=test -Dtests.seed=7A1923556F2286C2 -Dtests.multiplier=3 
-Dtests.slow=true -Dtests.locale=es_PE -Dtests.timezone=Australia/Currie 
-Dtests.asserts=true -Dtests.file.encoding=ISO-8859-1
{noformat}

 TestCloudPivotFacet failures: Must not add duplicate PivotFacetValue with 
 redundent inner value
 ---

 Key: SOLR-7605
 URL: https://issues.apache.org/jira/browse/SOLR-7605
 Project: Solr
  Issue Type: Bug
Reporter: Hoss Man
Assignee: Hoss Man

 There have been two recent jenkins failures of TestCloudPivotFacet on both 5x 
 and trunk with the same underlying cause...
 {noformat}
 <p>Problem accessing /collection1/select. Reason:
 <pre>java.lang.AssertionError: Must not add duplicate PivotFacetValue 
 with redundent inner value</pre></p>
 {noformat}
 Digging through mail logs, it looks like there have been a handful of these 
 errors on different branches and OSes, with and without nightly, since April 
 1st of this year.
 The two recent seeds I tried (on trunk and 5x) reproduce - details to follow.






[jira] [Created] (SOLR-7605) TestCloudPivotFacet failures: Must not add duplicate PivotFacetValue with redundent inner value

2015-05-29 Thread Hoss Man (JIRA)
Hoss Man created SOLR-7605:
--

 Summary: TestCloudPivotFacet failures: Must not add duplicate 
PivotFacetValue with redundent inner value
 Key: SOLR-7605
 URL: https://issues.apache.org/jira/browse/SOLR-7605
 Project: Solr
  Issue Type: Bug
Reporter: Hoss Man
Assignee: Hoss Man


There have been two recent jenkins failures of TestCloudPivotFacet on both 5x 
and trunk with the same underlying cause...

{noformat}
<p>Problem accessing /collection1/select. Reason:
<pre>java.lang.AssertionError: Must not add duplicate PivotFacetValue with 
redundent inner value</pre></p>
{noformat}

Digging through mail logs, it looks like there have been a handful of these 
errors on different branches and OSes, with and without nightly, since April 
1st of this year.

The two recent seeds I tried (on trunk and 5x) reproduce - details to follow.






[jira] [Commented] (LUCENE-6508) Simplify Directory/lock api

2015-05-29 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6508?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14564604#comment-14564604
 ] 

ASF subversion and git services commented on LUCENE-6508:
-

Commit 1682421 from [~thetaphi] in branch 'dev/branches/lucene6508'
[ https://svn.apache.org/r1682421 ]

LUCENE-6508: Create branch

 Simplify Directory/lock api
 ---

 Key: LUCENE-6508
 URL: https://issues.apache.org/jira/browse/LUCENE-6508
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Robert Muir
Assignee: Uwe Schindler
 Attachments: LUCENE-6508-deadcode1.patch, LUCENE-6508.patch, 
 LUCENE-6508.patch


 See LUCENE-6507 for some background. In general it would be great if you can 
 just acquire an immutable lock (or you get a failure) and then you close that 
 to release it.
 Today the API might be too much for what is needed by IW.






[jira] [Commented] (LUCENE-6508) Simplify Directory/lock api

2015-05-29 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6508?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14564606#comment-14564606
 ] 

ASF subversion and git services commented on LUCENE-6508:
-

Commit 1682422 from [~thetaphi] in branch 'dev/branches/lucene6508'
[ https://svn.apache.org/r1682422 ]

LUCENE-6508: Initial commit of Robert's and Uwe's code

 Simplify Directory/lock api
 ---

 Key: LUCENE-6508
 URL: https://issues.apache.org/jira/browse/LUCENE-6508
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Robert Muir
Assignee: Uwe Schindler
 Attachments: LUCENE-6508-deadcode1.patch, LUCENE-6508.patch, 
 LUCENE-6508.patch


 See LUCENE-6507 for some background. In general it would be great if you can 
 just acquire an immutable lock (or you get a failure) and then you close that 
 to release it.
 Today the API might be too much for what is needed by IW.






[jira] [Commented] (LUCENE-6371) Improve Spans payload collection

2015-05-29 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6371?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14564691#comment-14564691
 ] 

Robert Muir commented on LUCENE-6371:
-

By the way, those two go together. It's hard to fix that slow query to do the 
right thing and pass along contexts from rewrite while keeping the leniency; 
that is too scary and undertested.

At the same time, we don't want this scary SpanMultiTermQuery to corrupt all of 
our span APIs. The optimization in question won't fix its fundamental 
performance problems, and it's basically the only query with a problem.

These are the reasons why the original termcontext stuff for spans was a 
half-assed implementation: all those problems, and very ugly as well, but 
relatively simple and contained. It worked completely (seeks reduced from 2 to 
1) for all normal span queries, just not the crazy SpanMultiTermQuery. So it 
was a tradeoff for simplicity, and it works well for all the regular spans use 
cases, keeping them more in line with e.g. sloppy phrase queries and so on.

 Improve Spans payload collection
 

 Key: LUCENE-6371
 URL: https://issues.apache.org/jira/browse/LUCENE-6371
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Paul Elschot
Assignee: Alan Woodward
Priority: Minor
 Fix For: Trunk, 5.3

 Attachments: LUCENE-6371.patch, LUCENE-6371.patch, LUCENE-6371.patch, 
 LUCENE-6371.patch


 Spin off from LUCENE-6308, see the comments there from around 23 March 2015.






[jira] [Commented] (LUCENE-6371) Improve Spans payload collection

2015-05-29 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6371?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14564678#comment-14564678
 ] 

Robert Muir commented on LUCENE-6371:
-

{quote}
I think it's still useful though - I use it all the time!
{quote}

Yeah, but it's slow with no easy chance of ever being faster. There is no 
simple bitset rewrite here like there is for other multi-term queries. 
Additionally, it has all the downsides of an enormous boolean query, but with 
proximity to boot, and this is very real: even simple stuff like 1-2 KB of RAM 
consumption per term due to additional decompression buffers for prox. Maybe 
in the future you could optionally index prefix terms, but I can't imagine 
merging proximity etc. into a prefix field for fully indexed fields as a 
default; it seems complicated, slow, and space-consuming.

{quote}
It would be nice if you could restrict the number of SpanOr clauses it rewrites 
to, but that's a separate issue.
{quote}

+1, that is a great idea. We should really both do that and also add warnings 
to the javadocs about inefficiency. It has none today!

{quote}
If you really think that moving .getSpans() and .extractTerms() to SpanWeight 
doesn't gain anything, then I can back it out. But I think it does simplify the 
API and brings it more into line with our other standard queries. 
{quote}

I totally agree it has the value of consistency with other queries. But some of 
the APIs trying to do this are fairly complicated, yet at the same time still 
not really working: see below for more explanation.

{quote}
And I really don't see that exposing the termcontexts map on the SpanWeight 
constructor is any worse than exposing it directly in .getSpans(). In fact, I'd 
say that it's hiding it better - very few users of lucene are going to be 
looking at SpanWeights, as they're an implementation detail, but anyone using 
an IDE is going to be shown SpanQuery.getSpans() when they try and autocomplete 
on a SpanQuery object, and it's not something that most users need to worry 
about.
{quote}

It's actually terrible already: the motivation for this stuff was to try to 
speed up the turtle in question, SpanMultiTermQuery. The reason this stuff was 
exposed is that it could bring some relief to such crazy queries by visiting 
each term in the term dictionary fewer than 3 times (rewrite, weight/idf, 
postings). But this was never quite right, for two reasons:
* Leniency: we can't enforce that we are doing the performant thing, because 
creation of weight/idf uses extractTerms(). So the SpanTermWeight inside the 
exclude portion of a SpanNot suddenly sees an unexpected term it has no 
termstate for. Maybe patches here removed this problem, but they forgot to fix 
the leniency in SpanTermWeight; I see at least the code comment is gone.
* Incomplete: SpanMultiTermQueryWrapper still isn't reusing the termcontext 
from rewrite() by passing it down to the rewritten spans. So the whole ugly 
thing isn't even totally working; it just reduces the number of visits to the 
term dictionary from 3 down to 2, but it is stupid that it is not 1.


 Improve Spans payload collection
 

 Key: LUCENE-6371
 URL: https://issues.apache.org/jira/browse/LUCENE-6371
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Paul Elschot
Assignee: Alan Woodward
Priority: Minor
 Fix For: Trunk, 5.3

 Attachments: LUCENE-6371.patch, LUCENE-6371.patch, LUCENE-6371.patch, 
 LUCENE-6371.patch


 Spin off from LUCENE-6308, see the comments there from around 23 March 2015.






[jira] [Moved] (SOLR-7606) ToParentBlockJoinQuery fails with AIOOBE under certain circumstances

2015-05-29 Thread Robert Muir (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7606?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Muir moved LUCENE-6512 to SOLR-7606:
---

  Component/s: (was: modules/join)
Lucene Fields:   (was: New)
Affects Version/s: (was: 4.10.4)
   4.10.4
  Key: SOLR-7606  (was: LUCENE-6512)
  Project: Solr  (was: Lucene - Core)

 ToParentBlockJoinQuery fails with AIOOBE under certain circumstances
 

 Key: SOLR-7606
 URL: https://issues.apache.org/jira/browse/SOLR-7606
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.10.4
Reporter: Uwe Schindler
Assignee: Uwe Schindler
 Attachments: LUCENE-6512.patch


 I had a customer using BlockJoin with Solr. He executed a block join query 
 and the following appeared in Solr's logs:
 {noformat}
 28 May 2015 17:19:20  ERROR (SolrException.java:131) - 
 java.lang.ArrayIndexOutOfBoundsException: -1
 at org.apache.lucene.codecs.lucene40.BitVector.get(BitVector.java:149)
 at 
 org.apache.lucene.search.join.ToParentBlockJoinQuery$BlockJoinScorer.nextDoc(ToParentBlockJoinQuery.java:293)
 at 
 org.apache.lucene.search.Weight$DefaultBulkScorer.scoreAll(Weight.java:192)
 at 
 org.apache.lucene.search.Weight$DefaultBulkScorer.score(Weight.java:163)
 at org.apache.lucene.search.BulkScorer.score(BulkScorer.java:35)
 at 
 org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:621)
 at 
 org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:297)
 at 
 org.apache.solr.search.SolrIndexSearcher.buildAndRunCollectorChain(SolrIndexSearcher.java:209)
 at 
 org.apache.solr.search.SolrIndexSearcher.getDocListNC(SolrIndexSearcher.java:1619)
 at 
 org.apache.solr.search.SolrIndexSearcher.getDocListC(SolrIndexSearcher.java:1433)
 at 
 org.apache.solr.search.SolrIndexSearcher.search(SolrIndexSearcher.java:514)
 at 
 org.apache.solr.handler.component.QueryComponent.process(QueryComponent.java:484)
 at 
 org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:218)
 at 
 org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
 at org.apache.solr.core.SolrCore.execute(SolrCore.java:1976)
 at 
 org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:777)
 at 
 org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:418)
 at 
 org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:207)
 at 
 org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
 at 
 org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
 at 
 org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
 at 
 org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
 at 
 org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127)
 at 
 org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
 at 
 org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
 at 
 org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:298)
 at 
 org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:852)
 at 
 org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:588)
 at 
 org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:489)
 at java.lang.Thread.run(Thread.java:745)
 {noformat}
 I debugged this stuff and found out when this happens:
 The last block of documents was not followed by a parent. If one of the child 
 documents without a parent at the end of the index matches the inner query, 
 the scorer calls nextSetBit() to find the next parent document. This returns 
 -1. 
 There is an assert afterwards that checks for -1, but in production code this 
 is of course never executed.
 If the index has deletions, the bogus -1 is passed to acceptDocs and then 
 triggers the above problem.
 We should change the assert to another IllegalStateException(), used to 
 notify the user that the orthogonality is broken. That way the user gets the 
 information that his index is broken and contains child documents without a 
 parent at the very end of a segment.
 I have seen this on 4.10.4. Maybe that's already fixed in 5.0, but I just open 
 this here for investigation. This was clearly a problem in the index, but due 
 to Solr's buggy implementation of parent/child documents (you have to set the 
 parent flag in contrast to 
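
 A minimal sketch of the check the description proposes, with illustrative 
 names only (this is not the actual ToParentBlockJoinQuery code, and 
 java.util.BitSet merely stands in for the parent filter's bitset):

{code:java}
// Illustrative only: make the -1 case an explicit runtime check instead of an
// assert, so a malformed block fails with a clear message rather than an
// ArrayIndexOutOfBoundsException further down.
import java.util.BitSet;

final class ParentLookup {
  /** Returns the parent docID for the given child, or throws if the segment
   *  ends with child documents that have no parent after them. */
  static int parentOf(BitSet parentBits, int childDoc) {
    int parentDoc = parentBits.nextSetBit(childDoc);
    if (parentDoc == -1) {
      throw new IllegalStateException(
          "Child docID=" + childDoc + " has no parent document after it; "
          + "the index violates the block-join parent/child contract");
    }
    return parentDoc;
  }
}
{code}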

[jira] [Commented] (SOLR-7606) ToParentBlockJoinQuery fails with AIOOBE under certain circumstances

2015-05-29 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7606?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14564466#comment-14564466
 ] 

Robert Muir commented on SOLR-7606:
---

Sorry, this is a Solr bug.

-1 to changing this query to check that you abused things at indexing time. 
I'm sorry, this is not the job of Lucene's queries, and we would slow them 
down for everyone by doing this stuff.

We gotta draw a line and let queries be fast; that is why we do indexing at all!

 ToParentBlockJoinQuery fails with AIOOBE under certain circumstances
 

 Key: SOLR-7606
 URL: https://issues.apache.org/jira/browse/SOLR-7606
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.10.4
Reporter: Uwe Schindler
Assignee: Uwe Schindler
 Attachments: LUCENE-6512.patch


 I had a customer using BlockJoin with Solr. He executed a block join query 
 and the following appeared in Solr's logs:
 {noformat}
 28 May 2015 17:19:20  ERROR (SolrException.java:131) - 
 java.lang.ArrayIndexOutOfBoundsException: -1
 at org.apache.lucene.codecs.lucene40.BitVector.get(BitVector.java:149)
 at 
 org.apache.lucene.search.join.ToParentBlockJoinQuery$BlockJoinScorer.nextDoc(ToParentBlockJoinQuery.java:293)
 at 
 org.apache.lucene.search.Weight$DefaultBulkScorer.scoreAll(Weight.java:192)
 at 
 org.apache.lucene.search.Weight$DefaultBulkScorer.score(Weight.java:163)
 at org.apache.lucene.search.BulkScorer.score(BulkScorer.java:35)
 at 
 org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:621)
 at 
 org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:297)
 at 
 org.apache.solr.search.SolrIndexSearcher.buildAndRunCollectorChain(SolrIndexSearcher.java:209)
 at 
 org.apache.solr.search.SolrIndexSearcher.getDocListNC(SolrIndexSearcher.java:1619)
 at 
 org.apache.solr.search.SolrIndexSearcher.getDocListC(SolrIndexSearcher.java:1433)
 at 
 org.apache.solr.search.SolrIndexSearcher.search(SolrIndexSearcher.java:514)
 at 
 org.apache.solr.handler.component.QueryComponent.process(QueryComponent.java:484)
 at 
 org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:218)
 at 
 org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
 at org.apache.solr.core.SolrCore.execute(SolrCore.java:1976)
 at 
 org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:777)
 at 
 org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:418)
 at 
 org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:207)
 at 
 org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
 at 
 org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
 at 
 org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
 at 
 org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
 at 
 org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127)
 at 
 org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
 at 
 org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
 at 
 org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:298)
 at 
 org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:852)
 at 
 org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:588)
 at 
 org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:489)
 at java.lang.Thread.run(Thread.java:745)
 {noformat}
 I debugged this stuff and found out when this happens:
 The last block of documents was not followed by a parent. If one of the child 
 documents without a parent at the end of the index matches the inner query, 
 the scorer calls nextSetBit() to find the next parent document. This returns 
 -1. 
 There is an assert afterwards that checks for -1, but in production code this 
 is of course never executed.
 If the index has deletions, the bogus -1 is passed to acceptDocs and then 
 triggers the above problem.
 We should change the assert to another IllegalStateException(), used to 
 notify the user that the orthogonality is broken. That way the user gets the 
 information that his index is broken and contains child documents without a 
 parent at the very end of a segment.
 I have seen this on 4.10.4. Maybe that's already fixed in 5.0, but I just open 
 this here for investigation. This was clearly a problem in the index, but due 
 to Solr's buggy implementation 

[jira] [Commented] (SOLR-7606) ToParentBlockJoinQuery fails with AIOOBE under certain circumstances

2015-05-29 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7606?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14564499#comment-14564499
 ] 

Uwe Schindler commented on SOLR-7606:
-

I agree with both of you. I just say:
- we already have a lot of such checks, so this one is just another one
- I would agree to not commit this, if we in turn remove the other checks.

But in any case, we then have to fix block indexing to do the checks at index 
time. In addition, if you delete a parent doc, the children should be deleted 
automatically, too. This was likely the problem that led to this bug.

 ToParentBlockJoinQuery fails with AIOOBE under certain circumstances
 

 Key: SOLR-7606
 URL: https://issues.apache.org/jira/browse/SOLR-7606
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.10.4
Reporter: Uwe Schindler
Assignee: Uwe Schindler
 Attachments: LUCENE-6512.patch


 I had a customer using BlockJoin with Solr. He executed a block join query 
 and the following appeared in Solr's logs:
 {noformat}
 28 May 2015 17:19:20  ERROR (SolrException.java:131) - 
 java.lang.ArrayIndexOutOfBoundsException: -1
 at org.apache.lucene.codecs.lucene40.BitVector.get(BitVector.java:149)
 at 
 org.apache.lucene.search.join.ToParentBlockJoinQuery$BlockJoinScorer.nextDoc(ToParentBlockJoinQuery.java:293)
 at 
 org.apache.lucene.search.Weight$DefaultBulkScorer.scoreAll(Weight.java:192)
 at 
 org.apache.lucene.search.Weight$DefaultBulkScorer.score(Weight.java:163)
 at org.apache.lucene.search.BulkScorer.score(BulkScorer.java:35)
 at 
 org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:621)
 at 
 org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:297)
 at 
 org.apache.solr.search.SolrIndexSearcher.buildAndRunCollectorChain(SolrIndexSearcher.java:209)
 at 
 org.apache.solr.search.SolrIndexSearcher.getDocListNC(SolrIndexSearcher.java:1619)
 at 
 org.apache.solr.search.SolrIndexSearcher.getDocListC(SolrIndexSearcher.java:1433)
 at 
 org.apache.solr.search.SolrIndexSearcher.search(SolrIndexSearcher.java:514)
 at 
 org.apache.solr.handler.component.QueryComponent.process(QueryComponent.java:484)
 at 
 org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:218)
 at 
 org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
 at org.apache.solr.core.SolrCore.execute(SolrCore.java:1976)
 at 
 org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:777)
 at 
 org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:418)
 at 
 org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:207)
 at 
 org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
 at 
 org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
 at 
 org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
 at 
 org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
 at 
 org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127)
 at 
 org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
 at 
 org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
 at 
 org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:298)
 at 
 org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:852)
 at 
 org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:588)
 at 
 org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:489)
 at java.lang.Thread.run(Thread.java:745)
 {noformat}
 I debugged this stuff and found out when this happens:
 The last block of documents was not followed by a parent. If one of the child 
 documents without a parent at the end of the index matches the inner query, 
 the scorer calls nextSetBit() to find the next parent document. This returns 
 -1. 
 There is an assert afterwards that checks for -1, but in production code this 
 is of course never executed.
 If the index has deletions, the bogus -1 is passed to acceptDocs and then 
 triggers the above problem.
 We should change the assert to another IllegalStateException(), used to 
 notify the user that the orthogonality is broken. That way the user gets the 
 information that his index is broken and contains child documents without a 
 parent at the very end of a segment.
 I have seen this on 4.10.4. Maybe that's already fixed in 5.0, but I just open 

Re: [JENKINS] Lucene-Solr-SmokeRelease-trunk - Build # 225 - Still Failing

2015-05-29 Thread Michael McCandless
I'm looking ...

Mike McCandless

http://blog.mikemccandless.com


On Thu, May 28, 2015 at 5:59 AM, Apache Jenkins Server
jenk...@builds.apache.org wrote:
 Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-trunk/225/

 No tests ran.

 Build Log:
 [...truncated 52005 lines...]
 prepare-release-no-sign:
 [mkdir] Created dir: 
 /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-trunk/lucene/build/smokeTestRelease/dist
  [copy] Copying 446 files to 
 /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-trunk/lucene/build/smokeTestRelease/dist/lucene
  [copy] Copying 245 files to 
 /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-trunk/lucene/build/smokeTestRelease/dist/solr
[smoker] Java 1.8 
 JAVA_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/latest1.8
[smoker] NOTE: output encoding is UTF-8
[smoker]
[smoker] Load release URL 
 file:/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-trunk/lucene/build/smokeTestRelease/dist/...
[smoker]
[smoker] Test Lucene...
[smoker]   test basics...
[smoker]   get KEYS
[smoker] 0.1 MB in 0.01 sec (15.0 MB/sec)
[smoker]   check changes HTML...
[smoker]   download lucene-6.0.0-src.tgz...
[smoker] 28.0 MB in 0.04 sec (736.4 MB/sec)
[smoker] verify md5/sha1 digests
[smoker]   download lucene-6.0.0.tgz...
[smoker] 64.4 MB in 0.09 sec (696.2 MB/sec)
[smoker] verify md5/sha1 digests
[smoker]   download lucene-6.0.0.zip...
[smoker] 74.6 MB in 0.11 sec (685.2 MB/sec)
[smoker] verify md5/sha1 digests
[smoker]   unpack lucene-6.0.0.tgz...
[smoker] verify JAR metadata/identity/no javax.* or java.* classes...
[smoker] test demo with 1.8...
[smoker]   got 5746 hits for query lucene
[smoker] checkindex with 1.8...
[smoker] check Lucene's javadoc JAR
[smoker]   unpack lucene-6.0.0.zip...
[smoker] verify JAR metadata/identity/no javax.* or java.* classes...
[smoker] test demo with 1.8...
[smoker]   got 5746 hits for query lucene
[smoker] checkindex with 1.8...
[smoker] check Lucene's javadoc JAR
[smoker]   unpack lucene-6.0.0-src.tgz...
[smoker] make sure no JARs/WARs in src dist...
[smoker] run ant validate
[smoker] run tests w/ Java 8 and testArgs='-Dtests.slow=false'...
[smoker] test demo with 1.8...
[smoker]   got 209 hits for query lucene
[smoker] checkindex with 1.8...
[smoker] generate javadocs w/ Java 8...
[smoker]
[smoker] Crawl/parse...
[smoker]
[smoker] Verify...
[smoker]   confirm all releases have coverage in TestBackwardsCompatibility
[smoker] find all past Lucene releases...
[smoker] run TestBackwardsCompatibility..
[smoker] Traceback (most recent call last):
[smoker]   File 
 /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-trunk/dev-tools/scripts/smokeTestRelease.py,
  line 1498, in <module>
[smoker] main()
[smoker]   File 
 /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-trunk/dev-tools/scripts/smokeTestRelease.py,
  line 1443, in main
[smoker] smokeTest(c.java, c.url, c.revision, c.version, c.tmp_dir, 
 c.is_signed, ' '.join(c.test_args))
[smoker]   File 
 /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-trunk/dev-tools/scripts/smokeTestRelease.py,
  line 1481, in smokeTest
[smoker] unpackAndVerify(java, 'lucene', tmpDir, 'lucene-%s-src.tgz' % 
 version, svnRevision, version, testArgs, baseURL)
[smoker]   File 
 /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-trunk/dev-tools/scripts/smokeTestRelease.py,
  line 628, in unpackAndVerify
[smoker] verifyUnpacked(java, project, artifact, unpackPath, 
 svnRevision, version, testArgs, tmpDir, baseURL)
[smoker]   File 
 /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-trunk/dev-tools/scripts/smokeTestRelease.py,
  line 775, in verifyUnpacked
[smoker] confirmAllReleasesAreTestedForBackCompat(unpackPath)
[smoker]   File 
 /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-trunk/dev-tools/scripts/smokeTestRelease.py,
  line 1404, in confirmAllReleasesAreTestedForBackCompat
[smoker] raise RuntimeError('could not parse version %s' % name)
[smoker] RuntimeError: could not parse version 5x-with-4x-segments

 BUILD FAILED
 /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-trunk/build.xml:412:
  exec returned: 1

 Total time: 38 minutes 3 seconds
 Build step 'Invoke Ant' marked build as failure
 Email was triggered for: Failure
 Sending email for trigger: Failure





[JENKINS] Lucene-Solr-NightlyTests-trunk - Build # 696 - Still Failing

2015-05-29 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-trunk/696/

3 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.HttpPartitionTest

Error Message:
file handle leaks: 
[SeekableByteChannel(/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-trunk/solr/build/solr-core/test/J0/temp/solr.cloud.HttpPartitionTest
 3AF6852C6379681-001/index-SimpleFSDirectory-011/segments_2)]

Stack Trace:
java.lang.RuntimeException: file handle leaks: 
[SeekableByteChannel(/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-trunk/solr/build/solr-core/test/J0/temp/solr.cloud.HttpPartitionTest
 3AF6852C6379681-001/index-SimpleFSDirectory-011/segments_2)]
at __randomizedtesting.SeedInfo.seed([3AF6852C6379681]:0)
at org.apache.lucene.mockfile.LeakFS.onClose(LeakFS.java:64)
at 
org.apache.lucene.mockfile.FilterFileSystem.close(FilterFileSystem.java:78)
at 
org.apache.lucene.mockfile.FilterFileSystem.close(FilterFileSystem.java:79)
at 
org.apache.lucene.util.TestRuleTemporaryFilesCleanup.afterAlways(TestRuleTemporaryFilesCleanup.java:227)
at 
com.carrotsearch.randomizedtesting.rules.TestRuleAdapter$1.afterAlways(TestRuleAdapter.java:31)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:43)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.Exception
at org.apache.lucene.mockfile.LeakFS.onOpen(LeakFS.java:47)
at 
org.apache.lucene.mockfile.HandleTrackingFS.callOpenHook(HandleTrackingFS.java:82)
at 
org.apache.lucene.mockfile.HandleTrackingFS.newByteChannel(HandleTrackingFS.java:272)
at 
org.apache.lucene.mockfile.FilterFileSystemProvider.newByteChannel(FilterFileSystemProvider.java:213)
at 
org.apache.lucene.mockfile.HandleTrackingFS.newByteChannel(HandleTrackingFS.java:241)
at java.nio.file.Files.newByteChannel(Files.java:361)
at java.nio.file.Files.newByteChannel(Files.java:407)
at 
org.apache.lucene.store.SimpleFSDirectory.openInput(SimpleFSDirectory.java:76)
at 
org.apache.lucene.store.Directory.openChecksumInput(Directory.java:110)
at 
org.apache.lucene.store.RawDirectoryWrapper.openChecksumInput(RawDirectoryWrapper.java:42)
at 
org.apache.lucene.index.SegmentInfos.readCommit(SegmentInfos.java:269)
at org.apache.lucene.index.CheckIndex.checkIndex(CheckIndex.java:488)
at org.apache.lucene.util.TestUtil.checkIndex(TestUtil.java:278)
at org.apache.lucene.util.TestUtil.checkIndex(TestUtil.java:265)
at 
org.apache.lucene.store.BaseDirectoryWrapper.close(BaseDirectoryWrapper.java:46)
at 
org.apache.solr.core.CachingDirectoryFactory.close(CachingDirectoryFactory.java:307)
at 
org.apache.solr.core.CachingDirectoryFactory.closeCacheValue(CachingDirectoryFactory.java:273)
at 
org.apache.solr.core.CachingDirectoryFactory.close(CachingDirectoryFactory.java:203)
at org.apache.solr.core.SolrCore.close(SolrCore.java:1254)
at 
org.apache.solr.cloud.ShardLeaderElectionContext.runLeaderProcess(ElectionContext.java:311)
at 
org.apache.solr.cloud.LeaderElector.runIamLeaderProcess(LeaderElector.java:198)
at 
org.apache.solr.cloud.LeaderElector.checkIfIamLeader(LeaderElector.java:159)
at 
org.apache.solr.cloud.LeaderElector.joinElection(LeaderElector.java:348)
at 
org.apache.solr.cloud.ZkController.joinElection(ZkController.java:1070)
at org.apache.solr.cloud.ZkController.register(ZkController.java:884)
at 
org.apache.solr.cloud.ZkController$RegisterCoreAsync.call(ZkController.java:225)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor$1.run(ExecutorUtil.java:156)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
... 1 more


REGRESSION:  
org.apache.solr.update.processor.UpdateRequestProcessorFactoryTest.testUpdateDistribChainSkipping

Error Message:
distrib-chain-explicit expected LogUpdateProcessor in chain 

[jira] [Commented] (SOLR-7606) ToParentBlockJoinQuery fails with AIOOBE under certain circumstances

2015-05-29 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7606?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14564524#comment-14564524
 ] 

Uwe Schindler commented on SOLR-7606:
-

One other possible solution for the query-time checks: we could wrap the whole 
parent/child logic in a try/catch block and, if something like an AIOOBE 
happens, rethrow it with a useful message. We could then remove all the checks 
we currently have (all those IllegalStateExceptions). This would be much 
cheaper than the tons of checks we have now.

But I agree, we should fix indexing, or at least provide a block join indexing 
API that allows indexing and deleting blocks, with some definition of which 
fields to use for parent and child documents.
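
A rough sketch of that idea with hypothetical names (the real scorer internals 
are not shown): wrap the advance logic once and translate low-level failures 
into a single clear error.

{code:java}
// Hypothetical sketch of "catch once, rethrow with a useful message": instead
// of scattering IllegalStateException checks through the scorer, translate a
// low-level out-of-bounds failure into one descriptive exception.
import java.io.IOException;

final class BlockJoinGuard {

  interface Advance {
    int nextDoc() throws IOException; // stands in for the scorer's advance logic
  }

  static int guardedNextDoc(Advance advance) throws IOException {
    try {
      return advance.nextDoc();
    } catch (IndexOutOfBoundsException e) { // covers AIOOBE as well
      throw new IllegalStateException(
          "Block join failed; the index probably contains child documents "
          + "without a following parent document (malformed block)", e);
    }
  }
}
{code}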

 ToParentBlockJoinQuery fails with AIOOBE under certain circumstances
 

 Key: SOLR-7606
 URL: https://issues.apache.org/jira/browse/SOLR-7606
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.10.4
Reporter: Uwe Schindler
Assignee: Uwe Schindler
 Attachments: LUCENE-6512.patch


 I had a customer using BlockJoin with Solr. He executed a block join query 
 and the following appeared in Solr's logs:
 {noformat}
 28 May 2015 17:19:20  ERROR (SolrException.java:131) - 
 java.lang.ArrayIndexOutOfBoundsException: -1
 at org.apache.lucene.codecs.lucene40.BitVector.get(BitVector.java:149)
 at 
 org.apache.lucene.search.join.ToParentBlockJoinQuery$BlockJoinScorer.nextDoc(ToParentBlockJoinQuery.java:293)
 at 
 org.apache.lucene.search.Weight$DefaultBulkScorer.scoreAll(Weight.java:192)
 at 
 org.apache.lucene.search.Weight$DefaultBulkScorer.score(Weight.java:163)
 at org.apache.lucene.search.BulkScorer.score(BulkScorer.java:35)
 at 
 org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:621)
 at 
 org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:297)
 at 
 org.apache.solr.search.SolrIndexSearcher.buildAndRunCollectorChain(SolrIndexSearcher.java:209)
 at 
 org.apache.solr.search.SolrIndexSearcher.getDocListNC(SolrIndexSearcher.java:1619)
 at 
 org.apache.solr.search.SolrIndexSearcher.getDocListC(SolrIndexSearcher.java:1433)
 at 
 org.apache.solr.search.SolrIndexSearcher.search(SolrIndexSearcher.java:514)
 at 
 org.apache.solr.handler.component.QueryComponent.process(QueryComponent.java:484)
 at 
 org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:218)
 at 
 org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
 at org.apache.solr.core.SolrCore.execute(SolrCore.java:1976)
 at 
 org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:777)
 at 
 org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:418)
 at 
 org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:207)
 at 
 org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
 at 
 org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
 at 
 org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
 at 
 org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
 at 
 org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127)
 at 
 org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
 at 
 org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
 at 
 org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:298)
 at 
 org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:852)
 at 
 org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:588)
 at 
 org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:489)
 at java.lang.Thread.run(Thread.java:745)
 {noformat}
 I debugged this stuff and found out when this happens:
 The last block of documents was not followed by a parent. If one of the child 
 documents without a parent at the end of the index matches the inner query, 
 the scorer calls nextSetBit() to find the next parent document. This returns 
 -1. 
 There is an assert afterwards that checks for -1, but in production code this 
 is of course never executed.
 If the index has deletions, the bogus -1 is passed to acceptDocs and then 
 triggers the above problem.
 We should change the assert to another IllegalStateException(), used to 
 notify the user that the orthogonality is broken. That way the user gets the 
 information that his index is broken and contains child documents without a
 

[jira] [Commented] (LUCENE-6487) Add WGS84 capability to geo3d support

2015-05-29 Thread Karl Wright (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6487?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14564550#comment-14564550
 ] 

Karl Wright commented on LUCENE-6487:
-

Hi David,

This test is now wrong, and will blow up whenever a point is chosen at the 
poles:

{code}
final double pLat = (randomFloat() * 180.0 - 90.0) * 
DistanceUtils.DEGREES_TO_RADIANS;
final double pLon = (randomFloat() * 360.0 - 180.0) * 
DistanceUtils.DEGREES_TO_RADIANS;
final GeoPoint p1 = new GeoPoint(PlanetModel.SPHERE, pLat, pLon);
assertEquals(pLat, p1.getLatitude(), 1e-12);
assertEquals(pLon, p1.getLongitude(), 1e-12);
final GeoPoint p2 = new GeoPoint(PlanetModel.WGS84, pLat, pLon);
assertEquals(pLat, p2.getLatitude(), 1e-12);
assertEquals(pLon, p2.getLongitude(), 1e-12);
{code}

The conversion at the pole will always produce a longitude value of zero, not 
what went into it.


 Add WGS84 capability to geo3d support
 -

 Key: LUCENE-6487
 URL: https://issues.apache.org/jira/browse/LUCENE-6487
 Project: Lucene - Core
  Issue Type: Improvement
  Components: modules/spatial
Reporter: Karl Wright
 Attachments: LUCENE-6487.patch, LUCENE-6487.patch, LUCENE-6487.patch, 
 LUCENE-6487.patch


 WGS84 compatibility has been requested for geo3d.  This involves working with 
 an ellipsoid rather than a unit sphere.  The general formula for an ellipsoid 
 is:
 x^2/a^2 + y^2/b^2 + z^2/c^2 = 1



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7606) ToParentBlockJoinQuery fails with AIOOBE under certain circumstances

2015-05-29 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7606?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14564470#comment-14564470
 ] 

Michael McCandless commented on SOLR-7606:
--

This case happened because a block of documents was added to the index, but 
(incorrectly) there was no parent doc as the last document?

I agree it's not good to expect the query to have to check for this for every 
block and every query, at search time.  And unfortunately we've already added a 
number of such query-time checks (used to be asserts only): search for all the 
throw new IllegalStateExceptions.

I think we need better index-time checking/support somehow, e.g. an API on top 
of IW.addDocuments that's somehow told what the parent/child criteria is, and 
then validates during indexing that the block is correct?
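
Something along these lines, as a rough sketch only (the helper and the parent-marker check are hypothetical, not an existing API; imports from org.apache.lucene.document and org.apache.lucene.index assumed):

{code:java}
// Sketch: validate a parent/child block before handing it to IndexWriter.addDocuments().
// The "type_s" / "parent" marker is just an example criterion, not a real convention.
static void addBlockChecked(IndexWriter writer, List<Document> block) throws IOException {
  if (block.isEmpty()) {
    throw new IllegalArgumentException("a parent/child block must not be empty");
  }
  Document last = block.get(block.size() - 1);
  if (!"parent".equals(last.get("type_s"))) {
    throw new IllegalArgumentException("the last document of a block must be the parent");
  }
  writer.addDocuments(block); // children first, parent last
}
{code}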

 ToParentBlockJoinQuery fails with AIOOBE under certain circumstances
 

 Key: SOLR-7606
 URL: https://issues.apache.org/jira/browse/SOLR-7606
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.10.4
Reporter: Uwe Schindler
Assignee: Uwe Schindler
 Attachments: LUCENE-6512.patch


 I had a customer using BlockJoin with Solr. He executed a block join query 
 and the following appeared in Solr's logs:
 {noformat}
 28 May 2015 17:19:20  ERROR (SolrException.java:131) - 
 java.lang.ArrayIndexOutOfBoundsException: -1
 at org.apache.lucene.codecs.lucene40.BitVector.get(BitVector.java:149)
 at 
 org.apache.lucene.search.join.ToParentBlockJoinQuery$BlockJoinScorer.nextDoc(ToParentBlockJoinQuery.java:293)
 at 
 org.apache.lucene.search.Weight$DefaultBulkScorer.scoreAll(Weight.java:192)
 at 
 org.apache.lucene.search.Weight$DefaultBulkScorer.score(Weight.java:163)
 at org.apache.lucene.search.BulkScorer.score(BulkScorer.java:35)
 at 
 org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:621)
 at 
 org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:297)
 at 
 org.apache.solr.search.SolrIndexSearcher.buildAndRunCollectorChain(SolrIndexSearcher.java:209)
 at 
 org.apache.solr.search.SolrIndexSearcher.getDocListNC(SolrIndexSearcher.java:1619)
 at 
 org.apache.solr.search.SolrIndexSearcher.getDocListC(SolrIndexSearcher.java:1433)
 at 
 org.apache.solr.search.SolrIndexSearcher.search(SolrIndexSearcher.java:514)
 at 
 org.apache.solr.handler.component.QueryComponent.process(QueryComponent.java:484)
 at 
 org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:218)
 at 
 org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
 at org.apache.solr.core.SolrCore.execute(SolrCore.java:1976)
 at 
 org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:777)
 at 
 org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:418)
 at 
 org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:207)
 at 
 org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
 at 
 org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
 at 
 org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
 at 
 org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
 at 
 org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127)
 at 
 org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
 at 
 org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
 at 
 org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:298)
 at 
 org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:852)
 at 
 org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:588)
 at 
 org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:489)
 at java.lang.Thread.run(Thread.java:745)
 {noformat}
 I debugged this stuff and found out when this happens:
 The last block of documents was not followed by a parent. If one of the child 
 documents without a parent at the end of the index matches the inner query, 
 the scorer calls nextSetBit() to find the next parent document. This returns -1. 
 There is an assert afterwards that checks for -1, but in production code, 
 this is of course never executed.
 If the index has deletions, the false -1 is passed to acceptDocs and then 
 triggers the above problem.
 We should change the assert to another IllegalStateException() which is used 
 to notify the user if the orthogonality is broken.

[jira] [Commented] (SOLR-7570) Config APIs should not modify the ConfigSet

2015-05-29 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7570?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14564527#comment-14564527
 ] 

Noble Paul commented on SOLR-7570:
--

bq.Changes shared between collections should be done through a different API, I 
think. Something like the configset API being discussed on SOLR-5955 would be 
more appropriate for that.

Those changes will be rare, and I would say let them upload full files using a 
config upload API.

bq.Changes shared between collections should be done through a different API, I 
think

Can we make it a property of the configset called shareable? We could use an 
extra empty node in the conf dir called SHAREABLE, which signifies that this 
configset is shareable and that the overlay etc. can be written there. This 
ensures that the overlay etc. is written to the configset dir itself.
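
For illustration only, the check could be as trivial as testing for that marker node (a sketch; the /configs path layout is taken from the description below, and the helper itself is hypothetical):

{code:java}
// Sketch: a configset is "shareable" if the empty SHAREABLE marker node exists under it.
boolean isShareable(org.apache.zookeeper.ZooKeeper zk, String configSetName) throws Exception {
  return zk.exists("/configs/" + configSetName + "/SHAREABLE", false) != null;
}
{code}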

 Config APIs should not modify the ConfigSet
 ---

 Key: SOLR-7570
 URL: https://issues.apache.org/jira/browse/SOLR-7570
 Project: Solr
  Issue Type: Improvement
Reporter: Tomás Fernández Löbbe
 Attachments: SOLR-7570.patch


 Originally discussed here: 
 http://mail-archives.apache.org/mod_mbox/lucene-dev/201505.mbox/%3CCAMJgJxSXCHxDzJs5-C-pKFDEBQD6JbgxB=-xp7u143ekmgp...@mail.gmail.com%3E
 The ConfigSet used to create a collection should be read-only. Changes made 
 via any of the Config APIs should only be applied to the collection where the 
 operation is done and no to other collections that may be using the same 
 ConfigSet. As discussed in the dev list: 
 When a collection is created we should have two things, an immutable part 
 (the ConfigSet) and a mutable part (configoverlay, generated schema, etc). 
 The ConfigSet will still be placed in ZooKeeper under /configs but the 
 mutable part should be placed under /collections/$COLLECTION_NAME/…
 [~romseygeek] suggested: 
 {quote}
 A nice way of doing it would be to make it part of the SolrResourceLoader 
 interface.  The ZK resource loader could check in the collection-specific 
 zknode first, and then under configs/, and we could add a writeResource() 
 method that writes to the collection-specific node as well.  Then all config 
 I/O goes via the resource loader, and we have a way of keeping certain parts 
 immutable.
 {quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6371) Improve Spans payload collection

2015-05-29 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6371?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14564740#comment-14564740
 ] 

Robert Muir commented on LUCENE-6371:
-

{quote}
There's still an extra visit in SpanMTQWrapper, but I think we can fix that by 
adding a SpanTermQuery(Term, TermContext) constructor, like we have with the 
standard TermQuery. Maybe we should carry this over to LUCENE-6466, as here 
it's getting mixed up with the span collection API, which is a separate thing. 
I'll put up a patch.
{quote}

OK, this sounds good to me, because it would at least be consistent with 
TermQuery. 

{quote}
Maybe we should carry this over to LUCENE-6466, as here it's getting mixed up 
with the span collection API, which is a separate thing. I'll put up a patch.
{quote}

OK, I agree, let's not try to tackle it all at once in one patch. Let's just fix 
trunk until we are happy on whatever issues we need. Then we backport 
everything to 5.x for 5.3 here.

I have the feeling it's really not that far away (your cleanups here already 
addressed a lot of my concerns), but it would be good to make sure the API 
changes support what we need. Fixing this MTQ termcontext stuff would be a 
great improvement.

Just as a reminder, I am still concerned about LUCENE-6495, which covers test 
failures introduced by LUCENE-6466 (somehow scoring changed, and only when 
using Java 7!!!). This makes life tricky because trunk requires Java 8, so we 
can't easily dig in. But maybe we can just do a lot of beasting before 
backporting the whole thing and try to figure that one out at that time.

 Improve Spans payload collection
 

 Key: LUCENE-6371
 URL: https://issues.apache.org/jira/browse/LUCENE-6371
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Paul Elschot
Assignee: Alan Woodward
Priority: Minor
 Fix For: Trunk, 5.3

 Attachments: LUCENE-6371.patch, LUCENE-6371.patch, LUCENE-6371.patch, 
 LUCENE-6371.patch


 Spin off from LUCENE-6308, see the comments there from around 23 March 2015.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6487) Add WGS84 capability to geo3d support

2015-05-29 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6487?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14564741#comment-14564741
 ] 

David Smiley commented on LUCENE-6487:
--

Thanks!  Instead what I should measure is that the distance between the 
original point and the round-trip point is close to 0.

 Add WGS84 capability to geo3d support
 -

 Key: LUCENE-6487
 URL: https://issues.apache.org/jira/browse/LUCENE-6487
 Project: Lucene - Core
  Issue Type: Improvement
  Components: modules/spatial
Reporter: Karl Wright
 Attachments: LUCENE-6487.patch, LUCENE-6487.patch, LUCENE-6487.patch, 
 LUCENE-6487.patch


 WGS84 compatibility has been requested for geo3d.  This involves working with 
 an ellipsoid rather than a unit sphere.  The general formula for an ellipsoid 
 is:
 x^2/a^2 + y^2/b^2 + z^2/c^2 = 1



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6371) Improve Spans payload collection

2015-05-29 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6371?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14564777#comment-14564777
 ] 

Robert Muir commented on LUCENE-6371:
-

Definitely sounds like a possibility, though usually those HashMap ordering 
bugs never reproduce for me, because we can't tell the JVM the seed. In this 
case they reproduce at least. The problem in those tests is that scores get 
inconsistent with explains, and that is a strange way for it to show up.

 Improve Spans payload collection
 

 Key: LUCENE-6371
 URL: https://issues.apache.org/jira/browse/LUCENE-6371
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Paul Elschot
Assignee: Alan Woodward
Priority: Minor
 Fix For: Trunk, 5.3

 Attachments: LUCENE-6371.patch, LUCENE-6371.patch, LUCENE-6371.patch, 
 LUCENE-6371.patch


 Spin off from LUCENE-6308, see the comments there from around 23 March 2015.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [VOTE] 5.2.0 RC2

2015-05-29 Thread Steve Rowe
+1

SUCCESS! [0:22:46.736047]

I first downloaded via Subversion (took ~9 min), then pointed the smoke tester 
at the checkout:

cd /tmp
svn co 
https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-5.2.0-RC2-rev1682356
cd ~/svn/lucene/dev/branches/lucene_solr_5_2
python3 -u dev-tools/scripts/smokeTestRelease.py 
file:///tmp/lucene-solr-5.2.0-RC2-rev1682356/

Steve

 On May 29, 2015, at 1:14 AM, Anshum Gupta ans...@anshumgupta.net wrote:
 
 Please vote for the second release candidate for Apache Lucene/Solr 5.2.0.
 
 The artifacts can be downloaded from:
 
 https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-5.2.0-RC2-rev1682356
 
 You can run the smoke tester directly with this command:
 
 python3 -u dev-tools/scripts/smokeTestRelease.py 
 https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-5.2.0-RC2-rev1682356/
 
 Here's my +1
 
 SUCCESS! [0:31:06.632891]
 
 -- 
 Anshum Gupta


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6371) Improve Spans payload collection

2015-05-29 Thread Alan Woodward (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6371?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14564727#comment-14564727
 ] 

Alan Woodward commented on LUCENE-6371:
---

I *think* leniency should be fixed now, because the TermContext for each leaf 
is built by SpanTermQuery.createWeight(), and then collected for IDF via a new 
SpanWeight.extractTermContexts() method, rather than being built by the parent 
Weight via extractTerms().  So there should only be one visit to the terms 
dictionary per term in normal use.

There's still an extra visit in SpanMTQWrapper, but I think we can fix that by 
adding a SpanTermQuery(Term, TermContext) constructor, like we have with the 
standard TermQuery.  Maybe we should carry this over to LUCENE-6466, as here 
it's getting mixed up with the span collection API, which is a separate thing.  
I'll put up a patch.
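
Roughly what I have in mind for that constructor, as a sketch of the proposal mirroring TermQuery (the field names here are assumptions, not the current SpanTermQuery internals):

{code:java}
// Sketch: let callers (e.g. MTQ rewrites) pass in an already-built TermContext
// so the terms dictionary is not visited a second time.
private final TermContext termContext;

public SpanTermQuery(Term term) {
  this(term, null);
}

public SpanTermQuery(Term term, TermContext context) {
  this.term = Objects.requireNonNull(term);
  this.termContext = context; // if null, createWeight() builds it from the reader context
}
{code}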

 Improve Spans payload collection
 

 Key: LUCENE-6371
 URL: https://issues.apache.org/jira/browse/LUCENE-6371
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Paul Elschot
Assignee: Alan Woodward
Priority: Minor
 Fix For: Trunk, 5.3

 Attachments: LUCENE-6371.patch, LUCENE-6371.patch, LUCENE-6371.patch, 
 LUCENE-6371.patch


 Spin off from LUCENE-6308, see the comments there from around 23 March 2015.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [VOTE] 5.2.0 RC2

2015-05-29 Thread Joel Bernstein
+1

SUCCESS! [0:45:41.729120]

Joel Bernstein
http://joelsolr.blogspot.com/

On Fri, May 29, 2015 at 1:14 AM, Anshum Gupta ans...@anshumgupta.net
wrote:

 Please vote for the second release candidate for Apache Lucene/Solr 5.2.0.

 The artifacts can be downloaded from:


 https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-5.2.0-RC2-rev1682356

 You can run the smoke tester directly with this command:

 python3 -u dev-tools/scripts/smokeTestRelease.py
 https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-5.2.0-RC2-rev1682356/

 Here's my +1

 SUCCESS! [0:31:06.632891]

 --
 Anshum Gupta



[jira] [Commented] (LUCENE-6481) Improve GeoPointField type to only visit high precision boundary terms

2015-05-29 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6481?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14564784#comment-14564784
 ] 

ASF subversion and git services commented on LUCENE-6481:
-

Commit 1682453 from [~mikemccand] in branch 'dev/branches/LUCENE-6481'
[ https://svn.apache.org/r1682453 ]

LUCENE-6481: Nick's latest patch: create range terms once per query, not per 
segment; check cell intersection against polygon not its bbox for more 
restrictive recursing

 Improve GeoPointField type to only visit high precision boundary terms 
 ---

 Key: LUCENE-6481
 URL: https://issues.apache.org/jira/browse/LUCENE-6481
 Project: Lucene - Core
  Issue Type: Improvement
  Components: core/index
Reporter: Nicholas Knize
 Attachments: LUCENE-6481.patch, LUCENE-6481.patch, LUCENE-6481.patch, 
 LUCENE-6481.patch, LUCENE-6481.patch, LUCENE-6481.patch, LUCENE-6481.patch, 
 LUCENE-6481_WIP.patch


 Current GeoPointField [LUCENE-6450 | 
 https://issues.apache.org/jira/browse/LUCENE-6450] computes a set of ranges 
 along the space-filling curve that represent a provided bounding box.  This 
 determines which terms to visit in the terms dictionary and which to skip. 
 This is suboptimal for large bounding boxes as we may end up visiting all 
 terms (which could be quite large). 
 This incremental improvement is to improve GeoPointField to only visit high 
 precision terms in boundary ranges and use the postings list for ranges that 
 are completely within the target bounding box.
 A separate improvement is to switch over to auto-prefix and build an 
 Automaton representing the bounding box.  That can be tracked in a separate 
 issue.  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7601) If a test fails, that error should be reported and not an error about resources that were not closed later.

2015-05-29 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7601?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14564787#comment-14564787
 ] 

ASF subversion and git services commented on SOLR-7601:
---

Commit 1682455 from [~markrmil...@gmail.com] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1682455 ]

SOLR-7601: We should only check that tests have properly closed resources if 
the tests passed.
Speeds up test fails and cleans up Jenkin's failure reports.

 If a test fails, that error should be reported and not an error about 
 resources that were not closed later.
 ---

 Key: SOLR-7601
 URL: https://issues.apache.org/jira/browse/SOLR-7601
 Project: Solr
  Issue Type: Test
Reporter: Mark Miller
Assignee: Mark Miller
 Fix For: Trunk, 5.3

 Attachments: SOLR-7601.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [VOTE] 5.2.0 RC2

2015-05-29 Thread Mark Miller
bq. SUCCESS! [0:22:46.736047]

That is just absurd.

+1

SUCCESS! [0:45:01.183084]

- Mark


On Fri, May 29, 2015 at 9:20 AM Steve Rowe sar...@gmail.com wrote:

 +1

 SUCCESS! [0:22:46.736047]

 I first downloaded via Subversion (took ~9 min), then pointed the smoke
 tester at the checkout:

 cd /tmp
 svn co
 https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-5.2.0-RC2-rev1682356
 cd ~/svn/lucene/dev/branches/lucene_solr_5_2
 python3 -u dev-tools/scripts/smokeTestRelease.py
 file:///tmp/lucene-solr-5.2.0-RC2-rev1682356/

 Steve

  On May 29, 2015, at 1:14 AM, Anshum Gupta ans...@anshumgupta.net
 wrote:
 
  Please vote for the second release candidate for Apache Lucene/Solr
 5.2.0.
 
  The artifacts can be downloaded from:
 
 
 https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-5.2.0-RC2-rev1682356
 
  You can run the smoke tester directly with this command:
 
  python3 -u dev-tools/scripts/smokeTestRelease.py
 https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-5.2.0-RC2-rev1682356/
 
  Here's my +1
 
  SUCCESS! [0:31:06.632891]
 
  --
  Anshum Gupta


 -
 To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
 For additional commands, e-mail: dev-h...@lucene.apache.org




[jira] [Updated] (SOLR-7601) We should only check that tests have properly closed resources if the tests passed.

2015-05-29 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7601?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller updated SOLR-7601:
--
Summary: We should only check that tests have properly closed resources if 
the tests passed.  (was: If a test fails, that error should be reported and not 
an error about resources that were not closed later.)

 We should only check that tests have properly closed resources if the tests 
 passed.
 ---

 Key: SOLR-7601
 URL: https://issues.apache.org/jira/browse/SOLR-7601
 Project: Solr
  Issue Type: Test
Reporter: Mark Miller
Assignee: Mark Miller
 Fix For: Trunk, 5.3

 Attachments: SOLR-7601.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [VOTE] 5.2.0 RC2

2015-05-29 Thread Ishan Chattopadhyaya
+1
SUCCESS! [1:53:58.019931]

(A cloudatcost.com, one time, $500 8GB ram VPS here)

On Fri, May 29, 2015 at 6:59 PM, Mark Miller markrmil...@gmail.com wrote:

 bq. SUCCESS! [0:22:46.736047]

 That is just absurd.

 +1

 SUCCESS! [0:45:01.183084]

 - Mark


 On Fri, May 29, 2015 at 9:20 AM Steve Rowe sar...@gmail.com wrote:

 +1

 SUCCESS! [0:22:46.736047]

 I first downloaded via Subversion (took ~9 min), then pointed the smoke
 tester at the checkout:

 cd /tmp
 svn co
 https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-5.2.0-RC2-rev1682356
 cd ~/svn/lucene/dev/branches/lucene_solr_5_2
 python3 -u dev-tools/scripts/smokeTestRelease.py
 file:///tmp/lucene-solr-5.2.0-RC2-rev1682356/

 Steve

  On May 29, 2015, at 1:14 AM, Anshum Gupta ans...@anshumgupta.net
 wrote:
 
  Please vote for the second release candidate for Apache Lucene/Solr
 5.2.0.
 
  The artifacts can be downloaded from:
 
 
 https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-5.2.0-RC2-rev1682356
 
  You can run the smoke tester directly with this command:
 
  python3 -u dev-tools/scripts/smokeTestRelease.py
 https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-5.2.0-RC2-rev1682356/
 
  Here's my +1
 
  SUCCESS! [0:31:06.632891]
 
  --
  Anshum Gupta


 -
 To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
 For additional commands, e-mail: dev-h...@lucene.apache.org




[jira] [Commented] (LUCENE-6371) Improve Spans payload collection

2015-05-29 Thread Alan Woodward (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6371?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14564768#comment-14564768
 ] 

Alan Woodward commented on LUCENE-6371:
---

Could it be because SpanWeight was previously using a TreeMap to collect terms, 
which was enforcing an ordering?  I'm a bit confused by how it would affect 
things, though, because the test that failed was running the exact same query 
in different ways, which would suggest that Java 7 was iterating over the exact 
same map non-deterministically.

 Improve Spans payload collection
 

 Key: LUCENE-6371
 URL: https://issues.apache.org/jira/browse/LUCENE-6371
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Paul Elschot
Assignee: Alan Woodward
Priority: Minor
 Fix For: Trunk, 5.3

 Attachments: LUCENE-6371.patch, LUCENE-6371.patch, LUCENE-6371.patch, 
 LUCENE-6371.patch


 Spin off from LUCENE-6308, see the comments there from around 23 March 2015.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7601) If a test fails, that error should be reported and not an error about resources that were not closed later.

2015-05-29 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7601?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14564785#comment-14564785
 ] 

ASF subversion and git services commented on SOLR-7601:
---

Commit 1682454 from [~markrmil...@gmail.com] in branch 'dev/trunk'
[ https://svn.apache.org/r1682454 ]

SOLR-7601: We should only check that tests have properly closed resources if 
the tests passed.
Speeds up test fails and cleans up Jenkin's failure reports.

 If a test fails, that error should be reported and not an error about 
 resources that were not closed later.
 ---

 Key: SOLR-7601
 URL: https://issues.apache.org/jira/browse/SOLR-7601
 Project: Solr
  Issue Type: Test
Reporter: Mark Miller
Assignee: Mark Miller
 Fix For: Trunk, 5.3

 Attachments: SOLR-7601.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6383) regexTransformer returns no results if there is no match

2015-05-29 Thread Jens (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6383?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14564844#comment-14564844
 ] 

Jens  commented on SOLR-6383:
-

I would also prefer the previous behaviour as it was added in SOLR-1080

 regexTransformer returns no results if there is no match
 

 Key: SOLR-6383
 URL: https://issues.apache.org/jira/browse/SOLR-6383
 Project: Solr
  Issue Type: Bug
Reporter: Alexander Kingson
Assignee: Shalin Shekhar Mangar
 Fix For: 4.10, Trunk

 Attachments: SOLR-6383.patch, regexTransformer.patch


 When used in a data-import config file to replace spaces in title with _ 
  
 <field column="title_underscore" regex="\s+" replaceWith="_" sourceColName="title" />
 regexTransformer returns empty results for titles without spaces, i.e. when 
 there is no match for the regex. According to the description it is 
 equivalent to replaceAll, which returns the string when there is no match.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (LUCENE-6507) NativeFSLock.close() can invalidate other locks

2015-05-29 Thread Michael McCandless (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6507?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael McCandless resolved LUCENE-6507.

   Resolution: Fixed
Fix Version/s: (was: 4.10.5)
   Trunk

 NativeFSLock.close() can invalidate other locks
 ---

 Key: LUCENE-6507
 URL: https://issues.apache.org/jira/browse/LUCENE-6507
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Simon Willnauer
Priority: Blocker
 Fix For: Trunk, 5.2

 Attachments: LUCENE-6507-410x.patch, LUCENE-6507.patch, 
 LUCENE-6507.patch, LUCENE-6507.patch, LUCENE-6507.patch, LUCENE-6507.patch, 
 LUCENE-6507.patch, LUCENE-6507.patch, LUCENE-6507.patch, LUCENE-6507.patch, 
 LUCENE-6507.patch, LUCENE-6507.patch


 the lock API in Lucene is super trappy since the lock that we return form 
 this API must first be obtained and if we can't obtain it the lock should not 
 be closed since we might ie. close the underlying channel in the NativeLock 
 case which releases all lock for this file on some operating systems. I think 
 the makeLock method should try to obtain and only return a lock if we 
 successfully obtained it. Not sure if it's possible everywhere but we should 
 at least make the documentation clear here.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-6466) Move SpanQuery.getSpans() to SpanWeight

2015-05-29 Thread Alan Woodward (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6466?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Woodward updated LUCENE-6466:
--
Attachment: LUCENE-6466-2.patch

Patch part 2, following discussion on LUCENE-6371.
* removes SpanSimilarity, in favour of a map of terms to termcontexts
* SpanTermQuery can take an optional TermContext in its constructor, similar to 
TermQuery
* SpanMTQWrapper now preserves term states when rewriting to SpanTermQueries

What would be nice would be to try and write an asserting TermsEnum that could 
check how many times seekExact(BytesRef) was called, to ensure that the various 
queries are re-using their term states properly.
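
Something like this, as a rough sketch (the class is made up and not part of the test framework; it assumes java.util.concurrent.atomic.AtomicInteger plus the usual org.apache.lucene.index and org.apache.lucene.util imports):

{code:java}
// Sketch: count seekExact(BytesRef) calls so a test can assert that term states
// are re-used instead of re-seeking the terms dictionary per segment.
class SeekCountingTermsEnum extends FilterLeafReader.FilterTermsEnum {
  final AtomicInteger seekExactCalls;

  SeekCountingTermsEnum(TermsEnum in, AtomicInteger counter) {
    super(in);
    this.seekExactCalls = counter;
  }

  @Override
  public boolean seekExact(BytesRef text) throws IOException {
    seekExactCalls.incrementAndGet();
    return super.seekExact(text);
  }
}
{code}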

 Move SpanQuery.getSpans() to SpanWeight
 ---

 Key: LUCENE-6466
 URL: https://issues.apache.org/jira/browse/LUCENE-6466
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Alan Woodward
Priority: Minor
 Fix For: Trunk, 5.3

 Attachments: LUCENE-6466-2.patch, LUCENE-6466.patch, 
 LUCENE-6466.patch, LUCENE-6466.patch, LUCENE-6466.patch, LUCENE-6466.patch


 SpanQuery.getSpans() should only be called on rewritten queries, so it seems 
 to make more sense to have this being called from SpanWeight



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6508) Simplify Directory/lock api

2015-05-29 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6508?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14564849#comment-14564849
 ] 

ASF subversion and git services commented on LUCENE-6508:
-

Commit 1682471 from [~rcmuir] in branch 'dev/branches/lucene6508'
[ https://svn.apache.org/r1682471 ]

LUCENE-6508: fix some tests/test-framework and fix stupid bug

 Simplify Directory/lock api
 ---

 Key: LUCENE-6508
 URL: https://issues.apache.org/jira/browse/LUCENE-6508
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Robert Muir
Assignee: Uwe Schindler
 Attachments: LUCENE-6508-deadcode1.patch, LUCENE-6508.patch, 
 LUCENE-6508.patch


 See LUCENE-6507 for some background. In general it would be great if you can 
 just acquire an immutable lock (or you get a failure) and then you close that 
 to release it.
 Today the API might be too much for what is needed by IW.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-7607) Improve BlockJoin testing

2015-05-29 Thread Yonik Seeley (JIRA)
Yonik Seeley created SOLR-7607:
--

 Summary: Improve BlockJoin testing
 Key: SOLR-7607
 URL: https://issues.apache.org/jira/browse/SOLR-7607
 Project: Solr
  Issue Type: Improvement
Reporter: Yonik Seeley


I did some block-join work in Heliosearch a while back.
Part of that was random testing that used both the normal join qparser to 
compare output with, and validate, the block join qparser.  This issue is to 
bring that test back to Solr.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6508) Simplify Directory/lock api

2015-05-29 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6508?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14564850#comment-14564850
 ] 

ASF subversion and git services commented on LUCENE-6508:
-

Commit 1682474 from [~rcmuir] in branch 'dev/branches/lucene6508'
[ https://svn.apache.org/r1682474 ]

LUCENE-6508: remove double-obtain tests, no longer possible

 Simplify Directory/lock api
 ---

 Key: LUCENE-6508
 URL: https://issues.apache.org/jira/browse/LUCENE-6508
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Robert Muir
Assignee: Uwe Schindler
 Attachments: LUCENE-6508-deadcode1.patch, LUCENE-6508.patch, 
 LUCENE-6508.patch


 See LUCENE-6507 for some background. In general it would be great if you can 
 just acquire an immutable lock (or you get a failure) and then you close that 
 to release it.
 Today the API might be too much for what is needed by IW.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6487) Add WGS84 capability to geo3d support

2015-05-29 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6487?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14564851#comment-14564851
 ] 

David Smiley commented on LUCENE-6487:
--

I rewrote the test; it also indirectly tests arcDistance somewhat, since it 
calls that.  Can you try this?  It seems my error epsilons are too tiny.  Or 
maybe you see something the matter.
{code:java}
  @Test
  public void testConversion() {
    testPointRoundTrip(PlanetModel.SPHERE, 90, 0, 1e-12);
    testPointRoundTrip(PlanetModel.SPHERE, -90, 0, 1e-12);
    testPointRoundTrip(PlanetModel.WGS84, 90, 0, 1e-12);
    testPointRoundTrip(PlanetModel.WGS84, -90, 0, 1e-12);

    final double pLat = (randomFloat() * 180.0 - 90.0) * DistanceUtils.DEGREES_TO_RADIANS;
    final double pLon = (randomFloat() * 360.0 - 180.0) * DistanceUtils.DEGREES_TO_RADIANS;
    testPointRoundTrip(PlanetModel.SPHERE, pLat, pLon, 1e-12);
    testPointRoundTrip(PlanetModel.WGS84, pLat, pLon, 1e-6); // bigger error tolerance
  }

  protected void testPointRoundTrip(PlanetModel planetModel, double pLat, double pLon, double epsilon) {
    final GeoPoint p1 = new GeoPoint(planetModel, pLat, pLon);
    final GeoPoint p2 = new GeoPoint(planetModel, p1.getLatitude(), p1.getLongitude());
    double dist = p1.arcDistance(p2);
    assertEquals(0, dist, epsilon);
  }
{code}

 Add WGS84 capability to geo3d support
 -

 Key: LUCENE-6487
 URL: https://issues.apache.org/jira/browse/LUCENE-6487
 Project: Lucene - Core
  Issue Type: Improvement
  Components: modules/spatial
Reporter: Karl Wright
 Attachments: LUCENE-6487.patch, LUCENE-6487.patch, LUCENE-6487.patch, 
 LUCENE-6487.patch


 WGS84 compatibility has been requested for geo3d.  This involves working with 
 an ellipsoid rather than a unit sphere.  The general formula for an ellipsoid 
 is:
 x^2/a^2 + y^2/b^2 + z^2/c^2 = 1



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (SOLR-7607) Improve BlockJoin testing

2015-05-29 Thread Yonik Seeley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7607?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yonik Seeley reassigned SOLR-7607:
--

Assignee: Yonik Seeley

 Improve BlockJoin testing
 -

 Key: SOLR-7607
 URL: https://issues.apache.org/jira/browse/SOLR-7607
 Project: Solr
  Issue Type: Improvement
Reporter: Yonik Seeley
Assignee: Yonik Seeley
 Fix For: 5.3


 I did some block-join work in Heliosearch a while back.
 Part of that was random testing that used both the normal join qparser to 
 compare output with, and validate, the block join qparser.  This issue is to 
 bring that test back to Solr.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-7607) Improve BlockJoin testing

2015-05-29 Thread Yonik Seeley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7607?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yonik Seeley updated SOLR-7607:
---
Fix Version/s: 5.3

 Improve BlockJoin testing
 -

 Key: SOLR-7607
 URL: https://issues.apache.org/jira/browse/SOLR-7607
 Project: Solr
  Issue Type: Improvement
Reporter: Yonik Seeley
Assignee: Yonik Seeley
 Fix For: 5.3


 I did some block-join work in Heliosearch a while back.
 Part of that was random testing that used both the normal join qparser to 
 compare output with, and validate, the block join qparser.  This issue is to 
 bring that test back to Solr.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-6507) NativeFSLock.close() can invalidate other locks

2015-05-29 Thread Michael McCandless (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6507?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael McCandless updated LUCENE-6507:
---
Attachment: LUCENE-6507-410x.patch

So ... here's a 4.10.x backport patch, but it was kinda messy: lots of
conflicts because we've basically already rewritten locking once in 5.x.

I stuck with java.io APIs (File) instead of converting to NIO.2 APIs
(Path).  I also back-ported AssertingLock to MockDirectoryWrapper.

This patch breaks NativeFSLockFactory.clearLock: its impl relied on
the "when I close, I nuke any other locks" behavior, and I had to
remove one test case in the facets module that was doing this.  The
API is deprecated (gone in 5.x) but it still feels wrong to break it on
such an old bugfix branch...

Net/net this is a biggish change, and I don't think we should backport
this to 4.10.x: this branch is very old now, and this change is too risky.


 NativeFSLock.close() can invalidate other locks
 ---

 Key: LUCENE-6507
 URL: https://issues.apache.org/jira/browse/LUCENE-6507
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Simon Willnauer
Priority: Blocker
 Fix For: 4.10.5, 5.2

 Attachments: LUCENE-6507-410x.patch, LUCENE-6507.patch, 
 LUCENE-6507.patch, LUCENE-6507.patch, LUCENE-6507.patch, LUCENE-6507.patch, 
 LUCENE-6507.patch, LUCENE-6507.patch, LUCENE-6507.patch, LUCENE-6507.patch, 
 LUCENE-6507.patch, LUCENE-6507.patch


 the lock API in Lucene is super trappy since the lock that we return form 
 this API must first be obtained and if we can't obtain it the lock should not 
 be closed since we might ie. close the underlying channel in the NativeLock 
 case which releases all lock for this file on some operating systems. I think 
 the makeLock method should try to obtain and only return a lock if we 
 successfully obtained it. Not sure if it's possible everywhere but we should 
 at least make the documentation clear here.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6466) Move SpanQuery.getSpans() to SpanWeight

2015-05-29 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6466?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14564923#comment-14564923
 ] 

Robert Muir commented on LUCENE-6466:
-

Looks simpler. I am still hoping we can remove the map, but let's do this for 
now.

Can we make SpanWeight.buildSimWeight private final? It's only used by its ctor.

Can both rewrite methods currently in SpanMultiTermQuery be fixed to avoid 
re-seeking? Maybe TopTermsSpanBooleanQueryRewrite was forgotten.

{quote}
What would be nice would be to try and write an asserting TermsEnum that could 
check how many times seekExact(BytesRef) was called, to ensure that the various 
queries are re-using their term states properly.
{quote}

TermQuery/Weight has checks around this that can help. Look for stuff like 
assertTermNotInReader check. This was missing from spans because of its 
previous leniency.

 Move SpanQuery.getSpans() to SpanWeight
 ---

 Key: LUCENE-6466
 URL: https://issues.apache.org/jira/browse/LUCENE-6466
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Alan Woodward
Priority: Minor
 Fix For: Trunk, 5.3

 Attachments: LUCENE-6466-2.patch, LUCENE-6466.patch, 
 LUCENE-6466.patch, LUCENE-6466.patch, LUCENE-6466.patch, LUCENE-6466.patch


 SpanQuery.getSpans() should only be called on rewritten queries, so it seems 
 to make more sense to have this being called from SpanWeight



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



MapReduceIndexerTool on 5.0+

2015-05-29 Thread Adam McElwee
Is anyone running the MapReduceIndexerTool for solr 5.0+? I ran into an
issue the other day when I upgraded from 4.10 to 5.1, but I haven't
stumbled upon anyone else who's having problems w/ it.

JIRA: https://issues.apache.org/jira/browse/SOLR-7512
PR: https://github.com/apache/lucene-solr/pull/147

Anyone have a minute to check out the PR or the patch attached to the
ticket?


[jira] [Updated] (SOLR-7512) SolrOutputFormat creates an invalid solr.xml in the solr home zip for MapReduceIndexerTool

2015-05-29 Thread Adam McElwee (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7512?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adam McElwee updated SOLR-7512:
---
Attachment: SOLR-7512.patch

Patch updated to remove usage of `java.io.File`. 

For some reason that method in `TestUtil` wasn't correctly unpacking the zip 
and using relative paths. Maybe that's another issue, in itself. I switched to 
the hadoop fs `FileUtil.unZip`.

 SolrOutputFormat creates an invalid solr.xml in the solr home zip for 
 MapReduceIndexerTool
 --

 Key: SOLR-7512
 URL: https://issues.apache.org/jira/browse/SOLR-7512
 Project: Solr
  Issue Type: Bug
  Components: contrib - MapReduce
Affects Versions: 5.1
Reporter: Adam McElwee
Priority: Blocker
 Attachments: SOLR-7512.patch, SOLR-7512.patch


 Sometime after Solr 4.9, the `MapReduceIndexerTool` got busted because 
 invalid `solr.xml` contents were being written to the solr home dir zip. My 
 guess is that a 5.0 change made the invalid file start to matter. 
 The error manifests as:
 {code:java}
 Error: java.lang.IllegalStateException: Failed to initialize record writer 
 for org.apache.solr.hadoop.MapReduceIndexerTool/SolrMapper, 
 attempt_1430953999892_0012_r_01_1
 at 
 org.apache.solr.hadoop.SolrRecordWriter.init(SolrRecordWriter.java:126)
 at 
 org.apache.solr.hadoop.SolrOutputFormat.getRecordWriter(SolrOutputFormat.java:163)
 at 
 org.apache.hadoop.mapred.ReduceTask$NewTrackingRecordWriter.init(ReduceTask.java:569)
 at 
 org.apache.hadoop.mapred.ReduceTask.runNewReducer(ReduceTask.java:643)
 at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:394)
 at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:175)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:415)
 at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
 at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:170)
 Caused by: org.apache.solr.common.SolrException: 
 org.xml.sax.SAXParseException; Premature end of file.
 at org.apache.solr.core.Config.init(Config.java:156)
 at 
 org.apache.solr.core.SolrXmlConfig.fromInputStream(SolrXmlConfig.java:127)
 at org.apache.solr.core.SolrXmlConfig.fromFile(SolrXmlConfig.java:110)
 at 
 org.apache.solr.core.SolrXmlConfig.fromSolrHome(SolrXmlConfig.java:138)
 at org.apache.solr.core.CoreContainer.init(CoreContainer.java:142)
 at 
 org.apache.solr.hadoop.SolrRecordWriter.createEmbeddedSolrServer(SolrRecordWriter.java:162)
 at 
 org.apache.solr.hadoop.SolrRecordWriter.init(SolrRecordWriter.java:119)
 ... 9 more
 Caused by: org.xml.sax.SAXParseException; Premature end of file.
 at 
 org.apache.xerces.util.ErrorHandlerWrapper.createSAXParseException(Unknown 
 Source)
 at org.apache.xerces.util.ErrorHandlerWrapper.fatalError(Unknown 
 Source)
 at org.apache.xerces.impl.XMLErrorReporter.reportError(Unknown Source)
 at org.apache.xerces.impl.XMLErrorReporter.reportError(Unknown Source)
 at org.apache.xerces.impl.XMLErrorReporter.reportError(Unknown Source)
 at 
 org.apache.xerces.impl.XMLVersionDetector.determineDocVersion(Unknown Source)
 at org.apache.xerces.parsers.XML11Configuration.parse(Unknown Source)
 at org.apache.xerces.parsers.XML11Configuration.parse(Unknown Source)
 at org.apache.xerces.parsers.XMLParser.parse(Unknown Source)
 at org.apache.xerces.parsers.DOMParser.parse(Unknown Source)
 at org.apache.xerces.jaxp.DocumentBuilderImpl.parse(Unknown Source)
 at org.apache.solr.core.Config.init(Config.java:145)
 ... 15 more
 {code}
 The last version that I've successfully used `MapReduceIndexerTool` was 4.9, 
 and I verified that this patch resolves the issue for me (testing on 5.1). I 
 spent a couple hours trying to write a simple test case to exhibit the error, 
 but I haven't quite figured out how to deal with the 
 {noformat}java.security.AccessControlException: java.io.FilePermission 
 ...{noformat} errors. 
 Pull request for bugfix [here|https://github.com/apache/lucene-solr/pull/147]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-7610) Improve and demonstrate VelocityResponseWriter's $resource localization tool

2015-05-29 Thread Erik Hatcher (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7610?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Hatcher updated SOLR-7610:
---
Attachment: SOLR-7610-resourcetool.patch

Here's a patch that fixes $resource.locale to report the current setting 
properly

 Improve and demonstrate VelocityResponseWriter's $resource localization tool
 

 Key: SOLR-7610
 URL: https://issues.apache.org/jira/browse/SOLR-7610
 Project: Solr
  Issue Type: Bug
Affects Versions: 5.2
Reporter: Erik Hatcher
Assignee: Erik Hatcher
 Fix For: 5.3

 Attachments: SOLR-7610-resourcetool.patch


 Improvement: fix $resource.locale, which currently reports the base Solr 
 server locale rather than the one set by v.locale
 Demonstrate: Localize example/file's /browse



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6466) Move SpanQuery.getSpans() to SpanWeight

2015-05-29 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6466?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14565002#comment-14565002
 ] 

Robert Muir commented on LUCENE-6466:
-

Can we replace methods like this one in SpanTermQuery with calls to 
Collections.singletonMap?

{code}
  protected static Map<Term, TermContext> toMap(Term term, TermContext termContext) {
    Map<Term, TermContext> map = new HashMap<>();
    map.put(term, termContext);
    return map;
  }
{code}
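
i.e. the whole helper would collapse to a single call (a sketch of the suggested replacement):

{code}
  protected static Map<Term, TermContext> toMap(Term term, TermContext termContext) {
    return Collections.singletonMap(term, termContext);
  }
{code}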

 Move SpanQuery.getSpans() to SpanWeight
 ---

 Key: LUCENE-6466
 URL: https://issues.apache.org/jira/browse/LUCENE-6466
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Alan Woodward
Priority: Minor
 Fix For: Trunk, 5.3

 Attachments: LUCENE-6466-2.patch, LUCENE-6466-2.patch, 
 LUCENE-6466.patch, LUCENE-6466.patch, LUCENE-6466.patch, LUCENE-6466.patch, 
 LUCENE-6466.patch


 SpanQuery.getSpans() should only be called on rewritten queries, so it seems 
 to make more sense to have this being called from SpanWeight



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (LUCENE-6466) Move SpanQuery.getSpans() to SpanWeight

2015-05-29 Thread Alan Woodward (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6466?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Woodward reassigned LUCENE-6466:
-

Assignee: Alan Woodward

 Move SpanQuery.getSpans() to SpanWeight
 ---

 Key: LUCENE-6466
 URL: https://issues.apache.org/jira/browse/LUCENE-6466
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Alan Woodward
Assignee: Alan Woodward
Priority: Minor
 Fix For: Trunk, 5.3

 Attachments: LUCENE-6466-2.patch, LUCENE-6466-2.patch, 
 LUCENE-6466-2.patch, LUCENE-6466.patch, LUCENE-6466.patch, LUCENE-6466.patch, 
 LUCENE-6466.patch, LUCENE-6466.patch


 SpanQuery.getSpans() should only be called on rewritten queries, so it seems 
 to make more sense to have this being called from SpanWeight



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6466) Move SpanQuery.getSpans() to SpanWeight

2015-05-29 Thread Alan Woodward (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6466?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14565026#comment-14565026
 ] 

Alan Woodward commented on LUCENE-6466:
---

I'll clean up LUCENE-6371 next, before we put this all back into 5.x

 Move SpanQuery.getSpans() to SpanWeight
 ---

 Key: LUCENE-6466
 URL: https://issues.apache.org/jira/browse/LUCENE-6466
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Alan Woodward
Priority: Minor
 Fix For: Trunk, 5.3

 Attachments: LUCENE-6466-2.patch, LUCENE-6466-2.patch, 
 LUCENE-6466-2.patch, LUCENE-6466.patch, LUCENE-6466.patch, LUCENE-6466.patch, 
 LUCENE-6466.patch, LUCENE-6466.patch


 SpanQuery.getSpans() should only be called on rewritten queries, so it seems 
 to make more sense to have this being called from SpanWeight



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7512) SolrOutputFormat creates an invalid solr.xml in the solr home zip for MapReduceIndexerTool

2015-05-29 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7512?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14565038#comment-14565038
 ] 

Uwe Schindler commented on SOLR-7512:
-

There are some problems with the patch:

{noformat}
-Path targetFile = destDir.resolve(entry.getName());
-
+Path targetFile = new File(destDir.toFile(), entry.getName()).toPath();
+
{noformat}

This is a no-go with Lucene/Solr: java.io.File is not allowed to be used 
anywhere in Lucene code.

 SolrOutputFormat creates an invalid solr.xml in the solr home zip for 
 MapReduceIndexerTool
 --

 Key: SOLR-7512
 URL: https://issues.apache.org/jira/browse/SOLR-7512
 Project: Solr
  Issue Type: Bug
  Components: contrib - MapReduce
Affects Versions: 5.1
Reporter: Adam McElwee
Priority: Blocker
 Attachments: SOLR-7512.patch


 Sometime after Solr 4.9, the `MapReduceIndexerTool` got busted because 
 invalid `solr.xml` contents were being written to the solr home dir zip. My 
 guess is that a 5.0 change made the invalid file start to matter. 
 The error manifests as:
 {code:java}
 Error: java.lang.IllegalStateException: Failed to initialize record writer 
 for org.apache.solr.hadoop.MapReduceIndexerTool/SolrMapper, 
 attempt_1430953999892_0012_r_01_1
 at 
 org.apache.solr.hadoop.SolrRecordWriter.init(SolrRecordWriter.java:126)
 at 
 org.apache.solr.hadoop.SolrOutputFormat.getRecordWriter(SolrOutputFormat.java:163)
 at 
 org.apache.hadoop.mapred.ReduceTask$NewTrackingRecordWriter.init(ReduceTask.java:569)
 at 
 org.apache.hadoop.mapred.ReduceTask.runNewReducer(ReduceTask.java:643)
 at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:394)
 at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:175)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:415)
 at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
 at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:170)
 Caused by: org.apache.solr.common.SolrException: 
 org.xml.sax.SAXParseException; Premature end of file.
 at org.apache.solr.core.Config.init(Config.java:156)
 at 
 org.apache.solr.core.SolrXmlConfig.fromInputStream(SolrXmlConfig.java:127)
 at org.apache.solr.core.SolrXmlConfig.fromFile(SolrXmlConfig.java:110)
 at 
 org.apache.solr.core.SolrXmlConfig.fromSolrHome(SolrXmlConfig.java:138)
 at org.apache.solr.core.CoreContainer.init(CoreContainer.java:142)
 at 
 org.apache.solr.hadoop.SolrRecordWriter.createEmbeddedSolrServer(SolrRecordWriter.java:162)
 at 
 org.apache.solr.hadoop.SolrRecordWriter.init(SolrRecordWriter.java:119)
 ... 9 more
 Caused by: org.xml.sax.SAXParseException; Premature end of file.
 at 
 org.apache.xerces.util.ErrorHandlerWrapper.createSAXParseException(Unknown 
 Source)
 at org.apache.xerces.util.ErrorHandlerWrapper.fatalError(Unknown 
 Source)
 at org.apache.xerces.impl.XMLErrorReporter.reportError(Unknown Source)
 at org.apache.xerces.impl.XMLErrorReporter.reportError(Unknown Source)
 at org.apache.xerces.impl.XMLErrorReporter.reportError(Unknown Source)
 at 
 org.apache.xerces.impl.XMLVersionDetector.determineDocVersion(Unknown Source)
 at org.apache.xerces.parsers.XML11Configuration.parse(Unknown Source)
 at org.apache.xerces.parsers.XML11Configuration.parse(Unknown Source)
 at org.apache.xerces.parsers.XMLParser.parse(Unknown Source)
 at org.apache.xerces.parsers.DOMParser.parse(Unknown Source)
 at org.apache.xerces.jaxp.DocumentBuilderImpl.parse(Unknown Source)
 at org.apache.solr.core.Config.init(Config.java:145)
 ... 15 more
 {code}
 The last version that I've successfully used `MapReduceIndexerTool` was 4.9, 
 and I verified that this patch resolves the issue for me (testing on 5.1). I 
 spent a couple hours trying to write a simple test case to exhibit the error, 
 but I haven't quite figured out how to deal with the 
 {noformat}java.security.AccessControlException: java.io.FilePermission 
 ...{noformat} errors. 
 Pull request for bugfix [here|https://github.com/apache/lucene-solr/pull/147]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-7609) ShardSplitTest NPE

2015-05-29 Thread Steve Rowe (JIRA)
Steve Rowe created SOLR-7609:


 Summary: ShardSplitTest NPE
 Key: SOLR-7609
 URL: https://issues.apache.org/jira/browse/SOLR-7609
 Project: Solr
  Issue Type: Bug
Reporter: Steve Rowe
Priority: Minor


I'm guessing this is a test bug, but the seed doesn't reproduce for me (tried 
on the same Linux machine it occurred on and on OS X):

{noformat}
   [junit4]   2 NOTE: reproduce with: ant test  -Dtestcase=ShardSplitTest 
-Dtests.method=test -Dtests.seed=9318DDA46578ECF9 -Dtests.slow=true 
-Dtests.locale=is -Dtests.timezone=America/St_Vincent -Dtests.asserts=true 
-Dtests.file.encoding=US-ASCII
   [junit4] ERROR   55.8s J6  | ShardSplitTest.test 
   [junit4] Throwable #1: java.lang.NullPointerException
   [junit4]at 
__randomizedtesting.SeedInfo.seed([9318DDA46578ECF9:1B4CE27ECB848101]:0)
   [junit4]at 
org.apache.solr.cloud.ShardSplitTest.logDebugHelp(ShardSplitTest.java:547)
   [junit4]at 
org.apache.solr.cloud.ShardSplitTest.checkDocCountsAndShardStates(ShardSplitTest.java:438)
   [junit4]at 
org.apache.solr.cloud.ShardSplitTest.splitByUniqueKeyTest(ShardSplitTest.java:222)
   [junit4]at 
org.apache.solr.cloud.ShardSplitTest.test(ShardSplitTest.java:84)
   [junit4]at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:960)
   [junit4]at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:935)
   [junit4]at java.lang.Thread.run(Thread.java:745)
{noformat}

Line 547 of {{ShardSplitTest.java}} is:

{code:java}
  idVsVersion.put(document.getFieldValue("id").toString(), document.getFieldValue("_version_").toString());
{code}

Skimming the code, it's not obvious what could be null.
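
For reference, a null-tolerant variant of that line would look roughly like this (a sketch only; it assumes the NPE comes from getFieldValue() returning null for one of the two fields, which the trace alone doesn't prove):

{code:java}
// Sketch: guard against either field being absent on the returned document.
Object id = document.getFieldValue("id");
Object version = document.getFieldValue("_version_");
if (id != null && version != null) {
  idVsVersion.put(id.toString(), version.toString());
}
{code}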



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (SOLR-7609) ShardSplitTest NPE

2015-05-29 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7609?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar reassigned SOLR-7609:
---

Assignee: Shalin Shekhar Mangar

 ShardSplitTest NPE
 --

 Key: SOLR-7609
 URL: https://issues.apache.org/jira/browse/SOLR-7609
 Project: Solr
  Issue Type: Bug
Reporter: Steve Rowe
Assignee: Shalin Shekhar Mangar
Priority: Minor
 Attachments: ShardSplitTest.NPE.log


 I'm guessing this is a test bug, but the seed doesn't reproduce for me (tried 
 on the same Linux machine it occurred on and on OS X):
 {noformat}
[junit4]   2 NOTE: reproduce with: ant test  -Dtestcase=ShardSplitTest 
 -Dtests.method=test -Dtests.seed=9318DDA46578ECF9 -Dtests.slow=true 
 -Dtests.locale=is -Dtests.timezone=America/St_Vincent -Dtests.asserts=true 
 -Dtests.file.encoding=US-ASCII
[junit4] ERROR   55.8s J6  | ShardSplitTest.test 
[junit4] Throwable #1: java.lang.NullPointerException
[junit4]  at 
 __randomizedtesting.SeedInfo.seed([9318DDA46578ECF9:1B4CE27ECB848101]:0)
[junit4]  at 
 org.apache.solr.cloud.ShardSplitTest.logDebugHelp(ShardSplitTest.java:547)
[junit4]  at 
 org.apache.solr.cloud.ShardSplitTest.checkDocCountsAndShardStates(ShardSplitTest.java:438)
[junit4]  at 
 org.apache.solr.cloud.ShardSplitTest.splitByUniqueKeyTest(ShardSplitTest.java:222)
[junit4]  at 
 org.apache.solr.cloud.ShardSplitTest.test(ShardSplitTest.java:84)
[junit4]  at 
 org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:960)
[junit4]  at 
 org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:935)
[junit4]  at java.lang.Thread.run(Thread.java:745)
 {noformat}
 Line 547 of {{ShardSplitTest.java}} is:
 {code:java}
    idVsVersion.put(document.getFieldValue("id").toString(), document.getFieldValue("_version_").toString());
 {code}
 Skimming the code, it's not obvious what could be null.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7512) SolrOutputFormat creates an invalid solr.xml in the solr home zip for MapReduceIndexerTool

2015-05-29 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7512?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14565050#comment-14565050
 ] 

Mark Miller commented on SOLR-7512:
---

Though if you are writing a new test, perhaps it's just your new code as Uwe 
points out.

There are two tests that are currently generally skipped because of what I 
mention above - they are likely the tests that would catch this.

 SolrOutputFormat creates an invalid solr.xml in the solr home zip for 
 MapReduceIndexerTool
 --

 Key: SOLR-7512
 URL: https://issues.apache.org/jira/browse/SOLR-7512
 Project: Solr
  Issue Type: Bug
  Components: contrib - MapReduce
Affects Versions: 5.1
Reporter: Adam McElwee
Priority: Blocker
 Attachments: SOLR-7512.patch


 Sometime after Solr 4.9, the `MapReduceIndexerTool` got busted because 
 invalid `solr.xml` contents were being written to the solr home dir zip. My 
 guess is that a 5.0 change made the invalid file start to matter. 
 The error manifests as:
 {code:java}
 Error: java.lang.IllegalStateException: Failed to initialize record writer 
 for org.apache.solr.hadoop.MapReduceIndexerTool/SolrMapper, 
 attempt_1430953999892_0012_r_01_1
 at 
 org.apache.solr.hadoop.SolrRecordWriter.init(SolrRecordWriter.java:126)
 at 
 org.apache.solr.hadoop.SolrOutputFormat.getRecordWriter(SolrOutputFormat.java:163)
 at 
 org.apache.hadoop.mapred.ReduceTask$NewTrackingRecordWriter.init(ReduceTask.java:569)
 at 
 org.apache.hadoop.mapred.ReduceTask.runNewReducer(ReduceTask.java:643)
 at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:394)
 at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:175)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:415)
 at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
 at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:170)
 Caused by: org.apache.solr.common.SolrException: 
 org.xml.sax.SAXParseException; Premature end of file.
 at org.apache.solr.core.Config.init(Config.java:156)
 at 
 org.apache.solr.core.SolrXmlConfig.fromInputStream(SolrXmlConfig.java:127)
 at org.apache.solr.core.SolrXmlConfig.fromFile(SolrXmlConfig.java:110)
 at 
 org.apache.solr.core.SolrXmlConfig.fromSolrHome(SolrXmlConfig.java:138)
 at org.apache.solr.core.CoreContainer.init(CoreContainer.java:142)
 at 
 org.apache.solr.hadoop.SolrRecordWriter.createEmbeddedSolrServer(SolrRecordWriter.java:162)
 at 
 org.apache.solr.hadoop.SolrRecordWriter.init(SolrRecordWriter.java:119)
 ... 9 more
 Caused by: org.xml.sax.SAXParseException; Premature end of file.
 at 
 org.apache.xerces.util.ErrorHandlerWrapper.createSAXParseException(Unknown 
 Source)
 at org.apache.xerces.util.ErrorHandlerWrapper.fatalError(Unknown 
 Source)
 at org.apache.xerces.impl.XMLErrorReporter.reportError(Unknown Source)
 at org.apache.xerces.impl.XMLErrorReporter.reportError(Unknown Source)
 at org.apache.xerces.impl.XMLErrorReporter.reportError(Unknown Source)
 at 
 org.apache.xerces.impl.XMLVersionDetector.determineDocVersion(Unknown Source)
 at org.apache.xerces.parsers.XML11Configuration.parse(Unknown Source)
 at org.apache.xerces.parsers.XML11Configuration.parse(Unknown Source)
 at org.apache.xerces.parsers.XMLParser.parse(Unknown Source)
 at org.apache.xerces.parsers.DOMParser.parse(Unknown Source)
 at org.apache.xerces.jaxp.DocumentBuilderImpl.parse(Unknown Source)
 at org.apache.solr.core.Config.init(Config.java:145)
 ... 15 more
 {code}
 The last version that I've successfully used `MapReduceIndexerTool` was 4.9, 
 and I verified that this patch resolves the issue for me (testing on 5.1). I 
 spent a couple hours trying to write a simple test case to exhibit the error, 
 but I haven't quite figured out how to deal with the 
 {noformat}java.security.AccessControlException: java.io.FilePermission 
 ...{noformat} errors. 
 Pull request for bugfix [here|https://github.com/apache/lucene-solr/pull/147]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7512) SolrOutputFormat creates an invalid solr.xml in the solr home zip for MapReduceIndexerTool

2015-05-29 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7512?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14565063#comment-14565063
 ] 

Uwe Schindler commented on SOLR-7512:
-

bq. For some reason that method in `TestUtil` wasn't correctly unpacking the 
zip and using relative paths. Maybe that's another issue, in itself. I switched 
to the hadoop fs `FileUtil.unZip`.

The reason could be an incorrectly packed ZIP file. There are some ZIP files 
out there that use backslashes as the separator. Maybe the one you used had this 
problem.
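
Just to illustrate, an unpack loop that tolerates such files could normalize the separators itself. Rough sketch only (plain {{java.util.zip}} / {{java.nio.file}}, nothing from our test framework; imports omitted):

{code:java}
// Illustration only: unzip while tolerating entries written with backslash separators.
static void unzipTolerantly(File zipFile, Path outDir) throws IOException {
  try (ZipFile zip = new ZipFile(zipFile)) {
    Enumeration<? extends ZipEntry> entries = zip.entries();
    while (entries.hasMoreElements()) {
      ZipEntry entry = entries.nextElement();
      String name = entry.getName().replace('\\', '/'); // normalize the separator
      Path target = outDir.resolve(name).normalize();
      if (!target.startsWith(outDir)) {
        throw new IOException("suspicious zip entry: " + name);
      }
      if (entry.isDirectory() || name.endsWith("/")) {
        Files.createDirectories(target);
      } else {
        Files.createDirectories(target.getParent());
        try (InputStream in = zip.getInputStream(entry)) {
          Files.copy(in, target, StandardCopyOption.REPLACE_EXISTING);
        }
      }
    }
  }
}
{code}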

 SolrOutputFormat creates an invalid solr.xml in the solr home zip for 
 MapReduceIndexerTool
 --

 Key: SOLR-7512
 URL: https://issues.apache.org/jira/browse/SOLR-7512
 Project: Solr
  Issue Type: Bug
  Components: contrib - MapReduce
Affects Versions: 5.1
Reporter: Adam McElwee
Priority: Blocker
 Attachments: SOLR-7512.patch, SOLR-7512.patch


 Sometime after Solr 4.9, the `MapReduceIndexerTool` got busted because 
 invalid `solr.xml` contents were being written to the solr home dir zip. My 
 guess is that a 5.0 change made the invalid file start to matter. 
 The error manifests as:
 {code:java}
 Error: java.lang.IllegalStateException: Failed to initialize record writer 
 for org.apache.solr.hadoop.MapReduceIndexerTool/SolrMapper, 
 attempt_1430953999892_0012_r_01_1
 at 
 org.apache.solr.hadoop.SolrRecordWriter.init(SolrRecordWriter.java:126)
 at 
 org.apache.solr.hadoop.SolrOutputFormat.getRecordWriter(SolrOutputFormat.java:163)
 at 
 org.apache.hadoop.mapred.ReduceTask$NewTrackingRecordWriter.init(ReduceTask.java:569)
 at 
 org.apache.hadoop.mapred.ReduceTask.runNewReducer(ReduceTask.java:643)
 at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:394)
 at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:175)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:415)
 at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
 at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:170)
 Caused by: org.apache.solr.common.SolrException: 
 org.xml.sax.SAXParseException; Premature end of file.
 at org.apache.solr.core.Config.init(Config.java:156)
 at 
 org.apache.solr.core.SolrXmlConfig.fromInputStream(SolrXmlConfig.java:127)
 at org.apache.solr.core.SolrXmlConfig.fromFile(SolrXmlConfig.java:110)
 at 
 org.apache.solr.core.SolrXmlConfig.fromSolrHome(SolrXmlConfig.java:138)
 at org.apache.solr.core.CoreContainer.init(CoreContainer.java:142)
 at 
 org.apache.solr.hadoop.SolrRecordWriter.createEmbeddedSolrServer(SolrRecordWriter.java:162)
 at 
 org.apache.solr.hadoop.SolrRecordWriter.init(SolrRecordWriter.java:119)
 ... 9 more
 Caused by: org.xml.sax.SAXParseException; Premature end of file.
 at 
 org.apache.xerces.util.ErrorHandlerWrapper.createSAXParseException(Unknown 
 Source)
 at org.apache.xerces.util.ErrorHandlerWrapper.fatalError(Unknown 
 Source)
 at org.apache.xerces.impl.XMLErrorReporter.reportError(Unknown Source)
 at org.apache.xerces.impl.XMLErrorReporter.reportError(Unknown Source)
 at org.apache.xerces.impl.XMLErrorReporter.reportError(Unknown Source)
 at 
 org.apache.xerces.impl.XMLVersionDetector.determineDocVersion(Unknown Source)
 at org.apache.xerces.parsers.XML11Configuration.parse(Unknown Source)
 at org.apache.xerces.parsers.XML11Configuration.parse(Unknown Source)
 at org.apache.xerces.parsers.XMLParser.parse(Unknown Source)
 at org.apache.xerces.parsers.DOMParser.parse(Unknown Source)
 at org.apache.xerces.jaxp.DocumentBuilderImpl.parse(Unknown Source)
 at org.apache.solr.core.Config.init(Config.java:145)
 ... 15 more
 {code}
 The last version that I've successfully used `MapReduceIndexerTool` was 4.9, 
 and I verified that this patch resolves the issue for me (testing on 5.1). I 
 spent a couple hours trying to write a simple test case to exhibit the error, 
 but I haven't quite figured out how to deal with the 
 {noformat}java.security.AccessControlException: java.io.FilePermission 
 ...{noformat} errors. 
 Pull request for bugfix [here|https://github.com/apache/lucene-solr/pull/147]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-6466) Move SpanQuery.getSpans() to SpanWeight

2015-05-29 Thread Alan Woodward (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6466?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Woodward updated LUCENE-6466:
--
Attachment: LUCENE-6466-2.patch

Nits appropriately picked :-)

 Move SpanQuery.getSpans() to SpanWeight
 ---

 Key: LUCENE-6466
 URL: https://issues.apache.org/jira/browse/LUCENE-6466
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Alan Woodward
Priority: Minor
 Fix For: Trunk, 5.3

 Attachments: LUCENE-6466-2.patch, LUCENE-6466-2.patch, 
 LUCENE-6466-2.patch, LUCENE-6466.patch, LUCENE-6466.patch, LUCENE-6466.patch, 
 LUCENE-6466.patch, LUCENE-6466.patch


 SpanQuery.getSpans() should only be called on rewritten queries, so it seems 
 to make more sense to have this being called from SpanWeight



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6466) Move SpanQuery.getSpans() to SpanWeight

2015-05-29 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6466?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14565024#comment-14565024
 ] 

ASF subversion and git services commented on LUCENE-6466:
-

Commit 1682513 from [~romseygeek] in branch 'dev/trunk'
[ https://svn.apache.org/r1682513 ]

LUCENE-6466: Remove SpanSimilarity class and make SpanMTQWrapper single-pass

 Move SpanQuery.getSpans() to SpanWeight
 ---

 Key: LUCENE-6466
 URL: https://issues.apache.org/jira/browse/LUCENE-6466
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Alan Woodward
Priority: Minor
 Fix For: Trunk, 5.3

 Attachments: LUCENE-6466-2.patch, LUCENE-6466-2.patch, 
 LUCENE-6466-2.patch, LUCENE-6466.patch, LUCENE-6466.patch, LUCENE-6466.patch, 
 LUCENE-6466.patch, LUCENE-6466.patch


 SpanQuery.getSpans() should only be called on rewritten queries, so it seems 
 to make more sense to have this being called from SpanWeight



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7512) SolrOutputFormat creates an invalid solr.xml in the solr home zip for MapReduceIndexerTool

2015-05-29 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7512?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14565047#comment-14565047
 ] 

Mark Miller commented on SOLR-7512:
---

bq. but I haven't quite figured out how to deal with the 
{noformat}java.security.AccessControlException: java.io.FilePermission 
...{noformat} errors.

That's a known current problem - a couple tests have to be run via IDE or 
without a security manager because a Hadoop piece tries to write in an illegal 
location for tests.

 SolrOutputFormat creates an invalid solr.xml in the solr home zip for 
 MapReduceIndexerTool
 --

 Key: SOLR-7512
 URL: https://issues.apache.org/jira/browse/SOLR-7512
 Project: Solr
  Issue Type: Bug
  Components: contrib - MapReduce
Affects Versions: 5.1
Reporter: Adam McElwee
Priority: Blocker
 Attachments: SOLR-7512.patch


 Sometime after Solr 4.9, the `MapReduceIndexerTool` got busted because 
 invalid `solr.xml` contents were being written to the solr home dir zip. My 
 guess is that a 5.0 change made the invalid file start to matter. 
 The error manifests as:
 {code:java}
 Error: java.lang.IllegalStateException: Failed to initialize record writer 
 for org.apache.solr.hadoop.MapReduceIndexerTool/SolrMapper, 
 attempt_1430953999892_0012_r_01_1
 at 
 org.apache.solr.hadoop.SolrRecordWriter.init(SolrRecordWriter.java:126)
 at 
 org.apache.solr.hadoop.SolrOutputFormat.getRecordWriter(SolrOutputFormat.java:163)
 at 
 org.apache.hadoop.mapred.ReduceTask$NewTrackingRecordWriter.init(ReduceTask.java:569)
 at 
 org.apache.hadoop.mapred.ReduceTask.runNewReducer(ReduceTask.java:643)
 at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:394)
 at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:175)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:415)
 at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
 at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:170)
 Caused by: org.apache.solr.common.SolrException: 
 org.xml.sax.SAXParseException; Premature end of file.
 at org.apache.solr.core.Config.init(Config.java:156)
 at 
 org.apache.solr.core.SolrXmlConfig.fromInputStream(SolrXmlConfig.java:127)
 at org.apache.solr.core.SolrXmlConfig.fromFile(SolrXmlConfig.java:110)
 at 
 org.apache.solr.core.SolrXmlConfig.fromSolrHome(SolrXmlConfig.java:138)
 at org.apache.solr.core.CoreContainer.init(CoreContainer.java:142)
 at 
 org.apache.solr.hadoop.SolrRecordWriter.createEmbeddedSolrServer(SolrRecordWriter.java:162)
 at 
 org.apache.solr.hadoop.SolrRecordWriter.init(SolrRecordWriter.java:119)
 ... 9 more
 Caused by: org.xml.sax.SAXParseException; Premature end of file.
 at 
 org.apache.xerces.util.ErrorHandlerWrapper.createSAXParseException(Unknown 
 Source)
 at org.apache.xerces.util.ErrorHandlerWrapper.fatalError(Unknown 
 Source)
 at org.apache.xerces.impl.XMLErrorReporter.reportError(Unknown Source)
 at org.apache.xerces.impl.XMLErrorReporter.reportError(Unknown Source)
 at org.apache.xerces.impl.XMLErrorReporter.reportError(Unknown Source)
 at 
 org.apache.xerces.impl.XMLVersionDetector.determineDocVersion(Unknown Source)
 at org.apache.xerces.parsers.XML11Configuration.parse(Unknown Source)
 at org.apache.xerces.parsers.XML11Configuration.parse(Unknown Source)
 at org.apache.xerces.parsers.XMLParser.parse(Unknown Source)
 at org.apache.xerces.parsers.DOMParser.parse(Unknown Source)
 at org.apache.xerces.jaxp.DocumentBuilderImpl.parse(Unknown Source)
 at org.apache.solr.core.Config.init(Config.java:145)
 ... 15 more
 {code}
 The last version that I've successfully used `MapReduceIndexerTool` was 4.9, 
 and I verified that this patch resolves the issue for me (testing on 5.1). I 
 spent a couple hours trying to write a simple test case to exhibit the error, 
 but I haven't quite figured out how to deal with the 
 {noformat}java.security.AccessControlException: java.io.FilePermission 
 ...{noformat} errors. 
 Pull request for bugfix [here|https://github.com/apache/lucene-solr/pull/147]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-7609) ShardSplitTest NPE

2015-05-29 Thread Steve Rowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7609?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Rowe updated SOLR-7609:
-
Attachment: ShardSplitTest.NPE.log

Attaching log excerpt for the failing test.

 ShardSplitTest NPE
 --

 Key: SOLR-7609
 URL: https://issues.apache.org/jira/browse/SOLR-7609
 Project: Solr
  Issue Type: Bug
Reporter: Steve Rowe
Priority: Minor
 Attachments: ShardSplitTest.NPE.log


 I'm guessing this is a test bug, but the seed doesn't reproduce for me (tried 
 on the same Linux machine it occurred on and on OS X):
 {noformat}
[junit4]   2 NOTE: reproduce with: ant test  -Dtestcase=ShardSplitTest 
 -Dtests.method=test -Dtests.seed=9318DDA46578ECF9 -Dtests.slow=true 
 -Dtests.locale=is -Dtests.timezone=America/St_Vincent -Dtests.asserts=true 
 -Dtests.file.encoding=US-ASCII
[junit4] ERROR   55.8s J6  | ShardSplitTest.test 
[junit4] Throwable #1: java.lang.NullPointerException
[junit4]  at 
 __randomizedtesting.SeedInfo.seed([9318DDA46578ECF9:1B4CE27ECB848101]:0)
[junit4]  at 
 org.apache.solr.cloud.ShardSplitTest.logDebugHelp(ShardSplitTest.java:547)
[junit4]  at 
 org.apache.solr.cloud.ShardSplitTest.checkDocCountsAndShardStates(ShardSplitTest.java:438)
[junit4]  at 
 org.apache.solr.cloud.ShardSplitTest.splitByUniqueKeyTest(ShardSplitTest.java:222)
[junit4]  at 
 org.apache.solr.cloud.ShardSplitTest.test(ShardSplitTest.java:84)
[junit4]  at 
 org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:960)
[junit4]  at 
 org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:935)
[junit4]  at java.lang.Thread.run(Thread.java:745)
 {noformat}
 Line 547 of {{ShardSplitTest.java}} is:
 {code:java}
    idVsVersion.put(document.getFieldValue("id").toString(), document.getFieldValue("_version_").toString());
 {code}
 Skimming the code, it's not obvious what could be null.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-7610) Improve and demonstrate VelocityResponseWriter's $resource localization tool

2015-05-29 Thread Erik Hatcher (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7610?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Hatcher updated SOLR-7610:
---
Attachment: SOLR-7610-example_files_config.patch

This patch moves resources.properties to 
example/files/browse-resources/velocity (new directory to be created) and has 
solrconfig.xml point to that directory.

 Improve and demonstrate VelocityResponseWriter's $resource localization tool
 

 Key: SOLR-7610
 URL: https://issues.apache.org/jira/browse/SOLR-7610
 Project: Solr
  Issue Type: Bug
Affects Versions: 5.2
Reporter: Erik Hatcher
Assignee: Erik Hatcher
 Fix For: 5.3

 Attachments: SOLR-7610-example_files_config.patch, 
 SOLR-7610-resourcetool.patch


 Improvement: fix $resource.locale, which currently reports the base Solr 
 server locale rather than the one set by v.locale
 Demonstrate: Localize example/file's /browse



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-6466) Move SpanQuery.getSpans() to SpanWeight

2015-05-29 Thread Alan Woodward (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6466?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Woodward updated LUCENE-6466:
--
Attachment: LUCENE-6466-2.patch

Oops, yes, I missed TopTermsSpanBooleanQueryRewrite.  Final patch with the 
changes there, plus some assertions copied from TermWeight/TermScorer.

 Move SpanQuery.getSpans() to SpanWeight
 ---

 Key: LUCENE-6466
 URL: https://issues.apache.org/jira/browse/LUCENE-6466
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Alan Woodward
Priority: Minor
 Fix For: Trunk, 5.3

 Attachments: LUCENE-6466-2.patch, LUCENE-6466-2.patch, 
 LUCENE-6466.patch, LUCENE-6466.patch, LUCENE-6466.patch, LUCENE-6466.patch, 
 LUCENE-6466.patch


 SpanQuery.getSpans() should only be called on rewritten queries, so it seems 
 to make more sense to have this being called from SpanWeight



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5283) Fail the build if ant test didn't execute any tests (everything filtered out).

2015-05-29 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5283?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14565049#comment-14565049
 ] 

Steve Rowe commented on LUCENE-5283:


This feature caused a build failure for me when dataimporthandler-extras had 
all its tests skipped or ignored, when run as part of {{ant test-contrib}} from 
{{solr/}}: 

{noformat}
-init-totals:

-test:

[mkdir] Created dir: 
/var/lib/jenkins/jobs/Lucene-Solr-tests-5.2-Java8/workspace/solr/build/contrib/solr-dataimporthandler-extras/test
[mkdir] Created dir: 
/var/lib/jenkins/jobs/Lucene-Solr-tests-5.2-Java8/workspace/solr/build/contrib/solr-dataimporthandler-extras/test/temp
[mkdir] Created dir: 
/var/lib/jenkins/jobs/Lucene-Solr-tests-5.2-Java8/workspace/.caches/test-stats/solr-dataimporthandler-extras
   [junit4] JUnit4 says hi! Master seed: 2085E7C0234F42B1
   [junit4] Executing 2 suites with 2 JVMs.
   [junit4] 
   [junit4] Started J0 PID(7435@goose).
   [junit4] Started J1 PID(7498@goose).
   [junit4] Suite: org.apache.solr.handler.dataimport.TestMailEntityProcessor
   [junit4] Completed [1/2] on J1 in 0.47s, 6 tests, 6 skipped
   [junit4] 
   [junit4] Suite: org.apache.solr.handler.dataimport.TestTikaEntityProcessor
   [junit4] Completed [2/2] on J0 in 2.02s, 9 tests, 9 skipped
   [junit4] 
   [junit4] JVM J0: 0.84 .. 3.80 = 2.96s
   [junit4] JVM J1: 1.08 .. 2.82 = 1.74s
   [junit4] Execution time total: 3.87 sec.
   [junit4] Tests summary: 2 suites, 15 tests, 15 ignored
 [echo] 5 slowest tests:
[junit4:tophints]   2.02s | 
org.apache.solr.handler.dataimport.TestTikaEntityProcessor
[junit4:tophints]   0.47s | 
org.apache.solr.handler.dataimport.TestMailEntityProcessor

-check-totals:

BUILD FAILED
/var/lib/jenkins/jobs/Lucene-Solr-tests-5.2-Java8/workspace/solr/build.xml:249: 
The following error occurred while executing this line:
/var/lib/jenkins/jobs/Lucene-Solr-tests-5.2-Java8/workspace/solr/common-build.xml:454:
 The following error occurred while executing this line:
/var/lib/jenkins/jobs/Lucene-Solr-tests-5.2-Java8/workspace/solr/common-build.xml:511:
 The following error occurred while executing this line:
/var/lib/jenkins/jobs/Lucene-Solr-tests-5.2-Java8/workspace/lucene/common-build.xml:1501:
 Not even a single test was executed (a typo in the filter pattern maybe?).
Total time: 5 minutes 12 seconds
{noformat}
 
Looks like {{-Dtests.ifNoTests=ignore}} didn't make it into the final 
implementation.  I thought using {{-Dtests.totals.toplevel=false}} might work, 
but still fails for me when I purposely test a non-existent testcase.

Is there some way to not fail when no tests run in a module?

 Fail the build if ant test didn't execute any tests (everything filtered out).
 --

 Key: LUCENE-5283
 URL: https://issues.apache.org/jira/browse/LUCENE-5283
 Project: Lucene - Core
  Issue Type: Wish
Reporter: Dawid Weiss
Assignee: Dawid Weiss
Priority: Trivial
 Fix For: 4.6, Trunk

 Attachments: LUCENE-5283-permgen.patch, LUCENE-5283.patch, 
 LUCENE-5283.patch, LUCENE-5283.patch


 This should be an optional setting that defaults to 'false' (the build 
 proceeds).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-7610) Improve and demonstrate VelocityResponseWriter's $resource localization tool

2015-05-29 Thread Erik Hatcher (JIRA)
Erik Hatcher created SOLR-7610:
--

 Summary: Improve and demonstrate VelocityResponseWriter's 
$resource localization tool
 Key: SOLR-7610
 URL: https://issues.apache.org/jira/browse/SOLR-7610
 Project: Solr
  Issue Type: Bug
Affects Versions: 5.2
Reporter: Erik Hatcher
Assignee: Erik Hatcher
 Fix For: 5.3


Improvement: fix $resource.locale, which currently reports the base Solr server 
locale rather than the one set by v.locale

Demonstrate: Localize example/file's /browse



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6466) Move SpanQuery.getSpans() to SpanWeight

2015-05-29 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6466?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14565009#comment-14565009
 ] 

Robert Muir commented on LUCENE-6466:
-

I only have one more stylistic nit, otherwise I like it. Great to remove the 
confusing class and also get single-pass spanmultitermquery!

Can we break this very long run-on in SpanTermQuery.buildSimWeight?

{code}
return searcher.getSimilarity().computeWeight(query.getBoost(), 
searcher.collectionStatistics(query.getField()), stats);
{code}

Instead I would rename 'stats' to 'termStats' and do maybe something like:
{code}
CollectionStatistics collectionStats = searcher.collectionStatistics(...);
return xxx.computeWeight(query.getBoost(), collectionStats, termStats);
{code}
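
Spelled out, that placeholder would look roughly like this (sketch only; the exact names in the patch may differ, and 'termStats' is whatever buildSimWeight has already collected):

{code:java}
CollectionStatistics collectionStats = searcher.collectionStatistics(query.getField());
return searcher.getSimilarity().computeWeight(query.getBoost(), collectionStats, termStats);
{code}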

 Move SpanQuery.getSpans() to SpanWeight
 ---

 Key: LUCENE-6466
 URL: https://issues.apache.org/jira/browse/LUCENE-6466
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Alan Woodward
Priority: Minor
 Fix For: Trunk, 5.3

 Attachments: LUCENE-6466-2.patch, LUCENE-6466-2.patch, 
 LUCENE-6466.patch, LUCENE-6466.patch, LUCENE-6466.patch, LUCENE-6466.patch, 
 LUCENE-6466.patch


 SpanQuery.getSpans() should only be called on rewritten queries, so it seems 
 to make more sense to have this being called from SpanWeight



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-7512) SolrOutputFormat creates an invalid solr.xml in the solr home zip for MapReduceIndexerTool

2015-05-29 Thread Adam McElwee (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7512?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adam McElwee updated SOLR-7512:
---
Attachment: SOLR-7512.patch

 SolrOutputFormat creates an invalid solr.xml in the solr home zip for 
 MapReduceIndexerTool
 --

 Key: SOLR-7512
 URL: https://issues.apache.org/jira/browse/SOLR-7512
 Project: Solr
  Issue Type: Bug
  Components: contrib - MapReduce
Affects Versions: 5.1
Reporter: Adam McElwee
Priority: Blocker
 Attachments: SOLR-7512.patch


 Sometime after Solr 4.9, the `MapReduceIndexerTool` got busted because 
 invalid `solr.xml` contents were being written to the solr home dir zip. My 
 guess is that a 5.0 change made the invalid file start to matter. 
 The error manifests as:
 {code:java}
 Error: java.lang.IllegalStateException: Failed to initialize record writer 
 for org.apache.solr.hadoop.MapReduceIndexerTool/SolrMapper, 
 attempt_1430953999892_0012_r_01_1
 at 
 org.apache.solr.hadoop.SolrRecordWriter.init(SolrRecordWriter.java:126)
 at 
 org.apache.solr.hadoop.SolrOutputFormat.getRecordWriter(SolrOutputFormat.java:163)
 at 
 org.apache.hadoop.mapred.ReduceTask$NewTrackingRecordWriter.init(ReduceTask.java:569)
 at 
 org.apache.hadoop.mapred.ReduceTask.runNewReducer(ReduceTask.java:643)
 at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:394)
 at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:175)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:415)
 at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
 at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:170)
 Caused by: org.apache.solr.common.SolrException: 
 org.xml.sax.SAXParseException; Premature end of file.
 at org.apache.solr.core.Config.init(Config.java:156)
 at 
 org.apache.solr.core.SolrXmlConfig.fromInputStream(SolrXmlConfig.java:127)
 at org.apache.solr.core.SolrXmlConfig.fromFile(SolrXmlConfig.java:110)
 at 
 org.apache.solr.core.SolrXmlConfig.fromSolrHome(SolrXmlConfig.java:138)
 at org.apache.solr.core.CoreContainer.init(CoreContainer.java:142)
 at 
 org.apache.solr.hadoop.SolrRecordWriter.createEmbeddedSolrServer(SolrRecordWriter.java:162)
 at 
 org.apache.solr.hadoop.SolrRecordWriter.init(SolrRecordWriter.java:119)
 ... 9 more
 Caused by: org.xml.sax.SAXParseException; Premature end of file.
 at 
 org.apache.xerces.util.ErrorHandlerWrapper.createSAXParseException(Unknown 
 Source)
 at org.apache.xerces.util.ErrorHandlerWrapper.fatalError(Unknown 
 Source)
 at org.apache.xerces.impl.XMLErrorReporter.reportError(Unknown Source)
 at org.apache.xerces.impl.XMLErrorReporter.reportError(Unknown Source)
 at org.apache.xerces.impl.XMLErrorReporter.reportError(Unknown Source)
 at 
 org.apache.xerces.impl.XMLVersionDetector.determineDocVersion(Unknown Source)
 at org.apache.xerces.parsers.XML11Configuration.parse(Unknown Source)
 at org.apache.xerces.parsers.XML11Configuration.parse(Unknown Source)
 at org.apache.xerces.parsers.XMLParser.parse(Unknown Source)
 at org.apache.xerces.parsers.DOMParser.parse(Unknown Source)
 at org.apache.xerces.jaxp.DocumentBuilderImpl.parse(Unknown Source)
 at org.apache.solr.core.Config.init(Config.java:145)
 ... 15 more
 {code}
 The last version that I've successfully used `MapReduceIndexerTool` was 4.9, 
 and I verified that this patch resolves the issue for me (testing on 5.1). I 
 spent a couple hours trying to write a simple test case to exhibit the error, 
 but I haven't quite figured out how to deal with the 
 {noformat}java.security.AccessControlException: java.io.FilePermission 
 ...{noformat} errors. 
 Pull request for bugfix [here|https://github.com/apache/lucene-solr/pull/147]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-7608) Json API: cannot get metric for field with spaces.

2015-05-29 Thread Iana Bondarska (JIRA)
Iana Bondarska created SOLR-7608:


 Summary: Json API: cannot get metric for field with spaces.
 Key: SOLR-7608
 URL: https://issues.apache.org/jira/browse/SOLR-7608
 Project: Solr
  Issue Type: Bug
  Components: query parsers
Affects Versions: 5.2
Reporter: Iana Bondarska
 Fix For: 5.2


There is a numeric field with spaces in its name in the schema. When I try to 
calculate any metric for it, I get an error about an unrecognized field. There 
seems to be no way to escape spaces in field names. The same query works fine 
for field names without spaces.
Example JSON query: 
{"limit":0,"offset":0,"filter":[],"facet":{"facet":{"facet":{"actual_sales_sum":"sum(Actual Sales)"},"limit":0,"field":"City","type":"terms"}}}




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: MapReduceIndexerTool on 5.0+

2015-05-29 Thread Mark Miller
Hmm...yeah, I'll take a look.

- Mark

On Fri, May 29, 2015 at 12:31 PM Adam McElwee a...@mcelwee.me wrote:

 Is anyone running the MapReduceIndexerTool for solr 5.0+? I ran into an
 issue the other day when I upgraded from 4.10 to 5.1, but I haven't
 stumbled upon anyone else who's having problems w/ it.

 JIRA: https://issues.apache.org/jira/browse/SOLR-7512
 PR: https://github.com/apache/lucene-solr/pull/147

 Anyone have a minute to check out the PR or the patch attached to the
 ticket?



[jira] [Resolved] (SOLR-7274) Pluggable authentication module in Solr

2015-05-29 Thread Anshum Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7274?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anshum Gupta resolved SOLR-7274.

Resolution: Fixed

 Pluggable authentication module in Solr
 ---

 Key: SOLR-7274
 URL: https://issues.apache.org/jira/browse/SOLR-7274
 Project: Solr
  Issue Type: Sub-task
Reporter: Anshum Gupta
Assignee: Anshum Gupta
 Fix For: 5.2

 Attachments: SOLR-7274-reconfigure-sdf-httpclient.patch, 
 SOLR-7274-reconfigure-sdf-httpclient.patch, 
 SOLR-7274-reconfigure-sdf-httpclient.patch, 
 SOLR-7274-reconfigure-sdf-httpclient.patch, SOLR-7274.patch, SOLR-7274.patch, 
 SOLR-7274.patch, SOLR-7274.patch, SOLR-7274.patch, SOLR-7274.patch, 
 SOLR-7274.patch, SOLR-7274.patch, SOLR-7274.patch, SOLR-7274.patch, 
 SOLR-7274.patch, SOLR-7274.patch, SOLR-7274.patch, SOLR-7274.patch, 
 SOLR-7274.patch, SOLR-7274.patch, SOLR-7274.patch


 It would be good to have Solr support different authentication protocols.
 To begin with, it'd be good to have support for kerberos and basic auth.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-7468) Kerberos authentication module

2015-05-29 Thread Anshum Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7468?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anshum Gupta resolved SOLR-7468.

Resolution: Fixed

 Kerberos authentication module
 --

 Key: SOLR-7468
 URL: https://issues.apache.org/jira/browse/SOLR-7468
 Project: Solr
  Issue Type: New Feature
  Components: security
Reporter: Ishan Chattopadhyaya
Assignee: Anshum Gupta
 Fix For: 5.2

 Attachments: SOLR-7468-alt-test.patch, SOLR-7468-alt-test.patch, 
 SOLR-7468-alt-test.patch, SOLR-7468.patch, SOLR-7468.patch, SOLR-7468.patch, 
 SOLR-7468.patch, SOLR-7468.patch, SOLR-7468.patch, SOLR-7468.patch, 
 SOLR-7468.patch, SOLR-7468.patch, SOLR-7468.patch, SOLR-7468.patch, 
 SOLR-7468.patch, SOLR-7468.patch, SOLR-7468.patch, SOLR-7468.patch, 
 SOLR-7468.patch, SOLR-7468.patch, SOLR-7468.patch, 
 hoss_trunk_r1681791_TEST-org.apache.solr.cloud.TestSolrCloudWithKerberosAlt.xml,
  hoss_trunk_r1681791_tests-failures.txt


 SOLR-7274 introduces a pluggable authentication framework. This issue 
 provides a Kerberos plugin implementation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7603) Scary non reproducible failure from UpdateRequestProcessorFactoryTest.testUpdateDistribChainSkipping

2015-05-29 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7603?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14565170#comment-14565170
 ] 

Hoss Man commented on SOLR-7603:


This happened again last night with the new test assertions providing a bit 
more detail...

{noformat}
Date: Fri, 29 May 2015 09:42:29 + (UTC)
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-trunk/696/

java.lang.AssertionError: distrib-chain-explicit expected LogUpdateProcessor in 
chain due to @RunAllways, but not
found: org.apache.solr.update.processor.DistributedUpdateProcessor@638c6e19,
org.apache.solr.update.processor.RemoveBlankFieldUpdateProcessorFactory$1@25bbdfcb,
org.apache.solr.update.processor.RunUpdateProcessor@473fea05, 
at 
__randomizedtesting.SeedInfo.seed([3AF6852C6379681:724B9684B0DCB14D]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at
org.apache.solr.update.processor.UpdateRequestProcessorFactoryTest.testUpdateDistribChainSkipping(UpdateRequestProcessorFactoryTest.java:127)

{noformat}

(builds.apache.org is down at the moment so I can't confirm the full reproduce 
line)

I really can't make heads or tails of why LogUpdateProcessor wouldn't be in that 
chain.



FWIW: the first instance I can find of a failure from this assert is...

{noformat}
Date: Sat, 2 May 2015 17:14:16 + (UTC)
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-5.x/835/

   [junit4]   2 NOTE: reproduce with: ant test  
-Dtestcase=UpdateRequestProcessorFactoryTest 
-Dtests.method=testUpdateDistribChainSkipping -Dtests.seed=D82449739AD0D42D 
-Dtests.multiplier=2 -Dtests.nightly=true -Dtests.slow=true 
-Dtests.linedocsfile=/home/jenkins/lucene-data/enwiki.random.lines.txt 
-Dtests.locale=nl -Dtests.timezone=America/Denver -Dtests.asserts=true 
-Dtests.file.encoding=US-ASCII
   [junit4] FAILURE 0.01s J0 | 
UpdateRequestProcessorFactoryTest.testUpdateDistribChainSkipping 
   [junit4] Throwable #1: java.lang.AssertionError
   [junit4]at 
__randomizedtesting.SeedInfo.seed([D82449739AD0D42D:A9C0B7A5EC3BF3E1]:0)
   [junit4]at 
org.apache.solr.update.processor.UpdateRequestProcessorFactoryTest.testUpdateDistribChainSkipping(UpdateRequestProcessorFactoryTest.java:111)
   [junit4]at java.lang.Thread.run(Thread.java:745)
{noformat}

 Scary non reproducible failure from 
 UpdateRequestProcessorFactoryTest.testUpdateDistribChainSkipping
 

 Key: SOLR-7603
 URL: https://issues.apache.org/jira/browse/SOLR-7603
 Project: Solr
  Issue Type: Bug
Reporter: Hoss Man
 Attachments: SOLR-7603.consoleText.txt


 jenkins nightly hit a very inexplicable error today...
 {noformat}
 Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-5.x/860/
 At revision 1682097
 Updating http://svn.apache.org/repos/asf/lucene/dev/branches/branch_5x at 
 revision '2015-05-27T14:50:50.016 -0400'
 [java-info] java version 1.7.0_72
 [java-info] Java(TM) SE Runtime Environment (1.7.0_72-b14, Oracle Corporation)
 [java-info] Java HotSpot(TM) 64-Bit Server VM (24.72-b04, Oracle Corporation)
 {noformat}
 {noformat}
   [junit4]   2 NOTE: reproduce with: ant test  
 -Dtestcase=UpdateRequestProcessorFactoryTest
 -Dtests.method=testUpdateDistribChainSkipping -Dtests.seed=4ECABCCFD159BE21 
 -Dtests.multiplier=2
 -Dtests.nightly=true -Dtests.slow=true 
 -Dtests.linedocsfile=/x1/jenkins/lucene-data/enwiki.random.lines.txt
 -Dtests.locale=mt_MT -Dtests.timezone=Etc/GMT0 -Dtests.asserts=true 
 -Dtests.file.encoding=ISO-8859-1
[junit4] FAILURE 0.01s J0 | 
 UpdateRequestProcessorFactoryTest.testUpdateDistribChainSkipping 
[junit4] Throwable #1: java.lang.AssertionError
[junit4]at 
 __randomizedtesting.SeedInfo.seed([4ECABCCFD159BE21:3F2E4219A7B299ED]:0)
[junit4]at
 org.apache.solr.update.processor.UpdateRequestProcessorFactoryTest.testUpdateDistribChainSkipping(UpdateRequestProcessorFactoryTest.java:111)
[junit4]at java.lang.Thread.run(Thread.java:745)
 {noformat}
 ...the line in question is asserting that when executing a distributed update 
 (i.e. forwarded from another node), the LogUpdateProcessor is still part of 
 the chain because it's got the RunAlways annotation indicating it 
 should always be included in the chain (everything before the 
 DistribUpdateProcessor is normally skipped).
 There's really no explanation for why the LogUpdateProcessor wouldn't be 
 found other than a code bug -- but in that case why doesn't the seed 
 reproduce reliably?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional 

[JENKINS] Lucene-Solr-5.x-Windows (64bit/jdk1.7.0_80) - Build # 4745 - Failure!

2015-05-29 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Windows/4745/
Java: 64bit/jdk1.7.0_80 -XX:+UseCompressedOops -XX:+UseParallelGC

2 tests failed.
FAILED:  org.apache.solr.cloud.ActionThrottleTest.testAZeroNanoTimeReturnInWait

Error Message:
989ms

Stack Trace:
java.lang.AssertionError: 989ms
at 
__randomizedtesting.SeedInfo.seed([210947A435382937:E262BC968179D4D4]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.cloud.ActionThrottleTest.testAZeroNanoTimeReturnInWait(ActionThrottleTest.java:93)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at java.lang.Thread.run(Thread.java:745)


FAILED:  org.apache.solr.cloud.ActionThrottleTest.testBasics

Error Message:
989ms

Stack Trace:
java.lang.AssertionError: 989ms
at 
__randomizedtesting.SeedInfo.seed([210947A435382937:1CD1E9880DD67747]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
  

[jira] [Resolved] (SOLR-7275) Pluggable authorization module in Solr

2015-05-29 Thread Anshum Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7275?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anshum Gupta resolved SOLR-7275.

Resolution: Fixed

 Pluggable authorization module in Solr
 --

 Key: SOLR-7275
 URL: https://issues.apache.org/jira/browse/SOLR-7275
 Project: Solr
  Issue Type: Sub-task
Reporter: Anshum Gupta
Assignee: Anshum Gupta
 Fix For: 5.2

 Attachments: SOLR-7275.patch, SOLR-7275.patch, SOLR-7275.patch, 
 SOLR-7275.patch, SOLR-7275.patch, SOLR-7275.patch, SOLR-7275.patch, 
 SOLR-7275.patch, SOLR-7275.patch, SOLR-7275.patch, SOLR-7275.patch, 
 SOLR-7275.patch, SOLR-7275.patch, SOLR-7275.patch, SOLR-7275.patch, 
 SOLR-7275.patch, SOLR-7275.patch


 Solr needs an interface that makes it easy for different authorization 
 systems to be plugged into it. Here's what I plan on doing:
 Define an interface {{SolrAuthorizationPlugin}} with a single method 
 {{isAuthorized}}. This would take in a {{SolrRequestContext}} object and 
 return a {{SolrAuthorizationResponse}} object. The object as of now would 
 only contain a single boolean value, but in the future it could contain more 
 information, e.g. an ACL for document filtering.
 The reason we need a context object is so that the plugin doesn't need to 
 understand Solr's capabilities, e.g. how to extract the name of the collection 
 or other information from the incoming request, as there are multiple ways to 
 specify the target collection for a request. Similarly, the request type can be 
 specified by {{qt}} or {{/handler_name}}.
 Flow:
 Request -> SolrDispatchFilter -> isAuthorized(context) -> Process/Return.
 {code}
 public interface SolrAuthorizationPlugin {
   public SolrAuthorizationResponse isAuthorized(SolrRequestContext context);
 }
 {code}
 {code}
 public  class SolrRequestContext {
   UserInfo; // Will contain user context from the authentication layer.
   HTTPRequest request;
   Enum OperationType; // Correlated with user roles.
   String[] CollectionsAccessed;
   String[] FieldsAccessed;
   String Resource;
 }
 {code}
 {code}
 public class SolrAuthorizationResponse {
   boolean authorized;
   public boolean isAuthorized();
 }
 {code}
 User Roles: 
 * Admin
 * Collection Level:
   * Query
   * Update
   * Admin
 Using this framework, an implementation could be written for specific 
 security systems e.g. Apache Ranger or Sentry. It would keep all the security 
 system specific code out of Solr.
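
For illustration, a trivial plugin written against the sketched interfaces above (the class name and field access here are assumptions taken from the sketch, not a committed API):

{code:java}
// Allows everything; a real implementation (e.g. backed by Ranger or Sentry)
// would consult its policy store using the fields exposed by SolrRequestContext.
public class AllowAllAuthorizationPlugin implements SolrAuthorizationPlugin {
  @Override
  public SolrAuthorizationResponse isAuthorized(SolrRequestContext context) {
    SolrAuthorizationResponse rsp = new SolrAuthorizationResponse();
    rsp.authorized = true; // the sketch above leaves the setter unspecified
    return rsp;
  }
}
{code}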



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6508) Simplify Directory/lock api

2015-05-29 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6508?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14565121#comment-14565121
 ] 

ASF subversion and git services commented on LUCENE-6508:
-

Commit 1682520 from [~rcmuir] in branch 'dev/branches/lucene6508'
[ https://svn.apache.org/r1682520 ]

LUCENE-6508: fix tests and cleanup

 Simplify Directory/lock api
 ---

 Key: LUCENE-6508
 URL: https://issues.apache.org/jira/browse/LUCENE-6508
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Robert Muir
Assignee: Uwe Schindler
 Attachments: LUCENE-6508-deadcode1.patch, LUCENE-6508.patch, 
 LUCENE-6508.patch


 See LUCENE-6507 for some background. In general it would be great if you can 
 just acquire an immutable lock (or you get a failure) and then you close that 
 to release it.
 Today the API might be too much for what is needed by IW.
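
In practice the goal is roughly a try-with-resources pattern; a minimal sketch (the method name obtainLock is an assumption about where this is heading, not settled API):

{code:java}
// Sketch only: acquire-or-throw, then close() releases the lock exactly once.
void withWriteLock(Directory dir) throws IOException {
  try (Lock lock = dir.obtainLock(IndexWriter.WRITE_LOCK_NAME)) {
    // the lock is guaranteed to be held here, otherwise obtainLock threw
    // (e.g. a LockObtainFailedException)
    // ... do the work that requires the write lock ...
  }
}
{code}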



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6508) Simplify Directory/lock api

2015-05-29 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6508?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14565141#comment-14565141
 ] 

ASF subversion and git services commented on LUCENE-6508:
-

Commit 1682523 from [~rcmuir] in branch 'dev/branches/lucene6508'
[ https://svn.apache.org/r1682523 ]

LUCENE-6508: add back LockObtainFailedException but in a simpler way

 Simplify Directory/lock api
 ---

 Key: LUCENE-6508
 URL: https://issues.apache.org/jira/browse/LUCENE-6508
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Robert Muir
Assignee: Uwe Schindler
 Attachments: LUCENE-6508-deadcode1.patch, LUCENE-6508.patch, 
 LUCENE-6508.patch


 See LUCENE-6507 for some background. In general it would be great if you can 
 just acquire an immutable lock (or you get a failure) and then you close that 
 to release it.
 Today the API might be too much for what is needed by IW.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [JENKINS] Lucene-Solr-NightlyTests-trunk - Build # 696 - Still Failing

2015-05-29 Thread Chris Hostetter

I've updated SOLR-7603 with the new details from the 
UpdateRequestProcessorFactoryTest failure and will try to make sense of 
that.

Anybody have any clue what's up with the HttpPartitionTest test here?

leaking file handles on the segments file?


: Date: Fri, 29 May 2015 09:42:29 + (UTC)
: From: Apache Jenkins Server jenk...@builds.apache.org
: Reply-To: dev@lucene.apache.org
: To: dev@lucene.apache.org
: Subject: [JENKINS] Lucene-Solr-NightlyTests-trunk - Build # 696 - Still
: Failing
: 
: Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-trunk/696/
: 
: 3 tests failed.
: FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.HttpPartitionTest
: 
: Error Message:
: file handle leaks: 
[SeekableByteChannel(/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-trunk/solr/build/solr-core/test/J0/temp/solr.cloud.HttpPartitionTest
 3AF6852C6379681-001/index-SimpleFSDirectory-011/segments_2)]
: 
: Stack Trace:
: java.lang.RuntimeException: file handle leaks: 
[SeekableByteChannel(/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-trunk/solr/build/solr-core/test/J0/temp/solr.cloud.HttpPartitionTest
 3AF6852C6379681-001/index-SimpleFSDirectory-011/segments_2)]
:   at __randomizedtesting.SeedInfo.seed([3AF6852C6379681]:0)
:   at org.apache.lucene.mockfile.LeakFS.onClose(LeakFS.java:64)
:   at 
org.apache.lucene.mockfile.FilterFileSystem.close(FilterFileSystem.java:78)
:   at 
org.apache.lucene.mockfile.FilterFileSystem.close(FilterFileSystem.java:79)
:   at 
org.apache.lucene.util.TestRuleTemporaryFilesCleanup.afterAlways(TestRuleTemporaryFilesCleanup.java:227)
:   at 
com.carrotsearch.randomizedtesting.rules.TestRuleAdapter$1.afterAlways(TestRuleAdapter.java:31)
:   at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:43)
:   at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
:   at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
:   at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
:   at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
:   at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
:   at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
:   at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
:   at java.lang.Thread.run(Thread.java:745)
: Caused by: java.lang.Exception
:   at org.apache.lucene.mockfile.LeakFS.onOpen(LeakFS.java:47)
:   at 
org.apache.lucene.mockfile.HandleTrackingFS.callOpenHook(HandleTrackingFS.java:82)
:   at 
org.apache.lucene.mockfile.HandleTrackingFS.newByteChannel(HandleTrackingFS.java:272)
:   at 
org.apache.lucene.mockfile.FilterFileSystemProvider.newByteChannel(FilterFileSystemProvider.java:213)
:   at 
org.apache.lucene.mockfile.HandleTrackingFS.newByteChannel(HandleTrackingFS.java:241)
:   at java.nio.file.Files.newByteChannel(Files.java:361)
:   at java.nio.file.Files.newByteChannel(Files.java:407)
:   at 
org.apache.lucene.store.SimpleFSDirectory.openInput(SimpleFSDirectory.java:76)
:   at 
org.apache.lucene.store.Directory.openChecksumInput(Directory.java:110)
:   at 
org.apache.lucene.store.RawDirectoryWrapper.openChecksumInput(RawDirectoryWrapper.java:42)
:   at 
org.apache.lucene.index.SegmentInfos.readCommit(SegmentInfos.java:269)
:   at org.apache.lucene.index.CheckIndex.checkIndex(CheckIndex.java:488)
:   at org.apache.lucene.util.TestUtil.checkIndex(TestUtil.java:278)
:   at org.apache.lucene.util.TestUtil.checkIndex(TestUtil.java:265)
:   at 
org.apache.lucene.store.BaseDirectoryWrapper.close(BaseDirectoryWrapper.java:46)
:   at 
org.apache.solr.core.CachingDirectoryFactory.close(CachingDirectoryFactory.java:307)
:   at 
org.apache.solr.core.CachingDirectoryFactory.closeCacheValue(CachingDirectoryFactory.java:273)
:   at 
org.apache.solr.core.CachingDirectoryFactory.close(CachingDirectoryFactory.java:203)
:   at org.apache.solr.core.SolrCore.close(SolrCore.java:1254)
:   at 
org.apache.solr.cloud.ShardLeaderElectionContext.runLeaderProcess(ElectionContext.java:311)
:   at 
org.apache.solr.cloud.LeaderElector.runIamLeaderProcess(LeaderElector.java:198)
:   at 
org.apache.solr.cloud.LeaderElector.checkIfIamLeader(LeaderElector.java:159)
:   at 
org.apache.solr.cloud.LeaderElector.joinElection(LeaderElector.java:348)
:   at 
org.apache.solr.cloud.ZkController.joinElection(ZkController.java:1070)
:   at org.apache.solr.cloud.ZkController.register(ZkController.java:884)
:   at 
org.apache.solr.cloud.ZkController$RegisterCoreAsync.call(ZkController.java:225)
:   at 

[jira] [Commented] (SOLR-7512) SolrOutputFormat creates an invalid solr.xml in the solr home zip for MapReduceIndexerTool

2015-05-29 Thread Adam McElwee (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7512?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14565070#comment-14565070
 ] 

Adam McElwee commented on SOLR-7512:


Hmm, possibly. The zip in question is the one created as part of the existing 
MRIndexerTool in `SolrOutputFormat`. A quick look at it shows that it simply 
does substring manipulation to create the zip entries, which seems a bit 
questionable. At any rate, the hadoop `FileUtil.unZip` unpacks it with no issues.
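
For illustration only -- not the actual `SolrOutputFormat` code, and the class 
and method names below are made up -- the relative-path approach would look 
something like this:

{code:java}
// Illustrative sketch: derive each zip entry name by relativizing the file
// against the root directory instead of doing substring arithmetic.
import java.io.IOException;
import java.io.OutputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;
import java.util.zip.ZipEntry;
import java.util.zip.ZipOutputStream;

public class ZipDirSketch {
  public static void zipDir(Path root, OutputStream out) throws IOException {
    List<Path> files;
    try (Stream<Path> walk = Files.walk(root)) {
      files = walk.filter(Files::isRegularFile).collect(Collectors.toList());
    }
    try (ZipOutputStream zos = new ZipOutputStream(out)) {
      for (Path file : files) {
        // Entry name is the path relative to the root, with '/' separators.
        String entryName = root.relativize(file).toString().replace('\\', '/');
        zos.putNextEntry(new ZipEntry(entryName));
        Files.copy(file, zos);
        zos.closeEntry();
      }
    }
  }
}
{code}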

 SolrOutputFormat creates an invalid solr.xml in the solr home zip for 
 MapReduceIndexerTool
 --

 Key: SOLR-7512
 URL: https://issues.apache.org/jira/browse/SOLR-7512
 Project: Solr
  Issue Type: Bug
  Components: contrib - MapReduce
Affects Versions: 5.1
Reporter: Adam McElwee
Priority: Blocker
 Attachments: SOLR-7512.patch, SOLR-7512.patch


 Sometime after Solr 4.9, the `MapReduceIndexerTool` got busted because 
 invalid `solr.xml` contents were being written to the solr home dir zip. My 
 guess is that a 5.0 change made the invalid file start to matter. 
 The error manifests as:
 {code:java}
 Error: java.lang.IllegalStateException: Failed to initialize record writer 
 for org.apache.solr.hadoop.MapReduceIndexerTool/SolrMapper, 
 attempt_1430953999892_0012_r_01_1
 at 
 org.apache.solr.hadoop.SolrRecordWriter.init(SolrRecordWriter.java:126)
 at 
 org.apache.solr.hadoop.SolrOutputFormat.getRecordWriter(SolrOutputFormat.java:163)
 at 
 org.apache.hadoop.mapred.ReduceTask$NewTrackingRecordWriter.init(ReduceTask.java:569)
 at 
 org.apache.hadoop.mapred.ReduceTask.runNewReducer(ReduceTask.java:643)
 at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:394)
 at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:175)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:415)
 at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
 at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:170)
 Caused by: org.apache.solr.common.SolrException: 
 org.xml.sax.SAXParseException; Premature end of file.
 at org.apache.solr.core.Config.init(Config.java:156)
 at 
 org.apache.solr.core.SolrXmlConfig.fromInputStream(SolrXmlConfig.java:127)
 at org.apache.solr.core.SolrXmlConfig.fromFile(SolrXmlConfig.java:110)
 at 
 org.apache.solr.core.SolrXmlConfig.fromSolrHome(SolrXmlConfig.java:138)
 at org.apache.solr.core.CoreContainer.init(CoreContainer.java:142)
 at 
 org.apache.solr.hadoop.SolrRecordWriter.createEmbeddedSolrServer(SolrRecordWriter.java:162)
 at 
 org.apache.solr.hadoop.SolrRecordWriter.init(SolrRecordWriter.java:119)
 ... 9 more
 Caused by: org.xml.sax.SAXParseException; Premature end of file.
 at 
 org.apache.xerces.util.ErrorHandlerWrapper.createSAXParseException(Unknown 
 Source)
 at org.apache.xerces.util.ErrorHandlerWrapper.fatalError(Unknown 
 Source)
 at org.apache.xerces.impl.XMLErrorReporter.reportError(Unknown Source)
 at org.apache.xerces.impl.XMLErrorReporter.reportError(Unknown Source)
 at org.apache.xerces.impl.XMLErrorReporter.reportError(Unknown Source)
 at 
 org.apache.xerces.impl.XMLVersionDetector.determineDocVersion(Unknown Source)
 at org.apache.xerces.parsers.XML11Configuration.parse(Unknown Source)
 at org.apache.xerces.parsers.XML11Configuration.parse(Unknown Source)
 at org.apache.xerces.parsers.XMLParser.parse(Unknown Source)
 at org.apache.xerces.parsers.DOMParser.parse(Unknown Source)
 at org.apache.xerces.jaxp.DocumentBuilderImpl.parse(Unknown Source)
 at org.apache.solr.core.Config.init(Config.java:145)
 ... 15 more
 {code}
 The last version in which I successfully used `MapReduceIndexerTool` was 4.9, 
 and I verified that this patch resolves the issue for me (testing on 5.1). I 
 spent a couple hours trying to write a simple test case to exhibit the error, 
 but I haven't quite figured out how to deal with the 
 {noformat}java.security.AccessControlException: java.io.FilePermission 
 ...{noformat} errors. 
 Pull request for bugfix [here|https://github.com/apache/lucene-solr/pull/147]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-7608) Json API: cannot get metric for field with spaces.

2015-05-29 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7608?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson resolved SOLR-7608.
--
Resolution: Invalid

Please raise questions like this on the user's list first, _then_ raise a 
JIRA if it's really a bug.

In this case, spaces in field names are not supported. They _may_ work in some 
situations, but field names should generally follow Java naming conventions.



 Json API: cannot get metric for field with spaces.
 --

 Key: SOLR-7608
 URL: https://issues.apache.org/jira/browse/SOLR-7608
 Project: Solr
  Issue Type: Bug
  Components: query parsers
Affects Versions: 5.2
Reporter: Iana Bondarska
 Fix For: 5.2


 There is a numeric field with spaces in its name in the schema. When I try to 
 calculate any metric for it, I get an error about an unrecognized field. There 
 seems to be no way to escape spaces in field names. The same query works fine 
 for field names without spaces.
 Example JSON query: 
 {limit:0,offset:0,filter:[],facet:{facet:{facet:{actual_sales_sum:sum(Actual
  Sales)},limit:0,field:City,type:terms}}}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-6371) Improve Spans payload collection

2015-05-29 Thread Alan Woodward (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6371?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Woodward updated LUCENE-6371:
--
Attachment: LUCENE-6371.patch

Patch updated to trunk.

 Improve Spans payload collection
 

 Key: LUCENE-6371
 URL: https://issues.apache.org/jira/browse/LUCENE-6371
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Paul Elschot
Assignee: Alan Woodward
Priority: Minor
 Fix For: Trunk, 5.3

 Attachments: LUCENE-6371.patch, LUCENE-6371.patch, LUCENE-6371.patch, 
 LUCENE-6371.patch, LUCENE-6371.patch


 Spin off from LUCENE-6308, see the comments there from around 23 March 2015.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6508) Simplify Directory/lock api

2015-05-29 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6508?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14565144#comment-14565144
 ] 

ASF subversion and git services commented on LUCENE-6508:
-

Commit 1682525 from [~rcmuir] in branch 'dev/branches/lucene6508'
[ https://svn.apache.org/r1682525 ]

LUCENE-6508: add back LockObtainFailedException for SingleInstance too

 Simplify Directory/lock api
 ---

 Key: LUCENE-6508
 URL: https://issues.apache.org/jira/browse/LUCENE-6508
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Robert Muir
Assignee: Uwe Schindler
 Attachments: LUCENE-6508-deadcode1.patch, LUCENE-6508.patch, 
 LUCENE-6508.patch


 See LUCENE-6507 for some background. In general it would be great if you can 
 just acquire an immutable lock (or you get a failure) and then you close that 
 to release it.
 Today the API might be too much for what is needed by IW.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6508) Simplify Directory/lock api

2015-05-29 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6508?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14565158#comment-14565158
 ] 

ASF subversion and git services commented on LUCENE-6508:
-

Commit 1682526 from [~thetaphi] in branch 'dev/branches/lucene6508'
[ https://svn.apache.org/r1682526 ]

LUCENE-6508: Make the lock stress tester use new Exception; add 
Windows-specific Exception to SimpleFSLockFactory

 Simplify Directory/lock api
 ---

 Key: LUCENE-6508
 URL: https://issues.apache.org/jira/browse/LUCENE-6508
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Robert Muir
Assignee: Uwe Schindler
 Attachments: LUCENE-6508-deadcode1.patch, LUCENE-6508.patch, 
 LUCENE-6508.patch


 See LUCENE-6507 for some background. In general it would be great if you can 
 just acquire an immutable lock (or you get a failure) and then you close that 
 to release it.
 Today the API might be too much for what is needed by IW.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7603) Scary non reproducible failure from UpdateRequestProcessorFactoryTest.testUpdateDistribChainSkipping

2015-05-29 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7603?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14565610#comment-14565610
 ] 

ASF subversion and git services commented on SOLR-7603:
---

Commit 1682564 from hoss...@apache.org in branch 'dev/trunk'
[ https://svn.apache.org/r1682564 ]

SOLR-7603: more detail in asserts, and more asserts on the initial chain 
(before looking at the distributed version) to try and figure out WTF is going 
on here

 Scary non reproducible failure from 
 UpdateRequestProcessorFactoryTest.testUpdateDistribChainSkipping
 

 Key: SOLR-7603
 URL: https://issues.apache.org/jira/browse/SOLR-7603
 Project: Solr
  Issue Type: Bug
Reporter: Hoss Man
 Attachments: SOLR-7603.consoleText.txt


 jenkins nightly hit a very inexplicable error today...
 {noformat}
 Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-5.x/860/
 At revision 1682097
 Updating http://svn.apache.org/repos/asf/lucene/dev/branches/branch_5x at 
 revision '2015-05-27T14:50:50.016 -0400'
 [java-info] java version 1.7.0_72
 [java-info] Java(TM) SE Runtime Environment (1.7.0_72-b14, Oracle Corporation)
 [java-info] Java HotSpot(TM) 64-Bit Server VM (24.72-b04, Oracle Corporation)
 {noformat}
 {noformat}
   [junit4]   2 NOTE: reproduce with: ant test  
 -Dtestcase=UpdateRequestProcessorFactoryTest
 -Dtests.method=testUpdateDistribChainSkipping -Dtests.seed=4ECABCCFD159BE21 
 -Dtests.multiplier=2
 -Dtests.nightly=true -Dtests.slow=true 
 -Dtests.linedocsfile=/x1/jenkins/lucene-data/enwiki.random.lines.txt
 -Dtests.locale=mt_MT -Dtests.timezone=Etc/GMT0 -Dtests.asserts=true 
 -Dtests.file.encoding=ISO-8859-1
[junit4] FAILURE 0.01s J0 | 
 UpdateRequestProcessorFactoryTest.testUpdateDistribChainSkipping 
[junit4] Throwable #1: java.lang.AssertionError
[junit4]at 
 __randomizedtesting.SeedInfo.seed([4ECABCCFD159BE21:3F2E4219A7B299ED]:0)
[junit4]at
 org.apache.solr.update.processor.UpdateRequestProcessorFactoryTest.testUpdateDistribChainSkipping(UpdateRequestProcessorFactoryTest.java:111)
[junit4]at java.lang.Thread.run(Thread.java:745)
 {noformat}
 ...the line in question is asserting that, when executing a distributed update 
 (ie: forwarded from another node), the LogUpdateProcessor is still part of 
 the chain because it's got the RunAlways annotation indicating it 
 should always be included in the chain (everything before the 
 DistribUpdateProcessor is normally skipped).
 There's really no explanation for why the LogUpdateProcessor wouldn't be 
 found other than a code bug -- but in that case why doesn't the seed 
 reproduce reliably?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7603) Scary non reproducible failure from UpdateRequestProcessorFactoryTest.testUpdateDistribChainSkipping

2015-05-29 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7603?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14565617#comment-14565617
 ] 

Hoss Man commented on SOLR-7603:


TL;DR: still no clue, but added more logging/assert details to test

i've been beating my head against this and still can't make heads or tails of 
this failure -- the best guess i've got is that the logic for pruning the 
distributed chain down (but including any RunAlways processors) is actually 
working fine, but perhaps there is some failure in the initial construction of 
the chain in the first place? (SOLR-6892 recently modified the way the chains 
are initialized)

So i've added a hack to increase the log level for the duration of the test, as 
well as some more asserts regarding the state of the chain, and simplified the 
logic around how we assert properties of the distributed chain so it's a bit 
more straightforward and we can include a list of every processor in every assert.
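
Roughly the kind of assert i mean -- a hypothetical sketch, not the actual test 
code, and the helper name below is made up:

{code:java}
// Hypothetical sketch: an assert whose failure message lists every
// processor actually present in the chain being checked.
import java.util.List;
import org.apache.solr.update.processor.UpdateRequestProcessor;
import static org.junit.Assert.assertTrue;

public class ChainAsserts {
  static void assertChainContains(String chainName,
                                  List<UpdateRequestProcessor> procs,
                                  Class<?> expectedProc) {
    StringBuilder found = new StringBuilder();
    boolean present = false;
    for (UpdateRequestProcessor p : procs) {
      found.append(p).append(", ");
      if (expectedProc.isInstance(p)) {
        present = true;
      }
    }
    assertTrue(chainName + " expected " + expectedProc.getSimpleName()
        + " in chain due to @RunAlways, but not found: " + found, present);
  }
}
{code}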


 Scary non reproducible failure from 
 UpdateRequestProcessorFactoryTest.testUpdateDistribChainSkipping
 

 Key: SOLR-7603
 URL: https://issues.apache.org/jira/browse/SOLR-7603
 Project: Solr
  Issue Type: Bug
Reporter: Hoss Man
 Attachments: SOLR-7603.consoleText.txt


 jenkins nightly hit a very inexplicable error today...
 {noformat}
 Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-5.x/860/
 At revision 1682097
 Updating http://svn.apache.org/repos/asf/lucene/dev/branches/branch_5x at 
 revision '2015-05-27T14:50:50.016 -0400'
 [java-info] java version 1.7.0_72
 [java-info] Java(TM) SE Runtime Environment (1.7.0_72-b14, Oracle Corporation)
 [java-info] Java HotSpot(TM) 64-Bit Server VM (24.72-b04, Oracle Corporation)
 {noformat}
 {noformat}
   [junit4]   2 NOTE: reproduce with: ant test  
 -Dtestcase=UpdateRequestProcessorFactoryTest
 -Dtests.method=testUpdateDistribChainSkipping -Dtests.seed=4ECABCCFD159BE21 
 -Dtests.multiplier=2
 -Dtests.nightly=true -Dtests.slow=true 
 -Dtests.linedocsfile=/x1/jenkins/lucene-data/enwiki.random.lines.txt
 -Dtests.locale=mt_MT -Dtests.timezone=Etc/GMT0 -Dtests.asserts=true 
 -Dtests.file.encoding=ISO-8859-1
[junit4] FAILURE 0.01s J0 | 
 UpdateRequestProcessorFactoryTest.testUpdateDistribChainSkipping 
[junit4] Throwable #1: java.lang.AssertionError
[junit4]at 
 __randomizedtesting.SeedInfo.seed([4ECABCCFD159BE21:3F2E4219A7B299ED]:0)
[junit4]at
 org.apache.solr.update.processor.UpdateRequestProcessorFactoryTest.testUpdateDistribChainSkipping(UpdateRequestProcessorFactoryTest.java:111)
[junit4]at java.lang.Thread.run(Thread.java:745)
 {noformat}
 ...the line in question is asserting that, when executing a distributed update 
 (ie: forwarded from another node), the LogUpdateProcessor is still part of 
 the chain because it's got the RunAlways annotation indicating it 
 should always be included in the chain (everything before the 
 DistribUpdateProcessor is normally skipped).
 There's really no explanation for why the LogUpdateProcessor wouldn't be 
 found other than a code bug -- but in that case why doesn't the seed 
 reproduce reliably?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7408) Let SolrCore be the only thing which registers/unregisters a config directory listener

2015-05-29 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7408?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14565736#comment-14565736
 ] 

ASF subversion and git services commented on SOLR-7408:
---

Commit 1682570 from hoss...@apache.org in branch 'dev/trunk'
[ https://svn.apache.org/r1682570 ]

SOLR-7603: more test tweaks to protect ourselves from unexpected log levels in 
tests like the one introduced by SOLR-7408

 Let SolrCore be the only thing which registers/unregisters a config directory 
 listener
 --

 Key: SOLR-7408
 URL: https://issues.apache.org/jira/browse/SOLR-7408
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Reporter: Shai Erera
Assignee: Shai Erera
 Fix For: Trunk, 5.2

 Attachments: SOLR-7408.patch, SOLR-7408.patch, SOLR-7408.patch, 
 SOLR-7408.patch, SOLR-7408.patch, SOLR-7408.patch, SOLR-7408.patch


 As reported here: http://markmail.org/message/ynkm2axkdprppgef, there is a 
 race condition which results in an exception when creating multiple 
 collections over the same config set. I was able to reproduce it in a test, 
 although I am only able to reproduce if I put break points and manually 
 simulate the problematic context switches.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7603) Scary non reproducible failure from UpdateRequestProcessorFactoryTest.testUpdateDistribChainSkipping

2015-05-29 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7603?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14565735#comment-14565735
 ] 

ASF subversion and git services commented on SOLR-7603:
---

Commit 1682570 from hoss...@apache.org in branch 'dev/trunk'
[ https://svn.apache.org/r1682570 ]

SOLR-7603: more test tweaks to protect ourselves from unexpected log levels in 
tests like the one introduced by SOLR-7408

 Scary non reproducible failure from 
 UpdateRequestProcessorFactoryTest.testUpdateDistribChainSkipping
 

 Key: SOLR-7603
 URL: https://issues.apache.org/jira/browse/SOLR-7603
 Project: Solr
  Issue Type: Bug
Reporter: Hoss Man
 Attachments: SOLR-7603.consoleText.txt


 jenkins nightly hit a very inexplicable error today...
 {noformat}
 Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-5.x/860/
 At revision 1682097
 Updating http://svn.apache.org/repos/asf/lucene/dev/branches/branch_5x at 
 revision '2015-05-27T14:50:50.016 -0400'
 [java-info] java version 1.7.0_72
 [java-info] Java(TM) SE Runtime Environment (1.7.0_72-b14, Oracle Corporation)
 [java-info] Java HotSpot(TM) 64-Bit Server VM (24.72-b04, Oracle Corporation)
 {noformat}
 {noformat}
   [junit4]   2 NOTE: reproduce with: ant test  
 -Dtestcase=UpdateRequestProcessorFactoryTest
 -Dtests.method=testUpdateDistribChainSkipping -Dtests.seed=4ECABCCFD159BE21 
 -Dtests.multiplier=2
 -Dtests.nightly=true -Dtests.slow=true 
 -Dtests.linedocsfile=/x1/jenkins/lucene-data/enwiki.random.lines.txt
 -Dtests.locale=mt_MT -Dtests.timezone=Etc/GMT0 -Dtests.asserts=true 
 -Dtests.file.encoding=ISO-8859-1
[junit4] FAILURE 0.01s J0 | 
 UpdateRequestProcessorFactoryTest.testUpdateDistribChainSkipping 
[junit4] Throwable #1: java.lang.AssertionError
[junit4]at 
 __randomizedtesting.SeedInfo.seed([4ECABCCFD159BE21:3F2E4219A7B299ED]:0)
[junit4]at
 org.apache.solr.update.processor.UpdateRequestProcessorFactoryTest.testUpdateDistribChainSkipping(UpdateRequestProcessorFactoryTest.java:111)
[junit4]at java.lang.Thread.run(Thread.java:745)
 {noformat}
 ...the line in question is asserting that, when executing a distributed update 
 (ie: forwarded from another node), the LogUpdateProcessor is still part of 
 the chain because it's got the RunAlways annotation indicating it 
 should always be included in the chain (everything before the 
 DistribUpdateProcessor is normally skipped).
 There's really no explanation for why the LogUpdateProcessor wouldn't be 
 found other than a code bug -- but in that case why doesn't the seed 
 reproduce reliably?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-7603) Scary non reproducible failure from UpdateRequestProcessorFactoryTest.testUpdateDistribChainSkipping

2015-05-29 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7603?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14565170#comment-14565170
 ] 

Hoss Man edited comment on SOLR-7603 at 5/29/15 11:12 PM:
--

This happened again last night with the new test assertions providing a bit 
more detail...

{noformat}
Date: Fri, 29 May 2015 09:42:29 + (UTC)
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-trunk/696/
Updating http://svn.apache.org/repos/asf/lucene/dev/trunk at revision 
'2015-05-29T03:12:26.055 -0400'
At revision 1682377

-print-java-info:
[java-info] java version 1.8.0_25
[java-info] Java(TM) SE Runtime Environment (1.8.0_25-b17, Oracle Corporation)
[java-info] Java HotSpot(TM) 64-Bit Server VM (25.25-b02, Oracle Corporation)
[java-info] Test args: []

java.lang.AssertionError: distrib-chain-explicit expected LogUpdateProcessor in 
chain due to @RunAllways, but not
found: org.apache.solr.update.processor.DistributedUpdateProcessor@638c6e19,
org.apache.solr.update.processor.RemoveBlankFieldUpdateProcessorFactory$1@25bbdfcb,
org.apache.solr.update.processor.RunUpdateProcessor@473fea05, 
at 
__randomizedtesting.SeedInfo.seed([3AF6852C6379681:724B9684B0DCB14D]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at
org.apache.solr.update.processor.UpdateRequestProcessorFactoryTest.testUpdateDistribChainSkipping(UpdateRequestProcessorFactoryTest.java:127)

   [junit4]   2 NOTE: reproduce with: ant test  
-Dtestcase=UpdateRequestProcessorFactoryTest 
-Dtests.method=testUpdateDistribChainSkipping -Dtests.seed=3AF6852C6379681 
-Dtests.multiplier=2 -Dtests.nightly=true -Dtests.slow=true 
-Dtests.linedocsfile=/x1/jenkins/lucene-data/enwiki.random.lines.txt 
-Dtests.locale=ru -Dtests.timezone=Asia/Baghdad -Dtests.asserts=true 
-Dtests.file.encoding=ISO-8859-1
   [junit4] FAILURE 0.03s J2 | 
UpdateRequestProcessorFactoryTest.testUpdateDistribChainSkipping 
   [junit4] Throwable #1: java.lang.AssertionError: distrib-chain-explicit 
expected LogUpdateProcessor in chain due to @RunAllways, but not found: 
org.apache.solr.update.processor.DistributedUpdateProcessor@638c6e19, 
org.apache.solr.update.processor.RemoveBlankFieldUpdateProcessorFactory$1@25bbdfcb,
 org.apache.solr.update.processor.RunUpdateProcessor@473fea05, 
   [junit4]at 
__randomizedtesting.SeedInfo.seed([3AF6852C6379681:724B9684B0DCB14D]:0)
   [junit4]at 
org.apache.solr.update.processor.UpdateRequestProcessorFactoryTest.testUpdateDistribChainSkipping(UpdateRequestProcessorFactoryTest.java:127)
   [junit4]at java.lang.Thread.run(Thread.java:745)
{noformat}

(*EDIT*: updated with reproduce line now that builds.apache.org is back up)

I really can't make heads or tails of why LogUpdateProcessor wouldn't be in that 
chain.



FWIW: the first instance i can find of a failure from this assert is...

{noformat}
Date: Sat, 2 May 2015 17:14:16 + (UTC)
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-5.x/835/

   [junit4]   2 NOTE: reproduce with: ant test  
-Dtestcase=UpdateRequestProcessorFactoryTest 
-Dtests.method=testUpdateDistribChainSkipping -Dtests.seed=D82449739AD0D42D 
-Dtests.multiplier=2 -Dtests.nightly=true -Dtests.slow=true 
-Dtests.linedocsfile=/home/jenkins/lucene-data/enwiki.random.lines.txt 
-Dtests.locale=nl -Dtests.timezone=America/Denver -Dtests.asserts=true 
-Dtests.file.encoding=US-ASCII
   [junit4] FAILURE 0.01s J0 | 
UpdateRequestProcessorFactoryTest.testUpdateDistribChainSkipping 
   [junit4] Throwable #1: java.lang.AssertionError
   [junit4]at 
__randomizedtesting.SeedInfo.seed([D82449739AD0D42D:A9C0B7A5EC3BF3E1]:0)
   [junit4]at 
org.apache.solr.update.processor.UpdateRequestProcessorFactoryTest.testUpdateDistribChainSkipping(UpdateRequestProcessorFactoryTest.java:111)
   [junit4]at java.lang.Thread.run(Thread.java:745)
{noformat}


was (Author: hossman):
This happened again last night with the new test assertions providing a bit 
more detail...

{noformat}
Date: Fri, 29 May 2015 09:42:29 + (UTC)
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-trunk/696/

java.lang.AssertionError: distrib-chain-explicit expected LogUpdateProcessor in 
chain due to @RunAllways, but not
found: org.apache.solr.update.processor.DistributedUpdateProcessor@638c6e19,
org.apache.solr.update.processor.RemoveBlankFieldUpdateProcessorFactory$1@25bbdfcb,
org.apache.solr.update.processor.RunUpdateProcessor@473fea05, 
at 
__randomizedtesting.SeedInfo.seed([3AF6852C6379681:724B9684B0DCB14D]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at
org.apache.solr.update.processor.UpdateRequestProcessorFactoryTest.testUpdateDistribChainSkipping(UpdateRequestProcessorFactoryTest.java:127)

{noformat}


[jira] [Created] (SOLR-7611) TestSearcherReuse failure

2015-05-29 Thread Steve Rowe (JIRA)
Steve Rowe created SOLR-7611:


 Summary: TestSearcherReuse failure
 Key: SOLR-7611
 URL: https://issues.apache.org/jira/browse/SOLR-7611
 Project: Solr
  Issue Type: Bug
Affects Versions: 5.2
Reporter: Steve Rowe


{noformat}
   [junit4] FAILURE 0.94s | TestSearcherReuse.test 
   [junit4] Throwable #1: java.lang.AssertionError: expected 
same:Searcher@66681f2[collection1] 
main{ExitableDirectoryReader(UninvertingDirectoryReader(Uninverting(_0(5.2.0):C3)
 Uninverting(_2(5.2.0):c2)))} was not:Searcher@5d94043f[collection1] 
main{ExitableDirectoryReader(UninvertingDirectoryReader(Uninverting(_0(5.2.0):C3)
 Uninverting(_2(5.2.0):c2)))}
   [junit4]at 
__randomizedtesting.SeedInfo.seed([F1A11DF972B907D6:79F52223DC456A2E]:0)
   [junit4]at 
org.apache.solr.search.TestSearcherReuse.assertSearcherHasNotChanged(TestSearcherReuse.java:247)
   [junit4]at 
org.apache.solr.search.TestSearcherReuse.test(TestSearcherReuse.java:104)
   [junit4]at java.lang.Thread.run(Thread.java:745)
{noformat}

Reproduces for me on the 5.2 release branch with the following - note that both 
{{-Dtests.multiplier=2}} and {{-Dtests.nightly=true}} are required to reproduce:

{noformat}
ant test  -Dtestcase=TestSearcherReuse -Dtests.seed=F1A11DF972B907D6 
-Dtests.multiplier=2 -Dtests.nightly=true
{noformat}

Full log:

{noformat}
   [junit4] JUnit4 says hallo! Master seed: F1A11DF972B907D6
   [junit4] Executing 1 suite with 1 JVM.
   [junit4] 
   [junit4] Started J0 PID(776@smb.local).
   [junit4] Suite: org.apache.solr.search.TestSearcherReuse
   [junit4]   2 log4j:WARN No such property [conversionPattern] in 
org.apache.solr.util.SolrLogLayout.
   [junit4]   2 Creating dataDir: 
/Users/sarowe/svn/lucene/dev/branches/lucene_solr_5_2/solr/build/solr-core/test/J0/temp/solr.search.TestSearcherReuse
 F1A11DF972B907D6-002/init-core-data-001
   [junit4]   2 889 T11 oas.SolrTestCaseJ4.buildSSLConfig Randomized ssl 
(false) and clientAuth (false)
   [junit4]   2 959 T11 oas.SolrTestCaseJ4.initCore initCore
   [junit4]   2 1093 T11 oasc.SolrResourceLoader.init new SolrResourceLoader 
for directory: 
'/Users/sarowe/svn/lucene/dev/branches/lucene_solr_5_2/solr/build/solr-core/test/J0/temp/solr.search.TestSearcherReuse
 F1A11DF972B907D6-002/tempDir-001/collection1/'
   [junit4]   2 1390 T11 oasc.SolrConfig.refreshRequestParams current version 
of requestparams : -1
   [junit4]   2 1449 T11 oasc.SolrConfig.init Using Lucene MatchVersion: 
5.2.0
   [junit4]   2 1551 T11 oasc.SolrConfig.init Loaded SolrConfig: 
solrconfig-managed-schema.xml
   [junit4]   2 1563 T11 oass.ManagedIndexSchemaFactory.readSchemaLocally The 
schema is configured as managed, but managed schema resource managed-schema not 
found - loading non-managed schema schema-id-and-version-fields-only.xml instead
   [junit4]   2 1580 T11 oass.IndexSchema.readSchema Reading Solr Schema from 
/Users/sarowe/svn/lucene/dev/branches/lucene_solr_5_2/solr/build/solr-core/test/J0/temp/solr.search.TestSearcherReuse
 
F1A11DF972B907D6-002/tempDir-001/collection1/conf/schema-id-and-version-fields-only.xml
   [junit4]   2 1594 T11 oass.IndexSchema.readSchema [null] Schema 
name=id-and-version-fields-only
   [junit4]   2 1676 T11 oass.IndexSchema.readSchema unique key field: id
   [junit4]   2 1706 T11 oass.ManagedIndexSchema.persistManagedSchema Upgraded 
to managed schema at 
/Users/sarowe/svn/lucene/dev/branches/lucene_solr_5_2/solr/build/solr-core/test/J0/temp/solr.search.TestSearcherReuse
 F1A11DF972B907D6-002/tempDir-001/collection1/conf/managed-schema
   [junit4]   2 1709 T11 oass.ManagedIndexSchemaFactory.upgradeToManagedSchema 
After upgrading to managed schema, renamed the non-managed schema 
/Users/sarowe/svn/lucene/dev/branches/lucene_solr_5_2/solr/build/solr-core/test/J0/temp/solr.search.TestSearcherReuse
 
F1A11DF972B907D6-002/tempDir-001/collection1/conf/schema-id-and-version-fields-only.xml
 to 
/Users/sarowe/svn/lucene/dev/branches/lucene_solr_5_2/solr/build/solr-core/test/J0/temp/solr.search.TestSearcherReuse
 
F1A11DF972B907D6-002/tempDir-001/collection1/conf/schema-id-and-version-fields-only.xml.bak
   [junit4]   2 1714 T11 oasc.SolrResourceLoader.locateSolrHome JNDI not 
configured for solr (NoInitialContextEx)
   [junit4]   2 1715 T11 oasc.SolrResourceLoader.locateSolrHome using system 
property solr.solr.home: 
/Users/sarowe/svn/lucene/dev/branches/lucene_solr_5_2/solr/build/solr-core/test/J0/temp/solr.search.TestSearcherReuse
 F1A11DF972B907D6-002/tempDir-001
   [junit4]   2 1715 T11 oasc.SolrResourceLoader.init new SolrResourceLoader 
for directory: 
'/Users/sarowe/svn/lucene/dev/branches/lucene_solr_5_2/solr/build/solr-core/test/J0/temp/solr.search.TestSearcherReuse
 F1A11DF972B907D6-002/tempDir-001/'
   [junit4]   2 1765 T11 oasc.CoreContainer.init New CoreContainer 731222945
   [junit4]   2 1767 T11 

Re: Moving to git?

2015-05-29 Thread Yonik Seeley
+1 to move to git!

-Yonik


On Fri, May 29, 2015 at 5:07 PM, Anshum Gupta ans...@anshumgupta.net wrote:
 I know this has come up a few times in the past but I wanted to bring this
 up again.

 The lucene-solr ASF git mirror has been behind by about a day. I was
 speaking with the infra people and they say that the size of the repo needs
 more and more RAM. Forcing a sync causes a fork bomb:

 Can't fork: Cannot allocate memory at /usr/share/perl5/Git.pm line 1517.

 They tried a few things, but it's almost certain that it needs even more RAM,
 which is still a band-aid as they'd soon need even more. Also, adding
 RAM involves downtime for git.a.o, which needs to be planned. As a stop-gap
 arrangement they attached a volume to the instance and are using it as swap to
 work around the "adding RAM requires a restart" issue.

 FAQ: How would the memory requirement change if we moved to git instead of
 mirroring?
 Answer: svn-to-git mirroring is a weird process and has quite the memory
 leak. Using git directly is much cleaner.

 I personally think git does make things easier to manage when you're working
 on multiple overlapping things and so we should re-evaluate moving to it. I
 would have been fine had the mirroring worked, as all I want is a way to be
 able to work on multiple (local) branches without having to create and
 maintain directories like: lucene-solr-trunk1, lucene-solr-trunk2, or
 SOLR-, etc.

 Opinions?


 --
 Anshum Gupta

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-trunk-Windows (64bit/jdk1.8.0_45) - Build # 4869 - Failure!

2015-05-29 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Windows/4869/
Java: 64bit/jdk1.8.0_45 -XX:+UseCompressedOops -XX:+UseG1GC

3 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.core.TestLazyCores

Error Message:
ERROR: SolrIndexSearcher opens=51 closes=50

Stack Trace:
java.lang.AssertionError: ERROR: SolrIndexSearcher opens=51 closes=50
at __randomizedtesting.SeedInfo.seed([2CC12E17CB5BF046]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.SolrTestCaseJ4.endTrackingSearchers(SolrTestCaseJ4.java:472)
at org.apache.solr.SolrTestCaseJ4.afterClass(SolrTestCaseJ4.java:232)
at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:799)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at java.lang.Thread.run(Thread.java:745)


FAILED:  junit.framework.TestSuite.org.apache.solr.core.TestLazyCores

Error Message:
1 thread leaked from SUITE scope at org.apache.solr.core.TestLazyCores: 1) 
Thread[id=2425, name=searcherExecutor-1614-thread-1, state=WAITING, 
group=TGRP-TestLazyCores] at sun.misc.Unsafe.park(Native Method)
 at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
 at 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) 
at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
at java.lang.Thread.run(Thread.java:745)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE 
scope at org.apache.solr.core.TestLazyCores: 
   1) Thread[id=2425, name=searcherExecutor-1614-thread-1, state=WAITING, 
group=TGRP-TestLazyCores]
at sun.misc.Unsafe.park(Native Method)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
at 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
at __randomizedtesting.SeedInfo.seed([2CC12E17CB5BF046]:0)


FAILED:  junit.framework.TestSuite.org.apache.solr.core.TestLazyCores

Error Message:
There are still zombie threads that couldn't be terminated:1) 
Thread[id=2425, 

Re: [JENKINS] Lucene-Solr-5.x-Linux (32bit/ibm-j9-jdk7) - Build # 12700 - Failure!

2015-05-29 Thread Steve Rowe
Uwe:

Error: JAVA_HOME is not defined correctly.
  We cannot execute /var/lib/jenkins/tools/java/32bit/ibm-j9-jdk7/bin/java

 On May 29, 2015, at 9:09 PM, Policeman Jenkins Server jenk...@thetaphi.de 
 wrote:
 
 Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/12700/
 Java: 32bit/ibm-j9-jdk7 
 -Xjit:exclude={org/apache/lucene/util/fst/FST.pack(IIF)Lorg/apache/lucene/util/fst/FST;}
 
 No tests ran.
 
 Build Log:
 [...truncated 309 lines...]
 ERROR: Publisher 'Publish JUnit test result report' failed: No test report 
 files were found. Configuration error?
 Email was triggered for: Failure - Any
 Sending email for trigger: Failure - Any
 
 
 
 -
 To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
 For additional commands, e-mail: dev-h...@lucene.apache.org


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7603) Scary non reproducible failure from UpdateRequestProcessorFactoryTest.testUpdateDistribChainSkipping

2015-05-29 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7603?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14565612#comment-14565612
 ] 

ASF subversion and git services commented on SOLR-7603:
---

Commit 1682565 from hoss...@apache.org in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1682565 ]

SOLR-7603: more detail in asserts, and more asserts on the initial chain 
(before looking at the distributed version) to try and figure out WTF is going 
on here (merge r1682564)

 Scary non reproducible failure from 
 UpdateRequestProcessorFactoryTest.testUpdateDistribChainSkipping
 

 Key: SOLR-7603
 URL: https://issues.apache.org/jira/browse/SOLR-7603
 Project: Solr
  Issue Type: Bug
Reporter: Hoss Man
 Attachments: SOLR-7603.consoleText.txt


 jenkins nightly hit a very inexplicable error today...
 {noformat}
 Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-5.x/860/
 At revision 1682097
 Updating http://svn.apache.org/repos/asf/lucene/dev/branches/branch_5x at 
 revision '2015-05-27T14:50:50.016 -0400'
 [java-info] java version 1.7.0_72
 [java-info] Java(TM) SE Runtime Environment (1.7.0_72-b14, Oracle Corporation)
 [java-info] Java HotSpot(TM) 64-Bit Server VM (24.72-b04, Oracle Corporation)
 {noformat}
 {noformat}
   [junit4]   2 NOTE: reproduce with: ant test  
 -Dtestcase=UpdateRequestProcessorFactoryTest
 -Dtests.method=testUpdateDistribChainSkipping -Dtests.seed=4ECABCCFD159BE21 
 -Dtests.multiplier=2
 -Dtests.nightly=true -Dtests.slow=true 
 -Dtests.linedocsfile=/x1/jenkins/lucene-data/enwiki.random.lines.txt
 -Dtests.locale=mt_MT -Dtests.timezone=Etc/GMT0 -Dtests.asserts=true 
 -Dtests.file.encoding=ISO-8859-1
[junit4] FAILURE 0.01s J0 | 
 UpdateRequestProcessorFactoryTest.testUpdateDistribChainSkipping 
[junit4] Throwable #1: java.lang.AssertionError
[junit4]at 
 __randomizedtesting.SeedInfo.seed([4ECABCCFD159BE21:3F2E4219A7B299ED]:0)
[junit4]at
 org.apache.solr.update.processor.UpdateRequestProcessorFactoryTest.testUpdateDistribChainSkipping(UpdateRequestProcessorFactoryTest.java:111)
[junit4]at java.lang.Thread.run(Thread.java:745)
 {noformat}
 ...the line in question is asserting that, when executing a distributed update 
 (ie: forwarded from another node), the LogUpdateProcessor is still part of 
 the chain because it's got the RunAlways annotation indicating it 
 should always be included in the chain (everything before the 
 DistribUpdateProcessor is normally skipped).
 There's really no explanation for why the LogUpdateProcessor wouldn't be 
 found other than a code bug -- but in that case why doesn't the seed 
 reproduce reliably?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7603) Scary non reproducible failure from UpdateRequestProcessorFactoryTest.testUpdateDistribChainSkipping

2015-05-29 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7603?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14565681#comment-14565681
 ] 

Hoss Man commented on SOLR-7603:


TL;DR: fairly certain this is a (nightly-only) test bug caused by SOLR-7408; 
working on a fix

On IRC Tim drew my attention to some behavior of the LogUpdateProcessorFactory 
that i had completely forgotten about...

{code}
  @Override
  public UpdateRequestProcessor getInstance(SolrQueryRequest req,
      SolrQueryResponse rsp, UpdateRequestProcessor next) {
    return LogUpdateProcessor.log.isInfoEnabled()
        ? new LogUpdateProcessor(req, rsp, this, next) : null;
  }
{code}

...in a nutshell: as an optimization, the factory doesn't produce a processor 
if it can tell from the current logging level that there is no point in using 
that processor.

Tim's theory was that some of the recent MDC/logging-related changes may be 
affecting the log level used in the nightly tests -- but i'm fairly certain the 
root cause is much more of a fluke...

SOLR-7408 introduced ConcurrentDeleteAndCreateCollectionTest in r1675274 on 
Apr 22 08:25:26 2015 UTC... this is an {{@Nightly}} test that has this bit of 
code in it...

{code}
Logger.getLogger("org.apache.solr").setLevel(Level.WARN);
{code}

Which means if this test runs before UpdateRequestProcessorFactoryTest in the 
same JVM, the log level won't be low enough for the LogUpdateProcessor to ever 
be created.
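
One way to keep that level from leaking -- a rough sketch assuming log4j and 
JUnit 4, not the actual fix being committed here -- would be for the nightly 
test to save and restore the level around the suite:

{code:java}
// Rough sketch (log4j + JUnit 4 assumed): remember the previous level and
// restore it after the suite, so WARN can't bleed into later tests that run
// in the same JVM.
import org.apache.log4j.Level;
import org.apache.log4j.Logger;
import org.junit.AfterClass;
import org.junit.BeforeClass;

public class QuietSolrLoggingExample {
  private static Level savedLevel;

  @BeforeClass
  public static void quietSolrLogging() {
    Logger solrLog = Logger.getLogger("org.apache.solr");
    savedLevel = solrLog.getLevel();  // may be null (level inherited from parent)
    solrLog.setLevel(Level.WARN);
  }

  // ... the suite's @Test methods would go here ...

  @AfterClass
  public static void restoreSolrLogging() {
    Logger.getLogger("org.apache.solr").setLevel(savedLevel);
  }
}
{code}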

I've confirmed that happened in both of the very recent failures...

https://builds.apache.org/job/Lucene-Solr-NightlyTests-trunk/696/consoleText
{noformat}
   [junit4]   2 NOTE: All tests run in this JVM: [NotRequiredUniqueKeyTest, 
ShardSplitTest, DeleteLastCustomShardedReplicaTest, RequestLoggingTest, 
TestReplicaProperties, TestCSVResponseWriter, TestShardHandlerFactory, 
QueryEqualityTest, TestAnalyzeInfixSuggestions, TestSystemIdResolver, 
InfoHandlerTest, TestLRUStatsCache, StatsComponentTest, 
CoreAdminRequestStatusTest, SoftAutoCommitTest, FacetPivotSmallTest, TestTrie, 
SimpleFacetsTest, TestExtendedDismaxParser, TestQueryUtils, TestDocSet, 
TestPerFieldSimilarity, SolrCoreCheckLockOnStartupTest, SearchHandlerTest, 
TestSchemaManager, DirectUpdateHandlerTest, HdfsChaosMonkeySafeLeaderTest, 
SpellCheckCollatorTest, SolrInfoMBeanTest, TestConfigOverlay, 
DistributedQueueTest, TestXIncludeConfig, TestSolrJ, OutputWriterTest, 
HdfsLockFactoryTest, TestLeaderElectionZkExpiry, TestBM25SimilarityFactory, 
AddSchemaFieldsUpdateProcessorFactoryTest, PathHierarchyTokenizerFactoryTest, 
DistributedFacetPivotSmallAdvancedTest, BinaryUpdateRequestHandlerTest, 
SOLR749Test, TestComplexPhraseQParserPlugin, TestRandomMergePolicy, 
CachingDirectoryFactoryTest, LeaderElectionTest, HdfsNNFailoverTest, 
TestManagedSchemaDynamicFieldResource, OverseerCollectionProcessorTest, 
TestQuerySenderNoQuery, SortByFunctionTest, TestNRTOpen, AddBlockUpdateTest, 
TestBinaryResponseWriter, AutoCommitTest, CloudExitableDirectoryReaderTest, 
TestExactSharedStatsCache, HighlighterMaxOffsetTest, 
TestDefaultSimilarityFactory, TestSchemaNameResource, 
TestAuthorizationFramework, TestSchemaSimilarityResource, 
TestCollationFieldDocValues, TestZkChroot, 
ConcurrentDeleteAndCreateCollectionTest, BufferStoreTest, TestRestoreCore, 
QueryResultKeyTest, TermVectorComponentDistributedTest, 
TestFieldTypeCollectionResource, TestSolrConfigHandlerConcurrent, 
MoreLikeThisHandlerTest, TestChildDocTransformer, CursorMarkTest, 
TestSimpleQParserPlugin, XsltUpdateRequestHandlerTest, TestSurroundQueryParser, 
OverseerTest, FullSolrCloudDistribCmdsTest, ZkSolrClientTest, 
TestRandomDVFaceting, ZkCLITest, TestDistributedSearch, 
TestDistributedGrouping, TestRecovery, TestRealTimeGet, TestStressReorder, 
TestJoin, TestReload, HardAutoCommitTest, TestRangeQuery, TestGroupingSearch, 
SolrCmdDistributorTest, PeerSyncTest, BadIndexSchemaTest, TestSort, 
TestFiltering, BasicFunctionalityTest, TestIndexSearcher, 
ShowFileRequestHandlerTest, CurrencyFieldOpenExchangeTest, 
DistributedQueryElevationComponentTest, SolrIndexSplitterTest, 
AnalysisAfterCoreReloadTest, SignatureUpdateProcessorFactoryTest, 
SuggesterFSTTest, CoreAdminHandlerTest, SolrRequestParserTest, 
TestFoldingMultitermQuery, DocValuesTest, SuggesterTest, SpatialFilterTest, 
PolyFieldTest, NoCacheHeaderTest, WordBreakSolrSpellCheckerTest, 
SchemaVersionSpecificBehaviorTest, TestPseudoReturnFields, 
FieldMutatingUpdateProcessorTest, TestAtomicUpdateErrorCases, 
DirectUpdateHandlerOptimizeTest, TestRemoteStreaming, TestSolrDeletionPolicy1, 
StandardRequestHandlerTest, TestWriterPerf, DirectSolrSpellCheckerTest, 
TestReversedWildcardFilterFactory, DocumentAnalysisRequestHandlerTest, 
TestQueryTypes, PrimitiveFieldTypeTest, TestOmitPositions, 
FileBasedSpellCheckerTest, XmlUpdateRequestHandlerTest, DocumentBuilderTest, 
TestValueSourceCache, TestIndexingPerformance, RequiredFieldsTest, 
FieldAnalysisRequestHandlerTest, 

[jira] [Commented] (SOLR-7611) TestSearcherReuse failure

2015-05-29 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7611?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14565745#comment-14565745
 ] 

Steve Rowe commented on SOLR-7611:
--

Another seed that reproduces on the 5.2 release branch:

{noformat}
ant test  -Dtestcase=TestSearcherReuse -Dtests.seed=64C8ED4E4C36262F 
-Dtests.multiplier=2 -Dtests.nightly=true
{noformat}

log:

{noformat}
  [junit4] Suite: org.apache.solr.search.TestSearcherReuse
  [junit4]   2 Creating dataDir: 
/var/lib/jenkins/jobs/Solr-core-NightlyTests-5.2-Java7/workspace/solr/build/solr-core/test/J10/temp/solr.search.TestSearcherReuse
 64C8ED4E4C36262F-001/init-core-data-001
  [junit4]   2 247111 T5288 oas.SolrTestCaseJ4.buildSSLConfig Randomized ssl 
(false) and clientAuth (false)
  [junit4]   2 247112 T5288 oas.SolrTestCaseJ4.initCore initCore
  [junit4]   2 247112 T5288 oasc.SolrResourceLoader.init new 
SolrResourceLoader for directory: 
'/var/lib/jenkins/jobs/Solr-core-NightlyTests-5.2-Java7/workspace/solr/build/solr-core/test/J10/temp/solr.search.TestSearcherReuse
 64C8ED4E4C36262F-001/tempDir-001/collection1/'
  [junit4]   2 247128 T5288 oasc.SolrConfig.refreshRequestParams current 
version of requestparams : -1
  [junit4]   2 247135 T5288 oasc.SolrConfig.init Using Lucene MatchVersion: 
5.2.0
  [junit4]   2 247157 T5288 oasc.SolrConfig.init Loaded SolrConfig: 
solrconfig-managed-schema.xml
  [junit4]   2 247161 T5288 oass.ManagedIndexSchemaFactory.readSchemaLocally 
The schema is configured as managed, but managed schema resource managed-schema 
not found - loading non-managed schema schema-id-and-version-fields-only.xml 
instead
  [junit4]   2 247164 T5288 oass.IndexSchema.readSchema Reading Solr Schema 
from 
/var/lib/jenkins/jobs/Solr-core-NightlyTests-5.2-Java7/workspace/solr/build/solr-core/test/J10/temp/solr.search.TestSearcherReuse
 
64C8ED4E4C36262F-001/tempDir-001/collection1/conf/schema-id-and-version-fields-only.xml
  [junit4]   2 247168 T5288 oass.IndexSchema.readSchema [null] Schema 
name=id-and-version-fields-only
  [junit4]   2 247173 T5288 oass.IndexSchema.readSchema unique key field: id
  [junit4]   2 247176 T5288 oass.ManagedIndexSchema.persistManagedSchema 
Upgraded to managed schema at 
/var/lib/jenkins/jobs/Solr-core-NightlyTests-5.2-Java7/workspace/solr/build/solr-core/test/J10/temp/solr.search.TestSearcherReuse
 64C8ED4E4C36262F-001/tempDir-001/collection1/conf/managed-schema
  [junit4]   2 247176 T5288 
oass.ManagedIndexSchemaFactory.upgradeToManagedSchema After upgrading to 
managed schema, renamed the non-managed schema 
/var/lib/jenkins/jobs/Solr-core-NightlyTests-5.2-Java7/workspace/solr/build/solr-core/test/J10/temp/solr.search.TestSearcherReuse
 
64C8ED4E4C36262F-001/tempDir-001/collection1/conf/schema-id-and-version-fields-only.xml
 to 
/var/lib/jenkins/jobs/Solr-core-NightlyTests-5.2-Java7/workspace/solr/build/solr-core/test/J10/temp/solr.search.TestSearcherReuse
 
64C8ED4E4C36262F-001/tempDir-001/collection1/conf/schema-id-and-version-fields-only.xml.bak
  [junit4]   2 247176 T5288 oasc.SolrResourceLoader.locateSolrHome JNDI not 
configured for solr (NoInitialContextEx)
  [junit4]   2 247176 T5288 oasc.SolrResourceLoader.locateSolrHome using 
system property solr.solr.home: 
/var/lib/jenkins/jobs/Solr-core-NightlyTests-5.2-Java7/workspace/solr/build/solr-core/test/J10/temp/solr.search.TestSearcherReuse
 64C8ED4E4C36262F-001/tempDir-001
  [junit4]   2 247177 T5288 oasc.SolrResourceLoader.init new 
SolrResourceLoader for directory: 
'/var/lib/jenkins/jobs/Solr-core-NightlyTests-5.2-Java7/workspace/solr/build/solr-core/test/J10/temp/solr.search.TestSearcherReuse
 64C8ED4E4C36262F-001/tempDir-001/'
  [junit4]   2 247196 T5288 oasc.CoreContainer.init New CoreContainer 
365438927
  [junit4]   2 247196 T5288 oasc.CoreContainer.load Loading cores into 
CoreContainer 
[instanceDir=/var/lib/jenkins/jobs/Solr-core-NightlyTests-5.2-Java7/workspace/solr/build/solr-core/test/J10/temp/solr.search.TestSearcherReuse
 64C8ED4E4C36262F-001/tempDir-001/]
  [junit4]   2 247196 T5288 oasc.CoreContainer.load loading shared library: 
/var/lib/jenkins/jobs/Solr-core-NightlyTests-5.2-Java7/workspace/solr/build/solr-core/test/J10/temp/solr.search.TestSearcherReuse
 64C8ED4E4C36262F-001/tempDir-001/lib
  [junit4]   2 247197 T5288 oasc.SolrResourceLoader.addToClassLoader WARN 
Can't find (or read) directory to add to classloader: lib (resolved as: 
/var/lib/jenkins/jobs/Solr-core-NightlyTests-5.2-Java7/workspace/solr/build/solr-core/test/J10/temp/solr.search.TestSearcherReuse
 64C8ED4E4C36262F-001/tempDir-001/lib).
  [junit4]   2 247207 T5288 oashc.HttpShardHandlerFactory.init created with 
socketTimeout : 60,connTimeout : 6,maxConnectionsPerHost : 
20,maxConnections : 1,corePoolSize : 0,maximumPoolSize : 
2147483647,maxThreadIdleTime : 5,sizeOfQueue : -1,fairnessPolicy : 
false,useRetries : false,
  [junit4]   2 247209 T5288 

Re: [VOTE] 5.2.0 RC2

2015-05-29 Thread Timothy Potter
+1 SUCCESS! [0:53:16.298079]

Woots!

On Fri, May 29, 2015 at 7:30 AM, Ishan Chattopadhyaya
ichattopadhy...@gmail.com wrote:
 +1
 SUCCESS! [1:53:58.019931]

 (A cloudatcost.com, one time, $500 8GB ram VPS here)

 On Fri, May 29, 2015 at 6:59 PM, Mark Miller markrmil...@gmail.com wrote:

 bq. SUCCESS! [0:22:46.736047]

 That is just absurd.

 +1

 SUCCESS! [0:45:01.183084]

 - Mark


 On Fri, May 29, 2015 at 9:20 AM Steve Rowe sar...@gmail.com wrote:

 +1

 SUCCESS! [0:22:46.736047]

 I first downloaded via Subversion (took ~9 min), then pointed the smoke
 tester at the checkout:

 cd /tmp
 svn co
 https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-5.2.0-RC2-rev1682356
 cd ~/svn/lucene/dev/branches/lucene_solr_5_2
 python3 -u dev-tools/scripts/smokeTestRelease.py
 file:///tmp/lucene-solr-5.2.0-RC2-rev1682356/

 Steve

  On May 29, 2015, at 1:14 AM, Anshum Gupta ans...@anshumgupta.net
  wrote:
 
  Please vote for the second release candidate for Apache Lucene/Solr
  5.2.0.
 
  The artifacts can be downloaded from:
 
 
  https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-5.2.0-RC2-rev1682356
 
  You can run the smoke tester directly with this command:
 
  python3 -u dev-tools/scripts/smokeTestRelease.py
  https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-5.2.0-RC2-rev1682356/
 
  Here's my +1
 
  SUCCESS! [0:31:06.632891]
 
  --
  Anshum Gupta


 -
 To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
 For additional commands, e-mail: dev-h...@lucene.apache.org



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7603) Scary non reproducible failure from UpdateRequestProcessorFactoryTest.testUpdateDistribChainSkipping

2015-05-29 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7603?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14565737#comment-14565737
 ] 

ASF subversion and git services commented on SOLR-7603:
---

Commit 1682571 from hoss...@apache.org in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1682571 ]

SOLR-7603: more test tweaks to protect ourselves from unexpected log levels in 
tests like the one introduced by SOLR-7408 (merge r1682570)

 Scary non reproducible failure from 
 UpdateRequestProcessorFactoryTest.testUpdateDistribChainSkipping
 

 Key: SOLR-7603
 URL: https://issues.apache.org/jira/browse/SOLR-7603
 Project: Solr
  Issue Type: Bug
Reporter: Hoss Man
 Attachments: SOLR-7603.consoleText.txt


 jenkins nightly hit a very inexplicable error today...
 {noformat}
 Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-5.x/860/
 At revision 1682097
 Updating http://svn.apache.org/repos/asf/lucene/dev/branches/branch_5x at 
 revision '2015-05-27T14:50:50.016 -0400'
 [java-info] java version 1.7.0_72
 [java-info] Java(TM) SE Runtime Environment (1.7.0_72-b14, Oracle Corporation)
 [java-info] Java HotSpot(TM) 64-Bit Server VM (24.72-b04, Oracle Corporation)
 {noformat}
 {noformat}
   [junit4]   2 NOTE: reproduce with: ant test  
 -Dtestcase=UpdateRequestProcessorFactoryTest
 -Dtests.method=testUpdateDistribChainSkipping -Dtests.seed=4ECABCCFD159BE21 
 -Dtests.multiplier=2
 -Dtests.nightly=true -Dtests.slow=true 
 -Dtests.linedocsfile=/x1/jenkins/lucene-data/enwiki.random.lines.txt
 -Dtests.locale=mt_MT -Dtests.timezone=Etc/GMT0 -Dtests.asserts=true 
 -Dtests.file.encoding=ISO-8859-1
[junit4] FAILURE 0.01s J0 | 
 UpdateRequestProcessorFactoryTest.testUpdateDistribChainSkipping 
[junit4] Throwable #1: java.lang.AssertionError
[junit4]at 
 __randomizedtesting.SeedInfo.seed([4ECABCCFD159BE21:3F2E4219A7B299ED]:0)
[junit4]at
 org.apache.solr.update.processor.UpdateRequestProcessorFactoryTest.testUpdateDistribChainSkipping(UpdateRequestProcessorFactoryTest.java:111)
[junit4]at java.lang.Thread.run(Thread.java:745)
 {noformat}
 ...the line in question is asserting that, when executing a distributed update 
 (ie: forwarded from another node), the LogUpdateProcessor is still part of 
 the chain because it's got the RunAlways annotation indicating it 
 should always be included in the chain (everything before the 
 DistribUpdateProcessor is normally skipped).
 There's really no explanation for why the LogUpdateProcessor wouldn't be 
 found other than a code bug -- but in that case why doesn't the seed 
 reproduce reliably?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7408) Let SolrCore be the only thing which registers/unregisters a config directory listener

2015-05-29 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7408?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14565738#comment-14565738
 ] 

ASF subversion and git services commented on SOLR-7408:
---

Commit 1682571 from hoss...@apache.org in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1682571 ]

SOLR-7603: more test tweaks to protect ourselves from unexpected log levels in 
tests like the one introduced by SOLR-7408 (merge r1682570)

 Let SolrCore be the only thing which registers/unregisters a config directory 
 listener
 --

 Key: SOLR-7408
 URL: https://issues.apache.org/jira/browse/SOLR-7408
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Reporter: Shai Erera
Assignee: Shai Erera
 Fix For: Trunk, 5.2

 Attachments: SOLR-7408.patch, SOLR-7408.patch, SOLR-7408.patch, 
 SOLR-7408.patch, SOLR-7408.patch, SOLR-7408.patch, SOLR-7408.patch


 As reported here: http://markmail.org/message/ynkm2axkdprppgef, there is a 
 race condition which results in an exception when creating multiple 
 collections over the same config set. I was able to reproduce it in a test, 
 although I am only able to reproduce if I put break points and manually 
 simulate the problematic context switches.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-5.x-Linux (32bit/ibm-j9-jdk7) - Build # 12700 - Failure!

2015-05-29 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/12700/
Java: 32bit/ibm-j9-jdk7 
-Xjit:exclude={org/apache/lucene/util/fst/FST.pack(IIF)Lorg/apache/lucene/util/fst/FST;}

No tests ran.

Build Log:
[...truncated 309 lines...]
ERROR: Publisher 'Publish JUnit test result report' failed: No test report 
files were found. Configuration error?
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[JENKINS] Lucene-Solr-trunk-Linux (64bit/jdk1.8.0_60-ea-b12) - Build # 12875 - Failure!

2015-05-29 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/12875/
Java: 64bit/jdk1.8.0_60-ea-b12 -XX:+UseCompressedOops -XX:+UseSerialGC

1 tests failed.
FAILED:  
org.apache.solr.cloud.CollectionsAPIAsyncDistributedZkTest.testSolrJAPICalls

Error Message:
Shard split did not complete. Last recorded state: running 
expected:[completed] but was:[running]

Stack Trace:
org.junit.ComparisonFailure: Shard split did not complete. Last recorded state: 
running expected:[completed] but was:[running]
at 
__randomizedtesting.SeedInfo.seed([5EF1A336FDF09537:6952F57FB9A3DE3]:0)
at org.junit.Assert.assertEquals(Assert.java:125)
at 
org.apache.solr.cloud.CollectionsAPIAsyncDistributedZkTest.testSolrJAPICalls(CollectionsAPIAsyncDistributedZkTest.java:90)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:960)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:935)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
