[jira] [Commented] (LUCENE-6699) Integrate lat/lon BKD and spatial3d

2015-08-11 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14682488#comment-14682488
 ] 

Michael McCandless commented on LUCENE-6699:


Thanks [~daddywri], I started on one part, lemme go commit so we don't stomp on 
each other!

 Integrate lat/lon BKD and spatial3d
 ---

 Key: LUCENE-6699
 URL: https://issues.apache.org/jira/browse/LUCENE-6699
 Project: Lucene - Core
  Issue Type: New Feature
Reporter: Michael McCandless
Assignee: Michael McCandless
 Attachments: Geo3DPacking.java, LUCENE-6699.patch, LUCENE-6699.patch, 
 LUCENE-6699.patch


 I'm opening this for discussion, because I'm not yet sure how to do
 this integration, because of my ignorance about spatial in general and
 spatial3d in particular :)
 Our BKD tree impl is very fast at doing lat/lon shape intersection
 (bbox, polygon, soon distance: LUCENE-6698) against previously indexed
 points.
 I think to integrate with spatial3d, we would first need to record
 lat/lon/z into doc values.  Somewhere I saw discussion about how we
 could stuff all 3 into a single long value with acceptable precision
 loss?  Or, we could use BinaryDocValues?  We need all 3 dims available
 to do the fast per-hit query time filtering.
 But, second: what do we index into the BKD tree?  Can we just index
 earth surface lat/lon, and then at query time is spatial3d able to
 give me an enclosing surface lat/lon bbox for a 3d shape?  Or
 ... must we index all 3 dimensions into the BKD tree (seems like this
 could be somewhat wasteful)?
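The "stuff all 3 into a single long" idea above can be sketched roughly as follows. This is only an illustration of the quantization trade-off, not Lucene's actual encoding; the class and method names are made up, and the [-1, 1] range assumes a unit-sphere planet model.

```java
// Hypothetical sketch: pack x/y/z (each assumed in [-1, 1]) into one 64-bit
// long at 21 bits per dimension. Names are illustrative, not Lucene's API.
public class Geo3DPackingSketch {
  private static final int BITS = 21;
  private static final long MASK = (1L << BITS) - 1;

  // Quantize v in [min, max] to a BITS-bit integer (truncating).
  static long encodeDim(double v, double min, double max) {
    return (long) ((v - min) / (max - min) * MASK);
  }

  // Map the quantized bits back to a double in [min, max].
  static double decodeDim(long bits, double min, double max) {
    return min + (bits & MASK) * (max - min) / MASK;
  }

  static long pack(double x, double y, double z) {
    return (encodeDim(x, -1, 1) << (2 * BITS))
         | (encodeDim(y, -1, 1) << BITS)
         |  encodeDim(z, -1, 1);
  }

  static double[] unpack(long packed) {
    return new double[] {
      decodeDim(packed >>> (2 * BITS), -1, 1),
      decodeDim(packed >>> BITS, -1, 1),
      decodeDim(packed, -1, 1)
    };
  }
}
```

With 21 bits per dimension the quantization step is about 1e-6 of the coordinate range, which gives a concrete sense of the "acceptable precision loss" being discussed.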



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-7836) Possible deadlock when closing refcounted index writers.

2015-08-11 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7836?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson updated SOLR-7836:
-
Attachment: deadlock_5_pass_iw.res.zip

Here's a failure with the index writer passed from addDoc0() to addAndDelete(), 
my first attempt at a fix.

I commented out the necessary lines rather than add or delete them, so the line 
numbers should correspond to the checkout I mentioned above.

 Possible deadlock when closing refcounted index writers.
 

 Key: SOLR-7836
 URL: https://issues.apache.org/jira/browse/SOLR-7836
 Project: Solr
  Issue Type: Bug
Reporter: Erick Erickson
Assignee: Erick Erickson
 Fix For: Trunk, 5.4

 Attachments: SOLR-7836-synch.patch, SOLR-7836.patch, SOLR-7836.patch, 
 SOLR-7836.patch, deadlock_3.res.zip, deadlock_5_pass_iw.res.zip, deadlock_test


 Preliminary patch for what looks like a possible race condition between 
 writerFree and pauseWriter in DefaultSolrCoreState.
 Looking for comments and/or why I'm completely missing the boat.






[jira] [Commented] (LUCENE-6699) Integrate lat/lon BKD and spatial3d

2015-08-11 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14682492#comment-14682492
 ] 

Michael McCandless commented on LUCENE-6699:


OK I committed my change, hack away!




[jira] [Commented] (LUCENE-6699) Integrate lat/lon BKD and spatial3d

2015-08-11 Thread Karl Wright (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14682501#comment-14682501
 ] 

Karl Wright commented on LUCENE-6699:
-

One other question: what should I return for the four cases listed below?

{code}
  @Override
  public BKD3DTreeReader.Relation compare(int xMin, int xMax, int yMin, int yMax, int zMin, int zMax) {
    final GeoArea xyzSolid = new XYZSolid(planetModel, xMin, xMax, yMin, yMax, zMin, zMax);
    final int relationship = xyzSolid.getRelationship(shape);
    switch (relationship) {
      case GeoArea.WITHIN:
        // nocommit: shape is within xyzsolid
        return BKD3DTreeReader.Relation.INSIDE;
      case GeoArea.CONTAINS:
        // nocommit: shape contains xyzsolid
        return BKD3DTreeReader.Relation.INSIDE;
      case GeoArea.OVERLAPS:
        // nocommit: shape overlaps xyzsolid
        return BKD3DTreeReader.Relation.INSIDE;
      case GeoArea.DISJOINT:
        // nocommit: shape has nothing to do with xyzsolid
        return BKD3DTreeReader.Relation.INSIDE;
      default:
        throw new RuntimeException("Unexpected result value from getRelationship(): " + relationship);
    }
  }
});
{code}
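For orientation (this is not from the issue thread itself): assuming the tree wants a three-valued answer per cell, and assuming GeoArea.WITHIN means "the shape is within this area" while GeoArea.CONTAINS means "the shape contains this area", one plausible mapping is sketched below. The enum and constant names are illustrative stand-ins, not the actual BKD3DTreeReader.Relation or GeoArea constants.

```java
// Hypothetical mapping of the four GeoArea relationship cases to a
// three-valued cell/query relation. All names here are stand-ins.
public class RelationMappingSketch {
  enum CellRelation { CELL_INSIDE_QUERY, CELL_CROSSES_QUERY, CELL_OUTSIDE_QUERY }

  // Local stand-ins for the GeoArea relationship constants.
  static final int CONTAINS = 0, WITHIN = 1, OVERLAPS = 2, DISJOINT = 3;

  static CellRelation map(int relationship) {
    switch (relationship) {
      case CONTAINS:  // shape contains the cell: every point in it matches
        return CellRelation.CELL_INSIDE_QUERY;
      case WITHIN:    // shape lies inside the cell: some cell points may miss
      case OVERLAPS:  // partial overlap: each point must be tested
        return CellRelation.CELL_CROSSES_QUERY;
      case DISJOINT:  // no overlap: the whole cell can be skipped
        return CellRelation.CELL_OUTSIDE_QUERY;
      default:
        throw new IllegalArgumentException("Unexpected relationship: " + relationship);
    }
  }
}
```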





[jira] [Commented] (SOLR-7639) Bring MLTQParser at par with the MLT Handler w.r.t supported options

2015-08-11 Thread Anshum Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7639?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14687404#comment-14687404
 ] 

Anshum Gupta commented on SOLR-7639:


I'm really sorry, Jens, for not noticing this patch and getting it into 5.3 in 
time. I'll create a new JIRA and add this patch to it. This patch doesn't have 
changes for CloudMLTQParser and also doesn't have any tests. Let's get both of 
those done, and I'll make sure I spend time to get this in ASAP.

 Bring MLTQParser at par with the MLT Handler w.r.t supported options
 

 Key: SOLR-7639
 URL: https://issues.apache.org/jira/browse/SOLR-7639
 Project: Solr
  Issue Type: Improvement
Reporter: Anshum Gupta
Assignee: Anshum Gupta
 Fix For: 5.3

 Attachments: SOLR-7639-add-boost-and-exclude-current.patch, 
 SOLR-7639-add-boost-and-exclude-current.patch, SOLR-7639.patch, 
 SOLR-7639.patch


 As of now, there are options that the MLT Handler supports which the QParser 
 doesn't. It would be good to have the QParser tap into everything that's 
 supported.






[jira] [Created] (SOLR-7912) Add support for boost and exclude the queried document id in MoreLikeThis QParser

2015-08-11 Thread Anshum Gupta (JIRA)
Anshum Gupta created SOLR-7912:
--

 Summary: Add support for boost and exclude the queried document id 
in MoreLikeThis QParser
 Key: SOLR-7912
 URL: https://issues.apache.org/jira/browse/SOLR-7912
 Project: Solr
  Issue Type: Improvement
Reporter: Anshum Gupta
Assignee: Anshum Gupta


Continuing from SOLR-7639. We need to support boost, and also exclude the input 
document from the returned doc list.






[jira] [Resolved] (SOLR-7639) Bring MLTQParser at par with the MLT Handler w.r.t supported options

2015-08-11 Thread Anshum Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7639?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anshum Gupta resolved SOLR-7639.

Resolution: Fixed

Marking this issue as resolved as we shouldn't be adding more to this 
particular JIRA# considering 5.3 branch has been cut.




[jira] [Updated] (SOLR-6760) New optimized DistributedQueue implementation for overseer

2015-08-11 Thread Scott Blum (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6760?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Scott Blum updated SOLR-6760:
-
Attachment: (was: SOLR-6760.patch)

 New optimized DistributedQueue implementation for overseer
 --

 Key: SOLR-6760
 URL: https://issues.apache.org/jira/browse/SOLR-6760
 Project: Solr
  Issue Type: Bug
Reporter: Noble Paul
Assignee: Noble Paul
 Attachments: SOLR-6760.patch, deadlock.patch


 Currently the DQ works as follows:
 * read all items in the directory
 * sort them all
 * take the head, return it, and discard everything else
 * rinse and repeat
 This works well when we have only a handful of items in the queue. If the 
 number of items in the queue is much larger (tens of thousands), this is 
 counterproductive.
 As the overseer queue is a multiple-producer, single-consumer queue, we can 
 read them all in bulk, and before processing each item just do a 
 zk.exists(itemname); if all is well, we don't need to do the fetch-all + sort 
 again.
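The bulk-read idea in the description can be sketched as below, with an in-memory set standing in for ZooKeeper (so the zk.exists() round-trip becomes a set lookup). The class and method names are made up for illustration; this is not Solr's DistributedQueue.

```java
import java.util.Set;
import java.util.TreeSet;

// Sketch: fetch-and-sort the queue children once, then hand out heads from
// the cached, sorted view, checking each item still exists before returning
// it (the existence check stands in for zk.exists(itemname)).
public class BulkQueueSketch {
  private final TreeSet<String> cached = new TreeSet<>(); // sorted once, reused
  private final Set<String> liveNodes;                    // stand-in for ZK state

  BulkQueueSketch(Set<String> liveNodes) {
    this.liveNodes = liveNodes;
  }

  // One "read all + sort" pass, amortized over many poll calls.
  void refill() {
    cached.addAll(liveNodes);
  }

  // Return the smallest cached item that still exists, skipping stale ones.
  String pollHead() {
    while (!cached.isEmpty()) {
      String head = cached.pollFirst();
      if (liveNodes.remove(head)) { // cheap per-item existence check
        return head;
      }
      // item was consumed elsewhere; fall through to the next cached entry
    }
    return null; // cache exhausted; a real impl would refill here
  }
}
```

The point of the sketch is that the expensive fetch-all + sort happens once per batch, while each consumed item costs only a single existence check.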






[jira] [Commented] (LUCENE-6699) Integrate lat/lon BKD and spatial3d

2015-08-11 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14682491#comment-14682491
 ] 

ASF subversion and git services commented on LUCENE-6699:
-

Commit 1695377 from [~mikemccand] in branch 'dev/branches/lucene6699'
[ https://svn.apache.org/r1695377 ]

LUCENE-6699: fold in some feedback




[jira] [Commented] (LUCENE-6699) Integrate lat/lon BKD and spatial3d

2015-08-11 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14682495#comment-14682495
 ] 

Michael McCandless commented on LUCENE-6699:


bq. So some plumbing will be necessary to set that up.

Ahh yes, now they are static methods.  So I guess this means you must pass 
PlanetModel to the doc values format, and to the query.




[jira] [Commented] (LUCENE-6699) Integrate lat/lon BKD and spatial3d

2015-08-11 Thread Karl Wright (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14687391#comment-14687391
 ] 

Karl Wright commented on LUCENE-6699:
-

And [~mikemccand], I just ran into something else.  tree.intersect() accepts 
only int values at the moment, but x,y,z are doubles in the range of roughly 
-1.0 to 1.0, and need to be treated as such.  It looks like the integer stuff 
goes fairly deep into the BKD3DTreeReader code.  Question: If I embark on 
turning these all into doubles, what kinds of problems will I have?
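One alternative to converting the tree to doubles (offered only as a sketch, not as the issue's eventual resolution) is to quantize each double into the int space the tree already uses; multiplying by a fixed scale preserves ordering, so range comparisons still work. The names and the exact scale below are illustrative.

```java
// Hypothetical sketch: encode x/y/z doubles (roughly in [-1, 1]) as ints so
// the existing int-based tree code can index and compare them unchanged.
public class DoubleToIntSketch {
  // Leave a little headroom in case |v| slightly exceeds 1.0.
  private static final double SCALE = Integer.MAX_VALUE / 1.1;

  // Order-preserving quantization: v1 <= v2 implies encode(v1) <= encode(v2).
  static int encode(double v) {
    return (int) (v * SCALE);
  }

  static double decode(int bits) {
    return bits / SCALE;
  }
}
```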






[jira] [Updated] (SOLR-7912) Add support for boost and exclude the queried document id in MoreLikeThis QParser

2015-08-11 Thread Anshum Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7912?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anshum Gupta updated SOLR-7912:
---
Attachment: SOLR-7912.patch

Patch from SOLR-7639.

 Add support for boost and exclude the queried document id in MoreLikeThis 
 QParser
 -

 Key: SOLR-7912
 URL: https://issues.apache.org/jira/browse/SOLR-7912
 Project: Solr
  Issue Type: Improvement
Reporter: Anshum Gupta
Assignee: Anshum Gupta
 Attachments: SOLR-7912.patch


 Continuing from SOLR-7639. We need to support boost, and also exclude the input 
 document from the returned doc list.






[jira] [Comment Edited] (SOLR-7912) Add support for boost and exclude the queried document id in MoreLikeThis QParser

2015-08-11 Thread Anshum Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7912?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14692285#comment-14692285
 ] 

Anshum Gupta edited comment on SOLR-7912 at 8/11/15 9:27 PM:
-

 [~blackwinter] : Here's the patch from SOLR-7639.


was (Author: anshumg):
Patch from SOLR-7639.




[jira] [Created] (SOLR-7913) Add stream.body support to MLT QParser

2015-08-11 Thread Anshum Gupta (JIRA)
Anshum Gupta created SOLR-7913:
--

 Summary: Add stream.body support to MLT QParser
 Key: SOLR-7913
 URL: https://issues.apache.org/jira/browse/SOLR-7913
 Project: Solr
  Issue Type: Improvement
Reporter: Anshum Gupta


Continuing from 
https://issues.apache.org/jira/browse/SOLR-7639?focusedCommentId=14601011page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14601011.

It'd be good to have stream.body supported by the MLT QParser.






[jira] [Commented] (SOLR-7639) Bring MLTQParser at par with the MLT Handler w.r.t supported options

2015-08-11 Thread Anshum Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7639?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14692295#comment-14692295
 ] 

Anshum Gupta commented on SOLR-7639:


I've created SOLR-7912 (boost and exclusion) and SOLR-7913 (support for 
stream.body).




[jira] [Updated] (SOLR-6760) New optimized DistributedQueue implementation for overseer

2015-08-11 Thread Scott Blum (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6760?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Scott Blum updated SOLR-6760:
-
Attachment: (was: SOLR-6760.patch)




[jira] [Updated] (SOLR-6760) New optimized DistributedQueue implementation for overseer

2015-08-11 Thread Scott Blum (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6760?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Scott Blum updated SOLR-6760:
-
Attachment: SOLR-6760.patch

New patch, with better testing.  Tests all passing, I think.




[jira] [Commented] (SOLR-6760) New optimized DistributedQueue implementation for overseer

2015-08-11 Thread Scott Blum (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6760?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14692319#comment-14692319
 ] 

Scott Blum commented on SOLR-6760:
--

[~shalinmangar] please sanity check the block of code in 
testDistributedQueueBlocking() that forces a ZK disconnect / reconnect... I 
looked around but couldn't really find canonical patterns.  I wanted to ensure 
that we don't end up in a state where the session is disconnected, but we think 
we still have a watcher, so we're stuck forever.




[jira] [Commented] (LUCENE-6699) Integrate lat/lon BKD and spatial3d

2015-08-11 Thread Karl Wright (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14692456#comment-14692456
 ] 

Karl Wright commented on LUCENE-6699:
-

Ok, did not understand that.  We don't yet have the ability to get a Bounds 
result for a shape that is x,y,z instead of lat/lon.  But I presume you *do* 
want the ability to know, for a given planet model, the actual bounds of the 
planet. ;-)  That's gotta go somewhere.




[jira] [Commented] (LUCENE-6174) Improve ant eclipse to select right JRE for building

2015-08-11 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6174?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14692391#comment-14692391
 ] 

Uwe Schindler commented on LUCENE-6174:
---

Looks fine! Go ahead. I did not know that there is a generic selector in the 
GUI that selects those container types by their generic name!

 Improve ant eclipse to select right JRE for building
 --

 Key: LUCENE-6174
 URL: https://issues.apache.org/jira/browse/LUCENE-6174
 Project: Lucene - Core
  Issue Type: Improvement
  Components: general/build
Reporter: Uwe Schindler
Assignee: Uwe Schindler
Priority: Trivial
 Attachments: LUCENE-6174.patch


 Whenever I run ant eclipse, the setting choosing the right JVM is lost and 
 has to be reassigned in the project properties.
 In fact the classpath generator writes a new classpath file (as it should), 
 but this only contains the default entry:
 {code:xml}
 <classpathentry kind="con" path="org.eclipse.jdt.launching.JRE_CONTAINER"/>
 {code}
 Instead it should preserve something like:
 {code:xml}
 <classpathentry kind="con" path="org.eclipse.jdt.launching.JRE_CONTAINER/org.eclipse.jdt.internal.debug.ui.launcher.StandardVMType/jdk1.8.0_25"/>
 {code}
 We can either pass this via an Ant property on the command line, or the user 
 can set it in lucene/build.properties or per user. An alternative would be to 
 generate the name jdk1.8.0_25 by guessing from Ant's java.home. If this 
 name does not exist in Eclipse, it would produce an error and the user would 
 need to add the correct JDK.
 I currently have the problem that my Eclipse uses Java 7 by default, and 
 whenever I rebuild the eclipse project, the change to Java 8 in trunk is gone.
 When this is fixed, I could easily/automatically have the right JDK used by 
 Eclipse for trunk (Java 8) and branch_5x (Java 7).






[jira] [Commented] (SOLR-7826) Permission issues when creating cores with bin/solr

2015-08-11 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7826?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14692407#comment-14692407
 ] 

Shawn Heisey commented on SOLR-7826:


Initial attempts are not working completely, and I'm fighting with a flaky 
Internet connection at the location where I'm doing the work.  If I manage to 
get something that works right, I'll upload a patch.


 Permission issues when creating cores with bin/solr
 ---

 Key: SOLR-7826
 URL: https://issues.apache.org/jira/browse/SOLR-7826
 Project: Solr
  Issue Type: Improvement
Reporter: Shawn Heisey
Priority: Minor

 Ran into an interesting situation on IRC today.
 Solr has been installed as a service using the shell script 
 install_solr_service.sh ... so it is running as an unprivileged user.
 User is running bin/solr create as root.  This causes permission problems, 
 because the script creates the core's instanceDir with root ownership, then 
 when Solr is instructed to actually create the core, it cannot create the 
 dataDir.
 Enhancement idea:  When the install script is used, leave breadcrumbs 
 somewhere so that the create core section of the main script can find it 
 and su to the user specified during install.






[jira] [Commented] (LUCENE-6699) Integrate lat/lon BKD and spatial3d

2015-08-11 Thread Nicholas Knize (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14692425#comment-14692425
 ] 

Nicholas Knize commented on LUCENE-6699:


bq. The biggest thing, though, is that we need access to a PlanetModel instance 
inside the compare()

I didn't say anything about cost. It was a matter of saving work, making 
maintenance less of a nightmare, de-duping code and using what's already 
available.

 Integrate lat/lon BKD and spatial3d
 ---

 Key: LUCENE-6699
 URL: https://issues.apache.org/jira/browse/LUCENE-6699
 Project: Lucene - Core
  Issue Type: New Feature
Reporter: Michael McCandless
Assignee: Michael McCandless
 Attachments: Geo3DPacking.java, LUCENE-6699.patch, LUCENE-6699.patch, 
 LUCENE-6699.patch


 I'm opening this for discussion, because I'm not yet sure how to do
 this integration, because of my ignorance about spatial in general and
 spatial3d in particular :)
 Our BKD tree impl is very fast at doing lat/lon shape intersection
 (bbox, polygon, soon distance: LUCENE-6698) against previously indexed
 points.
 I think to integrate with spatial3d, we would first need to record
 lat/lon/z into doc values.  Somewhere I saw discussion about how we
 could stuff all 3 into a single long value with acceptable precision
 loss?  Or, we could use BinaryDocValues?  We need all 3 dims available
 to do the fast per-hit query time filtering.
 But, second: what do we index into the BKD tree?  Can we just index
 earth surface lat/lon, and then at query time is spatial3d able to
 give me an enclosing surface lat/lon bbox for a 3d shape?  Or
 ... must we index all 3 dimensions into the BKD tree (seems like this
 could be somewhat wasteful)?
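The "stuff all 3 into a single long" idea from the description can be sketched as quantizing each of x/y/z to 21 bits of a 64-bit long. This is purely illustrative; the class name, the {{max}} bound, and the bit layout are my own assumptions, not anything in the attached patches:

```java
// Hypothetical sketch: pack x/y/z (each assumed to lie in [-max, max]) into
// one long doc value using 21 bits per dimension, with some precision loss.
public class PackXYZ {
  static final int BITS = 21;
  static final long MASK = (1L << BITS) - 1;

  // map [-max, max] onto [0, 2^21 - 1]
  static long encodeDim(double v, double max) {
    return (long) (((v + max) / (2 * max)) * MASK);
  }

  static double decodeDim(long bits, double max) {
    return ((double) bits / MASK) * (2 * max) - max;
  }

  static long pack(double x, double y, double z, double max) {
    return (encodeDim(x, max) << (2 * BITS))
         | (encodeDim(y, max) << BITS)
         | encodeDim(z, max);
  }

  public static void main(String[] args) {
    double max = 1.0012; // assumed bound on |coordinate| in planet units
    long packed = pack(0.5, -0.25, 0.75, max);
    double x = decodeDim((packed >>> (2 * BITS)) & MASK, max);
    // quantization step is ~2*max/2^21, so the error stays tiny
    System.out.println(Math.abs(x - 0.5) < 1e-5);
  }
}
```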






[jira] [Commented] (LUCENE-6699) Integrate lat/lon BKD and spatial3d

2015-08-11 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14692426#comment-14692426
 ] 

Michael McCandless commented on LUCENE-6699:


bq. tree.intersect() accepts only int values at the moment, but x,y,z are 
doubles

Wait, this was by design (having BKD operate only on ints): the encoding of 
double -> int should happen outside BKD.

I'm assuming 32 bits of precision for each dimension is enough?
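A double -> int encode done outside BKD might look roughly like the following; the method names and the {{planetMax}} bound are my own assumptions for illustration, not the actual patch code:

```java
// Hedged sketch: BKD stores ints, so each dimension is scaled from an
// assumed [-planetMax, planetMax] range onto the full 32-bit int range
// before indexing, and scaled back at query time.
public class EncodeDim {
  static int encode(double v, double planetMax) {
    // map [-planetMax, planetMax] onto roughly [Integer.MIN_VALUE, Integer.MAX_VALUE]
    return (int) Math.round((v / planetMax) * Integer.MAX_VALUE);
  }

  static double decode(int bits, double planetMax) {
    return ((double) bits / Integer.MAX_VALUE) * planetMax;
  }

  public static void main(String[] args) {
    double planetMax = 1.0012; // assumed bound on |x|, |y|, |z|
    double v = 0.123456789;
    double roundTrip = decode(encode(v, planetMax), planetMax);
    // 32 bits per dimension keeps the round-trip error far below planet scale
    System.out.println(Math.abs(roundTrip - v) < 1e-8);
  }
}
```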




[JENKINS-EA] Lucene-Solr-trunk-Linux (64bit/jdk1.8.0_60-ea-b24) - Build # 13821 - Still Failing!

2015-08-11 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/13821/
Java: 64bit/jdk1.8.0_60-ea-b24 -XX:-UseCompressedOops -XX:+UseG1GC

1 tests failed.
FAILED:  org.apache.solr.cloud.BasicDistributedZkTest.test

Error Message:
commitWithin did not work on node: http://127.0.0.1:33614/l_gav/collection1 
expected:<68> but was:<67>

Stack Trace:
java.lang.AssertionError: commitWithin did not work on node: 
http://127.0.0.1:33614/l_gav/collection1 expected:<68> but was:<67>
at 
__randomizedtesting.SeedInfo.seed([2E356A85C4B0BD53:A661555F6A4CD0AB]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at 
org.apache.solr.cloud.BasicDistributedZkTest.test(BasicDistributedZkTest.java:333)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:963)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:938)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 

[jira] [Commented] (LUCENE-6699) Integrate lat/lon BKD and spatial3d

2015-08-11 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14692438#comment-14692438
 ] 

Michael McCandless commented on LUCENE-6699:


bq. One other question: what should I return for the four cases listed below?

Oh this was the part I committed already ... if you svn up do you see conflicts?




[jira] [Commented] (LUCENE-6732) Improve validate-source-patterns in build.xml (e.g., detect invalid license headers!!)

2015-08-11 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6732?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14692444#comment-14692444
 ] 

ASF subversion and git services commented on LUCENE-6732:
-

Commit 1695395 from [~thetaphi] in branch 'dev/trunk'
[ https://svn.apache.org/r1695395 ]

LUCENE-6732: Remove tabs in JS and XML files

 Improve validate-source-patterns in build.xml (e.g., detect invalid license 
 headers!!)
 --

 Key: LUCENE-6732
 URL: https://issues.apache.org/jira/browse/LUCENE-6732
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Uwe Schindler
Assignee: Uwe Schindler
 Attachments: LUCENE-6732.patch, LUCENE-6732.patch


 Today I enabled warnings analysis on Policeman Jenkins. This scans the build 
 log for warnings by javac and reports them in statistics, together with 
 source file dumps.
 When doing that I found out that someone added again a lot of invalid 
 license headers using {{/\*\*}} instead of a simple comment. This causes 
 javadoc warnings under some circumstances, because {{/\*\*}} starts 
 javadoc and is not a license comment.
 I then tried to fix the validate-source-patterns to detect this, but due to a 
 bug in ANT, the {{<containsregexp/>}} filter is applied per line (although it 
 has multiline matching capabilities!!!).
 So I rewrote our checker to run with groovy. This also has some good parts:
 - it tells you what was broken; otherwise you just know there is an error, 
 but not what's wrong (tab, nocommit,...)
 - it's much faster (multiple {{<containsregexp/>}} filters read the file over 
 and over; this one reads the file once into a string and then applies all 
 regular expressions).
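The single-read, many-regex approach can be sketched in Java (the actual checker is Groovy); the check names and patterns below are invented for illustration:

```java
import java.util.List;
import java.util.regex.Pattern;

// Illustrative sketch, not the real checker: read each source file once into
// a String, then apply every validation pattern with multiline-capable
// semantics and report *which* check failed, unlike Ant's per-line
// containsregexp filter.
public class SourceChecker {
  record Check(String name, Pattern pattern) {}

  static final List<Check> CHECKS = List.of(
      new Check("invalid javadoc-style license header",
          Pattern.compile("^/\\*\\*.*Licensed to the Apache", Pattern.DOTALL)),
      new Check("tab character", Pattern.compile("\t")),
      new Check("nocommit marker", Pattern.compile("nocommit")));

  static List<String> violations(String fileContent) {
    // each pattern runs against the whole file content, read only once
    return CHECKS.stream()
        .filter(c -> c.pattern().matcher(fileContent).find())
        .map(Check::name)
        .toList();
  }

  public static void main(String[] args) {
    String src = "/** Licensed to the Apache Software Foundation */\nclass A {}\n";
    System.out.println(violations(src));
  }
}
```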






[jira] [Commented] (LUCENE-6699) Integrate lat/lon BKD and spatial3d

2015-08-11 Thread Karl Wright (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14692445#comment-14692445
 ] 

Karl Wright commented on LUCENE-6699:
-

Thanks, that clarifies.  Stay tuned.




[jira] [Commented] (LUCENE-6699) Integrate lat/lon BKD and spatial3d

2015-08-11 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14692446#comment-14692446
 ] 

Michael McCandless commented on LUCENE-6699:


It seems like we need PlanetModel at query time, for the XYZSolid ctor, and 
also at indexing time, to know the full range for x, y, z during the encode of 
double -> int (hmm, and also at query time for the decode from int -> double, 
so we can do the per-hit filtering on the boundary cells).




[jira] [Updated] (LUCENE-6699) Integrate lat/lon BKD and spatial3d

2015-08-11 Thread Karl Wright (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6699?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karl Wright updated LUCENE-6699:

Attachment: LUCENE-6699.patch

New patch which fixes some of the remaining FIXMEs




[jira] [Commented] (LUCENE-6699) Integrate lat/lon BKD and spatial3d

2015-08-11 Thread Karl Wright (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14692450#comment-14692450
 ] 

Karl Wright commented on LUCENE-6699:
-

[~mikemccand] see the new patch




[jira] [Commented] (LUCENE-6732) Improve validate-source-patterns in build.xml (e.g., detect invalid license headers!!)

2015-08-11 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6732?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14692452#comment-14692452
 ] 

ASF subversion and git services commented on LUCENE-6732:
-

Commit 1695401 from [~thetaphi] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1695401 ]

Merged revision(s) 1695395 from lucene/dev/trunk:
LUCENE-6732: Remove tabs in JS and XML files




[jira] [Commented] (LUCENE-6699) Integrate lat/lon BKD and spatial3d

2015-08-11 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14692451#comment-14692451
 ] 

ASF subversion and git services commented on LUCENE-6699:
-

Commit 1695400 from [~mikemccand] in branch 'dev/branches/lucene6699'
[ https://svn.apache.org/r1695400 ]

LUCENE-6699: iterate




[jira] [Commented] (LUCENE-6699) Integrate lat/lon BKD and spatial3d

2015-08-11 Thread Karl Wright (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14692454#comment-14692454
 ] 

Karl Wright commented on LUCENE-6699:
-

not to worry; I fixed it in my new patch.




[jira] [Commented] (LUCENE-6699) Integrate lat/lon BKD and spatial3d

2015-08-11 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14692453#comment-14692453
 ] 

Michael McCandless commented on LUCENE-6699:


Thanks [~daddywri], I committed.

Hmm, the minX/maxX etc. in the query was supposed to be for the query shape, not 
for the planet (i.e., the 3D bbox for the query). But if this is problematic I 
think we could simply remove it and let BKD recurse from the entire world 
down. I put a nocommit about this.




[JENKINS] Lucene-Solr-NightlyTests-5.x - Build # 925 - Still Failing

2015-08-11 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-5.x/925/

2 tests failed.
FAILED:  org.apache.solr.cloud.CollectionsAPIDistributedZkTest.test

Error Message:
Captured an uncaught exception in thread: Thread[id=8286, name=collection2, 
state=RUNNABLE, group=TGRP-CollectionsAPIDistributedZkTest]

Stack Trace:
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught 
exception in thread: Thread[id=8286, name=collection2, state=RUNNABLE, 
group=TGRP-CollectionsAPIDistributedZkTest]
Caused by: 
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at http://127.0.0.1:37696/m: Could not find collection : 
awholynewstresscollection_collection2_1
at __randomizedtesting.SeedInfo.seed([43D7C7E8846AE831]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:560)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:234)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:226)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:376)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:328)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1085)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:856)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:799)
at org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1220)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest$1CollectionThread.run(CollectionsAPIDistributedZkTest.java:894)


FAILED:  org.apache.solr.search.TestReloadDeadlock.testReloadDeadlock

Error Message:
Captured an uncaught exception in thread: Thread[id=5371, name=WRITER0, 
state=RUNNABLE, group=TGRP-TestReloadDeadlock]

Stack Trace:
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught 
exception in thread: Thread[id=5371, name=WRITER0, state=RUNNABLE, 
group=TGRP-TestReloadDeadlock]
at 
__randomizedtesting.SeedInfo.seed([43D7C7E8846AE831:7E566349D0210B73]:0)
Caused by: java.lang.RuntimeException: org.apache.solr.common.SolrException: 
Error opening new searcher
at __randomizedtesting.SeedInfo.seed([43D7C7E8846AE831]:0)
at 
org.apache.solr.search.TestReloadDeadlock$1.run(TestReloadDeadlock.java:166)
Caused by: org.apache.solr.common.SolrException: Error opening new searcher
at org.apache.solr.core.SolrCore.openNewSearcher(SolrCore.java:1662)
at org.apache.solr.core.SolrCore.getRealtimeSearcher(SolrCore.java:1519)
at 
org.apache.solr.update.VersionInfo.getVersionFromIndex(VersionInfo.java:201)
at org.apache.solr.update.UpdateLog.lookupVersion(UpdateLog.java:778)
at 
org.apache.solr.update.VersionInfo.lookupVersion(VersionInfo.java:194)
at 
org.apache.solr.update.processor.DistributedUpdateProcessor.versionAdd(DistributedUpdateProcessor.java:1089)
at 
org.apache.solr.update.processor.DistributedUpdateProcessor.processAdd(DistributedUpdateProcessor.java:706)
at 
org.apache.solr.update.processor.LogUpdateProcessor.processAdd(LogUpdateProcessorFactory.java:104)
at 
org.apache.solr.handler.loader.JsonLoader$SingleThreadedJsonLoader.handleAdds(JsonLoader.java:470)
at 
org.apache.solr.handler.loader.JsonLoader$SingleThreadedJsonLoader.processUpdate(JsonLoader.java:134)
at 
org.apache.solr.handler.loader.JsonLoader$SingleThreadedJsonLoader.load(JsonLoader.java:113)
at org.apache.solr.handler.loader.JsonLoader.load(JsonLoader.java:76)
at 
org.apache.solr.handler.UpdateRequestHandler$1.load(UpdateRequestHandler.java:98)
at 
org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:74)
at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:143)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:2068)
at 
org.apache.solr.servlet.DirectSolrConnection.request(DirectSolrConnection.java:131)
at org.apache.solr.SolrTestCaseJ4.updateJ(SolrTestCaseJ4.java:1104)
at 
org.apache.solr.SolrTestCaseJ4.addAndGetVersion(SolrTestCaseJ4.java:1250)
at 
org.apache.solr.search.TestReloadDeadlock.addDoc(TestReloadDeadlock.java:200)
at 
org.apache.solr.search.TestReloadDeadlock.access$100(TestReloadDeadlock.java:46)
at 
org.apache.solr.search.TestReloadDeadlock$1.run(TestReloadDeadlock.java:156)
Caused by: java.lang.NullPointerException
at org.apache.solr.core.SolrCore.openNewSearcher(SolrCore.java:1631)
... 21 more




Build Log:
[...truncated 10719 lines...]
   [junit4] Suite: org.apache.solr.search.TestReloadDeadlock
   [junit4]   2 Creating dataDir: 

[jira] [Updated] (SOLR-7909) ZK ACL credential provider cannot be set from JVM params as documented

2015-08-11 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/SOLR-7909?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl updated SOLR-7909:
--
   Priority: Major  (was: Blocker)
Description: In RefGuide 
https://cwiki.apache.org/confluence/display/solr/ZooKeeper+Access+Control you 
are told to setup ZK security provider classes with system properties, but as 
noted in the comments to that page, that no longer works, and you need to set 
these in solr.xml.  (was: In RefGuide 
https://cwiki.apache.org/confluence/display/solr/ZooKeeper+Access+Control you 
are told to setup ZK security provider classes with system properties, but as 
noted in the comments to that page, that no longer works, and you need to set 
these in solr.xml.

This should be a simple fix to get into 5.3, and quite important since 5.3 is 
more than anything a security release...)

Changing from blocker to major, since the bug has existed for several releases 
and we have a known workaround. Leaving this open so we can fix refguide.

 ZK ACL credential provider cannot be set from JVM params as documented
 --

 Key: SOLR-7909
 URL: https://issues.apache.org/jira/browse/SOLR-7909
 Project: Solr
  Issue Type: Bug
  Components: security
Affects Versions: 5.2.1
Reporter: Jan Høydahl
 Fix For: 5.3


 In RefGuide 
 https://cwiki.apache.org/confluence/display/solr/ZooKeeper+Access+Control you 
 are told to setup ZK security provider classes with system properties, but as 
 noted in the comments to that page, that no longer works, and you need to set 
 these in solr.xml.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-5584) Update to Guava 15.0

2015-08-11 Thread Dawid Weiss (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5584?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Weiss updated SOLR-5584:
--
Assignee: (was: Dawid Weiss)

 Update to Guava 15.0
 

 Key: SOLR-5584
 URL: https://issues.apache.org/jira/browse/SOLR-5584
 Project: Solr
  Issue Type: Improvement
Reporter: Mark Miller
Priority: Minor
 Fix For: Trunk









[jira] [Updated] (SOLR-7790) Update Carrot2 clustering contrib to version 3.10.2

2015-08-11 Thread Dawid Weiss (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7790?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Weiss updated SOLR-7790:
--
Summary: Update Carrot2 clustering contrib to version 3.10.2  (was: Update 
Carrot2 clustering contrib to version 3.10.1)

 Update Carrot2 clustering contrib to version 3.10.2
 ---

 Key: SOLR-7790
 URL: https://issues.apache.org/jira/browse/SOLR-7790
 Project: Solr
  Issue Type: Task
Reporter: Dawid Weiss
Assignee: Dawid Weiss
Priority: Minor
 Fix For: 5.3, Trunk

 Attachments: SOLR-7790.patch


 This issue brings the clustering extension up to date and also involves 
 upgrading a few other libraries (see sub-tasks or linked issues).






Re: 5.3 release

2015-08-11 Thread Jan Høydahl
I opened SOLR-7909 yesterday as a blocker to flag that ZK ACL cannot be set up 
through JVM params as documented. The fix looks simple, but an alternative for 
5.3 is to change the documentation, since there is a known workaround using 
solr.xml. Will do some testing.

--
Jan Høydahl, search solution architect
Cominvent AS - www.cominvent.com

 On 10 Aug 2015, at 19:51, Noble Paul noble.p...@gmail.com wrote:
 
 I reopened https://issues.apache.org/jira/browse/SOLR-7838
 Need to incorporate the syntax changes. I'll cut the first release
 candidate after that
 
 On Mon, Aug 10, 2015 at 9:33 PM, Erik Hatcher erik.hatc...@gmail.com wrote:
 I knew things were stalled so I slipped SOLR-7900 in to 5.3.   (this is
 example/files stuff that is soon to be publicized more, so wanted to be sure
 the latest stuff made it to next release).
 
 —
 Erik Hatcher, Senior Solutions Architect
 http://www.lucidworks.com
 
 
 
 
 On Aug 5, 2015, at 11:09 AM, Noble Paul noble.p...@gmail.com wrote:
 
 I have created the lucene_solr_5_3 branch.
 If anything new must go into 5.3 release please communicate it here
 and commit to the new branch as well
 
 @uwe, @sarowe , can someone help me setup jenkins for the same.
 --Noble
 
 On Wed, Aug 5, 2015 at 7:27 PM, Varun Thacker
 varunthacker1...@gmail.com wrote:
 
 Hi Noble,
 
 I've just now committed SOLR-7818 and SOLR-7756. Both were bug fixes. I'm
 done from my side for 5.3
 
 On Wed, Aug 5, 2015 at 1:22 PM, Noble Paul noble.p...@gmail.com wrote:
 
 
 https://issues.apache.org/jira/browse/SOLR-7692 is slated to be a part
 of 5.3 . I'm wrapping it up.
 As you suggested , it makes sense to cut a branch and stabilize stuff.
 I shall cut a branch as soon as possible
 
 I guess there could be other things too.
 --Noble
 
 On Tue, Aug 4, 2015 at 6:55 PM, Adrien Grand jpou...@gmail.com wrote:
 
 Hi Noble,
 
 Which changes are delaying the branch creation? Even if everything
 that we want for 5.3 is not ready yet, it could be useful to create
 the branch now to help stabilize it? We could still merge the changes
 we want in after the branch is created.
 
 On Mon, Aug 3, 2015 at 4:52 PM, Noble Paul noble.p...@gmail.com wrote:
 
 I may have to push the branch by a day or two. There are some more
 loose ends to be tied up from my side. Sorry for the trouble
 
 --Noble
 
 On Thu, Jul 30, 2015 at 12:48 PM, Adrien Grand jpou...@gmail.com
 wrote:
 
 +1 to releasing 5.3, and thanks for volunteering!
 
 On Mon, Jul 27, 2015 at 10:56 AM, Noble Paul noble.p...@gmail.com
 wrote:
 
 Hi,
 I would like to volunteer myself as the RM for 5.3 release. I plan to
 cut the 5.3 release branch by next Monday (03/Aug).
 
 --
 -
 Noble Paul
 
 
 
 
 
 --
 Adrien
 
 
 
 
 
 --
 -
 Noble Paul
 
 
 
 
 
 --
 Adrien
 
 
 
 
 
 --
 -
 Noble Paul
 
 
 
 
 
 --
 
 
 Regards,
 Varun Thacker
 
 
 
 
 --
 -
 Noble Paul
 
 
 
 
 
 
 -- 
 -
 Noble Paul
 
 





[jira] [Commented] (SOLR-7826) Permission issues when creating cores with bin/solr

2015-08-11 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-7826?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14681358#comment-14681358
 ] 

Jan Høydahl commented on SOLR-7826:
---

I believe creating cores as root will cause problems every single time, so why 
allow it at all? Perhaps bin/solr should always bail out early if executed as 
root, perhaps with a {{--runasrootonyourownrisk}} param to override?

 Permission issues when creating cores with bin/solr
 ---

 Key: SOLR-7826
 URL: https://issues.apache.org/jira/browse/SOLR-7826
 Project: Solr
  Issue Type: Improvement
Reporter: Shawn Heisey
Priority: Minor

 Ran into an interesting situation on IRC today.
 Solr has been installed as a service using the shell script 
 install_solr_service.sh ... so it is running as an unprivileged user.
 User is running bin/solr create as root.  This causes permission problems, 
 because the script creates the core's instanceDir with root ownership, then 
 when Solr is instructed to actually create the core, it cannot create the 
 dataDir.
 Enhancement idea:  When the install script is used, leave breadcrumbs 
 somewhere so that the create core section of the main script can find it 
 and su to the user specified during install.






[jira] [Updated] (LUCENE-6725) Reindex crashes the JVM

2015-08-11 Thread Jan Eerdekens (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6725?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Eerdekens updated LUCENE-6725:
--
Attachment: branch5-jdk8u51-results.txt
branch5-jdk7u75-results.txt

I've only been able to run two tests, using different JVMs, for the 5 branch. I 
first tried to run all tests with all the params you gave, but those took very 
long (30+ minutes) and eventually the machine ran out of disk space. The DEV 
machine is a bit memory and disk constrained, so I re-ran the tests, but this 
time only test-core without the params, which took about 10 minutes to run each 
time. Just let me know if this is sufficient. If so, I should also be able to 
run the same for trunk on Thursday.

 Reindex crashes the JVM
 ---

 Key: LUCENE-6725
 URL: https://issues.apache.org/jira/browse/LUCENE-6725
 Project: Lucene - Core
  Issue Type: Bug
  Components: core/index
Affects Versions: 3.5
 Environment: Solaris 10 1/13 (Update 11) Patchset applied.
 Oracle Solaris 10 8/11 s10s_u10wos_17b SPARC
 CPU:total 64 v9, popc, vis1, vis2, vis3, blk_init, cbcond, sun4v, niagara_plus
 Memory: 8k page, physical 25165824k(3240848k free)
 vm_info: Java HotSpot(TM) 64-Bit Server VM (24.75-b04) for solaris-sparc JRE 
 (1.7.0_75-b13)
Reporter: Jan Eerdekens
Priority: Minor
 Attachments: branch5-jdk7u75-results.txt, 
 branch5-jdk8u51-results.txt, hs_err_pid18938.log, 
 lucene-3.5-ant-test-results.txt


 We're using Liferay which uses Lucene behind the screens to index things like 
 documents, web content, users, etc... . When we trigger a full reindex via 
 the Liferay Control Panel, which uses IndexWriter.deleteAll(), the JVM 
 crashes and generates a dump with the following message: 
 #
 # A fatal error has been detected by the Java Runtime Environment:
 #
 #  SIGSEGV (0xb) at pc=0x78de94a8, pid=18938, tid=2478
 #
 # JRE version: Java(TM) SE Runtime Environment (7.0_75-b13) (build 
 1.7.0_75-b13)
 # Java VM: Java HotSpot(TM) 64-Bit Server VM (24.75-b04 mixed mode 
 solaris-sparc compressed oops)
 # Problematic frame:
 # J 5227 C2 
 org.apache.lucene.index.IndexFileNames.segmentFileName(Ljava/lang/String;Ljava/lang/String;)Ljava/lang/String;
  (44 bytes) @ 0x78de94a8 [0x78de9480+0x28]






[jira] [Updated] (SOLR-7909) ZK ACL credential provider cannot be set from JVM params as documented

2015-08-11 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/SOLR-7909?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl updated SOLR-7909:
--
Fix Version/s: (was: 5.3)
   5.4

Fixed reference guide to require configuration in solr.xml for providers 
instead of through sysprops. Leaving open to fix the code to also accept 
sysprops for 5.4 release.

 ZK ACL credential provider cannot be set from JVM params as documented
 --

 Key: SOLR-7909
 URL: https://issues.apache.org/jira/browse/SOLR-7909
 Project: Solr
  Issue Type: Bug
  Components: security
Affects Versions: 5.2.1
Reporter: Jan Høydahl
 Fix For: 5.4


 In RefGuide 
 https://cwiki.apache.org/confluence/display/solr/ZooKeeper+Access+Control you 
 are told to setup ZK security provider classes with system properties, but as 
 noted in the comments to that page, that no longer works, and you need to set 
 these in solr.xml.






Re: Better DocSetCollector

2015-08-11 Thread Ramkumar R. Aiyengar
I wonder if there might be value in BitDocIdSet.Builder which Lucene uses.
It had perf issues of its own, but LUCENE-6645 seems to have fixed them,
and it takes a similar approach to the one above (int array and then FixedBitSet).
On 3 Aug 2015 12:35, Toke Eskildsen t...@statsbiblioteket.dk wrote:

 On Sat, 2015-08-01 at 15:09 -0700, Yonik Seeley wrote:
  I also investigated going the other way and tracking a List<int[]> and
  allocating in smaller chunks (and even having a memory pool to pull
  the fixed size chunks from) but it was slower on my first attempt and
  I haven't returned to try more variants yet.  It *feels* like we
  should be able to get overall speedups by allocating in 8K chunks or
  so when the effects of memory bandwidth (the cost of zeroing) and GC
  are considered.

 Chunked allocations of int[] would still have the problem of having the
 copy-to-bitmap step if the result set gets too big.

 Chunks might work better with the garbage collector, compared to the
 current solution, but I greatly prefer the idea of re-using structures.

 That being said, I realize that it is not simple to choose the proper
 strategy:

 http://stackoverflow.com/questions/1955322/at-what-point-is-it-worth-reusing-arrays-in-java

 In the case of an update-tracked structure, the cost of zeroing is
 linear in the number of changed values. This makes it even harder to
 determine the best strategy as it will be tied to concrete index size
 and query pattern.

 - Toke Eskildsen, State and University Library, Denmark






[JENKINS] Lucene-Solr-trunk-Linux (32bit/jdk1.8.0_51) - Build # 13814 - Failure!

2015-08-11 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/13814/
Java: 32bit/jdk1.8.0_51 -client -XX:+UseG1GC

4 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.CollectionsAPIDistributedZkTest

Error Message:
5 threads leaked from SUITE scope at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest: 1) Thread[id=1686, 
name=zkCallback-252-thread-2, state=TIMED_WAITING, 
group=TGRP-CollectionsAPIDistributedZkTest] at 
sun.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
at 
java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460)
 at 
java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362)
 at 
java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941) 
at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1066)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
at java.lang.Thread.run(Thread.java:745)2) Thread[id=1656, 
name=zkCallback-252-thread-1, state=TIMED_WAITING, 
group=TGRP-CollectionsAPIDistributedZkTest] at 
sun.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
at 
java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460)
 at 
java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362)
 at 
java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941) 
at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1066)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
at java.lang.Thread.run(Thread.java:745)3) Thread[id=1709, 
name=zkCallback-252-thread-4, state=TIMED_WAITING, 
group=TGRP-CollectionsAPIDistributedZkTest] at 
sun.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
at 
java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460)
 at 
java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362)
 at 
java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941) 
at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1066)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
at java.lang.Thread.run(Thread.java:745)4) Thread[id=1654, 
name=TEST-CollectionsAPIDistributedZkTest.test-seed#[C8E88B2E754CE63A]-SendThread(127.0.0.1:52426),
 state=TIMED_WAITING, group=TGRP-CollectionsAPIDistributedZkTest] at 
java.lang.Thread.sleep(Native Method) at 
org.apache.zookeeper.client.StaticHostProvider.next(StaticHostProvider.java:101)
 at 
org.apache.zookeeper.ClientCnxn$SendThread.startConnect(ClientCnxn.java:940)
 at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1003)
5) Thread[id=1655, 
name=TEST-CollectionsAPIDistributedZkTest.test-seed#[C8E88B2E754CE63A]-EventThread,
 state=WAITING, group=TGRP-CollectionsAPIDistributedZkTest] at 
sun.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
 at 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) 
at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:494)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 5 threads leaked from SUITE 
scope at org.apache.solr.cloud.CollectionsAPIDistributedZkTest: 
   1) Thread[id=1686, name=zkCallback-252-thread-2, state=TIMED_WAITING, 
group=TGRP-CollectionsAPIDistributedZkTest]
at sun.misc.Unsafe.park(Native Method)
at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
at 
java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460)
at 
java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362)
at java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941)
at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1066)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)

Re: Better DocSetCollector

2015-08-11 Thread Adrien Grand
The go-to class for building a DocIdSet from a sorted iterator in
Lucene would rather be RoaringDocIdSet. BitDocIdSet works too, but it
has to trade quite some iteration/build speed and memory efficiency in
order to provide random write and read access. It also looks to me
that roaring bitmaps would be a good compromise for the ideas that
Toke and Yonik are having, as they allocate chunks of memory and
happily deal with holes in the doc ID space?
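The small-set-then-upgrade strategy this thread keeps returning to can be sketched in plain Java. This is only an illustration of the idea, not Solr's actual DocSetCollector; a real implementation would use Lucene's FixedBitSet (or RoaringDocIdSet) rather than a raw long[], and the names below are invented:

```java
// Illustrative sketch: buffer doc IDs in an int[] while the set is small,
// then switch to a plain bitset once the buffer fills -- the upgrade step
// the thread refers to as "int array and then fixedbitset".
public class SmallDocSetCollector {
    private final int[] buffer;   // cheap small-set storage, no big zeroing cost
    private int size;
    private long[] bits;          // null until we upgrade to a bitset
    private final int maxDoc;

    public SmallDocSetCollector(int smallSetSize, int maxDoc) {
        this.buffer = new int[smallSetSize];
        this.maxDoc = maxDoc;
    }

    public void collect(int doc) {
        if (bits != null) {
            bits[doc >> 6] |= 1L << (doc & 63);
        } else if (size < buffer.length) {
            buffer[size++] = doc;
        } else {
            // Upgrade: pay for allocating and filling the big bitset only
            // once the result set has proven to be large.
            bits = new long[(maxDoc + 63) >> 6];
            for (int i = 0; i < size; i++) {
                bits[buffer[i] >> 6] |= 1L << (buffer[i] & 63);
            }
            bits[doc >> 6] |= 1L << (doc & 63);
        }
    }

    public boolean contains(int doc) {
        if (bits != null) {
            return (bits[doc >> 6] & (1L << (doc & 63))) != 0;
        }
        for (int i = 0; i < size; i++) {
            if (buffer[i] == doc) return true;
        }
        return false;
    }
}
```

The point of the upgrade is that the zeroing cost of the full-size bitset is paid only for queries whose result set actually grows large, which is the trade-off the measurements in this thread are probing.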

On Tue, Aug 11, 2015 at 11:25 AM, Ramkumar R. Aiyengar
andyetitmo...@gmail.com wrote:
 I wonder if there might be value in BitDocIdSet.Builder which Lucene uses.
 It had perf issues of its own, but LUCENE-6645 seems to have fixed them,
 and it takes a similar approach to the one above (int array and then FixedBitSet).

 On 3 Aug 2015 12:35, Toke Eskildsen t...@statsbiblioteket.dk wrote:

 On Sat, 2015-08-01 at 15:09 -0700, Yonik Seeley wrote:
  I also investigated going the other way and tracking a List<int[]> and
  allocating in smaller chunks (and even having a memory pool to pull
  the fixed size chunks from) but it was slower on my first attempt and
  I haven't returned to try more variants yet.  It *feels* like we
  should be able to get overall speedups by allocating in 8K chunks or
  so when the effects of memory bandwidth (the cost of zeroing) and GC
  are considered.

 Chunked allocations of int[] would still have the problem of having the
 copy-to-bitmap step if the result set gets too big.

 Chunks might work better with the garbage collector, compared to the
 current solution, but I greatly prefer the idea of re-using structures.

 That being said, I realize that it is not simple to choose the proper
 strategy:

 http://stackoverflow.com/questions/1955322/at-what-point-is-it-worth-reusing-arrays-in-java

 In the case of an update-tracked structure, the cost of zeroing is
 linear in the number of changed values. This makes it even harder to
 determine the best strategy as it will be tied to concrete index size
 and query pattern.

 - Toke Eskildsen, State and University Library, Denmark







-- 
Adrien




[jira] [Commented] (SOLR-7909) ZK ACL credential provider cannot be set from JVM params as documented

2015-08-11 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-7909?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14681367#comment-14681367
 ] 

Jan Høydahl commented on SOLR-7909:
---

I see that {{ZkACLProvider}} is created both in {{ZkController}} (from solr.xml 
config only) and in {{SolrZkClient}} (from System props only), and when created 
in {{ZkController.java}} the instance is passed to {{SolrZkClient}}'s 
constructor.

The fix is probably to make a single factory class or method for {{ZkACLProvider}} 
which takes both the sysprop name and the class name from solr.xml as input and 
creates the correct provider, with the sysprop having priority.
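A minimal sketch of that single-factory idea, with the sysprop taking priority over the solr.xml class name. All names here (the class, the "zkACLProvider" property) are hypothetical and invented for illustration; the real Solr classes differ:

```java
// Hypothetical sketch: one factory that consults the system property first,
// then the class name configured in solr.xml, then a default provider.
public class AclProviderFactory {

    /** Stand-in for Solr's real ZkACLProvider interface. */
    public interface ZkACLProvider {}

    /** Fallback when neither a sysprop nor solr.xml configures a provider. */
    public static class DefaultProvider implements ZkACLProvider {}

    static final String SYSPROP = "zkACLProvider"; // hypothetical property name

    public static ZkACLProvider create(String solrXmlClassName) {
        // Sysprop wins, per the priority suggested above; fall back to solr.xml.
        String className = System.getProperty(SYSPROP, solrXmlClassName);
        if (className == null) {
            return new DefaultProvider();
        }
        try {
            return (ZkACLProvider) Class.forName(className)
                    .getDeclaredConstructor().newInstance();
        } catch (ReflectiveOperationException e) {
            throw new RuntimeException("Could not load ACL provider " + className, e);
        }
    }
}
```

Routing both {{ZkController}} and {{SolrZkClient}} through one such entry point would remove the current split where each constructor path reads a different configuration source.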

 ZK ACL credential provider cannot be set from JVM params as documented
 --

 Key: SOLR-7909
 URL: https://issues.apache.org/jira/browse/SOLR-7909
 Project: Solr
  Issue Type: Bug
  Components: security
Affects Versions: 5.2.1
Reporter: Jan Høydahl
Priority: Blocker
 Fix For: 5.3


 In RefGuide 
 https://cwiki.apache.org/confluence/display/solr/ZooKeeper+Access+Control you 
 are told to setup ZK security provider classes with system properties, but as 
 noted in the comments to that page, that no longer works, and you need to set 
 these in solr.xml.
 This should be a simple fix to get into 5.3, and quite important since 5.3 is 
 more than anything a security release...






[JENKINS] Lucene-Solr-5.x-MacOSX (64bit/jdk1.7.0) - Build # 2563 - Failure!

2015-08-11 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-MacOSX/2563/
Java: 64bit/jdk1.7.0 -XX:-UseCompressedOops -XX:+UseParallelGC

1 tests failed.
FAILED:  
org.apache.solr.update.SoftAutoCommitTest.testSoftAndHardCommitMaxTimeMixedAdds

Error Message:
soft529 wasn't fast enough

Stack Trace:
java.lang.AssertionError: soft529 wasn't fast enough
at 
__randomizedtesting.SeedInfo.seed([9B258208505778:514FDC02B92367DF]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertNotNull(Assert.java:526)
at 
org.apache.solr.update.SoftAutoCommitTest.testSoftAndHardCommitMaxTimeMixedAdds(SoftAutoCommitTest.java:111)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at java.lang.Thread.run(Thread.java:745)




Build Log:
[...truncated 9899 lines...]
   [junit4] Suite: org.apache.solr.update.SoftAutoCommitTest
   [junit4]   2 Creating dataDir: 

[jira] [Commented] (LUCENE-6732) Improve validate-source-patterns in build.xml (e.g., detect invalid license headers!!)

2015-08-11 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6732?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14681780#comment-14681780
 ] 

Robert Muir commented on LUCENE-6732:
-

+1, this is great

 Improve validate-source-patterns in build.xml (e.g., detect invalid license 
 headers!!)
 --

 Key: LUCENE-6732
 URL: https://issues.apache.org/jira/browse/LUCENE-6732
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Uwe Schindler
Assignee: Uwe Schindler
 Attachments: LUCENE-6732.patch


 Today I enabled warnings analysis on Policeman Jenkins. This scans the build 
 log for warnings by javac and reports them in statistics, together with 
 source file dumps.
 When doing that I found out that someone had again added a lot of invalid 
 license headers using {{/\*\*}} instead of a simple comment. This causes 
 javadocs warnings under some circumstances, because {{/\*\*}} is the start of 
 javadocs and not a license comment.
 I then tried to fix validate-source-patterns to detect this, but due to a 
 bug in ANT, the {{containsregexp/}} filter is applied per line (although it 
 has multiline matching capabilities!!!).
 So I rewrote our checker to run with Groovy. This also has some good parts:
 - it tells you what was broken; otherwise you just know there is an error, 
 but not what's wrong (tab, nocommit, ...)
 - it's much faster (multiple {{containsregexp/}} filters read the file over 
 and over; this one reads the file once into a string and then applies all 
 regular expressions).
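The strategy described above, reading each file once into a string, applying every (possibly multiline) pattern to the whole content, and reporting which rule was violated, might look roughly like this in plain Java. The actual checker is Groovy driven from build.xml; the rule names and patterns below are illustrative only:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.regex.Pattern;

// Illustrative sketch: each pattern runs against the full file content
// (so multiline regexes work, unlike per-line containsregexp), and the
// name of the violated rule is reported, not just pass/fail.
public class SourcePatternChecker {
    private final Map<String, Pattern> rules = new LinkedHashMap<>();

    public SourcePatternChecker() {
        // A file that opens with /** looks like javadoc, not a license header.
        rules.put("javadoc-style license header", Pattern.compile("\\A\\s*/\\*\\*"));
        rules.put("nocommit marker", Pattern.compile("nocommit", Pattern.CASE_INSENSITIVE));
        rules.put("tab character", Pattern.compile("\\t"));
    }

    /** Returns the names of all rules the given file content violates. */
    public List<String> check(String content) {
        List<String> violations = new ArrayList<>();
        for (Map.Entry<String, Pattern> e : rules.entrySet()) {
            if (e.getValue().matcher(content).find()) {
                violations.add(e.getKey());
            }
        }
        return violations;
    }
}
```

Reading the file once and reusing the string for every pattern is also where the speedup comes from, since nothing is re-read per rule.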






[jira] [Assigned] (LUCENE-6725) Reindex crashes the JVM

2015-08-11 Thread Dawid Weiss (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6725?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Weiss reassigned LUCENE-6725:
---

Assignee: Dawid Weiss

 Reindex crashes the JVM
 ---

 Key: LUCENE-6725
 URL: https://issues.apache.org/jira/browse/LUCENE-6725
 Project: Lucene - Core
  Issue Type: Bug
  Components: core/index
Affects Versions: 3.5
 Environment: Solaris 10 1/13 (Update 11) Patchset applied.
 Oracle Solaris 10 8/11 s10s_u10wos_17b SPARC
 CPU:total 64 v9, popc, vis1, vis2, vis3, blk_init, cbcond, sun4v, niagara_plus
 Memory: 8k page, physical 25165824k(3240848k free)
 vm_info: Java HotSpot(TM) 64-Bit Server VM (24.75-b04) for solaris-sparc JRE 
 (1.7.0_75-b13)
Reporter: Jan Eerdekens
Assignee: Dawid Weiss
Priority: Minor
 Attachments: branch5-jdk7u75-results.txt, 
 branch5-jdk8u51-results.txt, hs_err_pid18938.log, 
 lucene-3.5-ant-test-results.txt


 We're using Liferay which uses Lucene behind the screens to index things like 
 documents, web content, users, etc... . When we trigger a full reindex via 
 the Liferay Control Panel, which uses IndexWriter.deleteAll(), the JVM 
 crashes and generates a dump with the following message: 
 #
 # A fatal error has been detected by the Java Runtime Environment:
 #
 #  SIGSEGV (0xb) at pc=0x78de94a8, pid=18938, tid=2478
 #
 # JRE version: Java(TM) SE Runtime Environment (7.0_75-b13) (build 
 1.7.0_75-b13)
 # Java VM: Java HotSpot(TM) 64-Bit Server VM (24.75-b04 mixed mode 
 solaris-sparc compressed oops)
 # Problematic frame:
 # J 5227 C2 
 org.apache.lucene.index.IndexFileNames.segmentFileName(Ljava/lang/String;Ljava/lang/String;)Ljava/lang/String;
  (44 bytes) @ 0x78de94a8 [0x78de9480+0x28]






[jira] [Commented] (LUCENE-6725) Reindex crashes the JVM

2015-08-11 Thread Dawid Weiss (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6725?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14681633#comment-14681633
 ] 

Dawid Weiss commented on LUCENE-6725:
-

The nightlies indeed may consume a lot of disk space and CPU. Thanks for 
running the tests on those two JVMs though!

The segfault is really difficult to explain -- there is nothing special in:
{code}
J 5227 C2 
org.apache.lucene.index.IndexFileNames.segmentFileName(Ljava/lang/String;Ljava/lang/String;)Ljava/lang/String;
 (44 bytes) @ 0x78de94a8 [0x78de9480+0x28]
{code}
to justify a seg fault. 

I think you should move to a newer JVM, as suggested by Uwe.


 Reindex crashes the JVM
 ---

 Key: LUCENE-6725
 URL: https://issues.apache.org/jira/browse/LUCENE-6725
 Project: Lucene - Core
  Issue Type: Bug
  Components: core/index
Affects Versions: 3.5
 Environment: Solaris 10 1/13 (Update 11) Patchset applied.
 Oracle Solaris 10 8/11 s10s_u10wos_17b SPARC
 CPU:total 64 v9, popc, vis1, vis2, vis3, blk_init, cbcond, sun4v, niagara_plus
 Memory: 8k page, physical 25165824k(3240848k free)
 vm_info: Java HotSpot(TM) 64-Bit Server VM (24.75-b04) for solaris-sparc JRE 
 (1.7.0_75-b13)
Reporter: Jan Eerdekens
Priority: Minor
 Attachments: branch5-jdk7u75-results.txt, 
 branch5-jdk8u51-results.txt, hs_err_pid18938.log, 
 lucene-3.5-ant-test-results.txt


 We're using Liferay which uses Lucene behind the screens to index things like 
 documents, web content, users, etc... . When we trigger a full reindex via 
 the Liferay Control Panel, which uses IndexWriter.deleteAll(), the JVM 
 crashes and generates a dump with the following message: 
 #
 # A fatal error has been detected by the Java Runtime Environment:
 #
 #  SIGSEGV (0xb) at pc=0x78de94a8, pid=18938, tid=2478
 #
 # JRE version: Java(TM) SE Runtime Environment (7.0_75-b13) (build 
 1.7.0_75-b13)
 # Java VM: Java HotSpot(TM) 64-Bit Server VM (24.75-b04 mixed mode 
 solaris-sparc compressed oops)
 # Problematic frame:
 # J 5227 C2 
 org.apache.lucene.index.IndexFileNames.segmentFileName(Ljava/lang/String;Ljava/lang/String;)Ljava/lang/String;
  (44 bytes) @ 0x78de94a8 [0x78de9480+0x28]






[JENKINS] Lucene-Solr-5.x-Linux (64bit/jdk1.7.0_80) - Build # 13599 - Still Failing!

2015-08-11 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/13599/
Java: 64bit/jdk1.7.0_80 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  org.apache.solr.cloud.CollectionStateFormat2Test.test

Error Message:
Error from server at http://127.0.0.1:38683: Could not find collection : 
myExternColl

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at http://127.0.0.1:38683: Could not find collection : myExternColl
at 
__randomizedtesting.SeedInfo.seed([3DE14CEBB95BC5EF:B5B5733117A7A817]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:560)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:234)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:226)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:376)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:328)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1085)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:856)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:799)
at org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1220)
at 
org.apache.solr.cloud.CollectionStateFormat2Test.testZkNodeLocation(CollectionStateFormat2Test.java:84)
at 
org.apache.solr.cloud.CollectionStateFormat2Test.test(CollectionStateFormat2Test.java:40)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:963)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:938)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 

[JENKINS-EA] Lucene-Solr-trunk-Linux (64bit/jdk1.9.0-ea-b60) - Build # 13815 - Still Failing!

2015-08-11 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/13815/
Java: 64bit/jdk1.9.0-ea-b60 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC 
-Djava.locale.providers=JRE,SPI

1 tests failed.
FAILED:  org.apache.solr.cloud.CollectionsAPIDistributedZkTest.test

Error Message:
Captured an uncaught exception in thread: Thread[id=6294, name=collection0, 
state=RUNNABLE, group=TGRP-CollectionsAPIDistributedZkTest]

Stack Trace:
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught 
exception in thread: Thread[id=6294, name=collection0, state=RUNNABLE, 
group=TGRP-CollectionsAPIDistributedZkTest]
Caused by: 
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at http://127.0.0.1:49029: Could not find collection : 
awholynewstresscollection_collection0_0
at __randomizedtesting.SeedInfo.seed([34E6386CB841F46A]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:560)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:234)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:226)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:372)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:325)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1086)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:857)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:800)
at org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1220)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest$1CollectionThread.run(CollectionsAPIDistributedZkTest.java:895)




Build Log:
[...truncated 10350 lines...]
   [junit4] Suite: org.apache.solr.cloud.CollectionsAPIDistributedZkTest
   [junit4]   2 Creating dataDir: 
/home/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build/solr-core/test/J2/temp/solr.cloud.CollectionsAPIDistributedZkTest_34E6386CB841F46A-001/init-core-data-001
   [junit4]   2 704786 INFO  
(SUITE-CollectionsAPIDistributedZkTest-seed#[34E6386CB841F46A]-worker) [] 
o.a.s.SolrTestCaseJ4 Randomized ssl (false) and clientAuth (false)
   [junit4]   2 704787 INFO  
(SUITE-CollectionsAPIDistributedZkTest-seed#[34E6386CB841F46A]-worker) [] 
o.a.s.BaseDistributedSearchTestCase Setting hostContext system property: /
   [junit4]   2 704789 INFO  
(TEST-CollectionsAPIDistributedZkTest.test-seed#[34E6386CB841F46A]) [] 
o.a.s.c.ZkTestServer STARTING ZK TEST SERVER
   [junit4]   2 704789 INFO  (Thread-1985) [] o.a.s.c.ZkTestServer client 
port:0.0.0.0/0.0.0.0:0
   [junit4]   2 704789 INFO  (Thread-1985) [] o.a.s.c.ZkTestServer 
Starting server
   [junit4]   2 704889 INFO  
(TEST-CollectionsAPIDistributedZkTest.test-seed#[34E6386CB841F46A]) [] 
o.a.s.c.ZkTestServer start zk server on port:55789
   [junit4]   2 704889 INFO  
(TEST-CollectionsAPIDistributedZkTest.test-seed#[34E6386CB841F46A]) [] 
o.a.s.c.c.SolrZkClient Using default ZkCredentialsProvider
   [junit4]   2 704896 INFO  
(TEST-CollectionsAPIDistributedZkTest.test-seed#[34E6386CB841F46A]) [] 
o.a.s.c.c.ConnectionManager Waiting for client to connect to ZooKeeper
   [junit4]   2 704901 INFO  (zkCallback-791-thread-1) [] 
o.a.s.c.c.ConnectionManager Watcher 
org.apache.solr.common.cloud.ConnectionManager@4980ca39 
name:ZooKeeperConnection Watcher:127.0.0.1:55789 got event WatchedEvent 
state:SyncConnected type:None path:null path:null type:None
   [junit4]   2 704901 INFO  
(TEST-CollectionsAPIDistributedZkTest.test-seed#[34E6386CB841F46A]) [] 
o.a.s.c.c.ConnectionManager Client is connected to ZooKeeper
   [junit4]   2 704901 INFO  
(TEST-CollectionsAPIDistributedZkTest.test-seed#[34E6386CB841F46A]) [] 
o.a.s.c.c.SolrZkClient Using default ZkACLProvider
   [junit4]   2 704901 INFO  
(TEST-CollectionsAPIDistributedZkTest.test-seed#[34E6386CB841F46A]) [] 
o.a.s.c.c.SolrZkClient makePath: /solr
   [junit4]   2 704903 INFO  
(TEST-CollectionsAPIDistributedZkTest.test-seed#[34E6386CB841F46A]) [] 
o.a.s.c.c.SolrZkClient Using default ZkCredentialsProvider
   [junit4]   2 704903 INFO  
(TEST-CollectionsAPIDistributedZkTest.test-seed#[34E6386CB841F46A]) [] 
o.a.s.c.c.ConnectionManager Waiting for client to connect to ZooKeeper
   [junit4]   2 704904 INFO  (zkCallback-792-thread-1) [] 
o.a.s.c.c.ConnectionManager Watcher 
org.apache.solr.common.cloud.ConnectionManager@f51d91e name:ZooKeeperConnection 
Watcher:127.0.0.1:55789/solr got event WatchedEvent state:SyncConnected 
type:None path:null path:null type:None
   [junit4]   2 704904 INFO  
(TEST-CollectionsAPIDistributedZkTest.test-seed#[34E6386CB841F46A]) [] 
o.a.s.c.c.ConnectionManager Client is connected to ZooKeeper
   [junit4]   2 

[JENKINS-EA] Lucene-Solr-trunk-Linux (32bit/jdk1.8.0_60-ea-b24) - Build # 13816 - Still Failing!

2015-08-11 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/13816/
Java: 32bit/jdk1.8.0_60-ea-b24 -server -XX:+UseSerialGC

1 tests failed.
FAILED:  org.apache.solr.cloud.CdcrReplicationHandlerTest.doTest

Error Message:
There are still nodes recoverying - waited for 330 seconds

Stack Trace:
java.lang.AssertionError: There are still nodes recoverying - waited for 330 
seconds
at 
__randomizedtesting.SeedInfo.seed([5AAC63F1568B9249:FDE8DB553B3081F0]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:172)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:133)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:128)
at 
org.apache.solr.cloud.BaseCdcrDistributedZkTest.waitForRecoveriesToFinish(BaseCdcrDistributedZkTest.java:465)
at 
org.apache.solr.cloud.BaseCdcrDistributedZkTest.clearSourceCollection(BaseCdcrDistributedZkTest.java:319)
at 
org.apache.solr.cloud.CdcrReplicationHandlerTest.doTestPartialReplicationWithTruncatedTlog(CdcrReplicationHandlerTest.java:121)
at 
org.apache.solr.cloud.CdcrReplicationHandlerTest.doTest(CdcrReplicationHandlerTest.java:52)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:963)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:938)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Updated] (LUCENE-6732) Improve validate-source-patterns in build.xml (e.g., detect invalid license headers!!)

2015-08-11 Thread Uwe Schindler (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6732?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uwe Schindler updated LUCENE-6732:
--
Description: 
Today I enabled warnings analysis on Policeman Jenkins. This scans the build 
log for warnings emitted by javac and reports them as statistics, together with 
source file dumps.

When doing that I found out that someone again added a lot of invalid license 
headers using {{/\*\*}} instead of a simple comment. This causes javadoc 
warnings under some circumstances, because {{/\*\*}} starts a javadoc comment, 
not a license comment.

I then tried to fix validate-source-patterns to detect this, but due to a 
bug in Ant, the {{<containsregexp/>}} filter is applied per line (although it 
has multiline matching capabilities!!!).

So I rewrote our checker in Groovy. This also has some good parts:
- it tells you what was broken; otherwise you just know there is an error, but 
not what's wrong (tab, nocommit, ...)
- it's much faster (multiple {{<containsregexp/>}} filters read the file over 
and over; this one reads the file once into a string and then applies all 
regular expressions).

  was:
Today I enabled warnings analysis on Policeman Jenkins. This scans the build 
log for warnings emitted by javac and reports them as statistics, together with 
source file dumps.

When doing that I found out that someone again added a lot of invalid license 
headers using {{/**}} instead of a simple comment. This causes javadoc 
warnings under some circumstances, because {{/**}} starts a javadoc comment, 
not a license comment.

I then tried to fix the validate-source-patterns to detect this, but due to a 
bug in Ant, the {{<containsregexp/>}} filter is applied per line (although it 
has multiline matching capabilities!!!).

So I rewrote our checker in Groovy. This also has some good parts:
- it tells you what was broken; otherwise you just know there is an error, but 
not what's wrong (tab, nocommit, ...)
- it's much faster (multiple {{<containsregexp/>}} filters read the file over 
and over; this one reads the file once into a string and then applies all 
regular expressions).


 Improve validate-source-patterns in build.xml (e.g., detect invalid license 
 headers!!)
 --

 Key: LUCENE-6732
 URL: https://issues.apache.org/jira/browse/LUCENE-6732
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Uwe Schindler
Assignee: Uwe Schindler

 Today I enabled warnings analysis on Policeman Jenkins. This scans the build 
 log for warnings emitted by javac and reports them as statistics, together 
 with source file dumps.
 When doing that I found out that someone again added a lot of invalid 
 license headers using {{/\*\*}} instead of a simple comment. This causes 
 javadoc warnings under some circumstances, because {{/\*\*}} starts a 
 javadoc comment, not a license comment.
 I then tried to fix validate-source-patterns to detect this, but due to a 
 bug in Ant, the {{<containsregexp/>}} filter is applied per line (although 
 it has multiline matching capabilities!!!).
 So I rewrote our checker in Groovy. This also has some good parts:
 - it tells you what was broken; otherwise you just know there is an error, 
 but not what's wrong (tab, nocommit, ...)
 - it's much faster (multiple {{<containsregexp/>}} filters read the file 
 over and over; this one reads the file once into a string and then applies 
 all regular expressions).






[jira] [Updated] (LUCENE-6174) Improve ant eclipse to select right JRE for building

2015-08-11 Thread Dawid Weiss (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6174?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Weiss updated LUCENE-6174:

Attachment: LUCENE-6174.patch

I think this can be hardcoded in {{dot.classpath.xsl}} (to be 1.7 in branch_5x 
and 1.8 in trunk).

This entry declares an execution environment, which one can then map in 
Eclipse settings to any available 1.8-compatible JVM:
{code}
  <classpathentry kind="con" 
path="org.eclipse.jdt.launching.JRE_CONTAINER/org.eclipse.jdt.internal.debug.ui.launcher.StandardVMType/JavaSE-1.8"/>
{code}

 Improve ant eclipse to select right JRE for building
 --

 Key: LUCENE-6174
 URL: https://issues.apache.org/jira/browse/LUCENE-6174
 Project: Lucene - Core
  Issue Type: Improvement
  Components: general/build
Reporter: Uwe Schindler
Assignee: Uwe Schindler
 Attachments: LUCENE-6174.patch


 Whenever I run ant eclipse the setting choosing the right JVM is lost and 
 has to be reassigned in the project properties.
 In fact the classpath generator writes a new classpath file (as it should), 
 but this only contains the default entry:
 {code:xml}
 <classpathentry kind="con" path="org.eclipse.jdt.launching.JRE_CONTAINER"/>
 {code}
 Instead it should preserve something like:
 {code:xml}
 <classpathentry kind="con" 
 path="org.eclipse.jdt.launching.JRE_CONTAINER/org.eclipse.jdt.internal.debug.ui.launcher.StandardVMType/jdk1.8.0_25"/>
 {code}
 We can either pass this via an Ant property on the command line, or the user 
 can set it in lucene/build.properties or per user. An alternative would be 
 to guess the name jdk1.8.0_25 from Ant's java.home. If this name does 
 not exist in Eclipse, it would produce an error and the user would need 
 to add the correct JDK.
 I currently have the problem that my Eclipse uses Java 7 by default, and 
 whenever I rebuild the Eclipse project, the change to Java 8 in trunk is gone.
 When this is fixed, I could easily/automatically have the right JDK used by 
 Eclipse for trunk (Java 8) and branch_5x (Java 7).






[jira] [Updated] (SOLR-7790) Update Carrot2 clustering contrib to version 3.10.3

2015-08-11 Thread Dawid Weiss (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7790?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Weiss updated SOLR-7790:
--
Summary: Update Carrot2 clustering contrib to version 3.10.3  (was: Update 
Carrot2 clustering contrib to version 3.10.2)

 Update Carrot2 clustering contrib to version 3.10.3
 ---

 Key: SOLR-7790
 URL: https://issues.apache.org/jira/browse/SOLR-7790
 Project: Solr
  Issue Type: Task
Reporter: Dawid Weiss
Assignee: Dawid Weiss
Priority: Minor
 Fix For: 5.3, Trunk

 Attachments: SOLR-7790.patch


 This issue brings the clustering extension up to date and also involves 
 upgrading a few other libraries (see sub-tasks or linked issues).






[jira] [Resolved] (LUCENE-6725) Reindex crashes the JVM

2015-08-11 Thread Dawid Weiss (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6725?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Weiss resolved LUCENE-6725.
-
Resolution: Won't Fix

Closing as won't fix since it only happens on a very old Lucene version 
(3.5.0). Newer Lucene versions seem to pass just fine. A workaround is to 
upgrade the JVM to a modern release of the 1.8.x line.

 Reindex crashes the JVM
 ---

 Key: LUCENE-6725
 URL: https://issues.apache.org/jira/browse/LUCENE-6725
 Project: Lucene - Core
  Issue Type: Bug
  Components: core/index
Affects Versions: 3.5
 Environment: Solaris 10 1/13 (Update 11) Patchset applied.
 Oracle Solaris 10 8/11 s10s_u10wos_17b SPARC
 CPU:total 64 v9, popc, vis1, vis2, vis3, blk_init, cbcond, sun4v, niagara_plus
 Memory: 8k page, physical 25165824k(3240848k free)
 vm_info: Java HotSpot(TM) 64-Bit Server VM (24.75-b04) for solaris-sparc JRE 
 (1.7.0_75-b13)
Reporter: Jan Eerdekens
Assignee: Dawid Weiss
Priority: Minor
 Attachments: branch5-jdk7u75-results.txt, 
 branch5-jdk8u51-results.txt, hs_err_pid18938.log, 
 lucene-3.5-ant-test-results.txt


 We're using Liferay which uses Lucene behind the screens to index things like 
 documents, web content, users, etc... . When we trigger a full reindex via 
 the Liferay Control Panel, which uses IndexWriter.deleteAll(), the JVM 
 crashes and generates a dump with the following message: 
 #
 # A fatal error has been detected by the Java Runtime Environment:
 #
 #  SIGSEGV (0xb) at pc=0x78de94a8, pid=18938, tid=2478
 #
 # JRE version: Java(TM) SE Runtime Environment (7.0_75-b13) (build 
 1.7.0_75-b13)
 # Java VM: Java HotSpot(TM) 64-Bit Server VM (24.75-b04 mixed mode 
 solaris-sparc compressed oops)
 # Problematic frame:
 # J 5227 C2 
 org.apache.lucene.index.IndexFileNames.segmentFileName(Ljava/lang/String;Ljava/lang/String;)Ljava/lang/String;
  (44 bytes) @ 0x78de94a8 [0x78de9480+0x28]






[jira] [Commented] (SOLR-4763) Performance issue when using group.facet=true

2015-08-11 Thread Ovidiu Mihalcea (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4763?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14681683#comment-14681683
 ] 

Ovidiu Mihalcea commented on SOLR-4763:
---

 We would really welcome some good news on this. We need result grouping 
 with faceting, and this is really slowing down our site... :(

 Performance issue when using group.facet=true
 -

 Key: SOLR-4763
 URL: https://issues.apache.org/jira/browse/SOLR-4763
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.2
Reporter: Alexander Koval
Assignee: Erick Erickson
 Fix For: 5.3, Trunk

 Attachments: SOLR-4763.patch, SOLR-4763.patch, SOLR-4763.patch


 I do not know whether this is bug or not. But calculating facets with 
 {{group.facet=true}} is too slow.
 I have query that:
 {code}
 matches: 730597,
 ngroups: 24024,
 {code}
 1. All queries with {{group.facet=true}}:
 {code}
 QTime: 5171
 facet: {
 time: 4716
 {code}
 2. Without {{group.facet}}:
 * First query:
 {code}
 QTime: 3284
 facet: {
 time: 3104
 {code}
 * Next queries:
 {code}
 QTime: 230,
 facet: {
 time: 76
 {code}
 So I think with {{group.facet=true}} Solr doesn't use cache to calculate 
 facets.
 Is it possible to improve performance of facets when {{group.facet=true}}?






[jira] [Created] (LUCENE-6732) Improve validate-source-patterns in build.xml (e.g., detect invalid license headers!!)

2015-08-11 Thread Uwe Schindler (JIRA)
Uwe Schindler created LUCENE-6732:
-

 Summary: Improve validate-source-patterns in build.xml (e.g., 
detect invalid license headers!!)
 Key: LUCENE-6732
 URL: https://issues.apache.org/jira/browse/LUCENE-6732
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Uwe Schindler
Assignee: Uwe Schindler


Today I enabled warnings analysis on Policeman Jenkins. This scans the build 
log for warnings emitted by javac and reports them as statistics, together with 
source file dumps.

When doing that I found out that someone again added a lot of invalid license 
headers using {{/**}} instead of a simple comment. This causes javadoc 
warnings under some circumstances, because {{/**}} starts a javadoc comment, 
not a license comment.

I then tried to fix validate-source-patterns to detect this, but due to a 
bug in Ant, the {{<containsregexp/>}} filter is applied per line (although it 
has multiline matching capabilities!!!).

So I rewrote our checker in Groovy. This also has some good parts:
- it tells you what was broken; otherwise you just know there is an error, but 
not what's wrong (tab, nocommit, ...)
- it's much faster (multiple {{<containsregexp/>}} filters read the file over 
and over; this one reads the file once into a string and then applies all 
regular expressions).
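The Groovy rewrite itself is not shown in this message, but the approach it 
describes -- read each file once into a string, then apply every pattern to 
that string and report which check failed -- can be sketched in Java. The 
class name, pattern names, and regexes below are illustrative assumptions, 
not the actual checker in lucene/tools:

```java
import java.util.*;
import java.util.regex.*;

public class SourcePatternChecker {
    // Hypothetical patterns -- the real checker defines its own list.
    static final Map<String, Pattern> PATTERNS = new LinkedHashMap<>();
    static {
        // A license header opened with /** is parsed as javadoc. DOTALL lets a
        // single pass over the whole file content catch it, which Ant's
        // per-line <containsregexp/> cannot.
        PATTERNS.put("javadoc-style license header",
                Pattern.compile("^/\\*\\*.{0,200}Licensed to the Apache Software Foundation",
                        Pattern.DOTALL));
        PATTERNS.put("nocommit", Pattern.compile("nocommit"));
        PATTERNS.put("tab", Pattern.compile("\\t"));
    }

    /** Reads the content once and applies all patterns, naming each violation. */
    public static List<String> check(String fileName, String content) {
        List<String> problems = new ArrayList<>();
        for (Map.Entry<String, Pattern> e : PATTERNS.entrySet()) {
            if (e.getValue().matcher(content).find()) {
                problems.add(e.getKey() + ": " + fileName);
            }
        }
        return problems;
    }

    public static void main(String[] args) {
        String bad = "/** Licensed to the Apache Software Foundation */\nclass Foo {}\n";
        String ok  = "/* Licensed to the Apache Software Foundation */\nclass Bar {}\n";
        System.out.println(check("Foo.java", bad)); // flags the javadoc-style header
        System.out.println(check("Bar.java", ok));  // prints []
    }
}
```

Unlike a chain of Ant filters, each file's content is scanned in memory once 
per pattern, and the map key tells you which rule fired.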






[jira] [Commented] (SOLR-7906) java.lang.NullPointerException from Json when doing solr search

2015-08-11 Thread nelson maria (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7906?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14681764#comment-14681764
 ] 

nelson maria commented on SOLR-7906:


That is pseudo-code, Erick.

Here is what it would look like without UTF.

http://localhost:8080/query?q=(PgmNaFuzzy:tradition~.75 with~.75 a~.75 
twist~.75) OR PgmNa:\tradition with a twist\) AND(SerNaFuzzy:quilting~.75 
arts~.75)))wt=jsonfl=id,PgmCde,PgmNa,PgmPriInd,SerCde,SerNa,SerPriInd,score,PgmNAscore:strdist(PgmNa:\tradition
 with a twist\,edit),SerNAscore:strdist(SerNa:\quilting 
arts\,edit)start=1rows=50PgmId=EP010131490183matchtype=series_ep

Here is with the UTF

http:///localhost:8080://query?q=%28%28PgmNaFuzzy%3A%28tradition%7E0.75+with%7E0.75+a%7E0.75+twist%7E0.75%29+OR+PgmNa%3A%22tradition+with+a+twist%22%29AND+%28SerNaFuzzy%3A%28quilting%7E0.75+arts%7E0.75%29%29%298ShowYr%3A%222015%22+OR+ShowYr%3A%5B*+TO+1939%5D%29wt=jsonfl=id,PgmCde,PgmNa,PgmPriInd,SerCde,SerNa,SerPriInd,score,PgmNAscore:strdist(PgmNa%2C%22tradition+with+a+twist%22%2Cedit%29,SerNAscore:strdist(SerNa%2C%22quilting+arts%22%2Cedit%29start=1rows=50PgmId=EP010131490183matchtype=series_ep

 java.lang.NullPointerException from Json when doing solr search
 ---

 Key: SOLR-7906
 URL: https://issues.apache.org/jira/browse/SOLR-7906
 Project: Solr
  Issue Type: Bug
  Components: clients - java
Affects Versions: 4.8
 Environment: Linux
Reporter: nelson maria
Priority: Blocker
 Attachments: field type.txt


 Getting this response from Solr when doing search.
 {"error":{"trace":"java.lang.NullPointerException\n","code":500}}






Re: Better DocSetCollector

2015-08-11 Thread Toke Eskildsen
On Tue, 2015-08-11 at 11:57 +0200, Adrien Grand wrote:
 The go-to class for building a DocIdSet from a sorted iterator in
 Lucene would rather be RoaringDocIdSet. BitDocIdSet works too, but it
 has to trade quite some iteration/build speed and memory efficiency in
 order to provide random write and read access.

I guess one of the reasons for this mess of id set implementations is
the different needs: sparse vs. dense (memory and performance),
iteration vs. random access (performance), and/or/andNot operations
across different implementations.

I have looked at RoaringDocIdSet and agree that it looks very promising.
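For intuition, here is a toy sketch of the layout RoaringDocIdSet is based on (illustrative only, not Lucene's actual class or API): the doc-id space is cut into 2^16-wide blocks, and each block stores its members either as a sorted short[] when sparse or as a bitset when dense, which is what makes it compact and cheap to build from a sorted iterator.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Toy "roaring" sketch: per 2^16-wide block, members are kept as a packed
// short[] (sparse) or a 1K-long bitset (dense). Above 4096 docs per block,
// the 8 KB bitset is smaller than the short array.
public class TinyRoaring {
  private static final int BLOCK = 1 << 16;
  private static final int DENSE = BLOCK / 16; // 4096 docs
  private final Map<Integer, Object> blocks = new HashMap<>(); // high bits -> short[] | long[]

  // Build from already-sorted doc ids, as one would from a sorted iterator.
  public static TinyRoaring build(int[] sortedDocs) {
    TinyRoaring set = new TinyRoaring();
    Map<Integer, List<Integer>> grouped = new HashMap<>();
    for (int doc : sortedDocs) {
      grouped.computeIfAbsent(doc >>> 16, k -> new ArrayList<>()).add(doc & 0xFFFF);
    }
    for (Map.Entry<Integer, List<Integer>> e : grouped.entrySet()) {
      List<Integer> lows = e.getValue();
      if (lows.size() > DENSE) {          // dense block: bitset
        long[] bits = new long[BLOCK / 64];
        for (int low : lows) bits[low >>> 6] |= 1L << low;
        set.blocks.put(e.getKey(), bits);
      } else {                            // sparse block: packed short array
        short[] arr = new short[lows.size()];
        for (int i = 0; i < arr.length; i++) arr[i] = (short) (int) lows.get(i);
        set.blocks.put(e.getKey(), arr);
      }
    }
    return set;
  }

  public boolean contains(int doc) {
    Object b = blocks.get(doc >>> 16);
    int low = doc & 0xFFFF;
    if (b instanceof long[]) return (((long[]) b)[low >>> 6] & (1L << low)) != 0;
    if (b instanceof short[]) {
      for (short s : (short[]) b) if ((s & 0xFFFF) == low) return true;
    }
    return false;
  }

  public static void main(String[] args) {
    TinyRoaring r = TinyRoaring.build(new int[] {3, 70000, 70001});
    if (!r.contains(3) || !r.contains(70001)) throw new AssertionError();
    if (r.contains(4) || r.contains(70002)) throw new AssertionError();
  }
}
```

The real implementation adds a third trick (storing the *complement* of a nearly-full block), but the sparse/dense split above is the core of the memory-efficiency argument.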

- Toke Eskildsen, State and University Library, Denmark






[jira] [Updated] (LUCENE-6732) Improve validate-source-patterns in build.xml (e.g., detect invalid license headers!!)

2015-08-11 Thread Uwe Schindler (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6732?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uwe Schindler updated LUCENE-6732:
--
Attachment: LUCENE-6732.patch

Patch. Some files with invalid license headers were fixed already, but I have 
now like 100 more files to fix:

{noformat}
-validate-source-patterns:
[source-patterns] javadoc-style license header: 
lucene/analysis/common/src/java/org/apache/lucene/analysis/ar/ArabicAnalyzer.java
[source-patterns] javadoc-style license header: 
lucene/analysis/common/src/java/org/apache/lucene/analysis/bg/BulgarianAnalyzer.java
[source-patterns] javadoc-style license header: 
lucene/analysis/common/src/java/org/apache/lucene/analysis/el/GreekAnalyzer.java
[source-patterns] javadoc-style license header: 
lucene/analysis/common/src/java/org/apache/lucene/analysis/el/GreekLowerCaseFilter.java
[source-patterns] javadoc-style license header: 
lucene/analysis/common/src/java/org/apache/lucene/analysis/fa/PersianAnalyzer.java
[source-patterns] javadoc-style license header: 
lucene/analysis/common/src/java/org/apache/lucene/analysis/hi/HindiAnalyzer.java
[source-patterns] javadoc-style license header: 
lucene/analysis/common/src/java/org/apache/lucene/analysis/th/ThaiAnalyzer.java
[source-patterns] javadoc-style license header: 
lucene/analysis/common/src/test/org/apache/lucene/analysis/core/TestStopFilter.java
[source-patterns] javadoc-style license header: 
lucene/analysis/common/src/test/org/apache/lucene/analysis/el/GreekAnalyzerTest.java
[source-patterns] javadoc-style license header: 
lucene/analysis/common/src/test/org/apache/lucene/analysis/miscellaneous/TestScandinavianFoldingFilterFactory.java
[source-patterns] javadoc-style license header: 
lucene/analysis/common/src/test/org/apache/lucene/analysis/miscellaneous/TestScandinavianNormalizationFilterFactory.java
[source-patterns] javadoc-style license header: 
lucene/analysis/common/src/test/org/apache/lucene/analysis/payloads/NumericPayloadTokenFilterTest.java
[source-patterns] javadoc-style license header: 
lucene/analysis/common/src/test/org/apache/lucene/analysis/payloads/TokenOffsetPayloadTokenFilterTest.java
[source-patterns] javadoc-style license header: 
lucene/analysis/common/src/test/org/apache/lucene/analysis/payloads/TypeAsPayloadTokenFilterTest.java
[source-patterns] javadoc-style license header: 
lucene/analysis/common/src/test/org/apache/lucene/analysis/sinks/DateRecognizerSinkTokenizerTest.java
[source-patterns] javadoc-style license header: 
lucene/analysis/common/src/test/org/apache/lucene/analysis/sinks/TestTeeSinkTokenFilter.java
[source-patterns] javadoc-style license header: 
lucene/analysis/common/src/test/org/apache/lucene/analysis/sinks/TokenTypeSinkTokenizerTest.java
[source-patterns] javadoc-style license header: 
lucene/analysis/common/src/test/org/apache/lucene/analysis/snowball/TestSnowballPorterFilterFactory.java
[source-patterns] javadoc-style license header: 
lucene/analysis/common/src/tools/java/org/apache/lucene/analysis/standard/GenerateJflexTLDMacros.java
[source-patterns] javadoc-style license header: 
lucene/analysis/icu/src/java/org/apache/lucene/collation/ICUCollationDocValuesField.java
[source-patterns] javadoc-style license header: 
lucene/analysis/icu/src/test/org/apache/lucene/collation/TestICUCollationDocValuesField.java
[source-patterns] javadoc-style license header: 
lucene/analysis/kuromoji/src/java/org/apache/lucene/analysis/ja/JapaneseIterationMarkCharFilter.java
[source-patterns] javadoc-style license header: 
lucene/analysis/kuromoji/src/java/org/apache/lucene/analysis/ja/JapaneseIterationMarkCharFilterFactory.java
[source-patterns] javadoc-style license header: 
lucene/analysis/kuromoji/src/java/org/apache/lucene/analysis/ja/JapaneseNumberFilter.java
[source-patterns] javadoc-style license header: 
lucene/analysis/kuromoji/src/test/org/apache/lucene/analysis/ja/TestJapaneseIterationMarkCharFilter.java
[source-patterns] javadoc-style license header: 
lucene/analysis/kuromoji/src/test/org/apache/lucene/analysis/ja/TestJapaneseNumberFilter.java
[source-patterns] javadoc-style license header: 
lucene/analysis/stempel/src/java/org/apache/lucene/analysis/stempel/StempelFilter.java
[source-patterns] javadoc-style license header: 
lucene/analysis/stempel/src/java/org/apache/lucene/analysis/stempel/StempelStemmer.java
[source-patterns] javadoc-style license header: 
lucene/benchmark/src/java/org/apache/lucene/benchmark/Constants.java
[source-patterns] javadoc-style license header: 
lucene/benchmark/src/java/org/apache/lucene/benchmark/byTask/feeds/AbstractQueryMaker.java
[source-patterns] javadoc-style license header: 
lucene/benchmark/src/java/org/apache/lucene/benchmark/byTask/feeds/FileBasedQueryMaker.java
[source-patterns] javadoc-style license header: 
lucene/benchmark/src/java/org/apache/lucene/benchmark/byTask/programmatic/Sample.java
[source-patterns] javadoc-style license header: 

[JENKINS] Lucene-Solr-Tests-5.x-Java7 - Build # 3412 - Failure

2015-08-11 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-5.x-Java7/3412/

2 tests failed.
REGRESSION:  org.apache.solr.cloud.CollectionsAPIDistributedZkTest.test

Error Message:
Captured an uncaught exception in thread: Thread[id=4987, name=collection1, 
state=RUNNABLE, group=TGRP-CollectionsAPIDistributedZkTest]

Stack Trace:
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught 
exception in thread: Thread[id=4987, name=collection1, state=RUNNABLE, 
group=TGRP-CollectionsAPIDistributedZkTest]
at 
__randomizedtesting.SeedInfo.seed([C8C7D04B95D54AC1:4093EF913B292739]:0)
Caused by: 
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at https://127.0.0.1:50446/or: Could not find collection : 
awholynewstresscollection_collection1_0
at __randomizedtesting.SeedInfo.seed([C8C7D04B95D54AC1]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:560)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:234)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:226)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:376)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:328)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1085)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:856)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:799)
at org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1220)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest$1CollectionThread.run(CollectionsAPIDistributedZkTest.java:894)


REGRESSION:  org.apache.solr.cloud.hdfs.StressHdfsTest.test

Error Message:
There are still nodes recoverying - waited for 330 seconds

Stack Trace:
java.lang.AssertionError: There are still nodes recoverying - waited for 330 
seconds
at 
__randomizedtesting.SeedInfo.seed([C8C7D04B95D54AC1:4093EF913B292739]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:172)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:133)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:128)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.waitForRecoveriesToFinish(AbstractFullDistribZkTestBase.java:828)
at 
org.apache.solr.cloud.hdfs.StressHdfsTest.createAndDeleteCollection(StressHdfsTest.java:154)
at 
org.apache.solr.cloud.hdfs.StressHdfsTest.test(StressHdfsTest.java:98)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:963)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:938)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 

[jira] [Updated] (SOLR-7790) Update Carrot2 clustering contrib to version 3.10.3

2015-08-11 Thread Dawid Weiss (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7790?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Weiss updated SOLR-7790:
--
Attachment: SOLR-7790.patch

New patch, tested against an (unreleased) C2 with shaded Guava to avoid 
dependency hell/conflicts.

 Update Carrot2 clustering contrib to version 3.10.3
 ---

 Key: SOLR-7790
 URL: https://issues.apache.org/jira/browse/SOLR-7790
 Project: Solr
  Issue Type: Task
Reporter: Dawid Weiss
Assignee: Dawid Weiss
Priority: Minor
 Fix For: 5.3, Trunk

 Attachments: SOLR-7790.patch, SOLR-7790.patch


 This issue brings the clustering extension up to date and also involves 
 upgrading a few other libraries (see sub-tasks or linked issues).






[jira] [Commented] (SOLR-7838) Implement a RuleBasedAuthorizationPlugin

2015-08-11 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7838?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14681930#comment-14681930
 ] 

ASF subversion and git services commented on SOLR-7838:
---

Commit 1695324 from [~noble.paul] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1695324 ]

SOLR-7838: changed the permissions from a map to an array so that order is 
obvious

 Implement a RuleBasedAuthorizationPlugin
 

 Key: SOLR-7838
 URL: https://issues.apache.org/jira/browse/SOLR-7838
 Project: Solr
  Issue Type: Sub-task
Reporter: Noble Paul
Assignee: Noble Paul
Priority: Blocker
 Fix For: 5.3, Trunk


 h2. authorization plugin
 This would store the roles of various users and their privileges in ZK
 sample authorization.json
 {code:javascript}
 {
   "authorization": {
     "class": "solr.ZKAuthorization",
     "user-role": {
       "john": ["admin", "guest"],
       "tom": "dev"
     },
     "permissions": [
       {"name": "collection-edit", "role": "admin"},
       {"name": "coreadmin", "role": "admin"},
       {"name": "mycoll_update",
        "collection": "mycoll",
        "path": ["/update/*"],
        "role": ["guest", "admin"]}
     ]
   }
 }
 {code} 
 This also supports editing of the configuration through APIs
 Example 1: add or remove roles
 {code}
 curl --user solr:SolrRocks http://localhost:8983/solr/admin/authorization -H 'Content-type:application/json' -d '{
   "set-user-role": {"tom": ["admin", "dev"]},
   "set-user-role": {"harry": null}
 }'
 {code}
  
 Example 2: add or remove permissions
 {code}
 curl --user solr:SolrRocks http://localhost:8983/solr/admin/authorization -H 'Content-type:application/json' -d '{
   "set-permission": {"name": "a-custom-permission-name",
                      "collection": "gettingstarted",
                      "path": "/handler-name",
                      "before": "name-of-another-permission"},
   "delete-permission": "permission-name"
 }'
 {code}
 Use the 'before' property to re-order your permissions
 Example 3: Restrict collection admin operations (writes only) to be performed 
 by an admin only
 {code}
 curl --user solr:SolrRocks http://localhost:8983/solr/admin/authorization -H 'Content-type:application/json' -d '{
   "set-permission": {"name": "collection-admin-edit", "role": "admin"}}'
 {code}






[jira] [Commented] (SOLR-7838) Implement a RuleBasedAuthorizationPlugin

2015-08-11 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7838?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14681932#comment-14681932
 ] 

ASF subversion and git services commented on SOLR-7838:
---

Commit 1695325 from [~noble.paul] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1695325 ]

SOLR-7838: changed the permissions from a map to an array so that order is 
obvious

 Implement a RuleBasedAuthorizationPlugin
 

 Key: SOLR-7838
 URL: https://issues.apache.org/jira/browse/SOLR-7838
 Project: Solr
  Issue Type: Sub-task
Reporter: Noble Paul
Assignee: Noble Paul
Priority: Blocker
 Fix For: 5.3, Trunk








Re: 5.3 release

2015-08-11 Thread Shalin Shekhar Mangar
Yeah, sorry Mike. I wasn't sure of the protocol here. I'll keep that
in mind from the next time.

On Tue, Aug 11, 2015 at 3:18 AM, Michael McCandless
luc...@mikemccandless.com wrote:
 On Mon, Aug 10, 2015 at 5:30 PM, Chris Hostetter
 hossman_luc...@fucit.org wrote:

 : I'm confused: this issue isn't a blocker?  Why are we holding up a
 : release for non-blocker issues?

 If i understand correctly, SOLR-7838 is (part of) a feature and by
 definition not a blocker -- but after being committed Shalin noticed that
 the syntax of the user facing API introduced in that feature is
 problematic and the folks involved want to try and fix it before release
 because otherwise it becomes a back compat nightmare

 https://issues.apache.org/jira/browse/SOLR-7838?focusedCommentId=14680361page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14680361

 ...not sure if marking this feature as a blocker at that point would
 have really been a good process to follow?

 (perhaps a new Blocker: Bug should have been created? Need to fix
 permissions API syntax before 5.3 release otherwise it's a backcompat
 nightmare ?)

 OK thanks for the summary Hoss, I didn't read the comments on the
 issue to see the backstory.

 I do think the right thing to do in this case is to reopen the issue,
 mark it blocker at that point, and in the comment explain why a new
 feature became a release blocker ... either that or a new issue ...

 And thank you Shalin for reviewing a new feature before it's released :)

 Mike McCandless

 http://blog.mikemccandless.com





-- 
Regards,
Shalin Shekhar Mangar.




[jira] [Updated] (SOLR-7692) Implement BasicAuth based impl for the new Authentication/Authorization APIs

2015-08-11 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7692?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul updated SOLR-7692:
-
Description: 
This involves various components
h2. Authentication

A basic auth based authentication filter. This should retrieve the user 
credentials from ZK.  The user name and sha1 hash of password should be stored 
in ZK

sample authentication json 
{code:javascript}
{
  "authentication": {
    "class": "solr.BasicAuthPlugin",
    "users": {
      "john": "09fljnklnoiuy98 buygujkjnlk",
      "david": "f678njfgfjnklno iuy9865ty",
      "pete": "87ykjnklndfhjh8 98uyiy98"
    }
  }
}
{code}

h2. authorization plugin

This would store the roles of various users and their privileges in ZK

sample authorization.json

{code:javascript}
{
  "authorization": {
    "class": "solr.ZKAuthorization",
    "user-role": {
      "john": ["admin", "guest"],
      "tom": "dev"
    },
    "permissions": [
      {"name": "collection-edit", "role": "admin"},
      {"name": "coreadmin", "role": "admin"},
      {"name": "mycoll_update",
       "collection": "mycoll",
       "path": ["/update/*"],
       "role": ["guest", "admin"]}
    ]
  }
}
{code} 

We will also need to provide APIs to create users and assign them roles

  was:
This involves various components
h2. Authentication

A basic auth based authentication filter. This should retrieve the user 
credentials from ZK.  The user name and sha1 hash of password should be stored 
in ZK

sample authentication json 
{code:javascript}
{
  "authentication": {
    "class": "solr.BasicAuthPlugin",
    "users": {
      "john": "09fljnklnoiuy98 buygujkjnlk",
      "david": "f678njfgfjnklno iuy9865ty",
      "pete": "87ykjnklndfhjh8 98uyiy98"
    }
  }
}
{code}

h2. authorization plugin

This would store the roles of various users and their privileges in ZK

sample authorization.json

{code:javascript}
{
  "authorization": {
    "class": "solr.ZKAuthorization",
    "roles": {
      "admin": ["john"],
      "guest": ["john", "david", "pete"]
    },
    "permissions": {
      "collection-edit": {"role": "admin"},
      "coreadmin": {"role": "admin"},
      "config-edit": {
        // all collections
        "role": "admin",
        "method": "POST"
      },
      "schema-edit": {
        "roles": "admin",
        "method": "POST"
      },
      "update": {
        // all collections
        "role": "dev"
      },
      "mycoll_update": {
        "collection": "mycoll",
        "path": ["/update/*"],
        "role": ["somebody"]
      }
    }
  }
}
{code} 

We will also need to provide APIs to create users and assign them roles


 Implement BasicAuth based impl for the new Authentication/Authorization APIs
 

 Key: SOLR-7692
 URL: https://issues.apache.org/jira/browse/SOLR-7692
 Project: Solr
  Issue Type: New Feature
Reporter: Noble Paul
Assignee: Noble Paul
Priority: Blocker
 Fix For: 5.3, Trunk

 Attachments: SOLR-7692.patch, SOLR-7692.patch, SOLR-7692.patch, 
 SOLR-7692.patch, SOLR-7692.patch, SOLR-7692.patch, SOLR-7692.patch, 
 SOLR-7692.patch, SOLR-7692.patch, SOLR-7692.patch, SOLR-7692.patch, 
 SOLR-7692.patch, SOLR-7692.patch, SOLR-7692.patch, SOLR-7692.patch, 
 SOLR-7757.patch, SOLR-7757.patch, SOLR-7757.patch








[jira] [Commented] (SOLR-7906) java.lang.NullPointerException from Json when doing solr search

2015-08-11 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7906?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14681823#comment-14681823
 ] 

Yonik Seeley commented on SOLR-7906:


Is there a full stack trace in the solr logs?
If you change wt=json to wt=xml what happens?

 java.lang.NullPointerException from Json when doing solr search
 ---

 Key: SOLR-7906
 URL: https://issues.apache.org/jira/browse/SOLR-7906
 Project: Solr
  Issue Type: Bug
  Components: clients - java
Affects Versions: 4.8
 Environment: Linux
Reporter: nelson maria
Priority: Blocker
 Attachments: field type.txt


 Getting this response from Solr when doing search.
 {"error":{"trace":"java.lang.NullPointerException\n","code":500}}






[jira] [Commented] (SOLR-7836) Possible deadlock when closing refcounted index writers.

2015-08-11 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7836?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14681889#comment-14681889
 ] 

Mark Miller commented on SOLR-7836:
---

Still trying to duplicate a hang with the above patch removed, but one strange 
thing I had to work around because the test kept failing relatively quickly for 
me (JVM bug?):
{code}
Index: solr/core/src/java/org/apache/solr/core/SolrCore.java
===================================================================
--- solr/core/src/java/org/apache/solr/core/SolrCore.java   (revision 1695180)
+++ solr/core/src/java/org/apache/solr/core/SolrCore.java   (working copy)
@@ -1639,7 +1639,9 @@
       tmp = new SolrIndexSearcher(this, newIndexDir, getLatestSchema(),
           (realtime ? "realtime" : "main"), newReader, true, !realtime, true, directoryFactory);
     } else {
-      RefCounted<IndexWriter> writer = getUpdateHandler().getSolrCoreState().getIndexWriter(this);
+      // when this was getUpdateHandler#getSolrCoreState it could hit an NPE somehow,
+      // even though variables are all final
+      RefCounted<IndexWriter> writer = solrCoreState.getIndexWriter(this);
       DirectoryReader newReader = null;
       try {
         newReader = indexReaderFactory.newReader(writer.get(), this);
{code}

 Possible deadlock when closing refcounted index writers.
 

 Key: SOLR-7836
 URL: https://issues.apache.org/jira/browse/SOLR-7836
 Project: Solr
  Issue Type: Bug
Reporter: Erick Erickson
Assignee: Erick Erickson
 Fix For: Trunk, 5.4

 Attachments: SOLR-7836-synch.patch, SOLR-7836.patch, SOLR-7836.patch, 
 SOLR-7836.patch


 Preliminary patch for what looks like a possible race condition between 
 writerFree and pauseWriter in DefaultSorlCoreState.
 Looking for comments and/or why I'm completely missing the boat.






[jira] [Commented] (SOLR-7836) Possible deadlock when closing refcounted index writers.

2015-08-11 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7836?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14681912#comment-14681912
 ] 

Yonik Seeley commented on SOLR-7836:


bq.  there are a lot of operations on ulog.something that are synchronized on 
solrCoreState.getUpdateLock(), and a bunch that aren't. What's up there? 

Some background here: I wrote the original tlog code (and DUH2 code that called 
it).  There was no solrCoreState.getUpdateLock() (and no sharing writers across 
reloads even).  Mark implemented that part and changed synchronized(this) to 
synchronized(solrCoreState.getUpdateLock()) I believe (to account for the fact 
that we could have 2 DUH2 instances).

Hopefully there are comments about when something is synchronized (and why it 
needed to be).  The intent was to have the common case unsynchronized for best 
throughput.  For example, I don't believe writer.updateDocument for the common 
case is synchronized.  That would be bad for indexing performance.

deleteByQuery (or an add where we detect a reordered DBQ that we need to apply 
again) contains the following
comment next to the synchronize statement:
{code}
  //
  // synchronized to prevent deleteByQuery from running during the open 
new searcher
  // part of a commit.  DBQ needs to signal that a fresh reader will be 
needed for
  // a realtime view of the index.  When a new searcher is opened after a 
DBQ, that
  // flag can be cleared.  If those thing happen concurrently, it's not 
thread safe.
  //
{code}

I'm re-reviewing all this code now to get it back in my head...
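The locking discipline described above can be sketched as follows (hypothetical names, not Solr's actual DirectUpdateHandler2): the common add path stays unsynchronized for throughput, while deleteByQuery and the open-new-searcher part of commit serialize on the shared update lock so the fresh-reader flag cannot be set and cleared concurrently.

```java
// Sketch only, under the assumptions stated above.
public class UpdateLockSketch {
  private final Object updateLock = new Object();
  private volatile boolean freshReaderNeeded = false;
  public int docs = 0;
  public int readersOpened = 0;

  // Common case: intentionally unsynchronized, for indexing throughput.
  public void addDocument() {
    docs++;
  }

  // Must not overlap with the open-new-searcher part of a commit.
  public void deleteByQuery(String q) {
    synchronized (updateLock) {
      freshReaderNeeded = true; // signal: realtime view needs a fresh reader
    }
  }

  // The open-new-searcher part of a commit: reads and clears the flag
  // under the same lock, so a concurrent DBQ cannot slip in between.
  public void openNewSearcherPartOfCommit() {
    synchronized (updateLock) {
      if (freshReaderNeeded) {
        readersOpened++;
        freshReaderNeeded = false; // safe to clear only under the lock
      }
    }
  }

  public static void main(String[] args) {
    UpdateLockSketch s = new UpdateLockSketch();
    s.addDocument();
    s.deleteByQuery("*:*");
    s.openNewSearcherPartOfCommit();
    if (s.readersOpened != 1 || s.docs != 1) throw new AssertionError();
  }
}
```

If the flag were cleared outside the lock, a DBQ arriving between the read and the clear would be lost, which is exactly the "not thread safe" case the quoted comment warns about.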

 Possible deadlock when closing refcounted index writers.
 

 Key: SOLR-7836
 URL: https://issues.apache.org/jira/browse/SOLR-7836
 Project: Solr
  Issue Type: Bug
Reporter: Erick Erickson
Assignee: Erick Erickson
 Fix For: Trunk, 5.4

 Attachments: SOLR-7836-synch.patch, SOLR-7836.patch, SOLR-7836.patch, 
 SOLR-7836.patch


 Preliminary patch for what looks like a possible race condition between 
 writerFree and pauseWriter in DefaultSorlCoreState.
 Looking for comments and/or why I'm completely missing the boat.






[jira] [Commented] (SOLR-7838) Implement a RuleBasedAuthorizationPlugin

2015-08-11 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7838?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14681866#comment-14681866
 ] 

ASF subversion and git services commented on SOLR-7838:
---

Commit 1695308 from [~noble.paul] in branch 'dev/trunk'
[ https://svn.apache.org/r1695308 ]

SOLR-7838: changed the permissions from a map to an array so that order is 
obvious

 Implement a RuleBasedAuthorizationPlugin
 

 Key: SOLR-7838
 URL: https://issues.apache.org/jira/browse/SOLR-7838
 Project: Solr
  Issue Type: Sub-task
Reporter: Noble Paul
Assignee: Noble Paul
Priority: Blocker
 Fix For: 5.3, Trunk








[JENKINS] Lucene-Solr-5.x-Linux (64bit/jdk1.7.0_80) - Build # 13600 - Still Failing!

2015-08-11 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/13600/
Java: 64bit/jdk1.7.0_80 -XX:-UseCompressedOops -XX:+UseParallelGC

1 tests failed.
FAILED:  
org.apache.solr.cloud.SolrCloudExampleTest.testLoadDocsIntoGettingStartedCollection

Error Message:
Delete action failed!

Stack Trace:
java.lang.AssertionError: Delete action failed!
at 
__randomizedtesting.SeedInfo.seed([B80B05CA6C2A7E52:AB6837A55D45C7F4]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.cloud.SolrCloudExampleTest.doTestDeleteAction(SolrCloudExampleTest.java:169)
at 
org.apache.solr.cloud.SolrCloudExampleTest.testLoadDocsIntoGettingStartedCollection(SolrCloudExampleTest.java:145)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:963)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:938)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Updated] (SOLR-7838) Implement a RuleBasedAuthorizationPlugin

2015-08-11 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7838?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul updated SOLR-7838:
-
Description: 
h2. authorization plugin

This would store the roles of various users and their privileges in ZK

sample authorization.json

{code:javascript}
{
  "authorization": {
    "class": "solr.ZKAuthorization",
    "user-role": {
      "john": ["admin", "guest"],
      "tom": "dev"
    },
    "permissions": [
      {"name": "collection-edit",
       "role": "admin"
      },
      {"name": "coreadmin",
       "role": "admin"
      },
      {"name": "mycoll_update",
       "collection": "mycoll",
       "path": ["/update/*"],
       "role": ["guest", "admin"]
      }]
  }
}
{code}
This also supports editing of the configuration through APIs
Example 1: add or remove roles

{code}
curl --user solr:SolrRocks http://localhost:8983/solr/admin/authorization -H 
'Content-type:application/json' -d '{
  "set-user-role": {"tom": ["admin", "dev"]},
  "set-user-role": {"harry": null}
}'
{code}
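For scripting, the same role-edit request can be assembled in a variable and
inspected before it is sent; the user names and roles are the illustrative ones
from the example, and the commented-out curl line mirrors the command above:

```shell
#!/bin/sh
# Sketch: build the set-user-role body in a variable so it can be
# inspected (or logged) before POSTing. Values are illustrative.
PAYLOAD='{
  "set-user-role": {"tom": ["admin", "dev"]},
  "set-user-role": {"harry": null}
}'
# To actually apply it (server URL/credentials as in the example):
# curl --user solr:SolrRocks -H 'Content-type:application/json' \
#      -d "$PAYLOAD" http://localhost:8983/solr/admin/authorization
printf '%s\n' "$PAYLOAD"
```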
 

Example 2: add or remove permissions


{code}
curl --user solr:SolrRocks http://localhost:8983/solr/admin/authorization -H 
'Content-type:application/json' -d '{
  "set-permission": {"name": "a-custom-permission-name",
                     "collection": "gettingstarted",
                     "path": "/handler-name",
                     "before": "name-of-another-permission"
  },
  "delete-permission": "permission-name"
}'
{code}
Use the 'before' property to re-order your permissions
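A minimal sketch of an ordered permission edit using 'before', assuming two
already-defined permissions; the names "perm-a" and "perm-b" and the path are
hypothetical placeholders, not values from this issue:

```shell
#!/bin/sh
# Sketch: insert/move a permission ahead of an existing one via "before".
# "perm-a", "perm-b", and "/select" are placeholder names.
REORDER='{
  "set-permission": {
    "name": "perm-a",
    "path": "/select",
    "role": "dev",
    "before": "perm-b"
  }
}'
# curl --user solr:SolrRocks -H 'Content-type:application/json' \
#      -d "$REORDER" http://localhost:8983/solr/admin/authorization
printf '%s\n' "$REORDER"
```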

Example 3: Restrict collection admin operations (writes only) to be performed 
by an admin only

{code}
curl --user solr:SolrRocks http://localhost:8983/solr/admin/authorization -H 
'Content-type:application/json' -d '{
  "set-permission": {"name": "collection-admin-edit", "role": "admin"}}'
{code}

  was:
h2. authorization plugin

This would store the roles of various users and their privileges in ZK

sample authorization.json

{code:javascript}
{
  "authorization": {
    "class": "solr.ZKAuthorization",
    "roles": {
      "john": ["admin"],
      "david": ["guest", "dev"]
    },
    "permissions": {
      "collection-edit": {
        "role": "admin"
      },
      "coreadmin": {
        "role": "admin"
      },
      "config-edit": {
        //all collections
        "role": "admin",
        "method": "POST"
      },
      "schema-edit": {
        "roles": "admin",
        "method": "POST"
      },
      "update": {
        //all collections
        "role": "dev"
      },
      "mycoll_update": {
        "collection": "mycoll",
        "path": ["/update/*"],
        "role": ["somebody"]
      }
    }
  }
}
{code}
This also supports editing of the configuration through APIs
Example 1: add or remove roles

{code}
curl --user solr:SolrRocks http://localhost:8983/solr/admin/authorization -H 
'Content-type:application/json' -d '{
  "set-user-role": {"tom": ["admin", "dev"]},
  "set-user-role": {"harry": null}
}'
{code}
 

Example 2: add or remove permissions


{code}
curl --user solr:SolrRocks http://localhost:8983/solr/admin/authorization -H 
'Content-type:application/json' -d '{
  "set-permission": {"name": "a-custom-permission-name",
                     "collection": "gettingstarted",
                     "path": "/handler-name",
                     "before": "name-of-another-permission"
  },
  "delete-permission": "permission-name"
}'
{code}
Please note that you have to replace the whole permission each time it is 
edited. The API does not support editing one property at a time. Use the 
'before' property to re-order your permissions

Example 3: Restrict collection admin operations (writes only) to be performed 
by an admin only

{code}
curl --user solr:SolrRocks http://localhost:8983/solr/admin/authorization -H 
'Content-type:application/json' -d '{
  "set-permission": {"name": "collection-admin-edit", "role": "admin"}}'
{code}


 Implement a RuleBasedAuthorizationPlugin
 

 Key: SOLR-7838
 URL: https://issues.apache.org/jira/browse/SOLR-7838
 Project: Solr
  Issue Type: Sub-task
Reporter: Noble Paul
Assignee: Noble Paul
Priority: Blocker
 Fix For: 5.3, Trunk



[jira] [Commented] (SOLR-7826) Permission issues when creating cores with bin/solr

2015-08-11 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7826?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14681977#comment-14681977
 ] 

Shawn Heisey commented on SOLR-7826:


bq.  Perhaps bin/solr always should bail out early if executed as root, perhaps 
with an --runasrootonyourownrisk param to override?

Sounds awesome to me.  There's another project that does something similar to 
protect the user from themselves, and the option to explicitly force the action 
is not documented anywhere except in the program output, which I think is a 
reasonable thing to do here.

I want to say that it's the Linux RAID tools (mdadm) that have the undocumented 
"I really know what I'm doing, please proceed" option, but I can no longer 
remember ... and Google isn't helpful, since it's not documented. ;)
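The proposed guard could look roughly like this, reusing the
--runasrootonyourownrisk flag name floated earlier in the thread; this is a
sketch of the idea, not actual bin/solr code, and the function takes the uid
and flag as arguments so the policy is testable without being root:

```shell
#!/bin/sh
# Sketch of a "refuse to run as root" guard for a startup/create script.
# check_not_root takes the numeric uid and the first CLI argument.
check_not_root() {
  uid="$1"
  force="$2"
  if [ "$uid" -eq 0 ] && [ "$force" != "--runasrootonyourownrisk" ]; then
    echo "ERROR: running as root would create root-owned core directories." >&2
    echo "       Re-run as the solr user, or pass the override flag." >&2
    return 1
  fi
  return 0
}

# Example: a root invocation without the override flag is rejected.
if check_not_root 0 "" 2>/dev/null; then
  echo "allowed"
else
  echo "blocked"
fi
```

In a real script the call site would be `check_not_root "$(id -u)" "$1"` near
the top of the create-core path.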


 Permission issues when creating cores with bin/solr
 ---

 Key: SOLR-7826
 URL: https://issues.apache.org/jira/browse/SOLR-7826
 Project: Solr
  Issue Type: Improvement
Reporter: Shawn Heisey
Priority: Minor

 Ran into an interesting situation on IRC today.
 Solr has been installed as a service using the shell script 
 install_solr_service.sh ... so it is running as an unprivileged user.
 User is running bin/solr create as root.  This causes permission problems, 
 because the script creates the core's instanceDir with root ownership, then 
 when Solr is instructed to actually create the core, it cannot create the 
 dataDir.
 Enhancement idea:  When the install script is used, leave breadcrumbs 
 somewhere so that the create core section of the main script can find it 
 and su to the user specified during install.






[jira] [Resolved] (SOLR-7838) Implement a RuleBasedAuthorizationPlugin

2015-08-11 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7838?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul resolved SOLR-7838.
--
Resolution: Fixed

 Implement a RuleBasedAuthorizationPlugin
 

 Key: SOLR-7838
 URL: https://issues.apache.org/jira/browse/SOLR-7838
 Project: Solr
  Issue Type: Sub-task
Reporter: Noble Paul
Assignee: Noble Paul
Priority: Blocker
 Fix For: 5.3, Trunk








[JENKINS] Lucene-Solr-SmokeRelease-5.3 - Build # 5 - Still Failing

2015-08-11 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-5.3/5/

No tests ran.

Build Log:
[...truncated 53192 lines...]
prepare-release-no-sign:
[mkdir] Created dir: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.3/lucene/build/smokeTestRelease/dist
 [copy] Copying 461 files to 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.3/lucene/build/smokeTestRelease/dist/lucene
 [copy] Copying 245 files to 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.3/lucene/build/smokeTestRelease/dist/solr
   [smoker] Java 1.7 
JAVA_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/latest1.7
   [smoker] Java 1.8 
JAVA_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/latest1.8
   [smoker] NOTE: output encoding is UTF-8
   [smoker] 
   [smoker] Load release URL 
file:/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.3/lucene/build/smokeTestRelease/dist/...
   [smoker] 
   [smoker] Test Lucene...
   [smoker]   test basics...
   [smoker]   get KEYS
   [smoker] 0.1 MB in 0.01 sec (11.9 MB/sec)
   [smoker]   check changes HTML...
   [smoker]   download lucene-5.3.0-src.tgz...
   [smoker] 28.5 MB in 0.04 sec (745.7 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download lucene-5.3.0.tgz...
   [smoker] 65.7 MB in 0.09 sec (719.3 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download lucene-5.3.0.zip...
   [smoker] 75.9 MB in 0.13 sec (578.0 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   unpack lucene-5.3.0.tgz...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] test demo with 1.7...
   [smoker]   got 6059 hits for query lucene
   [smoker] checkindex with 1.7...
   [smoker] test demo with 1.8...
   [smoker]   got 6059 hits for query lucene
   [smoker] checkindex with 1.8...
   [smoker] check Lucene's javadoc JAR
   [smoker]   unpack lucene-5.3.0.zip...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] test demo with 1.7...
   [smoker]   got 6059 hits for query lucene
   [smoker] checkindex with 1.7...
   [smoker] test demo with 1.8...
   [smoker]   got 6059 hits for query lucene
   [smoker] checkindex with 1.8...
   [smoker] check Lucene's javadoc JAR
   [smoker]   unpack lucene-5.3.0-src.tgz...
   [smoker] make sure no JARs/WARs in src dist...
   [smoker] run ant validate
   [smoker] run tests w/ Java 7 and testArgs='-Dtests.slow=false'...
   [smoker] test demo with 1.7...
   [smoker]   got 213 hits for query lucene
   [smoker] checkindex with 1.7...
   [smoker] generate javadocs w/ Java 7...
   [smoker] 
   [smoker] Crawl/parse...
   [smoker] 
   [smoker] Verify...
   [smoker] run tests w/ Java 8 and testArgs='-Dtests.slow=false'...
   [smoker] test demo with 1.8...
   [smoker]   got 213 hits for query lucene
   [smoker] checkindex with 1.8...
   [smoker] generate javadocs w/ Java 8...
   [smoker] 
   [smoker] Crawl/parse...
   [smoker] 
   [smoker] Verify...
   [smoker]   confirm all releases have coverage in TestBackwardsCompatibility
   [smoker] find all past Lucene releases...
   [smoker] run TestBackwardsCompatibility..
   [smoker] success!
   [smoker] 
   [smoker] Test Solr...
   [smoker]   test basics...
   [smoker]   get KEYS
   [smoker] 0.1 MB in 0.01 sec (21.1 MB/sec)
   [smoker]   check changes HTML...
   [smoker]   download solr-5.3.0-src.tgz...
   [smoker] 37.0 MB in 0.34 sec (107.8 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download solr-5.3.0.tgz...
   [smoker] 128.7 MB in 1.25 sec (102.5 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download solr-5.3.0.zip...
   [smoker] 136.2 MB in 1.17 sec (116.9 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   unpack solr-5.3.0.tgz...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] unpack lucene-5.3.0.tgz...
   [smoker]   **WARNING**: skipping check of 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.3/lucene/build/smokeTestRelease/tmp/unpack/solr-5.3.0/contrib/dataimporthandler-extras/lib/javax.mail-1.5.1.jar:
 it has javax.* classes
   [smoker]   **WARNING**: skipping check of 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.3/lucene/build/smokeTestRelease/tmp/unpack/solr-5.3.0/contrib/dataimporthandler-extras/lib/activation-1.1.1.jar:
 it has javax.* classes
   [smoker] copying unpacked distribution for Java 7 ...
   [smoker] test solr example w/ Java 7...
   [smoker]   start Solr instance 
(log=/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.3/lucene/build/smokeTestRelease/tmp/unpack/solr-5.3.0-java7/solr-example.log)...
   [smoker] No process found for Solr node running on port 8983
   [smoker]   starting Solr on port 8983 from 

[jira] [Commented] (SOLR-7838) Implement a RuleBasedAuthorizationPlugin

2015-08-11 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7838?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14681957#comment-14681957
 ] 

ASF subversion and git services commented on SOLR-7838:
---

Commit 1695331 from [~noble.paul] in branch 'dev/branches/lucene_solr_5_3'
[ https://svn.apache.org/r1695331 ]

SOLR-7838: changed the permissions from a map to an array so that order is 
obvious

 Implement a RuleBasedAuthorizationPlugin
 

 Key: SOLR-7838
 URL: https://issues.apache.org/jira/browse/SOLR-7838
 Project: Solr
  Issue Type: Sub-task
Reporter: Noble Paul
Assignee: Noble Paul
Priority: Blocker
 Fix For: 5.3, Trunk








[JENKINS] Lucene-Solr-trunk-Linux (64bit/jdk1.8.0_51) - Build # 13818 - Failure!

2015-08-11 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/13818/
Java: 64bit/jdk1.8.0_51 -XX:-UseCompressedOops -XX:+UseSerialGC

1 tests failed.
FAILED:  org.apache.solr.cloud.CollectionStateFormat2Test.test

Error Message:
Error from server at http://127.0.0.1:59808: Could not find collection : 
myExternColl

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at http://127.0.0.1:59808: Could not find collection : myExternColl
at 
__randomizedtesting.SeedInfo.seed([EA6C1FA82A8555F1:6238207284793809]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:560)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:234)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:226)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:372)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:325)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1086)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:857)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:800)
at org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1220)
at 
org.apache.solr.cloud.CollectionStateFormat2Test.testZkNodeLocation(CollectionStateFormat2Test.java:84)
at 
org.apache.solr.cloud.CollectionStateFormat2Test.test(CollectionStateFormat2Test.java:40)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:963)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:938)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 

Re: 5.3 release

2015-08-11 Thread Noble Paul
I'm done with the blockers.
Planning to cut an RC soon.
--Noble

On Tue, Aug 11, 2015 at 7:08 PM, Shalin Shekhar Mangar
shalinman...@gmail.com wrote:
 Yeah, sorry Mike. I wasn't sure of the protocol here. I'll keep that
 in mind for next time.

 On Tue, Aug 11, 2015 at 3:18 AM, Michael McCandless
 luc...@mikemccandless.com wrote:
 On Mon, Aug 10, 2015 at 5:30 PM, Chris Hostetter
 hossman_luc...@fucit.org wrote:

 : I'm confused: this issue isn't a blocker?  Why are we holding up a
 : release for non-blocker issues?

 If I understand correctly, SOLR-7838 is (part of) a feature and by
 definition not a blocker -- but after it was committed, Shalin noticed that
 the syntax of the user-facing API introduced in that feature is
 problematic, and the folks involved want to try to fix it before release,
 because otherwise it becomes a back-compat nightmare

 https://issues.apache.org/jira/browse/SOLR-7838?focusedCommentId=14680361page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14680361

 ...not sure if marking this feature as a blocker at that point would
 have really been a good process to follow?

 (perhaps a new Blocker: Bug should have been created? Need to fix
 permissions API syntax before 5.3 release otherwise it's a backcompat
 nightmare ?)

 OK thanks for the summary Hoss, I didn't read the comments on the
 issue to see the backstory.

 I do think the right thing to do in this case is to reopen the issue,
 mark it blocker at that point, and in the comment explain why a new
 feature became a release blocker ... either that or a new issue ...

 And thank you Shalin for reviewing a new feature before it's released :)

 Mike McCandless

 http://blog.mikemccandless.com





 --
 Regards,
 Shalin Shekhar Mangar.





-- 
-
Noble Paul




[jira] [Commented] (SOLR-7838) Implement a RuleBasedAuthorizationPlugin

2015-08-11 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7838?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14681956#comment-14681956
 ] 

ASF subversion and git services commented on SOLR-7838:
---

Commit 1695330 from [~noble.paul] in branch 'dev/branches/lucene_solr_5_3'
[ https://svn.apache.org/r1695330 ]

SOLR-7838: changed the permissions from a map to an array so that order is 
obvious

 Implement a RuleBasedAuthorizationPlugin
 

 Key: SOLR-7838
 URL: https://issues.apache.org/jira/browse/SOLR-7838
 Project: Solr
  Issue Type: Sub-task
Reporter: Noble Paul
Assignee: Noble Paul
Priority: Blocker
 Fix For: 5.3, Trunk








Re: [jira] [Commented] (LUCENE-6732) Improve validate-source-patterns in build.xml (e.g., detect invalid license headers!!)

2015-08-11 Thread Erick Erickson
Thanks Uwe!

On Tue, Aug 11, 2015 at 6:14 AM, Robert Muir (JIRA) j...@apache.org wrote:

 [ 
 https://issues.apache.org/jira/browse/LUCENE-6732?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14681780#comment-14681780
  ]

 Robert Muir commented on LUCENE-6732:
 -

 +1, this is great

 Improve validate-source-patterns in build.xml (e.g., detect invalid license 
 headers!!)
 --

 Key: LUCENE-6732
 URL: https://issues.apache.org/jira/browse/LUCENE-6732
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Uwe Schindler
Assignee: Uwe Schindler
 Attachments: LUCENE-6732.patch


 Today I enabled warnings analysis on Policeman Jenkins. This scans the build 
 log for warnings by javac and reports them in statistics, together with 
 source file dumps.
 When doing that I found out that someone again added a lot of invalid 
 license headers using {{/\*\*}} instead of a simple comment. This causes 
 javadocs warnings under some circumstances, because {{/\*\*}} is the start of 
 javadocs and not a license comment.
 I then tried to fix validate-source-patterns to detect this, but due to 
 a bug in ANT, the {{containsregexp/}} filter is applied per line (although 
 it has multiline matching capabilities!!!).
 So I rewrote our checker to run with groovy. This also has some good parts:
 - it tells you what was broken; otherwise you just know there is an error, 
 but not what's wrong (tab, nocommit, ...)
 - it's much faster (multiple {{containsregexp/}} checks read the file over and 
 over; this one reads the file once into a string and then applies all regular 
 expressions).








[jira] [Commented] (SOLR-7836) Possible deadlock when closing refcounted index writers.

2015-08-11 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7836?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14682096#comment-14682096
 ] 

Erick Erickson commented on SOLR-7836:
--

Thanks guys, getting it all in my head is...interesting.

[~ysee...@gmail.com] Yeah, I saw that comment. In that case, removing the two 
synchronizations in the refactored methods _other_ than addAndDelete is 
probably indicated. The one _in_ addAndDelete was there originally, just within 
the IndexWriter try/finally, which is where the issue was, since it'd go out and 
get a new searcher eventually.

[~markrmil...@gmail.com] I had to write a shell script to re-submit that 
individual test repeatedly, it'd pretty much always fail for me by 50 runs. 
I'll back those changes out and run it on my machine where it fails reliably 
and post the results when I get a deadlock.

 Possible deadlock when closing refcounted index writers.
 

 Key: SOLR-7836
 URL: https://issues.apache.org/jira/browse/SOLR-7836
 Project: Solr
  Issue Type: Bug
Reporter: Erick Erickson
Assignee: Erick Erickson
 Fix For: Trunk, 5.4

 Attachments: SOLR-7836-synch.patch, SOLR-7836.patch, SOLR-7836.patch, 
 SOLR-7836.patch


 Preliminary patch for what looks like a possible race condition between 
 writerFree and pauseWriter in DefaultSolrCoreState.
 Looking for comments and/or why I'm completely missing the boat.






[jira] [Commented] (SOLR-7836) Possible deadlock when closing refcounted index writers.

2015-08-11 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7836?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14682191#comment-14682191
 ] 

Mark Miller commented on SOLR-7836:
---

Yeah, I have a beasting script that does the same, though it can also launch 
runs in parallel (https://gist.github.com/markrmiller/dbdb792216dc98b018ad). 
Still no deadlock on my machine yet; I'll keep trying for a while.
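The repeat-until-failure loop that both of the scripts mentioned above implement boils down to something like this. It is a toy sketch: the Callable here stands in for one actual test invocation (e.g. an ant run), which the real scripts do via the shell:

```java
import java.util.concurrent.Callable;

public class Beast {
    /** Rerun one test until it fails or the run budget is exhausted.
     *  Returns the 1-based run number that failed, or -1 if every run passed. */
    static int runUntilFailure(Callable<Boolean> test, int maxRuns) throws Exception {
        for (int run = 1; run <= maxRuns; run++) {
            if (!test.call()) {
                return run; // first failure: report which run hit it
            }
        }
        return -1; // never failed within the budget
    }

    public static void main(String[] args) throws Exception {
        int[] count = {0};
        // Toy stand-in for one test invocation: passes until the 7th run.
        Callable<Boolean> flaky = () -> ++count[0] != 7;
        System.out.println(runUntilFailure(flaky, 50)); // prints 7
    }
}
```

A budget like the 50 runs Erick mentions is what makes "pretty much always fails by 50 runs" a meaningful reproduction claim.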

 Possible deadlock when closing refcounted index writers.
 

 Key: SOLR-7836
 URL: https://issues.apache.org/jira/browse/SOLR-7836
 Project: Solr
  Issue Type: Bug
Reporter: Erick Erickson
Assignee: Erick Erickson
 Fix For: Trunk, 5.4

 Attachments: SOLR-7836-synch.patch, SOLR-7836.patch, SOLR-7836.patch, 
 SOLR-7836.patch


 Preliminary patch for what looks like a possible race condition between 
 writerFree and pauseWriter in DefaultSolrCoreState.
 Looking for comments and/or why I'm completely missing the boat.






[jira] [Updated] (SOLR-7836) Possible deadlock when closing refcounted index writers.

2015-08-11 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7836?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson updated SOLR-7836:
-
Attachment: deadlock_test
deadlock_3.res.zip

Here's the stack trace. A few things:

 I got this source via {{svn checkout -r1694809 
 https://svn.apache.org/repos/asf/lucene/dev/trunk}} and added the test to that 
 code base.

 It appears to need two things to fail; a reload operation and a delete by 
 query.

 thread WRITER5 and 
 TEST-TestReloadDeadlock.testReloadDeadlock-seed#[4CFFCB253DB33784] are the 
 ones I think are fighting here.

 The original fix was to pass the index writer from addDoc0() to addAndDelete, 
 but this doesn't work either. I'll see if I can attach a run with that change 
 for comparison.

 I've also attached the script I use to run this, although I don't run it in 
 parallel.



 Possible deadlock when closing refcounted index writers.
 

 Key: SOLR-7836
 URL: https://issues.apache.org/jira/browse/SOLR-7836
 Project: Solr
  Issue Type: Bug
Reporter: Erick Erickson
Assignee: Erick Erickson
 Fix For: Trunk, 5.4

 Attachments: SOLR-7836-synch.patch, SOLR-7836.patch, SOLR-7836.patch, 
 SOLR-7836.patch, deadlock_3.res.zip, deadlock_test


 Preliminary patch for what looks like a possible race condition between 
 writerFree and pauseWriter in DefaultSolrCoreState.
 Looking for comments and/or why I'm completely missing the boat.






[jira] [Commented] (SOLR-6760) New optimized DistributedQueue implementation for overseer

2015-08-11 Thread Scott Blum (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6760?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14682298#comment-14682298
 ] 

Scott Blum commented on SOLR-6760:
--

NOTE: committer should start with an svn copy DistributedQueue -> 
DistributedQueueExt to preserve history.

 New optimized DistributedQueue implementation for overseer
 --

 Key: SOLR-6760
 URL: https://issues.apache.org/jira/browse/SOLR-6760
 Project: Solr
  Issue Type: Bug
Reporter: Noble Paul
Assignee: Noble Paul
 Attachments: SOLR-6760.patch, SOLR-6760.patch, deadlock.patch


 Currently the DQ works as follows
 * read all items in the directory
 * sort them all 
 * take the head and return it and discard everything else
 * rinse and repeat
 This works well when we have only a handful of items in the queue. If the 
 number of items in the queue is much larger (tens of thousands), this is 
 counterproductive.
 As the overseer queue is a multiple-producer, single-consumer queue, we can 
 read them all in bulk and, before processing each item, just do a 
 zk.exists(itemname); if all is well we don't need to do the fetch-all + 
 sort thing again.
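The bulk-read consumption pattern in the description can be sketched like this. It is only an illustration: a plain in-memory map stands in for ZooKeeper, so store.containsKey plays the role of zk.exists and store.remove plays the role of consuming a znode:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class BulkQueueSketch {
    /** One bulk read + one sort per pass; a cheap existence check per item
     *  replaces the fetch-all + sort-all on every single dequeue. */
    static List<String> drain(Map<String, String> store) {
        List<String> names = new ArrayList<>(store.keySet()); // bulk read, once
        Collections.sort(names);                              // sort, once
        List<String> processed = new ArrayList<>();
        for (String name : names) {
            if (!store.containsKey(name)) {
                continue; // item vanished since the bulk read: skip, no re-fetch
            }
            processed.add(store.remove(name)); // consume the item
        }
        return processed;
    }

    public static void main(String[] args) {
        Map<String, String> store = new HashMap<>();
        store.put("qn-0000000002", "b");
        store.put("qn-0000000001", "a");
        store.put("qn-0000000003", "c");
        System.out.println(drain(store)); // prints [a, b, c]: queue order by name
    }
}
```

The single-consumer assumption is what makes the per-item existence check sufficient: only the consumer removes items, so a vanished item can simply be skipped rather than triggering another full fetch and sort.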






[JENKINS] Lucene-Solr-NightlyTests-trunk - Build # 762 - Still Failing

2015-08-11 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-trunk/762/

5 tests failed.
REGRESSION:  org.apache.solr.cloud.RollingRestartTest.test

Error Message:
Unable to restart (#6): CloudJettyRunner 
[url=http://127.0.0.1:55998/o_/yp/collection1]

Stack Trace:
java.lang.AssertionError: Unable to restart (#6): CloudJettyRunner 
[url=http://127.0.0.1:55998/o_/yp/collection1]
at 
__randomizedtesting.SeedInfo.seed([86F6FDD729A0E381:EA2C20D875C8E79]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.cloud.RollingRestartTest.restartWithRolesTest(RollingRestartTest.java:104)
at 
org.apache.solr.cloud.RollingRestartTest.test(RollingRestartTest.java:53)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:963)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:938)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)

[JENKINS] Lucene-Solr-trunk-Windows (64bit/jdk1.8.0_51) - Build # 5137 - Failure!

2015-08-11 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Windows/5137/
Java: 64bit/jdk1.8.0_51 -XX:+UseCompressedOops -XX:+UseG1GC

2 tests failed.
FAILED:  org.apache.solr.cloud.CollectionStateFormat2Test.test

Error Message:
Error from server at http://127.0.0.1:56521: Could not find collection : 
myExternColl

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at http://127.0.0.1:56521: Could not find collection : myExternColl
at 
__randomizedtesting.SeedInfo.seed([BC89C1774AF3255:839CA3CDDA535FAD]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:560)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:234)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:226)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:372)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:325)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1086)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:857)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:800)
at org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1220)
at 
org.apache.solr.cloud.CollectionStateFormat2Test.testZkNodeLocation(CollectionStateFormat2Test.java:84)
at 
org.apache.solr.cloud.CollectionStateFormat2Test.test(CollectionStateFormat2Test.java:40)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:963)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:938)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)

[jira] [Updated] (SOLR-6760) New optimized DistributedQueue implementation for overseer

2015-08-11 Thread Scott Blum (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6760?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Scott Blum updated SOLR-6760:
-
Attachment: SOLR-6760.patch

Running ant test now; I actually expect this may pass.

 New optimized DistributedQueue implementation for overseer
 --

 Key: SOLR-6760
 URL: https://issues.apache.org/jira/browse/SOLR-6760
 Project: Solr
  Issue Type: Bug
Reporter: Noble Paul
Assignee: Noble Paul
 Attachments: SOLR-6760.patch, SOLR-6760.patch, deadlock.patch


 Currently the DQ works as follows
 * read all items in the directory
 * sort them all 
 * take the head and return it and discard everything else
 * rinse and repeat
 This works well when we have only a handful of items in the queue. If the 
 number of items in the queue is much larger (tens of thousands), this is 
 counterproductive.
 As the overseer queue is a multiple-producer, single-consumer queue, we can 
 read them all in bulk and, before processing each item, just do a 
 zk.exists(itemname); if all is well we don't need to do the fetch-all + 
 sort thing again.






[jira] [Commented] (SOLR-6736) A collections-like request handler to manage solr configurations on zookeeper

2015-08-11 Thread Timothy Potter (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6736?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14682410#comment-14682410
 ] 

Timothy Potter commented on SOLR-6736:
--

Just tuning in to this one ... what about adding individual files after a 
configset has been created? Is that envisioned for this API? If so, will that 
use the signing stuff of SOLR-7126? Personally, I think it's a pain to 
re-upload the whole config directory as a zip if I want to change one word in 
my protected-words file, for example.

 A collections-like request handler to manage solr configurations on zookeeper
 -

 Key: SOLR-6736
 URL: https://issues.apache.org/jira/browse/SOLR-6736
 Project: Solr
  Issue Type: New Feature
  Components: SolrCloud
Reporter: Varun Rajput
Assignee: Anshum Gupta
 Attachments: SOLR-6736.patch, SOLR-6736.patch, SOLR-6736.patch, 
 SOLR-6736.patch, SOLR-6736.patch, SOLR-6736.patch, SOLR-6736.patch, 
 SOLR-6736.patch, newzkconf.zip, test_private.pem, test_pub.der, 
 zkconfighandler.zip, zkconfighandler.zip


 Managing Solr configuration files on zookeeper becomes cumbersome while using 
 solr in cloud mode, especially while trying out changes in the 
 configurations. 
 It will be great if there is a request handler that can provide an API to 
 manage the configurations similar to the collections handler that would allow 
 actions like uploading new configurations, linking them to a collection, 
 deleting configurations, etc.
 example : 
 {code}
 #use the following command to upload a new configset called mynewconf. This 
 will fail if there is already a conf called 'mynewconf'. The file could be a 
 jar, zip or tar file which contains all the files for this conf.
 curl -X POST -H 'Content-Type: application/octet-stream' --data-binary 
 @testconf.zip 
 http://localhost:8983/solr/admin/configs/mynewconf?sig=the-signature
 {code}
 A GET to http://localhost:8983/solr/admin/configs will give a list of configs 
 available
 A GET to http://localhost:8983/solr/admin/configs/mynewconf would give the 
 list of files in mynewconf






[JENKINS-EA] Lucene-Solr-trunk-Linux (32bit/jdk1.8.0_60-ea-b24) - Build # 13820 - Failure!

2015-08-11 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/13820/
Java: 32bit/jdk1.8.0_60-ea-b24 -client -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  org.apache.solr.cloud.BasicDistributedZkTest.test

Error Message:
commitWithin did not work on node: http://127.0.0.1:43315/wpq/w/collection1 
expected:68 but was:67

Stack Trace:
java.lang.AssertionError: commitWithin did not work on node: 
http://127.0.0.1:43315/wpq/w/collection1 expected:68 but was:67
at 
__randomizedtesting.SeedInfo.seed([D32512954D9C468E:5B712D4FE3602B76]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at 
org.apache.solr.cloud.BasicDistributedZkTest.test(BasicDistributedZkTest.java:333)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:963)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:938)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)

[jira] [Commented] (LUCENE-6699) Integrate lat/lon BKD and spatial3d

2015-08-11 Thread Karl Wright (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14682421#comment-14682421
 ] 

Karl Wright commented on LUCENE-6699:
-

If you create a branch I can generate patches against it.


 Integrate lat/lon BKD and spatial3d
 ---

 Key: LUCENE-6699
 URL: https://issues.apache.org/jira/browse/LUCENE-6699
 Project: Lucene - Core
  Issue Type: New Feature
Reporter: Michael McCandless
Assignee: Michael McCandless
 Attachments: Geo3DPacking.java, LUCENE-6699.patch, LUCENE-6699.patch, 
 LUCENE-6699.patch


 I'm opening this for discussion, because I'm not yet sure how to do
 this integration, because of my ignorance about spatial in general and
 spatial3d in particular :)
 Our BKD tree impl is very fast at doing lat/lon shape intersection
 (bbox, polygon, soon distance: LUCENE-6698) against previously indexed
 points.
 I think to integrate with spatial3d, we would first need to record
 lat/lon/z into doc values.  Somewhere I saw discussion about how we
 could stuff all 3 into a single long value with acceptable precision
 loss?  Or, we could use BinaryDocValues?  We need all 3 dims available
 to do the fast per-hit query time filtering.
 But, second: what do we index into the BKD tree?  Can we just index
 earth surface lat/lon, and then at query time is spatial3d able to
 give me an enclosing surface lat/lon bbox for a 3d shape?  Or
 ... must we index all 3 dimensions into the BKD tree (seems like this
 could be somewhat wasteful)?






[jira] [Commented] (SOLR-7826) Permission issues when creating cores with bin/solr

2015-08-11 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7826?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14682420#comment-14682420
 ] 

Shawn Heisey commented on SOLR-7826:


I'm going to assume that the id command (/usr/bin/id on Ubuntu and 
redhat-based systems) is present on the system and that the short options on a 
commercial Unix behave like the GNU version.  On Linux, the id command is in 
the same package (coreutils) as ls, so I think this is a safe assumption.

 Permission issues when creating cores with bin/solr
 ---

 Key: SOLR-7826
 URL: https://issues.apache.org/jira/browse/SOLR-7826
 Project: Solr
  Issue Type: Improvement
Reporter: Shawn Heisey
Priority: Minor

 Ran into an interesting situation on IRC today.
 Solr has been installed as a service using the shell script 
 install_solr_service.sh ... so it is running as an unprivileged user.
 User is running bin/solr create as root.  This causes permission problems, 
 because the script creates the core's instanceDir with root ownership, then 
 when Solr is instructed to actually create the core, it cannot create the 
 dataDir.
 Enhancement idea:  When the install script is used, leave breadcrumbs 
 somewhere so that the create core section of the main script can find it 
 and su to the user specified during install.






[jira] [Commented] (LUCENE-6699) Integrate lat/lon BKD and spatial3d

2015-08-11 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14682432#comment-14682432
 ] 

ASF subversion and git services commented on LUCENE-6699:
-

Commit 1695368 from [~mikemccand] in branch 'dev/branches/lucene6699'
[ https://svn.apache.org/r1695368 ]

LUCENE-6699: make branch

 Integrate lat/lon BKD and spatial3d
 ---

 Key: LUCENE-6699
 URL: https://issues.apache.org/jira/browse/LUCENE-6699
 Project: Lucene - Core
  Issue Type: New Feature
Reporter: Michael McCandless
Assignee: Michael McCandless
 Attachments: Geo3DPacking.java, LUCENE-6699.patch, LUCENE-6699.patch, 
 LUCENE-6699.patch


 I'm opening this for discussion, because I'm not yet sure how to do
 this integration, because of my ignorance about spatial in general and
 spatial3d in particular :)
 Our BKD tree impl is very fast at doing lat/lon shape intersection
 (bbox, polygon, soon distance: LUCENE-6698) against previously indexed
 points.
 I think to integrate with spatial3d, we would first need to record
 lat/lon/z into doc values.  Somewhere I saw discussion about how we
 could stuff all 3 into a single long value with acceptable precision
 loss?  Or, we could use BinaryDocValues?  We need all 3 dims available
 to do the fast per-hit query time filtering.
 But, second: what do we index into the BKD tree?  Can we just index
 earth surface lat/lon, and then at query time is spatial3d able to
 give me an enclosing surface lat/lon bbox for a 3d shape?  Or
 ... must we index all 3 dimensions into the BKD tree (seems like this
 could be somewhat wasteful)?






[jira] [Updated] (LUCENE-6174) Improve ant eclipse to select right JRE for building

2015-08-11 Thread Dawid Weiss (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6174?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Weiss updated LUCENE-6174:

Priority: Trivial  (was: Major)

 Improve ant eclipse to select right JRE for building
 --

 Key: LUCENE-6174
 URL: https://issues.apache.org/jira/browse/LUCENE-6174
 Project: Lucene - Core
  Issue Type: Improvement
  Components: general/build
Reporter: Uwe Schindler
Assignee: Uwe Schindler
Priority: Trivial
 Attachments: LUCENE-6174.patch


 Whenever I run {{ant eclipse}} the setting choosing the right JVM is lost and 
 has to be reassigned in the project properties.
 In fact the classpath generator writes a new classpath file (as it should), 
 but this only contains the default entry:
 {code:xml}
 <classpathentry kind="con" path="org.eclipse.jdt.launching.JRE_CONTAINER"/>
 {code}
 Instead it should preserve something like:
 {code:xml}
 <classpathentry kind="con" 
 path="org.eclipse.jdt.launching.JRE_CONTAINER/org.eclipse.jdt.internal.debug.ui.launcher.StandardVMType/jdk1.8.0_25"/>
 {code}
 We can either pass this via an Ant property on the command line, or the user 
 can set it in lucene/build.properties or per user. An alternative would be to 
 generate the name "jdk1.8.0_25" by guessing from ANT's java.home. If this 
 name does not exist in eclipse it would produce an error and the user would 
 need to add the correct JDK.
 I currently have the problem that my Eclipse uses Java 7 by default and 
 whenever I rebuild the eclipse project, the change to Java 8 in trunk is gone.
 When this is fixed, I could easily/automatically have the right JDK used by 
 eclipse for trunk (Java 8) and branch_5x (Java 7).
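The java.home guessing alternative mentioned in the description could look roughly like this. The class and method names are hypothetical, not the actual ant-eclipse generator code; the only assumption used is that java.home typically points at the jre directory inside a JDK install:

```java
import java.nio.file.Path;
import java.nio.file.Paths;

public class JreContainerGuess {
    /** Guess the Eclipse JRE container entry from java.home: java.home usually
     *  points at the jre directory inside the JDK install, so stripping a
     *  trailing "jre" yields a name like "jdk1.8.0_25". */
    static String containerPath(String javaHome) {
        Path p = Paths.get(javaHome);
        if (p.getFileName() != null && "jre".equals(p.getFileName().toString())) {
            p = p.getParent(); // .../jdk1.8.0_25/jre -> .../jdk1.8.0_25
        }
        return "org.eclipse.jdt.launching.JRE_CONTAINER/"
             + "org.eclipse.jdt.internal.debug.ui.launcher.StandardVMType/"
             + p.getFileName();
    }

    public static void main(String[] args) {
        // Fixed sample path; in the real generator this would be ANT's java.home.
        System.out.println(containerPath("/opt/jdk1.8.0_25/jre"));
    }
}
```

As the issue notes, this guess can miss: if Eclipse has no installed JRE under the derived name, the project shows an error until the user registers the matching JDK.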






[jira] [Commented] (LUCENE-6174) Improve ant eclipse to select right JRE for building

2015-08-11 Thread Dawid Weiss (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6174?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14682376#comment-14682376
 ] 

Dawid Weiss commented on LUCENE-6174:
-

Let me know if you'd like me to add it, [~thetaphi], I can take care of this.

 Improve ant eclipse to select right JRE for building
 --

 Key: LUCENE-6174
 URL: https://issues.apache.org/jira/browse/LUCENE-6174
 Project: Lucene - Core
  Issue Type: Improvement
  Components: general/build
Reporter: Uwe Schindler
Assignee: Uwe Schindler
 Attachments: LUCENE-6174.patch


 Whenever I run ant eclipse the setting choosing the right JVM is lost and 
 has to be reassigned in the project properties.
 In fact the classpath generator writes a new classpath file (as it should), 
 but this onl ycontains the default entry:
 {code:xml}
 classpathentry kind=con path=org.eclipse.jdt.launching.JRE_CONTAINER/
 {code}
 Instead it should preserve something like:
 {code:xml}
 classpathentry kind=con 
 path=org.eclipse.jdt.launching.JRE_CONTAINER/org.eclipse.jdt.internal.debug.ui.launcher.StandardVMType/jdk1.8.0_25/
 {code}
 We can either pass this via an Ant property on the command line, or the user can 
 set it in lucene/build.properties (or per user). An alternative would be to 
 generate the name jdk1.8.0_25 by guessing from Ant's java.home. If this 
 name does not exist in Eclipse it would produce an error and the user would need 
 to add the correct JDK.
 I currently have the problem that my Eclipse uses Java 7 by default and 
 whenever I rebuild the eclipse project, the change to Java 8 in trunk is gone.
 When this is fixed, I could easily/automatically have the right JDK used by 
 eclipse for trunk (Java 8) and branch_5x (Java 7).
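
One way the Ant-property route described above could look (a hypothetical sketch: the property name {{eclipse.jdk.name}} and the substitution mechanism are assumptions, not the attached patch):

{code:xml}
<!-- Hypothetical: let the user pick the JRE container name via a property,
     e.g. ant eclipse -Declipse.jdk.name=jdk1.8.0_25; when unset, fall back
     to the workspace default JRE container. -->
<property name="eclipse.jdk.name" value=""/>
<condition property="eclipse.jre.container"
           value="org.eclipse.jdt.launching.JRE_CONTAINER"
           else="org.eclipse.jdt.launching.JRE_CONTAINER/org.eclipse.jdt.internal.debug.ui.launcher.StandardVMType/${eclipse.jdk.name}">
  <equals arg1="${eclipse.jdk.name}" arg2=""/>
</condition>
<!-- The .classpath generator would then emit:
     <classpathentry kind="con" path="${eclipse.jre.container}"/> -->
{code}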



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6699) Integrate lat/lon BKD and spatial3d

2015-08-11 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14682437#comment-14682437
 ] 

ASF subversion and git services commented on LUCENE-6699:
-

Commit 1695369 from [~mikemccand] in branch 'dev/branches/lucene6699'
[ https://svn.apache.org/r1695369 ]

LUCENE-6699: Karl's patch to extend geo3d apis to 3d rectangles

 Integrate lat/lon BKD and spatial3d
 ---

 Key: LUCENE-6699
 URL: https://issues.apache.org/jira/browse/LUCENE-6699
 Project: Lucene - Core
  Issue Type: New Feature
Reporter: Michael McCandless
Assignee: Michael McCandless
 Attachments: Geo3DPacking.java, LUCENE-6699.patch, LUCENE-6699.patch, 
 LUCENE-6699.patch


 I'm opening this for discussion, because I'm not yet sure how to do
 this integration, because of my ignorance about spatial in general and
 spatial3d in particular :)
 Our BKD tree impl is very fast at doing lat/lon shape intersection
 (bbox, polygon, soon distance: LUCENE-6698) against previously indexed
 points.
 I think to integrate with spatial3d, we would first need to record
 lat/lon/z into doc values.  Somewhere I saw discussion about how we
 could stuff all 3 into a single long value with acceptable precision
 loss?  Or, we could use BinaryDocValues?  We need all 3 dims available
 to do the fast per-hit query time filtering.
 But, second: what do we index into the BKD tree?  Can we just index
 earth surface lat/lon, and then at query time is spatial3d able to
 give me an enclosing surface lat/lon bbox for a 3d shape?  Or
 ... must we index all 3 dimensions into the BKD tree (seems like this
 could be somewhat wasteful)?
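
The "stuff all 3 into a single long" idea can be sketched roughly as follows (a hedged, self-contained illustration; the 21-bit width, the quantization scheme, and the {{Geo3DPackingSketch}} class are assumptions, not the attached Geo3DPacking.java):

```java
// Sketch: pack x/y/z (each in [-max, max]) into one long, 21 bits per dimension.
// 21 bits per axis gives ~2M steps, i.e. ~1e-6 absolute precision over a
// unit-sphere-scale range -- the "acceptable precision loss" trade-off.
public class Geo3DPackingSketch {
  static final int BITS = 21;
  static final long MASK = (1L << BITS) - 1;

  // Quantize v in [-max, max] onto [0, 2^21 - 1].
  static long encode(double v, double max) {
    return (long) Math.floor((v + max) / (2 * max) * MASK);
  }

  // Invert the quantization (lossy: at most one quantization step of error).
  static double decode(long bits, double max) {
    return (bits / (double) MASK) * (2 * max) - max;
  }

  static long pack(double x, double y, double z, double max) {
    return (encode(x, max) << (2 * BITS)) | (encode(y, max) << BITS) | encode(z, max);
  }

  static double unpackX(long p, double max) { return decode((p >>> (2 * BITS)) & MASK, max); }
  static double unpackY(long p, double max) { return decode((p >>> BITS) & MASK, max); }
  static double unpackZ(long p, double max) { return decode(p & MASK, max); }

  public static void main(String[] args) {
    double max = 1.1; // assumed bound on |x|, |y|, |z| for a near-unit sphere
    long p = pack(0.5, -0.25, 0.75, max);
    System.out.println(unpackX(p, max) + " " + unpackY(p, max) + " " + unpackZ(p, max));
  }
}
```

All three coordinates round-trip to within ~1e-6, which is what makes the single-long NumericDocValues option plausible versus BinaryDocValues.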






[jira] [Commented] (LUCENE-6699) Integrate lat/lon BKD and spatial3d

2015-08-11 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14682438#comment-14682438
 ] 

ASF subversion and git services commented on LUCENE-6699:
-

Commit 1695370 from [~mikemccand] in branch 'dev/branches/lucene6699'
[ https://svn.apache.org/r1695370 ]

LUCENE-6699: initial 3D BKD implementation







[jira] [Commented] (LUCENE-6699) Integrate lat/lon BKD and spatial3d

2015-08-11 Thread Karl Wright (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14682439#comment-14682439
 ] 

Karl Wright commented on LUCENE-6699:
-

Ok, for a start -- the way you get X, Y, and Z ranges for a given planet model 
is via PlanetModel.getMinimumX(), getMaximumX(), getMinimumY(), getMaximumY(), 
getMinimumZ(), getMaximumZ().  The GeoShape interface does not provide a means 
of obtaining the PlanetModel, so you will need to pass this in to your 
constructor in addition to what you currently have.

Second, the following code:

{code}
+ double x = BKD3DTreeDocValuesFormat.decodeValue(BKD3DTreeDocValuesFormat.readInt(bytes.bytes, bytes.offset));
+ double y = BKD3DTreeDocValuesFormat.decodeValue(BKD3DTreeDocValuesFormat.readInt(bytes.bytes, bytes.offset+4));
+ double z = BKD3DTreeDocValuesFormat.decodeValue(BKD3DTreeDocValuesFormat.readInt(bytes.bytes, bytes.offset+8));
+ //return GeoUtils.pointInPolygon(polyLons, polyLats, lat, lon);
+ // nocommit fixme!
+ return true;
{code}

... should call GeoShape.isWithin(x,y,z) to determine membership within the 
shape.
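
(That per-hit filter step could look roughly like this; a self-contained sketch where {{readInt}}, {{decodeValue}}, and the membership test are local stand-ins for the real BKD3DTreeDocValuesFormat helpers and GeoShape, which are not reproduced here:)

```java
// Sketch: decode three big-endian ints from the doc-values bytes and ask the
// shape for membership, instead of the placeholder "return true".
public class IsWithinSketch {
  // Stand-in for BKD3DTreeDocValuesFormat.readInt (big-endian assumed).
  static int readInt(byte[] b, int off) {
    return ((b[off] & 0xFF) << 24) | ((b[off + 1] & 0xFF) << 16)
         | ((b[off + 2] & 0xFF) << 8) | (b[off + 3] & 0xFF);
  }

  // Stand-in for BKD3DTreeDocValuesFormat.decodeValue: fixed scale assumed.
  static double decodeValue(int v) {
    return v * 1e-7;
  }

  // Placeholder for GeoShape.isWithin(x, y, z): a unit sphere at the origin.
  static boolean isWithin(double x, double y, double z) {
    return x * x + y * y + z * z <= 1.0;
  }

  static boolean accept(byte[] bytes, int offset) {
    double x = decodeValue(readInt(bytes, offset));
    double y = decodeValue(readInt(bytes, offset + 4));
    double z = decodeValue(readInt(bytes, offset + 8));
    return isWithin(x, y, z); // the call the nocommit should become
  }

  public static void main(String[] args) {
    byte[] buf = new byte[12]; // all zeros decodes to (0,0,0), inside the sphere
    System.out.println(accept(buf, 0));
  }
}
```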

Finally,

{code}
+ public BKD3DTreeReader.Relation compare(int xMin, int xMax, int yMin, int yMax, int zMin, int zMax) {
+   // nocommit fixme!
+   return BKD3DTreeReader.Relation.INSIDE;
+ }
{code}

... should do the following:

{code}
GeoArea xyzSolid = new XYZSolid(planetModel, xMin, xMax, yMin, yMax, zMin, zMax);
return xyzSolid.getRelationship(geoShape) == GeoArea.WHATEVER ? xxx : yyy;
{code}
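
(Filled out, that mapping might look like the sketch below; the int relationship codes and enum are local stand-ins for the real GeoArea constants and BKD3DTreeReader.Relation, and the WITHIN/DISJOINT semantics are my assumption about getRelationship's meaning:)

```java
// Sketch: translate a geo3d-style relationship code for "cell vs. shape"
// into the BKD traversal decision.
public class RelationSketch {
  // Stand-ins for GeoArea.* relationship codes (names/values assumed).
  static final int CONTAINS = 0, WITHIN = 1, OVERLAPS = 2, DISJOINT = 3;

  enum Relation { INSIDE, CROSSES, OUTSIDE } // stand-in for BKD3DTreeReader.Relation

  static Relation toRelation(int rel) {
    switch (rel) {
      case WITHIN:   return Relation.INSIDE;  // cell entirely inside shape: accept all docs
      case DISJOINT: return Relation.OUTSIDE; // cell misses shape: skip the subtree
      default:       return Relation.CROSSES; // CONTAINS/OVERLAPS: recurse, test per point
    }
  }

  public static void main(String[] args) {
    System.out.println(toRelation(WITHIN) + " " + toRelation(DISJOINT) + " " + toRelation(OVERLAPS));
  }
}
```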








[jira] [Commented] (LUCENE-6699) Integrate lat/lon BKD and spatial3d

2015-08-11 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14682440#comment-14682440
 ] 

Michael McCandless commented on LUCENE-6699:


OK I created the branch and committed our two latest patches!

https://svn.apache.org/repos/asf/lucene/dev/branches/lucene6699






