[jira] [Commented] (LUCENE-4877) Fix analyzer factories to throw exception when arguments are invalid

2013-03-25 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4877?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13612412#comment-13612412
 ] 

Steve Rowe commented on LUCENE-4877:


+1, patch looks good.

 Fix analyzer factories to throw exception when arguments are invalid
 

 Key: LUCENE-4877
 URL: https://issues.apache.org/jira/browse/LUCENE-4877
 Project: Lucene - Core
  Issue Type: Bug
  Components: modules/analysis
Reporter: Robert Muir
 Attachments: LUCENE-4877_one_solution_prototype.patch


 Currently if someone typos an argument someParamater=xyz instead of 
 someParameter=xyz, they get no exception and sometimes incorrect behavior.
 It would be way better if these factories threw exception on unknown params, 
 e.g. they removed the args they used and checked they were empty at the end.
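
 A minimal sketch of that consume-and-verify pattern (the factory name and the
 min/max parameters below are made up for illustration; this is not an actual
 Lucene class):

{code}
import java.util.HashMap;
import java.util.Map;

// Illustrative only: a factory that consumes its arguments and rejects leftovers.
class LengthFilterFactorySketch {
  final int min;
  final int max;

  LengthFilterFactorySketch(Map<String, String> args) {
    // Work on a copy so entries can be removed as they are consumed.
    Map<String, String> remaining = new HashMap<String, String>(args);
    this.min = Integer.parseInt(require(remaining, "min"));
    this.max = Integer.parseInt(require(remaining, "max"));
    // Anything left over is an unknown (possibly misspelled) parameter.
    if (!remaining.isEmpty()) {
      throw new IllegalArgumentException("Unknown parameters: " + remaining.keySet());
    }
  }

  private static String require(Map<String, String> args, String name) {
    String value = args.remove(name);
    if (value == null) {
      throw new IllegalArgumentException("Missing required parameter: " + name);
    }
    return value;
  }
}
{code}

 With this pattern, a typo such as someParamater=xyz fails fast at factory
 creation time instead of being silently ignored.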

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



RE: [ANNOUNCE] Solr wiki editing change

2013-03-25 Thread Toke Eskildsen
Steve Rowe [sar...@gmail.com]:
 From now on, only people who appear on 
 http://wiki.apache.org/solr/ContributorsGroup will be able to 
 create/modify/delete wiki pages.

TokeEskildsen would like to be added to the list and would like spammers to 
suffer greatly. 
-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4641) Schema should throw exception on illegal field parameters

2013-03-25 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4641?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13612415#comment-13612415
 ] 

Steve Rowe commented on SOLR-4641:
--

+1, patch looks good.

 Schema should throw exception on illegal field parameters
 -

 Key: SOLR-4641
 URL: https://issues.apache.org/jira/browse/SOLR-4641
 Project: Solr
  Issue Type: Bug
  Components: Schema and Analysis
Reporter: Robert Muir
 Attachments: SOLR-4641.patch


 Currently FieldType does this correctly, but SchemaField does not.
 so for example simple typos like (one from solr's test configs itself) 
 omitOmitTermFrequencyAndPositions=true... on the field elements themselves 
 are silently ignored.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [ANNOUNCE] Solr wiki editing change

2013-03-25 Thread Steve Rowe
Done. - Steve

On Mar 25, 2013, at 2:04 AM, Toke Eskildsen t...@statsbiblioteket.dk wrote:

 Steve Rowe [sar...@gmail.com]:
 From now on, only people who appear on 
 http://wiki.apache.org/solr/ContributorsGroup will be able to 
 create/modify/delete wiki pages.
 
 TokeEskildsen would like to be added to the list and would like spammers to 
 suffer greatly. 


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4058) DIH should use the SolrCloudServer impl when running in SolrCloud mode.

2013-03-25 Thread Mikhail Khludnev (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4058?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13612437#comment-13612437
 ] 

Mikhail Khludnev commented on SOLR-4058:


[~markrmil...@gmail.com], although it sounds reasonable, can you tell me what 
the benefit is? Why can't the update distribution be handled by 
DistributedUpdateProcessor underneath? Could it be related to the level of 
concurrency?
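
For context, a minimal SolrJ sketch of what a cloud-aware update path looks like
(the SolrJ cloud client class is CloudSolrServer; the ZooKeeper address and the
collection name below are placeholders, not values from this issue):

{code}
import org.apache.solr.client.solrj.impl.CloudSolrServer;
import org.apache.solr.common.SolrInputDocument;

public class CloudUpdateSketch {
  public static void main(String[] args) throws Exception {
    // Routes each update according to the cluster state kept in ZooKeeper,
    // rather than posting everything to a single fixed node.
    CloudSolrServer server = new CloudSolrServer("localhost:9983"); // placeholder zkHost
    server.setDefaultCollection("collection1");                     // placeholder collection
    try {
      SolrInputDocument doc = new SolrInputDocument();
      doc.addField("id", "1");
      doc.addField("title", "document imported by DIH");
      server.add(doc);
      server.commit();
    } finally {
      server.shutdown();
    }
  }
}
{code}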

 DIH should use the SolrCloudServer impl when running in SolrCloud mode.
 ---

 Key: SOLR-4058
 URL: https://issues.apache.org/jira/browse/SOLR-4058
 Project: Solr
  Issue Type: New Feature
  Components: SolrCloud
Reporter: Mark Miller
Priority: Minor
 Fix For: 4.3




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [ANNOUNCE] Solr wiki editing change

2013-03-25 Thread Dawid Weiss
Hi Steve,

Can you add me too? We have a few pages which we maintain (search results
clustering related). My wiki user is DawidWeiss

Dawid

On Mon, Mar 25, 2013 at 4:18 AM, Steve Rowe sar...@gmail.com wrote:

 The wiki at http://wiki.apache.org/solr/ has come under attack by
 spammers more frequently of late, so the PMC has decided to lock it down in
 an attempt to reduce the work involved in tracking and removing spam.

 From now on, only people who appear on
 http://wiki.apache.org/solr/ContributorsGroup will be able to
 create/modify/delete wiki pages.

 Please request either on the solr-u...@lucene.apache.org or on
 dev@lucene.apache.org to have your wiki username added to the
 ContributorsGroup page - this is a one-time step.

 Steve
 -
 To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
 For additional commands, e-mail: dev-h...@lucene.apache.org




Re: [JENKINS] Lucene-Solr-trunk-Linux (32bit/jdk1.7.0_15) - Build # 4856 - Failure!

2013-03-25 Thread Dawid Weiss
 I really think I've got it this time :)

Go Mark! Go Mark! Go Mark!... :)

D.

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: svn commit: r1460519 [2/3] - in /lucene/dev/trunk/solr: ./ core/src/java/org/apache/solr/core/ core/src/java/org/apache/solr/response/ core/src/java/org/apache/solr/rest/ core/src/java/org/apache/

2013-03-25 Thread Robert Muir
On Mon, Mar 25, 2013 at 1:08 AM, Steve Rowe sar...@gmail.com wrote:
 Robert,

 It's used to serialize similarity factories.  See getNamedPropertyValues() 
 just below.

 There are tests for similarity factory classes of the form 
 solr.SimpleFactoryClassName, e.g. TestSweetSpotSimilarityFactory, which 
 uses solr/core/src/test-files/solr/collection1/conf/schema-sweetspot.xml, 
 which declares:

   <similarity class="solr.SchemaSimilarityFactory"/>

 When you say that's not even the package where our similarity factories go, 
 to which package are you referring?

 See IndexSchema.readSimilarity() for how it can work.


Right, i remember setting this part up which is why i took a look:

  final Object obj = loader.newInstance(((Element)
node).getAttribute("class"), Object.class, "search.similarities.");

this piece means that solr. -> org.apache.solr.search.similarities.

But this 'normalize' is wrong: it maps org.apache.lucene.xxx, or
other packages of org.apache.solr.xxx, to solr.xxx.

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS-MAVEN] Lucene-Solr-Maven-trunk #811: POMs out of sync

2013-03-25 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Maven-trunk/811/

1 tests failed.
REGRESSION:  org.apache.solr.cloud.SyncSliceTest.testDistribSearch

Error Message:
Test Setup Failure: shard1 should have just been set up to be inconsistent - 
but it's still consistent. Leader:http://127.0.0.1:48869/collection1 Dead 
Guy:http://127.0.0.1:48866/collection1skip list:[CloudJettyRunner 
[url=http://127.0.0.1:48877/collection1], CloudJettyRunner 
[url=http://127.0.0.1:48877/collection1]]

Stack Trace:
java.lang.AssertionError: Test Setup Failure: shard1 should have just been set 
up to be inconsistent - but it's still consistent. 
Leader:http://127.0.0.1:48869/collection1 Dead 
Guy:http://127.0.0.1:48866/collection1skip list:[CloudJettyRunner 
[url=http://127.0.0.1:48877/collection1], CloudJettyRunner 
[url=http://127.0.0.1:48877/collection1]]
at 
__randomizedtesting.SeedInfo.seed([142BDA7829FAFF4B:95CD54605EA59F77]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertNotNull(Assert.java:526)
at org.apache.solr.cloud.SyncSliceTest.doTest(SyncSliceTest.java:217)




Build Log:
[...truncated 23584 lines...]



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

Re: [ANNOUNCE] Wiki editing change

2013-03-25 Thread Simon Willnauer
On Mon, Mar 25, 2013 at 4:16 AM, Steve Rowe sar...@gmail.com wrote:
 The wiki at http://wiki.apache.org/lucene-java/ has come under attack by 
 spammers more frequently of late, so the PMC has decided to lock it down in 
 an attempt to reduce the work involved in tracking and removing spam.

 From now on, only people who appear on 
 http://wiki.apache.org/lucene-java/ContributorsGroup will be able to 
 create/modify/delete wiki pages.

 Please request either on the java-u...@lucene.apache.org or on 
 dev@lucene.apache.org to have your wiki username added to the 
 ContributorsGroup page - this is a one-time step.

Please add me to the list: simonwillnauer

simon

 Steve
 -
 To unsubscribe, e-mail: java-user-unsubscr...@lucene.apache.org
 For additional commands, e-mail: java-user-h...@lucene.apache.org


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-4878) Regular expression syntax with MultiFieldQueryParser causes assert/NPE

2013-03-25 Thread Simon Willnauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-4878?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Simon Willnauer updated LUCENE-4878:


Affects Version/s: 4.2
Fix Version/s: 4.2.1
   5.0

 Regular expression syntax with MultiFieldQueryParser causes assert/NPE
 --

 Key: LUCENE-4878
 URL: https://issues.apache.org/jira/browse/LUCENE-4878
 Project: Lucene - Core
  Issue Type: Bug
  Components: core/queryparser
Affects Versions: 4.1, 4.2
Reporter: Adam Rauch
Assignee: Simon Willnauer
 Fix For: 5.0, 4.2.1

 Attachments: LUCENE-4878.patch


 Using regex syntax causes MultiFieldQueryParser.parse() to throw an 
 AssertionError (if asserts are on) or causes subsequent searches using the 
 returned Query instance to throw NullPointerException (if asserts are off). 
 Simon Willnauer's comment on the java-user alias: This is in-fact a bug in 
 the MultiFieldQueryParser [...] MultifieldQueryParser should override 
 getRegexpQuery but it doesn't

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (LUCENE-4878) Regular expression syntax with MultiFieldQueryParser causes assert/NPE

2013-03-25 Thread Simon Willnauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-4878?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Simon Willnauer reassigned LUCENE-4878:
---

Assignee: Simon Willnauer

 Regular expression syntax with MultiFieldQueryParser causes assert/NPE
 --

 Key: LUCENE-4878
 URL: https://issues.apache.org/jira/browse/LUCENE-4878
 Project: Lucene - Core
  Issue Type: Bug
  Components: core/queryparser
Affects Versions: 4.1
Reporter: Adam Rauch
Assignee: Simon Willnauer
 Attachments: LUCENE-4878.patch


 Using regex syntax causes MultiFieldQueryParser.parse() to throw an 
 AssertionError (if asserts are on) or causes subsequent searches using the 
 returned Query instance to throw NullPointerException (if asserts are off). 
 Simon Willnauer's comment on the java-user alias: This is in-fact a bug in 
 the MultiFieldQueryParser [...] MultifieldQueryParser should override 
 getRegexpQuery but it doesn't

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-4878) Regular expression syntax with MultiFieldQueryParser causes assert/NPE

2013-03-25 Thread Simon Willnauer (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4878?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13612460#comment-13612460
 ] 

Simon Willnauer commented on LUCENE-4878:
-

Thanks for raising this. I will upload a patch shortly.

 Regular expression syntax with MultiFieldQueryParser causes assert/NPE
 --

 Key: LUCENE-4878
 URL: https://issues.apache.org/jira/browse/LUCENE-4878
 Project: Lucene - Core
  Issue Type: Bug
  Components: core/queryparser
Affects Versions: 4.1, 4.2
Reporter: Adam Rauch
Assignee: Simon Willnauer
 Fix For: 5.0, 4.2.1

 Attachments: LUCENE-4878.patch


 Using regex syntax causes MultiFieldQueryParser.parse() to throw an 
 AssertionError (if asserts are on) or causes subsequent searches using the 
 returned Query instance to throw NullPointerException (if asserts are off). 
 Simon Willnauer's comment on the java-user alias: This is in-fact a bug in 
 the MultiFieldQueryParser [...] MultifieldQueryParser should override 
 getRegexpQuery but it doesn't

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-4878) Regular expression syntax with MultiFieldQueryParser causes assert/NPE

2013-03-25 Thread Simon Willnauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-4878?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Simon Willnauer updated LUCENE-4878:


Attachment: LUCENE-4878.patch

here is a simple patch
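
For reference, a sketch of what the missing override typically looks like,
written here as a standalone subclass so it compiles on its own. It mirrors the
existing getWildcardQuery/getPrefixQuery overrides in MultiFieldQueryParser and
is only an illustration, not necessarily the attached patch (which presumably
adds the override to MultiFieldQueryParser itself):

{code}
import java.util.ArrayList;
import java.util.List;

import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.queryparser.classic.MultiFieldQueryParser;
import org.apache.lucene.queryparser.classic.ParseException;
import org.apache.lucene.search.BooleanClause;
import org.apache.lucene.search.Query;
import org.apache.lucene.util.Version;

class RegexpAwareMultiFieldQueryParser extends MultiFieldQueryParser {

  RegexpAwareMultiFieldQueryParser(Version matchVersion, String[] fields, Analyzer analyzer) {
    super(matchVersion, fields, analyzer);
  }

  @Override
  protected Query getRegexpQuery(String field, String termStr) throws ParseException {
    if (field == null) {
      // No explicit field given: expand the regexp over all configured fields
      // (the protected 'fields' array inherited from MultiFieldQueryParser),
      // combining them as SHOULD clauses like the other multi-field overrides.
      List<BooleanClause> clauses = new ArrayList<BooleanClause>();
      for (String f : fields) {
        clauses.add(new BooleanClause(getRegexpQuery(f, termStr), BooleanClause.Occur.SHOULD));
      }
      return getBooleanQuery(clauses, true);
    }
    return super.getRegexpQuery(field, termStr);
  }
}
{code}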

 Regular expression syntax with MultiFieldQueryParser causes assert/NPE
 --

 Key: LUCENE-4878
 URL: https://issues.apache.org/jira/browse/LUCENE-4878
 Project: Lucene - Core
  Issue Type: Bug
  Components: core/queryparser
Affects Versions: 4.1, 4.2
Reporter: Adam Rauch
Assignee: Simon Willnauer
 Fix For: 5.0, 4.2.1

 Attachments: LUCENE-4878.patch


 Using regex syntax causes MultiFieldQueryParser.parse() to throw an 
 AssertionError (if asserts are on) or causes subsequent searches using the 
 returned Query instance to throw NullPointerException (if asserts are off). 
 Simon Willnauer's comment on the java-user alias: This is in-fact a bug in 
 the MultiFieldQueryParser [...] MultifieldQueryParser should override 
 getRegexpQuery but it doesn't

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-4872) BooleanWeight should decide how to execute minNrShouldMatch

2013-03-25 Thread Stefan Pohl (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4872?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13612467#comment-13612467
 ] 

Stefan Pohl commented on LUCENE-4872:
-

Thanks, Mike, this behaves as expected. Now we have a sense of what trade-off 
we'd be going for if we agree on the current model. It is still a hard decision 
though, entailing questions like:
- Does it matter that queries that are slow anyway got 2-3 times slower?
- Are those queries representative to what users do?

A few suggestions for a better model that maybe go beyond the scope of this 
ticket:

A very conservative usage rule for MSMSumScorer would be to use it only if the 
constraint is at least one higher than the number of high-freq terms, then it 
will always kick butt and we'd get the most bang out of this scorer without having 
slow-downs. But we'd miss out on many cases where it would be faster and those 
might be the ones that are used in practice by users, and it is not clear (to 
me:-) what 'high-freq' means. If at all, this should be seen relative to the 
highest-freq subclause.

More generally, it seems to me the problem we're trying to solve here is 
identical to computing a cost. If the cost returned by Scorers correlates with 
execution time, then we could simply call the cost() method on BS and 
MSMSumScorer and use MSMSumScorer if it is significantly below the former 
(assuming there are no side-effects in doing these calls). So we'd defer the 
problem to the individual Scorers, which splits the problem up into smaller 
subproblems and the Scorers know themselves best about their implementation and 
behavior.

To make accurate decisions, we probably have to extend the cost-API to return 
more detailed information to base decision rules on, e.g. upper bound, lower 
bound (to be able to make conservative/speculative decisions) and estimate the 
number of returned docs *and* runtime-correlated cost (in some unit). For 
instance, MSMSumScorer's overall cost depends on both of the latter and can be 
split up into the following 2 stages:

1) Candidate generation = heap-based merge of clause subset, i.e. the same as 
for DisjSumScorer, but on a clause subset:
time to generate all docs from subScorer: correlates with sum over costs of 
#clauses-(mm-1) least-costly subScorers
# candidates = [max(...), min(sum(...), maxdoc)], where ... can be either an 
upper bound, lower bound or an estimate in between of the #candidates returned 
by the #clauses-(mm-1) subScorers
Even for TermScorer, the definitions of these two measures are not identical due 
to the min(..., maxdoc).

2) Full scoring of candidates:
time to advance() and decode postings: (mm-1) * # candidates

The costs would still have to be weighted by the relative overhead of 1) 
heap-merging, 2) advance() + early-stopping; not sure if constants are enough 
here.

While the scope of this topic seems large (modelling all scorers), I currently 
don't see a simpler way to make this reliably work for arbitrarily structured 
queries, think of MSM(subtree1, Disj(MSM(Conj(....
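
A small arithmetic sketch of the two-stage cost model above (the class, method
names and numbers are illustrative only, not Lucene APIs), assuming each
sub-scorer exposes a cost estimate roughly proportional to its document
frequency:

{code}
import java.util.Arrays;

public class MinShouldMatchCostSketch {

  // Stage 1 + stage 2 estimate for the doc-at-a-time MinShouldMatch scorer.
  static long estimateMsmCost(long[] clauseCosts, int minShouldMatch, long maxDoc) {
    long[] sorted = clauseCosts.clone();
    Arrays.sort(sorted);

    // 1) Candidate generation: heap-merge of the (#clauses - (mm-1)) cheapest clauses.
    int merged = clauseCosts.length - (minShouldMatch - 1);
    long generationCost = 0;
    for (int i = 0; i < merged; i++) {
      generationCost += sorted[i];
    }

    // Upper bound on the number of candidates that merge can produce.
    long candidates = Math.min(generationCost, maxDoc);

    // 2) Full scoring: advance() the remaining (mm-1) clauses over each candidate.
    long scoringCost = (long) (minShouldMatch - 1) * candidates;

    return generationCost + scoringCost;
  }

  // An exhaustive disjunction (BooleanScorer-style) roughly visits every posting once.
  static long estimateExhaustiveCost(long[] clauseCosts) {
    long total = 0;
    for (long c : clauseCosts) {
      total += c;
    }
    return total;
  }

  public static void main(String[] args) {
    long[] costs = {5_000_000L, 4_000_000L, 10_000L, 2_000L}; // two common terms, two rare ones
    int mm = 3;
    long maxDoc = 10_000_000L;
    long msm = estimateMsmCost(costs, mm, maxDoc);
    long exhaustive = estimateExhaustiveCost(costs);
    System.out.println("MSM estimate: " + msm + ", exhaustive estimate: " + exhaustive
        + " -> prefer " + (msm < exhaustive ? "MSMSumScorer" : "BooleanScorer"));
  }
}
{code}

With mm at least one higher than the number of high-frequency clauses, the cheap
clauses drive candidate generation and the estimate comes out far below the
exhaustive one; with mm=1 it degenerates to the exhaustive cost.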

 BooleanWeight should decide how to execute minNrShouldMatch
 ---

 Key: LUCENE-4872
 URL: https://issues.apache.org/jira/browse/LUCENE-4872
 Project: Lucene - Core
  Issue Type: Sub-task
  Components: core/search
Reporter: Robert Muir
 Fix For: 5.0, 4.3

 Attachments: crazyMinShouldMatch.tasks


 LUCENE-4571 adds a dedicated document-at-time scorer for minNrShouldMatch 
 which can use advance() behind the scenes. 
 In cases where you have some really common terms and some rare ones this can 
 be a huge performance improvement.
 On the other hand BooleanScorer might still be faster in some cases.
 We should think about what the logic should be here: one simple thing to do 
 is to always use the new scorer when minShouldMatch is set: thats where i'm 
 leaning. 
 But maybe we could have a smarter heuristic too, perhaps based on cost()

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (LUCENE-4872) BooleanWeight should decide how to execute minNrShouldMatch

2013-03-25 Thread Stefan Pohl (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4872?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13612467#comment-13612467
 ] 

Stefan Pohl edited comment on LUCENE-4872 at 3/25/13 8:38 AM:
--

Thanks, Mike, this behaves as expected. Now we have a sense of what trade-off 
we'd be going for if we agree on the current model. It is still a hard decision 
though, entailing questions like:
- Does it matter that queries that are slow anyway got 2-3 times slower?
- Are those queries representative to what users do?

A few suggestions for a better model which maybe goes beyond the scope of this 
ticket:

A very conservative usage rule for MSMSumScorer would be to use it only if the 
constraint is at least one higher than the number of high-freq terms, then it 
will always kick butt and we'd get the most bang out of this scorer without having 
slow-downs. But we'd miss out on many cases where it would be faster and those 
might be the ones that are used in practice by users, and it is not clear (to 
me:-) what 'high-freq' means. If at all, this should be seen relative to the 
highest-freq subclause.

More generally, it seems to me the problem we're trying to solve here is 
identical to computing a cost. If the cost returned by Scorers correlates with 
execution time, then we could simply call the cost() method on BS and 
MSMSumScorer and use MSMSumScorer if it is significantly below the former 
(assuming there are no side-effects in doing these calls). So we'd defer the 
problem to the individual Scorers, which splits the problem up into smaller 
subproblems and the Scorers know themselves best about their implementation and 
behavior.

To make accurate decisions, we probably have to extend the cost-API to return 
more detailed information to base decision rules on, e.g. upper bound, lower 
bound (to be able to make conservative/speculative decisions) and estimate the 
number of returned docs *and* runtime-correlated cost (in some unit). For 
instance, MSMSumScorer's overall cost depends on both of the latter and can be 
split up into the following 2 stages:

# Candidate generation = heap-based merge of clause subset, i.e. the same as 
for DisjSumScorer, but on a clause subset:
time to generate all docs from subScorer: correlates with sum over costs of 
#clauses-(mm-1) least-costly subScorers
Number of candidates: [max(...), min(sum(...), maxdoc)], where ... can be 
either an upper bound, lower bound or an estimate in between of the #candidates 
returned by the #clauses-(mm-1) subScorers
Even for TermScorer, the definitions of these two measures are not identical due 
to the min(..., maxdoc).
# Full scoring of candidates:
time to advance() and decode postings: (mm-1) * # candidates

The costs would still have to be weighted by the relative overhead of 1) 
heap-merging, 2) advance() + early-stopping; not sure if constants are enough 
here.

While the scope of this topic seems large (modelling all scorers), I currently 
don't see a simpler way to make this reliably work for arbitrarily structured 
queries, think of MSM(Disj(MSM(Conj(...),...),...),subtree2,...).

  was (Author: spo):
Thanks, Mike, this behaves as expected. Now we have a sense of what 
trade-off we'd be going for if we agree on the current model, it is still a 
hard decision though, entailing questions like:
- Does it matter that queries that are anyway slow got 2-3 times slower?
- Are those queries representative to what users do?

A few suggestions for a better model that maybe go beyond the scope of this 
ticket:

A very conservative usage rule for MSMSumScorer would be to use it only if the 
constraint is at least one higher than the number of high-freq terms, then it 
will always kick butt and we'd get most bang of this scorer without having 
slow-downs. But we'd miss out on many cases where it would be faster and those 
might be the ones that are used in practice by users, and it is not clear (to 
me:-) what 'high-freq' means. If at all, this should be seen relative to the 
highest-freq subclause.

More generally, it seems to me the problem we're trying to solve here is 
identical to computing a cost. If the cost returned by Scorers correlates with 
execution time, then we could simply call the cost() method on BS and 
MSMSumScorer and use MSMSumScorer if it is significantly below the former 
(assuming there are no side-effects in doing these calls). So we'd defer the 
problem to the individual Scorers, which splits the problem up into smaller 
subproblems and the Scorers know themselves best about their implementation and 
behavior.

To make accurate decisions, we probably have to extend the cost-API to return 
more detailed information to base decision rules on, e.g. upper bound, lower 
bound (to be able to make conservative/speculative decisions) and estimate the 
number of returned docs *and* runtime-correlated cost (in 

[jira] [Commented] (LUCENE-4878) Regular expression syntax with MultiFieldQueryParser causes assert/NPE

2013-03-25 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4878?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13612492#comment-13612492
 ] 

Robert Muir commented on LUCENE-4878:
-

While we are here, can we change this in MultiTermQuery:

{code}
assert field != null;
{code}

to this
{code}
if (field == null) {
  throw new NullPointerException();
}
{code}



 Regular expression syntax with MultiFieldQueryParser causes assert/NPE
 --

 Key: LUCENE-4878
 URL: https://issues.apache.org/jira/browse/LUCENE-4878
 Project: Lucene - Core
  Issue Type: Bug
  Components: core/queryparser
Affects Versions: 4.1, 4.2
Reporter: Adam Rauch
Assignee: Simon Willnauer
 Fix For: 5.0, 4.2.1

 Attachments: LUCENE-4878.patch


 Using regex syntax causes MultiFieldQueryParser.parse() to throw an 
 AssertionError (if asserts are on) or causes subsequent searches using the 
 returned Query instance to throw NullPointerException (if asserts are off). 
 Simon Willnauer's comment on the java-user alias: This is in-fact a bug in 
 the MultiFieldQueryParser [...] MultifieldQueryParser should override 
 getRegexpQuery but it doesn't

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (LUCENE-4878) Regular expression syntax with MultiFieldQueryParser causes assert/NPE

2013-03-25 Thread Simon Willnauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-4878?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Simon Willnauer resolved LUCENE-4878.
-

Resolution: Fixed

committed to trunk, 4.x branch and 4.2.1 bugfix branch. This should be in the 
next bugfix release coming pretty soon. Thanks again for reporting

 Regular expression syntax with MultiFieldQueryParser causes assert/NPE
 --

 Key: LUCENE-4878
 URL: https://issues.apache.org/jira/browse/LUCENE-4878
 Project: Lucene - Core
  Issue Type: Bug
  Components: core/queryparser
Affects Versions: 4.1, 4.2
Reporter: Adam Rauch
Assignee: Simon Willnauer
 Fix For: 5.0, 4.2.1

 Attachments: LUCENE-4878.patch


 Using regex syntax causes MultiFieldQueryParser.parse() to throw an 
 AssertionError (if asserts are on) or causes subsequent searches using the 
 returned Query instance to throw NullPointerException (if asserts are off). 
 Simon Willnauer's comment on the java-user alias: This is in-fact a bug in 
 the MultiFieldQueryParser [...] MultifieldQueryParser should override 
 getRegexpQuery but it doesn't

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-4863) Use FST to hold term in StemmerOverrideFilter

2013-03-25 Thread Simon Willnauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-4863?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Simon Willnauer updated LUCENE-4863:


Attachment: LUCENE-4863.patch

Updated patch, fixing the typo and moving the ignoreCase into the map impl. I 
will commit this soon. Thanks for looking at it, Robert!

 Use FST to hold term in StemmerOverrideFilter
 -

 Key: LUCENE-4863
 URL: https://issues.apache.org/jira/browse/LUCENE-4863
 Project: Lucene - Core
  Issue Type: Improvement
  Components: modules/analysis
Affects Versions: 4.2
Reporter: Simon Willnauer
 Fix For: 5.0, 4.3

 Attachments: LUCENE-4863.patch, LUCENE-4863.patch, LUCENE-4863.patch


 follow-up from LUCENE-4857

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-4878) Regular expression syntax with MultiFieldQueryParser causes assert/NPE

2013-03-25 Thread Simon Willnauer (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4878?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13612529#comment-13612529
 ] 

Simon Willnauer commented on LUCENE-4878:
-

Robert, I agree. I just committed an IAE to trunk, branch_4x and 4.2.1.

 Regular expression syntax with MultiFieldQueryParser causes assert/NPE
 --

 Key: LUCENE-4878
 URL: https://issues.apache.org/jira/browse/LUCENE-4878
 Project: Lucene - Core
  Issue Type: Bug
  Components: core/queryparser
Affects Versions: 4.1, 4.2
Reporter: Adam Rauch
Assignee: Simon Willnauer
 Fix For: 5.0, 4.2.1

 Attachments: LUCENE-4878.patch


 Using regex syntax causes MultiFieldQueryParser.parse() to throw an 
 AssertionError (if asserts are on) or causes subsequent searches using the 
 returned Query instance to throw NullPointerException (if asserts are off). 
 Simon Willnauer's comment on the java-user alias: This is in-fact a bug in 
 the MultiFieldQueryParser [...] MultifieldQueryParser should override 
 getRegexpQuery but it doesn't

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (LUCENE-4863) Use FST to hold term in StemmerOverrideFilter

2013-03-25 Thread Simon Willnauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-4863?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Simon Willnauer reassigned LUCENE-4863:
---

Assignee: Simon Willnauer

 Use FST to hold term in StemmerOverrideFilter
 -

 Key: LUCENE-4863
 URL: https://issues.apache.org/jira/browse/LUCENE-4863
 Project: Lucene - Core
  Issue Type: Improvement
  Components: modules/analysis
Affects Versions: 4.2
Reporter: Simon Willnauer
Assignee: Simon Willnauer
 Fix For: 5.0, 4.3

 Attachments: LUCENE-4863.patch, LUCENE-4863.patch, LUCENE-4863.patch


 follow-up from LUCENE-4857

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4619) Improve PreAnalyzedField query analysis

2013-03-25 Thread Andrzej Bialecki (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4619?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13612533#comment-13612533
 ] 

Andrzej Bialecki  commented on SOLR-4619:
-

So it looks to me like the least controversial option is to put the list of 
preanalyzed fields in solrconfig in the specification of the URP. The trick 
with a magic field name sounds useful too - it would allow overriding the 
list of fields on a per-document basis. This could also be achieved by passing 
the list of fields via SolrParams - although it would affect all documents in a 
given update request.

Anyway, I think these are good ideas worth trying. I'll start working on a 
patch. Thanks for the comments!

 Improve PreAnalyzedField query analysis
 ---

 Key: SOLR-4619
 URL: https://issues.apache.org/jira/browse/SOLR-4619
 Project: Solr
  Issue Type: Bug
  Components: Schema and Analysis
Affects Versions: 4.0, 4.1, 4.2, 5.0, 4.2.1
Reporter: Andrzej Bialecki 
Assignee: Andrzej Bialecki 
 Fix For: 5.0

 Attachments: SOLR-4619.patch


 PreAnalyzed field extends plain FieldType and mistakenly uses the 
 DefaultAnalyzer as query analyzer, and doesn't allow for customization via 
 <analyzer> schema elements.
 Instead it should extend TextField and support all query analysis supported 
 by that type.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (LUCENE-4863) Use FST to hold term in StemmerOverrideFilter

2013-03-25 Thread Simon Willnauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-4863?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Simon Willnauer resolved LUCENE-4863.
-

Resolution: Fixed

committed to 4.x (rev. 1460602) and trunk (rev. 1460580)

 Use FST to hold term in StemmerOverrideFilter
 -

 Key: LUCENE-4863
 URL: https://issues.apache.org/jira/browse/LUCENE-4863
 Project: Lucene - Core
  Issue Type: Improvement
  Components: modules/analysis
Affects Versions: 4.2
Reporter: Simon Willnauer
Assignee: Simon Willnauer
 Fix For: 5.0, 4.3

 Attachments: LUCENE-4863.patch, LUCENE-4863.patch, LUCENE-4863.patch


 follow-up from LUCENE-4857

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-4879) Filter stack traces on console output.

2013-03-25 Thread Dawid Weiss (JIRA)
Dawid Weiss created LUCENE-4879:
---

 Summary: Filter stack traces on console output.
 Key: LUCENE-4879
 URL: https://issues.apache.org/jira/browse/LUCENE-4879
 Project: Lucene - Core
  Issue Type: Improvement
  Components: general/test
Reporter: Dawid Weiss
Assignee: Dawid Weiss
Priority: Minor
 Fix For: 5.0, 4.3


We could filter stack traces similar to what ANT's JUnit task does. It'd remove 
some of the noise and make them shorter. I don't think the lack of stack 
filtering is particularly annoying, and it's always good to have an explicit view 
of what happened and where, but since Robert requested this I'll add it.

We can always make it a (yet another) test.* option :)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (LUCENE-4879) Filter stack traces on console output.

2013-03-25 Thread Dawid Weiss (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-4879?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Weiss resolved LUCENE-4879.
-

Resolution: Fixed

 Filter stack traces on console output.
 --

 Key: LUCENE-4879
 URL: https://issues.apache.org/jira/browse/LUCENE-4879
 Project: Lucene - Core
  Issue Type: Improvement
  Components: general/test
Reporter: Dawid Weiss
Assignee: Dawid Weiss
Priority: Minor
 Fix For: 5.0, 4.3


 We could filter stack traces similar to what ANT's JUnit task does. It'd 
 remove some of the noise and make them shorter. I don't think the lack of 
 stack filtering is particularly annoying, and it's always good to have an 
 explicit view of what happened and where, but since Robert requested this I'll add it.
 We can always make it a (yet another) test.* option :)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-4880) Difference in offset handling between IndexReader created by MemoryIndex and one created by RAMDirectory

2013-03-25 Thread Timothy Allison (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-4880?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothy Allison updated LUCENE-4880:


Attachment: MemoryIndexVsRamDirZeroLengthTermTest.java

 Difference in offset handling between IndexReader created by MemoryIndex and 
 one created by RAMDirectory
 

 Key: LUCENE-4880
 URL: https://issues.apache.org/jira/browse/LUCENE-4880
 Project: Lucene - Core
  Issue Type: Bug
  Components: core/index
Affects Versions: 4.2
 Environment: Windows 7 (probably irrelevant)
Reporter: Timothy Allison
 Attachments: MemoryIndexVsRamDirZeroLengthTermTest.java


 MemoryIndex skips tokens that have length == 0 when building the index; the 
 result is that it does not increment the token offset (nor does it store the 
 position offsets if that option is set) for tokens of length == 0.  A regular 
 index (via, say, RAMDirectory) does not appear to do this.
 When using the ICUFoldingFilter, it is possible to have a term of zero length 
 (the \u0640 character separated by spaces).  If that occurs in a document, 
 the offsets returned at search time differ between the MemoryIndex and a 
 regular index.  

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-4880) Difference in offset handling between IndexReader created by MemoryIndex and one created by RAMDirectory

2013-03-25 Thread Timothy Allison (JIRA)
Timothy Allison created LUCENE-4880:
---

 Summary: Difference in offset handling between IndexReader created 
by MemoryIndex and one created by RAMDirectory
 Key: LUCENE-4880
 URL: https://issues.apache.org/jira/browse/LUCENE-4880
 Project: Lucene - Core
  Issue Type: Bug
  Components: core/index
Affects Versions: 4.2
 Environment: Windows 7 (probably irrelevant)
Reporter: Timothy Allison
 Attachments: MemoryIndexVsRamDirZeroLengthTermTest.java

MemoryIndex skips tokens that have length == 0 when building the index; the 
result is that it does not increment the token offset (nor does it store the 
position offsets if that option is set) for tokens of length == 0.  A regular 
index (via, say, RAMDirectory) does not appear to do this.

When using the ICUFoldingFilter, it is possible to have a term of zero length 
(the \u0640 character separated by spaces).  If that occurs in a document, the 
offsets returned at search time differ between the MemoryIndex and a regular 
index.  

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: svn commit: r1460519 [2/3] - in /lucene/dev/trunk/solr: ./ core/src/java/org/apache/solr/core/ core/src/java/org/apache/solr/response/ core/src/java/org/apache/solr/rest/ core/src/java/org/apache/

2013-03-25 Thread Steve Rowe

On Mar 25, 2013, at 3:51 AM, Robert Muir rcm...@gmail.com wrote:
 But this 'normalize' is wrong: it maps org.apache.lucene.xxx, or
 other packages of org.apache.solr.xxx, to solr.xxx.

No, it maps o.a.(l|s).what.ev.er.xxx to solr.xxx.

Here's the code again:

-
private static String normalizeSPIname(String fullyQualifiedName) {
  if (fullyQualifiedName.startsWith("org.apache.lucene.") ||
      fullyQualifiedName.startsWith("org.apache.solr.")) {
    return "solr" +
        fullyQualifiedName.substring(fullyQualifiedName.lastIndexOf('.'));
  }
  return fullyQualifiedName;
}
-

See the .lastIndexOf('.') part?

Steve
-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [ANNOUNCE] Solr wiki editing change

2013-03-25 Thread Israel Ekpo
Please Add israelekpo to the ContributorsGroup

Thanks.



On Sun, Mar 24, 2013 at 11:18 PM, Steve Rowe sar...@gmail.com wrote:

 The wiki at http://wiki.apache.org/solr/ has come under attack by
 spammers more frequently of late, so the PMC has decided to lock it down in
 an attempt to reduce the work involved in tracking and removing spam.

 From now on, only people who appear on
 http://wiki.apache.org/solr/ContributorsGroup will be able to
 create/modify/delete wiki pages.

 Please request either on the solr-u...@lucene.apache.org or on
 dev@lucene.apache.org to have your wiki username added to the
 ContributorsGroup page - this is a one-time step.

 Steve
 -
 To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
 For additional commands, e-mail: dev-h...@lucene.apache.org




-- 
°O°
Good Enough is not good enough.
To give anything less than your best is to sacrifice the gift.
Quality First. Measure Twice. Cut Once.
http://www.israelekpo.com/


Re: [ANNOUNCE] Solr wiki editing change

2013-03-25 Thread Erick Erickson
 Please Add israelekpo to the ContributorsGroup

Added to ContributorsGroup (Solr & Lucene)




On Mon, Mar 25, 2013 at 9:46 AM, Israel Ekpo israele...@gmail.com wrote:

 Please Add israelekpo to the ContributorsGroup

 Thanks.



 On Sun, Mar 24, 2013 at 11:18 PM, Steve Rowe sar...@gmail.com wrote:

 The wiki at http://wiki.apache.org/solr/ has come under attack by
 spammers more frequently of late, so the PMC has decided to lock it down in
 an attempt to reduce the work involved in tracking and removing spam.

 From now on, only people who appear on
 http://wiki.apache.org/solr/ContributorsGroup will be able to
 create/modify/delete wiki pages.

 Please request either on the solr-u...@lucene.apache.org or on
 dev@lucene.apache.org to have your wiki username added to the
 ContributorsGroup page - this is a one-time step.

 Steve
 -
 To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
 For additional commands, e-mail: dev-h...@lucene.apache.org




 --
 °O°
 Good Enough is not good enough.
 To give anything less than your best is to sacrifice the gift.
 Quality First. Measure Twice. Cut Once.
 http://www.israelekpo.com/


[jira] [Commented] (SOLR-1913) QParserPlugin plugin for Search Results Filtering Based on Bitwise Operations on Integer Fields

2013-03-25 Thread Israel Ekpo (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-1913?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13612681#comment-13612681
 ] 

Israel Ekpo commented on SOLR-1913:
---

I will need to rewrite the plugin again from scratch.

The internals of Lucene and Solr have changed drastically since this was first 
implemented.

I will keep you informed within the next 2 weeks.

 QParserPlugin plugin for Search Results Filtering Based on Bitwise Operations 
 on Integer Fields
 ---

 Key: SOLR-1913
 URL: https://issues.apache.org/jira/browse/SOLR-1913
 Project: Solr
  Issue Type: New Feature
  Components: search
Reporter: Israel Ekpo
 Fix For: 4.3

 Attachments: bitwise_filter_plugin.jar, SOLR-1913.bitwise.tar.gz

   Original Estimate: 1h
  Remaining Estimate: 1h

 BitwiseQueryParserPlugin is a org.apache.solr.search.QParserPlugin that 
 allows 
 users to filter the documents returned from a query
 by performing bitwise operations between a particular integer field in the 
 index
 and the specified value.
 This Solr plugin is based on the BitwiseFilter in LUCENE-2460
 See https://issues.apache.org/jira/browse/LUCENE-2460 for more details
 This is the syntax for searching in Solr:
 http://localhost:8983/path/to/solr/select/?q={!bitwise field=fieldname 
 op=OPERATION_NAME source=sourcevalue negate=boolean}remainder of query
 Example :
 http://localhost:8983/solr/bitwise/select/?q={!bitwise field=user_permissions 
 op=AND source=3 negate=true}state:FL
 The negate parameter is optional
 The field parameter is the name of the integer field
 The op parameter is the name of the operation; one of {AND, OR, XOR}
 The source parameter is the specified integer value
 The negate parameter is a boolean indicating whether or not to negate the 
 results of the bitwise operation
 To test out this plugin, simply copy the jar file containing the plugin 
 classes into your $SOLR_HOME/lib directory and then
 add the following to your solrconfig.xml file after the dismax request 
 handler:
 <queryParser name="bitwise" 
 class="org.apache.solr.bitwise.BitwiseQueryParserPlugin" basedOn="dismax" />
 Restart your servlet container.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



RE: Mini-proposal: Standalone Solr DIH and SolrCell jars

2013-03-25 Thread Dyer, James
Someday it would be nice to see DIH be able to run in its own JVM for just the 
reason Jack mentions.  There are quite a few neat things like this that could 
be done with DIH, but I've tried to work more on improving the tests, fixing 
bugs, and generally making the code more attractive to developers.  I don't 
think DIH has a chance to really grow up until these types of things get done.

I know nothing about solr cell except a few people on the mailing list have 
been burned trying to run it in production only to learn that it doesn't scale. 
 At least that's the general gist I've heard: for prototyping purposes only.  
Maybe if it is re-architected as a stand-alone app it would fare better?

James Dyer
Ingram Content Group
(615) 213-4311


-Original Message-
From: Shawn Heisey [mailto:s...@elyograg.org] 
Sent: Friday, March 22, 2013 9:07 PM
To: dev@lucene.apache.org
Subject: Re: Mini-proposal: Standalone Solr DIH and SolrCell jars

On 3/22/2013 7:04 PM, Jack Krupansky wrote:
 I wanted to get some preliminary feedback before filing this proposal as
 a Jira(s):
  
 Package Solr Data Import Handler and Solr Cell as standalone jars with
 command line interfaces to run as separate processes to promote more
 efficient distributed processing, both by separating them from the Solr
 JVM and allowing multiple instances running in parallel on multiple
 machines. And to make it easier for mere mortals to customize the
 ingestion code without diving deep into core Solr.

That's a really interesting idea.  You mentioned having them be grown-up
siblings of the SimplePostTool, which would imply that the jar would be
directly executable.  What would be the mechanism for configuring it and
getting DIH status?

An alternate idea, if it's feasible, would be that you could drop the
jar and its dependencies into a lib directory and embed into an index
update application.  Hopefully it is only tied to SolrJ, not deep Solr
or Lucene internals.  I haven't checked.

Thanks,
Shawn


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org




Re: [ANNOUNCE] Wiki editing change

2013-03-25 Thread Jack Krupansky

Please add JackKrupansky. Thanks.

-- Jack Krupansky

-Original Message- 
From: Steve Rowe

Sent: Sunday, March 24, 2013 11:16 PM
To: dev@lucene.apache.org ; java-u...@lucene.apache.org
Subject: [ANNOUNCE] Wiki editing change

The wiki at http://wiki.apache.org/lucene-java/ has come under attack by 
spammers more frequently of late, so the PMC has decided to lock it down in 
an attempt to reduce the work involved in tracking and removing spam.


From now on, only people who appear on 
http://wiki.apache.org/lucene-java/ContributorsGroup will be able to 
create/modify/delete wiki pages.


Please request either on the java-u...@lucene.apache.org or on 
dev@lucene.apache.org to have your wiki username added to the 
ContributorsGroup page - this is a one-time step.


Steve
-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org 



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [ANNOUNCE] Wiki editing change

2013-03-25 Thread Erick Erickson
 Please add JackKrupansky. Thanks
added to contributors group, Lucene and Solr


On Mon, Mar 25, 2013 at 10:45 AM, Jack Krupansky j...@basetechnology.comwrote:

 Please add JackKrupansky. Thanks.

 -- Jack Krupansky

 -Original Message- From: Steve Rowe
 Sent: Sunday, March 24, 2013 11:16 PM
 To: dev@lucene.apache.org ; java-u...@lucene.apache.org
 Subject: [ANNOUNCE] Wiki editing change


 The wiki at http://wiki.apache.org/lucene-java/ has come under attack by
 spammers more frequently of late, so the PMC has decided to lock it down in
 an attempt to reduce the work involved in tracking and removing spam.

 From now on, only people who appear on
 http://wiki.apache.org/lucene-java/ContributorsGroup will be able to
 create/modify/delete wiki pages.

 Please request either on the java-u...@lucene.apache.org or on
 dev@lucene.apache.org to have your wiki username added to the
 ContributorsGroup page - this is a one-time step.

 Steve
 -
 To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
 For additional commands, e-mail: dev-h...@lucene.apache.org

 -
 To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
 For additional commands, e-mail: dev-h...@lucene.apache.org




Re: [ANNOUNCE] Solr wiki editing change

2013-03-25 Thread Christian Moen
Hello,

Could you kindly add ChristianMoen?

Many thanks,

Christian

On Mar 25, 2013, at 12:18 PM, Steve Rowe sar...@gmail.com wrote:

 The wiki at http://wiki.apache.org/solr/ has come under attack by spammers 
 more frequently of late, so the PMC has decided to lock it down in an attempt 
 to reduce the work involved in tracking and removing spam.
 
 From now on, only people who appear on 
 http://wiki.apache.org/solr/ContributorsGroup will be able to 
 create/modify/delete wiki pages.
 
 Please request either on the solr-u...@lucene.apache.org or on 
 dev@lucene.apache.org to have your wiki username added to the 
 ContributorsGroup page - this is a one-time step.
 
 Steve
 -
 To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
 For additional commands, e-mail: dev-h...@lucene.apache.org
 


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [ANNOUNCE] Solr wiki editing change

2013-03-25 Thread Steve Rowe
On Mar 25, 2013, at 10:54 AM, Christian Moen c...@atilika.com wrote:
 Could you kindly add ChristianMoen?

Added to the Lucene and Solr ContributorsGroup.

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-4644) Implement spatial WITHIN query for RecursivePrefixTree

2013-03-25 Thread David Smiley (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-4644?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley updated LUCENE-4644:
-

Attachment: 
LUCENE-4644_Spatial_Within_predicate_for_RecursivePrefixTree.patch

Attached is a patch implementing #1  #2 from before.  By default you get #1 
behavior (slow but correct results), and if you want #2 (configurable buffer) 
you need to construct the WithinPrefixTreeFilter yourself.  I'll leave #3 to 
LUCENE-4869 and update its title  description a little.

I added tests for this too, including an explicit test for an indexed shape of 
multiple disjoint parts far away to ensure that a Within query encompassing 
only one of those parts is not considered a match.

[~ryantxu], can you please examine the patch, especially my API (javadocs 
etc.)?  I'd like to get this committed once you're satisfied.

 Implement spatial WITHIN query for RecursivePrefixTree
 --

 Key: LUCENE-4644
 URL: https://issues.apache.org/jira/browse/LUCENE-4644
 Project: Lucene - Core
  Issue Type: New Feature
  Components: modules/spatial
Reporter: David Smiley
Assignee: David Smiley
 Attachments: 
 LUCENE-4644_Spatial_Within_predicate_for_RecursivePrefixTree.patch, 
 LUCENE-4644_Spatial_Within_predicate_for_RecursivePrefixTree.patch




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-4644) Implement spatial WITHIN query for RecursivePrefixTree

2013-03-25 Thread David Smiley (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-4644?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley updated LUCENE-4644:
-

Fix Version/s: 4.3

 Implement spatial WITHIN query for RecursivePrefixTree
 --

 Key: LUCENE-4644
 URL: https://issues.apache.org/jira/browse/LUCENE-4644
 Project: Lucene - Core
  Issue Type: New Feature
  Components: modules/spatial
Reporter: David Smiley
Assignee: David Smiley
 Fix For: 4.3

 Attachments: 
 LUCENE-4644_Spatial_Within_predicate_for_RecursivePrefixTree.patch, 
 LUCENE-4644_Spatial_Within_predicate_for_RecursivePrefixTree.patch




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-4.x-Linux (32bit/jdk1.8.0-ea-b82) - Build # 4823 - Failure!

2013-03-25 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-4.x-Linux/4823/
Java: 32bit/jdk1.8.0-ea-b82 -client -XX:+UseG1GC -XX:MarkStackSize=256K

1 tests failed.
REGRESSION:  org.apache.lucene.analysis.core.TestRandomChains.testRandomChains

Error Message:


Stack Trace:
java.lang.NullPointerException
at 
__randomizedtesting.SeedInfo.seed([4FEAD0A68057F1E4:720BF9C7C745EC24]:0)
at 
org.apache.lucene.analysis.miscellaneous.StemmerOverrideFilter$StemmerOverrideMap.getBytesReader(StemmerOverrideFilter.java:109)
at 
org.apache.lucene.analysis.miscellaneous.StemmerOverrideFilter.init(StemmerOverrideFilter.java:62)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at 
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:408)
at 
org.apache.lucene.analysis.core.TestRandomChains$MockRandomAnalyzer.createComponent(TestRandomChains.java:769)
at 
org.apache.lucene.analysis.core.TestRandomChains$MockRandomAnalyzer.newFilterChain(TestRandomChains.java:884)
at 
org.apache.lucene.analysis.core.TestRandomChains$MockRandomAnalyzer.toString(TestRandomChains.java:758)
at java.lang.String.valueOf(String.java:2896)
at java.lang.StringBuilder.append(StringBuilder.java:131)
at 
org.apache.lucene.analysis.core.TestRandomChains.testRandomChains(TestRandomChains.java:995)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:487)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1559)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.access$600(RandomizedRunner.java:79)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:773)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:787)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.TestRuleFieldCacheSanity$1.evaluate(TestRuleFieldCacheSanity.java:51)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:358)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:782)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:442)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:746)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:648)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:682)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:693)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 

[jira] [Commented] (LUCENE-4864) Add AsyncFSDirectory to work around Windows issues with NIOFS (Lucene 5.0 only)

2013-03-25 Thread Michael Poindexter (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4864?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13612846#comment-13612846
 ] 

Michael Poindexter commented on LUCENE-4864:


I figured out how to run the benchmark utility and ran a few tests on Windows.  
Results are not promising.  When I'm back I'll post the complete results here 
for posterity (I still want to run one or two more tests before then), but I 
think this issue can probably be closed as won't fix.

 Add AsyncFSDirectory to work around Windows issues with NIOFS (Lucene 5.0 
 only)
 ---

 Key: LUCENE-4864
 URL: https://issues.apache.org/jira/browse/LUCENE-4864
 Project: Lucene - Core
  Issue Type: Improvement
  Components: core/store
Affects Versions: 5.0
Reporter: Michael Poindexter
 Attachments: LUCENE-4864.patch, LUCENE-4864.patch


 On LUCENE-4848 a new directory implementation was proposed that uses 
 AsyncFileChannel to make a sync-less directory implementation (only needed 
 for IndexInput). The problem on Windows is that positional reads are 
 impossible without overlapping (async) I/O, so FileChannel in the JDK has to 
 synchronize all reads, because they consist of an atomic seek and atomic read.
 AsyncFSDirectory would not have this issue, but has to take care of thread 
 management, because you need a separate thread to get notified when the read 
 is done. This involves overhead, but might still be better than the 
 synchronization.
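
For readers less familiar with the JDK side of this, the sketch below is not the attached patch; it only illustrates the positional-read idea behind java.nio.channels.AsynchronousFileChannel (the "AsyncFileChannel" mentioned above): every read names an absolute offset, so no shared file pointer has to be locked. The file path and buffer size are arbitrary.

{code:java}
import java.nio.ByteBuffer;
import java.nio.channels.AsynchronousFileChannel;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;
import java.util.concurrent.Future;

public class PositionalReadSketch {
  public static void main(String[] args) throws Exception {
    String path = args.length > 0 ? args[0] : "some-file.bin"; // arbitrary file to read
    try (AsynchronousFileChannel ch =
             AsynchronousFileChannel.open(Paths.get(path), StandardOpenOption.READ)) {
      ByteBuffer buf = ByteBuffer.allocate(1024);
      // Every read names an absolute position; there is no shared file pointer,
      // so concurrent readers never serialize on a seek+read pair.
      Future<Integer> pending = ch.read(buf, 4096L);
      int bytesRead = pending.get(); // block until the overlapped I/O completes
      System.out.println("read " + bytesRead + " bytes at offset 4096");
    }
  }
}
{code}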

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-4881) Add a set iterator to SentinalIntSet

2013-03-25 Thread David Smiley (JIRA)
David Smiley created LUCENE-4881:


 Summary: Add a set iterator to SentinalIntSet
 Key: LUCENE-4881
 URL: https://issues.apache.org/jira/browse/LUCENE-4881
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: David Smiley


I'm working on code that needs a hash based int Set.  It will need to iterate 
over the values, but SentinalIntSet doesn't have this utility feature.  It 
should be pretty easy to add.

FYI this is an out-growth of a question I posed to the dev list, examining 3 
different int hash sets out there: SentinalIntSet, IntHashSet (in Lucene facet 
module) and the 3rd party IntOpenHashSet (HPPC) -- see 
http://lucene.472066.n3.nabble.com/IntHashSet-SentinelIntSet-SortedIntDocSet-td4037516.html
  I decided to go for SentinalIntSet because it's already in Lucene-core, 
adding the method I need should be easy, and it has a nice lean implementation.
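
As a rough sketch of the iteration being asked for, assuming (as org.apache.lucene.util.SentinelIntSet does) that the set exposes a backing slot array and a sentinel marking empty slots; the class, array and values below are stand-ins, not a proposed API:

{code:java}
// Sketch only: iterate the live values of a sentinel-based open-addressing int set.
public class SentinelIntSetIterationSketch {
  public static void main(String[] args) {
    int emptyVal = -1;                    // slots holding this value are empty
    int[] keys = {-1, 7, -1, 42, 3, -1};  // backing array of a hypothetical set
    for (int slot : keys) {
      if (slot == emptyVal) {
        continue;                         // skip empty slots
      }
      System.out.println("stored value: " + slot);
    }
  }
}
{code}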

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-4645) Implement spatial CONTAINS for RecursivePrefixTree

2013-03-25 Thread David Smiley (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-4645?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley updated LUCENE-4645:
-

Fix Version/s: 4.3

 Implement spatial CONTAINS for RecursivePrefixTree
 --

 Key: LUCENE-4645
 URL: https://issues.apache.org/jira/browse/LUCENE-4645
 Project: Lucene - Core
  Issue Type: New Feature
  Components: modules/spatial
Reporter: David Smiley
Assignee: David Smiley
 Fix For: 4.3




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4381) Query-time multi-word synonym expansion

2013-03-25 Thread Nolan Lawson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4381?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13612858#comment-13612858
 ] 

Nolan Lawson commented on SOLR-4381:


[~otis]: OK, I've updated everything in [the GitHub Issues 
page|https://github.com/healthonnet/hon-lucene-synonyms/issues?state=open].  If 
you're willing to put in work, then please do send me a pull request!  :)  
Looking forward to it.

 Query-time multi-word synonym expansion
 ---

 Key: SOLR-4381
 URL: https://issues.apache.org/jira/browse/SOLR-4381
 Project: Solr
  Issue Type: Improvement
  Components: query parsers
Reporter: Nolan Lawson
Priority: Minor
  Labels: multi-word, queryparser, synonyms
 Fix For: 4.3

 Attachments: SOLR-4381-2.patch, SOLR-4381.patch


 This is an issue that seems to come up perennially.
 The [Solr 
 docs|http://wiki.apache.org/solr/AnalyzersTokenizersTokenFilters#solr.SynonymFilterFactory]
  caution that index-time synonym expansion should be preferred to query-time 
 synonym expansion, due to the way multi-word synonyms are treated and how IDF 
 values can be boosted artificially. But query-time expansion should have huge 
 benefits, given that changes to the synonyms don't require re-indexing, the 
 index size stays the same, and the IDF values for the documents don't get 
 permanently altered.
 The proposed solution is to move the synonym expansion logic from the 
 analysis chain (either query- or index-type) and into a new QueryParser.  See 
 the attached patch for an implementation.
 The core Lucene functionality is untouched.  Instead, the EDismaxQParser is 
 extended, and synonym expansion is done on-the-fly.  Queries are parsed into 
 a lattice (i.e. all possible synonym combinations), while individual 
 components of the query are still handled by the EDismaxQParser itself.
 It's not an ideal solution by any stretch. But it's nice and self-contained, 
 so it invites experimentation and improvement.  And I think it fits in well 
 with the merry band of misfit query parsers, like {{func}} and {{frange}}.
 More details about this solution can be found in [this blog 
 post|http://nolanlawson.com/2012/10/31/better-synonym-handling-in-solr/] and 
 [the Github page for the 
 code|https://github.com/healthonnet/hon-lucene-synonyms].
 At the risk of tooting my own horn, I also think this patch sufficiently 
 fixes SOLR-3390 (highlighting problems with multi-word synonyms) and 
 LUCENE-4499 (better support for multi-word synonyms).
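
Purely as a toy illustration of the "lattice" idea, here is what expanding a query into every combination of term-level synonyms can look like; the synonym map and string-based expansion are invented for the example and do not reflect the patch's parser internals:

{code:java}
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Toy illustration only: enumerate every combination of term-level synonyms for a query.
public class SynonymLatticeSketch {
  public static void main(String[] args) {
    Map<String, List<String>> synonyms = new HashMap<String, List<String>>();
    synonyms.put("dog", Arrays.asList("dog", "canis familiaris"));
    synonyms.put("bite", Arrays.asList("bite", "chomp"));
    for (String expanded : expand(Arrays.asList("dog", "bite", "man"), synonyms)) {
      System.out.println(expanded);  // four combinations of the two expandable terms
    }
  }

  static List<String> expand(List<String> terms, Map<String, List<String>> synonyms) {
    List<String> results = new ArrayList<String>();
    results.add("");
    for (String term : terms) {
      List<String> alternatives =
          synonyms.containsKey(term) ? synonyms.get(term) : Arrays.asList(term);
      List<String> next = new ArrayList<String>();
      for (String prefix : results) {
        for (String alt : alternatives) {
          next.add(prefix.isEmpty() ? alt : prefix + " " + alt);
        }
      }
      results = next;
    }
    return results;
  }
}
{code}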

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: svn commit: r1460519 [2/3] - in /lucene/dev/trunk/solr: ./ core/src/java/org/apache/solr/core/ core/src/java/org/apache/solr/response/ core/src/java/org/apache/solr/rest/ core/src/java/org/apache/

2013-03-25 Thread Robert Muir
Right... This is wrong to do though.
On Mar 25, 2013 6:00 AM, Steve Rowe sar...@gmail.com wrote:


 On Mar 25, 2013, at 3:51 AM, Robert Muir rcm...@gmail.com wrote:
  But this 'normalize' is wrong: like map org.apache.lucene.xxx, or
  other packages of org.apache.solr.xxx to solr.xxx.

 No, it maps o.a.(l|s).what.ev.er.xxx to solr.xxx.

 Here's the code again:

 -
 private static String normalizeSPIname(String fullyQualifiedName) {
   if (fullyQualifiedName.startsWith("org.apache.lucene.") ||
 fullyQualifiedName.startsWith("org.apache.solr.")) {
 return "solr" +
 fullyQualifiedName.substring(fullyQualifiedName.lastIndexOf('.'));
   }
   return fullyQualifiedName;
 }
 -

 See the .lastIndexOf('.') part?

 Steve
 -
 To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
 For additional commands, e-mail: dev-h...@lucene.apache.org




[jira] [Created] (SOLR-4642) QueryResultKey, bug in filter hashCode

2013-03-25 Thread Joel Bernstein (JIRA)
Joel Bernstein created SOLR-4642:


 Summary: QueryResultKey, bug in filter hashCode
 Key: SOLR-4642
 URL: https://issues.apache.org/jira/browse/SOLR-4642
 Project: Solr
  Issue Type: Bug
  Components: search
Affects Versions: 4.2
Reporter: Joel Bernstein
 Fix For: 4.3


Looks like the QueryResultKey has a bug when it creates the hashCode for the 
filters. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-4642) QueryResultKey, bug in filter hashCode

2013-03-25 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4642?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-4642:
-

Attachment: SOLR-4642.patch

 QueryResultKey, bug in filter hashCode
 --

 Key: SOLR-4642
 URL: https://issues.apache.org/jira/browse/SOLR-4642
 Project: Solr
  Issue Type: Bug
  Components: search
Affects Versions: 4.2
Reporter: Joel Bernstein
 Fix For: 4.3

 Attachments: SOLR-4642.patch


 Looks like the QueryResultKey has a bug when it creates the hashCode for the 
 filters. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Reopened] (SOLR-4623) Add REST API methods to get all remaining schema information, and also to return the full live schema in json, xml, and schema.xml formats

2013-03-25 Thread Robert Muir (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4623?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Muir reopened SOLR-4623:
---


Reopening to ensure my comments are taken seriously

 Add REST API methods to get all remaining schema information, and also to 
 return the full live schema in json, xml, and schema.xml formats
 --

 Key: SOLR-4623
 URL: https://issues.apache.org/jira/browse/SOLR-4623
 Project: Solr
  Issue Type: Sub-task
  Components: Schema and Analysis
Affects Versions: 4.2
Reporter: Steve Rowe
Assignee: Steve Rowe
Priority: Minor
 Fix For: 4.3

 Attachments: JSONResponseWriter.output.json, 
 SchemaXmlResponseWriter.output.xml, SOLR-4623.patch, 
 XMLResponseWriter.output.xml


 Each remaining schema component (after field types, fields, dynamic fields, 
 copy fields were added by SOLR-4503) should be available from the schema REST 
 API: name, version, default query operator, similarity, default search field, 
 and unique key.
 It should be possible to get the entire live schema back with a single 
 request, and schema.xml format should be one of the supported response 
 formats.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4642) QueryResultKey, bug in filter hashCode

2013-03-25 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4642?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13612889#comment-13612889
 ] 

Joel Bernstein commented on SOLR-4642:
--

This bug is in 4x and trunk. Unless I'm missing something, the code is calling 
hashCode on the list of Queries rather than the individual queries.
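
A sketch of the kind of fix being described (not necessarily the committed patch): combine the hash codes of the individual filter queries, order-independently, instead of hashing the List object, so that keys holding the same filters hash identically:

{code:java}
import java.util.Arrays;
import java.util.List;

// Illustrative only: combine per-filter hash codes order-independently so that two keys
// holding the same filters (possibly in a different order) hash to the same value.
public class FilterHashSketch {
  static int filtersHashCode(List<?> filters) {
    int h = 0;
    if (filters != null) {
      for (Object filter : filters) {
        h += filter.hashCode();  // summing is commutative, so ordering does not matter
      }
    }
    return h;
  }

  public static void main(String[] args) {
    List<String> a = Arrays.asList("type:book", "price:[0 TO 10]");
    List<String> b = Arrays.asList("price:[0 TO 10]", "type:book");
    System.out.println(filtersHashCode(a) == filtersHashCode(b));  // true
    System.out.println(a.hashCode() == b.hashCode());              // false: List is order-sensitive
  }
}
{code}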

 QueryResultKey, bug in filter hashCode
 --

 Key: SOLR-4642
 URL: https://issues.apache.org/jira/browse/SOLR-4642
 Project: Solr
  Issue Type: Bug
  Components: search
Affects Versions: 4.2
Reporter: Joel Bernstein
 Fix For: 4.3

 Attachments: SOLR-4642.patch


 Looks like the QueryResultKey has a bug when it creates the hashCode for the 
 filters. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-4642) QueryResultKey, bug in filter hashCode

2013-03-25 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4642?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller updated SOLR-4642:
--

Fix Version/s: 4.2.1
   5.0
 Assignee: Mark Miller

Nice catch Joel!

 QueryResultKey, bug in filter hashCode
 --

 Key: SOLR-4642
 URL: https://issues.apache.org/jira/browse/SOLR-4642
 Project: Solr
  Issue Type: Bug
  Components: search
Affects Versions: 4.2
Reporter: Joel Bernstein
Assignee: Mark Miller
 Fix For: 4.3, 5.0, 4.2.1

 Attachments: SOLR-4642.patch


 Looks like the QueryResultKey has a bug when it creates the hashCode for the 
 filters. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-4874) Remove FilterTerms.intersect

2013-03-25 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4874?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13612908#comment-13612908
 ] 

Adrien Grand commented on LUCENE-4874:
--

Although DocIdSetIterator.advance is abstract, it describes a default 
implementation that many classes that extend DocsEnum/DocsAndPositionsEnum 
duplicate. Maybe we should just provide a default implementation for advance; this 
would save copy-pastes.
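
For reference, the linear-scan advance() that the DocIdSetIterator javadocs describe, and that implementations end up copy-pasting, looks roughly like the sketch below; the toy iterator exists only to make the snippet self-contained:

{code:java}
// Sketch of the linear-scan advance() the javadocs describe; the tiny iterator is
// made up only so the snippet compiles and runs on its own.
abstract class SimpleDocIterator {
  static final int NO_MORE_DOCS = Integer.MAX_VALUE;

  abstract int nextDoc();

  int advance(int target) {
    int doc;
    while ((doc = nextDoc()) < target) {
      // keep scanning; real implementations with skip data would do better
    }
    return doc;
  }
}

public class AdvanceSketch {
  public static void main(String[] args) {
    SimpleDocIterator it = new SimpleDocIterator() {
      private final int[] docs = {1, 4, 9, 17, 25};
      private int idx = -1;
      @Override
      int nextDoc() {
        return ++idx < docs.length ? docs[idx] : NO_MORE_DOCS;
      }
    };
    System.out.println(it.advance(10));  // prints 17, the first doc >= 10
  }
}
{code}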

 Remove FilterTerms.intersect
 

 Key: LUCENE-4874
 URL: https://issues.apache.org/jira/browse/LUCENE-4874
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Adrien Grand
Assignee: Adrien Grand
Priority: Minor

 Terms.intersect is an optional method. The fact that it is overridden in 
 FilterTerms forces any non-trivial class that extends FilterTerms to override 
 intersect in order this method to have a correct behavior. If FilterTerms did 
 not override this method and used the default impl, we would not have this 
 problem.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4642) QueryResultKey, bug in filter hashCode

2013-03-25 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4642?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13612909#comment-13612909
 ] 

Joel Bernstein commented on SOLR-4642:
--

Thanks!

 QueryResultKey, bug in filter hashCode
 --

 Key: SOLR-4642
 URL: https://issues.apache.org/jira/browse/SOLR-4642
 Project: Solr
  Issue Type: Bug
  Components: search
Affects Versions: 4.2
Reporter: Joel Bernstein
Assignee: Mark Miller
 Fix For: 4.3, 5.0, 4.2.1

 Attachments: SOLR-4642.patch


 Looks like the QueryResultKey has a bug when it creates the hashCode for the 
 filters. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [ANNOUNCE] Wiki editing change

2013-03-25 Thread Erik Hatcher
Please add me: ErikHatcher

Thanks, Steve and others, for combating this spam issue.

Erik

On Mar 24, 2013, at 23:16 , Steve Rowe wrote:

 The wiki at http://wiki.apache.org/lucene-java/ has come under attack by 
 spammers more frequently of late, so the PMC has decided to lock it down in 
 an attempt to reduce the work involved in tracking and removing spam.
 
 From now on, only people who appear on 
 http://wiki.apache.org/lucene-java/ContributorsGroup will be able to 
 create/modify/delete wiki pages.
 
 Please request either on the java-u...@lucene.apache.org or on 
 dev@lucene.apache.org to have your wiki username added to the 
 ContributorsGroup page - this is a one-time step.
 
 Steve
 -
 To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
 For additional commands, e-mail: dev-h...@lucene.apache.org
 


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



RE: [ANNOUNCE] Wiki editing change

2013-03-25 Thread Uwe Schindler
I am on the AdminGroup of Lucene, but not in Solr. Can we make this consistent? 
I think I am in both projects allowed to contribute, but want to be sure.

Uwe

-
Uwe Schindler
H.-H.-Meier-Allee 63, D-28213 Bremen
http://www.thetaphi.de
eMail: u...@thetaphi.de


 -Original Message-
 From: Steve Rowe [mailto:sar...@gmail.com]
 Sent: Monday, March 25, 2013 4:16 AM
 To: dev@lucene.apache.org; java-u...@lucene.apache.org
 Subject: [ANNOUNCE] Wiki editing change
 
 The wiki at http://wiki.apache.org/lucene-java/ has come under attack by
 spammers more frequently of late, so the PMC has decided to lock it down in
 an attempt to reduce the work involved in tracking and removing spam.
 
 From now on, only people who appear on http://wiki.apache.org/lucene-
 java/ContributorsGroup will be able to create/modify/delete wiki pages.
 
 Please request either on the java-u...@lucene.apache.org or on
 dev@lucene.apache.org to have your wiki username added to the
 ContributorsGroup page - this is a one-time step.
 
 Steve
 -
 To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional
 commands, e-mail: dev-h...@lucene.apache.org


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [ANNOUNCE] Wiki editing change

2013-03-25 Thread Steve Rowe
On Mar 25, 2013, at 1:58 PM, Erik Hatcher erik.hatc...@gmail.com wrote:
 Please add me: ErikHatcher

Added to solr and lucene AdminGroup.

On Mar 25, 2013, at 2:02 PM, Uwe Schindler u...@thetaphi.de wrote:
 I am on the AdminGroup of Lucene, but not in Solr. Can we make this 
 consistent? I think I am in both projects allowed to contribute, but want to 
 be sure.

Added to solr AdminGroup.
-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-4642) QueryResultKey, bug in filter hashCode

2013-03-25 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4642?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller updated SOLR-4642:
--

Attachment: SOLR-4642.patch

I've added a super simple test for this.

 QueryResultKey, bug in filter hashCode
 --

 Key: SOLR-4642
 URL: https://issues.apache.org/jira/browse/SOLR-4642
 Project: Solr
  Issue Type: Bug
  Components: search
Affects Versions: 4.2
Reporter: Joel Bernstein
Assignee: Mark Miller
 Fix For: 4.3, 5.0, 4.2.1

 Attachments: SOLR-4642.patch, SOLR-4642.patch


 Looks like the QueryResultKey has a bug when it creates the hashCode for the 
 filters. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-4642) QueryResultKey, bug in filter hashCode

2013-03-25 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4642?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller resolved SOLR-4642.
---

Resolution: Fixed

Thanks Joel! Tossed this in 4.2.1 as well - kind of a nasty bug really.

 QueryResultKey, bug in filter hashCode
 --

 Key: SOLR-4642
 URL: https://issues.apache.org/jira/browse/SOLR-4642
 Project: Solr
  Issue Type: Bug
  Components: search
Affects Versions: 4.2
Reporter: Joel Bernstein
Assignee: Mark Miller
 Fix For: 4.3, 5.0, 4.2.1

 Attachments: SOLR-4642.patch, SOLR-4642.patch


 Looks like the QueryResultKey has a bug when it creates the hashCode for the 
 filters. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Lucene/Solr 4.2.1

2013-03-25 Thread Mark Miller
So I hope to put the artifacts up tonight for the first rc. Sorry for the 
delay, wanted to let some previous fixes bake just a little.

- Mark
-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [ANNOUNCE] Wiki editing change

2013-03-25 Thread Ryan McKinley
please add me too:
ryantxu


On Mon, Mar 25, 2013 at 11:12 AM, Steve Rowe sar...@gmail.com wrote:

 On Mar 25, 2013, at 1:58 PM, Erik Hatcher erik.hatc...@gmail.com wrote:
  Please add me: ErikHatcher

 Added to solr and lucene AdminGroup.

 On Mar 25, 2013, at 2:02 PM, Uwe Schindler u...@thetaphi.de wrote:
  I am on the AdminGroup of Lucene, but not in Solr. Can we make this
 consistent? I think I am in both projects allowed to contribute, but want
 to be sure.

 Added to solr AdminGroup.
 -
 To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
 For additional commands, e-mail: dev-h...@lucene.apache.org




Re: [ANNOUNCE] Wiki editing change

2013-03-25 Thread Steve Rowe
On Mar 25, 2013, at 2:46 PM, Ryan McKinley ryan...@gmail.com wrote:
 please add me too:
 ryantxu

Added to solr and lucene AdminGroup.


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: svn commit: r1460519 [2/3] - in /lucene/dev/trunk/solr: ./ core/src/java/org/apache/solr/core/ core/src/java/org/apache/solr/response/ core/src/java/org/apache/solr/rest/ core/src/java/org/apache/

2013-03-25 Thread Steve Rowe
Robert,

Would you mind responding in some form other than haiku?

What's wrong to do?

What should be done?

Steve

On Mar 25, 2013, at 1:28 PM, Robert Muir rcm...@gmail.com wrote:

 Right... This is wrong to do though.
 
 On Mar 25, 2013 6:00 AM, Steve Rowe sar...@gmail.com wrote:
 
 On Mar 25, 2013, at 3:51 AM, Robert Muir rcm...@gmail.com wrote:
  But this 'normalize' is wrong: like map org.apache.lucene.xxx, or
  other packages of org.apache.solr.xxx to solr.xxx.
 
 No, it maps o.a.(l|s).what.ev.er.xxx to solr.xxx.
 
 Here's the code again:
 
 -
 private static String normalizeSPIname(String fullyQualifiedName) {
  if (fullyQualifiedName.startsWith("org.apache.lucene.") || 
  fullyQualifiedName.startsWith("org.apache.solr.")) {
  return "solr" + 
 fullyQualifiedName.substring(fullyQualifiedName.lastIndexOf('.'));
   }
   return fullyQualifiedName;
 }
 -
 
 See the .lastIndexOf('.') part?
 
 Steve
 -
 To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
 For additional commands, e-mail: dev-h...@lucene.apache.org
 


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4623) Add REST API methods to get all remaining schema information, and also to return the full live schema in json, xml, and schema.xml formats

2013-03-25 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4623?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13612993#comment-13612993
 ] 

Steve Rowe commented on SOLR-4623:
--

Robert, I replied to you on the mailing list, and I tried to contact you on 
#lucene IRC.

You haven't responded in any meaningful way.

So please help me understand what you don't like and how you think it ought to 
be fixed.

 Add REST API methods to get all remaining schema information, and also to 
 return the full live schema in json, xml, and schema.xml formats
 --

 Key: SOLR-4623
 URL: https://issues.apache.org/jira/browse/SOLR-4623
 Project: Solr
  Issue Type: Sub-task
  Components: Schema and Analysis
Affects Versions: 4.2
Reporter: Steve Rowe
Assignee: Steve Rowe
Priority: Minor
 Fix For: 4.3

 Attachments: JSONResponseWriter.output.json, 
 SchemaXmlResponseWriter.output.xml, SOLR-4623.patch, 
 XMLResponseWriter.output.xml


 Each remaining schema component (after field types, fields, dynamic fields, 
 copy fields were added by SOLR-4503) should be available from the schema REST 
 API: name, version, default query operator, similarity, default search field, 
 and unique key.
 It should be possible to get the entire live schema back with a single 
 request, and schema.xml format should be one of the supported response 
 formats.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



The JIRA commit tag bot.

2013-03-25 Thread Mark Miller
So the bot flooded the list on Friday. It was enough mail to turn me off of the 
whole thing.

With some time gone by, I'm ready to start looking into bringing JIRA tags back 
and what other options I have in terms of how to approach it as well as looking 
into more limitations to prevent any bad behavior.

It will probably be a little while before I'm comfortable depending on the 
solution chosen, but I will make sure we have some form of JIRA tagging again 
before long.

- Mark
-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [JENKINS] Lucene-Solr-4.x-Linux (32bit/jdk1.8.0-ea-b82) - Build # 4823 - Failure!

2013-03-25 Thread Michael McCandless
I'll fix ...

Mike McCandless

http://blog.mikemccandless.com


On Mon, Mar 25, 2013 at 12:26 PM, Policeman Jenkins Server
jenk...@thetaphi.de wrote:
 Build: http://jenkins.thetaphi.de/job/Lucene-Solr-4.x-Linux/4823/
 Java: 32bit/jdk1.8.0-ea-b82 -client -XX:+UseG1GC -XX:MarkStackSize=256K

 1 tests failed.
 REGRESSION:  org.apache.lucene.analysis.core.TestRandomChains.testRandomChains

 Error Message:


 Stack Trace:
 java.lang.NullPointerException
 at 
 __randomizedtesting.SeedInfo.seed([4FEAD0A68057F1E4:720BF9C7C745EC24]:0)
 at 
 org.apache.lucene.analysis.miscellaneous.StemmerOverrideFilter$StemmerOverrideMap.getBytesReader(StemmerOverrideFilter.java:109)
 at 
 org.apache.lucene.analysis.miscellaneous.StemmerOverrideFilter.<init>(StemmerOverrideFilter.java:62)
 at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native 
 Method)
 at 
 sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
 at 
 sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
 at java.lang.reflect.Constructor.newInstance(Constructor.java:408)
 at 
 org.apache.lucene.analysis.core.TestRandomChains$MockRandomAnalyzer.createComponent(TestRandomChains.java:769)
 at 
 org.apache.lucene.analysis.core.TestRandomChains$MockRandomAnalyzer.newFilterChain(TestRandomChains.java:884)
 at 
 org.apache.lucene.analysis.core.TestRandomChains$MockRandomAnalyzer.toString(TestRandomChains.java:758)
 at java.lang.String.valueOf(String.java:2896)
 at java.lang.StringBuilder.append(StringBuilder.java:131)
 at 
 org.apache.lucene.analysis.core.TestRandomChains.testRandomChains(TestRandomChains.java:995)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:487)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1559)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner.access$600(RandomizedRunner.java:79)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:737)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:773)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:787)
 at 
 org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
 at 
 org.apache.lucene.util.TestRuleFieldCacheSanity$1.evaluate(TestRuleFieldCacheSanity.java:51)
 at 
 org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
 at 
 com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
 at 
 org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
 at 
 org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
 at 
 org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
 at 
 com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
 at 
 com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:358)
 at 
 com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:782)
 at 
 com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:442)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:746)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:648)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:682)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:693)
 at 
 org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
 at 
 org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
 at 
 com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
 at 
 com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
 at 
 com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
 at 
 

[jira] [Updated] (SOLR-4640) RecoveryZkTest and sometimes other tests leave a Directory un-closed.

2013-03-25 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4640?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller updated SOLR-4640:
--

Fix Version/s: 4.2.1

 RecoveryZkTest and sometimes other tests leave a Directory un-closed.
 -

 Key: SOLR-4640
 URL: https://issues.apache.org/jira/browse/SOLR-4640
 Project: Solr
  Issue Type: Bug
Reporter: Mark Miller
Assignee: Mark Miller
 Fix For: 4.3, 5.0, 4.2.1




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-4640) RecoveryZkTest and sometimes other tests leave a Directory un-closed.

2013-03-25 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4640?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller resolved SOLR-4640.
---

Resolution: Fixed

 RecoveryZkTest and sometimes other tests leave a Directory un-closed.
 -

 Key: SOLR-4640
 URL: https://issues.apache.org/jira/browse/SOLR-4640
 Project: Solr
  Issue Type: Bug
Reporter: Mark Miller
Assignee: Mark Miller
 Fix For: 4.3, 5.0, 4.2.1




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: The JIRA commit tag bot.

2013-03-25 Thread Eric Pugh
For what it's worth, while yes the bot went crazy, in general I do love the 
JIRA tagging.

Eric

On Mar 25, 2013, at 3:15 PM, Mark Miller wrote:

 So the bot flooded the list on Friday. It was enough mail to turn me off of 
 the whole thing.
 
 With some time gone by, I'm ready to start looking into bringing JIRA tags 
 back and what other options I have in terms of how to approach it as well as 
 looking into more limitations to prevent any bad behavior.
 
 It will probably be a little while before I'm comfortable depending on the 
 solution chosen, but I will make sure we have some form of JIRA tagging again 
 before long.
 
 - Mark
 -
 To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
 For additional commands, e-mail: dev-h...@lucene.apache.org
 

-
Eric Pugh | Principal | OpenSource Connections, LLC | 434.466.1467 | 
http://www.opensourceconnections.com
Co-Author: Apache Solr 3 Enterprise Search Server available from 
http://www.packtpub.com/apache-solr-3-enterprise-search-server/book
This e-mail and all contents, including attachments, is considered to be 
Company Confidential unless explicitly stated otherwise, regardless of whether 
attachments are marked as such.

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [JENKINS] Lucene-Solr-4.x-Linux (32bit/jdk1.8.0-ea-b82) - Build # 4823 - Failure!

2013-03-25 Thread Simon Willnauer
thanks mike!

On Mon, Mar 25, 2013 at 8:19 PM, Michael McCandless
luc...@mikemccandless.com wrote:
 I'll fix ...

 Mike McCandless

 http://blog.mikemccandless.com


 On Mon, Mar 25, 2013 at 12:26 PM, Policeman Jenkins Server
 jenk...@thetaphi.de wrote:
 Build: http://jenkins.thetaphi.de/job/Lucene-Solr-4.x-Linux/4823/
 Java: 32bit/jdk1.8.0-ea-b82 -client -XX:+UseG1GC -XX:MarkStackSize=256K

 1 tests failed.
 REGRESSION:  
 org.apache.lucene.analysis.core.TestRandomChains.testRandomChains

 Error Message:


 Stack Trace:
 java.lang.NullPointerException
 at 
 __randomizedtesting.SeedInfo.seed([4FEAD0A68057F1E4:720BF9C7C745EC24]:0)
 at 
 org.apache.lucene.analysis.miscellaneous.StemmerOverrideFilter$StemmerOverrideMap.getBytesReader(StemmerOverrideFilter.java:109)
 at 
 org.apache.lucene.analysis.miscellaneous.StemmerOverrideFilter.<init>(StemmerOverrideFilter.java:62)
 at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native 
 Method)
 at 
 sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
 at 
 sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
 at java.lang.reflect.Constructor.newInstance(Constructor.java:408)
 at 
 org.apache.lucene.analysis.core.TestRandomChains$MockRandomAnalyzer.createComponent(TestRandomChains.java:769)
 at 
 org.apache.lucene.analysis.core.TestRandomChains$MockRandomAnalyzer.newFilterChain(TestRandomChains.java:884)
 at 
 org.apache.lucene.analysis.core.TestRandomChains$MockRandomAnalyzer.toString(TestRandomChains.java:758)
 at java.lang.String.valueOf(String.java:2896)
 at java.lang.StringBuilder.append(StringBuilder.java:131)
 at 
 org.apache.lucene.analysis.core.TestRandomChains.testRandomChains(TestRandomChains.java:995)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:487)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1559)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner.access$600(RandomizedRunner.java:79)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:737)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:773)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:787)
 at 
 org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
 at 
 org.apache.lucene.util.TestRuleFieldCacheSanity$1.evaluate(TestRuleFieldCacheSanity.java:51)
 at 
 org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
 at 
 com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
 at 
 org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
 at 
 org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
 at 
 org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
 at 
 com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
 at 
 com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:358)
 at 
 com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:782)
 at 
 com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:442)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:746)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:648)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:682)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:693)
 at 
 org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
 at 
 org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
 at 
 com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
 at 
 com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
 at 
 

[jira] [Commented] (LUCENE-4872) BooleanWeight should decide how to execute minNrShouldMatch

2013-03-25 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4872?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13613035#comment-13613035
 ] 

Michael McCandless commented on LUCENE-4872:


I don't really know what the typical/common use cases are for
minShouldMatch.

I agree we should err towards BS2, since it can be insanely faster
while BS1 can only be ~3X faster (on super-slow queries to begin
with), in this test anyway.

A more accurate cost model for scorers would be awesome!  This could
be a general framework that we'd be able to use for various forms for
query optimizing (which we don't do today or do with heuristics), eg
things like whether to apply a filter (AND) high vs low, whether to
use BS1 or BS2 for pure conjunctions, when to split a PhraseQuery into
conjunction + position checking, flattening of nested boolean
queries, MultiTermQuery rewrite method, etc.  But probably we should
explore this on a new issue.
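
Purely as an illustration of what a cost()-based heuristic could look like; the threshold and decision rule below are invented and are not taken from BooleanWeight or the LUCENE-4571 patch:

{code:java}
// Purely illustrative: a cost()-driven choice between the bulk scorer (BS1) and the
// document-at-a-time scorer (BS2). The threshold and rule are invented for the example.
public class ScorerChoiceSketch {
  enum Strategy { DOC_AT_A_TIME, BULK }

  static Strategy choose(long[] clauseCosts, int minShouldMatch) {
    long min = Long.MAX_VALUE;
    long total = 0;
    for (long cost : clauseCosts) {
      min = Math.min(min, cost);
      total += cost;
    }
    // If one clause is far rarer than the rest, advancing on it lets the
    // document-at-a-time scorer skip most of the common clauses' postings.
    if (minShouldMatch >= 1 && min * 8 < total) {
      return Strategy.DOC_AT_A_TIME;
    }
    return Strategy.BULK;
  }

  public static void main(String[] args) {
    System.out.println(choose(new long[] {1000000, 900000, 500}, 2)); // DOC_AT_A_TIME
    System.out.println(choose(new long[] {10000, 12000, 9000}, 2));   // BULK
  }
}
{code}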


 BooleanWeight should decide how to execute minNrShouldMatch
 ---

 Key: LUCENE-4872
 URL: https://issues.apache.org/jira/browse/LUCENE-4872
 Project: Lucene - Core
  Issue Type: Sub-task
  Components: core/search
Reporter: Robert Muir
 Fix For: 5.0, 4.3

 Attachments: crazyMinShouldMatch.tasks


 LUCENE-4571 adds a dedicated document-at-time scorer for minNrShouldMatch 
 which can use advance() behind the scenes. 
 In cases where you have some really common terms and some rare ones this can 
 be a huge performance improvement.
 On the other hand BooleanScorer might still be faster in some cases.
 We should think about what the logic should be here: one simple thing to do 
 is to always use the new scorer when minShouldMatch is set: thats where i'm 
 leaning. 
 But maybe we could have a smarter heuristic too, perhaps based on cost()

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4613) Move checkDistributed to SearchHandler

2013-03-25 Thread Ryan Ernst (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4613?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13613108#comment-13613108
 ] 

Ryan Ernst commented on SOLR-4613:
--

I can see the motivation there, but in this case it means a ShardHandler must 
be created per request to determine if it is distributed or not.  IMO, it is 
very nice if you want to have one request that handles distributed requests to 
set it up there.

 Move checkDistributed to SearchHandler
 --

 Key: SOLR-4613
 URL: https://issues.apache.org/jira/browse/SOLR-4613
 Project: Solr
  Issue Type: Improvement
  Components: search
Reporter: Ryan Ernst
 Attachments: SOLR-4613.patch


 Currently a ShardHandler is created for a request even for non distributed 
 requests.  The checkDistributed function on ShardHandler has no special state 
 kept in the ShardHandler.  Historically it used to be in QueryComponent, but 
 it seems like SearchHandler would be the right place.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-4613) Move checkDistributed to SearchHandler

2013-03-25 Thread Ryan Ernst (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4613?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13613108#comment-13613108
 ] 

Ryan Ernst edited comment on SOLR-4613 at 3/25/13 8:52 PM:
---

I can see the motivation there, but in this case it means a ShardHandler must 
be created per request to determine if it is distributed or not.  IMO, it is 
very nice if you want to have one request handler for distributed requests to 
set it up there.

  was (Author: rjernst):
I can see the motivation there, but in this case it means a ShardHandler 
must be created per request to determine if it is distributed or not.  IMO, it 
is very nice if you want to have one request that handles distributed requests 
to set it up there.
  
 Move checkDistributed to SearchHandler
 --

 Key: SOLR-4613
 URL: https://issues.apache.org/jira/browse/SOLR-4613
 Project: Solr
  Issue Type: Improvement
  Components: search
Reporter: Ryan Ernst
 Attachments: SOLR-4613.patch


 Currently a ShardHandler is created for a request even for non distributed 
 requests.  The checkDistributed function on ShardHandler has no special state 
 kept in the ShardHandler.  Historically it used to be in QueryComponent, but 
 it seems like SearchHandler would be the right place.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-4643) Refactor shard handler (and factory) to make pieces more pluggable

2013-03-25 Thread Ryan Ernst (JIRA)
Ryan Ernst created SOLR-4643:


 Summary: Refactor shard handler (and factory) to make pieces more 
pluggable
 Key: SOLR-4643
 URL: https://issues.apache.org/jira/browse/SOLR-4643
 Project: Solr
  Issue Type: Improvement
Reporter: Ryan Ernst


Over the past few weeks I've been trying to write my own shard handler/factory, 
and it is a bit of a pain.  The pieces that I don't want to reimplement are 
tied very closely with those that I do.

I believe the current design is as follows:

ShardHandlerFactory - created once, shared across cores (except in some legacy 
case where it is per core?).  This contains the heavyweight stuff like 
threadpool for parallelizing requests and httpclient.  It also is what keeps a 
solrj loadbalancer object.

ShardHandler - created per request, it has the logic for determining if a 
request is distributed, and sending the requests in parallel (using an executor 
from the parent factory object).  It also has the knowledge of how to send 
requests and parse the response embedded within the parallelization piece 
(through solrj code).

I've attempted to address some of the ease of plug-ability:
https://issues.apache.org/jira/browse/SOLR-4544
This was an attempt to reuse the code for parallelizing the requests, 
but still plug in code for making the requests.  It sort of works, but was just 
a stop gap measure.  You still cannot format the request or parse the response 
without reimplementing ShardHandler.

https://issues.apache.org/jira/browse/SOLR-4613
Here I was trying to only require creating a shard handler when the request is 
distributed, instead of every request just to find out if it is distributed.

At this point I thought I would create a jira to write down a proposal for how 
to do this refactoring, instead of continuing with piecemeal/out of context 
jiras.


I view this shard handler business as needing the following:
1. Something to parallelize the requests.  Most people should never have to 
replace this (if anyone?).  It contains the thread pool and execution service 
and is global (like the shard handler factory now).

2. Something that knows how to talk to the shards.  This includes formatting 
the request and parsing the response. This could probably be per core or even 
per request handler?

3. Something to do load balancing.  This could probably be in 2, although I 
could see it being separate for easier plugging of LB without having to handle 
request/response format or vice versa.  It would contain the http client for 
talking to hosts, and so probably still be global.

I would love to get consensus on the design of this before going off and doing 
it, and suggestions for how to break this into smaller pieces.
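
To make the three-part split above concrete, one possible shape for the interfaces is sketched below; every name is invented for illustration and does not mirror the existing ShardHandler/ShardHandlerFactory API:

{code:java}
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.Future;

// Sketch only: one possible decomposition of the three responsibilities listed above.
interface ShardRequestExecutor {          // (1) global: owns the pool, parallelizes calls
  <T> List<Future<T>> submitAll(List<Callable<T>> shardCalls);
}

interface ShardClient {                   // (2) per core/handler: formats and parses
  RemoteShardResponse send(String shardUrl, RemoteShardRequest request) throws Exception;
}

interface ShardLoadBalancer {             // (3) global: picks a live replica for a shard
  String pickUrl(List<String> replicaUrls);
}

// Placeholder payload types so the sketch compiles on its own; they are not
// Solr's ShardRequest/ShardResponse classes.
class RemoteShardRequest {}
class RemoteShardResponse {}
{code}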

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Opening up FieldCacheImpl

2013-03-25 Thread David Smiley (@MITRE.org)
Interesting conversation. So if hypothetically Solr's FileFloatSource /
ExternalFileField didn't yet exist and we were instead talking about how to
implement such a thing on the latest 4.x code, then how basically might it
work?  I can see how to implement a Solr CodecFactory ( a SchemaAware one) ,
then a DocValuesProducer.  The CodecFactory implements
NamedInitializedPlugin and can thus get its config info that way.  That's
one approach.  But it's not clear to me where one would wrap AtomicReader
with FilterAtomicReader to use that approach.

~ David
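
For illustration only, one way the FilterAtomicReader wrapping can look, serving a single field's numeric doc values from an external store; ExternalScores is a made-up interface, and where the wrapper gets installed (a custom reader factory, for instance) is exactly the open question above:

{code:java}
import java.io.IOException;
import org.apache.lucene.index.AtomicReader;
import org.apache.lucene.index.FilterAtomicReader;
import org.apache.lucene.index.NumericDocValues;

// Sketch only: serve one field's numeric doc values from an external store by wrapping
// each AtomicReader. ExternalScores is a made-up interface for the external lookup.
class ExternalDocValuesReader extends FilterAtomicReader {
  private final String externalField;
  private final ExternalScores scores;

  ExternalDocValuesReader(AtomicReader in, String externalField, ExternalScores scores) {
    super(in);
    this.externalField = externalField;
    this.scores = scores;
  }

  @Override
  public NumericDocValues getNumericDocValues(String field) throws IOException {
    if (!externalField.equals(field)) {
      return super.getNumericDocValues(field);  // everything else comes from the index
    }
    return new NumericDocValues() {
      @Override
      public long get(int docID) {
        return scores.lookup(docID);            // value fetched outside the index
      }
    };
  }
}

interface ExternalScores {                      // stand-in for the external data source
  long lookup(int docID);
}
{code}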


Robert Muir wrote
 On Sat, Mar 23, 2013 at 7:25 AM, Alan Woodward <alan@.co> wrote:
 I think instead FieldCache should actually be completely package
 private and hidden behind a UninvertingFilterReader and accessible via
 the existing AtomicReader docValues methods.

 Aha, right, because SegmentCoreReaders already caches XXXDocValues
 instances (without using WeakReferences or anything like that).

 I should explain my motivation here.  I want to store various scoring
 factors externally to Lucene, but make them available via a ValueSource
 to CustomScoreQueries - essentially a generalisation of FileFloatSource
 to any external data source.  FFS already has a bunch of code copied from
 FieldCache, which was why my first thought was to open it up a bit and
 extend it, rather than copy and paste again.

 But it sounds as though a nicer way of doing this would be to create a
 new DocValuesProducer that talks to the external data source, and then
 access it through the AR docValues methods.  Does that sound plausible? 
 Is SPI going to make it difficult to pass parameters to a custom
 DVProducer (data location, host/port, other DV fields to use as primary
 key lookups, etc)?

 
 its not involved if you implement via FilterAtomicReader. its only
 involved for reading things that are actually written into the index.
 
 -
  To unsubscribe, e-mail: dev-unsubscribe@.apache
  For additional commands, e-mail: dev-help@.apache





-
 Author: http://www.packtpub.com/apache-solr-3-enterprise-search-server/book
--
View this message in context: 
http://lucene.472066.n3.nabble.com/Opening-up-FieldCacheImpl-tp4050537p4051217.html
Sent from the Lucene - Java Developer mailing list archive at Nabble.com.

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4643) Refactor shard handler (and factory) to make pieces more pluggable

2013-03-25 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4643?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13613221#comment-13613221
 ] 

Mark Miller commented on SOLR-4643:
---

Fire away.

 Refactor shard handler (and factory) to make pieces more pluggable
 --

 Key: SOLR-4643
 URL: https://issues.apache.org/jira/browse/SOLR-4643
 Project: Solr
  Issue Type: Improvement
Reporter: Ryan Ernst

 Over the past few weeks I've been trying to write my own shard 
 handler/factory, and it is a bit of a pain.  The pieces that I don't want to 
 reimplement are tied very closely with those that I do.
 I believe the current design is as follows:
 ShardHandlerFactory - created once, shared across cores (except in some 
 legacy case where it is per core?).  This contains the heavyweight stuff 
 like threadpool for parallelizing requests and httpclient.  It also is what 
 keeps a solrj loadbalancer object.
 ShardHandler - created per request, it has the logic for determining if a 
 request is distributed, and sending the requests in parallel (using an 
 executor from the parent factory object).  It also has the knowledge of how 
 to send requests and parse the response embedded within the parallelization 
 piece (through solrj code).
 I've attempted to address some of the ease of plug-ability:
 https://issues.apache.org/jira/browse/SOLR-4544
 This was an attempt to reuse the code for parallelizing the requests, 
 but still plug in code for making the requests.  It sort of works, but was 
 just a stop gap measure.  You still cannot format the request or parse the 
 response without reimplementing ShardHandler.
 https://issues.apache.org/jira/browse/SOLR-4613
 Here I was trying to only require creating a shard handler when the request 
 is distributed, instead of every request just to find out if it is 
 distributed.
 At this point I thought I would create a jira to write down a proposal for 
 how to do this refactoring, instead of continuing with piecemeal/out of 
 context jiras.
 I view this shard handler business as needing the following:
 1. Something to parallelize the requests.  Most people should never have to 
 replace this (if anyone?).  It contains the thread pool and execution service 
 and is global (like the shard handler factory now).
 2. Something that knows how to talk to the shards.  This includes formatting 
 the request and parsing the response. This could probably be per core or even 
 per request handler?
 3. Something to do load balancing.  This could probably be in 2, although I 
 could see it being separate for easier plugging of LB without having to 
 handle request/response format or vice versa.  It would contain the http 
 client for talking to hosts, and so probably still be global.
 I would love to get consensus on the design of this before going off and 
 doing it, and suggestions for how to break this into smaller pieces.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: The JIRA commit tag bot.

2013-03-25 Thread Erick Erickson
Agree with Eric. The bot's value exceeds an infrequent glitch IMO.


On Mon, Mar 25, 2013 at 3:29 PM, Eric Pugh
ep...@opensourceconnections.comwrote:

 For what it's worth, while yes the bot went crazy, in general I do love
 the JIRA tagging.

 Eric

 On Mar 25, 2013, at 3:15 PM, Mark Miller wrote:

  So the bot flooded the list on Friday. It was enough mail to turn me off
 of the whole thing.
 
  With some time gone by, I'm ready to start looking into bringing JIRA
 tags back and what other options I have in terms of how to approach it as
 well as looking into more limitations to prevent any bad behavior.
 
  It will probably be a little while before I'm comfortable depending on
 the solution chosen, but I will make sure we have some form of JIRA tagging
 again before long.
 
  - Mark
  -
  To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
  For additional commands, e-mail: dev-h...@lucene.apache.org
 

 -
 Eric Pugh | Principal | OpenSource Connections, LLC | 434.466.1467 |
 http://www.opensourceconnections.com
 Co-Author: Apache Solr 3 Enterprise Search Server available from
 http://www.packtpub.com/apache-solr-3-enterprise-search-server/book
 This e-mail and all contents, including attachments, is considered to be
 Company Confidential unless explicitly stated otherwise, regardless of
 whether attachments are marked as such.

 -
 To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
 For additional commands, e-mail: dev-h...@lucene.apache.org




[jira] [Commented] (SOLR-4632) transientCacheSize is not retained when persisting solr.xml

2013-03-25 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13613326#comment-13613326
 ] 

Erick Erickson commented on SOLR-4632:
--

Hmmm, why did you remove the check in CoreContainer when persisting the 
transientCacheSize around line 1,300? (Warning: I've made some other changes, so 
the file lines may not match.)

It seems incorrect to me to persist Integer.MAX_VALUE if nothing has ever been 
specified; just let the default value happen the next time the file is read rather 
than leave the user wondering where the heck that came from.

You don't need to put up a new patch; I'll just change this in SOLR-4615 unless 
you convince me that this check is really a bad idea.



 transientCacheSize is not retained when persisting solr.xml
 ---

 Key: SOLR-4632
 URL: https://issues.apache.org/jira/browse/SOLR-4632
 Project: Solr
  Issue Type: Bug
  Components: multicore
Affects Versions: 4.2
Reporter: dfdeshom
Assignee: Erick Erickson
Priority: Minor
 Fix For: 4.3

 Attachments: SOLR-4632.txt


 transientCacheSize is not persisted to solr.xml when creating a new core. I was 
 able to reproduce this using the following solr.xml file:
 {code:xml}
 <?xml version="1.0" encoding="UTF-8" ?>
 <solr persistent="true">
   <cores transientCacheSize="21" defaultCoreName="collection1"
          adminPath="/admin/cores" zkClientTimeout="${zkClientTimeout:15000}"
          hostPort="8983" hostContext="solr">
     <core name="collection1" collection="collection1"/>
   </cores>
 </solr>
 {code}
 I created a new core:
 {code}
 curl "http://localhost:8983/solr/admin/cores?action=create&instanceDir=collection1&transient=true&name=tmp5&loadOnStartup=false"
 {code}
 The resulting solr.xml file has the new core added, but is missing the 
 transientCacheSize attribute.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS-MAVEN] Lucene-Solr-Maven-4.x #285: POMs out of sync

2013-03-25 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Maven-4.x/285/

No tests ran.

Build Log:
[...truncated 11270 lines...]



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[jira] [Commented] (SOLR-4632) transientCacheSize is not retained when persisting solr.xml

2013-03-25 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13613349#comment-13613349
 ] 

Erick Erickson commented on SOLR-4632:
--

Looking more closely, transientCacheSize is persisted when its value is other 
than Integer.MAX_VALUE. So I don't think this is a problem after all.
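
For reference, the behavior described above amounts to a guard like the following 
(an illustrative sketch only; the class, method, and map names are hypothetical, not 
the actual CoreContainer persistence code):

{code}
import java.util.LinkedHashMap;
import java.util.Map;

class TransientCacheSizePersistSketch {
  // Write the attribute only when it was explicitly set, so an unset value
  // never shows up in solr.xml as Integer.MAX_VALUE.
  static void addTransientCacheSize(Map<String, String> coresAttribs, int transientCacheSize) {
    if (transientCacheSize != Integer.MAX_VALUE) {
      coresAttribs.put("transientCacheSize", Integer.toString(transientCacheSize));
    }
  }

  public static void main(String[] args) {
    Map<String, String> attribs = new LinkedHashMap<String, String>();
    addTransientCacheSize(attribs, Integer.MAX_VALUE); // unset: nothing persisted
    addTransientCacheSize(attribs, 21);                // explicitly set: persisted
    System.out.println(attribs);                       // {transientCacheSize=21}
  }
}
{code}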

 transientCacheSize is not retained when persisting solr.xml
 ---

 Key: SOLR-4632
 URL: https://issues.apache.org/jira/browse/SOLR-4632
 Project: Solr
  Issue Type: Bug
  Components: multicore
Affects Versions: 4.2
Reporter: dfdeshom
Assignee: Erick Erickson
Priority: Minor
 Fix For: 4.3

 Attachments: SOLR-4632.txt


 transientCacheSize is not persisted to solr.xml when creating a new core. I was 
 able to reproduce this using the following solr.xml file:
 {code:xml}
 <?xml version="1.0" encoding="UTF-8" ?>
 <solr persistent="true">
   <cores transientCacheSize="21" defaultCoreName="collection1"
          adminPath="/admin/cores" zkClientTimeout="${zkClientTimeout:15000}"
          hostPort="8983" hostContext="solr">
     <core name="collection1" collection="collection1"/>
   </cores>
 </solr>
 {code}
 I created a new core:
 {code}
 curl "http://localhost:8983/solr/admin/cores?action=create&instanceDir=collection1&transient=true&name=tmp5&loadOnStartup=false"
 {code}
 The resulting solr.xml file has the new core added, but is missing the 
 transientCacheSize attribute.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Opening up FieldCacheImpl

2013-03-25 Thread Robert Muir
I don't think the codec would be where you'd plug in a filter reader that
exposes external data as fake fields. That's because it's all about what
encoding IndexWriter uses to write. I think Solr has an IndexReaderFactory
if you want to e.g. wrap readers with FilterAtomicReaders.
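
For what it's worth, here's a rough sketch of that wrapping approach. Only
FilterAtomicReader and NumericDocValues are real Lucene classes below; the
ExternalScores interface and the field name are made-up placeholders, not anything
in the codebase:

import java.io.IOException;

import org.apache.lucene.index.AtomicReader;
import org.apache.lucene.index.FilterAtomicReader;
import org.apache.lucene.index.NumericDocValues;

/** Hypothetical external per-document data source. */
interface ExternalScores {
  long lookup(int docID);
}

/** Exposes externally stored values under a fake doc values field. */
class ExternalValuesReader extends FilterAtomicReader {
  private final ExternalScores scores;

  ExternalValuesReader(AtomicReader in, ExternalScores scores) {
    super(in);
    this.scores = scores;
  }

  @Override
  public NumericDocValues getNumericDocValues(String field) throws IOException {
    if (!"external_score".equals(field)) {     // made-up field name
      return in.getNumericDocValues(field);    // delegate everything else
    }
    return new NumericDocValues() {
      @Override
      public long get(int docID) {
        return scores.lookup(docID);           // fetch from the external source
      }
    };
  }
}

A reader like that could then be installed from a custom IndexReaderFactory so the
fake field is visible to ValueSources and scoring.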
On Mar 25, 2013 2:30 PM, David Smiley (@MITRE.org) dsmi...@mitre.org
wrote:

 Interesting conversation. So if hypothetically Solr's FileFloatSource /
 ExternalFileField didn't yet exist and we were instead talking about how to
 implement such a thing on the latest 4.x code, then how basically might it
 work?  I can see how to implement a Solr CodecFactory (a SchemaAware one),
 then a DocValuesProducer.  The CodecFactory implements
 NamedInitializedPlugin and can thus get its config info that way.  That's
 one approach.  But it's not clear to me where one would wrap AtomicReader
 with FilterAtomicReader to use that approach.

 ~ David


 Robert Muir wrote
  On Sat, Mar 23, 2013 at 7:25 AM, Alan Woodward <alan@.co> wrote:
  I think instead FieldCache should actually be completely package
  private and hidden behind a UninvertingFilterReader and accessible via
  the existing AtomicReader docValues methods.
 
  Aha, right, because SegmentCoreReaders already caches XXXDocValues
  instances (without using WeakReferences or anything like that).
 
  I should explain my motivation here.  I want to store various scoring
  factors externally to Lucene, but make them available via a ValueSource
  to CustomScoreQueries - essentially a generalisation of FileFloatSource
  to any external data source.  FFS already has a bunch of code copied
 from
  FieldCache, which was why my first thought was to open it up a bit and
  extend it, rather than copy and paste again.
 
  But it sounds as though a nicer way of doing this would be to create a
  new DocValuesProducer that talks to the external data source, and then
  access it through the AR docValues methods.  Does that sound plausible?
  Is SPI going to make it difficult to pass parameters to a custom
  DVProducer (data location, host/port, other DV fields to use as primary
  key lookups, etc)?
 
 
  It's not involved if you implement via FilterAtomicReader. It's only
  involved for reading things that are actually written into the index.
 
  -
  To unsubscribe, e-mail: dev-unsubscribe@.apache
  For additional commands, e-mail: dev-help@.apache





 -
  Author:
 http://www.packtpub.com/apache-solr-3-enterprise-search-server/book
 --
 View this message in context:
 http://lucene.472066.n3.nabble.com/Opening-up-FieldCacheImpl-tp4050537p4051217.html
 Sent from the Lucene - Java Developer mailing list archive at Nabble.com.

 -
 To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
 For additional commands, e-mail: dev-h...@lucene.apache.org




[jira] [Commented] (LUCENE-4879) Filter stack traces on console output.

2013-03-25 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4879?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13613415#comment-13613415
 ] 

Robert Muir commented on LUCENE-4879:
-

Thank you very much for this! This is a huge improvement, stacktraces for a 
simple test failure are like 3 lines long instead of 45 now and can be seen 
without scrolling (with room for even a few debugging prints too!)

 Filter stack traces on console output.
 --

 Key: LUCENE-4879
 URL: https://issues.apache.org/jira/browse/LUCENE-4879
 Project: Lucene - Core
  Issue Type: Improvement
  Components: general/test
Reporter: Dawid Weiss
Assignee: Dawid Weiss
Priority: Minor
 Fix For: 5.0, 4.3


 We could filter stack traces similar to what ANT's JUnit task does. It'd 
 remove some of the noise and make them shorter. I don't think the lack of 
 stack filtering is particularly annoying, and it's always good to have an explicit 
 view of what happened and where, but since Robert requested this I'll add it.
 We can always make it a (yet another) test.* option :)
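
A minimal sketch of the kind of frame filtering meant here (illustrative only; the 
prefix list and the actual implementation in the test framework / ANT task will differ):

{code}
import java.util.ArrayList;
import java.util.List;

class StackFilterSketch {
  private static final String[] NOISE_PREFIXES = {
      "sun.reflect.", "java.lang.reflect.",
      "com.carrotsearch.randomizedtesting.", "org.junit."
  };

  /** Drop frames that belong to runner/reflection machinery, keep the rest. */
  static List<StackTraceElement> filter(StackTraceElement[] frames) {
    List<StackTraceElement> kept = new ArrayList<StackTraceElement>();
    for (StackTraceElement frame : frames) {
      boolean noisy = false;
      for (String prefix : NOISE_PREFIXES) {
        if (frame.getClassName().startsWith(prefix)) {
          noisy = true;
          break;
        }
      }
      if (!noisy) {
        kept.add(frame);
      }
    }
    return kept;
  }
}
{code}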

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [JENKINS] Lucene-Solr-4.x-Linux (32bit/jdk1.8.0-ea-b82) - Build # 4823 - Failure!

2013-03-25 Thread Robert Muir
finally the annoyances of this test provide us some benefit :)

On Mon, Mar 25, 2013 at 12:54 PM, Simon Willnauer
simon.willna...@gmail.com wrote:
 thanks mike!

 On Mon, Mar 25, 2013 at 8:19 PM, Michael McCandless
 luc...@mikemccandless.com wrote:
 I'll fix ...

 Mike McCandless

 http://blog.mikemccandless.com


 On Mon, Mar 25, 2013 at 12:26 PM, Policeman Jenkins Server
 jenk...@thetaphi.de wrote:
 Build: http://jenkins.thetaphi.de/job/Lucene-Solr-4.x-Linux/4823/
 Java: 32bit/jdk1.8.0-ea-b82 -client -XX:+UseG1GC -XX:MarkStackSize=256K

 1 tests failed.
 REGRESSION:  
 org.apache.lucene.analysis.core.TestRandomChains.testRandomChains

 Error Message:


 Stack Trace:
 java.lang.NullPointerException
 at 
 __randomizedtesting.SeedInfo.seed([4FEAD0A68057F1E4:720BF9C7C745EC24]:0)
 at 
 org.apache.lucene.analysis.miscellaneous.StemmerOverrideFilter$StemmerOverrideMap.getBytesReader(StemmerOverrideFilter.java:109)
 at 
 org.apache.lucene.analysis.miscellaneous.StemmerOverrideFilter.init(StemmerOverrideFilter.java:62)
 at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native 
 Method)
 at 
 sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
 at 
 sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
 at java.lang.reflect.Constructor.newInstance(Constructor.java:408)
 at 
 org.apache.lucene.analysis.core.TestRandomChains$MockRandomAnalyzer.createComponent(TestRandomChains.java:769)
 at 
 org.apache.lucene.analysis.core.TestRandomChains$MockRandomAnalyzer.newFilterChain(TestRandomChains.java:884)
 at 
 org.apache.lucene.analysis.core.TestRandomChains$MockRandomAnalyzer.toString(TestRandomChains.java:758)
 at java.lang.String.valueOf(String.java:2896)
 at java.lang.StringBuilder.append(StringBuilder.java:131)
 at 
 org.apache.lucene.analysis.core.TestRandomChains.testRandomChains(TestRandomChains.java:995)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:487)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1559)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner.access$600(RandomizedRunner.java:79)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:737)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:773)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:787)
 at 
 org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
 at 
 org.apache.lucene.util.TestRuleFieldCacheSanity$1.evaluate(TestRuleFieldCacheSanity.java:51)
 at 
 org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
 at 
 com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
 at 
 org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
 at 
 org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
 at 
 org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
 at 
 com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
 at 
 com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:358)
 at 
 com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:782)
 at 
 com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:442)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:746)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:648)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:682)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:693)
 at 
 org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
 at 
 org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
 at 
 com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
 at 
 

[JENKINS-MAVEN] Lucene-Solr-Maven-trunk #812: POMs out of sync

2013-03-25 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Maven-trunk/812/

No tests ran.

Build Log:
[...truncated 11720 lines...]



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

Re: svn commit: r1460519 [2/3] - in /lucene/dev/trunk/solr: ./ core/src/java/org/apache/solr/core/ core/src/java/org/apache/solr/response/ core/src/java/org/apache/solr/rest/ core/src/java/org/apache/

2013-03-25 Thread Robert Muir
Well there are several bugs, resulting from the over-aggressive
normalization combined with normalizing *always* despite this comment:

  // Only normalize factory names

So consider the case where someone has
<similarity class="org.apache.lucene.search.similarities.BM25Similarity"/>

which is allowed (it uses the anonymous factory). In this case it's
bogusly normalized to solr.BM25Similarity, which is invalid and won't
be loaded by IndexSchema, since it only looks for "solr." names in
org.apache.solr.search.similarities.
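
To make the failure concrete, a tiny standalone illustration (plain Java written for
this mail, not Solr code) of what the old normalization does to that class name:

public class NormalizeSketch {
  public static void main(String[] args) {
    String name = "org.apache.lucene.search.similarities.BM25Similarity";
    // old normalizeSPIname behavior: prefix "solr" + everything from the last dot
    String normalized = "solr" + name.substring(name.lastIndexOf('.'));
    System.out.println(normalized); // prints solr.BM25Similarity
    // IndexSchema only expands "solr.X" against org.apache.solr.* packages,
    // so the Lucene class above can no longer be resolved from that short name.
  }
}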

I think a patch like the following is a good start, but we should
review the other uses of the same code-dup'ed function in IndexSchema
and ensure there are not similar bugs:

I'm sorry if I came off terse or as a haiku; it's not a big deal, I
just want it to work correctly.

Index: solr/core/src/java/org/apache/solr/schema/SimilarityFactory.java
===================================================================
--- solr/core/src/java/org/apache/solr/schema/SimilarityFactory.java   (revision 1460952)
+++ solr/core/src/java/org/apache/solr/schema/SimilarityFactory.java   (working copy)
@@ -51,9 +51,9 @@
   public abstract Similarity getSimilarity();
 
 
-  private static String normalizeSPIname(String fullyQualifiedName) {
-    if (fullyQualifiedName.startsWith("org.apache.lucene.") || fullyQualifiedName.startsWith("org.apache.solr.")) {
-      return "solr" + fullyQualifiedName.substring(fullyQualifiedName.lastIndexOf('.'));
+  private static String normalizeName(String fullyQualifiedName) {
+    if (fullyQualifiedName.startsWith("org.apache.solr.search.similarities.")) {
+      return "solr" + fullyQualifiedName.substring("org.apache.solr.search.similarities".length());
     }
     return fullyQualifiedName;
   }
@@ -66,10 +66,10 @@
       className = getSimilarity().getClass().getName();
     } else {
       // Only normalize factory names
-      className = normalizeSPIname(className);
+      className = normalizeName(className);
     }
     SimpleOrderedMap<Object> props = new SimpleOrderedMap<Object>();
-    props.add(CLASS_NAME, normalizeSPIname(className));
+    props.add(CLASS_NAME, className);
     if (null != params) {
       Iterator<String> iter = params.getParameterNamesIterator();
       while (iter.hasNext()) {


On Mon, Mar 25, 2013 at 12:04 PM, Steve Rowe sar...@gmail.com wrote:
 Robert,

 Would you mind responding in some form other than haiku?

 What's wrong to do?

 What should be done?

 Steve

 On Mar 25, 2013, at 1:28 PM, Robert Muir rcm...@gmail.com wrote:

 Right... This is wrong to do though.

 On Mar 25, 2013 6:00 AM, Steve Rowe sar...@gmail.com wrote:

 On Mar 25, 2013, at 3:51 AM, Robert Muir rcm...@gmail.com wrote:
  But this 'normalize' is wrong: it maps org.apache.lucene.xxx, or
  other packages of org.apache.solr.xxx, to solr.xxx.

 No, it maps o.a.(l|s).what.ev.er.xxx to solr.xxx.

 Here's the code again:

 -
  private static String normalizeSPIname(String fullyQualifiedName) {
    if (fullyQualifiedName.startsWith("org.apache.lucene.") || fullyQualifiedName.startsWith("org.apache.solr.")) {
      return "solr" + fullyQualifiedName.substring(fullyQualifiedName.lastIndexOf('.'));
    }
    return fullyQualifiedName;
  }
 -

 See the .lastIndexOf('.') part?

 Steve
 -
 To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
 For additional commands, e-mail: dev-h...@lucene.apache.org



 -
 To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
 For additional commands, e-mail: dev-h...@lucene.apache.org


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-4882) FacetsAccumulator.java:185 throws NullPointerException if it's given an empty CategoryPath.

2013-03-25 Thread crocket (JIRA)
crocket created LUCENE-4882:
---

 Summary: FacetsAccumulator.java:185 throws NullPointerException if 
it's given an empty CategoryPath.
 Key: LUCENE-4882
 URL: https://issues.apache.org/jira/browse/LUCENE-4882
 Project: Lucene - Core
  Issue Type: Bug
  Components: modules/facet
Affects Versions: 4.2
Reporter: crocket
Priority: Critical


When I want to count root categories, I used to pass new CategoryPath(new 
String[0]) to a CountFacetRequest.

Since upgrading lucene from 4.1 to 4.2, that threw 
ArrayIndexOutOfBoundsException, so I passed CategoryPath.EMPTY to a 
CountFacetRequest instead, and this time I got NullPointerException.

It all originates from FacetsAccumulator.java:185

Does someone conspire to prevent others from counting root categories?

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-4882) FacetsAccumulator.java:185 throws NullPointerException if it's given an empty CategoryPath.

2013-03-25 Thread crocket (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-4882?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

crocket updated LUCENE-4882:


Description: 
When I wanted to count root categories, I used to pass new CategoryPath(new 
String[0]) to a CountFacetRequest.

Since upgrading lucene from 4.1 to 4.2, that threw 
ArrayIndexOutOfBoundsException, so I passed CategoryPath.EMPTY to a 
CountFacetRequest instead, and this time I got NullPointerException.

It all originates from FacetsAccumulator.java:185

Does someone conspire to prevent others from counting root categories?

  was:
When I want to count root categories, I used to pass new CategoryPath(new 
String[0]) to a CountFacetRequest.

Since upgrading lucene from 4.1 to 4.2, that threw 
ArrayIndexOutOfBoundsException, so I passed CategoryPath.EMPTY to a 
CountFacetRequest instead, and this time I got NullPointerException.

It all originates from FacetsAccumulator.java:185

Does someone conspire to prevent others from counting root categories?


 FacetsAccumulator.java:185 throws NullPointerException if it's given an empty 
 CategoryPath.
 ---

 Key: LUCENE-4882
 URL: https://issues.apache.org/jira/browse/LUCENE-4882
 Project: Lucene - Core
  Issue Type: Bug
  Components: modules/facet
Affects Versions: 4.2
Reporter: crocket
Priority: Critical

 When I wanted to count root categories, I used to pass new CategoryPath(new 
 String[0]) to a CountFacetRequest.
 Since upgrading lucene from 4.1 to 4.2, that threw 
 ArrayIndexOutOfBoundsException, so I passed CategoryPath.EMPTY to a 
 CountFacetRequest instead, and this time I got NullPointerException.
 It all originates from FacetsAccumulator.java:185
 Does someone conspire to prevent others from counting root categories?

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Welcome Shawn Heisey as Lucene/Solr committer

2013-03-25 Thread Otis Gospodnetic
Welcome Shawn.  I'm most impressed by the five children part.  I have
only two and it ain't easy... (is this getting indexed and will my kids
be able to find this N years from now?)  You just didn't tell us when you
sleep, which is what I've been wondering...

Otis
--
http://sematext.com/





On Tue, Mar 19, 2013 at 12:31 AM, Steve Rowe sar...@gmail.com wrote:

 I'm pleased to announce that Shawn Heisey has accepted the PMC's
 invitation to become a committer.

 Shawn, it's tradition that you introduce yourself with a brief bio.

 Once your account has been created - could take a few days - you'll be
 able to add yourself to committers section of the Who We Are page on the
 website: http://lucene.apache.org/whoweare.html (use the ASF CMS
 bookmarklet at the bottom of the page here: 
 https://cms.apache.org/#bookmark - more info here 
 http://www.apache.org/dev/cms.html).

 Check out the ASF dev page - lots of useful links: 
 http://www.apache.org/dev/.

 Congratulations and welcome!

 Steve


 -
 To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
 For additional commands, e-mail: dev-h...@lucene.apache.org




[VOTE] Lucene/Solr 4.2.1

2013-03-25 Thread Mark Miller
http://people.apache.org/~markrmiller/lucene_solr_4_2_1r1460810/

Thanks for voting!

Smoke tester passes for me,

+1.

-- 
- Mark

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-4237) Implement index aliasing

2013-03-25 Thread Otis Gospodnetic (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4237?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Otis Gospodnetic resolved SOLR-4237.


Resolution: Duplicate

I think this is being implemented in SOLR-4497.

 Implement index aliasing
 

 Key: SOLR-4237
 URL: https://issues.apache.org/jira/browse/SOLR-4237
 Project: Solr
  Issue Type: New Feature
Reporter: Otis Gospodnetic
 Fix For: 4.3


 This is handy for searching log indices and in all other situations where 
 indices are added (and possibly deleted) over time.  Index aliasing allows 
 one to map an arbitrary set of indices to an alias and avoid needing to 
 change the search application to point it to new indices.
 See http://search-lucene.com/m/YBn4w1UAbEB
 It may also be worth thinking about using aliases when indexing.  This 
 question comes up once in a while on the ElasticSearch mailing list for 
 example.
 See 
 http://search-lucene.com/?q=index+time+aliasfc_project=ElasticSearchfc_type=mail+_hash_+user

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-4497) Collection Aliasing.

2013-03-25 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4497?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller resolved SOLR-4497.
---

   Resolution: Fixed
Fix Version/s: (was: 4.3)
   4.2
   5.0

 Collection Aliasing.
 

 Key: SOLR-4497
 URL: https://issues.apache.org/jira/browse/SOLR-4497
 Project: Solr
  Issue Type: New Feature
  Components: SolrCloud
Reporter: Mark Miller
Assignee: Mark Miller
 Fix For: 5.0, 4.2

 Attachments: CDH-4497.patch, SOLR-4497.patch


 We should bring back the old aliasing feature, but for SolrCloud and with the 
 ability to alias one collection to many.
 The old alias feature was of more limited use and had some problems, so we 
 dropped it, but I think we can do this in a more useful way with SolrCloud, 
 and at a level where it's not invasive to the CoreContainer.
 Initially, the search side will allow mapping a single alias to multiple 
 collections, but the index side will only support mapping a single alias to a 
 single collection.
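
For anyone finding this later, the resulting alias admin commands look roughly like
the following (the alias and collection names are just examples):

{code}
# create (or update) an alias that fans a search out over two collections
curl "http://localhost:8983/solr/admin/collections?action=CREATEALIAS&name=logs&collections=logs2012,logs2013"

# remove the alias again
curl "http://localhost:8983/solr/admin/collections?action=DELETEALIAS&name=logs"
{code}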

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-4644) SyncSliceTest often fails trying to setup an inconsistent state on, generally only on Apache Jenkins.

2013-03-25 Thread Mark Miller (JIRA)
Mark Miller created SOLR-4644:
-

 Summary: SyncSliceTest often fails trying to setup an inconsistent 
state on, generally only on Apache Jenkins.
 Key: SOLR-4644
 URL: https://issues.apache.org/jira/browse/SOLR-4644
 Project: Solr
  Issue Type: Bug
Reporter: Mark Miller
Assignee: Mark Miller
 Fix For: 4.3, 5.0


java.lang.AssertionError: Test Setup Failure: shard1 should have just been set 
up to be inconsistent - but it's still consistent. 
Leader:http://127.0.0.1:58076/gj_mz/in/collection1 Dead 
Guy:http://127.0.0.1:64555/gj_mz/in/collection1skip list:[CloudJettyRunner 
[url=http://127.0.0.1:18606/gj_mz/in/collection1], CloudJettyRunner 
[url=http://127.0.0.1:10847/gj_mz/in/collection1]]

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-4644) SyncSliceTest often fails trying to setup an inconsistent state, generally only on Apache Jenkins.

2013-03-25 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4644?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller updated SOLR-4644:
--

Summary: SyncSliceTest often fails trying to setup an inconsistent state, 
generally only on Apache Jenkins.  (was: SyncSliceTest often fails trying to 
setup an inconsistent state on, generally only on Apache Jenkins.)

 SyncSliceTest often fails trying to setup an inconsistent state, generally 
 only on Apache Jenkins.
 --

 Key: SOLR-4644
 URL: https://issues.apache.org/jira/browse/SOLR-4644
 Project: Solr
  Issue Type: Bug
Reporter: Mark Miller
Assignee: Mark Miller
 Fix For: 4.3, 5.0


 java.lang.AssertionError: Test Setup Failure: shard1 should have just been 
 set up to be inconsistent - but it's still consistent. 
 Leader:http://127.0.0.1:58076/gj_mz/in/collection1 Dead 
 Guy:http://127.0.0.1:64555/gj_mz/in/collection1skip list:[CloudJettyRunner 
 [url=http://127.0.0.1:18606/gj_mz/in/collection1], CloudJettyRunner 
 [url=http://127.0.0.1:10847/gj_mz/in/collection1]]

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-4882) FacetsAccumulator.java:185 throws NullPointerException if it's given an empty CategoryPath.

2013-03-25 Thread Shai Erera (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13613466#comment-13613466
 ] 

Shai Erera commented on LUCENE-4882:


bq. Does someone conspire to prevent others from counting root categories?

Hehe, no, no conspiracy. I'll look into it!

 FacetsAccumulator.java:185 throws NullPointerException if it's given an empty 
 CategoryPath.
 ---

 Key: LUCENE-4882
 URL: https://issues.apache.org/jira/browse/LUCENE-4882
 Project: Lucene - Core
  Issue Type: Bug
  Components: modules/facet
Affects Versions: 4.2
Reporter: crocket
Priority: Critical

 When I wanted to count root categories, I used to pass new CategoryPath(new 
 String[0]) to a CountFacetRequest.
 Since upgrading lucene from 4.1 to 4.2, that threw 
 ArrayIndexOutOfBoundsException, so I passed CategoryPath.EMPTY to a 
 CountFacetRequest instead, and this time I got NullPointerException.
 It all originates from FacetsAccumulator.java:185
 Does someone conspire to prevent others from counting root categories?

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-4882) FacetsAccumulator.java:185 throws NullPointerException if it's given an empty CategoryPath.

2013-03-25 Thread Shai Erera (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-4882?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shai Erera updated LUCENE-4882:
---

Attachment: LUCENE-4882.patch

Patch adds a test and fix. I'll commit later.

 FacetsAccumulator.java:185 throws NullPointerException if it's given an empty 
 CategoryPath.
 ---

 Key: LUCENE-4882
 URL: https://issues.apache.org/jira/browse/LUCENE-4882
 Project: Lucene - Core
  Issue Type: Bug
  Components: modules/facet
Affects Versions: 4.2
Reporter: crocket
Priority: Critical
 Attachments: LUCENE-4882.patch


 When I wanted to count root categories, I used to pass new CategoryPath(new 
 String[0]) to a CountFacetRequest.
 Since upgrading lucene from 4.1 to 4.2, that threw 
 ArrayIndexOutOfBoundsException, so I passed CategoryPath.EMPTY to a 
 CountFacetRequest instead, and this time I got NullPointerException.
 It all originates from FacetsAccumulator.java:185
 Does someone conspire to prevent others from counting root categories?

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-4882) FacetsAccumulator.java:185 throws NullPointerException if it's given an empty CategoryPath.

2013-03-25 Thread crocket (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13613506#comment-13613506
 ] 

crocket commented on LUCENE-4882:
-

Thanks for a quick response, man.

 FacetsAccumulator.java:185 throws NullPointerException if it's given an empty 
 CategoryPath.
 ---

 Key: LUCENE-4882
 URL: https://issues.apache.org/jira/browse/LUCENE-4882
 Project: Lucene - Core
  Issue Type: Bug
  Components: modules/facet
Affects Versions: 4.2
Reporter: crocket
Priority: Critical
 Attachments: LUCENE-4882.patch


 When I wanted to count root categories, I used to pass new CategoryPath(new 
 String[0]) to a CountFacetRequest.
 Since upgrading lucene from 4.1 to 4.2, that threw 
 ArrayIndexOutOfBoundsException, so I passed CategoryPath.EMPTY to a 
 CountFacetRequest instead, and this time I got NullPointerException.
 It all originates from FacetsAccumulator.java:185
 Does someone conspire to prevent others from counting root categories?

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (LUCENE-4882) FacetsAccumulator.java:185 throws NullPointerException if it's given an empty CategoryPath.

2013-03-25 Thread Shai Erera (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-4882?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shai Erera resolved LUCENE-4882.


   Resolution: Fixed
Fix Version/s: 4.3
   5.0
 Assignee: Shai Erera
Lucene Fields: New,Patch Available  (was: New)

Committed a fix to trunk and 4x. Thanks for reporting, crocket!

If you cannot wait until 4.3 (and cannot work with 4x directly), you can use 
StandardFacetsAccumulator as an alternative, but note that it's slower than 
FacetsAccumulator. Or, create your own FacetsAccumulator and copy accumulate + 
fix.

 FacetsAccumulator.java:185 throws NullPointerException if it's given an empty 
 CategoryPath.
 ---

 Key: LUCENE-4882
 URL: https://issues.apache.org/jira/browse/LUCENE-4882
 Project: Lucene - Core
  Issue Type: Bug
  Components: modules/facet
Affects Versions: 4.2
Reporter: crocket
Assignee: Shai Erera
Priority: Critical
 Fix For: 5.0, 4.3

 Attachments: LUCENE-4882.patch


 When I wanted to count root categories, I used to pass new CategoryPath(new 
 String[0]) to a CountFacetRequest.
 Since upgrading lucene from 4.1 to 4.2, that threw 
 ArrayIndexOutOfBoundsException, so I passed CategoryPath.EMPTY to a 
 CountFacetRequest instead, and this time I got NullPointerException.
 It all originates from FacetsAccumulator.java:185
 Does someone conspire to prevent others from counting root categories?

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-4882) FacetsAccumulator.java:185 throws NullPointerException if it's given an empty CategoryPath.

2013-03-25 Thread crocket (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13613518#comment-13613518
 ] 

crocket commented on LUCENE-4882:
-

What about 4.2.1?

And when will 4.3 be released?

 FacetsAccumulator.java:185 throws NullPointerException if it's given an empty 
 CategoryPath.
 ---

 Key: LUCENE-4882
 URL: https://issues.apache.org/jira/browse/LUCENE-4882
 Project: Lucene - Core
  Issue Type: Bug
  Components: modules/facet
Affects Versions: 4.2
Reporter: crocket
Assignee: Shai Erera
Priority: Critical
 Fix For: 5.0, 4.3

 Attachments: LUCENE-4882.patch


 When I wanted to count root categories, I used to pass new CategoryPath(new 
 String[0]) to a CountFacetRequest.
 Since upgrading lucene from 4.1 to 4.2, that threw 
 ArrayIndexOutOfBoundsException, so I passed CategoryPath.EMPTY to a 
 CountFacetRequest instead, and this time I got NullPointerException.
 It all originates from FacetsAccumulator.java:185
 Does someone conspire to prevent others from counting root categories?

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-4882) FacetsAccumulator.java:185 throws NullPointerException if it's given an empty CategoryPath.

2013-03-25 Thread Shai Erera (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13613519#comment-13613519
 ] 

Shai Erera commented on LUCENE-4882:


Unfortunately it won't make it into 4.2.1, and it's likely that 4.3 will be 
released before 4.2.2 (though it will take some time since we just cut 4.2).

 FacetsAccumulator.java:185 throws NullPointerException if it's given an empty 
 CategoryPath.
 ---

 Key: LUCENE-4882
 URL: https://issues.apache.org/jira/browse/LUCENE-4882
 Project: Lucene - Core
  Issue Type: Bug
  Components: modules/facet
Affects Versions: 4.2
Reporter: crocket
Assignee: Shai Erera
Priority: Critical
 Fix For: 5.0, 4.3

 Attachments: LUCENE-4882.patch


 When I wanted to count root categories, I used to pass new CategoryPath(new 
 String[0]) to a CountFacetRequest.
 Since upgrading lucene from 4.1 to 4.2, that threw 
 ArrayIndexOutOfBoundsException, so I passed CategoryPath.EMPTY to a 
 CountFacetRequest instead, and this time I got NullPointerException.
 It all originates from FacetsAccumulator.java:185
 Does someone conspire to prevent others from counting root categories?

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org