[jira] [Commented] (SOLR-6892) Improve the way update processors are used and make it simpler

2015-03-16 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6892?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14363996#comment-14363996
 ] 

ASF subversion and git services commented on SOLR-6892:
---

Commit 1667134 from [~noble.paul] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1667134 ]

SOLR-6892: Improve the way update processors are used and make it simpler

 Improve the way update processors are used and make it simpler
 --

 Key: SOLR-6892
 URL: https://issues.apache.org/jira/browse/SOLR-6892
 Project: Solr
  Issue Type: Improvement
Reporter: Noble Paul
Assignee: Noble Paul
 Attachments: SOLR-6892.patch


 The current update processor chain is rather cumbersome, and we should be 
 able to use update processors without a chain.
 The scope of this ticket is:
 * A new {{updateProcessor}} tag becomes a top-level tag, equivalent to the 
 {{processor}} tag inside {{updateRequestProcessorChain}}. The only 
 difference is that it requires a {{name}} attribute. The 
 {{updateProcessorChain}} tag will continue to exist, and it will still be 
 possible to define {{processor}} inside it. It will also be possible to 
 reference a named URP in a chain.
 * Processors are added to the request by name, e.g. {{processor=a,b,c}}, 
 {{post-processor=x,y,z}}. This creates an implicit chain of the named URPs 
 in the order they are specified.
 * There are multiple request parameters supported by update requests:
 ** processor: this chain is executed at the leader, right before 
 LogUpdateProcessorFactory + DistributedUpdateProcessorFactory. The replicas 
 will not execute it.
 ** post-processor: this chain is executed right before the 
 RunUpdateProcessor on all replicas, including the leader.
 * What happens to the update.chain parameter? {{update.chain}} will be 
 honored. The implicit chain is created by merging both update.chain and the 
 request params: {{post-processor}} is inserted right before 
 {{RunUpdateProcessorFactory}} in the chain, and {{processor}} is inserted 
 right before LogUpdateProcessorFactory,DistributedUpdateProcessorFactory.
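For illustration, a minimal sketch of what the proposed config might look like. The factory class names here are hypothetical, and the exact syntax is whatever the committed patch settles on:

```xml
<!-- Hypothetical sketch: standalone, named URPs at the top level -->
<updateProcessor name="a" class="solr.SomeUpdateProcessorFactory"/>
<updateProcessor name="b" class="solr.OtherUpdateProcessorFactory"/>

<!-- Chains continue to work and can still define processors inline -->
<updateRequestProcessorChain name="mychain">
  <processor class="solr.LogUpdateProcessorFactory"/>
  <processor class="solr.RunUpdateProcessorFactory"/>
</updateRequestProcessorChain>
```

A request could then name the URPs directly, e.g. {{/update?processor=a,b}}, forming an implicit chain in the listed order.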



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-7249) Solr engine misses null-values in OR null part for eDisMax parser

2015-03-16 Thread Arsen Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7249?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arsen Li updated SOLR-7249:
---
Description: 
Solr engine misses null-values in OR null part for eDisMax parser
For example, I have the following query:

((*:* AND -area:[* TO *]) OR area:[100 TO 300]) AND objectId:40105451

full query path visible in Solr Admin panel is

select?q=((*%3A*+AND+-area%3A%5B*+TO+*%5D)+OR+area%3A%5B100+TO+300%5D)+AND+objectId%3A40105451&wt=json&indent=true

debug part of response is below:
--
rawquerystring: ((*:* AND -area) OR area:[100 TO 300]) AND 
objectId:40105451,
querystring: ((*:* AND -area) OR area:[100 TO 300]) AND 
objectId:40105451,
parsedquery: +((+MatchAllDocsQuery(*:*) -text:area) area:[100 TO 300]) 
+objectId:40105451,
parsedquery_toString: +((+*:* -text:area) area:[100 TO 300]) +objectId: 
\u0001\u\u\u\u\u\u0013\u000fkk,
explain: {
  40105451: \n14.3509865 = (MATCH) sum of:\n  0.034590688 = (MATCH) 
product of:\n0.069181375 = (MATCH) sum of:\n  0.069181375 = (MATCH) sum 
of:\n0.069181375 = (MATCH) MatchAllDocsQuery, product of:\n  
0.069181375 = queryNorm\n0.5 = coord(1/2)\n  14.316396 = (MATCH) 
weight(objectId: \u0001\u\u\u\u\u\u0013\u000fkk in 1109978) 
[DefaultSimilarity], result of:\n14.316396 = score(doc=1109978,freq=1.0), 
product of:\n  0.9952025 = queryWeight, product of:\n14.38541 = 
idf(docFreq=1, maxDocs=1300888)\n0.069181375 = queryNorm\n  
14.38541 = fieldWeight in 1109978, product of:\n1.0 = tf(freq=1.0), 
with freq of:\n  1.0 = termFreq=1.0\n14.38541 = idf(docFreq=1, 
maxDocs=1300888)\n1.0 = fieldNorm(doc=1109978)\n
},
QParser: LuceneQParser,
...
--

So, it should return the record if area is between 100 and 300 or area is not declared.

It works OK with the default parser, but when I check the edismax checkbox in the 
Solr admin panel, it returns nothing (area for objectId=40105451 is null). 

Request path is the following:
select?q=((*%3A*+AND+-area%3A%5B*+TO+*%5D)+OR+area%3A%5B100+TO+300%5D)+AND+objectId%3A40105451&wt=json&indent=true&defType=edismax&stopwords=true&lowercaseOperators=true

debug response is below
--
 rawquerystring: ((*:* AND -area) OR area:[100 TO 300]) AND 
objectId:40105451,
querystring: ((*:* AND -area) OR area:[100 TO 300]) AND 
objectId:40105451,
parsedquery: (+(+((+DisjunctionMaxQuery((text:*\\:*)) 
-DisjunctionMaxQuery((text:area))) area:[100 TO 300]) 
+objectId:40105451))/no_coord,
parsedquery_toString: +(+((+(text:*\\:*) -(text:area)) area:[100 TO 
300]) +objectId: \u0001\u\u\u\u\u\u0013\u000fkk),
explain: {},
QParser: ExtendedDismaxQParser,
altquerystring: null,
boost_queries: null,
parsed_boost_queries: [],
boostfuncs: null,
--

However, when I move the query from the q field to the q.alt field, it works OK; 
the query is

select?wt=json&indent=true&defType=edismax&q.alt=((*%3A*+AND+-area%3A%5B*+TO+*%5D)+OR+area%3A%5B100+TO+300%5D)+AND+objectId%3A40105451&stopwords=true&lowercaseOperators=true

Note: asterisks are not saved by the editor; refer to 
http://stackoverflow.com/questions/29059460/solr-misses-or-null-query-when-parsing-by-edismax-parser
if more accurate syntax is needed.
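Since the raw query contains characters that must be percent-encoded, the request path can be rebuilt programmatically. A small sketch (the `select` path and parameter values come from the report above; the encoding differs trivially from the report's, e.g. `*` becomes `%2A`):

```python
from urllib.parse import urlencode

# The Lucene-syntax query from the report: match docs where area is in
# [100, 300] OR area is missing, restricted to one objectId.
q = '((*:* AND -area:[* TO *]) OR area:[100 TO 300]) AND objectId:40105451'

# urlencode percent-encodes each value (spaces become '+', ':' -> '%3A', etc.)
params = urlencode({
    'q': q,
    'wt': 'json',
    'indent': 'true',
})
path = 'select?' + params
print(path)
```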

  was:
Solr engine misses null-values in OR null part for eDisMax parser
For example, I have following query:

((*:* AND -area:[* TO *]) OR area:[100 TO 300]) AND objectId:40105451

full query path visible in Solr Admin panel is

select?q=((*%3A*+AND+-area%3A%5B*+TO+*%5D)+OR+area%3A%5B100+TO+300%5D)+AND+objectId%3A40105451&wt=json&indent=true

so, it should return record if area between 100 and 300 or area not declared.

it works ok for default parser, but when I set edismax checkbox checked in 
Solr admin panel - it returns nothing (area for objectId=40105451 is null). 

Request path is following
select?q=((*%3A*+AND+-area%3A%5B*+TO+*%5D)+OR+area%3A%5B100+TO+300%5D)+AND+objectId%3A40105451&wt=json&indent=true&defType=edismax&stopwords=true&lowercaseOperators=true

However, when I move query from q field to q.alt field - it works ok, query 
is

select?wt=json&indent=true&defType=edismax&q.alt=((*%3A*+AND+-area%3A%5B*+TO+*%5D)+OR+area%3A%5B100+TO+300%5D)+AND+objectId%3A40105451&stopwords=true&lowercaseOperators=true

note, asterisks are not saved by editor, refer to 
http://stackoverflow.com/questions/29059460/solr-misses-or-null-query-when-parsing-by-edismax-parser
if needed more accurate syntax


 Solr engine misses null-values in OR null part for eDisMax parser
 ---

 Key: SOLR-7249
 URL: https://issues.apache.org/jira/browse/SOLR-7249
 Project: Solr
  Issue Type: Bug
  

[jira] [Commented] (SOLR-6350) Percentiles in StatsComponent

2015-03-16 Thread Xu Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6350?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14364256#comment-14364256
 ] 

Xu Zhang commented on SOLR-6350:


Thanks Hoss. 

I think I screwed it up. Really sorry about it. Will fix it tonight.

 Percentiles in StatsComponent
 -

 Key: SOLR-6350
 URL: https://issues.apache.org/jira/browse/SOLR-6350
 Project: Solr
  Issue Type: Sub-task
Reporter: Hoss Man
 Attachments: SOLR-6350-Xu.patch, SOLR-6350-Xu.patch, 
 SOLR-6350-xu.patch, SOLR-6350-xu.patch, SOLR-6350.patch, SOLR-6350.patch


 Add an option to compute user specified percentiles when computing stats
 Example...
 {noformat}
 stats.field={!percentiles='1,2,98,99,99.999'}price
 {noformat}
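As a rough illustration of what the percentiles option above computes, here is a plain nearest-rank sketch. This is not Solr's implementation (which may use an approximate digest over distributed shards), just the statistic itself:

```python
# Plain-Python sketch of a percentiles stat over a numeric field.
def percentile(values, pct):
    """Nearest-rank percentile: the smallest value such that at least
    pct percent of the data is at or below it (pct in 1..100)."""
    ordered = sorted(values)
    rank = -(-pct * len(ordered) // 100)  # ceil(pct * n / 100)
    return ordered[max(rank, 1) - 1]

prices = [10, 20, 30, 40, 50, 60, 70, 80, 90, 100]
print({p: percentile(prices, p) for p in (1, 50, 99)})
```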






[jira] [Commented] (SOLR-6350) Percentiles in StatsComponent

2015-03-16 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6350?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14364303#comment-14364303
 ] 

Hoss Man commented on SOLR-6350:


bq. I think I screwed it up. Really sorry about it. Will fix it tonight.

no worries - i appreciate all your effort here ... I trust you to update your 
own patch to trunk more than i trust myself -- i just want to make sure we 
aren't losing track of anything.

 Percentiles in StatsComponent
 -

 Key: SOLR-6350
 URL: https://issues.apache.org/jira/browse/SOLR-6350
 Project: Solr
  Issue Type: Sub-task
Reporter: Hoss Man
 Attachments: SOLR-6350-Xu.patch, SOLR-6350-Xu.patch, 
 SOLR-6350-xu.patch, SOLR-6350-xu.patch, SOLR-6350.patch, SOLR-6350.patch


 Add an option to compute user specified percentiles when computing stats
 Example...
 {noformat}
 stats.field={!percentiles='1,2,98,99,99.999'}price
 {noformat}






[jira] [Commented] (LUCENE-6361) Optimized AnalyzingSuggester#topoSortStates()

2015-03-16 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6361?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14364054#comment-14364054
 ] 

Michael McCandless commented on LUCENE-6361:


I'm not sure this is safe?  It used to be O(1) to get the next state to work on 
but with this patch it becomes O(N)? (N = number of states in the incoming 
automaton), albeit with a tiny constant in front of the N since bitsets can 
scan very quickly...

So then the loop becomes O(N^2) when it's O(N) now?

I realize the automata we see in this method are supposed to be very small 
(graph expansions of one suggestion after analysis) but still...
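For reference, the O(1)-next-state pattern being contrasted here is the usual worklist-based topological sort. A hedged Python sketch (not the Lucene code itself), where the next zero-in-degree state is popped from a queue rather than found by rescanning all states:

```python
from collections import deque

def topo_sort(num_states, edges):
    """Kahn's algorithm: O(V + E) overall, because each ready state is
    retrieved from the worklist in O(1) instead of by an O(N) scan."""
    indegree = [0] * num_states
    adj = [[] for _ in range(num_states)]
    for src, dst in edges:
        adj[src].append(dst)
        indegree[dst] += 1
    # Seed the worklist with every state that has no incoming edges.
    worklist = deque(s for s in range(num_states) if indegree[s] == 0)
    order = []
    while worklist:
        s = worklist.popleft()
        order.append(s)
        for t in adj[s]:
            indegree[t] -= 1
            if indegree[t] == 0:
                worklist.append(t)
    return order

print(topo_sort(4, [(0, 1), (0, 2), (1, 3), (2, 3)]))  # a diamond-shaped DAG
```

Repeatedly scanning a bitset for the next unvisited state, by contrast, pays up to O(N) per pop, hence the O(N^2) worst case raised above.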

 Optimized AnalyzingSuggester#topoSortStates()
 

 Key: LUCENE-6361
 URL: https://issues.apache.org/jira/browse/LUCENE-6361
 Project: Lucene - Core
  Issue Type: Improvement
  Components: core/other
Affects Versions: 5.0
Reporter: Markus Heiden
Priority: Minor
 Attachments: topoSortStates.patch


 Optimized implementation of AnalyzingSuggester#topoSortStates().






[jira] [Commented] (SOLR-7249) Solr engine misses null-values in OR null part for eDisMax parser

2015-03-16 Thread Arsen Li (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7249?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14364071#comment-14364071
 ] 

Arsen Li commented on SOLR-7249:


Jack, Erick, sorry for my ignorance. The Solr/Apache community and site are big, 
so I am a bit lost here :)

I updated the issue description by adding debug output from both parsers (only 
the most meaningful part).
Also, I am a bit confused to see that both parsers show me text:area in the 
debug output (not sure if this is expected or not).

Jack, thanks for the point about -area; I tried different cases with the same 
result (LuceneQParser finds the needed record, ExtendedDismaxQParser does not).

PS: going to raise this issue on the users list, as should have been done before.

 Solr engine misses null-values in OR null part for eDisMax parser
 ---

 Key: SOLR-7249
 URL: https://issues.apache.org/jira/browse/SOLR-7249
 Project: Solr
  Issue Type: Bug
  Components: query parsers
Affects Versions: 4.10.3
 Environment: Windows 7
 CentOS 6.6
Reporter: Arsen Li

 Solr engine misses null-values in OR null part for eDisMax parser
 For example, I have following query:
 ((*:* AND -area:[* TO *]) OR area:[100 TO 300]) AND objectId:40105451
 full query path visible in Solr Admin panel is
 select?q=((*%3A*+AND+-area%3A%5B*+TO+*%5D)+OR+area%3A%5B100+TO+300%5D)+AND+objectId%3A40105451&wt=json&indent=true
 debug part of response is below:
 --
 rawquerystring: ((*:* AND -area) OR area:[100 TO 300]) AND 
 objectId:40105451,
 querystring: ((*:* AND -area) OR area:[100 TO 300]) AND 
 objectId:40105451,
 parsedquery: +((+MatchAllDocsQuery(*:*) -text:area) area:[100 TO 300]) 
 +objectId:40105451,
 parsedquery_toString: +((+*:* -text:area) area:[100 TO 300]) 
 +objectId: \u0001\u\u\u\u\u\u0013\u000fkk,
 explain: {
   40105451: \n14.3509865 = (MATCH) sum of:\n  0.034590688 = (MATCH) 
 product of:\n0.069181375 = (MATCH) sum of:\n  0.069181375 = (MATCH) 
 sum of:\n0.069181375 = (MATCH) MatchAllDocsQuery, product of:\n   
0.069181375 = queryNorm\n0.5 = coord(1/2)\n  14.316396 = (MATCH) 
 weight(objectId: \u0001\u\u\u\u\u\u0013\u000fkk in 
 1109978) [DefaultSimilarity], result of:\n14.316396 = 
 score(doc=1109978,freq=1.0), product of:\n  0.9952025 = queryWeight, 
 product of:\n14.38541 = idf(docFreq=1, maxDocs=1300888)\n
 0.069181375 = queryNorm\n  14.38541 = fieldWeight in 1109978, product 
 of:\n1.0 = tf(freq=1.0), with freq of:\n  1.0 = 
 termFreq=1.0\n14.38541 = idf(docFreq=1, maxDocs=1300888)\n1.0 
 = fieldNorm(doc=1109978)\n
 },
 QParser: LuceneQParser,
 ...
 --
 so, it should return record if area between 100 and 300 or area not declared.
 it works ok for default parser, but when I set edismax checkbox checked in 
 Solr admin panel - it returns nothing (area for objectId=40105451 is null). 
 Request path is following
 select?q=((*%3A*+AND+-area%3A%5B*+TO+*%5D)+OR+area%3A%5B100+TO+300%5D)+AND+objectId%3A40105451&wt=json&indent=true&defType=edismax&stopwords=true&lowercaseOperators=true
 debug response is below
 --
  rawquerystring: ((*:* AND -area) OR area:[100 TO 300]) AND 
 objectId:40105451,
 querystring: ((*:* AND -area) OR area:[100 TO 300]) AND 
 objectId:40105451,
 parsedquery: (+(+((+DisjunctionMaxQuery((text:*\\:*)) 
 -DisjunctionMaxQuery((text:area))) area:[100 TO 300]) 
 +objectId:40105451))/no_coord,
 parsedquery_toString: +(+((+(text:*\\:*) -(text:area)) area:[100 TO 
 300]) +objectId: \u0001\u\u\u\u\u\u0013\u000fkk),
 explain: {},
 QParser: ExtendedDismaxQParser,
 altquerystring: null,
 boost_queries: null,
 parsed_boost_queries: [],
 boostfuncs: null,
 --
 However, when I move query from q field to q.alt field - it works ok, 
 query is
 select?wt=json&indent=true&defType=edismax&q.alt=((*%3A*+AND+-area%3A%5B*+TO+*%5D)+OR+area%3A%5B100+TO+300%5D)+AND+objectId%3A40105451&stopwords=true&lowercaseOperators=true
 note, asterisks are not saved by editor, refer to 
 http://stackoverflow.com/questions/29059460/solr-misses-or-null-query-when-parsing-by-edismax-parser
 if needed more accurate syntax






Changes to the Solr website - what kind of review process is required?

2015-03-16 Thread Shawn Heisey
I have some changes I'd like to make to the Solr website, but I'm not
entirely sure what kind of review process that should go through. 
Should I open an issue in Jira, or just discuss it here?

Here are the changes I'd like to do:

http://apaste.info/pc5

I think part of the reason that we are having so many people start a
request for support by opening an issue in Jira is that the info about
the issue tracker is listed *before* the mailing lists on the resources
page.  My patch moves that whole section so it's after the mailing list
info.  This may not entirely solve the problem, but I think it will help.

There are also some changes on the solr-user mailing list info and the
IRC channel info.

Thanks,
Shawn





[jira] [Commented] (SOLR-6350) Percentiles in StatsComponent

2015-03-16 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6350?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14364243#comment-14364243
 ] 

Hoss Man commented on SOLR-6350:


Actually .. Xu: it looks like your latest patch doesn't include the nocommits i 
added based on my cursory review of your earlier work -- i'm confused as to 
what happened here ... did you ignore my changes and generate a completely new 
patch from trunk?

(there's not a lot of info lost there, it's easy to revive those notes -- i 
just want to make sure i understand what's going on and ensure that moving 
forward we don't have deviating work ... going back to my earlier comment about 
patch naming conventions: patches with the same name should build/evolve from 
the earlier versions, and include those existing changes)

 Percentiles in StatsComponent
 -

 Key: SOLR-6350
 URL: https://issues.apache.org/jira/browse/SOLR-6350
 Project: Solr
  Issue Type: Sub-task
Reporter: Hoss Man
 Attachments: SOLR-6350-Xu.patch, SOLR-6350-Xu.patch, 
 SOLR-6350-xu.patch, SOLR-6350-xu.patch, SOLR-6350.patch, SOLR-6350.patch


 Add an option to compute user specified percentiles when computing stats
 Example...
 {noformat}
 stats.field={!percentiles='1,2,98,99,99.999'}price
 {noformat}






Re: Changes to the Solr website - what kind of review process is required?

2015-03-16 Thread Erick Erickson
+1 to that, I never made the connection between usage problems in
JIRAs and the location of the two on the resources page...

Not quite sure how to make the changes though..

Erick

On Mon, Mar 16, 2015 at 3:49 PM, Shawn Heisey apa...@elyograg.org wrote:
 I have some changes I'd like to make to the Solr website, but I'm not
 entirely sure what kind of review process that should go through.
 Should I open an issue in Jira, or just discuss it here?

 Here are the changes I'd like to do:

 http://apaste.info/pc5

 I think part of the reason that we are having so many people start a
 request for support by opening an issue in Jira is that the info about
 the issue tracker is listed *before* the mailing lists on the resources
 page.  My patch moves that whole section so it's after the mailing list
 info.  This may not entirely solve the problem, but I think it will help.

 There are also some changes on the solr-user mailing list info and the
 IRC channel info.

 Thanks,
 Shawn





Re: Changes to the Solr website - what kind of review process is required?

2015-03-16 Thread Jack Krupansky
It would also be nice to have a user problem report web page that had a
few fields like Solr release, query, error message or stack trace, query
response, expected response, etc. And maybe a few bullet points like "Add
debugQuery to get detailed response for troubleshooting", "Be sure to
restart Solr after changing schema or config", and "Be sure to reindex data
after changing schema", etc. IOW, make it super easy to send a user problem
report, increase the likelihood that it will have relevant information, and
increase the likelihood that the user can resolve the problem themselves.

-- Jack Krupansky

On Mon, Mar 16, 2015 at 8:05 PM, Erick Erickson erickerick...@gmail.com
wrote:

 +1 to that, I never made the connection between usage problems in
 JIRAs and the location of the two on the resources page...

 Not quite sure how to make the changes though..

 Erick

 On Mon, Mar 16, 2015 at 3:49 PM, Shawn Heisey apa...@elyograg.org wrote:
  I have some changes I'd like to make to the Solr website, but I'm not
  entirely sure what kind of review process that should go through.
  Should I open an issue in Jira, or just discuss it here?
 
  Here are the changes I'd like to do:
 
  http://apaste.info/pc5
 
  I think part of the reason that we are having so many people start a
  request for support by opening an issue in Jira is that the info about
  the issue tracker is listed *before* the mailing lists on the resources
  page.  My patch moves that whole section so it's after the mailing list
  info.  This may not entirely solve the problem, but I think it will help.
 
  There are also some changes on the solr-user mailing list info and the
  IRC channel info.
 
  Thanks,
  Shawn
 
 




[JENKINS] Lucene-Solr-trunk-Windows (32bit/jdk1.8.0_31) - Build # 4552 - Still Failing!

2015-03-16 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Windows/4552/
Java: 32bit/jdk1.8.0_31 -client -XX:+UseParallelGC

1 tests failed.
FAILED:  org.apache.solr.cloud.CollectionsAPIDistributedZkTest.test

Error Message:
Captured an uncaught exception in thread: Thread[id=15476, name=collection1, 
state=RUNNABLE, group=TGRP-CollectionsAPIDistributedZkTest]

Stack Trace:
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught 
exception in thread: Thread[id=15476, name=collection1, state=RUNNABLE, 
group=TGRP-CollectionsAPIDistributedZkTest]
Caused by: 
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at http://127.0.0.1:54276: Could not find collection : 
awholynewstresscollection_collection1_0
at __randomizedtesting.SeedInfo.seed([197EF25814E43784]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:584)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:236)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:228)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:370)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:325)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1067)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:839)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:782)
at org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1220)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest$1CollectionThread.run(CollectionsAPIDistributedZkTest.java:892)




Build Log:
[...truncated 9738 lines...]
   [junit4] Suite: org.apache.solr.cloud.CollectionsAPIDistributedZkTest
   [junit4]   2 Creating dataDir: 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J0\temp\solr.cloud.CollectionsAPIDistributedZkTest
 197EF25814E43784-001\init-core-data-001
   [junit4]   2 3201562 T15109 oas.SolrTestCaseJ4.buildSSLConfig Randomized 
ssl (false) and clientAuth (true)
   [junit4]   2 3201562 T15109 
oas.BaseDistributedSearchTestCase.initHostContext Setting hostContext system 
property: /
   [junit4]   2 3201582 T15109 oasc.ZkTestServer.run STARTING ZK TEST SERVER
   [junit4]   1 client port:0.0.0.0/0.0.0.0:0
   [junit4]   2 3201583 T15110 oasc.ZkTestServer$ZKServerMain.runFromConfig 
Starting server
   [junit4]   2 3201696 T15109 oasc.ZkTestServer.run start zk server on 
port:54229
   [junit4]   2 3201697 T15109 
oascc.SolrZkClient.createZkCredentialsToAddAutomatically Using default 
ZkCredentialsProvider
   [junit4]   2 3201699 T15109 oascc.ConnectionManager.waitForConnected 
Waiting for client to connect to ZooKeeper
   [junit4]   2 3201708 T15117 oascc.ConnectionManager.process Watcher 
org.apache.solr.common.cloud.ConnectionManager@1a72781 name:ZooKeeperConnection 
Watcher:127.0.0.1:54229 got event WatchedEvent state:SyncConnected type:None 
path:null path:null type:None
   [junit4]   2 3201709 T15109 oascc.ConnectionManager.waitForConnected Client 
is connected to ZooKeeper
   [junit4]   2 3201709 T15109 oascc.SolrZkClient.createZkACLProvider Using 
default ZkACLProvider
   [junit4]   2 3201710 T15109 oascc.SolrZkClient.makePath makePath: /solr
   [junit4]   2 3201716 T15109 
oascc.SolrZkClient.createZkCredentialsToAddAutomatically Using default 
ZkCredentialsProvider
   [junit4]   2 3201718 T15109 oascc.ConnectionManager.waitForConnected 
Waiting for client to connect to ZooKeeper
   [junit4]   2 3201720 T15120 oascc.ConnectionManager.process Watcher 
org.apache.solr.common.cloud.ConnectionManager@1a2b52c name:ZooKeeperConnection 
Watcher:127.0.0.1:54229/solr got event WatchedEvent state:SyncConnected 
type:None path:null path:null type:None
   [junit4]   2 3201720 T15109 oascc.ConnectionManager.waitForConnected Client 
is connected to ZooKeeper
   [junit4]   2 3201721 T15109 oascc.SolrZkClient.createZkACLProvider Using 
default ZkACLProvider
   [junit4]   2 3201721 T15109 oascc.SolrZkClient.makePath makePath: 
/collections/collection1
   [junit4]   2 3201725 T15109 oascc.SolrZkClient.makePath makePath: 
/collections/collection1/shards
   [junit4]   2 3201729 T15109 oascc.SolrZkClient.makePath makePath: 
/collections/control_collection
   [junit4]   2 3201731 T15109 oascc.SolrZkClient.makePath makePath: 
/collections/control_collection/shards
   [junit4]   2 3201733 T15109 oasc.AbstractZkTestCase.putConfig put 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\core\src\test-files\solr\collection1\conf\solrconfig-tlog.xml
 to /configs/conf1/solrconfig.xml
   [junit4]   2 3201734 T15109 oascc.SolrZkClient.makePath makePath: 
/configs/conf1/solrconfig.xml
   [junit4]   2 3201743 T15109 

[JENKINS] Lucene-Solr-Tests-5.x-Java7 - Build # 2783 - Still Failing

2015-03-16 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-5.x-Java7/2783/

4 tests failed.
REGRESSION:  org.apache.solr.cloud.RecoveryZkTest.test

Error Message:
shard1 is not consistent.  Got 122 from 
http://127.0.0.1:44173/collection1lastClient and got 56 from 
http://127.0.0.1:44189/collection1

Stack Trace:
java.lang.AssertionError: shard1 is not consistent.  Got 122 from 
http://127.0.0.1:44173/collection1lastClient and got 56 from 
http://127.0.0.1:44189/collection1
at 
__randomizedtesting.SeedInfo.seed([D4A9CEFDB19098CB:5CFDF1271F6CF533]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.apache.solr.cloud.RecoveryZkTest.test(RecoveryZkTest.java:123)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:958)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:933)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Updated] (LUCENE-6345) null check all term/fields in queries

2015-03-16 Thread Lee Hinman (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6345?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lee Hinman updated LUCENE-6345:
---
Attachment: LUCENE-6345.patch

Here's a patch that adds a lot of null checks to Query classes as well as things 
like {{BooleanClause}}.

It doesn't add tests for every single query for this (yet), though I see there 
are some already for {{FilteredQuery}}.

Should I work on adding tests for every query type for this, or is adding the 
checks alone sufficient?
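The pattern under discussion, sketched in Python for brevity (the Lucene patch itself is Java and would use explicit constructor checks or Objects.requireNonNull): validate arguments eagerly so a null argument fails fast at construction time instead of deep inside search.

```python
class TermQuery:
    """Toy stand-in for a query class that validates its inputs eagerly,
    so a bad argument fails at construction, not later during execution."""

    def __init__(self, field, term):
        if field is None:
            raise ValueError("field must not be null")
        if term is None:
            raise ValueError("term must not be null")
        self.field = field
        self.term = term

q = TermQuery("body", "lucene")
print(q.field, q.term)
```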

 null check all term/fields in queries
 -

 Key: LUCENE-6345
 URL: https://issues.apache.org/jira/browse/LUCENE-6345
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Robert Muir
 Attachments: LUCENE-6345.patch


 See the mail thread "is this lucene 4.1.0 bug in PerFieldPostingsFormat".
 If anyone seriously thinks adding a null check to ctor will cause measurable 
 slowdown to things like regexp or wildcards, they should have their head 
 examined.
 All queries should just check this crap in ctor and throw exceptions if 
 parameters are invalid.






Re: Changes to the Solr website - what kind of review process is required?

2015-03-16 Thread Alexandre Rafalovitch
Or make a whole book on troubleshooting Solr and write a special
chapter on asking good questions on the mailing list. And then make
that chapter free as a sampler or something.

If only somebody would figure out how to do that. I'd give them my
money for the rest of the book. Wouldn't you?

So, if somebody :-) will actually eventually get to do it, you heard
it here first.

Regards,
   Alex.

Solr Analyzers, Tokenizers, Filters, URPs and even a newsletter:
http://www.solr-start.com/


On 16 March 2015 at 20:22, Jack Krupansky jack.krupan...@gmail.com wrote:
 It would also be nice to have a user problem report web page that had a
 few fields like Solr release, query, error message or stack trace, query
 response, expected response, etc. And maybe a few bullet points like "Add
 debugQuery to get a detailed response for troubleshooting", "Be sure to
 restart Solr after changing schema or config", and "Be sure to reindex data
 after changing schema". IOW, make it super easy to send a user problem
 report, increase the likelihood that it will have relevant information, and
 increase the likelihood that the user can resolve the problem themselves.

 -- Jack Krupansky

 On Mon, Mar 16, 2015 at 8:05 PM, Erick Erickson erickerick...@gmail.com
 wrote:

 +1 to that, I never made the connection between usage problems in
 JIRAs and the location of the two on the resources page...

 Not quite sure how to make the changes, though...

 Erick

 On Mon, Mar 16, 2015 at 3:49 PM, Shawn Heisey apa...@elyograg.org wrote:
  I have some changes I'd like to make to the Solr website, but I'm not
  entirely sure what kind of review process that should go through.
  Should I open an issue in Jira, or just discuss it here?
 
  Here are the changes I'd like to do:
 
  http://apaste.info/pc5
 
  I think part of the reason that we are having so many people start a
  request for support by opening an issue in Jira is that the info about
  the issue tracker is listed *before* the mailing lists on the resources
  page.  My patch moves that whole section so it's after the mailing list
  info.  This may not entirely solve the problem, but I think it will
  help.
 
  There are also some changes on the solr-user mailing list info and the
  IRC channel info.
 
  Thanks,
  Shawn
 
 






[jira] [Updated] (LUCENE-6345) null check all term/fields in queries

2015-03-16 Thread Lee Hinman (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6345?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lee Hinman updated LUCENE-6345:
---
Attachment: LUCENE-6345.patch

Updated patch that re-adds an assert that I removed mistakenly.

 null check all term/fields in queries
 -

 Key: LUCENE-6345
 URL: https://issues.apache.org/jira/browse/LUCENE-6345
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Robert Muir
 Attachments: LUCENE-6345.patch, LUCENE-6345.patch


 See the mail thread "is this lucene 4.1.0 bug in PerFieldPostingsFormat".
 If anyone seriously thinks adding a null check to the ctor will cause a measurable 
 slowdown to things like regexp or wildcard queries, they should have their head 
 examined.
 All queries should just check these parameters in the ctor and throw exceptions 
 if they are invalid.






[jira] [Updated] (SOLR-7250) In spellcheck.extendedResults=true freq value of suggestion differs from it actual origFreq

2015-03-16 Thread Wasim (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7250?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wasim updated SOLR-7250:

Description: 
The original frequency does not match the suggestion frequency in Solr.

The output shows (73) for "whs is", a suggestion for "who is", which differs 
from its actual original frequency (94). For reference, two images of the 
output are attached.

My schema.xml

<field name="gram" type="textSpell" indexed="true" stored="true" required="true" multiValued="false"/>
<field name="gram_ci" type="textSpellCi" indexed="true" stored="false" multiValued="false"/>

<copyField source="gram" dest="gram_ci"/>

<fieldType name="textSpell" class="solr.TextField" positionIncrementGap="100">
  <analyzer type="index">
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <filter class="solr.ShingleFilterFactory" maxShingleSize="5" minShingleSize="2" outputUnigrams="true"/>
  </analyzer>
  <analyzer type="query">
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <filter class="solr.ShingleFilterFactory" maxShingleSize="5" minShingleSize="2" outputUnigrams="true"/>
  </analyzer>
</fieldType>
<fieldType name="textSpellCi" class="solr.TextField" positionIncrementGap="100">
  <analyzer type="index">
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
    <filter class="solr.ShingleFilterFactory" maxShingleSize="5" minShingleSize="2" outputUnigrams="true"/>
  </analyzer>
  <analyzer type="query">
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
    <filter class="solr.ShingleFilterFactory" maxShingleSize="5" minShingleSize="2" outputUnigrams="true"/>
  </analyzer>
</fieldType>

solrconfig.xml

<searchComponent name="spellcheck" class="solr.SpellCheckComponent">
  <str name="queryAnalyzerFieldType">textSpellCi</str>
  <lst name="spellchecker">
    <str name="name">default</str>
    <str name="field">gram_ci</str>
    <str name="classname">solr.DirectSolrSpellChecker</str>
    <str name="distanceMeasure">internal</str>
    <float name="accuracy">0.5</float>
    <int name="maxEdits">2</int>
    <int name="minPrefix">0</int>
    <int name="maxInspections">5</int>
    <int name="minQueryLength">2</int>
    <float name="maxQueryFrequency">0.99</float>
    <str name="comparatorClass">freq</str>
    <float name="thresholdTokenFrequency">0.0</float>
  </lst>
</searchComponent>
<requestHandler name="/spell" class="solr.SearchHandler" startup="lazy">
  <lst name="defaults">
    <str name="df">gram_ci</str>
    <str name="spellcheck.dictionary">default</str>
    <str name="spellcheck">on</str>
    <str name="spellcheck.extendedResults">true</str>
    <str name="spellcheck.count">15</str>
    <str name="spellcheck.alternativeTermCount">10</str>
    <str name="spellcheck.onlyMorePopular">false</str>
  </lst>
  <arr name="last-components">
    <str>spellcheck</str>
  </arr>
</requestHandler>


For more information, see
http://stackoverflow.com/questions/28857915/original-frequency-is-not-matching-with-suggestion-frequency-in-solr


[jira] [Created] (SOLR-7250) In spellcheck.extendedResults=true freq value of suggestion differs from it actual origFreq

2015-03-16 Thread Wasim (JIRA)
Wasim created SOLR-7250:
---

 Summary: In spellcheck.extendedResults=true freq value of 
suggestion differs from it actual origFreq 
 Key: SOLR-7250
 URL: https://issues.apache.org/jira/browse/SOLR-7250
 Project: Solr
  Issue Type: New Feature
 Environment: solr 4.10.4
Reporter: Wasim


The original frequency does not match the suggestion frequency in Solr.

The output shows (73) for "whs is", a suggestion for "who is", which differs 
from its actual original frequency (94). For reference, two images of the 
output are attached.

My schema.xml

<field name="gram" type="textSpell" indexed="true" stored="true" required="true" multiValued="false"/>
<field name="gram_ci" type="textSpellCi" indexed="true" stored="false" multiValued="false"/>

<copyField source="gram" dest="gram_ci"/>

<fieldType name="textSpell" class="solr.TextField" positionIncrementGap="100">
  <analyzer type="index">
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <filter class="solr.ShingleFilterFactory" maxShingleSize="5" minShingleSize="2" outputUnigrams="true"/>
  </analyzer>
  <analyzer type="query">
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <filter class="solr.ShingleFilterFactory" maxShingleSize="5" minShingleSize="2" outputUnigrams="true"/>
  </analyzer>
</fieldType>
<fieldType name="textSpellCi" class="solr.TextField" positionIncrementGap="100">
  <analyzer type="index">
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
    <filter class="solr.ShingleFilterFactory" maxShingleSize="5" minShingleSize="2" outputUnigrams="true"/>
  </analyzer>
  <analyzer type="query">
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
    <filter class="solr.ShingleFilterFactory" maxShingleSize="5" minShingleSize="2" outputUnigrams="true"/>
  </analyzer>
</fieldType>

solrconfig.xml

<searchComponent name="spellcheck" class="solr.SpellCheckComponent">
  <str name="queryAnalyzerFieldType">textSpellCi</str>
  <lst name="spellchecker">
    <str name="name">default</str>
    <str name="field">gram_ci</str>
    <str name="classname">solr.DirectSolrSpellChecker</str>
    <str name="distanceMeasure">internal</str>
    <float name="accuracy">0.5</float>
    <int name="maxEdits">2</int>
    <int name="minPrefix">0</int>
    <int name="maxInspections">5</int>
    <int name="minQueryLength">2</int>
    <float name="maxQueryFrequency">0.99</float>
    <str name="comparatorClass">freq</str>
    <float name="thresholdTokenFrequency">0.0</float>
  </lst>
</searchComponent>
<requestHandler name="/spell" class="solr.SearchHandler" startup="lazy">
  <lst name="defaults">
    <str name="df">gram_ci</str>
    <str name="spellcheck.dictionary">default</str>
    <str name="spellcheck">on</str>
    <str name="spellcheck.extendedResults">true</str>
    <str name="spellcheck.count">15</str>
    <str name="spellcheck.alternativeTermCount">10</str>
    <str name="spellcheck.onlyMorePopular">false</str>
  </lst>
  <arr name="last-components">
    <str>spellcheck</str>
  </arr>
</requestHandler>
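Assuming the handler above is deployed, a request that exercises {{spellcheck.extendedResults}} can be built as below. The host, port, and core name ("collection1") are illustrative assumptions; only the parameter names come from the configuration in the report:

```java
import java.io.UnsupportedEncodingException;
import java.net.URLEncoder;

// Builds a request URL for the /spell handler configured above. Host, port,
// and core name ("collection1") are assumptions for illustration only.
final class SpellRequestSketch {
    static String buildUrl(String query) {
        try {
            return "http://localhost:8983/solr/collection1/spell"
                + "?q=" + URLEncoder.encode(query, "UTF-8")
                + "&spellcheck=true"
                + "&spellcheck.extendedResults=true"
                + "&wt=json";
        } catch (UnsupportedEncodingException e) {
            throw new AssertionError("UTF-8 is always supported", e);
        }
    }
}
```

With extendedResults enabled, each suggestion in the response should carry a freq value, which is what the report compares against origFreq.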






[jira] [Commented] (SOLR-7247) sliceHash for compositeIdRouter is not coherent with routing

2015-03-16 Thread Paolo Cappuccini (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7247?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14363054#comment-14363054
 ] 

Paolo Cappuccini commented on SOLR-7247:


Shalin, perhaps I'm missing something, but in 
org.apache.solr.update.SolrIndexSplitter.java I see at line 194 (ver. 4.10.3):

hash = hashRouter.sliceHash(idString, null, null, null);

Called this way, routeField is ignored because collection and doc are 
missing and no params are specified.

In fact, in my case the split is done, but in the wrong way.

When I said it is "broken" I meant that it works in the wrong way (not that 
it throws an exception).

After the split, docs are in the wrong shard, and future searches can fail 
because the search uses other parameters.


 sliceHash for compositeIdRouter is not coherent with routing
 

 Key: SOLR-7247
 URL: https://issues.apache.org/jira/browse/SOLR-7247
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.10.3
Reporter: Paolo Cappuccini

 In CompositeIdRouter, the function sliceHash checks the routeField configured for 
 the collection.
 This suggests the intended behaviour is to use an alternative field to the id 
 field when hashing documents.
 But the signature of this method is very general (it can take an id, doc, or 
 params) and it is used in different ways by different functionality.
 In my opinion it should have overloads instead of weak internal logic: one 
 overload with doc and collection, and another with id, params, and 
 collection.
 In any case, if \_route_ is not available via params, collection 
 should be mandatory, and when a routeField is configured, doc should be mandatory too.
 This will break SplitIndex but it will preserve the coherence of the data.
 With a routeField configured, I noticed that DeleteCommand is broken (it 
 passes only id and params to sliceHash), as is SolrIndexSplitter (it passes 
 only the id).
 Either specifying a routeField for compositeIdRouter should be forbidden, or 
 the related functionality should be implemented so that documents can be hashed 
 by routeField.
 For DeleteCommand the workaround is to specify the \_route_ param 
 in the request, but for index splitting no workaround is possible.
 In that case the entire document should be passed during splitting (the doc 
 parameter), or params should be built with the proper \_route_ parameter.
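The routing concern can be illustrated with a simplified composite hash. This sketch uses String.hashCode rather than Solr's actual MurmurHash-based implementation, so the numbers are illustrative only; the point is that the route key determines the high bits of the slice hash, so hashing the bare id (as a splitter that receives only idString can do) may land a document in a different slice:

```java
// Simplified illustration of composite-id routing. Not Solr's actual
// implementation: String.hashCode stands in for MurmurHash.
final class CompositeHashSketch {
    // High 16 bits from the route key, low 16 bits from the id.
    static int compositeHash(String routeKey, String id) {
        return (routeKey.hashCode() & 0xFFFF0000) | (id.hashCode() & 0x0000FFFF);
    }

    // What a splitter that sees only the id can compute: the route key never
    // participates, so the result generally disagrees with compositeHash.
    static int idOnlyHash(String id) {
        return id.hashCode();
    }
}
```

Because the high bits derive solely from the route key, all documents sharing a route key fall in one contiguous hash range; hashing the bare id discards that guarantee.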









[jira] [Updated] (SOLR-7248) In legacyCloud=false mode we should check if the core was hosted on the same node before registering it

2015-03-16 Thread Varun Thacker (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7248?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Thacker updated SOLR-7248:

Attachment: SOLR-7248.patch

Updated patch which compares both 'base_url' and 'name'

 In legacyCloud=false mode we should check if the core was hosted on the same 
 node before registering it 
 

 Key: SOLR-7248
 URL: https://issues.apache.org/jira/browse/SOLR-7248
 Project: Solr
  Issue Type: Bug
Affects Versions: 5.0
Reporter: Varun Thacker
Assignee: Varun Thacker
 Fix For: Trunk, 5.1

 Attachments: SOLR-7248.patch, SOLR-7248.patch


 Related discussion here - http://markmail.org/message/n32mxbv42hzuneyy
 Currently we check whether the same coreNodeName is present in clusterstate before 
 registering it. We should make this check more stringent and allow a core to 
 be registered only if the coreNodeName is present and it is on the same 
 node.
 This will ensure that junk replica folders lying around on old nodes don't 
 end up registering themselves when the node gets bounced.
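The stricter check being proposed can be sketched as follows. The property keys mirror Solr replica properties, but the method itself is illustrative, not the actual patch:

```java
import java.util.Map;

// Hypothetical sketch: allow a core to (re-)register only when the replica
// entry in clusterstate matches both its coreNodeName and its node's base_url.
final class RegistrationCheckSketch {
    static boolean mayRegister(Map<String, String> replicaProps,
                               String coreNodeName, String baseUrl) {
        return replicaProps != null
            && coreNodeName.equals(replicaProps.get("core_node_name"))
            && baseUrl.equals(replicaProps.get("base_url"));
    }
}
```

Comparing the base_url as well is what keeps a junk replica folder on an old node from registering under a coreNodeName that now lives elsewhere.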






[jira] [Updated] (SOLR-7250) In spellcheck.extendedResults=true freq value of suggestion differs from it actual origFreq

2015-03-16 Thread Wasim (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7250?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wasim updated SOLR-7250:

Description: 
The original frequency does not match the suggestion frequency in Solr.

The output shows (73) for "whs is", a suggestion for "who is", which differs 
from its actual original frequency (94). For reference, two images of the 
output are attached.

My schema.xml

<field name="gram" type="textSpell" indexed="true" stored="true" required="true" multiValued="false"/>
<field name="gram_ci" type="textSpellCi" indexed="true" stored="false" multiValued="false"/>

<copyField source="gram" dest="gram_ci"/>

<fieldType name="textSpell" class="solr.TextField" positionIncrementGap="100">
  <analyzer type="index">
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <filter class="solr.ShingleFilterFactory" maxShingleSize="5" minShingleSize="2" outputUnigrams="true"/>
  </analyzer>
  <analyzer type="query">
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <filter class="solr.ShingleFilterFactory" maxShingleSize="5" minShingleSize="2" outputUnigrams="true"/>
  </analyzer>
</fieldType>
<fieldType name="textSpellCi" class="solr.TextField" positionIncrementGap="100">
  <analyzer type="index">
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
    <filter class="solr.ShingleFilterFactory" maxShingleSize="5" minShingleSize="2" outputUnigrams="true"/>
  </analyzer>
  <analyzer type="query">
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
    <filter class="solr.ShingleFilterFactory" maxShingleSize="5" minShingleSize="2" outputUnigrams="true"/>
  </analyzer>
</fieldType>

solrconfig.xml

<searchComponent name="spellcheck" class="solr.SpellCheckComponent">
  <str name="queryAnalyzerFieldType">textSpellCi</str>
  <lst name="spellchecker">
    <str name="name">default</str>
    <str name="field">gram_ci</str>
    <str name="classname">solr.DirectSolrSpellChecker</str>
    <str name="distanceMeasure">internal</str>
    <float name="accuracy">0.5</float>
    <int name="maxEdits">2</int>
    <int name="minPrefix">0</int>
    <int name="maxInspections">5</int>
    <int name="minQueryLength">2</int>
    <float name="maxQueryFrequency">0.99</float>
    <str name="comparatorClass">freq</str>
    <float name="thresholdTokenFrequency">0.0</float>
  </lst>
</searchComponent>
<requestHandler name="/spell" class="solr.SearchHandler" startup="lazy">
  <lst name="defaults">
    <str name="df">gram_ci</str>
    <str name="spellcheck.dictionary">default</str>
    <str name="spellcheck">on</str>
    <str name="spellcheck.extendedResults">true</str>
    <str name="spellcheck.count">15</str>
    <str name="spellcheck.alternativeTermCount">10</str>
    <str name="spellcheck.onlyMorePopular">false</str>
  </lst>
  <arr name="last-components">
    <str>spellcheck</str>
  </arr>
</requestHandler>


[JENKINS] Lucene-Solr-Tests-5.x-Java7 - Build # 2780 - Still Failing

2015-03-16 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-5.x-Java7/2780/

4 tests failed.
REGRESSION:  org.apache.solr.cloud.LeaderFailoverAfterPartitionTest.test

Error Message:
Captured an uncaught exception in thread: Thread[id=699, 
name=SocketProxy-Response-52218:15344, state=RUNNABLE, 
group=TGRP-LeaderFailoverAfterPartitionTest]

Stack Trace:
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught 
exception in thread: Thread[id=699, name=SocketProxy-Response-52218:15344, 
state=RUNNABLE, group=TGRP-LeaderFailoverAfterPartitionTest]
at 
__randomizedtesting.SeedInfo.seed([3ADA2BE38610F935:B28E143928EC94CD]:0)
Caused by: java.lang.RuntimeException: java.net.SocketException: Socket is 
closed
at __randomizedtesting.SeedInfo.seed([3ADA2BE38610F935]:0)
at 
org.apache.solr.cloud.SocketProxy$Bridge$Pump.run(SocketProxy.java:344)
Caused by: java.net.SocketException: Socket is closed
at java.net.Socket.setSoTimeout(Socket.java:1101)
at 
org.apache.solr.cloud.SocketProxy$Bridge$Pump.run(SocketProxy.java:341)


FAILED:  org.apache.solr.cloud.LeaderInitiatedRecoveryOnCommitTest.test

Error Message:
IOException occured when talking to server at: 
http://127.0.0.1:13864/c8n_1x3_commits_shard1_replica3

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: IOException occured when 
talking to server at: http://127.0.0.1:13864/c8n_1x3_commits_shard1_replica3
at 
__randomizedtesting.SeedInfo.seed([3ADA2BE38610F935:B28E143928EC94CD]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:598)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:236)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:228)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:135)
at org.apache.solr.client.solrj.SolrClient.commit(SolrClient.java:483)
at org.apache.solr.client.solrj.SolrClient.commit(SolrClient.java:464)
at 
org.apache.solr.cloud.LeaderInitiatedRecoveryOnCommitTest.oneShardTest(LeaderInitiatedRecoveryOnCommitTest.java:130)
at 
org.apache.solr.cloud.LeaderInitiatedRecoveryOnCommitTest.test(LeaderInitiatedRecoveryOnCommitTest.java:62)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:958)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:933)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Updated] (SOLR-7248) In legacyCloud=false mode we should check if the core was hosted on the same node before registering it

2015-03-16 Thread Varun Thacker (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7248?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Thacker updated SOLR-7248:

Attachment: SOLR-7248.patch

Patch which checks both coreNodeName and base_url to verify whether the core is 
already present in the clusterstate.

Also refactored that method.

 In legacyCloud=false mode we should check if the core was hosted on the same 
 node before registering it 
 

 Key: SOLR-7248
 URL: https://issues.apache.org/jira/browse/SOLR-7248
 Project: Solr
  Issue Type: Bug
Affects Versions: 5.0
Reporter: Varun Thacker
Assignee: Varun Thacker
 Fix For: Trunk, 5.1

 Attachments: SOLR-7248.patch


 Related discussion here - http://markmail.org/message/n32mxbv42hzuneyy
 Currently we check whether the same coreNodeName is present in clusterstate before 
 registering it. We should make this check more stringent and allow a core to 
 be registered only if the coreNodeName is present and it is on the same 
 node.
 This will ensure that junk replica folders lying around on old nodes don't 
 end up registering themselves when the node gets bounced.






[jira] [Created] (SOLR-7249) Solr engine misses null-values in OR null part when eDisMax parser

2015-03-16 Thread Arsen Li (JIRA)
Arsen Li created SOLR-7249:
--

 Summary: Solr engine misses null-values in OR null part when 
eDisMax parser
 Key: SOLR-7249
 URL: https://issues.apache.org/jira/browse/SOLR-7249
 Project: Solr
  Issue Type: Bug
  Components: query parsers
Affects Versions: 4.10.3
 Environment: Windows 7
CentOS 6.6
Reporter: Arsen Li


The Solr engine misses null values in the "OR null" part of a query when using 
the eDisMax parser.
For example, I have the following query:

((*:* AND -area:[* TO *]) OR area:[100 TO 300]) AND objectId:40105451

The full query path visible in the Solr Admin panel is

select?q=((*%3A*+AND+-area%3A%5B*+TO+*%5D)+OR+area%3A%5B100+TO+300%5D)+AND+objectId%3A40105451&wt=json&indent=true

so it should return the record if area is between 100 and 300 or area is not declared.

It works with the default parser, but when I check the edismax checkbox in the 
Solr admin panel it returns nothing (area for objectId=40105451 is null).

The request path is the following:
select?q=((*%3A*+AND+-area%3A%5B*+TO+*%5D)+OR+area%3A%5B100+TO+300%5D)+AND+objectId%3A40105451&wt=json&indent=true&defType=edismax&stopwords=true&lowercaseOperators=true

However, when I move the query from the q field to the q.alt field it works; 
the query is

select?wt=json&indent=true&defType=edismax&q.alt=((*%3A*+AND+-area%3A%5B*+TO+*%5D)+OR+area%3A%5B100+TO+300%5D)+AND+objectId%3A40105451&stopwords=true&lowercaseOperators=true
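The encoded URLs above are easier to verify when produced mechanically. A minimal sketch, assuming the standard /select handler and the parameters from the report; the `(*:* AND -area:[* TO *])` clause is the usual idiom for matching documents where the field is absent:

```java
import java.io.UnsupportedEncodingException;
import java.net.URLEncoder;

// Encodes the report's query so that %3A (':'), %5B/%5D ('[' and ']') and
// '+' (spaces) are produced mechanically rather than by hand.
final class EdismaxUrlSketch {
    static String buildUrl(String query) {
        try {
            return "select?q=" + URLEncoder.encode(query, "UTF-8")
                + "&wt=json&indent=true"
                + "&defType=edismax&stopwords=true&lowercaseOperators=true";
        } catch (UnsupportedEncodingException e) {
            throw new AssertionError("UTF-8 is always supported", e);
        }
    }
}
```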







[jira] [Created] (SOLR-7248) In legacyCloud=false mode we should check if the core was hosted on the same node before registering it

2015-03-16 Thread Varun Thacker (JIRA)
Varun Thacker created SOLR-7248:
---

 Summary: In legacyCloud=false mode we should check if the core was 
hosted on the same node before registering it 
 Key: SOLR-7248
 URL: https://issues.apache.org/jira/browse/SOLR-7248
 Project: Solr
  Issue Type: Bug
Affects Versions: 5.0
Reporter: Varun Thacker
Assignee: Varun Thacker
 Fix For: Trunk, 5.0.1


Related discussion here - http://markmail.org/message/n32mxbv42hzuneyy

Currently we check whether the same coreNodeName is present in clusterstate before 
registering it. We should make this check more stringent and allow a core to be 
registered only if the coreNodeName is present and it is on the same node.

This will ensure that junk replica folders lying around on old nodes don't end 
up registering themselves when the node gets bounced.






Re: 2B tests

2015-03-16 Thread Shawn Heisey
On 3/15/2015 2:58 PM, Michael McCandless wrote:
 I confirmed 2B tests are passing on 4.10.x.  Took 17 hours to run ...
 this is the command I run, for future reference:
 
   ant test -Dtests.monster=true -Dtests.heapsize=30g -Dtests.jvms=1
 -Dtests.workDir=/p/tmp

Thanks for your help on LUCENE-6002.  I would probably still be
scratching my head without your assistance.  The changes I made could
probably still be improved, especially the comments in the JUnit annotations.

Shawn





[jira] [Commented] (SOLR-7248) In legacyCloud=false mode we should check if the core was hosted on the same node before registering it

2015-03-16 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14363152#comment-14363152
 ] 

Noble Paul commented on SOLR-7248:
--

Why is there a timed loop? Why not a single attempt?

 In legacyCloud=false mode we should check if the core was hosted on the same 
 node before registering it 
 

 Key: SOLR-7248
 URL: https://issues.apache.org/jira/browse/SOLR-7248
 Project: Solr
  Issue Type: Bug
Affects Versions: 5.0
Reporter: Varun Thacker
Assignee: Varun Thacker
 Fix For: Trunk, 5.1

 Attachments: SOLR-7248.patch


 Related discussion here - http://markmail.org/message/n32mxbv42hzuneyy
 Currently we check if the same coreNodeName is present in clusterstate before 
 registering it. We should make this check more stringent and allow a core to 
 be registered only if the coreNodeName is present and if it's on the same 
 node.
 This will ensure that junk replica folders lying around on old nodes don't 
 end up registering themselves when the node gets bounced.






[jira] [Issue Comment Deleted] (SOLR-7247) sliceHash for compositeIdRouter is not coherent with routing

2015-03-16 Thread Paolo Cappuccini (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7247?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paolo Cappuccini updated SOLR-7247:
---
Comment: was deleted

(was: Shalin, perhaps I'm missing something, but in 
org.apache.solr.update.SolrIndexSplitter.java I see at line 194 (ver. 4.10.3):

hash = hashRouter.sliceHash(idString, null, null, null);

Called this way, routeField is ignored internally because collection and doc 
are missing and no params are specified.

In fact, in my case the split is done, but in the wrong way!

When I said it is broken, I meant that it works in the wrong way (I didn't 
mean that it throws an exception).

But after the split, docs end up in the wrong shard, and future searches can 
fail logically because search uses other parameters.
)

 sliceHash for compositeIdRouter is not coherent with routing
 

 Key: SOLR-7247
 URL: https://issues.apache.org/jira/browse/SOLR-7247
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.10.3
Reporter: Paolo Cappuccini

 In CompositeIdRouter, the function sliceHash checks the routeField configured 
 for the collection.
 This makes me guess that the intended behaviour is to support an alternative 
 field to the id field for hashing documents.
 But the signature of this method is very general (it can take id, doc, or 
 params) and it is used in different ways by different functionality.
 In my opinion it should have overloads instead of weak internal logic: one 
 overload with doc and collection, and another one with id, params, and 
 collection.
 In any case, if \_route_ is not available via params, collection should be 
 mandatory, and in the case of a routeField, doc should be mandatory as well.
 This will break SplitIndex, but it will preserve data coherence.
 If I configure a routeField, I notice that DeleteCommand is broken (it passes 
 only id and params to sliceHash), as is SolrIndexSplitter (it passes only id).
 Either specifying a routeField for compositeIdRouter should be forbidden, or 
 the related functionality should be implemented so that documents can be 
 hashed based on the routeField.
 In the case of DeleteCommand, the workaround is to specify the _route_ param 
 in the request, but for index splitting no workaround is possible.
 In that case the entire document should be passed during splitting (the doc 
 parameter), or params should be built with the proper \_route_ parameter.
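The overloads proposed above could look roughly like the following. This is a hypothetical sketch, not Solr's actual API: the class name, method bodies, and the plain hashCode-based hashing are made up purely to illustrate the two-signature dispatch (Solr's real routers use a different hash).

```java
import java.util.Map;

// Hypothetical sketch of the two overloads proposed in the issue description.
// Nothing here is Solr's actual API; names and bodies are illustrative only.
public class RouterOverloadSketch {

  // Overload 1: route by the collection's configured routeField, so the
  // document itself must be available.
  public static int sliceHash(Map<String, ?> doc, String routeField) {
    Object v = doc.get(routeField);
    if (v == null) {
      throw new IllegalArgumentException("document must carry " + routeField);
    }
    return v.toString().hashCode();
  }

  // Overload 2: route by id, letting an explicit _route_ request param win.
  public static int sliceHash(String id, Map<String, String> params) {
    String route = (params == null) ? null : params.get("_route_");
    return (route != null ? route : id).hashCode();
  }

  public static void main(String[] args) {
    // With a routeField of "region", both call sites hash the same value,
    // so deletes and splits would land on the same shard as indexing did.
    int byDoc = sliceHash(Map.of("id", "40105451", "region", "eu"), "region");
    int byParam = sliceHash("40105451", Map.of("_route_", "eu"));
    System.out.println(byDoc == byParam); // prints: true
  }
}
```

The point of the two explicit signatures is that each caller is forced to supply what routing actually needs, instead of passing nulls through one weakly typed method.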






[JENKINS] Lucene-Solr-5.x-Linux (64bit/jdk1.8.0_40-ea-b22) - Build # 11822 - Failure!

2015-03-16 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/11822/
Java: 64bit/jdk1.8.0_40-ea-b22 -XX:-UseCompressedOops -XX:+UseParallelGC

1 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.search.facet.TestJsonFacets

Error Message:
Clean up static fields (in @AfterClass?), your test seems to hang on to 
approximately 11,310,136 bytes (threshold is 10,485,760). Field reference sizes 
(counted individually):   - 11,907,616 bytes, private static 
org.apache.solr.SolrTestCaseHS$SolrInstances 
org.apache.solr.search.facet.TestJsonFacets.servers   - 296 bytes, public 
static org.junit.rules.TestRule org.apache.solr.SolrTestCaseJ4.solrClassRules   
- 216 bytes, protected static java.lang.String 
org.apache.solr.SolrTestCaseJ4.testSolrHome   - 144 bytes, private static 
java.lang.String org.apache.solr.SolrTestCaseJ4.factoryProp   - 80 bytes, 
private static java.lang.String org.apache.solr.SolrTestCaseJ4.coreName

Stack Trace:
junit.framework.AssertionFailedError: Clean up static fields (in @AfterClass?), 
your test seems to hang on to approximately 11,310,136 bytes (threshold is 
10,485,760). Field reference sizes (counted individually):
  - 11,907,616 bytes, private static 
org.apache.solr.SolrTestCaseHS$SolrInstances 
org.apache.solr.search.facet.TestJsonFacets.servers
  - 296 bytes, public static org.junit.rules.TestRule 
org.apache.solr.SolrTestCaseJ4.solrClassRules
  - 216 bytes, protected static java.lang.String 
org.apache.solr.SolrTestCaseJ4.testSolrHome
  - 144 bytes, private static java.lang.String 
org.apache.solr.SolrTestCaseJ4.factoryProp
  - 80 bytes, private static java.lang.String 
org.apache.solr.SolrTestCaseJ4.coreName
at __randomizedtesting.SeedInfo.seed([D4F5A51CAD8422F0]:0)
at 
com.carrotsearch.randomizedtesting.rules.StaticFieldsInvariantRule$1.afterAlways(StaticFieldsInvariantRule.java:127)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:43)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at java.lang.Thread.run(Thread.java:745)




Build Log:
[...truncated 10836 lines...]
   [junit4] Suite: org.apache.solr.search.facet.TestJsonFacets
   [junit4]   2 Creating dataDir: 
/mnt/ssd/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J1/temp/solr.search.facet.TestJsonFacets
 D4F5A51CAD8422F0-001/init-core-data-001
   [junit4]   2 2029299 T12224 oas.SolrTestCaseJ4.initCore initCore
   [junit4]   2 2029299 T12224 oasc.SolrResourceLoader.init new 
SolrResourceLoader for directory: 
'/mnt/ssd/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/core/src/test-files/solr/collection1/'
   [junit4]   2 2029299 T12224 oasc.SolrResourceLoader.replaceClassLoader 
Adding 
'file:/mnt/ssd/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/core/src/test-files/solr/collection1/lib/.svn/'
 to classloader
   [junit4]   2 2029300 T12224 oasc.SolrResourceLoader.replaceClassLoader 
Adding 
'file:/mnt/ssd/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/core/src/test-files/solr/collection1/lib/classes/'
 to classloader
   [junit4]   2 2029300 T12224 oasc.SolrResourceLoader.replaceClassLoader 
Adding 
'file:/mnt/ssd/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/core/src/test-files/solr/collection1/lib/README'
 to classloader
   [junit4]   2 2029319 T12224 oasc.SolrConfig.refreshRequestParams current 
version of requestparams : -1
   [junit4]   2 2029325 T12224 oasc.SolrConfig.init Using Lucene 
MatchVersion: 5.1.0
   [junit4]   2 2029336 T12224 oasc.SolrConfig.init Loaded SolrConfig: 
solrconfig-tlog.xml
   [junit4]   2 2029336 T12224 oass.IndexSchema.readSchema Reading Solr Schema 
from 
/mnt/ssd/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/core/src/test-files/solr/collection1/conf/schema_latest.xml
   [junit4]   2 2029340 T12224 oass.IndexSchema.readSchema [null] Schema 
name=example
   [junit4]   2 2029407 T12224 oass.AbstractSpatialFieldType.init WARN units 
parameter is deprecated, please use distanceUnits instead for field types with 
class SpatialRecursivePrefixTreeFieldType
   [junit4]   2 2029410 T12224 oass.IndexSchema.readSchema unique key field: id
   [junit4]   2 2029414 T12224 oass.FileExchangeRateProvider.reload 

[jira] [Issue Comment Deleted] (SOLR-7248) In legacyCloud=false mode we should check if the core was hosted on the same node before registering it

2015-03-16 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7248?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul updated SOLR-7248:
-
Comment: was deleted

(was: Why is there a timed loop? Why not a single attempt?)

 In legacyCloud=false mode we should check if the core was hosted on the same 
 node before registering it 
 

 Key: SOLR-7248
 URL: https://issues.apache.org/jira/browse/SOLR-7248
 Project: Solr
  Issue Type: Bug
Affects Versions: 5.0
Reporter: Varun Thacker
Assignee: Varun Thacker
 Fix For: Trunk, 5.1

 Attachments: SOLR-7248.patch


 Related discussion here - http://markmail.org/message/n32mxbv42hzuneyy
 Currently we check if the same coreNodeName is present in clusterstate before 
 registering it. We should make this check more stringent and allow a core to 
 be registered only if the coreNodeName is present and if it's on the same 
 node.
 This will ensure that junk replica folders lying around on old nodes don't 
 end up registering themselves when the node gets bounced.






[jira] [Commented] (SOLR-7248) In legacyCloud=false mode we should check if the core was hosted on the same node before registering it

2015-03-16 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14363158#comment-14363158
 ] 

Noble Paul commented on SOLR-7248:
--

Compare both the baseUrl and the core name.

 In legacyCloud=false mode we should check if the core was hosted on the same 
 node before registering it 
 

 Key: SOLR-7248
 URL: https://issues.apache.org/jira/browse/SOLR-7248
 Project: Solr
  Issue Type: Bug
Affects Versions: 5.0
Reporter: Varun Thacker
Assignee: Varun Thacker
 Fix For: Trunk, 5.1

 Attachments: SOLR-7248.patch


 Related discussion here - http://markmail.org/message/n32mxbv42hzuneyy
 Currently we check if the same coreNodeName is present in clusterstate before 
 registering it. We should make this check more stringent and allow a core to 
 be registered only if the coreNodeName is present and if it's on the same 
 node.
 This will ensure that junk replica folders lying around on old nodes don't 
 end up registering themselves when the node gets bounced.






[jira] [Commented] (SOLR-7247) sliceHash for compositeIdRouter is not coherent with routing

2015-03-16 Thread Shalin Shekhar Mangar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7247?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14363165#comment-14363165
 ] 

Shalin Shekhar Mangar commented on SOLR-7247:
-

See CoreAdminHandler.handleSplitAction where it does:
{code}
Object routerObj = collection.get(DOC_ROUTER); // for back-compat with Solr 4.4
if (routerObj != null && routerObj instanceof Map) {
  Map routerProps = (Map) routerObj;
  routeFieldName = (String) routerProps.get("field");
}
{code}
which is then passed to SplitIndexCommand:
{code}
SplitIndexCommand cmd = new SplitIndexCommand(req, paths, newCores, ranges, 
router, routeFieldName, splitKey);
  core.getUpdateHandler().split(cmd);
{code}

Then SolrIndexSplitter has the following in the constructor:
{code}
routeFieldName = cmd.routeFieldName;
if (routeFieldName == null) {
  field = searcher.getSchema().getUniqueKeyField();
} else  {
  field = searcher.getSchema().getField(routeFieldName);
}
{code}

So the field used by SolrIndexSplitter is the right one, whether it is the 
uniqueKey or the router.field. "idString" is just a bad name, because the 
splitter code was written first and support for router.field was added 
later. There's also a test, ShardSplitTest#splitByRouteFieldTest, that asserts 
doc counts match as expected. If you have found otherwise, can you please 
detail how you tested, what results were expected, and what was actually found?

 sliceHash for compositeIdRouter is not coherent with routing
 

 Key: SOLR-7247
 URL: https://issues.apache.org/jira/browse/SOLR-7247
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.10.3
Reporter: Paolo Cappuccini

 In CompositeIdRouter, the function sliceHash checks the routeField configured 
 for the collection.
 This makes me guess that the intended behaviour is to support an alternative 
 field to the id field for hashing documents.
 But the signature of this method is very general (it can take id, doc, or 
 params) and it is used in different ways by different functionality.
 In my opinion it should have overloads instead of weak internal logic: one 
 overload with doc and collection, and another one with id, params, and 
 collection.
 In any case, if \_route_ is not available via params, collection should be 
 mandatory, and in the case of a routeField, doc should be mandatory as well.
 This will break SplitIndex, but it will preserve data coherence.
 If I configure a routeField, I notice that DeleteCommand is broken (it passes 
 only id and params to sliceHash), as is SolrIndexSplitter (it passes only id).
 Either specifying a routeField for compositeIdRouter should be forbidden, or 
 the related functionality should be implemented so that documents can be 
 hashed based on the routeField.
 In the case of DeleteCommand, the workaround is to specify the _route_ param 
 in the request, but for index splitting no workaround is possible.
 In that case the entire document should be passed during splitting (the doc 
 parameter), or params should be built with the proper \_route_ parameter.






[JENKINS] Lucene-Solr-trunk-Linux (32bit/jdk1.8.0_40-ea-b22) - Build # 11987 - Failure!

2015-03-16 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/11987/
Java: 32bit/jdk1.8.0_40-ea-b22 -client -XX:+UseSerialGC

1 tests failed.
FAILED:  org.apache.lucene.index.TestStressNRT.test

Error Message:
MockDirectoryWrapper: cannot close: there are still open files: 
{_lm_LuceneVarGapDocFreqInterval_0.tib=1, _li.cfs=1, _lt.cfs=1, 
_ln_LuceneVarGapDocFreqInterval_0.tib=1, 
_lp_LuceneVarGapDocFreqInterval_0.tib=1, _lo.fdt=1, 
_ll_LuceneVarGapDocFreqInterval_0.doc=1, _ld.cfs=1, 
_lm_LuceneVarGapDocFreqInterval_0.doc=1, _lm.fdt=1, 
_lj_LuceneVarGapDocFreqInterval_0.doc=1, 
_lo_LuceneVarGapDocFreqInterval_0.doc=1, _lj.fdt=1, _ls.cfs=1, _lh.cfs=1, 
_lp_LuceneVarGapDocFreqInterval_0.doc=1, _lq.fdt=1, 
_lq_LuceneVarGapDocFreqInterval_0.doc=1, 
_lo_LuceneVarGapDocFreqInterval_0.tib=1, 
_lj_LuceneVarGapDocFreqInterval_0.tib=1, _lp.fdt=1, _lg.cfs=1, _ll.fdt=1, 
_ln_LuceneVarGapDocFreqInterval_0.doc=1, 
_ll_LuceneVarGapDocFreqInterval_0.tib=1, 
_lq_LuceneVarGapDocFreqInterval_0.tib=1, _ln.fdt=1, _le.cfs=1}

Stack Trace:
java.lang.RuntimeException: MockDirectoryWrapper: cannot close: there are still 
open files: {_lm_LuceneVarGapDocFreqInterval_0.tib=1, _li.cfs=1, _lt.cfs=1, 
_ln_LuceneVarGapDocFreqInterval_0.tib=1, 
_lp_LuceneVarGapDocFreqInterval_0.tib=1, _lo.fdt=1, 
_ll_LuceneVarGapDocFreqInterval_0.doc=1, _ld.cfs=1, 
_lm_LuceneVarGapDocFreqInterval_0.doc=1, _lm.fdt=1, 
_lj_LuceneVarGapDocFreqInterval_0.doc=1, 
_lo_LuceneVarGapDocFreqInterval_0.doc=1, _lj.fdt=1, _ls.cfs=1, _lh.cfs=1, 
_lp_LuceneVarGapDocFreqInterval_0.doc=1, _lq.fdt=1, 
_lq_LuceneVarGapDocFreqInterval_0.doc=1, 
_lo_LuceneVarGapDocFreqInterval_0.tib=1, 
_lj_LuceneVarGapDocFreqInterval_0.tib=1, _lp.fdt=1, _lg.cfs=1, _ll.fdt=1, 
_ln_LuceneVarGapDocFreqInterval_0.doc=1, 
_ll_LuceneVarGapDocFreqInterval_0.tib=1, 
_lq_LuceneVarGapDocFreqInterval_0.tib=1, _ln.fdt=1, _le.cfs=1}
at 
org.apache.lucene.store.MockDirectoryWrapper.close(MockDirectoryWrapper.java:747)
at org.apache.lucene.index.TestStressNRT.test(TestStressNRT.java:398)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Updated] (SOLR-7249) Solr engine misses null-values in OR null part for eDisMax parser

2015-03-16 Thread Arsen Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7249?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arsen Li updated SOLR-7249:
---
Description: 
Solr engine misses null-values in OR null part for eDisMax parser
For example, I have the following query:

((*:* AND -area:[* TO *]) OR area:[100 TO 300]) AND objectId:40105451

The full query path visible in the Solr Admin panel is

select?q=((*%3A*+AND+-area%3A%5B*+TO+*%5D)+OR+area%3A%5B100+TO+300%5D)+AND+objectId%3A40105451&wt=json&indent=true

so it should return the record if area is between 100 and 300 or area is not 
declared.

It works OK with the default parser, but when I check the edismax checkbox in 
the Solr admin panel it returns nothing (area for objectId=40105451 is null). 

The request path is the following:
select?q=((*%3A*+AND+-area%3A%5B*+TO+*%5D)+OR+area%3A%5B100+TO+300%5D)+AND+objectId%3A40105451&wt=json&indent=true&defType=edismax&stopwords=true&lowercaseOperators=true

However, when I move the query from the q field to the q.alt field it works 
OK; the query is

select?wt=json&indent=true&defType=edismax&q.alt=((*%3A*+AND+-area%3A%5B*+TO+*%5D)+OR+area%3A%5B100+TO+300%5D)+AND+objectId%3A40105451&stopwords=true&lowercaseOperators=true

Note: asterisks are not preserved by the editor; refer to 
http://stackoverflow.com/questions/29059460/solr-misses-or-null-query-when-parsing-by-edismax-parser
for more accurate syntax.

  was:
Solr engine misses null-values in OR null part for eDisMax parser
For example, I have following query:

((*:* AND -area:[* TO *]) OR area:[100 TO 300]) AND objectId:40105451

full query path visible in Solr Admin panel is

select?q=((*%3A*+AND+-area%3A%5B*+TO+*%5D)+OR+area%3A%5B100+TO+300%5D)+AND+objectId%3A40105451&wt=json&indent=true

so, it should return record if area between 100 and 300 or area not declared.

it works ok for default parser, but when I set edismax checkbox checked in 
Solr admin panel - it returns nothing (area for objectId=40105451 is null). 

Request path is following
select?q=((*%3A*+AND+-area%3A%5B*+TO+*%5D)+OR+area%3A%5B100+TO+300%5D)+AND+objectId%3A40105451&wt=json&indent=true&defType=edismax&stopwords=true&lowercaseOperators=true

However, when I move query from q field to q.alt field - it works ok, query 
is

select?wt=json&indent=true&defType=edismax&q.alt=((*%3A*+AND+-area%3A%5B*+TO+*%5D)+OR+area%3A%5B100+TO+300%5D)+AND+objectId%3A40105451&stopwords=true&lowercaseOperators=true



 Solr engine misses null-values in OR null part for eDisMax parser
 ---

 Key: SOLR-7249
 URL: https://issues.apache.org/jira/browse/SOLR-7249
 Project: Solr
  Issue Type: Bug
  Components: query parsers
Affects Versions: 4.10.3
 Environment: Windows 7
 CentOS 6.6
Reporter: Arsen Li

 Solr engine misses null-values in OR null part for eDisMax parser
 For example, I have the following query:
 ((*:* AND -area:[* TO *]) OR area:[100 TO 300]) AND objectId:40105451
 The full query path visible in the Solr Admin panel is
 select?q=((*%3A*+AND+-area%3A%5B*+TO+*%5D)+OR+area%3A%5B100+TO+300%5D)+AND+objectId%3A40105451&wt=json&indent=true
 so it should return the record if area is between 100 and 300 or area is not 
 declared.
 It works OK with the default parser, but when I check the edismax checkbox in 
 the Solr admin panel it returns nothing (area for objectId=40105451 is null). 
 The request path is the following:
 select?q=((*%3A*+AND+-area%3A%5B*+TO+*%5D)+OR+area%3A%5B100+TO+300%5D)+AND+objectId%3A40105451&wt=json&indent=true&defType=edismax&stopwords=true&lowercaseOperators=true
 However, when I move the query from the q field to the q.alt field it works 
 OK; the query is
 select?wt=json&indent=true&defType=edismax&q.alt=((*%3A*+AND+-area%3A%5B*+TO+*%5D)+OR+area%3A%5B100+TO+300%5D)+AND+objectId%3A40105451&stopwords=true&lowercaseOperators=true
 Note: asterisks are not preserved by the editor; refer to 
 http://stackoverflow.com/questions/29059460/solr-misses-or-null-query-when-parsing-by-edismax-parser
 for more accurate syntax.






[jira] [Updated] (SOLR-7248) In legacyCloud=false mode we should check if the core was hosted on the same node before registering it

2015-03-16 Thread Varun Thacker (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7248?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Thacker updated SOLR-7248:

Fix Version/s: 5.1

 In legacyCloud=false mode we should check if the core was hosted on the same 
 node before registering it 
 

 Key: SOLR-7248
 URL: https://issues.apache.org/jira/browse/SOLR-7248
 Project: Solr
  Issue Type: Bug
Affects Versions: 5.0
Reporter: Varun Thacker
Assignee: Varun Thacker
 Fix For: Trunk, 5.1, 5.0.1


 Related discussion here - http://markmail.org/message/n32mxbv42hzuneyy
 Currently we check if the same coreNodeName is present in clusterstate before 
 registering it. We should make this check more stringent and allow a core to 
 be registered only if the coreNodeName is present and if it's on the same 
 node.
 This will ensure that junk replica folders lying around on old nodes don't 
 end up registering themselves when the node gets bounced.






[jira] [Updated] (SOLR-7248) In legacyCloud=false mode we should check if the core was hosted on the same node before registering it

2015-03-16 Thread Varun Thacker (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7248?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Thacker updated SOLR-7248:

Fix Version/s: (was: 5.0.1)

 In legacyCloud=false mode we should check if the core was hosted on the same 
 node before registering it 
 

 Key: SOLR-7248
 URL: https://issues.apache.org/jira/browse/SOLR-7248
 Project: Solr
  Issue Type: Bug
Affects Versions: 5.0
Reporter: Varun Thacker
Assignee: Varun Thacker
 Fix For: Trunk, 5.1


 Related discussion here - http://markmail.org/message/n32mxbv42hzuneyy
 Currently we check if the same coreNodeName is present in clusterstate before 
 registering it. We should make this check more stringent and allow a core to 
 be registered only if the coreNodeName is present and if it's on the same 
 node.
 This will ensure that junk replica folders lying around on old nodes don't 
 end up registering themselves when the node gets bounced.






[jira] [Updated] (SOLR-7249) Solr engine misses null-values in OR null part for eDisMax parser

2015-03-16 Thread Arsen Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7249?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arsen Li updated SOLR-7249:
---
Description: 
Solr engine misses null-values in OR null part for eDisMax parser
For example, I have the following query:

((*:* AND -area:[* TO *]) OR area:[100 TO 300]) AND objectId:40105451

The full query path visible in the Solr Admin panel is

select?q=((*%3A*+AND+-area%3A%5B*+TO+*%5D)+OR+area%3A%5B100+TO+300%5D)+AND+objectId%3A40105451&wt=json&indent=true

so it should return the record if area is between 100 and 300 or area is not 
declared.

It works OK with the default parser, but when I check the edismax checkbox in 
the Solr admin panel it returns nothing (area for objectId=40105451 is null). 

The request path is the following:
select?q=((*%3A*+AND+-area%3A%5B*+TO+*%5D)+OR+area%3A%5B100+TO+300%5D)+AND+objectId%3A40105451&wt=json&indent=true&defType=edismax&stopwords=true&lowercaseOperators=true

However, when I move the query from the q field to the q.alt field it works 
OK; the query is

select?wt=json&indent=true&defType=edismax&q.alt=((*%3A*+AND+-area%3A%5B*+TO+*%5D)+OR+area%3A%5B100+TO+300%5D)+AND+objectId%3A40105451&stopwords=true&lowercaseOperators=true


  was:
Solr engine misses null-values in OR null part when eDisMax parser
For example, I have following query:

((*:* AND -area:[* TO *]) OR area:[100 TO 300]) AND objectId:40105451

full query path visible in Solr Admin panel is

select?q=((*%3A*+AND+-area%3A%5B*+TO+*%5D)+OR+area%3A%5B100+TO+300%5D)+AND+objectId%3A40105451&wt=json&indent=true

so, it should return record if area between 100 and 300 or area not declared.

it works ok for default parser, but when I set edismax checkbox checked in 
Solr admin panel - it returns nothing (area for objectId=40105451 is null). 

Request path is following
select?q=((*%3A*+AND+-area%3A%5B*+TO+*%5D)+OR+area%3A%5B100+TO+300%5D)+AND+objectId%3A40105451&wt=json&indent=true&defType=edismax&stopwords=true&lowercaseOperators=true

However, when I move query from q field to q.alt field - it works ok, query 
is

select?wt=json&indent=true&defType=edismax&q.alt=((*%3A*+AND+-area%3A%5B*+TO+*%5D)+OR+area%3A%5B100+TO+300%5D)+AND+objectId%3A40105451&stopwords=true&lowercaseOperators=true


Summary: Solr engine misses null-values in OR null part for eDisMax 
parser  (was: Solr engine misses null-values in OR null part when eDisMax 
parser)

 Solr engine misses null-values in OR null part for eDisMax parser
 ---

 Key: SOLR-7249
 URL: https://issues.apache.org/jira/browse/SOLR-7249
 Project: Solr
  Issue Type: Bug
  Components: query parsers
Affects Versions: 4.10.3
 Environment: Windows 7
 CentOS 6.6
Reporter: Arsen Li

 Solr engine misses null-values in OR null part for eDisMax parser
 For example, I have the following query:
 ((*:* AND -area:[* TO *]) OR area:[100 TO 300]) AND objectId:40105451
 The full query path visible in the Solr Admin panel is
 select?q=((*%3A*+AND+-area%3A%5B*+TO+*%5D)+OR+area%3A%5B100+TO+300%5D)+AND+objectId%3A40105451&wt=json&indent=true
 so it should return the record if area is between 100 and 300 or area is not 
 declared.
 It works OK with the default parser, but when I check the edismax checkbox in 
 the Solr admin panel it returns nothing (area for objectId=40105451 is null). 
 The request path is the following:
 select?q=((*%3A*+AND+-area%3A%5B*+TO+*%5D)+OR+area%3A%5B100+TO+300%5D)+AND+objectId%3A40105451&wt=json&indent=true&defType=edismax&stopwords=true&lowercaseOperators=true
 However, when I move the query from the q field to the q.alt field it works 
 OK; the query is
 select?wt=json&indent=true&defType=edismax&q.alt=((*%3A*+AND+-area%3A%5B*+TO+*%5D)+OR+area%3A%5B100+TO+300%5D)+AND+objectId%3A40105451&stopwords=true&lowercaseOperators=true






[jira] [Commented] (SOLR-7249) Solr engine misses null-values in OR null part for eDisMax parser

2015-03-16 Thread Jack Krupansky (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7249?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14363383#comment-14363383
 ] 

Jack Krupansky commented on SOLR-7249:
--

It's best to pursue this type of issue on the Solr user list first.

Have you added debugQuery=true to your request and looked at the parsed_query 
in the response? That shows how your query is actually interpreted.

You wrote "AND -area", but that probably should be "NOT area" or simply "-area".
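The suggestion above can be tried directly. A sketch of such a request, under assumptions not in the thread (a local Solr at localhost:8983 with a core named collection1; host, port, and core name are hypothetical):

```
http://localhost:8983/solr/collection1/select?q=((*:*+AND+-area:[*+TO+*])+OR+area:[100+TO+300])+AND+objectId:40105451&defType=edismax&debugQuery=true&wt=json
```

The debug section of the response then shows how the pure-negative clause -area:[* TO *] was actually parsed, which is typically where edismax and the default parser diverge.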

 Solr engine misses null-values in OR null part for eDisMax parser
 ---

 Key: SOLR-7249
 URL: https://issues.apache.org/jira/browse/SOLR-7249
 Project: Solr
  Issue Type: Bug
  Components: query parsers
Affects Versions: 4.10.3
 Environment: Windows 7
 CentOS 6.6
Reporter: Arsen Li

 Solr engine misses null-values in OR null part for eDisMax parser
 For example, I have following query:
 ((*:* AND -area:[* TO *]) OR area:[100 TO 300]) AND objectId:40105451
 full query path visible in Solr Admin panel is
 select?q=((*%3A*+AND+-area%3A%5B*+TO+*%5D)+OR+area%3A%5B100+TO+300%5D)+AND+objectId%3A40105451&wt=json&indent=true
 so it should return the record if area is between 100 and 300 or area is not declared.
 It works OK for the default parser, but when I check the edismax checkbox in the 
 Solr admin panel, it returns nothing (area for objectId=40105451 is null). 
 The request path is the following:
 select?q=((*%3A*+AND+-area%3A%5B*+TO+*%5D)+OR+area%3A%5B100+TO+300%5D)+AND+objectId%3A40105451&wt=json&indent=true&defType=edismax&stopwords=true&lowercaseOperators=true
 However, when I move the query from the q field to the q.alt field, it works OK;
 the query is
 select?wt=json&indent=true&defType=edismax&q.alt=((*%3A*+AND+-area%3A%5B*+TO+*%5D)+OR+area%3A%5B100+TO+300%5D)+AND+objectId%3A40105451&stopwords=true&lowercaseOperators=true
 note: asterisks are not preserved by the editor; refer to 
 http://stackoverflow.com/questions/29059460/solr-misses-or-null-query-when-parsing-by-edismax-parser
 if you need the exact syntax






[jira] [Updated] (LUCENE-6361) Optimized AnalyzingSuggester#topoSortStates()

2015-03-16 Thread Markus Heiden (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6361?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Markus Heiden updated LUCENE-6361:
--
Description: Optimized implementation of AnalyzingSuggester#topoSortStates().

 Optimized AnalyzingSuggester#topoSortStates()
 

 Key: LUCENE-6361
 URL: https://issues.apache.org/jira/browse/LUCENE-6361
 Project: Lucene - Core
  Issue Type: Improvement
  Components: core/other
Affects Versions: 5.0
Reporter: Markus Heiden
Priority: Minor

 Optimized implementation of AnalyzingSuggester#topoSortStates().
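For readers unfamiliar with the routine: topoSortStates() orders automaton states so that every transition points from an earlier state to a later one. The issue gives no code here, so the following is a generic, iterative sketch of such a sort (Kahn's algorithm) for illustration only; it is not the attached Lucene patch, and the int-list edge representation is an assumption.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

/**
 * Generic illustration (not the Lucene patch) of what a topoSortStates()
 * routine does: order automaton states 0..n-1 so that every transition goes
 * from an earlier state in the order to a later one. edges.get(s) lists the
 * target states of state s. Iterative Kahn's algorithm, avoiding the deep
 * recursion a naive DFS-based implementation can hit on long automata.
 */
public class TopoSortSketch {
    public static List<Integer> topoSort(List<List<Integer>> edges) {
        int n = edges.size();
        int[] inDegree = new int[n];
        for (List<Integer> targets : edges)
            for (int t : targets) inDegree[t]++;
        // Start from states with no incoming transitions.
        Deque<Integer> ready = new ArrayDeque<>();
        for (int s = 0; s < n; s++)
            if (inDegree[s] == 0) ready.add(s);
        List<Integer> order = new ArrayList<>(n);
        while (!ready.isEmpty()) {
            int s = ready.poll();
            order.add(s);
            // A target becomes ready once all its sources are emitted.
            for (int t : edges.get(s))
                if (--inDegree[t] == 0) ready.add(t);
        }
        if (order.size() != n)
            throw new IllegalArgumentException("automaton has a cycle");
        return order;
    }
}
```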






Re: 2B tests

2015-03-16 Thread Michael Wechner
thank you very much for clarifying

Michael

Am 16.03.15 um 17:24 schrieb Michael McCandless:
 The 2B tests are tests that confirm that Lucene's limits are working
 correctly, e.g. 2B docs, huge FSTs, many terms, many postings, etc.

 They are very slow to run and very heap-consuming so we don't run them
 by default when you run ant test.

 Look for the @Monster annotation to see all of them...

 Mike McCandless

 http://blog.mikemccandless.com


 On Sun, Mar 15, 2015 at 11:42 PM, Michael Wechner
 michael.wech...@wyona.com wrote:
 what are the 2B tests? I guess the entry point is

 lucene/core/src/test/org/apache/lucene/index/Test2BTerms.java

 or where would you start to learn more about these tests?

 Thanks

 Michael


 Am 15.03.15 um 21:58 schrieb Michael McCandless:
 I confirmed 2B tests are passing on 4.10.x.  Took 17 hours to run ...
 this is the command I run, for future reference:

   ant test -Dtests.monster=true -Dtests.heapsize=30g -Dtests.jvms=1
 -Dtests.workDir=/p/tmp

 Mike McCandless

 http://blog.mikemccandless.com




[jira] [Updated] (SOLR-7062) CLUSTERSTATUS returns a collection with state=active, even though the collection could not be created due to a missing configSet

2015-03-16 Thread Vitaliy Zhovtyuk (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7062?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vitaliy Zhovtyuk updated SOLR-7062:
---
Attachment: SOLR-7062.patch

Added a test reproducing the issue here and in SOLR-7053. The cause is the 
preRegister call in CoreContainer creating a record in ZK. Core creation fails 
with an exception, but the state in ZK remains active and inconsistent. There 
are two options to solve this: roll back the ZK data if core creation failed, 
or check that the configSet exists before creating the core and throw an 
exception. Implemented the latter check.
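The chosen fix (validate the configSet before any state is written, so a failed create leaves nothing behind) can be sketched generically. This is a hedged illustration only: the in-memory set and map below stand in for ZooKeeper, and none of these names are Solr APIs.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Set;

/**
 * Sketch of "check first, then persist": if the referenced configSet does not
 * exist, fail before writing any core state, so nothing inconsistent remains.
 * The set/map are stand-ins for ZooKeeper, not Solr's actual API.
 */
public class CreateCoreSketch {
    private final Set<String> configSets;          // configs known to "ZK"
    private final Map<String, String> coreState;   // core name -> state

    public CreateCoreSketch(Set<String> configSets) {
        this.configSets = configSets;
        this.coreState = new HashMap<>();
    }

    public void createCore(String coreName, String configSet) {
        // Validate before persisting: a missing config leaves no record behind.
        if (!configSets.contains(configSet)) {
            throw new IllegalStateException(
                "Specified config does not exist: " + configSet);
        }
        coreState.put(coreName, "active");
    }

    public Map<String, String> state() {
        return coreState;
    }
}
```

Without the up-front check, the exception would be thrown after the "active" record was written, which is exactly the inconsistency described above.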

 CLUSTERSTATUS returns a collection with state=active, even though the 
 collection could not be created due to a missing configSet
 

 Key: SOLR-7062
 URL: https://issues.apache.org/jira/browse/SOLR-7062
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 4.10.3
Reporter: Ng Agi
  Labels: solrcloud
 Attachments: SOLR-7062.patch


 A collection cannot be created if its configSet does not exist. 
 Nevertheless, a subsequent CLUSTERSTATUS CollectionAdminRequest returns this 
 collection with a state=active.
 See log below.
 {noformat}
 [INFO] Overseer Collection Processor: Get the message 
 id:/overseer/collection-queue-work/qn-000110 message:{
   "operation":"createcollection",
   "fromApi":"true",
   "name":"blueprint_media_comments",
   "collection.configName":"elastic",
   "numShards":"1",
   "property.dataDir":"data",
   "property.instanceDir":"cores/blueprint_media_comments"}
 [WARNING] OverseerCollectionProcessor.processMessage : createcollection , {
   "operation":"createcollection",
   "fromApi":"true",
   "name":"blueprint_media_comments",
   "collection.configName":"elastic",
   "numShards":"1",
   "property.dataDir":"data",
   "property.instanceDir":"cores/blueprint_media_comments"}
 [INFO] creating collections conf node /collections/blueprint_media_comments 
 [INFO] makePath: /collections/blueprint_media_comments
 [INFO] Got user-level KeeperException when processing 
 sessionid:0x14b315b0f4a000e type:create cxid:0x2f2e zxid:0x2f4 txntype:-1 
 reqpath:n/a Error Path:/overseer Error:KeeperErrorCode = NodeExists for 
 /overseer
 [INFO] LatchChildWatcher fired on path: /overseer/queue state: SyncConnected 
 type NodeChildrenChanged
 [INFO] building a new collection: blueprint_media_comments
 [INFO] Create collection blueprint_media_comments with shards [shard1]
 [INFO] A cluster state change: WatchedEvent state:SyncConnected 
 type:NodeDataChanged path:/clusterstate.json, has occurred - updating... 
 (live nodes size: 1)
 [INFO] Creating SolrCores for new collection blueprint_media_comments, 
 shardNames [shard1] , replicationFactor : 1
 [INFO] Creating shard blueprint_media_comments_shard1_replica1 as part of 
 slice shard1 of collection blueprint_media_comments on localhost:44080_solr
 [INFO] core create command 
 qt=/admin/cores&property.dataDir=data&collection.configName=elastic&name=blueprint_media_comments_shard1_replica1&action=CREATE&numShards=1&collection=blueprint_media_comments&shard=shard1&wt=javabin&version=2&property.instanceDir=cores/blueprint_media_comments
 [INFO] publishing core=blueprint_media_comments_shard1_replica1 state=down 
 collection=blueprint_media_comments
 [INFO] LatchChildWatcher fired on path: /overseer/queue state: SyncConnected 
 type NodeChildrenChanged
 [INFO] look for our core node name
 [INFO] Update state numShards=1 message={
   "core":"blueprint_media_comments_shard1_replica1",
   "roles":null,
   "base_url":"http://localhost:44080/solr",
   "node_name":"localhost:44080_solr",
   "numShards":"1",
   "state":"down",
   "shard":"shard1",
   "collection":"blueprint_media_comments",
   "operation":"state"}
 [INFO] A cluster state change: WatchedEvent state:SyncConnected 
 type:NodeDataChanged path:/clusterstate.json, has occurred - updating... 
 (live nodes size: 1)
 [INFO] waiting to find shard id in clusterstate for 
 blueprint_media_comments_shard1_replica1
 [INFO] Check for collection zkNode:blueprint_media_comments
 [INFO] Collection zkNode exists
 [INFO] Load collection config from:/collections/blueprint_media_comments
 [ERROR] Specified config does not exist in ZooKeeper:elastic
 [ERROR] Error creating core [blueprint_media_comments_shard1_replica1]: 
 Specified config does not exist in ZooKeeper:elastic
 org.apache.solr.common.cloud.ZooKeeperException: Specified config does not 
 exist in ZooKeeper:elastic
   at 
 org.apache.solr.common.cloud.ZkStateReader.readConfigName(ZkStateReader.java:160)
   at 
 org.apache.solr.cloud.CloudConfigSetService.createCoreResourceLoader(CloudConfigSetService.java:37)
   at 
 org.apache.solr.core.ConfigSetService.getConfig(ConfigSetService.java:58)
   at org.apache.solr.core.CoreContainer.create(CoreContainer.java:489)
   at 

[jira] [Updated] (LUCENE-6361) Optimized AnalyzingSuggester#topoSortStates()

2015-03-16 Thread Markus Heiden (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6361?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Markus Heiden updated LUCENE-6361:
--
Attachment: topoSortStates.patch

 Optimized AnalyzingSuggester#topoSortStates()
 

 Key: LUCENE-6361
 URL: https://issues.apache.org/jira/browse/LUCENE-6361
 Project: Lucene - Core
  Issue Type: Improvement
  Components: core/other
Affects Versions: 5.0
Reporter: Markus Heiden
Priority: Minor
 Attachments: topoSortStates.patch


 Optimized implementation of AnalyzingSuggester#topoSortStates().






[jira] [Updated] (LUCENE-5579) Spatial, enhance RPT to differentiate confirmed from non-confirmed hits, then validate with SDV

2015-03-16 Thread David Smiley (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5579?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley updated LUCENE-5579:
-
Attachment: LUCENE-5579_SPT_leaf_covered.patch

This first patch is the first step / phase, which differentiates covered leaf 
cells (cells that were within the indexed shape) from other cells which are 
approximated leaf cells.  The next phase will be augmenting the Intersects 
filter and possibly others to collect exact hits in conjunction with the fuzzy 
hits.

During the benchmarking I learned some interesting things:
* Quad tree is 50% of the size of Geohash!  This observation is for non-point 
data, since that's what's relevant to all this hit-confirmation business / leaf 
cells.  For point data, it'd be the other way around.
* Leaf pruning shaves 45%!  So much for my plans to phase that out -- it's key.
* Differentiating leaf types (Covered vs Approximated) adds 4%.
* A more restrained leaf pruning that doesn't prune covered leaves larger than 
those at the target/detail level yields 36% shaving (not as good as 45% -- 
expected).  That is... we're adding these covered leaf bytes to subsequently 
make exact results checking better so we don't want to be too liberal in 
removing them.  There's a trade-off here.

The attached patch includes some refactoring to share common logic between 
Contains & AVPTF (the base of Within, Intersects, and heatmap).  I need to add 
a configurable flag to indicate if leaves should be differentiated in the first 
place, since you might not want that, and another flag to adjust how much 
pruning of the covered leaves happens.  Both flags should be safe to change 
without any re-indexing; it could be changed whenever. Obviously if you don't 
have the covered leaf differentiation then you won't get the full benefit later 
when we have exact match collection, just partial.

 Spatial, enhance RPT to differentiate confirmed from non-confirmed hits, then 
 validate with SDV
 ---

 Key: LUCENE-5579
 URL: https://issues.apache.org/jira/browse/LUCENE-5579
 Project: Lucene - Core
  Issue Type: New Feature
  Components: modules/spatial
Reporter: David Smiley
 Attachments: LUCENE-5579_SPT_leaf_covered.patch


 If a cell is within the query shape (doesn't straddle the edge), then you can 
 be sure that all documents it matches are a confirmed hit. But if some 
 documents are only on the edge cells, then those documents could be validated 
 against SerializedDVStrategy for precise spatial search. This should be 
 *much* faster than using RPT and SerializedDVStrategy independently on the 
 same search, particularly when a lot of documents match.
 Perhaps this'll be a new RPT subclass, or maybe an optional configuration of 
 RPT.  This issue is just for the Intersects predicate, which will apply to 
 Disjoint.  Until resolved in other issues, the other predicates can be 
 handled in a naive/slow way by creating a filter that combines RPT's filter 
 and SerializedDVStrategy's filter using BitsFilteredDocIdSet.
 One thing I'm not sure of is how to expose to Lucene-spatial users the 
 underlying functionality such that they can put other query/filters 
 in-between RPT and the SerializedDVStrategy.  Maybe that'll be done by simply 
 ensuring the predicate filters have this capability and are public.
 It would be ideal to implement this capability _after_ the PrefixTree term 
 encoding is modified to differentiate edge leaf-cells from non-edge leaf 
 cells. This distinction will allow the code here to make more confirmed 
 matches.






Re: 2B tests

2015-03-16 Thread Michael McCandless
The 2B tests are tests that confirm that Lucene's limits are working
correctly, e.g. 2B docs, huge FSTs, many terms, many postings, etc.

They are very slow to run and very heap-consuming so we don't run them
by default when you run ant test.

Look for the @Monster annotation to see all of them...

Mike McCandless

http://blog.mikemccandless.com


On Sun, Mar 15, 2015 at 11:42 PM, Michael Wechner
michael.wech...@wyona.com wrote:
 what are the 2B tests? I guess the entry point is

 lucene/core/src/test/org/apache/lucene/index/Test2BTerms.java

 or where would you start to learn more about these tests?

 Thanks

 Michael


 Am 15.03.15 um 21:58 schrieb Michael McCandless:
 I confirmed 2B tests are passing on 4.10.x.  Took 17 hours to run ...
 this is the command I run, for future reference:

   ant test -Dtests.monster=true -Dtests.heapsize=30g -Dtests.jvms=1
 -Dtests.workDir=/p/tmp

 Mike McCandless

 http://blog.mikemccandless.com




[jira] [Created] (LUCENE-6361) Optimized AnalyzingSuggester#topoSortStates()

2015-03-16 Thread Markus Heiden (JIRA)
Markus Heiden created LUCENE-6361:
-

 Summary: Optimized AnalyzingSuggester#topoSortStates()
 Key: LUCENE-6361
 URL: https://issues.apache.org/jira/browse/LUCENE-6361
 Project: Lucene - Core
  Issue Type: Improvement
  Components: core/other
Affects Versions: 5.0
Reporter: Markus Heiden
Priority: Minor









[jira] [Created] (SOLR-7251) In legacyCloud=false, If a create collection fails, information about that collection should not be persisted in ZK

2015-03-16 Thread Varun Thacker (JIRA)
Varun Thacker created SOLR-7251:
---

 Summary: In legacyCloud=false, If a create collection fails, 
information about that collection should not be persisted in ZK
 Key: SOLR-7251
 URL: https://issues.apache.org/jira/browse/SOLR-7251
 Project: Solr
  Issue Type: Bug
Affects Versions: 5.0
Reporter: Varun Thacker
 Fix For: Trunk, 5.1


In legacyCloud=false mode, if a CREATE collection fails (bad config or 
something), information about that collection will still be written out to the 
state.json file.

When you try re-creating it, it says that the collection already exists.

CREATE collection should not write the collection information to state.json 
unless the creation succeeds. This is what happens when legacyCloud is not 
set to false.






Re: [JENKINS] Lucene-Solr-NightlyTests-5.x - Build # 788 - Still Failing

2015-03-16 Thread Michael McCandless
Reproduces ... I'll dig.

Mike McCandless

http://blog.mikemccandless.com


On Mon, Mar 16, 2015 at 8:40 AM, Apache Jenkins Server
jenk...@builds.apache.org wrote:
 Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-5.x/788/

 1 tests failed.
 REGRESSION:  
 org.apache.lucene.codecs.lucene49.TestLucene49NormsFormat.testUndeadNorms

 Error Message:
 expected:<0> but was:<121>

 Stack Trace:
 java.lang.AssertionError: expected:<0> but was:<121>
 at 
 __randomizedtesting.SeedInfo.seed([A05EC27B0D94C2A8:AD8B8616D87AD314]:0)
 at org.junit.Assert.fail(Assert.java:93)
 at org.junit.Assert.failNotEquals(Assert.java:647)
 at org.junit.Assert.assertEquals(Assert.java:128)
 at org.junit.Assert.assertEquals(Assert.java:472)
 at org.junit.Assert.assertEquals(Assert.java:456)
 at 
 org.apache.lucene.index.BaseNormsFormatTestCase.testUndeadNorms(BaseNormsFormatTestCase.java:397)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:606)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
 at 
 org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
 at 
 org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
 at 
 org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
 at 
 org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
 at 
 org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
 at 
 com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
 at 
 com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
 at 
 com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
 at 
 com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792)
 at 
 org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
 at 
 com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
 at 
 org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
 at 
 com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
 at 
 com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
 at 
 com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
 at 
 com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
 at 
 com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
 at 
 org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
 at 
 org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
 at 
 org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
 at 
 org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
 at 
 com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
 at 
 com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
 at java.lang.Thread.run(Thread.java:745)




 Build Log:
 [...truncated 4911 lines...]
[junit4] Suite: org.apache.lucene.codecs.lucene49.TestLucene49NormsFormat
[junit4]   2> NOTE: download the large Jenkins line-docs file by running 
 'ant get-jenkins-line-docs' in the lucene directory.
[junit4]   2> NOTE: 

[jira] [Created] (SOLR-7252) Need to sort the facet field values for a particular field in my custom order

2015-03-16 Thread Lewin Joy (JIRA)
Lewin Joy created SOLR-7252:
---

 Summary: Need to sort the facet field values for a particular 
field in my custom order
 Key: SOLR-7252
 URL: https://issues.apache.org/jira/browse/SOLR-7252
 Project: Solr
  Issue Type: Improvement
Reporter: Lewin Joy


Hi,

I have a requirement where a list of values from a facet field needs to be 
ordered by custom values. The only option I found was to order by the count for 
that facet field.

I need something like this:

Facet: Brand
   Nike (21)
   Reebok (100)
   asics (45)
   Fila (84)

Notice that the facet values are not sorted by count but instead by my custom 
sorting requirement.
We want this sorting done in the Solr layer rather than the front end, as the 
requirement keeps changing and we don't want to hard-code this sorting in the 
front end.

Please help. Is this possible to do? 
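Solr's facet.sort parameter currently supports only count and index order, so until a feature like the one requested exists, a common workaround is to reorder the returned facet counts client-side. A hedged sketch follows: the map stands in for the facet_fields section of a Solr response, and none of these class or method names are Solr APIs.

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

/**
 * Client-side sketch of the requested behavior: reorder facet counts
 * returned by Solr according to a caller-supplied custom value order.
 * Not a Solr API; the input map stands in for a facet_fields section.
 */
public class FacetCustomOrder {
    public static Map<String, Integer> reorder(Map<String, Integer> facetCounts,
                                               List<String> customOrder) {
        List<String> keys = new ArrayList<>(facetCounts.keySet());
        // Values absent from customOrder sort after the configured ones;
        // List.sort is stable, so their relative order is preserved.
        keys.sort(Comparator.comparingInt((String k) -> {
            int i = customOrder.indexOf(k);
            return i < 0 ? Integer.MAX_VALUE : i;
        }));
        Map<String, Integer> ordered = new LinkedHashMap<>();
        for (String k : keys) ordered.put(k, facetCounts.get(k));
        return ordered;
    }
}
```

Doing this client-side keeps the custom order out of the index; the trade-off is that each consumer of the response must apply it, which is why the reporter wants it in the Solr layer.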






[jira] [Commented] (SOLR-7248) In legacyCloud=false mode we should check if the core was hosted on the same node before registering it

2015-03-16 Thread Varun Thacker (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14363427#comment-14363427
 ] 

Varun Thacker commented on SOLR-7248:
-

bq. We may want to first add and improve them behind legacyCloud as an option 
and once we have confidence in them, move them to default? 

+1.

What are the things that you have in mind that we could add to make ZK the truth?

 In legacyCloud=false mode we should check if the core was hosted on the same 
 node before registering it 
 

 Key: SOLR-7248
 URL: https://issues.apache.org/jira/browse/SOLR-7248
 Project: Solr
  Issue Type: Bug
Affects Versions: 5.0
Reporter: Varun Thacker
Assignee: Varun Thacker
 Fix For: Trunk, 5.1

 Attachments: SOLR-7248.patch, SOLR-7248.patch


 Related discussion here - http://markmail.org/message/n32mxbv42hzuneyy
 Currently we check if the same coreNodeName is present in clusterstate before 
 registering it. We should make this check more stringent and allow a core to 
 be registered only if it the coreNodeName is present and if it's on the same 
 node.
 This will ensure that junk replica folders lying around on old nodes don't 
 end up registering themselves when the node gets bounced.






[jira] [Commented] (SOLR-7248) In legacyCloud=false mode we should check if the core was hosted on the same node before registering it

2015-03-16 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14363441#comment-14363441
 ] 

Mark Miller commented on SOLR-7248:
---

Just simple stuff - some of it may already happen with legacyCloud=true, but I 
know there are not enough tests for it, nor is it completely done.

Basically though, you shouldn't be able to create a core for a collection if 
that collection does not exist. So, for example, on startup, any core that is 
not part of a collection in ZK should be removed. Likewise, if ZooKeeper 
says a node should host a SolrCore and it does not, it should be created (given 
there is a leader to replicate from, or when using a shared filesystem). 

Basically, all the individual Solr instances should make the appropriate local 
adjustments to stay in sync with what ZooKeeper describes as the current 
cluster.
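The reconciliation described above (drop stray local cores, create the ones ZK assigns here) can be sketched as follows. This is an illustration only: the map and set stand in for ZooKeeper state and local disk, and none of it is Solr's actual API.

```java
import java.util.Map;
import java.util.Set;

/**
 * Sketch of per-node reconciliation against ZK-as-truth: remove local cores
 * ZooKeeper does not assign to this node, and create the ones it does.
 * Stand-in data structures only, not Solr's API.
 */
public class ReconcileSketch {
    /**
     * @param zkAssignments core name -> node that ZK says should host it
     * @param localCores    cores currently present on this node (mutated)
     * @param nodeName      this node's id
     */
    public static void reconcile(Map<String, String> zkAssignments,
                                 Set<String> localCores, String nodeName) {
        // Drop junk replicas ZK does not assign to this node.
        localCores.removeIf(core -> !nodeName.equals(zkAssignments.get(core)));
        // Create cores ZK assigns here but that are missing locally
        // (in Solr this would replicate from a leader or shared filesystem).
        for (Map.Entry<String, String> e : zkAssignments.entrySet())
            if (nodeName.equals(e.getValue()))
                localCores.add(e.getKey());
    }
}
```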

 In legacyCloud=false mode we should check if the core was hosted on the same 
 node before registering it 
 

 Key: SOLR-7248
 URL: https://issues.apache.org/jira/browse/SOLR-7248
 Project: Solr
  Issue Type: Bug
Affects Versions: 5.0
Reporter: Varun Thacker
Assignee: Varun Thacker
 Fix For: Trunk, 5.1

 Attachments: SOLR-7248.patch, SOLR-7248.patch


 Related discussion here - http://markmail.org/message/n32mxbv42hzuneyy
 Currently we check if the same coreNodeName is present in clusterstate before 
 registering it. We should make this check more stringent and allow a core to 
 be registered only if it the coreNodeName is present and if it's on the same 
 node.
 This will ensure that junk replica folders lying around on old nodes don't 
 end up registering themselves when the node gets bounced.






Re: 2B tests

2015-03-16 Thread Shawn Heisey
On 3/16/2015 10:24 AM, Michael McCandless wrote:
 Look for the @Monster annotation to see all of them...

Tangent question:  Is there any way to ask JUnit to *only* run tests
with a certain annotation, so that we could ask it to only run @Monster
tests (or some other annotation like @Weekly)?

Thanks,
Shawn
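In the Lucene build this selection is driven by system properties such as -Dtests.monster=true; at the plain-Java level, picking out tests that carry a given annotation is just reflection. A minimal sketch follows, using a local stand-in @Monster annotation (not Lucene's) purely for illustration.

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.reflect.Method;
import java.util.ArrayList;
import java.util.List;

/**
 * Minimal sketch of "run only tests carrying a given annotation" using plain
 * reflection. The @Monster annotation here is a local stand-in, not Lucene's.
 */
public class AnnotationFilterSketch {
    @Retention(RetentionPolicy.RUNTIME)
    public @interface Monster {}

    // Example test class: only monsterTest() carries the marker.
    public static class SampleTests {
        @Monster public void monsterTest() {}
        public void fastTest() {}
    }

    /** Return the names of methods in clazz annotated with @Monster. */
    public static List<String> selectMonsterTests(Class<?> clazz) {
        List<String> selected = new ArrayList<>();
        for (Method m : clazz.getDeclaredMethods())
            if (m.isAnnotationPresent(Monster.class))
                selected.add(m.getName());
        return selected;
    }
}
```

Test runners build on the same idea: JUnit category/filter support and the randomizedtesting runner used by Lucene both match annotations discovered this way.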





[jira] [Commented] (SOLR-7199) core loading should succeed irrespective of errors in loading certain components

2015-03-16 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7199?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14363480#comment-14363480
 ] 

Hoss Man commented on SOLR-7199:



I got 1 line into this patch and it already scares the shit out of me...

{noformat}
-  assertTrue(hasInitException("QueryElevationComponent"));
+  LocalSolrQueryRequest req = lrf.makeRequest("qt", "/elevate");
+  try {
+    h.query(req);
+    fail("Error expected");
+  } catch (Exception e) {
+    assertTrue(e.getMessage().contains("Error initializing plugin of type"));
+  }
+
{noformat}

...so now, instead of a broken configuration giving a clear & monitorable 
error, and the core preventing users from trying to do things while it's in a 
broken state, the only way to know that a handler isn't available is to hit it 
with a query and get a run-time error from the request?

So if I have a dozen handlers, I have to query each one with a real query to 
find out that they had an init error?  This is a terrible idea.

Solr used to work this way, way way back in the day -- and it was terrible.
We worked REALLY hard to put Pandora back in the box with SOLR-179.  We should 
not go back down this road again.

bq.  In SolrCloud, the collection is totally gone and there is no way to 
resurrect it using any commands .

this is not true -- in solr cloud, the user can fix the configs and do an 
entire collection reload.


bq. If the core is loaded , I can at least use config commands to correct those 
mistakes .

if an API broke the configs so that the core can't load and needs to be 
fixed, then we should harden those APIs so that they can't break the configs 
-- the API request itself should fail.  Alternatively: if there are other ways 
things can fail, but we want config APIs to be available to fix them, then 
those APIs should be (re)designed so that they can be used even if the core is 
down.

bq. In short, Solr should try to make the best effort to make the core 
available with whatever components available. 

I strongly disagree -- you are assuming you know what is best for _all_ users 
when they have already said "this is the config I want to run" -- if Solr can't 
run the config they have said they want to run, then Solr should fail fast and 
hard.

the *ONLY* possible way I could remotely get on board with an idea like this 
is if it wasn't the default behavior, and users had to go out of their way to 
say "if this handler/component/plugin doesn't load, please be graceful and 
still start up the rest of my plugins" ... and we already have a convention like 
this with {{lazy=true}} ... if you want to make more things support lazy=true 
as an option, that's a decent idea worth discussing, but I'm a huge -1 to this 
patch as written.



 core loading should succeed irrespective of errors in loading certain 
 components
 

 Key: SOLR-7199
 URL: https://issues.apache.org/jira/browse/SOLR-7199
 Project: Solr
  Issue Type: Improvement
Reporter: Noble Paul
 Attachments: SOLR-7199.patch


 If a certain component has some error , the core fails to load completely. 
 This was fine in standalone mode. We could always restart the node after 
 making corrections. In SolrCloud, the collection is totally gone and there is 
 no way to resurrect it using any commands . If the core is loaded , I can at 
 least use config commands to correct those mistakes .
 In short, Solr should try to make the best effort to make the core available 
 with whatever components available. 






[jira] [Commented] (SOLR-7248) In legacyCloud=false mode we should check if the core was hosted on the same node before registering it

2015-03-16 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14363337#comment-14363337
 ] 

Mark Miller commented on SOLR-7248:
---

In fact, I think we need to reconsider legacyCloud.

We have reserved the right to add ZK == truth features by default in 5 releases.

We may want to first add and improve them behind legacyCloud as an option and 
once we have confidence in them, move them to default? Or we may want to keep 
everything behind legacyCloud for all of 5. I would prefer to start doing zk == 
truth by default - when you don't support pre-configuring SolrCores (as we say 
we won't in 5.0 CHANGES.txt), most of these changes are really fixing what a 
user would perceive as a bug.

 In legacyCloud=false mode we should check if the core was hosted on the same 
 node before registering it 
 

 Key: SOLR-7248
 URL: https://issues.apache.org/jira/browse/SOLR-7248
 Project: Solr
  Issue Type: Bug
Affects Versions: 5.0
Reporter: Varun Thacker
Assignee: Varun Thacker
 Fix For: Trunk, 5.1

 Attachments: SOLR-7248.patch, SOLR-7248.patch


 Related discussion here - http://markmail.org/message/n32mxbv42hzuneyy
 Currently we check if the same coreNodeName is present in clusterstate before 
 registering it. We should make this check more stringent and allow a core to 
 be registered only if it the coreNodeName is present and if it's on the same 
 node.
 This will ensure that junk replica folders lying around on old nodes don't 
 end up registering themselves when the node gets bounced.






[GitHub] lucene-solr pull request: Spellcheck component with extendedResult...

2015-03-16 Thread wazzzy
Github user wazzzy commented on the pull request:

https://github.com/apache/lucene-solr/pull/92#issuecomment-81728884
  
extendedResults=true shows freq, but the freq of a suggestion differs from its 
origFreq. Please have a look at it: 
http://stackoverflow.com/questions/28857915/original-frequency-is-not-matching-with-suggestion-frequency-in-solr


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-NightlyTests-5.x - Build # 788 - Still Failing

2015-03-16 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-5.x/788/

1 tests failed.
REGRESSION:  
org.apache.lucene.codecs.lucene49.TestLucene49NormsFormat.testUndeadNorms

Error Message:
expected:<0> but was:<121>

Stack Trace:
java.lang.AssertionError: expected:<0> but was:<121>
at 
__randomizedtesting.SeedInfo.seed([A05EC27B0D94C2A8:AD8B8616D87AD314]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at org.junit.Assert.assertEquals(Assert.java:456)
at 
org.apache.lucene.index.BaseNormsFormatTestCase.testUndeadNorms(BaseNormsFormatTestCase.java:397)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at java.lang.Thread.run(Thread.java:745)




Build Log:
[...truncated 4911 lines...]
   [junit4] Suite: org.apache.lucene.codecs.lucene49.TestLucene49NormsFormat
   [junit4]   2> NOTE: download the large Jenkins line-docs file by running 
'ant get-jenkins-line-docs' in the lucene directory.
   [junit4]   2> NOTE: reproduce with: ant test  
-Dtestcase=TestLucene49NormsFormat -Dtests.method=testUndeadNorms 
-Dtests.seed=A05EC27B0D94C2A8 -Dtests.multiplier=2 -Dtests.nightly=true 
-Dtests.slow=true 
-Dtests.linedocsfile=/home/jenkins/lucene-data/enwiki.random.lines.txt 

[jira] [Commented] (SOLR-7249) Solr engine misses null-values in OR null part for eDisMax parser

2015-03-16 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7249?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14363391#comment-14363391
 ] 

Erick Erickson commented on SOLR-7249:
--

Please raise usage issues on the user's list before raising a JIRA, to confirm 
that what you're seeing is really a code issue. If so, _then_ raise a JIRA.

In this case, add debug=query to the request and you'll see that the parsed 
output is very different from what you expect. It is not surprising at all 
that the output from a standard query differs from that of edismax, as 
edismax is spreading the terms across a bunch of fields _as clauses_.

 Solr engine misses null-values in OR null part for eDisMax parser
 ---

 Key: SOLR-7249
 URL: https://issues.apache.org/jira/browse/SOLR-7249
 Project: Solr
  Issue Type: Bug
  Components: query parsers
Affects Versions: 4.10.3
 Environment: Windows 7
 CentOS 6.6
Reporter: Arsen Li

 Solr engine misses null-values in OR null part for eDisMax parser
 For example, I have the following query:
 ((*:* AND -area:[* TO *]) OR area:[100 TO 300]) AND objectId:40105451
 The full query path visible in the Solr Admin panel is
 select?q=((*%3A*+AND+-area%3A%5B*+TO+*%5D)+OR+area%3A%5B100+TO+300%5D)+AND+objectId%3A40105451&wt=json&indent=true
 so it should return the record if area is between 100 and 300 or area is not declared.
 It works OK for the default parser, but when I check the edismax checkbox in the 
 Solr admin panel it returns nothing (area for objectId=40105451 is null). 
 The request path is the following:
 select?q=((*%3A*+AND+-area%3A%5B*+TO+*%5D)+OR+area%3A%5B100+TO+300%5D)+AND+objectId%3A40105451&wt=json&indent=true&defType=edismax&stopwords=true&lowercaseOperators=true
 However, when I move the query from the q field to the q.alt field it works OK; 
 the query is
 select?wt=json&indent=true&defType=edismax&q.alt=((*%3A*+AND+-area%3A%5B*+TO+*%5D)+OR+area%3A%5B100+TO+300%5D)+AND+objectId%3A40105451&stopwords=true&lowercaseOperators=true
 Note: asterisks are not saved by the editor; refer to 
 http://stackoverflow.com/questions/29059460/solr-misses-or-null-query-when-parsing-by-edismax-parser
 for more accurate syntax.






Re: [JENKINS] Lucene-Solr-NightlyTests-5.x - Build # 788 - Still Failing

2015-03-16 Thread Robert Muir
It seems the test assumes that after it does forceMerge(1) it can
open a reader that has no deletes, but this is not the case.

With its mockrandom merge policy, it ends up with a single-segment
index like this:
leaf: _ak(5.1.0):c5727/988:delGen=1 maxDoc=5727, deleted=988

On Mon, Mar 16, 2015 at 9:38 AM, Michael McCandless
luc...@mikemccandless.com wrote:
 Reproduces ... I'll dig.

 Mike McCandless

 http://blog.mikemccandless.com


 On Mon, Mar 16, 2015 at 8:40 AM, Apache Jenkins Server
 jenk...@builds.apache.org wrote:
 Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-5.x/788/

 1 tests failed.
 REGRESSION:  
 org.apache.lucene.codecs.lucene49.TestLucene49NormsFormat.testUndeadNorms

 Error Message:
 expected:<0> but was:<121>

 Stack Trace:
 java.lang.AssertionError: expected:<0> but was:<121>
 at 
 __randomizedtesting.SeedInfo.seed([A05EC27B0D94C2A8:AD8B8616D87AD314]:0)
 at org.junit.Assert.fail(Assert.java:93)
 at org.junit.Assert.failNotEquals(Assert.java:647)
 at org.junit.Assert.assertEquals(Assert.java:128)
 at org.junit.Assert.assertEquals(Assert.java:472)
 at org.junit.Assert.assertEquals(Assert.java:456)
 at 
 org.apache.lucene.index.BaseNormsFormatTestCase.testUndeadNorms(BaseNormsFormatTestCase.java:397)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:606)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
 at 
 org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
 at 
 org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
 at 
 org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
 at 
 org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
 at 
 org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
 at 
 com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
 at 
 com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
 at 
 com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
 at 
 com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792)
 at 
 org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
 at 
 com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
 at 
 org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
 at 
 com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
 at 
 com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
 at 
 com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
 at 
 com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
 at 
 com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
 at 
 org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
 at 
 org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
 at 
 org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
 at 
 org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
 at 
 com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
 at 
 

[jira] [Commented] (SOLR-7143) MoreLikeThis Query Parser does not handle multiple field names

2015-03-16 Thread Anshum Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7143?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14363554#comment-14363554
 ] 

Anshum Gupta commented on SOLR-7143:


You've also added support for 
bq. {!mlt qf=field1 qf=field2}

Let's not do this specifically for this issue, but instead add wider support for 
multi-valued local params.

Also, parseLocalParamsSplitted returns a new String array, but it should instead 
just return null, like getParam(paramName). Returning a non-null value always 
ensures that the code flow never hits:
{code}
String[] qf = parseLocalParamsSplitted("qf"); // never returns null

if (qf != null) {
  // ...
} else {
  // never gets here
}
{code}
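A minimal sketch of the contract the review asks for (the helper name and the plain-map stand-in are hypothetical; the real code would work against Solr's local params, not a map):

```java
import java.util.Arrays;
import java.util.Map;

public class LocalParamsSketch {
    /**
     * Return null when the local param is absent (mirroring getParam), instead
     * of a fresh non-null array, so callers can distinguish "absent" from
     * "present" with an ordinary null check.
     */
    static String[] splitLocalParam(Map<String, String> localParams, String name) {
        String raw = localParams.get(name);
        if (raw == null) {
            return null; // absent: the else branch above becomes reachable
        }
        return raw.split("\\s*,\\s*"); // split a comma-separated value, trimming spaces
    }

    public static void main(String[] args) {
        Map<String, String> lp = Map.of("qf", "name,features");
        System.out.println(Arrays.toString(splitLocalParam(lp, "qf"))); // [name, features]
        System.out.println(splitLocalParam(lp, "boost"));               // null
    }
}
```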

 MoreLikeThis Query Parser does not handle multiple field names
 --

 Key: SOLR-7143
 URL: https://issues.apache.org/jira/browse/SOLR-7143
 Project: Solr
  Issue Type: Bug
  Components: query parsers
Affects Versions: 5.0
Reporter: Jens Wille
Assignee: Anshum Gupta
 Attachments: SOLR-7143.patch


 The newly introduced MoreLikeThis Query Parser (SOLR-6248) does not return 
 any results when supplied with multiple fields in the {{qf}} parameter.
 To reproduce within the techproducts example, compare:
 {code}
 curl 
 'http://localhost:8983/solr/techproducts/select?q=%7B!mlt+qf=name%7DMA147LL/A'
 curl 
 'http://localhost:8983/solr/techproducts/select?q=%7B!mlt+qf=features%7DMA147LL/A'
 curl 
 'http://localhost:8983/solr/techproducts/select?q=%7B!mlt+qf=name,features%7DMA147LL/A'
 {code}
 The first two queries return 8 and 5 results, respectively. The third query 
 doesn't return any results (not even the matched document).
 In contrast, the MoreLikeThis Handler works as expected (accounting for the 
 default {{mintf}} and {{mindf}} values in SimpleMLTQParser):
 {code}
 curl 
 'http://localhost:8983/solr/techproducts/mlt?q=id:MA147LL/A&mlt.fl=name&mlt.mintf=1&mlt.mindf=1'
 curl 
 'http://localhost:8983/solr/techproducts/mlt?q=id:MA147LL/A&mlt.fl=features&mlt.mintf=1&mlt.mindf=1'
 curl 
 'http://localhost:8983/solr/techproducts/mlt?q=id:MA147LL/A&mlt.fl=name,features&mlt.mintf=1&mlt.mindf=1'
 {code}
 After adding the following line to 
 {{example/techproducts/solr/techproducts/conf/solrconfig.xml}}:
 {code:language=XML}
 <requestHandler name="/mlt" class="solr.MoreLikeThisHandler" />
 {code}
 The first two queries return 7 and 4 results, respectively (excluding the 
 matched document). The third query returns 7 results, as one would expect.






[jira] [Commented] (SOLR-7199) core loading should succeed irrespective of errors in loading certain components

2015-03-16 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7199?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14363647#comment-14363647
 ] 

Mark Miller commented on SOLR-7199:
---

On the face of it, this seems like a terrible idea.

 core loading should succeed irrespective of errors in loading certain 
 components
 

 Key: SOLR-7199
 URL: https://issues.apache.org/jira/browse/SOLR-7199
 Project: Solr
  Issue Type: Improvement
Reporter: Noble Paul
 Attachments: SOLR-7199.patch


 If a certain component has some error, the core fails to load completely. 
 This was fine in standalone mode: we could always restart the node after 
 making corrections. In SolrCloud, the collection is totally gone and there is 
 no way to resurrect it using any commands. If the core is loaded, I can at 
 least use config commands to correct those mistakes.
 In short, Solr should make a best effort to keep the core available 
 with whatever components are available. 






[jira] [Commented] (SOLR-6350) Percentiles in StatsComponent

2015-03-16 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6350?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14363524#comment-14363524
 ] 

Hoss Man commented on SOLR-6350:


(FYI, haven't looked at latest patch, just replying to comments)

bq. This also shows the edge case: when a user asks for percentiles over an empty 
document set, we will give NaN.

I think we should probably return 'null' for each percentile in that case?

bq. For example, we have a test case which will test all stats combinations; I 
just exclude percentiles right now, which is quite awful. 

On the test side, we can just add a map of the input params for each stat 
(for most it will be true; for percentiles it will be the comma-separated 
string)

I'm still not really comfortable with how those inputs are parsed though ... 
ultimately i'd like to refactor all of that stuff and push it down into the 
StatsValuesFactories (so each factory has an API returning what Stats it 
supports, and failures are produced if you request an unsupported stat) -- but for 
now, maybe we can just introduce a {{boolean parseParams(StatsField)}} into 
each Stat - most Stat instances could use a default impl that would look 
something like...

{code}
/** return value of true means user is requesting this stat */
boolean parseParams(StatsField sf) {
  return sf.getLocalParams().getBool(this.getName());
}
{code}

...but percentiles could be more interesting? ...

{code}
/** return value of true means user is requesting this stat */
boolean parseParams(StatsField sf) {
  String input = sf.getLocalParams().get(this.getName());
  if (null != input) {
    sf.setTDigestOptions(input);
return true;
  }
  return false;
}
{code}

...what do you think?

bq. And another thing is I didn't do too much performance tests around this. 
There are plenty of parameters for Tdigest. I just pick a default number and 
ArrayDigest. 

Yeah, I definitely think we should make those options configurable via another 
local param {{percentileOptions=...}} (or maybe a suffix on the list of 
percentiles?)
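For intuition only, here is a self-contained sketch of what a percentile stat computes - nearest-rank over an exact sorted sample. The class and method names are mine, and the actual patch uses t-digest, an approximate streaming structure, rather than sorting the full value set:

```java
import java.util.Arrays;

public class PercentileSketch {
    /** Nearest-rank percentile over an exact, fully-materialized sample. */
    static double percentile(double[] values, double p) {
        double[] sorted = values.clone();
        Arrays.sort(sorted);
        // rank = ceil(p/100 * n), 1-based; clamp so tiny p maps to the minimum
        int rank = (int) Math.ceil(p / 100.0 * sorted.length);
        return sorted[Math.max(rank - 1, 0)];
    }

    public static void main(String[] args) {
        double[] prices = {10, 20, 30, 40, 50, 60, 70, 80, 90, 100};
        System.out.println(percentile(prices, 50)); // 50.0
        System.out.println(percentile(prices, 99)); // 100.0
    }
}
```

An empty sample has no rank to look up, which is exactly the NaN-vs-null edge case discussed above; t-digest trades this exactness for bounded memory on large or distributed result sets.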




 Percentiles in StatsComponent
 -

 Key: SOLR-6350
 URL: https://issues.apache.org/jira/browse/SOLR-6350
 Project: Solr
  Issue Type: Sub-task
Reporter: Hoss Man
 Attachments: SOLR-6350-Xu.patch, SOLR-6350-Xu.patch, 
 SOLR-6350-xu.patch, SOLR-6350-xu.patch, SOLR-6350.patch, SOLR-6350.patch


 Add an option to compute user specified percentiles when computing stats
 Example...
 {noformat}
 stats.field={!percentiles='1,2,98,99,99.999'}price
 {noformat}






[JENKINS] Lucene-Solr-Tests-5.x-Java7 - Build # 2781 - Still Failing

2015-03-16 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-5.x-Java7/2781/

3 tests failed.
FAILED:  org.apache.solr.cloud.LeaderInitiatedRecoveryOnCommitTest.test

Error Message:
IOException occured when talking to server at: 
http://127.0.0.1:23736/c8n_1x3_commits_shard1_replica3

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: IOException occured when 
talking to server at: http://127.0.0.1:23736/c8n_1x3_commits_shard1_replica3
at 
__randomizedtesting.SeedInfo.seed([BBC412B5ED0C45FD:33902D6F43F02805]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:598)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:236)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:228)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:135)
at org.apache.solr.client.solrj.SolrClient.commit(SolrClient.java:483)
at org.apache.solr.client.solrj.SolrClient.commit(SolrClient.java:464)
at 
org.apache.solr.cloud.LeaderInitiatedRecoveryOnCommitTest.oneShardTest(LeaderInitiatedRecoveryOnCommitTest.java:130)
at 
org.apache.solr.cloud.LeaderInitiatedRecoveryOnCommitTest.test(LeaderInitiatedRecoveryOnCommitTest.java:62)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:958)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:933)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Updated] (SOLR-6350) Percentiles in StatsComponent

2015-03-16 Thread Xu Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6350?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xu Zhang updated SOLR-6350:
---
Attachment: SOLR-6350.patch

I deleted my last patch. This one has my latest fix plus Hoss's change.

 Percentiles in StatsComponent
 -

 Key: SOLR-6350
 URL: https://issues.apache.org/jira/browse/SOLR-6350
 Project: Solr
  Issue Type: Sub-task
Reporter: Hoss Man
 Attachments: SOLR-6350-Xu.patch, SOLR-6350-Xu.patch, 
 SOLR-6350-xu.patch, SOLR-6350-xu.patch, SOLR-6350.patch, SOLR-6350.patch


 Add an option to compute user specified percentiles when computing stats
 Example...
 {noformat}
 stats.field={!percentiles='1,2,98,99,99.999'}price
 {noformat}






[JENKINS] Lucene-Solr-Tests-5.x-Java7 - Build # 2784 - Still Failing

2015-03-16 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-5.x-Java7/2784/

14 tests failed.
REGRESSION:  org.apache.solr.cloud.ChaosMonkeyNothingIsSafeTest.test

Error Message:
There were too many update fails - we expect it can happen, but shouldn't easily

Stack Trace:
java.lang.AssertionError: There were too many update fails - we expect it can 
happen, but shouldn't easily
at 
__randomizedtesting.SeedInfo.seed([E2598A88A36900EB:6A0DB5520D956D13]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertFalse(Assert.java:68)
at 
org.apache.solr.cloud.ChaosMonkeyNothingIsSafeTest.test(ChaosMonkeyNothingIsSafeTest.java:222)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:958)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:933)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Updated] (SOLR-6350) Percentiles in StatsComponent

2015-03-16 Thread Xu Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6350?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xu Zhang updated SOLR-6350:
---
Attachment: (was: SOLR-6350.patch)

 Percentiles in StatsComponent
 -

 Key: SOLR-6350
 URL: https://issues.apache.org/jira/browse/SOLR-6350
 Project: Solr
  Issue Type: Sub-task
Reporter: Hoss Man
 Attachments: SOLR-6350-Xu.patch, SOLR-6350-Xu.patch, 
 SOLR-6350-xu.patch, SOLR-6350-xu.patch, SOLR-6350.patch


 Add an option to compute user specified percentiles when computing stats
 Example...
 {noformat}
 stats.field={!percentiles='1,2,98,99,99.999'}price
 {noformat}






[jira] [Commented] (SOLR-6350) Percentiles in StatsComponent

2015-03-16 Thread Xu Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6350?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14364541#comment-14364541
 ] 

Xu Zhang commented on SOLR-6350:


{quote}
This also shows the edge case: when a user asks for percentiles over an empty 
document set, we will give NaN.
I think we should probably return 'null' for each percentile in that case?
{quote}
Sure, will add some test cases for this.

{quote}
I'm still not really comfortable with how those inputs are parsed though ... 
ultimately i'd like to refactor all of that stuff and push it down into the 
StatsValuesFactories (so each factory has an API returning what Stats it 
supports, and failures are produced if you request an unsupported stat) – but for 
now, maybe we can just introduce a boolean parseParams(StatsField) into each 
Stat - most Stat instances could use a default impl that would look something 
like...
{code}
/** return value of true means user is requesting this stat */
boolean parseParams(StatsField sf) {
  return sf.getLocalParams().getBool(this.getName());
}
{code}
...but percentiles could be more interesting? ...
{code}
/** return value of true means user is requesting this stat */
boolean parseParams(StatsField sf) {
  String input = sf.getLocalParams().get(this.getName());
  if (null != input) {
    sf.setTDigestOptions(input);
    return true;
  }
  return false;
}
{code}
...what do you think?
{quote}
+1 :)

{quote}
Yeah, I definitely think we should make those options configurable via another 
local param percentilOptions=... (or maybe a suffix on the list of 
percentiles?)
{quote}

I think percentilOptions would be really nice. I will improve the patch based 
on Hoss's comments, maybe tomorrow. 

 Percentiles in StatsComponent
 -

 Key: SOLR-6350
 URL: https://issues.apache.org/jira/browse/SOLR-6350
 Project: Solr
  Issue Type: Sub-task
Reporter: Hoss Man
 Attachments: SOLR-6350-Xu.patch, SOLR-6350-Xu.patch, 
 SOLR-6350-xu.patch, SOLR-6350-xu.patch, SOLR-6350.patch, SOLR-6350.patch


 Add an option to compute user specified percentiles when computing stats
 Example...
 {noformat}
 stats.field={!percentiles='1,2,98,99,99.999'}price
 {noformat}






[jira] [Updated] (SOLR-6141) Schema API: Remove fields, dynamic fields, field types and copy fields; and replace fields, dynamic fields and field types

2015-03-16 Thread Steve Rowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6141?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Rowe updated SOLR-6141:
-
Attachment: SOLR-6141.patch

This version of the patch modifies {{ZkIndexSchemaReader.updateSchema()}} to 
fully parse the remote changed schema rather than merging the local copy with 
the remote copy - now that the schema is (almost) fully addressable with the 
schema API, we can't reliably do such merges.

 Schema API: Remove fields, dynamic fields, field types and copy fields; and 
 replace fields, dynamic fields and field types
 --

 Key: SOLR-6141
 URL: https://issues.apache.org/jira/browse/SOLR-6141
 Project: Solr
  Issue Type: Sub-task
  Components: Schema and Analysis
Reporter: Christoph Strobl
Assignee: Steve Rowe
  Labels: rest_api
 Attachments: SOLR-6141.patch, SOLR-6141.patch


 It should be possible, via the bulk schema API, to remove and replace the 
 following: 
 # fields
 # dynamic fields
 # field types
 # copy field directives (note: replacement is not applicable to copy fields)
 Removing schema elements that are referred to elsewhere in the schema must be 
 guarded against:
 # Removing a field type should be disallowed when there are fields or dynamic 
 fields of that type.
 # Removing a field should be disallowed when there are copy field directives 
 that use the field as source or destination.
 # Removing a dynamic field should be disallowed when it is the only possible 
 match for a copy field source or destination.
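Following the existing add-field convention of the bulk schema API, the new remove/replace operations might be invoked with a payload along these lines (the command names and field names are an illustrative sketch, not necessarily the committed syntax):

```json
{
  "delete-field": { "name": "sell_by" },
  "replace-field": { "name": "price", "type": "tfloat", "stored": true },
  "delete-dynamic-field": { "name": "*_s" },
  "delete-field-type": { "name": "obsolete_type" },
  "delete-copy-field": { "source": "title", "dest": "text" }
}
```

Per the guard rules above, a request deleting a field type still used by a field, or a field still referenced by a copy field directive, would be rejected.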






[jira] [Commented] (SOLR-6141) Schema API: Remove fields, dynamic fields, field types and copy fields; and replace fields, dynamic fields and field types

2015-03-16 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6141?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14364592#comment-14364592
 ] 

Steve Rowe commented on SOLR-6141:
--

I'll commit this to trunk now and let it bake for a day or two before 
backporting to branch_5x.







[jira] [Commented] (SOLR-6141) Schema API: Remove fields, dynamic fields, field types and copy fields; and replace fields, dynamic fields and field types

2015-03-16 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6141?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14364594#comment-14364594
 ] 

ASF subversion and git services commented on SOLR-6141:
---

Commit 1667175 from [~steve_rowe] in branch 'dev/trunk'
[ https://svn.apache.org/r1667175 ]

SOLR-6141: Schema API: Remove fields, dynamic fields, field types and copy 
fields; and replace fields, dynamic fields and field types







[JENKINS] Lucene-Solr-Tests-5.x-Java7 - Build # 2782 - Still Failing

2015-03-16 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-5.x-Java7/2782/

3 tests failed.
FAILED:  org.apache.solr.cloud.LeaderInitiatedRecoveryOnCommitTest.test

Error Message:
IOException occured when talking to server at: 
http://127.0.0.1:37331/fp_/jd/c8n_1x3_commits_shard1_replica3

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: IOException occured when 
talking to server at: 
http://127.0.0.1:37331/fp_/jd/c8n_1x3_commits_shard1_replica3
at 
__randomizedtesting.SeedInfo.seed([1D7B9E7283995DD7:952FA1A82D65302F]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:598)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:236)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:228)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:135)
at org.apache.solr.client.solrj.SolrClient.commit(SolrClient.java:483)
at org.apache.solr.client.solrj.SolrClient.commit(SolrClient.java:464)
at 
org.apache.solr.cloud.LeaderInitiatedRecoveryOnCommitTest.oneShardTest(LeaderInitiatedRecoveryOnCommitTest.java:130)
at 
org.apache.solr.cloud.LeaderInitiatedRecoveryOnCommitTest.test(LeaderInitiatedRecoveryOnCommitTest.java:62)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:958)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:933)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)

Re: 2B tests

2015-03-16 Thread Michael McCandless
On Mon, Mar 16, 2015 at 9:32 AM, Shawn Heisey apa...@elyograg.org wrote:
 On 3/16/2015 10:24 AM, Michael McCandless wrote:
 Look for the @Monster annotation to see all of them...

 Tangent question:  Is there any way to ask JUnit to *only* run tests
 with a certain annotation, so that we could ask it to only run @Monster
 tests (or some other annotation like @Weekly)?

I would like to know too!  Dawid?

Mike McCandless

http://blog.mikemccandless.com




[jira] [Comment Edited] (SOLR-7249) Solr engine misses null-values in OR null part for eDisMax parser

2015-03-16 Thread Arsen Li (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7249?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14364071#comment-14364071
 ] 

Arsen Li edited comment on SOLR-7249 at 3/16/15 11:02 PM:
--

Jack, Erick, sorry for my ignorance. The Solr/Apache community/site is big, so 
I am a bit lost here :)

I updated the issue description by adding both parsers' debug output (only the 
most meaningful part).
Also, I am a bit confused to see both parsers showing text:area in the debug 
output (not sure whether this is expected or not).

Jack, thanks for the pointer about -area. I tried different cases with the 
same result (LuceneQParser finds the needed record, ExtendedDismaxQParser does 
not).

PS: going to raise this issue on the users list, as I should have done before.



 Solr engine misses null-values in OR null part for eDisMax parser
 ---

 Key: SOLR-7249
 URL: https://issues.apache.org/jira/browse/SOLR-7249
 Project: Solr
  Issue Type: Bug
  Components: query parsers
Affects Versions: 4.10.3
 Environment: Windows 7
 CentOS 6.6
Reporter: Arsen Li

 Solr engine misses null-values in OR null part for eDisMax parser
 For example, I have the following query:
 ((*:* AND -area:[* TO *]) OR area:[100 TO 300]) AND objectId:40105451
 full query path visible in Solr Admin panel is
 select?q=((*%3A*+AND+-area%3A%5B*+TO+*%5D)+OR+area%3A%5B100+TO+300%5D)+AND+objectId%3A40105451&wt=json&indent=true
 debug part of response is below:
 --
 rawquerystring: ((*:* AND -area) OR area:[100 TO 300]) AND 
 objectId:40105451,
 querystring: ((*:* AND -area) OR area:[100 TO 300]) AND 
 objectId:40105451,
 parsedquery: +((+MatchAllDocsQuery(*:*) -text:area) area:[100 TO 300]) 
 +objectId:40105451,
 parsedquery_toString: +((+*:* -text:area) area:[100 TO 300]) 
 +objectId: \u0001\u\u\u\u\u\u0013\u000fkk,
 explain: {
   40105451: \n14.3509865 = (MATCH) sum of:\n  0.034590688 = (MATCH) 
 product of:\n0.069181375 = (MATCH) sum of:\n  0.069181375 = (MATCH) 
 sum of:\n0.069181375 = (MATCH) MatchAllDocsQuery, product of:\n   
0.069181375 = queryNorm\n0.5 = coord(1/2)\n  14.316396 = (MATCH) 
 weight(objectId: \u0001\u\u\u\u\u\u0013\u000fkk in 
 1109978) [DefaultSimilarity], result of:\n14.316396 = 
 score(doc=1109978,freq=1.0), product of:\n  0.9952025 = queryWeight, 
 product of:\n14.38541 = idf(docFreq=1, maxDocs=1300888)\n
 0.069181375 = queryNorm\n  14.38541 = fieldWeight in 1109978, product 
 of:\n1.0 = tf(freq=1.0), with freq of:\n  1.0 = 
 termFreq=1.0\n14.38541 = idf(docFreq=1, maxDocs=1300888)\n1.0 
 = fieldNorm(doc=1109978)\n
 },
 QParser: LuceneQParser,
 ...
 --
 so, it should return the record if area is between 100 and 300 or area is not 
 declared.
 It works OK with the default parser, but when I check the edismax checkbox in 
 the Solr admin panel it returns nothing (area for objectId=40105451 is null). 
 The request path is the following:
 select?q=((*%3A*+AND+-area%3A%5B*+TO+*%5D)+OR+area%3A%5B100+TO+300%5D)+AND+objectId%3A40105451&wt=json&indent=true&defType=edismax&stopwords=true&lowercaseOperators=true
 debug response is below
 --
  rawquerystring: ((*:* AND -area) OR area:[100 TO 300]) AND 
 objectId:40105451,
 querystring: ((*:* AND -area) OR area:[100 TO 300]) AND 
 objectId:40105451,
 parsedquery: (+(+((+DisjunctionMaxQuery((text:*\\:*)) 
 -DisjunctionMaxQuery((text:area))) area:[100 TO 300]) 
 +objectId:40105451))/no_coord,
 parsedquery_toString: +(+((+(text:*\\:*) -(text:area)) area:[100 TO 
 300]) +objectId: \u0001\u\u\u\u\u\u0013\u000fkk),
 explain: {},
 QParser: ExtendedDismaxQParser,
 altquerystring: null,
 boost_queries: null,
 parsed_boost_queries: [],
 boostfuncs: null,
 --
 However, when I move the query from the q field to the q.alt field it works 
 OK; the query is
 select?wt=json&indent=true&defType=edismax&q.alt=((*%3A*+AND+-area%3A%5B*+TO+*%5D)+OR+area%3A%5B100+TO+300%5D)+AND+objectId%3A40105451&stopwords=true&lowercaseOperators=true
 note, asterisks are not saved by editor, refer to 
 

Re: [JENKINS] Lucene-Solr-NightlyTests-5.x - Build # 788 - Still Failing

2015-03-16 Thread Michael McCandless
I committed a fix ... MockRandomMP was failing to take IW's buffered
deletes into account, so it thought the index was in fact optimized.

Mike McCandless

http://blog.mikemccandless.com


On Mon, Mar 16, 2015 at 10:13 AM, Robert Muir rcm...@gmail.com wrote:
 It seems the test assumes that after it does forceMerge(1), it can
 open a reader that has no deletes, but this is not the case.

 with its mockrandom mergepolicy, it ends up with a single-segment
 index like this:
 leaf: _ak(5.1.0):c5727/988:delGen=1 maxDoc=5727, deleted=988

 On Mon, Mar 16, 2015 at 9:38 AM, Michael McCandless
 luc...@mikemccandless.com wrote:
 Reproduces ... I'll dig.

 Mike McCandless

 http://blog.mikemccandless.com


 On Mon, Mar 16, 2015 at 8:40 AM, Apache Jenkins Server
 jenk...@builds.apache.org wrote:
 Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-5.x/788/

 1 tests failed.
 REGRESSION:  
 org.apache.lucene.codecs.lucene49.TestLucene49NormsFormat.testUndeadNorms

 Error Message:
 expected:0 but was:121

 Stack Trace:
 java.lang.AssertionError: expected:0 but was:121
 at 
 __randomizedtesting.SeedInfo.seed([A05EC27B0D94C2A8:AD8B8616D87AD314]:0)
 at org.junit.Assert.fail(Assert.java:93)
 at org.junit.Assert.failNotEquals(Assert.java:647)
 at org.junit.Assert.assertEquals(Assert.java:128)
 at org.junit.Assert.assertEquals(Assert.java:472)
 at org.junit.Assert.assertEquals(Assert.java:456)
 at 
 org.apache.lucene.index.BaseNormsFormatTestCase.testUndeadNorms(BaseNormsFormatTestCase.java:397)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:606)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
 at 
 org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
 at 
 org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
 at 
 org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
 at 
 org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
 at 
 org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
 at 
 com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
 at 
 com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
 at 
 com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
 at 
 com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792)
 at 
 org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
 at 
 com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
 at 
 org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
 at 
 com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
 at 
 com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
 at 
 com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
 at 
 com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
 at 
 com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
 at 
 org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
 at 
 org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
 at 
 org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
 at 
 

[jira] [Commented] (SOLR-7214) JSON Facet API

2015-03-16 Thread Grant Ingersoll (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7214?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14364060#comment-14364060
 ] 

Grant Ingersoll commented on SOLR-7214:
---

How does this all fit with the work many have been doing on stats, facets, etc? 
 Is there a way we can merge these features/functionality such that users don't 
have completely separate APIs for this stuff?

e.g. SOLR-6348 and its children?  

 JSON Facet API
 --

 Key: SOLR-7214
 URL: https://issues.apache.org/jira/browse/SOLR-7214
 Project: Solr
  Issue Type: New Feature
Reporter: Yonik Seeley
 Attachments: SOLR-7214.patch


 Overview is here: http://heliosearch.org/json-facet-api/
 The structured nature of nested sub-facets are more naturally expressed in a 
 nested structure like JSON rather than the flat structure that normal query 
 parameters provide.
 Goals:
 - First class JSON support
 - Easier programmatic construction of complex nested facet commands
 - Support a much more canonical response format that is easier for clients to 
 parse
 - First class analytics support
 - Support a cleaner way to do distributed faceting
 - Support better integration with other search features
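 For orientation, the overview page linked above describes nested facet 
 requests of roughly this shape, passed as a json.facet request parameter 
 (the field and facet names here are made up; the authoritative syntax is on 
 the linked page):

```json
{
  "categories": {
    "terms": {
      "field": "cat",
      "limit": 5,
      "facet": { "avg_price": "avg(price)" }
    }
  }
}
```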






RE: 2B tests

2015-03-16 Thread Uwe Schindler
2B = Two Billion... docs, terms, postings,...

Uwe

-
Uwe Schindler
H.-H.-Meier-Allee 63, D-28213 Bremen
http://www.thetaphi.de
eMail: u...@thetaphi.de


 -Original Message-
 From: Michael Wechner [mailto:michael.wech...@wyona.com]
 Sent: Monday, March 16, 2015 7:43 AM
 To: dev@lucene.apache.org
 Subject: Re: 2B tests
 
 what are the 2B tests? I guess the entry point is
 
 lucene/core/src/test/org/apache/lucene/index/Test2BTerms.java
 
 or where would you start to learn more about these tests?
 
 Thanks
 
 Michael
 
 
 Am 15.03.15 um 21:58 schrieb Michael McCandless:
  I confirmed 2B tests are passing on 4.10.x.  Took 17 hours to run ...
  this is the command I run, for future reference:
 
ant test -Dtests.monster=true -Dtests.heapsize=30g -Dtests.jvms=1
  -Dtests.workDir=/p/tmp
 
  Mike McCandless
 
  http://blog.mikemccandless.com
 
  -
  To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For
  additional commands, e-mail: dev-h...@lucene.apache.org
 
 
 
 
 -
 To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional
 commands, e-mail: dev-h...@lucene.apache.org





[jira] [Created] (SOLR-7254) NullPointerException thrown in the QueryComponent

2015-03-16 Thread Hrishikesh Gadre (JIRA)
Hrishikesh Gadre created SOLR-7254:
--

 Summary: NullPointerException thrown in the QueryComponent
 Key: SOLR-7254
 URL: https://issues.apache.org/jira/browse/SOLR-7254
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.10.3
Reporter: Hrishikesh Gadre
Priority: Minor


In case of a distributed search, if we pass invalid query parameters (e.g. a 
negative start value), Solr returns an internal server error (HTTP 500 
response) due to the following NullPointerException:

{
  responseHeader:{
status:500,
QTime:6,
params:{
  indent:true,
  start:-1,
  q:*:*,
  wt:json}},
  error:{
trace:java.lang.NullPointerException\n\tat 
org.apache.solr.handler.component.QueryComponent.mergeIds(QueryComponent.java:1031)\n\tat
 
org.apache.solr.handler.component.QueryComponent.handleRegularResponses(QueryComponent.java:715)\n\tat
 
org.apache.solr.handler.component.QueryComponent.handleResponses(QueryComponent.java:694)\n\tat
 
org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:324)\n\tat
 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)\n\tat
 org.apache.solr.core.SolrCore.execute(SolrCore.java:1984)\n\tat 
org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:818)\n\tat
 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:422)\n\tat
 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:211)\n\tat
 
org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)\n\tat
 
org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)\n\tat
 
org.apache.solr.servlet.SolrHadoopAuthenticationFilter$2.doFilter(SolrHadoopAuthenticationFilter.java:272)\n\tat
 
org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:592)\n\tat
 
org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticationFilter.doFilter(DelegationTokenAuthenticationFilter.java:277)\n\tat
 
org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:555)\n\tat
 
org.apache.solr.servlet.SolrHadoopAuthenticationFilter.doFilter(SolrHadoopAuthenticationFilter.java:277)\n\tat
 
org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)\n\tat
 
org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)\n\tat
 org.apache.solr.servlet.HostnameFilter.doFilter(HostnameFilter.java:86)\n\tat 
org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)\n\tat
 
org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)\n\tat
 
org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)\n\tat
 
org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)\n\tat
 
org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127)\n\tat
 
org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:103)\n\tat
 
org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)\n\tat
 
org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:293)\n\tat
 
org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:861)\n\tat
 
org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:606)\n\tat
 org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:489)\n\tat 
java.lang.Thread.run(Thread.java:745)\n,
code:500}}

The root cause of this error is that input validation is missing in the 
distributed query path.

(Non distributed version)
https://github.com/apache/lucene-solr/blob/817303840fce547a1557e330e93e5a8ac0618f34/solr/core/src/java/org/apache/solr/handler/component/QueryComponent.java#L284

(Distributed version)
https://github.com/apache/lucene-solr/blob/817303840fce547a1557e330e93e5a8ac0618f34/solr/core/src/java/org/apache/solr/handler/component/QueryComponent.java#L691
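A minimal sketch of the kind of guard that could be added before merging shard responses, so the request fails fast with a client error instead of an NPE (the class and method names are illustrative, not the actual Solr fix):

```java
// Hypothetical validation helper: reject negative paging parameters up front
// so a distributed request is refused with a clear message rather than
// failing later with a NullPointerException (HTTP 500).
public class PagingParamsValidator {

    /** Throws IllegalArgumentException for invalid start/rows values. */
    public static void validate(int start, int rows) {
        if (start < 0) {
            throw new IllegalArgumentException("'start' must be >= 0, got " + start);
        }
        if (rows < 0) {
            throw new IllegalArgumentException("'rows' must be >= 0, got " + rows);
        }
    }

    public static void main(String[] args) {
        validate(0, 10); // valid: no exception
        boolean rejected = false;
        try {
            validate(-1, 10); // the failing case from this report (start=-1)
        } catch (IllegalArgumentException expected) {
            rejected = true;
        }
        System.out.println(rejected ? "rejected" : "accepted"); // prints "rejected"
    }
}
```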







[jira] [Created] (SOLR-7253) sort, limit for PivotFacet stats

2015-03-16 Thread David Donohue (JIRA)
David Donohue created SOLR-7253:
---

 Summary: sort, limit for PivotFacet stats
 Key: SOLR-7253
 URL: https://issues.apache.org/jira/browse/SOLR-7253
 Project: Solr
  Issue Type: New Feature
Affects Versions: 5.0
Reporter: David Donohue


Solr 5.0 added stats to its pivot facet component, so that this query

facet=true&stats=true&stats.field={!tag=t1}income&facet.pivot={!stats=t1}tags&facet.prefix=university

returns stats for each facet value that starts with university. Very fast.
It returns these stats: min, max, count, missing, sum, sumOfSquares, mean, 
stddev.

The issue is that it returns these stats for ALL facets matching the query 
criteria, with no ability to limit or sort, e.g. to return the top 20 earning 
universities.

If this functionality can be added, then Solr could deliver a more complete 
analytics capability, rivaling more complex SQL aggregate queries.






[jira] [Commented] (SOLR-7214) JSON Facet API

2015-03-16 Thread Crawdaddy (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7214?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14363910#comment-14363910
 ] 

Crawdaddy commented on SOLR-7214:
-

Thank you, Yonik, for your generosity in bringing this back into Solr!!  Hope 
to see this in a near-term 5.x release.









[jira] [Resolved] (SOLR-6892) Improve the way update processors are used and make it simpler

2015-03-16 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6892?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul resolved SOLR-6892.
--
   Resolution: Fixed
Fix Version/s: 5.1
   Trunk

 Improve the way update processors are used and make it simpler
 --

 Key: SOLR-6892
 URL: https://issues.apache.org/jira/browse/SOLR-6892
 Project: Solr
  Issue Type: Improvement
Reporter: Noble Paul
Assignee: Noble Paul
 Fix For: Trunk, 5.1

 Attachments: SOLR-6892.patch


 The current update processor chain is rather cumbersome, and we should be able 
 to use the update processors without a chain.
 The scope of this ticket is: 
 * A new tag {{updateProcessor}} becomes a top-level tag, and it will be 
 equivalent to the {{processor}} tag inside 
 {{updateRequestProcessorChain}}. The only difference is that it should 
 require a {{name}} attribute. The {{updateProcessorChain}} tag will 
 continue to exist, and it should be possible to define {{processor}} inside 
 it as well. It should also be possible to reference a named URP in a chain.
 * processors will be added in the request by their names. Example: 
 {{processor=a,b,c}}, {{post-processor=x,y,z}}. This creates an implicit 
 chain of the named URPs in the order they are specified
 * There are multiple request parameters supported by update requests 
 ** processor : This chain is executed at the leader right before 
 LogUpdateProcessorFactory + DistributedUpdateProcessorFactory. The replicas 
 will not execute this. 
 ** post-processor : This chain is executed right before the 
 RunUpdateProcessor on all replicas, including the leader
 * What happens to the update.chain parameter? {{update.chain}} will be 
 honored. The implicit chain is created by merging both the update.chain and 
 the request params. {{post-processor}} will be inserted right before the 
 {{RunUpdateProcessorFactory}} in the chain, and {{processor}} will be 
 inserted right before 
 LogUpdateProcessorFactory,DistributedUpdateProcessorFactory
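As a rough sketch of the request parameters described above (the processor names a, b, c, x, y, z and the chain name "mychain" are placeholders for named updateProcessor definitions, not real URPs), an update request would carry:

```python
from urllib.parse import urlencode

# Sketch of the update-request parameters from the description above.
params = [
    ("processor", "a,b,c"),        # implicit chain run at the leader, before
                                   # Log/DistributedUpdateProcessorFactory
    ("post-processor", "x,y,z"),   # run right before RunUpdateProcessor on
                                   # all replicas, including the leader
    ("update.chain", "mychain"),   # still honored; merged with the above
]
query = urlencode(params)
print(query)
```

The comma-separated names form the implicit chain in the order given, merged with any update.chain the request also names.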






[jira] [Updated] (SOLR-7253) sort, limit for PivotFacet stats

2015-03-16 Thread David Donohue (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7253?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Donohue updated SOLR-7253:

Description: 
Solr 5.0 added stats to its pivot facet component, so that this query 

{code}
facet=true&stats=true&stats.field={!tag=t1}income&facet.pivot={!stats=t1}tags&facet.prefix=university
{code}

returns stats for each facet with a tag that starts with university.  Very 
fast.
It returns these stats:  min,max,count,missing,sum,sumOfSquares,mean,stddev

The issue is that it returns these stats for ALL facets matching the query 
criteria.  No ability to limit or sort, e.g. to return the top 20 earning 
universities.

If this functionality can be added, then Solr could deliver on a more complete 
ability for analytics, rivaling more complex SQL aggregate queries.

  was:
Solr 5.0 added stats to its pivot facet component, so that this query 

facet=true&stats=true&stats.field={!tag=t1}income&facet.pivot={!stats=t1}tags&facet.prefix=university

returns stats for each facet with a tag that starts with university.  Very 
fast.
It returns these stats:  min,max,count,missing,sum,sumOfSquares,mean,stddev

The issue is that it returns these stats for ALL facets matching the query 
criteria.  No ability to limit or sort, e.g. to return the top 20 earning 
universities.

If this functionality can be added, then Solr could deliver on a more complete 
ability for analytics, rivaling more complex SQL aggregate queries.


 sort, limit for PivotFacet stats
 

 Key: SOLR-7253
 URL: https://issues.apache.org/jira/browse/SOLR-7253
 Project: Solr
  Issue Type: New Feature
Affects Versions: 5.0
Reporter: David Donohue
  Labels: Analytics, Facets

 Solr 5.0 added stats to its pivot facet component, so that this query 
 {code}
 facet=true&stats=true&stats.field={!tag=t1}income&facet.pivot={!stats=t1}tags&facet.prefix=university
 {code}
 returns stats for each facet with a tag that starts with university.  Very 
 fast.
 It returns these stats:  min,max,count,missing,sum,sumOfSquares,mean,stddev
 The issue is that it returns these stats for ALL facets matching the query 
 criteria.  No ability to limit or sort, e.g. to return the top 20 earning 
 universities.
 If this functionality can be added, then Solr could deliver on a more 
 complete ability for analytics, rivaling more complex SQL aggregate queries.
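The pivot-stats query from the description can be rebuilt as a parameter string; this sketch only assembles the parameters (the local-param syntax is taken verbatim from the description):

```python
# Rebuild the stats-over-pivot-facets query string from the description.
# Values are left unescaped so the {!tag=...} local-param syntax stays
# readable; a real client would URL-encode them.
params = [
    ("facet", "true"),
    ("stats", "true"),
    ("stats.field", "{!tag=t1}income"),
    ("facet.pivot", "{!stats=t1}tags"),
    ("facet.prefix", "university"),
]
query = "&".join(k + "=" + v for k, v in params)
print(query)
```

The {!tag=t1}/{!stats=t1} pair links the stats.field to the pivot facet, so each pivot bucket carries the income statistics.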






[nag] [VOTE] Release PyLucene 4.10.4-1

2015-03-16 Thread Andi Vajda


Two more PMC votes are needed for this release, please :-)

Andi..

On Mon, 9 Mar 2015, Andi Vajda wrote:



The PyLucene 4.10.4-1 release tracking the recent release of Apache Lucene 
4.10.4 is ready.


A release candidate is available from:
http://people.apache.org/~vajda/staging_area/

A list of changes in this release can be seen at:
http://svn.apache.org/repos/asf/lucene/pylucene/branches/pylucene_4_10/CHANGES

PyLucene 4.10.4 is built with JCC 2.21 included in these release artifacts.

A list of Lucene Java changes can be seen at:
http://svn.apache.org/repos/asf/lucene/dev/tags/lucene_solr_4_10_4/lucene/CHANGES.txt

Please vote to release these artifacts as PyLucene 4.10.4-1.
Anyone interested in this release can and should vote !

Thanks !

Andi..

ps: the KEYS file for PyLucene release signing is at:
http://svn.apache.org/repos/asf/lucene/pylucene/dist/KEYS
http://people.apache.org/~vajda/staging_area/KEYS

pps: here is my +1



[jira] [Commented] (SOLR-7214) JSON Facet API

2015-03-16 Thread Timothy Potter (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7214?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14364063#comment-14364063
 ] 

Timothy Potter commented on SOLR-7214:
--

I added a TODO to the Ref guide page for this to be documented for the 5.1 
release.

 JSON Facet API
 --

 Key: SOLR-7214
 URL: https://issues.apache.org/jira/browse/SOLR-7214
 Project: Solr
  Issue Type: New Feature
Reporter: Yonik Seeley
 Attachments: SOLR-7214.patch


 Overview is here: http://heliosearch.org/json-facet-api/
 The structured nature of nested sub-facets is more naturally expressed in a 
 nested structure like JSON rather than the flat structure that normal query 
 parameters provide.
 Goals:
 - First class JSON support
 - Easier programmatic construction of complex nested facet commands
 - Support a much more canonical response format that is easier for clients to 
 parse
 - First class analytics support
 - Support a cleaner way to do distributed faceting
 - Support better integration with other search features






[jira] [Commented] (SOLR-7109) Indexing threads stuck during network partition can put leader into down state

2015-03-16 Thread Shalin Shekhar Mangar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7109?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14362787#comment-14362787
 ] 

Shalin Shekhar Mangar commented on SOLR-7109:
-

Thanks for fixing the Java7 error, Yonik!

 Indexing threads stuck during network partition can put leader into down state
 --

 Key: SOLR-7109
 URL: https://issues.apache.org/jira/browse/SOLR-7109
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 4.10.3, 5.0
Reporter: Shalin Shekhar Mangar
Assignee: Shalin Shekhar Mangar
 Fix For: Trunk, 5.1

 Attachments: SOLR-7109.patch, SOLR-7109.patch


 I found this recently while running some Jepsen tests. I found that some 
 threads get stuck on zk operations for a long time in the 
 ZkController.updateLeaderInitiatedRecoveryState method, and when they wake up 
 they go ahead with setting the LIR state to down. But in the meantime, a new 
 leader has been elected, and sometimes you'd get into a state where the leader 
 itself is put into recovery, causing the shard to reject all writes.






Re: 2B tests

2015-03-16 Thread Michael Wechner
what are the 2B tests? I guess the entry point is

lucene/core/src/test/org/apache/lucene/index/Test2BTerms.java

or where would you start to learn more about these tests?

Thanks

Michael


Am 15.03.15 um 21:58 schrieb Michael McCandless:
 I confirmed 2B tests are passing on 4.10.x.  Took 17 hours to run ...
 this is the command I run, for future reference:

   ant test -Dtests.monster=true -Dtests.heapsize=30g -Dtests.jvms=1
 -Dtests.workDir=/p/tmp

 Mike McCandless

 http://blog.mikemccandless.com








[jira] [Commented] (SOLR-7199) core loading should succeed irrespective of errors in loading certain components

2015-03-16 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7199?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14363717#comment-14363717
 ] 

Erick Erickson commented on SOLR-7199:
--

-1 as well. "Fail early, fail often" is a good motto here. Now, for any one of a 
number of errors, I don't know anything's wrong until a query happens to hit it, 
which may not happen for hours/days/whatever. Then I have to go back to the logs 
and try to figure out what went wrong... and they may have rolled over.

I could deal with letting people specify that they don't care whether a particular 
component loads or not, but I'm lukewarm on that as well for the most part. 

This seems like a cure worse than the disease. Rather than have the core come 
up anyway, what about some kind of supervisory code that _will_ come up 
independent of cores to handle this use case? (I admit I really haven't looked 
into the details, though; possibly this is a nonsensical idea.)

 core loading should succeed irrespective of errors in loading certain 
 components
 

 Key: SOLR-7199
 URL: https://issues.apache.org/jira/browse/SOLR-7199
 Project: Solr
  Issue Type: Improvement
Reporter: Noble Paul
 Attachments: SOLR-7199.patch


 If a certain component has some error, the core fails to load completely. 
 This was fine in standalone mode: we could always restart the node after 
 making corrections. In SolrCloud, the collection is totally gone and there is 
 no way to resurrect it using any commands. If the core is loaded, I can at 
 least use config commands to correct those mistakes.
 In short, Solr should make a best effort to make the core available 
 with whatever components are available. 






[jira] [Commented] (LUCENE-6361) Optimized AnalyzingSuggester#topoSortStates()

2015-03-16 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6361?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14363657#comment-14363657
 ] 

Robert Muir commented on LUCENE-6361:
-

This patch looks good to me. [~mikemccand] can you take a look too?

 Optimized AnalyzingSuggester#topoSortStates()
 

 Key: LUCENE-6361
 URL: https://issues.apache.org/jira/browse/LUCENE-6361
 Project: Lucene - Core
  Issue Type: Improvement
  Components: core/other
Affects Versions: 5.0
Reporter: Markus Heiden
Priority: Minor
 Attachments: topoSortStates.patch


 Optimized implementation of AnalyzingSuggester#topoSortStates().






[jira] [Commented] (SOLR-7199) core loading should succeed irrespective of errors in loading certain components

2015-03-16 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7199?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14363727#comment-14363727
 ] 

Noble Paul commented on SOLR-7199:
--

OK, hold your horses. We discussed this, and I'll put up a new description and 
proposed solution, and explain why we need it.

 core loading should succeed irrespective of errors in loading certain 
 components
 

 Key: SOLR-7199
 URL: https://issues.apache.org/jira/browse/SOLR-7199
 Project: Solr
  Issue Type: Improvement
Reporter: Noble Paul
 Attachments: SOLR-7199.patch


 If a certain component has some error, the core fails to load completely. 
 This was fine in standalone mode: we could always restart the node after 
 making corrections. In SolrCloud, the collection is totally gone and there is 
 no way to resurrect it using any commands. If the core is loaded, I can at 
 least use config commands to correct those mistakes.
 In short, Solr should make a best effort to make the core available 
 with whatever components are available. 






[jira] [Comment Edited] (SOLR-7199) core loading should succeed irrespective of errors in loading certain components

2015-03-16 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7199?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14363727#comment-14363727
 ] 

Noble Paul edited comment on SOLR-7199 at 3/16/15 7:08 PM:
---

OK, hold your horses. We discussed the problem, and I'll put up a new 
description and proposed solution, and explain why we need this.


was (Author: noble.paul):
OK , hold your horses. We discussed and I'll put up a new description and 
proposed solution and why we need this

 core loading should succeed irrespective of errors in loading certain 
 components
 

 Key: SOLR-7199
 URL: https://issues.apache.org/jira/browse/SOLR-7199
 Project: Solr
  Issue Type: Improvement
Reporter: Noble Paul
 Attachments: SOLR-7199.patch


 If a certain component has some error, the core fails to load completely. 
 This was fine in standalone mode: we could always restart the node after 
 making corrections. In SolrCloud, the collection is totally gone and there is 
 no way to resurrect it using any commands. If the core is loaded, I can at 
 least use config commands to correct those mistakes.
 In short, Solr should make a best effort to make the core available 
 with whatever components are available. 






[jira] [Comment Edited] (SOLR-7247) sliceHash for compositeIdRouter is not coherent with routing

2015-03-16 Thread Paolo Cappuccini (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7247?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14363757#comment-14363757
 ] 

Paolo Cappuccini edited comment on SOLR-7247 at 3/16/15 7:34 PM:
-

Thanks Shalin! 
I finally understood the splitting behaviour better.
I did further investigation and found the real reason for my problems.

After splitting there is obviously a new distribution of docs across shards.
The reason I didn't find documents is in RealTimeGetComponent.java 
(line 366):

Slice slice = coll.getRouter().getTargetSlice(id, null, params, coll);

Here nobody considers the routeField, and it would be impossible to 
consider it: at that point the value of the route field is not available.

Also, the sliceHash function in CompositeIdRouter doesn't consider the \_route_ 
field in params. So the document is lost, and passing an explicit \_route_ field 
is not useful.

Roughly the same behaviour occurs in DistributedUpdateProcessor in the case of 
processDelete.

The behaviour is so strange that perhaps I am completely wrong!!

I think that CompositeIdRouter.sliceHash could have explicit 
overloads to hash by doc/collection or to hash by value (like in 
IndexSplitter).

getTargetSlice itself should have the same overloads (currently it has the same 
ambiguous signature as sliceHash).

RealtimeGetComponent can only think by id (and not by routeField), so it 
should consider all active slices if a routeField is specified for the 
collection; a good optimization for this case could be to consider the 
\_route_ param to route to a specific shard.

For processDelete any solution looks very complicated, but in general, if I'm 
not wrong, routeField breaks something.



was (Author: cappuccini):
Thanks Shalin! 
I finally understood better splitting behaviour.
I did further investigation and i found the real reason of my problems.

After splitting i have obvsiouly new distribution of docs in shards.
The reason because i didn't find documents is in RealTimeGetComponent.java 
(line 366) :

Slice slice = coll.getRouter().getTargetSlice(id, null, params, coll);

In this case nobody consider routeField and it should be impossible to 
consider at that time is not possible to get the value of route field.

Also the sliceHash function in CompositeIdRouter doesn't consider _route_ field 
in params. So the document is lost and passing explicit _route_ field is not 
useful.

Around same behaviour is in DsitributedUpdateProcessor in case of 
processDelete.

The behaviour is so strange that perhaps i am completely wrong!!

I think that CompositeIdRouter.sliceHash sliceHash could have explicit 
overloads to hash by doc/collection or hash by value (like in 
IndexSplitter)

getTargetSlice itself should have same overloads (actually it has same ambigous 
signature then sliceHash ).

RealtimeGetComponent can only think by id (and not by routeField) so it 
should consider all active slices if routeField is specified; a good 
optimization for these case could be to consider _route_ param to route 
specific shard.

About processDelete any solution look very complicate but in general, if i'm 
not wrong, routeField break something.


 sliceHash for compositeIdRouter is not coherent with routing
 

 Key: SOLR-7247
 URL: https://issues.apache.org/jira/browse/SOLR-7247
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.10.3
Reporter: Paolo Cappuccini

 In CompositeIdRouter the function sliceHash checks the routeField configured 
 for the collection.
 This makes me guess that the intended behaviour is to support an alternative 
 field to the id field for hashing documents.
 But the signature of this method is very general (it can take id, doc or 
 params) and it is used in different ways by different functionality.
 In my opinion it should have overloads instead of weak internal logic: one 
 overload with doc and collection, and another one with id, params and 
 collection.
 In any case, if \_route_ is not available via params, collection 
 should be mandatory, and in the case of a routeField, doc should also be 
 mandatory.
 This will break SplitIndex, but it will preserve the coherence of the data.
 If I configure a routeField, I notice that the DeleteCommand is broken (it 
 passes only id and params to sliceHash), as is SolrIndexSplitter (it passes 
 only id).
 It should be forbidden to specify a routeField for compositeIdRouter, or the 
 related functionality should be implemented so that documents can be hashed 
 based on the routeField.
 In the case of the DeleteCommand the workaround is to specify the _route_ param 
 in the request, but in the case of index splitting no workaround is possible.
 In that case the entire document (the doc parameter) should be passed during 
 splitting, or params should be built with a proper \_route_ parameter.
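The delete workaround mentioned above amounts to carrying an explicit _route_ param on the request. A minimal, hypothetical sketch (the collection name, document id and route value are made-up illustrative values, not part of this issue):

```python
from urllib.parse import urlencode

# Hypothetical request URL carrying an explicit _route_ param so the router
# can pick the correct slice; "collection1", "doc42" and "tenantA" are
# made-up values for illustration only.
params = {"id": "doc42", "_route_": "tenantA", "wt": "json"}
path = "/solr/collection1/update?" + urlencode(params)
print(path)
```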





[jira] [Updated] (SOLR-7172) addreplica API fails with incorrect error msg cannot create collection

2015-03-16 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7172?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man updated SOLR-7172:
---
Summary: addreplica API fails with incorrect error msg "cannot create 
collection"  (was: addreplica API can fail with "cannot create collection" 
error)

 addreplica API fails with incorrect error msg cannot create collection
 

 Key: SOLR-7172
 URL: https://issues.apache.org/jira/browse/SOLR-7172
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 4.10.3, 5.0
Reporter: Shalin Shekhar Mangar
Assignee: Shalin Shekhar Mangar
 Fix For: Trunk, 5.1


 Steps to reproduce:
 # Create 1 node solr cloud cluster
 # Create collection 'test' with 
 numShards=1&replicationFactor=1&maxShardsPerNode=1
 # Call addreplica API:
 {code}
 http://localhost:8983/solr/admin/collections?action=addreplica&collection=test&shard=shard1&wt=json
  
 {code}
 API fails with the following response:
 {code}
 {
   "responseHeader": {
     "status": 400,
     "QTime": 9
   },
   "Operation ADDREPLICA caused exception:": "org.apache.solr.common.SolrException:org.apache.solr.common.SolrException: Cannot create collection test. No live Solr-instances",
   "exception": {
     "msg": "Cannot create collection test. No live Solr-instances",
     "rspCode": 400
   },
   "error": {
     "msg": "Cannot create collection test. No live Solr-instances",
     "code": 400
   }
 }
 {code}






[jira] [Commented] (SOLR-7247) sliceHash for compositeIdRouter is not coherent with routing

2015-03-16 Thread Paolo Cappuccini (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7247?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14363757#comment-14363757
 ] 

Paolo Cappuccini commented on SOLR-7247:


Thanks Shalin! 
I finally understood the splitting behaviour better.
I did further investigation and found the real reason for my problems.

After splitting there is obviously a new distribution of docs across shards.
The reason I didn't find documents is in RealTimeGetComponent.java 
(line 366):

Slice slice = coll.getRouter().getTargetSlice(id, null, params, coll);

Here nobody considers the routeField, and it would be impossible to 
consider it: at that point the value of the route field is not available.

Also, the sliceHash function in CompositeIdRouter doesn't consider the _route_ 
field in params. So the document is lost, and passing an explicit _route_ field 
is not useful.

Roughly the same behaviour occurs in DistributedUpdateProcessor in the case of 
processDelete.

The behaviour is so strange that perhaps I am completely wrong!!

I think that CompositeIdRouter.sliceHash could have explicit 
overloads to hash by doc/collection or to hash by value (like in 
IndexSplitter).

getTargetSlice itself should have the same overloads (currently it has the same 
ambiguous signature as sliceHash).

RealtimeGetComponent can only think by id (and not by routeField), so it 
should consider all active slices if a routeField is specified; a good 
optimization for this case could be to consider the _route_ param to route 
to a specific shard.

For processDelete any solution looks very complicated, but in general, if I'm 
not wrong, routeField breaks something.


 sliceHash for compositeIdRouter is not coherent with routing
 

 Key: SOLR-7247
 URL: https://issues.apache.org/jira/browse/SOLR-7247
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.10.3
Reporter: Paolo Cappuccini

 In CompositeIdRouter the function sliceHash checks the routeField configured 
 for the collection.
 This makes me guess that the intended behaviour is to support an alternative 
 field to the id field for hashing documents.
 But the signature of this method is very general (it can take id, doc or 
 params) and it is used in different ways by different functionality.
 In my opinion it should have overloads instead of weak internal logic: one 
 overload with doc and collection, and another one with id, params and 
 collection.
 In any case, if \_route_ is not available via params, collection 
 should be mandatory, and in the case of a routeField, doc should also be 
 mandatory.
 This will break SplitIndex, but it will preserve the coherence of the data.
 If I configure a routeField, I notice that the DeleteCommand is broken (it 
 passes only id and params to sliceHash), as is SolrIndexSplitter (it passes 
 only id).
 It should be forbidden to specify a routeField for compositeIdRouter, or the 
 related functionality should be implemented so that documents can be hashed 
 based on the routeField.
 In the case of the DeleteCommand the workaround is to specify the _route_ param 
 in the request, but in the case of index splitting no workaround is possible.
 In that case the entire document (the doc parameter) should be passed during 
 splitting, or params should be built with a proper \_route_ parameter.






[jira] [Commented] (SOLR-7172) addreplica API can fail with cannot create collection error

2015-03-16 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7172?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14363788#comment-14363788
 ] 

Hoss Man commented on SOLR-7172:


I'm confused by the problem statement here.

Is the problem that there is a bug in addreplica which needs to be fixed, or is 
the problem that when addreplica fails, it fails with a confusing/misleading 
error message about creating a collection?

 addreplica API can fail with cannot create collection error
 -

 Key: SOLR-7172
 URL: https://issues.apache.org/jira/browse/SOLR-7172
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 4.10.3, 5.0
Reporter: Shalin Shekhar Mangar
Assignee: Shalin Shekhar Mangar
 Fix For: Trunk, 5.1


 Steps to reproduce:
 # Create 1 node solr cloud cluster
 # Create collection 'test' with 
 numShards=1&replicationFactor=1&maxShardsPerNode=1
 # Call addreplica API:
 {code}
 http://localhost:8983/solr/admin/collections?action=addreplica&collection=test&shard=shard1&wt=json
  
 {code}
 API fails with the following response:
 {code}
 {
   "responseHeader": {
     "status": 400,
     "QTime": 9
   },
   "Operation ADDREPLICA caused exception:": "org.apache.solr.common.SolrException:org.apache.solr.common.SolrException: Cannot create collection test. No live Solr-instances",
   "exception": {
     "msg": "Cannot create collection test. No live Solr-instances",
     "rspCode": 400
   },
   "error": {
     "msg": "Cannot create collection test. No live Solr-instances",
     "code": 400
   }
 }
 {code}






[jira] [Commented] (SOLR-7172) addreplica API can fail with cannot create collection error

2015-03-16 Thread Shalin Shekhar Mangar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7172?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14363799#comment-14363799
 ] 

Shalin Shekhar Mangar commented on SOLR-7172:
-

bq. Is the problem that there is a bug in addreplica which needs to be fixed, or 
is the problem that when addreplica fails, it fails with a confusing/misleading 
error message about creating a collection?

It's the latter. The error message is wrong/confusing.

 addreplica API can fail with cannot create collection error
 -

 Key: SOLR-7172
 URL: https://issues.apache.org/jira/browse/SOLR-7172
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 4.10.3, 5.0
Reporter: Shalin Shekhar Mangar
Assignee: Shalin Shekhar Mangar
 Fix For: Trunk, 5.1


 Steps to reproduce:
 # Create 1 node solr cloud cluster
 # Create collection 'test' with 
 numShards=1&replicationFactor=1&maxShardsPerNode=1
 # Call addreplica API:
 {code}
 http://localhost:8983/solr/admin/collections?action=addreplica&collection=test&shard=shard1&wt=json
  
 {code}
 API fails with the following response:
 {code}
 {
   "responseHeader": {
     "status": 400,
     "QTime": 9
   },
   "Operation ADDREPLICA caused exception:": "org.apache.solr.common.SolrException:org.apache.solr.common.SolrException: Cannot create collection test. No live Solr-instances",
   "exception": {
     "msg": "Cannot create collection test. No live Solr-instances",
     "rspCode": 400
   },
   "error": {
     "msg": "Cannot create collection test. No live Solr-instances",
     "code": 400
   }
 }
 {code}






[JENKINS] Lucene-Solr-Tests-5.x-Java7 - Build # 2779 - Still Failing

2015-03-16 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-5.x-Java7/2779/

4 tests failed.
FAILED:  org.apache.solr.cloud.LeaderInitiatedRecoveryOnCommitTest.test

Error Message:
IOException occured when talking to server at: 
http://127.0.0.1:52939/c8n_1x3_commits_shard1_replica1

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: IOException occured when 
talking to server at: http://127.0.0.1:52939/c8n_1x3_commits_shard1_replica1
at 
__randomizedtesting.SeedInfo.seed([1B461D94AB7C0B18:9312224E058066E0]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:598)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:236)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:228)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:135)
at org.apache.solr.client.solrj.SolrClient.commit(SolrClient.java:483)
at org.apache.solr.client.solrj.SolrClient.commit(SolrClient.java:464)
at 
org.apache.solr.cloud.LeaderInitiatedRecoveryOnCommitTest.oneShardTest(LeaderInitiatedRecoveryOnCommitTest.java:130)
at 
org.apache.solr.cloud.LeaderInitiatedRecoveryOnCommitTest.test(LeaderInitiatedRecoveryOnCommitTest.java:62)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:958)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:933)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at