[jira] [Commented] (LUCENE-5205) SpanQueryParser with recursion, analysis and syntax very similar to classic QueryParser

2015-09-08 Thread Modassar Ather (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14734335#comment-14734335
 ] 

Modassar Ather commented on LUCENE-5205:


Hi [~talli...@mitre.org]

There is a document with the following content, which is indexed and stored:
{noformat}about 2% growth{noformat}

If the following query is searched and the matched terms are highlighted, then 
all three terms of the document are highlighted.
Query: {noformat}"(growth* [term 2]) (about*)"~2{noformat}
Highlighted text: {noformat}about 2% growth{noformat}

I tried to debug and found that scorer.getFieldWeightedSpanTerms() has entries 
for all the terms at PositionSpan (0, 2).
Please help me understand:
* Is this an issue?
* Why are "term" and "2" present in the span terms, although they should not 
match a document?
* Why is "2%" getting highlighted?

Regards,
Modassar
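For anyone following along, here is a toy position-based sketch (my own simplification, not Lucene's actual span or highlighter code) of why an unordered "near" clause with slop can end up covering all three tokens of "about 2% growth", so that a position-based highlighter marks everything inside the covering span, including "2%":

```java
// Toy sketch (NOT Lucene's implementation): unordered "near" matching over
// token positions. For the stored text "about 2% growth", assume tokens at
// positions 0:"about", 1:"2", 2:"growth". A near query over "about*" and
// "growth*" with slop 2 can produce a span covering positions 0..2, so a
// position-based highlighter may mark every token inside that span.
public class UnorderedNearSketch {
    /** True if two term positions are within the given slop, in either
     *  order (simplified unordered SpanNear semantics). */
    static boolean withinSlop(int posA, int posB, int slop) {
        return Math.abs(posA - posB) - 1 <= slop;
    }

    /** The covering span [min, max] a highlighter would mark. */
    static int[] coveringSpan(int posA, int posB) {
        return new int[] { Math.min(posA, posB), Math.max(posA, posB) };
    }

    public static void main(String[] args) {
        int about = 0, growth = 2;   // positions in "about 2% growth"
        System.out.println(withinSlop(about, growth, 2));  // matches
        int[] span = coveringSpan(about, growth);
        System.out.println(span[0] + ".." + span[1]);      // 0..2 covers "2%"
    }
}
```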

> SpanQueryParser with recursion, analysis and syntax very similar to classic 
> QueryParser
> ---
>
> Key: LUCENE-5205
> URL: https://issues.apache.org/jira/browse/LUCENE-5205
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/queryparser
>Reporter: Tim Allison
>  Labels: patch
> Attachments: LUCENE-5205-cleanup-tests.patch, 
> LUCENE-5205-date-pkg-prvt.patch, LUCENE-5205.patch.gz, LUCENE-5205.patch.gz, 
> LUCENE-5205_dateTestReInitPkgPrvt.patch, 
> LUCENE-5205_improve_stop_word_handling.patch, 
> LUCENE-5205_smallTestMods.patch, LUCENE_5205.patch, 
> SpanQueryParser_v1.patch.gz, patch.txt
>
>
> This parser extends QueryParserBase and includes functionality from:
> * Classic QueryParser: most of its syntax.
> * SurroundQueryParser: recursive parsing for "near" and "not" clauses.
> * ComplexPhraseQueryParser: can handle "near" queries that include multiterms 
> (wildcard, fuzzy, regex, prefix).
> * AnalyzingQueryParser: has an option to analyze multiterms.
> At a high level, there's a first pass BooleanQuery/field parser and then a 
> span query parser handles all terminal nodes and phrases.
> Same as classic syntax:
> * term: test 
> * fuzzy: roam~0.8, roam~2
> * wildcard: te?t, test*, t*st
> * regex: /\[mb\]oat/
> * phrase: "jakarta apache"
> * phrase with slop: "jakarta apache"~3
> * default "or" clause: jakarta apache
> * grouping "or" clause: (jakarta apache)
> * boolean and +/-: (lucene OR apache) NOT jakarta; +lucene +apache -jakarta
> * multiple fields: title:lucene author:hatcher
>  
> Main additions in SpanQueryParser syntax vs. classic syntax:
> * Can require "in order" for phrases with slop with the \~> operator: 
> "jakarta apache"\~>3
> * Can specify "not near": "fever bieber"!\~3,10 ::
> find "fever" but not if "bieber" appears within 3 words before or 10 
> words after it.
> * Fully recursive phrasal queries with \[ and \]; as in: \[\[jakarta 
> apache\]~3 lucene\]\~>4 :: 
> find "jakarta" within 3 words of "apache", and that hit has to be within 
> four words before "lucene"
> * Can also use \[\] for single level phrasal queries instead of " as in: 
> \[jakarta apache\]
> * Can use "or grouping" clauses in phrasal queries: "apache (lucene solr)"\~3 
> :: find "apache" and then either "lucene" or "solr" within three words.
> * Can use multiterms in phrasal queries: "jakarta\~1 ap*che"\~2
> * Did I mention full recursion: \[\[jakarta\~1 ap*che\]\~2 (solr~ 
> /l\[ou\]\+\[cs\]\[en\]\+/)]\~10 :: Find something like "jakarta" within two 
> words of "ap*che" and that hit has to be within ten words of something like 
> "solr" or that "lucene" regex.
> * Can require at least x number of hits at the boolean level: apache AND 
> (lucene solr tika)~2
> * Can use negative only query: -jakarta :: Find all docs that don't contain 
> "jakarta"
> * Can use an edit distance > 2 for fuzzy query via SlowFuzzyQuery (beware of 
> potential performance issues!).
> Trivial additions:
> * Can specify prefix length in fuzzy queries: jakarta~1,2 (edit distance = 1, 
> prefix = 2)
> * Can specify Optimal String Alignment (OSA) vs. Levenshtein for distance 
> <= 2: jakarta~1 (OSA) vs. jakarta~>1 (Levenshtein)
> This parser can be very useful for concordance tasks (see also LUCENE-5317 
> and LUCENE-5318) and for analytical search.  
> Until LUCENE-2878 is closed, this might have a use for fans of SpanQuery.
> Most of the documentation is in the javadoc for SpanQueryParser.
> Any and all feedback is welcome.  Thank you.
> Until this is added to the Lucene project, I've added a standalone 
> lucene-addons repo (with jars compiled for the latest stable build of Lucene) 
> on [github|https://github.com/tballison/lucene-addons].
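The OSA-vs-Levenshtein option mentioned above can be illustrated with textbook dynamic-programming implementations of the two distances (this is generic illustrative code, not Lucene's automaton-based fuzzy matching): OSA additionally counts an adjacent transposition as a single edit, so a swapped letter pair costs 1 instead of 2.

```java
// Illustrative sketch of the two fuzzy distances the parser can select
// between: plain Levenshtein vs. Optimal String Alignment (OSA), which
// also counts adjacent transpositions as one edit. Textbook code, not
// Lucene's implementation.
public class EditDistances {
    static int levenshtein(String a, String b) {
        int[][] d = new int[a.length() + 1][b.length() + 1];
        for (int i = 0; i <= a.length(); i++) d[i][0] = i;
        for (int j = 0; j <= b.length(); j++) d[0][j] = j;
        for (int i = 1; i <= a.length(); i++)
            for (int j = 1; j <= b.length(); j++) {
                int cost = a.charAt(i - 1) == b.charAt(j - 1) ? 0 : 1;
                d[i][j] = Math.min(Math.min(d[i - 1][j] + 1, d[i][j - 1] + 1),
                                   d[i - 1][j - 1] + cost);
            }
        return d[a.length()][b.length()];
    }

    static int osa(String a, String b) {
        int[][] d = new int[a.length() + 1][b.length() + 1];
        for (int i = 0; i <= a.length(); i++) d[i][0] = i;
        for (int j = 0; j <= b.length(); j++) d[0][j] = j;
        for (int i = 1; i <= a.length(); i++)
            for (int j = 1; j <= b.length(); j++) {
                int cost = a.charAt(i - 1) == b.charAt(j - 1) ? 0 : 1;
                d[i][j] = Math.min(Math.min(d[i - 1][j] + 1, d[i][j - 1] + 1),
                                   d[i - 1][j - 1] + cost);
                // extra case: adjacent transposition counts as one edit
                if (i > 1 && j > 1 && a.charAt(i - 1) == b.charAt(j - 2)
                          && a.charAt(i - 2) == b.charAt(j - 1))
                    d[i][j] = Math.min(d[i][j], d[i - 2][j - 2] + 1);
            }
        return d[a.length()][b.length()];
    }

    public static void main(String[] args) {
        // "jakarta" vs "jaakrta": one adjacent transposition ("ka" -> "ak").
        System.out.println(levenshtein("jakarta", "jaakrta")); // 2
        System.out.println(osa("jakarta", "jaakrta"));         // 1
    }
}
```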



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: 

[jira] [Updated] (LUCENE-6783) FuzzyLikeThisQuery.rewrite should not have side effects

2015-09-08 Thread Adrien Grand (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6783?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrien Grand updated LUCENE-6783:
-
Attachment: LUCENE-6783.patch

Here is a patch.
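The pattern the issue title asks for can be sketched in miniature (hypothetical classes for illustration, not the actual FuzzyLikeThisQuery code): rewrite() should build and return a new query rather than mutating fields on `this`, so calling it repeatedly, or from cached instances, is safe.

```java
// Minimal sketch of a side-effect-free rewrite (hypothetical classes, not
// the real FuzzyLikeThisQuery): the bad version accumulates state on every
// call; the good version leaves `this` untouched and returns a new query.
public class RewriteSketch {
    interface Query { Query rewrite(); }

    /** Bad: mutates internal state during rewrite. */
    static class MutatingQuery implements Query {
        final java.util.List<String> expandedTerms = new java.util.ArrayList<>();
        public Query rewrite() {
            expandedTerms.add("expanded"); // side effect: grows on every call
            return this;
        }
    }

    /** Good: rewrite leaves this query untouched and returns a new one. */
    static class ImmutableQuery implements Query {
        final java.util.List<String> terms;
        ImmutableQuery(java.util.List<String> terms) {
            this.terms = java.util.List.copyOf(terms);
        }
        public Query rewrite() {
            java.util.List<String> expanded = new java.util.ArrayList<>(terms);
            expanded.add("expanded");
            return new ImmutableQuery(expanded); // original is unchanged
        }
    }
}
```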

> FuzzyLikeThisQuery.rewrite should not have side effects
> ---
>
> Key: LUCENE-6783
> URL: https://issues.apache.org/jira/browse/LUCENE-6783
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Adrien Grand
>Assignee: Adrien Grand
>Priority: Minor
> Fix For: 5.4
>
> Attachments: LUCENE-6783.patch
>
>







[jira] [Created] (LUCENE-6783) FuzzyLikeThisQuery.rewrite should not have side effects

2015-09-08 Thread Adrien Grand (JIRA)
Adrien Grand created LUCENE-6783:


 Summary: FuzzyLikeThisQuery.rewrite should not have side effects
 Key: LUCENE-6783
 URL: https://issues.apache.org/jira/browse/LUCENE-6783
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Adrien Grand
Assignee: Adrien Grand
Priority: Minor
 Fix For: 5.4









Re: [JENKINS] Lucene-Solr-5.x-Linux (64bit/jdk1.8.0_60) - Build # 13870 - Failure!

2015-09-08 Thread Michael McCandless
Looks like https://issues.apache.org/jira/browse/LUCENE-6629 (Jenkins
jobs randomly hang and hit 7200 second timeout) ... I tested that
repro line and it runs quickly for me.  I'll update the issue...

Mike McCandless

http://blog.mikemccandless.com


On Mon, Sep 7, 2015 at 4:57 PM, Policeman Jenkins Server
 wrote:
> Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/13870/
> Java: 64bit/jdk1.8.0_60 -XX:+UseCompressedOops -XX:+UseParallelGC
>
> 2 tests failed.
> FAILED:  
> junit.framework.TestSuite.org.apache.lucene.analysis.util.TestCharArrayMap
>
> Error Message:
> Suite timeout exceeded (>= 720 msec).
>
> Stack Trace:
> java.lang.Exception: Suite timeout exceeded (>= 720 msec).
> at __randomizedtesting.SeedInfo.seed([9E827E74BAD348B7]:0)
>
>
> FAILED:  org.apache.lucene.analysis.util.TestCharArrayMap.testCharArrayMap
>
> Error Message:
> Test abandoned because suite timeout was reached.
>
> Stack Trace:
> java.lang.Exception: Test abandoned because suite timeout was reached.
> at __randomizedtesting.SeedInfo.seed([9E827E74BAD348B7]:0)
>
>
>
>
> Build Log:
> [...truncated 3204 lines...]
>[junit4] Suite: org.apache.lucene.analysis.util.TestCharArrayMap
>[junit4]   2> сеп 07, 2015 11:56:46 AM 
> com.carrotsearch.randomizedtesting.ThreadLeakControl$2 evaluate
>[junit4]   2> WARNING: Suite execution timed out: 
> org.apache.lucene.analysis.util.TestCharArrayMap
>[junit4]   2>1) Thread[id=1, name=main, state=WAITING, group=main]
>[junit4]   2> at java.lang.Object.wait(Native Method)
>[junit4]   2> at java.lang.Thread.join(Thread.java:1245)
>[junit4]   2> at java.lang.Thread.join(Thread.java:1319)
>[junit4]   2> at 
> com.carrotsearch.randomizedtesting.RandomizedRunner.runSuite(RandomizedRunner.java:578)
>[junit4]   2> at 
> com.carrotsearch.randomizedtesting.RandomizedRunner.run(RandomizedRunner.java:444)
>[junit4]   2> at 
> com.carrotsearch.ant.tasks.junit4.slave.SlaveMain.execute(SlaveMain.java:199)
>[junit4]   2> at 
> com.carrotsearch.ant.tasks.junit4.slave.SlaveMain.main(SlaveMain.java:310)
>[junit4]   2> at 
> com.carrotsearch.ant.tasks.junit4.slave.SlaveMainSafe.main(SlaveMainSafe.java:12)
>[junit4]   2>2) Thread[id=11, name=JUnit4-serializer-daemon, 
> state=TIMED_WAITING, group=main]
>[junit4]   2> at java.lang.Thread.sleep(Native Method)
>[junit4]   2> at 
> com.carrotsearch.ant.tasks.junit4.events.Serializer$1.run(Serializer.java:47)
>[junit4]   2>3) Thread[id=840, 
> name=TEST-TestCharArrayMap.testCharArrayMap-seed#[9E827E74BAD348B7], 
> state=RUNNABLE, group=TGRP-TestCharArrayMap]
>[junit4]   2> at 
> org.apache.lucene.analysis.util.CharArrayMap.getSlot(CharArrayMap.java:166)
>[junit4]   2> at 
> org.apache.lucene.analysis.util.CharArrayMap.get(CharArrayMap.java:128)
>[junit4]   2> at 
> org.apache.lucene.analysis.util.TestCharArrayMap.doRandom(TestCharArrayMap.java:52)
>[junit4]   2> at 
> org.apache.lucene.analysis.util.TestCharArrayMap.testCharArrayMap(TestCharArrayMap.java:62)
>[junit4]   2> at 
> sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>[junit4]   2> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>[junit4]   2> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>[junit4]   2> at java.lang.reflect.Method.invoke(Method.java:497)
>[junit4]   2> at 
> com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
>[junit4]   2> at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
>[junit4]   2> at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
>[junit4]   2> at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
>[junit4]   2> at 
> org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
>[junit4]   2> at 
> org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
>[junit4]   2> at 
> org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
>[junit4]   2> at 
> org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
>[junit4]   2> at 
> org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
>[junit4]   2> at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
>[junit4]   2> at 
> com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
>   

[jira] [Commented] (LUCENE-6629) Random 7200 seconds build timeouts / infinite loops in Lucene tests?

2015-09-08 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6629?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14734503#comment-14734503
 ] 

Michael McCandless commented on LUCENE-6629:


Another one: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/13870/

{noformat}
Suite: org.apache.lucene.analysis.util.TestCharArrayMap
   [junit4]   2> сеп 07, 2015 11:56:46 AM 
com.carrotsearch.randomizedtesting.ThreadLeakControl$2 evaluate
   [junit4]   2> WARNING: Suite execution timed out: 
org.apache.lucene.analysis.util.TestCharArrayMap
   [junit4]   2>1) Thread[id=1, name=main, state=WAITING, group=main]
   [junit4]   2> at java.lang.Object.wait(Native Method)
   [junit4]   2> at java.lang.Thread.join(Thread.java:1245)
   [junit4]   2> at java.lang.Thread.join(Thread.java:1319)
   [junit4]   2> at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSuite(RandomizedRunner.java:578)
   [junit4]   2> at 
com.carrotsearch.randomizedtesting.RandomizedRunner.run(RandomizedRunner.java:444)
   [junit4]   2> at 
com.carrotsearch.ant.tasks.junit4.slave.SlaveMain.execute(SlaveMain.java:199)
   [junit4]   2> at 
com.carrotsearch.ant.tasks.junit4.slave.SlaveMain.main(SlaveMain.java:310)
   [junit4]   2> at 
com.carrotsearch.ant.tasks.junit4.slave.SlaveMainSafe.main(SlaveMainSafe.java:12)
   [junit4]   2>2) Thread[id=11, name=JUnit4-serializer-daemon, 
state=TIMED_WAITING, group=main]
   [junit4]   2> at java.lang.Thread.sleep(Native Method)
   [junit4]   2> at 
com.carrotsearch.ant.tasks.junit4.events.Serializer$1.run(Serializer.java:47)
   [junit4]   2>3) Thread[id=840, 
name=TEST-TestCharArrayMap.testCharArrayMap-seed#[9E827E74BAD348B7], 
state=RUNNABLE, group=TGRP-TestCharArrayMap]
   [junit4]   2> at 
org.apache.lucene.analysis.util.CharArrayMap.getSlot(CharArrayMap.java:166)
   [junit4]   2> at 
org.apache.lucene.analysis.util.CharArrayMap.get(CharArrayMap.java:128)
   [junit4]   2> at 
org.apache.lucene.analysis.util.TestCharArrayMap.doRandom(TestCharArrayMap.java:52)
   [junit4]   2> at 
org.apache.lucene.analysis.util.TestCharArrayMap.testCharArrayMap(TestCharArrayMap.java:62)
   [junit4]   2> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
Method)
   [junit4]   2> at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
   [junit4]   2> at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   [junit4]   2> at java.lang.reflect.Method.invoke(Method.java:497)
   [junit4]   2> at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
   [junit4]   2> at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
   [junit4]   2> at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
   [junit4]   2> at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
   [junit4]   2> at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
   [junit4]   2> at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
   [junit4]   2> at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
   [junit4]   2> at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
   [junit4]   2> at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
   [junit4]   2> at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
   [junit4]   2> at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
   [junit4]   2> at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
   [junit4]   2> at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
   [junit4]   2> at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845)
   [junit4]   2> at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747)
   [junit4]   2> at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781)
   [junit4]   2> at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792)
   [junit4]   2> at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
   [junit4]   2> at 

[jira] [Commented] (LUCENE-6783) FuzzyLikeThisQuery.rewrite should not have side effects

2015-09-08 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6783?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14734515#comment-14734515
 ] 

ASF subversion and git services commented on LUCENE-6783:
-

Commit 1701754 from [~jpountz] in branch 'dev/trunk'
[ https://svn.apache.org/r1701754 ]

LUCENE-6783: Removed side effects from FuzzyLikeThisQuery.rewrite.

> FuzzyLikeThisQuery.rewrite should not have side effects
> ---
>
> Key: LUCENE-6783
> URL: https://issues.apache.org/jira/browse/LUCENE-6783
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Adrien Grand
>Assignee: Adrien Grand
>Priority: Minor
> Fix For: 5.4
>
> Attachments: LUCENE-6783.patch
>
>







[jira] [Commented] (LUCENE-6773) Always flatten nested conjunctions

2015-09-08 Thread Ryan Ernst (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6773?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14734291#comment-14734291
 ] 

Ryan Ernst commented on LUCENE-6773:


+1

Can you add a test to TestConjunctionDISI?

> Always flatten nested conjunctions
> --
>
> Key: LUCENE-6773
> URL: https://issues.apache.org/jira/browse/LUCENE-6773
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Assignee: Adrien Grand
>Priority: Minor
> Attachments: LUCENE-6773.patch
>
>
> LUCENE-6585 started the work to flatten nested conjunctions, but this only 
> works with approximations. Otherwise a ConjunctionScorer is passed to 
> ConjunctionDISI.intersect, and is not flattened since it is not an instance 
> of ConjunctionDISI.






[jira] [Commented] (LUCENE-6590) Explore different ways to apply boosts

2015-09-08 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6590?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14734409#comment-14734409
 ] 

ASF subversion and git services commented on LUCENE-6590:
-

Commit 1701742 from [~jpountz] in branch 'dev/trunk'
[ https://svn.apache.org/r1701742 ]

LUCENE-6590: Fix BooleanQuery to not propagate query boosts twice.

> Explore different ways to apply boosts
> --
>
> Key: LUCENE-6590
> URL: https://issues.apache.org/jira/browse/LUCENE-6590
> Project: Lucene - Core
>  Issue Type: Wish
>Reporter: Adrien Grand
>Priority: Minor
> Attachments: LUCENE-6590.patch, LUCENE-6590.patch, LUCENE-6590.patch, 
> LUCENE-6590.patch, LUCENE-6590.patch, LUCENE-6590.patch, LUCENE-6590.patch
>
>
> Follow-up from LUCENE-6570: the fact that all queries are mutable in order to 
> allow for applying a boost raises issues since it makes queries bad cache 
> keys since their hashcode can change anytime. We could just document that 
> queries should never be modified after they have gone through IndexSearcher 
> but it would be even better if the API made queries impossible to mutate at 
> all.
> I think there are two main options:
>  - either replace "void setBoost(boost)" with something like "Query 
> withBoost(boost)" which would return a clone that has a different boost
>  - or move boost handling outside of Query, for instance we could have a 
> (immutable) query impl that would be dedicated to applying boosts, that 
> queries that need to change boosts at rewrite time (such as BooleanQuery) 
> would use as a wrapper.
> The latter idea is from Robert and I like it a lot given how often I either 
> introduced or found a bug which was due to the boost parameter being ignored. 
> Maybe there are other options, but I think this is worth exploring.
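The second option above can be sketched as an immutable wrapper that carries the boost outside the wrapped query (a hypothetical simplification of the BoostQuery idea, not Lucene's final API): the wrapped query never changes, so its hashCode stays stable and it remains a good cache key.

```java
// Sketch of an immutable boost wrapper (hypothetical, simplified):
// "changing" the boost returns a new wrapper; nothing ever mutates, so
// hashCode/equals of any instance are stable for the life of the object.
public class BoostSketch {
    interface Query {}

    static final class Boost implements Query {
        final Query wrapped;
        final float boost;

        Boost(Query wrapped, float boost) {
            this.wrapped = wrapped;
            this.boost = boost;
        }

        /** Returns a new wrapper with a different boost. */
        Boost withBoost(float newBoost) {
            return new Boost(wrapped, newBoost);
        }

        @Override public int hashCode() {
            return 31 * wrapped.hashCode() + Float.hashCode(boost);
        }

        @Override public boolean equals(Object o) {
            return o instanceof Boost && ((Boost) o).wrapped.equals(wrapped)
                    && ((Boost) o).boost == boost;
        }
    }
}
```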






[jira] [Updated] (LUCENE-6758) Adding a SHOULD clause to a BQ over an empty field clears the score when using DefaultSimilarity

2015-09-08 Thread Robert Muir (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6758?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Muir updated LUCENE-6758:

Attachment: LUCENE-6758.patch

The problem is just with crappy queryNorm in DefaultSimilarity, as expected.

Previously maxDoc was used, which was always assumed to be a positive 
integer... but docCount can be zero.
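The failure mode can be illustrated with a toy example (simplified, not DefaultSimilarity's exact formula): a norm of the form 1/sqrt(x) is fine when x is a positive count such as maxDoc, but when the statistic can be zero, as docCount is for an empty field, the norm degenerates and corrupts the whole score product.

```java
// Toy illustration of the degenerate norm (NOT DefaultSimilarity's exact
// formula): dividing by sqrt of a zero statistic yields Infinity, and
// multiplying that into a score produces NaN/zeroed results.
public class QueryNormSketch {
    static float norm(long stat) {
        return (float) (1.0 / Math.sqrt(stat));
    }

    /** Guarded variant: fall back to a neutral norm for a zero stat. */
    static float guardedNorm(long stat) {
        return stat <= 0 ? 1f : norm(stat);
    }

    public static void main(String[] args) {
        System.out.println(norm(0));        // Infinity
        System.out.println(0f * norm(0));   // NaN: score is destroyed
        System.out.println(guardedNorm(0)); // 1.0
    }
}
```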

> Adding a SHOULD clause to a BQ over an empty field clears the score when 
> using DefaultSimilarity
> 
>
> Key: LUCENE-6758
> URL: https://issues.apache.org/jira/browse/LUCENE-6758
> Project: Lucene - Core
>  Issue Type: Bug
>Affects Versions: Trunk
>Reporter: Terry Smith
> Attachments: LUCENE-6758.patch, LUCENE-6758.patch
>
>
> Patch with unit test to show the bug will be attached.
> I've narrowed this change in behavior with git bisect to the following commit:
> {noformat}
> commit 698b4b56f0f2463b21c9e3bc67b8b47d635b7d1f
> Author: Robert Muir 
> Date:   Thu Aug 13 17:37:15 2015 +
> LUCENE-6711: Use CollectionStatistics.docCount() for IDF and average 
> field length computations
> 
> git-svn-id: https://svn.apache.org/repos/asf/lucene/dev/trunk@1695744 
> 13f79535-47bb-0310-9956-ffa450edef68
> {noformat}






[jira] [Updated] (SOLR-7569) Create an API to force a leader election between nodes

2015-09-08 Thread Ishan Chattopadhyaya (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7569?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ishan Chattopadhyaya updated SOLR-7569:
---
Attachment: SOLR-7569.patch

* Passing the async parameter through.
* Tests now randomly make async requests for the recover shard API call.

> Create an API to force a leader election between nodes
> --
>
> Key: SOLR-7569
> URL: https://issues.apache.org/jira/browse/SOLR-7569
> Project: Solr
>  Issue Type: New Feature
>  Components: SolrCloud
>Reporter: Shalin Shekhar Mangar
>Assignee: Shalin Shekhar Mangar
>  Labels: difficulty-medium, impact-high
> Attachments: SOLR-7569.patch, SOLR-7569.patch, SOLR-7569.patch, 
> SOLR-7569.patch, SOLR-7569.patch, SOLR-7569.patch, SOLR-7569.patch, 
> SOLR-7569.patch, SOLR-7569_lir_down_state_test.patch
>
>
> There are many reasons why Solr will not elect a leader for a shard, e.g. all 
> replicas' last published state was recovery, or bugs which cause a leader to 
> be marked as 'down'. While the best solution is that shards never get into 
> this state, we need a manual way to fix it when they do. Right now we can 
> perform a series of maneuvers involving bouncing the node (since recovery 
> paths between bouncing and REQUESTRECOVERY are different), but that is 
> difficult when running a large cluster. Although such a manual API may lead 
> to some data loss, in some cases it is the only option to restore 
> availability.
> This issue proposes to build a new collection API which can be used to force 
> replicas into recovering a leader while avoiding data loss on a best-effort 
> basis.






[jira] [Created] (SOLR-8018) NPE if distrib=true on single node set up

2015-09-08 Thread Markus Jelsma (JIRA)
Markus Jelsma created SOLR-8018:
---

 Summary: NPE if distrib=true on single node set up
 Key: SOLR-8018
 URL: https://issues.apache.org/jira/browse/SOLR-8018
 Project: Solr
  Issue Type: Bug
Affects Versions: 5.3
Reporter: Markus Jelsma
Priority: Trivial
 Fix For: 5.4


Single node set up: http://localhost:8983/solr/CORE/select?distrib=true causes 
NPE

{code}
219214 INFO  (qtp1282788025-15) [   x:logs] o.a.s.c.S.Request [logs] 
webapp=/solr path=/select params={distrib=true} status=500 QTime=0 
219215 ERROR (qtp1282788025-15) [   x:logs] o.a.s.s.SolrDispatchFilter 
null:java.lang.NullPointerException
at 
org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:341)
at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:143)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:2068)
at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:669)
at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:462)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:210)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:179)
at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)
at 
org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585)
at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
at 
org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:577)
at 
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:223)
at 
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1127)
at 
org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515)
at 
org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
at 
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1061)
at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
at 
org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:215)
at 
org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:110)
at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
at org.eclipse.jetty.server.Server.handle(Server.java:499)
at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:310)
at 
org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:257)
at 
org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:540)
at 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635)
at 
org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555)
at java.lang.Thread.run(Thread.java:745)
{code}

See also: 
https://www.mail-archive.com/solr-user@lucene.apache.org/msg113494.html






[jira] [Commented] (LUCENE-6758) Adding a SHOULD clause to a BQ over an empty field clears the score when using DefaultSimilarity

2015-09-08 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6758?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14734713#comment-14734713
 ] 

Adrien Grand commented on LUCENE-6758:
--

+1

> Adding a SHOULD clause to a BQ over an empty field clears the score when 
> using DefaultSimilarity
> 
>
> Key: LUCENE-6758
> URL: https://issues.apache.org/jira/browse/LUCENE-6758
> Project: Lucene - Core
>  Issue Type: Bug
>Affects Versions: Trunk
>Reporter: Terry Smith
> Attachments: LUCENE-6758.patch, LUCENE-6758.patch
>
>
> Patch with unit test to show the bug will be attached.
> I've narrowed this change in behavior with git bisect to the following commit:
> {noformat}
> commit 698b4b56f0f2463b21c9e3bc67b8b47d635b7d1f
> Author: Robert Muir 
> Date:   Thu Aug 13 17:37:15 2015 +
> LUCENE-6711: Use CollectionStatistics.docCount() for IDF and average 
> field length computations
> 
> git-svn-id: https://svn.apache.org/repos/asf/lucene/dev/trunk@1695744 
> 13f79535-47bb-0310-9956-ffa450edef68
> {noformat}






[jira] [Created] (LUCENE-6784) Enable query caching by default

2015-09-08 Thread Adrien Grand (JIRA)
Adrien Grand created LUCENE-6784:


 Summary: Enable query caching by default
 Key: LUCENE-6784
 URL: https://issues.apache.org/jira/browse/LUCENE-6784
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Adrien Grand
Assignee: Adrien Grand
Priority: Minor
 Fix For: 5.4


Now that our main queries have become immutable, I would like to revisit 
enabling the query cache by default.
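The connection between immutability and caching can be sketched generically (this is the general idea, not Lucene's LRUQueryCache): cached results are keyed by the query itself, so the query's hashCode/equals must never change after insertion, which is exactly what mutable boosts used to break. A small LRU cache built on LinkedHashMap:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Generic LRU cache sketch keyed by query objects (illustrative only, not
// Lucene's LRUQueryCache). Correctness depends on keys being immutable:
// if a key's hashCode changed after insertion, the entry would be lost.
public class QueryCacheSketch<Q, R> {
    private final int maxSize;
    private final Map<Q, R> cache;

    public QueryCacheSketch(int maxSize) {
        this.maxSize = maxSize;
        // accessOrder = true gives least-recently-used eviction order
        this.cache = new LinkedHashMap<Q, R>(16, 0.75f, true) {
            @Override protected boolean removeEldestEntry(Map.Entry<Q, R> e) {
                return size() > QueryCacheSketch.this.maxSize;
            }
        };
    }

    public R get(Q query) { return cache.get(query); }
    public void put(Q query, R result) { cache.put(query, result); }
    public int size() { return cache.size(); }
}
```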






[jira] [Commented] (LUCENE-6783) FuzzyLikeThisQuery.rewrite should not have side effects

2015-09-08 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6783?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14734712#comment-14734712
 ] 

ASF subversion and git services commented on LUCENE-6783:
-

Commit 1701783 from [~jpountz] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1701783 ]

LUCENE-6590,LUCENE-6783: Replace Query.getBoost, setBoost and clone with a new 
BoostQuery.

> FuzzyLikeThisQuery.rewrite should not have side effects
> ---
>
> Key: LUCENE-6783
> URL: https://issues.apache.org/jira/browse/LUCENE-6783
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Adrien Grand
>Assignee: Adrien Grand
>Priority: Minor
> Fix For: 5.4
>
> Attachments: LUCENE-6783.patch
>
>







[jira] [Commented] (LUCENE-6590) Explore different ways to apply boosts

2015-09-08 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6590?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14734711#comment-14734711
 ] 

ASF subversion and git services commented on LUCENE-6590:
-

Commit 1701783 from [~jpountz] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1701783 ]

LUCENE-6590,LUCENE-6783: Replace Query.getBoost, setBoost and clone with a new 
BoostQuery.

> Explore different ways to apply boosts
> --
>
> Key: LUCENE-6590
> URL: https://issues.apache.org/jira/browse/LUCENE-6590
> Project: Lucene - Core
>  Issue Type: Wish
>Reporter: Adrien Grand
>Priority: Minor
> Attachments: LUCENE-6590.patch, LUCENE-6590.patch, LUCENE-6590.patch, 
> LUCENE-6590.patch, LUCENE-6590.patch, LUCENE-6590.patch, LUCENE-6590.patch
>
>
> Follow-up from LUCENE-6570: the fact that all queries are mutable in order to 
> allow for applying a boost raises issues since it makes queries bad cache 
> keys since their hashcode can change anytime. We could just document that 
> queries should never be modified after they have gone through IndexSearcher 
> but it would be even better if the API made queries impossible to mutate at 
> all.
> I think there are two main options:
>  - either replace "void setBoost(boost)" with something like "Query 
> withBoost(boost)" which would return a clone that has a different boost
>  - or move boost handling outside of Query, for instance we could have a 
> (immutable) query impl that would be dedicated to applying boosts, that 
> queries that need to change boosts at rewrite time (such as BooleanQuery) 
> would use as a wrapper.
> The latter idea is from Robert and I like it a lot given how often I either 
> introduced or found a bug which was due to the boost parameter being ignored. 
> Maybe there are other options, but I think this is worth exploring.






[jira] [Resolved] (LUCENE-6783) FuzzyLikeThisQuery.rewrite should not have side effects

2015-09-08 Thread Adrien Grand (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6783?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrien Grand resolved LUCENE-6783.
--
Resolution: Fixed

> FuzzyLikeThisQuery.rewrite should not have side effects
> ---
>
> Key: LUCENE-6783
> URL: https://issues.apache.org/jira/browse/LUCENE-6783
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Adrien Grand
>Assignee: Adrien Grand
>Priority: Minor
> Fix For: 5.4
>
> Attachments: LUCENE-6783.patch
>
>







[jira] [Resolved] (LUCENE-6590) Explore different ways to apply boosts

2015-09-08 Thread Adrien Grand (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6590?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrien Grand resolved LUCENE-6590.
--
   Resolution: Fixed
Fix Version/s: 5.4

> Explore different ways to apply boosts
> --
>
> Key: LUCENE-6590
> URL: https://issues.apache.org/jira/browse/LUCENE-6590
> Project: Lucene - Core
>  Issue Type: Wish
>Reporter: Adrien Grand
>Priority: Minor
> Fix For: 5.4
>
> Attachments: LUCENE-6590.patch, LUCENE-6590.patch, LUCENE-6590.patch, 
> LUCENE-6590.patch, LUCENE-6590.patch, LUCENE-6590.patch, LUCENE-6590.patch
>
>
> Follow-up from LUCENE-6570: the fact that all queries are mutable in order to 
> allow for applying a boost raises issues, because it makes queries bad cache 
> keys: their hashcode can change at any time. We could just document that 
> queries should never be modified after they have gone through IndexSearcher, 
> but it would be even better if the API made queries impossible to mutate at 
> all.
> I think there are two main options:
>  - either replace "void setBoost(boost)" with something like "Query 
> withBoost(boost)" which would return a clone that has a different boost
>  - or move boost handling outside of Query, for instance we could have a 
> (immutable) query impl that would be dedicated to applying boosts, that 
> queries that need to change boosts at rewrite time (such as BooleanQuery) 
> would use as a wrapper.
> The latter idea is from Robert and I like it a lot given how often I either 
> introduced or found a bug which was due to the boost parameter being ignored. 
> Maybe there are other options, but I think this is worth exploring.






Re: [CI] Lucene 5x Linux 64 Test Only - Build # 62917 - Failure!

2015-09-08 Thread Michael McCandless
Looks like https://issues.apache.org/jira/browse/LUCENE-6629 again (tests
randomly, inexplicably time out at 7200 seconds)... when I run the repro
line, the test runs quickly ... so weird ... I'll update the issue.

Mike McCandless

On Tue, Sep 8, 2015 at 12:21 AM,  wrote:

> *BUILD FAILURE*
> Build URL
> http://build-eu-00.elastic.co/job/lucene_linux_java8_64_test_only/62917/
> Project:lucene_linux_java8_64_test_only Randomization: 
> JDK8,local,heap[512m],-server
> +UseSerialGC -UseCompressedOops,sec manager on Date of build:Tue, 08 Sep
> 2015 04:12:41 +0200 Build duration:2 hr 9 min
> *CHANGES* No Changes
> *BUILD ARTIFACTS*
> -
> checkout/lucene/build/sandbox/test/temp/junit4-J0-20150908_042126_543.events
> 
> -
> checkout/lucene/build/sandbox/test/temp/junit4-J1-20150908_042126_544.events
> 
> -
> checkout/lucene/build/sandbox/test/temp/junit4-J2-20150908_042126_544.events
> 
> -
> checkout/lucene/build/sandbox/test/temp/junit4-J3-20150908_042126_545.events
> 
> -
> checkout/lucene/build/sandbox/test/temp/junit4-J4-20150908_042126_545.events
> 
> -
> checkout/lucene/build/sandbox/test/temp/junit4-J5-20150908_042126_545.events
> 
> -
> checkout/lucene/build/sandbox/test/temp/junit4-J6-20150908_042126_545.events
> 
> -
> checkout/lucene/build/sandbox/test/temp/junit4-J7-20150908_042126_545.events
> 
> *FAILED JUNIT TESTS* Name: junit.framework Failed: 1 test(s), Passed: 0
> test(s), Skipped: 0 test(s), Total: 1 test(s)
> *- Failed:
> junit.framework.TestSuite.org.apache.lucene.search.TestDocValuesRangeQuery * 
> Name:
> org.apache.lucene.search Failed: 1 test(s), Passed: 852 test(s), Skipped: 4
> test(s), Total: 857 test(s)
> *- Failed: org.apache.lucene.search.TestDocValuesRangeQuery.testScore *
> *CONSOLE OUTPUT* [...truncated 10827 lines...] [junit4] [junit4] [junit4]
> JVM J0: 0.94 .. 7224.19 = 7223.25s [junit4] JVM J1: 0.69 .. 9.52 = 8.83s 
> [junit4]
> JVM J2: 0.93 .. 12.27 = 11.34s [junit4] JVM J3: 0.70 .. 13.77 = 13.07s 
> [junit4]
> JVM J4: 0.69 .. 10.52 = 9.83s [junit4] JVM J5: 0.94 .. 10.23 = 9.29s [junit4]
> JVM J6: 0.93 .. 14.06 = 13.13s [junit4] JVM J7: 0.94 .. 8.48 = 7.54s [junit4]
> Execution time total: 2 hours 24 seconds [junit4] Tests summary: 19
> suites, 127 tests, 1 suite-level error, 1 error, 4 ignored (4 assumptions) 
> BUILD
> FAILED 
> /home/jenkins/workspace/lucene_linux_java8_64_test_only/checkout/lucene/build.xml:471:
> The following error occurred while executing this line: 
> /home/jenkins/workspace/lucene_linux_java8_64_test_only/checkout/lucene/common-build.xml:2248:
> The following error occurred while executing this line: 
> /home/jenkins/workspace/lucene_linux_java8_64_test_only/checkout/lucene/module-build.xml:58:
> The following error occurred while executing this line: 
> /home/jenkins/workspace/lucene_linux_java8_64_test_only/checkout/lucene/common-build.xml:1452:
> The following error occurred while executing this line: 
> /home/jenkins/workspace/lucene_linux_java8_64_test_only/checkout/lucene/common-build.xml:1006:
> There were test failures: 19 suites, 127 tests, 1 suite-level error, 1
> error, 4 ignored (4 assumptions) Total time: 128 minutes 51 seconds Build
> step 'Invoke Ant' marked build as failure Archiving artifacts Recording
> test results [description-setter] Description set:
> JDK8,local,heap[512m],-server +UseSerialGC -UseCompressedOops,sec manager on 
> Email
> was triggered for: Failure - 1st Trigger Failure - Any was overridden by
> another trigger and will not send an email. Trigger Failure - Still was
> overridden by another trigger and will not send an email. Sending email
> for trigger: Failure - 1st
>


[jira] [Created] (SOLR-8017) solr.PointType can't deal with coordination in format like (0.9504547, 1.0, 1.0890503)

2015-09-08 Thread wangshanshan (JIRA)
wangshanshan created SOLR-8017:
--

 Summary: solr.PointType can't deal with coordination in format 
like (0.9504547, 1.0, 1.0890503)
 Key: SOLR-8017
 URL: https://issues.apache.org/jira/browse/SOLR-8017
 Project: Solr
  Issue Type: Improvement
Affects Versions: 5.2
Reporter: wangshanshan
Priority: Minor


JPEG picture files can contain fields such as media_white_point and 
media_black_point, whose values are in a format like (0.9504547, 1.0, 1.0890503).
But solr.PointType can't handle the "(": it simply splits on commas and lets 
Double.parseDouble deal with a string like "(0.9504547", which raises a 
NumberFormatException.
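One possible fix for the parsing problem described above can be sketched as follows. This is hypothetical illustration code, not Solr's actual PointType implementation: strip a wrapping "(...)" before splitting on commas, so the first component no longer reaches Double.parseDouble with a leading parenthesis.

```java
// Illustrative sketch only; not Solr's PointType code.
class PointParseSketch {
    static double[] parsePoint(String raw) {
        String s = raw.trim();
        if (s.startsWith("(") && s.endsWith(")")) {
            s = s.substring(1, s.length() - 1);  // drop the wrapping parentheses
        }
        String[] parts = s.split(",");
        double[] dims = new double[parts.length];
        for (int i = 0; i < parts.length; i++) {
            // trim per-component whitespace before parsing each dimension
            dims[i] = Double.parseDouble(parts[i].trim());
        }
        return dims;
    }
}
```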






[JENKINS] Lucene-Solr-NightlyTests-trunk - Build # 789 - Still Failing

2015-09-08 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-trunk/789/

2 tests failed.
FAILED:  org.apache.solr.cloud.CollectionsAPIDistributedZkTest.test

Error Message:
Error from server at http://127.0.0.1:55860/qm_qk/q: Could not load collection 
from ZK:halfcollectionblocker

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at http://127.0.0.1:55860/qm_qk/q: Could not load collection from 
ZK:halfcollectionblocker
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:560)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:234)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:226)
at org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1220)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.makeRequest(CollectionsAPIDistributedZkTest.java:302)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testErrorHandling(CollectionsAPIDistributedZkTest.java:419)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.test(CollectionsAPIDistributedZkTest.java:169)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:963)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:938)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 

[JENKINS] Lucene-Solr-trunk-Linux (32bit/jdk1.8.0_60) - Build # 14157 - Still Failing!

2015-09-08 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/14157/
Java: 32bit/jdk1.8.0_60 -server -XX:+UseParallelGC

1 tests failed.
FAILED:  org.apache.solr.cloud.TestRandomRequestDistribution.testRequestTracking

Error Message:
Shard a1x2_shard1_replica2 received all 10 requests

Stack Trace:
java.lang.AssertionError: Shard a1x2_shard1_replica2 received all 10 requests
at 
__randomizedtesting.SeedInfo.seed([19C646F8A56DE3BF:51FA1F385166F229]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.cloud.TestRandomRequestDistribution.testRequestTracking(TestRandomRequestDistribution.java:109)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:963)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:938)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Commented] (LUCENE-6629) Random 7200 seconds build timeouts / infinite loops in Lucene tests?

2015-09-08 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6629?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14734549#comment-14734549
 ] 

Michael McCandless commented on LUCENE-6629:


Another one: 
http://build-eu-00.elastic.co/job/lucene_linux_java8_64_test_only/62917

{noformat}
[junit4] Suite: org.apache.lucene.search.TestDocValuesRangeQuery
   [junit4]   2> 9 08, 2015 10:21:29 ?? 
com.carrotsearch.randomizedtesting.ThreadLeakControl$2 evaluate
   [junit4]   2> WARNING: Suite execution timed out: 
org.apache.lucene.search.TestDocValuesRangeQuery
   [junit4]   2>1) Thread[id=11, name=JUnit4-serializer-daemon, 
state=TIMED_WAITING, group=main]
   [junit4]   2> at java.lang.Thread.sleep(Native Method)
   [junit4]   2> at 
com.carrotsearch.ant.tasks.junit4.events.Serializer$1.run(Serializer.java:47)
   [junit4]   2>2) Thread[id=1, name=main, state=WAITING, group=main]
   [junit4]   2> at java.lang.Object.wait(Native Method)
   [junit4]   2> at java.lang.Thread.join(Thread.java:1245)
   [junit4]   2> at java.lang.Thread.join(Thread.java:1319)
   [junit4]   2> at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSuite(RandomizedRunner.java:578)
   [junit4]   2> at 
com.carrotsearch.randomizedtesting.RandomizedRunner.run(RandomizedRunner.java:444)
   [junit4]   2> at 
com.carrotsearch.ant.tasks.junit4.slave.SlaveMain.execute(SlaveMain.java:199)
   [junit4]   2> at 
com.carrotsearch.ant.tasks.junit4.slave.SlaveMain.main(SlaveMain.java:310)
   [junit4]   2> at 
com.carrotsearch.ant.tasks.junit4.slave.SlaveMainSafe.main(SlaveMainSafe.java:12)
   [junit4]   2>3) Thread[id=14, 
name=SUITE-TestDocValuesRangeQuery-seed#[57620327D30425D5], state=RUNNABLE, 
group=TGRP-TestDocValuesRangeQuery]
   [junit4]   2> at java.lang.Thread.getStackTrace(Thread.java:1552)
   [junit4]   2> at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.getThreadsWithTraces(ThreadLeakControl.java:690)
   [junit4]   2> at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.formatThreadStacksFull(ThreadLeakControl.java:679)
   [junit4]   2> at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.access$900(ThreadLeakControl.java:62)
   [junit4]   2> at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$2.evaluate(ThreadLeakControl.java:412)
   [junit4]   2> at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSuite(RandomizedRunner.java:651)
   [junit4]   2> at 
com.carrotsearch.randomizedtesting.RandomizedRunner.access$200(RandomizedRunner.java:138)
   [junit4]   2> at 
com.carrotsearch.randomizedtesting.RandomizedRunner$1.run(RandomizedRunner.java:568)
   [junit4]   2>4) Thread[id=15, 
name=TEST-TestDocValuesRangeQuery.testScore-seed#[57620327D30425D5], 
state=RUNNABLE, group=TGRP-TestDocValuesRangeQuery]
   [junit4]   2> at 
org.apache.lucene.search.TopFieldCollector.populateResults(TopFieldCollector.java:537)
   [junit4]   2> at 
org.apache.lucene.search.TopDocsCollector.topDocs(TopDocsCollector.java:156)
   [junit4]   2> at 
org.apache.lucene.search.TopDocsCollector.topDocs(TopDocsCollector.java:93)
   [junit4]   2> at 
org.apache.lucene.search.TopFieldCollector.topDocs(TopFieldCollector.java:561)
   [junit4]   2> at 
org.apache.lucene.search.IndexSearcher$4.reduce(IndexSearcher.java:696)
   [junit4]   2> at 
org.apache.lucene.search.IndexSearcher$4.reduce(IndexSearcher.java:683)
   [junit4]   2> at 
org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:719)
   [junit4]   2> at 
org.apache.lucene.search.IndexSearcher.searchAfter(IndexSearcher.java:703)
   [junit4]   2> at 
org.apache.lucene.search.IndexSearcher.searchAfter(IndexSearcher.java:645)
   [junit4]   2> at 
org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:551)
   [junit4]   2> at 
org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:601)
   [junit4]   2> at 
org.apache.lucene.search.TestDocValuesRangeQuery.assertSameMatches(TestDocValuesRangeQuery.java:230)
   [junit4]   2> at 
org.apache.lucene.search.TestDocValuesRangeQuery.testScore(TestDocValuesRangeQuery.java:169)
   [junit4]   2> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
Method)
   [junit4]   2> at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
   [junit4]   2> at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   [junit4]   2> at java.lang.reflect.Method.invoke(Method.java:497)
   [junit4]   2> at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
   [junit4]   2> at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
   [junit4]   2> 

[jira] [Updated] (LUCENE-6784) Enable query caching by default

2015-09-08 Thread Adrien Grand (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6784?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrien Grand updated LUCENE-6784:
-
Attachment: LUCENE-6784.patch

Here is a patch. The default cache has a size of 32MB and I added a heuristic 
to only enable it if this would represent less than 5% of the total memory that 
is available to the JVM. If you think this heuristic is too complicated I can 
remove it...
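The heuristic described in the comment can be sketched in a few lines. The method name and threshold arithmetic below are assumptions for illustration, not the patch's actual code: enable the 32MB default query cache only when it would be under 5% of the JVM's maximum heap.

```java
// Sketch of the described heuristic; names are illustrative, not the patch's.
class CacheHeuristicSketch {
    static final long DEFAULT_CACHE_BYTES = 32L * 1024 * 1024;  // 32MB default

    static boolean enableByDefault(long maxHeapBytes) {
        // 5% threshold: the cache must be smaller than maxHeap / 20
        return DEFAULT_CACHE_BYTES < maxHeapBytes / 20;
    }
}
```

For example, a 1GB heap (5% = ~51MB) would enable the cache, while a 512MB heap (5% = ~25.6MB) would not.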

> Enable query caching by default
> ---
>
> Key: LUCENE-6784
> URL: https://issues.apache.org/jira/browse/LUCENE-6784
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Assignee: Adrien Grand
>Priority: Minor
> Fix For: 5.4
>
> Attachments: LUCENE-6784.patch
>
>
> Now that our main queries have become immutable, I would like to revisit 
> enabling the query cache by default.






[JENKINS] Lucene-Solr-trunk-Linux (32bit/jdk1.8.0_60) - Build # 14156 - Failure!

2015-09-08 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/14156/
Java: 32bit/jdk1.8.0_60 -client -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  org.apache.solr.cloud.CdcrReplicationHandlerTest.doTest

Error Message:
Index: 0, Size: 0

Stack Trace:
java.lang.IndexOutOfBoundsException: Index: 0, Size: 0
at 
__randomizedtesting.SeedInfo.seed([10F5CC960A82C3DE:B7B174326739D067]:0)
at java.util.ArrayList.rangeCheck(ArrayList.java:653)
at java.util.ArrayList.get(ArrayList.java:429)
at 
org.apache.solr.cloud.CdcrReplicationHandlerTest.doTestPartialReplicationAfterPeerSync(CdcrReplicationHandlerTest.java:168)
at 
org.apache.solr.cloud.CdcrReplicationHandlerTest.doTest(CdcrReplicationHandlerTest.java:53)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:963)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:938)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Closed] (LUCENE-6369) Make queries more defensive and clone deeply

2015-09-08 Thread Adrien Grand (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6369?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrien Grand closed LUCENE-6369.

   Resolution: Won't Fix
Fix Version/s: (was: 5.2)
   (was: Trunk)

Superseded by LUCENE-6590: queries should now be immutable.

> Make queries more defensive and clone deeply
> 
>
> Key: LUCENE-6369
> URL: https://issues.apache.org/jira/browse/LUCENE-6369
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Adrien Grand
>Assignee: Adrien Grand
> Attachments: LUCENE-6369.patch, immutable_queries.patch
>
>
> It is very important for the query cache that queries be either immutable or 
> clone deeply so that they cannot change after having been put into the cache.
> There are three issues that need to be addressed:
>  - mutable queries such as boolean or phrase queries do not clone deeply
>  - queries that wrap mutable objects such as TermQuery's term
>  - filters inherit Query's default clone impl which is not enough in most 
> cases






[jira] [Updated] (SOLR-7819) ZkController.ensureReplicaInLeaderInitiatedRecovery does not respect retryOnConnLoss

2015-09-08 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7819?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar updated SOLR-7819:

Attachment: SOLR-7819.patch

# Adds a new test: TestLeaderInitiatedRecoveryThread
# Removes ZkControllerTest.testEnsureReplicaInLeaderInitiatedRecovery which is 
no longer correct
# Removes portions of HttpPartitionTest.testLeaderInitiatedRecoveryCRUD which 
are no longer relevant to the new code
# Fixes a bug in LeaderInitiatedRecoveryThread which would send recovery 
messages even when a node was not live; this is covered by the new test.

> ZkController.ensureReplicaInLeaderInitiatedRecovery does not respect 
> retryOnConnLoss
> 
>
> Key: SOLR-7819
> URL: https://issues.apache.org/jira/browse/SOLR-7819
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 5.2, 5.2.1
>Reporter: Shalin Shekhar Mangar
>  Labels: Jepsen
> Fix For: Trunk, 5.4
>
> Attachments: SOLR-7819.patch, SOLR-7819.patch, SOLR-7819.patch, 
> SOLR-7819.patch, SOLR-7819.patch
>
>
> SOLR-7245 added a retryOnConnLoss parameter to 
> ZkController.ensureReplicaInLeaderInitiatedRecovery so that indexing threads 
> do not hang during a partition on ZK operations. However, some of those 
> changes were unintentionally reverted by SOLR-7336 in 5.2.
> I found this while running Jepsen tests on 5.2.1 where a hung update managed 
> to put a leader into a 'down' state (I'm still investigating and will open a 
> separate issue about this problem).
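The retryOnConnLoss contract the issue describes can be sketched generically. All names below are illustrative and this is not Solr's ZkController code: a ZooKeeper operation is retried on failure only when the caller opts in, so indexing threads are not forced to block through a partition.

```java
import java.util.function.Supplier;

// Generic retry-opt-in sketch; not Solr's actual ZkController API.
class RetrySketch {
    static <T> T run(Supplier<T> op, boolean retryOnConnLoss, int maxAttempts) {
        RuntimeException last = null;
        // Without opt-in, the operation is attempted exactly once.
        int attempts = retryOnConnLoss ? maxAttempts : 1;
        for (int i = 0; i < attempts; i++) {
            try {
                return op.get();
            } catch (RuntimeException e) {  // stand-in for a connection-loss exception
                last = e;
            }
        }
        throw last;
    }
}
```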






[jira] [Created] (LUCENE-6787) BooleanQuery should be able to drop duplicate non-scoring clauses

2015-09-08 Thread Terry Smith (JIRA)
Terry Smith created LUCENE-6787:
---

 Summary: BooleanQuery should be able to drop duplicate non-scoring 
clauses
 Key: LUCENE-6787
 URL: https://issues.apache.org/jira/browse/LUCENE-6787
 Project: Lucene - Core
  Issue Type: Improvement
Affects Versions: Trunk
Reporter: Terry Smith
Priority: Minor


Pulling out of the discussion on LUCENE-6305.

BooleanQuery could drop duplicate non-scoring (MUST_NOT, FILTER) clauses.
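The proposed rewrite step can be sketched as follows. Since FILTER and MUST_NOT clauses contribute nothing to the score, exact duplicates are redundant and can be collapsed with a set, relying on equals/hashCode of immutable queries; strings stand in for queries here, and this is not BooleanQuery's actual code.

```java
import java.util.ArrayList;
import java.util.LinkedHashSet;
import java.util.List;

// Illustrative sketch; strings stand in for immutable Query instances.
class DedupSketch {
    static List<String> dedupNonScoring(List<String> clauses) {
        // LinkedHashSet keeps first-seen order while dropping duplicates
        return new ArrayList<>(new LinkedHashSet<>(clauses));
    }
}
```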







[jira] [Updated] (LUCENE-6679) Filter's Weight.explain returns an explanation with isMatch==true even on documents that don't match

2015-09-08 Thread Terry Smith (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6679?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Terry Smith updated LUCENE-6679:

Attachment: LUCENE-6679.patch

Here is a patch (against trunk) that adds test coverage for explanations on 
hits only.

I'm looking for feedback to the approach used before expanding to cover 
explanations for misses.

Currently I get a couple of failures when running just the Lucene tests:

{noformat}
Tests with failures:
  - org.apache.lucene.search.TestSortRandom.testRandomStringValSort
  - org.apache.lucene.search.TestSortRandom.testRandomStringSort


JVM J0: 1.42 ..   284.75 =   283.33s
JVM J1: 1.64 ..   284.77 =   283.13s
JVM J2: 1.42 ..   284.70 =   283.28s
JVM J3: 1.42 ..   284.68 =   283.26s
Execution time total: 4 minutes 44 seconds
Tests summary: 404 suites, 3235 tests, 2 failures, 104 ignored (100 assumptions)
{noformat}

Happy to dig into these more once an approach has been found that people like.


> Filter's Weight.explain returns an explanation with isMatch==true even on 
> documents that don't match
> 
>
> Key: LUCENE-6679
> URL: https://issues.apache.org/jira/browse/LUCENE-6679
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Adrien Grand
> Attachments: LUCENE-6679.patch
>
>
> This was reported by Trejkaz on the java-user list: 
> http://search-lucene.com/m/l6pAi19h4Y3DclgB1=Re+What+on+earth+is+FilteredQuery+explain+doing+






[jira] [Commented] (SOLR-7990) timeAllowed is returning wrong results on the same query submitted with different timeAllowed limits

2015-09-08 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7990?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14735415#comment-14735415
 ] 

Yonik Seeley commented on SOLR-7990:


OK, I figured out why your test was failing... there were no name:a* docs 
because you overwrote them all with name:b* docs in createIndex ;-)

> timeAllowed is returning wrong results on the same query submitted with 
> different timeAllowed limits
> 
>
> Key: SOLR-7990
> URL: https://issues.apache.org/jira/browse/SOLR-7990
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 5.2.1, Trunk, 5.4
>Reporter: Erick Erickson
>Assignee: Yonik Seeley
> Attachments: SOLR-7990.patch, SOLR-7990.patch, SOLR-7990.patch, 
> SOLR-7990.patch, SOLR-7990_filterFix.patch
>
>
> William Bell raised a question on the user's list. The scenario is
> > send a query that exceeds timeAllowed
> > send another identical query with larger timeAllowed that does NOT time out
> The results from the second query are not correct, they reflect the doc count 
> from the first query.
> It apparently has to do with filter queries being inappropriately created and 
> re-used. I've attached a test case that illustrates the problem.
> There are three tests here. 
> testFilterSimpleCase shows the problem.
> testCacheAssumptions is my hack at what I _think_ the states of the caches 
> should be, but has a bunch of clutter so I'm Ignoring it for now. This should 
> be un-ignored and testFilterSimpleCase removed when there's any fix proposed. 
> The assumptions may not be correct though.
> testQueryResults shows what I think is a problem: the second call, which does 
> NOT exceed timeAllowed, still reports partial results.
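As a toy model of the failure mode described above (and of the direction a fix would take), assuming nothing about Solr's actual cache classes: a result cache keyed only by the query text will replay a partial result that was computed under a short timeAllowed.

```java
import java.util.HashMap;
import java.util.Map;

// Toy model: a cache keyed only by query text hands back partial results.
// The guard in search() shows the fix direction: never cache partial results.
public class PartialResultCacheSketch {
    record Result(int docCount, boolean partial) {}

    private final Map<String, Result> cache = new HashMap<>();

    Result search(String query, boolean timesOut) {
        Result cached = cache.get(query);
        if (cached != null) return cached; // bug surface: may be a partial result
        Result r = timesOut ? new Result(100, true) : new Result(1264, false);
        if (!r.partial) {
            cache.put(query, r); // only complete results are safe to reuse
        }
        return r;
    }

    public static void main(String[] args) {
        PartialResultCacheSketch s = new PartialResultCacheSketch();
        s.search("q", true);                  // first call times out: partial
        Result second = s.search("q", false); // must not see the partial result
        if (second.partial() || second.docCount() != 1264) {
            throw new AssertionError(second);
        }
    }
}
```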






[jira] [Updated] (LUCENE-6787) BooleanQuery should be able to drop duplicate non-scoring clauses

2015-09-08 Thread Terry Smith (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6787?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Terry Smith updated LUCENE-6787:

Attachment: LUCENE-6787.patch

Here is a patch based on [~jpountz]'s suggestion of putting this optimization 
in BooleanQuery.rewrite().


> BooleanQuery should be able to drop duplicate non-scoring clauses
> -
>
> Key: LUCENE-6787
> URL: https://issues.apache.org/jira/browse/LUCENE-6787
> Project: Lucene - Core
>  Issue Type: Improvement
>Affects Versions: Trunk
>Reporter: Terry Smith
>Priority: Minor
> Attachments: LUCENE-6787.patch
>
>
> Pulling out of the discussion on LUCENE-6305.
> BooleanQuery could drop duplicate non-scoring (MUST_NOT, FILTER) clauses.






[jira] [Commented] (LUCENE-6773) Always flatten nested conjunctions

2015-09-08 Thread Ryan Ernst (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6773?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14735508#comment-14735508
 ] 

Ryan Ernst commented on LUCENE-6773:


Test looks good, thanks!

> Always flatten nested conjunctions
> --
>
> Key: LUCENE-6773
> URL: https://issues.apache.org/jira/browse/LUCENE-6773
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Assignee: Adrien Grand
>Priority: Minor
> Attachments: LUCENE-6773.patch, LUCENE-6773.patch
>
>
> LUCENE-6585 started the work to flatten nested conjunctions, but this only 
> works with approximations. Otherwise a ConjunctionScorer is passed to 
> ConjunctionDISI.intersect, and is not flattened since it is not an instance 
> of ConjunctionDISI.
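The flattening idea can be sketched with stand-in types; `Node`, `Leaf`, and `And` below are illustrative, not Lucene's ConjunctionDISI/ConjunctionScorer API. The point is that a nested AND-of-ANDs should collapse into one flat list of leaves, which the issue above says fails when the nested conjunction arrives behind a wrapper type.

```java
import java.util.ArrayList;
import java.util.List;

// Toy model of conjunction flattening: nested ANDs are collapsed into a
// single flat list of leaf clauses instead of being intersected pairwise.
public class FlattenSketch {
    interface Node {}
    record Leaf(String name) implements Node {}
    record And(List<Node> children) implements Node {}

    static List<Leaf> flatten(Node node) {
        List<Leaf> out = new ArrayList<>();
        collect(node, out);
        return out;
    }

    private static void collect(Node node, List<Leaf> out) {
        if (node instanceof Leaf leaf) {
            out.add(leaf);
        } else if (node instanceof And and) {
            for (Node child : and.children()) {
                collect(child, out); // recurse so nested ANDs are flattened too
            }
        }
    }

    public static void main(String[] args) {
        Node nested = new And(List.of(
            new Leaf("a"),
            new And(List.of(new Leaf("b"), new Leaf("c")))));
        if (flatten(nested).size() != 3) throw new AssertionError();
    }
}
```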






[JENKINS] Lucene-Solr-SmokeRelease-5.3 - Build # 15 - Failure

2015-09-08 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-5.3/15/

No tests ran.

Build Log:
[...truncated 53053 lines...]
prepare-release-no-sign:
[mkdir] Created dir: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.3/lucene/build/smokeTestRelease/dist
 [copy] Copying 461 files to 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.3/lucene/build/smokeTestRelease/dist/lucene
 [copy] Copying 245 files to 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.3/lucene/build/smokeTestRelease/dist/solr
   [smoker] Java 1.7 
JAVA_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/latest1.7
   [smoker] Java 1.8 
JAVA_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/latest1.8
   [smoker] NOTE: output encoding is UTF-8
   [smoker] 
   [smoker] Load release URL 
"file:/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.3/lucene/build/smokeTestRelease/dist/"...
   [smoker] 
   [smoker] Test Lucene...
   [smoker]   test basics...
   [smoker]   get KEYS
   [smoker] 0.1 MB in 0.02 sec (6.8 MB/sec)
   [smoker]   check changes HTML...
   [smoker]   download lucene-5.3.0-src.tgz...
   [smoker] 28.5 MB in 0.05 sec (594.5 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download lucene-5.3.0.tgz...
   [smoker] 65.6 MB in 0.11 sec (599.0 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download lucene-5.3.0.zip...
   [smoker] 75.9 MB in 0.21 sec (360.5 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   unpack lucene-5.3.0.tgz...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] test demo with 1.7...
   [smoker]   got 6059 hits for query "lucene"
   [smoker] checkindex with 1.7...
   [smoker] test demo with 1.8...
   [smoker]   got 6059 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] check Lucene's javadoc JAR
   [smoker]   unpack lucene-5.3.0.zip...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] test demo with 1.7...
   [smoker]   got 6059 hits for query "lucene"
   [smoker] checkindex with 1.7...
   [smoker] test demo with 1.8...
   [smoker]   got 6059 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] check Lucene's javadoc JAR
   [smoker]   unpack lucene-5.3.0-src.tgz...
   [smoker] make sure no JARs/WARs in src dist...
   [smoker] run "ant validate"
   [smoker] run tests w/ Java 7 and testArgs='-Dtests.slow=false'...
   [smoker] test demo with 1.7...
   [smoker]   got 213 hits for query "lucene"
   [smoker] checkindex with 1.7...
   [smoker] generate javadocs w/ Java 7...
   [smoker] 
   [smoker] Crawl/parse...
   [smoker] 
   [smoker] Verify...
   [smoker] run tests w/ Java 8 and testArgs='-Dtests.slow=false'...
   [smoker] test demo with 1.8...
   [smoker]   got 213 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] generate javadocs w/ Java 8...
   [smoker] 
   [smoker] Crawl/parse...
   [smoker] 
   [smoker] Verify...
   [smoker]   confirm all releases have coverage in TestBackwardsCompatibility
   [smoker] find all past Lucene releases...
   [smoker] run TestBackwardsCompatibility..
   [smoker] Releases that don't seem to be tested:
   [smoker]   5.3.0
   [smoker] Traceback (most recent call last):
   [smoker]   File 
"/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.3/dev-tools/scripts/smokeTestRelease.py",
 line 1449, in <module>
   [smoker] main()
   [smoker]   File 
"/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.3/dev-tools/scripts/smokeTestRelease.py",
 line 1394, in main
   [smoker] smokeTest(c.java, c.url, c.revision, c.version, c.tmp_dir, 
c.is_signed, ' '.join(c.test_args))
   [smoker]   File 
"/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.3/dev-tools/scripts/smokeTestRelease.py",
 line 1432, in smokeTest
   [smoker] unpackAndVerify(java, 'lucene', tmpDir, 'lucene-%s-src.tgz' % 
version, svnRevision, version, testArgs, baseURL)
   [smoker]   File 
"/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.3/dev-tools/scripts/smokeTestRelease.py",
 line 583, in unpackAndVerify
   [smoker] verifyUnpacked(java, project, artifact, unpackPath, 
svnRevision, version, testArgs, tmpDir, baseURL)
   [smoker]   File 
"/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.3/dev-tools/scripts/smokeTestRelease.py",
 line 762, in verifyUnpacked
   [smoker] confirmAllReleasesAreTestedForBackCompat(unpackPath)
   [smoker]   File 
"/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.3/dev-tools/scripts/smokeTestRelease.py",
 line 1387, in confirmAllReleasesAreTestedForBackCompat
   [smoker] raise RuntimeError('some releases are not tested by 
TestBackwardsCompatibility?')
   [smoker] RuntimeError: some releases are not tested by 
TestBackwardsCompatibility?

[jira] [Updated] (SOLR-7990) timeAllowed is returning wrong results on the same query submitted with different timeAllowed limits

2015-09-08 Thread Yonik Seeley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7990?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yonik Seeley updated SOLR-7990:
---
Attachment: SOLR-7990.patch

Here's the latest patch with the rewritten ExitableDirectoryReaderTest.  It 
fails without the patch and passes with it.

> timeAllowed is returning wrong results on the same query submitted with 
> different timeAllowed limits
> 
>
> Key: SOLR-7990
> URL: https://issues.apache.org/jira/browse/SOLR-7990
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 5.2.1, Trunk, 5.4
>Reporter: Erick Erickson
>Assignee: Yonik Seeley
> Attachments: SOLR-7990.patch, SOLR-7990.patch, SOLR-7990.patch, 
> SOLR-7990.patch, SOLR-7990_filterFix.patch
>
>
> William Bell raised a question on the user's list. The scenario is
> > send a query that exceeds timeAllowed
> > send another identical query with larger timeAllowed that does NOT time out
> The results from the second query are not correct, they reflect the doc count 
> from the first query.
> It apparently has to do with filter queries being inappropriately created and 
> re-used. I've attached a test case that illustrates the problem.
> There are three tests here. 
> testFilterSimpleCase shows the problem.
> testCacheAssumptions is my hack at what I _think_ the states of the caches 
> should be, but has a bunch of clutter so I'm Ignoring it for now. This should 
> be un-ignored and testFilterSimpleCase removed when there's any fix proposed. 
> The assumptions may not be correct though.
> testQueryResults shows what I think is a problem: the second call, which does 
> NOT exceed timeAllowed, still reports partial results.






[jira] [Commented] (LUCENE-5205) SpanQueryParser with recursion, analysis and syntax very similar to classic QueryParser

2015-09-08 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14735398#comment-14735398
 ] 

David Smiley commented on LUCENE-5205:
--

Can you please post what the output is now and what you expect it to be?

> SpanQueryParser with recursion, analysis and syntax very similar to classic 
> QueryParser
> ---
>
> Key: LUCENE-5205
> URL: https://issues.apache.org/jira/browse/LUCENE-5205
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/queryparser
>Reporter: Tim Allison
>  Labels: patch
> Attachments: LUCENE-5205-cleanup-tests.patch, 
> LUCENE-5205-date-pkg-prvt.patch, LUCENE-5205.patch.gz, LUCENE-5205.patch.gz, 
> LUCENE-5205_dateTestReInitPkgPrvt.patch, 
> LUCENE-5205_improve_stop_word_handling.patch, 
> LUCENE-5205_smallTestMods.patch, LUCENE_5205.patch, 
> SpanQueryParser_v1.patch.gz, patch.txt
>
>
> This parser extends QueryParserBase and includes functionality from:
> * Classic QueryParser: most of its syntax
> * SurroundQueryParser: recursive parsing for "near" and "not" clauses.
> * ComplexPhraseQueryParser: can handle "near" queries that include multiterms 
> (wildcard, fuzzy, regex, prefix),
> * AnalyzingQueryParser: has an option to analyze multiterms.
> At a high level, there's a first pass BooleanQuery/field parser and then a 
> span query parser handles all terminal nodes and phrases.
> Same as classic syntax:
> * term: test 
> * fuzzy: roam~0.8, roam~2
> * wildcard: te?t, test*, t*st
> * regex: /\[mb\]oat/
> * phrase: "jakarta apache"
> * phrase with slop: "jakarta apache"~3
> * default "or" clause: jakarta apache
> * grouping "or" clause: (jakarta apache)
> * boolean and +/-: (lucene OR apache) NOT jakarta; +lucene +apache -jakarta
> * multiple fields: title:lucene author:hatcher
>  
> Main additions in SpanQueryParser syntax vs. classic syntax:
> * Can require "in order" for phrases with slop with the \~> operator: 
> "jakarta apache"\~>3
> * Can specify "not near": "fever bieber"!\~3,10 ::
> find "fever" but not if "bieber" appears within 3 words before or 10 
> words after it.
> * Fully recursive phrasal queries with \[ and \]; as in: \[\[jakarta 
> apache\]~3 lucene\]\~>4 :: 
> find "jakarta" within 3 words of "apache", and that hit has to be within 
> four words before "lucene"
> * Can also use \[\] for single level phrasal queries instead of " as in: 
> \[jakarta apache\]
> * Can use "or grouping" clauses in phrasal queries: "apache (lucene solr)"\~3 
> :: find "apache" and then either "lucene" or "solr" within three words.
> * Can use multiterms in phrasal queries: "jakarta\~1 ap*che"\~2
> * Did I mention full recursion: \[\[jakarta\~1 ap*che\]\~2 (solr~ 
> /l\[ou\]\+\[cs\]\[en\]\+/)]\~10 :: Find something like "jakarta" within two 
> words of "ap*che" and that hit has to be within ten words of something like 
> "solr" or that "lucene" regex.
> * Can require at least x number of hits at the boolean level: apache AND 
> (lucene solr tika)~2
> * Can use negative only query: -jakarta :: Find all docs that don't contain 
> "jakarta"
> * Can use an edit distance > 2 for fuzzy query via SlowFuzzyQuery (beware of 
> potential performance issues!).
> Trivial additions:
> * Can specify prefix length in fuzzy queries: jakarta~1,2 (edit distance =1, 
> prefix =2)
> * Can specify Optimal String Alignment (OSA) vs. Levenshtein for distance 
> <=2: jakarta~1 (OSA) vs. jakarta~>1 (Levenshtein)
> This parser can be very useful for concordance tasks (see also LUCENE-5317 
> and LUCENE-5318) and for analytical search.  
> Until LUCENE-2878 is closed, this might have a use for fans of SpanQuery.
> Most of the documentation is in the javadoc for SpanQueryParser.
> Any and all feedback is welcome.  Thank you.
> Until this is added to the Lucene project, I've added a standalone 
> lucene-addons repo (with jars compiled for the latest stable build of Lucene) 
>  on [github|https://github.com/tballison/lucene-addons].






[JENKINS] Lucene-Solr-Tests-trunk-Java8 - Build # 377 - Still Failing

2015-09-08 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-trunk-Java8/377/

1 tests failed.
REGRESSION:  org.apache.solr.cloud.DistribCursorPagingTest.test

Error Message:
expected:<1264> but was:<902>

Stack Trace:
java.lang.AssertionError: expected:<1264> but was:<902>
at 
__randomizedtesting.SeedInfo.seed([3A0D3EFE9E0FFC60:B259012430F39198]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at org.junit.Assert.assertEquals(Assert.java:456)
at 
org.apache.solr.cloud.DistribCursorPagingTest.doRandomSortsOnLargeIndex(DistribCursorPagingTest.java:599)
at 
org.apache.solr.cloud.DistribCursorPagingTest.test(DistribCursorPagingTest.java:93)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:963)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:938)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 

[jira] [Updated] (LUCENE-5503) Trivial fixes to WeightedSpanTermExtractor

2015-09-08 Thread David Smiley (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5503?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley updated LUCENE-5503:
-
Assignee: David Smiley

I'll take a look at this by next week.

> Trivial fixes to WeightedSpanTermExtractor
> --
>
> Key: LUCENE-5503
> URL: https://issues.apache.org/jira/browse/LUCENE-5503
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/highlighter
>Affects Versions: 4.7
>Reporter: Tim Allison
>Assignee: David Smiley
>Priority: Minor
> Attachments: LUCENE-5503.patch
>
>
> The conversion of PhraseQuery to SpanNearQuery miscalculates the slop if 
> there are stop words in some cases.  The issue only really appears if there 
> is more than one intervening run of stop words: ab the cd the the ef.
> I also noticed that the inOrder determination is based on the newly 
> calculated slop, and it should probably be based on the original 
> phraseQuery.getSlop().
> Patch and unit tests are on the way.
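The position-gap arithmetic behind the described fix can be illustrated as follows; this is a sketch of the idea, not the actual patch. With stop words removed at index time, "ab the cd the the ef" leaves ab/cd/ef at positions 0, 2, and 5, so the equivalent SpanNearQuery slop must absorb every hole the stop words left behind, not just the first run.

```java
// Sketch: slop needed for a SpanNearQuery built from phrase-term positions
// that contain gaps where stop words were removed.
public class SlopSketch {
    static int spanSlop(int[] positions, int phraseSlop) {
        int n = positions.length;
        int span = positions[n - 1] - positions[0]; // distance actually covered
        int gaps = span - (n - 1);                  // holes from removed terms
        return phraseSlop + gaps;
    }

    public static void main(String[] args) {
        // "ab the cd the the ef" -> positions 0, 2, 5: two runs of stop words,
        // three missing positions in total
        if (spanSlop(new int[] {0, 2, 5}, 0) != 3) throw new AssertionError();
        // "ab the cd" -> positions 0, 2: a single one-word gap
        if (spanSlop(new int[] {0, 2}, 0) != 1) throw new AssertionError();
    }
}
```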






[jira] [Commented] (LUCENE-6787) BooleanQuery should be able to drop duplicate non-scoring clauses

2015-09-08 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6787?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14735625#comment-14735625
 ] 

Adrien Grand commented on LUCENE-6787:
--

The patch looks good to me. Maybe we could just create the HashSet all the time 
to keep the logic as simple as possible?
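For illustration, the rewrite-time deduplication under discussion might look like the following self-contained sketch; the `Clause` and `Occur` types here are stand-ins, not Lucene's actual BooleanClause API.

```java
import java.util.ArrayList;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;

// Sketch of dropping duplicate non-scoring clauses during rewrite.
public class DedupSketch {
    enum Occur { MUST, SHOULD, MUST_NOT, FILTER }

    record Clause(Occur occur, String query) {}

    // Duplicate MUST_NOT/FILTER clauses never affect scores, so they can be
    // dropped; duplicate scoring clauses can change the score and are kept.
    static List<Clause> dedup(List<Clause> clauses) {
        Set<Clause> seenNonScoring = new LinkedHashSet<>();
        List<Clause> result = new ArrayList<>();
        for (Clause c : clauses) {
            boolean nonScoring =
                c.occur() == Occur.MUST_NOT || c.occur() == Occur.FILTER;
            if (nonScoring && !seenNonScoring.add(c)) {
                continue; // duplicate non-scoring clause: safe to drop
            }
            result.add(c);
        }
        return result;
    }

    public static void main(String[] args) {
        List<Clause> in = List.of(
            new Clause(Occur.FILTER, "color:red"),
            new Clause(Occur.FILTER, "color:red"),   // duplicate, dropped
            new Clause(Occur.SHOULD, "brand:acme"),
            new Clause(Occur.SHOULD, "brand:acme")); // scoring duplicate, kept
        if (dedup(in).size() != 3) throw new AssertionError();
    }
}
```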

> BooleanQuery should be able to drop duplicate non-scoring clauses
> -
>
> Key: LUCENE-6787
> URL: https://issues.apache.org/jira/browse/LUCENE-6787
> Project: Lucene - Core
>  Issue Type: Improvement
>Affects Versions: Trunk
>Reporter: Terry Smith
>Priority: Minor
> Attachments: LUCENE-6787.patch
>
>
> Pulling out of the discussion on LUCENE-6305.
> BooleanQuery could drop duplicate non-scoring (MUST_NOT, FILTER) clauses.






[jira] [Updated] (SOLR-7435) NPE can occur if CollapsingQParserPlugin is used two or more times in the same query

2015-09-08 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7435?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-7435:
-
Description: 
The problem is that in Solr 4.10.3, 
CollapsingFieldValueCollector.finish(CollapsingQParserPlugin.java:632) looks 
ahead to the next segment. When you use the CollapsingQParser only once, that 
look-ahead is always populated because each segment is processed by the 
scorers. The CollapsingQParser plugin does not process every segment, though; 
it stops when it runs out of documents that have been collected. So the 
look-ahead can cause a null pointer in the second collapse. This is a problem 
in every version of the CollapsingQParserPlugin.
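The look-ahead failure can be modeled with a minimal sketch; this illustrates the pattern, not the plugin's actual code. The look-ahead field is populated only when scoring advances through the segments, so a collector that stops early must guard against it being null.

```java
// Toy model: a collector caches a reference to the next segment's values
// while scoring. If collection stops early (as the second collapse does),
// the look-ahead is never populated and an unguarded finish() throws NPE.
public class LookAheadSketch {
    private int[] nextSegmentValues; // set only when the scorer advances

    void setNextSegment(int[] values) {
        nextSegmentValues = values;
    }

    int finish() {
        // Guarded version: tolerate an unpopulated look-ahead.
        if (nextSegmentValues == null) return 0;
        return nextSegmentValues.length;
    }

    public static void main(String[] args) {
        LookAheadSketch early = new LookAheadSketch(); // stopped early: never set
        if (early.finish() != 0) throw new AssertionError();

        LookAheadSketch full = new LookAheadSketch();
        full.setNextSegment(new int[] {7, 8});
        if (full.finish() != 2) throw new AssertionError();
    }
}
```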


Below is the original description from Markus:

Not even sure it would work anyway; I tried to collapse on two distinct fields, 
ending up with this:

select?q=*:*&fq={!collapse field=qst}&fq={!collapse field=rdst}

{code}
584550 [qtp1121454968-20] ERROR org.apache.solr.servlet.SolrDispatchFilter  [   
suggests] – null:java.lang.NullPointerException
at 
org.apache.solr.search.CollapsingQParserPlugin$IntScoreCollector.finish(CollapsingQParserPlugin.java:743)
at 
org.apache.solr.search.CollapsingQParserPlugin$IntScoreCollector.finish(CollapsingQParserPlugin.java:780)
at 
org.apache.solr.search.SolrIndexSearcher.buildAndRunCollectorChain(SolrIndexSearcher.java:203)
at 
org.apache.solr.search.SolrIndexSearcher.getDocListNC(SolrIndexSearcher.java:1660)
at 
org.apache.solr.search.SolrIndexSearcher.getDocListC(SolrIndexSearcher.java:1479)
at 
org.apache.solr.search.SolrIndexSearcher.search(SolrIndexSearcher.java:556)
at 
org.apache.solr.handler.component.QueryComponent.process(QueryComponent.java:518)
at 
org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:222)
at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:143)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:1984)
at 
org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:829)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:446)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:220)
at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1419)
at 
org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:455)
at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137)
at 
org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:557)
at 
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:231)
at 
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1075)
at 
org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:384)
at 
org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:193)
at 
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1009)
at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135)
at 
org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:255)
at 
org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:154)
at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:116)
at org.eclipse.jetty.server.Server.handle(Server.java:368)
at 
org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:489)
at 
org.eclipse.jetty.server.BlockingHttpConnection.handleRequest(BlockingHttpConnection.java:53)
at 
org.eclipse.jetty.server.AbstractHttpConnection.headerComplete(AbstractHttpConnection.java:942)
at 
org.eclipse.jetty.server.AbstractHttpConnection$RequestHandler.headerComplete(AbstractHttpConnection.java:1004)
at org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:640)
at org.eclipse.jetty.http.HttpParser.parseAvailable(HttpParser.java:235)
at 
org.eclipse.jetty.server.BlockingHttpConnection.handle(BlockingHttpConnection.java:72)
at 
org.eclipse.jetty.server.bio.SocketConnector$ConnectorEndPoint.run(SocketConnector.java:264)
at 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:608)
at 
org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:543)
at java.lang.Thread.run(Thread.java:745)
{code}


[jira] [Updated] (SOLR-7435) NPE can occur if CollapsingQParserPlugin is used two or more times in the same query

2015-09-08 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7435?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-7435:
-
Description: 
The problem is that in Solr 4.10.3, 
CollapsingFieldValueCollector.finish(CollapsingQParserPlugin.java:632) looks 
ahead to the next segment. When you use the CollapsingQParser only once, that 
look-ahead is always populated because each segment is processed by the 
scorers. The CollapsingQParser plugin does not process every segment, though; 
it stops when it runs out of documents that have been collected. So the 
look-ahead can cause a null pointer in the second collapse. This is a problem 
in every version of the CollapsingQParserPlugin.


Below is the original description from Markus:

Not even sure it would work anyway; I tried to collapse on two distinct fields, 
ending up with this:

select?q=*:*&fq={!collapse field=qst}&fq={!collapse field=rdst}

{code}
584550 [qtp1121454968-20] ERROR org.apache.solr.servlet.SolrDispatchFilter  [   
suggests] – null:java.lang.NullPointerException
at 
org.apache.solr.search.CollapsingQParserPlugin$IntScoreCollector.finish(CollapsingQParserPlugin.java:743)
at 
org.apache.solr.search.CollapsingQParserPlugin$IntScoreCollector.finish(CollapsingQParserPlugin.java:780)
at 
org.apache.solr.search.SolrIndexSearcher.buildAndRunCollectorChain(SolrIndexSearcher.java:203)
at 
org.apache.solr.search.SolrIndexSearcher.getDocListNC(SolrIndexSearcher.java:1660)
at 
org.apache.solr.search.SolrIndexSearcher.getDocListC(SolrIndexSearcher.java:1479)
at 
org.apache.solr.search.SolrIndexSearcher.search(SolrIndexSearcher.java:556)
at 
org.apache.solr.handler.component.QueryComponent.process(QueryComponent.java:518)
at 
org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:222)
at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:143)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:1984)
at 
org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:829)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:446)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:220)
at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1419)
at 
org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:455)
at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137)
at 
org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:557)
at 
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:231)
at 
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1075)
at 
org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:384)
at 
org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:193)
at 
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1009)
at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135)
at 
org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:255)
at 
org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:154)
at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:116)
at org.eclipse.jetty.server.Server.handle(Server.java:368)
at 
org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:489)
at 
org.eclipse.jetty.server.BlockingHttpConnection.handleRequest(BlockingHttpConnection.java:53)
at 
org.eclipse.jetty.server.AbstractHttpConnection.headerComplete(AbstractHttpConnection.java:942)
at 
org.eclipse.jetty.server.AbstractHttpConnection$RequestHandler.headerComplete(AbstractHttpConnection.java:1004)
at org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:640)
at org.eclipse.jetty.http.HttpParser.parseAvailable(HttpParser.java:235)
at 
org.eclipse.jetty.server.BlockingHttpConnection.handle(BlockingHttpConnection.java:72)
at 
org.eclipse.jetty.server.bio.SocketConnector$ConnectorEndPoint.run(SocketConnector.java:264)
at 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:608)
at 
org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:543)
at java.lang.Thread.run(Thread.java:745)
{code}

  was:
The problem is that 
CollapsingFieldValueCollector.finish(CollapsingQParserPlugin.java:632) is 
looking ahead to the next segment. When you use the CollapsingQParser only once 
that look-ahead is always populated because each segment is processed by the 
scorers. The CollapsingQParser 

[jira] [Commented] (SOLR-7613) solrcore.properties file should be loaded if it resides in ZooKeeper

2015-09-08 Thread Steve Davids (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7613?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14735665#comment-14735665
 ] 

Steve Davids commented on SOLR-7613:


I went ahead and swapped our {{solrcore.properties}} over to 
{{configoverlay.json}} and it worked like a champ. Using the API we had a 
chicken-and-egg problem: the core wouldn't come up unless we had some 
properties specified, but we couldn't specify the properties without having 
the core up and running. Thanks for the suggestion [~noble.paul], I think this 
ticket is safe to be withdrawn.
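
For anyone hitting the same thing, a sketch of what the resulting overlay can 
look like (the property names below are hypothetical; the overlay is normally 
written via the Config API's set-user-property command rather than edited by 
hand, and user properties land under userProps):

```json
{
  "userProps": {
    "my.data.dir": "/var/solr/data",
    "my.autoCommit.maxTime": "15000"
  }
}
```

Properties defined this way can then be referenced from solrconfig.xml with the 
same ${my.autoCommit.maxTime}-style substitution that solrcore.properties 
entries used.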

> solrcore.properties file should be loaded if it resides in ZooKeeper
> 
>
> Key: SOLR-7613
> URL: https://issues.apache.org/jira/browse/SOLR-7613
> Project: Solr
>  Issue Type: Bug
>Reporter: Steve Davids
> Fix For: Trunk, 5.4
>
>
> The solrcore.properties file is used to load user-defined properties, used 
> primarily in the solrconfig.xml file. However, this properties file will only 
> load if it resides in the core/conf directory on the physical disk; it will 
> not load if it is in ZK's core/conf directory. There should be a mechanism 
> to allow a core properties file to be specified in ZK and updated 
> appropriately, along with being able to reload the properties when the file 
> changes (or via a core reload).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7990) timeAllowed is returning wrong results on the same query submitted with different timeAllowed limits

2015-09-08 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7990?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14735666#comment-14735666
 ] 

Yonik Seeley commented on SOLR-7990:


I'm working on test fixes... my test changes were no good, since timeAllowed 
does not include query parsing time (which is where the time goes in the sleep 
function).
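
For illustration, a self-contained sketch (hypothetical code, not Solr's actual 
TimeLimitingCollector) of why the timeAllowed budget never sees parsing time: 
the deadline is established and checked only inside the collection loop, after 
parsing has already completed.

```java
// Sketch (hypothetical, not Solr's actual implementation) of why timeAllowed
// does not cover query parsing: the deadline is set and enforced only inside
// the collection loop, after any pre-collection work has finished.
public class TimeAllowedDemo {
    static int collectWithDeadline(int numDocs, long timeAllowedMillis, long costPerDocMillis) {
        long deadline = System.currentTimeMillis() + timeAllowedMillis;
        int collected = 0;
        for (int doc = 0; doc < numDocs; doc++) {
            if (System.currentTimeMillis() > deadline) break; // partial results
            simulateWork(costPerDocMillis);
            collected++;
        }
        return collected;
    }

    static void simulateWork(long millis) {
        try { Thread.sleep(millis); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
    }

    public static void main(String[] args) {
        simulateWork(50); // "parsing": happens before the deadline is even set
        int collected = collectWithDeadline(100, 20, 5);
        System.out.println(collected + " docs collected before the budget expired");
    }
}
```

A sleep placed in the "parsing" phase never trips the budget, which is why a 
sleep-based test has to make sure its delay lands inside collection.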

> timeAllowed is returning wrong results on the same query submitted with 
> different timeAllowed limits
> 
>
> Key: SOLR-7990
> URL: https://issues.apache.org/jira/browse/SOLR-7990
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 5.2.1, Trunk, 5.4
>Reporter: Erick Erickson
>Assignee: Yonik Seeley
> Attachments: SOLR-7990.patch, SOLR-7990.patch, SOLR-7990.patch, 
> SOLR-7990.patch, SOLR-7990_filterFix.patch
>
>
> William Bell raised a question on the user's list. The scenario is
> > send a query that exceeds timeAllowed
> > send another identical query with larger timeAllowed that does NOT time out
> The results from the second query are not correct; they reflect the doc count 
> from the first query.
> It apparently has to do with filter queries being inappropriately created and 
> re-used. I've attached a test case that illustrates the problem.
> There are three tests here. 
> testFilterSimpleCase shows the problem.
> testCacheAssumptions is my hack at what I _think_ the states of the caches 
> should be, but has a bunch of clutter so I'm Ignoring it for now. This should 
> be un-ignored and testFilterSimpleCase removed when there's any fix proposed. 
> The assumptions may not be correct though.
> testQueryResults shows what I think is a problem: the second call, which does 
> NOT exceed timeAllowed, still reports partial results.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6974) Rare DistribCursorPagingTest fail.

2015-09-08 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6974?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14735601#comment-14735601
 ] 

Steve Rowe commented on SOLR-6974:
--

Yet another, different, fail on ASF Jenkins 
[https://builds.apache.org/job/Lucene-Solr-Tests-trunk-Java8/377/], again does 
not reproduce for me on OS X:

{noformat}
   [junit4]   2> NOTE: reproduce with: ant test  
-Dtestcase=DistribCursorPagingTest -Dtests.method=test 
-Dtests.seed=3A0D3EFE9E0FFC60 -Dtests.multiplier=2 -Dtests.slow=true 
-Dtests.locale= -Dtests.timezone=Europe/Belfast -Dtests.asserts=true 
-Dtests.file.encoding=US-ASCII
   [junit4] FAILURE 55.0s J1 | DistribCursorPagingTest.test <<<
   [junit4]> Throwable #1: java.lang.AssertionError: expected:<1264> but 
was:<902>
   [junit4]>at 
__randomizedtesting.SeedInfo.seed([3A0D3EFE9E0FFC60:B259012430F39198]:0)
   [junit4]>at 
org.apache.solr.cloud.DistribCursorPagingTest.doRandomSortsOnLargeIndex(DistribCursorPagingTest.java:599)
   [junit4]>at 
org.apache.solr.cloud.DistribCursorPagingTest.test(DistribCursorPagingTest.java:93)
   [junit4]>at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:963)
   [junit4]>at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:938)
   [junit4]>at java.lang.Thread.run(Thread.java:745)
   [junit4]   2> 168568 INFO  
(SUITE-DistribCursorPagingTest-seed#[3A0D3EFE9E0FFC60]-worker) 
[n:127.0.0.1:56660_ c:collection1 s:shard1 r:core_node4 x:collection1] 
o.a.s.SolrTestCaseJ4 ###deleteCore
   [junit4]   2> NOTE: leaving temporary files on disk at: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-trunk-Java8/solr/build/solr-core/test/J1/temp/solr.cloud.DistribCursorPagingTest_3A0D3EFE9E0FFC60-001
   [junit4]   2> Sep 08, 2015 3:03:00 PM 
com.carrotsearch.randomizedtesting.ThreadLeakControl checkThreadLeaks
   [junit4]   2> WARNING: Will linger awaiting termination of 1 leaked 
thread(s).
   [junit4]   2> 169769 WARN  
(OverseerStateUpdate-94485802195484690-127.0.0.1:56660_-n_04) 
[n:127.0.0.1:56660_] o.a.s.c.Overseer Solr cannot talk to ZK, exiting 
Overseer main queue loop
   [junit4]   2> org.apache.zookeeper.KeeperException$SessionExpiredException: 
KeeperErrorCode = Session expired for /overseer/queue/qn-24
   [junit4]   2>at 
org.apache.zookeeper.KeeperException.create(KeeperException.java:127)
   [junit4]   2>at 
org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
   [junit4]   2>at 
org.apache.zookeeper.ZooKeeper.getData(ZooKeeper.java:1155)
   [junit4]   2>at 
org.apache.solr.common.cloud.SolrZkClient$7.execute(SolrZkClient.java:353)
   [junit4]   2>at 
org.apache.solr.common.cloud.SolrZkClient$7.execute(SolrZkClient.java:350)
   [junit4]   2>at 
org.apache.solr.common.cloud.ZkCmdExecutor.retryOperation(ZkCmdExecutor.java:61)
   [junit4]   2>at 
org.apache.solr.common.cloud.SolrZkClient.getData(SolrZkClient.java:350)
   [junit4]   2>at 
org.apache.solr.cloud.DistributedQueue.removeFirst(DistributedQueue.java:384)
   [junit4]   2>at 
org.apache.solr.cloud.DistributedQueue.poll(DistributedQueue.java:187)
   [junit4]   2>at 
org.apache.solr.cloud.Overseer$ClusterStateUpdater.run(Overseer.java:229)
   [junit4]   2>at java.lang.Thread.run(Thread.java:745)
   [junit4]   2> 169770 INFO  
(OverseerStateUpdate-94485802195484690-127.0.0.1:56660_-n_04) 
[n:127.0.0.1:56660_] o.a.s.c.Overseer Overseer Loop exiting : 
127.0.0.1:56660_
   [junit4]   2> NOTE: test params are: codec=CheapBastard, 
sim=DefaultSimilarity, locale=, timezone=Europe/Belfast
   [junit4]   2> NOTE: Linux 3.13.0-52-generic amd64/Oracle Corporation 
1.8.0_45 (64-bit)/cpus=4,threads=1,free=186141504,total=382730240
   [junit4]   2> NOTE: All tests run in this JVM: [BufferStoreTest, 
CachingDirectoryFactoryTest, PreAnalyzedFieldTest, TestDistributedGrouping, 
HdfsLockFactoryTest, RuleEngineTest, TestTrie, TestClusterStateMutator, 
TestSurroundQueryParser, SliceStateTest, BaseCdcrDistributedZkTest, 
SSLMigrationTest, DistributedExpandComponentTest, TestMissingGroups, 
VMParamsZkACLAndCredentialsProvidersTest, DistribCursorPagingTest]
{noformat}

> Rare DistribCursorPagingTest fail. 
> ---
>
> Key: SOLR-6974
> URL: https://issues.apache.org/jira/browse/SOLR-6974
> Project: Solr
>  Issue Type: Test
>Reporter: Mark Miller
> Attachments: fail.log, fail2.log
>
>
> {noformat}
>[junit4] FAILURE 20.6s J1 | DistribCursorPagingTest.testDistribSearch <<<
>[junit4]> Throwable #1: java.lang.AssertionError: Expected 175 docs 
> but got 174. sort=date asc, id asc. 

[jira] [Commented] (LUCENE-6305) BooleanQuery.equals should ignore clause order

2015-09-08 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6305?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14735615#comment-14735615
 ] 

Adrien Grand commented on LUCENE-6305:
--

Actually, I think of this change more as a bug fix than as an optimization: the 
order of the clauses has no meaning for BooleanQuery, so it is silly that it is 
taken into account for equals/hashCode. I don't see why anyone would want 
clause order to be meaningful.
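
The symptom is easy to reproduce in miniature. Below is a plain-Java sketch 
(hypothetical Clause and query classes, not Lucene's API) contrasting an 
ordered-list equals with a clause-multiset equals:

```java
import java.util.*;

// Hypothetical stand-ins for Lucene's BooleanClause/BooleanQuery, illustrating
// why an order-sensitive equals is surprising.
final class Clause {
    final String term;
    final boolean required;
    Clause(String term, boolean required) { this.term = term; this.required = required; }
    @Override public boolean equals(Object o) {
        if (!(o instanceof Clause)) return false;
        Clause c = (Clause) o;
        return required == c.required && term.equals(c.term);
    }
    @Override public int hashCode() { return Objects.hash(term, required); }
}

final class BoolQuerySketch {
    final List<Clause> clauses = new ArrayList<>();
    void add(Clause c) { clauses.add(c); }

    // Order-sensitive (old behavior): List.equals compares positionally.
    boolean equalsOrdered(BoolQuerySketch other) { return clauses.equals(other.clauses); }

    // Order-insensitive (proposed): compare clause multisets, so duplicate
    // clauses still count but position does not matter.
    boolean equalsUnordered(BoolQuerySketch other) {
        return counts(clauses).equals(counts(other.clauses));
    }

    private static Map<Clause, Integer> counts(List<Clause> cs) {
        Map<Clause, Integer> m = new HashMap<>();
        for (Clause c : cs) m.merge(c, 1, Integer::sum);
        return m;
    }
}

public class ClauseOrderDemo {
    public static void main(String[] args) {
        BoolQuerySketch ab = new BoolQuerySketch();
        ab.add(new Clause("A", true)); ab.add(new Clause("B", true));
        BoolQuerySketch ba = new BoolQuerySketch();
        ba.add(new Clause("B", true)); ba.add(new Clause("A", true));
        System.out.println(ab.equalsOrdered(ba));   // false: "+A +B" != "+B +A"
        System.out.println(ab.equalsUnordered(ba)); // true: same matches and scores
    }
}
```

The multiset comparison matters: "+A +A +B" should still differ from "+A +B", 
so a plain Set of clauses would not be enough.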



> BooleanQuery.equals should ignore clause order
> --
>
> Key: LUCENE-6305
> URL: https://issues.apache.org/jira/browse/LUCENE-6305
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Adrien Grand
>Assignee: Adrien Grand
>Priority: Minor
> Attachments: LUCENE-6305.patch, LUCENE-6305.patch
>
>
> BooleanQuery.equals is sensitive to the order in which clauses have been 
> added. So for instance "+A +B" would be considered different from "+B +A" 
> although it generates the same matches and scores.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7435) NPE in FieldCollapsingQParser

2015-09-08 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7435?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14735614#comment-14735614
 ] 

Joel Bernstein commented on SOLR-7435:
--

I went back and reviewed the CollapsingQParserPlugin based on Brandon's stack 
trace and I think I see how this null pointer could occur.

The problem is that 
CollapsingFieldValueCollector.finish(CollapsingQParserPlugin.java:632) is 
looking ahead to the next segment. When you use the CollapsingQParser only 
once, that look-ahead is always populated because each segment is processed by 
the scorers. The CollapsingQParser plugin does not process every segment, 
though; it stops when it runs out of documents that have been collected. So 
the look-ahead can cause a null pointer in the second collapse.

So, there is now a confirmed problem with using the CollapsingQParserPlugin 
twice in the same request.

Any other collector that does a similar look-ahead would also have the same 
problem if it followed the CollapsingQParserPlugin.

The best solution to this would be for the CollapsingQParser plugin to process 
all the segments in the finish() method even if it runs out of documents. 
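
A minimal sketch of that failure mode (hypothetical classes, not Solr's actual 
collector API): a downstream collector whose per-segment state is set only for 
segments that were pushed through it hits an NPE when the collapse stops early, 
while a finish() that pushes every segment avoids it.

```java
import java.util.*;

// Minimal sketch (hypothetical classes, not Solr's API) of the look-ahead bug:
// a collector that caches segment state only works if every segment is pushed
// through it. A collapsing collector that replays only segments still holding
// collapsed document heads can leave that state null.
final class Segment {
    final String name;
    Segment(String name) { this.name = name; }
}

// Downstream collector: remembers the segment it expects to finish on.
final class LookAheadCollector {
    Segment current;                       // set when a segment is pushed to us
    void setNextReader(Segment s) { current = s; }
    String finish() {
        return "finished on " + current.name; // NPE if no segment was ever pushed
    }
}

final class CollapsingCollectorSketch {
    private final List<Segment> collectedSegments = new ArrayList<>();
    void collect(Segment s) { collectedSegments.add(s); }

    // Buggy finish(): replays only segments where documents were collected,
    // so a second collapse that collected nothing pushes no segments at all.
    void finishBuggy(LookAheadCollector delegate) {
        for (Segment s : collectedSegments) delegate.setNextReader(s);
        delegate.finish();
    }

    // Fixed finish(): pushes every segment, even ones with no collected docs,
    // so the delegate's look-ahead is always populated.
    void finishFixed(List<Segment> allSegments, LookAheadCollector delegate) {
        for (Segment s : allSegments) delegate.setNextReader(s);
        delegate.finish();
    }
}

public class LookAheadDemo {
    public static void main(String[] args) {
        List<Segment> segments = Arrays.asList(new Segment("seg0"), new Segment("seg1"));
        CollapsingCollectorSketch secondCollapse = new CollapsingCollectorSketch();
        // The second collapse collected nothing from these segments.
        try {
            secondCollapse.finishBuggy(new LookAheadCollector());
        } catch (NullPointerException e) {
            System.out.println("NPE, as in the stack trace");
        }
        secondCollapse.finishFixed(segments, new LookAheadCollector()); // no NPE
    }
}
```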

> NPE in FieldCollapsingQParser
> -
>
> Key: SOLR-7435
> URL: https://issues.apache.org/jira/browse/SOLR-7435
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 5.1
>Reporter: Markus Jelsma
>Priority: Minor
> Fix For: 5.2
>
>
> Not even sure it would work anyway, I tried to collapse on two distinct 
> fields, ending up with this:
> select?q=*:*&fq={!collapse field=qst}&fq={!collapse field=rdst}
> {code}
> 584550 [qtp1121454968-20] ERROR org.apache.solr.servlet.SolrDispatchFilter  [ 
>   suggests] – null:java.lang.NullPointerException
> at 
> org.apache.solr.search.CollapsingQParserPlugin$IntScoreCollector.finish(CollapsingQParserPlugin.java:743)
> at 
> org.apache.solr.search.CollapsingQParserPlugin$IntScoreCollector.finish(CollapsingQParserPlugin.java:780)
> at 
> org.apache.solr.search.SolrIndexSearcher.buildAndRunCollectorChain(SolrIndexSearcher.java:203)
> at 
> org.apache.solr.search.SolrIndexSearcher.getDocListNC(SolrIndexSearcher.java:1660)
> at 
> org.apache.solr.search.SolrIndexSearcher.getDocListC(SolrIndexSearcher.java:1479)
> at 
> org.apache.solr.search.SolrIndexSearcher.search(SolrIndexSearcher.java:556)
> at 
> org.apache.solr.handler.component.QueryComponent.process(QueryComponent.java:518)
> at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:222)
> at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:143)
> at org.apache.solr.core.SolrCore.execute(SolrCore.java:1984)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:829)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:446)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:220)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1419)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:455)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137)
> at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:557)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:231)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1075)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:384)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:193)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1009)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135)
> at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:255)
> at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:154)
> at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:116)
> at org.eclipse.jetty.server.Server.handle(Server.java:368)
> at 
> org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:489)
> at 
> org.eclipse.jetty.server.BlockingHttpConnection.handleRequest(BlockingHttpConnection.java:53)
> at 
> org.eclipse.jetty.server.AbstractHttpConnection.headerComplete(AbstractHttpConnection.java:942)
> at 
> 

[jira] [Comment Edited] (SOLR-7990) timeAllowed is returning wrong results on the same query submitted with different timeAllowed limits

2015-09-08 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7990?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14735628#comment-14735628
 ] 

Erick Erickson edited comment on SOLR-7990 at 9/8/15 9:17 PM:
--

Seems I'm the statue more often than the pigeon lately

assertU(adoc("id", Integer.toString(idx), "name", "b" + idx + NUM_DOCS));

should have been

assertU(adoc("id", Integer.toString(idx + NUM_DOCS), "name", "b" + idx));

Oops.


was (Author: erickerickson):
Seems I'm the pigeon more often than the statue lately

assertU(adoc("id", Integer.toString(idx), "name", "b" + idx + NUM_DOCS));

should have been

assertU(adoc("id", Integer.toString(idx + NUM_DOCS), "name", "b" + idx));

Oops.

> timeAllowed is returning wrong results on the same query submitted with 
> different timeAllowed limits
> 
>
> Key: SOLR-7990
> URL: https://issues.apache.org/jira/browse/SOLR-7990
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 5.2.1, Trunk, 5.4
>Reporter: Erick Erickson
>Assignee: Yonik Seeley
> Attachments: SOLR-7990.patch, SOLR-7990.patch, SOLR-7990.patch, 
> SOLR-7990.patch, SOLR-7990_filterFix.patch
>
>
> William Bell raised a question on the user's list. The scenario is
> > send a query that exceeds timeAllowed
> > send another identical query with larger timeAllowed that does NOT time out
> The results from the second query are not correct; they reflect the doc count 
> from the first query.
> It apparently has to do with filter queries being inappropriately created and 
> re-used. I've attached a test case that illustrates the problem.
> There are three tests here. 
> testFilterSimpleCase shows the problem.
> testCacheAssumptions is my hack at what I _think_ the states of the caches 
> should be, but has a bunch of clutter so I'm Ignoring it for now. This should 
> be un-ignored and testFilterSimpleCase removed when there's any fix proposed. 
> The assumptions may not be correct though.
> testQueryResults shows what I think is a problem: the second call, which does 
> NOT exceed timeAllowed, still reports partial results.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-6785) Consider merging Query.rewrite() into Query.createWeight()

2015-09-08 Thread Alan Woodward (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6785?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Woodward updated LUCENE-6785:
--
Attachment: LUCENE-6785.patch

Here's a first-pass patch, just changing things in lucene-core.  As David says, 
for quite a few queries this is a straight simplification, and in those cases 
where rewrites aren't just passed on, it's just a matter of moving the logic 
from rewrite() to createWeight().

I added a couple of tests for Adrien's caching case, specifically for BQ and 
DismaxQ.  Existing tests didn't seem to be picking up on those changes.  SpanOr 
might cause the same issue as well; I'll have a look at that.  But I think this 
is promising overall.
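
To make the "trappy" part concrete, here is a self-contained sketch 
(hypothetical query classes, not Lucene's) of the rewrite-until-fixpoint loop 
that IndexSearcher.rewrite() performs, and of what goes wrong when rewrite() is 
called only once:

```java
// Sketch (hypothetical classes, not Lucene's API) of the trap this issue
// removes: rewrite() must be looped until the query stops changing; calling
// it once can leave a query that still needs rewriting.
abstract class QuerySketch {
    // Returns this when no further rewriting is needed.
    QuerySketch rewrite() { return this; }
    abstract String describe();
}

// Each rewrite step peels exactly one wrapper layer.
final class WrapperQuery extends QuerySketch {
    final QuerySketch inner;
    WrapperQuery(QuerySketch inner) { this.inner = inner; }
    @Override QuerySketch rewrite() { return inner; }
    @Override String describe() { return "wrap(" + inner.describe() + ")"; }
}

final class TermQuerySketch extends QuerySketch {
    final String term;
    TermQuerySketch(String term) { this.term = term; }
    @Override String describe() { return term; }
}

public class RewriteLoopDemo {
    // Mirrors the loop IndexSearcher.rewrite() does: rewrite until fixpoint.
    static QuerySketch fullyRewrite(QuerySketch q) {
        QuerySketch rewritten = q.rewrite();
        while (rewritten != q) {
            q = rewritten;
            rewritten = q.rewrite();
        }
        return q;
    }

    public static void main(String[] args) {
        QuerySketch q = new WrapperQuery(new WrapperQuery(new TermQuerySketch("foo")));
        System.out.println(q.rewrite().describe());     // wrap(foo): one call is not enough
        System.out.println(fullyRewrite(q).describe()); // foo
    }
}
```

Code that calls rewrite() once (as some highlighting code does) behaves like 
the first call above; folding the logic into createWeight() removes the need 
for callers to know about the loop at all.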

> Consider merging Query.rewrite() into Query.createWeight()
> --
>
> Key: LUCENE-6785
> URL: https://issues.apache.org/jira/browse/LUCENE-6785
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Alan Woodward
> Attachments: LUCENE-6785.patch
>
>
> Prompted by the discussion on LUCENE-6590.
> Query.rewrite() is a bit of an oddity.  You call it to create a query for a 
> specific IndexSearcher, and to ensure that you get a query implementation 
> that has a working createWeight() method.  However, Weight itself already 
> encapsulates the notion of a per-searcher query.
> You also need to repeatedly call rewrite() until the query has stopped 
> rewriting itself, which is a bit trappy - there are a few places (in 
> highlighting code for example) that just call rewrite() once, rather than 
> looping round as IndexSearcher.rewrite() does.  Most queries don't need to be 
> called multiple times, however, so this seems a bit redundant.  And the ones 
> that do currently return un-rewritten queries can be changed simply enough to 
> rewrite them.
> Finally, in pretty much every case I can find in the codebase, rewrite() is 
> called purely as a prelude to createWeight().  This means, in the case of for 
> example large BooleanQueries, we end up cloning the whole query structure, 
> only to throw it away immediately.
> I'd like to try removing rewrite() entirely, and merging the logic into 
> createWeight(), simplifying the API and removing the trap where code only 
> calls rewrite once.  What do people think?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7990) timeAllowed is returning wrong results on the same query submitted with different timeAllowed limits

2015-09-08 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7990?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14735628#comment-14735628
 ] 

Erick Erickson commented on SOLR-7990:
--

Seems I'm the pigeon more often than the statue lately

assertU(adoc("id", Integer.toString(idx), "name", "b" + idx + NUM_DOCS));

should have been

assertU(adoc("id", Integer.toString(idx + NUM_DOCS), "name", "b" + idx));

Oops.

> timeAllowed is returning wrong results on the same query submitted with 
> different timeAllowed limits
> 
>
> Key: SOLR-7990
> URL: https://issues.apache.org/jira/browse/SOLR-7990
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 5.2.1, Trunk, 5.4
>Reporter: Erick Erickson
>Assignee: Yonik Seeley
> Attachments: SOLR-7990.patch, SOLR-7990.patch, SOLR-7990.patch, 
> SOLR-7990.patch, SOLR-7990_filterFix.patch
>
>
> William Bell raised a question on the user's list. The scenario is
> > send a query that exceeds timeAllowed
> > send another identical query with larger timeAllowed that does NOT time out
> The results from the second query are not correct; they reflect the doc count 
> from the first query.
> It apparently has to do with filter queries being inappropriately created and 
> re-used. I've attached a test case that illustrates the problem.
> There are three tests here. 
> testFilterSimpleCase shows the problem.
> testCacheAssumptions is my hack at what I _think_ the states of the caches 
> should be, but has a bunch of clutter so I'm Ignoring it for now. This should 
> be un-ignored and testFilterSimpleCase removed when there's any fix proposed. 
> The assumptions may not be correct though.
> testQueryResults shows what I think is a problem: the second call, which does 
> NOT exceed timeAllowed, still reports partial results.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8016) CloudSolrClient has extremely verbose error logging

2015-09-08 Thread Greg Pendlebury (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8016?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14735752#comment-14735752
 ] 

Greg Pendlebury commented on SOLR-8016:
---

Lowering the level to INFO would be good in our case, although, since after all 
the retries it will eventually error anyway, that would just delay the 
event... unless the error is thrown instead of logged. The Solr nodes were in a 
bad way and needed intervention from sysadmins because of locked index segments 
from a graceless shutdown.

Under this scenario, the UI clients were logging enormous amounts of useless 
content ('rootCause.toString()'), making it very difficult to find other lines 
in the log. Because the client also throws exceptions, we had already 
gracefully handled the outage by degrading functionality.

With regard to Markers, I have never used them personally, but before I 
suggested them I checked that both log4j and logback support them via slf4j. 
This covers both the Solr default (log4j) and the binding we use in production 
(logback), so I am selfishly happy with the possibility... and I think it is 
the simplest change. I didn't want to propose a rethink of the logging, or of 
that method's flow, but I am happy if this prompts that as well.
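
A toy sketch of the Marker idea (a hypothetical mini-logger, not slf4j's actual 
API; the marker name is made up) showing how a per-call marker lets the binding 
drop just these entries without muting the whole ERROR channel:

```java
import java.util.*;
import java.util.function.Predicate;

// Minimal sketch of marker-based filtering (hypothetical mini-logger, not
// slf4j): each log call carries a marker, and the binding's filter can drop
// entries by marker while leaving all other ERROR logging intact.
final class MarkedLogger {
    private final Predicate<String> markerFilter; // true = emit the entry
    private final List<String> emitted = new ArrayList<>();

    MarkedLogger(Predicate<String> markerFilter) { this.markerFilter = markerFilter; }

    void error(String marker, String message) {
        if (markerFilter.test(marker)) emitted.add("ERROR [" + marker + "] " + message);
    }

    List<String> emitted() { return emitted; }
}

public class MarkerFilterDemo {
    public static void main(String[] args) {
        // Filter configured (as log4j/logback can be via slf4j markers) to drop
        // the client's retry noise while keeping other errors.
        MarkedLogger log = new MarkedLogger(marker -> !"CLOUD_CLIENT_RETRY".equals(marker));
        log.error("CLOUD_CLIENT_RETRY", "Request to collection failed, retry? 1");
        log.error("CORE_INIT", "Index locked for write");
        System.out.println(log.emitted()); // only the CORE_INIT entry survives
    }
}
```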

> CloudSolrClient has extremely verbose error logging
> ---
>
> Key: SOLR-8016
> URL: https://issues.apache.org/jira/browse/SOLR-8016
> Project: Solr
>  Issue Type: Improvement
>  Components: clients - java
>Affects Versions: 5.2.1, Trunk
>Reporter: Greg Pendlebury
>Priority: Minor
>  Labels: easyfix
>
> CloudSolrClient has this error logging line which is fairly annoying:
> {code}
>   log.error("Request to collection {} failed due to ("+errorCode+
>   ") {}, retry? "+retryCount, collection, rootCause.toString());
> {code}
> Given that this is a client library that gets embedded into other 
> applications, this line is very problematic to handle gracefully. In the 
> example I was looking at today, every failed search was logging over 100 
> lines, including the full HTML response from the responding node in the 
> cluster.
> The resulting SolrServerException that comes out to our application is 
> handled appropriately but we can't stop this class complaining in logs 
> without suppressing the entire ERROR channel, which we don't want to do. This 
> is the only direct line writing to the log I could find in the client, so we 
> _could_ suppress errors, but that just feels dirty, and fragile for the 
> future.
> From looking at the code I am fairly certain it is not as simple as throwing 
> an exception instead of logging... it is right in the middle of the method. I 
> suspect the simplest answer is adding a marker 
> (http://www.slf4j.org/api/org/slf4j/Marker.html) to the logging call.
> Then solrj users can choose what to do with these log entries. I don't know 
> if there is a broader strategy for handling this that I am ignorant of; 
> apologies if that is the case.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6785) Consider merging Query.rewrite() into Query.createWeight()

2015-09-08 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6785?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14735655#comment-14735655
 ] 

Robert Muir commented on LUCENE-6785:
-

I didn't thoroughly examine the patch, but this part alone is worth the 
trouble. It's crazy that today, if you subclass Query, you only need to 
implement toString() for it to compile!

{noformat}
-  public Weight createWeight(IndexSearcher searcher, boolean needsScores) 
throws IOException {
-throw new UnsupportedOperationException("Query " + this + " does not 
implement createWeight");
-  }
+  public abstract Weight createWeight(IndexSearcher searcher, boolean 
needsScores) throws IOException;
{noformat}

> Consider merging Query.rewrite() into Query.createWeight()
> --
>
> Key: LUCENE-6785
> URL: https://issues.apache.org/jira/browse/LUCENE-6785
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Alan Woodward
> Attachments: LUCENE-6785.patch
>
>
> Prompted by the discussion on LUCENE-6590.
> Query.rewrite() is a bit of an oddity.  You call it to create a query for a 
> specific IndexSearcher, and to ensure that you get a query implementation 
> that has a working createWeight() method.  However, Weight itself already 
> encapsulates the notion of a per-searcher query.
> You also need to repeatedly call rewrite() until the query has stopped 
> rewriting itself, which is a bit trappy - there are a few places (in 
> highlighting code for example) that just call rewrite() once, rather than 
> looping round as IndexSearcher.rewrite() does.  Most queries don't need to be 
> called multiple times, however, so this seems a bit redundant.  And the ones 
> that do currently return un-rewritten queries can be changed simply enough to 
> rewrite them.
> Finally, in pretty much every case I can find in the codebase, rewrite() is 
> called purely as a prelude to createWeight().  This means, in the case of for 
> example large BooleanQueries, we end up cloning the whole query structure, 
> only to throw it away immediately.
> I'd like to try removing rewrite() entirely, and merging the logic into 
> createWeight(), simplifying the API and removing the trap where code only 
> calls rewrite once.  What do people think?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-7435) NPE can occur if CollapsingQParserPlugin is used two or more times in the same query

2015-09-08 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7435?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-7435:
-
Fix Version/s: (was: 5.2)
   5.4

> NPE can occur if CollapsingQParserPlugin is used two or more times in the 
> same query
> 
>
> Key: SOLR-7435
> URL: https://issues.apache.org/jira/browse/SOLR-7435
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 4.8.1, 4.9.1, 4.10.1, 4.10.3, 4.10.4, 5.1, 5.2, 5.2.1, 
> 5.3
>Reporter: Markus Jelsma
>Priority: Minor
> Fix For: 5.4
>
>
> The problem is that, in Solr 4.10.3, 
> CollapsingFieldValueCollector.finish(CollapsingQParserPlugin.java:632) is 
> looking ahead to the next segment. When you use the CollapsingQParser only 
> once, that look-ahead is always populated because each segment is processed 
> by the scorers. The CollapsingQParser plugin does not process every segment, 
> though; it stops when it runs out of documents that have been collected. So 
> the look-ahead can cause a null pointer in the second collapse. This is a 
> problem in every version of the CollapsingQParserPlugin.
> Below is the original description from Markus, which is another NPE during a 
> look-ahead in Solr 5.1:
> Not even sure it would work anyway, I tried to collapse on two distinct 
> fields, ending up with this:
> select?q=*:*&fq={!collapse field=qst}&fq={!collapse field=rdst}
> {code}
> 584550 [qtp1121454968-20] ERROR org.apache.solr.servlet.SolrDispatchFilter  [ 
>   suggests] – null:java.lang.NullPointerException
> at 
> org.apache.solr.search.CollapsingQParserPlugin$IntScoreCollector.finish(CollapsingQParserPlugin.java:743)
> at 
> org.apache.solr.search.CollapsingQParserPlugin$IntScoreCollector.finish(CollapsingQParserPlugin.java:780)
> at 
> org.apache.solr.search.SolrIndexSearcher.buildAndRunCollectorChain(SolrIndexSearcher.java:203)
> at 
> org.apache.solr.search.SolrIndexSearcher.getDocListNC(SolrIndexSearcher.java:1660)
> at 
> org.apache.solr.search.SolrIndexSearcher.getDocListC(SolrIndexSearcher.java:1479)
> at 
> org.apache.solr.search.SolrIndexSearcher.search(SolrIndexSearcher.java:556)
> at 
> org.apache.solr.handler.component.QueryComponent.process(QueryComponent.java:518)
> at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:222)
> at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:143)
> at org.apache.solr.core.SolrCore.execute(SolrCore.java:1984)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:829)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:446)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:220)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1419)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:455)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137)
> at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:557)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:231)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1075)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:384)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:193)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1009)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135)
> at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:255)
> at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:154)
> at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:116)
> at org.eclipse.jetty.server.Server.handle(Server.java:368)
> at 
> org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:489)
> at 
> org.eclipse.jetty.server.BlockingHttpConnection.handleRequest(BlockingHttpConnection.java:53)
> at 
> org.eclipse.jetty.server.AbstractHttpConnection.headerComplete(AbstractHttpConnection.java:942)
> at 
> org.eclipse.jetty.server.AbstractHttpConnection$RequestHandler.headerComplete(AbstractHttpConnection.java:1004)
> at org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:640)

[jira] [Updated] (SOLR-7435) NPE can occur if CollapsingQParserPlugin is used two or more times in the same query

2015-09-08 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7435?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-7435:
-
Affects Version/s: 4.8.1
   4.9.1
   4.10.1
   4.10.3
   4.10.4
   5.2
   5.2.1
   5.3

> NPE can occur if CollapsingQParserPlugin is used two or more times in the 
> same query
> 
>
> Key: SOLR-7435
> URL: https://issues.apache.org/jira/browse/SOLR-7435
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 4.8.1, 4.9.1, 4.10.1, 4.10.3, 4.10.4, 5.1, 5.2, 5.2.1, 
> 5.3
>Reporter: Markus Jelsma
>Priority: Minor
> Fix For: 5.2
>
>
> The problem is that 
> CollapsingFieldValueCollector.finish(CollapsingQParserPlugin.java:632) looks 
> ahead to the next segment. When you use the CollapsingQParser only once, that 
> look-ahead is always populated because each segment is processed by the 
> scorers. The CollapsingQParser plugin, however, does not process every 
> segment; it stops when it runs out of collected documents. So the look-ahead 
> can cause a null pointer in the second collapse. 
> Below is the original description from Markus:
> Not even sure it would work anyway; I tried to collapse on two distinct 
> fields, ending up with this:
> select?q=*:*&fq={!collapse field=qst}&fq={!collapse field=rdst}
> {code}
> 584550 [qtp1121454968-20] ERROR org.apache.solr.servlet.SolrDispatchFilter  [ 
>   suggests] – null:java.lang.NullPointerException
> at 
> org.apache.solr.search.CollapsingQParserPlugin$IntScoreCollector.finish(CollapsingQParserPlugin.java:743)
> at 
> org.apache.solr.search.CollapsingQParserPlugin$IntScoreCollector.finish(CollapsingQParserPlugin.java:780)
> at 
> org.apache.solr.search.SolrIndexSearcher.buildAndRunCollectorChain(SolrIndexSearcher.java:203)
> at 
> org.apache.solr.search.SolrIndexSearcher.getDocListNC(SolrIndexSearcher.java:1660)
> at 
> org.apache.solr.search.SolrIndexSearcher.getDocListC(SolrIndexSearcher.java:1479)
> at 
> org.apache.solr.search.SolrIndexSearcher.search(SolrIndexSearcher.java:556)
> at 
> org.apache.solr.handler.component.QueryComponent.process(QueryComponent.java:518)
> at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:222)
> at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:143)
> at org.apache.solr.core.SolrCore.execute(SolrCore.java:1984)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:829)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:446)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:220)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1419)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:455)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137)
> at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:557)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:231)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1075)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:384)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:193)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1009)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135)
> at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:255)
> at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:154)
> at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:116)
> at org.eclipse.jetty.server.Server.handle(Server.java:368)
> at 
> org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:489)
> at 
> org.eclipse.jetty.server.BlockingHttpConnection.handleRequest(BlockingHttpConnection.java:53)
> at 
> org.eclipse.jetty.server.AbstractHttpConnection.headerComplete(AbstractHttpConnection.java:942)
> at 
> org.eclipse.jetty.server.AbstractHttpConnection$RequestHandler.headerComplete(AbstractHttpConnection.java:1004)
> at org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:640)

[jira] [Updated] (SOLR-7435) NPE can occur if CollapsingQParserPlugin is used two or more times in the same query

2015-09-08 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7435?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-7435:
-
Description: 
The problem is that, in Solr 4.10.3, 
CollapsingFieldValueCollector.finish(CollapsingQParserPlugin.java:632) looks 
ahead to the next segment. When you use the CollapsingQParser only once, that 
look-ahead is always populated because each segment is processed by the 
scorers. The CollapsingQParser plugin, however, does not process every segment; 
it stops when it runs out of collected documents. So the look-ahead can cause a 
null pointer in the second collapse. This is a problem in every version of the 
CollapsingQParserPlugin.
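The failure mode described above can be sketched in a small stand-alone program. This is not Solr's actual code; the class and method names below are invented for illustration. A collector caches a look-ahead reference while it consumes segments, and if collection stops before any segment is visited, finish() dereferences a null look-ahead:

```java
// Self-contained sketch of the look-ahead NPE pattern (hypothetical names,
// not the real CollapsingQParserPlugin implementation).
import java.util.Arrays;
import java.util.List;

public class LookaheadSketch {

    static class Segment {
        final int docBase;
        Segment(int docBase) { this.docBase = docBase; }
    }

    static class CollapsingCollector {
        private final List<Segment> segments;
        private int cursor = 0;
        private Segment lookahead;  // only populated while segments are consumed

        CollapsingCollector(List<Segment> segments) { this.segments = segments; }

        // Processes at most `collectedDocs` segments, then stops -- analogous
        // to the collapse collector running out of collected documents early.
        void collect(int collectedDocs) {
            for (int i = 0; i < collectedDocs && cursor < segments.size(); i++) {
                lookahead = segments.get(cursor++);
            }
        }

        // Mirrors the bug: finish() assumes the look-ahead was populated.
        int finish() {
            return lookahead.docBase;  // NPE if collect() stopped before any segment
        }
    }

    static List<Segment> twoSegments() {
        return Arrays.asList(new Segment(0), new Segment(100));
    }

    // First collector visits every segment, so finish() succeeds.
    static int finishAfterFullCollect() {
        CollapsingCollector c = new CollapsingCollector(twoSegments());
        c.collect(2);
        return c.finish();
    }

    // Second collector stops before touching any segment: finish() throws NPE.
    static boolean finishFailsWhenStoppedEarly() {
        CollapsingCollector c = new CollapsingCollector(twoSegments());
        c.collect(0);
        try {
            c.finish();
            return false;
        } catch (NullPointerException e) {
            return true;
        }
    }

    public static void main(String[] args) {
        System.out.println("full collect finish: " + finishAfterFullCollect());
        System.out.println("early stop NPE: " + finishFailsWhenStoppedEarly());
    }
}
```

This is why a single collapse never trips the bug (every segment gets scored, so the look-ahead is always set) while a second collapse in the same query can.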


Below is the original description from Markus, which describes another NPE 
during a look-ahead, in Solr 5.1:

Not even sure it would work anyway; I tried to collapse on two distinct fields, 
ending up with this:

select?q=*:*&fq={!collapse field=qst}&fq={!collapse field=rdst}

{code}
584550 [qtp1121454968-20] ERROR org.apache.solr.servlet.SolrDispatchFilter  [   
suggests] – null:java.lang.NullPointerException
at 
org.apache.solr.search.CollapsingQParserPlugin$IntScoreCollector.finish(CollapsingQParserPlugin.java:743)
at 
org.apache.solr.search.CollapsingQParserPlugin$IntScoreCollector.finish(CollapsingQParserPlugin.java:780)
at 
org.apache.solr.search.SolrIndexSearcher.buildAndRunCollectorChain(SolrIndexSearcher.java:203)
at 
org.apache.solr.search.SolrIndexSearcher.getDocListNC(SolrIndexSearcher.java:1660)
at 
org.apache.solr.search.SolrIndexSearcher.getDocListC(SolrIndexSearcher.java:1479)
at 
org.apache.solr.search.SolrIndexSearcher.search(SolrIndexSearcher.java:556)
at 
org.apache.solr.handler.component.QueryComponent.process(QueryComponent.java:518)
at 
org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:222)
at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:143)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:1984)
at 
org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:829)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:446)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:220)
at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1419)
at 
org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:455)
at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137)
at 
org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:557)
at 
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:231)
at 
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1075)
at 
org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:384)
at 
org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:193)
at 
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1009)
at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135)
at 
org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:255)
at 
org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:154)
at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:116)
at org.eclipse.jetty.server.Server.handle(Server.java:368)
at 
org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:489)
at 
org.eclipse.jetty.server.BlockingHttpConnection.handleRequest(BlockingHttpConnection.java:53)
at 
org.eclipse.jetty.server.AbstractHttpConnection.headerComplete(AbstractHttpConnection.java:942)
at 
org.eclipse.jetty.server.AbstractHttpConnection$RequestHandler.headerComplete(AbstractHttpConnection.java:1004)
at org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:640)
at org.eclipse.jetty.http.HttpParser.parseAvailable(HttpParser.java:235)
at 
org.eclipse.jetty.server.BlockingHttpConnection.handle(BlockingHttpConnection.java:72)
at 
org.eclipse.jetty.server.bio.SocketConnector$ConnectorEndPoint.run(SocketConnector.java:264)
at 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:608)
at 
org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:543)
at java.lang.Thread.run(Thread.java:745)
{code}

  was:
The problem is that Solr 4.10.3, 
CollapsingFieldValueCollector.finish(CollapsingQParserPlugin.java:632) is 
looking ahead to the next segment. When you use the CollapsingQParser only once 
that look-ahead is always populated because 

[jira] [Resolved] (SOLR-7775) support SolrCloud collection as fromIndex param in query-time join

2015-09-08 Thread Mikhail Khludnev (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7775?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Khludnev resolved SOLR-7775.

Resolution: Fixed

Updated the ref guide: 
https://cwiki.apache.org/confluence/pages/diffpagesbyversion.action?pageId=32604257=55=56

> support SolrCloud collection as fromIndex param in query-time join
> --
>
> Key: SOLR-7775
> URL: https://issues.apache.org/jira/browse/SOLR-7775
> Project: Solr
>  Issue Type: Sub-task
>  Components: query parsers
>Reporter: Mikhail Khludnev
>Assignee: Mikhail Khludnev
> Fix For: 5.4
>
> Attachments: SOLR-7775.patch, SOLR-7775.patch
>
>
> It alludes to SOLR-4905 and will be addressed right after SOLR-6234



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6549) bin/solr script should support a -s option to set the -Dsolr.solr.home property

2015-09-08 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6549?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14735640#comment-14735640
 ] 

Shawn Heisey commented on SOLR-6549:


[~dragonsinth] popped up on IRC asking about GC tuning options, and noticed 
that CMSInitiatingOccupancyFraction was at 50%, and he was going to try it at 
70.

I noted that the CMS parameters on my solr wiki page were at 70, and that the 
initial GC tuning parameters were heavily influenced by that wiki page.

Scott went digging deeper, and found that solr.in.sh was initially using 70, 
then it was changed to 50 by the initial commits for this issue.

We were wondering whether the change was intentional, and if it was, what the 
motivation was.  The GC change was not mentioned in the commit message.
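For reference, the setting under discussion lives in the GC_TUNE variable in solr.in.sh. The fragment below is a sketch, not the exact flag set shipped with any Solr release (the surrounding CMS flags are abbreviated); only the standard HotSpot option names are assumed:

```shell
# Sketch of the relevant solr.in.sh fragment. The contested value is
# CMSInitiatingOccupancyFraction: the wiki page and the original solr.in.sh
# used 70; the commits for this issue changed it to 50.
GC_TUNE="-XX:+UseConcMarkSweepGC \
-XX:CMSInitiatingOccupancyFraction=50 \
-XX:+UseCMSInitiatingOccupancyOnly"

# To experiment with 70 instead, as Scott was doing:
# GC_TUNE="${GC_TUNE/CMSInitiatingOccupancyFraction=50/CMSInitiatingOccupancyFraction=70}"

echo "$GC_TUNE"
```

A lower fraction starts concurrent collections earlier (more GC work, less risk of a concurrent-mode failure); a higher one defers them.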

> bin/solr script should support a -s option to set the -Dsolr.solr.home 
> property
> ---
>
> Key: SOLR-6549
> URL: https://issues.apache.org/jira/browse/SOLR-6549
> Project: Solr
>  Issue Type: Improvement
>  Components: scripts and tools
>Reporter: Timothy Potter
>Assignee: Timothy Potter
> Fix For: 4.10.2, 5.0
>
>
> The bin/solr script supports a -d parameter for specifying the directory 
> containing the webapp, resources, etc, lib ... In most cases, these binaries 
> are reusable (and will eventually be in a server directory SOLR-3619) even if 
> you want to have multiple solr.solr.home directories on the same server. In 
> other words, it is more common/better to do:
> {code}
> bin/solr start -d server -s home1
> bin/solr start -d server -s home2
> {code}
> than to do:
> {code}
> bin/solr start -d server1
> bin/solr start -d server2
> {code}
> Basically, the start script needs to support a -s option that allows you to 
> share binaries but have different Solr home directories for running multiple 
> Solr instances on the same host.






[jira] [Updated] (SOLR-7435) NPE can occur if CollapsingQParserPlugin is used two or more times in the same query

2015-09-08 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7435?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-7435:
-
Description: 
The problem is that 
CollapsingFieldValueCollector.finish(CollapsingQParserPlugin.java:632) looks 
ahead to the next segment. When you use the CollapsingQParser only once, that 
look-ahead is always populated because each segment is processed by the 
scorers. The CollapsingQParser plugin, however, does not process every segment; 
it stops when it runs out of collected documents. So the look-ahead can cause a 
null pointer in the second collapse. 


Below is the original description from Markus:

Not even sure it would work anyway; I tried to collapse on two distinct fields, 
ending up with this:

select?q=*:*&fq={!collapse field=qst}&fq={!collapse field=rdst}

{code}
584550 [qtp1121454968-20] ERROR org.apache.solr.servlet.SolrDispatchFilter  [   
suggests] – null:java.lang.NullPointerException
at 
org.apache.solr.search.CollapsingQParserPlugin$IntScoreCollector.finish(CollapsingQParserPlugin.java:743)
at 
org.apache.solr.search.CollapsingQParserPlugin$IntScoreCollector.finish(CollapsingQParserPlugin.java:780)
at 
org.apache.solr.search.SolrIndexSearcher.buildAndRunCollectorChain(SolrIndexSearcher.java:203)
at 
org.apache.solr.search.SolrIndexSearcher.getDocListNC(SolrIndexSearcher.java:1660)
at 
org.apache.solr.search.SolrIndexSearcher.getDocListC(SolrIndexSearcher.java:1479)
at 
org.apache.solr.search.SolrIndexSearcher.search(SolrIndexSearcher.java:556)
at 
org.apache.solr.handler.component.QueryComponent.process(QueryComponent.java:518)
at 
org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:222)
at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:143)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:1984)
at 
org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:829)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:446)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:220)
at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1419)
at 
org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:455)
at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137)
at 
org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:557)
at 
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:231)
at 
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1075)
at 
org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:384)
at 
org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:193)
at 
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1009)
at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135)
at 
org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:255)
at 
org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:154)
at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:116)
at org.eclipse.jetty.server.Server.handle(Server.java:368)
at 
org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:489)
at 
org.eclipse.jetty.server.BlockingHttpConnection.handleRequest(BlockingHttpConnection.java:53)
at 
org.eclipse.jetty.server.AbstractHttpConnection.headerComplete(AbstractHttpConnection.java:942)
at 
org.eclipse.jetty.server.AbstractHttpConnection$RequestHandler.headerComplete(AbstractHttpConnection.java:1004)
at org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:640)
at org.eclipse.jetty.http.HttpParser.parseAvailable(HttpParser.java:235)
at 
org.eclipse.jetty.server.BlockingHttpConnection.handle(BlockingHttpConnection.java:72)
at 
org.eclipse.jetty.server.bio.SocketConnector$ConnectorEndPoint.run(SocketConnector.java:264)
at 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:608)
at 
org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:543)
at java.lang.Thread.run(Thread.java:745)
{code}

  was:
Not even sure it would work anyway; I tried to collapse on two distinct fields, 
ending up with this:

select?q=*:*&fq={!collapse field=qst}&fq={!collapse field=rdst}

{code}
584550 [qtp1121454968-20] ERROR org.apache.solr.servlet.SolrDispatchFilter  [   
suggests] – null:java.lang.NullPointerException
at 

[jira] [Updated] (SOLR-7435) NPE can occur if CollapsingQParserPlugin is used two or more times in the same query

2015-09-08 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7435?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-7435:
-
Summary: NPE can occur if CollapsingQParserPlugin is used two or more times 
in the same query  (was: NPE in FieldCollapsingQParser)

> NPE can occur if CollapsingQParserPlugin is used two or more times in the 
> same query
> 
>
> Key: SOLR-7435
> URL: https://issues.apache.org/jira/browse/SOLR-7435
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 5.1
>Reporter: Markus Jelsma
>Priority: Minor
> Fix For: 5.2
>
>
> Not even sure it would work anyway; I tried to collapse on two distinct 
> fields, ending up with this:
> select?q=*:*&fq={!collapse field=qst}&fq={!collapse field=rdst}
> {code}
> 584550 [qtp1121454968-20] ERROR org.apache.solr.servlet.SolrDispatchFilter  [ 
>   suggests] – null:java.lang.NullPointerException
> at 
> org.apache.solr.search.CollapsingQParserPlugin$IntScoreCollector.finish(CollapsingQParserPlugin.java:743)
> at 
> org.apache.solr.search.CollapsingQParserPlugin$IntScoreCollector.finish(CollapsingQParserPlugin.java:780)
> at 
> org.apache.solr.search.SolrIndexSearcher.buildAndRunCollectorChain(SolrIndexSearcher.java:203)
> at 
> org.apache.solr.search.SolrIndexSearcher.getDocListNC(SolrIndexSearcher.java:1660)
> at 
> org.apache.solr.search.SolrIndexSearcher.getDocListC(SolrIndexSearcher.java:1479)
> at 
> org.apache.solr.search.SolrIndexSearcher.search(SolrIndexSearcher.java:556)
> at 
> org.apache.solr.handler.component.QueryComponent.process(QueryComponent.java:518)
> at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:222)
> at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:143)
> at org.apache.solr.core.SolrCore.execute(SolrCore.java:1984)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:829)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:446)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:220)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1419)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:455)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137)
> at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:557)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:231)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1075)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:384)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:193)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1009)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135)
> at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:255)
> at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:154)
> at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:116)
> at org.eclipse.jetty.server.Server.handle(Server.java:368)
> at 
> org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:489)
> at 
> org.eclipse.jetty.server.BlockingHttpConnection.handleRequest(BlockingHttpConnection.java:53)
> at 
> org.eclipse.jetty.server.AbstractHttpConnection.headerComplete(AbstractHttpConnection.java:942)
> at 
> org.eclipse.jetty.server.AbstractHttpConnection$RequestHandler.headerComplete(AbstractHttpConnection.java:1004)
> at org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:640)
> at 
> org.eclipse.jetty.http.HttpParser.parseAvailable(HttpParser.java:235)
> at 
> org.eclipse.jetty.server.BlockingHttpConnection.handle(BlockingHttpConnection.java:72)
> at 
> org.eclipse.jetty.server.bio.SocketConnector$ConnectorEndPoint.run(SocketConnector.java:264)
> at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:608)
> at 
> org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:543)
> at java.lang.Thread.run(Thread.java:745)
> {code}




[jira] [Commented] (SOLR-7929) SimplePostTool (also bin/post) -filetypes "*" does not work properly in 'web' mode

2015-09-08 Thread Erik Hatcher (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7929?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14735662#comment-14735662
 ] 

Erik Hatcher commented on SOLR-7929:


The commits were not SOLR-7929 prefixed, so they didn't get added here 
automatically.  Here are the relevant commits:

* trunk: r1697798
* branch_5x: 1697799

> SimplePostTool (also bin/post) -filetypes "*" does not work properly in 'web' 
> mode
> --
>
> Key: SOLR-7929
> URL: https://issues.apache.org/jira/browse/SOLR-7929
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 5.3
>Reporter: Erik Hatcher
>Assignee: Erik Hatcher
>Priority: Critical
> Fix For: 5.4
>
> Attachments: SOLR-7929.patch
>
>
> {code}
>  $ bin/post -c tmp http://lucene.apache.org/solr/assets/images/book_sia.png 
> -filetypes “*”
> ...
>  Entering auto mode. Indexing pages with content-types corresponding to file 
> endings *
>  Entering crawl at level 0 (1 links total, 1 new)
>  SimplePostTool: WARNING: Skipping URL with unsupported type image/png
> {code}
> the mapping from image/png to a file type does not exist, and thus fails.  
> This works in 'file' mode though.






[jira] [Comment Edited] (SOLR-7929) SimplePostTool (also bin/post) -filetypes "*" does not work properly in 'web' mode

2015-09-08 Thread Erik Hatcher (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7929?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14735662#comment-14735662
 ] 

Erik Hatcher edited comment on SOLR-7929 at 9/8/15 9:35 PM:


The commits were not SOLR-7929 prefixed, so they didn't get added here 
automatically.  Here are the relevant commits:

* trunk: r1697798
* branch_5x: r1697799


was (Author: ehatcher):
The commits were not SOLR-7929 prefixed, so they didn't get added here 
automatically.  Here's the relevant commits:

* trunk: r1697798
* branch_5x: 1697799

> SimplePostTool (also bin/post) -filetypes "*" does not work properly in 'web' 
> mode
> --
>
> Key: SOLR-7929
> URL: https://issues.apache.org/jira/browse/SOLR-7929
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 5.3
>Reporter: Erik Hatcher
>Assignee: Erik Hatcher
>Priority: Critical
> Fix For: 5.4
>
> Attachments: SOLR-7929.patch
>
>
> {code}
>  $ bin/post -c tmp http://lucene.apache.org/solr/assets/images/book_sia.png 
> -filetypes “*”
> ...
>  Entering auto mode. Indexing pages with content-types corresponding to file 
> endings *
>  Entering crawl at level 0 (1 links total, 1 new)
>  SimplePostTool: WARNING: Skipping URL with unsupported type image/png
> {code}
> the mapping from image/png to a file type does not exist, and thus fails.  
> This works in 'file' mode though.






[jira] [Commented] (LUCENE-6784) Enable query caching by default

2015-09-08 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6784?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14734772#comment-14734772
 ] 

Adrien Grand commented on LUCENE-6784:
--

Test cases can't make use of this default query cache anyway, since it would 
make tests non-reproducible; we have 
LuceneTestCase.overrideTestDefaultQueryCache, which resets the default query 
cache before each test runs. So I don't think this is much of an issue?

> Enable query caching by default
> ---
>
> Key: LUCENE-6784
> URL: https://issues.apache.org/jira/browse/LUCENE-6784
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Assignee: Adrien Grand
>Priority: Minor
> Fix For: 5.4
>
> Attachments: LUCENE-6784.patch
>
>
> Now that our main queries have become immutable, I would like to revisit 
> enabling the query cache by default.






[jira] [Created] (SOLR-8019) OpenBitSet.class missing in Lucene core 5

2015-09-08 Thread Thomas Meyer (JIRA)
Thomas Meyer created SOLR-8019:
--

 Summary: OpenBitSet.class missing in Lucene core 5
 Key: SOLR-8019
 URL: https://issues.apache.org/jira/browse/SOLR-8019
 Project: Solr
  Issue Type: Bug
Affects Versions: 5.2, 4.10
Reporter: Thomas Meyer
Priority: Critical


A core was transferred from Solr 4.10 to 5.2.
While adding entities works, searching yields a 
java.lang.ClassNotFoundException: org.apache.lucene.util.OpenBitSet.

We found OpenBitSet.class in lucene-core-4.10.4.jar, but it seems to be missing 
from the corresponding lucene-core-5.2.1.jar.







[jira] [Closed] (SOLR-8019) OpenBitSet.class missing in Lucene core 5

2015-09-08 Thread Mikhail Khludnev (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8019?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Khludnev closed SOLR-8019.
--
Resolution: Won't Fix

OpenBitSet was removed in Lucene 5; see 
http://lucene.apache.org/core/5_0_0/MIGRATE.html
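For anyone hitting this during a 4.x-to-5.x migration: the migration notes point code that used OpenBitSet at Lucene 5's fixed-capacity bitsets (FixedBitSet and friends). As a standard-library illustration of the same basic operations, deliberately avoiding a Lucene dependency and not a drop-in replacement for the Lucene classes, java.util.BitSet covers the common cases:

```java
// Standard-library illustration only: java.util.BitSet offers the same basic
// operations (set/get/cardinality/nextSetBit) as the removed OpenBitSet.
// Lucene-internal code should migrate to org.apache.lucene.util.FixedBitSet.
import java.util.BitSet;

public class BitSetMigration {

    // Mark a set of document ids; duplicates collapse into one bit.
    public static BitSet markDocs(int... docIds) {
        BitSet bits = new BitSet();
        for (int id : docIds) {
            bits.set(id);
        }
        return bits;
    }

    public static void main(String[] args) {
        BitSet docs = markDocs(3, 7, 7, 42);
        System.out.println(docs.cardinality());   // distinct marked docs
        System.out.println(docs.get(7));          // membership test
        System.out.println(docs.nextSetBit(8));   // iterate set bits in order
    }
}
```

Note that FixedBitSet, unlike java.util.BitSet, requires its capacity up front (typically maxDoc), which is part of why Lucene switched to it.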


> OpenBitSet.class missing in Lucene core 5
> -
>
> Key: SOLR-8019
> URL: https://issues.apache.org/jira/browse/SOLR-8019
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 4.10, 5.2
>Reporter: Thomas Meyer
>Priority: Critical
>
> A core was transferred from Solr 4.10 to 5.2.
> While adding entities works, searching yields a 
> java.lang.ClassNotFoundException: org.apache.lucene.util.OpenBitSet.
> We found OpenBitSet.class in lucene-core-4.10.4.jar, but it seems to be 
> missing from the corresponding lucene-core-5.2.1.jar.






[JENKINS] Lucene-Solr-Tests-trunk-Java8 - Build # 376 - Failure

2015-09-08 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-trunk-Java8/376/

1 tests failed.
REGRESSION:  
org.apache.solr.cloud.OverseerTest.testExternalClusterStateChangeBehavior

Error Message:
Illegal state, was: down expected:active clusterState:live 
nodes:[]collections:{c1=DocCollection(c1)={   "shards":{"shard1":{   
"parent":null,   "range":null,   "state":"active",   
"replicas":{"core_node1":{   "base_url":"http://127.0.0.1/solr",
   "node_name":"node1",   "core":"core1",   "roles":"", 
  "state":"down",   "router":{"name":"implicit"}}, 
test=LazyCollectionRef(test)}

Stack Trace:
java.lang.AssertionError: Illegal state, was: down expected:active 
clusterState:live nodes:[]collections:{c1=DocCollection(c1)={
  "shards":{"shard1":{
  "parent":null,
  "range":null,
  "state":"active",
  "replicas":{"core_node1":{
  "base_url":"http://127.0.0.1/solr",
  "node_name":"node1",
  "core":"core1",
  "roles":"",
  "state":"down",
  "router":{"name":"implicit"}}, test=LazyCollectionRef(test)}
at __randomizedtesting.SeedInfo.seed([F73FAD8CCD9EB9EE:9F21AE602F0EE3A0]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.apache.solr.cloud.OverseerTest.verifyStatus(OverseerTest.java:601)
at org.apache.solr.cloud.OverseerTest.testExternalClusterStateChangeBehavior(OverseerTest.java:1261)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845)
at com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747)
at com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781)
at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 

[jira] [Commented] (LUCENE-6650) Remove dependency of lucene/spatial on oal.search.Filter

2015-09-08 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6650?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14734859#comment-14734859
 ] 

Adrien Grand commented on LUCENE-6650:
--

David, do you think you will have time to work on this again soon? I would like 
to deprecate Filter, but it is a bit awkward if Filter is still part of the 
public API of some of our modules.

> Remove dependency of lucene/spatial on oal.search.Filter
> 
>
> Key: LUCENE-6650
> URL: https://issues.apache.org/jira/browse/LUCENE-6650
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Adrien Grand
>Assignee: David Smiley
>
> We should try to remove usage of oal.search.Filter in lucene/spatial. I gave 
> it a try but this module makes non-trivial use of filters so I wouldn't mind 
> some help here.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6758) Adding a SHOULD clause to a BQ over an empty field clears the score when using DefaultSimilarity

2015-09-08 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6758?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14734875#comment-14734875
 ] 

Robert Muir commented on LUCENE-6758:
-

Thank you for contributing the tests.

> Adding a SHOULD clause to a BQ over an empty field clears the score when 
> using DefaultSimilarity
> 
>
> Key: LUCENE-6758
> URL: https://issues.apache.org/jira/browse/LUCENE-6758
> Project: Lucene - Core
>  Issue Type: Bug
>Affects Versions: Trunk
>Reporter: Terry Smith
> Attachments: LUCENE-6758.patch, LUCENE-6758.patch
>
>
> Patch with unit test to show the bug will be attached.
> I've narrowed this change in behavior with git bisect to the following commit:
> {noformat}
> commit 698b4b56f0f2463b21c9e3bc67b8b47d635b7d1f
> Author: Robert Muir 
> Date:   Thu Aug 13 17:37:15 2015 +
> LUCENE-6711: Use CollectionStatistics.docCount() for IDF and average 
> field length computations
> 
> git-svn-id: https://svn.apache.org/repos/asf/lucene/dev/trunk@1695744 
> 13f79535-47bb-0310-9956-ffa450edef68
> {noformat}






[jira] [Updated] (LUCENE-6774) Remove solr hack in MorfologikFilter

2015-09-08 Thread Uwe Schindler (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6774?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uwe Schindler updated LUCENE-6774:
--
Fix Version/s: 5.3.1

> Remove solr hack in MorfologikFilter
> 
>
> Key: LUCENE-6774
> URL: https://issues.apache.org/jira/browse/LUCENE-6774
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/analysis
>Reporter: Robert Muir
>Assignee: Uwe Schindler
>Priority: Blocker
> Fix For: Trunk, 5.4, 5.3.1
>
> Attachments: LUCENE-6774.patch, LUCENE-6774.patch, LUCENE-6774.patch, 
> LUCENE-6774.patch
>
>
> If solr wants to set the contextClassLoader because its classloading is 
> fucked up, then it needs to do this hack itself: it should not be in lucene 
> code.
> The current mess prevents use of this analyzer in other environments






[JENKINS] Lucene-Solr-Tests-5.x-Java7 - Build # 3512 - Failure

2015-09-08 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-5.x-Java7/3512/

1 tests failed.
REGRESSION:  org.apache.solr.handler.TestReplicationHandler.doTestIndexAndConfigAliasReplication

Error Message:
Index: 0, Size: 0

Stack Trace:
java.lang.IndexOutOfBoundsException: Index: 0, Size: 0
at __randomizedtesting.SeedInfo.seed([504B0FDF0302BEBB:A738E187C5EA115D]:0)
at java.util.ArrayList.rangeCheck(ArrayList.java:635)
at java.util.ArrayList.get(ArrayList.java:411)
at org.apache.solr.handler.TestReplicationHandler.doTestIndexAndConfigAliasReplication(TestReplicationHandler.java:1239)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845)
at com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747)
at com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781)
at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at java.lang.Thread.run(Thread.java:745)




Build Log:
[...truncated 10380 lines...]
   [junit4] Suite: org.apache.solr.handler.TestReplicationHandler
   [junit4]   2> Creating dataDir: 

Re: 5.3.1 bug fix release

2015-09-08 Thread Noble Paul
I would like to start the process ASAP.  I volunteer to be the RM.  Please
let me know the list of tickets you would like to include in the release
and we can coordinate the rest
On Sep 8, 2015 2:32 AM, "Shawn Heisey"  wrote:

> On 9/5/2015 10:43 PM, Shalin Shekhar Mangar wrote:
> > +1 for a 5.3.1 -- seems like there are some serious bugs around the
> > new security module.
>
> Since I'm not on the PMC, I don't know whether my vote counts, but I
> vote +1.
>
> SOLR-6188 is a simple patch that fixes a very confusing error that our
> more advanced users have reported.  I haven't yet committed the change,
> so it could go to either the 5.3 branch or branch_5x.
>
> If we are going ahead with 5.3.1, my plan is to commit it there, subject
> to approval by the RM.
>
> Thanks,
> Shawn
>
>
>
>


[jira] [Commented] (LUCENE-6305) BooleanQuery.equals should ignore clause order

2015-09-08 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6305?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14734785#comment-14734785
 ] 

Adrien Grand commented on LUCENE-6305:
--

bq. Slightly off topic to your original goal, but what do you think about 
deduping repeated non-scoring (FILTER, MUST_NOT) clauses from the list in the 
query, or do you see that as a possible optimization when building the 
weights/scorers?

+1 This would be a nice optimization for the rewrite method.

> BooleanQuery.equals should ignore clause order
> --
>
> Key: LUCENE-6305
> URL: https://issues.apache.org/jira/browse/LUCENE-6305
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Adrien Grand
>Assignee: Adrien Grand
>Priority: Minor
> Attachments: LUCENE-6305.patch
>
>
> BooleanQuery.equals is sensitive to the order in which clauses have been 
> added. So for instance "+A +B" would be considered different from "+B +A" 
> although it generates the same matches and scores.






RE: 5.3.1 bug fix release

2015-09-08 Thread Uwe Schindler
The Morfologik stuff should be backported:

https://issues.apache.org/jira/browse/LUCENE-6774

This breaks non-Solr apps.

 

-

Uwe Schindler

H.-H.-Meier-Allee 63, D-28213 Bremen

  http://www.thetaphi.de

eMail: u...@thetaphi.de

 

From: Noble Paul [mailto:noble.p...@gmail.com] 
Sent: Tuesday, September 08, 2015 3:20 PM
To: Lucene Dev
Subject: Re: 5.3.1 bug fix release

 

I would like to start the process ASAP.  I volunteer to be the RM.  Please let 
me know the list of tickets you would like to include in the release and we can 
coordinate the rest 

On Sep 8, 2015 2:32 AM, "Shawn Heisey"  wrote:

On 9/5/2015 10:43 PM, Shalin Shekhar Mangar wrote:
> +1 for a 5.3.1 -- seems like there are some serious bugs around the
> new security module.

Since I'm not on the PMC, I don't know whether my vote counts, but I
vote +1.

SOLR-6188 is a simple patch that fixes a very confusing error that our
more advanced users have reported.  I haven't yet committed the change,
so it could go to either the 5.3 branch or branch_5x.

If we are going ahead with 5.3.1, my plan is to commit it there, subject
to approval by the RM.

Thanks,
Shawn





[jira] [Commented] (LUCENE-6758) Adding a SHOULD clause to a BQ over an empty field clears the score when using DefaultSimilarity

2015-09-08 Thread Terry Smith (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6758?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14734796#comment-14734796
 ] 

Terry Smith commented on LUCENE-6758:
-

Ah, you've changed DefaultSimilarity.idf() to use (docCount + 1) instead of 
just docCount, forcing it to be larger than 0.

That looks like a great fix, thanks.
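The change Terry describes can be sketched numerically. The class and method names below are illustrative only, not Lucene's actual DefaultSimilarity source; the formula follows the classic Lucene idf shape, idf = 1 + ln(docCount / (docFreq + 1)), and the post-fix variant pads docCount by 1 so an empty field (docCount == 0) no longer degenerates:

```java
// Sketch of the idf change discussed above (illustrative, not Lucene source).
public class IdfSketch {
    /** Pre-fix shape: log(docCount / (docFreq + 1)) + 1; degenerates when docCount == 0. */
    static float idfBefore(long docFreq, long docCount) {
        return (float) (Math.log((double) docCount / (docFreq + 1)) + 1.0);
    }

    /** Post-fix shape: docCount padded by 1, so the result stays positive for an empty field. */
    static float idfAfter(long docFreq, long docCount) {
        return (float) (Math.log((double) (docCount + 1) / (docFreq + 1)) + 1.0);
    }

    public static void main(String[] args) {
        // An empty field: no documents have it, so docFreq == docCount == 0.
        System.out.println(idfBefore(0, 0)); // -Infinity: log(0) poisons the score
        System.out.println(idfAfter(0, 0));  // 1.0: the +1 pad keeps idf > 0
    }
}
```

Multiplying a term score by the degenerate pre-fix value is what wiped out the SHOULD clause's contribution in the reported bug; the padded variant leaves a harmless constant instead.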


> Adding a SHOULD clause to a BQ over an empty field clears the score when 
> using DefaultSimilarity
> 
>
> Key: LUCENE-6758
> URL: https://issues.apache.org/jira/browse/LUCENE-6758
> Project: Lucene - Core
>  Issue Type: Bug
>Affects Versions: Trunk
>Reporter: Terry Smith
> Attachments: LUCENE-6758.patch, LUCENE-6758.patch
>
>
> Patch with unit test to show the bug will be attached.
> I've narrowed this change in behavior with git bisect to the following commit:
> {noformat}
> commit 698b4b56f0f2463b21c9e3bc67b8b47d635b7d1f
> Author: Robert Muir 
> Date:   Thu Aug 13 17:37:15 2015 +
> LUCENE-6711: Use CollectionStatistics.docCount() for IDF and average 
> field length computations
> 
> git-svn-id: https://svn.apache.org/repos/asf/lucene/dev/trunk@1695744 
> 13f79535-47bb-0310-9956-ffa450edef68
> {noformat}






[jira] [Updated] (LUCENE-6305) BooleanQuery.equals should ignore clause order

2015-09-08 Thread Adrien Grand (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6305?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrien Grand updated LUCENE-6305:
-
Attachment: LUCENE-6305.patch

Revisiting this issue now that queries can be considered immutable. I rebased 
the patch to current trunk. BooleanQuery.equals/hashCode no longer depend on 
the order of clauses, but the iteration order of the clauses and the 
toString() representation of BooleanQuery have not changed.
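Order-independent equality of this kind can be illustrated with a small multiset sketch. This is standalone toy code, not Lucene's BooleanQuery: equality and hashCode compare clause counts rather than positions, so "+A +B" equals "+B +A" but not "+A +A +B", while iteration and toString keep insertion order.

```java
import java.util.*;

// Standalone sketch of order-independent equals/hashCode over a clause list.
public class ClauseListSketch {
    private final List<String> clauses = new ArrayList<>(); // insertion order kept for iteration/toString

    ClauseListSketch add(String clause) { clauses.add(clause); return this; }

    /** Count each clause so equality ignores order but respects multiplicity. */
    private Map<String, Integer> multiset() {
        Map<String, Integer> m = new HashMap<>();
        for (String c : clauses) m.merge(c, 1, Integer::sum);
        return m;
    }

    @Override public boolean equals(Object o) {
        return o instanceof ClauseListSketch
            && multiset().equals(((ClauseListSketch) o).multiset());
    }

    @Override public int hashCode() { return multiset().hashCode(); }

    @Override public String toString() { return String.join(" ", clauses); }

    public static void main(String[] args) {
        ClauseListSketch ab = new ClauseListSketch().add("+A").add("+B");
        ClauseListSketch ba = new ClauseListSketch().add("+B").add("+A");
        System.out.println(ab.equals(ba)); // true: same clauses, different order
        System.out.println(ab);            // +A +B: toString still follows insertion order
    }
}
```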

> BooleanQuery.equals should ignore clause order
> --
>
> Key: LUCENE-6305
> URL: https://issues.apache.org/jira/browse/LUCENE-6305
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Adrien Grand
>Assignee: Adrien Grand
>Priority: Minor
> Attachments: LUCENE-6305.patch, LUCENE-6305.patch
>
>
> BooleanQuery.equals is sensitive to the order in which clauses have been 
> added. So for instance "+A +B" would be considered different from "+B +A" 
> although it generates the same matches and scores.






[jira] [Commented] (LUCENE-6784) Enable query caching by default

2015-09-08 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6784?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14734768#comment-14734768
 ] 

Robert Muir commented on LUCENE-6784:
-

I'm a little concerned since tests run with a 512MB heap (except when running 
with clover, in which case it's 768MB).

> Enable query caching by default
> ---
>
> Key: LUCENE-6784
> URL: https://issues.apache.org/jira/browse/LUCENE-6784
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Assignee: Adrien Grand
>Priority: Minor
> Fix For: 5.4
>
> Attachments: LUCENE-6784.patch
>
>
> Now that our main queries have become immutable, I would like to revisit 
> enabling the query cache by default.
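As a rough illustration of what a bounded query cache involves (a generic LRU sketch, not Lucene's actual LRUQueryCache; the heap concern above is precisely about bounding such a structure), an access-ordered LinkedHashMap evicts the least recently used entry once a size limit is exceeded:

```java
import java.util.*;

// Minimal LRU cache sketch keyed by immutable query strings; illustrative only.
// Immutability of keys matters: a mutable query key could change after insertion.
public class QueryCacheSketch<V> {
    private final int maxEntries;
    private final LinkedHashMap<String, V> map;

    public QueryCacheSketch(int maxEntries) {
        this.maxEntries = maxEntries;
        // accessOrder=true: get()/put() move the entry to the tail,
        // so the head is always the least recently used entry.
        this.map = new LinkedHashMap<String, V>(16, 0.75f, true) {
            @Override protected boolean removeEldestEntry(Map.Entry<String, V> eldest) {
                return size() > QueryCacheSketch.this.maxEntries;
            }
        };
    }

    public V get(String queryKey) { return map.get(queryKey); }
    public void put(String queryKey, V cached) { map.put(queryKey, cached); }
    public int size() { return map.size(); }

    public static void main(String[] args) {
        QueryCacheSketch<String> cache = new QueryCacheSketch<>(2);
        cache.put("q1", "hits1");
        cache.put("q2", "hits2");
        cache.get("q1");            // touch q1 so q2 becomes least recently used
        cache.put("q3", "hits3");   // exceeds capacity: evicts q2
        System.out.println(cache.get("q2")); // null
    }
}
```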






[jira] [Reopened] (LUCENE-6774) Remove solr hack in MorfologikFilter

2015-09-08 Thread Uwe Schindler (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6774?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uwe Schindler reopened LUCENE-6774:
---

Reopen for backport

> Remove solr hack in MorfologikFilter
> 
>
> Key: LUCENE-6774
> URL: https://issues.apache.org/jira/browse/LUCENE-6774
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/analysis
>Reporter: Robert Muir
>Assignee: Uwe Schindler
>Priority: Blocker
> Fix For: Trunk, 5.4
>
> Attachments: LUCENE-6774.patch, LUCENE-6774.patch, LUCENE-6774.patch, 
> LUCENE-6774.patch
>
>
> If solr wants to set the contextClassLoader because its classloading is 
> fucked up, then it needs to do this hack itself: it should not be in lucene 
> code.
> The current mess prevents use of this analyzer in other environments






[jira] [Updated] (LUCENE-6773) Always flatten nested conjunctions

2015-09-08 Thread Adrien Grand (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6773?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrien Grand updated LUCENE-6773:
-
Attachment: LUCENE-6773.patch

Thanks Ryan for having a look. Here is an updated patch.

> Always flatten nested conjunctions
> --
>
> Key: LUCENE-6773
> URL: https://issues.apache.org/jira/browse/LUCENE-6773
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Assignee: Adrien Grand
>Priority: Minor
> Attachments: LUCENE-6773.patch, LUCENE-6773.patch
>
>
> LUCENE-6585 started the work to flatten nested conjunctions, but this only 
> works with approximations. Otherwise a ConjunctionScorer is passed to 
> ConjunctionDISI.intersect, and is not flattened since it is not an instance 
> of ConjunctionDISI.
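The flattening in question can be pictured with a toy AND-tree walker. This sketch shares nothing with ConjunctionDISI beyond the shape of the problem: nested conjunctions collapse into one flat list only for node types the walker recognizes, which is why a scorer that is not the expected instance escapes flattening.

```java
import java.util.*;

// Toy sketch of conjunction flattening: AND(AND(a, b), c) -> [a, b, c].
public class FlattenSketch {
    interface Node {}

    static final class Leaf implements Node {
        final String name;
        Leaf(String name) { this.name = name; }
    }

    static final class And implements Node {
        final List<Node> children;
        And(Node... children) { this.children = Arrays.asList(children); }
    }

    /** Recursively inline child conjunctions into one flat list of leaf names. */
    static List<String> flatten(Node n) {
        List<String> out = new ArrayList<>();
        if (n instanceof And) {
            // Recognized conjunction: inline its children instead of nesting.
            for (Node child : ((And) n).children) out.addAll(flatten(child));
        } else {
            // Unrecognized/leaf node: kept as-is (the analogue of the unflattened scorer).
            out.add(((Leaf) n).name);
        }
        return out;
    }

    public static void main(String[] args) {
        Node tree = new And(new And(new Leaf("a"), new Leaf("b")), new Leaf("c"));
        System.out.println(flatten(tree)); // [a, b, c]
    }
}
```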






[JENKINS] Lucene-Solr-SmokeRelease-trunk - Build # 272 - Failure

2015-09-08 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-trunk/272/

No tests ran.

Build Log:
[...truncated 12326 lines...]
BUILD FAILED
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-trunk/build.xml:511: The following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-trunk/lucene/build.xml:394: java.net.ConnectException: Connection timed out
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:345)
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
at java.net.Socket.connect(Socket.java:589)
at java.net.Socket.connect(Socket.java:538)
at sun.net.NetworkClient.doConnect(NetworkClient.java:180)
at sun.net.www.http.HttpClient.openServer(HttpClient.java:432)
at sun.net.www.http.HttpClient.openServer(HttpClient.java:527)
at sun.net.www.http.HttpClient.<init>(HttpClient.java:211)
at sun.net.www.http.HttpClient.New(HttpClient.java:308)
at sun.net.www.http.HttpClient.New(HttpClient.java:326)
at sun.net.www.protocol.http.HttpURLConnection.getNewHttpClient(HttpURLConnection.java:1168)
at sun.net.www.protocol.http.HttpURLConnection.plainConnect0(HttpURLConnection.java:1104)
at sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:998)
at sun.net.www.protocol.http.HttpURLConnection.connect(HttpURLConnection.java:932)
at org.apache.tools.ant.taskdefs.Get$GetThread.openConnection(Get.java:660)
at org.apache.tools.ant.taskdefs.Get$GetThread.get(Get.java:579)
at org.apache.tools.ant.taskdefs.Get$GetThread.run(Get.java:569)

Total time: 6 minutes 25 seconds
Build step 'Invoke Ant' marked build as failure
Email was triggered for: Failure
Sending email for trigger: Failure




Re: Encrypted index?

2015-09-08 Thread Adam Retter
>
> bq: I was rather hoping that I could do the encryption and subsequent
> decryption at a level below the search itself
>


I am not sure what "bq" stands for.


Aside from the different encryption key per index (or whatever), why
> does the client think this is any more secure than an encrypted disk?
>
> Just askin'
>

Well, I never said that the client was reasonable, or even wanted to explain
their thought process in any logical manner ;-) The client wants it because
they think they need it; they think they need it quite likely because they
don't understand what it means. When you try to explain why they don't need
it, or suggest possibly better solutions, they are not interested, because...
they *know* they need it!


-- 
Adam Retter

skype: adam.retter
tweet: adamretter
http://www.adamretter.org.uk


[jira] [Commented] (SOLR-8016) CloudSolrClient has extremely verbose error logging

2015-09-08 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8016?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14735827#comment-14735827
 ] 

Mark Miller commented on SOLR-8016:
---

Pretty sure CloudSolrClient should not be retrying on such errors. The load 
balancing code is in another class - if anything, it might choose to retry 
depending on the request type, but code in CloudSolrClient should probably not 
be retrying in this case.

> CloudSolrClient has extremely verbose error logging
> ---
>
> Key: SOLR-8016
> URL: https://issues.apache.org/jira/browse/SOLR-8016
> Project: Solr
>  Issue Type: Improvement
>  Components: clients - java
>Affects Versions: 5.2.1, Trunk
>Reporter: Greg Pendlebury
>Priority: Minor
>  Labels: easyfix
>
> CloudSolrClient has this error logging line which is fairly annoying:
> {code}
>   log.error("Request to collection {} failed due to ("+errorCode+
>   ") {}, retry? "+retryCount, collection, rootCause.toString());
> {code}
> Given that this is a client library and then gets embedded into other 
> applications this line is very problematic to handle gracefully. In today's 
> example I was looking at, every failed search was logging over 100 lines, 
> including the full HTML response from the responding node in the cluster.
> The resulting SolrServerException that comes out to our application is 
> handled appropriately but we can't stop this class complaining in logs 
> without suppressing the entire ERROR channel, which we don't want to do. This 
> is the only direct line writing to the log I could find in the client, so we 
> _could_ suppress errors, but that just feels dirty, and fragile for the 
> future.
> From looking at the code I am fairly certain it is not as simple as throwing 
> an exception instead of logging... it is right in the middle of the method. I 
> suspect the simplest answer is adding a marker 
> (http://www.slf4j.org/api/org/slf4j/Marker.html) to the logging call.
> Then solrj users can choose what to do with these log entries. I don't know 
> if there is a broader strategy for handling this that I am ignorant of; 
> apologies if that is the case.
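The marker suggestion can be illustrated without SLF4J. This is toy code: a real patch would pass an org.slf4j.Marker (from MarkerFactory) to the log.error call and filter it in the logging backend, and the marker name here is invented. The idea is that events carry a tag, and the embedding application drops tagged entries instead of silencing the whole ERROR channel:

```java
import java.util.*;
import java.util.function.Predicate;

// Toy illustration of marker-based log filtering (not the SLF4J API itself).
public class MarkerLogSketch {
    // Hypothetical marker name for the noisy retry messages.
    static final String RETRY_MARKER = "CLOUD_CLIENT_RETRY";

    final List<String> emitted = new ArrayList<>();
    // By default every event is logged; the embedding app can swap this in.
    Predicate<String> markerFilter = marker -> true;

    void error(String marker, String message) {
        // Only emit events whose marker passes the application's filter.
        if (markerFilter.test(marker)) emitted.add(message);
    }

    public static void main(String[] args) {
        MarkerLogSketch log = new MarkerLogSketch();
        // The application opts out of retry noise without touching other errors.
        log.markerFilter = m -> !RETRY_MARKER.equals(m);
        log.error(RETRY_MARKER, "noisy retry report");
        log.error("GENERAL", "real failure");
        System.out.println(log.emitted); // [real failure]
    }
}
```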






[jira] [Updated] (SOLR-7990) timeAllowed is returning wrong results on the same query submitted with different timeAllowed limits

2015-09-08 Thread Yonik Seeley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7990?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yonik Seeley updated SOLR-7990:
---
Attachment: SOLR-7990.patch

Here's the final patch.
I also incorporated parts of Erick's test and modified the delay component to 
take a parameter that tells it how long to sleep.

> timeAllowed is returning wrong results on the same query submitted with 
> different timeAllowed limits
> 
>
> Key: SOLR-7990
> URL: https://issues.apache.org/jira/browse/SOLR-7990
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 5.2.1, Trunk, 5.4
>Reporter: Erick Erickson
>Assignee: Yonik Seeley
> Attachments: SOLR-7990.patch, SOLR-7990.patch, SOLR-7990.patch, 
> SOLR-7990.patch, SOLR-7990.patch, SOLR-7990_filterFix.patch
>
>
> William Bell raised a question on the user's list. The scenario is
> > send a query that exceeds timeAllowed
> > send another identical query with larger timeAllowed that does NOT time out
> The results from the second query are not correct, they reflect the doc count 
> from the first query.
> It apparently has to do with filter queries being inappropriately created and 
> re-used. I've attached a test case that illustrates the problem.
> There are three tests here. 
> testFilterSimpleCase shows the problem.
> testCacheAssumptions is my hack at what I _think_ the states of the caches 
> should be, but has a bunch of clutter so I'm Ignoring it for now. This should 
> be un-ignored and testFilterSimpleCase removed when there's any fix proposed. 
> The assumptions may not be correct though.
> testQueryResults shows what I think is a problem, the second call that does 
> NOT exceed timeAllowed still reports partial results.
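The reported failure mode can be reduced to a toy cache whose key omits the execution context. All names here are invented for illustration; this is not Solr's filterCache code. The first, timed-out query caches a partial result, and the second query reuses it even though it would have completed:

```java
import java.util.*;

// Toy sketch of the SOLR-7990 failure mode: a cache keyed only by the query
// string replays a partial result produced under a short timeAllowed.
public class PartialResultCacheSketch {
    final Map<String, List<Integer>> filterCache = new HashMap<>();
    final List<Integer> fullResult = Arrays.asList(1, 2, 3, 4);

    /** Simulated search: a tiny timeAllowed truncates the result set. */
    List<Integer> search(String q, int timeAllowedMs) {
        List<Integer> cached = filterCache.get(q); // BUG: key ignores timeAllowed/partial-ness
        if (cached != null) return cached;
        List<Integer> result = timeAllowedMs < 10 ? fullResult.subList(0, 2) : fullResult;
        filterCache.put(q, result);
        return result;
    }

    public static void main(String[] args) {
        PartialResultCacheSketch s = new PartialResultCacheSketch();
        System.out.println(s.search("q", 1).size());    // 2: timed out, partial result cached
        System.out.println(s.search("q", 1000).size()); // 2: stale partial entry wrongly reused
    }
}
```

A fix along the lines discussed in the issue would either avoid caching partial results or make partial-ness part of the cache entry's identity.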






Re: [CI] Lucene 5x Linux 64 Test Only - Build # 63025 - Failure!

2015-09-08 Thread Michael McCandless
I'll dig ... it repros on beasting.

Separately, it looks like build failures from the ES jenkins instances
still must go through moderation on the Lucene dev list?  (I don't see this
build failure on the dev list yet ...).

We had tried to fix this but it looks like it did not take!

Mike McCandless

On Tue, Sep 8, 2015 at 5:39 PM,  wrote:

> *BUILD FAILURE*
> Build URL
> http://build-eu-00.elastic.co/job/lucene_linux_java8_64_test_only/63025/
> Project:lucene_linux_java8_64_test_only Randomization: 
> JDK8,network,heap[799m],-server
> +UseG1GC +UseCompressedOops,sec manager on Date of build:Tue, 08 Sep 2015
> 23:36:36 +0200 Build duration:2 min 44 sec
> *CHANGES* No Changes
> *BUILD ARTIFACTS*
> -
> checkout/lucene/build/core/test/temp/junit4-J0-20150908_233706_469.events
> 
> -
> checkout/lucene/build/core/test/temp/junit4-J1-20150908_233706_472.events
> 
> -
> checkout/lucene/build/core/test/temp/junit4-J2-20150908_233706_469.events
> 
> -
> checkout/lucene/build/core/test/temp/junit4-J3-20150908_233706_472.events
> 
> -
> checkout/lucene/build/core/test/temp/junit4-J4-20150908_233706_469.events
> 
> -
> checkout/lucene/build/core/test/temp/junit4-J5-20150908_233706_469.events
> 
> -
> checkout/lucene/build/core/test/temp/junit4-J6-20150908_233706_472.events
> 
> -
> checkout/lucene/build/core/test/temp/junit4-J7-20150908_233706_472.events
> 
> *FAILED JUNIT TESTS* Name: org.apache.lucene.index Failed: 1 test(s),
> Passed: 792 test(s), Skipped: 24 test(s), Total: 817 test(s)
> *- Failed: org.apache.lucene.index.TestIndexFileDeleter.testExcInDecRef *
> *CONSOLE OUTPUT* [...truncated 1824 lines...] [junit4] Tests with
> failures: [junit4] -
> org.apache.lucene.index.TestIndexFileDeleter.testExcInDecRef [junit4] [junit4]
> [junit4] JVM J0: 0.80 .. 125.00 = 124.20s [junit4] JVM J1: 1.00 .. 127.13
> = 126.12s [junit4] JVM J2: 1.00 .. 136.48 = 135.48s [junit4] JVM J3: 1.00
> .. 126.06 = 125.06s [junit4] JVM J4: 1.00 .. 132.71 = 131.71s [junit4]
> JVM J5: 1.00 .. 126.27 = 125.27s [junit4] JVM J6: 1.00 .. 126.38 = 125.37s 
> [junit4]
> JVM J7: 1.00 .. 125.70 = 124.70s [junit4] Execution time total: 2 minutes
> 16 seconds [junit4] Tests summary: 414 suites, 3332 tests, 1 error, 51
> ignored (47 assumptions) BUILD FAILED 
> /home/jenkins/workspace/lucene_linux_java8_64_test_only/checkout/lucene/build.xml:50:
> The following error occurred while executing this line: 
> /home/jenkins/workspace/lucene_linux_java8_64_test_only/checkout/lucene/common-build.xml:1452:
> The following error occurred while executing this line: 
> /home/jenkins/workspace/lucene_linux_java8_64_test_only/checkout/lucene/common-build.xml:1006:
> There were test failures: 414 suites, 3332 tests, 1 error, 51 ignored (47
> assumptions) Total time: 2 minutes 26 seconds Build step 'Invoke Ant'
> marked build as failure Archiving artifacts Recording test results 
> [description-setter]
> Description set: JDK8,network,heap[799m],-server +UseG1GC
> +UseCompressedOops,sec manager on Email was triggered for: Failure - 1st 
> Trigger
> Failure - Any was overridden by another trigger and will not send an email. 
> Trigger
> Failure - Still was overridden by another trigger and will not send an
> email. Sending email for trigger: Failure - 1st
>


[jira] [Commented] (SOLR-8016) CloudSolrClient has extremely verbose error logging

2015-09-08 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8016?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14735798#comment-14735798
 ] 

Mark Miller commented on SOLR-8016:
---

That sounds like a different issue - the CloudSolrClient really should not be 
retrying like this on such an error?

> CloudSolrClient has extremely verbose error logging
> ---
>
> Key: SOLR-8016
> URL: https://issues.apache.org/jira/browse/SOLR-8016
> Project: Solr
>  Issue Type: Improvement
>  Components: clients - java
>Affects Versions: 5.2.1, Trunk
>Reporter: Greg Pendlebury
>Priority: Minor
>  Labels: easyfix
>
> CloudSolrClient has this error logging line which is fairly annoying:
> {code}
>   log.error("Request to collection {} failed due to ("+errorCode+
>   ") {}, retry? "+retryCount, collection, rootCause.toString());
> {code}
> Given that this is a client library and then gets embedded into other 
> applications this line is very problematic to handle gracefully. In today's 
> example I was looking at, every failed search was logging over 100 lines, 
> including the full HTML response from the responding node in the cluster.
> The resulting SolrServerException that comes out to our application is 
> handled appropriately but we can't stop this class complaining in logs 
> without suppressing the entire ERROR channel, which we don't want to do. This 
> is the only direct line writing to the log I could find in the client, so we 
> _could_ suppress errors, but that just feels dirty, and fragile for the 
> future.
> From looking at the code I am fairly certain it is not as simple as throwing 
> an exception instead of logging... it is right in the middle of the method. I 
> suspect the simplest answer is adding a marker 
> (http://www.slf4j.org/api/org/slf4j/Marker.html) to the logging call.
> Then solrj users can choose what to do with these log entries. I don't know 
> if there is a broader strategy for handling this that I am ignorant of; 
> apologies if that is the case.
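The marker idea can be sketched outside slf4j as well. The java.util.logging analogue below carries a hypothetical CLOUD_CLIENT_RETRY marker in the record's parameters, and shows how a consumer could drop just those entries without muting the whole ERROR channel; with slf4j the same effect would come from `log.error(marker, ...)` plus a marker-aware filter in the logging backend. The marker name and all class names here are illustrative, not from any patch.

```java
import java.util.logging.*;

// Hypothetical sketch of the "marker" suggestion: tag the noisy retry log
// entries so consumers can filter them out while keeping other ERRORs.
// slf4j Markers do this natively; here the marker rides in the record's
// parameters so the example runs on the JDK alone.
public class MarkerFilterDemo {
    static final String RETRY_MARKER = "CLOUD_CLIENT_RETRY";  // hypothetical name

    static String collectUnmarked() {
        Logger log = Logger.getLogger("demo");
        log.setUseParentHandlers(false);
        StringBuilder seen = new StringBuilder();
        Handler h = new Handler() {
            @Override public void publish(LogRecord r) {
                // isLoggable consults the handler's level and filter
                if (isLoggable(r)) seen.append(r.getMessage()).append('\n');
            }
            @Override public void flush() {}
            @Override public void close() {}
        };
        // The consumer's choice: drop marked entries, keep everything else.
        h.setFilter(r -> r.getParameters() == null
                || !RETRY_MARKER.equals(r.getParameters()[0]));
        log.addHandler(h);

        log.log(Level.SEVERE, "Request to collection failed, retrying",
                new Object[]{RETRY_MARKER});          // filtered out
        log.log(Level.SEVERE, "some other error");    // kept
        log.removeHandler(h);
        return seen.toString();
    }

    public static void main(String[] args) {
        System.out.print(collectUnmarked());  // prints only "some other error"
    }
}
```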



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8016) CloudSolrClient has extremely verbose error logging

2015-09-08 Thread Greg Pendlebury (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8016?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14735813#comment-14735813
 ] 

Greg Pendlebury commented on SOLR-8016:
---

I haven't looked at the innards of the method enough to say for sure. I know in 
our particular use case it is fruitless to keep trying. The nodes are online, 
but cannot answer in the way expected:

{code}
ERROR o.a.s.c.s.i.CloudSolrClient - Request to collection trove failed due to 
(500) org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: 
Error from server at /solr/trove: Expected mime type 
application/octet-stream but got text/html. 


Error 500 {msg=SolrCore 'trove' is not available due to init failure: 
Index locked for write for core 
trove,trace=org.apache.solr.common.SolrException: SolrCore 'trove' is not 
available due to init failure: Index locked for write for core trove
{code}

And then lots and lots more html output.

The Exception that bubbles up to our code is more than enough for us to know 
where to start looking:
{code}
ERROR a.g.n.n.c.r.SolrService - Solr search failed: No live SolrServers 
available to handle this request:[]
{code}







Re: Encrypted index?

2015-09-08 Thread Adam Retter
> The problem with encrypted file systems is that if someone gets access to
> the file system (not the disk, the file system itself, e.g. via ssh), it is wide
> open to them. It's like my work laptop's disk is encrypted, but after I've
> entered my password, all files are readable to me. However, files that are
> password protected, aren't, and that's what security experts want - that
> even if an attacker stole the machine and has all the passwords and the
> time in the world, without the public/private key of the encrypted index,
> he won't be able to read it. I'm not justifying it, just repeating what I
> was told. Even though I think it's silly - if someone managed to get a hold
> of the machine, the login password, root access... what are the chances he
> doesn't already have the other keys?
>

I was rather assuming an encrypted filesystem (a partition, if you like)
that is only available to the specific system user under which our
application runs. This filesystem would only hold the Lucene indexes; it
would not be a general-purpose system boot filesystem as you are describing.


> Anyway, we're here to solve the technical problem, and we obviously aren't
> the ones making these decisions, and it's futile attempting to argue with
> security folks, so let's address the question of how to achieve encryption.
>

I'm not a security folk, some of the responders might be. I am just trying
to deliver a requirement, and have been told by the client that the
suggested encrypted filesystem etc is not good enough.


> I wouldn't go with a Codec, personally, to achieve encryption. It's over
> complicated IMO. Rather an encrypted Directory is a simpler solution. You
> will need to implement an EncryptingIndexOutput and a matching
> DecryptingIndexInput, but that's more or less it. The encryption/decryption
> happens in buffers, so you will want to extend the respective BufferedIO
> classes. The issues mentioned above should give you a head start, even
> though the patches are old and likely don't compile against new versions,
> but they contain the gist of it.
>

Thanks, I will take a look. At the moment I am predominantly just trying to
understand if it is even possible; it is unlikely the client will sign off
any real development work on this until the New Year. If they sign off,
expect some more questions to the list from me :-p
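To make the Directory approach above concrete: below is a minimal sketch of the buffer-level round trip that an EncryptingIndexOutput/DecryptingIndexInput pair would perform (those class names come from the discussion; the helper class, key, and IV here are hypothetical examples). It uses the JDK's javax.crypto with AES/CTR, a stream mode that allows independent decryption of arbitrary positions, which a seekable IndexInput would need.

```java
import javax.crypto.Cipher;
import javax.crypto.spec.IvParameterSpec;
import javax.crypto.spec.SecretKeySpec;

// Hypothetical core of an EncryptingIndexOutput/DecryptingIndexInput pair:
// each buffer is enciphered with AES/CTR so the read path can decrypt a
// buffer on its own, without replaying the whole file.
public class BufferCrypto {
    private final SecretKeySpec key;
    private final IvParameterSpec iv;

    public BufferCrypto(byte[] rawKey, byte[] rawIv) {
        this.key = new SecretKeySpec(rawKey, "AES");  // 16 bytes -> AES-128
        this.iv = new IvParameterSpec(rawIv);
    }

    private byte[] run(int mode, byte[] data) throws Exception {
        Cipher c = Cipher.getInstance("AES/CTR/NoPadding");
        c.init(mode, key, iv);
        return c.doFinal(data);
    }

    public byte[] encrypt(byte[] plain) throws Exception {   // write path
        return run(Cipher.ENCRYPT_MODE, plain);
    }

    public byte[] decrypt(byte[] enc) throws Exception {     // read path
        return run(Cipher.DECRYPT_MODE, enc);
    }

    public static void main(String[] args) throws Exception {
        BufferCrypto bc = new BufferCrypto(
                "0123456789abcdef".getBytes(), "fedcba9876543210".getBytes());
        byte[] enc = bc.encrypt("some postings bytes".getBytes());
        System.out.println(new String(bc.decrypt(enc)));  // prints: some postings bytes
    }
}
```

In a real implementation the IV would be derived from the file position so that each block is independently seekable, and the key would come from outside the process, per the key-handling caveats later in this thread.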


> Just make sure your application, or actually the process running Lucene,
> receive the public/private key in a non obvious way, so that if someone
> does get a hold of the machine, he can't obtain that information!
>
Ok, of course I will try to protect my app and the paths to and from it.
However, I assume that if someone gets root access to the server, they can
just dump the server's RAM to a disk file and read any keys that happen to
be in RAM anyway; I can't really protect against that.

> Also, as for encrypting the terms themselves, beyond the problems
> mentioned above about wildcard queries, there is the risk of someone
> guessing the terms based on their statistics. If the attacker knows the
> corpus domain, I assume it shouldn't be hard for him to guess that a
> certain word with a high DF and TF is probably "the" and proceed from there.
>

Given that my client doesn't seem to understand that this is probably not a
good idea, I suspect the possibility that someone might use statistical
analysis to guess terms and potentially decrypt the index will be of
little worry to them (even if I explain it).
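The statistics point is easy to demonstrate: any deterministic term encryption (a prerequisite for exact-match lookup) maps equal terms to equal ciphertexts, so document and term frequencies survive verbatim. A sketch, using AES/ECB only because it is deterministic, with a throwaway example key:

```java
import javax.crypto.Cipher;
import javax.crypto.spec.SecretKeySpec;
import java.util.*;

// If term encryption is deterministic, equal plaintext terms produce equal
// ciphertexts, so term frequencies are preserved: "the" is still the most
// frequent token, just under a new name.
public class StatsLeakDemo {
    static String encrypt(String term) throws Exception {
        Cipher c = Cipher.getInstance("AES/ECB/PKCS5Padding");
        c.init(Cipher.ENCRYPT_MODE,
               new SecretKeySpec("0123456789abcdef".getBytes(), "AES"));
        return Base64.getEncoder().encodeToString(c.doFinal(term.getBytes()));
    }

    public static void main(String[] args) throws Exception {
        String[] tokens = {"the", "quick", "the", "fox", "the"};
        Map<String, Integer> freq = new HashMap<>();
        for (String t : tokens) {
            freq.merge(encrypt(t), 1, Integer::sum);  // count ciphertexts
        }
        // The top ciphertext occurs 3 times -- exactly the plaintext
        // frequency of "the", even though the bytes are unreadable.
        System.out.println(Collections.max(freq.values()));  // prints 3
    }
}
```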


> Again, I'm no security expert and I've learned it's sometimes futile
> trying to argue with them. If you can convince them though that the system
> as a whole is protected enough, and if breached an encrypted index is
> likely already breached too, you can avoid the complexity. From my
> experience, encryption hurts performance, but you can improve that by eg
> buffering parts unencrypted, but then you also need to prove your program's
> memory is protected...
>
Mainly understood, but can you elaborate on "prove your program's memory is
protected"?


Thanks

-- 
Adam Retter

skype: adam.retter
tweet: adamretter
http://www.adamretter.org.uk


[jira] [Created] (SOLR-8021) TestCSVResponseWriter.testPseudoFields failure

2015-09-08 Thread Steve Rowe (JIRA)
Steve Rowe created SOLR-8021:


 Summary: TestCSVResponseWriter.testPseudoFields failure
 Key: SOLR-8021
 URL: https://issues.apache.org/jira/browse/SOLR-8021
 Project: Solr
  Issue Type: Bug
Reporter: Steve Rowe


My Jenkins found a seed that reproduces for me 
[http://jenkins.sarowe.net/job/Lucene-Solr-tests-5.x-Java8/1814/]:

{noformat}
   [junit4]   2> NOTE: reproduce with: ant test  
-Dtestcase=TestCSVResponseWriter -Dtests.method=testPseudoFields 
-Dtests.seed=56AE2F2A54741427 -Dtests.slow=true 
-Dtests.linedocsfile=/home/jenkins/lucene-data/enwiki.random.lines.txt 
-Dtests.locale=da -Dtests.timezone=Atlantic/St_Helena -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8
   [junit4] FAILURE 0.04s J10 | TestCSVResponseWriter.testPseudoFields <<<
   [junit4]> Throwable #1: org.junit.ComparisonFailure: 
expected:<[3,false,tru]e> but was:<[5,false,fals]e>
   [junit4]>at 
__randomizedtesting.SeedInfo.seed([56AE2F2A54741427:A0BF7D474E7A4CFC]:0)
   [junit4]>at 
org.apache.solr.response.TestCSVResponseWriter.testPseudoFields(TestCSVResponseWriter.java:234)
   [junit4]>at java.lang.Thread.run(Thread.java:745)
[...]
   [junit4]   2> NOTE: leaving temporary files on disk at: 
/var/lib/jenkins/jobs/Lucene-Solr-tests-5.x-Java8/workspace/solr/build/solr-core/test/J10/temp/solr.response.TestCSVResponseWriter_56AE2F2A54741427-001
   [junit4]   2> NOTE: test params are: codec=Asserting(Lucene53): 
{foo_i=Lucene50(blocksize=128), foo_l=PostingsFormat(name=Memory doPackFST= 
true), store_rpt=Lucene50(blocksize=128), foo_s=BlockTreeOrds(blocksize=128), 
shouldbeunstored=BlockTreeOrds(blocksize=128), v2_ss=PostingsFormat(name=Memory 
doPackFST= true), amount_camount_raw=FSTOrd50, 
store_1_coordinate=Lucene50(blocksize=128), foo_dt=FSTOrd50, foo_b=FSTOrd50, 
foo_d=PostingsFormat(name=Memory doPackFST= true), 
id=PostingsFormat(name=Memory doPackFST= true), store_0_coordinate=FSTOrd50, 
foo_f=FSTOrd50, amount_ccurrency=FSTOrd50, v_ss=FSTOrd50}, docValues:{}, 
sim=RandomSimilarityProvider(queryNorm=true,coord=no): {}, locale=da, 
timezone=Atlantic/St_Helena
   [junit4]   2> NOTE: Linux 4.1.0-custom2-amd64 amd64/Oracle Corporation 
1.8.0_45 (64-bit)/cpus=16,threads=1,free=88697488,total=531103744
   [junit4]   2> NOTE: All tests run in this JVM: [TestRandomDVFaceting, 
TestDistribIDF, TestSerializedLuceneMatchVersion, QueryEqualityTest, 
FacetPivotSmallTest, TestQueryUtils, SoftAutoCommitTest, SampleTest, 
TestSchemaNameResource, TestRangeQuery, HdfsSyncSliceTest, CursorMarkTest, 
TestSolrIndexConfig, IndexSchemaTest, TermVectorComponentDistributedTest, 
CurrencyFieldXmlFileTest, BasicZkTest, ScriptEngineTest, 
SchemaVersionSpecificBehaviorTest, TestCollationFieldDocValues, TestConfig, 
TestPerFieldSimilarity, CollectionsAPIDistributedZkTest, 
DistributedSpellCheckComponentTest, DistributedTermsComponentTest, 
DistributedQueryElevationComponentTest, DirectUpdateHandlerOptimizeTest, 
LukeRequestHandlerTest, TestCSVResponseWriter]
   [junit4] Completed [331/537] on J10 in 1.45s, 2 tests, 1 failure <<< 
FAILURES!
{noformat}






Re: Encrypted index?

2015-09-08 Thread Adam Retter
Thanks very much Jack, I will take a look into those.

On 8 September 2015 at 16:21, Jack Krupansky 
wrote:

> Here's an old Lucene issue/patch for an AES encrypted Lucene directory
> class that might give you some ideas:
> https://issues.apache.org/jira/browse/LUCENE-2228
>
> No idea what happened to it.
>
> An even older issue attempting to add encryption for specific fields:
> https://issues.apache.org/jira/browse/LUCENE-737
>
> -- Jack Krupansky
>
> On Tue, Sep 8, 2015 at 11:07 AM, Adam Retter 
> wrote:
>
>>
>> The easiest way to do this is put the index over
>>> an encrypted file system. Encrypting the actual
>>> _tokens_ has a few problems, not the least of
>>> which is that any encryption algorithm worth
>>> its salt is going to make most searching totally
>>> impossible.
>>>
>>
>> I already suggested an encrypted filesystem to the customer but
>> unfortunately that was rejected.
>>
>>
>> Consider run, runner, running and runs with
>>> simple wildcards. Searching for run* requires that all 4
>>> variants have 'run' as a prefix, and any decent
>>> encryption algorithm will not do that. Any
>>> encryption that _does_ make that search possible
>>> is trivially broken. I usually stop my thinking there,
>>> but ngrams, casing, WordDelimiterFilterFactory
>>> all come immediately to mind as "interesting".
>>>
>>
>> I was rather hoping that I could do the encryption and subsequent
>> decryption at a level below the search itself, so that when a query
>> examines the data it sees the decrypted values and things like prefix
>> scans would indeed still work. Previously in this thread, Shawn
>> suggested writing a custom codec; I wonder if that would enable querying?
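Erick's wildcard point above can be made concrete: encrypting the tokens themselves destroys the shared prefixes that run* relies on. A small sketch (AES/ECB with a throwaway key, chosen only because it is deterministic and easy to compare; this is not a recommendation of ECB):

```java
import javax.crypto.Cipher;
import javax.crypto.spec.SecretKeySpec;
import java.util.Arrays;

// "running" starts with "run" in plaintext, but AES ciphertexts of the two
// terms diverge from the very first block, so a run* wildcard can never
// match encrypted tokens.
public class PrefixDemo {
    static byte[] encrypt(String term) throws Exception {
        Cipher c = Cipher.getInstance("AES/ECB/PKCS5Padding");
        c.init(Cipher.ENCRYPT_MODE,
               new SecretKeySpec("0123456789abcdef".getBytes(), "AES"));
        return c.doFinal(term.getBytes());
    }

    public static void main(String[] args) throws Exception {
        byte[] run = encrypt("run");
        byte[] running = encrypt("running");
        boolean sharedBlock =
                Arrays.equals(run, Arrays.copyOf(running, run.length));
        System.out.println("ciphertexts share a prefix block: " + sharedBlock);
        // prints: ciphertexts share a prefix block: false
    }
}
```

Any scheme that did preserve the prefix relationship would, as Erick says, be trivially broken as encryption.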
>>
>>
>>> But what about stored data you ask? Yes, the
>>> stored fields are compressed but stored verbatim,
>>> so I've seen arguments for encrypting _that_ stream,
>>> but that's really a "feel good" fig-leaf. If I get access to the
>>> index and it has position information, I can reconstruct
>>> documents without the stored data as Luke does. The
>>> process is a bit lossy, but the reconstructed document
>>> has enough fidelity that it'll give people seriously
>>> concerned about encryption conniption fits.
>>>
>>
>> Exactly!
>>
>>
>>>
>>> So all in all I have to back up Shawn's comments: You're
>>> better off isolating your Solr/Lucene system, putting
>>> authorization to view _documents_ at that level, and possibly
>>> using an encrypted filesystem.
>>>
>>> FWIW,
>>> Erick
>>>
>>> On Sat, Sep 5, 2015 at 7:27 AM, Shawn Heisey 
>>> wrote:
>>> > On 9/5/2015 5:06 AM, Adam Retter wrote:
>>> >> I wondered if there is any facility already existing in Lucene for
>>> >> encrypting the values stored into the index and still being able to
>>> >> search them?
>>> >>
>>> >> If not, I wondered if anyone could tell me if this is impossible to
>>> >> implement, and if not to point me perhaps in the right direction?
>>> >>
>>> >> I imagine that just the text values and document fields to index (and
>>> >> optionally store) in the index would be either encrypted on the fly by
>>> >> Lucene using perhaps a public/private key mechanism. When a user
>>> issues
>>> >> a search query to Lucene they would also provide a key so that Lucene
>>> >> can decrypt the values as necessary to try and answer their query.
>>> >
>>> > I think you could probably add transparent encryption/decryption at the
>>> > Lucene level in a custom codec.  That probably has implications for
>>> > being able to read the older index when it's time to upgrade Lucene,
>>> > with a complete reindex being the likely solution.  Others will need to
>>> > confirm ... I'm not very familiar with Lucene code, I'm here for Solr.
>>> >
>>> > Any verification of user identity/permission is probably best done in
>>> > your own code, before it makes the Lucene query, and wouldn't
>>> > necessarily be related to the encryption.
>>> >
>>> > Requirements like this are usually driven by paranoid customers or
>>> > product managers.  I think that when you really start to examine what
>>> an
>>> > attacker has to do to actually reach the unencrypted information
>>> (Lucene
>>> > index in this case), they already have acquired so much access that the
>>> > system is completely breached and it won't matter what kind of
>>> > encryption is added.
>>> >
>>> > I find many of these requirements to be silly, and put an incredible
>>> > burden on admin and developer resources with little or no benefit.
>>> > Here's an example of a similar customer encryption requirement which I
>>> > encountered recently:
>>> >
>>> > We have a web application that has three "hops" involved.  A user talks
>>> > to a load balancer, which talks to Apache, where the connection is then
>>> > proxied to a Tomcat server with the AJP protocol.  The customer wanted
>>> > all three hops encrypted.  The first hop was already encrypted, the
>>> > second was easy, but 

[jira] [Commented] (SOLR-7978) Really fix the example/files update-script Java version issues

2015-09-08 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7978?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14735857#comment-14735857
 ] 

ASF subversion and git services commented on SOLR-7978:
---

Commit 1701883 from [~ehatcher] in branch 'dev/trunk'
[ https://svn.apache.org/r1701883 ]

SOLR-7978: Fixed example/files update-script.js to be Java 7 and 8 compatible

> Really fix the example/files update-script Java version issues
> --
>
> Key: SOLR-7978
> URL: https://issues.apache.org/jira/browse/SOLR-7978
> Project: Solr
>  Issue Type: Bug
>  Components: examples
>Affects Versions: 5.3
>Reporter: Erik Hatcher
>Assignee: Erik Hatcher
> Fix For: 5.4
>
> Attachments: SOLR-7978.patch
>
>
> SOLR-7652 addressed this issue by having a Java7 version of the script for 5x 
> and a Java8 version on trunk.  5x on Java8 is broken though.  I wager that 
> there's got to be some incantations that can make the same script work on 
> Java 7 and 8.






[JENKINS-EA] Lucene-Solr-trunk-Linux (64bit/jdk1.9.0-ea-b78) - Build # 14163 - Failure!

2015-09-08 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/14163/
Java: 64bit/jdk1.9.0-ea-b78 -XX:-UseCompressedOops -XX:+UseParallelGC

3 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.SaslZkACLProviderTest

Error Message:
5 threads leaked from SUITE scope at 
org.apache.solr.cloud.SaslZkACLProviderTest: 1) Thread[id=868, 
name=changePwdReplayCache.data, state=TIMED_WAITING, 
group=TGRP-SaslZkACLProviderTest] at sun.misc.Unsafe.park(Native 
Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
 at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
at java.lang.Thread.run(Thread.java:746)2) Thread[id=869, 
name=groupCache.data, state=TIMED_WAITING, group=TGRP-SaslZkACLProviderTest]
 at sun.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
 at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
at java.lang.Thread.run(Thread.java:746)3) Thread[id=870, 
name=kdcReplayCache.data, state=TIMED_WAITING, 
group=TGRP-SaslZkACLProviderTest] at sun.misc.Unsafe.park(Native 
Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
 at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
at java.lang.Thread.run(Thread.java:746)4) Thread[id=867, 
name=apacheds, state=WAITING, group=TGRP-SaslZkACLProviderTest] at 
java.lang.Object.wait(Native Method) at 
java.lang.Object.wait(Object.java:516) at 
java.util.TimerThread.mainLoop(Timer.java:526) at 
java.util.TimerThread.run(Timer.java:505)5) Thread[id=871, 
name=ou=system.data, state=TIMED_WAITING, group=TGRP-SaslZkACLProviderTest] 
at sun.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
 at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
at java.lang.Thread.run(Thread.java:746)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 5 threads leaked from SUITE 
scope at org.apache.solr.cloud.SaslZkACLProviderTest: 
   1) Thread[id=868, name=changePwdReplayCache.data, state=TIMED_WAITING, 
group=TGRP-SaslZkACLProviderTest]
at sun.misc.Unsafe.park(Native Method)
at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
at 

Re: [CI] Lucene 5x Linux 64 Test Only - Build # 63025 - Failure!

2015-09-08 Thread Steve Rowe
I’m a dev list moderator, and I haven’t seen this message (yet?).  I checked my 
Junk folder and it’s not there either (Gmail puts some email to be moderated 
there sometimes).  AFAICT the email was sent at 17:40 EDT or so, but it’s been 
over two hours now.

But yes every few days ES Jenkins emails are still showing up in the dev list 
moderation queue.  The most recent one I saw (and moderated through) was from 
yesterday - my mail client received it at 2:31 AM EDT, so there was very little 
delay:

> From: bu...@elastic.co
> Subject: [CI] Lucene 5x Linux 64 Test Only - Build # 62794 - Failure!
> Date: September 7, 2015 at 2:30:58 AM EDT
> To: d...@elastic.co, dev@lucene.apache.org, sim...@apache.org
> Reply-To: d...@elastic.co
> 
> 
>  BUILD FAILURE
> 
> Build URL 
> http://build-eu-00.elastic.co/job/lucene_linux_java8_64_test_only/62794/
> Project:  lucene_linux_java8_64_test_only
> Randomization:JDKEA8,local,heap[512m],-server +UseG1GC 
> -UseCompressedOops,assert off,sec manager on
> Date of build:Mon, 07 Sep 2015 06:24:10 +0200
> Build duration:   2 hr 6 min

Steve

> On Sep 8, 2015, at 7:13 PM, Michael McCandless  wrote:
> 
> I'll dig ... it repros on beasting.
> 
> Separately, it looks like build failures from the ES jenkins instances still 
> must go through moderation on the Lucene dev list?  (I don't see this build 
> failure on the dev list yet ...).
> 
> We had tried to fix this but it looks like it did not take!
> 
> Mike McCandless
> 
> On Tue, Sep 8, 2015 at 5:39 PM,  wrote:
>  BUILD FAILURE
> 
> Build URL 
> http://build-eu-00.elastic.co/job/lucene_linux_java8_64_test_only/63025/
> Project:  lucene_linux_java8_64_test_only
> Randomization:JDK8,network,heap[799m],-server +UseG1GC 
> +UseCompressedOops,sec manager on 
> Date of build:Tue, 08 Sep 2015 23:36:36 +0200
> Build duration:   2 min 44 sec
> 
> CHANGES
> No Changes
> 
> BUILD ARTIFACTS
> • checkout/lucene/build/core/test/temp/junit4-J0-20150908_233706_469.events
> • checkout/lucene/build/core/test/temp/junit4-J1-20150908_233706_472.events
> • checkout/lucene/build/core/test/temp/junit4-J2-20150908_233706_469.events
> • checkout/lucene/build/core/test/temp/junit4-J3-20150908_233706_472.events
> • checkout/lucene/build/core/test/temp/junit4-J4-20150908_233706_469.events
> • checkout/lucene/build/core/test/temp/junit4-J5-20150908_233706_469.events
> • checkout/lucene/build/core/test/temp/junit4-J6-20150908_233706_472.events
> • checkout/lucene/build/core/test/temp/junit4-J7-20150908_233706_472.events
> 
> FAILED JUNIT TESTS
> Name: org.apache.lucene.index Failed: 1 test(s), Passed: 792 test(s), 
> Skipped: 24 test(s), Total: 817 test(s)
> • Failed: org.apache.lucene.index.TestIndexFileDeleter.testExcInDecRef
> 
> CONSOLE OUTPUT
> [...truncated 1824 lines...]
> [junit4] Tests with failures:
> [junit4] - org.apache.lucene.index.TestIndexFileDeleter.testExcInDecRef
> [junit4]
> [junit4]
> [junit4] JVM J0: 0.80 .. 125.00 = 124.20s
> [junit4] JVM J1: 1.00 .. 127.13 = 126.12s
> [junit4] JVM J2: 1.00 .. 136.48 = 135.48s
> [junit4] JVM J3: 1.00 .. 126.06 = 125.06s
> [junit4] JVM J4: 1.00 .. 132.71 = 131.71s
> [junit4] JVM J5: 1.00 .. 126.27 = 125.27s
> [junit4] JVM J6: 1.00 .. 126.38 = 125.37s
> [junit4] JVM J7: 1.00 .. 125.70 = 124.70s
> [junit4] Execution time total: 2 minutes 16 seconds
> [junit4] Tests summary: 414 suites, 3332 tests, 1 error, 51 ignored (47 
> assumptions)
> BUILD FAILED
> /home/jenkins/workspace/lucene_linux_java8_64_test_only/checkout/lucene/build.xml:50:
>  The following error occurred while executing this line:
> /home/jenkins/workspace/lucene_linux_java8_64_test_only/checkout/lucene/common-build.xml:1452:
>  The following error occurred while executing this line:
> /home/jenkins/workspace/lucene_linux_java8_64_test_only/checkout/lucene/common-build.xml:1006:
>  There were test failures: 414 suites, 3332 tests, 1 error, 51 ignored (47 
> assumptions)
> Total time: 2 minutes 26 seconds
> Build step 'Invoke Ant' marked build as failure
> Archiving artifacts
> Recording test results
> [description-setter] Description set: JDK8,network,heap[799m],-server 
> +UseG1GC +UseCompressedOops,sec manager on
> Email was triggered for: Failure - 1st
> Trigger Failure - Any was overridden by another trigger and will not send an 
> email.
> Trigger Failure - Still was overridden by another trigger and will not send 
> an email.
> Sending email for trigger: Failure - 1st
> 
> 





[jira] [Resolved] (LUCENE-6774) Remove solr hack in MorfologikFilter

2015-09-08 Thread Uwe Schindler (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6774?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uwe Schindler resolved LUCENE-6774.
---
Resolution: Fixed

OK, I backported.

> Remove solr hack in MorfologikFilter
> 
>
> Key: LUCENE-6774
> URL: https://issues.apache.org/jira/browse/LUCENE-6774
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/analysis
>Reporter: Robert Muir
>Assignee: Uwe Schindler
>Priority: Blocker
> Fix For: Trunk, 5.4, 5.3.1
>
> Attachments: LUCENE-6774.patch, LUCENE-6774.patch, LUCENE-6774.patch, 
> LUCENE-6774.patch
>
>
> If solr wants to set the contextClassLoader because its classloading is 
> fucked up, then it needs to do this hack itself: it should not be in lucene 
> code.
> The current mess prevents use of this analyzer in other environments






[jira] [Updated] (LUCENE-6650) Remove dependency of lucene/spatial on oal.search.Filter

2015-09-08 Thread David Smiley (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6650?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley updated LUCENE-6650:
-
Attachment: LUCENE-6650.patch

Hi Adrien.
I'm attaching my patch in progress from when I last touched it a month ago.  
Probably the main thing left to do is to change the Filters in the 
org.apache.lucene.spatial.prefix package to be Queries.  I'll try and resume 
working on it in a week.

> Remove dependency of lucene/spatial on oal.search.Filter
> 
>
> Key: LUCENE-6650
> URL: https://issues.apache.org/jira/browse/LUCENE-6650
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Adrien Grand
>Assignee: David Smiley
> Attachments: LUCENE-6650.patch
>
>
> We should try to remove usage of oal.search.Filter in lucene/spatial. I gave 
> it a try but this module makes non-trivial use of filters so I wouldn't mind 
> some help here.






[jira] [Commented] (LUCENE-6774) Remove solr hack in MorfologikFilter

2015-09-08 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6774?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14734908#comment-14734908
 ] 

ASF subversion and git services commented on LUCENE-6774:
-

Commit 1701811 from [~thetaphi] in branch 'dev/branches/lucene_solr_5_3'
[ https://svn.apache.org/r1701811 ]

Backport:
LUCENE-6774: Remove classloader hack in MorfologikFilter







[jira] [Commented] (LUCENE-6784) Enable query caching by default

2015-09-08 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6784?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14734893#comment-14734893
 ] 

Robert Muir commented on LUCENE-6784:
-

That's true, I guess there is always the situation of consumers' tests. I'm not 
sure there is a real issue here, but it's good to avoid any traps that would 
only be caught in production.

Alternatively, the cache could just be sized as min(32MB, 5% of heap) or 
something, so that if you run with a 256MB heap you still get a cache, just a 
12.8MB one.
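Robert's suggestion as arithmetic, with Runtime.maxMemory() standing in for the heap size. This is a sketch of the proposed sizing rule, not the actual Lucene default:

```java
public class CacheSizing {
    // min(32 MB, 5% of max heap), per the suggestion above
    static long cacheBytes(long maxHeapBytes) {
        return Math.min(32L * 1024 * 1024, maxHeapBytes / 20);
    }

    public static void main(String[] args) {
        long heap = Runtime.getRuntime().maxMemory();
        System.out.println(cacheBytes(heap) + " bytes of query cache");
        // A 256 MB heap gets a ~12.8 MB cache instead of none:
        System.out.println(cacheBytes(256L * 1024 * 1024));  // prints 13421772
    }
}
```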

> Enable query caching by default
> ---
>
> Key: LUCENE-6784
> URL: https://issues.apache.org/jira/browse/LUCENE-6784
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Assignee: Adrien Grand
>Priority: Minor
> Fix For: 5.4
>
> Attachments: LUCENE-6784.patch
>
>
> Now that our main queries have become immutable, I would like to revisit 
> enabling the query cache by default.






Re: Encrypted index?

2015-09-08 Thread Adam Retter
>
> I think you could probably add transparent encryption/decryption at the
> Lucene level in a custom codec.  That probably has implications for
> being able to read the older index when it's time to upgrade Lucene,
> with a complete reindex being the likely solution.  Others will need to
> confirm ... I'm not very familiar with Lucene code, I'm here for Solr.
>

Thanks, that sounds interesting, and it's an avenue I will investigate further...


Any verification of user identity/permission is probably best done in
> your own code, before it makes the Lucene query, and wouldn't
> necessarily be related to the encryption.
>

Okay, but somehow my codec is going to need to know the key to use to
encrypt/decrypt the data; only the user has that, so they will need to pass
it in somehow, I imagine.


Requirements like this are usually driven by paranoid customers or
> product managers.  I think that when you really start to examine what an
> attacker has to do to actually reach the unencrypted information (Lucene
> index in this case), they already have acquired so much access that the
> system is completely breached and it won't matter what kind of
> encryption is added.
>
> I find many of these requirements to be silly, and put an incredible
> burden on admin and developer resources with little or no benefit.
>

You're preaching to the converted ;-) I already tried pointing out the 
futility of this approach and that it really doesn't add much, if anything, 
to the security of the system. I also suggested just using an encrypted 
filesystem. Unfortunately, as you have most likely experienced, customers' 
requirements, whether wrong or right, often have to be fulfilled if you want 
to get paid by them.


-- 
Adam Retter

skype: adam.retter
tweet: adamretter
http://www.adamretter.org.uk


RE: 5.3.1 bug fix release

2015-09-08 Thread Uwe Schindler
Hi,

 

I also backported the Java 9 fix, so I can start the Jenkins test of 5.3 again.

 

Uwe

 

-

Uwe Schindler

H.-H.-Meier-Allee 63, D-28213 Bremen

  http://www.thetaphi.de

eMail: u...@thetaphi.de

 

From: Noble Paul [mailto:noble.p...@gmail.com] 
Sent: Tuesday, September 08, 2015 3:20 PM
To: Lucene Dev
Subject: Re: 5.3.1 bug fix release

 

I would like to start the process ASAP.  I volunteer to be the RM.  Please let 
me know the list of tickets you would like to include in the release and we can 
coordinate the rest 

On Sep 8, 2015 2:32 AM, "Shawn Heisey"  wrote:

On 9/5/2015 10:43 PM, Shalin Shekhar Mangar wrote:
> +1 for a 5.3.1 -- seems like there are some serious bugs around the
> new security module.

Since I'm not on the PMC, I don't know whether my vote counts, but I
vote +1.

SOLR-6188 is a simple patch that fixes a very confusing error that our
more advanced users have reported.  I haven't yet committed the change,
so it could go to either the 5.3 branch or branch_5x.

If we are going ahead with 5.3.1, my plan is to commit it there, subject
to approval by the RM.

Thanks,
Shawn





Re: Encrypted index?

2015-09-08 Thread Adam Retter
Thanks Walter, that would be a neat solution if we just wanted to store
values, but we also want full-text query capabilities.

On 5 September 2015 at 17:56, Walter Underwood 
wrote:

> Alternatively, do not store values in the Solr fields. Return a key and
> fetch encrypted data from a database or other repository.
>
> wunder
> Walter Underwood
> wun...@wunderwood.org
> http://observer.wunderwood.org/  (my blog)
>
>
> On Sep 5, 2015, at 9:40 AM, Erick Erickson 
> wrote:
>
> The easiest way to do this is put the index over
> an encrypted file system. Encrypting the actual
> _tokens_ has a few problems, not the least of
> which is that any encryption algorithm worth
> its salt is going to make most searching totally
> impossible.
>
> Consider run, runner, running and runs with
> simple wildcards. Searching for run* requires that all 4
> variants have 'run' as a prefix, and any decent
> encryption algorithm will not do that. Any
> encryption that _does_ make that search possible
> is trivially broken. I usually stop my thinking there,
> but ngrams, casing, WordDelimiterFilterFactory
> all come immediately to mind as "interesting".
>
> But what about stored data you ask? Yes, the
> stored fields are compressed but stored verbatim,
> so I've seen arguments for encrypting _that_ stream,
> but that's really a "feel good" fig-leaf. If I get access to the
> index and it has position information, I can reconstruct
> documents without the stored data as Luke does. The
> process is a bit lossy, but the reconstructed document
> has enough fidelity that it'll give people seriously
> concerned about encryption conniption fits.
>
> So all in all I have to back up Shawn's comments: You're
> better off isolating your Solr/Lucene system, putting
> authorization to view _documents_ at that level, and possibly
> using an encrypted filesystem.
>
> FWIW,
> Erick
>
> On Sat, Sep 5, 2015 at 7:27 AM, Shawn Heisey  wrote:
>
> On 9/5/2015 5:06 AM, Adam Retter wrote:
>
> I wondered if there is any facility already existing in Lucene for
> encrypting the values stored into the index and still being able to
> search them?
>
> If not, I wondered if anyone could tell me if this is impossible to
> implement, and if not to point me perhaps in the right direction?
>
> I imagine that just the text values and document fields to index (and
> optionally store) in the index would be either encrypted on the fly by
> Lucene using perhaps a public/private key mechanism. When a user issues
> a search query to Lucene they would also provide a key so that Lucene
> can decrypt the values as necessary to try and answer their query.
>
>
> I think you could probably add transparent encryption/decryption at the
> Lucene level in a custom codec.  That probably has implications for
> being able to read the older index when it's time to upgrade Lucene,
> with a complete reindex being the likely solution.  Others will need to
> confirm ... I'm not very familiar with Lucene code, I'm here for Solr.
>
> Any verification of user identity/permission is probably best done in
> your own code, before it makes the Lucene query, and wouldn't
> necessarily be related to the encryption.
>
> Requirements like this are usually driven by paranoid customers or
> product managers.  I think that when you really start to examine what an
> attacker has to do to actually reach the unencrypted information (Lucene
> index in this case), they already have acquired so much access that the
> system is completely breached and it won't matter what kind of
> encryption is added.
>
> I find many of these requirements to be silly, and put an incredible
> burden on admin and developer resources with little or no benefit.
> Here's an example of similar customer encryption requirement which I
> encountered recently:
>
> We have a web application that has three "hops" involved.  A user talks
> to a load balancer, which talks to Apache, where the connection is then
> proxied to a Tomcat server with the AJP protocol.  The customer wanted
> all three hops encrypted.  The first hop was already encrypted, the
> second was easy, but the third proved to be very difficult.  Finally we
> decided that we did not need load balancing on that last hop, and it
> could simply talk to localhost, eliminating the need to encrypt it.
>
> The customer was worried about an attacker sniffing the traffic on the
> LAN and seeing details like passwords.  I consider this to be an insane
> requirement.  In order to sniff that traffic, the attacker would need
> one of two things:  Root access on a server, or physical access to the
> infrastructure.  Physical access can be escalated to root access if you
> know what you're doing.  Once someone has either of those things,
> encrypted traffic won't matter, they will be able to learn anything they
> need or do any damage they desire, without even needing to sniff the
> traffic.
>
> Thanks,
> Shawn
>
>
> 
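Erick's argument above — that any transform strong enough to hide tokens also destroys the shared prefixes wildcard search depends on — can be sketched concretely. SHA-256 hashing stands in here for any secure deterministic per-token encryption; the class and method names are illustrative, not Lucene API:

```java
import java.math.BigInteger;
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

public class EncryptedPrefixDemo {
    // Stand-in for a secure deterministic transform of a single token.
    // Any transform strong enough to hide the token must also destroy
    // prefix structure; SHA-256 is used here only to illustrate that.
    static String scramble(String token) throws Exception {
        byte[] digest = MessageDigest.getInstance("SHA-256")
                .digest(token.getBytes(StandardCharsets.UTF_8));
        return String.format("%064x", new BigInteger(1, digest));
    }

    public static void main(String[] args) throws Exception {
        String[] variants = {"run", "runner", "running", "runs"};
        // In plaintext, run* matches all four variants by shared prefix.
        for (String v : variants) {
            System.out.println(v + " -> " + scramble(v).substring(0, 12) + "...");
        }
        // The scrambled forms are unrelated strings: a terms dictionary
        // built over them has no common prefix left for run* to seek to.
    }
}
```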

[jira] [Commented] (LUCENE-6758) Adding a SHOULD clause to a BQ over an empty field clears the score when using DefaultSimilarity

2015-09-08 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6758?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14734978#comment-14734978
 ] 

Michael McCandless commented on LUCENE-6758:


+1

> Adding a SHOULD clause to a BQ over an empty field clears the score when 
> using DefaultSimilarity
> 
>
> Key: LUCENE-6758
> URL: https://issues.apache.org/jira/browse/LUCENE-6758
> Project: Lucene - Core
>  Issue Type: Bug
>Affects Versions: Trunk
>Reporter: Terry Smith
> Attachments: LUCENE-6758.patch, LUCENE-6758.patch
>
>
> Patch with unit test to show the bug will be attached.
> I've narrowed this change in behavior with git bisect to the following commit:
> {noformat}
> commit 698b4b56f0f2463b21c9e3bc67b8b47d635b7d1f
> Author: Robert Muir 
> Date:   Thu Aug 13 17:37:15 2015 +
> LUCENE-6711: Use CollectionStatistics.docCount() for IDF and average 
> field length computations
> 
> git-svn-id: https://svn.apache.org/repos/asf/lucene/dev/trunk@1695744 
> 13f79535-47bb-0310-9956-ffa450edef68
> {noformat}






[jira] [Comment Edited] (LUCENE-5205) SpanQueryParser with recursion, analysis and syntax very similar to classic QueryParser

2015-09-08 Thread Tim Allison (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14734997#comment-14734997
 ] 

Tim Allison edited comment on LUCENE-5205 at 9/8/15 3:34 PM:
-

This looks like a genuine issue in the Highlighter.  I was hoping that it was 
LUCENE-5503 so that it would get some attention, but I don't think it is.

This is the minimal code to show the problem:
{code}
  @Test
  public void testEmbeddedSpanNearHighlighterIssue() throws Exception {
    String field = "f";
    Analyzer analyzer = new StandardAnalyzer();
    String text = "b c d";

    //SpanQueryParser p = new SpanQueryParser(field, analyzer);
    //Query q = p.parse("\"(b [c z]) d\"~2");
    SpanQuery cz = new SpanNearQuery(
        new SpanQuery[]{
            new SpanTermQuery(new Term(field, "c")),
            new SpanTermQuery(new Term(field, "z"))
        }, 0, true);
    SpanQuery bcz = new SpanOrQuery(
        new SpanTermQuery(new Term(field, "b")),
        cz);
    SpanQuery q = new SpanNearQuery(
        new SpanQuery[]{
            bcz,
            new SpanTermQuery(new Term(field, "d"))
        }, 2, false);
    QueryScorer scorer = new QueryScorer(q, field);
    scorer.setExpandMultiTermQuery(true);

    Fragmenter fragmenter = new SimpleFragmenter(1000);

    Highlighter highlighter = new Highlighter(
        new SimpleHTMLFormatter(),
        new SimpleHTMLEncoder(),
        scorer);
    highlighter.setTextFragmenter(fragmenter);
    String[] snippets = highlighter.getBestFragments(analyzer, field, text, 3);
    assertEquals(1, snippets.length);
    assertFalse(snippets[0].contains("<B>c</B>"));
  }
{code}

This problem does not happen if "c" comes before "b" or after "d" in the text: 
"c b d" or "b d c".



> SpanQueryParser with recursion, analysis and syntax very similar to classic 
> QueryParser
> ---
>
> Key: LUCENE-5205
> URL: https://issues.apache.org/jira/browse/LUCENE-5205
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/queryparser
>Reporter: Tim Allison
>  Labels: patch
> Attachments: LUCENE-5205-cleanup-tests.patch, 
> LUCENE-5205-date-pkg-prvt.patch, LUCENE-5205.patch.gz, LUCENE-5205.patch.gz, 
> LUCENE-5205_dateTestReInitPkgPrvt.patch, 
> LUCENE-5205_improve_stop_word_handling.patch, 
> LUCENE-5205_smallTestMods.patch, LUCENE_5205.patch, 
> SpanQueryParser_v1.patch.gz, patch.txt
>
>
> This parser extends QueryParserBase and includes functionality from:
> * Classic QueryParser: most of its syntax
> * SurroundQueryParser: recursive parsing for "near" and "not" clauses.
> * ComplexPhraseQueryParser: can handle "near" queries that include multiterms 
> (wildcard, fuzzy, regex, prefix),
> * AnalyzingQueryParser: has an option to analyze multiterms.
> At a high level, there's a first pass BooleanQuery/field parser and then a 
> span query parser handles all terminal nodes and phrases.
> Same as classic syntax:
> * term: test 
> * fuzzy: roam~0.8, roam~2
> * wildcard: te?t, test*, t*st
> * regex: /\[mb\]oat/
> * phrase: "jakarta apache"
> * phrase with slop: "jakarta apache"~3
> * default "or" clause: jakarta apache
> 

[jira] [Commented] (LUCENE-6776) Randomized planet model shows up additional XYZBounds errors

2015-09-08 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6776?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14734998#comment-14734998
 ] 

Michael McCandless commented on LUCENE-6776:


I've been beasting this last patch for ~6 hours ... no failures!  I think it's 
a keeper ... I'll commit soon.

> Randomized planet model shows up additional XYZBounds errors
> 
>
> Key: LUCENE-6776
> URL: https://issues.apache.org/jira/browse/LUCENE-6776
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/spatial
>Reporter: Karl Wright
> Attachments: LUCENE-6776.patch, LUCENE-6776.patch, LUCENE-6776.patch, 
> LUCENE-6776.patch, LUCENE-6776.patch, LUCENE-6776.patch, LUCENE-6776.patch, 
> LUCENE-6776.patch
>
>
> Adding randomized PlanetModel construction causes points to be generated 
> inside a shape that are outside XYZBounds.  [~mikemccand] please take note.






Re: 5.3.1 bug fix release

2015-09-08 Thread Erik Hatcher
I would like to port the following:

From branch_5x:
* SOLR-7972: Fix VelocityResponseWriter template encoding issue.
  Templates must be UTF-8 encoded. (Erik Hatcher)

* SOLR-7929: SimplePostTool (also bin/post) -filetypes "*" now works properly 
in 'web' mode (Erik Hatcher)

And get SOLR-7978 (Really fix the example/files update-script Java version 
issues) resolved; current patch is a test case and such, but the real fix is 
just to patch example/files/conf/update-script.js (to make it work on Java 8 
and 7, branch 5x example/files only works on Java 7 currently).

I can get these ported/committed by end of day today.

Thanks,
Erik



> On Sep 8, 2015, at 9:19 AM, Noble Paul  wrote:
> 
> I would like to start the process ASAP.  I volunteer to be the RM.  Please 
> let me know the list of tickets you would like to include in the release and 
> we can coordinate the rest
> 
> On Sep 8, 2015 2:32 AM, "Shawn Heisey"  wrote:
> On 9/5/2015 10:43 PM, Shalin Shekhar Mangar wrote:
> > +1 for a 5.3.1 -- seems like there are some serious bugs around the
> > new security module.
> 
> Since I'm not on the PMC, I don't know whether my vote counts, but I
> vote +1.
> 
> SOLR-6188 is a simple patch that fixes a very confusing error that our
> more advanced users have reported.  I haven't yet committed the change,
> so it could go to either the 5.3 branch or branch_5x.
> 
> If we are going ahead with 5.3.1, my plan is to commit it there, subject
> to approval by the RM.
> 
> Thanks,
> Shawn
> 
> 
> 





[jira] [Comment Edited] (SOLR-7435) NPE in FieldCollapsingQParser

2015-09-08 Thread Brandon Chapman (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7435?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14735003#comment-14735003
 ] 

Brandon Chapman edited comment on SOLR-7435 at 9/8/15 3:40 PM:
---

[~joel.bernstein], this also sometimes works and sometimes gets an exception 
for me in Solr 4.10.3.

{code}



{code}
{code}
{
  "responseHeader": {
"status": 500,
"QTime": 89,
"params": {
  "facet": "true",
  "fl": "psid, bsin, groupId, sku, merchant",
  "indent": "true",
  "q": "type_s:parent",
  "_": "1441726236828",
  "facet.field": "bsin",
  "wt": "json",
  "fq": [
"{!collapse field=groupId  min=sourceRank cost=201}",
"{!collapse field=merchant cost=200}"
  ],
  "rows": "10"
}
  },
  "error": {
"trace": "java.lang.NullPointerException\n\tat 
org.apache.solr.search.CollapsingQParserPlugin$CollapsingFieldValueCollector.finish(CollapsingQParserPlugin.java:632)\n\tat
 
org.apache.solr.search.CollapsingQParserPlugin$CollapsingScoreCollector.finish(CollapsingQParserPlugin.java:525)\n\tat
 
org.apache.solr.search.SolrIndexSearcher.getDocSetScore(SolrIndexSearcher.java:918)\n\tat
 
org.apache.solr.search.SolrIndexSearcher.getDocSet(SolrIndexSearcher.java:938)\n\tat
 
org.apache.solr.search.SolrIndexSearcher.getDocListC(SolrIndexSearcher.java:1366)\n\tat
 
org.apache.solr.search.SolrIndexSearcher.search(SolrIndexSearcher.java:514)\n\tat
 
org.apache.solr.handler.component.QueryComponent.process(QueryComponent.java:484)\n\tat
 
org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:218)\n\tat
 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)\n\tat
 org.apache.solr.core.SolrCore.execute(SolrCore.java:1976)\n\tat 
org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:777)\n\tat
 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:418)\n\tat
 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:207)\n\tat
 
org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:243)\n\tat
 
org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:210)\n\tat
 
org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:222)\n\tat
 
org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:123)\n\tat
 
org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:168)\n\tat
 
org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:99)\n\tat
 
org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:929)\n\tat 
org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:118)\n\tat
 
org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:407)\n\tat
 
org.apache.coyote.http11.AbstractHttp11Processor.process(AbstractHttp11Processor.java:1002)\n\tat
 
org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:585)\n\tat
 
org.apache.tomcat.util.net.JIoEndpoint$SocketProcessor.run(JIoEndpoint.java:310)\n\tat
 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)\n\tat
 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)\n\tat
 java.lang.Thread.run(Thread.java:744)\n",
"code": 500
  }
}
{code}


[jira] [Commented] (SOLR-8016) CloudSolrClient has extremely verbose error logging

2015-09-08 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8016?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14734935#comment-14734935
 ] 

Mark Miller commented on SOLR-8016:
---

I think it is a case of incorrect logging levels.

This method is expected to have to retry sometimes. When it does, it prints out 
all kinds of errors and warnings. But this is an expected case.

Really, at most, the error and warn logging done in this area should be info 
and then only perhaps log the error when the retries are done without success.

Markers don't seem very satisfying - do we know what implementations respect 
them?

> CloudSolrClient has extremely verbose error logging
> ---
>
> Key: SOLR-8016
> URL: https://issues.apache.org/jira/browse/SOLR-8016
> Project: Solr
>  Issue Type: Improvement
>  Components: clients - java
>Affects Versions: 5.2.1, Trunk
>Reporter: Greg Pendlebury
>Priority: Minor
>  Labels: easyfix
>
> CloudSolrClient has this error logging line which is fairly annoying:
> {code}
>   log.error("Request to collection {} failed due to ("+errorCode+
>   ") {}, retry? "+retryCount, collection, rootCause.toString());
> {code}
> Given that this is a client library that gets embedded into other 
> applications, this line is very problematic to handle gracefully. In today's 
> example I was looking at, every failed search was logging over 100 lines, 
> including the full HTML response from the responding node in the cluster.
> The resulting SolrServerException that comes out to our application is 
> handled appropriately but we can't stop this class complaining in logs 
> without suppressing the entire ERROR channel, which we don't want to do. This 
> is the only direct line writing to the log I could find in the client, so we 
> _could_ suppress errors, but that just feels dirty, and fragile for the 
> future.
> From looking at the code I am fairly certain it is not as simple as throwing 
> an exception instead of logging... it is right in the middle of the method. I 
> suspect the simplest answer is adding a marker 
> (http://www.slf4j.org/api/org/slf4j/Marker.html) to the logging call.
> Then solrj users can choose what to do with these log entries. I don't know 
> if there is a broader strategy for handling this that I am ignorant of; 
> apologies if that is the case.
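A self-contained sketch of the marker idea suggested in the issue. In real code this would use SLF4J's org.slf4j.Marker and MarkerFactory; the marker name and the mini-logger below are illustrative only. The client tags retry-related noise with a marker, and the embedding application filters on that marker instead of suppressing the whole ERROR channel:

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class MarkerFilterSketch {
    static final String CLIENT_RETRY = "CLIENT_RETRY"; // hypothetical marker name

    final Set<String> suppressedMarkers = new HashSet<>();
    final List<String> emitted = new ArrayList<>();

    // Stand-in for log.error(Marker, String, ...): drop the event if the
    // user's filter suppresses its marker, otherwise record it.
    void error(String marker, String message) {
        if (marker != null && suppressedMarkers.contains(marker)) {
            return; // dropped by the user's configuration, not by the library
        }
        emitted.add("ERROR " + message);
    }

    public static void main(String[] args) {
        MarkerFilterSketch log = new MarkerFilterSketch();
        log.suppressedMarkers.add(CLIENT_RETRY); // the solrj user's choice

        // What the retry loop would emit on a transient failure:
        log.error(CLIENT_RETRY, "Request to collection c1 failed, retry? 1");
        // A terminal failure, logged without the retry marker, still shows:
        log.error(null, "Request to collection c1 failed after retries");

        System.out.println(log.emitted); // only the terminal failure remains
    }
}
```

This matches Mark's suggestion above: retry-time events can be tagged (or demoted), while the final failure after exhausted retries stays at ERROR.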





