[jira] [Commented] (LUCENE-2446) Add checksums to Lucene segment files

2014-03-31 Thread Shai Erera (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-2446?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13956170#comment-13956170
 ] 

Shai Erera commented on LUCENE-2446:


+1

> Add checksums to Lucene segment files
> -
>
> Key: LUCENE-2446
> URL: https://issues.apache.org/jira/browse/LUCENE-2446
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/index
>Reporter: Lance Norskog
>  Labels: checksum
> Attachments: LUCENE-2446.patch
>
>
> It would be useful for the different files in a Lucene index to include 
> checksums. This would make it easy to spot corruption while copying index 
> files around; the various cloud efforts assume many more data-copying 
> operations than older single-index implementations.
> This feature might be much easier to implement if all index files are created 
> in a sequential fashion. This issue therefore depends on [LUCENE-2373].
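The core idea - compute a sequential checksum as each file is written and verify 
it after a copy - is simple to illustrate. Below is a minimal sketch using 
java.util.zip.CRC32 over a finished file; it is illustrative only, not the 
Lucene API the patch adds:

{code:java}
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.zip.CRC32;

public final class FileChecksum {
  // Stream the whole file through a CRC32, the same kind of sequential
  // checksum discussed above; compare the result against the value recorded
  // at write time to detect corruption introduced while copying.
  public static long crc32(Path path) throws IOException {
    CRC32 crc = new CRC32();
    byte[] buf = new byte[8192];
    try (InputStream in = Files.newInputStream(path)) {
      int n;
      while ((n = in.read(buf)) != -1) {
        crc.update(buf, 0, n);
      }
    }
    return crc.getValue();
  }
}
{code}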



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-2446) Add checksums to Lucene segment files

2014-03-31 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-2446?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13956169#comment-13956169
 ] 

Robert Muir commented on LUCENE-2446:
-

checkIntegrity is fine with me. We can also rename it before releasing. 

I'll mark it internal for now...

> Add checksums to Lucene segment files
> -
>
> Key: LUCENE-2446
> URL: https://issues.apache.org/jira/browse/LUCENE-2446
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/index
>Reporter: Lance Norskog
>  Labels: checksum
> Attachments: LUCENE-2446.patch
>
>
> It would be useful for the different files in a Lucene index to include 
> checksums. This would make it easy to spot corruption while copying index 
> files around; the various cloud efforts assume many more data-copying 
> operations than older single-index implementations.
> This feature might be much easier to implement if all index files are created 
> in a sequential fashion. This issue therefore depends on [LUCENE-2373].



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-2446) Add checksums to Lucene segment files

2014-03-31 Thread Shai Erera (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-2446?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13956167#comment-13956167
 ] 

Shai Erera commented on LUCENE-2446:


bq. You guys figure out the name for the method, i really dont care. I will 
wait on the issue until you guys bikeshed it out.

Uwe, so what do you think: {{validateChecksums}} or {{checkIntegrity}}? :)
Let's get this thing committed so Jenkins can bless it.

> Add checksums to Lucene segment files
> -
>
> Key: LUCENE-2446
> URL: https://issues.apache.org/jira/browse/LUCENE-2446
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/index
>Reporter: Lance Norskog
>  Labels: checksum
> Attachments: LUCENE-2446.patch
>
>
> It would be useful for the different files in a Lucene index to include 
> checksums. This would make it easy to spot corruption while copying index 
> files around; the various cloud efforts assume many more data-copying 
> operations than older single-index implementations.
> This feature might be much easier to implement if all index files are created 
> in a sequential fashion. This issue therefore depends on [LUCENE-2373].



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5561) NativeUnixDirectory is broken

2014-03-31 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5561?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13956129#comment-13956129
 ] 

Robert Muir commented on LUCENE-5561:
-

Wow, the test base class is awesome! Please resolve LUCENE-5067 once you get 
this in; I think this is a really useful thing to have in test-framework and 
for our tests.

> NativeUnixDirectory is broken
> -
>
> Key: LUCENE-5561
> URL: https://issues.apache.org/jira/browse/LUCENE-5561
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Michael McCandless
>Assignee: Michael McCandless
> Fix For: 4.8, 5.0
>
> Attachments: LUCENE-5561.patch
>
>
> Several things:
>   * It assumed ByteBuffer.allocateDirect would be page-aligned, but
> that's no longer true in Java 1.7
>   * It failed to throw FNFE if a file didn't exist (it threw IOException
> instead)
>   * It didn't have a default ctor taking File (so it was hard to run
> all tests against it)
>   * It didn't have a test case
>   * Some Javadocs problems
>   * I cut over to FilterDirectory
> I tried to cut over to BufferedIndexOutput since this is essentially
> all that NativeUnixIO is doing ... but it's not simple because BIO
> sometimes flushes non-full (non-aligned) buffers even before the end
> of the file (its writeBytes method).
> I also factored out a BaseDirectoryTestCase, and tried to fold in
> "generic" Directory tests, and added/cutover explicit tests for the
> core directory impls.
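On the first bullet above: since ByteBuffer.allocateDirect no longer guarantees 
page alignment, the usual workaround is to over-allocate and slice at the next 
aligned offset. A sketch follows; reading the native address through the 
internal sun.nio.ch.DirectBuffer interface is an assumption about how one would 
do this on a 2014-era JVM, not necessarily what the patch does:

{code:java}
import java.nio.ByteBuffer;
import sun.nio.ch.DirectBuffer;  // internal JDK API, illustration only

public final class AlignedBuffers {
  // Over-allocate by one alignment unit, then slice so the first byte of the
  // returned buffer sits on an aligned native address.
  public static ByteBuffer allocateAligned(int size, int align) {
    ByteBuffer raw = ByteBuffer.allocateDirect(size + align);
    long addr = ((DirectBuffer) raw).address();
    int offset = (int) ((align - (addr % align)) % align);
    raw.position(offset);
    raw.limit(offset + size);
    return raw.slice();  // zero-copy view starting at the aligned address
  }
}
{code}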



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5205) [PATCH] SpanQueryParser with recursion, analysis and syntax very similar to classic QueryParser

2014-03-31 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13956120#comment-13956120
 ] 

ASF subversion and git services commented on LUCENE-5205:
-

Commit 1583537 from [~rcmuir] in branch 'dev/branches/lucene5205'
[ https://svn.apache.org/r1583537 ]

LUCENE-5205: merge trunk

> [PATCH] SpanQueryParser with recursion, analysis and syntax very similar to 
> classic QueryParser
> ---
>
> Key: LUCENE-5205
> URL: https://issues.apache.org/jira/browse/LUCENE-5205
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/queryparser
>Reporter: Tim Allison
>  Labels: patch
> Fix For: 4.8
>
> Attachments: LUCENE-5205-cleanup-tests.patch, 
> LUCENE-5205-date-pkg-prvt.patch, LUCENE-5205.patch.gz, LUCENE-5205.patch.gz, 
> LUCENE-5205_dateTestReInitPkgPrvt.patch, 
> LUCENE-5205_improve_stop_word_handling.patch, 
> LUCENE-5205_smallTestMods.patch, LUCENE_5205.patch, 
> SpanQueryParser_v1.patch.gz, patch.txt
>
>
> This parser extends QueryParserBase and includes functionality from:
> * Classic QueryParser: most of its syntax
> * SurroundQueryParser: recursive parsing for "near" and "not" clauses.
> * ComplexPhraseQueryParser: can handle "near" queries that include multiterms 
> (wildcard, fuzzy, regex, prefix),
> * AnalyzingQueryParser: has an option to analyze multiterms.
> At a high level, there's a first pass BooleanQuery/field parser and then a 
> span query parser handles all terminal nodes and phrases.
> Same as classic syntax:
> * term: test 
> * fuzzy: roam~0.8, roam~2
> * wildcard: te?t, test*, t*st
> * regex: /\[mb\]oat/
> * phrase: "jakarta apache"
> * phrase with slop: "jakarta apache"~3
> * default "or" clause: jakarta apache
> * grouping "or" clause: (jakarta apache)
> * boolean and +/-: (lucene OR apache) NOT jakarta; +lucene +apache -jakarta
> * multiple fields: title:lucene author:hatcher
>  
> Main additions in SpanQueryParser syntax vs. classic syntax:
> * Can require "in order" for phrases with slop with the \~> operator: 
> "jakarta apache"\~>3
> * Can specify "not near": "fever bieber"!\~3,10 ::
> find "fever" but not if "bieber" appears within 3 words before or 10 
> words after it.
> * Fully recursive phrasal queries with \[ and \]; as in: \[\[jakarta 
> apache\]~3 lucene\]\~>4 :: 
> find "jakarta" within 3 words of "apache", and that hit has to be within 
> four words before "lucene"
> * Can also use \[\] for single level phrasal queries instead of " as in: 
> \[jakarta apache\]
> * Can use "or grouping" clauses in phrasal queries: "apache (lucene solr)"\~3 
> :: find "apache" and then either "lucene" or "solr" within three words.
> * Can use multiterms in phrasal queries: "jakarta\~1 ap*che"\~2
> * Did I mention full recursion: \[\[jakarta\~1 ap*che\]\~2 (solr~ 
> /l\[ou\]\+\[cs\]\[en\]\+/)]\~10 :: Find something like "jakarta" within two 
> words of "ap*che" and that hit has to be within ten words of something like 
> "solr" or that "lucene" regex.
> * Can require at least x number of hits at boolean level: "apache AND (lucene 
> solr tika)~2
> * Can use negative only query: -jakarta :: Find all docs that don't contain 
> "jakarta"
> * Can use an edit distance > 2 for fuzzy query via SlowFuzzyQuery (beware of 
> potential performance issues!).
> Trivial additions:
> * Can specify prefix length in fuzzy queries: jakarta~1,2 (edit distance =1, 
> prefix =2)
> * Can specify Optimal String Alignment (OSA) vs. Levenshtein for distance 
> <=2: jakarta~1 (OSA) vs. jakarta~>1 (Levenshtein)
> This parser can be very useful for concordance tasks (see also LUCENE-5317 
> and LUCENE-5318) and for analytical search.  
> Until LUCENE-2878 is closed, this might have a use for fans of SpanQuery.
> Most of the documentation is in the javadoc for SpanQueryParser.
> Any and all feedback is welcome.  Thank you.
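For readers who want to try the patch, a hypothetical usage sketch follows. The 
constructor signature is assumed from QueryParserBase conventions (Version, 
default field, Analyzer) and may not match the attached patch exactly:

{code:java}
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.search.Query;
import org.apache.lucene.util.Version;

public class SpanParserDemo {
  public static void main(String[] args) throws Exception {
    // SpanQueryParser is the class from the attached patch; the constructor
    // arguments here are an assumption, not the patch's confirmed API.
    SpanQueryParser parser = new SpanQueryParser(
        Version.LUCENE_47, "text", new StandardAnalyzer(Version.LUCENE_47));
    // An ordered near query with slop 3, one of the syntax extensions above.
    Query q = parser.parse("\"jakarta apache\"~>3");
    System.out.println(q);
  }
}
{code}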



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5908) Make REQUESTSTATUS call non-blocking and non-blocked

2014-03-31 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5908?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13956118#comment-13956118
 ] 

ASF subversion and git services commented on SOLR-5908:
---

Commit 1583536 from [~anshumg] in branch 'dev/branches/branch_4x'
[ https://svn.apache.org/r1583536 ]

SOLR-5908: Merging commit from trunk (r1583532)

> Make REQUESTSTATUS call non-blocking and non-blocked
> 
>
> Key: SOLR-5908
> URL: https://issues.apache.org/jira/browse/SOLR-5908
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrCloud
>Reporter: Anshum Gupta
>Assignee: Anshum Gupta
> Fix For: 4.8, 5.0
>
> Attachments: SOLR-5908.patch, SOLR-5908.patch
>
>
> Currently, the REQUESTSTATUS Collection API call is blocked by any other call 
> in the OCP work queue.
> Make it independent and non-blocked/non-blocking.
> This would be handled as part of making the OCP multi-threaded, but I'm 
> opening this issue to explore other possible options for handling this.
> If the final fix happens via SOLR-5681, this will be resolved when SOLR-5681 
> gets resolved.
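For reference, REQUESTSTATUS is just another Collections API action, so it can 
be polled through SolrJ's generic request API. A sketch (host and request id 
are made up; the id is whatever was passed as the async parameter earlier):

{code:java}
import org.apache.solr.client.solrj.impl.HttpSolrServer;
import org.apache.solr.client.solrj.request.QueryRequest;
import org.apache.solr.common.params.ModifiableSolrParams;
import org.apache.solr.common.util.NamedList;

public class RequestStatusDemo {
  public static void main(String[] args) throws Exception {
    HttpSolrServer server = new HttpSolrServer("http://localhost:8983/solr");
    ModifiableSolrParams params = new ModifiableSolrParams();
    params.set("action", "REQUESTSTATUS");
    params.set("requestid", "1000");  // id given earlier via the async parameter
    QueryRequest request = new QueryRequest(params);
    request.setPath("/admin/collections");
    NamedList<Object> response = server.request(request);
    System.out.println(response);
    server.shutdown();
  }
}
{code}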



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-5908) Make REQUESTSTATUS call non-blocking and non-blocked

2014-03-31 Thread Anshum Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5908?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anshum Gupta resolved SOLR-5908.


   Resolution: Fixed
Fix Version/s: 5.0
   4.8

Committed into trunk and branch_4x.

> Make REQUESTSTATUS call non-blocking and non-blocked
> 
>
> Key: SOLR-5908
> URL: https://issues.apache.org/jira/browse/SOLR-5908
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrCloud
>Reporter: Anshum Gupta
>Assignee: Anshum Gupta
> Fix For: 4.8, 5.0
>
> Attachments: SOLR-5908.patch, SOLR-5908.patch
>
>
> Currently, the REQUESTSTATUS Collection API call is blocked by any other call 
> in the OCP work queue.
> Make it independent and non-blocked/non-blocking.
> This would be handled as part of making the OCP multi-threaded, but I'm 
> opening this issue to explore other possible options for handling this.
> If the final fix happens via SOLR-5681, this will be resolved when SOLR-5681 
> gets resolved.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5205) [PATCH] SpanQueryParser with recursion, analysis and syntax very similar to classic QueryParser

2014-03-31 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13956110#comment-13956110
 ] 

ASF subversion and git services commented on LUCENE-5205:
-

Commit 1583533 from [~rcmuir] in branch 'dev/branches/lucene5205'
[ https://svn.apache.org/r1583533 ]

LUCENE-5205: Tim's test cleanup patch

> [PATCH] SpanQueryParser with recursion, analysis and syntax very similar to 
> classic QueryParser
> ---
>
> Key: LUCENE-5205
> URL: https://issues.apache.org/jira/browse/LUCENE-5205
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/queryparser
>Reporter: Tim Allison
>  Labels: patch
> Fix For: 4.8
>
> Attachments: LUCENE-5205-cleanup-tests.patch, 
> LUCENE-5205-date-pkg-prvt.patch, LUCENE-5205.patch.gz, LUCENE-5205.patch.gz, 
> LUCENE-5205_dateTestReInitPkgPrvt.patch, 
> LUCENE-5205_improve_stop_word_handling.patch, 
> LUCENE-5205_smallTestMods.patch, LUCENE_5205.patch, 
> SpanQueryParser_v1.patch.gz, patch.txt
>
>
> This parser extends QueryParserBase and includes functionality from:
> * Classic QueryParser: most of its syntax
> * SurroundQueryParser: recursive parsing for "near" and "not" clauses.
> * ComplexPhraseQueryParser: can handle "near" queries that include multiterms 
> (wildcard, fuzzy, regex, prefix),
> * AnalyzingQueryParser: has an option to analyze multiterms.
> At a high level, there's a first pass BooleanQuery/field parser and then a 
> span query parser handles all terminal nodes and phrases.
> Same as classic syntax:
> * term: test 
> * fuzzy: roam~0.8, roam~2
> * wildcard: te?t, test*, t*st
> * regex: /\[mb\]oat/
> * phrase: "jakarta apache"
> * phrase with slop: "jakarta apache"~3
> * default "or" clause: jakarta apache
> * grouping "or" clause: (jakarta apache)
> * boolean and +/-: (lucene OR apache) NOT jakarta; +lucene +apache -jakarta
> * multiple fields: title:lucene author:hatcher
>  
> Main additions in SpanQueryParser syntax vs. classic syntax:
> * Can require "in order" for phrases with slop with the \~> operator: 
> "jakarta apache"\~>3
> * Can specify "not near": "fever bieber"!\~3,10 ::
> find "fever" but not if "bieber" appears within 3 words before or 10 
> words after it.
> * Fully recursive phrasal queries with \[ and \]; as in: \[\[jakarta 
> apache\]~3 lucene\]\~>4 :: 
> find "jakarta" within 3 words of "apache", and that hit has to be within 
> four words before "lucene"
> * Can also use \[\] for single level phrasal queries instead of " as in: 
> \[jakarta apache\]
> * Can use "or grouping" clauses in phrasal queries: "apache (lucene solr)"\~3 
> :: find "apache" and then either "lucene" or "solr" within three words.
> * Can use multiterms in phrasal queries: "jakarta\~1 ap*che"\~2
> * Did I mention full recursion: \[\[jakarta\~1 ap*che\]\~2 (solr~ 
> /l\[ou\]\+\[cs\]\[en\]\+/)]\~10 :: Find something like "jakarta" within two 
> words of "ap*che" and that hit has to be within ten words of something like 
> "solr" or that "lucene" regex.
> * Can require at least x number of hits at boolean level: "apache AND (lucene 
> solr tika)~2
> * Can use negative only query: -jakarta :: Find all docs that don't contain 
> "jakarta"
> * Can use an edit distance > 2 for fuzzy query via SlowFuzzyQuery (beware of 
> potential performance issues!).
> Trivial additions:
> * Can specify prefix length in fuzzy queries: jakarta~1,2 (edit distance =1, 
> prefix =2)
> * Can specify Optimal String Alignment (OSA) vs. Levenshtein for distance 
> <=2: jakarta~1 (OSA) vs. jakarta~>1 (Levenshtein)
> This parser can be very useful for concordance tasks (see also LUCENE-5317 
> and LUCENE-5318) and for analytical search.  
> Until LUCENE-2878 is closed, this might have a use for fans of SpanQuery.
> Most of the documentation is in the javadoc for SpanQueryParser.
> Any and all feedback is welcome.  Thank you.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-4.7-Linux (32bit/jdk1.8.0) - Build # 63 - Failure!

2014-03-31 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-4.7-Linux/63/
Java: 32bit/jdk1.8.0 -client -XX:+UseSerialGC

1 tests failed.
REGRESSION:  
org.apache.solr.client.solrj.impl.CloudSolrServerTest.testDistribSearch

Error Message:
java.util.concurrent.TimeoutException: Could not connect to ZooKeeper 
127.0.0.1:50990 within 3 ms

Stack Trace:
org.apache.solr.common.SolrException: java.util.concurrent.TimeoutException: 
Could not connect to ZooKeeper 127.0.0.1:50990 within 3 ms
at 
__randomizedtesting.SeedInfo.seed([E49ED3DFCE102312:65785DC7B94F432E]:0)
at 
org.apache.solr.common.cloud.SolrZkClient.<init>(SolrZkClient.java:148)
at 
org.apache.solr.common.cloud.SolrZkClient.<init>(SolrZkClient.java:99)
at 
org.apache.solr.common.cloud.SolrZkClient.<init>(SolrZkClient.java:94)
at 
org.apache.solr.common.cloud.SolrZkClient.<init>(SolrZkClient.java:85)
at 
org.apache.solr.cloud.AbstractZkTestCase.buildZooKeeper(AbstractZkTestCase.java:89)
at 
org.apache.solr.cloud.AbstractZkTestCase.buildZooKeeper(AbstractZkTestCase.java:83)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.setUp(AbstractDistribZkTestBase.java:70)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.setUp(AbstractFullDistribZkTestBase.java:200)
at 
org.apache.solr.client.solrj.impl.CloudSolrServerTest.setUp(CloudSolrServerTest.java:80)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1559)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.access$600(RandomizedRunner.java:79)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:771)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:787)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.TestRuleFieldCacheSanity$1.evaluate(TestRuleFieldCacheSanity.java:51)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:358)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:782)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:442)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:746)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:648)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:682)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:693)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(Tes

[jira] [Commented] (SOLR-5908) Make REQUESTSTATUS call non-blocking and non-blocked

2014-03-31 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5908?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13956104#comment-13956104
 ] 

ASF subversion and git services commented on SOLR-5908:
---

Commit 1583532 from [~anshumg] in branch 'dev/trunk'
[ https://svn.apache.org/r1583532 ]

SOLR-5908: Make the REQUESTSTATUS Collection API call non-blocking and 
non-blocked.

> Make REQUESTSTATUS call non-blocking and non-blocked
> 
>
> Key: SOLR-5908
> URL: https://issues.apache.org/jira/browse/SOLR-5908
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrCloud
>Reporter: Anshum Gupta
>Assignee: Anshum Gupta
> Attachments: SOLR-5908.patch, SOLR-5908.patch
>
>
> Currently, the REQUESTSTATUS Collection API call is blocked by any other call 
> in the OCP work queue.
> Make it independent and non-blocked/non-blocking.
> This would be handled as part of making the OCP multi-threaded, but I'm 
> opening this issue to explore other possible options for handling this.
> If the final fix happens via SOLR-5681, this will be resolved when SOLR-5681 
> gets resolved.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5559) Argument validation for TokenFilters having numeric constructor parameter(s)

2014-03-31 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5559?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13956103#comment-13956103
 ] 

ASF subversion and git services commented on LUCENE-5559:
-

Commit 1583531 from [~rcmuir] in branch 'dev/branches/branch_4x'
[ https://svn.apache.org/r1583531 ]

LUCENE-5559: Add missing checks to TokenFilters with numeric arguments

> Argument validation for TokenFilters having numeric constructor parameter(s)
> 
>
> Key: LUCENE-5559
> URL: https://issues.apache.org/jira/browse/LUCENE-5559
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/analysis
>Affects Versions: 4.7
>Reporter: Ahmet Arslan
>Priority: Minor
> Fix For: 4.8, 5.0
>
> Attachments: LUCENE-5559.patch, LUCENE-5559.patch, LUCENE-5559.patch, 
> LUCENE-5559.patch
>
>
> Some TokenFilters have numeric arguments in their constructors. They should 
> throw {{IllegalArgumentException}} for negative or meaningless values. 
> Here are some examples that demonstrate invalid/meaningless arguments:
> {code:xml}
>  
> {code}
> {code:xml}
>  
> {code}
> {code:xml}
>  
> {code}
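The examples above configure filter factories with negative or otherwise 
meaningless numeric attributes; the fix is an eager constructor check. A 
generic sketch of the pattern (the class name and message are illustrative, 
not the actual patch):

{code:java}
import java.io.IOException;
import org.apache.lucene.analysis.TokenFilter;
import org.apache.lucene.analysis.TokenStream;

public final class BoundedTokenFilter extends TokenFilter {
  private final int maxTokenCount;

  public BoundedTokenFilter(TokenStream in, int maxTokenCount) {
    super(in);
    // Fail fast with a clear message instead of misbehaving at analysis time.
    if (maxTokenCount < 1) {
      throw new IllegalArgumentException(
          "maxTokenCount must be greater than zero, got " + maxTokenCount);
    }
    this.maxTokenCount = maxTokenCount;
  }

  @Override
  public boolean incrementToken() throws IOException {
    return input.incrementToken();  // pass-through; the validation is the point
  }
}
{code}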



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (LUCENE-5559) Argument validation for TokenFilters having numeric constructor parameter(s)

2014-03-31 Thread Robert Muir (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5559?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Muir resolved LUCENE-5559.
-

   Resolution: Fixed
Fix Version/s: 5.0

Thanks for cleaning this up Ahmet!

> Argument validation for TokenFilters having numeric constructor parameter(s)
> 
>
> Key: LUCENE-5559
> URL: https://issues.apache.org/jira/browse/LUCENE-5559
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/analysis
>Affects Versions: 4.7
>Reporter: Ahmet Arslan
>Priority: Minor
> Fix For: 4.8, 5.0
>
> Attachments: LUCENE-5559.patch, LUCENE-5559.patch, LUCENE-5559.patch, 
> LUCENE-5559.patch
>
>
> Some TokenFilters have numeric arguments in their constructors. They should 
> throw {{IllegalArgumentException}} for negative or meaningless values. 
> Here are some examples that demonstrate invalid/meaningless arguments:
> {code:xml}
>  
> {code}
> {code:xml}
>  
> {code}
> {code:xml}
>  
> {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5559) Argument validation for TokenFilters having numeric constructor parameter(s)

2014-03-31 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5559?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13956099#comment-13956099
 ] 

ASF subversion and git services commented on LUCENE-5559:
-

Commit 1583530 from [~rcmuir] in branch 'dev/trunk'
[ https://svn.apache.org/r1583530 ]

LUCENE-5559: Add missing checks to TokenFilters with numeric arguments

> Argument validation for TokenFilters having numeric constructor parameter(s)
> 
>
> Key: LUCENE-5559
> URL: https://issues.apache.org/jira/browse/LUCENE-5559
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/analysis
>Affects Versions: 4.7
>Reporter: Ahmet Arslan
>Priority: Minor
> Fix For: 4.8
>
> Attachments: LUCENE-5559.patch, LUCENE-5559.patch, LUCENE-5559.patch, 
> LUCENE-5559.patch
>
>
> Some TokenFilters have numeric arguments in their constructors. They should 
> throw {{IllegalArgumentException}} for negative or meaningless values. 
> Here are some examples that demonstrate invalid/meaningless arguments:
> {code:xml}
>  
> {code}
> {code:xml}
>  
> {code}
> {code:xml}
>  
> {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (LUCENE-5558) Add TruncateTokenFilter

2014-03-31 Thread Robert Muir (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5558?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Muir resolved LUCENE-5558.
-

   Resolution: Fixed
Fix Version/s: 5.0

Thanks Ahmet, very nice!

> Add TruncateTokenFilter
> ---
>
> Key: LUCENE-5558
> URL: https://issues.apache.org/jira/browse/LUCENE-5558
> Project: Lucene - Core
>  Issue Type: New Feature
>  Components: modules/analysis
>Affects Versions: 4.7
>Reporter: Ahmet Arslan
>Assignee: Robert Muir
>Priority: Minor
>  Labels: Turkish, f5
> Fix For: 4.8, 5.0
>
> Attachments: LUCENE-5558.patch, LUCENE-5558.patch, LUCENE-5558.patch, 
> LUCENE-5558.patch
>
>
> I am using this filter as a stemmer for the Turkish language. In much academic 
> research (classification, retrieval) it is used, and it is called the Fixed 
> Prefix Stemmer, the Simple Truncation Method, or F5 for short.
> Among F3 to F7, the F5 stemmer (length=5) was found to work well for the 
> Turkish language in [Information Retrieval on Turkish 
> Texts|http://www.users.muohio.edu/canf/papers/JASIST2008offPrint.pdf]. It is 
> the same work from which most of stopwords_tr.txt was acquired. 
> ElasticSearch has a 
> [truncate|http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/analysis-truncate-tokenfilter.html]
>  filter, but it does not respect the keyword attribute. It also has a use case 
> similar to TruncateFieldUpdateProcessorFactory.
> The main advantage of F5 stemming is that it is not affected by the meaning 
> loss caused by ASCII folding. It is a diacritics-insensitive stemmer and works 
> well with ASCII folding. [Effects of diacritics on Turkish information 
> retrieval|http://journals.tubitak.gov.tr/elektrik/issues/elk-12-20-5/elk-20-5-9-1010-819.pdf]
> Here is the full field type I use for "diacritics-insensitive search" for 
> Turkish:
> {code:xml}
>   positionIncrementGap="100">
>
>  
>  
>  
>  
>  
>  
>  
>
> {code}
> I would like to get community opinions:
> 1) Any interest in this? 
> 2) Should the keyword attribute be respected? 
> 3) Package name: analysis.misc versus analysis.tr 
> 4) Name of the class: TruncateTokenFilter versus FixedPrefixStemFilter
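On points 2 and 4, a sketch of my reading of the filter - truncate every 
non-keyword token to a fixed prefix length - follows; the committed code may 
differ in details:

{code:java}
import java.io.IOException;
import org.apache.lucene.analysis.TokenFilter;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;
import org.apache.lucene.analysis.tokenattributes.KeywordAttribute;

public final class TruncateTokenFilter extends TokenFilter {
  private final CharTermAttribute termAtt = addAttribute(CharTermAttribute.class);
  private final KeywordAttribute keywordAtt = addAttribute(KeywordAttribute.class);
  private final int prefixLength;

  public TruncateTokenFilter(TokenStream in, int prefixLength) {
    super(in);
    this.prefixLength = prefixLength;  // 5 for the F5 stemmer discussed above
  }

  @Override
  public boolean incrementToken() throws IOException {
    if (!input.incrementToken()) {
      return false;
    }
    // Respect the keyword attribute: protected tokens pass through untouched.
    if (!keywordAtt.isKeyword() && termAtt.length() > prefixLength) {
      termAtt.setLength(prefixLength);
    }
    return true;
  }
}
{code}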



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5558) Add TruncateTokenFilter

2014-03-31 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5558?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13956095#comment-13956095
 ] 

ASF subversion and git services commented on LUCENE-5558:
-

Commit 1583527 from [~rcmuir] in branch 'dev/branches/branch_4x'
[ https://svn.apache.org/r1583527 ]

LUCENE-5558: Add TruncateTokenFilter

> Add TruncateTokenFilter
> ---
>
> Key: LUCENE-5558
> URL: https://issues.apache.org/jira/browse/LUCENE-5558
> Project: Lucene - Core
>  Issue Type: New Feature
>  Components: modules/analysis
>Affects Versions: 4.7
>Reporter: Ahmet Arslan
>Assignee: Robert Muir
>Priority: Minor
>  Labels: Turkish, f5
> Fix For: 4.8
>
> Attachments: LUCENE-5558.patch, LUCENE-5558.patch, LUCENE-5558.patch, 
> LUCENE-5558.patch
>
>
> I am using this filter as a stemmer for the Turkish language. In much academic 
> research (classification, retrieval) it is used, and it is called the Fixed 
> Prefix Stemmer, the Simple Truncation Method, or F5 for short.
> Among F3 to F7, the F5 stemmer (length=5) was found to work well for the 
> Turkish language in [Information Retrieval on Turkish 
> Texts|http://www.users.muohio.edu/canf/papers/JASIST2008offPrint.pdf]. It is 
> the same work from which most of stopwords_tr.txt was acquired. 
> ElasticSearch has a 
> [truncate|http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/analysis-truncate-tokenfilter.html]
>  filter, but it does not respect the keyword attribute. It also has a use case 
> similar to TruncateFieldUpdateProcessorFactory.
> The main advantage of F5 stemming is that it is not affected by the meaning 
> loss caused by ASCII folding. It is a diacritics-insensitive stemmer and works 
> well with ASCII folding. [Effects of diacritics on Turkish information 
> retrieval|http://journals.tubitak.gov.tr/elektrik/issues/elk-12-20-5/elk-20-5-9-1010-819.pdf]
> Here is the full field type I use for "diacritics-insensitive search" for 
> Turkish:
> {code:xml}
>   positionIncrementGap="100">
>
>  
>  
>  
>  
>  
>  
>  
>
> {code}
> I would like to get community opinions:
> 1) Any interest in this? 
> 2) Should the keyword attribute be respected? 
> 3) Package name: analysis.misc versus analysis.tr 
> 4) Name of the class: TruncateTokenFilter versus FixedPrefixStemFilter



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5558) Add TruncateTokenFilter

2014-03-31 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5558?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13956078#comment-13956078
 ] 

ASF subversion and git services commented on LUCENE-5558:
-

Commit 1583525 from [~rcmuir] in branch 'dev/trunk'
[ https://svn.apache.org/r1583525 ]

LUCENE-5558: Add TruncateTokenFilter

> Add TruncateTokenFilter
> ---
>
> Key: LUCENE-5558
> URL: https://issues.apache.org/jira/browse/LUCENE-5558
> Project: Lucene - Core
>  Issue Type: New Feature
>  Components: modules/analysis
>Affects Versions: 4.7
>Reporter: Ahmet Arslan
>Assignee: Robert Muir
>Priority: Minor
>  Labels: Turkish, f5
> Fix For: 4.8
>
> Attachments: LUCENE-5558.patch, LUCENE-5558.patch, LUCENE-5558.patch, 
> LUCENE-5558.patch
>
>
> I am using this filter as a stemmer for the Turkish language. In much academic 
> research (classification, retrieval) it is used, and it is called the Fixed 
> Prefix Stemmer, the Simple Truncation Method, or F5 for short.
> Among F3 to F7, the F5 stemmer (length=5) was found to work well for the 
> Turkish language in [Information Retrieval on Turkish 
> Texts|http://www.users.muohio.edu/canf/papers/JASIST2008offPrint.pdf]. It is 
> the same work from which most of stopwords_tr.txt was acquired. 
> ElasticSearch has a 
> [truncate|http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/analysis-truncate-tokenfilter.html]
>  filter, but it does not respect the keyword attribute. It also has a use case 
> similar to TruncateFieldUpdateProcessorFactory.
> The main advantage of F5 stemming is that it is not affected by the meaning 
> loss caused by ASCII folding. It is a diacritics-insensitive stemmer and works 
> well with ASCII folding. [Effects of diacritics on Turkish information 
> retrieval|http://journals.tubitak.gov.tr/elektrik/issues/elk-12-20-5/elk-20-5-9-1010-819.pdf]
> Here is the full field type I use for "diacritics-insensitive search" for 
> Turkish:
> {code:xml}
>   positionIncrementGap="100">
>
>  
>  
>  
>  
>  
>  
>  
>
> {code}
> I would like to get community opinions:
> 1) Any interest in this? 
> 2) Should the keyword attribute be respected? 
> 3) Package name: analysis.misc versus analysis.tr 
> 4) Name of the class: TruncateTokenFilter versus FixedPrefixStemFilter



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (LUCENE-5558) Add TruncateTokenFilter

2014-03-31 Thread Robert Muir (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5558?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Muir reassigned LUCENE-5558:
---

Assignee: Robert Muir

> Add TruncateTokenFilter
> ---
>
> Key: LUCENE-5558
> URL: https://issues.apache.org/jira/browse/LUCENE-5558
> Project: Lucene - Core
>  Issue Type: New Feature
>  Components: modules/analysis
>Affects Versions: 4.7
>Reporter: Ahmet Arslan
>Assignee: Robert Muir
>Priority: Minor
>  Labels: Turkish, f5
> Fix For: 4.8
>
> Attachments: LUCENE-5558.patch, LUCENE-5558.patch, LUCENE-5558.patch, 
> LUCENE-5558.patch
>
>
> I am using this filter as a stemmer for the Turkish language. In much academic 
> research (classification, retrieval) it is used, and it is called the Fixed 
> Prefix Stemmer, the Simple Truncation Method, or F5 for short.
> Among F3 to F7, the F5 stemmer (length=5) was found to work well for the 
> Turkish language in [Information Retrieval on Turkish 
> Texts|http://www.users.muohio.edu/canf/papers/JASIST2008offPrint.pdf]. It is 
> the same work from which most of stopwords_tr.txt was acquired. 
> ElasticSearch has a 
> [truncate|http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/analysis-truncate-tokenfilter.html]
>  filter, but it does not respect the keyword attribute. It also has a use case 
> similar to TruncateFieldUpdateProcessorFactory.
> The main advantage of F5 stemming is that it is not affected by the meaning 
> loss caused by ASCII folding. It is a diacritics-insensitive stemmer and works 
> well with ASCII folding. [Effects of diacritics on Turkish information 
> retrieval|http://journals.tubitak.gov.tr/elektrik/issues/elk-12-20-5/elk-20-5-9-1010-819.pdf]
> Here is the full field type I use for "diacritics-insensitive search" for 
> Turkish:
> {code:xml}
>   positionIncrementGap="100">
>
>  
>  
>  
>  
>  
>  
>  
>
> {code}
> I would like to get community opinions:
> 1) Any interest in this? 
> 2) Should the keyword attribute be respected? 
> 3) Package name: analysis.misc versus analysis.tr 
> 4) Name of the class: TruncateTokenFilter versus FixedPrefixStemFilter



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5931) solrcore.properties is not reloaded when core is reloaded

2014-03-31 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5931?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13956051#comment-13956051
 ] 

Shawn Heisey commented on SOLR-5931:


{quote}
Is there a good workaround in the meantime?
We need a quick way to switch the master URL on individual slaves in case of 
site issues. Updating solrconfig.xml directly for a particular slave doesn't work 
well because the change will get overwritten on each replication (unless we 
change this in the master's solrconfig as well)
{quote}

One option is using xinclude in your solrconfig.xml file.  In solrconfig.xml, 
you would have something like this:

{code}
<xi:include href="masterUrl.xml" xmlns:xi="http://www.w3.org/2001/XInclude">
  <xi:fallback>
    <str name="masterUrl">http://default_master:port/solr/corename</str>
  </xi:fallback>
</xi:include>
{code}

With this config, masterUrl.xml normally does not need to exist.  When you need 
to switch the master URL, create masterUrl.xml in the same location as 
solrconfig.xml and reload/restart.  It needs contents like this:

{code}
<str name="masterUrl">http://master_host:port/solr/corename</str>
{code}

You can also do this without the fallback, in which case you'd use a config 
like this and masterUrl.xml would always need to exist.

{code}
<xi:include href="masterUrl.xml" xmlns:xi="http://www.w3.org/2001/XInclude">
</xi:include>
{code}

As long as you don't include masterUrl.xml in the list of config files to 
replicate, it won't get overwritten on the slave.

This is only to give you the general idea of xinclude.  You can rearrange this 
in any way that you require.


> solrcore.properties is not reloaded when core is reloaded
> -
>
> Key: SOLR-5931
> URL: https://issues.apache.org/jira/browse/SOLR-5931
> Project: Solr
>  Issue Type: Bug
>  Components: multicore
>Affects Versions: 4.7
>Reporter: Gunnlaugur Thor Briem
>Assignee: Shalin Shekhar Mangar
>Priority: Minor
>
> When I change solrcore.properties for a core, and then reload the core, the 
> previous values of the properties in that file are still in effect. If I 
> *unload* the core and then add it back, in the “Core Admin” section of the 
> admin UI, then the changes in solrcore.properties do take effect.
> My specific test case is a DataImportHandler where {{db-data-config.xml}} 
> uses a property to decide which DB host to talk to:
> {code:xml}
> <dataSource url="jdbc:postgresql://${dbhost}/${solr.core.name}" .../>
> {code}
> When I change that {{dbhost}} property in {{solrcore.properties}} and reload 
> the core, the next dataimport operation still connects to the previous DB 
> host. Reloading the dataimport config does not help. I have to unload the 
> core (or fully restart the whole Solr) for the properties change to take 
> effect.
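For concreteness, solrcore.properties is a plain key=value file sitting next to 
solrconfig.xml; the entry behind the {{${dbhost}}} substitution above would look 
something like this (host and port invented):

{code}
# solrcore.properties for the affected core
dbhost=db1.example.com:5432
{code}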



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-2327) IndexOutOfBoundsException in FieldInfos.java

2014-03-31 Thread Trejkaz (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-2327?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13956008#comment-13956008
 ] 

Trejkaz commented on LUCENE-2327:
-

I have an almost identical stack trace from v3.6, but I did get the index from 
someone else so I don't know where they were storing it.

{noformat}
java.lang.IndexOutOfBoundsException: Index: 100, Size: 64
  at java.util.ArrayList.rangeCheck(ArrayList.java:635)
  at java.util.ArrayList.get(ArrayList.java:411)
  at org.apache.lucene.index.FieldInfos.fieldInfo(FieldInfos.java:255)
  at org.apache.lucene.index.FieldInfos.fieldName(FieldInfos.java:244)
  at org.apache.lucene.index.TermBuffer.read(TermBuffer.java:86)
  at org.apache.lucene.index.SegmentTermEnum.next(SegmentTermEnum.java:133)
  at org.apache.lucene.index.SegmentTermEnum.scanTo(SegmentTermEnum.java:174)
  at org.apache.lucene.index.TermInfosReader.get(TermInfosReader.java:202)
  at org.apache.lucene.index.TermInfosReader.get(TermInfosReader.java:172)
  at org.apache.lucene.index.SegmentReader.docFreq(SegmentReader.java:539)
  at org.apache.lucene.search.TermQuery$TermWeight$1.add(TermQuery.java:56)
  at org.apache.lucene.util.ReaderUtil$Gather.run(ReaderUtil.java:81)
  at org.apache.lucene.util.ReaderUtil$Gather.run(ReaderUtil.java:87)
  at org.apache.lucene.util.ReaderUtil$Gather.run(ReaderUtil.java:70)
  at org.apache.lucene.search.TermQuery$TermWeight.<init>(TermQuery.java:53)
  at org.apache.lucene.search.TermQuery.createWeight(TermQuery.java:199)
  at 
org.apache.lucene.search.BooleanQuery$BooleanWeight.<init>(BooleanQuery.java:176)
  at org.apache.lucene.search.BooleanQuery.createWeight(BooleanQuery.java:354)
  at org.apache.lucene.search.Searcher.createNormalizedWeight(Searcher.java:168)
  at 
org.apache.lucene.search.IndexSearcher.createNormalizedWeight(IndexSearcher.java:664)
  at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:364)
{noformat}


> IndexOutOfBoundsException in FieldInfos.java
> 
>
> Key: LUCENE-2327
> URL: https://issues.apache.org/jira/browse/LUCENE-2327
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/index
>Affects Versions: 3.0.1
> Environment: Fedora 12
>Reporter: Shane
>Priority: Minor
>  Labels: fedora_12, search, tomcat
> Attachments: CheckIndex.txt
>
>
> When retrieving the scoreDocs from a multisearcher, the following exception 
> is thrown:
> java.lang.IndexOutOfBoundsException: Index: 52, Size: 4
> at java.util.ArrayList.rangeCheck(ArrayList.java:571)
> at java.util.ArrayList.get(ArrayList.java:349)
> at org.apache.lucene.index.FieldInfos.fieldInfo(FieldInfos.java:285)
> at org.apache.lucene.index.FieldInfos.fieldName(FieldInfos.java:274)
> at org.apache.lucene.index.TermBuffer.read(TermBuffer.java:86)
> at 
> org.apache.lucene.index.SegmentTermEnum.next(SegmentTermEnum.java:131)
> at 
> org.apache.lucene.index.SegmentTermEnum.scanTo(SegmentTermEnum.java:162)
> at 
> org.apache.lucene.index.TermInfosReader.get(TermInfosReader.java:232)
> at 
> org.apache.lucene.index.TermInfosReader.get(TermInfosReader.java:179)
> at 
> org.apache.lucene.index.SegmentReader.docFreq(SegmentReader.java:911)
> at 
> org.apache.lucene.index.DirectoryReader.docFreq(DirectoryReader.java:644)
> The error is caused when the fieldNumber passed to FieldInfos.fieldInfo() is 
> greater than the size of the array list containing the FieldInfo values.  I am 
> not sure what the field number represents or why it would be larger than the 
> array list's size.  The quick fix would be to validate the bounds but there 
> may be a bigger underlying problem.  The issue does appear to be directly 
> related to LUCENE-939.  I've only been able to duplicate this in my 
> production environment and so can't give a good test case.
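A sketch of the "validate the bounds" quick fix mentioned above, modeled on the 
3.x-era FieldInfos, which keeps FieldInfo instances in an ArrayList indexed by 
field number (names here are simplified stand-ins for the real fields):

{code:java}
import java.util.ArrayList;
import java.util.List;

class FieldInfosSketch {
  private final List<Object> byNumber = new ArrayList<Object>();

  // Bounds-checked lookup: this masks the symptom only, since an out-of-range
  // field number still means the index is corrupt and needs a real diagnosis
  // (e.g. running CheckIndex, as in the attached CheckIndex.txt).
  public Object fieldInfo(int fieldNumber) {
    if (fieldNumber < 0 || fieldNumber >= byNumber.size()) {
      return null;  // fieldName() already maps a null FieldInfo to ""
    }
    return byNumber.get(fieldNumber);
  }
}
{code}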



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5773) CollapsingQParserPlugin should make elevated documents the group head

2014-03-31 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5773?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13955998#comment-13955998
 ] 

ASF subversion and git services commented on SOLR-5773:
---

Commit 1583507 from [~joel.bernstein] in branch 'dev/branches/branch_4x'
[ https://svn.apache.org/r1583507 ]

SOLR-5773: CollapsingQParserPlugin should make elevated documents the group head

> CollapsingQParserPlugin should make elevated documents the group head
> -
>
> Key: SOLR-5773
> URL: https://issues.apache.org/jira/browse/SOLR-5773
> Project: Solr
>  Issue Type: Improvement
>  Components: query parsers
>Affects Versions: 4.6.1
>Reporter: David Boychuck
>Assignee: Joel Bernstein
>  Labels: collapse, solr
> Fix For: 4.8
>
> Attachments: SOLR-5773.patch, SOLR-5773.patch, SOLR-5773.patch, 
> SOLR-5773.patch, SOLR-5773.patch, SOLR-5773.patch, SOLR-5773.patch
>
>   Original Estimate: 8h
>  Remaining Estimate: 8h
>
> Hi Joel,
> I sent you an email but I'm not sure whether you received it. I ran into a 
> bit of trouble using the CollapsingQParserPlugin with elevated documents. To 
> explain it simply, I want to exclude grouped documents when one of the 
> members of the group is contained in the elevated document set. I'm not sure 
> this is currently possible because, as you explain above, elevated documents 
> are added to the request context after the original query is constructed.
> To better illustrate the problem: if I have 2 documents, docid=1 and 
> docid=2, and both have a groupid of 'a', and a grouped query scores docid 2 
> first in the results but I have elevated docid 1, then both documents are 
> shown in the results when I really only want the elevated document to be 
> shown.
> Is this something that would be difficult to implement? Any help is 
> appreciated.
> I think the solution would be to remove the documents from liveDocs that 
> share the same groupid in the getBoostDocs() function. Let me know if this 
> makes any sense. I'll continue working towards a solution in the meantime.
> {code}
> private IntOpenHashSet getBoostDocs(SolrIndexSearcher indexSearcher, 
> Set<String> boosted) throws IOException {
>   IntOpenHashSet boostDocs = null;
>   if(boosted != null) {
> SchemaField idField = indexSearcher.getSchema().getUniqueKeyField();
> String fieldName = idField.getName();
> HashSet<BytesRef> localBoosts = new HashSet<BytesRef>(boosted.size()*2);
> Iterator<String> boostedIt = boosted.iterator();
> while(boostedIt.hasNext()) {
>   localBoosts.add(new BytesRef(boostedIt.next()));
> }
> boostDocs = new IntOpenHashSet(boosted.size()*2);
> List<AtomicReaderContext> leaves = 
> indexSearcher.getTopReaderContext().leaves();
> TermsEnum termsEnum = null;
> DocsEnum docsEnum = null;
> for(AtomicReaderContext leaf : leaves) {
>   AtomicReader reader = leaf.reader();
>   int docBase = leaf.docBase;
>   Bits liveDocs = reader.getLiveDocs();
>   Terms terms = reader.terms(fieldName);
>   termsEnum = terms.iterator(termsEnum);
>   Iterator<BytesRef> it = localBoosts.iterator();
>   while(it.hasNext()) {
> BytesRef ref = it.next();
> if(termsEnum.seekExact(ref)) {
>   docsEnum = termsEnum.docs(liveDocs, docsEnum);
>   int doc = docsEnum.nextDoc();
>   if(doc != -1) {
> //Found the document.
> boostDocs.add(doc+docBase);
>*// HERE REMOVE ANY DOCUMENTS THAT SHARE THE GROUPID NOT ONLY 
> THE DOCID //*
> it.remove();
>   }
> }
>   }
> }
>   }
>   return boostDocs;
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4329) Have DocumentBuilder give value collections to the FieldType

2014-03-31 Thread Ishan Chattopadhyaya (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4329?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13955994#comment-13955994
 ] 

Ishan Chattopadhyaya commented on SOLR-4329:


I am interested in unblocking the optimizations in spatial that would become 
possible if docValues for the multivalued shape field could be created and 
used. Specifically, I'm interested in a strategy that uses a recursive spatial 
prefix tree as well as the docValues to better resolve the boundary cases while 
matching shapes. 

> Have DocumentBuilder give value collections to the FieldType
> 
>
> Key: SOLR-4329
> URL: https://issues.apache.org/jira/browse/SOLR-4329
> Project: Solr
>  Issue Type: New Feature
>  Components: update
>Reporter: David Smiley
>Assignee: David Smiley
> Fix For: 4.8
>
> Attachments: DocumentBuilder.java, SOLR-4329.patch
>
>
> I'd like to write a multi-value-configured FieldType that can return a 
> DocValue Field from its createFields().  Since DocValues holds a single value 
> per document for a field, you can only have one.  However 
> FieldType.createFields() is invoked by the DocumentBuilder once per each 
> value being indexed.
> FYI the reason I'm asking for this is for a multi-valued spatial field to 
> store its points in DocValues.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4329) Have DocumentBuilder give value collections to the FieldType

2014-03-31 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4329?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13955981#comment-13955981
 ] 

David Smiley commented on SOLR-4329:


I need to look at it more closely; but can you comment on whether there are any 
use cases of yours that triggered your interest in this issue?

> Have DocumentBuilder give value collections to the FieldType
> 
>
> Key: SOLR-4329
> URL: https://issues.apache.org/jira/browse/SOLR-4329
> Project: Solr
>  Issue Type: New Feature
>  Components: update
>Reporter: David Smiley
>Assignee: David Smiley
> Fix For: 4.8
>
> Attachments: DocumentBuilder.java, SOLR-4329.patch
>
>
> I'd like to write a multi-value-configured FieldType that can return a 
> DocValue Field from its createFields().  Since DocValues holds a single value 
> per document for a field, you can only have one.  However 
> FieldType.createFields() is invoked by the DocumentBuilder once per each 
> value being indexed.
> FYI the reason I'm asking for this is for a multi-valued spatial field to 
> store its points in DocValues.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5935) SolrCloud hangs under certain conditions

2014-03-31 Thread Otis Gospodnetic (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5935?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13955979#comment-13955979
 ] 

Otis Gospodnetic commented on SOLR-5935:


bq. Have you tried raising the total number of max connections?

I think the answer is yes, but Rafal can confirm tomorrow.

bq. Also, are you using the batch or streaming APIs, or one update per request?

Not sure about batch vs. streaming, but I think there were 10 docs per request.


> SolrCloud hangs under certain conditions
> 
>
> Key: SOLR-5935
> URL: https://issues.apache.org/jira/browse/SOLR-5935
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 4.6.1
>Reporter: Rafał Kuć
>Priority: Critical
> Attachments: thread dumps.zip
>
>
> As discussed on the mailing list - let's try to find the reason why, under 
> certain conditions, SolrCloud can hang.
> I have an issue with one of our SolrCloud deployments. Six machines, a 
> collection with 6 shards and a replication factor of 3. It all runs on 6 
> physical servers, each with 24 cores. We've indexed about 32 million 
> documents and everything was fine until that point.
> Now, during performance tests, we ran into an issue - SolrCloud hangs
> when querying and indexing run at the same time. First we see a
> normal load on the machines, then the load starts to drop, and thread
> dumps show numerous threads like this:
> {noformat}
> Thread 12624: (state = BLOCKED)
>  - sun.misc.Unsafe.park(boolean, long) @bci=0 (Compiled frame; information 
> may be imprecise)
>  - java.util.concurrent.locks.LockSupport.park(java.lang.Object) @bci=14, 
> line=186 (Compiled frame)
>  - 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await() 
> @bci=42, line=2043 (Compiled frame)
>  - org.apache.http.pool.PoolEntryFuture.await(java.util.Date) @bci=50, 
> line=131 (Compiled frame)
>  - 
> org.apache.http.pool.AbstractConnPool.getPoolEntryBlocking(java.lang.Object, 
> java.lang.Object, long, java.util.concurrent.TimeUnit, 
> org.apache.http.pool.PoolEntryFuture) @bci=431, line=281 (Compiled frame)
>  - 
> org.apache.http.pool.AbstractConnPool.access$000(org.apache.http.pool.AbstractConnPool,
>  java.lang.Object, java.lang.Object, long, java.util.concurrent.TimeUnit, 
> org.apache.http.pool.PoolEntryFuture) @bci=8, line=62 (Compiled frame)
>  - org.apache.http.pool.AbstractConnPool$2.getPoolEntry(long, 
> java.util.concurrent.TimeUnit) @bci=15, line=176 (Compiled frame)
>  - org.apache.http.pool.AbstractConnPool$2.getPoolEntry(long, 
> java.util.concurrent.TimeUnit) @bci=3, line=169 (Compiled frame)
>  - org.apache.http.pool.PoolEntryFuture.get(long, 
> java.util.concurrent.TimeUnit) @bci=38, line=100 (Compiled frame)
>  - 
> org.apache.http.impl.conn.PoolingClientConnectionManager.leaseConnection(java.util.concurrent.Future,
>  long, java.util.concurrent.TimeUnit) @bci=4, line=212 (Compiled frame)
>  - 
> org.apache.http.impl.conn.PoolingClientConnectionManager$1.getConnection(long,
>  java.util.concurrent.TimeUnit) @bci=10, line=199 (Compiled frame)
>  - 
> org.apache.http.impl.client.DefaultRequestDirector.execute(org.apache.http.HttpHost,
>  org.apache.http.HttpRequest, org.apache.http.protocol.HttpContext) @bci=259, 
> line=456 (Compiled frame)
>  - 
> org.apache.http.impl.client.AbstractHttpClient.execute(org.apache.http.HttpHost,
>  org.apache.http.HttpRequest, org.apache.http.protocol.HttpContext) @bci=344, 
> line=906 (Compiled frame)
>  - 
> org.apache.http.impl.client.AbstractHttpClient.execute(org.apache.http.client.methods.HttpUriRequest,
>  org.apache.http.protocol.HttpContext) @bci=21, line=805 (Compiled frame)
>  - 
> org.apache.http.impl.client.AbstractHttpClient.execute(org.apache.http.client.methods.HttpUriRequest)
>  @bci=6, line=784 (Compiled frame)
>  - 
> org.apache.solr.client.solrj.impl.HttpSolrServer.request(org.apache.solr.client.solrj.SolrRequest,
>  org.apache.solr.client.solrj.ResponseParser) @bci=1175, line=395 
> (Interpreted frame)
>  - 
> org.apache.solr.client.solrj.impl.HttpSolrServer.request(org.apache.solr.client.solrj.SolrRequest)
>  @bci=17, line=199 (Compiled frame)
>  - 
> org.apache.solr.client.solrj.impl.LBHttpSolrServer.request(org.apache.solr.client.solrj.impl.LBHttpSolrServer$Req)
>  @bci=132, line=285 (Interpreted frame)
>  - 
> org.apache.solr.handler.component.HttpShardHandlerFactory.makeLoadBalancedRequest(org.apache.solr.client.solrj.request.QueryRequest,
>  java.util.List) @bci=13, line=214 (Compiled frame)
>  - org.apache.solr.handler.component.HttpShardHandler$1.call() @bci=246, 
> line=161 (Compiled frame)
>  - org.apache.solr.handler.component.HttpShardHandler$1.call() @bci=1, 
> line=118 (Interpreted frame)
>  - java.util.concurrent.FutureTask$Sync.innerRun() @bci=29, line=334 
> (Interpreted frame)
>  - java.util

[jira] [Updated] (SOLR-4329) Have DocumentBuilder give value collections to the FieldType

2014-03-31 Thread Ishan Chattopadhyaya (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4329?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ishan Chattopadhyaya updated SOLR-4329:
---

Attachment: SOLR-4329.patch

Since it's been a while on this issue, here's a patch to set the ball rolling 
again.
This is based on refactoring FieldType's createFields() to accept 
(SchemaField, Object[] vals, float[] boosts). As David mentioned in comment 1, 
it has an extra array wrapper, but API-wise it looks most consistent with what 
other field types are already doing.
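To make the shape concrete: only the (SchemaField, Object[] vals, float[] 
boosts) signature comes from the description above; the body below is my guess 
at a backwards-compatible default that delegates to the existing per-value path:

{code:java}
// Inside FieldType; iterate the collected values and reuse the existing
// single-value createFields(), so field types that don't care about seeing
// all values at once keep their current behavior.
public List<StorableField> createFields(SchemaField field, Object[] vals, float[] boosts) {
  List<StorableField> fields = new ArrayList<StorableField>();
  for (int i = 0; i < vals.length; i++) {
    fields.addAll(createFields(field, vals[i], boosts[i]));
  }
  return fields;
}
{code}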

(All tests passing after this refactor)

[~dsmiley], do you see any usecase that will be missed with this approach?

> Have DocumentBuilder give value collections to the FieldType
> 
>
> Key: SOLR-4329
> URL: https://issues.apache.org/jira/browse/SOLR-4329
> Project: Solr
>  Issue Type: New Feature
>  Components: update
>Reporter: David Smiley
>Assignee: David Smiley
> Fix For: 4.8
>
> Attachments: DocumentBuilder.java, SOLR-4329.patch
>
>
> I'd like to write a multi-value-configured FieldType that can return a 
> DocValue Field from its createFields().  Since DocValues holds a single value 
> per document for a field, you can only have one.  However 
> FieldType.createFields() is invoked by the DocumentBuilder once per each 
> value being indexed.
> FYI the reason I'm asking for this is for a multi-valued spatial field to 
> store its points in DocValues.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5935) SolrCloud hangs under certain conditions

2014-03-31 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5935?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13955958#comment-13955958
 ] 

Mark Miller commented on SOLR-5935:
---

Have you tried raising the total number of max connections?

Also, are you using the batch or streaming APIs, or one update per request?

> SolrCloud hangs under certain conditions
> 
>
> Key: SOLR-5935
> URL: https://issues.apache.org/jira/browse/SOLR-5935
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 4.6.1
>Reporter: Rafał Kuć
>Priority: Critical
> Attachments: thread dumps.zip
>
>
> As discussed on the mailing list - let's try to find the reason why, under 
> certain conditions, SolrCloud can hang.
> I have an issue with one of the SolrCloud deployments. Six machines, a 
> collection with 6 shards with a replication factor of 3. It all runs on 6 
> physical servers, each with 24 cores. We've indexed about 32 million 
> documents and everything was fine until that point.
> Now, during performance tests, we ran into an issue - SolrCloud hangs
> when querying and indexing run at the same time. First we see a
> normal load on the machines, then the load starts to drop and a thread
> dump shows numerous threads like this:
> {noformat}
> Thread 12624: (state = BLOCKED)
>  - sun.misc.Unsafe.park(boolean, long) @bci=0 (Compiled frame; information 
> may be imprecise)
>  - java.util.concurrent.locks.LockSupport.park(java.lang.Object) @bci=14, 
> line=186 (Compiled frame)
>  - 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await() 
> @bci=42, line=2043 (Compiled frame)
>  - org.apache.http.pool.PoolEntryFuture.await(java.util.Date) @bci=50, 
> line=131 (Compiled frame)
>  - 
> org.apache.http.pool.AbstractConnPool.getPoolEntryBlocking(java.lang.Object, 
> java.lang.Object, long, java.util.concurrent.TimeUnit, 
> org.apache.http.pool.PoolEntryFuture) @bci=431, line=281 (Compiled frame)
>  - 
> org.apache.http.pool.AbstractConnPool.access$000(org.apache.http.pool.AbstractConnPool,
>  java.lang.Object, java.lang.Object, long, java.util.concurrent.TimeUnit, 
> org.apache.http.pool.PoolEntryFuture) @bci=8, line=62 (Compiled frame)
>  - org.apache.http.pool.AbstractConnPool$2.getPoolEntry(long, 
> java.util.concurrent.TimeUnit) @bci=15, line=176 (Compiled frame)
>  - org.apache.http.pool.AbstractConnPool$2.getPoolEntry(long, 
> java.util.concurrent.TimeUnit) @bci=3, line=169 (Compiled frame)
>  - org.apache.http.pool.PoolEntryFuture.get(long, 
> java.util.concurrent.TimeUnit) @bci=38, line=100 (Compiled frame)
>  - 
> org.apache.http.impl.conn.PoolingClientConnectionManager.leaseConnection(java.util.concurrent.Future,
>  long, java.util.concurrent.TimeUnit) @bci=4, line=212 (Compiled frame)
>  - 
> org.apache.http.impl.conn.PoolingClientConnectionManager$1.getConnection(long,
>  java.util.concurrent.TimeUnit) @bci=10, line=199 (Compiled frame)
>  - 
> org.apache.http.impl.client.DefaultRequestDirector.execute(org.apache.http.HttpHost,
>  org.apache.http.HttpRequest, org.apache.http.protocol.HttpContext) @bci=259, 
> line=456 (Compiled frame)
>  - 
> org.apache.http.impl.client.AbstractHttpClient.execute(org.apache.http.HttpHost,
>  org.apache.http.HttpRequest, org.apache.http.protocol.HttpContext) @bci=344, 
> line=906 (Compiled frame)
>  - 
> org.apache.http.impl.client.AbstractHttpClient.execute(org.apache.http.client.methods.HttpUriRequest,
>  org.apache.http.protocol.HttpContext) @bci=21, line=805 (Compiled frame)
>  - 
> org.apache.http.impl.client.AbstractHttpClient.execute(org.apache.http.client.methods.HttpUriRequest)
>  @bci=6, line=784 (Compiled frame)
>  - 
> org.apache.solr.client.solrj.impl.HttpSolrServer.request(org.apache.solr.client.solrj.SolrRequest,
>  org.apache.solr.client.solrj.ResponseParser) @bci=1175, line=395 
> (Interpreted frame)
>  - 
> org.apache.solr.client.solrj.impl.HttpSolrServer.request(org.apache.solr.client.solrj.SolrRequest)
>  @bci=17, line=199 (Compiled frame)
>  - 
> org.apache.solr.client.solrj.impl.LBHttpSolrServer.request(org.apache.solr.client.solrj.impl.LBHttpSolrServer$Req)
>  @bci=132, line=285 (Interpreted frame)
>  - 
> org.apache.solr.handler.component.HttpShardHandlerFactory.makeLoadBalancedRequest(org.apache.solr.client.solrj.request.QueryRequest,
>  java.util.List) @bci=13, line=214 (Compiled frame)
>  - org.apache.solr.handler.component.HttpShardHandler$1.call() @bci=246, 
> line=161 (Compiled frame)
>  - org.apache.solr.handler.component.HttpShardHandler$1.call() @bci=1, 
> line=118 (Interpreted frame)
>  - java.util.concurrent.FutureTask$Sync.innerRun() @bci=29, line=334 
> (Interpreted frame)
>  - java.util.concurrent.FutureTask.run() @bci=4, line=166 (Compiled frame)
>  - java.util.concurrent.Executors$RunnableAdapter.call() @bci=4, line=471 
> (Interpreted frame

[jira] [Commented] (SOLR-5937) Modernize the DIH example config sets

2014-03-31 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5937?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13955951#comment-13955951
 ] 

ASF subversion and git services commented on SOLR-5937:
---

Commit 1583501 from [~steve_rowe] in branch 'dev/branches/branch_4x'
[ https://svn.apache.org/r1583501 ]

SOLR-5937: Modernize the DIH example config sets

> Modernize the DIH example config sets
> -
>
> Key: SOLR-5937
> URL: https://issues.apache.org/jira/browse/SOLR-5937
> Project: Solr
>  Issue Type: Sub-task
>  Components: Schema and Analysis
>Reporter: Steve Rowe
>Priority: Minor
> Fix For: 4.8, 5.0
>
> Attachments: SOLR-5937.branch_4x.patch
>
>
> The DIH example schemas should be modified to include trie numeric/date 
> fields, and to add comments noting that the non-trie numeric/date fields are 
> deprecated and will be removed in 5.0.
> The DIH example {{solrconfig.xml}} files are also showing their age - they 
> should be copied from the main example {{solrconfig.xml}} and have the config 
> they need added back.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-445) Update Handlers abort with bad documents

2014-03-31 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/SOLR-445?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tomás Fernández Löbbe updated SOLR-445:
---

Attachment: SOLR-445-alternative.patch

This is a different approach to this issue. The errors are managed by an 
UpdateRequestProcessor that must be added before the other processors in the 
chain. It accepts maxErrors in the configuration as a default, or as a request 
parameter. The default maxErrors value is Integer.MAX_VALUE; to get the 
current behavior one would set it to 0 (although it wouldn't make sense to add 
the processor to the chain in that case, unless the value depends on the 
request parameter).
This handles only bad documents, not the other failures mentioned in previous 
comments (like Tika parsing exceptions, etc.).
The response will look something like: 

{code:xml}
<response>
  <lst name="responseHeader">
    <int name="maxErrors">10</int>
    <arr name="errors">
      <str>ERROR: [doc=1] Error adding field 'weight'='b' msg=For input string: "b"</str>
      <str>ERROR: [doc=3] Error adding field 'weight'='b' msg=For input string: "b"</str>
      ...
    </arr>
    <int name="status">0</int>
    <int name="QTime">17</int>
  </lst>
</response>
{code}
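
For illustration, a rough sketch of how such a processor might count failures 
(the class name, constructor plumbing, and error recording are my own 
assumptions, not taken from the attached patch):

{code}
import java.io.IOException;
import org.apache.solr.common.SolrException;
import org.apache.solr.update.AddUpdateCommand;
import org.apache.solr.update.processor.UpdateRequestProcessor;

// Hypothetical sketch: tolerates per-document failures until maxErrors is exceeded.
public class TolerantUpdateProcessor extends UpdateRequestProcessor {
  private final int maxErrors;
  private int errorCount = 0;

  public TolerantUpdateProcessor(int maxErrors, UpdateRequestProcessor next) {
    super(next);
    this.maxErrors = maxErrors; // Integer.MAX_VALUE by default; 0 = current fail-fast behavior
  }

  @Override
  public void processAdd(AddUpdateCommand cmd) throws IOException {
    try {
      super.processAdd(cmd); // hand the document to the rest of the chain
    } catch (Exception e) {
      if (++errorCount > maxErrors) {
        // past the threshold: rethrow and abort the request
        throw new SolrException(SolrException.ErrorCode.BAD_REQUEST, e);
      }
      // below the threshold: the error would be recorded for the response
      // (omitted here) and the next document is processed normally
    }
  }
}
{code}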

> Update Handlers abort with bad documents
> 
>
> Key: SOLR-445
> URL: https://issues.apache.org/jira/browse/SOLR-445
> Project: Solr
>  Issue Type: Improvement
>  Components: update
>Affects Versions: 1.3
>Reporter: Will Johnson
> Fix For: 4.8
>
> Attachments: SOLR-445-3_x.patch, SOLR-445-alternative.patch, 
> SOLR-445.patch, SOLR-445.patch, SOLR-445.patch, SOLR-445.patch, 
> SOLR-445_3x.patch, solr-445.xml
>
>
> Has anyone run into the problem of handling bad documents / failures 
> mid-batch?  I.e.:
> <add>
>   <doc>
>     <field name="id">1</field>
>   </doc>
>   <doc>
>     <field name="id">2</field>
>     <field name="date">I_AM_A_BAD_DATE</field>
>   </doc>
>   <doc>
>     <field name="id">3</field>
>   </doc>
> </add>
> Right now Solr adds the first doc and then aborts.  It would seem like it 
> should either fail the entire batch or log a message/return a code and then 
> continue on to add doc 3.  Option 1 would seem to be much harder to 
> accomplish and possibly require more memory while Option 2 would require more 
> information to come back from the API.  I'm about to dig into this but I 
> thought I'd ask to see if anyone had any suggestions, thoughts or comments.   
>  



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5773) CollapsingQParserPlugin should make elevated documents the group head

2014-03-31 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5773?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13955932#comment-13955932
 ] 

ASF subversion and git services commented on SOLR-5773:
---

Commit 1583500 from [~joel.bernstein] in branch 'dev/trunk'
[ https://svn.apache.org/r1583500 ]

SOLR-5773: CollapsingQParserPlugin should make elevated documents the group head

> CollapsingQParserPlugin should make elevated documents the group head
> -
>
> Key: SOLR-5773
> URL: https://issues.apache.org/jira/browse/SOLR-5773
> Project: Solr
>  Issue Type: Improvement
>  Components: query parsers
>Affects Versions: 4.6.1
>Reporter: David Boychuck
>Assignee: Joel Bernstein
>  Labels: collapse, solr
> Fix For: 4.8
>
> Attachments: SOLR-5773.patch, SOLR-5773.patch, SOLR-5773.patch, 
> SOLR-5773.patch, SOLR-5773.patch, SOLR-5773.patch, SOLR-5773.patch
>
>   Original Estimate: 8h
>  Remaining Estimate: 8h
>
> Hi Joel,
> I sent you an email but I'm not sure if you received it or not. I ran into a 
> bit of trouble using the CollapsingQParserPlugin with elevated documents. To 
> explain it simply, I want to exclude grouped documents when one of the 
> members of the group is contained in the elevated document set. I'm not sure 
> this is currently possible because, as you explain above, elevated documents 
> are added to the request context after the original query is constructed.
> To try to better illustrate the problem. If I have 2 documents docid=1 and 
> docid=2 and both have a groupid of 'a'. If a grouped query scores docid 2 
> first in the results but I have elevated docid 1 then both documents are 
> shown in the results when I really only want the elevated document to be 
> shown in the results.
> Is this something that would be difficult to implement? Any help is 
> appreciated.
> I think the solution would be to remove the documents from liveDocs that 
> share the same groupid in the getBoostDocs() function. Let me know if this 
> makes any sense. I'll continue working towards a solution in the meantime.
> {code}
> private IntOpenHashSet getBoostDocs(SolrIndexSearcher indexSearcher, 
> Set<String> boosted) throws IOException {
>   IntOpenHashSet boostDocs = null;
>   if(boosted != null) {
>     SchemaField idField = indexSearcher.getSchema().getUniqueKeyField();
>     String fieldName = idField.getName();
>     HashSet<BytesRef> localBoosts = new HashSet<BytesRef>(boosted.size()*2);
>     Iterator<String> boostedIt = boosted.iterator();
>     while(boostedIt.hasNext()) {
>       localBoosts.add(new BytesRef(boostedIt.next()));
>     }
>     boostDocs = new IntOpenHashSet(boosted.size()*2);
>     List<AtomicReaderContext> leaves = 
> indexSearcher.getTopReaderContext().leaves();
>     TermsEnum termsEnum = null;
>     DocsEnum docsEnum = null;
>     for(AtomicReaderContext leaf : leaves) {
>       AtomicReader reader = leaf.reader();
>       int docBase = leaf.docBase;
>       Bits liveDocs = reader.getLiveDocs();
>       Terms terms = reader.terms(fieldName);
>       termsEnum = terms.iterator(termsEnum);
>       Iterator<BytesRef> it = localBoosts.iterator();
>       while(it.hasNext()) {
>         BytesRef ref = it.next();
>         if(termsEnum.seekExact(ref)) {
>           docsEnum = termsEnum.docs(liveDocs, docsEnum);
>           int doc = docsEnum.nextDoc();
>           if(doc != -1) {
>             //Found the document.
>             boostDocs.add(doc+docBase);
>             *// HERE REMOVE ANY DOCUMENTS THAT SHARE THE GROUPID NOT ONLY 
> THE DOCID //*
>             it.remove();
>           }
>         }
>       }
>     }
>   }
>   return boostDocs;
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5232) SolrCloud should distribute updates via streaming rather than buffering.

2014-03-31 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5232?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13955931#comment-13955931
 ] 

Mark Miller commented on SOLR-5232:
---

I do think the per-request time may have gone up a bit (and we might be able 
to improve that), but the tradeoff is that batch or streaming updates are 
much, much faster.
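
For illustration, a small SolrJ sketch contrasting the two client-side styles 
(the URL, document count, and field names are assumed for the example):

{code}
import java.util.ArrayList;
import java.util.List;
import org.apache.solr.client.solrj.impl.HttpSolrServer;
import org.apache.solr.common.SolrInputDocument;

public class UpdateStyles {
  public static void main(String[] args) throws Exception {
    HttpSolrServer server = new HttpSolrServer("http://localhost:8983/solr/collection1");
    List<SolrInputDocument> docs = new ArrayList<>();
    for (int i = 0; i < 1000; i++) {
      SolrInputDocument doc = new SolrInputDocument();
      doc.addField("id", Integer.toString(i));
      docs.add(doc);
    }
    // one update per request: one HTTP round trip (and connection lease) per document
    for (SolrInputDocument doc : docs) {
      server.add(doc);
    }
    // batched: a single request carrying all documents
    server.add(docs);
    server.commit();
  }
}
{code}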

> SolrCloud should distribute updates via streaming rather than buffering.
> 
>
> Key: SOLR-5232
> URL: https://issues.apache.org/jira/browse/SOLR-5232
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrCloud
>Reporter: Mark Miller
>Assignee: Mark Miller
>Priority: Critical
> Fix For: 4.6, 5.0
>
> Attachments: SOLR-5232.patch, SOLR-5232.patch, SOLR-5232.patch, 
> SOLR-5232.patch, SOLR-5232.patch, SOLR-5232.patch
>
>
> The current approach was never the best for SolrCloud - it was designed for 
> a pre-SolrCloud Solr. It also uses too many connections and threads; nailing 
> that down is likely wasted effort when we should really move away from 
> explicitly buffering docs and sending small batches per thread as we have 
> been doing.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5931) solrcore.properties is not reloaded when core is reloaded

2014-03-31 Thread Gary Yue (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5931?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13955910#comment-13955910
 ] 

Gary Yue commented on SOLR-5931:


Thanks Erick. I'll ask these kinds of questions on the user list going 
forward.

Regarding your suggestion: I am aware of such support, but I'm not sure how it 
is different from changing the "replication for slave" section of 
solrconfig.xml on the master Solr. In both cases you need to make a change on 
the master box, and it can take up to a few minutes for the files to be 
replicated (especially if you have multiple levels of replication hierarchy, 
for example master A -> repeater B -> slave C). I understand that this may be 
the only workaround available though, as there appears to be no other way of 
getting RELOAD to re-read a local property file (like solrcore.properties) 
that is not meant to be replicated.


> solrcore.properties is not reloaded when core is reloaded
> -
>
> Key: SOLR-5931
> URL: https://issues.apache.org/jira/browse/SOLR-5931
> Project: Solr
>  Issue Type: Bug
>  Components: multicore
>Affects Versions: 4.7
>Reporter: Gunnlaugur Thor Briem
>Assignee: Shalin Shekhar Mangar
>Priority: Minor
>
> When I change solrcore.properties for a core, and then reload the core, the 
> previous values of the properties in that file are still in effect. If I 
> *unload* the core and then add it back, in the “Core Admin” section of the 
> admin UI, then the changes in solrcore.properties do take effect.
> My specific test case is a DataImportHandler where {{db-data-config.xml}} 
> uses a property to decide which DB host to talk to:
> {code:xml}
> <dataSource url="jdbc:postgresql://${dbhost}/${solr.core.name}" .../>
> {code}
> When I change that {{dbhost}} property in {{solrcore.properties}} and reload 
> the core, the next dataimport operation still connects to the previous DB 
> host. Reloading the dataimport config does not help. I have to unload the 
> core (or fully restart the whole Solr) for the properties change to take 
> effect.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5937) Modernize the DIH example config sets

2014-03-31 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5937?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13955893#comment-13955893
 ] 

Steve Rowe commented on SOLR-5937:
--

I'm going to commit the 4.x patch shortly.

> Modernize the DIH example config sets
> -
>
> Key: SOLR-5937
> URL: https://issues.apache.org/jira/browse/SOLR-5937
> Project: Solr
>  Issue Type: Sub-task
>  Components: Schema and Analysis
>Reporter: Steve Rowe
>Priority: Minor
> Fix For: 4.8, 5.0
>
> Attachments: SOLR-5937.branch_4x.patch
>
>
> The DIH example schemas should be modified to include trie numeric/date 
> fields, and to add comments noting that the non-trie numeric/date fields are 
> deprecated and will be removed in 5.0.
> The DIH example {{solrconfig.xml}} files are also showing their age - they 
> should be copied from the main example {{solrconfig.xml}} and have the config 
> they need added back.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [JENKINS-MAVEN] Lucene-Solr-Maven-trunk #1136: POMs out of sync

2014-03-31 Thread Steve Rowe
I’ll dig - likely this is a misconversion from analysis-extras to 
solr-analyzers-extras when constructing the test dependencies for solr-core 
(analysis/analyzer conversions are done elsewhere since we apparently like 
having this duality...):

-
-run-maven-build
:
  [mvn] [INFO] Scanning for projects...
  [mvn] [INFO] 

  [mvn] [ERROR] FATAL ERROR
  [mvn] [INFO] 

  [mvn] [INFO] Error building POM (may not be this project's POM).
  [mvn] 
  [mvn] 
  [mvn] Project ID: org.apache.solr:solr-core-tests
  [mvn] POM Location: 
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Maven-trunk/maven-build/solr/core/src/test/pom.xml
  [mvn] Validation Messages:
  [mvn] 
  [mvn] [0]  'dependencies.dependency.version' is missing for 
org.apache.solr:solr-analyzers-extras:jar
  [mvn] 
  [mvn] 
  [mvn] Reason: Failed to validate POM for project 
org.apache.solr:solr-core-tests at 
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Maven-trunk/maven-build/solr/core/src/test/pom.xml
-

On Mar 31, 2014, at 7:11 PM, Apache Jenkins Server  
wrote:

> Build: https://builds.apache.org/job/Lucene-Solr-Maven-trunk/1136/
> 
> No tests ran.
> 
> Build Log:
> [...truncated 39919 lines...]
> BUILD FAILED
> /usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Maven-trunk/build.xml:490:
>  The following error occurred while executing this line:
> /usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Maven-trunk/build.xml:182:
>  The following error occurred while executing this line:
> /usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Maven-trunk/extra-targets.xml:77:
>  Java returned: 1
> 
> Total time: 31 minutes 4 seconds
> Build step 'Invoke Ant' marked build as failure
> Recording test results
> Email was triggered for: Failure
> Sending email for trigger: Failure
> 
> 
> 
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5560) Cleanup charset handling for Java 7

2014-03-31 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13955869#comment-13955869
 ] 

ASF subversion and git services commented on LUCENE-5560:
-

Commit 1583489 from [~thetaphi] in branch 'dev/branches/branch_4x'
[ https://svn.apache.org/r1583489 ]

Merged revision(s) 1583488 from lucene/dev/trunk:
LUCENE-5560: Remove useless exception block

> Cleanup charset handling for Java 7
> ---
>
> Key: LUCENE-5560
> URL: https://issues.apache.org/jira/browse/LUCENE-5560
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Uwe Schindler
>Assignee: Uwe Schindler
> Fix For: 4.8, 5.0
>
> Attachments: LUCENE-5560-addonByAhmet.patch, 
> LUCENE-5560-followup.patch, LUCENE-5560-google-Charset.patch, 
> LUCENE-5560.patch, LUCENE-5560.patch, LUCENE-5560.patch, LUCENE-5560.patch
>
>
> As we are now on Java 7, we should cleanup our charset handling to use the 
> official constants added by Java 7: {{StandardCharsets}}
> This issue is just a small code refactoring, trying to nuke the IOUtils 
> constants and replace them with the official ones provided by Java 7.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5560) Cleanup charset handling for Java 7

2014-03-31 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13955868#comment-13955868
 ] 

ASF subversion and git services commented on LUCENE-5560:
-

Commit 1583488 from [~thetaphi] in branch 'dev/trunk'
[ https://svn.apache.org/r1583488 ]

LUCENE-5560: Remove useless exception block

> Cleanup charset handling for Java 7
> ---
>
> Key: LUCENE-5560
> URL: https://issues.apache.org/jira/browse/LUCENE-5560
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Uwe Schindler
>Assignee: Uwe Schindler
> Fix For: 4.8, 5.0
>
> Attachments: LUCENE-5560-addonByAhmet.patch, 
> LUCENE-5560-followup.patch, LUCENE-5560-google-Charset.patch, 
> LUCENE-5560.patch, LUCENE-5560.patch, LUCENE-5560.patch, LUCENE-5560.patch
>
>
> As we are now on Java 7, we should cleanup our charset handling to use the 
> official constants added by Java 7: {{StandardCharsets}}
> This issue is just a small code refactoring, trying to nuke the IOUtils 
> constants and replace them with the official ones provided by Java 7.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4470) Support for basic http auth in internal solr requests

2014-03-31 Thread David Webster (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13955861#comment-13955861
 ] 

David Webster commented on SOLR-4470:
-

Just a few clarifying comments. Yes, likewise with this patch, we only made a 
very minor mod to the sending side. However, we have a fairly complicated way 
of obtaining the credentials to place on the outbound side if they were not 
already valid. That required binding in a custom jar (as opposed to touching 
additional core Solr code) containing the logic for interacting with our 
SiteMinder and CyberArk infrastructure to get them. Other than that, we made 
no mods on the receiving side; that's all handled by a Tomcat JAASLoginModule 
and has nothing to do with Solr.

Hopefully they rethink the plan of moving to a standalone implementation of 
some sort, because we are now quite confident we will have little trouble 
moving from version to version in the future, as the small change to core 
code is in a place that should never change.

> Support for basic http auth in internal solr requests
> -
>
> Key: SOLR-4470
> URL: https://issues.apache.org/jira/browse/SOLR-4470
> Project: Solr
>  Issue Type: New Feature
>  Components: clients - java, multicore, replication (java), SolrCloud
>Affects Versions: 4.0
>Reporter: Per Steffensen
>Assignee: Jan Høydahl
>  Labels: authentication, https, solrclient, solrcloud, ssl
> Fix For: 5.0
>
> Attachments: SOLR-4470.patch, SOLR-4470.patch, SOLR-4470.patch, 
> SOLR-4470.patch, SOLR-4470.patch, SOLR-4470.patch, SOLR-4470.patch, 
> SOLR-4470.patch, SOLR-4470.patch, SOLR-4470.patch, SOLR-4470.patch, 
> SOLR-4470.patch, SOLR-4470_branch_4x_r1452629.patch, 
> SOLR-4470_branch_4x_r1452629.patch, SOLR-4470_branch_4x_r145.patch, 
> SOLR-4470_trunk_r1568857.patch
>
>
> We want to protect any HTTP-resource (url). We want to require credentials no 
> matter what kind of HTTP-request you make to a Solr-node.
> It can fairly easily be achieved as described on 
> http://wiki.apache.org/solr/SolrSecurity. The problem is that Solr-nodes 
> also make "internal" requests to other Solr-nodes, and for those to work 
> credentials need to be provided as well.
> Ideally we would like to "forward" credentials from a particular request to 
> all the "internal" sub-requests it triggers. E.g. for search and update 
> requests.
> But there are also "internal" requests
> * that are only indirectly/asynchronously triggered by "outside" requests 
> (e.g. shard creation/deletion/etc based on calls to the "Collection API")
> * that have no relation at all to an "outside" "super"-request (e.g. 
> replica syncing stuff)
> We would like to aim at a solution where the "original" credentials are 
> "forwarded" when a request directly/synchronously triggers a subrequest, 
> with a fallback to configured "internal credentials" for the 
> asynchronous/non-rooted requests.
> In our solution we would aim at supporting only basic http auth, but we 
> would like to build a "framework" around it, so that not too much 
> refactoring is needed if you later want to add support for other kinds of 
> auth (e.g. digest).
> We will work on a solution, but are creating this JIRA issue early in order 
> to get input/comments from the community as early as possible.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS-MAVEN] Lucene-Solr-Maven-trunk #1136: POMs out of sync

2014-03-31 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Maven-trunk/1136/

No tests ran.

Build Log:
[...truncated 39919 lines...]
BUILD FAILED
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Maven-trunk/build.xml:490: 
The following error occurred while executing this line:
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Maven-trunk/build.xml:182: 
The following error occurred while executing this line:
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Maven-trunk/extra-targets.xml:77:
 Java returned: 1

Total time: 31 minutes 4 seconds
Build step 'Invoke Ant' marked build as failure
Recording test results
Email was triggered for: Failure
Sending email for trigger: Failure



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[jira] [Commented] (SOLR-5937) Modernize the DIH example config sets

2014-03-31 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5937?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13955860#comment-13955860
 ] 

Steve Rowe commented on SOLR-5937:
--

bq. Something that's been in the back of my mind for discussion is adding 
'collection2' to the standard example with the same config/schema as 
collection1. I can't decide whether it's a good or bad idea.

I wonder if we could make 'ant example' create the non-main examples on the 
fly?  The changes I made here show that the config required for the example 
DIH cores is very minor, and could maybe be stored as a small patch that 'ant 
example' applies after copying the main example over, or something similar.  
The same could go for 'collection2', I guess?

> Modernize the DIH example config sets
> -
>
> Key: SOLR-5937
> URL: https://issues.apache.org/jira/browse/SOLR-5937
> Project: Solr
>  Issue Type: Sub-task
>  Components: Schema and Analysis
>Reporter: Steve Rowe
>Priority: Minor
> Fix For: 4.8, 5.0
>
> Attachments: SOLR-5937.branch_4x.patch
>
>
> The DIH example schemas should be modified to include trie numeric/date 
> fields, and to add comments noting that the non-trie numeric/date fields are 
> deprecated and will be removed in 5.0.
> The DIH example {{solrconfig.xml}} files are also showing their age - they 
> should be copied from the main example {{solrconfig.xml}} and have the config 
> they need added back.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-4.x-Windows (64bit/jdk1.8.0) - Build # 3833 - Still Failing!

2014-03-31 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-4.x-Windows/3833/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseSerialGC

1 tests failed.
REGRESSION:  org.apache.solr.core.TestNonNRTOpen.testReaderIsNotNRT

Error Message:
SOLR-5815? : wrong maxDoc: core=org.apache.solr.core.SolrCore@6b8288be 
searcher=Searcher@79c2a013[collection1] 
main{StandardDirectoryReader(segments_8:16 _4(4.8):C1 _5(4.8):C1)} expected:<3> 
but was:<2>

Stack Trace:
java.lang.AssertionError: SOLR-5815? : wrong maxDoc: 
core=org.apache.solr.core.SolrCore@6b8288be 
searcher=Searcher@79c2a013[collection1] 
main{StandardDirectoryReader(segments_8:16 _4(4.8):C1 _5(4.8):C1)} expected:<3> 
but was:<2>
at 
__randomizedtesting.SeedInfo.seed([4A246CFEF045583:B124274850C5E777]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at 
org.apache.solr.core.TestNonNRTOpen.assertNotNRT(TestNonNRTOpen.java:142)
at 
org.apache.solr.core.TestNonNRTOpen.testReaderIsNotNRT(TestNonNRTOpen.java:100)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1617)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:826)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:862)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:876)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.TestRuleFieldCacheSanity$1.evaluate(TestRuleFieldCacheSanity.java:51)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:359)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:783)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:443)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:835)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:771)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:782)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(T

[jira] [Commented] (LUCENE-5560) Cleanup charset handling for Java 7

2014-03-31 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13955795#comment-13955795
 ] 

ASF subversion and git services commented on LUCENE-5560:
-

Commit 1583477 from [~thetaphi] in branch 'dev/branches/branch_4x'
[ https://svn.apache.org/r1583477 ]

Merged revision(s) 1583476 from lucene/dev/trunk:
LUCENE-5560: Replace com.google.common.base.Charsets by Java7 StandardCharsets

> Cleanup charset handling for Java 7
> ---
>
> Key: LUCENE-5560
> URL: https://issues.apache.org/jira/browse/LUCENE-5560
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Uwe Schindler
>Assignee: Uwe Schindler
> Fix For: 4.8, 5.0
>
> Attachments: LUCENE-5560-addonByAhmet.patch, 
> LUCENE-5560-followup.patch, LUCENE-5560-google-Charset.patch, 
> LUCENE-5560.patch, LUCENE-5560.patch, LUCENE-5560.patch, LUCENE-5560.patch
>
>
> As we are now on Java 7, we should cleanup our charset handling to use the 
> official constants added by Java 7: {{StandardCharsets}}
> This issue is just a small code refactoring, trying to nuke the IOUtils 
> constants and replace them with the official ones provided by Java 7.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5560) Cleanup charset handling for Java 7

2014-03-31 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13955791#comment-13955791
 ] 

ASF subversion and git services commented on LUCENE-5560:
-

Commit 1583476 from [~thetaphi] in branch 'dev/trunk'
[ https://svn.apache.org/r1583476 ]

LUCENE-5560: Replace com.google.common.base.Charsets by Java7 StandardCharsets

> Cleanup charset handling for Java 7
> ---
>
> Key: LUCENE-5560
> URL: https://issues.apache.org/jira/browse/LUCENE-5560
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Uwe Schindler
>Assignee: Uwe Schindler
> Fix For: 4.8, 5.0
>
> Attachments: LUCENE-5560-addonByAhmet.patch, 
> LUCENE-5560-followup.patch, LUCENE-5560-google-Charset.patch, 
> LUCENE-5560.patch, LUCENE-5560.patch, LUCENE-5560.patch, LUCENE-5560.patch
>
>
> As we are now on Java 7, we should cleanup our charset handling to use the 
> official constants added by Java 7: {{StandardCharsets}}
> This issue is just a small code refactoring, trying to nuke the IOUtils 
> constants and replace them with the official ones provided by Java 7.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4470) Support for basic http auth in internal solr requests

2014-03-31 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-4470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13955789#comment-13955789
 ] 

Jan Høydahl commented on SOLR-4470:
---

I'll be away for a few weeks, feel free to continue to improve the patch while 
I'm away :)

> Support for basic http auth in internal solr requests
> -
>
> Key: SOLR-4470
> URL: https://issues.apache.org/jira/browse/SOLR-4470
> Project: Solr
>  Issue Type: New Feature
>  Components: clients - java, multicore, replication (java), SolrCloud
>Affects Versions: 4.0
>Reporter: Per Steffensen
>Assignee: Jan Høydahl
>  Labels: authentication, https, solrclient, solrcloud, ssl
> Fix For: 5.0
>
> Attachments: SOLR-4470.patch, SOLR-4470.patch, SOLR-4470.patch, 
> SOLR-4470.patch, SOLR-4470.patch, SOLR-4470.patch, SOLR-4470.patch, 
> SOLR-4470.patch, SOLR-4470.patch, SOLR-4470.patch, SOLR-4470.patch, 
> SOLR-4470.patch, SOLR-4470_branch_4x_r1452629.patch, 
> SOLR-4470_branch_4x_r1452629.patch, SOLR-4470_branch_4x_r145.patch, 
> SOLR-4470_trunk_r1568857.patch
>
>
> We want to protect any HTTP-resource (url). We want to require credentials no 
> matter what kind of HTTP-request you make to a Solr-node.
> It can fairly easily be achieved as described on 
> http://wiki.apache.org/solr/SolrSecurity. The problem is that Solr-nodes 
> also make "internal" requests to other Solr-nodes, and for those to work 
> credentials need to be provided as well.
> Ideally we would like to "forward" credentials from a particular request to 
> all the "internal" sub-requests it triggers. E.g. for search and update 
> requests.
> But there are also "internal" requests
> * that are only indirectly/asynchronously triggered by "outside" requests 
> (e.g. shard creation/deletion/etc based on calls to the "Collection API")
> * that have no relation at all to an "outside" "super"-request (e.g. 
> replica syncing stuff)
> We would like to aim at a solution where the "original" credentials are 
> "forwarded" when a request directly/synchronously triggers a subrequest, 
> with a fallback to configured "internal credentials" for the 
> asynchronous/non-rooted requests.
> In our solution we would aim at supporting only basic http auth, but we 
> would like to build a "framework" around it, so that not too much 
> refactoring is needed if you later want to add support for other kinds of 
> auth (e.g. digest).
> We will work on a solution, but are creating this JIRA issue early in order 
> to get input/comments from the community as early as possible.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5560) Cleanup charset handling for Java 7

2014-03-31 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13955783#comment-13955783
 ] 

Uwe Schindler commented on LUCENE-5560:
---

Thanks [~iorixxx]! I will commit this, too.

> Cleanup charset handling for Java 7
> ---
>
> Key: LUCENE-5560
> URL: https://issues.apache.org/jira/browse/LUCENE-5560
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Uwe Schindler
>Assignee: Uwe Schindler
> Fix For: 4.8, 5.0
>
> Attachments: LUCENE-5560-addonByAhmet.patch, 
> LUCENE-5560-followup.patch, LUCENE-5560-google-Charset.patch, 
> LUCENE-5560.patch, LUCENE-5560.patch, LUCENE-5560.patch, LUCENE-5560.patch
>
>
> As we are now on Java 7, we should cleanup our charset handling to use the 
> official constants added by Java 7: {{StandardCharsets}}
> This issue is just a small code refactoring, trying to nuke the IOUtils 
> constants and replace them with the official ones provided by Java 7.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5931) solrcore.properties is not reloaded when core is reloaded

2014-03-31 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5931?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13955756#comment-13955756
 ] 

Erick Erickson commented on SOLR-5931:
--

Gary:

First, this would have been best asked on the user's list first...

But if you're using old-style master/slave configurations, you can configure 
the replication to take particular files from the master and replicate them 
to a differently-named file on the slave. See the example at the link below 
(confFiles is the param). Does that work?

See: https://cwiki.apache.org/confluence/display/solr/Index+Replication

> solrcore.properties is not reloaded when core is reloaded
> -
>
> Key: SOLR-5931
> URL: https://issues.apache.org/jira/browse/SOLR-5931
> Project: Solr
>  Issue Type: Bug
>  Components: multicore
>Affects Versions: 4.7
>Reporter: Gunnlaugur Thor Briem
>Assignee: Shalin Shekhar Mangar
>Priority: Minor
>
> When I change solrcore.properties for a core, and then reload the core, the 
> previous values of the properties in that file are still in effect. If I 
> *unload* the core and then add it back, in the “Core Admin” section of the 
> admin UI, then the changes in solrcore.properties do take effect.
> My specific test case is a DataImportHandler where {{db-data-config.xml}} 
> uses a property to decide which DB host to talk to:
> {code:xml}
> <dataSource url="jdbc:postgresql://${dbhost}/${solr.core.name}" .../>
> {code}
> When I change that {{dbhost}} property in {{solrcore.properties}} and reload 
> the core, the next dataimport operation still connects to the previous DB 
> host. Reloading the dataimport config does not help. I have to unload the 
> core (or fully restart the whole Solr) for the properties change to take 
> effect.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5935) SolrCloud hangs under certain conditions

2014-03-31 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-5935?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13955738#comment-13955738
 ] 

Rafał Kuć commented on SOLR-5935:
-

No problem. As for the connection issues - we tried bumping up 
maxConnectionsPerHost for the shardHandlerFactory. The higher the value of 
maxConnectionsPerHost, the faster Solr was locking up.

> SolrCloud hangs under certain conditions
> 
>
> Key: SOLR-5935
> URL: https://issues.apache.org/jira/browse/SOLR-5935
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 4.6.1
>Reporter: Rafał Kuć
>Priority: Critical
> Attachments: thread dumps.zip
>
>
> As discussed on the mailing list - let's try to find the reason why, under 
> certain conditions, SolrCloud can hang.
> I have an issue with one of the SolrCloud deployments. Six machines, a 
> collection with 6 shards with a replication factor of 3. It all runs on 6 
> physical servers, each with 24 cores. We've indexed about 32 million 
> documents and everything was fine until that point.
> Now, during performance tests, we ran into an issue - SolrCloud hangs
> when querying and indexing run at the same time. First we see a
> normal load on the machines, then the load starts to drop and a thread
> dump shows numerous threads like this:
> {noformat}
> Thread 12624: (state = BLOCKED)
>  - sun.misc.Unsafe.park(boolean, long) @bci=0 (Compiled frame; information 
> may be imprecise)
>  - java.util.concurrent.locks.LockSupport.park(java.lang.Object) @bci=14, 
> line=186 (Compiled frame)
>  - 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await() 
> @bci=42, line=2043 (Compiled frame)
>  - org.apache.http.pool.PoolEntryFuture.await(java.util.Date) @bci=50, 
> line=131 (Compiled frame)
>  - 
> org.apache.http.pool.AbstractConnPool.getPoolEntryBlocking(java.lang.Object, 
> java.lang.Object, long, java.util.concurrent.TimeUnit, 
> org.apache.http.pool.PoolEntryFuture) @bci=431, line=281 (Compiled frame)
>  - 
> org.apache.http.pool.AbstractConnPool.access$000(org.apache.http.pool.AbstractConnPool,
>  java.lang.Object, java.lang.Object, long, java.util.concurrent.TimeUnit, 
> org.apache.http.pool.PoolEntryFuture) @bci=8, line=62 (Compiled frame)
>  - org.apache.http.pool.AbstractConnPool$2.getPoolEntry(long, 
> java.util.concurrent.TimeUnit) @bci=15, line=176 (Compiled frame)
>  - org.apache.http.pool.AbstractConnPool$2.getPoolEntry(long, 
> java.util.concurrent.TimeUnit) @bci=3, line=169 (Compiled frame)
>  - org.apache.http.pool.PoolEntryFuture.get(long, 
> java.util.concurrent.TimeUnit) @bci=38, line=100 (Compiled frame)
>  - 
> org.apache.http.impl.conn.PoolingClientConnectionManager.leaseConnection(java.util.concurrent.Future,
>  long, java.util.concurrent.TimeUnit) @bci=4, line=212 (Compiled frame)
>  - 
> org.apache.http.impl.conn.PoolingClientConnectionManager$1.getConnection(long,
>  java.util.concurrent.TimeUnit) @bci=10, line=199 (Compiled frame)
>  - 
> org.apache.http.impl.client.DefaultRequestDirector.execute(org.apache.http.HttpHost,
>  org.apache.http.HttpRequest, org.apache.http.protocol.HttpContext) @bci=259, 
> line=456 (Compiled frame)
>  - 
> org.apache.http.impl.client.AbstractHttpClient.execute(org.apache.http.HttpHost,
>  org.apache.http.HttpRequest, org.apache.http.protocol.HttpContext) @bci=344, 
> line=906 (Compiled frame)
>  - 
> org.apache.http.impl.client.AbstractHttpClient.execute(org.apache.http.client.methods.HttpUriRequest,
>  org.apache.http.protocol.HttpContext) @bci=21, line=805 (Compiled frame)
>  - 
> org.apache.http.impl.client.AbstractHttpClient.execute(org.apache.http.client.methods.HttpUriRequest)
>  @bci=6, line=784 (Compiled frame)
>  - 
> org.apache.solr.client.solrj.impl.HttpSolrServer.request(org.apache.solr.client.solrj.SolrRequest,
>  org.apache.solr.client.solrj.ResponseParser) @bci=1175, line=395 
> (Interpreted frame)
>  - 
> org.apache.solr.client.solrj.impl.HttpSolrServer.request(org.apache.solr.client.solrj.SolrRequest)
>  @bci=17, line=199 (Compiled frame)
>  - 
> org.apache.solr.client.solrj.impl.LBHttpSolrServer.request(org.apache.solr.client.solrj.impl.LBHttpSolrServer$Req)
>  @bci=132, line=285 (Interpreted frame)
>  - 
> org.apache.solr.handler.component.HttpShardHandlerFactory.makeLoadBalancedRequest(org.apache.solr.client.solrj.request.QueryRequest,
>  java.util.List) @bci=13, line=214 (Compiled frame)
>  - org.apache.solr.handler.component.HttpShardHandler$1.call() @bci=246, 
> line=161 (Compiled frame)
>  - org.apache.solr.handler.component.HttpShardHandler$1.call() @bci=1, 
> line=118 (Interpreted frame)
>  - java.util.concurrent.FutureTask$Sync.innerRun() @bci=29, line=334 
> (Interpreted frame)
>  - java.util.concurrent.FutureTask.run() @bci=4, line=166 (Compiled frame)
>  - java.util.concurrent.Executors$Runna

[jira] [Updated] (LUCENE-5560) Cleanup charset handling for Java 7

2014-03-31 Thread Ahmet Arslan (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5560?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ahmet Arslan updated LUCENE-5560:
-

Attachment: LUCENE-5560-google-Charset.patch

This patch replaces {{com.google.common.base.Charsets.UTF_8}} with 
{{java.nio.charset.StandardCharsets.UTF_8}}.
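
For illustration, the mechanical change amounts to:

{code}
// before (Guava) and after (Java 7), shown on one representative call:
byte[] before = "text".getBytes(com.google.common.base.Charsets.UTF_8);
byte[] after = "text".getBytes(java.nio.charset.StandardCharsets.UTF_8);
{code}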

> Cleanup charset handling for Java 7
> ---
>
> Key: LUCENE-5560
> URL: https://issues.apache.org/jira/browse/LUCENE-5560
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Uwe Schindler
>Assignee: Uwe Schindler
> Fix For: 4.8, 5.0
>
> Attachments: LUCENE-5560-addonByAhmet.patch, 
> LUCENE-5560-followup.patch, LUCENE-5560-google-Charset.patch, 
> LUCENE-5560.patch, LUCENE-5560.patch, LUCENE-5560.patch, LUCENE-5560.patch
>
>
> As we are now on Java 7, we should cleanup our charset handling to use the 
> official constants added by Java 7: {{StandardCharsets}}
> This issue is just a small code refactoring, trying to nuke the IOUtils 
> constants and replace them with the official ones provided by Java 7.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [JENKINS] Lucene-trunk-Linux-Java7-64-test-only - Build # 81070 - Failure!

2014-03-31 Thread Robert Muir
Thanks!

On Mon, Mar 31, 2014 at 4:19 PM, Michael McCandless
 wrote:
> I committed a fix.
>
> Mike McCandless
>
> http://blog.mikemccandless.com
>
>
> On Mon, Mar 31, 2014 at 3:48 PM, Michael McCandless
>  wrote:
>> I'll try to remove it.
>>
>> Mike McCandless
>>
>> http://blog.mikemccandless.com
>>
>>
>> On Mon, Mar 31, 2014 at 3:19 PM, Simon Willnauer  wrote:
>>> I agree, I don't think it's necessary to run that here - we are done
>>> with that IW anyways no?
>>>
>>> On Mon, Mar 31, 2014 at 6:10 PM, Michael McCandless
>>>  wrote:
 Doesn't repro for me but I think it's related to LUCENE-5544; it's
 happening in the very last part of IW.rollbackInternal:

 try {
   processEvents(false, true);
 } finally {
   notifyAll();
 }

 Down in DocumentsWriterFlushQueue, the assert is angry that we are
 sync'd on the IW instance.

 Is it even necessary to process events after rollback has "finished"?
 What could the events even do (the IW is closed)...

 Mike McCandless

 http://blog.mikemccandless.com


 On Mon, Mar 31, 2014 at 5:16 AM,   wrote:
> Build: 
> builds.flonkings.com/job/Lucene-trunk-Linux-Java7-64-test-only/81070/
>
> 1 tests failed.
> REGRESSION:  
> org.apache.lucene.index.TestIndexWriterWithThreads.testRollbackAndCommitWithThreads
>
> Error Message:
> Captured an uncaught exception in thread: Thread[id=164, name=Thread-98, 
> state=RUNNABLE, group=TGRP-TestIndexWriterWithThreads]
>
> Stack Trace:
> com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an 
> uncaught exception in thread: Thread[id=164, name=Thread-98, 
> state=RUNNABLE, group=TGRP-TestIndexWriterWithThreads]
> Caused by: java.lang.RuntimeException: java.lang.AssertionError
> at __randomizedtesting.SeedInfo.seed([A2CAC9704F740906]:0)
> at 
> org.apache.lucene.index.TestIndexWriterWithThreads$1.run(TestIndexWriterWithThreads.java:620)
> Caused by: java.lang.AssertionError
> at 
> org.apache.lucene.index.DocumentsWriterFlushQueue.forcePurge(DocumentsWriterFlushQueue.java:135)
> at 
> org.apache.lucene.index.DocumentsWriter.purgeBuffer(DocumentsWriter.java:196)
> at 
> org.apache.lucene.index.IndexWriter.purge(IndexWriter.java:4649)
> at 
> org.apache.lucene.index.IndexWriter.doAfterSegmentFlushed(IndexWriter.java:4662)
> at 
> org.apache.lucene.index.DocumentsWriter$MergePendingEvent.process(DocumentsWriter.java:699)
> at 
> org.apache.lucene.index.IndexWriter.processEvents(IndexWriter.java:4688)
> at 
> org.apache.lucene.index.IndexWriter.processEvents(IndexWriter.java:4680)
> at 
> org.apache.lucene.index.IndexWriter.rollbackInternal(IndexWriter.java:2184)
> at 
> org.apache.lucene.index.IndexWriter.rollback(IndexWriter.java:2085)
> at 
> org.apache.lucene.index.TestIndexWriterWithThreads$1.run(TestIndexWriterWithThreads.java:576)
>
>
>
>
> Build Log:
> [...truncated 690 lines...]
>[junit4] Suite: org.apache.lucene.index.TestIndexWriterWithThreads
>[junit4]   2> mar 31, 2014 8:14:01 PM 
> com.carrotsearch.randomizedtesting.RandomizedRunner$QueueUncaughtExceptionsHandler
>  uncaughtException
>[junit4]   2> WARNING: Uncaught exception in thread: 
> Thread[Thread-98,5,TGRP-TestIndexWriterWithThreads]
>[junit4]   2> java.lang.RuntimeException: java.lang.AssertionError
>[junit4]   2>at 
> __randomizedtesting.SeedInfo.seed([A2CAC9704F740906]:0)
>[junit4]   2>at 
> org.apache.lucene.index.TestIndexWriterWithThreads$1.run(TestIndexWriterWithThreads.java:620)
>[junit4]   2> Caused by: java.lang.AssertionError
>[junit4]   2>at 
> org.apache.lucene.index.DocumentsWriterFlushQueue.forcePurge(DocumentsWriterFlushQueue.java:135)
>[junit4]   2>at 
> org.apache.lucene.index.DocumentsWriter.purgeBuffer(DocumentsWriter.java:196)
>[junit4]   2>at 
> org.apache.lucene.index.IndexWriter.purge(IndexWriter.java:4649)
>[junit4]   2>at 
> org.apache.lucene.index.IndexWriter.doAfterSegmentFlushed(IndexWriter.java:4662)
>[junit4]   2>at 
> org.apache.lucene.index.DocumentsWriter$MergePendingEvent.process(DocumentsWriter.java:699)
>[junit4]   2>at 
> org.apache.lucene.index.IndexWriter.processEvents(IndexWriter.java:4688)
>[junit4]   2>at 
> org.apache.lucene.index.IndexWriter.processEvents(IndexWriter.java:4680)
>[junit4]   2>at 
> org.apache.lucene.index.IndexWriter.rollbackInternal(IndexWriter.java:2184)
>[junit4]   2>at 
> org.apache.lucene.index.IndexWriter.rollback(IndexWriter.java:

[jira] [Commented] (LUCENE-5560) Cleanup charset handling for Java 7

2014-03-31 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13955688#comment-13955688
 ] 

ASF subversion and git services commented on LUCENE-5560:
-

Commit 1583455 from [~thetaphi] in branch 'dev/branches/branch_4x'
[ https://svn.apache.org/r1583455 ]

Merged revision(s) 1583449 from lucene/dev/trunk:
LUCENE-5560: Followup: Cleanup charset handling for Java 7

> Cleanup charset handling for Java 7
> ---
>
> Key: LUCENE-5560
> URL: https://issues.apache.org/jira/browse/LUCENE-5560
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Uwe Schindler
>Assignee: Uwe Schindler
> Fix For: 4.8, 5.0
>
> Attachments: LUCENE-5560-addonByAhmet.patch, 
> LUCENE-5560-followup.patch, LUCENE-5560.patch, LUCENE-5560.patch, 
> LUCENE-5560.patch, LUCENE-5560.patch
>
>
> As we are now on Java 7, we should cleanup our charset handling to use the 
> official constants added by Java 7: {{StandardCharsets}}
> This issue is just a small code refactoring, trying to nuke the IOUtils 
> constants and replace them with the official ones provided by Java 7.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (LUCENE-5560) Cleanup charset handling for Java 7

2014-03-31 Thread Uwe Schindler (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5560?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uwe Schindler resolved LUCENE-5560.
---

Resolution: Fixed

I've fixed this for now.

I will open a new issue to get rid of commons-io in Solr.

> Cleanup charset handling for Java 7
> ---
>
> Key: LUCENE-5560
> URL: https://issues.apache.org/jira/browse/LUCENE-5560
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Uwe Schindler
>Assignee: Uwe Schindler
> Fix For: 4.8, 5.0
>
> Attachments: LUCENE-5560-addonByAhmet.patch, 
> LUCENE-5560-followup.patch, LUCENE-5560.patch, LUCENE-5560.patch, 
> LUCENE-5560.patch, LUCENE-5560.patch
>
>
> As we are now on Java 7, we should cleanup our charset handling to use the 
> official constants added by Java 7: {{StandardCharsets}}
> This issue is just a small code refactoring, trying to nuke the IOUtils 
> constants and replace them with the official ones provided by Java 7.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-5908) Make REQUESTSTATUS call non-blocking and non-blocked

2014-03-31 Thread Anshum Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5908?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anshum Gupta updated SOLR-5908:
---

Attachment: SOLR-5908.patch

Updated patch with a quick test.
Updated AsyncMigrateRouteKeyTest to check, after 2 seconds, that the task is in 
the submitted/running state.
Considering MigrateRouteKey is a relatively long-running task, I think it makes 
sense to just use it to test this change.

> Make REQUESTSTATUS call non-blocking and non-blocked
> 
>
> Key: SOLR-5908
> URL: https://issues.apache.org/jira/browse/SOLR-5908
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrCloud
>Reporter: Anshum Gupta
>Assignee: Anshum Gupta
> Attachments: SOLR-5908.patch, SOLR-5908.patch
>
>
> Currently the REQUESTSTATUS Collection API call is blocked by any other call 
> in the OCP work queue.
> Make it independent and non-blocked/non-blocking.
> This would be handled as a part of having the OCP multi-threaded but I'm 
> opening this issue to explore other possible options of handling this.
> If the final fix happens via SOLR-5681, will resolve it when SOLR-5681 gets 
> resolved.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5560) Cleanup charset handling for Java 7

2014-03-31 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13955677#comment-13955677
 ] 

ASF subversion and git services commented on LUCENE-5560:
-

Commit 1583449 from [~thetaphi] in branch 'dev/trunk'
[ https://svn.apache.org/r1583449 ]

LUCENE-5560: Followup: Cleanup charset handling for Java 7

> Cleanup charset handling for Java 7
> ---
>
> Key: LUCENE-5560
> URL: https://issues.apache.org/jira/browse/LUCENE-5560
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Uwe Schindler
>Assignee: Uwe Schindler
> Fix For: 4.8, 5.0
>
> Attachments: LUCENE-5560-addonByAhmet.patch, 
> LUCENE-5560-followup.patch, LUCENE-5560.patch, LUCENE-5560.patch, 
> LUCENE-5560.patch, LUCENE-5560.patch
>
>
> As we are now on Java 7, we should clean up our charset handling to use the 
> official constants added by Java 7: {{StandardCharsets}}
> This issue is just a small code refactoring, trying to nuke the IOUtils 
> constants and replace them with the official ones provided by Java 7.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-5560) Cleanup charset handling for Java 7

2014-03-31 Thread Uwe Schindler (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5560?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uwe Schindler updated LUCENE-5560:
--

Attachment: LUCENE-5560-followup.patch

Followup patch, as described before.

> Cleanup charset handling for Java 7
> ---
>
> Key: LUCENE-5560
> URL: https://issues.apache.org/jira/browse/LUCENE-5560
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Uwe Schindler
>Assignee: Uwe Schindler
> Fix For: 4.8, 5.0
>
> Attachments: LUCENE-5560-addonByAhmet.patch, 
> LUCENE-5560-followup.patch, LUCENE-5560.patch, LUCENE-5560.patch, 
> LUCENE-5560.patch, LUCENE-5560.patch
>
>
> As we are now on Java 7, we should clean up our charset handling to use the 
> official constants added by Java 7: {{StandardCharsets}}
> This issue is just a small code refactoring, trying to nuke the IOUtils 
> constants and replace them with the official ones provided by Java 7.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5544) exceptions during IW.rollback can leak files and locks

2014-03-31 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5544?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13955647#comment-13955647
 ] 

ASF subversion and git services commented on LUCENE-5544:
-

Commit 1583440 from [~mikemccand] in branch 'dev/branches/branch_4x'
[ https://svn.apache.org/r1583440 ]

LUCENE-5544: disregard leftover events after rollback has finished

> exceptions during IW.rollback can leak files and locks
> --
>
> Key: LUCENE-5544
> URL: https://issues.apache.org/jira/browse/LUCENE-5544
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Robert Muir
> Fix For: 4.8, 5.0, 4.7.1
>
> Attachments: LUCENE-5544.patch, LUCENE-5544.patch
>
>
> Today, rollback() doesn't always succeed: if it does, it closes the writer 
> nicely. Otherwise, if it hits an exception, it leaves you with a half-broken 
> writer, still potentially holding file handles and the write lock.
> This is especially bad if you use Native locks, because you are kind of 
> hosed: the static map prevents you from forcefully unlocking (e.g. 
> IndexWriter.unlock), so you have no real course of action to try to recover.
> If rollback() hits an exception, it should still deliver the exception, but 
> release things (e.g. like IOUtils.close).
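
The description's last point - deliver the exception, but still release 
resources - boils down to a standard try/finally pattern. A minimal, 
self-contained sketch of that pattern (generic names, not the actual patch):

{noformat}
import java.io.Closeable;
import java.io.IOException;

// Sketch only: if the rollback work throws, still release resources,
// without masking the original exception. Names are illustrative, not
// IndexWriter's actual fields or methods.
public class RollbackDemo {
  private final Closeable fileHandles;
  private final Closeable writeLock;

  RollbackDemo(Closeable fileHandles, Closeable writeLock) {
    this.fileHandles = fileHandles;
    this.writeLock = writeLock;
  }

  void rollback() throws IOException {
    boolean success = false;
    try {
      doRollbackWork();   // may throw
      success = true;
    } finally {
      if (!success) {
        closeQuietly(fileHandles);  // best-effort; keep original exception
        closeQuietly(writeLock);
      }
    }
  }

  void doRollbackWork() throws IOException { /* abort merges, drop docs */ }

  static void closeQuietly(Closeable c) {
    try { c.close(); } catch (IOException suppressed) { /* swallowed */ }
  }
}
{noformat}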



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [JENKINS] Lucene-trunk-Linux-Java7-64-test-only - Build # 81070 - Failure!

2014-03-31 Thread Michael McCandless
I committed a fix.

Mike McCandless

http://blog.mikemccandless.com


On Mon, Mar 31, 2014 at 3:48 PM, Michael McCandless
 wrote:
> I'll try to remove it.
>
> Mike McCandless
>
> http://blog.mikemccandless.com
>
>
> On Mon, Mar 31, 2014 at 3:19 PM, Simon Willnauer  wrote:
>> I agree, I don't think it's necessary to run that here - we are done
>> with that IW anyway, no?
>>
>> On Mon, Mar 31, 2014 at 6:10 PM, Michael McCandless
>>  wrote:
>>> Doesn't repro for me but I think it's related to LUCENE-5544; it's
>>> happening in the very last part of IW.rollbackInternal:
>>>
>>> try {
>>>   processEvents(false, true);
>>> } finally {
>>>   notifyAll();
>>> }
>>>
>>> Down in DocumentsWriterFlushQueue, the assert is angry that we are
>>> sync'd on the IW instance.
>>>
>>> Is it even necessary to process events after rollback has "finished"?
>>> What could the events even do (the IW is closed)...
>>>
>>> Mike McCandless
>>>
>>> http://blog.mikemccandless.com
>>>
>>>
>>> On Mon, Mar 31, 2014 at 5:16 AM,   wrote:
 Build: 
 builds.flonkings.com/job/Lucene-trunk-Linux-Java7-64-test-only/81070/

 1 tests failed.
 REGRESSION:  
 org.apache.lucene.index.TestIndexWriterWithThreads.testRollbackAndCommitWithThreads

 Error Message:
 Captured an uncaught exception in thread: Thread[id=164, name=Thread-98, 
 state=RUNNABLE, group=TGRP-TestIndexWriterWithThreads]

 Stack Trace:
 com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an 
 uncaught exception in thread: Thread[id=164, name=Thread-98, 
 state=RUNNABLE, group=TGRP-TestIndexWriterWithThreads]
 Caused by: java.lang.RuntimeException: java.lang.AssertionError
 at __randomizedtesting.SeedInfo.seed([A2CAC9704F740906]:0)
 at 
 org.apache.lucene.index.TestIndexWriterWithThreads$1.run(TestIndexWriterWithThreads.java:620)
 Caused by: java.lang.AssertionError
 at 
 org.apache.lucene.index.DocumentsWriterFlushQueue.forcePurge(DocumentsWriterFlushQueue.java:135)
 at 
 org.apache.lucene.index.DocumentsWriter.purgeBuffer(DocumentsWriter.java:196)
 at org.apache.lucene.index.IndexWriter.purge(IndexWriter.java:4649)
 at 
 org.apache.lucene.index.IndexWriter.doAfterSegmentFlushed(IndexWriter.java:4662)
 at 
 org.apache.lucene.index.DocumentsWriter$MergePendingEvent.process(DocumentsWriter.java:699)
 at 
 org.apache.lucene.index.IndexWriter.processEvents(IndexWriter.java:4688)
 at 
 org.apache.lucene.index.IndexWriter.processEvents(IndexWriter.java:4680)
 at 
 org.apache.lucene.index.IndexWriter.rollbackInternal(IndexWriter.java:2184)
 at 
 org.apache.lucene.index.IndexWriter.rollback(IndexWriter.java:2085)
 at 
 org.apache.lucene.index.TestIndexWriterWithThreads$1.run(TestIndexWriterWithThreads.java:576)




 Build Log:
 [...truncated 690 lines...]
[junit4] Suite: org.apache.lucene.index.TestIndexWriterWithThreads
[junit4]   2> mar 31, 2014 8:14:01 PM 
 com.carrotsearch.randomizedtesting.RandomizedRunner$QueueUncaughtExceptionsHandler
  uncaughtException
[junit4]   2> WARNING: Uncaught exception in thread: 
 Thread[Thread-98,5,TGRP-TestIndexWriterWithThreads]
[junit4]   2> java.lang.RuntimeException: java.lang.AssertionError
[junit4]   2>at 
 __randomizedtesting.SeedInfo.seed([A2CAC9704F740906]:0)
[junit4]   2>at 
 org.apache.lucene.index.TestIndexWriterWithThreads$1.run(TestIndexWriterWithThreads.java:620)
[junit4]   2> Caused by: java.lang.AssertionError
[junit4]   2>at 
 org.apache.lucene.index.DocumentsWriterFlushQueue.forcePurge(DocumentsWriterFlushQueue.java:135)
[junit4]   2>at 
 org.apache.lucene.index.DocumentsWriter.purgeBuffer(DocumentsWriter.java:196)
[junit4]   2>at 
 org.apache.lucene.index.IndexWriter.purge(IndexWriter.java:4649)
[junit4]   2>at 
 org.apache.lucene.index.IndexWriter.doAfterSegmentFlushed(IndexWriter.java:4662)
[junit4]   2>at 
 org.apache.lucene.index.DocumentsWriter$MergePendingEvent.process(DocumentsWriter.java:699)
[junit4]   2>at 
 org.apache.lucene.index.IndexWriter.processEvents(IndexWriter.java:4688)
[junit4]   2>at 
 org.apache.lucene.index.IndexWriter.processEvents(IndexWriter.java:4680)
[junit4]   2>at 
 org.apache.lucene.index.IndexWriter.rollbackInternal(IndexWriter.java:2184)
[junit4]   2>at 
 org.apache.lucene.index.IndexWriter.rollback(IndexWriter.java:2085)
[junit4]   2>at 
 org.apache.lucene.index.TestIndexWriterWithThreads$1.run(TestIndexWriterWithThreads.java:576)
[junit4]   2>
[junit4]   2> NOTE: reproduce w
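
Judging from Mike's description, the assertion that fired is a holdsLock-style 
guard: the purge refuses to run while the caller is synchronized on the 
IndexWriter, because doing blocking work under that monitor risks deadlock. A 
minimal, self-contained illustration of the pattern (our names, not Lucene's; 
run with java -ea):

{noformat}
public class HoldsLockDemo {
  static final Object writerMonitor = new Object();

  // Mirrors the guard's intent: refuse to run under the coordinator's lock.
  static void forcePurge() {
    assert !Thread.holdsLock(writerMonitor) : "must not hold writer monitor";
  }

  public static void main(String[] args) {
    forcePurge();                    // fine: monitor not held
    synchronized (writerMonitor) {
      forcePurge();                  // trips the assert when run with -ea
    }
  }
}
{noformat}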

[jira] [Updated] (LUCENE-5560) Cleanup charset handling for Java 7

2014-03-31 Thread Uwe Schindler (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5560?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uwe Schindler updated LUCENE-5560:
--

Attachment: (was: LUCENE-5560.patch)

> Cleanup charset handling for Java 7
> ---
>
> Key: LUCENE-5560
> URL: https://issues.apache.org/jira/browse/LUCENE-5560
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Uwe Schindler
>Assignee: Uwe Schindler
> Fix For: 4.8, 5.0
>
> Attachments: LUCENE-5560-addonByAhmet.patch, LUCENE-5560.patch, 
> LUCENE-5560.patch, LUCENE-5560.patch, LUCENE-5560.patch
>
>
> As we are now on Java 7, we should clean up our charset handling to use the 
> official constants added by Java 7: {{StandardCharsets}}
> This issue is just a small code refactoring, trying to nuke the IOUtils 
> constants and replace them with the official ones provided by Java 7.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Reopened] (LUCENE-5560) Cleanup charset handling for Java 7

2014-03-31 Thread Uwe Schindler (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5560?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uwe Schindler reopened LUCENE-5560:
---


I have some more stuff in Solr to fix. Inspired by [~iorixxx], I fixed more 
code, especially tests (and some oversights). I also found pointless 
conversions in Solr like {{byte[] -> String -> byte[] -> String}}, used for no 
good reason to copy files or streams.

I will post a patch and backport it. For now I left the hardcoded strings in 
tests for IOUtils/FileUtils (because of an import clash with {{IOUtils.UTF_8}} 
from Lucene). I also left URLEncoder/Decoder untouched.
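
For illustration, the shape of this cleanup (generic example code, not the 
actual call sites touched by the patch):

{noformat}
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.io.UnsupportedEncodingException;
import java.nio.charset.StandardCharsets;

public class CharsetCleanupDemo {
  // Before: charset passed as a string literal, forcing a needless
  // checked UnsupportedEncodingException onto every caller.
  static String decodeOld(byte[] bytes) throws UnsupportedEncodingException {
    return new String(bytes, "UTF-8");
  }

  // After: the official Java 7 constant - no checked exception, no
  // charset lookup by name.
  static String decodeNew(byte[] bytes) {
    return new String(bytes, StandardCharsets.UTF_8);
  }

  // A byte[] -> String -> byte[] round-trip to copy a stream corrupts
  // non-text data and wastes memory; copying the bytes directly does not.
  static void copy(InputStream in, OutputStream out) throws IOException {
    byte[] buf = new byte[8192];
    for (int n; (n = in.read(buf)) != -1; ) {
      out.write(buf, 0, n);
    }
  }
}
{noformat}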

> Cleanup charset handling for Java 7
> ---
>
> Key: LUCENE-5560
> URL: https://issues.apache.org/jira/browse/LUCENE-5560
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Uwe Schindler
>Assignee: Uwe Schindler
> Fix For: 4.8, 5.0
>
> Attachments: LUCENE-5560-addonByAhmet.patch, LUCENE-5560.patch, 
> LUCENE-5560.patch, LUCENE-5560.patch, LUCENE-5560.patch
>
>
> As we are now on Java 7, we should clean up our charset handling to use the 
> official constants added by Java 7: {{StandardCharsets}}
> This issue is just a small code refactoring, trying to nuke the IOUtils 
> constants and replace them with the official ones provided by Java 7.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-5560) Cleanup charset handling for Java 7

2014-03-31 Thread Uwe Schindler (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5560?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uwe Schindler updated LUCENE-5560:
--

Attachment: LUCENE-5560-addonByAhmet.patch

Re-add Ahmet's patch with a different file name.

> Cleanup charset handling for Java 7
> ---
>
> Key: LUCENE-5560
> URL: https://issues.apache.org/jira/browse/LUCENE-5560
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Uwe Schindler
>Assignee: Uwe Schindler
> Fix For: 4.8, 5.0
>
> Attachments: LUCENE-5560-addonByAhmet.patch, LUCENE-5560.patch, 
> LUCENE-5560.patch, LUCENE-5560.patch, LUCENE-5560.patch
>
>
> As we are now on Java 7, we should clean up our charset handling to use the 
> official constants added by Java 7: {{StandardCharsets}}
> This issue is just a small code refactoring, trying to nuke the IOUtils 
> constants and replace them with the official ones provided by Java 7.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5544) exceptions during IW.rollback can leak files and locks

2014-03-31 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5544?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13955639#comment-13955639
 ] 

ASF subversion and git services commented on LUCENE-5544:
-

Commit 1583439 from [~mikemccand] in branch 'dev/trunk'
[ https://svn.apache.org/r1583439 ]

LUCENE-5544: disregard leftover events after rollback has finished

> exceptions during IW.rollback can leak files and locks
> --
>
> Key: LUCENE-5544
> URL: https://issues.apache.org/jira/browse/LUCENE-5544
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Robert Muir
> Fix For: 4.8, 5.0, 4.7.1
>
> Attachments: LUCENE-5544.patch, LUCENE-5544.patch
>
>
> Today, rollback() doesn't always succeed: if it does, it closes the writer 
> nicely. Otherwise, if it hits an exception, it leaves you with a half-broken 
> writer, still potentially holding file handles and the write lock.
> This is especially bad if you use Native locks, because you are kind of 
> hosed: the static map prevents you from forcefully unlocking (e.g. 
> IndexWriter.unlock), so you have no real course of action to try to recover.
> If rollback() hits an exception, it should still deliver the exception, but 
> release things (e.g. like IOUtils.close).



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [JENKINS] Lucene-trunk-Linux-Java7-64-test-only - Build # 81070 - Failure!

2014-03-31 Thread Michael McCandless
I'll try to remove it.

Mike McCandless

http://blog.mikemccandless.com


On Mon, Mar 31, 2014 at 3:19 PM, Simon Willnauer  wrote:
> I agree, I don't think it's necessary to run that here - we are done
> with that IW anyway, no?
>
> On Mon, Mar 31, 2014 at 6:10 PM, Michael McCandless
>  wrote:
>> Doesn't repro for me but I think it's related to LUCENE-5544; it's
>> happening in the very last part of IW.rollbackInternal:
>>
>> try {
>>   processEvents(false, true);
>> } finally {
>>   notifyAll();
>> }
>>
>> Down in DocumentsWriterFlushQueue, the assert is angry that we are
>> sync'd on the IW instance.
>>
>> Is it even necessary to process events after rollback has "finished"?
>> What could the events even do (the IW is closed)...
>>
>> Mike McCandless
>>
>> http://blog.mikemccandless.com
>>
>>
>> On Mon, Mar 31, 2014 at 5:16 AM,   wrote:
>>> Build: builds.flonkings.com/job/Lucene-trunk-Linux-Java7-64-test-only/81070/
>>>
>>> 1 tests failed.
>>> REGRESSION:  
>>> org.apache.lucene.index.TestIndexWriterWithThreads.testRollbackAndCommitWithThreads
>>>
>>> Error Message:
>>> Captured an uncaught exception in thread: Thread[id=164, name=Thread-98, 
>>> state=RUNNABLE, group=TGRP-TestIndexWriterWithThreads]
>>>
>>> Stack Trace:
>>> com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an 
>>> uncaught exception in thread: Thread[id=164, name=Thread-98, 
>>> state=RUNNABLE, group=TGRP-TestIndexWriterWithThreads]
>>> Caused by: java.lang.RuntimeException: java.lang.AssertionError
>>> at __randomizedtesting.SeedInfo.seed([A2CAC9704F740906]:0)
>>> at 
>>> org.apache.lucene.index.TestIndexWriterWithThreads$1.run(TestIndexWriterWithThreads.java:620)
>>> Caused by: java.lang.AssertionError
>>> at 
>>> org.apache.lucene.index.DocumentsWriterFlushQueue.forcePurge(DocumentsWriterFlushQueue.java:135)
>>> at 
>>> org.apache.lucene.index.DocumentsWriter.purgeBuffer(DocumentsWriter.java:196)
>>> at org.apache.lucene.index.IndexWriter.purge(IndexWriter.java:4649)
>>> at 
>>> org.apache.lucene.index.IndexWriter.doAfterSegmentFlushed(IndexWriter.java:4662)
>>> at 
>>> org.apache.lucene.index.DocumentsWriter$MergePendingEvent.process(DocumentsWriter.java:699)
>>> at 
>>> org.apache.lucene.index.IndexWriter.processEvents(IndexWriter.java:4688)
>>> at 
>>> org.apache.lucene.index.IndexWriter.processEvents(IndexWriter.java:4680)
>>> at 
>>> org.apache.lucene.index.IndexWriter.rollbackInternal(IndexWriter.java:2184)
>>> at 
>>> org.apache.lucene.index.IndexWriter.rollback(IndexWriter.java:2085)
>>> at 
>>> org.apache.lucene.index.TestIndexWriterWithThreads$1.run(TestIndexWriterWithThreads.java:576)
>>>
>>>
>>>
>>>
>>> Build Log:
>>> [...truncated 690 lines...]
>>>[junit4] Suite: org.apache.lucene.index.TestIndexWriterWithThreads
>>>[junit4]   2> mar 31, 2014 8:14:01 PM 
>>> com.carrotsearch.randomizedtesting.RandomizedRunner$QueueUncaughtExceptionsHandler
>>>  uncaughtException
>>>[junit4]   2> WARNING: Uncaught exception in thread: 
>>> Thread[Thread-98,5,TGRP-TestIndexWriterWithThreads]
>>>[junit4]   2> java.lang.RuntimeException: java.lang.AssertionError
>>>[junit4]   2>at 
>>> __randomizedtesting.SeedInfo.seed([A2CAC9704F740906]:0)
>>>[junit4]   2>at 
>>> org.apache.lucene.index.TestIndexWriterWithThreads$1.run(TestIndexWriterWithThreads.java:620)
>>>[junit4]   2> Caused by: java.lang.AssertionError
>>>[junit4]   2>at 
>>> org.apache.lucene.index.DocumentsWriterFlushQueue.forcePurge(DocumentsWriterFlushQueue.java:135)
>>>[junit4]   2>at 
>>> org.apache.lucene.index.DocumentsWriter.purgeBuffer(DocumentsWriter.java:196)
>>>[junit4]   2>at 
>>> org.apache.lucene.index.IndexWriter.purge(IndexWriter.java:4649)
>>>[junit4]   2>at 
>>> org.apache.lucene.index.IndexWriter.doAfterSegmentFlushed(IndexWriter.java:4662)
>>>[junit4]   2>at 
>>> org.apache.lucene.index.DocumentsWriter$MergePendingEvent.process(DocumentsWriter.java:699)
>>>[junit4]   2>at 
>>> org.apache.lucene.index.IndexWriter.processEvents(IndexWriter.java:4688)
>>>[junit4]   2>at 
>>> org.apache.lucene.index.IndexWriter.processEvents(IndexWriter.java:4680)
>>>[junit4]   2>at 
>>> org.apache.lucene.index.IndexWriter.rollbackInternal(IndexWriter.java:2184)
>>>[junit4]   2>at 
>>> org.apache.lucene.index.IndexWriter.rollback(IndexWriter.java:2085)
>>>[junit4]   2>at 
>>> org.apache.lucene.index.TestIndexWriterWithThreads$1.run(TestIndexWriterWithThreads.java:576)
>>>[junit4]   2>
>>>[junit4]   2> NOTE: reproduce with: ant test  
>>> -Dtestcase=TestIndexWriterWithThreads 
>>> -Dtests.method=testRollbackAndCommitWithThreads 
>>> -Dtests.seed=A2CAC9704F740906 -Dtests.slow=true -Dtests.locale=da 
>>> -Dtests.timezone=Australia/LHI -Dtests.file.encoding=UTF-8
>>>   

[jira] [Updated] (SOLR-5894) Speed up high-cardinality facets with sparse counters

2014-03-31 Thread Toke Eskildsen (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5894?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Toke Eskildsen updated SOLR-5894:
-

Description: 
Field based faceting in Solr has two phases: Collecting counts for tags in 
facets and extracting the requested tags.

The execution time for the collecting phase is approximately linear in the 
number of hits and the number of references from hits to tags. This phase is 
not the focus here.

The extraction time scales with the number of unique tags in the search result, 
but is also heavily influenced by the total number of unique tags in the facet, 
as every counter, 0 or not, is visited by the extractor (at least for count 
order). For fields with millions of unique tag values this adds tens of 
milliseconds to the minimum response time (see 
https://sbdevel.wordpress.com/2014/03/18/sparse-facet-counting-on-a-real-index/ 
for a test on a corpus with 7M unique values in the facet).

The extractor needs to visit every counter because the current counter 
structure is a plain int array of size #unique_tags. Switching to a sparse 
structure, where only the tag counters > 0 are visited, makes the extraction 
time linear in the number of unique tags in the result set.

Unfortunately the number of unique tags in the result set is unknown at collect 
time, so it is not possible to reliably select sparse counting vs. full 
counting up front. Luckily there exist solutions for sparse sets that have the 
property of switching to non-sparse mode without a switch penalty when the 
sparse threshold is exceeded (see 
http://programmingpraxis.com/2012/03/09/sparse-sets/ for an example). This JIRA 
aims to implement this functionality in Solr.

Current status: Sparse counting is implemented for field cache faceting, both 
single- and multi-value, with and without doc-values; sort by count only. The 
patch applies cleanly to Solr 4.6.1 and should integrate well with everything, 
as all functionality is unchanged. After patching, the following new parameters 
are available:

* facet.sparse=true enables sparse faceting.
* facet.sparse.mintags=1 the minimum number of unique tags in the given 
field for sparse faceting to be active. This is used for auto-selecting whether 
sparse should be used or not.
* facet.sparse.fraction=0.08 the overhead used for the sparse tracker. Setting 
this too low means that only very small result sets are handled as sparse. 
Setting this too high will result in a large performance penalty if the result 
set blows the sparse tracker. Values between 0.04 and 0.1 seem to work well.
* facet.sparse.pool.size=2 the maximum number of sparse trackers to clear and 
keep in memory, ready for use. Clearing and re-using a counter is faster than 
allocating a fresh one from the heap. Setting the pool size to 0 means that a 
new sparse counter will be allocated each time, just as standard Solr faceting 
works.

* facet.sparse.stats=true adds a special tag with timing statistics for sparse 
faceting.
* facet.sparse.stats.reset=true resets the timing statistics and clears the 
pool.

The parameters need to be given together with standard faceting parameters, 
such as facet=true&facet.field=myfield&facet.mincount=1&facet.sort=true. The 
defaults should be usable, so simply appending facet.sparse=true to the URL is 
a good start.

  was:
Field based faceting in Solr has two phases: Collecting counts for tags in 
facets and extracting the requested tags.

The execution time for the collecting phase is approximately linear in the 
number of hits and the number of references from hits to tags. This phase is 
not the focus here.

The extraction time scales with the number of unique tags in the search result, 
but is also heavily influenced by the total number of unique tags in the facet, 
as every counter, 0 or not, is visited by the extractor (at least for count 
order). For fields with millions of unique tag values this adds tens of 
milliseconds to the minimum response time (see 
https://sbdevel.wordpress.com/2014/03/18/sparse-facet-counting-on-a-real-index/ 
for a test on a corpus with 7M unique values in the facet).

The extractor needs to visit every counter because the current counter 
structure is a plain int array of size #unique_tags. Switching to a sparse 
structure, where only the tag counters > 0 are visited, makes the extraction 
time linear in the number of unique tags in the result set.

Unfortunately the number of unique tags in the result set is unknown at collect 
time, so it is not possible to reliably select sparse counting vs. full 
counting up front. Luckily there exist solutions for sparse sets that have the 
property of switching to non-sparse mode without a switch penalty when the 
sparse threshold is exceeded (see 
http://programmingpraxis.com/2012/03/09/sparse-sets/ for an example). This JIRA 
aims to implement this functionality in Solr.

Current status: Spars
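
To make the sparse-set idea concrete, here is a minimal, self-contained sketch 
of a sparse counter (our names, not the patch's): a tracker of touched slots 
sized by an overhead fraction, degrading to a full scan once the threshold is 
blown - the penalty-free switch the description relies on. Extraction then 
costs time proportional to the unique tags in the result set rather than in 
the whole field.

{noformat}
public class SparseCounter {
  private final int[] counts;    // one counter per unique tag
  private final int[] touched;   // slots that went from 0 to non-zero
  private int numTouched;        // -1 once the tracker has overflowed

  public SparseCounter(int uniqueTags, double fraction) {
    counts = new int[uniqueTags];
    touched = new int[(int) (uniqueTags * fraction) + 1];
  }

  public void increment(int tag) {
    if (counts[tag]++ == 0 && numTouched >= 0) {
      if (numTouched < touched.length) {
        touched[numTouched++] = tag;   // still sparse: remember the slot
      } else {
        numTouched = -1;               // blew the threshold: go non-sparse
      }
    }
  }

  public int count(int tag) {
    return counts[tag];
  }

  /** Tags with non-zero counts; visits only touched slots while sparse. */
  public int[] nonZeroTags() {
    if (numTouched >= 0) {
      return java.util.Arrays.copyOf(touched, numTouched);
    }
    int n = 0;
    for (int c : counts) {
      if (c > 0) n++;
    }
    int[] result = new int[n];
    int i = 0;
    for (int tag = 0; tag < counts.length; tag++) {
      if (counts[tag] > 0) result[i++] = tag;
    }
    return result;
  }
}
{noformat}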

[jira] [Updated] (SOLR-5894) Speed up high-cardinality facets with sparse counters

2014-03-31 Thread Toke Eskildsen (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5894?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Toke Eskildsen updated SOLR-5894:
-

Description: 
Field based faceting in Solr has two phases: Collecting counts for tags in 
facets and extracting the requested tags.

The execution time for the collecting phase is approximately linear in the 
number of hits and the number of references from hits to tags. This phase is 
not the focus here.

The extraction time scales with the number of unique tags in the search result, 
but is also heavily influenced by the total number of unique tags in the facet, 
as every counter, 0 or not, is visited by the extractor (at least for count 
order). For fields with millions of unique tag values this adds tens of 
milliseconds to the minimum response time (see 
https://sbdevel.wordpress.com/2014/03/18/sparse-facet-counting-on-a-real-index/ 
for a test on a corpus with 7M unique values in the facet).

The extractor needs to visit every counter because the current counter 
structure is a plain int array of size #unique_tags. Switching to a sparse 
structure, where only the tag counters > 0 are visited, makes the extraction 
time linear in the number of unique tags in the result set.

Unfortunately the number of unique tags in the result set is unknown at collect 
time, so it is not possible to reliably select sparse counting vs. full 
counting up front. Luckily there exist solutions for sparse sets that have the 
property of switching to non-sparse mode without a switch penalty when the 
sparse threshold is exceeded (see 
http://programmingpraxis.com/2012/03/09/sparse-sets/ for an example). This JIRA 
aims to implement this functionality in Solr.

Current status: Sparse counting is implemented for field cache faceting, both 
single- and multi-value, with and without doc-values. The patch applies cleanly 
to Solr 4.6.1 and should integrate well with everything, as all functionality 
is unchanged. After patching, the following new parameters are available:



  was:
Field based faceting in Solr has two phases: Collecting counts for tags in 
facets and extracting the requested tags.

The execution time for the collecting phase is approximately linear in the 
number of hits and the number of references from hits to tags. This phase is 
not the focus here.

The extraction time scales with the number of unique tags in the search result, 
but is also heavily influenced by the total number of unique tags in the facet, 
as every counter, 0 or not, is visited by the extractor (at least for count 
order). For fields with millions of unique tag values this adds tens of 
milliseconds to the minimum response time (see 
https://sbdevel.wordpress.com/2014/03/18/sparse-facet-counting-on-a-real-index/ 
for a test on a corpus with 7M unique values in the facet).

The extractor needs to visit every counter because the current counter 
structure is a plain int array of size #unique_tags. Switching to a sparse 
structure, where only the tag counters > 0 are visited, makes the extraction 
time linear in the number of unique tags in the result set.

Unfortunately the number of unique tags in the result set is unknown at collect 
time, so it is not possible to reliably select sparse counting vs. full 
counting up front. Luckily there exist solutions for sparse sets that have the 
property of switching to non-sparse mode without a switch penalty when the 
sparse threshold is exceeded (see 
http://programmingpraxis.com/2012/03/09/sparse-sets/ for an example). This JIRA 
aims to implement this functionality in Solr (a proof of concept patch will be 
provided shortly).


> Speed up high-cardinality facets with sparse counters
> -
>
> Key: SOLR-5894
> URL: https://issues.apache.org/jira/browse/SOLR-5894
> Project: Solr
>  Issue Type: Improvement
>  Components: SearchComponents - other
>Affects Versions: 4.6.1, 4.7
>Reporter: Toke Eskildsen
>Priority: Minor
> Fix For: 4.6.1
>
> Attachments: SOLR-5894.patch, SOLR-5894.patch, SOLR-5894.patch, 
> SOLR-5894.patch, SOLR-5894.patch, SOLR-5894_test.zip, SOLR-5894_test.zip, 
> SOLR-5894_test.zip, author_7M_tags_1852_logged_queries_warmed.png, 
> sparse_500docs_20140331-151918_multi.png, 
> sparse_500docs_20140331-151918_single.png, 
> sparse_5051docs_20140328-152807.png
>
>
> Field based faceting in Solr has two phases: Collecting counts for tags in 
> facets and extracting the requested tags.
> The execution time for the collecting phase is approximately linear in the 
> number of hits and the number of references from hits to tags. This phase is 
> not the focus here.
> The extraction time scales with the number of unique tags in the search 
> result, but is also heavily influenced by the total number of unique tags i

Re: [JENKINS] Lucene-trunk-Linux-Java7-64-test-only - Build # 81070 - Failure!

2014-03-31 Thread Simon Willnauer
I agree, I don't think it's necessary to run that here - we are done
with that IW anyway, no?

On Mon, Mar 31, 2014 at 6:10 PM, Michael McCandless
 wrote:
> Doesn't repro for me but I think it's related to LUCENE-5544; it's
> happening in the very last part of IW.rollbackInternal:
>
> try {
>   processEvents(false, true);
> } finally {
>   notifyAll();
> }
>
> Down in DocumentsWriterFlushQueue, the assert is angry that we are
> sync'd on the IW instance.
>
> Is it even necessary to process events after rollback has "finished"?
> What could the events even do (the IW is closed)...
>
> Mike McCandless
>
> http://blog.mikemccandless.com
>
>
> On Mon, Mar 31, 2014 at 5:16 AM,   wrote:
>> Build: builds.flonkings.com/job/Lucene-trunk-Linux-Java7-64-test-only/81070/
>>
>> 1 tests failed.
>> REGRESSION:  
>> org.apache.lucene.index.TestIndexWriterWithThreads.testRollbackAndCommitWithThreads
>>
>> Error Message:
>> Captured an uncaught exception in thread: Thread[id=164, name=Thread-98, 
>> state=RUNNABLE, group=TGRP-TestIndexWriterWithThreads]
>>
>> Stack Trace:
>> com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an 
>> uncaught exception in thread: Thread[id=164, name=Thread-98, state=RUNNABLE, 
>> group=TGRP-TestIndexWriterWithThreads]
>> Caused by: java.lang.RuntimeException: java.lang.AssertionError
>> at __randomizedtesting.SeedInfo.seed([A2CAC9704F740906]:0)
>> at 
>> org.apache.lucene.index.TestIndexWriterWithThreads$1.run(TestIndexWriterWithThreads.java:620)
>> Caused by: java.lang.AssertionError
>> at 
>> org.apache.lucene.index.DocumentsWriterFlushQueue.forcePurge(DocumentsWriterFlushQueue.java:135)
>> at 
>> org.apache.lucene.index.DocumentsWriter.purgeBuffer(DocumentsWriter.java:196)
>> at org.apache.lucene.index.IndexWriter.purge(IndexWriter.java:4649)
>> at 
>> org.apache.lucene.index.IndexWriter.doAfterSegmentFlushed(IndexWriter.java:4662)
>> at 
>> org.apache.lucene.index.DocumentsWriter$MergePendingEvent.process(DocumentsWriter.java:699)
>> at 
>> org.apache.lucene.index.IndexWriter.processEvents(IndexWriter.java:4688)
>> at 
>> org.apache.lucene.index.IndexWriter.processEvents(IndexWriter.java:4680)
>> at 
>> org.apache.lucene.index.IndexWriter.rollbackInternal(IndexWriter.java:2184)
>> at 
>> org.apache.lucene.index.IndexWriter.rollback(IndexWriter.java:2085)
>> at 
>> org.apache.lucene.index.TestIndexWriterWithThreads$1.run(TestIndexWriterWithThreads.java:576)
>>
>>
>>
>>
>> Build Log:
>> [...truncated 690 lines...]
>>[junit4] Suite: org.apache.lucene.index.TestIndexWriterWithThreads
>>[junit4]   2> mar 31, 2014 8:14:01 PM 
>> com.carrotsearch.randomizedtesting.RandomizedRunner$QueueUncaughtExceptionsHandler
>>  uncaughtException
>>[junit4]   2> WARNING: Uncaught exception in thread: 
>> Thread[Thread-98,5,TGRP-TestIndexWriterWithThreads]
>>[junit4]   2> java.lang.RuntimeException: java.lang.AssertionError
>>[junit4]   2>at 
>> __randomizedtesting.SeedInfo.seed([A2CAC9704F740906]:0)
>>[junit4]   2>at 
>> org.apache.lucene.index.TestIndexWriterWithThreads$1.run(TestIndexWriterWithThreads.java:620)
>>[junit4]   2> Caused by: java.lang.AssertionError
>>[junit4]   2>at 
>> org.apache.lucene.index.DocumentsWriterFlushQueue.forcePurge(DocumentsWriterFlushQueue.java:135)
>>[junit4]   2>at 
>> org.apache.lucene.index.DocumentsWriter.purgeBuffer(DocumentsWriter.java:196)
>>[junit4]   2>at 
>> org.apache.lucene.index.IndexWriter.purge(IndexWriter.java:4649)
>>[junit4]   2>at 
>> org.apache.lucene.index.IndexWriter.doAfterSegmentFlushed(IndexWriter.java:4662)
>>[junit4]   2>at 
>> org.apache.lucene.index.DocumentsWriter$MergePendingEvent.process(DocumentsWriter.java:699)
>>[junit4]   2>at 
>> org.apache.lucene.index.IndexWriter.processEvents(IndexWriter.java:4688)
>>[junit4]   2>at 
>> org.apache.lucene.index.IndexWriter.processEvents(IndexWriter.java:4680)
>>[junit4]   2>at 
>> org.apache.lucene.index.IndexWriter.rollbackInternal(IndexWriter.java:2184)
>>[junit4]   2>at 
>> org.apache.lucene.index.IndexWriter.rollback(IndexWriter.java:2085)
>>[junit4]   2>at 
>> org.apache.lucene.index.TestIndexWriterWithThreads$1.run(TestIndexWriterWithThreads.java:576)
>>[junit4]   2>
>>[junit4]   2> NOTE: reproduce with: ant test  
>> -Dtestcase=TestIndexWriterWithThreads 
>> -Dtests.method=testRollbackAndCommitWithThreads 
>> -Dtests.seed=A2CAC9704F740906 -Dtests.slow=true -Dtests.locale=da 
>> -Dtests.timezone=Australia/LHI -Dtests.file.encoding=UTF-8
>>[junit4] ERROR   0.94s J2 | 
>> TestIndexWriterWithThreads.testRollbackAndCommitWithThreads <<<
>>[junit4]> Throwable #1: java.lang.AssertionError
>>[junit4]>at 
>> org.apache.lucene.index.TestIndexWriterWithThreads.tes

[jira] [Commented] (SOLR-5935) SolrCloud hangs under certain conditions

2014-03-31 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5935?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13955546#comment-13955546
 ] 

Mark Miller commented on SOLR-5935:
---

bq. Mark - thread dumps are attached in the zip file,

Sorry - was following along via email.

Yeah, these are all blocked in leaseConnection. Seems like a connection pool 
configuration issue. I think we recently exposed config for some of that to the 
user, but I'll have to go dig that up.

> SolrCloud hangs under certain conditions
> 
>
> Key: SOLR-5935
> URL: https://issues.apache.org/jira/browse/SOLR-5935
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 4.6.1
>Reporter: Rafał Kuć
>Priority: Critical
> Attachments: thread dumps.zip
>
>
> As discussed in a mailing list - let's try to find the reason why, under 
> certain conditions, SolrCloud can hang.
> I have an issue with one of the SolrCloud deployments. Six machines, a 
> collection with 6 shards with a replication factor of 3. It all runs on 6 
> physical servers, each with 24 cores. We've indexed about 32 million 
> documents and everything was fine until that point.
> Now, during performance tests, we ran into an issue - SolrCloud hangs
> when querying and indexing are run at the same time. First we see a
> normal load on the machines, then the load starts to drop, and thread
> dumps show numerous threads like this:
> {noformat}
> Thread 12624: (state = BLOCKED)
>  - sun.misc.Unsafe.park(boolean, long) @bci=0 (Compiled frame; information 
> may be imprecise)
>  - java.util.concurrent.locks.LockSupport.park(java.lang.Object) @bci=14, 
> line=186 (Compiled frame)
>  - 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await() 
> @bci=42, line=2043 (Compiled frame)
>  - org.apache.http.pool.PoolEntryFuture.await(java.util.Date) @bci=50, 
> line=131 (Compiled frame)
>  - 
> org.apache.http.pool.AbstractConnPool.getPoolEntryBlocking(java.lang.Object, 
> java.lang.Object, long, java.util.concurrent.TimeUnit, 
> org.apache.http.pool.PoolEntryFuture) @bci=431, line=281 (Compiled frame)
>  - 
> org.apache.http.pool.AbstractConnPool.access$000(org.apache.http.pool.AbstractConnPool,
>  java.lang.Object, java.lang.Object, long, java.util.concurrent.TimeUnit, 
> org.apache.http.pool.PoolEntryFuture) @bci=8, line=62 (Compiled frame)
>  - org.apache.http.pool.AbstractConnPool$2.getPoolEntry(long, 
> java.util.concurrent.TimeUnit) @bci=15, line=176 (Compiled frame)
>  - org.apache.http.pool.AbstractConnPool$2.getPoolEntry(long, 
> java.util.concurrent.TimeUnit) @bci=3, line=169 (Compiled frame)
>  - org.apache.http.pool.PoolEntryFuture.get(long, 
> java.util.concurrent.TimeUnit) @bci=38, line=100 (Compiled frame)
>  - 
> org.apache.http.impl.conn.PoolingClientConnectionManager.leaseConnection(java.util.concurrent.Future,
>  long, java.util.concurrent.TimeUnit) @bci=4, line=212 (Compiled frame)
>  - 
> org.apache.http.impl.conn.PoolingClientConnectionManager$1.getConnection(long,
>  java.util.concurrent.TimeUnit) @bci=10, line=199 (Compiled frame)
>  - 
> org.apache.http.impl.client.DefaultRequestDirector.execute(org.apache.http.HttpHost,
>  org.apache.http.HttpRequest, org.apache.http.protocol.HttpContext) @bci=259, 
> line=456 (Compiled frame)
>  - 
> org.apache.http.impl.client.AbstractHttpClient.execute(org.apache.http.HttpHost,
>  org.apache.http.HttpRequest, org.apache.http.protocol.HttpContext) @bci=344, 
> line=906 (Compiled frame)
>  - 
> org.apache.http.impl.client.AbstractHttpClient.execute(org.apache.http.client.methods.HttpUriRequest,
>  org.apache.http.protocol.HttpContext) @bci=21, line=805 (Compiled frame)
>  - 
> org.apache.http.impl.client.AbstractHttpClient.execute(org.apache.http.client.methods.HttpUriRequest)
>  @bci=6, line=784 (Compiled frame)
>  - 
> org.apache.solr.client.solrj.impl.HttpSolrServer.request(org.apache.solr.client.solrj.SolrRequest,
>  org.apache.solr.client.solrj.ResponseParser) @bci=1175, line=395 
> (Interpreted frame)
>  - 
> org.apache.solr.client.solrj.impl.HttpSolrServer.request(org.apache.solr.client.solrj.SolrRequest)
>  @bci=17, line=199 (Compiled frame)
>  - 
> org.apache.solr.client.solrj.impl.LBHttpSolrServer.request(org.apache.solr.client.solrj.impl.LBHttpSolrServer$Req)
>  @bci=132, line=285 (Interpreted frame)
>  - 
> org.apache.solr.handler.component.HttpShardHandlerFactory.makeLoadBalancedRequest(org.apache.solr.client.solrj.request.QueryRequest,
>  java.util.List) @bci=13, line=214 (Compiled frame)
>  - org.apache.solr.handler.component.HttpShardHandler$1.call() @bci=246, 
> line=161 (Compiled frame)
>  - org.apache.solr.handler.component.HttpShardHandler$1.call() @bci=1, 
> line=118 (Interpreted frame)
>  - java.util.concurrent.FutureTask$Sync.innerRun() @bci=29, line=334 
> (Interpreted frame)
>  - java.util.concurren

[jira] [Commented] (SOLR-5935) SolrCloud hangs under certain conditions

2014-03-31 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-5935?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13955540#comment-13955540
 ] 

Rafał Kuć commented on SOLR-5935:
-

Mark - thread dumps are attached in the zip file, made with jstack. In the 
archive there are: stack_1 and stack_2, taken when Solr was still able to 
respond; stack_3, when Solr was barely alive (more than 80-90% errors reported 
by JMeter); and stack_4, when Solr was not responding at all.
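
For reference, dumps like these come straight from the stock JDK tool (pid 
being Solr's process id):

{noformat}
jstack <pid> > stack_1.txt
{noformat}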

> SolrCloud hangs under certain conditions
> 
>
> Key: SOLR-5935
> URL: https://issues.apache.org/jira/browse/SOLR-5935
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 4.6.1
>Reporter: Rafał Kuć
>Priority: Critical
> Attachments: thread dumps.zip
>
>
> As discussed in a mailing list - let's try to find the reason why, under 
> certain conditions, SolrCloud can hang.
> I have an issue with one of the SolrCloud deployments. Six machines, a 
> collection with 6 shards with a replication factor of 3. It all runs on 6 
> physical servers, each with 24 cores. We've indexed about 32 million 
> documents and everything was fine until that point.
> Now, during performance tests, we ran into an issue - SolrCloud hangs
> when querying and indexing are run at the same time. First we see a
> normal load on the machines, then the load starts to drop, and thread
> dumps show numerous threads like this:
> {noformat}
> Thread 12624: (state = BLOCKED)
>  - sun.misc.Unsafe.park(boolean, long) @bci=0 (Compiled frame; information 
> may be imprecise)
>  - java.util.concurrent.locks.LockSupport.park(java.lang.Object) @bci=14, 
> line=186 (Compiled frame)
>  - 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await() 
> @bci=42, line=2043 (Compiled frame)
>  - org.apache.http.pool.PoolEntryFuture.await(java.util.Date) @bci=50, 
> line=131 (Compiled frame)
>  - 
> org.apache.http.pool.AbstractConnPool.getPoolEntryBlocking(java.lang.Object, 
> java.lang.Object, long, java.util.concurrent.TimeUnit, 
> org.apache.http.pool.PoolEntryFuture) @bci=431, line=281 (Compiled frame)
>  - 
> org.apache.http.pool.AbstractConnPool.access$000(org.apache.http.pool.AbstractConnPool,
>  java.lang.Object, java.lang.Object, long, java.util.concurrent.TimeUnit, 
> org.apache.http.pool.PoolEntryFuture) @bci=8, line=62 (Compiled frame)
>  - org.apache.http.pool.AbstractConnPool$2.getPoolEntry(long, 
> java.util.concurrent.TimeUnit) @bci=15, line=176 (Compiled frame)
>  - org.apache.http.pool.AbstractConnPool$2.getPoolEntry(long, 
> java.util.concurrent.TimeUnit) @bci=3, line=169 (Compiled frame)
>  - org.apache.http.pool.PoolEntryFuture.get(long, 
> java.util.concurrent.TimeUnit) @bci=38, line=100 (Compiled frame)
>  - 
> org.apache.http.impl.conn.PoolingClientConnectionManager.leaseConnection(java.util.concurrent.Future,
>  long, java.util.concurrent.TimeUnit) @bci=4, line=212 (Compiled frame)
>  - 
> org.apache.http.impl.conn.PoolingClientConnectionManager$1.getConnection(long,
>  java.util.concurrent.TimeUnit) @bci=10, line=199 (Compiled frame)
>  - 
> org.apache.http.impl.client.DefaultRequestDirector.execute(org.apache.http.HttpHost,
>  org.apache.http.HttpRequest, org.apache.http.protocol.HttpContext) @bci=259, 
> line=456 (Compiled frame)
>  - 
> org.apache.http.impl.client.AbstractHttpClient.execute(org.apache.http.HttpHost,
>  org.apache.http.HttpRequest, org.apache.http.protocol.HttpContext) @bci=344, 
> line=906 (Compiled frame)
>  - 
> org.apache.http.impl.client.AbstractHttpClient.execute(org.apache.http.client.methods.HttpUriRequest,
>  org.apache.http.protocol.HttpContext) @bci=21, line=805 (Compiled frame)
>  - 
> org.apache.http.impl.client.AbstractHttpClient.execute(org.apache.http.client.methods.HttpUriRequest)
>  @bci=6, line=784 (Compiled frame)
>  - 
> org.apache.solr.client.solrj.impl.HttpSolrServer.request(org.apache.solr.client.solrj.SolrRequest,
>  org.apache.solr.client.solrj.ResponseParser) @bci=1175, line=395 
> (Interpreted frame)
>  - 
> org.apache.solr.client.solrj.impl.HttpSolrServer.request(org.apache.solr.client.solrj.SolrRequest)
>  @bci=17, line=199 (Compiled frame)
>  - 
> org.apache.solr.client.solrj.impl.LBHttpSolrServer.request(org.apache.solr.client.solrj.impl.LBHttpSolrServer$Req)
>  @bci=132, line=285 (Interpreted frame)
>  - 
> org.apache.solr.handler.component.HttpShardHandlerFactory.makeLoadBalancedRequest(org.apache.solr.client.solrj.request.QueryRequest,
>  java.util.List) @bci=13, line=214 (Compiled frame)
>  - org.apache.solr.handler.component.HttpShardHandler$1.call() @bci=246, 
> line=161 (Compiled frame)
>  - org.apache.solr.handler.component.HttpShardHandler$1.call() @bci=1, 
> line=118 (Interpreted frame)
>  - java.util.concurrent.FutureTask$Sync.innerRun() @bci=29, line=334 
> (Interpreted frame)
>  - java.util.concurrent.Futu

[jira] [Commented] (SOLR-5935) SolrCloud hangs under certain conditions

2014-03-31 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5935?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13955541#comment-13955541
 ] 

Yonik Seeley commented on SOLR-5935:


bq. I wonder if you are hitting a connection pool limit or something.

That was my thought - sounds like distributed deadlock (the same reason we 
don't have a practical limit on the number of threads configured in jetty).
We should not have a connection limit for any request that could possibly cause 
another synchronous request to come back to us.
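
In HttpClient terms, the threads above are all parked in leaseConnection 
waiting for a pooled connection, so a mitigation consistent with this 
diagnosis is to make the per-route pool effectively unbounded for 
intra-cluster traffic. A minimal sketch against the raw HttpClient 4.x API 
(numbers are illustrative; whether and how Solr 4.6.1 exposes these settings 
in its configuration is exactly what Mark mentions digging up):

{noformat}
import org.apache.http.client.HttpClient;
import org.apache.http.impl.client.DefaultHttpClient;
import org.apache.http.impl.conn.PoolingClientConnectionManager;

public class WidePoolClient {
  public static HttpClient create() {
    PoolingClientConnectionManager cm = new PoolingClientConnectionManager();
    cm.setMaxTotal(10000);            // total connections across all routes
    cm.setDefaultMaxPerRoute(10000);  // per-host cap - the limit threads block on
    return new DefaultHttpClient(cm);
  }
}
{noformat}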

> SolrCloud hangs under certain conditions
> 
>
> Key: SOLR-5935
> URL: https://issues.apache.org/jira/browse/SOLR-5935
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 4.6.1
>Reporter: Rafał Kuć
>Priority: Critical
> Attachments: thread dumps.zip
>
>
> As discussed in a mailing list - let's try to find the reason why, under 
> certain conditions, SolrCloud can hang.
> I have an issue with one of the SolrCloud deployments. Six machines, a 
> collection with 6 shards with a replication factor of 3. It all runs on 6 
> physical servers, each with 24 cores. We've indexed about 32 million 
> documents and everything was fine until that point.
> Now, during performance tests, we ran into an issue - SolrCloud hangs
> when querying and indexing are run at the same time. First we see a
> normal load on the machines, then the load starts to drop, and thread
> dumps show numerous threads like this:
> {noformat}
> Thread 12624: (state = BLOCKED)
>  - sun.misc.Unsafe.park(boolean, long) @bci=0 (Compiled frame; information 
> may be imprecise)
>  - java.util.concurrent.locks.LockSupport.park(java.lang.Object) @bci=14, 
> line=186 (Compiled frame)
>  - 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await() 
> @bci=42, line=2043 (Compiled frame)
>  - org.apache.http.pool.PoolEntryFuture.await(java.util.Date) @bci=50, 
> line=131 (Compiled frame)
>  - 
> org.apache.http.pool.AbstractConnPool.getPoolEntryBlocking(java.lang.Object, 
> java.lang.Object, long, java.util.concurrent.TimeUnit, 
> org.apache.http.pool.PoolEntryFuture) @bci=431, line=281 (Compiled frame)
>  - 
> org.apache.http.pool.AbstractConnPool.access$000(org.apache.http.pool.AbstractConnPool,
>  java.lang.Object, java.lang.Object, long, java.util.concurrent.TimeUnit, 
> org.apache.http.pool.PoolEntryFuture) @bci=8, line=62 (Compiled frame)
>  - org.apache.http.pool.AbstractConnPool$2.getPoolEntry(long, 
> java.util.concurrent.TimeUnit) @bci=15, line=176 (Compiled frame)
>  - org.apache.http.pool.AbstractConnPool$2.getPoolEntry(long, 
> java.util.concurrent.TimeUnit) @bci=3, line=169 (Compiled frame)
>  - org.apache.http.pool.PoolEntryFuture.get(long, 
> java.util.concurrent.TimeUnit) @bci=38, line=100 (Compiled frame)
>  - 
> org.apache.http.impl.conn.PoolingClientConnectionManager.leaseConnection(java.util.concurrent.Future,
>  long, java.util.concurrent.TimeUnit) @bci=4, line=212 (Compiled frame)
>  - 
> org.apache.http.impl.conn.PoolingClientConnectionManager$1.getConnection(long,
>  java.util.concurrent.TimeUnit) @bci=10, line=199 (Compiled frame)
>  - 
> org.apache.http.impl.client.DefaultRequestDirector.execute(org.apache.http.HttpHost,
>  org.apache.http.HttpRequest, org.apache.http.protocol.HttpContext) @bci=259, 
> line=456 (Compiled frame)
>  - 
> org.apache.http.impl.client.AbstractHttpClient.execute(org.apache.http.HttpHost,
>  org.apache.http.HttpRequest, org.apache.http.protocol.HttpContext) @bci=344, 
> line=906 (Compiled frame)
>  - 
> org.apache.http.impl.client.AbstractHttpClient.execute(org.apache.http.client.methods.HttpUriRequest,
>  org.apache.http.protocol.HttpContext) @bci=21, line=805 (Compiled frame)
>  - 
> org.apache.http.impl.client.AbstractHttpClient.execute(org.apache.http.client.methods.HttpUriRequest)
>  @bci=6, line=784 (Compiled frame)
>  - 
> org.apache.solr.client.solrj.impl.HttpSolrServer.request(org.apache.solr.client.solrj.SolrRequest,
>  org.apache.solr.client.solrj.ResponseParser) @bci=1175, line=395 
> (Interpreted frame)
>  - 
> org.apache.solr.client.solrj.impl.HttpSolrServer.request(org.apache.solr.client.solrj.SolrRequest)
>  @bci=17, line=199 (Compiled frame)
>  - 
> org.apache.solr.client.solrj.impl.LBHttpSolrServer.request(org.apache.solr.client.solrj.impl.LBHttpSolrServer$Req)
>  @bci=132, line=285 (Interpreted frame)
>  - 
> org.apache.solr.handler.component.HttpShardHandlerFactory.makeLoadBalancedRequest(org.apache.solr.client.solrj.request.QueryRequest,
>  java.util.List) @bci=13, line=214 (Compiled frame)
>  - org.apache.solr.handler.component.HttpShardHandler$1.call() @bci=246, 
> line=161 (Compiled frame)
>  - org.apache.solr.handler.component.HttpShardHandler$1.call() @bci=1, 
> line=118 (Interpreted frame)
>  - java.util.concurrent.FutureTask$Sync.innerRun

[jira] [Commented] (SOLR-5935) SolrCloud hangs under certain conditions

2014-03-31 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5935?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13955533#comment-13955533
 ] 

Mark Miller commented on SOLR-5935:
---

I wonder if you are hitting a connection pool limit or something. Have you been 
able to grab any stack traces during the hang?

> SolrCloud hangs under certain conditions
> 
>
> Key: SOLR-5935
> URL: https://issues.apache.org/jira/browse/SOLR-5935
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 4.6.1
>Reporter: Rafał Kuć
>Priority: Critical
> Attachments: thread dumps.zip
>
>
> As discussed in a mailing list - let's try to find the reason why, under 
> certain conditions, SolrCloud can hang.
> I have an issue with one of the SolrCloud deployments. Six machines, a 
> collection with 6 shards with a replication factor of 3. It all runs on 6 
> physical servers, each with 24 cores. We've indexed about 32 million 
> documents and everything was fine until that point.
> Now, during performance tests, we ran into an issue - SolrCloud hangs
> when querying and indexing are run at the same time. First we see a
> normal load on the machines, then the load starts to drop, and thread
> dumps show numerous threads like this:
> {noformat}
> Thread 12624: (state = BLOCKED)
>  - sun.misc.Unsafe.park(boolean, long) @bci=0 (Compiled frame; information 
> may be imprecise)
>  - java.util.concurrent.locks.LockSupport.park(java.lang.Object) @bci=14, 
> line=186 (Compiled frame)
>  - 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await() 
> @bci=42, line=2043 (Compiled frame)
>  - org.apache.http.pool.PoolEntryFuture.await(java.util.Date) @bci=50, 
> line=131 (Compiled frame)
>  - 
> org.apache.http.pool.AbstractConnPool.getPoolEntryBlocking(java.lang.Object, 
> java.lang.Object, long, java.util.concurrent.TimeUnit, 
> org.apache.http.pool.PoolEntryFuture) @bci=431, line=281 (Compiled frame)
>  - 
> org.apache.http.pool.AbstractConnPool.access$000(org.apache.http.pool.AbstractConnPool,
>  java.lang.Object, java.lang.Object, long, java.util.concurrent.TimeUnit, 
> org.apache.http.pool.PoolEntryFuture) @bci=8, line=62 (Compiled frame)
>  - org.apache.http.pool.AbstractConnPool$2.getPoolEntry(long, 
> java.util.concurrent.TimeUnit) @bci=15, line=176 (Compiled frame)
>  - org.apache.http.pool.AbstractConnPool$2.getPoolEntry(long, 
> java.util.concurrent.TimeUnit) @bci=3, line=169 (Compiled frame)
>  - org.apache.http.pool.PoolEntryFuture.get(long, 
> java.util.concurrent.TimeUnit) @bci=38, line=100 (Compiled frame)
>  - 
> org.apache.http.impl.conn.PoolingClientConnectionManager.leaseConnection(java.util.concurrent.Future,
>  long, java.util.concurrent.TimeUnit) @bci=4, line=212 (Compiled frame)
>  - 
> org.apache.http.impl.conn.PoolingClientConnectionManager$1.getConnection(long,
>  java.util.concurrent.TimeUnit) @bci=10, line=199 (Compiled frame)
>  - 
> org.apache.http.impl.client.DefaultRequestDirector.execute(org.apache.http.HttpHost,
>  org.apache.http.HttpRequest, org.apache.http.protocol.HttpContext) @bci=259, 
> line=456 (Compiled frame)
>  - 
> org.apache.http.impl.client.AbstractHttpClient.execute(org.apache.http.HttpHost,
>  org.apache.http.HttpRequest, org.apache.http.protocol.HttpContext) @bci=344, 
> line=906 (Compiled frame)
>  - 
> org.apache.http.impl.client.AbstractHttpClient.execute(org.apache.http.client.methods.HttpUriRequest,
>  org.apache.http.protocol.HttpContext) @bci=21, line=805 (Compiled frame)
>  - 
> org.apache.http.impl.client.AbstractHttpClient.execute(org.apache.http.client.methods.HttpUriRequest)
>  @bci=6, line=784 (Compiled frame)
>  - 
> org.apache.solr.client.solrj.impl.HttpSolrServer.request(org.apache.solr.client.solrj.SolrRequest,
>  org.apache.solr.client.solrj.ResponseParser) @bci=1175, line=395 
> (Interpreted frame)
>  - 
> org.apache.solr.client.solrj.impl.HttpSolrServer.request(org.apache.solr.client.solrj.SolrRequest)
>  @bci=17, line=199 (Compiled frame)
>  - 
> org.apache.solr.client.solrj.impl.LBHttpSolrServer.request(org.apache.solr.client.solrj.impl.LBHttpSolrServer$Req)
>  @bci=132, line=285 (Interpreted frame)
>  - 
> org.apache.solr.handler.component.HttpShardHandlerFactory.makeLoadBalancedRequest(org.apache.solr.client.solrj.request.QueryRequest,
>  java.util.List) @bci=13, line=214 (Compiled frame)
>  - org.apache.solr.handler.component.HttpShardHandler$1.call() @bci=246, 
> line=161 (Compiled frame)
>  - org.apache.solr.handler.component.HttpShardHandler$1.call() @bci=1, 
> line=118 (Interpreted frame)
>  - java.util.concurrent.FutureTask$Sync.innerRun() @bci=29, line=334 
> (Interpreted frame)
>  - java.util.concurrent.FutureTask.run() @bci=4, line=166 (Compiled frame)
>  - java.util.concurrent.Executors$RunnableAdapter.call() @bci=4, line=471 
> (Interpreted frame)
>  - j

[jira] [Created] (SOLR-5940) Make post.jar report back detailed error in case of 400 responses

2014-03-31 Thread Sameer Maggon (JIRA)
Sameer Maggon created SOLR-5940:
---

 Summary: Make post.jar report back detailed error in case of 400 
responses
 Key: SOLR-5940
 URL: https://issues.apache.org/jira/browse/SOLR-5940
 Project: Solr
  Issue Type: Improvement
  Components: scripts and tools
Affects Versions: 4.7
Reporter: Sameer Maggon


Currently post.jar does not print the detailed error message that is 
encountered during indexing. In certain use cases, it's helpful to see the 
error message so that clients can take appropriate actions.

In 4.7, here's what gets shown if there is an error during indexing:

SimplePostTool: WARNING: Solr returned an error #400 Bad Request
SimplePostTool: WARNING: IOException while reading response: 
java.io.IOException: Server returned HTTP response code: 400 for URL: 
http://localhost:8983/solr/update

It would be helpful to print out the "msg" that is returned from Solr.
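
A self-contained sketch of the suggested improvement (not SimplePostTool's 
actual code): on a 4xx/5xx response, read the error body so Solr's "msg" 
reaches the user instead of a bare IOException.

{noformat}
import java.io.ByteArrayOutputStream;
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class PostErrorDemo {
  public static void main(String[] args) throws Exception {
    HttpURLConnection conn = (HttpURLConnection)
        new URL("http://localhost:8983/solr/update").openConnection();
    int code = conn.getResponseCode();
    // getErrorStream() holds the response body for error codes, where
    // getInputStream() would just throw.
    InputStream body =
        code >= 400 ? conn.getErrorStream() : conn.getInputStream();
    ByteArrayOutputStream out = new ByteArrayOutputStream();
    if (body != null) {
      byte[] buf = new byte[4096];
      for (int n; (n = body.read(buf)) != -1; ) {
        out.write(buf, 0, n);
      }
    }
    System.out.println(code + ": " + out.toString("UTF-8"));
  }
}
{noformat}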




--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [VOTE] Lucene / Solr 4.7.1 RC2

2014-03-31 Thread Adrien Grand
+1
SUCCESS! [1:30:20.918150]

On Mon, Mar 31, 2014 at 5:40 PM, david.w.smi...@gmail.com
 wrote:
> +1
>
> SUCCESS! [1:51:37.952160]
>
>
>
> On Sat, Mar 29, 2014 at 4:46 AM, Steve Rowe  wrote:
>>
>> Please vote for the second Release Candidate for Lucene/Solr 4.7.1.
>>
>> Download it here:
>>
>> 
>>
>> Smoke tester cmdline (from the lucene_solr_4_7 branch):
>>
>> python3.2 -u dev-tools/scripts/smokeTestRelease.py \
>>
>> https://people.apache.org/~sarowe/staging_area/lucene-solr-4.7.1-RC2-rev1582953/
>> \
>> 1582953 4.7.1 /tmp/4.7.1-smoke
>>
>> The smoke tester passed for me: SUCCESS! [0:50:29.936732]
>>
>> My vote: +1
>>
>> Steve
>> -
>> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>> For additional commands, e-mail: dev-h...@lucene.apache.org
>>
>



-- 
Adrien

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5935) SolrCloud hangs under certain conditions

2014-03-31 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-5935?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13955494#comment-13955494
 ] 

Rafał Kuć commented on SOLR-5935:
-

Low indexing rate and high indexing rate alike - whenever queries are present, the 
cluster eventually goes into a locked state. When locked it doesn't respond to 
any requests - queries, indexing, or even loading the admin pages.
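(For context, the quoted thread dump below shows request threads parked in 
HttpClient's connection pool. A toy model of that lockup pattern - not Solr 
code, all names invented: each thread holds one pooled connection while waiting 
for another, so no permit is ever released.)

{code:java}
import java.util.concurrent.Semaphore;

public class PoolExhaustionDemo {
  static final Semaphore pool = new Semaphore(2); // tiny "connection pool"

  public static void main(String[] args) {
    for (int i = 0; i < 4; i++) {
      new Thread(() -> {
        try {
          pool.acquire();  // borrow a connection for the incoming request
          pool.acquire();  // the forwarded sub-request needs a second one
          pool.release();
          pool.release();
        } catch (InterruptedException ignored) {
          Thread.currentThread().interrupt();
        }
      }).start();
    }
    // If two threads each grab one permit, all threads end up parked in
    // acquire() forever - the same state as the dump below.
  }
}
{code}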

> SolrCloud hangs under certain conditions
> 
>
> Key: SOLR-5935
> URL: https://issues.apache.org/jira/browse/SOLR-5935
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 4.6.1
>Reporter: Rafał Kuć
>Priority: Critical
> Attachments: thread dumps.zip
>
>
> As discussed on the mailing list - let's try to find the reason why, under 
> certain conditions, SolrCloud can hang.
> I have an issue with one of the SolrCloud deployments. Six machines, a 
> collection with 6 shards with a replication factor of 3. It all runs on 6 
> physical servers, each with 24 cores. We've indexed about 32 million 
> documents and everything was fine until that point.
> Now, during performance tests, we run into an issue - SolrCloud hangs
> when querying and indexing are run at the same time. First we see a
> normal load on the machines, then the load starts to drop and thread
> dumps show numerous threads like this:
> {noformat}
> Thread 12624: (state = BLOCKED)
>  - sun.misc.Unsafe.park(boolean, long) @bci=0 (Compiled frame; information 
> may be imprecise)
>  - java.util.concurrent.locks.LockSupport.park(java.lang.Object) @bci=14, 
> line=186 (Compiled frame)
>  - 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await() 
> @bci=42, line=2043 (Compiled frame)
>  - org.apache.http.pool.PoolEntryFuture.await(java.util.Date) @bci=50, 
> line=131 (Compiled frame)
>  - 
> org.apache.http.pool.AbstractConnPool.getPoolEntryBlocking(java.lang.Object, 
> java.lang.Object, long, java.util.concurrent.TimeUnit, 
> org.apache.http.pool.PoolEntryFuture) @bci=431, line=281 (Compiled frame)
>  - 
> org.apache.http.pool.AbstractConnPool.access$000(org.apache.http.pool.AbstractConnPool,
>  java.lang.Object, java.lang.Object, long, java.util.concurrent.TimeUnit, 
> org.apache.http.pool.PoolEntryFuture) @bci=8, line=62 (Compiled frame)
>  - org.apache.http.pool.AbstractConnPool$2.getPoolEntry(long, 
> java.util.concurrent.TimeUnit) @bci=15, line=176 (Compiled frame)
>  - org.apache.http.pool.AbstractConnPool$2.getPoolEntry(long, 
> java.util.concurrent.TimeUnit) @bci=3, line=169 (Compiled frame)
>  - org.apache.http.pool.PoolEntryFuture.get(long, 
> java.util.concurrent.TimeUnit) @bci=38, line=100 (Compiled frame)
>  - 
> org.apache.http.impl.conn.PoolingClientConnectionManager.leaseConnection(java.util.concurrent.Future,
>  long, java.util.concurrent.TimeUnit) @bci=4, line=212 (Compiled frame)
>  - 
> org.apache.http.impl.conn.PoolingClientConnectionManager$1.getConnection(long,
>  java.util.concurrent.TimeUnit) @bci=10, line=199 (Compiled frame)
>  - 
> org.apache.http.impl.client.DefaultRequestDirector.execute(org.apache.http.HttpHost,
>  org.apache.http.HttpRequest, org.apache.http.protocol.HttpContext) @bci=259, 
> line=456 (Compiled frame)
>  - 
> org.apache.http.impl.client.AbstractHttpClient.execute(org.apache.http.HttpHost,
>  org.apache.http.HttpRequest, org.apache.http.protocol.HttpContext) @bci=344, 
> line=906 (Compiled frame)
>  - 
> org.apache.http.impl.client.AbstractHttpClient.execute(org.apache.http.client.methods.HttpUriRequest,
>  org.apache.http.protocol.HttpContext) @bci=21, line=805 (Compiled frame)
>  - 
> org.apache.http.impl.client.AbstractHttpClient.execute(org.apache.http.client.methods.HttpUriRequest)
>  @bci=6, line=784 (Compiled frame)
>  - 
> org.apache.solr.client.solrj.impl.HttpSolrServer.request(org.apache.solr.client.solrj.SolrRequest,
>  org.apache.solr.client.solrj.ResponseParser) @bci=1175, line=395 
> (Interpreted frame)
>  - 
> org.apache.solr.client.solrj.impl.HttpSolrServer.request(org.apache.solr.client.solrj.SolrRequest)
>  @bci=17, line=199 (Compiled frame)
>  - 
> org.apache.solr.client.solrj.impl.LBHttpSolrServer.request(org.apache.solr.client.solrj.impl.LBHttpSolrServer$Req)
>  @bci=132, line=285 (Interpreted frame)
>  - 
> org.apache.solr.handler.component.HttpShardHandlerFactory.makeLoadBalancedRequest(org.apache.solr.client.solrj.request.QueryRequest,
>  java.util.List) @bci=13, line=214 (Compiled frame)
>  - org.apache.solr.handler.component.HttpShardHandler$1.call() @bci=246, 
> line=161 (Compiled frame)
>  - org.apache.solr.handler.component.HttpShardHandler$1.call() @bci=1, 
> line=118 (Interpreted frame)
>  - java.util.concurrent.FutureTask$Sync.innerRun() @bci=29, line=334 
> (Interpreted frame)
>  - java.util.concurrent.FutureTask.run() @bci=4, line=166 (Compiled frame)
>  - ja

[jira] [Commented] (SOLR-5931) solrcore.properties is not reloaded when core is reloaded

2014-03-31 Thread Gary Yue (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5931?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13955475#comment-13955475
 ] 

Gary Yue commented on SOLR-5931:


Is there a good workaround in the meantime?
We need a quick way to switch the master URL on individual slaves in case of 
site issues. Updating solrconfig.xml directly for a particular slave doesn't work 
well because the change gets overwritten on each replication (unless we 
change it in the master's solrconfig as well).

Also, it looks like I can't even call "CREATE" with the new property, because 
starting in Solr 4.3+ it will throw an error and ask you to call "RELOAD" 
instead (whereas in Solr 3.x this is essentially doing a RELOAD with new 
properties).

thx!

> solrcore.properties is not reloaded when core is reloaded
> -
>
> Key: SOLR-5931
> URL: https://issues.apache.org/jira/browse/SOLR-5931
> Project: Solr
>  Issue Type: Bug
>  Components: multicore
>Affects Versions: 4.7
>Reporter: Gunnlaugur Thor Briem
>Assignee: Shalin Shekhar Mangar
>Priority: Minor
>
> When I change solrcore.properties for a core, and then reload the core, the 
> previous values of the properties in that file are still in effect. If I 
> *unload* the core and then add it back, in the “Core Admin” section of the 
> admin UI, then the changes in solrcore.properties do take effect.
> My specific test case is a DataImportHandler where {{db-data-config.xml}} 
> uses a property to decide which DB host to talk to:
> {code:xml}
>  url="jdbc:postgresql://${dbhost}/${solr.core.name}" .../>
> {code}
> When I change that {{dbhost}} property in {{solrcore.properties}} and reload 
> the core, the next dataimport operation still connects to the previous DB 
> host. Reloading the dataimport config does not help. I have to unload the 
> core (or fully restart the whole Solr) for the properties change to take 
> effect.
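(For readers following along, the scenario boils down to a one-line 
{{solrcore.properties}} - hypothetical value - whose change is currently only 
picked up on unload/add, not on RELOAD:)

{code}
# solrcore.properties (hypothetical value)
dbhost=db1.example.com:5432
{code}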



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5931) solrcore.properties is not reloaded when core is reloaded

2014-03-31 Thread Shalin Shekhar Mangar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5931?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13955386#comment-13955386
 ] 

Shalin Shekhar Mangar commented on SOLR-5931:
-

Yeah, this makes sense. The properties should be reloaded upon core reload. The 
thing is that I can't find how these properties make their way into DIH. I'll 
have to set up an example and step through with a debugger. I don't think I'll 
find the time this week.

> solrcore.properties is not reloaded when core is reloaded
> -
>
> Key: SOLR-5931
> URL: https://issues.apache.org/jira/browse/SOLR-5931
> Project: Solr
>  Issue Type: Bug
>  Components: multicore
>Affects Versions: 4.7
>Reporter: Gunnlaugur Thor Briem
>Assignee: Shalin Shekhar Mangar
>Priority: Minor
>
> When I change solrcore.properties for a core, and then reload the core, the 
> previous values of the properties in that file are still in effect. If I 
> *unload* the core and then add it back, in the “Core Admin” section of the 
> admin UI, then the changes in solrcore.properties do take effect.
> My specific test case is a DataImportHandler where {{db-data-config.xml}} 
> uses a property to decide which DB host to talk to:
> {code:xml}
>  url="jdbc:postgresql://${dbhost}/${solr.core.name}" .../>
> {code}
> When I change that {{dbhost}} property in {{solrcore.properties}} and reload 
> the core, the next dataimport operation still connects to the previous DB 
> host. Reloading the dataimport config does not help. I have to unload the 
> core (or fully restart the whole Solr) for the properties change to take 
> effect.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5936) Deprecate non-Trie-based numeric (and date) field types in 4.x and remove them from 5.0

2014-03-31 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5936?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13955381#comment-13955381
 ] 

Hoss Man commented on SOLR-5936:


bq. +1 to rename for 5.0

What exactly do you suggest renaming these Solr FieldTypes to?

If you are suggesting "TrieFooField -> FooField" then I am a _*HUGE*_ -1 to 
that idea.

It's one thing to say that things like the (text-based) IntField are deprecated 
and will not work in 5.0, so people have to reindex.  But if we _also_ rename 
TrieIntField to IntField, then people who are still using the (text-based) 
IntField in their schema.xml and attempt upgrading will get really weird, 
hard-to-understand errors.

If folks think Trie is a confusing word in the name and want to change that, 
then fine -- I'm certainly open to the idea -- but we really should not re-use 
the name of an existing (deprecated/removed) field type in a way that isn't 
backcompat.



In any event, a lot of what's being discussed here in the comments feels like it 
should really be tracked in discrete issues (these can all be dealt with 
independent of this issue, and of each other):

* better jdocs for the trie numeric fields
* renaming the trie numeric fields
* simplifying configuration of the trie numeric fields

...let's please keep this issue focused on the deprecation & removal of the 
non-trie fields, and folks who care about these other ideas can file other 
JIRAs to track them.

> Deprecate non-Trie-based numeric (and date) field types in 4.x and remove 
> them from 5.0
> ---
>
> Key: SOLR-5936
> URL: https://issues.apache.org/jira/browse/SOLR-5936
> Project: Solr
>  Issue Type: Task
>  Components: Schema and Analysis
>Reporter: Steve Rowe
>Assignee: Steve Rowe
>Priority: Minor
> Fix For: 4.8, 5.0
>
> Attachments: SOLR-5936.branch_4x.patch, SOLR-5936.branch_4x.patch, 
> SOLR-5936.branch_4x.patch
>
>
> We've been discouraging people from using non-Trie numeric&date field types 
> for years, it's time we made it official.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5488) Fix up test failures for Analytics Component

2014-03-31 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5488?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13955382#comment-13955382
 ] 

Erick Erickson commented on SOLR-5488:
--

OK, I'll probably commit this to trunk tonight and hold off on merging into 4.x 
for a bit to address the interface questions. I want to get some assurance that 
the test errors are gone in all environments.

I'd _really_ like to get them addressed and be able to merge in the near future, 
but I expect that'll be another JIRA.

> Fix up test failures for Analytics Component
> 
>
> Key: SOLR-5488
> URL: https://issues.apache.org/jira/browse/SOLR-5488
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 4.7, 5.0
>Reporter: Erick Erickson
>Assignee: Erick Erickson
> Attachments: SOLR-5488.patch, SOLR-5488.patch, SOLR-5488.patch, 
> SOLR-5488.patch, SOLR-5488.patch, SOLR-5488.patch, SOLR-5488.patch, 
> SOLR-5488.patch, SOLR-5488.patch, eoe.errors
>
>
> The analytics component has a few test failures, perhaps 
> environment-dependent. This is just to collect the test fixes in one place 
> for convenience when we merge back into 4.x



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-5859) Harden the Overseer restart mechanism

2014-03-31 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5859?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul updated SOLR-5859:
-

Attachment: (was: SOLR-5859.patch)

> Harden the Overseer restart mechanism
> -
>
> Key: SOLR-5859
> URL: https://issues.apache.org/jira/browse/SOLR-5859
> Project: Solr
>  Issue Type: Improvement
>Reporter: Noble Paul
>Assignee: Noble Paul
> Attachments: SOLR-5859.patch, SOLR-5859.patch, SOLR-5859.patch
>
>
> SOLR-5476 depends on Overseer restart. The current strategy is to remove the 
> zk node for leader election, wait for STATUS_UPDATE_DELAY + 100 ms, and 
> start the new overseer.
> Though overseer ops are short-running, it is not a 100% foolproof strategy 
> because if an operation takes longer than the wait period there can be a race 
> condition. 
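(A toy timeline of the race described above - illustrative only, the wait 
value is an assumption: a fixed sleep is not a handshake, so an op that 
overruns the window interleaves with the new overseer.)

{code:java}
public class OverseerRaceDemo {
  static final long WAIT_MS = 1500 + 100; // STATUS_UPDATE_DELAY + 100 ms (assumed value)

  public static void main(String[] args) throws InterruptedException {
    Thread oldOp = new Thread(() -> {
      try {
        Thread.sleep(2000); // an overseer op that overruns the wait window
      } catch (InterruptedException ignored) {
        Thread.currentThread().interrupt();
      }
      System.out.println("old overseer op finishes AFTER the new overseer started");
    });
    oldOp.start();
    Thread.sleep(WAIT_MS); // fixed wait, no confirmation the old op is done
    System.out.println("new overseer starts");
  }
}
{code}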



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-5859) Harden the Overseer restart mechanism

2014-03-31 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5859?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul updated SOLR-5859:
-

Attachment: SOLR-5859.patch

> Harden the Overseer restart mechanism
> -
>
> Key: SOLR-5859
> URL: https://issues.apache.org/jira/browse/SOLR-5859
> Project: Solr
>  Issue Type: Improvement
>Reporter: Noble Paul
>Assignee: Noble Paul
> Attachments: SOLR-5859.patch, SOLR-5859.patch, SOLR-5859.patch
>
>
> SOLR-5476 depends on Overseer restart. The current strategy is to remove the 
> zk node for leader election, wait for STATUS_UPDATE_DELAY + 100 ms, and 
> start the new overseer.
> Though overseer ops are short-running, it is not a 100% foolproof strategy 
> because if an operation takes longer than the wait period there can be a race 
> condition. 



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5908) Make REQUESTSTATUS call non-blocking and non-blocked

2014-03-31 Thread Shalin Shekhar Mangar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5908?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13955372#comment-13955372
 ] 

Shalin Shekhar Mangar commented on SOLR-5908:
-

+1

Looks good to me!

> Make REQUESTSTATUS call non-blocking and non-blocked
> 
>
> Key: SOLR-5908
> URL: https://issues.apache.org/jira/browse/SOLR-5908
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrCloud
>Reporter: Anshum Gupta
>Assignee: Anshum Gupta
> Attachments: SOLR-5908.patch
>
>
> Currently the REQUESTSTATUS Collection API call is blocked by any other call in 
> the OCP work queue.
> Make it independent and non-blocked/non-blocking.
> This would be handled as a part of making the OCP multi-threaded, but I'm 
> opening this issue to explore other possible options of handling this.
> If the final fix happens via SOLR-5681, this will be resolved when SOLR-5681 gets 
> resolved.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5939) Wrong request potentially on Error from StreamingSolrServer

2014-03-31 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5939?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13955360#comment-13955360
 ] 

Mark Miller commented on SOLR-5939:
---

Just wanted to make sure I wasn't having deja vu - it def deserves its own 
issue.

> Wrong request potentially on Error from StreamingSolrServer
> ---
>
> Key: SOLR-5939
> URL: https://issues.apache.org/jira/browse/SOLR-5939
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 5.0
>Reporter: Per Steffensen
>  Labels: error, retry
> Fix For: 4.8, 5.0
>
> Attachments: SOLR-5939_demo_problem.patch
>
>
> In _StreamingSolrServer.getSolrServer|ConcurrentUpdateSolrServer.handleError_ 
> the _SolrCmdDistributor.Req req_ parameter is used for the _req_ field of all 
> _error_'s created. This is also true for subsequent requests sent through the 
> returned ConcurrentUpdateSolrServer. This means, among other things, that the wrong 
> request (the first request sent through this _ConcurrentUpdateSolrServer_) may be 
> retried in case of errors executing one of the subsequent requests.
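(A minimal sketch of the bug pattern described above - invented names, not the 
actual SolrJ code: the handler captures the first request and attributes every 
later failure to it, so a retry can replay the wrong request.)

{code:java}
public class WrongReqDemo {
  static class Req { String cmd; Req(String cmd) { this.cmd = cmd; } }
  static class Err { Req req; Exception e; }

  static class Handler {
    private final Req firstReq; // captured once, when the server is created

    Handler(Req req) { this.firstReq = req; }

    Err handleError(Exception e) {
      Err err = new Err();
      err.req = firstReq; // BUG as described: later failures are blamed on the
      err.e = e;          // first request, which is what then gets retried
      return err;
    }
  }

  public static void main(String[] args) {
    Handler h = new Handler(new Req("add doc 1"));
    // A later request fails, but the error still points at "add doc 1":
    System.out.println(h.handleError(new Exception("boom")).req.cmd);
  }
}
{code}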



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-5939) Wrong request potentially on Error from StreamingSolrServer

2014-03-31 Thread Per Steffensen (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5939?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13955356#comment-13955356
 ] 

Per Steffensen edited comment on SOLR-5939 at 3/31/14 4:32 PM:
---

Well, I happened to be reading the code trying to understand how it works, 
during my work with SOLR-4470. I realized that this piece of code would not 
work - by code inspection alone. Instead of fixing it (I had to concentrate on 
SOLR-4470 stuff) I just added a few FIXME lines around where the problem is. 
[~janhoy] is handling the SOLR-4470 patch now, and prefers that I open an issue 
(this SOLR-5939) about the problem and just reference that from the code 
instead of the FIXME description. So yes, I mentioned it before, in comments to 
SOLR-4470.

I mentioned another issue with SolrCmdDistributor a long time ago (see 
SOLR-3428). I do not know exactly what happened to that one. We have the fix in 
our version of Solr, but I am not sure what you did about it.


was (Author: steff1193):
Well, I happened to be reading the code trying to understand how it works, 
during my work with SOLR-4470. Instead of fixing it (I had to concentrate on 
SOLR-4470 stuff) I just added a few FIXME lines around where the problem is. 
[~janhoy] is handling the SOLR-4470 patch now, and prefers that I open an issue 
(this SOLR-5939) about the problem and just reference that from the code 
instead of the FIXME description. So yes, I mentioned it before, in comments to 
SOLR-4470.

I mentioned another issue with SolrCmdDistributor a long time ago (see 
SOLR-3428). I do not know exactly what happened to that one. We have the fix in 
our version of Solr, but I am not sure what you did about it.

> Wrong request potentially on Error from StreamingSolrServer
> ---
>
> Key: SOLR-5939
> URL: https://issues.apache.org/jira/browse/SOLR-5939
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 5.0
>Reporter: Per Steffensen
>  Labels: error, retry
> Fix For: 4.8, 5.0
>
> Attachments: SOLR-5939_demo_problem.patch
>
>
> In _StreamingSolrServer.getSolrServer|ConcurrentUpdateSolrServer.handleError_ 
> the _SolrCmdDistributor.Req req_ parameter is used for the _req_ field of all 
> _error_'s created. This is also true for subsequent requests sent through the 
> returned ConcurrentUpdateSolrServer. This means, among other things, that the wrong 
> request (the first request sent through this _ConcurrentUpdateSolrServer_) may be 
> retried in case of errors executing one of the subsequent requests.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5939) Wrong request potentially on Error from StreamingSolrServer

2014-03-31 Thread Per Steffensen (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5939?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13955356#comment-13955356
 ] 

Per Steffensen commented on SOLR-5939:
--

Well, I happened to be reading the code trying to understand how it works, 
during my work with SOLR-4470. Instead of fixing it (I had to concentrate on 
SOLR-4470 stuff) I just added a few FIXME lines around where the problem is. 
[~janhoy] is handling the SOLR-4470 patch now, and prefers that I open an issue 
(this SOLR-5939) about the problem and just reference that from the code 
instead of the FIXME description. So yes, I mentioned it before, in comments to 
SOLR-4470.

I mentioned another issue with SolrCmdDistributor a long time ago (see 
SOLR-3428). I do not know exactly what happened to that one. We have the fix in 
our version of Solr, but I am not sure what you did about it.

> Wrong request potentially on Error from StreamingSolrServer
> ---
>
> Key: SOLR-5939
> URL: https://issues.apache.org/jira/browse/SOLR-5939
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 5.0
>Reporter: Per Steffensen
>  Labels: error, retry
> Fix For: 4.8, 5.0
>
> Attachments: SOLR-5939_demo_problem.patch
>
>
> In _StreamingSolrServer.getSolrServer|ConcurrentUpdateSolrServer.handleError_ 
> the _SolrCmdDistributor.Req req_ parameter is used for the _req_ field of all 
> _error_'s created. This is also true for subsequent requests sent through the 
> returned ConcurrentUpdateSolrServer. This means, among other things, that the wrong 
> request (the first request sent through this _ConcurrentUpdateSolrServer_) may be 
> retried in case of errors executing one of the subsequent requests.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5934) LBHttpSolrServer exception handling improvement and small test improvements

2014-03-31 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5934?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13955349#comment-13955349
 ] 

ASF subversion and git services commented on SOLR-5934:
---

Commit 1583369 from [~markrmil...@gmail.com] in branch 'dev/branches/branch_4x'
[ https://svn.apache.org/r1583369 ]

SOLR-5934: Commit again to 4x - different JIRA caused the failures - 
LBHttpSolrServer exception handling improvement and small test improvements.

> LBHttpSolrServer exception handling improvement and small test improvements
> ---
>
> Key: SOLR-5934
> URL: https://issues.apache.org/jira/browse/SOLR-5934
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrCloud
>Affects Versions: 4.8, 5.0
>Reporter: Gregory Chanan
>Assignee: Mark Miller
>Priority: Minor
> Fix For: 4.8, 5.0
>
> Attachments: SOLR-5934.patch
>
>
> The error handling in LBHttpSolrServer can be simplified -- right now almost 
> identical code is run whether the server is a zombie or not, which sometimes 
> doesn't make complete sense.  For example, the zombie code goes through some 
> effort to throw an exception or save the exception based on the type of 
> exception, but the end result is the same -- an exception is thrown.  It's 
> simpler if the same code is run each time.
> Also, made some minor changes to test cases:
> - made sure SolrServer.shutdown is called in finally, so it happens even if a 
> request throws an exception
> - got rid of some unnecessary checks
> - normalized some functions/variables so the functions are public scope and 
> the variables aren't



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [JENKINS] Lucene-trunk-Linux-Java7-64-test-only - Build # 81070 - Failure!

2014-03-31 Thread Michael McCandless
Doesn't repro for me but I think it's related to LUCENE-5544; it's
happening in the very last part of IW.rollbackInternal:

try {
  processEvents(false, true);
} finally {
  notifyAll();
}

Down in DocumentsWriterFlushQueue, the assert is angry that we are
sync'd on the IW instance.

Is it even necessary to process events after rollback has "finished"?
What could the events even do (the IW is closed)...
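A sketch of the invariant that assert enforces (invented names, not the
actual Lucene code): purging must not run while the caller holds the IW
monitor, which is exactly the state rollbackInternal is in inside its
synchronized block. Run with -ea:

public class HoldsLockDemo {
  final Object indexWriter = new Object();

  void forcePurge() {
    // the DocumentsWriterFlushQueue-style check:
    assert !Thread.holdsLock(indexWriter) : "must not hold IW lock while purging";
  }

  void rollbackInternal() {
    synchronized (indexWriter) {
      forcePurge(); // trips the assert, as in the stack trace above
    }
  }

  public static void main(String[] args) {
    new HoldsLockDemo().rollbackInternal();
  }
}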

Mike McCandless

http://blog.mikemccandless.com


On Mon, Mar 31, 2014 at 5:16 AM,   wrote:
> Build: builds.flonkings.com/job/Lucene-trunk-Linux-Java7-64-test-only/81070/
>
> 1 tests failed.
> REGRESSION:  
> org.apache.lucene.index.TestIndexWriterWithThreads.testRollbackAndCommitWithThreads
>
> Error Message:
> Captured an uncaught exception in thread: Thread[id=164, name=Thread-98, 
> state=RUNNABLE, group=TGRP-TestIndexWriterWithThreads]
>
> Stack Trace:
> com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an 
> uncaught exception in thread: Thread[id=164, name=Thread-98, state=RUNNABLE, 
> group=TGRP-TestIndexWriterWithThreads]
> Caused by: java.lang.RuntimeException: java.lang.AssertionError
> at __randomizedtesting.SeedInfo.seed([A2CAC9704F740906]:0)
> at 
> org.apache.lucene.index.TestIndexWriterWithThreads$1.run(TestIndexWriterWithThreads.java:620)
> Caused by: java.lang.AssertionError
> at 
> org.apache.lucene.index.DocumentsWriterFlushQueue.forcePurge(DocumentsWriterFlushQueue.java:135)
> at 
> org.apache.lucene.index.DocumentsWriter.purgeBuffer(DocumentsWriter.java:196)
> at org.apache.lucene.index.IndexWriter.purge(IndexWriter.java:4649)
> at 
> org.apache.lucene.index.IndexWriter.doAfterSegmentFlushed(IndexWriter.java:4662)
> at 
> org.apache.lucene.index.DocumentsWriter$MergePendingEvent.process(DocumentsWriter.java:699)
> at 
> org.apache.lucene.index.IndexWriter.processEvents(IndexWriter.java:4688)
> at 
> org.apache.lucene.index.IndexWriter.processEvents(IndexWriter.java:4680)
> at 
> org.apache.lucene.index.IndexWriter.rollbackInternal(IndexWriter.java:2184)
> at org.apache.lucene.index.IndexWriter.rollback(IndexWriter.java:2085)
> at 
> org.apache.lucene.index.TestIndexWriterWithThreads$1.run(TestIndexWriterWithThreads.java:576)
>
>
>
>
> Build Log:
> [...truncated 690 lines...]
>[junit4] Suite: org.apache.lucene.index.TestIndexWriterWithThreads
>[junit4]   2> mar 31, 2014 8:14:01 PM 
> com.carrotsearch.randomizedtesting.RandomizedRunner$QueueUncaughtExceptionsHandler
>  uncaughtException
>[junit4]   2> WARNING: Uncaught exception in thread: 
> Thread[Thread-98,5,TGRP-TestIndexWriterWithThreads]
>[junit4]   2> java.lang.RuntimeException: java.lang.AssertionError
>[junit4]   2>at 
> __randomizedtesting.SeedInfo.seed([A2CAC9704F740906]:0)
>[junit4]   2>at 
> org.apache.lucene.index.TestIndexWriterWithThreads$1.run(TestIndexWriterWithThreads.java:620)
>[junit4]   2> Caused by: java.lang.AssertionError
>[junit4]   2>at 
> org.apache.lucene.index.DocumentsWriterFlushQueue.forcePurge(DocumentsWriterFlushQueue.java:135)
>[junit4]   2>at 
> org.apache.lucene.index.DocumentsWriter.purgeBuffer(DocumentsWriter.java:196)
>[junit4]   2>at 
> org.apache.lucene.index.IndexWriter.purge(IndexWriter.java:4649)
>[junit4]   2>at 
> org.apache.lucene.index.IndexWriter.doAfterSegmentFlushed(IndexWriter.java:4662)
>[junit4]   2>at 
> org.apache.lucene.index.DocumentsWriter$MergePendingEvent.process(DocumentsWriter.java:699)
>[junit4]   2>at 
> org.apache.lucene.index.IndexWriter.processEvents(IndexWriter.java:4688)
>[junit4]   2>at 
> org.apache.lucene.index.IndexWriter.processEvents(IndexWriter.java:4680)
>[junit4]   2>at 
> org.apache.lucene.index.IndexWriter.rollbackInternal(IndexWriter.java:2184)
>[junit4]   2>at 
> org.apache.lucene.index.IndexWriter.rollback(IndexWriter.java:2085)
>[junit4]   2>at 
> org.apache.lucene.index.TestIndexWriterWithThreads$1.run(TestIndexWriterWithThreads.java:576)
>[junit4]   2>
>[junit4]   2> NOTE: reproduce with: ant test  
> -Dtestcase=TestIndexWriterWithThreads 
> -Dtests.method=testRollbackAndCommitWithThreads -Dtests.seed=A2CAC9704F740906 
> -Dtests.slow=true -Dtests.locale=da -Dtests.timezone=Australia/LHI 
> -Dtests.file.encoding=UTF-8
>[junit4] ERROR   0.94s J2 | 
> TestIndexWriterWithThreads.testRollbackAndCommitWithThreads <<<
>[junit4]> Throwable #1: java.lang.AssertionError
>[junit4]>at 
> org.apache.lucene.index.TestIndexWriterWithThreads.testRollbackAndCommitWithThreads(TestIndexWriterWithThreads.java:632)
>[junit4]>at java.lang.Thread.run(Thread.java:724)Throwable #2: 
> com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an 
> uncaught exception in thread: Thread[id=164, name=Threa

[jira] [Updated] (LUCENE-5052) bitset codec for off heap filters

2014-03-31 Thread Dr Oleg Savrasov (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5052?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dr Oleg Savrasov updated LUCENE-5052:
-

Attachment: LUCENE-5052-1.patch

Only DOCS_ONLY index option is supported. IllegalArgumentException is thrown 
for anything else.
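(Sketch of the density cutoff from the description below - assumed names, not 
from the patch: a bitset costs maxDoc/8 bytes, so it only wins once the term 
matches more than maxDoc/8 documents, i.e. more than one doc per byte.)

{code:java}
public class BitsetCutoff {
  static boolean encodeAsBitset(int docFreq, int maxDoc) {
    return docFreq > maxDoc / 8;
  }

  public static void main(String[] args) {
    System.out.println(encodeAsBitset(100, 1000)); // false -> postings list
    System.out.println(encodeAsBitset(500, 1000)); // true  -> bitset
  }
}
{code}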

> bitset codec for off heap filters
> -
>
> Key: LUCENE-5052
> URL: https://issues.apache.org/jira/browse/LUCENE-5052
> Project: Lucene - Core
>  Issue Type: New Feature
>  Components: core/codecs
>Reporter: Mikhail Khludnev
>  Labels: features
> Fix For: 5.0
>
> Attachments: LUCENE-5052-1.patch, LUCENE-5052.patch, bitsetcodec.zip, 
> bitsetcodec.zip
>
>
> Colleagues,
> When we filter we don’t care about any of the scoring factors, i.e. norms, positions, 
> tf, but it should be fast. The obvious way to handle this is to decode the 
> postings list and cache it in heap (CachingWrapperFilter, Solr’s DocSet). 
> Both consuming heap and decoding are expensive. 
> Let’s write a posting list as a bitset if df is greater than the segment's 
> maxDoc/8 (what about skip lists? and overall performance?). 
> Besides the codec implementation, the trickiest part to me is to design an API 
> for this. How can we let the app know that a term query doesn’t need to be 
> cached in heap, but can be held as an mmapped bitset?
> WDYT?  



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Issue Comment Deleted] (LUCENE-5052) bitset codec for off heap filters

2014-03-31 Thread Dr Oleg Savrasov (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5052?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dr Oleg Savrasov updated LUCENE-5052:
-

Comment: was deleted

(was: Only DOCS_ONLY index option is supported. IllegalArgumentException is 
thrown otherwise.)

> bitset codec for off heap filters
> -
>
> Key: LUCENE-5052
> URL: https://issues.apache.org/jira/browse/LUCENE-5052
> Project: Lucene - Core
>  Issue Type: New Feature
>  Components: core/codecs
>Reporter: Mikhail Khludnev
>  Labels: features
> Fix For: 5.0
>
> Attachments: LUCENE-5052.patch, bitsetcodec.zip, bitsetcodec.zip
>
>
> Colleagues,
> When we filter we don’t care about any of the scoring factors, i.e. norms, positions, 
> tf, but it should be fast. The obvious way to handle this is to decode the 
> postings list and cache it in heap (CachingWrapperFilter, Solr’s DocSet). 
> Both consuming heap and decoding are expensive. 
> Let’s write a posting list as a bitset if df is greater than the segment's 
> maxDoc/8 (what about skip lists? and overall performance?). 
> Besides the codec implementation, the trickiest part to me is to design an API 
> for this. How can we let the app know that a term query doesn’t need to be 
> cached in heap, but can be held as an mmapped bitset?
> WDYT?  



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-5052) bitset codec for off heap filters

2014-03-31 Thread Dr Oleg Savrasov (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5052?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dr Oleg Savrasov updated LUCENE-5052:
-

Attachment: (was: LUCENE-5052-1.patch)

> bitset codec for off heap filters
> -
>
> Key: LUCENE-5052
> URL: https://issues.apache.org/jira/browse/LUCENE-5052
> Project: Lucene - Core
>  Issue Type: New Feature
>  Components: core/codecs
>Reporter: Mikhail Khludnev
>  Labels: features
> Fix For: 5.0
>
> Attachments: LUCENE-5052.patch, bitsetcodec.zip, bitsetcodec.zip
>
>
> Colleagues,
> When we filter we don’t care about any of the scoring factors, i.e. norms, positions, 
> tf, but it should be fast. The obvious way to handle this is to decode the 
> postings list and cache it in heap (CachingWrapperFilter, Solr’s DocSet). 
> Both consuming heap and decoding are expensive. 
> Let’s write a posting list as a bitset if df is greater than the segment's 
> maxDoc/8 (what about skip lists? and overall performance?). 
> Besides the codec implementation, the trickiest part to me is to design an API 
> for this. How can we let the app know that a term query doesn’t need to be 
> cached in heap, but can be held as an mmapped bitset?
> WDYT?  



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-5052) bitset codec for off heap filters

2014-03-31 Thread Dr Oleg Savrasov (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5052?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dr Oleg Savrasov updated LUCENE-5052:
-

Attachment: LUCENE-5052-1.patch

Only DOCS_ONLY index option is supported. IllegalArgumentException is thrown 
otherwise.

> bitset codec for off heap filters
> -
>
> Key: LUCENE-5052
> URL: https://issues.apache.org/jira/browse/LUCENE-5052
> Project: Lucene - Core
>  Issue Type: New Feature
>  Components: core/codecs
>Reporter: Mikhail Khludnev
>  Labels: features
> Fix For: 5.0
>
> Attachments: LUCENE-5052-1.patch, LUCENE-5052.patch, bitsetcodec.zip, 
> bitsetcodec.zip
>
>
> Colleagues,
> When we filter we don’t care about any of the scoring factors, i.e. norms, positions, 
> tf, but it should be fast. The obvious way to handle this is to decode the 
> postings list and cache it in heap (CachingWrapperFilter, Solr’s DocSet). 
> Both consuming heap and decoding are expensive. 
> Let’s write a posting list as a bitset if df is greater than the segment's 
> maxDoc/8 (what about skip lists? and overall performance?). 
> Besides the codec implementation, the trickiest part to me is to design an API 
> for this. How can we let the app know that a term query doesn’t need to be 
> cached in heap, but can be held as an mmapped bitset?
> WDYT?  



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [VOTE] Lucene / Solr 4.7.1 RC2

2014-03-31 Thread david.w.smi...@gmail.com
+1

SUCCESS! [1:51:37.952160]


On Sat, Mar 29, 2014 at 4:46 AM, Steve Rowe  wrote:

> Please vote for the second Release Candidate for Lucene/Solr 4.7.1.
>
> Download it here:
> <
> https://people.apache.org/~sarowe/staging_area/lucene-solr-4.7.1-RC2-rev1582953/
> >
>
> Smoke tester cmdline (from the lucene_solr_4_7 branch):
>
> python3.2 -u dev-tools/scripts/smokeTestRelease.py \
>
> https://people.apache.org/~sarowe/staging_area/lucene-solr-4.7.1-RC2-rev1582953/\
> 1582953 4.7.1 /tmp/4.7.1-smoke
>
> The smoke tester passed for me: SUCCESS! [0:50:29.936732]
>
> My vote: +1
>
> Steve
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>
>


[jira] [Commented] (SOLR-5488) Fix up test failures for Analytics Component

2014-03-31 Thread Steven Bower (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5488?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13955283#comment-13955283
 ] 

Steven Bower commented on SOLR-5488:


I've not perf tested since, but the sorts seem to be over things that are very 
short (lists of requests, etc.), so I doubt there will be much of a change.

Also, I moved the call to getTopFilter() out of the loop over requests, so this might 
actually make things a bit faster when there is a large number of requests.

> Fix up test failures for Analytics Component
> 
>
> Key: SOLR-5488
> URL: https://issues.apache.org/jira/browse/SOLR-5488
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 4.7, 5.0
>Reporter: Erick Erickson
>Assignee: Erick Erickson
> Attachments: SOLR-5488.patch, SOLR-5488.patch, SOLR-5488.patch, 
> SOLR-5488.patch, SOLR-5488.patch, SOLR-5488.patch, SOLR-5488.patch, 
> SOLR-5488.patch, SOLR-5488.patch, eoe.errors
>
>
> The analytics component has a few test failures, perhaps 
> environment-dependent. This is just to collect the test fixes in one place 
> for convenience when we merge back into 4.x



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5488) Fix up test failures for Analytics Component

2014-03-31 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5488?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13955281#comment-13955281
 ] 

Yonik Seeley commented on SOLR-5488:


bq. If there are no objections, I'll commit this early next week to trunk, and 
if nothing pops out in a few days merge it into 4x.

Remember, this was committed to trunk only - not because of the test failures 
(which we didn't know about when it was committed), but to give time to 
solidify the API (which is much harder to change once it's "released").  After 
a quick look, there's probably more to do here.  The biggest thing that popped 
out at me was the structure of the response - NamedList in some places that 
should probably be SimpleOrderedMap.  Add "wt=json&indent=true" to some sample 
requests and it's much easier to see.
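(For anyone comparing: depending on the {{json.nl}} setting, a {{NamedList}} 
may be rendered as a flat name/value array, while a {{SimpleOrderedMap}} always 
renders as a JSON object. Assumed output shapes, for illustration only:)

{code}
"stats": ["mean", 4.2, "count", 10]    <- NamedList, flat name/value array
"stats": {"mean": 4.2, "count": 10}    <- SimpleOrderedMap, JSON object
{code}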

> Fix up test failures for Analytics Component
> 
>
> Key: SOLR-5488
> URL: https://issues.apache.org/jira/browse/SOLR-5488
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 4.7, 5.0
>Reporter: Erick Erickson
>Assignee: Erick Erickson
> Attachments: SOLR-5488.patch, SOLR-5488.patch, SOLR-5488.patch, 
> SOLR-5488.patch, SOLR-5488.patch, SOLR-5488.patch, SOLR-5488.patch, 
> SOLR-5488.patch, SOLR-5488.patch, eoe.errors
>
>
> The analytics component has a few test failures, perhaps 
> environment-dependent. This is just to collect the test fixes in one place 
> for convenience when we merge back into 4.x



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5488) Fix up test failures for Analytics Component

2014-03-31 Thread Houston Putman (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5488?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13955264#comment-13955264
 ] 

Houston Putman commented on SOLR-5488:
--

The changes look good to me. Thanks for the fixes [~vzhovtiuk]. [~sbower] does 
the performance look the same? Just curious since we have switched maps and are 
sorting more.

> Fix up test failures for Analytics Component
> 
>
> Key: SOLR-5488
> URL: https://issues.apache.org/jira/browse/SOLR-5488
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 4.7, 5.0
>Reporter: Erick Erickson
>Assignee: Erick Erickson
> Attachments: SOLR-5488.patch, SOLR-5488.patch, SOLR-5488.patch, 
> SOLR-5488.patch, SOLR-5488.patch, SOLR-5488.patch, SOLR-5488.patch, 
> SOLR-5488.patch, SOLR-5488.patch, eoe.errors
>
>
> The analytics component has a few test failures, perhaps 
> environment-dependent. This is just to collect the test fixes in one place 
> for convenience when we merge back into 4.x



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5560) Cleanup charset handling for Java 7

2014-03-31 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13955255#comment-13955255
 ] 

Uwe Schindler commented on LUCENE-5560:
---

Use Lucene's {{IOUtils.UTF_8}} for that use-case (see the javadocs). It uses 
the above method and provides the shortcut constant as a {{String}}. The commit 
does this partially: I did not rewrite all instances of the UTF-8 string, and 
there are still many of them in tests (which does not hurt).

This also applies to the commons-io stuff. But we should nuke commons-io in later 
issues! commons-io is mostly useless with later Java versions. And it has 
partially unmaintained, horrible methods which violate lots of standards 
(auto-closing, default charsets, ...).
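(A small example of the cleanup - not from the actual commit: the Java 7 
constants replace charset-name Strings and the checked 
UnsupportedEncodingException that comes with them.)

{code:java}
import java.nio.charset.StandardCharsets;

public class CharsetCleanup {
  public static void main(String[] args) {
    // No checked exception, unlike "Lucene".getBytes("UTF-8"):
    byte[] bytes = "Lucene".getBytes(StandardCharsets.UTF_8);
    System.out.println(new String(bytes, StandardCharsets.UTF_8));
    // Where an API still wants the name as a String (e.g. URLDecoder on Java 7):
    System.out.println(StandardCharsets.UTF_8.name()); // prints "UTF-8"
  }
}
{code}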

> Cleanup charset handling for Java 7
> ---
>
> Key: LUCENE-5560
> URL: https://issues.apache.org/jira/browse/LUCENE-5560
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Uwe Schindler
>Assignee: Uwe Schindler
> Fix For: 4.8, 5.0
>
> Attachments: LUCENE-5560.patch, LUCENE-5560.patch, LUCENE-5560.patch, 
> LUCENE-5560.patch, LUCENE-5560.patch
>
>
> As we are now on Java 7, we should cleanup our charset handling to use the 
> official constants added by Java 7: {{StandardCharsets}}
> This issue is just a small code refactoring, trying to nuke the IOUtils 
> constants and replace them with the official ones provided by Java 7.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5908) Make REQUESTSTATUS call non-blocking and non-blocked

2014-03-31 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5908?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13955246#comment-13955246
 ] 

Mark Miller commented on SOLR-5908:
---

That makes sense to me - there doesn't seem to be any strong reason to send a status 
request to the OverseerCollectionProcessor.

> Make REQUESTSTATUS call non-blocking and non-blocked
> 
>
> Key: SOLR-5908
> URL: https://issues.apache.org/jira/browse/SOLR-5908
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrCloud
>Reporter: Anshum Gupta
>Assignee: Anshum Gupta
> Attachments: SOLR-5908.patch
>
>
> Currently the REQUESTSTATUS Collection API call is blocked by any other call in 
> the OCP work queue.
> Make it independent and non-blocked/non-blocking.
> This would be handled as a part of making the OCP multi-threaded, but I'm 
> opening this issue to explore other possible options of handling this.
> If the final fix happens via SOLR-5681, this will be resolved when SOLR-5681 gets 
> resolved.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-5938) ConcurrentUpdateSolrServer doesn't parse the response while the response status code isn't 200

2014-03-31 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5938?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller updated SOLR-5938:
--

Fix Version/s: 5.0
   4.8

> ConcurrentUpdateSolrServer doesn't parse the response while the response status 
> code isn't 200
> -
>
> Key: SOLR-5938
> URL: https://issues.apache.org/jira/browse/SOLR-5938
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 4.6.1
> Environment: one cloud with two servers, one shard, one leader and one 
> replica; the index is sent to the replica server, and the replica server 
> forwards to the leader server.
>Reporter: Raintung Li
>  Labels: solrj
> Fix For: 4.8, 5.0
>
> Attachments: SOLR-5938.txt
>
>
> ConcurrentUpdateSolrServer only gives back the error and doesn't parse the 
> response body, so you can't get the error reason from the remote server. 
> Example:
> You send an index request to one Solr server, and this server forwards it to 
> the leader server. The forwarding case invokes ConcurrentUpdateSolrServer.java; 
> if an error happens you can't get the right error message, you can only check 
> it on the leader server - even though the leader server actually sent the error 
> message to the forwarding server.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-trunk-Linux-java7-64-analyzers - Build # 1228 - Failure!

2014-03-31 Thread builder
Build: builds.flonkings.com/job/Lucene-trunk-Linux-java7-64-analyzers/1228/

1 tests failed.
REGRESSION:  org.apache.lucene.analysis.core.TestRandomChains.testRandomChains

Error Message:
some thread(s) failed

Stack Trace:
java.lang.RuntimeException: some thread(s) failed
at 
org.apache.lucene.analysis.BaseTokenStreamTestCase.checkRandomData(BaseTokenStreamTestCase.java:533)
at 
org.apache.lucene.analysis.core.TestRandomChains.testRandomChains(TestRandomChains.java:901)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1617)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:826)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:862)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:876)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.TestRuleFieldCacheSanity$1.evaluate(TestRuleFieldCacheSanity.java:51)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:359)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:783)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:443)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:835)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:771)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:782)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:359)
at java.lang.Thread.run(Thread.java:724)




Build Log:
[...truncated 1115 lines...]
   [junit4] Suite: org.apache.lucene.analysis.core.TestRandomChains
   [junit4]   2> TEST FAIL: useCharFilter=true text='ura cobarde e amb\u00edgu'
   [junit4]   2> mar 31, 2014 11:07:06 AM 
com.carrotsearch.randomizedtesting.RandomizedRunner$QueueUncaughtExceptionsHandler
 uncaughtException
   [junit4]   2> WARNING: Uncaught exception in thread: 
Thread[Thread-133,5,TGRP-TestRandomChains]
   [junit4]   2> java.lang.OutOfMemoryError: Java heap space
   [junit4]   2>at 
__randomizedtesting.SeedInfo.seed([AE9BD047EDEF]:0)
   [junit4]   2>at java.util.Arrays.copyOfRange(Arrays.java:2694)
   [junit4]   2>at java.lang.String.(String.java:203)
   [junit4]   2>  

[jira] [Commented] (SOLR-5488) Fix up test failures for Analytics Component

2014-03-31 Thread Steven Bower (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5488?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13955223#comment-13955223
 ] 

Steven Bower commented on SOLR-5488:


Reviewed.. looks good

> Fix up test failures for Analytics Component
> 
>
> Key: SOLR-5488
> URL: https://issues.apache.org/jira/browse/SOLR-5488
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 4.7, 5.0
>Reporter: Erick Erickson
>Assignee: Erick Erickson
> Attachments: SOLR-5488.patch, SOLR-5488.patch, SOLR-5488.patch, 
> SOLR-5488.patch, SOLR-5488.patch, SOLR-5488.patch, SOLR-5488.patch, 
> SOLR-5488.patch, SOLR-5488.patch, eoe.errors
>
>
> The analytics component has a few test failures, perhaps 
> environment-dependent. This is just to collect the test fixes in one place 
> for convenience when we merge back into 4.x



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5939) Wrong request potentially on Error from StreamingSolrServer

2014-03-31 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5939?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13955212#comment-13955212
 ] 

Mark Miller commented on SOLR-5939:
---

This sounds so familiar - did you bring this up before in another issue?

> Wrong request potentially on Error from StreamingSolrServer
> ---
>
> Key: SOLR-5939
> URL: https://issues.apache.org/jira/browse/SOLR-5939
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 5.0
>Reporter: Per Steffensen
>  Labels: error, retry
> Fix For: 4.8, 5.0
>
> Attachments: SOLR-5939_demo_problem.patch
>
>
> In _StreamingSolrServer.getSolrServer|ConcurrentUpdateSolrServer.handleError_ 
> the _SolrCmdDistributor.Req req_ parameter is used for the _req_ field of all 
> _error_'s created. This is also true for subsequent requests sent through the 
> returned ConcurrentUpdateSolrServer. This means, among other things, that the wrong 
> request (the first request sent through this _ConcurrentUpdateSolrServer_) may be 
> retried in case of errors executing one of the subsequent requests.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-5939) Wrong request potentially on Error from StreamingSolrServer

2014-03-31 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5939?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller updated SOLR-5939:
--

Fix Version/s: 5.0
   4.8

> Wrong request potentially on Error from StreamingSolrServer
> ---
>
> Key: SOLR-5939
> URL: https://issues.apache.org/jira/browse/SOLR-5939
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 5.0
>Reporter: Per Steffensen
>  Labels: error, retry
> Fix For: 4.8, 5.0
>
> Attachments: SOLR-5939_demo_problem.patch
>
>
> In _StreamingSolrServer.getSolrServer|ConcurrentUpdateSolrServer.handleError_ 
> the _SolrCmdDistributor.Req req_ parameter is used for the _req_ field of all 
> _error_'s created. This is also true for subsequent requests sent through the 
> returned ConcurrentUpdateSolrServer. This means, among other things, that the wrong 
> request (the first request sent through this _ConcurrentUpdateSolrServer_) may be 
> retried in case of errors executing one of the subsequent requests.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-5859) Harden the Overseer restart mechanism

2014-03-31 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5859?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul updated SOLR-5859:
-

Attachment: SOLR-5859.patch

Added a couple of tests. 
Check if the system is shutting down; if not, rejoin the election.

Actually the logging was added for debugging; I removed all that extra logging.

> Harden the Overseer restart mechanism
> -
>
> Key: SOLR-5859
> URL: https://issues.apache.org/jira/browse/SOLR-5859
> Project: Solr
>  Issue Type: Improvement
>Reporter: Noble Paul
>Assignee: Noble Paul
> Attachments: SOLR-5859.patch, SOLR-5859.patch, SOLR-5859.patch
>
>
> SOLR-5476 depends on Overseer restart. The current strategy is to remove the 
> zk node for leader election, wait for STATUS_UPDATE_DELAY + 100 ms, and 
> start the new overseer.
> Though overseer ops are short-running, it is not a 100% foolproof strategy 
> because if an operation takes longer than the wait period there can be a race 
> condition. 



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-5560) Cleanup charset handling for Java 7

2014-03-31 Thread Ahmet Arslan (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5560?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ahmet Arslan updated LUCENE-5560:
-

Attachment: LUCENE-5560.patch

bq. Look at the patch. I refactored many things of Solr, too. 
Sorry, I looked at https://svn.apache.org/r1583315 and it conceals the Solr 
changes at first glance.

URLDecoder does not accept Charset instances either.

Is it okay to use {code}URLDecoder.decode(..., 
StandardCharsets.UTF_8.name());{code} in such cases?

Is this patch usable in that sense?

> Cleanup charset handling for Java 7
> ---
>
> Key: LUCENE-5560
> URL: https://issues.apache.org/jira/browse/LUCENE-5560
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Uwe Schindler
>Assignee: Uwe Schindler
> Fix For: 4.8, 5.0
>
> Attachments: LUCENE-5560.patch, LUCENE-5560.patch, LUCENE-5560.patch, 
> LUCENE-5560.patch, LUCENE-5560.patch
>
>
> As we are now on Java 7, we should cleanup our charset handling to use the 
> official constants added by Java 7: {{StandardCharsets}}
> This issue is just a small code refactoring, trying to nuke the IOUtils 
> constants and replace them with the official ones provided by Java 7.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (LUCENE-5562) LuceneSuggester does not work on Android

2014-03-31 Thread Uwe Schindler (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5562?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uwe Schindler resolved LUCENE-5562.
---

Resolution: Not a Problem
  Assignee: Uwe Schindler

> LuceneSuggester does not work on Android
> 
>
> Key: LUCENE-5562
> URL: https://issues.apache.org/jira/browse/LUCENE-5562
> Project: Lucene - Core
>  Issue Type: Bug
>Affects Versions: 4.7
> Environment: Android 4.4.2
>Reporter: Giovanni Cuccu
>Assignee: Uwe Schindler
>Priority: Minor
> Attachments: AnalyzingSuggester.java, Sort.java
>
>
> I'm developing an application on Android and I'm using Lucene for indexing 
> and searching. When I try to use AnalyzingSuggester (even the fuzzy version) 
> I get an exception that the BufferedOutputStream is already closed.
> I tracked the problem down and it seems that in
> org.apache.lucene.search.suggest.Sort
> and in org.apache.lucene.search.suggest.analyzing.AnalyzingSuggester
> the output stream is closed twice, hence the exception on Android. 
> The same code on Windows runs without a problem.
> It seems that the Android JVM does some additional checks. I attach two 
> patch files in which the classes close the output stream only once. (Check 
> for writerClosed in the code to see what I did.)  



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (LUCENE-5562) LuceneSuggester does not work on Android

2014-03-31 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5562?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13955184#comment-13955184
 ] 

Uwe Schindler edited comment on LUCENE-5562 at 3/31/14 2:00 PM:


This is not a bug in Lucene. The Java Closeable interface's close() method is 
idempotent, as its documentation states in 
[http://docs.oracle.com/javase/7/docs/api/java/io/Closeable.html#close()]:

bq. Closes this stream and releases any system resources associated with it. If 
the stream is already closed then invoking this method has no effect.

If the implementation on Android does not implement this correctly, it is not a 
problem of Lucene.
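
For illustration, a minimal sketch of what an idempotent close() looks like 
(essentially the writerClosed-style guard described in the issue; hypothetical 
class):

{code}
import java.io.Closeable;
import java.io.IOException;

class GuardedWriter implements Closeable {
  private boolean closed = false; // guard flag, as in the attached patches

  @Override
  public void close() throws IOException {
    if (closed) {
      return; // second and later calls are no-ops, per the Closeable contract
    }
    closed = true;
    // ... release the underlying resources here ...
  }
}
{code}

Per the contract quoted above, though, this guard belongs in the stream 
implementation itself, not in every caller.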

Just to check if there is no other problem: Can you post the exact stack trace 
of the Exception on Android?

P.S.: Please note: Android is not Java compatible, so Lucene does not guarantee 
that it works correctly with Android. We also don't test on Android. Lucene 4.8 
will require Java 7, so it is unlikely to work on Android anymore.


was (Author: thetaphi):
This is not a bug in Lucene. The Java Closeable interface states in 
[http://docs.oracle.com/javase/7/docs/api/java/io/Closeable.html#close()]:

bq. Closes this stream and releases any system resources associated with it. If 
the stream is already closed then invoking this method has no effect.

If the implementation on Android does not implement this correctly, it is not a 
problem of Lucene.

> LuceneSuggester does not work on Android
> 
>
> Key: LUCENE-5562
> URL: https://issues.apache.org/jira/browse/LUCENE-5562
> Project: Lucene - Core
>  Issue Type: Bug
>Affects Versions: 4.7
> Environment: Android 4.4.2
>Reporter: Giovanni Cuccu
>Priority: Minor
> Attachments: AnalyzingSuggester.java, Sort.java
>
>
> I'm developing an application on Android and I'm using Lucene for indexing 
> and searching. When I try to use AnalyzingSuggester (even the fuzzy version) 
> I get an exception that the BufferedOutputStream is already closed.
> I tracked the problem down and it seems that in
> org.apache.lucene.search.suggest.Sort
> and in org.apache.lucene.search.suggest.analyzing.AnalyzingSuggester
> the output stream is closed twice, hence the exception on Android. 
> The same code on Windows runs without a problem.
> It seems that the Android JVM does some additional checks. I attach two 
> patch files in which the classes close the output stream only once. (Check 
> for writerClosed in the code to see what I did.)  



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-5473) Make one state.json per collection

2014-03-31 Thread Shalin Shekhar Mangar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13955186#comment-13955186
 ] 

Shalin Shekhar Mangar edited comment on SOLR-5473 at 3/31/14 1:55 PM:
--

All Solr core tests pass with this patch. After discussing offline with Noble, 
I introduced a new method, ClusterState.getCachedReplica, which is exactly like 
getReplica except that it only reads locally cached data and never hits ZK. The 
older getReplica and the new getCachedReplica are used only by the SolrLogLayout 
and SolrLogFormatter classes, so these should never hit ZK anyway.
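
A rough sketch of the idea (hypothetical signatures and field names; see the 
patch for the actual code):

{code}
// Like getReplica, but consults only the locally cached collection states
// and never fetches from ZooKeeper; returns null when nothing is cached.
public Replica getCachedReplica(String collectionName, String coreNodeName) {
  DocCollection coll = collectionStates.get(collectionName); // local cache only
  if (coll == null) return null;
  return coll.getReplica(coreNodeName);
}
{code}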

However, there is a SolrJ test failure in CloudSolrServerTest on asserts added 
by SOLR-5715

{code}
  [junit4]   2> NOTE: reproduce with: ant test  -Dtestcase=CloudSolrServerTest 
-Dtests.method=testDistribSearch -Dtests.seed=5FAC2B1757C387B3 
-Dtests.slow=true -Dtests.locale=sv_SE -Dtests.timezone=Pacific/Samoa 
-Dtests.file.encoding=US-ASCII
   [junit4] FAILURE 22.1s J1 | CloudSolrServerTest.testDistribSearch <<<
   [junit4]> Throwable #1: java.lang.AssertionError: Unexpected number of 
requests to expected URLs expected:<6> but was:<0>
   [junit4]>at 
__randomizedtesting.SeedInfo.seed([5FAC2B1757C387B3:DE4AA50F209CE78F]:0)
   [junit4]>at 
org.apache.solr.client.solrj.impl.CloudSolrServerTest.doTest(CloudSolrServerTest.java:300)
   [junit4]>at 
org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:867)
   [junit4]>at java.lang.Thread.run(Thread.java:744)
   [junit4]   2> 26918 T10 oas.SolrTestCaseJ4.deleteCore ###deleteCore
{code}


was (Author: shalinmangar):
All solr core tests pass with this patch. However, there is a SolrJ test 
failure in CloudSolrServerTest on asserts added by SOLR-5715

{code}
  [junit4]   2> NOTE: reproduce with: ant test  -Dtestcase=CloudSolrServerTest 
-Dtests.method=testDistribSearch -Dtests.seed=5FAC2B1757C387B3 
-Dtests.slow=true -Dtests.locale=sv_SE -Dtests.timezone=Pacific/Samoa 
-Dtests.file.encoding=US-ASCII
   [junit4] FAILURE 22.1s J1 | CloudSolrServerTest.testDistribSearch <<<
   [junit4]> Throwable #1: java.lang.AssertionError: Unexpected number of 
requests to expected URLs expected:<6> but was:<0>
   [junit4]>at 
__randomizedtesting.SeedInfo.seed([5FAC2B1757C387B3:DE4AA50F209CE78F]:0)
   [junit4]>at 
org.apache.solr.client.solrj.impl.CloudSolrServerTest.doTest(CloudSolrServerTest.java:300)
   [junit4]>at 
org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:867)
   [junit4]>at java.lang.Thread.run(Thread.java:744)
   [junit4]   2> 26918 T10 oas.SolrTestCaseJ4.deleteCore ###deleteCore
{code}

> Make one state.json per collection
> --
>
> Key: SOLR-5473
> URL: https://issues.apache.org/jira/browse/SOLR-5473
> Project: Solr
>  Issue Type: Sub-task
>  Components: SolrCloud
>Reporter: Noble Paul
>Assignee: Noble Paul
> Attachments: SOLR-5473-74.patch, SOLR-5473-74.patch, 
> SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
> SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
> SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
> SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473.patch, SOLR-5473.patch, 
> SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
> SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
> SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
> SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
> ec2-23-20-119-52_solr.log, ec2-50-16-38-73_solr.log
>
>
> As defined in the parent issue, store the state of each collection under 
> the /collections/collectionname/state.json node



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org


