Re: [VOTE] Release PyLucene 4.10.1-1

2014-10-03 Thread Michael McCandless
+1 to release

I ran my usual smoke test: indexing, optimizing & searching the first
100K Wikipedia English docs...

Mike McCandless

http://blog.mikemccandless.com


On Wed, Oct 1, 2014 at 7:13 PM, Andi Vajda va...@apache.org wrote:

 The PyLucene 4.10.1-1 release tracking the recent release of Apache Lucene
 4.10.1 is ready.

 This release candidate fixes the regression found in the previous one,
 4.10.1-0, and is available from:
 http://people.apache.org/~vajda/staging_area/

 A list of changes in this release can be seen at:
 http://svn.apache.org/repos/asf/lucene/pylucene/branches/pylucene_4_10/CHANGES

 PyLucene 4.10.1 is built with JCC 2.21 included in these release artifacts.

 A list of Lucene Java changes can be seen at:
 http://svn.apache.org/repos/asf/lucene/dev/tags/lucene_solr_4_10_1/lucene/CHANGES.txt

 Please vote to release these artifacts as PyLucene 4.10.1-1.
 Anyone interested in this release can and should vote!

 Thanks!

 Andi..

 ps: the KEYS file for PyLucene release signing is at:
 http://svn.apache.org/repos/asf/lucene/pylucene/dist/KEYS
 http://people.apache.org/~vajda/staging_area/KEYS

 pps: here is my +1


Re: [VOTE] Release PyLucene 4.10.1-1

2014-10-03 Thread Steve Rowe
+1

After building jcc and pylucene, ‘make test’ passed.

I successfully ran the IndexFiles.py and SearchFiles.py samples from the 
distribution against the pylucene directory.

Steve

On Oct 1, 2014, at 7:13 PM, Andi Vajda va...@apache.org wrote:

 
 The PyLucene 4.10.1-1 release tracking the recent release of Apache Lucene 
 4.10.1 is ready.
 
 This release candidate fixes the regression found in the previous one, 
 4.10.1-0, and is available from:
 http://people.apache.org/~vajda/staging_area/
 
 A list of changes in this release can be seen at:
 http://svn.apache.org/repos/asf/lucene/pylucene/branches/pylucene_4_10/CHANGES
 
 PyLucene 4.10.1 is built with JCC 2.21 included in these release artifacts.
 
 A list of Lucene Java changes can be seen at:
 http://svn.apache.org/repos/asf/lucene/dev/tags/lucene_solr_4_10_1/lucene/CHANGES.txt
 
 Please vote to release these artifacts as PyLucene 4.10.1-1.
 Anyone interested in this release can and should vote!
 
 Thanks!
 
 Andi..
 
 ps: the KEYS file for PyLucene release signing is at:
 http://svn.apache.org/repos/asf/lucene/pylucene/dist/KEYS
 http://people.apache.org/~vajda/staging_area/KEYS
 
 pps: here is my +1



[JENKINS] Lucene-Solr-5.x-Windows (64bit/jdk1.7.0_67) - Build # 4248 - Failure!

2014-10-03 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Windows/4248/
Java: 64bit/jdk1.7.0_67 -XX:-UseCompressedOops -XX:+UseG1GC

1 tests failed.
REGRESSION:  org.apache.solr.cloud.TestCloudPivotFacet.testDistribSearch

Error Message:
init query failed: 
{main(facet=true&facet.pivot=pivot_tf%2Cpivot_l&facet.pivot=pivot_td%2Cdense_pivot_ti%2Cpivot_dt&facet.limit=11&facet.offset=7),extra(rows=0&q=*%3A*&fq=id%3A%5B*+TO+260%5D)}:
 No live SolrServers available to handle this 
request:[http://127.0.0.1:61656/sn_z/uo/collection1, 
http://127.0.0.1:61666/sn_z/uo/collection1, 
http://127.0.0.1:61637/sn_z/uo/collection1, 
http://127.0.0.1:61646/sn_z/uo/collection1]

Stack Trace:
java.lang.RuntimeException: init query failed: 
{main(facet=true&facet.pivot=pivot_tf%2Cpivot_l&facet.pivot=pivot_td%2Cdense_pivot_ti%2Cpivot_dt&facet.limit=11&facet.offset=7),extra(rows=0&q=*%3A*&fq=id%3A%5B*+TO+260%5D)}:
 No live SolrServers available to handle this 
request:[http://127.0.0.1:61656/sn_z/uo/collection1, 
http://127.0.0.1:61666/sn_z/uo/collection1, 
http://127.0.0.1:61637/sn_z/uo/collection1, 
http://127.0.0.1:61646/sn_z/uo/collection1]
at 
__randomizedtesting.SeedInfo.seed([EB4E61E19A7464B9:6AA8EFF9ED2B0485]:0)
at 
org.apache.solr.cloud.TestCloudPivotFacet.assertPivotCountsAreCorrect(TestCloudPivotFacet.java:223)
at 
org.apache.solr.cloud.TestCloudPivotFacet.doTest(TestCloudPivotFacet.java:197)
at 
org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:869)
at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[JENKINS] Lucene-Solr-Tests-trunk-Java7 - Build # 4901 - Still Failing

2014-10-03 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-trunk-Java7/4901/

1 tests failed.
REGRESSION:  org.apache.solr.cloud.ChaosMonkeySafeLeaderTest.testDistribSearch

Error Message:
expected:<0> but was:<1>

Stack Trace:
java.lang.AssertionError: expected:<0> but was:<1>
at 
__randomizedtesting.SeedInfo.seed([F3FB642B77797948:721DEA3300261974]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at org.junit.Assert.assertEquals(Assert.java:456)
at 
org.apache.solr.cloud.ChaosMonkeySafeLeaderTest.doTest(ChaosMonkeySafeLeaderTest.java:153)
at 
org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:869)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Created] (LUCENE-5986) Incorrect character folding in Arabic

2014-10-03 Thread Jorge Cruanes (JIRA)
Jorge Cruanes created LUCENE-5986:
-

 Summary: Incorrect character folding in Arabic
 Key: LUCENE-5986
 URL: https://issues.apache.org/jira/browse/LUCENE-5986
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Jorge Cruanes


The function {{normalize(char s[], int len)}}, in the package 
{{org.apache.lucene.analysis.ar.ArabicNormalizer}}, performs an incorrect 
character folding in Arabic. The incorrect folding affects the letters Teh 
Marbuta (U+0629) and Heh (U+0647) at the end of a word (according to the study 
of El-Sherbiny et al., 2010, page 5).

To fix this bug, insert an if clause so that the folding is performed only if 
the Teh Marbuta is not at the end of the word. A suggestion for the new case 
code follows:
{quote}
case TEH_MARBUTA:
  if (i < (len - 1))
    s[i] = HEH;
  break;
{quote}

References:
El-Sherbiny, A., Farah, M., Oueichek, I., Al-Zoman, A. (2010) Linguistic 
Guidelines for the Use of the Arabic Language in Internet Domains. Internet 
Society Requests For Comment (RFCs) (5564). pp 1-11. Available at: 
http://tools.ietf.org/html/rfc5564.
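
For context, a minimal sketch of how the proposed guard would slot into the 
switch (the surrounding loop structure of {{normalize()}} is assumed here, not 
copied from the actual ArabicNormalizer source):
{code}
// Hypothetical sketch of normalize() with the proposed word-final guard.
public int normalize(char s[], int len) {
  for (int i = 0; i < len; i++) {
    switch (s[i]) {
      // ... other foldings elided ...
      case TEH_MARBUTA:
        if (i < (len - 1)) {  // proposed: fold to HEH only when not word-final
          s[i] = HEH;
        }
        break;
    }
  }
  return len;
}
{code}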



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-trunk-Linux (64bit/jdk1.9.0-ea-b28) - Build # 11375 - Still Failing!

2014-10-03 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/11375/
Java: 64bit/jdk1.9.0-ea-b28 -XX:-UseCompressedOops -XX:+UseSerialGC

1 tests failed.
REGRESSION:  org.apache.solr.cloud.DeleteReplicaTest.testDistribSearch

Error Message:
No live SolrServers available to handle this 
request:[https://127.0.0.1:54836/as_oq, https://127.0.0.1:48719/as_oq, 
https://127.0.0.1:59371/as_oq, https://127.0.0.1:43034/as_oq, 
https://127.0.0.1:38310/as_oq]

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: No live SolrServers available 
to handle this request:[https://127.0.0.1:54836/as_oq, 
https://127.0.0.1:48719/as_oq, https://127.0.0.1:59371/as_oq, 
https://127.0.0.1:43034/as_oq, https://127.0.0.1:38310/as_oq]
at 
__randomizedtesting.SeedInfo.seed([3E2E15D092187917:BFC89BC8E547192B]:0)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrServer.request(LBHttpSolrServer.java:322)
at 
org.apache.solr.client.solrj.impl.CloudSolrServer.sendRequest(CloudSolrServer.java:880)
at 
org.apache.solr.client.solrj.impl.CloudSolrServer.requestWithRetryOnStaleState(CloudSolrServer.java:658)
at 
org.apache.solr.client.solrj.impl.CloudSolrServer.request(CloudSolrServer.java:601)
at 
org.apache.solr.cloud.DeleteReplicaTest.removeAndWaitForReplicaGone(DeleteReplicaTest.java:172)
at 
org.apache.solr.cloud.DeleteReplicaTest.deleteLiveReplicaTest(DeleteReplicaTest.java:145)
at 
org.apache.solr.cloud.DeleteReplicaTest.doTest(DeleteReplicaTest.java:89)
at 
org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:869)
at sun.reflect.GeneratedMethodAccessor38.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:484)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Commented] (SOLR-6510) select?collapse=... - fix NPE in Collapsing(FieldValue|Score)Collector

2014-10-03 Thread Ramkumar Aiyengar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6510?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14157794#comment-14157794
 ] 

Ramkumar Aiyengar commented on SOLR-6510:
-

I see bugs getting tagged for a 4.10.2, though that may not be indicative of a 
release..

 select?collapse=... - fix NPE in Collapsing(FieldValue|Score)Collector
 --

 Key: SOLR-6510
 URL: https://issues.apache.org/jira/browse/SOLR-6510
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.8.1
Reporter: Christine Poerschke
Assignee: Joel Bernstein
Priority: Minor

 Affects branch_4x but not trunk, collapse field must be docValues=true and 
 shard empty (or with nothing indexed for the field?).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (LUCENE-5986) Incorrect character folding in Arabic

2014-10-03 Thread Robert Muir (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5986?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Muir resolved LUCENE-5986.
-
Resolution: Not a Problem

This character only appears at the end of words.

 Incorrect character folding in Arabic
 -

 Key: LUCENE-5986
 URL: https://issues.apache.org/jira/browse/LUCENE-5986
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Jorge Cruanes
  Labels: easyfix
   Original Estimate: 5m
  Remaining Estimate: 5m

 The function {{normalize(char s[], int len)}}, in the package 
 {{org.apache.lucene.analysis.ar.ArabicNormalizer}}, performs an incorrect 
 character folding in Arabic. The incorrect folding affects the letters Teh 
 Marbuta (U+0629) and Heh (U+0647) at the end of a word (according to the 
 study of El-Sherbiny et al., 2010, page 5).
 To fix this bug, insert an if clause so that the folding is performed only if 
 the Teh Marbuta is not at the end of the word. A suggestion for the new case 
 code follows:
 {quote}
 case TEH_MARBUTA:
   if (i < (len - 1))
     s[i] = HEH;
   break;
 {quote}
 References:
 El-Sherbiny, A., Farah, M., Oueichek, I., Al-Zoman, A. (2010) Linguistic 
 Guidelines for the Use of the Arabic Language in Internet Domains. Internet 
 Society Requests For Comment (RFCs) (5564). pp 1-11. Available at: 
 http://tools.ietf.org/html/rfc5564.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5986) Incorrect character folding in Arabic

2014-10-03 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5986?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14157842#comment-14157842
 ] 

Robert Muir commented on LUCENE-5986:
-

By the way, here is the paper: 
http://www.mtholyoke.edu/~lballest/Pubs/arab_stem05.pdf

It's referenced in the source code: this algorithm just implements the paper. 
It's not about opinions of what is right and what is wrong and what is good and 
what is bad.

 Incorrect character folding in Arabic
 -

 Key: LUCENE-5986
 URL: https://issues.apache.org/jira/browse/LUCENE-5986
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Jorge Cruanes
  Labels: easyfix
   Original Estimate: 5m
  Remaining Estimate: 5m

 The function {{normalize(char s[], int len)}}, in the package 
 {{org.apache.lucene.analysis.ar.ArabicNormalizer}}, performs an incorrect 
 character folding in Arabic. The incorrect folding affects the letters Teh 
 Marbuta (U+0629) and Heh (U+0647) at the end of a word (according to the 
 study of El-Sherbiny et al., 2010, page 5).
 To fix this bug, insert an if clause so that the folding is performed only if 
 the Teh Marbuta is not at the end of the word. A suggestion for the new case 
 code follows:
 {quote}
 case TEH_MARBUTA:
   if (i < (len - 1))
     s[i] = HEH;
   break;
 {quote}
 References:
 El-Sherbiny, A., Farah, M., Oueichek, I., Al-Zoman, A. (2010) Linguistic 
 Guidelines for the Use of the Arabic Language in Internet Domains. Internet 
 Society Requests For Comment (RFCs) (5564). pp 1-11. Available at: 
 http://tools.ietf.org/html/rfc5564.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6511) Fencepost error in LeaderInitiatedRecoveryThread

2014-10-03 Thread Shalin Shekhar Mangar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6511?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14157843#comment-14157843
 ] 

Shalin Shekhar Mangar commented on SOLR-6511:
-

bq. Digging a bit further into the logs, maxTries is set to 1 because 
ensureReplicaInLeaderInitiatedRecovery throws a SessionExpiredException 
(presumably because ZK has noticed the network blip and removed the relevant 
ephemeral node).

It's not just SessionExpiredException. Sometimes it might throw a 
ConnectionLossException, which should also be handled in the same way. I got the 
following stack trace in my testing when a node was partitioned from ZooKeeper 
for a long time:
{code}
7984566 [qtp1600876769-17] ERROR 
org.apache.solr.update.processor.DistributedUpdateProcessor  – Leader failed to 
set replica http://n4:8983/solr/collection_5x3_shard4_replica3/ state to DOWN 
due to: org.apache.solr.common.SolrException: Failed to update data to down for 
znode: /collections/collection_5x3/leader_initiated_recovery/shard4/core_node10
org.apache.solr.common.SolrException: Failed to update data to down for znode: 
/collections/collection_5x3/leader_initiated_recovery/shard4/core_node10
at 
org.apache.solr.cloud.ZkController.updateLeaderInitiatedRecoveryState(ZkController.java:1959)
at 
org.apache.solr.cloud.ZkController.ensureReplicaInLeaderInitiatedRecovery(ZkController.java:1841)
at 
org.apache.solr.update.processor.DistributedUpdateProcessor.doFinish(DistributedUpdateProcessor.java:837)
at 
org.apache.solr.update.processor.DistributedUpdateProcessor.finish(DistributedUpdateProcessor.java:1679)
at 
org.apache.solr.update.processor.LogUpdateProcessor.finish(LogUpdateProcessorFactory.java:179)
at 
org.apache.solr.update.processor.UpdateRequestProcessor.finish(UpdateRequestProcessor.java:76)
at 
org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:83)
at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:1967)
at 
org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:777)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:418)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:207)
at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1419)
at 
org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:455)
at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137)
at 
org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:557)
at 
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:231)
at 
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1075)
at 
org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:384)
at 
org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:193)
at 
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1009)
at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135)
at 
org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:255)
at 
org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:154)
at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:116)
at org.eclipse.jetty.server.Server.handle(Server.java:368)
at 
org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:489)
at 
org.eclipse.jetty.server.BlockingHttpConnection.handleRequest(BlockingHttpConnection.java:53)
at 
org.eclipse.jetty.server.AbstractHttpConnection.content(AbstractHttpConnection.java:953)
at 
org.eclipse.jetty.server.AbstractHttpConnection$RequestHandler.content(AbstractHttpConnection.java:1014)
at org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:953)
at org.eclipse.jetty.http.HttpParser.parseAvailable(HttpParser.java:240)
at 
org.eclipse.jetty.server.BlockingHttpConnection.handle(BlockingHttpConnection.java:72)
at 
org.eclipse.jetty.server.bio.SocketConnector$ConnectorEndPoint.run(SocketConnector.java:264)
at 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:608)
at 
org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:543)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.zookeeper.KeeperException$ConnectionLossException: 
KeeperErrorCode = ConnectionLoss for 
/collections/collection_5x3/leader_initiated_recovery/shard4/core_node10
at 
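// Sketch (hypothetical helper, not from an actual patch): treat both ZooKeeper
// failure modes the same way when deciding how to react to a failed
// leader-initiated recovery update.
static boolean isZkConnectivityLoss(Throwable t) {
  return t instanceof org.apache.zookeeper.KeeperException.SessionExpiredException
      || t instanceof org.apache.zookeeper.KeeperException.ConnectionLossException;
}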

[jira] [Updated] (LUCENE-5879) Add auto-prefix terms to block tree terms dict

2014-10-03 Thread Michael McCandless (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5879?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael McCandless updated LUCENE-5879:
---
Attachment: LUCENE-5879.patch

bq. I think we shouldn't add the FI option at this time? 

New patch with FieldType.setIndexRanges removed, but I don't think we
should commit this approach: the feature is basically so ridiculously
expert to use that only like 3 people in the world will figure out
how.

Sure, the servers built on top of Lucene can expose a simple API,
since they know the schema and can open up an "index for range
searching" boolean on a field and validate you are using a PF that
supports that... but I don't think it's right/fair to make new, strong
features of Lucene ridiculously hard to use by direct Lucene users.

It's wonderful Lucene has such pluggable codecs now, letting users
explore all sorts of custom formats, etc., but the nasty downside of
all this freedom is that new, complex features like this one, which
offer powerful improvements to the default codec that 99% of Lucene
users would have used, must either be implemented across the board for
all codecs (a very tall order) in order to have an intuitive API, or
must be exposed only via ridiculously expert codec-specific APIs.

I don't think either choice is acceptable.

So ... I tried exploring an uber helper/utility class, that lets you
add "optimized for range/prefix" fields to docs, and spies on you to
determine which fields should then use a customized PF, and then gives
you sugar APIs to build range/prefix/equals queries... but even as
best/simple as I can make this class it still feels way too
weird/heavy/external/uncommittable.

Maybe we should just go back to the "always index auto-prefix terms on
DOCS_ONLY fields" approach, even though 1) I had to then choose weaker
defaults (less added index size; less performance gains), and 2) it's
a total waste to add such terms to NumericFields and probably spatial
fields which already build their own prefixes outside of Lucene.  This
is not a great solution either...


 Add auto-prefix terms to block tree terms dict
 --

 Key: LUCENE-5879
 URL: https://issues.apache.org/jira/browse/LUCENE-5879
 Project: Lucene - Core
  Issue Type: New Feature
  Components: core/codecs
Reporter: Michael McCandless
Assignee: Michael McCandless
 Fix For: 5.0, Trunk

 Attachments: LUCENE-5879.patch, LUCENE-5879.patch, LUCENE-5879.patch, 
 LUCENE-5879.patch, LUCENE-5879.patch, LUCENE-5879.patch, LUCENE-5879.patch, 
 LUCENE-5879.patch, LUCENE-5879.patch, LUCENE-5879.patch


 This cool idea to generalize numeric/trie fields came from Adrien:
 Today, when we index a numeric field (LongField, etc.) we pre-compute
 (via NumericTokenStream) outside of indexer/codec which prefix terms
 should be indexed.
 But this can be inefficient: you set a static precisionStep, and
 always add those prefix terms regardless of how the terms in the field
 are actually distributed.  Yet typically in real world applications
 the terms have a non-random distribution.
 So it would be better if, instead, the terms dict decided where it
 makes sense to insert prefix terms, based on how dense the terms are
 in each region of term space.
 This way we can speed up query time for both term (e.g. infix
 suggester) and numeric ranges, and it should let us use less index
 space and get faster range queries.
  
 This would also mean that min/maxTerm for a numeric field would now be
 correct, vs today where the externally computed prefix terms are
 placed after the full precision terms, causing hairy code like
 NumericUtils.getMaxInt/Long.  So optos like LUCENE-5860 become
 feasible.
 The terms dict can also do tricks not possible if you must live on top
 of its APIs, e.g. to handle the adversary/over-constrained case when a
 given prefix has too many terms following it but finer prefixes
 have too few (what block tree calls floor term blocks).
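
As a rough illustration of the idea described above (hypothetical numbers, not 
the block tree implementation): a dense run of terms sharing a prefix can be 
matched via one auto-prefix term instead of term-by-term.
{code}
import java.util.TreeSet;

// Hypothetical sketch: terms 420..429 all share the prefix "42", so a terms
// dict that indexed the auto-prefix term "42*" would let a range query over
// [420, 429] match one term instead of ten.
public class AutoPrefixSketch {
  public static void main(String[] args) {
    TreeSet<String> terms = new TreeSet<>();
    for (int i = 420; i <= 429; i++) {
      terms.add(Integer.toString(i));
    }
    System.out.println("terms visited without auto-prefix: " + terms.size()); // 10
    System.out.println("terms visited with auto-prefix:    1 (\"42*\")");
  }
}
{code}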



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-6583) Resuming connection with ZooKeeper causes log replay code to run

2014-10-03 Thread Shalin Shekhar Mangar (JIRA)
Shalin Shekhar Mangar created SOLR-6583:
---

 Summary: Resuming connection with ZooKeeper causes log replay code 
to run
 Key: SOLR-6583
 URL: https://issues.apache.org/jira/browse/SOLR-6583
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 4.10.1
Reporter: Shalin Shekhar Mangar
Assignee: Shalin Shekhar Mangar
Priority: Minor
 Fix For: 5.0, Trunk


If a node is partitioned from ZooKeeper for an extended period of time, then 
upon resuming connection the node re-registers itself, causing the 
recoverFromLog() method to be executed, which fails with the following exception:
{code}
8091124 [Thread-71] ERROR org.apache.solr.update.UpdateLog  – Error inspecting 
tlog 
tlog{file=/home/ubuntu/shalin-lusolr/solr/example/solr/collection_5x3_shard5_replica3/data/tlog/tlog.0009869
 refcount=2}
java.nio.channels.ClosedChannelException
at sun.nio.ch.FileChannelImpl.ensureOpen(FileChannelImpl.java:99)
at sun.nio.ch.FileChannelImpl.read(FileChannelImpl.java:678)
at 
org.apache.solr.update.ChannelFastInputStream.readWrappedStream(TransactionLog.java:784)
at 
org.apache.solr.common.util.FastInputStream.refill(FastInputStream.java:89)
at 
org.apache.solr.common.util.FastInputStream.read(FastInputStream.java:125)
at java.io.InputStream.read(InputStream.java:101)
at 
org.apache.solr.update.TransactionLog.endsWithCommit(TransactionLog.java:218)
at org.apache.solr.update.UpdateLog.recoverFromLog(UpdateLog.java:800)
at org.apache.solr.cloud.ZkController.register(ZkController.java:834)
at org.apache.solr.cloud.ZkController$1.command(ZkController.java:271)
at 
org.apache.solr.common.cloud.ConnectionManager$1$1.run(ConnectionManager.java:166)
8091125 [Thread-71] ERROR org.apache.solr.update.UpdateLog  – Error inspecting 
tlog 
tlog{file=/home/ubuntu/shalin-lusolr/solr/example/solr/collection_5x3_shard5_replica3/data/tlog/tlog.0009870
 refcount=2}
java.nio.channels.ClosedChannelException
at sun.nio.ch.FileChannelImpl.ensureOpen(FileChannelImpl.java:99)
at sun.nio.ch.FileChannelImpl.read(FileChannelImpl.java:678)
at 
org.apache.solr.update.ChannelFastInputStream.readWrappedStream(TransactionLog.java:784)
at 
org.apache.solr.common.util.FastInputStream.refill(FastInputStream.java:89)
at 
org.apache.solr.common.util.FastInputStream.read(FastInputStream.java:125)
at java.io.InputStream.read(InputStream.java:101)
at 
org.apache.solr.update.TransactionLog.endsWithCommit(TransactionLog.java:218)
at org.apache.solr.update.UpdateLog.recoverFromLog(UpdateLog.java:800)
at org.apache.solr.cloud.ZkController.register(ZkController.java:834)
at org.apache.solr.cloud.ZkController$1.command(ZkController.java:271)
at 
org.apache.solr.common.cloud.ConnectionManager$1$1.run(ConnectionManager.java:166)
{code}

This is because the recoverFromLog uses transaction log references that were 
collected at startup and are no longer valid.

We shouldn't even be running recoverFromLog code for ZK re-connect.
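
A minimal sketch of the kind of guard this implies (the class and field names 
here are hypothetical, not the actual ZkController code):
{code}
import java.util.concurrent.atomic.AtomicBoolean;

// Hypothetical sketch: replay the transaction log only on the first (startup)
// registration, and skip it when re-registering after a ZK reconnect, since
// the tlog references collected at startup are stale by then.
class RegistrationGuard {
  private final AtomicBoolean firstRegistration = new AtomicBoolean(true);

  void onRegister(Runnable recoverFromLog) {
    if (firstRegistration.getAndSet(false)) {
      recoverFromLog.run();  // startup: tlog references are still valid
    }
    // on re-connect: skip replay entirely
  }
}
{code}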



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5879) Add auto-prefix terms to block tree terms dict

2014-10-03 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5879?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14157857#comment-14157857
 ] 

Robert Muir commented on LUCENE-5879:
-

{quote}
but the nasty downside of
all this freedom is that new, complex features like this one, which
offer powerful improvements to the default codec that 99% of Lucene
users would have used, must either be implemented across the board for
all codecs (a very tall order) in order to have an intuitive API, or
must be exposed only via ridiculously expert codec-specific APIs.
{quote}

I don't think it's a downside of the freedom, it's just other problems.

However, there are way way too many experimental codecs. These are even more 
costly to maintain than backwards ones in some ways: they are rotated in all 
tests! For many recent changes I have spent just as much time fixing those as I 
have on backwards codecs. If we ever want to provide backwards compatibility 
for experimental codecs (which seems to confuse users constantly that we can't 
do this), then we have to tone them down anyway.

The existing trie-encoding is difficult to use, too. I don't think it should 
serve as your example for this feature. Remember that simple numeric range 
queries don't work with QP without the user doing subclassing, and numerics don't 
really work well with the parser at all because the analyzer is completely 
unaware of them (because, for some crazy reason, it is implemented as a 
tokenstream/special fields rather than being a more ordinary analysis chain 
integration).

The .document API is overengineered. I don't understand why it needs to be so 
complicated. Because it has already taken on more than it can chew, it's 
impossible to even think about how it could work with the codec api: and I 
think this is causing a lot of your frustration.

The whole way that lucene is schemaless is fucking bogus, and only means that 
it's on you, the user, to record and manage and track all this stuff yourself. 
It's no freedom to anyone, just pain. For example, we don't even know which 
fields have trie-encoded terms here, to do any kind of nice migration strategy 
from old numerics to this at all. That's really sad and will cause users just 
more pain and confusion.

FieldInfo is a hardcore place to add an experimental option when we aren't even 
sure how it should behave yet (e.g. should it really be limited to DOCS_ONLY? 
who knows?)

I can keep complaining too, we can rant about this stuff on this issue, but 
maybe you should commit what you have (yes, with the crappy hard-to-use codec 
option) so we can try to do something on another issue instead.

 Add auto-prefix terms to block tree terms dict
 --

 Key: LUCENE-5879
 URL: https://issues.apache.org/jira/browse/LUCENE-5879
 Project: Lucene - Core
  Issue Type: New Feature
  Components: core/codecs
Reporter: Michael McCandless
Assignee: Michael McCandless
 Fix For: 5.0, Trunk

 Attachments: LUCENE-5879.patch, LUCENE-5879.patch, LUCENE-5879.patch, 
 LUCENE-5879.patch, LUCENE-5879.patch, LUCENE-5879.patch, LUCENE-5879.patch, 
 LUCENE-5879.patch, LUCENE-5879.patch, LUCENE-5879.patch


 This cool idea to generalize numeric/trie fields came from Adrien:
 Today, when we index a numeric field (LongField, etc.) we pre-compute
 (via NumericTokenStream) outside of indexer/codec which prefix terms
 should be indexed.
 But this can be inefficient: you set a static precisionStep, and
 always add those prefix terms regardless of how the terms in the field
 are actually distributed.  Yet typically in real world applications
 the terms have a non-random distribution.
 So it would be better if, instead, the terms dict decided where it
 makes sense to insert prefix terms, based on how dense the terms are
 in each region of term space.
 This way we can speed up query time for both term (e.g. infix
 suggester) and numeric ranges, and it should let us use less index
 space and get faster range queries.
  
 This would also mean that min/maxTerm for a numeric field would now be
 correct, vs today where the externally computed prefix terms are
 placed after the full precision terms, causing hairy code like
 NumericUtils.getMaxInt/Long.  So optos like LUCENE-5860 become
 feasible.
 The terms dict can also do tricks not possible if you must live on top
 of its APIs, e.g. to handle the adversary/over-constrained case when a
 given prefix has too many terms following it but finer prefixes
 have too few (what block tree calls floor term blocks).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6545) Query field list with wild card on dynamic field fails

2014-10-03 Thread Sachin Kale (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6545?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14157863#comment-14157863
 ] 

Sachin Kale commented on SOLR-6545:
---

We are running 4.10.0 on Production and we are getting tons of 
NullPointerExceptions due to this bug. Though we are using a SolrCloud setup, we 
have only one shard, so it is basically a master-slave configuration. In one of 
the comments it is mentioned that this bug occurs only when doing distributed 
queries. How do I disable the distributed queries?

 Query field list with wild card on dynamic field fails
 --

 Key: SOLR-6545
 URL: https://issues.apache.org/jira/browse/SOLR-6545
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.10
 Environment: Mac OS X 10.9.5, Ubuntu 14.04.1 LTS
Reporter: Burke Webster
Assignee: Shalin Shekhar Mangar
Priority: Critical
 Attachments: SOLR-6545.patch


 Downloaded 4.10.0, unpacked, and setup a solrcloud 2-node cluster by running: 
   bin/solr -e cloud 
 Accepting all the default options and you will have a 2 node cloud running 
 with replication factor of 2.  
 Now add 2 documents by going to example/exampledocs, creating the following 
 file named my_test.xml:
 <add>
  <doc>
   <field name="id">1000</field>
   <field name="name">test 1</field>
   <field name="desc_t">Text about test 1.</field>
   <field name="cat_A_s">Category A</field>
  </doc>
  <doc>
   <field name="id">1001</field>
   <field name="name">test 2</field>
   <field name="desc_t">Stuff about test 2.</field>
   <field name="cat_B_s">Category B</field>
  </doc>
 </add>
 Then import these documents by running:
   java -Durl=http://localhost:7574/solr/gettingstarted/update -jar post.jar 
 my_test.xml
 Verify the docs are there by hitting:
   http://localhost:8983/solr/gettingstarted/select?q=*:*
 Now run a query and ask for only the id and cat_*_s fields:
   http://localhost:8983/solr/gettingstarted/select?q=*:*&fl=id,cat_*
 You will only get the id fields back.  Change the query a little to include a 
 third field:
   http://localhost:8983/solr/gettingstarted/select?q=*:*&fl=id,name,cat_*
 You will now get the following exception:
 java.lang.NullPointerException
   at 
 org.apache.solr.handler.component.QueryComponent.returnFields(QueryComponent.java:1257)
   at 
 org.apache.solr.handler.component.QueryComponent.handleRegularResponses(QueryComponent.java:720)
   at 
 org.apache.solr.handler.component.QueryComponent.handleResponses(QueryComponent.java:695)
   at 
 org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:324)
   at 
 org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
   at org.apache.solr.core.SolrCore.execute(SolrCore.java:1967)
   at 
 org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:777)
   at 
 org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:418)
   at 
 org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:207)
   at 
 org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1419)
   at 
 org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:455)
   at 
 org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137)
   at 
 org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:557)
   at 
 org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:231)
   at 
 org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1075)
   at 
 org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:384)
   at 
 org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:193)
   at 
 org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1009)
   at 
 org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135)
   at 
 org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:255)
   at 
 org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:154)
   at 
 org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:116)
   at org.eclipse.jetty.server.Server.handle(Server.java:368)
   at 
 org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:489)
   at 
 org.eclipse.jetty.server.BlockingHttpConnection.handleRequest(BlockingHttpConnection.java:53)
   at 
 org.eclipse.jetty.server.AbstractHttpConnection.headerComplete(AbstractHttpConnection.java:942)
   at 
 org.eclipse.jetty.server.AbstractHttpConnection$RequestHandler.headerComplete(AbstractHttpConnection.java:1004)
   at 

[JENKINS] Lucene-Solr-Tests-5.x-Java7 - Build # 2152 - Failure

2014-10-03 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-5.x-Java7/2152/

1 tests failed.
REGRESSION:  
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testDistribSearch

Error Message:
Error CREATEing SolrCore 'halfcollection_shard1_replica1': Unable to create 
core [halfcollection_shard1_replica1] Caused by: Could not get shard id for 
core: halfcollection_shard1_replica1

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrServer$RemoteSolrException: Error 
CREATEing SolrCore 'halfcollection_shard1_replica1': Unable to create core 
[halfcollection_shard1_replica1] Caused by: Could not get shard id for core: 
halfcollection_shard1_replica1
at 
__randomizedtesting.SeedInfo.seed([E6FF620F4DE9FB3E:6719EC173AB69B02]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.executeMethod(HttpSolrServer.java:570)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:215)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:211)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testErrorHandling(CollectionsAPIDistributedZkTest.java:583)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.doTest(CollectionsAPIDistributedZkTest.java:205)
at 
org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:869)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Commented] (LUCENE-5596) Support for index/search large numeric field

2014-10-03 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5596?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14157895#comment-14157895
 ] 

David Smiley commented on LUCENE-5596:
--

This is going to be made largely obsolete by LUCENE-5879.  All that will remain 
to do is to encode the IPv6 address into a single term, probably with the 16-byte 
representation, and that's it.  Alternatively you might use half of each byte 
and thus use 32 bytes, which could result in even faster range queries... but I 
probably wouldn't bother.
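
A minimal sketch of the 16-byte encoding described above, using only the JDK 
(wrapping the bytes into an indexable term is left out, since how a field would 
feed them to the codec is exactly what LUCENE-5879 changes):
{code}
import java.net.InetAddress;
import java.net.UnknownHostException;

// Hypothetical sketch: an IPv6 address as its raw 16-byte form, which sorts
// byte-wise (unsigned) in the same order as the 128-bit numeric value.
public class Ipv6TermSketch {
  public static byte[] ipv6Term(String address) throws UnknownHostException {
    return InetAddress.getByName(address).getAddress();  // 16 bytes for IPv6
  }

  public static void main(String[] args) throws Exception {
    System.out.println(ipv6Term("2001:db8::1").length);  // prints 16
  }
}
{code}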

 Support for index/search large numeric field
 

 Key: LUCENE-5596
 URL: https://issues.apache.org/jira/browse/LUCENE-5596
 Project: Lucene - Core
  Issue Type: New Feature
Reporter: Kevin Wang
Assignee: Uwe Schindler
 Attachments: LUCENE-5596.patch, LUCENE-5596.patch


 Currently if a number is larger than Long.MAX_VALUE, we can't index/search 
 it in Lucene as a number. For example, an IPv6 address is a 128-bit number, 
 so we can't index it as a numeric field and do numeric range queries etc.
 It would be good to support BigInteger / BigDecimal.
 I've tried using BigInteger for IPv6 in Elasticsearch and that works fine, but 
 there are still lots of things to do:
 https://github.com/elasticsearch/elasticsearch/pull/5758



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Closed] (LUCENE-5596) Support for index/search large numeric field

2014-10-03 Thread Adrien Grand (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5596?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrien Grand closed LUCENE-5596.

Resolution: Not a Problem

Agreed, I think we can close this issue.

 Support for index/search large numeric field
 

 Key: LUCENE-5596
 URL: https://issues.apache.org/jira/browse/LUCENE-5596
 Project: Lucene - Core
  Issue Type: New Feature
Reporter: Kevin Wang
Assignee: Uwe Schindler
 Attachments: LUCENE-5596.patch, LUCENE-5596.patch


 Currently if a number is larger than Long.MAX_VALUE, we can't index/search 
 it in Lucene as a number. For example, an IPv6 address is a 128-bit number, 
 so we can't index it as a numeric field and do numeric range queries etc.
 It would be good to support BigInteger / BigDecimal.
 I've tried using BigInteger for IPv6 in Elasticsearch and that works fine, but 
 there are still lots of things to do:
 https://github.com/elasticsearch/elasticsearch/pull/5758



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-6583) Resuming connection with ZooKeeper causes log replay

2014-10-03 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6583?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar updated SOLR-6583:

Summary: Resuming connection with ZooKeeper causes log replay  (was: 
Resuming connection with ZooKeeper causes log replay code)

 Resuming connection with ZooKeeper causes log replay
 

 Key: SOLR-6583
 URL: https://issues.apache.org/jira/browse/SOLR-6583
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 4.10.1
Reporter: Shalin Shekhar Mangar
Assignee: Shalin Shekhar Mangar
Priority: Minor
 Fix For: 5.0, Trunk


 If a node is partitioned from ZooKeeper for an extended period of time, then 
 upon resuming connection the node re-registers itself, causing the 
 recoverFromLog() method to be executed, which fails with the following 
 exception:
 {code}
 8091124 [Thread-71] ERROR org.apache.solr.update.UpdateLog  – Error 
 inspecting tlog 
 tlog{file=/home/ubuntu/shalin-lusolr/solr/example/solr/collection_5x3_shard5_replica3/data/tlog/tlog.0009869
  refcount=2}
 java.nio.channels.ClosedChannelException
 at sun.nio.ch.FileChannelImpl.ensureOpen(FileChannelImpl.java:99)
 at sun.nio.ch.FileChannelImpl.read(FileChannelImpl.java:678)
 at 
 org.apache.solr.update.ChannelFastInputStream.readWrappedStream(TransactionLog.java:784)
 at 
 org.apache.solr.common.util.FastInputStream.refill(FastInputStream.java:89)
 at 
 org.apache.solr.common.util.FastInputStream.read(FastInputStream.java:125)
 at java.io.InputStream.read(InputStream.java:101)
 at 
 org.apache.solr.update.TransactionLog.endsWithCommit(TransactionLog.java:218)
 at org.apache.solr.update.UpdateLog.recoverFromLog(UpdateLog.java:800)
 at org.apache.solr.cloud.ZkController.register(ZkController.java:834)
 at org.apache.solr.cloud.ZkController$1.command(ZkController.java:271)
 at 
 org.apache.solr.common.cloud.ConnectionManager$1$1.run(ConnectionManager.java:166)
 8091125 [Thread-71] ERROR org.apache.solr.update.UpdateLog  – Error 
 inspecting tlog 
 tlog{file=/home/ubuntu/shalin-lusolr/solr/example/solr/collection_5x3_shard5_replica3/data/tlog/tlog.0009870
  refcount=2}
 java.nio.channels.ClosedChannelException
 at sun.nio.ch.FileChannelImpl.ensureOpen(FileChannelImpl.java:99)
 at sun.nio.ch.FileChannelImpl.read(FileChannelImpl.java:678)
 at 
 org.apache.solr.update.ChannelFastInputStream.readWrappedStream(TransactionLog.java:784)
 at 
 org.apache.solr.common.util.FastInputStream.refill(FastInputStream.java:89)
 at 
 org.apache.solr.common.util.FastInputStream.read(FastInputStream.java:125)
 at java.io.InputStream.read(InputStream.java:101)
 at 
 org.apache.solr.update.TransactionLog.endsWithCommit(TransactionLog.java:218)
 at org.apache.solr.update.UpdateLog.recoverFromLog(UpdateLog.java:800)
 at org.apache.solr.cloud.ZkController.register(ZkController.java:834)
 at org.apache.solr.cloud.ZkController$1.command(ZkController.java:271)
 at 
 org.apache.solr.common.cloud.ConnectionManager$1$1.run(ConnectionManager.java:166)
 {code}
 This is because the recoverFromLog uses transaction log references that were 
 collected at startup and are no longer valid.
 We shouldn't even be running recoverFromLog code for ZK re-connect.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-6583) Resuming connection with ZooKeeper causes log replay code

2014-10-03 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6583?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar updated SOLR-6583:

Summary: Resuming connection with ZooKeeper causes log replay code  (was: 
Resuming connection with ZooKeeper causes log replay code to run)

 Resuming connection with ZooKeeper causes log replay code
 -

 Key: SOLR-6583
 URL: https://issues.apache.org/jira/browse/SOLR-6583
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 4.10.1
Reporter: Shalin Shekhar Mangar
Assignee: Shalin Shekhar Mangar
Priority: Minor
 Fix For: 5.0, Trunk


 If a node is partitioned from ZooKeeper for an extended period of time, then 
 upon resuming connection the node re-registers itself, causing the 
 recoverFromLog() method to be executed, which fails with the following 
 exception:
 {code}
 8091124 [Thread-71] ERROR org.apache.solr.update.UpdateLog  – Error 
 inspecting tlog 
 tlog{file=/home/ubuntu/shalin-lusolr/solr/example/solr/collection_5x3_shard5_replica3/data/tlog/tlog.0009869
  refcount=2}
 java.nio.channels.ClosedChannelException
 at sun.nio.ch.FileChannelImpl.ensureOpen(FileChannelImpl.java:99)
 at sun.nio.ch.FileChannelImpl.read(FileChannelImpl.java:678)
 at 
 org.apache.solr.update.ChannelFastInputStream.readWrappedStream(TransactionLog.java:784)
 at 
 org.apache.solr.common.util.FastInputStream.refill(FastInputStream.java:89)
 at 
 org.apache.solr.common.util.FastInputStream.read(FastInputStream.java:125)
 at java.io.InputStream.read(InputStream.java:101)
 at 
 org.apache.solr.update.TransactionLog.endsWithCommit(TransactionLog.java:218)
 at org.apache.solr.update.UpdateLog.recoverFromLog(UpdateLog.java:800)
 at org.apache.solr.cloud.ZkController.register(ZkController.java:834)
 at org.apache.solr.cloud.ZkController$1.command(ZkController.java:271)
 at 
 org.apache.solr.common.cloud.ConnectionManager$1$1.run(ConnectionManager.java:166)
 8091125 [Thread-71] ERROR org.apache.solr.update.UpdateLog  – Error 
 inspecting tlog 
 tlog{file=/home/ubuntu/shalin-lusolr/solr/example/solr/collection_5x3_shard5_replica3/data/tlog/tlog.0009870
  refcount=2}
 java.nio.channels.ClosedChannelException
 at sun.nio.ch.FileChannelImpl.ensureOpen(FileChannelImpl.java:99)
 at sun.nio.ch.FileChannelImpl.read(FileChannelImpl.java:678)
 at 
 org.apache.solr.update.ChannelFastInputStream.readWrappedStream(TransactionLog.java:784)
 at 
 org.apache.solr.common.util.FastInputStream.refill(FastInputStream.java:89)
 at 
 org.apache.solr.common.util.FastInputStream.read(FastInputStream.java:125)
 at java.io.InputStream.read(InputStream.java:101)
 at 
 org.apache.solr.update.TransactionLog.endsWithCommit(TransactionLog.java:218)
 at org.apache.solr.update.UpdateLog.recoverFromLog(UpdateLog.java:800)
 at org.apache.solr.cloud.ZkController.register(ZkController.java:834)
 at org.apache.solr.cloud.ZkController$1.command(ZkController.java:271)
 at 
 org.apache.solr.common.cloud.ConnectionManager$1$1.run(ConnectionManager.java:166)
 {code}
 This is because recoverFromLog uses transaction log references that were 
 collected at startup and are no longer valid.
 We shouldn't even be running recoverFromLog code for ZK re-connect.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-6510) select?collapse=... - fix NPE in Collapsing(FieldValue|Score)Collector

2014-10-03 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6510?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-6510:
-
Fix Version/s: 4.10.2

 select?collapse=... - fix NPE in Collapsing(FieldValue|Score)Collector
 --

 Key: SOLR-6510
 URL: https://issues.apache.org/jira/browse/SOLR-6510
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.8.1
Reporter: Christine Poerschke
Assignee: Joel Bernstein
Priority: Minor
 Fix For: 4.10.2


 Affects branch_4x but not trunk, collapse field must be docValues=true and 
 shard empty (or with nothing indexed for the field?).
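 For illustration, a guard along these lines (a sketch only, assuming the 
 collector pulls per-segment SortedDocValues; the committed fix may differ) 
 would avoid the NPE on an empty shard:
 {code}
 // Sketch: on a shard with nothing indexed for the collapse field,
 // getSortedDocValues() can return null, so substitute an empty instance
 // instead of dereferencing it later.
 SortedDocValues values = reader.getSortedDocValues(collapseField);
 if (values == null) {
   values = DocValues.emptySorted(); // empty fallback helper
 }
 {code}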



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6510) select?collapse=... - fix NPE in Collapsing(FieldValue|Score)Collector

2014-10-03 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6510?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14157927#comment-14157927
 ] 

Joel Bernstein commented on SOLR-6510:
--

Ok, let's assume there'll be a 4.10.2. 

David, if you can take this one, that would be great. Trunk and 5x shouldn't be 
affected by this, though, and I'll confirm while working on SOLR-6581.

 select?collapse=... - fix NPE in Collapsing(FieldValue|Score)Collector
 --

 Key: SOLR-6510
 URL: https://issues.apache.org/jira/browse/SOLR-6510
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.8.1
Reporter: Christine Poerschke
Assignee: Joel Bernstein
Priority: Minor
 Fix For: 4.10.2


 Affects branch_4x but not trunk, collapse field must be docValues=true and 
 shard empty (or with nothing indexed for the field?).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-6510) select?collapse=... - fix NPE in Collapsing(FieldValue|Score)Collector

2014-10-03 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6510?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-6510:
-
Assignee: (was: Joel Bernstein)

 select?collapse=... - fix NPE in Collapsing(FieldValue|Score)Collector
 --

 Key: SOLR-6510
 URL: https://issues.apache.org/jira/browse/SOLR-6510
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.8.1
Reporter: Christine Poerschke
Priority: Minor
 Fix For: 4.10.2


 Affects branch_4x but not trunk, collapse field must be docValues=true and 
 shard empty (or with nothing indexed for the field?).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-6584) Export handler trips bug in prefetch with very small indexes.

2014-10-03 Thread Joel Bernstein (JIRA)
Joel Bernstein created SOLR-6584:


 Summary: Export handler trips bug in prefetch with very small 
indexes.
 Key: SOLR-6584
 URL: https://issues.apache.org/jira/browse/SOLR-6584
 Project: Solr
  Issue Type: Bug
Reporter: Joel Bernstein
Priority: Minor


When there are very few documents in the index the ExportQParserPlugin is 
creating a dummy docList which is larger than the number of documents in the 
index. This causes a bug during the prefetch stage of the QueryComponent.

There really needs to be two fixes here.

1) The dummy docList should never be larger than the number of documents in the 
index.

2) Prefetch should be turned off during exports as it's not doing anything 
useful.
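A minimal sketch of fix 1 (variable names here are illustrative only):
{code}
// Sketch of fix 1: clamp the dummy docList size so it can never exceed
// the number of documents actually present in the index.
int size = Math.min(requestedSize, searcher.maxDoc());
{code}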







--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-6584) Export handler causes bug in prefetch with very small indexes.

2014-10-03 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6584?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-6584:
-
Summary: Export handler causes bug in prefetch with very small indexes.  
(was: Export handler trips bug in prefetch with very small indexes.)

 Export handler causes bug in prefetch with very small indexes.
 --

 Key: SOLR-6584
 URL: https://issues.apache.org/jira/browse/SOLR-6584
 Project: Solr
  Issue Type: Bug
Reporter: Joel Bernstein
Priority: Minor

 When there are very few documents in the index the ExportQParserPlugin is 
 creating a dummy docList which is larger than the number of documents in the 
 index. This causes a bug during the prefetch stage of the QueryComponent.
 There really needs to be two fixes here.
 1) The dummy docList should never be larger than the number of documents in 
 the index.
 2) Prefetch should be turned off during exports as it's not doing anything 
 useful.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-6585) Let a requestHandler handle sub paths as well

2014-10-03 Thread Noble Paul (JIRA)
Noble Paul created SOLR-6585:


 Summary: Let a requestHandler handle sub paths as well
 Key: SOLR-6585
 URL: https://issues.apache.org/jira/browse/SOLR-6585
 Project: Solr
  Issue Type: Sub-task
Reporter: Noble Paul
Assignee: Noble Paul


If a request handler is registered at /path, it should be able to handle 
/path/a or /path/x/y if it chooses to, without explicitly registering those 
paths. This will only work if those full paths are not explicitly registered.
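A rough sketch of the fallback lookup this implies (the map and types are 
illustrative, not the attached patch):
{code}
// Sketch: resolve /path/x/y by walking up its segments until an explicitly
// registered handler is found; exact registrations always take precedence.
SolrRequestHandler handler = handlers.get(path);
while (handler == null && path.lastIndexOf('/') > 0) {
  path = path.substring(0, path.lastIndexOf('/'));
  handler = handlers.get(path);
}
{code}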



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-6585) Let a requestHandler handle sub paths as well

2014-10-03 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6585?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul updated SOLR-6585:
-
Attachment: SOLR-6585.patch

no testcases yet

 Let a requestHandler handle sub paths as well
 -

 Key: SOLR-6585
 URL: https://issues.apache.org/jira/browse/SOLR-6585
 Project: Solr
  Issue Type: Sub-task
Reporter: Noble Paul
Assignee: Noble Paul
 Attachments: SOLR-6585.patch


 If a request handler is registered at /path, it should be able to handle 
 /path/a or /path/x/y if it chooses to, without explicitly registering those 
 paths. This will only work if those full paths are not explicitly registered.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-6266) Couchbase plug-in for Solr

2014-10-03 Thread Karol Abramczyk (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6266?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karol Abramczyk updated SOLR-6266:
--
Attachment: solr-couchbase-plugin-0.0.5-SNAPSHOT.tar.gz

I fixed a couple of critical errors in this plugin, like setting the numVBuckets 
parameter for running this plugin on Macs, synchronization speed improvements, 
and using ant fixcrlf instead of executing dos2unix to patch solr configuration 
files, among a few more. I'm attaching the latest 0.0.5 snapshot.

 Couchbase plug-in for Solr
 --

 Key: SOLR-6266
 URL: https://issues.apache.org/jira/browse/SOLR-6266
 Project: Solr
  Issue Type: New Feature
Reporter: Varun
Assignee: Joel Bernstein
 Attachments: solr-couchbase-plugin-0.0.3-SNAPSHOT.tar.gz, 
 solr-couchbase-plugin-0.0.5-SNAPSHOT.tar.gz, solr-couchbase-plugin.tar.gz, 
 solr-couchbase-plugin.tar.gz


 It would be great if users could connect Couchbase and Solr so that updates 
 to Couchbase can automatically flow to Solr. Couchbase provides some very 
 nice API's which allow applications to mimic the behavior of a Couchbase 
 server so that it can receive updates via Couchbase's normal cross data 
 center replication (XDCR).
 One possible design for this is to create a CouchbaseLoader that extends 
 ContentStreamLoader. This new loader would embed the couchbase api's that 
 listen for incoming updates from couchbase, then marshal the couchbase 
 updates into the normal Solr update process. 
 Instead of marshaling couchbase updates into the normal Solr update process, 
 we could also embed a SolrJ client to relay the request through the http 
 interfaces. This may be necessary if we have to handle mapping couchbase 
 buckets to Solr collections on the Solr side. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-5.x-Linux (64bit/jdk1.8.0_40-ea-b04) - Build # 11224 - Failure!

2014-10-03 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/11224/
Java: 64bit/jdk1.8.0_40-ea-b04 -XX:+UseCompressedOops -XX:+UseParallelGC

1 tests failed.
REGRESSION:  
org.apache.solr.cloud.DeleteLastCustomShardedReplicaTest.testDistribSearch

Error Message:
No live SolrServers available to handle this request:[https://127.0.0.1:55457, 
https://127.0.0.1:51327, https://127.0.0.1:38369]

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: No live SolrServers available 
to handle this request:[https://127.0.0.1:55457, https://127.0.0.1:51327, 
https://127.0.0.1:38369]
at 
__randomizedtesting.SeedInfo.seed([4906A6A22C65089C:C8E028BA5B3A68A0]:0)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrServer.request(LBHttpSolrServer.java:322)
at 
org.apache.solr.client.solrj.impl.CloudSolrServer.sendRequest(CloudSolrServer.java:880)
at 
org.apache.solr.client.solrj.impl.CloudSolrServer.requestWithRetryOnStaleState(CloudSolrServer.java:658)
at 
org.apache.solr.client.solrj.impl.CloudSolrServer.request(CloudSolrServer.java:601)
at 
org.apache.solr.cloud.DeleteLastCustomShardedReplicaTest.removeAndWaitForLastReplicaGone(DeleteLastCustomShardedReplicaTest.java:117)
at 
org.apache.solr.cloud.DeleteLastCustomShardedReplicaTest.doTest(DeleteLastCustomShardedReplicaTest.java:107)
at 
org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:869)
at sun.reflect.GeneratedMethodAccessor46.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Commented] (SOLR-6351) Let Stats Hang off of Pivots (via 'tag')

2014-10-03 Thread Steve Molloy (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6351?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14157990#comment-14157990
 ] 

Steve Molloy commented on SOLR-6351:


Ok, applied locally and I see that most is combined. One thing though: what is 
the expected behavior for pivots where the count is 0? Currently, you'll get the 
full entry with NaN, infinity and such in it. Should it be null or empty 
instead? Or should it even show up at all?

 Let Stats Hang off of Pivots (via 'tag')
 

 Key: SOLR-6351
 URL: https://issues.apache.org/jira/browse/SOLR-6351
 Project: Solr
  Issue Type: Sub-task
Reporter: Hoss Man
 Attachments: SOLR-6351.patch, SOLR-6351.patch, SOLR-6351.patch


 The goal here is basically to flip the notion of stats.facet on its head, so 
 that instead of asking the stats component to also do some faceting 
 (something that's never worked well with the variety of field types and has 
 never worked in distributed mode) we instead ask the PivotFacet code to 
 compute some stats X for each leaf in a pivot.  We'll do this with the 
 existing {{stats.field}} params, but we'll leverage the {{tag}} local param 
 of the {{stats.field}} instances to be able to associate which stats we want 
 hanging off of which {{facet.pivot}}.
 Example...
 {noformat}
 facet.pivot={!stats=s1}category,manufacturer
 stats.field={!key=avg_price tag=s1 mean=true}price
 stats.field={!tag=s1 min=true max=true}user_rating
 {noformat}
 ...with the request above, in addition to computing the min/max user_rating 
 and mean price (labeled avg_price) over the entire result set, the 
 PivotFacet component will also include those stats for every node of the tree 
 it builds up when generating a pivot of the fields category,manufacturer



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5969) Add Lucene50Codec

2014-10-03 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14158029#comment-14158029
 ] 

ASF subversion and git services commented on LUCENE-5969:
-

Commit 1629207 from [~rcmuir] in branch 'dev/branches/lucene5969'
[ https://svn.apache.org/r1629207 ]

LUCENE-5969: add cfs to TestIWExceptions2

 Add Lucene50Codec
 -

 Key: LUCENE-5969
 URL: https://issues.apache.org/jira/browse/LUCENE-5969
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Michael McCandless
 Fix For: 5.0, Trunk

 Attachments: LUCENE-5969.patch, LUCENE-5969.patch


 Spinoff from LUCENE-5952:
   * Fix .si to write Version as 3 ints, not a String that requires parsing at 
 read time.
   * Lucene42TermVectorsFormat should not use the same codecName as 
 Lucene41StoredFieldsFormat
 It would also be nice if we had a bumpCodecVersion script so rolling a new 
 codec is not so daunting.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-6584) Export handler causes bug in prefetch with very small indexes.

2014-10-03 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6584?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-6584:
-
Attachment: SOLR-6584.patch

Patch that ensures that doclist will never be larger than the number of docs in 
the index.

Actually turning off prefetch will involve adding new parameters to Solr. The 
dummy doclist will be cached in the document cache after the first run anyway 
so the pre-fetch will have very little impact on performance. So I think it can 
remain for now.

 Export handler causes bug in prefetch with very small indexes.
 --

 Key: SOLR-6584
 URL: https://issues.apache.org/jira/browse/SOLR-6584
 Project: Solr
  Issue Type: Bug
Reporter: Joel Bernstein
Priority: Minor
 Attachments: SOLR-6584.patch


 When there are very few documents in the index the ExportQParserPlugin is 
 creating a dummy docList which is larger than the number of documents in the 
 index. This causes a bug during the prefetch stage of the QueryComponent.
 There really needs to be two fixes here.
 1) The dummy docList should never be larger than the number of documents in 
 the index.
 2) Prefetch should be turned off during exports as it's not doing anything 
 useful.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-6584) Export handler causes bug in prefetch with very small indexes.

2014-10-03 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6584?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14158039#comment-14158039
 ] 

Joel Bernstein edited comment on SOLR-6584 at 10/3/14 2:35 PM:
---

Patch that ensures that the doclist will never be larger than the number of 
docs in the index.

Actually turning off prefetch will involve adding new parameters to Solr. The 
dummy doclist will be cached in the document cache after the first run anyway 
so the pre-fetch will have very little impact on performance. So I think it can 
remain for now.


was (Author: joel.bernstein):
Patch that ensures that doclist will never be larger than the number of docs in 
the index.

Actually turning off prefetch will involve adding new parameters to Solr. The 
dummy doclist will be cached in the document cache after the first run anyway 
so the pre-fetch will have very little impact on performance. So I think it can 
remain for now.

 Export handler causes bug in prefetch with very small indexes.
 --

 Key: SOLR-6584
 URL: https://issues.apache.org/jira/browse/SOLR-6584
 Project: Solr
  Issue Type: Bug
Reporter: Joel Bernstein
Priority: Minor
 Attachments: SOLR-6584.patch


 When there are very few documents in the index the ExportQParserPlugin is 
 creating a dummy docList which is larger than the number of documents in the 
 index. This causes a bug during the prefetch stage of the QueryComponent.
 There really needs to be two fixes here.
 1) The dummy docList should never be larger than the number of documents in 
 the index.
 2) Prefetch should be turned off during exports as it's not doing anything 
 useful.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Reopened] (SOLR-6511) Fencepost error in LeaderInitiatedRecoveryThread

2014-10-03 Thread Timothy Potter (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6511?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothy Potter reopened SOLR-6511:
--

Re-opening this to address the problem Shalin noticed.

 Fencepost error in LeaderInitiatedRecoveryThread
 

 Key: SOLR-6511
 URL: https://issues.apache.org/jira/browse/SOLR-6511
 Project: Solr
  Issue Type: Bug
Reporter: Alan Woodward
Assignee: Timothy Potter
 Fix For: 5.0

 Attachments: SOLR-6511.patch, SOLR-6511.patch


 At line 106:
 {code}
 while (continueTrying && ++tries < maxTries) {
 {code}
 should be
 {code}
 while (continueTrying && ++tries <= maxTries) {
 {code}
 This is only a problem when called from DistributedUpdateProcessor, as it can 
 have maxTries set to 1, which means the loop is never actually run.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6249) Schema API changes return success before all cores are updated

2014-10-03 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6249?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14158089#comment-14158089
 ] 

ASF subversion and git services commented on SOLR-6249:
---

Commit 1629229 from [~thelabdude] in branch 'dev/trunk'
[ https://svn.apache.org/r1629229 ]

SOLR-6249: support re-establishing a new watcher on the managed schema znode 
after zk session expiration.

 Schema API changes return success before all cores are updated
 --

 Key: SOLR-6249
 URL: https://issues.apache.org/jira/browse/SOLR-6249
 Project: Solr
  Issue Type: Improvement
  Components: Schema and Analysis, SolrCloud
Reporter: Gregory Chanan
Assignee: Timothy Potter
 Attachments: SOLR-6249.patch, SOLR-6249.patch, SOLR-6249.patch, 
 SOLR-6249_reconnect.patch, SOLR-6249_reconnect.patch


 See SOLR-6137 for more details.
 The basic issue is that Schema API changes return success when the first core 
 is updated, but other cores asynchronously read the updated schema from 
 ZooKeeper.
 So a client application could make a Schema API change and then index some 
 documents based on the new schema that may fail on other nodes.
 Possible fixes:
 1) Make the Schema API calls synchronous
 2) Give the client some ability to track the state of the schema.  They can 
 already do this to a certain extent by checking the Schema API on all the 
 replicas and verifying that the field has been added, though this is pretty 
 cumbersome.  Maybe it makes more sense to do this sort of thing on the 
 collection level, i.e. Schema API changes return the zk version to the 
 client.  We add an API to return the current zk version.  On a replica, if 
 the zk version is >= the version the client has, the client knows that 
 replica has at least seen the schema change.  We could also provide an API to 
 do the distribution and checking across the different replicas of the 
 collection so that clients don't need to do that themselves.
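 To illustrate option 2, a client-side wait might look roughly like this 
 (hypothetical helper names; this is not an existing SolrJ API):
 {code}
 // Sketch of option 2: after a schema change returns zk version N, poll
 // each replica until it reports a schema znode version >= N.
 void waitForSchema(int expected, List<String> replicaUrls) throws Exception {
   for (String replicaUrl : replicaUrls) {
     while (getSchemaZkVersion(replicaUrl) < expected) { // hypothetical helper
       Thread.sleep(250); // back off before re-checking
     }
   }
 }
 {code}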



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6249) Schema API changes return success before all cores are updated

2014-10-03 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6249?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14158134#comment-14158134
 ] 

ASF subversion and git services commented on SOLR-6249:
---

Commit 1629246 from [~thelabdude] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1629246 ]

SOLR-6249: support re-establishing a new watcher on the managed schema znode 
after zk session expiration.

 Schema API changes return success before all cores are updated
 --

 Key: SOLR-6249
 URL: https://issues.apache.org/jira/browse/SOLR-6249
 Project: Solr
  Issue Type: Improvement
  Components: Schema and Analysis, SolrCloud
Reporter: Gregory Chanan
Assignee: Timothy Potter
 Fix For: 5.0

 Attachments: SOLR-6249.patch, SOLR-6249.patch, SOLR-6249.patch, 
 SOLR-6249_reconnect.patch, SOLR-6249_reconnect.patch


 See SOLR-6137 for more details.
 The basic issue is that Schema API changes return success when the first core 
 is updated, but other cores asynchronously read the updated schema from 
 ZooKeeper.
 So a client application could make a Schema API change and then index some 
 documents based on the new schema that may fail on other nodes.
 Possible fixes:
 1) Make the Schema API calls synchronous
 2) Give the client some ability to track the state of the schema.  They can 
 already do this to a certain extent by checking the Schema API on all the 
 replicas and verifying that the field has been added, though this is pretty 
 cumbersome.  Maybe it makes more sense to do this sort of thing on the 
 collection level, i.e. Schema API changes return the zk version to the 
 client.  We add an API to return the current zk version.  On a replica, if 
 the zk version is >= the version the client has, the client knows that 
 replica has at least seen the schema change.  We could also provide an API to 
 do the distribution and checking across the different replicas of the 
 collection so that clients don't need to do that themselves.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-6249) Schema API changes return success before all cores are updated

2014-10-03 Thread Timothy Potter (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6249?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothy Potter resolved SOLR-6249.
--
   Resolution: Fixed
Fix Version/s: 5.0

 Schema API changes return success before all cores are updated
 --

 Key: SOLR-6249
 URL: https://issues.apache.org/jira/browse/SOLR-6249
 Project: Solr
  Issue Type: Improvement
  Components: Schema and Analysis, SolrCloud
Reporter: Gregory Chanan
Assignee: Timothy Potter
 Fix For: 5.0

 Attachments: SOLR-6249.patch, SOLR-6249.patch, SOLR-6249.patch, 
 SOLR-6249_reconnect.patch, SOLR-6249_reconnect.patch


 See SOLR-6137 for more details.
 The basic issue is that Schema API changes return success when the first core 
 is updated, but other cores asynchronously read the updated schema from 
 ZooKeeper.
 So a client application could make a Schema API change and then index some 
 documents based on the new schema that may fail on other nodes.
 Possible fixes:
 1) Make the Schema API calls synchronous
 2) Give the client some ability to track the state of the schema.  They can 
 already do this to a certain extent by checking the Schema API on all the 
 replicas and verifying that the field has been added, though this is pretty 
 cumbersome.  Maybe it makes more sense to do this sort of thing on the 
 collection level, i.e. Schema API changes return the zk version to the 
 client.  We add an API to return the current zk version.  On a replica, if 
 the zk version is >= the version the client has, the client knows that 
 replica has at least seen the schema change.  We could also provide an API to 
 do the distribution and checking across the different replicas of the 
 collection so that clients don't need to do that themselves.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-6586) JmxMonitoredMap#getAttribute is not very efficient.

2014-10-03 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6586?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller updated SOLR-6586:
--
Summary: JmxMonitoredMap#getAttribute is not very efficient.  (was: 
JmxMonitoredMap#getAtrribute is not very efficient.)

 JmxMonitoredMap#getAttribute is not very efficient.
 ---

 Key: SOLR-6586
 URL: https://issues.apache.org/jira/browse/SOLR-6586
 Project: Solr
  Issue Type: Improvement
Reporter: Mark Miller
Assignee: Mark Miller





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-6586) JmxMonitoredMap#getAtrribute is not very efficient.

2014-10-03 Thread Mark Miller (JIRA)
Mark Miller created SOLR-6586:
-

 Summary: JmxMonitoredMap#getAtrribute is not very efficient.
 Key: SOLR-6586
 URL: https://issues.apache.org/jira/browse/SOLR-6586
 Project: Solr
  Issue Type: Improvement
Reporter: Mark Miller
Assignee: Mark Miller






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6586) JmxMonitoredMap#getAttribute is not very efficient.

2014-10-03 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6586?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14158182#comment-14158182
 ] 

Mark Miller commented on SOLR-6586:
---

When using JmxMonitoredMap in a pattern of:

JmxMonitoredMap#getMBeanInfo // to get the attributes
JmxMonitoredMap#getAttribute
JmxMonitoredMap#getAttribute

 JmxMonitoredMap#getAttribute is not very efficient.
 ---

 Key: SOLR-6586
 URL: https://issues.apache.org/jira/browse/SOLR-6586
 Project: Solr
  Issue Type: Improvement
Reporter: Mark Miller
Assignee: Mark Miller





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-trunk-Windows (64bit/jdk1.8.0_40-ea-b04) - Build # 4351 - Failure!

2014-10-03 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Windows/4351/
Java: 64bit/jdk1.8.0_40-ea-b04 -XX:+UseCompressedOops -XX:+UseG1GC

1 tests failed.
REGRESSION:  org.apache.solr.cloud.DeleteReplicaTest.testDistribSearch

Error Message:
No live SolrServers available to handle this 
request:[http://127.0.0.1:58010/rh_ya/i, http://127.0.0.1:58025/rh_ya/i, 
http://127.0.0.1:58034/rh_ya/i, http://127.0.0.1:58043/rh_ya/i, 
http://127.0.0.1:58052/rh_ya/i]

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: No live SolrServers available 
to handle this request:[http://127.0.0.1:58010/rh_ya/i, 
http://127.0.0.1:58025/rh_ya/i, http://127.0.0.1:58034/rh_ya/i, 
http://127.0.0.1:58043/rh_ya/i, http://127.0.0.1:58052/rh_ya/i]
at 
__randomizedtesting.SeedInfo.seed([87FC3AD3713174D4:61AB4CB066E14E8]:0)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrServer.request(LBHttpSolrServer.java:322)
at 
org.apache.solr.client.solrj.impl.CloudSolrServer.sendRequest(CloudSolrServer.java:880)
at 
org.apache.solr.client.solrj.impl.CloudSolrServer.requestWithRetryOnStaleState(CloudSolrServer.java:658)
at 
org.apache.solr.client.solrj.impl.CloudSolrServer.request(CloudSolrServer.java:601)
at 
org.apache.solr.cloud.DeleteReplicaTest.removeAndWaitForReplicaGone(DeleteReplicaTest.java:172)
at 
org.apache.solr.cloud.DeleteReplicaTest.deleteLiveReplicaTest(DeleteReplicaTest.java:145)
at 
org.apache.solr.cloud.DeleteReplicaTest.doTest(DeleteReplicaTest.java:89)
at 
org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:869)
at sun.reflect.GeneratedMethodAccessor50.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
 

[jira] [Comment Edited] (SOLR-6586) JmxMonitoredMap#getAttribute is not very efficient.

2014-10-03 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6586?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14158182#comment-14158182
 ] 

Mark Miller edited comment on SOLR-6586 at 10/3/14 4:56 PM:


When using JmxMonitoredMap in a pattern of:

JmxMonitoredMap#getMBeanInfo // to get the attributes
JmxMonitoredMap#getAttribute
JmxMonitoredMap#getAttribute
...

Each call of getAttribute calls getStatistics on the SolrInfoMBean.

If there is any expense to the getStatistics call, this can be fairly painful. 
For example, the ReplicationHandler is registered so that its getStatistics 
needs to be called twice if you go through all of the SolrInfoMBeans. However, 
because it's called for each attribute, it's actually called 2 * the number of 
attributes times. Because the replication handler does things like getting the 
size of the index directory, this is fairly wasteful.

It seems one option around this is to implement getters for each attribute on 
the ReplicationHandler and other SolrInfoMBeans. That seems quite cumbersome 
and long-term annoying, though.
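Another option (a sketch only; the field and method names here are invented) 
would be to memoize the stats snapshot so a burst of getAttribute calls costs 
one getStatistics call:
{code}
// Sketch: cache the NamedList from getStatistics() briefly so that reading
// N attributes triggers one getStatistics() call instead of N.
private NamedList cachedStats;
private long cachedAtMillis;

private synchronized NamedList getStats(SolrInfoMBean bean) {
  long now = System.currentTimeMillis();
  if (cachedStats == null || now - cachedAtMillis > 1000) { // 1s TTL, arbitrary
    cachedStats = bean.getStatistics();
    cachedAtMillis = now;
  }
  return cachedStats;
}
{code}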


was (Author: markrmil...@gmail.com):
When using JmxMonitoredMap in a pattern of:

JmxMonitoredMap#getMBeanInfo // to get the attributes
JmxMonitoredMap#getAttribute
JmxMonitoredMap#getAttribute

 JmxMonitoredMap#getAttribute is not very efficient.
 ---

 Key: SOLR-6586
 URL: https://issues.apache.org/jira/browse/SOLR-6586
 Project: Solr
  Issue Type: Improvement
Reporter: Mark Miller
Assignee: Mark Miller





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6351) Let Stats Hang off of Pivots (via 'tag')

2014-10-03 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6351?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14158244#comment-14158244
 ] 

Hoss Man commented on SOLR-6351:


bq. One thing though: what is the expected behavior for pivots where the count 
is 0? Currently, you'll get the full entry with NaN, infinity and such in it. 
Should it be null or empty instead? Or should it even show up at all?

Great question. 

I think in this case we should leave out the stats subsection completely for 
brevity -- similar to how the pivot subsection is left out whenever there are 
no sub-pivots to report counts for.  It's also mostly consistent with the 
common case of sub-pivots when the parent count is 0 in distributed pivots, 
since mincount=0 isn't really viable (SOLR-6329).

But I could be persuaded that we should leave in an empty 'stats' section ... a 
stats section full of fields reporting a bunch of NaN and Infinity counts seems 
like it's just asking to cause user error, though.




 Let Stats Hang off of Pivots (via 'tag')
 

 Key: SOLR-6351
 URL: https://issues.apache.org/jira/browse/SOLR-6351
 Project: Solr
  Issue Type: Sub-task
Reporter: Hoss Man
 Attachments: SOLR-6351.patch, SOLR-6351.patch, SOLR-6351.patch


 The goal here is basically to flip the notion of stats.facet on its head, so 
 that instead of asking the stats component to also do some faceting 
 (something that's never worked well with the variety of field types and has 
 never worked in distributed mode) we instead ask the PivotFacet code to 
 compute some stats X for each leaf in a pivot.  We'll do this with the 
 existing {{stats.field}} params, but we'll leverage the {{tag}} local param 
 of the {{stats.field}} instances to be able to associate which stats we want 
 hanging off of which {{facet.pivot}}.
 Example...
 {noformat}
 facet.pivot={!stats=s1}category,manufacturer
 stats.field={!key=avg_price tag=s1 mean=true}price
 stats.field={!tag=s1 min=true max=true}user_rating
 {noformat}
 ...with the request above, in addition to computing the min/max user_rating 
 and mean price (labeled avg_price) over the entire result set, the 
 PivotFacet component will also include those stats for every node of the tree 
 it builds up when generating a pivot of the fields category,manufacturer



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5969) Add Lucene50Codec

2014-10-03 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14158247#comment-14158247
 ] 

ASF subversion and git services commented on LUCENE-5969:
-

Commit 1629272 from [~rcmuir] in branch 'dev/branches/lucene5969'
[ https://svn.apache.org/r1629272 ]

LUCENE-5969: start porting over tests to BaseCompoundFormatTestCase

 Add Lucene50Codec
 -

 Key: LUCENE-5969
 URL: https://issues.apache.org/jira/browse/LUCENE-5969
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Michael McCandless
 Fix For: 5.0, Trunk

 Attachments: LUCENE-5969.patch, LUCENE-5969.patch


 Spinoff from LUCENE-5952:
   * Fix .si to write Version as 3 ints, not a String that requires parsing at 
 read time.
   * Lucene42TermVectorsFormat should not use the same codecName as 
 Lucene41StoredFieldsFormat
 It would also be nice if we had a bumpCodecVersion script so rolling a new 
 codec is not so daunting.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-Tests-5.x-Java7 - Build # 2153 - Still Failing

2014-10-03 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-5.x-Java7/2153/

1 tests failed.
REGRESSION:  org.apache.solr.cloud.DeleteReplicaTest.testDistribSearch

Error Message:
No live SolrServers available to handle this 
request:[https://127.0.0.1:25765/lx_/pd, https://127.0.0.1:25741/lx_/pd, 
https://127.0.0.1:25751/lx_/pd, https://127.0.0.1:25773/lx_/pd, 
https://127.0.0.1:25733/lx_/pd]

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: No live SolrServers available 
to handle this request:[https://127.0.0.1:25765/lx_/pd, 
https://127.0.0.1:25741/lx_/pd, https://127.0.0.1:25751/lx_/pd, 
https://127.0.0.1:25773/lx_/pd, https://127.0.0.1:25733/lx_/pd]
at 
__randomizedtesting.SeedInfo.seed([DCD0B5AD8AC7607:8C2B8542AFF3163B]:0)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrServer.request(LBHttpSolrServer.java:322)
at 
org.apache.solr.client.solrj.impl.CloudSolrServer.sendRequest(CloudSolrServer.java:880)
at 
org.apache.solr.client.solrj.impl.CloudSolrServer.requestWithRetryOnStaleState(CloudSolrServer.java:658)
at 
org.apache.solr.client.solrj.impl.CloudSolrServer.request(CloudSolrServer.java:601)
at 
org.apache.solr.cloud.DeleteReplicaTest.removeAndWaitForReplicaGone(DeleteReplicaTest.java:172)
at 
org.apache.solr.cloud.DeleteReplicaTest.deleteLiveReplicaTest(DeleteReplicaTest.java:145)
at 
org.apache.solr.cloud.DeleteReplicaTest.doTest(DeleteReplicaTest.java:89)
at 
org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:869)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Commented] (SOLR-4212) Support for facet pivot query for filtered count

2014-10-03 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4212?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14158312#comment-14158312
 ] 

Hoss Man commented on SOLR-4212:


I haven't had a chance to review this patch, but in response to a dev@lucene 
thread about it...

bq. Base idea is to have something like: 
{{facet.pivot=field1,field2,field3&f.field2.facet.pivot.q=somequery&f.field3.facet.pivot.q=somedate:\[NOW-1YEAR
 TO NOW\]&f.field3.facet.pivot.q=someotherquery}} ... Which would add results 
similar to facet queries, at the appropriate level in the pivots.

From a functionality standpoint, what you are setting out to do here seems 
like a great idea -- but personally I think that syntax looks really 
cumbersome?

From a user API standpoint, it seems your goal here would gel really well with 
the syntax I proposed in SOLR-6348/(SOLR-6351,SOLR-6352,SOLR-6353) if you 
think about it in terms of hanging query facets off of pivots ... ie:

{noformat}
facet.pivot={!query=r1}category,manufacturer
facet.query={!tag=r1}somequery
facet.query={!tag=r1}somedate:[NOW-1YEAR TO NOW]
{noformat}

That seems like it might be cleaner, and fit better with some of the other 
ongoing work. What do you think?

 Support for facet pivot query for filtered count
 

 Key: SOLR-4212
 URL: https://issues.apache.org/jira/browse/SOLR-4212
 Project: Solr
  Issue Type: Improvement
  Components: search
Affects Versions: 4.0
Reporter: Steve Molloy
 Fix For: 4.9, Trunk

 Attachments: SOLR-4212-multiple-q.patch, SOLR-4212-multiple-q.patch, 
 SOLR-4212.patch, SOLR-4212.patch, SOLR-4212.patch, patch-4212.txt


 Facet pivot provides hierarchical support for computing data used to populate 
 a treemap or similar visualization. TreeMaps usually offer users extra 
 information by applying an overlay color on top of the existing square sizes 
 based on hierarchical counts. This second count is based on user choices, 
 representing, usually with a gradient, the proportion of the square that fits 
 the user's choices.
 The proposition is to add a facet.pivot.q parameter that would allow 
 specifying one or more queries (per field) that would be intersected with the 
 DocSet used to calculate the pivot count, stored in a separate qcounts list, 
 each entry keyed by the query.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-6587) Misleading exception when creating collections in SolrCloud with bad configuration

2014-10-03 Thread JIRA
Tomás Fernández Löbbe created SOLR-6587:
---

 Summary: Misleading exception when creating collections in 
SolrCloud with bad configuration
 Key: SOLR-6587
 URL: https://issues.apache.org/jira/browse/SOLR-6587
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.10.1, 5.0, Trunk
Reporter: Tomás Fernández Löbbe
Assignee: Tomás Fernández Löbbe
Priority: Minor


I uploaded a configuration in bad shape to Zookeeper, then tried to create a 
collection and I was getting: 

{noformat}
ERROR - 2014-10-03 16:48:25.712; org.apache.solr.core.CoreContainer; Error 
creating core [tflobbe_collection1_shard2_replica2]: Could not load conf for 
core tflobbe_collection1_shard2_replica2: ZkSolrResourceLoader does not support 
getConfigDir() - likely, what you are trying to do is not supported in 
ZooKeeper mode

org.apache.solr.common.SolrException: Could not load conf for core 
tflobbe_collection1_shard2_replica2: ZkSolrResourceLoader does not support 
getConfigDir() - likely, what you are trying to do is not supported in 
ZooKeeper mode

at 
org.apache.solr.core.ConfigSetService.getConfig(ConfigSetService.java:66)

at org.apache.solr.core.CoreContainer.create(CoreContainer.java:489)

at org.apache.solr.core.CoreContainer.create(CoreContainer.java:466)

at 
org.apache.solr.handler.admin.CoreAdminHandler.handleCreateAction(CoreAdminHandler.java:575)

at 
org.apache.solr.handler.admin.CoreAdminHandler.handleRequestInternal(CoreAdminHandler.java:199)

at 
org.apache.solr.handler.admin.CoreAdminHandler.handleRequestBody(CoreAdminHandler.java:188)

at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)

at 
org.apache.solr.servlet.SolrDispatchFilter.handleAdminRequest(SolrDispatchFilter.java:729)

at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:258)

at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:207)

at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1419)

at 
org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:455)

at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137)

at 
org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:557)

at 
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:231)

at 
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1075)

at 
org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:384)

at 
org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:193)

at 
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1009)

at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135)

at 
org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:255)

at 
org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:154)

at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:116)

at org.eclipse.jetty.server.Server.handle(Server.java:368)

at 
org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:489)

at 
org.eclipse.jetty.server.BlockingHttpConnection.handleRequest(BlockingHttpConnection.java:53)

at 
org.eclipse.jetty.server.AbstractHttpConnection.content(AbstractHttpConnection.java:953)

at 
org.eclipse.jetty.server.AbstractHttpConnection$RequestHandler.content(AbstractHttpConnection.java:1014)

at org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:861)

at org.eclipse.jetty.http.HttpParser.parseAvailable(HttpParser.java:240)

at 
org.eclipse.jetty.server.BlockingHttpConnection.handle(BlockingHttpConnection.java:72)

at 
org.eclipse.jetty.server.bio.SocketConnector$ConnectorEndPoint.run(SocketConnector.java:264)

at 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:608)

at 
org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:543)

at java.lang.Thread.run(Thread.java:745)

Caused by: org.apache.solr.common.cloud.ZooKeeperException: 
ZkSolrResourceLoader does not support getConfigDir() - likely, what you are 
trying to do is not supported in ZooKeeper mode

at 
org.apache.solr.cloud.ZkSolrResourceLoader.getConfigDir(ZkSolrResourceLoader.java:101)

at 
org.apache.solr.core.SolrConfig.readFromResourceLoader(SolrConfig.java:147)

at 
org.apache.solr.core.ConfigSetService.createSolrConfig(ConfigSetService.java:80)

at 

[jira] [Updated] (SOLR-6587) Misleading exception when creating collections in SolrCloud with bad configuration

2014-10-03 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/SOLR-6587?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tomás Fernández Löbbe updated SOLR-6587:

Attachment: SOLR-6587.patch

 Misleading exception when creating collections in SolrCloud with bad 
 configuration
 --

 Key: SOLR-6587
 URL: https://issues.apache.org/jira/browse/SOLR-6587
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.10.1, 5.0, Trunk
Reporter: Tomás Fernández Löbbe
Assignee: Tomás Fernández Löbbe
Priority: Minor
 Attachments: SOLR-6587.patch


 I uploaded a configuration in bad shape to Zookeeper, then tried to create a 
 collection and I was getting: 
 {noformat}
 ERROR - 2014-10-03 16:48:25.712; org.apache.solr.core.CoreContainer; Error 
 creating core [tflobbe_collection1_shard2_replica2]: Could not load conf for 
 core tflobbe_collection1_shard2_replica2: ZkSolrResourceLoader does not 
 support getConfigDir() - likely, what you are trying to do is not supported 
 in ZooKeeper mode
 org.apache.solr.common.SolrException: Could not load conf for core 
 tflobbe_collection1_shard2_replica2: ZkSolrResourceLoader does not support 
 getConfigDir() - likely, what you are trying to do is not supported in 
 ZooKeeper mode
 at 
 org.apache.solr.core.ConfigSetService.getConfig(ConfigSetService.java:66)
 at org.apache.solr.core.CoreContainer.create(CoreContainer.java:489)
 at org.apache.solr.core.CoreContainer.create(CoreContainer.java:466)
 at 
 org.apache.solr.handler.admin.CoreAdminHandler.handleCreateAction(CoreAdminHandler.java:575)
 at 
 org.apache.solr.handler.admin.CoreAdminHandler.handleRequestInternal(CoreAdminHandler.java:199)
 at 
 org.apache.solr.handler.admin.CoreAdminHandler.handleRequestBody(CoreAdminHandler.java:188)
 at 
 org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
 at 
 org.apache.solr.servlet.SolrDispatchFilter.handleAdminRequest(SolrDispatchFilter.java:729)
 at 
 org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:258)
 at 
 org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:207)
 at 
 org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1419)
 at 
 org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:455)
 at 
 org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137)
 at 
 org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:557)
 at 
 org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:231)
 at 
 org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1075)
 at 
 org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:384)
 at 
 org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:193)
 at 
 org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1009)
 at 
 org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135)
 at 
 org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:255)
 at 
 org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:154)
 at 
 org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:116)
 at org.eclipse.jetty.server.Server.handle(Server.java:368)
 at 
 org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:489)
 at 
 org.eclipse.jetty.server.BlockingHttpConnection.handleRequest(BlockingHttpConnection.java:53)
 at 
 org.eclipse.jetty.server.AbstractHttpConnection.content(AbstractHttpConnection.java:953)
 at 
 org.eclipse.jetty.server.AbstractHttpConnection$RequestHandler.content(AbstractHttpConnection.java:1014)
 at org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:861)
 at 
 org.eclipse.jetty.http.HttpParser.parseAvailable(HttpParser.java:240)
 at 
 org.eclipse.jetty.server.BlockingHttpConnection.handle(BlockingHttpConnection.java:72)
 at 
 org.eclipse.jetty.server.bio.SocketConnector$ConnectorEndPoint.run(SocketConnector.java:264)
 at 
 org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:608)
 at 
 org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:543)
 at java.lang.Thread.run(Thread.java:745)
 Caused by: org.apache.solr.common.cloud.ZooKeeperException: 
 ZkSolrResourceLoader does not support getConfigDir() - likely, what you are 
 trying to do is not supported in ZooKeeper mode
 at 
 

[jira] [Commented] (LUCENE-5969) Add Lucene50Codec

2014-10-03 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14158370#comment-14158370
 ] 

ASF subversion and git services commented on LUCENE-5969:
-

Commit 1629288 from [~rcmuir] in branch 'dev/branches/lucene5969'
[ https://svn.apache.org/r1629288 ]

LUCENE-5969: add/port more tests

 Add Lucene50Codec
 -

 Key: LUCENE-5969
 URL: https://issues.apache.org/jira/browse/LUCENE-5969
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Michael McCandless
 Fix For: 5.0, Trunk

 Attachments: LUCENE-5969.patch, LUCENE-5969.patch


 Spinoff from LUCENE-5952:
   * Fix .si to write Version as 3 ints, not a String that requires parsing at 
 read time.
   * Lucene42TermVectorsFormat should not use the same codecName as 
 Lucene41StoredFieldsFormat
 It would also be nice if we had a bumpCodecVersion script so rolling a new 
 codec is not so daunting.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4212) Support for facet pivot query for filtered count

2014-10-03 Thread Steve Molloy (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4212?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14158374#comment-14158374
 ] 

Steve Molloy commented on SOLR-4212:


That's what I'm starting to realize by looking into SOLR-6351... :) It makes a 
lot of sense. I'll try to adapt and see if I can get facet ranges (SOLR-6353) 
covered at the same time; they should be similar under your proposed approach.

 Support for facet pivot query for filtered count
 

 Key: SOLR-4212
 URL: https://issues.apache.org/jira/browse/SOLR-4212
 Project: Solr
  Issue Type: Improvement
  Components: search
Affects Versions: 4.0
Reporter: Steve Molloy
 Fix For: 4.9, Trunk

 Attachments: SOLR-4212-multiple-q.patch, SOLR-4212-multiple-q.patch, 
 SOLR-4212.patch, SOLR-4212.patch, SOLR-4212.patch, patch-4212.txt


 Facet pivot provides hierarchical support for computing data used to populate 
 a treemap or similar visualization. TreeMaps usually offer users extra 
 information by applying an overlay color on top of the existing square sizes 
 based on hierarchical counts. This second count is based on user choices, 
 representing, usually with a gradient, the proportion of each square that 
 matches the user's choices.
 The proposal is to add a facet.pivot.q parameter that would allow specifying 
 one or more queries (per field) to be intersected with the DocSet used to 
 calculate the pivot count, with the results stored in a separate qcounts list, 
 each entry keyed by the query. A sketch of such a request follows.
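 As a rough sketch of what a request might look like under this proposal (the 
 facet.pivot.q syntax is not committed, and the core and field names here are 
 hypothetical):
 {noformat}
 curl 'http://localhost:8983/solr/collection1/select' \
   --data-urlencode 'q=*:*' --data-urlencode 'rows=0' \
   --data-urlencode 'facet=true' \
   --data-urlencode 'facet.pivot=category,manufacturer' \
   --data-urlencode 'facet.pivot.q=price:[0 TO 100]'
 {noformat}
 Each node of the category,manufacturer pivot would then carry, next to its 
 regular count, a qcounts entry keyed by price:[0 TO 100] holding the size of 
 the intersection of that query with the node's DocSet.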



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-5.x-Windows (32bit/jdk1.8.0_20) - Build # 4249 - Still Failing!

2014-10-03 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Windows/4249/
Java: 32bit/jdk1.8.0_20 -client -XX:+UseParallelGC

1 tests failed.
REGRESSION:  org.apache.solr.cloud.ChaosMonkeySafeLeaderTest.testDistribSearch

Error Message:
expected:<0> but was:<1>

Stack Trace:
java.lang.AssertionError: expected:<0> but was:<1>
at 
__randomizedtesting.SeedInfo.seed([E75A346024483F09:66BCBA7853175F35]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at org.junit.Assert.assertEquals(Assert.java:456)
at 
org.apache.solr.cloud.ChaosMonkeySafeLeaderTest.doTest(ChaosMonkeySafeLeaderTest.java:153)
at 
org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:869)
at sun.reflect.GeneratedMethodAccessor53.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
   

[JENKINS] Lucene-Solr-trunk-Linux (32bit/ibm-j9-jdk7) - Build # 11378 - Failure!

2014-10-03 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/11378/
Java: 32bit/ibm-j9-jdk7 
-Xjit:exclude={org/apache/lucene/util/fst/FST.pack(IIF)Lorg/apache/lucene/util/fst/FST;}

1 tests failed.
REGRESSION:  
org.apache.solr.handler.component.DistributedFacetPivotLongTailTest.testDistribSearch

Error Message:
Timeout occured while waiting response from server at: 
https://127.0.0.1:59208/esrf/c

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: Timeout occured while waiting 
response from server at: https://127.0.0.1:59208/esrf/c
at 
__randomizedtesting.SeedInfo.seed([A8F48A312141DDBD:29120429561EBD81]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.executeMethod(HttpSolrServer.java:580)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:215)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:211)
at 
org.apache.solr.client.solrj.request.AbstractUpdateRequest.process(AbstractUpdateRequest.java:124)
at org.apache.solr.client.solrj.SolrServer.add(SolrServer.java:116)
at org.apache.solr.client.solrj.SolrServer.add(SolrServer.java:102)
at 
org.apache.solr.handler.component.DistributedFacetPivotLongTailTest.doTest(DistributedFacetPivotLongTailTest.java:81)
at 
org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:869)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:94)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:55)
at java.lang.reflect.Method.invoke(Method.java:619)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Commented] (SOLR-6476) Create a bulk mode for schema API

2014-10-03 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6476?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14158446#comment-14158446
 ] 

ASF subversion and git services commented on SOLR-6476:
---

Commit 1629301 from [~noble.paul] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1629301 ]

SOLR-6476

 Create a bulk mode for schema API
 -

 Key: SOLR-6476
 URL: https://issues.apache.org/jira/browse/SOLR-6476
 Project: Solr
  Issue Type: New Feature
  Components: Schema and Analysis
Reporter: Noble Paul
Assignee: Noble Paul
  Labels: managedResource
 Fix For: 5.0, Trunk

 Attachments: SOLR-6476.patch, SOLR-6476.patch, SOLR-6476.patch, 
 SOLR-6476.patch, SOLR-6476.patch, SOLR-6476.patch, SOLR-6476.patch, 
 SOLR-6476.patch


 The current schema API does one operation at a time, while the normal use case 
 is that users add multiple fields/fieldtypes/copyFields etc. in one shot.
 Example: 
 {code:javascript}
 curl http://localhost:8983/solr/collection1/schema -H 
 'Content-type:application/json' -d '{
 "add-field": {
 "name":"sell-by",
 "type":"tdate",
 "stored":true
 },
 "add-field":{
 "name":"catchall",
 "type":"text_general",
 "stored":false
 }
 }'
 {code}
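 A quick way to sanity-check the result (a sketch, assuming the stock 
 collection1 core name): the existing read side of the Schema API should show 
 both fields after the single bulk call, e.g.
 {noformat}
 curl http://localhost:8983/solr/collection1/schema/fields/sell-by
 curl http://localhost:8983/solr/collection1/schema/fields/catchall
 {noformat}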



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5969) Add Lucene50Codec

2014-10-03 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14158460#comment-14158460
 ] 

ASF subversion and git services commented on LUCENE-5969:
-

Commit 1629303 from [~rcmuir] in branch 'dev/branches/lucene5969'
[ https://svn.apache.org/r1629303 ]

LUCENE-5969: port two remaining TestCompoundFile tests

 Add Lucene50Codec
 -

 Key: LUCENE-5969
 URL: https://issues.apache.org/jira/browse/LUCENE-5969
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Michael McCandless
 Fix For: 5.0, Trunk

 Attachments: LUCENE-5969.patch, LUCENE-5969.patch


 Spinoff from LUCENE-5952:
   * Fix .si to write Version as 3 ints, not a String that requires parsing at 
 read time.
   * Lucene42TermVectorsFormat should not use the same codecName as 
 Lucene41StoredFieldsFormat
 It would also be nice if we had a bumpCodecVersion script so rolling a new 
 codec is not so daunting.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6587) Misleading exception when creating collections in SolrCloud with bad configuration

2014-10-03 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6587?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14158465#comment-14158465
 ] 

ASF subversion and git services commented on SOLR-6587:
---

Commit 1629305 from [~tomasflobbe] in branch 'dev/trunk'
[ https://svn.apache.org/r1629305 ]

SOLR-6587: Correct exception thrown on bad collection configuration in 
SolrCloud mode

 Misleading exception when creating collections in SolrCloud with bad 
 configuration
 --

 Key: SOLR-6587
 URL: https://issues.apache.org/jira/browse/SOLR-6587
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.10.1, 5.0, Trunk
Reporter: Tomás Fernández Löbbe
Assignee: Tomás Fernández Löbbe
Priority: Minor
 Attachments: SOLR-6587.patch


 I uploaded a configuration in bad shape to Zookeeper, then tried to create a 
 collection and I was getting: 
 {noformat}
 ERROR - 2014-10-03 16:48:25.712; org.apache.solr.core.CoreContainer; Error 
 creating core [tflobbe_collection1_shard2_replica2]: Could not load conf for 
 core tflobbe_collection1_shard2_replica2: ZkSolrResourceLoader does not 
 support getConfigDir() - likely, what you are trying to do is not supported 
 in ZooKeeper mode
 org.apache.solr.common.SolrException: Could not load conf for core 
 tflobbe_collection1_shard2_replica2: ZkSolrResourceLoader does not support 
 getConfigDir() - likely, what you are trying to do is not supported in 
 ZooKeeper mode
 at 
 org.apache.solr.core.ConfigSetService.getConfig(ConfigSetService.java:66)
 at org.apache.solr.core.CoreContainer.create(CoreContainer.java:489)
 at org.apache.solr.core.CoreContainer.create(CoreContainer.java:466)
 at 
 org.apache.solr.handler.admin.CoreAdminHandler.handleCreateAction(CoreAdminHandler.java:575)
 at 
 org.apache.solr.handler.admin.CoreAdminHandler.handleRequestInternal(CoreAdminHandler.java:199)
 at 
 org.apache.solr.handler.admin.CoreAdminHandler.handleRequestBody(CoreAdminHandler.java:188)
 at 
 org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
 at 
 org.apache.solr.servlet.SolrDispatchFilter.handleAdminRequest(SolrDispatchFilter.java:729)
 at 
 org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:258)
 at 
 org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:207)
 at 
 org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1419)
 at 
 org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:455)
 at 
 org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137)
 at 
 org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:557)
 at 
 org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:231)
 at 
 org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1075)
 at 
 org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:384)
 at 
 org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:193)
 at 
 org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1009)
 at 
 org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135)
 at 
 org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:255)
 at 
 org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:154)
 at 
 org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:116)
 at org.eclipse.jetty.server.Server.handle(Server.java:368)
 at 
 org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:489)
 at 
 org.eclipse.jetty.server.BlockingHttpConnection.handleRequest(BlockingHttpConnection.java:53)
 at 
 org.eclipse.jetty.server.AbstractHttpConnection.content(AbstractHttpConnection.java:953)
 at 
 org.eclipse.jetty.server.AbstractHttpConnection$RequestHandler.content(AbstractHttpConnection.java:1014)
 at org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:861)
 at 
 org.eclipse.jetty.http.HttpParser.parseAvailable(HttpParser.java:240)
 at 
 org.eclipse.jetty.server.BlockingHttpConnection.handle(BlockingHttpConnection.java:72)
 at 
 org.eclipse.jetty.server.bio.SocketConnector$ConnectorEndPoint.run(SocketConnector.java:264)
 at 
 org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:608)
 at 
 org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:543)
 at 

[jira] [Commented] (SOLR-6587) Misleading exception when creating collections in SolrCloud with bad configuration

2014-10-03 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6587?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14158480#comment-14158480
 ] 

ASF subversion and git services commented on SOLR-6587:
---

Commit 1629311 from [~tomasflobbe] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1629311 ]

SOLR-6587: Correct exception thrown on bad collection configuration in 
SolrCloud mode

 Misleading exception when creating collections in SolrCloud with bad 
 configuration
 --

 Key: SOLR-6587
 URL: https://issues.apache.org/jira/browse/SOLR-6587
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.10.1, 5.0, Trunk
Reporter: Tomás Fernández Löbbe
Assignee: Tomás Fernández Löbbe
Priority: Minor
 Attachments: SOLR-6587.patch


 I uploaded a configuration in bad shape to Zookeeper, then tried to create a 
 collection and I was getting: 
 {noformat}
 ERROR - 2014-10-03 16:48:25.712; org.apache.solr.core.CoreContainer; Error 
 creating core [tflobbe_collection1_shard2_replica2]: Could not load conf for 
 core tflobbe_collection1_shard2_replica2: ZkSolrResourceLoader does not 
 support getConfigDir() - likely, what you are trying to do is not supported 
 in ZooKeeper mode
 org.apache.solr.common.SolrException: Could not load conf for core 
 tflobbe_collection1_shard2_replica2: ZkSolrResourceLoader does not support 
 getConfigDir() - likely, what you are trying to do is not supported in 
 ZooKeeper mode
 at 
 org.apache.solr.core.ConfigSetService.getConfig(ConfigSetService.java:66)
 at org.apache.solr.core.CoreContainer.create(CoreContainer.java:489)
 at org.apache.solr.core.CoreContainer.create(CoreContainer.java:466)
 at 
 org.apache.solr.handler.admin.CoreAdminHandler.handleCreateAction(CoreAdminHandler.java:575)
 at 
 org.apache.solr.handler.admin.CoreAdminHandler.handleRequestInternal(CoreAdminHandler.java:199)
 at 
 org.apache.solr.handler.admin.CoreAdminHandler.handleRequestBody(CoreAdminHandler.java:188)
 at 
 org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
 at 
 org.apache.solr.servlet.SolrDispatchFilter.handleAdminRequest(SolrDispatchFilter.java:729)
 at 
 org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:258)
 at 
 org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:207)
 at 
 org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1419)
 at 
 org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:455)
 at 
 org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137)
 at 
 org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:557)
 at 
 org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:231)
 at 
 org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1075)
 at 
 org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:384)
 at 
 org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:193)
 at 
 org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1009)
 at 
 org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135)
 at 
 org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:255)
 at 
 org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:154)
 at 
 org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:116)
 at org.eclipse.jetty.server.Server.handle(Server.java:368)
 at 
 org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:489)
 at 
 org.eclipse.jetty.server.BlockingHttpConnection.handleRequest(BlockingHttpConnection.java:53)
 at 
 org.eclipse.jetty.server.AbstractHttpConnection.content(AbstractHttpConnection.java:953)
 at 
 org.eclipse.jetty.server.AbstractHttpConnection$RequestHandler.content(AbstractHttpConnection.java:1014)
 at org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:861)
 at 
 org.eclipse.jetty.http.HttpParser.parseAvailable(HttpParser.java:240)
 at 
 org.eclipse.jetty.server.BlockingHttpConnection.handle(BlockingHttpConnection.java:72)
 at 
 org.eclipse.jetty.server.bio.SocketConnector$ConnectorEndPoint.run(SocketConnector.java:264)
 at 
 org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:608)
 at 
 org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:543)
 at 

[jira] [Commented] (LUCENE-5969) Add Lucene50Codec

2014-10-03 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14158484#comment-14158484
 ] 

ASF subversion and git services commented on LUCENE-5969:
-

Commit 1629313 from [~rcmuir] in branch 'dev/branches/lucene5969'
[ https://svn.apache.org/r1629313 ]

LUCENE-5969: port remaining tests

 Add Lucene50Codec
 -

 Key: LUCENE-5969
 URL: https://issues.apache.org/jira/browse/LUCENE-5969
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Michael McCandless
 Fix For: 5.0, Trunk

 Attachments: LUCENE-5969.patch, LUCENE-5969.patch


 Spinoff from LUCENE-5952:
   * Fix .si to write Version as 3 ints, not a String that requires parsing at 
 read time.
   * Lucene42TermVectorsFormat should not use the same codecName as 
 Lucene41StoredFieldsFormat
 It would also be nice if we had a bumpCodecVersion script so rolling a new 
 codec is not so daunting.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-6351) Let Stats Hang off of Pivots (via 'tag')

2014-10-03 Thread Steve Molloy (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6351?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Molloy updated SOLR-6351:
---
Attachment: SOLR-6351.patch

Augmented the previous 3 patches: added logic to not include a stats entry if it's 
empty, and fixed the distributed logic by actually merging stats from shards. I 
currently have unit tests failing in solrj that I need to look at.

 Let Stats Hang off of Pivots (via 'tag')
 

 Key: SOLR-6351
 URL: https://issues.apache.org/jira/browse/SOLR-6351
 Project: Solr
  Issue Type: Sub-task
Reporter: Hoss Man
 Attachments: SOLR-6351.patch, SOLR-6351.patch, SOLR-6351.patch, 
 SOLR-6351.patch


 The goal here is basically to flip the notion of stats.facet on its head, so 
 that instead of asking the stats component to also do some faceting 
 (something that's never worked well with the variety of field types and has 
 never worked in distributed mode) we instead ask the PivotFacet code to 
 compute some stats X for each leaf in a pivot.  We'll do this with the 
 existing {{stats.field}} params, but we'll leverage the {{tag}} local param 
 of the {{stats.field}} instances to be able to associate which stats we want 
 hanging off of which {{facet.pivot}}.
 Example...
 {noformat}
 facet.pivot={!stats=s1}category,manufacturer
 stats.field={!key=avg_price tag=s1 mean=true}price
 stats.field={!tag=s1 min=true max=true}user_rating
 {noformat}
 ...with the request above, in addition to computing the min/max user_rating 
 and mean price (labeled avg_price) over the entire result set, the 
 PivotFacet component will also include those stats for every node of the tree 
 it builds up when generating a pivot of the fields category,manufacturer.
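 Putting the example params together as one request (a sketch against a local 
 instance; the collection1 core and the category/manufacturer/price/user_rating 
 fields are assumptions, and this of course needs the patch applied):
 {noformat}
 curl 'http://localhost:8983/solr/collection1/select' \
   --data-urlencode 'q=*:*' --data-urlencode 'rows=0' \
   --data-urlencode 'facet=true' --data-urlencode 'stats=true' \
   --data-urlencode 'facet.pivot={!stats=s1}category,manufacturer' \
   --data-urlencode 'stats.field={!key=avg_price tag=s1 mean=true}price' \
   --data-urlencode 'stats.field={!tag=s1 min=true max=true}user_rating'
 {noformat}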



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-NightlyTests-5.x - Build # 639 - Failure

2014-10-03 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-5.x/639/

4 tests failed.
REGRESSION:  org.apache.solr.cloud.ChaosMonkeySafeLeaderTest.testDistribSearch

Error Message:
The Monkey ran for over 20 seconds and no jetties were stopped - this is worth 
investigating!

Stack Trace:
java.lang.AssertionError: The Monkey ran for over 20 seconds and no jetties 
were stopped - this is worth investigating!
at 
__randomizedtesting.SeedInfo.seed([90BC362B74A11273:115AB83303FE724F]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.apache.solr.cloud.ChaosMonkey.stopTheMonkey(ChaosMonkey.java:535)
at 
org.apache.solr.cloud.ChaosMonkeySafeLeaderTest.doTest(ChaosMonkeySafeLeaderTest.java:140)
at 
org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:869)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Commented] (SOLR-5986) Don't allow runaway queries from harming Solr cluster health or search performance

2014-10-03 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5986?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14158577#comment-14158577
 ] 

ASF subversion and git services commented on SOLR-5986:
---

Commit 1629329 from hoss...@apache.org in branch 'dev/trunk'
[ https://svn.apache.org/r1629329 ]

SOLR-5986: comment out failing assertion in TestDistributedSearch until anshum 
can review/fix

 Don't allow runaway queries from harming Solr cluster health or search 
 performance
 --

 Key: SOLR-5986
 URL: https://issues.apache.org/jira/browse/SOLR-5986
 Project: Solr
  Issue Type: Improvement
  Components: search
Reporter: Steve Davids
Assignee: Anshum Gupta
Priority: Critical
 Fix For: 5.0

 Attachments: SOLR-5986.patch, SOLR-5986.patch, SOLR-5986.patch, 
 SOLR-5986.patch, SOLR-5986.patch, SOLR-5986.patch, SOLR-5986.patch, 
 SOLR-5986.patch, SOLR-5986.patch, SOLR-5986.patch, SOLR-5986.patch, 
 SOLR-5986.patch


 The intent of this ticket is to have all distributed search requests stop 
 wasting CPU cycles on requests that have already timed out or are so 
 complicated that they won't be able to execute. We have come across a case 
 where a nasty wildcard query within a proximity clause was causing the 
 cluster to enumerate terms for hours even though the query timeout was set to 
 minutes. This caused a noticeable slowdown within the system, which made us 
 restart the replicas that happened to service that one request; in the worst 
 case scenario, users with a relatively low zk timeout value will have 
 nodes start dropping from the cluster due to long GC pauses.
 [~amccurry] built a mechanism into Apache Blur to help with the issue in 
 BLUR-142 (see commit comment for code, though look at the latest code on the 
 trunk for newer bug fixes).
 Solr should be able to either prevent these problematic queries from running 
 by some heuristic (possibly estimated size of heap usage) or be able to 
 execute a thread interrupt on all query threads once the time threshold is 
 met. This issue mirrors what others have discussed on the mailing list: 
 http://mail-archives.apache.org/mod_mbox/lucene-solr-user/200903.mbox/%3c856ac15f0903272054q2dbdbd19kea3c5ba9e105b...@mail.gmail.com%3E
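 For reference, the knob this work builds on is the existing timeAllowed 
 request parameter; a minimal sketch of a time-bounded, shard-tolerant request 
 (host and core name are placeholders):
 {noformat}
 curl 'http://localhost:8983/solr/collection1/select' \
   --data-urlencode 'q=*:*' \
   --data-urlencode 'timeAllowed=100' \
   --data-urlencode 'shards.info=true' \
   --data-urlencode 'shards.tolerant=true'
 {noformat}
 A request that trips the limit should come back flagged as partial in the 
 response header instead of tying up the node.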



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5986) Don't allow runaway queries from harming Solr cluster health or search performance

2014-10-03 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5986?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14158581#comment-14158581
 ] 

ASF subversion and git services commented on SOLR-5986:
---

Commit 1629330 from hoss...@apache.org in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1629330 ]

SOLR-5986: comment out failing assertion in TestDistributedSearch until anshum 
can review/fix (merge r1629329)

 Don't allow runaway queries from harming Solr cluster health or search 
 performance
 --

 Key: SOLR-5986
 URL: https://issues.apache.org/jira/browse/SOLR-5986
 Project: Solr
  Issue Type: Improvement
  Components: search
Reporter: Steve Davids
Assignee: Anshum Gupta
Priority: Critical
 Fix For: 5.0

 Attachments: SOLR-5986.patch, SOLR-5986.patch, SOLR-5986.patch, 
 SOLR-5986.patch, SOLR-5986.patch, SOLR-5986.patch, SOLR-5986.patch, 
 SOLR-5986.patch, SOLR-5986.patch, SOLR-5986.patch, SOLR-5986.patch, 
 SOLR-5986.patch


 The intent of this ticket is to have all distributed search requests stop 
 wasting CPU cycles on requests that have already timed out or are so 
 complicated that they won't be able to execute. We have come across a case 
 where a nasty wildcard query within a proximity clause was causing the 
 cluster to enumerate terms for hours even though the query timeout was set to 
 minutes. This caused a noticeable slowdown within the system, which made us 
 restart the replicas that happened to service that one request; in the worst 
 case scenario, users with a relatively low zk timeout value will have 
 nodes start dropping from the cluster due to long GC pauses.
 [~amccurry] built a mechanism into Apache Blur to help with the issue in 
 BLUR-142 (see commit comment for code, though look at the latest code on the 
 trunk for newer bug fixes).
 Solr should be able to either prevent these problematic queries from running 
 by some heuristic (possibly estimated size of heap usage) or be able to 
 execute a thread interrupt on all query threads once the time threshold is 
 met. This issue mirrors what others have discussed on the mailing list: 
 http://mail-archives.apache.org/mod_mbox/lucene-solr-user/200903.mbox/%3c856ac15f0903272054q2dbdbd19kea3c5ba9e105b...@mail.gmail.com%3E



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5986) Don't allow runaway queries from harming Solr cluster health or search performance

2014-10-03 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5986?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14158609#comment-14158609
 ] 

Hoss Man commented on SOLR-5986:


FYI: this assertion (modified by r1627622/r1627635) has been failing in Jenkins 
several times since it was committed...

{noformat}
1498993 shalin   // test group query
1627635 anshum   // TODO: Remove this? This doesn't make any real sense 
now that timeAllowed might trigger early
1627635 anshum   //   termination of the request during Terms 
enumeration/Query expansion.
1627635 anshum   //   During such an exit, partial results isn't 
supported as it wouldn't make any sense.
1627635 anshum   // Increasing the timeAllowed from 1 to 100 for now.
1498993 shalin   queryPartialResults(upShards, upClients,
1498993 shalin   "q", "*:*",
1498993 shalin   "rows", 100,
1498993 shalin   "fl", "id," + i1,
1498993 shalin   "group", "true",
1498993 shalin   "group.query", t1 + ":kings OR " + t1 + ":eggs",
1498993 shalin   "group.limit", 10,
1498993 shalin   "sort", i1 + " asc, id asc",
1627635 anshum   CommonParams.TIME_ALLOWED, 100,
1498993 shalin   ShardParams.SHARDS_INFO, "true",
1498993 shalin   ShardParams.SHARDS_TOLERANT, "true");
{noformat}

example: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/11221/
{noformat}
Error Message:
Request took too long during query expansion. Terminating request.

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrServer$RemoteSolrException: Request 
took too long during query expansion. Terminating request.
at 
__randomizedtesting.SeedInfo.seed([377AFD4F005F159A:B69C7357770075A6]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.executeMethod(HttpSolrServer.java:570)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:215)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:211)
at 
org.apache.solr.client.solrj.request.QueryRequest.process(QueryRequest.java:91)
at org.apache.solr.client.solrj.SolrServer.query(SolrServer.java:301)
at 
org.apache.solr.TestDistributedSearch.queryPartialResults(TestDistributedSearch.java:596)
at 
org.apache.solr.TestDistributedSearch.doTest(TestDistributedSearch.java:499)
at 
org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:875)
{noformat}

I'm not fully understanding what anshum meant by this TODO, and I think he's 
offline for the next few days, so I went ahead and commented this out with a link 
back to this jira for him to look at before resolving it.

 Don't allow runaway queries from harming Solr cluster health or search 
 performance
 --

 Key: SOLR-5986
 URL: https://issues.apache.org/jira/browse/SOLR-5986
 Project: Solr
  Issue Type: Improvement
  Components: search
Reporter: Steve Davids
Assignee: Anshum Gupta
Priority: Critical
 Fix For: 5.0

 Attachments: SOLR-5986.patch, SOLR-5986.patch, SOLR-5986.patch, 
 SOLR-5986.patch, SOLR-5986.patch, SOLR-5986.patch, SOLR-5986.patch, 
 SOLR-5986.patch, SOLR-5986.patch, SOLR-5986.patch, SOLR-5986.patch, 
 SOLR-5986.patch


 The intent of this ticket is to have all distributed search requests stop 
 wasting CPU cycles on requests that have already timed out or are so 
 complicated that they won't be able to execute. We have come across a case 
 where a nasty wildcard query within a proximity clause was causing the 
 cluster to enumerate terms for hours even though the query timeout was set to 
 minutes. This caused a noticeable slowdown within the system, which made us 
 restart the replicas that happened to service that one request; in the worst 
 case scenario, users with a relatively low zk timeout value will have 
 nodes start dropping from the cluster due to long GC pauses.
 [~amccurry] built a mechanism into Apache Blur to help with the issue in 
 BLUR-142 (see commit comment for code, though look at the latest code on the 
 trunk for newer bug fixes).
 Solr should be able to either prevent these problematic queries from running 
 by some heuristic (possibly estimated size of heap usage) or be able to 
 execute a thread interrupt on all query threads once the time threshold is 
 met. This issue mirrors what others have discussed on the mailing list: 
 http://mail-archives.apache.org/mod_mbox/lucene-solr-user/200903.mbox/%3c856ac15f0903272054q2dbdbd19kea3c5ba9e105b...@mail.gmail.com%3E



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, 

[jira] [Commented] (SOLR-6351) Let Stats Hang off of Pivots (via 'tag')

2014-10-03 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6351?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14158661#comment-14158661
 ] 

Hoss Man commented on SOLR-6351:


I haven't done an extensive review, but here are some quick comments/questions 
based on a skim of the latest patch...

1) This block of new code in PivotFacetProcessor (which pops up twice at different 
points in the patch?) doesn't make any sense to me ... the StatsValues returned 
from {{computeLocalStatsValues()}} is totally ignored?
{noformat}
+  for(StatsField statsField : statsFields) {
+ statsField.computeLocalStatsValues(docSet);
+  }
{noformat}

2) I don't think {{StatsValues.hasValues()}} really makes sense ... we 
shouldn't be computing StatsValues against the subset and then adding them to 
the response if and only if they {{hasValues()}} -- we should instead be 
skipping the computation of the StatsValues completely unless the pivot subset 
is non-empty.

This isn't just a question of optimizing away the stats computation -- it's a 
very real difference in the fundamental logic.  There could be a non-empty set 
of documents (ie: pivot count > 0), but the stats we've been asked to compute 
(ie: over some field X) might result in a stats count that's 0 (if none of the 
docs in that set have a value in field X), in which case we should still include 
the stats in the response.

3) Why is a {{CommonParams.STATS}} constant being added?  Isn't this what 
{{StatsParams.STATS}} is for?

4) I'm not really understanding the point of the two new 
{{SolrExampleTests.testPivotFacetsStatsNotSupported*}} methods ... what's the 
goal behind having these tests?
If nothing else, as things stand right now, these seem like they make really 
brittle assumptions about the _exact_ error message they expect -- we should 
change them to use substring/regex to sanity-check just the key pieces of 
information we care about finding in the error message.
We should also assert that the error code on these exceptions is definitely a 
4xx error and not a 5xx.

 Let Stats Hang off of Pivots (via 'tag')
 

 Key: SOLR-6351
 URL: https://issues.apache.org/jira/browse/SOLR-6351
 Project: Solr
  Issue Type: Sub-task
Reporter: Hoss Man
 Attachments: SOLR-6351.patch, SOLR-6351.patch, SOLR-6351.patch, 
 SOLR-6351.patch


 The goal here is basically to flip the notion of stats.facet on its head, so 
 that instead of asking the stats component to also do some faceting 
 (something that's never worked well with the variety of field types and has 
 never worked in distributed mode) we instead ask the PivotFacet code to 
 compute some stats X for each leaf in a pivot.  We'll do this with the 
 existing {{stats.field}} params, but we'll leverage the {{tag}} local param 
 of the {{stats.field}} instances to be able to associate which stats we want 
 hanging off of which {{facet.pivot}}.
 Example...
 {noformat}
 facet.pivot={!stats=s1}category,manufacturer
 stats.field={!key=avg_price tag=s1 mean=true}price
 stats.field={!tag=s1 min=true max=true}user_rating
 {noformat}
 ...with the request above, in addition to computing the min/max user_rating 
 and mean price (labeled avg_price) over the entire result set, the 
 PivotFacet component will also include those stats for every node of the tree 
 it builds up when generating a pivot of the fields category,manufacturer.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-5.x-Linux (32bit/jdk1.8.0_20) - Build # 11227 - Failure!

2014-10-03 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/11227/
Java: 32bit/jdk1.8.0_20 -server -XX:+UseSerialGC

1 tests failed.
REGRESSION:  
org.apache.solr.cloud.DeleteLastCustomShardedReplicaTest.testDistribSearch

Error Message:
No live SolrServers available to handle this 
request:[http://127.0.0.1:53229/tv/ry, http://127.0.0.1:42341/tv/ry, 
http://127.0.0.1:52122/tv/ry]

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: No live SolrServers available 
to handle this request:[http://127.0.0.1:53229/tv/ry, 
http://127.0.0.1:42341/tv/ry, http://127.0.0.1:52122/tv/ry]
at 
__randomizedtesting.SeedInfo.seed([3A0F9EF5AF1567E3:BBE910EDD84A07DF]:0)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrServer.request(LBHttpSolrServer.java:322)
at 
org.apache.solr.client.solrj.impl.CloudSolrServer.sendRequest(CloudSolrServer.java:880)
at 
org.apache.solr.client.solrj.impl.CloudSolrServer.requestWithRetryOnStaleState(CloudSolrServer.java:658)
at 
org.apache.solr.client.solrj.impl.CloudSolrServer.request(CloudSolrServer.java:601)
at 
org.apache.solr.cloud.DeleteLastCustomShardedReplicaTest.removeAndWaitForLastReplicaGone(DeleteLastCustomShardedReplicaTest.java:117)
at 
org.apache.solr.cloud.DeleteLastCustomShardedReplicaTest.doTest(DeleteLastCustomShardedReplicaTest.java:107)
at 
org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:869)
at sun.reflect.GeneratedMethodAccessor42.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[JENKINS] Lucene-Solr-Tests-trunk-Java7 - Build # 4904 - Failure

2014-10-03 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-trunk-Java7/4904/

1 tests failed.
REGRESSION:  org.apache.solr.cloud.ChaosMonkeySafeLeaderTest.testDistribSearch

Error Message:
expected:<0> but was:<1>

Stack Trace:
java.lang.AssertionError: expected:<0> but was:<1>
at __randomizedtesting.SeedInfo.seed([94F80D5773E7A309:151E834F04B8C335]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at org.junit.Assert.assertEquals(Assert.java:456)
at org.apache.solr.cloud.ChaosMonkeySafeLeaderTest.doTest(ChaosMonkeySafeLeaderTest.java:153)
at org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:869)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at

[JENKINS] Lucene-Solr-Tests-5.x-Java7 - Build # 2154 - Still Failing

2014-10-03 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-5.x-Java7/2154/

1 tests failed.
REGRESSION:  org.apache.solr.cloud.DeleteLastCustomShardedReplicaTest.testDistribSearch

Error Message:
No live SolrServers available to handle this request:[https://127.0.0.1:12203, https://127.0.0.1:12250, https://127.0.0.1:12232]

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: No live SolrServers available to handle this request:[https://127.0.0.1:12203, https://127.0.0.1:12250, https://127.0.0.1:12232]
at __randomizedtesting.SeedInfo.seed([433F666DF74A394E:C2D9E87580155972]:0)
at org.apache.solr.client.solrj.impl.LBHttpSolrServer.request(LBHttpSolrServer.java:322)
at org.apache.solr.client.solrj.impl.CloudSolrServer.sendRequest(CloudSolrServer.java:880)
at org.apache.solr.client.solrj.impl.CloudSolrServer.requestWithRetryOnStaleState(CloudSolrServer.java:658)
at org.apache.solr.client.solrj.impl.CloudSolrServer.request(CloudSolrServer.java:601)
at org.apache.solr.cloud.DeleteLastCustomShardedReplicaTest.removeAndWaitForLastReplicaGone(DeleteLastCustomShardedReplicaTest.java:117)
at org.apache.solr.cloud.DeleteLastCustomShardedReplicaTest.doTest(DeleteLastCustomShardedReplicaTest.java:107)
at org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:869)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at

[JENKINS] Lucene-Solr-trunk-Linux (64bit/jdk1.8.0_40-ea-b04) - Build # 11381 - Failure!

2014-10-03 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/11381/
Java: 64bit/jdk1.8.0_40-ea-b04 -XX:+UseCompressedOops -XX:+UseSerialGC

1 tests failed.
REGRESSION:  org.apache.solr.cloud.DeleteReplicaTest.testDistribSearch

Error Message:
No live SolrServers available to handle this request:[http://127.0.0.1:49119/ov/d, http://127.0.0.1:60075/ov/d, http://127.0.0.1:36939/ov/d, http://127.0.0.1:57509/ov/d, http://127.0.0.1:44093/ov/d]

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: No live SolrServers available to handle this request:[http://127.0.0.1:49119/ov/d, http://127.0.0.1:60075/ov/d, http://127.0.0.1:36939/ov/d, http://127.0.0.1:57509/ov/d, http://127.0.0.1:44093/ov/d]
at __randomizedtesting.SeedInfo.seed([7226C6D3AD3C3D0F:F3C048CBDA635D33]:0)
at org.apache.solr.client.solrj.impl.LBHttpSolrServer.request(LBHttpSolrServer.java:322)
at org.apache.solr.client.solrj.impl.CloudSolrServer.sendRequest(CloudSolrServer.java:880)
at org.apache.solr.client.solrj.impl.CloudSolrServer.requestWithRetryOnStaleState(CloudSolrServer.java:658)
at org.apache.solr.client.solrj.impl.CloudSolrServer.request(CloudSolrServer.java:601)
at org.apache.solr.cloud.DeleteReplicaTest.removeAndWaitForReplicaGone(DeleteReplicaTest.java:172)
at org.apache.solr.cloud.DeleteReplicaTest.deleteLiveReplicaTest(DeleteReplicaTest.java:145)
at org.apache.solr.cloud.DeleteReplicaTest.doTest(DeleteReplicaTest.java:89)
at org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:869)
at sun.reflect.GeneratedMethodAccessor50.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at

[JENKINS] Lucene-Solr-SmokeRelease-5.x - Build # 199 - Failure

2014-10-03 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-5.x/199/

No tests ran.

Build Log:
[...truncated 51231 lines...]
prepare-release-no-sign:
[mkdir] Created dir: /usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/lucene/build/smokeTestRelease/dist
 [copy] Copying 446 files to /usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/lucene/build/smokeTestRelease/dist/lucene
 [copy] Copying 254 files to /usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/lucene/build/smokeTestRelease/dist/solr
   [smoker] Java 1.7 JAVA_HOME=/home/jenkins/tools/java/latest1.7
   [smoker] NOTE: output encoding is US-ASCII
   [smoker] 
   [smoker] Load release URL file:/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/lucene/build/smokeTestRelease/dist/...
   [smoker] 
   [smoker] Test Lucene...
   [smoker]   test basics...
   [smoker]   get KEYS
   [smoker] 0.1 MB in 0.01 sec (18.8 MB/sec)
   [smoker]   check changes HTML...
   [smoker]   download lucene-5.0.0-src.tgz...
   [smoker] 27.7 MB in 0.04 sec (649.8 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download lucene-5.0.0.tgz...
   [smoker] 61.1 MB in 0.14 sec (429.5 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download lucene-5.0.0.zip...
   [smoker] 70.5 MB in 0.10 sec (672.1 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   unpack lucene-5.0.0.tgz...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] test demo with 1.7...
   [smoker]   got 5575 hits for query lucene
   [smoker] checkindex with 1.7...
   [smoker] check Lucene's javadoc JAR
   [smoker]   unpack lucene-5.0.0.zip...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] test demo with 1.7...
   [smoker]   got 5575 hits for query lucene
   [smoker] checkindex with 1.7...
   [smoker] check Lucene's javadoc JAR
   [smoker]   unpack lucene-5.0.0-src.tgz...
   [smoker] make sure no JARs/WARs in src dist...
   [smoker] run ant validate
   [smoker] run tests w/ Java 7 and testArgs='-Dtests.jettyConnector=Socket -Dtests.disableHdfs=true -Dtests.multiplier=1 -Dtests.slow=false'...
   [smoker] test demo with 1.7...
   [smoker]   got 215 hits for query lucene
   [smoker] checkindex with 1.7...
   [smoker] generate javadocs w/ Java 7...
   [smoker] 
   [smoker] Crawl/parse...
   [smoker] 
   [smoker] Verify...
   [smoker]   confirm all releases have coverage in TestBackwardsCompatibility
   [smoker] find all past Lucene releases...
   [smoker] run TestBackwardsCompatibility..
   [smoker] success!
   [smoker] 
   [smoker] Test Solr...
   [smoker]   test basics...
   [smoker]   get KEYS
   [smoker] 0.1 MB in 0.00 sec (44.8 MB/sec)
   [smoker]   check changes HTML...
   [smoker]   download solr-5.0.0-src.tgz...
   [smoker] 33.9 MB in 0.14 sec (236.2 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download solr-5.0.0.tgz...
   [smoker] 143.4 MB in 0.75 sec (192.3 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download solr-5.0.0.zip...
   [smoker] 149.5 MB in 0.70 sec (214.4 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   unpack solr-5.0.0.tgz...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] unpack lucene-5.0.0.tgz...
   [smoker]   **WARNING**: skipping check of /usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/lucene/build/smokeTestRelease/tmp/unpack/solr-5.0.0/contrib/dataimporthandler-extras/lib/activation-1.1.1.jar: it has javax.* classes
   [smoker]   **WARNING**: skipping check of /usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/lucene/build/smokeTestRelease/tmp/unpack/solr-5.0.0/contrib/dataimporthandler-extras/lib/javax.mail-1.5.1.jar: it has javax.* classes
   [smoker] verify WAR metadata/contained JAR identity/no javax.* or java.* classes...
   [smoker] unpack lucene-5.0.0.tgz...
   [smoker] copying unpacked distribution for Java 7 ...
   [smoker] test solr example w/ Java 7...
   [smoker]   start Solr instance (log=/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/lucene/build/smokeTestRelease/tmp/unpack/solr-5.0.0-java7/solr-example.log)...
   [smoker]   startup done
   [smoker]   test utf8...
   [smoker]   index example docs...
   [smoker]   run query...
   [smoker]   stop server (SIGINT)...
   [smoker]   unpack solr-5.0.0.zip...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] unpack lucene-5.0.0.tgz...
   [smoker]   **WARNING**: skipping check of /usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/lucene/build/smokeTestRelease/tmp/unpack/solr-5.0.0/contrib/dataimporthandler-extras/lib/javax.mail-1.5.1.jar: it has javax.* classes

[JENKINS] Lucene-Solr-5.x-Windows (64bit/jdk1.8.0_20) - Build # 4250 - Still Failing!

2014-10-03 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Windows/4250/
Java: 64bit/jdk1.8.0_20 -XX:-UseCompressedOops -XX:+UseSerialGC

1 tests failed.
REGRESSION:  org.apache.solr.cloud.DeleteLastCustomShardedReplicaTest.testDistribSearch

Error Message:
No live SolrServers available to handle this request:[https://127.0.0.1:49755/yk/s, https://127.0.0.1:49740/yk/s, https://127.0.0.1:49764/yk/s]

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: No live SolrServers available to handle this request:[https://127.0.0.1:49755/yk/s, https://127.0.0.1:49740/yk/s, https://127.0.0.1:49764/yk/s]
at __randomizedtesting.SeedInfo.seed([515D820F13088A91:D0BB0C176457EAAD]:0)
at org.apache.solr.client.solrj.impl.LBHttpSolrServer.request(LBHttpSolrServer.java:322)
at org.apache.solr.client.solrj.impl.CloudSolrServer.sendRequest(CloudSolrServer.java:880)
at org.apache.solr.client.solrj.impl.CloudSolrServer.requestWithRetryOnStaleState(CloudSolrServer.java:658)
at org.apache.solr.client.solrj.impl.CloudSolrServer.request(CloudSolrServer.java:601)
at org.apache.solr.cloud.DeleteLastCustomShardedReplicaTest.removeAndWaitForLastReplicaGone(DeleteLastCustomShardedReplicaTest.java:117)
at org.apache.solr.cloud.DeleteLastCustomShardedReplicaTest.doTest(DeleteLastCustomShardedReplicaTest.java:107)
at org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:869)
at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at

tests.monster failing on branch_5x

2014-10-03 Thread Shawn Heisey
I've been running an inclusive set of tests on branch_5x to do what I
can for the release effort.  It kept failing with OOME, so I kept
increasing the heap size. After trying 2GB and 3GB, I finally bumped it
all the way to 8GB and dropped the JVM count to 1, but that resulted in
different problems.  Here's the commandline that I used, followed by the
list of failures:

ant -Dtests.jvms=1 -Dtests.heapsize=8g -Dtests.nightly=true
-Dtests.weekly=true -Dtests.monster=true clean test | tee ~/b5x-testlog.txt

   [junit4] Tests with failures:
   [junit4]   - org.apache.lucene.index.Test2BTerms (suite)
   [junit4]   - org.apache.lucene.index.Test2BNumericDocValues.testNumerics
   [junit4]   - org.apache.lucene.index.Test2BNumericDocValues (suite)
   [junit4]   - org.apache.lucene.index.Test2BSortedDocValues.testFixedSorted
   [junit4]   - org.apache.lucene.index.Test2BSortedDocValues.test2BOrds
   [junit4]   - org.apache.lucene.index.Test2BSortedDocValues (suite)
   [junit4]
   [junit4]
   [junit4] JVM J0: 0.90 .. 76575.00 = 76574.10s
   [junit4] Execution time total: 21 hours 16 minutes 15 seconds

All of them except for Test2BTerms failed because of this problem:

   [junit4] Throwable #1: java.lang.IllegalStateException: number of documents in the index cannot exceed 2147483519
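
For reference, 2147483519 is Integer.MAX_VALUE - 128, the hard per-index
document limit that IndexWriter enforces since LUCENE-5843, if I'm
reading that issue right.  Here's a sketch of the kind of clamp a
Test2B* suite could apply.  It's purely illustrative, not what the tests
currently do; the only thing it assumes is the IndexWriter.MAX_DOCS
constant from that issue:

import org.apache.lucene.index.IndexWriter;

public class MaxDocsClampSketch {
  public static void main(String[] args) {
    // What an older monster test may aim for:
    long requested = Integer.MAX_VALUE;
    // Clamp to IndexWriter's hard limit (2147483519) before adding docs:
    long numDocs = Math.min(requested, IndexWriter.MAX_DOCS);
    System.out.println("indexing " + numDocs + " of " + requested + " requested docs");
  }
}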

Test2BTerms failed for an entirely different reason:

   [junit4]   2> NOTE: reproduce with: ant test  -Dtestcase=Test2BTerms -Dtests.seed=9F2773FB226B1E02 -Dtests.nightly=true -Dtests.weekly=true -Dtests.slow=true -Dtests.locale=es_PE -Dtests.timezone=America/Los_Angeles -Dtests.file.encoding=UTF-8
   [junit4] ERROR   0.00s | Test2BTerms (suite) 
   [junit4] Throwable #1: java.lang.AssertionError: The test or suite printed 3012118 bytes to stdout and stderr, even though the limit was set to 8192 bytes. Increase the limit with @Limit, ignore it completely with @SuppressSysoutChecks or run with -Dtests.verbose=true
   [junit4]    at __randomizedtesting.SeedInfo.seed([9F2773FB226B1E02]:0)
   [junit4]    at java.lang.Thread.run(Thread.java:745)

I'm clueless about how to fix the number of documents going too high.  I
could probably fix the other one, if someone can tell me what the
preferred fix is.
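
For the sysout failure, the error message itself names the two knobs.
If annotating the suite is acceptable, my guess is it would look roughly
like this -- untested, and the bugUrl string is just a placeholder:

import org.apache.lucene.util.LuceneTestCase;
import org.apache.lucene.util.LuceneTestCase.SuppressSysoutChecks;

// Untested sketch: turn off the stdout/stderr byte check for this suite.
@SuppressSysoutChecks(bugUrl = "Test2BTerms prints progress output by design")
public class Test2BTerms extends LuceneTestCase {
  // ...existing test methods unchanged...
}

The other route would keep the check but raise the threshold with
@TestRuleLimitSysouts.Limit(bytes = ...), assuming that rule's
annotation is usable from the test -- I haven't checked.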

I haven't tried this on the 4_10 branch, because it takes so long to
run.  I've started a similar commandline in branch_5x/solr to see what
happens.

Thanks,
Shawn


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org