[JENKINS] Lucene-Solr-trunk-MacOSX (64bit/jdk1.8.0) - Build # 1908 - Still Failing!

2014-10-27 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-MacOSX/1908/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  org.apache.solr.cloud.ShardSplitTest.testDistribSearch

Error Message:
Timeout occured while waiting response from server at: http://127.0.0.1:54227/uh

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: Timeout occured while waiting response from server at: http://127.0.0.1:54227/uh
	at __randomizedtesting.SeedInfo.seed([50ABDED3DC7409A0:D14D50CBAB2B699C]:0)
	at org.apache.solr.client.solrj.impl.HttpSolrServer.executeMethod(HttpSolrServer.java:581)
	at org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:215)
	at org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:211)
	at org.apache.solr.cloud.ShardSplitTest.splitShard(ShardSplitTest.java:532)
	at org.apache.solr.cloud.ShardSplitTest.incompleteOrOverlappingCustomRangeTest(ShardSplitTest.java:151)
	at org.apache.solr.cloud.ShardSplitTest.doTest(ShardSplitTest.java:103)
	at org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:869)
	at sun.reflect.GeneratedMethodAccessor78.invoke(Unknown Source)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:483)
	at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
	at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
	at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
	at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
	at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
	at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
	at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
	at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
	at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
	at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
	at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
	at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
	at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
	at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
	at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 

[jira] [Created] (SOLR-6656) FreeTextLookupFactory QUERY_ANALYZER constants value should be suggestAnalyzerFieldType

2014-10-27 Thread Varun Thacker (JIRA)
Varun Thacker created SOLR-6656:
---

 Summary: FreeTextLookupFactory QUERY_ANALYZER constants value 
should be suggestAnalyzerFieldType
 Key: SOLR-6656
 URL: https://issues.apache.org/jira/browse/SOLR-6656
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.10.1, 4.10, 4.9.1, 4.9, 4.8.1, 4.8
Reporter: Varun Thacker
 Fix For: 5.0, Trunk


In our lookup factories, the value of the constant QUERY_ANALYZER is:

AnalyzingLookupFactory and AnalyzingInfixLookupFactory = suggestAnalyzerFieldType

FuzzyLookupFactory refers to AnalyzingLookupFactory.QUERY_ANALYZER, while 
FreeTextLookupFactory uses suggestFreeTextAnalyzerFieldType.

I think we should keep the constant QUERY_ANALYZER in LookupFactory and let all 
the factories that use it reference it there.

The patch would be very simple, but should we deprecate in branch_5x and remove 
in trunk, or remove it in both branches?
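For reference, the proposed consolidation could look roughly like this (a sketch with minimal stand-in class bodies, not the real Solr sources):

```java
// Sketch: hoist QUERY_ANALYZER into the shared LookupFactory base class so
// every factory reads the same config key.
abstract class LookupFactory {
    public static final String QUERY_ANALYZER = "suggestAnalyzerFieldType";
}

class AnalyzingLookupFactory extends LookupFactory {
    // No longer declares its own QUERY_ANALYZER; inherits the shared one.
}

class FreeTextLookupFactory extends LookupFactory {
    // Previously used "suggestFreeTextAnalyzerFieldType"; after the change
    // it would read LookupFactory.QUERY_ANALYZER like the others.
}
```

All factories then resolve the same key, which is the consistency the issue asks for.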



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-6657) DocumentDictionaryFactory requires weightField to be mandatory, but it shouldn't

2014-10-27 Thread Varun Thacker (JIRA)
Varun Thacker created SOLR-6657:
---

 Summary:  DocumentDictionaryFactory requires weightField to be 
mandatory, but it shouldn't
 Key: SOLR-6657
 URL: https://issues.apache.org/jira/browse/SOLR-6657
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.10.1, 4.10, 4.9.1, 4.9, 4.8.1, 4.8
Reporter: Varun Thacker
 Fix For: 5.0, Trunk


DocumentDictionaryFactory requires weightField, but it doesn't need to, since 
DocumentDictionary allows the weight field to be null.

So one has to define a weight field in solrconfig.xml even if their data 
doesn't contain any weights. We shouldn't make weightField mandatory in 
DocumentDictionaryFactory.
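A minimal sketch of the proposed behavior (hypothetical; the real factory reads its params from Solr's NamedList, and the helper name here is invented):

```java
import java.util.HashMap;
import java.util.Map;

// Sketch: weightField becomes optional. DocumentDictionary already accepts
// a null weight field, so the factory can simply pass the absence through
// instead of throwing.
class DictionaryParams {
    static String resolveWeightField(Map<String, String> params) {
        // Before: throw if "weightField" was missing. After: null means
        // "no weights configured".
        return params.get("weightField");
    }
}
```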






[jira] [Created] (SOLR-6658) SearchHandler should accept POST requests with JSON data in content stream for customized plug-in components

2014-10-27 Thread Mark Peng (JIRA)
Mark Peng created SOLR-6658:
---

 Summary: SearchHandler should accept POST requests with JSON data 
in content stream for customized plug-in components
 Key: SOLR-6658
 URL: https://issues.apache.org/jira/browse/SOLR-6658
 Project: Solr
  Issue Type: Improvement
  Components: search, SearchComponents - other
Affects Versions: 4.10.1, 4.10, 4.9.1, 4.9, 4.8.1, 4.8, 4.7.2, 4.7.1, 4.7
Reporter: Mark Peng


This issue relates to the following one:
*Return HTTP error on POST requests with no Content-Type*
[https://issues.apache.org/jira/browse/SOLR-5517]

The original consideration of the above is to make sure that incoming POST 
requests to SearchHandler have a corresponding content-type specified. That is 
quite reasonable; however, the following lines in the patch cause it to reject 
all POST requests with content stream data, which is not necessary for that issue:

{code}
Index: solr/core/src/java/org/apache/solr/handler/component/SearchHandler.java
===================================================================
--- solr/core/src/java/org/apache/solr/handler/component/SearchHandler.java (revision 1546817)
+++ solr/core/src/java/org/apache/solr/handler/component/SearchHandler.java (working copy)
@@ -22,9 +22,11 @@
 import java.util.List;
 
 import org.apache.solr.common.SolrException;
+import org.apache.solr.common.SolrException.ErrorCode;
 import org.apache.solr.common.params.CommonParams;
 import org.apache.solr.common.params.ModifiableSolrParams;
 import org.apache.solr.common.params.ShardParams;
+import org.apache.solr.common.util.ContentStream;
 import org.apache.solr.core.CloseHook;
 import org.apache.solr.core.PluginInfo;
 import org.apache.solr.core.SolrCore;
@@ -165,6 +167,10 @@
   {
     // int sleep = req.getParams().getInt("sleep",0);
     // if (sleep > 0) {log.error("SLEEPING for " + sleep);  Thread.sleep(sleep);}
+    if (req.getContentStreams() != null && req.getContentStreams().iterator().hasNext()) {
+      throw new SolrException(ErrorCode.BAD_REQUEST, "Search requests cannot accept content streams");
+    }
+
     ResponseBuilder rb = new ResponseBuilder(req, rsp, components);
     if (rb.requestInfo != null) {
       rb.requestInfo.setResponseBuilder(rb);
{code}

We are using Solr 4.5.1 in our production services and are considering 
upgrading to 4.9/5.0 for more features. But due to this issue we cannot 
upgrade, because we have some important customized SearchComponent plug-ins 
that need to get POST data from SearchHandler for further processing.

Therefore, we are asking whether it is possible to remove the content stream 
constraint shown above and let SearchHandler accept POST requests with 
*Content-Type: application/json*, so that further components can get the data.
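One possible relaxation, sketched below (hypothetical, not a committed patch; the method name shouldReject is invented): reject only content streams that are not JSON, rather than all of them.

```java
// Sketch of a looser guard for SearchHandler: keep rejecting POST bodies
// the handler cannot interpret, but let JSON through for downstream
// components. Illustration only, not Solr's actual code.
class ContentStreamGuard {
    static boolean shouldReject(String contentType) {
        if (contentType == null) {
            return true; // the SOLR-5517 case: no Content-Type at all
        }
        // Allow "application/json", optionally with a charset suffix.
        return !contentType.startsWith("application/json");
    }
}
```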

Thank you.

Best regards,
Mark Peng







[jira] [Created] (SOLR-6659) Deprecate/Remove Suggester.java (old style of building suggesters)

2014-10-27 Thread Varun Thacker (JIRA)
Varun Thacker created SOLR-6659:
---

 Summary: Deprecate/Remove Suggester.java (old style of building 
suggesters)
 Key: SOLR-6659
 URL: https://issues.apache.org/jira/browse/SOLR-6659
 Project: Solr
  Issue Type: Improvement
Reporter: Varun Thacker
 Fix For: 5.0, Trunk


SOLR-5378 added the new SuggestComponent in Solr 4.7.

I think we can deprecate/remove Suggester.java, which is now the old style of 
building suggesters.

Any thoughts on what the correct approach should be?






[jira] [Commented] (SOLR-6658) SearchHandler should accept POST requests with JSON data in content stream for customized plug-in components

2014-10-27 Thread Evan (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6658?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14184950#comment-14184950
 ] 

Evan commented on SOLR-6658:


+1

 SearchHandler should accept POST requests with JSON data in content stream 
 for customized plug-in components
 

 Key: SOLR-6658
 URL: https://issues.apache.org/jira/browse/SOLR-6658
 Project: Solr
  Issue Type: Improvement
  Components: search, SearchComponents - other
Affects Versions: 4.7, 4.7.1, 4.7.2, 4.8, 4.8.1, 4.9, 4.9.1, 4.10, 4.10.1
Reporter: Mark Peng







[jira] [Issue Comment Deleted] (SOLR-6658) SearchHandler should accept POST requests with JSON data in content stream for customized plug-in components

2014-10-27 Thread Evan (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6658?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Evan updated SOLR-6658:
---
Comment: was deleted

(was: +1)

 SearchHandler should accept POST requests with JSON data in content stream 
 for customized plug-in components
 

 Key: SOLR-6658
 URL: https://issues.apache.org/jira/browse/SOLR-6658
 Project: Solr
  Issue Type: Improvement
  Components: search, SearchComponents - other
Affects Versions: 4.7, 4.7.1, 4.7.2, 4.8, 4.8.1, 4.9, 4.9.1, 4.10, 4.10.1
Reporter: Mark Peng







[jira] [Created] (SOLR-6660) Improve the usability for the new Suggester

2014-10-27 Thread Varun Thacker (JIRA)
Varun Thacker created SOLR-6660:
---

 Summary: Improve the usability for the new Suggester
 Key: SOLR-6660
 URL: https://issues.apache.org/jira/browse/SOLR-6660
 Project: Solr
  Issue Type: Improvement
Reporter: Varun Thacker
 Fix For: 5.0, Trunk


Creating a parent Jira to track the issues which need to be fixed to improve 
the experience of using the suggester in Solr.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-6659) Deprecate/Remove Suggester.java (old style of building suggesters)

2014-10-27 Thread Varun Thacker (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6659?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Thacker updated SOLR-6659:

Issue Type: Sub-task  (was: Improvement)
Parent: SOLR-6660

 Deprecate/Remove Suggester.java (old style of building suggesters)
 --

 Key: SOLR-6659
 URL: https://issues.apache.org/jira/browse/SOLR-6659
 Project: Solr
  Issue Type: Sub-task
Reporter: Varun Thacker
  Labels: suggester
 Fix For: 5.0, Trunk


 SOLR-5378 added the new SuggestComponent in Solr 4.7
 I think we can deprecate/remove Suggester.java which is now the old style of 
 building suggesters.
 Any thoughts on what should be the correct approach?






[jira] [Commented] (SOLR-6658) SearchHandler should accept POST requests with JSON data in content stream for customized plug-in components

2014-10-27 Thread chiehheng.lin (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6658?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14184958#comment-14184958
 ] 

chiehheng.lin commented on SOLR-6658:
-

I've got the same problem. Anybody else?

 SearchHandler should accept POST requests with JSON data in content stream 
 for customized plug-in components
 

 Key: SOLR-6658
 URL: https://issues.apache.org/jira/browse/SOLR-6658
 Project: Solr
  Issue Type: Improvement
  Components: search, SearchComponents - other
Affects Versions: 4.7, 4.7.1, 4.7.2, 4.8, 4.8.1, 4.9, 4.9.1, 4.10, 4.10.1
Reporter: Mark Peng







[jira] [Updated] (SOLR-6657) DocumentDictionaryFactory requires weightField to be mandatory, but it shouldn't

2014-10-27 Thread Varun Thacker (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6657?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Thacker updated SOLR-6657:

Issue Type: Sub-task  (was: Bug)
Parent: SOLR-6660

  DocumentDictionaryFactory requires weightField to be mandatory, but it 
 shouldn't
 -

 Key: SOLR-6657
 URL: https://issues.apache.org/jira/browse/SOLR-6657
 Project: Solr
  Issue Type: Sub-task
Affects Versions: 4.8, 4.8.1, 4.9, 4.9.1, 4.10, 4.10.1
Reporter: Varun Thacker
  Labels: suggester
 Fix For: 5.0, Trunk


  DocumentDictionaryFactory requires weightField to be mandatory, but it 
 doesn't need to as DocumentDictionary allows it to be null
 So one has to define the weight field in the solrconfig.xml even if their 
 data doesn't contain any weights. We shouldn't make the weightField mandatory 
 in DocumentDictionaryFactory






[jira] [Updated] (SOLR-6648) AnalyzingInfixLookupFactory always highlights suggestions

2014-10-27 Thread Varun Thacker (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6648?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Thacker updated SOLR-6648:

Issue Type: Sub-task  (was: Bug)
Parent: SOLR-6660

 AnalyzingInfixLookupFactory always highlights suggestions
 -

 Key: SOLR-6648
 URL: https://issues.apache.org/jira/browse/SOLR-6648
 Project: Solr
  Issue Type: Sub-task
Affects Versions: 4.9, 4.9.1, 4.10, 4.10.1
Reporter: Varun Thacker
  Labels: suggester
 Fix For: 5.0, Trunk


 When using AnalyzingInfixLookupFactory, suggestions always come back with the 
 matched term highlighted, and 'allTermsRequired' is always set to true.
 We should be able to configure both.
 Steps to reproduce - 
 solrconfig additions
 {code}
 <searchComponent name="suggest" class="solr.SuggestComponent">
   <lst name="suggester">
     <str name="name">mySuggester</str>
     <str name="lookupImpl">AnalyzingInfixLookupFactory</str>
     <str name="dictionaryImpl">DocumentDictionaryFactory</str>
     <str name="field">suggestField</str>
     <str name="weightField">weight</str>
     <str name="suggestAnalyzerFieldType">textSuggest</str>
   </lst>
 </searchComponent>
 <requestHandler name="/suggest" class="solr.SearchHandler" startup="lazy">
   <lst name="defaults">
     <str name="suggest">true</str>
     <str name="suggest.count">10</str>
   </lst>
   <arr name="components">
     <str>suggest</str>
   </arr>
 </requestHandler>
 {code}
 schema changes -
 {code}
 <fieldType class="solr.TextField" name="textSuggest" positionIncrementGap="100">
   <analyzer>
     <tokenizer class="solr.StandardTokenizerFactory"/>
     <filter class="solr.StandardFilterFactory"/>
     <filter class="solr.LowerCaseFilterFactory"/>
   </analyzer>
 </fieldType>
 <field name="suggestField" type="textSuggest" indexed="true" stored="true"/>
 {code}
 Add 3 documents - 
 {code}
 curl 'http://localhost:8983/solr/update/json?commit=true' -H 'Content-type:application/json' -d '
 [ {"id" : "1", "suggestField" : "bass fishing"}, {"id" : "2", "suggestField" : "sea bass"}, {"id" : "3", "suggestField" : "sea bass fishing"} ]
 '
 {code}
 Query -
 {code}
 http://localhost:8983/solr/collection1/suggest?suggest.build=true&suggest.dictionary=mySuggester&q=bass&wt=json&indent=on
 {code}
 Response 
 {code}
 {
   "responseHeader":{
     "status":0,
     "QTime":25},
   "command":"build",
   "suggest":{"mySuggester":{
     "bass":{
       "numFound":3,
       "suggestions":[{
           "term":"<b>bass</b> fishing",
           "weight":0,
           "payload":""},
         {
           "term":"sea <b>bass</b>",
           "weight":0,
           "payload":""},
         {
           "term":"sea <b>bass</b> fishing",
           "weight":0,
           "payload":""}]}}}}
 {code}
 The problem is in SolrSuggester line 200, where we call lookup.lookup().
 That call does not take allTermsRequired and doHighlight, since those are 
 tunable only on AnalyzingInfixSuggester and not on the other lookup 
 implementations.
 If different Lookup implementations take different params in their 
 constructors, this sort of issue will keep happening. Maybe we should not 
 keep it generic, and instead do instanceof checks and set params 
 accordingly?
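The instanceof idea could take roughly this shape (a sketch with stand-in types; the real SolrSuggester and Lucene Lookup APIs differ):

```java
import java.util.Arrays;
import java.util.List;

// Stand-in for the Lookup hierarchy: only the infix suggester has the
// allTermsRequired / doHighlight knobs, so the caller dispatches on type.
abstract class LookupSketch {
    abstract List<String> lookup(String key);
}

class InfixLookupSketch extends LookupSketch {
    @Override
    List<String> lookup(String key) {
        return lookup(key, true, true); // the current hard-wired defaults
    }

    List<String> lookup(String key, boolean allTermsRequired, boolean doHighlight) {
        String term = doHighlight ? "<b>" + key + "</b>" : key;
        return Arrays.asList(term + " fishing");
    }
}

class SuggesterSketch {
    // Pass the extra params only to implementations that understand them.
    static List<String> suggest(LookupSketch lookup, String key,
                                boolean allTermsRequired, boolean doHighlight) {
        if (lookup instanceof InfixLookupSketch) {
            return ((InfixLookupSketch) lookup).lookup(key, allTermsRequired, doHighlight);
        }
        return lookup.lookup(key); // other impls have no such knobs
    }
}
```

With this shape, highlighting can be switched off per request while non-infix lookups keep their generic code path.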






[jira] [Updated] (SOLR-6656) FreeTextLookupFactory QUERY_ANALYZER constants value should be suggestAnalyzerFieldType

2014-10-27 Thread Varun Thacker (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6656?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Thacker updated SOLR-6656:

Issue Type: Sub-task  (was: Bug)
Parent: SOLR-6660

 FreeTextLookupFactory QUERY_ANALYZER constants value should be 
 suggestAnalyzerFieldType
 -

 Key: SOLR-6656
 URL: https://issues.apache.org/jira/browse/SOLR-6656
 Project: Solr
  Issue Type: Sub-task
Affects Versions: 4.8, 4.8.1, 4.9, 4.9.1, 4.10, 4.10.1
Reporter: Varun Thacker
  Labels: suggester
 Fix For: 5.0, Trunk


 In our lookup factories, the value for the constant QUERY_ANALYZER is :
 AnalyzingLookupFactory and AnalyzingInfixLookupFactory = 
 suggestAnalyzerFieldType
 FuzzyLookupFactory refers to AnalyzingLookupFactory.QUERY_ANALYZER
 While FreeTextLookupFactory uses suggestFreeTextAnalyzerFieldType
 I think we should keep the constant QUERY_ANALYZER in LookupFactory and let 
 all the factories that use them reference it.
 Patch would be very simple but should we deprecate in branch_5x and remove it 
 in trunk , or remove it in both branches?






[jira] [Updated] (SOLR-6246) Core fails to reload when AnalyzingInfixSuggester is used as a Suggester

2014-10-27 Thread Varun Thacker (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6246?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Thacker updated SOLR-6246:

Issue Type: Sub-task  (was: Bug)
Parent: SOLR-6660

 Core fails to reload when AnalyzingInfixSuggester is used as a Suggester
 

 Key: SOLR-6246
 URL: https://issues.apache.org/jira/browse/SOLR-6246
 Project: Solr
  Issue Type: Sub-task
Affects Versions: 4.8, 4.8.1, 4.9
Reporter: Varun Thacker
 Fix For: 4.10, Trunk

 Attachments: SOLR-6246-test.patch, SOLR-6246-test.patch, 
 SOLR-6246.patch


 LUCENE-5477 added near-real-time suggest building to AnalyzingInfixSuggester. 
 One of the changes that went in is that a writer is now kept open to support 
 real-time updates via the add() and update() methods.
 When we call Solr's reload command, a new instance of AnalyzingInfixSuggester 
 is created. When it tries to open a new writer on the same Directory, the lock 
 cannot be obtained, and Solr fails to reload the core.
 Also, when AnalyzingInfixLookupFactory throws a RuntimeException, we should 
 pass along the original message.
 I am not sure what the approach to fix this should be. Should we have a 
 reloadHook where we close the writer?
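The reload-hook idea might look like this (a sketch with stand-in types; Solr's real CloseHook interface receives the SolrCore rather than a Runnable):

```java
import java.util.ArrayList;
import java.util.List;

// Stand-ins: the suggester registers a hook so the core closes the
// suggester's writer on reload, releasing the Directory's write lock
// before the new core instance tries to acquire it.
class SuggesterWriter {
    boolean closed = false;
    void close() { closed = true; } // would close the underlying IndexWriter
}

class CoreSketch {
    private final List<Runnable> closeHooks = new ArrayList<>();
    void addCloseHook(Runnable hook) { closeHooks.add(hook); }
    void close() { closeHooks.forEach(Runnable::run); } // runs on reload too
}
```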






[jira] [Commented] (LUCENE-6024) Improve oal.util.BitSet's bulk and/or/and_not

2014-10-27 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6024?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14184962#comment-14184962
 ] 

Adrien Grand commented on LUCENE-6024:
--

The impact of this change is also noticeable on the charts at 
http://people.apache.org/~jpountz/doc_id_sets6.html: the sparse set is now 
almost always as fast to build from another DocIdSet as a FixedBitSet.

 Improve oal.util.BitSet's bulk and/or/and_not
 -

 Key: LUCENE-6024
 URL: https://issues.apache.org/jira/browse/LUCENE-6024
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Adrien Grand
Assignee: Adrien Grand
Priority: Minor
 Fix For: 5.0

 Attachments: LUCENE-6024.patch


 LUCENE-6021 introduced oal.util.BitSet with default impls taken from 
 FixedBitSet. However, these default impls could be more efficient (and eg. 
 perform an actual leap frog for AND and AND_NOT).
 Additionally, SparseFixedBitSet could benefit from some specialization.
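The kind of word-at-a-time bulk operation under discussion can be sketched like this (illustrative only; not Lucene's actual FixedBitSet code):

```java
// A 64-bit-word bit set: bulk OR processes one long per iteration rather
// than testing bits individually, which is the sort of specialization the
// default impls inherited from LUCENE-6021 leave on the table.
class BitsSketch {
    final long[] words;

    BitsSketch(int numBits) {
        words = new long[(numBits + 63) >>> 6]; // one long per 64 bits
    }

    void set(int i) { words[i >>> 6] |= 1L << (i & 63); }

    boolean get(int i) { return (words[i >>> 6] & (1L << (i & 63))) != 0; }

    // Bulk OR: 64 bits per loop iteration.
    void or(BitsSketch other) {
        int n = Math.min(words.length, other.words.length);
        for (int w = 0; w < n; w++) {
            words[w] |= other.words[w];
        }
    }
}
```

AND and AND_NOT specialize the same way, and a sparse impl can additionally leap-frog over empty blocks.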






[jira] [Commented] (SOLR-6660) Improve the usability for the new Suggester

2014-10-27 Thread Varun Thacker (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6660?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14184963#comment-14184963
 ] 

Varun Thacker commented on SOLR-6660:
-

I could not make LUCENE-5833 into a sub-task but we should fix that too.

 Improve the usability for the new Suggester
 ---

 Key: SOLR-6660
 URL: https://issues.apache.org/jira/browse/SOLR-6660
 Project: Solr
  Issue Type: Improvement
Reporter: Varun Thacker
  Labels: suggester
 Fix For: 5.0, Trunk


 Creating a parent Jira to track the issues which need to be fixed to improve 
 the experience of using the suggester in Solr.






[jira] [Commented] (LUCENE-6024) Improve oal.util.BitSet's bulk and/or/and_not

2014-10-27 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6024?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14184968#comment-14184968
 ] 

ASF subversion and git services commented on LUCENE-6024:
-

Commit 1634478 from [~jpountz] in branch 'dev/trunk'
[ https://svn.apache.org/r1634478 ]

LUCENE-6024: Speed-up BitSet.or/and/andNot.

 Improve oal.util.BitSet's bulk and/or/and_not
 -

 Key: LUCENE-6024
 URL: https://issues.apache.org/jira/browse/LUCENE-6024
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Adrien Grand
Assignee: Adrien Grand
Priority: Minor
 Fix For: 5.0

 Attachments: LUCENE-6024.patch


 LUCENE-6021 introduced oal.util.BitSet with default impls taken from 
 FixedBitSet. However, these default impls could be more efficient (and eg. 
 perform an actual leap frog for AND and AND_NOT).
 Additionally, SparseFixedBitSet could benefit from some specialization.






[jira] [Commented] (LUCENE-6024) Improve oal.util.BitSet's bulk and/or/and_not

2014-10-27 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6024?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14184969#comment-14184969
 ] 

ASF subversion and git services commented on LUCENE-6024:
-

Commit 1634479 from [~jpountz] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1634479 ]

LUCENE-6024: Speed-up BitSet.or/and/andNot.

 Improve oal.util.BitSet's bulk and/or/and_not
 -

 Key: LUCENE-6024
 URL: https://issues.apache.org/jira/browse/LUCENE-6024
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Adrien Grand
Assignee: Adrien Grand
Priority: Minor
 Fix For: 5.0

 Attachments: LUCENE-6024.patch


 LUCENE-6021 introduced oal.util.BitSet with default impls taken from 
 FixedBitSet. However, these default impls could be more efficient (and eg. 
 perform an actual leap frog for AND and AND_NOT).
 Additionally, SparseFixedBitSet could benefit from some specialization.






[jira] [Commented] (SOLR-6376) Edismax field alias bug

2014-10-27 Thread Thomas Egense (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6376?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14184981#comment-14184981
 ] 

Thomas Egense commented on SOLR-6376:
-

Thank you for looking into this bug. I tested your patch and can confirm it 
passes the unit tests, and you have indeed set all the parameters correctly for 
the unit test.
But I also back-ported your patch to a bugged version (4.9.1) and the unit test 
still passed! So somehow this bug cannot be reproduced by this unit-test setup.
I also tested my bug on the solr-example that comes with 4.10.1, and the bug can 
still be reproduced very easily with the setup I described, by adding one line 
to the /browse request handler (<str name="f.name_features.qf">name features 
XXX</str>)
and running the two queries:
 name_features:video (works)
 name_features:video AND name_features:video (bugs)

I just realized this bug is a duplicate of SOLR-5052, which is also still 
unresolved. The setup to reproduce the bug is simpler in my description, as 
I boiled it down to the essentials.

Thomas Egense




 Edismax field alias bug
 ---

 Key: SOLR-6376
 URL: https://issues.apache.org/jira/browse/SOLR-6376
 Project: Solr
  Issue Type: Bug
  Components: query parsers
Affects Versions: 4.6.1, 4.7, 4.7.2, 4.8, 4.9
Reporter: Thomas Egense
Priority: Minor
  Labels: difficulty-easy, edismax, impact-low
 Attachments: SOLR-6376.patch


 If you create a field alias that maps to a nonexistent field, the query will 
 be parsed to utter garbage.
 The bug can be reproduced very easily. Add the following line to the /browse 
 request handler in the tutorial example solrconfig.xml:
 <str name="f.name_features.qf">name features XXX</str>
 (XXX is a nonexistent field)
 This simple query will actually work correctly: 
 name_features:video
 and it will be parsed to (features:video | name:video) and return 3 results. 
 It has simply discarded the nonexistent field and the result set is correct.
 However, if you change the query to:
 name_features:video AND name_features:video
 you will now get 0 results and the query is parsed to 
 +(((features:video | name:video) (id:AND^10.0 | author:and^2.0 | 
 title:and^10.0 | cat:AND^1.4 | text:and^0.5 | keywords:and^5.0 | manu:and^1.1 
 | description:and^5.0 | resourcename:and | name:and^1.2 | features:and) 
 (features:video | name:video))~3)
 Notice the AND operator is now used as a term! The parsed query can turn out 
 even worse and produce query parts such as:
 title:2~2
 title:and^2.0^10.0
 Preferred solution: During startup, shut down Solr if there is a nonexistent 
 field alias, just as is the case when the cycle-detection detects a cycle.
 Acceptable solution: Ignore the nonexistent field entirely.
 Thomas Egense






[jira] [Commented] (SOLR-6658) SearchHandler should accept POST requests with JSON data in content stream for customized plug-in components

2014-10-27 Thread BearChen (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6658?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14184993#comment-14184993
 ] 

BearChen commented on SOLR-6658:


Hi Solr developers, our company also has this problem: we need to get POST 
data from SearchHandler. Please consider whether to enable this feature. 
I think there should be no security issues here, right? Please enable this 
function, or give us an option to do so, if possible.

 SearchHandler should accept POST requests with JSON data in content stream 
 for customized plug-in components
 

 Key: SOLR-6658
 URL: https://issues.apache.org/jira/browse/SOLR-6658
 Project: Solr
  Issue Type: Improvement
  Components: search, SearchComponents - other
Affects Versions: 4.7, 4.7.1, 4.7.2, 4.8, 4.8.1, 4.9, 4.9.1, 4.10, 4.10.1
Reporter: Mark Peng

 This issue relates to the following one:
 *Return HTTP error on POST requests with no Content-Type*
 [https://issues.apache.org/jira/browse/SOLR-5517]
 The original consideration of the above is to make sure that incoming POST 
 requests to SearchHandler have a corresponding content-type specified. That is 
 quite reasonable; however, the following lines in the patch cause all POST 
 requests with content stream data to be rejected, which is not necessary for 
 that issue:
 {code}
 Index: solr/core/src/java/org/apache/solr/handler/component/SearchHandler.java
 ===================================================================
 --- solr/core/src/java/org/apache/solr/handler/component/SearchHandler.java (revision 1546817)
 +++ solr/core/src/java/org/apache/solr/handler/component/SearchHandler.java (working copy)
 @@ -22,9 +22,11 @@
  import java.util.List;
 
  import org.apache.solr.common.SolrException;
 +import org.apache.solr.common.SolrException.ErrorCode;
  import org.apache.solr.common.params.CommonParams;
  import org.apache.solr.common.params.ModifiableSolrParams;
  import org.apache.solr.common.params.ShardParams;
 +import org.apache.solr.common.util.ContentStream;
  import org.apache.solr.core.CloseHook;
  import org.apache.solr.core.PluginInfo;
  import org.apache.solr.core.SolrCore;
 @@ -165,6 +167,10 @@
    {
      // int sleep = req.getParams().getInt("sleep",0);
      // if (sleep > 0) {log.error("SLEEPING for " + sleep); Thread.sleep(sleep);}
 +    if (req.getContentStreams() != null && req.getContentStreams().iterator().hasNext()) {
 +      throw new SolrException(ErrorCode.BAD_REQUEST, "Search requests cannot accept content streams");
 +    }
 +
      ResponseBuilder rb = new ResponseBuilder(req, rsp, components);
      if (rb.requestInfo != null) {
        rb.requestInfo.setResponseBuilder(rb);
 {code}
 We are using Solr 4.5.1 in our production services and are considering an 
 upgrade to 4.9/5.0 to support more features. But due to this issue, we do not 
 have a chance to upgrade, because we have some important customized 
 SearchComponent plug-ins that need to get POST data from SearchHandler for 
 further processing.
 Therefore, we are asking whether it is possible to remove the content stream 
 constraint shown above and let SearchHandler accept POST requests with 
 *Content-Type: application/json* so that further components can get the data.
 Thank you.
 Best regards,
 Mark Peng
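As background for the request, a plug-in component that consumes the POST body typically drains the content stream into a string before parsing it as JSON. A self-contained sketch of that draining step (Solr's ContentStream is modeled here by a plain java.io.Reader; the class and method names are illustrative, not Solr API):

```java
import java.io.IOException;
import java.io.Reader;
import java.io.StringReader;
import java.io.UncheckedIOException;

public class StreamDrain {
    // Reads an entire character stream into a String, as a component would do
    // with the reader obtained from a request's content stream before parsing.
    static String drain(Reader reader) {
        try {
            StringBuilder sb = new StringBuilder();
            char[] buf = new char[8192];
            int n;
            while ((n = reader.read(buf)) != -1) {
                sb.append(buf, 0, n);
            }
            return sb.toString();
        } catch (IOException e) {
            // Surface I/O problems without a checked exception for brevity.
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(drain(new StringReader("{\"q\":\"video\"}")));
    }
}
```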






[jira] [Commented] (SOLR-3191) field exclusion from fl

2014-10-27 Thread Roman Kliewer (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3191?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14184996#comment-14184996
 ] 

Roman Kliewer commented on SOLR-3191:
-

I think this feature should be given higher priority, since atomic updates 
require _all fields stored_.

In my case there are millions of documents and I need to frequently update 
the ACL field, so storing the default search field is required. This of course 
causes much slower searches, because the default search field is returned 
every time (it cannot be excluded) and almost all other fields are dynamic.

IMHO the absence of this feature renders the atomic update feature completely 
unusable.

 field exclusion from fl
 ---

 Key: SOLR-3191
 URL: https://issues.apache.org/jira/browse/SOLR-3191
 Project: Solr
  Issue Type: Improvement
Reporter: Luca Cavanna
Priority: Minor
 Attachments: SOLR-3191.patch, SOLR-3191.patch, SOLR-3191.patch


 I think it would be useful to add a way to exclude fields from the Solr 
 response. If I have, for example, 100 stored fields and I want to return all of 
 them but one, it would be handy to list just the one field I want to exclude 
 instead of the 99 fields for inclusion through fl.
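The requested behavior amounts to resolving the returned field list as "all stored fields minus an exclusion set". A sketch with hypothetical names (this is not Solr code, just the set arithmetic the feature would perform):

```java
import java.util.List;
import java.util.Set;
import java.util.stream.Collectors;

public class FieldExclusion {
    // Returns the stored fields minus an exclusion set -- what a hypothetical
    // fl exclusion syntax would compute for the response writer.
    static List<String> resolve(List<String> storedFields, Set<String> excluded) {
        return storedFields.stream()
                .filter(f -> !excluded.contains(f))
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        System.out.println(resolve(List.of("id", "name", "body"), Set.of("body")));  // [id, name]
    }
}
```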






[jira] [Updated] (SOLR-6376) Edismax field alias bug

2014-10-27 Thread Thomas Egense (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6376?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Egense updated SOLR-6376:

Affects Version/s: 4.10.1

 Edismax field alias bug
 ---

 Key: SOLR-6376
 URL: https://issues.apache.org/jira/browse/SOLR-6376
 Project: Solr
  Issue Type: Bug
  Components: query parsers
Affects Versions: 4.6.1, 4.7, 4.7.2, 4.8, 4.9, 4.10.1
Reporter: Thomas Egense
Priority: Minor
  Labels: difficulty-easy, edismax, impact-low
 Attachments: SOLR-6376.patch


 If you create a field alias that maps to a nonexistent field, the query will 
 be parsed to utter garbage.
 The bug can be reproduced very easily. Add the following line to the /browse 
 request handler in the tutorial example solrconfig.xml:
 <str name="f.name_features.qf">name features XXX</str>
 (XXX is a nonexistent field)
 This simple query will actually work correctly: 
 name_features:video
 and it will be parsed to (features:video | name:video) and return 3 results. 
 It has simply discarded the nonexistent field and the result set is correct.
 However, if you change the query to:
 name_features:video AND name_features:video
 you will now get 0 results and the query is parsed to 
 +(((features:video | name:video) (id:AND^10.0 | author:and^2.0 | 
 title:and^10.0 | cat:AND^1.4 | text:and^0.5 | keywords:and^5.0 | manu:and^1.1 
 | description:and^5.0 | resourcename:and | name:and^1.2 | features:and) 
 (features:video | name:video))~3)
 Notice the AND operator is now used as a term! The parsed query can turn out 
 even worse and produce query parts such as:
 title:2~2
 title:and^2.0^10.0
 Preferred solution: During startup, shut down Solr if there is a nonexistent 
 field alias, just as is the case when the cycle-detection detects a cycle.
 Acceptable solution: Ignore the nonexistent field entirely.
 Thomas Egense






[jira] [Resolved] (LUCENE-6024) Improve oal.util.BitSet's bulk and/or/and_not

2014-10-27 Thread Adrien Grand (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6024?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrien Grand resolved LUCENE-6024.
--
Resolution: Fixed

 Improve oal.util.BitSet's bulk and/or/and_not
 -

 Key: LUCENE-6024
 URL: https://issues.apache.org/jira/browse/LUCENE-6024
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Adrien Grand
Assignee: Adrien Grand
Priority: Minor
 Fix For: 5.0

 Attachments: LUCENE-6024.patch


 LUCENE-6021 introduced oal.util.BitSet with default impls taken from 
 FixedBitSet. However, these default impls could be more efficient (and eg. 
 perform an actual leap frog for AND and AND_NOT).
 Additionally, SparseFixedBitSet could benefit from some specialization.






[JENKINS] Lucene-Solr-5.x-MacOSX (64bit/jdk1.8.0) - Build # 1866 - Failure!

2014-10-27 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-MacOSX/1866/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseSerialGC

1 tests failed.
REGRESSION:  
org.apache.solr.client.solrj.embedded.MultiCoreExampleJettyTest.testMultiCore

Error Message:
IOException occured when talking to server at: 
https://127.0.0.1:60283/example/core0

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: IOException occured when 
talking to server at: https://127.0.0.1:60283/example/core0
at 
__randomizedtesting.SeedInfo.seed([7E4851194D557286:FA604AEA82867FB3]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.executeMethod(HttpSolrServer.java:584)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:215)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:211)
at 
org.apache.solr.client.solrj.request.AbstractUpdateRequest.process(AbstractUpdateRequest.java:124)
at 
org.apache.solr.client.solrj.MultiCoreExampleTestBase.testMultiCore(MultiCoreExampleTestBase.java:184)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 

[jira] [Updated] (SOLR-6658) SearchHandler should accept POST requests with JSON data in content stream for customized plug-in components

2014-10-27 Thread Mark Peng (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6658?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Peng updated SOLR-6658:

Attachment: SOLR-6658.patch

 SearchHandler should accept POST requests with JSON data in content stream 
 for customized plug-in components
 

 Key: SOLR-6658
 URL: https://issues.apache.org/jira/browse/SOLR-6658
 Project: Solr
  Issue Type: Improvement
  Components: search, SearchComponents - other
Affects Versions: 4.7, 4.7.1, 4.7.2, 4.8, 4.8.1, 4.9, 4.9.1, 4.10, 4.10.1
Reporter: Mark Peng
 Attachments: SOLR-6658.patch


 This issue relates to the following one:
 *Return HTTP error on POST requests with no Content-Type*
 [https://issues.apache.org/jira/browse/SOLR-5517]
 The original consideration of the above is to make sure that incoming POST 
 requests to SearchHandler have a corresponding content-type specified. That is 
 quite reasonable; however, the following lines in the patch cause all POST 
 requests with content stream data to be rejected, which is not necessary for 
 that issue:
 {code}
 Index: solr/core/src/java/org/apache/solr/handler/component/SearchHandler.java
 ===================================================================
 --- solr/core/src/java/org/apache/solr/handler/component/SearchHandler.java (revision 1546817)
 +++ solr/core/src/java/org/apache/solr/handler/component/SearchHandler.java (working copy)
 @@ -22,9 +22,11 @@
  import java.util.List;
 
  import org.apache.solr.common.SolrException;
 +import org.apache.solr.common.SolrException.ErrorCode;
  import org.apache.solr.common.params.CommonParams;
  import org.apache.solr.common.params.ModifiableSolrParams;
  import org.apache.solr.common.params.ShardParams;
 +import org.apache.solr.common.util.ContentStream;
  import org.apache.solr.core.CloseHook;
  import org.apache.solr.core.PluginInfo;
  import org.apache.solr.core.SolrCore;
 @@ -165,6 +167,10 @@
    {
      // int sleep = req.getParams().getInt("sleep",0);
      // if (sleep > 0) {log.error("SLEEPING for " + sleep); Thread.sleep(sleep);}
 +    if (req.getContentStreams() != null && req.getContentStreams().iterator().hasNext()) {
 +      throw new SolrException(ErrorCode.BAD_REQUEST, "Search requests cannot accept content streams");
 +    }
 +
      ResponseBuilder rb = new ResponseBuilder(req, rsp, components);
      if (rb.requestInfo != null) {
        rb.requestInfo.setResponseBuilder(rb);
 {code}
 We are using Solr 4.5.1 in our production services and are considering an 
 upgrade to 4.9/5.0 to support more features. But due to this issue, we do not 
 have a chance to upgrade, because we have some important customized 
 SearchComponent plug-ins that need to get POST data from SearchHandler for 
 further processing.
 Therefore, we are asking whether it is possible to remove the content stream 
 constraint shown above and let SearchHandler accept POST requests with 
 *Content-Type: application/json* so that further components can get the data.
 Thank you.
 Best regards,
 Mark Peng






Re: [VOTE] Release 4.10.2 RC1

2014-10-27 Thread Adrien Grand
+1
SUCCESS! [0:56:11.020611]

On Sun, Oct 26, 2014 at 4:45 PM, Simon Willnauer
simon.willna...@gmail.com wrote:
 Tests now pass for me too!! thanks mike

 +1

 On Sun, Oct 26, 2014 at 12:22 PM, Michael McCandless
 luc...@mikemccandless.com wrote:
 Artifacts: 
 http://people.apache.org/~mikemccand/staging_area/lucene-solr-4.10.2-RC1-rev1634293

 Smoke tester: python3 -u dev-tools/scripts/smokeTestRelease.py
 http://people.apache.org/~mikemccand/staging_area/lucene-solr-4.10.2-RC1-rev1634293
 1634293 4.10.2 /tmp/smoke4102 True

 I ran smoke tester:

   SUCCESS! [0:30:16.520543]

 And also confirmed Elasticsearch tests pass with this RC.

 Here's my +1

 Mike McCandless

 http://blog.mikemccandless.com

 -
 To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
 For additional commands, e-mail: dev-h...@lucene.apache.org


 -
 To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
 For additional commands, e-mail: dev-h...@lucene.apache.org




-- 
Adrien




phrase query in solr 4

2014-10-27 Thread Robust Links
Hi

We are trying to upgrade our index from 3.6.1 to 4.9.1 and I wanted to make
sure our existing indexing strategy is still valid. The statistics
of the raw corpus are:

- 4.8 billion total tokens in the entire corpus.

- 13MM documents


We have 3 requirements:


1) we want to index and search all tokens in a document (i.e. we do not
rely on external stores),

2) we need search time to be fast and are willing to pay larger indexing time
and index size,

3) we want to be able to search as fast as possible for ngrams of 3 tokens or
fewer (i.e. unigrams, bigrams and trigrams).


To satisfy (1) we used the default
<maxFieldLength>2147483647</maxFieldLength> in
solrconfig.xml of the 3.6.1 index to specify the total number of tokens to
index in an article. In Solr 4 we are specifying it via the tokenizer in
the analyzer chain:


<tokenizer class="solr.ClassicTokenizerFactory" maxTokenLength="2147483647"/>


To satisfy (2) and (3) in our 3.6.1 index we indexed using the following
ShingleFilterFactory in the analyzer chain:


<filter class="solr.ShingleFilterFactory" outputUnigrams="true"
maxShingleSize="3"/>


This was based on this thread:

http://mail-archives.apache.org/mod_mbox/lucene-solr-user/200808.mbox/%3c856ac15f0808161539p54417df2ga5a6fdfa35889...@mail.gmail.com%3E
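To illustrate what that shingle configuration indexes, here is a self-contained sketch that emits all word n-grams of size 1 to 3, approximating outputUnigrams=true with maxShingleSize=3 (an approximation only; the real ShingleFilter works on a token stream with positions and attributes):

```java
import java.util.ArrayList;
import java.util.List;

public class Shingles {
    // Emits, for each starting position, the n-grams of size 1..maxSize,
    // which is roughly the term set the shingle filter adds to the index.
    static List<String> shingles(List<String> tokens, int maxSize) {
        List<String> out = new ArrayList<>();
        for (int start = 0; start < tokens.size(); start++) {
            StringBuilder sb = new StringBuilder();
            for (int len = 1; len <= maxSize && start + len <= tokens.size(); len++) {
                if (len > 1) sb.append(' ');
                sb.append(tokens.get(start + len - 1));
                out.add(sb.toString());
            }
        }
        return out;
    }

    public static void main(String[] args) {
        System.out.println(shingles(List.of("fast", "phrase", "search"), 3));
    }
}
```

This also shows the cost side of requirement (2): every position contributes up to three terms, so the index grows accordingly.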


The open questions we are trying to understand now are:


1) whether shingling is still the best strategy for phrase (ngram) search
given our requirements above?

2) if not, what would be a better strategy?


thank you in advance for your help


Peyman


[jira] [Commented] (SOLR-5025) Implement true re-sharding for SolrCloud

2014-10-27 Thread Tomer Levi (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5025?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14185088#comment-14185088
 ] 

Tomer Levi commented on SOLR-5025:
--

Another possible solution is to use HBase-style sharding:
   1. Let Solr automatically split a shard when it grows above some limit (number 
of documents / size in MB, as HBase does).
   2. As we add more Solr instances to the cluster, Solr auto-balances 
itself by moving shards from one instance to the newly added instance. 
   For example:
   ==========
   - Let's say we initially started with 4 shards on 2 Solr instances.
   - At some point the shards split into 8 shards.
   - Later we added 1 more Solr instance (2+1=3).
   - The automatic balancing will migrate 1 shard from each original 
instance to the new Solr instance.
 Eventually we would end up with:
 ---------
 Instance 1: 3 shards
 Instance 2: 3 shards
 Instance 3: 2 shards

In other words, instead of moving documents we can migrate entire shards using 
automatic balancing.
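The even split in the example (8 shards over 3 instances giving 3/3/2) can be sketched as a simple round-robin assignment; the helper below is hypothetical, not Solr code:

```java
public class Rebalance {
    // Distributes shardCount shards across instanceCount instances as evenly
    // as possible via round-robin assignment.
    static int[] distribute(int shardCount, int instanceCount) {
        int[] counts = new int[instanceCount];
        for (int s = 0; s < shardCount; s++) {
            counts[s % instanceCount]++;  // shard s lands on instance s mod N
        }
        return counts;
    }

    public static void main(String[] args) {
        System.out.println(java.util.Arrays.toString(distribute(8, 3)));  // [3, 3, 2]
    }
}
```

A real balancer would of course also weigh shard sizes and move the fewest shards possible, but the target distribution is the same.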

 Implement true re-sharding for SolrCloud
 

 Key: SOLR-5025
 URL: https://issues.apache.org/jira/browse/SOLR-5025
 Project: Solr
  Issue Type: Wish
  Components: SolrCloud
Reporter: Shawn Heisey

 Shard splitting is an incredibly nice thing to have, but it doesn't 
 completely address the idea of re-sharding.
 Let's say that you currently have three shards, only your index is three or 
 four times as big as you ever expected it to get when you first built it.  
 You've added nodes, which helps, but doesn't address the fundamental fact 
 that each of your shards is too big for an individual server.  If you had 
 created eight shards up front, everything would be smooth.  It's not possible 
 with shard splitting to go from three equal size shards to eight equal size 
 shards.
 A new feature to accomplish true re-sharding would solve this.  One 
 implementation possibility:  Create a new collection with the new numShards, 
 split all the documents accordingly to the new replicas, then rename/swap the 
 collection and core names.
 There are a number of sticky points to iron out regardless of the 
 implementation method chosen, some of which could be really hairy.
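The re-sharding step of the proposed implementation boils down to re-computing each document's shard under the new shard count. A deliberately simplified sketch (Solr's actual compositeId routing hashes ids onto hash ranges with MurmurHash3, not hashCode modulo):

```java
public class Reshard {
    // Maps a document id to a shard for a given shard count; going from
    // 3 to 8 shards means re-running this mapping for every document.
    static int shardFor(String docId, int numShards) {
        // Math.floorMod keeps the result non-negative even for negative hashes.
        return Math.floorMod(docId.hashCode(), numShards);
    }

    public static void main(String[] args) {
        System.out.println(shardFor("doc-42", 3) + " -> " + shardFor("doc-42", 8));
    }
}
```

The sketch also shows why splitting alone cannot turn 3 equal shards into 8: the modulus changes for every document, not just for documents in one shard.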






[jira] [Comment Edited] (SOLR-5025) Implement true re-sharding for SolrCloud

2014-10-27 Thread Tomer Levi (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5025?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14185088#comment-14185088
 ] 

Tomer Levi edited comment on SOLR-5025 at 10/27/14 12:23 PM:
-

Another possible solution is to use HBase style sharding,
   1. Let Solr automatically split a shard as it grows above some limit (number 
of documents/size in MB like HBase).
   2. As we add more Solr instances to the cluster, Solr will auto balance 
itself by moving shards from one instance to the newly added instance. 
   For example:
   ==
   -Let's say we initially started with 4 shards on 2 Solr instances.
   - At some point the Shards split into 8 shards.
   -Later we added 1 more Solr instance (2+1=3)
   - The automatic balancing will migrate 1 shard from each original 
instance to the new Solr instance.
 Eventually we would end up with:
-
Instance 1 : 3 shards
Instance 2: 3 shards
Instance 3: 2 shards 

In other words, instead of moving documents we can migrate entire shards using 
an automatic balancing.


was (Author: tomerlevi1983):
Another possible solution is to use HBase style sharding,
   1. Let Solr automatically split a shard as it grows above some limit (number 
of documents/size in MB like HBase).
   2. As we add more Solr instances to the cluster, Solr will auto balance 
itself by moving shards from one instance to the newly added instance. 
   For example:
   ==
   - Let's say we initially started with 4 shards on 2 Solr instances.
   - At some point the Shards split into 8 shards.
   -Later we added 1 more Solr instance (2+1=3)
   - The automatic balancing will migrate 1 shard from each original 
instance to the new Solr instance.
 Eventually we would end up with:
-
Instance 1 : 3 shards
Instance 2: 3 shards
Instance 3: 2 shards 

In other words, instead of moving documents we can migrate entire shards using 
an automatic balancing.

 Implement true re-sharding for SolrCloud
 

 Key: SOLR-5025
 URL: https://issues.apache.org/jira/browse/SOLR-5025
 Project: Solr
  Issue Type: Wish
  Components: SolrCloud
Reporter: Shawn Heisey

 Shard splitting is an incredibly nice thing to have, but it doesn't 
 completely address the idea of re-sharding.
 Let's say that you currently have three shards, only your index is three or 
 four times as big as you ever expected it to get when you first built it.  
 You've added nodes, which helps, but doesn't address the fundamental fact 
 that each of your shards is too big for an individual server.  If you had 
 created eight shards up front, everything would be smooth.  It's not possible 
 with shard splitting to go from three equal size shards to eight equal size 
 shards.
 A new feature to accomplish true re-sharding would solve this.  One 
 implementation possibility:  Create a new collection with the new numShards, 
 split all the documents accordingly to the new replicas, then rename/swap the 
 collection and core names.
 There are a number of sticky points to iron out regardless of the 
 implementation method chosen, some of which could be really hairy.






[jira] [Comment Edited] (SOLR-5025) Implement true re-sharding for SolrCloud

2014-10-27 Thread Tomer Levi (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5025?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14185088#comment-14185088
 ] 

Tomer Levi edited comment on SOLR-5025 at 10/27/14 12:24 PM:
-

Another possible solution is to use HBase style sharding,
   1. Let Solr automatically split a shard as it grows above some limit (number 
of documents/size in MB like HBase).
   2. As we add more Solr instances to the cluster, Solr will auto balance 
itself by moving shards from one instance to the newly added instance. 

   For example:
   ==
   -Let's say we initially started with 4 shards on 2 Solr instances.
   - At some point the Shards split into 8 shards.
   -Later we added 1 more Solr instance (2+1=3)
   - The automatic balancing will migrate 1 shard from each original 
instance to the new Solr instance.

 Eventually we would end up with:
-
Instance 1 : 3 shards
Instance 2: 3 shards
Instance 3: 2 shards 

In other words, instead of moving documents we can migrate entire shards using 
an automatic balancing.


was (Author: tomerlevi1983):
Another possible solution is to use HBase style sharding,
   1. Let Solr automatically split a shard as it grows above some limit (number 
of documents/size in MB like HBase).
   2. As we add more Solr instances to the cluster, Solr will auto balance 
itself by moving shards from one instance to the newly added instance. 
   For example:
   ==
   -Let's say we initially started with 4 shards on 2 Solr instances.
   - At some point the Shards split into 8 shards.
   -Later we added 1 more Solr instance (2+1=3)
   - The automatic balancing will migrate 1 shard from each original 
instance to the new Solr instance.
 Eventually we would end up with:
-
Instance 1 : 3 shards
Instance 2: 3 shards
Instance 3: 2 shards 

In other words, instead of moving documents we can migrate entire shards using 
an automatic balancing.

 Implement true re-sharding for SolrCloud
 

 Key: SOLR-5025
 URL: https://issues.apache.org/jira/browse/SOLR-5025
 Project: Solr
  Issue Type: Wish
  Components: SolrCloud
Reporter: Shawn Heisey

 Shard splitting is an incredibly nice thing to have, but it doesn't 
 completely address the idea of re-sharding.
 Let's say that you currently have three shards, only your index is three or 
 four times as big as you ever expected it to get when you first built it.  
 You've added nodes, which helps, but doesn't address the fundamental fact 
 that each of your shards is too big for an individual server.  If you had 
 created eight shards up front, everything would be smooth.  It's not possible 
 with shard splitting to go from three equal size shards to eight equal size 
 shards.
 A new feature to accomplish true re-sharding would solve this.  One 
 implementation possibility:  Create a new collection with the new numShards, 
 split all the documents accordingly to the new replicas, then rename/swap the 
 collection and core names.
 There are a number of sticky points to iron out regardless of the 
 implementation method chosen, some of which could be really hairy.
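The "split all the documents accordingly" step above boils down to re-routing each document by hashing its unique key into the new number of hash ranges. A sketch of that idea (illustrative only: ReshardRouterSketch is a hypothetical name, and Solr's compositeId router hashes with MurmurHash3, not String.hashCode):

```java
import java.util.List;

// Hypothetical sketch, not Solr's router: map a document id into one of
// numShards contiguous, equal-sized ranges of the 32-bit hash space.
public class ReshardRouterSketch {

    public static int shardFor(String docId, int numShards) {
        long hash = docId.hashCode() & 0xffffffffL;             // unsigned 32-bit hash
        long rangeSize = (0x100000000L + numShards - 1) / numShards;
        return (int) (hash / rangeSize);
    }

    public static void main(String[] args) {
        // The same ids land on different shards once numShards changes from
        // 3 to 8, which is why a full re-split of the documents is needed.
        for (String id : List.of("doc1", "doc2", "doc3")) {
            System.out.println(id + ": shard " + shardFor(id, 3)
                    + " -> shard " + shardFor(id, 8));
        }
    }
}
```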



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (LUCENE-6021) Make FixedBitSet and SparseFixedBitSet share a wider common interface

2014-10-27 Thread Adrien Grand (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6021?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrien Grand resolved LUCENE-6021.
--
Resolution: Fixed

 Make FixedBitSet and SparseFixedBitSet share a wider common interface
 -

 Key: LUCENE-6021
 URL: https://issues.apache.org/jira/browse/LUCENE-6021
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Adrien Grand
Assignee: Adrien Grand
Priority: Minor
 Fix For: 5.0

 Attachments: LUCENE-6021.patch, LUCENE-6021.patch


 Today, the only common interfaces that these two classes share are Bits and 
 Accountable. I would like to add a BitSet base class that would be both 
 extended by FixedBitSet and SparseFixedBitSet. The idea is to share more code 
 between these two impls and make them interchangeable for more use-cases so 
 that we could just use one or the other based on the density of the data that 
 we are working on.
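A rough sketch of what such a shared base class enables (illustrative code, not the actual LUCENE-6021 patch; the class names here are hypothetical stand-ins for FixedBitSet and SparseFixedBitSet):

```java
// Hypothetical sketch, not Lucene code: an abstract bit set that both a
// dense (FixedBitSet-like) and a sparse (SparseFixedBitSet-like) impl
// extend, so callers can pick either based on expected density.
abstract class BitSetSketch {
    abstract boolean get(int index);
    abstract void set(int index);
    abstract int cardinality();   // number of bits set
}

// Dense variant: one bit per position, backed by a long[].
class DenseBitSet extends BitSetSketch {
    private final long[] words;
    DenseBitSet(int numBits) { words = new long[(numBits + 63) >>> 6]; }
    boolean get(int i) { return (words[i >>> 6] & (1L << i)) != 0; }
    void set(int i) { words[i >>> 6] |= 1L << i; }
    int cardinality() {
        int sum = 0;
        for (long w : words) sum += Long.bitCount(w);
        return sum;
    }
}

// Sparse variant: stores only the set bits, cheap when few bits are set.
class SparseBitSet extends BitSetSketch {
    private final java.util.TreeSet<Integer> bits = new java.util.TreeSet<>();
    boolean get(int i) { return bits.contains(i); }
    void set(int i) { bits.add(i); }
    int cardinality() { return bits.size(); }
}

public class BitSetDemo {
    public static void main(String[] args) {
        // Interchangeable through the base type, as the issue proposes.
        for (BitSetSketch b : new BitSetSketch[] { new DenseBitSet(128), new SparseBitSet() }) {
            b.set(3);
            b.set(64);
            System.out.println(b.get(3) + " " + b.get(5) + " " + b.cardinality());
        }
    }
}
```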



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-6025) Add BitSet.prevSetBit

2014-10-27 Thread Adrien Grand (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6025?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrien Grand updated LUCENE-6025:
-
Attachment: LUCENE-6025.patch

Here is a patch. It adds BitSet.prevSetBit and cuts over the join module to 
BitSet instead of FixedBitSet.
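For reference, the usual way prevSetBit is implemented over a long[] word array (this sketch mirrors the approach FixedBitSet takes; it is illustrative, not the attached patch, and PrevSetBitSketch is a hypothetical name):

```java
public class PrevSetBitSketch {

    // Index of the last set bit at or before 'index', or -1 if there is none.
    public static int prevSetBit(long[] words, int index) {
        int i = index >> 6;                  // word holding 'index'
        int sub = index & 0x3f;              // bit position within that word
        long word = words[i] << (63 - sub);  // shift out bits above 'index'
        if (word != 0) {
            return (i << 6) + sub - Long.numberOfLeadingZeros(word);
        }
        while (--i >= 0) {                   // scan earlier words
            word = words[i];
            if (word != 0) {
                return (i << 6) + 63 - Long.numberOfLeadingZeros(word);
            }
        }
        return -1;
    }

    public static void main(String[] args) {
        long[] bits = new long[2];
        bits[0] |= 1L << 5;   // bit 5
        bits[1] |= 1L << 2;   // bit 66
        System.out.println(prevSetBit(bits, 70));  // 66
        System.out.println(prevSetBit(bits, 65));  // 5
        System.out.println(prevSetBit(bits, 4));   // -1
    }
}
```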

 Add BitSet.prevSetBit
 -

 Key: LUCENE-6025
 URL: https://issues.apache.org/jira/browse/LUCENE-6025
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Adrien Grand
Assignee: Adrien Grand
Priority: Minor
 Attachments: LUCENE-6025.patch


 This would allow the join module to work with any BitSet as opposed to only 
 FixedBitSet.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-6591) Cluster state updates can be lost on exception in main queue loop

2014-10-27 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6591?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar updated SOLR-6591:

Attachment: SOLR-6591-constructStateFix.patch

The overseer can still use stale cached cluster state for collections with 
state format > 1 because ZkStateReader.updateClusterState returns cached state. 
Here is a patch which fixes that.

 Cluster state updates can be lost on exception in main queue loop
 -

 Key: SOLR-6591
 URL: https://issues.apache.org/jira/browse/SOLR-6591
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: Trunk
Reporter: Shalin Shekhar Mangar
Assignee: Shalin Shekhar Mangar
 Fix For: Trunk

 Attachments: SOLR-6591-constructStateFix.patch, SOLR-6591.patch


 I found this bug while going through the failure on jenkins:
 https://builds.apache.org/job/Lucene-Solr-NightlyTests-trunk/648/
 {code}
 2 tests failed.
 REGRESSION:  
 org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testDistribSearch
 Error Message:
 Error CREATEing SolrCore 'halfcollection_shard1_replica1': Unable to create 
 core [halfcollection_shard1_replica1] Caused by: Could not get shard id for 
 core: halfcollection_shard1_replica1
 Stack Trace:
 org.apache.solr.client.solrj.impl.HttpSolrServer$RemoteSolrException: Error 
 CREATEing SolrCore 'halfcollection_shard1_replica1': Unable to create core 
 [halfcollection_shard1_replica1] Caused by: Could not get shard id for core: 
 halfcollection_shard1_replica1
 at 
 org.apache.solr.client.solrj.impl.HttpSolrServer.executeMethod(HttpSolrServer.java:570)
 at 
 org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:215)
 at 
 org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:211)
 at 
 org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testErrorHandling(CollectionsAPIDistributedZkTest.java:583)
 at 
 org.apache.solr.cloud.CollectionsAPIDistributedZkTest.doTest(CollectionsAPIDistributedZkTest.java:205)
 at 
 org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:869)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:606)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-5.x-Linux (64bit/jdk1.9.0-ea-b34) - Build # 11369 - Failure!

2014-10-27 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/11369/
Java: 64bit/jdk1.9.0-ea-b34 -XX:+UseCompressedOops -XX:+UseSerialGC

2 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.ChaosMonkeySafeLeaderTest

Error Message:
Suite timeout exceeded (>= 720 msec).

Stack Trace:
java.lang.Exception: Suite timeout exceeded (>= 720 msec).
at __randomizedtesting.SeedInfo.seed([3844CB8059F89048]:0)


REGRESSION:  org.apache.solr.cloud.ChaosMonkeySafeLeaderTest.testDistribSearch

Error Message:
Test abandoned because suite timeout was reached.

Stack Trace:
java.lang.Exception: Test abandoned because suite timeout was reached.
at __randomizedtesting.SeedInfo.seed([3844CB8059F89048]:0)




Build Log:
[...truncated 12430 lines...]
   [junit4] Suite: org.apache.solr.cloud.ChaosMonkeySafeLeaderTest
   [junit4]   2 Creating dataDir: 
/mnt/ssd/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J1/temp/solr.cloud.ChaosMonkeySafeLeaderTest-3844CB8059F89048-001/init-core-data-001
   [junit4]   2 1119276 T3247 oas.SolrTestCaseJ4.buildSSLConfig Randomized ssl 
(false) and clientAuth (true)
   [junit4]   2 1119276 T3247 
oas.BaseDistributedSearchTestCase.initHostContext Setting hostContext system 
property: /
   [junit4]   2 1119286 T3247 oas.SolrTestCaseJ4.setUp ###Starting 
testDistribSearch
   [junit4]   2 1119286 T3247 oasc.ZkTestServer.run STARTING ZK TEST SERVER
   [junit4]   1 client port:0.0.0.0/0.0.0.0:0
   [junit4]   2 1119287 T3248 oasc.ZkTestServer$ZKServerMain.runFromConfig 
Starting server
   [junit4]   2 1119387 T3247 oasc.ZkTestServer.run start zk server on 
port:43200
   [junit4]   2 1119387 T3247 
oascc.SolrZkClient.createZkCredentialsToAddAutomatically Using default 
ZkCredentialsProvider
   [junit4]   2 1119388 T3247 oascc.ConnectionManager.waitForConnected Waiting 
for client to connect to ZooKeeper
   [junit4]   2 1119390 T3254 oascc.ConnectionManager.process Watcher 
org.apache.solr.common.cloud.ConnectionManager@7c5d599c 
name:ZooKeeperConnection Watcher:127.0.0.1:43200 got event WatchedEvent 
state:SyncConnected type:None path:null path:null type:None
   [junit4]   2 1119390 T3247 oascc.ConnectionManager.waitForConnected Client 
is connected to ZooKeeper
   [junit4]   2 1119390 T3247 oascc.SolrZkClient.createZkACLProvider Using 
default ZkACLProvider
   [junit4]   2 1119390 T3247 oascc.SolrZkClient.makePath makePath: /solr
   [junit4]   2 1119392 T3247 
oascc.SolrZkClient.createZkCredentialsToAddAutomatically Using default 
ZkCredentialsProvider
   [junit4]   2 1119393 T3247 oascc.ConnectionManager.waitForConnected Waiting 
for client to connect to ZooKeeper
   [junit4]   2 1119394 T3256 oascc.ConnectionManager.process Watcher 
org.apache.solr.common.cloud.ConnectionManager@4afb7569 
name:ZooKeeperConnection Watcher:127.0.0.1:43200/solr got event WatchedEvent 
state:SyncConnected type:None path:null path:null type:None
   [junit4]   2 1119394 T3247 oascc.ConnectionManager.waitForConnected Client 
is connected to ZooKeeper
   [junit4]   2 1119394 T3247 oascc.SolrZkClient.createZkACLProvider Using 
default ZkACLProvider
   [junit4]   2 1119394 T3247 oascc.SolrZkClient.makePath makePath: 
/collections/collection1
   [junit4]   2 1119396 T3247 oascc.SolrZkClient.makePath makePath: 
/collections/collection1/shards
   [junit4]   2 1119397 T3247 oascc.SolrZkClient.makePath makePath: 
/collections/control_collection
   [junit4]   2 1119397 T3247 oascc.SolrZkClient.makePath makePath: 
/collections/control_collection/shards
   [junit4]   2 1119398 T3247 oasc.AbstractZkTestCase.putConfig put 
/mnt/ssd/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/core/src/test-files/solr/collection1/conf/solrconfig-tlog.xml
 to /configs/conf1/solrconfig.xml
   [junit4]   2 1119399 T3247 oascc.SolrZkClient.makePath makePath: 
/configs/conf1/solrconfig.xml
   [junit4]   2 1119400 T3247 oasc.AbstractZkTestCase.putConfig put 
/mnt/ssd/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/core/src/test-files/solr/collection1/conf/schema15.xml
 to /configs/conf1/schema.xml
   [junit4]   2 1119400 T3247 oascc.SolrZkClient.makePath makePath: 
/configs/conf1/schema.xml
   [junit4]   2 1119401 T3247 oasc.AbstractZkTestCase.putConfig put 
/mnt/ssd/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/core/src/test-files/solr/collection1/conf/solrconfig.snippet.randomindexconfig.xml
 to /configs/conf1/solrconfig.snippet.randomindexconfig.xml
   [junit4]   2 1119402 T3247 oascc.SolrZkClient.makePath makePath: 
/configs/conf1/solrconfig.snippet.randomindexconfig.xml
   [junit4]   2 1119403 T3247 oasc.AbstractZkTestCase.putConfig put 
/mnt/ssd/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/core/src/test-files/solr/collection1/conf/stopwords.txt
 to /configs/conf1/stopwords.txt
   [junit4]   2 1119403 T3247 oascc.SolrZkClient.makePath makePath: 
/configs/conf1/stopwords.txt
   [junit4]   2 1119404 T3247 oasc.AbstractZkTestCase.putConfig put 

[jira] [Comment Edited] (SOLR-6591) Cluster state updates can be lost on exception in main queue loop

2014-10-27 Thread Shalin Shekhar Mangar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6591?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14185118#comment-14185118
 ] 

Shalin Shekhar Mangar edited comment on SOLR-6591 at 10/27/14 1:58 PM:
---

The overseer can still use stale cached cluster state because 
ZkStateReader.updateClusterState returns cached state for watched collections. 
Here is a patch which fixes that by reading collection state live during 
ZkStateReader.updateClusterState and setting it into the 
watchedCollectionStates map.
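The gist of the fix, reduced to a toy (illustrative Java, not ZkStateReader's real API — the class name is hypothetical and the Function stands in for a live ZooKeeper read):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Hypothetical sketch of the idea: instead of returning whatever is cached
// for a watched collection, the update path re-reads the live state and
// replaces the cached entry before returning it.
public class WatchedStateCacheSketch {
    private final Map<String, String> watchedCollectionStates = new HashMap<>();
    private final Function<String, String> liveReader;  // stands in for a ZK fetch

    public WatchedStateCacheSketch(Function<String, String> liveReader) {
        this.liveReader = liveReader;
    }

    // The behavior being fixed: returns the possibly stale cached value.
    public String cachedState(String collection) {
        return watchedCollectionStates.get(collection);
    }

    // The fixed behavior: fetch live, store into the watched map, return it.
    public String updateClusterState(String collection) {
        String live = liveReader.apply(collection);
        watchedCollectionStates.put(collection, live);
        return live;
    }

    public static void main(String[] args) {
        Map<String, String> zk = new HashMap<>();
        zk.put("c1", "v1");
        WatchedStateCacheSketch reader = new WatchedStateCacheSketch(zk::get);
        System.out.println(reader.updateClusterState("c1"));  // caches v1
        zk.put("c1", "v2");                                   // state changes upstream
        System.out.println(reader.cachedState("c1"));         // stale: still v1
        System.out.println(reader.updateClusterState("c1"));  // refreshed: v2
    }
}
```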


was (Author: shalinmangar):
The overseer can still use stale cached cluster state for collections with 
state format > 1 because ZkStateReader.updateClusterState returns cached state. 
Here is a patch which fixes that.

 Cluster state updates can be lost on exception in main queue loop
 -

 Key: SOLR-6591
 URL: https://issues.apache.org/jira/browse/SOLR-6591
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: Trunk
Reporter: Shalin Shekhar Mangar
Assignee: Shalin Shekhar Mangar
 Fix For: Trunk

 Attachments: SOLR-6591-constructStateFix.patch, SOLR-6591.patch


 I found this bug while going through the failure on jenkins:
 https://builds.apache.org/job/Lucene-Solr-NightlyTests-trunk/648/
 {code}
 2 tests failed.
 REGRESSION:  
 org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testDistribSearch
 Error Message:
 Error CREATEing SolrCore 'halfcollection_shard1_replica1': Unable to create 
 core [halfcollection_shard1_replica1] Caused by: Could not get shard id for 
 core: halfcollection_shard1_replica1
 Stack Trace:
 org.apache.solr.client.solrj.impl.HttpSolrServer$RemoteSolrException: Error 
 CREATEing SolrCore 'halfcollection_shard1_replica1': Unable to create core 
 [halfcollection_shard1_replica1] Caused by: Could not get shard id for core: 
 halfcollection_shard1_replica1
 at 
 org.apache.solr.client.solrj.impl.HttpSolrServer.executeMethod(HttpSolrServer.java:570)
 at 
 org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:215)
 at 
 org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:211)
 at 
 org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testErrorHandling(CollectionsAPIDistributedZkTest.java:583)
 at 
 org.apache.solr.cloud.CollectionsAPIDistributedZkTest.doTest(CollectionsAPIDistributedZkTest.java:205)
 at 
 org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:869)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:606)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-6661) Address relative path lib references

2014-10-27 Thread Erik Hatcher (JIRA)
Erik Hatcher created SOLR-6661:
--

 Summary: Address relative path lib references
 Key: SOLR-6661
 URL: https://issues.apache.org/jira/browse/SOLR-6661
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.10
Reporter: Erik Hatcher
Assignee: Erik Hatcher
Priority: Critical
 Fix For: 5.0


Relative paths in solrconfig.xml's lib references are wrong, and require 
manual adjusting, when the base directory moves to a different place in the file 
system tree.  This can happen when cloning a configuration, such as with the new 
start scripts in -e cloud mode.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6661) Address relative path lib references

2014-10-27 Thread Erik Hatcher (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6661?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14185173#comment-14185173
 ] 

Erik Hatcher commented on SOLR-6661:


This has historically been a problem, and I've always adjusted the example 
configurations when I build prototypes/demos to have this ${solr.install.dir} 
system property concept.  Now that we have start scripts, we can simply pass in 
the already known Solr install (under which contrib/ lives, most importantly 
for this issue) directory. 

 Address relative path lib references
 --

 Key: SOLR-6661
 URL: https://issues.apache.org/jira/browse/SOLR-6661
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.10
Reporter: Erik Hatcher
Assignee: Erik Hatcher
Priority: Critical
 Fix For: 5.0


 Relative paths in solrconfig.xml's lib references are wrong, and require 
 manual adjusting, when the base directory moves to different place in the 
 file system tree.  This can happen when cloning a configuration, such as the 
 new start scripts in -e cloud mode.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-6661) Address relative path lib references

2014-10-27 Thread Erik Hatcher (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6661?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Hatcher updated SOLR-6661:
---
Description: 
Relative paths in solrconfig.xml's lib references are wrong, and require 
manual adjusting, when the base directory moves to a different place in the file 
system tree.  This can happen when cloning a configuration, such as with the new 
start scripts in -e cloud mode.

Having an incorrect relative path manifests itself as /browse not working 
(can't find VelocityResponseWriter related JARs), and likewise with 
/update/extract because of wrong paths to Tika, etc.

  was:Relative paths in solrconfig.xml's lib references are wrong, and 
require manual adjusting, when the base directory moves to different place in 
the file system tree.  This can happen when cloning a configuration, such as 
the new start scripts in -e cloud mode.


 Address relative path lib references
 --

 Key: SOLR-6661
 URL: https://issues.apache.org/jira/browse/SOLR-6661
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.10
Reporter: Erik Hatcher
Assignee: Erik Hatcher
Priority: Critical
 Fix For: 5.0


 Relative paths in solrconfig.xml's lib references are wrong, and require 
 manual adjusting, when the base directory moves to different place in the 
 file system tree.  This can happen when cloning a configuration, such as the 
 new start scripts in -e cloud mode.
 Having an incorrect relative path manifests itself as /browse not working 
 (can't find VelocityResponseWriter related JARs), and likewise with 
 /update/extract because of wrong paths to Tika, etc.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-6661) Address relative path lib references

2014-10-27 Thread Erik Hatcher (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6661?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14185173#comment-14185173
 ] 

Erik Hatcher edited comment on SOLR-6661 at 10/27/14 2:25 PM:
--

This has historically been a problem, and I've always adjusted the example 
configurations when I build prototypes/demos to have this $\{solr.install.dir} 
system property concept.  Now that we have start scripts, we can simply pass in 
the already known Solr install (under which contrib/ lives, most importantly 
for this issue) directory. 



 Address relative path lib references
 --

 Key: SOLR-6661
 URL: https://issues.apache.org/jira/browse/SOLR-6661
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.10
Reporter: Erik Hatcher
Assignee: Erik Hatcher
Priority: Critical
 Fix For: 5.0


 Relative paths in solrconfig.xml's lib references are wrong, and require 
 manual adjusting, when the base directory moves to different place in the 
 file system tree.  This can happen when cloning a configuration, such as the 
 new start scripts in -e cloud mode.
 Having an incorrect relative path manifests itself as /browse not working 
 (can't find VelocityResponseWriter related JARs), and likewise with 
 /update/extract because of wrong paths to Tika, etc.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-6661) Address relative path lib references

2014-10-27 Thread Erik Hatcher (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6661?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Hatcher updated SOLR-6661:
---
Attachment: SOLR-6661.patch

This patch adjusts all example configurations with relative lib paths to keep 
the default, but override with ${solr.install.dir} system property.  The ('nix 
only!) start script has been adjusted to pass this property in. 
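For illustration, the kind of lib directive this enables (directory paths hypothetical; `${property:default}` is Solr's standard fallback syntax for system property substitution in solrconfig.xml):

```xml
<!-- Illustrative solrconfig.xml fragment: keep the relative default, but let
     the start script override it with -Dsolr.install.dir=/path/to/solr -->
<lib dir="${solr.install.dir:../../..}/contrib/extraction/lib" regex=".*\.jar" />
<lib dir="${solr.install.dir:../../..}/contrib/velocity/lib" regex=".*\.jar" />
```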

 Address relative path lib references
 --

 Key: SOLR-6661
 URL: https://issues.apache.org/jira/browse/SOLR-6661
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.10
Reporter: Erik Hatcher
Assignee: Erik Hatcher
Priority: Critical
 Fix For: 5.0

 Attachments: SOLR-6661.patch


 Relative paths in solrconfig.xml's lib references are wrong, and require 
 manual adjusting, when the base directory moves to different place in the 
 file system tree.  This can happen when cloning a configuration, such as the 
 new start scripts in -e cloud mode.
 Having an incorrect relative path manifests itself as /browse not working 
 (can't find VelocityResponseWriter related JARs), and likewise with 
 /update/extract because of wrong paths to Tika, etc.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-6661) Address relative path lib references

2014-10-27 Thread Erik Hatcher (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6661?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14185183#comment-14185183
 ] 

Erik Hatcher edited comment on SOLR-6661 at 10/27/14 2:33 PM:
--

This patch adjusts all example configurations with relative lib paths to keep 
the default, but override with $\{solr.install.dir} system property.  The ('nix 
only!) start script has been adjusted to pass this property in. 



 Address relative path lib references
 --

 Key: SOLR-6661
 URL: https://issues.apache.org/jira/browse/SOLR-6661
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.10
Reporter: Erik Hatcher
Assignee: Erik Hatcher
Priority: Critical
 Fix For: 5.0

 Attachments: SOLR-6661.patch


 Relative paths in solrconfig.xml's lib references are wrong, and require 
 manual adjusting, when the base directory moves to different place in the 
 file system tree.  This can happen when cloning a configuration, such as the 
 new start scripts in -e cloud mode.
 Having an incorrect relative path manifests itself as /browse not working 
 (can't find VelocityResponseWriter related JARs), and likewise with 
 /update/extract because of wrong paths to Tika, etc.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6650) Add optional slow request logging at WARN level

2014-10-27 Thread Timothy Potter (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6650?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14185201#comment-14185201
 ] 

Timothy Potter commented on SOLR-6650:
--

hmmm ... can you post the URL of the pull request here (or just post an updated 
patch to this ticket)? When I click on the 102 link above, it doesn't look like 
the latest code ...

 Add optional slow request logging at WARN level
 ---

 Key: SOLR-6650
 URL: https://issues.apache.org/jira/browse/SOLR-6650
 Project: Solr
  Issue Type: Improvement
Reporter: Jessica Cheng Mallet
Assignee: Timothy Potter
  Labels: logging
 Fix For: 5.0


 At super high request rates, logging all the requests can become a bottleneck 
 and therefore INFO logging is often turned off. However, it is still useful 
 to be able to set a latency threshold above which a request is considered 
 slow and log that request at WARN level so we can easily identify slow 
 queries.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-trunk-Windows (32bit/jdk1.7.0_67) - Build # 4394 - Still Failing!

2014-10-27 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Windows/4394/
Java: 32bit/jdk1.7.0_67 -server -XX:+UseSerialGC

2 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.ChaosMonkeySafeLeaderTest

Error Message:
Suite timeout exceeded (>= 720 msec).

Stack Trace:
java.lang.Exception: Suite timeout exceeded (>= 720 msec).
at __randomizedtesting.SeedInfo.seed([37808CEB4DB8BA4]:0)


FAILED:  org.apache.solr.cloud.ChaosMonkeySafeLeaderTest.testDistribSearch

Error Message:
Test abandoned because suite timeout was reached.

Stack Trace:
java.lang.Exception: Test abandoned because suite timeout was reached.
at __randomizedtesting.SeedInfo.seed([37808CEB4DB8BA4]:0)




Build Log:
[...truncated 11007 lines...]
   [junit4] Suite: org.apache.solr.cloud.ChaosMonkeySafeLeaderTest
   [junit4]   2 Creating dataDir: 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J0\temp\solr.cloud.ChaosMonkeySafeLeaderTest-37808CEB4DB8BA4-001\init-core-data-001
   [junit4]   2 2622065 T6263 oas.SolrTestCaseJ4.buildSSLConfig Randomized ssl 
(true) and clientAuth (false)
   [junit4]   2 2622065 T6263 
oas.BaseDistributedSearchTestCase.initHostContext Setting hostContext system 
property: /
   [junit4]   2 2622073 T6263 oas.SolrTestCaseJ4.setUp ###Starting 
testDistribSearch
   [junit4]   2 2622073 T6263 oasc.ZkTestServer.run STARTING ZK TEST SERVER
   [junit4]   1 client port:0.0.0.0/0.0.0.0:0
   [junit4]   2 2622076 T6264 oasc.ZkTestServer$ZKServerMain.runFromConfig 
Starting server
   [junit4]   2 2622184 T6263 oasc.ZkTestServer.run start zk server on 
port:65090
   [junit4]   2 2622184 T6263 
oascc.SolrZkClient.createZkCredentialsToAddAutomatically Using default 
ZkCredentialsProvider
   [junit4]   2 2622187 T6263 oascc.ConnectionManager.waitForConnected Waiting 
for client to connect to ZooKeeper
   [junit4]   2 2622193 T6270 oascc.ConnectionManager.process Watcher 
org.apache.solr.common.cloud.ConnectionManager@db8a72 name:ZooKeeperConnection 
Watcher:127.0.0.1:65090 got event WatchedEvent state:SyncConnected type:None 
path:null path:null type:None
   [junit4]   2 2622193 T6263 oascc.ConnectionManager.waitForConnected Client 
is connected to ZooKeeper
   [junit4]   2 2622193 T6263 oascc.SolrZkClient.createZkACLProvider Using 
default ZkACLProvider
   [junit4]   2 2622193 T6263 oascc.SolrZkClient.makePath makePath: /solr
   [junit4]   2 2622202 T6263 
oascc.SolrZkClient.createZkCredentialsToAddAutomatically Using default 
ZkCredentialsProvider
   [junit4]   2 2622205 T6263 oascc.ConnectionManager.waitForConnected Waiting 
for client to connect to ZooKeeper
   [junit4]   2 2622211 T6272 oascc.ConnectionManager.process Watcher 
org.apache.solr.common.cloud.ConnectionManager@663558 name:ZooKeeperConnection 
Watcher:127.0.0.1:65090/solr got event WatchedEvent state:SyncConnected 
type:None path:null path:null type:None
   [junit4]   2 2622212 T6263 oascc.ConnectionManager.waitForConnected Client 
is connected to ZooKeeper
   [junit4]   2 2622212 T6263 oascc.SolrZkClient.createZkACLProvider Using 
default ZkACLProvider
   [junit4]   2 2622212 T6263 oascc.SolrZkClient.makePath makePath: 
/collections/collection1
   [junit4]   2 2622216 T6263 oascc.SolrZkClient.makePath makePath: 
/collections/collection1/shards
   [junit4]   2 2622219 T6263 oascc.SolrZkClient.makePath makePath: 
/collections/control_collection
   [junit4]   2 261 T6263 oascc.SolrZkClient.makePath makePath: 
/collections/control_collection/shards
   [junit4]   2 264 T6263 oasc.AbstractZkTestCase.putConfig put 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\core\src\test-files\solr\collection1\conf\solrconfig-tlog.xml
 to /configs/conf1/solrconfig.xml
   [junit4]   2 264 T6263 oascc.SolrZkClient.makePath makePath: 
/configs/conf1/solrconfig.xml
   [junit4]   2 2622231 T6263 oasc.AbstractZkTestCase.putConfig put 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\core\src\test-files\solr\collection1\conf\schema15.xml
 to /configs/conf1/schema.xml
   [junit4]   2 2622231 T6263 oascc.SolrZkClient.makePath makePath: 
/configs/conf1/schema.xml
   [junit4]   2 2622235 T6263 oasc.AbstractZkTestCase.putConfig put 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\core\src\test-files\solr\collection1\conf\solrconfig.snippet.randomindexconfig.xml
 to /configs/conf1/solrconfig.snippet.randomindexconfig.xml
   [junit4]   2 2622235 T6263 oascc.SolrZkClient.makePath makePath: 
/configs/conf1/solrconfig.snippet.randomindexconfig.xml
   [junit4]   2 2622239 T6263 oasc.AbstractZkTestCase.putConfig put 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\core\src\test-files\solr\collection1\conf\stopwords.txt
 to /configs/conf1/stopwords.txt
   [junit4]   2 2622239 T6263 oascc.SolrZkClient.makePath makePath: 
/configs/conf1/stopwords.txt
   [junit4]   2 2622243 T6263 oasc.AbstractZkTestCase.putConfig put 

[jira] [Commented] (SOLR-6058) Solr needs a new website

2014-10-27 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6058?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14185204#comment-14185204
 ] 

ASF subversion and git services commented on SOLR-6058:
---

Commit 1634553 from [~sar...@syr.edu] in branch 'cms/branches/solr_6058'
[ https://svn.apache.org/r1634553 ]

SOLR-6058: commit Fran's patch to fix the hash-slash problem

 Solr needs a new website
 

 Key: SOLR-6058
 URL: https://issues.apache.org/jira/browse/SOLR-6058
 Project: Solr
  Issue Type: Task
Reporter: Grant Ingersoll
Assignee: Grant Ingersoll
 Attachments: HTML.rar, SOLR-6058, SOLR-6058.location-fix.patchfile, 
 Solr_Icons.pdf, Solr_Logo_on_black.pdf, Solr_Logo_on_black.png, 
 Solr_Logo_on_orange.pdf, Solr_Logo_on_orange.png, Solr_Logo_on_white.pdf, 
 Solr_Logo_on_white.png, Solr_Styleguide.pdf


 Solr needs a new website:  better organization of content, less verbose, more 
 pleasing graphics, etc.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6591) Cluster state updates can be lost on exception in main queue loop

2014-10-27 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6591?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14185205#comment-14185205
 ] 

ASF subversion and git services commented on SOLR-6591:
---

Commit 1634554 from sha...@apache.org in branch 'dev/trunk'
[ https://svn.apache.org/r1634554 ]

SOLR-6591: ZkStateReader.updateClusterState should refresh cluster state for 
watched collections

 Cluster state updates can be lost on exception in main queue loop
 -

 Key: SOLR-6591
 URL: https://issues.apache.org/jira/browse/SOLR-6591
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: Trunk
Reporter: Shalin Shekhar Mangar
Assignee: Shalin Shekhar Mangar
 Fix For: Trunk

 Attachments: SOLR-6591-constructStateFix.patch, SOLR-6591.patch


 I found this bug while going through the failure on jenkins:
 https://builds.apache.org/job/Lucene-Solr-NightlyTests-trunk/648/
 {code}
 2 tests failed.
 REGRESSION:  
 org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testDistribSearch
 Error Message:
 Error CREATEing SolrCore 'halfcollection_shard1_replica1': Unable to create 
 core [halfcollection_shard1_replica1] Caused by: Could not get shard id for 
 core: halfcollection_shard1_replica1
 Stack Trace:
 org.apache.solr.client.solrj.impl.HttpSolrServer$RemoteSolrException: Error 
 CREATEing SolrCore 'halfcollection_shard1_replica1': Unable to create core 
 [halfcollection_shard1_replica1] Caused by: Could not get shard id for core: 
 halfcollection_shard1_replica1
 at 
 org.apache.solr.client.solrj.impl.HttpSolrServer.executeMethod(HttpSolrServer.java:570)
 at 
 org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:215)
 at 
 org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:211)
 at 
 org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testErrorHandling(CollectionsAPIDistributedZkTest.java:583)
 at 
 org.apache.solr.cloud.CollectionsAPIDistributedZkTest.doTest(CollectionsAPIDistributedZkTest.java:205)
 at 
 org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:869)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:606)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
 {code}






[jira] [Assigned] (SOLR-6631) DistributedQueue spinning on calling zookeeper getChildren()

2014-10-27 Thread Timothy Potter (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6631?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothy Potter reassigned SOLR-6631:


Assignee: Timothy Potter

 DistributedQueue spinning on calling zookeeper getChildren()
 

 Key: SOLR-6631
 URL: https://issues.apache.org/jira/browse/SOLR-6631
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Reporter: Jessica Cheng Mallet
Assignee: Timothy Potter
  Labels: solrcloud

 The change from SOLR-6336 introduced a bug where now I'm stuck in a loop 
 making getChildren() request to zookeeper with this thread dump:
 {quote}
 Thread-51 [WAITING] CPU time: 1d 15h 0m 57s
 java.lang.Object.wait()
 org.apache.zookeeper.ClientCnxn.submitRequest(RequestHeader, Record, Record, 
 ZooKeeper$WatchRegistration)
 org.apache.zookeeper.ZooKeeper.getChildren(String, Watcher)
 org.apache.solr.common.cloud.SolrZkClient$6.execute() (2 recursive calls)
 org.apache.solr.common.cloud.ZkCmdExecutor.retryOperation(ZkOperation)
 org.apache.solr.common.cloud.SolrZkClient.getChildren(String, Watcher, 
 boolean)
 org.apache.solr.cloud.DistributedQueue.orderedChildren(Watcher)
 org.apache.solr.cloud.DistributedQueue.getChildren(long)
 org.apache.solr.cloud.DistributedQueue.peek(long)
 org.apache.solr.cloud.DistributedQueue.peek(boolean)
 org.apache.solr.cloud.Overseer$ClusterStateUpdater.run()
 java.lang.Thread.run()
 {quote}
 Looking at the code, I think the issue is that LatchChildWatcher#process 
 always sets the event to its member variable, regardless of its type; the 
 problem is that once the member event is set, await no longer waits. In this 
 state, the while loop in getChildren(long), when called with wait being 
 Long.MAX_VALUE, will loop back, NOT wait at await because event != null, but 
 then it still will not get any children.
 {code}
 while (true) {
   if (!children.isEmpty()) break;
   watcher.await(wait == Long.MAX_VALUE ? DEFAULT_TIMEOUT : wait);
   if (watcher.getWatchedEvent() != null) {
     children = orderedChildren(null);
   }
   if (wait != Long.MAX_VALUE) break;
 }
 {code}
 I think the fix would be to only set the event in the watcher if the type is 
 not None.
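As an illustration, here is a self-contained model of the described race and of the proposed fix (hypothetical class and method names, not the actual Solr patch): process() ignores events of type None, so a connection-state notification can no longer satisfy await() without a real child change.

```java
// Self-contained model of the spin described above; EventType.None stands in
// for ZooKeeper's connection-state notifications.
public class LatchWatcherSketch {
    enum EventType { None, NodeChildrenChanged }

    static class LatchChildWatcher {
        private final Object lock = new Object();
        private EventType event; // last relevant event, or null if none

        // The proposed fix: ignore EventType.None, so a non-data event
        // cannot make await() return immediately on every loop iteration.
        void process(EventType type) {
            if (type == EventType.None) {
                return;
            }
            synchronized (lock) {
                event = type;
                lock.notifyAll();
            }
        }

        EventType getWatchedEvent() {
            synchronized (lock) {
                return event;
            }
        }

        void await(long millis) throws InterruptedException {
            synchronized (lock) {
                if (event == null) {
                    lock.wait(millis);
                }
            }
        }
    }

    public static void main(String[] args) {
        LatchChildWatcher w = new LatchChildWatcher();
        w.process(EventType.None); // connection event: must be ignored
        if (w.getWatchedEvent() != null) throw new AssertionError("None was recorded");
        w.process(EventType.NodeChildrenChanged); // real data event: recorded
        if (w.getWatchedEvent() != EventType.NodeChildrenChanged) throw new AssertionError();
        System.out.println("ok");
    }
}
```

With the unpatched behavior (recording None as well), getWatchedEvent() would be non-null after a mere connection event, and the loop above would call orderedChildren() forever without blocking.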






[jira] [Commented] (SOLR-6591) Cluster state updates can be lost on exception in main queue loop

2014-10-27 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6591?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14185207#comment-14185207
 ] 

ASF subversion and git services commented on SOLR-6591:
---

Commit 1634555 from sha...@apache.org in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1634555 ]

SOLR-6591: ZkStateReader.updateClusterState should refresh cluster state for 
watched collections

 Cluster state updates can be lost on exception in main queue loop
 -

 Key: SOLR-6591
 URL: https://issues.apache.org/jira/browse/SOLR-6591
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: Trunk
Reporter: Shalin Shekhar Mangar
Assignee: Shalin Shekhar Mangar
 Fix For: Trunk

 Attachments: SOLR-6591-constructStateFix.patch, SOLR-6591.patch


 I found this bug while going through the failure on jenkins:
 https://builds.apache.org/job/Lucene-Solr-NightlyTests-trunk/648/
 {code}
 2 tests failed.
 REGRESSION:  
 org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testDistribSearch
 Error Message:
 Error CREATEing SolrCore 'halfcollection_shard1_replica1': Unable to create 
 core [halfcollection_shard1_replica1] Caused by: Could not get shard id for 
 core: halfcollection_shard1_replica1
 Stack Trace:
 org.apache.solr.client.solrj.impl.HttpSolrServer$RemoteSolrException: Error 
 CREATEing SolrCore 'halfcollection_shard1_replica1': Unable to create core 
 [halfcollection_shard1_replica1] Caused by: Could not get shard id for core: 
 halfcollection_shard1_replica1
 at 
 org.apache.solr.client.solrj.impl.HttpSolrServer.executeMethod(HttpSolrServer.java:570)
 at 
 org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:215)
 at 
 org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:211)
 at 
 org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testErrorHandling(CollectionsAPIDistributedZkTest.java:583)
 at 
 org.apache.solr.cloud.CollectionsAPIDistributedZkTest.doTest(CollectionsAPIDistributedZkTest.java:205)
 at 
 org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:869)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:606)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
 {code}






[jira] [Commented] (SOLR-6661) Address relative path lib references

2014-10-27 Thread Anshum Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6661?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14185214#comment-14185214
 ] 

Anshum Gupta commented on SOLR-6661:


+1 on this! LGTM.

 Address relative path lib references
 --

 Key: SOLR-6661
 URL: https://issues.apache.org/jira/browse/SOLR-6661
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.10
Reporter: Erik Hatcher
Assignee: Erik Hatcher
Priority: Critical
 Fix For: 5.0

 Attachments: SOLR-6661.patch


 Relative paths in solrconfig.xml's lib references are wrong, and require 
 manual adjusting, when the base directory moves to different place in the 
 file system tree.  This can happen when cloning a configuration, such as the 
 new start scripts in -e cloud mode.
 Having an incorrect relative path manifests itself as /browse not working 
 (can't find VelocityResponseWriter related JARs), and likewise with 
 /update/extract because of wrong paths to Tika, etc.






[jira] [Created] (LUCENE-6026) Give AbstractPagedMutable 'Accountable' rather than its own ramBytesUsed() api

2014-10-27 Thread Robert Muir (JIRA)
Robert Muir created LUCENE-6026:
---

 Summary: Give AbstractPagedMutable 'Accountable' rather than its 
own ramBytesUsed() api
 Key: LUCENE-6026
 URL: https://issues.apache.org/jira/browse/LUCENE-6026
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Robert Muir
 Attachments: LUCENE-6026.patch

This should just implement Accountable rather than re-specifying ramBytesUsed.
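For readers unfamiliar with the pattern, a minimal self-contained sketch of what "implement Accountable" means here (the names are illustrative, not the actual org.apache.lucene.util.Accountable interface or AbstractPagedMutable class):

```java
// Simplified stand-in for Lucene's Accountable idea: anything that can
// report its memory usage implements the shared interface instead of
// declaring its own ad-hoc ramBytesUsed() method.
public class AccountableSketch {
    interface Accountable {
        long ramBytesUsed();
    }

    // A toy paged structure; implementing the shared interface lets its
    // memory usage be aggregated uniformly with other index components.
    static class PagedMutableSketch implements Accountable {
        private final long[][] pages;

        PagedMutableSketch(int pageCount, int pageSize) {
            pages = new long[pageCount][pageSize];
        }

        @Override
        public long ramBytesUsed() {
            long bytes = 0;
            for (long[] page : pages) {
                bytes += 8L * page.length; // 8 bytes per long, ignoring object headers
            }
            return bytes;
        }
    }

    public static void main(String[] args) {
        Accountable a = new PagedMutableSketch(4, 16);
        if (a.ramBytesUsed() != 4 * 16 * 8L) throw new AssertionError();
        System.out.println("ok");
    }
}
```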






[jira] [Updated] (LUCENE-6026) Give AbstractPagedMutable 'Accountable' rather than its own ramBytesUsed() api

2014-10-27 Thread Robert Muir (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6026?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Muir updated LUCENE-6026:

Attachment: LUCENE-6026.patch

 Give AbstractPagedMutable 'Accountable' rather than its own ramBytesUsed() api
 --

 Key: LUCENE-6026
 URL: https://issues.apache.org/jira/browse/LUCENE-6026
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Robert Muir
 Attachments: LUCENE-6026.patch


 This should just implement Accountable rather than re-specifying ramBytesUsed.






[jira] [Commented] (SOLR-6058) Solr needs a new website

2014-10-27 Thread Steven Bower (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6058?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14185241#comment-14185241
 ] 

Steven Bower commented on SOLR-6058:


The new site looks great. A couple of things:

1. "Solr powers some of the heavily-trafficked sites on the web" does not 
really apply to Bloomberg, as we don't use it on our website, so some more 
generic language that doesn't tie Solr to websites would be a better 
description. Maybe something like "Solr powers some of the largest, most 
heavily trafficked search engines".

2. Bloomberg, and I suspect other companies as well, will need to have the 
use of our logo approved from a trademark-usage perspective. We're happy to 
do this; we just need to go through the process.

 Solr needs a new website
 

 Key: SOLR-6058
 URL: https://issues.apache.org/jira/browse/SOLR-6058
 Project: Solr
  Issue Type: Task
Reporter: Grant Ingersoll
Assignee: Grant Ingersoll
 Attachments: HTML.rar, SOLR-6058, SOLR-6058.location-fix.patchfile, 
 Solr_Icons.pdf, Solr_Logo_on_black.pdf, Solr_Logo_on_black.png, 
 Solr_Logo_on_orange.pdf, Solr_Logo_on_orange.png, Solr_Logo_on_white.pdf, 
 Solr_Logo_on_white.png, Solr_Styleguide.pdf


 Solr needs a new website:  better organization of content, less verbose, more 
 pleasing graphics, etc.






[jira] [Commented] (LUCENE-6026) Give AbstractPagedMutable 'Accountable' rather than its own ramBytesUsed() api

2014-10-27 Thread Ryan Ernst (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6026?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14185261#comment-14185261
 ] 

Ryan Ernst commented on LUCENE-6026:


+1

 Give AbstractPagedMutable 'Accountable' rather than its own ramBytesUsed() api
 --

 Key: LUCENE-6026
 URL: https://issues.apache.org/jira/browse/LUCENE-6026
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Robert Muir
 Attachments: LUCENE-6026.patch


 This should just implement Accountable rather than re-specifying ramBytesUsed.






[jira] [Commented] (SOLR-6645) Refactored DocumentObjectBinder and added AnnotationListeners

2014-10-27 Thread Fabio Piro (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6645?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14185279#comment-14185279
 ] 

Fabio Piro commented on SOLR-6645:
--

Hi [~erickerickson], are there any problems regarding the .patch file?

 Refactored DocumentObjectBinder and added AnnotationListeners
 -

 Key: SOLR-6645
 URL: https://issues.apache.org/jira/browse/SOLR-6645
 Project: Solr
  Issue Type: New Feature
  Components: clients - java
Affects Versions: 4.10.2
Reporter: Fabio Piro
  Labels: annotations, binder, listener, solrj
 Fix For: 5.0, Trunk

 Attachments: SOLR-6645.patch


 Hello good people.
 It is understandable that the priority of SolrJ is to provide a stable Java 
 API rather than a feature-rich client; I'm well aware of that. On the other 
 hand, "more features" nowadays usually means Spring Data Solr. Although I 
 appreciate the enrichment work of that lib, depending on its monolithic 
 dependencies and magic is sometimes not a valid option.
 So, I was thinking that the official DocumentObjectBinder could benefit from 
 some love, and I have implemented a listener pattern for the annotations. 
 You can register your annotations and their related listeners in the binder, 
 and it will invoke the corresponding method in the listener on getBean and on 
 toSolrInputDocument, therefore granting the chance to do something during the 
 ongoing process.
 Changes are:
 * [MOD] */beans/DocumentObjectBinder*: the new logic and a new constructor 
 for registering the annotations
 * [ADD] */impl/AccessorAnnotationListener*: abstract utility class with the 
 former get(), set(), isArray, isList, isContainedInMap, etc.
 * [ADD] */impl/FieldAnnotationListener*: all the rest of DocField for dealing 
 with @Field
 * [ADD] */AnnotationListener*: the base listener class
 * [MOD] */SolrServer*: added setBinder (this is the only tricky change; I 
 hope it's not a problem).
 It's all well documented and the code is very easy to read. Tests are all 
 green, it should be 100% backward compatible, and the performance impact is 
 nil (the logic flow is exactly the same as now; I only changed the bare 
 essentials and nothing more).
 Some examples (they are not part of the pull request):
 The long-awaited @FieldObject in a few lines of code:
 https://issues.apache.org/jira/browse/SOLR-1945
 {code:java}
 public class FieldObjectAnnotationListener extends AccessorAnnotationListener<FieldObject> {
     public FieldObjectAnnotationListener(AnnotatedElement element, FieldObject annotation) {
         super(element, annotation);
     }

     @Override
     public void onGetBean(Object obj, SolrDocument doc, DocumentObjectBinder binder) {
         Object nested = binder.getBean(target.clazz, doc);
         setTo(obj, nested);
     }

     @Override
     public void onToSolrInputDocument(Object obj, SolrInputDocument doc, DocumentObjectBinder binder) {
         SolrInputDocument nested = binder.toSolrInputDocument(getFrom(obj));
         for (Map.Entry<String, SolrInputField> entry : nested.entrySet()) {
             doc.addField(entry.getKey(), entry.getValue());
         }
     }
 }
 {code}
 Or something entirely new like an annotation for ChildDocuments:
 {code:java}
 public class ChildDocumentsAnnotationListener extends AccessorAnnotationListener<ChildDocuments> {
     public ChildDocumentsAnnotationListener(AnnotatedElement element, ChildDocuments annotation) {
         super(element, annotation);
         if (!target.isInList || target.clazz.isPrimitive()) {
             throw new BindingException("@NestedDocuments is applicable only on List<Object>.");
         }
     }

     @Override
     public void onGetBean(Object obj, SolrDocument doc, DocumentObjectBinder binder) {
         List<Object> nested = new ArrayList<>();
         for (SolrDocument child : doc.getChildDocuments()) {
             nested.add(binder.getBean(target.clazz, child)); // this should be recursive, but it's only an example
         }
         setTo(obj, nested);
     }

     @Override
     public void onToSolrInputDocument(Object obj, SolrInputDocument doc, DocumentObjectBinder binder) {
         SolrInputDocument nested = binder.toSolrInputDocument(getFrom(obj));
         doc.addChildDocuments(nested.getChildDocuments());
     }
 }
 {code}
 In addition, all the logic is encapsulated in the listener, so you can make a 
 custom FieldAnnotationListener too, and override the default one
 {code:java}
 public class CustomFieldAnnotationListener extends FieldAnnotationListener {
     private boolean isTransientPresent;

     public CustomFieldAnnotationListener(AnnotatedElement element, Field annotation) {
         super(element, annotation);
         this.isTransientPresent = 
 

[JENKINS-MAVEN] Lucene-Solr-Maven-5.x #743: POMs out of sync

2014-10-27 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Maven-5.x/743/

No tests ran.

Build Log:
[...truncated 24609 lines...]
BUILD FAILED
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-5.x/build.xml:541: 
The following error occurred while executing this line:
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-5.x/build.xml:182: 
The following error occurred while executing this line:
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-5.x/lucene/build.xml:400:
 The following error occurred while executing this line:
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-5.x/lucene/common-build.xml:573:
 Error deploying artifact 'org.apache.lucene:lucene-solr-grandparent:pom': 
Error retrieving previous build number for artifact 
'org.apache.lucene:lucene-solr-grandparent:pom': repository metadata for: 
'snapshot org.apache.lucene:lucene-solr-grandparent:5.0.0-SNAPSHOT' could not 
be retrieved from repository: apache.snapshots.https due to an error: Error 
transferring file: Server returned HTTP response code: 503 for URL: 
https://repository.apache.org/content/repositories/snapshots/org/apache/lucene/lucene-solr-grandparent/5.0.0-SNAPSHOT/maven-metadata.xml

Total time: 12 minutes 44 seconds
Build step 'Invoke Ant' marked build as failure
Recording test results
Email was triggered for: Failure
Sending email for trigger: Failure




[jira] [Commented] (LUCENE-6025) Add BitSet.prevSetBit

2014-10-27 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6025?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14185327#comment-14185327
 ] 

ASF subversion and git services commented on LUCENE-6025:
-

Commit 1634585 from [~rcmuir] in branch 'dev/trunk'
[ https://svn.apache.org/r1634585 ]

LUCENE-6025: Give AbstractPagedMutable the accountable interface

 Add BitSet.prevSetBit
 -

 Key: LUCENE-6025
 URL: https://issues.apache.org/jira/browse/LUCENE-6025
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Adrien Grand
Assignee: Adrien Grand
Priority: Minor
 Attachments: LUCENE-6025.patch


 This would allow the join module to work with any BitSet as opposed to only 
 FixedBitSet.
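A minimal sketch of what a prevSetBit over a long[]-backed bit set looks like (illustrative only; names and layout are assumptions, not Lucene's actual FixedBitSet code): find the largest set bit at or below a given index by masking the current word and scanning downward.

```java
public class PrevSetBitSketch {
    // Largest set bit index <= fromIndex in a long[]-backed bit set,
    // or -1 if no set bit exists at or below fromIndex.
    static int prevSetBit(long[] bits, int fromIndex) {
        int word = fromIndex >> 6; // which 64-bit word holds fromIndex
        // Keep only the bits at positions <= fromIndex within that word.
        long w = bits[word] & (-1L >>> (63 - (fromIndex & 63)));
        while (true) {
            if (w != 0) {
                // Highest set bit of the current word.
                return (word << 6) + 63 - Long.numberOfLeadingZeros(w);
            }
            if (--word < 0) {
                return -1;
            }
            w = bits[word];
        }
    }

    public static void main(String[] args) {
        long[] bits = new long[2]; // 128 bits
        bits[0] |= 1L << 3;        // set bit 3
        bits[1] |= 1L << 1;        // set bit 65
        if (prevSetBit(bits, 127) != 65) throw new AssertionError();
        if (prevSetBit(bits, 64) != 3) throw new AssertionError();
        if (prevSetBit(bits, 2) != -1) throw new AssertionError();
        System.out.println("ok");
    }
}
```

A backward scan like this is what lets block-join code step from a child document back to its preceding parent bit.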






[jira] [Commented] (LUCENE-6025) Add BitSet.prevSetBit

2014-10-27 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6025?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14185333#comment-14185333
 ] 

ASF subversion and git services commented on LUCENE-6025:
-

Commit 1634588 from [~rcmuir] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1634588 ]

LUCENE-6025: Give AbstractPagedMutable the accountable interface

 Add BitSet.prevSetBit
 -

 Key: LUCENE-6025
 URL: https://issues.apache.org/jira/browse/LUCENE-6025
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Adrien Grand
Assignee: Adrien Grand
Priority: Minor
 Attachments: LUCENE-6025.patch


 This would allow the join module to work with any BitSet as opposed to only 
 FixedBitSet.






[jira] [Resolved] (LUCENE-6026) Give AbstractPagedMutable 'Accountable' rather than its own ramBytesUsed() api

2014-10-27 Thread Robert Muir (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6026?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Muir resolved LUCENE-6026.
-
   Resolution: Fixed
Fix Version/s: Trunk
   5.0

 Give AbstractPagedMutable 'Accountable' rather than its own ramBytesUsed() api
 --

 Key: LUCENE-6026
 URL: https://issues.apache.org/jira/browse/LUCENE-6026
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Robert Muir
 Fix For: 5.0, Trunk

 Attachments: LUCENE-6026.patch


 This should just implement Accountable rather than re-specifying ramBytesUsed.






[jira] [Commented] (SOLR-6650) Add optional slow request logging at WARN level

2014-10-27 Thread Jessica Cheng Mallet (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6650?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14185361#comment-14185361
 ] 

Jessica Cheng Mallet commented on SOLR-6650:


Hi Tim,

It should be the latest. Do you see the default being changed from 1000 to -1 
(https://github.com/apache/lucene-solr/pull/102/files)?

 Add optional slow request logging at WARN level
 ---

 Key: SOLR-6650
 URL: https://issues.apache.org/jira/browse/SOLR-6650
 Project: Solr
  Issue Type: Improvement
Reporter: Jessica Cheng Mallet
Assignee: Timothy Potter
  Labels: logging
 Fix For: 5.0


 At super high request rates, logging all the requests can become a bottleneck 
 and therefore INFO logging is often turned off. However, it is still useful 
 to be able to set a latency threshold above which a request is considered 
 slow and log that request at WARN level so we can easily identify slow 
 queries.
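The pattern being proposed can be sketched in a few lines (hypothetical class and parameter names, not the actual SOLR-6650 patch): time each request and emit a WARN-level line only when a configured threshold is exceeded, with a negative threshold disabling the check entirely.

```java
import java.util.logging.Level;
import java.util.logging.Logger;

public class SlowRequestLogger {
    private static final Logger LOG = Logger.getLogger("request");

    // Hypothetical threshold in milliseconds; a negative value disables
    // slow-request logging altogether.
    private final long slowQueryThresholdMillis;

    SlowRequestLogger(long thresholdMillis) {
        this.slowQueryThresholdMillis = thresholdMillis;
    }

    /** Logs at WARN if the request exceeded the threshold; returns true if logged. */
    boolean logIfSlow(String requestDescription, long elapsedMillis) {
        if (slowQueryThresholdMillis >= 0 && elapsedMillis >= slowQueryThresholdMillis) {
            LOG.log(Level.WARNING, "slow: {0} took {1} ms",
                    new Object[] { requestDescription, elapsedMillis });
            return true;
        }
        return false;
    }

    public static void main(String[] args) {
        SlowRequestLogger disabled = new SlowRequestLogger(-1);
        SlowRequestLogger enabled = new SlowRequestLogger(1000);
        if (disabled.logIfSlow("q=*:*", 5000)) throw new AssertionError();
        if (!enabled.logIfSlow("q=*:*", 5000)) throw new AssertionError();
        if (enabled.logIfSlow("q=*:*", 10)) throw new AssertionError();
        System.out.println("ok");
    }
}
```

Because the WARN path only fires for requests past the threshold, per-request INFO logging can stay off at high request rates while slow queries remain visible.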






[jira] [Updated] (LUCENE-6027) Fix visibility issues in field comparators

2014-10-27 Thread Robert Muir (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6027?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Muir updated LUCENE-6027:

Attachment: LUCENE-6027.patch

 Fix visibility issues in field comparators
 --

 Key: LUCENE-6027
 URL: https://issues.apache.org/jira/browse/LUCENE-6027
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Robert Muir
 Attachments: LUCENE-6027.patch


 These comparators cannot currently be instantiated because although the 
 classes are public, the constructors are package-private (except Strings for 
 some reason). The visibility was correct in 4.10, e.g. can be used to plug in 
 your own sources of raw values like SortedSet/Numeric SortField do.






[jira] [Created] (LUCENE-6027) Fix visibility issues in field comparators

2014-10-27 Thread Robert Muir (JIRA)
Robert Muir created LUCENE-6027:
---

 Summary: Fix visibility issues in field comparators
 Key: LUCENE-6027
 URL: https://issues.apache.org/jira/browse/LUCENE-6027
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Robert Muir
 Attachments: LUCENE-6027.patch

These comparators cannot currently be instantiated because although the classes 
are public, the constructors are package-private (except Strings for some 
reason). The visibility was correct in 4.10, e.g. can be used to plug in your 
own sources of raw values like SortedSet/Numeric SortField do.







[jira] [Commented] (SOLR-6650) Add optional slow request logging at WARN level

2014-10-27 Thread Timothy Potter (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6650?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14185384#comment-14185384
 ] 

Timothy Potter commented on SOLR-6650:
--

Yes, I see that now ... the patch link doesn't have the latest (which is what 
I looked at earlier): https://github.com/apache/lucene-solr/pull/102.patch

 Add optional slow request logging at WARN level
 ---

 Key: SOLR-6650
 URL: https://issues.apache.org/jira/browse/SOLR-6650
 Project: Solr
  Issue Type: Improvement
Reporter: Jessica Cheng Mallet
Assignee: Timothy Potter
  Labels: logging
 Fix For: 5.0


 At super high request rates, logging all the requests can become a bottleneck 
 and therefore INFO logging is often turned off. However, it is still useful 
 to be able to set a latency threshold above which a request is considered 
 slow and log that request at WARN level so we can easily identify slow 
 queries.






[jira] [Commented] (LUCENE-6027) Fix visibility issues in field comparators

2014-10-27 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6027?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14185446#comment-14185446
 ] 

Michael McCandless commented on LUCENE-6027:


+1

 Fix visibility issues in field comparators
 --

 Key: LUCENE-6027
 URL: https://issues.apache.org/jira/browse/LUCENE-6027
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Robert Muir
 Attachments: LUCENE-6027.patch


 These comparators cannot currently be instantiated because although the 
 classes are public, the constructors are package-private (except Strings for 
 some reason). The visibility was correct in 4.10, e.g. can be used to plug in 
 your own sources of raw values like SortedSet/Numeric SortField do.






Re: svn commit: r1634586 - /lucene/dev/trunk/lucene/core/src/java/org/apache/lucene/index/FieldInfo.java

2014-10-27 Thread Michael McCandless
Woops, thanks Rob.

Mike McCandless

http://blog.mikemccandless.com


On Mon, Oct 27, 2014 at 12:32 PM,  rm...@apache.org wrote:
 Author: rmuir
 Date: Mon Oct 27 16:32:52 2014
 New Revision: 1634586

 URL: http://svn.apache.org/r1634586
 Log:
 remove redundant/outdated check

 Modified:
 
 lucene/dev/trunk/lucene/core/src/java/org/apache/lucene/index/FieldInfo.java

 Modified: 
 lucene/dev/trunk/lucene/core/src/java/org/apache/lucene/index/FieldInfo.java
 URL: 
 http://svn.apache.org/viewvc/lucene/dev/trunk/lucene/core/src/java/org/apache/lucene/index/FieldInfo.java?rev=1634586&r1=1634585&r2=1634586&view=diff
 ==
 --- 
 lucene/dev/trunk/lucene/core/src/java/org/apache/lucene/index/FieldInfo.java 
 (original)
 +++ 
 lucene/dev/trunk/lucene/core/src/java/org/apache/lucene/index/FieldInfo.java 
 Mon Oct 27 16:32:52 2014
 @@ -165,9 +165,6 @@ public final class FieldInfo {
    if (omitNorms) {
      throw new IllegalStateException("non-indexed field '" + name + "' cannot omit norms");
    }
 -  if (indexOptions != null) {
 -    throw new IllegalStateException("non-indexed field '" + name + "' cannot have index options");
 -  }
  }

  if (dvGen != -1  docValueType == null) {






[jira] [Created] (SOLR-6662) bin/solr -h hangs w/o doing anything

2014-10-27 Thread Hoss Man (JIRA)
Hoss Man created SOLR-6662:
--

 Summary: bin/solr -h hangs w/o doing anything
 Key: SOLR-6662
 URL: https://issues.apache.org/jira/browse/SOLR-6662
 Project: Solr
  Issue Type: Bug
Reporter: Hoss Man
Assignee: Timothy Potter


I ran "./bin/solr -h" thinking it would either give me help info about 
running the script, or an error if that argument was invalid -- but instead it 
just hangs, printing nothing and doing nothing.

I realize now that "-h" is for specifying a host - but that makes the 
behavior all the more weird, since it didn't give me an error about a missing 
hostname.






[JENKINS] Lucene-Solr-Tests-5.x-Java7 - Build # 2186 - Still Failing

2014-10-27 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-5.x-Java7/2186/

1 tests failed.
REGRESSION:  
org.apache.solr.update.SoftAutoCommitTest.testSoftAndHardCommitMaxTimeRapidAdds

Error Message:
1: soft wasn't fast enough

Stack Trace:
java.lang.AssertionError: 1: soft wasn't fast enough
at 
__randomizedtesting.SeedInfo.seed([BCADA386AC22601D:E0B80DBF47A02165]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertNotNull(Assert.java:526)
at 
org.apache.solr.update.SoftAutoCommitTest.testSoftAndHardCommitMaxTimeRapidAdds(SoftAutoCommitTest.java:316)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at java.lang.Thread.run(Thread.java:745)




Build Log:
[...truncated 10759 lines...]
   [junit4] Suite: 

[jira] [Commented] (SOLR-6650) Add optional slow request logging at WARN level

2014-10-27 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6650?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14185501#comment-14185501
 ] 

ASF subversion and git services commented on SOLR-6650:
---

Commit 1634621 from [~thelabdude] in branch 'dev/trunk'
[ https://svn.apache.org/r1634621 ]

SOLR-6650: disabled by default

 Add optional slow request logging at WARN level
 ---

 Key: SOLR-6650
 URL: https://issues.apache.org/jira/browse/SOLR-6650
 Project: Solr
  Issue Type: Improvement
Reporter: Jessica Cheng Mallet
Assignee: Timothy Potter
  Labels: logging
 Fix For: 5.0


 At super high request rates, logging all the requests can become a bottleneck 
 and therefore INFO logging is often turned off. However, it is still useful 
 to be able to set a latency threshold above which a request is considered 
 slow and log that request at WARN level so we can easily identify slow 
 queries.
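
The latency check the issue describes is simple; here is a minimal, self-contained sketch of the idea (the threshold constant and method names are illustrative assumptions, not Solr's actual configuration keys or API):

```java
// Sketch of latency-threshold logging: requests at or above the threshold
// are logged at WARN even when INFO logging is disabled. The threshold value
// and names here are assumptions for illustration only.
public class SlowRequestLogSketch {
    static final long SLOW_THRESHOLD_MILLIS = 1000; // assumed example threshold

    // Decide the log level for a completed request based on its elapsed time.
    static String levelFor(long elapsedMillis) {
        return elapsedMillis >= SLOW_THRESHOLD_MILLIS ? "WARN" : "INFO";
    }

    public static void main(String[] args) {
        System.out.println("750ms  -> " + levelFor(750));
        System.out.println("1500ms -> " + levelFor(1500));
    }
}
```

In Solr the threshold would presumably come from configuration; the sketch only shows the level-selection logic.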



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)




[jira] [Commented] (SOLR-6650) Add optional slow request logging at WARN level

2014-10-27 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6650?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14185519#comment-14185519
 ] 

ASF subversion and git services commented on SOLR-6650:
---

Commit 1634628 from [~thelabdude] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1634628 ]

SOLR-6650: disabled by default

 Add optional slow request logging at WARN level
 ---

 Key: SOLR-6650
 URL: https://issues.apache.org/jira/browse/SOLR-6650
 Project: Solr
  Issue Type: Improvement
Reporter: Jessica Cheng Mallet
Assignee: Timothy Potter
  Labels: logging
 Fix For: 5.0


 At super high request rates, logging all the requests can become a bottleneck 
 and therefore INFO logging is often turned off. However, it is still useful 
 to be able to set a latency threshold above which a request is considered 
 slow and log that request at WARN level so we can easily identify slow 
 queries.






[jira] [Resolved] (SOLR-6650) Add optional slow request logging at WARN level

2014-10-27 Thread Timothy Potter (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6650?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothy Potter resolved SOLR-6650.
--
Resolution: Fixed

 Add optional slow request logging at WARN level
 ---

 Key: SOLR-6650
 URL: https://issues.apache.org/jira/browse/SOLR-6650
 Project: Solr
  Issue Type: Improvement
Reporter: Jessica Cheng Mallet
Assignee: Timothy Potter
  Labels: logging
 Fix For: 5.0


 At super high request rates, logging all the requests can become a bottleneck 
 and therefore INFO logging is often turned off. However, it is still useful 
 to be able to set a latency threshold above which a request is considered 
 slow and log that request at WARN level so we can easily identify slow 
 queries.






[jira] [Commented] (SOLR-6435) Add script to simplify posting content to Solr

2014-10-27 Thread Erik Hatcher (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6435?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14185576#comment-14185576
 ] 

Erik Hatcher commented on SOLR-6435:


I'm doing this kind of thing to demonstrate Solr with the post.jar tool:

{code}
java -classpath example/solr-webapp/webapp/WEB-INF/lib/solr-core-*.jar \
  -Ddata=web -Drecursive=1 -Ddelay=1 -Dc=gettingstarted -Dauto \
  org.apache.solr.util.SimplePostTool $@
{code}

That's the kind of thing we can get bin/post to do cleanly for some very common 
use cases (file, web, data files).

 Add script to simplify posting content to Solr
 --

 Key: SOLR-6435
 URL: https://issues.apache.org/jira/browse/SOLR-6435
 Project: Solr
  Issue Type: Improvement
  Components: scripts and tools
Affects Versions: 4.10
Reporter: Erik Hatcher
 Fix For: 5.0, Trunk

 Attachments: SOLR-6435.patch


 Solr's SimplePostTool (example/exampledocs/post.jar) provides a very useful, 
 simple way to get common types of content into Solr.  With the new start 
 scripts and the directory refactoring, let's move this tool to a first-class, 
 non-example, script-fronted tool.






[jira] [Commented] (SOLR-6435) Add script to simplify posting content to Solr

2014-10-27 Thread Timothy Potter (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6435?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14185580#comment-14185580
 ] 

Timothy Potter commented on SOLR-6435:
--

As part of the work I'm doing for SOLR-3619, I'm also invoking the post tool 
using:

$JAVA -Durl=http://localhost:$SOLR_PORT/solr/$EXAMPLE/update -jar \
  $SOLR_TIP/example/exampledocs/post.jar $SOLR_TIP/example/exampledocs/*.xml

Of course, this complexity should be hidden behind the simple bin/solr post 
command.

Also as part of the work in SOLR-3619, the script will be able to auto-detect 
the port a local Solr is listening on, so that users don't have to do things 
like:

bin/solr post -url http://localhost:8983/solr ...

 Add script to simplify posting content to Solr
 --

 Key: SOLR-6435
 URL: https://issues.apache.org/jira/browse/SOLR-6435
 Project: Solr
  Issue Type: Improvement
  Components: scripts and tools
Affects Versions: 4.10
Reporter: Erik Hatcher
 Fix For: 5.0, Trunk

 Attachments: SOLR-6435.patch


 Solr's SimplePostTool (example/exampledocs/post.jar) provides a very useful, 
 simple way to get common types of content into Solr.  With the new start 
 scripts and the directory refactoring, let's move this tool to a first-class, 
 non-example, script-fronted tool.






[jira] [Commented] (SOLR-6655) Improve SimplePostTool to easily specify target port/collection etc.

2014-10-27 Thread Anshum Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6655?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14185589#comment-14185589
 ] 

Anshum Gupta commented on SOLR-6655:


How about letting the endpoint creation happen in the tool? We might make it 
ZK-aware etc. (if/when...).

 Improve SimplePostTool to easily specify target port/collection etc.
 

 Key: SOLR-6655
 URL: https://issues.apache.org/jira/browse/SOLR-6655
 Project: Solr
  Issue Type: Improvement
Reporter: Anshum Gupta
  Labels: difficulty-easy, impact-medium
 Attachments: SOLR-6655.patch


 Right now, the SimplePostTool has a single parameter 'url' that can be used 
 to send the request to a specific endpoint. It would make sense to allow 
 users to specify just the collection name, port etc. explicitly and 
 independently as separate parameters.






[jira] [Commented] (SOLR-6058) Solr needs a new website

2014-10-27 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6058?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14185682#comment-14185682
 ] 

ASF subversion and git services commented on SOLR-6058:
---

Commit 1634664 from [~sar...@syr.edu] in branch 'cms/branches/solr_6058'
[ https://svn.apache.org/r1634664 ]

SOLR-6058: fix anchor alignment (patch from Fran)

 Solr needs a new website
 

 Key: SOLR-6058
 URL: https://issues.apache.org/jira/browse/SOLR-6058
 Project: Solr
  Issue Type: Task
Reporter: Grant Ingersoll
Assignee: Grant Ingersoll
 Attachments: HTML.rar, SOLR-6058, SOLR-6058.location-fix.patchfile, 
 Solr_Icons.pdf, Solr_Logo_on_black.pdf, Solr_Logo_on_black.png, 
 Solr_Logo_on_orange.pdf, Solr_Logo_on_orange.png, Solr_Logo_on_white.pdf, 
 Solr_Logo_on_white.png, Solr_Styleguide.pdf


 Solr needs a new website:  better organization of content, less verbose, more 
 pleasing graphics, etc.






[jira] [Updated] (SOLR-6591) Cluster state updates can be lost on exception in main queue loop

2014-10-27 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6591?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar updated SOLR-6591:

Attachment: SOLR-6591-no-mixed-batches.patch

Right now main cluster state updates are batched together and updates to collections 
with stateFormat > 1 are not batched (I'll create another issue for that). 
However, updates to both can be mixed together, e.g. if the overseer gets 5 messages 
for the main cluster state and then 1 for stateFormat > 1, the resulting 
updates are written to ZK together. This is error-prone and we shouldn't batch 
updates for different stateFormats together.

This patch tracks the last stateFormat for which a message was processed and 
breaks out of the loop if a different one is encountered.
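
The batching rule in the patch can be illustrated with a small stand-alone sketch. The message representation below is a made-up simplification (messages reduced to their stateFormat number); the real Overseer processes full cluster-state messages:

```java
import java.util.ArrayList;
import java.util.List;

public class StateFormatBatchSketch {
    // Group consecutive messages into batches that each contain only one
    // stateFormat, mirroring the "break out of the loop when a different
    // stateFormat is encountered" rule described above.
    static List<List<Integer>> batch(List<Integer> stateFormats) {
        List<List<Integer>> batches = new ArrayList<>();
        List<Integer> current = new ArrayList<>();
        for (int sf : stateFormats) {
            if (!current.isEmpty() && current.get(current.size() - 1) != sf) {
                batches.add(current);          // flush: different stateFormat seen
                current = new ArrayList<>();
            }
            current.add(sf);
        }
        if (!current.isEmpty()) batches.add(current);
        return batches;
    }

    public static void main(String[] args) {
        // 5 main-cluster-state messages followed by one stateFormat>1 message:
        // the two groups end up in separate batches instead of one mixed write.
        System.out.println(batch(List.of(1, 1, 1, 1, 1, 2)));
    }
}
```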

 Cluster state updates can be lost on exception in main queue loop
 -

 Key: SOLR-6591
 URL: https://issues.apache.org/jira/browse/SOLR-6591
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: Trunk
Reporter: Shalin Shekhar Mangar
Assignee: Shalin Shekhar Mangar
 Fix For: Trunk

 Attachments: SOLR-6591-constructStateFix.patch, 
 SOLR-6591-no-mixed-batches.patch, SOLR-6591.patch


 I found this bug while going through the failure on jenkins:
 https://builds.apache.org/job/Lucene-Solr-NightlyTests-trunk/648/
 {code}
 2 tests failed.
 REGRESSION:  
 org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testDistribSearch
 Error Message:
 Error CREATEing SolrCore 'halfcollection_shard1_replica1': Unable to create 
 core [halfcollection_shard1_replica1] Caused by: Could not get shard id for 
 core: halfcollection_shard1_replica1
 Stack Trace:
 org.apache.solr.client.solrj.impl.HttpSolrServer$RemoteSolrException: Error 
 CREATEing SolrCore 'halfcollection_shard1_replica1': Unable to create core 
 [halfcollection_shard1_replica1] Caused by: Could not get shard id for core: 
 halfcollection_shard1_replica1
 at 
 org.apache.solr.client.solrj.impl.HttpSolrServer.executeMethod(HttpSolrServer.java:570)
 at 
 org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:215)
 at 
 org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:211)
 at 
 org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testErrorHandling(CollectionsAPIDistributedZkTest.java:583)
 at 
 org.apache.solr.cloud.CollectionsAPIDistributedZkTest.doTest(CollectionsAPIDistributedZkTest.java:205)
 at 
 org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:869)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:606)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
 {code}






[jira] [Commented] (LUCENE-5952) Give Version parsing exceptions more descriptive error messages

2014-10-27 Thread Dave Borowitz (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5952?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14185735#comment-14185735
 ] 

Dave Borowitz commented on LUCENE-5952:
---

This patch in Lucene 4.10.1 breaks code that used to compile under 4.10.0, 
which could safely assume Version.parse(Leniently) throws no exceptions. Is 
backwards incompatibility in a bugfix release common, or was this an oversight?
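
As a generic illustration (not Lucene's actual code) of why this is source-incompatible: once a method declares a new checked exception, call sites that neither catch it nor declare it stop compiling. The parsing logic below is a toy stand-in:

```java
import java.text.ParseException;

public class CheckedExceptionCompatSketch {
    // Stand-in for the old-style method: no checked exception declared.
    static String parseLenient(String version) {
        return version.trim();
    }

    // Stand-in for the new-style method: a checked exception was added, so
    // every pre-existing caller must now add try/catch or a throws clause
    // before its code compiles again.
    static String parseStrict(String version) throws ParseException {
        if (!version.matches("\\d+(\\.\\d+)*")) throw new ParseException(version, 0);
        return version;
    }

    public static void main(String[] args) {
        System.out.println(parseLenient(" 4.10 "));      // compiles as before
        try {
            System.out.println(parseStrict("4.10.1"));   // new: must handle ParseException
        } catch (ParseException e) {
            System.out.println("invalid version: " + e.getMessage());
        }
    }
}
```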

 Give Version parsing exceptions more descriptive error messages
 ---

 Key: LUCENE-5952
 URL: https://issues.apache.org/jira/browse/LUCENE-5952
 Project: Lucene - Core
  Issue Type: Bug
Affects Versions: 4.10
Reporter: Michael McCandless
Assignee: Michael McCandless
Priority: Blocker
 Fix For: 4.10.1, 5.0, Trunk

 Attachments: LUCENE-5952.patch, LUCENE-5952.patch, LUCENE-5952.patch, 
 LUCENE-5952.patch, LUCENE-5952.patch, LUCENE-5952.patch


 As discussed on the dev list, it's spooky how Version.java tries to fully 
 parse the incoming version string ... and then throw exceptions that lack 
 details about what invalid value it received, which file contained the 
 invalid value, etc.
 It also seems too low level to be checking versions (e.g. is not future proof 
 for when 4.10 is passed a 5.x index by accident), and seems redundant with 
 the codec headers we already have for checking versions?
 Should we just go back to lenient parsing?






[jira] [Commented] (LUCENE-5299) Refactor Collector API for parallelism

2014-10-27 Thread Shikhar Bhushan (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5299?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14185751#comment-14185751
 ] 

Shikhar Bhushan commented on LUCENE-5299:
-

Just an update that the code rebased against recent trunk lives at 
https://github.com/shikhar/lucene-solr/tree/LUCENE-5299. I've made various 
tweaks, like being able to throttle per-request parallelism in 
{{ParallelSearchStrategy}}.

luceneutil bench numbers when running with ^
  + hacked IndexSearcher constructor that uses {{ParallelSearchStrategy(new 
ForkJoinPool(128), 8)}}
  + luceneutil constants.py SEARCH_NUM_THREADS = 16

Against trunk, on a 32 core (with HT) Sandy Bridge server, with source 
{{wikimedium500k}}

{noformat}
Report after iter 19:
                Task    QPS baseline      StdDev    QPS parcol      StdDev                Pct diff
              Fuzzy1       81.91 (43.2%)       52.96 (39.7%)  -35.3% ( -82% -   83%)
             LowTerm     2550.11 (11.9%)     1927.28  (5.6%)  -24.4% ( -37% -   -7%)
             Respell       43.02 (39.4%)       35.23 (31.5%)  -18.1% ( -63% -   87%)
              Fuzzy2       19.32 (25.1%)       16.40 (34.8%)  -15.1% ( -59% -   59%)
             MedTerm     1679.37 (12.2%)     1743.27  (8.6%)    3.8% ( -15% -   28%)
            PKLookup      221.58  (8.3%)      257.36 (13.2%)   16.1% (  -4% -   41%)
          AndHighLow     1027.99 (11.6%)     1278.39 (15.9%)   24.4% (  -2% -   58%)
          AndHighMed      741.50 (10.0%)     1198.04 (27.5%)   61.6% (  21% -  110%)
           MedPhrase      709.04 (11.6%)     1203.02 (24.3%)   69.7% (  30% -  119%)
         LowSpanNear      601.13 (16.9%)     1127.30 (16.7%)   87.5% (  46% -  145%)
     LowSloppyPhrase      554.87 (10.8%)     1130.25 (30.5%)  103.7% (  56% -  162%)
           OrHighMed      408.55 (10.4%)      977.56 (20.1%)  139.3% (  98% -  189%)
           LowPhrase      364.36 (10.8%)      893.27 (41.0%)  145.2% (  84% -  220%)
           OrHighLow      355.78 (12.7%)      893.63 (19.6%)  151.2% ( 105% -  210%)
         AndHighHigh      390.73 (10.3%)     1004.70 (24.3%)  157.1% ( 111% -  213%)
            HighTerm      399.01 (11.8%)     1067.67 (12.1%)  167.6% ( 128% -  217%)
            Wildcard      754.76 (11.6%)     2067.96 (28.0%)  174.0% ( 120% -  241%)
        HighSpanNear      153.57 (14.8%)      463.54 (24.3%)  201.8% ( 141% -  282%)
          OrHighHigh      212.16 (12.4%)      665.56 (28.2%)  213.7% ( 154% -  290%)
          HighPhrase      170.49 (13.1%)      547.72 (17.3%)  221.3% ( 168% -  289%)
    HighSloppyPhrase       66.91 (10.1%)      219.59 (12.0%)  228.2% ( 187% -  278%)
     MedSloppyPhrase      128.73 (12.5%)      425.67 (20.3%)  230.7% ( 175% -  300%)
         MedSpanNear      130.31 (10.7%)      436.12 (18.2%)  234.7% ( 185% -  295%)
             Prefix3      166.91 (14.9%)      652.64 (26.7%)  291.0% ( 217% -  390%)
              IntNRQ      110.73 (15.0%)      467.72 (33.6%)  322.4% ( 238% -  436%)
{noformat}


 Refactor Collector API for parallelism
 --

 Key: LUCENE-5299
 URL: https://issues.apache.org/jira/browse/LUCENE-5299
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Shikhar Bhushan
 Attachments: LUCENE-5299.patch, LUCENE-5299.patch, LUCENE-5299.patch, 
 LUCENE-5299.patch, LUCENE-5299.patch, benchmarks.txt


 h2. Motivation
 We should be able to scale-up better with Solr/Lucene by utilizing multiple 
 CPU cores, and not have to resort to scaling-out by sharding (with all the 
 associated distributed system pitfalls) when the index size does not warrant 
 it.
 Presently, IndexSearcher has an optional constructor arg for an 
 ExecutorService, which gets used for searching in parallel for call paths 
 where one of the TopDocCollector's is created internally. The 
 per-atomic-reader search happens in parallel and then the 
 TopDocs/TopFieldDocs results are merged with locking around the merge bit.
 However there are some problems with this approach:
 * If arbitrary Collector args come into play, we can't parallelize. Note that 
 even if ultimately the results are going to a TopDocCollector, it may be wrapped 
 inside e.g. an EarlyTerminatingCollector or TimeLimitingCollector, or both.
 * The special-casing with parallelism baked on top does not scale: there are 
 many Collectors that could potentially lend themselves to parallelism, and 
 special-casing means the parallelization has to be re-implemented if a 
 different permutation of collectors is to be used.
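
The per-reader parallelism described above (submit each reader's work to an executor, then merge the results with synchronization at the end) can be sketched in miniature. The "segments" and scoring below are toy stand-ins, not Lucene's Collector API:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ParallelSegmentSketch {
    // Find the best "score" across segments by searching each segment on the
    // pool in parallel, then merging per-segment results (the analogue of
    // merging per-reader TopDocs) on the calling thread.
    static int parallelBest(List<int[]> segments) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        try {
            List<Future<Integer>> perSegment = new ArrayList<>();
            for (int[] seg : segments) {
                perSegment.add(pool.submit(
                        () -> Arrays.stream(seg).max().orElse(Integer.MIN_VALUE)));
            }
            int best = Integer.MIN_VALUE;
            for (Future<Integer> f : perSegment) {
                best = Math.max(best, f.get()); // merge step
            }
            return best;
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) throws Exception {
        List<int[]> segments = List.of(new int[]{3, 9, 4}, new int[]{7, 2}, new int[]{8});
        System.out.println(parallelBest(segments));
    }
}
```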
 h2. Proposal
 A 

[jira] [Commented] (SOLR-6058) Solr needs a new website

2014-10-27 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6058?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14185787#comment-14185787
 ] 

ASF subversion and git services commented on SOLR-6058:
---

Commit 1634681 from [~sar...@syr.edu] in branch 'cms/branches/solr_6058'
[ https://svn.apache.org/r1634681 ]

SOLR-6058: don't cover content when aligning headers linked from the nav bar 
(patch from Fran)

 Solr needs a new website
 

 Key: SOLR-6058
 URL: https://issues.apache.org/jira/browse/SOLR-6058
 Project: Solr
  Issue Type: Task
Reporter: Grant Ingersoll
Assignee: Grant Ingersoll
 Attachments: HTML.rar, SOLR-6058, SOLR-6058.location-fix.patchfile, 
 Solr_Icons.pdf, Solr_Logo_on_black.pdf, Solr_Logo_on_black.png, 
 Solr_Logo_on_orange.pdf, Solr_Logo_on_orange.png, Solr_Logo_on_white.pdf, 
 Solr_Logo_on_white.png, Solr_Styleguide.pdf


 Solr needs a new website:  better organization of content, less verbose, more 
 pleasing graphics, etc.






[jira] [Commented] (SOLR-6591) Cluster state updates can be lost on exception in main queue loop

2014-10-27 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6591?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14185798#comment-14185798
 ] 

ASF subversion and git services commented on SOLR-6591:
---

Commit 1634684 from sha...@apache.org in branch 'dev/trunk'
[ https://svn.apache.org/r1634684 ]

SOLR-6591: Do not batch updates for different stateFormats together

 Cluster state updates can be lost on exception in main queue loop
 -

 Key: SOLR-6591
 URL: https://issues.apache.org/jira/browse/SOLR-6591
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: Trunk
Reporter: Shalin Shekhar Mangar
Assignee: Shalin Shekhar Mangar
 Fix For: Trunk

 Attachments: SOLR-6591-constructStateFix.patch, 
 SOLR-6591-no-mixed-batches.patch, SOLR-6591.patch


 I found this bug while going through the failure on jenkins:
 https://builds.apache.org/job/Lucene-Solr-NightlyTests-trunk/648/
 {code}
 2 tests failed.
 REGRESSION:  
 org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testDistribSearch
 Error Message:
 Error CREATEing SolrCore 'halfcollection_shard1_replica1': Unable to create 
 core [halfcollection_shard1_replica1] Caused by: Could not get shard id for 
 core: halfcollection_shard1_replica1
 Stack Trace:
 org.apache.solr.client.solrj.impl.HttpSolrServer$RemoteSolrException: Error 
 CREATEing SolrCore 'halfcollection_shard1_replica1': Unable to create core 
 [halfcollection_shard1_replica1] Caused by: Could not get shard id for core: 
 halfcollection_shard1_replica1
 at 
 org.apache.solr.client.solrj.impl.HttpSolrServer.executeMethod(HttpSolrServer.java:570)
 at 
 org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:215)
 at 
 org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:211)
 at 
 org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testErrorHandling(CollectionsAPIDistributedZkTest.java:583)
 at 
 org.apache.solr.cloud.CollectionsAPIDistributedZkTest.doTest(CollectionsAPIDistributedZkTest.java:205)
 at 
 org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:869)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:606)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
 {code}






[jira] [Commented] (SOLR-6591) Cluster state updates can be lost on exception in main queue loop

2014-10-27 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6591?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14185800#comment-14185800
 ] 

ASF subversion and git services commented on SOLR-6591:
---

Commit 1634685 from sha...@apache.org in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1634685 ]

SOLR-6591: Do not batch updates for different stateFormats together

 Cluster state updates can be lost on exception in main queue loop
 -

 Key: SOLR-6591
 URL: https://issues.apache.org/jira/browse/SOLR-6591
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: Trunk
Reporter: Shalin Shekhar Mangar
Assignee: Shalin Shekhar Mangar
 Fix For: Trunk

 Attachments: SOLR-6591-constructStateFix.patch, 
 SOLR-6591-no-mixed-batches.patch, SOLR-6591.patch


 I found this bug while going through the failure on jenkins:
 https://builds.apache.org/job/Lucene-Solr-NightlyTests-trunk/648/
 {code}
 2 tests failed.
 REGRESSION:  
 org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testDistribSearch
 Error Message:
 Error CREATEing SolrCore 'halfcollection_shard1_replica1': Unable to create 
 core [halfcollection_shard1_replica1] Caused by: Could not get shard id for 
 core: halfcollection_shard1_replica1
 Stack Trace:
 org.apache.solr.client.solrj.impl.HttpSolrServer$RemoteSolrException: Error 
 CREATEing SolrCore 'halfcollection_shard1_replica1': Unable to create core 
 [halfcollection_shard1_replica1] Caused by: Could not get shard id for core: 
 halfcollection_shard1_replica1
 at 
 org.apache.solr.client.solrj.impl.HttpSolrServer.executeMethod(HttpSolrServer.java:570)
 at 
 org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:215)
 at 
 org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:211)
 at 
 org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testErrorHandling(CollectionsAPIDistributedZkTest.java:583)
 at 
 org.apache.solr.cloud.CollectionsAPIDistributedZkTest.doTest(CollectionsAPIDistributedZkTest.java:205)
 at 
 org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:869)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:606)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
 {code}






[jira] [Comment Edited] (LUCENE-5299) Refactor Collector API for parallelism

2014-10-27 Thread Shikhar Bhushan (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5299?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14185751#comment-14185751
 ] 

Shikhar Bhushan edited comment on LUCENE-5299 at 10/27/14 9:01 PM:
---

Just an update that the code rebased against recent trunk lives at 
https://github.com/shikhar/lucene-solr/tree/LUCENE-5299. I've made various 
tweaks, like being able to throttle per-request parallelism in 
{{ParallelSearchStrategy}}.

luceneutil bench numbers when running with ^  hacked IndexSearcher constructor 
that uses {{ParallelSearchStrategy(new ForkJoinPool(128), 8)}}, against trunk, 
on a 32 core (with HT) Sandy Bridge server, with source {{wikimedium500k}}

SEARCH_NUM_THREADS = 16
{noformat}
Report after iter 19:
                Task    QPS baseline      StdDev    QPS parcol      StdDev                Pct diff
              Fuzzy1       81.91 (43.2%)       52.96 (39.7%)  -35.3% ( -82% -   83%)
             LowTerm     2550.11 (11.9%)     1927.28  (5.6%)  -24.4% ( -37% -   -7%)
             Respell       43.02 (39.4%)       35.23 (31.5%)  -18.1% ( -63% -   87%)
              Fuzzy2       19.32 (25.1%)       16.40 (34.8%)  -15.1% ( -59% -   59%)
             MedTerm     1679.37 (12.2%)     1743.27  (8.6%)    3.8% ( -15% -   28%)
            PKLookup      221.58  (8.3%)      257.36 (13.2%)   16.1% (  -4% -   41%)
          AndHighLow     1027.99 (11.6%)     1278.39 (15.9%)   24.4% (  -2% -   58%)
          AndHighMed      741.50 (10.0%)     1198.04 (27.5%)   61.6% (  21% -  110%)
           MedPhrase      709.04 (11.6%)     1203.02 (24.3%)   69.7% (  30% -  119%)
         LowSpanNear      601.13 (16.9%)     1127.30 (16.7%)   87.5% (  46% -  145%)
     LowSloppyPhrase      554.87 (10.8%)     1130.25 (30.5%)  103.7% (  56% -  162%)
           OrHighMed      408.55 (10.4%)      977.56 (20.1%)  139.3% (  98% -  189%)
           LowPhrase      364.36 (10.8%)      893.27 (41.0%)  145.2% (  84% -  220%)
           OrHighLow      355.78 (12.7%)      893.63 (19.6%)  151.2% ( 105% -  210%)
         AndHighHigh      390.73 (10.3%)     1004.70 (24.3%)  157.1% ( 111% -  213%)
            HighTerm      399.01 (11.8%)     1067.67 (12.1%)  167.6% ( 128% -  217%)
            Wildcard      754.76 (11.6%)     2067.96 (28.0%)  174.0% ( 120% -  241%)
        HighSpanNear      153.57 (14.8%)      463.54 (24.3%)  201.8% ( 141% -  282%)
          OrHighHigh      212.16 (12.4%)      665.56 (28.2%)  213.7% ( 154% -  290%)
          HighPhrase      170.49 (13.1%)      547.72 (17.3%)  221.3% ( 168% -  289%)
    HighSloppyPhrase       66.91 (10.1%)      219.59 (12.0%)  228.2% ( 187% -  278%)
     MedSloppyPhrase      128.73 (12.5%)      425.67 (20.3%)  230.7% ( 175% -  300%)
         MedSpanNear      130.31 (10.7%)      436.12 (18.2%)  234.7% ( 185% -  295%)
             Prefix3      166.91 (14.9%)      652.64 (26.7%)  291.0% ( 217% -  390%)
              IntNRQ      110.73 (15.0%)      467.72 (33.6%)  322.4% ( 238% -  436%)
{noformat}

SEARCH_NUM_THREADS=32
{noformat}
                Task    QPS baseline      StdDev    QPS parcol      StdDev                Pct diff
             LowTerm     2401.88 (12.7%)     1799.27  (6.3%)  -25.1% ( -39% -   -6%)
              Fuzzy2        6.52 (14.4%)        5.74 (24.0%)  -11.9% ( -43% -   30%)
             Respell       45.13 (90.2%)       40.94 (83.5%)   -9.3% ( -96% - 1679%)
            PKLookup      232.02 (12.9%)      228.35 (12.4%)   -1.6% ( -23% -   27%)
             MedTerm     1612.01 (14.0%)     1601.71 (10.9%)   -0.6% ( -22% -   28%)
              Fuzzy1       14.19 (79.3%)       14.71(177.6%)    3.7% (-141% - 1258%)
          AndHighLow     1205.65 (17.5%)     1254.76 (15.9%)    4.1% ( -24% -   45%)
         MedSpanNear      478.11 (25.4%)      946.72 (34.5%)   98.0% (  30% -  211%)
           OrHighLow      424.71 (14.5%)      941.39 (31.4%)  121.7% (  66% -  195%)
         AndHighHigh      377.82 (13.3%)      910.77 (32.2%)  141.1% (  84% -  215%)
            HighTerm      325.35 (11.3%)      855.63  (8.9%)  163.0% ( 128% -  206%)
          AndHighMed      346.57 (11.7%)      914.59 (26.4%)  163.9% ( 112% -  228%)
           MedPhrase      227.47 (13.1%)      621.50 (22.9%)  173.2% ( 121% -  240%)
     LowSloppyPhrase      265.21 (10.4%)      748.30 (49.2%)  182.2% ( 110% -  269%)
           OrHighMed      221.49 (12.2%)      632.55 (23.9%)  185.6% ( 133% - 

[JENKINS] Lucene-Solr-trunk-MacOSX (64bit/jdk1.7.0) - Build # 1909 - Still Failing!

2014-10-27 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-MacOSX/1909/
Java: 64bit/jdk1.7.0 -XX:-UseCompressedOops -XX:+UseParallelGC

1 tests failed.
REGRESSION:  org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testDistribSearch

Error Message:
Error CREATEing SolrCore 'halfcollection_shard1_replica1': Unable to create 
core [halfcollection_shard1_replica1] Caused by: Could not get shard id for 
core: halfcollection_shard1_replica1

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrServer$RemoteSolrException: Error 
CREATEing SolrCore 'halfcollection_shard1_replica1': Unable to create core 
[halfcollection_shard1_replica1] Caused by: Could not get shard id for core: 
halfcollection_shard1_replica1
        at __randomizedtesting.SeedInfo.seed([90E799E7CD6E52EA:110117FFBA3132D6]:0)
        at org.apache.solr.client.solrj.impl.HttpSolrServer.executeMethod(HttpSolrServer.java:569)
        at org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:215)
        at org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:211)
        at org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testErrorHandling(CollectionsAPIDistributedZkTest.java:583)
        at org.apache.solr.cloud.CollectionsAPIDistributedZkTest.doTest(CollectionsAPIDistributedZkTest.java:205)
        at org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:869)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
        at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
        at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
        at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
        at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
        at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
        at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
        at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
        at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
        at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
        at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
        at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
        at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
        at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
        at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
        at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)

[jira] [Commented] (LUCENE-6023) Remove final modifier from four methods of TFIDFSimilarity class to make them overridable.

2014-10-27 Thread Hafiz M Hamid (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6023?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14185854#comment-14185854
 ] 

Hafiz M Hamid commented on LUCENE-6023:
---

Thanks a lot for your valuable insights [~rcmuir]. Since the byte constraint 
appears to have been removed from the encode/decodeNormValue signatures as of 
4.4, removing final from these methods in DefaultSimilarity (as you suggested) 
and overriding them in a subclass would do the job for us. I'll change the bug 
title to reflect that and send a patch.

In case you're interested: our goal is to do the length-norm computation at 
search time. For that, we want to store the raw field length (i.e. numTerms) 
as the fieldNorm so we can use it at search time to compute the length norm. 
That will let us vary the length-norm function and A/B test variants without 
re-indexing all the data, which is out of the question given our scale and 
limitations.
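The approach described above can be sketched like this. This is a self-contained illustration, not the actual DefaultSimilarity API; the class and method names merely mirror the discussion. The norm slot stores the raw term count at index time, and the length-norm function is chosen at search time:

```java
import java.util.function.LongToDoubleFunction;

/** Hypothetical sketch (NOT the real DefaultSimilarity): the norm stores the
 *  raw field length so the length-norm function can vary at search time. */
public class SearchTimeLengthNorm {

    /** Index time: store the raw number of terms instead of a lossy 1/sqrt(len). */
    public static long encodeNormValue(int numTerms) {
        return numTerms; // the raw length fits easily in the post-4.4 long norm slot
    }

    /** Search time: apply whichever length-norm variant is under A/B test. */
    public static double decodeNormValue(long rawLength, LongToDoubleFunction lengthNorm) {
        return lengthNorm.applyAsDouble(rawLength);
    }
}
```

Because decoding is deferred to search time, switching from `len -> 1.0 / Math.sqrt(len)` to, say, `len -> 1.0 / len` becomes a query-time change rather than a full re-index.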

 Remove final modifier from four methods of TFIDFSimilarity class to make 
 them overridable.
 

 Key: LUCENE-6023
 URL: https://issues.apache.org/jira/browse/LUCENE-6023
 Project: Lucene - Core
  Issue Type: Bug
  Components: core/query/scoring
Affects Versions: 4.2.1
Reporter: Hafiz M Hamid
  Labels: similarity
 Fix For: 4.2.1


 TFIDFSimilarity has the following four of its public methods marked final, 
 which keeps us from overriding them. There doesn't seem to be an obvious 
 reason for making these methods non-overridable.
 Here are the four methods:
 computeNorm()
 computeWeight()
 exactSimScorer()
 sloppySimScorer()



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6058) Solr needs a new website

2014-10-27 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6058?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14185857#comment-14185857
 ] 

ASF subversion and git services commented on SOLR-6058:
---

Commit 1634692 from [~gsingers] in branch 'cms/branches/solr_6058'
[ https://svn.apache.org/r1634692 ]

SOLR-6058: minor tweaks

 Solr needs a new website
 

 Key: SOLR-6058
 URL: https://issues.apache.org/jira/browse/SOLR-6058
 Project: Solr
  Issue Type: Task
Reporter: Grant Ingersoll
Assignee: Grant Ingersoll
 Attachments: HTML.rar, SOLR-6058, SOLR-6058.location-fix.patchfile, 
 Solr_Icons.pdf, Solr_Logo_on_black.pdf, Solr_Logo_on_black.png, 
 Solr_Logo_on_orange.pdf, Solr_Logo_on_orange.png, Solr_Logo_on_white.pdf, 
 Solr_Logo_on_white.png, Solr_Styleguide.pdf


 Solr needs a new website:  better organization of content, less verbose, more 
 pleasing graphics, etc.






[jira] [Updated] (LUCENE-6023) Remove final modifier from encode/decodeNormValue methods in DefaultSimilarity class.

2014-10-27 Thread Hafiz M Hamid (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6023?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hafiz M Hamid updated LUCENE-6023:
--
  Description: 



  was:
The TFIDFSimilarity has the following four of its public methods marked final 
which is keeping us from overriding these methods. Apparently there doesn't 
seem to be an obvious reason for keeping these methods non-overridable.

Here are the four methods:
computeNorm()
computeWeight()
exactSimScorer()
sloppySimScorer()


Affects Version/s: (was: 4.2.1)
Fix Version/s: (was: 4.2.1)
  Summary: Remove final modifier from encode/decodeNormValue 
methods in DefaultSimilarity class.  (was: Remove final modifier from four 
methods of TFIDFSimilarity class to make them overridable.)

 Remove final modifier from encode/decodeNormValue methods in 
 DefaultSimilarity class.
 ---

 Key: LUCENE-6023
 URL: https://issues.apache.org/jira/browse/LUCENE-6023
 Project: Lucene - Core
  Issue Type: Bug
  Components: core/query/scoring
Reporter: Hafiz M Hamid
  Labels: similarity








[jira] [Updated] (SOLR-629) Fuzzy search with DisMax request handler

2014-10-27 Thread Fumiaki Yamaoka (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-629?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fumiaki Yamaoka updated SOLR-629:
-
Attachment: SOLR-629.patch

I'm working with Walter at Chegg. I have attempted to port the work Walter did 
to make fuzzy match work on qf fields from Solr 3.3 to the Solr 4.7 branch. 
Please see the attached patch for the changes; I would appreciate some feedback.

 Fuzzy search with DisMax request handler
 

 Key: SOLR-629
 URL: https://issues.apache.org/jira/browse/SOLR-629
 Project: Solr
  Issue Type: Improvement
  Components: query parsers
Affects Versions: 1.3
Reporter: Guillaume Smet
Priority: Minor
 Attachments: SOLR-629.patch, dismax_fuzzy_query_field.v0.1.diff, 
 dismax_fuzzy_query_field.v0.1.diff


 The DisMax search handler doesn't support fuzzy queries, which would be quite 
 useful for our usage of Solr, and from what I've seen on the list it's 
 something several people would like to have.
 Following this discussion 
 http://markmail.org/message/tx6kqr7ga6ponefa#query:solr%20dismax%20fuzzy+page:1+mid:c4pciq6rlr4dwtgm+state:results
  , I added the ability to add fuzzy query fields in the qf parameter. I kept 
 the patch as conservative as possible.
 The syntax supported is: fieldOne^2.3 fieldTwo~0.3 fieldThree~0.2^-0.4 
 fieldFour, as discussed in the above thread.
 The recursive query aliasing should work even with fuzzy query fields, using a 
 very simple rule: the aliased fields inherit the minSimilarity of their 
 parent, combined with their own if they have one.
 Only the qf parameter supports this syntax at the moment. I suppose we should 
 make it usable in pf too. Any opinion?
 Comments are very welcome; I'll spend the time needed to put this patch in 
 good shape.
 Thanks.






Re: [VOTE] Release 4.10.2 RC1

2014-10-27 Thread Steve Rowe
+1

SUCCESS! [0:52:16.190427]

Steve

 On Oct 27, 2014, at 7:54 AM, Adrien Grand jpou...@gmail.com wrote:
 
 +1
 SUCCESS! [0:56:11.020611]
 
 On Sun, Oct 26, 2014 at 4:45 PM, Simon Willnauer
 simon.willna...@gmail.com wrote:
 Tests now pass for me too!! thanks mike
 
 +1
 
 On Sun, Oct 26, 2014 at 12:22 PM, Michael McCandless
 luc...@mikemccandless.com wrote:
 Artifacts: 
 http://people.apache.org/~mikemccand/staging_area/lucene-solr-4.10.2-RC1-rev1634293
 
 Smoke tester: python3 -u dev-tools/scripts/smokeTestRelease.py
 http://people.apache.org/~mikemccand/staging_area/lucene-solr-4.10.2-RC1-rev1634293
 1634293 4.10.2 /tmp/smoke4102 True
 
 I ran smoke tester:
 
  SUCCESS! [0:30:16.520543]
 
 And also confirmed Elasticsearch tests pass with this RC.
 
 Here's my +1
 
 Mike McCandless
 
 http://blog.mikemccandless.com
 
 -
 To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
 For additional commands, e-mail: dev-h...@lucene.apache.org
 
 
 -
 To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
 For additional commands, e-mail: dev-h...@lucene.apache.org
 
 
 
 
 -- 
 Adrien
 
 -
 To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
 For additional commands, e-mail: dev-h...@lucene.apache.org
 





[jira] [Commented] (SOLR-6655) Improve SimplePostTool to easily specify target port/collection etc.

2014-10-27 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-6655?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14185950#comment-14185950
 ] 

Jan Høydahl commented on SOLR-6655:
---

Remember that the guiding principle behind the *simple* post tool is that it 
shall have NO dependencies on non-JDK libs, but be a 100% self-contained Java 
file.

Perhaps it's time to create a *SolrPostTool* from scratch using SolrJ, 
ZooKeeper, proper commons-cli argument parsing, perhaps a dependency on an open 
source crawler library, etc., and thus build a robust command-line tool for 
pushing data to Solr.

 Improve SimplePostTool to easily specify target port/collection etc.
 

 Key: SOLR-6655
 URL: https://issues.apache.org/jira/browse/SOLR-6655
 Project: Solr
  Issue Type: Improvement
Reporter: Anshum Gupta
  Labels: difficulty-easy, impact-medium
 Attachments: SOLR-6655.patch


 Right now, the SimplePostTool has a single parameter 'url' that can be used 
 to send the request to a specific endpoint. It would make sense to allow 
 users to specify just the collection name, port etc. explicitly and 
 independently as separate parameters.






[jira] [Commented] (SOLR-6655) Improve SimplePostTool to easily specify target port/collection etc.

2014-10-27 Thread Anshum Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6655?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14185965#comment-14185965
 ] 

Anshum Gupta commented on SOLR-6655:


Sure! Just that it'd be another issue. SimplePostTool, however, should accept 
independent params to really make it *simple*: collection, port, and core to 
begin with.

If we intend to use a common param '-c' for both the collection and the core, 
we should document it to avoid ambiguity. I did open an issue (SOLR-6379) on 
a similar note but it's still open due to lack of consensus. I wouldn't want 
the same ambiguity dragged into other parts of Solr too.

 Improve SimplePostTool to easily specify target port/collection etc.
 

 Key: SOLR-6655
 URL: https://issues.apache.org/jira/browse/SOLR-6655
 Project: Solr
  Issue Type: Improvement
Reporter: Anshum Gupta
  Labels: difficulty-easy, impact-medium
 Attachments: SOLR-6655.patch


 Right now, the SimplePostTool has a single parameter 'url' that can be used 
 to send the request to a specific endpoint. It would make sense to allow 
 users to specify just the collection name, port etc. explicitly and 
 independently as separate parameters.






[jira] [Updated] (SOLR-6631) DistributedQueue spinning on calling zookeeper getChildren()

2014-10-27 Thread Timothy Potter (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6631?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothy Potter updated SOLR-6631:
-
Attachment: SOLR-6631.patch

Here's a cut of a patch with the start of a unit test for DistributedQueue. I'm 
curious how we might trigger a None event so I can add that to the unit test. 
Mainly I want to reproduce the scenario described in the bug report, but I 
wasn't able to get ZK to raise a None event.

 DistributedQueue spinning on calling zookeeper getChildren()
 

 Key: SOLR-6631
 URL: https://issues.apache.org/jira/browse/SOLR-6631
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Reporter: Jessica Cheng Mallet
Assignee: Timothy Potter
  Labels: solrcloud
 Attachments: SOLR-6631.patch


 The change from SOLR-6336 introduced a bug where I'm now stuck in a loop 
 making getChildren() requests to zookeeper with this thread dump:
 {quote}
 Thread-51 [WAITING] CPU time: 1d 15h 0m 57s
 java.lang.Object.wait()
 org.apache.zookeeper.ClientCnxn.submitRequest(RequestHeader, Record, Record, 
 ZooKeeper$WatchRegistration)
 org.apache.zookeeper.ZooKeeper.getChildren(String, Watcher)
 org.apache.solr.common.cloud.SolrZkClient$6.execute()2 recursive calls
 org.apache.solr.common.cloud.ZkCmdExecutor.retryOperation(ZkOperation)
 org.apache.solr.common.cloud.SolrZkClient.getChildren(String, Watcher, 
 boolean)
 org.apache.solr.cloud.DistributedQueue.orderedChildren(Watcher)
 org.apache.solr.cloud.DistributedQueue.getChildren(long)
 org.apache.solr.cloud.DistributedQueue.peek(long)
 org.apache.solr.cloud.DistributedQueue.peek(boolean)
 org.apache.solr.cloud.Overseer$ClusterStateUpdater.run()
 java.lang.Thread.run()
 {quote}
 Looking at the code, I think the issue is that LatchChildWatcher#process 
 always sets the event to its member variable event, regardless of its type. 
 The problem is that once the member event is set, await no longer waits. In 
 this state, the while loop in getChildren(long), when called with wait being 
 Integer.MAX_VALUE, will loop back, NOT wait at await (because event != null), 
 but then it still will not get any children.
 {quote}
 while (true) {
   if (!children.isEmpty()) break;
   watcher.await(wait == Long.MAX_VALUE ? DEFAULT_TIMEOUT : wait);
   if (watcher.getWatchedEvent() != null) {
     children = orderedChildren(null);
   }
   if (wait != Long.MAX_VALUE) break;
 }
 {quote}
 I think the fix would be to only set the event in the watcher if the type is 
 not None.
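The proposed fix can be sketched as follows. This is a self-contained mimic: the nested enum and event class stand in for ZooKeeper's types, and the names only mirror the report, not the actual Solr source.

```java
/** Hypothetical sketch of the proposed fix: ignore None (connection-state)
 *  events so a stale event cannot permanently satisfy the latch. */
public class LatchChildWatcherSketch {

    enum EventType { None, NodeChildrenChanged }   // stand-in for ZooKeeper's enum

    static class WatchedEvent {
        final EventType type;
        WatchedEvent(EventType type) { this.type = type; }
    }

    private WatchedEvent event;

    public synchronized void process(WatchedEvent e) {
        // The bug: storing every event, including None, makes the subsequent
        // await() return immediately even though no child was actually added.
        if (e.type != EventType.None) {  // proposed fix: skip None events
            this.event = e;
            notifyAll();                 // wake waiters only for real child changes
        }
    }

    public synchronized WatchedEvent getWatchedEvent() { return event; }
}
```

With this guard, a None event leaves the latch unsatisfied and the getChildren(long) loop goes back to waiting instead of spinning.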






[jira] [Resolved] (SOLR-6623) NPE in StoredFieldsShardResponseProcessor possible when using TIME_ALLOWED param

2014-10-27 Thread Anshum Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6623?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anshum Gupta resolved SOLR-6623.

Resolution: Fixed

This hasn't happened since the fix was committed. Marking the issue as resolved.

 NPE in StoredFieldsShardResponseProcessor possible when using TIME_ALLOWED 
 param
 

 Key: SOLR-6623
 URL: https://issues.apache.org/jira/browse/SOLR-6623
 Project: Solr
  Issue Type: Bug
Reporter: Hoss Man
Assignee: Anshum Gupta
 Attachments: SOLR-6623.patch, SOLR-6623.patch


 I'm not sure if this is an existing bug, or something new caused by changes 
 in SOLR-5986, but it just popped up in jenkins today...
 http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-MacOSX/1844
 Revision: 1631656
 {noformat}
[junit4]   2 NOTE: reproduce with: ant test  
 -Dtestcase=TestDistributedGrouping -Dtests.method=testDistribSearch 
 -Dtests.seed=E9460FA0973F6672 -Dtests.slow=true -Dtests.locale=cs 
 -Dtests.timezone=Indian/Mayotte -Dtests.file.encoding=ISO-8859-1
[junit4] ERROR   55.9s | TestDistributedGrouping.testDistribSearch 
[junit4] Throwable #1: 
 org.apache.solr.client.solrj.impl.HttpSolrServer$RemoteSolrException: 
 java.lang.NullPointerException
[junit4]  at 
 org.apache.solr.search.grouping.distributed.responseprocessor.StoredFieldsShardResponseProcessor.process(StoredFieldsShardResponseProcessor.java:45)
[junit4]  at 
 org.apache.solr.handler.component.QueryComponent.handleGroupedResponses(QueryComponent.java:708)
[junit4]  at 
 org.apache.solr.handler.component.QueryComponent.handleResponses(QueryComponent.java:691)
[junit4]  at 
 org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:337)
[junit4]  at 
 org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:136)
[junit4]  at 
 org.apache.solr.core.SolrCore.execute(SolrCore.java:1983)
[junit4]  at 
 org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:773)
[junit4]  at 
 org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:408)
[junit4]  at 
 org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:202)
[junit4]  at 
 org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1419)
[junit4]  at 
 org.apache.solr.client.solrj.embedded.JettySolrRunner$DebugFilter.doFilter(JettySolrRunner.java:137)
[junit4]  at 
 org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1419)
[junit4]  at 
 org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:455)
[junit4]  at 
 org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:229)
[junit4]  at 
 org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137)
[junit4]  at 
 org.eclipse.jetty.server.handler.GzipHandler.handle(GzipHandler.java:301)
[junit4]  at 
 org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1077)
[junit4]  at 
 org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:384)
[junit4]  at 
 org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:193)
[junit4]  at 
 org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1009)
[junit4]  at 
 org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135)
[junit4]  at 
 org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:116)
[junit4]  at 
 org.eclipse.jetty.server.Server.handle(Server.java:368)
[junit4]  at 
 org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:489)
[junit4]  at 
 org.eclipse.jetty.server.AbstractHttpConnection.headerComplete(AbstractHttpConnection.java:942)
[junit4]  at 
 org.eclipse.jetty.server.AbstractHttpConnection$RequestHandler.headerComplete(AbstractHttpConnection.java:1004)
[junit4]  at 
 org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:640)
[junit4]  at 
 org.eclipse.jetty.http.HttpParser.parseAvailable(HttpParser.java:235)
[junit4]  at 
 org.eclipse.jetty.server.AsyncHttpConnection.handle(AsyncHttpConnection.java:82)
[junit4]  at 
 org.eclipse.jetty.io.nio.SelectChannelEndPoint.handle(SelectChannelEndPoint.java:628)
[junit4]  at 
 org.eclipse.jetty.io.nio.SelectChannelEndPoint$1.run(SelectChannelEndPoint.java:52)
[junit4]  at 
 org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:608)
  

[jira] [Commented] (SOLR-6655) Improve SimplePostTool to easily specify target port/collection etc.

2014-10-27 Thread Anshum Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6655?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14186028#comment-14186028
 ] 

Anshum Gupta commented on SOLR-6655:


Actually, ignore that comment. Resolving between collection and core is 
orthogonal: the final URL would be the same and Solr needs to handle it (if 
it does).

 Improve SimplePostTool to easily specify target port/collection etc.
 

 Key: SOLR-6655
 URL: https://issues.apache.org/jira/browse/SOLR-6655
 Project: Solr
  Issue Type: Improvement
Reporter: Anshum Gupta
  Labels: difficulty-easy, impact-medium
 Attachments: SOLR-6655.patch


 Right now, the SimplePostTool has a single parameter 'url' that can be used 
 to send the request to a specific endpoint. It would make sense to allow 
 users to specify just the collection name, port etc. explicitly and 
 independently as separate parameters.






[jira] [Created] (SOLR-6663) decide rules syntax for computing stats/ranges/queries only at certain levels of a pivot

2014-10-27 Thread Hoss Man (JIRA)
Hoss Man created SOLR-6663:
--

 Summary: decide rules & syntax for computing stats/ranges/queries only at certain levels of a pivot
 Key: SOLR-6663
 URL: https://issues.apache.org/jira/browse/SOLR-6663
 Project: Solr
  Issue Type: Sub-task
Reporter: Hoss Man


[~smolloy] asked a great question in SOLR-6351...

bq. One more question around this, which applies for SOLR-6353 and SOLR-4212 as 
well. Should we have a syntax to apply stats/queries/ranges only at specific 
levels in the pivot hierarchy? It would reduce amount of computation and size 
of response for cases where you only need it at a specific level (usually last 
level I guess). 

I'm splitting this off into its own sub-task for further discussion.



For now, the stats localparam must be a single tag, and the workaround is 
to add a common tag to all the stats you want to use.

ie, this will cause an error...
{noformat}
stats.field={!tag=tagA}price
stats.field={!tag=tagB}popularity
stats.field={!tag=tagB}clicks
facet.pivot={!stats=tagA,tagB}xxx,yyy,zz
{noformat}

but this will work...
{noformat}
stats.field={!tag=tagA,tagPivot}price
stats.field={!tag=tagB,tagPivot}popularity
stats.field={!tag=tagB,tagPivot}clicks
facet.pivot={!stats=tagPivot}xxx,yyy,zz
{noformat}







[jira] [Commented] (SOLR-6663) decide rules syntax for computing stats/ranges/queries only at certain levels of a pivot

2014-10-27 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6663?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14186037#comment-14186037
 ] 

Hoss Man commented on SOLR-6663:



cut/pasting comments about this from SOLR-6351...



[Steve's initial 
suggestion|https://issues.apache.org/jira/browse/SOLR-6351?focusedCommentId=14160413page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14160413]...

{quote}
One more question around this, which applies for SOLR-6353 and SOLR-4212 as 
well. Should we have a syntax to apply stats/queries/ranges only at specific 
levels in the pivot hierarchy? It would reduce amount of computation and size 
of response for cases where you only need it at a specific level (usually last 
level I guess). Something like:
facet.pivot=\{!stats=s1,s2\}field1,field2

We could us * for all levels, or something like:
facet.pivot=\{!stats=,,s3\}field1,field2,field3
to only apply at 3rd level.
{quote}

[My 
reply|https://issues.apache.org/jira/browse/SOLR-6351?focusedCommentId=14160658page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14160658]...

{quote}
...


That's a great question ... honestly it's not something I ever really thought 
about.

One quick thing I will point out: the size of the response shouldn't really be 
a huge factor in our decisions here, because with SOLR-6349 (which I'll 
hopefully have a patch for in the next day or so) the response will only need 
to include the stats people actually care about, and ask for, so the typical 
result size should be much smaller.

But you've got a good point about the amount of computation done/returned at 
levels that people may not care about ... in my head, it seemed to make sense 
that the stats (and ranges, etc.) should be computed at every level just like 
the pivot count -- but at the intermediate levels that count is a free 
computation based on the size of the subset, and I suspect you are correct: 
many people may only care about having these new stats/ranges/queries on the 
leaves in the common case.

I'm not really following your suggested syntax though ... you seem to be saying 
that in the stats local param, commas would be used to delimit levels of 
the pivot (corresponding to the commas in the list of pivot fields), but then 
I'm not really clear what you mean about using \* (if that means all levels, 
how do you know what tag name to use?)

In the original examples I proposed, I was thinking that a comma separated list 
could refer to multiple tag names, similar to how the exclusions work -- i.e.:

{noformat}
facet.pivot={!stats=prices,ratings}category,manufacturer
facet.pivot={!stats=prices,pop}reseller
stats.field={!key=avg_list_price tag=prices mean=true}list_price
stats.field={!tag=ratings min=true max=true}user_rating
stats.field={!tag=ratings min=true max=true}editors_rating
stats.field={!tag=prices min=true max=true}sale_price
stats.field={!tag=pop}weekly_tweets
stats.field={!tag=pop}weekly_page_views
{noformat}

...would result in the category,manufacturer pivot having stats on 
avg_list_price, sale_price, user_rating, and editors_rating, while the 
reseller pivot would have stats on avg_list_price, sale_price, 
weekly_tweets, and weekly_page_views.

Thinking about it now though, if we support multiple tag names on stats.field, 
the same thing could be supported like this...

{noformat}
facet.pivot={!stats=cm_s}category,manufacturer
facet.pivot={!stats=r_s}reseller
stats.field={!key=avg_list_price tag=cm_s,r_s mean=true}list_price
stats.field={!tag=cm_s min=true max=true}user_rating
stats.field={!tag=cm_s min=true max=true}editors_rating
stats.field={!tag=cm_s,r_s min=true max=true}sale_price
stats.field={!tag=r_s}weekly_tweets
stats.field={!tag=r_s}weekly_page_views
{noformat}

So ... if we did that, then we could start using position info in a comma 
separated list of tag names to refer to where in the pivot depth those 
stats/ranges/queries should be computed ... the question I have is: should 
we? ... in the context of a facet.pivot param, will it be obvious to folks that 
there is a mapping between the commas in these local params and the commas in 
the body of the facet.pivot param, or will it confuse people who are used to 
seeing commas as just a way of delimiting multiple values in tag/ex params?

My opinion: no freaking clue at the moment ... need to let it soak in my brain.

{quote}

 decide rules & syntax for computing stats/ranges/queries only at certain 
 levels of a pivot
 --

 Key: SOLR-6663
 URL: https://issues.apache.org/jira/browse/SOLR-6663
 Project: Solr
  Issue Type: Sub-task
Reporter: Hoss Man

 [~smolloy] asked a great question in SOLR-6351...
 bq. One more question around this, which applies for SOLR-6353 and SOLR-4212 
 as well. 

[jira] [Created] (LUCENE-6028) Cut over DisjunctionScorer to oal.util.PriorityQueue

2014-10-27 Thread Adrien Grand (JIRA)
Adrien Grand created LUCENE-6028:


 Summary: Cut over DisjunctionScorer to oal.util.PriorityQueue
 Key: LUCENE-6028
 URL: https://issues.apache.org/jira/browse/LUCENE-6028
 Project: Lucene - Core
  Issue Type: Task
Reporter: Adrien Grand
Assignee: Adrien Grand
Priority: Minor
 Fix For: 5.0


DisjunctionScorer maintains its own implementation of a priority queue; I think 
it should just use oal.util.PriorityQueue?
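For context, the heap-of-scorers pattern looks roughly like this. The sketch is built on java.util.PriorityQueue for self-containment; org.apache.lucene.util.PriorityQueue is the class the issue proposes, and its updateTop() avoids the poll/add pair shown below. The SubIterator class and doc lists are hypothetical.

```java
import java.util.Comparator;
import java.util.PriorityQueue;

/** Illustrative sketch of a disjunction over sub-iterators kept in a heap
 *  ordered by current doc ID; the top of the heap is the current match. */
public class DisjunctionHeapSketch {
    static final int NO_MORE_DOCS = Integer.MAX_VALUE;

    /** Hypothetical stand-in for a sub-scorer: iterates a sorted doc-ID list. */
    static class SubIterator {
        int doc = -1;
        final int[] docs;
        int idx = -1;
        SubIterator(int... docs) { this.docs = docs; }
        int nextDoc() {
            idx++;
            return doc = (idx < docs.length ? docs[idx] : NO_MORE_DOCS);
        }
    }

    /** Position each sub-iterator on its first doc and build the heap. */
    public static PriorityQueue<SubIterator> heapOf(SubIterator... subs) {
        PriorityQueue<SubIterator> heap =
            new PriorityQueue<>(Comparator.comparingInt(s -> s.doc));
        for (SubIterator s : subs) { s.nextDoc(); heap.add(s); }
        return heap;
    }

    /** Advance the disjunction past its current doc; the new top is the next match. */
    public static int nextMatch(PriorityQueue<SubIterator> heap) {
        int current = heap.peek().doc;
        if (current == NO_MORE_DOCS) return current;  // all sub-iterators exhausted
        while (heap.peek().doc == current) {
            SubIterator s = heap.poll();  // oal.util.PriorityQueue would advance top()...
            s.nextDoc();
            heap.add(s);                  // ...and call updateTop() instead of poll/add
        }
        return heap.peek().doc;
    }
}
```

The design point of oal.util.PriorityQueue's updateTop() is exactly to skip the poll/add pair: the top element is mutated in place and sifted down once.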






[jira] [Updated] (LUCENE-6028) Cut over DisjunctionScorer to oal.util.PriorityQueue

2014-10-27 Thread Adrien Grand (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6028?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrien Grand updated LUCENE-6028:
-
Attachment: LUCENE-6028.patch

Here is a patch. I'm currently traveling but will run the luceneutil benchmarks 
when I come back to see if the patch has any impact.

 Cut over DisjunctionScorer to oal.util.PriorityQueue
 

 Key: LUCENE-6028
 URL: https://issues.apache.org/jira/browse/LUCENE-6028
 Project: Lucene - Core
  Issue Type: Task
Reporter: Adrien Grand
Assignee: Adrien Grand
Priority: Minor
 Fix For: 5.0

 Attachments: LUCENE-6028.patch


 DisjunctionScorer maintains its own implementation of a priority queue, I 
 think it should just use oal.util.PriorityQueue?






[jira] [Created] (LUCENE-6029) Add a default implementation for DISI.nextDoc()

2014-10-27 Thread Adrien Grand (JIRA)
Adrien Grand created LUCENE-6029:


 Summary: Add a default implementation for DISI.nextDoc()
 Key: LUCENE-6029
 URL: https://issues.apache.org/jira/browse/LUCENE-6029
 Project: Lucene - Core
  Issue Type: Task
Reporter: Adrien Grand
Priority: Minor
 Fix For: 5.0


In some cases, nextDoc() cannot really be faster than advance(docID() + 1), so 
we could make it the default implementation?

This is already how BitDocIdSet, NotDocIdSet, SloppyPhraseScorer and 
DocValuesDocIdSet implement nextDoc().
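A minimal sketch of the proposed default (the SimpleDISI/ArrayDISI names here are hypothetical, not the real DocIdSetIterator API): nextDoc() simply delegates to advance(docID() + 1), which is correct for any iterator whose advance() is no slower than a dedicated nextDoc() would be.

```java
// Hypothetical sketch of the proposed default -- not Lucene's
// DocIdSetIterator itself, just the same contract in miniature.
abstract class SimpleDISI {
  static final int NO_MORE_DOCS = Integer.MAX_VALUE;

  abstract int docID();             // current doc, or -1 before iteration
  abstract int advance(int target); // move to the first doc >= target

  // the proposed default: callers must not call this again after it
  // has returned NO_MORE_DOCS (docID() + 1 would overflow)
  int nextDoc() {
    return advance(docID() + 1);
  }
}

// minimal concrete iterator over a sorted int[] to exercise the default
class ArrayDISI extends SimpleDISI {
  private final int[] docs;
  private int idx = -1;

  ArrayDISI(int... docs) { this.docs = docs; }

  @Override int docID() {
    if (idx < 0) return -1;
    return idx >= docs.length ? NO_MORE_DOCS : docs[idx];
  }

  @Override int advance(int target) {
    while (++idx < docs.length) {
      if (docs[idx] >= target) return docs[idx];
    }
    idx = docs.length;
    return NO_MORE_DOCS;
  }
}

class Lucene6029Sketch {
  public static void main(String[] args) {
    SimpleDISI it = new ArrayDISI(3, 7, 12);
    StringBuilder sb = new StringBuilder();
    for (int d = it.nextDoc(); d != SimpleDISI.NO_MORE_DOCS; d = it.nextDoc()) {
      sb.append(d).append(' ');
    }
    System.out.println(sb.toString().trim()); // prints 3 7 12
  }
}
```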

https://issues.apache.org/jira/browse/LUCENE-6022?focusedCommentId=14181542&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14181542






[jira] [Updated] (LUCENE-6029) Add a default implementation for DISI.nextDoc()

2014-10-27 Thread Adrien Grand (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6029?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrien Grand updated LUCENE-6029:
-
Attachment: LUCENE-6029.patch

Here is a patch.

 Add a default implementation for DISI.nextDoc()
 ---

 Key: LUCENE-6029
 URL: https://issues.apache.org/jira/browse/LUCENE-6029
 Project: Lucene - Core
  Issue Type: Task
Reporter: Adrien Grand
Priority: Minor
 Fix For: 5.0

 Attachments: LUCENE-6029.patch


 In some cases, nextDoc() cannot really be faster than advance(docID() + 1), 
 so we could make it the default implementation?
 This is already how BitDocIdSet, NotDocIdSet, SloppyPhraseScorer and 
 DocValuesDocIdSet implement nextDoc().
 https://issues.apache.org/jira/browse/LUCENE-6022?focusedCommentId=14181542&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14181542






[JENKINS] Lucene-Solr-NightlyTests-5.x - Build # 660 - Still Failing

2014-10-27 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-5.x/660/

2 tests failed.
REGRESSION:  
org.apache.solr.cloud.DistribDocExpirationUpdateProcessorTest.testDistribSearch

Error Message:
There are still nodes recoverying - waited for 30 seconds

Stack Trace:
java.lang.AssertionError: There are still nodes recoverying - waited for 30 
seconds
at 
__randomizedtesting.SeedInfo.seed([B5A13DAD736935A:8ABC9DC2A069F366]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:178)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.waitForRecoveriesToFinish(AbstractFullDistribZkTestBase.java:840)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.waitForThingsToLevelOut(AbstractFullDistribZkTestBase.java:1459)
at 
org.apache.solr.cloud.DistribDocExpirationUpdateProcessorTest.doTest(DistribDocExpirationUpdateProcessorTest.java:79)
at 
org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:869)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 

[jira] [Commented] (SOLR-6058) Solr needs a new website

2014-10-27 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6058?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14186127#comment-14186127
 ] 

ASF subversion and git services commented on SOLR-6058:
---

Commit 1634755 from [~sar...@syr.edu] in branch 'cms/branches/solr_6058'
[ https://svn.apache.org/r1634755 ]

SOLR-6058: scroll to the right place when smoothly scrolling to the 'Learn more 
about Solr.' section

 Solr needs a new website
 

 Key: SOLR-6058
 URL: https://issues.apache.org/jira/browse/SOLR-6058
 Project: Solr
  Issue Type: Task
Reporter: Grant Ingersoll
Assignee: Grant Ingersoll
 Attachments: HTML.rar, SOLR-6058, SOLR-6058.location-fix.patchfile, 
 Solr_Icons.pdf, Solr_Logo_on_black.pdf, Solr_Logo_on_black.png, 
 Solr_Logo_on_orange.pdf, Solr_Logo_on_orange.png, Solr_Logo_on_white.pdf, 
 Solr_Logo_on_white.png, Solr_Styleguide.pdf


 Solr needs a new website:  better organization of content, less verbose, more 
 pleasing graphics, etc.






[jira] [Commented] (SOLR-6631) DistributedQueue spinning on calling zookeeper getChildren()

2014-10-27 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6631?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14186206#comment-14186206
 ] 

Hoss Man commented on SOLR-6631:


bq. I'm curious how we might be able to trigger a None event to get raised so 
I can add that to the unit test?

IIUC: EventType.None only happens when there are session level events -- 
ie: KeeperState in (Disconnected, SyncConnected (reconnected), Expired ).

bq. I think the fix would be to only set the event in the watcher if the type 
is not None.

I'm not very familiar with this code; perhaps a better approach would be to 
only proceed with this code if the EventType received is in an explicit set of 
_expected_ types?

whichever way makes sense, one trick i find very helpful in situations like 
this (a: dealing with enums from third party packages; b: wanting to behave 
according to partitions of the enum space) is to not just "do X if state in 
(A)" but "do X if state in (A), else no-op if state in (B), else ERROR" so 
that if someone upgrades zookeeper and there are suddenly all new EventTypes 
we don't expect, they aren't silently ignored.

the EnumSet.allOf() and EnumSet.complementOf() methods can also help write 
very targeted unit tests to alert you to unexpected values as soon as you 
upgrade.

So for example...

{code}
public class DistributedQueue {
  public static final EnumSet<EventType> EXPECTED_EVENTS = EnumSet.of(...);
  public static final EnumSet<EventType> IGNORED_EVENTS = EnumSet.of(...);
  ...
    if (EXPECTED_EVENTS.contains(event.getType())) {
      // do stuff
      ...
    } else if (IGNORED_EVENTS.contains(event.getType())) {
      // NO-OP
    } else {
      log.error("WTF EVENT IS THIS? " + ...);
    }
  ...
}

public class TestDistributedQueue {
  ...
  /**
   * if this test fails, don't change it - go audit these EnumSets and all
   * their usages
   */
  public void testSanityOfEventTypes() {
    EnumSet<EventType> known = EnumSet.copyOf(DistributedQueue.EXPECTED_EVENTS);
    known.addAll(DistributedQueue.IGNORED_EVENTS);

    EnumSet<EventType> unknown = EnumSet.complementOf(known);
    assertEquals("un-known EventTypes found, zk upgrade?",
        EnumSet.noneOf(EventType.class), unknown);
  }
}
{code}


 DistributedQueue spinning on calling zookeeper getChildren()
 

 Key: SOLR-6631
 URL: https://issues.apache.org/jira/browse/SOLR-6631
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Reporter: Jessica Cheng Mallet
Assignee: Timothy Potter
  Labels: solrcloud
 Attachments: SOLR-6631.patch


 The change from SOLR-6336 introduced a bug where now I'm stuck in a loop 
 making getChildren() request to zookeeper with this thread dump:
 {quote}
 Thread-51 [WAITING] CPU time: 1d 15h 0m 57s
 java.lang.Object.wait()
 org.apache.zookeeper.ClientCnxn.submitRequest(RequestHeader, Record, Record, 
 ZooKeeper$WatchRegistration)
 org.apache.zookeeper.ZooKeeper.getChildren(String, Watcher)
 org.apache.solr.common.cloud.SolrZkClient$6.execute() (2 recursive calls)
 org.apache.solr.common.cloud.ZkCmdExecutor.retryOperation(ZkOperation)
 org.apache.solr.common.cloud.SolrZkClient.getChildren(String, Watcher, 
 boolean)
 org.apache.solr.cloud.DistributedQueue.orderedChildren(Watcher)
 org.apache.solr.cloud.DistributedQueue.getChildren(long)
 org.apache.solr.cloud.DistributedQueue.peek(long)
 org.apache.solr.cloud.DistributedQueue.peek(boolean)
 org.apache.solr.cloud.Overseer$ClusterStateUpdater.run()
 java.lang.Thread.run()
 {quote}
 Looking at the code, I think the issue is that LatchChildWatcher#process 
 always sets the event to its member variable event, regardless of its type, 
 but the problem is that once the member event is set, the await no longer 
 waits. In this state, the while loop in getChildren(long), when called with 
 wait being Integer.MAX_VALUE will loop back, NOT wait at await because event 
 != null, but then it still will not get any children.
 {quote}
 while (true) \{
   if (!children.isEmpty()) break;
   watcher.await(wait == Long.MAX_VALUE ? DEFAULT_TIMEOUT : wait);
   if (watcher.getWatchedEvent() != null)
 \{ children = orderedChildren(null); \}
   if (wait != Long.MAX_VALUE) break;
 \}
 {quote}
 I think the fix would be to only set the event in the watcher if the type is 
 not None.






[jira] [Updated] (SOLR-6351) Let Stats Hang off of Pivots (via 'tag')

2014-10-27 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6351?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man updated SOLR-6351:
---
Attachment: SOLR-6351.patch


bq. ...Should we have a syntax to apply stats/queries/ranges only at specific 
levels in the pivot hierarchy?...

I spun this off into its own issue: SOLR-6663

For now, i think we should focus on requiring exactly one tag value in the 
stats local param -- i incorporated that into some of the code/tests as i was 
reviewing...


{panel:title=changes since last patch}

* PivotFacetProcessor
** brought back getSubsetSize - it gives us a nice optimization in the cases 
where the actual subset isn't needed.
** getStatsFields
*** renamed to getTaggedStatsFields so it's a little more clear it's a subset 
relative to this pivot
*** made static so its use of localparams wasn't so obtuse (i hate how many 
stateful variables there are in SimpleFacets)
*** added error throwing if/when the user specifies multiple tags separated by 
commas, since we want to reserve that for now and may use it for controlling 
where in the pivot tree stats are computed (SOLR-6663)
*** javadocs
** doPivots
*** refactored the new stats logic to all be in one place, and not to do 
anything (or allocate any extra objects) unless there are docs to compute stats 
over, and stats to compute for those docs

* SolrExampleTests
** merged testPivotFacetsStatsParsed into testPivotFacetsStats
*** testPivotFacetsStats didn't need any distinct setup from what 
testPivotFacetsStatsParsed had, so i just moved the queries/assertions done by 
testPivotFacetsStats into testPivotFacetsStatsParsed and simplified the name
*** removed the SOLR-6349 style local params from the stats.fields -- not 
supported yet, and if/when it is, this test asserts that more stats are there 
than what those params were asking for, so we don't want it to suddenly break 
in the future.
*** enhanced the test a bit to sanity check that these assertions still pass 
even when extra levels of pivots are requested.
** merged testPivotFacetsStatsNotSupportedBoolean + 
testPivotFacetsStatsNotSupportedString = testPivotFacetsStatsNotSupported
*** "String" was not accurate; "TextField" is more specific
*** also simplified the setup - no need for so many docs, and no need for it to 
be different between the two different checks
*** added ignoreException -- the junit logs shouldn't mislead the user when an 
exception is expected
*** added some additional assertions about the error messages
*** added another assertion for multiple tags separated by commas (SOLR-6663)

* TestCloudPivots
** replaced the nocommit comment with a comment referring to SOLR-6663
{panel}




 Let Stats Hang off of Pivots (via 'tag')
 

 Key: SOLR-6351
 URL: https://issues.apache.org/jira/browse/SOLR-6351
 Project: Solr
  Issue Type: Sub-task
Reporter: Hoss Man
 Attachments: SOLR-6351.patch, SOLR-6351.patch, SOLR-6351.patch, 
 SOLR-6351.patch, SOLR-6351.patch, SOLR-6351.patch, SOLR-6351.patch, 
 SOLR-6351.patch, SOLR-6351.patch, SOLR-6351.patch, SOLR-6351.patch, 
 SOLR-6351.patch


 The goal here is basically to flip the notion of stats.facet on its head, so 
 that instead of asking the stats component to also do some faceting 
 (something that's never worked well with the variety of field types and has 
 never worked in distributed mode) we instead ask the PivotFacet code to 
 compute some stats X for each leaf in a pivot.  We'll do this with the 
 existing {{stats.field}} params, but we'll leverage the {{tag}} local param 
 of the {{stats.field}} instances to be able to associate which stats we want 
 hanging off of which {{facet.pivot}}
 Example...
 {noformat}
 facet.pivot={!stats=s1}category,manufacturer
 stats.field={!key=avg_price tag=s1 mean=true}price
 stats.field={!tag=s1 min=true max=true}user_rating
 {noformat}
 ...with the request above, in addition to computing the min/max user_rating 
 and mean price (labeled avg_price) over the entire result set, the 
 PivotFacet component will also include those stats for every node of the tree 
 it builds up when generating a pivot of the fields category,manufacturer





