[JENKINS] Lucene-Solr-5.x-Windows (32bit/jdk1.8.0_40) - Build # 4554 - Still Failing!

2015-04-13 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Windows/4554/
Java: 32bit/jdk1.8.0_40 -client -XX:+UseParallelGC

4 tests failed.
FAILED:  org.apache.solr.cloud.ChaosMonkeyNothingIsSafeTest.test

Error Message:
There were too many update fails (28 > 20) - we expect it can happen, but 
shouldn't easily

Stack Trace:
java.lang.AssertionError: There were too many update fails (28 > 20) - we 
expect it can happen, but shouldn't easily
at 
__randomizedtesting.SeedInfo.seed([1005453B13ED241B:98517AE1BD1149E3]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertFalse(Assert.java:68)
at 
org.apache.solr.cloud.ChaosMonkeyNothingIsSafeTest.test(ChaosMonkeyNothingIsSafeTest.java:230)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:960)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:935)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
  

[jira] [Updated] (SOLR-7384) Delete-by-id with _route_ parameter fails on replicas for collections with implicit router

2015-04-13 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7384?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar updated SOLR-7384:

Affects Version/s: 5.1

 Delete-by-id with _route_ parameter fails on replicas for collections with 
 implicit router
 --

 Key: SOLR-7384
 URL: https://issues.apache.org/jira/browse/SOLR-7384
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 5.1
Reporter: Shalin Shekhar Mangar
Assignee: Shalin Shekhar Mangar
 Fix For: Trunk, 5.2

 Attachments: FullSolrCloudDistribCmdsTest-2.log, 
 FullSolrCloudDistribCmdsTest.log


 The FullSolrCloudDistribCmdsTest test has been failing quite regularly on 
 jenkins.
 {quote}
 Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/12286/
 Java: 32bit/jdk1.9.0-ea-b54 -server -XX:+UseConcMarkSweepGC
 1 tests failed.
 FAILED:  org.apache.solr.cloud.FullSolrCloudDistribCmdsTest.test
 Error Message:
 Error from server at 
 http://127.0.0.1:44672/implicit_collection_without_routerfield_shard1_replica1:
  no servers hosting shard:
 Stack Trace:
 org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
 from server at 
 http://127.0.0.1:44672/implicit_collection_without_routerfield_shard1_replica1:
  no servers hosting shard:
 at 
 __randomizedtesting.SeedInfo.seed([944EEE25A6B2D153:1C1AD1FF084EBCAB]:0)
 at 
 org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:557)
 at 
 org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:234)
 at 
 org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:226)
 at 
 org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:135)
 {quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-7384) Delete-by-id with _route_ parameter fails on replicas for collections with implicit router

2015-04-13 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7384?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar updated SOLR-7384:

Description: 
The FullSolrCloudDistribCmdsTest test has been failing quite regularly on 
jenkins.

{quote}
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/12286/
Java: 32bit/jdk1.9.0-ea-b54 -server -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  org.apache.solr.cloud.FullSolrCloudDistribCmdsTest.test

Error Message:
Error from server at 
http://127.0.0.1:44672/implicit_collection_without_routerfield_shard1_replica1: 
no servers hosting shard:

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at 
http://127.0.0.1:44672/implicit_collection_without_routerfield_shard1_replica1: 
no servers hosting shard:
at 
__randomizedtesting.SeedInfo.seed([944EEE25A6B2D153:1C1AD1FF084EBCAB]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:557)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:234)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:226)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:135)
{quote}

  was:
This has been failing quite regularly on jenkins.

{quote}
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/12286/
Java: 32bit/jdk1.9.0-ea-b54 -server -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  org.apache.solr.cloud.FullSolrCloudDistribCmdsTest.test

Error Message:
Error from server at 
http://127.0.0.1:44672/implicit_collection_without_routerfield_shard1_replica1: 
no servers hosting shard:

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at 
http://127.0.0.1:44672/implicit_collection_without_routerfield_shard1_replica1: 
no servers hosting shard:
at 
__randomizedtesting.SeedInfo.seed([944EEE25A6B2D153:1C1AD1FF084EBCAB]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:557)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:234)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:226)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:135)
{quote}


 Delete-by-id with _route_ parameter fails on replicas for collections with 
 implicit router
 --

 Key: SOLR-7384
 URL: https://issues.apache.org/jira/browse/SOLR-7384
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 5.1
Reporter: Shalin Shekhar Mangar
Assignee: Shalin Shekhar Mangar
 Fix For: Trunk, 5.2

 Attachments: FullSolrCloudDistribCmdsTest-2.log, 
 FullSolrCloudDistribCmdsTest.log


 The FullSolrCloudDistribCmdsTest test has been failing quite regularly on 
 jenkins.
 {quote}
 Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/12286/
 Java: 32bit/jdk1.9.0-ea-b54 -server -XX:+UseConcMarkSweepGC
 1 tests failed.
 FAILED:  org.apache.solr.cloud.FullSolrCloudDistribCmdsTest.test
 Error Message:
 Error from server at 
 http://127.0.0.1:44672/implicit_collection_without_routerfield_shard1_replica1:
  no servers hosting shard:
 Stack Trace:
 org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
 from server at 
 http://127.0.0.1:44672/implicit_collection_without_routerfield_shard1_replica1:
  no servers hosting shard:
 at 
 __randomizedtesting.SeedInfo.seed([944EEE25A6B2D153:1C1AD1FF084EBCAB]:0)
 at 
 org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:557)
 at 
 org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:234)
 at 
 org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:226)
 at 
 org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:135)
 {quote}






[jira] [Updated] (SOLR-7385) The clusterstatus API does not return the config name for a collection

2015-04-13 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7385?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar updated SOLR-7385:

Attachment: SOLR-7385.patch

Simple patch. I'll add tests for a few more cases before committing.

 The clusterstatus API does not return the config name for a collection
 --

 Key: SOLR-7385
 URL: https://issues.apache.org/jira/browse/SOLR-7385
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 4.10.4, 5.0, 5.1
Reporter: Shalin Shekhar Mangar
Assignee: Shalin Shekhar Mangar
Priority: Minor
 Fix For: Trunk, 5.2

 Attachments: SOLR-7385.patch


 The config name used while creating the collection is not returned by the 
 'clusterstatus' API.
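
 For context, 'clusterstatus' is a Collections API action. A minimal sketch of composing such a request (the collection name is an illustrative placeholder; the issue is that the response currently omits the config name):

 {code}
from urllib.parse import urlencode

def clusterstatus_url(base_url, collection=None):
    """Build a Collections API CLUSTERSTATUS request URL.

    Per this issue, the response currently omits the config name the
    collection was created with; the patch is meant to add it.
    """
    params = {"action": "CLUSTERSTATUS", "wt": "json"}
    if collection:
        params["collection"] = collection  # limit status to one collection
    return f"{base_url}/admin/collections?{urlencode(params)}"

url = clusterstatus_url("http://localhost:8983/solr", "mycollection")
 {code}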






[jira] [Created] (SOLR-7386) Ability to Circumvent Content-Type Requirement

2015-04-13 Thread Konrad Slepoy (JIRA)
Konrad Slepoy created SOLR-7386:
---

 Summary: Ability to Circumvent Content-Type Requirement
 Key: SOLR-7386
 URL: https://issues.apache.org/jira/browse/SOLR-7386
 Project: Solr
  Issue Type: Improvement
Reporter: Konrad Slepoy


There is no way to circumvent a content-type requirement for a request. 

This came about because my team passes a POST request body which should not be 
touched by SOLR and is used by our native application. However, with the new 
inclusions in SOLR/Heliosearch it seems like SOLR tries to parse the body in a 
new way.
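
As a rough illustration of the reporter's scenario (plain standard-library HTTP usage, not a Solr API), this amounts to posting an opaque body with an explicit Content-Type that the server should not reinterpret:

{code}
import urllib.request

def raw_post(url, body: bytes, content_type: str = "application/octet-stream"):
    """Prepare a POST whose body should be passed through untouched.

    Declaring an explicit, opaque Content-Type is the conventional way to
    tell a server not to parse the payload; the issue asks for a way to
    make Solr honor that instead of trying to parse the body.
    """
    return urllib.request.Request(
        url,
        data=body,
        headers={"Content-Type": content_type},
        method="POST",
    )

req = raw_post("http://localhost:8983/solr/collection1/select", b"opaque-payload")
{code}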






[jira] [Commented] (SOLR-7384) Delete-by-id with _route_ parameter fails on replicas for collections with implicit router

2015-04-13 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7384?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14492874#comment-14492874
 ] 

ASF subversion and git services commented on SOLR-7384:
---

Commit 1673263 from sha...@apache.org in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1673263 ]

SOLR-7384: Disable the failing tests until the root cause is fixed

 Delete-by-id with _route_ parameter fails on replicas for collections with 
 implicit router
 --

 Key: SOLR-7384
 URL: https://issues.apache.org/jira/browse/SOLR-7384
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 5.1
Reporter: Shalin Shekhar Mangar
Assignee: Shalin Shekhar Mangar
 Fix For: Trunk, 5.2

 Attachments: FullSolrCloudDistribCmdsTest-2.log, 
 FullSolrCloudDistribCmdsTest.log


 The FullSolrCloudDistribCmdsTest test has been failing quite regularly on 
 jenkins. Some of those failures are spurious but there is an underlying bug 
 that delete-by-id requests with _route_ parameter on a collection with 
 implicit router, fails on replicas because of a missing _version_ field.
 {quote}
 Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/12286/
 Java: 32bit/jdk1.9.0-ea-b54 -server -XX:+UseConcMarkSweepGC
 1 tests failed.
 FAILED:  org.apache.solr.cloud.FullSolrCloudDistribCmdsTest.test
 Error Message:
 Error from server at 
 http://127.0.0.1:44672/implicit_collection_without_routerfield_shard1_replica1:
  no servers hosting shard:
 Stack Trace:
 org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
 from server at 
 http://127.0.0.1:44672/implicit_collection_without_routerfield_shard1_replica1:
  no servers hosting shard:
 at 
 __randomizedtesting.SeedInfo.seed([944EEE25A6B2D153:1C1AD1FF084EBCAB]:0)
 at 
 org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:557)
 at 
 org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:234)
 at 
 org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:226)
 at 
 org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:135)
 {quote}
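
 For reference, the failing operation from the issue title is a delete-by-id carrying a {{_route_}} parameter. A hypothetical sketch of composing such an update request URL (collection, shard name, and base URL are illustrative placeholders; the delete-by-id body itself would go in the POST payload):

 {code}
from urllib.parse import urlencode

def delete_by_id_url(base_url, collection, route=None):
    """Build an /update URL for a delete-by-id request.

    With an implicit router, `_route_` names the shard holding the
    document; per this issue, the forwarded delete then fails on
    replicas because of a missing _version_ field.
    """
    params = {"commit": "true"}
    if route is not None:
        params["_route_"] = route  # e.g. "shard1"
    return f"{base_url}/{collection}/update?{urlencode(params)}"

url = delete_by_id_url("http://127.0.0.1:8983/solr", "mycollection", route="shard1")
 {code}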






[JENKINS] Lucene-Solr-Tests-5.x-Java7 - Build # 2940 - Still Failing

2015-04-13 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-5.x-Java7/2940/

3 tests failed.
REGRESSION:  org.apache.solr.cloud.FullSolrCloudDistribCmdsTest.test

Error Message:
expected:<1> but was:<2>

Stack Trace:
java.lang.AssertionError: expected:<1> but was:<2>
at 
__randomizedtesting.SeedInfo.seed([5B65C99EBCDDD5BB:D331F6441221B843]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at org.junit.Assert.assertEquals(Assert.java:456)
at 
org.apache.solr.cloud.FullSolrCloudDistribCmdsTest.testDeleteByIdCompositeRouterWithRouterField(FullSolrCloudDistribCmdsTest.java:383)
at 
org.apache.solr.cloud.FullSolrCloudDistribCmdsTest.test(FullSolrCloudDistribCmdsTest.java:146)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:960)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:935)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 

Re: Examples in JIRA issues & CHANGES messages

2015-04-13 Thread Cassandra Targett
+1. Ref Guide updates would be faster, easier, and more accurate if there
were more description of the changes introduced by each patch.

On Mon, Apr 13, 2015 at 5:43 AM, Shalin Shekhar Mangar 
shalinman...@gmail.com wrote:

 +1 to everything.

 It is also nice to give more details into what changed between patches.
 Unless you use review board, this is sometimes the only way to understand
 the changes between two patches. Especially, please call out any hacks,
 gotchas and todo items that you may have thought about when writing the
 code. This is not just for people following the development but also for
 future contributors who may have to debug your code and need some
 historical context to understand the design decisions. Finally, if someone
 has given you review comments, please be kind enough to point out if/how
 they've been addressed.

 On Sun, Apr 12, 2015 at 12:21 AM, Yonik Seeley ysee...@gmail.com wrote:

 Devs & contributors, please remember to be nice to other contributors
 and describe what your patch is trying to do in the JIRA issue.

 For patches that add/change an API, that means giving an example or
 specifying what the API is.  People should not have to read through
 source code to try and reconstruct what an API actually looks like in
 order to give feedback on a proposed API.

 Also, for CHANGES, please consider what it will take for others to
 understand the actual change.  Don't automatically just use the JIRA
 description.
  - if you added a new parameter, then put that parameter in the
 description
  - where appropriate, put a short/concise example (not more than a few
 lines though) - when to do this is more subjective, but please think
 about it for very commonly used APIs.


 For the sake of example, I'll pick on the first feature added for 5.2:

 from CHANGES.txt:
 '''
 New Features
 --
 * SOLR-6637: Solr should have a way to restore a core from a backed up
 index.
 '''

 So it's saying we *should* have a feature (as opposed to saying we
 actually now do have a feature, and what that feature is), and doesn't
 give you any clue how that feature was actually implemented, or how
 you could go about finding out.

 So next, I go to SOLR-6637 to try and see what this feature actually
 consists of.
 Unfortunately, there's never an example of how someone is supposed to
 try this feature out.  We're setting a high bar for contribution from
 others.

 So next, I use the source to try and reconstruct what the API actually
 looks like.
 I find what looks like will be the right test class:

 https://svn.apache.org/viewvc/lucene/dev/trunk/solr/core/src/test/org/apache/solr/handler/TestRestoreCore.java?view=markup

 Of course, the tests aren't going to directly give me what a command
 URL would look like, but this is the closest thing:
 TestReplicationHandlerBackup.runBackupCommand(masterJetty,
 ReplicationHandler.CMD_RESTORE, params);

 And continue following the source just to be able to construct a
 simple example like I gave here:

 http://yonik.com/solr-5-2/

 (so I finally tried it out, and it works... yay ;-)
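
 For what it's worth, the command URL Yonik reconstructs boils down to hitting the core's ReplicationHandler with command=restore (CMD_RESTORE in the test he cites). A sketch, with the core and backup names as placeholder assumptions:

```python
from urllib.parse import urlencode

def restore_url(base_url, core, backup_name=None):
    """Build a ReplicationHandler restore request URL.

    `name` selects a specific named backup; when omitted, the handler
    presumably falls back to the most recent backup.
    """
    params = {"command": "restore"}
    if backup_name:
        params["name"] = backup_name
    return f"{base_url}/{core}/replication?{urlencode(params)}"

url = restore_url("http://localhost:8983/solr", "techproducts", "mybackup")
```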

 So to recap:
 - Consider CHANGES documentation.
 - Describe *what* you are trying to implement in your JIRA issues, and
 give API examples where appropriate.

 -Yonik





 --
 Regards,
 Shalin Shekhar Mangar.



[jira] [Updated] (SOLR-7386) Ability to Circumvent Content-Type Requirement

2015-04-13 Thread Konrad Slepoy (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7386?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konrad Slepoy updated SOLR-7386:

Description: 
There is no way to circumvent a content-type requirement for a POST request. 

This came about because my team passes a POST request body which should not be 
touched by SOLR and is used by our native application. However, with the new 
inclusions in SOLR/Heliosearch it seems like SOLR tries to parse the body in a 
new way.

  was:
There is no way to circumvent a content-type requirement for a request. 

This came about because my team passes a POST request body which should not be 
touched by SOLR and is used by our native application. However, with the new 
inclusions in SOLR/Heliosearch it seems like SOLR tries to parse the body in a 
new way.


 Ability to Circumvent Content-Type Requirement
 --

 Key: SOLR-7386
 URL: https://issues.apache.org/jira/browse/SOLR-7386
 Project: Solr
  Issue Type: Improvement
Reporter: Konrad Slepoy

 There is no way to circumvent a content-type requirement for a POST request. 
 This came about because my team passes a POST request body which should not 
 be touched by SOLR and is used by our native application. However, with the 
 new inclusions in SOLR/Heliosearch it seems like SOLR tries to parse the body 
 in a new way.






[jira] [Updated] (SOLR-7384) Delete-by-id with _route_ parameter fails on replicas for collections with implicit router

2015-04-13 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7384?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar updated SOLR-7384:

Summary: Delete-by-id with _route_ parameter fails on replicas for 
collections with implicit router  (was: FullSolrCloudDistribCmdsTest failures 
on jenkins)

 Delete-by-id with _route_ parameter fails on replicas for collections with 
 implicit router
 --

 Key: SOLR-7384
 URL: https://issues.apache.org/jira/browse/SOLR-7384
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Reporter: Shalin Shekhar Mangar
Assignee: Shalin Shekhar Mangar
 Fix For: Trunk, 5.2

 Attachments: FullSolrCloudDistribCmdsTest-2.log, 
 FullSolrCloudDistribCmdsTest.log


 This has been failing quite regularly on jenkins.
 {quote}
 Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/12286/
 Java: 32bit/jdk1.9.0-ea-b54 -server -XX:+UseConcMarkSweepGC
 1 tests failed.
 FAILED:  org.apache.solr.cloud.FullSolrCloudDistribCmdsTest.test
 Error Message:
 Error from server at 
 http://127.0.0.1:44672/implicit_collection_without_routerfield_shard1_replica1:
  no servers hosting shard:
 Stack Trace:
 org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
 from server at 
 http://127.0.0.1:44672/implicit_collection_without_routerfield_shard1_replica1:
  no servers hosting shard:
 at 
 __randomizedtesting.SeedInfo.seed([944EEE25A6B2D153:1C1AD1FF084EBCAB]:0)
 at 
 org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:557)
 at 
 org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:234)
 at 
 org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:226)
 at 
 org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:135)
 {quote}






[jira] [Commented] (SOLR-5772) duplicate documents between solr block join documents and normal document

2015-04-13 Thread Mikhail Khludnev (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5772?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14492921#comment-14492921
 ] 

Mikhail Khludnev commented on SOLR-5772:


Mixing _blocks_ and _normal_ docs is not supported and leads to undefined 
behavior; this was discussed in many dupes of SOLR-5211.
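
A toy model (plain Python, not Solr internals) of why the duplicate appears: a normal update overwrites by uniqueKey, while a block update deletes only by the parent's key and appends the whole block, so a previously indexed standalone child with the same id is never removed:

```python
def add_doc(index, doc):
    # Normal update: delete any existing doc with the same uniqueKey,
    # then add the new one.
    index[:] = [d for d in index if d["id"] != doc["id"]]
    index.append(doc)

def add_block(index, docs, parent_id):
    # Block update: deletes by the parent's uniqueKey only, then appends
    # children + parent atomically; children are not checked against
    # previously indexed standalone documents.
    index[:] = [d for d in index if d["id"] != parent_id]
    index.extend(docs)

index = []
add_doc(index, {"id": "file1", "size_i": 100})           # standalone file1
add_block(index, [{"id": "file1", "size_i": 400},        # child file1
                  {"id": "dir1"}], parent_id="dir1")     # parent last
ids = [d["id"] for d in index]                           # "file1" appears twice
```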

 duplicate documents between solr block join documents and normal document
 -

 Key: SOLR-5772
 URL: https://issues.apache.org/jira/browse/SOLR-5772
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.5.1, 4.6.1
Reporter: Xiang Xiao
  Labels: blockjoin

 If I first upload this document to Solr:
 {code:xml}
 <?xml version="1.0"?>
 <add>
   <doc boost="1.0">
     <field name="id">file1</field>
     <field name="size_i">100</field>
   </doc>
 </add>
 {code}
 and then this one:
 {code:xml}
 <?xml version="1.0"?>
 <add>
   <doc boost="1.0">
     <field name="id">dir1</field>
     <doc boost="1.0">
       <field name="id">file1</field>
       <field name="size_i">400</field>
     </doc>
   </doc>
 </add>
 {code}
 I will get two file documents with the same id:
 http://localhost:8983/solr/select?q=*:*&fq=id:file1
 In the config file, I have:
 {code:xml}
 <field name="id" type="string" indexed="true" stored="true" required="true" 
 multiValued="false" />
 <dynamicField name="*_i" type="int" indexed="true" stored="true"/>
 <uniqueKey>id</uniqueKey>
 {code}
 I would expect the first file document to be overridden by the block join 
 document.






[JENKINS] Lucene-Solr-5.x-Linux (32bit/jdk1.8.0_60-ea-b06) - Build # 12123 - Still Failing!

2015-04-13 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/12123/
Java: 32bit/jdk1.8.0_60-ea-b06 -client -XX:+UseConcMarkSweepGC

3 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.request.TestWriterPerf

Error Message:
1

Stack Trace:
java.lang.AssertionError: 1
at __randomizedtesting.SeedInfo.seed([488B63D091E6C3B8]:0)
at 
org.apache.solr.core.CachingDirectoryFactory.close(CachingDirectoryFactory.java:201)
at org.apache.solr.core.SolrCore.close(SolrCore.java:1176)
at org.apache.solr.core.SolrCores.close(SolrCores.java:117)
at org.apache.solr.core.CoreContainer.shutdown(CoreContainer.java:378)
at org.apache.solr.util.TestHarness.close(TestHarness.java:359)
at org.apache.solr.SolrTestCaseJ4.deleteCore(SolrTestCaseJ4.java:704)
at org.apache.solr.SolrTestCaseJ4.afterClass(SolrTestCaseJ4.java:230)
at sun.reflect.GeneratedMethodAccessor42.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:799)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at java.lang.Thread.run(Thread.java:745)


FAILED:  org.apache.solr.cloud.FullSolrCloudDistribCmdsTest.test

Error Message:
expected:<1> but was:<2>

Stack Trace:
java.lang.AssertionError: expected:<1> but was:<2>
at __randomizedtesting.SeedInfo.seed([488B63D091E6C3B8:C0DF5C0A3F1AAE40]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at org.junit.Assert.assertEquals(Assert.java:456)
at org.apache.solr.cloud.FullSolrCloudDistribCmdsTest.testDeleteByIdImplicitRouter(FullSolrCloudDistribCmdsTest.java:247)
at org.apache.solr.cloud.FullSolrCloudDistribCmdsTest.test(FullSolrCloudDistribCmdsTest.java:144)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:960)
at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:935)
at 

Re: API for retrieving the configuration a collection was created with

2015-04-13 Thread Shai Erera
Thanks!
On Apr 13, 2015 7:18 PM, Shalin Shekhar Mangar shalinman...@gmail.com
wrote:

 I opened https://issues.apache.org/jira/browse/SOLR-7385

 On Mon, Apr 13, 2015 at 8:08 PM, Shai Erera ser...@gmail.com wrote:

 Thanks Shalin, I don't know how I missed it :). I see that besides just
 reading the configuration name, it also checks that the config exists, which is nice.

 We should add this information to the cluster status API


 +1!

 Shai

 On Mon, Apr 13, 2015 at 5:08 PM, Shalin Shekhar Mangar 
 shalinman...@gmail.com wrote:

 You can use the oddly named ZkStateReader.readConfigName(String
 collection) to get this information. We should add this information to the
 cluster status API.

 On Mon, Apr 13, 2015 at 6:58 PM, Shai Erera ser...@gmail.com wrote:

 Hi

 I was looking for some API (Java or REST) for retrieving the
 configuration name with which a collection was created. It doesn't appear
 as part of the cluster status information, nor is it part of the DocCollection
 class.

 I eventually wrote this code:

   /** Returns a collection's configuration name, or {@code null} if the
 collection doesn't exist. */
   public static String getCollectionConfigName(ZkStateReader
 zkStateReader, String collection) {
 try {
   final String collectionZkNode = ZkStateReader.COLLECTIONS_ZKNODE
 + / + collection;
   final byte[] data =
 zkStateReader.getZkClient().getData(collectionZkNode, null, null, true);
   final ZkNodeProps nodeProps = ZkNodeProps.load(data);
   final String collectionConfigName =
 nodeProps.getStr(ZkStateReader.CONFIGNAME_PROP);
   return collectionConfigName;
 } catch (NoNodeException e) {
   return null;
 } catch (KeeperException | InterruptedException e) {
   throw Throwables.propagate(e);
 }
   }

 This works but feels hacky as none of this is documented anywhere. So
 if anyone is aware of an existing class/method which does that, even if
 it's not truly public API, I'd appreciate a pointer.

 Also, would it make sense to add this information to DocCollection,
 e.g. docCollection.getConfigName()?

 Shai




 --
 Regards,
 Shalin Shekhar Mangar.





 --
 Regards,
 Shalin Shekhar Mangar.



[JENKINS] Lucene-Solr-trunk-MacOSX (64bit/jdk1.8.0) - Build # 2171 - Still Failing!

2015-04-13 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-MacOSX/2171/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseG1GC

4 tests failed.
FAILED:  org.apache.solr.cloud.FullSolrCloudDistribCmdsTest.test

Error Message:
expected:<1> but was:<2>

Stack Trace:
java.lang.AssertionError: expected:<1> but was:<2>
at __randomizedtesting.SeedInfo.seed([6B8FF6AC3304680D:E3DBC9769DF805F5]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at org.junit.Assert.assertEquals(Assert.java:456)
at org.apache.solr.cloud.FullSolrCloudDistribCmdsTest.testDeleteByIdImplicitRouter(FullSolrCloudDistribCmdsTest.java:247)
at org.apache.solr.cloud.FullSolrCloudDistribCmdsTest.test(FullSolrCloudDistribCmdsTest.java:144)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:960)
at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:935)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845)
at com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747)
at com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781)
at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 

[JENKINS] Lucene-Solr-5.x-Linux (32bit/jdk1.8.0_40) - Build # 12124 - Still Failing!

2015-04-13 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/12124/
Java: 32bit/jdk1.8.0_40 -server -XX:+UseSerialGC

3 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.request.TestWriterPerf

Error Message:
1

Stack Trace:
java.lang.AssertionError: 1
at __randomizedtesting.SeedInfo.seed([21905D436C6C23B2]:0)
at org.apache.solr.core.CachingDirectoryFactory.close(CachingDirectoryFactory.java:201)
at org.apache.solr.core.SolrCore.close(SolrCore.java:1176)
at org.apache.solr.core.SolrCores.close(SolrCores.java:117)
at org.apache.solr.core.CoreContainer.shutdown(CoreContainer.java:378)
at org.apache.solr.util.TestHarness.close(TestHarness.java:359)
at org.apache.solr.SolrTestCaseJ4.deleteCore(SolrTestCaseJ4.java:704)
at org.apache.solr.SolrTestCaseJ4.afterClass(SolrTestCaseJ4.java:230)
at sun.reflect.GeneratedMethodAccessor33.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:799)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at java.lang.Thread.run(Thread.java:745)


FAILED:  org.apache.solr.cloud.FullSolrCloudDistribCmdsTest.test

Error Message:
expected:<1> but was:<2>

Stack Trace:
java.lang.AssertionError: expected:<1> but was:<2>
at __randomizedtesting.SeedInfo.seed([21905D436C6C23B2:A9C46299C2904E4A]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at org.junit.Assert.assertEquals(Assert.java:456)
at org.apache.solr.cloud.FullSolrCloudDistribCmdsTest.testDeleteByIdImplicitRouter(FullSolrCloudDistribCmdsTest.java:247)
at org.apache.solr.cloud.FullSolrCloudDistribCmdsTest.test(FullSolrCloudDistribCmdsTest.java:144)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:960)
at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:935)
at 

[JENKINS] Lucene-Solr-trunk-Linux (32bit/jdk1.9.0-ea-b54) - Build # 12292 - Still Failing!

2015-04-13 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/12292/
Java: 32bit/jdk1.9.0-ea-b54 -server -XX:+UseConcMarkSweepGC

2 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.request.TestWriterPerf

Error Message:
1

Stack Trace:
java.lang.AssertionError: 1
at __randomizedtesting.SeedInfo.seed([EF9045A27F496A9D]:0)
at org.apache.solr.core.CachingDirectoryFactory.close(CachingDirectoryFactory.java:201)
at org.apache.solr.core.SolrCore.close(SolrCore.java:1176)
at org.apache.solr.core.SolrCores.close(SolrCores.java:117)
at org.apache.solr.core.CoreContainer.shutdown(CoreContainer.java:378)
at org.apache.solr.util.TestHarness.close(TestHarness.java:359)
at org.apache.solr.SolrTestCaseJ4.deleteCore(SolrTestCaseJ4.java:704)
at org.apache.solr.SolrTestCaseJ4.afterClass(SolrTestCaseJ4.java:230)
at sun.reflect.GeneratedMethodAccessor34.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:502)
at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:799)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at java.lang.Thread.run(Thread.java:745)


FAILED:  org.apache.solr.request.TestWriterPerf.testPerf

Error Message:


Stack Trace:
java.lang.NullPointerException
at 
__randomizedtesting.SeedInfo.seed([EF9045A27F496A9D:CEA7A42517C741AC]:0)
at 
org.apache.solr.highlight.DefaultSolrHighlighter.doHighlightingByHighlighter(DefaultSolrHighlighter.java:453)
at 
org.apache.solr.highlight.DefaultSolrHighlighter.doHighlighting(DefaultSolrHighlighter.java:394)
at 
org.apache.solr.handler.component.HighlightComponent.process(HighlightComponent.java:143)
at 
org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:253)
at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:143)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:1988)
at 
org.apache.solr.request.TestWriterPerf.getResponse(TestWriterPerf.java:96)
at 
org.apache.solr.request.TestWriterPerf.doPerf(TestWriterPerf.java:105)
at 
org.apache.solr.request.TestWriterPerf.testPerf(TestWriterPerf.java:178)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:502)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
at 

[jira] [Commented] (SOLR-7361) Main Jetty thread blocked by core loading delays HTTP listener from binding if core loading is slow

2015-04-13 Thread Timothy Potter (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7361?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14492853#comment-14492853
 ] 

Timothy Potter commented on SOLR-7361:
--

[~damienka] All good questions, but this ticket is not intended to address any 
of those and it sounds like you're tackling them as part of SOLR-7191. I'm 
close to posting a patch for this, which will only address the problem of 
blocking the main thread (which activates the Jetty listener) during core 
loading.

 Main Jetty thread blocked by core loading delays HTTP listener from binding 
 if core loading is slow
 ---

 Key: SOLR-7361
 URL: https://issues.apache.org/jira/browse/SOLR-7361
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Reporter: Timothy Potter
Assignee: Timothy Potter

 During server startup, the CoreContainer uses an ExecutorService to load 
 cores in multiple background threads, but then blocks until all cores are loaded; 
 see CoreContainer#load around line 290 on trunk (invokeAll). From the 
 JavaDoc on that method, we have:
 {quote}
 Executes the given tasks, returning a list of Futures holding their status 
 and results when all complete. Future.isDone() is true for each element of 
 the returned list.
 {quote}
 In other words, this is a blocking call.
 This delays the Jetty HTTP listener from binding and accepting requests until 
 all cores are loaded. Do we need to block the main thread?
 Also, prior to this happening, the node is registered as a live node in ZK, 
 which makes it a candidate for receiving requests from the Overseer, such as 
 to service a create collection request. The problem of course is that the 
 node listed in /live_nodes isn't accepting requests yet. So we either need to 
 unblock the main thread during server loading or maybe wait longer before we 
 register as a live node ... not sure which is the better way forward?
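
The blocking behavior described above can be sketched in isolation (plain JDK only, no Solr classes; `CoreLoadSketch` and its method names are made up for illustration, not the actual CoreContainer API): `invokeAll` returns only once every task's `Future.isDone()` is true, whereas submitting tasks individually returns immediately and would let the caller go on to bind the HTTP listener.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class CoreLoadSketch {

    // Blocking style, as in CoreContainer#load: invokeAll returns only when
    // every task's Future.isDone() is true, so the caller waits for all "cores".
    static int loadAllBlocking(List<Callable<Integer>> coreLoaders) {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        int loaded = 0;
        try {
            for (Future<Integer> f : pool.invokeAll(coreLoaders)) {
                loaded += f.get(); // every future is already done here
            }
        } catch (InterruptedException | ExecutionException e) {
            throw new RuntimeException(e);
        } finally {
            pool.shutdown();
        }
        return loaded;
    }

    // Non-blocking alternative: submit each task and return immediately,
    // so the caller (e.g. the thread that binds the Jetty listener) can proceed.
    static List<Future<Integer>> loadAllAsync(ExecutorService pool, List<Callable<Integer>> coreLoaders) {
        List<Future<Integer>> futures = new ArrayList<>();
        for (Callable<Integer> loader : coreLoaders) {
            futures.add(pool.submit(loader));
        }
        return futures;
    }

    public static void main(String[] args) {
        List<Callable<Integer>> loaders = Arrays.asList(() -> 1, () -> 1, () -> 1);
        System.out.println(loadAllBlocking(loaders)); // prints 3
    }
}
```

With `loadAllAsync` the caller would have to decide separately when the node is really ready, which is exactly the /live_nodes registration question raised above.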



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-7384) Delete-by-id with _route_ parameter fails on replicas for collections with implicit router

2015-04-13 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7384?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar updated SOLR-7384:

Description: 
The FullSolrCloudDistribCmdsTest test has been failing quite regularly on 
jenkins. Some of those failures are spurious, but there is an underlying bug: 
delete-by-id requests with the _route_ parameter on a collection with the 
implicit router fail on replicas because of a missing _version_ field.

{quote}
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/12286/
Java: 32bit/jdk1.9.0-ea-b54 -server -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  org.apache.solr.cloud.FullSolrCloudDistribCmdsTest.test

Error Message:
Error from server at 
http://127.0.0.1:44672/implicit_collection_without_routerfield_shard1_replica1: 
no servers hosting shard:

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at 
http://127.0.0.1:44672/implicit_collection_without_routerfield_shard1_replica1: 
no servers hosting shard:
at __randomizedtesting.SeedInfo.seed([944EEE25A6B2D153:1C1AD1FF084EBCAB]:0)
at org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:557)
at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:234)
at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:226)
at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:135)
{quote}

  was:
The FullSolrCloudDistribCmdsTest test has been failing quite regularly on 
jenkins.

{quote}
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/12286/
Java: 32bit/jdk1.9.0-ea-b54 -server -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  org.apache.solr.cloud.FullSolrCloudDistribCmdsTest.test

Error Message:
Error from server at 
http://127.0.0.1:44672/implicit_collection_without_routerfield_shard1_replica1: 
no servers hosting shard:

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at 
http://127.0.0.1:44672/implicit_collection_without_routerfield_shard1_replica1: 
no servers hosting shard:
at __randomizedtesting.SeedInfo.seed([944EEE25A6B2D153:1C1AD1FF084EBCAB]:0)
at org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:557)
at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:234)
at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:226)
at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:135)
{quote}


 Delete-by-id with _route_ parameter fails on replicas for collections with 
 implicit router
 --

 Key: SOLR-7384
 URL: https://issues.apache.org/jira/browse/SOLR-7384
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 5.1
Reporter: Shalin Shekhar Mangar
Assignee: Shalin Shekhar Mangar
 Fix For: Trunk, 5.2

 Attachments: FullSolrCloudDistribCmdsTest-2.log, 
 FullSolrCloudDistribCmdsTest.log


 The FullSolrCloudDistribCmdsTest test has been failing quite regularly on 
 jenkins. Some of those failures are spurious, but there is an underlying bug: 
 delete-by-id requests with the _route_ parameter on a collection with the 
 implicit router fail on replicas because of a missing _version_ field.
 {quote}
 Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/12286/
 Java: 32bit/jdk1.9.0-ea-b54 -server -XX:+UseConcMarkSweepGC
 1 tests failed.
 FAILED:  org.apache.solr.cloud.FullSolrCloudDistribCmdsTest.test
 Error Message:
 Error from server at 
 http://127.0.0.1:44672/implicit_collection_without_routerfield_shard1_replica1:
  no servers hosting shard:
 Stack Trace:
 org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
 from server at 
 http://127.0.0.1:44672/implicit_collection_without_routerfield_shard1_replica1:
  no servers hosting shard:
 at __randomizedtesting.SeedInfo.seed([944EEE25A6B2D153:1C1AD1FF084EBCAB]:0)
 at org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:557)
 at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:234)
 at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:226)
 at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:135)
 {quote}






Re: [JENKINS] Lucene-Solr-5.x-Linux (64bit/jdk1.7.0_80-ea-b05) - Build # 12125 - Still Failing!

2015-04-13 Thread david.w.smi...@gmail.com
Woops; sorry for the noise in the highlighter; I'll fix

On Mon, Apr 13, 2015 at 4:51 PM Policeman Jenkins Server 
jenk...@thetaphi.de wrote:

 Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/12125/
 Java: 64bit/jdk1.7.0_80-ea-b05 -XX:-UseCompressedOops
 -XX:+UseConcMarkSweepGC

 2 tests failed.
 FAILED:  junit.framework.TestSuite.org.apache.solr.request.TestWriterPerf

 Error Message:
 1

 Stack Trace:
 java.lang.AssertionError: 1
 at __randomizedtesting.SeedInfo.seed([D9131D669EBC7199]:0)
 at org.apache.solr.core.CachingDirectoryFactory.close(CachingDirectoryFactory.java:201)
 at org.apache.solr.core.SolrCore.close(SolrCore.java:1176)
 at org.apache.solr.core.SolrCores.close(SolrCores.java:117)
 at org.apache.solr.core.CoreContainer.shutdown(CoreContainer.java:378)
 at org.apache.solr.util.TestHarness.close(TestHarness.java:359)
 at org.apache.solr.SolrTestCaseJ4.deleteCore(SolrTestCaseJ4.java:704)
 at org.apache.solr.SolrTestCaseJ4.afterClass(SolrTestCaseJ4.java:230)
 at sun.reflect.GeneratedMethodAccessor37.invoke(Unknown Source)
 at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:606)
 at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
 at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:799)
 at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
 at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
 at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
 at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
 at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
 at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
 at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
 at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
 at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
 at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
 at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
 at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
 at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
 at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
 at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
 at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
 at java.lang.Thread.run(Thread.java:745)


 FAILED:  org.apache.solr.request.TestWriterPerf.testPerf

 Error Message:


 Stack Trace:
 java.lang.NullPointerException
 at __randomizedtesting.SeedInfo.seed([D9131D669EBC7199:F824FCE1F6325AA8]:0)
 at org.apache.solr.highlight.DefaultSolrHighlighter.doHighlightingByHighlighter(DefaultSolrHighlighter.java:451)
 at org.apache.solr.highlight.DefaultSolrHighlighter.doHighlighting(DefaultSolrHighlighter.java:394)
 at org.apache.solr.handler.component.HighlightComponent.process(HighlightComponent.java:143)
 at org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:253)
 at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:143)
 at org.apache.solr.core.SolrCore.execute(SolrCore.java:1988)
 at org.apache.solr.request.TestWriterPerf.getResponse(TestWriterPerf.java:96)
 at org.apache.solr.request.TestWriterPerf.doPerf(TestWriterPerf.java:105)
 at org.apache.solr.request.TestWriterPerf.testPerf(TestWriterPerf.java:178)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
 at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:606)
 at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
 at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
 at
 

[jira] [Commented] (SOLR-6692) hl.maxAnalyzedChars should apply cumulatively on a multi-valued field

2015-04-13 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6692?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14493101#comment-14493101
 ] 

ASF subversion and git services commented on SOLR-6692:
---

Commit 1673281 from [~dsmiley] in branch 'dev/trunk'
[ https://svn.apache.org/r1673281 ]

SOLR-6692: Highlighter NPE bugfix when highlight nonexistent field.

 hl.maxAnalyzedChars should apply cumulatively on a multi-valued field
 -

 Key: SOLR-6692
 URL: https://issues.apache.org/jira/browse/SOLR-6692
 Project: Solr
  Issue Type: Improvement
  Components: highlighter
Reporter: David Smiley
Assignee: David Smiley
 Fix For: 5.2

 Attachments: 
 SOLR-6692_hl_maxAnalyzedChars_cumulative_multiValued,_and_more.patch


 in DefaultSolrHighlighter, the hl.maxAnalyzedChars figure is used to 
 constrain how much text is analyzed before the highlighter stops, in the 
 interests of performance.  For a multi-valued field, it effectively treats 
 each value anew, no matter how much text it was previously analyzed for other 
 values for the same field for the current document. The PostingsHighlighter 
 doesn't work this way -- hl.maxAnalyzedChars is effectively the total budget 
 for a field for a document, no matter how many values there might be.  It's 
 not reset for each value.  I think this makes more sense.  When we loop over 
 the values, we should subtract from hl.maxAnalyzedChars the length of the 
 value just checked.  The motivation here is consistency with 
 PostingsHighlighter, and to allow for hl.maxAnalyzedChars to be pushed down 
 to term vector uninversion, which wouldn't be possible for multi-valued 
 fields based on the current way this parameter is used.
 Interestingly, I noticed Solr's use of FastVectorHighlighter doesn't honor 
 hl.maxAnalyzedChars as the FVH doesn't have a knob for that.  It does have 
 hl.phraseLimit which is a limit that could be used for a similar purpose, 
 albeit applied differently.
 Furthermore, DefaultSolrHighlighter.doHighlightingByHighlighter should exit 
 early from its field value loop if it reaches hl.snippets, and if 
 hl.preserveMulti=true
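
The cumulative-budget behavior described above can be sketched as follows. This is an illustrative standalone class, not the actual DefaultSolrHighlighter code; the method and class names are made up:

```java
// Hypothetical sketch: treat maxAnalyzedChars as a single budget shared
// across all values of a multi-valued field, charging each value's full
// length against the budget instead of resetting it per value.
import java.util.Arrays;
import java.util.List;

public class CumulativeBudgetSketch {
    /** Returns how many chars of each value would be analyzed under a shared budget. */
    static int[] charsAnalyzedPerValue(List<String> values, int maxAnalyzedChars) {
        int[] analyzed = new int[values.size()];
        int remaining = maxAnalyzedChars;
        for (int i = 0; i < values.size(); i++) {
            if (remaining <= 0) break;               // budget exhausted: stop early
            analyzed[i] = Math.min(values.get(i).length(), remaining);
            remaining -= values.get(i).length();     // charge the full value length
        }
        return analyzed;
    }

    public static void main(String[] args) {
        // Budget of 10 chars across three 6-char values: 6 analyzed, then 4, then 0
        int[] a = charsAnalyzedPerValue(Arrays.asList("aaaaaa", "bbbbbb", "cccccc"), 10);
        System.out.println(Arrays.toString(a)); // [6, 4, 0]
    }
}
```

With per-value resetting (the current behavior described above), the same call would analyze 6 chars of every value, so the work grows with the number of values rather than staying bounded.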



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-6196) Include geo3d package, along with Lucene integration to make it useful

2015-04-13 Thread Karl Wright (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6196?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karl Wright updated LUCENE-6196:

Attachment: LUCENE-6196-additions.patch

This patch adds support for degenerate cases, and corrects a bug in 
GeoWideRectangle.


 Include geo3d package, along with Lucene integration to make it useful
 --

 Key: LUCENE-6196
 URL: https://issues.apache.org/jira/browse/LUCENE-6196
 Project: Lucene - Core
  Issue Type: New Feature
  Components: modules/spatial
Reporter: Karl Wright
Assignee: David Smiley
 Attachments: LUCENE-6196-additions.patch, LUCENE-6196_Geo3d.patch, 
 ShapeImpl.java, geo3d-tests.zip, geo3d.zip


 I would like to explore contributing a geo3d package to Lucene.  This can be 
 used in conjunction with Lucene search, both for generating geohashes (via 
 spatial4j) for complex geographic shapes, and for limiting the results of 
 those queries to those within the exact shape, in a highly performant way.
 The package uses 3d planar geometry to do its magic, which basically limits 
 the computation necessary to determine membership (once a shape has been 
 initialized, of course) to only multiplications and additions, making it 
 feasible to construct a performant BoostSource-based filter for geographic 
 shapes.  The math is somewhat more involved when generating geohashes, but is 
 still more than fast enough to do a good job.
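
To illustrate the "only multiplications and additions" point: once a bounding plane (unit normal plus offset) has been computed for a shape, testing which side of the plane a 3D point lies on is a single dot product. This is just the general idea, not the geo3d API:

```java
// Illustrative sketch: a plane is n·p + D = 0 for unit normal n and offset D.
// Membership testing against an initialized plane needs only multiplies
// and adds -- no trigonometry at query time.
public class PlaneSideSketch {
    // Evaluate n·p + D; non-negative means p is on or inside the plane.
    static boolean within(double nx, double ny, double nz, double D,
                          double px, double py, double pz) {
        return nx * px + ny * py + nz * pz + D >= 0.0;
    }

    public static void main(String[] args) {
        // Plane z = 0 with normal pointing up: points with z >= 0 are "within".
        System.out.println(within(0, 0, 1, 0, 0.5, 0.5, 0.7));  // true
        System.out.println(within(0, 0, 1, 0, 0.5, 0.5, -0.7)); // false
    }
}
```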






[jira] [Commented] (SOLR-7374) Backup/Restore should provide a param for specifying the directory implementation it should use

2015-04-13 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-7374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14493005#comment-14493005
 ] 

Jan Høydahl commented on SOLR-7374:
---

Looking forward to a collection backup command, we can imagine a cluster-wide 
set of Directory configurations, such as an {{S3Directory}} configured with API 
keys etc. So each core/shard backup/restore command should be able to 
write/read the backup directly to/from such cluster-wide locations. One way 
could be to support protocol and config in the {{location}} attribute, e.g. 
{{s3:/backups/collection1/shard1}}. That would make it super simple for the 
Overseer to kick off a bunch of backup jobs across a cluster and let each shard 
write directly to the correct target instead of intermediate local storage. No 
idea how to configure cluster-wide Directory configs though.

 Backup/Restore should provide a param for specifying the directory 
 implementation it should use
 ---

 Key: SOLR-7374
 URL: https://issues.apache.org/jira/browse/SOLR-7374
 Project: Solr
  Issue Type: Bug
Reporter: Varun Thacker
Assignee: Varun Thacker
 Fix For: Trunk, 5.2


 Currently when we create a backup we use SimpleFSDirectory to write the 
 backup indexes. Similarly during a restore we open the index using 
 FSDirectory.open.
 We should provide a param called {{directoryImpl}} or {{type}} which will be 
 used to specify the Directory implementation used to back up the index. 
 Likewise during a restore you would need to specify the directory impl which 
 was used during backup so that the index can be opened correctly.
 This param will address the problem that currently if a user is running Solr 
 on HDFS there is no way to use the backup/restore functionality as the 
 directory is hardcoded.
 With this one could be running Solr on a local FS but backup the index on 
 HDFS etc.
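
The scheme-in-{{location}} idea from the comment above could be sketched roughly like this. Everything here is hypothetical (the names and the "local" default are illustrative, not Solr API):

```java
// Hypothetical sketch: resolve a backup "location" that may carry a scheme
// prefix (e.g. "s3:/backups/shard1" or "hdfs:/backups/shard1") into a
// (directoryImpl, path) pair, so each shard can write straight to the target.
public class BackupLocationSketch {
    /** Returns {directoryImpl, path}; falls back to a local-FS default. */
    static String[] resolve(String location) {
        int colon = location.indexOf(':');
        if (colon > 0) {
            return new String[] { location.substring(0, colon),
                                  location.substring(colon + 1) };
        }
        return new String[] { "local", location }; // no scheme: default to local FS
    }

    public static void main(String[] args) {
        String[] r = resolve("s3:/backups/collection1/shard1");
        System.out.println(r[0] + " -> " + r[1]); // s3 -> /backups/collection1/shard1
    }
}
```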






[JENKINS] Lucene-Solr-trunk-Windows (64bit/jdk1.8.0_40) - Build # 4672 - Still Failing!

2015-04-13 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Windows/4672/
Java: 64bit/jdk1.8.0_40 -XX:-UseCompressedOops -XX:+UseG1GC

3 tests failed.
FAILED:  org.apache.solr.request.TestWriterPerf.testPerf

Error Message:


Stack Trace:
java.lang.NullPointerException
at 
__randomizedtesting.SeedInfo.seed([9571EFD67A45DD76:B4460E5112CBF647]:0)
at 
org.apache.solr.highlight.DefaultSolrHighlighter.doHighlightingByHighlighter(DefaultSolrHighlighter.java:453)
at 
org.apache.solr.highlight.DefaultSolrHighlighter.doHighlighting(DefaultSolrHighlighter.java:394)
at 
org.apache.solr.handler.component.HighlightComponent.process(HighlightComponent.java:143)
at 
org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:253)
at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:143)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:1988)
at 
org.apache.solr.request.TestWriterPerf.getResponse(TestWriterPerf.java:96)
at 
org.apache.solr.request.TestWriterPerf.doPerf(TestWriterPerf.java:105)
at 
org.apache.solr.request.TestWriterPerf.testPerf(TestWriterPerf.java:178)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 

[jira] [Updated] (SOLR-7361) Main Jetty thread blocked by core loading delays HTTP listener from binding if core loading is slow

2015-04-13 Thread Timothy Potter (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7361?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothy Potter updated SOLR-7361:
-
Attachment: SOLR-7361.patch

Patch that blocks the main thread until cores are pre-registered with ZK, but 
then loads cores in the background, allowing the main thread to progress and 
activate the Jetty listener. Only works in SolrCloud mode; the main thread will 
continue to be blocked by core loading in non-cloud mode. Cores will come 
online asynchronously when they are loaded.

 Main Jetty thread blocked by core loading delays HTTP listener from binding 
 if core loading is slow
 ---

 Key: SOLR-7361
 URL: https://issues.apache.org/jira/browse/SOLR-7361
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Reporter: Timothy Potter
Assignee: Timothy Potter
 Attachments: SOLR-7361.patch


 During server startup, the CoreContainer uses an ExecutorService to load 
 cores in multiple background threads but then blocks until cores are loaded; 
 see CoreContainer#load around line 290 on trunk (invokeAll). From the 
 JavaDoc on that method, we have:
 {quote}
 Executes the given tasks, returning a list of Futures holding their status 
 and results when all complete. Future.isDone() is true for each element of 
 the returned list.
 {quote}
 In other words, this is a blocking call.
 This delays the Jetty HTTP listener from binding and accepting requests until 
 all cores are loaded. Do we need to block the main thread?
 Also, prior to this happening, the node is registered as a live node in ZK, 
 which makes it a candidate for receiving requests from the Overseer, such as 
 to service a create collection request. The problem of course is that the 
 node listed in /live_nodes isn't accepting requests yet. So we either need to 
 unblock the main thread during server loading or maybe wait longer before we 
 register as a live node ... not sure which is the better way forward?
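
The blocking-vs-background distinction above boils down to `invokeAll` vs `submit` on the executor. A minimal sketch (illustrative only, not the CoreContainer code):

```java
// invokeAll blocks the calling thread until every task completes, while
// submit() returns a Future immediately so the caller can progress
// (e.g. bind the HTTP listener) and await the result later.
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.*;

public class CoreLoadSketch {
    // Blocking path: invokeAll returns only when all tasks are done.
    static boolean invokeAllBlocks() throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        Callable<String> load = () -> { Thread.sleep(20); return "core-loaded"; };
        List<Future<String>> done = pool.invokeAll(Arrays.asList(load, load));
        pool.shutdown();
        return done.stream().allMatch(Future::isDone); // guaranteed true
    }

    // Background path: submit returns at once; the core comes online when
    // its Future completes, without holding up the main thread.
    static String backgroundLoad() throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(1);
        Future<String> f = pool.submit(() -> "core-loaded");
        String result = f.get(); // awaited only when actually needed
        pool.shutdown();
        return result;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(invokeAllBlocks());  // true
        System.out.println(backgroundLoad());   // core-loaded
    }
}
```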






[JENKINS] Lucene-Solr-trunk-Linux (64bit/jdk1.8.0_40) - Build # 12293 - Still Failing!

2015-04-13 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/12293/
Java: 64bit/jdk1.8.0_40 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

3 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.request.TestWriterPerf

Error Message:
1

Stack Trace:
java.lang.AssertionError: 1
at __randomizedtesting.SeedInfo.seed([B206800B682C76EB]:0)
at 
org.apache.solr.core.CachingDirectoryFactory.close(CachingDirectoryFactory.java:201)
at org.apache.solr.core.SolrCore.close(SolrCore.java:1176)
at org.apache.solr.core.SolrCores.close(SolrCores.java:117)
at org.apache.solr.core.CoreContainer.shutdown(CoreContainer.java:378)
at org.apache.solr.util.TestHarness.close(TestHarness.java:359)
at org.apache.solr.SolrTestCaseJ4.deleteCore(SolrTestCaseJ4.java:704)
at org.apache.solr.SolrTestCaseJ4.afterClass(SolrTestCaseJ4.java:230)
at sun.reflect.GeneratedMethodAccessor49.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:799)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at java.lang.Thread.run(Thread.java:745)


FAILED:  junit.framework.TestSuite.org.apache.solr.request.TestWriterPerf

Error Message:
Clean up static fields (in @AfterClass?), your test seems to hang on to 
approximately 10,767,016 bytes (threshold is 10,485,760). Field reference sizes 
(counted individually):   - 11,813,344 bytes, protected static 
org.apache.solr.util.TestHarness org.apache.solr.SolrTestCaseJ4.h   - 
11,806,992 bytes, protected static 
org.apache.solr.util.TestHarness$LocalRequestFactory 
org.apache.solr.SolrTestCaseJ4.lrf   - 10,096,056 bytes, protected static 
org.apache.solr.core.SolrConfig org.apache.solr.SolrTestCaseJ4.solrConfig   - 
296 bytes, public static org.junit.rules.TestRule 
org.apache.solr.SolrTestCaseJ4.solrClassRules   - 216 bytes, protected static 
java.lang.String org.apache.solr.SolrTestCaseJ4.testSolrHome   - 144 bytes, 
private static java.lang.String org.apache.solr.SolrTestCaseJ4.factoryProp   - 
112 bytes, protected static java.lang.String 
org.apache.solr.SolrTestCaseJ4.configString   - 80 bytes, private static 
java.lang.String org.apache.solr.SolrTestCaseJ4.coreName   - 80 bytes, 
protected static java.lang.String org.apache.solr.SolrTestCaseJ4.schemaString

Stack Trace:
junit.framework.AssertionFailedError: Clean up static fields (in @AfterClass?), 
your test seems to hang on to approximately 10,767,016 bytes (threshold is 
10,485,760). Field reference sizes (counted individually):
  - 11,813,344 bytes, protected static org.apache.solr.util.TestHarness 
org.apache.solr.SolrTestCaseJ4.h
  - 11,806,992 bytes, protected static 
org.apache.solr.util.TestHarness$LocalRequestFactory 
org.apache.solr.SolrTestCaseJ4.lrf
  - 10,096,056 bytes, protected static org.apache.solr.core.SolrConfig 
org.apache.solr.SolrTestCaseJ4.solrConfig
  - 296 bytes, public static org.junit.rules.TestRule 

[JENKINS] Lucene-Solr-5.x-MacOSX (64bit/jdk1.7.0) - Build # 2127 - Still Failing!

2015-04-13 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-MacOSX/2127/
Java: 64bit/jdk1.7.0 -XX:-UseCompressedOops -XX:+UseParallelGC

2 tests failed.
FAILED:  org.apache.solr.request.TestWriterPerf.testPerf

Error Message:


Stack Trace:
java.lang.NullPointerException
at 
__randomizedtesting.SeedInfo.seed([C00773CE20AC5C1F:E13092494822772E]:0)
at 
org.apache.solr.highlight.DefaultSolrHighlighter.doHighlightingByHighlighter(DefaultSolrHighlighter.java:451)
at 
org.apache.solr.highlight.DefaultSolrHighlighter.doHighlighting(DefaultSolrHighlighter.java:394)
at 
org.apache.solr.handler.component.HighlightComponent.process(HighlightComponent.java:143)
at 
org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:253)
at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:143)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:1988)
at 
org.apache.solr.request.TestWriterPerf.getResponse(TestWriterPerf.java:96)
at 
org.apache.solr.request.TestWriterPerf.doPerf(TestWriterPerf.java:105)
at 
org.apache.solr.request.TestWriterPerf.testPerf(TestWriterPerf.java:178)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 

[JENKINS] Lucene-Solr-Tests-5.x-Java7 - Build # 2941 - Still Failing

2015-04-13 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-5.x-Java7/2941/

2 tests failed.
FAILED:  org.apache.solr.request.TestWriterPerf.testPerf

Error Message:


Stack Trace:
java.lang.NullPointerException
at 
__randomizedtesting.SeedInfo.seed([969DE39C20A1D75C:B7AA021B482FFC6D]:0)
at 
org.apache.solr.highlight.DefaultSolrHighlighter.doHighlightingByHighlighter(DefaultSolrHighlighter.java:451)
at 
org.apache.solr.highlight.DefaultSolrHighlighter.doHighlighting(DefaultSolrHighlighter.java:394)
at 
org.apache.solr.handler.component.HighlightComponent.process(HighlightComponent.java:143)
at 
org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:253)
at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:143)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:1988)
at 
org.apache.solr.request.TestWriterPerf.getResponse(TestWriterPerf.java:96)
at 
org.apache.solr.request.TestWriterPerf.doPerf(TestWriterPerf.java:105)
at 
org.apache.solr.request.TestWriterPerf.testPerf(TestWriterPerf.java:178)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 

[JENKINS] Lucene-Solr-5.x-Linux (64bit/jdk1.7.0_80-ea-b05) - Build # 12125 - Still Failing!

2015-04-13 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/12125/
Java: 64bit/jdk1.7.0_80-ea-b05 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

2 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.request.TestWriterPerf

Error Message:
1

Stack Trace:
java.lang.AssertionError: 1
at __randomizedtesting.SeedInfo.seed([D9131D669EBC7199]:0)
at 
org.apache.solr.core.CachingDirectoryFactory.close(CachingDirectoryFactory.java:201)
at org.apache.solr.core.SolrCore.close(SolrCore.java:1176)
at org.apache.solr.core.SolrCores.close(SolrCores.java:117)
at org.apache.solr.core.CoreContainer.shutdown(CoreContainer.java:378)
at org.apache.solr.util.TestHarness.close(TestHarness.java:359)
at org.apache.solr.SolrTestCaseJ4.deleteCore(SolrTestCaseJ4.java:704)
at org.apache.solr.SolrTestCaseJ4.afterClass(SolrTestCaseJ4.java:230)
at sun.reflect.GeneratedMethodAccessor37.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:799)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at java.lang.Thread.run(Thread.java:745)


FAILED:  org.apache.solr.request.TestWriterPerf.testPerf

Error Message:


Stack Trace:
java.lang.NullPointerException
at 
__randomizedtesting.SeedInfo.seed([D9131D669EBC7199:F824FCE1F6325AA8]:0)
at 
org.apache.solr.highlight.DefaultSolrHighlighter.doHighlightingByHighlighter(DefaultSolrHighlighter.java:451)
at 
org.apache.solr.highlight.DefaultSolrHighlighter.doHighlighting(DefaultSolrHighlighter.java:394)
at 
org.apache.solr.handler.component.HighlightComponent.process(HighlightComponent.java:143)
at 
org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:253)
at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:143)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:1988)
at 
org.apache.solr.request.TestWriterPerf.getResponse(TestWriterPerf.java:96)
at 
org.apache.solr.request.TestWriterPerf.doPerf(TestWriterPerf.java:105)
at 
org.apache.solr.request.TestWriterPerf.testPerf(TestWriterPerf.java:178)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
at 

[jira] [Commented] (SOLR-5894) Speed up high-cardinality facets with sparse counters

2015-04-13 Thread Manuel Lenormand (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5894?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14493153#comment-14493153
 ] 

Manuel Lenormand commented on SOLR-5894:


I'll be pleased to be updated about the 4.10.x migration; I'll be watching the 
issue. We have a 40-shard collection, 3TB/100M docs. As you can notice from the 
JIRA issues I've opened, scalability and performance are our main concerns, and 
it's nice seeing others dealing with harder use cases than ours and writing 
about it.

 Speed up high-cardinality facets with sparse counters
 -

 Key: SOLR-5894
 URL: https://issues.apache.org/jira/browse/SOLR-5894
 Project: Solr
  Issue Type: Improvement
  Components: SearchComponents - other
Affects Versions: 4.7.1
Reporter: Toke Eskildsen
Priority: Minor
  Labels: faceted-search, faceting, memory, performance
 Attachments: SOLR-5894.patch, SOLR-5894.patch, SOLR-5894.patch, 
 SOLR-5894.patch, SOLR-5894.patch, SOLR-5894.patch, SOLR-5894.patch, 
 SOLR-5894.patch, SOLR-5894.patch, SOLR-5894_test.zip, SOLR-5894_test.zip, 
 SOLR-5894_test.zip, SOLR-5894_test.zip, SOLR-5894_test.zip, 
 author_7M_tags_1852_logged_queries_warmed.png, 
 sparse_200docs_fc_cutoff_20140403-145412.png, 
 sparse_500docs_20140331-151918_multi.png, 
 sparse_500docs_20140331-151918_single.png, 
 sparse_5051docs_20140328-152807.png


 Field based faceting in Solr has two phases: Collecting counts for tags in 
 facets and extracting the requested tags.
 The execution time for the collecting phase is approximately linear to the 
 number of hits and the number of references from hits to tags. This phase is 
 not the focus here.
 The extraction time scales with the number of unique tags in the search 
 result, but is also heavily influenced by the total number of unique tags in 
 the facet as every counter, 0 or not, is visited by the extractor (at least 
 for count order). For fields with millions of unique tag values this means 
 10s of milliseconds added to the minimum response time (see 
 https://sbdevel.wordpress.com/2014/03/18/sparse-facet-counting-on-a-real-index/
  for a test on a corpus with 7M unique values in the facet).
 The extractor needs to visit every counter due to the current counter 
 structure being a plain int-array of size #unique_tags. Switching to a sparse 
 structure, where only the tag counters > 0 are visited, makes the extraction 
 time linear to the number of unique tags in the result set.
 Unfortunately the number of unique tags in the result set is unknown at 
 collect time, so it is not possible to reliably select sparse counting vs. 
 full counting up front. Luckily there exist solutions for sparse sets that 
 have the property of switching to non-sparse mode without a switch penalty 
 when the sparse threshold is exceeded (see 
 http://programmingpraxis.com/2012/03/09/sparse-sets/ for an example). This 
 JIRA aims to implement this functionality in Solr.
 Current status: Sparse counting is implemented for field cache faceting, both 
 single- and multi-value, with and without doc-values. Sort by count only. The 
 patch applies cleanly to Solr 4.6.1 and should integrate well with everything 
 as all functionality is unchanged. After patching, the following new 
 parameters are possible:
 * facet.sparse=true enables sparse faceting.
 * facet.sparse.mintags=1 the minimum number of unique tags in the given 
 field for sparse faceting to be active. This is used for auto-selecting 
 whether sparse should be used or not.
 * facet.sparse.fraction=0.08 the overhead used for the sparse tracker. 
 Setting this too low means that only very small result sets are handled as 
 sparse. Setting this too high will result in a large performance penalty if 
 the result set blows the sparse tracker. Values between 0.04 and 0.1 seem to 
 work well.
 * facet.sparse.packed=true use PackedInts for counters instead of int[]. This 
 saves memory, but performance will differ. Whether performance will be better 
 or worse depends on the corpus. Experiment with it.
 * facet.sparse.cutoff=0.90 if the estimated number (based on hitcount) of 
 unique tags in the search result exceeds this fraction of the sparse tracker, 
 do not perform sparse tracking. The estimate is based on the assumption that 
 references from documents to tags are distributed randomly.
 * facet.sparse.pool.size=2 the maximum number of sparse trackers to clear and 
 keep in memory, ready for use. Clearing and re-using a counter is faster 
 than allocating it fresh from the heap. Setting the pool size to 0 means that 
 a new sparse counter will be allocated each time, just as standard Solr 
 faceting works.
 * facet.sparse.stats=true adds a special tag with timing statistics for 
 sparse faceting.
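The sparse-set technique described above can be sketched in a few lines. This is an illustrative model of the idea (after the linked programmingpraxis article), not code from the SOLR-5894 patch; the class name and the `fraction` default are invented for the example.

```python
class SparseCounter:
    """Dense counters plus a bounded tracker of touched ordinals.

    Extraction visits only the tracked (non-zero) counters; if the tracker
    overflows, extraction falls back to a full scan with no switch penalty,
    because the dense array was kept up to date all along."""

    def __init__(self, unique_tags, fraction=0.08):
        self.counts = [0] * unique_tags
        self.max_tracked = max(1, int(unique_tags * fraction))
        self.touched = []          # ordinals incremented at least once
        self.overflowed = False

    def increment(self, ordinal):
        if self.counts[ordinal] == 0:                  # first touch
            if len(self.touched) < self.max_tracked:
                self.touched.append(ordinal)
            else:
                self.overflowed = True                 # tracker blown
        self.counts[ordinal] += 1

    def nonzero(self):
        """(ordinal, count) pairs: sparse walk if possible, full scan if not."""
        if not self.overflowed:
            return [(o, self.counts[o]) for o in self.touched]
        return [(o, c) for o, c in enumerate(self.counts) if c > 0]

c = SparseCounter(1000)
for ordinal in (5, 5, 42):
    c.increment(ordinal)
print(c.nonzero())  # [(5, 2), (42, 1)] -- only two counters visited
```

Blowing the tracker costs nothing beyond the full scan that standard faceting does anyway, which is why no up-front sparse-vs-full decision is needed.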

[jira] [Commented] (SOLR-7110) Optimize JavaBinCodec to minimize string Object creation

2015-04-13 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7110?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14492996#comment-14492996
 ] 

ASF subversion and git services commented on SOLR-7110:
---

Commit 1673271 from [~yo...@apache.org] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1673271 ]

SOLR-7110: tests - java7 compilable

 Optimize JavaBinCodec to minimize string Object creation
 

 Key: SOLR-7110
 URL: https://issues.apache.org/jira/browse/SOLR-7110
 Project: Solr
  Issue Type: Improvement
Reporter: Noble Paul
Assignee: Noble Paul
Priority: Minor
 Fix For: Trunk, 5.2

 Attachments: SOLR-7110.patch, SOLR-7110.patch, SOLR-7110.patch


 In JavaBinCodec we already optimize string creation if strings are repeated 
 in the same payload. If we use a cache, it is possible to avoid string 
 creation across objects as well.
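The caching idea can be sketched as follows. This is a hypothetical illustration, not JavaBinCodec's actual code; the class name and the LRU policy are assumptions for the example.

```python
from collections import OrderedDict

class StringBytesCache:
    """Map serialized UTF-8 bytes to decoded strings so values repeated
    across payloads reuse one object instead of allocating a new one."""

    def __init__(self, max_entries=1024):
        self.max_entries = max_entries
        self.cache = OrderedDict()

    def decode(self, raw: bytes) -> str:
        s = self.cache.get(raw)
        if s is None:
            s = raw.decode("utf-8")
            self.cache[raw] = s
            if len(self.cache) > self.max_entries:
                self.cache.popitem(last=False)   # evict the oldest entry
        else:
            self.cache.move_to_end(raw)          # keep hot entries alive
        return s

cache = StringBytesCache()
a = cache.decode(b"field_name")
b = cache.decode(b"field_name")
print(a is b)  # True: the second decode reuses the cached object
```

Keys such as field names recur in nearly every document, so even a small cache avoids most per-document string allocations.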



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7110) Optimize JavaBinCodec to minimize string Object creation

2015-04-13 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7110?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14492995#comment-14492995
 ] 

ASF subversion and git services commented on SOLR-7110:
---

Commit 1673270 from [~yo...@apache.org] in branch 'dev/trunk'
[ https://svn.apache.org/r1673270 ]

SOLR-7110: tests - java7 compilable

 Optimize JavaBinCodec to minimize string Object creation
 

 Key: SOLR-7110
 URL: https://issues.apache.org/jira/browse/SOLR-7110
 Project: Solr
  Issue Type: Improvement
Reporter: Noble Paul
Assignee: Noble Paul
Priority: Minor
 Fix For: Trunk, 5.2

 Attachments: SOLR-7110.patch, SOLR-7110.patch, SOLR-7110.patch


 In JavaBinCodec we already optimize string creation if strings are repeated 
 in the same payload. If we use a cache, it is possible to avoid string 
 creation across objects as well.






[jira] [Commented] (SOLR-6692) hl.maxAnalyzedChars should apply cumulatively on a multi-valued field

2015-04-13 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6692?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14493108#comment-14493108
 ] 

ASF subversion and git services commented on SOLR-6692:
---

Commit 1673283 from [~dsmiley] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1673283 ]

SOLR-6692: Highlighter NPE bugfix when highlight nonexistent field.

 hl.maxAnalyzedChars should apply cumulatively on a multi-valued field
 -

 Key: SOLR-6692
 URL: https://issues.apache.org/jira/browse/SOLR-6692
 Project: Solr
  Issue Type: Improvement
  Components: highlighter
Reporter: David Smiley
Assignee: David Smiley
 Fix For: 5.2

 Attachments: 
 SOLR-6692_hl_maxAnalyzedChars_cumulative_multiValued,_and_more.patch


 in DefaultSolrHighlighter, the hl.maxAnalyzedChars figure is used to 
 constrain how much text is analyzed before the highlighter stops, in the 
 interests of performance.  For a multi-valued field, it effectively treats 
 each value anew, no matter how much text was previously analyzed for other 
 values for the same field for the current document. The PostingsHighlighter 
 doesn't work this way -- hl.maxAnalyzedChars is effectively the total budget 
 for a field for a document, no matter how many values there might be.  It's 
 not reset for each value.  I think this makes more sense.  When we loop over 
 the values, we should subtract from hl.maxAnalyzedChars the length of the 
 value just checked.  The motivation here is consistency with 
 PostingsHighlighter, and to allow for hl.maxAnalyzedChars to be pushed down 
 to term vector uninversion, which wouldn't be possible for multi-valued 
 fields based on the current way this parameter is used.
 Interestingly, I noticed Solr's use of FastVectorHighlighter doesn't honor 
 hl.maxAnalyzedChars as the FVH doesn't have a knob for that.  It does have 
 hl.phraseLimit which is a limit that could be used for a similar purpose, 
 albeit applied differently.
 Furthermore, DefaultSolrHighlighter.doHighlightingByHighlighter should exit 
 early from its field value loop if it reaches hl.snippets, and if 
 hl.preserveMulti=true
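The proposed cumulative-budget behavior can be sketched like this (an illustrative model, not DefaultSolrHighlighter code; the function name is invented):

```python
def analyzed_values(values, max_analyzed_chars):
    """Treat max_analyzed_chars as a per-document budget across all values
    of a multi-valued field, subtracting each value's length as we go,
    instead of resetting the limit for every value."""
    budget = max_analyzed_chars
    out = []
    for v in values:
        if budget <= 0:
            break                  # budget exhausted: stop analyzing early
        out.append(v[:budget])     # analyze at most the remaining budget
        budget -= len(v)
    return out

print(analyzed_values(["aaaa", "bbbb", "cccc"], 6))  # ['aaaa', 'bb']
```

With the current per-value reset, the same call would analyze all three values in full; the shared budget is what lets the limit be pushed down to term vector uninversion.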






[jira] [Commented] (SOLR-7243) 4.10.3 SolrJ is throwing a SERVER_ERROR exception instead of BAD_REQUEST

2015-04-13 Thread Hrishikesh Gadre (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7243?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14493308#comment-14493308
 ] 

Hrishikesh Gadre commented on SOLR-7243:


[~elyograg] Did you get a chance to review the patch? Please let me know if any 
feedback.

 4.10.3 SolrJ is throwing a SERVER_ERROR exception instead of BAD_REQUEST
 

 Key: SOLR-7243
 URL: https://issues.apache.org/jira/browse/SOLR-7243
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.10.3
Reporter: Hrishikesh Gadre
Priority: Minor
 Attachments: SOLR-7243.patch, SOLR-7243.patch, SOLR-7243.patch, 
 SOLR-7243.patch


 We found this problem while upgrading Solr from 4.4 to 4.10.3. Our 
 integration test is similar to this Solr unit test,
 https://github.com/apache/lucene-solr/blob/trunk/solr/core/src/test/org/apache/solr/schema/TestCloudSchemaless.java
 Specifically we test if the Solr server returns BAD_REQUEST when provided 
 with incorrect input. The only difference is that it uses CloudSolrServer 
 instead of HttpSolrServer. The CloudSolrServer always returns the SERVER_ERROR 
 error code. Please take a look:
 https://github.com/apache/lucene-solr/blob/817303840fce547a1557e330e93e5a8ac0618f34/solr/solrj/src/java/org/apache/solr/client/solrj/impl/CloudSolrServer.java#L359
 I think we can improve the error handling by checking if the first exception 
 in the list is of type SolrException and if that is the case return the error 
 code associated with that exception. If the first exception is not of type 
 SolrException, then we can return SERVER_ERROR code. 
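The suggested improvement can be sketched as follows. This is a hypothetical illustration of the logic, not CloudSolrServer code; `FakeSolrException` merely stands in for SolrException and its error code.

```python
SERVER_ERROR = 500

def resolve_error_code(exceptions):
    """If the first collected exception carries its own HTTP code (as a
    SolrException would), surface that; otherwise report SERVER_ERROR."""
    if exceptions:
        code = getattr(exceptions[0], "code", None)
        if code is not None:
            return code
    return SERVER_ERROR

class FakeSolrException(Exception):      # stand-in for SolrException
    def __init__(self, code, msg):
        super().__init__(msg)
        self.code = code

print(resolve_error_code([FakeSolrException(400, "unknown field")]))  # 400
print(resolve_error_code([RuntimeError("boom")]))                     # 500
```

This way a schema validation failure propagates as BAD_REQUEST to the client instead of being masked as a generic server error.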






[jira] [Commented] (SOLR-6692) hl.maxAnalyzedChars should apply cumulatively on a multi-valued field

2015-04-13 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6692?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14493396#comment-14493396
 ] 

ASF subversion and git services commented on SOLR-6692:
---

Commit 1673328 from [~dsmiley] in branch 'dev/trunk'
[ https://svn.apache.org/r1673328 ]

SOLR-6692: highlighter refactorings...
 * extract method getDocPrefetchFieldNames
 * trim field names in getHighlightFields instead of later on
 * lazy-create FVH (could be expensive for wildcard queries)

 hl.maxAnalyzedChars should apply cumulatively on a multi-valued field
 -

 Key: SOLR-6692
 URL: https://issues.apache.org/jira/browse/SOLR-6692
 Project: Solr
  Issue Type: Improvement
  Components: highlighter
Reporter: David Smiley
Assignee: David Smiley
 Fix For: 5.2

 Attachments: 
 SOLR-6692_hl_maxAnalyzedChars_cumulative_multiValued,_and_more.patch


 in DefaultSolrHighlighter, the hl.maxAnalyzedChars figure is used to 
 constrain how much text is analyzed before the highlighter stops, in the 
 interests of performance.  For a multi-valued field, it effectively treats 
 each value anew, no matter how much text was previously analyzed for other 
 values for the same field for the current document. The PostingsHighlighter 
 doesn't work this way -- hl.maxAnalyzedChars is effectively the total budget 
 for a field for a document, no matter how many values there might be.  It's 
 not reset for each value.  I think this makes more sense.  When we loop over 
 the values, we should subtract from hl.maxAnalyzedChars the length of the 
 value just checked.  The motivation here is consistency with 
 PostingsHighlighter, and to allow for hl.maxAnalyzedChars to be pushed down 
 to term vector uninversion, which wouldn't be possible for multi-valued 
 fields based on the current way this parameter is used.
 Interestingly, I noticed Solr's use of FastVectorHighlighter doesn't honor 
 hl.maxAnalyzedChars as the FVH doesn't have a knob for that.  It does have 
 hl.phraseLimit which is a limit that could be used for a similar purpose, 
 albeit applied differently.
 Furthermore, DefaultSolrHighlighter.doHighlightingByHighlighter should exit 
 early from its field value loop if it reaches hl.snippets, and if 
 hl.preserveMulti=true






[jira] [Updated] (LUCENE-6196) Include geo3d package, along with Lucene integration to make it useful

2015-04-13 Thread Karl Wright (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6196?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karl Wright updated LUCENE-6196:

Attachment: (was: LUCENE-6196-additions.patch)

 Include geo3d package, along with Lucene integration to make it useful
 --

 Key: LUCENE-6196
 URL: https://issues.apache.org/jira/browse/LUCENE-6196
 Project: Lucene - Core
  Issue Type: New Feature
  Components: modules/spatial
Reporter: Karl Wright
Assignee: David Smiley
 Attachments: LUCENE-6196-additions.patch, LUCENE-6196_Geo3d.patch, 
 ShapeImpl.java, geo3d-tests.zip, geo3d.zip


 I would like to explore contributing a geo3d package to Lucene.  This can be 
 used in conjunction with Lucene search, both for generating geohashes (via 
 spatial4j) for complex geographic shapes, and for limiting the results of 
 those queries to those within the exact shape, in highly performant ways.
 The package uses 3d planar geometry to do its magic, which basically limits 
 computation necessary to determine membership (once a shape has been 
 initialized, of course) to only multiplications and additions, which makes it 
 feasible to construct a performant BoostSource-based filter for geographic 
 shapes.  The math is somewhat more involved when generating geohashes, but is 
 still more than fast enough to do a good job.
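The "only multiplications and additions" point can be illustrated with a toy membership check. This is not the geo3d API, just a sketch of the planar-geometry idea, under the assumption that a shape is bounded by planes through the unit sphere:

```python
import math

def latlon_to_unit(lat_deg, lon_deg):
    """Project a lat/lon point onto the unit sphere as an (x, y, z) vector."""
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    return (math.cos(lat) * math.cos(lon),
            math.cos(lat) * math.sin(lon),
            math.sin(lat))

def inside(point, planes):
    """Membership needs only multiplications and additions per plane:
    the point is inside if a*x + b*y + c*z + d >= 0 for every plane."""
    x, y, z = point
    return all(a * x + b * y + c * z + d >= 0.0 for (a, b, c, d) in planes)

# Toy shape: the northern hemisphere, bounded by the single plane z >= 0.
northern = [(0.0, 0.0, 1.0, 0.0)]
print(inside(latlon_to_unit(45.0, 10.0), northern))   # True
print(inside(latlon_to_unit(-45.0, 10.0), northern))  # False
```

The trigonometry happens once, when the point (or shape) is built; every subsequent membership test is pure linear arithmetic, which is what makes a per-document filter cheap.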






[jira] [Updated] (LUCENE-6196) Include geo3d package, along with Lucene integration to make it useful

2015-04-13 Thread Karl Wright (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6196?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karl Wright updated LUCENE-6196:

Attachment: LUCENE-6196-additions.patch

 Include geo3d package, along with Lucene integration to make it useful
 --

 Key: LUCENE-6196
 URL: https://issues.apache.org/jira/browse/LUCENE-6196
 Project: Lucene - Core
  Issue Type: New Feature
  Components: modules/spatial
Reporter: Karl Wright
Assignee: David Smiley
 Attachments: LUCENE-6196-additions.patch, LUCENE-6196_Geo3d.patch, 
 ShapeImpl.java, geo3d-tests.zip, geo3d.zip


 I would like to explore contributing a geo3d package to Lucene.  This can be 
 used in conjunction with Lucene search, both for generating geohashes (via 
 spatial4j) for complex geographic shapes, and for limiting the results of 
 those queries to those within the exact shape, in highly performant ways.
 The package uses 3d planar geometry to do its magic, which basically limits 
 computation necessary to determine membership (once a shape has been 
 initialized, of course) to only multiplications and additions, which makes it 
 feasible to construct a performant BoostSource-based filter for geographic 
 shapes.  The math is somewhat more involved when generating geohashes, but is 
 still more than fast enough to do a good job.






[jira] [Commented] (SOLR-7349) Cleanup or fix cloud-dev scripts

2015-04-13 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7349?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14493473#comment-14493473
 ] 

Mark Miller commented on SOLR-7349:
---

They are def not intended for release - that's why it's cloud-dev. They are 
simply best-effort scripts that help during development.

I don't mind that they are part of the src tree where they are, though, 
so I wouldn't do the work to move them. We could add a readme, I suppose.

 Cleanup or fix cloud-dev scripts
 

 Key: SOLR-7349
 URL: https://issues.apache.org/jira/browse/SOLR-7349
 Project: Solr
  Issue Type: Improvement
  Components: scripts and tools
Reporter: Ramkumar Aiyengar
Assignee: Ramkumar Aiyengar
Priority: Minor
 Fix For: 5.2

 Attachments: SOLR-7349.patch


 With Jetty 9, cloud-dev scripts no longer work in trunk, we need to either 
 clean up or fix them.






[jira] [Commented] (SOLR-6203) cast exception while searching with sort function and result grouping

2015-04-13 Thread Judith Silverman (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6203?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14493256#comment-14493256
 ] 

Judith Silverman commented on SOLR-6203:


Hello, has anybody looked into this? I have tried to take Hoss Man's advice, 
but my tests are still failing. This is my first dive into Solr code and I am 
only guessing how things fit together. I don't think it's worth posting my 
code, but here is the list of source files I have stumbled across and modified; 
if some other files are calling out for modification, please let me know! 
Thanks in advance for any suggestions.

./solr/core/src/java/org/apache/solr/handler/component/ResponseBuilder.java

./solr/core/src/java/org/apache/solr/handler/component/QueryComponent.java

./solr/core/src/java/org/apache/solr/search/grouping/distributed/command/SearchGroupsFieldCommand.java

./solr/core/src/java/org/apache/solr/search/grouping/distributed/command/TopGroupsFieldCommand.java

./solr/core/src/java/org/apache/solr/search/grouping/distributed/shardresultserializer/TopGroupsResultTransformer.java

./solr/core/src/java/org/apache/solr/search/grouping/distributed/shardresultserializer/SearchGroupsResultTransformer.java

./solr/core/src/java/org/apache/solr/search/grouping/distributed/shardresultserializer/ShardResultTransformer.java

./solr/core/src/java/org/apache/solr/search/grouping/distributed/responseprocessor/SearchGroupShardResponseProcessor.java

./solr/core/src/java/org/apache/solr/search/grouping/distributed/responseprocessor/TopGroupsShardResponseProcessor.java

./solr/core/src/java/org/apache/solr/search/grouping/GroupingSpecification.java

./solr/core/src/java/org/apache/solr/search/QParser.java

 cast exception while searching with sort function and result grouping
 -

 Key: SOLR-6203
 URL: https://issues.apache.org/jira/browse/SOLR-6203
 Project: Solr
  Issue Type: Bug
  Components: search
Affects Versions: 4.7, 4.8
Reporter: Nathan Dire
 Attachments: SOLR-6203-unittest.patch


 After upgrading from 4.5.1 to 4.7+, a schema including a {{*}} dynamic 
 field as text gets a cast exception when using a sort function and result 
 grouping.  
 Repro (with example config):
 # Add {{*}} dynamic field as a {{TextField}}, eg:
 {noformat}
 <dynamicField name="*" type="text_general" multiValued="true" />
 {noformat}
 # Create a sharded collection
 {noformat}
 curl 
 'http://localhost:8983/solr/admin/collections?action=CREATE&name=test&numShards=2&maxShardsPerNode=2'
 {noformat}
 # Add example docs (query must have some results)
 # Submit query which sorts on a function result and uses result grouping:
 {noformat}
 {
   "responseHeader": {
     "status": 500,
     "QTime": 50,
     "params": {
       "sort": "sqrt(popularity) desc",
       "indent": "true",
       "q": "*:*",
       "_": "1403709010008",
       "group.field": "manu",
       "group": "true",
       "wt": "json"
     }
   },
   "error": {
     "msg": "java.lang.Double cannot be cast to org.apache.lucene.util.BytesRef",
     "code": 500
   }
 }
 {noformat}
 Source exception from log:
 {noformat}
 ERROR - 2014-06-25 08:10:10.055; org.apache.solr.common.SolrException; 
 java.lang.ClassCastException: java.lang.Double cannot be cast to 
 org.apache.lucene.util.BytesRef
 at 
 org.apache.solr.schema.FieldType.marshalStringSortValue(FieldType.java:981)
 at org.apache.solr.schema.TextField.marshalSortValue(TextField.java:176)
 at 
 org.apache.solr.search.grouping.distributed.shardresultserializer.SearchGroupsResultTransformer.serializeSearchGroup(SearchGroupsResultTransformer.java:125)
 at 
 org.apache.solr.search.grouping.distributed.shardresultserializer.SearchGroupsResultTransformer.transform(SearchGroupsResultTransformer.java:65)
 at 
 org.apache.solr.search.grouping.distributed.shardresultserializer.SearchGroupsResultTransformer.transform(SearchGroupsResultTransformer.java:43)
 at 
 org.apache.solr.search.grouping.CommandHandler.processResult(CommandHandler.java:193)
 at 
 org.apache.solr.handler.component.QueryComponent.process(QueryComponent.java:340)
 at 
 org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:218)
 at 
 org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
   ...
 {noformat}
 It looks like {{serializeSearchGroup}} is matching the sort expression as the 
 {{*}} dynamic field, which is a TextField in the repro.






[jira] [Updated] (SOLR-7345) Add support for facet.limit to range facets

2015-04-13 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7345?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man updated SOLR-7345:
---
Issue Type: Improvement  (was: Bug)
   Summary: Add support for facet.limit to range facets  (was: Facet limit 
doesn't work on range facets)

range faceting (and date faceting before it) has never supported facet.limit 
-- nor has facet.limit ever been documented as something that *might* be 
supported for range faceting -- it is explicitly listed as a Field-Value 
Faceting parameter (not a Range Faceting param)

edited jira to note this is a feature request, not a bug.

https://cwiki.apache.org/confluence/display/solr/Faceting#Faceting-Field-ValueFacetingParameters



 Add support for facet.limit to range facets
 ---

 Key: SOLR-7345
 URL: https://issues.apache.org/jira/browse/SOLR-7345
 Project: Solr
  Issue Type: Improvement
Reporter: Mark Haase

 I have a field called post_date_tdt that I want to facet by month:
 {code}
 $ curl 
 'http://myhost/solr/myapp/select/?defType=edismax&q=video&rows=0&start=0&wt=json&facet=true&facet.range=post_date_tdt&f.post_date_tdt.facet.mincount=200&f.post_date_tdt.facet.range.end=NOW%2FMONTH&f.post_date_tdt.facet.range.gap=%2B1MONTH&f.post_date_tdt.facet.range.start=NOW-120MONTHS%2FMONTH'
 {
   "facet_counts": {
     "facet_dates": {},
     "facet_fields": {},
     "facet_intervals": {},
     "facet_queries": {},
     "facet_ranges": {
       "post_date_tdt": {
         "counts": [
           "2014-07-01T00:00:00Z", 202,
           "2014-08-01T00:00:00Z", 264,
           "2014-09-01T00:00:00Z", 212,
           "2015-01-01T00:00:00Z", 247
         ],
         "end": "2015-04-01T00:00:00Z",
         "gap": "+1MONTH",
         "start": "2005-04-01T00:00:00Z"
       }
     }
   },
   "response": {
     "docs": [],
     "numFound": 2432,
     "start": 0
   },
   "responseHeader": {
     "QTime": 3,
     "params": {
       "defType": "edismax",
       "f.post_date_tdt.facet.mincount": "200",
       "f.post_date_tdt.facet.range.end": "NOW/MONTH",
       "f.post_date_tdt.facet.range.gap": "+1MONTH",
       "f.post_date_tdt.facet.range.start": "NOW-120MONTHS/MONTH",
       "facet": "true",
       "facet.range": "post_date_tdt",
       "q": "video",
       "rows": "0",
       "start": "0",
       "wt": "json"
     },
     "status": 0
   }
 }
 {code}
 So far, so good. But what if I want to limit my results to just the top 3 
 facets? Adding f.post_date_tdt.facet.limit=3 doesn't have any effect.
 {code}
 curl 
 'http://myhost/solr/myapp/select/?defType=edismax&q=video&rows=0&start=0&wt=json&facet=true&facet.range=post_date_tdt&f.post_date_tdt.facet.limit=3&f.post_date_tdt.facet.mincount=200&f.post_date_tdt.facet.range.end=NOW%2FMONTH&f.post_date_tdt.facet.range.gap=%2B1MONTH&f.post_date_tdt.facet.range.start=NOW-120MONTHS%2FMONTH'
 {
   "facet_counts": {
     "facet_dates": {},
     "facet_fields": {},
     "facet_intervals": {},
     "facet_queries": {},
     "facet_ranges": {
       "post_date_tdt": {
         "counts": [
           "2014-07-01T00:00:00Z", 202,
           "2014-08-01T00:00:00Z", 264,
           "2014-09-01T00:00:00Z", 212,
           "2015-01-01T00:00:00Z", 247
         ],
         "end": "2015-04-01T00:00:00Z",
         "gap": "+1MONTH",
         "start": "2005-04-01T00:00:00Z"
       }
     }
   },
   "response": {
     "docs": [],
     "numFound": 2432,
     "start": 0
   },
   "responseHeader": {
     "QTime": 5,
     "params": {
       "defType": "edismax",
       "f.post_date_tdt.facet.limit": "3",
       "f.post_date_tdt.facet.mincount": "200",
       "f.post_date_tdt.facet.range.end": "NOW/MONTH",
       "f.post_date_tdt.facet.range.gap": "+1MONTH",
       "f.post_date_tdt.facet.range.start": "NOW-120MONTHS/MONTH",
       "facet": "true",
       "facet.range": "post_date_tdt",
       "q": "video",
       "rows": "0",
       "start": "0",
       "wt": "json"
     },
     "status": 0
   }
 }
 {code}






[JENKINS] Lucene-Solr-Tests-5.x-Java7 - Build # 2942 - Still Failing

2015-04-13 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-5.x-Java7/2942/

4 tests failed.
REGRESSION:  org.apache.solr.cloud.RecoveryAfterSoftCommitTest.test

Error Message:
Didn't see all replicas for shard shard1 in collection1 come up within 3 
ms! ClusterState: {"control_collection":{"shards":{"shard1":{
"range":"8000-7fff","state":"active","replicas":{"core_node1":{
"core":"collection1","state":"active","base_url":"http://127.0.0.1:46288",
"node_name":"127.0.0.1:46288_","leader":"true"}}}},"maxShardsPerNode":"1",
"router":{"name":"compositeId"},"replicationFactor":"1","autoCreated":"true",
"autoAddReplicas":"false"},"collection1":{"shards":{"shard1":{
"range":"8000-7fff","state":"active","replicas":{"core_node1":{
"core":"collection1","state":"active","base_url":"http://127.0.0.1:51046",
"node_name":"127.0.0.1:51046_","leader":"true"},"core_node2":{
"core":"collection1","state":"recovering","base_url":"http://127.0.0.1:18772",
"node_name":"127.0.0.1:18772_"}}}},"maxShardsPerNode":"1",
"router":{"name":"compositeId"},"replicationFactor":"1","autoCreated":"true",
"autoAddReplicas":"false"}}

Stack Trace:
java.lang.AssertionError: Didn't see all replicas for shard shard1 in 
collection1 come up within 3 ms! ClusterState: {
  "control_collection":{
    "shards":{"shard1":{
        "range":"8000-7fff",
        "state":"active",
        "replicas":{"core_node1":{
            "core":"collection1",
            "state":"active",
            "base_url":"http://127.0.0.1:46288",
            "node_name":"127.0.0.1:46288_",
            "leader":"true"}}}},
    "maxShardsPerNode":"1",
    "router":{"name":"compositeId"},
    "replicationFactor":"1",
    "autoCreated":"true",
    "autoAddReplicas":"false"},
  "collection1":{
    "shards":{"shard1":{
        "range":"8000-7fff",
        "state":"active",
        "replicas":{
          "core_node1":{
            "core":"collection1",
            "state":"active",
            "base_url":"http://127.0.0.1:51046",
            "node_name":"127.0.0.1:51046_",
            "leader":"true"},
          "core_node2":{
            "core":"collection1",
            "state":"recovering",
            "base_url":"http://127.0.0.1:18772",
            "node_name":"127.0.0.1:18772_"}}}},
    "maxShardsPerNode":"1",
    "router":{"name":"compositeId"},
    "replicationFactor":"1",
    "autoCreated":"true",
    "autoAddReplicas":"false"}}
at 
__randomizedtesting.SeedInfo.seed([C6B12823915A60EB:4EE517F93FA60D13]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.ensureAllReplicasAreActive(AbstractFullDistribZkTestBase.java:1920)
at 
org.apache.solr.cloud.RecoveryAfterSoftCommitTest.test(RecoveryAfterSoftCommitTest.java:102)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:960)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:935)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 

[jira] [Commented] (SOLR-6692) hl.maxAnalyzedChars should apply cumulatively on a multi-valued field

2015-04-13 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6692?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14493428#comment-14493428
 ] 

ASF subversion and git services commented on SOLR-6692:
---

Commit 1673332 from [~dsmiley] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1673332 ]

SOLR-6692: highlighter refactorings...
 * extract method getDocPrefetchFieldNames
 * trim field names in getHighlightFields instead of later on
 * lazy-create FVH (could be expensive for wildcard queries)

 hl.maxAnalyzedChars should apply cumulatively on a multi-valued field
 -

 Key: SOLR-6692
 URL: https://issues.apache.org/jira/browse/SOLR-6692
 Project: Solr
  Issue Type: Improvement
  Components: highlighter
Reporter: David Smiley
Assignee: David Smiley
 Fix For: 5.2

 Attachments: 
 SOLR-6692_hl_maxAnalyzedChars_cumulative_multiValued,_and_more.patch


 in DefaultSolrHighlighter, the hl.maxAnalyzedChars figure is used to 
 constrain how much text is analyzed before the highlighter stops, in the 
 interests of performance.  For a multi-valued field, it effectively treats 
 each value anew, no matter how much text was previously analyzed for other 
 values for the same field for the current document. The PostingsHighlighter 
 doesn't work this way -- hl.maxAnalyzedChars is effectively the total budget 
 for a field for a document, no matter how many values there might be.  It's 
 not reset for each value.  I think this makes more sense.  When we loop over 
 the values, we should subtract from hl.maxAnalyzedChars the length of the 
 value just checked.  The motivation here is consistency with 
 PostingsHighlighter, and to allow for hl.maxAnalyzedChars to be pushed down 
 to term vector uninversion, which wouldn't be possible for multi-valued 
 fields based on the current way this parameter is used.
 Interestingly, I noticed Solr's use of FastVectorHighlighter doesn't honor 
 hl.maxAnalyzedChars as the FVH doesn't have a knob for that.  It does have 
 hl.phraseLimit which is a limit that could be used for a similar purpose, 
 albeit applied differently.
 Furthermore, DefaultSolrHighligher.doHighlightingByHighlighter should exit 
 early from it's field value loop if it reaches hl.snippets, and if 
 hl.preserveMulti=true
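The cumulative-budget loop described above can be sketched as follows. This is a minimal illustration with hypothetical names, not the actual DefaultSolrHighlighter code:

```java
import java.util.List;

// Hypothetical sketch: one hl.maxAnalyzedChars budget shared across all
// values of a multi-valued field, decremented as each value is consumed.
public class CumulativeBudget {

    /** Returns how many chars of each value would be analyzed under a shared budget. */
    static int[] charsToAnalyze(List<String> values, int maxAnalyzedChars) {
        int[] result = new int[values.size()];
        int remaining = maxAnalyzedChars;       // shared across all values, not reset per value
        for (int i = 0; i < values.size(); i++) {
            if (remaining <= 0) {
                break;                          // budget exhausted: stop analyzing early
            }
            result[i] = Math.min(values.get(i).length(), remaining);
            remaining -= values.get(i).length(); // subtract the whole value's length
        }
        return result;
    }

    public static void main(String[] args) {
        // Three 5-char values under an 8-char budget: the second value is
        // truncated and the third is never analyzed.
        int[] r = charsToAnalyze(List.of("aaaaa", "bbbbb", "ccccc"), 8);
        System.out.println(java.util.Arrays.toString(r)); // [5, 3, 0]
    }
}
```

Contrast this with resetting `remaining` inside the loop, which is the per-value behavior the issue argues against.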



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6203) cast exception while searching with sort function and result grouping

2015-04-13 Thread Judith (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6203?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14493264#comment-14493264
 ] 

Judith commented on SOLR-6203:
--

Hello, has anybody looked into this? I have tried to take Hoss Man's advice, 
but my tests are still failing. This is my first dive into Solr code and I am 
only guessing how things fit together. I don't think it's worth posting my 
code, but here is the list of source files I have stumbled across and modified; 
if some other files are calling out for modification, please let me know! 
Thanks in advance for any suggestions.

./solr/core/src/java/org/apache/solr/handler/component/ResponseBuilder.java

./solr/core/src/java/org/apache/solr/handler/component/QueryComponent.java

./solr/core/src/java/org/apache/solr/search/grouping/distributed/command/SearchGroupsFieldCommand.java

./solr/core/src/java/org/apache/solr/search/grouping/distributed/command/TopGroupsFieldCommand.java

./solr/core/src/java/org/apache/solr/search/grouping/distributed/shardresultserializer/TopGroupsResultTransformer.java

./solr/core/src/java/org/apache/solr/search/grouping/distributed/shardresultserializer/SearchGroupsResultTransformer.java

./solr/core/src/java/org/apache/solr/search/grouping/distributed/shardresultserializer/ShardResultTransformer.java

./solr/core/src/java/org/apache/solr/search/grouping/distributed/responseprocessor/SearchGroupShardResponseProcessor.java

./solr/core/src/java/org/apache/solr/search/grouping/distributed/responseprocessor/TopGroupsShardResponseProcessor.java

./solr/core/src/java/org/apache/solr/search/grouping/GroupingSpecification.java

./solr/core/src/java/org/apache/solr/search/QParser.java


 cast exception while searching with sort function and result grouping
 -

 Key: SOLR-6203
 URL: https://issues.apache.org/jira/browse/SOLR-6203
 Project: Solr
  Issue Type: Bug
  Components: search
Affects Versions: 4.7, 4.8
Reporter: Nathan Dire
 Attachments: SOLR-6203-unittest.patch


 After upgrading from 4.5.1 to 4.7+, a schema including a {{*}} dynamic 
 field as text gets a cast exception when using a sort function and result 
 grouping.  
 Repro (with example config):
 # Add {{*}} dynamic field as a {{TextField}}, eg:
 {noformat}
 <dynamicField name="*" type="text_general" multiValued="true" />
 {noformat}
 # Create a sharded collection:
 {noformat}
 curl 
 'http://localhost:8983/solr/admin/collections?action=CREATE&name=test&numShards=2&maxShardsPerNode=2'
 {noformat}
 # Add example docs (query must have some results)
 # Submit query which sorts on a function result and uses result grouping:
 {noformat}
 {
   "responseHeader": {
     "status": 500,
     "QTime": 50,
     "params": {
       "sort": "sqrt(popularity) desc",
       "indent": "true",
       "q": "*:*",
       "_": "1403709010008",
       "group.field": "manu",
       "group": "true",
       "wt": "json"
     }
   },
   "error": {
     "msg": "java.lang.Double cannot be cast to org.apache.lucene.util.BytesRef",
     "code": 500
   }
 }
 {noformat}
 Source exception from log:
 {noformat}
 ERROR - 2014-06-25 08:10:10.055; org.apache.solr.common.SolrException; 
 java.lang.ClassCastException: java.lang.Double cannot be cast to 
 org.apache.lucene.util.BytesRef
 at 
 org.apache.solr.schema.FieldType.marshalStringSortValue(FieldType.java:981)
 at org.apache.solr.schema.TextField.marshalSortValue(TextField.java:176)
 at 
 org.apache.solr.search.grouping.distributed.shardresultserializer.SearchGroupsResultTransformer.serializeSearchGroup(SearchGroupsResultTransformer.java:125)
 at 
 org.apache.solr.search.grouping.distributed.shardresultserializer.SearchGroupsResultTransformer.transform(SearchGroupsResultTransformer.java:65)
 at 
 org.apache.solr.search.grouping.distributed.shardresultserializer.SearchGroupsResultTransformer.transform(SearchGroupsResultTransformer.java:43)
 at 
 org.apache.solr.search.grouping.CommandHandler.processResult(CommandHandler.java:193)
 at 
 org.apache.solr.handler.component.QueryComponent.process(QueryComponent.java:340)
 at 
 org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:218)
 at 
 org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
   ...
 {noformat}
 It looks like {{serializeSearchGroup}} is matching the sort expression as the 
 {{*}} dynamic field, which is a TextField in the repro.






[jira] [Commented] (SOLR-7349) Cleanup or fix cloud-dev scripts

2015-04-13 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7349?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14493291#comment-14493291
 ] 

Hoss Man commented on SOLR-7349:


If these aren't intended to be used by regular users, we should pull them out 
of the release -- they can happily live in dev-tools.

Alternatively: pull whatever value they have into bin/solr as a more advanced 
cloud example?

 Cleanup or fix cloud-dev scripts
 

 Key: SOLR-7349
 URL: https://issues.apache.org/jira/browse/SOLR-7349
 Project: Solr
  Issue Type: Improvement
  Components: scripts and tools
Reporter: Ramkumar Aiyengar
Assignee: Ramkumar Aiyengar
Priority: Minor
 Fix For: 5.2

 Attachments: SOLR-7349.patch


 With Jetty 9, the cloud-dev scripts no longer work in trunk; we need to either 
 clean them up or fix them.






[JENKINS] Lucene-Solr-5.1-Linux (32bit/jdk1.9.0-ea-b54) - Build # 259 - Failure!

2015-04-13 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.1-Linux/259/
Java: 32bit/jdk1.9.0-ea-b54 -server -XX:+UseG1GC

1 tests failed.
FAILED:  org.apache.solr.cloud.FullSolrCloudDistribCmdsTest.test

Error Message:
Error from server at 
http://127.0.0.1:39812/o/q/implicit_collection_without_routerfield_shard1_replica1:
 no servers hosting shard: 

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at 
http://127.0.0.1:39812/o/q/implicit_collection_without_routerfield_shard1_replica1:
 no servers hosting shard: 
at 
__randomizedtesting.SeedInfo.seed([C9DEE99FBB3535E6:418AD64515C9581E]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:556)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:233)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:225)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:135)
at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:943)
at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:958)
at 
org.apache.solr.cloud.FullSolrCloudDistribCmdsTest.testDeleteByIdImplicitRouter(FullSolrCloudDistribCmdsTest.java:225)
at 
org.apache.solr.cloud.FullSolrCloudDistribCmdsTest.test(FullSolrCloudDistribCmdsTest.java:144)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:502)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:960)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:935)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Commented] (SOLR-7344) Use two thread pools, one for internal requests and one for external, to avoid distributed deadlock and decrease the number of threads that need to be created.

2015-04-13 Thread Hrishikesh Gadre (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7344?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14493300#comment-14493300
 ] 

Hrishikesh Gadre commented on SOLR-7344:


Here is a high-level design. I have a reasonably working patch against Solr 
4.10.3. If there are no major objections to this proposal, I will prepare and 
submit a patch against trunk.

- Define two separate endpoints for Solr - one to handle internal requests 
(i.e. communication between Solr servers) and the other for external requests 
(i.e. communication between clients and servers). Each endpoint would be 
backed by a dedicated thread pool.
- Define a property 'externalPort' in solr.xml (under the solrcloud 
configuration element) along with a similarly named Java system property. This 
property would define the port used by the external endpoint.
- Make appropriate changes in Solr such that:
  -- This property is published as part of the clusterstate.json ZNODE (along 
with the current base_url property, which is used for internal requests).
  -- The solrj implementation uses this newly introduced property instead of 
base_url (in CloudSolrServer). If the new property is missing (e.g. a new 
client connecting to an old server), it falls back to the old property for 
backward compatibility.
  -- No other server-side code needs to change (since the server side uses the 
base_url property anyway).

If all external requests are sent to the external endpoint, a distributed 
deadlock cannot occur, since only threads associated with the external endpoint 
will be doing the scatter/gather, and no two scatter/gather requests will 
directly depend upon each other. In the worst case, we can get a socket timeout 
during the gather phase if too many internal requests are sent to a specific 
Solr server, but we cannot run into deadlock scenarios.

The same cannot be said if external requests also land on the internal 
endpoint. In that case one or more internal threads may be doing scatter/gather 
and hence would depend upon each other (just like today), so there is a 
possibility of distributed deadlock. To prevent this from happening, we should 
also add validation to ensure that external requests sent to the internal 
endpoint are rejected.

This can be implemented by tagging internal requests in Solr (via an additional 
request parameter or a header) and adding validation via a servlet filter to 
reject external requests sent to the internal endpoint. To check whether a 
request arrived on the internal endpoint, we can use the 
ServletRequest#getLocalPort() method.
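The rejection rule just described can be sketched as pure decision logic. The port numbers and the notion of a "tagged internal" request here are illustrative assumptions, not actual Solr values; in the real proposal the check would live in a servlet filter consulting ServletRequest#getLocalPort():

```java
// Hypothetical sketch of the proposed rejection rule: reject any request
// that arrives on the internal endpoint without the internal tag.
public class EndpointGuard {

    static boolean shouldReject(int localPort, int internalPort, boolean taggedInternal) {
        // Only tagged (server-to-server) requests may use the internal port.
        return localPort == internalPort && !taggedInternal;
    }

    public static void main(String[] args) {
        int internalPort = 8984; // assumed internal endpoint port

        System.out.println(shouldReject(8984, internalPort, false)); // true: external request on internal port
        System.out.println(shouldReject(8984, internalPort, true));  // false: tagged internal request
        System.out.println(shouldReject(8983, internalPort, false)); // false: request on external endpoint
    }
}
```

In a servlet filter, the first case would translate to sending an HTTP 403 and short-circuiting the filter chain.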

Open questions:
(1) For the /admin/collections and /admin/cores APIs, we currently use 
information stored under the live_nodes ZNODE. Each ZNODE under live_nodes is 
named host_name:port_number_solr. The port number mentioned here corresponds to 
the internal endpoint (used for server-to-server communication). What is the 
best way to add more information to it (e.g. the external port value)? Maybe as 
the content of the ZNODE?
https://github.com/apache/lucene-solr/blob/817303840fce547a1557e330e93e5a8ac0618f34/solr/solrj/src/java/org/apache/solr/client/solrj/impl/CloudSolrServer.java#L550

(2) What is your opinion on rejecting external requests sent to the internal 
endpoint? Any alternatives?

 Use two thread pools, one for internal requests and one for external, to 
 avoid distributed deadlock and decrease the number of threads that need to be 
 created.
 ---

 Key: SOLR-7344
 URL: https://issues.apache.org/jira/browse/SOLR-7344
 Project: Solr
  Issue Type: Improvement
  Components: SolrCloud
Reporter: Mark Miller








[JENKINS] Lucene-Solr-5.1-Linux (32bit/jdk1.8.0_40) - Build # 260 - Still Failing!

2015-04-13 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.1-Linux/260/
Java: 32bit/jdk1.8.0_40 -server -XX:+UseG1GC

1 tests failed.
FAILED:  org.apache.solr.core.TestArbitraryIndexDir.testLoadNewIndexDir

Error Message:
Exception during query

Stack Trace:
java.lang.RuntimeException: Exception during query
at 
__randomizedtesting.SeedInfo.seed([BBBD7229792380A6:52E7C911E7BA100E]:0)
at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:794)
at 
org.apache.solr.core.TestArbitraryIndexDir.testLoadNewIndexDir(TestArbitraryIndexDir.java:128)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.RuntimeException: REQUEST FAILED: xpath=*[count(//doc)=1]
xml response was: <?xml version="1.0" encoding="UTF-8"?>
<response>
<lst name="responseHeader"><int name="status">0</int><int 
name="QTime">1</int></lst><result name="response" numFound="0" 
start="0"></result>
</response>

request was: q=id:2&qt=standard&start=0&rows=20&version=2.2
at 

[jira] [Issue Comment Deleted] (SOLR-6203) cast exception while searching with sort function and result grouping

2015-04-13 Thread Judith Silverman (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6203?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Judith Silverman updated SOLR-6203:
---
Comment: was deleted

(was: Hello, has anybody looked into this? I have tried to take Hoss Man's 
advice, but my tests are still failing. This is my first dive into Solr code 
and I am only guessing how things fit together. I don't think it's worth 
posting my code, but here is the list of source files I have stumbled across 
and modified; if some other files are calling out for modification, please let 
me know! Thanks in advance for any suggestions.

./solr/core/src/java/org/apache/solr/handler/component/ResponseBuilder.java

./solr/core/src/java/org/apache/solr/handler/component/QueryComponent.java

./solr/core/src/java/org/apache/solr/search/grouping/distributed/command/SearchGroupsFieldCommand.java

./solr/core/src/java/org/apache/solr/search/grouping/distributed/command/TopGroupsFieldCommand.java

./solr/core/src/java/org/apache/solr/search/grouping/distributed/shardresultserializer/TopGroupsResultTransformer.java

./solr/core/src/java/org/apache/solr/search/grouping/distributed/shardresultserializer/SearchGroupsResultTransformer.java

./solr/core/src/java/org/apache/solr/search/grouping/distributed/shardresultserializer/ShardResultTransformer.java

./solr/core/src/java/org/apache/solr/search/grouping/distributed/responseprocessor/SearchGroupShardResponseProcessor.java

./solr/core/src/java/org/apache/solr/search/grouping/distributed/responseprocessor/TopGroupsShardResponseProcessor.java

./solr/core/src/java/org/apache/solr/search/grouping/GroupingSpecification.java

./solr/core/src/java/org/apache/solr/search/QParser.java)

 cast exception while searching with sort function and result grouping
 -

 Key: SOLR-6203
 URL: https://issues.apache.org/jira/browse/SOLR-6203
 Project: Solr
  Issue Type: Bug
  Components: search
Affects Versions: 4.7, 4.8
Reporter: Nathan Dire
 Attachments: SOLR-6203-unittest.patch


 After upgrading from 4.5.1 to 4.7+, a schema including a {{*}} dynamic 
 field as text gets a cast exception when using a sort function and result 
 grouping.  
 Repro (with example config):
 # Add {{*}} dynamic field as a {{TextField}}, eg:
 {noformat}
 <dynamicField name="*" type="text_general" multiValued="true" />
 {noformat}
 # Create a sharded collection:
 {noformat}
 curl 
 'http://localhost:8983/solr/admin/collections?action=CREATE&name=test&numShards=2&maxShardsPerNode=2'
 {noformat}
 # Add example docs (query must have some results)
 # Submit query which sorts on a function result and uses result grouping:
 {noformat}
 {
   "responseHeader": {
     "status": 500,
     "QTime": 50,
     "params": {
       "sort": "sqrt(popularity) desc",
       "indent": "true",
       "q": "*:*",
       "_": "1403709010008",
       "group.field": "manu",
       "group": "true",
       "wt": "json"
     }
   },
   "error": {
     "msg": "java.lang.Double cannot be cast to org.apache.lucene.util.BytesRef",
     "code": 500
   }
 }
 {noformat}
 Source exception from log:
 {noformat}
 ERROR - 2014-06-25 08:10:10.055; org.apache.solr.common.SolrException; 
 java.lang.ClassCastException: java.lang.Double cannot be cast to 
 org.apache.lucene.util.BytesRef
 at 
 org.apache.solr.schema.FieldType.marshalStringSortValue(FieldType.java:981)
 at org.apache.solr.schema.TextField.marshalSortValue(TextField.java:176)
 at 
 org.apache.solr.search.grouping.distributed.shardresultserializer.SearchGroupsResultTransformer.serializeSearchGroup(SearchGroupsResultTransformer.java:125)
 at 
 org.apache.solr.search.grouping.distributed.shardresultserializer.SearchGroupsResultTransformer.transform(SearchGroupsResultTransformer.java:65)
 at 
 org.apache.solr.search.grouping.distributed.shardresultserializer.SearchGroupsResultTransformer.transform(SearchGroupsResultTransformer.java:43)
 at 
 org.apache.solr.search.grouping.CommandHandler.processResult(CommandHandler.java:193)
 at 
 org.apache.solr.handler.component.QueryComponent.process(QueryComponent.java:340)
 at 
 org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:218)
 at 
 org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
   ...
 {noformat}
 It looks like {{serializeSearchGroup}} is matching the sort expression as the 
 {{*}} dynamic field, which is a TextField in the repro.






[jira] [Resolved] (SOLR-6692) hl.maxAnalyzedChars should apply cumulatively on a multi-valued field

2015-04-13 Thread David Smiley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6692?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley resolved SOLR-6692.

Resolution: Fixed

 hl.maxAnalyzedChars should apply cumulatively on a multi-valued field
 -

 Key: SOLR-6692
 URL: https://issues.apache.org/jira/browse/SOLR-6692
 Project: Solr
  Issue Type: Improvement
  Components: highlighter
Reporter: David Smiley
Assignee: David Smiley
 Fix For: 5.2

 Attachments: 
 SOLR-6692_hl_maxAnalyzedChars_cumulative_multiValued,_and_more.patch


 In DefaultSolrHighlighter, the hl.maxAnalyzedChars figure is used to 
 constrain how much text is analyzed before the highlighter stops, in the 
 interests of performance.  For a multi-valued field, it effectively treats 
 each value anew, no matter how much text was previously analyzed for other 
 values of the same field for the current document. The PostingsHighlighter 
 doesn't work this way -- hl.maxAnalyzedChars is effectively the total budget 
 for a field for a document, no matter how many values there might be.  It's 
 not reset for each value.  I think this makes more sense.  When we loop over 
 the values, we should subtract from hl.maxAnalyzedChars the length of the 
 value just checked.  The motivation here is consistency with 
 PostingsHighlighter, and to allow hl.maxAnalyzedChars to be pushed down 
 to term vector uninversion, which wouldn't be possible for multi-valued 
 fields given the current way this parameter is used.
 Interestingly, I noticed Solr's use of FastVectorHighlighter doesn't honor 
 hl.maxAnalyzedChars, as the FVH doesn't have a knob for that.  It does have 
 hl.phraseLimit, a limit that could be used for a similar purpose, 
 albeit applied differently.
 Furthermore, DefaultSolrHighlighter.doHighlightingByHighlighter should exit 
 early from its field-value loop if it reaches hl.snippets and 
 hl.preserveMulti=true.






[jira] [Commented] (SOLR-3935) Change the default jetty connector to be the NIO implementation.

2015-04-13 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3935?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14493505#comment-14493505
 ] 

Shawn Heisey commented on SOLR-3935:


It appears that Jetty 9 (trunk) uses a completely different connector - 
ServerConnector.

I believe that the upgrade to Jetty 9 in branch_5x is planned soon ... which 
would make my earlier concerns moot.

 Change the default jetty connector to be the NIO implementation.
 

 Key: SOLR-3935
 URL: https://issues.apache.org/jira/browse/SOLR-3935
 Project: Solr
  Issue Type: Improvement
Reporter: Mark Miller
Assignee: Mark Miller
Priority: Minor
 Fix For: 4.9, Trunk









[jira] [Commented] (SOLR-7372) Limit LRUCache by RAM usage

2015-04-13 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7372?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14493575#comment-14493575
 ] 

Noble Paul commented on SOLR-7372:
--

That is not the default value. It is the type information; in this case, '20' 
means an XML attribute with an integer type.

 Limit LRUCache by RAM usage
 ---

 Key: SOLR-7372
 URL: https://issues.apache.org/jira/browse/SOLR-7372
 Project: Solr
  Issue Type: Improvement
Reporter: Shalin Shekhar Mangar
Assignee: Shalin Shekhar Mangar
 Fix For: Trunk, 5.2

 Attachments: SOLR-7372.patch, SOLR-7372.patch, SOLR-7372.patch, 
 SOLR-7372.patch, SOLR-7372.patch


 Now that SOLR-7371 has made DocSet impls Accountable, we should add an option 
 to LRUCache to limit itself by RAM.
 I propose to add a 'maxRamBytes' configuration parameter which it can use to 
 evict items once the total RAM usage of the cache reaches this limit.
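For illustration, configuring such a cache in solrconfig.xml might look like the following. This is a hypothetical sketch using the 'maxRamBytes' name proposed above; the cache element, size, and values are placeholders, not a confirmed final syntax:

```xml
<!-- Hypothetical: evict entries once total RAM usage reaches maxRamBytes,
     regardless of how many entries the cache holds. -->
<queryResultCache class="solr.LRUCache"
                  size="512"
                  maxRamBytes="104857600"
                  autowarmCount="0"/>
```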






[JENKINS] Lucene-Solr-5.x-Windows (64bit/jdk1.8.0_40) - Build # 4556 - Failure!

2015-04-13 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Windows/4556/
Java: 64bit/jdk1.8.0_40 -XX:+UseCompressedOops -XX:+UseParallelGC

1 tests failed.
FAILED:  org.apache.lucene.store.TestRateLimiter.testPause

Error Message:
we should sleep less than 2 seconds but did: 2244 millis

Stack Trace:
java.lang.AssertionError: we should sleep less than 2 seconds but did: 2244 
millis
at 
__randomizedtesting.SeedInfo.seed([885A6AB8E952A36D:EEFA2E86627FFA6B]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.lucene.store.TestRateLimiter.testPause(TestRateLimiter.java:41)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
at 
at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845)
at com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747)
at com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781)
at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at java.lang.Thread.run(Thread.java:745)




Build Log:
[...truncated 876 lines...]
   [junit4] Suite: org.apache.lucene.store.TestRateLimiter
   [junit4]   2> NOTE: reproduce with: ant test -Dtestcase=TestRateLimiter -Dtests.method=testPause -Dtests.seed=885A6AB8E952A36D -Dtests.slow=true -Dtests.locale=sl_SI -Dtests.timezone=Africa/Conakry -Dtests.asserts=true -Dtests.file.encoding=UTF-8
   [junit4] FAILURE 2.22s J0 | TestRateLimiter.testPause
   [junit4]    > Throwable #1: java.lang.AssertionError: we should sleep less than 2 seconds but did: 2244 millis
   [junit4]    > at __randomizedtesting.SeedInfo.seed([885A6AB8E952A36D:EEFA2E86627FFA6B]:0)
   

[jira] [Updated] (LUCENE-6421) Add two-phase support to MultiPhraseQuery

2015-04-13 Thread Robert Muir (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6421?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Muir updated LUCENE-6421:

Attachment: LUCENE-6421_luceneutil.patch
LUCENE-6421.patch

See attached patch and benchmarks modifications / tasks file.

* no longer keeps subs one document ahead; it's like a normal disjunction
* positions reading/merging are deferred until freq() is called.
* general cleanups

The problem with the current code is about more than just two-phase iteration: 
because it always reads all positions from all subs on nextDoc()/advance(), it 
slows down even the simplest multiphrase queries, like these added to the tasks 
file:
{noformat}
MultiPhraseHHH: multiPhrase//(body:in|of the)
MultiPhraseHHM: multiPhrase//(body:in|of your)
MultiPhraseHHL: multiPhrase//(body:in|of harvard)
MultiPhraseMMH: multiPhrase//(body:northern|southern states)
MultiPhraseMMM: multiPhrase//(body:northern|southern usa)
MultiPhraseMML: multiPhrase//(body:northern|southern iraq)
{noformat}

So in the example of northern|southern states, today all positions are read 
from either or both 'northern' and 'southern', regardless of whether 'states' 
is present in the doc at all. Filters will only aggravate the situation even 
more. 
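The deferral described above (match document IDs first, read and merge positions only once a document is otherwise a candidate) can be sketched outside Lucene. This is a hypothetical illustration of the two-phase pattern, not the patch's actual code; all names below are invented:

```java
import java.util.Arrays;
import java.util.List;
import java.util.function.IntPredicate;

public class TwoPhaseSketch {
    /** Cheap approximation: a plain disjunction over the sub-terms' doc IDs. */
    static boolean approximationMatches(List<int[]> subPostings, int doc) {
        for (int[] docs : subPostings) {
            if (Arrays.binarySearch(docs, doc) >= 0) return true; // postings are sorted
        }
        return false;
    }

    /**
     * Two-phase match: the expensive positions check (simulated by the
     * predicate) only runs when the cheap disjunction finds a candidate.
     */
    static boolean matches(List<int[]> subPostings, int doc, IntPredicate positionsCheck) {
        return approximationMatches(subPostings, doc) && positionsCheck.test(doc);
    }
}
```

A conjunction or filter can then reject documents via the approximation alone, never paying for positions, which is the effect the benchmark tasks below are measuring.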

Benchmarking these is super-slow, but after a few iterations it looks like this:
{noformat}
Task                 QPS trunk      StdDev   QPS patch      StdDev       Pct diff
MultiPhraseHHH            0.34      (2.1%)        0.33      (1.4%)   -2.1% (  -5% -    1%)
MultiPhraseHHL           17.26      (0.7%)       17.67      (0.5%)    2.3% (   1% -    3%)
MultiPhraseHHM            5.13      (1.6%)        5.34      (0.3%)    4.1% (   2% -    6%)
MultiPhraseMMH           33.99      (1.3%)       39.19      (0.7%)   15.3% (  13% -   17%)
MultiPhraseMML          160.11      (0.2%)      202.29      (0.6%)   26.3% (  25% -   27%)
MultiPhraseMMM           72.20      (1.7%)       95.66      (2.0%)   32.5% (  28% -   36%)
{noformat}

 Add two-phase support to MultiPhraseQuery
 -

 Key: LUCENE-6421
 URL: https://issues.apache.org/jira/browse/LUCENE-6421
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Robert Muir
 Attachments: LUCENE-6421.patch, LUCENE-6421_luceneutil.patch


 Two-phase support currently works for both sloppy and exact Scorers but it 
 does not work if you have multiple terms at the same position 
 (MultiPhraseQuery).
 This is because UnionPostingsEnum.nextDoc() aggressively reads and merges all 
 the positions. Perhaps even just making this initialization lazy would be enough?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Reopened] (SOLR-7372) Limit LRUCache by RAM usage

2015-04-13 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7372?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar reopened SOLR-7372:
-

Thanks [~noble.paul] for pointing out that this needs to be added to 
ConfigOverlay. How do I specify this config parameter without a default value?

 Limit LRUCache by RAM usage
 ---

 Key: SOLR-7372
 URL: https://issues.apache.org/jira/browse/SOLR-7372
 Project: Solr
  Issue Type: Improvement
Reporter: Shalin Shekhar Mangar
Assignee: Shalin Shekhar Mangar
 Fix For: Trunk, 5.2

 Attachments: SOLR-7372.patch, SOLR-7372.patch, SOLR-7372.patch, 
 SOLR-7372.patch, SOLR-7372.patch


 Now that SOLR-7371 has made DocSet impls Accountable, we should add an option 
 to LRUCache to limit itself by RAM.
 I propose to add a 'maxRamBytes' configuration parameter which it can use to 
 evict items once the total RAM usage of the cache reaches this limit.
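A RAM-bounded LRU eviction of the kind proposed can be sketched with an access-ordered LinkedHashMap: track a running byte total and evict from the least-recently-used end until back under budget. This is a minimal illustrative sketch, not Solr's LRUCache code; names and byte accounting are invented:

```java
import java.util.Iterator;
import java.util.LinkedHashMap;
import java.util.Map;

public class RamBoundedLruSketch {
    final long maxRamBytes;
    long ramBytesUsed = 0;
    // Access-ordered map: iteration starts at the least recently used entry.
    final LinkedHashMap<String, byte[]> map = new LinkedHashMap<>(16, 0.75f, true);

    RamBoundedLruSketch(long maxRamBytes) { this.maxRamBytes = maxRamBytes; }

    void put(String key, byte[] value) {
        byte[] old = map.put(key, value);
        if (old != null) ramBytesUsed -= old.length;
        ramBytesUsed += value.length;
        // Evict least-recently-used entries until we are back under budget.
        Iterator<Map.Entry<String, byte[]>> it = map.entrySet().iterator();
        while (ramBytesUsed > maxRamBytes && it.hasNext()) {
            ramBytesUsed -= it.next().getValue().length;
            it.remove();
        }
    }
}
```

The real implementation additionally needs the Accountable sizes from SOLR-7371 rather than raw array lengths.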






[jira] [Updated] (SOLR-7385) The clusterstatus API does not return the config name for a collection

2015-04-13 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7385?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar updated SOLR-7385:

Description: The config name used while creating the collection is not 
returned by the 'clusterstatus' API. I propose to return the configset name 
used by a collection keyed by configName as part of the collection 
information.  (was: The config name used while creating the collection is not 
returned by the 'clusterstatus' API.)

 The clusterstatus API does not return the config name for a collection
 --

 Key: SOLR-7385
 URL: https://issues.apache.org/jira/browse/SOLR-7385
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 4.10.4, 5.0, 5.1
Reporter: Shalin Shekhar Mangar
Assignee: Shalin Shekhar Mangar
Priority: Minor
 Fix For: Trunk, 5.2

 Attachments: SOLR-7385.patch, SOLR-7385.patch


 The config name used while creating the collection is not returned by the 
 'clusterstatus' API. I propose to return the configset name used by a 
 collection keyed by configName as part of the collection information.
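The shape of the proposed change can be sketched as follows: the per-collection status map gains a "configName" entry holding the configset used at creation time. This is an illustrative sketch only; the map contents here are invented, not Solr's actual clusterstatus code:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class ClusterStatusSketch {
    /** Builds an illustrative per-collection status map including the new key. */
    static Map<String, Object> collectionStatus(String configName) {
        Map<String, Object> status = new LinkedHashMap<>();
        status.put("replicationFactor", "1");   // illustrative existing field
        status.put("configName", configName);   // the new key proposed above
        return status;
    }
}
```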






[jira] [Updated] (SOLR-7275) Pluggable authorization module in Solr

2015-04-13 Thread Anshum Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7275?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anshum Gupta updated SOLR-7275:
---
Attachment: SOLR-7275.patch

Here's the first patch. This introduces the following:
1. SolrAuthorizationPlugin interface - the interface that would need to be 
implemented for custom security plugins, e.g. Ranger/Sentry/...
2. Configuration mechanism for security - /security.json in ZooKeeper.
3. SolrRequestContext - HttpHeader, UserPrincipal, etc. I'm working on 
extracting more context from the request (e.g. collection, handler) and 
populating it here.

Usage:
To try this out, you need to add a /security.json node in ZooKeeper with the 
following data format:
{code}
{"class":"solr.SimpleSolrAuthorizationPlugin"}
{code}

Also, the access rules (a blacklist for now) go into /simplesecurity.json:
{code}
{"blacklist":["user1","user2"]}
{code}

This uses the http param (uname) to filter out/authorize requests. 
The following request would then start returning 401:
http://localhost:8983/solr/techproducts/select?q=*:*&wt=json&uname=user1

NOTE: The authorization plugin doesn't really do anything about inter-shard 
communication (and doesn't propagate the user principal); it can only be used 
for blacklisting right now. You could write a plugin that sets up IP-based 
rules, or I could add those rules to the plugin shipped out of the box to 
support whitelisting of user info + IP information.
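The blacklisting behavior described above can be sketched in a few lines: look up the "uname" request parameter and refuse the request when it is blacklisted. All names here are invented for illustration; the real plugin's API and configuration loading differ:

```java
import java.util.Set;

public class BlacklistSketch {
    // Mirrors the /simplesecurity.json blacklist shown above (hard-coded here).
    static final Set<String> BLACKLIST = Set.of("user1", "user2");

    /** Returns the HTTP status such a request would receive: 401 when blacklisted. */
    static int authorize(String uname) {
        return uname != null && BLACKLIST.contains(uname) ? 401 : 200;
    }
}
```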


To summarize, I'm still working on the following:
1. Extracting more information to populate the context object.
2. Having a watch on the access-rules file. I'm still debating between a watch 
vs. an explicit RELOAD-like call that updates the access rules.
3. Supporting an IP- and/or user-based whitelist.


 Pluggable authorization module in Solr
 --

 Key: SOLR-7275
 URL: https://issues.apache.org/jira/browse/SOLR-7275
 Project: Solr
  Issue Type: Sub-task
Reporter: Anshum Gupta
Assignee: Anshum Gupta
 Attachments: SOLR-7275.patch


 Solr needs an interface that makes it easy for different authorization 
 systems to be plugged into it. Here's what I plan on doing:
 Define an interface {{SolrAuthorizationPlugin}} with one single method 
 {{isAuthorized}}. This would take in a {{SolrRequestContext}} object and 
 return a {{SolrAuthorizationResponse}} object. The response as of now would 
 only contain a single boolean value, but in the future it could contain more 
 information, e.g. ACLs for document filtering.
 The reason why we need a context object is so that the plugin doesn't need to 
 understand Solr's capabilities e.g. how to extract the name of the collection 
 or other information from the incoming request as there are multiple ways to 
 specify the target collection for a request. Similarly request type can be 
 specified by {{qt}} or {{/handler_name}}.
 Flow:
 Request -> SolrDispatchFilter -> isAuthorized(context) -> Process/Return.
 {code}
 public interface SolrAuthorizationPlugin {
   public SolrAuthorizationResponse isAuthorized(SolrRequestContext context);
 }
 {code}
 {code}
 public class SolrRequestContext {
   UserInfo userInfo; // Will contain user context from the authentication layer.
   HTTPRequest request;
   Enum operationType; // Correlated with user roles.
   String[] collectionsAccessed;
   String[] fieldsAccessed;
   String resource;
 }
 {code}
 {code}
 public class SolrAuthorizationResponse {
   boolean authorized;
   public boolean isAuthorized();
 }
 {code}
 User Roles: 
 * Admin
 * Collection Level:
   * Query
   * Update
   * Admin
 Using this framework, an implementation could be written for specific 
 security systems e.g. Apache Ranger or Sentry. It would keep all the security 
 system specific code out of Solr.






[jira] [Commented] (SOLR-7372) Limit LRUCache by RAM usage

2015-04-13 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7372?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14493581#comment-14493581
 ] 

ASF subversion and git services commented on SOLR-7372:
---

Commit 1673359 from sha...@apache.org in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1673359 ]

SOLR-7372: Enable maxRamMB to be configured via the Config APIs on filterCache 
and queryResultCache

 Limit LRUCache by RAM usage
 ---

 Key: SOLR-7372
 URL: https://issues.apache.org/jira/browse/SOLR-7372
 Project: Solr
  Issue Type: Improvement
Reporter: Shalin Shekhar Mangar
Assignee: Shalin Shekhar Mangar
 Fix For: Trunk, 5.2

 Attachments: SOLR-7372.patch, SOLR-7372.patch, SOLR-7372.patch, 
 SOLR-7372.patch, SOLR-7372.patch


 Now that SOLR-7371 has made DocSet impls Accountable, we should add an option 
 to LRUCache to limit itself by RAM.
 I propose to add a 'maxRamBytes' configuration parameter which it can use to 
 evict items once the total RAM usage of the cache reaches this limit.






[jira] [Updated] (SOLR-7385) The clusterstatus API does not return the config name for a collection

2015-04-13 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7385?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar updated SOLR-7385:

Attachment: SOLR-7385.patch

Added test for other cases (clusterstatus without collection, with collection, 
with alias). This is ready.

 The clusterstatus API does not return the config name for a collection
 --

 Key: SOLR-7385
 URL: https://issues.apache.org/jira/browse/SOLR-7385
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 4.10.4, 5.0, 5.1
Reporter: Shalin Shekhar Mangar
Assignee: Shalin Shekhar Mangar
Priority: Minor
 Fix For: Trunk, 5.2

 Attachments: SOLR-7385.patch, SOLR-7385.patch


 The config name used while creating the collection is not returned by the 
 'clusterstatus' API.






[jira] [Resolved] (SOLR-7385) The clusterstatus API does not return the config name for a collection

2015-04-13 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7385?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar resolved SOLR-7385.
-
Resolution: Fixed

Thanks for reporting [~shaie]!

 The clusterstatus API does not return the config name for a collection
 --

 Key: SOLR-7385
 URL: https://issues.apache.org/jira/browse/SOLR-7385
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 4.10.4, 5.0, 5.1
Reporter: Shalin Shekhar Mangar
Assignee: Shalin Shekhar Mangar
Priority: Minor
 Fix For: Trunk, 5.2

 Attachments: SOLR-7385.patch, SOLR-7385.patch


 The config name used while creating the collection is not returned by the 
 'clusterstatus' API. I propose to return the configset name used by a 
 collection keyed by configName as part of the collection information.






[jira] [Commented] (SOLR-7372) Limit LRUCache by RAM usage

2015-04-13 Thread Shalin Shekhar Mangar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7372?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14493578#comment-14493578
 ] 

Shalin Shekhar Mangar commented on SOLR-7372:
-

Ah, okay. Thanks. I'll commit your patch.

 Limit LRUCache by RAM usage
 ---

 Key: SOLR-7372
 URL: https://issues.apache.org/jira/browse/SOLR-7372
 Project: Solr
  Issue Type: Improvement
Reporter: Shalin Shekhar Mangar
Assignee: Shalin Shekhar Mangar
 Fix For: Trunk, 5.2

 Attachments: SOLR-7372.patch, SOLR-7372.patch, SOLR-7372.patch, 
 SOLR-7372.patch, SOLR-7372.patch


 Now that SOLR-7371 has made DocSet impls Accountable, we should add an option 
 to LRUCache to limit itself by RAM.
 I propose to add a 'maxRamBytes' configuration parameter which it can use to 
 evict items once the total RAM usage of the cache reaches this limit.






[jira] [Commented] (SOLR-7372) Limit LRUCache by RAM usage

2015-04-13 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7372?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14493580#comment-14493580
 ] 

ASF subversion and git services commented on SOLR-7372:
---

Commit 1673358 from sha...@apache.org in branch 'dev/trunk'
[ https://svn.apache.org/r1673358 ]

SOLR-7372: Enable maxRamMB to be configured via the Config APIs on filterCache 
and queryResultCache

 Limit LRUCache by RAM usage
 ---

 Key: SOLR-7372
 URL: https://issues.apache.org/jira/browse/SOLR-7372
 Project: Solr
  Issue Type: Improvement
Reporter: Shalin Shekhar Mangar
Assignee: Shalin Shekhar Mangar
 Fix For: Trunk, 5.2

 Attachments: SOLR-7372.patch, SOLR-7372.patch, SOLR-7372.patch, 
 SOLR-7372.patch, SOLR-7372.patch


 Now that SOLR-7371 has made DocSet impls Accountable, we should add an option 
 to LRUCache to limit itself by RAM.
 I propose to add a 'maxRamBytes' configuration parameter which it can use to 
 evict items once the total RAM usage of the cache reaches this limit.






[jira] [Commented] (SOLR-7385) The clusterstatus API does not return the config name for a collection

2015-04-13 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7385?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14493589#comment-14493589
 ] 

ASF subversion and git services commented on SOLR-7385:
---

Commit 1673360 from sha...@apache.org in branch 'dev/trunk'
[ https://svn.apache.org/r1673360 ]

SOLR-7385: The clusterstatus API now returns the config set used to create a 
collection inside a 'configName' key

 The clusterstatus API does not return the config name for a collection
 --

 Key: SOLR-7385
 URL: https://issues.apache.org/jira/browse/SOLR-7385
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 4.10.4, 5.0, 5.1
Reporter: Shalin Shekhar Mangar
Assignee: Shalin Shekhar Mangar
Priority: Minor
 Fix For: Trunk, 5.2

 Attachments: SOLR-7385.patch, SOLR-7385.patch


 The config name used while creating the collection is not returned by the 
 'clusterstatus' API. I propose to return the configset name used by a 
 collection keyed by configName as part of the collection information.






[jira] [Commented] (SOLR-7385) The clusterstatus API does not return the config name for a collection

2015-04-13 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7385?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14493590#comment-14493590
 ] 

ASF subversion and git services commented on SOLR-7385:
---

Commit 1673361 from sha...@apache.org in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1673361 ]

SOLR-7385: The clusterstatus API now returns the config set used to create a 
collection inside a 'configName' key

 The clusterstatus API does not return the config name for a collection
 --

 Key: SOLR-7385
 URL: https://issues.apache.org/jira/browse/SOLR-7385
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 4.10.4, 5.0, 5.1
Reporter: Shalin Shekhar Mangar
Assignee: Shalin Shekhar Mangar
Priority: Minor
 Fix For: Trunk, 5.2

 Attachments: SOLR-7385.patch, SOLR-7385.patch


 The config name used while creating the collection is not returned by the 
 'clusterstatus' API. I propose to return the configset name used by a 
 collection keyed by configName as part of the collection information.






[JENKINS] Lucene-Solr-trunk-Linux (64bit/jdk1.9.0-ea-b54) - Build # 12291 - Still Failing!

2015-04-13 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/12291/
Java: 64bit/jdk1.9.0-ea-b54 -XX:-UseCompressedOops -XX:+UseSerialGC

3 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.request.TestWriterPerf

Error Message:
1

Stack Trace:
java.lang.AssertionError: 1
at __randomizedtesting.SeedInfo.seed([8A3F006EBC7DFB0E]:0)
at org.apache.solr.core.CachingDirectoryFactory.close(CachingDirectoryFactory.java:201)
at org.apache.solr.core.SolrCore.close(SolrCore.java:1176)
at org.apache.solr.core.SolrCores.close(SolrCores.java:117)
at org.apache.solr.core.CoreContainer.shutdown(CoreContainer.java:378)
at org.apache.solr.util.TestHarness.close(TestHarness.java:359)
at org.apache.solr.SolrTestCaseJ4.deleteCore(SolrTestCaseJ4.java:704)
at org.apache.solr.SolrTestCaseJ4.afterClass(SolrTestCaseJ4.java:230)
at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:502)
at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:799)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at java.lang.Thread.run(Thread.java:745)


FAILED:  org.apache.solr.cloud.FullSolrCloudDistribCmdsTest.test

Error Message:
expected:<1> but was:<2>

Stack Trace:
java.lang.AssertionError: expected:<1> but was:<2>
at __randomizedtesting.SeedInfo.seed([8A3F006EBC7DFB0E:26B3FB4128196F6]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at org.junit.Assert.assertEquals(Assert.java:456)
at org.apache.solr.cloud.FullSolrCloudDistribCmdsTest.testDeleteByIdImplicitRouter(FullSolrCloudDistribCmdsTest.java:247)
at org.apache.solr.cloud.FullSolrCloudDistribCmdsTest.test(FullSolrCloudDistribCmdsTest.java:144)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:502)
at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:960)
at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:935)

[jira] [Commented] (SOLR-7384) Delete-by-id with _route_ parameter fails on replicas for collections with implicit router

2015-04-13 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7384?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14492870#comment-14492870
 ] 

ASF subversion and git services commented on SOLR-7384:
---

Commit 1673262 from sha...@apache.org in branch 'dev/trunk'
[ https://svn.apache.org/r1673262 ]

SOLR-7384: Disable the failing tests until the root cause is fixed

 Delete-by-id with _route_ parameter fails on replicas for collections with 
 implicit router
 --

 Key: SOLR-7384
 URL: https://issues.apache.org/jira/browse/SOLR-7384
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 5.1
Reporter: Shalin Shekhar Mangar
Assignee: Shalin Shekhar Mangar
 Fix For: Trunk, 5.2

 Attachments: FullSolrCloudDistribCmdsTest-2.log, 
 FullSolrCloudDistribCmdsTest.log


 The FullSolrCloudDistribCmdsTest test has been failing quite regularly on 
 jenkins. Some of those failures are spurious, but there is an underlying bug: 
 delete-by-id requests with the _route_ parameter on a collection with the 
 implicit router fail on replicas because of a missing _version_ field.
 {quote}
 Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/12286/
 Java: 32bit/jdk1.9.0-ea-b54 -server -XX:+UseConcMarkSweepGC
 1 tests failed.
 FAILED:  org.apache.solr.cloud.FullSolrCloudDistribCmdsTest.test
 Error Message:
 Error from server at 
 http://127.0.0.1:44672/implicit_collection_without_routerfield_shard1_replica1:
  no servers hosting shard:
 Stack Trace:
 org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
 from server at 
 http://127.0.0.1:44672/implicit_collection_without_routerfield_shard1_replica1:
  no servers hosting shard:
 at 
 __randomizedtesting.SeedInfo.seed([944EEE25A6B2D153:1C1AD1FF084EBCAB]:0)
 at 
 org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:557)
 at 
 org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:234)
 at 
 org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:226)
 at 
 org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:135)
 {quote}






[jira] [Commented] (SOLR-3935) Change the default jetty connector to be the NIO implementation.

2015-04-13 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3935?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14492695#comment-14492695
 ] 

Shawn Heisey commented on SOLR-3935:


Looking at the test infrastructure, I see both the nio and bio implementations 
imported into JettySolrRunner.  Although there is code that checks whether the 
connectorName is SelectChannel (nio) or Socket (bio), that code requires a test 
property to override the default of SelectChannel, and I don't see any 
indication that this override can happen automatically or randomly:

{code}
final String connectorName = System.getProperty("tests.jettyConnector", "SelectChannel");
{code}

I think the tests will never choose the bio connector unless the *user* asks 
for it.  If that thought is correct, we are testing one jetty connector and 
shipping Solr with a config that uses another, which might lead to subtle bugs.
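The point can be demonstrated with the lookup itself: unless a user explicitly sets -Dtests.jettyConnector=Socket, the fallback value wins, so the bio path is never exercised. A minimal sketch (the method name is invented):

```java
public class ConnectorDefaultSketch {
    /**
     * Same lookup as in JettySolrRunner's test infrastructure: returns the
     * nio connector name unless the system property is explicitly set.
     */
    static String connectorName() {
        return System.getProperty("tests.jettyConnector", "SelectChannel");
    }
}
```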


 Change the default jetty connector to be the NIO implementation.
 

 Key: SOLR-3935
 URL: https://issues.apache.org/jira/browse/SOLR-3935
 Project: Solr
  Issue Type: Improvement
Reporter: Mark Miller
Assignee: Mark Miller
Priority: Minor
 Fix For: 4.9, Trunk









[jira] [Comment Edited] (SOLR-3935) Change the default jetty connector to be the NIO implementation.

2015-04-13 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3935?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14492695#comment-14492695
 ] 

Shawn Heisey edited comment on SOLR-3935 at 4/13/15 5:30 PM:
-

Looking at the test infrastructure, I see both the nio and bio implementations 
imported into JettySolrRunner.  Although there is code that checks whether the 
connectorName is SelectChannel (nio) or Socket (bio), that code requires a test 
property to override the default of SelectChannel, and I don't see any 
indication that this override can happen automatically or randomly:

{code}
final String connectorName = System.getProperty("tests.jettyConnector", "SelectChannel");
{code}

I think that the tests will never choose bio connector types unless the *user* 
asks for it.  If that thought is correct, we are testing one jetty connector 
and shipping Solr with a config that uses another, which might lead to subtle 
bugs.



was (Author: elyograg):
Looking at the test infrastructure, I see both the nio and bio implementations 
imported into JettySolrRunner.  Although there is code that checks whether the 
connectorName is SelectChannel (nio) or Socket (bio), that code requires a test 
property to override the default of SelectChannel, and I don't see any 
indication that this override can happen automatically or randomly:

{code}
final String connectorName = System.getProperty("tests.jettyConnector", "SelectChannel");
{code}

I think that line that the tests will never choose bio connector types unless 
the *user* asks for it.  If that thought is correct, we are testing one jetty 
connector and shipping Solr with a config that uses another, which might lead 
to subtle bugs.


 Change the default jetty connector to be the NIO implementation.
 

 Key: SOLR-3935
 URL: https://issues.apache.org/jira/browse/SOLR-3935
 Project: Solr
  Issue Type: Improvement
Reporter: Mark Miller
Assignee: Mark Miller
Priority: Minor
 Fix For: 4.9, Trunk









[jira] [Commented] (SOLR-6692) hl.maxAnalyzedChars should apply cumulatively on a multi-valued field

2015-04-13 Thread Varun Thacker (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6692?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14492702#comment-14492702
 ] 

Varun Thacker commented on SOLR-6692:
-

Hi [~dsmiley],

Are these two failures related? I can reproduce it on my machine but haven't 
looked into it in detail.

{code}
ant test -Dtestcase=TestWriterPerf -Dtests.method=testPerf -Dtests.seed=88D84B0068AE130 -Dtests.slow=true -Dtests.locale=es_AR -Dtests.timezone=US/East-Indiana -Dtests.asserts=true -Dtests.file.encoding=US-ASCII

ant test -Dtestcase=TestWriterPerf -Dtests.seed=88D84B0068AE130 -Dtests.slow=true -Dtests.locale=es_AR -Dtests.timezone=US/East-Indiana -Dtests.asserts=true -Dtests.file.encoding=US-ASCII
{code}

 hl.maxAnalyzedChars should apply cumulatively on a multi-valued field
 -

 Key: SOLR-6692
 URL: https://issues.apache.org/jira/browse/SOLR-6692
 Project: Solr
  Issue Type: Improvement
  Components: highlighter
Reporter: David Smiley
Assignee: David Smiley
 Fix For: 5.2

 Attachments: 
 SOLR-6692_hl_maxAnalyzedChars_cumulative_multiValued,_and_more.patch


 in DefaultSolrHighlighter, the hl.maxAnalyzedChars figure is used to 
 constrain how much text is analyzed before the highlighter stops, in the 
 interests of performance.  For a multi-valued field, it effectively treats 
 each value anew, no matter how much text was previously analyzed for other 
 values of the same field in the current document. The PostingsHighlighter 
 doesn't work this way -- hl.maxAnalyzedChars is effectively the total budget 
 for a field for a document, no matter how many values there might be.  It's 
 not reset for each value.  I think this makes more sense.  When we loop over 
 the values, we should subtract from hl.maxAnalyzedChars the length of the 
 value just checked.  The motivation here is consistency with 
 PostingsHighlighter, and to allow for hl.maxAnalyzedChars to be pushed down 
 to term vector uninversion, which wouldn't be possible for multi-valued 
 fields based on the current way this parameter is used.
 Interestingly, I noticed Solr's use of FastVectorHighlighter doesn't honor 
 hl.maxAnalyzedChars as the FVH doesn't have a knob for that.  It does have 
 hl.phraseLimit which is a limit that could be used for a similar purpose, 
 albeit applied differently.
 Furthermore, DefaultSolrHighlighter.doHighlightingByHighlighter should exit 
 early from its field value loop if it reaches hl.snippets, and if 
 hl.preserveMulti=true.
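As a rough illustration of the proposed behavior, treating hl.maxAnalyzedChars as a single per-document budget shared across the values of a multi-valued field might look like the sketch below. The class and method names are hypothetical, not the actual DefaultSolrHighlighter code:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Hypothetical sketch: hl.maxAnalyzedChars as one budget per field per
// document, decremented across values, instead of resetting per value.
public class CumulativeBudgetSketch {
    static List<String> selectAnalyzable(List<String> values, int maxAnalyzedChars) {
        List<String> toAnalyze = new ArrayList<>();
        int budget = maxAnalyzedChars;           // shared across all values
        for (String value : values) {
            if (budget <= 0) {
                break;                           // budget exhausted: skip remaining values
            }
            // analyze at most 'budget' chars of this value
            toAnalyze.add(value.length() <= budget ? value : value.substring(0, budget));
            budget -= value.length();            // subtract what this value consumed
        }
        return toAnalyze;
    }

    public static void main(String[] args) {
        // a 6-char budget covers all of the first value and part of the second
        System.out.println(selectAnalyzable(Arrays.asList("aaaa", "bbbb", "cccc"), 6));
    }
}
```

The key difference from the current behavior is that the budget carries over between loop iterations rather than being reset for each value.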






[JENKINS] Lucene-Solr-5.1-Linux (64bit/jdk1.8.0_40) - Build # 256 - Still Failing!

2015-04-13 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.1-Linux/256/
Java: 64bit/jdk1.8.0_40 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  org.apache.solr.cloud.FullSolrCloudDistribCmdsTest.test

Error Message:
Error from server at 
http://127.0.0.1:50875/compositeid_collection_with_routerfield_shard1_replica1: 
no servers hosting shard: 

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at 
http://127.0.0.1:50875/compositeid_collection_with_routerfield_shard1_replica1: 
no servers hosting shard: 
at 
__randomizedtesting.SeedInfo.seed([9F57FC1B2F2B3701:1703C3C181D75AF9]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:556)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:233)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:225)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:135)
at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:943)
at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:958)
at 
org.apache.solr.cloud.FullSolrCloudDistribCmdsTest.testDeleteByIdCompositeRouterWithRouterField(FullSolrCloudDistribCmdsTest.java:357)
at 
org.apache.solr.cloud.FullSolrCloudDistribCmdsTest.test(FullSolrCloudDistribCmdsTest.java:146)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:960)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:935)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Commented] (LUCENE-6416) BooleanQuery should only extract terms from scoring clauses

2015-04-13 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6416?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14491992#comment-14491992
 ] 

ASF subversion and git services commented on LUCENE-6416:
-

Commit 1673122 from [~jpountz] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1673122 ]

LUCENE-6416: BooleanQuery.extractTerms now only extracts terms from scoring 
clauses.

 BooleanQuery should only extract terms from scoring clauses
 ---

 Key: LUCENE-6416
 URL: https://issues.apache.org/jira/browse/LUCENE-6416
 Project: Lucene - Core
  Issue Type: Bug
Affects Versions: Trunk, 5.1
Reporter: Adrien Grand
Assignee: Adrien Grand
Priority: Minor
 Fix For: 5.2

 Attachments: LUCENE-6416.patch


 BooleanQuery should not extract terms from FILTER clauses.






[jira] [Commented] (SOLR-7381) Improve logging by adding MDC context in more places

2015-04-13 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7381?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14491991#comment-14491991
 ] 

ASF subversion and git services commented on SOLR-7381:
---

Commit 1673121 from sha...@apache.org in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1673121 ]

SOLR-7381: Improve logging by adding node name in MDC in SolrCloud mode and 
adding MDC to all thread pools

 Improve logging by adding MDC context in more places
 

 Key: SOLR-7381
 URL: https://issues.apache.org/jira/browse/SOLR-7381
 Project: Solr
  Issue Type: Improvement
  Components: SolrCloud
Reporter: Shalin Shekhar Mangar
Assignee: Shalin Shekhar Mangar
Priority: Critical
 Fix For: 5.2

 Attachments: SOLR-7381.patch, SOLR-7381.patch


 SOLR-6673 added MDC based logging in a few places but we have a lot of ground 
 to cover. Threads created via thread pool executors do not inherit MDC values 
 and those are some of the most interesting places to log MDC context. This is 
 critical to help debug SolrCloud failures.
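The executor problem described above can be sketched with plain JDK primitives, using a ThreadLocal map as a stand-in for slf4j's MDC (names are illustrative, not Solr's actual code): pool threads are created once, so context set later on the submitting thread is invisible inside tasks unless explicitly captured and restored.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class MdcPropagationSketch {
    // stand-in for org.slf4j.MDC's per-thread context map
    static final ThreadLocal<Map<String, String>> CTX =
        ThreadLocal.withInitial(HashMap::new);

    // Wrap a task so it runs with a snapshot of the submitter's context.
    static Runnable withContext(Runnable task) {
        final Map<String, String> snapshot = new HashMap<>(CTX.get());
        return () -> {
            Map<String, String> previous = CTX.get();
            CTX.set(snapshot);       // install the submitter's context
            try {
                task.run();
            } finally {
                CTX.set(previous);   // restore the pool thread's own context
            }
        };
    }

    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(1);
        CTX.get().put("node", "127.0.0.1:8983_solr");
        // without the withContext wrapper, the task would see no "node" entry
        Future<?> f = pool.submit(withContext(() ->
            System.out.println("node=" + CTX.get().get("node"))));
        f.get();
        pool.shutdown();
    }
}
```

Wrapping submitted tasks this way is one common approach to propagating MDC context into thread pools.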






[jira] [Commented] (LUCENE-6409) LongBitSet.ensureCapacity overflows on numBits > Integer.MaxValue

2015-04-13 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6409?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14491999#comment-14491999
 ] 

ASF subversion and git services commented on LUCENE-6409:
-

Commit 1673123 from [~jpountz] in branch 'dev/trunk'
[ https://svn.apache.org/r1673123 ]

LUCENE-6409: Fixed integer overflow in LongBitSet.ensureCapacity.

  LongBitSet.ensureCapacity overflows on numBits > Integer.MaxValue 
 ---

 Key: LUCENE-6409
 URL: https://issues.apache.org/jira/browse/LUCENE-6409
 Project: Lucene - Core
  Issue Type: Bug
  Components: core/other
Reporter: Luc Vanlerberghe

 LongBitSet.ensureCapacity calculates the number of longs required to store 
 the number of bits correctly and allocates a long[] accordingly, but then 
 shifts the array length (which is an int!) left by 6 bits.  The int should be 
 cast to long *before* performing the shift.
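The overflow can be sketched in isolation (illustrative, not the actual LongBitSet source): with an array length of 2^26 longs or more, shifting the int length left by 6 wraps around before it is widened to long.

```java
// Minimal reproduction of the int-shift overflow described above.
public class ShiftOverflowSketch {
    public static void main(String[] args) {
        int numWords = 1 << 26;                // enough longs for 2^32 bits
        long buggy   = numWords << 6;          // int shift overflows, then widens: 0
        long fixed   = (long) numWords << 6;   // cast to long *before* shifting
        System.out.println(buggy + " vs " + fixed);  // prints 0 vs 4294967296
    }
}
```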






[JENKINS] Lucene-Solr-5.1-Linux (64bit/jdk1.8.0_60-ea-b06) - Build # 257 - Still Failing!

2015-04-13 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.1-Linux/257/
Java: 64bit/jdk1.8.0_60-ea-b06 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  org.apache.solr.cloud.FullSolrCloudDistribCmdsTest.test

Error Message:
Error from server at 
http://127.0.0.1:50670/compositeid_collection_with_routerfield_shard1_replica1: 
no servers hosting shard: 

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at 
http://127.0.0.1:50670/compositeid_collection_with_routerfield_shard1_replica1: 
no servers hosting shard: 
at 
__randomizedtesting.SeedInfo.seed([2E056F8C0970EF78:A6515056A78C8280]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:556)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:233)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:225)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:135)
at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:943)
at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:958)
at 
org.apache.solr.cloud.FullSolrCloudDistribCmdsTest.testDeleteByIdCompositeRouterWithRouterField(FullSolrCloudDistribCmdsTest.java:357)
at 
org.apache.solr.cloud.FullSolrCloudDistribCmdsTest.test(FullSolrCloudDistribCmdsTest.java:146)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:960)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:935)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Commented] (LUCENE-6419) Add AssertingQuery / two-phase iteration to AssertingScorer

2015-04-13 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6419?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14491964#comment-14491964
 ] 

Adrien Grand commented on LUCENE-6419:
--

+1, we need this!

 Add AssertingQuery / two-phase iteration to AssertingScorer
 ---

 Key: LUCENE-6419
 URL: https://issues.apache.org/jira/browse/LUCENE-6419
 Project: Lucene - Core
  Issue Type: Test
Reporter: Robert Muir

 I am working on a similar issue with Spans (LUCENE-6411).
 AssertingScorer is currently only used as a top-level wrapper, and it doesn't 
 support asserting for asTwoPhaseIterator (which wouldn't help at the moment, 
 the way it is currently used).
 Today some good testing of this is done, but only when 
 RandomApproximationQuery is explicitly used.
 I think we should add AssertingQuery that can wrap a query with asserts and 
 we can then have checks everywhere in a complicated tree?






[JENKINS] Lucene-Solr-trunk-Windows (32bit/jdk1.8.0_40) - Build # 4669 - Still Failing!

2015-04-13 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Windows/4669/
Java: 32bit/jdk1.8.0_40 -client -XX:+UseG1GC

1 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.core.TestSolrConfigHandler

Error Message:
Could not remove the following files (in the order of attempts):
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J0\temp\solr.core.TestSolrConfigHandler
 7DD103EEE2CF6184-001\tempDir-010\collection1\conf\configoverlay.json: 
java.nio.file.FileSystemException: 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J0\temp\solr.core.TestSolrConfigHandler
 7DD103EEE2CF6184-001\tempDir-010\collection1\conf\configoverlay.json: The 
process cannot access the file because it is being used by another process. 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J0\temp\solr.core.TestSolrConfigHandler
 7DD103EEE2CF6184-001\tempDir-010\collection1\conf: 
java.nio.file.DirectoryNotEmptyException: 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J0\temp\solr.core.TestSolrConfigHandler
 7DD103EEE2CF6184-001\tempDir-010\collection1\conf
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J0\temp\solr.core.TestSolrConfigHandler
 7DD103EEE2CF6184-001\tempDir-010\collection1: 
java.nio.file.DirectoryNotEmptyException: 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J0\temp\solr.core.TestSolrConfigHandler
 7DD103EEE2CF6184-001\tempDir-010\collection1
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J0\temp\solr.core.TestSolrConfigHandler
 7DD103EEE2CF6184-001\tempDir-010: java.nio.file.DirectoryNotEmptyException: 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J0\temp\solr.core.TestSolrConfigHandler
 7DD103EEE2CF6184-001\tempDir-010
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J0\temp\solr.core.TestSolrConfigHandler
 7DD103EEE2CF6184-001\tempDir-010: java.nio.file.DirectoryNotEmptyException: 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J0\temp\solr.core.TestSolrConfigHandler
 7DD103EEE2CF6184-001\tempDir-010
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J0\temp\solr.core.TestSolrConfigHandler
 7DD103EEE2CF6184-001: java.nio.file.DirectoryNotEmptyException: 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J0\temp\solr.core.TestSolrConfigHandler
 7DD103EEE2CF6184-001 

Stack Trace:
java.io.IOException: Could not remove the following files (in the order of 
attempts):
   
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J0\temp\solr.core.TestSolrConfigHandler
 7DD103EEE2CF6184-001\tempDir-010\collection1\conf\configoverlay.json: 
java.nio.file.FileSystemException: 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J0\temp\solr.core.TestSolrConfigHandler
 7DD103EEE2CF6184-001\tempDir-010\collection1\conf\configoverlay.json: The 
process cannot access the file because it is being used by another process.

   
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J0\temp\solr.core.TestSolrConfigHandler
 7DD103EEE2CF6184-001\tempDir-010\collection1\conf: 
java.nio.file.DirectoryNotEmptyException: 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J0\temp\solr.core.TestSolrConfigHandler
 7DD103EEE2CF6184-001\tempDir-010\collection1\conf
   
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J0\temp\solr.core.TestSolrConfigHandler
 7DD103EEE2CF6184-001\tempDir-010\collection1: 
java.nio.file.DirectoryNotEmptyException: 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J0\temp\solr.core.TestSolrConfigHandler
 7DD103EEE2CF6184-001\tempDir-010\collection1
   
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J0\temp\solr.core.TestSolrConfigHandler
 7DD103EEE2CF6184-001\tempDir-010: java.nio.file.DirectoryNotEmptyException: 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J0\temp\solr.core.TestSolrConfigHandler
 7DD103EEE2CF6184-001\tempDir-010
   
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J0\temp\solr.core.TestSolrConfigHandler
 7DD103EEE2CF6184-001\tempDir-010: java.nio.file.DirectoryNotEmptyException: 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J0\temp\solr.core.TestSolrConfigHandler
 7DD103EEE2CF6184-001\tempDir-010
   
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J0\temp\solr.core.TestSolrConfigHandler
 7DD103EEE2CF6184-001: java.nio.file.DirectoryNotEmptyException: 

[jira] [Resolved] (LUCENE-6409) LongBitSet.ensureCapacity overflows on numBits > Integer.MaxValue

2015-04-13 Thread Adrien Grand (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6409?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrien Grand resolved LUCENE-6409.
--
   Resolution: Fixed
Fix Version/s: 5.2
   Trunk

Thanks Luc, I committed your changes. Let's have another JIRA issue to ensure 
that bits beyond {{numBits}} remain clear as you suggested?

  LongBitSet.ensureCapacity overflows on numBits > Integer.MaxValue 
 ---

 Key: LUCENE-6409
 URL: https://issues.apache.org/jira/browse/LUCENE-6409
 Project: Lucene - Core
  Issue Type: Bug
  Components: core/other
Reporter: Luc Vanlerberghe
 Fix For: Trunk, 5.2


 LongBitSet.ensureCapacity calculates the number of longs required to store 
 the number of bits correctly and allocates a long[] accordingly, but then 
 shifts the array length (which is an int!) left by 6 bits.  The int should be 
 cast to long *before* performing the shift.






[jira] [Commented] (LUCENE-6416) BooleanQuery should only extract terms from scoring clauses

2015-04-13 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6416?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14492081#comment-14492081
 ] 

Adrien Grand commented on LUCENE-6416:
--

We have two use-cases for extractTerms today: distributed IDF and highlighting, 
and both of them only care about scoring clauses.

Also, this way it is consistent with FilteredQuery, which does not extract terms 
from the filter.
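The behavior can be modeled in miniature (this is a sketch, not Lucene's actual BooleanQuery API): collect terms only from clauses that contribute to the score, skipping FILTER and MUST_NOT clauses.

```java
import java.util.Arrays;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;

public class ExtractTermsSketch {
    enum Occur { MUST, SHOULD, FILTER, MUST_NOT }

    static final class Clause {
        final Occur occur;
        final String term;
        Clause(Occur occur, String term) { this.occur = occur; this.term = term; }
    }

    static Set<String> extractScoringTerms(List<Clause> clauses) {
        Set<String> terms = new LinkedHashSet<>();
        for (Clause c : clauses) {
            // non-scoring clauses are irrelevant to distributed IDF and highlighting
            if (c.occur == Occur.MUST || c.occur == Occur.SHOULD) {
                terms.add(c.term);
            }
        }
        return terms;
    }

    public static void main(String[] args) {
        List<Clause> clauses = Arrays.asList(
            new Clause(Occur.MUST, "apache"),
            new Clause(Occur.FILTER, "year:2015"),   // filtered, never scored
            new Clause(Occur.SHOULD, "lucene"));
        System.out.println(extractScoringTerms(clauses)); // prints [apache, lucene]
    }
}
```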

 BooleanQuery should only extract terms from scoring clauses
 ---

 Key: LUCENE-6416
 URL: https://issues.apache.org/jira/browse/LUCENE-6416
 Project: Lucene - Core
  Issue Type: Bug
Affects Versions: Trunk, 5.1
Reporter: Adrien Grand
Assignee: Adrien Grand
Priority: Minor
 Fix For: Trunk, 5.2

 Attachments: LUCENE-6416.patch


 BooleanQuery should not extract terms from FILTER clauses.






[jira] [Commented] (LUCENE-6416) BooleanQuery should only extract terms from scoring clauses

2015-04-13 Thread Mikhail Khludnev (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6416?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14492062#comment-14492062
 ] 

Mikhail Khludnev commented on LUCENE-6416:
--

[~jpountz] I don't mean to express a concern, but would you mind clarifying the 
motivation? 

 BooleanQuery should only extract terms from scoring clauses
 ---

 Key: LUCENE-6416
 URL: https://issues.apache.org/jira/browse/LUCENE-6416
 Project: Lucene - Core
  Issue Type: Bug
Affects Versions: Trunk, 5.1
Reporter: Adrien Grand
Assignee: Adrien Grand
Priority: Minor
 Fix For: Trunk, 5.2

 Attachments: LUCENE-6416.patch


 BooleanQuery should not extract terms from FILTER clauses.






Re: [VOTE] 5.1.0 RC2

2015-04-13 Thread Shalin Shekhar Mangar
+1

SUCCESS! [0:54:01.027152]

On Fri, Apr 10, 2015 at 12:12 AM, Timothy Potter thelabd...@gmail.com
wrote:

 Please vote for the second release candidate for Lucene/Solr 5.1.0

 The artifacts can be downloaded from:

 https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-5.1.0-RC2-rev1672403/

 You can run the smoke tester directly with this command:
 python3 -u dev-tools/scripts/smokeTestRelease.py

 https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-5.1.0-RC2-rev1672403/

 Here's my +1 SUCCESS! [0:43:35.208102]

 Cheers,
 Tim

 -
 To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
 For additional commands, e-mail: dev-h...@lucene.apache.org




-- 
Regards,
Shalin Shekhar Mangar.


[jira] [Commented] (LUCENE-6415) TermsQuery.extractTerms should not throw an UOE

2015-04-13 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6415?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14491979#comment-14491979
 ] 

ASF subversion and git services commented on LUCENE-6415:
-

Commit 1673118 from [~jpountz] in branch 'dev/trunk'
[ https://svn.apache.org/r1673118 ]

LUCENE-6415: Make TermsQuery.extractTerms a no-op instead of throwing an UOE.

 TermsQuery.extractTerms should not throw an UOE
 ---

 Key: LUCENE-6415
 URL: https://issues.apache.org/jira/browse/LUCENE-6415
 Project: Lucene - Core
  Issue Type: Bug
Affects Versions: 5.1
Reporter: Adrien Grand
Assignee: Adrien Grand
Priority: Minor
 Attachments: LUCENE-6415.patch, LUCENE-6415.patch


 TermsQuery inherits the default impl of extractTerms which throws an 
 UnsupportedOperationException.






[jira] [Commented] (LUCENE-6416) BooleanQuery should only extract terms from scoring clauses

2015-04-13 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6416?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14491990#comment-14491990
 ] 

ASF subversion and git services commented on LUCENE-6416:
-

Commit 1673120 from [~jpountz] in branch 'dev/trunk'
[ https://svn.apache.org/r1673120 ]

LUCENE-6416: BooleanQuery.extractTerms now only extracts terms from scoring 
clauses.

 BooleanQuery should only extract terms from scoring clauses
 ---

 Key: LUCENE-6416
 URL: https://issues.apache.org/jira/browse/LUCENE-6416
 Project: Lucene - Core
  Issue Type: Bug
Affects Versions: Trunk, 5.1
Reporter: Adrien Grand
Assignee: Adrien Grand
Priority: Minor
 Fix For: 5.2

 Attachments: LUCENE-6416.patch


 BooleanQuery should not extract terms from FILTER clauses.






[jira] [Commented] (LUCENE-6373) Complete two phase doc id iteration support for Spans

2015-04-13 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6373?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14492039#comment-14492039
 ] 

Adrien Grand commented on LUCENE-6373:
--

The patch looks like a good start; it's good to see SpanOr and disjunctions 
looking similar.

bq. //final long cost; //FIXME: needed?

This is needed by the min-should-match scorer in order to keep track of scorers 
which are behind the current document and advance the least-costly instance 
first.

Also, a minor nitpick, but I think we should move SpanPositionQueue either to 
its own java file or make it an inner class of SpanOrQuery, in order not to 
confuse incremental compilation.

Maybe "Specialize SpanPositionQueue similar to DisiPriorityQueue, inline the 
position comparison function." could be deferred to another issue in order to 
keep this one small?

 Complete two phase doc id iteration support for Spans
 -

 Key: LUCENE-6373
 URL: https://issues.apache.org/jira/browse/LUCENE-6373
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Paul Elschot
 Attachments: LUCENE-6373-SpanOr.patch


 Spin off from LUCENE-6308, see comments there from about 23 March 2015.






[jira] [Resolved] (LUCENE-6415) TermsQuery.extractTerms should not throw an UOE

2015-04-13 Thread Adrien Grand (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6415?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrien Grand resolved LUCENE-6415.
--
   Resolution: Fixed
Fix Version/s: 5.2
   Trunk

 TermsQuery.extractTerms should not throw an UOE
 ---

 Key: LUCENE-6415
 URL: https://issues.apache.org/jira/browse/LUCENE-6415
 Project: Lucene - Core
  Issue Type: Bug
Affects Versions: 5.1
Reporter: Adrien Grand
Assignee: Adrien Grand
Priority: Minor
 Fix For: Trunk, 5.2

 Attachments: LUCENE-6415.patch, LUCENE-6415.patch


 TermsQuery inherits the default impl of extractTerms which throws an 
 UnsupportedOperationException.






[jira] [Commented] (LUCENE-6409) LongBitSet.ensureCapacity overflows on numBits > Integer.MaxValue

2015-04-13 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6409?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14492001#comment-14492001
 ] 

ASF subversion and git services commented on LUCENE-6409:
-

Commit 1673124 from [~jpountz] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1673124 ]

LUCENE-6409: Fixed integer overflow in LongBitSet.ensureCapacity.

  LongBitSet.ensureCapacity overflows on numBits > Integer.MaxValue 
 ---

 Key: LUCENE-6409
 URL: https://issues.apache.org/jira/browse/LUCENE-6409
 Project: Lucene - Core
  Issue Type: Bug
  Components: core/other
Reporter: Luc Vanlerberghe

 LongBitSet.ensureCapacity calculates the number of longs required to store 
 the number of bits correctly and allocates a long[] accordingly, but then 
 shifts the array length (which is an int!) left by 6 bits.  The int should be 
 cast to long *before* performing the shift.






[jira] [Commented] (LUCENE-6415) TermsQuery.extractTerms should not throw an UOE

2015-04-13 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6415?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14491989#comment-14491989
 ] 

ASF subversion and git services commented on LUCENE-6415:
-

Commit 1673119 from [~jpountz] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1673119 ]

LUCENE-6415: Make TermsQuery.extractTerms a no-op instead of throwing an UOE.

 TermsQuery.extractTerms should not throw an UOE
 ---

 Key: LUCENE-6415
 URL: https://issues.apache.org/jira/browse/LUCENE-6415
 Project: Lucene - Core
  Issue Type: Bug
Affects Versions: 5.1
Reporter: Adrien Grand
Assignee: Adrien Grand
Priority: Minor
 Fix For: Trunk, 5.2

 Attachments: LUCENE-6415.patch, LUCENE-6415.patch


 TermsQuery inherits the default impl of extractTerms which throws an 
 UnsupportedOperationException.






[jira] [Resolved] (LUCENE-6416) BooleanQuery should only extract terms from scoring clauses

2015-04-13 Thread Adrien Grand (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6416?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrien Grand resolved LUCENE-6416.
--
   Resolution: Fixed
Fix Version/s: Trunk

 BooleanQuery should only extract terms from scoring clauses
 ---

 Key: LUCENE-6416
 URL: https://issues.apache.org/jira/browse/LUCENE-6416
 Project: Lucene - Core
  Issue Type: Bug
Affects Versions: Trunk, 5.1
Reporter: Adrien Grand
Assignee: Adrien Grand
Priority: Minor
 Fix For: Trunk, 5.2

 Attachments: LUCENE-6416.patch


 BooleanQuery should not extract terms from FILTER clauses.






[jira] [Commented] (LUCENE-6420) Update forbiddenapis to 1.8

2015-04-13 Thread Dawid Weiss (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6420?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14491971#comment-14491971
 ] 

Dawid Weiss commented on LUCENE-6420:
-

Thanks Uwe, these are great changes.

 Update forbiddenapis to 1.8
 ---

 Key: LUCENE-6420
 URL: https://issues.apache.org/jira/browse/LUCENE-6420
 Project: Lucene - Core
  Issue Type: Improvement
  Components: general/build
Reporter: Uwe Schindler
Assignee: Uwe Schindler
 Fix For: Trunk, 5.2

 Attachments: LUCENE-6420.patch


 Update forbidden-apis plugin to 1.8:
 - Initial support for Java 9 including JIGSAW
 - Errors are now reported sorted by line numbers and correctly grouped 
 (synthetic methods/lambdas)
 - Package-level forbids: Deny all classes from a package: org.hatedpkg.** 
 (also other globs work)
 In addition to file-level excludes, forbiddenapis now supports 
 fine-grained excludes using Java annotations. You can use the one shipped, 
 or define your own (e.g. inside Lucene) and pass its name to forbidden 
 (e.g. via the glob **.SuppressForbidden, which lets an annotation of that 
 name in any package suppress errors). Annotations only need class-file 
 retention; no runtime retention is required.
 For now this will only update the dependency and remove the additional 
 forbid added by [~shalinmangar] for MessageFormat (which now ships with 
 forbidden). But we should review and, for example, suppress forbidden 
 failures in command-line tools using @SuppressForbidden (or a similar 
 annotation). The discussion is open; I can make a patch.
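A project-local suppression annotation along these lines could look like the following sketch (the name, glob, and `reason` element are illustrative, not the shipped annotation):

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Hypothetical project-local annotation; forbiddenapis would match it via a
// glob such as **.SuppressForbidden. CLASS retention suffices because the
// checker inspects class files, not runtime reflection.
@Retention(RetentionPolicy.CLASS)
@Target({ElementType.TYPE, ElementType.METHOD, ElementType.CONSTRUCTOR, ElementType.FIELD})
public @interface SuppressForbidden {
    String reason() default "";
}
```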






[jira] [Commented] (SOLR-7381) Improve logging by adding MDC context in more places

2015-04-13 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7381?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14491958#comment-14491958
 ] 

ASF subversion and git services commented on SOLR-7381:
---

Commit 1673116 from sha...@apache.org in branch 'dev/trunk'
[ https://svn.apache.org/r1673116 ]

SOLR-7381: Improve logging by adding node name in MDC in SolrCloud mode and 
adding MDC to all thread pools

 Improve logging by adding MDC context in more places
 

 Key: SOLR-7381
 URL: https://issues.apache.org/jira/browse/SOLR-7381
 Project: Solr
  Issue Type: Improvement
  Components: SolrCloud
Reporter: Shalin Shekhar Mangar
Assignee: Shalin Shekhar Mangar
Priority: Critical
 Fix For: 5.2

 Attachments: SOLR-7381.patch, SOLR-7381.patch


 SOLR-6673 added MDC based logging in a few places but we have a lot of ground 
 to cover. Threads created via thread pool executors do not inherit MDC values 
 and those are some of the most interesting places to log MDC context. This is 
 critical to help debug SolrCloud failures.






[jira] [Updated] (SOLR-7381) Improve logging by adding MDC context in more places

2015-04-13 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7381?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar updated SOLR-7381:

Attachment: SOLR-7381-thread-names.patch

This patch takes us a step further in improving debuggability by exposing MDC 
values in thread names so that a thread dump can give us a better idea of what 
was happening at the time.

For example, here is a stack trace showing a CloudSolrClient update thread 
which has the URL of the remote host in its name:
{code}
CloudSolrClient 
ThreadPool-6-thread-1-processing-{CloudSolrClient.url=http:/127.0.0.1:53410/ollection1/
 #185 prio=5 os_prio=0 tid=0x7f7778097000 nid=0x218a runnable 
[0x7f77415d7000]
   java.lang.Thread.State: RUNNABLE
at java.net.SocketInputStream.socketRead0(Native Method)
at java.net.SocketInputStream.socketRead(SocketInputStream.java:116)
{code}

Here's another stack trace showing the update executor thread running on 
core=collection1, replica=core_node2, node_name=127.0.0.1:53410_ and making a 
call to http://127.0.0.1:57515/collection1:
{code}
updateExecutor-11-thread-1-processing-{core=collection1, replica=core_node2, 
node_name=127.0.0.1:53410_, 
ConcurrentUpdateSolrClient.baseUrl=http:/127.0.0.1:57515/ollection1, 
shard=shard3, collection=collection1} #177 prio=5 os_prio=0 
tid=0x7f775400a000 nid=0x2182 runnable [0x7f7741ddf000]
   java.lang.Thread.State: RUNNABLE
at java.net.SocketOutputStream.socketWrite0(Native Method)
at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:109)
at java.net.SocketOutputStream.write(SocketOutputStream.java:153)
{code}

Interestingly, Java thread names seem to treat the forward slash as a 
special character and drop the character that follows it, so e.g. a URL 
added to the name shows up as http:/127.0.0.1:57515/ollection1 (notice 
'ollection1'!).

All you need to do to take advantage of this feature is set (any) MDC 
values before you submit a task to the thread pool; everything else is 
taken care of for you.

I should probably add some upper limit to the thread names.
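The submit-time pattern described here can be sketched with a stdlib-only stand-in for SLF4J's MDC (class and method names below are illustrative, not Solr's actual implementation):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class MdcThreadNameSketch {
    // Stand-in for SLF4J's MDC: a per-thread map of diagnostic context values.
    static final ThreadLocal<Map<String, String>> CTX =
            ThreadLocal.withInitial(HashMap::new);

    // Wrap a task so it carries the submitter's context into the pool thread
    // and exposes that context in the thread's name while the task runs.
    static Runnable withContext(Runnable task) {
        Map<String, String> snapshot = new HashMap<>(CTX.get()); // captured at submit time
        return () -> {
            Thread t = Thread.currentThread();
            String saved = t.getName();
            CTX.set(snapshot);
            t.setName(saved + "-processing-" + snapshot);
            try {
                task.run();
            } finally {
                t.setName(saved); // restore so pooled threads don't accumulate suffixes
                CTX.remove();
            }
        };
    }

    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        CTX.get().put("core", "collection1"); // set context before submitting
        pool.submit(withContext(() ->
                System.out.println(Thread.currentThread().getName()))).get();
        pool.shutdown();
        // prints e.g. pool-1-thread-1-processing-{core=collection1}
    }
}
```

A real implementation would also cap the decorated name's length, as noted above.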

 Improve logging by adding MDC context in more places
 

 Key: SOLR-7381
 URL: https://issues.apache.org/jira/browse/SOLR-7381
 Project: Solr
  Issue Type: Improvement
  Components: SolrCloud
Reporter: Shalin Shekhar Mangar
Assignee: Shalin Shekhar Mangar
Priority: Critical
 Fix For: 5.2

 Attachments: SOLR-7381-thread-names.patch, SOLR-7381.patch, 
 SOLR-7381.patch


 SOLR-6673 added MDC based logging in a few places but we have a lot of ground 
 to cover. Threads created via thread pool executors do not inherit MDC values 
 and those are some of the most interesting places to log MDC context. This is 
 critical to help debug SolrCloud failures.






[jira] [Commented] (SOLR-7110) Optimize JavaBinCodec to minimize string Object creation

2015-04-13 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7110?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14492299#comment-14492299
 ] 

ASF subversion and git services commented on SOLR-7110:
---

Commit 1673162 from [~yo...@apache.org] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1673162 ]

SOLR-7110: reformat new code

 Optimize JavaBinCodec to minimize string Object creation
 

 Key: SOLR-7110
 URL: https://issues.apache.org/jira/browse/SOLR-7110
 Project: Solr
  Issue Type: Improvement
Reporter: Noble Paul
Assignee: Noble Paul
Priority: Minor
 Fix For: Trunk, 5.2

 Attachments: SOLR-7110.patch, SOLR-7110.patch, SOLR-7110.patch


 In JavabinCodec we already optimize on strings creation , if they are 
 repeated in the same payload. if we use a cache it is possible to avoid 
 string creation across objects as well.
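As a sketch of that idea (hypothetical names; the actual SOLR-7110 cache lives inside JavaBinCodec), a cache keyed on the raw UTF-8 bytes lets identical strings across payloads resolve to the same String object:

```java
import java.nio.charset.StandardCharsets;
import java.util.Arrays;
import java.util.HashMap;
import java.util.Map;

public class StringCacheSketch {
    // Key wrapping the raw UTF-8 bytes of a decoded string, with value-based
    // equals/hashCode so equal byte sequences hit the same cache entry.
    static final class ByteKey {
        final byte[] bytes;
        ByteKey(byte[] b) { this.bytes = b; }
        @Override public boolean equals(Object o) {
            return o instanceof ByteKey && Arrays.equals(bytes, ((ByteKey) o).bytes);
        }
        @Override public int hashCode() { return Arrays.hashCode(bytes); }
    }

    private final Map<ByteKey, String> cache = new HashMap<>();

    // Decode UTF-8 bytes, reusing a previously created String when possible.
    String decode(byte[] utf8) {
        return cache.computeIfAbsent(new ByteKey(utf8.clone()),
                k -> new String(k.bytes, StandardCharsets.UTF_8));
    }
}
```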






[jira] [Commented] (SOLR-7110) Optimize JavaBinCodec to minimize string Object creation

2015-04-13 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7110?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14492298#comment-14492298
 ] 

ASF subversion and git services commented on SOLR-7110:
---

Commit 1673161 from [~yo...@apache.org] in branch 'dev/trunk'
[ https://svn.apache.org/r1673161 ]

SOLR-7110: reformat new code

 Optimize JavaBinCodec to minimize string Object creation
 

 Key: SOLR-7110
 URL: https://issues.apache.org/jira/browse/SOLR-7110
 Project: Solr
  Issue Type: Improvement
Reporter: Noble Paul
Assignee: Noble Paul
Priority: Minor
 Fix For: Trunk, 5.2

 Attachments: SOLR-7110.patch, SOLR-7110.patch, SOLR-7110.patch


 In JavabinCodec we already optimize on strings creation , if they are 
 repeated in the same payload. if we use a cache it is possible to avoid 
 string creation across objects as well.






[jira] [Commented] (SOLR-7110) Optimize JavaBinCodec to minimize string Object creation

2015-04-13 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7110?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14492249#comment-14492249
 ] 

ASF subversion and git services commented on SOLR-7110:
---

Commit 1673150 from [~noble.paul] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1673150 ]

SOLR-7110: Optimize JavaBinCodec to minimize string Object creation

 Optimize JavaBinCodec to minimize string Object creation
 

 Key: SOLR-7110
 URL: https://issues.apache.org/jira/browse/SOLR-7110
 Project: Solr
  Issue Type: Improvement
Reporter: Noble Paul
Assignee: Noble Paul
Priority: Minor
 Attachments: SOLR-7110.patch, SOLR-7110.patch, SOLR-7110.patch


 In JavabinCodec we already optimize on strings creation , if they are 
 repeated in the same payload. if we use a cache it is possible to avoid 
 string creation across objects as well.






[JENKINS] Lucene-Solr-5.x-MacOSX (64bit/jdk1.7.0) - Build # 2125 - Failure!

2015-04-13 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-MacOSX/2125/
Java: 64bit/jdk1.7.0 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  org.apache.solr.TestHighlightDedupGrouping.test

Error Message:
Timeout occured while waiting response from server at: 
https://127.0.0.1:60145//collection1

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: Timeout occured while waiting 
response from server at: https://127.0.0.1:60145//collection1
at 
__randomizedtesting.SeedInfo.seed([45E3AC02B9E50801:CDB793D8171965F9]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:567)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:234)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:226)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:135)
at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:174)
at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:139)
at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:153)
at 
org.apache.solr.TestHighlightDedupGrouping.addDoc(TestHighlightDedupGrouping.java:122)
at 
org.apache.solr.TestHighlightDedupGrouping.randomizedTest(TestHighlightDedupGrouping.java:96)
at 
org.apache.solr.TestHighlightDedupGrouping.test(TestHighlightDedupGrouping.java:42)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:960)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:935)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[JENKINS] Lucene-Solr-trunk-Linux (32bit/jdk1.9.0-ea-b54) - Build # 12286 - Failure!

2015-04-13 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/12286/
Java: 32bit/jdk1.9.0-ea-b54 -server -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  org.apache.solr.cloud.FullSolrCloudDistribCmdsTest.test

Error Message:
Error from server at 
http://127.0.0.1:44672/implicit_collection_without_routerfield_shard1_replica1: 
no servers hosting shard: 

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at 
http://127.0.0.1:44672/implicit_collection_without_routerfield_shard1_replica1: 
no servers hosting shard: 
at 
__randomizedtesting.SeedInfo.seed([944EEE25A6B2D153:1C1AD1FF084EBCAB]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:557)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:234)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:226)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:135)
at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:943)
at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:958)
at 
org.apache.solr.cloud.FullSolrCloudDistribCmdsTest.testDeleteByIdImplicitRouter(FullSolrCloudDistribCmdsTest.java:225)
at 
org.apache.solr.cloud.FullSolrCloudDistribCmdsTest.test(FullSolrCloudDistribCmdsTest.java:144)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:502)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:960)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:935)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Commented] (SOLR-7110) Optimize JavaBinCodec to minimize string Object creation

2015-04-13 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7110?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14492248#comment-14492248
 ] 

ASF subversion and git services commented on SOLR-7110:
---

Commit 1673149 from [~noble.paul] in branch 'dev/trunk'
[ https://svn.apache.org/r1673149 ]

SOLR-7110: Optimize JavaBinCodec to minimize string Object creation

 Optimize JavaBinCodec to minimize string Object creation
 

 Key: SOLR-7110
 URL: https://issues.apache.org/jira/browse/SOLR-7110
 Project: Solr
  Issue Type: Improvement
Reporter: Noble Paul
Assignee: Noble Paul
Priority: Minor
 Attachments: SOLR-7110.patch, SOLR-7110.patch, SOLR-7110.patch


 In JavabinCodec we already optimize on strings creation , if they are 
 repeated in the same payload. if we use a cache it is possible to avoid 
 string creation across objects as well.






Re: Examples in JIRA issues CHANGES messages

2015-04-13 Thread Shalin Shekhar Mangar
+1 to everything.

It is also nice to give more detail about what changed between patches.
Unless you use Review Board, this is sometimes the only way to understand
the changes between two patches. Especially, please call out any hacks,
gotchas and todo items that you may have thought about when writing the
code. This is not just for people following the development but also for
future contributors who may have to debug your code and need some
historical context to understand the design decisions. Finally, if someone
has given you review comments, please be kind enough to point out if/how
they've been addressed.

On Sun, Apr 12, 2015 at 12:21 AM, Yonik Seeley ysee...@gmail.com wrote:

 Devs & contributors, please remember to be nice to other contributors
 and describe what your patch is trying to do in the JIRA issue.

 For patches that add/change an API, that means giving an example or
 specifying what the API is.  People should not have to read through
 source code to try and reconstruct what an API actually looks like in
 order to give feedback on a proposed API.

 Also, for CHANGES, please consider what it will take for others to
 understand the actual change.  Don't automatically just use the JIRA
 description.
  - if you added a new parameter, then put that parameter in the description
  - where appropriate, put a short/concise example (not more than a few
 lines though) - when to do this is more subjective, but please think
 about it for very commonly used APIs.


 For the sake of example, I'll pick on the first feature added for 5.2:

 from CHANGES.txt:
 '''
 New Features
 --
 * SOLR-6637: Solr should have a way to restore a core from a backed up
 index.
 '''

 So it's saying we *should* have a feature (as opposed to saying we
 actually now do have a feature, and what that feature is), and doesn't
 give you any clue how that feature was actually implemented, or how
 you could go about finding out.

 So next, I go to SOLR-6637 to try and see what this feature actually
 consists of.
 Unfortunately, there's never an example of how someone is supposed to
 try this feature out.  We're setting a high bar for contribution from
 others.

 So next, I use the source to try and reconstruct what the API actually
 looks like.
 I find what looks like will be the right test class:

 https://svn.apache.org/viewvc/lucene/dev/trunk/solr/core/src/test/org/apache/solr/handler/TestRestoreCore.java?view=markup

 Of course, the tests aren't going to directly give me what a command
 URL would look like, but this is the closest thing:
 TestReplicationHandlerBackup.runBackupCommand(masterJetty,
 ReplicationHandler.CMD_RESTORE, params);

 And continue following the source just to be able to construct a
 simple example like I gave here:

 http://yonik.com/solr-5-2/

 (so I finally tried it out, and it works... yay ;-)

 So to recap:
 - Consider CHANGES documentation.
 - Describe *what* you are trying to implement in your JIRA issues, and
 give API examples where appropriate.

 -Yonik





-- 
Regards,
Shalin Shekhar Mangar.


[jira] [Resolved] (SOLR-7110) Optimize JavaBinCodec to minimize string Object creation

2015-04-13 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7110?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul resolved SOLR-7110.
--
   Resolution: Fixed
Fix Version/s: 5.2
   Trunk

As of now, it is not used anywhere, but the feature is in.

 Optimize JavaBinCodec to minimize string Object creation
 

 Key: SOLR-7110
 URL: https://issues.apache.org/jira/browse/SOLR-7110
 Project: Solr
  Issue Type: Improvement
Reporter: Noble Paul
Assignee: Noble Paul
Priority: Minor
 Fix For: Trunk, 5.2

 Attachments: SOLR-7110.patch, SOLR-7110.patch, SOLR-7110.patch


 In JavabinCodec we already optimize on strings creation , if they are 
 repeated in the same payload. if we use a cache it is possible to avoid 
 string creation across objects as well.






[jira] [Commented] (LUCENE-6196) Include geo3d package, along with Lucene integration to make it useful

2015-04-13 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6196?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14492316#comment-14492316
 ] 

ASF subversion and git services commented on LUCENE-6196:
-

Commit 1673165 from [~dsmiley] in branch 'dev/branches/lucene6196'
[ https://svn.apache.org/r1673165 ]

LUCENE-6196: Geo3d initial checkin

Deltas from Karl's first upload: change of package, some hashCode() impls, a 
few toString() impls, some javadoc formatting.   New Geo3dRtTest.  Geo3dShape 
throws an exception if not geo.

 Include geo3d package, along with Lucene integration to make it useful
 --

 Key: LUCENE-6196
 URL: https://issues.apache.org/jira/browse/LUCENE-6196
 Project: Lucene - Core
  Issue Type: New Feature
  Components: modules/spatial
Reporter: Karl Wright
Assignee: David Smiley
 Attachments: LUCENE-6196_Geo3d.patch, ShapeImpl.java, 
 geo3d-tests.zip, geo3d.zip


 I would like to explore contributing a geo3d package to Lucene.  This can be 
 used in conjunction with Lucene search, both for generating geohashes (via 
 spatial4j) for complex geographic shapes, as well as limiting results 
 resulting from those queries to those results within the exact shape in 
 highly performant ways.
 The package uses 3d planar geometry to do its magic, which basically limits 
 computation necessary to determine membership (once a shape has been 
 initialized, of course) to only multiplications and additions, which makes it 
 feasible to construct a performant BoostSource-based filter for geographic 
 shapes.  The math is somewhat more involved when generating geohashes, but is 
 still more than fast enough to do a good job.
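That multiply-and-add membership test can be illustrated with a tiny sketch (hypothetical names, unrelated to the actual geo3d classes): a lat/lon point is converted to a 3D unit vector once, and each plane-side check is then a single dot product, with no trigonometry per test.

```java
public class PlaneSideSketch {
    // Map a latitude/longitude (radians) to a unit vector on the sphere.
    static double[] toUnitVector(double latRad, double lonRad) {
        double c = Math.cos(latRad);
        return new double[] { c * Math.cos(lonRad), c * Math.sin(lonRad), Math.sin(latRad) };
    }

    // Side-of-plane test for a plane through the origin: three
    // multiplications and two additions, nothing more.
    static boolean onPositiveSide(double[] planeNormal, double[] point) {
        double dot = planeNormal[0] * point[0]
                   + planeNormal[1] * point[1]
                   + planeNormal[2] * point[2];
        return dot >= 0.0;
    }

    public static void main(String[] args) {
        double[] equatorNormal = { 0, 0, 1 };                     // plane z = 0
        double[] north = toUnitVector(Math.toRadians(45), 0);
        double[] south = toUnitVector(Math.toRadians(-45), 0);
        System.out.println(onPositiveSide(equatorNormal, north)); // true
        System.out.println(onPositiveSide(equatorNormal, south)); // false
    }
}
```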






[jira] [Created] (SOLR-7383) DIH rss example is broken again

2015-04-13 Thread Upayavira (JIRA)
Upayavira created SOLR-7383:
---

 Summary: DIH rss example is broken again
 Key: SOLR-7383
 URL: https://issues.apache.org/jira/browse/SOLR-7383
 Project: Solr
  Issue Type: Bug
  Components: contrib - DataImportHandler
Affects Versions: 5.0, Trunk
Reporter: Upayavira
Priority: Minor


The DIH example (solr/example/example-DIH/solr/rss/conf/rss-data-config.xml) is 
broken again. See associated issues.

Below is a config that should work.

This is caused by Slashdot seemingly oscillating between RDF/RSS and pure RSS. 
Perhaps we should depend upon something more static, rather than an external 
service that is free to change as it desires.

<dataConfig>
  <dataSource type="URLDataSource" />
  <document>
    <entity name="slashdot"
            pk="link"
            url="http://rss.slashdot.org/Slashdot/slashdot"
            processor="XPathEntityProcessor"
            forEach="/RDF/item"
            transformer="DateFormatTransformer">

      <field column="source" xpath="/RDF/channel/title" commonField="true" />
      <field column="source-link" xpath="/RDF/channel/link" commonField="true" />
      <field column="subject" xpath="/RDF/channel/subject" commonField="true" />

      <field column="title" xpath="/RDF/item/title" />
      <field column="link" xpath="/RDF/item/link" />
      <field column="description" xpath="/RDF/item/description" />
      <field column="creator" xpath="/RDF/item/creator" />
      <field column="item-subject" xpath="/RDF/item/subject" />
      <field column="date" xpath="/RDF/item/date"
             dateTimeFormat="yyyy-MM-dd'T'HH:mm:ss" />
      <field column="slash-department" xpath="/RDF/item/department" />
      <field column="slash-section" xpath="/RDF/item/section" />
      <field column="slash-comments" xpath="/RDF/item/comments" />
    </entity>
  </document>
</dataConfig>







[jira] [Commented] (SOLR-6692) hl.maxAnalyzedChars should apply cumulatively on a multi-valued field

2015-04-13 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6692?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14492426#comment-14492426
 ] 

ASF subversion and git services commented on SOLR-6692:
---

Commit 1673200 from [~dsmiley] in branch 'dev/trunk'
[ https://svn.apache.org/r1673200 ]

SOLR-6692: hl.maxAnalyzedChars should apply cumulatively on a multi-valued field

 hl.maxAnalyzedChars should apply cumulatively on a multi-valued field
 -

 Key: SOLR-6692
 URL: https://issues.apache.org/jira/browse/SOLR-6692
 Project: Solr
  Issue Type: Improvement
  Components: highlighter
Reporter: David Smiley
Assignee: David Smiley
 Fix For: 5.2

 Attachments: 
 SOLR-6692_hl_maxAnalyzedChars_cumulative_multiValued,_and_more.patch


 in DefaultSolrHighlighter, the hl.maxAnalyzedChars figure is used to 
 constrain how much text is analyzed before the highlighter stops, in the 
 interests of performance.  For a multi-valued field, it effectively treats 
 each value anew, no matter how much text was previously analyzed for other 
 values of the same field in the current document. The PostingsHighlighter 
 doesn't work this way -- hl.maxAnalyzedChars is effectively the total budget 
 for a field for a document, no matter how many values there might be; it's 
 not reset for each value.  I think this makes more sense.  When we loop over 
 the values, we should subtract from hl.maxAnalyzedChars the length of the 
 value just checked.  The motivation here is consistency with 
 PostingsHighlighter, and to allow hl.maxAnalyzedChars to be pushed down 
 to term vector uninversion, which wouldn't be possible for multi-valued 
 fields given the current way this parameter is used.
 Interestingly, I noticed Solr's use of FastVectorHighlighter doesn't honor 
 hl.maxAnalyzedChars, as the FVH doesn't have a knob for that.  It does have 
 hl.phraseLimit, a limit that could be used for a similar purpose, 
 albeit applied differently.
 Furthermore, DefaultSolrHighlighter.doHighlightingByHighlighter should exit 
 early from its field-value loop if it reaches hl.snippets, and if 
 hl.preserveMulti=true
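
The "subtract the length of the value just checked" loop described above can be sketched as follows. This is a minimal illustration, not the actual patch; the helper name charsToAnalyze and the int[] return shape are made up for the example -- only the shared-budget arithmetic reflects the proposal.

```java
import java.util.Arrays;
import java.util.List;

public class CumulativeBudgetSketch {

    // Hypothetical helper: for each value of a multi-valued field, return how
    // many characters would be analyzed under a *cumulative* hl.maxAnalyzedChars
    // budget shared across all values (instead of resetting it per value).
    static int[] charsToAnalyze(List<String> values, int maxAnalyzedChars) {
        int[] result = new int[values.size()];
        int remaining = maxAnalyzedChars;          // one budget for the whole field
        for (int i = 0; i < values.size(); i++) {
            int len = values.get(i).length();
            result[i] = Math.min(len, Math.max(remaining, 0));
            remaining -= len;                      // subtract the value just checked
        }
        return result;
    }

    public static void main(String[] args) {
        // Three 5-char values against a budget of 12: the first two are analyzed
        // fully, and only 2 chars of the third fit in what remains.
        List<String> values = Arrays.asList("aaaaa", "bbbbb", "ccccc");
        System.out.println(Arrays.toString(charsToAnalyze(values, 12))); // [5, 5, 2]
    }
}
```

Once the budget hits zero the remaining values get 0 analyzed characters, which is what makes the limit pushable down to term vector uninversion.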






Re: [jira] [Commented] (SOLR-7110) Optimize JavaBinCodec to minimize string Object creation

2015-04-13 Thread Noble Paul
Oh. This is Java 7.
On Apr 13, 2015 7:19 PM, ASF subversion and git services (JIRA) 
j...@apache.org wrote:


 [
 https://issues.apache.org/jira/browse/SOLR-7110?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14492408#comment-14492408
 ]

 ASF subversion and git services commented on SOLR-7110:
 ---

 Commit 1673186 from [~yo...@apache.org] in branch 'dev/branches/branch_5x'
 [ https://svn.apache.org/r1673186 ]

 SOLR-7110: fix break to 5x build

  Optimize JavaBinCodec to minimize string Object creation
  
 
  Key: SOLR-7110
  URL: https://issues.apache.org/jira/browse/SOLR-7110
  Project: Solr
   Issue Type: Improvement
 Reporter: Noble Paul
 Assignee: Noble Paul
 Priority: Minor
  Fix For: Trunk, 5.2
 
  Attachments: SOLR-7110.patch, SOLR-7110.patch, SOLR-7110.patch
 
 
  In JavaBinCodec we already optimize string creation if the same string is
 repeated in the same payload. If we use a cache, it is possible to avoid
 string creation across objects as well.
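
 A minimal sketch of the idea, assuming nothing about the actual SOLR-7110 patch:
 cache decoded strings keyed on their raw UTF-8 bytes, so a field name or value
 repeated across payloads is decoded once and the same String instance is reused.
 The class and method names here are invented for illustration.

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import java.util.Arrays;
import java.util.HashMap;
import java.util.Map;

public class StringCacheSketch {

    // ByteBuffer equals/hashCode are content-based, so wrapping the raw bytes
    // gives a usable map key without decoding first.
    private final Map<ByteBuffer, String> cache = new HashMap<>();

    String readString(byte[] buf, int offset, int len) {
        ByteBuffer key = ByteBuffer.wrap(buf, offset, len);
        String s = cache.get(key);
        if (s == null) {
            s = new String(buf, offset, len, StandardCharsets.UTF_8);
            // Store a private copy of the bytes so later reuse of buf
            // (e.g. a shared read buffer) cannot corrupt the cache key.
            cache.put(ByteBuffer.wrap(Arrays.copyOfRange(buf, offset, offset + len)), s);
        }
        return s;
    }

    public static void main(String[] args) {
        StringCacheSketch codec = new StringCacheSketch();
        byte[] bytes = "title".getBytes(StandardCharsets.UTF_8);
        String first = codec.readString(bytes, 0, bytes.length);
        String second = codec.readString(bytes, 0, bytes.length);
        System.out.println(first == second); // true: same instance, no new String
    }
}
```

 A real codec would bound the cache size; an unbounded map keyed on arbitrary
 payload strings would otherwise grow without limit.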






