[JENKINS-EA] Lucene-Solr-6.x-Linux (32bit/jdk-9-ea+140) - Build # 2326 - Unstable!

2016-12-02 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/2326/
Java: 32bit/jdk-9-ea+140 -server -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  org.apache.solr.cloud.ShardSplitTest.testSplitAfterFailedSplit

Error Message:
expected:<1> but was:<2>

Stack Trace:
java.lang.AssertionError: expected:<1> but was:<2>
at __randomizedtesting.SeedInfo.seed([E410DC227EBA4A24:1D5D4F8D42CF07AE]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at org.junit.Assert.assertEquals(Assert.java:456)
at org.apache.solr.cloud.ShardSplitTest.testSplitAfterFailedSplit(ShardSplitTest.java:280)
at jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(java.base@9-ea/Native Method)
at jdk.internal.reflect.NativeMethodAccessorImpl.invoke(java.base@9-ea/NativeMethodAccessorImpl.java:62)
at jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(java.base@9-ea/DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(java.base@9-ea/Method.java:535)
at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:992)
at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:967)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:811)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:462)
at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)

[JENKINS] Lucene-Solr-master-MacOSX (64bit/jdk1.8.0) - Build # 3686 - Failure!

2016-12-02 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-MacOSX/3686/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseSerialGC

All tests passed

Build Log:
[...truncated 75097 lines...]
BUILD FAILED
/Users/jenkins/workspace/Lucene-Solr-master-MacOSX/build.xml:765: The following error occurred while executing this line:
/Users/jenkins/workspace/Lucene-Solr-master-MacOSX/build.xml:645: The following error occurred while executing this line:
/Users/jenkins/workspace/Lucene-Solr-master-MacOSX/build.xml:633: Source checkout is modified!!! Offending files:
* solr/licenses/commons-fileupload-1.3.2.jar.sha1

Total time: 121 minutes 20 seconds
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
[WARNINGS] Skipping publisher since build result is FAILURE
Recording test results
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any



---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[JENKINS] Lucene-Solr-master-Linux (64bit/jdk1.8.0_102) - Build # 18429 - Still Failing!

2016-12-02 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/18429/
Java: 64bit/jdk1.8.0_102 -XX:-UseCompressedOops -XX:+UseSerialGC

All tests passed

Build Log:
[...truncated 75109 lines...]
BUILD FAILED
/home/jenkins/workspace/Lucene-Solr-master-Linux/build.xml:765: The following error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-master-Linux/build.xml:645: The following error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-master-Linux/build.xml:633: Source checkout is modified!!! Offending files:
* solr/licenses/commons-fileupload-1.3.2.jar.sha1

Total time: 68 minutes 0 seconds
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
[WARNINGS] Skipping publisher since build result is FAILURE
Recording test results
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any




[jira] [Commented] (SOLR-9824) Documents indexed in bulk are replicated using too many HTTP requests

2016-12-02 Thread David Smiley (JIRA)

[ https://issues.apache.org/jira/browse/SOLR-9824?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15717028#comment-15717028 ]

David Smiley commented on SOLR-9824:


Work-around: Thanks to some additional conditions on when the lastDocInBatch 
condition kicks in, it's possible to work around this performance bug by 
setting {{-Dsolr.cloud.replication.runners=2}} (not 1).  I don't really like it 
at 2, but since it fixes the bug, I'm going with it.  In addition, in my 
environment I've set {{-Dsolr.cloud.replication.poll-queue-time-ms=1000}}; the 
default is 25.

solr.cloud.replication.poll-queue-time-ms: The fact that this defaults to a 
measly 25ms is too low IMO.  I think it should be at least 250 -- which happens 
to be the default in ConcurrentUpdateSolrClient.  The lower it is, the greater 
the likelihood of extra indexing overhead, including log messages.  The greater 
it is, the longer it can delay a /update connection from completing -- up to 
this amount of time.

AddUpdateCommand.pollQueueTime defaults to 0, and is only raised (via 
{{-Dsolr.cloud.replication.poll-queue-time-ms}}, default 25) by javabin.  So if 
you send data to Solr in anything other than javabin, boy are you in for some 
HTTP connection frenzy (I've tried).  I insist we set this to a reasonable 
number -- perhaps 250, as per my other suggestion, and overridable using the 
same system property.
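As a rough illustration of the trade-off described above, here is a stdlib-only sketch (not Solr code; the queue and the 250ms value, which mirrors the suggested default, are illustrative): a poll timeout only costs latency when the queue is idle, since the poll returns as soon as an element arrives.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

public class PollQueueTimeDemo {

    /** Polls the queue once with the given timeout and returns how long it blocked, in ms. */
    static long timedPollMs(BlockingQueue<String> queue, long timeoutMs) {
        long start = System.nanoTime();
        try {
            queue.poll(timeoutMs, TimeUnit.MILLISECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return (System.nanoTime() - start) / 1_000_000;
    }

    /** Returns {idleWaitMs, busyWaitMs} for a 250ms poll timeout. */
    static long[] measure() {
        BlockingQueue<String> queue = new LinkedBlockingQueue<>();

        // Empty queue: the poll blocks for the full timeout. This is the tail
        // latency a large poll-queue-time can add to the end of an /update request.
        long idleWait = timedPollMs(queue, 250);

        // Non-empty queue: the poll returns immediately, so a large timeout
        // costs nothing while documents keep streaming in.
        queue.add("doc");
        long busyWait = timedPollMs(queue, 250);

        return new long[] {idleWait, busyWait};
    }

    public static void main(String[] args) {
        long[] waits = measure();
        System.out.println("idle poll waited ~" + waits[0] + "ms, busy poll waited ~" + waits[1] + "ms");
    }
}
```

In this model, raising the timeout from 25 to 250 trades up to ~225ms of extra tail latency on an idle connection for far fewer connection open/close cycles under a steady document stream.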

> Documents indexed in bulk are replicated using too many HTTP requests
> -
>
> Key: SOLR-9824
> URL: https://issues.apache.org/jira/browse/SOLR-9824
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Affects Versions: 6.3
>Reporter: David Smiley
>
> This takes a while to explain; bear with me. While working on bulk indexing 
> small documents, I looked at the logs of my SolrCloud nodes.  I noticed that 
> shards would see an /update log message every ~6ms, which is *way* too much.  
> These are requests from one shard (that isn't a leader/replica for these docs 
> but the recipient from my client) to the target shard leader (no additional 
> replicas).  One might ask why I'm not sending docs to the right shard in the 
> first place; I have a reason, but it's beside the point -- there's a real 
> Solr perf problem here, and this probably applies equally to 
> replicationFactor>1 situations too.  I could turn off the logs, but that 
> would hide useful stuff, and it's disconcerting to me that so many 
> short-lived HTTP requests are happening, somehow at the behest of 
> DistributedUpdateProcessor.  After lots of analysis and debugging and hair 
> pulling, I finally figured it out.  
> In SOLR-7333, [~tpot] introduced an optimization called 
> {{UpdateRequest.isLastDocInBatch()}} in which ConcurrentUpdateSolrClient will 
> poll the internal queue with a '0' timeout, so that it can close the 
> connection without it hanging around any longer than needed.  This part makes 
> sense to me.  Currently the only spot that has the smarts to set this flag is 
> {{JavaBinUpdateRequestCodec.unmarshal.readOuterMostDocIterator()}} at the 
> last document.  So if a shard received docs in a javabin stream (but not 
> other formats), one would expect the _last_ document to have this flag.  
> There's even a test.  Docs without this flag get the default poll time; for 
> javabin it's 25ms.  Okay.
> I _suspect_ that if someone used CloudSolrClient or HttpSolrClient to send 
> javabin data in a batch, the intended efficiencies of SOLR-7333 would apply.  
> I didn't try. In my case, I'm using ConcurrentUpdateSolrClient (and BTW 
> DistributedUpdateProcessor uses CUSC too).  CUSC uses the RequestWriter 
> (defaulting to javabin) to send each document separately, without any leading 
> marker or trailing marker.  For the XML format by comparison, there is a 
> leading and trailing marker ({{<add>}} ... {{</add>}}).  Since there's no 
> outer container for the javabin unmarshalling to detect the last document, 
> it marks _every_ document as {{req.lastDocInBatch()}}!  Ouch!
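The effect described in the quoted report can be sketched as a toy model (illustrative only, not Solr code; the arrival gaps and method names are invented): a connection is reused only if the next document arrives within the poll timeout, so flagging every document as lastDocInBatch (a 0ms timeout) turns a steady ~6ms stream into one HTTP request per document.

```java
public class LastDocInBatchModel {

    /**
     * Toy model of a ConcurrentUpdateSolrClient runner: gapsMs[i] is how long
     * after the previous document the i-th document becomes available. A
     * connection is reused only if the next document arrives within
     * pollQueueTimeMs; otherwise the poll times out, the connection closes,
     * and the next document opens a new one.
     */
    static int countHttpRequests(int[] gapsMs, int pollQueueTimeMs) {
        int requests = 0;
        boolean connectionOpen = false;
        for (int gap : gapsMs) {
            if (!connectionOpen || gap > pollQueueTimeMs) {
                requests++;        // nothing open, or the poll timed out first
            }
            connectionOpen = true; // this document is now on an open connection
        }
        return requests;
    }

    public static void main(String[] args) {
        int[] gaps = {6, 6, 6, 6, 6};  // a doc every ~6ms, as in the logs above
        // Every doc flagged lastDocInBatch -> poll timeout 0 -> one request per doc.
        System.out.println("timeout 0ms:  " + countHttpRequests(gaps, 0) + " requests");
        // The intended 25ms javabin default keeps the connection open for the batch.
        System.out.println("timeout 25ms: " + countHttpRequests(gaps, 25) + " requests");
    }
}
```

With five documents arriving 6ms apart, the 0ms timeout yields five requests while the 25ms timeout yields one, matching the request frenzy described above.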



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)




[jira] [Commented] (SOLR-9819) Upgrade commons-fileupload to 1.3.2

2016-12-02 Thread ASF subversion and git services (JIRA)

[ https://issues.apache.org/jira/browse/SOLR-9819?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15717005#comment-15717005 ]

ASF subversion and git services commented on SOLR-9819:
---

Commit 8a13448c084cef68e0c44e6997c7a71bd24db278 in lucene-solr's branch 
refs/heads/branch_5x from [~anshum]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=8a13448 ]

SOLR-9819: Add new line to the end of SHA


> Upgrade commons-fileupload to 1.3.2
> ---
>
> Key: SOLR-9819
> URL: https://issues.apache.org/jira/browse/SOLR-9819
> Project: Solr
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 4.6, 5.5, 6.0, 6.1, 6.2, 6.3
>Reporter: Anshum Gupta
>Assignee: Anshum Gupta
>  Labels: commons-file-upload
> Attachments: SOLR-9819.patch
>
>
> We use Apache commons-fileupload 1.3.1. According to CVE-2016-3092:
> "The MultipartStream class in Apache Commons Fileupload before 1.3.2, as used 
> in Apache Tomcat 7.x before 7.0.70, 8.x before 8.0.36, 8.5.x before 8.5.3, 
> and 9.x before 9.0.0.M7 and other products, allows remote attackers to cause 
> a denial of service (CPU consumption) via a long boundary string."
> [Source|http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2016-3092]
> We should upgrade to 1.3.2.






[jira] [Commented] (SOLR-9819) Upgrade commons-fileupload to 1.3.2

2016-12-02 Thread ASF subversion and git services (JIRA)

[ https://issues.apache.org/jira/browse/SOLR-9819?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15717001#comment-15717001 ]

ASF subversion and git services commented on SOLR-9819:
---

Commit 3ce1ec3bff3b1ce294569ea3e48d3a2dc6aafb62 in lucene-solr's branch 
refs/heads/branch_6x from [~anshum]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=3ce1ec3 ]

SOLR-9819: Add new line to the end of SHA








[jira] [Commented] (SOLR-9819) Upgrade commons-fileupload to 1.3.2

2016-12-02 Thread ASF subversion and git services (JIRA)

[ https://issues.apache.org/jira/browse/SOLR-9819?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15716998#comment-15716998 ]

ASF subversion and git services commented on SOLR-9819:
---

Commit 39c2f3d80fd585c7ae4a4a559d53a19a3f100061 in lucene-solr's branch 
refs/heads/master from [~anshum]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=39c2f3d ]

SOLR-9819: Add new line to the end of SHA








[jira] [Created] (SOLR-9824) Documents indexed in bulk are replicated using too many HTTP requests

2016-12-02 Thread David Smiley (JIRA)
David Smiley created SOLR-9824:
--

 Summary: Documents indexed in bulk are replicated using too many HTTP requests
 Key: SOLR-9824
 URL: https://issues.apache.org/jira/browse/SOLR-9824
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
  Components: SolrCloud
Affects Versions: 6.3
Reporter: David Smiley


This takes a while to explain; bear with me. While working on bulk indexing 
small documents, I looked at the logs of my SolrCloud nodes.  I noticed that 
shards would see an /update log message every ~6ms, which is *way* too much.  
These are requests from one shard (that isn't a leader/replica for these docs 
but the recipient from my client) to the target shard leader (no additional 
replicas).  One might ask why I'm not sending docs to the right shard in the 
first place; I have a reason, but it's beside the point -- there's a real Solr 
perf problem here, and this probably applies equally to replicationFactor>1 
situations too.  I could turn off the logs, but that would hide useful stuff, 
and it's disconcerting to me that so many short-lived HTTP requests are 
happening, somehow at the behest of DistributedUpdateProcessor.  After lots of 
analysis and debugging and hair pulling, I finally figured it out.

In SOLR-7333, [~tpot] introduced an optimization called 
{{UpdateRequest.isLastDocInBatch()}} in which ConcurrentUpdateSolrClient will 
poll the internal queue with a '0' timeout, so that it can close the 
connection without it hanging around any longer than needed.  This part makes 
sense to me.  Currently the only spot that has the smarts to set this flag is 
{{JavaBinUpdateRequestCodec.unmarshal.readOuterMostDocIterator()}} at the last 
document.  So if a shard received docs in a javabin stream (but not other 
formats), one would expect the _last_ document to have this flag.  There's even 
a test.  Docs without this flag get the default poll time; for javabin it's 
25ms.  Okay.

I _suspect_ that if someone used CloudSolrClient or HttpSolrClient to send 
javabin data in a batch, the intended efficiencies of SOLR-7333 would apply.  I 
didn't try. In my case, I'm using ConcurrentUpdateSolrClient (and BTW 
DistributedUpdateProcessor uses CUSC too).  CUSC uses the RequestWriter 
(defaulting to javabin) to send each document separately, without any leading 
marker or trailing marker.  For the XML format by comparison, there is a 
leading and trailing marker ({{<add>}} ... {{</add>}}).  Since there's no outer 
container for the javabin unmarshalling to detect the last document, it marks 
_every_ document as {{req.lastDocInBatch()}}!  Ouch!






Re: [JENKINS] Lucene-Solr-6.x-Windows (32bit/jdk1.8.0_102) - Build # 598 - Failure!

2016-12-02 Thread Anshum Gupta
This is me. I'll fix it if someone can help me out with it. I'm not really
sure what to do about the 'offending file'.
The test and pre-commit passed happily for me.
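For background on why a trailing newline in a checksum file matters here: the build apparently regenerates the `.jar.sha1` files and compares them byte-for-byte, so a checked-in file that differs only by a missing final newline shows up as a modified checkout. A minimal sketch (illustrative only; the file name and the byte-for-byte comparison are assumptions, not the actual Ant task):

```java
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class Sha1FileDemo {

    /** Hex-encodes the SHA-1 digest of the given bytes, as found in *.jar.sha1 files. */
    static String sha1Hex(byte[] data) {
        try {
            byte[] digest = MessageDigest.getInstance("SHA-1").digest(data);
            StringBuilder hex = new StringBuilder();
            for (byte b : digest) hex.append(String.format("%02x", b));
            return hex.toString();
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException(e); // SHA-1 is always available in the JDK
        }
    }

    public static void main(String[] args) throws Exception {
        byte[] jarBytes = "abc".getBytes(StandardCharsets.UTF_8); // stand-in for the jar's contents
        // Writing the digest *with* a trailing newline matches what the build
        // regenerates; without it, the checked-in file differs by one byte and
        // the checkout looks "modified".
        Path out = Files.createTempFile("commons-fileupload-demo", ".sha1");
        Files.write(out, (sha1Hex(jarBytes) + "\n").getBytes(StandardCharsets.UTF_8));
        System.out.println(Files.readString(out));
    }
}
```

This is consistent with the SOLR-9819 follow-up commits titled "Add new line to the end of SHA", which appended the missing newline to the checked-in file.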


On Fri, Dec 2, 2016 at 4:26 PM Policeman Jenkins Server wrote:

> Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Windows/598/
> Java: 32bit/jdk1.8.0_102 -client -XX:+UseParallelGC
>
> All tests passed
>
> Build Log:
> [...truncated 65884 lines...]
> BUILD FAILED
> C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\build.xml:765: The following error occurred while executing this line:
> C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\build.xml:645: The following error occurred while executing this line:
> C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\build.xml:633: Source checkout is modified!!! Offending files:
> * solr/licenses/commons-fileupload-1.3.2.jar.sha1
>
> Total time: 92 minutes 5 seconds
> Build step 'Invoke Ant' marked build as failure
> Archiving artifacts
> [WARNINGS] Skipping publisher since build result is FAILURE
> Recording test results
> Email was triggered for: Failure - Any
> Sending email for trigger: Failure - Any
>
>
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org


[JENKINS] Lucene-Solr-6.x-Windows (32bit/jdk1.8.0_102) - Build # 598 - Failure!

2016-12-02 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Windows/598/
Java: 32bit/jdk1.8.0_102 -client -XX:+UseParallelGC

All tests passed

Build Log:
[...truncated 65884 lines...]
BUILD FAILED
C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\build.xml:765: The following error occurred while executing this line:
C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\build.xml:645: The following error occurred while executing this line:
C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\build.xml:633: Source checkout is modified!!! Offending files:
* solr/licenses/commons-fileupload-1.3.2.jar.sha1

Total time: 92 minutes 5 seconds
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
[WARNINGS] Skipping publisher since build result is FAILURE
Recording test results
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any




[JENKINS-EA] Lucene-Solr-6.x-Linux (32bit/jdk-9-ea+140) - Build # 2325 - Failure!

2016-12-02 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/2325/
Java: 32bit/jdk-9-ea+140 -server -XX:+UseSerialGC

1 tests failed.
FAILED:  org.apache.solr.cloud.LeaderFailoverAfterPartitionTest.test

Error Message:
Expected 2 of 3 replicas to be active but only found 1; 
[core_node3:{"core":"c8n_1x3_lf_shard1_replica3","base_url":"http://127.0.0.1:33723/knhk/yn","node_name":"127.0.0.1:33723_knhk%2Fyn","state":"active","leader":"true"}];
 clusterState: DocCollection(c8n_1x3_lf//clusterstate.json/30)={   
"replicationFactor":"3",   "shards":{"shard1":{   
"range":"8000-7fff",   "state":"active",   "replicas":{ 
"core_node1":{   "state":"down",   
"base_url":"http://127.0.0.1:41687/knhk/yn",   
"core":"c8n_1x3_lf_shard1_replica2",   
"node_name":"127.0.0.1:41687_knhk%2Fyn"}, "core_node2":{   
"core":"c8n_1x3_lf_shard1_replica1",   
"base_url":"http://127.0.0.1:37717/knhk/yn",   
"node_name":"127.0.0.1:37717_knhk%2Fyn",   "state":"down"}, 
"core_node3":{   "core":"c8n_1x3_lf_shard1_replica3",   
"base_url":"http://127.0.0.1:33723/knhk/yn",   
"node_name":"127.0.0.1:33723_knhk%2Fyn",   "state":"active",   
"leader":"true"}}}},   "router":{"name":"compositeId"},   
"maxShardsPerNode":"1",   "autoAddReplicas":"false"}

Stack Trace:
java.lang.AssertionError: Expected 2 of 3 replicas to be active but only found 
1; 
[core_node3:{"core":"c8n_1x3_lf_shard1_replica3","base_url":"http://127.0.0.1:33723/knhk/yn","node_name":"127.0.0.1:33723_knhk%2Fyn","state":"active","leader":"true"}];
 clusterState: DocCollection(c8n_1x3_lf//clusterstate.json/30)={
  "replicationFactor":"3",
  "shards":{"shard1":{
  "range":"8000-7fff",
  "state":"active",
  "replicas":{
"core_node1":{
  "state":"down",
  "base_url":"http://127.0.0.1:41687/knhk/yn",
  "core":"c8n_1x3_lf_shard1_replica2",
  "node_name":"127.0.0.1:41687_knhk%2Fyn"},
"core_node2":{
  "core":"c8n_1x3_lf_shard1_replica1",
  "base_url":"http://127.0.0.1:37717/knhk/yn",
  "node_name":"127.0.0.1:37717_knhk%2Fyn",
  "state":"down"},
"core_node3":{
  "core":"c8n_1x3_lf_shard1_replica3",
  "base_url":"http://127.0.0.1:33723/knhk/yn",
  "node_name":"127.0.0.1:33723_knhk%2Fyn",
  "state":"active",
  "leader":"true"}}}},
  "router":{"name":"compositeId"},
  "maxShardsPerNode":"1",
  "autoAddReplicas":"false"}
at __randomizedtesting.SeedInfo.seed([7BF5A7B568603E9D:F3A1986FC69C5365]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.apache.solr.cloud.LeaderFailoverAfterPartitionTest.testRf3WithLeaderFailover(LeaderFailoverAfterPartitionTest.java:168)
at org.apache.solr.cloud.LeaderFailoverAfterPartitionTest.test(LeaderFailoverAfterPartitionTest.java:55)
at jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(java.base@9-ea/Native Method)
at jdk.internal.reflect.NativeMethodAccessorImpl.invoke(java.base@9-ea/NativeMethodAccessorImpl.java:62)
at jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(java.base@9-ea/DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(java.base@9-ea/Method.java:535)
at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:992)
at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:967)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)

[jira] [Comment Edited] (SOLR-6203) cast exception while searching with sort function and result grouping

2016-12-02 Thread Judith Silverman (JIRA)

[ https://issues.apache.org/jira/browse/SOLR-6203?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15716678#comment-15716678 ]

Judith Silverman edited comment on SOLR-6203 at 12/3/16 12:09 AM:
--

Hi, yes, that does indeed make sense, not that I have a clear idea of what 
"weighting of sort" does.  And on that topic:

I've had more time this week to work on this jira than I will have in the 
foreseeable future, so I'm forging ahead rather than sensibly waiting for your 
comments.  I started calling new utility functions that make use of SortSpec's 
SchemaFields, but my updated unit tests kept failing with the same old 
"java.lang.Double cannot be cast to org.apache.lucene.util.BytesRef" error, and 
I got to wondering about the call to schema.getFieldOrNull() in the new 
implWeightSortSpec() function from the SOLR-9660 patch.  That function allows 
the dynamic '*' field to lay claim to schema fields which SortSpecParsing 
carefully protected from it, just as it does when called by the 
XXXResultTransformer functions we are gearing up to modify.

I have only the vaguest understanding of what the 
weightSort()/rewrite()/createWeight() functions are all about.  Do they 
actually affect which SchemaField a SortField should be associated with?  I 
tweaked implWeightSortSpec() to leave SchemaFields alone except in the case 
that nullEquivalent kicks in, and all unit tests (including new ones testing 
the use of the new utility functions) now pass.  For now, I'm posting a patch 
to our branch containing just the tweak to implWSS() and a little cleanup 
(removing my questions and your replies).  

 
Have a good weekend yourself!  Thanks,
Judith


was (Author: judith):
Hi, yes, that does indeed make sense, not that I have a clear idea of what 
"weighting of sort" does.  And on that topic:

I've had more time this week to work on this jira than I will have in the 
foreseeable future, so I'm forging ahead rather than sensibly waiting for your 
comments.  I started calling new utility functions that make use of SortSpec's 
SchemaFields, but my updated unit tests kept failing with the same old 
"java.lang.Double cannot be cast to org.apache.lucene.util.BytesRef" error, and 
I got to wondering about the call to schema.getFieldOrNull() in the new 
implWeightSortSpec() function from the SOLR-9660 patch.  It seems to me to be 
allowing the dynamic '*' field to lay claim to schema fields which 
SortSpecParsing carefully protected from it.

I have only the vaguest understanding of what weightSort()/rewrite()/
createWeight() functions are all about.  Do they actually affect which 
SchemaField a SortField should be associated to?  I tweaked 
implWeightSortSpec() to leave SchemaFields alone except in the case  that 
nullEquivalent kicks in, and all tests now pass.  I'll post a patch to our 
branch containing just that change and a little cleanup  (removing my questions 
and your replies).

 
Have a good weekend yourself!  Thanks,
Judith

> cast exception while searching with sort function and result grouping
> -
>
> Key: SOLR-6203
> URL: https://issues.apache.org/jira/browse/SOLR-6203
> Project: Solr
>  Issue Type: Bug
>  Components: search
>Affects Versions: 4.7, 4.8
>Reporter: Nate Dire
>Assignee: Christine Poerschke
> Attachments: README, SOLR-6203-unittest.patch, 
> SOLR-6203-unittest.patch, SOLR-6203.patch, SOLR-6203.patch, SOLR-6203.patch, 
> SOLR-6203.patch, SOLR-6203.patch
>
>
> After upgrading from 4.5.1 to 4.7+, a schema including a {{"*"}} dynamic 
> field as text gets a cast exception when using a sort function and result 
> grouping.  
> Repro (with example config):
> # Add {{"*"}} dynamic field as a {{TextField}}, eg:
> {noformat}
> 
> {noformat}
> #  Create  sharded collection
> {noformat}
> curl 'http://localhost:8983/solr/admin/collections?action=CREATE&name=test&numShards=2&replicationFactor=2'
> {noformat}
> # Add example docs (query must have some results)
> # Submit query which sorts on a function result and uses result grouping:
> {noformat}
> {
>   "responseHeader": {
> "status": 500,
> "QTime": 50,
> "params": {
>   "sort": "sqrt(popularity) desc",
>   "indent": "true",
>   "q": "*:*",
>   "_": "1403709010008",
>   "group.field": "manu",
>   "group": "true",
>   "wt": "json"
> }
>   },
>   "error": {
> "msg": "java.lang.Double cannot be cast to 
> org.apache.lucene.util.BytesRef",
> "code": 500
>   }
> }
> {noformat}
> Source exception from log:
> {noformat}
> ERROR - 

[jira] [Updated] (SOLR-9823) CoreContainer incorrectly setting MDCLoggingContext for core

2016-12-02 Thread Jessica Cheng Mallet (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9823?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jessica Cheng Mallet updated SOLR-9823:
---
Attachment: SOLR-9823.diff

> CoreContainer incorrectly setting MDCLoggingContext for core
> 
>
> Key: SOLR-9823
> URL: https://issues.apache.org/jira/browse/SOLR-9823
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: logging
>Reporter: Jessica Cheng Mallet
>Priority: Minor
>  Labels: logging
> Attachments: SOLR-9823.diff
>
>
> One line bug fix for setting up the MDCLoggingContext for core in 
> CoreContainer. Currently the code is always setting "null".






[jira] [Created] (SOLR-9823) CoreContainer incorrectly setting MDCLoggingContext for core

2016-12-02 Thread Jessica Cheng Mallet (JIRA)
Jessica Cheng Mallet created SOLR-9823:
--

 Summary: CoreContainer incorrectly setting MDCLoggingContext for 
core
 Key: SOLR-9823
 URL: https://issues.apache.org/jira/browse/SOLR-9823
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
  Components: logging
Reporter: Jessica Cheng Mallet
Priority: Minor


One line bug fix for setting up the MDCLoggingContext for core in 
CoreContainer. Currently the code is always setting "null".





Re: [JENKINS] Lucene-Solr-master-Windows (64bit/jdk1.8.0_102) - Build # 6263 - Failure!

2016-12-02 Thread Michael McCandless
I pushed a fix.

Mike McCandless

http://blog.mikemccandless.com


On Fri, Dec 2, 2016 at 5:50 PM, Policeman Jenkins Server
 wrote:
> Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/6263/
> Java: 64bit/jdk1.8.0_102 -XX:-UseCompressedOops -XX:+UseParallelGC
>
> 1 tests failed.
> FAILED:  org.apache.lucene.index.TestTermsEnum.testIntersectRegexp
>
> Error Message:
> Unexpected exception type, expected IllegalArgumentException
>
> Stack Trace:
> junit.framework.AssertionFailedError: Unexpected exception type, expected 
> IllegalArgumentException
> [remaining stack trace elided; it duplicates the full build report included elsewhere in this digest]

[JENKINS-EA] Lucene-Solr-master-Linux (32bit/jdk-9-ea+140) - Build # 18428 - Failure!

2016-12-02 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/18428/
Java: 32bit/jdk-9-ea+140 -server -XX:+UseSerialGC

1 tests failed.
FAILED:  
org.apache.solr.handler.component.SpellCheckComponentTest.testNumericQuery

Error Message:
List size mismatch @ spellcheck/suggestions

Stack Trace:
java.lang.RuntimeException: List size mismatch @ spellcheck/suggestions
at 
__randomizedtesting.SeedInfo.seed([AF96B240A7600093:A4BAE58038A9BC3C]:0)
at org.apache.solr.SolrTestCaseJ4.assertJQ(SolrTestCaseJ4.java:906)
at org.apache.solr.SolrTestCaseJ4.assertJQ(SolrTestCaseJ4.java:853)
at 
org.apache.solr.handler.component.SpellCheckComponentTest.testNumericQuery(SpellCheckComponentTest.java:154)
at 
jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(java.base@9-ea/Native 
Method)
at 
jdk.internal.reflect.NativeMethodAccessorImpl.invoke(java.base@9-ea/NativeMethodAccessorImpl.java:62)
at 
jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(java.base@9-ea/DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(java.base@9-ea/Method.java:535)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:811)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:462)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(java.base@9-ea/Thread.java:843)




Build Log:
[...truncated 11641 lines...]
   [junit4] Suite: 

[JENKINS] Lucene-Solr-master-Windows (64bit/jdk1.8.0_102) - Build # 6263 - Failure!

2016-12-02 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/6263/
Java: 64bit/jdk1.8.0_102 -XX:-UseCompressedOops -XX:+UseParallelGC

1 tests failed.
FAILED:  org.apache.lucene.index.TestTermsEnum.testIntersectRegexp

Error Message:
Unexpected exception type, expected IllegalArgumentException

Stack Trace:
junit.framework.AssertionFailedError: Unexpected exception type, expected 
IllegalArgumentException
at 
__randomizedtesting.SeedInfo.seed([9F4D013AF40E0332:31B7B038EEC1FDC2]:0)
at 
org.apache.lucene.util.LuceneTestCase.expectThrows(LuceneTestCase.java:2681)
at 
org.apache.lucene.index.TestTermsEnum.testIntersectRegexp(TestTermsEnum.java:1013)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:811)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:462)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.NullPointerException
at 
org.apache.lucene.codecs.memory.FSTOrdTermsReader$TermsReader$IntersectTermsEnum.isAccept(FSTOrdTermsReader.java:762)
at 
org.apache.lucene.codecs.memory.FSTOrdTermsReader$TermsReader$IntersectTermsEnum.<init>(FSTOrdTermsReader.java:593)
at 
org.apache.lucene.codecs.memory.FSTOrdTermsReader$TermsReader.intersect(FSTOrdTermsReader.java:273)
at 
org.apache.lucene.index.TestTermsEnum.lambda$testIntersectRegexp$0(TestTermsEnum.java:1013)
at 
org.apache.lucene.util.LuceneTestCase.expectThrows(LuceneTestCase.java:2676)
 

[JENKINS] Lucene-Solr-NightlyTests-6.x - Build # 217 - Still Unstable

2016-12-02 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-6.x/217/

4 tests failed.
FAILED:  org.apache.lucene.search.TestFuzzyQuery.testRandom

Error Message:
Test abandoned because suite timeout was reached.

Stack Trace:
java.lang.Exception: Test abandoned because suite timeout was reached.
at __randomizedtesting.SeedInfo.seed([45F89C17A5E63F03]:0)


FAILED:  junit.framework.TestSuite.org.apache.lucene.search.TestFuzzyQuery

Error Message:
Suite timeout exceeded (>= 720 msec).

Stack Trace:
java.lang.Exception: Suite timeout exceeded (>= 720 msec).
at __randomizedtesting.SeedInfo.seed([45F89C17A5E63F03]:0)


FAILED:  
org.apache.solr.cloud.hdfs.HdfsCollectionsAPIDistributedZkTest.testCollectionsAPI

Error Message:
Expected to see collection awhollynewcollection_0 null Last available state: 
DocCollection(awhollynewcollection_0//collections/awhollynewcollection_0/state.json/27)={
   "replicationFactor":"4",   "shards":{ "shard1":{   
"range":"8000-bfff",   "state":"active",   "replicas":{}}, 
"shard2":{   "range":"c000-",   "state":"active",   
"replicas":{}}, "shard3":{   "range":"0-3fff",   
"state":"active",   "replicas":{}}, "shard4":{   
"range":"4000-7fff",   "state":"active",   "replicas":{}}},   
"router":{"name":"compositeId"},   "maxShardsPerNode":"5",   
"autoAddReplicas":"false"}

Stack Trace:
java.lang.AssertionError: Expected to see collection awhollynewcollection_0
null
Last available state: 
DocCollection(awhollynewcollection_0//collections/awhollynewcollection_0/state.json/27)={
  "replicationFactor":"4",
  "shards":{
"shard1":{
  "range":"8000-bfff",
  "state":"active",
  "replicas":{}},
"shard2":{
  "range":"c000-",
  "state":"active",
  "replicas":{}},
"shard3":{
  "range":"0-3fff",
  "state":"active",
  "replicas":{}},
"shard4":{
  "range":"4000-7fff",
  "state":"active",
  "replicas":{}}},
  "router":{"name":"compositeId"},
  "maxShardsPerNode":"5",
  "autoAddReplicas":"false"}
at 
__randomizedtesting.SeedInfo.seed([C8F88680C76DA882:808DF234C15E8717]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.cloud.SolrCloudTestCase.waitForState(SolrCloudTestCase.java:237)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testCollectionsAPI(CollectionsAPIDistributedZkTest.java:496)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:811)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:462)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 

[jira] [Commented] (SOLR-9819) Upgrade commons-fileupload to 1.3.2

2016-12-02 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9819?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15716779#comment-15716779
 ] 

ASF subversion and git services commented on SOLR-9819:
---

Commit fc59525dfbedd72d411c52e92279d421d276eb63 in lucene-solr's branch 
refs/heads/branch_5x from [~anshum]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=fc59525 ]

SOLR-9819: Upgrade Apache commons-fileupload to 1.3.2, fixing a security 
vulnerability


> Upgrade commons-fileupload to 1.3.2
> ---
>
> Key: SOLR-9819
> URL: https://issues.apache.org/jira/browse/SOLR-9819
> Project: Solr
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 4.6, 5.5, 6.0, 6.1, 6.2, 6.3
>Reporter: Anshum Gupta
>Assignee: Anshum Gupta
>  Labels: commons-file-upload
> Attachments: SOLR-9819.patch
>
>
> We use Apache commons-fileupload 1.3.1. According to CVE-2016-3092 :
> "The MultipartStream class in Apache Commons Fileupload before 1.3.2, as used 
> in Apache Tomcat 7.x before 7.0.70, 8.x before 8.0.36, 8.5.x before 8.5.3, 
> and 9.x before 9.0.0.M7 and other products, allows remote attackers to cause 
> a denial of service (CPU consumption) via a long boundary string."
> [Source|http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2016-3092]
> We should upgrade to 1.3.2.
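The quoted CVE can be illustrated with a simplified cost model (this is not the actual MultipartStream code): a naive scan that re-matches the boundary at every input position does work proportional to the boundary length, so an attacker-supplied multi-kilobyte boundary multiplies CPU cost per byte of input.

```java
public class BoundaryCost {
    // Count byte comparisons a naive scan performs while trying to match the
    // boundary at every input position (simplified model of the DoS vector).
    static long naiveScanSteps(byte[] input, byte[] boundary) {
        long steps = 0;
        for (int i = 0; i + boundary.length <= input.length; i++) {
            for (int j = 0; j < boundary.length; j++) {
                steps++;
                if (input[i + j] != boundary[j]) break;
            }
        }
        return steps;
    }

    public static void main(String[] args) {
        byte[] input = new byte[100_000];
        java.util.Arrays.fill(input, (byte) 'a');
        byte[] shortBoundary = new byte[16];
        byte[] longBoundary = new byte[4096];
        java.util.Arrays.fill(shortBoundary, (byte) 'a');
        java.util.Arrays.fill(longBoundary, (byte) 'a');
        long shortSteps = naiveScanSteps(input, shortBoundary);
        long longSteps = naiveScanSteps(input, longBoundary);
        // Every position is a near-match here, so cost grows with boundary length.
        System.out.println("short boundary comparisons: " + shortSteps);
        System.out.println("long boundary comparisons:  " + longSteps);
        System.out.println("long boundary is far costlier: " + (longSteps > 100 * shortSteps));
    }
}
```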





[jira] [Commented] (SOLR-4735) Improve Solr metrics reporting

2016-12-02 Thread Jeff Wartes (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4735?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15716772#comment-15716772
 ] 

Jeff Wartes commented on SOLR-4735:
---

That seems pretty viable too. As I mentioned, the memory overhead of a registry 
is pretty low, just a concurrent map and a list. Plus, the actual metric 
objects in the map would be shared by both registries, so I'd be more concerned 
about the work involved in keeping them synchronized than with just having 
multiple registries.

I confess though, I don't have a clear idea whether that's more or less 
overhead than multiple identically-configured reporters. It feels like most of 
the possible performance issues here are linear, so it may not matter. Two 
reporters iterating through 10 metrics each sounds pretty much the same as one 
reporter iterating over 20 to me, all else being equal. 
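The shared-metric-objects point above can be sketched with plain-Java stand-ins for the registry (a ConcurrentMap here, not the real Codahale MetricRegistry API): two registries holding the same counter object cost only the extra map entries, and one update is visible through both views.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.atomic.LongAdder;

public class RegistrySketch {
    public static void main(String[] args) {
        // One shared counter object registered in two registries.
        LongAdder requests = new LongAdder();
        ConcurrentMap<String, LongAdder> coreRegistry = new ConcurrentHashMap<>();
        ConcurrentMap<String, LongAdder> aggregateRegistry = new ConcurrentHashMap<>();
        coreRegistry.put("requests", requests);
        aggregateRegistry.put("core1.requests", requests);

        requests.increment(); // a single update is visible through both registries
        System.out.println("core view:      " + coreRegistry.get("requests").sum());
        System.out.println("aggregate view: " + aggregateRegistry.get("core1.requests").sum());
    }
}
```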

> Improve Solr metrics reporting
> --
>
> Key: SOLR-4735
> URL: https://issues.apache.org/jira/browse/SOLR-4735
> Project: Solr
>  Issue Type: Improvement
>Reporter: Alan Woodward
>Assignee: Andrzej Bialecki 
>Priority: Minor
> Attachments: SOLR-4735.patch, SOLR-4735.patch, SOLR-4735.patch, 
> SOLR-4735.patch
>
>
> Following on from a discussion on the mailing list:
> http://search-lucene.com/m/IO0EI1qdyJF1/codahale=Solr+metrics+in+Codahale+metrics+and+Graphite+
> It would be good to make Solr play more nicely with existing devops 
> monitoring systems, such as Graphite or Ganglia.  Stats monitoring at the 
> moment is poll-only, either via JMX or through the admin stats page.  I'd 
> like to refactor things a bit to make this more pluggable.
> This patch is a start.  It adds a new interface, InstrumentedBean, which 
> extends SolrInfoMBean to return a 
> [[Metrics|http://metrics.codahale.com/manual/core/]] MetricRegistry, and a 
> couple of MetricReporters (which basically just duplicate the JMX and admin 
> page reporting that's there at the moment, but which should be more 
> extensible).  The patch includes a change to RequestHandlerBase showing how 
> this could work.  The idea would be to eventually replace the getStatistics() 
> call on SolrInfoMBean with this instead.
> The next step would be to allow more MetricReporters to be defined in 
> solrconfig.xml.  The Metrics library comes with ganglia and graphite 
> reporting modules, and we can add contrib plugins for both of those.
> There's some more general cleanup that could be done around SolrInfoMBean 
> (we've got two plugin handlers at /mbeans and /plugins that basically do the 
> same thing, and the beans themselves have some weirdly inconsistent data on 
> them - getVersion() returns different things for different impls, and 
> getSource() seems pretty useless), but maybe that's for another issue.





[jira] [Commented] (LUCENE-7576) RegExp automaton causes NPE on Terms.intersect

2016-12-02 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7576?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15716768#comment-15716768
 ] 

ASF subversion and git services commented on LUCENE-7576:
-

Commit 8cbcbc9d956754de1fab2c626705aa6d6ab9f910 in lucene-solr's branch 
refs/heads/master from Mike McCandless
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=8cbcbc9 ]

LUCENE-7576: fix other codecs to detect when special case automaton is passed 
to Terms.intersect
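The commit message describes a defensive check; a sketch of the pattern (hypothetical enum and plumbing, not the exact Lucene source) is that intersect() rejects special-case automata up front with an IllegalArgumentException instead of hitting an NPE deeper inside the terms enum.

```java
public class IntersectGuard {
    // Stand-in for CompiledAutomaton's type field: special cases like NONE
    // (matches nothing) and ALL (matches everything) carry no automaton data.
    enum AutomatonType { NONE, ALL, SINGLE, NORMAL }

    static String intersect(AutomatonType type) {
        if (type != AutomatonType.NORMAL) {
            // Fail fast with a clear message rather than dereferencing
            // automaton fields that are null for special-case types.
            throw new IllegalArgumentException(
                "please use CompiledAutomaton.getTermsEnum instead");
        }
        return "IntersectTermsEnum";
    }

    public static void main(String[] args) {
        try {
            intersect(AutomatonType.NONE); // NONE is a special case
        } catch (IllegalArgumentException e) {
            System.out.println("rejected: " + e.getMessage());
        }
        System.out.println(intersect(AutomatonType.NORMAL));
    }
}
```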


> RegExp automaton causes NPE on Terms.intersect
> --
>
> Key: LUCENE-7576
> URL: https://issues.apache.org/jira/browse/LUCENE-7576
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/codecs, core/index
>Affects Versions: 6.2.1
> Environment: java version "1.8.0_77" macOS 10.12.1
>Reporter: Tom Mortimer
>Assignee: Michael McCandless
>Priority: Minor
> Fix For: master (7.0), 6.4
>
> Attachments: LUCENE-7576.patch
>
>
> Calling org.apache.lucene.index.Terms.intersect(automaton, null) causes an 
> NPE:
> String index_path = 
> String term = 
> Directory directory = FSDirectory.open(Paths.get(index_path));
> IndexReader reader = DirectoryReader.open(directory);
> Fields fields = MultiFields.getFields(reader);
> Terms terms = fields.terms(args[1]);
> CompiledAutomaton automaton = new CompiledAutomaton(
>   new RegExp("do_not_match_anything").toAutomaton());
> TermsEnum te = terms.intersect(automaton, null);
> throws:
> Exception in thread "main" java.lang.NullPointerException
>   at 
> org.apache.lucene.codecs.blocktree.IntersectTermsEnum.<init>(IntersectTermsEnum.java:127)
>   at 
> org.apache.lucene.codecs.blocktree.FieldReader.intersect(FieldReader.java:185)
>   at org.apache.lucene.index.MultiTerms.intersect(MultiTerms.java:85)
> ...





[jira] [Commented] (LUCENE-7576) RegExp automaton causes NPE on Terms.intersect

2016-12-02 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7576?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15716766#comment-15716766
 ] 

ASF subversion and git services commented on LUCENE-7576:
-

Commit a195a9868a7f7b57c56b3b8b6b8c9ada36109144 in lucene-solr's branch 
refs/heads/branch_6x from Mike McCandless
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=a195a98 ]

LUCENE-7576: fix other codecs to detect when special case automaton is passed 
to Terms.intersect


> RegExp automaton causes NPE on Terms.intersect
> --
>
> Key: LUCENE-7576
> URL: https://issues.apache.org/jira/browse/LUCENE-7576
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/codecs, core/index
>Affects Versions: 6.2.1
> Environment: java version "1.8.0_77" macOS 10.12.1
>Reporter: Tom Mortimer
>Assignee: Michael McCandless
>Priority: Minor
> Fix For: master (7.0), 6.4
>
> Attachments: LUCENE-7576.patch
>
>
> Calling org.apache.lucene.index.Terms.intersect(automaton, null) causes an 
> NPE:
> String index_path = 
> String term = 
> Directory directory = FSDirectory.open(Paths.get(index_path));
> IndexReader reader = DirectoryReader.open(directory);
> Fields fields = MultiFields.getFields(reader);
> Terms terms = fields.terms(args[1]);
> CompiledAutomaton automaton = new CompiledAutomaton(
>   new RegExp("do_not_match_anything").toAutomaton());
> TermsEnum te = terms.intersect(automaton, null);
> throws:
> Exception in thread "main" java.lang.NullPointerException
>   at 
> org.apache.lucene.codecs.blocktree.IntersectTermsEnum.<init>(IntersectTermsEnum.java:127)
>   at 
> org.apache.lucene.codecs.blocktree.FieldReader.intersect(FieldReader.java:185)
>   at org.apache.lucene.index.MultiTerms.intersect(MultiTerms.java:85)
> ...





[jira] [Commented] (SOLR-4735) Improve Solr metrics reporting

2016-12-02 Thread Kelvin Wong (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4735?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15716698#comment-15716698
 ] 

Kelvin Wong commented on SOLR-4735:
---

Hmm, wouldn't these aggregate registries defeat the point of keeping them 
separate in the first place (from a performance perspective)? For example, if a 
user configures a JMXReporter and a GraphiteReporter on all registries, Solr 
would have to basically make two copies of all of its registries.

Perhaps we can just "fake" an aggregate reporter? There can be configuration 
logic so that one reporter is instantiated for each registry that the user 
configured. This might be a bit wasteful but we won't have to deal with 
maintaining an aggregate registry or writing reporters that do the aggregation. 
And to the user, it seems as though they only needed to configure one reporter.

> Improve Solr metrics reporting
> --
>
> Key: SOLR-4735
> URL: https://issues.apache.org/jira/browse/SOLR-4735
> Project: Solr
>  Issue Type: Improvement
>Reporter: Alan Woodward
>Assignee: Andrzej Bialecki 
>Priority: Minor
> Attachments: SOLR-4735.patch, SOLR-4735.patch, SOLR-4735.patch, 
> SOLR-4735.patch
>
>
> Following on from a discussion on the mailing list:
> http://search-lucene.com/m/IO0EI1qdyJF1/codahale=Solr+metrics+in+Codahale+metrics+and+Graphite+
> It would be good to make Solr play more nicely with existing devops 
> monitoring systems, such as Graphite or Ganglia.  Stats monitoring at the 
> moment is poll-only, either via JMX or through the admin stats page.  I'd 
> like to refactor things a bit to make this more pluggable.
> This patch is a start.  It adds a new interface, InstrumentedBean, which 
> extends SolrInfoMBean to return a 
> [[Metrics|http://metrics.codahale.com/manual/core/]] MetricRegistry, and a 
> couple of MetricReporters (which basically just duplicate the JMX and admin 
> page reporting that's there at the moment, but which should be more 
> extensible).  The patch includes a change to RequestHandlerBase showing how 
> this could work.  The idea would be to eventually replace the getStatistics() 
> call on SolrInfoMBean with this instead.
> The next step would be to allow more MetricReporters to be defined in 
> solrconfig.xml.  The Metrics library comes with ganglia and graphite 
> reporting modules, and we can add contrib plugins for both of those.
> There's some more general cleanup that could be done around SolrInfoMBean 
> (we've got two plugin handlers at /mbeans and /plugins that basically do the 
> same thing, and the beans themselves have some weirdly inconsistent data on 
> them - getVersion() returns different things for different impls, and 
> getSource() seems pretty useless), but maybe that's for another issue.





[jira] [Updated] (SOLR-6203) cast exception while searching with sort function and result grouping

2016-12-02 Thread Judith Silverman (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6203?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Judith Silverman updated SOLR-6203:
---
Attachment: SOLR-6203.patch

Hi, yes, that does indeed make sense, not that I have a clear idea of what 
"weighting of sort" does.  And on that topic:

I've had more time this week to work on this jira than I will have in the 
foreseeable future, so I'm forging ahead rather than sensibly waiting for your 
comments.  I started calling new utility functions that make use of SortSpec's 
SchemaFields, but my updated unit tests kept failing with the same old 
"java.lang.Double cannot be cast to org.apache.lucene.util.BytesRef" error, and 
I got to wondering about the call to schema.getFieldOrNull() in the new 
implWeightSortSpec() function from the SOLR-9660 patch.  It seems to me to be 
allowing the dynamic '*' field to lay claim to schema fields which 
SortSpecParsing carefully protected from it.

I have only the vaguest understanding of what weightSort()/rewrite()/
createWeight() functions are all about.  Do they actually affect which 
SchemaField a SortField should be associated with?  I tweaked 
implWeightSortSpec() to leave SchemaFields alone except in the case that 
nullEquivalent kicks in, and all tests now pass.  I'll post a patch to our 
branch containing just that change and a little cleanup (removing my questions 
and your replies).

 
Have a good weekend yourself!  Thanks,
Judith

> cast exception while searching with sort function and result grouping
> -
>
> Key: SOLR-6203
> URL: https://issues.apache.org/jira/browse/SOLR-6203
> Project: Solr
>  Issue Type: Bug
>  Components: search
>Affects Versions: 4.7, 4.8
>Reporter: Nate Dire
>Assignee: Christine Poerschke
> Attachments: README, SOLR-6203-unittest.patch, 
> SOLR-6203-unittest.patch, SOLR-6203.patch, SOLR-6203.patch, SOLR-6203.patch, 
> SOLR-6203.patch, SOLR-6203.patch
>
>
> After upgrading from 4.5.1 to 4.7+, a schema including a {{"*"}} dynamic 
> field as text gets a cast exception when using a sort function and result 
> grouping.  
> Repro (with example config):
> # Add {{"*"}} dynamic field as a {{TextField}}, eg:
> {noformat}
> 
> {noformat}
> #  Create  sharded collection
> {noformat}
> curl 
> 'http://localhost:8983/solr/admin/collections?action=CREATE&name=test&numShards=2&replicationFactor=2'
> {noformat}
> # Add example docs (query must have some results)
> # Submit query which sorts on a function result and uses result grouping:
> {noformat}
> {
>   "responseHeader": {
> "status": 500,
> "QTime": 50,
> "params": {
>   "sort": "sqrt(popularity) desc",
>   "indent": "true",
>   "q": "*:*",
>   "_": "1403709010008",
>   "group.field": "manu",
>   "group": "true",
>   "wt": "json"
> }
>   },
>   "error": {
> "msg": "java.lang.Double cannot be cast to 
> org.apache.lucene.util.BytesRef",
> "code": 500
>   }
> }
> {noformat}
> Source exception from log:
> {noformat}
> ERROR - 2014-06-25 08:10:10.055; org.apache.solr.common.SolrException; 
> java.lang.ClassCastException: java.lang.Double cannot be cast to 
> org.apache.lucene.util.BytesRef
> at 
> org.apache.solr.schema.FieldType.marshalStringSortValue(FieldType.java:981)
> at org.apache.solr.schema.TextField.marshalSortValue(TextField.java:176)
> at 
> org.apache.solr.search.grouping.distributed.shardresultserializer.SearchGroupsResultTransformer.serializeSearchGroup(SearchGroupsResultTransformer.java:125)
> at 
> org.apache.solr.search.grouping.distributed.shardresultserializer.SearchGroupsResultTransformer.transform(SearchGroupsResultTransformer.java:65)
> at 
> org.apache.solr.search.grouping.distributed.shardresultserializer.SearchGroupsResultTransformer.transform(SearchGroupsResultTransformer.java:43)
> at 
> org.apache.solr.search.grouping.CommandHandler.processResult(CommandHandler.java:193)
> at 
> org.apache.solr.handler.component.QueryComponent.process(QueryComponent.java:340)
> at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:218)
> at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
>   ...
> {noformat}
> It looks like {{serializeSearchGroup}} is matching the sort expression as the 
> {{"*"}} dynamic field, which is a TextField in the repro.






[jira] [Comment Edited] (SOLR-9764) Design a memory efficient DocSet if a query returns all docs

2016-12-02 Thread Michael Sun (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9764?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15716573#comment-15716573
 ] 

Michael Sun edited comment on SOLR-9764 at 12/2/16 9:29 PM:


bq.  I do not know how it would perform when actually used as a filterCache 
entry, compared to the current bitset implementation.
RoaringDocIdSet looks pretty interesting. From the link in the comments, 
https://www.elastic.co/blog/frame-of-reference-and-roaring-bitmaps, however, it 
looks like RoaringDocIdSet doesn't save any memory when a query matches all 
docs.

Basically the idea of RoaringDocIdSet is to divide the entire bitmap into 
multiple chunks. For each chunk, either a bitmap or an integer array (using 
diff compression) can be used, depending on the number of matched docs in that 
chunk. If the number of matched docs in a chunk is above a certain threshold, a 
bitmap is used for that chunk; otherwise an integer array is used. This can 
help in some use cases, but it would fall back to something equivalent to 
FixedBitSet in this use case.

In addition, the 'official' roaring bitmaps website, http://roaringbitmap.org, 
mentions that roaring bitmaps can also store a bitmap chunk with run-length 
encoding, while also noting that one of the main goals of roaring bitmaps is to 
address the main drawback of run-length encoding, which is expensive random 
access. I need to dig into the source code to understand it better. Any 
suggestion is welcome.
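The per-chunk decision described above can be sketched in plain Java. This is a simplified model of the RoaringDocIdSet idea, not Lucene's actual implementation; the names are illustrative, and the 4096-doc threshold is the break-even point where an array of 16-bit ids costs as much as the chunk's 8 KB bitmap:

```java
// Simplified model of the per-chunk representation choice in a roaring
// bitmap. NOT Lucene's RoaringDocIdSet implementation; illustrative only.
public class ChunkChoice {
    static final int CHUNK_SIZE = 1 << 16;    // a chunk covers 2^16 doc ids
    // Break-even point: 4096 16-bit ids (8 KB) cost as much as a
    // 2^16-bit bitmap (also 8 KB), so sparser chunks use the id array.
    static final int ARRAY_THRESHOLD = 4096;

    enum Representation { ID_ARRAY, BITMAP }

    static Representation choose(int matchedDocsInChunk) {
        return matchedDocsInChunk <= ARRAY_THRESHOLD
                ? Representation.ID_ARRAY  // sparse chunk: store doc ids
                : Representation.BITMAP;   // dense chunk: one bit per doc
    }

    public static void main(String[] args) {
        System.out.println(choose(100));        // ID_ARRAY
        // A query matching all docs makes every chunk fully dense, so
        // every chunk degenerates to a bitmap and no memory is saved.
        System.out.println(choose(CHUNK_SIZE)); // BITMAP
    }
}
```

This makes the fallback concrete: in the all-docs case every chunk is above the threshold, so the structure is effectively a plain bitmap plus chunking overhead.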





> Design a memory efficient DocSet if a query returns all docs
> 
>
> Key: SOLR-9764
> URL: https://issues.apache.org/jira/browse/SOLR-9764
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Michael Sun
> Attachments: SOLR-9764.patch, SOLR-9764.patch, SOLR-9764.patch, 
> SOLR-9764.patch, SOLR-9764.patch, SOLR_9764_no_cloneMe.patch
>
>
> In some use cases, particularly use cases with time series data that use a 
> collection alias and partition data into multiple small collections by 
> timestamp, a filter query can match all documents in a collection. Currently 
> BitDocSet is used, which contains a large array of long integers with every 
> bit set to 1. After querying, the resulting DocSet saved in the filter cache 
> is large and becomes one of the main memory consumers in these use cases.
> For example, suppose a Solr setup has 14 collections for data from the last 
> 14 days, each collection holding one day of data. A filter query for the last 
> week of data would result in at least six DocSets in the filter cache, each 
> matching all documents in one of six collections.
> This issue is to design a new DocSet that is memory efficient for such a use 
> case. The new DocSet removes the large array, reducing memory usage and GC 
> pressure without losing the advantage of a large filter cache.
> In particular, for use cases with time series data, a collection alias, and 
> data partitioned into multiple small collections by timestamp, the gain can 
> be large.
> For further optimization, it may be helpful to design a DocSet with run 
> length encoding. Thanks [~mmokhtar] for the suggestion.
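For scale, the cost of one such all-ones filter cache entry can be estimated directly from the bitset layout; the 100M-document collection size below is a hypothetical figure, not from the issue:

```java
// Back-of-the-envelope memory cost of a BitDocSet-style filter cache
// entry: one bit per document, stored in an array of 64-bit longs.
public class BitDocSetMemory {
    // bytes used by the long[] backing a bitset over maxDoc documents
    static long bitsetBytes(long maxDoc) {
        long words = (maxDoc + 63) / 64;   // ceil(maxDoc / 64) longs
        return words * 8;                  // 8 bytes per long
    }

    public static void main(String[] args) {
        long maxDoc = 100_000_000L;          // hypothetical 100M-doc collection
        System.out.println(bitsetBytes(maxDoc));      // 12500000 (~12.5 MB)
        // Six all-docs entries, as in the "last week" example above:
        System.out.println(6 * bitsetBytes(maxDoc));  // 75000000 (~75 MB)
    }
}
```

Every one of those bytes encodes nothing but "all ones", which is what motivates a constant-size representation.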






[jira] [Commented] (SOLR-9764) Design a memory efficient DocSet if a query returns all docs

2016-12-02 Thread Michael Sun (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9764?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15716573#comment-15716573
 ] 

Michael Sun commented on SOLR-9764:
---

bq.  I do not know how it would perform when actually used as a filterCache 
entry, compared to the current bitset implementation.
RoaringDocIdSet looks pretty interesting. From the link in the comments, 
https://www.elastic.co/blog/frame-of-reference-and-roaring-bitmaps, however, it 
looks like RoaringDocIdSet doesn't save any memory when a query matches all 
docs.

Basically the idea of RoaringDocIdSet is to divide the entire bitmap into 
multiple chunks. For each chunk, either a bitmap or an integer array (using 
diff compression) is used, depending on the number of matched docs in that 
chunk. If the number of matched docs in a chunk is above a certain threshold, a 
bitmap is used for that chunk; otherwise an integer array is used. This can 
help in some use cases, but it would fall back to something equivalent to 
FixedBitSet in this use case.

In addition, the 'official' roaring bitmaps website, http://roaringbitmap.org, 
mentions that roaring bitmaps can also store a bitmap chunk with run-length 
encoding, while also noting that one of the main goals of roaring bitmaps is to 
address the main drawback of run-length encoding, which is expensive random 
access. I need to dig into the source code to understand it better. Any 
suggestion is welcome.


> Design a memory efficient DocSet if a query returns all docs
> 
>
> Key: SOLR-9764
> URL: https://issues.apache.org/jira/browse/SOLR-9764
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Michael Sun
> Attachments: SOLR-9764.patch, SOLR-9764.patch, SOLR-9764.patch, 
> SOLR-9764.patch, SOLR-9764.patch, SOLR_9764_no_cloneMe.patch
>
>
> In some use cases, particularly use cases with time series data that use a 
> collection alias and partition data into multiple small collections by 
> timestamp, a filter query can match all documents in a collection. Currently 
> BitDocSet is used, which contains a large array of long integers with every 
> bit set to 1. After querying, the resulting DocSet saved in the filter cache 
> is large and becomes one of the main memory consumers in these use cases.
> For example, suppose a Solr setup has 14 collections for data from the last 
> 14 days, each collection holding one day of data. A filter query for the last 
> week of data would result in at least six DocSets in the filter cache, each 
> matching all documents in one of six collections.
> This issue is to design a new DocSet that is memory efficient for such a use 
> case. The new DocSet removes the large array, reducing memory usage and GC 
> pressure without losing the advantage of a large filter cache.
> In particular, for use cases with time series data, a collection alias, and 
> data partitioned into multiple small collections by timestamp, the gain can 
> be large.
> For further optimization, it may be helpful to design a DocSet with run 
> length encoding. Thanks [~mmokhtar] for the suggestion.






[jira] [Commented] (SOLR-9819) Upgrade commons-fileupload to 1.3.2

2016-12-02 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9819?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15716567#comment-15716567
 ] 

ASF subversion and git services commented on SOLR-9819:
---

Commit 660f08a0b96887ad0ca4c147016179f041c522e8 in lucene-solr's branch 
refs/heads/branch_6x from [~anshum]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=660f08a ]

SOLR-9819: Upgrade Apache commons-fileupload to 1.3.2, fixing a security 
vulnerability


> Upgrade commons-fileupload to 1.3.2
> ---
>
> Key: SOLR-9819
> URL: https://issues.apache.org/jira/browse/SOLR-9819
> Project: Solr
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 4.6, 5.5, 6.0, 6.1, 6.2, 6.3
>Reporter: Anshum Gupta
>Assignee: Anshum Gupta
>  Labels: commons-file-upload
> Attachments: SOLR-9819.patch
>
>
> We use Apache commons-fileupload 1.3.1. According to CVE-2016-3092 :
> "The MultipartStream class in Apache Commons Fileupload before 1.3.2, as used 
> in Apache Tomcat 7.x before 7.0.70, 8.x before 8.0.36, 8.5.x before 8.5.3, 
> and 9.x before 9.0.0.M7 and other products, allows remote attackers to cause 
> a denial of service (CPU consumption) via a long boundary string."
> [Source|http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2016-3092]
> We should upgrade to 1.3.2.






[jira] [Commented] (SOLR-9764) Design a memory efficient DocSet if a query returns all docs

2016-12-02 Thread Michael Sun (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9764?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15716529#comment-15716529
 ] 

Michael Sun commented on SOLR-9764:
---

bq.  This would have the effect of making all queries that map onto all 
documents share the resulting DocSet
Ah, I see. That's a good idea.  Let me check it out. Thanks [~yo...@apache.org] 
for the suggestion.
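The sharing idea can be sketched as follows. MatchAllDocSet and docSetFor are hypothetical names for illustration, not Solr's actual classes; the point is that the all-docs case needs only maxDoc, so one shared instance can back every matching cache entry:

```java
import java.util.BitSet;

// Sketch of letting every query that matches all documents share one
// DocSet instead of caching a full bitset per query. Illustrative only;
// these are NOT Solr's real DocSet classes.
public class MatchAllSketch {
    interface DocSet {
        int size();
        boolean exists(int docId);
    }

    // O(1) memory: membership is just a range check against maxDoc.
    static final class MatchAllDocSet implements DocSet {
        final int maxDoc;
        MatchAllDocSet(int maxDoc) { this.maxDoc = maxDoc; }
        public int size() { return maxDoc; }
        public boolean exists(int docId) { return docId >= 0 && docId < maxDoc; }
    }

    // What the filter cache could do: detect the all-docs case and hand
    // back the shared instance instead of materializing a bitset.
    static DocSet docSetFor(BitSet matches, int maxDoc, MatchAllDocSet shared) {
        if (matches.cardinality() == maxDoc) {
            return shared;                 // every such query reuses this object
        }
        // partial matches would fall back to a bitset-backed set (elided)
        throw new UnsupportedOperationException("partial match not sketched");
    }

    public static void main(String[] args) {
        int maxDoc = 1000;
        MatchAllDocSet shared = new MatchAllDocSet(maxDoc);
        BitSet all = new BitSet(maxDoc);
        all.set(0, maxDoc);                   // every doc matches
        DocSet a = docSetFor(all, maxDoc, shared);
        DocSet b = docSetFor(all, maxDoc, shared);
        System.out.println(a == b);           // true: one object, no big array
        System.out.println(a.exists(999));    // true
    }
}
```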


> Design a memory efficient DocSet if a query returns all docs
> 
>
> Key: SOLR-9764
> URL: https://issues.apache.org/jira/browse/SOLR-9764
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Michael Sun
> Attachments: SOLR-9764.patch, SOLR-9764.patch, SOLR-9764.patch, 
> SOLR-9764.patch, SOLR-9764.patch, SOLR_9764_no_cloneMe.patch
>
>
> In some use cases, particularly use cases with time series data that use a 
> collection alias and partition data into multiple small collections by 
> timestamp, a filter query can match all documents in a collection. Currently 
> BitDocSet is used, which contains a large array of long integers with every 
> bit set to 1. After querying, the resulting DocSet saved in the filter cache 
> is large and becomes one of the main memory consumers in these use cases.
> For example, suppose a Solr setup has 14 collections for data from the last 
> 14 days, each collection holding one day of data. A filter query for the last 
> week of data would result in at least six DocSets in the filter cache, each 
> matching all documents in one of six collections.
> This issue is to design a new DocSet that is memory efficient for such a use 
> case. The new DocSet removes the large array, reducing memory usage and GC 
> pressure without losing the advantage of a large filter cache.
> In particular, for use cases with time series data, a collection alias, and 
> data partitioned into multiple small collections by timestamp, the gain can 
> be large.
> For further optimization, it may be helpful to design a DocSet with run 
> length encoding. Thanks [~mmokhtar] for the suggestion.






[jira] [Commented] (SOLR-4735) Improve Solr metrics reporting

2016-12-02 Thread Jeff Wartes (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4735?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15716247#comment-15716247
 ] 

Jeff Wartes commented on SOLR-4735:
---

Yeah, I get that. I like this line of thought because it means we can create as 
many registries as make sense (cores, collections, logical code sections, etc.) 
without worrying about how to get everything reported. We only have to pick 
some names.

What about a class that extends MetricRegistry and also implements 
MetricRegistryListener? Call that a ListeningMetricRegistry or something. When 
the configuration asks for a reporter on some set of (registry) names, we 
create a new, perhaps non-shared ListeningMetricRegistry, use registerAll to 
scoop the metrics in the desired registries into it, and then call addListener 
on all the desired registries with the ListeningMetricRegistry so everything 
stays in sync?

So that could still mean a single registry with a ton of metrics, but only in 
cases where there's been an explicit request for a reporter on a ton of 
metrics. 
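The proposal above can be modeled with a few stand-in types. These are NOT the Dropwizard (Codahale) Metrics API — real code would extend com.codahale.metrics.MetricRegistry and implement MetricRegistryListener, which has separate callbacks per metric type — but the sketch shows the registerAll-then-addListener flow keeping the aggregate view in sync:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

// Simplified stand-in for a metrics registry; NOT the Dropwizard API.
class Registry {
    final Map<String, Object> metrics = new TreeMap<>();
    final List<Listener> listeners = new ArrayList<>();

    void register(String name, Object metric) {
        metrics.put(name, metric);
        for (Listener l : listeners) l.onMetricAdded(name, metric);
    }

    // Plays the role of registerAll + addListener: replay what is already
    // registered, then keep notifying as new metrics arrive.
    void addListener(Listener l) {
        listeners.add(l);
        metrics.forEach(l::onMetricAdded);  // scoop existing metrics
    }

    interface Listener {
        void onMetricAdded(String name, Object metric);
    }
}

// A registry that is also a listener: it aggregates several source
// registries and stays in sync as new metrics appear in them.
class ListeningRegistry extends Registry implements Registry.Listener {
    @Override
    public void onMetricAdded(String name, Object metric) {
        register(name, metric);   // mirror the source registry's metric
    }
}

public class MetricsAggregationSketch {
    public static void main(String[] args) {
        Registry cores = new Registry();
        Registry collections = new Registry();
        cores.register("core1.requests", 42L);

        // Build the aggregate view a reporter would consume.
        ListeningRegistry reporterView = new ListeningRegistry();
        cores.addListener(reporterView);        // existing + future metrics
        collections.addListener(reporterView);

        collections.register("coll1.queries", 7L);  // arrives later, still synced
        System.out.println(reporterView.metrics.keySet());
        // [coll1.queries, core1.requests]
    }
}
```

The aggregate registry only grows large when a reporter was explicitly configured over many source registries, matching the point above.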

> Improve Solr metrics reporting
> --
>
> Key: SOLR-4735
> URL: https://issues.apache.org/jira/browse/SOLR-4735
> Project: Solr
>  Issue Type: Improvement
>Reporter: Alan Woodward
>Assignee: Andrzej Bialecki 
>Priority: Minor
> Attachments: SOLR-4735.patch, SOLR-4735.patch, SOLR-4735.patch, 
> SOLR-4735.patch
>
>
> Following on from a discussion on the mailing list:
> http://search-lucene.com/m/IO0EI1qdyJF1/codahale=Solr+metrics+in+Codahale+metrics+and+Graphite+
> It would be good to make Solr play more nicely with existing devops 
> monitoring systems, such as Graphite or Ganglia.  Stats monitoring at the 
> moment is poll-only, either via JMX or through the admin stats page.  I'd 
> like to refactor things a bit to make this more pluggable.
> This patch is a start.  It adds a new interface, InstrumentedBean, which 
> extends SolrInfoMBean to return a 
> [[Metrics|http://metrics.codahale.com/manual/core/]] MetricRegistry, and a 
> couple of MetricReporters (which basically just duplicate the JMX and admin 
> page reporting that's there at the moment, but which should be more 
> extensible).  The patch includes a change to RequestHandlerBase showing how 
> this could work.  The idea would be to eventually replace the getStatistics() 
> call on SolrInfoMBean with this instead.
> The next step would be to allow more MetricReporters to be defined in 
> solrconfig.xml.  The Metrics library comes with ganglia and graphite 
> reporting modules, and we can add contrib plugins for both of those.
> There's some more general cleanup that could be done around SolrInfoMBean 
> (we've got two plugin handlers at /mbeans and /plugins that basically do the 
> same thing, and the beans themselves have some weirdly inconsistent data on 
> them - getVersion() returns different things for different impls, and 
> getSource() seems pretty useless), but maybe that's for another issue.






[jira] [Resolved] (LUCENE-7576) RegExp automaton causes NPE on Terms.intersect

2016-12-02 Thread Michael McCandless (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7576?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael McCandless resolved LUCENE-7576.

   Resolution: Fixed
Fix Version/s: 6.4
   master (7.0)

Thank you [~TomMortimer].

> RegExp automaton causes NPE on Terms.intersect
> --
>
> Key: LUCENE-7576
> URL: https://issues.apache.org/jira/browse/LUCENE-7576
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/codecs, core/index
>Affects Versions: 6.2.1
> Environment: java version "1.8.0_77" macOS 10.12.1
>Reporter: Tom Mortimer
>Assignee: Michael McCandless
>Priority: Minor
> Fix For: master (7.0), 6.4
>
> Attachments: LUCENE-7576.patch
>
>
> Calling org.apache.lucene.index.Terms.intersect(automaton, null) causes an 
> NPE:
> String index_path = 
> String term = 
> Directory directory = FSDirectory.open(Paths.get(index_path));
> IndexReader reader = DirectoryReader.open(directory);
> Fields fields = MultiFields.getFields(reader);
> Terms terms = fields.terms(args[1]);
> CompiledAutomaton automaton = new CompiledAutomaton(
>   new RegExp("do_not_match_anything").toAutomaton());
> TermsEnum te = terms.intersect(automaton, null);
> throws:
> Exception in thread "main" java.lang.NullPointerException
>   at 
> org.apache.lucene.codecs.blocktree.IntersectTermsEnum.<init>(IntersectTermsEnum.java:127)
>   at 
> org.apache.lucene.codecs.blocktree.FieldReader.intersect(FieldReader.java:185)
>   at org.apache.lucene.index.MultiTerms.intersect(MultiTerms.java:85)
> ...






Re: Memory leak in Solr

2016-12-02 Thread Scott Blum
Are you sure it's an actual leak, not just memory pinned by caches?

Related: https://issues.apache.org/jira/browse/SOLR-9810

On Fri, Dec 2, 2016 at 2:01 PM, S G  wrote:

> Hi,
>
> This post shows some stats on Solr which indicate that there might be a
> memory leak in there.
>
> http://stackoverflow.com/questions/40939166/is-this-a-memory-leak-in-solr
>
> Can someone please help to debug this?
> It might be a very good step in making Solr stable if we can fix this.
>
> Thanks
> SG
>


[jira] [Commented] (LUCENE-7576) RegExp automaton causes NPE on Terms.intersect

2016-12-02 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7576?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15716186#comment-15716186
 ] 

ASF subversion and git services commented on LUCENE-7576:
-

Commit fcccd317ddb44a742a0b3265fcf32923649f38cd in lucene-solr's branch 
refs/heads/master from Mike McCandless
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=fcccd31 ]

LUCENE-7576: detect when special case automaton is passed to Terms.intersect


> RegExp automaton causes NPE on Terms.intersect
> --
>
> Key: LUCENE-7576
> URL: https://issues.apache.org/jira/browse/LUCENE-7576
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/codecs, core/index
>Affects Versions: 6.2.1
> Environment: java version "1.8.0_77" macOS 10.12.1
>Reporter: Tom Mortimer
>Assignee: Michael McCandless
>Priority: Minor
> Attachments: LUCENE-7576.patch
>
>
> Calling org.apache.lucene.index.Terms.intersect(automaton, null) causes an 
> NPE:
> String index_path = 
> String term = 
> Directory directory = FSDirectory.open(Paths.get(index_path));
> IndexReader reader = DirectoryReader.open(directory);
> Fields fields = MultiFields.getFields(reader);
> Terms terms = fields.terms(args[1]);
> CompiledAutomaton automaton = new CompiledAutomaton(
>   new RegExp("do_not_match_anything").toAutomaton());
> TermsEnum te = terms.intersect(automaton, null);
> throws:
> Exception in thread "main" java.lang.NullPointerException
>   at 
> org.apache.lucene.codecs.blocktree.IntersectTermsEnum.<init>(IntersectTermsEnum.java:127)
>   at 
> org.apache.lucene.codecs.blocktree.FieldReader.intersect(FieldReader.java:185)
>   at org.apache.lucene.index.MultiTerms.intersect(MultiTerms.java:85)
> ...






[jira] [Commented] (LUCENE-7576) RegExp automaton causes NPE on Terms.intersect

2016-12-02 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7576?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15716189#comment-15716189
 ] 

ASF subversion and git services commented on LUCENE-7576:
-

Commit b6072f3ae539a5fc45a2bb9f99441dfeef4e440a in lucene-solr's branch 
refs/heads/branch_6x from Mike McCandless
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=b6072f3 ]

LUCENE-7576: detect when special case automaton is passed to Terms.intersect


> RegExp automaton causes NPE on Terms.intersect
> --
>
> Key: LUCENE-7576
> URL: https://issues.apache.org/jira/browse/LUCENE-7576
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/codecs, core/index
>Affects Versions: 6.2.1
> Environment: java version "1.8.0_77" macOS 10.12.1
>Reporter: Tom Mortimer
>Assignee: Michael McCandless
>Priority: Minor
> Attachments: LUCENE-7576.patch
>
>
> Calling org.apache.lucene.index.Terms.intersect(automaton, null) causes an 
> NPE:
> String index_path = 
> String term = 
> Directory directory = FSDirectory.open(Paths.get(index_path));
> IndexReader reader = DirectoryReader.open(directory);
> Fields fields = MultiFields.getFields(reader);
> Terms terms = fields.terms(args[1]);
> CompiledAutomaton automaton = new CompiledAutomaton(
>   new RegExp("do_not_match_anything").toAutomaton());
> TermsEnum te = terms.intersect(automaton, null);
> throws:
> Exception in thread "main" java.lang.NullPointerException
>   at 
> org.apache.lucene.codecs.blocktree.IntersectTermsEnum.<init>(IntersectTermsEnum.java:127)
>   at 
> org.apache.lucene.codecs.blocktree.FieldReader.intersect(FieldReader.java:185)
>   at org.apache.lucene.index.MultiTerms.intersect(MultiTerms.java:85)
> ...






[jira] [Commented] (SOLR-9819) Upgrade commons-fileupload to 1.3.2

2016-12-02 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9819?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15716171#comment-15716171
 ] 

ASF subversion and git services commented on SOLR-9819:
---

Commit c61268f7cd2c47884f98513febee6bb5f33ea6dc in lucene-solr's branch 
refs/heads/master from [~anshum]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=c61268f ]

SOLR-9819: Upgrade Apache commons-fileupload to 1.3.2, fixing a security 
vulnerability


> Upgrade commons-fileupload to 1.3.2
> ---
>
> Key: SOLR-9819
> URL: https://issues.apache.org/jira/browse/SOLR-9819
> Project: Solr
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 4.6, 5.5, 6.0, 6.1, 6.2, 6.3
>Reporter: Anshum Gupta
>Assignee: Anshum Gupta
>  Labels: commons-file-upload
> Attachments: SOLR-9819.patch
>
>
> We use Apache commons-fileupload 1.3.1. According to CVE-2016-3092 :
> "The MultipartStream class in Apache Commons Fileupload before 1.3.2, as used 
> in Apache Tomcat 7.x before 7.0.70, 8.x before 8.0.36, 8.5.x before 8.5.3, 
> and 9.x before 9.0.0.M7 and other products, allows remote attackers to cause 
> a denial of service (CPU consumption) via a long boundary string."
> [Source|http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2016-3092]
> We should upgrade to 1.3.2.






[jira] [Assigned] (SOLR-9822) Improve faceting performance with FieldCache

2016-12-02 Thread Yonik Seeley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9822?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yonik Seeley reassigned SOLR-9822:
--

Assignee: Yonik Seeley

> Improve faceting performance with FieldCache
> 
>
> Key: SOLR-9822
> URL: https://issues.apache.org/jira/browse/SOLR-9822
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Facet Module
>Reporter: Yonik Seeley
>Assignee: Yonik Seeley
> Fix For: master (7.0)
>
>
> This issue will try to specifically address the performance regressions of 
> faceting on FieldCache fields observed in SOLR-9599.






[jira] [Created] (SOLR-9822) Improve faceting performance with FieldCache

2016-12-02 Thread Yonik Seeley (JIRA)
Yonik Seeley created SOLR-9822:
--

 Summary: Improve faceting performance with FieldCache
 Key: SOLR-9822
 URL: https://issues.apache.org/jira/browse/SOLR-9822
 Project: Solr
  Issue Type: Sub-task
  Security Level: Public (Default Security Level. Issues are Public)
  Components: Facet Module
Reporter: Yonik Seeley


This issue will try to specifically address the performance regressions of 
faceting on FieldCache fields observed in SOLR-9599.






[jira] [Updated] (SOLR-9819) Upgrade commons-fileupload to 1.3.2

2016-12-02 Thread Anshum Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9819?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anshum Gupta updated SOLR-9819:
---
Description: 
We use Apache commons-fileupload 1.3.1. According to CVE-2016-3092 :

"The MultipartStream class in Apache Commons Fileupload before 1.3.2, as used 
in Apache Tomcat 7.x before 7.0.70, 8.x before 8.0.36, 8.5.x before 8.5.3, and 
9.x before 9.0.0.M7 and other products, allows remote attackers to cause a 
denial of service (CPU consumption) via a long boundary string."

[Source|http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2016-3092]

We should upgrade to 1.3.2.

  was:
We use Apache fileupload-commons 1.3.1. According to CVE-2016-3092 :

"The MultipartStream class in Apache Commons Fileupload before 1.3.2, as used 
in Apache Tomcat 7.x before 7.0.70, 8.x before 8.0.36, 8.5.x before 8.5.3, and 
9.x before 9.0.0.M7 and other products, allows remote attackers to cause a 
denial of service (CPU consumption) via a long boundary string."

[Source|http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2016-3092]

We should upgrade to 1.3.2.


> Upgrade commons-fileupload to 1.3.2
> ---
>
> Key: SOLR-9819
> URL: https://issues.apache.org/jira/browse/SOLR-9819
> Project: Solr
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 4.6, 5.5, 6.0, 6.1, 6.2, 6.3
>Reporter: Anshum Gupta
>Assignee: Anshum Gupta
>  Labels: commons-file-upload
> Attachments: SOLR-9819.patch
>
>
> We use Apache commons-fileupload 1.3.1. According to CVE-2016-3092 :
> "The MultipartStream class in Apache Commons Fileupload before 1.3.2, as used 
> in Apache Tomcat 7.x before 7.0.70, 8.x before 8.0.36, 8.5.x before 8.5.3, 
> and 9.x before 9.0.0.M7 and other products, allows remote attackers to cause 
> a denial of service (CPU consumption) via a long boundary string."
> [Source|http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2016-3092]
> We should upgrade to 1.3.2.






[jira] [Updated] (SOLR-9819) Upgrade commons-fileupload to 1.3.2

2016-12-02 Thread Anshum Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9819?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anshum Gupta updated SOLR-9819:
---
Summary: Upgrade commons-fileupload to 1.3.2  (was: Upgrade 
fileupload-commons to 1.3.2)

> Upgrade commons-fileupload to 1.3.2
> ---
>
> Key: SOLR-9819
> URL: https://issues.apache.org/jira/browse/SOLR-9819
> Project: Solr
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 4.6, 5.5, 6.0, 6.1, 6.2, 6.3
>Reporter: Anshum Gupta
>Assignee: Anshum Gupta
>  Labels: commons-file-upload
> Attachments: SOLR-9819.patch
>
>
> We use Apache fileupload-commons 1.3.1. According to CVE-2016-3092 :
> "The MultipartStream class in Apache Commons Fileupload before 1.3.2, as used 
> in Apache Tomcat 7.x before 7.0.70, 8.x before 8.0.36, 8.5.x before 8.5.3, 
> and 9.x before 9.0.0.M7 and other products, allows remote attackers to cause 
> a denial of service (CPU consumption) via a long boundary string."
> [Source|http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2016-3092]
> We should upgrade to 1.3.2.






[jira] [Commented] (SOLR-6203) cast exception while searching with sort function and result grouping

2016-12-02 Thread Christine Poerschke (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6203?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15716091#comment-15716091
 ] 

Christine Poerschke commented on SOLR-6203:
---

Created SOLR-9821 to separately (at some point) pursue the 
"QueryComponent.prepareGrouping: //TODO: move weighting of sort" mentioned 
above.

> cast exception while searching with sort function and result grouping
> -
>
> Key: SOLR-6203
> URL: https://issues.apache.org/jira/browse/SOLR-6203
> Project: Solr
>  Issue Type: Bug
>  Components: search
>Affects Versions: 4.7, 4.8
>Reporter: Nate Dire
>Assignee: Christine Poerschke
> Attachments: README, SOLR-6203-unittest.patch, 
> SOLR-6203-unittest.patch, SOLR-6203.patch, SOLR-6203.patch, SOLR-6203.patch, 
> SOLR-6203.patch
>
>
> After upgrading from 4.5.1 to 4.7+, a schema including a {{"*"}} dynamic 
> field as text gets a cast exception when using a sort function and result 
> grouping.  
> Repro (with example config):
> # Add {{"*"}} dynamic field as a {{TextField}}, eg:
> {noformat}
> 
> {noformat}
> #  Create  sharded collection
> {noformat}
> curl 
> 'http://localhost:8983/solr/admin/collections?action=CREATE&name=test&numShards=2&replicationFactor=2'
> {noformat}
> # Add example docs (query must have some results)
> # Submit query which sorts on a function result and uses result grouping:
> {noformat}
> {
>   "responseHeader": {
> "status": 500,
> "QTime": 50,
> "params": {
>   "sort": "sqrt(popularity) desc",
>   "indent": "true",
>   "q": "*:*",
>   "_": "1403709010008",
>   "group.field": "manu",
>   "group": "true",
>   "wt": "json"
> }
>   },
>   "error": {
> "msg": "java.lang.Double cannot be cast to 
> org.apache.lucene.util.BytesRef",
> "code": 500
>   }
> }
> {noformat}
> Source exception from log:
> {noformat}
> ERROR - 2014-06-25 08:10:10.055; org.apache.solr.common.SolrException; 
> java.lang.ClassCastException: java.lang.Double cannot be cast to 
> org.apache.lucene.util.BytesRef
> at 
> org.apache.solr.schema.FieldType.marshalStringSortValue(FieldType.java:981)
> at org.apache.solr.schema.TextField.marshalSortValue(TextField.java:176)
> at 
> org.apache.solr.search.grouping.distributed.shardresultserializer.SearchGroupsResultTransformer.serializeSearchGroup(SearchGroupsResultTransformer.java:125)
> at 
> org.apache.solr.search.grouping.distributed.shardresultserializer.SearchGroupsResultTransformer.transform(SearchGroupsResultTransformer.java:65)
> at 
> org.apache.solr.search.grouping.distributed.shardresultserializer.SearchGroupsResultTransformer.transform(SearchGroupsResultTransformer.java:43)
> at 
> org.apache.solr.search.grouping.CommandHandler.processResult(CommandHandler.java:193)
> at 
> org.apache.solr.handler.component.QueryComponent.process(QueryComponent.java:340)
> at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:218)
> at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
>   ...
> {noformat}
> It looks like {{serializeSearchGroup}} is matching the sort expression as the 
> {{"*"}} dynamic field, which is a TextField in the repro.
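The failure mode described above (a function-query sort value, which is a Double, reaching a string-marshalling field type) can be sketched with stand-in code; the names below mimic, but are not, the real Solr methods:

```java
// Stand-in sketch (hypothetical names, not the real Solr classes) of why a
// function sort value breaks the TextField marshalling path: function queries
// produce numeric sort values (Double), while a string-typed field marshals
// sort values by blindly casting (to BytesRef in Solr; byte[] here to stay
// self-contained).
public class CastSketch {
    // Stand-in for the string marshalling path in FieldType.
    static Object marshalStringSortValue(Object sortValue) {
        if (sortValue == null) return null;
        byte[] bytes = (byte[]) sortValue; // throws when handed a Double
        return new String(bytes);
    }

    public static void main(String[] args) {
        // A sort like sqrt(popularity) yields a numeric value per group head.
        Object functionSortValue = Double.valueOf(2.0);
        try {
            marshalStringSortValue(functionSortValue);
        } catch (ClassCastException e) {
            // Mirrors the "java.lang.Double cannot be cast to
            // org.apache.lucene.util.BytesRef" error in the log above.
            System.out.println("ClassCastException, as in the repro");
        }
    }
}
```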






[jira] [Created] (SOLR-9821) QueryComponent.prepareGrouping: //TODO: move weighting of sort

2016-12-02 Thread Christine Poerschke (JIRA)
Christine Poerschke created SOLR-9821:
-

 Summary: QueryComponent.prepareGrouping: //TODO: move weighting of 
sort
 Key: SOLR-9821
 URL: https://issues.apache.org/jira/browse/SOLR-9821
 Project: Solr
  Issue Type: Wish
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Christine Poerschke
Priority: Minor


[QueryComponent.java#L254-L259|https://github.com/apache/lucene-solr/blob/master/solr/core/src/java/org/apache/solr/handler/component/QueryComponent.java#L254-L259]
 has two {{//TODO: move weighting of sort}} comments. This ticket is to see 
what it would take to move the weighting.

Motivation: to potentially permit removal of GroupingSpecification's 
groupSortSpec (in favour of ResponseBuilder's sortSpec)

{code}
GroupingSpecification groupingSpec = new GroupingSpecification();
rb.setGroupingSpec(groupingSpec);

final SortSpec sortSpec = rb.getSortSpec();

//TODO: move weighting of sort
final SortSpec groupSortSpec = searcher.weightSortSpec(sortSpec, 
Sort.RELEVANCE);
...
groupingSpec.setGroupSortSpec(groupSortSpec);
{code}







[jira] [Commented] (SOLR-6203) cast exception while searching with sort function and result grouping

2016-12-02 Thread Christine Poerschke (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6203?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15716061#comment-15716061
 ] 

Christine Poerschke commented on SOLR-6203:
---

Hi Judith.

bq. ...  it's already confusing ... that the master code base has two Sorts 
(rb.getSortSpec().getSort() and rb.getGroupingSpec().getGroupSort()) in play at 
the same time in a grouped search (to say nothing of the within-group Sort) ...

Yes, that's a very fair point; looking beyond the scope of this bug-fix ticket, 
it would be good to make the confusing code less confusing.

bq. ... the one about rb.getSortSpec() in 
SearchGroupShardResponseProcessor.java: ...

* Actually it's not just about SearchGroupShardResponseProcessor (SGSRP): we 
can question not only why SGSRP uses both 
{{rb.getGroupingSpec().getGroupSort\[Spec\]()}} and {{rb.getSortSpec()}} but 
why _anything_ uses both {{rb.getGroupingSpec().getGroupSortSpec().get...()}} 
and {{rb.getSortSpec().get...()}} - from a quick look around, all the places 
concerned have access to {{rb}} and so could use either.

* Okay, then the next logical question is "Is there a difference between the 
two?" and a very cursory lookaround finds 
[QueryComponent.java|https://github.com/apache/lucene-solr/blob/master/solr/core/src/java/org/apache/solr/handler/component/QueryComponent.java]
 and
{code}
GroupingSpecification groupingSpec = new GroupingSpecification();
rb.setGroupingSpec(groupingSpec);

final SortSpec sortSpec = rb.getSortSpec();

//TODO: move weighting of sort
final SortSpec groupSortSpec = searcher.weightSortSpec(sortSpec, 
Sort.RELEVANCE);
...
groupingSpec.setGroupSortSpec(groupSortSpec);
{code}

* So based on that it seems that:
** the "TODO: move weighting of sort" could help get rid of 
GroupingSpecification's groupSort\[Spec\]
** since SearchGroupShardResponseProcessor.process uses (as you say) only the 
offset and count part of {{ss}} the
{code}
-SortSpec ss = rb.getSortSpec();
-Sort groupSort = rb.getGroupingSpec().getGroupSort();
+SortSpec groupSortSpec = rb.getGroupingSpec().getGroupSortSpec();
{code}
change you propose should work and be a small step towards making the code less 
confusing.

Code reading and 'thinking aloud' done - does that kind of make sense?

PS: I am not suggesting we get rid of GroupingSpecification's groupSort\[Spec\] 
here/at this time nor to pursue the "//TODO: move weighting of sort" here/at 
this time. But both would be worthwhile and yes here at this time we can stop 
using {{rb.getSortSpec()}} in SearchGroupShardResponseProcessor.process as you 
suggested.

Have a good weekend!

Christine

> cast exception while searching with sort function and result grouping
> -
>
> Key: SOLR-6203
> URL: https://issues.apache.org/jira/browse/SOLR-6203
> Project: Solr
>  Issue Type: Bug
>  Components: search
>Affects Versions: 4.7, 4.8
>Reporter: Nate Dire
>Assignee: Christine Poerschke
> Attachments: README, SOLR-6203-unittest.patch, 
> SOLR-6203-unittest.patch, SOLR-6203.patch, SOLR-6203.patch, SOLR-6203.patch, 
> SOLR-6203.patch
>
>
> After upgrading from 4.5.1 to 4.7+, a schema including a {{"*"}} dynamic 
> field as text gets a cast exception when using a sort function and result 
> grouping.  
> Repro (with example config):
> # Add {{"*"}} dynamic field as a {{TextField}}, eg:
> {noformat}
> 
> {noformat}
> #  Create  sharded collection
> {noformat}
> curl 
> 'http://localhost:8983/solr/admin/collections?action=CREATE&name=test&numShards=2&replicationFactor=2'
> {noformat}
> # Add example docs (query must have some results)
> # Submit query which sorts on a function result and uses result grouping:
> {noformat}
> {
>   "responseHeader": {
> "status": 500,
> "QTime": 50,
> "params": {
>   "sort": "sqrt(popularity) desc",
>   "indent": "true",
>   "q": "*:*",
>   "_": "1403709010008",
>   "group.field": "manu",
>   "group": "true",
>   "wt": "json"
> }
>   },
>   "error": {
> "msg": "java.lang.Double cannot be cast to 
> org.apache.lucene.util.BytesRef",
> "code": 500
>   }
> }
> {noformat}
> Source exception from log:
> {noformat}
> ERROR - 2014-06-25 08:10:10.055; org.apache.solr.common.SolrException; 
> java.lang.ClassCastException: java.lang.Double cannot be cast to 
> org.apache.lucene.util.BytesRef
> at 
> org.apache.solr.schema.FieldType.marshalStringSortValue(FieldType.java:981)
> at org.apache.solr.schema.TextField.marshalSortValue(TextField.java:176)
> at 
> org.apache.solr.search.grouping.distributed.shardresultserializer.SearchGroupsResultTransformer.serializeSearchGroup(SearchGroupsResultTransformer.java:125)
> at 
> 

[jira] [Commented] (SOLR-9811) Make it easier to manually execute overseer commands

2016-12-02 Thread Scott Blum (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9811?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15716043#comment-15716043
 ] 

Scott Blum commented on SOLR-9811:
--

I'm not sure, but it might have something to do with race conditions when 
"moving" a replica.

An operation we do a lot of is create a new replica on a new machine, wait for 
it to become active, then delete the old replica.  It's possible that this 
process is what sometimes leaves us with a single replica marked both "DOWN" 
and "LEADER".

> Make it easier to manually execute overseer commands
> 
>
> Key: SOLR-9811
> URL: https://issues.apache.org/jira/browse/SOLR-9811
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Reporter: Mike Drob
>
> Sometimes solrcloud will get into a bad state w.r.t. election or recovery and 
> it would be useful to have the ability to manually publish a node as active 
> or leader. This would be an alternative to some current ops practices of 
> restarting services, which may take a while to complete given many cores 
> hosted on a single server.
> This is an expert operator technique and readers should be made aware of 
> this, a.k.a. the "I don't care, just get it running" approach.






[JENKINS] Lucene-Solr-master-Solaris (64bit/jdk1.8.0) - Build # 986 - Still Unstable!

2016-12-02 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Solaris/986/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  org.apache.solr.cloud.BasicDistributedZk2Test.test

Error Message:


Stack Trace:
java.lang.NullPointerException
at 
__randomizedtesting.SeedInfo.seed([8F3CC530A3C942D2:768FAEA0D352F2A]:0)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:1143)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:1037)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:149)
at org.apache.solr.client.solrj.SolrClient.commit(SolrClient.java:484)
at org.apache.solr.client.solrj.SolrClient.commit(SolrClient.java:463)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.commit(AbstractFullDistribZkTestBase.java:1530)
at 
org.apache.solr.cloud.BasicDistributedZk2Test.brindDownShardIndexSomeDocsAndRecover(BasicDistributedZk2Test.java:278)
at 
org.apache.solr.cloud.BasicDistributedZk2Test.test(BasicDistributedZk2Test.java:98)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:985)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:960)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:811)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:462)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 

[jira] [Commented] (SOLR-9760) solr.cmd on Windows requires modify permissions in the current directory

2016-12-02 Thread Alex Crome (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9760?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15716017#comment-15716017
 ] 

Alex Crome commented on SOLR-9760:
--

This same fix also fixes a problem with running solr in Azure Websites - 
https://stackoverflow.com/questions/40794626/unable-to-run-solr-on-azure-web-apps

> solr.cmd on Windows requires modify permissions in the current directory
> 
>
> Key: SOLR-9760
> URL: https://issues.apache.org/jira/browse/SOLR-9760
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Server
>Affects Versions: 6.3
> Environment: Windows
>Reporter: Alex Crome
>
> Currently starting solr fails if the user does not have permission to write 
> to the current directory.  This is caused by the resolve_java_vendor function 
> writing a temporary file to the current directory (javares). 
> {code}
> :resolve_java_vendor
> set "JAVA_VENDOR=Oracle"
> "%JAVA%" -version 2>&1 | findstr /i "IBM J9" > javares
> set /p JAVA_VENDOR_OUT=<javares
> del javares
> if NOT "%JAVA_VENDOR_OUT%" == "" (
>   set "JAVA_VENDOR=IBM J9"
> )
> {code}
> Rather than writing this temporary file to disk, The exit code of findstr can 
> be used to determine if there is a match.  (0 == match, 1 == no match, 2 == 
> syntax error)
> {code}
> :resolve_java_vendor
> "%JAVA%" -version 2>&1 | findstr /i "IBM J9" > nul
> if %ERRORLEVEL% == 1 (set "JAVA_VENDOR=Oracle") else (set "JAVA_VENDOR=IBM J9")
> {code}
> By not writing this temp file, you can reduce the permissions solr needs.  As 
> a workaround until this is fixed, you can start solr in a directory that has 
> the required permissions.






[jira] [Updated] (SOLR-9819) Upgrade fileupload-commons to 1.3.2

2016-12-02 Thread Anshum Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9819?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anshum Gupta updated SOLR-9819:
---
Affects Version/s: 6.1
   6.2
   6.3

> Upgrade fileupload-commons to 1.3.2
> ---
>
> Key: SOLR-9819
> URL: https://issues.apache.org/jira/browse/SOLR-9819
> Project: Solr
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 4.6, 5.5, 6.0, 6.1, 6.2, 6.3
>Reporter: Anshum Gupta
>Assignee: Anshum Gupta
>  Labels: commons-file-upload
> Attachments: SOLR-9819.patch
>
>
> We use Apache fileupload-commons 1.3.1. According to CVE-2016-3092 :
> "The MultipartStream class in Apache Commons Fileupload before 1.3.2, as used 
> in Apache Tomcat 7.x before 7.0.70, 8.x before 8.0.36, 8.5.x before 8.5.3, 
> and 9.x before 9.0.0.M7 and other products, allows remote attackers to cause 
> a denial of service (CPU consumption) via a long boundary string."
> [Source|http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2016-3092]
> We should upgrade to 1.3.2.






[jira] [Updated] (SOLR-9819) Upgrade fileupload-commons to 1.3.2

2016-12-02 Thread Anshum Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9819?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anshum Gupta updated SOLR-9819:
---
Attachment: SOLR-9819.patch

The tests pass, so it seems like we're good to go.

> Upgrade fileupload-commons to 1.3.2
> ---
>
> Key: SOLR-9819
> URL: https://issues.apache.org/jira/browse/SOLR-9819
> Project: Solr
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 4.6, 5.5, 6.0
>Reporter: Anshum Gupta
>Assignee: Anshum Gupta
>  Labels: commons-file-upload
> Attachments: SOLR-9819.patch
>
>
> We use Apache fileupload-commons 1.3.1. According to CVE-2016-3092 :
> "The MultipartStream class in Apache Commons Fileupload before 1.3.2, as used 
> in Apache Tomcat 7.x before 7.0.70, 8.x before 8.0.36, 8.5.x before 8.5.3, 
> and 9.x before 9.0.0.M7 and other products, allows remote attackers to cause 
> a denial of service (CPU consumption) via a long boundary string."
> [Source|http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2016-3092]
> We should upgrade to 1.3.2.






[jira] [Commented] (SOLR-4735) Improve Solr metrics reporting

2016-12-02 Thread Andrzej Bialecki (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4735?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15715972#comment-15715972
 ] 

Andrzej Bialecki  commented on SOLR-4735:
-

bq. `MetricRegistry` is really just a bunch of convenience methods and 
thread-safety around a `MetricSet`
Well, my comment referred to the fact that {{MetricRegistry}} actually uses 
{{ConcurrentHashMap}} for keeping metrics, and having a map with 100k+ keys is 
never good. But I agree the API could have been more flexible - if reporters 
were taking {{MetricSet}} we could fake one either from multiple registries or 
from a subset of metrics from one registry, or a combination thereof.

We can implement an aggregating franken-registry by overriding all methods in 
{{MetricRegistry}} to always delegate operations to sub-registries. It's a 
little bit hackish but doable. We could create these as non-shared registries 
only for the purpose of reporting.
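To make the {{MetricSet}} idea above concrete, here is a self-contained sketch of aggregating several registries into one name-prefixed view; {{Registry}}, {{Metric}}, and {{Counter}} are minimal stand-ins, not the Dropwizard API:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.TreeMap;

// Self-contained sketch of presenting several per-core registries as one
// flat, name-prefixed view for a reporter. The nested types are minimal
// stand-ins for Dropwizard's MetricRegistry/Metric, not the real API.
public class AggregateView {
    interface Metric {}
    static class Counter implements Metric { long count; }

    static class Registry {
        final Map<String, Metric> metrics = new HashMap<>();
        void register(String name, Metric m) { metrics.put(name, m); }
    }

    // Flatten sub-registries into one sorted map, prefixing each metric
    // name with its registry's name.
    static Map<String, Metric> aggregate(Map<String, Registry> registries) {
        Map<String, Metric> all = new TreeMap<>();
        for (Map.Entry<String, Registry> r : registries.entrySet()) {
            for (Map.Entry<String, Metric> m : r.getValue().metrics.entrySet()) {
                all.put(r.getKey() + "." + m.getKey(), m.getValue());
            }
        }
        return all;
    }

    public static void main(String[] args) {
        Registry core = new Registry();
        core.register("queries", new Counter());
        Registry node = new Registry();
        node.register("threads", new Counter());
        Map<String, Registry> regs = new HashMap<>();
        regs.put("solr.core.collection1", core);
        regs.put("solr.node", node);
        System.out.println(aggregate(regs).keySet());
        // prints [solr.core.collection1.queries, solr.node.threads]
    }
}
```

The real delegating variant would instead extend {{MetricRegistry}} and forward lookups to the sub-registries, which is the "hackish but doable" option mentioned above.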

> Improve Solr metrics reporting
> --
>
> Key: SOLR-4735
> URL: https://issues.apache.org/jira/browse/SOLR-4735
> Project: Solr
>  Issue Type: Improvement
>Reporter: Alan Woodward
>Assignee: Andrzej Bialecki 
>Priority: Minor
> Attachments: SOLR-4735.patch, SOLR-4735.patch, SOLR-4735.patch, 
> SOLR-4735.patch
>
>
> Following on from a discussion on the mailing list:
> http://search-lucene.com/m/IO0EI1qdyJF1/codahale=Solr+metrics+in+Codahale+metrics+and+Graphite+
> It would be good to make Solr play more nicely with existing devops 
> monitoring systems, such as Graphite or Ganglia.  Stats monitoring at the 
> moment is poll-only, either via JMX or through the admin stats page.  I'd 
> like to refactor things a bit to make this more pluggable.
> This patch is a start.  It adds a new interface, InstrumentedBean, which 
> extends SolrInfoMBean to return a 
> [[Metrics|http://metrics.codahale.com/manual/core/]] MetricRegistry, and a 
> couple of MetricReporters (which basically just duplicate the JMX and admin 
> page reporting that's there at the moment, but which should be more 
> extensible).  The patch includes a change to RequestHandlerBase showing how 
> this could work.  The idea would be to eventually replace the getStatistics() 
> call on SolrInfoMBean with this instead.
> The next step would be to allow more MetricReporters to be defined in 
> solrconfig.xml.  The Metrics library comes with ganglia and graphite 
> reporting modules, and we can add contrib plugins for both of those.
> There's some more general cleanup that could be done around SolrInfoMBean 
> (we've got two plugin handlers at /mbeans and /plugins that basically do the 
> same thing, and the beans themselves have some weirdly inconsistent data on 
> them - getVersion() returns different things for different impls, and 
> getSource() seems pretty useless), but maybe that's for another issue.






Memory leak in Solr

2016-12-02 Thread S G
Hi,

This post shows some stats on Solr which indicate that there might be a
memory leak in there.

http://stackoverflow.com/questions/40939166/is-this-a-memory-leak-in-solr

Can someone please help to debug this?
It might be a very good step in making Solr stable if we can fix this.

Thanks
SG


[jira] [Comment Edited] (SOLR-4735) Improve Solr metrics reporting

2016-12-02 Thread Andrzej Bialecki (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4735?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15715931#comment-15715931
 ] 

Andrzej Bialecki  edited comment on SOLR-4735 at 12/2/16 6:53 PM:
--

bq. Will we be instantiating separate reporters for each `Group` then?
Part of the refactoring I'm working on is moving reporter configs to 
{{solr.xml}} under {{ ...}} . Then appropriate reporters would be created 
for each group at the time when the component that manages this group of 
metrics is initialized (eg. "core" on {{SolrCore}} creation, "node" when 
{{CoreContainer}} is loaded etc).

Regarding reporters that could take multiple registries ... yeah, it seems a 
waste to create separate reporters for each core if they have identical 
configs. I'm not sure yet how to solve this - eg. for JMX reporting any sort of 
aggregate reporter would have to create multiple {{JMXReporter}}-s anyway, one 
per registry, because that's how the API is implemented.

bq. it would be nice if we could just specify which registry we'd like a 
reporter to attach to
Hmm, we could perhaps use either {{group}} or {{registry}} attribute in the 
reporter config.
(edit: ugh, Markdown vs Jira markup)


was (Author: ab):
bq. Will we be instantiating separate reporters for each `Group` then?
Part of the refactoring I'm working on is moving reporter configs to `solr.xml` 
under ` ...` . Then appropriate reporters would be created for each group 
at the time when the component that manages this group of metrics is 
initialized (eg. "core" on SolrCore creation, "node" when `CoreContainer` is 
loaded etc).

Regarding reporters that could take multiple registries ... yeah, it seems a 
waste to create separate reporters for each core if they have identical 
configs. I'm not sure yet how to solve this - eg. for JMX reporting any sort of 
aggregate reporter would have to create multiple `JMXReporter`s anyway, one per 
registry, because that's how the API is implemented.

bq. it would be nice if we could just specify which registry we'd like a 
reporter to attach to
Hmm, we could perhaps use either `group` or `registry` attribute in the 
reporter config.

> Improve Solr metrics reporting
> --
>
> Key: SOLR-4735
> URL: https://issues.apache.org/jira/browse/SOLR-4735
> Project: Solr
>  Issue Type: Improvement
>Reporter: Alan Woodward
>Assignee: Andrzej Bialecki 
>Priority: Minor
> Attachments: SOLR-4735.patch, SOLR-4735.patch, SOLR-4735.patch, 
> SOLR-4735.patch
>
>
> Following on from a discussion on the mailing list:
> http://search-lucene.com/m/IO0EI1qdyJF1/codahale=Solr+metrics+in+Codahale+metrics+and+Graphite+
> It would be good to make Solr play more nicely with existing devops 
> monitoring systems, such as Graphite or Ganglia.  Stats monitoring at the 
> moment is poll-only, either via JMX or through the admin stats page.  I'd 
> like to refactor things a bit to make this more pluggable.
> This patch is a start.  It adds a new interface, InstrumentedBean, which 
> extends SolrInfoMBean to return a 
> [[Metrics|http://metrics.codahale.com/manual/core/]] MetricRegistry, and a 
> couple of MetricReporters (which basically just duplicate the JMX and admin 
> page reporting that's there at the moment, but which should be more 
> extensible).  The patch includes a change to RequestHandlerBase showing how 
> this could work.  The idea would be to eventually replace the getStatistics() 
> call on SolrInfoMBean with this instead.
> The next step would be to allow more MetricReporters to be defined in 
> solrconfig.xml.  The Metrics library comes with ganglia and graphite 
> reporting modules, and we can add contrib plugins for both of those.
> There's some more general cleanup that could be done around SolrInfoMBean 
> (we've got two plugin handlers at /mbeans and /plugins that basically do the 
> same thing, and the beans themselves have some weirdly inconsistent data on 
> them - getVersion() returns different things for different impls, and 
> getSource() seems pretty useless), but maybe that's for another issue.






[jira] [Commented] (SOLR-4735) Improve Solr metrics reporting

2016-12-02 Thread Andrzej Bialecki (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4735?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15715931#comment-15715931
 ] 

Andrzej Bialecki  commented on SOLR-4735:
-

bq. Will we be instantiating separate reporters for each `Group` then?
Part of the refactoring I'm working on is moving reporter configs to `solr.xml` 
under ` ...` . Then appropriate reporters would be created for each group 
at the time when the component that manages this group of metrics is 
initialized (eg. "core" on SolrCore creation, "node" when `CoreContainer` is 
loaded etc).

Regarding reporters that could take multiple registries ... yeah, it seems a 
waste to create separate reporters for each core if they have identical 
configs. I'm not sure yet how to solve this - eg. for JMX reporting any sort of 
aggregate reporter would have to create multiple `JMXReporter`s anyway, one per 
registry, because that's how the API is implemented.

bq. it would be nice if we could just specify which registry we'd like a 
reporter to attach to
Hmm, we could perhaps use either `group` or `registry` attribute in the 
reporter config.

> Improve Solr metrics reporting
> --
>
> Key: SOLR-4735
> URL: https://issues.apache.org/jira/browse/SOLR-4735
> Project: Solr
>  Issue Type: Improvement
>Reporter: Alan Woodward
>Assignee: Andrzej Bialecki 
>Priority: Minor
> Attachments: SOLR-4735.patch, SOLR-4735.patch, SOLR-4735.patch, 
> SOLR-4735.patch
>
>
> Following on from a discussion on the mailing list:
> http://search-lucene.com/m/IO0EI1qdyJF1/codahale=Solr+metrics+in+Codahale+metrics+and+Graphite+
> It would be good to make Solr play more nicely with existing devops 
> monitoring systems, such as Graphite or Ganglia.  Stats monitoring at the 
> moment is poll-only, either via JMX or through the admin stats page.  I'd 
> like to refactor things a bit to make this more pluggable.
> This patch is a start.  It adds a new interface, InstrumentedBean, which 
> extends SolrInfoMBean to return a 
> [[Metrics|http://metrics.codahale.com/manual/core/]] MetricRegistry, and a 
> couple of MetricReporters (which basically just duplicate the JMX and admin 
> page reporting that's there at the moment, but which should be more 
> extensible).  The patch includes a change to RequestHandlerBase showing how 
> this could work.  The idea would be to eventually replace the getStatistics() 
> call on SolrInfoMBean with this instead.
> The next step would be to allow more MetricReporters to be defined in 
> solrconfig.xml.  The Metrics library comes with ganglia and graphite 
> reporting modules, and we can add contrib plugins for both of those.
> There's some more general cleanup that could be done around SolrInfoMBean 
> (we've got two plugin handlers at /mbeans and /plugins that basically do the 
> same thing, and the beans themselves have some weirdly inconsistent data on 
> them - getVersion() returns different things for different impls, and 
> getSource() seems pretty useless), but maybe that's for another issue.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9820) PerSegmentSingleValuedFaceting - mark "contains" and "ignoreCase" fields private

2016-12-02 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9820?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15715884#comment-15715884
 ] 

David Smiley commented on SOLR-9820:


They are currently so-called package-protected, not public.  What is the 
motivation here?  I think you might get interest in getting this committed if 
you morph this issue to include the follow-on work you speak of.  But as-is, 
it's too boring to commit (speaking for myself).

> PerSegmentSingleValuedFaceting - mark "contains" and "ignoreCase" fields 
> private
> 
>
> Key: SOLR-9820
> URL: https://issues.apache.org/jira/browse/SOLR-9820
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Jonny Marks
>Priority: Minor
> Attachments: SOLR-9820.patch
>
>
> This patch marks the "contains" and "ignoreCase" fields in 
> PerSegmentSingleValuedFaceting private (they are currently public). 
> A separate patch will follow where I propose to replace them with a 
> customizable variant.






[jira] [Commented] (SOLR-4735) Improve Solr metrics reporting

2016-12-02 Thread Jeff Wartes (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4735?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15715880#comment-15715880
 ] 

Jeff Wartes commented on SOLR-4735:
---

`MetricRegistry` is really just a bunch of convenience methods and 
thread-safety around a `MetricSet`. There isn't much overhead difference 
between the two. But really, when I think of a `MetricRegistry`, I think of it 
as "a set of metrics I want to attach a reporter to", nothing more. 
It's a bit disappointing that reporters take a Registry instead of a MetricSet, 
since a Registry is a MetricSet.

With that in mind, one strategy would be to have every logical grouping of 
metrics use its own dedicated (probably shared) registry, and then bind 
reporters and registries together at reporter definition time. 

That is, create a non-shared registry explicitly for the purpose of attaching a 
reporter to it, and only when asked to define a reporter. The reporter 
definition would then include the names of the registries to be reported. Under 
the hood, a new registry would be created as the union of the requested 
registries, and the reporter instantiated and attached to that. We'd have to 
make sure the namespace of all the metrics in the metric groups is unique, so 
that arbitrary groups can be combined without conflict, but that sounds 
desirable regardless.
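
The union-at-definition-time idea can be sketched with plain Java collections. This is a hypothetical stand-in (all names here are illustrative) rather than the real Dropwizard API, but the real `MetricRegistry.registerAll()` fails the same way on duplicate names, which is why per-group metric namespaces must be unique:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the "union registry" strategy: combine several metric groups
// into one registry to attach a reporter to, failing fast on name clashes.
public class RegistryUnionSketch {

    @SafeVarargs
    static Map<String, Object> union(Map<String, Object>... groups) {
        Map<String, Object> combined = new HashMap<>();
        for (Map<String, Object> group : groups) {
            for (Map.Entry<String, Object> e : group.entrySet()) {
                // Reject a duplicate metric name across groups.
                if (combined.putIfAbsent(e.getKey(), e.getValue()) != null) {
                    throw new IllegalArgumentException(
                        "A metric named " + e.getKey() + " already exists");
                }
            }
        }
        return combined; // a reporter would be instantiated against this union
    }

    public static void main(String[] args) {
        Map<String, Object> queryGroup = new HashMap<>();
        queryGroup.put("solr.core1.query.requests", 42L);
        Map<String, Object> updateGroup = new HashMap<>();
        updateGroup.put("solr.core1.update.requests", 7L);
        System.out.println(union(queryGroup, updateGroup).size()); // prints 2
    }
}
```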


> Improve Solr metrics reporting
> --
>
> Key: SOLR-4735
> URL: https://issues.apache.org/jira/browse/SOLR-4735
> Project: Solr
>  Issue Type: Improvement
>Reporter: Alan Woodward
>Assignee: Andrzej Bialecki 
>Priority: Minor
> Attachments: SOLR-4735.patch, SOLR-4735.patch, SOLR-4735.patch, 
> SOLR-4735.patch
>
>
> Following on from a discussion on the mailing list:
> http://search-lucene.com/m/IO0EI1qdyJF1/codahale=Solr+metrics+in+Codahale+metrics+and+Graphite+
> It would be good to make Solr play more nicely with existing devops 
> monitoring systems, such as Graphite or Ganglia.  Stats monitoring at the 
> moment is poll-only, either via JMX or through the admin stats page.  I'd 
> like to refactor things a bit to make this more pluggable.
> This patch is a start.  It adds a new interface, InstrumentedBean, which 
> extends SolrInfoMBean to return a 
> [Metrics|http://metrics.codahale.com/manual/core/] MetricRegistry, and a 
> couple of MetricReporters (which basically just duplicate the JMX and admin 
> page reporting that's there at the moment, but which should be more 
> extensible).  The patch includes a change to RequestHandlerBase showing how 
> this could work.  The idea would be to eventually replace the getStatistics() 
> call on SolrInfoMBean with this instead.
> The next step would be to allow more MetricReporters to be defined in 
> solrconfig.xml.  The Metrics library comes with ganglia and graphite 
> reporting modules, and we can add contrib plugins for both of those.
> There's some more general cleanup that could be done around SolrInfoMBean 
> (we've got two plugin handlers at /mbeans and /plugins that basically do the 
> same thing, and the beans themselves have some weirdly inconsistent data on 
> them - getVersion() returns different things for different impls, and 
> getSource() seems pretty useless), but maybe that's for another issue.






[jira] [Commented] (SOLR-9811) Make it easier to manually execute overseer commands

2016-12-02 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9811?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15715867#comment-15715867
 ] 

David Smiley commented on SOLR-9811:


FWIW I also recently tried REQUESTRECOVERY and it didn't work, even though I 
was very patient.  Eventually I restarted the node and it was then happy.  I 
don't recall seeing the message Scott saw, but I don't have the logs anymore (I 
think) to be sure.

> Make it easier to manually execute overseer commands
> 
>
> Key: SOLR-9811
> URL: https://issues.apache.org/jira/browse/SOLR-9811
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Reporter: Mike Drob
>
> Sometimes solrcloud will get into a bad state w.r.t. election or recovery and 
> it would be useful to have the ability to manually publish a node as active 
> or leader. This would be an alternative to some current ops practices of 
> restarting services, which may take a while to complete given many cores 
> hosted on a single server.
> This is an expert operator technique and readers should be made aware of 
> this, a.k.a. the "I don't care, just get it running" approach.






[jira] [Commented] (SOLR-9811) Make it easier to manually execute overseer commands

2016-12-02 Thread Ishan Chattopadhyaya (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9811?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15715858#comment-15715858
 ] 

Ishan Chattopadhyaya commented on SOLR-9811:


[~dragonsinth] do you know how to reproduce or what's the cause?

> Make it easier to manually execute overseer commands
> 
>
> Key: SOLR-9811
> URL: https://issues.apache.org/jira/browse/SOLR-9811
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Reporter: Mike Drob
>
> Sometimes solrcloud will get into a bad state w.r.t. election or recovery and 
> it would be useful to have the ability to manually publish a node as active 
> or leader. This would be an alternative to some current ops practices of 
> restarting services, which may take a while to complete given many cores 
> hosted on a single server.
> This is an expert operator technique and readers should be made aware of 
> this, a.k.a. the "I don't care, just get it running" approach.






[jira] [Commented] (SOLR-9811) Make it easier to manually execute overseer commands

2016-12-02 Thread Scott Blum (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9811?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15715840#comment-15715840
 ] 

Scott Blum commented on SOLR-9811:
--

The replica is marked as both LEADER and DOWN.

Basically, I can't FORCELEADER because the replica isn't active, and I can't 
force recovery because the replica is already leader.

> Make it easier to manually execute overseer commands
> 
>
> Key: SOLR-9811
> URL: https://issues.apache.org/jira/browse/SOLR-9811
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Reporter: Mike Drob
>
> Sometimes solrcloud will get into a bad state w.r.t. election or recovery and 
> it would be useful to have the ability to manually publish a node as active 
> or leader. This would be an alternative to some current ops practices of 
> restarting services, which may take a while to complete given many cores 
> hosted on a single server.
> This is an expert operator technique and readers should be made aware of 
> this, a.k.a. the "I don't care, just get it running" approach.






[jira] [Commented] (SOLR-9811) Make it easier to manually execute overseer commands

2016-12-02 Thread Scott Blum (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9811?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15715835#comment-15715835
 ] 

Scott Blum commented on SOLR-9811:
--

REQUESTRECOVERY did not work:

{code}
2016-12-02 18:03:02.611 ERROR 
(recoveryExecutor-3-thread-10-processing-n:10.240.0.69:8983_solr 
x:24VFQ_shard1_replica0 s:shard1 c:24VFQ r:core_node1) [c:24VFQ s:shard1 
r:core_node1 x:24VFQ_shard1_replica0] o.a.s.c.RecoveryStrategy Error while 
trying to recover. 
core=24VFQ_shard1_replica0:org.apache.solr.common.SolrException: Cloud state 
still says we are leader.
at 
org.apache.solr.cloud.RecoveryStrategy.doRecovery(RecoveryStrategy.java:320)
{code}
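
For reference, a sketch of how that recovery request would be issued against the CoreAdmin API (host and port are assumed defaults; the core name is taken from the log above):

```shell
# REQUESTRECOVERY is a CoreAdmin action; adjust host/port for the node hosting the core.
CORE="24VFQ_shard1_replica0"
URL="http://localhost:8983/solr/admin/cores?action=REQUESTRECOVERY&core=${CORE}"
echo "$URL"
# curl "$URL"   # run against the live node
```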

> Make it easier to manually execute overseer commands
> 
>
> Key: SOLR-9811
> URL: https://issues.apache.org/jira/browse/SOLR-9811
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Reporter: Mike Drob
>
> Sometimes solrcloud will get into a bad state w.r.t. election or recovery and 
> it would be useful to have the ability to manually publish a node as active 
> or leader. This would be an alternative to some current ops practices of 
> restarting services, which may take a while to complete given many cores 
> hosted on a single server.
> This is an expert operator technique and readers should be made aware of 
> this, a.k.a. the "I don't care, just get it running" approach.






[jira] [Updated] (SOLR-9820) PerSegmentSingleValuedFaceting - mark "contains" and "ignoreCase" fields private

2016-12-02 Thread Jonny Marks (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonny Marks updated SOLR-9820:
--
Attachment: SOLR-9820.patch

> PerSegmentSingleValuedFaceting - mark "contains" and "ignoreCase" fields 
> private
> 
>
> Key: SOLR-9820
> URL: https://issues.apache.org/jira/browse/SOLR-9820
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Jonny Marks
>Priority: Minor
> Attachments: SOLR-9820.patch
>
>
> This patch marks the "contains" and "ignoreCase" fields in 
> PerSegmentSingleValuedFaceting private (they are currently public). 
> A separate patch will follow where I propose to replace them with a 
> customizable variant.






[jira] [Created] (SOLR-9820) PerSegmentSingleValuedFaceting - mark "contains" and "ignoreCase" fields private

2016-12-02 Thread Jonny Marks (JIRA)
Jonny Marks created SOLR-9820:
-

 Summary: PerSegmentSingleValuedFaceting - mark "contains" and 
"ignoreCase" fields private
 Key: SOLR-9820
 URL: https://issues.apache.org/jira/browse/SOLR-9820
 Project: Solr
  Issue Type: Task
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Jonny Marks
Priority: Minor


This patch marks the "contains" and "ignoreCase" fields in 
PerSegmentSingleValuedFaceting private (they are currently public). 

A separate patch will follow where I propose to replace them with a 
customizable variant.






[jira] [Commented] (SOLR-9817) Make Solr server startup directory configurable

2016-12-02 Thread Hrishikesh Gadre (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9817?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15715725#comment-15715725
 ] 

Hrishikesh Gadre commented on SOLR-9817:


[~elyograg] 

bq. Why do you want to do this?

I ran into some issues integrating Solr 6 into the Cloudera platform, but I 
have since figured out an alternative solution that does not require this 
change. So I think we should close this as "Won't Fix".

> Make Solr server startup directory configurable
> ---
>
> Key: SOLR-9817
> URL: https://issues.apache.org/jira/browse/SOLR-9817
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 6.0
>Reporter: Hrishikesh Gadre
>Assignee: Mark Miller
>Priority: Minor
>
> The solr startup script (bin/solr) is hardcoded to use the 
> /server directory as the working directory during the 
> startup. 
> https://github.com/apache/lucene-solr/blob/9eaea79f5c89094c08f52245b9473ca14f368f57/solr/bin/solr#L1652
> This jira is to make the "current working directory" for Solr configurable.






[jira] [Updated] (SOLR-9819) Upgrade fileupload-commons to 1.3.2

2016-12-02 Thread Anshum Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9819?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anshum Gupta updated SOLR-9819:
---
Reporter: Anshum Gupta  (was: Jeff Field)

> Upgrade fileupload-commons to 1.3.2
> ---
>
> Key: SOLR-9819
> URL: https://issues.apache.org/jira/browse/SOLR-9819
> Project: Solr
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 4.6, 5.5, 6.0
>Reporter: Anshum Gupta
>Assignee: Anshum Gupta
>  Labels: commons-file-upload
>
> We use Apache fileupload-commons 1.3.1. According to CVE-2016-3092 :
> "The MultipartStream class in Apache Commons Fileupload before 1.3.2, as used 
> in Apache Tomcat 7.x before 7.0.70, 8.x before 8.0.36, 8.5.x before 8.5.3, 
> and 9.x before 9.0.0.M7 and other products, allows remote attackers to cause 
> a denial of service (CPU consumption) via a long boundary string."
> [Source|http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2016-3092]
> We should upgrade to 1.3.2.






[jira] [Updated] (SOLR-9819) Upgrade fileupload-commons to 1.3.2

2016-12-02 Thread Anshum Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9819?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anshum Gupta updated SOLR-9819:
---
Description: 
We use Apache fileupload-commons 1.3.1. According to CVE-2016-3092 :

"The MultipartStream class in Apache Commons Fileupload before 1.3.2, as used 
in Apache Tomcat 7.x before 7.0.70, 8.x before 8.0.36, 8.5.x before 8.5.3, and 
9.x before 9.0.0.M7 and other products, allows remote attackers to cause a 
denial of service (CPU consumption) via a long boundary string."

[Source|http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2016-3092]

We should upgrade to 1.3.2.

  was:
The project appears to pull in FileUpload 1.2.1. According to CVE-2014-0050:

"MultipartStream.java in Apache Commons FileUpload before 1.3.1, as used in 
Apache Tomcat, JBoss Web, and other products, allows remote attackers to cause 
a denial of service (infinite loop and CPU consumption) via a crafted 
Content-Type header that bypasses a loop's intended exit conditions."

[Source|https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2014-0050]


> Upgrade fileupload-commons to 1.3.2
> ---
>
> Key: SOLR-9819
> URL: https://issues.apache.org/jira/browse/SOLR-9819
> Project: Solr
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 4.6, 5.5, 6.0
>Reporter: Jeff Field
>Assignee: Jan Høydahl
>  Labels: commons-file-upload
>
> We use Apache fileupload-commons 1.3.1. According to CVE-2016-3092 :
> "The MultipartStream class in Apache Commons Fileupload before 1.3.2, as used 
> in Apache Tomcat 7.x before 7.0.70, 8.x before 8.0.36, 8.5.x before 8.5.3, 
> and 9.x before 9.0.0.M7 and other products, allows remote attackers to cause 
> a denial of service (CPU consumption) via a long boundary string."
> [Source|http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2016-3092]
> We should upgrade to 1.3.2.






[jira] [Assigned] (SOLR-9819) Upgrade fileupload-commons to 1.3.2

2016-12-02 Thread Anshum Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9819?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anshum Gupta reassigned SOLR-9819:
--

Assignee: Anshum Gupta  (was: Jan Høydahl)

> Upgrade fileupload-commons to 1.3.2
> ---
>
> Key: SOLR-9819
> URL: https://issues.apache.org/jira/browse/SOLR-9819
> Project: Solr
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 4.6, 5.5, 6.0
>Reporter: Jeff Field
>Assignee: Anshum Gupta
>  Labels: commons-file-upload
>
> We use Apache fileupload-commons 1.3.1. According to CVE-2016-3092 :
> "The MultipartStream class in Apache Commons Fileupload before 1.3.2, as used 
> in Apache Tomcat 7.x before 7.0.70, 8.x before 8.0.36, 8.5.x before 8.5.3, 
> and 9.x before 9.0.0.M7 and other products, allows remote attackers to cause 
> a denial of service (CPU consumption) via a long boundary string."
> [Source|http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2016-3092]
> We should upgrade to 1.3.2.






[jira] [Commented] (SOLR-6203) cast exception while searching with sort function and result grouping

2016-12-02 Thread Judith Silverman (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6203?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15715697#comment-15715697
 ] 

Judith Silverman commented on SOLR-6203:


Hi Christine, thanks for updating the branch and replying to my questions.  
Re the one about rb.getSortSpec() in SearchGroupShardResponseProcessor.java: 
it's already confusing to this newbie that the master code base has two Sorts 
(rb.getSortSpec().getSort() and rb.getGroupingSpec().getGroupSort()) in play at 
the same time in a grouped search (to say nothing of the within-group Sort), 
and by the time we're finished with this branch we will have not only two Sorts 
but two full-fledged SortSpecs.  If we can do something at this early stage to 
make it clear which is used for what, or to consolidate them, I'm in favor. 
Thanks,
Judith

> cast exception while searching with sort function and result grouping
> -
>
> Key: SOLR-6203
> URL: https://issues.apache.org/jira/browse/SOLR-6203
> Project: Solr
>  Issue Type: Bug
>  Components: search
>Affects Versions: 4.7, 4.8
>Reporter: Nate Dire
>Assignee: Christine Poerschke
> Attachments: README, SOLR-6203-unittest.patch, 
> SOLR-6203-unittest.patch, SOLR-6203.patch, SOLR-6203.patch, SOLR-6203.patch, 
> SOLR-6203.patch
>
>
> After upgrading from 4.5.1 to 4.7+, a schema including a {{"*"}} dynamic 
> field as text gets a cast exception when using a sort function and result 
> grouping.  
> Repro (with example config):
> # Add {{"*"}} dynamic field as a {{TextField}}, eg:
> {noformat}
> 
> {noformat}
> #  Create  sharded collection
> {noformat}
> curl 
> 'http://localhost:8983/solr/admin/collections?action=CREATE&name=test&numShards=2&replicationFactor=2'
> {noformat}
> # Add example docs (query must have some results)
> # Submit query which sorts on a function result and uses result grouping:
> {noformat}
> {
>   "responseHeader": {
> "status": 500,
> "QTime": 50,
> "params": {
>   "sort": "sqrt(popularity) desc",
>   "indent": "true",
>   "q": "*:*",
>   "_": "1403709010008",
>   "group.field": "manu",
>   "group": "true",
>   "wt": "json"
> }
>   },
>   "error": {
> "msg": "java.lang.Double cannot be cast to 
> org.apache.lucene.util.BytesRef",
> "code": 500
>   }
> }
> {noformat}
> Source exception from log:
> {noformat}
> ERROR - 2014-06-25 08:10:10.055; org.apache.solr.common.SolrException; 
> java.lang.ClassCastException: java.lang.Double cannot be cast to 
> org.apache.lucene.util.BytesRef
> at 
> org.apache.solr.schema.FieldType.marshalStringSortValue(FieldType.java:981)
> at org.apache.solr.schema.TextField.marshalSortValue(TextField.java:176)
> at 
> org.apache.solr.search.grouping.distributed.shardresultserializer.SearchGroupsResultTransformer.serializeSearchGroup(SearchGroupsResultTransformer.java:125)
> at 
> org.apache.solr.search.grouping.distributed.shardresultserializer.SearchGroupsResultTransformer.transform(SearchGroupsResultTransformer.java:65)
> at 
> org.apache.solr.search.grouping.distributed.shardresultserializer.SearchGroupsResultTransformer.transform(SearchGroupsResultTransformer.java:43)
> at 
> org.apache.solr.search.grouping.CommandHandler.processResult(CommandHandler.java:193)
> at 
> org.apache.solr.handler.component.QueryComponent.process(QueryComponent.java:340)
> at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:218)
> at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
>   ...
> {noformat}
> It looks like {{serializeSearchGroup}} is matching the sort expression as the 
> {{"*"}} dynamic field, which is a TextField in the repro.






[jira] [Updated] (SOLR-9819) Upgrade fileupload-commons to 1.3.2

2016-12-02 Thread Anshum Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9819?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anshum Gupta updated SOLR-9819:
---
Fix Version/s: (was: 5.5.2)
   (was: 6.0.1)
   (was: 5.6)
   (was: 6.1)

> Upgrade fileupload-commons to 1.3.2
> ---
>
> Key: SOLR-9819
> URL: https://issues.apache.org/jira/browse/SOLR-9819
> Project: Solr
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 4.6, 5.5, 6.0
>Reporter: Jeff Field
>Assignee: Jan Høydahl
>  Labels: commons-file-upload
>
> The project appears to pull in FileUpload 1.2.1. According to CVE-2014-0050:
> "MultipartStream.java in Apache Commons FileUpload before 1.3.1, as used in 
> Apache Tomcat, JBoss Web, and other products, allows remote attackers to 
> cause a denial of service (infinite loop and CPU consumption) via a crafted 
> Content-Type header that bypasses a loop's intended exit conditions."
> [Source|https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2014-0050]






[jira] [Created] (SOLR-9819) Upgrade fileupload-commons to 1.3.2

2016-12-02 Thread Anshum Gupta (JIRA)
Anshum Gupta created SOLR-9819:
--

 Summary: Upgrade fileupload-commons to 1.3.2
 Key: SOLR-9819
 URL: https://issues.apache.org/jira/browse/SOLR-9819
 Project: Solr
  Issue Type: Improvement
  Components: security
Affects Versions: 4.6, 5.5, 6.0
Reporter: Jeff Field
Assignee: Jan Høydahl
 Fix For: 5.5.2, 5.6, 6.0.1, 6.1


The project appears to pull in FileUpload 1.2.1. According to CVE-2014-0050:

"MultipartStream.java in Apache Commons FileUpload before 1.3.1, as used in 
Apache Tomcat, JBoss Web, and other products, allows remote attackers to cause 
a denial of service (infinite loop and CPU consumption) via a crafted 
Content-Type header that bypasses a loop's intended exit conditions."

[Source|https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2014-0050]






[JENKINS-EA] Lucene-Solr-6.x-Linux (32bit/jdk-9-ea+140) - Build # 2322 - Unstable!

2016-12-02 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/2322/
Java: 32bit/jdk-9-ea+140 -server -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  
org.apache.solr.cloud.TestSolrCloudWithDelegationTokens.testDelegationTokenRenew

Error Message:
expected:<200> but was:<403>

Stack Trace:
java.lang.AssertionError: expected:<200> but was:<403>
at 
__randomizedtesting.SeedInfo.seed([474BAEBDDD3D7152:70D05AA3E5F1ACF6]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at org.junit.Assert.assertEquals(Assert.java:456)
at 
org.apache.solr.cloud.TestSolrCloudWithDelegationTokens.renewDelegationToken(TestSolrCloudWithDelegationTokens.java:131)
at 
org.apache.solr.cloud.TestSolrCloudWithDelegationTokens.verifyDelegationTokenRenew(TestSolrCloudWithDelegationTokens.java:316)
at 
org.apache.solr.cloud.TestSolrCloudWithDelegationTokens.testDelegationTokenRenew(TestSolrCloudWithDelegationTokens.java:333)
at 
jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(java.base@9-ea/Native 
Method)
at 
jdk.internal.reflect.NativeMethodAccessorImpl.invoke(java.base@9-ea/NativeMethodAccessorImpl.java:62)
at 
jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(java.base@9-ea/DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(java.base@9-ea/Method.java:535)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:811)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:462)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 

[JENKINS] Lucene-Solr-6.x-Windows (64bit/jdk1.8.0_102) - Build # 597 - Unstable!

2016-12-02 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Windows/597/
Java: 64bit/jdk1.8.0_102 -XX:+UseCompressedOops -XX:+UseParallelGC

1 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.TestCloudRecovery

Error Message:
Could not remove the following files (in the order of attempts):
C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\build\solr-core\test\J0\temp\solr.cloud.TestCloudRecovery_AD83954A96A859D3-001\tempDir-001\node1\collection1_shard1_replica2\data\tlog\tlog.001:
 java.nio.file.FileSystemException: 
C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\build\solr-core\test\J0\temp\solr.cloud.TestCloudRecovery_AD83954A96A859D3-001\tempDir-001\node1\collection1_shard1_replica2\data\tlog\tlog.001:
 The process cannot access the file because it is being used by another 
process. 
C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\build\solr-core\test\J0\temp\solr.cloud.TestCloudRecovery_AD83954A96A859D3-001\tempDir-001\node1\collection1_shard1_replica2\data\tlog:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\build\solr-core\test\J0\temp\solr.cloud.TestCloudRecovery_AD83954A96A859D3-001\tempDir-001\node1\collection1_shard1_replica2\data\tlog

C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\build\solr-core\test\J0\temp\solr.cloud.TestCloudRecovery_AD83954A96A859D3-001\tempDir-001\node1\collection1_shard1_replica2\data:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\build\solr-core\test\J0\temp\solr.cloud.TestCloudRecovery_AD83954A96A859D3-001\tempDir-001\node1\collection1_shard1_replica2\data

C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\build\solr-core\test\J0\temp\solr.cloud.TestCloudRecovery_AD83954A96A859D3-001\tempDir-001\node1\collection1_shard1_replica2:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\build\solr-core\test\J0\temp\solr.cloud.TestCloudRecovery_AD83954A96A859D3-001\tempDir-001\node1\collection1_shard1_replica2

C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\build\solr-core\test\J0\temp\solr.cloud.TestCloudRecovery_AD83954A96A859D3-001\tempDir-001\node1\collection1_shard2_replica2\data\tlog\tlog.001:
 java.nio.file.FileSystemException: 
C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\build\solr-core\test\J0\temp\solr.cloud.TestCloudRecovery_AD83954A96A859D3-001\tempDir-001\node1\collection1_shard2_replica2\data\tlog\tlog.001:
 The process cannot access the file because it is being used by another 
process. 
C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\build\solr-core\test\J0\temp\solr.cloud.TestCloudRecovery_AD83954A96A859D3-001\tempDir-001\node1\collection1_shard2_replica2\data\tlog:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\build\solr-core\test\J0\temp\solr.cloud.TestCloudRecovery_AD83954A96A859D3-001\tempDir-001\node1\collection1_shard2_replica2\data\tlog

C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\build\solr-core\test\J0\temp\solr.cloud.TestCloudRecovery_AD83954A96A859D3-001\tempDir-001\node1\collection1_shard2_replica2\data:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\build\solr-core\test\J0\temp\solr.cloud.TestCloudRecovery_AD83954A96A859D3-001\tempDir-001\node1\collection1_shard2_replica2\data

C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\build\solr-core\test\J0\temp\solr.cloud.TestCloudRecovery_AD83954A96A859D3-001\tempDir-001\node1\collection1_shard2_replica2:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\build\solr-core\test\J0\temp\solr.cloud.TestCloudRecovery_AD83954A96A859D3-001\tempDir-001\node1\collection1_shard2_replica2

C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\build\solr-core\test\J0\temp\solr.cloud.TestCloudRecovery_AD83954A96A859D3-001\tempDir-001\node1:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\build\solr-core\test\J0\temp\solr.cloud.TestCloudRecovery_AD83954A96A859D3-001\tempDir-001\node1

C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\build\solr-core\test\J0\temp\solr.cloud.TestCloudRecovery_AD83954A96A859D3-001\tempDir-001\node2\collection1_shard1_replica1\data\tlog\tlog.001:
 java.nio.file.FileSystemException: 
C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\build\solr-core\test\J0\temp\solr.cloud.TestCloudRecovery_AD83954A96A859D3-001\tempDir-001\node2\collection1_shard1_replica1\data\tlog\tlog.001:
 The process cannot access the file because it is being used by another 
process. 

[jira] [Comment Edited] (SOLR-9817) Make Solr server startup directory configurable

2016-12-02 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9817?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15715190#comment-15715190
 ] 

Mark Miller edited comment on SOLR-9817 at 12/2/16 1:58 PM:


I don't think I have a full handle on this yet.

I'm still a little confused: we call this setting the Solr server 
startup directory and "setting the working directory", but the env variable 
is called the Solr config directory, and when you set it, a copy of some config 
files is actually made. 

I think it will be a little confusing to users to understand what this option 
is, what it does, and why it would be used.

We should look at the name and some documentation. Given we don't have anything 
set up to really test these scripts (we really should get Jenkins jobs), it's 
especially important that we are very clear to other devs about what we are trying 
to support, and how and why it works, as part of the script / code, if we want the 
feature to be properly supported and not disappear or break easily.



> Make Solr server startup directory configurable
> ---
>
> Key: SOLR-9817
> URL: https://issues.apache.org/jira/browse/SOLR-9817
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 6.0
>Reporter: Hrishikesh Gadre
>Assignee: Mark Miller
>Priority: Minor
>
> The solr startup script (bin/solr) is hardcoded to use the 
> /server directory as the working directory during the 
> startup. 
> https://github.com/apache/lucene-solr/blob/9eaea79f5c89094c08f52245b9473ca14f368f57/solr/bin/solr#L1652
> This jira is to make the "current working directory" for Solr configurable.
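
The change the issue asks for is essentially an environment-variable override of the hardcoded working directory. A minimal sketch of the lookup pattern follows, in Python rather than the actual bin/solr shell script; `SOLR_SERVER_DIR` is a hypothetical variable name used here only for illustration, not something the patch necessarily defines:

```python
import os

def resolve_server_dir(solr_tip: str) -> str:
    # Fall back to the current hardcoded default, <solr_tip>/server,
    # unless the (hypothetical) SOLR_SERVER_DIR override is set.
    return os.environ.get("SOLR_SERVER_DIR",
                          os.path.join(solr_tip, "server"))

# The startup script would then change into resolve_server_dir(...)
# instead of the fixed <solr_tip>/server path.
```

The same pattern in the shell script would be a parameter-expansion default followed by a `cd`.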



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9817) Make Solr server startup directory configurable

2016-12-02 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9817?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15715190#comment-15715190
 ] 

Mark Miller commented on SOLR-9817:
---

I don't think I have a full handle on this yet.

I'm still a little confused: we call this setting the Solr server 
startup directory and "setting the working directory", but the env 
variable is called the Solr config directory, and when you set it, a copy of 
some config files is actually made. 

I think it will be a little confusing to users to understand what this option 
is, what it does, and why it would be used.

We should look at the name and some documentation. Given we don't have anything 
set up to really test these scripts (we really should get Jenkins jobs), it's 
especially important that we are very clear to other devs about what we are trying 
to support, and how and why it works, as part of the script / code, if we want the 
feature to be properly supported and not disappear or break easily.







[JENKINS] Lucene-Solr-NightlyTests-master - Build # 1170 - Still Unstable

2016-12-02 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-master/1170/

6 tests failed.
FAILED:  
org.apache.solr.cloud.CdcrReplicationDistributedZkTest.testReplicationAfterLeaderChange

Error Message:
Timeout waiting for CDCR replication to complete @source_collection:shard2

Stack Trace:
java.lang.RuntimeException: Timeout waiting for CDCR replication to complete 
@source_collection:shard2
at 
__randomizedtesting.SeedInfo.seed([35DC787A5B47C7A4:E72C349905E86196]:0)
at 
org.apache.solr.cloud.BaseCdcrDistributedZkTest.waitForReplicationToComplete(BaseCdcrDistributedZkTest.java:795)
at 
org.apache.solr.cloud.CdcrReplicationDistributedZkTest.testReplicationAfterLeaderChange(CdcrReplicationDistributedZkTest.java:306)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:985)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:960)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:811)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:462)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Commented] (SOLR-9817) Make Solr server startup directory configurable

2016-12-02 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9817?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15715153#comment-15715153
 ] 

Mark Miller commented on SOLR-9817:
---

We have to look at the Windows script as well.







[jira] [Assigned] (SOLR-9817) Make Solr server startup directory configurable

2016-12-02 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9817?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller reassigned SOLR-9817:
-

Assignee: Mark Miller







[jira] [Commented] (SOLR-4735) Improve Solr metrics reporting

2016-12-02 Thread Kelvin Wong (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4735?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15715107#comment-15715107
 ] 

Kelvin Wong commented on SOLR-4735:
---

Hi Andrzej, 

{quote}
I added a notion of "group" of metrics, which corresponds to a top-level 
subsystem in a Solr node
{quote}
* Nice! I really like this concept. Will we be instantiating separate reporters 
for each `Group` then? That way, reporting can be configured more flexibly 
(e.g. Jetty goes to JMX and Graphite, JVM goes only to JMX, etc.).

{quote}
I'll look into reusing single global-level reporters when possible, and 
creating new instances only if there are per-collection overrides.
{quote}

* It looks like 
[JmxReporter|https://github.com/dropwizard/metrics/blob/3.2-development/metrics-core/src/main/java/com/codahale/metrics/JmxReporter.java#L701]
 takes only one `MetricRegistry` at a time (and 
[GraphiteReporter|https://github.com/dropwizard/metrics/blob/3.2-development/metrics-graphite/src/main/java/com/codahale/metrics/graphite/GraphiteReporter.java#L145],
 etc. for that matter). Will we need to build some sort of 
`AggregateMetricRegistry` to join each core's registries? Or do you have 
something else in mind?
* On a separate note, it would be nice if we could just specify which registry 
we'd like a reporter to attach to. So for example, we can attach one reporter 
to `collection1`, another to `zookeeper`, and one more to `jvm`. These are at 
different levels in the metrics hierarchy but perhaps we can just pass in the 
registry's name as part of the config for a reporter?
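
One way to square "a reporter takes one registry" with many per-core registries is a merged, prefix-qualified view that a single reporter reads from. A toy sketch in Python, with plain dicts standing in for Codahale registries; `AggregateView` is a made-up name for illustration, not an existing Solr or Dropwizard class:

```python
class AggregateView:
    """Read-only join of several metric registries, each mounted
    under a name prefix, so one reporter can see all of them."""

    def __init__(self, parts):
        # parts: mapping of prefix -> registry (here just name -> value dicts)
        self.parts = parts

    def all_metrics(self):
        merged = {}
        for prefix, registry in self.parts.items():
            for name, value in registry.items():
                merged[f"{prefix}.{name}"] = value
        return merged

view = AggregateView({
    "collection1": {"requests": 42},
    "jvm": {"gc.count": 7},
})
# view.all_metrics() -> {"collection1.requests": 42, "jvm.gc.count": 7}
```

The alternative raised above (attaching a reporter directly to a named registry such as `collection1` or `jvm`) avoids the merge entirely, at the cost of one reporter instance per registry.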


> Improve Solr metrics reporting
> --
>
> Key: SOLR-4735
> URL: https://issues.apache.org/jira/browse/SOLR-4735
> Project: Solr
>  Issue Type: Improvement
>Reporter: Alan Woodward
>Assignee: Andrzej Bialecki 
>Priority: Minor
> Attachments: SOLR-4735.patch, SOLR-4735.patch, SOLR-4735.patch, 
> SOLR-4735.patch
>
>
> Following on from a discussion on the mailing list:
> http://search-lucene.com/m/IO0EI1qdyJF1/codahale=Solr+metrics+in+Codahale+metrics+and+Graphite+
> It would be good to make Solr play more nicely with existing devops 
> monitoring systems, such as Graphite or Ganglia.  Stats monitoring at the 
> moment is poll-only, either via JMX or through the admin stats page.  I'd 
> like to refactor things a bit to make this more pluggable.
> This patch is a start.  It adds a new interface, InstrumentedBean, which 
> extends SolrInfoMBean to return a 
> [Metrics|http://metrics.codahale.com/manual/core/] MetricRegistry, and a 
> couple of MetricReporters (which basically just duplicate the JMX and admin 
> page reporting that's there at the moment, but which should be more 
> extensible).  The patch includes a change to RequestHandlerBase showing how 
> this could work.  The idea would be to eventually replace the getStatistics() 
> call on SolrInfoMBean with this instead.
> The next step would be to allow more MetricReporters to be defined in 
> solrconfig.xml.  The Metrics library comes with ganglia and graphite 
> reporting modules, and we can add contrib plugins for both of those.
> There's some more general cleanup that could be done around SolrInfoMBean 
> (we've got two plugin handlers at /mbeans and /plugins that basically do the 
> same thing, and the beans themselves have some weirdly inconsistent data on 
> them - getVersion() returns different things for different impls, and 
> getSource() seems pretty useless), but maybe that's for another issue.






[jira] [Commented] (LUCENE-6673) Maven build fails for target javadoc:jar on trunk/Java8

2016-12-02 Thread Daniel Collins (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6673?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15715106#comment-15715106
 ] 

Daniel Collins commented on LUCENE-6673:


We had patched this locally on our 4.8 branch (with the added complication that 
in Java 7 this flag isn't needed). Getting back to trunk, this still applies; 
any thoughts on getting it applied?

> Maven build fails for target javadoc:jar on trunk/Java8
> ---
>
> Key: LUCENE-6673
> URL: https://issues.apache.org/jira/browse/LUCENE-6673
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: general/build
>Reporter: Daniel Collins
>Assignee: Ramkumar Aiyengar
>Priority: Minor
> Attachments: LUCENE-6673.patch
>
>
> We currently disable missing checks for doclint, but the maven poms don't 
> have it; as a result, javadoc:jar fails.
> (thanks to [~dancollins] for spotting this)






[jira] [Updated] (LUCENE-7563) BKD index should compress unused leading bytes

2016-12-02 Thread Michael McCandless (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7563?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael McCandless updated LUCENE-7563:
---
Attachment: LUCENE-7563.patch

New patch; I think it's ready.

This breaks out a private BKD implementation for {{SimpleText}} which
is a nice cleanup for the core BKD implementation, e.g. {{BKDReader}}
is now final; its strange protected constructor is gone; protected
methods are now private.

This patch also implements [~jpountz]'s last compression idea, to often
use only 1 byte to encode prefix, splitDim and first-byte-delta of the
suffix instead of the 2 bytes required in the previous iterations.
This gives another ~4-5% further compression improvement:

  * sparse-sorted -> 2.37 MB

  * sparse -> 2.07 MB

  * dense -> 2.00 MB

And the OpenStreetMaps geo benchmark:

  * geo3d -> 1.75 MB

  * LatLonPoint -> 1.72 MB

I'm running the 2B BKD and Points tests now ... if those pass, I plan
to push to master first and let this bake a bit before backporting.


> BKD index should compress unused leading bytes
> --
>
> Key: LUCENE-7563
> URL: https://issues.apache.org/jira/browse/LUCENE-7563
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Michael McCandless
> Fix For: master (7.0), 6.4
>
> Attachments: LUCENE-7563.patch, LUCENE-7563.patch, LUCENE-7563.patch, 
> LUCENE-7563.patch
>
>
> Today the BKD (points) in-heap index always uses {{dimensionNumBytes}} per 
> dimension, but if e.g. you are indexing {{LongPoint}} yet only use the bottom 
> two bytes in a given segment, we shouldn't store all those leading 0s in the 
> index.
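
The leading-bytes idea can be illustrated with a tiny sketch (Python, not the actual BKD code): encode each long big-endian, find how many leading bytes every value in the block shares, store that shared prefix once, and keep only the per-value suffixes.

```python
import struct

def compress_leading_bytes(values):
    """Toy prefix compression: returns (shared_prefix, suffixes)."""
    enc = [struct.pack(">Q", v) for v in values]  # 8 bytes, big-endian
    prefix = 0
    # Advance while all values agree on the next byte (all 0x00 when
    # only the low bytes are used, as in the LongPoint example above).
    while prefix < 8 and len({e[prefix] for e in enc}) == 1:
        prefix += 1
    return enc[0][:prefix], [e[prefix:] for e in enc]

shared, suffixes = compress_leading_bytes([0x0102, 0x0203, 0x01FF])
# shared is six 0x00 bytes stored once; each suffix is just 2 bytes
```

The patch described above goes further, also packing the prefix length, splitDim, and first-byte-delta into a single byte where possible.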






[jira] [Commented] (SOLR-9818) Solr admin UI rapidly retries any request(s) if it loses connection with the server

2016-12-02 Thread Yago Riveiro (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9818?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15714706#comment-15714706
 ] 

Yago Riveiro commented on SOLR-9818:


This problem is critical when we use the UI to create replicas: the last time I 
did the operation while the cluster was busy, the result was 23 new replicas for 
my shard ...

> Solr admin UI rapidly retries any request(s) if it loses connection with the 
> server
> ---
>
> Key: SOLR-9818
> URL: https://issues.apache.org/jira/browse/SOLR-9818
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: web gui
>Affects Versions: 6.3
>Reporter: Ere Maijala
>
> It seems that whenever the Solr admin UI loses connection with the server, 
> whether because the server is too slow to answer or because it's gone away 
> completely, it starts hammering the server with the previous request until it 
> gets a success response. That can be especially bad if the last 
> attempted action was something like collection reload with a SolrCloud 
> instance. The admin UI will quickly add hundreds of reload commands to 
> overseer/collection-queue-work, which may essentially cause the replicas to 
> get overloaded when they're trying to handle all the reload commands.
> I believe the UI should never retry the previous command blindly when the 
> connection is lost, but instead just ping the server until it responds again.
> Steps to reproduce:
> 1.) Fire up Solr
> 2.) Open the admin UI in browser
> 3.) Open a web console in the browser to see the requests it sends
> 4.) Stop solr
> 5.) Try an action in the admin UI
> 6.) Observe the web console in browser quickly fill up with repeats of the 
> originally attempted request
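
The suggested fix (ping until the server answers instead of re-sending the last, possibly destructive, request) can be sketched as follows. The real admin UI is JavaScript, so this Python version only illustrates the control flow, and the ping URL stands for whatever cheap health endpoint the server exposes:

```python
import time
import urllib.request
import urllib.error

def wait_for_server(ping_url, interval=1.0, timeout=60.0):
    """Poll a cheap ping endpoint until the server responds, rather
    than blindly re-sending the previously attempted request."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with urllib.request.urlopen(ping_url, timeout=5) as resp:
                if resp.status == 200:
                    return True  # server is back; UI may resume normally
        except (urllib.error.URLError, OSError):
            pass  # still down: wait, then ping again (no request replay)
        time.sleep(interval)
    return False
```

Only once the server is reachable again would the UI let the user re-issue the action explicitly, avoiding the queue of duplicated reload commands described above.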






[JENKINS] Lucene-Solr-6.x-Solaris (64bit/jdk1.8.0) - Build # 529 - Unstable!

2016-12-02 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Solaris/529/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  org.apache.solr.spelling.suggest.SuggesterFSTTest.testRebuild

Error Message:
Exception during query

Stack Trace:
java.lang.RuntimeException: Exception during query
at 
__randomizedtesting.SeedInfo.seed([209CE6E1B3D498D7:7BB944A287D4E24D]:0)
at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:812)
at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:779)
at 
org.apache.solr.spelling.suggest.SuggesterTest.testRebuild(SuggesterTest.java:104)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:811)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:462)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.NullPointerException
at 
org.apache.lucene.search.suggest.fst.FSTCompletionLookup.lookup(FSTCompletionLookup.java:271)
at org.apache.lucene.search.suggest.Lookup.lookup(Lookup.java:240)
at 

[jira] [Updated] (SOLR-9818) Solr admin UI rapidly retries any request(s) if it loses connection with the server

2016-12-02 Thread Ere Maijala (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9818?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ere Maijala updated SOLR-9818:
--
Summary: Solr admin UI rapidly retries any request(s) if it loses 
connection with the server  (was: Solr admin UI must not retry any request if 
it loses connection with the server)







[jira] [Created] (SOLR-9818) Solr admin UI must not retry any request if it loses connection with the server

2016-12-02 Thread Ere Maijala (JIRA)
Ere Maijala created SOLR-9818:
-

 Summary: Solr admin UI must not retry any request if it loses 
connection with the server
 Key: SOLR-9818
 URL: https://issues.apache.org/jira/browse/SOLR-9818
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
  Components: web gui
Affects Versions: 6.3
Reporter: Ere Maijala


It seems that whenever the Solr admin UI loses connection with the server, 
whether because the server is too slow to answer or because it's gone away 
completely, it starts hammering the server with the previous request until it 
gets a success response. That can be especially bad if the last 
attempted action was something like collection reload with a SolrCloud 
instance. The admin UI will quickly add hundreds of reload commands to 
overseer/collection-queue-work, which may essentially cause the replicas to get 
overloaded when they're trying to handle all the reload commands.

I believe the UI should never retry the previous command blindly when the 
connection is lost, but instead just ping the server until it responds again.
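The suggested behavior could be sketched roughly as below. This is only an illustration of the proposed retry policy, not the actual admin UI code; the helper names (sendRequest, pingServer, requestWithPingRecovery) and the interval/limit parameters are hypothetical:

```javascript
// Sketch: when a request fails because the server is unreachable, do NOT
// replay the original command. Instead, poll a lightweight ping endpoint
// until the server answers again, then report the original action as failed.
async function requestWithPingRecovery(sendRequest, pingServer,
    { pingIntervalMs = 1000, maxPings = 10 } = {}) {
  try {
    return await sendRequest();  // first and only attempt of the real action
  } catch (err) {
    // Connection lost: ping until the server responds, never re-send the action.
    for (let i = 0; i < maxPings; i++) {
      await new Promise(resolve => setTimeout(resolve, pingIntervalMs));
      const alive = await pingServer().then(() => true, () => false);
      if (alive) {
        // Server is back; surface the failure instead of silently retrying.
        throw new Error('Server recovered; original request was not retried');
      }
    }
    throw new Error('Server did not respond to ping');
  }
}
```

With this policy the original action is sent exactly once, so a stopped or slow server never accumulates duplicate commands in the overseer queue.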

Steps to reproduce:
1.) Fire up Solr
2.) Open the admin UI in a browser
3.) Open the browser's web console to see the requests the UI sends
4.) Stop Solr
5.) Try an action in the admin UI
6.) Observe the web console quickly fill up with repeats of the originally 
attempted request






[JENKINS] Lucene-Solr-master-Solaris (64bit/jdk1.8.0) - Build # 985 - Unstable!

2016-12-02 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Solaris/985/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseG1GC

1 tests failed.
FAILED:  org.apache.solr.cloud.TestSolrCloudWithDelegationTokens.testDelegationTokenCancelFail

Error Message:
expected:<200> but was:<404>

Stack Trace:
java.lang.AssertionError: expected:<200> but was:<404>
at __randomizedtesting.SeedInfo.seed([46FCE90E215BFF23:2E43DC24F1C1EDCF]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at org.junit.Assert.assertEquals(Assert.java:456)
at org.apache.solr.cloud.TestSolrCloudWithDelegationTokens.cancelDelegationToken(TestSolrCloudWithDelegationTokens.java:140)
at org.apache.solr.cloud.TestSolrCloudWithDelegationTokens.testDelegationTokenCancelFail(TestSolrCloudWithDelegationTokens.java:294)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:811)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:462)
at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at