[jira] [Updated] (SOLR-8395) query-time join (with scoring) for single value numeric fields

2015-12-22 Thread Cao Manh Dat (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8395?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cao Manh Dat updated SOLR-8395:
---
Attachment: SOLR-8395.patch

[~mkhludnev] Thank you for showing me the important point. I updated the 
{{OtherCoreJoinQuery}} class.

> query-time join (with scoring) for single value numeric fields
> --
>
> Key: SOLR-8395
> URL: https://issues.apache.org/jira/browse/SOLR-8395
> Project: Solr
>  Issue Type: Improvement
>  Components: search
>Reporter: Mikhail Khludnev
>Priority: Minor
>  Labels: easytest, features, newbie, starter
> Fix For: 5.5
>
> Attachments: SOLR-8395.patch, SOLR-8395.patch, SOLR-8395.patch, 
> SOLR-8395.patch
>
>
> Since LUCENE-5868 we have an opportunity to improve SOLR-6234 to make it join 
> int and long fields. I suppose it's worth adding a "simple" test to the Solr 
> NoScore suite. 
> * Alongside that, we can set the _multipleValues_ parameter based on the 
> _fromField_ cardinality declared in the schema;
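
As an illustrative sketch (the field names here are hypothetical, not taken from the issue), the score-capable join parser from SOLR-6234 is invoked with a {{score}} local parameter, and with LUCENE-5868's support the {{from}}/{{to}} fields could then be single-value int or long fields:

```
q={!join from=manu_id_l to=id_l score=max}features:lcd
```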



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-8456) Investigate org.apache.solr.cloud.HttpPartitionTest.test failing more commonly.

2015-12-22 Thread Mark Miller (JIRA)
Mark Miller created SOLR-8456:
-

 Summary: Investigate org.apache.solr.cloud.HttpPartitionTest.test 
failing more commonly.
 Key: SOLR-8456
 URL: https://issues.apache.org/jira/browse/SOLR-8456
 Project: Solr
  Issue Type: Test
Reporter: Mark Miller









[jira] [Commented] (SOLR-8453) Local exceptions in DistributedUpdateProcessor should not cut off an ongoing request.

2015-12-22 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8453?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15068279#comment-15068279
 ] 

Mark Miller commented on SOLR-8453:
---

I have not seen any failures due to that change and have been running it for 
many days. I don't know that it's very important for this issue though - I'll 
probably pare a couple of those things out or into their own issues. I think 
most tests are actually using that wait-till-cores-are-loaded option now 
anyway? Logically, this does make more sense to me though.

I'm still fighting with some DIH tests a little. Their test classes really want 
the 'old processor throws an exception' behavior - and while I can easily 
simulate that, it seems that also leaves those tests with the random connection 
reset issue.

> Local exceptions in DistributedUpdateProcessor should not cut off an ongoing 
> request.
> -
>
> Key: SOLR-8453
> URL: https://issues.apache.org/jira/browse/SOLR-8453
> Project: Solr
>  Issue Type: Bug
>Reporter: Mark Miller
>Assignee: Mark Miller
> Attachments: SOLR-8453.patch, SOLR-8453.patch, SOLR-8453.patch, 
> SOLR-8453.patch, SOLR-8453.patch
>
>
> The basic problem is that when we are streaming in updates via a client, an 
> update can fail in a way that further updates in the request will not be 
> processed, but not in a way that causes the client to stop streaming more 
> updates.
> This seems to mean that even after the server stops processing the request, 
> the concurrent update client is sending out some further updates. It seems 
> previously this burst was sent on the connection and ignored? But after the 
> Jetty upgrade from 9.2 to 9.3, Jetty closes the connection on the server when 
> we throw certain document level exceptions, and the client does not end up 
> getting notified of the original exception at all and instead hits a 
> connection reset exception. Even before this update, it does not seem like we 
> are acting in a safe or 'behaved' manner.






[jira] [Updated] (SOLR-8453) Local exceptions in DistributedUpdateProcessor should not cut off an ongoing request.

2015-12-22 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8453?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller updated SOLR-8453:
--
Attachment: SOLR-8453.patch

> Local exceptions in DistributedUpdateProcessor should not cut off an ongoing 
> request.
> -
>
> Key: SOLR-8453
> URL: https://issues.apache.org/jira/browse/SOLR-8453
> Project: Solr
>  Issue Type: Bug
>Reporter: Mark Miller
>Assignee: Mark Miller
> Attachments: SOLR-8453.patch, SOLR-8453.patch, SOLR-8453.patch, 
> SOLR-8453.patch, SOLR-8453.patch
>
>
> The basic problem is that when we are streaming in updates via a client, an 
> update can fail in a way that further updates in the request will not be 
> processed, but not in a way that causes the client to stop streaming more 
> updates.
> This seems to mean that even after the server stops processing the request, 
> the concurrent update client is sending out some further updates. It seems 
> previously this burst was sent on the connection and ignored? But after the 
> Jetty upgrade from 9.2 to 9.3, Jetty closes the connection on the server when 
> we throw certain document level exceptions, and the client does not end up 
> getting notified of the original exception at all and instead hits a 
> connection reset exception. Even before this update, it does not seem like we 
> are acting in a safe or 'behaved' manner.






[jira] [Comment Edited] (SOLR-8453) Local exceptions in DistributedUpdateProcessor should not cut off an ongoing request.

2015-12-22 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8453?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15068279#comment-15068279
 ] 

Mark Miller edited comment on SOLR-8453 at 12/22/15 3:45 PM:
-

I have not seen any failures due to that change and have been running it for a 
lot of days. I don't know that it's very important for this issue though - I'll 
probably pare a couple of those things out or into their own issues. I think 
most tests now are actually using that wait till cores are loaded option 
anyway? Logically, this does make more sense to me though.

I'm still fighting with some DIH tests a little. Their test classes really want 
the 'old processor throws out an exception' behavior - and while I can easily 
simulate that, it seems that also leaves those tests with the random connection 
reset issue. Old behavior, old problem I guess. Will probably have to try and 
tweak those tests a bit more.


was (Author: markrmil...@gmail.com):
I have not seen any failures due to that change and have been running it for a 
lot of days. I don't know that it's very important for this issue though - I'll 
probably pare a couple of those things out or into their own issues. I think 
most tests now are actually using that wait till cores are loaded option 
anyway? Logically, this does make more sense to me though.

I'm still fighting with some DIH tests a little. There test classes really want 
the old processor throws out exception behavior - and while I can easily 
simulate that, it seems that also leaves those tests with the random connection 
reset issue.

> Local exceptions in DistributedUpdateProcessor should not cut off an ongoing 
> request.
> -
>
> Key: SOLR-8453
> URL: https://issues.apache.org/jira/browse/SOLR-8453
> Project: Solr
>  Issue Type: Bug
>Reporter: Mark Miller
>Assignee: Mark Miller
> Attachments: SOLR-8453.patch, SOLR-8453.patch, SOLR-8453.patch, 
> SOLR-8453.patch, SOLR-8453.patch
>
>
> The basic problem is that when we are streaming in updates via a client, an 
> update can fail in a way that further updates in the request will not be 
> processed, but not in a way that causes the client to stop streaming more 
> updates.
> This seems to mean that even after the server stops processing the request, 
> the concurrent update client is sending out some further updates. It seems 
> previously this burst was sent on the connection and ignored? But after the 
> Jetty upgrade from 9.2 to 9.3, Jetty closes the connection on the server when 
> we throw certain document level exceptions, and the client does not end up 
> getting notified of the original exception at all and instead hits a 
> connection reset exception. Even before this update, it does not seem like we 
> are acting in a safe or 'behaved' manner.






[jira] [Updated] (SOLR-8453) Local exceptions in DistributedUpdateProcessor should not cut off an ongoing request.

2015-12-22 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8453?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller updated SOLR-8453:
--
Attachment: SOLR-8453.patch

> Local exceptions in DistributedUpdateProcessor should not cut off an ongoing 
> request.
> -
>
> Key: SOLR-8453
> URL: https://issues.apache.org/jira/browse/SOLR-8453
> Project: Solr
>  Issue Type: Bug
>Reporter: Mark Miller
>Assignee: Mark Miller
> Attachments: SOLR-8453.patch, SOLR-8453.patch, SOLR-8453.patch, 
> SOLR-8453.patch, SOLR-8453.patch, SOLR-8453.patch
>
>
> The basic problem is that when we are streaming in updates via a client, an 
> update can fail in a way that further updates in the request will not be 
> processed, but not in a way that causes the client to stop streaming more 
> updates.
> This seems to mean that even after the server stops processing the request, 
> the concurrent update client is sending out some further updates. It seems 
> previously this burst was sent on the connection and ignored? But after the 
> Jetty upgrade from 9.2 to 9.3, Jetty closes the connection on the server when 
> we throw certain document level exceptions, and the client does not end up 
> getting notified of the original exception at all and instead hits a 
> connection reset exception. Even before this update, it does not seem like we 
> are acting in a safe or 'behaved' manner.






[jira] [Commented] (SOLR-8452) replace "partialResults" occurrences with SolrQueryResponse.RESPONSE_HEADER_PARTIAL_RESULTS_KEY

2015-12-22 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8452?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15068638#comment-15068638
 ] 

Yonik Seeley commented on SOLR-8452:


It's not a big deal, but I'm not sure we should be replacing literals with 
references in tests, as it reduces what is actually tested. If someone 
accidentally (or on purpose) changes the constant, all of the tests will 
magically pass.
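
The pitfall can be sketched with a self-contained stand-in (the class and method names here are illustrative, not Solr's actual code): when the test asserts against the same constant the production code uses, both sides move together, so the assertion is a tautology.

```java
import java.util.HashMap;
import java.util.Map;

public class ConstantVsLiteralDemo {
    // Stand-in for SolrQueryResponse.RESPONSE_HEADER_PARTIAL_RESULTS_KEY.
    static final String PARTIAL_RESULTS_KEY = "partialResults";

    // Production-side code writes the header entry under the constant:
    static Map<String, Object> buildHeader() {
        Map<String, Object> header = new HashMap<>();
        header.put(PARTIAL_RESULTS_KEY, Boolean.TRUE);
        return header;
    }

    // Checking via the constant passes even if the constant's value changes,
    // because the writer and the check change together:
    static boolean checkViaConstant() {
        return buildHeader().containsKey(PARTIAL_RESULTS_KEY);
    }

    // Checking via the literal pins the key that clients actually see:
    static boolean checkViaLiteral() {
        return buildHeader().containsKey("partialResults");
    }

    public static void main(String[] args) {
        System.out.println(checkViaConstant()); // passes for any constant value
        System.out.println(checkViaLiteral());  // passes only while the value is "partialResults"
    }
}
```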


> replace "partialResults" occurrences with 
> SolrQueryResponse.RESPONSE_HEADER_PARTIAL_RESULTS_KEY
> ---
>
> Key: SOLR-8452
> URL: https://issues.apache.org/jira/browse/SOLR-8452
> Project: Solr
>  Issue Type: Task
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
>Priority: Minor
> Attachments: SOLR-8452.patch
>
>
> proposed patch against trunk to follow (The 
> {{TestSolrQueryResponse.testResponseHeaderPartialResults()}} test within the 
> patch is to ensure that inadvertent, non-backwards-compatible changes to 
> {{SolrQueryResponse.RESPONSE_HEADER_PARTIAL_RESULTS_KEY}} result in test 
> failure.)






[jira] [Created] (SOLR-8457) DocumentObjectBinder should be able to work with highlight

2015-12-22 Thread Alessandro Benedetti (JIRA)
Alessandro Benedetti created SOLR-8457:
--

 Summary: DocumentObjectBinder should be able to work with highlight
 Key: SOLR-8457
 URL: https://issues.apache.org/jira/browse/SOLR-8457
 Project: Solr
  Issue Type: Improvement
  Components: SolrJ
Affects Versions: 5.4
Reporter: Alessandro Benedetti
Priority: Trivial


It is a common use case, when you have configured the highlighter in your 
request handler, to quickly get a POJO containing the highlighted content 
instead of accessing the highlighting snippet map.

It could be useful to have an option in the document binder to look first at 
the highlighted snippets and then fall back to the normal field (usually the 
fallback is already in the highlighter component anyway).

This way it would be much simpler for a SolrJ user to directly get a Java POJO 
bean whose fields already hold the highlighted values, without accessing both 
the POJO and the highlighting map and intersecting them.
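
A minimal sketch of the proposed fallback (the names here are hypothetical, not SolrJ API): prefer the first highlight snippet for each field and fall back to the stored value when no snippet exists.

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

public class HighlightFallbackDemo {
    // Hypothetical binder step: for each stored field, use the first highlight
    // snippet if present, otherwise keep the stored value.
    static Map<String, String> bind(Map<String, String> storedFields,
                                    Map<String, List<String>> highlighting) {
        Map<String, String> bound = new TreeMap<>();
        for (Map.Entry<String, String> e : storedFields.entrySet()) {
            List<String> snippets = highlighting.get(e.getKey());
            bound.put(e.getKey(),
                      (snippets != null && !snippets.isEmpty()) ? snippets.get(0) : e.getValue());
        }
        return bound;
    }

    public static void main(String[] args) {
        Map<String, String> doc = new HashMap<>();
        doc.put("title", "Solr in Action");
        doc.put("author", "Anonymous");
        Map<String, List<String>> hl = new HashMap<>();
        hl.put("title", Arrays.asList("<em>Solr</em> in Action"));
        // "title" takes the snippet; "author" falls back to the stored value.
        System.out.println(bind(doc, hl));
    }
}
```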








[JENKINS-EA] Lucene-Solr-trunk-Linux (64bit/jdk-9-ea+95) - Build # 15289 - Still Failing!

2015-12-22 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/15289/
Java: 64bit/jdk-9-ea+95 -XX:-UseCompressedOops -XX:+UseParallelGC 
-XX:-CompactStrings

2 tests failed.
FAILED:  org.apache.solr.cloud.CollectionsAPIDistributedZkTest.test

Error Message:
Error from server at http://127.0.0.1:35120/oh/u/awholynewcollection_0: non ok 
status: 500, message:Server Error

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at http://127.0.0.1:35120/oh/u/awholynewcollection_0: non ok 
status: 500, message:Server Error
at 
__randomizedtesting.SeedInfo.seed([E38AABEA547597FF:6BDE9430FA89FA07]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:509)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:241)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:230)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:150)
at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:943)
at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:958)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.waitForNon403or404or503(AbstractFullDistribZkTestBase.java:1754)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testCollectionsAPI(CollectionsAPIDistributedZkTest.java:638)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.test(CollectionsAPIDistributedZkTest.java:160)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:520)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:965)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:940)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 

[jira] [Resolved] (LUCENE-6946) SortField.equals does not take the missing value into account

2015-12-22 Thread Adrien Grand (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6946?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrien Grand resolved LUCENE-6946.
--
Resolution: Fixed

> SortField.equals does not take the missing value into account
> -
>
> Key: LUCENE-6946
> URL: https://issues.apache.org/jira/browse/LUCENE-6946
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Adrien Grand
>Assignee: Adrien Grand
>Priority: Minor
> Fix For: 5.5, Trunk
>
> Attachments: LUCENE-6946.patch
>
>
> SortField.equals does not check whether both objects have the same missing 
> value.






[jira] [Commented] (SOLR-8452) replace "partialResults" occurrences with SolrQueryResponse.RESPONSE_HEADER_PARTIAL_RESULTS_KEY

2015-12-22 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8452?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15068604#comment-15068604
 ] 

ASF subversion and git services commented on SOLR-8452:
---

Commit 1721450 from [~cpoerschke] in branch 'dev/trunk'
[ https://svn.apache.org/r1721450 ]

SOLR-8452: replace "partialResults" occurrences with 
SolrQueryResponse.RESPONSE_HEADER_PARTIAL_RESULTS_KEY

> replace "partialResults" occurrences with 
> SolrQueryResponse.RESPONSE_HEADER_PARTIAL_RESULTS_KEY
> ---
>
> Key: SOLR-8452
> URL: https://issues.apache.org/jira/browse/SOLR-8452
> Project: Solr
>  Issue Type: Task
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
>Priority: Minor
> Attachments: SOLR-8452.patch
>
>
> proposed patch against trunk to follow (The 
> {{TestSolrQueryResponse.testResponseHeaderPartialResults()}} test within the 
> patch is to ensure that inadvertent, non-backwards-compatible changes to 
> {{SolrQueryResponse.RESPONSE_HEADER_PARTIAL_RESULTS_KEY}} result in test 
> failure.)






[jira] [Commented] (SOLR-8452) replace "partialResults" occurrences with SolrQueryResponse.RESPONSE_HEADER_PARTIAL_RESULTS_KEY

2015-12-22 Thread Christine Poerschke (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8452?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15068645#comment-15068645
 ] 

Christine Poerschke commented on SOLR-8452:
---

The {{TestSolrQueryResponse.testResponseHeaderPartialResults()}} test (added by 
this revision) would also have to accidentally (or on purpose) be changed to 
match the changed constant value - but yes, that would be only one specific 
failing test instead of potentially several generally failing tests.
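
A self-contained sketch of that guard-test idea (the stand-in class below is hypothetical; the real test lives in {{TestSolrQueryResponse}}): one test pins the published value against a literal, so an incompatible change to the constant fails exactly that one check.

```java
// Stand-in for SolrQueryResponse so the sketch is self-contained.
class SolrQueryResponseStandIn {
    public static final String RESPONSE_HEADER_PARTIAL_RESULTS_KEY = "partialResults";
}

public class PartialResultsKeyGuardTest {
    // Compares the constant against the literal the outside world depends on.
    static boolean keyUnchanged() {
        return "partialResults".equals(SolrQueryResponseStandIn.RESPONSE_HEADER_PARTIAL_RESULTS_KEY);
    }

    public static void main(String[] args) {
        // True while the constant keeps its published value; a rename of the
        // value fails only this guard, not many unrelated tests.
        System.out.println(keyUnchanged());
    }
}
```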

> replace "partialResults" occurrences with 
> SolrQueryResponse.RESPONSE_HEADER_PARTIAL_RESULTS_KEY
> ---
>
> Key: SOLR-8452
> URL: https://issues.apache.org/jira/browse/SOLR-8452
> Project: Solr
>  Issue Type: Task
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
>Priority: Minor
> Attachments: SOLR-8452.patch
>
>
> proposed patch against trunk to follow (The 
> {{TestSolrQueryResponse.testResponseHeaderPartialResults()}} test within the 
> patch is to ensure that inadvertent, non-backwards-compatible changes to 
> {{SolrQueryResponse.RESPONSE_HEADER_PARTIAL_RESULTS_KEY}} result in test 
> failure.)






[jira] [Commented] (SOLR-8415) Provide command to switch between non/secure mode in ZK

2015-12-22 Thread Mike Drob (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8415?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15068742#comment-15068742
 ] 

Mike Drob commented on SOLR-8415:
-

bq. should take a path, like CLEAN
Optional path or required path? We could still default to / if no path is 
given, or make the path required for consistency. Or we could accept multiple 
paths. I think operating on / will be the most common use case, so it would 
make sense to default to it, but I'll defer to you on this.


bq. catch NoNodeException, like CLEAN
Good catch.

bq. Will this work if the version of the znode is set?
Yes - the -1 means we don't care about the version.

bq. Why don't you support retryOnConnLoss?
Not sure what this means.

bq. Would be good to test that the acls get applied recursively
The existing test does this. It sets ACLs on / and tests on /collections/collection1.

bq. maybe change your test to do this (or do both this and the 
secure/non-secure version, should be simple to do both probably).
I've been tinkering with a test for this; I'm having some trouble getting the 
providers and credentials lined up in a way that tests something meaningful. I 
think I can get it, though.
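
To make the recursive-ACL point concrete, here is a minimal, hypothetical sketch. The in-memory map stands in for {{zk.getChildren(path, false)}}, and the list append stands in for {{zk.setACL(path, acls, -1)}}, where the -1 version argument means the ACL is applied regardless of the znode's current version.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class RecursiveAclDemo {
    // Stand-in for the znode tree; a real client would call
    // zk.getChildren(path, false) instead of consulting this map.
    static final Map<String, List<String>> CHILDREN = new HashMap<>();

    // Depth-first walk; the append stands in for zk.setACL(path, acls, -1).
    static List<String> applyAclRecursively(String path, List<String> applied) {
        applied.add(path);
        for (String child : CHILDREN.getOrDefault(path, Collections.emptyList())) {
            String childPath = "/".equals(path) ? "/" + child : path + "/" + child;
            applyAclRecursively(childPath, applied);
        }
        return applied;
    }

    public static void main(String[] args) {
        CHILDREN.put("/", Arrays.asList("collections"));
        CHILDREN.put("/collections", Arrays.asList("collection1"));
        // Visits /, then /collections, then /collections/collection1 —
        // mirroring "set ACLs on /, test on /collections/collection1".
        System.out.println(applyAclRecursively("/", new ArrayList<>()));
    }
}
```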

> Provide command to switch between non/secure mode in ZK
> ---
>
> Key: SOLR-8415
> URL: https://issues.apache.org/jira/browse/SOLR-8415
> Project: Solr
>  Issue Type: Improvement
>  Components: security, SolrCloud
>Reporter: Mike Drob
>Assignee: Gregory Chanan
> Fix For: Trunk
>
> Attachments: SOLR-8415.patch, SOLR-8415.patch
>
>
> We have the ability to run both with and without ZK ACLs, but we don't have a 
> great way to switch between the two modes. The most common use case, I 
> imagine, would be upgrading from an old version that did not support this to 
> a new version that does, and wanting to protect all of the existing content 
> in ZK, but it is conceivable that a user might want to remove ACLs as well.






[jira] [Commented] (LUCENE-6946) SortField.equals does not take the missing value into account

2015-12-22 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6946?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15068531#comment-15068531
 ] 

ASF subversion and git services commented on LUCENE-6946:
-

Commit 1721444 from [~jpountz] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1721444 ]

LUCENE-6946: SortField.equals now takes the missing value into account.

> SortField.equals does not take the missing value into account
> -
>
> Key: LUCENE-6946
> URL: https://issues.apache.org/jira/browse/LUCENE-6946
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Adrien Grand
>Assignee: Adrien Grand
>Priority: Minor
> Fix For: 5.5, Trunk
>
> Attachments: LUCENE-6946.patch
>
>
> SortField.equals does not check whether both objects have the same missing 
> value.






[jira] [Commented] (SOLR-8452) replace "partialResults" occurrences with SolrQueryResponse.RESPONSE_HEADER_PARTIAL_RESULTS_KEY

2015-12-22 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8452?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15068747#comment-15068747
 ] 

ASF subversion and git services commented on SOLR-8452:
---

Commit 1721465 from [~cpoerschke] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1721465 ]

SOLR-8452: replace "partialResults" occurrences with 
SolrQueryResponse.RESPONSE_HEADER_PARTIAL_RESULTS_KEY (merge in revision 
1721450 from trunk)

> replace "partialResults" occurrences with 
> SolrQueryResponse.RESPONSE_HEADER_PARTIAL_RESULTS_KEY
> ---
>
> Key: SOLR-8452
> URL: https://issues.apache.org/jira/browse/SOLR-8452
> Project: Solr
>  Issue Type: Task
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
>Priority: Minor
> Attachments: SOLR-8452.patch
>
>
> proposed patch against trunk to follow (The 
> {{TestSolrQueryResponse.testResponseHeaderPartialResults()}} test within the 
> patch is to ensure that inadvertent, non-backwards-compatible changes to 
> {{SolrQueryResponse.RESPONSE_HEADER_PARTIAL_RESULTS_KEY}} result in test 
> failure.)






[JENKINS] Lucene-Solr-trunk-Linux (64bit/jdk1.8.0_66) - Build # 15290 - Still Failing!

2015-12-22 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/15290/
Java: 64bit/jdk1.8.0_66 -XX:-UseCompressedOops -XX:+UseSerialGC

1 tests failed.
FAILED:  org.apache.solr.cloud.CollectionsAPIDistributedZkTest.test

Error Message:
Error from server at http://127.0.0.1:54906/xlywc/awholynewcollection_0: non ok 
status: 500, message:Server Error

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at http://127.0.0.1:54906/xlywc/awholynewcollection_0: non ok 
status: 500, message:Server Error
at 
__randomizedtesting.SeedInfo.seed([2983DA7549556A09:A1D7E5AFE7A907F1]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:509)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:241)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:230)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:150)
at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:943)
at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:958)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.waitForNon403or404or503(AbstractFullDistribZkTestBase.java:1754)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testCollectionsAPI(CollectionsAPIDistributedZkTest.java:638)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.test(CollectionsAPIDistributedZkTest.java:160)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:965)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:940)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
   

[jira] [Updated] (SOLR-8420) Date statistics: sumOfSquares overflows long

2015-12-22 Thread Tom Hill (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8420?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tom Hill updated SOLR-8420:
---
Attachment: StdDev.java

Just a quick demo of why TestDistributedSearch is failing, when running with 
the patch.

When TestDistributedSearch#test is run with two partitions, it gets a slightly 
different value than when run on one partition. 

The two results are 
100010100011010110010111011100010110010101111000110
100010100011010110010111011100010110010101111000101

This matches the numbers seen in TestDistributedSearch.

It looks like we need to add some delta into the compare for doubles in 

{{BaseDistributedSearchTestCase#compare(Object a, Object b, int flags, Map handle)}}
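For illustration, a tolerance-based compare along the lines suggested might look like this (class name and epsilon are hypothetical, not the actual Solr test code):

```java
// Illustrative only: compare doubles with a relative tolerance instead of
// exact equality, since one-shard and two-shard sums of squares can differ
// in the last few bits of floating point, as the bit patterns above show.
public class DoubleCompare {
    static final double EPSILON = 1e-9; // hypothetical relative tolerance

    static boolean nearlyEqual(double a, double b) {
        if (a == b) return true;            // exact matches, including infinities
        double diff = Math.abs(a - b);
        double norm = Math.max(Math.abs(a), Math.abs(b));
        return diff <= norm * EPSILON;      // relative error bound
    }

    public static void main(String[] args) {
        // Two sums that differ only in the final bit, like the two
        // partition results quoted above.
        double oneShard = 1.2345678901234567e18;
        double twoShards = Math.nextUp(oneShard); // off by one ULP
        System.out.println(nearlyEqual(oneShard, twoShards)); // prints true
    }
}
```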

> Date statistics: sumOfSquares overflows long
> 
>
> Key: SOLR-8420
> URL: https://issues.apache.org/jira/browse/SOLR-8420
> Project: Solr
>  Issue Type: Bug
>  Components: SearchComponents - other
>Affects Versions: 5.4
>Reporter: Tom Hill
>Priority: Minor
> Attachments: 0001-Fix-overflow-in-date-statistics.patch, 
> 0001-Fix-overflow-in-date-statistics.patch, StdDev.java
>
>
> The values for Dates are large enough that squaring them overflows a "long" 
> field. This should be converted to a double. 
> StatsValuesFactory.java, line 755 DateStatsValues#updateTypeSpecificStats Add 
> a cast to double 
> sumOfSquares += ( (double)value * value * count);
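To illustrate the overflow that the cast in the quoted line fixes, a minimal standalone demo (the sample timestamp is hypothetical):

```java
// Illustrative only: squaring a millisecond timestamp overflows a long
// accumulator, while a double accumulator (as in the patched
// StatsValuesFactory line) keeps a sane magnitude.
public class SumOfSquaresOverflow {
    public static void main(String[] args) {
        long value = 1_450_000_000_000L; // a 2015 date in epoch millis (sample)
        long count = 1;

        // value * value is ~2.1e24, far beyond Long.MAX_VALUE (~9.2e18),
        // so the long product silently wraps modulo 2^64.
        long overflowed = value * value * count;

        // The patched form: widen to double before multiplying.
        double correct = (double) value * value * count;

        System.out.println((double) overflowed == correct); // prints false
        System.out.println(correct > 2e24);                 // prints true
    }
}
```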



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7339) Upgrade Jetty from 9.2 to 9.3

2015-12-22 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7339?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15069011#comment-15069011
 ] 

Mark Miller commented on SOLR-7339:
---

bq. My patch for SOLR-8453 seems to solve pretty much all the connection resets 
I have been seeing except for the Locale issue.

So this still stands. I think I've solved the general issue with the patch in 
SOLR-8453. I'm still banging around with it, but it's looking pretty good on my 
machine.

I still think we probably have to roll this update back because it appears to 
break Solr under certain default Locales. We should probably try and isolate 
what is causing it so we can file a bug, but we would still need to wait for a 
good version.

> Upgrade Jetty from 9.2 to 9.3
> -
>
> Key: SOLR-7339
> URL: https://issues.apache.org/jira/browse/SOLR-7339
> Project: Solr
>  Issue Type: Improvement
>Reporter: Gregg Donovan
>Assignee: Shalin Shekhar Mangar
> Fix For: Trunk
>
> Attachments: SOLR-7339.patch, SOLR-7339.patch, 
> SolrExampleStreamingBinaryTest.testUpdateField-jetty92.pcapng, 
> SolrExampleStreamingBinaryTest.testUpdateField-jetty93.pcapng
>
>
> Jetty 9.3 offers support for HTTP/2. Interest in HTTP/2 or its predecessor 
> SPDY was shown in [SOLR-6699|https://issues.apache.org/jira/browse/SOLR-6699] 
> and [on the mailing list|http://markmail.org/message/jyhcmwexn65gbdsx].
> Among the HTTP/2 benefits over HTTP/1.1 relevant to Solr are:
> * multiplexing requests over a single TCP connection ("streams")
> * canceling a single request without closing the TCP connection
> * removing [head-of-line 
> blocking|https://http2.github.io/faq/#why-is-http2-multiplexed]
> * header compression
> Caveats:
> * Jetty 9.3 is at M2, not released.
> * Full Solr support for HTTP/2 would require more work than just upgrading 
> Jetty. The server configuration would need to change and a new HTTP client 
> ([Jetty's own 
> client|https://github.com/eclipse/jetty.project/tree/master/jetty-http2], 
> [Square's OkHttp|http://square.github.io/okhttp/], 
> [etc.|https://github.com/http2/http2-spec/wiki/Implementations]) would need 
> to be selected and wired up. Perhaps this is worthy of a branch?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8248) Log a query as soon as it comes in and assign a unique id to it

2015-12-22 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15069029#comment-15069029
 ] 

Shawn Heisey commented on SOLR-8248:


bq. Is there a reason Solr doesn't use log4j2 or logback, as these are supposed 
to be faster than log4j 

Boring TL;DR history:

For most of its history, Solr has been using slf4j logging (from slf4j.org), 
bound to java.util.logging.

In version 4.3, logging jars were removed from the war file, moved to jetty's 
lib/ext folder, and the binding was changed to log4j 1.2.  I do not know why 
the older log4j was chosen, perhaps it was simply a familiar library.  
Speculation says that part of it may have come from loyalty to a fellow Apache 
project, and the fact that the library on slf4j.org does not support log4j2.

Just this year, log4j 1.x was declared completely end of life, so we have an 
issue to upgrade the binding to log4j2 (still using slf4j within Solr), but 
this is not a simple drop-in replacement.  Code changes will likely be required 
to keep the Logging tab in the admin UI working.  Jar changes will also be 
required.


> Log a query as soon as it comes in and assign a unique id to it
> ---
>
> Key: SOLR-8248
> URL: https://issues.apache.org/jira/browse/SOLR-8248
> Project: Solr
>  Issue Type: Improvement
>  Components: Server
>Affects Versions: 5.3
>Reporter: Pushkar Raste
>Priority: Minor
>
> Often times when there is an OutOfMemory error, Solr fails to log details 
> about the query that might have caused it. Solr doesn't provide enough 
> information to investigate the root cause in such cases. 
> We can log a query as soon as it comes in and reference it by its unique id 
> to log details like Hits, Status and QTime when the query finishes. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7535) Add UpdateStream to Streaming API and Streaming Expression

2015-12-22 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7535?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15069064#comment-15069064
 ] 

Joel Bernstein commented on SOLR-7535:
--

As Dennis mentioned, the SelectStream will handle the field mappings so no need 
to build that in.

> Add UpdateStream to Streaming API and Streaming Expression
> --
>
> Key: SOLR-7535
> URL: https://issues.apache.org/jira/browse/SOLR-7535
> Project: Solr
>  Issue Type: New Feature
>  Components: clients - java, SolrJ
>Reporter: Joel Bernstein
>Priority: Minor
>
> The ticket adds an UpdateStream implementation to the Streaming API and 
> streaming expressions. The UpdateStream will wrap a TupleStream and send the 
> Tuples it reads to a SolrCloud collection to be indexed.
> This will allow users to pull data from different Solr Cloud collections, 
> merge and transform the streams and send the transformed data to another Solr 
> Cloud collection.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-8458) Parameter substitution for Streaming Expressions

2015-12-22 Thread Joel Bernstein (JIRA)
Joel Bernstein created SOLR-8458:


 Summary: Parameter substitution for Streaming Expressions
 Key: SOLR-8458
 URL: https://issues.apache.org/jira/browse/SOLR-8458
 Project: Solr
  Issue Type: Improvement
Reporter: Joel Bernstein
Priority: Minor


As Streaming Expressions become more complicated, it would be nice to be able to 
support parameter substitution. For example:

{code}
http://localhost:8983/col/stream?expr=merge($left, 
$right)&left=search(...)&right=search(...)
{code}
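One way such substitution could work, as a purely illustrative sketch (not the eventual Solr implementation; the {{$name}} parameter syntax is assumed from the example above):

```java
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Illustrative only: expand $name references in a streaming expression
// using other request parameters, the way expr=merge($left, $right)
// with left=search(...) and right=search(...) might be resolved.
public class ExprSubstitution {
    private static final Pattern PARAM = Pattern.compile("\\$(\\w+)");

    static String substitute(String expr, Map<String, String> params) {
        Matcher m = PARAM.matcher(expr);
        StringBuffer out = new StringBuffer();
        while (m.find()) {
            // Unknown parameters are left as-is rather than erased.
            String replacement = params.getOrDefault(m.group(1), m.group(0));
            m.appendReplacement(out, Matcher.quoteReplacement(replacement));
        }
        m.appendTail(out);
        return out.toString();
    }

    public static void main(String[] args) {
        String expr = "merge($left, $right)";
        Map<String, String> params =
            Map.of("left", "search(col, q=\"a\")", "right", "search(col, q=\"b\")");
        System.out.println(substitute(expr, params));
        // prints merge(search(col, q="a"), search(col, q="b"))
    }
}
```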



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-7535) Add UpdateStream to Streaming API and Streaming Expression

2015-12-22 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7535?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15069064#comment-15069064
 ] 

Joel Bernstein edited comment on SOLR-7535 at 12/23/15 3:14 AM:


As Dennis mentioned the SelectStream will handle the field mappings so no need 
to build that in.


was (Author: joel.bernstein):
A Dennis mentioned the SelectStream will handle the field mappings so no need 
to build that in.

> Add UpdateStream to Streaming API and Streaming Expression
> --
>
> Key: SOLR-7535
> URL: https://issues.apache.org/jira/browse/SOLR-7535
> Project: Solr
>  Issue Type: New Feature
>  Components: clients - java, SolrJ
>Reporter: Joel Bernstein
>Priority: Minor
>
> The ticket adds an UpdateStream implementation to the Streaming API and 
> streaming expressions. The UpdateStream will wrap a TupleStream and send the 
> Tuples it reads to a SolrCloud collection to be indexed.
> This will allow users to pull data from different Solr Cloud collections, 
> merge and transform the streams and send the transformed data to another Solr 
> Cloud collection.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8451) We should be calling method.abort before response.close in HttpSolrClient

2015-12-22 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8451?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15069018#comment-15069018
 ] 

Mark Miller commented on SOLR-8451:
---

I'm evolving this patch in SOLR-8453. Quote from that issue: 

bq. I think we overdo method.abort and I think it messes up connection reuse. 
On a clean server->client exception, we should simply make sure the response 
entity content is fully consumed and closed like a normal request.

> We should be calling method.abort before response.close in HttpSolrClient
> -
>
> Key: SOLR-8451
> URL: https://issues.apache.org/jira/browse/SOLR-8451
> Project: Solr
>  Issue Type: Bug
>Reporter: Mark Miller
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7535) Add UpdateStream to Streaming API and Streaming Expression

2015-12-22 Thread Jason Gerlowski (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7535?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15069034#comment-15069034
 ] 

Jason Gerlowski commented on SOLR-7535:
---

I'm in the process of hacking together a first pass at this.

Going well for the most part, but I did run into one sticking point.  
{{UpdateStream.read()}} takes each tuple and sends it along to a SolrCloud 
collection.  I was planning on converting the tuple into a 
{{SolrInputDocument}}, and then using {{CloudSolrClient.add(doc)}} to send 
along the converted tuple.

It's not super hard to take a straw-man approach to the conversion:
{code}
final SolrInputDocument doc = new SolrInputDocument();
for (Object s : tupleFromSource.fields.keySet()) {
  doc.addField((String)s, tupleFromSource.get(s));
}   
{code}

Is this a reasonable approach?  I think this'll work for simple cases, but I 
wasn't sure how it'd do with more complex tuples.  Do tuples ever have 
non-String keys?  Is there any special treatment that I should know about for 
nested docs (I wasn't sure how those map to tuples)?

I'm assuming there must be some code out there that does the reverse conversion 
(*from* Solr results *to* tuples).  I nosed around a bit in 
{{StreamHandler.handleRequestBody}} and the various TupleStream 
implementations, but I didn't find anything too promising.  Does anyone know 
where that might live?  If I found that code, it'd probably be helpful for doing 
the opposite conversion for {{UpdateStream}}.
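A slightly hardened, standalone take on the straw-man conversion above (a plain Map stands in for Tuple since the real Tuple API may differ; skipping null values is an assumption, not established UpdateStream behavior):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative only: convert tuple fields into document fields, stringifying
// keys explicitly and skipping null values rather than indexing them.
public class TupleToDoc {
    static Map<String, Object> toDocFields(Map<?, ?> tupleFields) {
        Map<String, Object> doc = new LinkedHashMap<>();
        for (Map.Entry<?, ?> e : tupleFields.entrySet()) {
            if (e.getValue() != null) {
                // Analogue of SolrInputDocument.addField(name, value).
                doc.put(String.valueOf(e.getKey()), e.getValue());
            }
        }
        return doc;
    }

    public static void main(String[] args) {
        Map<Object, Object> tuple = new LinkedHashMap<>();
        tuple.put("id", "doc1");
        tuple.put("price_f", 9.99f);
        tuple.put("missing", null); // dropped by the conversion
        System.out.println(toDocFields(tuple)); // prints {id=doc1, price_f=9.99}
    }
}
```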

> Add UpdateStream to Streaming API and Streaming Expression
> --
>
> Key: SOLR-7535
> URL: https://issues.apache.org/jira/browse/SOLR-7535
> Project: Solr
>  Issue Type: New Feature
>  Components: clients - java, SolrJ
>Reporter: Joel Bernstein
>Priority: Minor
>
> The ticket adds an UpdateStream implementation to the Streaming API and 
> streaming expressions. The UpdateStream will wrap a TupleStream and send the 
> Tuples it reads to a SolrCloud collection to be indexed.
> This will allow users to pull data from different Solr Cloud collections, 
> merge and transform the streams and send the transformed data to another Solr 
> Cloud collection.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-8453) Local exceptions in DistributedUpdateProcessor should not cut off an ongoing request.

2015-12-22 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8453?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15069016#comment-15069016
 ] 

Mark Miller edited comment on SOLR-8453 at 12/23/15 1:47 AM:
-

bq. I'll run the tests in a loop with this patch to see if I can reproduce the 
failures.

I'm still looping more recent versions. If you get a chance, update to the most 
recent patch. It has some changes to how our clients handle cleanup under 
failures. I think we overdo method.abort and I think it messes up connection 
reuse. On a clean server->client exception, we should simply make sure the 
response entity content is fully consumed and closed like a normal request.
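What "fully consumed and closed like a normal request" can look like in client code, as a hedged sketch (a ByteArrayInputStream stands in for the HTTP response entity stream; the actual SolrJ/HttpClient code differs):

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

// Illustrative only: drain a response body to EOF and close it so the
// underlying connection stays eligible for reuse, instead of aborting it.
public class DrainResponse {
    static long drainAndClose(InputStream body) throws IOException {
        long drained = 0;
        try (InputStream in = body) {
            byte[] buf = new byte[8192];
            int n;
            // Read to end-of-stream so the connection can return to the pool.
            while ((n = in.read(buf)) != -1) {
                drained += n;
            }
        }
        return drained;
    }

    public static void main(String[] args) throws IOException {
        InputStream fake = new ByteArrayInputStream(new byte[1234]);
        System.out.println(drainAndClose(fake)); // prints 1234
    }
}
```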


was (Author: markrmil...@gmail.com):
bq. I'll run the tests in a loop with this patch to see if I can reproduce the 
failures.

I'm still looping more recent versions. If you get a chance, update to the most 
recent patch. It has some chances to how our clients handle cleanup under 
failures. I think we overdo method.abort and I think it messes up connection 
reuse.

> Local exceptions in DistributedUpdateProcessor should not cut off an ongoing 
> request.
> -
>
> Key: SOLR-8453
> URL: https://issues.apache.org/jira/browse/SOLR-8453
> Project: Solr
>  Issue Type: Bug
>Reporter: Mark Miller
>Assignee: Mark Miller
> Attachments: SOLR-8453.patch, SOLR-8453.patch, SOLR-8453.patch, 
> SOLR-8453.patch, SOLR-8453.patch, SOLR-8453.patch, SOLR-8453.patch
>
>
> The basic problem is that when we are streaming in updates via a client, an 
> update can fail in a way that further updates in the request will not be 
> processed, but not in a way that causes the client to stop and finish up the 
> request before the server does something else with that connection.
> This seems to mean that even after the server stops processing the request, 
> the concurrent update client is still in the process of sending the request. 
> It seems previously, Jetty would not go after the connection very quickly 
> after the server processing thread was stopped via exception, and the client 
> (usually?) had time to clean up properly. But after the Jetty upgrade from 
> 9.2 to 9.3, Jetty closes the connection on the server sooner than previous 
> versions (?), and the client does not end up getting notified of the original 
> exception at all and instead hits a connection reset exception. The result 
> was random fails due to connection reset throughout our tests and one 
> particular test failing consistently. Even before this update, it does not 
> seem like we are acting in a safe or 'behaved' manner, but our version of 
> Jetty was relaxed enough (or a bug was fixed?) for our tests to work out.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8453) Local exceptions in DistributedUpdateProcessor should not cut off an ongoing request.

2015-12-22 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8453?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15069016#comment-15069016
 ] 

Mark Miller commented on SOLR-8453:
---

bq. I'll run the tests in a loop with this patch to see if I can reproduce the 
failures.

I'm still looping more recent versions. If you get a chance, update to the most 
recent patch. It has some changes to how our clients handle cleanup under 
failures. I think we overdo method.abort and I think it messes up connection 
reuse.

> Local exceptions in DistributedUpdateProcessor should not cut off an ongoing 
> request.
> -
>
> Key: SOLR-8453
> URL: https://issues.apache.org/jira/browse/SOLR-8453
> Project: Solr
>  Issue Type: Bug
>Reporter: Mark Miller
>Assignee: Mark Miller
> Attachments: SOLR-8453.patch, SOLR-8453.patch, SOLR-8453.patch, 
> SOLR-8453.patch, SOLR-8453.patch, SOLR-8453.patch, SOLR-8453.patch
>
>
> The basic problem is that when we are streaming in updates via a client, an 
> update can fail in a way that further updates in the request will not be 
> processed, but not in a way that causes the client to stop and finish up the 
> request before the server does something else with that connection.
> This seems to mean that even after the server stops processing the request, 
> the concurrent update client is still in the process of sending the request. 
> It seems previously, Jetty would not go after the connection very quickly 
> after the server processing thread was stopped via exception, and the client 
> (usually?) had time to clean up properly. But after the Jetty upgrade from 
> 9.2 to 9.3, Jetty closes the connection on the server sooner than previous 
> versions (?), and the client does not end up getting notified of the original 
> exception at all and instead hits a connection reset exception. The result 
> was random fails due to connection reset throughout our tests and one 
> particular test failing consistently. Even before this update, it does not 
> seem like we are acting in a safe or 'behaved' manner, but our version of 
> Jetty was relaxed enough (or a bug was fixed?) for our tests to work out.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-trunk-Windows (32bit/jdk1.8.0_66) - Build # 5489 - Still Failing!

2015-12-22 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Windows/5489/
Java: 32bit/jdk1.8.0_66 -server -XX:+UseConcMarkSweepGC

3 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.core.TestLazyCores

Error Message:
ObjectTracker found 4 object(s) that were not released!!! [SolrCore, 
MDCAwareThreadPoolExecutor, MockDirectoryWrapper, MockDirectoryWrapper]

Stack Trace:
java.lang.AssertionError: ObjectTracker found 4 object(s) that were not 
released!!! [SolrCore, MDCAwareThreadPoolExecutor, MockDirectoryWrapper, 
MockDirectoryWrapper]
at __randomizedtesting.SeedInfo.seed([C86B6821C207D53C]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertNull(Assert.java:551)
at org.apache.solr.SolrTestCaseJ4.afterClass(SolrTestCaseJ4.java:229)
at sun.reflect.GeneratedMethodAccessor18.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:834)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)


FAILED:  junit.framework.TestSuite.org.apache.solr.core.TestLazyCores

Error Message:
1 thread leaked from SUITE scope at org.apache.solr.core.TestLazyCores: 1) 
Thread[id=11727, name=searcherExecutor-4858-thread-1, state=WAITING, 
group=TGRP-TestLazyCores] at sun.misc.Unsafe.park(Native Method)
 at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
 at 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) 
at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
at java.lang.Thread.run(Thread.java:745)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE 
scope at org.apache.solr.core.TestLazyCores: 
   1) Thread[id=11727, name=searcherExecutor-4858-thread-1, state=WAITING, 
group=TGRP-TestLazyCores]
at sun.misc.Unsafe.park(Native Method)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
at 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
at 

[jira] [Updated] (SOLR-8458) Parameter substitution for Streaming Expressions

2015-12-22 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8458?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-8458:
-
Description: 
As Streaming Expressions become more complicated it would be nice to support 
parameter substitution. For example:

{code}
http://localhost:8983/col/stream?expr=merge($left, $right, 
...)&left=search(...)&right=search(...)
{code}

  was:
As Streaming Expressions become more complicated it would be nice to support 
parameter substitution. For example:

{code}
http://localhost:8983/col/stream?expr=merge($left, 
$right)&left=search(...)&right=search(...)
{code}


> Parameter substitution for Streaming Expressions
> 
>
> Key: SOLR-8458
> URL: https://issues.apache.org/jira/browse/SOLR-8458
> Project: Solr
>  Issue Type: Improvement
>Reporter: Joel Bernstein
>Priority: Minor
>
> As Streaming Expressions become more complicated it would be nice to support 
> parameter substitution. For example:
> {code}
> http://localhost:8983/col/stream?expr=merge($left, $right, 
> ...)&left=search(...)&right=search(...)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-NightlyTests-5.x - Build # 1054 - Still Failing

2015-12-22 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-5.x/1054/

3 tests failed.
FAILED:  org.apache.solr.cloud.CollectionsAPIDistributedZkTest.test

Error Message:
Error from server at https://127.0.0.1:34590/m_mn/w: Expected mime type 
application/octet-stream but got text/html.Error 
500HTTP ERROR: 500 Problem accessing 
/m_mn/w/admin/cores. Reason: {msg=Java heap 
space,trace=java.lang.OutOfMemoryError: Java heap space ,code=500} 
Powered by Jetty://   

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at https://127.0.0.1:34590/m_mn/w: Expected mime type 
application/octet-stream but got text/html. 


Error 500 


HTTP ERROR: 500
Problem accessing /m_mn/w/admin/cores. Reason:
{msg=Java heap space,trace=java.lang.OutOfMemoryError: Java heap space
,code=500}
Powered by Jetty://



at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:543)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:241)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:230)
at org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1220)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.makeRequest(CollectionsAPIDistributedZkTest.java:295)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testErrorHandling(CollectionsAPIDistributedZkTest.java:412)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.test(CollectionsAPIDistributedZkTest.java:162)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:965)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:940)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 

[jira] [Commented] (SOLR-8326) PKIAuthenticationPlugin doesn't report any errors in case of stale or wrong keys and returns garbage

2015-12-22 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8326?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15069002#comment-15069002
 ] 

Noble Paul commented on SOLR-8326:
--

Sure. Anyway, that is the plan. But I was wondering if that was the problem; the 
patch was just an idea.  

> PKIAuthenticationPlugin doesn't report any errors in case of stale or wrong 
> keys and returns garbage
> 
>
> Key: SOLR-8326
> URL: https://issues.apache.org/jira/browse/SOLR-8326
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 5.3, 5.3.1
>Reporter: Anshum Gupta
>Assignee: Anshum Gupta
>Priority: Blocker
> Fix For: 5.3.2, 5.4, Trunk
>
> Attachments: SOLR-8326.patch, SOLR-8326.patch, SOLR-8326.patch, 
> pkiauth_ttl.patch
>
>
> This was reported on the mailing list:
> https://www.mail-archive.com/solr-user@lucene.apache.org/msg115921.html
> I tested it out as follows to confirm that adding a 'read' rule causes 
> replication to break. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7535) Add UpdateStream to Streaming API and Streaming Expression

2015-12-22 Thread Dennis Gove (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7535?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15069053#comment-15069053
 ] 

Dennis Gove commented on SOLR-7535:
---

For the original mapping, take a look at SolrStream, particularly the 
{code}mapFields(...){code} function and where it is called from. 

It might make sense to require a SelectStream as the inner stream so that one 
can select the fields they want to insert. Or perhaps supporting a way to 
select fields as part of this stream's expression and it can internally use a 
SelectStream to implement that feature. 

> Add UpdateStream to Streaming API and Streaming Expression
> --
>
> Key: SOLR-7535
> URL: https://issues.apache.org/jira/browse/SOLR-7535
> Project: Solr
>  Issue Type: New Feature
>  Components: clients - java, SolrJ
>Reporter: Joel Bernstein
>Priority: Minor
>
> The ticket adds an UpdateStream implementation to the Streaming API and 
> streaming expressions. The UpdateStream will wrap a TupleStream and send the 
> Tuples it reads to a SolrCloud collection to be indexed.
> This will allow users to pull data from different Solr Cloud collections, 
> merge and transform the streams and send the transformed data to another Solr 
> Cloud collection.






[jira] [Comment Edited] (SOLR-7535) Add UpdateStream to Streaming API and Streaming Expression

2015-12-22 Thread Dennis Gove (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7535?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15069053#comment-15069053
 ] 

Dennis Gove edited comment on SOLR-7535 at 12/23/15 2:30 AM:
-

For the original mapping take a look at SolrStream, particularly the 
{code}mapFields(...){code} function and where it is called from. 

It might make sense to require a SelectStream as the inner stream so that one 
can select the fields they want to insert. Or perhaps supporting a way to 
select fields as part of this stream's expression and it can internally use a 
SelectStream to implement that feature. 


was (Author: dpgove):
For the original mapping take a look at SolrStream, particular the 
{code}mapFields(...){code} function and where it is called from. 

It might make sense to require a SelectStream as the inner stream so that one 
can select the fields they want to insert. Or perhaps supporting a way to 
select fields as part of this stream's expression and it can internally use a 
SelectStream to implement that feature. 

> Add UpdateStream to Streaming API and Streaming Expression
> --
>
> Key: SOLR-7535
> URL: https://issues.apache.org/jira/browse/SOLR-7535
> Project: Solr
>  Issue Type: New Feature
>  Components: clients - java, SolrJ
>Reporter: Joel Bernstein
>Priority: Minor
>
> The ticket adds an UpdateStream implementation to the Streaming API and 
> streaming expressions. The UpdateStream will wrap a TupleStream and send the 
> Tuples it reads to a SolrCloud collection to be indexed.
> This will allow users to pull data from different Solr Cloud collections, 
> merge and transform the streams and send the transformed data to another Solr 
> Cloud collection.






[jira] [Commented] (SOLR-7535) Add UpdateStream to Streaming API and Streaming Expression

2015-12-22 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7535?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15069059#comment-15069059
 ] 

Joel Bernstein commented on SOLR-7535:
--

I would start with the simplest case of key/value pairs. Assume String keys for 
the first round of work as well. So your approach looks fine.

I would shoot for enough functionality to support a SQL SELECT INTO query, 
because the next step will be to wire the UpdateStream into the SQLHandler.
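The shape of such a stream wrapper, reading tuples from an inner stream and flushing them in batches, can be sketched roughly as follows. This is a toy Python model under my own assumptions (class and parameter names are hypothetical), not the eventual Java UpdateStream:

```python
class UpdateStreamSketch:
    # Wraps an inner tuple stream and groups the tuples it reads into
    # batches that a real implementation would index into a SolrCloud
    # collection via the supplied send callback.
    def __init__(self, inner, batch_size, send):
        self.inner = inner
        self.batch_size = batch_size
        self.send = send

    def run(self):
        batch = []
        for tup in self.inner:
            batch.append(tup)
            if len(batch) >= self.batch_size:
                self.send(batch)
                batch = []
        if batch:  # flush the final partial batch
            self.send(batch)

sent = []
UpdateStreamSketch(iter([{"id": i} for i in range(5)]), 2, sent.append).run()
# five tuples with a batch size of 2 yield batches of 2, 2, and 1
```

A SELECT INTO query then reduces to: wrap the SELECT's stream in this updater and point the send step at the destination collection.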

> Add UpdateStream to Streaming API and Streaming Expression
> --
>
> Key: SOLR-7535
> URL: https://issues.apache.org/jira/browse/SOLR-7535
> Project: Solr
>  Issue Type: New Feature
>  Components: clients - java, SolrJ
>Reporter: Joel Bernstein
>Priority: Minor
>
> The ticket adds an UpdateStream implementation to the Streaming API and 
> streaming expressions. The UpdateStream will wrap a TupleStream and send the 
> Tuples it reads to a SolrCloud collection to be indexed.
> This will allow users to pull data from different Solr Cloud collections, 
> merge and transform the streams and send the transformed data to another Solr 
> Cloud collection.






[jira] [Comment Edited] (SOLR-7535) Add UpdateStream to Streaming API and Streaming Expression

2015-12-22 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7535?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15069059#comment-15069059
 ] 

Joel Bernstein edited comment on SOLR-7535 at 12/23/15 2:52 AM:


I would start with the simplest case of key/value pairs. Assume String keys for 
the first round of work as well. So your approach looks fine.

I would shoot for enough functionality to support a *SQL SELECT INTO* query, 
because the next step will be to wire the UpdateStream into the SQLHandler.


was (Author: joel.bernstein):
I would start with the simplest case of key/value pairs. Assume String keys for 
the first round of work as well. So your approach looks fine.

I would shoot for enough functionality to support a SQL SELECT INTO query, 
because the next step will be to wire the UpdateStream into the SQLHandler.

> Add UpdateStream to Streaming API and Streaming Expression
> --
>
> Key: SOLR-7535
> URL: https://issues.apache.org/jira/browse/SOLR-7535
> Project: Solr
>  Issue Type: New Feature
>  Components: clients - java, SolrJ
>Reporter: Joel Bernstein
>Priority: Minor
>
> The ticket adds an UpdateStream implementation to the Streaming API and 
> streaming expressions. The UpdateStream will wrap a TupleStream and send the 
> Tuples it reads to a SolrCloud collection to be indexed.
> This will allow users to pull data from different Solr Cloud collections, 
> merge and transform the streams and send the transformed data to another Solr 
> Cloud collection.






[jira] [Updated] (SOLR-8458) Parameter substitution for Streaming Expressions

2015-12-22 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8458?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-8458:
-
Description: 
As Streaming Expressions become more complicated it would be nice to support 
parameter substitution. For example:

{code}
http://localhost:8983/col/stream?expr=merge($left,$right)&left=search(...)&right=search(...)
{code}
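The substitution step itself is simple string rewriting. As a rough sketch (Python for brevity; {code}substitute{code} is a hypothetical helper, not Solr code), each {code}$name{code} token in the expression is replaced by the value of the matching request parameter:

```python
import re

def substitute(expr, params):
    # Replace each $name token with its bound parameter value;
    # names with no binding are left untouched.
    return re.sub(r"\$(\w+)",
                  lambda m: params.get(m.group(1), m.group(0)),
                  expr)

expr = substitute("merge($left, $right)",
                  {"left": "search(a)", "right": "search(b)"})
# expr is now the fully expanded expression passed to the stream handler
```

A real implementation would also have to decide how to handle literal {code}${code} characters and undefined parameters, which this sketch sidesteps.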

  was:
As Streaming Expression become more complicated it would be nice to be able to 
support parameter substitution. For example:

{code}
http://localhost:8983/col/stream?expr=merge($left,$right)&left=search(...)&right=search(...)
{code}


> Parameter substitution for Streaming Expressions
> 
>
> Key: SOLR-8458
> URL: https://issues.apache.org/jira/browse/SOLR-8458
> Project: Solr
>  Issue Type: Improvement
>Reporter: Joel Bernstein
>Priority: Minor
>
> As Streaming Expressions become more complicated it would be nice to support 
> parameter substitution. For example:
> {code}
> http://localhost:8983/col/stream?expr=merge($left,$right)&left=search(...)&right=search(...)
> {code}






[JENKINS] Lucene-Solr-trunk-Windows (64bit/jdk1.8.0_66) - Build # 5488 - Still Failing!

2015-12-22 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Windows/5488/
Java: 64bit/jdk1.8.0_66 -XX:-UseCompressedOops -XX:+UseSerialGC

4 tests failed.
FAILED:  org.apache.solr.cloud.CollectionsAPIDistributedZkTest.test

Error Message:
Error from server at http://127.0.0.1:61680/pvy/zu/awholynewcollection_0: non 
ok status: 500, message:Server Error

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at http://127.0.0.1:61680/pvy/zu/awholynewcollection_0: non ok 
status: 500, message:Server Error
at 
__randomizedtesting.SeedInfo.seed([F364EA40E42D9030:7B30D59A4AD1FDC8]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:509)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:241)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:230)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:150)
at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:943)
at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:958)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.waitForNon403or404or503(AbstractFullDistribZkTestBase.java:1754)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testCollectionsAPI(CollectionsAPIDistributedZkTest.java:658)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.test(CollectionsAPIDistributedZkTest.java:160)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:965)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:940)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)

[jira] [Commented] (SOLR-8415) Provide command to switch between non/secure mode in ZK

2015-12-22 Thread Gregory Chanan (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8415?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15068609#comment-15068609
 ] 

Gregory Chanan commented on SOLR-8415:
--

bq. Is this true? Let's say you wanted to switch from secure setup old (old 
acls, old credentials) to secure setup new (new acls, new credentials). You can 
call resetacls with (old acls + new acls, old credentials). Then call resetacls 
with (new acls, new credentials). That requires an intermediate step, but it 
isn't insecure.

Maybe change your test to do this (or do both this and the secure/non-secure 
version; it should be simple to do both).
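The two-step rotation can be modeled in a few lines. This is a toy Python model of ZooKeeper-style ACL semantics (all names are illustrative, and it is not Solr or ZooKeeper code): an update is only accepted when the caller's credential is covered by the node's current ACLs, so the overlap step keeps the node authorized throughout.

```python
class Znode:
    # Toy znode: set_acls succeeds only when the caller's credential is
    # covered by the node's current ACLs.
    def __init__(self, acls):
        self.acls = set(acls)

    def set_acls(self, credential, new_acls):
        if credential not in self.acls:
            raise PermissionError("credential not authorized for this node")
        self.acls = set(new_acls)

node = Znode({"old-cred"})
# Step 1: authenticate with the old credential, grant both old and new.
node.set_acls("old-cred", {"old-cred", "new-cred"})
# Step 2: authenticate with the new credential, drop the old one.
node.set_acls("new-cred", {"new-cred"})
# At no point was the node writable without a currently valid credential.
```

Skipping step 1 would fail: setting the new ACLs directly with the new credential is rejected, because that credential is not yet authorized.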

> Provide command to switch between non/secure mode in ZK
> ---
>
> Key: SOLR-8415
> URL: https://issues.apache.org/jira/browse/SOLR-8415
> Project: Solr
>  Issue Type: Improvement
>  Components: security, SolrCloud
>Reporter: Mike Drob
>Assignee: Gregory Chanan
> Fix For: Trunk
>
> Attachments: SOLR-8415.patch, SOLR-8415.patch
>
>
> We have the ability to run both with and without zk acls, but we don't have a 
> great way to switch between the two modes. Most common use case, I imagine, 
> would be upgrading from an old version that did not support this to a new 
> version that does, and wanting to protect all of the existing content in ZK, 
> but it is conceivable that a user might want to remove ACLs as well.






[JENKINS] Lucene-Solr-trunk-MacOSX (64bit/jdk1.8.0) - Build # 2958 - Still Failing!

2015-12-22 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-MacOSX/2958/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

3 tests failed.
FAILED:  
org.apache.solr.handler.TestReplicationHandler.doTestIndexAndConfigAliasReplication

Error Message:
[/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/solr/build/solr-core/test/J0/temp/solr.handler.TestReplicationHandler_F6893A566F61793A-001/solr-instance-002/./collection1/data/index.20151222170600284,
 
/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/solr/build/solr-core/test/J0/temp/solr.handler.TestReplicationHandler_F6893A566F61793A-001/solr-instance-002/./collection1/data,
 
/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/solr/build/solr-core/test/J0/temp/solr.handler.TestReplicationHandler_F6893A566F61793A-001/solr-instance-002/./collection1/data/index.20151222170600607]
 expected:<2> but was:<3>

Stack Trace:
java.lang.AssertionError: 
[/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/solr/build/solr-core/test/J0/temp/solr.handler.TestReplicationHandler_F6893A566F61793A-001/solr-instance-002/./collection1/data/index.20151222170600284,
 
/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/solr/build/solr-core/test/J0/temp/solr.handler.TestReplicationHandler_F6893A566F61793A-001/solr-instance-002/./collection1/data,
 
/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/solr/build/solr-core/test/J0/temp/solr.handler.TestReplicationHandler_F6893A566F61793A-001/solr-instance-002/./collection1/data/index.20151222170600607]
 expected:<2> but was:<3>
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at 
org.apache.solr.handler.TestReplicationHandler.checkForSingleIndex(TestReplicationHandler.java:815)
at 
org.apache.solr.handler.TestReplicationHandler.doTestIndexAndConfigAliasReplication(TestReplicationHandler.java:1245)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 

[jira] [Updated] (SOLR-8436) Realtime-get should support filters

2015-12-22 Thread Yonik Seeley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8436?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yonik Seeley updated SOLR-8436:
---
Attachment: SOLR-8436.patch

Updated patch that improves the tests and adds filtering into the RTG stress 
test.  Now to loop the test for a while and make sure it's all air-tight...


> Realtime-get should support filters
> ---
>
> Key: SOLR-8436
> URL: https://issues.apache.org/jira/browse/SOLR-8436
> Project: Solr
>  Issue Type: Improvement
>Affects Versions: 5.4
>Reporter: Yonik Seeley
>Assignee: Yonik Seeley
> Attachments: SOLR-8436.patch, SOLR-8436.patch, SOLR-8436.patch, 
> SOLR-8436.patch
>
>
> RTG currently ignores filters.  There are probably other use-cases for RTG 
> and filters, but one that comes to mind is security filters.
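The requested behavior, fetch by id first, then apply filters, can be sketched as follows. This is an illustrative Python model under my own assumptions (a dict standing in for the realtime view, predicates standing in for fq clauses), not Solr's implementation:

```python
def realtime_get(docs, ids, filters):
    # Resolve each requested id against the realtime view, then drop any
    # document rejected by one of the filter predicates.
    hits = [docs[i] for i in ids if i in docs]
    for accept in filters:
        hits = [d for d in hits if accept(d)]
    return hits

docs = {"1": {"id": "1", "acl": "public"},
        "2": {"id": "2", "acl": "private"}}
# A security-style filter: only documents the caller may see survive.
visible = realtime_get(docs, ["1", "2"], [lambda d: d["acl"] == "public"])
```

The interesting part in Solr is that the "realtime view" spans both the index and uncommitted updates in the transaction log, which this sketch glosses over.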






[jira] [Updated] (SOLR-8453) Local exceptions in DistributedUpdateProcessor should not cut off an ongoing request.

2015-12-22 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8453?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller updated SOLR-8453:
--
Attachment: SOLR-8453.patch

> Local exceptions in DistributedUpdateProcessor should not cut off an ongoing 
> request.
> -
>
> Key: SOLR-8453
> URL: https://issues.apache.org/jira/browse/SOLR-8453
> Project: Solr
>  Issue Type: Bug
>Reporter: Mark Miller
>Assignee: Mark Miller
> Attachments: SOLR-8453.patch, SOLR-8453.patch, SOLR-8453.patch, 
> SOLR-8453.patch, SOLR-8453.patch, SOLR-8453.patch, SOLR-8453.patch
>
>
> The basic problem is that when we are streaming in updates via a client, an 
> update can fail in a way that further updates in the request will not be 
> processed, but not in a way that causes the client to stop and finish up the 
> request before the server does something else with that connection.
> This seems to mean that even after the server stops processing the request, 
> the concurrent update client is still in the process of sending the request. 
> It seems previously, Jetty would not go after the connection very quickly 
> after the server processing thread was stopped via exception, and the client 
> (usually?) had time to clean up properly. But after the Jetty upgrade from 
> 9.2 to 9.3, Jetty closes the connection on the server sooner than previous 
> versions (?), and the client does not end up getting notified of the original 
> exception at all and instead hits a connection reset exception. The result 
> was random fails due to connection reset throughout our tests and one 
> particular test failing consistently. Even before this update, it does not 
> seem like we are acting in a safe or 'behaved' manner, but our version of 
> Jetty was relaxed enough (or a bug was fixed?) for our tests to work out.






[jira] [Commented] (SOLR-8248) Log a query as soon as it comes in and assign a unique id to it

2015-12-22 Thread Pushkar Raste (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15068898#comment-15068898
 ] 

Pushkar Raste commented on SOLR-8248:
-

[SOLR-6919|https://issues.apache.org/jira/browse/SOLR-6919] logs the REST 
request; however, the request may get modified based on the Solr config. 

[~yo...@apache.org] I do agree that correlating the query would be a problem 
(especially in a multi-sharded environment). Another alternative is to log the 
query twice, and only log it the first time when a flag is turned on. This, 
however, would also greatly increase the disk footprint of the log files. 

Is there a reason Solr doesn't use log4j2 or logback, as these are supposed to 
be faster than log4j? 

> Log a query as soon as it comes in and assign a unique id to it
> ---
>
> Key: SOLR-8248
> URL: https://issues.apache.org/jira/browse/SOLR-8248
> Project: Solr
>  Issue Type: Improvement
>  Components: Server
>Affects Versions: 5.3
>Reporter: Pushkar Raste
>Priority: Minor
>
> Oftentimes when there is an OutOfMemory error, Solr fails to log details 
> about the query that might have caused it. Solr doesn't provide enough 
> information to investigate the root cause in such cases. 
> We can log a query as soon as it comes in and reference it by its unique id 
> to log details like Hits, Status and QTime when the query finishes. 
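The proposal amounts to bracketing each request with two log lines that share an id. A minimal sketch (illustrative Python, with a hypothetical {code}handle_query{code} wrapper; not Solr code):

```python
import logging
import uuid

log = logging.getLogger("query")

def handle_query(q, execute):
    # Assign a unique id and log the raw query the moment it arrives,
    # so it is on disk even if execution later blows up (e.g. OOM).
    qid = uuid.uuid4().hex
    log.info("qid=%s received q=%s", qid, q)
    hits, qtime_ms = execute(q)
    # Log the completion stats under the same id for correlation.
    log.info("qid=%s finished hits=%d qtime=%dms", qid, hits, qtime_ms)
    return qid

a = handle_query("*:*", lambda q: (10, 3))
b = handle_query("*:*", lambda q: (10, 3))
# a and b are distinct ids, so start and finish lines pair up unambiguously
```

The cost Pushkar notes is visible here: every query now produces two log lines instead of one.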






[jira] [Commented] (SOLR-8415) Provide command to switch between non/secure mode in ZK

2015-12-22 Thread Gregory Chanan (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8415?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15068778#comment-15068778
 ] 

Gregory Chanan commented on SOLR-8415:
--

{quote}Optional path or required path? Could still default to / if no path 
given, or could make the path required for consistency. Or could accept 
multiple paths.
I think operating on / will be the most common use case, so it would make sense 
to default to it, but I'll defer to you on this.{quote}

Whatever you think is best.

{quote}Why don't you support retryOnConnLoss?
Not sure what this means.{quote}

See a bunch of the other commands in SolrZkClient, like makePath.  They support 
a retryOnConnLoss parameter, which would be useful here.

{quote}The existing test does this. Set acls on /, test on 
/collections/collection1{quote}

My mistake.  I'd check "/" as well, that sort of thing is easy to screw up.

> Provide command to switch between non/secure mode in ZK
> ---
>
> Key: SOLR-8415
> URL: https://issues.apache.org/jira/browse/SOLR-8415
> Project: Solr
>  Issue Type: Improvement
>  Components: security, SolrCloud
>Reporter: Mike Drob
>Assignee: Gregory Chanan
> Fix For: Trunk
>
> Attachments: SOLR-8415.patch, SOLR-8415.patch
>
>
> We have the ability to run both with and without zk acls, but we don't have a 
> great way to switch between the two modes. Most common use case, I imagine, 
> would be upgrading from an old version that did not support this to a new 
> version that does, and wanting to protect all of the existing content in ZK, 
> but it is conceivable that a user might want to remove ACLs as well.






[JENKINS-EA] Lucene-Solr-5.x-Linux (64bit/jdk-9-ea+95) - Build # 14994 - Failure!

2015-12-22 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/14994/
Java: 64bit/jdk-9-ea+95 -XX:+UseCompressedOops -XX:+UseSerialGC 
-XX:-CompactStrings

1 tests failed.
FAILED:  org.apache.solr.cloud.LeaderFailoverAfterPartitionTest.test

Error Message:
Expected 2 of 3 replicas to be active but only found 1; 
[core_node3:{"core":"c8n_1x3_lf_shard1_replica2","base_url":"http://127.0.0.1:58443","node_name":"127.0.0.1:58443_","state":"active","leader":"true"}];
 clusterState: DocCollection(c8n_1x3_lf)={"replicationFactor":"3","shards":{"shard1":{"range":"80000000-7fffffff","state":"active","replicas":{"core_node1":{"state":"down","base_url":"http://127.0.0.1:38773","core":"c8n_1x3_lf_shard1_replica3","node_name":"127.0.0.1:38773_"},"core_node2":{"core":"c8n_1x3_lf_shard1_replica1","base_url":"http://127.0.0.1:60938","node_name":"127.0.0.1:60938_","state":"down"},"core_node3":{"core":"c8n_1x3_lf_shard1_replica2","base_url":"http://127.0.0.1:58443","node_name":"127.0.0.1:58443_","state":"active","leader":"true"}}}},"router":{"name":"compositeId"},"maxShardsPerNode":"1","autoAddReplicas":"false"}

Stack Trace:
java.lang.AssertionError: Expected 2 of 3 replicas to be active but only found 
1; 
[core_node3:{"core":"c8n_1x3_lf_shard1_replica2","base_url":"http://127.0.0.1:58443","node_name":"127.0.0.1:58443_","state":"active","leader":"true"}];
 clusterState: DocCollection(c8n_1x3_lf)={
  "replicationFactor":"3",
  "shards":{"shard1":{
      "range":"80000000-7fffffff",
      "state":"active",
      "replicas":{
        "core_node1":{
          "state":"down",
          "base_url":"http://127.0.0.1:38773",
          "core":"c8n_1x3_lf_shard1_replica3",
          "node_name":"127.0.0.1:38773_"},
        "core_node2":{
          "core":"c8n_1x3_lf_shard1_replica1",
          "base_url":"http://127.0.0.1:60938",
          "node_name":"127.0.0.1:60938_",
          "state":"down"},
        "core_node3":{
          "core":"c8n_1x3_lf_shard1_replica2",
          "base_url":"http://127.0.0.1:58443",
          "node_name":"127.0.0.1:58443_",
          "state":"active",
          "leader":"true"}}}},
  "router":{"name":"compositeId"},
  "maxShardsPerNode":"1",
  "autoAddReplicas":"false"}
at 
__randomizedtesting.SeedInfo.seed([8E1E66F68B664DA0:64A592C259A2058]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.cloud.LeaderFailoverAfterPartitionTest.testRf3WithLeaderFailover(LeaderFailoverAfterPartitionTest.java:171)
at 
org.apache.solr.cloud.LeaderFailoverAfterPartitionTest.test(LeaderFailoverAfterPartitionTest.java:56)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:520)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:965)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:940)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 

[JENKINS] Lucene-Solr-trunk-Solaris (64bit/jdk1.8.0) - Build # 276 - Still Failing!

2015-12-22 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Solaris/276/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseParallelGC

1 tests failed.
FAILED:  
org.apache.solr.handler.PingRequestHandlerTest.testPingInClusterWithNoHealthCheck

Error Message:
No live SolrServers available to handle this 
request:[https://127.0.0.1:34530/solr/testSolrCloudCollection]

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: No live SolrServers available 
to handle this request:[https://127.0.0.1:34530/solr/testSolrCloudCollection]
at 
__randomizedtesting.SeedInfo.seed([74721A64E32297AD:9AA1A4B466CEAD49]:0)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:352)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1100)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:871)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:807)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:150)
at 
org.apache.solr.handler.PingRequestHandlerTest.testPingInClusterWithNoHealthCheck(PingRequestHandlerTest.java:200)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 

Re: [JENKINS] Lucene-Solr-trunk-Linux (64bit/jdk1.8.0_66) - Build # 15288 - Failure!

2015-12-22 Thread Michael McCandless
I'll dig

Mike McCandless

http://blog.mikemccandless.com


On Tue, Dec 22, 2015 at 11:40 AM, Policeman Jenkins Server
 wrote:
> Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/15288/
> Java: 64bit/jdk1.8.0_66 -XX:-UseCompressedOops -XX:+UseSerialGC
>
> 1 tests failed.
> FAILED:  
> org.apache.lucene.search.TestDimensionalRangeQuery.testAllDimensionalDocsWereDeletedAndThenMergedAgain
>
> Error Message:
> must index at least one point
>
> Stack Trace:
> java.lang.IllegalStateException: must index at least one point
> at 
> __randomizedtesting.SeedInfo.seed([AAC8CFA944E142D1:1C9E6E14877E764C]:0)
> at org.apache.lucene.util.bkd.BKDWriter.finish(BKDWriter.java:742)
> at 
> org.apache.lucene.codecs.simpletext.SimpleTextDimensionalWriter.writeField(SimpleTextDimensionalWriter.java:160)
> at 
> org.apache.lucene.codecs.DimensionalWriter.mergeOneField(DimensionalWriter.java:44)
> at 
> org.apache.lucene.codecs.DimensionalWriter.merge(DimensionalWriter.java:106)
> at 
> org.apache.lucene.index.SegmentMerger.mergeDimensionalValues(SegmentMerger.java:168)
> at org.apache.lucene.index.SegmentMerger.merge(SegmentMerger.java:117)
> at 
> org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:4062)
> at org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:3642)
> at 
> org.apache.lucene.index.SerialMergeScheduler.merge(SerialMergeScheduler.java:40)
> at 
> org.apache.lucene.index.IndexWriter.maybeMerge(IndexWriter.java:1917)
> at 
> org.apache.lucene.index.IndexWriter.forceMerge(IndexWriter.java:1750)
> at 
> org.apache.lucene.index.IndexWriter.forceMerge(IndexWriter.java:1707)
> at 
> org.apache.lucene.search.TestDimensionalRangeQuery.testAllDimensionalDocsWereDeletedAndThenMergedAgain(TestDimensionalRangeQuery.java:1003)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:497)
> at 
> com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
> at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
> at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
> at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
> at 
> org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
> at 
> org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
> at 
> org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
> at 
> org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
> at 
> org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
> at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
> at 
> com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
> at 
> com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
> at 
> com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
> at 
> com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
> at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
> at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
> at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
> at 
> org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
> at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
> at 
> org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
> at 
> com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
> at 
> com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
> at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
> at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
> at 
> 

[jira] [Commented] (SOLR-8452) replace "partialResults" occurrences with SolrQueryResponse.RESPONSE_HEADER_PARTIAL_RESULTS_KEY

2015-12-22 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8452?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15068694#comment-15068694
 ] 

Yonik Seeley commented on SOLR-8452:


Ah, ok, I hadn't realized you added a specific test.

I've seen people write tests in both styles in the past (just carrying over 
what is good style in the code to the tests), but I never got around to 
mentioning the downsides of doing that.

> replace "partialResults" occurrences with 
> SolrQueryResponse.RESPONSE_HEADER_PARTIAL_RESULTS_KEY
> ---
>
> Key: SOLR-8452
> URL: https://issues.apache.org/jira/browse/SOLR-8452
> Project: Solr
>  Issue Type: Task
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
>Priority: Minor
> Attachments: SOLR-8452.patch
>
>
> proposed patch against trunk to follow (The 
> {{TestSolrQueryResponse.testResponseHeaderPartialResults()}} test within the 
> patch is to ensure that inadvertent, non-backwards-compatible changes to 
> {{SolrQueryResponse.RESPONSE_HEADER_PARTIAL_RESULTS_KEY}} result in test 
> failure.)
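The guard test described in that parenthetical boils down to pinning the constant's published literal value. A minimal self-contained sketch of the idea (using a stand-in class here, since the real constant lives in org.apache.solr.response.SolrQueryResponse; the actual patch may differ) might look like:

```java
/** Stand-in for org.apache.solr.response.SolrQueryResponse's constant. */
class SolrQueryResponseStandIn {
    static final String RESPONSE_HEADER_PARTIAL_RESULTS_KEY = "partialResults";
}

class PartialResultsKeyGuard {
    /**
     * Pins the constant to its published literal so that an inadvertent,
     * non-backwards-compatible rename breaks the build instead of silently
     * breaking clients that read "partialResults" out of the response header.
     */
    static void assertKeyUnchanged() {
        if (!"partialResults".equals(
                SolrQueryResponseStandIn.RESPONSE_HEADER_PARTIAL_RESULTS_KEY)) {
            throw new AssertionError("RESPONSE_HEADER_PARTIAL_RESULTS_KEY changed");
        }
    }

    public static void main(String[] args) {
        assertKeyUnchanged();
        System.out.println("key unchanged");
    }
}
```

The point of such a test is exactly what the description says: it looks redundant, but it converts a silent wire-format change into a compile-time-visible test failure.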



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-5.x-MacOSX (64bit/jdk1.8.0) - Build # 2903 - Failure!

2015-12-22 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-MacOSX/2903/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseG1GC

2 tests failed.
FAILED:  
org.apache.solr.cloud.overseer.ZkStateReaderTest.testStateFormatUpdateWithExplicitRefresh

Error Message:
expected:<2> but was:<1>

Stack Trace:
java.lang.AssertionError: expected:<2> but was:<1>
at 
__randomizedtesting.SeedInfo.seed([3B2796BB2413CA10:249DE74CF4730CD5]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at org.junit.Assert.assertEquals(Assert.java:456)
at 
org.apache.solr.cloud.overseer.ZkStateReaderTest.testStateFormatUpdate(ZkStateReaderTest.java:137)
at 
org.apache.solr.cloud.overseer.ZkStateReaderTest.testStateFormatUpdateWithExplicitRefresh(ZkStateReaderTest.java:42)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 

[jira] [Commented] (SOLR-7339) Upgrade Jetty from 9.2 to 9.3

2015-12-22 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7339?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15069089#comment-15069089
 ] 

Mark Miller commented on SOLR-7339:
---

It's not very often, but I seem to see the following pop up after the update 
and don't remember it being an issue before (for a long time anyway):

  [junit4]   2> 327513 ERROR 
(TEST-BasicDistributedZk2Test.test-seed#[EAADE41C3F03EAEF]) 
[n:127.0.0.1:38892_i c:collection1 s:shard1 r:core_node2 x:collection1] 
o.a.s.c.ChaosMonkey Could not get the port to start jetty again
   [junit4]   2> java.net.BindException: Address already in use


> Upgrade Jetty from 9.2 to 9.3
> -
>
> Key: SOLR-7339
> URL: https://issues.apache.org/jira/browse/SOLR-7339
> Project: Solr
>  Issue Type: Improvement
>Reporter: Gregg Donovan
>Assignee: Shalin Shekhar Mangar
> Fix For: Trunk
>
> Attachments: SOLR-7339.patch, SOLR-7339.patch, 
> SolrExampleStreamingBinaryTest.testUpdateField-jetty92.pcapng, 
> SolrExampleStreamingBinaryTest.testUpdateField-jetty93.pcapng
>
>
> Jetty 9.3 offers support for HTTP/2. Interest in HTTP/2 or its predecessor 
> SPDY was shown in [SOLR-6699|https://issues.apache.org/jira/browse/SOLR-6699] 
> and [on the mailing list|http://markmail.org/message/jyhcmwexn65gbdsx].
> Among the HTTP/2 benefits over HTTP/1.1 relevant to Solr are:
> * multiplexing requests over a single TCP connection ("streams")
> * canceling a single request without closing the TCP connection
> * removing [head-of-line 
> blocking|https://http2.github.io/faq/#why-is-http2-multiplexed]
> * header compression
> Caveats:
> * Jetty 9.3 is at M2, not released.
> * Full Solr support for HTTP/2 would require more work than just upgrading 
> Jetty. The server configuration would need to change and a new HTTP client 
> ([Jetty's own 
> client|https://github.com/eclipse/jetty.project/tree/master/jetty-http2], 
> [Square's OkHttp|http://square.github.io/okhttp/], 
> [etc.|https://github.com/http2/http2-spec/wiki/Implementations]) would need 
> to be selected and wired up. Perhaps this is worthy of a branch?






[jira] [Commented] (SOLR-7339) Upgrade Jetty from 9.2 to 9.3

2015-12-22 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7339?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15069122#comment-15069122
 ] 

Mark Miller commented on SOLR-7339:
---

Interesting exception in a CollectionsAPIDistributedZkTest fail:

{noformat}
   [junit4]   2> 232756 ERROR (qtp543233699-1130) [n:127.0.0.1:43044_kqam] 
o.a.s.s.HttpSolrCall null:org.apache.solr.common.SolrException: Error trying to 
proxy request for url: http://127.0.0.1:34586/kqam/awholynewcollection_0/select
   [junit4]   2>at 
org.apache.solr.servlet.HttpSolrCall.remoteQuery(HttpSolrCall.java:591)
   [junit4]   2>at 
org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:441)
   [junit4]   2>at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:226)
   [junit4]   2>at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:184)
   [junit4]   2>at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1668)
   [junit4]   2>at 
org.apache.solr.client.solrj.embedded.JettySolrRunner$DebugFilter.doFilter(JettySolrRunner.java:111)
   [junit4]   2>at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1668)
   [junit4]   2>at 
org.eclipse.jetty.servlets.GzipFilter.doFilter(GzipFilter.java:45)
   [junit4]   2>at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1668)
   [junit4]   2>at 
org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:581)
   [junit4]   2>at 
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:224)
   [junit4]   2>at 
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1158)
   [junit4]   2>at 
org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:511)
   [junit4]   2>at 
org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
   [junit4]   2>at 
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1090)
   [junit4]   2>at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
   [junit4]   2>at 
org.eclipse.jetty.server.handler.gzip.GzipHandler.handle(GzipHandler.java:437)
   [junit4]   2>at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:119)
   [junit4]   2>at 
org.eclipse.jetty.server.Server.handle(Server.java:517)
   [junit4]   2>at 
org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:308)
   [junit4]   2>at 
org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:242)
   [junit4]   2>at 
org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:261)
   [junit4]   2>at 
org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:95)
   [junit4]   2>at 
org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:75)
   [junit4]   2>at 
org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceAndRun(ExecuteProduceConsume.java:213)
   [junit4]   2>at 
org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:147)
   [junit4]   2>at 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:654)
   [junit4]   2>at 
org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:572)
   [junit4]   2>at java.lang.Thread.run(Thread.java:745)
   [junit4]   2> Caused by: java.io.IOException: Response header too large
   [junit4]   2>at 
org.eclipse.jetty.http.HttpGenerator.generateResponse(HttpGenerator.java:404)
   [junit4]   2>at 
org.eclipse.jetty.server.HttpConnection$SendCallback.process(HttpConnection.java:678)
   [junit4]   2>at 
org.eclipse.jetty.util.IteratingCallback.processing(IteratingCallback.java:241)
   [junit4]   2>at 
org.eclipse.jetty.util.IteratingCallback.iterate(IteratingCallback.java:224)
   [junit4]   2>at 
org.eclipse.jetty.server.HttpConnection.send(HttpConnection.java:509)
   [junit4]   2>at 
org.eclipse.jetty.server.HttpChannel.sendResponse(HttpChannel.java:668)
   [junit4]   2>at 
org.eclipse.jetty.server.HttpChannel.write(HttpChannel.java:722)
   [junit4]   2>at 
org.eclipse.jetty.server.handler.gzip.GzipHttpOutputInterceptor.commit(GzipHttpOutputInterceptor.java:201)
   [junit4]   2>at 
org.eclipse.jetty.server.handler.gzip.GzipHttpOutputInterceptor.write(GzipHttpOutputInterceptor.java:100)
   [junit4]   2>at 
org.eclipse.jetty.server.HttpOutput.write(HttpOutput.java:177)
   [junit4]   2>at 
org.eclipse.jetty.server.HttpOutput.write(HttpOutput.java:163)
   [junit4]   2>at 
org.eclipse.jetty.server.HttpOutput.write(HttpOutput.java:413)
   [junit4]   

[JENKINS] Lucene-Solr-Tests-trunk-Java8 - Build # 729 - Failure

2015-12-22 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-trunk-Java8/729/

1 tests failed.
FAILED:  org.apache.solr.cloud.LeaderFailoverAfterPartitionTest.test

Error Message:
Expected 2 of 3 replicas to be active but only found 1; 
[core_node3:{"core":"c8n_1x3_lf_shard1_replica1","base_url":"http://127.0.0.1:54693","node_name":"127.0.0.1:54693_","state":"active","leader":"true"}];
 clusterState: DocCollection(c8n_1x3_lf)={   "replicationFactor":"3",   
"shards":{"shard1":{   "range":"8000-7fff",   "state":"active", 
  "replicas":{ "core_node1":{   "state":"down",   
"base_url":"http://127.0.0.1:49353",   
"core":"c8n_1x3_lf_shard1_replica2",   "node_name":"127.0.0.1:49353_"}, 
"core_node2":{   "core":"c8n_1x3_lf_shard1_replica3",   
"base_url":"http://127.0.0.1:46153",   "node_name":"127.0.0.1:46153_",  
 "state":"down"}, "core_node3":{   
"core":"c8n_1x3_lf_shard1_replica1",   
"base_url":"http://127.0.0.1:54693",   "node_name":"127.0.0.1:54693_",  
 "state":"active",   "leader":"true"}}}},   
"router":{"name":"compositeId"},   "maxShardsPerNode":"1",   
"autoAddReplicas":"false"}

Stack Trace:
java.lang.AssertionError: Expected 2 of 3 replicas to be active but only found 
1; 
[core_node3:{"core":"c8n_1x3_lf_shard1_replica1","base_url":"http://127.0.0.1:54693","node_name":"127.0.0.1:54693_","state":"active","leader":"true"}];
 clusterState: DocCollection(c8n_1x3_lf)={
  "replicationFactor":"3",
  "shards":{"shard1":{
  "range":"8000-7fff",
  "state":"active",
  "replicas":{
"core_node1":{
  "state":"down",
  "base_url":"http://127.0.0.1:49353",
  "core":"c8n_1x3_lf_shard1_replica2",
  "node_name":"127.0.0.1:49353_"},
"core_node2":{
  "core":"c8n_1x3_lf_shard1_replica3",
  "base_url":"http://127.0.0.1:46153",
  "node_name":"127.0.0.1:46153_",
  "state":"down"},
"core_node3":{
  "core":"c8n_1x3_lf_shard1_replica1",
  "base_url":"http://127.0.0.1:54693",
  "node_name":"127.0.0.1:54693_",
  "state":"active",
  "leader":"true"}}}},
  "router":{"name":"compositeId"},
  "maxShardsPerNode":"1",
  "autoAddReplicas":"false"}
at 
__randomizedtesting.SeedInfo.seed([749FE05789A5A029:FCCBDF8D2759CDD1]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.cloud.LeaderFailoverAfterPartitionTest.testRf3WithLeaderFailover(LeaderFailoverAfterPartitionTest.java:171)
at 
org.apache.solr.cloud.LeaderFailoverAfterPartitionTest.test(LeaderFailoverAfterPartitionTest.java:56)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:965)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:940)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 

[JENKINS] Lucene-Solr-trunk-MacOSX (64bit/jdk1.8.0) - Build # 2959 - Still Failing!

2015-12-22 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-MacOSX/2959/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseParallelGC

1 tests failed.
FAILED:  org.apache.solr.cloud.HttpPartitionTest.test

Error Message:
Didn't see all replicas for shard shard1 in c8n_1x2 come up within 3 ms! 
ClusterState: {   "collection1":{ "replicationFactor":"1", "shards":{   
"shard1":{ "range":"8000-", "state":"active",   
  "replicas":{"core_node2":{ "core":"collection1", 
"base_url":"http://127.0.0.1:57086", 
"node_name":"127.0.0.1:57086_", "state":"active", 
"leader":"true"}}},   "shard2":{ "range":"0-7fff", 
"state":"active", "replicas":{   "core_node1":{ 
"core":"collection1", "base_url":"http://127.0.0.1:57073",  
   "node_name":"127.0.0.1:57073_", "state":"active", 
"leader":"true"},   "core_node3":{ "core":"collection1",
 "base_url":"http://127.0.0.1:57098", 
"node_name":"127.0.0.1:57098_", "state":"active"}}}}, 
"router":{"name":"compositeId"}, "maxShardsPerNode":"1", 
"autoAddReplicas":"false", "autoCreated":"true"},   "control_collection":{  
   "replicationFactor":"1", "shards":{"shard1":{ 
"range":"8000-7fff", "state":"active", 
"replicas":{"core_node1":{ "core":"collection1", 
"base_url":"http://127.0.0.1:57066", 
"node_name":"127.0.0.1:57066_", "state":"active", 
"leader":"true"}}}}, "router":{"name":"compositeId"}, 
"maxShardsPerNode":"1", "autoAddReplicas":"false", 
"autoCreated":"true"},   "c8n_1x2":{ "replicationFactor":"2", 
"shards":{"shard1":{ "range":"8000-7fff", 
"state":"active", "replicas":{   "core_node1":{ 
"core":"c8n_1x2_shard1_replica2", 
"base_url":"http://127.0.0.1:57098", 
"node_name":"127.0.0.1:57098_", "state":"active", 
"leader":"true"},   "core_node2":{ 
"core":"c8n_1x2_shard1_replica1", 
"base_url":"http://127.0.0.1:57066", 
"node_name":"127.0.0.1:57066_", "state":"active"}}}}, 
"router":{"name":"compositeId"}, "maxShardsPerNode":"1", 
"autoAddReplicas":"false"},   "collMinRf_1x3":{ "replicationFactor":"3",
 "shards":{"shard1":{ "range":"8000-7fff", 
"state":"active", "replicas":{   "core_node1":{ 
"core":"collMinRf_1x3_shard1_replica1", 
"base_url":"http://127.0.0.1:57086", 
"node_name":"127.0.0.1:57086_", "state":"active", 
"leader":"true"},   "core_node2":{ 
"core":"collMinRf_1x3_shard1_replica3", 
"base_url":"http://127.0.0.1:57073", 
"node_name":"127.0.0.1:57073_", "state":"active"},   
"core_node3":{ "core":"collMinRf_1x3_shard1_replica2", 
"base_url":"http://127.0.0.1:57098", 
"node_name":"127.0.0.1:57098_", "state":"active"}}}}, 
"router":{"name":"compositeId"}, "maxShardsPerNode":"1", 
"autoAddReplicas":"false"}}

Stack Trace:
java.lang.AssertionError: Didn't see all replicas for shard shard1 in c8n_1x2 
come up within 3 ms! ClusterState: {
  "collection1":{
"replicationFactor":"1",
"shards":{
  "shard1":{
"range":"8000-",
"state":"active",
"replicas":{"core_node2":{
"core":"collection1",
"base_url":"http://127.0.0.1:57086",
"node_name":"127.0.0.1:57086_",
"state":"active",
"leader":"true"}}},
  "shard2":{
"range":"0-7fff",
"state":"active",
"replicas":{
  "core_node1":{
"core":"collection1",
"base_url":"http://127.0.0.1:57073",
"node_name":"127.0.0.1:57073_",
"state":"active",
"leader":"true"},
  "core_node3":{
"core":"collection1",
"base_url":"http://127.0.0.1:57098",
"node_name":"127.0.0.1:57098_",
"state":"active"}}}},
"router":{"name":"compositeId"},
"maxShardsPerNode":"1",
"autoAddReplicas":"false",
"autoCreated":"true"},
  "control_collection":{
"replicationFactor":"1",
"shards":{"shard1":{
"range":"8000-7fff",
"state":"active",
"replicas":{"core_node1":{
"core":"collection1",
"base_url":"http://127.0.0.1:57066",
"node_name":"127.0.0.1:57066_",
"state":"active",
"leader":"true"}}}},
"router":{"name":"compositeId"},
"maxShardsPerNode":"1",
"autoAddReplicas":"false",
"autoCreated":"true"},
  

[jira] [Commented] (SOLR-8241) Evaluate W-TinyLfu cache

2015-12-22 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8241?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15069229#comment-15069229
 ] 

Shawn Heisey commented on SOLR-8241:


ARC was a cache type that I had read about when I went looking for something 
better than LRU.  If I had known the idea was patented, I never would have 
created an issue for it and would have gone straight to LFU.

If I ever find some time, I will work on SOLR-3393.  I haven't looked at how 
W-TinyLfu works or whether it would be a good alternative.  I think there are a 
few things to consider: how the speed compares to the code I cobbled together 
on SOLR-3393, how difficult it is to incorporate and debug, and whether any 
significant library dependencies are added.  It looks like you've used the 
Apache License, so there are no conflicts there.


> Evaluate W-TinyLfu cache
> 
>
> Key: SOLR-8241
> URL: https://issues.apache.org/jira/browse/SOLR-8241
> Project: Solr
>  Issue Type: Wish
>  Components: search
>Reporter: Ben Manes
>Priority: Minor
>
> SOLR-2906 introduced an LFU cache and in-progress SOLR-3393 makes it O(1). 
> The discussions seem to indicate that the higher hit rate (vs LRU) is offset 
> by the slower performance of the implementation. An original goal appeared to 
> be to introduce ARC, a patented algorithm that uses ghost entries to retain 
> history information.
> My analysis of Window TinyLfu indicates that it may be a better option. It 
> uses a frequency sketch to compactly estimate an entry's popularity. It uses 
> LRU to capture recency and operates in O(1) time. When using available 
> academic traces the policy provides a near optimal hit rate regardless of the 
> workload.
> I'm getting ready to release the policy in Caffeine, which Solr already has a 
> dependency on. But, the code is fairly straightforward and a port into Solr's 
> caches instead is a pragmatic alternative. More interesting is what the 
> impact would be in Solr's workloads and feedback on the policy's design.
> https://github.com/ben-manes/caffeine/wiki/Efficiency






[jira] [Commented] (SOLR-8241) Evaluate W-TinyLfu cache

2015-12-22 Thread Ben Manes (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8241?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15069335#comment-15069335
 ] 

Ben Manes commented on SOLR-8241:
-

[Benchmarks|https://github.com/ben-manes/caffeine/wiki/Benchmarks] of Caffeine 
show that the cache is ~33% as fast as an unbounded ConcurrentHashMap. Since an 
earlier version is already a dependency, the easiest proof-of-concept would be 
to use an adapter into a Solr 
[Cache|https://github.com/apache/lucene-solr/blob/trunk/solr/solrj/src/java/org/apache/solr/common/util/Cache.java].
 If the results are attractive, the next decision can be whether to use 
Caffeine or to incorporate its ideas into a Solr cache instead.

LRU and LFU only retain information about the current working set. That turns 
out to be a limitation: by capturing more history, a significantly better 
prediction (and hit rate) can be achieved. How that history is stored and used 
is where many newer policies differ (ARC, LIRS, 2Q, etc.). Regardless, they 
outperform LRU / LFU, sometimes by very wide margins, which makes choosing one 
very attractive. In the case of TinyLFU, it is very easy to adapt onto an 
existing policy because it works by filtering (admission) rather than by 
organizing the order of exit (eviction).

The [paper|http://arxiv.org/pdf/1512.00727.pdf] is a bit long, but a good read. 
The simulation code is very simple, though Caffeine's version isn't, since it 
tackles the concurrency aspects as well.

> Evaluate W-TinyLfu cache
> 
>
> Key: SOLR-8241
> URL: https://issues.apache.org/jira/browse/SOLR-8241
> Project: Solr
>  Issue Type: Wish
>  Components: search
>Reporter: Ben Manes
>Priority: Minor
>
> SOLR-2906 introduced an LFU cache and in-progress SOLR-3393 makes it O(1). 
> The discussions seem to indicate that the higher hit rate (vs LRU) is offset 
> by the slower performance of the implementation. An original goal appeared to 
> be to introduce ARC, a patented algorithm that uses ghost entries to retain 
> history information.
> My analysis of Window TinyLfu indicates that it may be a better option. It 
> uses a frequency sketch to compactly estimate an entry's popularity. It uses 
> LRU to capture recency and operate in O(1) time. When using available 
> academic traces the policy provides a near optimal hit rate regardless of the 
> workload.
> I'm getting ready to release the policy in Caffeine, which Solr already has a 
> dependency on. But, the code is fairly straightforward and a port into Solr's 
> caches instead is a pragmatic alternative. More interesting is what the 
> impact would be in Solr's workloads and feedback on the policy's design.
> https://github.com/ben-manes/caffeine/wiki/Efficiency






[jira] [Commented] (SOLR-8454) Improve logging by ZkStateReader and clean up dead code

2015-12-22 Thread Anshum Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8454?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15069211#comment-15069211
 ] 

Anshum Gupta commented on SOLR-8454:


Thanks, Shai.
It seems you missed the change log entry, though. I'll add that.

> Improve logging by ZkStateReader and clean up dead code
> ---
>
> Key: SOLR-8454
> URL: https://issues.apache.org/jira/browse/SOLR-8454
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrCloud
>Reporter: Shai Erera
>Assignee: Shai Erera
>Priority: Minor
> Fix For: 5.5, Trunk
>
> Attachments: SOLR-8454.patch, SOLR-8454.patch, SOLR-8454.patch, 
> SOLR-8454.patch
>
>
> Improve logging output by ZkStateReader, by adding the following:
> * Use LOG.foo() with parameters properly (i.e. not concatenating strings w/ +)
> * Surround parameters with [], to help readability, especially w/ empty values
> * Add missing string messages, where I felt a message will clarify
> * Convert some try-catch to a try-multicatch and improve output log message
> Also, clean up dead code.






[jira] [Commented] (SOLR-8454) Improve logging by ZkStateReader and clean up dead code

2015-12-22 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8454?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15069217#comment-15069217
 ] 

ASF subversion and git services commented on SOLR-8454:
---

Commit 1721492 from [~anshumg] in branch 'dev/trunk'
[ https://svn.apache.org/r1721492 ]

SOLR-8454: Adding change log entry

> Improve logging by ZkStateReader and clean up dead code
> ---
>
> Key: SOLR-8454
> URL: https://issues.apache.org/jira/browse/SOLR-8454
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrCloud
>Reporter: Shai Erera
>Assignee: Shai Erera
>Priority: Minor
> Fix For: 5.5, Trunk
>
> Attachments: SOLR-8454.patch, SOLR-8454.patch, SOLR-8454.patch, 
> SOLR-8454.patch
>
>
> Improve logging output by ZkStateReader, by adding the following:
> * Use LOG.foo() with parameters properly (i.e. not concatenating strings w/ +)
> * Surround parameters with [], to help readability, especially w/ empty values
> * Add missing string messages, where I felt a message will clarify
> * Convert some try-catch to a try-multicatch and improve output log message
> Also, clean up dead code.






[jira] [Commented] (SOLR-8454) Improve logging by ZkStateReader and clean up dead code

2015-12-22 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8454?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15069220#comment-15069220
 ] 

ASF subversion and git services commented on SOLR-8454:
---

Commit 1721495 from [~anshumg] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1721495 ]

SOLR-8454: Adding change log entry (merge from trunk)

> Improve logging by ZkStateReader and clean up dead code
> ---
>
> Key: SOLR-8454
> URL: https://issues.apache.org/jira/browse/SOLR-8454
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrCloud
>Reporter: Shai Erera
>Assignee: Shai Erera
>Priority: Minor
> Fix For: 5.5, Trunk
>
> Attachments: SOLR-8454.patch, SOLR-8454.patch, SOLR-8454.patch, 
> SOLR-8454.patch
>
>
> Improve logging output by ZkStateReader, by adding the following:
> * Use LOG.foo() with parameters properly (i.e. not concatenating strings w/ +)
> * Surround parameters with [], to help readability, especially w/ empty values
> * Add missing string messages, where I felt a message will clarify
> * Convert some try-catch to a try-multicatch and improve output log message
> Also, clean up dead code.






[jira] [Comment Edited] (SOLR-8326) PKIAuthenticationPlugin doesn't report any errors in case of stale or wrong keys and returns garbage

2015-12-22 Thread Nirmala Venkatraman (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8326?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15068290#comment-15068290
 ] 

Nirmala Venkatraman edited comment on SOLR-8326 at 12/22/15 3:57 PM:
-

Anshum/Noble,
I am still seeing intermittent PKIAuth invalid key errors in solr.log while 
indexing is running against our SolrCloud with basic auth enabled and with the 
patches for SOLR-8326:

2015-12-22 14:39:42.685 ERROR (qtp201069753-644) [c:collection52 s:shard1 
r:core_node2 x:collection52_shard1_replica2] o.a.s.s.PKIAuthenticationPlugin 
Invalid key
2015-12-22 14:39:42.706 ERROR (qtp201069753-1121) [   ] 
o.a.s.s.PKIAuthenticationPlugin Invalid key
2015-12-22 14:39:42.705 ERROR (qtp201069753-481) [   ] 
o.a.s.s.PKIAuthenticationPlugin Invalid key
2015-12-22 14:39:42.698 ERROR (qtp201069753-1224) [c:collection52 s:shard1 
r:core_node2 x:collection52_shard1_replica2] o.a.s.s.PKIAuthenticationPlugin 
Invalid key
2015-12-22 14:39:42.697 ERROR (qtp201069753-577) [c:collection17 s:shard1 
r:core_node2 x:collection17_shard1_replica2] o.a.s.s.PKIAuthenticationPlugin 
Invalid key
2015-12-22 14:39:42.691 ERROR (qtp201069753-1062) [c:collection52 s:shard1 
r:core_node2 x:collection52_shard1_replica2] o.a.s.s.PKIAuthenticationPlugin 
Invalid key
2015-12-22 14:39:42.685 ERROR (qtp201069753-1063) [c:collection27 s:shard1 
r:core_node1 x:collection27_shard1_replica2] o.a.s.s.PKIAuthenticationPlugin 
Invalid key
2015-12-22 15:04:10.247 ERROR (qtp201069753-1045) [c:collection23 s:shard1 
r:core_node1 x:collection23_shard1_replica1] o.a.s.s.PKIAuthenticationPlugin 
Invalid key

In the access/request logs on the same Solr server, I see update requests 
coming from other Solr servers returning a 401:
9.32.182.53 - - [22/Dec/2015:14:39:42 +] "POST 
/solr/collection42_shard1_replica2/update?update.distrib=TOLEADER=http%3A%2F%2Fsgdsolar1.swg.usma.ibm.com%3A8983%2Fsolr%2Fcollection42_shard1_replica1%2F=javabin=2
 HTTP/1.1" 401 386
9.32.182.60 - - [22/Dec/2015:14:39:42 +] "POST 
/solr/collection40/update?_route_=Q049c2dkbWFpbDI5L089U0dfVVMx20106052!=true
 HTTP/1.1" 401 370
9.32.179.190 - - [22/Dec/2015:14:39:42 +] "GET 
/solr/collection59/get?_route_=Q049c2dkbWFpbDI5L089U0dfVVMx20106072!=Q049c2dkbWFpbDI5L089U0dfVVMx20106072!354405B096A7252500257DF2006B4EBB,Q049c2dkbWFpbDI5L089U0dfVVMx20106072!E05CD420388D090200257DF2006B4F0C,Q049c2dkbWFpbDI5L089U0dfVVMx20106072!0C64A415C05985FD00257DF2006B4EE5,Q049c2dkbWFpbDI5L089U0dfVVMx20106072!CB209D64E6CFD95700257DF2006B4F58,Q049c2dkbWFpbDI5L089U0dfVVMx20106072!416F4C73022EFA1200257DF2006B4F33=unid,sequence,folderunid=xml=10
 HTTP/1.1" 401 367
9.32.182.53 - - [22/Dec/2015:14:39:42 +] "POST 
/solr/collection40/update?_route_=Q049c2dkbWFpbDI2L089U0dfVVMx20105988!=true
 HTTP/1.1" 401 370
9.32.182.53 - - [22/Dec/2015:14:39:42 +] "POST 
/solr/collection29_shard1_replica1/update?update.distrib=TOLEADER=http%3A%2F%2Fsgdsolar1.swg.usma.ibm.com%3A8983%2Fsolr%2Fcollection29_shard1_replica2%2F=javabin=2
 HTTP/1.1" 401 386
9.32.182.53 - - [22/Dec/2015:14:39:42 +] "POST 
/solr/collection9_shard1_replica1/update?update.distrib=TOLEADER=http%3A%2F%2Fsgdsolar1.swg.usma.ibm.com%3A8983%2Fsolr%2Fcollection9_shard1_replica2%2F=javabin=2
 HTTP/1.1" 401 385
9.32.182.53 - - [22/Dec/2015:14:39:42 +] "POST 
/solr/collection52_shard1_replica2/update?update.distrib=TOLEADER=http%3A%2F%2Fsgdsolar1.swg.usma.ibm.com%3A8983%2Fsolr%2Fcollection52_shard1_replica1%2F=javabin=2
 HTTP/1.1" 401 386
9.32.179.191 - - [22/Dec/2015:15:04:10 +] "POST 
/solr/collection59/update?_route_=Q049c2dkbWFpbDI4L089U0dfVVMx20106007!=true
 HTTP/1.1" 401 370

Should this be treated as a new bug/issue?



[jira] [Commented] (LUCENE-6946) SortField.equals does not take the missing value into account

2015-12-22 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6946?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15068341#comment-15068341
 ] 

ASF subversion and git services commented on LUCENE-6946:
-

Commit 1721422 from [~jpountz] in branch 'dev/trunk'
[ https://svn.apache.org/r1721422 ]

LUCENE-6946: SortField.equals now takes the missing value into account.

> SortField.equals does not take the missing value into account
> -
>
> Key: LUCENE-6946
> URL: https://issues.apache.org/jira/browse/LUCENE-6946
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Adrien Grand
>Assignee: Adrien Grand
>Priority: Minor
> Fix For: 5.5, Trunk
>
> Attachments: LUCENE-6946.patch
>
>
> SortField.equals does not check whether both objects have the same missing 
> value.






[jira] [Updated] (SOLR-8436) Realtime-get should support filters

2015-12-22 Thread Yonik Seeley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8436?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yonik Seeley updated SOLR-8436:
---
Attachment: SOLR-8436.patch

Here's a new patch with an efficient filtering implementation... it goes 
straight to the segment where the ID was found and then tries to advance the 
filter to that ID only.
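The advance-to-one-docid trick described above can be sketched as follows. The iterator class here is a stand-in for Lucene's {{DocIdSetIterator}} (whose {{advance(target)}} returns the first doc >= target); it is illustrative only, not the actual patch code.

```java
// Minimal sketch of the filtering approach for realtime-get: given the
// docid where the requested ID was found, advance the filter's iterator
// to that docid only and check for an exact match, instead of walking
// the whole filter.
public class RtgFilterCheck {
    static final int NO_MORE_DOCS = Integer.MAX_VALUE;

    // A trivial iterator over a sorted set of doc ids, mirroring the
    // advance(int) semantics of Lucene's DocIdSetIterator.
    static class SortedDocIdIterator {
        private final int[] docs;
        private int pos = 0;
        SortedDocIdIterator(int... docs) { this.docs = docs; }

        // Returns the first doc id >= target, or NO_MORE_DOCS.
        int advance(int target) {
            while (pos < docs.length && docs[pos] < target) pos++;
            return pos < docs.length ? docs[pos] : NO_MORE_DOCS;
        }
    }

    // The document passes the filter iff advancing lands exactly on it.
    static boolean matches(SortedDocIdIterator filter, int docId) {
        return filter.advance(docId) == docId;
    }
}
```

The cost is a single advance per filter rather than a full iteration, which is why going straight to the segment holding the ID pays off.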

> Realtime-get should support filters
> ---
>
> Key: SOLR-8436
> URL: https://issues.apache.org/jira/browse/SOLR-8436
> Project: Solr
>  Issue Type: Improvement
>Affects Versions: 5.4
>Reporter: Yonik Seeley
>Assignee: Yonik Seeley
> Attachments: SOLR-8436.patch, SOLR-8436.patch
>
>
> RTG currently ignores filters.  There are probably other use-cases for RTG 
> and filters, but one that comes to mind is security filters.






[jira] [Commented] (LUCENE-6939) BlendedInfixSuggester to support exponential reciprocal BlenderType

2015-12-22 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6939?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15067872#comment-15067872
 ] 

Michael McCandless commented on LUCENE-6939:


Thanks [~arcadius], patch looks good!  I'll commit soon ... but looks like svn 
is down at the moment.

> BlendedInfixSuggester to support exponential reciprocal BlenderType
> ---
>
> Key: LUCENE-6939
> URL: https://issues.apache.org/jira/browse/LUCENE-6939
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/spellchecker
>Affects Versions: 5.4
>Reporter: Arcadius Ahouansou
>Priority: Minor
>  Labels: suggester
> Attachments: LUCENE-6939.patch
>
>
> The orignal BlendedInfixSuggester introduced in LUCENE-5354 has support for:
> - {{BlenderType.POSITION_LINEAR}} and 
> - {{BlenderType.POSITION_RECIPROCAL}} .
> These are used to score documents based on the position of the matched token 
> i.e the closer is the matched term to the beginning, the higher score you get.
> In some use cases, we need a more aggressive scoring based on the position.
> That's where the exponential reciprocal comes into play 
> i.e 
> {{coef = 1/Math.pow(position+1, exponent)}}
> where the {{exponent}} is a configurable variable.
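The coefficient formulas above can be compared directly. The method names below are illustrative and do not mirror BlendedInfixSuggester's internals; only the {{coef = 1/Math.pow(position+1, exponent)}} formula comes from the issue description.

```java
// Position-based blending coefficients: the existing reciprocal mode and
// the proposed exponential reciprocal, which decays more aggressively as
// the matched token sits further from the beginning.
public class BlenderCoefficients {
    // POSITION_RECIPROCAL: gentle decay with the matched token's position.
    static double reciprocal(int position) {
        return 1.0 / (position + 1);
    }

    // Proposed exponential reciprocal: a configurable exponent controls
    // how quickly later matches are demoted (exponent = 1 reduces to the
    // plain reciprocal).
    static double exponentialReciprocal(int position, double exponent) {
        return 1.0 / Math.pow(position + 1, exponent);
    }
}
```

With exponent 2, a match at position 3 scores 1/16 instead of 1/4, so first-token matches dominate far more strongly.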






[jira] [Updated] (SOLR-8454) Improve logging by ZkStateReader and clean up dead code

2015-12-22 Thread Anshum Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8454?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anshum Gupta updated SOLR-8454:
---
Summary: Improve logging by ZkStateReader and clean up dead code  (was: 
Improve logging by ZkStateReader)

> Improve logging by ZkStateReader and clean up dead code
> ---
>
> Key: SOLR-8454
> URL: https://issues.apache.org/jira/browse/SOLR-8454
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrCloud
>Reporter: Shai Erera
>Assignee: Shai Erera
>Priority: Minor
> Fix For: 5.5, Trunk
>
> Attachments: SOLR-8454.patch, SOLR-8454.patch, SOLR-8454.patch, 
> SOLR-8454.patch
>
>
> Improve logging output by ZkStateReader, by adding the following:
> * Use LOG.foo() with parameters properly (i.e. not concatenating strings w/ +)
> * Surround parameters with [], to help readability, especially w/ empty values
> * Add missing string messages, where I felt a message will clarify
> * Convert some try-catch to a try-multicatch and improve output log message






[jira] [Updated] (SOLR-8454) Improve logging by ZkStateReader and clean up dead code

2015-12-22 Thread Anshum Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8454?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anshum Gupta updated SOLR-8454:
---
Description: 
Improve logging output by ZkStateReader, by adding the following:

* Use LOG.foo() with parameters properly (i.e. not concatenating strings w/ +)
* Surround parameters with [], to help readability, especially w/ empty values
* Add missing string messages, where I felt a message will clarify
* Convert some try-catch to a try-multicatch and improve output log message

Also, clean up dead code.

  was:
Improve logging output by ZkStateReader, by adding the following:

* Use LOG.foo() with parameters properly (i.e. not concatenating strings w/ +)
* Surround parameters with [], to help readability, especially w/ empty values
* Add missing string messages, where I felt a message will clarify
* Convert some try-catch to a try-multicatch and improve output log message


> Improve logging by ZkStateReader and clean up dead code
> ---
>
> Key: SOLR-8454
> URL: https://issues.apache.org/jira/browse/SOLR-8454
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrCloud
>Reporter: Shai Erera
>Assignee: Shai Erera
>Priority: Minor
> Fix For: 5.5, Trunk
>
> Attachments: SOLR-8454.patch, SOLR-8454.patch, SOLR-8454.patch, 
> SOLR-8454.patch
>
>
> Improve logging output by ZkStateReader, by adding the following:
> * Use LOG.foo() with parameters properly (i.e. not concatenating strings w/ +)
> * Surround parameters with [], to help readability, especially w/ empty values
> * Add missing string messages, where I felt a message will clarify
> * Convert some try-catch to a try-multicatch and improve output log message
> Also, clean up dead code.






[jira] [Commented] (LUCENE-6939) BlendedInfixSuggester to support exponential reciprocal BlenderType

2015-12-22 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6939?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15067916#comment-15067916
 ] 

ASF subversion and git services commented on LUCENE-6939:
-

Commit 1721330 from [~mikemccand] in branch 'dev/trunk'
[ https://svn.apache.org/r1721330 ]

LUCENE-6939: add exponential reciprocal scoring mode to BlendedInfixSuggester

> BlendedInfixSuggester to support exponential reciprocal BlenderType
> ---
>
> Key: LUCENE-6939
> URL: https://issues.apache.org/jira/browse/LUCENE-6939
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/spellchecker
>Affects Versions: 5.4
>Reporter: Arcadius Ahouansou
>Priority: Minor
>  Labels: suggester
> Attachments: LUCENE-6939.patch
>
>
> The orignal BlendedInfixSuggester introduced in LUCENE-5354 has support for:
> - {{BlenderType.POSITION_LINEAR}} and 
> - {{BlenderType.POSITION_RECIPROCAL}} .
> These are used to score documents based on the position of the matched token 
> i.e the closer is the matched term to the beginning, the higher score you get.
> In some use cases, we need a more aggressive scoring based on the position.
> That's where the exponential reciprocal comes into play 
> i.e 
> {{coef = 1/Math.pow(position+1, exponent)}}
> where the {{exponent}} is a configurable variable.






[jira] [Resolved] (LUCENE-6939) BlendedInfixSuggester to support exponential reciprocal BlenderType

2015-12-22 Thread Michael McCandless (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6939?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael McCandless resolved LUCENE-6939.

   Resolution: Fixed
Fix Version/s: Trunk
   5.5

Thanks [~arcadius]!

> BlendedInfixSuggester to support exponential reciprocal BlenderType
> ---
>
> Key: LUCENE-6939
> URL: https://issues.apache.org/jira/browse/LUCENE-6939
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/spellchecker
>Affects Versions: 5.4
>Reporter: Arcadius Ahouansou
>Priority: Minor
>  Labels: suggester
> Fix For: 5.5, Trunk
>
> Attachments: LUCENE-6939.patch
>
>
> The orignal BlendedInfixSuggester introduced in LUCENE-5354 has support for:
> - {{BlenderType.POSITION_LINEAR}} and 
> - {{BlenderType.POSITION_RECIPROCAL}} .
> These are used to score documents based on the position of the matched token 
> i.e the closer is the matched term to the beginning, the higher score you get.
> In some use cases, we need a more aggressive scoring based on the position.
> That's where the exponential reciprocal comes into play 
> i.e 
> {{coef = 1/Math.pow(position+1, exponent)}}
> where the {{exponent}} is a configurable variable.






[jira] [Commented] (LUCENE-6939) BlendedInfixSuggester to support exponential reciprocal BlenderType

2015-12-22 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6939?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15067917#comment-15067917
 ] 

ASF subversion and git services commented on LUCENE-6939:
-

Commit 1721331 from [~mikemccand] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1721331 ]

LUCENE-6939: add exponential reciprocal scoring mode to BlendedInfixSuggester

> BlendedInfixSuggester to support exponential reciprocal BlenderType
> ---
>
> Key: LUCENE-6939
> URL: https://issues.apache.org/jira/browse/LUCENE-6939
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/spellchecker
>Affects Versions: 5.4
>Reporter: Arcadius Ahouansou
>Priority: Minor
>  Labels: suggester
> Fix For: 5.5, Trunk
>
> Attachments: LUCENE-6939.patch
>
>
> The orignal BlendedInfixSuggester introduced in LUCENE-5354 has support for:
> - {{BlenderType.POSITION_LINEAR}} and 
> - {{BlenderType.POSITION_RECIPROCAL}} .
> These are used to score documents based on the position of the matched token 
> i.e the closer is the matched term to the beginning, the higher score you get.
> In some use cases, we need a more aggressive scoring based on the position.
> That's where the exponential reciprocal comes into play 
> i.e 
> {{coef = 1/Math.pow(position+1, exponent)}}
> where the {{exponent}} is a configurable variable.






[jira] [Updated] (SOLR-8423) DELETESHARD should cleanup the instance and data directory, like DELETEREPLICA

2015-12-22 Thread Anshum Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8423?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anshum Gupta updated SOLR-8423:
---
Attachment: SOLR-8423.patch

Patch that uses deleteDataDir, deleteInstanceDir and defaults them to true.
It now also supports setting those to false for DELETESHARD and DELETEREPLICA.
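As a sketch of how a client might exercise the new flags, the snippet below builds a Collections API DELETESHARD URL. The {{deleteDataDir}}/{{deleteInstanceDir}} parameter names come from the comment above (defaulting to true); the host, collection, and shard values are placeholders, and whether the patch accepts them exactly this way is an assumption.

```java
// Hypothetical client-side construction of a DELETESHARD request that
// opts out of instance-dir cleanup while still removing the data dir.
public class DeleteShardRequest {
    static String buildUrl(String baseUrl, String collection, String shard,
                           boolean deleteDataDir, boolean deleteInstanceDir) {
        return baseUrl + "/admin/collections?action=DELETESHARD"
            + "&collection=" + collection
            + "&shard=" + shard
            + "&deleteDataDir=" + deleteDataDir      // patch default: true
            + "&deleteInstanceDir=" + deleteInstanceDir; // patch default: true
    }
}
```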

> DELETESHARD should cleanup the instance and data directory, like DELETEREPLICA
> --
>
> Key: SOLR-8423
> URL: https://issues.apache.org/jira/browse/SOLR-8423
> Project: Solr
>  Issue Type: Bug
>Reporter: Anshum Gupta
> Attachments: SOLR-8423.patch, SOLR-8423.patch
>
>
> DELETESHARD only cleans up the index directory and not the instance/data 
> directory. DELETEREPLICA on the other hand cleans up the data and instance 
> directory.
> DELETESHARD should clean up the instance and data directory, so that we don't 
> leak disk space on executing the command.
> If we think this would break back-compat, though I don't see why this should 
> not clean up the instance dir, we should at least provide an option to clean 
> up everything and make it default in 6.0.






[jira] [Commented] (LUCENE-6943) Jvm Crashes occassionaly with Lucene 4.6.1

2015-12-22 Thread amit bhengra (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6943?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15067892#comment-15067892
 ] 

amit bhengra commented on LUCENE-6943:
--

I have run with NIOFSDirectory and haven't seen the issue so far, but I have to 
wait longer: in one instance it took around 7 days for the issue to surface, 
and in another it happened within hours.

> Jvm Crashes occassionaly with Lucene 4.6.1
> --
>
> Key: LUCENE-6943
> URL: https://issues.apache.org/jira/browse/LUCENE-6943
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/query/scoring
>Affects Versions: 4.6.1
>Reporter: amit bhengra
> Attachments: hs_err_pid9889.log
>
>
> #
> # A fatal error has been detected by the Java Runtime Environment:
> #
> #  SIGSEGV (0xb) at pc=0x7f5625212cd7, pid=9889, tid=139920130201344
> #
> # JRE version: Java(TM) SE Runtime Environment (7.0_60-b19) (build 
> 1.7.0_60-b19)
> # Java VM: Java HotSpot(TM) 64-Bit Server VM (24.60-b09 mixed mode 
> linux-amd64 )
> # Problematic frame:
> # J 11490 C2 org.apache.lucene.store.ByteBufferIndexInput.readByte()B (126 
> bytes) @ 0x7f5625212cd7 [0x7f5625212c80+0x57]
> #
> # Failed to write core dump. Core dumps have been disabled. To enable core 
> dumping, try "ulimit -c unlimited" before starting Java again
> #
> # If you would like to submit a bug report, please visit:
> #   http://bugreport.sun.com/bugreport/crash.jsp
> #
> Register to memory mapping:
> RAX=0x7f55de053510 is an oop
> {instance class} 
>  - klass: {other class}
> RBX=0x7f549dffc028 is an oop
> org.apache.lucene.codecs.lucene41.Lucene41PostingsReader 
>  - klass: 'org/apache/lucene/codecs/lucene41/Lucene41PostingsReader'
> RCX=0x0004 is an unknown value
> RDX=0x0080 is an unknown value
> RSP=0x7f41b1a7f640 is pointing into the stack for thread: 
> 0x7f48f81a4000
> RBP=0x7f4b1bff3630 is an oop
> java.nio.DirectByteBufferR 
>  - klass: 'java/nio/DirectByteBufferR'
> RSI=0x7f4b1bff35a8 is an oop
> org.apache.lucene.store.MMapDirectory$MMapIndexInput 
>  - klass: 'org/apache/lucene/store/MMapDirectory$MMapIndexInput'
> RDI=0x237c532a is an unknown value
> R8 =0x237c4da3 is an unknown value
> R9 =0x7f4b1bff35a8 is an oop
> org.apache.lucene.store.MMapDirectory$MMapIndexInput 
>  - klass: 'org/apache/lucene/store/MMapDirectory$MMapIndexInput'
> R10=0x7f3a8f98a000 is an unknown value
> R11=0x237c4da3 is an unknown value
> R12=0x7f41b1a81f30 is pointing into the stack for thread: 
> 0x7f48f81a4000
> R13=0x0093 is an unknown value
> R14=0x431f is an unknown value
> R15=0x7f48f81a4000 is a thread
> Stack: [0x7f41b1985000,0x7f41b1a86000],  sp=0x7f41b1a7f640,  free 
> space=1001k
> Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native 
> code)
> J 11490 C2 org.apache.lucene.store.ByteBufferIndexInput.readByte()B (126 
> bytes) @ 0x7f5625212cd7 [0x7f5625212c80+0x57]
> J 4940 C2 
> org.apache.lucene.codecs.lucene41.Lucene41PostingsReader$BlockDocsAndPositionsEnum.nextPosition()I
>  (118 bytes) @ 0x7f5624515cb4 [0x7f5624515980+0x334]
> J 10578 C2 org.apache.lucene.search.ExactPhraseScorer.phraseFreq()I (624 
> bytes) @ 0x7f56256de588 [0x7f56256de4e0+0xa8]
> J 10629 C2 org.apache.lucene.search.ExactPhraseScorer.advance(I)I (152 bytes) 
> @ 0x7f5625729d84 [0x7f5625729ba0+0x1e4]
> J 10433 C2 org.apache.lucene.search.MinShouldMatchSumScorer.advance(I)I (113 
> bytes) @ 0x7f5625653c34 [0x7f5625653ae0+0x154]
> J 5630 C2 org.apache.lucene.search.BooleanScorer2.advance(I)I (14 bytes) @ 
> 0x7f5624642a10 [0x7f5624642820+0x1f0]
> J 5826 C2 org.apache.lucene.search.DisjunctionScorer.advance(I)I (87 bytes) @ 
> 0x7f56246f140c [0x7f56246f13c0+0x4c]
> J 8801 C2 
> org.apache.lucene.search.join.FkToChildBlockJoinQuery$FkToChildBlockJoinScorer.advance(I)I
>  (284 bytes) @ 0x7f56251274c0 [0x7f5625127440+0x80]
> J 5630 C2 org.apache.lucene.search.BooleanScorer2.advance(I)I (14 bytes) @ 
> 0x7f56246429fc [0x7f5624642820+0x1dc]
> J 4797 C2 
> org.apache.lucene.search.FilteredQuery$LeapFrogScorer.score(Lorg/apache/lucene/search/Collector;)V
>  (91 bytes) @ 0x7f56244e9ccc [0x7f56244e9c40+0x8c]
> J 4613 C2 
> org.apache.lucene.search.IndexSearcher.search(Ljava/util/List;Lorg/apache/lucene/search/Weight;Lorg/apache/lucene/search/Collector;)V
>  (93 bytes) @ 0x7f562446cbec [0x7f562446ca80+0x16c]
> J 6159 C2 
> org.apache.solr.search.SolrIndexSearcher.getDocListNC(Lorg/apache/solr/search/SolrIndexSearcher$QueryResult;Lorg/apache/solr/search/SolrIndexSearcher$QueryCommand;)V
>  (708 bytes) @ 0x7f562486cc30 [0x7f562486c9a0+0x290]
> J 11811 C2 
> 

[jira] [Commented] (LUCENE-6943) Jvm Crashes occassionaly with Lucene 4.6.1

2015-12-22 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6943?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15067860#comment-15067860
 ] 

Michael McCandless commented on LUCENE-6943:


> can there be an issue with Solr in this case?

Hmm maybe it's possible there were Solr bugs in 4.6.1 around closing searchers 
that are still in use by active searches (4.6.1 is quite old by now) ... did 
switching to {{NIOFSDirectory}} avoid the crash?

> Jvm Crashes occassionaly with Lucene 4.6.1
> --
>
> Key: LUCENE-6943
> URL: https://issues.apache.org/jira/browse/LUCENE-6943
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/query/scoring
>Affects Versions: 4.6.1
>Reporter: amit bhengra
> Attachments: hs_err_pid9889.log
>
>
> #
> # A fatal error has been detected by the Java Runtime Environment:
> #
> #  SIGSEGV (0xb) at pc=0x7f5625212cd7, pid=9889, tid=139920130201344
> #
> # JRE version: Java(TM) SE Runtime Environment (7.0_60-b19) (build 
> 1.7.0_60-b19)
> # Java VM: Java HotSpot(TM) 64-Bit Server VM (24.60-b09 mixed mode 
> linux-amd64 )
> # Problematic frame:
> # J 11490 C2 org.apache.lucene.store.ByteBufferIndexInput.readByte()B (126 
> bytes) @ 0x7f5625212cd7 [0x7f5625212c80+0x57]
> #
> # Failed to write core dump. Core dumps have been disabled. To enable core 
> dumping, try "ulimit -c unlimited" before starting Java again
> #
> # If you would like to submit a bug report, please visit:
> #   http://bugreport.sun.com/bugreport/crash.jsp
> #
> Register to memory mapping:
> RAX=0x7f55de053510 is an oop
> {instance class} 
>  - klass: {other class}
> RBX=0x7f549dffc028 is an oop
> org.apache.lucene.codecs.lucene41.Lucene41PostingsReader 
>  - klass: 'org/apache/lucene/codecs/lucene41/Lucene41PostingsReader'
> RCX=0x0004 is an unknown value
> RDX=0x0080 is an unknown value
> RSP=0x7f41b1a7f640 is pointing into the stack for thread: 
> 0x7f48f81a4000
> RBP=0x7f4b1bff3630 is an oop
> java.nio.DirectByteBufferR 
>  - klass: 'java/nio/DirectByteBufferR'
> RSI=0x7f4b1bff35a8 is an oop
> org.apache.lucene.store.MMapDirectory$MMapIndexInput 
>  - klass: 'org/apache/lucene/store/MMapDirectory$MMapIndexInput'
> RDI=0x237c532a is an unknown value
> R8 =0x237c4da3 is an unknown value
> R9 =0x7f4b1bff35a8 is an oop
> org.apache.lucene.store.MMapDirectory$MMapIndexInput 
>  - klass: 'org/apache/lucene/store/MMapDirectory$MMapIndexInput'
> R10=0x7f3a8f98a000 is an unknown value
> R11=0x237c4da3 is an unknown value
> R12=0x7f41b1a81f30 is pointing into the stack for thread: 
> 0x7f48f81a4000
> R13=0x0093 is an unknown value
> R14=0x431f is an unknown value
> R15=0x7f48f81a4000 is a thread
> Stack: [0x7f41b1985000,0x7f41b1a86000],  sp=0x7f41b1a7f640,  free 
> space=1001k
> Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native 
> code)
> J 11490 C2 org.apache.lucene.store.ByteBufferIndexInput.readByte()B (126 
> bytes) @ 0x7f5625212cd7 [0x7f5625212c80+0x57]
> J 4940 C2 
> org.apache.lucene.codecs.lucene41.Lucene41PostingsReader$BlockDocsAndPositionsEnum.nextPosition()I
>  (118 bytes) @ 0x7f5624515cb4 [0x7f5624515980+0x334]
> J 10578 C2 org.apache.lucene.search.ExactPhraseScorer.phraseFreq()I (624 
> bytes) @ 0x7f56256de588 [0x7f56256de4e0+0xa8]
> J 10629 C2 org.apache.lucene.search.ExactPhraseScorer.advance(I)I (152 bytes) 
> @ 0x7f5625729d84 [0x7f5625729ba0+0x1e4]
> J 10433 C2 org.apache.lucene.search.MinShouldMatchSumScorer.advance(I)I (113 
> bytes) @ 0x7f5625653c34 [0x7f5625653ae0+0x154]
> J 5630 C2 org.apache.lucene.search.BooleanScorer2.advance(I)I (14 bytes) @ 
> 0x7f5624642a10 [0x7f5624642820+0x1f0]
> J 5826 C2 org.apache.lucene.search.DisjunctionScorer.advance(I)I (87 bytes) @ 
> 0x7f56246f140c [0x7f56246f13c0+0x4c]
> J 8801 C2 
> org.apache.lucene.search.join.FkToChildBlockJoinQuery$FkToChildBlockJoinScorer.advance(I)I
>  (284 bytes) @ 0x7f56251274c0 [0x7f5625127440+0x80]
> J 5630 C2 org.apache.lucene.search.BooleanScorer2.advance(I)I (14 bytes) @ 
> 0x7f56246429fc [0x7f5624642820+0x1dc]
> J 4797 C2 
> org.apache.lucene.search.FilteredQuery$LeapFrogScorer.score(Lorg/apache/lucene/search/Collector;)V
>  (91 bytes) @ 0x7f56244e9ccc [0x7f56244e9c40+0x8c]
> J 4613 C2 
> org.apache.lucene.search.IndexSearcher.search(Ljava/util/List;Lorg/apache/lucene/search/Weight;Lorg/apache/lucene/search/Collector;)V
>  (93 bytes) @ 0x7f562446cbec [0x7f562446ca80+0x16c]
> J 6159 C2 
> org.apache.solr.search.SolrIndexSearcher.getDocListNC(Lorg/apache/solr/search/SolrIndexSearcher$QueryResult;Lorg/apache/solr/search/SolrIndexSearcher$QueryCommand;)V
>  (708 bytes) @ 0x7f562486cc30 

[JENKINS] Lucene-Solr-trunk-Windows (64bit/jdk1.8.0_66) - Build # 5487 - Failure!

2015-12-22 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Windows/5487/
Java: 64bit/jdk1.8.0_66 -XX:-UseCompressedOops -XX:+UseSerialGC

3 tests failed.
FAILED:  org.apache.solr.client.solrj.impl.CloudSolrClientTest.test

Error Message:


Stack Trace:
java.lang.AssertionError
at 
__randomizedtesting.SeedInfo.seed([2D0F23EB035FBAF9:A55B1C31ADA3D701]:0)
at org.junit.Assert.fail(Assert.java:92)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertTrue(Assert.java:54)
at 
org.apache.solr.client.solrj.impl.CloudSolrClientTest.allTests(CloudSolrClientTest.java:273)
at 
org.apache.solr.client.solrj.impl.CloudSolrClientTest.test(CloudSolrClientTest.java:118)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:110)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:965)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:940)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 

[jira] [Updated] (SOLR-8454) Improve logging by ZkStateReader

2015-12-22 Thread Shai Erera (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8454?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shai Erera updated SOLR-8454:
-
Attachment: SOLR-8454.patch

While I was at it, I removed some unused code, thrown exceptions etc. Those are 
minor additions over the previous patch.

> Improve logging by ZkStateReader
> 
>
> Key: SOLR-8454
> URL: https://issues.apache.org/jira/browse/SOLR-8454
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrCloud
>Reporter: Shai Erera
>Assignee: Shai Erera
>Priority: Minor
> Fix For: 5.5, Trunk
>
> Attachments: SOLR-8454.patch, SOLR-8454.patch
>
>
> Improve logging output by ZkStateReader, by adding the following:
> * Use LOG.foo() with parameters properly (i.e. not concatenating strings w/ +)
> * Surround parameters with [], to help readability, especially w/ empty values
> * Add missing string messages, where I felt a message will clarify
> * Convert some try-catch to a try-multicatch and improve output log message



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-8454) Improve logging by ZkStateReader

2015-12-22 Thread Shai Erera (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8454?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shai Erera updated SOLR-8454:
-
Attachment: SOLR-8454.patch

Thanks Anshum, addressed your comment in this patch.

> Improve logging by ZkStateReader
> 
>
> Key: SOLR-8454
> URL: https://issues.apache.org/jira/browse/SOLR-8454
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrCloud
>Reporter: Shai Erera
>Assignee: Shai Erera
>Priority: Minor
> Fix For: 5.5, Trunk
>
> Attachments: SOLR-8454.patch, SOLR-8454.patch, SOLR-8454.patch
>
>
> Improve logging output by ZkStateReader, by adding the following:
> * Use LOG.foo() with parameters properly (i.e. not concatenating strings w/ +)
> * Surround parameters with [], to help readability, especially w/ empty values
> * Add missing string messages, where I felt a message will clarify
> * Convert some try-catch to a try-multicatch and improve output log message






[JENKINS] Lucene-Solr-Tests-5.x-Java7 - Build # 3864 - Failure

2015-12-22 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-5.x-Java7/3864/

3 tests failed.
FAILED:  org.apache.solr.cloud.CollectionsAPIDistributedZkTest.test

Error Message:
Error from server at http://127.0.0.1:42277/awholynewcollection_0: non ok 
status: 500, message:Server Error

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at http://127.0.0.1:42277/awholynewcollection_0: non ok status: 
500, message:Server Error
at 
__randomizedtesting.SeedInfo.seed([4A004922218B990B:C25476F88F77F4F3]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:508)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:241)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:230)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:150)
at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:943)
at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:958)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.waitForNon403or404or503(AbstractFullDistribZkTestBase.java:1754)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testCollectionsAPI(CollectionsAPIDistributedZkTest.java:658)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.test(CollectionsAPIDistributedZkTest.java:160)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:965)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:940)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Commented] (SOLR-8454) Improve logging by ZkStateReader

2015-12-22 Thread Anshum Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8454?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15067732#comment-15067732
 ] 

Anshum Gupta commented on SOLR-8454:


Thanks for doing this. The usage of LOG vs. log is split across the code 
base, and I don't have a strong opinion on it, so that's ok.

I see you've capitalized log messages (i.e., started them with an upper-case 
character) everywhere but here:
{code}
LOG.debug("server older than client {}<{}", collection.getZNodeVersion(), 
version);
{code}

The rest all looks good to me to commit.
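For readers unfamiliar with the convention being discussed, here is a minimal stand-in sketch; the format() helper below is hypothetical and is not the real slf4j API, which performs this substitution internally. It shows how {} placeholders substitute parameters and why wrapping them in literal [] keeps empty values visible:

```java
// Minimal stand-in for slf4j-style "{}" substitution; format() is a
// hypothetical helper, not the real slf4j API. Note how the literal []
// around each placeholder keeps empty values visible in the output.
class LogFormatSketch {

    static String format(String msg, Object... args) {
        StringBuilder sb = new StringBuilder();
        int argIdx = 0, from = 0, at;
        while ((at = msg.indexOf("{}", from)) >= 0 && argIdx < args.length) {
            sb.append(msg, from, at).append(args[argIdx++]);
            from = at + 2; // skip past the "{}" placeholder
        }
        return sb.append(msg.substring(from)).toString();
    }
}
```

With a template like "Updating [{}] to [{}]" and an empty second argument, the output is "Updating [c1] to []", so the empty value is still obvious in the log line.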

> Improve logging by ZkStateReader
> 
>
> Key: SOLR-8454
> URL: https://issues.apache.org/jira/browse/SOLR-8454
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrCloud
>Reporter: Shai Erera
>Assignee: Shai Erera
>Priority: Minor
> Fix For: 5.5, Trunk
>
> Attachments: SOLR-8454.patch
>
>
> Improve logging output by ZkStateReader, by adding the following:
> * Use LOG.foo() with parameters properly (i.e. not concatenating strings w/ +)
> * Surround parameters with [], to help readability, especially w/ empty values
> * Add missing string messages, where I felt a message will clarify
> * Convert some try-catch to a try-multicatch and improve output log message






[jira] [Updated] (SOLR-8454) Improve logging by ZkStateReader

2015-12-22 Thread Anshum Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8454?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anshum Gupta updated SOLR-8454:
---
Attachment: SOLR-8454.patch

Updated patch. 

I've cleaned up more code in this patch while we're at it. We should, however, 
explicitly mention in the ticket summary that we are not only improving the 
logging here but also cleaning up code.

Here are the things that I have changed:
* Removed unused import for ThreadFactory
* Removed the unwanted _path_ param from the addSecuritynodeWatcher() method. 
The path is always SOLR_SECURITY_CONF_PATH, so it makes sense to use it 
directly.
* In addSecuritynodeWatcher.process(), removed the following code, since those 
conditions are never true; the enclosing code block doesn't throw 
KeeperException or InterruptedException:
{code}
if (e instanceof KeeperException) throw (KeeperException) e;
if (e instanceof InterruptedException) throw (InterruptedException) e;
{code}
* Capitalized the log line I mentioned in my last comment.
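The try-multicatch conversion mentioned in the issue description can be sketched as follows (the exception types and methods here are hypothetical, not ZkStateReader's actual code):

```java
// Sketch of the try-multicatch conversion (Java 7+). The exception types
// and methods here are hypothetical; ZkStateReader's actual code differs.
class MulticatchSketch {

    // Before: two catch blocks with identical bodies.
    static String before(int mode) {
        try {
            fire(mode);
        } catch (IllegalStateException e) {
            return "handled: " + e.getMessage();
        } catch (IllegalArgumentException e) {
            return "handled: " + e.getMessage();
        }
        return "ok";
    }

    // After: one multicatch clause, so there is a single place to improve
    // the log message.
    static String after(int mode) {
        try {
            fire(mode);
        } catch (IllegalStateException | IllegalArgumentException e) {
            return "handled: " + e.getMessage();
        }
        return "ok";
    }

    static void fire(int mode) {
        if (mode == 1) throw new IllegalStateException("state");
        if (mode == 2) throw new IllegalArgumentException("arg");
    }
}
```

Both versions behave identically; the multicatch form just removes the duplicated handler body.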


> Improve logging by ZkStateReader
> 
>
> Key: SOLR-8454
> URL: https://issues.apache.org/jira/browse/SOLR-8454
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrCloud
>Reporter: Shai Erera
>Assignee: Shai Erera
>Priority: Minor
> Fix For: 5.5, Trunk
>
> Attachments: SOLR-8454.patch, SOLR-8454.patch, SOLR-8454.patch, 
> SOLR-8454.patch
>
>
> Improve logging output by ZkStateReader, by adding the following:
> * Use LOG.foo() with parameters properly (i.e. not concatenating strings w/ +)
> * Surround parameters with [], to help readability, especially w/ empty values
> * Add missing string messages, where I felt a message will clarify
> * Convert some try-catch to a try-multicatch and improve output log message






[jira] [Commented] (SOLR-8454) Improve logging by ZkStateReader and clean up dead code

2015-12-22 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8454?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15068111#comment-15068111
 ] 

ASF subversion and git services commented on SOLR-8454:
---

Commit 1721393 from [~shaie] in branch 'dev/trunk'
[ https://svn.apache.org/r1721393 ]

SOLR-8454: Improve logging by ZkStateReader and clean up dead code

> Improve logging by ZkStateReader and clean up dead code
> ---
>
> Key: SOLR-8454
> URL: https://issues.apache.org/jira/browse/SOLR-8454
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrCloud
>Reporter: Shai Erera
>Assignee: Shai Erera
>Priority: Minor
> Fix For: 5.5, Trunk
>
> Attachments: SOLR-8454.patch, SOLR-8454.patch, SOLR-8454.patch, 
> SOLR-8454.patch
>
>
> Improve logging output by ZkStateReader, by adding the following:
> * Use LOG.foo() with parameters properly (i.e. not concatenating strings w/ +)
> * Surround parameters with [], to help readability, especially w/ empty values
> * Add missing string messages, where I felt a message will clarify
> * Convert some try-catch to a try-multicatch and improve output log message
> Also, clean up dead code.






[jira] [Commented] (SOLR-8423) DELETESHARD should cleanup the instance and data directory, like DELETEREPLICA

2015-12-22 Thread Anshum Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8423?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15068039#comment-15068039
 ] 

Anshum Gupta commented on SOLR-8423:


Right, working on that. Got stuck with something else.

> DELETESHARD should cleanup the instance and data directory, like DELETEREPLICA
> --
>
> Key: SOLR-8423
> URL: https://issues.apache.org/jira/browse/SOLR-8423
> Project: Solr
>  Issue Type: Bug
>Reporter: Anshum Gupta
> Attachments: SOLR-8423.patch, SOLR-8423.patch
>
>
> DELETESHARD only cleans up the index directory and not the instance/data 
> directory. DELETEREPLICA on the other hand cleans up the data and instance 
> directory.
> DELETESHARD should clean up the instance and data directory, so that we don't 
> leak disk space on executing the command.
> If we think this would break back-compat, though I don't see why this should 
> not clean up the instance dir, we should at least provide an option to clean 
> up everything and make it default in 6.0.






[jira] [Commented] (LUCENE-6945) factor out TestCorePlus(Queries|Extensions)Parser from TestParser

2015-12-22 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6945?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15068042#comment-15068042
 ] 

ASF subversion and git services commented on LUCENE-6945:
-

Commit 1721381 from [~cpoerschke] in branch 'dev/trunk'
[ https://svn.apache.org/r1721381 ]

LUCENE-6945: factor out TestCorePlus(Queries|Extensions)Parser from TestParser

> factor out TestCorePlus(Queries|Extensions)Parser from TestParser
> -
>
> Key: LUCENE-6945
> URL: https://issues.apache.org/jira/browse/LUCENE-6945
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
> Attachments: LUCENE-6945.patch
>
>
> Tests for the xml query parser in SOLR-839, for example, could then extend 
> {{TestParser}}, {{TestCorePlusQueriesParser}}, or 
> {{TestCorePlusExtensionsParser}}, depending on requirements.






[jira] [Updated] (LUCENE-6946) SortField.equals does not take the missing value into account

2015-12-22 Thread Adrien Grand (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6946?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrien Grand updated LUCENE-6946:
-
Attachment: LUCENE-6946.patch

Here is a patch.
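The bug can be illustrated with a minimal stand-in class (hypothetical, not Lucene's actual SortField): equals and hashCode must take the missing value into account alongside the other sort criteria, otherwise two sorts that treat missing documents differently compare as equal:

```java
import java.util.Objects;

// Hypothetical stand-in, not Lucene's actual SortField. The fix is that
// equals() and hashCode() must include missingValue alongside the other
// sort criteria; otherwise two sorts with different missing values compare
// as equal.
class SortFieldSketch {
    final String field;
    final String type;          // simplified; Lucene uses an enum SortField.Type
    final Object missingValue;  // value substituted for docs missing the field

    SortFieldSketch(String field, String type, Object missingValue) {
        this.field = field;
        this.type = type;
        this.missingValue = missingValue;
    }

    @Override
    public boolean equals(Object o) {
        if (!(o instanceof SortFieldSketch)) return false;
        SortFieldSketch s = (SortFieldSketch) o;
        return Objects.equals(field, s.field)
            && Objects.equals(type, s.type)
            && Objects.equals(missingValue, s.missingValue); // previously omitted
    }

    @Override
    public int hashCode() {
        return Objects.hash(field, type, missingValue);
    }
}
```

Keeping hashCode in sync with equals matters here too, since SortFields can end up as cache keys.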

> SortField.equals does not take the missing value into account
> -
>
> Key: LUCENE-6946
> URL: https://issues.apache.org/jira/browse/LUCENE-6946
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Adrien Grand
>Assignee: Adrien Grand
>Priority: Minor
> Fix For: 5.5, Trunk
>
> Attachments: LUCENE-6946.patch
>
>
> SortField.equals does not check whether both objects have the same missing 
> value.






[jira] [Created] (LUCENE-6946) SortField.equals does not take the missing value into account

2015-12-22 Thread Adrien Grand (JIRA)
Adrien Grand created LUCENE-6946:


 Summary: SortField.equals does not take the missing value into 
account
 Key: LUCENE-6946
 URL: https://issues.apache.org/jira/browse/LUCENE-6946
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Adrien Grand
Assignee: Adrien Grand
Priority: Minor
 Fix For: 5.5, Trunk


SortField.equals does not check whether both objects have the same missing 
value.






[jira] [Commented] (SOLR-7339) Upgrade Jetty from 9.2 to 9.3

2015-12-22 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7339?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15068087#comment-15068087
 ] 

Mark Miller commented on SOLR-7339:
---

My patch for SOLR-8453 seems to solve pretty much all the connection resets I 
have been seeing except for the Locale issue. SolrExampleBinaryTest, 
TestManagedSchemaDynamicFieldResource and a bunch of others can fail with the 
wrong Locale.

> Upgrade Jetty from 9.2 to 9.3
> -
>
> Key: SOLR-7339
> URL: https://issues.apache.org/jira/browse/SOLR-7339
> Project: Solr
>  Issue Type: Improvement
>Reporter: Gregg Donovan
>Assignee: Shalin Shekhar Mangar
> Fix For: Trunk
>
> Attachments: SOLR-7339.patch, SOLR-7339.patch, 
> SolrExampleStreamingBinaryTest.testUpdateField-jetty92.pcapng, 
> SolrExampleStreamingBinaryTest.testUpdateField-jetty93.pcapng
>
>
> Jetty 9.3 offers support for HTTP/2. Interest in HTTP/2 or its predecessor 
> SPDY was shown in [SOLR-6699|https://issues.apache.org/jira/browse/SOLR-6699] 
> and [on the mailing list|http://markmail.org/message/jyhcmwexn65gbdsx].
> Among the HTTP/2 benefits over HTTP/1.1 relevant to Solr are:
> * multiplexing requests over a single TCP connection ("streams")
> * canceling a single request without closing the TCP connection
> * removing [head-of-line 
> blocking|https://http2.github.io/faq/#why-is-http2-multiplexed]
> * header compression
> Caveats:
> * Jetty 9.3 is at M2, not released.
> * Full Solr support for HTTP/2 would require more work than just upgrading 
> Jetty. The server configuration would need to change and a new HTTP client 
> ([Jetty's own 
> client|https://github.com/eclipse/jetty.project/tree/master/jetty-http2], 
> [Square's OkHttp|http://square.github.io/okhttp/], 
> [etc.|https://github.com/http2/http2-spec/wiki/Implementations]) would need 
> to be selected and wired up. Perhaps this is worthy of a branch?






[jira] [Created] (LUCENE-6947) SortField.missingValue should not be public

2015-12-22 Thread Adrien Grand (JIRA)
Adrien Grand created LUCENE-6947:


 Summary: SortField.missingValue should not be public
 Key: LUCENE-6947
 URL: https://issues.apache.org/jira/browse/LUCENE-6947
 Project: Lucene - Core
  Issue Type: Task
Reporter: Adrien Grand
Assignee: Adrien Grand
Priority: Minor
 Fix For: Trunk


Today we have SortField.setMissingValue, which tries to validate the missing 
value; but since SortField.missingValue is public, it is very easy to bypass. 
Let's make it protected (some sub-classes use it) and add a getter.






[jira] [Commented] (SOLR-8220) Read field from docValues for non stored fields

2015-12-22 Thread Shalin Shekhar Mangar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8220?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15068027#comment-15068027
 ] 

Shalin Shekhar Mangar commented on SOLR-8220:
-

Thanks Ishan.

bq. Since, for the fl=* case, we need all non-stored DVs that have 
useDocValuesAsStored=true, but for the general filtering case of fl=dv1,dv2 we 
need to filter using all non-stored DVs (irrespective of the 
useDocValuesAsStored flag)

Okay, I see what you are saying. The useDocValuesAsStored=true default applies 
when you request all fields, but if you explicitly ask for a field then we can 
return it from DVs even if it was marked useDocValuesAsStored=false. I have 
mixed feelings about this, but I can see where it can be useful, e.g. the 
first phase of distributed search.

bq. However, I had a look and found that responseWriters (e.g. 
JSONResponseWriter) get the whole SolrDocument at the writeSolrDocument() 
method, from where the following call drops fields it doesn't need

Hmm, yeah, we can't do that with doc values, it'd be too expensive.

Is there a test which creates a new field with useDocValuesAsStored as true and 
separately as false using the schema API? I'm assuming you will address Erick's 
concern above about multi-valued fields.

> Read field from docValues for non stored fields
> ---
>
> Key: SOLR-8220
> URL: https://issues.apache.org/jira/browse/SOLR-8220
> Project: Solr
>  Issue Type: Improvement
>Reporter: Keith Laban
> Attachments: SOLR-8220-5x.patch, SOLR-8220-ishan.patch, 
> SOLR-8220-ishan.patch, SOLR-8220-ishan.patch, SOLR-8220-ishan.patch, 
> SOLR-8220.patch, SOLR-8220.patch, SOLR-8220.patch, SOLR-8220.patch, 
> SOLR-8220.patch, SOLR-8220.patch, SOLR-8220.patch, SOLR-8220.patch, 
> SOLR-8220.patch, SOLR-8220.patch, SOLR-8220.patch, SOLR-8220.patch, 
> SOLR-8220.patch, SOLR-8220.patch, SOLR-8220.patch, SOLR-8220.patch, 
> SOLR-8220.patch, SOLR-8220.patch
>
>
> Many times a value will be both stored="true" and docValues="true" which 
> requires redundant data to be stored on disk. Since reading from docValues is 
> both efficient and a common practice (facets, analytics, streaming, etc), 
> reading values from docValues when a stored version of the field does not 
> exist would be a valuable disk usage optimization.
> The only caveat with this that I can see would be for multiValued fields as 
> they would always be returned sorted in the docValues approach. I believe 
> this is a fair compromise.
> I've done a rough implementation for this as a field transform, but I think 
> it should live closer to where stored fields are loaded in the 
> SolrIndexSearcher.
> Two open questions/observations:
> 1) There doesn't seem to be a standard way to read values for docValues, 
> facets, analytics, streaming, etc, all seem to be doing their own ways, 
> perhaps some of this logic should be centralized.
> 2) What will the API behavior be? (Below is my proposed implementation)
> Parameters for fl:
> - fl="docValueField"
>   -- return field from docValue if the field is not stored and in docValues, 
> if the field is stored return it from stored fields
> - fl="*"
>   -- return only stored fields
> - fl="+"
>-- return stored fields and docValue fields
> 2a would be the easiest implementation and might be sufficient for a first 
> pass; 2b is the current behavior.






[jira] [Commented] (SOLR-8423) DELETESHARD should cleanup the instance and data directory, like DELETEREPLICA

2015-12-22 Thread Shalin Shekhar Mangar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8423?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15068033#comment-15068033
 ] 

Shalin Shekhar Mangar commented on SOLR-8423:
-

Thanks Anshum. Looks good to me. We need a test though.

> DELETESHARD should cleanup the instance and data directory, like DELETEREPLICA
> --
>
> Key: SOLR-8423
> URL: https://issues.apache.org/jira/browse/SOLR-8423
> Project: Solr
>  Issue Type: Bug
>Reporter: Anshum Gupta
> Attachments: SOLR-8423.patch, SOLR-8423.patch
>
>
> DELETESHARD only cleans up the index directory and not the instance/data 
> directory. DELETEREPLICA on the other hand cleans up the data and instance 
> directory.
> DELETESHARD should clean up the instance and data directory, so that we don't 
> leak disk space on executing the command.
> If we think this would break back-compat, though I don't see why this should 
> not clean up the instance dir, we should at least provide an option to clean 
> up everything and make it default in 6.0.






[jira] [Commented] (SOLR-8220) Read field from docValues for non stored fields

2015-12-22 Thread Ishan Chattopadhyaya (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8220?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15068075#comment-15068075
 ] 

Ishan Chattopadhyaya commented on SOLR-8220:


bq. Is there a test which creates a new field with useDocValuesAsStored as true 
and separately as false using the schema API?
I had added SchemaVersionSpecificBehaviorTest to test these various true/false 
cases. However, there is no useDocValuesAsStored=false case that checks the 
output. I'll add such a test.

bq. I'm assuming you will address Erick's concern above about multi-valued 
fields.
I'm working through them. As far as I can see, the current loop with 
values.getValueCount() and the loop Erick suggested behave identically, 
i.e. values.getValueCount() does return the count of values per document. 
But I am adding a test to prove it.

For the {{DocValues.getDocsWithField(atomicReader, fieldName).get(docid)}} check, 
omitting it resulted in empty fields being returned for documents that 
weren't supposed to have a docValue (the user never added a docValue for that 
document during indexing). Again, I think I should add a specific test for 
that, checking the number of fields returned (maybe there already is one 
from Keith, but I'll check again).

> Read field from docValues for non stored fields
> ---
>
> Key: SOLR-8220
> URL: https://issues.apache.org/jira/browse/SOLR-8220
> Project: Solr
>  Issue Type: Improvement
>Reporter: Keith Laban
> Attachments: SOLR-8220-5x.patch, SOLR-8220-ishan.patch, 
> SOLR-8220-ishan.patch, SOLR-8220-ishan.patch, SOLR-8220-ishan.patch, 
> SOLR-8220.patch, SOLR-8220.patch, SOLR-8220.patch, SOLR-8220.patch, 
> SOLR-8220.patch, SOLR-8220.patch, SOLR-8220.patch, SOLR-8220.patch, 
> SOLR-8220.patch, SOLR-8220.patch, SOLR-8220.patch, SOLR-8220.patch, 
> SOLR-8220.patch, SOLR-8220.patch, SOLR-8220.patch, SOLR-8220.patch, 
> SOLR-8220.patch, SOLR-8220.patch
>
>
> Many times a value will be both stored="true" and docValues="true" which 
> requires redundant data to be stored on disk. Since reading from docValues is 
> both efficient and a common practice (facets, analytics, streaming, etc), 
> reading values from docValues when a stored version of the field does not 
> exist would be a valuable disk usage optimization.
> The only caveat with this that I can see would be for multiValued fields as 
> they would always be returned sorted in the docValues approach. I believe 
> this is a fair compromise.
> I've done a rough implementation for this as a field transform, but I think 
> it should live closer to where stored fields are loaded in the 
> SolrIndexSearcher.
> Two open questions/observations:
> 1) There doesn't seem to be a standard way to read values for docValues, 
> facets, analytics, streaming, etc, all seem to be doing their own ways, 
> perhaps some of this logic should be centralized.
> 2) What will the API behavior be? (Below is my proposed implementation)
> Parameters for fl:
> - fl="docValueField"
>   -- return field from docValue if the field is not stored and in docValues, 
> if the field is stored return it from stored fields
> - fl="*"
>   -- return only stored fields
> - fl="+"
>-- return stored fields and docValue fields
> 2a - would be easiest implementation and might be sufficient for a first 
> pass. 2b - is current behavior
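
The proposed fl semantics above can be sketched as a small decision function. This is a hypothetical illustration of the proposal only, not Solr's actual implementation; the method and mode names are invented:

```java
// Hypothetical sketch of the proposed fl behavior; names are illustrative.
public class FlSemanticsSketch {

    /** Decide whether a field would be returned for a given fl mode. */
    static boolean returned(boolean stored, boolean hasDocValues, String flMode) {
        switch (flMode) {
            case "*":
                // proposal: "*" returns only stored fields
                return stored;
            case "+":
                // proposal: "+" returns stored fields and docValues fields
                return stored || hasDocValues;
            default:
                // explicit field name: prefer stored, fall back to docValues
                return stored || hasDocValues;
        }
    }

    public static void main(String[] args) {
        System.out.println(returned(true,  false, "*")); // stored-only field: true
        System.out.println(returned(false, true,  "*")); // docValues-only field: false
        System.out.println(returned(false, true,  "+")); // docValues-only field: true
    }
}
```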






[jira] [Updated] (LUCENE-6947) SortField.missingValue should not be public

2015-12-22 Thread Adrien Grand (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6947?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrien Grand updated LUCENE-6947:
-
Attachment: LUCENE-6947.patch

Here is a patch.

> SortField.missingValue should not be public
> ---
>
> Key: LUCENE-6947
> URL: https://issues.apache.org/jira/browse/LUCENE-6947
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Adrien Grand
>Assignee: Adrien Grand
>Priority: Minor
> Fix For: Trunk
>
> Attachments: LUCENE-6947.patch
>
>
> Today we have SortField.setMissingValue that tries to perform validation of 
> the missing value, except that given that SortField.missingValue is public, 
> it is very easy to bypass it. Let's make it protected (some sub-classes use 
> it) and add a getter.
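
The encapsulation change described above can be sketched in isolation: the field becomes protected (subclasses can still use it), external code must go through the validating setter, and a getter provides read access. The validation shown is illustrative, not Lucene's actual checks:

```java
// Sketch of making a mutable field non-public so its setter's validation
// cannot be bypassed; the null check stands in for real validation logic.
public class SortFieldSketch {
    protected Object missingValue; // previously public, so validation could be bypassed

    public void setMissingValue(Object value) {
        if (value == null) {
            throw new IllegalArgumentException("missing value must not be null");
        }
        this.missingValue = value;
    }

    public Object getMissingValue() {
        return missingValue;
    }

    public static void main(String[] args) {
        SortFieldSketch sf = new SortFieldSketch();
        sf.setMissingValue(Long.MAX_VALUE); // must go through validation now
        System.out.println(sf.getMissingValue());
    }
}
```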






[jira] [Commented] (LUCENE-6835) Directory.deleteFile should "own" retrying deletions on Windows

2015-12-22 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6835?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15067980#comment-15067980
 ] 

ASF subversion and git services commented on LUCENE-6835:
-

Commit 1721365 from [~mikemccand] in branch 'dev/branches/lucene6835'
[ https://svn.apache.org/r1721365 ]

LUCENE-6835: exempt tests from virus checker if they do direct file deletes, or 
stop doing unnecessary direct file deletes; address some nocommits; fix 
compilation errors

> Directory.deleteFile should "own" retrying deletions on Windows
> ---
>
> Key: LUCENE-6835
> URL: https://issues.apache.org/jira/browse/LUCENE-6835
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Michael McCandless
> Fix For: 5.5, Trunk
>
> Attachments: LUCENE-6835.patch
>
>
> Rob's idea:
> Today, we have hairy logic in IndexFileDeleter to deal with Windows file 
> systems that cannot delete still open files.
> And with LUCENE-6829, where OfflineSorter now must deal with the situation 
> too ... I worked around it by fixing all tests to disable the virus checker.
> I think it makes more sense to push this "platform specific problem" lower in 
> the stack, into Directory?  I.e., its deleteFile method would catch the 
> access denied, and then retry the deletion later.  Then we could re-enable 
> virus checker on all these tests, simplify IndexFileDeleter, etc.
> Maybe in the future we could further push this down, into WindowsDirectory,  
> and fix FSDirectory.open to return WindowsDirectory on windows ...
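
The idea of pushing delete-retry into the Directory layer can be sketched as follows. The class and method names here are hypothetical, not Lucene's actual Directory API: deleteFile swallows an access-denied style failure and remembers the path, and a later sweep retries.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.HashSet;
import java.util.Set;

// Illustrative sketch of retrying failed deletes at the Directory level,
// so callers no longer need to handle Windows delete semantics themselves.
public class RetryingDeleter {
    private final Set<Path> pendingDeletes = new HashSet<>();

    public void deleteFile(Path path) {
        try {
            Files.deleteIfExists(path);
        } catch (IOException e) {
            // e.g. Windows refuses to delete a still-open file: retry later
            pendingDeletes.add(path);
        }
    }

    /** Retry earlier failed deletes, e.g. on the next write or on close. */
    public void retryPendingDeletes() {
        pendingDeletes.removeIf(p -> {
            try {
                Files.deleteIfExists(p);
                return true; // deleted now, or already gone
            } catch (IOException e) {
                return false; // still locked; keep it queued
            }
        });
    }

    public static void main(String[] args) throws IOException {
        RetryingDeleter deleter = new RetryingDeleter();
        Path tmp = Files.createTempFile("retry-delete", ".tmp");
        deleter.deleteFile(tmp);
        System.out.println(Files.exists(tmp)); // false on POSIX; may linger on Windows
    }
}
```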






[jira] [Commented] (LUCENE-6939) BlendedInfixSuggester to support exponential reciprocal BlenderType

2015-12-22 Thread Arcadius Ahouansou (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6939?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15068054#comment-15068054
 ] 

Arcadius Ahouansou commented on LUCENE-6939:


Thank you very much [~mikemccand] for your help.
As part of this change, we would need to update the BlenderType section of the 
wiki at
https://cwiki.apache.org/confluence/display/solr/Suggester

- change {{linear}} and {{reciprocal}} to {{position_linear}} and 
{{position_reciprocal}} respectively, and
- also add a section for {{position_exponential_reciprocal}} and the 
{{exponent}} config

Again, thank you very much [~mikemccand] for your help.

> BlendedInfixSuggester to support exponential reciprocal BlenderType
> ---
>
> Key: LUCENE-6939
> URL: https://issues.apache.org/jira/browse/LUCENE-6939
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/spellchecker
>Affects Versions: 5.4
>Reporter: Arcadius Ahouansou
>Priority: Minor
>  Labels: suggester
> Fix For: 5.5, Trunk
>
> Attachments: LUCENE-6939.patch
>
>
> The orignal BlendedInfixSuggester introduced in LUCENE-5354 has support for:
> - {{BlenderType.POSITION_LINEAR}} and 
> - {{BlenderType.POSITION_RECIPROCAL}} .
> These are used to score documents based on the position of the matched token 
> i.e the closer is the matched term to the beginning, the higher score you get.
> In some use cases, we need a more aggressive scoring based on the position.
> That's where the exponential reciprocal comes into play 
> i.e 
> {{coef = 1/Math.pow(position+1, exponent)}}
> where the {{exponent}} is a configurable variable.
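
The coefficient from the description above can be compared numerically with the plain reciprocal. The method names are illustrative, not the suggester's actual API; exponentialReciprocal implements {{coef = 1/Math.pow(position+1, exponent)}} exactly as written in the issue:

```java
// Numeric sketch of the position-based blender coefficients discussed above.
public class BlenderCoefSketch {
    static double positionReciprocal(int position) {
        return 1.0 / (position + 1);
    }

    static double exponentialReciprocal(int position, double exponent) {
        // coef = 1 / (position + 1)^exponent, per the issue description
        return 1.0 / Math.pow(position + 1, exponent);
    }

    public static void main(String[] args) {
        // With exponent = 2 the score falls off much more aggressively:
        for (int pos = 0; pos < 4; pos++) {
            System.out.printf("pos=%d reciprocal=%.4f exponential=%.4f%n",
                    pos, positionReciprocal(pos), exponentialReciprocal(pos, 2.0));
        }
    }
}
```

Note that exponent = 1 reduces the exponential form to the existing reciprocal, so the new BlenderType generalizes the old one.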






[JENKINS] Lucene-Solr-NightlyTests-trunk - Build # 889 - Still Failing

2015-12-22 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-trunk/889/

123 tests failed.
FAILED:  org.apache.solr.client.solrj.SolrExampleBinaryTest.testAugmentFields

Error Message:
IOException occured when talking to server at: 
http://127.0.0.1:33585/solr/collection1

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: IOException occured when 
talking to server at: http://127.0.0.1:33585/solr/collection1
at 
__randomizedtesting.SeedInfo.seed([BBFB197EC0553C4B:BFB0BA9C985E3D04]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:590)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:241)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:230)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:150)
at 
org.apache.solr.client.solrj.SolrClient.deleteByQuery(SolrClient.java:896)
at 
org.apache.solr.client.solrj.SolrClient.deleteByQuery(SolrClient.java:859)
at 
org.apache.solr.client.solrj.SolrClient.deleteByQuery(SolrClient.java:874)
at 
org.apache.solr.client.solrj.SolrExampleTests.testAugmentFields(SolrExampleTests.java:477)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 

[jira] [Updated] (SOLR-8453) Local exceptions in DistributedUpdateProcessor should not cut off an ongoing request.

2015-12-22 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8453?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller updated SOLR-8453:
--
Description: 
The basic problem is that when we are streaming in updates via a client, an 
update can fail in a way that further updates in the request will not be 
processed, but not in a way that causes the client to stop and finish up the 
request before the server does something else with that connection.

This seems to mean that even after the server stops processing the request, the 
concurrent update client is still in the process of sending the request. It 
seems previously, Jetty would not go after the connection very quickly after 
the server processing thread was stopped via exception, and the client 
(usually?) had time to clean up properly. But after the Jetty upgrade from 9.2 
to 9.3, Jetty closes the connection on the server sooner than previous versions 
(?), and the client does not end up getting notified of the original exception 
at all and instead hits a connection reset exception. The result was random 
fails due to connection reset throughout our tests and one particular test 
failing consistently. Even before this update, it does not seem like we are 
acting in a safe or 'behaved' manner, but our version of Jetty was relaxed 
enough (or a bug was fixed?) for our tests to work out.

  was:
The basic problem is that when we are streaming in updates via a client, an 
update can fail in a way that further updates in the request will not be 
processed, but not in a way that causes the client to stop streaming more 
updates.

This seems to mean that even after the server stops processing the request, the 
concurrent update client is sending out some further updates. It seems 
previously this burst was sent on the connection and ignored? But after the 
Jetty upgrade from 9.2 to 9.3, Jetty closes the connection on the server when 
we throw certain document level exceptions, and the client does not end up 
getting notified of the original exception at all and instead hits a connection 
reset exception. Even before this update, it does not seem like we are acting 
in a safe or 'behaved' manner.


> Local exceptions in DistributedUpdateProcessor should not cut off an ongoing 
> request.
> -
>
> Key: SOLR-8453
> URL: https://issues.apache.org/jira/browse/SOLR-8453
> Project: Solr
>  Issue Type: Bug
>Reporter: Mark Miller
>Assignee: Mark Miller
> Attachments: SOLR-8453.patch, SOLR-8453.patch, SOLR-8453.patch, 
> SOLR-8453.patch, SOLR-8453.patch, SOLR-8453.patch
>
>
> The basic problem is that when we are streaming in updates via a client, an 
> update can fail in a way that further updates in the request will not be 
> processed, but not in a way that causes the client to stop and finish up the 
> request before the server does something else with that connection.
> This seems to mean that even after the server stops processing the request, 
> the concurrent update client is still in the process of sending the request. 
> It seems previously, Jetty would not go after the connection very quickly 
> after the server processing thread was stopped via exception, and the client 
> (usually?) had time to clean up properly. But after the Jetty upgrade from 
> 9.2 to 9.3, Jetty closes the connection on the server sooner than previous 
> versions (?), and the client does not end up getting notified of the original 
> exception at all and instead hits a connection reset exception. The result 
> was random fails due to connection reset throughout our tests and one 
> particular test failing consistently. Even before this update, it does not 
> seem like we are acting in a safe or 'behaved' manner, but our version of 
> Jetty was relaxed enough (or a bug was fixed?) for our tests to work out.






[jira] [Commented] (LUCENE-6908) TestGeoUtils.testGeoRelations is buggy with irregular rectangles

2015-12-22 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6908?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15068321#comment-15068321
 ] 

Steve Rowe commented on LUCENE-6908:


One more reproducible failure from my Jenkins on branch_5x (I'll stop adding 
more after this one):

{noformat}
   [junit4] Suite: org.apache.lucene.util.TestGeoUtils
   [junit4]   1> doc=1431 matched but should not on iteration 50
   [junit4]   1>   lon=14.089814610779285 lat=88.21761829778552 
distanceMeters=310627.1321615869 vs radiusMeters=308762.06620344025
   [junit4]   2> NOTE: download the large Jenkins line-docs file by running 
'ant get-jenkins-line-docs' in the lucene directory.
   [junit4]   2> NOTE: reproduce with: ant test  -Dtestcase=TestGeoUtils 
-Dtests.method=testGeoRelations -Dtests.seed=70AF7A8C5D104698 
-Dtests.multiplier=2 -Dtests.nightly=true -Dtests.slow=true 
-Dtests.linedocsfile=/home/jenkins/lucene-data/enwiki.random.lines.txt 
-Dtests.locale=es_CR -Dtests.timezone=Africa/Maputo -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8
   [junit4] FAILURE 1.28s J3 | TestGeoUtils.testGeoRelations <<<
   [junit4]> Throwable #1: java.lang.AssertionError: 1 incorrect hits (see 
above)
   [junit4]>at 
__randomizedtesting.SeedInfo.seed([70AF7A8C5D104698:B28C6E3929E73026]:0)
   [junit4]>at 
org.apache.lucene.util.TestGeoUtils.testGeoRelations(TestGeoUtils.java:533)
   [junit4]>at java.lang.Thread.run(Thread.java:745)
   [junit4]   2> NOTE: test params are: codec=Asserting(Lucene54): {}, 
docValues:{}, sim=DefaultSimilarity, locale=es_CR, timezone=Africa/Maputo
   [junit4]   2> NOTE: Linux 4.1.0-custom2-amd64 amd64/Oracle Corporation 
1.8.0_45 (64-bit)/cpus=16,threads=1,free=426047992,total=514850816
   [junit4]   2> NOTE: All tests run in this JVM: [TestGeoUtils]
   [junit4] Completed [3/20 (1!)] on J3 in 1.74s, 8 tests, 1 failure <<< 
FAILURES!
{noformat}

> TestGeoUtils.testGeoRelations is buggy with irregular rectangles
> 
>
> Key: LUCENE-6908
> URL: https://issues.apache.org/jira/browse/LUCENE-6908
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Nicholas Knize
> Attachments: LUCENE-6908.patch, LUCENE-6908.patch, LUCENE-6908.patch, 
> LUCENE-6908.patch
>
>
> The {{.testGeoRelations}} method doesn't exactly test the behavior of 
> GeoPoint*Query as its using the BKD split technique (instead of quad cell 
> division) to divide the space on each pass. For "large" distance queries this 
> can create a lot of irregular rectangles producing large radial distortion 
> error when using the cartesian approximation methods provided by 
> {{GeoUtils}}. This issue improves the accuracy of GeoUtils cartesian 
> approximation methods on irregular rectangles without having to cut over to 
> an expensive oblate geometry approach.






[jira] [Commented] (SOLR-8220) Read field from docValues for non stored fields

2015-12-22 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8220?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15068319#comment-15068319
 ] 

Erick Erickson commented on SOLR-8220:
--

bq: For the DocValues.getDocsWithField(atomicReader, fieldName).get(docid), not 
having it was resulting in empty fields being returned for documents that 
weren't supposed to have a docValue (the user never added a docValue for that 
document during indexing).

Right, I had to add a test at the end to avoid that. I didn't track the code 
thoroughly, but does DocValues.getDocsWithField allocate a BitSet, return a 
pre-existing instance, or even cache the BitSet somewhere? If it allocates a 
new BitSet (or even fills a cache entry), the test at the end might be much 
less expensive. I didn't track it down, though; if it returns a reference to a 
cached bitset that will be created _anyway_, then it's just a style thing:

{code}
if (outValues.size() > 0) {
  sdoc.addField()
}
{code}

As for whether the loop returns all values in the field, I saw this "by 
inspection" on the techproducts example (with a few mods for adding 
docValues="true" to the schema). Again, though, this is 4.x after I hacked a 
backport and put it in an entirely different place in the code, specifically 
NOT a visitor pattern. So it's entirely possible that the semantics have 
changed or hacking it into a different part of the code base has a different 
context.  A test would settle it for all time though.
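
The guard being debated can be modeled with a tiny standalone sketch: consult a docs-with-field bitset (standing in here for DocValues.getDocsWithField, with the interfaces heavily simplified) before emitting a value, so documents that never got a docValue don't gain empty fields in the response:

```java
import java.util.BitSet;

// Minimal model of the docs-with-field check discussed in this thread.
public class DocsWithFieldSketch {
    // stands in for DocValues.getDocsWithField(reader, field)
    static BitSet docsWithField(boolean[] hasValue) {
        BitSet bits = new BitSet(hasValue.length);
        for (int i = 0; i < hasValue.length; i++) {
            if (hasValue[i]) {
                bits.set(i);
            }
        }
        return bits;
    }

    public static void main(String[] args) {
        BitSet bits = docsWithField(new boolean[] {true, false, true});
        for (int docid = 0; docid < 3; docid++) {
            // only docs 0 and 2 should get the field added to the response
            System.out.println("doc " + docid + " hasField=" + bits.get(docid));
        }
    }
}
```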

> Read field from docValues for non stored fields
> ---
>
> Key: SOLR-8220
> URL: https://issues.apache.org/jira/browse/SOLR-8220
> Project: Solr
>  Issue Type: Improvement
>Reporter: Keith Laban
> Attachments: SOLR-8220-5x.patch, SOLR-8220-ishan.patch, 
> SOLR-8220-ishan.patch, SOLR-8220-ishan.patch, SOLR-8220-ishan.patch, 
> SOLR-8220.patch, SOLR-8220.patch, SOLR-8220.patch, SOLR-8220.patch, 
> SOLR-8220.patch, SOLR-8220.patch, SOLR-8220.patch, SOLR-8220.patch, 
> SOLR-8220.patch, SOLR-8220.patch, SOLR-8220.patch, SOLR-8220.patch, 
> SOLR-8220.patch, SOLR-8220.patch, SOLR-8220.patch, SOLR-8220.patch, 
> SOLR-8220.patch, SOLR-8220.patch
>
>
> Many times a value will be both stored="true" and docValues="true" which 
> requires redundant data to be stored on disk. Since reading from docValues is 
> both efficient and a common practice (facets, analytics, streaming, etc), 
> reading values from docValues when a stored version of the field does not 
> exist would be a valuable disk usage optimization.
> The only caveat with this that I can see would be for multiValued fields as 
> they would always be returned sorted in the docValues approach. I believe 
> this is a fair compromise.
> I've done a rough implementation for this as a field transform, but I think 
> it should live closer to where stored fields are loaded in the 
> SolrIndexSearcher.
> Two open questions/observations:
> 1) There doesn't seem to be a standard way to read values for docValues, 
> facets, analytics, streaming, etc, all seem to be doing their own ways, 
> perhaps some of this logic should be centralized.
> 2) What will the API behavior be? (Below is my proposed implementation)
> Parameters for fl:
> - fl="docValueField"
>   -- return field from docValue if the field is not stored and in docValues, 
> if the field is stored return it from stored fields
> - fl="*"
>   -- return only stored fields
> - fl="+"
>-- return stored fields and docValue fields
> 2a - would be easiest implementation and might be sufficient for a first 
> pass. 2b - is current behavior






[jira] [Commented] (LUCENE-6908) TestGeoUtils.testGeoRelations is buggy with irregular rectangles

2015-12-22 Thread Nicholas Knize (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6908?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15068336#comment-15068336
 ] 

Nicholas Knize commented on LUCENE-6908:


++ Thanks [~steve_rowe]. There's a geometric approximation in the within check 
that's a bit too lenient for BKD. These seeds have been super helpful in 
regression-testing a fix. Should have a patch shortly. Sorry for the noise!

> TestGeoUtils.testGeoRelations is buggy with irregular rectangles
> 
>
> Key: LUCENE-6908
> URL: https://issues.apache.org/jira/browse/LUCENE-6908
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Nicholas Knize
> Attachments: LUCENE-6908.patch, LUCENE-6908.patch, LUCENE-6908.patch, 
> LUCENE-6908.patch
>
>
> The {{.testGeoRelations}} method doesn't exactly test the behavior of 
> GeoPoint*Query as its using the BKD split technique (instead of quad cell 
> division) to divide the space on each pass. For "large" distance queries this 
> can create a lot of irregular rectangles producing large radial distortion 
> error when using the cartesian approximation methods provided by 
> {{GeoUtils}}. This issue improves the accuracy of GeoUtils cartesian 
> approximation methods on irregular rectangles without having to cut over to 
> an expensive oblate geometry approach.






[jira] [Commented] (SOLR-8326) PKIAuthenticationPlugin doesn't report any errors in case of stale or wrong keys and returns garbage

2015-12-22 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8326?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15068351#comment-15068351
 ] 

Noble Paul commented on SOLR-8326:
--

This is a different error. It happens because the request was received more 
than 5 seconds after it was sent, or because the server clocks are not in sync.

I guess we should add a prop to increase the key timeout.
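
The 5-second validity window can be sketched as a freshness check: a header is rejected when it arrives more than the ttl after it was created, or "before" it was created (unsynchronized clocks). The ttl constant and method names here are assumptions for illustration, not Solr's actual PKI code:

```java
// Sketch of a timestamp-freshness check like the one described above.
public class KeyWindowSketch {
    static final long DEFAULT_TTL_MS = 5000; // the 5-second window mentioned above

    static boolean isFresh(long sentAtMs, long receivedAtMs, long ttlMs) {
        long age = receivedAtMs - sentAtMs;
        // reject both stale requests and negative ages from clock skew
        return age >= 0 && age <= ttlMs;
    }

    public static void main(String[] args) {
        System.out.println(isFresh(0, 4000, DEFAULT_TTL_MS));    // true: within window
        System.out.println(isFresh(0, 6000, DEFAULT_TTL_MS));    // false: too old
        System.out.println(isFresh(2000, 1000, DEFAULT_TTL_MS)); // false: clock skew
    }
}
```

Making the ttl configurable, as suggested, would just mean reading it from a system property instead of the constant.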

> PKIAuthenticationPlugin doesn't report any errors in case of stale or wrong 
> keys and returns garbage
> 
>
> Key: SOLR-8326
> URL: https://issues.apache.org/jira/browse/SOLR-8326
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 5.3, 5.3.1
>Reporter: Anshum Gupta
>Assignee: Anshum Gupta
>Priority: Blocker
> Fix For: 5.3.2, 5.4, Trunk
>
> Attachments: SOLR-8326.patch, SOLR-8326.patch, SOLR-8326.patch
>
>
> This was reported on the mailing list:
> https://www.mail-archive.com/solr-user@lucene.apache.org/msg115921.html
> I tested it out as follows to confirm that adding a 'read' rule causes 
> replication to break. 






[JENKINS] Lucene-Solr-trunk-Solaris (64bit/jdk1.8.0) - Build # 275 - Failure!

2015-12-22 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Solaris/275/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseG1GC

3 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.core.TestLazyCores

Error Message:
ObjectTracker found 4 object(s) that were not released!!! [SolrCore, 
MockDirectoryWrapper, MockDirectoryWrapper, MDCAwareThreadPoolExecutor]

Stack Trace:
java.lang.AssertionError: ObjectTracker found 4 object(s) that were not 
released!!! [SolrCore, MockDirectoryWrapper, MockDirectoryWrapper, 
MDCAwareThreadPoolExecutor]
at __randomizedtesting.SeedInfo.seed([5B2C987B281D5A5C]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertNull(Assert.java:551)
at org.apache.solr.SolrTestCaseJ4.afterClass(SolrTestCaseJ4.java:229)
at sun.reflect.GeneratedMethodAccessor22.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:834)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)


FAILED:  junit.framework.TestSuite.org.apache.solr.core.TestLazyCores

Error Message:
1 thread leaked from SUITE scope at org.apache.solr.core.TestLazyCores: 1) 
Thread[id=8045, name=searcherExecutor-3083-thread-1, state=WAITING, 
group=TGRP-TestLazyCores] at sun.misc.Unsafe.park(Native Method)
 at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
 at 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) 
at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
at java.lang.Thread.run(Thread.java:745)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE 
scope at org.apache.solr.core.TestLazyCores: 
   1) Thread[id=8045, name=searcherExecutor-3083-thread-1, state=WAITING, 
group=TGRP-TestLazyCores]
at sun.misc.Unsafe.park(Native Method)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
at 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
at 

[jira] [Commented] (LUCENE-6945) factor out TestCorePlus(Queries|Extensions)Parser from TestParser

2015-12-22 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6945?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15068287#comment-15068287
 ] 

ASF subversion and git services commented on LUCENE-6945:
-

Commit 1721416 from [~cpoerschke] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1721416 ]

LUCENE-6945: factor out TestCorePlus(Queries|Extensions)Parser from TestParser 
(merge in revision 1721381 from trunk)

> factor out TestCorePlus(Queries|Extensions)Parser from TestParser
> -
>
> Key: LUCENE-6945
> URL: https://issues.apache.org/jira/browse/LUCENE-6945
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
> Attachments: LUCENE-6945.patch
>
>
> Tests for the xml query parser in SOLR-839 for example could then be 
> extending the {{TestParser}} or {{TestCorePlusQueriesParser}} or 
> {{TestCorePlusExtensionsParser}} depending on requirements.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8326) PKIAuthenticationPlugin doesn't report any errors in case of stale or wrong keys and returns garbage

2015-12-22 Thread Nirmala Venkatraman (JIRA)

[ https://issues.apache.org/jira/browse/SOLR-8326?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15068290#comment-15068290 ]

Nirmala Venkatraman commented on SOLR-8326:
---

Anshum/Noble,
I am still seeing intermittent PKIAuth "Invalid key" errors in solr.log while indexing is running against our SolrCloud cluster with basic auth enabled:

2015-12-22 14:39:42.685 ERROR (qtp201069753-644) [c:collection52 s:shard1 r:core_node2 x:collection52_shard1_replica2] o.a.s.s.PKIAuthenticationPlugin Invalid key
2015-12-22 14:39:42.706 ERROR (qtp201069753-1121) [   ] o.a.s.s.PKIAuthenticationPlugin Invalid key
2015-12-22 14:39:42.705 ERROR (qtp201069753-481) [   ] o.a.s.s.PKIAuthenticationPlugin Invalid key
2015-12-22 14:39:42.698 ERROR (qtp201069753-1224) [c:collection52 s:shard1 r:core_node2 x:collection52_shard1_replica2] o.a.s.s.PKIAuthenticationPlugin Invalid key
2015-12-22 14:39:42.697 ERROR (qtp201069753-577) [c:collection17 s:shard1 r:core_node2 x:collection17_shard1_replica2] o.a.s.s.PKIAuthenticationPlugin Invalid key
2015-12-22 14:39:42.691 ERROR (qtp201069753-1062) [c:collection52 s:shard1 r:core_node2 x:collection52_shard1_replica2] o.a.s.s.PKIAuthenticationPlugin Invalid key
2015-12-22 14:39:42.685 ERROR (qtp201069753-1063) [c:collection27 s:shard1 r:core_node1 x:collection27_shard1_replica2] o.a.s.s.PKIAuthenticationPlugin Invalid key
2015-12-22 15:04:10.247 ERROR (qtp201069753-1045) [c:collection23 s:shard1 r:core_node1 x:collection23_shard1_replica1] o.a.s.s.PKIAuthenticationPlugin Invalid key

In the access/request logs on the same Solr server, I see update requests coming from other Solr servers returning a 401:
9.32.182.53 - - [22/Dec/2015:14:39:42 +] "POST /solr/collection42_shard1_replica2/update?update.distrib=TOLEADER=http%3A%2F%2Fsgdsolar1.swg.usma.ibm.com%3A8983%2Fsolr%2Fcollection42_shard1_replica1%2F=javabin=2 HTTP/1.1" 401 386
9.32.182.60 - - [22/Dec/2015:14:39:42 +] "POST /solr/collection40/update?_route_=Q049c2dkbWFpbDI5L089U0dfVVMx20106052!=true HTTP/1.1" 401 370
9.32.179.190 - - [22/Dec/2015:14:39:42 +] "GET /solr/collection59/get?_route_=Q049c2dkbWFpbDI5L089U0dfVVMx20106072!=Q049c2dkbWFpbDI5L089U0dfVVMx20106072!354405B096A7252500257DF2006B4EBB,Q049c2dkbWFpbDI5L089U0dfVVMx20106072!E05CD420388D090200257DF2006B4F0C,Q049c2dkbWFpbDI5L089U0dfVVMx20106072!0C64A415C05985FD00257DF2006B4EE5,Q049c2dkbWFpbDI5L089U0dfVVMx20106072!CB209D64E6CFD95700257DF2006B4F58,Q049c2dkbWFpbDI5L089U0dfVVMx20106072!416F4C73022EFA1200257DF2006B4F33=unid,sequence,folderunid=xml=10 HTTP/1.1" 401 367
9.32.182.53 - - [22/Dec/2015:14:39:42 +] "POST /solr/collection40/update?_route_=Q049c2dkbWFpbDI2L089U0dfVVMx20105988!=true HTTP/1.1" 401 370
9.32.182.53 - - [22/Dec/2015:14:39:42 +] "POST /solr/collection29_shard1_replica1/update?update.distrib=TOLEADER=http%3A%2F%2Fsgdsolar1.swg.usma.ibm.com%3A8983%2Fsolr%2Fcollection29_shard1_replica2%2F=javabin=2 HTTP/1.1" 401 386
9.32.182.53 - - [22/Dec/2015:14:39:42 +] "POST /solr/collection9_shard1_replica1/update?update.distrib=TOLEADER=http%3A%2F%2Fsgdsolar1.swg.usma.ibm.com%3A8983%2Fsolr%2Fcollection9_shard1_replica2%2F=javabin=2 HTTP/1.1" 401 385
9.32.182.53 - - [22/Dec/2015:14:39:42 +] "POST /solr/collection52_shard1_replica2/update?update.distrib=TOLEADER=http%3A%2F%2Fsgdsolar1.swg.usma.ibm.com%3A8983%2Fsolr%2Fcollection52_shard1_replica1%2F=javabin=2 HTTP/1.1" 401 386
9.32.179.191 - - [22/Dec/2015:15:04:10 +] "POST /solr/collection59/update?_route_=Q049c2dkbWFpbDI4L089U0dfVVMx20106007!=true HTTP/1.1" 401 370

Should this be treated as a new bug/issue?

> PKIAuthenticationPlugin doesn't report any errors in case of stale or wrong 
> keys and returns garbage
> 
>
> Key: SOLR-8326
> URL: https://issues.apache.org/jira/browse/SOLR-8326
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 5.3, 5.3.1
>Reporter: Anshum Gupta
>Assignee: Anshum Gupta
>Priority: Blocker
> Fix For: 5.3.2, 5.4, Trunk
>
> Attachments: SOLR-8326.patch, SOLR-8326.patch, SOLR-8326.patch
>
>
> This was reported on the mailing list:
> https://www.mail-archive.com/solr-user@lucene.apache.org/msg115921.html
> I tested it out as follows to confirm that adding a 'read' rule causes 
> replication to break. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-8326) PKIAuthenticationPlugin doesn't report any errors in case of stale or wrong keys and returns garbage

2015-12-22 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8326?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul updated SOLR-8326:
-
Attachment: pkiauth_ttl.patch

Adding a TTL system property to PKIAuthenticationPlugin.
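The idea behind the attached patch can be sketched as follows. This is a hedged illustration, not the actual PKIAuthenticationPlugin code: the signed auth header carries the sender's timestamp, and the receiver rejects it once it is older than a configurable TTL. The property name `pkiauth.ttl` and the freshness logic here are assumptions based on the comment above.

```java
// Hedged sketch of a TTL check for a signed auth header -- not the real
// PKIAuthenticationPlugin. Assumes the header embeds the sender's timestamp
// and the TTL comes from a system property (name assumed: pkiauth.ttl).
public class PkiTtlSketch {

    /** Returns true if the header timestamp is within ttlMs of now. */
    static boolean isFresh(long headerTimestampMs, long nowMs, long ttlMs) {
        long ageMs = nowMs - headerTimestampMs;
        // A negative age means a clock-skewed "future" timestamp; reject it too.
        return ageMs >= 0 && ageMs <= ttlMs;
    }

    public static void main(String[] args) {
        long now = 100_000L;
        long ttl = Long.getLong("pkiauth.ttl", 5_000L); // assumed property name
        System.out.println(isFresh(now - 1_000, now, ttl));  // recent key
        System.out.println(isFresh(now - 60_000, now, ttl)); // stale key
    }
}
```

A stale key would then be reported as an explicit "Invalid key" style error rather than silently producing garbage, which is what the issue title asks for.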

> PKIAuthenticationPlugin doesn't report any errors in case of stale or wrong 
> keys and returns garbage
> 
>
> Key: SOLR-8326
> URL: https://issues.apache.org/jira/browse/SOLR-8326
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 5.3, 5.3.1
>Reporter: Anshum Gupta
>Assignee: Anshum Gupta
>Priority: Blocker
> Fix For: 5.3.2, 5.4, Trunk
>
> Attachments: SOLR-8326.patch, SOLR-8326.patch, SOLR-8326.patch, 
> pkiauth_ttl.patch
>
>
> This was reported on the mailing list:
> https://www.mail-archive.com/solr-user@lucene.apache.org/msg115921.html
> I tested it out as follows to confirm that adding a 'read' rule causes 
> replication to break. 






[JENKINS] Lucene-Solr-trunk-Linux (64bit/jdk1.8.0_66) - Build # 15288 - Failure!

2015-12-22 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/15288/
Java: 64bit/jdk1.8.0_66 -XX:-UseCompressedOops -XX:+UseSerialGC

1 tests failed.
FAILED:  
org.apache.lucene.search.TestDimensionalRangeQuery.testAllDimensionalDocsWereDeletedAndThenMergedAgain

Error Message:
must index at least one point

Stack Trace:
java.lang.IllegalStateException: must index at least one point
        at __randomizedtesting.SeedInfo.seed([AAC8CFA944E142D1:1C9E6E14877E764C]:0)
        at org.apache.lucene.util.bkd.BKDWriter.finish(BKDWriter.java:742)
        at org.apache.lucene.codecs.simpletext.SimpleTextDimensionalWriter.writeField(SimpleTextDimensionalWriter.java:160)
        at org.apache.lucene.codecs.DimensionalWriter.mergeOneField(DimensionalWriter.java:44)
        at org.apache.lucene.codecs.DimensionalWriter.merge(DimensionalWriter.java:106)
        at org.apache.lucene.index.SegmentMerger.mergeDimensionalValues(SegmentMerger.java:168)
        at org.apache.lucene.index.SegmentMerger.merge(SegmentMerger.java:117)
        at org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:4062)
        at org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:3642)
        at org.apache.lucene.index.SerialMergeScheduler.merge(SerialMergeScheduler.java:40)
        at org.apache.lucene.index.IndexWriter.maybeMerge(IndexWriter.java:1917)
        at org.apache.lucene.index.IndexWriter.forceMerge(IndexWriter.java:1750)
        at org.apache.lucene.index.IndexWriter.forceMerge(IndexWriter.java:1707)
        at org.apache.lucene.search.TestDimensionalRangeQuery.testAllDimensionalDocsWereDeletedAndThenMergedAgain(TestDimensionalRangeQuery.java:1003)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:497)
        at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
        at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
        at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
        at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
        at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
        at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
        at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
        at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
        at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
        at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
        at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
        at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
        at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
        at 

[jira] [Commented] (LUCENE-6919) Change the Scorer API to expose an iterator instead of extending DocIdSetIterator

2015-12-22 Thread ASF subversion and git services (JIRA)

[ https://issues.apache.org/jira/browse/LUCENE-6919?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15068393#comment-15068393 ]

ASF subversion and git services commented on LUCENE-6919:
-

Commit 1721433 from [~cpoerschke] in branch 'dev/trunk'
[ https://svn.apache.org/r1721433 ]

LUCENE-6919: In TestConjunctionDISI.java removed now redundant cast to Scorer.

> Change the Scorer API to expose an iterator instead of extending 
> DocIdSetIterator
> -
>
> Key: LUCENE-6919
> URL: https://issues.apache.org/jira/browse/LUCENE-6919
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Assignee: Adrien Grand
>Priority: Minor
> Fix For: 5.5, Trunk
>
> Attachments: LUCENE-6919.patch, LUCENE-6919.patch, LUCENE-6919.patch
>
>
> I was working on trying to address the performance regression on LUCENE-6815 
> but this is hard to do without introducing specialization of 
> DisjunctionScorer which I'd like to avoid at all costs.
> I think the performance regression would be easy to address without 
> specialization if Scorers were changed to return an iterator instead of 
> extending DocIdSetIterator. So conceptually the API would move from
> {code}
> class Scorer extends DocIdSetIterator {
> }
> {code}
> to
> {code}
> class Scorer {
>   DocIdSetIterator iterator();
> }
> {code}
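The API move above can be sketched end to end. The classes below are simplified stand-ins, not Lucene's real Scorer, DocIdSetIterator, or PostingsEnum; the point is only the shape of the change: a scorer hands out its iterator, so something like TermScorer can return its postings directly and drop out of the per-doc call path.

```java
// Hedged, simplified sketch of "Scorer exposes an iterator" -- stand-in
// classes only, not Lucene's actual implementations.
public class ScorerIteratorSketch {

    static abstract class DocIdSetIterator {
        static final int NO_MORE_DOCS = Integer.MAX_VALUE;
        abstract int nextDoc();
    }

    // Stand-in for a PostingsEnum over a fixed doc-id list.
    static class ArrayPostings extends DocIdSetIterator {
        private final int[] docs;
        private int i = -1;
        ArrayPostings(int... docs) { this.docs = docs; }
        int nextDoc() { return ++i < docs.length ? docs[i] : NO_MORE_DOCS; }
    }

    // The proposed API: a Scorer exposes an iterator instead of extending one.
    static abstract class Scorer {
        abstract DocIdSetIterator iterator();
        abstract float score();
    }

    static class TermScorerSketch extends Scorer {
        private final ArrayPostings postings;
        TermScorerSketch(ArrayPostings postings) { this.postings = postings; }
        // Returns the postings enum itself -- no extra delegation frame per doc.
        DocIdSetIterator iterator() { return postings; }
        float score() { return 1.0f; }
    }

    public static void main(String[] args) {
        Scorer scorer = new TermScorerSketch(new ArrayPostings(2, 7, 11));
        DocIdSetIterator it = scorer.iterator();
        for (int doc = it.nextDoc(); doc != DocIdSetIterator.NO_MORE_DOCS; doc = it.nextDoc()) {
            System.out.println(doc);
        }
    }
}
```

Under this shape, a disjunction over clauses with no two-phase support can return its approximation directly from `iterator()`, instead of re-checking `twoPhase == null` on every advance.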
> This would help me because then if none of the sub clauses support two-phase 
> iteration, DisjunctionScorer could directly return the approximation as an 
> iterator instead of having to check if twoPhase == null at every iteration.
> Such an approach could also help remove some method calls. For instance 
> TermScorer.nextDoc calls PostingsEnum.nextDoc but with this change 
> TermScorer.iterator() could return the PostingsEnum and TermScorer would not 
> even appear in stack traces when scoring. I hacked a patch to see how much 
> that would help and luceneutil seems to like the change:
> {noformat}
> TaskQPS baseline      StdDev   QPS patch      StdDev                Pct diff
>                   Fuzzy1       88.54     (15.7%)       86.73     (16.6%)   -2.0% ( -29% -   35%)
>               AndHighLow      698.98      (4.1%)      691.11      (5.1%)   -1.1% (  -9% -    8%)
>                   Fuzzy2       26.47     (11.2%)       26.28     (10.3%)   -0.7% ( -19% -   23%)
>              MedSpanNear      141.03      (3.3%)      140.51      (3.2%)   -0.4% (  -6% -    6%)
>               HighPhrase       60.66      (2.6%)       60.48      (3.3%)   -0.3% (  -5% -    5%)
>              LowSpanNear       29.25      (2.4%)       29.21      (2.1%)   -0.1% (  -4% -    4%)
>                MedPhrase       28.32      (1.9%)       28.28      (2.0%)   -0.1% (  -3% -    3%)
>                LowPhrase       17.31      (2.1%)       17.29      (2.6%)   -0.1% (  -4% -    4%)
>         HighSloppyPhrase       10.93      (6.0%)       10.92      (6.0%)   -0.1% ( -11% -   12%)
>          MedSloppyPhrase       72.21      (2.2%)       72.27      (1.8%)    0.1% (  -3% -    4%)
>                  Respell       57.35      (3.2%)       57.41      (3.4%)    0.1% (  -6% -    6%)
>             HighSpanNear       26.71      (3.0%)       26.75      (2.5%)    0.1% (  -5% -    5%)
>             OrNotHighLow      803.46      (3.4%)      807.03      (4.2%)    0.4% (  -6% -    8%)
>          LowSloppyPhrase       88.02      (3.4%)       88.77      (2.5%)    0.8% (  -4% -    7%)
>             OrNotHighMed      200.45      (2.7%)      203.83      (2.5%)    1.7% (  -3% -    7%)
>               OrHighHigh       38.98      (7.9%)       40.30      (6.6%)    3.4% ( -10% -   19%)
>                 HighTerm       92.53      (5.3%)       95.94      (5.8%)    3.7% (  -7% -   15%)
>                OrHighMed       53.80      (7.7%)       55.79      (6.6%)    3.7% (  -9% -   19%)
>               AndHighMed      266.69      (1.7%)      277.15      (2.5%)    3.9% (   0% -    8%)
>                  Prefix3       44.68      (5.4%)       46.60      (7.0%)    4.3% (  -7% -   17%)
>                  MedTerm      261.52      (4.9%)      273.52      (5.4%)    4.6% (  -5% -   15%)
>                 Wildcard       42.39      (6.1%)       44.35      (7.8%)    4.6% (  -8% -   19%)
>                   IntNRQ       10.46      (7.0%)       10.99      (9.5%)    5.0% ( -10% -   23%)
>            OrNotHighHigh       67.15      (4.6%)       70.65      (4.5%)    5.2% (  -3% -   15%)
>            OrHighNotHigh       43.07      (5.1%)       45.36      (5.4%)    5.3% (  -4% -   16%)
>                OrHighLow       64.19      (6.4%)       67.72      (5.5%)    5.5% (  -6% -   18%)
>              AndHighHigh       64.17      (2.3%)   

[jira] [Resolved] (SOLR-8454) Improve logging by ZkStateReader and clean up dead code

2015-12-22 Thread Shai Erera (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8454?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shai Erera resolved SOLR-8454.
--
Resolution: Fixed

Thanks [~anshum]. Committed to trunk and 5x.

> Improve logging by ZkStateReader and clean up dead code
> ---
>
> Key: SOLR-8454
> URL: https://issues.apache.org/jira/browse/SOLR-8454
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrCloud
>Reporter: Shai Erera
>Assignee: Shai Erera
>Priority: Minor
> Fix For: 5.5, Trunk
>
> Attachments: SOLR-8454.patch, SOLR-8454.patch, SOLR-8454.patch, 
> SOLR-8454.patch
>
>
> Improve logging output by ZkStateReader, by adding the following:
> * Use LOG.foo() with parameters properly (i.e. not concatenating strings w/ +)
> * Surround parameters with [], to help readability, especially w/ empty values
> * Add missing string messages, where I felt a message will clarify
> * Convert some try-catch to a try-multicatch and improve output log message
> Also, clean up dead code.
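The first two bullets can be illustrated with a small sketch. The `format` helper below is a minimal stand-in for the SLF4J-style `{}` placeholder substitution that real loggers provide; it is not Solr's or SLF4J's actual code, only a demonstration of why `LOG.info("... [{}]", value)` is preferable to building the message with `+`.

```java
// Hedged sketch of parameterized log messages with [] around values for
// readability. The format() helper mimics SLF4J's "{}" substitution; it is
// a stand-in, not the real logging API.
public class LogStyleSketch {

    /** Substitutes each "{}" in msg with the next argument, in order. */
    static String format(String msg, Object... args) {
        StringBuilder out = new StringBuilder();
        int from = 0;
        for (Object arg : args) {
            int at = msg.indexOf("{}", from);
            if (at < 0) break;                 // more args than placeholders
            out.append(msg, from, at).append(arg);
            from = at + 2;
        }
        return out.append(msg.substring(from)).toString();
    }

    public static void main(String[] args) {
        // Instead of: LOG.info("Updating data for " + coll + " from " + path);
        // write:      LOG.info("Updating data for [{}] from [{}]", coll, path);
        // The [] brackets make empty or whitespace values visible in the log.
        System.out.println(format("Updating data for [{}] from [{}]",
                "collection1", "/collections/collection1/state.json"));
    }
}
```

Besides readability, the parameterized form defers message construction until the logger knows the level is enabled, avoiding string concatenation on every call.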





