Re: Lucene/Solr git mirror will soon turn off

2015-12-17 Thread Dawid Weiss
> The question I had (I am sure a very dumb one): WHY do we care about history
preserved perfectly in Git?

For me it's for sentimental, archival and task-challenge reasons. Robert's
requirement is that git praise/blame/log works on a given file and
shows its true history of changes. Everyone has his own reasons, I guess. If
the initial clone is small enough then I see no problem in keeping the
history if we can preserve it.

Dawid



On Thu, Dec 17, 2015 at 4:52 AM, david.w.smi...@gmail.com <
david.w.smi...@gmail.com> wrote:

> +1 totally agree.  Anyway, the bloat should largely be the binaries &
> unrelated projects, not code (small text files).
>
> On Wed, Dec 16, 2015 at 10:36 PM Doug Turnbull <
> dturnb...@opensourceconnections.com> wrote:
>
>> In defense of more history immediately available--it is often far more
>> useful to poke around code history/run blame to figure out some code than
>> to take it at face value. Putting this in a secondary place like the
>> Apache SVN repo IMO reduces the readability of the code itself. This is
>> doubly true for new developers who won't know about Apache's SVN. And
>> Lucene can be quite intricate code. Further, in my own work poking around in
>> github mirrors I frequently hit the current cutoff, which is one reason I
>> stopped using them for anything but casual investigation.
>>
>> I'm not totally against a cutoff point, but I'd advocate for exhausting
>> other options first, such as trimming out unrelated projects, binaries, etc.
>>
>> -Doug
>>
>>
>> On Wednesday, December 16, 2015, Shawn Heisey 
>> wrote:
>>
>>> On 12/16/2015 5:53 PM, Alexandre Rafalovitch wrote:
>>> > On 16 December 2015 at 00:44, Dawid Weiss 
>>> wrote:
>>> >> 4) The size of JARs is really not an issue. The entire SVN repo I
>>> mirrored
>>> >> locally (including empty interim commits to cater for svn:mergeinfos)
>>> is 4G.
>>> >> If you strip the stuff like javadocs and side projects (Nutch, Tika,
>>> Mahout)
>>> >> then I bet the entire history can fit in 1G total. Of course
>>> stripping JARs
>>> >> is also doable.
>>> > I think this answered one of the issues. So, this is not something to
>>> focus on.
>>> >
>>> > The question I had (I am sure a very dumb one): WHY do we care about
>>> > history preserved perfectly in Git? Because that seems to be the real
>>> > bottleneck now. Does anybody still check out an intermediate commit
>>> > in the Solr 1.4 branch?
>>>
>>> I do not think we need every bit of history -- at least in the primary
>>> read/write repository.  I wonder how much of a size difference there
>>> would be between tossing all history before 5.0 and tossing all history
>>> before the ivy transition was completed.
>>>
>>> In the interests of reducing the size and download time of a clone
>>> operation, I definitely think we should trim history in the main repo to
>>> some arbitrary point, as long as the full history is available
>>> elsewhere.  It's my understanding that it will remain in svn.apache.org
>>> (possibly forever), and I think we could also create "historical"
>>> read-only git repos.
>>>
>>> Almost every time I am working on the code, I only care about the stable
>>> branch and trunk.  Sometimes I will check out an older 4.x tag so I can
>>> see the exact code referenced by a stacktrace in a user's error message,
>>> but when this is required, I am willing to go to an entirely different
>>> repository and chew up bandwidth/disk resources to obtain it, and I do
>>> not care whether it is git or svn.  As time marches on, fewer people
>>> will have reasons to look at the historical record.
>>>
>>> Thanks,
>>> Shawn
>>>
>>>
>>> -
>>> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>>> For additional commands, e-mail: dev-h...@lucene.apache.org
>>>
>>>
>> --
>> *Doug Turnbull **| *Search Relevance Consultant | OpenSource Connections
>> , LLC | 240.476.9983
>> Author: Relevant Search 
>> This e-mail and all contents, including attachments, is considered to be
>> Company Confidential unless explicitly stated otherwise, regardless
>> of whether attachments are marked as such.
>>
>> --
> Lucene/Solr Search Committer, Consultant, Developer, Author, Speaker
> LinkedIn: http://linkedin.com/in/davidwsmiley | Book:
> http://www.solrenterprisesearchserver.com
>


[JENKINS] Lucene-Solr-trunk-Windows (32bit/jdk1.8.0_66) - Build # 5478 - Still Failing!

2015-12-17 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Windows/5478/
Java: 32bit/jdk1.8.0_66 -client -XX:+UseG1GC

1 tests failed.
FAILED:  org.apache.solr.search.AnalyticsMergeStrategyTest.test

Error Message:
Error from server at https://127.0.0.1:62243//collection1: 
java.util.concurrent.ExecutionException: 
org.apache.solr.client.solrj.SolrServerException: IOException occured when 
talking to server at: https://127.0.0.1:62243//collection1

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at https://127.0.0.1:62243//collection1: 
java.util.concurrent.ExecutionException: 
org.apache.solr.client.solrj.SolrServerException: IOException occured when 
talking to server at: https://127.0.0.1:62243//collection1
at 
__randomizedtesting.SeedInfo.seed([FBAD4B4FA63F38D2:73F9749508C3552A]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:575)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:241)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:230)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:150)
at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:943)
at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:958)
at 
org.apache.solr.BaseDistributedSearchTestCase.queryServer(BaseDistributedSearchTestCase.java:562)
at 
org.apache.solr.search.AnalyticsMergeStrategyTest.test(AnalyticsMergeStrategyTest.java:80)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:965)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:940)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 

[jira] [Updated] (SOLR-8423) DELETESHARD should cleanup the instance and data directory, like DELETEREPLICA

2015-12-17 Thread Anshum Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8423?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anshum Gupta updated SOLR-8423:
---
Attachment: SOLR-8423.patch

Patch without a test. I'll add a test for this, and I also need to find a better
param for supporting deletion without cleaning up the instance directory
(like right now). In this patch, it's called _safedelete_.

> DELETESHARD should cleanup the instance and data directory, like DELETEREPLICA
> --
>
> Key: SOLR-8423
> URL: https://issues.apache.org/jira/browse/SOLR-8423
> Project: Solr
>  Issue Type: Bug
>Reporter: Anshum Gupta
> Attachments: SOLR-8423.patch
>
>
> DELETESHARD only cleans up the index directory and not the instance/data 
> directory. DELETEREPLICA on the other hand cleans up the data and instance 
> directory.
> DELETESHARD should clean up the instance and data directory, so that we don't 
> leak disk space on executing the command.
> If we think this would break back-compat, though I don't see why this should 
> not clean up the instance dir, we should at least provide an option to clean 
> up everything and make it default in 6.0.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8423) DELETESHARD should cleanup the instance and data directory, like DELETEREPLICA

2015-12-17 Thread Shai Erera (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8423?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15061776#comment-15061776
 ] 

Shai Erera commented on SOLR-8423:
--

So with this patch, the default is that we delete the index + instance + data 
directory, but if the user wants, he can request a _safedelete_ and the 
behavior goes back to what it is today? I'm fine with that.

> DELETESHARD should cleanup the instance and data directory, like DELETEREPLICA
> --
>
> Key: SOLR-8423
> URL: https://issues.apache.org/jira/browse/SOLR-8423
> Project: Solr
>  Issue Type: Bug
>Reporter: Anshum Gupta
> Attachments: SOLR-8423.patch
>
>
> DELETESHARD only cleans up the index directory and not the instance/data 
> directory. DELETEREPLICA on the other hand cleans up the data and instance 
> directory.
> DELETESHARD should clean up the instance and data directory, so that we don't 
> leak disk space on executing the command.
> If we think this would break back-compat, though I don't see why this should 
> not clean up the instance dir, we should at least provide an option to clean 
> up everything and make it default in 6.0.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8423) DELETESHARD should cleanup the instance and data directory, like DELETEREPLICA

2015-12-17 Thread Anshum Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8423?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15061804#comment-15061804
 ] 

Anshum Gupta commented on SOLR-8423:


I just wanted to keep it easy for users, and I think adding a single flag is an
easier option.
Also, we want to set both of those to true, and I personally feel that
having to set deleteInstanceDir/deleteDataDir to true is a little less intuitive.

> DELETESHARD should cleanup the instance and data directory, like DELETEREPLICA
> --
>
> Key: SOLR-8423
> URL: https://issues.apache.org/jira/browse/SOLR-8423
> Project: Solr
>  Issue Type: Bug
>Reporter: Anshum Gupta
> Attachments: SOLR-8423.patch
>
>
> DELETESHARD only cleans up the index directory and not the instance/data 
> directory. DELETEREPLICA on the other hand cleans up the data and instance 
> directory.
> DELETESHARD should clean up the instance and data directory, so that we don't 
> leak disk space on executing the command.
> If we think this would break back-compat, though I don't see why this should 
> not clean up the instance dir, we should at least provide an option to clean 
> up everything and make it default in 6.0.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8423) DELETESHARD should cleanup the instance and data directory, like DELETEREPLICA

2015-12-17 Thread Shalin Shekhar Mangar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8423?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15061813#comment-15061813
 ] 

Shalin Shekhar Mangar commented on SOLR-8423:
-

This is easy for the users. No need to remember what a new, vague-sounding
parameter does. deleteInstanceDir/deleteDataDir is as explicit as it gets.

What the hell is a safedelete anyway?
# We don't accidentally delete others' data?
# We don't delete our own data?
# We ensure that data is actually deleted and cannot be recovered?

If you really want to keep things simple for users, do what deletereplica
already does, i.e., delete the instance dir, data dir, and index automatically,
as we did in SOLR-6072. No switch is necessary to control that. If you want to
implement a way to control what gets deleted, implement it for both deleteshard
and deletereplica, and even better, just have deleteshard internally call
deletereplica and avoid the code duplication.
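For reference, here is a minimal, self-contained sketch of what issuing such a request from plain Java could look like, assuming DELETESHARD accepted the deleteInstanceDir/deleteDataDir flags discussed above (at this point they are only a proposal for this command); the host, collection and shard names are placeholders:

{code:java}
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.util.Scanner;

public class DeleteShardSketch {
    public static void main(String[] args) throws Exception {
        // Hypothetical request: DELETESHARD with the core-admin style cleanup
        // flags suggested above. The two delete* parameter names are assumptions,
        // not a released API for this command.
        String url = "http://localhost:8983/solr/admin/collections"
            + "?action=DELETESHARD&collection=collection1&shard=shard1"
            + "&deleteInstanceDir=true&deleteDataDir=true&wt=json";
        HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
        try (InputStream in = conn.getInputStream();
             Scanner scanner = new Scanner(in, "UTF-8").useDelimiter("\\A")) {
            // Print the raw JSON response from the Collections API.
            System.out.println(scanner.hasNext() ? scanner.next() : "");
        }
    }
}
{code}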

> DELETESHARD should cleanup the instance and data directory, like DELETEREPLICA
> --
>
> Key: SOLR-8423
> URL: https://issues.apache.org/jira/browse/SOLR-8423
> Project: Solr
>  Issue Type: Bug
>Reporter: Anshum Gupta
> Attachments: SOLR-8423.patch
>
>
> DELETESHARD only cleans up the index directory and not the instance/data 
> directory. DELETEREPLICA on the other hand cleans up the data and instance 
> directory.
> DELETESHARD should clean up the instance and data directory, so that we don't 
> leak disk space on executing the command.
> If we think this would break back-compat, though I don't see why this should 
> not clean up the instance dir, we should at least provide an option to clean 
> up everything and make it default in 6.0.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8423) DELETESHARD should cleanup the instance and data directory, like DELETEREPLICA

2015-12-17 Thread Shalin Shekhar Mangar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8423?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15061798#comment-15061798
 ] 

Shalin Shekhar Mangar commented on SOLR-8423:
-

Why not just use the existing deleteInstanceDir and deleteDataDir core admin 
params (and default them to true)? It is not obvious to me what 'safedelete' 
means.

> DELETESHARD should cleanup the instance and data directory, like DELETEREPLICA
> --
>
> Key: SOLR-8423
> URL: https://issues.apache.org/jira/browse/SOLR-8423
> Project: Solr
>  Issue Type: Bug
>Reporter: Anshum Gupta
> Attachments: SOLR-8423.patch
>
>
> DELETESHARD only cleans up the index directory and not the instance/data 
> directory. DELETEREPLICA on the other hand cleans up the data and instance 
> directory.
> DELETESHARD should clean up the instance and data directory, so that we don't 
> leak disk space on executing the command.
> If we think this would break back-compat, though I don't see why this should 
> not clean up the instance dir, we should at least provide an option to clean 
> up everything and make it default in 6.0.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-8434) Add a wildcard role, to match any role in RuleBasedAuthorizationPlugin

2015-12-17 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8434?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul updated SOLR-8434:
-
Affects Version/s: 5.3.1

> Add a wildcard role, to match any role in RuleBasedAuthorizationPlugin 
> ---
>
> Key: SOLR-8434
> URL: https://issues.apache.org/jira/browse/SOLR-8434
> Project: Solr
>  Issue Type: Improvement
>Affects Versions: 5.3.1
>Reporter: Noble Paul
>Assignee: Noble Paul
>
> I should be able to specify the role as {{*}}, which would mean that any request 
> with some authenticated user principal can access this resource.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS-EA] Lucene-Solr-trunk-Linux (64bit/jdk-9-ea+95) - Build # 15228 - Still Failing!

2015-12-17 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/15228/
Java: 64bit/jdk-9-ea+95 -XX:-UseCompressedOops -XX:+UseParallelGC 
-XX:-CompactStrings

1 tests failed.
FAILED:  org.apache.solr.cloud.CustomCollectionTest.test

Error Message:
Error from server at http://127.0.0.1:54729/tgnxr/j/implicitcollwithShardField: 
non ok status: 500, message:Server Error

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at http://127.0.0.1:54729/tgnxr/j/implicitcollwithShardField: non 
ok status: 500, message:Server Error
at 
__randomizedtesting.SeedInfo.seed([832B0A8575077A48:B7F355FDBFB17B0]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:509)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:241)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:230)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:150)
at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:943)
at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:958)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.waitForNon403or404or503(AbstractFullDistribZkTestBase.java:1754)
at 
org.apache.solr.cloud.CustomCollectionTest.testCustomCollectionsAPI(CustomCollectionTest.java:298)
at 
org.apache.solr.cloud.CustomCollectionTest.test(CustomCollectionTest.java:100)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:520)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:965)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:940)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Commented] (SOLR-8433) IterativeMergeStrategy test failures due to SSL errors on Windows

2015-12-17 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8433?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15062140#comment-15062140
 ] 

ASF subversion and git services commented on SOLR-8433:
---

Commit 1720563 from [~joel.bernstein] in branch 'dev/trunk'
[ https://svn.apache.org/r1720563 ]

SOLR-8433: IterativeMergeStrategy test failures due to SSL errors on Windows

> IterativeMergeStrategy test failures due to SSL errors  on Windows
> --
>
> Key: SOLR-8433
> URL: https://issues.apache.org/jira/browse/SOLR-8433
> Project: Solr
>  Issue Type: Bug
>Reporter: Joel Bernstein
>
> The AnalyticsMergeStrageyTest is failing on Windows with SSL errors. The 
> failures are occurring during the callbacks to the shards introduced in 
> SOLR-6398.
> {code}
>   
> [junit4]   2> Caused by: javax.net.ssl.SSLHandshakeException: 
> sun.security.validator.ValidatorException: PKIX path building failed: 
> sun.security.provider.certpath.SunCertPathBuilderException: unable to find 
> valid certification path to requested target
>[junit4]   2>  at 
> sun.security.ssl.Alerts.getSSLException(Alerts.java:192)
>[junit4]   2>  at 
> sun.security.ssl.SSLSocketImpl.fatal(SSLSocketImpl.java:1949)
>[junit4]   2>  at 
> sun.security.ssl.Handshaker.fatalSE(Handshaker.java:302)
>[junit4]   2>  at 
> sun.security.ssl.Handshaker.fatalSE(Handshaker.java:296)
>[junit4]   2>  at 
> sun.security.ssl.ClientHandshaker.serverCertificate(ClientHandshaker.java:1509)
>[junit4]   2>  at 
> sun.security.ssl.ClientHandshaker.processMessage(ClientHandshaker.java:216)
>[junit4]   2>  at 
> sun.security.ssl.Handshaker.processLoop(Handshaker.java:979)
>[junit4]   2>  at 
> sun.security.ssl.Handshaker.process_record(Handshaker.java:914)
>[junit4]   2>  at 
> sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:1062)
>[junit4]   2>  at 
> sun.security.ssl.SSLSocketImpl.performInitialHandshake(SSLSocketImpl.java:1375)
>[junit4]   2>  at 
> sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1403)
>[junit4]   2>  at 
> sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1387)
>[junit4]   2>  at 
> org.apache.http.conn.ssl.SSLSocketFactory.connectSocket(SSLSocketFactory.java:543)
>[junit4]   2>  at 
> org.apache.http.conn.ssl.SSLSocketFactory.connectSocket(SSLSocketFactory.java:409)
>[junit4]   2>  at 
> org.apache.http.impl.conn.DefaultClientConnectionOperator.openConnection(DefaultClientConnectionOperator.java:177)
>[junit4]   2>  at 
> org.apache.http.impl.conn.ManagedClientConnectionImpl.open(ManagedClientConnectionImpl.java:304)
>[junit4]   2>  at 
> org.apache.http.impl.client.DefaultRequestDirector.tryConnect(DefaultRequestDirector.java:611)
>[junit4]   2>  at 
> org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:446)
>[junit4]   2>  at 
> org.apache.http.impl.client.AbstractHttpClient.doExecute(AbstractHttpClient.java:882)
>[junit4]   2>  at 
> org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:82)
>[junit4]   2>  at 
> org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:107)
>[junit4]   2>  at 
> org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:55)
>[junit4]   2>  at 
> org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:480)
>[junit4]   2>  ... 11 more
>[junit4]   2> Caused by: sun.security.validator.ValidatorException: PKIX 
> path building failed: 
> sun.security.provider.certpath.SunCertPathBuilderException: unable to find 
> valid certification path to requested target
>[junit4]   2>  at 
> sun.security.validator.PKIXValidator.doBuild(PKIXValidator.java:387)
>[junit4]   2>  at 
> sun.security.validator.PKIXValidator.engineValidate(PKIXValidator.java:292)
>[junit4]   2>  at 
> sun.security.validator.Validator.validate(Validator.java:260)
>[junit4]   2>  at 
> sun.security.ssl.X509TrustManagerImpl.validate(X509TrustManagerImpl.java:324)
>[junit4]   2>  at 
> sun.security.ssl.X509TrustManagerImpl.checkTrusted(X509TrustManagerImpl.java:229)
>[junit4]   2>  at 
> sun.security.ssl.X509TrustManagerImpl.checkServerTrusted(X509TrustManagerImpl.java:124)
>[junit4]   2>  at 
> sun.security.ssl.ClientHandshaker.serverCertificate(ClientHandshaker.java:1491)
>[junit4]   2>  ... 29 more
>[junit4]   2> Caused by: 
> sun.security.provider.certpath.SunCertPathBuilderException: unable to find 
> valid certification path to requested target
>[junit4]   2>  at 
> 

[jira] [Commented] (SOLR-8431) Parent shard cannot be deleted after shard splitting

2015-12-17 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8431?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15062081#comment-15062081
 ] 

Joel Bernstein commented on SOLR-8431:
--

This is linked to SOLR-8125. I don't think this is connected to Streaming or SQL, so
I'm going to delete the link.

> Parent shard cannot be deleted after shard splitting
> 
>
> Key: SOLR-8431
> URL: https://issues.apache.org/jira/browse/SOLR-8431
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 4.10.1
>Reporter: Tan Lay How
>Priority: Blocker
>
> I performed shard splitting on 2 of the 3 shards with an async request id. The 
> shard splitting task failed when it tried to attach the replicas for the split 
> shards, but I found that on both of the leader split shards the documents were 
> split correctly (total numFound of both shards = total numFound of the parent 
> shard).
> So, I proceeded to manually change clusterstate.json: the split shards changed 
> status from construction to active, and the parent shards changed from active 
> to inactive. I also manually attached the replicas for the split shards, and 
> only then removed the parent shards by unloading the parent shard cores.
> Here the problem comes: when I issue a commit in SolrCloud, both of the parent 
> shards come back in the SolrCloud graph and clusterstate.json with node 
> status = down & shard status = active.
> 1) I brought the parent shard nodes up again and tried another core unload, 
> but every time I issue a commit, they come back into the graph and 
> clusterstate with node status = down & shard status = active.
> 2) So my second attempt was to delete the shards with the collection API using 
> /admin/collections?action=DELETESHARD
> I got the error below:
> org.apache.solr.client.solrj.impl.HttpSolrServer$RemoteSolrException:Cannot 
> >unload non-existent core
> Operation deleteshard caused exception: org.apache.solr.common.SolrException: 
> Could not fully remove collection: >candidates shard: candidates_shard2 at 
> org.apache.solr.handler.admin.CollectionsHandler.handleResponse(CollectionsHandler.java:364)
>  at 
> org.apache.solr.handler.admin.CollectionsHandler.handleResponse(CollectionsHandler.java:320)
>  at 
> org.apache.solr.handler.admin.CollectionsHandler.handleDeleteShardAction(CollectionsHandler.java:563)
>  at 
> org.apache.solr.handler.admin.CollectionsHandler.handleRequestBody(CollectionsHandler.java:176)
>  at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
>  at 
> org.apache.solr.servlet.SolrDispatchFilter.handleAdminRequest(SolrDispatchFilter.java:729)
>  at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:267)
>  at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:207)
>  at 
> org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:241)
>  at 
> org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:208)
>  at 
> org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:220)
>  at 
> org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:122)
>  at 
> org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:612)
>  at 
> org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:170) 
> at 
> org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:103) 
> at org.apache.catalina.valves.AutoLoginValve.invoke(AutoLoginValve.java:67) 
> at 
> org.apache.catalina.valves.RequestFilterValve.process(RequestFilterValve.java:304)
>  at 
> org.apache.catalina.valves.RemoteAddrValve.invoke(RemoteAddrValve.java:82) at 
> org.apache.catalina.valves.RemoteIpValve.invoke(RemoteIpValve.java:683) at 
> org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:950) at 
> org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:116)
>  at 
> org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:421) 
> at 
> org.apache.coyote.http11.AbstractHttp11Processor.process(AbstractHttp11Processor.java:1070)
>  at 
> org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:611)
>  at 
> org.apache.tomcat.util.net.JIoEndpoint$SocketProcessor.run(JIoEndpoint.java:316)
>  at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>  at 
> org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)
>  at java.lang.Thread.run(Thread.java:745)
> 3) My third attempt was to delete the replica using the collection API 
> admin/collections?action=DELETEREPLICA
> I got the error below:
> 

[jira] [Updated] (SOLR-8279) Add a new SolrCloud test that stops and starts the cluster while indexing data with fault injection.

2015-12-17 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8279?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller updated SOLR-8279:
--
Attachment: SOLR-8279.patch

> Add a new SolrCloud test that stops and starts the cluster while indexing 
> data with fault injection.
> 
>
> Key: SOLR-8279
> URL: https://issues.apache.org/jira/browse/SOLR-8279
> Project: Solr
>  Issue Type: Test
>Reporter: Mark Miller
>Assignee: Mark Miller
> Attachments: SOLR-8279.patch, SOLR-8279.patch, SOLR-8279.patch, 
> SOLR-8279.patch, SOLR-8279.patch, SOLR-8279.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8429) add a flag blockUnauthenticated to BasicAuthPlugin

2015-12-17 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8429?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15062022#comment-15062022
 ] 

Noble Paul commented on SOLR-8429:
--

If {{"blockUnauthenticated":true}} is set , you don't have the choice of 
allowing any path without authentication

However you can do the following . create a permission called {{all}} ( 
SOLR-8428 ) and then explicitly open up the path {{/solr/foo/select}} using a 
wild card role {{role:"*"}} ( SOLR-8434 ). The rules would look like the 
follows 

{code}
{
  "authorization": {
    "permissions": [
      {"name": "foo-read",
       "collection": "foo",
       "path": "/select",
       "role": null},
      {"name": "all",
       "role": "*"}]}}
{code}

> add a flag blockUnauthenticated to BasicAuthPlugin
> -
>
> Key: SOLR-8429
> URL: https://issues.apache.org/jira/browse/SOLR-8429
> Project: Solr
>  Issue Type: Improvement
>Reporter: Noble Paul
>Assignee: Noble Paul
>
> If authentication is set up with BasicAuthPlugin, it lets all requests go 
> through if no credentials are passed. This was done to have minimal impact 
> for users who only wish to protect a few end points (say, collection admin 
> and core admin only).
> We can add a flag to {{BasicAuthPlugin}} to allow only authenticated requests 
> to go through.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8190) Implement Closeable on TupleStream

2015-12-17 Thread Kevin Risden (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8190?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15062152#comment-15062152
 ] 

Kevin Risden commented on SOLR-8190:


So I put more thought into this last night, and since TupleStream has an open
method, try-with-resources is not really applicable. In the case here, it
will call close twice as implemented. try-with-resources can't be pushed into
getTuples since try-with-resources doesn't work with an open method.

Thinking about this brought up the following thoughts:
* What should happen when open is called twice?
* What should happen when close is called twice?
* What should happen when close is called without open being called?
* Are there places in the code where open/close is called without a 
try/finally? Will that cause issues?
* Are there places in the code where TupleStream.open is called without a 
related close call?

There are currently no checks to see if a stream has already been opened or
closed. This is what is causing the different NPEs like the one in SOLR-8191.

For this ticket, I think just implementing Closeable on TupleStream and not
changing the tests is appropriate. The above items should be addressed, though.
This will make the patch smaller, and the tests can be improved in follow-up
JIRAs.

[~joel.bernstein]/[~gerlowskija] - Thoughts on the above?
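A standalone sketch of the lifecycle concerns above (this is deliberately not the real TupleStream API; the class and method bodies are illustrative only). Making close() idempotent is what lets try-with-resources call it safely even if the stream was already closed, or was never successfully opened:

{code:java}
import java.io.Closeable;
import java.io.IOException;

// Illustrative stand-in for a stream that has an explicit open() lifecycle.
class SketchStream implements Closeable {
    private boolean opened = false;
    private boolean closed = false;

    public void open() throws IOException {
        if (opened) {
            throw new IOException("open() called twice");
        }
        opened = true;
        // acquire underlying resources here
    }

    public String read() throws IOException {
        if (!opened || closed) {
            throw new IOException("read() on a stream that is not open");
        }
        return null; // this sketch returns EOF immediately
    }

    @Override
    public void close() throws IOException {
        if (closed) {
            return; // a second close() is a harmless no-op
        }
        closed = true;
        // release underlying resources here
    }

    public static void main(String[] args) throws IOException {
        try (SketchStream stream = new SketchStream()) {
            stream.open();
            String tuple;
            while ((tuple = stream.read()) != null) {
                System.out.println(tuple);
            }
        } // try-with-resources calls close() here, even if read() threw
    }
}
{code}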

> Implement Closeable on TupleStream
> --
>
> Key: SOLR-8190
> URL: https://issues.apache.org/jira/browse/SOLR-8190
> Project: Solr
>  Issue Type: Bug
>  Components: SolrJ
>Affects Versions: Trunk
>Reporter: Kevin Risden
>Assignee: Joel Bernstein
>Priority: Minor
> Attachments: SOLR-8190.patch, SOLR-8190.patch
>
>
> Implementing Closeable on TupleStream provides the ability to use 
> try-with-resources 
> (https://docs.oracle.com/javase/tutorial/essential/exceptions/tryResourceClose.html)
>  in tests and in practice. This prevents TupleStreams from being left open 
> when there is an error in the tests.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8433) IterativeMergeStrategy test failures due to SSL errors on Windows

2015-12-17 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8433?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15062150#comment-15062150
 ] 

Joel Bernstein commented on SOLR-8433:
--

Odd, the link above to revision is returning an error and my changes haven't 
shown up in svn.

Below is the local response from the commit:

Sending
solr/core/src/test/org/apache/solr/search/AnalyticsMergeStrategyTest.java
Transmitting file data .
Committed revision 1720563.



> IterativeMergeStrategy test failures due to SSL errors  on Windows
> --
>
> Key: SOLR-8433
> URL: https://issues.apache.org/jira/browse/SOLR-8433
> Project: Solr
>  Issue Type: Bug
>Reporter: Joel Bernstein
>
> The AnalyticsMergeStrageyTest is failing on Windows with SSL errors. The 
> failures are occurring during the callbacks to the shards introduced in 
> SOLR-6398.
> {code}
>   
> [junit4]   2> Caused by: javax.net.ssl.SSLHandshakeException: 
> sun.security.validator.ValidatorException: PKIX path building failed: 
> sun.security.provider.certpath.SunCertPathBuilderException: unable to find 
> valid certification path to requested target
>[junit4]   2>  at 
> sun.security.ssl.Alerts.getSSLException(Alerts.java:192)
>[junit4]   2>  at 
> sun.security.ssl.SSLSocketImpl.fatal(SSLSocketImpl.java:1949)
>[junit4]   2>  at 
> sun.security.ssl.Handshaker.fatalSE(Handshaker.java:302)
>[junit4]   2>  at 
> sun.security.ssl.Handshaker.fatalSE(Handshaker.java:296)
>[junit4]   2>  at 
> sun.security.ssl.ClientHandshaker.serverCertificate(ClientHandshaker.java:1509)
>[junit4]   2>  at 
> sun.security.ssl.ClientHandshaker.processMessage(ClientHandshaker.java:216)
>[junit4]   2>  at 
> sun.security.ssl.Handshaker.processLoop(Handshaker.java:979)
>[junit4]   2>  at 
> sun.security.ssl.Handshaker.process_record(Handshaker.java:914)
>[junit4]   2>  at 
> sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:1062)
>[junit4]   2>  at 
> sun.security.ssl.SSLSocketImpl.performInitialHandshake(SSLSocketImpl.java:1375)
>[junit4]   2>  at 
> sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1403)
>[junit4]   2>  at 
> sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1387)
>[junit4]   2>  at 
> org.apache.http.conn.ssl.SSLSocketFactory.connectSocket(SSLSocketFactory.java:543)
>[junit4]   2>  at 
> org.apache.http.conn.ssl.SSLSocketFactory.connectSocket(SSLSocketFactory.java:409)
>[junit4]   2>  at 
> org.apache.http.impl.conn.DefaultClientConnectionOperator.openConnection(DefaultClientConnectionOperator.java:177)
>[junit4]   2>  at 
> org.apache.http.impl.conn.ManagedClientConnectionImpl.open(ManagedClientConnectionImpl.java:304)
>[junit4]   2>  at 
> org.apache.http.impl.client.DefaultRequestDirector.tryConnect(DefaultRequestDirector.java:611)
>[junit4]   2>  at 
> org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:446)
>[junit4]   2>  at 
> org.apache.http.impl.client.AbstractHttpClient.doExecute(AbstractHttpClient.java:882)
>[junit4]   2>  at 
> org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:82)
>[junit4]   2>  at 
> org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:107)
>[junit4]   2>  at 
> org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:55)
>[junit4]   2>  at 
> org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:480)
>[junit4]   2>  ... 11 more
>[junit4]   2> Caused by: sun.security.validator.ValidatorException: PKIX 
> path building failed: 
> sun.security.provider.certpath.SunCertPathBuilderException: unable to find 
> valid certification path to requested target
>[junit4]   2>  at 
> sun.security.validator.PKIXValidator.doBuild(PKIXValidator.java:387)
>[junit4]   2>  at 
> sun.security.validator.PKIXValidator.engineValidate(PKIXValidator.java:292)
>[junit4]   2>  at 
> sun.security.validator.Validator.validate(Validator.java:260)
>[junit4]   2>  at 
> sun.security.ssl.X509TrustManagerImpl.validate(X509TrustManagerImpl.java:324)
>[junit4]   2>  at 
> sun.security.ssl.X509TrustManagerImpl.checkTrusted(X509TrustManagerImpl.java:229)
>[junit4]   2>  at 
> sun.security.ssl.X509TrustManagerImpl.checkServerTrusted(X509TrustManagerImpl.java:124)
>[junit4]   2>  at 
> sun.security.ssl.ClientHandshaker.serverCertificate(ClientHandshaker.java:1491)
>[junit4]   2>  ... 29 more
>[junit4]   2> Caused by: 
> sun.security.provider.certpath.SunCertPathBuilderException: unable to find 
> valid certification path to requested target
>

[jira] [Comment Edited] (SOLR-8433) IterativeMergeStrategy test failures due to SSL errors on Windows

2015-12-17 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8433?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15062150#comment-15062150
 ] 

Joel Bernstein edited comment on SOLR-8433 at 12/17/15 2:58 PM:


Odd, the link above to the revision is returning an error and my changes 
haven't shown up in svn.

Below is the local response from the commit:

Sending
solr/core/src/test/org/apache/solr/search/AnalyticsMergeStrategyTest.java
Transmitting file data .
Committed revision 1720563.




was (Author: joel.bernstein):
Odd, the link above to revision is returning an error and my changes haven't 
shown up in svn.

Below is the local response from the commit:

Sending
solr/core/src/test/org/apache/solr/search/AnalyticsMergeStrategyTest.java
Transmitting file data .
Committed revision 1720563.



> IterativeMergeStrategy test failures due to SSL errors  on Windows
> --
>
> Key: SOLR-8433
> URL: https://issues.apache.org/jira/browse/SOLR-8433
> Project: Solr
>  Issue Type: Bug
>Reporter: Joel Bernstein
>
> The AnalyticsMergeStrageyTest is failing on Windows with SSL errors. The 
> failures are occurring during the callbacks to the shards introduced in 
> SOLR-6398.
> {code}
>   
> [junit4]   2> Caused by: javax.net.ssl.SSLHandshakeException: 
> sun.security.validator.ValidatorException: PKIX path building failed: 
> sun.security.provider.certpath.SunCertPathBuilderException: unable to find 
> valid certification path to requested target
>[junit4]   2>  at 
> sun.security.ssl.Alerts.getSSLException(Alerts.java:192)
>[junit4]   2>  at 
> sun.security.ssl.SSLSocketImpl.fatal(SSLSocketImpl.java:1949)
>[junit4]   2>  at 
> sun.security.ssl.Handshaker.fatalSE(Handshaker.java:302)
>[junit4]   2>  at 
> sun.security.ssl.Handshaker.fatalSE(Handshaker.java:296)
>[junit4]   2>  at 
> sun.security.ssl.ClientHandshaker.serverCertificate(ClientHandshaker.java:1509)
>[junit4]   2>  at 
> sun.security.ssl.ClientHandshaker.processMessage(ClientHandshaker.java:216)
>[junit4]   2>  at 
> sun.security.ssl.Handshaker.processLoop(Handshaker.java:979)
>[junit4]   2>  at 
> sun.security.ssl.Handshaker.process_record(Handshaker.java:914)
>[junit4]   2>  at 
> sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:1062)
>[junit4]   2>  at 
> sun.security.ssl.SSLSocketImpl.performInitialHandshake(SSLSocketImpl.java:1375)
>[junit4]   2>  at 
> sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1403)
>[junit4]   2>  at 
> sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1387)
>[junit4]   2>  at 
> org.apache.http.conn.ssl.SSLSocketFactory.connectSocket(SSLSocketFactory.java:543)
>[junit4]   2>  at 
> org.apache.http.conn.ssl.SSLSocketFactory.connectSocket(SSLSocketFactory.java:409)
>[junit4]   2>  at 
> org.apache.http.impl.conn.DefaultClientConnectionOperator.openConnection(DefaultClientConnectionOperator.java:177)
>[junit4]   2>  at 
> org.apache.http.impl.conn.ManagedClientConnectionImpl.open(ManagedClientConnectionImpl.java:304)
>[junit4]   2>  at 
> org.apache.http.impl.client.DefaultRequestDirector.tryConnect(DefaultRequestDirector.java:611)
>[junit4]   2>  at 
> org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:446)
>[junit4]   2>  at 
> org.apache.http.impl.client.AbstractHttpClient.doExecute(AbstractHttpClient.java:882)
>[junit4]   2>  at 
> org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:82)
>[junit4]   2>  at 
> org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:107)
>[junit4]   2>  at 
> org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:55)
>[junit4]   2>  at 
> org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:480)
>[junit4]   2>  ... 11 more
>[junit4]   2> Caused by: sun.security.validator.ValidatorException: PKIX 
> path building failed: 
> sun.security.provider.certpath.SunCertPathBuilderException: unable to find 
> valid certification path to requested target
>[junit4]   2>  at 
> sun.security.validator.PKIXValidator.doBuild(PKIXValidator.java:387)
>[junit4]   2>  at 
> sun.security.validator.PKIXValidator.engineValidate(PKIXValidator.java:292)
>[junit4]   2>  at 
> sun.security.validator.Validator.validate(Validator.java:260)
>[junit4]   2>  at 
> sun.security.ssl.X509TrustManagerImpl.validate(X509TrustManagerImpl.java:324)
>[junit4]   2>  at 
> sun.security.ssl.X509TrustManagerImpl.checkTrusted(X509TrustManagerImpl.java:229)
>[junit4]   2>  at 
> 

[jira] [Created] (SOLR-8435) Long update times Solr 5.3.1

2015-12-17 Thread Kenny Knecht (JIRA)
Kenny Knecht created SOLR-8435:
--

 Summary: Long update times Solr 5.3.1
 Key: SOLR-8435
 URL: https://issues.apache.org/jira/browse/SOLR-8435
 Project: Solr
  Issue Type: Bug
  Components: update
Affects Versions: 5.3.1
 Environment: Ubuntu server 128Gb
Reporter: Kenny Knecht
 Fix For: 5.2.1


We have two 128 GB Ubuntu servers in a SolrCloud config. We update by curling JSON 
files of 20,000 documents. In 5.2.1 this consistently takes between 19 and 24 
seconds. In 5.3.1 it usually takes about 20s, but for about 20% of the files it 
takes much longer: up to 500s! Which files are affected seems to be quite random. Is 
this a known bug? Any workaround? Is it fixed in 5.4?
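For anyone trying to reproduce the timing, a minimal Java sketch of the kind of bulk JSON update described above (equivalent to the curl calls; the host, collection and file name are placeholders, and whether commit=true is part of the original workflow is an assumption):

{code:java}
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.file.Files;
import java.nio.file.Paths;

public class JsonUpdateSketch {
    public static void main(String[] args) throws Exception {
        // Placeholder host/collection/file; the setup in the report posts
        // files of roughly 20,000 documents each.
        byte[] body = Files.readAllBytes(Paths.get("docs.json"));
        URL url = new URL("http://localhost:8983/solr/collection1/update?commit=true");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        conn.setRequestProperty("Content-Type", "application/json");
        conn.setDoOutput(true);
        long start = System.nanoTime();
        try (OutputStream out = conn.getOutputStream()) {
            out.write(body);
        }
        int status = conn.getResponseCode(); // forces the request to complete
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;
        System.out.println("HTTP " + status + " in " + elapsedMs + " ms");
    }
}
{code}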



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6926) Take matchCost into account for MUST_NOT clauses

2015-12-17 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6926?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15062015#comment-15062015
 ] 

ASF subversion and git services commented on LUCENE-6926:
-

Commit 1720544 from [~jpountz] in branch 'dev/trunk'
[ https://svn.apache.org/r1720544 ]

LUCENE-6926: Take the match cost into account for MUST_NOT clauses.

> Take matchCost into account for MUST_NOT clauses
> 
>
> Key: LUCENE-6926
> URL: https://issues.apache.org/jira/browse/LUCENE-6926
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Assignee: Adrien Grand
>Priority: Minor
> Attachments: LUCENE-6926.patch, LUCENE-6926.patch
>
>
> ReqExclScorer potentially has two TwoPhaseIterators to check: the one for the 
> positive clause and the one for the negative clause. It should leverage the 
> match cost API to check the least costly one first.
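A rough sketch of the idea in the description above (not the actual patch): with both clauses positioned on the same candidate document, run the cheaper two-phase confirmation first, so the expensive one executes only when the cheap one has not already decided the outcome. Class and method names here are illustrative, and the sketch assumes both clauses need confirmation:

{code:java}
import java.io.IOException;
import org.apache.lucene.search.TwoPhaseIterator;

class ReqExclMatchSketch {
    // Returns true if the required clause matches and the excluded clause does
    // not, checking the less costly confirmation first.
    static boolean matchesReqButNotExcl(TwoPhaseIterator reqTwoPhase,
                                        TwoPhaseIterator exclTwoPhase) throws IOException {
        if (exclTwoPhase.matchCost() < reqTwoPhase.matchCost()) {
            // The exclusion is cheaper to confirm: if it matches, the doc is out.
            if (exclTwoPhase.matches()) {
                return false;
            }
            return reqTwoPhase.matches();
        }
        // Otherwise confirm the required clause first.
        if (!reqTwoPhase.matches()) {
            return false;
        }
        return !exclTwoPhase.matches();
    }
}
{code}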



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6926) Take matchCost into account for MUST_NOT clauses

2015-12-17 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6926?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15062035#comment-15062035
 ] 

ASF subversion and git services commented on LUCENE-6926:
-

Commit 1720548 from [~jpountz] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1720548 ]

LUCENE-6926: Take the match cost into account for MUST_NOT clauses.

> Take matchCost into account for MUST_NOT clauses
> 
>
> Key: LUCENE-6926
> URL: https://issues.apache.org/jira/browse/LUCENE-6926
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Assignee: Adrien Grand
>Priority: Minor
> Attachments: LUCENE-6926.patch, LUCENE-6926.patch
>
>
> ReqExclScorer potentially has two TwoPhaseIterators to check: the one for the 
> positive clause and the one for the negative clause. It should leverage the 
> match cost API to check the least costly one first.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: 5.3.2 bug fix release

2015-12-17 Thread Jan Høydahl
If there is a 5.3.2 release, we should probably also backport this one:

SOLR-8269: Upgrade 
commons-collections to 3.2.2. This fixes a known serialization vulnerability.

--
Jan Høydahl, search solution architect
Cominvent AS - www.cominvent.com

> On 17 Dec 2015, at 07:35, Anshum Gupta wrote:
> 
> Yes, there was already a 5.3.2 version in JIRA. I will start back-porting 
> stuff to the lucene_solr_5_3 branch later in the day today.
> 
> 
> On Thu, Dec 17, 2015 at 11:35 AM, Noble Paul  > wrote:
> Agree with Shawn here.
> 
> If a company has already done the work to upgrade their systems to
> 5.3.1, they would rather have a bug fix for the old version.
> 
> So Anshum, is there a 5.3.2 version created in JIRA? Can we start
> tagging issues to that release so that we can have a definitive list
> of bugs to be backported?
> 
> On Thu, Dec 17, 2015 at 10:27 AM, Anshum Gupta  > wrote:
> > Thanks for explaining it so well Shawn :)
> >
> > Yes, that's pretty much the reason why it makes sense to have a 5.3.2
> > release.
> >
> > We might even need a 5.4.1 after that as there are more security bug fixes
> > that are getting committed as the feature is being tried by users and bugs
> > are being reported.
> >
> > On Thu, Dec 17, 2015 at 3:28 AM, Shawn Heisey  > > wrote:
> >>
> >> On 12/16/2015 2:15 PM, Upayavira wrote:
> >> > Why don't people just upgrade to 5.4? Why do we need another release in
> >> > the 5.3.x range?
> >>
> >> I am using a third-party custom Solr plugin.  The latest version of that
> >> plugin (which I have on my dev server) has only been certified to work
> >> with Solr 5.3.x.  There's a chance that it won't work with 5.4, so I
> >> cannot use that version yet.  If I happen to need any of the fixes that
> >> are being backported, an official 5.3.2 release would allow me to use
> >> official binaries, which will make my managers much more comfortable
> >> than a version that I compile myself.
> >>
> >> Additionally, the IT change policies in place for many businesses
> >> require a huge amount of QA work for software upgrades, but those
> >> policies may be relaxed for hotfixes and upgrades that are *only*
> >> bugfixes.  For users operating under those policies, a bugfix release
> >> will allow them to fix bugs immediately, rather than spend several weeks
> >> validating a new minor release.
> >>
> >> There is a huge amount of interest in the new security features in
> >> 5.3.x, functionality that has a number of critical problems.  Lots of
> >> users who need those features have already deployed 5.3.1.  Many of the
> >> critical problems are fixed in 5.4, and these are the fixes that Anshum
> >> wants to make available in 5.3.2.  If a user is in either of the
> >> situations that I outlined above, upgrading to 5.4 may be unrealistic.
> >>
> >> Thanks,
> >> Shawn
> >>
> >>
> >> -
> >> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org 
> >> 
> >> For additional commands, e-mail: dev-h...@lucene.apache.org 
> >> 
> >>
> >
> >
> >
> > --
> > Anshum Gupta
> 
> 
> 
> --
> -
> Noble Paul
> 
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org 
> 
> For additional commands, e-mail: dev-h...@lucene.apache.org 
> 
> 
> 
> 
> 
> -- 
> Anshum Gupta



[jira] [Updated] (SOLR-7341) xjoin - join data from external sources

2015-12-17 Thread Tom Winch (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7341?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tom Winch updated SOLR-7341:

Attachment: SOLR-7341.patch-5_3

Patch for SOLR 5.3

> xjoin - join data from external sources
> ---
>
> Key: SOLR-7341
> URL: https://issues.apache.org/jira/browse/SOLR-7341
> Project: Solr
>  Issue Type: New Feature
>  Components: search
>Affects Versions: 4.10.3
>Reporter: Tom Winch
>Priority: Minor
> Fix For: Trunk
>
> Attachments: SOLR-7341.patch, SOLR-7341.patch, SOLR-7341.patch, 
> SOLR-7341.patch, SOLR-7341.patch, SOLR-7341.patch, SOLR-7341.patch-5_3, 
> SOLR-7341.patch-trunk, SOLR-7341.patch-trunk, SOLR-7341.patch-trunk
>
>
> h2. XJoin
> The "xjoin" SOLR contrib allows external results to be joined with SOLR 
> results in a query and the SOLR result set to be filtered by the results of 
> an external query. Values from the external results are made available in the 
> SOLR results and may also be used to boost the scores of corresponding 
> documents during the search. The contrib consists of the Java classes 
> XJoinSearchComponent, XJoinValueSourceParser and XJoinQParserPlugin (and 
> associated classes), which must be configured in solrconfig.xml, and the 
> interfaces XJoinResultsFactory and XJoinResults, which are implemented by the 
> user to provide the link between SOLR and the external results source. 
> External results and SOLR documents are matched via a single configurable 
> attribute (the "join field"). The contrib JAR solr-xjoin-4.10.3.jar contains 
> these classes and interfaces and should be included in SOLR's class path from 
> solrconfig.xml, as should a JAR containing the user implementations of the 
> previously mentioned interfaces. For example:
> {code:xml}
> <config>
>   ..
>   <lib path="/path/to/solr-xjoin-4.10.3.jar" />
>   ..
>   <lib path="/path/to/user-xjoin-implementations.jar" />
>   ..
> </config>
> {code}
> h2. Java classes and interfaces
> h3. XJoinResultsFactory
> The user implementation of this interface is responsible for connecting to an 
> external source to perform a query (or otherwise collect results). Parameters 
> with prefix "<component name>.external." are passed from the SOLR query URL 
> to parameterise the search. The interface has the following methods:
> * void init(NamedList args) - this is called during SOLR initialisation, and 
> passed parameters from the search component configuration (see below)
> * XJoinResults getResults(SolrParams params) - this is called during a SOLR 
> search to generate external results, and is passed parameters from the SOLR 
> query URL (as above)
> For example, the implementation might perform queries of an external source 
> based on the 'q' SOLR query URL parameter (in full, <component name>.external.q).
> h3. XJoinResults
> A user implementation of this interface is returned by the getResults() 
> method of the XJoinResultsFactory implementation. It has methods:
> * Object getResult(String joinId) - this should return a particular result 
> given the value of the join attribute
> * Iterable getJoinIds() - this should return an ordered (ascending) 
> list of the join attribute values for all results of the external search
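As an illustration, a minimal sketch of what user implementations of these two interfaces might look like, written only from the method descriptions above (the generic types and exact signatures of the actual contrib interfaces may differ, so simplified stand-ins are declared here):

{code:java}
import java.util.ArrayList;
import java.util.List;
import java.util.TreeMap;

import org.apache.solr.common.params.SolrParams;
import org.apache.solr.common.util.NamedList;

// Simplified stand-ins for the contrib's interfaces, based on the prose above.
interface SketchXJoinResults {
    Object getResult(String joinId);
    Iterable<String> getJoinIds();
}

interface SketchXJoinResultsFactory {
    void init(NamedList args);
    SketchXJoinResults getResults(SolrParams params);
}

// A toy factory whose "external source" is just a fixed list of ids taken from
// the component configuration (e.g. the "values" parameter shown later).
class FixedIdsResultsFactory implements SketchXJoinResultsFactory {
    private final List<String> ids = new ArrayList<>();

    @Override
    public void init(NamedList args) {
        Object v = args.get("values");
        if (v != null) {
            for (String s : v.toString().split(",")) {
                ids.add(s.trim());
            }
        }
    }

    @Override
    public SketchXJoinResults getResults(SolrParams params) {
        // A real implementation would query the external source using params.
        final TreeMap<String, Object> results = new TreeMap<>();
        for (String id : ids) {
            results.put(id, "external-value-for-" + id);
        }
        return new SketchXJoinResults() {
            @Override
            public Object getResult(String joinId) {
                return results.get(joinId);
            }

            @Override
            public Iterable<String> getJoinIds() {
                return results.keySet(); // TreeMap keys iterate in ascending order
            }
        };
    }
}
{code}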
> h3. XJoinSearchComponent
> This is the central Java class of the contrib. It is a SOLR search component, 
> configured in solrconfig.xml and included in one or more SOLR request 
> handlers. There is one XJoin search component per external source, and each 
> has two main responsibilities:
> * Before the SOLR search, it connects to the external source and retrieves 
> results, storing them in the SOLR request context
> * After the SOLR search, it matches SOLR document in the results set and 
> external results via the join field, adding attributes from the external 
> results to documents in the SOLR results set
> It takes the following initialisation parameters:
> * factoryClass - this specifies the user-supplied class implementing 
> XJoinResultsFactory, used to generate external results
> * joinField - this specifies the attribute on which to join between SOLR 
> documents and external results
> * external - this parameter set is passed to configure the 
> XJoinResultsFactory implementation
> For example, in solrconfig.xml:
> {code:xml}
> <!-- element names reconstructed from the surrounding description -->
> <searchComponent name="xjoin" class="org.apache.solr.search.xjoin.XJoinSearchComponent">
>   <str name="factoryClass">test.TestXJoinResultsFactory</str>
>   <str name="joinField">id</str>
>   <lst name="external">
>     <str name="values">1,2,3</str>
>   </lst>
> </searchComponent>
> {code}
> Here, the search component instantiates a new TestXJoinResultsFactory during 
> initialisation, and passes it the "values" parameter (1, 2, 3) to configure 
> it. To properly use the XJoinSearchComponent in a request handler, it must be 
> included at the start and end of the component list, and may be configured 
> with the following query parameters:
> * results - a comma-separated list of attributes from the XJoinResults 
> implementation (created by the factory at search time) to be included in the 
> SOLR results
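
For readers trying to picture the user-side classes in the description above, a 
minimal, hypothetical sketch of an XJoinResultsFactory / XJoinResults pair follows. 
It is based only on the method names given in the description; the package names, 
generics and exception signatures of the actual contrib interfaces are assumptions.

{code}
// Hypothetical sketch only; the real contrib interfaces may differ in package,
// generics and thrown exceptions.
package test;

import java.util.Arrays;
import java.util.List;
import java.util.SortedMap;
import java.util.TreeMap;

import org.apache.solr.common.params.SolrParams;
import org.apache.solr.common.util.NamedList;
import org.apache.solr.search.xjoin.XJoinResults;
import org.apache.solr.search.xjoin.XJoinResultsFactory;

public class TestXJoinResultsFactory implements XJoinResultsFactory {

  private List<String> values;

  public void init(NamedList args) {
    // configuration passed from the <lst name="external"> block in solrconfig.xml
    values = Arrays.asList(((String) args.get("values")).split(","));
  }

  public XJoinResults getResults(SolrParams params) {
    // a real implementation would query the external source here, driven by
    // the <component name>.external.* request parameters
    SortedMap<String, Object> byJoinId = new TreeMap<>();
    for (String value : values) {
      byJoinId.put(value, "external result for " + value);
    }
    return new Results(byJoinId);
  }

  static class Results implements XJoinResults {

    private final SortedMap<String, Object> byJoinId;

    Results(SortedMap<String, Object> byJoinId) {
      this.byJoinId = byJoinId;
    }

    public Object getResult(String joinId) {
      return byJoinId.get(joinId);
    }

    public Iterable getJoinIds() {
      return byJoinId.keySet();  // TreeMap keys iterate in ascending order
    }
  }
}
{code}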

[jira] [Commented] (SOLR-8433) IterativeMergeStrategy test failures due to SSL errors on Windows

2015-12-17 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8433?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15062159#comment-15062159
 ] 

Joel Bernstein commented on SOLR-8433:
--

Ok, the commit has now shown up in svn. This was a delay that I haven't seen 
before.

> IterativeMergeStrategy test failures due to SSL errors  on Windows
> --
>
> Key: SOLR-8433
> URL: https://issues.apache.org/jira/browse/SOLR-8433
> Project: Solr
>  Issue Type: Bug
>Reporter: Joel Bernstein
>
> The AnalyticsMergeStrategyTest is failing on Windows with SSL errors. The 
> failures are occurring during the callbacks to the shards introduced in 
> SOLR-6398.
> {code}
>   
> [junit4]   2> Caused by: javax.net.ssl.SSLHandshakeException: 
> sun.security.validator.ValidatorException: PKIX path building failed: 
> sun.security.provider.certpath.SunCertPathBuilderException: unable to find 
> valid certification path to requested target
>[junit4]   2>  at 
> sun.security.ssl.Alerts.getSSLException(Alerts.java:192)
>[junit4]   2>  at 
> sun.security.ssl.SSLSocketImpl.fatal(SSLSocketImpl.java:1949)
>[junit4]   2>  at 
> sun.security.ssl.Handshaker.fatalSE(Handshaker.java:302)
>[junit4]   2>  at 
> sun.security.ssl.Handshaker.fatalSE(Handshaker.java:296)
>[junit4]   2>  at 
> sun.security.ssl.ClientHandshaker.serverCertificate(ClientHandshaker.java:1509)
>[junit4]   2>  at 
> sun.security.ssl.ClientHandshaker.processMessage(ClientHandshaker.java:216)
>[junit4]   2>  at 
> sun.security.ssl.Handshaker.processLoop(Handshaker.java:979)
>[junit4]   2>  at 
> sun.security.ssl.Handshaker.process_record(Handshaker.java:914)
>[junit4]   2>  at 
> sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:1062)
>[junit4]   2>  at 
> sun.security.ssl.SSLSocketImpl.performInitialHandshake(SSLSocketImpl.java:1375)
>[junit4]   2>  at 
> sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1403)
>[junit4]   2>  at 
> sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1387)
>[junit4]   2>  at 
> org.apache.http.conn.ssl.SSLSocketFactory.connectSocket(SSLSocketFactory.java:543)
>[junit4]   2>  at 
> org.apache.http.conn.ssl.SSLSocketFactory.connectSocket(SSLSocketFactory.java:409)
>[junit4]   2>  at 
> org.apache.http.impl.conn.DefaultClientConnectionOperator.openConnection(DefaultClientConnectionOperator.java:177)
>[junit4]   2>  at 
> org.apache.http.impl.conn.ManagedClientConnectionImpl.open(ManagedClientConnectionImpl.java:304)
>[junit4]   2>  at 
> org.apache.http.impl.client.DefaultRequestDirector.tryConnect(DefaultRequestDirector.java:611)
>[junit4]   2>  at 
> org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:446)
>[junit4]   2>  at 
> org.apache.http.impl.client.AbstractHttpClient.doExecute(AbstractHttpClient.java:882)
>[junit4]   2>  at 
> org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:82)
>[junit4]   2>  at 
> org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:107)
>[junit4]   2>  at 
> org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:55)
>[junit4]   2>  at 
> org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:480)
>[junit4]   2>  ... 11 more
>[junit4]   2> Caused by: sun.security.validator.ValidatorException: PKIX 
> path building failed: 
> sun.security.provider.certpath.SunCertPathBuilderException: unable to find 
> valid certification path to requested target
>[junit4]   2>  at 
> sun.security.validator.PKIXValidator.doBuild(PKIXValidator.java:387)
>[junit4]   2>  at 
> sun.security.validator.PKIXValidator.engineValidate(PKIXValidator.java:292)
>[junit4]   2>  at 
> sun.security.validator.Validator.validate(Validator.java:260)
>[junit4]   2>  at 
> sun.security.ssl.X509TrustManagerImpl.validate(X509TrustManagerImpl.java:324)
>[junit4]   2>  at 
> sun.security.ssl.X509TrustManagerImpl.checkTrusted(X509TrustManagerImpl.java:229)
>[junit4]   2>  at 
> sun.security.ssl.X509TrustManagerImpl.checkServerTrusted(X509TrustManagerImpl.java:124)
>[junit4]   2>  at 
> sun.security.ssl.ClientHandshaker.serverCertificate(ClientHandshaker.java:1491)
>[junit4]   2>  ... 29 more
>[junit4]   2> Caused by: 
> sun.security.provider.certpath.SunCertPathBuilderException: unable to find 
> valid certification path to requested target
>[junit4]   2>  at 
> sun.security.provider.certpath.SunCertPathBuilder.build(SunCertPathBuilder.java:146)
>[junit4]   2>  at 
> 

[jira] [Commented] (SOLR-8429) add a flag blockUnauthenticated to BasicAutPlugin

2015-12-17 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-8429?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15062042#comment-15062042
 ] 

Jan Høydahl commented on SOLR-8429:
---

Cool. This workaround would require blockUnauthenticated to be false, right?

Just a thought: If the new flag {{blockUnauthenticated}} is not explicitly 
defined in config, could the default be smart and depend on whether an 
Authorization plugin is active or not? There is no use in BasicAuthPlugin alone 
without this enabled... Pseudo:

{code}
// Default to true if no authz configured
boolean blockUnauthenticated = config.get("blockUnauthenticated", 
!hasAuthorizationPlugin());
{code}

Then we would continue to omit the flag in example configs, and document it for 
those who would rather block using the flag instead of an 'all' permission. 
That way we'd keep back-compat as well, no?
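
For illustration, a hypothetical security.json fragment with the proposed flag set 
explicitly (the flag name follows this discussion; its exact placement and whether 
the plugin honours it are assumptions, and the credentials value is elided):

{code}
{
  "authentication": {
    "class": "solr.BasicAuthPlugin",
    "blockUnauthenticated": true,
    "credentials": { "solr": "..." }
  },
  "authorization": {
    "class": "solr.RuleBasedAuthorizationPlugin"
  }
}
{code}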

> add a flag blockUnauthenticated to BasicAutPlugin
> -
>
> Key: SOLR-8429
> URL: https://issues.apache.org/jira/browse/SOLR-8429
> Project: Solr
>  Issue Type: Improvement
>Reporter: Noble Paul
>Assignee: Noble Paul
>
> If authentication is set up with BasicAuthPlugin, it lets all requests go 
> through if no credentials are passed. This was done to have minimal impact 
> for users who only wish to protect a few end points (say, collection admin 
> and core admin only).
> We can add a flag to {{BasicAuthPlugin}} to allow only authenticated requests 
> to go through.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-8371) Try and prevent too many recovery requests from stacking up and clean up some faulty logic.

2015-12-17 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8371?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller updated SOLR-8371:
--
Attachment: SOLR-8371.patch

Updated patch, fixed to work correctly.

> Try and prevent too many recovery requests from stacking up and clean up some 
> faulty logic.
> ---
>
> Key: SOLR-8371
> URL: https://issues.apache.org/jira/browse/SOLR-8371
> Project: Solr
>  Issue Type: Improvement
>Reporter: Mark Miller
>Assignee: Mark Miller
> Attachments: SOLR-8371.patch, SOLR-8371.patch, SOLR-8371.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8434) Add a wildcard role, to match any role in RuleBasedAuthorizationPlugin

2015-12-17 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8434?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15062052#comment-15062052
 ] 

ASF subversion and git services commented on SOLR-8434:
---

Commit 1720551 from [~noble.paul] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1720551 ]

SOLR-8434: Add wildcard support to role, to match any role in 
RuleBasedAuthorizationPlugin

> Add a wildcard role, to match any role in RuleBasedAuthorizationPlugin 
> ---
>
> Key: SOLR-8434
> URL: https://issues.apache.org/jira/browse/SOLR-8434
> Project: Solr
>  Issue Type: Improvement
>Affects Versions: 5.3.1
>Reporter: Noble Paul
>Assignee: Noble Paul
> Fix For: 5.5, Trunk
>
>
> I should be able to specify the role as {{*}} which would mean there should 
> be some user principal to access this resource



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-8434) Add a wildcard role, to match any role in RuleBasedAuthorizationPlugin

2015-12-17 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8434?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul resolved SOLR-8434.
--
   Resolution: Fixed
Fix Version/s: Trunk
   5.5

> Add a wildcard role, to match any role in RuleBasedAuthorizationPlugin 
> ---
>
> Key: SOLR-8434
> URL: https://issues.apache.org/jira/browse/SOLR-8434
> Project: Solr
>  Issue Type: Improvement
>Affects Versions: 5.3.1
>Reporter: Noble Paul
>Assignee: Noble Paul
> Fix For: 5.5, Trunk
>
>
> I should be able to specify the role as {{*}} which would mean there should 
> be some user principal to access this resource



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-8433) IterativeMergeStrategy test failures due to SSL errors on Windows

2015-12-17 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8433?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-8433:
-
Description: 
The AnalyticsMergeStrategyTest is failing on Windows with SSL errors. The 
failures are occurring during the callbacks to the shards introduced in 
SOLR-6398.

{code}
  

[junit4]   2> Caused by: javax.net.ssl.SSLHandshakeException: 
sun.security.validator.ValidatorException: PKIX path building failed: 
sun.security.provider.certpath.SunCertPathBuilderException: unable to find 
valid certification path to requested target
   [junit4]   2>at 
sun.security.ssl.Alerts.getSSLException(Alerts.java:192)
   [junit4]   2>at 
sun.security.ssl.SSLSocketImpl.fatal(SSLSocketImpl.java:1949)
   [junit4]   2>at 
sun.security.ssl.Handshaker.fatalSE(Handshaker.java:302)
   [junit4]   2>at 
sun.security.ssl.Handshaker.fatalSE(Handshaker.java:296)
   [junit4]   2>at 
sun.security.ssl.ClientHandshaker.serverCertificate(ClientHandshaker.java:1509)
   [junit4]   2>at 
sun.security.ssl.ClientHandshaker.processMessage(ClientHandshaker.java:216)
   [junit4]   2>at 
sun.security.ssl.Handshaker.processLoop(Handshaker.java:979)
   [junit4]   2>at 
sun.security.ssl.Handshaker.process_record(Handshaker.java:914)
   [junit4]   2>at 
sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:1062)
   [junit4]   2>at 
sun.security.ssl.SSLSocketImpl.performInitialHandshake(SSLSocketImpl.java:1375)
   [junit4]   2>at 
sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1403)
   [junit4]   2>at 
sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1387)
   [junit4]   2>at 
org.apache.http.conn.ssl.SSLSocketFactory.connectSocket(SSLSocketFactory.java:543)
   [junit4]   2>at 
org.apache.http.conn.ssl.SSLSocketFactory.connectSocket(SSLSocketFactory.java:409)
   [junit4]   2>at 
org.apache.http.impl.conn.DefaultClientConnectionOperator.openConnection(DefaultClientConnectionOperator.java:177)
   [junit4]   2>at 
org.apache.http.impl.conn.ManagedClientConnectionImpl.open(ManagedClientConnectionImpl.java:304)
   [junit4]   2>at 
org.apache.http.impl.client.DefaultRequestDirector.tryConnect(DefaultRequestDirector.java:611)
   [junit4]   2>at 
org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:446)
   [junit4]   2>at 
org.apache.http.impl.client.AbstractHttpClient.doExecute(AbstractHttpClient.java:882)
   [junit4]   2>at 
org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:82)
   [junit4]   2>at 
org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:107)
   [junit4]   2>at 
org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:55)
   [junit4]   2>at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:480)
   [junit4]   2>... 11 more
   [junit4]   2> Caused by: sun.security.validator.ValidatorException: PKIX 
path building failed: 
sun.security.provider.certpath.SunCertPathBuilderException: unable to find 
valid certification path to requested target
   [junit4]   2>at 
sun.security.validator.PKIXValidator.doBuild(PKIXValidator.java:387)
   [junit4]   2>at 
sun.security.validator.PKIXValidator.engineValidate(PKIXValidator.java:292)
   [junit4]   2>at 
sun.security.validator.Validator.validate(Validator.java:260)
   [junit4]   2>at 
sun.security.ssl.X509TrustManagerImpl.validate(X509TrustManagerImpl.java:324)
   [junit4]   2>at 
sun.security.ssl.X509TrustManagerImpl.checkTrusted(X509TrustManagerImpl.java:229)
   [junit4]   2>at 
sun.security.ssl.X509TrustManagerImpl.checkServerTrusted(X509TrustManagerImpl.java:124)
   [junit4]   2>at 
sun.security.ssl.ClientHandshaker.serverCertificate(ClientHandshaker.java:1491)
   [junit4]   2>... 29 more
   [junit4]   2> Caused by: 
sun.security.provider.certpath.SunCertPathBuilderException: unable to find 
valid certification path to requested target
   [junit4]   2>at 
sun.security.provider.certpath.SunCertPathBuilder.build(SunCertPathBuilder.java:146)
   [junit4]   2>at 
sun.security.provider.certpath.SunCertPathBuilder.engineBuild(SunCertPathBuilder.java:131)
   [junit4]   2>at 
java.security.cert.CertPathBuilder.build(CertPathBuilder.java:280)
   [junit4]   2>at 
sun.security.validator.PKIXValidator.doBuild(PKIXValidator.java:382)
   [junit4]   2>... 35 more
   [junit4]   2> 
{code}

  was:
The AnalyticsMergeStrategyTest is failing on Windows with SSL errors. The 
failures are occurring during the callbacks to the shards introduced in 
SOLR-6398.



{code}
  [junit4]   2> Caused by: javax.net.ssl.SSLHandshakeException: 

[jira] [Commented] (SOLR-7341) xjoin - join data from external sources

2015-12-17 Thread Tom Winch (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7341?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15062077#comment-15062077
 ] 

Tom Winch commented on SOLR-7341:
-

Done

> xjoin - join data from external sources
> ---
>
> Key: SOLR-7341
> URL: https://issues.apache.org/jira/browse/SOLR-7341
> Project: Solr
>  Issue Type: New Feature
>  Components: search
>Affects Versions: 4.10.3
>Reporter: Tom Winch
>Priority: Minor
> Fix For: Trunk
>
> Attachments: SOLR-7341.patch, SOLR-7341.patch, SOLR-7341.patch, 
> SOLR-7341.patch, SOLR-7341.patch, SOLR-7341.patch, SOLR-7341.patch-5_3, 
> SOLR-7341.patch-trunk, SOLR-7341.patch-trunk, SOLR-7341.patch-trunk
>
>
> h2. XJoin
> The "xjoin" SOLR contrib allows external results to be joined with SOLR 
> results in a query and the SOLR result set to be filtered by the results of 
> an external query. Values from the external results are made available in the 
> SOLR results and may also be used to boost the scores of corresponding 
> documents during the search. The contrib consists of the Java classes 
> XJoinSearchComponent, XJoinValueSourceParser and XJoinQParserPlugin (and 
> associated classes), which must be configured in solrconfig.xml, and the 
> interfaces XJoinResultsFactory and XJoinResults, which are implemented by the 
> user to provide the link between SOLR and the external results source. 
> External results and SOLR documents are matched via a single configurable 
> attribute (the "join field"). The contrib JAR solr-xjoin-4.10.3.jar contains 
> these classes and interfaces and should be included in SOLR's class path from 
> solrconfig.xml, as should a JAR containing the user implementations of the 
> previously mentioned interfaces. For example:
> {code:xml}
> <config>
>   ..
>   <!-- reconstructed sketch (original markup lost in extraction): include the XJoin
>        contrib JAR and the JAR with the user implementations; paths are illustrative -->
>   <lib path="/path/to/solr-xjoin-4.10.3.jar" />
>   <lib path="/path/to/xjoin-user-implementations.jar" />
>   ..
> </config>
> {code}
> h2. Java classes and interfaces
> h3. XJoinResultsFactory
> The user implementation of this interface is responsible for connecting to an 
> external source to perform a query (or otherwise collect results). Parameters 
> with prefix "<component name>.external." are passed from the SOLR query URL 
> to parameterise the search. The interface has the following methods:
> * void init(NamedList args) - this is called during SOLR initialisation, and 
> passed parameters from the search component configuration (see below)
> * XJoinResults getResults(SolrParams params) - this is called during a SOLR 
> search to generate external results, and is passed parameters from the SOLR 
> query URL (as above)
> For example, the implementation might perform queries of an external source 
> based on the 'q' SOLR query URL parameter (in full, <component name>.external.q).
> h3. XJoinResults
> A user implementation of this interface is returned by the getResults() 
> method of the XJoinResultsFactory implementation. It has methods:
> * Object getResult(String joinId) - this should return a particular result 
> given the value of the join attribute
> * Iterable getJoinIds() - this should return an ordered (ascending) 
> list of the join attribute values for all results of the external search
> h3. XJoinSearchComponent
> This is the central Java class of the contrib. It is a SOLR search component, 
> configured in solrconfig.xml and included in one or more SOLR request 
> handlers. There is one XJoin search component per external source, and each 
> has two main responsibilities:
> * Before the SOLR search, it connects to the external source and retrieves 
> results, storing them in the SOLR request context
> * After the SOLR search, it matches SOLR documents in the results set and 
> external results via the join field, adding attributes from the external 
> results to documents in the SOLR results set
> It takes the following initialisation parameters:
> * factoryClass - this specifies the user-supplied class implementing 
> XJoinResultsFactory, used to generate external results
> * joinField - this specifies the attribute on which to join between SOLR 
> documents and external results
> * external - this parameter set is passed to configure the 
> XJoinResultsFactory implementation
> For example, in solrconfig.xml:
> {code:xml}
> <!-- element names reconstructed from the surrounding description -->
> <searchComponent name="xjoin" class="org.apache.solr.search.xjoin.XJoinSearchComponent">
>   <str name="factoryClass">test.TestXJoinResultsFactory</str>
>   <str name="joinField">id</str>
>   <lst name="external">
>     <str name="values">1,2,3</str>
>   </lst>
> </searchComponent>
> {code}
> Here, the search component instantiates a new TestXJoinResultsFactory during 
> initialisation, and passes it the "values" parameter (1, 2, 3) to configure 
> it. To properly use the XJoinSearchComponent in a request handler, it must be 
> included at the start and end of the component list, and may be configured 
> with the following query parameters:
> * results - a comma-separated list of attributes from the XJoinResults 
> implementation (created by the factory at search time) to be included in the 
> SOLR results
> * fl - 

[jira] [Commented] (SOLR-8125) Umbrella ticket for Streaming and SQL issues

2015-12-17 Thread Jason Gerlowski (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8125?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15061996#comment-15061996
 ] 

Jason Gerlowski commented on SOLR-8125:
---

Looking to help out with some of the streaming/sql work linked to on this JIRA. 
 Trying to get more familiar with this part of the code.  Is there anything 
that stands out as what-should-be-worked-on-next?  Not sure if there's any sort 
of priority attached to the subtasks for this umbrella issue.

If not, I plan on taking a stab at SOLR-7535 (Add UpdateStream API), as it 
seems like a good way to dive in.  Happy to take suggestions if anyone thinks 
that it'd be better to work on something else first though.

> Umbrella ticket for Streaming and SQL issues
> 
>
> Key: SOLR-8125
> URL: https://issues.apache.org/jira/browse/SOLR-8125
> Project: Solr
>  Issue Type: New Feature
>  Components: SolrJ
>Reporter: Joel Bernstein
>
> This is an umbrella ticket for tracking issues around the *Streaming API*, 
> *Streaming Expressions* and *Parallel SQL*.
> Issues can be linked to this ticket and discussions about the road map can 
> also happen on this ticket.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8423) DELETESHARD should cleanup the instance and data directory, like DELETEREPLICA

2015-12-17 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8423?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15062003#comment-15062003
 ] 

Mark Miller commented on SOLR-8423:
---

bq. deleteInstanceDir and deleteDataDir 

Probably best to be consistent in naming these params - especially given the 
behavior is the same. At least on 5x. On 6x, would still be nice to be 
consistent, but we could change how it works if we had ideas to improve it.
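
As a concrete (hypothetical) illustration of the consistent naming being discussed, 
a DELETESHARD call could then accept the same flags as DELETEREPLICA:

{code}
/admin/collections?action=DELETESHARD&collection=collection1&shard=shard1
    &deleteInstanceDir=true&deleteDataDir=true
{code}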

> DELETESHARD should cleanup the instance and data directory, like DELETEREPLICA
> --
>
> Key: SOLR-8423
> URL: https://issues.apache.org/jira/browse/SOLR-8423
> Project: Solr
>  Issue Type: Bug
>Reporter: Anshum Gupta
> Attachments: SOLR-8423.patch
>
>
> DELETESHARD only cleans up the index directory and not the instance/data 
> directory. DELETEREPLICA on the other hand cleans up the data and instance 
> directory.
> DELETESHARD should clean up the instance and data directory, so that we don't 
> leak disk space on executing the command.
> If we think this would break back-compat, though I don't see why this should 
> not clean up the instance dir, we should at least provide an option to clean 
> up everything and make it default in 6.0.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-8434) Add a wildcard role, to match any role in RuleBasedAuthorizationPlugin

2015-12-17 Thread Noble Paul (JIRA)
Noble Paul created SOLR-8434:


 Summary: Add a wildcard role, to match any role in 
RuleBasedAuthorizationPlugin 
 Key: SOLR-8434
 URL: https://issues.apache.org/jira/browse/SOLR-8434
 Project: Solr
  Issue Type: Improvement
Reporter: Noble Paul
Assignee: Noble Paul


I should be able to specify the role as {{*}} which would mean there should be 
some user principal to access this resource
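
For context, a hypothetical RuleBasedAuthorizationPlugin snippet using the proposed 
wildcard role (the permission name and overall layout are illustrative only):

{code}
{
  "authorization": {
    "class": "solr.RuleBasedAuthorizationPlugin",
    "permissions": [
      { "name": "collection-admin-edit", "role": "*" }
    ],
    "user-role": { "solr": "admin" }
  }
}
{code}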



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8125) Umbrella ticket for Streaming and SQL issues

2015-12-17 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8125?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15062033#comment-15062033
 ] 

Joel Bernstein commented on SOLR-8125:
--

SOLR-7535 is definitely an important one. SOLR-7525 is also important and just 
needs a few more tests, including parallel tests.

> Umbrella ticket for Streaming and SQL issues
> 
>
> Key: SOLR-8125
> URL: https://issues.apache.org/jira/browse/SOLR-8125
> Project: Solr
>  Issue Type: New Feature
>  Components: SolrJ
>Reporter: Joel Bernstein
>
> This is an umbrella ticket for tracking issues around the *Streaming API*, 
> *Streaming Expressions* and *Parallel SQL*.
> Issues can be linked to this ticket and discussions about the road map can 
> also happen on this ticket.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8434) Add a wildcard role, to match any role in RuleBasedAuthorizationPlugin

2015-12-17 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8434?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15062051#comment-15062051
 ] 

ASF subversion and git services commented on SOLR-8434:
---

Commit 1720550 from [~noble.paul] in branch 'dev/trunk'
[ https://svn.apache.org/r1720550 ]

SOLR-8434: Add wildcard support to role, to match any role in 
RuleBasedAuthorizationPlugin

> Add a wildcard role, to match any role in RuleBasedAuthorizationPlugin 
> ---
>
> Key: SOLR-8434
> URL: https://issues.apache.org/jira/browse/SOLR-8434
> Project: Solr
>  Issue Type: Improvement
>Affects Versions: 5.3.1
>Reporter: Noble Paul
>Assignee: Noble Paul
>
> I should be able to specify the role as {{*}} which would mean there should 
> be some user principal to access this resource



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-NightlyTests-trunk - Build # 884 - Still Failing

2015-12-17 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-trunk/884/

3 tests failed.
FAILED:  org.apache.solr.cloud.FullSolrCloudDistribCmdsTest.test

Error Message:
Captured an uncaught exception in thread: Thread[id=68621, name=Thread-61214, 
state=RUNNABLE, group=TGRP-FullSolrCloudDistribCmdsTest]

Stack Trace:
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught 
exception in thread: Thread[id=68621, name=Thread-61214, state=RUNNABLE, 
group=TGRP-FullSolrCloudDistribCmdsTest]
Caused by: java.lang.RuntimeException: 
org.apache.solr.client.solrj.SolrServerException: Timeout occured while waiting 
response from server at: http://127.0.0.1:51558/collection1
at __randomizedtesting.SeedInfo.seed([D7FEB351BC788D6B]:0)
at 
org.apache.solr.cloud.FullSolrCloudDistribCmdsTest$1IndexThread.run(FullSolrCloudDistribCmdsTest.java:645)
Caused by: org.apache.solr.client.solrj.SolrServerException: Timeout occured 
while waiting response from server at: http://127.0.0.1:51558/collection1
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:587)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:241)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:230)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:150)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:167)
at 
org.apache.solr.cloud.FullSolrCloudDistribCmdsTest$1IndexThread.run(FullSolrCloudDistribCmdsTest.java:643)
Caused by: java.net.SocketTimeoutException: Read timed out
at java.net.SocketInputStream.socketRead0(Native Method)
at java.net.SocketInputStream.socketRead(SocketInputStream.java:116)
at java.net.SocketInputStream.read(SocketInputStream.java:170)
at java.net.SocketInputStream.read(SocketInputStream.java:141)
at 
org.apache.http.impl.io.AbstractSessionInputBuffer.fillBuffer(AbstractSessionInputBuffer.java:160)
at 
org.apache.http.impl.io.SocketInputBuffer.fillBuffer(SocketInputBuffer.java:84)
at 
org.apache.http.impl.io.AbstractSessionInputBuffer.readLine(AbstractSessionInputBuffer.java:273)
at 
org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:140)
at 
org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:57)
at 
org.apache.http.impl.io.AbstractMessageParser.parse(AbstractMessageParser.java:261)
at 
org.apache.http.impl.AbstractHttpClientConnection.receiveResponseHeader(AbstractHttpClientConnection.java:283)
at 
org.apache.http.impl.conn.DefaultClientConnection.receiveResponseHeader(DefaultClientConnection.java:251)
at 
org.apache.http.impl.conn.ManagedClientConnectionImpl.receiveResponseHeader(ManagedClientConnectionImpl.java:197)
at 
org.apache.http.protocol.HttpRequestExecutor.doReceiveResponse(HttpRequestExecutor.java:272)
at 
org.apache.http.protocol.HttpRequestExecutor.execute(HttpRequestExecutor.java:124)
at 
org.apache.http.impl.client.DefaultRequestDirector.tryExecute(DefaultRequestDirector.java:685)
at 
org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:487)
at 
org.apache.http.impl.client.AbstractHttpClient.doExecute(AbstractHttpClient.java:882)
at 
org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:82)
at 
org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:107)
at 
org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:55)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:480)
... 5 more


FAILED:  org.apache.solr.cloud.RollingRestartTest.test

Error Message:
Unable to restart (#4): CloudJettyRunner 
[url=https://127.0.0.1:45463/collection1]

Stack Trace:
java.lang.AssertionError: Unable to restart (#4): CloudJettyRunner 
[url=https://127.0.0.1:45463/collection1]
at 
__randomizedtesting.SeedInfo.seed([D7FEB351BC788D6B:5FAA8C8B1284E093]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.cloud.RollingRestartTest.restartWithRolesTest(RollingRestartTest.java:104)
at 
org.apache.solr.cloud.RollingRestartTest.test(RollingRestartTest.java:53)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 

[jira] [Commented] (SOLR-8415) Provide command to switch between non/secure mode in ZK

2015-12-17 Thread Mike Drob (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8415?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15062257#comment-15062257
 ] 

Mike Drob commented on SOLR-8415:
-

Thanks Mark! That page looks reasonable.

Proposed text, to go after "Example Usages":
{panel}
h3. Swapping ACL Schemes
Over the lifetime of operating your Solr cluster, you may decide to move from an 
unsecured ZK to a secured instance. Changing the configured {{zkACLProvider}} 
in {{solr.xml}} will ensure that newly created nodes are secure, but will not 
protect the already existing data. To modify all existing ACLs, you can use 
{{ZkCLI -cmd resetacl}}.

To change the ACLs this way, you must specify the following VM properties: 
{{-DzkACLProvider=... -DzkCredentialsProvider=...}}.
* The Credential Provider must be one that has admin privileges on the nodes. 
If starting with an unsecure configuration, this may be omitted.
* The ACL Provider will be used to compute the new ACLs. When creating an 
unsecure configuration, this may be omitted.
* To swap from one secure setup to a new secure setup, such as when changing 
the password, it may be necessary to use an unsecure intermediate step.
{panel}
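
A hypothetical invocation matching the proposed text (the provider properties, 
credentials and classpath are placeholders to be filled in for the target setup):

{code}
java -DzkACLProvider=... -DzkCredentialsProvider=... -classpath ... \
     org.apache.solr.cloud.ZkCLI -cmd resetacl -zkhost localhost:2181
{code}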

> Provide command to switch between non/secure mode in ZK
> ---
>
> Key: SOLR-8415
> URL: https://issues.apache.org/jira/browse/SOLR-8415
> Project: Solr
>  Issue Type: Improvement
>  Components: security, SolrCloud
>Reporter: Mike Drob
> Fix For: Trunk
>
> Attachments: SOLR-8415.patch, SOLR-8415.patch
>
>
> We have the ability to run both with and without zk acls, but we don't have a 
> great way to switch between the two modes. Most common use case, I imagine, 
> would be upgrading from an old version that did not support this to a new 
> version that does, and wanting to protect all of the existing content in ZK, 
> but it is conceivable that a user might want to remove ACLs as well.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-6939) BlendedInfixSuggester to support exponential reciprocal BlenderType

2015-12-17 Thread Arcadius Ahouansou (JIRA)
Arcadius Ahouansou created LUCENE-6939:
--

 Summary: BlendedInfixSuggester to support exponential reciprocal 
BlenderType
 Key: LUCENE-6939
 URL: https://issues.apache.org/jira/browse/LUCENE-6939
 Project: Lucene - Core
  Issue Type: Improvement
  Components: modules/spellchecker
Affects Versions: 5.4
Reporter: Arcadius Ahouansou
Priority: Minor


The original BlendedInfixSuggester introduced in LUCENE-5354 has support for:
- {{BlenderType.POSITION_LINEAR}} and 
- {{BlenderType.POSITION_RECIPROCAL}} .

These are used to score documents based on the position of the matched token, 
i.e. the closer the matched term is to the beginning, the higher the score you get.

In some use cases, we need a more aggressive scoring based on the position.
That's where the exponential reciprocal comes into play 
i.e {{coef = 1/Math.pow(position+1, exponent) }}where the {{exponent}} is a 
configurable variable.
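
In code terms, the proposal boils down to a coefficient like the following (a sketch 
only; the method name is illustrative, and with {{exponent}} = 1 it reduces to a 
plain reciprocal):

{code}
// position of the matched token (0-based); exponent is configurable
static double exponentialReciprocalCoef(int position, double exponent) {
  return 1.0 / Math.pow(position + 1, exponent);
}
{code}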




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-6939) BlendedInfixSuggester to support exponential reciprocal BlenderType

2015-12-17 Thread Arcadius Ahouansou (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6939?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arcadius Ahouansou updated LUCENE-6939:
---
Description: 
The original BlendedInfixSuggester introduced in LUCENE-5354 has support for:
- {{BlenderType.POSITION_LINEAR}} and 
- {{BlenderType.POSITION_RECIPROCAL}} .

These are used to score documents based on the position of the matched token, 
i.e. the closer the matched term is to the beginning, the higher the score you get.

In some use cases, we need a more aggressive scoring based on the position.
That's where the exponential reciprocal comes into play 
i.e 
{{coef = 1/Math.pow(position+1, exponent)}}
where the {{exponent}} is a configurable variable.


  was:
The original BlendedInfixSuggester introduced in LUCENE-5354 has support for:
- {{BlenderType.POSITION_LINEAR}} and 
- {{BlenderType.POSITION_RECIPROCAL}} .

These are used to score documents based on the position of the matched token, 
i.e. the closer the matched term is to the beginning, the higher the score you get.

In some use cases, we need a more aggressive scoring based on the position.
That's where the exponential reciprocal comes into play 
i.e {{coef = 1/Math.pow(position+1, exponent) }}where the {{exponent}} is a 
configurable variable.



> BlendedInfixSuggester to support exponential reciprocal BlenderType
> ---
>
> Key: LUCENE-6939
> URL: https://issues.apache.org/jira/browse/LUCENE-6939
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/spellchecker
>Affects Versions: 5.4
>Reporter: Arcadius Ahouansou
>Priority: Minor
>  Labels: suggester
>
> The original BlendedInfixSuggester introduced in LUCENE-5354 has support for:
> - {{BlenderType.POSITION_LINEAR}} and 
> - {{BlenderType.POSITION_RECIPROCAL}} .
> These are used to score documents based on the position of the matched token, 
> i.e. the closer the matched term is to the beginning, the higher the score you get.
> In some use cases, we need a more aggressive scoring based on the position.
> That's where the exponential reciprocal comes into play 
> i.e 
> {{coef = 1/Math.pow(position+1, exponent)}}
> where the {{exponent}} is a configurable variable.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-8371) Try and prevent too many recovery requests from stacking up and clean up some faulty logic.

2015-12-17 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8371?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller updated SOLR-8371:
--
Attachment: SOLR-8371.patch

> Try and prevent too many recovery requests from stacking up and clean up some 
> faulty logic.
> ---
>
> Key: SOLR-8371
> URL: https://issues.apache.org/jira/browse/SOLR-8371
> Project: Solr
>  Issue Type: Improvement
>Reporter: Mark Miller
>Assignee: Mark Miller
> Attachments: SOLR-8371.patch, SOLR-8371.patch, SOLR-8371.patch, 
> SOLR-8371.patch, SOLR-8371.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8416) Solr collection creation API should return after all cores are alive

2015-12-17 Thread Michael Sun (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8416?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15062197#comment-15062197
 ] 

Michael Sun commented on SOLR-8416:
---

Just uploaded an updated patch for discussion. Here are the changes:

1. add a property to set max wait time
2. add a property to decide if it waits for all shard leaders to be active or 
all replicas
3. fix issues in [~markrmil...@gmail.com]'s review except for the following one.

bq. Should probably check if the replicas node is listed under live nodes as 
well as if it's active?
[~markrmil...@gmail.com] Can you give me more details about it? Thanks.


> Solr collection creation API should return after all cores are alive 
> -
>
> Key: SOLR-8416
> URL: https://issues.apache.org/jira/browse/SOLR-8416
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Reporter: Michael Sun
> Attachments: SOLR-8416.patch, SOLR-8416.patch
>
>
> Currently the collection creation API returns once all cores are created. In a 
> large cluster the cores may not be alive for some period of time after they 
> are created. For anything requested during that period, Solr appears 
> unstable and can return failures. Therefore it's better that the collection 
> creation API waits for all cores to become alive and returns after that.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-8416) Solr collection creation API should return after all cores are alive

2015-12-17 Thread Michael Sun (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8416?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Sun updated SOLR-8416:
--
Attachment: SOLR-8416.patch

> Solr collection creation API should return after all cores are alive 
> -
>
> Key: SOLR-8416
> URL: https://issues.apache.org/jira/browse/SOLR-8416
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Reporter: Michael Sun
> Attachments: SOLR-8416.patch, SOLR-8416.patch
>
>
> Currently the collection creation API returns once all cores are created. In a 
> large cluster the cores may not be alive for some period of time after they 
> are created. For anything requested during that period, Solr appears 
> unstable and can return failures. Therefore it's better that the collection 
> creation API waits for all cores to become alive and returns after that.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-8371) Try and prevent too many recovery requests from stacking up and clean up some faulty logic.

2015-12-17 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8371?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller updated SOLR-8371:
--
Attachment: SOLR-8371.patch

And another pass to make parts that were async, async again.

> Try and prevent too many recovery requests from stacking up and clean up some 
> faulty logic.
> ---
>
> Key: SOLR-8371
> URL: https://issues.apache.org/jira/browse/SOLR-8371
> Project: Solr
>  Issue Type: Improvement
>Reporter: Mark Miller
>Assignee: Mark Miller
> Attachments: SOLR-8371.patch, SOLR-8371.patch, SOLR-8371.patch, 
> SOLR-8371.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8435) Long update times Solr 5.3.1

2015-12-17 Thread Ishan Chattopadhyaya (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8435?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15062250#comment-15062250
 ] 

Ishan Chattopadhyaya commented on SOLR-8435:


There could be several reasons. Maybe some GC pauses are observed at that 
moment?
Are the two setups, for 5.2.1 and 5.3.1, absolutely the same?

Also, I think such questions are better answered at the solr-users list.

> Long update times Solr 5.3.1
> 
>
> Key: SOLR-8435
> URL: https://issues.apache.org/jira/browse/SOLR-8435
> Project: Solr
>  Issue Type: Bug
>  Components: update
>Affects Versions: 5.3.1
> Environment: Ubuntu server 128Gb
>Reporter: Kenny Knecht
> Fix For: 5.2.1
>
>
> We have two 128GB Ubuntu servers in a SolrCloud configuration. We update by curling 
> JSON files of 20,000 documents. In 5.2.1 this consistently takes between 19 
> and 24 seconds. In 5.3.1 it usually takes about 20s, but for roughly 20% of the 
> files it takes much longer: up to 500s! Which files are affected seems to be quite 
> random. Is this a known bug? Any workaround? Is it fixed in 5.4?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8371) Try and prevent too many recovery requests from stacking up and clean up some faulty logic.

2015-12-17 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8371?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15062336#comment-15062336
 ] 

Mark Miller commented on SOLR-8371:
---

Still doing test runs to see if anything random falls out, but I think the 
latest patch is good.

> Try and prevent too many recovery requests from stacking up and clean up some 
> faulty logic.
> ---
>
> Key: SOLR-8371
> URL: https://issues.apache.org/jira/browse/SOLR-8371
> Project: Solr
>  Issue Type: Improvement
>Reporter: Mark Miller
>Assignee: Mark Miller
> Attachments: SOLR-8371.patch, SOLR-8371.patch, SOLR-8371.patch, 
> SOLR-8371.patch, SOLR-8371.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-6939) BlendedInfixSuggester to support exponential reciprocal BlenderType

2015-12-17 Thread Arcadius Ahouansou (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6939?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arcadius Ahouansou updated LUCENE-6939:
---
Attachment: LUCENE_6939.patch


Hello [~mikemccand]
Would you mind having a look at this initial patch when you have the chance?

I am keen to make changes if needed, especially regarding the backward 
compatibility part of the BlenderType.

Thank you very much.

> BlendedInfixSuggester to support exponential reciprocal BlenderType
> ---
>
> Key: LUCENE-6939
> URL: https://issues.apache.org/jira/browse/LUCENE-6939
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/spellchecker
>Affects Versions: 5.4
>Reporter: Arcadius Ahouansou
>Priority: Minor
>  Labels: suggester
> Attachments: LUCENE_6939.patch
>
>
> The original BlendedInfixSuggester introduced in LUCENE-5354 has support for:
> - {{BlenderType.POSITION_LINEAR}} and 
> - {{BlenderType.POSITION_RECIPROCAL}} .
> These are used to score documents based on the position of the matched token, 
> i.e. the closer the matched term is to the beginning, the higher the score you get.
> In some use cases, we need a more aggressive scoring based on the position.
> That's where the exponential reciprocal comes into play 
> i.e 
> {{coef = 1/Math.pow(position+1, exponent)}}
> where the {{exponent}} is a configurable variable.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8415) Provide command to switch between non/secure mode in ZK

2015-12-17 Thread Mike Drob (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8415?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15062382#comment-15062382
 ] 

Mike Drob commented on SOLR-8415:
-

Also, we would add resetacl to 
https://cwiki.apache.org/confluence/display/solr/Command+Line+Utilities

> Provide command to switch between non/secure mode in ZK
> ---
>
> Key: SOLR-8415
> URL: https://issues.apache.org/jira/browse/SOLR-8415
> Project: Solr
>  Issue Type: Improvement
>  Components: security, SolrCloud
>Reporter: Mike Drob
> Fix For: Trunk
>
> Attachments: SOLR-8415.patch, SOLR-8415.patch
>
>
> We have the ability to run both with and without zk acls, but we don't have a 
> great way to switch between the two modes. Most common use case, I imagine, 
> would be upgrading from an old version that did not support this to a new 
> version that does, and wanting to protect all of the existing content in ZK, 
> but it is conceivable that a user might want to remove ACLs as well.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-trunk-Windows (32bit/jdk1.8.0_66) - Build # 5479 - Still Failing!

2015-12-17 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Windows/5479/
Java: 32bit/jdk1.8.0_66 -server -XX:+UseG1GC

1 tests failed.
FAILED:  org.apache.solr.search.AnalyticsMergeStrategyTest.test

Error Message:
Error from server at http://127.0.0.1:53727/fieh/gr/collection1: 
java.util.concurrent.ExecutionException: java.lang.IllegalStateException: 
Scheme 'http' not registered.

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at http://127.0.0.1:53727/fieh/gr/collection1: 
java.util.concurrent.ExecutionException: java.lang.IllegalStateException: 
Scheme 'http' not registered.
at 
__randomizedtesting.SeedInfo.seed([14E83C3E660109B6:9CBC03E4C8FD644E]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:575)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:241)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:230)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:150)
at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:943)
at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:958)
at 
org.apache.solr.BaseDistributedSearchTestCase.queryServer(BaseDistributedSearchTestCase.java:562)
at 
org.apache.solr.search.AnalyticsMergeStrategyTest.test(AnalyticsMergeStrategyTest.java:85)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:965)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:940)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Commented] (SOLR-8230) Create Facet Telemetry for Nested Facet Query

2015-12-17 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8230?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15062510#comment-15062510
 ] 

Yonik Seeley commented on SOLR-8230:


Is this patch for trunk?  I'm getting failures when I try to apply it.

> Create Facet Telemetry for Nested Facet Query
> -
>
> Key: SOLR-8230
> URL: https://issues.apache.org/jira/browse/SOLR-8230
> Project: Solr
>  Issue Type: Sub-task
>  Components: Facet Module
>Reporter: Michael Sun
> Fix For: Trunk
>
> Attachments: SOLR-8230.patch, SOLR-8230.patch, SOLR-8230.patch, 
> SOLR-8230.patch
>
>
> This is the first step for SOLR-8228 Facet Telemetry. It's going to implement 
> the telemetry for a nested facet query and put the information obtained in 
> debug field in response.
> Here is an example of telemetry returned from query. 
> Query
> {code}
> curl http://localhost:8228/solr/films/select -d 
> 'q=*:*&wt=json&indent=true&debug=true&json.facet={
> top_genre: {
>   type:terms,
>   field:genre,
>   numBuckets:true,
>   limit:2,
>   facet: {
> top_director: {
> type:terms,
> field:directed_by,
> numBuckets:true,
> limit:2
> },
> first_release: {
> type:terms,
> field:initial_release_date,
> sort:{index:asc},
> numBuckets:true,
> limit:2
> }
>   }
> }
> }'
> {code}
> Telemetry returned (inside debug part)
> {code}
> "facet-trace":{
>   "processor":"FacetQueryProcessor",
>   "elapse":1,
>   "query":null,
>   "sub-facet":[{
>   "processor":"FacetFieldProcessorUIF",
>   "elapse":1,
>   "field":"genre",
>   "limit":2,
>   "sub-facet":[{
>   "filter":"genre:Drama",
>   "processor":"FacetFieldProcessorUIF",
>   "elapse":0,
>   "field":"directed_by",
>   "limit":2},
> {
>   "filter":"genre:Drama",
>   "processor":"FacetFieldProcessorNumeric",
>   "elapse":0,
>   "field":"initial_release_date",
>   "limit":2},
> {
>   "filter":"genre:Comedy",
>   "processor":"FacetFieldProcessorUIF",
>   "elapse":0,
>   "field":"directed_by",
>   "limit":2},
> {
>   "filter":"genre:Comedy",
>   "processor":"FacetFieldProcessorNumeric",
>   "elapse":0,
>   "field":"initial_release_date",
>   "limit":2}]}]},
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7867) implicit sharded, facet grouping problem with multivalued string field starting with digits

2015-12-17 Thread Vishnu Mishra (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7867?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15062403#comment-15062403
 ] 

Vishnu Mishra commented on SOLR-7867:
-

We are using Solr 5.3.1 and facing the same issue with group.facet. Any 
progress?

> implicit sharded, facet grouping problem with multivalued string field 
> starting with digits
> ---
>
> Key: SOLR-7867
> URL: https://issues.apache.org/jira/browse/SOLR-7867
> Project: Solr
>  Issue Type: Bug
>  Components: faceting, SolrCloud
>Affects Versions: 5.2
> Environment: 3.13.0-48-generic #80-Ubuntu SMP x86_64 GNU/Linux
> java version "1.7.0_80"
> Java(TM) SE Runtime Environment (build 1.7.0_80-b15)
> Java HotSpot(TM) 64-Bit Server VM (build 24.80-b11, mixed mode)
>Reporter: Umut Erogul
>  Labels: docValues, facet, group, sharding
> Attachments: DocValuesException.PNG, ErrorReadingDocValues.PNG
>
>
> related parts @ schema.xml:
> {code}
> <!-- field definitions reconstructed; the type and any other attributes are assumed -->
> <field name="keyword_ss" type="string" docValues="true" multiValued="true"/>
> <field name="author_s" type="string" docValues="true"/>
> {code}
> every document has valid author_s and keyword_ss fields;
> we can make successful facet group queries on single node, single collection, 
> solr-4.9.0 server
> {code}
> q: *:* fq: keyword_ss:3m
> facet=true&facet.field=keyword_ss&group.facet=true&group.field=author_s&group=true
> {code}
> when querying on solr-5.2.0 server with implicit sharded environment with:
> {code}
>  required="true"/>{code}
> with example shard names; affinity1 affinity2 affinity3 affinity4
> the same query with same documents gets:
> {code}
> ERROR - 2015-08-04 08:15:15.222; [document affinity3 core_node32 
> document_affinity3_replica2] org.apache.solr.common.SolrException; 
> org.apache.solr.common.SolrException: Exception during facet.field: keyword_ss
> at org.apache.solr.request.SimpleFacets$3.call(SimpleFacets.java:632)
> at org.apache.solr.request.SimpleFacets$3.call(SimpleFacets.java:617)
> at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> at 
> org.apache.solr.request.SimpleFacets$2.execute(SimpleFacets.java:571)
> at 
> org.apache.solr.request.SimpleFacets.getFacetFieldCounts(SimpleFacets.java:642)
> ...
> at 
> org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.ArrayIndexOutOfBoundsException
> at 
> org.apache.lucene.codecs.lucene50.Lucene50DocValuesProducer$CompressedBinaryDocValues$CompressedBinaryTermsEnum.readTerm(Lucene50DocValuesProducer.java:1008)
> at 
> org.apache.lucene.codecs.lucene50.Lucene50DocValuesProducer$CompressedBinaryDocValues$CompressedBinaryTermsEnum.next(Lucene50DocValuesProducer.java:1026)
> at 
> org.apache.lucene.search.grouping.term.TermGroupFacetCollector$MV$SegmentResult.nextTerm(TermGroupFacetCollector.java:373)
> at 
> org.apache.lucene.search.grouping.AbstractGroupFacetCollector.mergeSegmentResults(AbstractGroupFacetCollector.java:91)
> at 
> org.apache.solr.request.SimpleFacets.getGroupedCounts(SimpleFacets.java:541)
> at 
> org.apache.solr.request.SimpleFacets.getTermCounts(SimpleFacets.java:463)
> at 
> org.apache.solr.request.SimpleFacets.getTermCounts(SimpleFacets.java:386)
> at org.apache.solr.request.SimpleFacets$3.call(SimpleFacets.java:626)
> ... 33 more
> {code}
> all the problematic queries are caused by strings starting with digits; 
> ("3m", "8 saniye", "2 broke girls", "1v1y")
> there are some strings for which the query works, like ("24", "90+", "45 dakika")
> we do not observe the problem when querying with 
> -keyword_ss:(0-9)*
> updating the problematic documents (a small subset of keyword_ss:(0-9)*) 
> fixes the query, but we cannot find an easy way to identify the problematic 
> documents
> there are around 400m docs, separated across 28 shards; 
> -keyword_ss:(0-9)* matches 97% of documents



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8279) Add a new test fault injection approach and a new SolrCloud test that stops and starts the cluster while indexing data and with random faults.

2015-12-17 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8279?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15062482#comment-15062482
 ] 

ASF subversion and git services commented on SOLR-8279:
---

Commit 1720624 from [~markrmil...@gmail.com] in branch 'dev/trunk'
[ https://svn.apache.org/r1720624 ]

SOLR-8279: Close factories in unrelated test.

> Add a new test fault injection approach and a new SolrCloud test that stops 
> and starts the cluster while indexing data and with random faults.
> --
>
> Key: SOLR-8279
> URL: https://issues.apache.org/jira/browse/SOLR-8279
> Project: Solr
>  Issue Type: Test
>Reporter: Mark Miller
>Assignee: Mark Miller
> Attachments: SOLR-8279.patch, SOLR-8279.patch, SOLR-8279.patch, 
> SOLR-8279.patch, SOLR-8279.patch, SOLR-8279.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8279) Add a new test fault injection approach and a new SolrCloud test that stops and starts the cluster while indexing data and with random faults.

2015-12-17 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8279?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15062346#comment-15062346
 ] 

ASF subversion and git services commented on SOLR-8279:
---

Commit 1720613 from [~markrmil...@gmail.com] in branch 'dev/trunk'
[ https://svn.apache.org/r1720613 ]

SOLR-8279: Add a new test fault injection approach and a new SolrCloud test 
that stops and starts the cluster while indexing data and with random faults.

> Add a new test fault injection approach and a new SolrCloud test that stops 
> and starts the cluster while indexing data and with random faults.
> --
>
> Key: SOLR-8279
> URL: https://issues.apache.org/jira/browse/SOLR-8279
> Project: Solr
>  Issue Type: Test
>Reporter: Mark Miller
>Assignee: Mark Miller
> Attachments: SOLR-8279.patch, SOLR-8279.patch, SOLR-8279.patch, 
> SOLR-8279.patch, SOLR-8279.patch, SOLR-8279.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8279) Add a new test fault injection approach and a new SolrCloud test that stops and starts the cluster while indexing data and with random faults.

2015-12-17 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8279?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15062350#comment-15062350
 ] 

Mark Miller commented on SOLR-8279:
---

There is the commit to trunk. Reviews welcome. This was a bit of a beast to get 
done in a way that can run as part of the normal test framework, coming from my 
original "just hack together a test I can run" approach, but I think I've now 
got a great base for adding more failure / fault injection tests.
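
For illustration only (this is not the code from the patch), the general shape 
of the fault injection idea is a static hook that a test can arm with a failure 
probability and that gets called from the code path where a fault should be 
simulated:

{code}
import java.util.Random;

public class FaultInjection {
  // set from a test; null (the default) disables injection
  public static volatile Double failChance = null;
  private static final Random RANDOM = new Random();

  public static void maybeFail(String where) {
    Double chance = failChance;
    if (chance != null && RANDOM.nextDouble() < chance) {
      throw new RuntimeException("Injected fault at " + where);
    }
  }
}
{code}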

> Add a new test fault injection approach and a new SolrCloud test that stops 
> and starts the cluster while indexing data and with random faults.
> --
>
> Key: SOLR-8279
> URL: https://issues.apache.org/jira/browse/SOLR-8279
> Project: Solr
>  Issue Type: Test
>Reporter: Mark Miller
>Assignee: Mark Miller
> Attachments: SOLR-8279.patch, SOLR-8279.patch, SOLR-8279.patch, 
> SOLR-8279.patch, SOLR-8279.patch, SOLR-8279.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8416) Solr collection creation API should return after all cores are alive

2015-12-17 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8416?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15062370#comment-15062370
 ] 

Mark Miller commented on SOLR-8416:
---

bq. Should probably check if the replicas node is listed under live nodes as 
well as if it's active?
bq. Can you give me more details about it?

For technical reasons, the actual state of a replica is a combination of 
whether its ephemeral live node exists in ZooKeeper and the state listed in 
the cluster state. We make a best effort on shutdown to publish DOWN for all 
the states, but it's only best effort; crashes and other fairly common events 
can leave any state in the cluster state. You can really only count on the 
state being accurate if you also check that the node is live. The 
ClusterState object has a helper method for this if I remember right. 
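
A minimal sketch of that combined check (illustration only, not the patch; the 
method names are from memory and may differ slightly in SolrJ):

{code}
// Assumes org.apache.solr.common.cloud.ClusterState and Replica.
// Only trust the published state when the replica's node is also live.
boolean isActiveAndLive(ClusterState clusterState, Replica replica) {
  return clusterState.liveNodesContain(replica.getNodeName())
      && replica.getState() == Replica.State.ACTIVE;
}
{code}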

> Solr collection creation API should return after all cores are alive 
> -
>
> Key: SOLR-8416
> URL: https://issues.apache.org/jira/browse/SOLR-8416
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Reporter: Michael Sun
> Attachments: SOLR-8416.patch, SOLR-8416.patch
>
>
> Currently the collection creation API returns once all cores are created. In 
> large clusters the cores may not be alive for some period of time after the 
> cores are created. For anything requested during that period, Solr appears 
> unstable and can return failures. Therefore it's better that the collection 
> creation API waits for all cores to become alive and returns after that.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-trunk-Linux (32bit/jdk1.8.0_66) - Build # 15230 - Failure!

2015-12-17 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/15230/
Java: 32bit/jdk1.8.0_66 -client -XX:+UseSerialGC

1 tests failed.
FAILED:  org.apache.solr.search.AnalyticsMergeStrategyTest.test

Error Message:
Error from server at http://127.0.0.1:58032//collection1: 
java.util.concurrent.ExecutionException: java.lang.IllegalStateException: 
Scheme 'http' not registered.

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at http://127.0.0.1:58032//collection1: 
java.util.concurrent.ExecutionException: java.lang.IllegalStateException: 
Scheme 'http' not registered.
at 
__randomizedtesting.SeedInfo.seed([96E0128CCCAF644A:1EB42D56625309B2]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:575)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:241)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:230)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:150)
at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:943)
at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:958)
at 
org.apache.solr.BaseDistributedSearchTestCase.queryServer(BaseDistributedSearchTestCase.java:562)
at 
org.apache.solr.search.AnalyticsMergeStrategyTest.test(AnalyticsMergeStrategyTest.java:85)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:965)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:940)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Commented] (SOLR-8279) Add a new test fault injection approach and a new SolrCloud test that stops and starts the cluster while indexing data and with random faults.

2015-12-17 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8279?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15062522#comment-15062522
 ] 

ASF subversion and git services commented on SOLR-8279:
---

Commit 1720627 from [~markrmil...@gmail.com] in branch 'dev/trunk'
[ https://svn.apache.org/r1720627 ]

SOLR-8279: end searcher tracking before object release tracker.

> Add a new test fault injection approach and a new SolrCloud test that stops 
> and starts the cluster while indexing data and with random faults.
> --
>
> Key: SOLR-8279
> URL: https://issues.apache.org/jira/browse/SOLR-8279
> Project: Solr
>  Issue Type: Test
>Reporter: Mark Miller
>Assignee: Mark Miller
> Attachments: SOLR-8279.patch, SOLR-8279.patch, SOLR-8279.patch, 
> SOLR-8279.patch, SOLR-8279.patch, SOLR-8279.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (SOLR-4280) spellcheck.maxResultsForSuggest based on filter query results

2015-12-17 Thread James Dyer (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4280?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Dyer reassigned SOLR-4280:


Assignee: James Dyer

> spellcheck.maxResultsForSuggest based on filter query results
> -
>
> Key: SOLR-4280
> URL: https://issues.apache.org/jira/browse/SOLR-4280
> Project: Solr
>  Issue Type: Improvement
>  Components: spellchecker
>Reporter: Markus Jelsma
>Assignee: James Dyer
> Fix For: 4.9, Trunk
>
> Attachments: SOLR-4280-trunk-1.patch, SOLR-4280-trunk.patch, 
> SOLR-4280-trunk.patch, SOLR-4280.patch, SOLR-4280.patch
>
>
> spellcheck.maxResultsForSuggest takes a fixed number but ideally should be 
> able to take a ratio and calculate that against the maximum number of results 
> the filter queries return.
> At least in our case this would certainly add a lot of value. >99% of our 
> end-users search within one or more filters of which one is always unique. 
> The number of documents for each of those unique filters varies significantly 
> ranging from 300 to 3.000.000 documents in which they search. The 
> maxResultsForSuggest is set to a reasonable low value so it kind of works 
> fine but sometimes leads to undesired suggestions for a large subcorpus that 
> has more misspellings.
> Spun off from SOLR-4278.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS-EA] Lucene-Solr-5.x-Linux (64bit/jdk-9-ea+95) - Build # 14933 - Failure!

2015-12-17 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/14933/
Java: 64bit/jdk-9-ea+95 -XX:-UseCompressedOops -XX:+UseSerialGC 
-XX:-CompactStrings

3 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.SaslZkACLProviderTest

Error Message:
5 threads leaked from SUITE scope at 
org.apache.solr.cloud.SaslZkACLProviderTest: 1) Thread[id=10642, 
name=ou=system.data, state=TIMED_WAITING, group=TGRP-SaslZkACLProviderTest] 
at jdk.internal.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:218) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2103)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1136)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:853)
 at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1083)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1143) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:632) 
at java.lang.Thread.run(Thread.java:747)2) Thread[id=10641, 
name=apacheds, state=WAITING, group=TGRP-SaslZkACLProviderTest] at 
java.lang.Object.wait(Native Method) at 
java.lang.Object.wait(Object.java:516) at 
java.util.TimerThread.mainLoop(Timer.java:526) at 
java.util.TimerThread.run(Timer.java:505)3) Thread[id=10644, 
name=changePwdReplayCache.data, state=TIMED_WAITING, 
group=TGRP-SaslZkACLProviderTest] at 
jdk.internal.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:218) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2103)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1136)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:853)
 at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1083)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1143) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:632) 
at java.lang.Thread.run(Thread.java:747)4) Thread[id=10645, 
name=groupCache.data, state=TIMED_WAITING, group=TGRP-SaslZkACLProviderTest]
 at jdk.internal.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:218) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2103)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1136)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:853)
 at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1083)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1143) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:632) 
at java.lang.Thread.run(Thread.java:747)5) Thread[id=10643, 
name=kdcReplayCache.data, state=TIMED_WAITING, 
group=TGRP-SaslZkACLProviderTest] at 
jdk.internal.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:218) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2103)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1136)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:853)
 at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1083)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1143) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:632) 
at java.lang.Thread.run(Thread.java:747)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 5 threads leaked from SUITE 
scope at org.apache.solr.cloud.SaslZkACLProviderTest: 
   1) Thread[id=10642, name=ou=system.data, state=TIMED_WAITING, 
group=TGRP-SaslZkACLProviderTest]
at jdk.internal.misc.Unsafe.park(Native Method)
at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:218)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2103)
at 

[jira] [Updated] (SOLR-8279) Add a new test fault injection approach and a new SolrCloud test that stops and starts the cluster while indexing data and with random faults.

2015-12-17 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8279?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller updated SOLR-8279:
--
Summary: Add a new test fault injection approach and a new SolrCloud test 
that stops and starts the cluster while indexing data and with random faults.  
(was: Add a new SolrCloud test that stops and starts the cluster while indexing 
data with fault injection.)

> Add a new test fault injection approach and a new SolrCloud test that stops 
> and starts the cluster while indexing data and with random faults.
> --
>
> Key: SOLR-8279
> URL: https://issues.apache.org/jira/browse/SOLR-8279
> Project: Solr
>  Issue Type: Test
>Reporter: Mark Miller
>Assignee: Mark Miller
> Attachments: SOLR-8279.patch, SOLR-8279.patch, SOLR-8279.patch, 
> SOLR-8279.patch, SOLR-8279.patch, SOLR-8279.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-4280) spellcheck.maxResultsForSuggest based on filter query results

2015-12-17 Thread James Dyer (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4280?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Dyer updated SOLR-4280:
-
Attachment: SOLR-4280.patch

Clean-up patch with slightly better testing and javadoc. Once I can run tests 
& precommit on it, I will commit this.

> spellcheck.maxResultsForSuggest based on filter query results
> -
>
> Key: SOLR-4280
> URL: https://issues.apache.org/jira/browse/SOLR-4280
> Project: Solr
>  Issue Type: Improvement
>  Components: spellchecker
>Reporter: Markus Jelsma
>Assignee: James Dyer
> Fix For: 4.9, Trunk
>
> Attachments: SOLR-4280-trunk-1.patch, SOLR-4280-trunk.patch, 
> SOLR-4280-trunk.patch, SOLR-4280.patch, SOLR-4280.patch, SOLR-4280.patch
>
>
> spellcheck.maxResultsForSuggest takes a fixed number but ideally should be 
> able to take a ratio and calculate that against the maximum number of results 
> the filter queries return.
> At least in our case this would certainly add a lot of value. >99% of our 
> end-users search within one or more filters of which one is always unique. 
> The number of documents for each of those unique filters varies significantly 
> ranging from 300 to 3.000.000 documents in which they search. The 
> maxResultsForSuggest is set to a reasonable low value so it kind of works 
> fine but sometimes leads to undesired suggestions for a large subcorpus that 
> has more misspellings.
> Spun off from SOLR-4278.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8190) Implement Closeable on TupleStream

2015-12-17 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8190?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15062376#comment-15062376
 ] 

Joel Bernstein commented on SOLR-8190:
--

The /stream and /sql handlers should be calling open() and close() in the 
majority of situations. There are also situations where the streams themselves 
may open() and close() internal streams. For example, the hashJoin stream may 
open a stream, read it into a hashtable, and then close the stream. 

But if people want to work directly with the Streaming API, rather than sending 
a Streaming Expression to the /stream handler, it would be nice to add some 
robustness to how open and close are handled. Probably we should throw 
exceptions in the first three cases you mention.
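
For illustration, a minimal sketch of the try-with-resources usage this issue 
enables; openStream() below is a hypothetical stand-in for constructing any 
concrete TupleStream:

{code}
import org.apache.solr.client.solrj.io.Tuple;
import org.apache.solr.client.solrj.io.stream.TupleStream;

// With TupleStream implementing Closeable, close() runs even if open() or
// read() throws.
try (TupleStream stream = openStream()) {  // openStream() is hypothetical
  stream.open();
  Tuple tuple = stream.read();
  while (!tuple.EOF) {
    // process the tuple ...
    tuple = stream.read();
  }
}
{code}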

> Implement Closeable on TupleStream
> --
>
> Key: SOLR-8190
> URL: https://issues.apache.org/jira/browse/SOLR-8190
> Project: Solr
>  Issue Type: Bug
>  Components: SolrJ
>Affects Versions: Trunk
>Reporter: Kevin Risden
>Assignee: Joel Bernstein
>Priority: Minor
> Attachments: SOLR-8190.patch, SOLR-8190.patch
>
>
> Implementing Closeable on TupleStream provides the ability to use 
> try-with-resources 
> (https://docs.oracle.com/javase/tutorial/essential/exceptions/tryResourceClose.html)
>  in tests and in practice. This prevents TupleStreams from being left open 
> when there is an error in the tests.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-8436) Realtime-get should support filters

2015-12-17 Thread Yonik Seeley (JIRA)
Yonik Seeley created SOLR-8436:
--

 Summary: Realtime-get should support filters
 Key: SOLR-8436
 URL: https://issues.apache.org/jira/browse/SOLR-8436
 Project: Solr
  Issue Type: Improvement
Affects Versions: 5.4
Reporter: Yonik Seeley


RTG currently ignores filters.  There are probably other use-cases for RTG and 
filters, but one that comes to mind is security filters.
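
Purely as an illustration of the idea (the issue does not define any syntax 
yet), the request shape might end up looking something like:

{code}
/get?id=doc1&fq=acl:group_a
{code}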



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4280) spellcheck.maxResultsForSuggest based on filter query results

2015-12-17 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4280?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15062652#comment-15062652
 ] 

ASF subversion and git services commented on SOLR-4280:
---

Commit 1720637 from jd...@apache.org in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1720637 ]

SOLR-4280: Allow specifying "spellcheck.maxResultsForSuggest" as a percentage 
of filter query results

> spellcheck.maxResultsForSuggest based on filter query results
> -
>
> Key: SOLR-4280
> URL: https://issues.apache.org/jira/browse/SOLR-4280
> Project: Solr
>  Issue Type: Improvement
>  Components: spellchecker
>Reporter: Markus Jelsma
>Assignee: James Dyer
> Fix For: 4.9, Trunk
>
> Attachments: SOLR-4280-trunk-1.patch, SOLR-4280-trunk.patch, 
> SOLR-4280-trunk.patch, SOLR-4280.patch, SOLR-4280.patch, SOLR-4280.patch
>
>
> spellcheck.maxResultsForSuggest takes a fixed number but ideally should be 
> able to take a ratio and calculate that against the maximum number of results 
> the filter queries return.
> At least in our case this would certainly add a lot of value. >99% of our 
> end-users search within one or more filters of which one is always unique. 
> The number of documents for each of those unique filters varies significantly 
> ranging from 300 to 3.000.000 documents in which they search. The 
> maxResultsForSuggest is set to a reasonable low value so it kind of works 
> fine but sometimes leads to undesired suggestions for a large subcorpus that 
> has more misspellings.
> Spun off from SOLR-4278.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8208) DocTransformer executes sub-queries

2015-12-17 Thread Mikhail Khludnev (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8208?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15062731#comment-15062731
 ] 

Mikhail Khludnev commented on SOLR-8208:


bq. What should we do when from field have multiple values?

I prefer to forget about it until we have a real-life challenge from someone. 
So far, support single-value fields only.

> DocTransformer executes sub-queries
> ---
>
> Key: SOLR-8208
> URL: https://issues.apache.org/jira/browse/SOLR-8208
> Project: Solr
>  Issue Type: Improvement
>Reporter: Mikhail Khludnev
>  Labels: features, newbie
> Attachments: SOLR-8208.patch, SOLR-8208.patch
>
>
> The initial idea was to return "from" side of query time join via 
> doctransformer. I suppose it isn't  query-time join specific, thus let to 
> specify any query and parameters for them, let's call them sub-query. But it 
> might be problematic to escape subquery parameters, including local ones, 
> e.g. what if the subquery needs to specify its own doctransformer in fl=\[..\]?
> I suppose we can allow to specify subquery parameter prefix:
> {code}
> ..&fl=id,[subquery paramPrefix=subq1. 
> fromIndex=othercore],score,..&subq1.q={!term f=child_id 
> v=$subq1.row.id}&subq1.rows=3&subq1.sort=price&..
> {code}   
> * {{paramPrefix=subq1.}} shifts parameters for subquery: {{subq1.q}} turns to 
> {{q}} for subquery, {{subq1.rows}} to {{rows}}
> * {{fromIndex=othercore}} optional param allows to run subquery on other 
> core, like it works on query time join
> * the itchiest one is to reference to document field from subquery 
> parameters, here I propose to use local param {{v}} and param deference 
> {{v=$param}} thus every document field implicitly introduces parameter for 
> subquery $\{paramPrefix\}row.$\{fieldName\}, thus above subquery is 
> q=child_id:, presumably we can drop "row." in the middle 
> (reducing to v=$subq1.id), until someone deal with {{rows}}, {{sort}} fields. 
> * \[subquery\], or \[query\], or ? 
> Caveat: it should be a way slow; it handles only search result page, not 
> entire result set. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8279) Add a new test fault injection approach and a new SolrCloud test that stops and starts the cluster while indexing data and with random faults.

2015-12-17 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8279?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15062581#comment-15062581
 ] 

ASF subversion and git services commented on SOLR-8279:
---

Commit 1720631 from [~markrmil...@gmail.com] in branch 'dev/trunk'
[ https://svn.apache.org/r1720631 ]

SOLR-8279: Do not fail tests due to searcher tracking - just use that for 
waiting and use ObjectReleaseTracker for the fail since it has more detailed 
info.

> Add a new test fault injection approach and a new SolrCloud test that stops 
> and starts the cluster while indexing data and with random faults.
> --
>
> Key: SOLR-8279
> URL: https://issues.apache.org/jira/browse/SOLR-8279
> Project: Solr
>  Issue Type: Test
>Reporter: Mark Miller
>Assignee: Mark Miller
> Attachments: SOLR-8279.patch, SOLR-8279.patch, SOLR-8279.patch, 
> SOLR-8279.patch, SOLR-8279.patch, SOLR-8279.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-8437) Remove outdated RAMDirectory comment from example solrconfigs

2015-12-17 Thread Varun Thacker (JIRA)
Varun Thacker created SOLR-8437:
---

 Summary: Remove outdated RAMDirectory comment from example 
solrconfigs
 Key: SOLR-8437
 URL: https://issues.apache.org/jira/browse/SOLR-8437
 Project: Solr
  Issue Type: Improvement
Reporter: Varun Thacker
Assignee: Varun Thacker
Priority: Minor
 Fix For: 5.5, Trunk


There is a comment here in the solrconfig.xml file -

{code}
   solr.RAMDirectoryFactory is memory based, not
   persistent, and doesn't work with replication.
{code}

This is outdated after SOLR-3911. I tried recovering a replica manually as 
well when it was using RAMDirectoryFactory and it worked just fine.

So we should just get rid of that comment from all the example configs shipped 
with Solr.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8416) Solr collection creation API should return after all cores are alive

2015-12-17 Thread Michael Sun (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8416?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15062659#comment-15062659
 ] 

Michael Sun commented on SOLR-8416:
---

Here is an updated patch which includes checking for live nodes. Thanks 
[~markrmil...@gmail.com] for the suggestion.


> Solr collection creation API should return after all cores are alive 
> -
>
> Key: SOLR-8416
> URL: https://issues.apache.org/jira/browse/SOLR-8416
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Reporter: Michael Sun
> Attachments: SOLR-8416.patch, SOLR-8416.patch, SOLR-8416.patch
>
>
> Currently the collection creation API returns once all cores are created. In 
> large clusters the cores may not be alive for some period of time after the 
> cores are created. For anything requested during that period, Solr appears 
> unstable and can return failures. Therefore it's better that the collection 
> creation API waits for all cores to become alive and returns after that.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8208) DocTransformer executes sub-queries

2015-12-17 Thread Mikhail Khludnev (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8208?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15062727#comment-15062727
 ] 

Mikhail Khludnev commented on SOLR-8208:


bq. Do you also support fromIndex - that is, executing the query against 
another core or collection? That would be the killer feature.

Great idea. Let me spawn a sub-task. 



> DocTransformer executes sub-queries
> ---
>
> Key: SOLR-8208
> URL: https://issues.apache.org/jira/browse/SOLR-8208
> Project: Solr
>  Issue Type: Improvement
>Reporter: Mikhail Khludnev
>  Labels: features, newbie
> Attachments: SOLR-8208.patch, SOLR-8208.patch
>
>
> The initial idea was to return "from" side of query time join via 
> doctransformer. I suppose it isn't  query-time join specific, thus let to 
> specify any query and parameters for them, let's call them sub-query. But it 
> might be problematic to escape subquery parameters, including local ones, 
> e.g. what if the subquery needs to specify its own doctransformer in fl=\[..\]?
> I suppose we can allow to specify subquery parameter prefix:
> {code}
> ..&fl=id,[subquery paramPrefix=subq1. 
> fromIndex=othercore],score,..&subq1.q={!term f=child_id 
> v=$subq1.row.id}&subq1.rows=3&subq1.sort=price&..
> {code}   
> * {{paramPrefix=subq1.}} shifts parameters for subquery: {{subq1.q}} turns to 
> {{q}} for subquery, {{subq1.rows}} to {{rows}}
> * {{fromIndex=othercore}} optional param allows to run subquery on other 
> core, like it works on query time join
> * the itchiest one is to reference to document field from subquery 
> parameters, here I propose to use local param {{v}} and param deference 
> {{v=$param}} thus every document field implicitly introduces parameter for 
> subquery $\{paramPrefix\}row.$\{fieldName\}, thus above subquery is 
> q=child_id:, presumably we can drop "row." in the middle 
> (reducing to v=$subq1.id), until someone deals with {{rows}}, {{sort}} fields. 
> * \[subquery\], or \[query\], or ? 
> Caveat: it should be a way slow; it handles only search result page, not 
> entire result set. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-8371) Try and prevent too many recovery requests from stacking up and clean up some faulty logic.

2015-12-17 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8371?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller updated SOLR-8371:
--
Attachment: SOLR-8371.patch

New patch. We have actually been using the update executor for recovery 
threads - those threads can lead to IO (in tests I'm mostly seeing it as JMX 
getStats calls on close), and we now interrupt the update executor. I've made 
a new 'recoveryExecutor' to handle the main recovery threads.
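
A rough sketch of the separation (illustration only, using plain JDK executors 
rather than Solr's actual executor utilities): recovery work gets its own pool, 
so interrupting the update executor on close no longer hits recovery threads 
that may be doing IO.

{code}
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class RecoveryExecutors {
  // separate pools: one for updates, one dedicated to recovery threads
  final ExecutorService updateExecutor = Executors.newCachedThreadPool(
      r -> new Thread(r, "updateExecutor"));
  final ExecutorService recoveryExecutor = Executors.newCachedThreadPool(
      r -> new Thread(r, "recoveryExecutor"));
}
{code}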

> Try and prevent too many recovery requests from stacking up and clean up some 
> faulty logic.
> ---
>
> Key: SOLR-8371
> URL: https://issues.apache.org/jira/browse/SOLR-8371
> Project: Solr
>  Issue Type: Improvement
>Reporter: Mark Miller
>Assignee: Mark Miller
> Attachments: SOLR-8371.patch, SOLR-8371.patch, SOLR-8371.patch, 
> SOLR-8371.patch, SOLR-8371.patch, SOLR-8371.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8230) Create Facet Telemetry for Nested Facet Query

2015-12-17 Thread Michael Sun (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8230?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15062575#comment-15062575
 ] 

Michael Sun commented on SOLR-8230:
---

[~yo...@apache.org] The patch is on trunk.


> Create Facet Telemetry for Nested Facet Query
> -
>
> Key: SOLR-8230
> URL: https://issues.apache.org/jira/browse/SOLR-8230
> Project: Solr
>  Issue Type: Sub-task
>  Components: Facet Module
>Reporter: Michael Sun
> Fix For: Trunk
>
> Attachments: SOLR-8230.patch, SOLR-8230.patch, SOLR-8230.patch, 
> SOLR-8230.patch
>
>
> This is the first step for SOLR-8228 Facet Telemetry. It's going to implement 
> the telemetry for a nested facet query and put the information obtained in 
> debug field in response.
> Here is an example of telemetry returned from query. 
> Query
> {code}
> curl http://localhost:8228/solr/films/select -d 
> 'q=*:*&wt=json&indent=true&debug=true&json.facet={
> top_genre: {
>   type:terms,
>   field:genre,
>   numBucket:true,
>   limit:2,
>   facet: {
> top_director: {
> type:terms,
> field:directed_by,
> numBuckets:true,
> limit:2
> },
> first_release: {
> type:terms,
> field:initial_release_date,
> sort:{index:asc},
> numBuckets:true,
> limit:2
> }
>   }
> }
> }'
> {code}
> Telemetry returned (inside debug part)
> {code}
> "facet-trace":{
>   "processor":"FacetQueryProcessor",
>   "elapse":1,
>   "query":null,
>   "sub-facet":[{
>   "processor":"FacetFieldProcessorUIF",
>   "elapse":1,
>   "field":"genre",
>   "limit":2,
>   "sub-facet":[{
>   "filter":"genre:Drama",
>   "processor":"FacetFieldProcessorUIF",
>   "elapse":0,
>   "field":"directed_by",
>   "limit":2},
> {
>   "filter":"genre:Drama",
>   "processor":"FacetFieldProcessorNumeric",
>   "elapse":0,
>   "field":"initial_release_date",
>   "limit":2},
> {
>   "filter":"genre:Comedy",
>   "processor":"FacetFieldProcessorUIF",
>   "elapse":0,
>   "field":"directed_by",
>   "limit":2},
> {
>   "filter":"genre:Comedy",
>   "processor":"FacetFieldProcessorNumeric",
>   "elapse":0,
>   "field":"initial_release_date",
>   "limit":2}]}]},
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8364) SpellCheckComponentTest occasionally fails

2015-12-17 Thread James Dyer (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8364?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15062662#comment-15062662
 ] 

James Dyer commented on SOLR-8364:
--

This failure [Policeman 
#2948|http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-MacOSX/2948/console] 
also had the warning about 2 on-deck searchers.

> SpellCheckComponentTest occasionally fails
> --
>
> Key: SOLR-8364
> URL: https://issues.apache.org/jira/browse/SOLR-8364
> Project: Solr
>  Issue Type: Bug
>  Components: spellchecker
>Affects Versions: Trunk
>Reporter: James Dyer
>Priority: Minor
>
> This failure did not reproduce for me in Linux or Windows with the same seed.
> {quote}
> Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Windows/5439/
> : Java: 64bit/jdk1.8.0_66 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC
> : 
> : 1 tests failed.
> : FAILED:  org.apache.solr.handler.component.SpellCheckComponentTest.test
> : 
> : Error Message:
> : List size mismatch @ spellcheck/suggestions
> : 
> : Stack Trace:
> : java.lang.RuntimeException: List size mismatch @ spellcheck/suggestions
> {quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4280) spellcheck.maxResultsForSuggest based on filter query results

2015-12-17 Thread Markus Jelsma (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4280?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15062716#comment-15062716
 ] 

Markus Jelsma commented on SOLR-4280:
-

Great work James! Many thanks!

> spellcheck.maxResultsForSuggest based on filter query results
> -
>
> Key: SOLR-4280
> URL: https://issues.apache.org/jira/browse/SOLR-4280
> Project: Solr
>  Issue Type: Improvement
>  Components: spellchecker
>Reporter: Markus Jelsma
>Assignee: James Dyer
> Fix For: 4.9, Trunk
>
> Attachments: SOLR-4280-trunk-1.patch, SOLR-4280-trunk.patch, 
> SOLR-4280-trunk.patch, SOLR-4280.patch, SOLR-4280.patch, SOLR-4280.patch
>
>
> spellcheck.maxResultsForSuggest takes a fixed number but ideally should be 
> able to take a ratio and calculate that against the maximum number of results 
> the filter queries return.
> At least in our case this would certainly add a lot of value. >99% of our 
> end-users search within one or more filters of which one is always unique. 
> The number of documents for each of those unique filters varies significantly 
> ranging from 300 to 3.000.000 documents in which they search. The 
> maxResultsForSuggest is set to a reasonable low value so it kind of works 
> fine but sometimes leads to undesired suggestions for a large subcorpus that 
> has more misspellings.
> Spun off from SOLR-4278.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8429) add a flag blockUnauthenticated to BasicAuthPlugin

2015-12-17 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8429?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15062556#comment-15062556
 ] 

Noble Paul commented on SOLR-8429:
--

bq.Cool. This workaround would require blockUnauthenticated to be false, right?

yes


bq.Just a thought: If the new flag blockUnauthenticated is not explicitly 
defined in config, could the default be smart and depend on whether an 
Authorization plugin is active or not?

I'm kinda against any rule which requires a user to read the documentation to 
understand it. The rule of thumb is that if a user looks at the 
{{security.json}} he should have a good idea of what could happen. 
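
For illustration, the proposed flag would presumably sit next to the existing 
BasicAuthPlugin section of {{security.json}}, roughly like this (a sketch using 
the flag name from this issue; the credentials value is a placeholder):

{code}
{
  "authentication": {
    "class": "solr.BasicAuthPlugin",
    "blockUnauthenticated": true,
    "credentials": {
      "solr": "<base64 sha256 hash> <base64 salt>"
    }
  }
}
{code}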


> add a flag blockUnauthenticated to BasicAuthPlugin
> -
>
> Key: SOLR-8429
> URL: https://issues.apache.org/jira/browse/SOLR-8429
> Project: Solr
>  Issue Type: Improvement
>Reporter: Noble Paul
>Assignee: Noble Paul
>
> If authentication is set up with BasicAuthPlugin, it lets all requests go 
> through if no credentials are passed. This was done to have minimal impact 
> for users who only wish to protect a few end points (say, collection admin 
> and core admin only).
> We can add a flag to {{BasicAuthPlugin}} to allow only authenticated requests 
> to go in.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS-EA] Lucene-Solr-trunk-Linux (64bit/jdk-9-ea+95) - Build # 15231 - Still Failing!

2015-12-17 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/15231/
Java: 64bit/jdk-9-ea+95 -XX:+UseCompressedOops -XX:+UseSerialGC 
-XX:-CompactStrings

3 tests failed.
FAILED:  org.apache.solr.cloud.CollectionsAPIDistributedZkTest.test

Error Message:
Error from server at http://127.0.0.1:60429/_i/pn/awholynewcollection_0: non ok 
status: 500, message:Server Error

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at http://127.0.0.1:60429/_i/pn/awholynewcollection_0: non ok 
status: 500, message:Server Error
at 
__randomizedtesting.SeedInfo.seed([EA8660D609B2C572:62D25F0CA74EA88A]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:509)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:241)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:230)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:150)
at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:943)
at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:958)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.waitForNon403or404or503(AbstractFullDistribZkTestBase.java:1754)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testCollectionsAPI(CollectionsAPIDistributedZkTest.java:638)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.test(CollectionsAPIDistributedZkTest.java:160)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:520)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:965)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:940)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 

[jira] [Commented] (SOLR-4280) spellcheck.maxResultsForSuggest based on filter query results

2015-12-17 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4280?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15062645#comment-15062645
 ] 

ASF subversion and git services commented on SOLR-4280:
---

Commit 1720636 from jd...@apache.org in branch 'dev/trunk'
[ https://svn.apache.org/r1720636 ]

SOLR-4280: Allow specifying "spellcheck.maxResultsForSuggest" as a percentage 
of filter query results

> spellcheck.maxResultsForSuggest based on filter query results
> -
>
> Key: SOLR-4280
> URL: https://issues.apache.org/jira/browse/SOLR-4280
> Project: Solr
>  Issue Type: Improvement
>  Components: spellchecker
>Reporter: Markus Jelsma
>Assignee: James Dyer
> Fix For: 4.9, Trunk
>
> Attachments: SOLR-4280-trunk-1.patch, SOLR-4280-trunk.patch, 
> SOLR-4280-trunk.patch, SOLR-4280.patch, SOLR-4280.patch, SOLR-4280.patch
>
>
> spellcheck.maxResultsForSuggest takes a fixed number but ideally should be 
> able to take a ratio and calculate that against the maximum number of results 
> the filter queries return.
> At least in our case this would certainly add a lot of value. >99% of our 
> end-users search within one or more filters of which one is always unique. 
> The number of documents for each of those unique filters varies significantly 
> ranging from 300 to 3.000.000 documents in which they search. The 
> maxResultsForSuggest is set to a reasonable low value so it kind of works 
> fine but sometimes leads to undesired suggestions for a large subcorpus that 
> has more misspellings.
> Spun off from SOLR-4278.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8279) Add a new test fault injection approach and a new SolrCloud test that stops and starts the cluster while indexing data and with random faults.

2015-12-17 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8279?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15062683#comment-15062683
 ] 

Mark Miller commented on SOLR-8279:
---

I'll give Jenkins some time before backporting this to 5x.

> Add a new test fault injection approach and a new SolrCloud test that stops 
> and starts the cluster while indexing data and with random faults.
> --
>
> Key: SOLR-8279
> URL: https://issues.apache.org/jira/browse/SOLR-8279
> Project: Solr
>  Issue Type: Test
>Reporter: Mark Miller
>Assignee: Mark Miller
> Attachments: SOLR-8279.patch, SOLR-8279.patch, SOLR-8279.patch, 
> SOLR-8279.patch, SOLR-8279.patch, SOLR-8279.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-8438) fromIndex= param for [subquery ] DocTransformer

2015-12-17 Thread Mikhail Khludnev (JIRA)
Mikhail Khludnev created SOLR-8438:
--

 Summary: fromIndex= param for [subquery ] DocTransformer 
 Key: SOLR-8438
 URL: https://issues.apache.org/jira/browse/SOLR-8438
 Project: Solr
  Issue Type: Sub-task
Reporter: Mikhail Khludnev






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6922) Improve svn to git workaround script

2015-12-17 Thread Paul Elschot (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15062738#comment-15062738
 ] 

Paul Elschot commented on LUCENE-6922:
--

The patch is committable, with the usual guarantees :)

Do not run this script from an svn working copy or a git working tree; it might 
overwrite itself there. Copy it to another place first.

I'm planning to make the script also start a new branch from the earliest 
available revision of an svn branch.
The idea is to try this for some svn branches that need their history in git; 
see LUCENE-6933.

> Improve svn to git workaround script
> 
>
> Key: LUCENE-6922
> URL: https://issues.apache.org/jira/browse/LUCENE-6922
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: -tools
>Reporter: Paul Elschot
>Priority: Minor
> Attachments: svnBranchToGit.py, svnBranchToGit.py
>
>
> As the git-svn mirror for Lucene/Solr will be turned off near the end of 
> 2015, try and improve the workaround script to become more usable.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8230) Create Facet Telemetry for Nested Facet Query

2015-12-17 Thread Michael Sun (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8230?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15062593#comment-15062593
 ] 

Michael Sun commented on SOLR-8230:
---

Attaching an updated patch generated against the current trunk.


> Create Facet Telemetry for Nested Facet Query
> -
>
> Key: SOLR-8230
> URL: https://issues.apache.org/jira/browse/SOLR-8230
> Project: Solr
>  Issue Type: Sub-task
>  Components: Facet Module
>Reporter: Michael Sun
> Fix For: Trunk
>
> Attachments: SOLR-8230.patch, SOLR-8230.patch, SOLR-8230.patch, 
> SOLR-8230.patch, SOLR-8230.patch
>
>
> This is the first step for SOLR-8228 Facet Telemetry. It's going to implement 
> the telemetry for a nested facet query and put the information obtained in 
> debug field in response.
> Here is an example of telemetry returned from query. 
> Query
> {code}
> curl http://localhost:8228/solr/films/select -d 
> 'q=*:*&wt=json&indent=true&debug=true&json.facet={
> top_genre: {
>   type:terms,
>   field:genre,
>   numBucket:true,
>   limit:2,
>   facet: {
> top_director: {
> type:terms,
> field:directed_by,
> numBuckets:true,
> limit:2
> },
> first_release: {
> type:terms,
> field:initial_release_date,
> sort:{index:asc},
> numBuckets:true,
> limit:2
> }
>   }
> }
> }'
> {code}
> Telemetry returned (inside debug part)
> {code}
> "facet-trace":{
>   "processor":"FacetQueryProcessor",
>   "elapse":1,
>   "query":null,
>   "sub-facet":[{
>   "processor":"FacetFieldProcessorUIF",
>   "elapse":1,
>   "field":"genre",
>   "limit":2,
>   "sub-facet":[{
>   "filter":"genre:Drama",
>   "processor":"FacetFieldProcessorUIF",
>   "elapse":0,
>   "field":"directed_by",
>   "limit":2},
> {
>   "filter":"genre:Drama",
>   "processor":"FacetFieldProcessorNumeric",
>   "elapse":0,
>   "field":"initial_release_date",
>   "limit":2},
> {
>   "filter":"genre:Comedy",
>   "processor":"FacetFieldProcessorUIF",
>   "elapse":0,
>   "field":"directed_by",
>   "limit":2},
> {
>   "filter":"genre:Comedy",
>   "processor":"FacetFieldProcessorNumeric",
>   "elapse":0,
>   "field":"initial_release_date",
>   "limit":2}]}]},
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8437) Remove outdated RAMDirectory comment from example solrconfigs

2015-12-17 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8437?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15062676#comment-15062676
 ] 

Mark Miller commented on SOLR-8437:
---

It's still memory based and not persistent. It also mainly exists for faster 
tests. I don't know that we should really promote it.

> Remove outdated RAMDirectory comment from example solrconfigs
> -
>
> Key: SOLR-8437
> URL: https://issues.apache.org/jira/browse/SOLR-8437
> Project: Solr
>  Issue Type: Improvement
>Reporter: Varun Thacker
>Assignee: Varun Thacker
>Priority: Minor
> Fix For: 5.5, Trunk
>
>
> There is a comment here in the solrconfig.xml file -
> {code}
>solr.RAMDirectoryFactory is memory based, not
>persistent, and doesn't work with replication.
> {code}
> This is outdated after SOLR-3911. I tried recovering a replica manually as 
> well when it was using RAMDirectoryFactory and it worked just fine.
> So we should just get rid of that comment from all the example configs 
> shipped with Solr.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8433) IterativeMergeStrategy test failures due to SSL errors on Windows

2015-12-17 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8433?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15062886#comment-15062886
 ] 

ASF subversion and git services commented on SOLR-8433:
---

Commit 1720673 from [~joel.bernstein] in branch 'dev/trunk'
[ https://svn.apache.org/r1720673 ]

SOLR-8433: Adding logging

> IterativeMergeStrategy test failures due to SSL errors  on Windows
> --
>
> Key: SOLR-8433
> URL: https://issues.apache.org/jira/browse/SOLR-8433
> Project: Solr
>  Issue Type: Bug
>Reporter: Joel Bernstein
>
> The AnalyticsMergeStrageyTest is failing on Windows with SSL errors. The 
> failures are occurring during the callbacks to the shards introduced in 
> SOLR-6398.
> {code}
>   
> [junit4]   2> Caused by: javax.net.ssl.SSLHandshakeException: 
> sun.security.validator.ValidatorException: PKIX path building failed: 
> sun.security.provider.certpath.SunCertPathBuilderException: unable to find 
> valid certification path to requested target
>[junit4]   2>  at 
> sun.security.ssl.Alerts.getSSLException(Alerts.java:192)
>[junit4]   2>  at 
> sun.security.ssl.SSLSocketImpl.fatal(SSLSocketImpl.java:1949)
>[junit4]   2>  at 
> sun.security.ssl.Handshaker.fatalSE(Handshaker.java:302)
>[junit4]   2>  at 
> sun.security.ssl.Handshaker.fatalSE(Handshaker.java:296)
>[junit4]   2>  at 
> sun.security.ssl.ClientHandshaker.serverCertificate(ClientHandshaker.java:1509)
>[junit4]   2>  at 
> sun.security.ssl.ClientHandshaker.processMessage(ClientHandshaker.java:216)
>[junit4]   2>  at 
> sun.security.ssl.Handshaker.processLoop(Handshaker.java:979)
>[junit4]   2>  at 
> sun.security.ssl.Handshaker.process_record(Handshaker.java:914)
>[junit4]   2>  at 
> sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:1062)
>[junit4]   2>  at 
> sun.security.ssl.SSLSocketImpl.performInitialHandshake(SSLSocketImpl.java:1375)
>[junit4]   2>  at 
> sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1403)
>[junit4]   2>  at 
> sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1387)
>[junit4]   2>  at 
> org.apache.http.conn.ssl.SSLSocketFactory.connectSocket(SSLSocketFactory.java:543)
>[junit4]   2>  at 
> org.apache.http.conn.ssl.SSLSocketFactory.connectSocket(SSLSocketFactory.java:409)
>[junit4]   2>  at 
> org.apache.http.impl.conn.DefaultClientConnectionOperator.openConnection(DefaultClientConnectionOperator.java:177)
>[junit4]   2>  at 
> org.apache.http.impl.conn.ManagedClientConnectionImpl.open(ManagedClientConnectionImpl.java:304)
>[junit4]   2>  at 
> org.apache.http.impl.client.DefaultRequestDirector.tryConnect(DefaultRequestDirector.java:611)
>[junit4]   2>  at 
> org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:446)
>[junit4]   2>  at 
> org.apache.http.impl.client.AbstractHttpClient.doExecute(AbstractHttpClient.java:882)
>[junit4]   2>  at 
> org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:82)
>[junit4]   2>  at 
> org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:107)
>[junit4]   2>  at 
> org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:55)
>[junit4]   2>  at 
> org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:480)
>[junit4]   2>  ... 11 more
>[junit4]   2> Caused by: sun.security.validator.ValidatorException: PKIX 
> path building failed: 
> sun.security.provider.certpath.SunCertPathBuilderException: unable to find 
> valid certification path to requested target
>[junit4]   2>  at 
> sun.security.validator.PKIXValidator.doBuild(PKIXValidator.java:387)
>[junit4]   2>  at 
> sun.security.validator.PKIXValidator.engineValidate(PKIXValidator.java:292)
>[junit4]   2>  at 
> sun.security.validator.Validator.validate(Validator.java:260)
>[junit4]   2>  at 
> sun.security.ssl.X509TrustManagerImpl.validate(X509TrustManagerImpl.java:324)
>[junit4]   2>  at 
> sun.security.ssl.X509TrustManagerImpl.checkTrusted(X509TrustManagerImpl.java:229)
>[junit4]   2>  at 
> sun.security.ssl.X509TrustManagerImpl.checkServerTrusted(X509TrustManagerImpl.java:124)
>[junit4]   2>  at 
> sun.security.ssl.ClientHandshaker.serverCertificate(ClientHandshaker.java:1491)
>[junit4]   2>  ... 29 more
>[junit4]   2> Caused by: 
> sun.security.provider.certpath.SunCertPathBuilderException: unable to find 
> valid certification path to requested target
>[junit4]   2>  at 
> sun.security.provider.certpath.SunCertPathBuilder.build(SunCertPathBuilder.java:146)
>[junit4]   2>  

[jira] [Updated] (SOLR-4280) spellcheck.maxResultsForSuggest based on filter query results

2015-12-17 Thread James Dyer (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4280?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Dyer updated SOLR-4280:
-
Fix Version/s: (was: 4.9)
   (was: Trunk)
   5.5

> spellcheck.maxResultsForSuggest based on filter query results
> -
>
> Key: SOLR-4280
> URL: https://issues.apache.org/jira/browse/SOLR-4280
> Project: Solr
>  Issue Type: Improvement
>  Components: spellchecker
>Reporter: Markus Jelsma
>Assignee: James Dyer
> Fix For: 5.5
>
> Attachments: SOLR-4280-trunk-1.patch, SOLR-4280-trunk.patch, 
> SOLR-4280-trunk.patch, SOLR-4280.patch, SOLR-4280.patch, SOLR-4280.patch
>
>
> spellcheck.maxResultsForSuggest takes a fixed number but ideally should be 
> able to take a ratio and calculate that against the maximum number of results 
> the filter queries return.
> At least in our case this would certainly add a lot of value. >99% of our 
> end-users search within one or more filters of which one is always unique. 
> The number of documents for each of those unique filters varies significantly 
> ranging from 300 to 3.000.000 documents in which they search. The 
> maxResultsForSuggest is set to a reasonable low value so it kind of works 
> fine but sometimes leads to undesired suggestions for a large subcorpus that 
> has more misspellings.
> Spun off from SOLR-4278.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-8371) Try and prevent too many recovery requests from stacking up and clean up some faulty logic.

2015-12-17 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8371?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller updated SOLR-8371:
--
Attachment: SOLR-8371.patch

> Try and prevent too many recovery requests from stacking up and clean up some 
> faulty logic.
> ---
>
> Key: SOLR-8371
> URL: https://issues.apache.org/jira/browse/SOLR-8371
> Project: Solr
>  Issue Type: Improvement
>Reporter: Mark Miller
>Assignee: Mark Miller
> Attachments: SOLR-8371.patch, SOLR-8371.patch, SOLR-8371.patch, 
> SOLR-8371.patch, SOLR-8371.patch, SOLR-8371.patch, SOLR-8371.patch, 
> SOLR-8371.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-4280) spellcheck.maxResultsForSuggest based on filter query results

2015-12-17 Thread James Dyer (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4280?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Dyer resolved SOLR-4280.
--
Resolution: Fixed

And thanks to you, Markus, for actually developing the code for this.

> spellcheck.maxResultsForSuggest based on filter query results
> -
>
> Key: SOLR-4280
> URL: https://issues.apache.org/jira/browse/SOLR-4280
> Project: Solr
>  Issue Type: Improvement
>  Components: spellchecker
>Reporter: Markus Jelsma
>Assignee: James Dyer
> Fix For: 5.5
>
> Attachments: SOLR-4280-trunk-1.patch, SOLR-4280-trunk.patch, 
> SOLR-4280-trunk.patch, SOLR-4280.patch, SOLR-4280.patch, SOLR-4280.patch
>
>
> spellcheck.maxResultsForSuggest takes a fixed number but ideally should be 
> able to take a ratio and calculate that against the maximum number of results 
> the filter queries return.
> At least in our case this would certainly add a lot of value. >99% of our 
> end-users search within one or more filters of which one is always unique. 
> The number of documents for each of those unique filters varies significantly 
> ranging from 300 to 3.000.000 documents in which they search. The 
> maxResultsForSuggest is set to a reasonable low value so it kind of works 
> fine but sometimes leads to undesired suggestions for a large subcorpus that 
> has more misspellings.
> Spun off from SOLR-4278.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8190) Implement Closeable on TupleStream

2015-12-17 Thread Jason Gerlowski (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8190?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15062818#comment-15062818
 ] 

Jason Gerlowski commented on SOLR-8190:
---

Yeah, I guess it doesn't make a ton of sense to push try-with-finally when it 
doesn't really work with the {{TupleStream}} API.

I agree with you and Joel; it makes sense to catch these special cases and 
throw an {{IllegalStateException}} or something similar.
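
For illustration, the kind of guard being discussed could look roughly like the 
sketch below; the class name, the {{opened}} flag, and the messages are 
hypothetical placeholders, not code taken from the attached patches.

{code}
import java.io.Closeable;
import java.io.IOException;

// Hypothetical sketch only; names are placeholders, not the real TupleStream code.
public class GuardedStream implements Closeable {

  private boolean opened;

  public void open() throws IOException {
    if (opened) {
      // Guard against double-open instead of silently continuing.
      throw new IllegalStateException("stream is already open");
    }
    opened = true;
    // ... open underlying clients/resources here ...
  }

  @Override
  public void close() throws IOException {
    if (!opened) {
      // Closing a stream that was never opened is treated as a usage error.
      throw new IllegalStateException("stream was never opened");
    }
    opened = false;
    // ... release underlying clients/resources here ...
  }
}
{code}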

> Implement Closeable on TupleStream
> --
>
> Key: SOLR-8190
> URL: https://issues.apache.org/jira/browse/SOLR-8190
> Project: Solr
>  Issue Type: Bug
>  Components: SolrJ
>Affects Versions: Trunk
>Reporter: Kevin Risden
>Assignee: Joel Bernstein
>Priority: Minor
> Attachments: SOLR-8190.patch, SOLR-8190.patch
>
>
> Implementing Closeable on TupleStream provides the ability to use 
> try-with-resources 
> (https://docs.oracle.com/javase/tutorial/essential/exceptions/tryResourceClose.html)
>  in tests and in practice. This prevents TupleStreams from being left open 
> when there is an error in the tests.
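
As a rough illustration of the pattern this enables (the {{createStream()}} and 
{{process()}} calls below are hypothetical placeholders, not part of the patch):

{code}
// Minimal sketch of try-with-resources over a TupleStream once it implements
// Closeable; close() runs automatically even if open() or read() throws.
void consume() throws IOException {
  try (TupleStream stream = createStream()) {  // hypothetical factory
    stream.open();
    Tuple tuple = stream.read();
    while (!tuple.EOF) {
      process(tuple);                          // hypothetical consumer
      tuple = stream.read();
    }
  }
}
{code}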



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-8015) HdfsLock may fail to close a FileSystem instance if it cannot immediately obtain an index lock.

2015-12-17 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8015?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller updated SOLR-8015:
--
Attachment: SOLR-8015.patch

Patch with change.

> HdfsLock may fail to close a FileSystem instance if it cannot immediately 
> obtain an index lock.
> ---
>
> Key: SOLR-8015
> URL: https://issues.apache.org/jira/browse/SOLR-8015
> Project: Solr
>  Issue Type: Bug
>  Components: hdfs
>Reporter: Mark Miller
>Assignee: Mark Miller
> Fix For: 5.5
>
> Attachments: SOLR-8015.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-8371) Try and prevent too many recovery requests from stacking up and clean up some faulty logic.

2015-12-17 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8371?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller updated SOLR-8371:
--
Attachment: SOLR-8371.patch

> Try and prevent too many recovery requests from stacking up and clean up some 
> faulty logic.
> ---
>
> Key: SOLR-8371
> URL: https://issues.apache.org/jira/browse/SOLR-8371
> Project: Solr
>  Issue Type: Improvement
>Reporter: Mark Miller
>Assignee: Mark Miller
> Attachments: SOLR-8371.patch, SOLR-8371.patch, SOLR-8371.patch, 
> SOLR-8371.patch, SOLR-8371.patch, SOLR-8371.patch, SOLR-8371.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6922) Improve svn to git workaround script

2015-12-17 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15062980#comment-15062980
 ] 

ASF subversion and git services commented on LUCENE-6922:
-

Commit 1720686 from [~mikemccand] in branch 'dev/trunk'
[ https://svn.apache.org/r1720686 ]

LUCENE-6922: latest version of svn to git mirror workaround script, from Paul 
Elschot

> Improve svn to git workaround script
> 
>
> Key: LUCENE-6922
> URL: https://issues.apache.org/jira/browse/LUCENE-6922
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: -tools
>Reporter: Paul Elschot
>Priority: Minor
> Attachments: svnBranchToGit.py, svnBranchToGit.py
>
>
> As the git-svn mirror for Lucene/Solr will be turned off near the end of 
> 2015, try and improve the workaround script to become more usable.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-8433) IterativeMergeStrategy test failures due to SSL errors on Windows

2015-12-17 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8433?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15062873#comment-15062873
 ] 

Joel Bernstein edited comment on SOLR-8433 at 12/17/15 9:36 PM:


A new error has cropped up with the AnalyticsMergeStrategyTest since the last 
commit. I'm going to add some logging output to see if I can track down the 
issue. The stack trace is below:

{code}

Caused by: java.lang.IllegalStateException: Scheme 'http' not registered.
   [junit4]   2>at 
org.apache.http.conn.scheme.SchemeRegistry.getScheme(SchemeRegistry.java:74)
   [junit4]   2>at 
org.apache.http.impl.conn.ProxySelectorRoutePlanner.determineRoute(ProxySelectorRoutePlanner.java:140)
   [junit4]   2>at 
org.apache.http.impl.client.DefaultRequestDirector.determineRoute(DefaultRequestDirector.java:762)
   [junit4]   2>at 
org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:381)
   [junit4]   2>at 
org.apache.http.impl.client.AbstractHttpClient.doExecute(AbstractHttpClient.java:882)
   [junit4]   2>at 
org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:82)
   [junit4]   2>at 
org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:107)
   [junit4]   2>at 
org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:55)
   [junit4]   2>at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:480)
   [junit4]   2>at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:241)
   [junit4]   2>at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:230)
   [junit4]   2>at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:150)
   [junit4]   2>at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:167)
   [junit4]   2>at 
org.apache.solr.handler.component.IterativeMergeStrategy$CallBack.call(IterativeMergeStrategy.java:105)
   [junit4]   2>at 
org.apache.solr.handler.component.IterativeMergeStrategy$CallBack.call(IterativeMergeStrategy.java:81)
   [junit4]   2>at 
java.util.concurrent.FutureTask.run(FutureTask.java:266)
   [junit4]   2>at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor$1.run(ExecutorUtil.java:232)
   [junit4]   2>at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
   [junit4]   2>at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
   [junit4]   2>... 1 more

{code}


was (Author: joel.bernstein):
A new error has cropped with the AnalyticsMergeStrategyTest since the last 
commit. I'm going to add some logging output to see if I can track down the 
issue. The stack trace is below:

{code}

Caused by: java.lang.IllegalStateException: Scheme 'http' not registered.
   [junit4]   2>at 
org.apache.http.conn.scheme.SchemeRegistry.getScheme(SchemeRegistry.java:74)
   [junit4]   2>at 
org.apache.http.impl.conn.ProxySelectorRoutePlanner.determineRoute(ProxySelectorRoutePlanner.java:140)
   [junit4]   2>at 
org.apache.http.impl.client.DefaultRequestDirector.determineRoute(DefaultRequestDirector.java:762)
   [junit4]   2>at 
org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:381)
   [junit4]   2>at 
org.apache.http.impl.client.AbstractHttpClient.doExecute(AbstractHttpClient.java:882)
   [junit4]   2>at 
org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:82)
   [junit4]   2>at 
org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:107)
   [junit4]   2>at 
org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:55)
   [junit4]   2>at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:480)
   [junit4]   2>at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:241)
   [junit4]   2>at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:230)
   [junit4]   2>at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:150)
   [junit4]   2>at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:167)
   [junit4]   2>at 
org.apache.solr.handler.component.IterativeMergeStrategy$CallBack.call(IterativeMergeStrategy.java:105)
   [junit4]   2>at 
org.apache.solr.handler.component.IterativeMergeStrategy$CallBack.call(IterativeMergeStrategy.java:81)
   [junit4]   2>at 
java.util.concurrent.FutureTask.run(FutureTask.java:266)
   [junit4]   2>at 

[jira] [Commented] (SOLR-8433) IterativeMergeStrategy test failures due to SSL errors on Windows

2015-12-17 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8433?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15062873#comment-15062873
 ] 

Joel Bernstein commented on SOLR-8433:
--

A new error has cropped with the AnalyticsMergeStrategyTest since the last 
commit. I'm going to add some logging output to see if I can track down the 
issue. The stack trace is below:

{code}

Caused by: java.lang.IllegalStateException: Scheme 'http' not registered.
   [junit4]   2>at 
org.apache.http.conn.scheme.SchemeRegistry.getScheme(SchemeRegistry.java:74)
   [junit4]   2>at 
org.apache.http.impl.conn.ProxySelectorRoutePlanner.determineRoute(ProxySelectorRoutePlanner.java:140)
   [junit4]   2>at 
org.apache.http.impl.client.DefaultRequestDirector.determineRoute(DefaultRequestDirector.java:762)
   [junit4]   2>at 
org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:381)
   [junit4]   2>at 
org.apache.http.impl.client.AbstractHttpClient.doExecute(AbstractHttpClient.java:882)
   [junit4]   2>at 
org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:82)
   [junit4]   2>at 
org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:107)
   [junit4]   2>at 
org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:55)
   [junit4]   2>at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:480)
   [junit4]   2>at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:241)
   [junit4]   2>at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:230)
   [junit4]   2>at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:150)
   [junit4]   2>at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:167)
   [junit4]   2>at 
org.apache.solr.handler.component.IterativeMergeStrategy$CallBack.call(IterativeMergeStrategy.java:105)
   [junit4]   2>at 
org.apache.solr.handler.component.IterativeMergeStrategy$CallBack.call(IterativeMergeStrategy.java:81)
   [junit4]   2>at 
java.util.concurrent.FutureTask.run(FutureTask.java:266)
   [junit4]   2>at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor$1.run(ExecutorUtil.java:232)
   [junit4]   2>at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
   [junit4]   2>at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
   [junit4]   2>... 1 more

{code}

> IterativeMergeStrategy test failures due to SSL errors  on Windows
> --
>
> Key: SOLR-8433
> URL: https://issues.apache.org/jira/browse/SOLR-8433
> Project: Solr
>  Issue Type: Bug
>Reporter: Joel Bernstein
>
> The AnalyticsMergeStrageyTest is failing on Windows with SSL errors. The 
> failures are occurring during the callbacks to the shards introduced in 
> SOLR-6398.
> {code}
>   
> [junit4]   2> Caused by: javax.net.ssl.SSLHandshakeException: 
> sun.security.validator.ValidatorException: PKIX path building failed: 
> sun.security.provider.certpath.SunCertPathBuilderException: unable to find 
> valid certification path to requested target
>[junit4]   2>  at 
> sun.security.ssl.Alerts.getSSLException(Alerts.java:192)
>[junit4]   2>  at 
> sun.security.ssl.SSLSocketImpl.fatal(SSLSocketImpl.java:1949)
>[junit4]   2>  at 
> sun.security.ssl.Handshaker.fatalSE(Handshaker.java:302)
>[junit4]   2>  at 
> sun.security.ssl.Handshaker.fatalSE(Handshaker.java:296)
>[junit4]   2>  at 
> sun.security.ssl.ClientHandshaker.serverCertificate(ClientHandshaker.java:1509)
>[junit4]   2>  at 
> sun.security.ssl.ClientHandshaker.processMessage(ClientHandshaker.java:216)
>[junit4]   2>  at 
> sun.security.ssl.Handshaker.processLoop(Handshaker.java:979)
>[junit4]   2>  at 
> sun.security.ssl.Handshaker.process_record(Handshaker.java:914)
>[junit4]   2>  at 
> sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:1062)
>[junit4]   2>  at 
> sun.security.ssl.SSLSocketImpl.performInitialHandshake(SSLSocketImpl.java:1375)
>[junit4]   2>  at 
> sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1403)
>[junit4]   2>  at 
> sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1387)
>[junit4]   2>  at 
> org.apache.http.conn.ssl.SSLSocketFactory.connectSocket(SSLSocketFactory.java:543)
>[junit4]   2>  at 
> org.apache.http.conn.ssl.SSLSocketFactory.connectSocket(SSLSocketFactory.java:409)
>[junit4]   2>  at 
> 

[jira] [Commented] (LUCENE-6922) Improve svn to git workaround script

2015-12-17 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15062981#comment-15062981
 ] 

ASF subversion and git services commented on LUCENE-6922:
-

Commit 1720687 from [~mikemccand] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1720687 ]

LUCENE-6922: latest version of svn to git mirror workaround script, from Paul 
Elschot

> Improve svn to git workaround script
> 
>
> Key: LUCENE-6922
> URL: https://issues.apache.org/jira/browse/LUCENE-6922
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: -tools
>Reporter: Paul Elschot
>Priority: Minor
> Attachments: svnBranchToGit.py, svnBranchToGit.py
>
>
> As the git-svn mirror for Lucene/Solr will be turned off near the end of 
> 2015, try and improve the workaround script to become more usable.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6922) Improve svn to git workaround script

2015-12-17 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15062983#comment-15062983
 ] 

Michael McCandless commented on LUCENE-6922:


OK I committed the last patch (from yesterday) ... thanks 
[~paul.elsc...@xs4all.nl]!

> Improve svn to git workaround script
> 
>
> Key: LUCENE-6922
> URL: https://issues.apache.org/jira/browse/LUCENE-6922
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: -tools
>Reporter: Paul Elschot
>Priority: Minor
> Attachments: svnBranchToGit.py, svnBranchToGit.py
>
>
> As the git-svn mirror for Lucene/Solr will be turned off near the end of 
> 2015, try and improve the workaround script to become more usable.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8433) IterativeMergeStrategy test failures due to SSL errors on Windows

2015-12-17 Thread Shalin Shekhar Mangar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8433?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15062982#comment-15062982
 ] 

Shalin Shekhar Mangar commented on SOLR-8433:
-

That particular error happens when trying to make an HTTP request when SSL is 
enabled. When the test framework enables SSL, it removes the HTTP scheme to 
ensure that no one is trying to access any URL over HTTP.
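
For readers unfamiliar with the HttpClient 4.x behaviour behind that stack 
trace, here is a standalone illustration (not Solr test code; the class name is 
a placeholder):

{code}
import org.apache.http.conn.scheme.Scheme;
import org.apache.http.conn.scheme.SchemeRegistry;
import org.apache.http.conn.ssl.SSLSocketFactory;

public class SchemeRegistryDemo {
  public static void main(String[] args) {
    // Register only "https", as an SSL-only test configuration would.
    SchemeRegistry registry = new SchemeRegistry();
    registry.register(new Scheme("https", 443, SSLSocketFactory.getSocketFactory()));

    // Looking up "http" now fails with:
    // java.lang.IllegalStateException: Scheme 'http' not registered.
    registry.getScheme("http");
  }
}
{code}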

> IterativeMergeStrategy test failures due to SSL errors  on Windows
> --
>
> Key: SOLR-8433
> URL: https://issues.apache.org/jira/browse/SOLR-8433
> Project: Solr
>  Issue Type: Bug
>Reporter: Joel Bernstein
>
> The AnalyticsMergeStrageyTest is failing on Windows with SSL errors. The 
> failures are occurring during the callbacks to the shards introduced in 
> SOLR-6398.
> {code}
>   
> [junit4]   2> Caused by: javax.net.ssl.SSLHandshakeException: 
> sun.security.validator.ValidatorException: PKIX path building failed: 
> sun.security.provider.certpath.SunCertPathBuilderException: unable to find 
> valid certification path to requested target
>[junit4]   2>  at 
> sun.security.ssl.Alerts.getSSLException(Alerts.java:192)
>[junit4]   2>  at 
> sun.security.ssl.SSLSocketImpl.fatal(SSLSocketImpl.java:1949)
>[junit4]   2>  at 
> sun.security.ssl.Handshaker.fatalSE(Handshaker.java:302)
>[junit4]   2>  at 
> sun.security.ssl.Handshaker.fatalSE(Handshaker.java:296)
>[junit4]   2>  at 
> sun.security.ssl.ClientHandshaker.serverCertificate(ClientHandshaker.java:1509)
>[junit4]   2>  at 
> sun.security.ssl.ClientHandshaker.processMessage(ClientHandshaker.java:216)
>[junit4]   2>  at 
> sun.security.ssl.Handshaker.processLoop(Handshaker.java:979)
>[junit4]   2>  at 
> sun.security.ssl.Handshaker.process_record(Handshaker.java:914)
>[junit4]   2>  at 
> sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:1062)
>[junit4]   2>  at 
> sun.security.ssl.SSLSocketImpl.performInitialHandshake(SSLSocketImpl.java:1375)
>[junit4]   2>  at 
> sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1403)
>[junit4]   2>  at 
> sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1387)
>[junit4]   2>  at 
> org.apache.http.conn.ssl.SSLSocketFactory.connectSocket(SSLSocketFactory.java:543)
>[junit4]   2>  at 
> org.apache.http.conn.ssl.SSLSocketFactory.connectSocket(SSLSocketFactory.java:409)
>[junit4]   2>  at 
> org.apache.http.impl.conn.DefaultClientConnectionOperator.openConnection(DefaultClientConnectionOperator.java:177)
>[junit4]   2>  at 
> org.apache.http.impl.conn.ManagedClientConnectionImpl.open(ManagedClientConnectionImpl.java:304)
>[junit4]   2>  at 
> org.apache.http.impl.client.DefaultRequestDirector.tryConnect(DefaultRequestDirector.java:611)
>[junit4]   2>  at 
> org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:446)
>[junit4]   2>  at 
> org.apache.http.impl.client.AbstractHttpClient.doExecute(AbstractHttpClient.java:882)
>[junit4]   2>  at 
> org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:82)
>[junit4]   2>  at 
> org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:107)
>[junit4]   2>  at 
> org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:55)
>[junit4]   2>  at 
> org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:480)
>[junit4]   2>  ... 11 more
>[junit4]   2> Caused by: sun.security.validator.ValidatorException: PKIX 
> path building failed: 
> sun.security.provider.certpath.SunCertPathBuilderException: unable to find 
> valid certification path to requested target
>[junit4]   2>  at 
> sun.security.validator.PKIXValidator.doBuild(PKIXValidator.java:387)
>[junit4]   2>  at 
> sun.security.validator.PKIXValidator.engineValidate(PKIXValidator.java:292)
>[junit4]   2>  at 
> sun.security.validator.Validator.validate(Validator.java:260)
>[junit4]   2>  at 
> sun.security.ssl.X509TrustManagerImpl.validate(X509TrustManagerImpl.java:324)
>[junit4]   2>  at 
> sun.security.ssl.X509TrustManagerImpl.checkTrusted(X509TrustManagerImpl.java:229)
>[junit4]   2>  at 
> sun.security.ssl.X509TrustManagerImpl.checkServerTrusted(X509TrustManagerImpl.java:124)
>[junit4]   2>  at 
> sun.security.ssl.ClientHandshaker.serverCertificate(ClientHandshaker.java:1491)
>[junit4]   2>  ... 29 more
>[junit4]   2> Caused by: 
> sun.security.provider.certpath.SunCertPathBuilderException: unable to find 
> valid certification path to requested target
>[junit4]   2>  at 
> 

Managed Resource Unit Test Failures

2015-12-17 Thread Michael Nilsson
I'm working on publishing a patch against trunk, adding a learning to rank
contrib module.  For some reason, our unit tests that hit our config
managed resources no longer seem to be recognizing the config/managed
endpoint, but they were ok in 4.10.  I've pasted the code with the small
test case below.  Anyone have an idea of why the ManagedResource doesn't
seem to be registered?

Essentially my test just calls assertJQ("/config/managed",
"/responseHeader/status==0"), and its @BeforeClass init() sets everything
up exactly the same way that SolrRestletTestBase.java does, except that it
registers /config/* instead of /schema/* and uses my solrconfig, which has a
searchComponent that registers the managed resource. The managed resource
is just a dummy, and the only method in the component that does anything
is the inform(SolrCore) method.


Error:

HTTP ERROR: 404
Problem accessing /solr/collection1/config/managed. Reason:
Can not find: /solr/collection1/config/managed
Powered by Jetty://




TestManaged.java:
public class TestManaged extends RestTestBase {

  @BeforeClass
  public static void init() throws Exception {
    String solrconfig = "solrconfig-testend.xml";
    String schema = "schema-testend.xml";

    Path tempDir = createTempDir();
    Path coresDir = tempDir.resolve("cores");

    System.setProperty("coreRootDirectory", coresDir.toString());
    System.setProperty("configSetBaseDir", TEST_HOME());

    // Generic types were stripped by the mail renderer; SolrRestletTestBase uses
    // SortedMap<ServletHolder, String> here.
    final SortedMap<ServletHolder, String> extraServlets = new TreeMap<>();
    final ServletHolder solrSchemaRestApi =
        new ServletHolder("SolrSchemaRestApi", ServerServlet.class);
    solrSchemaRestApi.setInitParameter("org.restlet.application",
        "org.apache.solr.rest.SolrSchemaRestApi");
    //extraServlets.put(solrSchemaRestApi, "/schema/*");  // '/schema/*' matches '/schema', '/schema/', and '/schema/whatever...'
    extraServlets.put(solrSchemaRestApi, "/config/*");    // '/config/*' matches '/config', '/config/', and '/config/whatever...'

    Properties props = new Properties();
    props.setProperty("name", DEFAULT_TEST_CORENAME);
    props.setProperty("config", solrconfig);
    props.setProperty("schema", schema);
    props.setProperty("configSet", "collection1");

    writeCoreProperties(coresDir.resolve("core"), props, "SolrRestletTestBase");
    createJettyAndHarness(TEST_HOME(), solrconfig, schema, "/solr", true, extraServlets);
  }

  @Test
  public void testRestManagerEndpoints() throws Exception {
    String request = "/config/managed";
    assertJQ(request, "/responseHeader/status==0");
  }

}




solrconfig-test.xml:
(Contains the SolrCoreAware searchComponent that registers the resource)

...
  <!-- The XML elements below were stripped by the mail renderer; this is a
       best-guess reconstruction of the snippet (element names and the handler
       name are assumptions). -->
  <searchComponent name="managedComponent" class="ManagedComponent"/>

  <requestHandler name="/select" class="solr.SearchHandler">
    <lst name="defaults">
      <str name="wt">json</str>
      <str name="df">id</str>
    </lst>
    <arr name="last-components">
      <str>managedComponent</str>
    </arr>
  </requestHandler>
...



ManagedComponent.java:
public class ManagedComponent extends SearchComponent implements SolrCoreAware {

  public void inform(SolrCore core) {
    core.getRestManager().addManagedResource("/config/test", ManagedStore.class);
  }

  public void prepare(ResponseBuilder rb) throws IOException {}
  public void process(ResponseBuilder rb) throws IOException {}
  public String getDescription() { return null; }
}



ManagedStore.java:  (It is just a dummy class for the test)
public class ManagedStore extends ManagedResource implements
    ManagedResource.ChildResourceSupport {

  public ManagedStore(String resourceId, SolrResourceLoader loader,
      StorageIO storageIO) throws SolrException {
    super(resourceId, loader, storageIO);
  }

  protected void onManagedDataLoadedFromStorage(NamedList<?> managedInitArgs,
      Object managedData) throws SolrException { }

  public Object applyUpdatesToManagedData(Object updates) {
    return "HELLO UPDATES";
  }

  public void doDeleteChild(BaseSolrResource endpoint, String childId) { }

  public void doGet(BaseSolrResource endpoint, String childId) {
    SolrQueryResponse response = endpoint.getSolrResponse();
    response.add("TEST", "HELLO GET");
  }

}


[JENKINS] Lucene-Solr-Tests-trunk-Java8 - Build # 706 - Failure

2015-12-17 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-trunk-Java8/706/

1 tests failed.
FAILED:  org.apache.solr.core.TestDynamicLoading.testDynamicLoading

Error Message:
Could not get expected value  'X val changed' for path 'x' full output: {   
"responseHeader":{ "status":0, "QTime":0},   "params":{"wt":"json"},   
"context":{ "webapp":"/y_wx/hw", "path":"/test1", 
"httpMethod":"GET"},   
"class":"org.apache.solr.core.BlobStoreTestRequestHandler",   "x":"X val"}

Stack Trace:
java.lang.AssertionError: Could not get expected value  'X val changed' for 
path 'x' full output: {
  "responseHeader":{
"status":0,
"QTime":0},
  "params":{"wt":"json"},
  "context":{
"webapp":"/y_wx/hw",
"path":"/test1",
"httpMethod":"GET"},
  "class":"org.apache.solr.core.BlobStoreTestRequestHandler",
  "x":"X val"}
at 
__randomizedtesting.SeedInfo.seed([3A9F14446E2780E:DBE4DC13B13FDDAE]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.core.TestSolrConfigHandler.testForResponseElement(TestSolrConfigHandler.java:458)
at 
org.apache.solr.core.TestDynamicLoading.testDynamicLoading(TestDynamicLoading.java:257)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:965)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:940)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Commented] (SOLR-8279) Add a new test fault injection approach and a new SolrCloud test that stops and starts the cluster while indexing data and with random faults.

2015-12-17 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8279?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15062850#comment-15062850
 ] 

Mark Miller commented on SOLR-8279:
---

SOLR-8371 is just a really good improvement in general, but it is also useful 
for this fault injection testing. Seeing a lot of faults in this test when I 
first started working on it is how I was reminded of how bad SOLR-8371 had 
become - I always knew it was an issue, but the min time between recoveries 
that we put in made it much worse.

> Add a new test fault injection approach and a new SolrCloud test that stops 
> and starts the cluster while indexing data and with random faults.
> --
>
> Key: SOLR-8279
> URL: https://issues.apache.org/jira/browse/SOLR-8279
> Project: Solr
>  Issue Type: Test
>Reporter: Mark Miller
>Assignee: Mark Miller
> Attachments: SOLR-8279.patch, SOLR-8279.patch, SOLR-8279.patch, 
> SOLR-8279.patch, SOLR-8279.patch, SOLR-8279.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-8208) DocTransformer executes sub-queries

2015-12-17 Thread Cao Manh Dat (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8208?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15063358#comment-15063358
 ] 

Cao Manh Dat edited comment on SOLR-8208 at 12/18/15 6:30 AM:
--

Thanks Mikhail, that will make things much easier. I'm also considering 
distributing the sub-queries, so I'm trying to do this (execute the subquery 
through the SolrCore):
{code}
SolrCore solrCore = subQueryRequest.getCore();
SolrQueryResponse response = new SolrQueryResponse();
solrCore.execute(solrCore.getRequestHandler(null), subQueryRequest, response);
DocsStreamer docsStreamer = new DocsStreamer((ResultContext) 
response.getValues().get("response"));
{code}
But I'm afraid that it will mess up the logic inside {{SolrCore.execute}}.


was (Author: caomanhdat):
Thanks Mikhail, It will make thing more easier. I also consider about 
distributing the sub-queries, so I'm trying to do this (execute the subquery 
through solrCore)
{code}
SolrCore solrCore = subQueryRequest.getCore();
SolrQueryResponse response = new SolrQueryResponse();
solrCore.execute(solrCore.getRequestHandler(null), subQueryRequest, response);
DocsStreamer docsStreamer = new DocsStreamer((ResultContext) 
response.getValues().get("response"));
{code}
But i'm afraid that it will messy the logic inside {{SolrCore.execute}}

> DocTransformer executes sub-queries
> ---
>
> Key: SOLR-8208
> URL: https://issues.apache.org/jira/browse/SOLR-8208
> Project: Solr
>  Issue Type: Improvement
>Reporter: Mikhail Khludnev
>  Labels: features, newbie
> Attachments: SOLR-8208.patch, SOLR-8208.patch
>
>
> The initial idea was to return the "from" side of a query-time join via 
> doctransformer. I suppose it isn't query-time join specific, so let's allow 
> specifying any query and parameters for it; let's call it a sub-query. But it 
> might be problematic to escape subquery parameters, including local ones, 
> e.g. what if the subquery needs to specify its own doctransformer in fl=\[..\] ?
> I suppose we can allow to specify subquery parameter prefix:
> {code}
> ..&fl=id,[subquery paramPrefix=subq1. 
> fromIndex=othercore],score,..&subq1.q={!term f=child_id 
> v=$subq1.row.id}&subq1.rows=3&subq1.sort=price&..
> {code}   
> * {{paramPrefix=subq1.}} shifts parameters for subquery: {{subq1.q}} turns to 
> {{q}} for subquery, {{subq1.rows}} to {{rows}}
> * {{fromIndex=othercore}} optional param allows to run subquery on other 
> core, like it works on query time join
> * the itchiest one is referencing a document field from subquery 
> parameters; here I propose to use the local param {{v}} and param dereferencing 
> {{v=$param}}, thus every document field implicitly introduces a parameter for 
> the subquery $\{paramPrefix\}row.$\{fieldName\}, so the above subquery is 
> q=child_id:, presumably we can drop "row." in the middle 
> (reducing to v=$subq1.id), until someone deals with {{rows}}, {{sort}} fields. 
> * \[subquery\], or \[query\], or ? 
> Caveat: it should be a way slow; it handles only search result page, not 
> entire result set. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8435) Long update times Solr 5.3.1

2015-12-17 Thread Kenny Knecht (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8435?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15063655#comment-15063655
 ] 

Kenny Knecht commented on SOLR-8435:


Thanks for the swift reply.
These setups were exactly the same, yes. In both cases we started from a fresh 
setup at AWS (2 instances of r3.4xlarge), an empty core sharded over two 
machines, and 3 separate ZK machines.
That is why I posted this as a bug. At the beginning of 2016 we will do some 
more testing...

> Long update times Solr 5.3.1
> 
>
> Key: SOLR-8435
> URL: https://issues.apache.org/jira/browse/SOLR-8435
> Project: Solr
>  Issue Type: Bug
>  Components: update
>Affects Versions: 5.3.1
> Environment: Ubuntu server 128Gb
>Reporter: Kenny Knecht
> Fix For: 5.2.1
>
>
> We have 2 128GB Ubuntu servers in a SolrCloud config. We update by curling 
> JSON files of 20,000 documents. In 5.2.1 this consistently takes between 19 
> and 24 seconds. In 5.3.1 it usually takes 20s, but for about 20% of the 
> files it takes much longer: up to 500s! Which files are affected seems to be 
> quite random. Is this a known bug? Any workaround? Is it fixed in 5.4?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS-EA] Lucene-Solr-trunk-Linux (64bit/jdk-9-ea+95) - Build # 15235 - Still Failing!

2015-12-17 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/15235/
Java: 64bit/jdk-9-ea+95 -XX:+UseCompressedOops -XX:+UseSerialGC 
-XX:-CompactStrings

3 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.SaslZkACLProviderTest

Error Message:
5 threads leaked from SUITE scope at 
org.apache.solr.cloud.SaslZkACLProviderTest: 1) Thread[id=1090, 
name=ou=system.data, state=TIMED_WAITING, group=TGRP-SaslZkACLProviderTest] 
at jdk.internal.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:218) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2103)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1136)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:853)
 at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1083)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1143) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:632) 
at java.lang.Thread.run(Thread.java:747)2) Thread[id=1086, 
name=apacheds, state=WAITING, group=TGRP-SaslZkACLProviderTest] at 
java.lang.Object.wait(Native Method) at 
java.lang.Object.wait(Object.java:516) at 
java.util.TimerThread.mainLoop(Timer.java:526) at 
java.util.TimerThread.run(Timer.java:505)3) Thread[id=1088, 
name=groupCache.data, state=TIMED_WAITING, group=TGRP-SaslZkACLProviderTest]
 at jdk.internal.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:218) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2103)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1136)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:853)
 at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1083)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1143) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:632) 
at java.lang.Thread.run(Thread.java:747)4) Thread[id=1089, 
name=kdcReplayCache.data, state=TIMED_WAITING, 
group=TGRP-SaslZkACLProviderTest] at 
jdk.internal.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:218) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2103)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1136)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:853)
 at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1083)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1143) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:632) 
at java.lang.Thread.run(Thread.java:747)5) Thread[id=1087, 
name=changePwdReplayCache.data, state=TIMED_WAITING, 
group=TGRP-SaslZkACLProviderTest] at 
jdk.internal.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:218) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2103)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1136)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:853)
 at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1083)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1143) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:632) 
at java.lang.Thread.run(Thread.java:747)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 5 threads leaked from SUITE 
scope at org.apache.solr.cloud.SaslZkACLProviderTest: 
   1) Thread[id=1090, name=ou=system.data, state=TIMED_WAITING, 
group=TGRP-SaslZkACLProviderTest]
at jdk.internal.misc.Unsafe.park(Native Method)
at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:218)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2103)
at 

[jira] [Updated] (SOLR-8429) add a flag blockUnknown to BasicAutPlugin

2015-12-17 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8429?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul updated SOLR-8429:
-
Description: 
If authentication is set up with BasicAuthPlugin, it lets all requests go 
through if no credentials are passed. This was done to have minimal impact for 
users who only wish to protect a few end points (say, collection admin and 
core admin only).

We can add a flag to {{BasicAuthPlugin}} to allow only authenticated requests 
to go through.

the users can create the first security.json with that
{code}

{code}

  was:
If authentication is setup with BasicAuthPlugin, it let's all requests go 
through if no credentials are passed. This was done to have minimal impact for 
users who only wishes to protect a few end points (say , collection admin and 
core admin only)

We can add a flag to {{BasicAuthPlugin}} to allow only authenticated requests 
to go in 


> add a flag blockUnknown to BasicAutPlugin
> -
>
> Key: SOLR-8429
> URL: https://issues.apache.org/jira/browse/SOLR-8429
> Project: Solr
>  Issue Type: Improvement
>Reporter: Noble Paul
>Assignee: Noble Paul
>
> If authentication is set up with BasicAuthPlugin, it lets all requests go 
> through if no credentials are passed. This was done to have minimal impact 
> for users who only wish to protect a few end points (say, collection admin 
> and core admin only).
> We can add a flag to {{BasicAuthPlugin}} to allow only authenticated requests 
> to go through.
> the users can create the first security.json with that
> {code}
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-8429) add a flag blockUnknown to BasicAutPlugin

2015-12-17 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8429?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul updated SOLR-8429:
-
Summary: add a flag blockUnknown to BasicAutPlugin  (was: add a flag 
blockUnauthenticated to BasicAutPlugin)

> add a flag blockUnknown to BasicAutPlugin
> -
>
> Key: SOLR-8429
> URL: https://issues.apache.org/jira/browse/SOLR-8429
> Project: Solr
>  Issue Type: Improvement
>Reporter: Noble Paul
>Assignee: Noble Paul
>
> If authentication is set up with BasicAuthPlugin, it lets all requests go 
> through if no credentials are passed. This was done to have minimal impact 
> for users who only wish to protect a few end points (say, collection admin 
> and core admin only).
> We can add a flag to {{BasicAuthPlugin}} to allow only authenticated requests 
> to go through.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-8429) add a flag blockUnknown to BasicAutPlugin

2015-12-17 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8429?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul updated SOLR-8429:
-
Description: 
If authentication is set up with BasicAuthPlugin, it lets all requests go 
through if no credentials are passed. This was done to have minimal impact for 
users who only wish to protect a few end points (say, collection admin and 
core admin only).

We can add a flag to {{BasicAuthPlugin}} to allow only authenticated requests 
to go through.

the users can create the first security.json with that flag
{code}
server/scripts/cloud-scripts/zkcli.sh -z localhost:9983 -cmd put /security.json 
'{"authentication": {"class": "solr.BasicAuthPlugin", 
"blockUnknown": true,
"credentials": {"solr": "orwp2Ghgj39lmnrZOTm7Qtre1VqHFDfwAEzr0ApbN3Y= 
Ju5osoAqOX8iafhWpPP01E5P+sg8tK8tHON7rCYZRRw="}}}'
{code}
or add the flag later
using the command

{code}
curl http://localhost:8983/solr/admin/authentication -H 
'Content-type:application/json' -d '{
"set-property": {"blockUnknown": true}
}'
{code}

  was:
If authentication is setup with BasicAuthPlugin, it let's all requests go 
through if no credentials are passed. This was done to have minimal impact for 
users who only wishes to protect a few end points (say , collection admin and 
core admin only)

We can add a flag to {{BasicAuthPlugin}} to allow only authenticated requests 
to go in 

the users can create the first security.json with that
{code}

{code}


> add a flag blockUnknown to BasicAutPlugin
> -
>
> Key: SOLR-8429
> URL: https://issues.apache.org/jira/browse/SOLR-8429
> Project: Solr
>  Issue Type: Improvement
>Reporter: Noble Paul
>Assignee: Noble Paul
>
> If authentication is set up with BasicAuthPlugin, it lets all requests go 
> through if no credentials are passed. This was done to have minimal impact 
> for users who only wish to protect a few end points (say, collection admin 
> and core admin only).
> We can add a flag to {{BasicAuthPlugin}} to allow only authenticated requests 
> to go through.
> the users can create the first security.json with that flag
> {code}
> server/scripts/cloud-scripts/zkcli.sh -z localhost:9983 -cmd put 
> /security.json '{"authentication": {"class": "solr.BasicAuthPlugin", 
> "blockUnknown": true,
> "credentials": {"solr": "orwp2Ghgj39lmnrZOTm7Qtre1VqHFDfwAEzr0ApbN3Y= 
> Ju5osoAqOX8iafhWpPP01E5P+sg8tK8tHON7rCYZRRw="}}}'
> {code}
> or add the flag later
> using the command
> {code}
> curl http://localhost:8983/solr/admin/authentication -H 
> 'Content-type:application/json' -d '{
> "set-property": {"blockUnknown": true}
> }'
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org


