[jira] [Comment Edited] (SOLR-12638) Support atomic updates of nested/child documents for nested-enabled schema

2019-04-09 Thread mosh (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12638?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16814045#comment-16814045
 ] 

mosh edited comment on SOLR-12638 at 4/10/19 5:17 AM:
--

After inspecting the test results, it seems the only failure was due to 
[org.apache.solr.update.processor.CategoryRoutedAliasUpdateProcessorTest|https://builds.apache.org/job/PreCommit-SOLR-Build/367/testReport/junit.framework/TestSuite/org_apache_solr_update_processor_CategoryRoutedAliasUpdateProcessorTest_2/]
 hitting the GC overhead limit.
{code:java}
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught 
exception in thread: Thread[id=39740, name=Connection evictor, state=RUNNABLE, 
group=TGRP-CategoryRoutedAliasUpdateProcessorTest]
Caused by: java.lang.OutOfMemoryError: GC overhead limit exceeded{code}
This failure is probably unrelated to this patch, since SOLR-13370 was opened to 
address it.
[~dsmiley], WDYT?
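
A possible local sanity check for the heap theory, assuming the standard Lucene/Solr Ant test properties (the exact property name below is an assumption, not something taken from this build):
{noformat}
# Re-run the suite with a larger test heap; -Dtests.heapsize is assumed to be
# the build property controlling the test JVM -Xmx.
ant test -Dtestcase=CategoryRoutedAliasUpdateProcessorTest -Dtests.heapsize=1g
{noformat}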


was (Author: moshebla):
After inspecting the test results, it seems the only failure was due to 
[org.apache.solr.update.processor.CategoryRoutedAliasUpdateProcessorTest|https://builds.apache.org/job/PreCommit-SOLR-Build/367/testReport/org.apache.solr.update.processor/CategoryRoutedAliasUpdateProcessorTest/testSliceRouting/]
 hitting the GC overhead limit.
{code:java}
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught 
exception in thread: Thread[id=39740, name=Connection evictor, state=RUNNABLE, 
group=TGRP-CategoryRoutedAliasUpdateProcessorTest]
Caused by: java.lang.OutOfMemoryError: GC overhead limit exceeded{code}
This failure is probably unrelated to this patch, since SOLR-13370 was opened to 
address it.
[~dsmiley], WDYT?

> Support atomic updates of nested/child documents for nested-enabled schema
> --
>
> Key: SOLR-12638
> URL: https://issues.apache.org/jira/browse/SOLR-12638
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: mosh
>Assignee: David Smiley
>Priority: Major
> Attachments: SOLR-12638-delete-old-block-no-commit.patch, 
> SOLR-12638-nocommit.patch, SOLR-12638.patch, SOLR-12638.patch
>
>  Time Spent: 13h 50m
>  Remaining Estimate: 0h
>
> I have been toying with the thought of using this transformer in conjunction 
> with NestedUpdateProcessor and AtomicUpdate to allow SOLR to completely 
> re-index the entire nested structure. This is just a thought, I am still 
> thinking about implementation details. Hopefully I will be able to post a 
> more concrete proposal soon.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-12638) Support atomic updates of nested/child documents for nested-enabled schema

2019-04-09 Thread mosh (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12638?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16814045#comment-16814045
 ] 

mosh edited comment on SOLR-12638 at 4/10/19 5:16 AM:
--

After inspecting the test results, it seems the only failure was due to 
[org.apache.solr.update.processor.CategoryRoutedAliasUpdateProcessorTest|https://builds.apache.org/job/PreCommit-SOLR-Build/367/testReport/org.apache.solr.update.processor/CategoryRoutedAliasUpdateProcessorTest/testSliceRouting/]
 hitting the GC overhead limit.
{code:java}
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught 
exception in thread: Thread[id=39740, name=Connection evictor, state=RUNNABLE, 
group=TGRP-CategoryRoutedAliasUpdateProcessorTest]
Caused by: java.lang.OutOfMemoryError: GC overhead limit exceeded{code}
This failure is probably unrelated to this patch, since SOLR-13370 was opened to 
address it.
[~dsmiley], WDYT?


was (Author: moshebla):
After inspecting the test results, it seems the only failure was due to 
[org.apache.solr.update.processor.CategoryRoutedAliasUpdateProcessorTest.testSliceRouting|https://builds.apache.org/job/PreCommit-SOLR-Build/367/testReport/org.apache.solr.update.processor/CategoryRoutedAliasUpdateProcessorTest/testSliceRouting/]
 hitting the GC overhead limit.
{code:java}
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught 
exception in thread: Thread[id=39740, name=Connection evictor, state=RUNNABLE, 
group=TGRP-CategoryRoutedAliasUpdateProcessorTest]
Caused by: java.lang.OutOfMemoryError: GC overhead limit exceeded{code}
This failure is probably unrelated to this patch, since SOLR-13370 was opened to 
address it.
[~dsmiley], WDYT?

> Support atomic updates of nested/child documents for nested-enabled schema
> --
>
> Key: SOLR-12638
> URL: https://issues.apache.org/jira/browse/SOLR-12638
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: mosh
>Assignee: David Smiley
>Priority: Major
> Attachments: SOLR-12638-delete-old-block-no-commit.patch, 
> SOLR-12638-nocommit.patch, SOLR-12638.patch, SOLR-12638.patch
>
>  Time Spent: 13h 50m
>  Remaining Estimate: 0h
>
> I have been toying with the thought of using this transformer in conjunction 
> with NestedUpdateProcessor and AtomicUpdate to allow SOLR to completely 
> re-index the entire nested structure. This is just a thought, I am still 
> thinking about implementation details. Hopefully I will be able to post a 
> more concrete proposal soon.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12638) Support atomic updates of nested/child documents for nested-enabled schema

2019-04-09 Thread mosh (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12638?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16814045#comment-16814045
 ] 

mosh commented on SOLR-12638:
-

After inspecting the test results, it seems the only failure was due to 
[org.apache.solr.update.processor.CategoryRoutedAliasUpdateProcessorTest.testSliceRouting|https://builds.apache.org/job/PreCommit-SOLR-Build/367/testReport/org.apache.solr.update.processor/CategoryRoutedAliasUpdateProcessorTest/testSliceRouting/]
 hitting the GC overhead limit.
{code:java}
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught 
exception in thread: Thread[id=39740, name=Connection evictor, state=RUNNABLE, 
group=TGRP-CategoryRoutedAliasUpdateProcessorTest]
Caused by: java.lang.OutOfMemoryError: GC overhead limit exceeded{code}
This failure is probably unrelated to this patch, since SOLR-13370 was opened to 
address it.
[~dsmiley], WDYT?

> Support atomic updates of nested/child documents for nested-enabled schema
> --
>
> Key: SOLR-12638
> URL: https://issues.apache.org/jira/browse/SOLR-12638
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: mosh
>Assignee: David Smiley
>Priority: Major
> Attachments: SOLR-12638-delete-old-block-no-commit.patch, 
> SOLR-12638-nocommit.patch, SOLR-12638.patch, SOLR-12638.patch
>
>  Time Spent: 13h 50m
>  Remaining Estimate: 0h
>
> I have been toying with the thought of using this transformer in conjunction 
> with NestedUpdateProcessor and AtomicUpdate to allow SOLR to completely 
> re-index the entire nested structure. This is just a thought, I am still 
> thinking about implementation details. Hopefully I will be able to post a 
> more concrete proposal soon.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13383) auto-scaling not working in solr7.4 - autoaddreplica

2019-04-09 Thread AntonyJohnson (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13383?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16814032#comment-16814032
 ] 

AntonyJohnson commented on SOLR-13383:
--

[~Goodman] Understood, thanks for taking care of this case.

> auto-scaling not working in solr7.4 - autoaddreplica
> 
>
> Key: SOLR-13383
> URL: https://issues.apache.org/jira/browse/SOLR-13383
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: config-api
>Affects Versions: 7.4
> Environment: Production
>Reporter: AntonyJohnson
>Priority: Blocker
>  Labels: performance
> Fix For: 7.4.1
>
> Attachments: 3collections.PNG
>
>
> We're able to create a new server via auto-scaling for our Solr 7.4 cluster, 
> but the newly created server is not added to the cluster automatically. Are 
> there any settings or configurations we need to add so that replicas are added 
> automatically to the cluster for any collection?
> *commands used:* 
> {code:java}
> curl 
> "http://localhost:8983/solr/admin/collections?action=CREATE=my_collection_3=1=3=true;
> {code}
> Currently 3 nodes are in our cluster and I'm trying to add a 4th node, but it 
> is not getting added in Solr 7.4. The same scenario works fine in Solr 7.5.
> *scaling policy used:*
> {code:java}
> 1)
> curl http://localhost:8983/solr/admin/autoscaling -H 
> 'Content-type:application/json' -d '{
>  "set-cluster-policy" : [{
>  "replica" : "1",
>  "shard" : "#EACH",
>  "node" : "#ANY",
>  }]
> }'
> 2)###Node Added Trigger
> curl http://localhost:8983/solr/admin/autoscaling -H 
> 'Content-type:application/json' -d '{
> "set-trigger": {
> "name" : "node_added_trigger",
> "event" : "nodeAdded",
> "waitFor" : "5s",
> "preferredOperation": "ADDREPLICA",
> "enabled" : true,
> "actions" : [
> {
> "name" : "compute_plan",
> "class": "solr.ComputePlanAction"
> },
> {
> "name" : "execute_plan",
> "class": "solr.ExecutePlanAction"
> }
> ]
> }
> }'
> 3)###Node Lost Trigger
> curl http://localhost:8983/solr/admin/autoscaling -H 
> 'Content-type:application/json' -d '{
> "set-trigger": {
> "name" : "node_lost_trigger",
> "event" : "nodeLost",
> "waitFor" : "5s",
> "preferredOperation": "DELETENODE",
> "enabled" : true,
> "actions" : [
> {
> "name" : "compute_plan",
> "class": "solr.ComputePlanAction"
> },
> {
> "name" : "execute_plan",
> "class": "solr.ExecutePlanAction"
> }
> ]
> }
> }'
> {code}
> Note: the same policies (2 and 3) do not work in 7.4.
> *Errors:*
> {code:java}
> [Mon Apr 08 11:52:00 UTC root@hawkeye-common ~]# curl 
> http://localhost:8983/solr/admin/autoscaling -H 
> 'Content-type:application/json' -d '{
> >  "set-trigger": {
> >   "name" : "node_added_trigger",
> >   "event" : "nodeAdded",
> >   "waitFor" : "5s",
> >   "preferredOperation": "ADDREPLICA",
> >   "enabled" : true,
> >   "actions" : [
> >{
> > "name" : "compute_plan",
> > "class": "solr.ComputePlanAction"
> >},
> >{
> > "name" : "execute_plan",
> > "class": "solr.ExecutePlanAction"
> >}
> >   ]
> >  }
> > }'
> {
>   "responseHeader":{
> "status":400,
> "QTime":5},
>   "result":"failure",
>   "WARNING":"This response format is experimental.  It is likely to change in 
> the future.",
>   "error":{
> "metadata":[
>   "error-class","org.apache.solr.api.ApiBag$ExceptionWithErrObject",
>   "root-error-class","org.apache.solr.api.ApiBag$ExceptionWithErrObject"],
> "details":[{
> "set-trigger":{
>   "name":"node_added_trigger",
>   "event":"nodeAdded",
>   "waitFor":"5s",
>   "preferredOperation":"ADDREPLICA",
>   "enabled":true,
>   "actions":[{
>   "name":"compute_plan",
>   "class":"solr.ComputePlanAction"},
> {
>   "name":"execute_plan",
>   "class":"solr.ExecutePlanAction"}]},
> "errorMessages":["Error validating trigger config node_added_trigger: 
> TriggerValidationException{name=node_added_trigger, 
> details='{preferredOperation=unknown property}'}"]}],
> "msg":"Error in command payload",
> "code":400}}
> [Mon Apr 08 11:52:09 UTC root@hawkeye-common ~]#
> [Mon Apr 08 11:52:16 UTC root@hawkeye-common ~]# curl 
> http://localhost:8983/solr/admin/autoscaling -H 
> 'Content-type:application/json' -d '{
> >  "set-trigger": {
> >   "name" : "node_lost_trigger",
> >   "event" : "nodeLost",
> >   "waitFor" : "5s",
> >   "preferredOperation": "DELETENODE",
> >   "enabled" : true,
> >   "actions" : [
> >{
> > "name" : "compute_plan",
> > "class": "solr.ComputePlanAction"
> >},
> >{
> > "name" : "execute_plan",
> > "class": "solr.ExecutePlanAction"
> >}
> >   ]
> >  }
> > }'
> {
>   
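
For comparison, a sketch of a trigger config that the 7.4 parser should accept: the error above rejects only the preferredOperation property, so dropping that field (everything else as in the report) gives:
{code:java}
curl http://localhost:8983/solr/admin/autoscaling -H 'Content-type:application/json' -d '{
  "set-trigger": {
    "name" : "node_added_trigger",
    "event" : "nodeAdded",
    "waitFor" : "5s",
    "enabled" : true,
    "actions" : [
      { "name" : "compute_plan", "class": "solr.ComputePlanAction" },
      { "name" : "execute_plan", "class": "solr.ExecutePlanAction" }
    ]
  }
}'
{code}
This is only meant to show which property the 7.4 parser rejects; it is a sketch, not a configuration taken from the report.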

[JENKINS] Lucene-Solr-Tests-master - Build # 3257 - Unstable

2019-04-09 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-master/3257/

1 tests failed.
FAILED:  org.apache.solr.client.solrj.request.TestV2Request.testCloudSolrClient

Error Message:
Error from server at https://127.0.0.1:41874/solr: 4 out of 5 the property 
overlay to be of version 0 within 30 seconds! Failed cores: 
[https://127.0.0.1:41874/solr/test_shard2_replica_n4/, 
https://127.0.0.1:43986/solr/test_shard2_replica_n6/, 
https://127.0.0.1:41239/solr/test_shard1_replica_n1/, 
https://127.0.0.1:42595/solr/test_shard1_replica_n2/]

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteExecutionException: 
Error from server at https://127.0.0.1:41874/solr: 4 out of 5 the property 
overlay to be of version 0 within 30 seconds! Failed cores: 
[https://127.0.0.1:41874/solr/test_shard2_replica_n4/, 
https://127.0.0.1:43986/solr/test_shard2_replica_n6/, 
https://127.0.0.1:41239/solr/test_shard1_replica_n1/, 
https://127.0.0.1:42595/solr/test_shard1_replica_n2/]
at 
__randomizedtesting.SeedInfo.seed([1BA199CBA29ED70B:8257127B072D5F60]:0)
at 
org.apache.solr.client.solrj.impl.BaseHttpSolrClient$RemoteExecutionException.create(BaseHttpSolrClient.java:65)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:626)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244)
at 
org.apache.solr.client.solrj.impl.LBSolrClient.doRequest(LBSolrClient.java:368)
at 
org.apache.solr.client.solrj.impl.LBSolrClient.request(LBSolrClient.java:296)
at 
org.apache.solr.client.solrj.impl.BaseCloudSolrClient.sendRequest(BaseCloudSolrClient.java:1055)
at 
org.apache.solr.client.solrj.impl.BaseCloudSolrClient.requestWithRetryOnStaleState(BaseCloudSolrClient.java:830)
at 
org.apache.solr.client.solrj.impl.BaseCloudSolrClient.request(BaseCloudSolrClient.java:763)
at org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1274)
at 
org.apache.solr.client.solrj.request.TestV2Request.assertSuccess(TestV2Request.java:55)
at 
org.apache.solr.client.solrj.request.TestV2Request.doTest(TestV2Request.java:102)
at 
org.apache.solr.client.solrj.request.TestV2Request.testCloudSolrClient(TestV2Request.java:83)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 

[JENKINS] Lucene-Solr-NightlyTests-8.x - Build # 68 - Unstable

2019-04-09 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-8.x/68/

1 tests failed.
FAILED:  org.apache.solr.cloud.BasicDistributedZkTest.test

Error Message:
Error from server at http://127.0.0.1:36080/cw_/u/collection1: Server error 
writing document id 255 to the index

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at http://127.0.0.1:36080/cw_/u/collection1: Server error writing 
document id 255 to the index
at 
__randomizedtesting.SeedInfo.seed([7953E3FA6BE6649E:F107DC20C51A0966]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:649)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:207)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:224)
at 
org.apache.solr.BaseDistributedSearchTestCase.add(BaseDistributedSearchTestCase.java:576)
at 
org.apache.solr.cloud.BasicDistributedZkTest.testUpdateProcessorsRunOnlyOnce(BasicDistributedZkTest.java:746)
at 
org.apache.solr.cloud.BasicDistributedZkTest.test(BasicDistributedZkTest.java:424)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:1082)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:1054)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Updated] (SOLR-13386) Remove race in OverseerTaskQueue#remove that can result in the Overseer causing a Zookeeper call spin spike.

2019-04-09 Thread Mark Miller (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13386?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller updated SOLR-13386:
---
Description: If the data call hits NoNodeException, it will throw and the 
Overseer work queue processor will catch it and loop and repeat, which causes 
major zk getData / NoNode call traffic or other such things.  (was: If the 
getData call hits NoNodeException, it will throw and the Overseer work queue 
processor will catch it and loop and repeat, which causes major zk getData / 
NoNode call traffic or other such things.)

> Remove race in OverseerTaskQueue#remove that can result in the Overseer 
> causing a Zookeeper call spin spike.
> 
>
> Key: SOLR-13386
> URL: https://issues.apache.org/jira/browse/SOLR-13386
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mark Miller
>Assignee: Mark Miller
>Priority: Major
> Fix For: 7.7.2, 8.1
>
>
> If the data call hits NoNodeException, it will throw and the Overseer work 
> queue processor will catch it and loop and repeat, which causes major zk 
> getData / NoNode call traffic or other such things.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13386) Remove race in OverseerTaskQueue#remove that can result in the Overseer causing a Zookeeper call spin spike.

2019-04-09 Thread Mark Miller (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13386?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16814018#comment-16814018
 ] 

Mark Miller commented on SOLR-13386:


We just need to catch the NoNodeException setData can throw and treat it the 
same as exists returning false (NOOP). I've been reviewing for any similar case 
to fix, but have not spotted anything yet.

{noformat}
  /**
   * Remove the event and save the response into the other path.
   */
  public void remove(QueueEvent event) throws KeeperException,
  InterruptedException {
Timer.Context time = stats.time(dir + "_remove_event");
try {
  String path = event.getId();
  String responsePath = dir + "/" + RESPONSE_PREFIX
  + path.substring(path.lastIndexOf("-") + 1);
  if (zookeeper.exists(responsePath, true)) {
zookeeper.setData(responsePath, event.getBytes(), true);
  } else {
log.info("Response ZK path: " + responsePath + " doesn't exist."
+ "  Requestor may have disconnected from ZooKeeper");
  }
  try {
zookeeper.delete(path, -1, true);
  } catch (KeeperException.NoNodeException ignored) {
  }
} finally {
  time.stop();
}
  }
{noformat}
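A minimal sketch of that change, mirroring the method above (not the committed patch):
{code:java}
  if (zookeeper.exists(responsePath, true)) {
    try {
      zookeeper.setData(responsePath, event.getBytes(), true);
    } catch (KeeperException.NoNodeException ignored) {
      // The response node was removed between exists() and setData();
      // treat it exactly like exists() returning false (NOOP).
      log.info("Response ZK path: " + responsePath + " doesn't exist."
          + "  Requestor may have disconnected from ZooKeeper");
    }
  } else {
    log.info("Response ZK path: " + responsePath + " doesn't exist."
        + "  Requestor may have disconnected from ZooKeeper");
  }
{code}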

> Remove race in OverseerTaskQueue#remove that can result in the Overseer 
> causing a Zookeeper call spin spike.
> 
>
> Key: SOLR-13386
> URL: https://issues.apache.org/jira/browse/SOLR-13386
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mark Miller
>Assignee: Mark Miller
>Priority: Major
> Fix For: 7.7.2, 8.1
>
>
> If the getData call hits NoNodeException, it will throw and the Overseer work 
> queue processor will catch it and loop and repeat, which causes major zk 
> getData / NoNode call traffic or other such things.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-13386) Remove race in OverseerTaskQueue#remove that can result in the Overseer causing a Zookeeper call spin spike.

2019-04-09 Thread Mark Miller (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13386?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller updated SOLR-13386:
---
Description: If the getData call hits NoNodeException, it will throw and 
the Overseer work queue processor will catch it and loop and repeat, which 
causes major zk getData / NoNode call traffic or other such things.  (was: If 
the getData call hits NoNodeException, it will throw and the Overseer work 
queue processor will catch it and loop and repeat, which causes major zk exist 
call traffic or other such things.)

> Remove race in OverseerTaskQueue#remove that can result in the Overseer 
> causing a Zookeeper call spin spike.
> 
>
> Key: SOLR-13386
> URL: https://issues.apache.org/jira/browse/SOLR-13386
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mark Miller
>Assignee: Mark Miller
>Priority: Major
> Fix For: 7.7.2, 8.1
>
>
> If the getData call hits NoNodeException, it will throw and the Overseer work 
> queue processor will catch it and loop and repeat, which causes major zk 
> getData / NoNode call traffic or other such things.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS-EA] Lucene-Solr-master-Linux (64bit/jdk-13-ea+shipilev-fastdebug) - Build # 23892 - Unstable!

2019-04-09 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/23892/
Java: 64bit/jdk-13-ea+shipilev-fastdebug -XX:-UseCompressedOops -XX:+UseSerialGC

2 tests failed.
FAILED:  org.apache.solr.metrics.rrd.SolrRrdBackendFactoryTest.testBasic

Error Message:
{} expected:<1> but was:<0>

Stack Trace:
java.lang.AssertionError: {} expected:<1> but was:<0>
at 
__randomizedtesting.SeedInfo.seed([87EC05548F990920:2C16184150458F0E]:0)
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:834)
at org.junit.Assert.assertEquals(Assert.java:645)
at 
org.apache.solr.metrics.rrd.SolrRrdBackendFactoryTest.testBasic(SolrRrdBackendFactoryTest.java:92)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:567)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:835)


FAILED:  org.apache.solr.security.JWTAuthPluginIntegrationTest.testMetrics

Error Message:
Server returned HTTP response code: 401 for URL: 
http://127.0.0.1:36391/solr/jwtColl/query?q=*:*

Stack Trace:
java.io.IOException: Server 

[jira] [Created] (SOLR-13386) Remove race in OverseerTaskQueue#remove that can result in the Overseer causing a Zookeeper call spin spike.

2019-04-09 Thread Mark Miller (JIRA)
Mark Miller created SOLR-13386:
--

 Summary: Remove race in OverseerTaskQueue#remove that can result 
in the Overseer causing a Zookeeper call spin spike.
 Key: SOLR-13386
 URL: https://issues.apache.org/jira/browse/SOLR-13386
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Mark Miller
Assignee: Mark Miller
 Fix For: 7.7.2, 8.1


If the getData call hits NoNodeException, it will throw and the Overseer work 
queue processor will catch it and loop and repeat, which causes major zk exist 
call traffic or other such things.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-13381) Unexpected docvalues type SORTED_NUMERIC Exception when grouping by a PointField facet

2019-04-09 Thread Haochao Zhuang (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13381?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haochao Zhuang updated SOLR-13381:
--
Attachment: SOLR-13381.patch

> Unexpected docvalues type SORTED_NUMERIC Exception when grouping by a 
> PointField facet
> --
>
> Key: SOLR-13381
> URL: https://issues.apache.org/jira/browse/SOLR-13381
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: faceting
>Affects Versions: 7.0, 7.6, 7.7, 7.7.1
> Environment: solr, solrcloud
>Reporter: Zhu JiaJun
>Priority: Major
> Attachments: SOLR-13381.patch
>
>
> Hey,
> I got an "Unexpected docvalues type SORTED_NUMERIC" exception when I perform a 
> group facet on an IntPointField. Debugging into the source code, the cause is 
> that internally the docvalues type for PointField is "NUMERIC" (single value) 
> or "SORTED_NUMERIC" (multi value), while the TermGroupFacetCollector class 
> requires the facet field to have a "SORTED" or "SORTED_SET" docvalues type: 
> [https://github.com/apache/lucene-solr/blob/2480b74887eff01f729d62a57b415d772f947c91/lucene/grouping/src/java/org/apache/lucene/search/grouping/TermGroupFacetCollector.java#L313]
>  
> When I change the schema for all int fields to TrieIntField, the group facet 
> then works, since internally the docvalues type for TrieField is SORTED 
> (single value) or SORTED_SET (multi value).
> Given that TrieField is deprecated in Solr 7, please help with this grouping 
> facet issue for PointField. I also commented on this issue in SOLR-7495.
>  
> In addition, every place "${solr.tests.IntegerFieldType}" appears in the unit 
> test files seems to use "TrieIntField"; if changed to "IntPointField", some 
> unit tests will fail, for example: 
> [https://github.com/apache/lucene-solr/blob/3de0b3671998cc9bc723d10f1b31ce48cbd4fa64/solr/core/src/test/org/apache/solr/request/SimpleFacetsTest.java#L417]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13381) Unexpected docvalues type SORTED_NUMERIC Exception when grouping by a PointField facet

2019-04-09 Thread Haochao Zhuang (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13381?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16814002#comment-16814002
 ] 

Haochao Zhuang commented on SOLR-13381:
---

I fixed it on the 8x branch, but it could cause other problems.

> Unexpected docvalues type SORTED_NUMERIC Exception when grouping by a 
> PointField facet
> --
>
> Key: SOLR-13381
> URL: https://issues.apache.org/jira/browse/SOLR-13381
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: faceting
>Affects Versions: 7.0, 7.6, 7.7, 7.7.1
> Environment: solr, solrcloud
>Reporter: Zhu JiaJun
>Priority: Major
> Attachments: SOLR-13381.patch
>
>
> Hey,
> I got an "Unexpected docvalues type SORTED_NUMERIC" exception when I perform a 
> group facet on an IntPointField. Debugging into the source code, the cause is 
> that internally the docvalues type for PointField is "NUMERIC" (single value) 
> or "SORTED_NUMERIC" (multi value), while the TermGroupFacetCollector class 
> requires the facet field to have a "SORTED" or "SORTED_SET" docvalues type: 
> [https://github.com/apache/lucene-solr/blob/2480b74887eff01f729d62a57b415d772f947c91/lucene/grouping/src/java/org/apache/lucene/search/grouping/TermGroupFacetCollector.java#L313]
>  
> When I change the schema for all int fields to TrieIntField, the group facet 
> then works, since internally the docvalues type for TrieField is SORTED 
> (single value) or SORTED_SET (multi value).
> Given that TrieField is deprecated in Solr 7, please help with this grouping 
> facet issue for PointField. I also commented on this issue in SOLR-7495.
>  
> In addition, every place "${solr.tests.IntegerFieldType}" appears in the unit 
> test files seems to use "TrieIntField"; if changed to "IntPointField", some 
> unit tests will fail, for example: 
> [https://github.com/apache/lucene-solr/blob/3de0b3671998cc9bc723d10f1b31ce48cbd4fa64/solr/core/src/test/org/apache/solr/request/SimpleFacetsTest.java#L417]
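
For context, a minimal sketch of where the mismatch surfaces (the field name is hypothetical and this is a fragment, not code from a patch):
{code:java}
// The grouping facet collector effectively asks Lucene for SORTED_SET doc values:
SortedSetDocValues dv = DocValues.getSortedSet(leafReader, "my_int_facet_field");
// An IntPointField with docValues=true carries NUMERIC/SORTED_NUMERIC doc values,
// so the type check inside DocValues fails with
// "unexpected docvalues type SORTED_NUMERIC for field ...",
// while a TrieIntField exposes SORTED/SORTED_SET and passes.
{code}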



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-BadApples-Tests-master - Build # 329 - Unstable

2019-04-09 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-BadApples-Tests-master/329/

1 tests failed.
FAILED:  
org.apache.solr.cloud.api.collections.CollectionsAPIDistributedZkTest.addReplicaTest

Error Message:
Error from server at http://127.0.0.1:40230/solr: ADDREPLICA failed to create 
replica

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at http://127.0.0.1:40230/solr: ADDREPLICA failed to create replica
at 
__randomizedtesting.SeedInfo.seed([390FBE4EA7175352:AAAFE968291F1883]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:649)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244)
at 
org.apache.solr.client.solrj.impl.LBSolrClient.doRequest(LBSolrClient.java:368)
at 
org.apache.solr.client.solrj.impl.LBSolrClient.request(LBSolrClient.java:296)
at 
org.apache.solr.client.solrj.impl.BaseCloudSolrClient.sendRequest(BaseCloudSolrClient.java:1055)
at 
org.apache.solr.client.solrj.impl.BaseCloudSolrClient.requestWithRetryOnStaleState(BaseCloudSolrClient.java:830)
at 
org.apache.solr.client.solrj.impl.BaseCloudSolrClient.request(BaseCloudSolrClient.java:763)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:207)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:224)
at 
org.apache.solr.cloud.api.collections.CollectionsAPIDistributedZkTest.addReplicaTest(CollectionsAPIDistributedZkTest.java:640)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[JENKINS] Lucene-Solr-master-Solaris (64bit/jdk1.8.0) - Build # 2321 - Unstable!

2019-04-09 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Solaris/2321/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseParallelGC

1 tests failed.
FAILED:  org.apache.solr.cloud.api.collections.TestCollectionAPI.test

Error Message:
Could not find a healthy node to handle the request.

Stack Trace:
org.apache.solr.common.SolrException: Could not find a healthy node to handle 
the request.
at 
__randomizedtesting.SeedInfo.seed([C5DA7482FF9D74B:840998928105BAB3]:0)
at 
org.apache.solr.client.solrj.impl.BaseCloudSolrClient.sendRequest(BaseCloudSolrClient.java:1049)
at 
org.apache.solr.client.solrj.impl.BaseCloudSolrClient.requestWithRetryOnStaleState(BaseCloudSolrClient.java:830)
at 
org.apache.solr.client.solrj.impl.BaseCloudSolrClient.request(BaseCloudSolrClient.java:763)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:207)
at org.apache.solr.client.solrj.SolrClient.commit(SolrClient.java:504)
at org.apache.solr.client.solrj.SolrClient.commit(SolrClient.java:460)
at 
org.apache.solr.cloud.api.collections.TestCollectionAPI.testClusterStateMigration(TestCollectionAPI.java:833)
at 
org.apache.solr.cloud.api.collections.TestCollectionAPI.test(TestCollectionAPI.java:93)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:1082)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:1054)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Commented] (SOLR-8346) Upgrade Zookeeper to version 3.5.x

2019-04-09 Thread Erick Erickson (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-8346?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16813881#comment-16813881
 ] 

Erick Erickson commented on SOLR-8346:
--

This patch has a lot of mechanical changes and does _not_ incorporate any of 
the changes from the previous patch (to be truthful, I didn't notice it when I 
started).

I did this against ZooKeeper 3.5.4-beta since 3.5.5 isn't out yet. Unless I'm 
missing something, there hasn't been a formal release of 3.5 at all so far...

There are some nocommits, particularly in ZkTestServer.

I don't particularly intend to pursue this, just checkpointing. If anyone has a 
better grasp of how to integrate the previous patch, or of whether the changes 
in ZkTestServer make sense, please chime in.

Apart from the nocommits I added back, precommit passes.

 

These tests fail on a single run:

[junit4] - 
org.apache.solr.cloud.OutOfBoxZkACLAndCredentialsProvidersTest.testOpenACLUnsafeAllover
 [junit4] - org.apache.solr.cloud.TestStressLiveNodes (suite)
 [junit4] - 
org.apache.solr.handler.admin.ZookeeperStatusHandlerTest.monitorZookeeper
 [junit4] - org.apache.solr.handler.TestSQLHandler (suite)
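
For reference, the mechanical core of such an upgrade is the dependency bump; a sketch of what that line would look like in lucene/ivy-versions.properties (the exact key and the surrounding changes are assumptions, not copied from the attached patch):
{noformat}
/org.apache.zookeeper/zookeeper = 3.5.4-beta
{noformat}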

> Upgrade Zookeeper to version 3.5.x
> --
>
> Key: SOLR-8346
> URL: https://issues.apache.org/jira/browse/SOLR-8346
> Project: Solr
>  Issue Type: Task
>  Components: SolrCloud
>Reporter: Jan Høydahl
>Priority: Major
>  Labels: security, zookeeper
> Attachments: SOLR-8346.patch, SOLR_8346.patch
>
>
> Investigate upgrading ZooKeeper to 3.5.x, once released. Primary motivation 
> for this is SSL support. Currently a 3.5.4-beta is released (2018-05-17).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-8346) Upgrade Zookeeper to version 3.5.x

2019-04-09 Thread Erick Erickson (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-8346?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson updated SOLR-8346:
-
Attachment: SOLR-8346.patch

> Upgrade Zookeeper to version 3.5.x
> --
>
> Key: SOLR-8346
> URL: https://issues.apache.org/jira/browse/SOLR-8346
> Project: Solr
>  Issue Type: Task
>  Components: SolrCloud
>Reporter: Jan Høydahl
>Priority: Major
>  Labels: security, zookeeper
> Attachments: SOLR-8346.patch, SOLR_8346.patch
>
>
> Investigate upgrading ZooKeeper to 3.5.x, once released. Primary motivation 
> for this is SSL support. Currently a 3.5.4-beta is released (2018-05-17).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-13377) NestableJsonFacet ClassCastException in SolrJ 7.6+

2019-04-09 Thread Jason Gerlowski (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13377?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gerlowski resolved SOLR-13377.

Resolution: Duplicate

Hi Owen, thanks for reporting.  We're already tracking (and working on this 
issue) under SOLR-13318, so I'm going to close this copy out as a duplicate.  
Feel free to continue the discussion over there.

> NestableJsonFacet ClassCastException in SolrJ 7.6+
> --
>
> Key: SOLR-13377
> URL: https://issues.apache.org/jira/browse/SOLR-13377
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrJ
>Affects Versions: 7.6, 7.7, 7.7.1
>Reporter: Owen Clarke
>Priority: Major
> Attachments: SOLR-13377.patch
>
>
> Identified by Gerald Bonfiglio on the mailing list: 
> [http://mail-archives.apache.org/mod_mbox/lucene-solr-user/201903.mbox/%3C2B629E99E8E563409788D895FACA8198AA754A96%40mbx031-w1-co-4.exch031.domain.local%3E]
> I have also encountered this issue, where NestableJsonFacet occasionally 
> receives the facet count as a Long value but blindly casts it to int when 
> assigning to domainCount, causing a ClassCastException. I have a fix that 
> instead checks whether the count value is instanceof Number and then uses 
> Number.longValue() to safely unbox it to long.
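
A minimal sketch of that guard (variable and map names are illustrative, not the actual SOLR-13377 patch):
{code:java}
// "count" may be parsed as an Integer or a Long depending on its magnitude,
// so avoid a blind cast and go through Number instead.
Object count = facetMap.get("count");   // facetMap: the parsed facet response entry (illustrative)
if (count instanceof Number) {
  domainCount = ((Number) count).longValue();   // safe for both Integer and Long
}
{code}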



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-8.x-MacOSX (64bit/jdk1.8.0) - Build # 72 - Unstable!

2019-04-09 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-8.x-MacOSX/72/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseSerialGC

1 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.client.solrj.impl.CloudHttp2SolrClientTest

Error Message:
ObjectTracker found 3 object(s) that were not released!!! 
[MockDirectoryWrapper, SolrCore, MockDirectoryWrapper] 
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException: 
org.apache.lucene.store.MockDirectoryWrapper  at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:42)
  at 
org.apache.solr.core.CachingDirectoryFactory.get(CachingDirectoryFactory.java:348)
  at 
org.apache.solr.handler.IndexFetcher.fetchLatestIndex(IndexFetcher.java:509)  
at org.apache.solr.handler.IndexFetcher.fetchLatestIndex(IndexFetcher.java:351) 
 at 
org.apache.solr.handler.ReplicationHandler.doFetch(ReplicationHandler.java:424) 
 at 
org.apache.solr.handler.ReplicationHandler.lambda$setupPolling$13(ReplicationHandler.java:1193)
  at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)  
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)  at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
  at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) 
 at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
 at java.lang.Thread.run(Thread.java:748)  
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException: 
org.apache.solr.core.SolrCore  at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:42)
  at org.apache.solr.core.SolrCore.(SolrCore.java:1063)  at 
org.apache.solr.core.SolrCore.(SolrCore.java:883)  at 
org.apache.solr.core.CoreContainer.createFromDescriptor(CoreContainer.java:1245)
  at org.apache.solr.core.CoreContainer.create(CoreContainer.java:1155)  at 
org.apache.solr.handler.admin.CoreAdminOperation.lambda$static$0(CoreAdminOperation.java:92)
  at 
org.apache.solr.handler.admin.CoreAdminOperation.execute(CoreAdminOperation.java:360)
  at 
org.apache.solr.handler.admin.CoreAdminHandler$CallInfo.call(CoreAdminHandler.java:396)
  at 
org.apache.solr.handler.admin.CoreAdminHandler.lambda$handleRequestBody$0(CoreAdminHandler.java:188)
  at 
com.codahale.metrics.InstrumentedExecutorService$InstrumentedRunnable.run(InstrumentedExecutorService.java:176)
  at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:209)
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) 
 at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
 at java.lang.Thread.run(Thread.java:748)  
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException: 
org.apache.lucene.store.MockDirectoryWrapper  at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:42)
  at 
org.apache.solr.core.CachingDirectoryFactory.get(CachingDirectoryFactory.java:348)
  at org.apache.solr.update.SolrIndexWriter.create(SolrIndexWriter.java:99)  at 
org.apache.solr.core.SolrCore.initIndex(SolrCore.java:779)  at 
org.apache.solr.core.SolrCore.(SolrCore.java:976)  at 
org.apache.solr.core.SolrCore.(SolrCore.java:883)  at 
org.apache.solr.core.CoreContainer.createFromDescriptor(CoreContainer.java:1245)
  at org.apache.solr.core.CoreContainer.create(CoreContainer.java:1155)  at 
org.apache.solr.handler.admin.CoreAdminOperation.lambda$static$0(CoreAdminOperation.java:92)
  at 
org.apache.solr.handler.admin.CoreAdminOperation.execute(CoreAdminOperation.java:360)
  at 
org.apache.solr.handler.admin.CoreAdminHandler$CallInfo.call(CoreAdminHandler.java:396)
  at 
org.apache.solr.handler.admin.CoreAdminHandler.lambda$handleRequestBody$0(CoreAdminHandler.java:188)
  at 
com.codahale.metrics.InstrumentedExecutorService$InstrumentedRunnable.run(InstrumentedExecutorService.java:176)
  at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:209)
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) 
 at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
 at java.lang.Thread.run(Thread.java:748)   expected null, but 
was:(SolrCore.java:1063)  at 
org.apache.solr.core.SolrCore.(SolrCore.java:883)  at 
org.apache.solr.core.CoreContainer.createFromDescriptor(CoreContainer.java:1245)
  at org.apache.solr.core.CoreContainer.create(CoreContainer.java:1155)  at 
org.apache.solr.handler.admin.CoreAdminOperation.lambda$static$0(CoreAdminOperation.java:92)
  at 
org.apache.solr.handler.admin.CoreAdminOperation.execute(CoreAdminOperation.java:360)
  at 

[JENKINS] Lucene-Solr-repro - Build # 3134 - Unstable

2019-04-09 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-repro/3134/

[...truncated 29 lines...]
[repro] Jenkins log URL: 
https://builds.apache.org/job/Lucene-Solr-NightlyTests-master/1815/consoleText

[repro] Revision: 2533fd1edeb5cccfd835dadcd288f98722b5

[repro] Ant options: -Dtests.multiplier=2 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/test-data/enwiki.random.lines.txt
[repro] Repro line:  ant test  -Dtestcase=HdfsAutoAddReplicasIntegrationTest 
-Dtests.method=testSimple -Dtests.seed=E7F6FE4FA351EA14 -Dtests.multiplier=2 
-Dtests.nightly=true -Dtests.slow=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/test-data/enwiki.random.lines.txt
 -Dtests.locale=da-DK -Dtests.timezone=Asia/Yerevan -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8

[repro] Repro line:  ant test  -Dtestcase=HdfsBasicDistributedZk2Test 
-Dtests.method=test -Dtests.seed=E7F6FE4FA351EA14 -Dtests.multiplier=2 
-Dtests.nightly=true -Dtests.slow=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/test-data/enwiki.random.lines.txt
 -Dtests.locale=en-MT -Dtests.timezone=America/Bahia_Banderas 
-Dtests.asserts=true -Dtests.file.encoding=UTF-8

[repro] git rev-parse --abbrev-ref HEAD
[repro] git rev-parse HEAD
[repro] Initial local git branch/revision: 
4a931998038377112285a2a26e08452be6e2da39
[repro] git fetch
[repro] git checkout 2533fd1edeb5cccfd835dadcd288f98722b5

[...truncated 2 lines...]
[repro] git merge --ff-only

[...truncated 1 lines...]
[repro] ant clean

[...truncated 6 lines...]
[repro] Test suites by module:
[repro]solr/core
[repro]   HdfsBasicDistributedZk2Test
[repro]   HdfsAutoAddReplicasIntegrationTest
[repro] ant compile-test

[...truncated 3564 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=10 
-Dtests.class="*.HdfsBasicDistributedZk2Test|*.HdfsAutoAddReplicasIntegrationTest"
 -Dtests.showOutput=onerror -Dtests.multiplier=2 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/test-data/enwiki.random.lines.txt
 -Dtests.seed=E7F6FE4FA351EA14 -Dtests.multiplier=2 -Dtests.nightly=true 
-Dtests.slow=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/test-data/enwiki.random.lines.txt
 -Dtests.locale=en-MT -Dtests.timezone=America/Bahia_Banderas 
-Dtests.asserts=true -Dtests.file.encoding=UTF-8

[...truncated 20033 lines...]
[repro] Setting last failure code to 256

[repro] Failures:
[repro]   1/5 failed: 
org.apache.solr.cloud.autoscaling.HdfsAutoAddReplicasIntegrationTest
[repro]   4/5 failed: org.apache.solr.cloud.hdfs.HdfsBasicDistributedZk2Test
[repro] git checkout 4a931998038377112285a2a26e08452be6e2da39

[...truncated 2 lines...]
[repro] Exiting with code 256

[...truncated 6 lines...]

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[jira] [Commented] (SOLR-13376) Multi-node race condition to create/remove nodeLost markers

2019-04-09 Thread Andrzej Bialecki (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13376?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16813718#comment-16813718
 ] 

Andrzej Bialecki  commented on SOLR-13376:
--

[~hossman] - this patch changes the {{OverseerTriggerThread}} so that it does 
not remove markers once it has finished initializing all triggers; instead it 
only marks them "inactive". This kills two birds with one stone: it prevents 
straggler nodes from re-creating these markers, and it allows triggers to avoid 
processing them multiple times (on multiple Overseer leader changes). It also 
speeds up removal of markers in {{InactiveMarkersPlanAction}}.
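
For illustration only, here is a minimal sketch of the "mark inactive instead 
of delete" idea using a plain ZooKeeper client. The path and payload constants 
are hypothetical and do not match the actual marker znode format in Solr:
{code:java}
import java.nio.charset.StandardCharsets;
import java.util.List;
import org.apache.zookeeper.ZooKeeper;

public class MarkerSketch {
  // Hypothetical path and payload; the real Solr marker znodes look different.
  private static final String MARKERS_PATH = "/autoscaling/nodeLost";
  private static final byte[] INACTIVE =
      "{\"state\":\"inactive\"}".getBytes(StandardCharsets.UTF_8);

  /** Instead of deleting each marker after trigger init, overwrite it as "inactive". */
  static void deactivateMarkers(ZooKeeper zk) throws Exception {
    List<String> markers = zk.getChildren(MARKERS_PATH, false);
    for (String marker : markers) {
      // version -1 = unconditional update; because the znode is kept (just
      // flagged inactive), a straggler's attempt to re-create it fails instead
      // of resurrecting a "live" marker.
      zk.setData(MARKERS_PATH + "/" + marker, INACTIVE, -1);
    }
  }
}
{code}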

I also added some Ref Guide documentation about the maintenance trigger. I'd 
appreciate a review.

> Multi-node race condition to create/remove nodeLost markers
> ---
>
> Key: SOLR-13376
> URL: https://issues.apache.org/jira/browse/SOLR-13376
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>Assignee: Andrzej Bialecki 
>Priority: Major
> Attachments: SOLR-13376.patch
>
>
> NodeMarkersRegistrationTest.testNodeMarkersRegistration is frequently failing 
> on jenkins builds in the same spot, with a similar looking logs.
> Although i haven't been able to reproduce these failures locally, I am fairly 
> confident that the problem is a race condition bug that exists between 
> when/how a new Overseer will process & clean up "nodeLost" marker's in ZK, 
> with how other nodes may (mistakenly) re-create those markers in their 
> liveNodes listener.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-13376) Multi-node race condition to create/remove nodeLost markers

2019-04-09 Thread Andrzej Bialecki (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13376?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrzej Bialecki  updated SOLR-13376:
-
Attachment: SOLR-13376.patch

> Multi-node race condition to create/remove nodeLost markers
> ---
>
> Key: SOLR-13376
> URL: https://issues.apache.org/jira/browse/SOLR-13376
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>Assignee: Andrzej Bialecki 
>Priority: Major
> Attachments: SOLR-13376.patch
>
>
> NodeMarkersRegistrationTest.testNodeMarkersRegistration is frequently failing 
> on jenkins builds in the same spot, with a similar looking logs.
> Although i haven't been able to reproduce these failures locally, I am fairly 
> confident that the problem is a race condition bug that exists between 
> when/how a new Overseer will process & clean up "nodeLost" marker's in ZK, 
> with how other nodes may (mistakenly) re-create those markers in their 
> liveNodes listener.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] [lucene-solr] diegoceccarelli commented on a change in pull request #300: SOLR-11831: Skip second grouping step if group.limit is 1 (aka Las Vegas Patch)

2019-04-09 Thread GitBox
diegoceccarelli commented on a change in pull request #300: SOLR-11831: Skip 
second grouping step if group.limit is 1 (aka Las Vegas Patch)
URL: https://github.com/apache/lucene-solr/pull/300#discussion_r273634722
 
 

 ##
 File path: 
solr/core/src/java/org/apache/solr/search/grouping/distributed/shardresultserializer/SearchGroupsResultTransformer.java
 ##
 @@ -34,17 +41,37 @@
 /**
  * Implementation for transforming {@link SearchGroup} into a {@link 
NamedList} structure and visa versa.
  */
-public class SearchGroupsResultTransformer implements 
ShardResultTransformer, Map> {
+public abstract class SearchGroupsResultTransformer implements 
ShardResultTransformer, Map> {
 
 Review comment:
   https://github.com/bloomberg/lucene-solr/pull/231 sketches the idea, what do 
you think? 


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Welcome Tomoko Uchida as Lucene/Solr committer

2019-04-09 Thread Ahmet Arslan
Congratulations, Tomoko!




On Tuesday, April 9, 2019, 8:48:03 PM GMT+3, Robert Muir  
wrote: 





Welcome!

On Mon, Apr 8, 2019 at 11:21 AM Uwe Schindler  wrote:
>
> Hi all,
>
> Please join me in welcoming Tomoko Uchida as the latest Lucene/Solr committer!
>
> She has been working on https://issues.apache.org/jira/browse/LUCENE-2562 for 
> several years with awesome progress and finally we got the fantastic Luke as 
> a branch on ASF JIRA: 
> https://gitbox.apache.org/repos/asf?p=lucene-solr.git;a=shortlog;h=refs/heads/jira/lucene-2562-luke-swing-3
> Looking forward to the first release of Apache Lucene 8.1 with Luke bundled 
> in the distribution. I will take care of merging it to master and 8.x 
> branches together with her once she got the ASF account.
>
> Tomoko also helped with the Japanese and Korean Analyzers.
>
> Congratulations and Welcome, Tomoko! Tomoko, it's traditional for you to 
> introduce yourself with a brief bio.
>
> Uwe & Robert (who nominated Tomoko)
>
> -
> Uwe Schindler
> Achterdiek 19, D-28357 Bremen
> https://www.thetaphi.de
> eMail: u...@thetaphi.de
>
>
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13366) AutoScalingConfig 'Invalid stage name' warnings after upgrade

2019-04-09 Thread Christine Poerschke (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13366?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16813663#comment-16813663
 ] 

Christine Poerschke commented on SOLR-13366:


{quote}... my theory is that the listener got auto-created (with the WAITING 
stage) when the cloud was running pre-7.2.0 code and then after upgrading the 
warnings start to appear.
{quote}
Have been able to confirm that theory, and tested the patch in the process.

A follow-up question, I guess, is if/how/when the invalid stage being WARN-ed 
about can easily be removed after upgrading.
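
For anyone wanting to clean this up after upgrading, a hedged sketch of one 
option is to re-declare the auto-created listener without the removed 
{{WAITING}} stage via the autoscaling API's {{set-listener}} command. The 
listener/trigger names and stage list below are copied from the warning above; 
the Solr URL is an assumption:
{code:java}
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class ResetSystemLogListener {
  public static void main(String[] args) throws Exception {
    // Same listener as in the warning, minus the WAITING stage that no longer exists.
    String body = "{\"set-listener\": {"
        + "\"name\": \".auto_add_replicas.system\","
        + "\"trigger\": \".auto_add_replicas\","
        + "\"stage\": [\"STARTED\",\"ABORTED\",\"SUCCEEDED\",\"FAILED\","
        + "\"BEFORE_ACTION\",\"AFTER_ACTION\"],"
        + "\"class\": \"org.apache.solr.cloud.autoscaling.SystemLogListener\"}}";
    HttpURLConnection conn = (HttpURLConnection)
        new URL("http://localhost:8983/solr/admin/autoscaling").openConnection();
    conn.setRequestMethod("POST");
    conn.setRequestProperty("Content-Type", "application/json");
    conn.setDoOutput(true);
    try (OutputStream out = conn.getOutputStream()) {
      out.write(body.getBytes(StandardCharsets.UTF_8));
    }
    System.out.println("set-listener returned HTTP " + conn.getResponseCode());
  }
}
{code}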

> AutoScalingConfig 'Invalid stage name' warnings after upgrade
> -
>
> Key: SOLR-13366
> URL: https://issues.apache.org/jira/browse/SOLR-13366
> Project: Solr
>  Issue Type: Bug
>  Components: AutoScaling
>Reporter: Christine Poerschke
>Priority: Minor
> Attachments: SOLR-13366.patch
>
>
> I noticed WARNings like this in some of our logs:
> {code:java}
> ... OverseerAutoScalingTriggerThread ... o.a.s.c.s.c.a.AutoScalingConfig 
> Invalid stage name '.auto_add_replicas.system' in listener config, skipping: 
> {beforeAction=[], afterAction=[], trigger=.auto_add_replicas, stage=[WAITING, 
> STARTED, ABORTED, SUCCEEDED, FAILED, BEFORE_ACTION, AFTER_ACTION], 
> class=org.apache.solr.cloud.autoscaling.SystemLogListener}
> {code}
> After some detective work I think I've tracked this down to 7.1.0 
> [TriggerEventProcessorStage|https://github.com/apache/lucene-solr/blob/releases/lucene-solr/7.1.0/solr/solrj/src/java/org/apache/solr/client/solrj/cloud/autoscaling/TriggerEventProcessorStage.java]
>  having a {{WAITING}} stage and that stage having been removed in 7.2.0 
> [TriggerEventProcessorStage|https://github.com/apache/lucene-solr/blob/releases/lucene-solr/7.2.0/solr/solrj/src/java/org/apache/solr/client/solrj/cloud/autoscaling/TriggerEventProcessorStage.java]
>  via the SOLR-11320 changes. Haven't tried to reproduce it but my theory is 
> that the listener got auto-created (with the {{WAITING}} stage) when the 
> cloud was running pre-7.2.0 code and then after upgrading the warnings start 
> to appear.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (SOLR-13366) AutoScalingConfig 'Invalid stage name' warnings after upgrade

2019-04-09 Thread Christine Poerschke (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13366?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Christine Poerschke reassigned SOLR-13366:
--

Assignee: Christine Poerschke

> AutoScalingConfig 'Invalid stage name' warnings after upgrade
> -
>
> Key: SOLR-13366
> URL: https://issues.apache.org/jira/browse/SOLR-13366
> Project: Solr
>  Issue Type: Bug
>  Components: AutoScaling
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
>Priority: Minor
> Attachments: SOLR-13366.patch
>
>
> I noticed WARNings like this in some of our logs:
> {code:java}
> ... OverseerAutoScalingTriggerThread ... o.a.s.c.s.c.a.AutoScalingConfig 
> Invalid stage name '.auto_add_replicas.system' in listener config, skipping: 
> {beforeAction=[], afterAction=[], trigger=.auto_add_replicas, stage=[WAITING, 
> STARTED, ABORTED, SUCCEEDED, FAILED, BEFORE_ACTION, AFTER_ACTION], 
> class=org.apache.solr.cloud.autoscaling.SystemLogListener}
> {code}
> After some detective work I think I've tracked this down to 7.1.0 
> [TriggerEventProcessorStage|https://github.com/apache/lucene-solr/blob/releases/lucene-solr/7.1.0/solr/solrj/src/java/org/apache/solr/client/solrj/cloud/autoscaling/TriggerEventProcessorStage.java]
>  having a {{WAITING}} stage and that stage having been removed in 7.2.0 
> [TriggerEventProcessorStage|https://github.com/apache/lucene-solr/blob/releases/lucene-solr/7.2.0/solr/solrj/src/java/org/apache/solr/client/solrj/cloud/autoscaling/TriggerEventProcessorStage.java]
>  via the SOLR-11320 changes. Haven't tried to reproduce it but my theory is 
> that the listener got auto-created (with the {{WAITING}} stage) when the 
> cloud was running pre-7.2.0 code and then after upgrading the warnings start 
> to appear.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8346) Upgrade Zookeeper to version 3.5.x

2019-04-09 Thread Erick Erickson (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-8346?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16813643#comment-16813643
 ] 

Erick Erickson commented on SOLR-8346:
--

I'll take a quick whack at trying 3.5.4-beta, just to get a dimension on the 
problem.

> Upgrade Zookeeper to version 3.5.x
> --
>
> Key: SOLR-8346
> URL: https://issues.apache.org/jira/browse/SOLR-8346
> Project: Solr
>  Issue Type: Task
>  Components: SolrCloud
>Reporter: Jan Høydahl
>Priority: Major
>  Labels: security, zookeeper
> Attachments: SOLR_8346.patch
>
>
> Investigate upgrading ZooKeeper to 3.5.x, once released. Primary motivation 
> for this is SSL support. Currently a 3.5.4-beta is released (2018-05-17).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8738) Bump minimum Java version requirement to 11

2019-04-09 Thread Uwe Schindler (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8738?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16813590#comment-16813590
 ] 

Uwe Schindler commented on LUCENE-8738:
---

Sorry, forget about it. I will try to fix it in the 8.x branch in a separate 
issue. The problem is:
- If you have branch 8.x checked out (or the current master still on Java 8) 
and you build the javadocs with Java 8, all works fine.
- If you then switch to Java 11 without "ant clean" and rebuild code/javadocs, 
it fails, as the Java 11 javadoc tool looks for the package list under its new 
name when referring to other Lucene modules.

So my wish would be to make the javadoc tool in Lucene 8.x produce the same 
package-list files on all JVMs. If you compile and build javadocs on Lucene 8.x 
with Java 11, it produces element-list and fails. But that's more an issue of 
8.x only.

This issue was getting on my nerves yesterday. But sorry for the hassle here, 
it just looked like the same problem...

> Bump minimum Java version requirement to 11
> ---
>
> Key: LUCENE-8738
> URL: https://issues.apache.org/jira/browse/LUCENE-8738
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: general/build
>Reporter: Adrien Grand
>Priority: Minor
>  Labels: Java11
> Fix For: master (9.0)
>
>
> See vote thread for reference: https://markmail.org/message/q6ubdycqscpl43aq.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8738) Bump minimum Java version requirement to 11

2019-04-09 Thread Adrien Grand (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8738?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16813565#comment-16813565
 ] 

Adrien Grand commented on LUCENE-8738:
--

Sorry Uwe, I don't understand what you are suggesting.

> Bump minimum Java version requirement to 11
> ---
>
> Key: LUCENE-8738
> URL: https://issues.apache.org/jira/browse/LUCENE-8738
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: general/build
>Reporter: Adrien Grand
>Priority: Minor
>  Labels: Java11
> Fix For: master (9.0)
>
>
> See vote thread for reference: https://markmail.org/message/q6ubdycqscpl43aq.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8738) Bump minimum Java version requirement to 11

2019-04-09 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8738?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16813553#comment-16813553
 ] 

ASF subversion and git services commented on LUCENE-8738:
-

Commit 744b375f2739112ca8326e036839f0b283065aba in lucene-solr's branch 
refs/heads/jira/LUCENE-8738 from Adrien Grand
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=744b375 ]

LUCENE-8738: Make documentation-lint pass.


> Bump minimum Java version requirement to 11
> ---
>
> Key: LUCENE-8738
> URL: https://issues.apache.org/jira/browse/LUCENE-8738
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: general/build
>Reporter: Adrien Grand
>Priority: Minor
>  Labels: Java11
> Fix For: master (9.0)
>
>
> See vote thread for reference: https://markmail.org/message/q6ubdycqscpl43aq.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: JCC strange behaviour

2019-04-09 Thread Andi Vajda
Thank you !

Andi..

> On Apr 9, 2019, at 00:25, Petrus Hyvönen  wrote:
> 
> Hi,
> 
> I did a test case and filed an issue for it in the JIRA, test case uploaded
> there.
> 
> Best Regards
> /Petrus
> 
> 
> On Mon, Apr 8, 2019 at 9:00 AM Petrus Hyvönen 
> wrote:
> 
>> Hi Andi,
>> 
>> I will write a small test case for this.
>> 
>> I think the best solution would be to sort the comparisons of types by
>> inheritance within the same number of parameters, but not sure how complex
>> this could end up. Maybe the IsInstanceOf method could be used to check if
>> there are such dependencies between the input parameters and then put the
>> ones lowest "level" first in the test. As I understand this would only be
>> needed at compile time, unless there are some dynamic types that will make
>> it very complex.
>> 
>> Another alternative i guess would be to use very strict typing rules so it
>> really needs to be the same class of the types but that may affect how a
>> library is used alot.
>> 
>> With Best Regards
>> /Petrus
>> 
>> 
>>> On Sun, Apr 7, 2019 at 10:50 PM Andi Vajda  wrote:
>>> 
>>> 
 On Sun, 7 Apr 2019, Petrus Hyvönen wrote:
 
 Hi,
 
 I am getting inconsistent results and think I may be onto something. I
>>> am
 doing builds both locally and in a online build server, where there
>>> should
 be no old versions around. But both locally and there the results
>>> between
 different runs varies even on same platforms. Sometimes I get the right
 return type and for some builds and the same function returns its
>>> parents
 for other build (same platform).
 
 I have looked at the generated code and there seems to be a difference
>>> in
 the order of the parameter checking from the working vs non-working (no
>>> big
 statistical ground however).
 
 For the working one, the child class type is checked before the parent.
 Could it be that the parseArgs function gets a positive match when
 comparing child with the parent?
>>> 
>>> I think I understand what's going on but I'm not sure and having a small
>>> piece of code to reproduce the problem would help.
>>> 
>>> Namely, when you have more than one overload for a given method that is
>>> valid for a given call then the order in which they're considered is
>>> probably (I didn't check) in the order the method signatures were
>>> returned
>>> by Java during the JCC compile. JCC sorts the overloads by number of args
>>> but then *does not* sort them by, say, lowest subclass in the tree first.
>>> This could be tricky, or even undecidable (or make things slower at
>>> runtime
>>> by trying to find the lowest match based on your python call - at the
>>> moment, the first valid match is returned).
>>> 
>>> It is a bug for it to be random, however, so I think I should be able to
>>> get
>>> JCC to always succeed or fail in the same way, and not let the order of
>>> methods as returned by Java during compilation by JCC determine the
>>> behaviour.
>>> 
>>> I could be wrong about this (the order returned by Java could be
>>> deterministic and well chosen and later broken by JCC, for example, I
>>> didn't
>>> yet check).
>>> 
>>> You can help in getting this resolved by creating a small example that
>>> exhibits this bug (multiple overloads of a method that differ only in
>>> which
>>> piece of a subclass tree is used).
>>> 
>>> You can also workaround this problem by using different method names to
>>> avoid such confusion with overloads (if that is indeed the problem).
>>> 
>>> Andi..
>>> 
 
 Faulty return type (returns PVCoordinates type on
>>> TimeStampedPVCoordinates
 input)
 
 
 
   ::org::orekit::utils::PVCoordinates a0((jobject) NULL);
 
   ::org::orekit::utils::PVCoordinates result((jobject) NULL);
 
 
 
   if (!parseArgs(args, "k",
 ::org::orekit::utils::PVCoordinates::initializeClass, ))
 
   {
 
 OBJ_CALL(result = self->object.transformPVCoordinates(a0));
 
 return ::org::orekit::utils::t_PVCoordinates::wrap_Object
 (result);
 
   }
 
 }
 
 {
 
   ::org::orekit::utils::TimeStampedPVCoordinates a0((jobject)
>>> NULL
 );
 
   ::org::orekit::utils::TimeStampedPVCoordinates
>>> result((jobject)
 NULL);
 
 
 
   if (!parseArgs(args, "k",
 ::org::orekit::utils::TimeStampedPVCoordinates::initializeClass, ))
 
   {
 
 OBJ_CALL(result = self->object.transformPVCoordinates(a0)); // This is where it is computed
 
 return ::org::orekit::utils::t_TimeStampedPVCoordinates::
 wrap_Object(result);
 
   }
 
 }
 
 
 
 And corresponding code in a build that works: (TimeStampedPVCoordinates
>>> as
 parameter gets return type 

[jira] [Commented] (SOLR-13385) Upgrade dependency jackson-databind in solr package contrib/prometheus-exporter/lib

2019-04-09 Thread DW (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13385?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16813532#comment-16813532
 ] 

DW commented on SOLR-13385:
---

Great to hear. Thanks.

> Upgrade dependency jackson-databind in solr package 
> contrib/prometheus-exporter/lib
> ---
>
> Key: SOLR-13385
> URL: https://issues.apache.org/jira/browse/SOLR-13385
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.6, 8.0.1
>Reporter: DW
>Assignee: Kevin Risden
>Priority: Major
> Fix For: 8.1, master (9.0)
>
>
> The current used jackson-databind in 
> /contrib/prometheus-exporter/lib/jackson-databind-2.9.6.jar has known 
> Security Vulnerabilities record. Please upgrade to 2.9.8+. Thanks.
>  
> Please let me know if you would like detailed CVE records.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8738) Bump minimum Java version requirement to 11

2019-04-09 Thread Uwe Schindler (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8738?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16813517#comment-16813517
 ] 

Uwe Schindler commented on LUCENE-8738:
---

bq. Ok, maybe we should do the same on 8.x, so switching branches and jvms 
don't bring you trouble all the time.

Sorry, this does not affect 8.x as it's about the downloaded files from the 
Oracle web page. The issue I am talking about is the same for cross-module 
references inside Lucene. If you compile in Lucene 8.x it produces files with a 
different name on Java 8 than on Java 11. So if you switch JDKs while working, 
you get errors. Maybe we can also tell javac to produce the output files with a 
consistent name.

> Bump minimum Java version requirement to 11
> ---
>
> Key: LUCENE-8738
> URL: https://issues.apache.org/jira/browse/LUCENE-8738
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: general/build
>Reporter: Adrien Grand
>Priority: Minor
>  Labels: Java11
> Fix For: master (9.0)
>
>
> See vote thread for reference: https://markmail.org/message/q6ubdycqscpl43aq.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13383) auto-scaling not working in solr7.4 - autoaddreplica

2019-04-09 Thread Richard (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13383?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16813518#comment-16813518
 ] 

Richard commented on SOLR-13383:


Hi [~antojohn]
Sorry I'm having a bit of difficulty understanding, but I *think* I know what 
you're after.

Theory:
You want to have as many replicas as there are nodes? For example, you 
originally started with 3 nodes and your shard had 3 replicas, but when you 
added 2 more nodes you still had 3 replicas, not 5, and you want 5?

Solution:
I think the replication factor you're setting when creating the collection 
could be the issue.

{panel:title=SolrCloud Autoscaling Automatically Adding Replicas}
Solr provides a way to automatically add replicas for a collection when the 
number of active replicas drops below the replication factor specified at the 
time of the creation of the collection.
{panel}
This goes back to you adding {{autoAddReplicas=true}} to your collection 
creation, which should work if you re-enable that _(sorry for the confusion, I 
have a better understanding of your problem now)_.

But if your replication factor is set to 3, you have 3 healthy replicas, and 
you then add 2 more nodes to your cluster, Solr will see this as needing no 
action.

It also looks like you cannot add {{"preferredOperation": "ADDREPLICA"}} to 
your triggers. I've had a look at the source code:

[v7.4 
NodeAddedTrigger|https://github.com/apache/lucene-solr/blob/branch_7_4/solr/core/src/java/org/apache/solr/cloud/autoscaling/NodeAddedTrigger.java]
[v7.5 
NodeAddedTrigger|https://github.com/apache/lucene-solr/blob/branch_7_5/solr/core/src/java/org/apache/solr/cloud/autoscaling/NodeAddedTrigger.java]

The main difference to pay attention to is the following _(around line 39)_
{code}
- v7.4 , + v7.5
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

+ import static org.apache.solr.common.params.AutoScalingParams.PREFERRED_OP;
{code}
So in v7.5 they introduced the {{PREFERRED_OP}} constant, which allows you to 
specify {{ADDREPLICA}} as you tried to do previously.

Because of this I don't think it's necessarily a bug in v7.4, more a feature 
that is only present in v7.5+.
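
Until you can upgrade, a hedged workaround sketch on 7.4 is to add the replica 
yourself once the new node has joined, via the Collections API ADDREPLICA 
action. The collection, shard and node values below are placeholders:
{code:java}
import java.net.HttpURLConnection;
import java.net.URL;
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

public class AddReplicaManually {
  public static void main(String[] args) throws Exception {
    // Placeholders: substitute your collection, shard and the newly added node name.
    String collection = "my_collection_3";
    String shard = "shard1";
    String node = URLEncoder.encode("newhost:8983_solr", StandardCharsets.UTF_8.name());
    URL url = new URL("http://localhost:8983/solr/admin/collections?action=ADDREPLICA"
        + "&collection=" + collection + "&shard=" + shard + "&node=" + node);
    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
    System.out.println("ADDREPLICA returned HTTP " + conn.getResponseCode());
  }
}
{code}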

> auto-scaling not working in solr7.4 - autoaddreplica
> 
>
> Key: SOLR-13383
> URL: https://issues.apache.org/jira/browse/SOLR-13383
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: config-api
>Affects Versions: 7.4
> Environment: Production
>Reporter: AntonyJohnson
>Priority: Blocker
>  Labels: performance
> Fix For: 7.4.1
>
> Attachments: 3collections.PNG
>
>
> We're able to create new server via auto-scaling for our solr cluster 7.4 
> version. But the newly created server is not adding in our solr cluster 
> automatically. Is there any settings or configurations we need to add in 
> order to add the replica automatically in cluster for any collections.
> *commands used:* 
> {code:java}
> curl 
> "http://localhost:8983/solr/admin/collections?action=CREATE=my_collection_3=1=3=true;
> {code}
> Currently 3 nodes are in our cluster and i'm trying to add 4th nod but its 
> not getting added in solr 7.4 version. The same scenario is working fine in 
> solr 7.5 version.
> *scaling policy used:*
> {code:java}
> 1)
> curl http://localhost:8983/solr/admin/autoscaling -H 
> 'Content-type:application/json' -d '{
>  "set-cluster-policy" : [{
>  "replica" : "1",
>  "shard" : "#EACH",
>  "node" : "#ANY",
>  }]
> }'
> 2)###Node Added Trigger
> curl http://localhost:8983/solr/admin/autoscaling -H 
> 'Content-type:application/json' -d '{
> "set-trigger": {
> "name" : "node_added_trigger",
> "event" : "nodeAdded",
> "waitFor" : "5s",
> "preferredOperation": "ADDREPLICA",
> "enabled" : true,
> "actions" : [
> {
> "name" : "compute_plan",
> "class": "solr.ComputePlanAction"
> },
> {
> "name" : "execute_plan",
> "class": "solr.ExecutePlanAction"
> }
> ]
> }
> }'
> 3)###Node Lost Trigger
> curl http://localhost:8983/solr/admin/autoscaling -H 
> 'Content-type:application/json' -d '{
> "set-trigger": {
> "name" : "node_lost_trigger",
> "event" : "nodeLost",
> "waitFor" : "5s",
> "preferredOperation": "DELETENODE",
> "enabled" : true,
> "actions" : [
> {
> "name" : "compute_plan",
> "class": "solr.ComputePlanAction"
> },
> {
> "name" : "execute_plan",
> "class": "solr.ExecutePlanAction"
> }
> ]
> }
> }'
> {code}
> Note the same policy(2,3) not working in 7.4
> *Errors:*
> {code:java}
> [Mon Apr 08 11:52:00 UTC root@hawkeye-common ~]# curl 
> http://localhost:8983/solr/admin/autoscaling -H 
> 'Content-type:application/json' -d '{
> >  "set-trigger": {
> >   "name" : "node_added_trigger",
> >   "event" : "nodeAdded",
> >   "waitFor" : "5s",
> >   "preferredOperation": "ADDREPLICA",

[jira] [Comment Edited] (LUCENE-8738) Bump minimum Java version requirement to 11

2019-04-09 Thread Uwe Schindler (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8738?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16813517#comment-16813517
 ] 

Uwe Schindler edited comment on LUCENE-8738 at 4/9/19 3:04 PM:
---

bq. Ok, maybe we should do the same on 8.x, so switching branches and jvms 
don't bring you trouble all the time.

Sorry, this does not affect 8.x as it's about the downloaded files from the 
Oracle web page. The issue I am talking about is the same for cross-module 
references inside Lucene. If you compile in Lucene 8.x it produces files with a 
different name on Java 8 than on Java 11. So if you switch JDKs while working, 
you get errors. Maybe we can also tell the javadoc command to produce the 
output files with a consistent name.


was (Author: thetaphi):
bq. Ok, maybe we should do the same on 8.x, so switching branches and jvms 
don't bring you trouble all the time.

Sorry, this does not affect 8.x as it's about the downloaded files from the 
Oracle web page. The issue I am talking about is the same for cross-module 
references inside Lucene. If you compile in Lucene 8.x it produces files with a 
different name on Java 8 than on Java 11. So if you switch JDKs while working, 
you get errors. Maybe we can also tell javac to produce the output files with a 
consistent name.

> Bump minimum Java version requirement to 11
> ---
>
> Key: LUCENE-8738
> URL: https://issues.apache.org/jira/browse/LUCENE-8738
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: general/build
>Reporter: Adrien Grand
>Priority: Minor
>  Labels: Java11
> Fix For: master (9.0)
>
>
> See vote thread for reference: https://markmail.org/message/q6ubdycqscpl43aq.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13370) Investigate Memory Footprint of CategoryRoutedAliasUpdateProcessorTest

2019-04-09 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13370?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16813490#comment-16813490
 ] 

ASF subversion and git services commented on SOLR-13370:


Commit 4a931998038377112285a2a26e08452be6e2da39 in lucene-solr's branch 
refs/heads/master from Gus Heck
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=4a93199 ]

SOLR-13370 - Trying reduction of cluster size, but not clear that
should help from local tests/profile, but maybe it helps in more memory
constrained build servers, this and prior version both beasted success
10 rounds of 5 on a machine with lots of memory.


> Investigate Memory Footprint of CategoryRoutedAliasUpdateProcessorTest
> --
>
> Key: SOLR-13370
> URL: https://issues.apache.org/jira/browse/SOLR-13370
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Tests
>Affects Versions: master (9.0)
>Reporter: Gus Heck
>Priority: Major
>
> The test is failing too frequently, usually with OOM on the build servers. 
> This sub task will track changes/investigation/discussion to improve that 
> situation.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8346) Upgrade Zookeeper to version 3.5.x

2019-04-09 Thread JIRA


[ 
https://issues.apache.org/jira/browse/SOLR-8346?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16813467#comment-16813467
 ] 

Jan Høydahl commented on SOLR-8346:
---

v3.5.5 RC3 is being voted upon now: 
[https://lists.apache.org/thread.html/43daf77b7a60a0ed7773d7ec640738ddda65a4126b76c23a85b672c4@%3Cdev.zookeeper.apache.org%3E]
 :)

Who wants to start working on the upgrade?

> Upgrade Zookeeper to version 3.5.x
> --
>
> Key: SOLR-8346
> URL: https://issues.apache.org/jira/browse/SOLR-8346
> Project: Solr
>  Issue Type: Task
>  Components: SolrCloud
>Reporter: Jan Høydahl
>Priority: Major
>  Labels: security, zookeeper
> Attachments: SOLR_8346.patch
>
>
> Investigate upgrading ZooKeeper to 3.5.x, once released. Primary motivation 
> for this is SSL support. Currently a 3.5.4-beta is released (2018-05-17).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8738) Bump minimum Java version requirement to 11

2019-04-09 Thread Uwe Schindler (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8738?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16813461#comment-16813461
 ] 

Uwe Schindler commented on LUCENE-8738:
---

Ok, maybe we should do the same on 8.x, so switching branches and jvms don't 
bring you trouble all the time.

> Bump minimum Java version requirement to 11
> ---
>
> Key: LUCENE-8738
> URL: https://issues.apache.org/jira/browse/LUCENE-8738
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: general/build
>Reporter: Adrien Grand
>Priority: Minor
>  Labels: Java11
> Fix For: master (9.0)
>
>
> See vote thread for reference: https://markmail.org/message/q6ubdycqscpl43aq.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8738) Bump minimum Java version requirement to 11

2019-04-09 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8738?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16813448#comment-16813448
 ] 

ASF subversion and git services commented on LUCENE-8738:
-

Commit 541b77afe78b08190fbb8bfa2e091862d9ff08c8 in lucene-solr's branch 
refs/heads/jira/LUCENE-8738 from Adrien Grand
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=541b77a ]

LUCENE-8738: Keep the file called package-list locally to make links work.


> Bump minimum Java version requirement to 11
> ---
>
> Key: LUCENE-8738
> URL: https://issues.apache.org/jira/browse/LUCENE-8738
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: general/build
>Reporter: Adrien Grand
>Priority: Minor
>  Labels: Java11
> Fix For: master (9.0)
>
>
> See vote thread for reference: https://markmail.org/message/q6ubdycqscpl43aq.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8738) Bump minimum Java version requirement to 11

2019-04-09 Thread Adrien Grand (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8738?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16813445#comment-16813445
 ] 

Adrien Grand commented on LUCENE-8738:
--

Apparently the issue can be worked around by keeping the local file named 
package-list, even though it is supposed to be called element-list with the 
move to modules. I'll push a fix shortly.
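
A rough illustration of the same idea (not the actual build change, which lives 
in the Ant targets): if a JDK 11 javadoc run produces element-list, also 
exposing it under the legacy package-list name keeps offline links resolvable 
for consumers expecting either file. The output path here is hypothetical:
{code:java}
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class PackageListShim {
  public static void main(String[] args) throws Exception {
    // Hypothetical javadoc output directory for one module.
    Path docs = Paths.get("build/docs/core");
    Path elementList = docs.resolve("element-list"); // produced by JDK 11 javadoc
    Path packageList = docs.resolve("package-list"); // name expected by older tooling/links
    if (Files.exists(elementList) && !Files.exists(packageList)) {
      Files.copy(elementList, packageList);
    }
  }
}
{code}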

> Bump minimum Java version requirement to 11
> ---
>
> Key: LUCENE-8738
> URL: https://issues.apache.org/jira/browse/LUCENE-8738
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: general/build
>Reporter: Adrien Grand
>Priority: Minor
>  Labels: Java11
> Fix For: master (9.0)
>
>
> See vote thread for reference: https://markmail.org/message/q6ubdycqscpl43aq.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12120) New plugin type AuditLoggerPlugin

2019-04-09 Thread JIRA


[ 
https://issues.apache.org/jira/browse/SOLR-12120?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16813435#comment-16813435
 ] 

Jan Høydahl commented on SOLR-12120:


Committed a few minor improvements prompted by testing with a new 3rd-party 
audit plugin. If auditing fails, the request itself should not fail, even in 
synchronous mode; instead it should log at ERROR level and, of course, 
increment the ERROR metrics counter.
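
The pattern described above boils down to something like this hedged sketch 
(class, metric and logger names are placeholders, not the actual 
AuditLoggerPlugin code):
{code:java}
import com.codahale.metrics.Meter;
import com.codahale.metrics.MetricRegistry;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class AuditCallSketch {
  private static final Logger log = LoggerFactory.getLogger(AuditCallSketch.class);
  private final MetricRegistry metrics = new MetricRegistry();
  private final Meter numErrors = metrics.meter("audit.errors");

  /** Audit synchronously, but never let an audit failure fail the request itself. */
  void auditSafely(Runnable auditCall) {
    try {
      auditCall.run();
    } catch (Exception e) {
      numErrors.mark();                      // increment the ERROR metrics counter
      log.error("Audit logging failed", e);  // log at ERROR level
      // deliberately no rethrow: the original request is not failed
    }
  }
}
{code}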

> New plugin type AuditLoggerPlugin
> -
>
> Key: SOLR-12120
> URL: https://issues.apache.org/jira/browse/SOLR-12120
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: security
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
>Priority: Major
> Fix For: 8.1
>
>  Time Spent: 3h 10m
>  Remaining Estimate: 0h
>
> Solr needs a well defined plugin point to implement audit logging 
> functionality, which is independent from whatever {{AuthenticationPlugin}} or 
> {{AuthorizationPlugin}} are in use at the time.
> It seems reasonable to introduce a new plugin type {{AuditLoggerPlugin}}. It 
> could be configured in solr.xml or it could be a third type of plugin defined 
> in {{security.json}}, i.e.
> {code:java}
> {
>   "authentication" : { "class" : ... },
>   "authorization" : { "class" : ... },
>   "auditlogging" : { "class" : "x.y.MyAuditLogger", ... }
> }
> {code}
> We could then instrument SolrDispatchFilter to the audit plugin with an 
> AuditEvent at important points such as successful authentication:
> {code:java}
> auditLoggerPlugin.audit(new SolrAuditEvent(EventType.AUTHENTICATED, 
> request)); 
> {code}
>  We will mark the impl as {{@lucene.experimental}} in the first release to 
> let it settle as people write their own plugin implementations.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8738) Bump minimum Java version requirement to 11

2019-04-09 Thread Uwe Schindler (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8738?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16813426#comment-16813426
 ] 

Uwe Schindler commented on LUCENE-8738:
---

bq. There seems to be issues with links to the standard API. I wonder that it 
might be related to the move from package-list to element-list.

I have seen this too: javadoc builds break with a warning message and then the 
build fails. Is this what you are seeing? The problem only occurs if you have 
mixed javadoc builds. It helps to run "ant clean" from the root folder and 
start from scratch.

> Bump minimum Java version requirement to 11
> ---
>
> Key: LUCENE-8738
> URL: https://issues.apache.org/jira/browse/LUCENE-8738
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: general/build
>Reporter: Adrien Grand
>Priority: Minor
>  Labels: Java11
> Fix For: master (9.0)
>
>
> See vote thread for reference: https://markmail.org/message/q6ubdycqscpl43aq.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8738) Bump minimum Java version requirement to 11

2019-04-09 Thread Uwe Schindler (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8738?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16813428#comment-16813428
 ] 

Uwe Schindler commented on LUCENE-8738:
---

(this issue always existed also in Java 8 vs. Java 11 on current master branch)

> Bump minimum Java version requirement to 11
> ---
>
> Key: LUCENE-8738
> URL: https://issues.apache.org/jira/browse/LUCENE-8738
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: general/build
>Reporter: Adrien Grand
>Priority: Minor
>  Labels: Java11
> Fix For: master (9.0)
>
>
> See vote thread for reference: https://markmail.org/message/q6ubdycqscpl43aq.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12120) New plugin type AuditLoggerPlugin

2019-04-09 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12120?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16813424#comment-16813424
 ] 

ASF subversion and git services commented on SOLR-12120:


Commit 77a4604c39e192edd785d30c905c4b604b67a4e2 in lucene-solr's branch 
refs/heads/branch_8x from Jan Høydahl
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=77a4604 ]

SOLR-12120: Do not fail the main request if synchronous auditing fails, log 
ERROR
Document that sub classes should call super.close() or a new 
waitForQueueToDrain() before closing itself

(cherry picked from commit 3e628b562cb57349503e8ccdfe4909aedcbe78b2)


> New plugin type AuditLoggerPlugin
> -
>
> Key: SOLR-12120
> URL: https://issues.apache.org/jira/browse/SOLR-12120
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: security
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
>Priority: Major
> Fix For: 8.1
>
>  Time Spent: 3h 10m
>  Remaining Estimate: 0h
>
> Solr needs a well defined plugin point to implement audit logging 
> functionality, which is independent from whatever {{AuthenticationPlugin}} or 
> {{AuthorizationPlugin}} are in use at the time.
> It seems reasonable to introduce a new plugin type {{AuditLoggerPlugin}}. It 
> could be configured in solr.xml or it could be a third type of plugin defined 
> in {{security.json}}, i.e.
> {code:java}
> {
>   "authentication" : { "class" : ... },
>   "authorization" : { "class" : ... },
>   "auditlogging" : { "class" : "x.y.MyAuditLogger", ... }
> }
> {code}
> We could then instrument SolrDispatchFilter to the audit plugin with an 
> AuditEvent at important points such as successful authentication:
> {code:java}
> auditLoggerPlugin.audit(new SolrAuditEvent(EventType.AUTHENTICATED, 
> request)); 
> {code}
>  We will mark the impl as {{@lucene.experimental}} in the first release to 
> let it settle as people write their own plugin implementations.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12120) New plugin type AuditLoggerPlugin

2019-04-09 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12120?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16813418#comment-16813418
 ] 

ASF subversion and git services commented on SOLR-12120:


Commit 3e628b562cb57349503e8ccdfe4909aedcbe78b2 in lucene-solr's branch 
refs/heads/master from Jan Høydahl
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=3e628b5 ]

SOLR-12120: Do not fail the main request if synchronous auditing fails, log 
ERROR
Document that sub classes should call super.close() or a new 
waitForQueueToDrain() before closing itself


> New plugin type AuditLoggerPlugin
> -
>
> Key: SOLR-12120
> URL: https://issues.apache.org/jira/browse/SOLR-12120
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: security
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
>Priority: Major
> Fix For: 8.1
>
>  Time Spent: 3h 10m
>  Remaining Estimate: 0h
>
> Solr needs a well defined plugin point to implement audit logging 
> functionality, which is independent from whatever {{AuthenticationPlugin}} or 
> {{AuthorizationPlugin}} are in use at the time.
> It seems reasonable to introduce a new plugin type {{AuditLoggerPlugin}}. It 
> could be configured in solr.xml or it could be a third type of plugin defined 
> in {{security.json}}, i.e.
> {code:java}
> {
>   "authentication" : { "class" : ... },
>   "authorization" : { "class" : ... },
>   "auditlogging" : { "class" : "x.y.MyAuditLogger", ... }
> }
> {code}
> We could then instrument SolrDispatchFilter to the audit plugin with an 
> AuditEvent at important points such as successful authentication:
> {code:java}
> auditLoggerPlugin.audit(new SolrAuditEvent(EventType.AUTHENTICATED, 
> request)); 
> {code}
>  We will mark the impl as {{@lucene.experimental}} in the first release to 
> let it settle as people write their own plugin implementations.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Welcome Tomoko Uchida as Lucene/Solr committer

2019-04-09 Thread Alexandre Rafalovitch
Welcome Tomoko,

Watching the Luke issue, I really felt your sense of patience and
collaboration. Awesome to have you as a committer.

Regards,
   Alex.

On Mon, 8 Apr 2019 at 11:21, Uwe Schindler  wrote:
>
> Hi all,
>
> Please join me in welcoming Tomoko Uchida as the latest Lucene/Solr committer!
>
> She has been working on https://issues.apache.org/jira/browse/LUCENE-2562 for 
> several years with awesome progress and finally we got the fantastic Luke as 
> a branch on ASF JIRA: 
> https://gitbox.apache.org/repos/asf?p=lucene-solr.git;a=shortlog;h=refs/heads/jira/lucene-2562-luke-swing-3
> Looking forward to the first release of Apache Lucene 8.1 with Luke bundled 
> in the distribution. I will take care of merging it to master and 8.x 
> branches together with her once she got the ASF account.
>
> Tomoko also helped with the Japanese and Korean Analyzers.
>
> Congratulations and Welcome, Tomoko! Tomoko, it's traditional for you to 
> introduce yourself with a brief bio.
>
> Uwe & Robert (who nominated Tomoko)
>
> -
> Uwe Schindler
> Achterdiek 19, D-28357 Bremen
> https://www.thetaphi.de
> eMail: u...@thetaphi.de
>
>
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-13385) Upgrade dependency jackson-databind in solr package contrib/prometheus-exporter/lib

2019-04-09 Thread Kevin Risden (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13385?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden resolved SOLR-13385.
-
   Resolution: Duplicate
 Assignee: Kevin Risden
Fix Version/s: master (9.0)
   8.1

Duplicate of SOLR-13112

> Upgrade dependency jackson-databind in solr package 
> contrib/prometheus-exporter/lib
> ---
>
> Key: SOLR-13385
> URL: https://issues.apache.org/jira/browse/SOLR-13385
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.6, 8.0.1
>Reporter: DW
>Assignee: Kevin Risden
>Priority: Major
> Fix For: 8.1, master (9.0)
>
>
> The current used jackson-databind in 
> /contrib/prometheus-exporter/lib/jackson-databind-2.9.6.jar has known 
> Security Vulnerabilities record. Please upgrade to 2.9.8+. Thanks.
>  
> Please let me know if you would like detailed CVE records.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13368) SchemaManager failures when processing schema update requests

2019-04-09 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16813348#comment-16813348
 ] 

ASF subversion and git services commented on SOLR-13368:


Commit 0859be134db8abb5d7cf68dd49baa51acf8d0c44 in lucene-solr's branch 
refs/heads/jira/LUCENE-8738 from Andrzej Bialecki
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=0859be1 ]

SOLR-13368: Tentative fix for a race condition in managed schema initialization.


> SchemaManager failures when processing schema update requests
> -
>
> Key: SOLR-13368
> URL: https://issues.apache.org/jira/browse/SOLR-13368
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 8.0, 8.1, master (9.0)
>Reporter: Andrzej Bialecki 
>Assignee: Andrzej Bialecki 
>Priority: Major
> Fix For: 8.1, master (9.0)
>
> Attachments: SOLR-13368.patch
>
>
> When sending a schema update requests occasionally {{SchemaManager}} produces 
> this error:
> {code:java}
> [junit4] 2> 508295 ERROR (qtp1376110895-5901) [n:127.0.0.1:48080_solr 
> c:.system s:shard1 r:core_node2 x:.system_shard1_replica_n1] 
> o.a.s.h.RequestHandlerBase org.apache.solr.common.SolrException: Error 
> reading input String Can't find resource 'schema.xml' in classpath or 
> '/configs/.system', 
> cwd=/var/lib/jenkins/jobs/Lucene-Solr-tests-master/workspace/solr/build/solr-core/test/J5
> [junit4] 2> at 
> org.apache.solr.handler.SchemaHandler.handleRequestBody(SchemaHandler.java:94)
> [junit4] 2> at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:199)
> [junit4] 2> at org.apache.solr.core.SolrCore.execute(SolrCore.java:2566)
> [junit4] 2> at 
> org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:711)
> [junit4] 2> at 
> org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:516)
> [junit4] 2> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:394)
> [junit4] 2> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:340)
> [junit4] 2> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1610)
> [junit4] 2> at 
> org.apache.solr.client.solrj.embedded.JettySolrRunner$DebugFilter.doFilter(JettySolrRunner.java:165)
> [junit4] 2> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1610)
> [junit4] 2> at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:540)
> [junit4] 2> at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:255)
> [junit4] 2> at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1588)
> [junit4] 2> at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:255)
> [junit4] 2> at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1345)
> [junit4] 2> at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:203)
> [junit4] 2> at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:480)
> [junit4] 2> at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1557)
> [junit4] 2> at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:201)
> [junit4] 2> at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1247)
> [junit4] 2> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:144)
> [junit4] 2> at 
> org.eclipse.jetty.server.handler.gzip.GzipHandler.handle(GzipHandler.java:703)
> [junit4] 2> at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
> [junit4] 2> at org.eclipse.jetty.server.Server.handle(Server.java:502)
> [junit4] 2> at 
> org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:364)
> [junit4] 2> at 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:260)
> [junit4] 2> at 
> org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:305)
> [junit4] 2> at 
> org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:103)
> [junit4] 2> at 
> org.eclipse.jetty.io.ChannelEndPoint$2.run(ChannelEndPoint.java:118)
> [junit4] 2> at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:765)
> [junit4] 2> at 
> org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:683)
> [junit4] 2> at java.lang.Thread.run(Thread.java:748)
> [junit4] 2> Caused by: org.apache.solr.core.SolrResourceNotFoundException: 
> Can't find resource 'schema.xml' in classpath or '/configs/.system', 
> 

[jira] [Commented] (SOLR-13376) Multi-node race condition to create/remove nodeLost markers

2019-04-09 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13376?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16813350#comment-16813350
 ] 

ASF subversion and git services commented on SOLR-13376:


Commit ab55b6386b701ec91afb92b269decd081f398ca8 in lucene-solr's branch 
refs/heads/jira/LUCENE-8738 from Chris M. Hostetter
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=ab55b63 ]

SOLR-13376: Disable test until it can be re-written to reflect actual expected 
behavior of how/when node markers will be cleaned up


> Multi-node race condition to create/remove nodeLost markers
> ---
>
> Key: SOLR-13376
> URL: https://issues.apache.org/jira/browse/SOLR-13376
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>Assignee: Andrzej Bialecki 
>Priority: Major
>
> NodeMarkersRegistrationTest.testNodeMarkersRegistration is frequently failing 
> on jenkins builds in the same spot, with a similar looking logs.
> Although i haven't been able to reproduce these failures locally, I am fairly 
> confident that the problem is a race condition bug that exists between 
> when/how a new Overseer will process & clean up "nodeLost" marker's in ZK, 
> with how other nodes may (mistakenly) re-create those markers in their 
> liveNodes listener.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12809) Document recommended Java/Solr combinations

2019-04-09 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12809?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16813347#comment-16813347
 ] 

ASF subversion and git services commented on SOLR-12809:


Commit 7602f3c78eecc04f3b7beb511c43b6f276166874 in lucene-solr's branch 
refs/heads/jira/LUCENE-8738 from Erick Erickson
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=7602f3c ]

SOLR-12809: Document recommended Java/Solr combinations


> Document recommended Java/Solr combinations
> ---
>
> Key: SOLR-12809
> URL: https://issues.apache.org/jira/browse/SOLR-12809
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Major
> Fix For: 8.1, master (9.0)
>
> Attachments: SOLR-12809.patch, SOLR-12809.patch, SOLR-12809.patch, 
> SOLR-12809.patch, SOLR-12809.patch, SolrSystemRequirements.pdf, 
> SolrSystemRequirements.pdf, SolrSystemRequirements.pdf, 
> SolrSystemRequirements.pdf
>
>
> JDK 8 will be EOL early next year (except for "premier support"). JDK 9, 10 
> and 11 all have issues for Solr and Lucene IIUC.
> Also IIUC Oracle will start requiring commercial licenses for 11.
> This Jira is to discuss what we want to do going forward. Among the topics:
>  * Skip straight to 11, skipping 9 and 10? If so how to resolve current 
> issues?
>  * How much emphasis on OpenJDK vs. Oracle's version
>  * What to do about dependencies that don't work (for whatever reason) with 
> the version of Java we go with?
>  * ???
> This may turn into an umbrella Jira with sub-tasks of course. Since JDK 11 
> has had a GA release, I'd also like to have a record of where the current 
> issues are to refer people to.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13369) TriLevelCompositeIdRoutingTest failure: same route prefix mapped to multiple shards

2019-04-09 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13369?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16813351#comment-16813351
 ] 

ASF subversion and git services commented on SOLR-13369:


Commit 2533fd1edeb5cccfd835dadcd288f98722b5 in lucene-solr's branch 
refs/heads/jira/LUCENE-8738 from Chris M. Hostetter
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=2533fd1 ]

SOLR-13369: disable TriLevelCompositeIdRoutingTest until someone who actually 
understands how the /bits option is *supposed* to work can assess it to 
determine if the test is flawed or if there is a bug in the underlying 
CompositeIdRouter


> TriLevelCompositeIdRoutingTest failure: same route prefix mapped to multiple 
> shards
> --
>
> Key: SOLR-13369
> URL: https://issues.apache.org/jira/browse/SOLR-13369
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>Priority: Major
> Attachments: 
> TriLevelCompositeIdRoutingTest-failure-with-debug-log.txt, 
> thetaphi_Lucene-Solr-8.x-Linux_342.log.txt
>
>
> thetaphi's 8x jenkins job just identified a reproducing seed that causes 
> TriLevelCompositeIdRoutingTest to fail after detecting 2 docs with matching 
> route prefixes on different shards...
> {noformat}
>[junit4]   2> NOTE: reproduce with: ant test  
> -Dtestcase=TriLevelCompositeIdRoutingTest -Dtests.method=test 
> -Dtests.seed=A6B6F0104FE6018F -Dtests.multiplier=3 -Dtests.slow=true 
> -Dtests.locale=sr-Latn -Dtests.timezone=Pacific/Tongatapu 
> -Dtests.asserts=true -Dtests.file.encoding=US-ASCII
>[junit4] FAILURE 9.38s J0 | TriLevelCompositeIdRoutingTest.test <<<
>[junit4]> Throwable #1: org.junit.ComparisonFailure: routePrefix 
> app9/2!user32 found in multiple shards expected: but was:
>[junit4]>at 
> __randomizedtesting.SeedInfo.seed([A6B6F0104FE6018F:2EE2CFCAE11A6C77]:0)
>[junit4]>at 
> org.apache.solr.cloud.TriLevelCompositeIdRoutingTest.test(TriLevelCompositeIdRoutingTest.java:122)
> {noformat}
> It's possible this is just a bug I introduced in SOLR-13210 due to a 
> misunderstanding in how routePrefixes that use a bit mask (i.e. {{/2}} in the 
> assertion failure) are expected to work -- but I thought I had that squared 
> away based on shalin's feedback in SOLR-13210.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8738) Bump minimum Java version requirement to 11

2019-04-09 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8738?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16813352#comment-16813352
 ] 

ASF subversion and git services commented on LUCENE-8738:
-

Commit e410a35e8cd49415ee4460266afa3d1eba65371e in lucene-solr's branch 
refs/heads/jira/LUCENE-8738 from Adrien Grand
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=e410a35 ]

Merge branch 'master' into jira/LUCENE-8738


> Bump minimum Java version requirement to 11
> ---
>
> Key: LUCENE-8738
> URL: https://issues.apache.org/jira/browse/LUCENE-8738
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: general/build
>Reporter: Adrien Grand
>Priority: Minor
>  Labels: Java11
> Fix For: master (9.0)
>
>
> See vote thread for reference: https://markmail.org/message/q6ubdycqscpl43aq.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8477) Improve handling of inner disjunctions in intervals

2019-04-09 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8477?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16813349#comment-16813349
 ] 

ASF subversion and git services commented on LUCENE-8477:
-

Commit c1222b57e940f108cb3f5b8f720a910a5fb35126 in lucene-solr's branch 
refs/heads/jira/LUCENE-8738 from Jim Ferenczi
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=c1222b5 ]

LUCENE-8477: Restore public ctr for FilteredIntervalsSource


> Improve handling of inner disjunctions in intervals
> ---
>
> Key: LUCENE-8477
> URL: https://issues.apache.org/jira/browse/LUCENE-8477
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Alan Woodward
>Assignee: Alan Woodward
>Priority: Major
> Fix For: 8.1
>
> Attachments: LUCENE-8477.patch, LUCENE-8477.patch, LUCENE-8477.patch, 
> LUCENE-8477.patch
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> The current implementation of the disjunction interval produced by 
> {{Intervals.or}} is a direct implementation of the OR operator from the Vigna 
> paper.  This produces minimal intervals, meaning that (a) is preferred over 
> (a b), and (b) also over (a b).  This has advantages when it comes to 
> counting intervals for scoring, but also has drawbacks when it comes to 
> matching.  For example, a phrase query for ((a OR (a b)) BLOCK (c)) will not 
> match the document (a b c), because (a) will be preferred over (a b), and (a 
> c) does not match.
> This ticket is to discuss the best way of dealing with disjunctions.
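
To make the matching example above concrete, here is a rough sketch of how such a 
query could be expressed with the intervals API. It is not taken from the attached 
patches, and the package name and exact factory methods are assumptions that may 
differ between releases.
{code:java}
// Rough sketch only; not taken from the LUCENE-8477 patches.
// Assumption: the intervals classes live in org.apache.lucene.search.intervals
// in this release line (the package has moved between versions).
import org.apache.lucene.search.Query;
import org.apache.lucene.search.intervals.IntervalQuery;
import org.apache.lucene.search.intervals.Intervals;
import org.apache.lucene.search.intervals.IntervalsSource;

public class InnerDisjunctionSketch {

  /** Builds ((a OR (a b)) BLOCK (c)) over the given field. */
  public static Query blockQuery(String field) {
    // With minimal-interval semantics the inner disjunction prefers (a) over
    // (a b), so the enclosing block can fail to match a document "a b c".
    IntervalsSource inner = Intervals.or(
        Intervals.term("a"),
        Intervals.phrase(Intervals.term("a"), Intervals.term("b")));
    return new IntervalQuery(field, Intervals.phrase(inner, Intervals.term("c")));
  }
}
{code}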



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8738) Bump minimum Java version requirement to 11

2019-04-09 Thread Adrien Grand (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8738?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16813346#comment-16813346
 ] 

Adrien Grand commented on LUCENE-8738:
--

There seem to be issues with links to the standard API. I wonder whether it might 
be related to the move from package-list to element-list.

> Bump minimum Java version requirement to 11
> ---
>
> Key: LUCENE-8738
> URL: https://issues.apache.org/jira/browse/LUCENE-8738
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: general/build
>Reporter: Adrien Grand
>Priority: Minor
>  Labels: Java11
> Fix For: master (9.0)
>
>
> See vote thread for reference: https://markmail.org/message/q6ubdycqscpl43aq.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Welcome Tomoko Uchida as Lucene/Solr committer

2019-04-09 Thread Jason Gerlowski
Welcome Tomoko and congratulations!

On Tue, Apr 9, 2019 at 6:34 AM jim ferenczi  wrote:
>
> Welcome Tomoko!
>
> Le mar. 9 avr. 2019 à 12:19, Ishan Chattopadhyaya  
> a écrit :
>>
>> Yokoso, Tomoko-san! Congratulations..
>>
>> On Tue, Apr 9, 2019 at 2:28 PM Christine Poerschke (BLOOMBERG/ LONDON)
>>  wrote:
>> >
>> > Welcome Tomoko!
>> >
>> > From: dev@lucene.apache.org At: 04/08/19 16:20:59
>> > To: dev@lucene.apache.org, tomoko.uchida.1...@gmail.com
>> > Subject: Welcome Tomoko Uchida as Lucene/Solr committer
>> >
>> > Hi all,
>> >
>> > Please join me in welcoming Tomoko Uchida as the latest Lucene/Solr 
>> > committer!
>> >
>> > She has been working on https://issues.apache.org/jira/browse/LUCENE-2562 
>> > for
>> > several years with awesome progress and finally we got the fantastic Luke 
>> > as a
>> > branch on ASF JIRA:
>> > https://gitbox.apache.org/repos/asf?p=lucene-solr.git;a=shortlog;h=refs/heads/ji
>> > ra/lucene-2562-luke-swing-3
>> > Looking forward to the first release of Apache Lucene 8.1 with Luke 
>> > bundled in
>> > the distribution. I will take care of merging it to master and 8.x branches
>> > together with her once she got the ASF account.
>> >
>> > Tomoko also helped with the Japanese and Korean Analyzers.
>> >
>> > Congratulations and Welcome, Tomoko! Tomoko, it's traditional for you to
>> > introduce yourself with a brief bio.
>> >
>> > Uwe & Robert (who nominated Tomoko)
>> >
>> > -
>> > Uwe Schindler
>> > Achterdiek 19, D-28357 Bremen
>> > https://www.thetaphi.de
>> > eMail: u...@thetaphi.de
>> >
>> >
>> >
>> > -
>> > To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>> > For additional commands, e-mail: dev-h...@lucene.apache.org
>> >
>> >
>>
>> -
>> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>> For additional commands, e-mail: dev-h...@lucene.apache.org
>>

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-NightlyTests-master - Build # 1815 - Unstable

2019-04-09 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-master/1815/

2 tests failed.
FAILED:  
org.apache.solr.cloud.autoscaling.HdfsAutoAddReplicasIntegrationTest.testSimple

Error Message:
Waiting for collection testSimple2 Timeout waiting to see state for 
collection=testSimple2 
:DocCollection(testSimple2//collections/testSimple2/state.json/24)={   
"pullReplicas":"0",   "replicationFactor":"2",   "shards":{ "shard1":{  
 "range":"8000-",   "state":"active",   "replicas":{
 "core_node3":{   
"dataDir":"hdfs://localhost:37247/solr_hdfs_home/testSimple2/core_node3/data/", 
  "base_url":"https://127.0.0.1:45622/solr;,   
"node_name":"127.0.0.1:45622_solr",   "type":"NRT",   
"force_set_state":"false",   
"ulogDir":"hdfs://localhost:37247/solr_hdfs_home/testSimple2/core_node3/data/tlog",
   "core":"testSimple2_shard1_replica_n1",   
"shared_storage":"true",   "state":"active",   
"leader":"true"}, "core_node5":{   
"dataDir":"hdfs://localhost:37247/solr_hdfs_home/testSimple2/core_node5/data/", 
  "base_url":"https://127.0.0.1:36867/solr;,   
"node_name":"127.0.0.1:36867_solr",   "type":"NRT",   
"force_set_state":"false",   
"ulogDir":"hdfs://localhost:37247/solr_hdfs_home/testSimple2/core_node5/data/tlog",
   "core":"testSimple2_shard1_replica_n2",   
"shared_storage":"true",   "state":"down"}}}, "shard2":{   
"range":"0-7fff",   "state":"active",   "replicas":{ 
"core_node7":{   
"dataDir":"hdfs://localhost:37247/solr_hdfs_home/testSimple2/core_node7/data/", 
  "base_url":"https://127.0.0.1:45622/solr;,   
"node_name":"127.0.0.1:45622_solr",   "type":"NRT",   
"force_set_state":"false",   
"ulogDir":"hdfs://localhost:37247/solr_hdfs_home/testSimple2/core_node7/data/tlog",
   "core":"testSimple2_shard2_replica_n4",   
"shared_storage":"true",   "state":"active",   
"leader":"true"}, "core_node8":{   
"dataDir":"hdfs://localhost:37247/solr_hdfs_home/testSimple2/core_node8/data/", 
  "base_url":"https://127.0.0.1:36867/solr;,   
"node_name":"127.0.0.1:36867_solr",   "type":"NRT",   
"force_set_state":"false",   
"ulogDir":"hdfs://localhost:37247/solr_hdfs_home/testSimple2/core_node8/data/tlog",
   "core":"testSimple2_shard2_replica_n6",   
"shared_storage":"true",   "state":"down",   
"router":{"name":"compositeId"},   "maxShardsPerNode":"2",   
"autoAddReplicas":"true",   "nrtReplicas":"2",   "tlogReplicas":"0"} Live 
Nodes: [127.0.0.1:45622_solr, 127.0.0.1:46443_solr] Last available state: 
DocCollection(testSimple2//collections/testSimple2/state.json/24)={   
"pullReplicas":"0",   "replicationFactor":"2",   "shards":{ "shard1":{  
 "range":"8000-",   "state":"active",   "replicas":{
 "core_node3":{   
"dataDir":"hdfs://localhost:37247/solr_hdfs_home/testSimple2/core_node3/data/", 
  "base_url":"https://127.0.0.1:45622/solr;,   
"node_name":"127.0.0.1:45622_solr",   "type":"NRT",   
"force_set_state":"false",   
"ulogDir":"hdfs://localhost:37247/solr_hdfs_home/testSimple2/core_node3/data/tlog",
   "core":"testSimple2_shard1_replica_n1",   
"shared_storage":"true",   "state":"active",   
"leader":"true"}, "core_node5":{   
"dataDir":"hdfs://localhost:37247/solr_hdfs_home/testSimple2/core_node5/data/", 
  "base_url":"https://127.0.0.1:36867/solr;,   
"node_name":"127.0.0.1:36867_solr",   "type":"NRT",   
"force_set_state":"false",   
"ulogDir":"hdfs://localhost:37247/solr_hdfs_home/testSimple2/core_node5/data/tlog",
   "core":"testSimple2_shard1_replica_n2",   
"shared_storage":"true",   "state":"down"}}}, "shard2":{   
"range":"0-7fff",   "state":"active",   "replicas":{ 
"core_node7":{   
"dataDir":"hdfs://localhost:37247/solr_hdfs_home/testSimple2/core_node7/data/", 
  "base_url":"https://127.0.0.1:45622/solr;,   
"node_name":"127.0.0.1:45622_solr",   "type":"NRT",   
"force_set_state":"false",   
"ulogDir":"hdfs://localhost:37247/solr_hdfs_home/testSimple2/core_node7/data/tlog",
   "core":"testSimple2_shard2_replica_n4",   
"shared_storage":"true",   "state":"active",   
"leader":"true"}, "core_node8":{   
"dataDir":"hdfs://localhost:37247/solr_hdfs_home/testSimple2/core_node8/data/", 
  "base_url":"https://127.0.0.1:36867/solr;,   
"node_name":"127.0.0.1:36867_solr",   "type":"NRT",   
"force_set_state":"false",   

[jira] [Commented] (LUCENE-2562) Make Luke a Lucene/Solr Module

2019-04-09 Thread Uwe Schindler (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-2562?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16813298#comment-16813298
 ] 

Uwe Schindler commented on LUCENE-2562:
---

I also gave you more rights in Apache JIRA and added you as the assignee for 
this issue.

> Make Luke a Lucene/Solr Module
> --
>
> Key: LUCENE-2562
> URL: https://issues.apache.org/jira/browse/LUCENE-2562
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Mark Miller
>Assignee: Tomoko Uchida
>Priority: Major
>  Labels: gsoc2014
> Attachments: LUCENE-2562-Ivy.patch, LUCENE-2562-Ivy.patch, 
> LUCENE-2562-Ivy.patch, LUCENE-2562-ivy.patch, LUCENE-2562.patch, 
> LUCENE-2562.patch, LUCENE-2562.patch, Luke-ALE-1.png, Luke-ALE-2.png, 
> Luke-ALE-3.png, Luke-ALE-4.png, Luke-ALE-5.png, luke-javafx1.png, 
> luke-javafx2.png, luke-javafx3.png, luke1.jpg, luke2.jpg, luke3.jpg, 
> lukeALE-documents.png, screenshot-1.png, スクリーンショット 2018-11-05 9.19.47.png
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> see
> "RE: Luke - in need of maintainer": 
> http://markmail.org/message/m4gsto7giltvrpuf
> "Web-based Luke": http://markmail.org/message/4xwps7p7ifltme5q
> I think it would be great if there was a version of Luke that always worked 
> with trunk - and it would also be great if it was easier to match Luke jars 
> with Lucene versions.
> While I'd like to get GWT Luke into the mix as well, I think the easiest 
> starting point is to straight port Luke to another UI toolkit before 
> abstracting out DTO objects that both GWT Luke and Pivot Luke could share.
> I've started slowly converting Luke's use of thinlet to Apache Pivot. I 
> haven't/don't have a lot of time for this at the moment, but I've plugged 
> away here and there over the past week or two. There is still a *lot* to do.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (LUCENE-2562) Make Luke a Lucene/Solr Module

2019-04-09 Thread Uwe Schindler (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-2562?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16813296#comment-16813296
 ] 

Uwe Schindler edited comment on LUCENE-2562 at 4/9/19 11:50 AM:


Hi [~Tomoko Uchida],
I prepared the branch to be squash-merged. Once you have your ASF account, I 
would propose that you do the last step on your own. Here are the steps to do it:

- Checkout the branch from ASF gitbox (or use your own one from Github), does 
not matter. Be sure to pull latest updates!!!
- Make sure to configure the GIT checkout to use your {{@apache}} mail address 
for commits (it's also wise to add it as an alias to github, so Github and ASF 
bots can identify you as the same person). It may be wise to make the mail 
address the default for all Lucene checkouts you maintain. Also it's wise to 
have "origin" set to ASF gitbox and Github as alternative remote (e.g., 
"mocobeta")
- Switch back to master. Be sure to pull latest updates from ASF gitbox!!!
- Run: {{git merge --squash jira/lucene-2562-luke-swing-3}}
- Commit the stuff with a useful commit message: {{git commit -m 'LUCENE-2562: 
Add Luke as a Lucene module'}}
- Push changes
- Switch to branch "branch_8x" (the 8.x development branch)
- Cherry-pick the previous commit from master branch, or alternatively do the 
same squashing merge.
- Push changes

Good luck - I am here for help.
Uwe


was (Author: thetaphi):
Hi [~Tomoko Uchida],
I prepared the branch to be squash-merged. Once you have your ASF account, I 
would propose that you do the last step on your own. Here are the steps to do it:

- Checkout the branch from ASF gitbox (or use your own one from Github), does 
not matter. Be sure to pull latest updates!!!
- Make sure to configure the GIT checkout to use your {{@apache}} mail address 
for commits (it's also wise to add it as an alias to github, so Github and ASF 
bots can identify you as the same person). It may be wise to make the mail 
address the default for all Lucene checkouts you maintain. Also it's wise to 
have "origin" set to ASF gitbox and Github as alternative remote (e.g., 
"mocobeta")
- Switch back to master
- Run: {{git merge --squash jira/lucene-2562-luke-swing-3}}
- Commit the stuff with a useful commit message: {{git commit -m 'LUCENE-2562: 
Add Luke as a Lucene module'}}
- Push changes
- Switch to branch "branch_8x" (the 8.x development branch)
- Cherry-pick the previous commit from master branch, or alternatively do the 
same squashing merge.
- Push changes

Good luck - I am here for help.
Uwe

> Make Luke a Lucene/Solr Module
> --
>
> Key: LUCENE-2562
> URL: https://issues.apache.org/jira/browse/LUCENE-2562
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Mark Miller
>Assignee: Tomoko Uchida
>Priority: Major
>  Labels: gsoc2014
> Attachments: LUCENE-2562-Ivy.patch, LUCENE-2562-Ivy.patch, 
> LUCENE-2562-Ivy.patch, LUCENE-2562-ivy.patch, LUCENE-2562.patch, 
> LUCENE-2562.patch, LUCENE-2562.patch, Luke-ALE-1.png, Luke-ALE-2.png, 
> Luke-ALE-3.png, Luke-ALE-4.png, Luke-ALE-5.png, luke-javafx1.png, 
> luke-javafx2.png, luke-javafx3.png, luke1.jpg, luke2.jpg, luke3.jpg, 
> lukeALE-documents.png, screenshot-1.png, スクリーンショット 2018-11-05 9.19.47.png
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> see
> "RE: Luke - in need of maintainer": 
> http://markmail.org/message/m4gsto7giltvrpuf
> "Web-based Luke": http://markmail.org/message/4xwps7p7ifltme5q
> I think it would be great if there was a version of Luke that always worked 
> with trunk - and it would also be great if it was easier to match Luke jars 
> with Lucene versions.
> While I'd like to get GWT Luke into the mix as well, I think the easiest 
> starting point is to straight port Luke to another UI toolkit before 
> abstracting out DTO objects that both GWT Luke and Pivot Luke could share.
> I've started slowly converting Luke's use of thinlet to Apache Pivot. I 
> haven't/don't have a lot of time for this at the moment, but I've plugged 
> away here and there over the past week or two. There is still a *lot* to do.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (LUCENE-2562) Make Luke a Lucene/Solr Module

2019-04-09 Thread Uwe Schindler (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-2562?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16813296#comment-16813296
 ] 

Uwe Schindler edited comment on LUCENE-2562 at 4/9/19 11:49 AM:


Hi [~Tomoko Uchida],
I prepared the branch to be squash-merged. Once you have your ASF account, I 
would propose that you do the last step on your own. Here are the steps to do it:

- Checkout the branch from ASF gitbox (or use your own one from Github), does 
not matter. Be sure to pull latest updates!!!
- Make sure to configure the GIT checkout to use your {{@apache}} mail address 
for commits (it's also wise to add it as an alias to github, so Github and ASF 
bots can identify you as the same person). It may be wise to make the mail 
address the default for all Lucene checkouts you maintain. Also it's wise to 
have "origin" set to ASF gitbox and Github as alternative remote (e.g., 
"mocobeta")
- Switch back to master
- Run: {{git merge --squash jira/lucene-2562-luke-swing-3}}
- Commit the stuff with a useful commit message: {{git commit -m 'LUCENE-2562: 
Add Luke as a Lucene module'}}
- Push changes
- Switch to branch "branch_8x" (the 8.x development branch)
- Cherry-pick the previous commit from master branch, or alternatively do the 
same squashing merge.
- Push changes

Good luck - I am here for help.
Uwe


was (Author: thetaphi):
Hi [~Tomoko Uchida],
I prepared the branch to be squash-merged. Once you have your ASF account, I 
would propose that you do the last step on your own. Here are the steps to do it:

- Checkout the branch from ASF gitbox (or use your own one from Github), does 
not matter.
- Make sure to configure the GIT checkout to use your {{@apache}} mail address 
for commits (it's also wise to add it as an alias to github, so Github and ASF 
bots can identify you as the same person). It may be wise to make the mail 
address the default for all Lucene checkouts you maintain. Also it's wise to 
have "origin" set to ASF gitbox and Github as alternative remote (e.g., 
"mocobeta")
- Switch back to master
- Run: {{git merge --squash jira/lucene-2562-luke-swing-3}}
- Commit the stuff with a useful commit message: {{git commit -m 'LUCENE-2562: 
Add Luke as a Lucene module'}}
- Push changes
- Switch to branch "branch_8x" (the 8.x development branch)
- Cherry-pick the previous commit from master branch, or alternatively do the 
same squashing merge.
- Push changes

Good luck - I am here for help.
Uwe

> Make Luke a Lucene/Solr Module
> --
>
> Key: LUCENE-2562
> URL: https://issues.apache.org/jira/browse/LUCENE-2562
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Mark Miller
>Assignee: Tomoko Uchida
>Priority: Major
>  Labels: gsoc2014
> Attachments: LUCENE-2562-Ivy.patch, LUCENE-2562-Ivy.patch, 
> LUCENE-2562-Ivy.patch, LUCENE-2562-ivy.patch, LUCENE-2562.patch, 
> LUCENE-2562.patch, LUCENE-2562.patch, Luke-ALE-1.png, Luke-ALE-2.png, 
> Luke-ALE-3.png, Luke-ALE-4.png, Luke-ALE-5.png, luke-javafx1.png, 
> luke-javafx2.png, luke-javafx3.png, luke1.jpg, luke2.jpg, luke3.jpg, 
> lukeALE-documents.png, screenshot-1.png, スクリーンショット 2018-11-05 9.19.47.png
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> see
> "RE: Luke - in need of maintainer": 
> http://markmail.org/message/m4gsto7giltvrpuf
> "Web-based Luke": http://markmail.org/message/4xwps7p7ifltme5q
> I think it would be great if there was a version of Luke that always worked 
> with trunk - and it would also be great if it was easier to match Luke jars 
> with Lucene versions.
> While I'd like to get GWT Luke into the mix as well, I think the easiest 
> starting point is to straight port Luke to another UI toolkit before 
> abstracting out DTO objects that both GWT Luke and Pivot Luke could share.
> I've started slowly converting Luke's use of thinlet to Apache Pivot. I 
> haven't/don't have a lot of time for this at the moment, but I've plugged 
> away here and there over the past week or two. There is still a *lot* to do.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-2562) Make Luke a Lucene/Solr Module

2019-04-09 Thread Uwe Schindler (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-2562?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16813296#comment-16813296
 ] 

Uwe Schindler commented on LUCENE-2562:
---

Hi [~Tomoko Uchida],
I prepared the branch to be squash-merged. Once you have your ASF account, I 
would propose that you do the last step on your own. Here are the steps to do it:

- Checkout the branch from ASF gitbox (or use your own one from Github), does 
not matter.
- Make sure to configure the GIT checkout to use your {{@apache}} mail address 
for commits (it's also wise to add it as an alias to github, so Github and ASF 
bots can identify you as the same person). It may be wise to make the mail 
address the default for all Lucene checkouts you maintain. Also it's wise to 
have "origin" set to ASF gitbox and Github as alternative remote (e.g., 
"mocobeta")
- Switch back to master
- Run: {{git merge --squash jira/lucene-2562-luke-swing-3}}
- Commit the stuff with a useful commit message: {{git commit -m 'LUCENE-2562: 
Add Luke as a Lucene module'}}
- Push changes
- Switch to branch "branch_8x" (the 8.x development branch)
- Cherry-pick the previous commit from master branch, or alternatively do the 
same squashing merge.
- Push changes

Good luck - I am here for help.
Uwe

> Make Luke a Lucene/Solr Module
> --
>
> Key: LUCENE-2562
> URL: https://issues.apache.org/jira/browse/LUCENE-2562
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Mark Miller
>Assignee: Tomoko Uchida
>Priority: Major
>  Labels: gsoc2014
> Attachments: LUCENE-2562-Ivy.patch, LUCENE-2562-Ivy.patch, 
> LUCENE-2562-Ivy.patch, LUCENE-2562-ivy.patch, LUCENE-2562.patch, 
> LUCENE-2562.patch, LUCENE-2562.patch, Luke-ALE-1.png, Luke-ALE-2.png, 
> Luke-ALE-3.png, Luke-ALE-4.png, Luke-ALE-5.png, luke-javafx1.png, 
> luke-javafx2.png, luke-javafx3.png, luke1.jpg, luke2.jpg, luke3.jpg, 
> lukeALE-documents.png, screenshot-1.png, スクリーンショット 2018-11-05 9.19.47.png
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> see
> "RE: Luke - in need of maintainer": 
> http://markmail.org/message/m4gsto7giltvrpuf
> "Web-based Luke": http://markmail.org/message/4xwps7p7ifltme5q
> I think it would be great if there was a version of Luke that always worked 
> with trunk - and it would also be great if it was easier to match Luke jars 
> with Lucene versions.
> While I'd like to get GWT Luke into the mix as well, I think the easiest 
> starting point is to straight port Luke to another UI toolkit before 
> abstracting out DTO objects that both GWT Luke and Pivot Luke could share.
> I've started slowly converting Luke's use of thinlet to Apache Pivot. I 
> haven't/don't have a lot of time for this at the moment, but I've plugged 
> away here and there over the past week or two. There is still a *lot* to do.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-13385) Upgrade dependency jackson-databind in solr package contrib/prometheus-exporter/lib

2019-04-09 Thread DW (JIRA)
DW created SOLR-13385:
-

 Summary: Upgrade dependency jackson-databind in solr package 
contrib/prometheus-exporter/lib
 Key: SOLR-13385
 URL: https://issues.apache.org/jira/browse/SOLR-13385
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
Affects Versions: 7.6, 8.0.1
Reporter: DW


The currently used jackson-databind in 
/contrib/prometheus-exporter/lib/jackson-databind-2.9.6.jar has known security 
vulnerability records. Please upgrade to 2.9.8+. Thanks.

 

Please let me know if you would like detailed CVE records.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] [lucene-solr] diegoceccarelli commented on a change in pull request #300: SOLR-11831: Skip second grouping step if group.limit is 1 (aka Las Vegas Patch)

2019-04-09 Thread GitBox
diegoceccarelli commented on a change in pull request #300: SOLR-11831: Skip 
second grouping step if group.limit is 1 (aka Las Vegas Patch)
URL: https://github.com/apache/lucene-solr/pull/300#discussion_r273444073
 
 

 ##
 File path: 
solr/core/src/java/org/apache/solr/search/grouping/distributed/shardresultserializer/SearchGroupsResultTransformer.java
 ##
 @@ -34,17 +41,37 @@
 /**
  * Implementation for transforming {@link SearchGroup} into a {@link 
NamedList} structure and visa versa.
  */
-public class SearchGroupsResultTransformer implements 
ShardResultTransformer, Map> {
+public abstract class SearchGroupsResultTransformer implements 
ShardResultTransformer, Map> {
 
 Review comment:
  `SearchGroupsResultTransformer` shouldn't change; it would just delegate 
the behaviour to a different class according to the `skip` flag. 
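
A minimal, self-contained sketch of the delegation being suggested might look like 
the following. Every name here is hypothetical and not taken from the pull request; 
it only illustrates choosing an implementation from the skip flag.
{code:java}
// Hypothetical sketch of "delegate to a different class according to the skip
// flag". None of these names come from the patch, and the real Lucene/Solr
// types handled by SearchGroupsResultTransformer are deliberately omitted.
interface GroupsResultSerializer {
  Object transform(Object shardResult);
}

final class TwoStepSerializer implements GroupsResultSerializer {
  @Override public Object transform(Object shardResult) {
    return shardResult; // normal first-phase serialization would go here
  }
}

final class SkipSecondStepSerializer implements GroupsResultSerializer {
  @Override public Object transform(Object shardResult) {
    return shardResult; // serialization that also carries top-doc data would go here
  }
}

final class GroupsResultSerializerFactory {
  // Single decision point: callers keep one entry class while the behaviour
  // is swapped behind it based on the skip flag.
  static GroupsResultSerializer create(boolean skipSecondGroupingStep) {
    return skipSecondGroupingStep ? new SkipSecondStepSerializer() : new TwoStepSerializer();
  }
}
{code}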


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-8757) Better Segment To Thread Mapping Algorithm

2019-04-09 Thread Atri Sharma (JIRA)
Atri Sharma created LUCENE-8757:
---

 Summary: Better Segment To Thread Mapping Algorithm
 Key: LUCENE-8757
 URL: https://issues.apache.org/jira/browse/LUCENE-8757
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Atri Sharma


The current segment-to-thread allocation algorithm always allocates one 
thread per segment. This is detrimental to performance when segment sizes are 
skewed, since small segments also get their own dedicated thread. This can 
lead to performance degradation due to context-switching overheads.

 

A better algorithm that is cognizant of size skew would perform better in 
realistic scenarios.
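
One possible direction, sketched below purely for illustration, is to pack segments 
into slices by a per-slice document budget so that small segments share a thread. 
This is not a proposed patch; the greedy packing strategy and the budget parameter 
are assumptions made only to make the idea concrete.
{code:java}
// Rough, hypothetical sketch of size-aware slice allocation: pack segments
// into slices by a document budget instead of one thread per segment.
// Not taken from any committed patch.
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

import org.apache.lucene.index.LeafReaderContext;

public class SizeAwareSlices {

  public static List<List<LeafReaderContext>> slices(
      List<LeafReaderContext> leaves, int maxDocsPerSlice) {
    List<LeafReaderContext> sorted = new ArrayList<>(leaves);
    // Largest segments first, so big segments get their own slices and the
    // small ones end up packed together.
    sorted.sort(Comparator.comparingInt(
        (LeafReaderContext c) -> c.reader().maxDoc()).reversed());

    List<List<LeafReaderContext>> slices = new ArrayList<>();
    List<LeafReaderContext> current = new ArrayList<>();
    int docsInCurrent = 0;
    for (LeafReaderContext ctx : sorted) {
      int docs = ctx.reader().maxDoc();
      if (!current.isEmpty() && docsInCurrent + docs > maxDocsPerSlice) {
        slices.add(current);      // close the current slice and start a new one
        current = new ArrayList<>();
        docsInCurrent = 0;
      }
      current.add(ctx);
      docsInCurrent += docs;
    }
    if (!current.isEmpty()) {
      slices.add(current);
    }
    return slices;
  }
}
{code}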



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13336) maxBooleanClauses ignored; can result in exponential expansion of naive queries

2019-04-09 Thread David Smiley (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16813281#comment-16813281
 ] 

David Smiley commented on SOLR-13336:
-

+1 to your approach as described. I’m traveling and can’t look at your patch

> maxBooleanClauses ignored; can result in exponential expansion of naive 
> queries
> ---
>
> Key: SOLR-13336
> URL: https://issues.apache.org/jira/browse/SOLR-13336
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: query parsers
>Affects Versions: 7.0, 7.6, master (9.0)
>Reporter: Michael Gibney
>Assignee: Hoss Man
>Priority: Major
> Attachments: SOLR-13336.patch, SOLR-13336.patch
>
>
> Since SOLR-10921 it appears that Solr always sets 
> {{BooleanQuery.maxClauseCount}} (at the Lucene level) to 
> {{Integer.MAX_VALUE-1}}. I assume this is because Solr parses 
> {{maxBooleanClauses}} out of the config and applies it externally.
> In any case, when used as part of 
> {{lucene.util.QueryBuilder.analyzeGraphPhrase}} (and possibly other places?), 
> the Lucene code checks internally against only the static {{maxClauseCount}} 
> variable (permanently set to {{Integer.MAX_VALUE-1}} in the context of Solr).
> Thus in at least one case ({{analyzeGraphPhrase()}}, but possibly others?), 
> {{maxBooleanClauses}} is having no effect. I'm pretty sure this is what's 
> underlying the [issue reported here as being related to Solr 
> 7.6|https://mail-archives.apache.org/mod_mbox/lucene-solr-user/201902.mbox/%3CCAF%3DheHE6-MOtn2XRbEg7%3D1tpNEGtE8GaChnOhFLPeJzpF18SGA%40mail.gmail.com%3E].
> To summarize, users are definitely susceptible (to varying degrees of likely 
> severity, assuming no actual _malicious_ attack) if:
>  # Running Solr >= 7.6.0
>  # Using edismax with "ps" param set to >0
>  # Query-time analysis chain is _at all_ capable of producing graphs (e.g., 
> WordDelimiterGraphFilter, SynonymGraphFilter that has corresponding synonyms 
> with varying token lengths).
> Users are _particularly_ vulnerable in practice if they have query-time 
> {{WordDelimiterGraphFilter}} configured with {{preserveOriginal=true}}.
> To clarify, Lucene/Solr 7.6 didn't exactly _introduce_ the issue; it only 
> increased the likelihood of problems manifesting (as a result of 
> LUCENE-8531). Notably, the "enumerated strings" approach to graph phrase 
> query (reintroduced by LUCENE-8531) was previously in place pre-6.5 – at 
> which point it could rely on the default Lucene-level {{maxClauseCount}} failsafe 
> (removed as of 7.0). This explains the odd "Affects versions" => 
> maxBooleanClauses was disabled at the Lucene level (in Solr contexts) 
> starting with version 7.0, but the change became more likely to manifest 
> problems for users as of 7.6.
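
For readers following along, the Lucene-level limit referred to above is the static 
{{BooleanQuery.maxClauseCount}}. The snippet below only illustrates which knob the 
internal check consults; it is not the fix in the attached patches, and the value 
1024 is just an example.
{code:java}
// Illustration only; this is not the SOLR-13336 fix. It just shows the static,
// JVM-wide limit that Lucene-internal checks (such as the graph phrase
// expansion) consult, independently of Solr's maxBooleanClauses setting.
import org.apache.lucene.search.BooleanQuery;

public class MaxClauseCountProbe {
  public static void main(String[] args) {
    // Per the description above, Solr sets this to Integer.MAX_VALUE - 1, so
    // the configured maxBooleanClauses is not reflected here.
    System.out.println("global BooleanQuery limit = " + BooleanQuery.getMaxClauseCount());

    // Lowering the static limit (1024 is just an example value) makes
    // BooleanQuery construction throw BooleanQuery.TooManyClauses once a
    // query exceeds it.
    BooleanQuery.setMaxClauseCount(1024);
  }
}
{code}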



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13376) Multi-node race condition to create/remove nodeLost markers

2019-04-09 Thread Andrzej Bialecki (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13376?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16813271#comment-16813271
 ] 

Andrzej Bialecki  commented on SOLR-13376:
--

bq. it's expected that InactiveMarkersPlanAction is what will clean up the 
markers

It's expected to _eventually_ clean them - the trigger runs once a day. That's 
why the section in {{OverseerTriggerThread.run()}} was removing them on 
overseer leader change, to clean the markers that we know for sure are no 
longer needed. And apparently this creates the race condition.
 
bq.  you just re-enabled the test (w/o any modifications to it) and re-resolved 
this issue

Well, for the record, see 1cfbd3e1c84d35e741cfc068a8e88f0eff4ea9e1 where I 
tried to address another source of the test's instability, and the test's 
reliability improved after that change. The race condition that you discovered 
is something new that I wasn't aware of before, so I'm going to fix it (and add 
the missing documentation on {{.scheduled_maintenance}} trigger).

> Multi-node race condition to create/remove nodeLost markers
> ---
>
> Key: SOLR-13376
> URL: https://issues.apache.org/jira/browse/SOLR-13376
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>Assignee: Andrzej Bialecki 
>Priority: Major
>
> NodeMarkersRegistrationTest.testNodeMarkersRegistration is frequently failing 
> on Jenkins builds in the same spot, with similar-looking logs.
> Although I haven't been able to reproduce these failures locally, I am fairly 
> confident that the problem is a race condition bug that exists between 
> when/how a new Overseer will process & clean up "nodeLost" markers in ZK, 
> with how other nodes may (mistakenly) re-create those markers in their 
> liveNodes listener.
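
As a purely generic illustration of the check-then-act race described above (this is 
not Solr's Overseer or ZooKeeper code, and all names are made up), the problematic 
interleaving can be reduced to two threads sharing a marker set:
{code:java}
// Purely generic illustration of the check-then-act race described above;
// this is not Solr's Overseer or ZooKeeper code and all names are made up.
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

public class NodeLostMarkerRace {
  private static final Set<String> markers = ConcurrentHashMap.newKeySet();

  public static void main(String[] args) throws InterruptedException {
    String marker = "nodeLost-127.0.0.1:8983";
    markers.add(marker); // marker left over from a previous overseer

    // Thread A: the new overseer processes and removes the marker.
    Thread overseer = new Thread(() -> markers.remove(marker));

    // Thread B: another node's liveNodes listener, working from a stale view
    // in which the node still looks lost and unhandled, re-creates it.
    Thread listener = new Thread(() -> markers.add(marker));

    overseer.start();
    listener.start();
    overseer.join();
    listener.join();

    // Whether the marker survives depends on the interleaving: that is the race.
    System.out.println("marker present after both threads: " + markers.contains(marker));
  }
}
{code}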



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-8.x-Solaris (64bit/jdk1.8.0) - Build # 66 - Still unstable!

2019-04-09 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-8.x-Solaris/66/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseG1GC

1 tests failed.
FAILED:  
org.apache.solr.client.solrj.io.stream.StreamDecoratorTest.testParallelCommitStream

Error Message:
expected:<5> but was:<3>

Stack Trace:
java.lang.AssertionError: expected:<5> but was:<3>
at 
__randomizedtesting.SeedInfo.seed([49022154FCDBF4C1:69E84354609A198D]:0)
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:834)
at org.junit.Assert.assertEquals(Assert.java:645)
at org.junit.Assert.assertEquals(Assert.java:631)
at 
org.apache.solr.client.solrj.io.stream.StreamDecoratorTest.testParallelCommitStream(StreamDecoratorTest.java:3309)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)




Build Log:
[...truncated 16530 lines...]
   [junit4] Suite: org.apache.solr.client.solrj.io.stream.StreamDecoratorTest
   [junit4]   2> 

[jira] [Commented] (SOLR-13383) auto-scaling not working in solr7.4 - autoaddreplica

2019-04-09 Thread AntonyJohnson (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13383?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16813257#comment-16813257
 ] 

AntonyJohnson commented on SOLR-13383:
--

I have added the image for your reference.

> auto-scaling not working in solr7.4 - autoaddreplica
> 
>
> Key: SOLR-13383
> URL: https://issues.apache.org/jira/browse/SOLR-13383
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: config-api
>Affects Versions: 7.4
> Environment: Production
>Reporter: AntonyJohnson
>Priority: Blocker
>  Labels: performance
> Fix For: 7.4.1
>
> Attachments: 3collections.PNG
>
>
> We're able to create a new server via auto-scaling for our Solr 7.4 cluster, 
> but the newly created server is not added to our Solr cluster 
> automatically. Are there any settings or configurations we need to add in 
> order to add the replica automatically to the cluster for any collection?
> *commands used:* 
> {code:java}
> curl 
> "http://localhost:8983/solr/admin/collections?action=CREATE=my_collection_3=1=3=true;
> {code}
> Currently 3 nodes are in our cluster and I'm trying to add a 4th node, but it's 
> not getting added in Solr 7.4. The same scenario works fine in 
> Solr 7.5.
> *scaling policy used:*
> {code:java}
> 1)
> curl http://localhost:8983/solr/admin/autoscaling -H 
> 'Content-type:application/json' -d '{
>  "set-cluster-policy" : [{
>  "replica" : "1",
>  "shard" : "#EACH",
>  "node" : "#ANY",
>  }]
> }'
> 2)###Node Added Trigger
> curl http://localhost:8983/solr/admin/autoscaling -H 
> 'Content-type:application/json' -d '{
> "set-trigger": {
> "name" : "node_added_trigger",
> "event" : "nodeAdded",
> "waitFor" : "5s",
> "preferredOperation": "ADDREPLICA",
> "enabled" : true,
> "actions" : [
> {
> "name" : "compute_plan",
> "class": "solr.ComputePlanAction"
> },
> {
> "name" : "execute_plan",
> "class": "solr.ExecutePlanAction"
> }
> ]
> }
> }'
> 3)###Node Lost Trigger
> curl http://localhost:8983/solr/admin/autoscaling -H 
> 'Content-type:application/json' -d '{
> "set-trigger": {
> "name" : "node_lost_trigger",
> "event" : "nodeLost",
> "waitFor" : "5s",
> "preferredOperation": "DELETENODE",
> "enabled" : true,
> "actions" : [
> {
> "name" : "compute_plan",
> "class": "solr.ComputePlanAction"
> },
> {
> "name" : "execute_plan",
> "class": "solr.ExecutePlanAction"
> }
> ]
> }
> }'
> {code}
> Note: the same policy (2,3) is not working in 7.4
> *Errors:*
> {code:java}
> [Mon Apr 08 11:52:00 UTC root@hawkeye-common ~]# curl 
> http://localhost:8983/solr/admin/autoscaling -H 
> 'Content-type:application/json' -d '{
> >  "set-trigger": {
> >   "name" : "node_added_trigger",
> >   "event" : "nodeAdded",
> >   "waitFor" : "5s",
> >   "preferredOperation": "ADDREPLICA",
> >   "enabled" : true,
> >   "actions" : [
> >{
> > "name" : "compute_plan",
> > "class": "solr.ComputePlanAction"
> >},
> >{
> > "name" : "execute_plan",
> > "class": "solr.ExecutePlanAction"
> >}
> >   ]
> >  }
> > }'
> {
>   "responseHeader":{
> "status":400,
> "QTime":5},
>   "result":"failure",
>   "WARNING":"This response format is experimental.  It is likely to change in 
> the future.",
>   "error":{
> "metadata":[
>   "error-class","org.apache.solr.api.ApiBag$ExceptionWithErrObject",
>   "root-error-class","org.apache.solr.api.ApiBag$ExceptionWithErrObject"],
> "details":[{
> "set-trigger":{
>   "name":"node_added_trigger",
>   "event":"nodeAdded",
>   "waitFor":"5s",
>   "preferredOperation":"ADDREPLICA",
>   "enabled":true,
>   "actions":[{
>   "name":"compute_plan",
>   "class":"solr.ComputePlanAction"},
> {
>   "name":"execute_plan",
>   "class":"solr.ExecutePlanAction"}]},
> "errorMessages":["Error validating trigger config node_added_trigger: 
> TriggerValidationException{name=node_added_trigger, 
> details='{preferredOperation=unknown property}'}"]}],
> "msg":"Error in command payload",
> "code":400}}
> [Mon Apr 08 11:52:09 UTC root@hawkeye-common ~]#
> [Mon Apr 08 11:52:16 UTC root@hawkeye-common ~]# curl 
> http://localhost:8983/solr/admin/autoscaling -H 
> 'Content-type:application/json' -d '{
> >  "set-trigger": {
> >   "name" : "node_lost_trigger",
> >   "event" : "nodeLost",
> >   "waitFor" : "5s",
> >   "preferredOperation": "DELETENODE",
> >   "enabled" : true,
> >   "actions" : [
> >{
> > "name" : "compute_plan",
> > "class": "solr.ComputePlanAction"
> >},
> >{
> > "name" : "execute_plan",
> > "class": "solr.ExecutePlanAction"
> >}
> >   ]
> >  }
> > }'
> {
>   "responseHeader":{
> 

[jira] [Commented] (SOLR-13383) auto-scaling not working in solr7.4 - autoaddreplica

2019-04-09 Thread AntonyJohnson (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13383?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16813251#comment-16813251
 ] 

AntonyJohnson commented on SOLR-13383:
--

I have 3 nodes in the Solr cluster:

{code:java}
[Tue Apr 09 10:31:01 UTC root@hawkeye-common ~]# curl 
'http://localhost:8983/solr/admin/collections?action=CLUSTERSTATUS=json'
{
  "responseHeader":{
"status":0,
"QTime":2},
  "cluster":{
"collections":{},
"live_nodes":["solr1-demotest.rbtest.skavard.com:8983_solr",
  "solr2-demotest.rbtest.skavard.com:8983_solr",
  "solr3-demotest.rbtest.skavard.com:8983_solr"]}}
{code}

This is the command which I have used to create the collection:

{code:java}
[Tue Apr 09 10:31:45 UTC root@hawkeye-common ~]# curl 
"http://internal-stg-rbtest-demo74-solr-elb-918015379.us-east-1.elb.amazonaws.com:8983/solr/admin/collections?action=CREATE=my_collection_3=1=3;
{
  "responseHeader":{
"status":0,
"QTime":4693},
  "success":{
"solr2-demotest.rbtest.skavard.com:8983_solr":{
  "responseHeader":{
"status":0,
"QTime":3108},
  "core":"my_collection_3_shard1_replica_n2"},
"solr1-demotest.rbtest.skavard.com:8983_solr":{
  "responseHeader":{
"status":0,
"QTime":3140},
  "core":"my_collection_3_shard1_replica_n4"},
"solr3-demotest.rbtest.skavard.com:8983_solr":{
  "responseHeader":{
"status":0,
"QTime":3150},
  "core":"my_collection_3_shard1_replica_n1"}},
  "warning":"Using _default configset. Data driven schema functionality is 
enabled by default, which is NOT RECOMMENDED for production use. To turn it 
off: curl http://{host:port}/solr/my_collection_3/config -d 
'{\"set-user-property\": {\"update.autoCreateFields\":\"false\"}}'"}
{code}

When I add 2 more servers to that cluster, they are automatically added:

{code:java}
[Tue Apr 09 10:36:08 UTC root@hawkeye-common ~]# curl 
'http://localhost:8983/solr/admin/collections?action=CLUSTERSTATUS=json'
{
  "responseHeader":{
"status":0,
"QTime":1},
  "cluster":{
"collections":{
  "my_collection_3":{
"pullReplicas":"0",
"replicationFactor":"3",
"shards":{"shard1":{
"range":"8000-7fff",
"state":"active",
"replicas":{
  "core_node3":{
"core":"my_collection_3_shard1_replica_n1",
"base_url":"http://solr3-demotest.rbtest.skavard.com:8983/solr;,
"node_name":"solr3-demotest.rbtest.skavard.com:8983_solr",
"state":"active",
"type":"NRT",
"force_set_state":"false"},
  "core_node5":{
"core":"my_collection_3_shard1_replica_n2",
"base_url":"http://solr2-demotest.rbtest.skavard.com:8983/solr;,
"node_name":"solr2-demotest.rbtest.skavard.com:8983_solr",
"state":"active",
"type":"NRT",
"force_set_state":"false",
"leader":"true"},
  "core_node6":{
"core":"my_collection_3_shard1_replica_n4",
"base_url":"http://solr1-demotest.rbtest.skavard.com:8983/solr;,
"node_name":"solr1-demotest.rbtest.skavard.com:8983_solr",
"state":"active",
"type":"NRT",
"force_set_state":"false",
"router":{"name":"compositeId"},
"maxShardsPerNode":"1",
"autoAddReplicas":"false",
"nrtReplicas":"1",
"tlogReplicas":"0",
"znodeVersion":5,
"configName":"my_collection_3.AUTOCREATED"}},
"live_nodes":["10.42.3.159:8983_solr",
  "solr1-demotest.rbtest.skavard.com:8983_solr",
  "10.42.2.68:8983_solr",
  "solr2-demotest.rbtest.skavard.com:8983_solr",
  "solr3-demotest.rbtest.skavard.com:8983_solr"]}}
{code}

But the collection can't be shared across them; this is the problem here.


> auto-scaling not working in solr7.4 - autoaddreplica
> 
>
> Key: SOLR-13383
> URL: https://issues.apache.org/jira/browse/SOLR-13383
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: config-api
>Affects Versions: 7.4
> Environment: Production
>Reporter: AntonyJohnson
>Priority: Blocker
>  Labels: performance
> Fix For: 7.4.1
>
> Attachments: 3collections.PNG
>
>
> We're able to create a new server via auto-scaling for our Solr 7.4 cluster, 
> but the newly created server is not added to our Solr cluster 
> automatically. Are there any settings or configurations we need to add in 
> order to add the replica automatically to the cluster for any collection?
> *commands used:* 
> {code:java}
> curl 
> 

[jira] [Updated] (SOLR-13383) auto-scaling not working in solr7.4 - autoaddreplica

2019-04-09 Thread AntonyJohnson (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13383?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

AntonyJohnson updated SOLR-13383:
-
Attachment: 3collections.PNG

> auto-scaling not working in solr7.4 - autoaddreplica
> 
>
> Key: SOLR-13383
> URL: https://issues.apache.org/jira/browse/SOLR-13383
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: config-api
>Affects Versions: 7.4
> Environment: Production
>Reporter: AntonyJohnson
>Priority: Blocker
>  Labels: performance
> Fix For: 7.4.1
>
> Attachments: 3collections.PNG
>
>
> We're able to create a new server via auto-scaling for our Solr 7.4 cluster, 
> but the newly created server is not added to our Solr cluster 
> automatically. Are there any settings or configurations we need to add in 
> order to add the replica automatically to the cluster for any collection?
> *commands used:* 
> {code:java}
> curl 
> "http://localhost:8983/solr/admin/collections?action=CREATE=my_collection_3=1=3=true;
> {code}
> Currently 3 nodes are in our cluster and I'm trying to add a 4th node, but it's 
> not getting added in Solr 7.4. The same scenario works fine in 
> Solr 7.5.
> *scaling policy used:*
> {code:java}
> 1)
> curl http://localhost:8983/solr/admin/autoscaling -H 
> 'Content-type:application/json' -d '{
>  "set-cluster-policy" : [{
>  "replica" : "1",
>  "shard" : "#EACH",
>  "node" : "#ANY",
>  }]
> }'
> 2)###Node Added Trigger
> curl http://localhost:8983/solr/admin/autoscaling -H 
> 'Content-type:application/json' -d '{
> "set-trigger": {
> "name" : "node_added_trigger",
> "event" : "nodeAdded",
> "waitFor" : "5s",
> "preferredOperation": "ADDREPLICA",
> "enabled" : true,
> "actions" : [
> {
> "name" : "compute_plan",
> "class": "solr.ComputePlanAction"
> },
> {
> "name" : "execute_plan",
> "class": "solr.ExecutePlanAction"
> }
> ]
> }
> }'
> 3)###Node Lost Trigger
> curl http://localhost:8983/solr/admin/autoscaling -H 
> 'Content-type:application/json' -d '{
> "set-trigger": {
> "name" : "node_lost_trigger",
> "event" : "nodeLost",
> "waitFor" : "5s",
> "preferredOperation": "DELETENODE",
> "enabled" : true,
> "actions" : [
> {
> "name" : "compute_plan",
> "class": "solr.ComputePlanAction"
> },
> {
> "name" : "execute_plan",
> "class": "solr.ExecutePlanAction"
> }
> ]
> }
> }'
> {code}
> Note: the same policy (2,3) is not working in 7.4
> *Errors:*
> {code:java}
> [Mon Apr 08 11:52:00 UTC root@hawkeye-common ~]# curl 
> http://localhost:8983/solr/admin/autoscaling -H 
> 'Content-type:application/json' -d '{
> >  "set-trigger": {
> >   "name" : "node_added_trigger",
> >   "event" : "nodeAdded",
> >   "waitFor" : "5s",
> >   "preferredOperation": "ADDREPLICA",
> >   "enabled" : true,
> >   "actions" : [
> >{
> > "name" : "compute_plan",
> > "class": "solr.ComputePlanAction"
> >},
> >{
> > "name" : "execute_plan",
> > "class": "solr.ExecutePlanAction"
> >}
> >   ]
> >  }
> > }'
> {
>   "responseHeader":{
> "status":400,
> "QTime":5},
>   "result":"failure",
>   "WARNING":"This response format is experimental.  It is likely to change in 
> the future.",
>   "error":{
> "metadata":[
>   "error-class","org.apache.solr.api.ApiBag$ExceptionWithErrObject",
>   "root-error-class","org.apache.solr.api.ApiBag$ExceptionWithErrObject"],
> "details":[{
> "set-trigger":{
>   "name":"node_added_trigger",
>   "event":"nodeAdded",
>   "waitFor":"5s",
>   "preferredOperation":"ADDREPLICA",
>   "enabled":true,
>   "actions":[{
>   "name":"compute_plan",
>   "class":"solr.ComputePlanAction"},
> {
>   "name":"execute_plan",
>   "class":"solr.ExecutePlanAction"}]},
> "errorMessages":["Error validating trigger config node_added_trigger: 
> TriggerValidationException{name=node_added_trigger, 
> details='{preferredOperation=unknown property}'}"]}],
> "msg":"Error in command payload",
> "code":400}}
> [Mon Apr 08 11:52:09 UTC root@hawkeye-common ~]#
> [Mon Apr 08 11:52:16 UTC root@hawkeye-common ~]# curl 
> http://localhost:8983/solr/admin/autoscaling -H 
> 'Content-type:application/json' -d '{
> >  "set-trigger": {
> >   "name" : "node_lost_trigger",
> >   "event" : "nodeLost",
> >   "waitFor" : "5s",
> >   "preferredOperation": "DELETENODE",
> >   "enabled" : true,
> >   "actions" : [
> >{
> > "name" : "compute_plan",
> > "class": "solr.ComputePlanAction"
> >},
> >{
> > "name" : "execute_plan",
> > "class": "solr.ExecutePlanAction"
> >}
> >   ]
> >  }
> > }'
> {
>   "responseHeader":{
> "status":400,
> "QTime":1},
>   

[JENKINS] Lucene-Solr-8.x-Linux (32bit/jdk1.8.0_172) - Build # 370 - Unstable!

2019-04-09 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-8.x-Linux/370/
Java: 32bit/jdk1.8.0_172 -server -XX:+UseSerialGC

6 tests failed.
FAILED:  org.apache.solr.cloud.DeleteReplicaTest.deleteLiveReplicaTest

Error Message:
expected:<0> but was:<1>

Stack Trace:
java.lang.AssertionError: expected:<0> but was:<1>
at 
__randomizedtesting.SeedInfo.seed([E7C73CF7A5DB3CF:A31CC7C467621BBA]:0)
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:834)
at org.junit.Assert.assertEquals(Assert.java:645)
at org.junit.Assert.assertEquals(Assert.java:631)
at 
org.apache.solr.cloud.DeleteReplicaTest.deleteLiveReplicaTest(DeleteReplicaTest.java:126)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)


FAILED:  org.apache.solr.cloud.ReindexCollectionTest.testBasicReindexing

Error Message:
num docs expected:<200> but was:<192>

Stack Trace:
java.lang.AssertionError: num docs expected:<200> but 

Re: Welcome Tomoko Uchida as Lucene/Solr committer

2019-04-09 Thread jim ferenczi
Welcome Tomoko!

Le mar. 9 avr. 2019 à 12:19, Ishan Chattopadhyaya 
a écrit :

> Yokoso, Tomoko-san! Congratulations..
>
> On Tue, Apr 9, 2019 at 2:28 PM Christine Poerschke (BLOOMBERG/ LONDON)
>  wrote:
> >
> > Welcome Tomoko!
> >
> > From: dev@lucene.apache.org At: 04/08/19 16:20:59
> > To: dev@lucene.apache.org, tomoko.uchida.1...@gmail.com
> > Subject: Welcome Tomoko Uchida as Lucene/Solr committer
> >
> > Hi all,
> >
> > Please join me in welcoming Tomoko Uchida as the latest Lucene/Solr
> committer!
> >
> > She has been working on
> https://issues.apache.org/jira/browse/LUCENE-2562 for
> > several years with awesome progress and finally we got the fantastic
> Luke as a
> > branch on ASF JIRA:
> >
> https://gitbox.apache.org/repos/asf?p=lucene-solr.git;a=shortlog;h=refs/heads/ji
> > ra/lucene-2562-luke-swing-3
> > Looking forward to the first release of Apache Lucene 8.1 with Luke
> bundled in
> > the distribution. I will take care of merging it to master and 8.x
> branches
> > together with her once she got the ASF account.
> >
> > Tomoko also helped with the Japanese and Korean Analyzers.
> >
> > Congratulations and Welcome, Tomoko! Tomoko, it's traditional for you to
> > introduce yourself with a brief bio.
> >
> > Uwe & Robert (who nominated Tomoko)
> >
> > -
> > Uwe Schindler
> > Achterdiek 19, D-28357 Bremen
> > https://www.thetaphi.de
> > eMail: u...@thetaphi.de
> >
> >
> >
> > -
> > To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> > For additional commands, e-mail: dev-h...@lucene.apache.org
> >
> >
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>
>


[jira] [Commented] (LUCENE-8708) Can we simplify conjunctions of range queries automatically?

2019-04-09 Thread Ignacio Vera (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8708?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16813239#comment-16813239
 ] 

Ignacio Vera commented on LUCENE-8708:
--

Just an idea, maybe biased by my background.

One of the issues here is that we visit the tree for each range, and this is 
what we are trying to improve. Maybe adding a query that can accept more than 
one range with a logical relationship ('AND', 'OR', ...) might be less invasive 
and would encapsulate the logic.
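
To make the bound-merging step concrete, here is a minimal, hypothetical sketch 
(plain Java, not Lucene code; the {{IntRange}} type and {{intersect}} helper are 
illustrative) of how two conjunctive integer ranges collapse into one, which is 
the core of either rewrite approach:
{code:java}
// Hypothetical sketch only: intersect the bounds of two ranges that appear in
// a conjunction, e.g. foo:[5 TO *] AND foo:[* TO 20] -> foo:[5 TO 20].
import java.util.Optional;

public class IntRange {
  final int lower;
  final int upper;

  IntRange(int lower, int upper) {
    this.lower = lower;
    this.upper = upper;
  }

  static Optional<IntRange> intersect(IntRange a, IntRange b) {
    int lower = Math.max(a.lower, b.lower);
    int upper = Math.min(a.upper, b.upper);
    // Disjoint bounds: the conjunction could be rewritten to a match-no-docs query.
    return lower > upper ? Optional.empty() : Optional.of(new IntRange(lower, upper));
  }

  public static void main(String[] args) {
    IntRange merged = intersect(
        new IntRange(5, Integer.MAX_VALUE),         // foo:[5 TO *]
        new IntRange(Integer.MIN_VALUE, 20)).get(); // foo:[* TO 20]
    System.out.println("foo:[" + merged.lower + " TO " + merged.upper + "]");
  }
}
{code}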

> Can we simplify conjunctions of range queries automatically?
> 
>
> Key: LUCENE-8708
> URL: https://issues.apache.org/jira/browse/LUCENE-8708
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Priority: Minor
> Attachments: interval_range_clauses_merging0704.patch
>
>
> BooleanQuery#rewrite already has some logic to make queries more efficient, 
> such as deduplicating filters or rewriting boolean queries that wrap a single 
> positive clause to that clause.
> It would be nice to also simplify conjunctions of range queries, so that eg. 
> {{foo: [5 TO *] AND foo:[* TO 20]}} would be rewritten to {{foo:[5 TO 20]}}. 
> When constructing queries manually or via the classic query parser, it feels 
> unnecessary as this is something that the user can fix easily. However if you 
> want to implement a query parser that only allows specifying one bound at 
> once, such as Gmail ({{after:2018-12-31}} 
> https://support.google.com/mail/answer/7190?hl=en) or GitHub 
> ({{updated:>=2018-12-31}} 
> https://help.github.com/en/articles/searching-issues-and-pull-requests#search-by-when-an-issue-or-pull-request-was-created-or-last-updated)
>  then you might end up with inefficient queries if the end user specifies 
> both an upper and a lower bound. It would be nice if we optimized those 
> automatically.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7386) Flatten nested disjunctions

2019-04-09 Thread Jim Ferenczi (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-7386?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16813236#comment-16813236
 ] 

Jim Ferenczi commented on LUCENE-7386:
--

+1, I also find it easier to read when the simplification is done at the 
rewrite level.
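
For illustration, a rough sketch of what flattening at the rewrite level could 
look like (assuming the standard {{BooleanQuery.Builder}} API; this is not the 
attached patch): a SHOULD clause that is itself a pure disjunction gets its 
inner clauses inlined into the parent.
{code:java}
import org.apache.lucene.search.BooleanClause;
import org.apache.lucene.search.BooleanClause.Occur;
import org.apache.lucene.search.BooleanQuery;
import org.apache.lucene.search.Query;

public class FlattenDisjunctions {

  // Sketch: (a OR (b OR c)) becomes (a OR b OR c). Only safe when neither the
  // outer nor the inner query carries a minimumNumberShouldMatch > 1 constraint.
  static BooleanQuery flatten(BooleanQuery query) {
    if (query.getMinimumNumberShouldMatch() > 1) {
      return query;
    }
    BooleanQuery.Builder builder = new BooleanQuery.Builder();
    builder.setMinimumNumberShouldMatch(query.getMinimumNumberShouldMatch());
    for (BooleanClause clause : query.clauses()) {
      Query inner = clause.getQuery();
      if (clause.getOccur() == Occur.SHOULD
          && inner instanceof BooleanQuery
          && isPureDisjunction((BooleanQuery) inner)) {
        // Inline the nested disjunction's clauses directly.
        for (BooleanClause innerClause : ((BooleanQuery) inner).clauses()) {
          builder.add(innerClause.getQuery(), Occur.SHOULD);
        }
      } else {
        builder.add(clause.getQuery(), clause.getOccur());
      }
    }
    return builder.build();
  }

  private static boolean isPureDisjunction(BooleanQuery query) {
    if (query.getMinimumNumberShouldMatch() > 1) {
      return false;
    }
    for (BooleanClause clause : query.clauses()) {
      if (clause.getOccur() != Occur.SHOULD) {
        return false;
      }
    }
    return true;
  }
}
{code}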

> Flatten nested disjunctions
> ---
>
> Key: LUCENE-7386
> URL: https://issues.apache.org/jira/browse/LUCENE-7386
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Assignee: Adrien Grand
>Priority: Minor
> Attachments: LUCENE-7386.patch, LUCENE-7386.patch, LUCENE-7386.patch
>
>
> Now that coords are gone it became easier to flatten nested disjunctions. It 
> might sound weird to write nested disjunctions in the first place, but 
> disjunctions can be created implicitly by other queries such as 
> more-like-this, LatLonPoint.newBoxQuery, non-scoring synonym queries, etc.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Welcome Tomoko Uchida as Lucene/Solr committer

2019-04-09 Thread Ishan Chattopadhyaya
Yokoso, Tomoko-san! Congratulations..

On Tue, Apr 9, 2019 at 2:28 PM Christine Poerschke (BLOOMBERG/ LONDON)
 wrote:
>
> Welcome Tomoko!
>
> From: dev@lucene.apache.org At: 04/08/19 16:20:59
> To: dev@lucene.apache.org, tomoko.uchida.1...@gmail.com
> Subject: Welcome Tomoko Uchida as Lucene/Solr committer
>
> Hi all,
>
> Please join me in welcoming Tomoko Uchida as the latest Lucene/Solr committer!
>
> She has been working on https://issues.apache.org/jira/browse/LUCENE-2562 for
> several years with awesome progress and finally we got the fantastic Luke as a
> branch on ASF JIRA:
> https://gitbox.apache.org/repos/asf?p=lucene-solr.git;a=shortlog;h=refs/heads/ji
> ra/lucene-2562-luke-swing-3
> Looking forward to the first release of Apache Lucene 8.1 with Luke bundled in
> the distribution. I will take care of merging it to master and 8.x branches
> together with her once she got the ASF account.
>
> Tomoko also helped with the Japanese and Korean Analyzers.
>
> Congratulations and Welcome, Tomoko! Tomoko, it's traditional for you to
> introduce yourself with a brief bio.
>
> Uwe & Robert (who nominated Tomoko)
>
> -
> Uwe Schindler
> Achterdiek 19, D-28357 Bremen
> https://www.thetaphi.de
> eMail: u...@thetaphi.de
>
>
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>
>

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-13383) auto-scaling not working in solr7.4 - autoaddreplica

2019-04-09 Thread AntonyJohnson (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13383?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16813224#comment-16813224
 ] 

AntonyJohnson edited comment on SOLR-13383 at 4/9/19 10:16 AM:
---

[~Goodman],
Thanks for the update. Yes, I understand, but when I launch a new instance it is 
added automatically to the live nodes list while its data has not been shared yet. 
Can you please advise me on how to resolve this issue, 
or can you share the live trigger and listener configuration for this 
activity?


was (Author: antojohn):
[~Goodman],
Thanks for the update.yes i can understand but when i launch a new instance its 
added automatically into live nodes list but the data's not been shared yet 
,Can you please advice me to resolve this issue 

> auto-scaling not working in solr7.4 - autoaddreplica
> 
>
> Key: SOLR-13383
> URL: https://issues.apache.org/jira/browse/SOLR-13383
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: config-api
>Affects Versions: 7.4
> Environment: Production
>Reporter: AntonyJohnson
>Priority: Blocker
>  Labels: performance
> Fix For: 7.4.1
>
>
> We're able to create a new server via auto-scaling for our Solr 7.4 cluster, 
> but the newly created server is not added to our Solr cluster 
> automatically. Are there any settings or configurations we need to add in 
> order to add the replica automatically to the cluster for any collection?
> *commands used:* 
> {code:java}
> curl 
> "http://localhost:8983/solr/admin/collections?action=CREATE=my_collection_3=1=3=true;
> {code}
> Currently 3 nodes are in our cluster and I'm trying to add a 4th node, but it is 
> not getting added in Solr 7.4. The same scenario works fine in 
> Solr 7.5.
> *scaling policy used:*
> {code:java}
> 1)
> curl http://localhost:8983/solr/admin/autoscaling -H 
> 'Content-type:application/json' -d '{
>  "set-cluster-policy" : [{
>  "replica" : "1",
>  "shard" : "#EACH",
>  "node" : "#ANY",
>  }]
> }'
> 2)###Node Added Trigger
> curl http://localhost:8983/solr/admin/autoscaling -H 
> 'Content-type:application/json' -d '{
> "set-trigger": {
> "name" : "node_added_trigger",
> "event" : "nodeAdded",
> "waitFor" : "5s",
> "preferredOperation": "ADDREPLICA",
> "enabled" : true,
> "actions" : [
> {
> "name" : "compute_plan",
> "class": "solr.ComputePlanAction"
> },
> {
> "name" : "execute_plan",
> "class": "solr.ExecutePlanAction"
> }
> ]
> }
> }'
> 3)###Node Lost Trigger
> curl http://localhost:8983/solr/admin/autoscaling -H 
> 'Content-type:application/json' -d '{
> "set-trigger": {
> "name" : "node_lost_trigger",
> "event" : "nodeLost",
> "waitFor" : "5s",
> "preferredOperation": "DELETENODE",
> "enabled" : true,
> "actions" : [
> {
> "name" : "compute_plan",
> "class": "solr.ComputePlanAction"
> },
> {
> "name" : "execute_plan",
> "class": "solr.ExecutePlanAction"
> }
> ]
> }
> }'
> {code}
> Note: the same policies (2, 3) are not working in 7.4.
> *Errors:*
> {code:java}
> [Mon Apr 08 11:52:00 UTC root@hawkeye-common ~]# curl 
> http://localhost:8983/solr/admin/autoscaling -H 
> 'Content-type:application/json' -d '{
> >  "set-trigger": {
> >   "name" : "node_added_trigger",
> >   "event" : "nodeAdded",
> >   "waitFor" : "5s",
> >   "preferredOperation": "ADDREPLICA",
> >   "enabled" : true,
> >   "actions" : [
> >{
> > "name" : "compute_plan",
> > "class": "solr.ComputePlanAction"
> >},
> >{
> > "name" : "execute_plan",
> > "class": "solr.ExecutePlanAction"
> >}
> >   ]
> >  }
> > }'
> {
>   "responseHeader":{
> "status":400,
> "QTime":5},
>   "result":"failure",
>   "WARNING":"This response format is experimental.  It is likely to change in 
> the future.",
>   "error":{
> "metadata":[
>   "error-class","org.apache.solr.api.ApiBag$ExceptionWithErrObject",
>   "root-error-class","org.apache.solr.api.ApiBag$ExceptionWithErrObject"],
> "details":[{
> "set-trigger":{
>   "name":"node_added_trigger",
>   "event":"nodeAdded",
>   "waitFor":"5s",
>   "preferredOperation":"ADDREPLICA",
>   "enabled":true,
>   "actions":[{
>   "name":"compute_plan",
>   "class":"solr.ComputePlanAction"},
> {
>   "name":"execute_plan",
>   "class":"solr.ExecutePlanAction"}]},
> "errorMessages":["Error validating trigger config node_added_trigger: 
> TriggerValidationException{name=node_added_trigger, 
> details='{preferredOperation=unknown property}'}"]}],
> "msg":"Error in command payload",
> "code":400}}
> [Mon Apr 08 11:52:09 UTC root@hawkeye-common ~]#
> [Mon Apr 08 11:52:16 UTC root@hawkeye-common ~]# curl 
> 

[jira] [Commented] (SOLR-13383) auto-scaling not working in solr7.4 - autoaddreplica

2019-04-09 Thread AntonyJohnson (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13383?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16813224#comment-16813224
 ] 

AntonyJohnson commented on SOLR-13383:
--

[~Goodman],
Thanks for the update. Yes, I understand, but when I launch a new instance it is 
added automatically to the live nodes list while its data has not been shared yet. 
Can you please advise me on how to resolve this issue?

> auto-scaling not working in solr7.4 - autoaddreplica
> 
>
> Key: SOLR-13383
> URL: https://issues.apache.org/jira/browse/SOLR-13383
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: config-api
>Affects Versions: 7.4
> Environment: Production
>Reporter: AntonyJohnson
>Priority: Blocker
>  Labels: performance
> Fix For: 7.4.1
>
>
> We're able to create a new server via auto-scaling for our Solr 7.4 cluster, 
> but the newly created server is not added to our Solr cluster 
> automatically. Are there any settings or configurations we need to add in 
> order to add the replica automatically to the cluster for any collection?
> *commands used:* 
> {code:java}
> curl 
> "http://localhost:8983/solr/admin/collections?action=CREATE=my_collection_3=1=3=true;
> {code}
> Currently 3 nodes are in our cluster and I'm trying to add a 4th node, but it is 
> not getting added in Solr 7.4. The same scenario works fine in 
> Solr 7.5.
> *scaling policy used:*
> {code:java}
> 1)
> curl http://localhost:8983/solr/admin/autoscaling -H 
> 'Content-type:application/json' -d '{
>  "set-cluster-policy" : [{
>  "replica" : "1",
>  "shard" : "#EACH",
>  "node" : "#ANY",
>  }]
> }'
> 2)###Node Added Trigger
> curl http://localhost:8983/solr/admin/autoscaling -H 
> 'Content-type:application/json' -d '{
> "set-trigger": {
> "name" : "node_added_trigger",
> "event" : "nodeAdded",
> "waitFor" : "5s",
> "preferredOperation": "ADDREPLICA",
> "enabled" : true,
> "actions" : [
> {
> "name" : "compute_plan",
> "class": "solr.ComputePlanAction"
> },
> {
> "name" : "execute_plan",
> "class": "solr.ExecutePlanAction"
> }
> ]
> }
> }'
> 3)###Node Lost Trigger
> curl http://localhost:8983/solr/admin/autoscaling -H 
> 'Content-type:application/json' -d '{
> "set-trigger": {
> "name" : "node_lost_trigger",
> "event" : "nodeLost",
> "waitFor" : "5s",
> "preferredOperation": "DELETENODE",
> "enabled" : true,
> "actions" : [
> {
> "name" : "compute_plan",
> "class": "solr.ComputePlanAction"
> },
> {
> "name" : "execute_plan",
> "class": "solr.ExecutePlanAction"
> }
> ]
> }
> }'
> {code}
> Note: the same policies (2, 3) are not working in 7.4.
> *Errors:*
> {code:java}
> [Mon Apr 08 11:52:00 UTC root@hawkeye-common ~]# curl 
> http://localhost:8983/solr/admin/autoscaling -H 
> 'Content-type:application/json' -d '{
> >  "set-trigger": {
> >   "name" : "node_added_trigger",
> >   "event" : "nodeAdded",
> >   "waitFor" : "5s",
> >   "preferredOperation": "ADDREPLICA",
> >   "enabled" : true,
> >   "actions" : [
> >{
> > "name" : "compute_plan",
> > "class": "solr.ComputePlanAction"
> >},
> >{
> > "name" : "execute_plan",
> > "class": "solr.ExecutePlanAction"
> >}
> >   ]
> >  }
> > }'
> {
>   "responseHeader":{
> "status":400,
> "QTime":5},
>   "result":"failure",
>   "WARNING":"This response format is experimental.  It is likely to change in 
> the future.",
>   "error":{
> "metadata":[
>   "error-class","org.apache.solr.api.ApiBag$ExceptionWithErrObject",
>   "root-error-class","org.apache.solr.api.ApiBag$ExceptionWithErrObject"],
> "details":[{
> "set-trigger":{
>   "name":"node_added_trigger",
>   "event":"nodeAdded",
>   "waitFor":"5s",
>   "preferredOperation":"ADDREPLICA",
>   "enabled":true,
>   "actions":[{
>   "name":"compute_plan",
>   "class":"solr.ComputePlanAction"},
> {
>   "name":"execute_plan",
>   "class":"solr.ExecutePlanAction"}]},
> "errorMessages":["Error validating trigger config node_added_trigger: 
> TriggerValidationException{name=node_added_trigger, 
> details='{preferredOperation=unknown property}'}"]}],
> "msg":"Error in command payload",
> "code":400}}
> [Mon Apr 08 11:52:09 UTC root@hawkeye-common ~]#
> [Mon Apr 08 11:52:16 UTC root@hawkeye-common ~]# curl 
> http://localhost:8983/solr/admin/autoscaling -H 
> 'Content-type:application/json' -d '{
> >  "set-trigger": {
> >   "name" : "node_lost_trigger",
> >   "event" : "nodeLost",
> >   "waitFor" : "5s",
> >   "preferredOperation": "DELETENODE",
> >   "enabled" : true,
> >   "actions" : [
> >{
> > "name" : "compute_plan",
> > "class": "solr.ComputePlanAction"
> >},
> >{
> 

[jira] [Commented] (LUCENE-7386) Flatten nested disjunctions

2019-04-09 Thread Adrien Grand (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-7386?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16813211#comment-16813211
 ] 

Adrien Grand commented on LUCENE-7386:
--

For the record I had to disable the verification of scores for this run of the 
benchmark since this change removes intermediate casts to float which trigger 
slight changes in the produced scores.

> Flatten nested disjunctions
> ---
>
> Key: LUCENE-7386
> URL: https://issues.apache.org/jira/browse/LUCENE-7386
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Assignee: Adrien Grand
>Priority: Minor
> Attachments: LUCENE-7386.patch, LUCENE-7386.patch, LUCENE-7386.patch
>
>
> Now that coords are gone it became easier to flatten nested disjunctions. It 
> might sound weird to write nested disjunctions in the first place, but 
> disjunctions can be created implicitly by other queries such as 
> more-like-this, LatLonPoint.newBoxQuery, non-scoring synonym queries, etc.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8708) Can we simplify conjunctions of range queries automatically?

2019-04-09 Thread Adrien Grand (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8708?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16813199#comment-16813199
 ] 

Adrien Grand commented on LUCENE-8708:
--

Thanks Atri for giving it a try! This change is a bit too invasive for my taste 
given that this is only a nice-to-have feature. That said, I don't really have 
ideas on how to make it better... 

> Can we simplify conjunctions of range queries automatically?
> 
>
> Key: LUCENE-8708
> URL: https://issues.apache.org/jira/browse/LUCENE-8708
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Priority: Minor
> Attachments: interval_range_clauses_merging0704.patch
>
>
> BooleanQuery#rewrite already has some logic to make queries more efficient, 
> such as deduplicating filters or rewriting boolean queries that wrap a single 
> positive clause to that clause.
> It would be nice to also simplify conjunctions of range queries, so that eg. 
> {{foo: [5 TO *] AND foo:[* TO 20]}} would be rewritten to {{foo:[5 TO 20]}}. 
> When constructing queries manually or via the classic query parser, it feels 
> unnecessary as this is something that the user can fix easily. However if you 
> want to implement a query parser that only allows specifying one bound at 
> once, such as Gmail ({{after:2018-12-31}} 
> https://support.google.com/mail/answer/7190?hl=en) or GitHub 
> ({{updated:>=2018-12-31}} 
> https://help.github.com/en/articles/searching-issues-and-pull-requests#search-by-when-an-issue-or-pull-request-was-created-or-last-updated)
>  then you might end up with inefficient queries if the end user specifies 
> both an upper and a lower bound. It would be nice if we optimized those 
> automatically.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (LUCENE-8753) New PostingFormat - UniformSplit

2019-04-09 Thread Bruno Roustant (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8753?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16813171#comment-16813171
 ] 

Bruno Roustant edited comment on LUCENE-8753 at 4/9/19 9:26 AM:


I agree.

We profiled wikimediumall and we saw that 90% of the time is spent in the 
scoring, and less than a couple of percent is spent to access the dictionary 
blocks.

Our own use-case is to have multiple small-to-medium cores, the size of 
wikimedium500k, that's why we studied it more.


was (Author: bruno.roustant):
I agree.

We profile wikimediumall and we saw that 90% of the time is spent in the 
scoring, and less than a couple of percent is spent to access the dictionary 
blocks.

Our own use-case is to have multiple small-to-medium cores, the size of 
wikimedium500k, that's why we studied it more.

> New PostingFormat - UniformSplit
> 
>
> Key: LUCENE-8753
> URL: https://issues.apache.org/jira/browse/LUCENE-8753
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/codecs
>Affects Versions: 8.0
>Reporter: Bruno Roustant
>Priority: Major
> Attachments: Uniform Split Technique.pdf, luceneutil.benchmark.txt
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> This is a proposal to add a new PostingsFormat called "UniformSplit" with 4 
> objectives:
>  - Clear design and simple code.
>  - Easily extensible, for both the logic and the index format.
>  - Light memory usage with a very compact FST.
>  - Focus on efficient TermQuery, PhraseQuery and PrefixQuery performance.
> (the pdf attached explains visually the technique in more details)
>  The principle is to split the list of terms into blocks and use a FST to 
> access the block, but not as a prefix trie, rather with a seek-floor pattern. 
> For the selection of the blocks, there is a target average block size (number 
> of terms), with an allowed delta variation (10%) to compare the terms and 
> select the one with the minimal distinguishing prefix.
>  There are also several optimizations inside the block to make it more 
> compact and speed up the loading/scanning.
> The performance obtained is interesting with the luceneutil benchmark, 
> comparing UniformSplit with BlockTree. Find it in the first comment and also 
> attached for better formatting.
> Although the precise percentages vary between runs, three main points:
>  - TermQuery and PhraseQuery are improved.
>  - PrefixQuery and WildcardQuery are ok.
>  - Fuzzy queries are clearly less performant, because BlockTree is so 
> optimized for them.
> Compared to BlockTree, FST size is reduced by 15%, and segment writing time 
> is reduced by 20%. So this PostingsFormat scales to lots of docs, as 
> BlockTree.
> This initial version passes all Lucene tests. Use “ant test 
> -Dtests.codec=UniformSplitTesting” to test with this PostingsFormat.
> Subjectively, we think we have fulfilled our goal of code simplicity. And we 
> have already exercised this PostingsFormat extensibility to create a 
> different flavor for our own use-case.
> Contributors: Juan Camilo Rodriguez Duran, Bruno Roustant, David Smiley
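
To make the block-selection idea above easier to follow, here is a hypothetical, 
self-contained sketch (not the actual UniformSplit code; {{targetBlockSize}} and 
{{delta}} are illustrative parameters): walk the sorted terms and, within the 
allowed window around the target size, start the next block at the term with 
the minimal distinguishing prefix.
{code:java}
import java.util.ArrayList;
import java.util.List;

public class BlockSplitSketch {

  // Returns the indexes at which each block starts. Assumes targetBlockSize >= 2.
  static List<Integer> chooseBlockStarts(List<String> sortedTerms,
                                         int targetBlockSize, double delta) {
    List<Integer> blockStarts = new ArrayList<>();
    int min = Math.max(1, (int) (targetBlockSize * (1 - delta)));
    int max = (int) (targetBlockSize * (1 + delta));
    int blockStart = 0;
    while (blockStart < sortedTerms.size()) {
      blockStarts.add(blockStart);
      int bestBoundary = Math.min(blockStart + targetBlockSize, sortedTerms.size());
      int bestPrefix = Integer.MAX_VALUE;
      // Within the [min, max] window, prefer the boundary whose first term has
      // the shortest prefix distinguishing it from the previous term.
      for (int i = blockStart + min; i <= blockStart + max && i < sortedTerms.size(); i++) {
        int prefix = distinguishingPrefixLength(sortedTerms.get(i - 1), sortedTerms.get(i));
        if (prefix < bestPrefix) {
          bestPrefix = prefix;
          bestBoundary = i;
        }
      }
      blockStart = bestBoundary;
    }
    return blockStarts;
  }

  // Length of the shortest prefix of b that differs from a; this short prefix
  // is what would label the block in the FST.
  static int distinguishingPrefixLength(String a, String b) {
    int i = 0;
    while (i < a.length() && i < b.length() && a.charAt(i) == b.charAt(i)) {
      i++;
    }
    return i + 1;
  }
}
{code}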



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8753) New PostingFormat - UniformSplit

2019-04-09 Thread Bruno Roustant (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8753?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16813171#comment-16813171
 ] 

Bruno Roustant commented on LUCENE-8753:


I agree.

We profiled wikimediumall and we saw that 90% of the time is spent in the 
scoring, and less than a couple of percent is spent to access the dictionary 
blocks.

Our own use-case is to have multiple small-to-medium cores, the size of 
wikimedium500k, that's why we studied it more.

> New PostingFormat - UniformSplit
> 
>
> Key: LUCENE-8753
> URL: https://issues.apache.org/jira/browse/LUCENE-8753
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/codecs
>Affects Versions: 8.0
>Reporter: Bruno Roustant
>Priority: Major
> Attachments: Uniform Split Technique.pdf, luceneutil.benchmark.txt
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> This is a proposal to add a new PostingsFormat called "UniformSplit" with 4 
> objectives:
>  - Clear design and simple code.
>  - Easily extensible, for both the logic and the index format.
>  - Light memory usage with a very compact FST.
>  - Focus on efficient TermQuery, PhraseQuery and PrefixQuery performance.
> (the pdf attached explains visually the technique in more details)
>  The principle is to split the list of terms into blocks and use a FST to 
> access the block, but not as a prefix trie, rather with a seek-floor pattern. 
> For the selection of the blocks, there is a target average block size (number 
> of terms), with an allowed delta variation (10%) to compare the terms and 
> select the one with the minimal distinguishing prefix.
>  There are also several optimizations inside the block to make it more 
> compact and speed up the loading/scanning.
> The performance obtained is interesting with the luceneutil benchmark, 
> comparing UniformSplit with BlockTree. Find it in the first comment and also 
> attached for better formatting.
> Although the precise percentages vary between runs, three main points:
>  - TermQuery and PhraseQuery are improved.
>  - PrefixQuery and WildcardQuery are ok.
>  - Fuzzy queries are clearly less performant, because BlockTree is so 
> optimized for them.
> Compared to BlockTree, FST size is reduced by 15%, and segment writing time 
> is reduced by 20%. So this PostingsFormat scales to lots of docs, as 
> BlockTree.
> This initial version passes all Lucene tests. Use “ant test 
> -Dtests.codec=UniformSplitTesting” to test with this PostingsFormat.
> Subjectively, we think we have fulfilled our goal of code simplicity. And we 
> have already exercised this PostingsFormat extensibility to create a 
> different flavor for our own use-case.
> Contributors: Juan Camilo Rodriguez Duran, Bruno Roustant, David Smiley



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13240) UTILIZENODE action results in an exception

2019-04-09 Thread Richard (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16813170#comment-16813170
 ] 

Richard commented on SOLR-13240:


I guess the test isn't named as well as it could be, but it mainly tests the 
new comparator, and verifies that if there is a set of replicas which have more than 1 
leader, it won't throw an exception.
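
For anyone unfamiliar with the "Comparison method violates its general contract!" 
error shown in the description below, here is a hypothetical, self-contained 
illustration (not Solr's actual comparator) of how a comparator that treats 
several "leader" elements as equal to everything breaks transitivity and can 
make TimSort throw it:
{code:java}
import java.util.Arrays;
import java.util.Collections;
import java.util.Comparator;
import java.util.List;

public class BrokenComparatorDemo {
  public static void main(String[] args) {
    Integer[] values = new Integer[256];
    for (int i = 0; i < values.length; i++) {
      values[i] = i;
    }
    Collections.shuffle(Arrays.asList(values));
    List<Integer> leaders = Arrays.asList(3, 17, 42); // more than one "leader"

    Comparator<Integer> broken = (a, b) -> {
      // Not transitive: a leader compares as equal to everything, so
      // compare(x, leader) == 0 and compare(leader, y) == 0 while compare(x, y) != 0.
      if (leaders.contains(a) || leaders.contains(b)) {
        return 0;
      }
      return Integer.compare(a, b);
    };

    // TimSort may detect the broken contract and throw IllegalArgumentException.
    Arrays.sort(values, broken);
  }
}
{code}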

> UTILIZENODE action results in an exception
> --
>
> Key: SOLR-13240
> URL: https://issues.apache.org/jira/browse/SOLR-13240
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.6
>Reporter: Hendrik Haddorp
>Priority: Major
> Attachments: SOLR-13240.patch
>
>
> When I invoke the UTILIZENODE action the REST call fails like this after it 
> moved a few replicas:
> {
>   "responseHeader":{
> "status":500,
> "QTime":40220},
>   "Operation utilizenode caused 
> exception:":"java.lang.IllegalArgumentException:java.lang.IllegalArgumentException:
>  Comparison method violates its general contract!",
>   "exception":{
> "msg":"Comparison method violates its general contract!",
> "rspCode":-1},
>   "error":{
> "metadata":[
>   "error-class","org.apache.solr.common.SolrException",
>   "root-error-class","org.apache.solr.common.SolrException"],
> "msg":"Comparison method violates its general contract!",
> "trace":"org.apache.solr.common.SolrException: Comparison method violates 
> its general contract!\n\tat 
> org.apache.solr.client.solrj.SolrResponse.getException(SolrResponse.java:53)\n\tat
>  
> org.apache.solr.handler.admin.CollectionsHandler.invokeAction(CollectionsHandler.java:274)\n\tat
>  
> org.apache.solr.handler.admin.CollectionsHandler.handleRequestBody(CollectionsHandler.java:246)\n\tat
>  
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:199)\n\tat
>  
> org.apache.solr.servlet.HttpSolrCall.handleAdmin(HttpSolrCall.java:734)\n\tat 
> org.apache.solr.servlet.HttpSolrCall.handleAdminRequest(HttpSolrCall.java:715)\n\tat
>  org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:496)\n\tat 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:377)\n\tat
>  
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:323)\n\tat
>  
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1634)\n\tat
>  
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:533)\n\tat
>  
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:146)\n\tat
>  
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)\n\tat
>  
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)\n\tat
>  
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:257)\n\tat
>  
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1595)\n\tat
>  
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:255)\n\tat
>  
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1317)\n\tat
>  
> org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:203)\n\tat
>  
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:473)\n\tat
>  
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1564)\n\tat
>  
> org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:201)\n\tat
>  
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1219)\n\tat
>  
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:144)\n\tat
>  
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:219)\n\tat
>  
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:126)\n\tat
>  
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)\n\tat
>  
> org.eclipse.jetty.rewrite.handler.RewriteHandler.handle(RewriteHandler.java:335)\n\tat
>  
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)\n\tat
>  org.eclipse.jetty.server.Server.handle(Server.java:531)\n\tat 
> org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:352)\n\tat 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:260)\n\tat
>  
> org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:281)\n\tat
>  org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:102)\n\tat 
> org.eclipse.jetty.io.ChannelEndPoint$2.run(ChannelEndPoint.java:118)\n\tat 
> org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.runTask(EatWhatYouKill.java:333)\n\tat
>  
> 

[jira] [Commented] (SOLR-13383) auto-scaling not working in solr7.4 - autoaddreplica

2019-04-09 Thread Richard (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13383?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16813169#comment-16813169
 ] 

Richard commented on SOLR-13383:


I think you're getting confused with some of the autoscaling features. 

When you're creating a collection you shouldn't need to specify 
{{autoAddReplica}}. If you run the following:
{code}
curl -s "http://localhost:8080/solr/admin/autoscaling; 
{code}
you should see in the response, something like the following:
{code}
".auto_add_replicas":{
  "name":".auto_add_replicas",
  "event":"nodeLost",
  "waitFor":120,
  "actions":[{
  "name":"auto_add_replicas_plan",
  "class":"solr.AutoAddReplicasPlanAction"},
{
  "name":"execute_plan",
  "class":"solr.ExecutePlanAction"}],
  "enabled":true}},
{code}
This comes "out of the box" with autoscaling and is enabled by default.

Your nodeLost trigger is also confusing: you're asking Solr to delete a node 
once a node has been lost? What if your cluster were only 2 nodes?
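
As an untested sketch (not verified against 7.4): since the validation error 
quoted in this issue is about {{preferredOperation}} being an unknown property 
in 7.4, registering the same trigger without that property should at least pass 
validation, and the compute/execute plan actions will still act according to 
the cluster policy when a node is added:
{code}
# Hedged sketch, same payload as in the description minus the property that
# 7.4's trigger validation rejects.
curl http://localhost:8983/solr/admin/autoscaling -H 'Content-type:application/json' -d '{
  "set-trigger": {
    "name": "node_added_trigger",
    "event": "nodeAdded",
    "waitFor": "5s",
    "enabled": true,
    "actions": [
      {"name": "compute_plan", "class": "solr.ComputePlanAction"},
      {"name": "execute_plan", "class": "solr.ExecutePlanAction"}
    ]
  }
}'
{code}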


> auto-scaling not working in solr7.4 - autoaddreplica
> 
>
> Key: SOLR-13383
> URL: https://issues.apache.org/jira/browse/SOLR-13383
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: config-api
>Affects Versions: 7.4
> Environment: Production
>Reporter: AntonyJohnson
>Priority: Blocker
>  Labels: performance
> Fix For: 7.4.1
>
>
> We're able to create a new server via auto-scaling for our Solr 7.4 cluster, 
> but the newly created server is not added to our Solr cluster 
> automatically. Are there any settings or configurations we need to add in 
> order to add the replica automatically to the cluster for any collection?
> *commands used:* 
> {code:java}
> curl 
> "http://localhost:8983/solr/admin/collections?action=CREATE=my_collection_3=1=3=true;
> {code}
> Currently 3 nodes are in our cluster and I'm trying to add a 4th node, but it is 
> not getting added in Solr 7.4. The same scenario works fine in 
> Solr 7.5.
> *scaling policy used:*
> {code:java}
> 1)
> curl http://localhost:8983/solr/admin/autoscaling -H 
> 'Content-type:application/json' -d '{
>  "set-cluster-policy" : [{
>  "replica" : "1",
>  "shard" : "#EACH",
>  "node" : "#ANY",
>  }]
> }'
> 2)###Node Added Trigger
> curl http://localhost:8983/solr/admin/autoscaling -H 
> 'Content-type:application/json' -d '{
> "set-trigger": {
> "name" : "node_added_trigger",
> "event" : "nodeAdded",
> "waitFor" : "5s",
> "preferredOperation": "ADDREPLICA",
> "enabled" : true,
> "actions" : [
> {
> "name" : "compute_plan",
> "class": "solr.ComputePlanAction"
> },
> {
> "name" : "execute_plan",
> "class": "solr.ExecutePlanAction"
> }
> ]
> }
> }'
> 3)###Node Lost Trigger
> curl http://localhost:8983/solr/admin/autoscaling -H 
> 'Content-type:application/json' -d '{
> "set-trigger": {
> "name" : "node_lost_trigger",
> "event" : "nodeLost",
> "waitFor" : "5s",
> "preferredOperation": "DELETENODE",
> "enabled" : true,
> "actions" : [
> {
> "name" : "compute_plan",
> "class": "solr.ComputePlanAction"
> },
> {
> "name" : "execute_plan",
> "class": "solr.ExecutePlanAction"
> }
> ]
> }
> }'
> {code}
> Note: the same policies (2, 3) are not working in 7.4.
> *Errors:*
> {code:java}
> [Mon Apr 08 11:52:00 UTC root@hawkeye-common ~]# curl 
> http://localhost:8983/solr/admin/autoscaling -H 
> 'Content-type:application/json' -d '{
> >  "set-trigger": {
> >   "name" : "node_added_trigger",
> >   "event" : "nodeAdded",
> >   "waitFor" : "5s",
> >   "preferredOperation": "ADDREPLICA",
> >   "enabled" : true,
> >   "actions" : [
> >{
> > "name" : "compute_plan",
> > "class": "solr.ComputePlanAction"
> >},
> >{
> > "name" : "execute_plan",
> > "class": "solr.ExecutePlanAction"
> >}
> >   ]
> >  }
> > }'
> {
>   "responseHeader":{
> "status":400,
> "QTime":5},
>   "result":"failure",
>   "WARNING":"This response format is experimental.  It is likely to change in 
> the future.",
>   "error":{
> "metadata":[
>   "error-class","org.apache.solr.api.ApiBag$ExceptionWithErrObject",
>   "root-error-class","org.apache.solr.api.ApiBag$ExceptionWithErrObject"],
> "details":[{
> "set-trigger":{
>   "name":"node_added_trigger",
>   "event":"nodeAdded",
>   "waitFor":"5s",
>   "preferredOperation":"ADDREPLICA",
>   "enabled":true,
>   "actions":[{
>   "name":"compute_plan",
>   "class":"solr.ComputePlanAction"},
> {
>   "name":"execute_plan",
>   "class":"solr.ExecutePlanAction"}]},
> "errorMessages":["Error validating trigger config node_added_trigger: 
> 

[jira] [Commented] (LUCENE-8753) New PostingFormat - UniformSplit

2019-04-09 Thread Adrien Grand (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8753?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16813164#comment-16813164
 ] 

Adrien Grand commented on LUCENE-8753:
--

bq. BlockTree and UniformSplit had the same QPS for Term and Phrase queries. I 
didn't understand why a different behavior between a small and a large index.

I think this is expected. Query processing needs to look up the term in the 
terms dict and then process documents that contain this term. When the index 
gets larger, postings usually grow more quickly than the terms dictionary, so 
processing postings takes more time relatively compared to looking up the term 
in the terms dictionary. Term dictionary lookup performance only really matters 
for queries that have few matches (which you somehow simulated by running the 
benchmark on wikimedium500k) and updates, which are simulated by the PKLookup 
task.

> New PostingFormat - UniformSplit
> 
>
> Key: LUCENE-8753
> URL: https://issues.apache.org/jira/browse/LUCENE-8753
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/codecs
>Affects Versions: 8.0
>Reporter: Bruno Roustant
>Priority: Major
> Attachments: Uniform Split Technique.pdf, luceneutil.benchmark.txt
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> This is a proposal to add a new PostingsFormat called "UniformSplit" with 4 
> objectives:
>  - Clear design and simple code.
>  - Easily extensible, for both the logic and the index format.
>  - Light memory usage with a very compact FST.
>  - Focus on efficient TermQuery, PhraseQuery and PrefixQuery performance.
> (the pdf attached explains visually the technique in more details)
>  The principle is to split the list of terms into blocks and use a FST to 
> access the block, but not as a prefix trie, rather with a seek-floor pattern. 
> For the selection of the blocks, there is a target average block size (number 
> of terms), with an allowed delta variation (10%) to compare the terms and 
> select the one with the minimal distinguishing prefix.
>  There are also several optimizations inside the block to make it more 
> compact and speed up the loading/scanning.
> The performance obtained is interesting with the luceneutil benchmark, 
> comparing UniformSplit with BlockTree. Find it in the first comment and also 
> attached for better formatting.
> Although the precise percentages vary between runs, three main points:
>  - TermQuery and PhraseQuery are improved.
>  - PrefixQuery and WildcardQuery are ok.
>  - Fuzzy queries are clearly less performant, because BlockTree is so 
> optimized for them.
> Compared to BlockTree, FST size is reduced by 15%, and segment writing time 
> is reduced by 20%. So this PostingsFormat scales to lots of docs, as 
> BlockTree.
> This initial version passes all Lucene tests. Use “ant test 
> -Dtests.codec=UniformSplitTesting” to test with this PostingsFormat.
> Subjectively, we think we have fulfilled our goal of code simplicity. And we 
> have already exercised this PostingsFormat extensibility to create a 
> different flavor for our own use-case.
> Contributors: Juan Camilo Rodriguez Duran, Bruno Roustant, David Smiley



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13383) auto-scaling not working in solr7.4 - autoaddreplica

2019-04-09 Thread Christine Poerschke (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13383?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16813151#comment-16813151
 ] 

Christine Poerschke commented on SOLR-13383:


SOLR-12715 and SOLR-12716 sound related.

> auto-scaling not working in solr7.4 - autoaddreplica
> 
>
> Key: SOLR-13383
> URL: https://issues.apache.org/jira/browse/SOLR-13383
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: config-api
>Affects Versions: 7.4
> Environment: Production
>Reporter: AntonyJohnson
>Priority: Blocker
>  Labels: performance
> Fix For: 7.4.1
>
>
> We're able to create a new server via auto-scaling for our Solr 7.4 cluster, 
> but the newly created server is not added to our Solr cluster 
> automatically. Are there any settings or configurations we need to add in 
> order to add the replica automatically to the cluster for any collection?
> *commands used:* 
> {code:java}
> curl 
> "http://localhost:8983/solr/admin/collections?action=CREATE=my_collection_3=1=3=true;
> {code}
> Currently 3 nodes are in our cluster and I'm trying to add a 4th node, but it is 
> not getting added in Solr 7.4. The same scenario works fine in 
> Solr 7.5.
> *scaling policy used:*
> {code:java}
> 1)
> curl http://localhost:8983/solr/admin/autoscaling -H 
> 'Content-type:application/json' -d '{
>  "set-cluster-policy" : [{
>  "replica" : "1",
>  "shard" : "#EACH",
>  "node" : "#ANY",
>  }]
> }'
> 2)###Node Added Trigger
> curl http://localhost:8983/solr/admin/autoscaling -H 
> 'Content-type:application/json' -d '{
> "set-trigger": {
> "name" : "node_added_trigger",
> "event" : "nodeAdded",
> "waitFor" : "5s",
> "preferredOperation": "ADDREPLICA",
> "enabled" : true,
> "actions" : [
> {
> "name" : "compute_plan",
> "class": "solr.ComputePlanAction"
> },
> {
> "name" : "execute_plan",
> "class": "solr.ExecutePlanAction"
> }
> ]
> }
> }'
> 3)###Node Lost Trigger
> curl http://localhost:8983/solr/admin/autoscaling -H 
> 'Content-type:application/json' -d '{
> "set-trigger": {
> "name" : "node_lost_trigger",
> "event" : "nodeLost",
> "waitFor" : "5s",
> "preferredOperation": "DELETENODE",
> "enabled" : true,
> "actions" : [
> {
> "name" : "compute_plan",
> "class": "solr.ComputePlanAction"
> },
> {
> "name" : "execute_plan",
> "class": "solr.ExecutePlanAction"
> }
> ]
> }
> }'
> {code}
> Note: the same policies (2, 3) are not working in 7.4.
> *Errors:*
> {code:java}
> [Mon Apr 08 11:52:00 UTC root@hawkeye-common ~]# curl 
> http://localhost:8983/solr/admin/autoscaling -H 
> 'Content-type:application/json' -d '{
> >  "set-trigger": {
> >   "name" : "node_added_trigger",
> >   "event" : "nodeAdded",
> >   "waitFor" : "5s",
> >   "preferredOperation": "ADDREPLICA",
> >   "enabled" : true,
> >   "actions" : [
> >{
> > "name" : "compute_plan",
> > "class": "solr.ComputePlanAction"
> >},
> >{
> > "name" : "execute_plan",
> > "class": "solr.ExecutePlanAction"
> >}
> >   ]
> >  }
> > }'
> {
>   "responseHeader":{
> "status":400,
> "QTime":5},
>   "result":"failure",
>   "WARNING":"This response format is experimental.  It is likely to change in 
> the future.",
>   "error":{
> "metadata":[
>   "error-class","org.apache.solr.api.ApiBag$ExceptionWithErrObject",
>   "root-error-class","org.apache.solr.api.ApiBag$ExceptionWithErrObject"],
> "details":[{
> "set-trigger":{
>   "name":"node_added_trigger",
>   "event":"nodeAdded",
>   "waitFor":"5s",
>   "preferredOperation":"ADDREPLICA",
>   "enabled":true,
>   "actions":[{
>   "name":"compute_plan",
>   "class":"solr.ComputePlanAction"},
> {
>   "name":"execute_plan",
>   "class":"solr.ExecutePlanAction"}]},
> "errorMessages":["Error validating trigger config node_added_trigger: 
> TriggerValidationException{name=node_added_trigger, 
> details='{preferredOperation=unknown property}'}"]}],
> "msg":"Error in command payload",
> "code":400}}
> [Mon Apr 08 11:52:09 UTC root@hawkeye-common ~]#
> [Mon Apr 08 11:52:16 UTC root@hawkeye-common ~]# curl 
> http://localhost:8983/solr/admin/autoscaling -H 
> 'Content-type:application/json' -d '{
> >  "set-trigger": {
> >   "name" : "node_lost_trigger",
> >   "event" : "nodeLost",
> >   "waitFor" : "5s",
> >   "preferredOperation": "DELETENODE",
> >   "enabled" : true,
> >   "actions" : [
> >{
> > "name" : "compute_plan",
> > "class": "solr.ComputePlanAction"
> >},
> >{
> > "name" : "execute_plan",
> > "class": "solr.ExecutePlanAction"
> >}
> >   ]
> >  }
> > }'
> {
>   "responseHeader":{
> "status":400,
> 

Re: Welcome Tomoko Uchida as Lucene/Solr committer

2019-04-09 Thread Christine Poerschke (BLOOMBERG/ LONDON)
Welcome Tomoko!

From: dev@lucene.apache.org At: 04/08/19 16:20:59 To: dev@lucene.apache.org, 
tomoko.uchida.1...@gmail.com
Subject: Welcome Tomoko Uchida as Lucene/Solr committer

Hi all,

Please join me in welcoming Tomoko Uchida as the latest Lucene/Solr committer! 

She has been working on https://issues.apache.org/jira/browse/LUCENE-2562 for 
several years with awesome progress and finally we got the fantastic Luke as a 
branch on ASF JIRA: 
https://gitbox.apache.org/repos/asf?p=lucene-solr.git;a=shortlog;h=refs/heads/ji
ra/lucene-2562-luke-swing-3
Looking forward to the first release of Apache Lucene 8.1 with Luke bundled in 
the distribution. I will take care of merging it to the master and 8.x branches 
together with her once she gets her ASF account.

Tomoko also helped with the Japanese and Korean Analyzers.  

Congratulations and Welcome, Tomoko! Tomoko, it's traditional for you to 
introduce yourself with a brief bio.

Uwe & Robert (who nominated Tomoko)

-
Uwe Schindler
Achterdiek 19, D-28357 Bremen
https://www.thetaphi.de
eMail: u...@thetaphi.de


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org




[jira] [Commented] (SOLR-13381) Unexpected docvalues type SORTED_NUMERIC Exception when grouping by a PointField facet

2019-04-09 Thread Haochao Zhuang (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13381?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16813117#comment-16813117
 ] 

Haochao Zhuang commented on SOLR-13381:
---

I tried it in Lucene 7.5. TrieIntField has the same problem.

> Unexpected docvalues type SORTED_NUMERIC Exception when grouping by a 
> PointField facet
> --
>
> Key: SOLR-13381
> URL: https://issues.apache.org/jira/browse/SOLR-13381
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: faceting
>Affects Versions: 7.0, 7.6, 7.7, 7.7.1
> Environment: solr, solrcloud
>Reporter: Zhu JiaJun
>Priority: Major
>
> Hey,
> I got an "Unexpected docvalues type SORTED_NUMERIC" exception when I perform 
> group facet on an IntPointField. Debugging into the source code, the cause is 
> that internally the docvalue type for PointField is "NUMERIC" (single value) 
> or "SORTED_NUMERIC" (multi value), while the TermGroupFacetCollector class 
> requires the facet field must have a "SORTED" or "SOTRTED_SET" docvalue type: 
> [https://github.com/apache/lucene-solr/blob/2480b74887eff01f729d62a57b415d772f947c91/lucene/grouping/src/java/org/apache/lucene/search/grouping/TermGroupFacetCollector.java#L313]
>  
> When I change the schema for all int fields to TrieIntField, the group facet then 
> works, since internally the docvalue type for TrieField is SORTED (single 
> value) or SORTED_SET (multi value).
> Given that "TrieField" is deprecated in Solr 7, please help with this 
> grouping facet issue for PointField. I also commented on this issue in SOLR-7495.
>  
> In addition, all places using "${solr.tests.IntegerFieldType}" in the unit test 
> files seem to be using "TrieIntField"; if changed to "IntPointField", 
> some unit tests will fail, for example: 
> [https://github.com/apache/lucene-solr/blob/3de0b3671998cc9bc723d10f1b31ce48cbd4fa64/solr/core/src/test/org/apache/solr/request/SimpleFacetsTest.java#L417]
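
A hedged, standalone Lucene illustration of the mismatch described above (not 
Solr's code; it assumes Lucene 8.x for {{ByteBuffersDirectory}} and mimics a 
multi-valued point field by indexing SORTED_NUMERIC doc values, as the 
description states):
{code:java}
import org.apache.lucene.document.Document;
import org.apache.lucene.document.IntPoint;
import org.apache.lucene.document.SortedNumericDocValuesField;
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.index.DocValues;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.index.LeafReader;
import org.apache.lucene.store.ByteBuffersDirectory;

public class DocValuesTypeMismatch {
  public static void main(String[] args) throws Exception {
    ByteBuffersDirectory dir = new ByteBuffersDirectory();
    try (IndexWriter writer = new IndexWriter(dir, new IndexWriterConfig())) {
      Document doc = new Document();
      doc.add(new IntPoint("price", 42));
      // Numeric doc values, like a docValues-enabled *PointField produces.
      doc.add(new SortedNumericDocValuesField("price", 42));
      writer.addDocument(doc);
    }
    try (DirectoryReader reader = DirectoryReader.open(dir)) {
      LeafReader leaf = reader.leaves().get(0).reader();
      // Throws IllegalStateException: unexpected docvalues type SORTED_NUMERIC,
      // the same failure mode TermGroupFacetCollector runs into.
      DocValues.getSortedSet(leaf, "price");
    }
  }
}
{code}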



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (LUCENE-8753) New PostingFormat - UniformSplit

2019-04-09 Thread Bruno Roustant (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8753?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16813106#comment-16813106
 ] 

Bruno Roustant edited comment on LUCENE-8753 at 4/9/19 8:15 AM:


It took me some time to run the wikimediumall 8 GB index (I didn't anticipate 1h 
of indexing initially - a little less for UniformSplit - and then I had an 
exception about facets).

Then I got results which surprised me. BlockTree and UniformSplit had the same 
QPS for Term and Phrase queries. I didn't understand why there was a different 
behavior between a small and a large index.

Then I thought about 2 explanations:
 * A much larger index could mean fewer OS IO cache hits. I ran the benchmark with 
a 16 GB laptop and a 64 GB desktop. Actually I got nearly no difference in my 
test.
 * A much larger index could mean more results. So the time spent scoring and 
ranking the results could become much larger and diminish the effect of a change 
in the dictionary. I have no clue there at the moment.

Here is the result of wikimediumall on a 64 GB desktop:

(I used -Jira option, but it does not seem to recognize the "color" tag)

||Task||QPS BT||StdDev BT||QPS CUS||StdDev CUS||Pct diff||
|Fuzzy1|72.81|3.11|21.77|0.71|-72% - -67%|
|Fuzzy2|66.77|3.77|20.41|0.67|-72% - -66%|
|Respell|8.85|0.64|6.02|0.33|-40% - -22%|
|PKLookup|130.83|3.96|121.66|12.37|-18% - 5%|
|Wildcard|25.03|1.33|23.93|1.19|-13% - 6%|
|HighTermMonthSort|19.03|2.55|18.40|1.56|-21% - 21%|
|Prefix3|12.47|0.82|12.10|0.78|-14% - 10%|
|LowTerm|182.95|14.94|177.97|18.67|-19% - 17%|
|IntNRQ|5.21|0.54|5.09|0.56|-21% - 21%|
|MedTerm|90.74|3.99|89.14|4.24|-10% - 7%|
|HighTerm|42.54|1.95|41.86|2.00|-10% - 8%|
|OrNotHighLow|532.96|16.16|526.86|24.40|-8% - 6%|
|HighSloppyPhrase|12.00|0.39|11.90|0.48|-7% - 6%|
|OrNotHighMed|53.64|1.08|53.37|1.22|-4% - 3%|
|MedSloppyPhrase|31.83|0.59|31.67|0.78|-4% - 3%|
|HighPhrase|32.24|0.85|32.09|0.81|-5% - 4%|
|LowSloppyPhrase|29.51|0.43|29.40|0.58|-3% - 3%|
|AndHighHigh|26.97|0.31|26.88|0.37|-2% - 2%|
|MedPhrase|4.95|0.16|4.94|0.15|-6% - 6%|
|AndHighMed|50.03|0.72|49.97|0.72|-2% - 2%|
|OrNotHighHigh|18.85|0.76|18.85|0.82|-8% - 8%|
|OrHighNotHigh|9.35|0.32|9.35|0.35|-6% - 7%|
|OrHighLow|15.85|0.59|15.85|0.52|-6% - 7%|
|OrHighNotLow|17.56|0.71|17.57|0.70|-7% - 8%|
|AndHighLow|284.39|4.41|284.60|5.65|-3% - 3%|
|LowPhrase|224.73|4.35|224.97|4.84|-3% - 4%|
|OrHighNotMed|13.21|0.49|13.22|0.50|-7% - 7%|
|OrHighMed|13.22|0.73|13.30|0.70|-9% - 12%|
|OrHighHigh|7.56|0.43|7.62|0.41|-9% - 12%|
|BrowseMonthTaxoFacets|7.96|1.92|8.06|1.78|-36% - 63%|
|LowSpanNear|11.84|0.19|11.99|0.21|-2% - 4%|
|HighTermDayOfYearSort|20.05|1.40|20.31|2.15|-15% - 20%|
|BrowseDayOfYearTaxoFacets|7.96|1.91|8.07|1.85|-37% - 64%|
|BrowseMonthSSDVFacets|7.95|1.90|8.07|1.87|-37% - 64%|
|BrowseDayOfYearSSDVFacets|7.96|1.93|8.08|1.84|-36% - 64%|
|MedSpanNear|10.50|0.18|10.67|0.21|-2% - 5%|
|BrowseDateTaxoFacets|7.91|1.81|8.07|1.83|-35% - 62%|
|HighSpanNear|8.68|0.19|8.88|0.19|-2% - 6%|
 


was (Author: bruno.roustant):
It took me some time to run wikimedimall 8 GB index (didn't anticipate 1h 
indexing initially - a little less for UniformSplit, then I had an exception 
about facets).

Then I got results which surprised me. BlockTree and UniformSplit had the same 
QPS for Term and Phrase queries. I didn't understand why a different behavior 
between a small and a large index.

Then I thought about 2 explanations:
 * Much larger index could mean less OS IO cache 

[jira] [Comment Edited] (LUCENE-8753) New PostingFormat - UniformSplit

2019-04-09 Thread Bruno Roustant (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8753?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16813106#comment-16813106
 ] 

Bruno Roustant edited comment on LUCENE-8753 at 4/9/19 8:13 AM:


It took me some time to run the wikimediumall 8 GB index (I didn't anticipate 1h 
of indexing initially - a little less for UniformSplit - and then I had an 
exception about facets).

Then I got results which surprised me. BlockTree and UniformSplit had the same 
QPS for Term and Phrase queries. I didn't understand why there was a different 
behavior between a small and a large index.

Then I thought about 2 explanations:
 * A much larger index could mean fewer OS IO cache hits. I ran the benchmark with 
a 16 GB laptop and a 64 GB desktop. Actually I got nearly no difference in my 
test.
 * A much larger index could mean more results. So the time spent scoring and 
ranking the results could become much larger and diminish the effect of a change 
in the dictionary. I have no clue there at the moment.

Here is the result of wikimediumall on a 64 GB desktop:

(I used -Jira option, but it does not seem to recognize the \{color} tag)
||Task||QPS BT||StdDev BT||QPS CUS||StdDev CUS||Pct diff||
|Fuzzy1|72.81|3.11|21.77|0.71|-72% - -67%|
|Fuzzy2|66.77|3.77|20.41|0.67|-72% - -66%|
|Respell|8.85|0.64|6.02|0.33|-40% - -22%|
|PKLookup|130.83|3.96|121.66|12.37|-18% - 5%|
|Wildcard|25.03|1.33|23.93|1.19|-13% - 6%|
|HighTermMonthSort|19.03|2.55|18.40|1.56|-21% - 21%|
|Prefix3|12.47|0.82|12.10|0.78|-14% - 10%|
|LowTerm|182.95|14.94|177.97|18.67|-19% - 17%|
|IntNRQ|5.21|0.54|5.09|0.56|-21% - 21%|
|MedTerm|90.74|3.99|89.14|4.24|-10% - 7%|
|HighTerm|42.54|1.95|41.86|2.00|-10% - 8%|
|OrNotHighLow|532.96|16.16|526.86|24.40|-8% - 6%|
|HighSloppyPhrase|12.00|0.39|11.90|0.48|-7% - 6%|
|OrNotHighMed|53.64|1.08|53.37|1.22|-4% - 3%|
|MedSloppyPhrase|31.83|0.59|31.67|0.78|-4% - 3%|
|HighPhrase|32.24|0.85|32.09|0.81|-5% - 4%|
|LowSloppyPhrase|29.51|0.43|29.40|0.58|-3% - 3%|
|AndHighHigh|26.97|0.31|26.88|0.37|-2% - 2%|
|MedPhrase|4.95|0.16|4.94|0.15|-6% - 6%|
|AndHighMed|50.03|0.72|49.97|0.72|-2% - 2%|
|OrNotHighHigh|18.85|0.76|18.85|0.82|-8% - 8%|
|OrHighNotHigh|9.35|0.32|9.35|0.35|-6% - 7%|
|OrHighLow|15.85|0.59|15.85|0.52|-6% - 7%|
|OrHighNotLow|17.56|0.71|17.57|0.70|-7% - 8%|
|AndHighLow|284.39|4.41|284.60|5.65|-3% - 3%|
|LowPhrase|224.73|4.35|224.97|4.84|-3% - 4%|
|OrHighNotMed|13.21|0.49|13.22|0.50|-7% - 7%|
|OrHighMed|13.22|0.73|13.30|0.70|-9% - 12%|
|OrHighHigh|7.56|0.43|7.62|0.41|-9% - 12%|
|BrowseMonthTaxoFacets|7.96|1.92|8.06|1.78|-36% - 63%|
|LowSpanNear|11.84|0.19|11.99|0.21|-2% - 4%|
|HighTermDayOfYearSort|20.05|1.40|20.31|2.15|-15% - 20%|
|BrowseDayOfYearTaxoFacets|7.96|1.91|8.07|1.85|-37% - 64%|
|BrowseMonthSSDVFacets|7.95|1.90|8.07|1.87|-37% - 64%|
|BrowseDayOfYearSSDVFacets|7.96|1.93|8.08|1.84|-36% - 64%|
|MedSpanNear|10.50|0.18|10.67|0.21|-2% - 5%|
|BrowseDateTaxoFacets|7.91|1.81|8.07|1.83|-35% - 62%|
|HighSpanNear|8.68|0.19|8.88|0.19|-2% - 6%|


was (Author: bruno.roustant):
It took me some time to run the wikimediumall 8 GB index (I didn't anticipate the 
1h indexing initially - a little less for UniformSplit - and then I hit an 
exception about facets).

Then I got results that surprised me: BlockTree and UniformSplit had the same QPS 
for Term and Phrase queries. I didn't understand why the behavior would differ 
between a small and a large index.

I thought about two explanations:
 * A much larger index could mean fewer OS IO cache hits. I ran the benchmark on 
a 16 GB 
a 16 GB 

[jira] [Commented] (LUCENE-8753) New PostingFormat - UniformSplit

2019-04-09 Thread Bruno Roustant (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8753?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16813106#comment-16813106
 ] 

Bruno Roustant commented on LUCENE-8753:


It took me some time to run the wikimediumall 8 GB index (I didn't anticipate the 
1h indexing initially - a little less for UniformSplit - and then I hit an 
exception about facets).

Then I got results that surprised me: BlockTree and UniformSplit had the same QPS 
for Term and Phrase queries. I didn't understand why the behavior would differ 
between a small and a large index.

I thought about two explanations:
 * A much larger index could mean fewer OS IO cache hits. I ran the benchmark on 
a 16 GB laptop and on a 64 GB desktop; in practice I got nearly no difference in 
my test.
 * A much larger index could mean more results, so the time spent scoring and 
ranking them could grow much larger and diminish the effect of a change in the 
dictionary. I have no clue there at the moment.

Here is the result of wikimediumall on a 64 GB desktop:

||Task||QPS BT||StdDev BT||QPS CUS||StdDev CUS||Pct diff||
|Fuzzy1|72.81|3.11|21.77|0.71|\{color:red}72%\{color}-\{color:red}67%\{color}|
|Fuzzy2|66.77|3.77|20.41|0.67|\{color:red}72%\{color}-\{color:red}66%\{color}|
|Respell|8.85|0.64|6.02|0.33|\{color:red}40%\{color}-\{color:red}22%\{color}|
|PKLookup|130.83|3.96|121.66|12.37|\{color:red}18%\{color}-\{color:green}5%\{color}|
|Wildcard|25.03|1.33|23.93|1.19|\{color:red}13%\{color}-\{color:green}6%\{color}|
|HighTermMonthSort|19.03|2.55|18.40|1.56|\{color:red}21%\{color}-\{color:green}21%\{color}|
|Prefix3|12.47|0.82|12.10|0.78|\{color:red}14%\{color}-\{color:green}10%\{color}|
|LowTerm|182.95|14.94|177.97|18.67|\{color:red}19%\{color}-\{color:green}17%\{color}|
|IntNRQ|5.21|0.54|5.09|0.56|\{color:red}21%\{color}-\{color:green}21%\{color}|
|MedTerm|90.74|3.99|89.14|4.24|\{color:red}10%\{color}-\{color:green}7%\{color}|
|HighTerm|42.54|1.95|41.86|2.00|\{color:red}10%\{color}-\{color:green}8%\{color}|
|OrNotHighLow|532.96|16.16|526.86|24.40|\{color:red}8%\{color}-\{color:green}6%\{color}|
|HighSloppyPhrase|12.00|0.39|11.90|0.48|\{color:red}7%\{color}-\{color:green}6%\{color}|
|OrNotHighMed|53.64|1.08|53.37|1.22|\{color:red}4%\{color}-\{color:green}3%\{color}|
|MedSloppyPhrase|31.83|0.59|31.67|0.78|\{color:red}4%\{color}-\{color:green}3%\{color}|
|HighPhrase|32.24|0.85|32.09|0.81|\{color:red}5%\{color}-\{color:green}4%\{color}|
|LowSloppyPhrase|29.51|0.43|29.40|0.58|\{color:red}3%\{color}-\{color:green}3%\{color}|
|AndHighHigh|26.97|0.31|26.88|0.37|\{color:red}2%\{color}-\{color:green}2%\{color}|
|MedPhrase|4.95|0.16|4.94|0.15|\{color:red}6%\{color}-\{color:green}6%\{color}|
|AndHighMed|50.03|0.72|49.97|0.72|\{color:red}2%\{color}-\{color:green}2%\{color}|
|OrNotHighHigh|18.85|0.76|18.85|0.82|\{color:red}8%\{color}-\{color:green}8%\{color}|
|OrHighNotHigh|9.35|0.32|9.35|0.35|\{color:red}6%\{color}-\{color:green}7%\{color}|
|OrHighLow|15.85|0.59|15.85|0.52|\{color:red}6%\{color}-\{color:green}7%\{color}|
|OrHighNotLow|17.56|0.71|17.57|0.70|\{color:red}7%\{color}-\{color:green}8%\{color}|
|AndHighLow|284.39|4.41|284.60|5.65|\{color:red}3%\{color}-\{color:green}3%\{color}|
|LowPhrase|224.73|4.35|224.97|4.84|\{color:red}3%\{color}-\{color:green}4%\{color}|
|OrHighNotMed|13.21|0.49|13.22|0.50|\{color:red}7%\{color}-\{color:green}7%\{color}|
|OrHighMed|13.22|0.73|13.30|0.70|\{color:red}9%\{color}-\{color:green}12%\{color}|
|OrHighHigh|7.56|0.43|7.62|0.41|\{color:red}9%\{color}-\{color:green}12%\{color}|
|BrowseMonthTaxoFacets|7.96|1.92|8.06|1.78|\{color:red}36%\{color}-\{color:green}63%\{color}|
|LowSpanNear|11.84|0.19|11.99|0.21|\{color:red}2%\{color}-\{color:green}4%\{color}|
|HighTermDayOfYearSort|20.05|1.40|20.31|2.15|\{color:red}15%\{color}-\{color:green}20%\{color}|
|BrowseDayOfYearTaxoFacets|7.96|1.91|8.07|1.85|\{color:red}37%\{color}-\{color:green}64%\{color}|
|BrowseMonthSSDVFacets|7.95|1.90|8.07|1.87|\{color:red}37%\{color}-\{color:green}64%\{color}|
|BrowseDayOfYearSSDVFacets|7.96|1.93|8.08|1.84|\{color:red}36%\{color}-\{color:green}64%\{color}|
|MedSpanNear|10.50|0.18|10.67|0.21|\{color:red}2%\{color}-\{color:green}5%\{color}|
|BrowseDateTaxoFacets|7.91|1.81|8.07|1.83|\{color:red}35%\{color}-\{color:green}62%\{color}|
|HighSpanNear|8.68|0.19|8.88|0.19|\{color:red}2%\{color}-\{color:green}6%\{color}|

> New PostingFormat - UniformSplit
> 
>
> Key: LUCENE-8753
> URL: https://issues.apache.org/jira/browse/LUCENE-8753
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/codecs
>Affects Versions: 8.0
>Reporter: Bruno Roustant
>Priority: Major
> Attachments: Uniform Split Technique.pdf, luceneutil.benchmark.txt
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> This is a proposal to add a new PostingsFormat called "UniformSplit" with 4 
> 

Re: Welcome Tomoko Uchida as Lucene/Solr committer

2019-04-09 Thread Đạt Cao Mạnh
Welcome and congrats Tomoko!

On Tue, Apr 9, 2019 at 7:11 AM Karl Wright  wrote:

> Welcome!
> Karl
>
> On Mon, Apr 8, 2019 at 8:28 PM Christian Moen  wrote:
>
>> Congratulations, Tomoko-san!
>>
>> On Tue, Apr 9, 2019 at 12:20 AM Uwe Schindler  wrote:
>>
>>> Hi all,
>>>
>>> Please join me in welcoming Tomoko Uchida as the latest Lucene/Solr
>>> committer!
>>>
>>> She has been working on
>>> https://issues.apache.org/jira/browse/LUCENE-2562 for several years
>>> with awesome progress and finally we got the fantastic Luke as a branch on
>>> ASF JIRA:
>>> https://gitbox.apache.org/repos/asf?p=lucene-solr.git;a=shortlog;h=refs/heads/jira/lucene-2562-luke-swing-3
>>> Looking forward to the first release of Apache Lucene 8.1 with Luke
>>> bundled in the distribution. I will take care of merging it to master and
>>> 8.x branches together with her once she gets her ASF account.
>>>
>>> Tomoko also helped with the Japanese and Korean Analyzers.
>>>
>>> Congratulations and Welcome, Tomoko! Tomoko, it's traditional for you to
>>> introduce yourself with a brief bio.
>>>
>>> Uwe & Robert (who nominated Tomoko)
>>>
>>> -
>>> Uwe Schindler
>>> Achterdiek 19, D-28357 Bremen
>>> https://www.thetaphi.de
>>> eMail: u...@thetaphi.de
>>>
>>>
>>>
>>> -
>>> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>>> For additional commands, e-mail: dev-h...@lucene.apache.org
>>>
>>>

-- 
*Best regards,*
*Cao Mạnh Đạt*


*D.O.B: 31-07-1991 | Cell: (+84) 946.328.329 | E-mail: caomanhdat...@gmail.com*


[jira] [Updated] (PYLUCENE-47) Type matching in methods with same number of arguments

2019-04-09 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/PYLUCENE-47?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Petrus Hyvönen updated PYLUCENE-47:
---
Attachment: java-example-test-parameters.zip

> Type matching in methods with same number of arguments
> --
>
> Key: PYLUCENE-47
> URL: https://issues.apache.org/jira/browse/PYLUCENE-47
> Project: PyLucene
>  Issue Type: Bug
>Reporter: Petrus Hyvönen
>Priority: Major
> Attachments: java-example-test-parameters.zip
>
>
> If a method has several overloads with the same number of arguments and an 
> argument also positively matches the parameter type of more than one of them 
> (for example via a subclass relationship), the order in which the generated 
> code tests the overloads matters and gives unpredictable results. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PYLUCENE-47) Type matching in methods with same number of arguments

2019-04-09 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/PYLUCENE-47?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Petrus Hyvönen updated PYLUCENE-47:
---
Description: If a method has several overloads with the same number of arguments 
and an argument also positively matches the parameter type of more than one of 
them (for example via a subclass relationship), the order in which the generated 
code tests the overloads matters and gives unpredictable results. 
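As a purely hypothetical illustration (my own sketch, not taken from the attached 
java-example-test-parameters.zip): two Java overloads with the same number of 
arguments where a single argument instance matches both parameter types, so the 
overload a generated wrapper picks depends on the order in which it tests them.
{code:java}
// Hypothetical sketch of the ambiguity described above; the names are invented.
public class OverloadExample {

  // Both overloads take one argument; an Integer matches both parameter types,
  // since Integer is a subclass of Number, and Number is a subclass of Object.
  public String describe(Object value) {
    return "Object overload";
  }

  public String describe(Number value) {
    return "Number overload";
  }

  public static void main(String[] args) {
    // Plain Java resolves this call to the most specific overload ("Number overload").
    // A generated wrapper that simply tests parameter types in some fixed order
    // may instead dispatch to whichever overload it happens to test first.
    System.out.println(new OverloadExample().describe(Integer.valueOf(42)));
  }
}
{code}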

> Type matching in methods with same number of arguments
> --
>
> Key: PYLUCENE-47
> URL: https://issues.apache.org/jira/browse/PYLUCENE-47
> Project: PyLucene
>  Issue Type: Bug
>Reporter: Petrus Hyvönen
>Priority: Major
>
> If a method has several overloads with the same number of arguments and an 
> argument also positively matches the parameter type of more than one of them 
> (for example via a subclass relationship), the order in which the generated 
> code tests the overloads matters and gives unpredictable results. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (PYLUCENE-47) Type matching in methods with same number of arguments

2019-04-09 Thread JIRA
Petrus Hyvönen created PYLUCENE-47:
--

 Summary: Type matching in methods with same number of arguments
 Key: PYLUCENE-47
 URL: https://issues.apache.org/jira/browse/PYLUCENE-47
 Project: PyLucene
  Issue Type: Bug
Reporter: Petrus Hyvönen






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (SOLR-13383) auto-scaling not working in solr7.4 - autoaddreplica

2019-04-09 Thread AntonyJohnson (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13383?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

AntonyJohnson updated SOLR-13383:
-
Description: 
We're able to create a new server via auto-scaling for our Solr 7.4 cluster, but 
the newly created server is not added to the cluster automatically. Are there any 
settings or configurations we need to add so that a replica is added to the 
cluster automatically for any collection?

*commands used:* 
{code:java}
curl 
"http://localhost:8983/solr/admin/collections?action=CREATE=my_collection_3=1=3=true;
{code}
Currently there are 3 nodes in our cluster and I'm trying to add a 4th node, but 
it does not get added on Solr 7.4. The same scenario works fine on Solr 7.5.

*scaling policy used:*


{code:java}
1)
curl http://localhost:8983/solr/admin/autoscaling -H 
'Content-type:application/json' -d '{
 "set-cluster-policy" : [{
 "replica" : "1",
 "shard" : "#EACH",
 "node" : "#ANY",
 }]

}'
2)###Node Added Trigger
curl http://localhost:8983/solr/admin/autoscaling -H 
'Content-type:application/json' -d '{
"set-trigger": {
"name" : "node_added_trigger",
"event" : "nodeAdded",
"waitFor" : "5s",
"preferredOperation": "ADDREPLICA",
"enabled" : true,
"actions" : [
{
"name" : "compute_plan",
"class": "solr.ComputePlanAction"
},
{
"name" : "execute_plan",
"class": "solr.ExecutePlanAction"
}
]
}
}'

3)###Node Lost Trigger
curl http://localhost:8983/solr/admin/autoscaling -H 
'Content-type:application/json' -d '{
"set-trigger": {
"name" : "node_lost_trigger",
"event" : "nodeLost",
"waitFor" : "5s",
"preferredOperation": "DELETENODE",
"enabled" : true,
"actions" : [
{
"name" : "compute_plan",
"class": "solr.ComputePlanAction"
},
{
"name" : "execute_plan",
"class": "solr.ExecutePlanAction"
}
]
}
}'
{code}


Note: the same policies (2, 3) do not work in 7.4.

*Errors:*


{code:java}
[Mon Apr 08 11:52:00 UTC root@hawkeye-common ~]# curl 
http://localhost:8983/solr/admin/autoscaling -H 'Content-type:application/json' 
-d '{
>  "set-trigger": {
>   "name" : "node_added_trigger",
>   "event" : "nodeAdded",
>   "waitFor" : "5s",
>   "preferredOperation": "ADDREPLICA",
>   "enabled" : true,
>   "actions" : [
>{
> "name" : "compute_plan",
> "class": "solr.ComputePlanAction"
>},
>{
> "name" : "execute_plan",
> "class": "solr.ExecutePlanAction"
>}
>   ]
>  }
> }'
{
  "responseHeader":{
"status":400,
"QTime":5},
  "result":"failure",
  "WARNING":"This response format is experimental.  It is likely to change in 
the future.",
  "error":{
"metadata":[
  "error-class","org.apache.solr.api.ApiBag$ExceptionWithErrObject",
  "root-error-class","org.apache.solr.api.ApiBag$ExceptionWithErrObject"],
"details":[{
"set-trigger":{
  "name":"node_added_trigger",
  "event":"nodeAdded",
  "waitFor":"5s",
  "preferredOperation":"ADDREPLICA",
  "enabled":true,
  "actions":[{
  "name":"compute_plan",
  "class":"solr.ComputePlanAction"},
{
  "name":"execute_plan",
  "class":"solr.ExecutePlanAction"}]},
"errorMessages":["Error validating trigger config node_added_trigger: 
TriggerValidationException{name=node_added_trigger, 
details='{preferredOperation=unknown property}'}"]}],
"msg":"Error in command payload",
"code":400}}
[Mon Apr 08 11:52:09 UTC root@hawkeye-common ~]#
[Mon Apr 08 11:52:16 UTC root@hawkeye-common ~]# curl 
http://localhost:8983/solr/admin/autoscaling -H 'Content-type:application/json' 
-d '{
>  "set-trigger": {
>   "name" : "node_lost_trigger",
>   "event" : "nodeLost",
>   "waitFor" : "5s",
>   "preferredOperation": "DELETENODE",
>   "enabled" : true,
>   "actions" : [
>{
> "name" : "compute_plan",
> "class": "solr.ComputePlanAction"
>},
>{
> "name" : "execute_plan",
> "class": "solr.ExecutePlanAction"
>}
>   ]
>  }
> }'
{
  "responseHeader":{
"status":400,
"QTime":1},
  "result":"failure",
  "WARNING":"This response format is experimental.  It is likely to change in 
the future.",
  "error":{
"metadata":[
  "error-class","org.apache.solr.api.ApiBag$ExceptionWithErrObject",
  "root-error-class","org.apache.solr.api.ApiBag$ExceptionWithErrObject"],
"details":[{
"set-trigger":{
  "name":"node_lost_trigger",
  "event":"nodeLost",
  "waitFor":"5s",
  "preferredOperation":"DELETENODE",
  "enabled":true,
  "actions":[{
  "name":"compute_plan",
  "class":"solr.ComputePlanAction"},
{
  "name":"execute_plan",
  "class":"solr.ExecutePlanAction"}]},
"errorMessages":["Error validating trigger config node_lost_trigger: 
TriggerValidationException{name=node_lost_trigger, 
details='{preferredOperation=unknown property}'}"]}],
"msg":"Error in command 

[jira] [Updated] (SOLR-13383) auto-scaling not working in solr7.4 - autoaddreplica

2019-04-09 Thread AntonyJohnson (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13383?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

AntonyJohnson updated SOLR-13383:
-
Description: 
We're able to create a new server via auto-scaling for our Solr 7.4 cluster, but 
the newly created server is not added to the cluster automatically. Are there any 
settings or configurations we need to add so that a replica is added to the 
cluster automatically for any collection?

*commands used:* 
{code:java}
curl 
"http://localhost:8983/solr/admin/collections?action=CREATE=my_collection_3=1=3=true;
{code}
Currently there are 3 nodes in our cluster and I'm trying to add a 4th node, but 
it does not get added on Solr 7.4. The same scenario works fine on Solr 7.5.

*scaling policy used:*
{code:java}
1)
curl http://localhost:8983/solr/admin/autoscaling -H 
'Content-type:application/json' -d '{
 "set-cluster-policy" : [{
 "replica" : "1",
 "shard" : "#EACH",
 "node" : "#ANY",
 }]

}'
2)###Node Added Trigger
curl http://localhost:8983/solr/admin/autoscaling -H 
'Content-type:application/json' -d '{
"set-trigger": {
"name" : "node_added_trigger",
"event" : "nodeAdded",
"waitFor" : "5s",
"preferredOperation": "ADDREPLICA",
"enabled" : true,
"actions" : [
{
"name" : "compute_plan",
"class": "solr.ComputePlanAction"
},
{
"name" : "execute_plan",
"class": "solr.ExecutePlanAction"
}
]
}
}'

3)###Node Lost Trigger
curl http://localhost:8983/solr/admin/autoscaling -H 
'Content-type:application/json' -d '{
"set-trigger": {
"name" : "node_lost_trigger",
"event" : "nodeLost",
"waitFor" : "5s",
"preferredOperation": "DELETENODE",
"enabled" : true,
"actions" : [
{
"name" : "compute_plan",
"class": "solr.ComputePlanAction"
},
{
"name" : "execute_plan",
"class": "solr.ExecutePlanAction"
}
]
}
}'{code}

Note: the same policies (2, 3) do not work in 7.4.

*Errors:*


{code:java}
[Mon Apr 08 11:52:00 UTC root@hawkeye-common ~]# curl 
http://localhost:8983/solr/admin/autoscaling -H 'Content-type:application/json' 
-d '{
>  "set-trigger": {
>   "name" : "node_added_trigger",
>   "event" : "nodeAdded",
>   "waitFor" : "5s",
>   "preferredOperation": "ADDREPLICA",
>   "enabled" : true,
>   "actions" : [
>{
> "name" : "compute_plan",
> "class": "solr.ComputePlanAction"
>},
>{
> "name" : "execute_plan",
> "class": "solr.ExecutePlanAction"
>}
>   ]
>  }
> }'
{
  "responseHeader":{
"status":400,
"QTime":5},
  "result":"failure",
  "WARNING":"This response format is experimental.  It is likely to change in 
the future.",
  "error":{
"metadata":[
  "error-class","org.apache.solr.api.ApiBag$ExceptionWithErrObject",
  "root-error-class","org.apache.solr.api.ApiBag$ExceptionWithErrObject"],
"details":[{
"set-trigger":{
  "name":"node_added_trigger",
  "event":"nodeAdded",
  "waitFor":"5s",
  "preferredOperation":"ADDREPLICA",
  "enabled":true,
  "actions":[{
  "name":"compute_plan",
  "class":"solr.ComputePlanAction"},
{
  "name":"execute_plan",
  "class":"solr.ExecutePlanAction"}]},
"errorMessages":["Error validating trigger config node_added_trigger: 
TriggerValidationException{name=node_added_trigger, 
details='{preferredOperation=unknown property}'}"]}],
"msg":"Error in command payload",
"code":400}}
[Mon Apr 08 11:52:09 UTC root@hawkeye-common ~]#
[Mon Apr 08 11:52:16 UTC root@hawkeye-common ~]# curl 
http://localhost:8983/solr/admin/autoscaling -H 'Content-type:application/json' 
-d '{
>  "set-trigger": {
>   "name" : "node_lost_trigger",
>   "event" : "nodeLost",
>   "waitFor" : "5s",
>   "preferredOperation": "DELETENODE",
>   "enabled" : true,
>   "actions" : [
>{
> "name" : "compute_plan",
> "class": "solr.ComputePlanAction"
>},
>{
> "name" : "execute_plan",
> "class": "solr.ExecutePlanAction"
>}
>   ]
>  }
> }'
{
  "responseHeader":{
"status":400,
"QTime":1},
  "result":"failure",
  "WARNING":"This response format is experimental.  It is likely to change in 
the future.",
  "error":{
"metadata":[
  "error-class","org.apache.solr.api.ApiBag$ExceptionWithErrObject",
  "root-error-class","org.apache.solr.api.ApiBag$ExceptionWithErrObject"],
"details":[{
"set-trigger":{
  "name":"node_lost_trigger",
  "event":"nodeLost",
  "waitFor":"5s",
  "preferredOperation":"DELETENODE",
  "enabled":true,
  "actions":[{
  "name":"compute_plan",
  "class":"solr.ComputePlanAction"},
{
  "name":"execute_plan",
  "class":"solr.ExecutePlanAction"}]},
"errorMessages":["Error validating trigger config node_lost_trigger: 
TriggerValidationException{name=node_lost_trigger, 
details='{preferredOperation=unknown property}'}"]}],
"msg":"Error in command payload",
 

[jira] [Created] (SOLR-13384) Security: Clear text Passwords are printed in solr-console.log

2019-04-09 Thread Sagar Sheth (JIRA)
Sagar Sheth created SOLR-13384:
--

 Summary: Security: Clear text Passwords are printed in 
solr-console.log
 Key: SOLR-13384
 URL: https://issues.apache.org/jira/browse/SOLR-13384
 Project: Solr
  Issue Type: Task
  Security Level: Public (Default Security Level. Issues are Public)
  Components: logging
Affects Versions: 5.5.5
Reporter: Sagar Sheth
 Attachments: solr-8983-console.log

I am using Solr 5.5.5 with SAS AML 7.1, and the passwords are printed in clear 
text in the logs. How can I disable this in log4j.properties? 

I have attached the solr-8983-console.log and have masked the passwords.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-13383) auto-scaling not working in solr7.4 - autoaddreplica

2019-04-09 Thread AntonyJohnson (JIRA)
AntonyJohnson created SOLR-13383:


 Summary: auto-scaling not working in solr7.4 - autoaddreplica
 Key: SOLR-13383
 URL: https://issues.apache.org/jira/browse/SOLR-13383
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
  Components: config-api
Affects Versions: 7.4
 Environment: Production
Reporter: AntonyJohnson
 Fix For: 7.4.1


We're able to create a new server via auto-scaling for our Solr 7.4 cluster, but 
the newly created server is not added to the cluster automatically. Are there any 
settings or configurations we need to add so that a replica is added to the 
cluster automatically for any collection?

*commands used:* 
{code:java}
curl 
"http://localhost:8983/solr/admin/collections?action=CREATE=my_collection_3=1=3=true;
{code}
Currently there are 3 nodes in our cluster and I'm trying to add a 4th node, but 
it does not get added on Solr 7.4. The same scenario works fine on Solr 7.5.

*scaling policy used:*
{code:java}
1)
curl http://localhost:8983/solr/admin/autoscaling -H 
'Content-type:application/json' -d '{
 "set-cluster-policy" : [{
 "replica" : "1",
 "shard" : "#EACH",
 "node" : "#ANY",
 }]

}'
2)###Node Added Trigger
curl http://localhost:8983/solr/admin/autoscaling -H 
'Content-type:application/json' -d '{
"set-trigger": {
"name" : "node_added_trigger",
"event" : "nodeAdded",
"waitFor" : "5s",
"preferredOperation": "ADDREPLICA",
"enabled" : true,
"actions" : [
{
"name" : "compute_plan",
"class": "solr.ComputePlanAction"
},
{
"name" : "execute_plan",
"class": "solr.ExecutePlanAction"
}
]
}
}'

3)###Node Lost Trigger
curl http://localhost:8983/solr/admin/autoscaling -H 
'Content-type:application/json' -d '{
"set-trigger": {
"name" : "node_lost_trigger",
"event" : "nodeLost",
"waitFor" : "5s",
"preferredOperation": "DELETENODE",
"enabled" : true,
"actions" : [
{
"name" : "compute_plan",
"class": "solr.ComputePlanAction"
},
{
"name" : "execute_plan",
"class": "solr.ExecutePlanAction"
}
]
}
}'{code}

Note: the same policies (2, 3) do not work in 7.4.
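For reference, a minimal sketch (assuming the validation errors below are the only 
blocker) of trigger (2) with the preferredOperation property that 7.4 reports as 
unknown simply removed; this should at least validate on 7.4, though whether the 
default operation then adds replicas as desired is not verified here:
{code:java}
# Hedged sketch only: the same nodeAdded trigger without the "preferredOperation"
# property that 7.4 rejects. This only addresses the validation error shown below.
curl http://localhost:8983/solr/admin/autoscaling -H 'Content-type:application/json' -d '{
  "set-trigger": {
    "name"    : "node_added_trigger",
    "event"   : "nodeAdded",
    "waitFor" : "5s",
    "enabled" : true,
    "actions" : [
      { "name": "compute_plan", "class": "solr.ComputePlanAction" },
      { "name": "execute_plan", "class": "solr.ExecutePlanAction" }
    ]
  }
}'
{code}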

*Errors:*


{code:java}
[Mon Apr 08 11:52:00 UTC root@hawkeye-common ~]# curl 
http://localhost:8983/solr/admin/autoscaling -H 'Content-type:application/json' 
-d '{
>  "set-trigger": {
>   "name" : "node_added_trigger",
>   "event" : "nodeAdded",
>   "waitFor" : "5s",
>   "preferredOperation": "ADDREPLICA",
>   "enabled" : true,
>   "actions" : [
>{
> "name" : "compute_plan",
> "class": "solr.ComputePlanAction"
>},
>{
> "name" : "execute_plan",
> "class": "solr.ExecutePlanAction"
>}
>   ]
>  }
> }'
{
  "responseHeader":{
"status":400,
"QTime":5},
  "result":"failure",
  "WARNING":"This response format is experimental.  It is likely to change in 
the future.",
  "error":{
"metadata":[
  "error-class","org.apache.solr.api.ApiBag$ExceptionWithErrObject",
  "root-error-class","org.apache.solr.api.ApiBag$ExceptionWithErrObject"],
"details":[{
"set-trigger":{
  "name":"node_added_trigger",
  "event":"nodeAdded",
  "waitFor":"5s",
  "preferredOperation":"ADDREPLICA",
  "enabled":true,
  "actions":[{
  "name":"compute_plan",
  "class":"solr.ComputePlanAction"},
{
  "name":"execute_plan",
  "class":"solr.ExecutePlanAction"}]},
"errorMessages":["Error validating trigger config node_added_trigger: 
TriggerValidationException{name=node_added_trigger, 
details='{preferredOperation=unknown property}'}"]}],
"msg":"Error in command payload",
"code":400}}
[Mon Apr 08 11:52:09 UTC root@hawkeye-common ~]#
[Mon Apr 08 11:52:16 UTC root@hawkeye-common ~]# curl 
http://localhost:8983/solr/admin/autoscaling -H 'Content-type:application/json' 
-d '{
>  "set-trigger": {
>   "name" : "node_lost_trigger",
>   "event" : "nodeLost",
>   "waitFor" : "5s",
>   "preferredOperation": "DELETENODE",
>   "enabled" : true,
>   "actions" : [
>{
> "name" : "compute_plan",
> "class": "solr.ComputePlanAction"
>},
>{
> "name" : "execute_plan",
> "class": "solr.ExecutePlanAction"
>}
>   ]
>  }
> }'
{
  "responseHeader":{
"status":400,
"QTime":1},
  "result":"failure",
  "WARNING":"This response format is experimental.  It is likely to change in 
the future.",
  "error":{
"metadata":[
  "error-class","org.apache.solr.api.ApiBag$ExceptionWithErrObject",
  "root-error-class","org.apache.solr.api.ApiBag$ExceptionWithErrObject"],
"details":[{
"set-trigger":{
  "name":"node_lost_trigger",
  "event":"nodeLost",
  "waitFor":"5s",
  "preferredOperation":"DELETENODE",
  "enabled":true,
  "actions":[{
  "name":"compute_plan",
  "class":"solr.ComputePlanAction"},
 

[GitHub] [lucene-solr] uschindler commented on a change in pull request #637: LUCENE-8754: Prevent ConcurrentModificationException in SegmentInfo

2019-04-09 Thread GitBox
uschindler commented on a change in pull request #637: LUCENE-8754: Prevent 
ConcurrentModificationException in SegmentInfo
URL: https://github.com/apache/lucene-solr/pull/637#discussion_r273345219
 
 

 ##
 File path: lucene/core/src/java/org/apache/lucene/index/SegmentInfo.java
 ##
 @@ -111,12 +112,12 @@ public SegmentInfo(Directory dir, Version version, 
Version minVersion, String na
 this.maxDoc = maxDoc;
 this.isCompoundFile = isCompoundFile;
 this.codec = codec;
-this.diagnostics = Objects.requireNonNull(diagnostics);
+this.diagnostics = Collections.unmodifiableMap(new 
HashMap<>(Objects.requireNonNull(diagnostics)));
 this.id = id;
 if (id.length != StringHelper.ID_LENGTH) {
   throw new IllegalArgumentException("invalid id: " + Arrays.toString(id));
 }
-this.attributes = Objects.requireNonNull(attributes);
+this.attributes = Collections.unmodifiableMap(new 
HashMap<>(Objects.requireNonNull(attributes)));
 
 Review comment:
   One thing: once master is on Java 11, this should use: 
https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/util/Map.html#copyOf(java.util.Map)
   
   We may simplify more code by doing that, as Java 11 now has unmodifiable maps 
created via static factory methods, like Guava once had with ImmutableMap. Same 
for lists and sets.
   
   On top of that, if the input is already unmodifiable it does not copy, so it is 
more heap efficient.
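   For illustration, a minimal sketch (my own, assuming Java 11 and the 
java.util.Map.copyOf API linked above) of what the two assignments in the diff 
could look like; the class and field names below only mirror the snippet and are 
not the actual SegmentInfo code:
{code:java}
import java.util.Map;
import java.util.Objects;

// Sketch only: the Java 11 form of the two assignments from the diff above.
// Map.copyOf returns an unmodifiable copy and rejects null maps, keys and values;
// per its implementation note it may avoid copying when the input is itself an
// unmodifiable map created by Map.of/Map.copyOf.
final class SegmentInfoSketch {
  private final Map<String, String> diagnostics;
  private final Map<String, String> attributes;

  SegmentInfoSketch(Map<String, String> diagnostics, Map<String, String> attributes) {
    this.diagnostics = Map.copyOf(Objects.requireNonNull(diagnostics, "diagnostics"));
    this.attributes = Map.copyOf(Objects.requireNonNull(attributes, "attributes"));
  }

  Map<String, String> getDiagnostics() {
    return diagnostics;
  }

  Map<String, String> getAttributes() {
    return attributes;
  }
}
{code}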


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] [lucene-solr] uschindler commented on a change in pull request #637: LUCENE-8754: Prevent ConcurrentModificationException in SegmentInfo

2019-04-09 Thread GitBox
uschindler commented on a change in pull request #637: LUCENE-8754: Prevent 
ConcurrentModificationException in SegmentInfo
URL: https://github.com/apache/lucene-solr/pull/637#discussion_r273345219
 
 

 ##
 File path: lucene/core/src/java/org/apache/lucene/index/SegmentInfo.java
 ##
 @@ -111,12 +112,12 @@ public SegmentInfo(Directory dir, Version version, 
Version minVersion, String na
 this.maxDoc = maxDoc;
 this.isCompoundFile = isCompoundFile;
 this.codec = codec;
-this.diagnostics = Objects.requireNonNull(diagnostics);
+this.diagnostics = Collections.unmodifiableMap(new 
HashMap<>(Objects.requireNonNull(diagnostics)));
 this.id = id;
 if (id.length != StringHelper.ID_LENGTH) {
   throw new IllegalArgumentException("invalid id: " + Arrays.toString(id));
 }
-this.attributes = Objects.requireNonNull(attributes);
+this.attributes = Collections.unmodifiableMap(new 
HashMap<>(Objects.requireNonNull(attributes)));
 
 Review comment:
   One thing: once master is on Java 11, this should use: 
https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/util/Map.html#copyOf(java.util.Map)
   
   We may simplify more code by doing that, as Java 11 now has unmodifiable maps 
created via static factory methods, like Guava once had with ImmutableMap. Same 
for lists and sets.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org


