[jira] [Comment Edited] (SOLR-13035) Utilize solr.data.home / solrDataHome in solr.xml to set all writable files in single directory

2019-01-14 Thread Amrit Sarkar (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16742776#comment-16742776
 ] 

Amrit Sarkar edited comment on SOLR-13035 at 1/15/19 6:49 AM:
--

Fresh patch uploaded; the design:

1. {{SOLR_VAR_ROOT}} introduced, defaults to {{SOLR_TIP}} (the installation dir)
2. {{SOLR_DATA_HOME}} will be resolved to {{SOLR_VAR_ROOT}}/data if not 
passed explicitly
3. {{SOLR_LOGS_DIR}} will be resolved to {{SOLR_VAR_ROOT}}/logs if not passed 
explicitly
4. {{SOLR_PID_DIR}} will be resolved to {{SOLR_VAR_ROOT}}/bin if not passed 
explicitly; before, the default was {{SOLR_TIP}}/bin

a. {{SOLR_DATA_HOME}} will now be resolved to both instancePath and dataDir for 
cores, unlike just dataDir before.
b. If {{SOLR_DATA_HOME}} does not exist on the server, an attempt will be made 
to create the specified directory.

I have only added tests for {{SOLR_DATA_HOME}}; I am not sure how to test the 
startup-script changes except manually. If everyone agrees with the above, I 
will add the relevant documentation and finish this up. A sketch of the 
intended resolution order follows.
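
A rough shell sketch of the fallback resolution described above (illustrative 
only, not the actual patch; the exact handling in bin/solr may differ):

{code}
# Sketch only -- not the actual patch.
# SOLR_TIP is the installation dir, as bin/solr computes it today.
SOLR_VAR_ROOT="${SOLR_VAR_ROOT:-$SOLR_TIP}"

# Writable locations hang off SOLR_VAR_ROOT unless set explicitly.
SOLR_DATA_HOME="${SOLR_DATA_HOME:-$SOLR_VAR_ROOT/data}"
SOLR_LOGS_DIR="${SOLR_LOGS_DIR:-$SOLR_VAR_ROOT/logs}"
# Equivalent to the old SOLR_TIP/bin default when SOLR_VAR_ROOT is unset.
SOLR_PID_DIR="${SOLR_PID_DIR:-$SOLR_VAR_ROOT/bin}"

# Point (b): try to create SOLR_DATA_HOME if it does not exist yet.
if [ ! -d "$SOLR_DATA_HOME" ]; then
  mkdir -p "$SOLR_DATA_HOME"
fi
{code}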


> Utilize solr.data.home / solrDataHome in solr.xml to set all writable files 
> in single directory
> ---
>
> Key: SOLR-13035
> URL: https://issues.apache.org/jira/browse/SOLR-13035
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Amrit Sarkar
>Priority: Major
> Attachments: SOLR-13035.patch, SOLR-13035.patch, SOLR-13035.patch, 
> SOLR-13035.patch, SOLR-13035.patch
>
>
> {{solr.data.home}} system property or {{solrDataHome}} in _solr.xml_ is 
> already available as per SOLR-6671.
> The writable content in Solr consists of index files, core properties, and ZK 
> data if embedded zookeeper is started in SolrCloud mode. It would be great if 
> all writable content could live under a single directory, keeping the 
> READ-ONLY and WRITABLE directories separate.
> It can then also solve official docker Solr image issues:
> https://github.com/docker-solr/docker-solr/issues/74
> https://github.com/docker-solr/docker-solr/issues/133
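
For reference, {{solr.data.home}} can already be pointed at a writable location 
today (a minimal sketch, assuming a stock install; the path is a placeholder):

{code}
# In solr.in.sh: pass the existing system property through SOLR_OPTS.
SOLR_OPTS="$SOLR_OPTS -Dsolr.data.home=/var/solr/data"

# solr.xml can also declare it, resolved from the same property, e.g.:
#   <str name="solrDataHome">${solr.data.home:}</str>
{code}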




[jira] [Updated] (SOLR-13035) Utilize solr.data.home / solrDataHome in solr.xml to set all writable files in single directory

2019-01-14 Thread Amrit Sarkar (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13035?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amrit Sarkar updated SOLR-13035:

Attachment: (was: SOLR-13035.patch)




[jira] [Updated] (SOLR-13035) Utilize solr.data.home / solrDataHome in solr.xml to set all writable files in single directory

2019-01-14 Thread Amrit Sarkar (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13035?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amrit Sarkar updated SOLR-13035:

Attachment: SOLR-13035.patch







[jira] [Updated] (SOLR-13035) Utilize solr.data.home / solrDataHome in solr.xml to set all writable files in single directory

2019-01-14 Thread Amrit Sarkar (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13035?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amrit Sarkar updated SOLR-13035:

Attachment: (was: SOLR-13035.patch)







[JENKINS] Lucene-Solr-NightlyTests-master - Build # 1751 - Unstable

2019-01-14 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-master/1751/

1 tests failed.
FAILED:  org.apache.solr.cloud.TestStressCloudBlindAtomicUpdates.test_dv

Error Message:
Some docs had errors -- check logs expected:<0> but was:<2>

Stack Trace:
java.lang.AssertionError: Some docs had errors -- check logs expected:<0> but 
was:<2>
at 
__randomizedtesting.SeedInfo.seed([BB333F3561AF4F2B:8D275D73EBF2753A]:0)
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:834)
at org.junit.Assert.assertEquals(Assert.java:645)
at 
org.apache.solr.cloud.TestStressCloudBlindAtomicUpdates.checkField(TestStressCloudBlindAtomicUpdates.java:345)
at 
org.apache.solr.cloud.TestStressCloudBlindAtomicUpdates.test_dv(TestStressCloudBlindAtomicUpdates.java:200)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)




Build Log:
[...truncated 15306 lines...]
   [junit4] Suite: 

[jira] [Updated] (SOLR-13035) Utilize solr.data.home / solrDataHome in solr.xml to set all writable files in single directory

2019-01-14 Thread Amrit Sarkar (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13035?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amrit Sarkar updated SOLR-13035:

Attachment: SOLR-13035.patch










[JENKINS] Lucene-Solr-7.x-Windows (64bit/jdk-9.0.4) - Build # 958 - Unstable!

2019-01-14 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Windows/958/
Java: 64bit/jdk-9.0.4 -XX:+UseCompressedOops -XX:+UseG1GC

2 tests failed.
FAILED:  org.apache.solr.cloud.MissingSegmentRecoveryTest.testLeaderRecovery

Error Message:
Expected a collection with one shard and two replicas
Timeout waiting to see state for collection=MissingSegmentRecoveryTest
:DocCollection(MissingSegmentRecoveryTest//collections/MissingSegmentRecoveryTest/state.json/7)={
  "pullReplicas":"0",
  "replicationFactor":"2",
  "shards":{"shard1":{
      "range":"80000000-7fffffff",
      "state":"active",
      "replicas":{
        "core_node3":{
          "core":"MissingSegmentRecoveryTest_shard1_replica_n1",
          "base_url":"http://127.0.0.1:53912/solr",
          "node_name":"127.0.0.1:53912_solr",
          "state":"active",
          "type":"NRT",
          "force_set_state":"false",
          "leader":"true"},
        "core_node4":{
          "core":"MissingSegmentRecoveryTest_shard1_replica_n2",
          "base_url":"http://127.0.0.1:53909/solr",
          "node_name":"127.0.0.1:53909_solr",
          "state":"down",
          "type":"NRT",
          "force_set_state":"false"}}}},
  "router":{"name":"compositeId"},
  "maxShardsPerNode":"1",
  "autoAddReplicas":"false",
  "nrtReplicas":"2",
  "tlogReplicas":"0"}
Live Nodes: [127.0.0.1:53909_solr, 127.0.0.1:53912_solr]
Last available state: 
DocCollection(MissingSegmentRecoveryTest//collections/MissingSegmentRecoveryTest/state.json/7)={
  "pullReplicas":"0",
  "replicationFactor":"2",
  "shards":{"shard1":{
      "range":"80000000-7fffffff",
      "state":"active",
      "replicas":{
        "core_node3":{
          "core":"MissingSegmentRecoveryTest_shard1_replica_n1",
          "base_url":"http://127.0.0.1:53912/solr",
          "node_name":"127.0.0.1:53912_solr",
          "state":"active",
          "type":"NRT",
          "force_set_state":"false",
          "leader":"true"},
        "core_node4":{
          "core":"MissingSegmentRecoveryTest_shard1_replica_n2",
          "base_url":"http://127.0.0.1:53909/solr",
          "node_name":"127.0.0.1:53909_solr",
          "state":"down",
          "type":"NRT",
          "force_set_state":"false"}}}},
  "router":{"name":"compositeId"},
  "maxShardsPerNode":"1",
  "autoAddReplicas":"false",
  "nrtReplicas":"2",
  "tlogReplicas":"0"}

Stack Trace:
java.lang.AssertionError: Expected a collection with one shard and two replicas
Timeout waiting to see state for collection=MissingSegmentRecoveryTest 
:DocCollection(MissingSegmentRecoveryTest//collections/MissingSegmentRecoveryTest/state.json/7)={
  "pullReplicas":"0",
  "replicationFactor":"2",
  "shards":{"shard1":{
      "range":"80000000-7fffffff",
      "state":"active",
      "replicas":{
        "core_node3":{
          "core":"MissingSegmentRecoveryTest_shard1_replica_n1",
          "base_url":"http://127.0.0.1:53912/solr",
          "node_name":"127.0.0.1:53912_solr",
          "state":"active",
          "type":"NRT",
          "force_set_state":"false",
          "leader":"true"},
        "core_node4":{
          "core":"MissingSegmentRecoveryTest_shard1_replica_n2",
          "base_url":"http://127.0.0.1:53909/solr",
          "node_name":"127.0.0.1:53909_solr",
          "state":"down",
          "type":"NRT",
          "force_set_state":"false"}}}},
  "router":{"name":"compositeId"},
  "maxShardsPerNode":"1",
  "autoAddReplicas":"false",
  "nrtReplicas":"2",
  "tlogReplicas":"0"}
Live Nodes: [127.0.0.1:53909_solr, 127.0.0.1:53912_solr]
Last available state: 
DocCollection(MissingSegmentRecoveryTest//collections/MissingSegmentRecoveryTest/state.json/7)={
  "pullReplicas":"0",
  "replicationFactor":"2",
  "shards":{"shard1":{
      "range":"80000000-7fffffff",
      "state":"active",
      "replicas":{
        "core_node3":{
          "core":"MissingSegmentRecoveryTest_shard1_replica_n1",
          "base_url":"http://127.0.0.1:53912/solr",
          "node_name":"127.0.0.1:53912_solr",
          "state":"active",
          "type":"NRT",
          "force_set_state":"false",
          "leader":"true"},
        "core_node4":{
          "core":"MissingSegmentRecoveryTest_shard1_replica_n2",
          "base_url":"http://127.0.0.1:53909/solr",
          "node_name":"127.0.0.1:53909_solr",
          "state":"down",
          "type":"NRT",
          "force_set_state":"false"}}}},
  "router":{"name":"compositeId"},
  "maxShardsPerNode":"1",
  "autoAddReplicas":"false",
  "nrtReplicas":"2",
  "tlogReplicas":"0"}
at 
__randomizedtesting.SeedInfo.seed([1FB81C6070EAFD56:4FED846329CB4B4B]:0)
at org.junit.Assert.fail(Assert.java:88)
at 
org.apache.solr.cloud.SolrCloudTestCase.waitForState(SolrCloudTestCase.java:289)
at 
org.apache.solr.cloud.SolrCloudTestCase.waitForState(SolrCloudTestCase.java:267)
at 
org.apache.solr.cloud.MissingSegmentRecoveryTest.testLeaderRecovery(MissingSegmentRecoveryTest.java:106)

[JENKINS] Lucene-Solr-SmokeRelease-8.x - Build # 3 - Failure

2019-01-14 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-8.x/3/

No tests ran.

Build Log:
[...truncated 23490 lines...]
[asciidoctor:convert] asciidoctor: ERROR: about-this-guide.adoc: line 1: 
invalid part, must have at least one section (e.g., chapter, appendix, etc.)
[asciidoctor:convert] asciidoctor: ERROR: solr-glossary.adoc: line 1: invalid 
part, must have at least one section (e.g., chapter, appendix, etc.)
 [java] Processed 2465 links (2016 relative) to 3228 anchors in 247 files
 [echo] Validated Links & Anchors via: 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/solr/build/solr-ref-guide/bare-bones-html/

-dist-changes:
 [copy] Copying 4 files to 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/solr/package/changes

package:

-unpack-solr-tgz:

-ensure-solr-tgz-exists:
[mkdir] Created dir: 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/solr/build/solr.tgz.unpacked
[untar] Expanding: 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/solr/package/solr-8.0.0.tgz
 into 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/solr/build/solr.tgz.unpacked

generate-maven-artifacts:

resolve:

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/lucene/top-level-ivy-settings.xml

resolve:

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:

[jira] [Commented] (SOLR-12923) The new AutoScaling tests are way too flaky and need special attention.

2019-01-14 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12923?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16742662#comment-16742662
 ] 

ASF subversion and git services commented on SOLR-12923:


Commit d970375cd2f2357b6a8da5ac67ef994f8d43 in lucene-solr's branch 
refs/heads/master from Chris M. Hostetter
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=d970375 ]

SOLR-12923: add a latch to TestTriggerListener to harden tests that use it so 
they can deterministically know when all events have been processed

This hardens several flaky tests, and allows the removal of several 
arbitrary sleep calls


> The new AutoScaling tests are way too flaky and need special attention.
> --
>
> Key: SOLR-12923
> URL: https://issues.apache.org/jira/browse/SOLR-12923
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Tests
>Reporter: Mark Miller
>Priority: Major
>
> I've already done some work here (not posted yet). We need to address this, 
> these tests are too new to fail so often and easily.
> I want to add beasting to precommit (LUCENE-8545) to help prevent tests that 
> fail so easily from being committed.






[jira] [Commented] (SOLR-12923) The new AutoScaling tests are way too flaky and need special attention.

2019-01-14 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12923?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16742660#comment-16742660
 ] 

ASF subversion and git services commented on SOLR-12923:


Commit 39d4dd6294f650777a872f0b33f6f17958bb167b in lucene-solr's branch 
refs/heads/branch_7x from Chris M. Hostetter
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=39d4dd6 ]

SOLR-12923: add a latch to TestTriggerListener to harden tests that use it so 
they can deterministically know when all events have been processed

This hardens several flaky tests, and allows the removal of several 
arbitrary sleep calls

(cherry picked from commit d970375cd2f2357b6a8da5ac67ef994f8d43)





[jira] [Commented] (SOLR-12923) The new AutoScaling tests are way too flaky and need special attention.

2019-01-14 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12923?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16742661#comment-16742661
 ] 

ASF subversion and git services commented on SOLR-12923:


Commit 62e9ff436a40183d1683e61e0b464e3faf0bd5db in lucene-solr's branch 
refs/heads/branch_8x from Chris M. Hostetter
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=62e9ff4 ]

SOLR-12923: add a latch to TestTriggerListener to harden tests that use it so 
they can deterministically know when all events have been processed

This hardens several flaky tests, and allows the removal of several 
arbitrary sleep calls

(cherry picked from commit d970375cd2f2357b6a8da5ac67ef994f8d43)





BadApple report

2019-01-14 Thread Erick Erickson
Well, I didn't add stuff last week; it slipped through the cracks.

Anyway, here's the current list. NOTE: lots more tests are being
un-annotated than annotated, which is good.

Also, this last report has 421 total tests that failed sometime in the
last 4 weeks. The report before had 655. Still quite a ways to go, but
nice progress!

 **Annotations will be removed from the following tests because they
haven't failed in the last 4 rollups.

  **Methods: 25
   CdcrBootstrapTest.testConvertClusterToCdcrAndBootstrap
   ComputePlanActionTest.testNodeAdded
   ComputePlanActionTest.testNodeLostTriggerWithDeleteNodePreferredOp
   CustomCollectionTest.testRouteFieldForHashRouter
   DeleteReplicaTest.raceConditionOnDeleteAndRegisterReplicaLegacy
   MathExpressionTest.testMultiVariateNormalDistribution
   ScheduledTriggerIntegrationTest.testScheduledTrigger
   ShardSplitTest.testSplitMixedReplicaTypes
   ShardSplitTest.testSplitMixedReplicaTypesLink
   SolrRrdBackendFactoryTest.testBasic
   StreamDecoratorTest.testParallelExecutorStream
   StreamingTest.testParallelMergeStream
   StreamingTest.testZeroParallelReducerStream
   TestCloudRecovery.corruptedLogTest
   TestDistribIDF.testMultiCollectionQuery
   TestIndexWriterOnVMError.testCheckpoint
   TestMiniSolrCloudClusterSSL.testSslWithCheckPeerName
   TestPullReplica.testCreateDelete
   TestSkipOverseerOperations.testSkipDownOperations
   TestStressInPlaceUpdates.stressTest
   TestTlogReplica.testCreateDelete
   TestWithCollection.testAddReplicaWithPolicy
   TestWithCollection.testNodeAdded
   TimeRoutedAliasUpdateProcessorTest.test
   ZkShardTermsTest.testParticipationOfReplicas


Failures in Hoss' reports for the last 4 rollups.

There were 421 unannotated tests that failed in Hoss' rollups, ordered by the
date I downloaded the rollup file, newest->oldest. See above for the dates the
files were collected.
These tests were NOT BadApple'd or AwaitsFix'd.
All tests that failed 4 weeks running will be BadApple'd unless there are
objections.
Failures in the last 4 reports..
   Report   Pct    runs  fails  test
   0123    28.6      74     25  LIROnShardRestartTest.testAllReplicasInLIR
   0123     1.1    1682     21  TestSQLHandler.doTest
   0123     0.4     670     12  TestSimTriggerIntegration.testCooldown
   0123     0.3    1280     20  TestSimTriggerIntegration.testListeners
   0123     0.2    2018     87  TestSimTriggerIntegration.testNodeLostTriggerRestoreState
   0123     8.8     669    179  TestSimTriggerIntegration.testNodeMarkersRegistration
 Will BadApple all tests above this line except ones
listed at the top**

Erick
DO NOT ENABLE LIST:
MoveReplicaHDFSTest.testFailedMove
MoveReplicaHDFSTest.testNormalFailedMove
TestControlledRealTimeReopenThread.testCRTReopen
TestICUNormalizer2CharFilter.testRandomStrings
TestICUTokenizerCJK
TestImpersonationWithHadoopAuth.testForwarding
TestLTRReRankingPipeline.testDifferentTopN
TestRandomChains


DO NOT ANNOTATE LIST
CdcrBidirectionalTest.testBiDir
IndexSizeTriggerTest.testMergeIntegration
IndexSizeTriggerTest.testMixedBounds
IndexSizeTriggerTest.testSplitIntegration
IndexSizeTriggerTest.testTrigger
InfixSuggestersTest.testShutdownDuringBuild
ShardSplitTest.test
ShardSplitTest.testSplitMixedReplicaTypes
ShardSplitTest.testSplitWithChaosMonkey
TestLatLonShapeQueries.testRandomBig
TestRandomChains.testRandomChainsWithLargeStrings
TestTriggerIntegration.testSearchRate

Processing file (History bit 3): HOSS-2019-01-15.csv
Processing file (History bit 2): HOSS-2019-01-08.csv
Processing file (History bit 1): HOSS-2018-12-31.csv
Processing file (History bit 0): HOSS-2018-12-24.csv



[jira] [Commented] (SOLR-5207) Admin UI - Zookeeper status graph

2019-01-14 Thread Erick Erickson (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-5207?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16742595#comment-16742595
 ] 

Erick Erickson commented on SOLR-5207:
--

My instant response is that since there aren't very many ZK nodes, the ZK 
status page is fine.

+1 to close

FWIW.

> Admin UI - Zookeeper status graph
> -
>
> Key: SOLR-5207
> URL: https://issues.apache.org/jira/browse/SOLR-5207
> Project: Solr
>  Issue Type: Improvement
>  Components: Admin UI, SolrCloud
>Affects Versions: 4.4
>Reporter: Shawn Heisey
>Priority: Minor
> Fix For: 4.9, 6.0
>
> Attachments: zk-graph.png
>
>
> SOLR-5169 puts forth the idea of having an API to show zookeeper status.  
> This issue aims to use that information to draw a graph for zookeeper similar 
> to what we have for SolrCloud nodes.  Attached is an extremely rough image of 
> what I'm shooting for.  It probably needs to have a black outline around one 
> of the nodes to indicate the leader, just like the existing cloud graph does.






[jira] [Commented] (SOLR-5207) Admin UI - Zookeeper status graph

2019-01-14 Thread JIRA


[ 
https://issues.apache.org/jira/browse/SOLR-5207?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16742546#comment-16742546
 ] 

Jan Høydahl commented on SOLR-5207:
---

Is this necessary now that we have the dedicated ZK status page?




Re: Admin UI - Collections Management

2019-01-14 Thread Erick Erickson
bq. The collections API allowed me to move the replicas around by
investigating the core names and locations,

What about the MOVEREPLICA command?
https://lucene.apache.org/solr/guide/7_6/collections-api.html
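
Something like this, e.g. (the collection, replica, and target node names
below are made-up placeholders, just to illustrate the shape of the call):

  curl "http://localhost:8983/solr/admin/collections?action=MOVEREPLICA&collection=mycoll&replica=core_node6&targetNode=localhost:7574_solr"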

Although that still requires you to know things like node names and
replica names. It'd be way cool to have
some kind of drag-n-drop or wizard-driven capability, perhaps built on
top of that. In addition to Jan's
suggestions.

Best,
Erick

On Mon, Jan 14, 2019 at 2:37 PM Jan Høydahl  wrote:
>
> Hi and thanks for offering to help.
>
> If you are not familiar with the new Autoscaling framework, I'd start by 
> exploring it, since it aims at solving replica placement without explicit 
> commands.
> https://lucene.apache.org/solr/guide/7_6/solrcloud-autoscaling.html
>
> Next, I'd try to find an open JIRA issue to solve, perhaps something related 
> to Collections API and/or Admin UI. Pick something very simple, just to get 
> started with the procedure of contributing, and then look at e.g. 
> https://issues.apache.org/jira/browse/SOLR-10209 which seems related in that 
> it aims to expose collection api through UI
>
> --
> Jan Høydahl, search solution architect
> Cominvent AS - www.cominvent.com
>
> 14. jan. 2019 kl. 21:39 skrev Branham, Jeremy (Experis) :
>
> I recently split some shards, and the new replicas didn’t go to the nodes I 
> wanted them.
> The collections API allowed me to move the replicas around by investigating 
> the core names and locations, then constructing the correct urls to execute 
> the moves.
> This worked, but it would have been faster if the admin UI supported such 
> operations.
>
> Is this something I could contribute to? Maybe a PR in GitHub?
> I’ve been a solr user for quite a while and would like to start giving back 
> some.
>
> Thanks!
>
> Jeremy Branham
> jb...@allstate.com
> Allstate Insurance Company | UCV Technology Services | Information Services 
> Group
>
>
>




Re: Admin UI - Collections Management

2019-01-14 Thread Jan Høydahl
Hi and thanks for offering to help.

If you are not familiar with the new Autoscaling framework, I'd start by 
exploring it, since it aims at solving replica placement without explicit 
commands.
https://lucene.apache.org/solr/guide/7_6/solrcloud-autoscaling.html

Next, I'd try to find an open JIRA issue to solve, perhaps something related to 
Collections API and/or Admin UI. Pick something very simple, just to get 
started with the procedure of contributing, and then look at e.g. 
https://issues.apache.org/jira/browse/SOLR-10209 which seems related in that it 
aims to expose collection api through UI

--
Jan Høydahl, search solution architect
Cominvent AS - www.cominvent.com

> 14. jan. 2019 kl. 21:39 skrev Branham, Jeremy (Experis) :
> 
> I recently split some shards, and the new replicas didn’t go to the nodes I 
> wanted them.
> The collections API allowed me to move the replicas around by investigating 
> the core names and locations, then constructing the correct urls to execute 
> the moves.
> This worked, but it would have been faster if the admin UI supported such 
> operations.
>  
> Is this something I could contribute to? Maybe a PR in GitHub?
> I’ve been a solr user for quite a while and would like to start giving back 
> some.
>  
> Thanks!
>  
> Jeremy Branham
> jb...@allstate.com 
> Allstate Insurance Company | UCV Technology Services | Information Services 
> Group
>  



[jira] [Commented] (SOLR-7555) Display total space and available space in Admin

2019-01-14 Thread JIRA


[ 
https://issues.apache.org/jira/browse/SOLR-7555?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16742543#comment-16742543
 ] 

Jan Høydahl commented on SOLR-7555:
---

We now have the new "Nodes" tab that displays disk space for each node. Perhaps 
that is enough and we can close this?

> Display total space and available space in Admin
> 
>
> Key: SOLR-7555
> URL: https://issues.apache.org/jira/browse/SOLR-7555
> Project: Solr
>  Issue Type: Improvement
>  Components: Admin UI
>Affects Versions: 5.1
>Reporter: Eric Pugh
>Assignee: Erik Hatcher
>Priority: Minor
> Fix For: 6.0
>
> Attachments: DiskSpaceAwareDirectory.java, 
> SOLR-7555-display_disk_space.patch, SOLR-7555-display_disk_space_v2.patch, 
> SOLR-7555-display_disk_space_v3.patch, SOLR-7555-display_disk_space_v4.patch, 
> SOLR-7555-display_disk_space_v5.patch, SOLR-7555.patch, SOLR-7555.patch, 
> SOLR-7555.patch
>
>
> Frequently I have access to the Solr Admin console, but not the underlying 
> server, and I'm curious how much space remains available.   This little patch 
> exposes total Volume size as well as the usable space remaining:
> !https://monosnap.com/file/VqlReekCFwpK6utI3lP18fbPqrGI4b.png!
> I'm not sure if this is the best place to put this, as every shard will share 
> the same data, so maybe it should be on the top level Dashboard?  Also not 
> sure what to call the fields! 






Admin UI - Collections Management

2019-01-14 Thread Branham, Jeremy (Experis)
I recently split some shards, and the new replicas didn’t go to the nodes I 
wanted them.
The collections API allowed me to move the replicas around by investigating the 
core names and locations, then constructing the correct urls to execute the 
moves.
This worked, but it would have been faster if the admin UI supported such 
operations.

Is this something I could contribute to? Maybe a PR in GitHub?
I’ve been a solr user for quite a while and would like to start giving back 
some.

Thanks!

Jeremy Branham
jb...@allstate.com
Allstate Insurance Company | UCV Technology Services | Information Services 
Group



[JENKINS] Lucene-Solr-repro - Build # 2680 - Unstable

2019-01-14 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-repro/2680/

[...truncated 28 lines...]
[repro] Jenkins log URL: 
https://builds.apache.org/job/Lucene-Solr-BadApples-Tests-7.x/265/consoleText

[repro] Revision: 734f20b298c0846cc319cbb011c3f44398b54005

[repro] Repro line:  ant test  -Dtestcase=TestSimTriggerIntegration 
-Dtests.method=testNodeMarkersRegistration -Dtests.seed=B568B06FFDA5808C 
-Dtests.multiplier=2 -Dtests.slow=true -Dtests.badapples=true -Dtests.locale=da 
-Dtests.timezone=Europe/Luxembourg -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8

[repro] git rev-parse --abbrev-ref HEAD
[repro] git rev-parse HEAD
[repro] Initial local git branch/revision: 
d965b3547e2fb87cb7551687bae312a0ff62e526
[repro] git fetch
[repro] git checkout 734f20b298c0846cc319cbb011c3f44398b54005

[...truncated 2 lines...]
[repro] git merge --ff-only

[...truncated 1 lines...]
[repro] ant clean

[...truncated 6 lines...]
[repro] Test suites by module:
[repro]solr/core
[repro]   TestSimTriggerIntegration
[repro] ant compile-test

[...truncated 3605 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=5 
-Dtests.class="*.TestSimTriggerIntegration" -Dtests.showOutput=onerror  
-Dtests.seed=B568B06FFDA5808C -Dtests.multiplier=2 -Dtests.slow=true 
-Dtests.badapples=true -Dtests.locale=da -Dtests.timezone=Europe/Luxembourg 
-Dtests.asserts=true -Dtests.file.encoding=UTF-8

[...truncated 5702 lines...]
[repro] Setting last failure code to 256

[repro] Failures:
[repro]   4/5 failed: 
org.apache.solr.cloud.autoscaling.sim.TestSimTriggerIntegration
[repro] git checkout d965b3547e2fb87cb7551687bae312a0ff62e526

[...truncated 2 lines...]
[repro] Exiting with code 256

[...truncated 6 lines...]


[jira] [Commented] (SOLR-12902) Block Expensive Queries custom Solr component

2019-01-14 Thread Hoss Man (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12902?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16742424#comment-16742424
 ] 

Hoss Man commented on SOLR-12902:
-

Quick comment on something specific...
{quote}... I have added a test-case in the code to explain the scenario in 
which the custom component will be helpful.
{quote}
Tirth: what you're describing is really more of an "example documentation" ... 
when folks talk about having test cases for new functionality/patches, what 
they mean is new JUnit-powered tests that are either unit tests proving that 
the underlying methods behave as documented, or integration-level tests showing 
that when a Solr request comes in, the search component behaves as expected (in 
this case: letting the request execute and return the expected results, or 
returning an expected error if it violates the configuration)

General feedback:

This is a type of functionality we've talked about for a long time, but one of 
the reasons we (or at least "I") have never tackled it head on relates to my 
main concern with the approach currently taken in the PR patch: it sets us 
down the path of needing a "laundry list" (which we have to maintain and 
constantly update moving forward) of every possible param/feature (and 
combination thereof) that _some_ people *might* find problematic (with 
configuration options for all of them) in order to help ensure that something 
like this is useful for _most_ people.

The reason I say that is because typically when users come along to assess a 
feature like this, and they are concerned about "A, B, X & C", it's not useful 
to them if it only solves "A, B, C & D" – w/o support for X. Because if they 
need their own custom solution/plugin for preventing X, they might as well 
incorporate a custom solution for "A, B, & C" as well, so they only need to 
worry about configuring one solution instead of two.

The permutations of things to worry about providing configuration options for 
are problematic as well, because it's not just a question of "here's *every* 
Solr param, let's add a config option to turn it off or limit its range of 
legal values" (if it were, we could maybe simplify the impl w/ a "rules" syntax 
that didn't need to know about specific param names) – it's also about the 
permutations of interconnected params – ex: folks who want to support both 
faceting & highlighting, but not on the same requests; or highlighting is ok, 
as long as rows isn't too big.

I think the only way to offer a really re-usable, generalized solution for 
something like this would be via the ScriptEngine, letting people configure 
their own set of arbitrary script(s) that could be compiled on startup and 
then evaled against the request params (and request context). We could test & 
provide some small re-usable sample/example scripts that people could choose to 
mix and match or customize ... similar to how SpamAssassin rules are 
provided/configured.

I think the simplest implementation on the Java side would be:
 * configure a list of script files
 * compile all scripts on init
 * at request time, loop over each script in order and eval it
 * if the script eval result is something that is null, or .equals() FALSE, or "new 
Float(0)", continue
 * if the script eval result is anything else, return its toString as an error 
message

that way people could write scripts like...
{code:java}
if (params[rows] > 100) {
  return "rows param is too high"
}
if (params[start] > 10) {
  return "start param is too high"
}
if (null != params[facet.pivot] && null != params[highlight]) {
  ...
...
return 0
{code}
But we could potentially also support a variant option for simpler scripts 
w/less control over the error message returned...
 * configure a NamedList mapping error strings to lists of script files, i.e...
{code:xml}
<lst name="rules">
  <str name="first error message">script1.js</str>
  <arr name="second error message">
    <str>script2.js</str>
    <str>script3.js</str>
    <str>script4.js</str>
  </arr>
  ...
</lst>
{code}

 * compile all scripts on init, maintaining a mapping to their error string
 * at request time, loop over each script in order and eval it
 * if the script eval result is not .equals() TRUE, then return the associated error 
string

that way people could have much simpler "boolean expression" scripts like
{code:java}
   (rows <= 100)
&& (start <= 10)
&& (params[facet.pivot] ^ params[highlight])
&& ...
{code}
 

A lot of the "plumbing" code we'd need for something like this already exists 
in the StatelessScriptUpdateProcessorFactory – we'd just need to refactor it 
into a "ScriptUtils" helper place, and tease out some bits that require the 
"Invocable" API since we wouldn't really need that here.

Thoughts?

> Block Expensive Queries custom Solr component
> -
>
> Key: SOLR-12902
> URL: https://issues.apache.org/jira/browse/SOLR-12902
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default 

[JENKINS] Lucene-Solr-NightlyTests-7.x - Build # 432 - Still unstable

2019-01-14 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-7.x/432/

2 tests failed.
FAILED:  org.apache.solr.cloud.ForceLeaderTest.testReplicasInLIRNoLeader

Error Message:
Timeout occured while waiting response from server at: 
http://127.0.0.1:44482/forceleader_test_collection

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: Timeout occured while waiting 
response from server at: http://127.0.0.1:44482/forceleader_test_collection
at 
__randomizedtesting.SeedInfo.seed([CD8D4CEA280B801A:2B1A782A1189797B]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:654)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:484)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:414)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1110)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:884)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:817)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
at org.apache.solr.client.solrj.SolrClient.commit(SolrClient.java:504)
at org.apache.solr.client.solrj.SolrClient.commit(SolrClient.java:479)
at 
org.apache.solr.cloud.ForceLeaderTest.testReplicasInLIRNoLeader(ForceLeaderTest.java:294)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:1063)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:1035)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 

[JENKINS] Lucene-Solr-master-Linux (64bit/jdk1.8.0_172) - Build # 23521 - Unstable!

2019-01-14 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/23521/
Java: 64bit/jdk1.8.0_172 -XX:-UseCompressedOops -XX:+UseParallelGC

4 tests failed.
FAILED:  org.apache.solr.cloud.LeaderTragicEventTest.test

Error Message:
Timeout waiting for new replica become leader Timeout waiting to see state for 
collection=collection1 
:DocCollection(collection1//collections/collection1/state.json/6)={   
"pullReplicas":"0",   "replicationFactor":"2",   "shards":{"shard1":{   
"range":"8000-7fff",   "state":"active",   "replicas":{ 
"core_node3":{   "core":"collection1_shard1_replica_n1",   
"base_url":"https://127.0.0.1:40291/solr;,   
"node_name":"127.0.0.1:40291_solr",   "state":"active",   
"type":"NRT",   "force_set_state":"false"}, "core_node4":{  
 "core":"collection1_shard1_replica_n2",   
"base_url":"https://127.0.0.1:38587/solr;,   
"node_name":"127.0.0.1:38587_solr",   "state":"active",   
"type":"NRT",   "force_set_state":"false",   
"leader":"true",   "router":{"name":"compositeId"},   
"maxShardsPerNode":"1",   "autoAddReplicas":"false",   "nrtReplicas":"2",   
"tlogReplicas":"0"} Live Nodes: [127.0.0.1:38587_solr, 127.0.0.1:40291_solr] 
Last available state: 
DocCollection(collection1//collections/collection1/state.json/6)={   
"pullReplicas":"0",   "replicationFactor":"2",   "shards":{"shard1":{   
"range":"8000-7fff",   "state":"active",   "replicas":{ 
"core_node3":{   "core":"collection1_shard1_replica_n1",   
"base_url":"https://127.0.0.1:40291/solr;,   
"node_name":"127.0.0.1:40291_solr",   "state":"active",   
"type":"NRT",   "force_set_state":"false"}, "core_node4":{  
 "core":"collection1_shard1_replica_n2",   
"base_url":"https://127.0.0.1:38587/solr;,   
"node_name":"127.0.0.1:38587_solr",   "state":"active",   
"type":"NRT",   "force_set_state":"false",   
"leader":"true",   "router":{"name":"compositeId"},   
"maxShardsPerNode":"1",   "autoAddReplicas":"false",   "nrtReplicas":"2",   
"tlogReplicas":"0"}

Stack Trace:
java.lang.AssertionError: Timeout waiting for new replica become leader
Timeout waiting to see state for collection=collection1 
:DocCollection(collection1//collections/collection1/state.json/6)={
  "pullReplicas":"0",
  "replicationFactor":"2",
  "shards":{"shard1":{
  "range":"8000-7fff",
  "state":"active",
  "replicas":{
"core_node3":{
  "core":"collection1_shard1_replica_n1",
  "base_url":"https://127.0.0.1:40291/solr;,
  "node_name":"127.0.0.1:40291_solr",
  "state":"active",
  "type":"NRT",
  "force_set_state":"false"},
"core_node4":{
  "core":"collection1_shard1_replica_n2",
  "base_url":"https://127.0.0.1:38587/solr;,
  "node_name":"127.0.0.1:38587_solr",
  "state":"active",
  "type":"NRT",
  "force_set_state":"false",
  "leader":"true",
  "router":{"name":"compositeId"},
  "maxShardsPerNode":"1",
  "autoAddReplicas":"false",
  "nrtReplicas":"2",
  "tlogReplicas":"0"}
Live Nodes: [127.0.0.1:38587_solr, 127.0.0.1:40291_solr]
Last available state: 
DocCollection(collection1//collections/collection1/state.json/6)={
  "pullReplicas":"0",
  "replicationFactor":"2",
  "shards":{"shard1":{
  "range":"8000-7fff",
  "state":"active",
  "replicas":{
"core_node3":{
  "core":"collection1_shard1_replica_n1",
  "base_url":"https://127.0.0.1:40291/solr;,
  "node_name":"127.0.0.1:40291_solr",
  "state":"active",
  "type":"NRT",
  "force_set_state":"false"},
"core_node4":{
  "core":"collection1_shard1_replica_n2",
  "base_url":"https://127.0.0.1:38587/solr;,
  "node_name":"127.0.0.1:38587_solr",
  "state":"active",
  "type":"NRT",
  "force_set_state":"false",
  "leader":"true",
  "router":{"name":"compositeId"},
  "maxShardsPerNode":"1",
  "autoAddReplicas":"false",
  "nrtReplicas":"2",
  "tlogReplicas":"0"}
at 
__randomizedtesting.SeedInfo.seed([8AC89CD3E35806E4:29CA3094DA46B1C]:0)
at org.junit.Assert.fail(Assert.java:88)
at 
org.apache.solr.cloud.SolrCloudTestCase.waitForState(SolrCloudTestCase.java:289)
at 
org.apache.solr.cloud.SolrCloudTestCase.waitForState(SolrCloudTestCase.java:267)
at 
org.apache.solr.cloud.LeaderTragicEventTest.test(LeaderTragicEventTest.java:84)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at 

[jira] [Comment Edited] (SOLR-13007) Use javabin instead of JSON to send messages to overseer

2019-01-14 Thread Bar Rotstein (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13007?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16742345#comment-16742345
 ] 

Bar Rotstein edited comment on SOLR-13007 at 1/14/19 5:48 PM:
--

Perhaps I got this wrong,

but I meant that cluster configs and such (more likely to be used for debugging) 
would be kept as JSON, while the messages sent to the overseer through zk would 
be sent as javabin.

I'll have a closer look at the code and see how feasible this is.

If this overcomplicates stuff, I guess it wouldn't be worth the effort.


was (Author: brot):
Perhaps I got this wrong,

but I meant that cluster configs and such (more likely to be used for debugging) 
would be kept as JSON, while the messages sent to the overseer through zk would 
be sent as javabin.

I'll have a closer look at the core and see how feasible this is.

> Use javabin instead of JSON to send messages to overseer
> 
>
> Key: SOLR-13007
> URL: https://issues.apache.org/jira/browse/SOLR-13007
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Noble Paul
>Priority: Major
>
> The messages themselves are ephemeral and the readability is not a big issue. 
> Using javabin can:
> * reduce the payload size
> * make processing faster



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13126) Inconsistent score in debug and result with multiple multiplicative boosts

2019-01-14 Thread Thomas Aglassinger (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13126?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16742343#comment-16742343
 ] 

Thomas Aglassinger commented on SOLR-13126:
---

We've been digging into this and managed to somewhat track the issue down, 
although unfortunately our knowledge of the inner workings of Solr, and Lucene 
in particular, is not sufficient to fix it and provide a patch.

We did however add logging statements that showcase the difference in the 
scoring for some trivial queries. To make the logging easier to read we 
refactored several anonymous classes to inner classes with expressive names and 
added several {{toString()}} functions. The log messages are deliberately 
written with level warning so we can easily separate them from Solr's own info 
and debug messages.

If it helps we can make these changes available although it's not feasible to 
merge them because they are only debug hacks.

Here's what we found out so far:

As described in the initial issue description, we can reproduce that the score 
of a query result is computed correctly in the explain segments but incorrectly 
in the actual result if only one of two multiplicative boost conditions matches. 
We have now further simplified our query by splitting it into 3 separate queries 
with a filter query on one specific document. The cases are:

 # name matches both boost (netzteil and sony): Original Sony Vaio Netzteil
 # name matches one boost (netzteil but not sony): GS-Netzteil 20W schwarz
 # name matches no boost (neither netzteil nor sony): Camcorderband DV 100min 
(2)

Attached you'll find the log files for these queries and the JSON of the queries 
themselves. This time we did not enable debugQuery, in order to log only the 
incorrect score of the actual result.

Each request was executed on a freshly restarted server (local, no replication, 
no shards) to ensure caching does not pollute the findings.

We made the following observations:
 # Both matches: Lucene detects both matches with {{QueryDocValues.exists()}} 
and then computes scores for them using {{QueryDocValues.floatVal()}}. This seems 
to be called eventually by the scorer utilized by the result of 
{{org.apache.lucene.search.DoubleValues#withDefault()}}, based on a formerly 
anonymous class renamed to DoubleValues_DoubleValuesWithDefault().
 # Single match: {{QueryDocValues.exists()}} detects one match and considers 
the other false (which seems correct). After that, however, it only seems to work 
with various variants of a constant score of 1.0, which in the end results in 
1.0. Notice that this query uses the same {{withDefault()}} as above but 
performs a very different computation, mostly based on constant values. There is 
no call to {{QueryDocValues.floatVal()}}.
 # No match: {{QueryDocValues.exists()}} does not find anything and results in 
a score of 1.0, as expected.
 # All logs seem to compute the score for a document with the ID -1, which 
utilizes {{QueryDocValues.floatVal()}}. As far as we understand, this seems to 
be some initialization step, independent of the actual query, that happens only 
for the first query sent to the server.

Interestingly, when you compare the logs for single and no match, they are almost 
identical apart from the {{QueryDocValues.exists()}}, an additional 
{{BooleanWeight()}} and various {{toString()}} hashes.

Our expectation would have been that queries for single and both matches would 
have produced a fairly similar log using similar scorers but different scores 
(2.0 vs 6.0).
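
To summarize the expected multiplicative scores per case (as laid out above and 
in the issue description) against what the actual result shows:
{code}
both terms match:  2.0 * 3.0           = 6.0   (actual: 6.0, correct)
one term matches:  2.0 * 1.0 (default) = 2.0   (actual: 1.0, wrong)
no term matches:   1.0 * 1.0 (default) = 1.0   (actual: 1.0, correct)
{code}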

As we can reproduce these results consistently in a small testing environment, 
we currently see the following options to proceed further:
 # With some hints on where to further dig into the source code we might be 
able to find the real culprit causing the inconsistent score. Any pointers?
 # We could make the solrconfig.xml, schema.xml and the core files for Solr 7.5 
available for someone else to debug who has a better grasp of the inner 
workings. Again, this is a small test environment with only a few documents, and 
we could probably reduce it further (e.g. by removing Solr fields unrelated 
to this issue).

Any help would be much appreciated,
 Thomas

> Inconsistent score in debug and result with multiple multiplicative boosts
> --
>
> Key: SOLR-13126
> URL: https://issues.apache.org/jira/browse/SOLR-13126
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: search
>Affects Versions: 7.5.0
> Environment: Reproduced with macOS 10.14.1, a quick test with Windows 
> 10 showed the same result.
>Reporter: Thomas Aglassinger
>Priority: Major
> Attachments: debugQuery.json, 
> solr_match_neither_nextteil_nor_sony.json, 
> 

[jira] [Commented] (SOLR-13007) Use javabin instead of JSON to send messages to overseer

2019-01-14 Thread Bar Rotstein (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13007?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16742345#comment-16742345
 ] 

Bar Rotstein commented on SOLR-13007:
-

Perhaps I got this wrong,

but I meant that cluster configs and such (more likely to be used for debugging) 
would be kept as JSON, while the messages sent to the overseer through zk would 
be sent as javabin.

I'll have a closer look at the core and see how feasible this is.

> Use javabin instead of JSON to send messages to overseer
> 
>
> Key: SOLR-13007
> URL: https://issues.apache.org/jira/browse/SOLR-13007
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Noble Paul
>Priority: Major
>
> The messages themselves are ephemeral and the readability is not a big issue. 
> Using javabin can:
> * reduce the payload size
> * make processing faster



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-13126) Inconsistent score in debug and result with multiple multiplicative boosts

2019-01-14 Thread Thomas Aglassinger (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13126?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Aglassinger updated SOLR-13126:
--
Attachment: solr_match_neither_nextteil_nor_sony.txt
solr_match_netzteil_and_sony.txt
solr_match_netzteil_only.txt
solr_match_netzteil_and_sony.json
solr_match_neither_nextteil_nor_sony.json
solr_match_netzteil_only.json

> Inconsistent score in debug and result with multiple multiplicative boosts
> --
>
> Key: SOLR-13126
> URL: https://issues.apache.org/jira/browse/SOLR-13126
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: search
>Affects Versions: 7.5.0
> Environment: Reproduced with macOS 10.14.1, a quick test with Windows 
> 10 showed the same result.
>Reporter: Thomas Aglassinger
>Priority: Major
> Attachments: debugQuery.json, 
> solr_match_neither_nextteil_nor_sony.json, 
> solr_match_neither_nextteil_nor_sony.txt, solr_match_netzteil_and_sony.json, 
> solr_match_netzteil_and_sony.txt, solr_match_netzteil_only.json, 
> solr_match_netzteil_only.txt
>
>
> Under certain circumstances search results from queries with multiple 
> multiplicative boosts using the Solr functions {{product()}} and {{query()}} 
> result in a score that is inconsistent with the one from the debugQuery 
> information. Also only the debug score is correct while the actual search 
> results show a wrong score.
> This seems somewhat similar to the behaviour described in 
> https://issues.apache.org/jira/browse/LUCENE-7132, though this issue has been 
> resolved a while ago.
> A little background: we are using Solr as a search platform for the 
> e-commerce framework SAP Hybris. There the shop administrator can create 
> multiplicative boost rules (see below for an example) where a value like 2.0 
> means that an item gets boosted to 200%. This works fine in the demo shop 
> distributed by SAP but breaks in our shop. We encountered the issue when 
> upgrading from Solr 7.2.1 / Hybris 6.7 to Solr 7.5 / Hybris 18.8.3 (which 
> would have been named Hybris 6.8 but the version naming schema changed).
> We reduced the Solr query generated by Hybris to the relevant parts and could 
> reproduce the issue in the Solr admin without any Hybris connection.
> I attached the JSON result of a test query but here's a description of the 
> parts that seemed most relevant to me.
> The {{responseHeader.params}} reads (slightly rearranged):
> {code:java}
> "q":"{!boost b=$ymb}(+{!lucene v=$yq})",
> "ymb":"product(query({!v=\"name_text_de\\:Netzteil\\^=2.0\"},1),query({!v=\"name_text_de\\:Sony\\^=3.0\"},1))",
> "yq":"*:*",
> "sort":"score desc",
> "debugQuery":"true",
> // Added to keep the output small but probably unrelated to the actual issue
> "fl":"score,id,code_string,name_text_de",
> "fq":"catalogId:\"someProducts\"",
> "rows":"10",
> {code}
> This example boosts the German product name (field {{name_text_de}}) in case 
> it contains certain terms:
>  * "Netzteil" (power supply) is boosted to 200%
>  * "Sony" is boosted to 300%
> Consequently a product containing both terms should be boosted to 600%.
> Also, the query function has the value 1 specified as default in case the name 
> does not contain the respective term, resulting in a pseudo-boost that 
> preserves the score.
> According to the debug information the parser used is the LuceneQParser, 
> which translates this to the following parsed query:
> {quote}FunctionScoreQuery(FunctionScoreQuery(+*:*, scored by 
> boost(product(query((ConstantScore(name_text_de:netzteil))^2.0,def=1.0),query((ConstantScore(name_text_de:sony))^3.0,def=1.0)
> {quote}
> And the translated boost is:
> {quote}org.apache.lucene.queries.function.valuesource.ProductFloatFunction:product(query((ConstantScore(name_text_de:netzteil))^2.0,def=1.0),query((ConstantScore(name_text_de:sony))^3.0,def=1.0))
> {quote}
> When taking a look at the search result, among other the following products 
> are included (see the JSON comments for an analysis of each result):
> {code:javascript}
>  {
> "id":"someProducts/Online/test711",
> "name_text_de":"Original Sony Vaio Netzteil",
> "code_string":"test711",
> // CORRECT, both "Netzteil" and "Sony" are included in the name
> "score":6.0},
>   {
> "id":"someProducts/Online/taxTestingProductThree",
> "name_text_de":"Steuertestprodukt Zwei",
> "code_string":"taxTestingProductThree",
> // CORRECT, neither "Netzteil" nor "Sony" are included in the name
> "score":1.0},
>   {
> "id":"someProducts/Online/79785630",
> 

[jira] [Created] (LUCENE-8637) WeightedSpanTermExtractor unnecessarily enforces rewrite for some SpanQueries

2019-01-14 Thread Christoph Goller (JIRA)
Christoph Goller created LUCENE-8637:


 Summary: WeightedSpanTermExtractor unnecessarily enforces rewrite 
for some SpanQueries
 Key: LUCENE-8637
 URL: https://issues.apache.org/jira/browse/LUCENE-8637
 Project: Lucene - Core
  Issue Type: Bug
  Components: modules/highlighter
Affects Versions: 7.5, 7.3.1, 7.4, 7.6
Reporter: Christoph Goller


Method mustRewriteQuery(SpanQuery) returns true for SpanPositionCheckQuery, 
SpanContainingQuery, SpanWithinQuery, and SpanBoostQuery; however, these 
queries do not require rewriting. One effect of this is, e.g., that the 
UnifiedHighlighter does not work with the Postings offset source and switches to 
Analysis, which of course has performance consequences.

I attach a patch for Lucene 7.6.0. I have not checked whether it breaks 
existing unit tests.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Cannot find doap file: http://lucene.apache.org/solr/doap.rdf

2019-01-14 Thread Steve Rowe
Hi sebb,

From the .htaccess file at http://lucene.apache.org/ :

-
# DOAP file redirects to source repository
RedirectMatch Permanent /core/doap.rdf https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;a=blob_plain;f=dev-tools/doap/lucene.rdf;hb=HEAD
RedirectMatch Permanent /solr/doap.rdf https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;a=blob_plain;f=dev-tools/doap/solr.rdf;hb=HEAD
-

We switched to gitbox last week; for a short period, after the git-wip-us 
repo was removed and before a redirect to gitbox was put in place, these DOAP 
files were inaccessible.  There is now a redirect from git-wip-us URLs to the 
gitbox repo, and for me the DOAP files are downloadable.

I have now changed the .htaccess file entries above to point to the gitbox repo.

--
Steve

> On Jan 12, 2019, at 4:06 AM, sebb  wrote:
> 
> Please can you fix the missing DOAP?
> 
> Either replace the file on the website, or update projects.xml with
> the new location
> 
> Likewise for the file:
> 
> http://lucene.apache.org/core/doap.rdf
> 
> -- Forwarded message -
> From: Projects 
> Date: Sat, 12 Jan 2019 at 02:01
> Subject: Cannot find doap file: http://lucene.apache.org/solr/doap.rdf
> To: Site Development 
> 
> 
> URL: http://lucene.apache.org/solr/doap.rdf
> HTTP Error 404: Not Found
> Source: 
> https://svn.apache.org/repos/asf/comdev/projects.apache.org/trunk/data/projects.xml
> 
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
> 


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13007) Use javabin instead of JSON to send messages to overseer

2019-01-14 Thread JIRA


[ 
https://issues.apache.org/jira/browse/SOLR-13007?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16742295#comment-16742295
 ] 

Tomás Fernández Löbbe commented on SOLR-13007:
--

I'm not sure I follow. Messages to the Overseer are sent via ZooKeeper.

> Use javabin instead of JSON to send messages to overseer
> 
>
> Key: SOLR-13007
> URL: https://issues.apache.org/jira/browse/SOLR-13007
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Noble Paul
>Priority: Major
>
> The messages themselves are ephemeral and the readability is not a big issue. 
> Using javabin can:
> * reduce the payload size
> * make processing faster



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13007) Use javabin instead of JSON to send messages to overseer

2019-01-14 Thread Bar Rotstein (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13007?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16742291#comment-16742291
 ] 

Bar Rotstein commented on SOLR-13007:
-

How about the messages being sent as javabin to the overseer, but stored as JSON in 
zookeeper?

Would that be alright?

> Use javabin instead of JSON to send messages to overseer
> 
>
> Key: SOLR-13007
> URL: https://issues.apache.org/jira/browse/SOLR-13007
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Noble Paul
>Priority: Major
>
> The messages themselves are ephemeral and the readability is not a big issue. 
> Using javabin can:
> * reduce the payload size
> * make processing faster



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13007) Use javabin instead of JSON to send messages to overseer

2019-01-14 Thread JIRA


[ 
https://issues.apache.org/jira/browse/SOLR-13007?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16742284#comment-16742284
 ] 

Tomás Fernández Löbbe commented on SOLR-13007:
--

I'm not sure this is a good idea. The messages to the overseer are usually very 
small, so I don't think the gain will be that much (speed or size). The impact on 
readability is big IMO, especially when you need to read them the most, like 
during a prod outage or when debugging something.
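
For a rough sense of the trade-off, here is a minimal sketch that serializes a 
typical overseer-style message both ways (the message fields are made up for 
illustration; this is not from any patch):
{code:java}
import org.apache.solr.common.util.JavaBinCodec;
import org.apache.solr.common.util.Utils;

import java.io.ByteArrayOutputStream;
import java.util.Map;

public class OverseerMsgSize {
  public static void main(String[] args) throws Exception {
    Map<String, Object> msg = Utils.makeMap(
        "operation", "state",
        "core", "collection1_shard1_replica_n1",
        "state", "active");

    byte[] json = Utils.toJSON(msg);                 // current format
    ByteArrayOutputStream javabin = new ByteArrayOutputStream();
    new JavaBinCodec().marshal(msg, javabin);        // proposed format

    System.out.println("json=" + json.length
        + " bytes, javabin=" + javabin.size() + " bytes");
  }
}
{code}
Running something like this against real messages would make it easy to 
quantify whether the size gain justifies the readability loss described above.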

> Use javabin instead of JSON to send messages to overseer
> 
>
> Key: SOLR-13007
> URL: https://issues.apache.org/jira/browse/SOLR-13007
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Noble Paul
>Priority: Major
>
> The messages themselves are ephemeral and the readability is not a big issue. 
> Using javabin can:
> * reduce the payload size
> * make processing faster



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-2562) Make Luke a Lucene/Solr Module

2019-01-14 Thread Tomoko Uchida (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-2562?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16742254#comment-16742254
 ] 

Tomoko Uchida commented on LUCENE-2562:
---

Thank you, Uwe.
{quote}I would also favour to remove Guice, if it's easy to do.
{quote}
Yes, it should be technically easy to remove Guice, though some work is needed.

I'd also like to remove a few more libraries (ini4j and FindBugs) which are 
no longer maintained.
{quote}I mentioned a small thing on the Pull Request: Please don't allow writes 
to sysprops GLOBALLY!
{quote}
(If I remember right) I modified the policy file because unit tests encountered 
errors coming from log4j. I will try to find workarounds for it, so let me 
discuss the details with you later.

> Make Luke a Lucene/Solr Module
> --
>
> Key: LUCENE-2562
> URL: https://issues.apache.org/jira/browse/LUCENE-2562
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Mark Miller
>Priority: Major
>  Labels: gsoc2014
> Attachments: LUCENE-2562-Ivy.patch, LUCENE-2562-Ivy.patch, 
> LUCENE-2562-Ivy.patch, LUCENE-2562-ivy.patch, LUCENE-2562.patch, 
> LUCENE-2562.patch, Luke-ALE-1.png, Luke-ALE-2.png, Luke-ALE-3.png, 
> Luke-ALE-4.png, Luke-ALE-5.png, luke-javafx1.png, luke-javafx2.png, 
> luke-javafx3.png, luke1.jpg, luke2.jpg, luke3.jpg, lukeALE-documents.png, 
> スクリーンショット 2018-11-05 9.19.47.png
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> see
> "RE: Luke - in need of maintainer": 
> http://markmail.org/message/m4gsto7giltvrpuf
> "Web-based Luke": http://markmail.org/message/4xwps7p7ifltme5q
> I think it would be great if there was a version of Luke that always worked 
> with trunk - and it would also be great if it was easier to match Luke jars 
> with Lucene versions.
> While I'd like to get GWT Luke into the mix as well, I think the easiest 
> starting point is to straight port Luke to another UI toolkit before 
> abstracting out DTO objects that both GWT Luke and Pivot Luke could share.
> I've started slowly converting Luke's use of thinlet to Apache Pivot. I 
> haven't/don't have a lot of time for this at the moment, but I've plugged 
> away here and there over the past work or two. There is still a *lot* to do.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-SmokeRelease-7.x - Build # 428 - Failure

2019-01-14 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-7.x/428/

No tests ran.

Build Log:
[...truncated 23484 lines...]
[asciidoctor:convert] asciidoctor: ERROR: about-this-guide.adoc: line 1: 
invalid part, must have at least one section (e.g., chapter, appendix, etc.)
[asciidoctor:convert] asciidoctor: ERROR: solr-glossary.adoc: line 1: invalid 
part, must have at least one section (e.g., chapter, appendix, etc.)
 [java] Processed 2458 links (2009 relative) to 3223 anchors in 246 files
 [echo] Validated Links & Anchors via: 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/solr/build/solr-ref-guide/bare-bones-html/

-dist-changes:
 [copy] Copying 4 files to 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/solr/package/changes

package:

-unpack-solr-tgz:

-ensure-solr-tgz-exists:
[mkdir] Created dir: 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/solr/build/solr.tgz.unpacked
[untar] Expanding: 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/solr/package/solr-7.7.0.tgz
 into 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/solr/build/solr.tgz.unpacked

generate-maven-artifacts:

resolve:

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/top-level-ivy-settings.xml

resolve:

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:

[jira] [Commented] (LUCENE-8636) TestPointQueries times out on nightly

2019-01-14 Thread Dawid Weiss (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8636?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16742116#comment-16742116
 ] 

Dawid Weiss commented on LUCENE-8636:
-

They all call the same underlying method with a different maximum document count.  
The "big one" is a nightly test:
{code}
  public void testRandomBinaryTiny() throws Exception {
doTestRandomBinary(10);
  }

  public void testRandomBinaryMedium() throws Exception {
doTestRandomBinary(10000);
  }

  @Nightly
  public void testRandomBinaryBig() throws Exception {
doTestRandomBinary(100000);
  }
{code}

I'm not sure what it is you'd like to do :)


> TestPointQueries times out on nightly
> -
>
> Key: LUCENE-8636
> URL: https://issues.apache.org/jira/browse/LUCENE-8636
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Dawid Weiss
>Assignee: Dawid Weiss
>Priority: Minor
> Fix For: 8.0
>
> Attachments: LUCENE-8636.patch
>
>
> Nightlies have failed with a suite timeout on:
> {code}
> -Dtestcase=TestPointQueries -Dtests.method=testRandomBinaryBig 
> -Dtests.seed=81DB11C283A04F59 -Dtests.multiplier=2 -Dtests.nightly=true 
> -Dtests.slow=true -Dtests.asserts=true -Dtests.file.encoding=US-ASCII
> {code}
> This is a result of plain text codec being used and a large volume of 
> repetitions.
> I'll disable plain text codec on that test.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8636) TestPointQueries times out on nightly

2019-01-14 Thread Adrien Grand (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8636?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16742114#comment-16742114
 ] 

Adrien Grand commented on LUCENE-8636:
--

Thanks for checking. Maybe we should remove this test then; I don't think it 
will tell us more than testRandomBinaryBig?

> TestPointQueries times out on nightly
> -
>
> Key: LUCENE-8636
> URL: https://issues.apache.org/jira/browse/LUCENE-8636
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Dawid Weiss
>Assignee: Dawid Weiss
>Priority: Minor
> Fix For: 8.0
>
> Attachments: LUCENE-8636.patch
>
>
> Nightlies have failed with a suite timeout on:
> {code}
> -Dtestcase=TestPointQueries -Dtests.method=testRandomBinaryBig 
> -Dtests.seed=81DB11C283A04F59 -Dtests.multiplier=2 -Dtests.nightly=true 
> -Dtests.slow=true -Dtests.asserts=true -Dtests.file.encoding=US-ASCII
> {code}
> This is a result of plain text codec being used and a large volume of 
> repetitions.
> I'll disable plain text codec on that test.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8636) TestPointQueries times out on nightly

2019-01-14 Thread Dawid Weiss (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8636?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16742106#comment-16742106
 ] 

Dawid Weiss commented on LUCENE-8636:
-

Even with those changes I get 30 seconds for testRandomBinaryBig on that test. 
Maybe you have a much faster machine (very likely), but those Apache build 
boxes will time out, I'm pretty sure of that.

> TestPointQueries times out on nightly
> -
>
> Key: LUCENE-8636
> URL: https://issues.apache.org/jira/browse/LUCENE-8636
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Dawid Weiss
>Assignee: Dawid Weiss
>Priority: Minor
> Fix For: 8.0
>
> Attachments: LUCENE-8636.patch
>
>
> Nightlies have failed with a suite timeout on:
> {code}
> -Dtestcase=TestPointQueries -Dtests.method=testRandomBinaryBig 
> -Dtests.seed=81DB11C283A04F59 -Dtests.multiplier=2 -Dtests.nightly=true 
> -Dtests.slow=true -Dtests.asserts=true -Dtests.file.encoding=US-ASCII
> {code}
> This is a result of plain text codec being used and a large volume of 
> repetitions.
> I'll disable plain text codec on that test.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS-EA] Lucene-Solr-8.x-Windows (64bit/jdk-12-ea+23) - Build # 9 - Failure!

2019-01-14 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-8.x-Windows/9/
Java: 64bit/jdk-12-ea+23 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

All tests passed

Build Log:
[...truncated 2100 lines...]
   [junit4] JVM J0: stderr was not empty, see: 
C:\Users\jenkins\workspace\Lucene-Solr-8.x-Windows\lucene\build\core\test\temp\junit4-J0-20190114_125432_7012102254238971452822.syserr
   [junit4] >>> JVM J0 emitted unexpected output (verbatim) 
   [junit4] OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was 
deprecated in version 9.0 and will likely be removed in a future release.
   [junit4] <<< JVM J0: EOF 

[...truncated 5 lines...]
   [junit4] JVM J1: stderr was not empty, see: 
C:\Users\jenkins\workspace\Lucene-Solr-8.x-Windows\lucene\build\core\test\temp\junit4-J1-20190114_125432_70116995077423182291815.syserr
   [junit4] >>> JVM J1 emitted unexpected output (verbatim) 
   [junit4] OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was 
deprecated in version 9.0 and will likely be removed in a future release.
   [junit4] <<< JVM J1: EOF 

[...truncated 317 lines...]
   [junit4] JVM J0: stderr was not empty, see: 
C:\Users\jenkins\workspace\Lucene-Solr-8.x-Windows\lucene\build\test-framework\test\temp\junit4-J0-20190114_130323_81017295061968296646662.syserr
   [junit4] >>> JVM J0 emitted unexpected output (verbatim) 
   [junit4] OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was 
deprecated in version 9.0 and will likely be removed in a future release.
   [junit4] <<< JVM J0: EOF 

[...truncated 5 lines...]
   [junit4] JVM J1: stderr was not empty, see: 
C:\Users\jenkins\workspace\Lucene-Solr-8.x-Windows\lucene\build\test-framework\test\temp\junit4-J1-20190114_130323_81013710514441321883486.syserr
   [junit4] >>> JVM J1 emitted unexpected output (verbatim) 
   [junit4] OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was 
deprecated in version 9.0 and will likely be removed in a future release.
   [junit4] <<< JVM J1: EOF 

[...truncated 1084 lines...]
   [junit4] JVM J0: stderr was not empty, see: 
C:\Users\jenkins\workspace\Lucene-Solr-8.x-Windows\lucene\build\analysis\common\test\temp\junit4-J0-20190114_130457_0535637105672123856200.syserr
   [junit4] >>> JVM J0 emitted unexpected output (verbatim) 
   [junit4] OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was 
deprecated in version 9.0 and will likely be removed in a future release.
   [junit4] <<< JVM J0: EOF 

   [junit4] JVM J1: stderr was not empty, see: 
C:\Users\jenkins\workspace\Lucene-Solr-8.x-Windows\lucene\build\analysis\common\test\temp\junit4-J1-20190114_130457_05312319913466052730035.syserr
   [junit4] >>> JVM J1 emitted unexpected output (verbatim) 
   [junit4] OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was 
deprecated in version 9.0 and will likely be removed in a future release.
   [junit4] <<< JVM J1: EOF 

[...truncated 257 lines...]
   [junit4] JVM J1: stderr was not empty, see: 
C:\Users\jenkins\workspace\Lucene-Solr-8.x-Windows\lucene\build\analysis\icu\test\temp\junit4-J1-20190114_130747_03911306890575093734414.syserr
   [junit4] >>> JVM J1 emitted unexpected output (verbatim) 
   [junit4] OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was 
deprecated in version 9.0 and will likely be removed in a future release.
   [junit4] <<< JVM J1: EOF 

[...truncated 3 lines...]
   [junit4] JVM J0: stderr was not empty, see: 
C:\Users\jenkins\workspace\Lucene-Solr-8.x-Windows\lucene\build\analysis\icu\test\temp\junit4-J0-20190114_130747_03913996080565120920269.syserr
   [junit4] >>> JVM J0 emitted unexpected output (verbatim) 
   [junit4] OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was 
deprecated in version 9.0 and will likely be removed in a future release.
   [junit4] <<< JVM J0: EOF 

[...truncated 254 lines...]
   [junit4] JVM J1: stderr was not empty, see: 
C:\Users\jenkins\workspace\Lucene-Solr-8.x-Windows\lucene\build\analysis\kuromoji\test\temp\junit4-J1-20190114_130802_5269685056548006729149.syserr
   [junit4] >>> JVM J1 emitted unexpected output (verbatim) 
   [junit4] OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was 
deprecated in version 9.0 and will likely be removed in a future release.
   [junit4] <<< JVM J1: EOF 

[...truncated 3 lines...]
   [junit4] JVM J0: stderr was not empty, see: 
C:\Users\jenkins\workspace\Lucene-Solr-8.x-Windows\lucene\build\analysis\kuromoji\test\temp\junit4-J0-20190114_130802_52610630340362013032735.syserr
   [junit4] >>> JVM J0 emitted unexpected output (verbatim) 
   [junit4] OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was 
deprecated in version 9.0 and will likely be removed in a future release.
   [junit4] <<< JVM J0: EOF 

[...truncated 163 lines...]
   [junit4] JVM J0: stderr was not empty, see: 

[jira] [Commented] (LUCENE-8636) TestPointQueries times out on nightly

2019-01-14 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8636?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16742094#comment-16742094
 ] 

ASF subversion and git services commented on LUCENE-8636:
-

Commit d965b3547e2fb87cb7551687bae312a0ff62e526 in lucene-solr's branch 
refs/heads/master from Dawid Weiss
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=d965b35 ]

LUCENE-8636: TestPointQueries times out on nightly (decreased big range to 50k, 
excluded simple text codec).


> TestPointQueries times out on nightly
> -
>
> Key: LUCENE-8636
> URL: https://issues.apache.org/jira/browse/LUCENE-8636
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Dawid Weiss
>Assignee: Dawid Weiss
>Priority: Minor
> Fix For: 8.0
>
> Attachments: LUCENE-8636.patch
>
>
> Nightlies have failed with a suite timeout on:
> {code}
> -Dtestcase=TestPointQueries -Dtests.method=testRandomBinaryBig 
> -Dtests.seed=81DB11C283A04F59 -Dtests.multiplier=2 -Dtests.nightly=true 
> -Dtests.slow=true -Dtests.asserts=true -Dtests.file.encoding=US-ASCII
> {code}
> This is a result of plain text codec being used and a large volume of 
> repetitions.
> I'll disable plain text codec on that test.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8636) TestPointQueries times out on nightly

2019-01-14 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8636?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16742095#comment-16742095
 ] 

ASF subversion and git services commented on LUCENE-8636:
-

Commit c38f87d966c39831a2285da96875b4a721e57423 in lucene-solr's branch 
refs/heads/branch_8x from Dawid Weiss
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=c38f87d ]

LUCENE-8636: TestPointQueries times out on nightly (decreased big range to 50k, 
excluded simple text codec).


> TestPointQueries times out on nightly
> -
>
> Key: LUCENE-8636
> URL: https://issues.apache.org/jira/browse/LUCENE-8636
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Dawid Weiss
>Assignee: Dawid Weiss
>Priority: Minor
> Fix For: 8.0
>
> Attachments: LUCENE-8636.patch
>
>
> Nightlies have failed with a suite timeout on:
> {code}
> -Dtestcase=TestPointQueries -Dtests.method=testRandomBinaryBig 
> -Dtests.seed=81DB11C283A04F59 -Dtests.multiplier=2 -Dtests.nightly=true 
> -Dtests.slow=true -Dtests.asserts=true -Dtests.file.encoding=US-ASCII
> {code}
> This is a result of plain text codec being used and a large volume of 
> repetitions.
> I'll disable plain text codec on that test.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8636) TestPointQueries times out on nightly

2019-01-14 Thread Dawid Weiss (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8636?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16742100#comment-16742100
 ] 

Dawid Weiss commented on LUCENE-8636:
-

Hi Adrien. I've already committed this patch since it seemed trivial. No 
problem in improving it though. I'll follow-up.

> TestPointQueries times out on nightly
> -
>
> Key: LUCENE-8636
> URL: https://issues.apache.org/jira/browse/LUCENE-8636
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Dawid Weiss
>Assignee: Dawid Weiss
>Priority: Minor
> Fix For: 8.0
>
> Attachments: LUCENE-8636.patch
>
>
> Nightlies have failed with a suite timeout on:
> {code}
> -Dtestcase=TestPointQueries -Dtests.method=testRandomBinaryBig 
> -Dtests.seed=81DB11C283A04F59 -Dtests.multiplier=2 -Dtests.nightly=true 
> -Dtests.slow=true -Dtests.asserts=true -Dtests.file.encoding=US-ASCII
> {code}
> This is a result of plain text codec being used and a large volume of 
> repetitions.
> I'll disable plain text codec on that test.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8636) TestPointQueries times out on nightly

2019-01-14 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8636?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16742097#comment-16742097
 ] 

ASF subversion and git services commented on LUCENE-8636:
-

Commit b462d8ed2a29b6c9811cba73efb2799e12d1ff63 in lucene-solr's branch 
refs/heads/branch_7x from Dawid Weiss
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=b462d8e ]

LUCENE-8636: TestPointQueries times out on nightly (decreased big range to 50k, 
excluded simple text codec).


> TestPointQueries times out on nightly
> -
>
> Key: LUCENE-8636
> URL: https://issues.apache.org/jira/browse/LUCENE-8636
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Dawid Weiss
>Assignee: Dawid Weiss
>Priority: Minor
> Fix For: 8.0
>
> Attachments: LUCENE-8636.patch
>
>
> Nightlies have failed with a suite timeout on:
> {code}
> -Dtestcase=TestPointQueries -Dtests.method=testRandomBinaryBig 
> -Dtests.seed=81DB11C283A04F59 -Dtests.multiplier=2 -Dtests.nightly=true 
> -Dtests.slow=true -Dtests.asserts=true -Dtests.file.encoding=US-ASCII
> {code}
> This is a result of plain text codec being used and a large volume of 
> repetitions.
> I'll disable plain text codec on that test.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8636) TestPointQueries times out on nightly

2019-01-14 Thread Adrien Grand (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8636?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16742092#comment-16742092
 ] 

Adrien Grand commented on LUCENE-8636:
--

Hi Dawid, thanks for looking. It gives the nightly test a number of documents 
that is not much higher than testRandomBinaryMedium. Since this test is more 
about querying than indexing, maybe we should disable RandomIndexWriter 
instead? The patch below makes the test run in 4 seconds on my machine with the 
reproduction line that you shared.

{code}
diff --git 
a/lucene/core/src/test/org/apache/lucene/search/TestPointQueries.java 
b/lucene/core/src/test/org/apache/lucene/search/TestPointQueries.java
index 90df7c3..39bb32e 100644
--- a/lucene/core/src/test/org/apache/lucene/search/TestPointQueries.java
+++ b/lucene/core/src/test/org/apache/lucene/search/TestPointQueries.java
@@ -675,7 +675,7 @@ public class TestPointQueries extends LuceneTestCase {
   dir = newDirectory();
 }
 
-RandomIndexWriter w = new RandomIndexWriter(random(), dir, iwc);
+IndexWriter w = new IndexWriter(dir, iwc);
 
 int numValues = docValues.length;
 if (VERBOSE) {
@@ -742,7 +742,7 @@ public class TestPointQueries extends LuceneTestCase {
   }
   w.forceMerge(1);
 }
-final IndexReader r = w.getReader();
+final IndexReader r = DirectoryReader.open(w);
 w.close();
 
 IndexSearcher s = newSearcher(r, false);
{code}

> TestPointQueries times out on nightly
> -
>
> Key: LUCENE-8636
> URL: https://issues.apache.org/jira/browse/LUCENE-8636
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Dawid Weiss
>Assignee: Dawid Weiss
>Priority: Minor
> Fix For: 8.0
>
> Attachments: LUCENE-8636.patch
>
>
> Nightlies have failed with a suite timeout on:
> {code}
> -Dtestcase=TestPointQueries -Dtests.method=testRandomBinaryBig 
> -Dtests.seed=81DB11C283A04F59 -Dtests.multiplier=2 -Dtests.nightly=true 
> -Dtests.slow=true -Dtests.asserts=true -Dtests.file.encoding=US-ASCII
> {code}
> This is a result of plain text codec being used and a large volume of 
> repetitions.
> I'll disable plain text codec on that test.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Three quick questions

2019-01-14 Thread Michael McCandless
Hi John,

1. When the total size across all in-memory segments crosses 100 MB, IW
will pick the largest segment(s) to move to disk.
2. There are multiple in-memory segments; when an indexing thread comes in,
it will write to one segment, and no other thread can write to that segment
while that thread is indexing the one document.  We used to have some thread
affinity so a given thread would prefer to write to the same in-memory
segment, but I think we don't do that anymore.  If an indexing thread
arrives to index a doc but all in-memory segments are already handling
other documents, then we will create a new in-memory segment for that
thread to write to.
3. Should be .del files.
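
For point 1, here's a quick sketch of where that threshold is configured
(standard Lucene API; the analyzer and index path are just placeholders):

  import org.apache.lucene.analysis.standard.StandardAnalyzer;
  import org.apache.lucene.index.IndexWriter;
  import org.apache.lucene.index.IndexWriterConfig;
  import org.apache.lucene.store.FSDirectory;
  import java.nio.file.Paths;

  public class RamBufferExample {
    public static void main(String[] args) throws Exception {
      IndexWriterConfig iwc = new IndexWriterConfig(new StandardAnalyzer());
      // flush is triggered when the *total* RAM used across all in-memory
      // segments crosses this threshold, not per indexing thread
      iwc.setRAMBufferSizeMB(100.0);
      try (IndexWriter w = new IndexWriter(FSDirectory.open(Paths.get("index")), iwc)) {
        // index documents from multiple threads here...
      }
    }
  }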

Mike McCandless

http://blog.mikemccandless.com


On Mon, Jan 7, 2019 at 7:08 PM John Wilson  wrote:

> Hi,
>
>
>1. Assume I have two index writer threads using an IndexWriter object
>(IndexWriter is thread safe) and my ramBufferSizeMB is set to 100M, then
>are segments created when each thread writes 100M or when the total size
>written in the buffers is 100M?
>2. Does each index writer thread writes its own segment or the two
>writers can write to the same segment (requiring synchronization)?
>3. When a document is deleted/updated, the document is marked and this
>info is stored in a separate file so that the next merge deletes the
>document. What would be a typical name (or extension) of the file in my
>index directory?
>
> Thanks,
> John
>


[jira] [Updated] (SOLR-13125) Optimize Queries when sorting by router.field

2019-01-14 Thread mosh (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13125?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mosh updated SOLR-13125:

Attachment: (was: SOLR-13125-no-commit.patch)

> Optimize Queries when sorting by router.field
> -
>
> Key: SOLR-13125
> URL: https://issues.apache.org/jira/browse/SOLR-13125
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: mosh
>Priority: Minor
> Attachments: SOLR-13125-no-commit.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> We are currently testing TRA using Solr 7.7, with >300 shards in the alias 
> and much growth expected in the coming months.
> The "hot" data (in our case, the more recent) will be stored on stronger 
> nodes (SSD, more RAM, etc.).
> A proposal has emerged to optimize queries sorted by router.field (the field 
> which the TRA uses to route data to the correct collection).
> Perhaps, in queries sorted by router.field, Solr could be smart enough to 
> wait for the more recent collections and, once the limit is reached, cancel 
> the other queries (or simply not block waiting for their results)?
> For example:
> Suppose we query a TRA with a filter on a field other than router.field, but 
> sort by router.field desc with limit=100.
> Since this is a TRA, Solr will issue queries for all the collections in the 
> alias.
> But to optimize this particular type of query, Solr could wait for the most 
> recent collection in the TRA and check whether its result set meets or 
> exceeds the limit. If so, the results could be returned to the user without 
> waiting for the rest of the shards. If not, the issuing node would block 
> until the next query returns, and so forth, until the limit of the request 
> is reached.
> This might also be useful for deep paging: query each collection and only 
> skip to the next one once there are no more results in the current 
> collection.
> Thoughts or inputs are always welcome.
> This is just my two cents, and I'm always happy to brainstorm.
> Thanks in advance.
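
For illustration, a rough client-side sketch of the proposed strategy (the
collection ordering and names are hypothetical; a real patch would live inside
Solr's distributed query path, not in SolrJ code):

{code}
import java.util.List;

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.CloudSolrClient;
import org.apache.solr.common.SolrDocumentList;

public class SequentialTraQuery {
  // Query TRA collections newest-first; because results are sorted by
  // router.field desc, we can stop as soon as `limit` docs are collected.
  static SolrDocumentList query(CloudSolrClient client,
                                List<String> collectionsNewestFirst,
                                SolrQuery q, int limit) throws Exception {
    SolrDocumentList collected = new SolrDocumentList();
    for (String collection : collectionsNewestFirst) {
      collected.addAll(client.query(collection, q).getResults());
      if (collected.size() >= limit) {
        break; // limit reached: skip (or cancel) the older collections
      }
    }
    return collected;
  }
}
{code}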



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-13125) Optimize Queries when sorting by router.field

2019-01-14 Thread mosh (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13125?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mosh updated SOLR-13125:

Attachment: SOLR-13125-no-commit.patch

> Optimize Queries when sorting by router.field
> -
>
> Key: SOLR-13125
> URL: https://issues.apache.org/jira/browse/SOLR-13125
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: mosh
>Priority: Minor
> Attachments: SOLR-13125-no-commit.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> We are currently testing TRA using Solr 7.7, with >300 shards in the alias 
> and much growth expected in the coming months.
> The "hot" data (in our case, the more recent) will be stored on stronger 
> nodes (SSD, more RAM, etc.).
> A proposal has emerged to optimize queries sorted by router.field (the field 
> which the TRA uses to route data to the correct collection).
> Perhaps, in queries sorted by router.field, Solr could be smart enough to 
> wait for the more recent collections and, once the limit is reached, cancel 
> the other queries (or simply not block waiting for their results)?
> For example:
> Suppose we query a TRA with a filter on a field other than router.field, but 
> sort by router.field desc with limit=100.
> Since this is a TRA, Solr will issue queries for all the collections in the 
> alias.
> But to optimize this particular type of query, Solr could wait for the most 
> recent collection in the TRA and check whether its result set meets or 
> exceeds the limit. If so, the results could be returned to the user without 
> waiting for the rest of the shards. If not, the issuing node would block 
> until the next query returns, and so forth, until the limit of the request 
> is reached.
> This might also be useful for deep paging: query each collection and only 
> skip to the next one once there are no more results in the current 
> collection.
> Thoughts or inputs are always welcome.
> This is just my two cents, and I'm always happy to brainstorm.
> Thanks in advance.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-13125) Optimize Queries when sorting by router.field

2019-01-14 Thread mosh (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13125?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mosh updated SOLR-13125:

Attachment: SOLR-13125-no-commit.patch

> Optimize Queries when sorting by router.field
> -
>
> Key: SOLR-13125
> URL: https://issues.apache.org/jira/browse/SOLR-13125
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: mosh
>Priority: Minor
> Attachments: SOLR-13125-no-commit.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> We are currently testing TRA using Solr 7.7, with >300 shards in the alias 
> and much growth expected in the coming months.
> The "hot" data (in our case, the more recent) will be stored on stronger 
> nodes (SSD, more RAM, etc.).
> A proposal has emerged to optimize queries sorted by router.field (the field 
> which the TRA uses to route data to the correct collection).
> Perhaps, in queries sorted by router.field, Solr could be smart enough to 
> wait for the more recent collections and, once the limit is reached, cancel 
> the other queries (or simply not block waiting for their results)?
> For example:
> Suppose we query a TRA with a filter on a field other than router.field, but 
> sort by router.field desc with limit=100.
> Since this is a TRA, Solr will issue queries for all the collections in the 
> alias.
> But to optimize this particular type of query, Solr could wait for the most 
> recent collection in the TRA and check whether its result set meets or 
> exceeds the limit. If so, the results could be returned to the user without 
> waiting for the rest of the shards. If not, the issuing node would block 
> until the next query returns, and so forth, until the limit of the request 
> is reached.
> This might also be useful for deep paging: query each collection and only 
> skip to the next one once there are no more results in the current 
> collection.
> Thoughts or inputs are always welcome.
> This is just my two cents, and I'm always happy to brainstorm.
> Thanks in advance.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13125) Optimize Queries when sorting by router.field

2019-01-14 Thread mosh (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13125?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16742062#comment-16742062
 ] 

mosh commented on SOLR-13125:
-

Just added a no-commit patch, which is up to date with the linked PR.

> Optimize Queries when sorting by router.field
> -
>
> Key: SOLR-13125
> URL: https://issues.apache.org/jira/browse/SOLR-13125
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: mosh
>Priority: Minor
> Attachments: SOLR-13125-no-commit.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> We are currently testing TRA using Solr 7.7, with >300 shards in the alias 
> and much growth expected in the coming months.
> The "hot" data (in our case, the more recent) will be stored on stronger 
> nodes (SSD, more RAM, etc.).
> A proposal has emerged to optimize queries sorted by router.field (the field 
> which the TRA uses to route data to the correct collection).
> Perhaps, in queries sorted by router.field, Solr could be smart enough to 
> wait for the more recent collections and, once the limit is reached, cancel 
> the other queries (or simply not block waiting for their results)?
> For example:
> Suppose we query a TRA with a filter on a field other than router.field, but 
> sort by router.field desc with limit=100.
> Since this is a TRA, Solr will issue queries for all the collections in the 
> alias.
> But to optimize this particular type of query, Solr could wait for the most 
> recent collection in the TRA and check whether its result set meets or 
> exceeds the limit. If so, the results could be returned to the user without 
> waiting for the rest of the shards. If not, the issuing node would block 
> until the next query returns, and so forth, until the limit of the request 
> is reached.
> This might also be useful for deep paging: query each collection and only 
> skip to the next one once there are no more results in the current 
> collection.
> Thoughts or inputs are always welcome.
> This is just my two cents, and I'm always happy to brainstorm.
> Thanks in advance.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-8636) TestPointQueries times out on nightly

2019-01-14 Thread Dawid Weiss (JIRA)


 [ 
https://issues.apache.org/jira/browse/LUCENE-8636?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Weiss updated LUCENE-8636:

Attachment: LUCENE-8636.patch

> TestPointQueries times out on nightly
> -
>
> Key: LUCENE-8636
> URL: https://issues.apache.org/jira/browse/LUCENE-8636
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Dawid Weiss
>Assignee: Dawid Weiss
>Priority: Minor
> Fix For: 8.0
>
> Attachments: LUCENE-8636.patch
>
>
> Nightlies have failed with a suite timeout on:
> {code}
> -Dtestcase=TestPointQueries -Dtests.method=testRandomBinaryBig 
> -Dtests.seed=81DB11C283A04F59 -Dtests.multiplier=2 -Dtests.nightly=true 
> -Dtests.slow=true -Dtests.asserts=true -Dtests.file.encoding=US-ASCII
> {code}
> This is a result of the plain-text codec being used and a large volume of 
> repetitions.
> I'll disable the plain-text codec on that test.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-2562) Make Luke a Lucene/Solr Module

2019-01-14 Thread Uwe Schindler (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-2562?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16742046#comment-16742046
 ] 

Uwe Schindler commented on LUCENE-2562:
---

Hi Tomoko,
I would also favour removing Guice, if it's easy to do. By the way, I opened 
the CGLIB issue back at that time: https://github.com/cglib/cglib/issues/93

Some background: in early Java 9 builds deep reflection was not working at 
all; this was relaxed before the release. The "official" statement is: CGLIB is 
no longer maintained, people should migrate to ByteBuddy. Because of this we 
removed all our mocking frameworks (we had many in Solr) and moved everything 
to Mockito. Mockito uses ByteBuddy, which is a more modern way to produce 
proxies.

I really don't like making Lucene depend on a library (CGLIB) that's no longer 
maintained by the community. If we want to use Guice, Guice should move to 
ByteBuddy. This is one reason why Elasticsearch dropped dependency injection 
with Guice long ago. It's also too much magic and should be avoided.

+1 for the patch without Guice.

Once you are done, I will look over it again. I mentioned a small thing on the 
pull request: please don't allow writes to sysprops GLOBALLY! There is a reason 
why tests are not allowed to do this. If you really need to set a system 
property, please document it and list those exemptions separately in the policy 
file (with a comment).
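
For illustration, such a scoped exemption in a Java security policy file could 
look like this (the property name is made up):

{code}
// Hypothetical, documented exemption instead of a blanket write grant:
// Luke persists the selected UI theme in a single named system property.
grant {
  permission java.util.PropertyPermission "luke.theme", "read,write";
};
{code}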

> Make Luke a Lucene/Solr Module
> --
>
> Key: LUCENE-2562
> URL: https://issues.apache.org/jira/browse/LUCENE-2562
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Mark Miller
>Priority: Major
>  Labels: gsoc2014
> Attachments: LUCENE-2562-Ivy.patch, LUCENE-2562-Ivy.patch, 
> LUCENE-2562-Ivy.patch, LUCENE-2562-ivy.patch, LUCENE-2562.patch, 
> LUCENE-2562.patch, Luke-ALE-1.png, Luke-ALE-2.png, Luke-ALE-3.png, 
> Luke-ALE-4.png, Luke-ALE-5.png, luke-javafx1.png, luke-javafx2.png, 
> luke-javafx3.png, luke1.jpg, luke2.jpg, luke3.jpg, lukeALE-documents.png, 
> スクリーンショット 2018-11-05 9.19.47.png
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> see
> "RE: Luke - in need of maintainer": 
> http://markmail.org/message/m4gsto7giltvrpuf
> "Web-based Luke": http://markmail.org/message/4xwps7p7ifltme5q
> I think it would be great if there was a version of Luke that always worked 
> with trunk - and it would also be great if it was easier to match Luke jars 
> with Lucene versions.
> While I'd like to get GWT Luke into the mix as well, I think the easiest 
> starting point is to straight port Luke to another UI toolkit before 
> abstracting out DTO objects that both GWT Luke and Pivot Luke could share.
> I've started slowly converting Luke's use of thinlet to Apache Pivot. I 
> haven't/don't have a lot of time for this at the moment, but I've plugged 
> away here and there over the past work or two. There is still a *lot* to do.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8636) TestPointQueries times out on nightly

2019-01-14 Thread Dawid Weiss (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8636?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16742043#comment-16742043
 ] 

Dawid Weiss commented on LUCENE-8636:
-

This seed takes ~5 minutes to complete on my beefy machine (after disabling 
SimpleText). That's quite heavy, even for a nightly test, IMHO.

> TestPointQueries times out on nightly
> -
>
> Key: LUCENE-8636
> URL: https://issues.apache.org/jira/browse/LUCENE-8636
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Dawid Weiss
>Assignee: Dawid Weiss
>Priority: Minor
> Fix For: 8.0
>
>
> Nightlies have failed with a suite timeout on:
> {code}
> -Dtestcase=TestPointQueries -Dtests.method=testRandomBinaryBig 
> -Dtests.seed=81DB11C283A04F59 -Dtests.multiplier=2 -Dtests.nightly=true 
> -Dtests.slow=true -Dtests.asserts=true -Dtests.file.encoding=US-ASCII
> {code}
> This is a result of the plain-text codec being used and a large volume of 
> repetitions.
> I'll disable the plain-text codec on that test.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-8636) TestPointQueries times out on nightly

2019-01-14 Thread Dawid Weiss (JIRA)
Dawid Weiss created LUCENE-8636:
---

 Summary: TestPointQueries times out on nightly
 Key: LUCENE-8636
 URL: https://issues.apache.org/jira/browse/LUCENE-8636
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Dawid Weiss
Assignee: Dawid Weiss
 Fix For: 8.0


Nightlies have failed with a suite timeout on:
{code}
-Dtestcase=TestPointQueries -Dtests.method=testRandomBinaryBig 
-Dtests.seed=81DB11C283A04F59 -Dtests.multiplier=2 -Dtests.nightly=true 
-Dtests.slow=true -Dtests.asserts=true -Dtests.file.encoding=US-ASCII
{code}

This is a result of the plain-text codec being used and a large volume of 
repetitions.

I'll disable the plain-text codec on that test.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-13072) Management of markers for nodeLost / nodeAdded events is broken

2019-01-14 Thread Andrzej Bialecki (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13072?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrzej Bialecki  updated SOLR-13072:
-
Fix Version/s: master (9.0)

> Management of markers for nodeLost / nodeAdded events is broken
> ---
>
> Key: SOLR-13072
> URL: https://issues.apache.org/jira/browse/SOLR-13072
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling
>Affects Versions: 7.5, 7.6, 8.0
>Reporter: Andrzej Bialecki 
>Assignee: Andrzej Bialecki 
>Priority: Major
> Fix For: 8.0, 7.7, master (9.0)
>
>
> In order to prevent {{nodeLost}} events from being lost when it's the 
> Overseer leader that is the node that was lost, a mechanism was added to 
> record markers for these events by any other live node, in 
> {{ZkController.registerLiveNodesListener()}}. A similar mechanism also 
> exists for {{nodeAdded}} events.
> On Overseer leader restart, if the autoscaling configuration doesn't contain 
> any triggers that consume {{nodeLost}} events, these markers are removed. 
> If there are 1 or more trigger configs that consume {{nodeLost}} events, 
> these triggers read the markers, remove them, and generate the appropriate 
> events.
> However, as the {{NodeMarkersRegistrationTest}} shows, this mechanism is 
> broken and susceptible to race conditions.
> It's not unusual to have more than 1 {{nodeLost}} trigger because, in addition 
> to any user-defined triggers, there's always one that is automatically defined 
> if missing: {{.auto_add_replicas}}. However, if there's more than 1 
> {{nodeLost}} trigger, the process of consuming and removing the markers 
> becomes non-deterministic - each trigger may pick up (and delete) all, none, 
> or some of the markers.
> So, as it stands, this mechanism is broken if more than 1 {{nodeLost}} or more 
> than 1 {{nodeAdded}} trigger is defined.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13072) Management of markers for nodeLost / nodeAdded events is broken

2019-01-14 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13072?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16741958#comment-16741958
 ] 

ASF subversion and git services commented on SOLR-13072:


Commit fea79a8f6b55707268955b1a59154e18c37da253 in lucene-solr's branch 
refs/heads/branch_7x from Andrzej Bialecki
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=fea79a8 ]

SOLR-13072: Wait for autoscaling config refresh to finish before modifying the 
cluster
and enable the tests for now.


> Management of markers for nodeLost / nodeAdded events is broken
> ---
>
> Key: SOLR-13072
> URL: https://issues.apache.org/jira/browse/SOLR-13072
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling
>Affects Versions: 7.5, 7.6, 8.0
>Reporter: Andrzej Bialecki 
>Assignee: Andrzej Bialecki 
>Priority: Major
> Fix For: 8.0, 7.7
>
>
> In order to prevent {{nodeLost}} events from being lost when it's the 
> Overseer leader that is the node that was lost, a mechanism was added to 
> record markers for these events by any other live node, in 
> {{ZkController.registerLiveNodesListener()}}. A similar mechanism also 
> exists for {{nodeAdded}} events.
> On Overseer leader restart, if the autoscaling configuration doesn't contain 
> any triggers that consume {{nodeLost}} events, these markers are removed. 
> If there are 1 or more trigger configs that consume {{nodeLost}} events, 
> these triggers read the markers, remove them, and generate the appropriate 
> events.
> However, as the {{NodeMarkersRegistrationTest}} shows, this mechanism is 
> broken and susceptible to race conditions.
> It's not unusual to have more than 1 {{nodeLost}} trigger because, in addition 
> to any user-defined triggers, there's always one that is automatically defined 
> if missing: {{.auto_add_replicas}}. However, if there's more than 1 
> {{nodeLost}} trigger, the process of consuming and removing the markers 
> becomes non-deterministic - each trigger may pick up (and delete) all, none, 
> or some of the markers.
> So, as it stands, this mechanism is broken if more than 1 {{nodeLost}} or more 
> than 1 {{nodeAdded}} trigger is defined.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13072) Management of markers for nodeLost / nodeAdded events is broken

2019-01-14 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13072?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16741959#comment-16741959
 ] 

ASF subversion and git services commented on SOLR-13072:


Commit 794f7f829cdf655f750c992803df1968a58f101e in lucene-solr's branch 
refs/heads/branch_7x from Andrzej Bialecki
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=794f7f8 ]

SOLR-13072: Use the same wait in other simulated tests where the same race 
condition may occur.


> Management of markers for nodeLost / nodeAdded events is broken
> ---
>
> Key: SOLR-13072
> URL: https://issues.apache.org/jira/browse/SOLR-13072
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling
>Affects Versions: 7.5, 7.6, 8.0
>Reporter: Andrzej Bialecki 
>Assignee: Andrzej Bialecki 
>Priority: Major
> Fix For: 8.0, 7.7
>
>
> In order to prevent {{nodeLost}} events from being lost when it's the 
> Overseer leader that is the node that was lost, a mechanism was added to 
> record markers for these events by any other live node, in 
> {{ZkController.registerLiveNodesListener()}}. A similar mechanism also 
> exists for {{nodeAdded}} events.
> On Overseer leader restart, if the autoscaling configuration doesn't contain 
> any triggers that consume {{nodeLost}} events, these markers are removed. 
> If there are 1 or more trigger configs that consume {{nodeLost}} events, 
> these triggers read the markers, remove them, and generate the appropriate 
> events.
> However, as the {{NodeMarkersRegistrationTest}} shows, this mechanism is 
> broken and susceptible to race conditions.
> It's not unusual to have more than 1 {{nodeLost}} trigger because, in addition 
> to any user-defined triggers, there's always one that is automatically defined 
> if missing: {{.auto_add_replicas}}. However, if there's more than 1 
> {{nodeLost}} trigger, the process of consuming and removing the markers 
> becomes non-deterministic - each trigger may pick up (and delete) all, none, 
> or some of the markers.
> So, as it stands, this mechanism is broken if more than 1 {{nodeLost}} or more 
> than 1 {{nodeAdded}} trigger is defined.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS-EA] Lucene-Solr-master-Windows (64bit/jdk-12-ea+23) - Build # 7694 - Unstable!

2019-01-14 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/7694/
Java: 64bit/jdk-12-ea+23 -XX:-UseCompressedOops -XX:+UseParallelGC

2 tests failed.
FAILED:  
org.apache.solr.cloud.autoscaling.sim.TestSimTriggerIntegration.testEventQueue

Error Message:
action wasn't interrupted

Stack Trace:
java.lang.AssertionError: action wasn't interrupted
at 
__randomizedtesting.SeedInfo.seed([F41D2D6F85AD480A:3DA86FC18CCA8EFF]:0)
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.assertTrue(Assert.java:41)
at 
org.apache.solr.cloud.autoscaling.sim.TestSimTriggerIntegration.testEventQueue(TestSimTriggerIntegration.java:717)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:567)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:835)


FAILED:  
org.apache.solr.cloud.autoscaling.sim.TestSimTriggerIntegration.testEventQueue

Error Message:
action wasn't interrupted

Stack Trace:
java.lang.AssertionError: action wasn't interrupted
at 

[jira] [Commented] (LUCENE-8633) Remove term weighting from interval scoring

2019-01-14 Thread Alan Woodward (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8633?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16741936#comment-16741936
 ] 

Alan Woodward commented on LUCENE-8633:
---

I added a configurable pivot and exponent - by default we use a saturation 
function with a pivot of 1, but you can tune the pivot, and adding an exponent 
makes it use a sigmoid function instead (a sigmoid with an exponent of 1 is 
equivalent to the saturation function).  I looked at adding the log function as 
well, but a) it isn't bounded in the same way, and b) it ends up exposing 
IntervalScoreFunction as a public interface, which I'd rather not do.
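
For reference, with sloppy frequency f, pivot p and exponent a, the two bounded 
functions under discussion are (my notation, not taken from the patch):

{code}
\mathrm{sat}_p(f) = \frac{f}{f + p}
\qquad
\mathrm{sig}_{p,a}(f) = \frac{f^a}{f^a + p^a}
\qquad
\mathrm{sig}_{p,1}(f) = \mathrm{sat}_p(f)
{code}

Both map f >= 0 into [0, 1), which is exactly the boundedness the log function 
lacks.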

> Remove term weighting from interval scoring
> ---
>
> Key: LUCENE-8633
> URL: https://issues.apache.org/jira/browse/LUCENE-8633
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Alan Woodward
>Assignee: Alan Woodward
>Priority: Major
> Attachments: LUCENE-8633.patch, LUCENE-8633.patch
>
>
> IntervalScorer currently uses the same scoring mechanism as SpanScorer, 
> summing the IDF of all possibly matching terms from its parent 
> IntervalsSource and using that in conjunction with a sloppy frequency to 
> produce a similarity-based score.  This doesn't really make sense, however: 
> it means that terms that don't appear in a document can still contribute to 
> the score, and it makes scores from interval queries appear comparable with 
> scores from term or phrase queries when they really aren't.
> I'd like to explore a different scoring mechanism for intervals, based purely 
> on sloppy frequency and ignoring term weighting.  This should make the scores 
> easier to reason about, as well as making them useful for things like 
> proximity boosting on boolean queries.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-8633) Remove term weighting from interval scoring

2019-01-14 Thread Alan Woodward (JIRA)


 [ 
https://issues.apache.org/jira/browse/LUCENE-8633?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Woodward updated LUCENE-8633:
--
Attachment: LUCENE-8633.patch

> Remove term weighting from interval scoring
> ---
>
> Key: LUCENE-8633
> URL: https://issues.apache.org/jira/browse/LUCENE-8633
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Alan Woodward
>Assignee: Alan Woodward
>Priority: Major
> Attachments: LUCENE-8633.patch, LUCENE-8633.patch
>
>
> IntervalScorer currently uses the same scoring mechanism as SpanScorer, 
> summing the IDF of all possibly matching terms from its parent 
> IntervalsSource and using that in conjunction with a sloppy frequency to 
> produce a similarity-based score.  This doesn't really make sense, however: 
> it means that terms that don't appear in a document can still contribute to 
> the score, and it makes scores from interval queries appear comparable with 
> scores from term or phrase queries when they really aren't.
> I'd like to explore a different scoring mechanism for intervals, based purely 
> on sloppy frequency and ignoring term weighting.  This should make the scores 
> easier to reason about, as well as making them useful for things like 
> proximity boosting on boolean queries.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13072) Management of markers for nodeLost / nodeAdded events is broken

2019-01-14 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13072?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16741931#comment-16741931
 ] 

ASF subversion and git services commented on SOLR-13072:


Commit 229a0894fbcb152db4ca08119da085a002953943 in lucene-solr's branch 
refs/heads/branch_8x from Andrzej Bialecki
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=229a089 ]

SOLR-13072: Wait for autoscaling config refresh to finish before modifying the 
cluster
and enable the tests for now.


> Management of markers for nodeLost / nodeAdded events is broken
> ---
>
> Key: SOLR-13072
> URL: https://issues.apache.org/jira/browse/SOLR-13072
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling
>Affects Versions: 7.5, 7.6, 8.0
>Reporter: Andrzej Bialecki 
>Assignee: Andrzej Bialecki 
>Priority: Major
> Fix For: 8.0, 7.7
>
>
> In order to prevent {{nodeLost}} events from being lost when it's the 
> Overseer leader that is the node that was lost, a mechanism was added to 
> record markers for these events by any other live node, in 
> {{ZkController.registerLiveNodesListener()}}. A similar mechanism also 
> exists for {{nodeAdded}} events.
> On Overseer leader restart, if the autoscaling configuration doesn't contain 
> any triggers that consume {{nodeLost}} events, these markers are removed. 
> If there are 1 or more trigger configs that consume {{nodeLost}} events, 
> these triggers read the markers, remove them, and generate the appropriate 
> events.
> However, as the {{NodeMarkersRegistrationTest}} shows, this mechanism is 
> broken and susceptible to race conditions.
> It's not unusual to have more than 1 {{nodeLost}} trigger because, in addition 
> to any user-defined triggers, there's always one that is automatically defined 
> if missing: {{.auto_add_replicas}}. However, if there's more than 1 
> {{nodeLost}} trigger, the process of consuming and removing the markers 
> becomes non-deterministic - each trigger may pick up (and delete) all, none, 
> or some of the markers.
> So, as it stands, this mechanism is broken if more than 1 {{nodeLost}} or more 
> than 1 {{nodeAdded}} trigger is defined.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13072) Management of markers for nodeLost / nodeAdded events is broken

2019-01-14 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13072?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16741932#comment-16741932
 ] 

ASF subversion and git services commented on SOLR-13072:


Commit b33df8dc0ff387e999348a03a748d466c2e6de50 in lucene-solr's branch 
refs/heads/branch_8x from Andrzej Bialecki
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=b33df8d ]

SOLR-13072: Use the same wait in other simulated tests where the same race 
condition may occur.


> Management of markers for nodeLost / nodeAdded events is broken
> ---
>
> Key: SOLR-13072
> URL: https://issues.apache.org/jira/browse/SOLR-13072
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling
>Affects Versions: 7.5, 7.6, 8.0
>Reporter: Andrzej Bialecki 
>Assignee: Andrzej Bialecki 
>Priority: Major
> Fix For: 8.0, 7.7
>
>
> In order to prevent {{nodeLost}} events from being lost when it's the 
> Overseer leader that is the node that was lost, a mechanism was added to 
> record markers for these events by any other live node, in 
> {{ZkController.registerLiveNodesListener()}}. A similar mechanism also 
> exists for {{nodeAdded}} events.
> On Overseer leader restart, if the autoscaling configuration doesn't contain 
> any triggers that consume {{nodeLost}} events, these markers are removed. 
> If there are 1 or more trigger configs that consume {{nodeLost}} events, 
> these triggers read the markers, remove them, and generate the appropriate 
> events.
> However, as the {{NodeMarkersRegistrationTest}} shows, this mechanism is 
> broken and susceptible to race conditions.
> It's not unusual to have more than 1 {{nodeLost}} trigger because, in addition 
> to any user-defined triggers, there's always one that is automatically defined 
> if missing: {{.auto_add_replicas}}. However, if there's more than 1 
> {{nodeLost}} trigger, the process of consuming and removing the markers 
> becomes non-deterministic - each trigger may pick up (and delete) all, none, 
> or some of the markers.
> So, as it stands, this mechanism is broken if more than 1 {{nodeLost}} or more 
> than 1 {{nodeAdded}} trigger is defined.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-Tests-8.x - Build # 5 - Unstable

2019-01-14 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-8.x/5/

1 tests failed.
FAILED:  org.apache.solr.cloud.HttpPartitionTest.test

Error Message:
Timeout occured while waiting response from server at: http://127.0.0.1:34833/f_

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: Timeout occured while waiting 
response from server at: http://127.0.0.1:34833/f_
at 
__randomizedtesting.SeedInfo.seed([7399E9BB72CF773E:FBCDD661DC331AC6]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:661)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:256)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:245)
at 
org.apache.solr.client.solrj.impl.LBSolrClient.doRequest(LBSolrClient.java:368)
at 
org.apache.solr.client.solrj.impl.LBSolrClient.request(LBSolrClient.java:296)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:213)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1110)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:884)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:817)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:207)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:224)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.createServers(AbstractFullDistribZkTestBase.java:338)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:1068)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:1042)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 

[jira] [Updated] (SOLR-13035) Utilize solr.data.home / solrDataHome in solr.xml to set all writable files in single directory

2019-01-14 Thread Amrit Sarkar (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13035?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amrit Sarkar updated SOLR-13035:

Attachment: (was: SOLR-13035.patch)

> Utilize solr.data.home / solrDataHome in solr.xml to set all writable files 
> in single directory
> ---
>
> Key: SOLR-13035
> URL: https://issues.apache.org/jira/browse/SOLR-13035
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Amrit Sarkar
>Priority: Major
> Attachments: SOLR-13035.patch, SOLR-13035.patch, SOLR-13035.patch, 
> SOLR-13035.patch
>
>
> The {{solr.data.home}} system property or {{solrDataHome}} in _solr.xml_ is 
> already available as per SOLR-6671.
> The writable content in Solr consists of index files, core properties, and ZK 
> data if embedded ZooKeeper is started in SolrCloud mode. It would be great if 
> all writable content could live under a single directory, giving separate 
> READ-ONLY and WRITE-ONLY directories.
> It could then also solve the official Docker Solr image issues:
> https://github.com/docker-solr/docker-solr/issues/74
> https://github.com/docker-solr/docker-solr/issues/133
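
For reference, recent default _solr.xml_ files wire the property up roughly 
like this (shown from memory; check the solr.xml shipped with your version):

{code}
<solr>
  <!-- Resolves to the solr.data.home sysprop, or empty (legacy layout) -->
  <str name="solrDataHome">${solr.data.home:}</str>
</solr>
{code}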



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



SimpleQueryParser to support field filtering?

2019-01-14 Thread Itamar Syn-Hershko
Hi all,

I sent a PR back in November to resolve the issue in the title and would
appreciate feedback.

Summary:

SimpleQueryParser lacks support for a `field:` operator for creating
queries which operate on fields other than the default field. It seems
one can either have the parsed query operate on a single field, or on
ALL defined fields (plus weights); there is no support for specifying
`field:value` in the query.

It probably wasn't forgotten, but rather left out for simplicity; but since
we are using this QP implementation more and more (mostly through
Elasticsearch), we thought it would be useful to have it in.
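
As a minimal sketch of what is possible today (the field names and weights 
below are made up for illustration), the parser can only spread every term 
across a fixed weight map:

{code}
import java.util.HashMap;
import java.util.Map;

import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.queryparser.simple.SimpleQueryParser;
import org.apache.lucene.search.Query;

public class SimpleQpDemo {
  public static void main(String[] args) {
    // Hypothetical fields/weights: every parsed term targets BOTH fields.
    Map<String, Float> weights = new HashMap<>();
    weights.put("title", 2.0f);
    weights.put("body", 1.0f);
    SimpleQueryParser qp = new SimpleQueryParser(new StandardAnalyzer(), weights);
    Query q = qp.parse("lucene +search");
    System.out.println(q); // expands across title and body with the boosts
    // There is no syntax today to scope a term to one field; a string like
    // "title:lucene" is simply analyzed as text, it does not switch fields.
  }
}
{code}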

JIRA: https://issues.apache.org/jira/browse/LUCENE-8565

PR: https://github.com/apache/lucene-solr/pull/498

What do people think?

Cheers,

--

Itamar Syn-Hershko
CTO, Founder
BigData Boutique 
Elasticsearch Consulting Partner
http://code972.com | @synhershko 


[jira] [Updated] (SOLR-13035) Utilize solr.data.home / solrDataHome in solr.xml to set all writable files in single directory

2019-01-14 Thread Amrit Sarkar (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13035?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amrit Sarkar updated SOLR-13035:

Attachment: SOLR-13035.patch

> Utilize solr.data.home / solrDataHome in solr.xml to set all writable files 
> in single directory
> ---
>
> Key: SOLR-13035
> URL: https://issues.apache.org/jira/browse/SOLR-13035
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Amrit Sarkar
>Priority: Major
> Attachments: SOLR-13035.patch, SOLR-13035.patch, SOLR-13035.patch, 
> SOLR-13035.patch
>
>
> {{solr.data.home}} system property or {{solrDataHome}} in _solr.xml_ is 
> already available as per SOLR-6671.
> The writable content in Solr are index files, core properties, and ZK data if 
> embedded zookeeper is started in SolrCloud mode. It would be great if all 
> writable content can come under the same directory to have separate READ-ONLY 
> and WRITE-ONLY directories.
> It can then also solve official docker Solr image issues:
> https://github.com/docker-solr/docker-solr/issues/74
> https://github.com/docker-solr/docker-solr/issues/133



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-BadApples-7.x-Linux (64bit/jdk-10.0.1) - Build # 147 - Failure!

2019-01-14 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-BadApples-7.x-Linux/147/
Java: 64bit/jdk-10.0.1 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

All tests passed

Build Log:
[...truncated 1964 lines...]
   [junit4] JVM J0: stderr was not empty, see: 
/home/jenkins/workspace/Lucene-Solr-BadApples-7.x-Linux/lucene/build/core/test/temp/junit4-J0-20190114_064412_40417876070039338495816.syserr
   [junit4] >>> JVM J0 emitted unexpected output (verbatim) 
   [junit4] OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was 
deprecated in version 9.0 and will likely be removed in a future release.
   [junit4] <<< JVM J0: EOF 

[...truncated 6 lines...]
   [junit4] JVM J1: stderr was not empty, see: 
/home/jenkins/workspace/Lucene-Solr-BadApples-7.x-Linux/lucene/build/core/test/temp/junit4-J1-20190114_064412_3997932993856687007589.syserr
   [junit4] >>> JVM J1 emitted unexpected output (verbatim) 
   [junit4] OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was 
deprecated in version 9.0 and will likely be removed in a future release.
   [junit4] <<< JVM J1: EOF 

   [junit4] JVM J2: stderr was not empty, see: 
/home/jenkins/workspace/Lucene-Solr-BadApples-7.x-Linux/lucene/build/core/test/temp/junit4-J2-20190114_064412_39911516173332783459654.syserr
   [junit4] >>> JVM J2 emitted unexpected output (verbatim) 
   [junit4] OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was 
deprecated in version 9.0 and will likely be removed in a future release.
   [junit4] <<< JVM J2: EOF 

[...truncated 298 lines...]
   [junit4] JVM J0: stderr was not empty, see: 
/home/jenkins/workspace/Lucene-Solr-BadApples-7.x-Linux/lucene/build/test-framework/test/temp/junit4-J0-20190114_065136_71015621211905845254066.syserr
   [junit4] >>> JVM J0 emitted unexpected output (verbatim) 
   [junit4] OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was 
deprecated in version 9.0 and will likely be removed in a future release.
   [junit4] <<< JVM J0: EOF 

[...truncated 3 lines...]
   [junit4] JVM J1: stderr was not empty, see: 
/home/jenkins/workspace/Lucene-Solr-BadApples-7.x-Linux/lucene/build/test-framework/test/temp/junit4-J1-20190114_065136_71011471031020824319829.syserr
   [junit4] >>> JVM J1 emitted unexpected output (verbatim) 
   [junit4] OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was 
deprecated in version 9.0 and will likely be removed in a future release.
   [junit4] <<< JVM J1: EOF 

[...truncated 6 lines...]
   [junit4] JVM J2: stderr was not empty, see: 
/home/jenkins/workspace/Lucene-Solr-BadApples-7.x-Linux/lucene/build/test-framework/test/temp/junit4-J2-20190114_065136_71010403966686590772141.syserr
   [junit4] >>> JVM J2 emitted unexpected output (verbatim) 
   [junit4] OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was 
deprecated in version 9.0 and will likely be removed in a future release.
   [junit4] <<< JVM J2: EOF 

[...truncated 1083 lines...]
   [junit4] JVM J2: stderr was not empty, see: 
/home/jenkins/workspace/Lucene-Solr-BadApples-7.x-Linux/lucene/build/analysis/common/test/temp/junit4-J2-20190114_065300_47211032905176626593607.syserr
   [junit4] >>> JVM J2 emitted unexpected output (verbatim) 
   [junit4] OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was 
deprecated in version 9.0 and will likely be removed in a future release.
   [junit4] <<< JVM J2: EOF 

   [junit4] JVM J1: stderr was not empty, see: 
/home/jenkins/workspace/Lucene-Solr-BadApples-7.x-Linux/lucene/build/analysis/common/test/temp/junit4-J1-20190114_065300_45812972065405378595015.syserr
   [junit4] >>> JVM J1 emitted unexpected output (verbatim) 
   [junit4] OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was 
deprecated in version 9.0 and will likely be removed in a future release.
   [junit4] <<< JVM J1: EOF 

[...truncated 3 lines...]
   [junit4] JVM J0: stderr was not empty, see: 
/home/jenkins/workspace/Lucene-Solr-BadApples-7.x-Linux/lucene/build/analysis/common/test/temp/junit4-J0-20190114_065300_458645624123724375682.syserr
   [junit4] >>> JVM J0 emitted unexpected output (verbatim) 
   [junit4] OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was 
deprecated in version 9.0 and will likely be removed in a future release.
   [junit4] <<< JVM J0: EOF 

[...truncated 258 lines...]
   [junit4] JVM J2: stderr was not empty, see: 
/home/jenkins/workspace/Lucene-Solr-BadApples-7.x-Linux/lucene/build/analysis/icu/test/temp/junit4-J2-20190114_065458_02718293344128577804743.syserr
   [junit4] >>> JVM J2 emitted unexpected output (verbatim) 
   [junit4] OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was 
deprecated in version 9.0 and will likely be removed in a future release.
   [junit4] <<< JVM J2: EOF 

   [junit4] JVM J0: stderr was not empty, see: