[jira] [Updated] (LUCENE-7932) Search record with field value='a' or 'A' returns all records along with one more field value

2017-08-20 Thread Rohit Balekundri (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7932?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rohit Balekundri updated LUCENE-7932:
-
   Priority: Major  (was: Critical)
Component/s: (was: core/queryparser)
 core/search

> Search record with field value='a' or 'A' returns all records along with one 
> more field value
> -
>
> Key: LUCENE-7932
> URL: https://issues.apache.org/jira/browse/LUCENE-7932
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/search
>Affects Versions: 5.3, 6.6
> Environment: Windows and Linux
>Reporter: Rohit Balekundri
>  Labels: features
>
> Hi Lucene Team,
> I would like to explain the issue we are facing after querying using the 
> QueryParser API.
> Here I am giving examples from our project, with field names that are not 
> related to Lucene:
> The documents we need to archive have key fields and non-key fields.
> A> Key fields: 
> 1. LocationCode (DataType=long)
> 2. CollectionObjectID (DataType=long)
> B> Non-key fields
> Category (DataType=string)
> Steps we followed:
> 1. We stored multiple document records with the category values below in the 
> index files.
>  LocationCode=1  Category=b
>  LocationCode=2  Category=BC
>  LocationCode=3  Category=bcd
> 2. We query for records, passing query parameters as below.
> a) LocationCode=1 and Category=a
>  Result: all records displayed
> b) LocationCode=1 and Category=A
>  Result: all records displayed
> I faced this issue in Lucene 5.3.
> Later I found that Lucene 6.6 has the same issue.
> Kindly consider this bug a top priority.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS-EA] Lucene-Solr-6.6-Windows (64bit/jdk-9-ea+181) - Build # 24 - Still Unstable!

2017-08-20 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.6-Windows/24/
Java: 64bit/jdk-9-ea+181 -XX:-UseCompressedOops -XX:+UseG1GC 
--illegal-access=deny

3 tests failed.
FAILED:  org.apache.solr.cloud.ShardSplitTest.testSplitWithChaosMonkey

Error Message:
There are still nodes recoverying - waited for 330 seconds

Stack Trace:
java.lang.AssertionError: There are still nodes recoverying - waited for 330 
seconds
at 
__randomizedtesting.SeedInfo.seed([81FD66612BB88682:ADAB5B06ABE2D06]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:187)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:144)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:139)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.waitForRecoveriesToFinish(AbstractFullDistribZkTestBase.java:856)
at 
org.apache.solr.cloud.ShardSplitTest.testSplitWithChaosMonkey(ShardSplitTest.java:437)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:992)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:967)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 

[jira] [Comment Edited] (LUCENE-7932) Search record with field value='a' or 'A' returns all records along with one more field value

2017-08-20 Thread Rohit Balekundri (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7932?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16134699#comment-16134699
 ] 

Rohit Balekundri edited comment on LUCENE-7932 at 8/21/17 5:42 AM:
---

Hi Steve,
I updated the exact relevant steps in the description showing how we hit the 
issue when querying via org.apache.lucene.search.IndexSearcher. This class is 
found in lucene-core-5.3.jar.



was (Author: mbalekundri):
Hi Steve,
I updated the exact steps in the description showing how we hit the issue when 
querying via org.apache.lucene.search.IndexSearcher. This class is found in 
lucene-core-5.3.jar.


> Search record with field value='a' or 'A' returns all records along with one 
> more field value
> -
>
> Key: LUCENE-7932
> URL: https://issues.apache.org/jira/browse/LUCENE-7932
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/queryparser
>Affects Versions: 5.3, 6.6
> Environment: Windows and Linux
>Reporter: Rohit Balekundri
>Priority: Critical
>  Labels: features
>
> Hi Lucene Team,
> I would like to explain the issue we are facing after querying using the 
> QueryParser API.
> Here I am giving examples from our project, with field names that are not 
> related to Lucene:
> The documents we need to archive have key fields and non-key fields.
> A> Key fields: 
> 1. LocationCode (DataType=long)
> 2. CollectionObjectID (DataType=long)
> B> Non-key fields
> Category (DataType=string)
> Steps we followed:
> 1. We stored multiple document records with the category values below in the 
> index files.
>  LocationCode=1  Category=b
>  LocationCode=2  Category=BC
>  LocationCode=3  Category=bcd
> 2. We query for records, passing query parameters as below.
> a) LocationCode=1 and Category=a
>  Result: all records displayed
> b) LocationCode=1 and Category=A
>  Result: all records displayed
> I faced this issue in Lucene 5.3.
> Later I found that Lucene 6.6 has the same issue.
> Kindly consider this bug a top priority.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-7932) Search record with field value='a' or 'A' returns all records along with one more field value

2017-08-20 Thread Rohit Balekundri (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7932?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rohit Balekundri updated LUCENE-7932:
-
Affects Version/s: 5.3

> Search record with field value='a' or 'A' returns all records along with one 
> more field value
> -
>
> Key: LUCENE-7932
> URL: https://issues.apache.org/jira/browse/LUCENE-7932
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/queryparser
>Affects Versions: 5.3, 6.6
> Environment: Windows and Linux
>Reporter: Rohit Balekundri
>Priority: Critical
>  Labels: features
>
> Hi Lucene Team,
> I would like to explain the issue we are facing after querying using the 
> QueryParser API.
> Here I am giving examples from our project, with field names that are not 
> related to Lucene:
> The documents we need to archive have key fields and non-key fields.
> A> Key fields: 
> 1. LocationCode (DataType=long)
> 2. CollectionObjectID (DataType=long)
> B> Non-key fields
> Category (DataType=string)
> Steps we followed:
> 1. We stored multiple document records with the category values below in the 
> index files.
>  LocationCode=1  Category=b
>  LocationCode=2  Category=BC
>  LocationCode=3  Category=bcd
> 2. We query for records, passing query parameters as below.
> a) LocationCode=1 and Category=a
>  Result: all records displayed
> b) LocationCode=1 and Category=A
>  Result: all records displayed
> I faced this issue in Lucene 5.3.
> Later I found that Lucene 6.6 has the same issue.
> Kindly consider this bug a top priority.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-11268) AtomicUpdateProcessor complains missing UpdateLog

2017-08-20 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11268?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul updated SOLR-11268:
--
Attachment: SOLR-11268.patch

> AtomicUpdateProcessor complains missing UpdateLog
> -
>
> Key: SOLR-11268
> URL: https://issues.apache.org/jira/browse/SOLR-11268
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.0
>Reporter: Ishan Chattopadhyaya
>Assignee: Noble Paul
>Priority: Blocker
> Fix For: 7.0
>
> Attachments: Screenshot from 2017-08-21 08-59-34.png, SOLR-11268.patch
>
>
> AtomicURP seems to be broken, complains about:
> Atomic document updates are not supported unless <updateLog/> is configured.
> This is already configured and regular atomic update operations work fine.
> Request:
> {{/solr/collectionname/update?processor=atomic=add}}
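
For context, a rough SolrJ sketch of the kind of request involved; it is not 
taken from this issue. The collection name, field name, and the 
atomic.<field>=<operation> parameter style are assumptions for illustration. 
The point is only that routing an update through processor=atomic is what 
reportedly trips the UpdateLog check even on cores where <updateLog/> is 
present in solrconfig.xml.

{code}
// Hypothetical reproduction sketch -- collection, field and parameter names
// below are illustrative assumptions, not taken from this issue.
import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.request.UpdateRequest;
import org.apache.solr.common.SolrInputDocument;

public class AtomicUrpSketch {
  public static void main(String[] args) throws Exception {
    SolrClient client = new HttpSolrClient.Builder(
        "http://localhost:8983/solr/collectionname").build();

    UpdateRequest req = new UpdateRequest();
    req.setParam("processor", "atomic");   // route through AtomicUpdateProcessorFactory
    req.setParam("atomic.cat_ss", "add");  // assumed atomic.<field>=<operation> style

    SolrInputDocument doc = new SolrInputDocument();
    doc.addField("id", "book1");
    doc.addField("cat_ss", "Cyberpunk");
    req.add(doc);

    // Sends the update; the reported "missing UpdateLog" error would come
    // back from the server at this point.
    req.process(client);
    client.close();
  }
}
{code}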



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-11268) AtomicUpdateProcessor complains missing UpdateLog

2017-08-20 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11268?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul updated SOLR-11268:
--
Fix Version/s: 7.0

> AtomicUpdateProcessor complains missing UpdateLog
> -
>
> Key: SOLR-11268
> URL: https://issues.apache.org/jira/browse/SOLR-11268
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.0
>Reporter: Ishan Chattopadhyaya
>Assignee: Noble Paul
>Priority: Blocker
> Fix For: 7.0
>
> Attachments: Screenshot from 2017-08-21 08-59-34.png
>
>
> AtomicURP seems to be broken, complains about:
> Atomic document updates are not supported unless <updateLog/> is configured.
> This is already configured and regular atomic update operations work fine.
> Request:
> {{/solr/collectionname/update?processor=atomic=add}}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-11268) AtomicUpdateProcessor complains missing UpdateLog

2017-08-20 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11268?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul updated SOLR-11268:
--
Priority: Blocker  (was: Major)

> AtomicUpdateProcessor complains missing UpdateLog
> -
>
> Key: SOLR-11268
> URL: https://issues.apache.org/jira/browse/SOLR-11268
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.0
>Reporter: Ishan Chattopadhyaya
>Assignee: Noble Paul
>Priority: Blocker
> Fix For: 7.0
>
> Attachments: Screenshot from 2017-08-21 08-59-34.png
>
>
> AtomicURP seems to be broken, complains about:
> Atomic document updates are not supported unless <updateLog/> is configured.
> This is already configured and regular atomic update operations work fine.
> Request:
> {{/solr/collectionname/update?processor=atomic=add}}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (SOLR-11268) AtomicUpdateProcessor complains missing UpdateLog

2017-08-20 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11268?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul reassigned SOLR-11268:
-

Assignee: Noble Paul

> AtomicUpdateProcessor complains missing UpdateLog
> -
>
> Key: SOLR-11268
> URL: https://issues.apache.org/jira/browse/SOLR-11268
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.0
>Reporter: Ishan Chattopadhyaya
>Assignee: Noble Paul
> Fix For: 7.0
>
> Attachments: Screenshot from 2017-08-21 08-59-34.png
>
>
> AtomicURP seems to be broken, complains about:
> Atomic document updates are not supported unless <updateLog/> is configured.
> This is already configured and regular atomic update operations work fine.
> Request:
> {{/solr/collectionname/update?processor=atomic=add}}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-11268) AtomicUpdateProcessor complains missing UpdateLog

2017-08-20 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11268?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul updated SOLR-11268:
--
Affects Version/s: 7.0

> AtomicUpdateProcessor complains missing UpdateLog
> -
>
> Key: SOLR-11268
> URL: https://issues.apache.org/jira/browse/SOLR-11268
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.0
>Reporter: Ishan Chattopadhyaya
>Assignee: Noble Paul
>Priority: Blocker
> Fix For: 7.0
>
> Attachments: Screenshot from 2017-08-21 08-59-34.png
>
>
> AtomicURP seems to be broken, complains about:
> Atomic document updates are not supported unless <updateLog/> is configured.
> This is already configured and regular atomic update operations work fine.
> Request:
> {{/solr/collectionname/update?processor=atomic=add}}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-11268) AtomicUpdateProcessor complains missing UpdateLog

2017-08-20 Thread Ishan Chattopadhyaya (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11268?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ishan Chattopadhyaya updated SOLR-11268:

Description: 
AtomicURP seems to be broken, complains about:
Atomic document updates are not supported unless <updateLog/> is configured.

This is already configured and regular atomic update operations work fine.

Request:
{{/solr/collectionname/update?processor=atomic=add}}

  was:
AtomicURP seems to be broken, complains about:
Atomic document updates are not supported unless <updateLog/> is configured.


> AtomicUpdateProcessor complains missing UpdateLog
> -------------------------------------------------
>
> Key: SOLR-11268
> URL: https://issues.apache.org/jira/browse/SOLR-11268
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Ishan Chattopadhyaya
> Attachments: Screenshot from 2017-08-21 08-59-34.png
>
>
> AtomicURP seems to be broken, complains about:
> Atomic document updates are not supported unless <updateLog/> is configured.
> This is already configured and regular atomic update operations work fine.
> Request:
> {{/solr/collectionname/update?processor=atomic=add}}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-11268) AtomicUpdateProcessor complains missing UpdateLog

2017-08-20 Thread Ishan Chattopadhyaya (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11268?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ishan Chattopadhyaya updated SOLR-11268:

Attachment: Screenshot from 2017-08-21 08-59-34.png

> AtomicUpdateProcessor complains missing UpdateLog
> -
>
> Key: SOLR-11268
> URL: https://issues.apache.org/jira/browse/SOLR-11268
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Ishan Chattopadhyaya
> Attachments: Screenshot from 2017-08-21 08-59-34.png
>
>
> AtomicURP seems to be broken, complains about:
> Atomic document updates are not supported unless <updateLog/> is configured.
> This is already configured and regular atomic update operations work fine.
> Request:
> {{/solr/collectionname/update?processor=atomic=add}}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-11268) AtomicUpdateProcessor complains missing UpdateLog

2017-08-20 Thread Ishan Chattopadhyaya (JIRA)
Ishan Chattopadhyaya created SOLR-11268:
---

 Summary: AtomicUpdateProcessor complains missing UpdateLog
 Key: SOLR-11268
 URL: https://issues.apache.org/jira/browse/SOLR-11268
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Ishan Chattopadhyaya
 Attachments: Screenshot from 2017-08-21 08-59-34.png

AtomicURP seems to be broken, complains about:
Atomic document updates are not supported unless <updateLog/> is configured.

[jira] [Commented] (LUCENE-7932) Search record with field value='a' or 'A' returns all records along with one more field value

2017-08-20 Thread Rohit Balekundri (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7932?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16134699#comment-16134699
 ] 

Rohit Balekundri commented on LUCENE-7932:
--

Hi Steve,
I updated the exact steps in the description showing how we hit the issue when 
querying via org.apache.lucene.search.IndexSearcher. This class is found in 
lucene-core-5.3.jar.
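
If it helps with triage: below is a minimal sketch of one way this behavior can 
arise, assuming the Category field is analyzed with StandardAnalyzer (the 
report does not say which analyzer is used). StandardAnalyzer's default English 
stop-word set in Lucene 5.x/6.x contains "a", so a Category clause for 'a' or 
'A' can be analyzed away, and the parsed query degenerates to the LocationCode 
clause alone, which then matches every record with that LocationCode. The code 
is illustrative, not taken from the reporter's project.

{code}
// Illustrative only -- assumes StandardAnalyzer; field names mirror the report.
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.queryparser.classic.QueryParser;
import org.apache.lucene.search.Query;

public class StopWordProbe {
  public static void main(String[] args) throws Exception {
    QueryParser parser = new QueryParser("Category", new StandardAnalyzer());
    // "a" is in StandardAnalyzer's default English stop-word set, so the
    // Category clause produces no tokens and is dropped from the parsed query.
    Query q = parser.parse("LocationCode:1 AND Category:a");
    System.out.println(q);  // typically prints just "+LocationCode:1"
  }
}
{code}

If that is what is happening here, the behavior comes from analysis and query 
parsing configuration rather than from IndexSearcher itself, which may explain 
why only the letter 'a' (a stop word) is affected.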


> Search record with field value='a' or 'A' returns all records along with one 
> more field value
> -
>
> Key: LUCENE-7932
> URL: https://issues.apache.org/jira/browse/LUCENE-7932
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/queryparser
>Affects Versions: 6.6
> Environment: Windows and Linux
>Reporter: Rohit Balekundri
>Priority: Critical
>  Labels: features
>
> Hi Lucene Team,
> I would like to explain the issue we are facing after querying using the 
> QueryParser API.
> Here I am giving examples from our project, with field names that are not 
> related to Lucene:
> The documents we need to archive have key fields and non-key fields.
> A> Key fields: 
> 1. LocationCode (DataType=long)
> 2. CollectionObjectID (DataType=long)
> B> Non-key fields
> Category (DataType=string)
> Steps we followed:
> 1. We stored multiple document records with the category values below in the 
> index files.
>  LocationCode=1  Category=b
>  LocationCode=2  Category=BC
>  LocationCode=3  Category=bcd
> 2. We query for records, passing query parameters as below.
> a) LocationCode=1 and Category=a
>  Result: all records displayed
> b) LocationCode=1 and Category=A
>  Result: all records displayed
> I faced this issue in Lucene 5.3.
> Later I found that Lucene 6.6 has the same issue.
> Kindly consider this bug a top priority.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-7932) Search record with field value='a' or 'A' returns all records along with one more field value

2017-08-20 Thread Rohit Balekundri (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7932?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rohit Balekundri updated LUCENE-7932:
-
Description: 
Hi Lucene Team,

I would like to explain the issue we are facing after querying using the 
QueryParser API.

Here I am giving examples from our project, with field names that are not 
related to Lucene:
The documents we need to archive have key fields and non-key fields.
A> Key fields: 
1. LocationCode (DataType=long)
2. CollectionObjectID (DataType=long)
B> Non-key fields
Category (DataType=string)

Steps we followed:
1. We stored multiple document records with the category values below in the 
index files.
 LocationCode=1  Category=b
 LocationCode=2  Category=BC
 LocationCode=3  Category=bcd
2. We query for records, passing query parameters as below.
a) LocationCode=1 and Category=a
 Result: all records displayed
b) LocationCode=1 and Category=A
 Result: all records displayed

I faced this issue in Lucene 5.3.
Later I found that Lucene 6.6 has the same issue.
Kindly consider this bug a top priority.


  was:
Hi Lucene Team,

I would like to explain the issue we are facing after querying using the 
QueryParser API.

Here I am giving examples from our project, with field names that are not 
related to Lucene:
The documents we need to archive have key fields and non-key fields.

Key fields: 
1. LocationCode (DataType=long)
2. CollectionObjectID (DataType=long)

Non-key fields
Category (DataType=string)

Steps we followed:
1. We stored multiple document records with the category values below in the 
index files.
 LocationCode=1  Category=b
 LocationCode=2  Category=BC
 LocationCode=3  Category=bcd

2. We query for records, passing query parameters as below.

1. I found that all records are returned if we pass the value 'a' or 'A' as 
the search value along with either LocationCode=1 or CollectionObjectID=1. 
This only seems to happen for the character 'a'. I suspect older Lucene 
versions (e.g. 5.3) have a similar issue as well: after the query is made, 
the Lucene code returns all records.
2. We considered the following workarounds for the existing problem:
a) Allow all records to be shown if the user enters 'a' or 'A' in the search 
field. Not correct.
b) Set the value to "null" if the user enters 'a' or 'A' in the search field 
and show no records. Example: for LocationCode=1 and Category='A', show no 
records. But this has the side effect that a real record matching the same 
criteria would not be shown.
c) Test with newer Lucene releases and search again.

Later I found that Lucene 6.6 has the same bug.

This bug was found by our USA team.

It is very urgent and needs to be fixed by the Lucene team.
Kindly consider this bug a top priority.



> Search record with field value='a' or 'A' returns all records along with one 
> more field value
> -
>
> Key: LUCENE-7932
> URL: https://issues.apache.org/jira/browse/LUCENE-7932
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/queryparser
>Affects Versions: 6.6
> Environment: Windows and Linux
>Reporter: Rohit Balekundri
>Priority: Critical
>  Labels: features
>
> Hi Lucene Team,
> I would like to explain the issue we are facing after querying using the 
> QueryParser API.
> Here I am giving examples from our project, with field names that are not 
> related to Lucene:
> The documents we need to archive have key fields and non-key fields.
> A> Key fields: 
> 1. LocationCode (DataType=long)
> 2. CollectionObjectID (DataType=long)
> B> Non-key fields
> Category (DataType=string)
> Steps we followed:
> 1. We stored multiple document records with the category values below in the 
> index files.
>  LocationCode=1  Category=b
>  LocationCode=2  Category=BC
>  LocationCode=3  Category=bcd
> 2. We query for records, passing query parameters as below.
> a) LocationCode=1 and Category=a
>  Result: all records displayed
> b) LocationCode=1 and Category=A
>  Result: all records displayed
> I faced this issue in Lucene 5.3.
> Later I found that Lucene 6.6 has the same issue.
> Kindly consider this bug a top priority.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-7932) Search record with field value='a' or 'A' returns all records along with one more field value

2017-08-20 Thread Rohit Balekundri (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7932?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rohit Balekundri updated LUCENE-7932:
-
Description: 
Hi Lucene Team,

I would like to explain the issue we are facing after querying using the 
QueryParser API.

Here I am giving examples from our project, with field names that are not 
related to Lucene:
The documents we need to archive have key fields and non-key fields.

Key fields: 
1. LocationCode (DataType=long)
2. CollectionObjectID (DataType=long)

Non-key fields
Category (DataType=string)

Steps we followed:
1. We stored multiple document records with the category values below in the 
index files.
 LocationCode=1  Category=b
 LocationCode=2  Category=BC
 LocationCode=3  Category=bcd

2. We query for records, passing query parameters as below.

1. I found that all records are returned if we pass the value 'a' or 'A' as 
the search value along with either LocationCode=1 or CollectionObjectID=1. 
This only seems to happen for the character 'a'. I suspect older Lucene 
versions (e.g. 5.3) have a similar issue as well: after the query is made, 
the Lucene code returns all records.
2. We considered the following workarounds for the existing problem:
a) Allow all records to be shown if the user enters 'a' or 'A' in the search 
field. Not correct.
b) Set the value to "null" if the user enters 'a' or 'A' in the search field 
and show no records. Example: for LocationCode=1 and Category='A', show no 
records. But this has the side effect that a real record matching the same 
criteria would not be shown.
c) Test with newer Lucene releases and search again.

Later I found that Lucene 6.6 has the same bug.

This bug was found by our USA team.

It is very urgent and needs to be fixed by the Lucene team.
Kindly consider this bug a top priority.


  was:
1. I found that all records are returned if we pass the value 'a' or 'A' as 
the search value along with either LocationCode=1 or CollectionObjectID=1. 
This only seems to happen for the character 'a'. I suspect older Lucene 
versions (e.g. 5.3) have a similar issue as well: after the query is made, 
the Lucene code returns all records.
2. We considered the following workarounds for the existing problem:
a) Allow all records to be shown if the user enters 'a' or 'A' in the search 
field. Not correct.
b) Set the value to "null" if the user enters 'a' or 'A' in the search field 
and show no records. Example: for LocationCode=1 and Category='A', show no 
records. But this has the side effect that a real record matching the same 
criteria would not be shown.
c) Test with newer Lucene releases and search again.

Later I found that Lucene 6.6 has the same bug.

This bug was found by our USA team.

It is very urgent and needs to be fixed by the Lucene team.
Kindly consider this bug a top priority.



> Search record with field value='a' or 'A' returns all records along with one 
> more field value
> -
>
> Key: LUCENE-7932
> URL: https://issues.apache.org/jira/browse/LUCENE-7932
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/queryparser
>Affects Versions: 6.6
> Environment: Windows and Linux
>Reporter: Rohit Balekundri
>Priority: Critical
>  Labels: features
>
> Hi Lucene Team,
> I would like to explain the issue we are facing after querying using the 
> QueryParser API.
> Here I am giving examples from our project, with field names that are not 
> related to Lucene:
> The documents we need to archive have key fields and non-key fields.
> Key fields: 
> 1. LocationCode (DataType=long)
> 2. CollectionObjectID (DataType=long)
> Non-key fields
> Category (DataType=string)
> Steps we followed:
> 1. We stored multiple document records with category values as below in index 
> files.
>  LocationCode = 1  Category =b
>  LocationCode = 2  Category =BC
>  LocationCode =3  Category =bcd
> 2. In step 2 we query for records and we pass query parameters as below
> 3. 
> 1.I found all records are showing result if we pass value 'a' or 'A' as 
> search value along with either LocationCode=1 OR CollectionObjectID=1. Only 
> seems to be happening for the character a. I feel it's having similar issue 
> in older Lucene version (Ex:5.3) as well. It's Lucene code after making query 
> it's returning all records.
> 2.We have two solutions for above existing problem.
> a)Either to allow to show all records if user enters 'a' or 'A' in search 
> field. Not correct.
> b)Set value to "null" if user enters 'a' or 'A' in search field value and 
> don't show any records. Example: LocationCode=1 and 

[JENKINS] Lucene-Solr-master-Windows (32bit/jdk1.8.0_144) - Build # 6836 - Failure!

2017-08-20 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/6836/
Java: 32bit/jdk1.8.0_144 -server -XX:+UseG1GC

1 tests failed.
FAILED:  
org.apache.solr.cloud.CdcrBootstrapTest.testBootstrapWithContinousIndexingOnSourceCluster

Error Message:
Document mismatch on target after sync expected:<2000> but was:<1100>

Stack Trace:
java.lang.AssertionError: Document mismatch on target after sync 
expected:<2000> but was:<1100>
at 
__randomizedtesting.SeedInfo.seed([3BFC3A868531115F:EFB971DF6267A2A4]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at 
org.apache.solr.cloud.CdcrBootstrapTest.testBootstrapWithContinousIndexingOnSourceCluster(CdcrBootstrapTest.java:309)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)




Build Log:
[...truncated 10728 lines...]
   

[jira] [Created] (SOLR-11267) Add support for "add-distinct" atomic update operation

2017-08-20 Thread Ishan Chattopadhyaya (JIRA)
Ishan Chattopadhyaya created SOLR-11267:
---

 Summary: Add support for "add-distinct" atomic update operation
 Key: SOLR-11267
 URL: https://issues.apache.org/jira/browse/SOLR-11267
 Project: Solr
  Issue Type: New Feature
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Ishan Chattopadhyaya


Often, a multivalued field is used as a set of values. Since multivalued fields 
are more like lists than sets, users do two consecutive operations, remove and 
add, to insert an element into the field and also maintain the set's property 
of only having unique elements.

Proposing a new single operation, "add-distinct" (which essentially means 
"add if it doesn't already exist"), for this.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11250) Add new LTR model which loads the model definition from the external resource

2017-08-20 Thread Yuki Yano (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11250?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16134668#comment-16134668
 ] 

Yuki Yano commented on SOLR-11250:
--

[~cpoerschke]
I added a new patch named "SOLR-11250_master_v2.patch". It is based on your 
patch and also contains the revisions mentioned above.

> Add new LTR model which loads the model definition from the external resource
> -
>
> Key: SOLR-11250
> URL: https://issues.apache.org/jira/browse/SOLR-11250
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: contrib - LTR
>Reporter: Yuki Yano
>Priority: Minor
> Attachments: SOLR-11250_master.patch, SOLR-11250_master_v2.patch, 
> SOLR-11250.patch
>
>
> We add a new model which contains only the location of the external model 
> and loads it during initialization.
> With this approach, large models that are difficult to upload to ZooKeeper 
> become usable.
> The new model works as a wrapper around existing models and delegates API 
> calls to them.
> We add two classes by this patch:
> * {{ExternalModel}} : a base class for models with external resources.
> * {{URIExternalModel}} : an implementation of {{ExternalModel}} which loads 
> the external model from specified URI (ex. file:, http:, etc.).
> For example, if you have a model on the local disk 
> "file:///var/models/myModel.json", the definition of {{URIExternalModel}} 
> will be like the following.
> {code}
> {
>   "class" : "org.apache.solr.ltr.model.URIExternalModel",
>   "name" : "myURIExternalModel",
>   "features" : [],
>   "params" : {
> "uri" : "file:///var/models/myModel.json"
>   }
> }
> {code}
> If you use LTR with {{model=myURIExternalModel}}, the model of 
> {{myModel.json}} will be used for scoring documents.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-10719) Creating a core.properties fails if the parent of core.properties is a symlinked dierctory

2017-08-20 Thread Varun Thacker (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10719?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Thacker updated SOLR-10719:
-
Fix Version/s: (was: 6.7)
   6.6.1

> Creating a core.properties fails if the parent of core.properties is a 
> symlinked dierctory
> --
>
> Key: SOLR-10719
> URL: https://issues.apache.org/jira/browse/SOLR-10719
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Erick Erickson
>Assignee: Erick Erickson
> Fix For: 7.0, 6.6.1
>
> Attachments: SOLR-10719.patch, SOLR-10719.patch
>
>
> Well, it doesn't actually fail until you try to restart the Solr instance. 
> The root cause is that creating core.properties fails.
> This is due to SOLR-8260. CorePropertiesLocator.writePropertiesFile changed 
> from:
> propfile.getParentFile().mkdirs();
> to
> Files.createDirectories(propfile.getParent());
> The former (apparently) thinks it's OK if a symlink points to a directory, 
> but the latter throws an exception.
> So the behavior here is that the call appears to succeed, and the replica is 
> created and functional, until you restart the instance, at which point it is 
> not discovered.
> I hacked in a simple check to skip the call to createDirectories if the 
> parent already exists, and ADDREPLICA works just fine. Restarting Solr finds 
> the replica.
> The real check would probably have to be better than this, as we probably 
> want to avoid overwriting an existing replica and the like; I didn't check 
> whether that's already accounted for.
> There's another issue here: failing to write the properties file should fail 
> the ADDREPLICA, IMO.
> [~romseygeek] I'm guessing that this is an unintended side-effect of 
> SOLR-8260 but wanted to check before diving in deeper.
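
A standalone sketch of the difference described above, under the assumption 
that the relevant JDK behavior is the NOFOLLOW_LINKS re-check inside 
Files.createDirectories: File.mkdirs() quietly tolerates a parent that is a 
symlink to a directory, while Files.createDirectories() throws because the 
link itself is not a directory when links are not followed. The paths are 
illustrative, not Solr's core.properties layout.

{code}
// Illustrative only; temp paths, not Solr's core.properties layout.
import java.nio.file.Files;
import java.nio.file.Path;

public class SymlinkDirDemo {
  public static void main(String[] args) throws Exception {
    Path target = Files.createTempDirectory("real-dir");
    Path link = target.resolveSibling("link-to-dir");
    Files.createSymbolicLink(link, target);

    // Old code path: File.mkdirs() follows the link, sees an existing
    // directory, returns false, and nothing blows up.
    System.out.println("mkdirs(): " + link.toFile().mkdirs());

    // New code path: Files.createDirectories() re-checks the existing path
    // with NOFOLLOW_LINKS; the symlink is not itself a directory, so a
    // FileAlreadyExistsException is thrown.
    Files.createDirectories(link);
  }
}
{code}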



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-10698) StreamHandler should allow connections to be closed early

2017-08-20 Thread Varun Thacker (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10698?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Thacker updated SOLR-10698:
-
Fix Version/s: (was: 6.7)
   6.6.1

> StreamHandler should allow connections to be closed early 
> --
>
> Key: SOLR-10698
> URL: https://issues.apache.org/jira/browse/SOLR-10698
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Varun Thacker
>Assignee: Erick Erickson
> Fix For: 7.0, 6.6.1
>
> Attachments: SOLR-10698.patch
>
>
> Before a stream is drained out, if we call close() we get an exception like 
> this:
> {code}
> at
> org.apache.http.impl.io.ChunkedInputStream.read(ChunkedInputStream.java:215)
> at
> org.apache.http.impl.io.ChunkedInputStream.close(ChunkedInputStream.java:316)
> at
> org.apache.http.impl.execchain.ResponseEntityProxy.streamClosed(ResponseEntityProxy.java:128)
> at
> org.apache.http.conn.EofSensorInputStream.checkClose(EofSensorInputStream.java:228)
> at
> org.apache.http.conn.EofSensorInputStream.close(EofSensorInputStream.java:174)
> at sun.nio.cs.StreamDecoder.implClose(StreamDecoder.java:378)
> at sun.nio.cs.StreamDecoder.close(StreamDecoder.java:193)
> at java.io.InputStreamReader.close(InputStreamReader.java:199)
> at
> org.apache.solr.client.solrj.io.stream.JSONTupleStream.close(JSONTupleStream.java:91)
> at
> org.apache.solr.client.solrj.io.stream.SolrStream.close(SolrStream.java:186)
> {code}
> As quoted from 
> https://www.mail-archive.com/solr-user@lucene.apache.org/msg130676.html, the 
> problem seems to be that when we hit an exception the /stream handler does 
> not close the stream.
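
For readers unfamiliar with the streaming client, here is a hedged sketch of 
the early-close pattern this issue is about; the URL, collection, and request 
parameters are assumptions for illustration. A SolrStream is opened, only part 
of the response is read, and close() is called before the stream is drained, 
which is where the ChunkedInputStream trace above originates.

{code}
// Illustrative only -- URL, collection and params are assumptions.
import org.apache.solr.client.solrj.io.Tuple;
import org.apache.solr.client.solrj.io.stream.SolrStream;
import org.apache.solr.common.params.ModifiableSolrParams;

public class EarlyCloseSketch {
  public static void main(String[] args) throws Exception {
    ModifiableSolrParams params = new ModifiableSolrParams();
    params.set("qt", "/export");
    params.set("q", "*:*");
    params.set("fl", "id");
    params.set("sort", "id asc");

    SolrStream stream = new SolrStream("http://localhost:8983/solr/collection1", params);
    try {
      stream.open();
      // Read a handful of tuples, then stop -- the stream is not drained.
      for (int i = 0; i < 10; i++) {
        Tuple tuple = stream.read();
        if (tuple.EOF) {
          break;
        }
      }
    } finally {
      stream.close();  // closing an undrained stream is the case this issue covers
    }
  }
}
{code}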



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10698) StreamHandler should allow connections to be closed early

2017-08-20 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10698?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16134664#comment-16134664
 ] 

ASF subversion and git services commented on SOLR-10698:


Commit f31c5a2906efb92900bf66373ecdd4d21ba4110e in lucene-solr's branch 
refs/heads/branch_6_6 from [~varunthacker]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=f31c5a2 ]

SOLR-10698: StreamHandler should allow connections to be closed early


> StreamHandler should allow connections to be closed early 
> --
>
> Key: SOLR-10698
> URL: https://issues.apache.org/jira/browse/SOLR-10698
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Varun Thacker
>Assignee: Erick Erickson
> Fix For: 7.0, 6.7
>
> Attachments: SOLR-10698.patch
>
>
> Before a stream is drained out, if we call close() we get an exception like 
> this:
> {code}
> at
> org.apache.http.impl.io.ChunkedInputStream.read(ChunkedInputStream.java:215)
> at
> org.apache.http.impl.io.ChunkedInputStream.close(ChunkedInputStream.java:316)
> at
> org.apache.http.impl.execchain.ResponseEntityProxy.streamClosed(ResponseEntityProxy.java:128)
> at
> org.apache.http.conn.EofSensorInputStream.checkClose(EofSensorInputStream.java:228)
> at
> org.apache.http.conn.EofSensorInputStream.close(EofSensorInputStream.java:174)
> at sun.nio.cs.StreamDecoder.implClose(StreamDecoder.java:378)
> at sun.nio.cs.StreamDecoder.close(StreamDecoder.java:193)
> at java.io.InputStreamReader.close(InputStreamReader.java:199)
> at
> org.apache.solr.client.solrj.io.stream.JSONTupleStream.close(JSONTupleStream.java:91)
> at
> org.apache.solr.client.solrj.io.stream.SolrStream.close(SolrStream.java:186)
> {code}
> As quoted from 
> https://www.mail-archive.com/solr-user@lucene.apache.org/msg130676.html, the 
> problem seems to be that when we hit an exception the /stream handler does 
> not close the stream.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Release a 6.6.1

2017-08-20 Thread Erick Erickson
Uwe:

As far as I'm concerned, please do put both in. Varun is the RM and has
final say of course. He may be traveling though and be a little delayed
responding.

Erick

On Sun, Aug 20, 2017 at 5:41 AM, Uwe Schindler  wrote:

> Hi,
>
>
>
> I just noticed, that our Hadoop friends released Hadoop 2.7.4. This fixes
> the stupid Java 9 bug in their static initializer (StringIndexOutOfBounds).
> So I’d like to also get https://issues.apache.org/jira/browse/SOLR-11261
> in. If Jenkins is happy on 7.x and master, this should be easy.
>
>
>
> If you think it’s too risky (Hadoop 2.7.2 -> 2.7.4), we can live with the
> workaround in Lucene 6.6.1! But the workaround is really hacky: It changes
> the “java.version” system property temporarily on Java 9 while initializing
> Hadoop, which is not something you should ever do!
>
>
>
> Uwe
>
>
>
> -
>
> Uwe Schindler
>
> Achterdiek 19, D-28357 Bremen
>
> http://www.thetaphi.de
>
> eMail: u...@thetaphi.de
>
>
>
> *From:* Uwe Schindler [mailto:u...@thetaphi.de]
> *Sent:* Sunday, August 20, 2017 12:53 PM
> *To:* dev@lucene.apache.org
> *Subject:* RE: Release a 6.6.1
>
>
>
> Hi,
>
>
>
> I need to backport SOLR-10966 to branch 6.6, otherwise Jenkins does not
> pass with Java 9.
>
>
>
> Uwe
>
>
>
> -
>
> Uwe Schindler
>
> Achterdiek 19, D-28357 Bremen
>
> http://www.thetaphi.de
>
> eMail: u...@thetaphi.de
>
>
>
> *From:* Uwe Schindler [mailto:u...@thetaphi.de ]
> *Sent:* Saturday, August 19, 2017 12:00 AM
> *To:* dev@lucene.apache.org
> *Subject:* Re: Release a 6.6.1
>
>
>
> Hi,
>
> I enabled Jenkins jobs on . ASF was active already.
>
> Uwe
>
> On 18 August 2017 at 23:34:23 MESZ, Varun Thacker wrote:
>
> From the bug fixes in lucene 7.0 do we need to backport any of these
> issues :  LUCENE-7859 / LUCENE-7871 / LUCENE-7914 ?
>
>
>
> I plan on backporting these three Solr fixes on Sunday
>
>
>
> SOLR-10698
>
> SOLR-10719
>
> SOLR-11228
>
>
>
> looking through the 7.0 bug fixes these two look important to get in as
> well :
>
>
>
> SOLR-10983
>
> SOLR-9262
>
>
>
> So if no one gets to it I'll try backporting them as well
>
>
>
> Can someone please enable Jenkins on the branch again?
>
>
>
>
>
> On Thu, Aug 17, 2017 at 3:18 PM, Erick Erickson 
> wrote:
>
> Right, that was the original note before we decided to backport a
> bunch of other stuff and I decided it made no sense to omit this one.
> All that has to happen is remove the " (note, not in 7.0, is in 7.1)"
> bits since it's in 6.6, 6.x, 7.0, 7.1 and master.
>
> Good catch!
>
>
>
>
> On Thu, Aug 17, 2017 at 3:10 PM, Varun Thacker  wrote:
> > Should I then go remove the note part from the CHANGES entry in
> branch_6_6 ?
> >
> > * SOLR-11177: CoreContainer.load needs to send lazily loaded core
> > descriptors to the proper list rather than send
> >   them all to the transient lists. (Erick Erickson) (note, not in 7.0,
> is in
> > 7.1)
> >
> > I see a commit for this in branch_7_0
> >
> > Commit c73b5429b722b09b9353ec82627a35e2b864b823 in lucene-solr's branch
> > refs/heads/branch_7_0 from Erick
> > [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=c73b542 ]
> >
> >
> >
> > On Thu, Aug 17, 2017 at 2:48 PM, Erick Erickson  >
> > wrote:
> >>
> >> Well, it is in 7.0. Everything I moved to 6.6.1 is also in 7.0, or
> should
> >> be.
> >>
> >> On Thu, Aug 17, 2017 at 2:31 PM, Varun Thacker 
> wrote:
> >> > Hi Erick,
> >> >
> >> > I was going through the CHANGES file from the 6_6 branch and just
> >> > curious
> >> > why are we not planning on putting SOLR-11177 in 7.0 ?
> >> >
> >> > On Thu, Aug 17, 2017 at 7:45 AM, Erick Erickson
> >> > 
> >> > wrote:
> >> >>
> >> >> OK, I'm done with my changes for 7.0, I think Varun might have a few
> >> >> too.
> >> >>
> >> >> And things didn't melt down overnight so...
> >> >>
> >> >> On Wed, Aug 16, 2017 at 12:25 PM, Anshum Gupta <
> ans...@anshumgupta.net>
> >> >> wrote:
> >> >> > +1 on getting the fixes into 7.0 if you are confident with those,
> and
> >> >> > if
> >> >> > they are a part of 6.6.1.
> >> >> >
> >> >> > Thanks for taking care of this Erick.
> >> >> >
> >> >> > On Wed, Aug 16, 2017 at 12:24 PM Erick Erickson
> >> >> > 
> >> >> > wrote:
> >> >> >>
> >> >> >> FYI:
> >> >> >>
> >> >> >> I'll be backporting the following to SOLR 7.0 today:
> >> >> >>
> >> >> >> SOLR-11024: ParallelStream should set the StreamContext when
> >> >> >> constructing SolrStreams:
> >> >> >> SOLR-11177: CoreContainer.load needs to send lazily loaded core
> >> >> >> descriptors to the proper list rather than send them all to the
> >> >> >> transient lists.
> >> >> >> SOLR-11122: Creating a core should write a core.properties file
> >> >> >> first
> >> >> >> and clean up on failure
> >> >> >>
> >> >> >> and those as well as several others to 6.6.1.
> >> >> >>
> >> >> >> Since some of these 

[jira] [Updated] (SOLR-11250) Add new LTR model which loads the model definition from the external resource

2017-08-20 Thread Yuki Yano (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11250?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuki Yano updated SOLR-11250:
-
Attachment: SOLR-11250_master_v2.patch

> Add new LTR model which loads the model definition from the external resource
> -
>
> Key: SOLR-11250
> URL: https://issues.apache.org/jira/browse/SOLR-11250
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: contrib - LTR
>Reporter: Yuki Yano
>Priority: Minor
> Attachments: SOLR-11250_master.patch, SOLR-11250_master_v2.patch, 
> SOLR-11250.patch
>
>
> We add a new model which contains only the location of the external model 
> and loads it during initialization.
> With this approach, large models that are difficult to upload to ZooKeeper 
> become usable.
> The new model works as a wrapper around existing models and delegates API 
> calls to them.
> We add two classes by this patch:
> * {{ExternalModel}} : a base class for models with external resources.
> * {{URIExternalModel}} : an implementation of {{ExternalModel}} which loads 
> the external model from specified URI (ex. file:, http:, etc.).
> For example, if you have a model on the local disk 
> "file:///var/models/myModel.json", the definition of {{URIExternalModel}} 
> will be like the following.
> {code}
> {
>   "class" : "org.apache.solr.ltr.model.URIExternalModel",
>   "name" : "myURIExternalModel",
>   "features" : [],
>   "params" : {
> "uri" : "file:///var/models/myModel.json"
>   }
> }
> {code}
> If you use LTR with {{model=myURIExternalModel}}, the model of 
> {{myModel.json}} will be used for scoring documents.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-11266) V2 API returning wrong content-type

2017-08-20 Thread Ishan Chattopadhyaya (JIRA)
Ishan Chattopadhyaya created SOLR-11266:
---

 Summary: V2 API returning wrong content-type
 Key: SOLR-11266
 URL: https://issues.apache.org/jira/browse/SOLR-11266
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Ishan Chattopadhyaya


The content-type of the returned value is wrong in many places. It should 
return "application/json", but instead returns "text/plain".

Here's an example:
{code}
[ishan@t430 ~] $ curl -v 
"http://localhost:8983/api/collections/products/select?q=*:*=0;
*   Trying 127.0.0.1...
* TCP_NODELAY set
* Connected to localhost (127.0.0.1) port 8983 (#0)
> GET /api/collections/products/select?q=*:*&rows=0 HTTP/1.1
> Host: localhost:8983
> User-Agent: curl/7.51.0
> Accept: */*
> 
< HTTP/1.1 200 OK
< Content-Type: text/plain;charset=utf-8
< Content-Length: 184
< 
{
  "responseHeader":{
"zkConnected":true,
"status":0,
"QTime":1,
"params":{
  "q":"*:*",
  "rows":"0"}},
  "response":{"numFound":260,"start":0,"docs":[]
  }}
* Curl_http_done: called premature == 0
* Connection #0 to host localhost left intact
{code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10719) Creating a core.properties fails if the parent of core.properties is a symlinked dierctory

2017-08-20 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10719?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16134655#comment-16134655
 ] 

ASF subversion and git services commented on SOLR-10719:


Commit 425af4f658de763821fea41b763fb3fda8316ad0 in lucene-solr's branch 
refs/heads/branch_6_6 from [~erickerickson]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=425af4f ]

SOLR-10719: Creating a core.properties fails if the parent of core.properties 
is a symlinked dierctory

(cherry picked from commit ee10c45)


> Creating a core.properties fails if the parent of core.properties is a 
> symlinked dierctory
> --
>
> Key: SOLR-10719
> URL: https://issues.apache.org/jira/browse/SOLR-10719
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Erick Erickson
>Assignee: Erick Erickson
> Fix For: 7.0, 6.7
>
> Attachments: SOLR-10719.patch, SOLR-10719.patch
>
>
> Well, it doesn't actually fail until you try to restart the Solr instance. 
> The root cause is that creating core.properties fails.
> This is due to SOLR-8260. CorePropertiesLocator.writePropertiesFile changed 
> from:
> propfile.getParentFile().mkdirs();
> to
> Files.createDirectories(propfile.getParent());
> The former (apparently) thinks it's OK if a symlink points to a directory, 
> but the latter throws an exception.
> So the behavior here is that the call appears to succeed, and the replica is 
> created and functional, until you restart the instance, at which point it is 
> not discovered.
> I hacked in a simple check to skip the call to createDirectories if the 
> parent already exists, and ADDREPLICA works just fine. Restarting Solr finds 
> the replica.
> The real check would probably have to be better than this, as we probably 
> want to avoid overwriting an existing replica and the like; I didn't check 
> whether that's already accounted for.
> There's another issue here: failing to write the properties file should fail 
> the ADDREPLICA, IMO.
> [~romseygeek] I'm guessing that this is an unintended side-effect of 
> SOLR-8260 but wanted to check before diving in deeper.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-11265) Atomic updates broken with V2 APIs

2017-08-20 Thread Ishan Chattopadhyaya (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11265?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ishan Chattopadhyaya updated SOLR-11265:

Description: 
The maps used for set, inc, etc. operations are confusing the V2 handler.

Steps to reproduce:
{code}
$ curl http://localhost:8983/api/collections/demo/update -d '
[
 {"id" : "book1",
  "title_t" : "Snow Crash",// text field
  "copies_i" : 5,
  "cat_ss" : "Science Fiction" // multi-valued string field
 }
]'

$ curl http://localhost:8983/api/collections/demo/update -d '
[
 {"id" : "book1",
  "author_s"   : {"set":"Neal Stephenson"},
  "copies_i"   : {"inc":3},
  "cat_ss" : {"add":"Cyberpunk"}
 }
]'
{code}

This results in the following document:
{code}
{
"id":"book1",
"author_s.set":["Neal Stephenson"],
"copies_i.inc":[3],
"cat_ss.add":["Cyberpunk"],
"_version_":1576306836595802112,
"cat_ss.add_str":["Cyberpunk"],
"author_s.set_str":["Neal Stephenson"]}]
  }
{code}

Example from Yonik's blog: http://yonik.com/solr/atomic-updates/

  was:
The maps used for set, inc, etc. operations are confusing the V2 handler.

Steps to reproduce:
{code}
$ curl http://localhost:8983/solr/demo/update -d '
[
 {"id" : "book1",
  "title_t" : "Snow Crash",// text field
  "copies_i" : 5,
  "cat_ss" : "Science Fiction" // multi-valued string field
 }
]'

$ curl http://localhost:8983/solr/demo/update -d '
[
 {"id" : "book1",
  "author_s"   : {"set":"Neal Stephenson"},
  "copies_i"   : {"inc":3},
  "cat_ss" : {"add":"Cyberpunk"}
 }
]'
{code}

This results in the following document:
{code}
{
"id":"book1",
"author_s.set":["Neal Stephenson"],
"copies_i.inc":[3],
"cat_ss.add":["Cyberpunk"],
"_version_":1576306836595802112,
"cat_ss.add_str":["Cyberpunk"],
"author_s.set_str":["Neal Stephenson"]}]
  }
{code}


> Atomic updates broken with V2 APIs
> --
>
> Key: SOLR-11265
> URL: https://issues.apache.org/jira/browse/SOLR-11265
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Ishan Chattopadhyaya
>
> The maps used for set, inc, etc. operations are confusing the V2 handler.
> Steps to reproduce:
> {code}
> $ curl http://localhost:8983/api/collections/demo/update -d '
> [
>  {"id" : "book1",
>   "title_t" : "Snow Crash",// text field
>   "copies_i" : 5,
>   "cat_ss" : "Science Fiction" // multi-valued string field
>  }
> ]'
> $ curl http://localhost:8983/api/collections/demo/update -d '
> [
>  {"id" : "book1",
>   "author_s"   : {"set":"Neal Stephenson"},
>   "copies_i"   : {"inc":3},
>   "cat_ss" : {"add":"Cyberpunk"}
>  }
> ]'
> {code}
> This results in the following document:
> {code}
> {
> "id":"book1",
> "author_s.set":["Neal Stephenson"],
> "copies_i.inc":[3],
> "cat_ss.add":["Cyberpunk"],
> "_version_":1576306836595802112,
> "cat_ss.add_str":["Cyberpunk"],
> "author_s.set_str":["Neal Stephenson"]}]
>   }
> {code}
> Example from Yonik's blog: http://yonik.com/solr/atomic-updates/
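
For reference, the same atomic update expressed through SolrJ against the classic /solr path looks roughly like this (the collection name and URL are carried over from the curl examples above; whether a given client routes through the V2 handler, and therefore hits this bug, is not established here):
{code}
import java.util.Collections;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.common.SolrInputDocument;

// Minimal sketch of an atomic update via SolrJ; illustrative only.
public class AtomicUpdateExample {
  public static void main(String[] args) throws Exception {
    try (HttpSolrClient client =
             new HttpSolrClient.Builder("http://localhost:8983/solr").build()) {
      SolrInputDocument doc = new SolrInputDocument();
      doc.addField("id", "book1");
      // Each atomic operation is a single-entry map: {"set": value}, {"inc": n}, {"add": value}
      doc.addField("author_s", Collections.singletonMap("set", "Neal Stephenson"));
      doc.addField("copies_i", Collections.singletonMap("inc", 3));
      doc.addField("cat_ss", Collections.singletonMap("add", "Cyberpunk"));
      client.add("demo", doc);
      client.commit("demo");
    }
  }
}
{code}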



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-11265) Atomic updates broken with V2 APIs

2017-08-20 Thread Ishan Chattopadhyaya (JIRA)
Ishan Chattopadhyaya created SOLR-11265:
---

 Summary: Atomic updates broken with V2 APIs
 Key: SOLR-11265
 URL: https://issues.apache.org/jira/browse/SOLR-11265
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Ishan Chattopadhyaya


The maps used for set, inc, etc. operations are confusing the V2 handler.

Steps to reproduce:
{code}
$ curl http://localhost:8983/solr/demo/update -d '
[
 {"id" : "book1",
  "title_t" : "Snow Crash",// text field
  "copies_i" : 5,
  "cat_ss" : "Science Fiction" // multi-valued string field
 }
]'

$ curl http://localhost:8983/solr/demo/update -d '
[
 {"id" : "book1",
  "author_s"   : {"set":"Neal Stephenson"},
  "copies_i"   : {"inc":3},
  "cat_ss" : {"add":"Cyberpunk"}
 }
]'
{code}

This results in the following document:
{code}
{
"id":"book1",
"author_s.set":["Neal Stephenson"],
"copies_i.inc":[3],
"cat_ss.add":["Cyberpunk"],
"_version_":1576306836595802112,
"cat_ss.add_str":["Cyberpunk"],
"author_s.set_str":["Neal Stephenson"]}]
  }
{code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-NightlyTests-7.0 - Build # 32 - Failure

2017-08-20 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-7.0/32/

No tests ran.

Build Log:
[...truncated 2079 lines...]
ERROR: command execution failed.
ERROR: Step ‘Archive the artifacts’ failed: no workspace for 
Lucene-Solr-NightlyTests-7.0 #32
ERROR: Step ‘Publish JUnit test result report’ failed: no workspace for 
Lucene-Solr-NightlyTests-7.0 #32
ERROR: lucene2 is offline; cannot locate JDK 1.8 (latest)
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any
ERROR: lucene2 is offline; cannot locate JDK 1.8 (latest)
ERROR: lucene2 is offline; cannot locate JDK 1.8 (latest)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[JENKINS-EA] Lucene-Solr-7.x-Linux (32bit/jdk-9-ea+181) - Build # 295 - Still Unstable!

2017-08-20 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/295/
Java: 32bit/jdk-9-ea+181 -client -XX:+UseParallelGC --illegal-access=deny

1 tests failed.
FAILED:  org.apache.solr.cloud.HttpPartitionTest.test

Error Message:
Doc with id=1 not found in http://127.0.0.1:35191/collMinRf_1x3 due to: Path 
not found: /id; rsp={doc=null}

Stack Trace:
java.lang.AssertionError: Doc with id=1 not found in 
http://127.0.0.1:35191/collMinRf_1x3 due to: Path not found: /id; rsp={doc=null}
at 
__randomizedtesting.SeedInfo.seed([8E3A470CE083B308:66E78D64E7FDEF0]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.cloud.HttpPartitionTest.assertDocExists(HttpPartitionTest.java:603)
at 
org.apache.solr.cloud.HttpPartitionTest.assertDocsExistInAllReplicas(HttpPartitionTest.java:558)
at 
org.apache.solr.cloud.HttpPartitionTest.testMinRf(HttpPartitionTest.java:249)
at 
org.apache.solr.cloud.HttpPartitionTest.test(HttpPartitionTest.java:127)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:993)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:968)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 

[jira] [Updated] (LUCENE-7931) SpanNotQuery has a bug?

2017-08-20 Thread jin jing (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7931?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jin jing updated LUCENE-7931:
-
Description: 
I find that when SpanNotQuery is used with an excluded keyword like "not" or 
"or", it returns an incorrect result.

example:
doc1:the quick brown fox jumps over the lazy dog
doc2:the quick red fox jumps over the sleepy cat
doc3:the quick brown fox jumps over the lazy NOT dog

String queryStringStart = "dog";  
String queryStringEnd = "quick";  
String excludeString = "NOT";  
SpanQuery queryStart = new SpanTermQuery(new Term("text",queryStringStart));  
SpanQuery queryEnd = new SpanTermQuery(new Term("text",queryStringEnd));  
SpanQuery excludeQuery = new SpanTermQuery(new Term("text",excludeString));  
SpanQuery spanNearQuery = new SpanNearQuery(  
new SpanQuery[] {queryStart,queryEnd}, 7, false, false);  
  
 SpanNotQuery spanNotQuery = new SpanNotQuery(spanNearQuery, excludeQuery, 
4,3); 

Then this returns doc1 and doc3, so I think it is a bug.

  was:I find that when SpanNotQuery is used with an excluded keyword like 
"not" or "or", it returns an incorrect result


> SpanNotQuery has a bug?
> --
>
> Key: LUCENE-7931
> URL: https://issues.apache.org/jira/browse/LUCENE-7931
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/search
>Affects Versions: 5.3.1
>Reporter: jin jing
>
> I find that when SpanNotQuery is used with an excluded keyword like "not" or 
> "or", it returns an incorrect result.
> example:
> doc1:the quick brown fox jumps over the lazy dog
> doc2:the quick red fox jumps over the sleepy cat
> doc3:the quick brown fox jumps over the lazy NOT dog
> String queryStringStart = "dog";  
> String queryStringEnd = "quick";  
> String excludeString = "NOT";  
> SpanQuery queryStart = new SpanTermQuery(new Term("text",queryStringStart));  
> SpanQuery queryEnd = new SpanTermQuery(new Term("text",queryStringEnd));  
> SpanQuery excludeQuery = new SpanTermQuery(new Term("text",excludeString));  
> SpanQuery spanNearQuery = new SpanNearQuery(  
> new SpanQuery[] {queryStart,queryEnd}, 7, false, false);  
>   
>  SpanNotQuery spanNotQuery = new SpanNotQuery(spanNearQuery, excludeQuery, 
> 4,3); 
> Then this returns doc1 and doc3, so I think it is a bug.
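
For anyone wanting to reproduce this locally, a self-contained sketch along the lines of the snippet above. The in-memory index setup and analyzer are assumptions, since the report does not say how the documents were indexed; note that StandardAnalyzer lowercases tokens and stop-words "not"/"or" by default, which may be relevant to the observed result:
{code}
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.TextField;
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.index.Term;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.ScoreDoc;
import org.apache.lucene.search.spans.SpanNearQuery;
import org.apache.lucene.search.spans.SpanNotQuery;
import org.apache.lucene.search.spans.SpanQuery;
import org.apache.lucene.search.spans.SpanTermQuery;
import org.apache.lucene.store.RAMDirectory;

// Hypothetical repro harness; the analyzer choice is an assumption, not part of the report.
public class SpanNotRepro {
  public static void main(String[] args) throws Exception {
    RAMDirectory dir = new RAMDirectory();
    IndexWriter w = new IndexWriter(dir, new IndexWriterConfig(new StandardAnalyzer()));
    for (String text : new String[] {
        "the quick brown fox jumps over the lazy dog",
        "the quick red fox jumps over the sleepy cat",
        "the quick brown fox jumps over the lazy NOT dog"}) {
      Document d = new Document();
      d.add(new TextField("text", text, Field.Store.YES));
      w.addDocument(d);
    }
    w.close();

    IndexSearcher searcher = new IndexSearcher(DirectoryReader.open(dir));
    SpanQuery near = new SpanNearQuery(
        new SpanQuery[] {
            new SpanTermQuery(new Term("text", "dog")),
            new SpanTermQuery(new Term("text", "quick"))},
        7, false);
    // As in the report, the exclude term is the literal "NOT". SpanTermQuery terms
    // are not analyzed, so with a lowercasing/stop-wording analyzer at index time
    // the exclude term may simply never match anything.
    SpanQuery exclude = new SpanTermQuery(new Term("text", "NOT"));
    SpanNotQuery query = new SpanNotQuery(near, exclude, 4, 3);
    for (ScoreDoc sd : searcher.search(query, 10).scoreDocs) {
      System.out.println(searcher.doc(sd.doc).get("text"));
    }
  }
}
{code}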



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: 7.0 Release Update

2017-08-20 Thread Anshum Gupta
Let's not commit more stuff to 7.0 unless it's a blocker, as it gets hard
to track.
At this time, the only commits that will be going into 7.0 are the ones
that Varun spoke to me about backporting.
Once that is done, I'll cut an RC (most likely tomorrow). In the meantime,
I'll work on the release notes and on making sure that the CHANGES are good
for 7.0.

Anshum

On Sun, Aug 20, 2017 at 8:33 AM Ishan Chattopadhyaya <
ichattopadhy...@gmail.com> wrote:

> I've added SOLR-11183 to the release branch. Please let me know if someone
> has any concerns.
> Thanks,
> Ishan
>
> On Sun, Aug 20, 2017 at 5:55 PM, Yonik Seeley  wrote:
>
>> I opened https://issues.apache.org/jira/browse/SOLR-11262
>> I don't know if it has implications for 7.0 or not.
>>
>> From the issue:
>> """This means that any code using PushWriter (via MapWriter or
>> IteratorWriter) will be broken if one tries to use XML response
>> format. This may easily go unnoticed if one is not using XML response
>> format in testing (JSON or binary is frequently used)."""
>>
>>
>> -Yonik
>>
>>
>> On Tue, Aug 15, 2017 at 5:14 AM, Noble Paul  wrote:
>> > Sorry for the last-minute notice. I need to fix the following as well.
>> > It may take a few hours
>> > https://issues.apache.org/jira/browse/SOLR-11239
>> >
>> > On Tue, Aug 15, 2017 at 6:41 AM, Andrzej Białecki
>> >  wrote:
>> >> Then, if I may be so bold, I’d like to slip in SOLR-11235, which is a
>> simple
>> >> AlreadyClosedException prevention fix. Patch is ready, tests are
>> passing.
>> >>
>> >> On 14 Aug 2017, at 19:17, Anshum Gupta  wrote:
>> >>
>> >> Thanks Ab.
>> >>
>> >> I'll cut an RC on Wednesday, so that both, I get the time, and also
>> that the
>> >> tests get some time on Jenkins.
>> >>
>> >> Anshum
>> >>
>> >> On Mon, Aug 14, 2017 at 5:29 AM Andrzej Białecki
>> >>  wrote:
>> >>>
>> >>> Hi,
>> >>>
>> >>> I’ve committed the fix for SOLR-11221 to branch_7_0 (and branch_7x and
>> >>> master).
>> >>>
>> >>> On 12 Aug 2017, at 02:20, Andrzej Białecki
>> >>>  wrote:
>> >>>
>> >>> Hi Anshum,
>> >>>
>> >>> The patch for SOLR-11221 is ready, with one caveat - it required
>> larger
>> >>> changes than I thought, so there’s a sizeable chunk of new code that
>> is not
>> >>> so well tested… I added a test that used to fail without this change,
>> and
>> >>> manual testing confirms that metrics are now correctly reported after
>> core
>> >>> reloads.
>> >>>
>> >>> We could postpone this fix to 7.0.1 if there are objections, but I
>> think
>> >>> it should go in to 7.0 - without the fix JMX reporting is surely
>> broken,
>> >>> with the fix it’s only a possibility ;)
>> >>>
>> >>>
>> >>> On 11 Aug 2017, at 19:59, Anshum Gupta 
>> wrote:
>> >>>
>> >>> Thanks for the report Mark!
>> >>>
>> >>> and yes, I'll wait until the JMX issue is fixed.
>> >>>
>> >>> Anshum
>> >>>
>> >>> On Fri, Aug 11, 2017 at 9:49 AM Mark Miller 
>> wrote:
>> 
>>  Yeah, let's not release a major version with JMX monitoring broken.
>> 
>>  Here is a 30 run test report for the 7.0 branch:
>>  http://apache-solr-7-0.bitballoon.com/20170811
>> 
>>  - Mark
>> 
>>  On Thu, Aug 10, 2017 at 4:02 PM Tomas Fernandez Lobbe <
>> tflo...@apple.com>
>>  wrote:
>> >
>> > Lets fix it before releasing. I’d hate to release with a known
>> critical
>> > bug.
>> >
>> > On Aug 10, 2017, at 12:54 PM, Anshum Gupta 
>> > wrote:
>> >
>> > Hi Ab,
>> >
>> > How quickly are we talking about? If you suggest, we could wait,
>> > depending upon the impact, and the time required to fix it.
>> >
>> > Anshum
>> >
>> > On Thu, Aug 10, 2017 at 12:28 PM Andrzej Białecki
>> >  wrote:
>> >>
>> >> I just discovered SOLR-11221, which basically breaks JMX
>> monitoring. We
>> >> could either release with “known issues” and then quickly do
>> 7.0.1, or wait
>> >> until it’s fixed.
>> >>
>> >> On 10 Aug 2017, at 18:55, Mark Miller 
>> wrote:
>> >>
>> >> I'll generate a test report for the 7.0 branch tonight so we can
>> >> evaluate that for an rc as well.
>> >>
>> >> - Mark
>> >>
>> >> On Mon, Aug 7, 2017 at 1:32 PM Anshum Gupta <
>> ans...@anshumgupta.net>
>> >> wrote:
>> >>>
>> >>> Good news!
>> >>>
>> >>> I don't see any 'blockers' for 7.0 anymore, which means, after
>> giving
>> >>> Jenkins a couple of days, I'll cut out an RC. I intend to do this
>> on
>> >>> Wednesday/Thursday, unless a blocker comes up, which I hope
>> shouldn't be the
>> >>> case.
>> >>>
>> >>> Anshum
>> >>>
>> >>>
>> >>> On Tue, Jul 25, 2017 at 4:02 PM Steve Rowe 
>> wrote:
>> 
>> 

[JENKINS-EA] Lucene-Solr-6.6-Linux (32bit/jdk-9-ea+181) - Build # 73 - Still Unstable!

2017-08-20 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.6-Linux/73/
Java: 32bit/jdk-9-ea+181 -client -XX:+UseParallelGC --illegal-access=deny

1 tests failed.
FAILED:  
org.apache.solr.common.cloud.TestCollectionStateWatchers.testWaitForStateWatcherIsRetainedOnPredicateFailure

Error Message:
Did not see a fully active cluster after 30 seconds

Stack Trace:
java.lang.AssertionError: Did not see a fully active cluster after 30 seconds
at 
__randomizedtesting.SeedInfo.seed([EFF932D00BBAF2D6:67CF9083D3151AC4]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.common.cloud.TestCollectionStateWatchers.testWaitForStateWatcherIsRetainedOnPredicateFailure(TestCollectionStateWatchers.java:250)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:844)




Build Log:
[...truncated 13901 lines...]
   [junit4] Suite: org.apache.solr.common.cloud.TestCollectionStateWatchers
   [junit4]   2> Creating dataDir: 

[JENKINS] Lucene-Solr-SmokeRelease-master - Build # 840 - Still Failing

2017-08-20 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-master/840/

No tests ran.

Build Log:
[...truncated 25618 lines...]
prepare-release-no-sign:
[mkdir] Created dir: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/dist
 [copy] Copying 476 files to 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/dist/lucene
 [copy] Copying 215 files to 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/dist/solr
   [smoker] Java 1.8 JAVA_HOME=/home/jenkins/tools/java/latest1.8
   [smoker] NOTE: output encoding is UTF-8
   [smoker] 
   [smoker] Load release URL 
"file:/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/dist/"...
   [smoker] 
   [smoker] Test Lucene...
   [smoker]   test basics...
   [smoker]   get KEYS
   [smoker] 0.2 MB in 0.01 sec (29.2 MB/sec)
   [smoker]   check changes HTML...
   [smoker]   download lucene-8.0.0-src.tgz...
   [smoker] 29.0 MB in 0.02 sec (1194.4 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download lucene-8.0.0.tgz...
   [smoker] 69.0 MB in 0.06 sec (1234.6 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download lucene-8.0.0.zip...
   [smoker] 79.2 MB in 0.07 sec (1199.2 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   unpack lucene-8.0.0.tgz...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] test demo with 1.8...
   [smoker]   got 6138 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] check Lucene's javadoc JAR
   [smoker]   unpack lucene-8.0.0.zip...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] test demo with 1.8...
   [smoker]   got 6138 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] check Lucene's javadoc JAR
   [smoker]   unpack lucene-8.0.0-src.tgz...
   [smoker] make sure no JARs/WARs in src dist...
   [smoker] run "ant validate"
   [smoker] run tests w/ Java 8 and testArgs='-Dtests.slow=false'...
   [smoker] test demo with 1.8...
   [smoker]   got 213 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] generate javadocs w/ Java 8...
   [smoker] 
   [smoker] Crawl/parse...
   [smoker] 
   [smoker] Verify...
   [smoker]   confirm all releases have coverage in TestBackwardsCompatibility
   [smoker] find all past Lucene releases...
   [smoker] run TestBackwardsCompatibility..
   [smoker] success!
   [smoker] 
   [smoker] Test Solr...
   [smoker]   test basics...
   [smoker]   get KEYS
   [smoker] 0.2 MB in 0.00 sec (278.2 MB/sec)
   [smoker]   check changes HTML...
   [smoker]   download solr-8.0.0-src.tgz...
   [smoker] 50.7 MB in 0.05 sec (1067.0 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download solr-8.0.0.tgz...
   [smoker] 142.7 MB in 0.12 sec (1161.7 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download solr-8.0.0.zip...
   [smoker] 143.7 MB in 0.12 sec (1157.9 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   unpack solr-8.0.0.tgz...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] unpack lucene-8.0.0.tgz...
   [smoker]   **WARNING**: skipping check of 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/tmp/unpack/solr-8.0.0/contrib/dataimporthandler-extras/lib/javax.mail-1.5.1.jar:
 it has javax.* classes
   [smoker]   **WARNING**: skipping check of 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/tmp/unpack/solr-8.0.0/contrib/dataimporthandler-extras/lib/activation-1.1.1.jar:
 it has javax.* classes
   [smoker] copying unpacked distribution for Java 8 ...
   [smoker] test solr example w/ Java 8...
   [smoker]   start Solr instance 
(log=/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/tmp/unpack/solr-8.0.0-java8/solr-example.log)...
   [smoker] No process found for Solr node running on port 8983
   [smoker]   Running techproducts example on port 8983 from 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/tmp/unpack/solr-8.0.0-java8
   [smoker] Creating Solr home directory 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/tmp/unpack/solr-8.0.0-java8/example/techproducts/solr
   [smoker] 
   [smoker] Starting up Solr on port 8983 using command:
   [smoker] "bin/solr" start -p 8983 -s "example/techproducts/solr"
   [smoker] 
   [smoker] Waiting up to 180 seconds to see Solr running on port 8983 [|]  
 [/]   [-]   [\]   [|]   [/]   [-]   
[\]  
   [smoker] Started Solr 

[JENKINS] Lucene-Solr-7.x-MacOSX (64bit/jdk1.8.0) - Build # 124 - Still unstable!

2017-08-20 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-MacOSX/124/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseParallelGC

18 tests failed.
FAILED:  org.apache.solr.cloud.FullSolrCloudDistribCmdsTest.test

Error Message:
Could not find collection:collection2

Stack Trace:
java.lang.AssertionError: Could not find collection:collection2
at 
__randomizedtesting.SeedInfo.seed([9FCE348E3E843E85:179A0B549078537D]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertNotNull(Assert.java:526)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:155)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:140)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:135)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.waitForRecoveriesToFinish(AbstractFullDistribZkTestBase.java:908)
at 
org.apache.solr.cloud.FullSolrCloudDistribCmdsTest.testIndexingBatchPerRequestWithHttpSolrClient(FullSolrCloudDistribCmdsTest.java:612)
at 
org.apache.solr.cloud.FullSolrCloudDistribCmdsTest.test(FullSolrCloudDistribCmdsTest.java:152)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:993)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:968)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[JENKINS] Lucene-Solr-master-Linux (64bit/jdk1.8.0_144) - Build # 20351 - Unstable!

2017-08-20 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/20351/
Java: 64bit/jdk1.8.0_144 -XX:-UseCompressedOops -XX:+UseSerialGC

3 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.security.hadoop.TestImpersonationWithHadoopAuth

Error Message:
1 thread leaked from SUITE scope at 
org.apache.solr.security.hadoop.TestImpersonationWithHadoopAuth: 1) 
Thread[id=22880, name=jetty-launcher-3094-thread-2-EventThread, 
state=TIMED_WAITING, group=TGRP-TestImpersonationWithHadoopAuth] at 
sun.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1037)
 at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1328)
 at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:277)  
   at 
org.apache.curator.CuratorZookeeperClient.internalBlockUntilConnectedOrTimedOut(CuratorZookeeperClient.java:323)
 at org.apache.curator.RetryLoop.callWithRetry(RetryLoop.java:105)  
   at 
org.apache.curator.framework.imps.GetDataBuilderImpl.pathInForeground(GetDataBuilderImpl.java:288)
 at 
org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:279)
 at 
org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:41)
 at 
org.apache.curator.framework.recipes.shared.SharedValue.readValue(SharedValue.java:244)
 at 
org.apache.curator.framework.recipes.shared.SharedValue.access$100(SharedValue.java:44)
 at 
org.apache.curator.framework.recipes.shared.SharedValue$1.process(SharedValue.java:61)
 at 
org.apache.curator.framework.imps.NamespaceWatcher.process(NamespaceWatcher.java:67)
 at 
org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:530)   
  at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:505)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE 
scope at org.apache.solr.security.hadoop.TestImpersonationWithHadoopAuth: 
   1) Thread[id=22880, name=jetty-launcher-3094-thread-2-EventThread, 
state=TIMED_WAITING, group=TGRP-TestImpersonationWithHadoopAuth]
at sun.misc.Unsafe.park(Native Method)
at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1037)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1328)
at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:277)
at 
org.apache.curator.CuratorZookeeperClient.internalBlockUntilConnectedOrTimedOut(CuratorZookeeperClient.java:323)
at org.apache.curator.RetryLoop.callWithRetry(RetryLoop.java:105)
at 
org.apache.curator.framework.imps.GetDataBuilderImpl.pathInForeground(GetDataBuilderImpl.java:288)
at 
org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:279)
at 
org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:41)
at 
org.apache.curator.framework.recipes.shared.SharedValue.readValue(SharedValue.java:244)
at 
org.apache.curator.framework.recipes.shared.SharedValue.access$100(SharedValue.java:44)
at 
org.apache.curator.framework.recipes.shared.SharedValue$1.process(SharedValue.java:61)
at 
org.apache.curator.framework.imps.NamespaceWatcher.process(NamespaceWatcher.java:67)
at 
org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:530)
at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:505)
at __randomizedtesting.SeedInfo.seed([41962AB239229C3B]:0)


FAILED:  
org.apache.solr.cloud.CdcrBootstrapTest.testBootstrapWithContinousIndexingOnSourceCluster

Error Message:
Document mismatch on target after sync expected:<2000> but was:<1100>

Stack Trace:
java.lang.AssertionError: Document mismatch on target after sync 
expected:<2000> but was:<1100>
at 
__randomizedtesting.SeedInfo.seed([41962AB239229C3B:95D361EBDE742FC0]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at 
org.apache.solr.cloud.CdcrBootstrapTest.testBootstrapWithContinousIndexingOnSourceCluster(CdcrBootstrapTest.java:309)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at 

[jira] [Updated] (SOLR-8689) bin/solr.cmd does not start with recent Verona builds of Java 9 because of version parsing issue

2017-08-20 Thread Uwe Schindler (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8689?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uwe Schindler updated SOLR-8689:

Attachment: SOLR-8689.patch

> bin/solr.cmd does not start with recent Verona builds of Java 9 because of 
> version parsing issue
> 
>
> Key: SOLR-8689
> URL: https://issues.apache.org/jira/browse/SOLR-8689
> Project: Solr
>  Issue Type: Bug
>  Components: scripts and tools
>Affects Versions: 5.5, 6.0
> Environment: Windows 7
>Reporter: Uwe Schindler
>Priority: Blocker
>  Labels: Java9
> Fix For: 7.0, master (8.0), 7.1
>
> Attachments: SOLR-8689.patch, SOLR-8689.patch
>
>
> At least on Windows, Solr 5.5 does not start with the shell script using a 
> Verona-Java-9 JDK:
> {noformat}
> *
> JAVA_HOME = C:\Program Files\Java\jdk-9
> java version "9-ea"
> Java(TM) SE Runtime Environment (build 
> 9-ea+105-2016-02-11-003336.javare.4433.nc)
> Java HotSpot(TM) 64-Bit Server VM (build 
> 9-ea+105-2016-02-11-003336.javare.4433.nc, mixed mode)
> *
> C:\Users\Uwe Schindler\Desktop\solr-5.5.0\bin>solr start
> ERROR: Java 1.7 or later is required to run Solr. Current Java version is: 
> 9-ea
> {noformat}
> I don't know if this is better on Linux, but I assume the version parsing 
> is broken (e.g., String#startsWith, interpreting it as a floating-point number, ...)
> We should fix this before Java 9 gets released! The version numbering scheme 
> changed completely: http://openjdk.java.net/jeps/223
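
The underlying pitfall is that pre-JEP-223 strings ("1.8.0_144") and JEP 223 strings ("9-ea", "9.0.1") need different handling. The actual fix belongs in the bin/solr and bin/solr.cmd scripts; the sketch below is only a Java illustration of scheme-agnostic major-version extraction (names are hypothetical):
{code}
// Illustrative only; the real parsing happens in the bin/solr and bin/solr.cmd scripts.
public class JavaVersionParse {
  static int majorJavaVersion(String version) {
    // Old scheme: "1.8.0_144" -> drop the leading "1." so the major version is 8.
    if (version.startsWith("1.")) {
      version = version.substring(2);
    }
    // New scheme (JEP 223): "9-ea", "9.0.1", "10" -> take the leading digits.
    int end = 0;
    while (end < version.length() && Character.isDigit(version.charAt(end))) {
      end++;
    }
    return Integer.parseInt(version.substring(0, end));
  }

  public static void main(String[] args) {
    System.out.println(majorJavaVersion("1.8.0_144")); // 8
    System.out.println(majorJavaVersion("9-ea"));      // 9
  }
}
{code}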



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-11261) Update to Hadoop 2.7.4

2017-08-20 Thread Uwe Schindler (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11261?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uwe Schindler resolved SOLR-11261.
--
Resolution: Fixed

> Update to Hadoop 2.7.4
> --
>
> Key: SOLR-11261
> URL: https://issues.apache.org/jira/browse/SOLR-11261
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Hadoop Integration, hdfs
>Affects Versions: 6.6
>Reporter: Uwe Schindler
>Assignee: Uwe Schindler
>Priority: Blocker
> Fix For: 7.0, 6.7, 6.6.1, master (8.0), 7.1
>
> Attachments: SOLR-11261-2.patch, SOLR-11261.patch
>
>
> In SOLR-10966 we added a "bad" workaround to prevent Hadoop's Shell class 
> from breaking with Java 9 (StringIndexOutOfBoundsException). This was 
> resolved in Hadoop 2.7.4, released a few weeks ago. We should revert the bad 
> hack and update Hadoop.
> After running tests, I see no issue with the bugfix release.
> I will commit to master and 7.x and, once it has settled, I will backport. If we 
> can't get this into 6.6.1, it's not so bad, but then we have to live with the 
> "bad" hack there.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10966) Add workaround for Hadoop-Common 2.7.2 incompatibility with Java 9

2017-08-20 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10966?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16134578#comment-16134578
 ] 

ASF subversion and git services commented on SOLR-10966:


Commit e0b54e6552775e2f71591e772bceb758c8428783 in lucene-solr's branch 
refs/heads/branch_6_6 from [~thetaphi]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=e0b54e6 ]

SOLR-11261, SOLR-10966: Upgrade to Hadoop 2.7.4 to fix incompatibility with 
Java 9.
This also reverts commit 85a27a231fdddb118ee178baac170da0097a02c0.

# Conflicts:
#   solr/CHANGES.txt


> Add workaround for Hadoop-Common 2.7.2 incompatibility with Java 9
> --
>
> Key: SOLR-10966
> URL: https://issues.apache.org/jira/browse/SOLR-10966
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Hadoop Integration, hdfs
>Affects Versions: 6.6
>Reporter: Uwe Schindler
>Assignee: Uwe Schindler
>Priority: Critical
> Fix For: 7.0, 6.7, 6.6.1
>
> Attachments: SOLR-10966.patch, SOLR-10966.patch
>
>
> I did some testing to work around HADOOP-14586 and found a temporary 
> solution. All tests pass with Java 9 build 175 (HDFS, Hadoop Auth / Kerberos).
> This is a temporary workaround until we can upgrade Hadoop, see SOLR-10951
> The trick here is a hack: the Hadoop Shell class tries to parse the 
> {{java.version}} system property, which is simply {{"9"}} on the Java 9 GA / 
> release candidate. It contains no dots and is shorter than 3 characters. 
> Hadoop tries to take {{substring(0,3)}} and fails with an 
> IndexOutOfBoundsException in clinit. To work around this, we do the following 
> on early Solr startup / test startup (in a static initializer, like we do for 
> logging initialization):
> - set {{java.version}} system property to {{"1.9"}}
> - initialize the Shell class in Hadoop
> - restore the old value of {{java.version}}
> The whole thing is done in a doPrivileged block. I ran some tests on Policeman 
> Jenkins and everything works. The hack is only applied if _we_ detect Java 9.
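
As a rough illustration of the workaround described above (the class and exact flow in Solr's startup code differ; this is only a sketch):
{code}
import java.security.AccessController;
import java.security.PrivilegedAction;

// Rough sketch of the described workaround; not the actual Solr startup code.
public class HadoopShellWorkaround {
  static void initHadoopShellOnJava9() {
    String realVersion = System.getProperty("java.version");
    if (realVersion == null || realVersion.startsWith("1.")) {
      return; // crude pre-Java 9 check; Hadoop's parser copes fine there
    }
    AccessController.doPrivileged((PrivilegedAction<Void>) () -> {
      try {
        System.setProperty("java.version", "1.9");     // value Hadoop's Shell can parse
        Class.forName("org.apache.hadoop.util.Shell"); // force <clinit> while the fake value is set
      } catch (ClassNotFoundException e) {
        // Hadoop not on the classpath; nothing to work around
      } finally {
        System.setProperty("java.version", realVersion); // restore the real value
      }
      return null;
    });
  }
}
{code}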



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11261) Update to Hadoop 2.7.4

2017-08-20 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11261?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16134577#comment-16134577
 ] 

ASF subversion and git services commented on SOLR-11261:


Commit e0b54e6552775e2f71591e772bceb758c8428783 in lucene-solr's branch 
refs/heads/branch_6_6 from [~thetaphi]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=e0b54e6 ]

SOLR-11261, SOLR-10966: Upgrade to Hadoop 2.7.4 to fix incompatibility with 
Java 9.
This also reverts commit 85a27a231fdddb118ee178baac170da0097a02c0.

# Conflicts:
#   solr/CHANGES.txt


> Update to Hadoop 2.7.4
> --
>
> Key: SOLR-11261
> URL: https://issues.apache.org/jira/browse/SOLR-11261
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Hadoop Integration, hdfs
>Affects Versions: 6.6
>Reporter: Uwe Schindler
>Assignee: Uwe Schindler
>Priority: Blocker
> Fix For: 7.0, 6.7, 6.6.1, master (8.0), 7.1
>
> Attachments: SOLR-11261-2.patch, SOLR-11261.patch
>
>
> In SOLR-10966 we added a "bad" workaround to prevent Hadoop's Shell class 
> from breaking with Java 9 (StringIndexOutOfBoundsException). This was 
> resolved in Hadoop 2.7.4, released a few weeks ago. We should revert the bad 
> hack and update Hadoop.
> After running tests, I see no issue with the bugfix release.
> I will commit to master and 7.x and, once it has settled, I will backport. If we 
> can't get this into 6.6.1, it's not so bad, but then we have to live with the 
> "bad" hack there.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11261) Update to Hadoop 2.7.4

2017-08-20 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11261?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16134579#comment-16134579
 ] 

ASF subversion and git services commented on SOLR-11261:


Commit 0ad7440bb7476d059a5717e1cb18c3f45a52825e in lucene-solr's branch 
refs/heads/branch_6_6 from [~thetaphi]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=0ad7440 ]

SOLR-11261: Fix missing dependency & add new thread filter


> Update to Hadoop 2.7.4
> --
>
> Key: SOLR-11261
> URL: https://issues.apache.org/jira/browse/SOLR-11261
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Hadoop Integration, hdfs
>Affects Versions: 6.6
>Reporter: Uwe Schindler
>Assignee: Uwe Schindler
>Priority: Blocker
> Fix For: 7.0, 6.7, 6.6.1, master (8.0), 7.1
>
> Attachments: SOLR-11261-2.patch, SOLR-11261.patch
>
>
> In SOLR-10966 we added a "bad" workaround to prevent Hadoop's Shell class 
> from breaking with Java 9 (StringIndexOutOfBoundsException). This was 
> resolved in Hadoop 2.7.4, released a few weeks ago. We should revert the bad 
> hack and update Hadoop.
> After running tests, I see no issue with the bugfix release.
> I will commit to master and 7.x and, once it has settled, I will backport. If we 
> can't get this into 6.6.1, it's not so bad, but then we have to live with the 
> "bad" hack there.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-6.6-Windows (64bit/jdk1.8.0_144) - Build # 23 - Still unstable!

2017-08-20 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.6-Windows/23/
Java: 64bit/jdk1.8.0_144 -XX:-UseCompressedOops -XX:+UseSerialGC

4 tests failed.
FAILED:  
org.apache.solr.cloud.CdcrBootstrapTest.testBootstrapWithContinousIndexingOnSourceCluster

Error Message:
Document mismatch on target after sync expected:<2000> but was:<1100>

Stack Trace:
java.lang.AssertionError: Document mismatch on target after sync 
expected:<2000> but was:<1100>
at 
__randomizedtesting.SeedInfo.seed([726A08F8FAECD10E:A62F43A11DBA62F5]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at 
org.apache.solr.cloud.CdcrBootstrapTest.testBootstrapWithContinousIndexingOnSourceCluster(CdcrBootstrapTest.java:309)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)


FAILED:  

[JENKINS] Lucene-Solr-6.6-Linux (64bit/jdk1.8.0_144) - Build # 72 - Still Unstable!

2017-08-20 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.6-Linux/72/
Java: 64bit/jdk1.8.0_144 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  
org.apache.solr.common.cloud.TestCollectionStateWatchers.testWaitForStateWatcherIsRetainedOnPredicateFailure

Error Message:
Did not see a fully active cluster after 30 seconds

Stack Trace:
java.lang.AssertionError: Did not see a fully active cluster after 30 seconds
at 
__randomizedtesting.SeedInfo.seed([22607AD7AE46FF01:AA56D88476E91713]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.common.cloud.TestCollectionStateWatchers.testWaitForStateWatcherIsRetainedOnPredicateFailure(TestCollectionStateWatchers.java:250)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)




Build Log:
[...truncated 14112 lines...]
   [junit4] Suite: org.apache.solr.common.cloud.TestCollectionStateWatchers
   [junit4]   2> 

[jira] [Commented] (SOLR-8689) bin/solr.cmd does not start with recent Verona builds of Java 9 because of version parsing issue

2017-08-20 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8689?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16134558#comment-16134558
 ] 

Uwe Schindler commented on SOLR-8689:
-

BTW, I set this as a blocker, as we cannot release Solr 7 at around the same 
time as Java 9 while it does not work with it.
If we cannot solve the logging issue, I will comment out the separate log file 
on Windows...

> bin/solr.cmd does not start with recent Verona builds of Java 9 because of 
> version parsing issue
> 
>
> Key: SOLR-8689
> URL: https://issues.apache.org/jira/browse/SOLR-8689
> Project: Solr
>  Issue Type: Bug
>  Components: scripts and tools
>Affects Versions: 5.5, 6.0
> Environment: Windows 7
>Reporter: Uwe Schindler
>Priority: Blocker
>  Labels: Java9
> Fix For: 7.0, master (8.0), 7.1
>
> Attachments: SOLR-8689.patch
>
>
> At least on Windows, Solr 5.5 does not start with the shell script using a 
> Verona-Java-9 JDK:
> {noformat}
> *
> JAVA_HOME = C:\Program Files\Java\jdk-9
> java version "9-ea"
> Java(TM) SE Runtime Environment (build 
> 9-ea+105-2016-02-11-003336.javare.4433.nc)
> Java HotSpot(TM) 64-Bit Server VM (build 
> 9-ea+105-2016-02-11-003336.javare.4433.nc, mixed mode)
> *
> C:\Users\Uwe Schindler\Desktop\solr-5.5.0\bin>solr start
> ERROR: Java 1.7 or later is required to run Solr. Current Java version is: 
> 9-ea
> {noformat}
> I don't know if this is better on Linux, but I assume the version parsing 
> is broken (e.g., String#startsWith, interpreting it as a floating-point number, ...)
> We should fix this before Java 9 gets released! The version numbering scheme 
> changed completely: http://openjdk.java.net/jeps/223



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-8689) bin/solr.cmd does not start with recent Verona builds of Java 9 because of version parsing issue

2017-08-20 Thread Uwe Schindler (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8689?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uwe Schindler updated SOLR-8689:

Attachment: SOLR-8689.patch

Here is my current patch, which breaks because of the absolute path issue.

> bin/solr.cmd does not start with recent Verona builds of Java 9 because of 
> version parsing issue
> 
>
> Key: SOLR-8689
> URL: https://issues.apache.org/jira/browse/SOLR-8689
> Project: Solr
>  Issue Type: Bug
>  Components: scripts and tools
>Affects Versions: 5.5, 6.0
> Environment: Windows 7
>Reporter: Uwe Schindler
>Priority: Blocker
>  Labels: Java9
> Fix For: 7.0, master (8.0), 7.1
>
> Attachments: SOLR-8689.patch
>
>
> At least on Windows, Solr 5.5 does not start with the shell script using a 
> Verona-Java-9 JDK:
> {noformat}
> *
> JAVA_HOME = C:\Program Files\Java\jdk-9
> java version "9-ea"
> Java(TM) SE Runtime Environment (build 
> 9-ea+105-2016-02-11-003336.javare.4433.nc)
> Java HotSpot(TM) 64-Bit Server VM (build 
> 9-ea+105-2016-02-11-003336.javare.4433.nc, mixed mode)
> *
> C:\Users\Uwe Schindler\Desktop\solr-5.5.0\bin>solr start
> ERROR: Java 1.7 or later is required to run Solr. Current Java version is: 
> 9-ea
> {noformat}
> I don't know if this is better with Linux, but I assume the version parsing 
> is broken (e.g., String#startsWith, interpret as floating point number,...)
> We should fix this before Java 9 gets released! The version numbering scheme 
> changed completely: http://openjdk.java.net/jeps/223



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8689) bin/solr.cmd does not start with recent Verona builds of Java 9 because of version parsing issue

2017-08-20 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8689?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16134556#comment-16134556
 ] 

Uwe Schindler commented on SOLR-8689:
-

I rewrote the Windows startup script, but I stumbled on an issue with the Java 
9 command-line parser. I asked on the HotSpot mailing list:
http://mail.openjdk.java.net/pipermail/hotspot-dev/2017-August/027962.html

{quote}
I am currently adapting Apache Solr's startup scripts for Java 9. Linux was 
already done at the beginning of this year and works perfectly, but Windows 
brings some problems. I already fixed the version-number parsing, but then I 
stumbled on the following: the Windows ".cmd" shell script uses the setting 
below to enable garbage-collection logging to a separate file when Java 9 is 
detected:
set 
GC_LOG_OPTS="-Xlog:gc*:file=!SOLR_LOGS_DIR!\solr_gc.log:time,uptime:filecount=9,filesize=2"

The problem is that "!SOLR_LOGS_DIR!" is already expanded to an absolute 
Windows path by the shell and therefore starts with "C:\". The colon in the 
drive letter then breaks the -Xlog option parsing. When Java 9 starts, it 
exits with the following parsing error:
Invalid -Xlog option '-Xlog:gc*:file=C:\Users\Uwe 
Schindler\Projects\lucene\trunk-lusolr1\solr\server\logs\solr_gc.log:time,uptime:filecount=9,filesize=2'

If I replace the path with a simple file name, without a path or drive letter, 
it works. How do I escape the colon in the drive letter correctly? To me this 
looks like a bummer.
{quote}
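To make the two cases from the quote concrete, these are the failing and the working invocations; the path and the filesize value are placeholders of mine, not taken from the actual script:
{noformat}
REM Fails on 9-ea+181: the drive-letter colon breaks the -Xlog option parsing
java -Xlog:gc*:file=C:\solr\server\logs\solr_gc.log:time,uptime:filecount=9,filesize=20M -version

REM Works: a plain file name without path or drive letter
java -Xlog:gc*:file=solr_gc.log:time,uptime:filecount=9,filesize=20M -version
{noformat}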

> bin/solr.cmd does not start with recent Verona builds of Java 9 because of 
> version parsing issue
> 
>
> Key: SOLR-8689
> URL: https://issues.apache.org/jira/browse/SOLR-8689
> Project: Solr
>  Issue Type: Bug
>  Components: scripts and tools
>Affects Versions: 5.5, 6.0
> Environment: Windows 7
>Reporter: Uwe Schindler
>Priority: Blocker
>  Labels: Java9
> Fix For: 7.0, master (8.0), 7.1
>
>
> At least on Windows, Solr 5.5 does not start with the shell script using a 
> Verona-Java-9 JDK:
> {noformat}
> *
> JAVA_HOME = C:\Program Files\Java\jdk-9
> java version "9-ea"
> Java(TM) SE Runtime Environment (build 
> 9-ea+105-2016-02-11-003336.javare.4433.nc)
> Java HotSpot(TM) 64-Bit Server VM (build 
> 9-ea+105-2016-02-11-003336.javare.4433.nc, mixed mode)
> *
> C:\Users\Uwe Schindler\Desktop\solr-5.5.0\bin>solr start
> ERROR: Java 1.7 or later is required to run Solr. Current Java version is: 
> 9-ea
> {noformat}
> I don't know if this is better with Linux, but I assume the version parsing 
> is broken (e.g., String#startsWith, interpret as floating point number,...)
> We should fix this before Java 9 gets released! The version numbering scheme 
> changed completely: http://openjdk.java.net/jeps/223



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-7863) Don't repeat postings (and perhaps positions) on ReverseWF, EdgeNGram, etc

2017-08-20 Thread Mikhail Khludnev (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7863?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Khludnev updated LUCENE-7863:
-
Attachment: LUCENE-7863.patch

WIP [^LUCENE-7863.patch]
It introduces a codec with two posting formats:
# a hijacking PF, which stores posting offsets for the original terms
# an injecting PF, which reverses terms and supplies offsets into the original 
terms' postings (this is the only place the file format changes: the offsets 
are written as zLong, since the offset deltas are negative)
It has to break into many private and final members, which blows up the patch.

> Don't repeat postings (and perhaps positions) on ReverseWF, EdgeNGram, etc  
> 
>
> Key: LUCENE-7863
> URL: https://issues.apache.org/jira/browse/LUCENE-7863
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/index
>Reporter: Mikhail Khludnev
> Attachments: LUCENE-7863.hazard, LUCENE-7863.patch
>
>
> h2. Context
> \*suffix and \*infix\* searches on large indexes. 
> h2. Problem
> Obviously applying {{ReversedWildcardFilter}} doubles an index size, and I'm 
> shuddering to think about EdgeNGrams...
> h2. Proposal 
> _DRY_
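As background for the context above, here is a minimal sketch of why reversed terms get indexed at all. The classes are standard Lucene query classes, but the field names and the bare StringBuilder reversal are simplifications of what ReversedWildcardFilter really does:
{noformat}
import org.apache.lucene.index.Term;
import org.apache.lucene.search.PrefixQuery;
import org.apache.lucene.search.WildcardQuery;

class ReversedWildcardSketch {
  // Slow on large indexes: a leading wildcard has to scan the whole term dictionary.
  static WildcardQuery leadingWildcard(String field, String suffix) {
    return new WildcardQuery(new Term(field, "*" + suffix));
  }

  // Fast prefix lookup, but only because every term was also indexed in reversed
  // form, which is the doubling of postings this issue wants to avoid.
  static PrefixQuery reversedPrefix(String reversedField, String suffix) {
    Term reversed = new Term(reversedField, new StringBuilder(suffix).reverse().toString());
    return new PrefixQuery(reversed);
  }
}
{noformat}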



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Closed] (LUCENE-6596) Make width of unordered near spans consistent with ordered

2017-08-20 Thread Paul Elschot (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6596?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paul Elschot closed LUCENE-6596.

Resolution: Fixed

Closing, not enough interest.

> Make width of unordered near spans consistent with ordered
> --
>
> Key: LUCENE-6596
> URL: https://issues.apache.org/jira/browse/LUCENE-6596
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/search
>Affects Versions: 6.0
>Reporter: Paul Elschot
>Priority: Minor
> Fix For: 6.0
>
> Attachments: LUCENE-6596.patch, LUCENE-6596.patch
>
>
> Use actual slop for width in NearSpansUnordered.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Closed] (LUCENE-6453) Specialize SpanPositionQueue similar to DisiPriorityQueue

2017-08-20 Thread Paul Elschot (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6453?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paul Elschot closed LUCENE-6453.

Resolution: Fixed

Closing, not enough interest.

> Specialize SpanPositionQueue similar to DisiPriorityQueue
> -
>
> Key: LUCENE-6453
> URL: https://issues.apache.org/jira/browse/LUCENE-6453
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/search
>Reporter: Paul Elschot
>Priority: Minor
> Fix For: 6.x, 6.0
>
> Attachments: LUCENE-6453.patch, LUCENE-6453.patch, LUCENE-6453.patch, 
> LUCENE-6453.patch
>
>
> Inline the position comparison function



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Closed] (LUCENE-7602) Fix compiler warnings for ant clean compile

2017-08-20 Thread Paul Elschot (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7602?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paul Elschot closed LUCENE-7602.


> Fix compiler warnings for ant clean compile
> ---
>
> Key: LUCENE-7602
> URL: https://issues.apache.org/jira/browse/LUCENE-7602
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Paul Elschot
>Priority: Minor
>  Labels: build
> Fix For: trunk
>
> Attachments: LUCENE-7602-ContextMap-lucene.patch, 
> LUCENE-7602-ContextMap-solr.patch, LUCENE-7602.patch, LUCENE-7602.patch, 
> LUCENE-7602.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11261) Update to Hadoop 2.7.4

2017-08-20 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11261?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16134535#comment-16134535
 ] 

ASF subversion and git services commented on SOLR-11261:


Commit 3916fe793a507160b937e5de426da58892f4cf9c in lucene-solr's branch 
refs/heads/branch_6x from [~thetaphi]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=3916fe7 ]

SOLR-11261: Fix missing dependency & add new thread filter


> Update to Hadoop 2.7.4
> --
>
> Key: SOLR-11261
> URL: https://issues.apache.org/jira/browse/SOLR-11261
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Hadoop Integration, hdfs
>Affects Versions: 6.6
>Reporter: Uwe Schindler
>Assignee: Uwe Schindler
>Priority: Blocker
> Fix For: 7.0, 6.7, 6.6.1, master (8.0), 7.1
>
> Attachments: SOLR-11261-2.patch, SOLR-11261.patch
>
>
> In SOLR-10966 we added a "bad" workaround to prevent Hadoop's Shell class 
> from breaking with Java 9 (StringIndexOutOfBoundsException). This was 
> resolved in Hadoop 2.7.4, released a few weeks ago. We should revert the bad 
> hack and update Hadoop.
> After running tests, I see no issue with the bugfix release.
> I will commit to master and 7.x and, once it has settled, I will backport. If we 
> can't get this into 6.6.1, it's not so bad, but then we have to live with the 
> "bad" hack there.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11261) Update to Hadoop 2.7.4

2017-08-20 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11261?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16134533#comment-16134533
 ] 

ASF subversion and git services commented on SOLR-11261:


Commit c221a596fe23088ae8cee1ff41e7dcf186e3b402 in lucene-solr's branch 
refs/heads/branch_6x from [~thetaphi]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=c221a59 ]

SOLR-11261, SOLR-10966: Upgrade to Hadoop 2.7.4 to fix incompatibility with 
Java 9.
This also reverts commit 85a27a231fdddb118ee178baac170da0097a02c0.


> Update to Hadoop 2.7.4
> --
>
> Key: SOLR-11261
> URL: https://issues.apache.org/jira/browse/SOLR-11261
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Hadoop Integration, hdfs
>Affects Versions: 6.6
>Reporter: Uwe Schindler
>Assignee: Uwe Schindler
>Priority: Blocker
> Fix For: 7.0, 6.7, 6.6.1, master (8.0), 7.1
>
> Attachments: SOLR-11261-2.patch, SOLR-11261.patch
>
>
> In SOLR-10966 we added a "bad" workaround to prevent Hadoop's Shell class 
> from breaking with Java 9 (StringIndexOutOfBoundsException). This was 
> resolved in Hadoop 2.7.4, released a few weeks ago. We should revert the bad 
> hack and update Hadoop.
> After running tests, I see no issue with the bugfix release.
> I will commit to master and 7.x and, once it has settled, I will backport. If we 
> can't get this into 6.6.1, it's not so bad, but then we have to live with the 
> "bad" hack there.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10966) Add workaround for Hadoop-Common 2.7.2 incompatibility with Java 9

2017-08-20 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10966?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16134534#comment-16134534
 ] 

ASF subversion and git services commented on SOLR-10966:


Commit c221a596fe23088ae8cee1ff41e7dcf186e3b402 in lucene-solr's branch 
refs/heads/branch_6x from [~thetaphi]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=c221a59 ]

SOLR-11261, SOLR-10966: Upgrade to Hadoop 2.7.4 to fix incompatibility with 
Java 9.
This also reverts commit 85a27a231fdddb118ee178baac170da0097a02c0.


> Add workaround for Hadoop-Common 2.7.2 incompatibility with Java 9
> --
>
> Key: SOLR-10966
> URL: https://issues.apache.org/jira/browse/SOLR-10966
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Hadoop Integration, hdfs
>Affects Versions: 6.6
>Reporter: Uwe Schindler
>Assignee: Uwe Schindler
>Priority: Critical
> Fix For: 7.0, 6.7, 6.6.1
>
> Attachments: SOLR-10966.patch, SOLR-10966.patch
>
>
> I did some testing to work around HADOOP-14586 and found a temporary 
> solution. All tests pass with Java 9 build 175 (HDFS, Hadoop Auth / Kerberos).
> This is a temporary workaround until we can upgrade Hadoop, see SOLR-10951
> The trick here is a hack: The Hadoop Shell class  tries to parse 
> {{java.version}} system property, which is simply {{"9"}} on the Java 9 GA / 
> release candidate. It contains no dots and is shorter than 3 characters. 
> Hadoop tries to get the {{substring(0,3)}} and fails with an 
> IndexOutOfBoundsException in clinit. To work around this, we do the following 
> on early Solr startup / test startup (in a static initializer, like we do for 
> logging initialization):
> - set {{java.version}} system property to {{"1.9"}}
> - initialize the Shell class in Hadoop
> - restore the old value of {{java.version}}
> The whole thing is done in a doPrivileged. I ran some tests on Policeman 
> Jenkins, everything works. The hack is only done if _we_ detect Java 9.
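A minimal sketch of the workaround as described above; the Hadoop class name and the java.version trick come from the issue text, while the class and method names here are my own framing and the actual Solr code may differ:
{noformat}
import java.security.AccessController;
import java.security.PrivilegedAction;

final class HadoopShellJava9Workaround {
  static void initHadoopShell() {
    AccessController.doPrivileged((PrivilegedAction<Void>) () -> {
      String saved = System.getProperty("java.version");
      try {
        // Pretend to be a pre-9 JVM so Shell's substring(0,3) parsing succeeds.
        System.setProperty("java.version", "1.9");
        // Force <clinit> of Hadoop's Shell class while the fake value is visible.
        Class.forName("org.apache.hadoop.util.Shell");
      } catch (ClassNotFoundException e) {
        throw new RuntimeException(e);
      } finally {
        // Restore the real value.
        System.setProperty("java.version", saved);
      }
      return null;
    });
  }
}
{noformat}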



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11261) Update to Hadoop 2.7.4

2017-08-20 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11261?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16134532#comment-16134532
 ] 

ASF subversion and git services commented on SOLR-11261:


Commit 75141efada4520c36b4b87a4e05b4ef1eff886a0 in lucene-solr's branch 
refs/heads/branch_7_0 from [~thetaphi]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=75141ef ]

SOLR-11261: Fix missing dependency & add new thread filter


> Update to Hadoop 2.7.4
> --
>
> Key: SOLR-11261
> URL: https://issues.apache.org/jira/browse/SOLR-11261
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Hadoop Integration, hdfs
>Affects Versions: 6.6
>Reporter: Uwe Schindler
>Assignee: Uwe Schindler
>Priority: Blocker
> Fix For: 7.0, 6.7, 6.6.1, master (8.0), 7.1
>
> Attachments: SOLR-11261-2.patch, SOLR-11261.patch
>
>
> In SOLR-10966 we added a "bad" workaround to prevent Hadoop's Shell class 
> from breaking with Java 9 (StringIndexOutOfBoundsException). This was 
> resolved in Hadoop 2.7.4, released a few weeks ago. We should revert the bad 
> hack and update Hadoop.
> After running tests, I see no issue with the bugfix release.
> I will commit to master and 7.x and, once it has settled, I will backport. If we 
> can't get this into 6.6.1, it's not so bad, but then we have to live with the 
> "bad" hack there.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10966) Add workaround for Hadoop-Common 2.7.2 incompatibility with Java 9

2017-08-20 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10966?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16134531#comment-16134531
 ] 

ASF subversion and git services commented on SOLR-10966:


Commit 04c63953cb35b9e921544be7989d2d67a707c159 in lucene-solr's branch 
refs/heads/branch_7_0 from [~thetaphi]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=04c6395 ]

SOLR-11261, SOLR-10966: Upgrade to Hadoop 2.7.4 to fix incompatibility with 
Java 9.
This also reverts commit 85a27a231fdddb118ee178baac170da0097a02c0.


> Add workaround for Hadoop-Common 2.7.2 incompatibility with Java 9
> --
>
> Key: SOLR-10966
> URL: https://issues.apache.org/jira/browse/SOLR-10966
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Hadoop Integration, hdfs
>Affects Versions: 6.6
>Reporter: Uwe Schindler
>Assignee: Uwe Schindler
>Priority: Critical
> Fix For: 7.0, 6.7, 6.6.1
>
> Attachments: SOLR-10966.patch, SOLR-10966.patch
>
>
> I did some testing to work around HADOOP-14586 and found a temporary 
> solution. All tests pass with Java 9 build 175 (HDFS, Hadoop Auth / Kerberos).
> This is a temporary workaround until we can upgrade Hadoop, see SOLR-10951
> The trick here is a hack: The Hadoop Shell class  tries to parse 
> {{java.version}} system property, which is simply {{"9"}} on the Java 9 GA / 
> release candidate. It contains no dots and is shorter than 3 characters. 
> Hadoop tries to get the {{substring(0,3)}} and fails with an 
> IndexOutOfBoundsException in clinit. To work around this, we do the following 
> on early Solr startup / test startup (in a static initializer, like we do for 
> logging initialization):
> - set {{java.version}} system property to {{"1.9"}}
> - initialize the Shell class in Hadoop
> - restore the old value of {{java.version}}
> The whole thing is done in a doPrivileged. I ran some tests on Policeman 
> Jenkins, everything works. The hack is only done if _we_ detect Java 9.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11261) Update to Hadoop 2.7.4

2017-08-20 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11261?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16134530#comment-16134530
 ] 

ASF subversion and git services commented on SOLR-11261:


Commit 04c63953cb35b9e921544be7989d2d67a707c159 in lucene-solr's branch 
refs/heads/branch_7_0 from [~thetaphi]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=04c6395 ]

SOLR-11261, SOLR-10966: Upgrade to Hadoop 2.7.4 to fix incompatibility with 
Java 9.
This also reverts commit 85a27a231fdddb118ee178baac170da0097a02c0.


> Update to Hadoop 2.7.4
> --
>
> Key: SOLR-11261
> URL: https://issues.apache.org/jira/browse/SOLR-11261
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Hadoop Integration, hdfs
>Affects Versions: 6.6
>Reporter: Uwe Schindler
>Assignee: Uwe Schindler
>Priority: Blocker
> Fix For: 7.0, 6.7, 6.6.1, master (8.0), 7.1
>
> Attachments: SOLR-11261-2.patch, SOLR-11261.patch
>
>
> In SOLR-10966 we added a "bad" workaround to prevent Hadoop's Shell class 
> from breaking with Java 9 (StringIndexOutOfBoundsException). This was 
> resolved in Hadoop 2.7.4, released a few weeks ago. We should revert the bad 
> hack and update Hadoop.
> After running tests, I see no issue with the bugfix release.
> I will commit to master and 7.x and, once it has settled, I will backport. If we 
> can't get this into 6.6.1, it's not so bad, but then we have to live with the 
> "bad" hack there.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-11264) Multivalued solr.UUIDField throws exception but updates field

2017-08-20 Thread Adam Holley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11264?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adam Holley updated SOLR-11264:
---
Description: 
Using the add operator on a multiValued UUID field throws an exception: 
"TransactionLog doesn't know how to serialize class java.util.UUID; try 
implementing ObjectResolver?"
However, even with the exception, the field is updated.

From schema.xml:
{quote}

{quote}


Perform an update request to set a single UUID: (works fine)
{quote}{{doc.setField("uuid_uuids","new 
HashMap(1){{put("set",UUID.randomUUID().toString());}});}}{quote}

Perform an update request to add an additional UUID: (throws exception)
{quote}{{doc.setField("uuid_uuids","new 
HashMap(1){{put("add",UUID.randomUUID().toString();}});}}
{quote}


  was:
Using the add operator on a multiValued UUID field throws an 
exception:TransactionLog doesn't know how to serialize class java.util.UUID; 
try implementing ObjectResolver?
However even with the exception the field is updated.

From schema.xml:
{quote}

{quote}


Perform an update request to set a single UUID: (works fine)
{quote}{{doc.setField("uuid_uuids","new 
HashMap(1){{put("set",UUID.randomUUID().toString());}});}}{quote}

Perform an update request to add an additional UUID: (throws exception)
{quote}doc.setField("uuid_uuids","new 
HashMap(1){{put("add",UUID.randomUUID().toString();}});
{quote}



> Multivalued solr.UUIDField throws exception but updates field
> -
>
> Key: SOLR-11264
> URL: https://issues.apache.org/jira/browse/SOLR-11264
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 6.6
>Reporter: Adam Holley
>Priority: Minor
>
> Using the add operator on a multiValued UUID field throws an 
> exception:TransactionLog doesn't know how to serialize class java.util.UUID; 
> try implementing ObjectResolver?
> However even with the exception the field is updated.
> From schema.xml:
> {quote}
> 
>  multiValued="true"/>{quote}
> Perform an update request to set a single UUID: (works fine)
> {quote}{{doc.setField("uuid_uuids","new 
> HashMap(1){{put("set",UUID.randomUUID().toString());}});}}{quote}
> Perform an update request to add an additional UUID: (throws exception)
> {quote}{{doc.setField("uuid_uuids","new 
> HashMap(1){{put("add",UUID.randomUUID().toString();}});}}
> {quote}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-11264) Multivalued solr.UUIDField throws exception but updates field

2017-08-20 Thread Adam Holley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11264?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adam Holley updated SOLR-11264:
---
Description: 
Using the add operator on a multiValued UUID field throws an 
exception:TransactionLog doesn't know how to serialize class java.util.UUID; 
try implementing ObjectResolver?
However even with the exception the field is updated.

From schema.xml:
{quote}

{quote}


Perform an update request to set a single UUID: (works fine)
{quote}{{doc.setField("uuid_uuids","new 
HashMap(1){{put("set",UUID.randomUUID().toString());}});}}{quote}

Perform an update request to add an additional UUID: (throws exception)
{quote}doc.setField("uuid_uuids","new 
HashMap(1){{put("add",UUID.randomUUID().toString();}});
{quote}


  was:
Using the add operator on a multiValued UUID field throws an 
exception:TransactionLog doesn't know how to serialize class java.util.UUID; 
try implementing ObjectResolver?
However even with the exception the field is updated.

From schema.xml:
{quote}

{quote}


Perform an update request to set a single UUID: (works fine)
{quote}doc.setField("uuid_uuids","new 
HashMap(1){{put("set",UUID.randomUUID().toString());}});{quote}

Perform an update request to add an additional UUID: (throws exception)
{quote}doc.setField("uuid_uuids","new 
HashMap(1){{put("add",UUID.randomUUID().toString();}});
{quote}



> Multivalued solr.UUIDField throws exception but updates field
> -
>
> Key: SOLR-11264
> URL: https://issues.apache.org/jira/browse/SOLR-11264
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 6.6
>Reporter: Adam Holley
>Priority: Minor
>
> Using the add operator on a multiValued UUID field throws an 
> exception:TransactionLog doesn't know how to serialize class java.util.UUID; 
> try implementing ObjectResolver?
> However even with the exception the field is updated.
> From schema.xml:
> {quote}
> 
>  multiValued="true"/>{quote}
> Perform an update request to set a single UUID: (works fine)
> {quote}{{doc.setField("uuid_uuids","new 
> HashMap(1){{put("set",UUID.randomUUID().toString());}});}}{quote}
> Perform an update request to add an additional UUID: (throws exception)
> {quote}doc.setField("uuid_uuids","new 
> HashMap(1){{put("add",UUID.randomUUID().toString();}});
> {quote}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-11264) Multivalued solr.UUIDField throws exception but updates field

2017-08-20 Thread Adam Holley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11264?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adam Holley updated SOLR-11264:
---
Description: 
Using the add operator on a multiValued UUID field throws an 
exception:TransactionLog doesn't know how to serialize class java.util.UUID; 
try implementing ObjectResolver?
However even with the exception the field is updated.

From schema.xml:
{quote}

{quote}


Perform an update request to set a single UUID: (works fine)
{quote}doc.setField("uuid_uuids","new 
HashMap(1){{put("set",UUID.randomUUID().toString());}});{quote}

Perform an update request to add an additional UUID: (throws exception)
{quote}doc.setField("uuid_uuids","new 
HashMap(1){{put("add",UUID.randomUUID().toString();}});
{quote}


  was:
Using the add operator on a multiValued UUID field throws an 
exception:TransactionLog doesn't know how to serialize class java.util.UUID; 
try implementing ObjectResolver?
However even with the exception the field is updated.

From schema.xml:





Perform an update request to set a single UUID: (works fine)
doc.setField("uuid_uuids","new 
HashMap(1){{put("set",UUID.randomUUID().toString());}});

Perform an update request to add an additional UUID: (throws exception)
doc.setField("uuid_uuids","new 
HashMap(1){{put("add",UUID.randomUUID().toString();}});




> Multivalued solr.UUIDField throws exception but updates field
> -
>
> Key: SOLR-11264
> URL: https://issues.apache.org/jira/browse/SOLR-11264
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 6.6
>Reporter: Adam Holley
>Priority: Minor
>
> Using the add operator on a multiValued UUID field throws an 
> exception:TransactionLog doesn't know how to serialize class java.util.UUID; 
> try implementing ObjectResolver?
> However even with the exception the field is updated.
> From schema.xml:
> {quote}
> 
>  multiValued="true"/>{quote}
> Perform an update request to set a single UUID: (works fine)
> {quote}doc.setField("uuid_uuids","new 
> HashMap(1){{put("set",UUID.randomUUID().toString());}});{quote}
> Perform an update request to add an additional UUID: (throws exception)
> {quote}doc.setField("uuid_uuids","new 
> HashMap(1){{put("add",UUID.randomUUID().toString();}});
> {quote}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-11264) Multivalued solr.UUIDField throws exception but updates field

2017-08-20 Thread Adam Holley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11264?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adam Holley updated SOLR-11264:
---
Description: 
Using the add operator on a multiValued UUID field throws an 
exception:TransactionLog doesn't know how to serialize class java.util.UUID; 
try implementing ObjectResolver?
However even with the exception the field is updated.

From schema.xml:





Perform an update request to set a single UUID: (works fine)
doc.setField("uuid_uuids","new 
HashMap(1){{put("set",UUID.randomUUID().toString());}});

Perform an update request to add an additional UUID: (throws exception)
doc.setField("uuid_uuids","new 
HashMap(1){{put("add",UUID.randomUUID().toString();}});



  was:
Using the add operator on a multiValued UUID field throws an 
exception:TransactionLog doesn't know how to serialize class java.util.UUID; 
try implementing ObjectResolver?
However even with the exception the field is updated.

From schema.xml:
{{


}}

Perform an update request to set a single UUID: (works fine)
{{doc.setField("uuid_uuids","new 
HashMap(1){{put("set",UUID.randomUUID().toString());}});}}

Perform an update request to add an additional UUID: (throws exception)
{{doc.setField("uuid_uuids","new 
HashMap(1){{put("add",UUID.randomUUID().toString();}});}}




> Multivalued solr.UUIDField throws exception but updates field
> -
>
> Key: SOLR-11264
> URL: https://issues.apache.org/jira/browse/SOLR-11264
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 6.6
>Reporter: Adam Holley
>Priority: Minor
>
> Using the add operator on a multiValued UUID field throws an 
> exception:TransactionLog doesn't know how to serialize class java.util.UUID; 
> try implementing ObjectResolver?
> However even with the exception the field is updated.
> From schema.xml:
> 
> 
>  multiValued="true"/>
> Perform an update request to set a single UUID: (works fine)
> doc.setField("uuid_uuids","new 
> HashMap(1){{put("set",UUID.randomUUID().toString());}});
> Perform an update request to add an additional UUID: (throws exception)
> doc.setField("uuid_uuids","new 
> HashMap(1){{put("add",UUID.randomUUID().toString();}});



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-11264) Multivalued solr.UUIDField throws exception but updates field

2017-08-20 Thread Adam Holley (JIRA)
Adam Holley created SOLR-11264:
--

 Summary: Multivalued solr.UUIDField throws exception but updates 
field
 Key: SOLR-11264
 URL: https://issues.apache.org/jira/browse/SOLR-11264
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
Affects Versions: 6.6
Reporter: Adam Holley
Priority: Minor


Using the add operator on a multiValued UUID field throws an exception: 
"TransactionLog doesn't know how to serialize class java.util.UUID; try 
implementing ObjectResolver?"
However, even with the exception, the field is updated.

From schema.xml:
{{


}}

Perform an update request to set a single UUID: (works fine)
{{doc.setField("uuid_uuids","new 
HashMap(1){{put("set",UUID.randomUUID().toString());}});}}

Perform an update request to add an additional UUID: (throws exception)
{{doc.setField("uuid_uuids","new 
HashMap(1){{put("add",UUID.randomUUID().toString();}});}}





--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS-EA] Lucene-Solr-7.x-Linux (64bit/jdk-9-ea+181) - Build # 294 - Still Unstable!

2017-08-20 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/294/
Java: 64bit/jdk-9-ea+181 -XX:-UseCompressedOops -XX:+UseG1GC 
--illegal-access=deny

1 tests failed.
FAILED:  org.apache.solr.cloud.HttpPartitionTest.test

Error Message:
Doc with id=1 not found in http://127.0.0.1:46229/collMinRf_1x3 due to: Path 
not found: /id; rsp={doc=null}

Stack Trace:
java.lang.AssertionError: Doc with id=1 not found in 
http://127.0.0.1:46229/collMinRf_1x3 due to: Path not found: /id; rsp={doc=null}
at 
__randomizedtesting.SeedInfo.seed([2A19EEC601BA7C4E:A24DD11CAF4611B6]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.cloud.HttpPartitionTest.assertDocExists(HttpPartitionTest.java:603)
at 
org.apache.solr.cloud.HttpPartitionTest.assertDocsExistInAllReplicas(HttpPartitionTest.java:558)
at 
org.apache.solr.cloud.HttpPartitionTest.testMinRf(HttpPartitionTest.java:249)
at 
org.apache.solr.cloud.HttpPartitionTest.test(HttpPartitionTest.java:127)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:993)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:968)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 

[JENKINS] Lucene-Solr-Tests-7.x - Build # 142 - Still Unstable

2017-08-20 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-7.x/142/

1 tests failed.
FAILED:  
org.apache.solr.cloud.CdcrBootstrapTest.testBootstrapWithContinousIndexingOnSourceCluster

Error Message:
Document mismatch on target after sync expected:<2000> but was:<1100>

Stack Trace:
java.lang.AssertionError: Document mismatch on target after sync 
expected:<2000> but was:<1100>
at 
__randomizedtesting.SeedInfo.seed([4FD1033D7564F214:9B944864923241EF]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at 
org.apache.solr.cloud.CdcrBootstrapTest.testBootstrapWithContinousIndexingOnSourceCluster(CdcrBootstrapTest.java:309)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)




Build Log:
[...truncated 12423 lines...]
   [junit4] Suite: org.apache.solr.cloud.CdcrBootstrapTest
   

[JENKINS-EA] Lucene-Solr-7.0-Windows (32bit/jdk-9-ea+181) - Build # 99 - Unstable!

2017-08-20 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.0-Windows/99/
Java: 32bit/jdk-9-ea+181 -server -XX:+UseConcMarkSweepGC --illegal-access=deny

3 tests failed.
FAILED:  org.apache.solr.core.TestJmxIntegration.testJmxRegistration

Error Message:
java.lang.InternalError: Memory Pool not found

Stack Trace:
javax.management.RuntimeErrorException: java.lang.InternalError: Memory Pool 
not found
at 
__randomizedtesting.SeedInfo.seed([1573693256F3E93B:9BA20D083BB2B15E]:0)
at 
java.management/com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.rethrow(DefaultMBeanServerInterceptor.java:831)
at 
java.management/com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.rethrowMaybeMBeanException(DefaultMBeanServerInterceptor.java:842)
at 
java.management/com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getAttribute(DefaultMBeanServerInterceptor.java:645)
at 
java.management/com.sun.jmx.mbeanserver.JmxMBeanServer.getAttribute(JmxMBeanServer.java:678)
at 
org.apache.solr.core.TestJmxIntegration.testJmxRegistration(TestJmxIntegration.java:121)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Commented] (SOLR-8689) bin/solr.cmd does not start with recent Verona builds of Java 9 because of version parsing issue

2017-08-20 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8689?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16134514#comment-16134514
 ] 

Uwe Schindler commented on SOLR-8689:
-

I am fixing this issue right now. I got the version parsing working and will 
now add the GC settings (Solr currently does not start because of the GC options).

> bin/solr.cmd does not start with recent Verona builds of Java 9 because of 
> version parsing issue
> 
>
> Key: SOLR-8689
> URL: https://issues.apache.org/jira/browse/SOLR-8689
> Project: Solr
>  Issue Type: Bug
>  Components: scripts and tools
>Affects Versions: 5.5, 6.0
> Environment: Windows 7
>Reporter: Uwe Schindler
>Priority: Blocker
>  Labels: Java9
> Fix For: 7.0, master (8.0), 7.1
>
>
> At least on Windows, Solr 5.5 does not start with the shell script using a 
> Verona-Java-9 JDK:
> {noformat}
> *
> JAVA_HOME = C:\Program Files\Java\jdk-9
> java version "9-ea"
> Java(TM) SE Runtime Environment (build 
> 9-ea+105-2016-02-11-003336.javare.4433.nc)
> Java HotSpot(TM) 64-Bit Server VM (build 
> 9-ea+105-2016-02-11-003336.javare.4433.nc, mixed mode)
> *
> C:\Users\Uwe Schindler\Desktop\solr-5.5.0\bin>solr start
> ERROR: Java 1.7 or later is required to run Solr. Current Java version is: 
> 9-ea
> {noformat}
> I don't know if this is better with Linux, but I assume the version parsing 
> is broken (e.g., String#startsWith, interpret as floating point number,...)
> We should fix this before Java 9 gets released! The version numbering scheme 
> changed completely: http://openjdk.java.net/jeps/223



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-8689) bin/solr.cmd does not start with recent Verona builds of Java 9 because of version parsing issue

2017-08-20 Thread Uwe Schindler (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8689?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uwe Schindler updated SOLR-8689:

Priority: Blocker  (was: Major)

> bin/solr.cmd does not start with recent Verona builds of Java 9 because of 
> version parsing issue
> 
>
> Key: SOLR-8689
> URL: https://issues.apache.org/jira/browse/SOLR-8689
> Project: Solr
>  Issue Type: Bug
>  Components: scripts and tools
>Affects Versions: 5.5, 6.0
> Environment: Windows 7
>Reporter: Uwe Schindler
>Priority: Blocker
>  Labels: Java9
> Fix For: 7.0, master (8.0), 7.1
>
>
> At least on Windows, Solr 5.5 does not start with the shell script using a 
> Verona-Java-9 JDK:
> {noformat}
> *
> JAVA_HOME = C:\Program Files\Java\jdk-9
> java version "9-ea"
> Java(TM) SE Runtime Environment (build 
> 9-ea+105-2016-02-11-003336.javare.4433.nc)
> Java HotSpot(TM) 64-Bit Server VM (build 
> 9-ea+105-2016-02-11-003336.javare.4433.nc, mixed mode)
> *
> C:\Users\Uwe Schindler\Desktop\solr-5.5.0\bin>solr start
> ERROR: Java 1.7 or later is required to run Solr. Current Java version is: 
> 9-ea
> {noformat}
> I don't know if this is better with Linux, but I assume the version parsing 
> is broken (e.g., String#startsWith, interpret as floating point number,...)
> We should fix this before Java 9 gets released! The version numbering scheme 
> changed completely: http://openjdk.java.net/jeps/223



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-8689) bin/solr.cmd does not start with recent Verona builds of Java 9 because of version parsing issue

2017-08-20 Thread Uwe Schindler (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8689?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uwe Schindler updated SOLR-8689:

Fix Version/s: 7.1
   master (8.0)
   7.0

> bin/solr.cmd does not start with recent Verona builds of Java 9 because of 
> version parsing issue
> 
>
> Key: SOLR-8689
> URL: https://issues.apache.org/jira/browse/SOLR-8689
> Project: Solr
>  Issue Type: Bug
>  Components: scripts and tools
>Affects Versions: 5.5, 6.0
> Environment: Windows 7
>Reporter: Uwe Schindler
>Priority: Blocker
>  Labels: Java9
> Fix For: 7.0, master (8.0), 7.1
>
>
> At least on Windows, Solr 5.5 does not start with the shell script using a 
> Verona-Java-9 JDK:
> {noformat}
> *
> JAVA_HOME = C:\Program Files\Java\jdk-9
> java version "9-ea"
> Java(TM) SE Runtime Environment (build 
> 9-ea+105-2016-02-11-003336.javare.4433.nc)
> Java HotSpot(TM) 64-Bit Server VM (build 
> 9-ea+105-2016-02-11-003336.javare.4433.nc, mixed mode)
> *
> C:\Users\Uwe Schindler\Desktop\solr-5.5.0\bin>solr start
> ERROR: Java 1.7 or later is required to run Solr. Current Java version is: 
> 9-ea
> {noformat}
> I don't know if this is better with Linux, but I assume the version parsing 
> is broken (e.g., String#startsWith, interpret as floating point number,...)
> We should fix this before Java 9 gets released! The version numbering scheme 
> changed completely: http://openjdk.java.net/jeps/223



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-Tests-6.6 - Build # 33 - Unstable

2017-08-20 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-6.6/33/

6 tests failed.
FAILED:  
org.apache.solr.cloud.CdcrBootstrapTest.testBootstrapWithContinousIndexingOnSourceCluster

Error Message:
Document mismatch on target after sync expected:<2000> but was:<1100>

Stack Trace:
java.lang.AssertionError: Document mismatch on target after sync 
expected:<2000> but was:<1100>
at 
__randomizedtesting.SeedInfo.seed([143B5CC779620DB:D506FE9590C09320]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at 
org.apache.solr.cloud.CdcrBootstrapTest.testBootstrapWithContinousIndexingOnSourceCluster(CdcrBootstrapTest.java:309)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)


FAILED:  
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testCollectionsAPI

Error Message:

[jira] [Updated] (SOLR-11263) Payload function throws NPE for undefined fields

2017-08-20 Thread Ishan Chattopadhyaya (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11263?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ishan Chattopadhyaya updated SOLR-11263:

Affects Version/s: 6.6

> Payload function throws NPE for undefined fields
> 
>
> Key: SOLR-11263
> URL: https://issues.apache.org/jira/browse/SOLR-11263
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 6.6
>Reporter: Ishan Chattopadhyaya
>
> When an undefined field is used with the payload function query, an NPE is 
> thrown. We should probably throw a meaningful error message instead.
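For reference, a minimal SolrJ sketch of the kind of request that trips this
NPE; the base URL, collection name, and the undefined field name are
hypothetical and only illustrate the report above:

    import org.apache.solr.client.solrj.SolrQuery;
    import org.apache.solr.client.solrj.impl.HttpSolrClient;

    public class PayloadUndefinedFieldRepro {
      public static void main(String[] args) throws Exception {
        // Hypothetical local node and collection; adjust to your own setup.
        try (HttpSolrClient client =
                 new HttpSolrClient.Builder("http://localhost:8983/solr/techproducts").build()) {
          SolrQuery q = new SolrQuery("*:*");
          // "no_such_field" is deliberately not defined in the schema; per this
          // issue the request currently surfaces an NPE instead of a clear error.
          q.setFields("id", "p:payload(no_such_field,A)");
          System.out.println(client.query(q).getResults());
        }
      }
    }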



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-11263) Payload function throws NPE for undefined fields

2017-08-20 Thread Ishan Chattopadhyaya (JIRA)
Ishan Chattopadhyaya created SOLR-11263:
---

 Summary: Payload function throws NPE for undefined fields
 Key: SOLR-11263
 URL: https://issues.apache.org/jira/browse/SOLR-11263
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Ishan Chattopadhyaya


When an undefined field is used with the payload function query, an NPE is 
thrown. We should probably throw a meaningful error message instead.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-Windows (32bit/jdk1.8.0_144) - Build # 6835 - Still Unstable!

2017-08-20 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/6835/
Java: 32bit/jdk1.8.0_144 -client -XX:+UseSerialGC

3 tests failed.
FAILED:  
org.apache.solr.cloud.CdcrBootstrapTest.testBootstrapWithContinousIndexingOnSourceCluster

Error Message:
Document mismatch on target after sync expected:<2000> but was:<1100>

Stack Trace:
java.lang.AssertionError: Document mismatch on target after sync 
expected:<2000> but was:<1100>
at 
__randomizedtesting.SeedInfo.seed([C02FF7B828655068:146ABCE1CF33E393]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at 
org.apache.solr.cloud.CdcrBootstrapTest.testBootstrapWithContinousIndexingOnSourceCluster(CdcrBootstrapTest.java:309)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)


FAILED:  

[jira] [Commented] (SOLR-11261) Update to Hadoop 2.7.4

2017-08-20 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11261?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16134483#comment-16134483
 ] 

ASF subversion and git services commented on SOLR-11261:


Commit 6a8197619f803a131ab45f05ee26be1e69c062b3 in lucene-solr's branch 
refs/heads/branch_7x from [~thetaphi]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=6a81976 ]

SOLR-11261: Fix missing dependency & add new thread filter


> Update to Hadoop 2.7.4
> --
>
> Key: SOLR-11261
> URL: https://issues.apache.org/jira/browse/SOLR-11261
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Hadoop Integration, hdfs
>Affects Versions: 6.6
>Reporter: Uwe Schindler
>Assignee: Uwe Schindler
>Priority: Blocker
> Fix For: 7.0, 6.7, 6.6.1, master (8.0), 7.1
>
> Attachments: SOLR-11261-2.patch, SOLR-11261.patch
>
>
> In SOLR-10966 we added a "bad" workaround to prevent Hadoop's Shell class 
> from breaking with Java 9 (StringIndexOutOfBoundsException). This was 
> resolved in Hadoop 2.7.4, released a few weeks ago. We should revert the bad 
> hack and update Hadoop.
> After running tests, I see no issues with the bugfix release.
> I will commit to master and 7.x and, once it has settled, backport. If we 
> can't get this into 6.6.1, it's not so bad, but then we have to live with the 
> "bad" hack there.
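For context, a self-contained sketch of the failure mode described above. This
is not Hadoop's actual Shell code, just the same style of fixed-width
java.version parsing, plus one way to derive the major version that tolerates
both the old ("1.8.0_144") and the new ("9", "9-ea") formats:

    public class JavaVersionParsing {
      // Fragile: assumes java.version always looks like "1.x...", so it throws
      // StringIndexOutOfBoundsException once the property is just "9".
      static boolean java7OrAboveFragile(String version) {
        return version.substring(0, 3).compareTo("1.7") >= 0;
      }

      // More tolerant: "1.8.0_144" -> 8, "9" -> 9, "9-ea" -> 9, "11.0.2" -> 11.
      static int majorVersion(String version) {
        String s = version.startsWith("1.") ? version.substring(2) : version;
        int i = 0;
        while (i < s.length() && Character.isDigit(s.charAt(i))) {
          i++;
        }
        return i == 0 ? -1 : Integer.parseInt(s.substring(0, i));
      }

      public static void main(String[] args) {
        for (String v : new String[] {"1.8.0_144", "9-ea", "9"}) {
          System.out.println(v + " -> major " + majorVersion(v));
          try {
            System.out.println(v + " -> fragile check " + java7OrAboveFragile(v));
          } catch (StringIndexOutOfBoundsException e) {
            System.out.println(v + " -> fragile check throws " + e);
          }
        }
      }
    }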



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11261) Update to Hadoop 2.7.4

2017-08-20 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11261?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16134481#comment-16134481
 ] 

ASF subversion and git services commented on SOLR-11261:


Commit cf051abcb9f8385e5de65de6c08468e31707f2d2 in lucene-solr's branch 
refs/heads/master from [~thetaphi]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=cf051ab ]

SOLR-11261: Fix missing dependency & add new thread filter


> Update to Hadoop 2.7.4
> --
>
> Key: SOLR-11261
> URL: https://issues.apache.org/jira/browse/SOLR-11261
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Hadoop Integration, hdfs
>Affects Versions: 6.6
>Reporter: Uwe Schindler
>Assignee: Uwe Schindler
>Priority: Blocker
> Fix For: 7.0, 6.7, 6.6.1, master (8.0), 7.1
>
> Attachments: SOLR-11261-2.patch, SOLR-11261.patch
>
>
> In SOLR-10966 we added a "bad" workaround to prevent Hadoop's Shell class 
> from breaking with Java 9 (StringIndexOutOfBoundsException). This was 
> resolved in Hadoop 2.7.4, released a few weeks ago. We should revert the bad 
> hack and update Hadoop.
> After running tests, I see no issues with the bugfix release.
> I will commit to master and 7.x and, once it has settled, backport. If we 
> can't get this into 6.6.1, it's not so bad, but then we have to live with the 
> "bad" hack there.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-11261) Update to Hadoop 2.7.4

2017-08-20 Thread Uwe Schindler (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11261?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uwe Schindler updated SOLR-11261:
-
Attachment: SOLR-11261-2.patch

This patch fixes the remaining issue. It also adds another thread filter.

> Update to Hadoop 2.7.4
> --
>
> Key: SOLR-11261
> URL: https://issues.apache.org/jira/browse/SOLR-11261
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Hadoop Integration, hdfs
>Affects Versions: 6.6
>Reporter: Uwe Schindler
>Assignee: Uwe Schindler
>Priority: Blocker
> Fix For: 7.0, 6.7, 6.6.1, master (8.0), 7.1
>
> Attachments: SOLR-11261-2.patch, SOLR-11261.patch
>
>
> In SOLR-10966 we added a "bad" workaround to prevent Hadoop's Shell class 
> from breaking with Java 9 (StringIndexOutOfBoundsException). This was 
> resolved in Hadoop 2.7.4, released a few weeks ago. We should revert the bad 
> hack and update Hadoop.
> After running tests, I see no issues with the bugfix release.
> I will commit to master and 7.x and, once it has settled, backport. If we 
> can't get this into 6.6.1, it's not so bad, but then we have to live with the 
> "bad" hack there.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: 7.0 Release Update

2017-08-20 Thread Ishan Chattopadhyaya
I've added SOLR-11183 to the release branch. Please let me know if anyone
has any concerns.
Thanks,
Ishan

On Sun, Aug 20, 2017 at 5:55 PM, Yonik Seeley  wrote:

> I opened https://issues.apache.org/jira/browse/SOLR-11262
> I don't know if it has implications for 7.0 or not.
>
> From the issue:
> """This means that any code using PushWriter (via MapWriter or
> IteratorWriter) will be broken if one tries to use XML response
> format. This may easily go unnoticed if one is not using XML response
> format in testing (JSON or binary is frequently used)."""
>
>
> -Yonik
>
>
> On Tue, Aug 15, 2017 at 5:14 AM, Noble Paul  wrote:
> > sorry for the last-minute notice. I need to fix the following as well.
> > It may take a few hours
> > https://issues.apache.org/jira/browse/SOLR-11239
> >
> > On Tue, Aug 15, 2017 at 6:41 AM, Andrzej Białecki
> >  wrote:
> >> Then, if I may be so bold, I’d like to slip in SOLR-11235, which is a simple
> >> AlreadyClosedException prevention fix. Patch is ready, tests are passing.
> >>
> >> On 14 Aug 2017, at 19:17, Anshum Gupta  wrote:
> >>
> >> Thanks Ab.
> >>
> >> I'll cut an RC on Wednesday, so that both, I get the time, and also that the
> >> tests get some time on Jenkins.
> >>
> >> Anshum
> >>
> >> On Mon, Aug 14, 2017 at 5:29 AM Andrzej Białecki
> >>  wrote:
> >>>
> >>> Hi,
> >>>
> >>> I’ve committed the fix for SOLR-11221 to branch_7_0 (and branch_7x and
> >>> master).
> >>>
> >>> On 12 Aug 2017, at 02:20, Andrzej Białecki
> >>>  wrote:
> >>>
> >>> Hi Anshum,
> >>>
> >>> The patch for SOLR-11221 is ready, with one caveat - it required larger
> >>> changes than I thought, so there’s a sizeable chunk of new code that is not
> >>> so well tested… I added a test that used to fail without this change, and
> >>> manual testing confirms that metrics are now correctly reported after core
> >>> reloads.
> >>>
> >>> We could postpone this fix to 7.0.1 if there are objections, but I think
> >>> it should go in to 7.0 - without the fix JMX reporting is surely broken,
> >>> with the fix it’s only a possibility ;)
> >>>
> >>>
> >>> On 11 Aug 2017, at 19:59, Anshum Gupta  wrote:
> >>>
> >>> Thanks for the report Mark!
> >>>
> >>> and yes, I'll wait until the JMX issue is fixed.
> >>>
> >>> Anshum
> >>>
> >>> On Fri, Aug 11, 2017 at 9:49 AM Mark Miller 
> wrote:
> 
>  Yeah, let's not release a major version with JMX monitoring broken.
> 
>  Here is a 30 run test report for the 7.0 branch:
>  http://apache-solr-7-0.bitballoon.com/20170811
> 
>  - Mark
> 
>  On Thu, Aug 10, 2017 at 4:02 PM Tomas Fernandez Lobbe <
> tflo...@apple.com>
>  wrote:
> >
> > Let's fix it before releasing. I’d hate to release with a known critical
> > bug.
> >
> > On Aug 10, 2017, at 12:54 PM, Anshum Gupta 
> > wrote:
> >
> > Hi Ab,
> >
> > How quickly are we talking about? If you suggest, we could wait,
> > depending upon the impact, and the time required to fix it.
> >
> > Anshum
> >
> > On Thu, Aug 10, 2017 at 12:28 PM Andrzej Białecki
> >  wrote:
> >>
> >> I just discovered SOLR-11221, which basically breaks JMX monitoring. We
> >> could either release with “known issues” and then quickly do 7.0.1, or wait
> >> until it’s fixed.
> >>
> >> On 10 Aug 2017, at 18:55, Mark Miller 
> wrote:
> >>
> >> I'll generate a test report for the 7.0 branch tonight so we can
> >> evaluate that for an rc as well.
> >>
> >> - Mark
> >>
> >> On Mon, Aug 7, 2017 at 1:32 PM Anshum Gupta  >
> >> wrote:
> >>>
> >>> Good news!
> >>>
> >>> I don't see any 'blockers' for 7.0 anymore, which means, after giving
> >>> Jenkins a couple of days, I'll cut out an RC. I intend to do this on
> >>> Wednesday/Thursday, unless a blocker comes up, which I hope shouldn't be the
> >>> case.
> >>>
> >>> Anshum
> >>>
> >>>
> >>> On Tue, Jul 25, 2017 at 4:02 PM Steve Rowe 
> wrote:
> 
>  I worked through the list of issues with the "numeric-tries-to-points”
>  label and marked those as 7.0 Blocker that seemed reasonable, on the
>  assumption that we should at a minimum give clear error messages for
>  points non-compatibility.
> 
>  If others don’t agree with the Blocker assessments I’ve made, I’m
>  willing to discuss on the issues.
> 
>  I plan on starting to work on the remaining 7.0 blockers now.  I
>  would welcome assistance in clearing them up.
> 
>  Here’s a JIRA query to see just the 

[jira] [Updated] (SOLR-11183) Prefix V2 APIs with /api

2017-08-20 Thread Ishan Chattopadhyaya (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11183?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ishan Chattopadhyaya updated SOLR-11183:

Fix Version/s: (was: 7.1)
   master (8.0)
   7.0

> Prefix V2 APIs with /api
> 
>
> Key: SOLR-11183
> URL: https://issues.apache.org/jira/browse/SOLR-11183
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.0
>Reporter: Noble Paul
>Assignee: Noble Paul
> Fix For: 7.0, master (8.0)
>
> Attachments: SOLR-11183.patch, SOLR-11183.patch, SOLR-11183.patch, 
> SOLR-11183.patch
>
>
> Per the mail thread
> http://lucene.472066.n3.nabble.com/v2-API-will-there-ever-be-a-v3-td4340901.html
> it makes sense to prefix the v2 APIs with {{/api}} instead of {{/v2}} if we never 
> plan to have a {{/v3}}.
> In principle, this makes total sense.
> The challenge is that it takes a while to change the code and tests to make 
> this work. Should this be a blocker, and should we hold up the release?
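As a quick illustration of what this change means for clients, a minimal sketch
that lists collections through the relocated v2 endpoint; the host and port are
the stock defaults and are only an assumption here:

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.nio.charset.StandardCharsets;

    public class V2ApiSmokeCheck {
      public static void main(String[] args) throws Exception {
        // After this change the v2 APIs live under /api (previously /v2),
        // e.g. the collections listing:
        URL url = new URL("http://localhost:8983/api/collections");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("GET");
        try (BufferedReader in = new BufferedReader(
            new InputStreamReader(conn.getInputStream(), StandardCharsets.UTF_8))) {
          in.lines().forEach(System.out::println);
        }
      }
    }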



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-11183) Prefix V2 APIs with /api

2017-08-20 Thread Ishan Chattopadhyaya (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11183?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ishan Chattopadhyaya resolved SOLR-11183.
-
Resolution: Fixed
  Assignee: Ishan Chattopadhyaya  (was: Noble Paul)

> Prefix V2 APIs with /api
> 
>
> Key: SOLR-11183
> URL: https://issues.apache.org/jira/browse/SOLR-11183
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.0
>Reporter: Noble Paul
>Assignee: Ishan Chattopadhyaya
> Fix For: 7.0, master (8.0)
>
> Attachments: SOLR-11183.patch, SOLR-11183.patch, SOLR-11183.patch, 
> SOLR-11183.patch
>
>
> Per the mail thread
> http://lucene.472066.n3.nabble.com/v2-API-will-there-ever-be-a-v3-td4340901.html
> it makes sense to prefix the v2 APIs with {{/api}} instead of {{/v2}} if we never 
> plan to have a {{/v3}}.
> In principle, this makes total sense.
> The challenge is that it takes a while to change the code and tests to make 
> this work. Should this be a blocker, and should we hold up the release?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11183) Prefix V2 APIs with /api

2017-08-20 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11183?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16134476#comment-16134476
 ] 

ASF subversion and git services commented on SOLR-11183:


Commit a1375432119adcde39dbaf52047f7136e1930be5 in lucene-solr's branch 
refs/heads/branch_7_0 from [~ichattopadhyaya]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=a137543 ]

SOLR-11183: V2 APIs are now available at /api endpoint


> Prefix V2 APIs with /api
> 
>
> Key: SOLR-11183
> URL: https://issues.apache.org/jira/browse/SOLR-11183
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.0
>Reporter: Noble Paul
>Assignee: Noble Paul
> Fix For: 7.1
>
> Attachments: SOLR-11183.patch, SOLR-11183.patch, SOLR-11183.patch, 
> SOLR-11183.patch
>
>
> Per the mail thread
> http://lucene.472066.n3.nabble.com/v2-API-will-there-ever-be-a-v3-td4340901.html
> it makes sense to prefix the v2 APIs with {{/api}} instead of {{/v2}} if we never 
> plan to have a {{/v3}}.
> In principle, this makes total sense.
> The challenge is that it takes a while to change the code and tests to make 
> this work. Should this be a blocker, and should we hold up the release?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11183) Prefix V2 APIs with /api

2017-08-20 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11183?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16134474#comment-16134474
 ] 

ASF subversion and git services commented on SOLR-11183:


Commit c8e0e939e496d0e77994e010d1eb436613dd66b7 in lucene-solr's branch 
refs/heads/master from [~ichattopadhyaya]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=c8e0e93 ]

SOLR-11183: V2 APIs are now available at /api endpoint


> Prefix V2 APIs with /api
> 
>
> Key: SOLR-11183
> URL: https://issues.apache.org/jira/browse/SOLR-11183
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.0
>Reporter: Noble Paul
>Assignee: Noble Paul
> Fix For: 7.1
>
> Attachments: SOLR-11183.patch, SOLR-11183.patch, SOLR-11183.patch, 
> SOLR-11183.patch
>
>
> Per the mail thread
> http://lucene.472066.n3.nabble.com/v2-API-will-there-ever-be-a-v3-td4340901.html
> it makes sense to prefix the v2 APIs with {{/api}} instead of {{/v2}} if we never 
> plan to have a {{/v3}}.
> In principle, this makes total sense.
> The challenge is that it takes a while to change the code and tests to make 
> this work. Should this be a blocker, and should we hold up the release?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11183) Prefix V2 APIs with /api

2017-08-20 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11183?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16134475#comment-16134475
 ] 

ASF subversion and git services commented on SOLR-11183:


Commit 12bb39cee8c3f18914285fbdca08efa066ac4851 in lucene-solr's branch 
refs/heads/branch_7x from [~ichattopadhyaya]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=12bb39c ]

SOLR-11183: V2 APIs are now available at /api endpoint


> Prefix V2 APIs with /api
> 
>
> Key: SOLR-11183
> URL: https://issues.apache.org/jira/browse/SOLR-11183
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.0
>Reporter: Noble Paul
>Assignee: Noble Paul
> Fix For: 7.1
>
> Attachments: SOLR-11183.patch, SOLR-11183.patch, SOLR-11183.patch, 
> SOLR-11183.patch
>
>
> Per the mail thread
> http://lucene.472066.n3.nabble.com/v2-API-will-there-ever-be-a-v3-td4340901.html
> it makes sense to prefix the v2 APIs with {{/api}} instead of {{/v2}} if we never 
> plan to have a {{/v3}}.
> In principle, this makes total sense.
> The challenge is that it takes a while to change the code and tests to make 
> this work. Should this be a blocker, and should we hold up the release?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-6.6-Linux (32bit/jdk1.8.0_144) - Build # 71 - Still Unstable!

2017-08-20 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.6-Linux/71/
Java: 32bit/jdk1.8.0_144 -client -XX:+UseG1GC

4 tests failed.
FAILED:  
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testCollectionsAPI

Error Message:
expected:<3> but was:<2>

Stack Trace:
java.lang.AssertionError: expected:<3> but was:<2>
at 
__randomizedtesting.SeedInfo.seed([C8CD88DCAD20A8B2:80B8FC68AB138727]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at org.junit.Assert.assertEquals(Assert.java:456)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testCollectionsAPI(CollectionsAPIDistributedZkTest.java:522)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)


FAILED:  org.apache.solr.cloud.ShardSplitTest.testSplitWithChaosMonkey

Error Message:
There are still nodes 

[jira] [Commented] (SOLR-11261) Update to Hadoop 2.7.4

2017-08-20 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11261?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16134463#comment-16134463
 ] 

Uwe Schindler commented on SOLR-11261:
--

Unfortunately, with this commit they introduced a new test dependency - sorry 
for not seeing this earlier (I did run the tests, but on Windows not everything 
is executed): 
https://github.com/apache/hadoop/commit/1d017040605b64c7092d8e83d057f4427044aa87

I have to add another old Mortbay-Jetty dependency for tests. I will check 
again.
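A small diagnostic sketch, independent of the build itself, that checks whether
the class the HDFS test mini-cluster trips over is present on a given classpath;
the class name is taken from the NoClassDefFoundError in the Jenkins reports,
and the check is only illustrative:

    public class OldJettyClasspathCheck {
      public static void main(String[] args) {
        // The class the MiniDFSCluster-based tests fail to load without the
        // old org.mortbay Jetty test dependency.
        String cls = "org.mortbay.jetty.security.SslSelectChannelConnector";
        try {
          Class<?> c = Class.forName(cls);
          java.security.CodeSource src = c.getProtectionDomain().getCodeSource();
          System.out.println(cls + " found"
              + (src != null ? " in " + src.getLocation() : ""));
        } catch (ClassNotFoundException e) {
          System.out.println(cls + " is missing from this classpath");
        }
      }
    }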

> Update to Hadoop 2.7.4
> --
>
> Key: SOLR-11261
> URL: https://issues.apache.org/jira/browse/SOLR-11261
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Hadoop Integration, hdfs
>Affects Versions: 6.6
>Reporter: Uwe Schindler
>Assignee: Uwe Schindler
>Priority: Blocker
> Fix For: 7.0, 6.7, 6.6.1, master (8.0), 7.1
>
> Attachments: SOLR-11261.patch
>
>
> In SOLR-10966 we added a "bad" workaround to prevent Hadoop's Shell class 
> from breaking with Java 9 (StringIndexOutOfBoundsException). This was 
> resolved in Hadoop 2.7.4, released a few weeks ago. We should revert the bad 
> hack and update Hadoop.
> After running tests, I see no issues with the bugfix release.
> I will commit to master and 7.x and, once it has settled, backport. If we 
> can't get this into 6.6.1, it's not so bad, but then we have to live with the 
> "bad" hack there.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-MacOSX (64bit/jdk1.8.0) - Build # 4134 - Unstable!

2017-08-20 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-MacOSX/4134/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseParallelGC

8 tests failed.
FAILED:  
org.apache.solr.cloud.CollectionsAPISolrJTest.testAddAndDeleteReplicaProp

Error Message:
Could not find collection : replicaProperties

Stack Trace:
org.apache.solr.common.SolrException: Could not find collection : 
replicaProperties
at 
__randomizedtesting.SeedInfo.seed([546CF73DF2010524:90B748D5492EF648]:0)
at 
org.apache.solr.common.cloud.ClusterState.getCollection(ClusterState.java:109)
at 
org.apache.solr.cloud.SolrCloudTestCase.getCollectionState(SolrCloudTestCase.java:247)
at 
org.apache.solr.cloud.CollectionsAPISolrJTest.testAddAndDeleteReplicaProp(CollectionsAPISolrJTest.java:365)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)


FAILED:  org.apache.solr.cloud.CollectionsAPISolrJTest.testSplitShard

Error Message:
Error from server at 

[jira] [Updated] (SOLR-11183) Prefix V2 APIs with /api

2017-08-20 Thread Ishan Chattopadhyaya (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11183?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ishan Chattopadhyaya updated SOLR-11183:

Summary: Prefix V2 APIs with /api  (was: why call the API end point /v2 
will there ever be a /v3)

> Prefix V2 APIs with /api
> 
>
> Key: SOLR-11183
> URL: https://issues.apache.org/jira/browse/SOLR-11183
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.0
>Reporter: Noble Paul
>Assignee: Noble Paul
> Fix For: 7.1
>
> Attachments: SOLR-11183.patch, SOLR-11183.patch, SOLR-11183.patch, 
> SOLR-11183.patch
>
>
> Per the mail thread
> http://lucene.472066.n3.nabble.com/v2-API-will-there-ever-be-a-v3-td4340901.html
> it makes sense to prefix the v2 APIs with {{/api}} instead of {{/v2}} if we never 
> plan to have a {{/v3}}.
> In principle, this makes total sense.
> The challenge is that it takes a while to change the code and tests to make 
> this work. Should this be a blocker, and should we hold up the release?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-11183) why call the API end point /v2 will there ever be a /v3

2017-08-20 Thread Ishan Chattopadhyaya (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11183?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ishan Chattopadhyaya updated SOLR-11183:

Attachment: SOLR-11183.patch

Updating [~noble.paul]'s latest patch to replace some more documentation 
references.

I'm planning to commit this to the 7.0 branch in an hour or so, unless there 
are any objections. I'm okay with reverting it after the commit if objections 
come up.

> why call the API end point /v2 will there ever be a /v3
> ---
>
> Key: SOLR-11183
> URL: https://issues.apache.org/jira/browse/SOLR-11183
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.0
>Reporter: Noble Paul
>Assignee: Noble Paul
> Fix For: 7.1
>
> Attachments: SOLR-11183.patch, SOLR-11183.patch, SOLR-11183.patch, 
> SOLR-11183.patch
>
>
> Per the mail thread
> http://lucene.472066.n3.nabble.com/v2-API-will-there-ever-be-a-v3-td4340901.html
> it makes sense to prefix the v2 APIs with {{/api}} instead of {{/v2}} if we never 
> plan to have a {{/v3}}.
> In principle, this makes total sense.
> The challenge is that it takes a while to change the code and tests to make 
> this work. Should this be a blocker, and should we hold up the release?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS-EA] Lucene-Solr-7.x-Linux (64bit/jdk-9-ea+181) - Build # 293 - Still Unstable!

2017-08-20 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/293/
Java: 64bit/jdk-9-ea+181 -XX:-UseCompressedOops -XX:+UseParallelGC 
--illegal-access=deny

10 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.index.hdfs.CheckHdfsIndexTest

Error Message:
org/mortbay/jetty/security/SslSelectChannelConnector

Stack Trace:
java.lang.NoClassDefFoundError: 
org/mortbay/jetty/security/SslSelectChannelConnector
at __randomizedtesting.SeedInfo.seed([93C2FC57DFD3175D]:0)
at java.base/java.lang.ClassLoader.defineClass1(Native Method)
at java.base/java.lang.ClassLoader.defineClass(ClassLoader.java:1007)
at 
java.base/java.security.SecureClassLoader.defineClass(SecureClassLoader.java:174)
at 
java.base/jdk.internal.loader.BuiltinClassLoader.defineClass(BuiltinClassLoader.java:801)
at 
java.base/jdk.internal.loader.BuiltinClassLoader.access$400(BuiltinClassLoader.java:95)
at 
java.base/jdk.internal.loader.BuiltinClassLoader$4.run(BuiltinClassLoader.java:712)
at 
java.base/jdk.internal.loader.BuiltinClassLoader$4.run(BuiltinClassLoader.java:707)
at java.base/java.security.AccessController.doPrivileged(Native Method)
at 
java.base/jdk.internal.loader.BuiltinClassLoader.findClassOnClassPathOrNull(BuiltinClassLoader.java:720)
at 
java.base/jdk.internal.loader.BuiltinClassLoader.loadClassOrNull(BuiltinClassLoader.java:622)
at 
java.base/jdk.internal.loader.BuiltinClassLoader.loadClass(BuiltinClassLoader.java:580)
at 
java.base/jdk.internal.loader.ClassLoaders$AppClassLoader.loadClass(ClassLoaders.java:185)
at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:496)
at 
org.apache.hadoop.hdfs.DFSUtil.httpServerTemplateForNNAndJN(DFSUtil.java:1738)
at 
org.apache.hadoop.hdfs.server.namenode.NameNodeHttpServer.start(NameNodeHttpServer.java:121)
at 
org.apache.hadoop.hdfs.server.namenode.NameNode.startHttpServer(NameNode.java:760)
at 
org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:639)
at 
org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:819)
at 
org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:803)
at 
org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1500)
at 
org.apache.hadoop.hdfs.MiniDFSCluster.createNameNode(MiniDFSCluster.java:1115)
at 
org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:986)
at 
org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:815)
at org.apache.hadoop.hdfs.MiniDFSCluster.(MiniDFSCluster.java:746)
at org.apache.hadoop.hdfs.MiniDFSCluster.(MiniDFSCluster.java:616)
at 
org.apache.solr.cloud.hdfs.HdfsTestUtil.setupClass(HdfsTestUtil.java:105)
at 
org.apache.solr.cloud.hdfs.HdfsTestUtil.setupClass(HdfsTestUtil.java:63)
at 
org.apache.solr.index.hdfs.CheckHdfsIndexTest.setupClass(CheckHdfsIndexTest.java:62)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:847)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 

[JENKINS] Lucene-Solr-6.6-Windows (32bit/jdk1.8.0_144) - Build # 22 - Failure!

2017-08-20 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.6-Windows/22/
Java: 32bit/jdk1.8.0_144 -server -XX:+UseConcMarkSweepGC

2 tests failed.
FAILED:  org.apache.solr.cloud.MissingSegmentRecoveryTest.testLeaderRecovery

Error Message:
Expected a collection with one shard and two replicas null Last available 
state: 
DocCollection(MissingSegmentRecoveryTest//collections/MissingSegmentRecoveryTest/state.json/5)={
   "replicationFactor":"2",   "shards":{"shard1":{   
"range":"80000000-7fffffff",   "state":"active",   "replicas":{ 
"core_node1":{   "core":"MissingSegmentRecoveryTest_shard1_replica1",   
"base_url":"http://127.0.0.1:65123/solr",   
"node_name":"127.0.0.1:65123_solr",   "state":"down"}, 
"core_node2":{   "core":"MissingSegmentRecoveryTest_shard1_replica2",   
"base_url":"http://127.0.0.1:65124/solr",   
"node_name":"127.0.0.1:65124_solr",   "state":"active",   
"leader":"true",   "router":{"name":"compositeId"},   
"maxShardsPerNode":"1",   "autoAddReplicas":"false"}

Stack Trace:
java.lang.AssertionError: Expected a collection with one shard and two replicas
null
Last available state: 
DocCollection(MissingSegmentRecoveryTest//collections/MissingSegmentRecoveryTest/state.json/5)={
  "replicationFactor":"2",
  "shards":{"shard1":{
  "range":"80000000-7fffffff",
  "state":"active",
  "replicas":{
"core_node1":{
  "core":"MissingSegmentRecoveryTest_shard1_replica1",
  "base_url":"http://127.0.0.1:65123/solr",
  "node_name":"127.0.0.1:65123_solr",
  "state":"down"},
"core_node2":{
  "core":"MissingSegmentRecoveryTest_shard1_replica2",
  "base_url":"http://127.0.0.1:65124/solr",
  "node_name":"127.0.0.1:65124_solr",
  "state":"active",
  "leader":"true",
  "router":{"name":"compositeId"},
  "maxShardsPerNode":"1",
  "autoAddReplicas":"false"}
at 
__randomizedtesting.SeedInfo.seed([CD6ECB74B7EA62BC:9D3B5377EECBD4A1]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.cloud.SolrCloudTestCase.waitForState(SolrCloudTestCase.java:265)
at 
org.apache.solr.cloud.MissingSegmentRecoveryTest.testLeaderRecovery(MissingSegmentRecoveryTest.java:105)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 

[jira] [Updated] (SOLR-11183) why call the API end point /v2 will there ever be a /v3

2017-08-20 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11183?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul updated SOLR-11183:
--
Attachment: SOLR-11183.patch

> why call the API end point /v2 will there ever be a /v3
> ---
>
> Key: SOLR-11183
> URL: https://issues.apache.org/jira/browse/SOLR-11183
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.0
>Reporter: Noble Paul
>Assignee: Noble Paul
> Fix For: 7.1
>
> Attachments: SOLR-11183.patch, SOLR-11183.patch, SOLR-11183.patch
>
>
> Per the mail thread
> http://lucene.472066.n3.nabble.com/v2-API-will-there-ever-be-a-v3-td4340901.html
> it makes sense to prefix the v2 APIs with {{/api}} instead of {{/v2}} if we never 
> plan to have a {{/v3}}.
> In principle, this makes total sense.
> The challenge is that it takes a while to change the code and tests to make 
> this work. Should this be a blocker, and should we hold up the release?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-Linux (64bit/jdk1.8.0_144) - Build # 20349 - Still Unstable!

2017-08-20 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/20349/
Java: 64bit/jdk1.8.0_144 -XX:-UseCompressedOops -XX:+UseG1GC

7 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.handler.TestHdfsBackupRestoreCore

Error Message:
org/mortbay/jetty/security/SslSelectChannelConnector

Stack Trace:
java.lang.NoClassDefFoundError: 
org/mortbay/jetty/security/SslSelectChannelConnector
at __randomizedtesting.SeedInfo.seed([B619AB572762494C]:0)
at java.lang.ClassLoader.defineClass1(Native Method)
at java.lang.ClassLoader.defineClass(ClassLoader.java:763)
at 
java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
at java.net.URLClassLoader.defineClass(URLClassLoader.java:467)
at java.net.URLClassLoader.access$100(URLClassLoader.java:73)
at java.net.URLClassLoader$1.run(URLClassLoader.java:368)
at java.net.URLClassLoader$1.run(URLClassLoader.java:362)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:361)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:335)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at 
org.apache.hadoop.hdfs.DFSUtil.httpServerTemplateForNNAndJN(DFSUtil.java:1738)
at 
org.apache.hadoop.hdfs.server.namenode.NameNodeHttpServer.start(NameNodeHttpServer.java:121)
at 
org.apache.hadoop.hdfs.server.namenode.NameNode.startHttpServer(NameNode.java:760)
at 
org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:639)
at 
org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:819)
at 
org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:803)
at 
org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1500)
at 
org.apache.hadoop.hdfs.MiniDFSCluster.createNameNode(MiniDFSCluster.java:1115)
at 
org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:986)
at 
org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:815)
at org.apache.hadoop.hdfs.MiniDFSCluster.(MiniDFSCluster.java:746)
at org.apache.hadoop.hdfs.MiniDFSCluster.(MiniDFSCluster.java:616)
at 
org.apache.solr.cloud.hdfs.HdfsTestUtil.setupClass(HdfsTestUtil.java:105)
at 
org.apache.solr.cloud.hdfs.HdfsTestUtil.setupClass(HdfsTestUtil.java:63)
at 
org.apache.solr.handler.TestHdfsBackupRestoreCore.setupClass(TestHdfsBackupRestoreCore.java:108)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:847)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[JENKINS] Lucene-Solr-SmokeRelease-6.6 - Build # 22 - Still Failing

2017-08-20 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-6.6/22/

No tests ran.

Build Log:
[...truncated 25892 lines...]
prepare-release-no-sign:
[mkdir] Created dir: 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-6.6/lucene/build/smokeTestRelease/dist
 [copy] Copying 476 files to 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-6.6/lucene/build/smokeTestRelease/dist/lucene
 [copy] Copying 215 files to 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-6.6/lucene/build/smokeTestRelease/dist/solr
   [smoker] Java 1.8 JAVA_HOME=/home/jenkins/tools/java/latest1.8
   [smoker] NOTE: output encoding is UTF-8
   [smoker] 
   [smoker] Load release URL 
"file:/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-6.6/lucene/build/smokeTestRelease/dist/"...
   [smoker] 
   [smoker] Test Lucene...
   [smoker]   test basics...
   [smoker]   get KEYS
   [smoker] 0.2 MB in 0.03 sec (8.5 MB/sec)
   [smoker]   check changes HTML...
   [smoker]   download lucene-6.6.1-src.tgz...
   [smoker] 29.6 MB in 0.08 sec (387.0 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download lucene-6.6.1.tgz...
   [smoker] 67.6 MB in 0.20 sec (342.5 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download lucene-6.6.1.zip...
   [smoker] 78.0 MB in 0.20 sec (381.2 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   unpack lucene-6.6.1.tgz...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] test demo with 1.8...
   [smoker]   got 6252 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] check Lucene's javadoc JAR
   [smoker]   unpack lucene-6.6.1.zip...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] test demo with 1.8...
   [smoker]   got 6252 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] check Lucene's javadoc JAR
   [smoker]   unpack lucene-6.6.1-src.tgz...
   [smoker] make sure no JARs/WARs in src dist...
   [smoker] run "ant validate"
   [smoker] run tests w/ Java 8 and testArgs='-Dtests.slow=false'...
   [smoker] test demo with 1.8...
   [smoker]   got 229 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] generate javadocs w/ Java 8...
   [smoker] 
   [smoker] Crawl/parse...
   [smoker] 
   [smoker] Verify...
   [smoker]   confirm all releases have coverage in TestBackwardsCompatibility
   [smoker] find all past Lucene releases...
   [smoker] run TestBackwardsCompatibility..
   [smoker] success!
   [smoker] 
   [smoker] Test Solr...
   [smoker]   test basics...
   [smoker]   get KEYS
   [smoker] 0.2 MB in 0.00 sec (68.6 MB/sec)
   [smoker]   check changes HTML...
   [smoker] Traceback (most recent call last):
   [smoker]   File 
"/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-6.6/dev-tools/scripts/smokeTestRelease.py",
 line 1478, in <module>
   [smoker] main()
   [smoker]   File 
"/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-6.6/dev-tools/scripts/smokeTestRelease.py",
 line 1422, in main
   [smoker] smokeTest(c.java, c.url, c.revision, c.version, c.tmp_dir, 
c.is_signed, ' '.join(c.test_args))
   [smoker]   File 
"/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-6.6/dev-tools/scripts/smokeTestRelease.py",
 line 1464, in smokeTest
   [smoker] checkSigs('solr', solrPath, version, tmpDir, isSigned)
   [smoker]   File 
"/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-6.6/dev-tools/scripts/smokeTestRelease.py",
 line 370, in checkSigs
   [smoker] testChanges(project, version, changesURL)
   [smoker]   File 
"/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-6.6/dev-tools/scripts/smokeTestRelease.py",
 line 418, in testChanges
   [smoker] checkChangesContent(s, version, changesURL, project, True)
   [smoker]   File 
"/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-6.6/dev-tools/scripts/smokeTestRelease.py",
 line 477, in checkChangesContent
   [smoker] raise RuntimeError('%s has duplicate section "%s" under release 
"%s"' % (name, text, release))
   [smoker] RuntimeError: 
file:///home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-6.6/lucene/build/smokeTestRelease/dist/solr/changes/Changes.html
 has duplicate section "Release 6.6.1 " under release "6.6.1"

BUILD FAILED
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-6.6/build.xml:571:
 exec returned: 1

Total time: 132 minutes 35 seconds
Build step 'Invoke Ant' marked build as failure
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any
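
For illustration, the RuntimeError above comes from the smoke tester's CHANGES validation: it walks the section headings of Changes.html and fails if the same section title appears twice under one release. The following is a rough, self-contained Java sketch of that logic only; it is not the actual smokeTestRelease.py code, and the class and method names are invented for the example.

import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class DuplicateSectionCheck {

  // Fail if any section title repeats under the given release.
  static void checkSections(String release, List<String> sectionTitles) {
    Set<String> seen = new HashSet<>();
    for (String title : sectionTitles) {
      if (!seen.add(title)) {
        throw new RuntimeException("Changes.html has duplicate section \"" + title
            + "\" under release \"" + release + "\"");
      }
    }
  }

  public static void main(String[] args) {
    // Illustrative input resembling the failure above: the "Release 6.6.1"
    // heading appears twice under the 6.6.1 release.
    try {
      checkSections("6.6.1", Arrays.asList("Release 6.6.1", "Bug Fixes", "Release 6.6.1"));
    } catch (RuntimeException e) {
      System.out.println(e.getMessage());
    }
  }
}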

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

Re: 7.0 Release Update

2017-08-20 Thread Yonik Seeley
I opened https://issues.apache.org/jira/browse/SOLR-11262
I don't know if it has implications for 7.0 or not.

From the issue:
"""This means that any code using PushWriter (via MapWriter or
IteratorWriter) will be broken if one tries to use XML response
format. This may easily go unnoticed if one is not using XML response
format in testing (JSON or binary is frequently used)."""


-Yonik


On Tue, Aug 15, 2017 at 5:14 AM, Noble Paul  wrote:
> Sorry for the last-minute notice. I need to fix the following as well.
> It may take a few hours
> https://issues.apache.org/jira/browse/SOLR-11239
>
> On Tue, Aug 15, 2017 at 6:41 AM, Andrzej Białecki
>  wrote:
>> Then, if I may be so bold, I’d like to slip in SOLR-11235, which is a simple
>> AlreadyClosedException prevention fix. Patch is ready, tests are passing.
>>
>> On 14 Aug 2017, at 19:17, Anshum Gupta  wrote:
>>
>> Thanks Ab.
>>
>> I'll cut an RC on Wednesday, so that both, I get the time, and also that the
>> tests get some time on Jenkins.
>>
>> Anshum
>>
>> On Mon, Aug 14, 2017 at 5:29 AM Andrzej Białecki
>>  wrote:
>>>
>>> Hi,
>>>
>>> I’ve committed the fix for SOLR-11221 to branch_7_0 (and branch_7x and
>>> master).
>>>
>>> On 12 Aug 2017, at 02:20, Andrzej Białecki
>>>  wrote:
>>>
>>> Hi Anshum,
>>>
>>> The patch for SOLR-11221 is ready, with one caveat - it required larger
>>> changes than I thought, so there’s a sizeable chunk of new code that is not
>>> so well tested… I added a test that used to fail without this change, and
>>> manual testing confirms that metrics are now correctly reported after core
>>> reloads.
>>>
>>> We could postpone this fix to 7.0.1 if there are objections, but I think
>>> it should go in to 7.0 - without the fix JMX reporting is surely broken,
>>> with the fix it’s only a possibility ;)
>>>
>>>
>>> On 11 Aug 2017, at 19:59, Anshum Gupta  wrote:
>>>
>>> Thanks for the report Mark!
>>>
>>> and yes, I'll wait until the JMX issue is fixed.
>>>
>>> Anshum
>>>
>>> On Fri, Aug 11, 2017 at 9:49 AM Mark Miller  wrote:

 Yeah, let's not release a major version with JMX monitoring broken.

 Here is a 30 run test report for the 7.0 branch:
 http://apache-solr-7-0.bitballoon.com/20170811

 - Mark

 On Thu, Aug 10, 2017 at 4:02 PM Tomas Fernandez Lobbe 
 wrote:
>
> Lets fix it before releasing. I’d hate to release with a known critical
> bug.
>
> On Aug 10, 2017, at 12:54 PM, Anshum Gupta 
> wrote:
>
> Hi Ab,
>
> How quickly are we talking about? If you suggest, we could wait,
> depending upon the impact, and the time required to fix it.
>
> Anshum
>
> On Thu, Aug 10, 2017 at 12:28 PM Andrzej Białecki
>  wrote:
>>
>> I just discovered SOLR-11221, which basically breaks JMX monitoring. We
>> could either release with “known issues” and then quickly do 7.0.1, or 
>> wait
>> until it’s fixed.
>>
>> On 10 Aug 2017, at 18:55, Mark Miller  wrote:
>>
>> I'll generate a test report for the 7.0 branch tonight so we can
>> evaluate that for an rc as well.
>>
>> - Mark
>>
>> On Mon, Aug 7, 2017 at 1:32 PM Anshum Gupta 
>> wrote:
>>>
>>> Good news!
>>>
>>> I don't see any 'blockers' for 7.0 anymore, which means, after giving
>>> Jenkins a couple of days, I'll cut out an RC. I intend to do this on
>>> Wednesday/Thursday, unless a blocker comes up, which I hope shouldn't 
>>> be the
>>> case.
>>>
>>> Anshum
>>>
>>>
>>> On Tue, Jul 25, 2017 at 4:02 PM Steve Rowe  wrote:

 I worked through the list of issues with the
 "numeric-tries-to-points” label and marked those as 7.0 Blocker that 
 seemed
 reasonable, on the assumption that we should at a minimum give clear 
 error
 messages for points non-compatibility.

 If others don’t agree with the Blocker assessments I’ve made, I’m
 willing to discuss on the issues.

 I plan on starting to work on the remaining 7.0 blockers now.  I
 would welcome assistance in clearing them up.

 Here’s a JIRA query to see just the remaining 7.0 blockers, of which
 there are currently 12:


 

 --
 Steve
 www.lucidworks.com

 > On Jul 25, 2017, at 2:41 PM, Anshum Gupta 
 > wrote:
 >
 > I will *try* to get to 

[JENKINS-EA] Lucene-Solr-7.x-Linux (32bit/jdk-9-ea+181) - Build # 292 - Unstable!

2017-08-20 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/292/
Java: 32bit/jdk-9-ea+181 -client -XX:+UseConcMarkSweepGC --illegal-access=deny

9 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.MoveReplicaHDFSUlogDirTest

Error Message:
org/mortbay/jetty/security/SslSelectChannelConnector

Stack Trace:
java.lang.NoClassDefFoundError: 
org/mortbay/jetty/security/SslSelectChannelConnector
at __randomizedtesting.SeedInfo.seed([F40BC78059FCD62F]:0)
at java.base/java.lang.ClassLoader.defineClass1(Native Method)
at java.base/java.lang.ClassLoader.defineClass(ClassLoader.java:1007)
at 
java.base/java.security.SecureClassLoader.defineClass(SecureClassLoader.java:174)
at 
java.base/jdk.internal.loader.BuiltinClassLoader.defineClass(BuiltinClassLoader.java:801)
at 
java.base/jdk.internal.loader.BuiltinClassLoader.access$400(BuiltinClassLoader.java:95)
at 
java.base/jdk.internal.loader.BuiltinClassLoader$4.run(BuiltinClassLoader.java:712)
at 
java.base/jdk.internal.loader.BuiltinClassLoader$4.run(BuiltinClassLoader.java:707)
at java.base/java.security.AccessController.doPrivileged(Native Method)
at 
java.base/jdk.internal.loader.BuiltinClassLoader.findClassOnClassPathOrNull(BuiltinClassLoader.java:720)
at 
java.base/jdk.internal.loader.BuiltinClassLoader.loadClassOrNull(BuiltinClassLoader.java:622)
at 
java.base/jdk.internal.loader.BuiltinClassLoader.loadClass(BuiltinClassLoader.java:580)
at 
java.base/jdk.internal.loader.ClassLoaders$AppClassLoader.loadClass(ClassLoaders.java:185)
at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:496)
at 
org.apache.hadoop.hdfs.DFSUtil.httpServerTemplateForNNAndJN(DFSUtil.java:1738)
at 
org.apache.hadoop.hdfs.server.namenode.NameNodeHttpServer.start(NameNodeHttpServer.java:121)
at 
org.apache.hadoop.hdfs.server.namenode.NameNode.startHttpServer(NameNode.java:760)
at 
org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:639)
at 
org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:819)
at 
org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:803)
at 
org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1500)
at 
org.apache.hadoop.hdfs.MiniDFSCluster.createNameNode(MiniDFSCluster.java:1115)
at 
org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:986)
at 
org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:815)
at org.apache.hadoop.hdfs.MiniDFSCluster.(MiniDFSCluster.java:746)
at org.apache.hadoop.hdfs.MiniDFSCluster.(MiniDFSCluster.java:616)
at 
org.apache.solr.cloud.hdfs.HdfsTestUtil.setupClass(HdfsTestUtil.java:105)
at 
org.apache.solr.cloud.hdfs.HdfsTestUtil.setupClass(HdfsTestUtil.java:63)
at 
org.apache.solr.cloud.MoveReplicaHDFSUlogDirTest.setupClass(MoveReplicaHDFSUlogDirTest.java:55)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:847)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)

[jira] [Created] (SOLR-11262) XML writer does not implement PushWriter

2017-08-20 Thread Yonik Seeley (JIRA)
Yonik Seeley created SOLR-11262:
---

 Summary: XML writer does not implement PushWriter
 Key: SOLR-11262
 URL: https://issues.apache.org/jira/browse/SOLR-11262
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Yonik Seeley


While implementing points support for the terms component in a streaming manner 
(via PushWriter/MapWriter) I discovered that the XML response writer does not 
implement this interface.

This means that any code using PushWriter (via MapWriter or IteratorWriter) 
will be broken if one tries to use XML response format.  This may easily go 
unnoticed if one is not using XML response format in testing (JSON or binary is 
frequently used).
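
For context, here is a minimal, self-contained sketch of the streaming idea involved: a component pushes entries through a callback instead of building the whole response in memory, and a response writer that never invokes that callback silently loses the streamed data, which is the failure mode described above. The StreamingMap/EntryWriter interfaces below are simplified stand-ins invented for the example, not Solr's actual MapWriter/PushWriter API.

import java.io.IOException;
import java.io.StringWriter;
import java.io.Writer;

public class PushWriterSketch {

  // Hypothetical callback-style value, loosely modeled on the MapWriter idea.
  interface StreamingMap {
    void writeMap(EntryWriter ew) throws IOException;
  }

  interface EntryWriter {
    void put(String key, Object value) throws IOException;
  }

  // A writer that understands streaming values and pulls entries via the callback.
  static void streamingAwareWriter(Writer out, Object value) throws IOException {
    if (value instanceof StreamingMap) {
      out.write("{");
      final boolean[] first = {true};
      ((StreamingMap) value).writeMap((k, v) -> {
        if (!first[0]) out.write(",");
        first[0] = false;
        out.write("\"" + k + "\":\"" + v + "\"");
      });
      out.write("}");
    } else {
      out.write(String.valueOf(value));
    }
  }

  // A writer that only knows plain values: the streamed content is never produced.
  static void nonStreamingWriter(Writer out, Object value) throws IOException {
    // Never calls writeMap(), so a StreamingMap renders as a useless toString().
    out.write("<value>" + value + "</value>");
  }

  public static void main(String[] args) throws IOException {
    // A producer that streams its entries instead of materializing them (made-up data).
    StreamingMap terms = ew -> {
      ew.put("lucene", 6252);
      ew.put("solr", 1234);
    };

    StringWriter a = new StringWriter();
    streamingAwareWriter(a, terms);
    System.out.println("streaming-aware writer: " + a); // real entries

    StringWriter b = new StringWriter();
    nonStreamingWriter(b, terms);
    System.out.println("non-streaming writer:   " + b); // object hash, no entries
  }
}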



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (LUCENE-7934) PlanetObject Interface

2017-08-20 Thread Karl Wright (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7934?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karl Wright resolved LUCENE-7934.
-
   Resolution: Fixed
Fix Version/s: 7.1
   master (8.0)
   6.7

> PlanetObject Interface
> --
>
> Key: LUCENE-7934
> URL: https://issues.apache.org/jira/browse/LUCENE-7934
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/spatial3d
>Reporter: Ignacio Vera
>Assignee: Karl Wright
> Fix For: 6.7, master (8.0), 7.1
>
> Attachments: LUCENE-7934.patch
>
>
> Hi [~daddywri],
> I propose to add a new interface called PlanetObject, which all shapes should 
> implement. It is actually extracted from the class BasePlanetObject. The 
> motivation is that currently the method getPlanetModel() is not visible, so 
> there is no way to know which PlanetModel a shape belongs to. 
> The side effect of this change is that the constructors for composite shapes 
> change, as they need to be created with a PlanetModel. I think this is correct, 
> as we can then check the planet model when adding a shape and make sure all 
> objects in a composite belong to the same planet model.
> In addition, we check that two shapes belong to the same planet model when 
> calling getRelationship(GeoShape geoShape).  
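
As a rough sketch of the proposal, the interface and the composite-constructor check could look like the following. These are simplified stand-in types invented for the example; only the PlanetObject, PlanetModel, and GeoShape names come from the issue, and the real spatial3d classes have different members.

import java.util.ArrayList;
import java.util.List;

public class PlanetObjectSketch {

  // Stand-in for a planet model (e.g. sphere vs. an ellipsoid).
  static final class PlanetModel {
    final String name;
    PlanetModel(String name) { this.name = name; }
  }

  // The proposed interface: every shape can report its planet model.
  interface PlanetObject {
    PlanetModel getPlanetModel();
  }

  // Stand-in for a GeoShape that now also exposes its planet model.
  interface GeoShape extends PlanetObject {
    boolean isWithin(double lat, double lon);
  }

  // Composite shape: constructed with a model, and rejects shapes built on another one.
  static final class CompositeShape implements GeoShape {
    private final PlanetModel planetModel;
    private final List<GeoShape> shapes = new ArrayList<>();

    CompositeShape(PlanetModel planetModel) {
      this.planetModel = planetModel;
    }

    void addShape(GeoShape shape) {
      // The check described in the issue: all members must share one planet model
      // (reference equality is a simplification for this sketch).
      if (shape.getPlanetModel() != planetModel) {
        throw new IllegalArgumentException("Shape belongs to a different planet model");
      }
      shapes.add(shape);
    }

    @Override public PlanetModel getPlanetModel() { return planetModel; }

    @Override public boolean isWithin(double lat, double lon) {
      return shapes.stream().anyMatch(s -> s.isWithin(lat, lon));
    }
  }

  public static void main(String[] args) {
    PlanetModel sphere = new PlanetModel("sphere");
    CompositeShape composite = new CompositeShape(sphere);
    composite.addShape(new GeoShape() {
      @Override public PlanetModel getPlanetModel() { return sphere; }
      @Override public boolean isWithin(double lat, double lon) { return lat > 0; }
    });
    System.out.println(composite.isWithin(10, 20)); // true
  }
}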



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7934) PlanetObject Interface

2017-08-20 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7934?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16134409#comment-16134409
 ] 

ASF subversion and git services commented on LUCENE-7934:
-

Commit 75ada53802e40df66abea8aa9932e74ce7e0a4c4 in lucene-solr's branch 
refs/heads/branch_7x from [~kwri...@metacarta.com]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=75ada53 ]

LUCENE-7934: Add planet model interface.


> PlanetObject Interface
> --
>
> Key: LUCENE-7934
> URL: https://issues.apache.org/jira/browse/LUCENE-7934
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/spatial3d
>Reporter: Ignacio Vera
>Assignee: Karl Wright
> Attachments: LUCENE-7934.patch
>
>
> Hi [~daddywri],
> I propose to add a new interface called PlanetObject, which all shapes should 
> implement. It is actually extracted from the class BasePlanetObject. The 
> motivation is that currently the method getPlanetModel() is not visible, so 
> there is no way to know which PlanetModel a shape belongs to. 
> The side effect of this change is that the constructors for composite shapes 
> change, as they need to be created with a PlanetModel. I think this is correct, 
> as we can then check the planet model when adding a shape and make sure all 
> objects in a composite belong to the same planet model.
> In addition, we check that two shapes belong to the same planet model when 
> calling getRelationship(GeoShape geoShape).  



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7934) PlanetObject Interface

2017-08-20 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7934?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16134408#comment-16134408
 ] 

ASF subversion and git services commented on LUCENE-7934:
-

Commit 030b395ff83ba4e2f99ebbc38a7223c1b230b964 in lucene-solr's branch 
refs/heads/branch_6x from [~kwri...@metacarta.com]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=030b395 ]

LUCENE-7934: Add planet model interface.


> PlanetObject Interface
> --
>
> Key: LUCENE-7934
> URL: https://issues.apache.org/jira/browse/LUCENE-7934
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/spatial3d
>Reporter: Ignacio Vera
>Assignee: Karl Wright
> Attachments: LUCENE-7934.patch
>
>
> Hi [~daddywri],
> I propose to add a new interface called PlanetObject, which all shapes should 
> implement. It is actually extracted from the class BasePlanetObject. The 
> motivation is that currently the method getPlanetModel() is not visible, so 
> there is no way to know which PlanetModel a shape belongs to. 
> The side effect of this change is that the constructors for composite shapes 
> change, as they need to be created with a PlanetModel. I think this is correct, 
> as we can then check the planet model when adding a shape and make sure all 
> objects in a composite belong to the same planet model.
> In addition, we check that two shapes belong to the same planet model when 
> calling getRelationship(GeoShape geoShape).  



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-11173) Add TermsComponent support for Points fields

2017-08-20 Thread Yonik Seeley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11173?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yonik Seeley resolved SOLR-11173.
-
   Resolution: Fixed
Fix Version/s: 7.0

> Add TermsComponent support for Points fields
> 
>
> Key: SOLR-11173
> URL: https://issues.apache.org/jira/browse/SOLR-11173
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public (Default Security Level. Issues are Public) 
>Reporter: Steve Rowe
>  Labels: numeric-tries-to-points
> Fix For: 7.0
>
> Attachments: SOLR-11173.patch, SOLR-11173.patch, SOLR-11173.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (SOLR-11173) Add TermsComponent support for Points fields

2017-08-20 Thread Yonik Seeley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11173?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yonik Seeley reassigned SOLR-11173:
---

Assignee: Yonik Seeley

> Add TermsComponent support for Points fields
> 
>
> Key: SOLR-11173
> URL: https://issues.apache.org/jira/browse/SOLR-11173
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public (Default Security Level. Issues are Public) 
>Reporter: Steve Rowe
>Assignee: Yonik Seeley
>  Labels: numeric-tries-to-points
> Fix For: 7.0
>
> Attachments: SOLR-11173.patch, SOLR-11173.patch, SOLR-11173.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7934) PlanetObject Interface

2017-08-20 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7934?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16134407#comment-16134407
 ] 

ASF subversion and git services commented on LUCENE-7934:
-

Commit 94b695e672b88adf74f02ecc083925ceb7b772e9 in lucene-solr's branch 
refs/heads/master from [~kwri...@metacarta.com]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=94b695e ]

LUCENE-7934: Add planet model interface.


> PlanetObject Interface
> --
>
> Key: LUCENE-7934
> URL: https://issues.apache.org/jira/browse/LUCENE-7934
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/spatial3d
>Reporter: Ignacio Vera
>Assignee: Karl Wright
> Attachments: LUCENE-7934.patch
>
>
> Hi [~daddywri],
> I propose to add a new interface called PlanetObject, which all shapes should 
> implement. It is actually extracted from the class BasePlanetObject. The 
> motivation is that currently the method getPlanetModel() is not visible, so 
> there is no way to know which PlanetModel a shape belongs to. 
> The side effect of this change is that the constructors for composite shapes 
> change, as they need to be created with a PlanetModel. I think this is correct, 
> as we can then check the planet model when adding a shape and make sure all 
> objects in a composite belong to the same planet model.
> In addition, we check that two shapes belong to the same planet model when 
> calling getRelationship(GeoShape geoShape).  



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7933) LongBitSet can't have Long size

2017-08-20 Thread Won Jonghoon (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7933?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16134402#comment-16134402
 ] 

Won Jonghoon commented on LUCENE-7933:
--

I agree with it.


It seems we just need to add size-checking logic.








- Original Message -


From: Michael McCandless (JIRA) 
To: 
Date: 17.08.20 18:46 GMT +0900
Subject: [jira] [Commented] (LUCENE-7933) LongBitSet can't have Long size


[ 
[1]https://issues.apache.org/jira/browse/LUCENE-7933?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16134359#comment-16134359
 ] 

Michael McCandless commented on LUCENE-7933:


Let's just add a check in the ctor and throw {{IllegalArgumentException}} if 
the requested {{numBits}} is too large?




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)





[1] 
https://issues.apache.org/jira/browse/LUCENE-7933?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16134359#comment-16134359
[2] https://issues.apache.org/jira/browse/LUCENE-7933



> LongBitSet can't have Long size
> 
>
> Key: LUCENE-7933
> URL: https://issues.apache.org/jira/browse/LUCENE-7933
> Project: Lucene - Core
>  Issue Type: Bug
>Affects Versions: 6.6
>Reporter: Won Jonghoon
>Priority: Trivial
>
> private final long[] bits; // Array of longs holding the bits 
> ===> bits.length is too small for a bit count close to Long.MAX_VALUE,
> so you cannot call "LongBitSet.set(Long.MAX_VALUE - 1)"
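
A rough sketch of the kind of constructor guard suggested above follows. The class name, the long[]-length limit, and the messages are illustrative assumptions for the example, not the actual Lucene fix.

// Illustrative sketch only. Assumption for the example: a long[] can hold at most
// Integer.MAX_VALUE words of 64 bits each, so a numBits anywhere near
// Long.MAX_VALUE cannot be represented and is rejected up front.
public class GuardedLongBitSet {
  private final long[] bits;
  private final long numBits;

  public GuardedLongBitSet(long numBits) {
    // 64 bits per long word; round up.
    long numWords = (numBits + 63) >>> 6;
    if (numBits < 0 || numWords > Integer.MAX_VALUE) {
      throw new IllegalArgumentException(
          "numBits too large for a long[]-backed bit set: " + numBits);
    }
    this.numBits = numBits;
    this.bits = new long[(int) numWords];
  }

  public void set(long index) {
    if (index < 0 || index >= numBits) {
      throw new IndexOutOfBoundsException("index=" + index + ", numBits=" + numBits);
    }
    int word = (int) (index >>> 6);
    bits[word] |= 1L << index; // shift amount is taken mod 64 for longs
  }

  public static void main(String[] args) {
    new GuardedLongBitSet(128).set(127);       // fine
    try {
      new GuardedLongBitSet(Long.MAX_VALUE);   // rejected in the constructor
    } catch (IllegalArgumentException expected) {
      System.out.println(expected.getMessage());
    }
  }
}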



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



RE: Release a 6.6.1

2017-08-20 Thread Uwe Schindler
Hi,

 

I just noticed that our Hadoop friends released Hadoop 2.7.4. This fixes the 
stupid Java 9 bug in their static initializer (StringIndexOutOfBounds). So I’d 
like to also get https://issues.apache.org/jira/browse/SOLR-11261 in. If 
Jenkins is happy on 7.x and master, this should be easy.

 

If you think it’s too risky (Hadoop 2.7.2 -> 2.7.4), we can live with the 
workaround in Lucene 6.6.1! But the workaround is really hacky: It changes the 
“java.version” system property temporarily on Java 9 while initializing Hadoop, 
which is not something you should ever do!
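
For illustration, the kind of temporary property swap described above looks roughly like this. hadoopInit() and the faked version string are placeholders, not the actual Solr/Hadoop code; the restore-in-finally pattern is the essential part, and doing this at all is the "hacky" bit.

public class JavaVersionWorkaroundSketch {

  static void initHadoopWithVersionWorkaround() {
    String realVersion = System.getProperty("java.version");
    try {
      // Pretend to be a Java 8 style version string while the old Hadoop
      // static initializer parses "java.version".
      System.setProperty("java.version", "1.8.0");
      hadoopInit(); // placeholder for the Hadoop-touching initialization
    } finally {
      // Always restore the real value, even if initialization fails.
      System.setProperty("java.version", realVersion);
    }
  }

  private static void hadoopInit() {
    // Placeholder: in the real scenario this is where Hadoop's static
    // initializer would run and parse the version string.
    System.out.println("initializing with java.version=" + System.getProperty("java.version"));
  }

  public static void main(String[] args) {
    initHadoopWithVersionWorkaround();
    System.out.println("restored java.version=" + System.getProperty("java.version"));
  }
}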

 

Uwe

 

-

Uwe Schindler

Achterdiek 19, D-28357 Bremen

http://www.thetaphi.de  

eMail: u...@thetaphi.de

 

From: Uwe Schindler [mailto:u...@thetaphi.de] 
Sent: Sunday, August 20, 2017 12:53 PM
To: dev@lucene.apache.org
Subject: RE: Release a 6.6.1

 

Hi,

 

I need to backport SOLR-10966 to branch 6.6, otherwise Jenkins does not pass 
with Java 9.

 

Uwe

 

-

Uwe Schindler

Achterdiek 19, D-28357 Bremen

http://www.thetaphi.de  

eMail: u...@thetaphi.de  

 

From: Uwe Schindler [mailto:u...@thetaphi.de] 
Sent: Saturday, August 19, 2017 12:00 AM
To: dev@lucene.apache.org  
Subject: Re: Release a 6.6.1

 

Hi,

I enabled Jenkins jobs on . ASF was active already.

Uwe

On 18 August 2017 at 23:34:23 MESZ, Varun Thacker wrote:

From the bug fixes in Lucene 7.0, do we need to backport any of these issues:
LUCENE-7859 / LUCENE-7871 / LUCENE-7914?

 

I plan on backporting these three Solr fixes on Sunday 

 

SOLR-10698

SOLR-10719

SOLR-11228

 

looking through the 7.0 bug fixes these two look important to get in as well :

 

SOLR-10983

SOLR-9262

 

So if no one get's to it I'll try backporting them as well 

 

Can someone please enable Jenkins on the branch again?

 

 

On Thu, Aug 17, 2017 at 3:18 PM, Erick Erickson wrote:

Right, that was the original note before we decided to backport a
bunch of other stuff and I decided it made no sense to omit this one.
All that has to happen is remove the " (note, not in 7.0, is in 7.1)"
bits since it's in 6.6, 6.x, 7.0, 7.1 and master.

Good catch!




On Thu, Aug 17, 2017 at 3:10 PM, Varun Thacker wrote:
> Should I then go remove the note part from the CHANGES entry in branch_6_6 ?
>
> * SOLR-11177: CoreContainer.load needs to send lazily loaded core
> descriptors to the proper list rather than send
>   them all to the transient lists. (Erick Erickson) (note, not in 7.0, is in
> 7.1)
>
> I see a commit for this in branch_7_0
>
> Commit c73b5429b722b09b9353ec82627a35e2b864b823 in lucene-solr's branch
> refs/heads/branch_7_0 from Erick
> [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=c73b542 ]
>
>
>
> On Thu, Aug 17, 2017 at 2:48 PM, Erick Erickson wrote:
>>
>> Well, it is in 7.0. Everything I moved to 6.6.1 is also in 7.0, or should
>> be.
>>
>> On Thu, Aug 17, 2017 at 2:31 PM, Varun Thacker wrote:
>> > Hi Erick,
>> >
>> > I was going through the CHANGES file from the 6_6 branch and just
>> > curious
>> > why are we not planning on putting SOLR-11177 in 7.0 ?
>> >
>> > On Thu, Aug 17, 2017 at 7:45 AM, Erick Erickson wrote:
>> >>
>> >> OK, I'm done with my changes for 7.0, I think Varun might have a few
>> >> too.
>> >>
>> >> And things didn't melt down overnight so...
>> >>
>> >> > On Wed, Aug 16, 2017 at 12:25 PM Anshum Gupta wrote:
>> >> > +1 on getting the fixes into 7.0 if you are confident with those, and
>> >> > if
>> >> > they are a part of 6.6.1.
>> >> >
>> >> > Thanks for taking care of this Erick.
>> >> >
>> >> > On Wed, Aug 16, 2017 at 12:24 PM Erick Erickson wrote:
>> >> >>
>> >> >> FYI:
>> >> >>
>> >> >> I'll be backporting the following to SOLR 7.0 today:
>> >> >>
>> >> >> SOLR-11024: ParallelStream should set the StreamContext when
>> >> >> constructing SolrStreams:
>> >> >> SOLR-11177: CoreContainer.load needs to send lazily loaded core
>> >> >> descriptors to the proper list rather than send them all to the
>> >> >> transient lists.
>> >> >> SOLR-11122: Creating a core should write a core.properties file
>> >> >> first
>> >> >> and clean up on failure
>> >> >>
>> >> >> and those as well as several others to 6.6.1.
>> >> >>
>> >> >> Since some of these depend on others, I need to add them in a
>> >> >> specific
>> >> >> order. I intend to run minimal tests for each JIRA before pushing,
>> >> >> then when they are all in place go through the full test 
