[jira] [Commented] (SOLR-10806) Solr Replica goes down with NumberFormatException: Invalid shift value (64) in prefixCoded bytes (is encoded value really an INT?)

2017-06-02 Thread Ishan Chattopadhyaya (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10806?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16035838#comment-16035838
 ] 

Ishan Chattopadhyaya commented on SOLR-10806:
-

Which Solr version is this observed on? 6.3.1 is not a released version.

> Solr Replica goes down with NumberFormatException: Invalid shift value (64) 
> in prefixCoded bytes (is encoded value really an INT?)
> --
>
> Key: SOLR-10806
> URL: https://issues.apache.org/jira/browse/SOLR-10806
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 6.3.1
>Reporter: Sachin Goyal
>
> Our Solr nodes go down within 20-30 minutes of indexing.
> It does not seem that the load rate is too high, because the exception in the
> logs points to a data problem:
> {color:darkred}
> INFO  - 2017-06-02 23:21:19.094; org.apache.solr.core.SolrCore; 
> \[node-instances_shard2_replica3\] Registered new searcher 
> Searcher@6740879c\[node-instances_shard2_replica3\] 
> main{ExitableDirectoryReader(UninvertingDirectoryReader(Uninverting(_ne(6.3.0):C200591/8616:delGen=20)
>  Uninverting(_wx(6.3.0):C72132/697:delGen=5) 
> Uninverting(_y0(6.3.0):c5798/27:delGen=3) 
> Uninverting(_yv(6.3.0):c10935/827:delGen=2) 
> Uninverting(_z4(6.3.0):C4163/2277:delGen=1)))}
> ERROR - 2017-06-02 23:21:19.105; org.apache.solr.core.CoreContainer; Error 
> waiting for SolrCore to be created
> java.util.concurrent.ExecutionException: 
> org.apache.solr.common.SolrException: Unable to create core 
> \[node-instances_shard2_replica3\]
> at java.util.concurrent.FutureTask.report(FutureTask.java:122)
> at java.util.concurrent.FutureTask.get(FutureTask.java:192)
> at 
> org.apache.solr.core.CoreContainer.lambda$load$1(CoreContainer.java:526)
> at 
> org.apache.solr.core.CoreContainer$$Lambda$38/199449817.run(Unknown Source)
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:229)
> at 
> org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor$$Lambda$9/1611272577.run(Unknown
>  Source)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: org.apache.solr.common.SolrException: Unable to create core 
> \[node-instances_shard2_replica3\]
> at org.apache.solr.core.CoreContainer.create(CoreContainer.java:855)
> at 
> org.apache.solr.core.CoreContainer.lambda$load$0(CoreContainer.java:498)
> at 
> org.apache.solr.core.CoreContainer$$Lambda$37/1402433372.call(Unknown Source)
> ... 6 more
> Caused by: java.lang.NumberFormatException: Invalid shift value (64) in 
> prefixCoded bytes (is encoded value really an INT?)
> at 
> org.apache.lucene.util.LegacyNumericUtils.getPrefixCodedLongShift(LegacyNumericUtils.java:163)
> at 
> org.apache.lucene.util.LegacyNumericUtils$1.accept(LegacyNumericUtils.java:392)
> at 
> org.apache.lucene.index.FilteredTermsEnum.next(FilteredTermsEnum.java:232)
> at org.apache.lucene.index.Terms.getMax(Terms.java:169)
> at 
> org.apache.lucene.util.LegacyNumericUtils.getMaxLong(LegacyNumericUtils.java:504)
> at 
> org.apache.solr.update.VersionInfo.getMaxVersionFromIndex(VersionInfo.java:233)
> at 
> org.apache.solr.update.UpdateLog.seedBucketsWithHighestVersion(UpdateLog.java:1584)
> at 
> org.apache.solr.update.UpdateLog.seedBucketsWithHighestVersion(UpdateLog.java:1610)
> at org.apache.solr.core.SolrCore.seedVersionBuckets(SolrCore.java:949)
> at org.apache.solr.core.SolrCore.<init>(SolrCore.java:931)
> at org.apache.solr.core.SolrCore.<init>(SolrCore.java:776)
> at org.apache.solr.core.CoreContainer.create(CoreContainer.java:842)
> ... 8 more
> {color}
> It does not seem right that the Solr node itself should go down for such a
> problem.
> # Error waiting for SolrCore to be created
> java.util.concurrent.ExecutionException: 
> org.apache.solr.common.SolrException: Unable to create core
> # Unable to create core
> # NumberFormatException: Invalid shift value (64) in prefixCoded bytes (is 
> encoded value really an INT?)
> i.e., core creation fails because there was some confusion between long and
> integer.
> If there is a data issue, it should be communicated with an exception during
> ingestion.
> \\
> \\
> *UPDATE*:
> Another 

[jira] [Updated] (SOLR-10806) Solr Replica goes down with NumberFormatException: Invalid shift value (64) in prefixCoded bytes (is encoded value really an INT?)

2017-06-02 Thread Sachin Goyal (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10806?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sachin Goyal updated SOLR-10806:

Description: 
Our Solr nodes go down within 20-30 minutes of indexing.
It does not seem that the load rate is too high, because the exception in the
logs points to a data problem:

{color:darkred}
INFO  - 2017-06-02 23:21:19.094; org.apache.solr.core.SolrCore; 
\[node-instances_shard2_replica3\] Registered new searcher 
Searcher@6740879c\[node-instances_shard2_replica3\] 
main{ExitableDirectoryReader(UninvertingDirectoryReader(Uninverting(_ne(6.3.0):C200591/8616:delGen=20)
 Uninverting(_wx(6.3.0):C72132/697:delGen=5) 
Uninverting(_y0(6.3.0):c5798/27:delGen=3) 
Uninverting(_yv(6.3.0):c10935/827:delGen=2) 
Uninverting(_z4(6.3.0):C4163/2277:delGen=1)))}
ERROR - 2017-06-02 23:21:19.105; org.apache.solr.core.CoreContainer; Error 
waiting for SolrCore to be created
java.util.concurrent.ExecutionException: org.apache.solr.common.SolrException: 
Unable to create core \[node-instances_shard2_replica3\]
at java.util.concurrent.FutureTask.report(FutureTask.java:122)
at java.util.concurrent.FutureTask.get(FutureTask.java:192)
at 
org.apache.solr.core.CoreContainer.lambda$load$1(CoreContainer.java:526)
at org.apache.solr.core.CoreContainer$$Lambda$38/199449817.run(Unknown 
Source)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:229)
at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor$$Lambda$9/1611272577.run(Unknown
 Source)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.solr.common.SolrException: Unable to create core 
\[node-instances_shard2_replica3\]
at org.apache.solr.core.CoreContainer.create(CoreContainer.java:855)
at 
org.apache.solr.core.CoreContainer.lambda$load$0(CoreContainer.java:498)
at 
org.apache.solr.core.CoreContainer$$Lambda$37/1402433372.call(Unknown Source)
... 6 more
Caused by: java.lang.NumberFormatException: Invalid shift value (64) in 
prefixCoded bytes (is encoded value really an INT?)
at 
org.apache.lucene.util.LegacyNumericUtils.getPrefixCodedLongShift(LegacyNumericUtils.java:163)
at 
org.apache.lucene.util.LegacyNumericUtils$1.accept(LegacyNumericUtils.java:392)
at 
org.apache.lucene.index.FilteredTermsEnum.next(FilteredTermsEnum.java:232)
at org.apache.lucene.index.Terms.getMax(Terms.java:169)
at 
org.apache.lucene.util.LegacyNumericUtils.getMaxLong(LegacyNumericUtils.java:504)
at 
org.apache.solr.update.VersionInfo.getMaxVersionFromIndex(VersionInfo.java:233)
at 
org.apache.solr.update.UpdateLog.seedBucketsWithHighestVersion(UpdateLog.java:1584)
at 
org.apache.solr.update.UpdateLog.seedBucketsWithHighestVersion(UpdateLog.java:1610)
at org.apache.solr.core.SolrCore.seedVersionBuckets(SolrCore.java:949)
at org.apache.solr.core.SolrCore.<init>(SolrCore.java:931)
at org.apache.solr.core.SolrCore.<init>(SolrCore.java:776)
at org.apache.solr.core.CoreContainer.create(CoreContainer.java:842)
... 8 more
{color}

It does not seem right that the Solr node itself should go down for such a problem.
# Error waiting for SolrCore to be created
java.util.concurrent.ExecutionException: org.apache.solr.common.SolrException: 
Unable to create core
# Unable to create core
# NumberFormatException: Invalid shift value (64) in prefixCoded bytes (is 
encoded value really an INT?)

i.e., core creation fails because there was some confusion between long and
integer.
If there is a data issue, it should be communicated with an exception during
ingestion.

\\
\\
*UPDATE*:
Another issue I see with the above problem is that the Solr cluster is
completely inaccessible.
The Solr UI is also not coming up. I restarted the Solr servers and they refuse
to recover.
I am not even able to delete the collections and create them afresh.
It seems the only way out is to do an *rm -rf* and re-install.

Note that it is not network-related, as I can ssh to the Solr machines and
send messages to other Solr machines using nc.
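For context on the exception itself: Lucene's legacy prefix coding marks long-encoded terms with a first byte starting at SHIFT_START_LONG (0x20) and int-encoded terms with SHIFT_START_INT (0x60). The sketch below is a minimal stand-in (not Lucene's actual code) that mirrors the check in org.apache.lucene.util.LegacyNumericUtils#getPrefixCodedLongShift, showing why an int-encoded term read back as a long produces exactly "Invalid shift value (64)":

```java
// Minimal sketch mirroring the LegacyNumericUtils shift check (not the real class).
public class PrefixCodedShiftSketch {

    // Constants as defined in org.apache.lucene.util.LegacyNumericUtils:
    static final int SHIFT_START_LONG = 0x20; // first byte of a long-coded term at shift 0
    static final int SHIFT_START_INT  = 0x60; // first byte of an int-coded term at shift 0

    // Interpret a term's first byte as a *long* shift, as the version lookup does.
    static int getLongShift(int firstByte) {
        int shift = firstByte - SHIFT_START_LONG;
        if (shift > 63 || shift < 0) {
            throw new NumberFormatException("Invalid shift value (" + shift
                + ") in prefixCoded bytes (is encoded value really an INT?)");
        }
        return shift;
    }

    public static void main(String[] args) {
        // A long-coded term at shift 0 decodes fine.
        System.out.println(getLongShift(SHIFT_START_LONG)); // prints 0
        // An int-coded term at shift 0 gives 0x60 - 0x20 = 64 -> the exception in the log.
        try {
            getLongShift(SHIFT_START_INT);
        } catch (NumberFormatException e) {
            System.out.println(e.getMessage());
        }
    }
}
```

This is consistent with the stack trace: getMaxVersionFromIndex walks the version field's terms expecting long-coded bytes, so a field that was indexed as an int triggers the shift-64 failure at core load rather than at ingestion time.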

  was:
Our Solr nodes go down within 20-30 minutes of indexing.
It does not seem that load-rate is too high because the exception in the logs 
is pointing to a data problem:

{color:darkred}
INFO  - 2017-06-02 23:21:19.094; org.apache.solr.core.SolrCore; 
\[node-instances_shard2_replica3\] Registered new searcher 
Searcher@6740879c\[node-instances_shard2_replica3\] 

[jira] [Commented] (SOLR-8437) Remove outdated RAMDirectory comment from example solrconfigs

2017-06-02 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8437?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16035831#comment-16035831
 ] 

ASF subversion and git services commented on SOLR-8437:
---

Commit c65523af1839d867cbebf68a9f363da08e2b811d in lucene-solr's branch 
refs/heads/branch_6x from [~varunthacker]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=c65523a ]

SOLR-8437: Improve RAMDirectory details in sample solrconfig files

(cherry picked from commit 2c9f860)


> Remove outdated RAMDirectory comment from example solrconfigs
> -
>
> Key: SOLR-8437
> URL: https://issues.apache.org/jira/browse/SOLR-8437
> Project: Solr
>  Issue Type: Improvement
>Reporter: Varun Thacker
>Assignee: Varun Thacker
>Priority: Minor
> Fix For: 6.7, 7.0
>
> Attachments: SOLR-8437.patch
>
>
> There is a comment here in the solrconfig.xml file -
> {code}
>solr.RAMDirectoryFactory is memory based, not
>persistent, and doesn't work with replication.
> {code}
> This is outdated after SOLR-3911. I tried recovering a replica manually as
> well when they were using RAMDirectoryFactory, and it worked just fine.
> So we should just get rid of that comment from all the example configs
> shipped with Solr.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-8437) Remove outdated RAMDirectory comment from example solrconfigs

2017-06-02 Thread Varun Thacker (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8437?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Thacker resolved SOLR-8437.
-
   Resolution: Fixed
Fix Version/s: (was: 6.0)
   (was: 5.5)
   7.0
   6.7

> Remove outdated RAMDirectory comment from example solrconfigs
> -
>
> Key: SOLR-8437
> URL: https://issues.apache.org/jira/browse/SOLR-8437
> Project: Solr
>  Issue Type: Improvement
>Reporter: Varun Thacker
>Assignee: Varun Thacker
>Priority: Minor
> Fix For: 6.7, 7.0
>
> Attachments: SOLR-8437.patch
>
>
> There is a comment here in the solrconfig.xml file -
> {code}
>solr.RAMDirectoryFactory is memory based, not
>persistent, and doesn't work with replication.
> {code}
> This is outdated after SOLR-3911. I tried recovering a replica manually as
> well when they were using RAMDirectoryFactory, and it worked just fine.
> So we should just get rid of that comment from all the example configs
> shipped with Solr.






[jira] [Commented] (SOLR-8437) Remove outdated RAMDirectory comment from example solrconfigs

2017-06-02 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8437?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16035826#comment-16035826
 ] 

ASF subversion and git services commented on SOLR-8437:
---

Commit 2c9f8604c2a8a82d53c125a5af4ad6326df311ac in lucene-solr's branch 
refs/heads/master from [~varunthacker]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=2c9f860 ]

SOLR-8437: Improve RAMDirectory details in sample solrconfig files


> Remove outdated RAMDirectory comment from example solrconfigs
> -
>
> Key: SOLR-8437
> URL: https://issues.apache.org/jira/browse/SOLR-8437
> Project: Solr
>  Issue Type: Improvement
>Reporter: Varun Thacker
>Assignee: Varun Thacker
>Priority: Minor
> Fix For: 5.5, 6.0
>
> Attachments: SOLR-8437.patch
>
>
> There is a comment here in the solrconfig.xml file -
> {code}
>solr.RAMDirectoryFactory is memory based, not
>persistent, and doesn't work with replication.
> {code}
> This is outdated after SOLR-3911. I tried recovering a replica manually as
> well when they were using RAMDirectoryFactory, and it worked just fine.
> So we should just get rid of that comment from all the example configs
> shipped with Solr.






[jira] [Updated] (SOLR-8437) Remove outdated RAMDirectory comment from example solrconfigs

2017-06-02 Thread Varun Thacker (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8437?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Thacker updated SOLR-8437:

Attachment: SOLR-8437.patch

Simple patch. I'll commit this shortly.

> Remove outdated RAMDirectory comment from example solrconfigs
> -
>
> Key: SOLR-8437
> URL: https://issues.apache.org/jira/browse/SOLR-8437
> Project: Solr
>  Issue Type: Improvement
>Reporter: Varun Thacker
>Assignee: Varun Thacker
>Priority: Minor
> Fix For: 5.5, 6.0
>
> Attachments: SOLR-8437.patch
>
>
> There is a comment here in the solrconfig.xml file -
> {code}
>solr.RAMDirectoryFactory is memory based, not
>persistent, and doesn't work with replication.
> {code}
> This is outdated after SOLR-3911. I tried recovering a replica manually as
> well when they were using RAMDirectoryFactory, and it worked just fine.
> So we should just get rid of that comment from all the example configs
> shipped with Solr.






[JENKINS-EA] Lucene-Solr-6.x-Windows (64bit/jdk-9-ea+171) - Build # 930 - Still Unstable!

2017-06-02 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Windows/930/
Java: 64bit/jdk-9-ea+171 -XX:-UseCompressedOops -XX:+UseParallelGC

1 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.TestCloudRecovery

Error Message:
Could not remove the following files (in the order of attempts):
C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\build\solr-core\test\J0\temp\solr.cloud.TestCloudRecovery_7134BD76D2D29FA7-001\tempDir-001\node2\collection1_shard2_replica1\data\tlog\tlog.001:
 java.nio.file.FileSystemException: 
C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\build\solr-core\test\J0\temp\solr.cloud.TestCloudRecovery_7134BD76D2D29FA7-001\tempDir-001\node2\collection1_shard2_replica1\data\tlog\tlog.001:
 The process cannot access the file because it is being used by another 
process. 
C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\build\solr-core\test\J0\temp\solr.cloud.TestCloudRecovery_7134BD76D2D29FA7-001\tempDir-001\node2\collection1_shard2_replica1\data\tlog:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\build\solr-core\test\J0\temp\solr.cloud.TestCloudRecovery_7134BD76D2D29FA7-001\tempDir-001\node2\collection1_shard2_replica1\data\tlog

C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\build\solr-core\test\J0\temp\solr.cloud.TestCloudRecovery_7134BD76D2D29FA7-001\tempDir-001\node2\collection1_shard2_replica1\data:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\build\solr-core\test\J0\temp\solr.cloud.TestCloudRecovery_7134BD76D2D29FA7-001\tempDir-001\node2\collection1_shard2_replica1\data

C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\build\solr-core\test\J0\temp\solr.cloud.TestCloudRecovery_7134BD76D2D29FA7-001\tempDir-001\node2\collection1_shard2_replica1:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\build\solr-core\test\J0\temp\solr.cloud.TestCloudRecovery_7134BD76D2D29FA7-001\tempDir-001\node2\collection1_shard2_replica1

C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\build\solr-core\test\J0\temp\solr.cloud.TestCloudRecovery_7134BD76D2D29FA7-001\tempDir-001\node2\collection1_shard1_replica1\data\tlog\tlog.001:
 java.nio.file.FileSystemException: 
C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\build\solr-core\test\J0\temp\solr.cloud.TestCloudRecovery_7134BD76D2D29FA7-001\tempDir-001\node2\collection1_shard1_replica1\data\tlog\tlog.001:
 The process cannot access the file because it is being used by another 
process. 
C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\build\solr-core\test\J0\temp\solr.cloud.TestCloudRecovery_7134BD76D2D29FA7-001\tempDir-001\node2\collection1_shard1_replica1\data\tlog:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\build\solr-core\test\J0\temp\solr.cloud.TestCloudRecovery_7134BD76D2D29FA7-001\tempDir-001\node2\collection1_shard1_replica1\data\tlog

C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\build\solr-core\test\J0\temp\solr.cloud.TestCloudRecovery_7134BD76D2D29FA7-001\tempDir-001\node2\collection1_shard1_replica1\data:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\build\solr-core\test\J0\temp\solr.cloud.TestCloudRecovery_7134BD76D2D29FA7-001\tempDir-001\node2\collection1_shard1_replica1\data

C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\build\solr-core\test\J0\temp\solr.cloud.TestCloudRecovery_7134BD76D2D29FA7-001\tempDir-001\node2\collection1_shard1_replica1:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\build\solr-core\test\J0\temp\solr.cloud.TestCloudRecovery_7134BD76D2D29FA7-001\tempDir-001\node2\collection1_shard1_replica1

C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\build\solr-core\test\J0\temp\solr.cloud.TestCloudRecovery_7134BD76D2D29FA7-001\tempDir-001\node2:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\build\solr-core\test\J0\temp\solr.cloud.TestCloudRecovery_7134BD76D2D29FA7-001\tempDir-001\node2

C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\build\solr-core\test\J0\temp\solr.cloud.TestCloudRecovery_7134BD76D2D29FA7-001\tempDir-001\node1\collection1_shard2_replica2\data\tlog\tlog.001:
 java.nio.file.FileSystemException: 
C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\build\solr-core\test\J0\temp\solr.cloud.TestCloudRecovery_7134BD76D2D29FA7-001\tempDir-001\node1\collection1_shard2_replica2\data\tlog\tlog.001:
 The process cannot access the file because it is being used by another 
process. 

[jira] [Closed] (SOLR-8650) Alias Collection API can be moved to the CollectionsHandler

2017-06-02 Thread Varun Thacker (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8650?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Thacker closed SOLR-8650.
---
Resolution: Won't Fix

I guess it's unnecessary to move it around.

> Alias Collection API can be moved to the CollectionsHandler
> ---
>
> Key: SOLR-8650
> URL: https://issues.apache.org/jira/browse/SOLR-8650
> Project: Solr
>  Issue Type: Improvement
>Reporter: Varun Thacker
>Assignee: Varun Thacker
>Priority: Minor
> Attachments: SOLR-8650.patch
>
>
> Today the create/delete alias operations are processed by the Overseer. While
> not an expensive operation, there is no real need for it to go to the
> Overseer.
> So we can optimize here and handle the request from the collections handler
> directly.






[jira] [Resolved] (SOLR-8781) Add a ZkStateReader#getClusterProperty method

2017-06-02 Thread Varun Thacker (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8781?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Thacker resolved SOLR-8781.
-
Resolution: Fixed

This was implemented as part of SOLR-9106.

> Add a ZkStateReader#getClusterProperty method
> -
>
> Key: SOLR-8781
> URL: https://issues.apache.org/jira/browse/SOLR-8781
> Project: Solr
>  Issue Type: Improvement
>Reporter: Varun Thacker
>Assignee: Varun Thacker
>Priority: Minor
> Attachments: SOLR-8781.patch
>
>
> Currently we have ZkStateReader#getClusterProps and
> ZkStateReader#setClusterProperty. This doesn't look consistent. Also, most
> use cases want the value of a particular property and not all the properties.
> I propose to add a method in ZkStateReader called getClusterProperty.
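A hedged sketch of the shape such an accessor could take (a hypothetical standalone class, not the real ZkStateReader): a single-key lookup with a typed default, instead of handing callers the whole properties map:

```java
import java.util.Map;

// Hypothetical stand-in for the cluster-properties portion of ZkStateReader.
class ClusterPropsSketch {
    private final Map<String, Object> props;

    ClusterPropsSketch(Map<String, Object> props) {
        this.props = props;
    }

    // Fetch one property, falling back to a caller-supplied default when unset.
    @SuppressWarnings("unchecked")
    <T> T getClusterProperty(String key, T defaultValue) {
        Object value = props.get(key);
        return value == null ? defaultValue : (T) value;
    }
}
```

The main win is that callers no longer need to null-check the whole map or re-implement default handling at every call site.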






[jira] [Commented] (SOLR-10803) Solr should refuse to create Trie*Field instances in 7.0 indices

2017-06-02 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10803?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16035800#comment-16035800
 ] 

Hoss Man commented on SOLR-10803:
-


Quick side digressions...

bq. Maybe also similar stuff to prevent FieldCache usage? ... I have seen that 
Solr 7 allows to merge non-docvalues segments to ones with docvalues using 
uninverter with a special mergepolicy. ...

That feels like a _very_ orthogonal idea (or two?) that should really be
discussed in their own Jiras, since they are broader in scope than just Trie vs
Point. (I'm not familiar enough with what would be involved to even create the
Jiras.)


bq. I'd suggest to also enable DocValues by default for all string/numeric/date 
fields, unless explicitly disabled.

I like this idea -- but it's definitely orthogonal to the topic at hand.

I've spun that off into SOLR-10808




bq. I put the blocker priority since I think it is a better experience if all 
7.x indices can be used with Solr 8, but there is also the possibility of just 
removing Trie*Field in 8.0 and refusing to open any index that would make use 
of those fields, even if they were created in 7.x.

Coincidentally, sarowe & cassandra & I were just talking yesterday about our
concerns that, beyond the "known gaps" in terms of Solr features that work with
Trie fields but not (yet) Point fields (ex: SOLR-9989, SOLR-10503, SOLR-9985,
etc.), a larger concern as we move towards 7.0 is that test coverage of
PointFields in Solr is currently pretty shallow. We don't really have a very
good idea of what does/doesn't work with PointFields, which is disconcerting
for pushing them as the "default" (or "recommended") numeric types in Solr --
let alone forbidding the use of (new) Trie fields as suggested here.

Which is why I've started working on SOLR-10807 -- the current aim is a quick
and dirty way to identify all of the potentially problematic areas as quickly
as possible, by forcing every test to use PointFields instead of TrieFields.
See comments in that Jira for details, but in a nutshell: at this point it's
hard to guess how many features/tests might fail if we cut over to PointFields
-- because a big portion of our tests are using/expecting the 'id' field to be
numeric, and before we can even get to the meat of the test, using a
Point-based numeric as the 'id' field causes all sorts of problems because
they don't have any 'Terms' for updateDocument/deleteDocument.

bq. In addition, the merge policy could also be used to convert Trie* to Point* 
values by first uninverting (if no docvalues on trie) and redindexing the 
fields during merging... (not sure how to do this, but should work somehow).

If we think it's viable to create a MergePolicy (Wrapper) that could convert
Trie fields to Point fields, then my straw-man suggestion would be that in 7.x
we only discourage Trie fields, but not ban them completely -- with some
strongly worded warnings that Trie fields will be completely removed in 8.0,
and any index that uses them will require manual upgrade using a special
conversion tool.

As things stand today, even if that MergePolicy/tool doesn't yet exist when 7.0
comes out, I'd rather say "7.x indexes using Trie fields *MAY* require
reindexing in 8.0, pending possible development of a tool to upgrade Trie
fields to Point fields" than ban Trie fields outright.

(Hell: As things stand today, even if we were confident it would be
_impossible_ to create such a tool/mergepolicy, I'd still rather say "7.x
indexes using Trie fields will *REQUIRE* reindexing in 8.0" than change the
default configsets in 7.0 to use Points, let alone ban new Trie fields.)





> Solr should refuse to create Trie*Field instances in 7.0 indices
> 
>
> Key: SOLR-10803
> URL: https://issues.apache.org/jira/browse/SOLR-10803
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Adrien Grand
>Priority: Blocker
> Fix For: master (7.0)
>
>
> If we want to be able to remove support for legacy numerics from Solr in 8.0, 
> we need to forbid the use of Trie*Field in indices that are created on or 
> after 7.0.






[jira] [Created] (SOLR-10808) Enable DocValues by default for all string/numeric/date fields, unless explicitly disabled

2017-06-02 Thread Hoss Man (JIRA)
Hoss Man created SOLR-10808:
---

 Summary: Enable DocValues by default for all string/numeric/date 
fields, unless explicitly disabled
 Key: SOLR-10808
 URL: https://issues.apache.org/jira/browse/SOLR-10808
 Project: Solr
  Issue Type: Task
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Hoss Man


Spinning this idea off of SOLR-10803 where Uwe suggested it...


bq. I'd suggest to also enable DocValues by default for all string/numeric/date 
fields, unless explicitly disabled.

This would be fairly easy to do -- we just bump up the "schema version" and
change the default for docValues in the affected FieldTypes (or perhaps in
PrimitiveFieldType? Just like OMIT_NORMS? ... Need to think about that a bit
more.)
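A hedged sketch (all names and the version number are hypothetical) of how a version-gated default could work, in the spirit of how omitNorms defaults are already tied to the schema version: docValues defaults to true only for schemas that opt into the newer version, and an explicit setting always wins.

```java
// Hypothetical illustration of a schema-version-gated docValues default.
class DocValuesDefaultSketch {
    // Hypothetical schema version at which the default would flip.
    static final float NEW_SCHEMA_VERSION = 1.7f;

    static boolean docValuesEnabled(float schemaVersion, Boolean explicitSetting) {
        if (explicitSetting != null) {
            return explicitSetting; // an explicit docValues attribute in the schema always wins
        }
        // Older schemas keep the old default (off); newer ones get docValues on.
        return schemaVersion >= NEW_SCHEMA_VERSION;
    }
}
```

Gating on the schema version keeps existing indexes behaving as before, since only schemas that declare the newer version pick up the flipped default.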









[jira] [Commented] (SOLR-10807) Randomize PointFields in all tests unless explicit reason not to

2017-06-02 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10807?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16035798#comment-16035798
 ] 

ASF subversion and git services commented on SOLR-10807:


Commit c76a79b5bb47e9b2903b90b84958c6bdeb043d52 in lucene-solr's branch 
refs/heads/jira/SOLR-10807 from Chris Hostetter
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=c76a79b ]

SOLR-10807: first start at brute forcing PointFields to replace Tries in all 
test schemas to see what features break

lots of missleading failures due to test schemas using numeric uniqueKey fields


>  Randomize PointFields in all tests unless explicit reason not to
> -
>
> Key: SOLR-10807
> URL: https://issues.apache.org/jira/browse/SOLR-10807
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>Assignee: Hoss Man
>Priority: Blocker
> Fix For: master (7.0)
>
> Attachments: core.test.log.txt
>
>
> We need to seriously beef up our testing of PointFields to figure out what 
> Solr features don't currently work with PointFields.
> The existing Trie/Point randomization logic in SolrTestCaseJ4 is a good start 
> -- but only a handful of schema files leverage it.






[jira] [Updated] (SOLR-10807) Randomize PointFields in all tests unless explicit reason not to

2017-06-02 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10807?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man updated SOLR-10807:

Attachment: core.test.log.txt


I've started experimenting with a bulk change to all test schemas to use the
randomization logic -- or more specifically: a slight modification of the
existing randomization logic, because of the issues noted in SOLR-10177.

In a nutshell...

* change every schema, and every test that uses managed schema to create a
numeric field, to use a "randomly" chosen sysprop for defining the
Int/Float/Long/Double/Date FieldType class name to use
** force the "random" logic to always choose points (temporarily)
*** NOTE: this still allows TrieFields for the tests that use 
@SuppressPointFields
* HACK: in "test mode": force PointFields to ignore extra TrieField args (ie: 
precisionStep)
** to minimize number of changes needed to test schemas

(w/o that last HACK, a simple "search and replace" in test schemas wouldn't be 
good enough, because many schemas might have multiple "Integer" Trie fieldTypes 
with diff precisionSteps -- I was looking for a simple brute force way to 
replace every usage of Trie fields with Point fields ... even if it's not 
something we want to commit to master)

Once that was in place, I started running tests and needed to HACK a few more
things to get past some large barriers that became obvious very quickly...

* had to disable some useless code in ExternalFileField to get most schemas to 
load
** should probably commit this change either way
* in SimpleFacets & StatsComponent, I tweaked the error handling to allow 
uninversion of single valued points
** see SOLR-10472 which came after these error checks were added
** tests using multivalued non-DV numerics still fail of course, but a lot of
single-valued ones seem to work now
* we have 48(!) test schemas that use numeric fields for the uniqueKey Field 
(why?!?!?!)
** this very obviously/visibly breaks things like QEC (QueryElevationComponent)
on init - and lots of configs have QEC registered even if the test doesn't care
about it
** more subtly: it also breaks simple things like deleteById (no term to delete 
by IIUC)
** I attempted to brute force these schemas to use "string" for the id, 
expecting it might cause _some_ "false failures" (ie: if a test using one of 
these schemas was depending on numeric order for sorting/range-queries on the 
uniqueKeyField) but in practice there were still lots of confusing failures due 
to tests expecting Numeric types for the 'id' field in returned documents.
** So instead I just made IndexSchema fail fast if someone tries to use a 
PointField for uniqueKey -- this means we're still probably masking some 
*other* PointField related bugs in tests that use these schemas, but my hope 
was that at least the "masking" failures would now be unambiguous.
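The fail-fast guard could look roughly like this -- a plain-Java sketch, not 
the actual IndexSchema code; the method and message are illustrative:

```java
public class UniqueKeyGuard {
  // Reject a Points-based uniqueKey at schema-load time, so the failure is
  // one unambiguous error instead of many confusing downstream ones.
  static void validateUniqueKey(String fieldName, String fieldTypeClass) {
    if (fieldTypeClass.contains("PointField")) {
      throw new IllegalStateException("uniqueKey field (" + fieldName
          + ") can not be configured to use a Points based FieldType: "
          + fieldTypeClass);
    }
  }

  public static void main(String[] args) {
    validateUniqueKey("id", "solr.StrField"); // ok
    try {
      validateUniqueKey("id", "solr.IntPointField");
    } catch (IllegalStateException expected) {
      System.out.println(expected.getMessage());
    }
  }
}
```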

That's as far as I got today -- but I plan to continue to spend a lot of time 
on this next week as well.

I'm attaching my ant output from running _just_ the solr/core tests.  I haven't 
had a chance to dig into the results much -- but at a glance...

{noformat}
Completed [724/724 (145!)] on J0 in 0.00s, 5 tests
...
Tests with failures [seed: DA139D8A7DC075B6] (first 10 out of 207):
...
Tests summary: 724 suites (8 ignored), 2866 tests, 104 suite-level errors, 71 
errors, 34 failures, 1631 ignored (85 assumptions)

$ grep -l "facet on a multivalued PointField without docValues" 
../build/solr-core/test/*.xml
../build/solr-core/test/TEST-org.apache.solr.schema.TestPointFields.xml
../build/solr-core/test/TEST-org.apache.solr.TestRandomDVFaceting.xml
$ grep -l "stats on a multivalued PointField without docValues" 
../build/solr-core/test/*.xml

# no matches to the last grep ... I guess we're not doing much multi-valued 
numeric stats testing???

$ grep -l "nocommit: uniqueKey" ../build/solr-core/test/*.xml | wc -l
119

$ grep -L "nocommit: uniqueKey" ../build/solr-core/test/*.xml | xargs grep -L 
'errors="0" failures="0"'
../build/solr-core/test/TEST-org.apache.solr.cloud.BasicDistributedZk2Test.xml
../build/solr-core/test/TEST-org.apache.solr.cloud.DistribCursorPagingTest.xml
../build/solr-core/test/TEST-org.apache.solr.cloud.DistribJoinFromCollectionTest.xml
../build/solr-core/test/TEST-org.apache.solr.cloud.ForceLeaderTest.xml
../build/solr-core/test/TEST-org.apache.solr.cloud.HttpPartitionTest.xml
../build/solr-core/test/TEST-org.apache.solr.cloud.LeaderFailoverAfterPartitionTest.xml
../build/solr-core/test/TEST-org.apache.solr.cloud.LeaderFailureAfterFreshStartTest.xml
../build/solr-core/test/TEST-org.apache.solr.cloud.LeaderInitiatedRecoveryOnCommitTest.xml
../build/solr-core/test/TEST-org.apache.solr.cloud.RecoveryAfterSoftCommitTest.xml
../build/solr-core/test/TEST-org.apache.solr.cloud.SolrCloudExampleTest.xml
../build/solr-core/test/TEST-org.apache.solr.cloud.TestCloudDeleteByQuery.xml
../build/solr-core/test/TEST-org.apache.solr.cloud.TestCryptoKeys.xml

[jira] [Created] (SOLR-10807) Randomize PointFields in all tests unless explicit reason not to

2017-06-02 Thread Hoss Man (JIRA)
Hoss Man created SOLR-10807:
---

 Summary:  Randomize PointFields in all tests unless explicit 
reason not to
 Key: SOLR-10807
 URL: https://issues.apache.org/jira/browse/SOLR-10807
 Project: Solr
  Issue Type: Task
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Hoss Man
Assignee: Hoss Man
Priority: Blocker
 Fix For: master (7.0)


We need to seriously beef up our testing of PointFields to figure out what Solr 
features don't currently work with PointFields.

The existing Trie/Point randomization logic in SolrTestCaseJ4 is a good start 
-- but only a handful of schema files leverage it.





--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-6.6-Linux (32bit/jdk1.8.0_131) - Build # 54 - Unstable!

2017-06-02 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.6-Linux/54/
Java: 32bit/jdk1.8.0_131 -client -XX:+UseG1GC

1 tests failed.
FAILED:  org.apache.solr.cloud.ShardSplitTest.testSplitWithChaosMonkey

Error Message:
There are still nodes recoverying - waited for 330 seconds

Stack Trace:
java.lang.AssertionError: There are still nodes recoverying - waited for 330 
seconds
at 
__randomizedtesting.SeedInfo.seed([3894735E60AE6BD4:B3B3A08F21A8C050]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:187)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:144)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:139)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.waitForRecoveriesToFinish(AbstractFullDistribZkTestBase.java:856)
at 
org.apache.solr.cloud.ShardSplitTest.testSplitWithChaosMonkey(ShardSplitTest.java:437)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:992)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:967)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 

[jira] [Commented] (LUCENE-7705) Allow CharTokenizer-derived tokenizers and KeywordTokenizer to configure the max token length

2017-06-02 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16035783#comment-16035783
 ] 

ASF subversion and git services commented on LUCENE-7705:
-

Commit e4a43cf59a12ca39eb8278cc2533d409d792185a in lucene-solr's branch 
refs/heads/branch_6x from Erick
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=e4a43cf ]

LUCENE-7705: Allow CharTokenizer-derived tokenizers and KeywordTokenizer to 
configure the max token len (test fix)

(cherry picked from commit 15a8a24)


> Allow CharTokenizer-derived tokenizers and KeywordTokenizer to configure the 
> max token length
> -
>
> Key: LUCENE-7705
> URL: https://issues.apache.org/jira/browse/LUCENE-7705
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Amrit Sarkar
>Assignee: Erick Erickson
>Priority: Minor
> Fix For: master (7.0), 6.7
>
> Attachments: LUCENE-7705, LUCENE-7705.patch, LUCENE-7705.patch, 
> LUCENE-7705.patch, LUCENE-7705.patch, LUCENE-7705.patch, LUCENE-7705.patch, 
> LUCENE-7705.patch, LUCENE-7705.patch, LUCENE-7705.patch, LUCENE-7705.patch
>
>
> SOLR-10186
> [~erickerickson]: Is there a good reason that we hard-code a 256 character 
> limit for the CharTokenizer? In order to change this limit it requires that 
> people copy/paste the incrementToken into some new class since incrementToken 
> is final.
> KeywordTokenizer can easily change the default (which is also 256 bytes), but 
> to do so requires code rather than being able to configure it in the schema.
> For KeywordTokenizer, this is Solr-only. For the CharTokenizer classes 
> (WhitespaceTokenizer, UnicodeWhitespaceTokenizer and LetterTokenizer) 
> (Factories) it would take adding a c'tor to the base class in Lucene and 
> using it in the factory.
> Any objections?
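To illustrate the configurable limit the issue asks for, here is a standalone 
sketch (plain Java, deliberately not the Lucene CharTokenizer API) of a 
whitespace tokenizer whose max token length is a constructor argument instead 
of a hard-coded 256; over-long runs are chunked, roughly the way CharTokenizer 
emits a token whenever its buffer fills:

```java
import java.util.ArrayList;
import java.util.List;

public class ConfigurableMaxLenTokenizer {
  private final int maxTokenLen;

  public ConfigurableMaxLenTokenizer(int maxTokenLen) {
    this.maxTokenLen = maxTokenLen;
  }

  // Splits on whitespace, then chunks any token longer than maxTokenLen.
  public List<String> tokenize(String input) {
    List<String> tokens = new ArrayList<>();
    for (String raw : input.split("\\s+")) {
      if (raw.isEmpty()) continue;
      for (int i = 0; i < raw.length(); i += maxTokenLen) {
        tokens.add(raw.substring(i, Math.min(raw.length(), i + maxTokenLen)));
      }
    }
    return tokens;
  }

  public static void main(String[] args) {
    System.out.println(new ConfigurableMaxLenTokenizer(3).tokenize("letter tokens"));
    // -> [let, ter, tok, ens]
  }
}
```

In the actual patch the limit would come from a factory attribute in the 
schema rather than a constructor call in user code.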






[jira] [Commented] (LUCENE-7705) Allow CharTokenizer-derived tokenizers and KeywordTokenizer to configure the max token length

2017-06-02 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16035782#comment-16035782
 ] 

ASF subversion and git services commented on LUCENE-7705:
-

Commit 2eacf13def4dc9fbea1de9c79150c05682b0cdec in lucene-solr's branch 
refs/heads/master from Erick
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=2eacf13 ]

LUCENE-7705: Allow CharTokenizer-derived tokenizers and KeywordTokenizer to 
configure the max token length, fix test failure.








[jira] [Commented] (LUCENE-7705) Allow CharTokenizer-derived tokenizers and KeywordTokenizer to configure the max token length

2017-06-02 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16035781#comment-16035781
 ] 

ASF subversion and git services commented on LUCENE-7705:
-

Commit 15a8a2415280d50c982fcd4fca893a3c3224da14 in lucene-solr's branch 
refs/heads/master from Erick
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=15a8a24 ]

LUCENE-7705: Allow CharTokenizer-derived tokenizers and KeywordTokenizer to 
configure the max token len (test fix)








[jira] [Updated] (SOLR-10806) Solr Replica goes down with NumberFormatException: Invalid shift value (64) in prefixCoded bytes (is encoded value really an INT?)

2017-06-02 Thread Sachin Goyal (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10806?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sachin Goyal updated SOLR-10806:

Description: 
Our Solr nodes go down within 20-30 minutes of indexing.
It does not seem that load-rate is too high because the exception in the logs 
is pointing to a data problem:

{color:darkred}
INFO  - 2017-06-02 23:21:19.094; org.apache.solr.core.SolrCore; 
\[node-instances_shard2_replica3\] Registered new searcher 
Searcher@6740879c\[node-instances_shard2_replica3\] 
main{ExitableDirectoryReader(UninvertingDirectoryReader(Uninverting(_ne(6.3.0):C200591/8616:delGen=20)
 Uninverting(_wx(6.3.0):C72132/697:delGen=5) 
Uninverting(_y0(6.3.0):c5798/27:delGen=3) 
Uninverting(_yv(6.3.0):c10935/827:delGen=2) 
Uninverting(_z4(6.3.0):C4163/2277:delGen=1)))}
ERROR - 2017-06-02 23:21:19.105; org.apache.solr.core.CoreContainer; Error 
waiting for SolrCore to be created
java.util.concurrent.ExecutionException: org.apache.solr.common.SolrException: 
Unable to create core \[node-instances_shard2_replica3\]
at java.util.concurrent.FutureTask.report(FutureTask.java:122)
at java.util.concurrent.FutureTask.get(FutureTask.java:192)
at 
org.apache.solr.core.CoreContainer.lambda$load$1(CoreContainer.java:526)
at org.apache.solr.core.CoreContainer$$Lambda$38/199449817.run(Unknown 
Source)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:229)
at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor$$Lambda$9/1611272577.run(Unknown
 Source)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.solr.common.SolrException: Unable to create core 
\[node-instances_shard2_replica3\]
at org.apache.solr.core.CoreContainer.create(CoreContainer.java:855)
at 
org.apache.solr.core.CoreContainer.lambda$load$0(CoreContainer.java:498)
at 
org.apache.solr.core.CoreContainer$$Lambda$37/1402433372.call(Unknown Source)
... 6 more
Caused by: java.lang.NumberFormatException: Invalid shift value (64) in 
prefixCoded bytes (is encoded value really an INT?)
at 
org.apache.lucene.util.LegacyNumericUtils.getPrefixCodedLongShift(LegacyNumericUtils.java:163)
at 
org.apache.lucene.util.LegacyNumericUtils$1.accept(LegacyNumericUtils.java:392)
at 
org.apache.lucene.index.FilteredTermsEnum.next(FilteredTermsEnum.java:232)
at org.apache.lucene.index.Terms.getMax(Terms.java:169)
at 
org.apache.lucene.util.LegacyNumericUtils.getMaxLong(LegacyNumericUtils.java:504)
at 
org.apache.solr.update.VersionInfo.getMaxVersionFromIndex(VersionInfo.java:233)
at 
org.apache.solr.update.UpdateLog.seedBucketsWithHighestVersion(UpdateLog.java:1584)
at 
org.apache.solr.update.UpdateLog.seedBucketsWithHighestVersion(UpdateLog.java:1610)
at org.apache.solr.core.SolrCore.seedVersionBuckets(SolrCore.java:949)
at org.apache.solr.core.SolrCore.(SolrCore.java:931)
at org.apache.solr.core.SolrCore.(SolrCore.java:776)
at org.apache.solr.core.CoreContainer.create(CoreContainer.java:842)
... 8 more
{color}

It does not seem right that Solr Node itself should go down for such a problem.
# Error waiting for SolrCore to be created
java.util.concurrent.ExecutionException: org.apache.solr.common.SolrException: 
Unable to create core
# Unable to create core
# NumberFormatException: Invalid shift value (64) in prefixCoded bytes (is 
encoded value really an INT?)

i.e. Core creation fails because there was some confusion between long and 
integer.
If there is a data issue, it should instead be communicated with an exception 
at ingestion time.
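The "shift value (64)" in the exception follows from the prefix-coded term 
layout: an int-encoded term's first byte starts at a higher offset than a 
long-encoded term's, so decoding int-coded bytes with the long decoder yields 
an out-of-range shift. A standalone illustration (the SHIFT_START_* constants 
mirror Lucene's LegacyNumericUtils; treat the exact values and the helper 
method as assumptions):

```java
public class PrefixCodedShift {
  static final int SHIFT_START_INT = 0x60;  // first byte of an int term = 0x60 + shift
  static final int SHIFT_START_LONG = 0x20; // first byte of a long term = 0x20 + shift

  // What a long-term decoder computes from a term's first byte.
  static int decodeAsLongShift(byte firstByte) {
    return (firstByte & 0xFF) - SHIFT_START_LONG;
  }

  public static void main(String[] args) {
    byte intTermShift0 = (byte) (SHIFT_START_INT + 0); // int term, shift 0
    int shift = decodeAsLongShift(intTermShift0);      // 0x60 - 0x20 = 64
    System.out.println("Invalid shift value (" + shift
        + ") -- is encoded value really an INT?");
  }
}
```

This matches the stack trace: getMaxLong() walks the _version_ terms assuming 
long encoding, and trips over terms that were apparently indexed as ints.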


  was:
Our Solr nodes go down within 20-30 minutes of indexing.
It does not seem that load-rate is too high because the exception in the logs 
is pointing to a data problem:

{color:darkred}
INFO  - 2017-06-02 23:21:19.094; org.apache.solr.core.SolrCore; 
\[node-instances_shard2_replica3\] Registered new searcher 
Searcher@6740879c\[node-instances_shard2_replica3\] 
main{ExitableDirectoryReader(UninvertingDirectoryReader(Uninverting(_ne(6.3.0):C200591/8616:delGen=20)
 Uninverting(_wx(6.3.0):C72132/697:delGen=5) 
Uninverting(_y0(6.3.0):c5798/27:delGen=3) 
Uninverting(_yv(6.3.0):c10935/827:delGen=2) 
Uninverting(_z4(6.3.0):C4163/2277:delGen=1)))}
ERROR - 2017-06-02 23:21:19.105; org.apache.solr.core.CoreContainer; Error 
waiting for SolrCore to be created
java.util.concurrent.ExecutionException: org.apache.solr.common.SolrException: 
Unable to create core 

[jira] [Created] (SOLR-10806) Solr Replica goes down with NumberFormatException: Invalid shift value (64) in prefixCoded bytes (is encoded value really an INT?)

2017-06-02 Thread Sachin Goyal (JIRA)
Sachin Goyal created SOLR-10806:
---

 Summary: Solr Replica goes down with NumberFormatException: 
Invalid shift value (64) in prefixCoded bytes (is encoded value really an INT?)
 Key: SOLR-10806
 URL: https://issues.apache.org/jira/browse/SOLR-10806
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
Affects Versions: 6.3.1
Reporter: Sachin Goyal


Our Solr nodes go down within 20-30 minutes of indexing.
It does not seem that load-rate is too high because the exception in the logs 
is pointing to a data problem:

{color:darkred}
INFO  - 2017-06-02 23:21:19.094; org.apache.solr.core.SolrCore; 
\[node-instances_shard2_replica3\] Registered new searcher 
Searcher@6740879c\[node-instances_shard2_replica3\] 
main{ExitableDirectoryReader(UninvertingDirectoryReader(Uninverting(_ne(6.3.0):C200591/8616:delGen=20)
 Uninverting(_wx(6.3.0):C72132/697:delGen=5) 
Uninverting(_y0(6.3.0):c5798/27:delGen=3) 
Uninverting(_yv(6.3.0):c10935/827:delGen=2) 
Uninverting(_z4(6.3.0):C4163/2277:delGen=1)))}
ERROR - 2017-06-02 23:21:19.105; org.apache.solr.core.CoreContainer; Error 
waiting for SolrCore to be created
java.util.concurrent.ExecutionException: org.apache.solr.common.SolrException: 
Unable to create core \[node-instances_shard2_replica3\]
at java.util.concurrent.FutureTask.report(FutureTask.java:122)
at java.util.concurrent.FutureTask.get(FutureTask.java:192)
at 
org.apache.solr.core.CoreContainer.lambda$load$1(CoreContainer.java:526)
at org.apache.solr.core.CoreContainer$$Lambda$38/199449817.run(Unknown 
Source)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:229)
at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor$$Lambda$9/1611272577.run(Unknown
 Source)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.solr.common.SolrException: Unable to create core 
\[node-instances_shard2_replica3\]
at org.apache.solr.core.CoreContainer.create(CoreContainer.java:855)
at 
org.apache.solr.core.CoreContainer.lambda$load$0(CoreContainer.java:498)
at 
org.apache.solr.core.CoreContainer$$Lambda$37/1402433372.call(Unknown Source)
... 6 more
Caused by: java.lang.NumberFormatException: Invalid shift value (64) in 
prefixCoded bytes (is encoded value really an INT?)
at 
org.apache.lucene.util.LegacyNumericUtils.getPrefixCodedLongShift(LegacyNumericUtils.java:163)
at 
org.apache.lucene.util.LegacyNumericUtils$1.accept(LegacyNumericUtils.java:392)
at 
org.apache.lucene.index.FilteredTermsEnum.next(FilteredTermsEnum.java:232)
at org.apache.lucene.index.Terms.getMax(Terms.java:169)
at 
org.apache.lucene.util.LegacyNumericUtils.getMaxLong(LegacyNumericUtils.java:504)
at 
org.apache.solr.update.VersionInfo.getMaxVersionFromIndex(VersionInfo.java:233)
at 
org.apache.solr.update.UpdateLog.seedBucketsWithHighestVersion(UpdateLog.java:1584)
at 
org.apache.solr.update.UpdateLog.seedBucketsWithHighestVersion(UpdateLog.java:1610)
at org.apache.solr.core.SolrCore.seedVersionBuckets(SolrCore.java:949)
at org.apache.solr.core.SolrCore.(SolrCore.java:931)
at org.apache.solr.core.SolrCore.(SolrCore.java:776)
at org.apache.solr.core.CoreContainer.create(CoreContainer.java:842)
... 8 more
{color}

It does not seem right that the Solr node itself should go down for such a 
problem. If there is a data issue, it should instead be communicated with an 
exception at ingestion time.







[JENKINS] Lucene-Solr-6.x-Solaris (64bit/jdk1.8.0) - Build # 867 - Still Unstable!

2017-06-02 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Solaris/867/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseSerialGC

1 tests failed.
FAILED:  
org.apache.solr.util.TestMaxTokenLenTokenizer.testSingleFieldSameAnalyzers

Error Message:
Exception during query

Stack Trace:
java.lang.RuntimeException: Exception during query
at 
__randomizedtesting.SeedInfo.seed([BC925A2875283189:D64065472DCBE146]:0)
at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:895)
at 
org.apache.solr.util.TestMaxTokenLenTokenizer.testSingleFieldSameAnalyzers(TestMaxTokenLenTokenizer.java:104)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.RuntimeException: REQUEST FAILED: 
xpath=//result[@numFound=1]
xml response was: 

00


request was:q=letter0:lett=xml
at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:888)
... 40 more




Build Log:
[...truncated 13226 lines...]
   

[JENKINS] Lucene-Solr-6.x-Linux (32bit/jdk1.8.0_131) - Build # 3647 - Still Unstable!

2017-06-02 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/3647/
Java: 32bit/jdk1.8.0_131 -server -XX:+UseG1GC

1 tests failed.
FAILED:  
org.apache.solr.util.TestMaxTokenLenTokenizer.testSingleFieldSameAnalyzers

Error Message:
Exception during query

Stack Trace:
java.lang.RuntimeException: Exception during query
at 
__randomizedtesting.SeedInfo.seed([ED11F8EA6375C040:87C3C7853B96108F]:0)
at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:895)
at 
org.apache.solr.util.TestMaxTokenLenTokenizer.testSingleFieldSameAnalyzers(TestMaxTokenLenTokenizer.java:104)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.RuntimeException: REQUEST FAILED: 
xpath=//result[@numFound=1]
xml response was: 

00


request was: q=letter0:lett&wt=xml
at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:888)
... 40 more




Build Log:
[...truncated 11433 lines...]
   [junit4] Suite: 

[JENKINS] Lucene-Solr-master-Solaris (64bit/jdk1.8.0) - Build # 1337 - Unstable!

2017-06-02 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Solaris/1337/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseSerialGC

1 tests failed.
FAILED:  org.apache.solr.cloud.TestTlogReplica.testRecovery

Error Message:
Can not find doc 8 in http://127.0.0.1:51427/solr

Stack Trace:
java.lang.AssertionError: Can not find doc 8 in http://127.0.0.1:51427/solr
at 
__randomizedtesting.SeedInfo.seed([3F9327CAF656A655:FE635E66DB066CF2]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertNotNull(Assert.java:526)
at 
org.apache.solr.cloud.TestTlogReplica.checkRTG(TestTlogReplica.java:868)
at 
org.apache.solr.cloud.TestTlogReplica.testRecovery(TestTlogReplica.java:589)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)




Build Log:
[...truncated 11239 lines...]
   [junit4] Suite: org.apache.solr.cloud.TestTlogReplica
   [junit4]   2> Creating 

[JENKINS] Lucene-Solr-6.x-MacOSX (64bit/jdk1.8.0) - Build # 904 - Still Unstable!

2017-06-02 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-MacOSX/904/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  
org.apache.solr.util.TestMaxTokenLenTokenizer.testSingleFieldSameAnalyzers

Error Message:
Exception during query

Stack Trace:
java.lang.RuntimeException: Exception during query
at 
__randomizedtesting.SeedInfo.seed([194B999CBFDDAFD1:7399A6F3E73E7F1E]:0)
at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:895)
at 
org.apache.solr.util.TestMaxTokenLenTokenizer.testSingleFieldSameAnalyzers(TestMaxTokenLenTokenizer.java:104)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.RuntimeException: REQUEST FAILED: 
xpath=//result[@numFound=1]
xml response was: 

00


request was: q=letter0:lett&wt=xml
at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:888)
... 40 more




Build Log:
[...truncated 12024 lines...]
   

Re: Release planning for 7.0

2017-06-02 Thread Steve Rowe

> On Jun 2, 2017, at 5:40 PM, Shawn Heisey  wrote:
> 
> On 6/2/2017 10:23 AM, Steve Rowe wrote:
> 
>> I see zero benefits from cutting branch_7x now.  Shawn, can you describe why 
>> you think we should do this?
>> 
>> My interpretation of your argument is that you’re in favor of delaying 
>> cutting branch_7_0 until feature freeze - which BTW is the status quo - but 
>> I don’t get why that argues for cutting branch_7x now.
> 
> I think I read something in the message I replied to that wasn't
> actually stated.  I hate it when I don't read things closely enough.
> 
> I meant to address the idea of making both branch_7x and branch_7_0 at
> the same time, whenever the branching happens.  Somehow I came up with
> the idea that the gist of the discussion included making the branches
> now, which I can see is not the case.
> 
> My point, which I think applies equally to branch_7x, is to wait as long
> as practical before creating a branch, so that there is as little
> backporting as we can manage, particularly minimizing the amount of time
> that we have more than two branches being actively changed.

+1

--
Steve
www.lucidworks.com


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10713) git should ignore common output files (*.pid, *.out)

2017-06-02 Thread Mike Drob (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10713?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16035491#comment-16035491
 ] 

Mike Drob commented on SOLR-10713:
--

Let's get the easy parts committed and not worry about the hard parts for now 
(i.e. collection directories) - we can always come back to them later if 
somebody decides they're enough of a problem to solve rather than a 
hypothetical problem to try and handle. What's the phrase... striving to 
better, oft we mar what's well?

I'm not sure it makes a difference, but when moving entries from the top-level 
to the solr specific ignore list, some retained the leading slash and some lost 
it. I went looking at {{man gitignore}}, and discovered these two entries:

* A leading slash matches the beginning of the pathname. For example,
   "/*.c" matches "cat-file.c" but not "mozilla-sha1/sha1.c".
* A leading "**" followed by a slash means match in all directories.
   For example, "**/foo" matches file or directory "foo" anywhere, the
   same as pattern "foo"

So let's drop leading **, and also get consistent about our use of leading 
slash.
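To make the two quoted man-page rules concrete, a small hypothetical ignore fragment (the patterns are illustrative only, not proposals for the actual lists):

```
# Anchored by a leading slash: matches only relative to the directory that
# contains this .gitignore, so "/*.pid" matches ./solr.pid but not bin/solr.pid
/*.pid
/*.out

# Unanchored "foo" already matches in every directory, so a leading "**/"
# is redundant: "**/foo" and "foo" behave identically
foo
```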

I have no preference on {{.patch}} files. Maybe leave them out so that they 
still get shown by {{git status}} and the user can be reminded that they exist. 
I think the risk of accidentally committing one is low and the remedy is simple 
since conflicts and dependencies on {{SOLR-XXX.patch}} should be unlikely.

> git should ignore common output files (*.pid, *.out)
> 
>
> Key: SOLR-10713
> URL: https://issues.apache.org/jira/browse/SOLR-10713
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Build
>Reporter: Jason Gerlowski
>Assignee: Mike Drob
>Priority: Trivial
> Attachments: SOLR-10713.patch, SOLR-10713.patch, SOLR-10713.patch, 
> SOLR-10713.patch
>
>
> During the course of experimenting/testing Solr, it's common to accumulate a 
> number of output files in the source checkout.  Many of these are already 
> ignored via the {{.gitignore}}.  (For example, {{*.jar}} and {{*.log}} files 
> are untracked currently)
> Some common output files aren't explicitly ignored by git though.  I know 
> this is true of {{*.pid}} and {{*.out}} files (such as those produced by 
> running a standalone ZK).
> It'd be nice if we could update the {{.gitignore}} to explicitly ignore these 
> filetypes by default.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)




Re: Release planning for 7.0

2017-06-02 Thread Shawn Heisey
On 6/2/2017 10:23 AM, Steve Rowe wrote:

> I see zero benefits from cutting branch_7x now.  Shawn, can you describe why 
> you think we should do this?
>
> My interpretation of your argument is that you’re in favor of delaying 
> cutting branch_7_0 until feature freeze - which BTW is the status quo - but I 
> don’t get why that argues for cutting branch_7x now.

I think I read something in the message I replied to that wasn't
actually stated.  I hate it when I don't read things closely enough.

I meant to address the idea of making both branch_7x and branch_7_0 at
the same time, whenever the branching happens.  Somehow I came up with
the idea that the gist of the discussion included making the branches
now, which I can see is not the case.

My point, which I think applies equally to branch_7x, is to wait as long
as practical before creating a branch, so that there is as little
backporting as we can manage, particularly minimizing the amount of time
that we have more than two branches being actively changed.

Thanks,
Shawn





[jira] [Commented] (SOLR-10713) git should ignore common output files (*.pid, *.out)

2017-06-02 Thread Jason Gerlowski (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10713?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16035470#comment-16035470
 ] 

Jason Gerlowski commented on SOLR-10713:


Another possible addition to the .gitignore would be {{.patch}} files.  These 
aren't _as_ common to accumulate, but they can pop up when people are juggling 
multiple JIRAs simultaneously.

I don't feel strongly either way; just mentioning it here in case anyone else 
really likes the idea.  Any thoughts [~mdrob]?

> git should ignore common output files (*.pid, *.out)
> 
>
> Key: SOLR-10713
> URL: https://issues.apache.org/jira/browse/SOLR-10713
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Build
>Reporter: Jason Gerlowski
>Assignee: Mike Drob
>Priority: Trivial
> Attachments: SOLR-10713.patch, SOLR-10713.patch, SOLR-10713.patch, 
> SOLR-10713.patch
>
>
> During the course of experimenting/testing Solr, it's common to accumulate a 
> number of output files in the source checkout.  Many of these are already 
> ignored via the {{.gitignore}}.  (For example, {{*.jar}} and {{*.log}} files 
> are untracked currently)
> Some common output files aren't explicitly ignored by git though.  I know 
> this is true of {{*.pid}} and {{*.out}} files (such as those produced by 
> running a standalone ZK).
> It'd be nice if we could update the {{.gitignore}} to explicitly ignore these 
> filetypes by default.






InterruptedException handling in the code base

2017-06-02 Thread Varun Thacker
Here are two cases where we catch InterruptedException and do different
things:


   1. Log a warning/error and move on: Example
   
https://github.com/apache/lucene-solr/blob/master/solr/core/src/java/org/apache/solr/update/UpdateLog.java#L1248
   2. Throw a SolrException: Example
   
https://github.com/apache/lucene-solr/blob/master/solr/solrj/src/java/org/apache/solr/common/cloud/ZkStateReader.java#L925


Would this be the correct way to deal with these:


   1. For 1, we should restore the interrupted thread by doing this
   - Thread.currentThread().interrupt();
   2. For 2, do we need to interrupt the thread before throwing an
   exception?


I wanted to understand the usage and then file a Jira to fix any mistakes
there are currently.
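To make the two options concrete, here is a minimal, self-contained sketch (the class and method names are illustrative, not taken from the Solr code base). In pattern 1 the handler logs/returns but first restores the interrupt flag via {{Thread.currentThread().interrupt()}}; in pattern 2 the flag is likewise restored before the {{InterruptedException}} is wrapped and rethrown, so code further up the stack can still observe the interruption:

```java
import java.util.concurrent.TimeUnit;

public class InterruptDemo {
    // Pattern 1: swallow the exception and move on, but restore the
    // interrupt flag so callers can still see that interruption occurred.
    static boolean waitQuietly(long millis) {
        try {
            TimeUnit.MILLISECONDS.sleep(millis);
            return true;                       // wait completed normally
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt(); // restore, don't swallow
            return false;                      // wait was cut short
        }
    }

    // Pattern 2: restore the flag, then wrap in a runtime exception
    // (analogous to throwing a SolrException).
    static void waitOrThrow(long millis) {
        try {
            TimeUnit.MILLISECONDS.sleep(millis);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            throw new RuntimeException("Interrupted while waiting", e);
        }
    }

    public static void main(String[] args) throws Exception {
        Thread t = new Thread(() -> {
            boolean finished = waitQuietly(60_000);
            // Because the flag was restored, it is still visible here.
            System.out.println("finished=" + finished
                + " interrupted=" + Thread.currentThread().isInterrupted());
        });
        t.start();
        t.interrupt();
        t.join();                              // prints "finished=false interrupted=true"
    }
}
```

The key point in both cases is that catching {{InterruptedException}} clears the thread's interrupt status, so a handler that neither rethrows nor re-interrupts silently hides the cancellation request from everything above it.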


[JENKINS-EA] Lucene-Solr-6.x-Linux (64bit/jdk-9-ea+171) - Build # 3646 - Still Unstable!

2017-06-02 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/3646/
Java: 64bit/jdk-9-ea+171 -XX:-UseCompressedOops -XX:+UseParallelGC

4 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.lucene.codecs.memory.TestDirectDocValuesFormat

Error Message:
Suite timeout exceeded (>= 720 msec).

Stack Trace:
java.lang.Exception: Suite timeout exceeded (>= 720 msec).
at __randomizedtesting.SeedInfo.seed([D06D131F923883B8]:0)


FAILED:  
org.apache.lucene.codecs.memory.TestDirectDocValuesFormat.testGCDCompression

Error Message:
Test abandoned because suite timeout was reached.

Stack Trace:
java.lang.Exception: Test abandoned because suite timeout was reached.
at __randomizedtesting.SeedInfo.seed([D06D131F923883B8]:0)


FAILED:  org.apache.solr.cloud.LeaderFailoverAfterPartitionTest.test

Error Message:
Expected 2 of 3 replicas to be active but only found 1; 
[core_node3:{"core":"c8n_1x3_lf_shard1_replica1","base_url":"http://127.0.0.1:43963","node_name":"127.0.0.1:43963_","state":"active","leader":"true"}];
 clusterState: DocCollection(c8n_1x3_lf//clusterstate.json/30)={   
"replicationFactor":"3",   "shards":{"shard1":{   
"range":"8000-7fff",   "state":"active",   "replicas":{ 
"core_node1":{   "core":"c8n_1x3_lf_shard1_replica2",   
"base_url":"http://127.0.0.1:42865",   "node_name":"127.0.0.1:42865_",  
 "state":"down"}, "core_node2":{   "state":"down",  
"base_url":"http://127.0.0.1:38771",   
"core":"c8n_1x3_lf_shard1_replica3",   "node_name":"127.0.0.1:38771_"}, 
"core_node3":{   "core":"c8n_1x3_lf_shard1_replica1",   
"base_url":"http://127.0.0.1:43963",   "node_name":"127.0.0.1:43963_",  
 "state":"active",   "leader":"true",   
"router":{"name":"compositeId"},   "maxShardsPerNode":"1",   
"autoAddReplicas":"false"}

Stack Trace:
java.lang.AssertionError: Expected 2 of 3 replicas to be active but only found 
1; 
[core_node3:{"core":"c8n_1x3_lf_shard1_replica1","base_url":"http://127.0.0.1:43963","node_name":"127.0.0.1:43963_","state":"active","leader":"true"}];
 clusterState: DocCollection(c8n_1x3_lf//clusterstate.json/30)={
  "replicationFactor":"3",
  "shards":{"shard1":{
  "range":"8000-7fff",
  "state":"active",
  "replicas":{
"core_node1":{
  "core":"c8n_1x3_lf_shard1_replica2",
  "base_url":"http://127.0.0.1:42865",
  "node_name":"127.0.0.1:42865_",
  "state":"down"},
"core_node2":{
  "state":"down",
  "base_url":"http://127.0.0.1:38771",
  "core":"c8n_1x3_lf_shard1_replica3",
  "node_name":"127.0.0.1:38771_"},
"core_node3":{
  "core":"c8n_1x3_lf_shard1_replica1",
  "base_url":"http://127.0.0.1:43963",
  "node_name":"127.0.0.1:43963_",
  "state":"active",
  "leader":"true",
  "router":{"name":"compositeId"},
  "maxShardsPerNode":"1",
  "autoAddReplicas":"false"}
at 
__randomizedtesting.SeedInfo.seed([488D188F3F091B04:C0D9275591F576FC]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.cloud.LeaderFailoverAfterPartitionTest.testRf3WithLeaderFailover(LeaderFailoverAfterPartitionTest.java:168)
at 
org.apache.solr.cloud.LeaderFailoverAfterPartitionTest.test(LeaderFailoverAfterPartitionTest.java:55)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:563)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:992)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:967)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 

[JENKINS] Lucene-Solr-master-MacOSX (64bit/jdk1.8.0) - Build # 4044 - Still Unstable!

2017-06-02 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-MacOSX/4044/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.TestPullReplicaErrorHandling

Error Message:
ObjectTracker found 10 object(s) that were not released!!! 
[MockDirectoryWrapper, MockDirectoryWrapper, MockDirectoryWrapper, 
MockDirectoryWrapper, SolrCore, SolrCore, MockDirectoryWrapper, 
MockDirectoryWrapper, MockDirectoryWrapper, MockDirectoryWrapper] 
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException: 
org.apache.lucene.store.MockDirectoryWrapper  at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:42)
  at 
org.apache.solr.core.CachingDirectoryFactory.get(CachingDirectoryFactory.java:347)
  at 
org.apache.solr.handler.IndexFetcher.fetchLatestIndex(IndexFetcher.java:480)  
at org.apache.solr.handler.IndexFetcher.fetchLatestIndex(IndexFetcher.java:328) 
 at 
org.apache.solr.handler.ReplicationHandler.doFetch(ReplicationHandler.java:419) 
 at 
org.apache.solr.handler.ReplicationHandler.lambda$setupPolling$12(ReplicationHandler.java:1183)
  at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)  
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)  at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
  at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
 at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
 at java.lang.Thread.run(Thread.java:748)  
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException: 
org.apache.lucene.store.MockDirectoryWrapper  at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:42)
  at 
org.apache.solr.core.CachingDirectoryFactory.get(CachingDirectoryFactory.java:347)
  at 
org.apache.solr.handler.IndexFetcher.fetchLatestIndex(IndexFetcher.java:480)  
at org.apache.solr.handler.IndexFetcher.fetchLatestIndex(IndexFetcher.java:328) 
 at 
org.apache.solr.handler.ReplicationHandler.doFetch(ReplicationHandler.java:419) 
 at 
org.apache.solr.handler.ReplicationHandler.lambda$setupPolling$12(ReplicationHandler.java:1183)
  at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)  
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)  at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
  at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
 at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
 at java.lang.Thread.run(Thread.java:748)  
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException: 
org.apache.lucene.store.MockDirectoryWrapper  at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:42)
  at 
org.apache.solr.core.CachingDirectoryFactory.get(CachingDirectoryFactory.java:347)
  at org.apache.solr.core.SolrCore.getNewIndexDir(SolrCore.java:361)  at 
org.apache.solr.core.SolrCore.initIndex(SolrCore.java:721)  at 
org.apache.solr.core.SolrCore.&lt;init&gt;(SolrCore.java:948)  at 
org.apache.solr.core.SolrCore.&lt;init&gt;(SolrCore.java:855)  at 
org.apache.solr.core.CoreContainer.create(CoreContainer.java:973)  at 
org.apache.solr.core.CoreContainer.create(CoreContainer.java:908)  at 
org.apache.solr.handler.admin.CoreAdminOperation.lambda$static$0(CoreAdminOperation.java:91)
  at 
org.apache.solr.handler.admin.CoreAdminOperation.execute(CoreAdminOperation.java:384)
  at 
org.apache.solr.handler.admin.CoreAdminHandler$CallInfo.call(CoreAdminHandler.java:389)
  at 
org.apache.solr.handler.admin.CoreAdminHandler.handleRequestBody(CoreAdminHandler.java:174)
  at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:178)
  at org.apache.solr.servlet.HttpSolrCall.handleAdmin(HttpSolrCall.java:747)  
at 
org.apache.solr.servlet.HttpSolrCall.handleAdminRequest(HttpSolrCall.java:728)  
at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:509)  at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:374)
  at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:318)
  at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1699)
  at 
org.apache.solr.client.solrj.embedded.JettySolrRunner$DebugFilter.doFilter(JettySolrRunner.java:139)
  at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1699)
  at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582) 
 at 

[jira] [Commented] (SOLR-8762) DIH entity child=true should respond nested documents on debug

2017-06-02 Thread gopikannan venugopalsamy (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8762?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16035418#comment-16035418
 ] 

gopikannan venugopalsamy commented on SOLR-8762:


[~mkhludnev] No problem.

> DIH entity child=true should respond nested documents on debug
> --
>
> Key: SOLR-8762
> URL: https://issues.apache.org/jira/browse/SOLR-8762
> Project: Solr
>  Issue Type: Improvement
>  Components: contrib - DataImportHandler
>Reporter: Mikhail Khludnev
>Assignee: Mikhail Khludnev
>Priority: Minor
>  Labels: newbie, newdev
> Attachments: SOLR-8762.patch, SOLR-8762.patch
>
>
> Problem is described in 
> [comment|https://issues.apache.org/jira/browse/SOLR-5147?focusedCommentId=14744852=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14744852]
>  of SOLR-5147 






[jira] [Assigned] (SOLR-10649) Document new metrics config changes

2017-06-02 Thread Cassandra Targett (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10649?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cassandra Targett reassigned SOLR-10649:


Assignee: Cassandra Targett

> Document new metrics config changes
> ---
>
> Key: SOLR-10649
> URL: https://issues.apache.org/jira/browse/SOLR-10649
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation, metrics
>Affects Versions: master (7.0)
>Reporter: Andrzej Bialecki 
>Assignee: Cassandra Targett
> Fix For: master (7.0)
>
>
> SOLR-10262 made several improvements to the configuration of metrics in Solr.
> Internally, the {{<metrics>}} configuration section of {{solr.xml}} is now 
> represented as {{MetricsConfig}} class, which makes it easier to add new 
> properties there.
> A new section {{<suppliers>}} has been added to {{solr.xml}}, 
> which allows users to define what implementations of metrics they want to use 
> as well as to configure their parameters. This is useful eg. when selecting 
> what kind of reservoir to use for histograms and timers, or to change the 
> reference clock type, or for providing any other configuration parameters for 
> custom implementations of metrics.
> The {{<suppliers>}} section specifies implementations and configurations of 
> metric suppliers, ie. classes responsible for creating instances of metrics. 
> There are default implementations provided for all types of metrics, and they 
> are used if no {{class}} attribute is specified, or an invalid one. Custom 
> suppliers must implement {{MetricSupplier}} interface and have a zero-args 
> constructor. Bean setter methods will be used for applying values from their 
> plugin configuration, alternatively they may also implement 
> {{PluginInfoInitialized}}. These rules also apply to any other custom classes 
> loaded in the metrics config, eg. custom Reservoir implementations.
> Each configuration element in the {{}} section follows a general 
> plugin configuration format, ie. it may optionally contain "name" and "class" 
> attributes and contain sub-elements that define typed configuration 
> parameters. As mentioned above, if the "class" attribute is missing or 
> invalid (the class can't be loaded or it doesn't implement the right 
> interface) a default implementation will be used. If an element is missing 
> then default configuration will be used.
> The following elements are supported in this section:
> * {{}} - this element defines the implementation and configuration 
> of a {{Counter}} supplier. The default implementation doesn't support any 
> configuration.
> * {{}} - implementation and configuration of a {{Meter}} supplier. The 
> default implementation supports one optional config parameter:
> ** {{}} - type of clock to use for calculating EWMA rates; 
> supported values are "user" (default, which uses {{System.nanoTime()}}) and 
> "cpu" (which uses current thread's CPU time).
> * {{}} - implementation and configuration of a {{Histogram}} 
> supplier. In addition to the {{clock}} parameter the following parameters are 
> supported by the default supplier implementation:
> ** {{}} - a fully-qualified class name of 
> implementation of {{Reservoir}} to use. Default value is 
> {{com.codahale.metrics.ExponentiallyDecayingReservoir}}. Note: all 
> implementations of {{Reservoir}} that ship with the metrics library are 
> supported, even though they don't follow the custom class rules listed above. 
> The following config parameters can be used with these implementations:
> *** {{size}} - (int, default is 1028) reservoir size.
> *** {{alpha}} - (double, default is 0.015) decay parameter for 
> {{ExponentiallyDecayingReservoir}}.
> *** {{window}} - (long, default is 300) window size parameter for 
> {{SlidingTimeWindowReservoir}}, in seconds. 300 seconds = 5 minutes, which 
> more or less fits the default bias of {{ExponentiallyDecayingReservoir}}.
> * {{}} - implementation and configuration of a {{Timer}} supplier. 
> Default implementation supports configuration parameters related to clock and 
> reservoir, as specified above.
> An example section of {{solr.xml}}: the default {{Meter}} supplier is used 
> with a non-default clock, and the default {{Timer}} supplier is used with a 
> non-default reservoir configuration:
> {code}
> <metrics>
>   <suppliers>
>     <meter>
>       <str name="clock">cpu</str>
>     </meter>
>     <timer>
>       <str name="reservoir">com.codahale.metrics.SlidingTimeWindowReservoir</str>
>       <long name="window">600</long>
>     </timer>
>   </suppliers>
> </metrics>
> {code}
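The effect of the {{window}} parameter above can be pictured with a tiny, self-contained sketch. This is an illustration of the idea only (not Solr or Dropwizard code; all names are invented for the example): samples older than the window simply stop counting toward the reservoir.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Illustrative sketch of a sliding-time-window reservoir: only samples
// recorded within the last `windowSeconds` survive a prune.
public class SlidingWindowSketch {
    private static final class Sample {
        final long atMillis;
        final long value;
        Sample(long atMillis, long value) { this.atMillis = atMillis; this.value = value; }
    }

    private final long windowMillis;
    private final Deque<Sample> samples = new ArrayDeque<>();

    public SlidingWindowSketch(long windowSeconds) {
        this.windowMillis = windowSeconds * 1000L;
    }

    // Record a value at an explicit timestamp (a real reservoir uses a Clock).
    public void update(long nowMillis, long value) {
        samples.addLast(new Sample(nowMillis, value));
    }

    // Drop samples older than the window, then report how many remain.
    public int size(long nowMillis) {
        while (!samples.isEmpty() && samples.peekFirst().atMillis < nowMillis - windowMillis) {
            samples.removeFirst();
        }
        return samples.size();
    }

    public static void main(String[] args) {
        SlidingWindowSketch r = new SlidingWindowSketch(300); // 300 s, the documented default
        r.update(0L, 42);
        r.update(100_000L, 43);       // recorded 100 s in
        r.update(400_000L, 44);       // recorded 400 s in
        // At t = 400 s the window covers [100 s, 400 s], so the first sample is gone.
        System.out.println(r.size(400_000L));
    }
}
```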



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-10509) Document changes in SOLR-10418

2017-06-02 Thread Cassandra Targett (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10509?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cassandra Targett resolved SOLR-10509.
--
   Resolution: Fixed
Fix Version/s: master (7.0)

Added docs to master for 7.0.

> Document changes in SOLR-10418
> --
>
> Key: SOLR-10509
> URL: https://issues.apache.org/jira/browse/SOLR-10509
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Affects Versions: master (7.0)
>Reporter: Andrzej Bialecki 
>Assignee: Cassandra Targett
> Fix For: master (7.0)
>
>
> SOLR-10418 added system properties to the metrics API, together with some 
> other changes that should be documented:
> * The {{solr.jvm}} metrics group now contains a compound metric (i.e. a metric 
> that has several mixed-type properties) named {{system.properties}}. This 
> metric exposes most of the key/value pairs from {{System.getProperties()}}, 
> except for some properties that are considered sensitive.
> * A new optional element, {{/metrics/hiddenSysProps}}, is now supported in the 
> {{solr.xml}} config file. It can be used for declaring what system properties 
> are considered sensitive and should not be exposed via the metrics API. If this 
> element is absent, a default list of hidden properties is used, equivalent to 
> the following configuration:
> {code}
> <solr>
> ...
>   <metrics>
>     <hiddenSysProps>
>       <str>javax.net.ssl.keyStorePassword</str>
>       <str>javax.net.ssl.trustStorePassword</str>
>       <str>basicauth</str>
>       <str>zkDigestPassword</str>
>       <str>zkDigestReadonlyPassword</str>
>     </hiddenSysProps>
> ...
>   </metrics>
> </solr>
> {code}
> * The {{/admin/metrics}} handler now supports a {{property}} parameter that 
> can be used for selecting a specific property from a compound metric. Multiple 
> {{property}} parameters can be specified, and they act as a logical OR. For 
> example, the parameters 
> {{prefix=system.properties&property=user.home&property=user.name}} would 
> return just the two specified system properties. This property selection 
> mechanism also works for other types of metrics that have multiple properties 
> (e.g. timers, meters, histograms), for example 
> {{property=p99&property=p99_ms}} will return only the 99-th percentile values 
> from all selected histograms and timers.
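The logical-OR semantics of repeated {{property}} parameters can be sketched as a plain map filter. Names and types below are illustrative only, not Solr's internal API:

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Illustrative sketch: several requested property names are OR-ed together,
// and only matching entries of a compound metric are returned.
public class PropertyFilterSketch {
    static Map<String, Object> select(Map<String, Object> compound, List<String> wanted) {
        Map<String, Object> out = new LinkedHashMap<>();
        for (String name : wanted) {            // logical OR over requested names
            if (compound.containsKey(name)) {
                out.put(name, compound.get(name));
            }
        }
        return out;
    }

    public static void main(String[] args) {
        Map<String, Object> sysProps = new LinkedHashMap<>();
        sysProps.put("user.home", "/home/solr");
        sysProps.put("user.name", "solr");
        sysProps.put("os.arch", "amd64");
        // Analogous to property=user.home&property=user.name in the request:
        System.out.println(select(sysProps, List.of("user.home", "user.name")));
    }
}
```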






[jira] [Resolved] (SOLR-10649) Document new metrics config changes

2017-06-02 Thread Cassandra Targett (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10649?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cassandra Targett resolved SOLR-10649.
--
   Resolution: Fixed
Fix Version/s: master (7.0)

Added docs to master for 7.0.

> Document new metrics config changes
> ---
>
> Key: SOLR-10649
> URL: https://issues.apache.org/jira/browse/SOLR-10649
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation, metrics
>Affects Versions: master (7.0)
>Reporter: Andrzej Bialecki 
> Fix For: master (7.0)
>
>
> SOLR-10262 made several improvements to the configuration of metrics in Solr.
> Internally, the {{<metrics>}} configuration section of {{solr.xml}} is now 
> represented as a {{MetricsConfig}} class, which makes it easier to add new 
> properties there.
> A new section, {{<metrics><suppliers>}}, has been added to {{solr.xml}}, 
> which allows users to define what implementations of metrics they want to use 
> and to configure their parameters. This is useful e.g. when selecting what 
> kind of reservoir to use for histograms and timers, changing the reference 
> clock type, or providing any other configuration parameters for custom 
> implementations of metrics.
> The {{<suppliers>}} section specifies implementations and configurations of 
> metric suppliers, i.e. classes responsible for creating instances of metrics. 
> Default implementations are provided for all types of metrics, and they are 
> used if no {{class}} attribute is specified or an invalid one is given. Custom 
> suppliers must implement the {{MetricSupplier}} interface and have a zero-args 
> constructor. Bean setter methods will be used for applying values from their 
> plugin configuration; alternatively, they may implement 
> {{PluginInfoInitialized}}. These rules also apply to any other custom classes 
> loaded in the metrics config, e.g. custom {{Reservoir}} implementations.
> Each configuration element in the {{<suppliers>}} section follows the general 
> plugin configuration format, i.e. it may optionally contain "name" and "class" 
> attributes and may contain sub-elements that define typed configuration 
> parameters. As mentioned above, if the "class" attribute is missing or 
> invalid (the class can't be loaded or it doesn't implement the right 
> interface), a default implementation is used. If an element is missing, 
> the default configuration is used.
> The following elements are supported in this section:
> * {{<counter>}} - defines the implementation and configuration of a 
> {{Counter}} supplier. The default implementation doesn't support any 
> configuration.
> * {{<meter>}} - implementation and configuration of a {{Meter}} supplier. The 
> default implementation supports one optional config parameter:
> ** {{clock}} - the type of clock to use for calculating EWMA rates; 
> supported values are "user" (default, which uses {{System.nanoTime()}}) and 
> "cpu" (which uses the current thread's CPU time).
> * {{<histogram>}} - implementation and configuration of a {{Histogram}} 
> supplier. In addition to the {{clock}} parameter, the following parameters are 
> supported by the default supplier implementation:
> ** {{reservoir}} - a fully-qualified class name of the {{Reservoir}} 
> implementation to use. The default value is 
> {{com.codahale.metrics.ExponentiallyDecayingReservoir}}. Note: all 
> implementations of {{Reservoir}} that ship with the metrics library are 
> supported, even though they don't follow the custom class rules listed above. 
> The following config parameters can be used with these implementations:
> *** {{size}} - (int, default is 1028) reservoir size.
> *** {{alpha}} - (double, default is 0.015) decay parameter for 
> {{ExponentiallyDecayingReservoir}}.
> *** {{window}} - (long, default is 300) window size parameter for 
> {{SlidingTimeWindowReservoir}}, in seconds. 300 seconds = 5 minutes, which 
> more or less fits the default bias of {{ExponentiallyDecayingReservoir}}.
> * {{<timer>}} - implementation and configuration of a {{Timer}} supplier. The 
> default implementation supports the clock- and reservoir-related configuration 
> parameters described above.
> An example section of {{solr.xml}}: the default {{Meter}} supplier is used 
> with a non-default clock, and the default {{Timer}} supplier is used with a 
> non-default reservoir configuration:
> {code}
> <metrics>
>   <suppliers>
>     <meter>
>       <str name="clock">cpu</str>
>     </meter>
>     <timer>
>       <str name="reservoir">com.codahale.metrics.SlidingTimeWindowReservoir</str>
>       <long name="window">600</long>
>     </timer>
>   </suppliers>
> </metrics>
> {code}
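As a side note on the {{alpha}} default mentioned above, a quick back-of-envelope sketch shows why a 300-second window "more or less fits" the default bias. This assumes the forward-decay weighting scheme that {{ExponentiallyDecayingReservoir}} is based on; it is an approximation for intuition, not the library's code:

```java
// A sample's weight relative to one recorded `ageSeconds` later shrinks
// by exp(-alpha * age) under forward decay.
public class DecaySketch {
    static double relativeWeight(double alpha, double ageSeconds) {
        return Math.exp(-alpha * ageSeconds);
    }

    public static void main(String[] args) {
        double alpha = 0.015; // documented default
        // A 5-minute-old sample carries roughly 1% of a fresh sample's
        // weight, i.e. the reservoir is dominated by the last ~5 minutes.
        System.out.printf("%.4f%n", relativeWeight(alpha, 300));
    }
}
```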






[jira] [Commented] (SOLR-10509) Document changes in SOLR-10418

2017-06-02 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10509?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16035311#comment-16035311
 ] 

ASF subversion and git services commented on SOLR-10509:


Commit 9e99a23f31b8d3508526ea473b944beb13303334 in lucene-solr's branch 
refs/heads/master from [~ctargett]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=9e99a23 ]

SOLR-10509, SOLR-10649: add docs for new metric features; add <metrics> to 
solr.xml docs


> Document changes in SOLR-10418
> --
>
> Key: SOLR-10509
> URL: https://issues.apache.org/jira/browse/SOLR-10509
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Affects Versions: master (7.0)
>Reporter: Andrzej Bialecki 
>Assignee: Cassandra Targett
>
> SOLR-10418 added system properties to the metrics API, together with some 
> other changes that should be documented:
> * The {{solr.jvm}} metrics group now contains a compound metric (i.e. a metric 
> that has several mixed-type properties) named {{system.properties}}. This 
> metric exposes most of the key/value pairs from {{System.getProperties()}}, 
> except for some properties that are considered sensitive.
> * A new optional element, {{/metrics/hiddenSysProps}}, is now supported in the 
> {{solr.xml}} config file. It can be used for declaring what system properties 
> are considered sensitive and should not be exposed via the metrics API. If this 
> element is absent, a default list of hidden properties is used, equivalent to 
> the following configuration:
> {code}
> <solr>
> ...
>   <metrics>
>     <hiddenSysProps>
>       <str>javax.net.ssl.keyStorePassword</str>
>       <str>javax.net.ssl.trustStorePassword</str>
>       <str>basicauth</str>
>       <str>zkDigestPassword</str>
>       <str>zkDigestReadonlyPassword</str>
>     </hiddenSysProps>
> ...
>   </metrics>
> </solr>
> {code}
> * The {{/admin/metrics}} handler now supports a {{property}} parameter that 
> can be used for selecting a specific property from a compound metric. Multiple 
> {{property}} parameters can be specified, and they act as a logical OR. For 
> example, the parameters 
> {{prefix=system.properties&property=user.home&property=user.name}} would 
> return just the two specified system properties. This property selection 
> mechanism also works for other types of metrics that have multiple properties 
> (e.g. timers, meters, histograms), for example 
> {{property=p99&property=p99_ms}} will return only the 99-th percentile values 
> from all selected histograms and timers.
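The hiding behaviour described above can be sketched as a simple name-based filter. This is illustrative only; Solr's actual implementation differs in detail, and the helper names are invented:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Set;

// Illustrative sketch of hiddenSysProps: properties whose names appear on
// the hidden list are dropped before the compound metric is reported.
public class HiddenPropsSketch {
    // The default hidden names, as listed in the equivalent configuration above.
    static final Set<String> HIDDEN = Set.of(
        "javax.net.ssl.keyStorePassword",
        "javax.net.ssl.trustStorePassword",
        "basicauth",
        "zkDigestPassword",
        "zkDigestReadonlyPassword");

    static Map<String, String> expose(Map<String, String> sysProps) {
        Map<String, String> out = new LinkedHashMap<>();
        for (Map.Entry<String, String> e : sysProps.entrySet()) {
            if (!HIDDEN.contains(e.getKey())) {   // redact sensitive names
                out.put(e.getKey(), e.getValue());
            }
        }
        return out;
    }

    public static void main(String[] args) {
        Map<String, String> props = new LinkedHashMap<>();
        props.put("user.home", "/home/solr");
        props.put("zkDigestPassword", "secret");
        System.out.println(expose(props)); // only the non-sensitive entry survives
    }
}
```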






[jira] [Commented] (SOLR-10649) Document new metrics config changes

2017-06-02 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16035312#comment-16035312
 ] 

ASF subversion and git services commented on SOLR-10649:


Commit 9e99a23f31b8d3508526ea473b944beb13303334 in lucene-solr's branch 
refs/heads/master from [~ctargett]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=9e99a23 ]

SOLR-10509, SOLR-10649: add docs for new metric features; add <metrics> to 
solr.xml docs


> Document new metrics config changes
> ---
>
> Key: SOLR-10649
> URL: https://issues.apache.org/jira/browse/SOLR-10649
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation, metrics
>Affects Versions: master (7.0)
>Reporter: Andrzej Bialecki 
>
> SOLR-10262 made several improvements to the configuration of metrics in Solr.
> Internally, the {{<metrics>}} configuration section of {{solr.xml}} is now 
> represented as a {{MetricsConfig}} class, which makes it easier to add new 
> properties there.
> A new section, {{<metrics><suppliers>}}, has been added to {{solr.xml}}, 
> which allows users to define what implementations of metrics they want to use 
> and to configure their parameters. This is useful e.g. when selecting what 
> kind of reservoir to use for histograms and timers, changing the reference 
> clock type, or providing any other configuration parameters for custom 
> implementations of metrics.
> The {{<suppliers>}} section specifies implementations and configurations of 
> metric suppliers, i.e. classes responsible for creating instances of metrics. 
> Default implementations are provided for all types of metrics, and they are 
> used if no {{class}} attribute is specified or an invalid one is given. Custom 
> suppliers must implement the {{MetricSupplier}} interface and have a zero-args 
> constructor. Bean setter methods will be used for applying values from their 
> plugin configuration; alternatively, they may implement 
> {{PluginInfoInitialized}}. These rules also apply to any other custom classes 
> loaded in the metrics config, e.g. custom {{Reservoir}} implementations.
> Each configuration element in the {{<suppliers>}} section follows the general 
> plugin configuration format, i.e. it may optionally contain "name" and "class" 
> attributes and may contain sub-elements that define typed configuration 
> parameters. As mentioned above, if the "class" attribute is missing or 
> invalid (the class can't be loaded or it doesn't implement the right 
> interface), a default implementation is used. If an element is missing, 
> the default configuration is used.
> The following elements are supported in this section:
> * {{<counter>}} - defines the implementation and configuration of a 
> {{Counter}} supplier. The default implementation doesn't support any 
> configuration.
> * {{<meter>}} - implementation and configuration of a {{Meter}} supplier. The 
> default implementation supports one optional config parameter:
> ** {{clock}} - the type of clock to use for calculating EWMA rates; 
> supported values are "user" (default, which uses {{System.nanoTime()}}) and 
> "cpu" (which uses the current thread's CPU time).
> * {{<histogram>}} - implementation and configuration of a {{Histogram}} 
> supplier. In addition to the {{clock}} parameter, the following parameters are 
> supported by the default supplier implementation:
> ** {{reservoir}} - a fully-qualified class name of the {{Reservoir}} 
> implementation to use. The default value is 
> {{com.codahale.metrics.ExponentiallyDecayingReservoir}}. Note: all 
> implementations of {{Reservoir}} that ship with the metrics library are 
> supported, even though they don't follow the custom class rules listed above. 
> The following config parameters can be used with these implementations:
> *** {{size}} - (int, default is 1028) reservoir size.
> *** {{alpha}} - (double, default is 0.015) decay parameter for 
> {{ExponentiallyDecayingReservoir}}.
> *** {{window}} - (long, default is 300) window size parameter for 
> {{SlidingTimeWindowReservoir}}, in seconds. 300 seconds = 5 minutes, which 
> more or less fits the default bias of {{ExponentiallyDecayingReservoir}}.
> * {{<timer>}} - implementation and configuration of a {{Timer}} supplier. The 
> default implementation supports the clock- and reservoir-related configuration 
> parameters described above.
> An example section of {{solr.xml}}: the default {{Meter}} supplier is used 
> with a non-default clock, and the default {{Timer}} supplier is used with a 
> non-default reservoir configuration:
> {code}
> <metrics>
>   <suppliers>
>     <meter>
>       <str name="clock">cpu</str>
>     </meter>
>     <timer>
>       <str name="reservoir">com.codahale.metrics.SlidingTimeWindowReservoir</str>
>       <long name="window">600</long>
>     </timer>
>   </suppliers>
> </metrics>
> {code}
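The bean-setter rule in the description above ("Bean setter methods will be used for applying values from their plugin configuration") can be sketched with plain reflection. This is illustrative only, not Solr's plugin loader; the supplier class and config map are invented for the example:

```java
import java.lang.reflect.Method;
import java.util.Map;

// Illustrative sketch: for each config entry, derive a setXxx method name
// from the parameter name and invoke it on the supplier instance.
public class BeanSetterSketch {
    public static class DemoSupplier {
        private String clock = "user";               // default clock type
        public void setClock(String clock) { this.clock = clock; }
        public String getClock() { return clock; }
    }

    static void apply(Object bean, Map<String, Object> config) throws Exception {
        for (Map.Entry<String, Object> e : config.entrySet()) {
            String name = "set" + Character.toUpperCase(e.getKey().charAt(0))
                    + e.getKey().substring(1);       // "clock" -> "setClock"
            Method setter = bean.getClass().getMethod(name, e.getValue().getClass());
            setter.invoke(bean, e.getValue());
        }
    }

    public static void main(String[] args) throws Exception {
        DemoSupplier supplier = new DemoSupplier();
        apply(supplier, Map.of("clock", "cpu"));     // like <str name="clock">cpu</str>
        System.out.println(supplier.getClock());
    }
}
```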





[JENKINS] Lucene-Solr-6.x-Windows (32bit/jdk1.8.0_131) - Build # 929 - Still Unstable!

2017-06-02 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Windows/929/
Java: 32bit/jdk1.8.0_131 -server -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  
org.apache.solr.util.TestMaxTokenLenTokenizer.testSingleFieldSameAnalyzers

Error Message:
Exception during query

Stack Trace:
java.lang.RuntimeException: Exception during query
at 
__randomizedtesting.SeedInfo.seed([AE139E3099EF6329:C4C1A15FC10CB3E6]:0)
at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:895)
at 
org.apache.solr.util.TestMaxTokenLenTokenizer.testSingleFieldSameAnalyzers(TestMaxTokenLenTokenizer.java:104)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.RuntimeException: REQUEST FAILED: 
xpath=//result[@numFound=1]
xml response was: 

<response>
<lst name="responseHeader"><int name="status">0</int><int name="QTime">0</int></lst><result name="response" numFound="0" start="0"></result>
</response>

request was: q=letter0:lett&wt=xml
at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:888)
... 40 more




Build Log:
[...truncated 13299 lines...]
   [junit4] 

[jira] [Commented] (SOLR-10506) Possible memory leak upon collection reload

2017-06-02 Thread Christine Poerschke (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10506?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16035222#comment-16035222
 ] 

Christine Poerschke commented on SOLR-10506:


bq. ... to have as a separate issue ...
bq. ... preference would be to open a separate issue for that ...

Sounds good.

I just returned to this, and maybe Friday evening timing was a mistake and it 
will all be clearer next week ... am struggling to convince myself that the 
proposed removal of the watcher re-creation in _ZkIndexSchemaReader.command()_ 
is appropriate. The existing comment on the method says
{code}
  /**
   * Called after a ZooKeeper session expiration occurs; need to re-create the 
watcher and update the current
   * schema from ZooKeeper.
   */
{code}
and _ZkController.addOnReconnectListener(OnReconnect listener)_ method has a 
comment
{code}
  /**
   * Add a listener to be notified once there is a new session created after a 
ZooKeeper session expiration occurs;
   * in most cases, listeners will be components that have watchers that need 
to be re-created.
   */
{code}
and intuitively "we got disconnected and so need to recreate our watchers 
since/if the watchers we had previously were for the connection that got 
disconnected" seems plausible but then equally so "we registered watches with 
the zkclient and wouldn't it be nice for zkclient to take care of watcher 
lifecycle across disconnects?" is not implausible. Need to go check out the ZK 
docs and stuff, not today.


> Possible memory leak upon collection reload
> ---
>
> Key: SOLR-10506
> URL: https://issues.apache.org/jira/browse/SOLR-10506
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Server
>Affects Versions: 6.5
>Reporter: Torsten Bøgh Köster
>Assignee: Christine Poerschke
> Attachments: solr_collection_reload_13_cores.png, 
> solr_gc_path_via_zk_WatchManager.png
>
>
> Upon manual Solr Collection reloading, references to the closed {{SolrCore}} 
> are not fully removed by the garbage collector as a strong reference to the 
> {{ZkIndexSchemaReader}} is held in a ZooKeeper {{Watcher}} that watches for 
> schema changes.
> In our case, this leads to a massive memory leak as managed resources are 
> still referenced by the closed {{SolrCore}}. Our Solr cloud environment 
> utilizes rather large managed resources (synonyms, stopwords). To reproduce, 
> we fired our environment up and reloaded the collection 13 times. As a result, 
> we fully exhausted our heap. A closer look with the Yourkit profiler revealed 
> 13 {{SolrCore}} instances, still holding strong references to the garbage 
> collection root (see screenshot 1).
> Each {{SolrCore}} instance holds a single path with strong references to the 
> gc root via a {{Watcher}} in {{ZkIndexSchemaReader}} (see screenshot 2). The 
> {{ZkIndexSchemaReader}} registers a close hook in the {{SolrCore}}, but the 
> ZooKeeper watcher is not removed upon core close.
> We supplied a GitHub pull request 
> (https://github.com/apache/lucene-solr/pull/197) that extracts the ZooKeeper 
> {{Watcher}} as a static inner class. To eliminate the memory leak, the schema 
> reader is held inside a {{WeakReference}} and the reference is explicitly 
> removed on core close.
> Initially I wanted to supply a test case but unfortunately did not find a 
> good starting point ...
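The shape of the fix described above (a static watcher class holding only a WeakReference to its owner, cleared explicitly on close) can be sketched without any ZooKeeper dependency. All names below are invented for the illustration; this is not the actual pull request:

```java
import java.lang.ref.WeakReference;

// Illustrative sketch: because the watcher holds only a WeakReference,
// a closed owner can be garbage-collected even while an external system
// (ZooKeeper, in the real case) still holds the watcher itself.
public class WeakWatcherSketch {
    interface Watcher { void process(String event); }

    static final class SchemaWatcher implements Watcher {
        private volatile WeakReference<StringBuilder> ownerRef;

        SchemaWatcher(StringBuilder owner) { this.ownerRef = new WeakReference<>(owner); }

        // Called from the owner's close hook: drop the reference explicitly.
        void discard() { ownerRef = new WeakReference<>(null); }

        @Override public void process(String event) {
            StringBuilder owner = ownerRef.get();
            if (owner == null) return;   // owner closed or collected: ignore
            owner.append(event);
        }
    }

    public static void main(String[] args) {
        StringBuilder owner = new StringBuilder();
        SchemaWatcher watcher = new SchemaWatcher(owner);
        watcher.process("schema-updated;");
        watcher.discard();               // simulate core close
        watcher.process("ignored");      // no longer reaches the owner
        System.out.println(owner);
    }
}
```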






[jira] [Commented] (SOLR-10557) Make "compact" format default for /admin/metrics

2017-06-02 Thread Cassandra Targett (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10557?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16035145#comment-16035145
 ] 

Cassandra Targett commented on SOLR-10557:
--

[~ab] My understanding of this is that if I add {{compact=false}} to my request 
I get the more verbose version? I'm not seeing that, so wondering if the option 
was removed altogether?

> Make "compact" format default for /admin/metrics
> 
>
> Key: SOLR-10557
> URL: https://issues.apache.org/jira/browse/SOLR-10557
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: metrics
>Affects Versions: master (7.0)
>Reporter: Andrzej Bialecki 
>Assignee: Andrzej Bialecki 
>Priority: Trivial
> Fix For: master (7.0)
>
>
> The "compact" format is more readable and significantly more compact :) It 
> should be the default.






[jira] [Resolved] (SOLR-10801) Delete deprecated & dead code (that are exposed to plugin writers)

2017-06-02 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10801?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man resolved SOLR-10801.
-
Resolution: Fixed

> Delete deprecated & dead code (that are exposed to plugin writers)
> --
>
> Key: SOLR-10801
> URL: https://issues.apache.org/jira/browse/SOLR-10801
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>Assignee: Hoss Man
>Priority: Blocker
> Fix For: master (7.0)
>
> Attachments: SOLR-10801.patch
>
>
> These methods have been deprecated for a while, are currently unused in Solr 
> (master), and are very visible to people writing "plugins" -- so we should 
> definitely ensure they are removed before 7.0...
> * QParser.getSort
> * QueryParsing.getQueryParserDefaultOperator
> * QueryParsing.getDefaultField
> * SolrPluginUtils.docListToSolrDocumentList






[jira] [Commented] (SOLR-10801) Delete deprecated & dead code (that are exposed to plugin writers)

2017-06-02 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10801?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16035125#comment-16035125
 ] 

ASF subversion and git services commented on SOLR-10801:


Commit 038baaed92a0894faa4204089373fd1deb295097 in lucene-solr's branch 
refs/heads/master from Chris Hostetter
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=038baae ]

SOLR-10801: Remove several deprecated methods that were exposed to plugin 
writers


> Delete deprecated & dead code (that are exposed to plugin writers)
> --
>
> Key: SOLR-10801
> URL: https://issues.apache.org/jira/browse/SOLR-10801
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>Assignee: Hoss Man
>Priority: Blocker
> Fix For: master (7.0)
>
> Attachments: SOLR-10801.patch
>
>
> These methods have been deprecated for a while, are currently unused in Solr 
> (master), and are very visible to people writing "plugins" -- so we should 
> definitely ensure they are removed before 7.0...
> * QParser.getSort
> * QueryParsing.getQueryParserDefaultOperator
> * QueryParsing.getDefaultField
> * SolrPluginUtils.docListToSolrDocumentList






[jira] [Assigned] (SOLR-10509) Document changes in SOLR-10418

2017-06-02 Thread Cassandra Targett (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10509?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cassandra Targett reassigned SOLR-10509:


Assignee: Cassandra Targett

> Document changes in SOLR-10418
> --
>
> Key: SOLR-10509
> URL: https://issues.apache.org/jira/browse/SOLR-10509
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Affects Versions: master (7.0)
>Reporter: Andrzej Bialecki 
>Assignee: Cassandra Targett
>
> SOLR-10418 added system properties to the metrics API, together with some 
> other changes that should be documented:
> * The {{solr.jvm}} metrics group now contains a compound metric (i.e. a metric 
> that has several mixed-type properties) named {{system.properties}}. This 
> metric exposes most of the key/value pairs from {{System.getProperties()}}, 
> except for some properties that are considered sensitive.
> * A new optional element, {{/metrics/hiddenSysProps}}, is now supported in the 
> {{solr.xml}} config file. It can be used for declaring what system properties 
> are considered sensitive and should not be exposed via the metrics API. If this 
> element is absent, a default list of hidden properties is used, equivalent to 
> the following configuration:
> {code}
> <solr>
> ...
>   <metrics>
>     <hiddenSysProps>
>       <str>javax.net.ssl.keyStorePassword</str>
>       <str>javax.net.ssl.trustStorePassword</str>
>       <str>basicauth</str>
>       <str>zkDigestPassword</str>
>       <str>zkDigestReadonlyPassword</str>
>     </hiddenSysProps>
> ...
>   </metrics>
> </solr>
> {code}
> * The {{/admin/metrics}} handler now supports a {{property}} parameter that 
> can be used for selecting a specific property from a compound metric. Multiple 
> {{property}} parameters can be specified, and they act as a logical OR. For 
> example, the parameters 
> {{prefix=system.properties&property=user.home&property=user.name}} would 
> return just the two specified system properties. This property selection 
> mechanism also works for other types of metrics that have multiple properties 
> (e.g. timers, meters, histograms), for example 
> {{property=p99&property=p99_ms}} will return only the 99-th percentile values 
> from all selected histograms and timers.
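For readers unfamiliar with the p99-style properties selected above: a percentile is read off a sorted snapshot of a reservoir's samples. The nearest-rank sketch below is an approximation for intuition (Dropwizard's {{Snapshot}} interpolates between samples, so its exact values can differ):

```java
import java.util.Arrays;

// Illustrative sketch: compute a percentile from recorded values using
// the nearest-rank method over a sorted copy.
public class PercentileSketch {
    static long percentile(long[] values, double q) {
        long[] sorted = values.clone();
        Arrays.sort(sorted);
        int rank = (int) Math.ceil(q * sorted.length) - 1; // nearest-rank index
        return sorted[Math.max(rank, 0)];
    }

    public static void main(String[] args) {
        long[] latenciesMs = new long[100];
        for (int i = 0; i < 100; i++) latenciesMs[i] = i + 1; // 1..100 ms
        // 99% of recorded latencies fall at or below this value.
        System.out.println(percentile(latenciesMs, 0.99));
    }
}
```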






[jira] [Commented] (SOLR-10280) Document the "compact" format of /admin/metrics

2017-06-02 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10280?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16035106#comment-16035106
 ] 

ASF subversion and git services commented on SOLR-10280:


Commit a3af2d4c158d989f761f6e1cf17a0ad7f6566f9f in lucene-solr's branch 
refs/heads/branch_6_6 from [~ctargett]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=a3af2d4 ]

SOLR-10280: document "compact" format of metrics response


> Document the "compact" format of /admin/metrics
> ---
>
> Key: SOLR-10280
> URL: https://issues.apache.org/jira/browse/SOLR-10280
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Affects Versions: 6.5, master (7.0)
>Reporter: Andrzej Bialecki 
>Assignee: Cassandra Targett
>Priority: Minor
> Fix For: 6.6, master (7.0)
>
>
> SOLR-10247 introduced a new compact format for the output of the 
> {{/admin/metrics}} handler. The new format is turned on by the request 
> parameter {{compact=true}}; the default value is {{false}}.
> Example of regular format in XML:
> {code}
> http://localhost:8983/solr/admin/metrics?registry=solr.core.gettingstarted&group=CORE
> ...
> <lst name="metrics">
> <lst name="solr.core.gettingstarted">
> <lst name="CORE.aliases">
> <arr name="value">
> <str>gettingstarted</str>
> </arr>
> </lst>
> <lst name="CORE.coreName">
> <str name="value">gettingstarted</str>
> </lst>
> <lst name="CORE.indexDir">
> <str 
> name="value">/Users/ab/work/lucene/lucene-solr/solr/example/schemaless/solr/gettingstarted/data/index/</str>
> </lst>
> <lst name="CORE.instanceDir">
> <str 
> name="value">/Users/ab/work/lucene/lucene-solr/solr/example/schemaless/solr/gettingstarted</str>
> </lst>
> <lst name="CORE.refCount">
> <int name="value">1</int>
> </lst>
> <lst name="CORE.startTime">
> <date name="value">2017-03-14T11:43:23.822Z</date>
> </lst>
> ...
> {code}
> Example of compact format in XML:
> {code}
> ...
> <lst name="solr.core.gettingstarted">
> <arr name="CORE.aliases">
> <str>gettingstarted</str>
> </arr>
> <str name="CORE.coreName">gettingstarted</str>
> <str 
> name="CORE.indexDir">/Users/ab/work/lucene/lucene-solr/solr/example/schemaless/solr/gettingstarted/data/index/</str>
> <str 
> name="CORE.instanceDir">/Users/ab/work/lucene/lucene-solr/solr/example/schemaless/solr/gettingstarted</str>
> <int name="CORE.refCount">1</int>
> <date name="CORE.startTime">2017-03-14T11:43:23.822Z</date>
> </lst>
> ...
> {code}
> Example of regular format in JSON:
> {code}
> http://localhost:8983/solr/admin/metrics?registry=solr.core.gettingstarted&group=CORE&wt=json
> ...
>   "metrics": [
> "solr.core.gettingstarted",
> {
>   "CORE.aliases": {
> "value": [
>   "gettingstarted"
> ]
>   },
>   "CORE.coreName": {
> "value": "gettingstarted"
>   },
>   "CORE.indexDir": {
> "value": 
> "/Users/ab/work/lucene/lucene-solr/solr/example/schemaless/solr/gettingstarted/data/index/"
>   },
>   "CORE.instanceDir": {
> "value": 
> "/Users/ab/work/lucene/lucene-solr/solr/example/schemaless/solr/gettingstarted"
>   },
>   "CORE.refCount": {
> "value": 1
>   },
>   "CORE.startTime": {
> "value": "2017-03-14T11:43:23.822Z"
>   }
> }
>   ]
> ...
> {code}
> Example of compact format in JSON:
> {code}
> http://localhost:8983/solr/admin/metrics?registry=solr.core.gettingstarted&compact=true&group=CORE&wt=json
> ...
>   "metrics": [
> "solr.core.gettingstarted",
> {
>   "CORE.aliases": [
> "gettingstarted"
>   ],
>   "CORE.coreName": "gettingstarted",
>   "CORE.indexDir": 
> "/Users/ab/work/lucene/lucene-solr/solr/example/schemaless/solr/gettingstarted/data/index/",
>   "CORE.instanceDir": 
> "/Users/ab/work/lucene/lucene-solr/solr/example/schemaless/solr/gettingstarted",
>   "CORE.refCount": 1,
>   "CORE.startTime": "2017-03-14T11:43:23.822Z"
> }
>   ]
> ...
> {code}
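A rough way to see the relationship between the two formats: the compact form simply inlines each metric's single-key {{value}} wrapper. The sketch below illustrates that transform in Python; it is an editor's illustration, not Solr's implementation.

```python
def to_compact(metrics):
    """Collapse the regular format's {"value": ...} wrappers (sketch only)."""
    return {name: props["value"] for name, props in metrics.items()}

# A fragment of the regular JSON format shown above.
regular = {
    "CORE.coreName": {"value": "gettingstarted"},
    "CORE.refCount": {"value": 1},
    "CORE.aliases": {"value": ["gettingstarted"]},
}
compact = to_compact(regular)
# compact == {"CORE.coreName": "gettingstarted", "CORE.refCount": 1,
#             "CORE.aliases": ["gettingstarted"]}
```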



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10280) Document the "compact" format of /admin/metrics

2017-06-02 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10280?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16035105#comment-16035105
 ] 

ASF subversion and git services commented on SOLR-10280:


Commit a14b6f70b6b7628f7507513455240e9ab382f25a in lucene-solr's branch 
refs/heads/branch_6x from [~ctargett]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=a14b6f7 ]

SOLR-10280: document "compact" format of metrics response


> Document the "compact" format of /admin/metrics
> ---
>
> Key: SOLR-10280
> URL: https://issues.apache.org/jira/browse/SOLR-10280
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Affects Versions: 6.5, master (7.0)
>Reporter: Andrzej Bialecki 
>Assignee: Cassandra Targett
>Priority: Minor
> Fix For: 6.6, master (7.0)
>
>



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10280) Document the "compact" format of /admin/metrics

2017-06-02 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10280?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16035103#comment-16035103
 ] 

ASF subversion and git services commented on SOLR-10280:


Commit 3b45d8284fed203eba146d98662485eb7a31364c in lucene-solr's branch 
refs/heads/master from [~ctargett]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=3b45d82 ]

SOLR-10280: document "compact" format of metrics response


> Document the "compact" format of /admin/metrics
> ---
>
> Key: SOLR-10280
> URL: https://issues.apache.org/jira/browse/SOLR-10280
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Affects Versions: 6.5, master (7.0)
>Reporter: Andrzej Bialecki 
>Assignee: Cassandra Targett
>Priority: Minor
> Fix For: 6.6, master (7.0)
>
>



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-10280) Document the "compact" format of /admin/metrics

2017-06-02 Thread Cassandra Targett (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10280?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cassandra Targett resolved SOLR-10280.
--
   Resolution: Fixed
Fix Version/s: master (7.0)
   6.6

Added to parameters section of {{solr-ref-guide/src/metrics-reporting.adoc}}.

> Document the "compact" format of /admin/metrics
> ---
>
> Key: SOLR-10280
> URL: https://issues.apache.org/jira/browse/SOLR-10280
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Affects Versions: 6.5, master (7.0)
>Reporter: Andrzej Bialecki 
>Assignee: Cassandra Targett
>Priority: Minor
> Fix For: 6.6, master (7.0)
>
>



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-10799) Create a new function to get eligible replicas in HttpShardHandler

2017-06-02 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/SOLR-10799?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tomás Fernández Löbbe resolved SOLR-10799.
--
Resolution: Fixed

Thanks Domenico!

> Create a new function to get eligible replicas in HttpShardHandler
> --
>
> Key: SOLR-10799
> URL: https://issues.apache.org/jira/browse/SOLR-10799
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Domenico Fabio Marino
>Assignee: Tomás Fernández Löbbe
>Priority: Minor
> Attachments: SOLR-10799-6X.patch, SOLR-10799-6X.patch, 
> SOLR-10799.patch, SOLR-10799.patch
>
>
> Extract a function called createEligibleReplicas from prepDistributed() in 
> HttpShardHandler.
> This method takes a collection of all available replicas, a cluster state, 
> an onlyNrtReplicas boolean, and a predicate, and returns a list of eligible 
> replicas.
> This helps with readability and could be used to perform further refactoring 
> in the future.
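The extracted helper's contract can be modeled roughly as follows. This is a hedged Python sketch with assumed replica fields (`node`, `type`) and made-up data; the actual code is Java inside HttpShardHandler.

```python
def create_eligible_replicas(replicas, live_nodes, only_nrt, predicate):
    # Sketch: a replica is eligible if its node is live, it is an NRT replica
    # when only_nrt is requested, and it passes the caller's predicate.
    return [r for r in replicas
            if r["node"] in live_nodes
            and (not only_nrt or r["type"] == "NRT")
            and predicate(r)]

# Invented example data: one live NRT replica, one live PULL replica,
# and one NRT replica on a node that is not live.
replicas = [
    {"name": "r1", "node": "n1", "type": "NRT"},
    {"name": "r2", "node": "n2", "type": "PULL"},
    {"name": "r3", "node": "n3", "type": "NRT"},
]
eligible = create_eligible_replicas(replicas, {"n1", "n2"}, True, lambda r: True)
# eligible == [{"name": "r1", "node": "n1", "type": "NRT"}]
```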



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10799) Create a new function to get eligible replicas in HttpShardHandler

2017-06-02 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10799?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16035099#comment-16035099
 ] 

ASF subversion and git services commented on SOLR-10799:


Commit 3618fc529dff85ee604614b3c545fa0b5fbf3b06 in lucene-solr's branch 
refs/heads/master from [~tomasflobbe]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=3618fc5 ]

SOLR-10799: Refactor HttpShardHandler.prepDistributed collection of shard 
replicas


> Create a new function to get eligible replicas in HttpShardHandler
> --
>
> Key: SOLR-10799
> URL: https://issues.apache.org/jira/browse/SOLR-10799
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Domenico Fabio Marino
>Assignee: Tomás Fernández Löbbe
>Priority: Minor
> Attachments: SOLR-10799-6X.patch, SOLR-10799-6X.patch, 
> SOLR-10799.patch, SOLR-10799.patch
>
>



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (SOLR-10280) Document the "compact" format of /admin/metrics

2017-06-02 Thread Cassandra Targett (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10280?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cassandra Targett reassigned SOLR-10280:


Assignee: Cassandra Targett

> Document the "compact" format of /admin/metrics
> ---
>
> Key: SOLR-10280
> URL: https://issues.apache.org/jira/browse/SOLR-10280
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Affects Versions: 6.5, master (7.0)
>Reporter: Andrzej Bialecki 
>Assignee: Cassandra Targett
>Priority: Minor
>



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-5985) Consider moving the Solr Reference Guide from Confluence to the ASF CMS

2017-06-02 Thread Cassandra Targett (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5985?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cassandra Targett resolved SOLR-5985.
-
Resolution: Won't Fix

> Consider moving the Solr Reference Guide from Confluence to the ASF CMS
> ---
>
> Key: SOLR-5985
> URL: https://issues.apache.org/jira/browse/SOLR-5985
> Project: Solr
>  Issue Type: Task
>  Components: documentation
>Reporter: Steve Rowe
>Priority: Minor
>
> In the context of enabling Confluence shortcut link functionality 
> (SOLR-5965), [~joes] mentioned on #asfinfra that the ASF CMS can emulate all 
> Confluence features, including PDF export, and that we should consider 
> switching the Solr Reference Guide from Confluence to the ASF CMS.  See the 
> transcript at 
> [https://issues.apache.org/jira/browse/SOLR-5965?focusedCommentId=13969869=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13969869].
> Joe says that http://thrift.apache.org is a good example of a CMS-driven 
> website - more info on this blog post: 
> [http://blogs.apache.org/infra/entry/scaling_down_the_cms_to].



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-9898) Documentation for metrics collection and /admin/metrics

2017-06-02 Thread Cassandra Targett (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9898?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cassandra Targett resolved SOLR-9898.
-
   Resolution: Fixed
Fix Version/s: 6.4

Totally forgot to resolve this when I added it to the Ref Guide in 6.4.

> Documentation for metrics collection and /admin/metrics
> ---
>
> Key: SOLR-9898
> URL: https://issues.apache.org/jira/browse/SOLR-9898
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Affects Versions: 6.4, master (7.0)
>Reporter: Andrzej Bialecki 
>Assignee: Cassandra Targett
> Fix For: 6.4
>
>
> Draft documentation follows.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (PYLUCENE-37) Extended interfaces beyond first are ignored

2017-06-02 Thread Jesper Mattsson (JIRA)

 [ 
https://issues.apache.org/jira/browse/PYLUCENE-37?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesper Mattsson updated PYLUCENE-37:

Attachment: jcc.multiple.inheritance.patch

First attempt at solution.

> Extended interfaces beyond first are ignored
> 
>
> Key: PYLUCENE-37
> URL: https://issues.apache.org/jira/browse/PYLUCENE-37
> Project: PyLucene
>  Issue Type: Bug
>Reporter: Jesper Mattsson
> Attachments: jcc.multiple.inheritance.patch, Test.zip
>
>
> When generating a wrapper for a Java interface that extends more than one 
> other interface, only the first extended interface is used when generating 
> the C++ class.
> In cpp.header(), the code snippets:
> {code}
> if cls.isInterface():
> if interfaces:
> superCls = interfaces.pop(0)
> {code}
> and:
> {code}
> line(out, indent, 'class %s%s : public %s {',
>  _dll_export, cppname(names[-1]), absname(cppnames(superNames)))
> {code}
> are likely responsible.
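A minimal sketch of one possible fix, in Python mirroring (but not reproducing) JCC's cpp.header() logic: emit every extended interface into the C++ base-class list instead of popping only the first. The function and class names here are illustrative assumptions.

```python
def cpp_base_clause(super_names):
    # Join all super-interfaces into the C++ inheritance list, rather than
    # keeping only the first one as the current code does.
    return ", ".join("public %s" % name for name in super_names)

# Hypothetical interface extending two others.
decl = "class t_Both : %s {" % cpp_base_clause(["Interface1", "Interface2"])
# decl == "class t_Both : public Interface1, public Interface2 {"
```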



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (PYLUCENE-37) Extended interfaces beyond first are ignored

2017-06-02 Thread Jesper Mattsson (JIRA)

[ 
https://issues.apache.org/jira/browse/PYLUCENE-37?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16035020#comment-16035020
 ] 

Jesper Mattsson commented on PYLUCENE-37:
-

I made a first stab at a solution. It is only implemented in the Python 2 
version, and the Python wrapper generation isn't done, but I'll attach it as a 
patch in the hope that it will be of use to you.

> Extended interfaces beyond first are ignored
> 
>
> Key: PYLUCENE-37
> URL: https://issues.apache.org/jira/browse/PYLUCENE-37
> Project: PyLucene
>  Issue Type: Bug
>Reporter: Jesper Mattsson
> Attachments: Test.zip
>
>



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


Re: Release planning for 7.0

2017-06-02 Thread Steve Rowe
I see zero benefits from cutting branch_7x now.  Shawn, can you describe why 
you think we should do this?

My interpretation of your argument is that you’re in favor of delaying cutting 
branch_7_0 until feature freeze - which BTW is the status quo - but I don’t get 
why that argues for cutting branch_7x now.

--
Steve
www.lucidworks.com

> On Jun 2, 2017, at 12:14 PM, Anshum Gupta  wrote:
> 
> I was trying to find this stuff in the ReleaseToDo documentation as I don't 
> exactly remember what I did for 5.0, or what happened during 6.0 but there 
> doesn't seem to be any.
> 
> What you suggest makes things easier for sure but it ideally would also mean 
> that we don't add new features to 7x. If we intend to continue adding new 
> features, then we lose the idea behind cutting the branch.
> 
> The good part of having 7x, and 7.0 cut at the same time is that we can then 
> just work on 7.0 to get it stable, without adding more features.
> 
> What does everyone else think about this? Also, we can go with what we did in 
> the previous releases if that seems like the best option.
> 
> -Anshum
> 
> On Fri, Jun 2, 2017 at 9:01 AM Shawn Heisey  wrote:
> On 6/2/2017 6:45 AM, Adrien Grand wrote:
> > Hi Anshum, will you branch both branch_7x and branch_7_0 at the same
> > time? I think this is what we need to do but I'm asking in case you
> > had planned differently.
> 
> It seems like a better idea to create only branch_7x right now, and
> delay creating the 7_0 branch until we're ready to declare a feature
> freeze, after which new stuff would be slated for 7.1.
> 
> For the two major releases I've witnessed, it took quite a while to go
> from trunk/master to new release.  Most work for that preparation will
> already require committing to both master and 7x, with some changes only
> happening in one branch.  If we create the 7_0 branch now, a lot of work
> over a fairly long timeframe will need to be applied to three branches
> instead of two.  Backporting twice isn't difficult, but it is an extra
> step that could easily be forgotten, causing unwanted divergence between
> branches.
> 
> Usually the amount of time we need to worry about three branches is only
> a few days and doesn't involve very many commits.  I don't think we want
> that situation for as long as it would take to get from master to 7.0.
> 
> Thanks,
> Shawn
> 
> 
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
> 


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Release planning for 7.0

2017-06-02 Thread Anshum Gupta
I was trying to find this stuff in the ReleaseToDo documentation as I don't
exactly remember what I did for 5.0, or what happened during 6.0 but there
doesn't seem to be any.

What you suggest makes things easier for sure but it ideally would also
mean that we don't add new features to 7x. If we intend to continue adding
new features, then we lose the idea behind cutting the branch.

The good part of having 7x, and 7.0 cut at the same time is that we can
then just work on 7.0 to get it stable, without adding more features.

What does everyone else think about this? Also, we can go with what we did
in the previous releases if that seems like the best option.

-Anshum

On Fri, Jun 2, 2017 at 9:01 AM Shawn Heisey  wrote:

> On 6/2/2017 6:45 AM, Adrien Grand wrote:
> > Hi Anshum, will you branch both branch_7x and branch_7_0 at the same
> > time? I think this is what we need to do but I'm asking in case you
> > had planned differently.
>
> It seems like a better idea to create only branch_7x right now, and
> delay creating the 7_0 branch until we're ready to declare a feature
> freeze, after which new stuff would be slated for 7.1.
>
> For the two major releases I've witnessed, it took quite a while to go
> from trunk/master to new release.  Most work for that preparation will
> already require committing to both master and 7x, with some changes only
> happening in one branch.  If we create the 7_0 branch now, a lot of work
> over a fairly long timeframe will need to be applied to three branches
> instead of two.  Backporting twice isn't difficult, but it is an extra
> step that could easily be forgotten, causing unwanted divergence between
> branches.
>
> Usually the amount of time we need to worry about three branches is only
> a few days and doesn't involve very many commits.  I don't think we want
> that situation for as long as it would take to get from master to 7.0.
>
> Thanks,
> Shawn
>
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>
>


[jira] [Created] (SOLR-10805) Improve error handling for statistical Stream Evaluators

2017-06-02 Thread Joel Bernstein (JIRA)
Joel Bernstein created SOLR-10805:
-

 Summary: Improve error handling for statistical Stream Evaluators
 Key: SOLR-10805
 URL: https://issues.apache.org/jira/browse/SOLR-10805
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Joel Bernstein


This ticket will add more error handling to the new statistical Stream 
Evaluators.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Release planning for 7.0

2017-06-02 Thread Shawn Heisey
On 6/2/2017 6:45 AM, Adrien Grand wrote:
> Hi Anshum, will you branch both branch_7x and branch_7_0 at the same
> time? I think this is what we need to do but I'm asking in case you
> had planned differently.

It seems like a better idea to create only branch_7x right now, and
delay creating the 7_0 branch until we're ready to declare a feature
freeze, after which new stuff would be slated for 7.1.

For the two major releases I've witnessed, it took quite a while to go
from trunk/master to new release.  Most work for that preparation will
already require committing to both master and 7x, with some changes only
happening in one branch.  If we create the 7_0 branch now, a lot of work
over a fairly long timeframe will need to be applied to three branches
instead of two.  Backporting twice isn't difficult, but it is an extra
step that could easily be forgotten, causing unwanted divergence between
branches.

Usually the amount of time we need to worry about three branches is only
a few days and doesn't involve very many commits.  I don't think we want
that situation for as long as it would take to get from master to 7.0.

Thanks,
Shawn


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-6.x-Linux (64bit/jdk1.8.0_131) - Build # 3645 - Still Unstable!

2017-06-02 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/3645/
Java: 64bit/jdk1.8.0_131 -XX:-UseCompressedOops -XX:+UseSerialGC

2 tests failed.
FAILED:  org.apache.solr.cloud.TestStressCloudBlindAtomicUpdates.test_dv

Error Message:
java.lang.RuntimeException: Error from server at 
http://127.0.0.1:40255/solr/test_col: Failed synchronous update on shard 
StdNode: http://127.0.0.1:36309/solr/test_col_shard1_replica2/ update: 
org.apache.solr.client.solrj.request.UpdateRequest@79e2a1f3

Stack Trace:
java.util.concurrent.ExecutionException: java.lang.RuntimeException: Error from 
server at http://127.0.0.1:40255/solr/test_col: Failed synchronous update on 
shard StdNode: http://127.0.0.1:36309/solr/test_col_shard1_replica2/ update: 
org.apache.solr.client.solrj.request.UpdateRequest@79e2a1f3
  at __randomizedtesting.SeedInfo.seed([4BAFE8412879F788:7DBB8A07A224CD99]:0)
  at java.util.concurrent.FutureTask.report(FutureTask.java:122)
  at java.util.concurrent.FutureTask.get(FutureTask.java:192)
  at org.apache.solr.cloud.TestStressCloudBlindAtomicUpdates.checkField(TestStressCloudBlindAtomicUpdates.java:281)
  at org.apache.solr.cloud.TestStressCloudBlindAtomicUpdates.test_dv(TestStressCloudBlindAtomicUpdates.java:193)
  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
  at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
  at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
  at java.lang.reflect.Method.invoke(Method.java:498)
  at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
  at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
  at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
  at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
  at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
  at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
  at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
  at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
  at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
  at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
  at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
  at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
  at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
  at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
  at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
  at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
  at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
  at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
  at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
  at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
  at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
  at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
  at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
  at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
  at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
  at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
  at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
  at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
  at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
  at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
  at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)

[jira] [Commented] (SOLR-10574) Choose a default configset for Solr 7

2017-06-02 Thread Ishan Chattopadhyaya (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10574?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16034904#comment-16034904
 ] 

Ishan Chattopadhyaya commented on SOLR-10574:
-

Initially, I was looking at setting the toggleable flag as follows:
{code}
Start with basic_configs configset, create a collection "test3" with that 
configset.

Enable data driven nature:

curl http://localhost:8983/solr/test3/config -d '{"add-initparams": {"name": 
"data-driven-nature", "path": "/update/**", "defaults": {"update.chain": 
"add-unknown-fields-to-the-schema"}}}'

Disable data driven nature:

curl http://localhost:8983/solr/test3/config -d '{"delete-initparams" : 
"data-driven-nature" }'
{code}

This currently works as of 6.x, and it would have required minimal changes to 
achieve what we wanted (maybe just wrapping that lengthy command in a shorter 
wrapper).

However, Noble informed me that, going forward, editing initparams is not the 
best choice. On his suggestion, and also Alexandre's, I am now looking at 
using paramsets and constructing the update chain (currently called 
"add-unknown-fields-to-the-schema") programmatically, to be applied when the 
appropriate parameter(s) for enabling/disabling the data-driven nature are 
passed in. I shall post a patch soon.

> Choose a default configset for Solr 7
> -
>
> Key: SOLR-10574
> URL: https://issues.apache.org/jira/browse/SOLR-10574
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Ishan Chattopadhyaya
>Assignee: Ishan Chattopadhyaya
>Priority: Blocker
> Fix For: 7.0
>
>
> Currently, the data_driven_schema_configs is the default configset when 
> collections are created using the bin/solr script and no configset is 
> specified.
> However, that may not be the best choice. We need to decide which is the best 
> choice, out of the box, considering many users might create collections 
> without knowing about the concept of a configset going forward.
> (See also SOLR-10272)
> Proposed changes:
> # Let's deprecate what we know as data_driven_schema_configs
> # Build a "toggleable" data driven functionality into the basic_configs 
> configset (and make it the default)






[jira] [Assigned] (SOLR-10574) Choose a default configset for Solr 7

2017-06-02 Thread Ishan Chattopadhyaya (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10574?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ishan Chattopadhyaya reassigned SOLR-10574:
---

 Assignee: Ishan Chattopadhyaya
 Priority: Blocker  (was: Major)
Fix Version/s: 7.0
  Description: 
Currently, the data_driven_schema_configs is the default configset when 
collections are created using the bin/solr script and no configset is specified.
However, that may not be the best choice. We need to decide which is the best 
choice, out of the box, considering many users might create collections without 
knowing about the concept of a configset going forward.

(See also SOLR-10272)

Proposed changes:
# Let's deprecate what we know as data_driven_schema_configs
# Build a "toggleable" data driven functionality into the basic_configs 
configset (and make it the default)

  was:
Currently, the data_driven_schema_configs is the default configset when 
collections are created using the bin/solr script and no configset is specified.
However, that may not be the best choice. We need to decide which is the best 
choice, out of the box, considering many users might create collections without 
knowing about the concept of a configset going forward.

(See also SOLR-10272)


> Choose a default configset for Solr 7
> -
>
> Key: SOLR-10574
> URL: https://issues.apache.org/jira/browse/SOLR-10574
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Ishan Chattopadhyaya
>Assignee: Ishan Chattopadhyaya
>Priority: Blocker
> Fix For: 7.0
>
>
> Currently, the data_driven_schema_configs is the default configset when 
> collections are created using the bin/solr script and no configset is 
> specified.
> However, that may not be the best choice. We need to decide which is the best 
> choice, out of the box, considering many users might create collections 
> without knowing about the concept of a configset going forward.
> (See also SOLR-10272)
> Proposed changes:
> # Let's deprecate what we know as data_driven_schema_configs
> # Build a "toggleable" data driven functionality into the basic_configs 
> configset (and make it the default)






[jira] [Commented] (SOLR-7452) json facet api returning inconsistent counts in cloud set up

2017-06-02 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7452?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16034890#comment-16034890
 ] 

ASF subversion and git services commented on SOLR-7452:
---

Commit 393a2ed176b8acfe26cee821d7f3a8babed122b9 in lucene-solr's branch 
refs/heads/master from [~yo...@apache.org]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=393a2ed ]

SOLR-7452: tests: templatize refinement tests


> json facet api returning inconsistent counts in cloud set up
> 
>
> Key: SOLR-7452
> URL: https://issues.apache.org/jira/browse/SOLR-7452
> Project: Solr
>  Issue Type: Bug
>  Components: Facet Module
>Affects Versions: 5.1
>Reporter: Vamsi Krishna D
>  Labels: count, facet, sort
> Attachments: SOLR-7452.patch, SOLR-7452.patch
>
>   Original Estimate: 96h
>  Remaining Estimate: 96h
>
> While using the newly added feature of json term facet api 
> (http://yonik.com/json-facet-api/#TermsFacet) I am encountering inconsistent 
> returns of counts of faceted value ( Note I am running on a cloud mode of 
> solr). For example consider that i have txns_id(unique field or key), 
> consumer_number and amount. Now for a 10 million such records , lets say i 
> query for 
> q=*:*&rows=0&
>  json.facet={
>biskatoo:{
>type : terms,
>field : consumer_number,
>limit : 20,
>   sort : {y:desc},
>   numBuckets : true,
>   facet:{
>y : "sum(amount)"
>}
>}
>  }
> the results are as follows ( some are omitted ):
> "facets":{
> "count":6641277,
> "biskatoo":{
>   "numBuckets":3112708,
>   "buckets":[{
>   "val":"surya",
>   "count":4,
>   "y":2.264506},
>   {
>   "val":"raghu",
>   "COUNT":3,   // capitalised for recognition 
>   "y":1.8},
> {
>   "val":"malli",
>   "count":4,
>   "y":1.78}]}}}
> but if I restrict the query to 
> q=consumer_number:raghu&rows=0&
>  json.facet={
>biskatoo:{
>type : terms,
>field : consumer_number,
>limit : 20,
>   sort : {y:desc},
>   numBuckets : true,
>   facet:{
>y : "sum(amount)"
>}
>}
>  }
> i get :
>   "facets":{
> "count":4,
> "biskatoo":{
>   "numBuckets":1,
>   "buckets":[{
>   "val":"raghu",
>   "COUNT":4,
>   "y":2429708.24}]}}}
> One can see the count results are inconsistent ( and I found many occasions 
> of inconsistencies).
> I have tried the patch https://issues.apache.org/jira/browse/SOLR-7412 but 
> still the issue seems not resolved
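The kind of inconsistency reported above is characteristic of distributed top-N faceting without refinement: each shard returns only its local top terms, so a term that falls just below the cutoff on some shard loses that shard's contribution in the merged count. A minimal standalone simulation of that merge behavior (illustrative only, not Solr's actual code; the class and method names are invented for the sketch):

```java
import java.util.*;
import java.util.stream.*;

/** Simulates naive distributed top-N facet merging without refinement. */
public class FacetMergeSketch {

    /** Returns this shard's local top-N buckets by count. */
    static List<Map.Entry<String, Integer>> topN(Map<String, Integer> shard, int n) {
        return shard.entrySet().stream()
                .sorted((a, b) -> b.getValue() - a.getValue())
                .limit(n)
                .collect(Collectors.toList());
    }

    /** Merges local top-N lists by summing only what each shard reported. */
    static Map<String, Integer> merge(List<Map<String, Integer>> shards, int n) {
        Map<String, Integer> merged = new HashMap<>();
        for (Map<String, Integer> shard : shards) {
            for (Map.Entry<String, Integer> e : topN(shard, n)) {
                merged.merge(e.getKey(), e.getValue(), Integer::sum);
            }
        }
        return merged;
    }

    public static void main(String[] args) {
        // "raghu" has 4 docs in total (3 on shard1 + 1 on shard2), but on
        // shard2 it falls outside the local top-2, so that count is lost.
        Map<String, Integer> shard1 = Map.of("surya", 4, "raghu", 3, "malli", 1);
        Map<String, Integer> shard2 = Map.of("malli", 3, "surya", 2, "raghu", 1);
        Map<String, Integer> merged = merge(List.of(shard1, shard2), 2);
        System.out.println(merged.get("raghu")); // 3, not the true total of 4
    }
}
```

Restricting the query to a single term (as in the second query above) effectively makes the term survive every shard's local cutoff, which is why the count then comes back correct.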






[jira] [Commented] (SOLR-10804) Document Centric Versioning Constraints - Overwrite same version

2017-06-02 Thread Sergio Garcia Maroto (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16034888#comment-16034888
 ] 

Sergio Garcia Maroto commented on SOLR-10804:
-

Here is the link to the forum thread on this topic:
http://lucene.472066.n3.nabble.com/version-Versioning-using-timespan-td4338171.html

> Document Centric Versioning Constraints - Overwrite same version
> 
>
> Key: SOLR-10804
> URL: https://issues.apache.org/jira/browse/SOLR-10804
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Affects Versions: 6.1
>Reporter: Sergio Garcia Maroto
> Fix For: master (7.0)
>
>
> It would be nice to have the possibility to overwrite documents with the same 
> external version.






[jira] [Commented] (SOLR-10804) Document Centric Versioning Constraints - Overwrite same version

2017-06-02 Thread Sergio Garcia Maroto (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16034889#comment-16034889
 ] 

Sergio Garcia Maroto commented on SOLR-10804:
-

I had a look at the source code and I see in 
DocBasedVersionConstraintsProcessorFactory:

if (0 < ((Comparable) newUserVersion).compareTo((Comparable) oldUserVersion)) {
  // log.info("VERSION returning true (proceed with update)");
  return true;
}

I can't find a way of overwriting the same version without changing that piece 
of code.
Would it be possible to add a parameter to 
"DocBasedVersionConstraintsProcessorFactory", something like 
"overwrite.same.version=true", so the new code would look like:

int compareTo = ((Comparable) newUserVersion).compareTo((Comparable) oldUserVersion);
if ((overwriteSameVersion && 0 <= compareTo) || (0 < compareTo)) {
  // log.info("VERSION returning true (proceed with update)");
  return true;
}

Is that change going to break anything? Can I make that change?

Thanks 
Sergio 
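The proposed comparison rule can be sketched as a self-contained method; the overwriteSameVersion flag (a hypothetical "overwrite.same.version" init arg) is the proposal above, not an existing factory option, and long versions are used here only to keep the sketch standalone:

```java
/** Standalone sketch of the version-comparison rule proposed above. */
public class VersionCheckSketch {

    /** Returns true if the update should proceed. */
    static boolean shouldProceed(long newUserVersion, long oldUserVersion,
                                 boolean overwriteSameVersion) {
        int cmp = Long.compare(newUserVersion, oldUserVersion);
        // Default rule: only a strictly newer version wins.
        // With the hypothetical flag set, an equal version is also accepted.
        return (overwriteSameVersion && cmp >= 0) || cmp > 0;
    }

    public static void main(String[] args) {
        System.out.println(shouldProceed(2, 1, false)); // newer wins: true
        System.out.println(shouldProceed(1, 1, false)); // same, flag off: false
        System.out.println(shouldProceed(1, 1, true));  // same, flag on: true
        System.out.println(shouldProceed(1, 2, true));  // older never wins: false
    }
}
```

Note that with the flag off the behavior is unchanged, which is what makes an opt-in parameter an attractive shape for this change.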

> Document Centric Versioning Constraints - Overwrite same version
> 
>
> Key: SOLR-10804
> URL: https://issues.apache.org/jira/browse/SOLR-10804
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Affects Versions: 6.1
>Reporter: Sergio Garcia Maroto
> Fix For: master (7.0)
>
>
> It would be nice to have the possibility to overwrite documents with the same 
> external version.






[jira] [Created] (SOLR-10804) Document Centric Versioning Constraints - Overwrite same version

2017-06-02 Thread Sergio Garcia Maroto (JIRA)
Sergio Garcia Maroto created SOLR-10804:
---

 Summary: Document Centric Versioning Constraints - Overwrite same 
version
 Key: SOLR-10804
 URL: https://issues.apache.org/jira/browse/SOLR-10804
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
  Components: SolrCloud
Affects Versions: 6.1
Reporter: Sergio Garcia Maroto
 Fix For: master (7.0)


It would be nice to have the possibility to overwrite documents with the same 
external version.






[JENKINS] Lucene-Solr-6.x-Solaris (64bit/jdk1.8.0) - Build # 866 - Still Unstable!

2017-06-02 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Solaris/866/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseSerialGC

1 tests failed.
FAILED:  
org.apache.solr.util.TestMaxTokenLenTokenizer.testSingleFieldSameAnalyzers

Error Message:
Exception during query

Stack Trace:
java.lang.RuntimeException: Exception during query
  at __randomizedtesting.SeedInfo.seed([D661FC648CFC6A7:67B420A9102C1668]:0)
  at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:895)
  at org.apache.solr.util.TestMaxTokenLenTokenizer.testSingleFieldSameAnalyzers(TestMaxTokenLenTokenizer.java:104)
  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
  at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
  at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
  at java.lang.reflect.Method.invoke(Method.java:498)
  at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
  at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
  at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
  at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
  at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
  at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
  at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
  at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
  at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
  at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
  at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
  at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
  at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
  at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
  at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
  at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
  at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
  at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
  at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
  at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
  at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
  at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
  at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
  at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
  at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
  at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
  at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
  at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
  at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
  at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
  at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
  at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
  at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
  at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
  at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.RuntimeException: REQUEST FAILED: 
xpath=//result[@numFound=1]
xml response was: 

00


request was: q=letter0:lett&wt=xml
at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:888)
... 40 more




Build Log:
[...truncated 13218 lines...]
   [junit4] 

[jira] [Commented] (SOLR-6736) A collections-like request handler to manage solr configurations on zookeeper

2017-06-02 Thread Ishan Chattopadhyaya (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6736?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16034885#comment-16034885
 ] 

Ishan Chattopadhyaya commented on SOLR-6736:


Thanks [~ctargett].

> A collections-like request handler to manage solr configurations on zookeeper
> -
>
> Key: SOLR-6736
> URL: https://issues.apache.org/jira/browse/SOLR-6736
> Project: Solr
>  Issue Type: New Feature
>  Components: SolrCloud
>Reporter: Varun Rajput
>Assignee: Ishan Chattopadhyaya
> Fix For: 6.6, master (7.0)
>
> Attachments: newzkconf.zip, newzkconf.zip, SOLR-6736.doc.patch, 
> SOLR-6736-newapi.patch, SOLR-6736-newapi.patch, SOLR-6736-newapi.patch, 
> SOLR-6736.patch, SOLR-6736.patch, SOLR-6736.patch, SOLR-6736.patch, 
> SOLR-6736.patch, SOLR-6736.patch, SOLR-6736.patch, SOLR-6736.patch, 
> SOLR-6736.patch, SOLR-6736.patch, SOLR-6736.patch, SOLR-6736.patch, 
> test_private.pem, test_pub.der, zkconfighandler.zip, zkconfighandler.zip
>
>
> Managing Solr configuration files on zookeeper becomes cumbersome while using 
> solr in cloud mode, especially while trying out changes in the 
> configurations. 
> It will be great if there is a request handler that can provide an API to 
> manage the configurations similar to the collections handler that would allow 
> actions like uploading new configurations, linking them to a collection, 
> deleting configurations, etc.
> example : 
> {code}
> #use the following command to upload a new configset called mynewconf. This 
> will fail if there is already a conf called 'mynewconf'. The file could be a 
> jar, zip or tar file which contains all the files for this conf.
> curl -X POST -H 'Content-Type: application/octet-stream' --data-binary 
> @testconf.zip 
> http://localhost:8983/solr/admin/configs/mynewconf?sig=
> {code}
> A GET to http://localhost:8983/solr/admin/configs will give a list of configs 
> available
> A GET to http://localhost:8983/solr/admin/configs/mynewconf would give the 
> list of files in mynewconf






[jira] [Commented] (SOLR-10447) An endpoint to get the alias for a collection

2017-06-02 Thread Ishan Chattopadhyaya (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10447?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16034883#comment-16034883
 ] 

Ishan Chattopadhyaya commented on SOLR-10447:
-

Thanks [~ctargett].

> An endpoint to get the alias for a collection
> -
>
> Key: SOLR-10447
> URL: https://issues.apache.org/jira/browse/SOLR-10447
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Ishan Chattopadhyaya
>Assignee: Ishan Chattopadhyaya
> Fix For: 6.6, master (7.0)
>
> Attachments: SOLR-10447.doc.patch, SOLR-10447.patch
>
>
> We have CREATEALIAS and DELETEALIAS commands. However, there's no way to get 
> the aliases that are already there. I propose that we add a -GETALIAS- 
> LISTALIASES command (Collection API) for this.






[jira] [Commented] (SOLR-10447) An endpoint to get the alias for a collection

2017-06-02 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10447?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16034879#comment-16034879
 ] 

ASF subversion and git services commented on SOLR-10447:


Commit 0a28cdea55decf0d6bd26daa8fa67e18bdfa7ad5 in lucene-solr's branch 
refs/heads/branch_6_6 from [~ichattopadhyaya]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=0a28cde ]

SOLR-10447: Ref guide documentation


> An endpoint to get the alias for a collection
> -
>
> Key: SOLR-10447
> URL: https://issues.apache.org/jira/browse/SOLR-10447
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Ishan Chattopadhyaya
>Assignee: Ishan Chattopadhyaya
> Fix For: 6.6, master (7.0)
>
> Attachments: SOLR-10447.doc.patch, SOLR-10447.patch
>
>
> We have CREATEALIAS and DELETEALIAS commands. However, there's no way to get 
> the aliases that are already there. I propose that we add a -GETALIAS- 
> LISTALIASES command (Collection API) for this.






[jira] [Commented] (SOLR-10447) An endpoint to get the alias for a collection

2017-06-02 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10447?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16034878#comment-16034878
 ] 

ASF subversion and git services commented on SOLR-10447:


Commit a607efa6fd3b9f56a2afaad5e2634df216c4eff4 in lucene-solr's branch 
refs/heads/branch_6x from [~ichattopadhyaya]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=a607efa ]

SOLR-10447: Ref guide documentation


> An endpoint to get the alias for a collection
> -
>
> Key: SOLR-10447
> URL: https://issues.apache.org/jira/browse/SOLR-10447
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Ishan Chattopadhyaya
>Assignee: Ishan Chattopadhyaya
> Fix For: 6.6, master (7.0)
>
> Attachments: SOLR-10447.doc.patch, SOLR-10447.patch
>
>
> We have CREATEALIAS and DELETEALIAS commands. However, there's no way to get 
> the aliases that are already there. I propose that we add a -GETALIAS- 
> LISTALIASES command (Collection API) for this.






[jira] [Commented] (SOLR-10447) An endpoint to get the alias for a collection

2017-06-02 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10447?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16034876#comment-16034876
 ] 

ASF subversion and git services commented on SOLR-10447:


Commit ac26d81116079365dfdb8d70e8e0f50f93749b8b in lucene-solr's branch 
refs/heads/master from [~ichattopadhyaya]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=ac26d81 ]

SOLR-10447: Ref guide documentation


> An endpoint to get the alias for a collection
> -
>
> Key: SOLR-10447
> URL: https://issues.apache.org/jira/browse/SOLR-10447
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Ishan Chattopadhyaya
>Assignee: Ishan Chattopadhyaya
> Fix For: 6.6, master (7.0)
>
> Attachments: SOLR-10447.doc.patch, SOLR-10447.patch
>
>
> We have CREATEALIAS and DELETEALIAS commands. However, there's no way to get 
> the aliases that are already there. I propose that we add a -GETALIAS- 
> LISTALIASES command (Collection API) for this.






[jira] [Commented] (SOLR-10447) An endpoint to get the alias for a collection

2017-06-02 Thread Cassandra Targett (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10447?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16034867#comment-16034867
 ] 

Cassandra Targett commented on SOLR-10447:
--

Patch looks good [~ichattopadhyaya] - +1 to commit.

> An endpoint to get the alias for a collection
> -
>
> Key: SOLR-10447
> URL: https://issues.apache.org/jira/browse/SOLR-10447
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Ishan Chattopadhyaya
>Assignee: Ishan Chattopadhyaya
> Fix For: 6.6, master (7.0)
>
> Attachments: SOLR-10447.doc.patch, SOLR-10447.patch
>
>
> We have CREATEALIAS and DELETEALIAS commands. However, there's no way to get 
> the aliases that are already there. I propose that we add a -GETALIAS- 
> LISTALIASES command (Collection API) for this.






[jira] [Commented] (SOLR-6736) A collections-like request handler to manage solr configurations on zookeeper

2017-06-02 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6736?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16034866#comment-16034866
 ] 

ASF subversion and git services commented on SOLR-6736:
---

Commit f358c6834d3957b73690d73e49c021644c2f61fb in lucene-solr's branch 
refs/heads/branch_6x from [~ichattopadhyaya]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=f358c68 ]

SOLR-10446, SOLR-6736: Ref guide documentation


> A collections-like request handler to manage solr configurations on zookeeper
> -
>
> Key: SOLR-6736
> URL: https://issues.apache.org/jira/browse/SOLR-6736
> Project: Solr
>  Issue Type: New Feature
>  Components: SolrCloud
>Reporter: Varun Rajput
>Assignee: Ishan Chattopadhyaya
> Fix For: 6.6, master (7.0)
>
> Attachments: newzkconf.zip, newzkconf.zip, SOLR-6736.doc.patch, 
> SOLR-6736-newapi.patch, SOLR-6736-newapi.patch, SOLR-6736-newapi.patch, 
> SOLR-6736.patch, SOLR-6736.patch, SOLR-6736.patch, SOLR-6736.patch, 
> SOLR-6736.patch, SOLR-6736.patch, SOLR-6736.patch, SOLR-6736.patch, 
> SOLR-6736.patch, SOLR-6736.patch, SOLR-6736.patch, SOLR-6736.patch, 
> test_private.pem, test_pub.der, zkconfighandler.zip, zkconfighandler.zip
>
>
> Managing Solr configuration files on zookeeper becomes cumbersome while using 
> solr in cloud mode, especially while trying out changes in the 
> configurations. 
> It will be great if there is a request handler that can provide an API to 
> manage the configurations similar to the collections handler that would allow 
> actions like uploading new configurations, linking them to a collection, 
> deleting configurations, etc.
> example : 
> {code}
> #use the following command to upload a new configset called mynewconf. This 
> will fail if there is already a conf called 'mynewconf'. The file could be a 
> jar, zip or tar file which contains all the files for this conf.
> curl -X POST -H 'Content-Type: application/octet-stream' --data-binary 
> @testconf.zip 
> http://localhost:8983/solr/admin/configs/mynewconf?sig=
> {code}
> A GET to http://localhost:8983/solr/admin/configs will give a list of configs 
> available
> A GET to http://localhost:8983/solr/admin/configs/mynewconf would give the 
> list of files in mynewconf






[jira] [Commented] (SOLR-10446) Http based ClusterStateProvider (CloudSolrClient needn't talk to ZooKeeper)

2017-06-02 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10446?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16034868#comment-16034868
 ] 

ASF subversion and git services commented on SOLR-10446:


Commit 2eb324f9bae1553c9c68c4a740a4f865b0ec6da5 in lucene-solr's branch 
refs/heads/master from [~ichattopadhyaya]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=2eb324f ]

SOLR-10446, SOLR-6736: Ref guide documentation


> Http based ClusterStateProvider (CloudSolrClient needn't talk to ZooKeeper)
> ---
>
> Key: SOLR-10446
> URL: https://issues.apache.org/jira/browse/SOLR-10446
> Project: Solr
>  Issue Type: Sub-task
>  Components: SolrJ
>Reporter: Ishan Chattopadhyaya
>Assignee: Ishan Chattopadhyaya
> Fix For: 6.6, master (7.0)
>
> Attachments: SOLR-10446.doc.patch, SOLR-10446.patch, 
> SOLR-10446.patch, SOLR-10446.patch, SOLR-10446.patch, SOLR-10446.patch, 
> SOLR-10446.patch, SOLR-10446.patch, SOLR-9057.patch
>
>
> An HTTP based ClusterStateProvider to remove the sole dependency of 
> CloudSolrClient on ZooKeeper, and hence provide an optional way for CSC to 
> access cluster state without requiring ZK.
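As a minimal illustration of what "cluster state without ZK" means in practice: the standard Collections API CLUSTERSTATUS action already exposes cluster state over HTTP. The sketch below only composes that URL (host/port are example values) and does not contact a server.

```shell
# Sketch: an HTTP-based client needs only a Solr base URL, not a ZooKeeper
# ensemble. CLUSTERSTATUS is the standard Collections API action; host/port
# are example values. The curl call is left commented out.
SOLR="http://localhost:8983/solr"
status_url="$SOLR/admin/collections?action=CLUSTERSTATUS&wt=json"

echo "GET $status_url"   # returns live cluster state as JSON
# curl "$status_url"
```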






[jira] [Commented] (SOLR-6736) A collections-like request handler to manage solr configurations on zookeeper

2017-06-02 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6736?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16034869#comment-16034869
 ] 

ASF subversion and git services commented on SOLR-6736:
---

Commit 2eb324f9bae1553c9c68c4a740a4f865b0ec6da5 in lucene-solr's branch 
refs/heads/master from [~ichattopadhyaya]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=2eb324f ]

SOLR-10446, SOLR-6736: Ref guide documentation


> A collections-like request handler to manage solr configurations on zookeeper
> -
>
> Key: SOLR-6736
> URL: https://issues.apache.org/jira/browse/SOLR-6736
> Project: Solr
>  Issue Type: New Feature
>  Components: SolrCloud
>Reporter: Varun Rajput
>Assignee: Ishan Chattopadhyaya
> Fix For: 6.6, master (7.0)
>
> Attachments: newzkconf.zip, newzkconf.zip, SOLR-6736.doc.patch, 
> SOLR-6736-newapi.patch, SOLR-6736-newapi.patch, SOLR-6736-newapi.patch, 
> SOLR-6736.patch, SOLR-6736.patch, SOLR-6736.patch, SOLR-6736.patch, 
> SOLR-6736.patch, SOLR-6736.patch, SOLR-6736.patch, SOLR-6736.patch, 
> SOLR-6736.patch, SOLR-6736.patch, SOLR-6736.patch, SOLR-6736.patch, 
> test_private.pem, test_pub.der, zkconfighandler.zip, zkconfighandler.zip
>
>
> Managing Solr configuration files on zookeeper becomes cumbersome while using 
> solr in cloud mode, especially while trying out changes in the 
> configurations. 
> It would be great if there were a request handler that provides an API to 
> manage configurations, similar to the collections handler, allowing 
> actions like uploading new configurations, linking them to a collection, 
> deleting configurations, etc.
> Example:
> {code}
> #use the following command to upload a new configset called mynewconf. This 
> will fail if there is already a conf called 'mynewconf'. The file could be a 
> jar, zip or tar file which contains all the files for this conf.
> curl -X POST -H 'Content-Type: application/octet-stream' --data-binary 
> @testconf.zip 
> http://localhost:8983/solr/admin/configs/mynewconf?sig=
> {code}
> A GET to http://localhost:8983/solr/admin/configs will give a list of configs 
> available
> A GET to http://localhost:8983/solr/admin/configs/mynewconf would give the 
> list of files in mynewconf






[jira] [Commented] (SOLR-10446) Http based ClusterStateProvider (CloudSolrClient needn't talk to ZooKeeper)

2017-06-02 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10446?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16034865#comment-16034865
 ] 

ASF subversion and git services commented on SOLR-10446:


Commit f358c6834d3957b73690d73e49c021644c2f61fb in lucene-solr's branch 
refs/heads/branch_6x from [~ichattopadhyaya]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=f358c68 ]

SOLR-10446, SOLR-6736: Ref guide documentation


> Http based ClusterStateProvider (CloudSolrClient needn't talk to ZooKeeper)
> ---
>
> Key: SOLR-10446
> URL: https://issues.apache.org/jira/browse/SOLR-10446
> Project: Solr
>  Issue Type: Sub-task
>  Components: SolrJ
>Reporter: Ishan Chattopadhyaya
>Assignee: Ishan Chattopadhyaya
> Fix For: 6.6, master (7.0)
>
> Attachments: SOLR-10446.doc.patch, SOLR-10446.patch, 
> SOLR-10446.patch, SOLR-10446.patch, SOLR-10446.patch, SOLR-10446.patch, 
> SOLR-10446.patch, SOLR-10446.patch, SOLR-9057.patch
>
>
> An HTTP based ClusterStateProvider to remove the sole dependency of 
> CloudSolrClient on ZooKeeper, and hence provide an optional way for CSC to 
> access cluster state without requiring ZK.






[jira] [Commented] (SOLR-10446) Http based ClusterStateProvider (CloudSolrClient needn't talk to ZooKeeper)

2017-06-02 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10446?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16034862#comment-16034862
 ] 

ASF subversion and git services commented on SOLR-10446:


Commit 5c4f0a27a327dba22e121680a19c192a53b8d75e in lucene-solr's branch 
refs/heads/branch_6_6 from [~ichattopadhyaya]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=5c4f0a2 ]

SOLR-10446, SOLR-6736: Ref guide documentation


> Http based ClusterStateProvider (CloudSolrClient needn't talk to ZooKeeper)
> ---
>
> Key: SOLR-10446
> URL: https://issues.apache.org/jira/browse/SOLR-10446
> Project: Solr
>  Issue Type: Sub-task
>  Components: SolrJ
>Reporter: Ishan Chattopadhyaya
>Assignee: Ishan Chattopadhyaya
> Fix For: 6.6, master (7.0)
>
> Attachments: SOLR-10446.doc.patch, SOLR-10446.patch, 
> SOLR-10446.patch, SOLR-10446.patch, SOLR-10446.patch, SOLR-10446.patch, 
> SOLR-10446.patch, SOLR-10446.patch, SOLR-9057.patch
>
>
> An HTTP based ClusterStateProvider to remove the sole dependency of 
> CloudSolrClient on ZooKeeper, and hence provide an optional way for CSC to 
> access cluster state without requiring ZK.






[jira] [Commented] (SOLR-6736) A collections-like request handler to manage solr configurations on zookeeper

2017-06-02 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6736?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16034863#comment-16034863
 ] 

ASF subversion and git services commented on SOLR-6736:
---

Commit 5c4f0a27a327dba22e121680a19c192a53b8d75e in lucene-solr's branch 
refs/heads/branch_6_6 from [~ichattopadhyaya]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=5c4f0a2 ]

SOLR-10446, SOLR-6736: Ref guide documentation


> A collections-like request handler to manage solr configurations on zookeeper
> -
>
> Key: SOLR-6736
> URL: https://issues.apache.org/jira/browse/SOLR-6736
> Project: Solr
>  Issue Type: New Feature
>  Components: SolrCloud
>Reporter: Varun Rajput
>Assignee: Ishan Chattopadhyaya
> Fix For: 6.6, master (7.0)
>
> Attachments: newzkconf.zip, newzkconf.zip, SOLR-6736.doc.patch, 
> SOLR-6736-newapi.patch, SOLR-6736-newapi.patch, SOLR-6736-newapi.patch, 
> SOLR-6736.patch, SOLR-6736.patch, SOLR-6736.patch, SOLR-6736.patch, 
> SOLR-6736.patch, SOLR-6736.patch, SOLR-6736.patch, SOLR-6736.patch, 
> SOLR-6736.patch, SOLR-6736.patch, SOLR-6736.patch, SOLR-6736.patch, 
> test_private.pem, test_pub.der, zkconfighandler.zip, zkconfighandler.zip
>
>
> Managing Solr configuration files on zookeeper becomes cumbersome while using 
> solr in cloud mode, especially while trying out changes in the 
> configurations. 
> It would be great if there were a request handler that provides an API to 
> manage configurations, similar to the collections handler, allowing 
> actions like uploading new configurations, linking them to a collection, 
> deleting configurations, etc.
> Example:
> {code}
> #use the following command to upload a new configset called mynewconf. This 
> will fail if there is already a conf called 'mynewconf'. The file could be a 
> jar, zip or tar file which contains all the files for this conf.
> curl -X POST -H 'Content-Type: application/octet-stream' --data-binary 
> @testconf.zip 
> http://localhost:8983/solr/admin/configs/mynewconf?sig=
> {code}
> A GET to http://localhost:8983/solr/admin/configs will give a list of configs 
> available
> A GET to http://localhost:8983/solr/admin/configs/mynewconf would give the 
> list of files in mynewconf






Re: Release planning for 7.0

2017-06-02 Thread Anshum Gupta
Yes, I do plan to branch them at the same time. I think that is also what's
documented :).


On Fri, Jun 2, 2017 at 5:45 AM Adrien Grand wrote:

> Hi Anshum, will you branch both branch_7x and branch_7_0 at the same time?
> I think this is what we need to do but I'm asking in case you had planned
> differently.
>
> On Wed, May 31, 2017 at 20:17, Anshum Gupta wrote:
>
>> We can certainly hold back cutting the branch, and/or release.
>>
>> I'm headed to Berlin Buzzwords next week so I'd most probably be cutting
>> the branch over the next weekend or during the conference. I'll keep
>> everyone posted about the 'when' and if there are more contributors who
>> want me to hold back for valid reasons, I'd be happy to do so :).
>>
>> I'm tracking all blockers (and critical JIRAs) for 7.0, and if anyone
>> thinks there's something that must be a part of 7.0, kindly mark the issue
>> as a blocker for 7.0.
>>
>> -Anshum
>>
>>
>> On Tue, May 30, 2017 at 9:27 AM Christine Poerschke (BLOOMBERG/ LONDON) <
>> cpoersc...@bloomberg.net> wrote:
>>
>>> Hi Everyone,
>>>
>>> Just to say that https://issues.apache.org/jira/browse/SOLR-8668 for
>>> <mergePolicy> removal should complete later this week, hopefully.
>>>
>>> And on an unrelated note, does anyone have any history or experience
>>> with the NOTICE.txt files? Including
>>> https://issues.apache.org/jira/browse/LUCENE-7852 in 7.0 would be good,
>>> I think (though, being a small change, the issue would not need to block
>>> branch_7x branch cutting).
>>>
>>> Thanks,
>>> Christine
>>>
>>> From: dev@lucene.apache.org At: 05/03/17 16:56:09
>>> To: dev@lucene.apache.org
>>> Subject: Re:Release planning for 7.0
>>>
>>> Hi,
>>>
>>> It's May already, and with 6.6 lined up, I think we should start
>>> planning on how we want to proceed with 7.0, in terms of both - the
>>> timeline, and what it would likely contain.
>>>
>>> I am not suggesting we start the release process right away, but just
>>> wanted to start a discussion around the above mentioned lines.
>>>
>>> With 6.6 in the pipeline, I think sometime in June would be a good time
>>> to cut a release branch. What do all of you think?
>>>
>>> P.S: This email is about 'discussion' and 'planning', so if someone
>>> wants to volunteer to be the release manager, please go ahead. I can't
>>> remember if someone explicitly volunteered to wear this hat for 7.0. If no
>>> one volunteers, I will take it up.
>>>
>>> -Anshum
>>>
>>>


[jira] [Updated] (SOLR-10447) An endpoint to get the alias for a collection

2017-06-02 Thread Ishan Chattopadhyaya (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10447?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ishan Chattopadhyaya updated SOLR-10447:

Attachment: SOLR-10447.doc.patch

Updated documentation. [~ctargett], please review.

> An endpoint to get the alias for a collection
> -
>
> Key: SOLR-10447
> URL: https://issues.apache.org/jira/browse/SOLR-10447
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Ishan Chattopadhyaya
>Assignee: Ishan Chattopadhyaya
> Fix For: 6.6, master (7.0)
>
> Attachments: SOLR-10447.doc.patch, SOLR-10447.patch
>
>
> We have CREATEALIAS and DELETEALIAS commands. However, there's no way to get 
> the aliases that are already there. I propose that we add a -GETALIAS- 
> LISTALIASES command (Collection API) for this.
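The proposed LISTALIASES call would look roughly as follows; this is a sketch only (host/port are example values, and the response shape is an assumption), so the curl invocation is left commented out.

```shell
# Sketch of the LISTALIASES call proposed above (Collections API); host/port
# are example values. The curl call is left commented out.
SOLR="http://localhost:8983/solr"
aliases_url="$SOLR/admin/collections?action=LISTALIASES&wt=json"

echo "GET $aliases_url"  # expected to return the alias -> collection map
# curl "$aliases_url"
```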






[jira] [Comment Edited] (SOLR-10803) Solr should refuse to create Trie*Field instances in 7.0 indices

2017-06-02 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10803?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16034822#comment-16034822
 ] 

Uwe Schindler edited comment on SOLR-10803 at 6/2/17 3:02 PM:
--

bq. FieldCache
I'd suggest also enabling DocValues by default for all string/numeric/date 
fields, unless explicitly disabled. I have seen that Solr 7 allows merging 
non-docvalues segments into ones with docvalues, using the uninverter with a 
special merge policy. IMHO, this merge policy should be the default, just 
printing a line of information to the logs, so users know that their index 
segments are updated and this may temporarily require more RAM. When using an 
index created with 7.x (maybe using the new metadata added by [~jpountz] 
recently), if something tries to access FieldCache (e.g. for sorting, 
faceting or functions), it should fail the query.

In addition, the merge policy could also be used to convert Trie* to Point* 
values by first uninverting (if there are no docvalues on the trie field) and 
reindexing the fields during merging... (not sure how to do this, but it 
should work somehow).


was (Author: thetaphi):
bq. FieldCache
I'd suggest also enabling DocValues by default for all string/numeric/date 
fields, unless explicitly disabled. I have seen that Solr 7 allows merging 
non-docvalues segments into ones with docvalues, using the uninverter with a 
special merge policy. IMHO, this merge policy should be the default, just 
printing a line of information to the logs, so users know that their index 
segments are updated and this may temporarily require more RAM. When using an 
index created with 7.x (maybe using the new metadata added by [~jpountz] 
recently), if something tries to access FieldCache (e.g. for sorting, 
faceting or functions), it should fail the query.

> Solr should refuse to create Trie*Field instances in 7.0 indices
> 
>
> Key: SOLR-10803
> URL: https://issues.apache.org/jira/browse/SOLR-10803
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Adrien Grand
>Priority: Blocker
> Fix For: master (7.0)
>
>
> If we want to be able to remove support for legacy numerics from Solr in 8.0, 
> we need to forbid the use of Trie*Field in indices that are created on or 
> after 7.0.






[jira] [Commented] (SOLR-10803) Solr should refuse to create Trie*Field instances in 7.0 indices

2017-06-02 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10803?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16034822#comment-16034822
 ] 

Uwe Schindler commented on SOLR-10803:
--

bq. FieldCache
I'd suggest also enabling DocValues by default for all string/numeric/date 
fields, unless explicitly disabled. I have seen that Solr 7 allows merging 
non-docvalues segments into ones with docvalues, using the uninverter with a 
special merge policy. IMHO, this merge policy should be the default, just 
printing a line of information to the logs, so users know that their index 
segments are updated and this may temporarily require more RAM. When using an 
index created with 7.x (maybe using the new metadata added by [~jpountz] 
recently), if something tries to access FieldCache (e.g. for sorting, 
faceting or functions), it should fail the query.
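As a concrete illustration of the suggestion: in a Solr schema, docValues can already be enabled explicitly per field type. The fragment below is illustrative only — the field and type names are invented; the point is the `docValues="true"` attribute that this comment suggests making the default.

```xml
<!-- Illustrative schema fragment (names are invented): enabling docValues
     explicitly on string/numeric field types. -->
<fieldType name="string_dv" class="solr.StrField" docValues="true"/>
<fieldType name="plong" class="solr.LongPointField" docValues="true"/>
<field name="timestamp" type="plong" indexed="true" stored="true"/>
```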

> Solr should refuse to create Trie*Field instances in 7.0 indices
> 
>
> Key: SOLR-10803
> URL: https://issues.apache.org/jira/browse/SOLR-10803
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Adrien Grand
>Priority: Blocker
> Fix For: master (7.0)
>
>
> If we want to be able to remove support for legacy numerics from Solr in 8.0, 
> we need to forbid the use of Trie*Field in indices that are created on or 
> after 7.0.






[jira] [Comment Edited] (SOLR-10803) Solr should refuse to create Trie*Field instances in 7.0 indices

2017-06-02 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10803?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16034720#comment-16034720
 ] 

Uwe Schindler edited comment on SOLR-10803 at 6/2/17 2:55 PM:
--

That's a good idea. Disallow creating new indexes with trie fields. Maybe also 
similar stuff to prevent FieldCache usage?


was (Author: thetaphi):
That's a good idea. Disallow to create new indexes with true fields. Maybe also 
similar stuff to prevent FieldCache usage?

> Solr should refuse to create Trie*Field instances in 7.0 indices
> 
>
> Key: SOLR-10803
> URL: https://issues.apache.org/jira/browse/SOLR-10803
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Adrien Grand
>Priority: Blocker
> Fix For: master (7.0)
>
>
> If we want to be able to remove support for legacy numerics from Solr in 8.0, 
> we need to forbid the use of Trie*Field in indices that are created on or 
> after 7.0.






[jira] [Closed] (LUCENE-7834) BlockTree's terms index should be loaded into memory lazily

2017-06-02 Thread David Smiley (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7834?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley closed LUCENE-7834.

Resolution: Won't Fix

> BlockTree's terms index should be loaded into memory lazily
> ---
>
> Key: LUCENE-7834
> URL: https://issues.apache.org/jira/browse/LUCENE-7834
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/codecs
>Reporter: David Smiley
>Assignee: David Smiley
> Attachments: LUCENE_7834_BlockTreeLazyFST.patch
>
>
> I propose that BlockTree delay loading the FST prefix terms index into memory 
> until {{terms(String)}} is first called. This seems like how it should work, 
> since anyone who wants eager loading can use {{IndexReaderWarmer}}. By making 
> the FST lazy-load, we can be more NRT-friendly and also save memory (both in 
> the case where some fields are rarely used).






[jira] [Commented] (SOLR-10803) Solr should refuse to create Trie*Field instances in 7.0 indices

2017-06-02 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10803?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16034775#comment-16034775
 ] 

Adrien Grand commented on SOLR-10803:
-

bq. Maybe also similar stuff to prevent FieldCache usage?

+1 that would be great! Then we could remove FieldCache too in 8.0.

> Solr should refuse to create Trie*Field instances in 7.0 indices
> 
>
> Key: SOLR-10803
> URL: https://issues.apache.org/jira/browse/SOLR-10803
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Adrien Grand
>Priority: Blocker
> Fix For: master (7.0)
>
>
> If we want to be able to remove support for legacy numerics from Solr in 8.0, 
> we need to forbid the use of Trie*Field in indices that are created on or 
> after 7.0.






[jira] [Resolved] (SOLR-8668) Remove support for <mergePolicy> (in favour of <mergePolicyFactory>)

2017-06-02 Thread Christine Poerschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8668?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Christine Poerschke resolved SOLR-8668.
---
Resolution: Fixed

Thanks Ishan for your input. Thanks Hoss for the reviews.

Resolving; this will be included in the 7.0 release (and not in 6.* 
releases).

> Remove support for <mergePolicy> (in favour of <mergePolicyFactory>)
> 
>
> Key: SOLR-8668
> URL: https://issues.apache.org/jira/browse/SOLR-8668
> Project: Solr
>  Issue Type: Improvement
>Reporter: Shai Erera
>Assignee: Christine Poerschke
>Priority: Blocker
> Fix For: master (7.0)
>
> Attachments: SOLR-8668-part1.patch, SOLR-8668-part1.patch, 
> SOLR-8668.patch
>
>
> Following SOLR-8621, we should remove support for {{<mergePolicy>}} (and 
> related {{<mergeFactor>}} and {{<maxMergeDocs>}}) in trunk/6x.






[jira] [Commented] (SOLR-8668) Remove support for <mergePolicy> (in favour of <mergePolicyFactory>)

2017-06-02 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8668?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16034750#comment-16034750
 ] 

ASF subversion and git services commented on SOLR-8668:
---

Commit c64f9d64b4edc8c3761368befc394e879b2284ff in lucene-solr's branch 
refs/heads/master from [~cpoerschke]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=c64f9d6 ]

SOLR-8668: In solrconfig.xml remove <mergePolicy> (and related <mergeFactor> 
and <maxMergeDocs>) support in favor of the <mergePolicyFactory> element 
introduced by SOLR-8621 in Solr 5.5.0.
(Christine Poerschke, hossman)


> Remove support for <mergePolicy> (in favour of <mergePolicyFactory>)
> 
>
> Key: SOLR-8668
> URL: https://issues.apache.org/jira/browse/SOLR-8668
> Project: Solr
>  Issue Type: Improvement
>Reporter: Shai Erera
>Assignee: Christine Poerschke
>Priority: Blocker
> Fix For: master (7.0)
>
> Attachments: SOLR-8668-part1.patch, SOLR-8668-part1.patch, 
> SOLR-8668.patch
>
>
> Following SOLR-8621, we should remove support for {{<mergePolicy>}} (and 
> related {{<mergeFactor>}} and {{<maxMergeDocs>}}) in trunk/6x.






[jira] [Commented] (SOLR-8621) solrconfig.xml: deprecate/replace <mergePolicy> with <mergePolicyFactory>

2017-06-02 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8621?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16034751#comment-16034751
 ] 

ASF subversion and git services commented on SOLR-8621:
---

Commit c64f9d64b4edc8c3761368befc394e879b2284ff in lucene-solr's branch 
refs/heads/master from [~cpoerschke]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=c64f9d6 ]

SOLR-8668: In solrconfig.xml remove <mergePolicy> (and related <mergeFactor> 
and <maxMergeDocs>) support in favor of the <mergePolicyFactory> element 
introduced by SOLR-8621 in Solr 5.5.0.
(Christine Poerschke, hossman)


> solrconfig.xml: deprecate/replace <mergePolicy> with <mergePolicyFactory>
> -
>
> Key: SOLR-8621
> URL: https://issues.apache.org/jira/browse/SOLR-8621
> Project: Solr
>  Issue Type: Task
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
> Fix For: 5.5, 6.0
>
> Attachments: explicit-merge-auto-set.patch, 
> SOLR-8621-example_contrib_configs.patch, 
> SOLR-8621-example_contrib_configs.patch, SOLR-8621.patch
>
>
> *end-user benefits:*
> * Lucene's UpgradeIndexMergePolicy can be configured in Solr
> * Lucene's SortingMergePolicy can be configured in Solr (with SOLR-5730)
> * customisability: arbitrary merge policies including wrapping/nested merge 
> policies can be created and configured
> *roadmap:*
> * solr 5.5 introduces <mergePolicyFactory> support
> * solr 5.5 deprecates (but maintains) <mergePolicy> support
> * SOLR-8668 in solr 6.0(\?) will remove <mergePolicy> support 
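For reference, a solrconfig.xml fragment using the factory element looks roughly like this; the class name follows Solr's 5.5+ factory convention, and the specific parameter values are arbitrary examples.

```xml
<!-- Example solrconfig.xml fragment (parameter values are arbitrary):
     configuring the merge policy via the factory element from Solr 5.5. -->
<indexConfig>
  <mergePolicyFactory class="org.apache.solr.index.TieredMergePolicyFactory">
    <int name="maxMergeAtOnce">10</int>
    <int name="segmentsPerTier">10</int>
  </mergePolicyFactory>
</indexConfig>
```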






[jira] [Commented] (SOLR-10803) Solr should refuse to create Trie*Field instances in 7.0 indices

2017-06-02 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10803?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16034720#comment-16034720
 ] 

Uwe Schindler commented on SOLR-10803:
--

That's a good idea. Disallow creating new indexes with trie fields. Maybe also 
similar stuff to prevent FieldCache usage?

> Solr should refuse to create Trie*Field instances in 7.0 indices
> 
>
> Key: SOLR-10803
> URL: https://issues.apache.org/jira/browse/SOLR-10803
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Adrien Grand
>Priority: Blocker
> Fix For: master (7.0)
>
>
> If we want to be able to remove support for legacy numerics from Solr in 8.0, 
> we need to forbid the use of Trie*Field in indices that are created on or 
> after 7.0.






[JENKINS] Lucene-Solr-master-Linux (32bit/jdk1.8.0_131) - Build # 19763 - Unstable!

2017-06-02 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/19763/
Java: 32bit/jdk1.8.0_131 -client -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  org.apache.solr.cloud.ShardSplitTest.testSplitWithChaosMonkey

Error Message:
There are still nodes recoverying - waited for 330 seconds

Stack Trace:
java.lang.AssertionError: There are still nodes recoverying - waited for 330 
seconds
at 
__randomizedtesting.SeedInfo.seed([C3868175D4506EE0:48A152A49556C564]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:187)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:144)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:139)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.waitForRecoveriesToFinish(AbstractFullDistribZkTestBase.java:910)
at 
org.apache.solr.cloud.ShardSplitTest.testSplitWithChaosMonkey(ShardSplitTest.java:436)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:985)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:960)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
   

[jira] [Commented] (SOLR-10803) Solr should refuse to create Trie*Field instances in 7.0 indices

2017-06-02 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10803?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16034693#comment-16034693
 ] 

Adrien Grand commented on SOLR-10803:
-

I put the blocker priority since I think it is a better experience if all 7.x 
indices can be used with Solr 8, but there is also the possibility of just 
removing Trie*Field in 8.0 and refusing to open any index that would make use 
of those fields, even if they were created in 7.x.

> Solr should refuse to create Trie*Field instances in 7.0 indices
> 
>
> Key: SOLR-10803
> URL: https://issues.apache.org/jira/browse/SOLR-10803
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Adrien Grand
>Priority: Blocker
> Fix For: master (7.0)
>
>
> If we want to be able to remove support for legacy numerics from Solr in 8.0, 
> we need to forbid the use of Trie*Field in indices that are created on or 
> after 7.0.
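To make the intent concrete, a guard of the kind being discussed could look roughly like the sketch below. This is illustration only, not the actual Solr patch: the class and method names are invented, and real Solr would hook such a check into field type initialization rather than a standalone class.

```java
// Hypothetical sketch of the guard discussed above; names are invented
// for illustration and do not reflect Solr's real API.
public class TrieFieldGuard {
    /** Major version that first forbids declaring new Trie*Field instances. */
    static final int FIRST_FORBIDDEN_MAJOR = 7;

    /** Returns true if a Trie*Field may be declared in an index created
     *  with the given major version. */
    static boolean trieFieldAllowed(int indexCreatedMajorVersion) {
        return indexCreatedMajorVersion < FIRST_FORBIDDEN_MAJOR;
    }

    /** Throws if the field is not allowed, mirroring the proposed refusal. */
    static void checkTrieField(String fieldName, int indexCreatedMajorVersion) {
        if (!trieFieldAllowed(indexCreatedMajorVersion)) {
            throw new IllegalArgumentException(
                "Trie*Field " + fieldName + " is not supported in indices created with "
                + indexCreatedMajorVersion + ".x or later; use point fields instead");
        }
    }

    public static void main(String[] args) {
        checkTrieField("price_ti", 6);           // legacy index: allowed
        try {
            checkTrieField("price_ti", 7);       // 7.0 index: refused
            throw new AssertionError("expected rejection");
        } catch (IllegalArgumentException expected) {
            System.out.println("rejected as expected");
        }
    }
}
```

The version cutoff (7) matches the issue description; whether 8.0 then refuses to open pre-existing 7.x Trie*Field indices is exactly the open question in the comment above.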



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-10803) Solr should refuse to create Trie*Field instances in 7.0 indices

2017-06-02 Thread Adrien Grand (JIRA)
Adrien Grand created SOLR-10803:
---

 Summary: Solr should refuse to create Trie*Field instances in 7.0 
indices
 Key: SOLR-10803
 URL: https://issues.apache.org/jira/browse/SOLR-10803
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Adrien Grand
Priority: Blocker
 Fix For: master (7.0)


If we want to be able to remove support for legacy numerics from Solr in 8.0, 
we need to forbid the use of Trie*Field in indices that are created on or after 
7.0.






[jira] [Commented] (LUCENE-7855) Add support for WikipediaTokenizer's advanced options

2017-06-02 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7855?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16034666#comment-16034666
 ] 

Adrien Grand commented on LUCENE-7855:
--

+1 I'll merge it soon unless someone objects.

> Add support for WikipediaTokenizer's advanced options
> -
>
> Key: LUCENE-7855
> URL: https://issues.apache.org/jira/browse/LUCENE-7855
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/analysis
>Reporter: Juan Pedro
>Priority: Minor
> Attachments: LUCENE-7855.patch
>
>
> The advanced parameters of the WikipediaTokenizer should be added to the 
> factory.
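For context, WikipediaTokenizer's "advanced" constructor arguments are the token output mode and the set of untokenized types. A factory exposing them would need to map string args onto the tokenizer's int constants, roughly as sketched below; the arg name ({{tokenOutput}}) and accepted values are assumptions for illustration, not the committed factory API, though the constant values mirror WikipediaTokenizer's TOKENS_ONLY/UNTOKENIZED_ONLY/BOTH.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch only: mapping a hypothetical "tokenOutput" factory arg
// onto the tokenizer's int constants. Not the actual patch in LUCENE-7855.
public class WikipediaTokenizerArgs {
    static final int TOKENS_ONLY = 0;
    static final int UNTOKENIZED_ONLY = 1;
    static final int BOTH = 2;

    /** Parses the (assumed) "tokenOutput" arg, defaulting to tokens-only. */
    static int parseTokenOutput(Map<String, String> args) {
        String v = args.getOrDefault("tokenOutput", "tokensOnly");
        switch (v) {
            case "tokensOnly":      return TOKENS_ONLY;
            case "untokenizedOnly": return UNTOKENIZED_ONLY;
            case "both":            return BOTH;
            default: throw new IllegalArgumentException("Unknown tokenOutput: " + v);
        }
    }

    public static void main(String[] args) {
        Map<String, String> factoryArgs = new HashMap<>();
        factoryArgs.put("tokenOutput", "both");
        System.out.println(parseTokenOutput(factoryArgs)); // prints 2
    }
}
```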






[JENKINS] Lucene-Solr-6.x-MacOSX (64bit/jdk1.8.0) - Build # 903 - Still Unstable!

2017-06-02 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-MacOSX/903/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

2 tests failed.
FAILED:  org.apache.solr.cloud.ReplaceNodeTest.test

Error Message:


Stack Trace:
java.lang.AssertionError
at 
__randomizedtesting.SeedInfo.seed([1F5880993BA87A28:970CBF43955417D0]:0)
at org.junit.Assert.fail(Assert.java:92)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertTrue(Assert.java:54)
at org.apache.solr.cloud.ReplaceNodeTest.test(ReplaceNodeTest.java:82)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)


FAILED:  
org.apache.solr.util.TestMaxTokenLenTokenizer.testSingleFieldSameAnalyzers

Error Message:
Exception during query

Stack Trace:
java.lang.RuntimeException: Exception during query
at 
__randomizedtesting.SeedInfo.seed([1F5880993BA87A28:758ABFF6634BAAE7]:0)
at 

Re: Release planning for 7.0

2017-06-02 Thread Adrien Grand
Hi Anshum, will you branch both branch_7x and branch_7_0 at the same time?
I think this is what we need to do but I'm asking in case you had planned
differently.

On Wed, May 31, 2017 at 20:17, Anshum Gupta  wrote:

> We can certainly hold back cutting the branch, and/or release.
>
> I'm headed to Berlin Buzzwords next week so I'd most probably be cutting
> the branch over the next weekend or during the conference. I'll keep
> everyone posted about the 'when' and if there are more contributors who
> want me to hold back for valid reasons, I'd be happy to do so :).
>
> I'm tracking all blockers (and critical JIRAs) for 7.0, and if anyone
> thinks there's something that must be a part of 7.0, kindly mark the issue
> as a blocker for 7.0.
>
> -Anshum
>
>
> On Tue, May 30, 2017 at 9:27 AM Christine Poerschke (BLOOMBERG/ LONDON) <
> cpoersc...@bloomberg.net> wrote:
>
>> Hi Everyone,
>>
>> Just to say that https://issues.apache.org/jira/browse/SOLR-8668 for
>> the <mergePolicy> removal should complete later this week, hopefully.
>>
>> And on an unrelated note, does anyone have any history or experience with
>> the NOTICE.txt files? Including
>> https://issues.apache.org/jira/browse/LUCENE-7852 in 7.0 would be good i
>> think (though it being a small change the issue would not need to block
>> branch_7x branch cutting).
>>
>> Thanks,
>> Christine
>>
>> From: dev@lucene.apache.org At: 05/03/17 16:56:09
>> To: dev@lucene.apache.org
>> Subject: Re:Release planning for 7.0
>>
>> Hi,
>>
>> It's May already, and with 6.6 lined up, I think we should start planning
>> on how we want to proceed with 7.0, in terms of both - the timeline, and
>> what it would likely contain.
>>
>> I am not suggesting we start the release process right away, but just
>> wanted to start a discussion around the above mentioned lines.
>>
>> With 6.6 in the pipeline, I think sometime in June would be a good time
>> to cut a release branch. What do all of you think?
>>
>> P.S: This email is about 'discussion' and 'planning', so if someone wants
>> to volunteer to be the release manager, please go ahead. I can't remember
>> if someone did explicitly volunteer to wear this hat for 7.0. If no one
>> volunteers, I will take it up.
>>
>> -Anshum
>>
>>


[jira] [Updated] (SOLR-8668) Remove support for <mergePolicy> (in favour of <mergePolicyFactory>)

2017-06-02 Thread Christine Poerschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8668?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Christine Poerschke updated SOLR-8668:
--
Attachment: SOLR-8668.patch

Attaching diff between working branch and master as patch.

> Remove support for <mergePolicy> (in favour of <mergePolicyFactory>)
> 
>
> Key: SOLR-8668
> URL: https://issues.apache.org/jira/browse/SOLR-8668
> Project: Solr
>  Issue Type: Improvement
>Reporter: Shai Erera
>Assignee: Christine Poerschke
>Priority: Blocker
> Fix For: master (7.0)
>
> Attachments: SOLR-8668-part1.patch, SOLR-8668-part1.patch, 
> SOLR-8668.patch
>
>
> Following SOLR-8621, we should remove support for {{<mergePolicy>}} (and 
> related {{<mergeFactor>}} and {{<maxMergeDocs>}}) in trunk/6x.






[JENKINS] Lucene-Solr-master-MacOSX (64bit/jdk1.8.0) - Build # 4043 - Still Unstable!

2017-06-02 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-MacOSX/4043/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseSerialGC

1 tests failed.
FAILED:  org.apache.solr.handler.admin.MetricsHandlerTest.testPropertyFilter

Error Message:


Stack Trace:
java.lang.AssertionError
at 
__randomizedtesting.SeedInfo.seed([89C01A8F4C3FD829:EBADE4CE83B1B817]:0)
at org.junit.Assert.fail(Assert.java:92)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertNotNull(Assert.java:526)
at org.junit.Assert.assertNotNull(Assert.java:537)
at 
org.apache.solr.handler.admin.MetricsHandlerTest.testPropertyFilter(MetricsHandlerTest.java:201)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)




Build Log:
[...truncated 12751 lines...]
   [junit4] Suite: org.apache.solr.handler.admin.MetricsHandlerTest
   [junit4]   2> Creating dataDir: 

[JENKINS] Lucene-Solr-6.x-Linux (32bit/jdk1.8.0_131) - Build # 3644 - Still Unstable!

2017-06-02 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/3644/
Java: 32bit/jdk1.8.0_131 -server -XX:+UseG1GC

1 tests failed.
FAILED:  
org.apache.solr.util.TestMaxTokenLenTokenizer.testSingleFieldSameAnalyzers

Error Message:
Exception during query

Stack Trace:
java.lang.RuntimeException: Exception during query
at 
__randomizedtesting.SeedInfo.seed([2B4999A2AED89FBC:419BA6CDF63B4F73]:0)
at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:895)
at 
org.apache.solr.util.TestMaxTokenLenTokenizer.testSingleFieldSameAnalyzers(TestMaxTokenLenTokenizer.java:104)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.RuntimeException: REQUEST FAILED: 
xpath=//result[@numFound=1]
xml response was: 

022


request was:q=letter0:lett&wt=xml
at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:888)
... 40 more




Build Log:
[...truncated 11229 lines...]
   [junit4] Suite: 

[jira] [Resolved] (SOLR-10741) Create a new method to get slice shards string in HttpShardHandler

2017-06-02 Thread Christine Poerschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10741?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Christine Poerschke resolved SOLR-10741.

   Resolution: Fixed
Fix Version/s: 6.7
   master (7.0)

Thanks [~dmarino]!

> Create a new method to get slice shards string in HttpShardHandler
> --
>
> Key: SOLR-10741
> URL: https://issues.apache.org/jira/browse/SOLR-10741
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Domenico Fabio Marino
>Assignee: Christine Poerschke
>Priority: Minor
> Fix For: master (7.0), 6.7
>
> Attachments: SOLR-10741.patch
>
>
> Extract a method called getSliceShardsStr from prepDistributed() in 
> HttpShardHandler.java.
> This method takes a list of shard URLs and concatenates them into a String, 
> and then returns the String. 
> This could allow further refactoring in the future.
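As a sketch of the extracted helper's contract (not the committed patch): it joins a slice's replica URLs into the single string the shards syntax expects. The '|' separator here is an assumption based on Solr's shards parameter syntax, where '|' separates equivalent replicas of one shard.

```java
import java.util.Arrays;
import java.util.List;

// Illustration of the extracted helper's likely contract; the separator
// choice is an assumption, and this is not the actual SOLR-10741 patch.
public class SliceShards {
    /** Concatenates a slice's replica URLs with '|' between them. */
    static String getSliceShardsStr(List<String> shardUrls) {
        StringBuilder sb = new StringBuilder();
        for (String url : shardUrls) {
            if (sb.length() > 0) sb.append('|');
            sb.append(url);
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        List<String> urls = Arrays.asList(
            "http://host1:8983/solr/c1_shard1_replica1",
            "http://host2:8983/solr/c1_shard1_replica2");
        // prints the two URLs joined by '|'
        System.out.println(getSliceShardsStr(urls));
    }
}
```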






[jira] [Resolved] (SOLR-10790) fix 6 (Recovered) WARNINGs

2017-06-02 Thread Christine Poerschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10790?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Christine Poerschke resolved SOLR-10790.

   Resolution: Fixed
Fix Version/s: 6.7
   master (7.0)

> fix 6 (Recovered) WARNINGs
> --
>
> Key: SOLR-10790
> URL: https://issues.apache.org/jira/browse/SOLR-10790
> Project: Solr
>  Issue Type: Task
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
>Priority: Minor
> Fix For: master (7.0), 6.7
>
> Attachments: SOLR-10790.patch
>
>
> In [~erickerickson]'s notclosed.txt attachment for SOLR-10778, these warnings, 
> which are not about unclosed resources, caught my attention:
> {code}
>  [ecj-lint] 1. WARNING in 
> /Users/Erick/apache/solrJiras/jiramaster/solr/core/src/java/org/apache/solr/cloud/Assign.java
>  (at line 101)
>  [ecj-lint]   Collections.sort(shardIdNames, (o1, o2) -> {
>  [ecj-lint]  ^^^
>  [ecj-lint] (Recovered) Internal inconsistency detected during lambda shape 
> analysis
>  [ecj-lint] --
>  [ecj-lint] 2. WARNING in 
> /Users/Erick/apache/solrJiras/jiramaster/solr/core/src/java/org/apache/solr/cloud/Assign.java
>  (at line 101)
>  [ecj-lint]   Collections.sort(shardIdNames, (o1, o2) -> {
>  [ecj-lint]  ^^^
>  [ecj-lint] (Recovered) Internal inconsistency detected during lambda shape 
> analysis
>  [ecj-lint] --
>  [ecj-lint] 3. WARNING in 
> /Users/Erick/apache/solrJiras/jiramaster/solr/core/src/java/org/apache/solr/cloud/Assign.java
>  (at line 101)
>  [ecj-lint]   Collections.sort(shardIdNames, (o1, o2) -> {
>  [ecj-lint]  ^^^
>  [ecj-lint] (Recovered) Internal inconsistency detected during lambda shape 
> analysis
>  [ecj-lint] --
>  [ecj-lint] --
>  [ecj-lint] 4. WARNING in 
> /Users/Erick/apache/solrJiras/jiramaster/solr/core/src/java/org/apache/solr/cloud/rule/ReplicaAssigner.java
>  (at line 214)
>  [ecj-lint]   Collections.sort(sortedLiveNodes, (n1, n2) -> {
>  [ecj-lint] ^^^
>  [ecj-lint] (Recovered) Internal inconsistency detected during lambda shape 
> analysis
>  [ecj-lint] --
>  [ecj-lint] 5. WARNING in 
> /Users/Erick/apache/solrJiras/jiramaster/solr/core/src/java/org/apache/solr/cloud/rule/ReplicaAssigner.java
>  (at line 214)
>  [ecj-lint]   Collections.sort(sortedLiveNodes, (n1, n2) -> {
>  [ecj-lint] ^^^
>  [ecj-lint] (Recovered) Internal inconsistency detected during lambda shape 
> analysis
>  [ecj-lint] --
>  [ecj-lint] 6. WARNING in 
> /Users/Erick/apache/solrJiras/jiramaster/solr/core/src/java/org/apache/solr/cloud/rule/ReplicaAssigner.java
>  (at line 214)
>  [ecj-lint]   Collections.sort(sortedLiveNodes, (n1, n2) -> {
>  [ecj-lint] ^^^
>  [ecj-lint] (Recovered) Internal inconsistency detected during lambda shape 
> analysis
> {code}
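These warnings come from ecj's type inference on block-bodied comparator lambdas. As a generic illustration only (not the fix that was committed), here is the flagged pattern alongside an equivalent form that avoids the block-bodied lambda:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;

// Generic illustration of the comparator shape flagged by ecj-lint above,
// and an equivalent rewrite; not the change actually committed.
public class LambdaShapeDemo {
    public static void main(String[] args) {
        List<String> shardIdNames =
            new ArrayList<>(Arrays.asList("shard10", "shard2", "shard1"));

        // Block-bodied lambda, similar in shape to the flagged code:
        shardIdNames.sort((o1, o2) -> {
            return o1.compareTo(o2);
        });

        // Equivalent comparator without a block-bodied lambda:
        shardIdNames.sort(Comparator.naturalOrder());

        System.out.println(shardIdNames); // [shard1, shard10, shard2]
    }
}
```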





