[jira] [Commented] (SOLR-9993) Add support for ExpandComponent with PointFields

2017-03-28 Thread Varun Thacker (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15946556#comment-15946556
 ] 

Varun Thacker commented on SOLR-9993:
-

>  I think this is not necessary because TestExpandComponent also tests for 
> "group_i" and "group_f" fields.

You're right. I didn't look at the patch closely enough before commenting.

+1 to commit

> Add support for ExpandComponent with PointFields
> 
>
> Key: SOLR-9993
> URL: https://issues.apache.org/jira/browse/SOLR-9993
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Tomás Fernández Löbbe
>Assignee: Cao Manh Dat
> Attachments: SOLR-9993.patch
>
>
> Followup task of SOLR-8396



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-10263) Different SpellcheckComponents should have their own suggestMode

2017-03-28 Thread Abhishek Kumar Singh (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10263?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15946518#comment-15946518
 ] 

Abhishek Kumar Singh edited comment on SOLR-10263 at 3/29/17 4:42 AM:
--

The _solrconfig.xml_ of *WordBreakSolrSpellChecker* (and later, for all 
the components) can be configured like this:

{code:xml}
<!-- note: element and parameter names are reconstructed; only the values survive in the archived message -->
<lst name="spellchecker">
  <str name="name">spellcheckword</str>
  <str name="classname">solr.WordBreakSolrSpellChecker</str>
  <str name="field">fieldspell</str>
  <str name="combineWords">true</str>
  <str name="breakWords">true</str>
  <int name="maxChanges">10</int>
  <int name="minBreakLength">0</int>
  <str name="suggestMode">SUGGEST_WHEN_NOT_IN_INDEX</str>
</lst>
{code}



was (Author: asingh2411):
The _solrconfig.xml_  *WordBreakSolrSpellChecker* and later, for all the 
components can be configured like this :-

{code:xml}
<!-- note: element and parameter names are reconstructed; only the values survive in the archived message -->
<lst name="spellchecker">
  <str name="name">spellcheckword</str>
  <str name="classname">solr.WordBreakSolrSpellChecker</str>
  <str name="field">fieldspell</str>
  <str name="combineWords">true</str>
  <str name="breakWords">true</str>
  <int name="maxChanges">10</int>
  <int name="minBreakLength">0</int>
  <str name="suggestMode">SUGGEST_WHEN_NOT_IN_INDEX</str>
</lst>
{code}


> Different SpellcheckComponents should have their own suggestMode
> 
>
> Key: SOLR-10263
> URL: https://issues.apache.org/jira/browse/SOLR-10263
> Project: Solr
>  Issue Type: Wish
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: spellchecker
>Reporter: Abhishek Kumar Singh
>Priority: Minor
>
> As of now, common spellcheck options are applied to all the 
> SpellCheckComponents.
> This can create a problem in the following case:
> It may be the case that we want *DirectSolrSpellChecker* to ALWAYS_SUGGEST 
> spellcheck suggestions, 
> but we may want *WordBreakSpellChecker* to suggest only if the token is not 
> in the index (SUGGEST_WHEN_NOT_IN_INDEX). 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-10263) Different SpellcheckComponents should have their own suggestMode

2017-03-28 Thread Abhishek Kumar Singh (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10263?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15946518#comment-15946518
 ] 

Abhishek Kumar Singh edited comment on SOLR-10263 at 3/29/17 4:42 AM:
--

The _solrconfig.xml_ of *WordBreakSolrSpellChecker* (and later, for all 
the components) can be configured like this:

{code:xml}
<!-- note: element and parameter names are reconstructed; only the values survive in the archived message -->
<lst name="spellchecker">
  <str name="name">spellcheckword</str>
  <str name="classname">solr.WordBreakSolrSpellChecker</str>
  <str name="field">fieldspell</str>
  <str name="combineWords">true</str>
  <str name="breakWords">true</str>
  <int name="maxChanges">10</int>
  <int name="minBreakLength">0</int>
  <str name="suggestMode">SUGGEST_WHEN_NOT_IN_INDEX</str>
</lst>
{code}



was (Author: asingh2411):
The _solrconfig.xml_   of*WordBreakSolrSpellChecker* ( and later, for all 
the components) can be configured like this :-

{code:xml}
<!-- note: element and parameter names are reconstructed; only the values survive in the archived message -->
<lst name="spellchecker">
  <str name="name">spellcheckword</str>
  <str name="classname">solr.WordBreakSolrSpellChecker</str>
  <str name="field">fieldspell</str>
  <str name="combineWords">true</str>
  <str name="breakWords">true</str>
  <int name="maxChanges">10</int>
  <int name="minBreakLength">0</int>
  <str name="suggestMode">SUGGEST_WHEN_NOT_IN_INDEX</str>
</lst>
{code}


> Different SpellcheckComponents should have their own suggestMode
> 
>
> Key: SOLR-10263
> URL: https://issues.apache.org/jira/browse/SOLR-10263
> Project: Solr
>  Issue Type: Wish
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: spellchecker
>Reporter: Abhishek Kumar Singh
>Priority: Minor
>
> As of now, common spellcheck options are applied to all the 
> SpellCheckComponents.
> This can create a problem in the following case:
> It may be the case that we want *DirectSolrSpellChecker* to ALWAYS_SUGGEST 
> spellcheck suggestions, 
> but we may want *WordBreakSpellChecker* to suggest only if the token is not 
> in the index (SUGGEST_WHEN_NOT_IN_INDEX). 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10263) Different SpellcheckComponents should have their own suggestMode

2017-03-28 Thread Abhishek Kumar Singh (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10263?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15946518#comment-15946518
 ] 

Abhishek Kumar Singh commented on SOLR-10263:
-

The _solrconfig.xml_ of *WordBreakSolrSpellChecker* (and later, for all the 
components) can be configured like this:

{code:xml}
<!-- note: element and parameter names are reconstructed; only the values survive in the archived message -->
<lst name="spellchecker">
  <str name="name">spellcheckword</str>
  <str name="classname">solr.WordBreakSolrSpellChecker</str>
  <str name="field">fieldspell</str>
  <str name="combineWords">true</str>
  <str name="breakWords">true</str>
  <int name="maxChanges">10</int>
  <int name="minBreakLength">0</int>
  <str name="suggestMode">SUGGEST_WHEN_NOT_IN_INDEX</str>
</lst>
{code}


> Different SpellcheckComponents should have their own suggestMode
> 
>
> Key: SOLR-10263
> URL: https://issues.apache.org/jira/browse/SOLR-10263
> Project: Solr
>  Issue Type: Wish
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: spellchecker
>Reporter: Abhishek Kumar Singh
>Priority: Minor
>
> As of now, common spellcheck options are applied to all the 
> SpellCheckComponents.
> This can create a problem in the following case:
> It may be the case that we want *DirectSolrSpellChecker* to ALWAYS_SUGGEST 
> spellcheck suggestions, 
> but we may want *WordBreakSpellChecker* to suggest only if the token is not 
> in the index (SUGGEST_WHEN_NOT_IN_INDEX). 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-9784) Refactor CloudSolrClient to eliminate direct dependency on ZK

2017-03-28 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9784?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul resolved SOLR-9784.
--
   Resolution: Fixed
Fix Version/s: master (7.0)
   6.4

> Refactor CloudSolrClient to eliminate direct dependency on ZK
> -
>
> Key: SOLR-9784
> URL: https://issues.apache.org/jira/browse/SOLR-9784
> Project: Solr
>  Issue Type: Sub-task
>  Components: SolrJ
>Reporter: Noble Paul
>Assignee: Noble Paul
> Fix For: 6.4, master (7.0)
>
> Attachments: SOLR-9584.patch
>
>
> CloudSolrClient should decouple itself from ZK reads/writes. This will 
> help us provide alternate implementations w/o a direct ZK dependency.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Closed] (SOLR-9784) Refactor CloudSolrClient to eliminate direct dependency on ZK

2017-03-28 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9784?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul closed SOLR-9784.


> Refactor CloudSolrClient to eliminate direct dependency on ZK
> -
>
> Key: SOLR-9784
> URL: https://issues.apache.org/jira/browse/SOLR-9784
> Project: Solr
>  Issue Type: Sub-task
>  Components: SolrJ
>Reporter: Noble Paul
>Assignee: Noble Paul
> Fix For: 6.4, master (7.0)
>
> Attachments: SOLR-9584.patch
>
>
> CloudSolrClient should decouple itself from ZK reads/writes. This will 
> help us provide alternate implementations w/o a direct ZK dependency.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10263) Different SpellcheckComponents should have their own suggestMode

2017-03-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10263?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15946513#comment-15946513
 ] 

ASF GitHub Bot commented on SOLR-10263:
---

Github user abhidemon commented on the issue:

https://github.com/apache/lucene-solr/pull/176
  
For this Issue.  https://issues.apache.org/jira/browse/SOLR-10263


> Different SpellcheckComponents should have their own suggestMode
> 
>
> Key: SOLR-10263
> URL: https://issues.apache.org/jira/browse/SOLR-10263
> Project: Solr
>  Issue Type: Wish
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: spellchecker
>Reporter: Abhishek Kumar Singh
>Priority: Minor
>
> As of now, common spellcheck options are applied to all the 
> SpellCheckComponents.
> This can create a problem in the following case:
> It may be the case that we want *DirectSolrSpellChecker* to ALWAYS_SUGGEST 
> spellcheck suggestions, 
> but we may want *WordBreakSpellChecker* to suggest only if the token is not 
> in the index (SUGGEST_WHEN_NOT_IN_INDEX). 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] lucene-solr issue #176: SOLR-10263 : Override spellcheck's SuggestMode by Wo...

2017-03-28 Thread abhidemon
Github user abhidemon commented on the issue:

https://github.com/apache/lucene-solr/pull/176
  
For this Issue.  https://issues.apache.org/jira/browse/SOLR-10263


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-10349) Add totalTermFreq support to TermsComponent

2017-03-28 Thread Shai Erera (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10349?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shai Erera resolved SOLR-10349.
---
Resolution: Fixed

Pushed to master and branch_6x.

> Add totalTermFreq support to TermsComponent
> ---
>
> Key: SOLR-10349
> URL: https://issues.apache.org/jira/browse/SOLR-10349
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Shai Erera
>Assignee: Shai Erera
>Priority: Minor
> Fix For: master (7.0), 6.6
>
> Attachments: SOLR-10349.patch, SOLR-10349.patch, SOLR-10349.patch
>
>
> See discussion here: http://markmail.org/message/gmpmege2jpfrsp75. Both 
> {{docFreq}} and {{totalTermFreq}} are already available to the 
> TermsComponent; it's just that the component doesn't add the ttf measure to the response.
> This issue adds a new {{terms.ttf}} parameter which, if set to true, results in 
> the following output:
> {noformat}
> 
>   
> 
>   2
>   2
> 
> ...
> {noformat}
> The reason for the new parameter is to not break backward-compatibility, 
> though I wish we could always return those two measures (it doesn't cost us 
> anything, the two are already available to the code). Maybe we can break the 
> response in {{master}} and add this parameter only to {{6x}} as deprecated? I 
> am also fine if we leave it and handle it in a separate issue.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10349) Add totalTermFreq support to TermsComponent

2017-03-28 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10349?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15946504#comment-15946504
 ] 

ASF subversion and git services commented on SOLR-10349:


Commit bcc36b9005afc5a36c1e9fc28ae6a9e5aedcd83d in lucene-solr's branch 
refs/heads/branch_6x from [~shaie]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=bcc36b9 ]

SOLR-10349: Add totalTermFreq support to TermsComponent

TermsComponent only returns docFreq information per requested term.
This commit adds a terms.ttf parameter, which if set to true, will
return both docFreq and totalTermFreq statistics for each requested
term.
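
For reference, a minimal SolrJ sketch of exercising the new parameter; the Solr URL, handler path, and field name below are hypothetical, and the parameter is set through the generic param API rather than a dedicated setter:

{code:java}
import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.response.QueryResponse;

public class TermsTtfExample {
  public static void main(String[] args) throws Exception {
    // Hypothetical collection URL and field name.
    try (SolrClient client =
             new HttpSolrClient.Builder("http://localhost:8983/solr/mycollection").build()) {
      SolrQuery q = new SolrQuery();
      q.setRequestHandler("/terms");   // route to a handler with the TermsComponent enabled
      q.set("terms", true);
      q.set("terms.fl", "title");      // field whose terms should be reported
      q.set("terms.limit", 10);
      q.set("terms.ttf", true);        // new in this commit: report totalTermFreq next to docFreq
      QueryResponse rsp = client.query(q);
      System.out.println(rsp.getResponse().get("terms"));
    }
  }
}
{code}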


> Add totalTermFreq support to TermsComponent
> ---
>
> Key: SOLR-10349
> URL: https://issues.apache.org/jira/browse/SOLR-10349
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Shai Erera
>Assignee: Shai Erera
>Priority: Minor
> Fix For: master (7.0), 6.6
>
> Attachments: SOLR-10349.patch, SOLR-10349.patch, SOLR-10349.patch
>
>
> See discussion here: http://markmail.org/message/gmpmege2jpfrsp75. Both 
> {{docFreq}} and {{totalTermFreq}} are already available to the 
> TermsComponent; it's just that the component doesn't add the ttf measure to the response.
> This issue adds a new {{terms.ttf}} parameter which, if set to true, results in 
> the following output:
> {noformat}
> 
>   
> 
>   2
>   2
> 
> ...
> {noformat}
> The reason for the new parameter is to not break backward-compatibility, 
> though I wish we could always return those two measures (it doesn't cost us 
> anything, the two are already available to the code). Maybe we can break the 
> response in {{master}} and add this parameter only to {{6x}} as deprecated? I 
> am also fine if we leave it and handle it in a separate issue.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10357) When sow=false, edismax query parsers should handle per-fieldtype autoGeneratePhraseQueries by setting QueryBuilder.autoGenerateMultiTermSynonymsQuery

2017-03-28 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10357?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15946497#comment-15946497
 ] 

David Smiley commented on SOLR-10357:
-

Thanks Steve.

> When sow=false, edismax query parsers should handle per-fieldtype 
> autoGeneratePhraseQueries by setting 
> QueryBuilder.autoGenerateMultiTermSynonymsQuery
> ---
>
> Key: SOLR-10357
> URL: https://issues.apache.org/jira/browse/SOLR-10357
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Steve Rowe
>Assignee: Steve Rowe
> Fix For: master (7.0), 6.6
>
> Attachments: SOLR-10357.patch, SOLR-10357.patch, SOLR-10357.patch
>
>
> Right now, the options to not split on whitespace ({{sow=false}}) and to 
> autogenerate phrase queries ({{autoGeneratePhraseQueries="true"}}) will cause 
> queries to throw an exception, since they are incompatible.
> {{QueryBuilder.autoGenerateMultiTermSynonymsPhraseQuery}}, introduced in 
> LUCENE-7638, is the graph query version of Solr's per-fieldtype 
> {{autoGeneratePhraseQueries}} option, and is not incompatible with 
> {{sow=false}}.  
> So {{autoGeneratePhraseQueries="true"}} should cause  
> {{QueryBuilder.autoGenerateMultiTermSynonymsPhraseQuery}} to be set to true 
> when {{sow=false}}, rather than triggering an exception.
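
For illustration, a rough sketch against Lucene's {{QueryBuilder}} of the behavior the description asks for; the setter name is assumed from the property mentioned above (LUCENE-7638), and the analyzer, field, and query text are made up:

{code:java}
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.search.BooleanClause;
import org.apache.lucene.search.Query;
import org.apache.lucene.util.QueryBuilder;

public class AutoPhraseSketch {
  public static void main(String[] args) {
    QueryBuilder qb = new QueryBuilder(new StandardAnalyzer());
    // What autoGeneratePhraseQueries="true" should translate to when sow=false,
    // instead of throwing an exception. Note it only has an effect when the
    // analysis chain emits multi-term synonyms; StandardAnalyzer here is just a placeholder.
    qb.setAutoGenerateMultiTermSynonymsPhraseQuery(true);
    Query q = qb.createBooleanQuery("title_txt", "wi fi network", BooleanClause.Occur.SHOULD);
    System.out.println(q);
  }
}
{code}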



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS-EA] Lucene-Solr-master-Linux (32bit/jdk-9-ea+162) - Build # 19286 - Failure!

2017-03-28 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/19286/
Java: 32bit/jdk-9-ea+162 -client -XX:+UseG1GC

1 tests failed.
FAILED:  
org.apache.solr.cloud.TestSegmentSorting.testAtomicUpdateOfSegmentSortField

Error Message:
Error from server at https://127.0.0.1:40501/solr: Could not fully remove 
collection: testAtomicUpdateOfSegmentSortField

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at https://127.0.0.1:40501/solr: Could not fully remove collection: 
testAtomicUpdateOfSegmentSortField
at 
__randomizedtesting.SeedInfo.seed([EC59391A32E9F0B1:D4B1E1FE4AC6B09]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:627)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:279)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:268)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:439)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:391)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1364)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:1115)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:1054)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:160)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:177)
at 
org.apache.solr.cloud.MiniSolrCloudCluster.deleteAllCollections(MiniSolrCloudCluster.java:437)
at 
org.apache.solr.cloud.TestSegmentSorting.ensureClusterEmpty(TestSegmentSorting.java:63)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:547)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:965)
at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:47)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[JENKINS] Lucene-Solr-6.x-Windows (64bit/jdk1.8.0_121) - Build # 812 - Still Unstable!

2017-03-28 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Windows/812/
Java: 64bit/jdk1.8.0_121 -XX:+UseCompressedOops -XX:+UseSerialGC

1 tests failed.
FAILED:  org.apache.solr.cloud.SolrCLIZkUtilsTest.testCp

Error Message:
Should have found /cp7/conf on Zookeeper

Stack Trace:
java.lang.AssertionError: Should have found /cp7/conf on Zookeeper
at 
__randomizedtesting.SeedInfo.seed([522370DEA1B32E7A:B9CF8D90651A9018]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.cloud.SolrCLIZkUtilsTest$1.checkPathOnZk(SolrCLIZkUtilsTest.java:670)
at 
org.apache.solr.cloud.SolrCLIZkUtilsTest$1.preVisitDirectory(SolrCLIZkUtilsTest.java:686)
at 
org.apache.solr.cloud.SolrCLIZkUtilsTest$1.preVisitDirectory(SolrCLIZkUtilsTest.java:666)
at java.nio.file.Files.walkFileTree(Files.java:2677)
at java.nio.file.Files.walkFileTree(Files.java:2742)
at 
org.apache.solr.cloud.SolrCLIZkUtilsTest.verifyAllFilesAreZNodes(SolrCLIZkUtilsTest.java:666)
at 
org.apache.solr.cloud.SolrCLIZkUtilsTest.verifyZkLocalPathsMatch(SolrCLIZkUtilsTest.java:642)
at 
org.apache.solr.cloud.SolrCLIZkUtilsTest.testCp(SolrCLIZkUtilsTest.java:321)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 

[jira] [Updated] (SOLR-10349) Add totalTermFreq support to TermsComponent

2017-03-28 Thread Shai Erera (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10349?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shai Erera updated SOLR-10349:
--
Fix Version/s: 6.6
   master (7.0)

> Add totalTermFreq support to TermsComponent
> ---
>
> Key: SOLR-10349
> URL: https://issues.apache.org/jira/browse/SOLR-10349
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Shai Erera
>Assignee: Shai Erera
>Priority: Minor
> Fix For: master (7.0), 6.6
>
> Attachments: SOLR-10349.patch, SOLR-10349.patch, SOLR-10349.patch
>
>
> See discussion here: http://markmail.org/message/gmpmege2jpfrsp75. Both 
> {{docFreq}} and {{totalTermFreq}} are already available to the 
> TermsComponent; it's just that the component doesn't add the ttf measure to the response.
> This issue adds a new {{terms.ttf}} parameter which, if set to true, results in 
> the following output:
> {noformat}
> 
>   
> 
>   2
>   2
> 
> ...
> {noformat}
> The reason for the new parameter is to not break backward-compatibility, 
> though I wish we could always return those two measures (it doesn't cost us 
> anything, the two are already available to the code). Maybe we can break the 
> response in {{master}} and add this parameter only to {{6x}} as deprecated? I 
> am also fine if we leave it and handle it in a separate issue.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-9985) LukeRequestHandler doesn’t populate docFreq for PointFields

2017-03-28 Thread Cao Manh Dat (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9985?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cao Manh Dat updated SOLR-9985:
---
Attachment: SOLR-9985.patch

Can you take a look at the patch [~ab] ?

> LukeRequestHandler doesn’t populate docFreq for PointFields
> ---
>
> Key: SOLR-9985
> URL: https://issues.apache.org/jira/browse/SOLR-9985
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Tomás Fernández Löbbe
>Priority: Minor
> Attachments: SOLR-9985.patch
>
>
> Followup task of SOLR-8396



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10349) Add totalTermFreq support to TermsComponent

2017-03-28 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10349?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15946463#comment-15946463
 ] 

ASF subversion and git services commented on SOLR-10349:


Commit deddc9b5c8d8c2859469583fa8b956be48efff82 in lucene-solr's branch 
refs/heads/master from [~shaie]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=deddc9b ]

SOLR-10349: Add totalTermFreq support to TermsComponent

TermsComponent only returns docFreq information per requested term.
This commit adds a terms.ttf parameter, which if set to true, will
return both docFreq and totalTermFreq statistics for each requested
term.


> Add totalTermFreq support to TermsComponent
> ---
>
> Key: SOLR-10349
> URL: https://issues.apache.org/jira/browse/SOLR-10349
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Shai Erera
>Assignee: Shai Erera
>Priority: Minor
> Attachments: SOLR-10349.patch, SOLR-10349.patch, SOLR-10349.patch
>
>
> See discussion here: http://markmail.org/message/gmpmege2jpfrsp75. Both 
> {{docFreq}} and {{totalTermFreq}} are already available to the 
> TermsComponent; it's just that the component doesn't add the ttf measure to the response.
> This issue adds a new {{terms.ttf}} parameter which, if set to true, results in 
> the following output:
> {noformat}
> 
>   
> 
>   2
>   2
> 
> ...
> {noformat}
> The reason for the new parameter is to not break backward-compatibility, 
> though I wish we could always return those two measures (it doesn't cost us 
> anything, the two are already available to the code). Maybe we can break the 
> response in {{master}} and add this parameter only to {{6x}} as deprecated? I 
> am also fine if we leave it and handle it in a separate issue.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] lucene-solr pull request #177: Jira/solr 6203

2017-03-28 Thread jitka18
GitHub user jitka18 opened a pull request:

https://github.com/apache/lucene-solr/pull/177

Jira/solr 6203

Christine, I created a fork of the repo and updated our branch from master 
last weekend.  There were conflicts in SearchGroupsResultTransformer.java, 
which I resolved.  After that I committed the update to 
DistributedQueryComponentCustomSortTest.java which was the subject of my most 
recent patch.  I ran 'ant clean compile' and all tests passed.

Judith

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/jitka18/lucene-solr jira/solr-6203

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/lucene-solr/pull/177.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #177


commit 2e56c0e50564c8feeeb2831dd36cff1e9b23a00f
Author: Mike McCandless 
Date:   2017-02-24T22:00:45Z

LUCENE-7707: add explicit boolean to TopDocs.merge to govern whether 
incoming or implicit shard index should be used

commit cab3aae11dd6e781acabf513095eb11606feddde
Author: Mike McCandless 
Date:   2017-02-24T22:13:49Z

LUCENE-7710: BlockPackedReader now throws CorruptIndexException if 
bitsPerValue is out of bounds, not generic IOException

commit 57a42e4ec54aebac40c1ef7dc93d933cd00dbe1e
Author: Jim Ferenczi 
Date:   2017-02-24T22:37:37Z

LUCENE-7708: Fix position length attribute set by the ShingleFilter when 
outputUnigrams=false

commit 30125f99daf38c4788a9763a89fddb3730c709af
Author: Jan Høydahl 
Date:   2017-02-24T23:43:42Z

Revert "SOLR-9640: Support PKI authentication and SSL in standalone-mode 
master/slave auth with local security.json"

This reverts commit 95d6fc2512d6525b2354165553f0d6cc4d0d6310.

commit 39887b86297e36785607f57cfd0e785bcae3c61a
Author: Tomas Fernandez Lobbe 
Date:   2017-02-25T01:33:12Z

SOLR-10190: Fix NPE in CloudSolrClient when reading stale alias

This closes #160

commit 99e8ef2304b67712d45a2393e649c5319aaac972
Author: Tomas Fernandez Lobbe 
Date:   2017-02-25T01:37:44Z

SOLR-10190: Fixed assert message

commit 6f3f6a2d66d107e94d723a1f931da0b7bdb06928
Author: Uwe Schindler 
Date:   2017-02-25T19:59:26Z

Fix Java 9 b158+ problem (no compatibility layer for non expanded paths 
anymore)

commit ea37b9ae870257c943bdc8c2896f1238a4dc94b6
Author: Uwe Schindler 
Date:   2017-02-25T20:15:09Z

SOLR-10158: Add support for "preload" option in MMapDirectoryFactory

commit 048b24c64a9b1a41e2f7c2bd3ca1c818ddd916df
Author: Christine Poerschke 
Date:   2017-02-27T12:31:25Z

SOLR-10192: Fix copy/paste in solr-ltr pom.xml template.

commit a248e6e3c080cfe6deb873d1ef114e4b9c1c043d
Author: Andrzej Bialecki 
Date:   2017-02-27T13:39:13Z

SOLR-10182 Remove metrics collection at Directory level.

commit 0c1fde664fb1c9456b3fdc2abd08e80dc8f86eb8
Author: Joel Bernstein 
Date:   2017-02-27T17:03:03Z

SOLR-10208: Adjust scoring formula for the scoreNodes function

commit 0f5875b735d889ad41f22315b00ba5451ac9ad1a
Author: Varun Thacker 
Date:   2017-02-28T01:40:57Z

SOLR-7453: Remove replication & backup scripts in the solr/scripts 
directory of the checkout

commit 86b5b6330fda49f7dc6114dac03fef9fd0caea96
Author: markrmiller 
Date:   2017-02-28T03:51:02Z

tests: raise timeout

commit ed0f0f45ce17e2218ec2e97aab2fcb1a0d4defa6
Author: markrmiller 
Date:   2017-02-28T03:55:10Z

SOLR-10207: Harden CleanupOldIndexTest.

commit 04ba9968c0686a5fa1a9c5d89a7cd92839902f32
Author: markrmiller 
Date:   2017-02-28T04:41:30Z

SOLR-10196: ElectionContext#runLeaderProcess can hit NPE on core close.

commit b6c5a8a0c1c6b93b36a57921b06346b577251439
Author: Adrien Grand 
Date:   2017-02-28T10:53:50Z

Avoid infinite loop in TestFuzzyQuery.

commit d9c0f2599d934766549b2566d7c0dd159c3af5c8
Author: Adrien Grand 
Date:   2017-02-22T15:11:52Z

LUCENE-7703: Record the index creation version.

commit c7fd1437706a21d0571c5fced2e2e734563fa895
Author: Adrien Grand 
Date:   2017-02-28T12:38:04Z

LUCENE-7709: Remove unused backward compatibility logic.

commit 8e65aca0e1e08c8f3e3d53e2561b8cd09a5e1a22
Author: Adrien Grand 
Date:   2017-02-28T12:38:55Z

LUCENE-7716: Reduce specialization in TopFieldCollector.

commit df6f83072303b4891a296b700a50c743284d3c30
Author: Adrien Grand 
Date:   2017-02-28T13:21:30Z

LUCENE-7410: Make cache keys and close listeners less trappy.

commit 0010867a631ced339ed9240f573d5e99cad282cf
Author: Ishan 

[GitHub] lucene-solr issue #174: Move Ukrainian dictionary to external dependency

2017-03-28 Thread arysin
Github user arysin commented on the issue:

https://github.com/apache/lucene-solr/pull/174
  
Closing - will be replaced by new pull with newer version of the dictionary 


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] lucene-solr pull request #174: Move Ukrainian dictionary to external depende...

2017-03-28 Thread arysin
Github user arysin closed the pull request at:

https://github.com/apache/lucene-solr/pull/174


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-Windows (64bit/jdk1.8.0_121) - Build # 6482 - Still Unstable!

2017-03-28 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/6482/
Java: 64bit/jdk1.8.0_121 -XX:+UseCompressedOops -XX:+UseG1GC

1 tests failed.
FAILED:  org.apache.solr.cloud.SolrCLIZkUtilsTest.testCp

Error Message:
Should have found /cp7/conf on Zookeeper

Stack Trace:
java.lang.AssertionError: Should have found /cp7/conf on Zookeeper
at 
__randomizedtesting.SeedInfo.seed([65C25EBE642D0428:8E2EA3F0A084BA4A]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.cloud.SolrCLIZkUtilsTest$1.checkPathOnZk(SolrCLIZkUtilsTest.java:670)
at 
org.apache.solr.cloud.SolrCLIZkUtilsTest$1.preVisitDirectory(SolrCLIZkUtilsTest.java:686)
at 
org.apache.solr.cloud.SolrCLIZkUtilsTest$1.preVisitDirectory(SolrCLIZkUtilsTest.java:666)
at java.nio.file.Files.walkFileTree(Files.java:2677)
at java.nio.file.Files.walkFileTree(Files.java:2742)
at 
org.apache.solr.cloud.SolrCLIZkUtilsTest.verifyAllFilesAreZNodes(SolrCLIZkUtilsTest.java:666)
at 
org.apache.solr.cloud.SolrCLIZkUtilsTest.verifyZkLocalPathsMatch(SolrCLIZkUtilsTest.java:642)
at 
org.apache.solr.cloud.SolrCLIZkUtilsTest.testCp(SolrCLIZkUtilsTest.java:321)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 

[jira] [Commented] (SOLR-10079) TestInPlaceUpdates(Distrib|Standalone) failures

2017-03-28 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10079?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15946334#comment-15946334
 ] 

ASF subversion and git services commented on SOLR-10079:


Commit 144091ad2957d59f83d59c7fcb1afeda65b0f914 in lucene-solr's branch 
refs/heads/master from [~caomanhdat]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=144091a ]

SOLR-10079: TestInPlaceUpdates(Distrib|Standalone) failures


> TestInPlaceUpdates(Distrib|Standalone) failures
> ---
>
> Key: SOLR-10079
> URL: https://issues.apache.org/jira/browse/SOLR-10079
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Steve Rowe
>Assignee: Cao Manh Dat
> Fix For: master (7.0), branch_6x
>
> Attachments: SOLR-10079.patch, SOLR-10079.patch, stdout, 
> tests-failures.txt
>
>
> From [https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/18881/], 
> reproduces for me:
> {noformat}
> Checking out Revision d8d61ff61d1d798f5e3853ef66bc485d0d403f18 
> (refs/remotes/origin/master)
> [...]
>[junit4]   2> NOTE: reproduce with: ant test  
> -Dtestcase=TestInPlaceUpdatesDistrib -Dtests.method=test 
> -Dtests.seed=E1BB56269B8215B0 -Dtests.multiplier=3 -Dtests.slow=true 
> -Dtests.locale=sr-Latn-RS -Dtests.timezone=America/Grand_Turk 
> -Dtests.asserts=true -Dtests.file.encoding=UTF-8
>[junit4] FAILURE 77.7s J2 | TestInPlaceUpdatesDistrib.test <<<
>[junit4]> Throwable #1: java.lang.AssertionError: Earlier: [79, 79, 
> 79], now: [78, 78, 78]
>[junit4]>  at 
> __randomizedtesting.SeedInfo.seed([E1BB56269B8215B0:69EF69FC357E7848]:0)
>[junit4]>  at 
> org.apache.solr.update.TestInPlaceUpdatesDistrib.ensureRtgWorksWithPartialUpdatesTest(TestInPlaceUpdatesDistrib.java:425)
>[junit4]>  at 
> org.apache.solr.update.TestInPlaceUpdatesDistrib.test(TestInPlaceUpdatesDistrib.java:142)
>[junit4]>  at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>[junit4]>  at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>[junit4]>  at 
> java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>[junit4]>  at 
> java.base/java.lang.reflect.Method.invoke(Method.java:543)
>[junit4]>  at 
> org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:985)
>[junit4]>  at 
> org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:960)
>[junit4]>  at java.base/java.lang.Thread.run(Thread.java:844)
> [...]
>[junit4]   2> NOTE: test params are: codec=Asserting(Lucene70): 
> {id_i=PostingsFormat(name=LuceneFixedGap), title_s=FSTOrd50, 
> id=PostingsFormat(name=Asserting), 
> id_field_copy_that_does_not_support_in_place_update_s=FSTOrd50}, 
> docValues:{inplace_updatable_float=DocValuesFormat(name=Asserting), 
> id_i=DocValuesFormat(name=Direct), _version_=DocValuesFormat(name=Asserting), 
> title_s=DocValuesFormat(name=Lucene70), id=DocValuesFormat(name=Lucene70), 
> id_field_copy_that_does_not_support_in_place_update_s=DocValuesFormat(name=Lucene70),
>  inplace_updatable_int_with_default=DocValuesFormat(name=Asserting), 
> inplace_updatable_int=DocValuesFormat(name=Direct), 
> inplace_updatable_float_with_default=DocValuesFormat(name=Direct)}, 
> maxPointsInLeafNode=1342, maxMBSortInHeap=6.368734895089348, 
> sim=RandomSimilarity(queryNorm=true): {}, locale=sr-Latn-RS, 
> timezone=America/Grand_Turk
>[junit4]   2> NOTE: Linux 4.4.0-53-generic i386/Oracle Corporation 9-ea 
> (32-bit)/cpus=12,threads=1,free=107734480,total=518979584
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-10382) Documents UI screen still encourages index time doc boosting

2017-03-28 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10382?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man updated SOLR-10382:

Component/s: Admin UI

> Documents UI screen still encourages index time doc boosting
> 
>
> Key: SOLR-10382
> URL: https://issues.apache.org/jira/browse/SOLR-10382
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Admin UI
>Reporter: Hoss Man
>
> LUCENE-6819 deprecated index time boosts, but the "Documents" screen in the 
> Solr Admin UI still suggests this option to users...
> {noformat}
> hossman@tray:~/lucene/dev [detached] $ find solr/webapp/web/ -name \*.html | 
> xargs grep Boost | grep 1.0
> solr/webapp/web/tpl/documents.html: <input type="text" id="boost" value="1.0" title="Document Boost">
> solr/webapp/web/partials/documents.html: <input type="text" id="boost" value="1.0" title="Document Boost">
> {noformat}
> Once this is fixed, the Admin UI screenshot needs to be updated as well.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7452) json facet api returning inconsistent counts in cloud set up

2017-03-28 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7452?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15946251#comment-15946251
 ] 

ASF subversion and git services commented on SOLR-7452:
---

Commit f36b2bfbb9ea77f4232bae35e49ce6a6241886de in lucene-solr's branch 
refs/heads/branch_6x from [~yo...@apache.org]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=f36b2bf ]

SOLR-7452: change terminology from _m missing-bucket to _p partial-bucket for 
refinement


> json facet api returning inconsistent counts in cloud set up
> 
>
> Key: SOLR-7452
> URL: https://issues.apache.org/jira/browse/SOLR-7452
> Project: Solr
>  Issue Type: Bug
>  Components: Facet Module
>Affects Versions: 5.1
>Reporter: Vamsi Krishna D
>  Labels: count, facet, sort
> Attachments: SOLR-7452.patch, SOLR-7452.patch
>
>   Original Estimate: 96h
>  Remaining Estimate: 96h
>
> While using the newly added JSON term facet API 
> (http://yonik.com/json-facet-api/#TermsFacet) I am encountering inconsistent 
> counts for faceted values (note I am running Solr in cloud mode). 
> For example, consider that I have txns_id (a unique field or key), 
> consumer_number and amount. Now, for 10 million such records, let's say I 
> query for 
> q=*:*=0&
>  json.facet={
>biskatoo:{
>type : terms,
>field : consumer_number,
>limit : 20,
>   sort : {y:desc},
>   numBuckets : true,
>   facet:{
>y : "sum(amount)"
>}
>}
>  }
> the results are as follows ( some are omitted ):
> "facets":{
> "count":6641277,
> "biskatoo":{
>   "numBuckets":3112708,
>   "buckets":[{
>   "val":"surya",
>   "count":4,
>   "y":2.264506},
>   {
>   "val":"raghu",
>   "COUNT":3,   // capitalised for recognition 
>   "y":1.8},
> {
>   "val":"malli",
>   "count":4,
>   "y":1.78}]}}}
> but if i restrict the query to 
> q=consumer_number:raghu=0&
>  json.facet={
>biskatoo:{
>type : terms,
>field : consumer_number,
>limit : 20,
>   sort : {y:desc},
>   numBuckets : true,
>   facet:{
>y : "sum(amount)"
>}
>}
>  }
> i get :
>   "facets":{
> "count":4,
> "biskatoo":{
>   "numBuckets":1,
>   "buckets":[{
>   "val":"raghu",
>   "COUNT":4,
>   "y":2429708.24}]}}}
> One can see the count results are inconsistent (and I found many occasions 
> of inconsistency).
> I have tried the patch https://issues.apache.org/jira/browse/SOLR-7412 but 
> the issue still seems unresolved.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7452) json facet api returning inconsistent counts in cloud set up

2017-03-28 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7452?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15946246#comment-15946246
 ] 

ASF subversion and git services commented on SOLR-7452:
---

Commit 66bfdcbdbab8f294341232946a30a61898228a34 in lucene-solr's branch 
refs/heads/master from [~yo...@apache.org]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=66bfdcb ]

SOLR-7452: change terminology from _m missing-bucket to _p partial-bucket for 
refinement


> json facet api returning inconsistent counts in cloud set up
> 
>
> Key: SOLR-7452
> URL: https://issues.apache.org/jira/browse/SOLR-7452
> Project: Solr
>  Issue Type: Bug
>  Components: Facet Module
>Affects Versions: 5.1
>Reporter: Vamsi Krishna D
>  Labels: count, facet, sort
> Attachments: SOLR-7452.patch, SOLR-7452.patch
>
>   Original Estimate: 96h
>  Remaining Estimate: 96h
>
> While using the newly added JSON term facet API 
> (http://yonik.com/json-facet-api/#TermsFacet) I am encountering inconsistent 
> counts for faceted values (note I am running Solr in cloud mode). 
> For example, consider that I have txns_id (a unique field or key), 
> consumer_number and amount. Now, for 10 million such records, let's say I 
> query for 
> q=*:*=0&
>  json.facet={
>biskatoo:{
>type : terms,
>field : consumer_number,
>limit : 20,
>   sort : {y:desc},
>   numBuckets : true,
>   facet:{
>y : "sum(amount)"
>}
>}
>  }
> the results are as follows ( some are omitted ):
> "facets":{
> "count":6641277,
> "biskatoo":{
>   "numBuckets":3112708,
>   "buckets":[{
>   "val":"surya",
>   "count":4,
>   "y":2.264506},
>   {
>   "val":"raghu",
>   "COUNT":3,   // capitalised for recognition 
>   "y":1.8},
> {
>   "val":"malli",
>   "count":4,
>   "y":1.78}]}}}
> but if i restrict the query to 
> q=consumer_number:raghu=0&
>  json.facet={
>biskatoo:{
>type : terms,
>field : consumer_number,
>limit : 20,
>   sort : {y:desc},
>   numBuckets : true,
>   facet:{
>y : "sum(amount)"
>}
>}
>  }
> i get :
>   "facets":{
> "count":4,
> "biskatoo":{
>   "numBuckets":1,
>   "buckets":[{
>   "val":"raghu",
>   "COUNT":4,
>   "y":2429708.24}]}}}
> One can see the count results are inconsistent (and I found many occasions 
> of inconsistency).
> I have tried the patch https://issues.apache.org/jira/browse/SOLR-7412 but 
> the issue still seems unresolved.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-10382) Documents UI screen still encourages index time doc boosting

2017-03-28 Thread Hoss Man (JIRA)
Hoss Man created SOLR-10382:
---

 Summary: Documents UI screen still encourages index time doc 
boosting
 Key: SOLR-10382
 URL: https://issues.apache.org/jira/browse/SOLR-10382
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Hoss Man


LUCENE-6819 deprecated index time boosts, but the "Documents" screen in the 
Solr Admin UI still suggests this option to users...

{noformat}
hossman@tray:~/lucene/dev [detached] $ find solr/webapp/web/ -name \*.html | 
xargs grep Boost | grep 1.0
solr/webapp/web/tpl/documents.html: <input type="text" id="boost" value="1.0" title="Document Boost">
solr/webapp/web/partials/documents.html: <input type="text" id="boost" value="1.0" title="Document Boost">
{noformat}

Once this is fixed, the Admin UI screenshot needs to be updated as well.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9959) SolrInfoMBean-s category and hierarchy cleanup

2017-03-28 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15946139#comment-15946139
 ] 

Hoss Man commented on SOLR-9959:



bq. So it's not true that exactly the same info is still available from 
/admin/metrics as in /admin/mbeans?stats=true.

Sure -- I wasn't suggesting that the *exact* same info was available (with the 
exact same names) -- my point is that as things stand on the branch, from a 
user's point of view:
* {{admin/mbeans}} is virtually useless
* the {{stats=true}} param was explicitly removed
* there is no obviously straightforward replacement for users (particularly 
for the UI, and in conjunction with the {{diff}} option).

Meanwhile, as a developer: there seems to be a relatively straightforward way 
to keep *most* of the existing {{admin/mbeans}} functionality working (probably 
requiring clients to only make some minor tweaks to the "stat" names they 
expect from each "key" in the {{infoRegistry}}) for the foreseeable future -- 
so why not make it work...
{quote}
* add a {{default MetricsMap getMetricsMap() \{return null;\}}} to 
{{SolrInfoBean}} (to replace {{getStatistics()}})
* any class implementing both {{SolrInfoBean}} and {{SolrMetricProducer}} 
_could_ implement {{initializeMetrics(...)}} such that it keeps a private 
reference to a {{MetricsMap}} it registers & return that from 
{{getMetricsMap()}}
** many of the {{SolrInfoBean}} classes already seem to be maintaining a 
{{private MetricsMap metricsMap;}} that is assigned in 
{{initializeMetrics(...)}} but never used (in the class) after that
* {{/admin/mbeans}} can call {{getMetricsMap()}} on each {{SolrInfoBean}} it 
loops over if {{stats=true}}
{quote}
...that seems easier than implementing equivalent Admin UI functionality based 
on {{admin/metrics}} (particularly in combination with the "diff" support) and 
would have other wins for other existing programmatic consumers of 
{{/admin/mbeans}}.
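
As a rough, self-contained sketch of the shape being proposed in the quoted list above (the interfaces below are simplified, hypothetical stand-ins for Solr's {{SolrInfoBean}}, {{SolrMetricProducer}} and {{MetricsMap}}, not the real classes):

{code:java}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Stand-in for org.apache.solr.metrics.MetricsMap (a Gauge over a map of stats).
interface MetricsMapLike {
  Map<String, Object> getValue();
}

// Stand-in for SolrInfoBean: a default no-op accessor replaces the removed getStatistics().
interface InfoBeanLike {
  String getName();
  default MetricsMapLike getMetricsMap() {
    return null; // components that expose no stats keep the default
  }
}

class ExampleHandler implements InfoBeanLike {
  // Kept around from initializeMetrics(...) so /admin/mbeans?stats=true can still read it.
  private final Map<String, Object> stats = new ConcurrentHashMap<>();

  ExampleHandler() { stats.put("requests", 0L); }

  @Override public String getName() { return "exampleHandler"; }
  @Override public MetricsMapLike getMetricsMap() { return () -> stats; }
}

public class MBeansStatsSketch {
  public static void main(String[] args) {
    InfoBeanLike bean = new ExampleHandler();
    MetricsMapLike mm = bean.getMetricsMap();
    // What an /admin/mbeans?stats=true style handler could loop over per registered bean.
    System.out.println(bean.getName() + " -> " + (mm == null ? "no stats" : mm.getValue()));
  }
}
{code}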

bq. I added metrics-core as a dep. to DIH because IntelliJ complained about not 
being able to access it when registering a MetricsMap, which is a Gauge - I 
guess it wants to be able to access all parent classes of the classes 
referenced here?

I don't know about IntelliJ or what it was trying to do, but you may want to 
double check that - it definitely wasn't needed in ivy (I tried removing it and 
everything compiled fine). 

bq. ...very good points, we should make it work in a similar way, eg. 
instantiating the reporter and server when a {{<jmx/>}} element is present even if 
no explicit SolrJmxReporter config is present. 

To be clear: the {{<jmx/>}} element is solrconfig.xml specific, and the 
existing "default" logic (when no {{<jmx/>}} exists) is probably _not_ the 
best given Solr is now a standalone app.

It seems like it makes sense going forward to keep metrics reporters like 
{{SolrJmxReporter}} configured at the solr.xml/container level -- so with 7.0 
and on, I definitely don't think we should use *exactly* the same defaults as 
how it's worked in the past.  We can let our defaults be driven by the 
existence of an MBeanServer already created by the JVM based on 
startup options, but we definitely should *NOT* be calling 
{{ManagementFactory.getPlatformMBeanServer()}} and _forcing_ the JVM to create 
a platform MBeanServer.

bq. some of the metrics that we really wanted to report (eg. number of open 
file descriptors) used to be accessible via reflection, which is a no-no in JDK 
9 - but they are exposed via platform MBeans. If we need to be more careful 
about starting up the server then we have to make these metrics optional (which 
unfortunately also means that we can't depend on them being always present).

Please note there is an important difference between the platform *MBeans* APIs 
and the (virtually identical) platform *MXBeans* APIs ... I'm not an expert but 
one thing I do know is that accessing the platform *MBeans* requires an 
MBeanServer, but you can access MXBeans directly from the individual static 
methods in {{ManagementFactory}}.

Hence my point about taking a look at SystemInfoHandler: it uses 
{{ManagementFactory.getOperatingSystemMXBean();}} to fetch an 
{{OperatingSystemMXBean}} (as well as using {{Class.forName()}} to try and load 
some JVM vendor-specific MXBeans), which gives access to all the same 
info as the {{OperatingSystemMBean}} currently used in the 
{{OperatingSystemMetricSet}} -- *w/o needing an MBeanServer to be running*.  
This bean introspection code works in jdk9 just fine...

{noformat}
...
  Linux
  amd64
  4
  0.74
  3.19.0-51-generic
  4325052416
  3611639808
  16437768192
  0.010957238841522514
  1151000
  0.19951374172457412
  16513077248
  16860049408
  65536
  170
...
{noformat}
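
A minimal sketch of that MXBean-based access (no MBeanServer needed; the {{com.sun}} cast assumes an Oracle/OpenJDK-style JVM, which is why it has to stay optional):

{code:java}
import java.lang.management.ManagementFactory;
import java.lang.management.OperatingSystemMXBean;

public class OsMxBeanSketch {
  public static void main(String[] args) {
    // standard platform MXBean, available without forcing a platform MBeanServer
    OperatingSystemMXBean os = ManagementFactory.getOperatingSystemMXBean();
    System.out.println(os.getName() + " " + os.getArch()
        + " cpus=" + os.getAvailableProcessors()
        + " load=" + os.getSystemLoadAverage());
    // vendor-specific extension (e.g. open file descriptors) -- not present on every JVM
    if (os instanceof com.sun.management.UnixOperatingSystemMXBean) {
      System.out.println("openFileDescriptorCount="
          + ((com.sun.management.UnixOperatingSystemMXBean) os).getOpenFileDescriptorCount());
    }
  }
}
{code}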

Similar info about BufferPools (the other place 
{{SolrDispatchFilter.setupJvmMetrics()}} uses the MBeanServer) should be 
available via {{BufferPoolMXBean}} -- see the sketch below for an example of how 
they can be read.
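
A hedged sketch of reading the same buffer pool info directly from the platform MXBeans (again, no MBeanServer required):

{code:java}
import java.lang.management.BufferPoolMXBean;
import java.lang.management.ManagementFactory;

public class BufferPoolSketch {
  public static void main(String[] args) {
    // iterates the "direct" and "mapped" pools, fetched without an MBeanServer
    for (BufferPoolMXBean pool : ManagementFactory.getPlatformMXBeans(BufferPoolMXBean.class)) {
      System.out.println(pool.getName()
          + " count=" + pool.getCount()
          + " used=" + pool.getMemoryUsed()
          + " capacity=" + pool.getTotalCapacity());
    }
  }
}
{code}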

[jira] [Created] (SOLR-10381) StatelessScriptUpdateProcessor should not require core reload when script is changed

2017-03-28 Thread Ishan Chattopadhyaya (JIRA)
Ishan Chattopadhyaya created SOLR-10381:
---

 Summary: StatelessScriptUpdateProcessor should not require core 
reload when script is changed
 Key: SOLR-10381
 URL: https://issues.apache.org/jira/browse/SOLR-10381
 Project: Solr
  Issue Type: Task
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Ishan Chattopadhyaya


If someone wishes to modify their script, a core reload is currently necessary. We should 
make it so that changes made to the scripts take effect without 
requiring a core reload.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (SOLR-10357) When sow=false, edismax query parsers should handle per-fieldtype autoGeneratePhraseQueries by setting QueryBuilder.autoGenerateMultiTermSynonymsQuery

2017-03-28 Thread Steve Rowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10357?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Rowe reassigned SOLR-10357:
-

Assignee: Steve Rowe

> When sow=false, edismax query parsers should handle per-fieldtype 
> autoGeneratePhraseQueries by setting 
> QueryBuilder.autoGenerateMultiTermSynonymsQuery
> ---
>
> Key: SOLR-10357
> URL: https://issues.apache.org/jira/browse/SOLR-10357
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Steve Rowe
>Assignee: Steve Rowe
> Fix For: master (7.0), 6.6
>
> Attachments: SOLR-10357.patch, SOLR-10357.patch, SOLR-10357.patch
>
>
> Right now, the options to not split on whitespace ({{sow=false}}) and to 
> autogenerate phrase queries ({{autoGeneratePhraseQueries="true"}}) will cause 
> queries to throw an exception, since they are incompatible.
> {{QueryBuilder.autoGenerateMultiTermSynonymsPhraseQuery}}, introduced in 
> LUCENE-7638, is the graph query version of Solr's per-fieldtype 
> {{autoGeneratePhraseQueries}} option, and is not incompatible with 
> {{sow=false}}.  
> So {{autoGeneratePhraseQueries="true"}} should cause  
> {{QueryBuilder.autoGenerateMultiTermSynonymsPhraseQuery}} to be set to true 
> when {{sow=false}}, rather than triggering an exception.
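
A hedged sketch of what that behaviour could look like in the parser (not the committed patch; the surrounding parser plumbing is assumed, with {{QueryBuilder}} from {{org.apache.lucene.util}} and {{TextField}}/{{FieldType}} from {{org.apache.solr.schema}}):

{code:java}
// Hedged sketch, not the committed patch: propagate the per-fieldtype option to
// QueryBuilder instead of throwing when sow=false.
void applyAutoGeneratePhraseQueries(QueryBuilder builder, FieldType type, boolean splitOnWhitespace) {
  if (!splitOnWhitespace && type instanceof TextField) {
    // graph-aware equivalent introduced in LUCENE-7638
    builder.setAutoGenerateMultiTermSynonymsPhraseQuery(((TextField) type).getAutoGeneratePhraseQueries());
  }
}
{code}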



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-10357) When sow=false, edismax query parsers should handle per-fieldtype autoGeneratePhraseQueries by setting QueryBuilder.autoGenerateMultiTermSynonymsQuery

2017-03-28 Thread Steve Rowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10357?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Rowe resolved SOLR-10357.
---
   Resolution: Fixed
Fix Version/s: 6.6
   master (7.0)

> When sow=false, edismax query parsers should handle per-fieldtype 
> autoGeneratePhraseQueries by setting 
> QueryBuilder.autoGenerateMultiTermSynonymsQuery
> ---
>
> Key: SOLR-10357
> URL: https://issues.apache.org/jira/browse/SOLR-10357
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Steve Rowe
> Fix For: master (7.0), 6.6
>
> Attachments: SOLR-10357.patch, SOLR-10357.patch, SOLR-10357.patch
>
>
> Right now, the options to not split on whitespace ({{sow=false}}) and to 
> autogenerate phrase queries ({{autoGeneratePhraseQueries="true"}}) will cause 
> queries to throw an exception, since they are incompatible.
> {{QueryBuilder.autoGenerateMultiTermSynonymsPhraseQuery}}, introduced in 
> LUCENE-7638, is the graph query version of Solr's per-fieldtype 
> {{autoGeneratePhraseQueries}} option, and is not incompatible with 
> {{sow=false}}.  
> So {{autoGeneratePhraseQueries="true"}} should cause  
> {{QueryBuilder.autoGenerateMultiTermSynonymsPhraseQuery}} to be set to true 
> when {{sow=false}}, rather than triggering an exception.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10357) When sow=false, edismax query parsers should handle per-fieldtype autoGeneratePhraseQueries by setting QueryBuilder.autoGenerateMultiTermSynonymsQuery

2017-03-28 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10357?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15946111#comment-15946111
 ] 

ASF subversion and git services commented on SOLR-10357:


Commit da2cfda02fe539c42f1794fc56a478a3acc7d111 in lucene-solr's branch 
refs/heads/branch_6x from [~steve_rowe]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=da2cfda ]

SOLR-10357: Enable edismax and standard query parsers to handle the option 
combination sow=false / autoGeneratePhraseQueries=true by setting 
QueryBuilder.autoGenerateMultiTermSynonymsQuery


> When sow=false, edismax query parsers should handle per-fieldtype 
> autoGeneratePhraseQueries by setting 
> QueryBuilder.autoGenerateMultiTermSynonymsQuery
> ---
>
> Key: SOLR-10357
> URL: https://issues.apache.org/jira/browse/SOLR-10357
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Steve Rowe
> Attachments: SOLR-10357.patch, SOLR-10357.patch, SOLR-10357.patch
>
>
> Right now, the options to not split on whitespace ({{sow=false}}) and to 
> autogenerate phrase queries ({{autoGeneratePhraseQueries="true"}}) will cause 
> queries to throw an exception, since they are incompatible.
> {{QueryBuilder.autoGenerateMultiTermSynonymsPhraseQuery}}, introduced in 
> LUCENE-7638, is the graph query version of Solr's per-fieldtype 
> {{autoGeneratePhraseQueries}} option, and is not incompatible with 
> {{sow=false}}.  
> So {{autoGeneratePhraseQueries="true"}} should cause  
> {{QueryBuilder.autoGenerateMultiTermSynonymsPhraseQuery}} to be set to true 
> when {{sow=false}}, rather than triggering an exception.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10357) When sow=false, edismax query parsers should handle per-fieldtype autoGeneratePhraseQueries by setting QueryBuilder.autoGenerateMultiTermSynonymsQuery

2017-03-28 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10357?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15946112#comment-15946112
 ] 

ASF subversion and git services commented on SOLR-10357:


Commit 0a689f4d95e981e99ae0e80741e7cf1fa74ff60f in lucene-solr's branch 
refs/heads/master from [~steve_rowe]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=0a689f4 ]

SOLR-10357: Enable edismax and standard query parsers to handle the option 
combination sow=false / autoGeneratePhraseQueries=true by setting 
QueryBuilder.autoGenerateMultiTermSynonymsQuery


> When sow=false, edismax query parsers should handle per-fieldtype 
> autoGeneratePhraseQueries by setting 
> QueryBuilder.autoGenerateMultiTermSynonymsQuery
> ---
>
> Key: SOLR-10357
> URL: https://issues.apache.org/jira/browse/SOLR-10357
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Steve Rowe
> Attachments: SOLR-10357.patch, SOLR-10357.patch, SOLR-10357.patch
>
>
> Right now, the options to not split on whitespace ({{sow=false}}) and to 
> autogenerate phrase queries ({{autoGeneratePhraseQueries="true"}}) will cause 
> queries to throw an exception, since they are incompatible.
> {{QueryBuilder.autoGenerateMultiTermSynonymsPhraseQuery}}, introduced in 
> LUCENE-7638, is the graph query version of Solr's per-fieldtype 
> {{autoGeneratePhraseQueries}} option, and is not incompatible with 
> {{sow=false}}.  
> So {{autoGeneratePhraseQueries="true"}} should cause  
> {{QueryBuilder.autoGenerateMultiTermSynonymsPhraseQuery}} to be set to true 
> when {{sow=false}}, rather than triggering an exception.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-10380) Deprecate RunExecutableListener

2017-03-28 Thread Ishan Chattopadhyaya (JIRA)
Ishan Chattopadhyaya created SOLR-10380:
---

 Summary: Deprecate RunExecutableListener
 Key: SOLR-10380
 URL: https://issues.apache.org/jira/browse/SOLR-10380
 Project: Solr
  Issue Type: Task
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Ishan Chattopadhyaya


We should deprecate (and remove) the RunExecutableListener. It is a relic of 
the past when we relied on shell scripts for replication. It serves no purpose 
in current codebase.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-10357) When sow=false, edismax query parsers should handle per-fieldtype autoGeneratePhraseQueries by setting QueryBuilder.autoGenerateMultiTermSynonymsQuery

2017-03-28 Thread Steve Rowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10357?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Rowe updated SOLR-10357:
--
Attachment: SOLR-10357.patch

Final patch.  Previous patches were missing QueryParser.java changes (I 
directly changed this file, then regenerated from QueryParser.jj, then made a 
patch...)

Solr tests and precommit pass.  Committing shortly.

> When sow=false, edismax query parsers should handle per-fieldtype 
> autoGeneratePhraseQueries by setting 
> QueryBuilder.autoGenerateMultiTermSynonymsQuery
> ---
>
> Key: SOLR-10357
> URL: https://issues.apache.org/jira/browse/SOLR-10357
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Steve Rowe
> Attachments: SOLR-10357.patch, SOLR-10357.patch, SOLR-10357.patch
>
>
> Right now, the options to not split on whitespace ({{sow=false}}) and to 
> autogenerate phrase queries ({{autoGeneratePhraseQueries="true"}}) will cause 
> queries to throw an exception, since they are incompatible.
> {{QueryBuilder.autoGenerateMultiTermSynonymsPhraseQuery}}, introduced in 
> LUCENE-7638, is the graph query version of Solr's per-fieldtype 
> {{autoGeneratePhraseQueries}} option, and is not incompatible with 
> {{sow=false}}.  
> So {{autoGeneratePhraseQueries="true"}} should cause  
> {{QueryBuilder.autoGenerateMultiTermSynonymsPhraseQuery}} to be set to true 
> when {{sow=false}}, rather than triggering an exception.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-Solaris (64bit/jdk1.8.0) - Build # 1224 - Unstable!

2017-03-28 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Solaris/1224/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseParallelGC

1 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.hdfs.HdfsRecoveryZkTest

Error Message:
ObjectTracker found 1 object(s) that were not released!!! [HdfsTransactionLog] 
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException: 
org.apache.solr.update.HdfsTransactionLog  at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:42)
  at 
org.apache.solr.update.HdfsTransactionLog.(HdfsTransactionLog.java:132)  
at org.apache.solr.update.HdfsUpdateLog.init(HdfsUpdateLog.java:203)  at 
org.apache.solr.update.UpdateHandler.(UpdateHandler.java:153)  at 
org.apache.solr.update.UpdateHandler.(UpdateHandler.java:110)  at 
org.apache.solr.update.DirectUpdateHandler2.(DirectUpdateHandler2.java:110)
  at sun.reflect.GeneratedConstructorAccessor222.newInstance(Unknown Source)  
at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
  at java.lang.reflect.Constructor.newInstance(Constructor.java:423)  at 
org.apache.solr.core.SolrCore.createInstance(SolrCore.java:779)  at 
org.apache.solr.core.SolrCore.createUpdateHandler(SolrCore.java:841)  at 
org.apache.solr.core.SolrCore.initUpdateHandler(SolrCore.java:1091)  at 
org.apache.solr.core.SolrCore.(SolrCore.java:955)  at 
org.apache.solr.core.SolrCore.(SolrCore.java:849)  at 
org.apache.solr.core.CoreContainer.create(CoreContainer.java:951)  at 
org.apache.solr.core.CoreContainer.lambda$load$5(CoreContainer.java:583)  at 
com.codahale.metrics.InstrumentedExecutorService$InstrumentedCallable.call(InstrumentedExecutorService.java:197)
  at java.util.concurrent.FutureTask.run(FutureTask.java:266)  at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:229)
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
 at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
 at java.lang.Thread.run(Thread.java:745)  

Stack Trace:
java.lang.AssertionError: ObjectTracker found 1 object(s) that were not 
released!!! [HdfsTransactionLog]
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException: 
org.apache.solr.update.HdfsTransactionLog
at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:42)
at 
org.apache.solr.update.HdfsTransactionLog.(HdfsTransactionLog.java:132)
at org.apache.solr.update.HdfsUpdateLog.init(HdfsUpdateLog.java:203)
at org.apache.solr.update.UpdateHandler.(UpdateHandler.java:153)
at org.apache.solr.update.UpdateHandler.(UpdateHandler.java:110)
at 
org.apache.solr.update.DirectUpdateHandler2.(DirectUpdateHandler2.java:110)
at sun.reflect.GeneratedConstructorAccessor222.newInstance(Unknown 
Source)
at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.solr.core.SolrCore.createInstance(SolrCore.java:779)
at org.apache.solr.core.SolrCore.createUpdateHandler(SolrCore.java:841)
at org.apache.solr.core.SolrCore.initUpdateHandler(SolrCore.java:1091)
at org.apache.solr.core.SolrCore.(SolrCore.java:955)
at org.apache.solr.core.SolrCore.(SolrCore.java:849)
at org.apache.solr.core.CoreContainer.create(CoreContainer.java:951)
at 
org.apache.solr.core.CoreContainer.lambda$load$5(CoreContainer.java:583)
at 
com.codahale.metrics.InstrumentedExecutorService$InstrumentedCallable.call(InstrumentedExecutorService.java:197)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:229)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)


at __randomizedtesting.SeedInfo.seed([2C42446844A53DF1]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertNull(Assert.java:551)
at 
org.apache.solr.SolrTestCaseJ4.teardownTestCases(SolrTestCaseJ4.java:301)
at sun.reflect.GeneratedMethodAccessor84.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:870)
at 

[jira] [Commented] (SOLR-5970) Create collection API always has status 0

2017-03-28 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5970?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15946069#comment-15946069
 ] 

Mark Miller commented on SOLR-5970:
---

Probably at the time my thinking was something like: well, the request 
succeeded, but maybe some of the sub-requests failed and you can read about the 
parts within the response. The request itself could also fail, and then you 
would get a failed status. 

We could change this for 7.
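
To make the current behaviour concrete, a hedged SolrJ sketch of how a client has to detect the failure today by inspecting the body rather than the status (collection/config names are made up for illustration):

{code:java}
CollectionAdminRequest.Create create =
    CollectionAdminRequest.createCollection("test43", "myConfig", 1, 1);
CollectionAdminResponse rsp = create.process(solrClient);
// responseHeader/status is 0 even when the create failed, so check the body instead
if (!rsp.isSuccess()) {
  System.err.println("create failed: " + rsp.getErrorMessages());
}
{code}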

> Create collection API always has status 0
> -
>
> Key: SOLR-5970
> URL: https://issues.apache.org/jira/browse/SOLR-5970
> Project: Solr
>  Issue Type: Bug
>Reporter: Abraham Elmahrek
>
> The responses below are from a successful create collection API 
> (https://cwiki.apache.org/confluence/display/solr/Collections+API#CollectionsAPI-CreateormodifyanAliasforaCollection)
>  call and an unsuccessful create collection API call. It seems the 'status' 
> is always 0.
> Success:
> {u'responseHeader': {u'status': 0, u'QTime': 4421}, u'success': {u'': 
> {u'core': u'test1_shard1_replica1', u'responseHeader': {u'status': 0, 
> u'QTime': 3449
> Failure:
> {u'failure': 
>   {u'': 
> u"org.apache.solr.client.solrj.impl.HttpSolrServer$RemoteSolrException:Error 
> CREATEing SolrCore 'test43_shard1_replica1': Unable to create core: 
> test43_shard1_replica1 Caused by: Could not find configName for collection 
> test43 found:[test1]"},
>  u'responseHeader': {u'status': 0, u'QTime': 17149}
> }
> It seems like the status should be 400 or something similar for an 
> unsuccessful attempt?



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10359) User Interactions Logger Component

2017-03-28 Thread Michael Nilsson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10359?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15945944#comment-15945944
 ] 

Michael Nilsson commented on SOLR-10359:


The ideas in this ticket are definitely something everyone encounters when 
needing to evaluate how good their search is performing.  I think the scope of 
this enhancement, for a first cut, could be narrowed down a bit though.  

1) If you are storing the user interactions + impressions in a parallel Solr 
collection, you don't need a separate evaluation component initially.  You 
could use Solr JSON faceting, the analytics component, or streaming joins 
(which can work on databases too) to calculate the numbers instead.  The first 
cut could probably just provide documentation for the exact requests to send in 
order to calculate CTR, etc. (a sketch of such a request follows at the end of 
this comment).

2) Also, you probably won't want to auto-log results returned from Solr as the 
impressions at first.  As mentioned above, results returned from Solr are not 
always 1 to 1 with results displayed.  Just like you will be providing a way to 
store user interactions on demand via an endpoint, you should probably just 
expand that to allow storing user impressions on demand as well.

3) You will need a way to link the user impressions with their interactions.  
You could supply a unique search id with the initial result set and let the 
client pass that back to you when sending the save impressions request and save 
interactions request.  However, for the first cut you could make it the 
client's responsibility of generating the unique id to then pass back to you.

For the use cases of Solr that use a federated search across multiple 
collections and merge the results into 1 list, points 2 and 3 become more 
important.  I might query 10 results from each of 3 collections, for a total of 
30 results, but only display the top 5 combined on my page.  If solr auto 
generates a search id, I will now have 3 ids instead of 1.  Also, there were 
only 5 total impressions, not 30 for the auto logging case.
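
Regarding point 1, a hedged sketch of what such a documented request could look like via SolrJ and JSON faceting (error handling omitted; the collection and field names -- {{interactions}}, {{query_s}}, {{eventType_i}} -- are illustrative assumptions):

{code:java}
SolrClient client = new HttpSolrClient.Builder("http://localhost:8983/solr/interactions").build();
SolrQuery q = new SolrQuery("*:*");
q.setRows(0);
// bucket events per query string, counting impressions vs. clicks in each bucket
q.add("json.facet",
    "{perQuery:{type:terms, field:query_s, facet:{"
  + "impressions:{query:\"eventType_i:0\"},"
  + "clicks:{query:\"eventType_i:[1 TO 5]\"}}}}");
QueryResponse rsp = client.query(q);
// CTR per bucket = clicks / impressions, computed client-side from the facet counts
System.out.println(rsp.getResponse().get("facets"));
{code}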


> User Interactions Logger Component
> --
>
> Key: SOLR-10359
> URL: https://issues.apache.org/jira/browse/SOLR-10359
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Alessandro Benedetti
>  Labels: CTR, evaluation
>
> *Introduction*
> Being able to evaluate the quality of your search engine is becoming more and 
> more important day by day.
> This issue is to put a milestone to integrate online evaluation metrics with 
> Solr.
> *Scope*
> Scope of this issue is to provide a set of components able to :
> 1) Collect Search Results impressions (results shown per query)
> 2) Collect Users interactions (user interactions on the search results per 
> query, e.g. clicks, bookmarking, etc.)
> 3) Calculate evaluation metrics on demand, such as Click Through Rate, DCG ...
> *Technical Design*
> A SearchComponent can be designed :
> *UsersEventsLoggerComponent*
> A property (such as storeDir) will define where the data collected will be 
> stored.
> Different data structures can be explored, to keep it simple, a first 
> implementation can be a Lucene Index.
> *Data Model*
> The user event can be modelled in the following way :
>  - the user query the event is related to
>  - the ID of the search result involved in the interaction
>  - the position in the ranking of the search result involved 
> in the interaction
>  - time when the interaction happened
>  - 0 for impressions, a value between 1-5 to identify the 
> type of user event, the semantic will depend on the domain and use cases
>  - this can identify a variant, in A/B testing
> *Impressions Logging*
> When the SearchComponent is assigned to a request handler, every time it 
> processes a request and returns a result set for a query to the user, the 
> component will collect the impressions (results returned) and index them in 
> the auxiliary Lucene index.
> This will happen in parallel as soon as you return the results to avoid 
> affecting the query time.
> Of course an impact on CPU load and memory is expected, will be interesting 
> to minimise it.
> * User Events Logging *
> An UpdateHandler will be exposed to accept POST requests and collect user 
> events.
> Every time a request is sent, the user event will be indexed in the underlying 
> auxiliary Lucene index.
> * Stats Calculation *
> A RequestHandler will be exposed to be able to calculate stats and 
> aggregations for the metrics :
> /evaluation?metric=ctr=query=testA,testB
> This request could calculate the CTR for our testA and testB to compare.
> Showing stats in total and per query (to highlight the queries with 
> lower/higher CTR).
> The calculations will happen separating the  for an easy 
> comparison.
> Will be important to keep it as 

[jira] [Assigned] (SOLR-9057) CloudSolrClient should be able to work w/o ZK url

2017-03-28 Thread Ishan Chattopadhyaya (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9057?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ishan Chattopadhyaya reassigned SOLR-9057:
--

Assignee: Ishan Chattopadhyaya

> CloudSolrClient should be able to work w/o ZK url
> -
>
> Key: SOLR-9057
> URL: https://issues.apache.org/jira/browse/SOLR-9057
> Project: Solr
>  Issue Type: Bug
>  Components: SolrJ
>Reporter: Noble Paul
>Assignee: Ishan Chattopadhyaya
>
> It should be possible to pass one or more Solr URLs to SolrJ and it should be 
> able to get started from there. Exposing ZK to users should not be required; 
> it is a security vulnerability. 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6736) A collections-like request handler to manage solr configurations on zookeeper

2017-03-28 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6736?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15945895#comment-15945895
 ] 

ASF subversion and git services commented on SOLR-6736:
---

Commit 254218e80ca54203079a6591fa84edfaccaedea8 in lucene-solr's branch 
refs/heads/branch_6x from [~ichattopadhyaya]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=254218e ]

SOLR-6736: Adding support for uploading zipped configsets using ConfigSets API


> A collections-like request handler to manage solr configurations on zookeeper
> -
>
> Key: SOLR-6736
> URL: https://issues.apache.org/jira/browse/SOLR-6736
> Project: Solr
>  Issue Type: New Feature
>  Components: SolrCloud
>Reporter: Varun Rajput
>Assignee: Ishan Chattopadhyaya
> Attachments: newzkconf.zip, newzkconf.zip, SOLR-6736-newapi.patch, 
> SOLR-6736-newapi.patch, SOLR-6736-newapi.patch, SOLR-6736.patch, 
> SOLR-6736.patch, SOLR-6736.patch, SOLR-6736.patch, SOLR-6736.patch, 
> SOLR-6736.patch, SOLR-6736.patch, SOLR-6736.patch, SOLR-6736.patch, 
> SOLR-6736.patch, SOLR-6736.patch, SOLR-6736.patch, test_private.pem, 
> test_pub.der, zkconfighandler.zip, zkconfighandler.zip
>
>
> Managing Solr configuration files on zookeeper becomes cumbersome while using 
> solr in cloud mode, especially while trying out changes in the 
> configurations. 
> It will be great if there is a request handler that can provide an API to 
> manage the configurations similar to the collections handler that would allow 
> actions like uploading new configurations, linking them to a collection, 
> deleting configurations, etc.
> example : 
> {code}
> #use the following command to upload a new configset called mynewconf. This 
> will fail if there is already a conf called 'mynewconf'. The file could be a 
> jar, zip or tar file which contains all the files for this conf.
> curl -X POST -H 'Content-Type: application/octet-stream' --data-binary 
> @testconf.zip 
> http://localhost:8983/solr/admin/configs/mynewconf?sig=
> {code}
> A GET to http://localhost:8983/solr/admin/configs will give a list of configs 
> available
> A GET to http://localhost:8983/solr/admin/configs/mynewconf would give the 
> list of files in mynewconf



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7756) Only record the major that was used to create the index rather than the full version

2017-03-28 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15945863#comment-15945863
 ] 

Adrien Grand commented on LUCENE-7756:
--

Thanks for having a look! So say you have an index that was created with 
version X.Y and then you call addIndexes with an index that was created with 
version X.Z. What should be the created version after addIndexes has been 
called? Do you think we should just ignore the addIndexes call and retain X.Y 
as a created version, or maybe take the min or something?

Something else that I am considering is to add the version of the oldest 
segment that contributed to a segment through merges, in addition to the index 
created version (major). This might address your concern if it is mainly about 
losing information that might be useful for debugging purposes?

> Only record the major that was used to create the index rather than the full 
> version
> 
>
> Key: LUCENE-7756
> URL: https://issues.apache.org/jira/browse/LUCENE-7756
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Priority: Minor
> Attachments: LUCENE-7756.patch
>
>
> LUCENE-7703 added information about the Lucene version that was used to 
> create the index to the segment infos. But since there is a single creation 
> version, it means we need to reject calls to addIndexes that can mix indices 
> that have different creation versions, which might be seen as an important 
> regression by some users. So I have been thinking about only recording the 
> major version that was used to create the index, which is still very valuable 
> information and would allow us to accept calls to addIndexes when all merged 
> indices have the same major version. This looks like a better trade-off to me.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9745) bin/solr* swallows errors from running example instances at least

2017-03-28 Thread Mikhail Khludnev (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9745?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15945826#comment-15945826
 ] 

Mikhail Khludnev commented on SOLR-9745:


Yes. Please.

> bin/solr* swallows errors from running example instances at least
> -
>
> Key: SOLR-9745
> URL: https://issues.apache.org/jira/browse/SOLR-9745
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Server
>Affects Versions: 6.3, master (7.0)
>Reporter: Mikhail Khludnev
>  Labels: newbie, newdev
>
> It occurs in a mad scenario in LUCENE-7534:
> * solr.cmd wasn't granted +x (it happens under cygwin, yes)
> * coolhacker worked around it with cmd /C solr.cmd start -e ..
> * but when SolrCLI runs solr instances with the same solr.cmd, it just 
> silently fails
> I think we can just pass an ExecuteResultHandler which will dump the exception 
> to the console. 
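
A minimal sketch of that idea, assuming Apache Commons Exec (which SolrCLI appears to use for launching example instances; the exact integration point in SolrCLI is not shown):

{code:java}
// Hedged sketch: pass a result handler so a failed child process is reported
// instead of being silently swallowed.
CommandLine cmd = CommandLine.parse("bin/solr start -p 8984");
DefaultExecutor executor = new DefaultExecutor();
executor.execute(cmd, new DefaultExecuteResultHandler() {
  @Override
  public void onProcessFailed(ExecuteException e) {
    super.onProcessFailed(e);
    e.printStackTrace(System.err);  // dump the failure to the console
  }
});
{code}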



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7756) Only record the major that was used to create the index rather than the full version

2017-03-28 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15945828#comment-15945828
 ] 

Michael McCandless commented on LUCENE-7756:


Should we maybe record the complete version, but only check the major version 
in {{addIndexes}}?  It seems like it could be helpful at some point to know the 
minor/bugfix values of the release too, maybe?
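
A hedged sketch of that suggestion -- record the full {{Version}} at creation time but compare only the major component in {{addIndexes}} (the accessor names here are assumptions for illustration, not existing API):

{code:java}
// Hypothetical sketch: the accessors are assumed; only Version.major is real API.
Version created  = segmentInfos.getIndexCreatedVersion();      // assumed accessor
Version incoming = otherSegmentInfos.getIndexCreatedVersion(); // assumed accessor
if (created != null && incoming != null && created.major != incoming.major) {
  throw new IllegalArgumentException("Cannot addIndexes: incoming index was created by Lucene "
      + incoming + " but this index was created by Lucene " + created);
}
{code}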

> Only record the major that was used to create the index rather than the full 
> version
> 
>
> Key: LUCENE-7756
> URL: https://issues.apache.org/jira/browse/LUCENE-7756
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Priority: Minor
> Attachments: LUCENE-7756.patch
>
>
> LUCENE-7703 added information about the Lucene version that was used to 
> create the index to the segment infos. But since there is a single creation 
> version, it means we need to reject calls to addIndexes that can mix indices 
> that have different creation versions, which might be seen as an important 
> regression by some users. So I have been thinking about only recording the 
> major version that was used to create the index, which is still very valuable 
> information and would allow us to accept calls to addIndexes when all merged 
> indices have the same major version. This looks like a better trade-off to me.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7755) Join queries should not reference IndexReaders.

2017-03-28 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7755?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15945811#comment-15945811
 ] 

Michael McCandless commented on LUCENE-7755:


+1

> Join queries should not reference IndexReaders.
> ---
>
> Key: LUCENE-7755
> URL: https://issues.apache.org/jira/browse/LUCENE-7755
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Adrien Grand
> Attachments: LUCENE-7755.patch
>
>
> This is similar to LUCENE-7657 and can cause memory leaks when those queries 
> are cached.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-7753) Findbugs: mark certain final fields static

2017-03-28 Thread Daniel Jelinski (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7753?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Jelinski updated LUCENE-7753:

Attachment: LUCENE-7753.patch

reuploading with correct file name...

> Findbugs: mark certain final fields static
> --
>
> Key: LUCENE-7753
> URL: https://issues.apache.org/jira/browse/LUCENE-7753
> Project: Lucene - Core
>  Issue Type: Sub-task
>Reporter: Daniel Jelinski
>Priority: Minor
> Attachments: LUCENE-7753.patch, LUCENE-7753.patch
>
>
> http://findbugs.sourceforge.net/bugDescriptions.html#SS_SHOULD_BE_STATIC



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-NightlyTests-master - Build # 1275 - Failure

2017-03-28 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-master/1275/

2 tests failed.
FAILED:  org.apache.solr.cloud.ChaosMonkeyNothingIsSafeTest.test

Error Message:
document count mismatch.  control=10164 sum(shards)=10163 cloudClient=10163

Stack Trace:
java.lang.AssertionError: document count mismatch.  control=10164 
sum(shards)=10163 cloudClient=10163
at 
__randomizedtesting.SeedInfo.seed([D4517FC2A0E521C4:5C0540180E194C3C]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.checkShardConsistency(AbstractFullDistribZkTestBase.java:1337)
at 
org.apache.solr.cloud.ChaosMonkeyNothingIsSafeTest.test(ChaosMonkeyNothingIsSafeTest.java:236)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:985)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:960)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Updated] (LUCENE-7753) Findbugs: mark certain final fields static

2017-03-28 Thread Daniel Jelinski (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7753?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Jelinski updated LUCENE-7753:

Attachment: (was: LUCENE-7754.patch)

> Findbugs: mark certain final fields static
> --
>
> Key: LUCENE-7753
> URL: https://issues.apache.org/jira/browse/LUCENE-7753
> Project: Lucene - Core
>  Issue Type: Sub-task
>Reporter: Daniel Jelinski
>Priority: Minor
> Attachments: LUCENE-7753.patch
>
>
> http://findbugs.sourceforge.net/bugDescriptions.html#SS_SHOULD_BE_STATIC



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-7753) Findbugs: mark certain final fields static

2017-03-28 Thread Daniel Jelinski (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7753?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Jelinski updated LUCENE-7753:

Attachment: LUCENE-7754.patch

Thank you for your review [~jpountz]; I modified the source .jj file for 
queryParser as well. I noticed that the jj files are not in sync with the 
generated files - there are some extra unused imports; I did not fix these.
The new patch renames final fields to uppercase and removes changes to egothor. 
Also, apparently IDEA decided to remove some unused imports from one of the 
files; I left that in the patch. Let me know if that's OK.

> Findbugs: mark certain final fields static
> --
>
> Key: LUCENE-7753
> URL: https://issues.apache.org/jira/browse/LUCENE-7753
> Project: Lucene - Core
>  Issue Type: Sub-task
>Reporter: Daniel Jelinski
>Priority: Minor
> Attachments: LUCENE-7753.patch, LUCENE-7754.patch
>
>
> http://findbugs.sourceforge.net/bugDescriptions.html#SS_SHOULD_BE_STATIC



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9745) bin/solr* swallows errors from running example instances at least

2017-03-28 Thread gopikannan venugopalsamy (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9745?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15945791#comment-15945791
 ] 

gopikannan venugopalsamy commented on SOLR-9745:


Hi, 
  Shall I create a patch with this fix?

> bin/solr* swallows errors from running example instances at least
> -
>
> Key: SOLR-9745
> URL: https://issues.apache.org/jira/browse/SOLR-9745
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Server
>Affects Versions: 6.3, master (7.0)
>Reporter: Mikhail Khludnev
>  Labels: newbie, newdev
>
> It occurs in a mad scenario in LUCENE-7534:
> * solr.cmd wasn't granted +x (it happens under cygwin, yes)
> * coolhacker worked around it with cmd /C solr.cmd start -e ..
> * but when SolrCLI runs solr instances with the same solr.cmd, it just 
> silently fails
> I think we can just pass an ExecuteResultHandler which will dump the exception 
> to the console. 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10341) SQL AVG function mis-interprets field type.

2017-03-28 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10341?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15945782#comment-15945782
 ] 

ASF subversion and git services commented on SOLR-10341:


Commit 4c979b84e8d5bd3eb4cc34f90834cedbf2a374ed in lucene-solr's branch 
refs/heads/branch_6_5 from [~joel.bernstein]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=4c979b8 ]

SOLR-10341: SQL AVG function mis-interprets field type


> SQL AVG function mis-interprets field type.
> ---
>
> Key: SOLR-10341
> URL: https://issues.apache.org/jira/browse/SOLR-10341
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Parallel SQL
>Affects Versions: 6.5
>Reporter: Timothy Potter
> Attachments: Screen Shot 2017-03-22 at 8.12.33 AM.png, 
> SOLR-10341.patch, SOLR-10341.patch
>
>
> Using movielens data (users, movies, ratings), I tried the following SQL:
> {code}
> curl --data-urlencode "stmt=SELECT solr.title as title, avg(rating) as 
> avg_rating FROM ratings INNER JOIN (select movie_id,title from movies where 
> _query_='plot_txt_en:love') as solr ON ratings.movie_id = solr.movie_id GROUP 
> BY title ORDER BY avg_rating DESC LIMIT 10" 
> "http://localhost:8983/solr/movies/sql?aggregationMode=facet;
> {code}
> Solr returns this error: 
> {code}
> {"result-set":{"docs":[{"EXCEPTION":"Failed to execute sqlQuery 'SELECT 
> solr.title as title, avg(rating) as avg_rating FROM ratings INNER JOIN 
> (select movie_id,title from movies where _query_='plot_txt_en:love') as solr 
> ON ratings.movie_id = solr.movie_id GROUP BY title ORDER BY avg_rating DESC 
> LIMIT 10' against JDBC connection 'jdbc:calcitesolr:'.\nError while executing 
> SQL \"SELECT solr.title as title, avg(rating) as avg_rating FROM ratings 
> INNER JOIN (select movie_id,title from movies where 
> _query_='plot_txt_en:love') as solr ON ratings.movie_id = solr.movie_id GROUP 
> BY title ORDER BY avg_rating DESC LIMIT 10\": From line 1, column 29 to line 
> 1, column 39: Cannot apply 'AVG' to arguments of type 'AVG( JAVA.LANG.STRING)>)'. Supported form(s): 
> 'AVG()'","EOF":true,"RESPONSE_TIME":92}]}}
> {code}
> rating is a TrieInt with docValues enabled.
> {code}
>  indexed="true" stored="true"/>
> {code}
> see screenshot



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10341) SQL AVG function mis-interprets field type.

2017-03-28 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10341?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15945767#comment-15945767
 ] 

ASF subversion and git services commented on SOLR-10341:


Commit e6b4d25289a240ff64eaeb858c4c06737999ee11 in lucene-solr's branch 
refs/heads/branch_6x from [~joel.bernstein]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=e6b4d25 ]

SOLR-10341: SQL AVG function mis-interprets field type


> SQL AVG function mis-interprets field type.
> ---
>
> Key: SOLR-10341
> URL: https://issues.apache.org/jira/browse/SOLR-10341
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Parallel SQL
>Affects Versions: 6.5
>Reporter: Timothy Potter
> Attachments: Screen Shot 2017-03-22 at 8.12.33 AM.png, 
> SOLR-10341.patch, SOLR-10341.patch
>
>
> Using movielens data (users, movies, ratings), I tried the following SQL:
> {code}
> curl --data-urlencode "stmt=SELECT solr.title as title, avg(rating) as 
> avg_rating FROM ratings INNER JOIN (select movie_id,title from movies where 
> _query_='plot_txt_en:love') as solr ON ratings.movie_id = solr.movie_id GROUP 
> BY title ORDER BY avg_rating DESC LIMIT 10" 
> "http://localhost:8983/solr/movies/sql?aggregationMode=facet;
> {code}
> Solr returns this error: 
> {code}
> {"result-set":{"docs":[{"EXCEPTION":"Failed to execute sqlQuery 'SELECT 
> solr.title as title, avg(rating) as avg_rating FROM ratings INNER JOIN 
> (select movie_id,title from movies where _query_='plot_txt_en:love') as solr 
> ON ratings.movie_id = solr.movie_id GROUP BY title ORDER BY avg_rating DESC 
> LIMIT 10' against JDBC connection 'jdbc:calcitesolr:'.\nError while executing 
> SQL \"SELECT solr.title as title, avg(rating) as avg_rating FROM ratings 
> INNER JOIN (select movie_id,title from movies where 
> _query_='plot_txt_en:love') as solr ON ratings.movie_id = solr.movie_id GROUP 
> BY title ORDER BY avg_rating DESC LIMIT 10\": From line 1, column 29 to line 
> 1, column 39: Cannot apply 'AVG' to arguments of type 'AVG( JAVA.LANG.STRING)>)'. Supported form(s): 
> 'AVG()'","EOF":true,"RESPONSE_TIME":92}]}}
> {code}
> rating is a TrieInt with docValues enabled.
> {code}
>  indexed="true" stored="true"/>
> {code}
> see screenshot



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6736) A collections-like request handler to manage solr configurations on zookeeper

2017-03-28 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6736?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15945762#comment-15945762
 ] 

ASF subversion and git services commented on SOLR-6736:
---

Commit 6b0217b7cbff1216bb4ffbecdba02eb8c5dd3df6 in lucene-solr's branch 
refs/heads/master from [~ichattopadhyaya]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=6b0217b ]

SOLR-6736: Adding support for uploading zipped configsets using ConfigSets API


> A collections-like request handler to manage solr configurations on zookeeper
> -
>
> Key: SOLR-6736
> URL: https://issues.apache.org/jira/browse/SOLR-6736
> Project: Solr
>  Issue Type: New Feature
>  Components: SolrCloud
>Reporter: Varun Rajput
>Assignee: Ishan Chattopadhyaya
> Attachments: newzkconf.zip, newzkconf.zip, SOLR-6736-newapi.patch, 
> SOLR-6736-newapi.patch, SOLR-6736-newapi.patch, SOLR-6736.patch, 
> SOLR-6736.patch, SOLR-6736.patch, SOLR-6736.patch, SOLR-6736.patch, 
> SOLR-6736.patch, SOLR-6736.patch, SOLR-6736.patch, SOLR-6736.patch, 
> SOLR-6736.patch, SOLR-6736.patch, SOLR-6736.patch, test_private.pem, 
> test_pub.der, zkconfighandler.zip, zkconfighandler.zip
>
>
> Managing Solr configuration files on zookeeper becomes cumbersome while using 
> solr in cloud mode, especially while trying out changes in the 
> configurations. 
> It will be great if there is a request handler that can provide an API to 
> manage the configurations similar to the collections handler that would allow 
> actions like uploading new configurations, linking them to a collection, 
> deleting configurations, etc.
> example : 
> {code}
> #use the following command to upload a new configset called mynewconf. This 
> will fail if there is already a conf called 'mynewconf'. The file could be a 
> jar, zip or tar file which contains all the files for this conf.
> curl -X POST -H 'Content-Type: application/octet-stream' --data-binary 
> @testconf.zip 
> http://localhost:8983/solr/admin/configs/mynewconf?sig=
> {code}
> A GET to http://localhost:8983/solr/admin/configs will give a list of configs 
> available
> A GET to http://localhost:8983/solr/admin/configs/mynewconf would give the 
> list of files in mynewconf



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10365) Collection re-creation fails if previous collection creation had failed

2017-03-28 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10365?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15945746#comment-15945746
 ] 

ASF subversion and git services commented on SOLR-10365:


Commit c37cb7e94e312fbfe650cb4cc4e812dbc2034478 in lucene-solr's branch 
refs/heads/branch_6x from [~ichattopadhyaya]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=c37cb7e ]

SOLR-10365: Handle a SolrCoreInitializationException while publishing core 
state during SolrCore creation
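
For context, a hedged sketch of what handling that exception during publish might look like (not the actual diff; the surrounding variables are assumed):

{code:java}
try (SolrCore core = coreContainer.getCore(coreDescriptor.getName())) {
  // inspect the existing core here if an earlier create actually succeeded
} catch (SolrCoreInitializationException e) {
  // an earlier create attempt failed to init; treat the core as absent and
  // keep publishing state so the collection can be re-created
}
{code}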


> Collection re-creation fails if previous collection creation had failed
> ---
>
> Key: SOLR-10365
> URL: https://issues.apache.org/jira/browse/SOLR-10365
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Ishan Chattopadhyaya
> Attachments: SOLR-10365.patch, SOLR-10365.patch, SOLR-10365.patch, 
> SOLR-10365.patch
>
>
> Steps to reproduce:
> # Create collection using a bad configset that has some errors, due to which 
> collection creation fails.
> # Now, create a collection using the same name, but a good configset. This 
> fails sometimes (about 25-30% of the time, according to my rough estimate).
> Here's what happens during the second step (can be seen from stacktrace 
> below):
> # In CoreContainer's create(CoreDescriptor, boolean, boolean), there's a line 
> {{zkSys.getZkController().preRegister(dcore);}}.
> # This calls ZkController's publish(), which in turn calls CoreContainer's 
> getCore() method. This call *should* return null (since previous attempt of 
> core creation didn't succeed). But, it throws the exception associated with 
> the previous failure.
> Here's the stack trace for the same.
> {code}
> Caused by: org.apache.solr.common.SolrException: SolrCore 
> 'newcollection2_shard1_replica1' is not available due to init failure: 
> blahblah
>   at org.apache.solr.core.CoreContainer.getCore(CoreContainer.java:1312)
>   at org.apache.solr.cloud.ZkController.publish(ZkController.java:1225)
>   at 
> org.apache.solr.cloud.ZkController.preRegister(ZkController.java:1399)
>   at org.apache.solr.core.CoreContainer.create(CoreContainer.java:945)
> {code}
> While working on SOLR-6736, I ran into this (nasty?) issue. I'll try to 
> isolate this into a standalone test that demonstrates this issue. Otherwise, 
> as of now, this can be seen in the SOLR-6736's 
> testUploadWithScriptUpdateProcessor() test (which tries to re-create the 
> collection, but sometimes fails).



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10365) Collection re-creation fails if previous collection creation had failed

2017-03-28 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10365?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15945741#comment-15945741
 ] 

ASF subversion and git services commented on SOLR-10365:


Commit 0322068ea4648c93405da5b60fcbcc3467f5b009 in lucene-solr's branch 
refs/heads/master from [~ichattopadhyaya]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=0322068 ]

SOLR-10365: Handle a SolrCoreInitializationException while publishing core 
state during SolrCore creation


> Collection re-creation fails if previous collection creation had failed
> ---
>
> Key: SOLR-10365
> URL: https://issues.apache.org/jira/browse/SOLR-10365
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Ishan Chattopadhyaya
> Attachments: SOLR-10365.patch, SOLR-10365.patch, SOLR-10365.patch, 
> SOLR-10365.patch
>
>
> Steps to reproduce:
> # Create collection using a bad configset that has some errors, due to which 
> collection creation fails.
> # Now, create a collection using the same name, but a good configset. This 
> fails sometimes (about 25-30% of the time, according to my rough estimate).
> Here's what happens during the second step (can be seen from stacktrace 
> below):
> # In CoreContainer's create(CoreDescriptor, boolean, boolean), there's a line 
> {{zkSys.getZkController().preRegister(dcore);}}.
> # This calls ZkController's publish(), which in turn calls CoreContainer's 
> getCore() method. This call *should* return null (since previous attempt of 
> core creation didn't succeed). But, it throws the exception associated with 
> the previous failure.
> Here's the stack trace for the same.
> {code}
> Caused by: org.apache.solr.common.SolrException: SolrCore 
> 'newcollection2_shard1_replica1' is not available due to init failure: 
> blahblah
>   at org.apache.solr.core.CoreContainer.getCore(CoreContainer.java:1312)
>   at org.apache.solr.cloud.ZkController.publish(ZkController.java:1225)
>   at 
> org.apache.solr.cloud.ZkController.preRegister(ZkController.java:1399)
>   at org.apache.solr.core.CoreContainer.create(CoreContainer.java:945)
> {code}
> While working on SOLR-6736, I ran into this (nasty?) issue. I'll try to 
> isolate this into a standalone test that demonstrates this issue. Otherwise, 
> as of now, this can be seen in the SOLR-6736's 
> testUploadWithScriptUpdateProcessor() test (which tries to re-create the 
> collection, but sometimes fails).



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-MacOSX (64bit/jdk1.8.0) - Build # 3926 - Unstable!

2017-03-28 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-MacOSX/3926/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseParallelGC

91 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.SolrTestCaseJ4Test

Error Message:
org.apache.solr.SolrTestCaseJ4Test

Stack Trace:
java.lang.ClassNotFoundException: org.apache.solr.SolrTestCaseJ4Test
at java.net.URLClassLoader$1.run(URLClassLoader.java:370)
at java.net.URLClassLoader$1.run(URLClassLoader.java:362)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:361)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:348)
at 
com.carrotsearch.ant.tasks.junit4.slave.SlaveMain.instantiate(SlaveMain.java:273)
at 
com.carrotsearch.ant.tasks.junit4.slave.SlaveMain.execute(SlaveMain.java:233)
at 
com.carrotsearch.ant.tasks.junit4.slave.SlaveMain.main(SlaveMain.java:355)
at 
com.carrotsearch.ant.tasks.junit4.slave.SlaveMainSafe.main(SlaveMainSafe.java:13)
Caused by: java.io.FileNotFoundException: 
/Users/jenkins/workspace/Lucene-Solr-master-MacOSX/solr/build/solr-core/classes/test/org/apache/solr/SolrTestCaseJ4Test.class
 (Too many open files)
at java.io.FileInputStream.open0(Native Method)
at java.io.FileInputStream.open(FileInputStream.java:195)
at java.io.FileInputStream.(FileInputStream.java:138)
at 
sun.misc.URLClassPath$FileLoader$1.getInputStream(URLClassPath.java:1288)
at sun.misc.Resource.cachedInputStream(Resource.java:77)
at sun.misc.Resource.getByteBuffer(Resource.java:160)
at java.net.URLClassLoader.defineClass(URLClassLoader.java:454)
at java.net.URLClassLoader.access$100(URLClassLoader.java:73)
at java.net.URLClassLoader$1.run(URLClassLoader.java:368)
... 12 more


FAILED:  
junit.framework.TestSuite.org.apache.solr.TestCursorMarkWithoutUniqueKey

Error Message:
org.apache.solr.TestCursorMarkWithoutUniqueKey

Stack Trace:
java.lang.ClassNotFoundException: org.apache.solr.TestCursorMarkWithoutUniqueKey
at java.net.URLClassLoader$1.run(URLClassLoader.java:370)
at java.net.URLClassLoader$1.run(URLClassLoader.java:362)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:361)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:348)
at 
com.carrotsearch.ant.tasks.junit4.slave.SlaveMain.instantiate(SlaveMain.java:273)
at 
com.carrotsearch.ant.tasks.junit4.slave.SlaveMain.execute(SlaveMain.java:233)
at 
com.carrotsearch.ant.tasks.junit4.slave.SlaveMain.main(SlaveMain.java:355)
at 
com.carrotsearch.ant.tasks.junit4.slave.SlaveMainSafe.main(SlaveMainSafe.java:13)
Caused by: java.io.FileNotFoundException: 
/Users/jenkins/workspace/Lucene-Solr-master-MacOSX/solr/build/solr-core/classes/test/org/apache/solr/TestCursorMarkWithoutUniqueKey.class
 (Too many open files)
at java.io.FileInputStream.open0(Native Method)
at java.io.FileInputStream.open(FileInputStream.java:195)
at java.io.FileInputStream.(FileInputStream.java:138)
at 
sun.misc.URLClassPath$FileLoader$1.getInputStream(URLClassPath.java:1288)
at sun.misc.Resource.cachedInputStream(Resource.java:77)
at sun.misc.Resource.getByteBuffer(Resource.java:160)
at java.net.URLClassLoader.defineClass(URLClassLoader.java:454)
at java.net.URLClassLoader.access$100(URLClassLoader.java:73)
at java.net.URLClassLoader$1.run(URLClassLoader.java:368)
... 12 more


FAILED:  junit.framework.TestSuite.org.apache.solr.TestDocumentBuilder

Error Message:
org.apache.solr.TestDocumentBuilder

Stack Trace:
java.lang.ClassNotFoundException: org.apache.solr.TestDocumentBuilder
at java.net.URLClassLoader$1.run(URLClassLoader.java:370)
at java.net.URLClassLoader$1.run(URLClassLoader.java:362)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:361)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:348)
at 

[jira] [Commented] (SOLR-10341) SQL AVG function mis-interprets field type.

2017-03-28 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10341?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15945721#comment-15945721
 ] 

ASF subversion and git services commented on SOLR-10341:


Commit aa2b46a62a52c0d0117312add2a667bf6b14a709 in lucene-solr's branch 
refs/heads/master from [~joel.bernstein]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=aa2b46a ]

SOLR-10341: SQL AVG function mis-interprets field type


> SQL AVG function mis-interprets field type.
> ---
>
> Key: SOLR-10341
> URL: https://issues.apache.org/jira/browse/SOLR-10341
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Parallel SQL
>Affects Versions: 6.5
>Reporter: Timothy Potter
> Attachments: Screen Shot 2017-03-22 at 8.12.33 AM.png, 
> SOLR-10341.patch, SOLR-10341.patch
>
>
> Using movielens data (users, movies, ratings), I tried the following SQL:
> {code}
> curl --data-urlencode "stmt=SELECT solr.title as title, avg(rating) as 
> avg_rating FROM ratings INNER JOIN (select movie_id,title from movies where 
> _query_='plot_txt_en:love') as solr ON ratings.movie_id = solr.movie_id GROUP 
> BY title ORDER BY avg_rating DESC LIMIT 10" 
> "http://localhost:8983/solr/movies/sql?aggregationMode=facet;
> {code}
> Solr returns this error: 
> {code}
> {"result-set":{"docs":[{"EXCEPTION":"Failed to execute sqlQuery 'SELECT 
> solr.title as title, avg(rating) as avg_rating FROM ratings INNER JOIN 
> (select movie_id,title from movies where _query_='plot_txt_en:love') as solr 
> ON ratings.movie_id = solr.movie_id GROUP BY title ORDER BY avg_rating DESC 
> LIMIT 10' against JDBC connection 'jdbc:calcitesolr:'.\nError while executing 
> SQL \"SELECT solr.title as title, avg(rating) as avg_rating FROM ratings 
> INNER JOIN (select movie_id,title from movies where 
> _query_='plot_txt_en:love') as solr ON ratings.movie_id = solr.movie_id GROUP 
> BY title ORDER BY avg_rating DESC LIMIT 10\": From line 1, column 29 to line 
> 1, column 39: Cannot apply 'AVG' to arguments of type 'AVG(<JAVA.LANG.STRING>)'. Supported form(s): 
> 'AVG(<NUMERIC>)'","EOF":true,"RESPONSE_TIME":92}]}}
> {code}
> rating is a TrieInt with docValues enabled.
> {code}
>  indexed="true" stored="true"/>
> {code}
> see screenshot



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10329) Rebuild Solr examples

2017-03-28 Thread Oussema Hidri (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10329?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15945720#comment-15945720
 ] 

Oussema Hidri commented on SOLR-10329:
--

About the proposal: I have just submitted it a few minutes ago.

Okay, I will do that and get back to you.

Thank you for your help =D




> Rebuild Solr examples
> -
>
> Key: SOLR-10329
> URL: https://issues.apache.org/jira/browse/SOLR-10329
> Project: Solr
>  Issue Type: Wish
>  Components: examples
>Reporter: Alexandre Rafalovitch
>  Labels: gsoc2017
>
> Apache Solr ships with a number of examples. They evolved from a kitchen sink 
> example and are rather large. When new Solr features are added, they are 
> often shoehorned into the most appropriate example and sometimes are not 
> represented at all. 
> Often, for new users, it is hard to tell what part of an example is relevant, 
> what part is default and what part is demonstrating something completely 
> different.
> It would take significant (and much appreciated) effort to review all the 
> examples and rebuild them to provide a clean way to showcase best practices 
> around base and most recent features.
> Specific issues are around kitchen sink vs. minimal examples, a better approach 
> to "schemaless" mode, and creating examples and datasets that allow one to create 
> both "hello world" and more-advanced tutorials.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10352) Low entropy warning in bin/solr script

2017-03-28 Thread Ishan Chattopadhyaya (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10352?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15945718#comment-15945718
 ] 

Ishan Chattopadhyaya commented on SOLR-10352:
-

[~hossman], I think it is a good point that entropy could, in theory, go up and 
down over the course of the life of a Solr process. In practice, a host with 
high entropy (say bare metal) tends to remain in a high-entropy state, and 
low-entropy hosts (say VMs) tend to remain in a low-entropy state. So, although 
in theory a system could get in and out of the diminished available entropy 
state, in practice, afaik, a good system remains good and a bad one remains 
bad. Hence, a startup warning feels like a sensible thing to throw out there.

bq. Rather than warning about this in bin/solr I feel like this type of 
information should be exposed by the solr metrics code, so people can easily 
monitor it over the life of the solr server process
I feel that a startup warning should definitely be thrown, since we already 
know that there will be a problem. Having metrics support and a UI warning is a 
great idea. However, I think we should do both (startup warning and metrics/UI 
warning).

> Low entropy warning in bin/solr script
> --
>
> Key: SOLR-10352
> URL: https://issues.apache.org/jira/browse/SOLR-10352
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Ishan Chattopadhyaya
> Fix For: master (7.0), branch_6x
>
> Attachments: SOLR-10352.patch
>
>
> We should add a warning in the startup script for Linux, if the output of the 
> following is below a certain threshold (maybe 300?). The warning could 
> indicate that features like UUIDField, SSL etc. might not work properly (or 
> be slow). As a hint, we could then suggest the user to configure a non 
> blocking SecureRandom (SOLR-10338) or install rng-tools, haveged etc.
> {quote}
> cat /proc/sys/kernel/random/entropy_avail
> {quote}
> Original discussion:
> https://issues.apache.org/jira/browse/SOLR-10338?focusedCommentId=15938904=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15938904



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5970) Create collection API always has status 0

2017-03-28 Thread Esther Quansah (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5970?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15945711#comment-15945711
 ] 

Esther Quansah commented on SOLR-5970:
--

processResponse() in OverseerCollectionMessageHandler.java logs the exception 
and outputs it in the response but never actually throws an exception, which is 
probably why the status is 0... I'm not able to replicate this issue, 
though... I'm always getting a 400 status when I attempt to create a collection 
with a non-existent configset. [~abec], could you list simple steps to reproduce 
this? If not, do you have debug-enabled logs? 
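
For reference, a minimal SolrJ sketch of what a client-side check could look like 
(the collection and configset names here are made up, and the ZooKeeper address is 
assumed to be localhost:9983): it creates a collection and inspects both the 
top-level status and the "failure" section, since the latter is where the real 
error lands when no exception is thrown. Depending on the Solr version, process() 
may instead throw a RemoteSolrException, which matches the 400 described above.

{code:java}
import org.apache.solr.client.solrj.impl.CloudSolrClient;
import org.apache.solr.client.solrj.request.CollectionAdminRequest;
import org.apache.solr.client.solrj.response.CollectionAdminResponse;

public class CreateCollectionStatusCheck {
  public static void main(String[] args) throws Exception {
    try (CloudSolrClient client = new CloudSolrClient.Builder()
        .withZkHost("localhost:9983").build()) {
      // Deliberately reference a configset that does not exist.
      CollectionAdminResponse rsp = CollectionAdminRequest
          .createCollection("test43", "no_such_configset", 1, 1)
          .process(client);
      // Top-level status can still be 0 even though the create failed...
      System.out.println("status    = " + rsp.getStatus());
      // ...while the actual error, if any, is reported under the "failure" key.
      System.out.println("failure   = " + rsp.getResponse().get("failure"));
      System.out.println("isSuccess = " + rsp.isSuccess());
    }
  }
}
{code}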

> Create collection API always has status 0
> -
>
> Key: SOLR-5970
> URL: https://issues.apache.org/jira/browse/SOLR-5970
> Project: Solr
>  Issue Type: Bug
>Reporter: Abraham Elmahrek
>
> The responses below are from a successful create collection API 
> (https://cwiki.apache.org/confluence/display/solr/Collections+API#CollectionsAPI-CreateormodifyanAliasforaCollection)
>  call and an unsuccessful create collection API call. It seems the 'status' 
> is always 0.
> Success:
> {u'responseHeader': {u'status': 0, u'QTime': 4421}, u'success': {u'': 
> {u'core': u'test1_shard1_replica1', u'responseHeader': {u'status': 0, 
> u'QTime': 3449
> Failure:
> {u'failure': 
>   {u'': 
> u"org.apache.solr.client.solrj.impl.HttpSolrServer$RemoteSolrException:Error 
> CREATEing SolrCore 'test43_shard1_replica1': Unable to create core: 
> test43_shard1_replica1 Caused by: Could not find configName for collection 
> test43 found:[test1]"},
>  u'responseHeader': {u'status': 0, u'QTime': 17149}
> }
> It seems like the status should be 400 or something similar for an 
> unsuccessful attempt?



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10329) Rebuild Solr examples

2017-03-28 Thread Alexandre Rafalovitch (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10329?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15945708#comment-15945708
 ] 

Alexandre Rafalovitch commented on SOLR-10329:
--

You are on the right track, but the proposal does not have enough depth. As a 
next step, I would recommend looking at the open JIRA issues and searching for 
the keyword *example*, and perhaps doing another search for the component 
*examples*. This should give you a list of the various issues seen with the 
examples. Without that, you will not know what goals you are satisfying in 
redoing the examples.

You can paste the list of all JIRAs dealing with examples and your 
categorization of the issues into your proposal to show you understand the 
landscape. Also, I did not see this proposal on the GSoC site earlier today. Is 
it there?

> Rebuild Solr examples
> -
>
> Key: SOLR-10329
> URL: https://issues.apache.org/jira/browse/SOLR-10329
> Project: Solr
>  Issue Type: Wish
>  Components: examples
>Reporter: Alexandre Rafalovitch
>  Labels: gsoc2017
>
> Apache Solr ships with a number of examples. They evolved from a kitchen sink 
> example and are rather large. When new Solr features are added, they are 
> often shoehorned into the most appropriate example and sometimes are not 
> represented at all. 
> Often, for new users, it is hard to tell what part of an example is relevant, 
> what part is default and what part is demonstrating something completely 
> different.
> It would take significant (and much appreciated) effort to review all the 
> examples and rebuild them to provide a clean way to showcase best practices 
> around base and most recent features.
> Specific issues are around kitchen sink vs. minimal examples, a better approach 
> to "schemaless" mode, and creating examples and datasets that allow one to create 
> both "hello world" and more-advanced tutorials.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10329) Rebuild Solr examples

2017-03-28 Thread Oussema Hidri (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10329?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15945670#comment-15945670
 ] 

Oussema Hidri commented on SOLR-10329:
--

Hey Alexandre,

Thank you for your response.
I really appreciated your blog and I have played around with the provided 
examples.
This is the proposal I have written: 
https://docs.google.com/document/d/1xn9BqRs44f3aJWFMLLU55qj71MOYxKX0IWNH094W3hI/edit?usp=sharing
I would really appreciate your help and your opinion.
I hope I am on the right track.

With love.

> Rebuild Solr examples
> -
>
> Key: SOLR-10329
> URL: https://issues.apache.org/jira/browse/SOLR-10329
> Project: Solr
>  Issue Type: Wish
>  Components: examples
>Reporter: Alexandre Rafalovitch
>  Labels: gsoc2017
>
> Apache Solr ships with a number of examples. They evolved from a kitchen sink 
> example and are rather large. When new Solr features are added, they are 
> often shoehorned into the most appropriate example and sometimes are not 
> represented at all. 
> Often, for new users, it is hard to tell what part of an example is relevant, 
> what part is default and what part is demonstrating something completely 
> different.
> It would take significant (and much appreciated) effort to review all the 
> examples and rebuild them to provide a clean way to showcase best practices 
> around base and most recent features.
> Specific issues are around kitchen sink vs. minimal examples, a better approach 
> to "schemaless" mode, and creating examples and datasets that allow one to create 
> both "hello world" and more-advanced tutorials.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7745) Explore GPU acceleration

2017-03-28 Thread Ishan Chattopadhyaya (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7745?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15945637#comment-15945637
 ] 

Ishan Chattopadhyaya commented on LUCENE-7745:
--

bq. Java CUDA libraries exist and what their licenses
jCuda happens to be MIT-licensed, which is, afaik, compatible with the Apache license.
http://www.jcuda.org/License.txt

> Explore GPU acceleration
> 
>
> Key: LUCENE-7745
> URL: https://issues.apache.org/jira/browse/LUCENE-7745
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Ishan Chattopadhyaya
>  Labels: gsoc2017, mentor
>
> There are parts of Lucene that can potentially be speeded up if computations 
> were to be offloaded from CPU to the GPU(s). With commodity GPUs having as 
> high as 12GB of high bandwidth RAM, we might be able to leverage GPUs to 
> speed parts of Lucene (indexing, search).
> First that comes to mind is spatial filtering, which is traditionally known 
> to be a good candidate for GPU based speedup (esp. when complex polygons are 
> involved). In the past, Mike McCandless has mentioned that "both initial 
> indexing and merging are CPU/IO intensive, but they are very amenable to 
> soaking up the hardware's concurrency."
> I'm opening this issue as an exploratory task, suitable for a GSoC project. I 
> volunteer to mentor any GSoC student willing to work on this this summer.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (LUCENE-7745) Explore GPU acceleration

2017-03-28 Thread Ishan Chattopadhyaya (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7745?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15945632#comment-15945632
 ] 

Ishan Chattopadhyaya edited comment on LUCENE-7745 at 3/28/17 5:58 PM:
---

Hi Vikash,

Regarding licensing issue:
The work done in this project would be exploratory. That code won't necessarily 
go into Lucene. When we are at a point where we see clear benefits from the 
work done here, we would then have to explore all aspects of productionizing it 
(including licensing).

Regarding next steps:
{quote}
BooleanScorer calls a lot of classes, e.g. the BM25 similarity or TF-IDF to do 
the calculation that could possibly be parallelized.
{quote}
# First, understand how BooleanScorer calls these similarity classes and does 
the scoring. There are unit tests in Lucene that can help you get there. This 
might help: https://wiki.apache.org/lucene-java/HowToContribute
# Write a standalone CUDA/OpenCL project that does the same processing on the 
GPU.
# Benchmark the speed of doing so on GPU vs. speed observed when doing the same 
through the BooleanScorer. Preferably, on a large resultset. Include time for 
copying results and scores in and out of the device memory from/to the main 
memory.
# Optimize step 2, if possible.

Once this is achieved (which in itself could be a sufficient GSoC project), one 
can have stretch goals to try out other parts of Lucene to optimize (e.g. 
spatial search).

Another stretch goal, if the results for optimizations are positive, could be 
to integrate the solution into Lucene. Most suitable way to do so would be to 
create hooks into Lucene so that plugins can be built to delegate parts of the 
processing to external code. And then, write a plugin (that uses jCuda, for 
example) and do an integration test.
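
To make the benchmarking step concrete, here is a rough, illustrative sketch of 
timing just the host-to-device-and-back copy of a block of scores using the jCuda 
runtime bindings (it assumes jcuda is on the classpath and a CUDA-capable device is 
present; no kernel is launched, so this only measures the transfer overhead that 
would surround a real GPU scoring kernel):

{code:java}
import jcuda.Pointer;
import jcuda.Sizeof;
import jcuda.runtime.JCuda;
import jcuda.runtime.cudaMemcpyKind;

public class ScoreTransferBenchmark {
  public static void main(String[] args) {
    final int numDocs = 1 << 20;             // pretend result-set size
    float[] hostScores = new float[numDocs]; // scores living on the Java heap

    Pointer deviceScores = new Pointer();
    JCuda.cudaMalloc(deviceScores, (long) numDocs * Sizeof.FLOAT);

    long start = System.nanoTime();
    // Copy scores to the device (where a scoring kernel would run)...
    JCuda.cudaMemcpy(deviceScores, Pointer.to(hostScores),
        (long) numDocs * Sizeof.FLOAT, cudaMemcpyKind.cudaMemcpyHostToDevice);
    // ...and copy the (hypothetically updated) scores back.
    JCuda.cudaMemcpy(Pointer.to(hostScores), deviceScores,
        (long) numDocs * Sizeof.FLOAT, cudaMemcpyKind.cudaMemcpyDeviceToHost);
    JCuda.cudaDeviceSynchronize();
    long elapsedNs = System.nanoTime() - start;

    System.out.printf("round-trip copy of %d floats took %.2f ms%n",
        numDocs, elapsedNs / 1e6);
    JCuda.cudaFree(deviceScores);
  }
}
{code}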


was (Author: ichattopadhyaya):
Hi Vikash,

Regarding licensing issue:
The work done in this project would be exploratory. That code won't necessarily 
go into Lucene. When we are at a point where we see clear benefits from the 
work done here, we would then have to explore all aspects of productionizing it 
(including licensing).

Regarding next steps:
{quote}
BooleanScorer calls a lot of classes, e.g. the BM25 similarity or TF-IDF to do 
the calculation that could possibly be parallelized.
{quote}
# First, understand how BooleanScorer calls these similarity classes and does 
the scoring. There are unit tests in Lucene that can help you get there. This 
might help: https://wiki.apache.org/lucene-java/HowToContribute
# Write a standalone CUDA/OpenCL project that does the same processing on the 
GPU.
# Benchmark the speed of doing so on GPU vs. speed observed when doing the same 
through the BooleanScorer. Preferably, on a large resultset. Include time for 
copying results and scores in and out of the device memory from/to the main 
memory.
# Optimize step 2, if possible.

Once this is achieved (which in itself could be a sufficient GSoC project), one 
can have stretch goals to try out other parts of Lucene to optimize (e.g. 
spatial search).

> Explore GPU acceleration
> 
>
> Key: LUCENE-7745
> URL: https://issues.apache.org/jira/browse/LUCENE-7745
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Ishan Chattopadhyaya
>  Labels: gsoc2017, mentor
>
> There are parts of Lucene that can potentially be speeded up if computations 
> were to be offloaded from CPU to the GPU(s). With commodity GPUs having as 
> high as 12GB of high bandwidth RAM, we might be able to leverage GPUs to 
> speed parts of Lucene (indexing, search).
> First that comes to mind is spatial filtering, which is traditionally known 
> to be a good candidate for GPU based speedup (esp. when complex polygons are 
> involved). In the past, Mike McCandless has mentioned that "both initial 
> indexing and merging are CPU/IO intensive, but they are very amenable to 
> soaking up the hardware's concurrency."
> I'm opening this issue as an exploratory task, suitable for a GSoC project. I 
> volunteer to mentor any GSoC student willing to work on this this summer.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7745) Explore GPU acceleration

2017-03-28 Thread Ishan Chattopadhyaya (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7745?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15945632#comment-15945632
 ] 

Ishan Chattopadhyaya commented on LUCENE-7745:
--

Hi Vikash,

Regarding licensing issue:
The work done in this project would be exploratory. That code won't necessarily 
go into Lucene. When we are at a point where we see clear benefits from the 
work done here, we would then have to explore all aspects of productionizing it 
(including licensing).

Regarding next steps:
{quote}
BooleanScorer calls a lot of classes, e.g. the BM25 similarity or TF-IDF to do 
the calculation that could possibly be parallelized.
{quote}
# First, understand how BooleanScorer calls these similarity classes and does 
the scoring. There are unit tests in Lucene that can help you get there. This 
might help: https://wiki.apache.org/lucene-java/HowToContribute
# Write a standalone CUDA/OpenCL project that does the same processing on the 
GPU.
# Benchmark the speed of doing so on GPU vs. speed observed when doing the same 
through the BooleanScorer. Preferably, on a large resultset. Include time for 
copying results and scores in and out of the device memory from/to the main 
memory.
# Optimize step 2, if possible.

Once this is achieved (which in itself could be a sufficient GSoC project), one 
can have stretch goals to try out other parts of Lucene to optimize (e.g. 
spatial search).

> Explore GPU acceleration
> 
>
> Key: LUCENE-7745
> URL: https://issues.apache.org/jira/browse/LUCENE-7745
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Ishan Chattopadhyaya
>  Labels: gsoc2017, mentor
>
> There are parts of Lucene that can potentially be speeded up if computations 
> were to be offloaded from CPU to the GPU(s). With commodity GPUs having as 
> high as 12GB of high bandwidth RAM, we might be able to leverage GPUs to 
> speed parts of Lucene (indexing, search).
> First that comes to mind is spatial filtering, which is traditionally known 
> to be a good candidate for GPU based speedup (esp. when complex polygons are 
> involved). In the past, Mike McCandless has mentioned that "both initial 
> indexing and merging are CPU/IO intensive, but they are very amenable to 
> soaking up the hardware's concurrency."
> I'm opening this issue as an exploratory task, suitable for a GSoC project. I 
> volunteer to mentor any GSoC student willing to work on this this summer.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9178) ExtractingRequestHandler doesn't strip HTML and adds metadata to content body

2017-03-28 Thread Alexandre Rafalovitch (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9178?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15945633#comment-15945633
 ] 

Alexandre Rafalovitch commented on SOLR-9178:
-

[~talli...@apache.org]
Looking at the Wiki link and the API, it seems that the only option for the 
ParseContext is to provide an alternative implementation class mapped to some 
pre-existing interface. There is no way to pass in values/settings as was 
suggested in TIKA-612?

In that case, perhaps it makes sense to restructure the Wiki page to basically be 
a nested list of the interfaces that can be overridden and the options for each. 
Something like:

General usage: *parseContext.set(MyInterface.class, new MyInterfaceImpl());*

On the other hand, looking at the Tika API (for 1.5), I am having trouble finding 
the valid values for the interface implementations. Is that something 
non-straightforward, or is there just one default implementation in most cases? 

Actually, the Wiki is missing the HtmlMapper, which does have two 
implementations. And it does list ExecutorService, which I cannot find in the Tika 
API. It is a bit confusing. 

P.S. This discussion probably does not belong in this JIRA. Is there a 
Tika-side JIRA for improving ParseContext information?
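
For reference, a small standalone Tika sketch of the parseContext.set(...) pattern 
with the two HtmlMapper implementations mentioned above (the input file name is 
made up): the default mapping keeps only "safe" text structure, while 
IdentityHtmlMapper passes the HTML elements straight through.

{code:java}
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Paths;

import org.apache.tika.metadata.Metadata;
import org.apache.tika.parser.ParseContext;
import org.apache.tika.parser.html.HtmlMapper;
import org.apache.tika.parser.html.HtmlParser;
import org.apache.tika.parser.html.IdentityHtmlMapper;
import org.apache.tika.sax.BodyContentHandler;

public class HtmlMapperDemo {
  public static void main(String[] args) throws Exception {
    HtmlParser parser = new HtmlParser();

    ParseContext context = new ParseContext();
    // General usage pattern: map an interface to the chosen implementation.
    // IdentityHtmlMapper keeps all elements; drop this line to get the default mapping.
    context.set(HtmlMapper.class, IdentityHtmlMapper.INSTANCE);

    try (InputStream in = Files.newInputStream(Paths.get("test.html"))) {
      BodyContentHandler handler = new BodyContentHandler(-1); // no write limit
      Metadata metadata = new Metadata();
      parser.parse(in, handler, metadata, context);
      System.out.println(handler.toString());
    }
  }
}
{code}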

> ExtractingRequestHandler doesn't strip HTML and adds metadata to content body
> -
>
> Key: SOLR-9178
> URL: https://issues.apache.org/jira/browse/SOLR-9178
> Project: Solr
>  Issue Type: Bug
>  Components: update
>Affects Versions: 5.0, 6.0.1
> Environment: java version "1.8.0_91" 64 bit
> Linux Mint 17, 64 bit
>Reporter: Simon Blandford
>
> Starting environment:
> solr-6.0.1.tgz is downloaded and extracted. We are in the solr-6.0.1 
> directory.
> The file, test.html, is downloaded from 
> https://wiki.apache.org/solr/UsingMailingLists.
> Affected versions: 4.10.3 is the last working version. 4.10.4 has some HTML 
> comments and Javascript breaking through. Versions >5.0 have full symptoms 
> described.
> Steps to reproduce:
> 1) bin/solr start
> 2) bin/solr create -c mycore
> 3) curl 
> "http://localhost:8983/solr/mycore/update/extract?literal.id=doc1=attr_=attr_content=true;
>  -F "content/tutorial=@test.html"
> 4) curl http://localhost:8983/solr/mycore/select?q=information
> Expected result: HTML->Text version of document indexed in  content 
> body.
> Actual result: Full HTML, but with anglebrackets removed, being indexed along 
> with other unwanted metadata in the content body including fragments of CSS 
> and Javascript that were in the source document. 
> Head of response body below...
> 
> 
> 0 name="QTime">0 name="q">information start="0">doc1 name="attr_stream_size">20440 name="attr_x_parsed_by">org.apache.tika.parser.DefaultParserorg.apache.tika.parser.html.HtmlParser  name="attr_stream_content_type">text/html name="attr_stream_name">test.html name="attr_stream_source_info">content/tutorial name="attr_dc_title">UsingMailingLists - Solr Wiki name="attr_content_encoding">UTF-8 name="attr_robots">index,nofollow name="attr_title">UsingMailingLists - Solr Wiki name="attr_content_type">text/html; charset=utf-8 name="attr_content"> 
>  
>  stylesheet text/css utf-8 all /wiki/modernized/css/common.css   stylesheet 
> text/css utf-8 screen /wiki/modernized/css/screen.css   stylesheet text/css 
> utf-8 print /wiki/modernized/css/print.css   stylesheet text/css utf-8 
> projection /wiki/modernized/css/projection.css   alternate Solr Wiki: 
> UsingMailingLists 
> /solr/UsingMailingLists?diffs=1show_att=1action=rss_rcunique=0page=UsingMailingListsddiffs=1
>  application/rss+xml   Start /solr/FrontPage   Alternate Wiki Markup 
> /solr/UsingMailingLists?action=raw   Alternate print Print View 
> /solr/UsingMailingLists?action=print   Search /solr/FindPage   Index 
> /solr/TitleIndex   Glossary /solr/WordIndex   Help /solr/HelpOnFormatting   
> stream_size 20440  
>  X-Parsed-By org.apache.tika.parser.DefaultParser  
>  X-Parsed-By org.apache.tika.parser.html.HtmlParser  
>  stream_content_type text/html  
>  stream_name test.html  
>  stream_source_info content/tutorial  
>  dc:title UsingMailingLists - Solr Wiki  
>  Content-Encoding UTF-8  
>  robots index,nofollow  
>  Content-Type text/html; charset=utf-8  
>  UsingMailingLists - Solr Wiki 
>  
>  
>  header 
>  application/x-www-form-urlencoded get searchform /solr/UsingMailingLists 
>  
>  hidden action fullsearch  
>  hidden context 180  
>  searchinput Search: 
>  text searchinput value  20 searchFocus(this) searchBlur(this) 
> searchChange(this) searchChange(this) Search  
>  submit titlesearch titlesearch Titles Search Titles  
>  submit fullsearch fullsearch Text Search Full Text  
>  
>  
>  text/javascript 
> !--// Initialize search form
> var f = document.getElementById('searchform');
> 

[JENKINS] Lucene-Solr-6.x-Windows (32bit/jdk1.8.0_121) - Build # 811 - Still unstable!

2017-03-28 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Windows/811/
Java: 32bit/jdk1.8.0_121 -server -XX:+UseG1GC

2 tests failed.
FAILED:  org.apache.solr.cloud.MissingSegmentRecoveryTest.testLeaderRecovery

Error Message:
Expected a collection with one shard and two replicas null Last available 
state: 
DocCollection(MissingSegmentRecoveryTest//collections/MissingSegmentRecoveryTest/state.json/8)={
   "replicationFactor":"2",   "shards":{"shard1":{   
"range":"8000-7fff",   "state":"active",   "replicas":{ 
"core_node1":{   "core":"MissingSegmentRecoveryTest_shard1_replica2",   
"base_url":"http://127.0.0.1:51382/solr;,   
"node_name":"127.0.0.1:51382_solr",   "state":"active",   
"leader":"true"}, "core_node2":{   
"core":"MissingSegmentRecoveryTest_shard1_replica1",   
"base_url":"http://127.0.0.1:51385/solr;,   
"node_name":"127.0.0.1:51385_solr",   "state":"down",   
"router":{"name":"compositeId"},   "maxShardsPerNode":"1",   
"autoAddReplicas":"false"}

Stack Trace:
java.lang.AssertionError: Expected a collection with one shard and two replicas
null
Last available state: 
DocCollection(MissingSegmentRecoveryTest//collections/MissingSegmentRecoveryTest/state.json/8)={
  "replicationFactor":"2",
  "shards":{"shard1":{
  "range":"8000-7fff",
  "state":"active",
  "replicas":{
"core_node1":{
  "core":"MissingSegmentRecoveryTest_shard1_replica2",
  "base_url":"http://127.0.0.1:51382/solr;,
  "node_name":"127.0.0.1:51382_solr",
  "state":"active",
  "leader":"true"},
"core_node2":{
  "core":"MissingSegmentRecoveryTest_shard1_replica1",
  "base_url":"http://127.0.0.1:51385/solr;,
  "node_name":"127.0.0.1:51385_solr",
  "state":"down",
  "router":{"name":"compositeId"},
  "maxShardsPerNode":"1",
  "autoAddReplicas":"false"}
at 
__randomizedtesting.SeedInfo.seed([9545F5939CFE69FB:C5106D90C5DFDFE6]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.cloud.SolrCloudTestCase.waitForState(SolrCloudTestCase.java:265)
at 
org.apache.solr.cloud.MissingSegmentRecoveryTest.testLeaderRecovery(MissingSegmentRecoveryTest.java:105)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
 

[jira] [Commented] (SOLR-10352) Low entropy warning in bin/solr script

2017-03-28 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10352?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15945570#comment-15945570
 ] 

Hoss Man commented on SOLR-10352:
-

Personally I think we really should not conflate the two issues of: 
* warning if entropy is low
* changing the default source of entropy.

Those should really be 2 completely distinct discussions.

One is a simple choice/discussion: is there any cost/overhead to giving the 
user a warning about entropy?

The other is a more nuanced discussion about the risks/rewards of using different 
sources of entropy and how that affects the confidence in our encryption-based 
features: that deserves a lot more discussion in its own JIRA.



With that said, here are my thoughts on the current patch/commit made so far in 
this JIRA...

I don't think it's useful as implemented.

IIUC, having this type of check solely on startup may be misleading to users -- 
just because there is "low" entropy available when Solr starts up doesn't mean 
there will be low entropy for the (long) life of the Solr server process. 
Likewise, if there is "high" entropy on startup, that doesn't mean everything 
will be fine and there's nothing to worry about: the available entropy could 
drop over time and cause performance issues later.

Rather than warning about this in {{bin/solr}}, I feel like this type of 
information should be exposed by the Solr metrics code, so people can easily 
monitor it over the life of the Solr server process -- either via a command 
line script we could provide, or via JMX, or via the admin UI ... we could even 
consider incorporating some specific "node health" metrics (entropy 
level, max open files, free disk, etc...) directly into the main screen of the 
Admin UI, along with specific warnings/suggestions such as the text this issue 
added about SSL & UUIDField.
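
To sketch what the metrics angle might look like, here is a standalone Dropwizard 
Metrics example (not wired into Solr's actual metrics registry; the metric name is 
made up) of a gauge that re-reads the kernel's available-entropy counter on every 
poll:

{code:java}
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

import com.codahale.metrics.Gauge;
import com.codahale.metrics.MetricRegistry;

public class EntropyGaugeDemo {
  private static final Path ENTROPY_AVAIL =
      Paths.get("/proc/sys/kernel/random/entropy_avail");

  public static void main(String[] args) {
    MetricRegistry registry = new MetricRegistry();
    // Gauge that reports the currently available entropy on Linux (-1 elsewhere).
    registry.register("os.entropyAvailable", (Gauge<Long>) () -> {
      try {
        return Long.parseLong(Files.readAllLines(ENTROPY_AVAIL).get(0).trim());
      } catch (Exception e) {
        return -1L; // not on Linux, or /proc not readable
      }
    });

    System.out.println("entropy_avail = "
        + registry.getGauges().get("os.entropyAvailable").getValue());
  }
}
{code}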

> Low entropy warning in bin/solr script
> --
>
> Key: SOLR-10352
> URL: https://issues.apache.org/jira/browse/SOLR-10352
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Ishan Chattopadhyaya
> Fix For: master (7.0), branch_6x
>
> Attachments: SOLR-10352.patch
>
>
> We should add a warning in the startup script for Linux, if the output of the 
> following is below a certain threshold (maybe 300?). The warning could 
> indicate that features like UUIDField, SSL etc. might not work properly (or 
> be slow). As a hint, we could then suggest the user to configure a non 
> blocking SecureRandom (SOLR-10338) or install rng-tools, haveged etc.
> {quote}
> cat /proc/sys/kernel/random/entropy_avail
> {quote}
> Original discussion:
> https://issues.apache.org/jira/browse/SOLR-10338?focusedCommentId=15938904=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15938904



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10338) Configure SecureRandom non blocking

2017-03-28 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10338?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15945523#comment-15945523
 ] 

Hoss Man commented on SOLR-10338:
-

{quote}
If we are using a non-blocking source for the random number generator, I don't 
think we need to check entropy or warn about it. It seems that people who 
really understand what's going on say that the non-blocking source is preferred 
over the blocking source for all production usage, and only hardcore crypto 
researchers are likely to need the blocking source.

In my own development efforts outside of Solr, I have seen that when a Tomcat 
server starts up, configuring "/dev/./urandom" for the java random source can 
reduce the startup by ten to fifteen seconds.
{quote}

Shawn: I feel like you are conflating 3 distinct questions/ideas...

# Overriding SecureRandom _*in tests*_ ... which is what this current issue is 
about.  In a test situation, there is no downside (that I can think of) to 
forcing a particular source of "randomness" -- and in most cases forcing a 
"consistent" use of randomness is a good idea to improve reproducibility.  In 
general, in our unit tests we're also already specifically not concerned with 
having truly "secure" randomness (see SSL parent issue SOLR-5776).
# Overriding SecureRandom _*in production code*_ ... this is a much more sensitive 
situation.  We should be very careful about arbitrarily deciding that 
{{bin/solr}} should override the source of secure randomness, since that could 
open security holes in SSL and in security features that rely on encryption.
# Warning the user about low entropy ... regardless of _what_ entropy source is 
being used, which was the (original) point of SOLR-10352.

We should keep these issues/discussions isolated and discrete.  Choices we make 
regarding our test scaffolding (which may be fundamentally insecure, but 
helpful for speed) are not necessarily the same choices we want to make in our 
end-user production scripts.
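
For concreteness, a tiny sketch of the difference between the platform-default 
SecureRandom and an explicitly non-blocking one (the "NativePRNGNonBlocking" 
algorithm name is Linux/OpenJDK specific, so treat this as illustrative rather 
than a production recommendation):

{code:java}
import java.security.NoSuchAlgorithmException;
import java.security.SecureRandom;

public class SecureRandomSources {
  public static void main(String[] args) throws NoSuchAlgorithmException {
    // Platform default; its seeding may block while the kernel entropy pool is low.
    SecureRandom def = new SecureRandom();

    // Explicitly non-blocking variant backed by /dev/urandom on Linux JDKs.
    SecureRandom nonBlocking = SecureRandom.getInstance("NativePRNGNonBlocking");

    byte[] buf = new byte[32];
    nonBlocking.nextBytes(buf);
    System.out.println("default=" + def.getAlgorithm()
        + " nonBlocking=" + nonBlocking.getAlgorithm());
  }
}
{code}

(The other commonly cited workaround is the {{-Djava.security.egd=file:/dev/./urandom}} 
system property, which changes the seed source for the whole JVM.)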

> Configure SecureRandom non blocking
> ---
>
> Key: SOLR-10338
> URL: https://issues.apache.org/jira/browse/SOLR-10338
> Project: Solr
>  Issue Type: Sub-task
>Reporter: Mihaly Toth
>Assignee: Mark Miller
> Fix For: 4.9, 6.0
>
> Attachments: SOLR-10338.patch, SOLR-10338.patch
>
>
> It would be best if SecureRandom could be made non blocking. In that case we 
> could get rid of random entropy exhaustion issue related to all usages of 
> SecureRandom.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [JENKINS] Lucene-Solr-master-Windows (64bit/jdk1.8.0_121) - Build # 6481 - Still Unstable!

2017-03-28 Thread Erick Erickson
Beasted it 1,000 times and couldn't reproduce on OSX. Probably something
silly with char separators. Won't have a chance to look until tonight
though; feel free to disable if you want.

On Mar 28, 2017 8:50 AM, "Policeman Jenkins Server" 
wrote:

> Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/6481/
> Java: 64bit/jdk1.8.0_121 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC
>
> 1 tests failed.
> FAILED:  org.apache.solr.cloud.SolrCLIZkUtilsTest.testCp
>
> Error Message:
> Should have found /cp7/conf on Zookeeper
>
> Stack Trace:
> java.lang.AssertionError: Should have found /cp7/conf on Zookeeper
> at __randomizedtesting.SeedInfo.seed([C25A0B828C89B46A:
> 29B6F6CC48200A08]:0)
> at org.junit.Assert.fail(Assert.java:93)
> at org.junit.Assert.assertTrue(Assert.java:43)
> at org.apache.solr.cloud.SolrCLIZkUtilsTest$1.checkPathOnZk(
> SolrCLIZkUtilsTest.java:670)
> at org.apache.solr.cloud.SolrCLIZkUtilsTest$1.preVisitDirectory(
> SolrCLIZkUtilsTest.java:686)
> at org.apache.solr.cloud.SolrCLIZkUtilsTest$1.preVisitDirectory(
> SolrCLIZkUtilsTest.java:666)
> at java.nio.file.Files.walkFileTree(Files.java:2677)
> at java.nio.file.Files.walkFileTree(Files.java:2742)
> at org.apache.solr.cloud.SolrCLIZkUtilsTest.
> verifyAllFilesAreZNodes(SolrCLIZkUtilsTest.java:666)
> at org.apache.solr.cloud.SolrCLIZkUtilsTest.
> verifyZkLocalPathsMatch(SolrCLIZkUtilsTest.java:642)
> at org.apache.solr.cloud.SolrCLIZkUtilsTest.testCp(
> SolrCLIZkUtilsTest.java:321)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(
> NativeMethodAccessorImpl.java:62)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(
> DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(
> RandomizedRunner.java:1713)
> at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(
> RandomizedRunner.java:907)
> at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(
> RandomizedRunner.java:943)
> at com.carrotsearch.randomizedtesting.
> RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
> at com.carrotsearch.randomizedtesting.rules.
> SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.
> java:57)
> at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(
> TestRuleSetupTeardownChained.java:49)
> at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(
> AbstractBeforeAfterRule.java:45)
> at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(
> TestRuleThreadAndTestName.java:48)
> at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures
> $1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
> at org.apache.lucene.util.TestRuleMarkFailure$1.
> evaluate(TestRuleMarkFailure.java:47)
> at com.carrotsearch.randomizedtesting.rules.
> StatementAdapter.evaluate(StatementAdapter.java:36)
> at com.carrotsearch.randomizedtesting.ThreadLeakControl$
> StatementRunner.run(ThreadLeakControl.java:368)
> at com.carrotsearch.randomizedtesting.ThreadLeakControl.
> forkTimeoutingTask(ThreadLeakControl.java:817)
> at com.carrotsearch.randomizedtesting.
> ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
> at com.carrotsearch.randomizedtesting.RandomizedRunner.
> runSingleTest(RandomizedRunner.java:916)
> at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(
> RandomizedRunner.java:802)
> at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(
> RandomizedRunner.java:852)
> at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(
> RandomizedRunner.java:863)
> at com.carrotsearch.randomizedtesting.rules.
> StatementAdapter.evaluate(StatementAdapter.java:36)
> at com.carrotsearch.randomizedtesting.rules.
> SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.
> java:57)
> at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(
> AbstractBeforeAfterRule.java:45)
> at com.carrotsearch.randomizedtesting.rules.
> StatementAdapter.evaluate(StatementAdapter.java:36)
> at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(
> TestRuleStoreClassName.java:41)
> at com.carrotsearch.randomizedtesting.rules.
> NoShadowingOrOverridesOnMethodsRule$1.evaluate(
> NoShadowingOrOverridesOnMethodsRule.java:40)
> at com.carrotsearch.randomizedtesting.rules.
> NoShadowingOrOverridesOnMethodsRule$1.evaluate(
> NoShadowingOrOverridesOnMethodsRule.java:40)
> at com.carrotsearch.randomizedtesting.rules.
> StatementAdapter.evaluate(StatementAdapter.java:36)
> at com.carrotsearch.randomizedtesting.rules.
> 

[jira] [Commented] (SOLR-10338) Configure SecureRandom non blocking

2017-03-28 Thread Ishan Chattopadhyaya (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10338?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15945467#comment-15945467
 ] 

Ishan Chattopadhyaya commented on SOLR-10338:
-

{code}
A read from the /dev/urandom device will not block waiting for more entropy.
If there is not sufficient entropy, a pseudorandom number generator is
used to create the requested bytes.  As a result, in this case the returned
values are theoretically vulnerable to a cryptographic attack on the
algorithms used by the driver.
{code}
Here's an excerpt from the {{man random}} page in GNU/Linux. Given this, I'd be 
reluctant to make /dev/urandom the default.

> Configure SecureRandom non blocking
> ---
>
> Key: SOLR-10338
> URL: https://issues.apache.org/jira/browse/SOLR-10338
> Project: Solr
>  Issue Type: Sub-task
>Reporter: Mihaly Toth
>Assignee: Mark Miller
> Fix For: 4.9, 6.0
>
> Attachments: SOLR-10338.patch, SOLR-10338.patch
>
>
> It would be best if SecureRandom could be made non blocking. In that case we 
> could get rid of random entropy exhaustion issue related to all usages of 
> SecureRandom.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-7751) Findbugs: boxing a primitive to compare

2017-03-28 Thread Adrien Grand (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7751?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrien Grand updated LUCENE-7751:
-
Fix Version/s: (was: 6.5)
   6.6

> Findbugs: boxing a primitive to compare
> ---
>
> Key: LUCENE-7751
> URL: https://issues.apache.org/jira/browse/LUCENE-7751
> Project: Lucene - Core
>  Issue Type: Sub-task
>Reporter: Daniel Jelinski
>Priority: Minor
> Fix For: master (7.0), 6.6
>
> Attachments: LUCENE-7751.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-7743) Findbugs: avoid new String(String)

2017-03-28 Thread Adrien Grand (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7743?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrien Grand updated LUCENE-7743:
-
Fix Version/s: (was: 6.5)
   6.6

> Findbugs: avoid new String(String)
> --
>
> Key: LUCENE-7743
> URL: https://issues.apache.org/jira/browse/LUCENE-7743
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Daniel Jelinski
>Priority: Minor
> Fix For: master (7.0), 6.6
>
> Attachments: LUCENE-7743.patch
>
>
> http://findbugs.sourceforge.net/bugDescriptions.html#DM_STRING_CTOR
> Removing the extra constructor calls will avoid heap allocations while 
> behaving just the same as the original code.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (LUCENE-7751) Findbugs: boxing a primitive to compare

2017-03-28 Thread Adrien Grand (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7751?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrien Grand resolved LUCENE-7751.
--
   Resolution: Fixed
Fix Version/s: 6.5
   master (7.0)

> Findbugs: boxing a primitive to compare
> ---
>
> Key: LUCENE-7751
> URL: https://issues.apache.org/jira/browse/LUCENE-7751
> Project: Lucene - Core
>  Issue Type: Sub-task
>Reporter: Daniel Jelinski
>Priority: Minor
> Fix For: master (7.0), 6.5
>
> Attachments: LUCENE-7751.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (LUCENE-7743) Findbugs: avoid new String(String)

2017-03-28 Thread Adrien Grand (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7743?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrien Grand resolved LUCENE-7743.
--
   Resolution: Fixed
Fix Version/s: 6.5
   master (7.0)

> Findbugs: avoid new String(String)
> --
>
> Key: LUCENE-7743
> URL: https://issues.apache.org/jira/browse/LUCENE-7743
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Daniel Jelinski
>Priority: Minor
> Fix For: master (7.0), 6.5
>
> Attachments: LUCENE-7743.patch
>
>
> http://findbugs.sourceforge.net/bugDescriptions.html#DM_STRING_CTOR
> Removing the extra constructor calls will avoid heap allocations while 
> behaving just the same as the original code.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10352) Low entropy warning in bin/solr script

2017-03-28 Thread Ishan Chattopadhyaya (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10352?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15945454#comment-15945454
 ] 

Ishan Chattopadhyaya commented on SOLR-10352:
-

bq. Just came to my mind what if we had non-blocking SecureRandom as a default 
in the startup scripts
My thought is that we should not change this by default, since /dev/random has 
been preferred by cryptographers and sysadmins for SSL. However, since the 
article argues that there are no downsides to using /dev/urandom, I think we 
can recommend that the user use it when the entropy is low. This could be 
included in the warning message from the script. What do you think?

> Low entropy warning in bin/solr script
> --
>
> Key: SOLR-10352
> URL: https://issues.apache.org/jira/browse/SOLR-10352
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Ishan Chattopadhyaya
> Fix For: master (7.0), branch_6x
>
> Attachments: SOLR-10352.patch
>
>
> We should add a warning in the startup script for Linux, if the output of the 
> following is below a certain threshold (maybe 300?). The warning could 
> indicate that features like UUIDField, SSL etc. might not work properly (or 
> be slow). As a hint, we could then suggest the user to configure a non 
> blocking SecureRandom (SOLR-10338) or install rng-tools, haveged etc.
> {quote}
> cat /proc/sys/kernel/random/entropy_avail
> {quote}
> Original discussion:
> https://issues.apache.org/jira/browse/SOLR-10338?focusedCommentId=15938904=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15938904



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-10357) When sow=false, edismax query parsers should handle per-fieldtype autoGeneratePhraseQueries by setting QueryBuilder.autoGenerateMultiTermSynonymsQuery

2017-03-28 Thread Steve Rowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10357?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Rowe updated SOLR-10357:
--
Attachment: SOLR-10357.patch

Updated patch after committing SOLR-10343 and SOLR-10344.

Running tests and precommit now.

> When sow=false, edismax query parsers should handle per-fieldtype 
> autoGeneratePhraseQueries by setting 
> QueryBuilder.autoGenerateMultiTermSynonymsQuery
> ---
>
> Key: SOLR-10357
> URL: https://issues.apache.org/jira/browse/SOLR-10357
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Steve Rowe
> Attachments: SOLR-10357.patch, SOLR-10357.patch
>
>
> Right now, the options to not split on whitespace ({{sow=false}}) and to 
> autogenerate phrase queries ({{autoGeneratePhraseQueries="true"}}) will cause 
> queries to throw an exception, since they are incompatible.
> {{QueryBuilder.autoGenerateMultiTermSynonymsPhraseQuery}}, introduced in 
> LUCENE-7638, is the graph query version of Solr's per-fieldtype 
> {{autoGeneratePhraseQueries}} option, and is not incompatible with 
> {{sow=false}}.  
> So {{autoGeneratePhraseQueries="true"}} should cause  
> {{QueryBuilder.autoGenerateMultiTermSynonymsPhraseQuery}} to be set to true 
> when {{sow=false}}, rather than triggering an exception.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10365) Collection re-creation fails if previous collection creation had failed

2017-03-28 Thread Ishan Chattopadhyaya (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10365?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15945444#comment-15945444
 ] 

Ishan Chattopadhyaya commented on SOLR-10365:
-

Thanks for your review, Noble.

I think what is happening is the following:

How does a failed collection get cleaned up?
# At CoreContainer's create(CoreDescriptor,boolean,boolean) method, there's a 
preRegister step. This publishes the core as DOWN before even attempting to 
initialize the core.
# When there's a failure to initialize the core, the CoreContainer's 
coreInitFailures map gets populated with the exception.
# At OCMH, when there's a failure with the CreateCollection command, an attempt 
to clean up is performed. This actually calls DELETE, which in turn calls the 
UNLOAD core admin command from DeleteCollectionCmd.java.
# This UNLOAD command is invoked from OCMH's collectionCmd() method, which 
calls UNLOAD on every replica registered in step 1.
# At the replica's CoreContainer, when the unload() method is invoked, the 
coreInitFailures map gets cleared.

This is all fine, when it works. However, the publish step in preRegister seems 
to work only intermittently. Sometimes the publish doesn't work: I can see that 
the state operation is offered to the distributed queue properly, but that 
message doesn't actually seem to get processed. Hence, at step 4, no UNLOAD 
command is sent to the replica. The latest SOLR-6736 patch's 
TestConfigSetsAPI#testUploadWithScriptUpdateProcessor() demonstrates this.

While this may be a larger issue with the way OCMH works, I can see that the 
patch I added here does the job in those circumstances, and the code path 
followed after the core registers successfully removes the previous exception 
from the coreInitFailures map as expected. Unless someone has any objections, I 
am inclined to commit this patch, and hence commit SOLR-6736, and then continue 
investigating the above scenario.
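
To illustrate (this is a toy model with invented names, not the actual patch or 
Solr's CoreContainer): the essential bookkeeping is that a stale init failure 
recorded for a core name must be dropped when that name is created again, so a 
retry is not poisoned by the previous attempt.

{code:java}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class CoreRegistryModel {
  // Stands in for CoreContainer.coreInitFailures in this toy model.
  private final Map<String, Exception> initFailures = new ConcurrentHashMap<>();

  public void create(String coreName, boolean shouldFail) {
    // A retried create must not see the previous attempt's failure, so clear
    // any stale entry before anything that consults the map (e.g. getCore()).
    initFailures.remove(coreName);
    try {
      if (shouldFail) {
        throw new IllegalStateException("blahblah");
      }
      System.out.println("created " + coreName);
    } catch (Exception e) {
      initFailures.put(coreName, e); // remembered for later lookups
    }
  }

  public static void main(String[] args) {
    CoreRegistryModel cc = new CoreRegistryModel();
    cc.create("newcollection2_shard1_replica1", true);   // first attempt fails
    cc.create("newcollection2_shard1_replica1", false);  // retry succeeds
    System.out.println("failures left: " + cc.initFailures.size());
  }
}
{code}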

> Collection re-creation fails if previous collection creation had failed
> ---
>
> Key: SOLR-10365
> URL: https://issues.apache.org/jira/browse/SOLR-10365
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Ishan Chattopadhyaya
> Attachments: SOLR-10365.patch, SOLR-10365.patch, SOLR-10365.patch, 
> SOLR-10365.patch
>
>
> Steps to reproduce:
> # Create collection using a bad configset that has some errors, due to which 
> collection creation fails.
> # Now, create a collection using the same name, but a good configset. This 
> fails sometimes (about 25-30% of the time, according to my rough estimate).
> Here's what happens during the second step (can be seen from stacktrace 
> below):
> # In CoreContainer's create(CoreDescriptor, boolean, boolean), there's a line 
> {{zkSys.getZkController().preRegister(dcore);}}.
> # This calls ZkController's publish(), which in turn calls CoreContainer's 
> getCore() method. This call *should* return null (since previous attempt of 
> core creation didn't succeed). But, it throws the exception associated with 
> the previous failure.
> Here's the stack trace for the same.
> {code}
> Caused by: org.apache.solr.common.SolrException: SolrCore 
> 'newcollection2_shard1_replica1' is not available due to init failure: 
> blahblah
>   at org.apache.solr.core.CoreContainer.getCore(CoreContainer.java:1312)
>   at org.apache.solr.cloud.ZkController.publish(ZkController.java:1225)
>   at 
> org.apache.solr.cloud.ZkController.preRegister(ZkController.java:1399)
>   at org.apache.solr.core.CoreContainer.create(CoreContainer.java:945)
> {code}
> While working on SOLR-6736, I ran into this (nasty?) issue. I'll try to 
> isolate this into a standalone test that demonstrates this issue. Otherwise, 
> as of now, this can be seen in the SOLR-6736's 
> testUploadWithScriptUpdateProcessor() test (which tries to re-create the 
> collection, but sometimes fails).



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Points types

2017-03-28 Thread Tom Mortimer
Thanks Adrien, that's very helpful.

This is for https://github.com/flaxsearch/marple , so I guess what I'll do
is give the user the option to select a type from the possible options.


tel +44 8700 118334 : mobile +44 7876 741014 : skype tommortimer

On 28 March 2017 at 16:39, Adrien Grand  wrote:

> You could check the value of pointNumBytes in the FieldInfo. Ints/Floats
> will use 4 bytes while Longs/Doubles will use 8 bytes. However there is no
> way to discern doubles from longs or floats from ints, applications need to
> maintain some schema information on top of Lucene.
>
> Le mar. 28 mars 2017 à 17:29, Tom Mortimer  a écrit :
>
>> Hi,
>>
>> Probably a daft question, but is there any way to get the type (int,
>> long, float, etc.) of a PointValues field in an index, if you don't already
>> know it?
>>
>> cheers,
>> Tom
>>
>> tel +44 8700 118334 : mobile +44 7876 741014 : skype tommortimer
>>
>


[jira] [Resolved] (LUCENE-7754) Findbugs: nested class should be static

2017-03-28 Thread Adrien Grand (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7754?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrien Grand resolved LUCENE-7754.
--
   Resolution: Fixed
Fix Version/s: 6.6
   master (7.0)

> Findbugs: nested class should be static
> ---
>
> Key: LUCENE-7754
> URL: https://issues.apache.org/jira/browse/LUCENE-7754
> Project: Lucene - Core
>  Issue Type: Sub-task
>Reporter: Daniel Jelinski
>Priority: Minor
> Fix For: master (7.0), 6.6
>
> Attachments: LUCENE-7754.patch
>
>
> http://findbugs.sourceforge.net/bugDescriptions.html#SIC_INNER_SHOULD_BE_STATIC



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-10343) Update Solr default/example and test configs to use SynonymGraphFilterFactory

2017-03-28 Thread Steve Rowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10343?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Rowe resolved SOLR-10343.
---
   Resolution: Fixed
Fix Version/s: 6.6
   master (7.0)

> Update Solr default/example and test configs to use SynonymGraphFilterFactory
> -
>
> Key: SOLR-10343
> URL: https://issues.apache.org/jira/browse/SOLR-10343
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Tomás Fernández Löbbe
>Assignee: Steve Rowe
> Fix For: master (7.0), 6.6
>
> Attachments: SOLR-10343.patch
>
>
> {{SynonymFilterFactory}} was deprecated in LUCENE-6664



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-Windows (64bit/jdk1.8.0_121) - Build # 6481 - Still Unstable!

2017-03-28 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/6481/
Java: 64bit/jdk1.8.0_121 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  org.apache.solr.cloud.SolrCLIZkUtilsTest.testCp

Error Message:
Should have found /cp7/conf on Zookeeper

Stack Trace:
java.lang.AssertionError: Should have found /cp7/conf on Zookeeper
at 
__randomizedtesting.SeedInfo.seed([C25A0B828C89B46A:29B6F6CC48200A08]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.cloud.SolrCLIZkUtilsTest$1.checkPathOnZk(SolrCLIZkUtilsTest.java:670)
at 
org.apache.solr.cloud.SolrCLIZkUtilsTest$1.preVisitDirectory(SolrCLIZkUtilsTest.java:686)
at 
org.apache.solr.cloud.SolrCLIZkUtilsTest$1.preVisitDirectory(SolrCLIZkUtilsTest.java:666)
at java.nio.file.Files.walkFileTree(Files.java:2677)
at java.nio.file.Files.walkFileTree(Files.java:2742)
at 
org.apache.solr.cloud.SolrCLIZkUtilsTest.verifyAllFilesAreZNodes(SolrCLIZkUtilsTest.java:666)
at 
org.apache.solr.cloud.SolrCLIZkUtilsTest.verifyZkLocalPathsMatch(SolrCLIZkUtilsTest.java:642)
at 
org.apache.solr.cloud.SolrCLIZkUtilsTest.testCp(SolrCLIZkUtilsTest.java:321)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 

[jira] [Commented] (SOLR-10343) Update Solr default/example and test configs to use SynonymGraphFilterFactory

2017-03-28 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10343?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15945422#comment-15945422
 ] 

ASF subversion and git services commented on SOLR-10343:


Commit 1a80e4d6942dd7af214c999e0e6540564efc02ac in lucene-solr's branch 
refs/heads/master from [~steve_rowe]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=1a80e4d ]

SOLR-10343: Update Solr default/example and test configs to use 
SynonymGraphFilterFactory


> Update Solr default/example and test configs to use SynonymGraphFilterFactory
> -
>
> Key: SOLR-10343
> URL: https://issues.apache.org/jira/browse/SOLR-10343
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Tomás Fernández Löbbe
>Assignee: Steve Rowe
> Attachments: SOLR-10343.patch
>
>
> {{SynonymFilterFactory}} was deprecated in LUCENE-6664



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10343) Update Solr default/example and test configs to use SynonymGraphFilterFactory

2017-03-28 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10343?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15945421#comment-15945421
 ] 

ASF subversion and git services commented on SOLR-10343:


Commit 9705e95988060fd80d1c8c995fef56ab4ea8 in lucene-solr's branch 
refs/heads/branch_6x from [~steve_rowe]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=9705e95 ]

SOLR-10343: Update Solr default/example and test configs to use 
SynonymGraphFilterFactory


> Update Solr default/example and test configs to use SynonymGraphFilterFactory
> -
>
> Key: SOLR-10343
> URL: https://issues.apache.org/jira/browse/SOLR-10343
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Tomás Fernández Löbbe
>Assignee: Steve Rowe
> Attachments: SOLR-10343.patch
>
>
> {{SynonymFilterFactory}} was deprecated in LUCENE-6664



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10338) Configure SecureRandom non blocking

2017-03-28 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10338?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15945420#comment-15945420
 ] 

Mark Miller commented on SOLR-10338:


bq.  so I am a bit unsure how to continue

If you already have support from someone for an idea, I'd just create a new 
JIRA issue and link it to the old one. The exception is when the old JIRA has 
not been released yet; in that case you would generally reopen it. If you have 
no one discussing it with you and you're unsure about what you want to do, 
perhaps start a dev list discussion, but I don't think that is needed here.

bq. say that the non-blocking source is preferred over the blocking source for 
all production usage

Is that true? Don't we want the real deal for production SSL?
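
As a side note, a minimal sketch (assuming an OpenJDK 8 SUN provider on Linux) 
of how to check which SecureRandom source the JVM picked and how to request the 
non-blocking variant explicitly:

{code:java}
import java.security.SecureRandom;

public class SecureRandomCheck {
  public static void main(String[] args) throws Exception {
    // The default algorithm depends on the provider and on securerandom.source;
    // on Linux it is typically NativePRNG (which reads /dev/random for generateSeed()).
    SecureRandom def = new SecureRandom();
    System.out.println("default: " + def.getAlgorithm());

    // The non-blocking variant only ever reads /dev/urandom.
    SecureRandom nonBlocking = SecureRandom.getInstance("NativePRNGNonBlocking");
    System.out.println("non-blocking: " + nonBlocking.getAlgorithm());
  }
}
{code}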

> Configure SecureRandom non blocking
> ---
>
> Key: SOLR-10338
> URL: https://issues.apache.org/jira/browse/SOLR-10338
> Project: Solr
>  Issue Type: Sub-task
>Reporter: Mihaly Toth
>Assignee: Mark Miller
> Fix For: 4.9, 6.0
>
> Attachments: SOLR-10338.patch, SOLR-10338.patch
>
>
> It would be best if SecureRandom could be made non blocking. In that case we 
> could get rid of random entropy exhaustion issue related to all usages of 
> SecureRandom.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10347) Remove index level boost support from "documents" section of the admin UI

2017-03-28 Thread Alexandre Rafalovitch (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10347?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15945412#comment-15945412
 ] 

Alexandre Rafalovitch commented on SOLR-10347:
--

Patches are always welcome. It is not assigned to anybody at the moment. Just 
make sure the patch is for the new (AngularJS) Admin UI and is against the 
master branch.

> Remove index level boost support from "documents" section of the admin UI
> -
>
> Key: SOLR-10347
> URL: https://issues.apache.org/jira/browse/SOLR-10347
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Admin UI
>Reporter: Tomás Fernández Löbbe
>
> Index-time boost is deprecated since LUCENE-6819



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10347) Remove index level boost support from "documents" section of the admin UI

2017-03-28 Thread Amrit Sarkar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10347?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15945403#comment-15945403
 ] 

Amrit Sarkar commented on SOLR-10347:
-

Anyone working on this? I can cook up a patch and post it soon.

> Remove index level boost support from "documents" section of the admin UI
> -
>
> Key: SOLR-10347
> URL: https://issues.apache.org/jira/browse/SOLR-10347
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Admin UI
>Reporter: Tomás Fernández Löbbe
>
> Index-time boost is deprecated since LUCENE-6819



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Points types

2017-03-28 Thread Adrien Grand
You could check the value of pointNumBytes in the FieldInfo. Ints/Floats
will use 4 bytes while Longs/Doubles will use 8 bytes. However there is no
way to discern doubles from longs or floats from ints, applications need to
maintain some schema information on top of Lucene.
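
For example, a small sketch (assuming a Lucene 6.x index on disk) that lists 
point fields and their encoded width:

import java.nio.file.Paths;
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.index.FieldInfo;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.MultiFields;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.FSDirectory;

public class PointTypeSniffer {
  public static void main(String[] args) throws Exception {
    try (Directory dir = FSDirectory.open(Paths.get(args[0]));
         IndexReader reader = DirectoryReader.open(dir)) {
      for (FieldInfo fi : MultiFields.getMergedFieldInfos(reader)) {
        if (fi.getPointDimensionCount() > 0) {
          // 4 bytes per dimension -> int or float, 8 bytes -> long or double;
          // which of the two it is cannot be recovered from the index alone.
          System.out.println(fi.name + ": " + fi.getPointDimensionCount()
              + " dim(s) x " + fi.getPointNumBytes() + " bytes per dim");
        }
      }
    }
  }
}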

Le mar. 28 mars 2017 à 17:29, Tom Mortimer  a écrit :

> Hi,
>
> Probably a daft question, but is there any way to get the type (int, long,
> float, etc.) of a PointValues field in an index, if you don't already know
> it?
>
> cheers,
> Tom
>
> tel +44 8700 118334 : mobile +44 7876 741014 : skype tommortimer
>


[jira] [Commented] (LUCENE-7751) Findbugs: boxing a primitive to compare

2017-03-28 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7751?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15945397#comment-15945397
 ] 

ASF subversion and git services commented on LUCENE-7751:
-

Commit 103a50153cabf66a20c6fef32e839fa2de8a6969 in lucene-solr's branch 
refs/heads/branch_6x from [~jpountz]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=103a501 ]

LUCENE-7751: Avoid boxing primitives only to call compareTo.


> Findbugs: boxing a primitive to compare
> ---
>
> Key: LUCENE-7751
> URL: https://issues.apache.org/jira/browse/LUCENE-7751
> Project: Lucene - Core
>  Issue Type: Sub-task
>Reporter: Daniel Jelinski
>Priority: Minor
> Attachments: LUCENE-7751.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7754) Findbugs: nested class should be static

2017-03-28 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7754?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15945396#comment-15945396
 ] 

ASF subversion and git services commented on LUCENE-7754:
-

Commit a6083982180979aec1f5e782378055ef78089ff9 in lucene-solr's branch 
refs/heads/branch_6x from [~jpountz]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=a608398 ]

LUCENE-7754: Inner classes should be static whenever possible.


> Findbugs: nested class should be static
> ---
>
> Key: LUCENE-7754
> URL: https://issues.apache.org/jira/browse/LUCENE-7754
> Project: Lucene - Core
>  Issue Type: Sub-task
>Reporter: Daniel Jelinski
>Priority: Minor
> Attachments: LUCENE-7754.patch
>
>
> http://findbugs.sourceforge.net/bugDescriptions.html#SIC_INNER_SHOULD_BE_STATIC



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7743) Findbugs: avoid new String(String)

2017-03-28 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7743?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15945398#comment-15945398
 ] 

ASF subversion and git services commented on LUCENE-7743:
-

Commit 03e50781463827a5d8188fccf0307f72dea4e450 in lucene-solr's branch 
refs/heads/branch_6x from [~jpountz]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=03e5078 ]

LUCENE-7743: Avoid calling new String(String).


> Findbugs: avoid new String(String)
> --
>
> Key: LUCENE-7743
> URL: https://issues.apache.org/jira/browse/LUCENE-7743
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Daniel Jelinski
>Priority: Minor
> Attachments: LUCENE-7743.patch
>
>
> http://findbugs.sourceforge.net/bugDescriptions.html#DM_STRING_CTOR
> Removing the extra constructor calls will avoid heap allocations while 
> behaving just the same as the original code.
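
For context, a minimal illustration of the pattern this commit removes (a 
generic example, not code taken from the patch):

{code:java}
public class StringCtorExample {
  public static void main(String[] args) {
    String original = "lucene";
    String copy = new String(original); // DM_STRING_CTOR: allocates a redundant String
    String same = original;             // identical contents, no extra heap allocation
    System.out.println(copy.equals(same)); // true -- behavior is unchanged either way
  }
}
{code}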



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-10263) Different SpellcheckComponents should have their own suggestMode

2017-03-28 Thread Amrit Sarkar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10263?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15945380#comment-15945380
 ] 

Amrit Sarkar edited comment on SOLR-10263 at 3/28/17 3:28 PM:
--

I am not sure whether it is a popular use case or not at this moment, and maybe 
changing the current implementation to accommodate this will make things 
complicated. 

bq. Any suggestions on how you would like to see the configuration in 
solrconfig.xml?
I can see the patch now. Overriding parameters makes sense.


was (Author: sarkaramr...@gmail.com):
I am not sure whether it is a popular use case or not at this moment, and maybe 
changing the current implementation to accommodate this will make things 
complicated. Any suggestions on how you would like to see the configuration in 
solrconfig.xml?

> Different SpellcheckComponents should have their own suggestMode
> 
>
> Key: SOLR-10263
> URL: https://issues.apache.org/jira/browse/SOLR-10263
> Project: Solr
>  Issue Type: Wish
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: spellchecker
>Reporter: Abhishek Kumar Singh
>Priority: Minor
>
> As of now, common spellcheck options are applied to all the 
> SpellCheckComponents.
> This can create problem in the following case:-
>  It may be the case that we want *DirectSolrSpellChecker* to ALWAYS_SUGGEST 
> spellcheck suggestions. 
> But we may want *WordBreakSpellChecker* to suggest only if the token is not 
> in the index (SUGGEST_WHEN_NOT_IN_INDEX) . 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Points types

2017-03-28 Thread Tom Mortimer
Hi,

Probably a daft question, but is there any way to get the type (int, long,
float, etc.) of a PointValues field in an index, if you don't already know
it?

cheers,
Tom

tel +44 8700 118334 : mobile +44 7876 741014 : skype tommortimer


[jira] [Commented] (SOLR-10263) Different SpellcheckComponents should have their own suggestMode

2017-03-28 Thread Amrit Sarkar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10263?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15945380#comment-15945380
 ] 

Amrit Sarkar commented on SOLR-10263:
-

I am not sure whether it is a popular use case or not at this moment, and maybe 
changing the current implementation to accommodate this will make things 
complicated. Any suggestions on how you would like to see the configuration in 
solrconfig.xml?

> Different SpellcheckComponents should have their own suggestMode
> 
>
> Key: SOLR-10263
> URL: https://issues.apache.org/jira/browse/SOLR-10263
> Project: Solr
>  Issue Type: Wish
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: spellchecker
>Reporter: Abhishek Kumar Singh
>Priority: Minor
>
> As of now, common spellcheck options are applied to all the 
> SpellCheckComponents.
> This can create problem in the following case:-
>  It may be the case that we want *DirectSolrSpellChecker* to ALWAYS_SUGGEST 
> spellcheck suggestions. 
> But we may want *WordBreakSpellChecker* to suggest only if the token is not 
> in the index (SUGGEST_WHEN_NOT_IN_INDEX) . 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10343) Update Solr default/example and test configs to use SynonymGraphFilterFactory

2017-03-28 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10343?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15945368#comment-15945368
 ] 

Steve Rowe commented on SOLR-10343:
---

bq. Not included here: switching ManagedSynonymFilterFactory to use 
SynonymGraphFilterFactory as its delegate. I'll make a separate issue.

Done: SOLR-10379

> Update Solr default/example and test configs to use SynonymGraphFilterFactory
> -
>
> Key: SOLR-10343
> URL: https://issues.apache.org/jira/browse/SOLR-10343
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Tomás Fernández Löbbe
>Assignee: Steve Rowe
> Attachments: SOLR-10343.patch
>
>
> {{SynonymFilterFactory}} was deprecated in LUCENE-6664



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-10379) ManagedSynonymFilterFactory should switch to using SynonymGraphFilterFactory as its delegate

2017-03-28 Thread Steve Rowe (JIRA)
Steve Rowe created SOLR-10379:
-

 Summary: ManagedSynonymFilterFactory should switch to using 
SynonymGraphFilterFactory as its delegate
 Key: SOLR-10379
 URL: https://issues.apache.org/jira/browse/SOLR-10379
 Project: Solr
  Issue Type: Task
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Steve Rowe


SynonymFilterFactory was deprecated in LUCENE-6664




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10263) Different SpellcheckComponents should have their own suggestMode

2017-03-28 Thread Abhishek Kumar Singh (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10263?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15945363#comment-15945363
 ] 

Abhishek Kumar Singh commented on SOLR-10263:
-

Raised this PR for *WordBreakSolrSpellChecker*.  
https://github.com/apache/lucene-solr/pull/176/files

> Different SpellcheckComponents should have their own suggestMode
> 
>
> Key: SOLR-10263
> URL: https://issues.apache.org/jira/browse/SOLR-10263
> Project: Solr
>  Issue Type: Wish
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: spellchecker
>Reporter: Abhishek Kumar Singh
>Priority: Minor
>
> As of now, common spellcheck options are applied to all the 
> SpellCheckComponents.
> This can create problem in the following case:-
>  It may be the case that we want *DirectSolrSpellChecker* to ALWAYS_SUGGEST 
> spellcheck suggestions. 
> But we may want *WordBreakSpellChecker* to suggest only if the token is not 
> in the index (SUGGEST_WHEN_NOT_IN_INDEX) . 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Reopened] (SOLR-10352) Low entropy warning in bin/solr script

2017-03-28 Thread Ishan Chattopadhyaya (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10352?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ishan Chattopadhyaya reopened SOLR-10352:
-

Apologies, I missed Mihaly's comment on this issue before closing it. Reopening 
this so that discussion can continue.

> Low entropy warning in bin/solr script
> --
>
> Key: SOLR-10352
> URL: https://issues.apache.org/jira/browse/SOLR-10352
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Ishan Chattopadhyaya
> Fix For: master (7.0), branch_6x
>
> Attachments: SOLR-10352.patch
>
>
> We should add a warning in the startup script for Linux, if the output of the 
> following is below a certain threshold (maybe 300?). The warning could 
> indicate that features like UUIDField, SSL etc. might not work properly (or 
> be slow). As a hint, we could then suggest the user to configure a non 
> blocking SecureRandom (SOLR-10338) or install rng-tools, haveged etc.
> {quote}
> cat /proc/sys/kernel/random/entropy_avail
> {quote}
> Original discussion:
> https://issues.apache.org/jira/browse/SOLR-10338?focusedCommentId=15938904=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15938904



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-10343) Update Solr default/example and test configs to use SynonymGraphFilterFactory

2017-03-28 Thread Steve Rowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10343?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Rowe updated SOLR-10343:
--
Attachment: SOLR-10343.patch

Patch converting schema uses of SynonymFilterFactory to 
SynonymGraphFilterFactory. In index analyzers, a FlattenGraphFilterFactory is 
added as the last filter; in fieldtypes with a dual-purpose analyzer, the 
analyzer is split into separate index and query analyzers, and the 
FlattenGraphFilterFactory is added to the index analyzer.
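
For illustration, a hedged sketch of what a converted dual-purpose fieldtype 
ends up looking like (the fieldtype name and surrounding filters are made up, 
not taken from the patch):

{code:xml}
<fieldType name="text_syn" class="solr.TextField" positionIncrementGap="100">
  <!-- index analyzer: SynonymGraphFilterFactory plus FlattenGraphFilterFactory last -->
  <analyzer type="index">
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
    <filter class="solr.SynonymGraphFilterFactory" synonyms="synonyms.txt"
            ignoreCase="true" expand="true"/>
    <filter class="solr.FlattenGraphFilterFactory"/>
  </analyzer>
  <!-- query analyzer: the graph filter alone, no flattening needed at query time -->
  <analyzer type="query">
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
    <filter class="solr.SynonymGraphFilterFactory" synonyms="synonyms.txt"
            ignoreCase="true" expand="true"/>
  </analyzer>
</fieldType>
{code}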

Not included here: switching ManagedSynonymFilterFactory to use 
SynonymGraphFilterFactory as its delegate.  I'll make a separate issue.

Running all Solr tests and precommit now.

> Update Solr default/example and test configs to use SynonymGraphFilterFactory
> -
>
> Key: SOLR-10343
> URL: https://issues.apache.org/jira/browse/SOLR-10343
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Tomás Fernández Löbbe
>Assignee: Steve Rowe
> Attachments: SOLR-10343.patch
>
>
> {{SynonymFilterFactory}} was deprecated in LUCENE-6664



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] lucene-solr pull request #176: Override spellcheck's SuggestMode by WordBrea...

2017-03-28 Thread abhidemon
GitHub user abhidemon opened a pull request:

https://github.com/apache/lucene-solr/pull/176

Override spellcheck's SuggestMode by WordBreakSolrSpellChecker

Now `suggestMode` can be specified for WordBreakSolrSpellChecker; if this 
value is not null, it overrides the `suggestMode` of the SpellCheckComponent.

This improves performance and relevancy.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/abhidemon/lucene-solr SOLR-10263

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/lucene-solr/pull/176.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #176


commit 484e3ee857a7e0821aef13ff08d80794211e2daa
Author: abhidemon 
Date:   2017-03-28T15:12:17Z

feat(WordBreakSpellChecker): Override spellcheck's SuggestMode by 
WordBreakSolrSpellChecker

Now `suggestMode` can be specified for WordBreakSolrSpellChecker, if this 
value is not null, it will

override the `suggestMode` of SpellCheckComponent.

Fixes Performance and Relevancy




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7755) Join queries should not reference IndexReaders.

2017-03-28 Thread Martijn van Groningen (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7755?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15945343#comment-15945343
 ] 

Martijn van Groningen commented on LUCENE-7755:
---

+1 good catch!

> Join queries should not reference IndexReaders.
> ---
>
> Key: LUCENE-7755
> URL: https://issues.apache.org/jira/browse/LUCENE-7755
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Adrien Grand
> Attachments: LUCENE-7755.patch
>
>
> This is similar to LUCENE-7657 and can cause memory leaks when those queries 
> are cached.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-7756) Only record the major that was used to create the index rather than the full version

2017-03-28 Thread Adrien Grand (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7756?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrien Grand updated LUCENE-7756:
-
Attachment: LUCENE-7756.patch

Here is a patch.

> Only record the major that was used to create the index rather than the full 
> version
> 
>
> Key: LUCENE-7756
> URL: https://issues.apache.org/jira/browse/LUCENE-7756
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Priority: Minor
> Attachments: LUCENE-7756.patch
>
>
> LUCENE-7703 added information about the Lucene version that was used to 
> create the index to the segment infos. But since there is a single creation 
> version, it means we need to reject calls to addIndexes that can mix indices 
> that have different creation versions, which might be seen as an important 
> regression by some users. So I have been thinking about only recording the 
> major version that was used to create the index, which is still very valuable 
> information and would allow us to accept calls to addIndexes when all merged 
> indices have the same major version. This looks like a better trade-off to me.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-7756) Only record the major that was used to create the index rather than the full version

2017-03-28 Thread Adrien Grand (JIRA)
Adrien Grand created LUCENE-7756:


 Summary: Only record the major that was used to create the index 
rather than the full version
 Key: LUCENE-7756
 URL: https://issues.apache.org/jira/browse/LUCENE-7756
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Adrien Grand
Priority: Minor


LUCENE-7703 added information about the Lucene version that was used to create 
the index to the segment infos. But since there is a single creation version, 
it means we need to reject calls to addIndexes that can mix indices that have 
different creation versions, which might be seen as an important regression by 
some users. So I have been thinking about only recording the major version that 
was used to create the index, which is still very valuable information and 
would allow us to accept calls to addIndexes when all merged indices have the 
same major version. This looks like a better trade-off to me.
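
For context, a minimal addIndexes sketch (paths and analyzer are made up). With 
LUCENE-7703 as committed, such a call is rejected when the source indices were 
created by a different Lucene version than the target; this proposal would relax 
the check to the major version only:

{code:java}
import java.nio.file.Paths;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.FSDirectory;

public class AddIndexesSketch {
  public static void main(String[] args) throws Exception {
    try (Directory target = FSDirectory.open(Paths.get("/tmp/target-index"));
         Directory source = FSDirectory.open(Paths.get("/tmp/source-index"));
         IndexWriter writer = new IndexWriter(target,
             new IndexWriterConfig(new StandardAnalyzer()))) {
      // Merges the source index into the target; whether this is allowed would
      // depend only on the major creation version under this proposal.
      writer.addIndexes(source);
      writer.commit();
    }
  }
}
{code}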



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org


