[JENKINS] Lucene-Solr-8.x-Windows (64bit/jdk-12.0.1) - Build # 371 - Unstable!

2019-07-16 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-8.x-Windows/371/
Java: 64bit/jdk-12.0.1 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

7 tests failed.
FAILED:  org.apache.solr.core.TestJmxIntegration.testJmxOnCoreReload

Error Message:
Number of registered MBeans is not the same as the number of core metrics: 443 != 444

Stack Trace:
java.lang.AssertionError: Number of registered MBeans is not the same as the number of core metrics: 443 != 444
	at __randomizedtesting.SeedInfo.seed([3C1131AC765DA4C7:3EA8EAC0051E780A]:0)
	at org.junit.Assert.fail(Assert.java:88)
	at org.apache.solr.core.TestJmxIntegration.testJmxOnCoreReload(TestJmxIntegration.java:260)
	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.base/java.lang.reflect.Method.invoke(Method.java:567)
	at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988)
	at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
	at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
	at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
	at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
	at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
	at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
	at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
	at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
	at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
	at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
	at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
	at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
	at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
	at java.base/java.lang.Thread.run(Thread.java:835)


FAILED:  org.apache.solr.update.processor.DimensionalRoutedAliasUpdateProcessorTest.testCatTime

Error Message:
Error from server at http://127.0.0.1:65521/solr/testCatTime__CRA__calico__TRA__2019-07-01: Expected mime type application/octet-stream but got 

[JENKINS] Lucene-Solr-Tests-8.2 - Build # 8 - Unstable

2019-07-16 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-8.2/8/

107 tests failed.
FAILED:  org.apache.lucene.analysis.ja.TestExtendedMode.testRandomStrings

Error Message:
Could not initialize class org.apache.lucene.analysis.ja.dict.TokenInfoDictionary$SingletonHolder

Stack Trace:
java.lang.NoClassDefFoundError: Could not initialize class org.apache.lucene.analysis.ja.dict.TokenInfoDictionary$SingletonHolder
	at __randomizedtesting.SeedInfo.seed([53EA8737B64CAB7E:DB6387891548FC4B]:0)
	at org.apache.lucene.analysis.ja.dict.TokenInfoDictionary.getInstance(TokenInfoDictionary.java:62)
	at org.apache.lucene.analysis.ja.JapaneseTokenizer.<init>(JapaneseTokenizer.java:215)
	at org.apache.lucene.analysis.ja.TestExtendedMode$1.createComponents(TestExtendedMode.java:41)
	at org.apache.lucene.analysis.Analyzer.tokenStream(Analyzer.java:199)
	at org.apache.lucene.analysis.BaseTokenStreamTestCase.checkResetException(BaseTokenStreamTestCase.java:427)
	at org.apache.lucene.analysis.BaseTokenStreamTestCase.checkRandomData(BaseTokenStreamTestCase.java:546)
	at org.apache.lucene.analysis.BaseTokenStreamTestCase.checkRandomData(BaseTokenStreamTestCase.java:469)
	at org.apache.lucene.analysis.ja.TestExtendedMode.testRandomStrings(TestExtendedMode.java:78)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988)
	at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
	at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
	at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
	at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
	at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
	at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
	at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
	at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
	at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
	at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
	at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
	at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 

[jira] [Commented] (SOLR-13565) Node level runtime libs loaded from remote urls

2019-07-16 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16886649#comment-16886649
 ] 

ASF subversion and git services commented on SOLR-13565:


Commit cc21b53f75f5d8588e814845ae8899b060acd1fd in lucene-solr's branch 
refs/heads/jira/SOLR-13565 from Noble Paul
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=cc21b53 ]

SOLR-13565: removed accidental change


> Node level runtime libs loaded from remote urls
> ---
>
> Key: SOLR-13565
> URL: https://issues.apache.org/jira/browse/SOLR-13565
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Noble Paul
>Assignee: Noble Paul
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Custom components can be loaded at the CoreContainer level.
> How do you configure this?
> {code:json}
> curl -X POST -H 'Content-type:application/json' --data-binary '
> {
>   "add-runtimelib": {
>   "name": "lib-name",
>   "url": "http://host:port/url/of/jar",
>   "sha512":""
>   }
> }' http://localhost:8983/api/cluster
> {code}
> How do you update your jars?
> {code:json}
> curl -X POST -H 'Content-type:application/json' --data-binary '
> {
>   "update-runtimelib": {
>   "name": "lib-name",
>   "url": "http://host:port/url/of/jar",
>   "sha512":""
>   }
> }' http://localhost:8983/api/cluster
> {code}
> This only reloads components used at the CoreContainer level, and it does not require restarting the Solr node.
> The configuration lives in the file {{/clusterprops.json}} in ZK.
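
The `sha512` value in the requests above is left blank as a placeholder. As an aside, a lowercase hex SHA-512 digest for a jar file can be computed with a few lines of Java (a hedged sketch; the `JarSha512` class name and command-line usage below are invented for illustration and are not part of Solr):

```java
import java.nio.file.Files;
import java.nio.file.Paths;
import java.security.MessageDigest;

public class JarSha512 {
    // Returns the lowercase hex SHA-512 digest of the given bytes,
    // the format expected by the "sha512" field above.
    public static String sha512Hex(byte[] data) throws Exception {
        byte[] digest = MessageDigest.getInstance("SHA-512").digest(data);
        StringBuilder sb = new StringBuilder(digest.length * 2);
        for (byte b : digest) {
            sb.append(String.format("%02x", b));
        }
        return sb.toString();
    }

    public static void main(String[] args) throws Exception {
        // Usage: java JarSha512 /path/to/your.jar  (the path is an example)
        System.out.println(sha512Hex(Files.readAllBytes(Paths.get(args[0]))));
    }
}
```

Equivalently, `openssl dgst -sha512 your.jar` or `shasum -a 512 your.jar` produce the same digest from the shell.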



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-8.2-Linux (32bit/jdk1.8.0_201) - Build # 432 - Still Unstable!

2019-07-16 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-8.2-Linux/432/
Java: 32bit/jdk1.8.0_201 -server -XX:+UseParallelGC

312 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.spelling.suggest.TestAnalyzedSuggestions

Error Message:
Could not initialize class org.apache.lucene.analysis.ja.dict.TokenInfoDictionary$SingletonHolder

Stack Trace:
java.lang.NoClassDefFoundError: Could not initialize class org.apache.lucene.analysis.ja.dict.TokenInfoDictionary$SingletonHolder
	at __randomizedtesting.SeedInfo.seed([A83AB7782DA08D0F]:0)
	at org.apache.lucene.analysis.ja.dict.TokenInfoDictionary.getInstance(TokenInfoDictionary.java:62)
	at org.apache.lucene.analysis.ja.JapaneseTokenizer.<init>(JapaneseTokenizer.java:215)
	at org.apache.lucene.analysis.ja.JapaneseTokenizerFactory.create(JapaneseTokenizerFactory.java:150)
	at org.apache.lucene.analysis.ja.JapaneseTokenizerFactory.create(JapaneseTokenizerFactory.java:82)
	at org.apache.solr.analysis.TokenizerChain.createComponents(TokenizerChain.java:116)
	at org.apache.lucene.analysis.Analyzer.tokenStream(Analyzer.java:199)
	at org.apache.lucene.search.suggest.analyzing.AnalyzingSuggester.toAutomaton(AnalyzingSuggester.java:846)
	at org.apache.lucene.search.suggest.analyzing.AnalyzingSuggester.build(AnalyzingSuggester.java:430)
	at org.apache.lucene.search.suggest.Lookup.build(Lookup.java:190)
	at org.apache.solr.spelling.suggest.Suggester.build(Suggester.java:161)
	at org.apache.solr.handler.component.SpellCheckComponent.prepare(SpellCheckComponent.java:128)
	at org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:279)
	at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:199)
	at org.apache.solr.core.SolrCore.execute(SolrCore.java:2578)
	at org.apache.solr.util.TestHarness.query(TestHarness.java:338)
	at org.apache.solr.util.TestHarness.query(TestHarness.java:320)
	at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:921)
	at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:907)
	at org.apache.solr.spelling.suggest.TestAnalyzedSuggestions.beforeClass(TestAnalyzedSuggestions.java:29)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:878)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
	at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
	at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
	at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
	at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
	at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
	at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
	at java.lang.Thread.run(Thread.java:748)


FAILED:  junit.framework.TestSuite.org.apache.solr.spelling.suggest.TestAnalyzedSuggestions

Error Message:
Could not initialize class org.apache.lucene.analysis.ja.dict.TokenInfoDictionary$SingletonHolder

Stack Trace:

[jira] [Commented] (SOLR-13565) Node level runtime libs loaded from remote urls

2019-07-16 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16886611#comment-16886611
 ] 

ASF subversion and git services commented on SOLR-13565:


Commit ab1bfab8a380b5cc64d3b5719a97643959449284 in lucene-solr's branch 
refs/heads/jira/SOLR-13565 from Noble Paul
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=ab1bfab ]

SOLR-13565: removed accidental change


> Node level runtime libs loaded from remote urls
> ---
>
> Key: SOLR-13565
> URL: https://issues.apache.org/jira/browse/SOLR-13565
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Noble Paul
>Assignee: Noble Paul
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Custom components can be loaded at the CoreContainer level.
> How do you configure this?
> {code:json}
> curl -X POST -H 'Content-type:application/json' --data-binary '
> {
>   "add-runtimelib": {
>   "name": "lib-name",
>   "url": "http://host:port/url/of/jar",
>   "sha512":""
>   }
> }' http://localhost:8983/api/cluster
> {code}
> How do you update your jars?
> {code:json}
> curl -X POST -H 'Content-type:application/json' --data-binary '
> {
>   "update-runtimelib": {
>   "name": "lib-name",
>   "url": "http://host:port/url/of/jar",
>   "sha512":""
>   }
> }' http://localhost:8983/api/cluster
> {code}
> This only reloads components used at the CoreContainer level, and it does not require restarting the Solr node.
> The configuration lives in the file {{/clusterprops.json}} in ZK.






[jira] [Commented] (SOLR-13565) Node level runtime libs loaded from remote urls

2019-07-16 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16886612#comment-16886612
 ] 

ASF subversion and git services commented on SOLR-13565:


Commit fe44af69b6ca4d15374315696de912f89ae7217e in lucene-solr's branch 
refs/heads/jira/SOLR-13565 from Noble Paul
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=fe44af6 ]

SOLR-13565: removed accidental change


> Node level runtime libs loaded from remote urls
> ---
>
> Key: SOLR-13565
> URL: https://issues.apache.org/jira/browse/SOLR-13565
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Noble Paul
>Assignee: Noble Paul
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Custom components can be loaded at the CoreContainer level.
> How do you configure this?
> {code:json}
> curl -X POST -H 'Content-type:application/json' --data-binary '
> {
>   "add-runtimelib": {
>   "name": "lib-name",
>   "url": "http://host:port/url/of/jar",
>   "sha512":""
>   }
> }' http://localhost:8983/api/cluster
> {code}
> How do you update your jars?
> {code:json}
> curl -X POST -H 'Content-type:application/json' --data-binary '
> {
>   "update-runtimelib": {
>   "name": "lib-name",
>   "url": "http://host:port/url/of/jar",
>   "sha512":""
>   }
> }' http://localhost:8983/api/cluster
> {code}
> This only reloads components used at the CoreContainer level, and it does not require restarting the Solr node.
> The configuration lives in the file {{/clusterprops.json}} in ZK.






[jira] [Commented] (SOLR-13105) A visual guide to Solr Math Expressions and Streaming Expressions

2019-07-16 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13105?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16886610#comment-16886610
 ] 

ASF subversion and git services commented on SOLR-13105:


Commit 2c28dbb42a26e1f373a06ff4704a5c3e10b7c82d in lucene-solr's branch 
refs/heads/SOLR-13105-visual from Joel Bernstein
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=2c28dbb ]

SOLR-13105: Add transform copy


> A visual guide to Solr Math Expressions and Streaming Expressions
> -
>
> Key: SOLR-13105
> URL: https://issues.apache.org/jira/browse/SOLR-13105
> Project: Solr
>  Issue Type: New Feature
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>Priority: Major
> Attachments: Screen Shot 2019-01-14 at 10.56.32 AM.png, Screen Shot 
> 2019-02-21 at 2.14.43 PM.png, Screen Shot 2019-03-03 at 2.28.35 PM.png, 
> Screen Shot 2019-03-04 at 7.47.57 PM.png, Screen Shot 2019-03-13 at 10.47.47 
> AM.png, Screen Shot 2019-03-30 at 6.17.04 PM.png
>
>
> Visualization is now a fundamental element of Solr Streaming Expressions and 
> Math Expressions. This ticket will create a visual guide to Solr Math 
> Expressions and Solr Streaming Expressions that includes *Apache Zeppelin* 
> visualization examples.
> It will also cover using the JDBC expression to *analyze* and *visualize* 
> results from any JDBC compliant data source.
> Intro from the guide:
> {code:java}
> Streaming Expressions exposes the capabilities of Solr Cloud as composable 
> functions. These functions provide a system for searching, transforming, 
> analyzing and visualizing data stored in Solr Cloud collections.
> At a high level there are four main capabilities that will be explored in the 
> documentation:
> * Searching, sampling and aggregating results from Solr.
> * Transforming result sets after they are retrieved from Solr.
> * Analyzing and modeling result sets using probability and statistics and 
> machine learning libraries.
> * Visualizing result sets, aggregations and statistical models of the data.
> {code}
>  
> A few sample visualizations are attached to the ticket.






[jira] [Commented] (SOLR-13565) Node level runtime libs loaded from remote urls

2019-07-16 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16886608#comment-16886608
 ] 

ASF subversion and git services commented on SOLR-13565:


Commit 35f10f9220ae2dcad44f013bea2a999b9663e399 in lucene-solr's branch 
refs/heads/jira/SOLR-13565 from Noble Paul
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=35f10f9 ]

SOLR-13565: syncing with master


> Node level runtime libs loaded from remote urls
> ---
>
> Key: SOLR-13565
> URL: https://issues.apache.org/jira/browse/SOLR-13565
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Noble Paul
>Assignee: Noble Paul
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Custom components can be loaded at the CoreContainer level.
> How do you configure this?
> {code:json}
> curl -X POST -H 'Content-type:application/json' --data-binary '
> {
>   "add-runtimelib": {
>   "name": "lib-name",
>   "url": "http://host:port/url/of/jar",
>   "sha512":""
>   }
> }' http://localhost:8983/api/cluster
> {code}
> How do you update your jars?
> {code:json}
> curl -X POST -H 'Content-type:application/json' --data-binary '
> {
>   "update-runtimelib": {
>   "name": "lib-name",
>   "url": "http://host:port/url/of/jar",
>   "sha512":""
>   }
> }' http://localhost:8983/api/cluster
> {code}
> This only reloads components used at the CoreContainer level, and it does not require restarting the Solr node.
> The configuration lives in the file {{/clusterprops.json}} in ZK.






[jira] [Commented] (SOLR-13565) Node level runtime libs loaded from remote urls

2019-07-16 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16886607#comment-16886607
 ] 

ASF subversion and git services commented on SOLR-13565:


Commit a4efc376435547b8b229d27163dd8cce084ec027 in lucene-solr's branch 
refs/heads/jira/SOLR-13565 from Noble Paul
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=a4efc37 ]

SOLR-13565: syncing with master


> Node level runtime libs loaded from remote urls
> ---
>
> Key: SOLR-13565
> URL: https://issues.apache.org/jira/browse/SOLR-13565
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Noble Paul
>Assignee: Noble Paul
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Custom components can be loaded at the CoreContainer level.
> How do you configure this?
> {code:json}
> curl -X POST -H 'Content-type:application/json' --data-binary '
> {
>   "add-runtimelib": {
>   "name": "lib-name",
>   "url": "http://host:port/url/of/jar",
>   "sha512":""
>   }
> }' http://localhost:8983/api/cluster
> {code}
> How do you update your jars?
> {code:json}
> curl -X POST -H 'Content-type:application/json' --data-binary '
> {
>   "update-runtimelib": {
>   "name": "lib-name",
>   "url": "http://host:port/url/of/jar",
>   "sha512":""
>   }
> }' http://localhost:8983/api/cluster
> {code}
> This only reloads components used at the CoreContainer level, and it does not require restarting the Solr node.
> The configuration lives in the file {{/clusterprops.json}} in ZK.






[JENKINS-EA] Lucene-Solr-master-Linux (64bit/jdk-13-ea+shipilev-fastdebug) - Build # 24408 - Failure!

2019-07-16 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/24408/
Java: 64bit/jdk-13-ea+shipilev-fastdebug -XX:-UseCompressedOops -XX:+UseSerialGC

All tests passed

Build Log:
[...truncated 8904 lines...]
   [junit4] JVM J0: stdout was not empty, see: /home/jenkins/workspace/Lucene-Solr-master-Linux/lucene/build/misc/test/temp/junit4-J0-20190717_012841_8415935420663927076435.sysout
   [junit4] >>> JVM J0 emitted unexpected output (verbatim) 
   [junit4] # To suppress the following error report, specify this argument
   [junit4] # after -XX: or in .hotspotrc:  SuppressErrorAt=/loopPredicate.cpp:315
   [junit4] #
   [junit4] # A fatal error has been detected by the Java Runtime Environment:
   [junit4] #
   [junit4] #  Internal Error (/home/buildbot/worker/jdk13-linux/build/src/hotspot/share/opto/loopPredicate.cpp:315), pid=28561, tid=28691
   [junit4] #  assert(dom_r->unique_ctrl_out()->is_Call()) failed: unc expected
   [junit4] #
   [junit4] # JRE version: OpenJDK Runtime Environment (13.0) (fastdebug build 13-testing+0-builds.shipilev.net-openjdk-jdk13-b9-20190621-jdk-1326)
   [junit4] # Java VM: OpenJDK 64-Bit Server VM (fastdebug 13-testing+0-builds.shipilev.net-openjdk-jdk13-b9-20190621-jdk-1326, mixed mode, tiered, serial gc, linux-amd64)
   [junit4] # Problematic frame:
   [junit4] # V  [libjvm.so+0x119f2cd]  PhaseIdealLoop::clone_loop_predicates_fix_mem(ProjNode*, ProjNode*, PhaseIdealLoop*, PhaseIterGVN*)+0x12d
   [junit4] #
   [junit4] # No core dump will be written. Core dumps have been disabled. To enable core dumping, try "ulimit -c unlimited" before starting Java again
   [junit4] #
   [junit4] # An error report file with more information is saved as:
   [junit4] # /home/jenkins/workspace/Lucene-Solr-master-Linux/lucene/build/misc/test/J0/hs_err_pid28561.log
   [junit4] #
   [junit4] # Compiler replay data is saved as:
   [junit4] # /home/jenkins/workspace/Lucene-Solr-master-Linux/lucene/build/misc/test/J0/replay_pid28561.log
   [junit4] #
   [junit4] # If you would like to submit a bug report, please visit:
   [junit4] #   http://bugreport.java.com/bugreport/crash.jsp
   [junit4] #
   [junit4] Current thread is 28691
   [junit4] Dumping core ...
   [junit4] <<< JVM J0: EOF 

   [junit4] JVM J0: stderr was not empty, see: /home/jenkins/workspace/Lucene-Solr-master-Linux/lucene/build/misc/test/temp/junit4-J0-20190717_012841_8413946049831988353504.syserr
   [junit4] >>> JVM J0 emitted unexpected output (verbatim) 
   [junit4] OpenJDK 64-Bit Server VM warning: increase O_BUFLEN in ostream.hpp -- output truncated
   [junit4] <<< JVM J0: EOF 

[...truncated 30 lines...]
   [junit4] ERROR: JVM J0 ended with an exception, command line: /home/jenkins/tools/java/64bit/jdk-13-ea+shipilev-fastdebug/bin/java -XX:-UseCompressedOops -XX:+UseSerialGC -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/home/jenkins/workspace/Lucene-Solr-master-Linux/heapdumps -ea -esa --illegal-access=deny -Dtests.prefix=tests -Dtests.seed=EB46DB40315C4B12 -Xmx512M -Dtests.iters= -Dtests.verbose=false -Dtests.infostream=false -Dtests.codec=random -Dtests.postingsformat=random -Dtests.docvaluesformat=random -Dtests.locale=random -Dtests.timezone=random -Dtests.directory=random -Dtests.linedocsfile=europarl.lines.txt.gz -Dtests.luceneMatchVersion=9.0.0 -Dtests.cleanthreads=perMethod -Djava.util.logging.config.file=/home/jenkins/workspace/Lucene-Solr-master-Linux/lucene/tools/junit4/logging.properties -Dtests.nightly=false -Dtests.weekly=false -Dtests.monster=false -Dtests.slow=true -Dtests.asserts=true -Dtests.multiplier=3 -DtempDir=./temp -Djava.io.tmpdir=./temp -Dcommon.dir=/home/jenkins/workspace/Lucene-Solr-master-Linux/lucene -Dclover.db.dir=/home/jenkins/workspace/Lucene-Solr-master-Linux/lucene/build/clover/db -Djava.security.policy=/home/jenkins/workspace/Lucene-Solr-master-Linux/lucene/tools/junit4/tests.policy -Dtests.LUCENE_VERSION=9.0.0 -Djetty.testMode=1 -Djetty.insecurerandom=1 -Dsolr.directoryFactory=org.apache.solr.core.MockDirectoryFactory -Djava.awt.headless=true -Djdk.map.althashing.threshold=0 -Dtests.src.home=/home/jenkins/workspace/Lucene-Solr-master-Linux -Djava.security.egd=file:/dev/./urandom -Djunit4.childvm.cwd=/home/jenkins/workspace/Lucene-Solr-master-Linux/lucene/build/misc/test/J0 -Djunit4.tempDir=/home/jenkins/workspace/Lucene-Solr-master-Linux/lucene/build/misc/test/temp -Djunit4.childvm.id=0 -Djunit4.childvm.count=3 -Dfile.encoding=ISO-8859-1 -Djava.security.manager=org.apache.lucene.util.TestSecurityManager -Dtests.filterstacks=true -Dtests.leaveTemporary=false -Dtests.badapples=false -classpath 

[jira] [Comment Edited] (LUCENE-8920) Reduce size of FSTs due to use of direct-addressing encoding

2019-07-16 Thread Mike Sokolov (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16886603#comment-16886603
 ] 

Mike Sokolov edited comment on LUCENE-8920 at 7/17/19 1:27 AM:
---

Yes, that makes sense. Because we reverted the "current version" in FST.java, 
we can no longer read FSTs created with the newer version, so we need to revert 
the dictionary file.  I'll do that and run a full suite of tests just to make 
sure something else isn't still broken. Thanks for pointing this out, 
[~hossman] and finding the fix [~tomoko], and sorry for not being more careful 
with the "fix" the first time!


was (Author: sokolov):
Yes, that makes sense. Because we reverted the "current version" in FST.java, 
we can no longer read FSTs created with the newer version, so we need to revert 
the dictionary file.  I'll do that and run a full suite of tests just to make 
sure something else isn't still broken

> Reduce size of FSTs due to use of direct-addressing encoding 
> -
>
> Key: LUCENE-8920
> URL: https://issues.apache.org/jira/browse/LUCENE-8920
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Mike Sokolov
>Priority: Major
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Some data can lead to worst-case ~4x RAM usage due to this optimization. 
> Several ideas were suggested to combat this on the mailing list:
> bq. I think we can improve the situation here by tracking, per-FST instance, 
> the size increase we're seeing while building (or perhaps do a preliminary 
> pass before building) in order to decide whether to apply the encoding. 
> bq. we could also make the encoding a bit more efficient. For instance I 
> noticed that arc metadata is pretty large in some cases (in the 10-20 bytes) 
> which makes gaps very costly. Associating each label with a dense id and 
> having an intermediate lookup, ie. lookup label -> id and then id->arc offset 
> instead of doing label->arc directly could save a lot of space in some cases? 
> Also it seems that we are repeating the label in the arc metadata when 
> array-with-gaps is used, even though it shouldn't be necessary since the 
> label is implicit from the address?



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8920) Reduce size of FSTs due to use of direct-addressing encoding

2019-07-16 Thread Mike Sokolov (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16886603#comment-16886603
 ] 

Mike Sokolov commented on LUCENE-8920:
--

Yes, that makes sense. Because we reverted the "current version" in FST.java, 
we can no longer read FSTs created with the newer version, so we need to revert 
the dictionary file.  I'll do that and run a full suite of tests just to make 
sure something else isn't still broken

> Reduce size of FSTs due to use of direct-addressing encoding 
> -
>
> Key: LUCENE-8920
> URL: https://issues.apache.org/jira/browse/LUCENE-8920
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Mike Sokolov
>Priority: Major
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Some data can lead to worst-case ~4x RAM usage due to this optimization. 
> Several ideas were suggested to combat this on the mailing list:
> bq. I think we can improve the situation here by tracking, per-FST instance, 
> the size increase we're seeing while building (or perhaps do a preliminary 
> pass before building) in order to decide whether to apply the encoding. 
> bq. we could also make the encoding a bit more efficient. For instance I 
> noticed that arc metadata is pretty large in some cases (in the 10-20 bytes) 
> which makes gaps very costly. Associating each label with a dense id and 
> having an intermediate lookup, ie. lookup label -> id and then id->arc offset 
> instead of doing label->arc directly could save a lot of space in some cases? 
> Also it seems that we are repeating the label in the arc metadata when 
> array-with-gaps is used, even though it shouldn't be necessary since the 
> label is implicit from the address?






[jira] [Commented] (SOLR-13565) Node level runtime libs loaded from remote urls

2019-07-16 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16886597#comment-16886597
 ] 

ASF subversion and git services commented on SOLR-13565:


Commit b728566ca3f72a1fdb870872b17c265e4651342a in lucene-solr's branch 
refs/heads/jira/SOLR-13565 from Noble Paul
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=b728566 ]

SOLR-13565: set proper permission name


> Node level runtime libs loaded from remote urls
> ---
>
> Key: SOLR-13565
> URL: https://issues.apache.org/jira/browse/SOLR-13565
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Noble Paul
>Assignee: Noble Paul
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Custom components to be loaded at the CoreContainer level.
> How to configure this?
> {code:json}
> curl -X POST -H 'Content-type:application/json' --data-binary '
> {
>   "add-runtimelib": {
>   "name": "lib-name" ,
>   "url" : "http://host:port/url/of/jar",
>   "sha512":""
>   }
> }' http://localhost:8983/api/cluster
> {code}
> How to update your jars?
> {code:json}
> curl -X POST -H 'Content-type:application/json' --data-binary '
> {
>   "update-runtimelib": {
>   "name": "lib-name" ,
>   "url" : "http://host:port/url/of/jar",
>   "sha512":""
>   }
> }' http://localhost:8983/api/cluster
> {code}
> This only loads components used at the CoreContainer level and does not require 
> restarting the Solr node.
> The configuration lives in the file {{/clusterprops.json}} in ZK.
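The add-runtimelib request above can be sketched as a small script. This is a minimal illustration, not part of the issue: the jar path is assumed to be passed as the first argument, and "lib-name" and the jar URL are the placeholders from the example, not real values.

```shell
#!/bin/sh
# Sketch: build the add-runtimelib payload for the /api/cluster endpoint.
# Assumptions: jar path given as $1; name and url are placeholders.
JAR="${1:-lib-name.jar}"

# Compute the sha512 digest of the jar (hex string, 128 characters).
SHA512=$(openssl dgst -sha512 "$JAR" | awk '{print $2}')

# Emit the payload on stdout.
cat <<EOF
{
  "add-runtimelib": {
    "name": "lib-name",
    "url": "http://host:port/url/of/jar",
    "sha512": "$SHA512"
  }
}
EOF
# The payload could then be piped into the cluster API, e.g.:
#   sh this-script.sh my.jar | curl -X POST -H 'Content-type:application/json' \
#       --data-binary @- http://localhost:8983/api/cluster
```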






[jira] [Commented] (SOLR-13534) Dynamic loading of jars from a url

2019-07-16 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13534?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16886594#comment-16886594
 ] 

ASF subversion and git services commented on SOLR-13534:


Commit be0289f4459da8ce6ffb91d85388690435cde2bc in lucene-solr's branch 
refs/heads/jira/SOLR-13565 from Chris M. Hostetter
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=be0289f ]

SOLR-13534: Fix test

Remove buggy 'port roulette' code that can easily fail if the OS gives the selected 
port to a different process just before creating the server

Use jetty's built-in support for listening on an OS-selected port instead

Also increase timeouts to better account for slow/heavily loaded (ie: jenkins) 
VMs where SolrCore reloading may take longer than 10 seconds


> Dynamic loading of jars from a url
> --
>
> Key: SOLR-13534
> URL: https://issues.apache.org/jira/browse/SOLR-13534
> Project: Solr
>  Issue Type: Improvement
>Reporter: Noble Paul
>Assignee: Noble Paul
>Priority: Major
> Fix For: 8.2
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Dynamic loading is possible from the {{.system}} collection. It's much easier to 
> host the jars on a remote service and load them from there. This way the user 
> should have no problem loading jars when the {{.system}} collection is not 
> available for some reason.
> The steps should look as follows
>  # get the hash of your jar file.  {{openssl dgst -sha512 }}
>  # upload it to your hosting service; say the location is 
> {{http://host:port/my-jar/location}}
>  # create a runtime lib entry for the collection as follows
> {code:java}
> curl http://localhost:8983/solr/techproducts/config -H 
> 'Content-type:application/json' -d '{
>"add-runtimelib": { "name":"jarblobname", 
> "sha512":"e94bb3990b39aacdabaa3eef7ca6102d96fa46766048da50269f25fd41164440a4e024d7a7fb0d5ec328cd8322bb65f5ba7886e076a8f224f78cb310fd45896d"
>  , "url" : "http://host:port/my-jar/location"}
> }'
> {code}
> To update the jar, repeat the steps and use {{update-runtimelib}} to 
> update the sha512 hash.
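The three steps above can be sketched together in one script. This is only an illustration: the jar path is assumed to be passed as the first argument, "jarblobname" and the host/port URLs are placeholders from the description, and the final curl is echoed rather than executed so the sketch runs without a live Solr node.

```shell
#!/bin/sh
# Sketch of the three steps above (placeholder names and URLs throughout).
JAR="${1:-myplugin.jar}"

# Step 1: get the hash of the jar file.
SHA512=$(openssl dgst -sha512 "$JAR" | awk '{print $2}')

# Step 2: upload the jar to your hosting service (not shown); note its location.
JAR_URL="http://host:port/my-jar/location"

# Step 3: create the runtime lib entry for the collection
# (echoed only; drop the leading echo to actually send the request).
echo curl http://localhost:8983/solr/techproducts/config \
  -H "Content-type:application/json" \
  -d "{\"add-runtimelib\": {\"name\":\"jarblobname\", \"sha512\":\"$SHA512\", \"url\":\"$JAR_URL\"}}"
```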






[jira] [Commented] (SOLR-13565) Node level runtime libs loaded from remote urls

2019-07-16 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16886593#comment-16886593
 ] 

ASF subversion and git services commented on SOLR-13565:


Commit 5ec7212ef74555a7dfcba28ff4e185dd00cae046 in lucene-solr's branch 
refs/heads/jira/SOLR-13565 from Noble Paul
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=5ec7212 ]

SOLR-13565: fixing tests


> Node level runtime libs loaded from remote urls
> ---
>
> Key: SOLR-13565
> URL: https://issues.apache.org/jira/browse/SOLR-13565
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Noble Paul
>Assignee: Noble Paul
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Custom components to be loaded at the CoreContainer level.
> How to configure this?
> {code:json}
> curl -X POST -H 'Content-type:application/json' --data-binary '
> {
>   "add-runtimelib": {
>   "name": "lib-name" ,
>   "url" : "http://host:port/url/of/jar",
>   "sha512":""
>   }
> }' http://localhost:8983/api/cluster
> {code}
> How to update your jars?
> {code:json}
> curl -X POST -H 'Content-type:application/json' --data-binary '
> {
>   "update-runtimelib": {
>   "name": "lib-name" ,
>   "url" : "http://host:port/url/of/jar",
>   "sha512":""
>   }
> }' http://localhost:8983/api/cluster
> {code}
> This only loads components used at the CoreContainer level and does not require 
> restarting the Solr node.
> The configuration lives in the file {{/clusterprops.json}} in ZK.






[jira] [Commented] (LUCENE-8920) Reduce size of FSTs due to use of direct-addressing encoding

2019-07-16 Thread Tomoko Uchida (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16886592#comment-16886592
 ] 

Tomoko Uchida commented on LUCENE-8920:
---

FYI: this is not related to my revert; I think the Kuromoji dictionary data 
(o.a.l.a.ja.dict.TokenInfoDictionary$fst.dat) should also be reverted 
(regenerated) when the FST version is reverted.
This works for me.

{code}
$ ant -f lucene/analysis/kuromoji/build.xml regenerate
$ git status
Changes not staged for commit:
  (use "git add ..." to update what will be committed)
  (use "git checkout -- ..." to discard changes in working directory)

modified:   
lucene/analysis/kuromoji/src/resources/org/apache/lucene/analysis/ja/dict/TokenInfoDictionary$fst.dat
$ ant -f lucene/analysis/kuromoji/build.xml test
BUILD SUCCESSFUL
Total time: 33 seconds
{code}

> Reduce size of FSTs due to use of direct-addressing encoding 
> -
>
> Key: LUCENE-8920
> URL: https://issues.apache.org/jira/browse/LUCENE-8920
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Mike Sokolov
>Priority: Major
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Some data can lead to worst-case ~4x RAM usage due to this optimization. 
> Several ideas were suggested to combat this on the mailing list:
> bq. I think we can improve the situation here by tracking, per-FST instance, 
> the size increase we're seeing while building (or perhaps do a preliminary 
> pass before building) in order to decide whether to apply the encoding. 
> bq. we could also make the encoding a bit more efficient. For instance I 
> noticed that arc metadata is pretty large in some cases (in the 10-20 bytes) 
> which makes gaps very costly. Associating each label with a dense id and 
> having an intermediate lookup, ie. lookup label -> id and then id->arc offset 
> instead of doing label->arc directly could save a lot of space in some cases? 
> Also it seems that we are repeating the label in the arc metadata when 
> array-with-gaps is used, even though it shouldn't be necessary since the 
> label is implicit from the address?






[JENKINS-EA] Lucene-Solr-8.2-Windows (64bit/jdk-13-ea+26) - Build # 149 - Unstable!

2019-07-16 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-8.2-Windows/149/
Java: 64bit/jdk-13-ea+26 -XX:+UseCompressedOops -XX:+UseParallelGC

282 tests failed.
FAILED:  org.apache.lucene.analysis.ja.TestExtendedMode.testSurrogates

Error Message:
Could not initialize class 
org.apache.lucene.analysis.ja.dict.TokenInfoDictionary$SingletonHolder

Stack Trace:
java.lang.NoClassDefFoundError: Could not initialize class 
org.apache.lucene.analysis.ja.dict.TokenInfoDictionary$SingletonHolder
at 
__randomizedtesting.SeedInfo.seed([D75A6740552A592F:92A27C1706EA8236]:0)
at 
org.apache.lucene.analysis.ja.dict.TokenInfoDictionary.getInstance(TokenInfoDictionary.java:62)
at 
org.apache.lucene.analysis.ja.JapaneseTokenizer.(JapaneseTokenizer.java:215)
at 
org.apache.lucene.analysis.ja.TestExtendedMode$1.createComponents(TestExtendedMode.java:41)
at org.apache.lucene.analysis.Analyzer.tokenStream(Analyzer.java:199)
at 
org.apache.lucene.analysis.BaseTokenStreamTestCase.checkResetException(BaseTokenStreamTestCase.java:427)
at 
org.apache.lucene.analysis.BaseTokenStreamTestCase.assertAnalyzesTo(BaseTokenStreamTestCase.java:382)
at 
org.apache.lucene.analysis.BaseTokenStreamTestCase.assertAnalyzesTo(BaseTokenStreamTestCase.java:399)
at 
org.apache.lucene.analysis.ja.TestExtendedMode.testSurrogates(TestExtendedMode.java:55)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:567)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[JENKINS] Lucene-Solr-SmokeRelease-8.x - Build # 151 - Still Failing

2019-07-16 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-8.x/151/

No tests ran.

Build Log:
[...truncated 24989 lines...]
[asciidoctor:convert] asciidoctor: ERROR: about-this-guide.adoc: line 1: 
invalid part, must have at least one section (e.g., chapter, appendix, etc.)
[asciidoctor:convert] asciidoctor: ERROR: solr-glossary.adoc: line 1: invalid 
part, must have at least one section (e.g., chapter, appendix, etc.)
 [java] Processed 2590 links (2119 relative) to 3405 anchors in 259 files
 [echo] Validated Links & Anchors via: 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/solr/build/solr-ref-guide/bare-bones-html/

-dist-changes:
 [copy] Copying 4 files to 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/solr/package/changes

package:

-unpack-solr-tgz:

-ensure-solr-tgz-exists:
[mkdir] Created dir: 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/solr/build/solr.tgz.unpacked
[untar] Expanding: 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/solr/package/solr-8.3.0.tgz
 into 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/solr/build/solr.tgz.unpacked

generate-maven-artifacts:

resolve:

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/lucene/top-level-ivy-settings.xml

resolve:

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:

[JENKINS] Lucene-Solr-BadApples-Tests-8.x - Build # 156 - Unstable

2019-07-16 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-BadApples-Tests-8.x/156/

1 tests failed.
FAILED:  org.apache.solr.cloud.AliasIntegrationTest.testClusterStateProviderAPI

Error Message:
{} expected:<2> but was:<0>

Stack Trace:
java.lang.AssertionError: {} expected:<2> but was:<0>
at 
__randomizedtesting.SeedInfo.seed([7D2D7796B71223E5:62FAEBBAC419DAAE]:0)
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:834)
at org.junit.Assert.assertEquals(Assert.java:645)
at 
org.apache.solr.cloud.AliasIntegrationTest.testClusterStateProviderAPI(AliasIntegrationTest.java:303)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)




Build Log:
[...truncated 14721 lines...]
   [junit4] Suite: org.apache.solr.cloud.AliasIntegrationTest
   [junit4]   2> Creating dataDir: 

[JENKINS] Lucene-Solr-NightlyTests-master - Build # 1901 - Unstable

2019-07-16 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-master/1901/

1 tests failed.
FAILED:  
org.apache.solr.cloud.autoscaling.HdfsAutoAddReplicasIntegrationTest.testSimple

Error Message:
Waiting for collection testSimple2 Timeout waiting to see state for 
collection=testSimple2 
:DocCollection(testSimple2//collections/testSimple2/state.json/25)={   
"pullReplicas":"0",   "replicationFactor":"2",   "shards":{ "shard1":{  
 "range":"8000-",   "state":"active",   "replicas":{
 "core_node3":{   
"dataDir":"hdfs://localhost:45817/solr_hdfs_home/testSimple2/core_node3/data/", 
  "base_url":"http://127.0.0.1:36553/solr",   
"node_name":"127.0.0.1:36553_solr",   "type":"NRT",   
"force_set_state":"false",   
"ulogDir":"hdfs://localhost:45817/solr_hdfs_home/testSimple2/core_node3/data/tlog",
   "core":"testSimple2_shard1_replica_n1",   
"shared_storage":"true",   "state":"active",   
"leader":"true"}, "core_node5":{   
"dataDir":"hdfs://localhost:45817/solr_hdfs_home/testSimple2/core_node5/data/", 
  "base_url":"http://127.0.0.1:35975/solr",   
"node_name":"127.0.0.1:35975_solr",   "type":"NRT",   
"force_set_state":"false",   
"ulogDir":"hdfs://localhost:45817/solr_hdfs_home/testSimple2/core_node5/data/tlog",
   "core":"testSimple2_shard1_replica_n2",   
"shared_storage":"true",   "state":"down"}}}, "shard2":{   
"range":"0-7fff",   "state":"active",   "replicas":{ 
"core_node7":{   
"dataDir":"hdfs://localhost:45817/solr_hdfs_home/testSimple2/core_node7/data/", 
  "base_url":"http://127.0.0.1:36553/solr",   
"node_name":"127.0.0.1:36553_solr",   "type":"NRT",   
"force_set_state":"false",   
"ulogDir":"hdfs://localhost:45817/solr_hdfs_home/testSimple2/core_node7/data/tlog",
   "core":"testSimple2_shard2_replica_n4",   
"shared_storage":"true",   "state":"active",   
"leader":"true"}, "core_node8":{   
"dataDir":"hdfs://localhost:45817/solr_hdfs_home/testSimple2/core_node8/data/", 
  "base_url":"http://127.0.0.1:35975/solr",   
"node_name":"127.0.0.1:35975_solr",   "type":"NRT",   
"force_set_state":"false",   
"ulogDir":"hdfs://localhost:45817/solr_hdfs_home/testSimple2/core_node8/data/tlog",
   "core":"testSimple2_shard2_replica_n6",   
"shared_storage":"true",   "state":"down",   
"router":{"name":"compositeId"},   "maxShardsPerNode":"2",   
"autoAddReplicas":"true",   "nrtReplicas":"2",   "tlogReplicas":"0"} Live 
Nodes: [127.0.0.1:36553_solr, 127.0.0.1:45878_solr] Last available state: 
DocCollection(testSimple2//collections/testSimple2/state.json/25)={   
"pullReplicas":"0",   "replicationFactor":"2",   "shards":{ "shard1":{  
 "range":"8000-",   "state":"active",   "replicas":{
 "core_node3":{   
"dataDir":"hdfs://localhost:45817/solr_hdfs_home/testSimple2/core_node3/data/", 
  "base_url":"http://127.0.0.1:36553/solr",   
"node_name":"127.0.0.1:36553_solr",   "type":"NRT",   
"force_set_state":"false",   
"ulogDir":"hdfs://localhost:45817/solr_hdfs_home/testSimple2/core_node3/data/tlog",
   "core":"testSimple2_shard1_replica_n1",   
"shared_storage":"true",   "state":"active",   
"leader":"true"}, "core_node5":{   
"dataDir":"hdfs://localhost:45817/solr_hdfs_home/testSimple2/core_node5/data/", 
  "base_url":"http://127.0.0.1:35975/solr",   
"node_name":"127.0.0.1:35975_solr",   "type":"NRT",   
"force_set_state":"false",   
"ulogDir":"hdfs://localhost:45817/solr_hdfs_home/testSimple2/core_node5/data/tlog",
   "core":"testSimple2_shard1_replica_n2",   
"shared_storage":"true",   "state":"down"}}}, "shard2":{   
"range":"0-7fff",   "state":"active",   "replicas":{ 
"core_node7":{   
"dataDir":"hdfs://localhost:45817/solr_hdfs_home/testSimple2/core_node7/data/", 
  "base_url":"http://127.0.0.1:36553/solr",   
"node_name":"127.0.0.1:36553_solr",   "type":"NRT",   
"force_set_state":"false",   
"ulogDir":"hdfs://localhost:45817/solr_hdfs_home/testSimple2/core_node7/data/tlog",
   "core":"testSimple2_shard2_replica_n4",   
"shared_storage":"true",   "state":"active",   
"leader":"true"}, "core_node8":{   
"dataDir":"hdfs://localhost:45817/solr_hdfs_home/testSimple2/core_node8/data/", 
  "base_url":"http://127.0.0.1:35975/solr",   
"node_name":"127.0.0.1:35975_solr",   "type":"NRT",   
"force_set_state":"false",   

[JENKINS] Lucene-Solr-master-Windows (64bit/jdk-11.0.3) - Build # 8057 - Failure!

2019-07-16 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/8057/
Java: 64bit/jdk-11.0.3 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

All tests passed

Build Log:
[...truncated 2058 lines...]
   [junit4] JVM J1: stderr was not empty, see: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\core\test\temp\junit4-J1-20190716_201054_85611171931642024308540.syserr
   [junit4] >>> JVM J1 emitted unexpected output (verbatim) 
   [junit4] OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was 
deprecated in version 9.0 and will likely be removed in a future release.
   [junit4] <<< JVM J1: EOF 

[...truncated 3 lines...]
   [junit4] JVM J0: stderr was not empty, see: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\core\test\temp\junit4-J0-20190716_201054_85417911558558601385120.syserr
   [junit4] >>> JVM J0 emitted unexpected output (verbatim) 
   [junit4] OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was 
deprecated in version 9.0 and will likely be removed in a future release.
   [junit4] <<< JVM J0: EOF 

[...truncated 316 lines...]
   [junit4] JVM J1: stderr was not empty, see: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\test-framework\test\temp\junit4-J1-20190716_202126_26314900251150662464474.syserr
   [junit4] >>> JVM J1 emitted unexpected output (verbatim) 
   [junit4] OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was 
deprecated in version 9.0 and will likely be removed in a future release.
   [junit4] <<< JVM J1: EOF 

[...truncated 3 lines...]
   [junit4] JVM J0: stderr was not empty, see: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\test-framework\test\temp\junit4-J0-20190716_202126_26310493642421293986990.syserr
   [junit4] >>> JVM J0 emitted unexpected output (verbatim) 
   [junit4] OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was 
deprecated in version 9.0 and will likely be removed in a future release.
   [junit4] <<< JVM J0: EOF 

[...truncated 1083 lines...]
   [junit4] JVM J0: stderr was not empty, see: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\analysis\common\test\temp\junit4-J0-20190716_202304_9006537320604210423532.syserr
   [junit4] >>> JVM J0 emitted unexpected output (verbatim) 
   [junit4] OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was 
deprecated in version 9.0 and will likely be removed in a future release.
   [junit4] <<< JVM J0: EOF 

[...truncated 3 lines...]
   [junit4] JVM J1: stderr was not empty, see: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\analysis\common\test\temp\junit4-J1-20190716_202304_9005197402350381699500.syserr
   [junit4] >>> JVM J1 emitted unexpected output (verbatim) 
   [junit4] OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was 
deprecated in version 9.0 and will likely be removed in a future release.
   [junit4] <<< JVM J1: EOF 

[...truncated 243 lines...]
   [junit4] JVM J1: stderr was not empty, see: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\analysis\icu\test\temp\junit4-J1-20190716_202557_1216076638261795688484.syserr
   [junit4] >>> JVM J1 emitted unexpected output (verbatim) 
   [junit4] OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was 
deprecated in version 9.0 and will likely be removed in a future release.
   [junit4] <<< JVM J1: EOF 

[...truncated 3 lines...]
   [junit4] JVM J0: stderr was not empty, see: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\analysis\icu\test\temp\junit4-J0-20190716_202557_12115117190521553983505.syserr
   [junit4] >>> JVM J0 emitted unexpected output (verbatim) 
   [junit4] OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was 
deprecated in version 9.0 and will likely be removed in a future release.
   [junit4] <<< JVM J0: EOF 

[...truncated 218 lines...]
   [junit4] JVM J1: stderr was not empty, see: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\analysis\kuromoji\test\temp\junit4-J1-20190716_202615_6034040266618071466895.syserr
   [junit4] >>> JVM J1 emitted unexpected output (verbatim) 
   [junit4] OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was 
deprecated in version 9.0 and will likely be removed in a future release.
   [junit4] <<< JVM J1: EOF 

[...truncated 3 lines...]
   [junit4] JVM J0: stderr was not empty, see: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\analysis\kuromoji\test\temp\junit4-J0-20190716_202615_60314964817772283596436.syserr
   [junit4] >>> JVM J0 emitted unexpected output (verbatim) 
   [junit4] OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was 
deprecated in version 9.0 and will likely be removed in a future release.
   [junit4] <<< JVM J0: EOF 

[...truncated 155 lines...]
   [junit4] JVM J0: stderr was not empty, see: 

[jira] [Commented] (LUCENE-8920) Reduce size of FSTs due to use of direct-addressing encoding

2019-07-16 Thread Hoss Man (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16886513#comment-16886513
 ] 

Hoss Man commented on LUCENE-8920:
--

[~sokolov] - your revert on branch_8_2 seems to have broken most of the 
lucene/analysis/kuromoji tests with a common root cause...

{noformat}
  [junit4] ERROR   0.44s J0 | TestFactories.test <<<
   [junit4]> Throwable #1: java.lang.ExceptionInInitializerError
   [junit4]>at 
__randomizedtesting.SeedInfo.seed([B1B94D34D92CDA93:39ED72EE77D0B76B]:0)
   [junit4]>at 
org.apache.lucene.analysis.ja.dict.TokenInfoDictionary.getInstance(TokenInfoDictionary.java:62)
   [junit4]>at 
org.apache.lucene.analysis.ja.JapaneseTokenizer.<init>(JapaneseTokenizer.java:215)
   [junit4]>at 
org.apache.lucene.analysis.ja.JapaneseTokenizerFactory.create(JapaneseTokenizerFactory.java:150)
   [junit4]>at 
org.apache.lucene.analysis.ja.JapaneseTokenizerFactory.create(JapaneseTokenizerFactory.java:82)
   [junit4]>at 
org.apache.lucene.analysis.ja.TestFactories$FactoryAnalyzer.createComponents(TestFactories.java:174)
   [junit4]>at 
org.apache.lucene.analysis.Analyzer.tokenStream(Analyzer.java:199)
   [junit4]>at 
org.apache.lucene.analysis.BaseTokenStreamTestCase.checkResetException(BaseTokenStreamTestCase.java:427)
   [junit4]>at 
org.apache.lucene.analysis.BaseTokenStreamTestCase.checkRandomData(BaseTokenStreamTestCase.java:546)
   [junit4]>at 
org.apache.lucene.analysis.ja.TestFactories.doTestTokenizer(TestFactories.java:81)
   [junit4]>at 
org.apache.lucene.analysis.ja.TestFactories.test(TestFactories.java:60)
   [junit4]>at java.lang.Thread.run(Thread.java:748)
   [junit4]> Caused by: java.lang.RuntimeException: Cannot load 
TokenInfoDictionary.
   [junit4]>at 
org.apache.lucene.analysis.ja.dict.TokenInfoDictionary$SingletonHolder.<clinit>(TokenInfoDictionary.java:71)
   [junit4]>... 46 more
   [junit4]> Caused by: org.apache.lucene.index.IndexFormatTooNewException: 
Format version is not supported (resource 
org.apache.lucene.store.InputStreamDataInput@5f0dbb2f): 7 (needs to be between 
6 and 6)
   [junit4]>at 
org.apache.lucene.codecs.CodecUtil.checkHeaderNoMagic(CodecUtil.java:216)
   [junit4]>at 
org.apache.lucene.codecs.CodecUtil.checkHeader(CodecUtil.java:198)
   [junit4]>at org.apache.lucene.util.fst.FST.<init>(FST.java:275)
   [junit4]>at org.apache.lucene.util.fst.FST.<init>(FST.java:263)
   [junit4]>at 
org.apache.lucene.analysis.ja.dict.TokenInfoDictionary.<init>(TokenInfoDictionary.java:47)
   [junit4]>at 
org.apache.lucene.analysis.ja.dict.TokenInfoDictionary.<init>(TokenInfoDictionary.java:54)
   [junit4]>at 
org.apache.lucene.analysis.ja.dict.TokenInfoDictionary.<init>(TokenInfoDictionary.java:32)
   [junit4]>at 
org.apache.lucene.analysis.ja.dict.TokenInfoDictionary$SingletonHolder.<clinit>(TokenInfoDictionary.java:69)
   [junit4]>... 46 more

{noformat}

...perhaps due to "conflicting reverts" w/ LUCENE-8907 / LUCENE-8778 ?
/cc [~tomoko]
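The root-cause `IndexFormatTooNewException` above comes from a header version range check: the pre-built kuromoji dictionary FST was written at format version 7, while the reverted reader accepts exactly version 6. A minimal stdlib-only sketch of that check (the method and exception types here are illustrative, not Lucene's actual `CodecUtil` code):

```java
// Sketch of a codec-header version range check, analogous to the one that
// produced "7 (needs to be between 6 and 6)" above. Names are illustrative.
public class VersionCheckSketch {
    public static int checkVersion(int actual, int min, int max) {
        if (actual < min || actual > max) {
            throw new RuntimeException(
                "Format version is not supported: " + actual
                + " (needs to be between " + min + " and " + max + ")");
        }
        return actual;
    }

    public static void main(String[] args) {
        System.out.println(checkVersion(6, 6, 6)); // accepted
        try {
            // A file written at version 7, read by code expecting exactly 6:
            checkVersion(7, 6, 6);
        } catch (RuntimeException e) {
            System.out.println(e.getMessage());
        }
    }
}
```

Because the check fails inside a singleton's static initializer, every kuromoji test that touches the dictionary dies with `ExceptionInInitializerError` rather than a direct format error.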

> Reduce size of FSTs due to use of direct-addressing encoding 
> -
>
> Key: LUCENE-8920
> URL: https://issues.apache.org/jira/browse/LUCENE-8920
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Mike Sokolov
>Priority: Major
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Some data can lead to worst-case ~4x RAM usage due to this optimization. 
> Several ideas were suggested to combat this on the mailing list:
> bq. I think we can improve the situation here by tracking, per-FST instance, 
> the size increase we're seeing while building (or perhaps do a preliminary 
> pass before building) in order to decide whether to apply the encoding. 
> bq. we could also make the encoding a bit more efficient. For instance I 
> noticed that arc metadata is pretty large in some cases (in the 10-20 bytes) 
> which make gaps very costly. Associating each label with a dense id and 
> having an intermediate lookup, ie. lookup label -> id and then id->arc offset 
> instead of doing label->arc directly could save a lot of space in some cases? 
> Also it seems that we are repeating the label in the arc metadata when 
> array-with-gaps is used, even though it shouldn't be necessary since the 
> label is implicit from the address?
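The first quoted suggestion — track the size increase while building and only then decide whether to apply direct addressing — reduces to comparing the cost of the two arc encodings. A sketch under assumed byte counts (the fixed per-arc cost and the oversize factor are hypothetical; Lucene's real arc layout differs):

```java
import java.util.List;

// Sketch: decide between direct addressing (one fixed-width slot per label in
// the label range, gaps included) and a packed list (one slot per real arc).
// bytesPerArc stands in for Lucene's actual arc metadata size.
public class FstEncodingChoice {
    public static boolean useDirectAddressing(List<Integer> sortedLabels,
                                              int bytesPerArc,
                                              double maxOversize) {
        int numArcs = sortedLabels.size();
        int range = sortedLabels.get(numArcs - 1) - sortedLabels.get(0) + 1;
        long directBytes = (long) range * bytesPerArc;   // gaps cost full slots
        long listBytes = (long) numArcs * bytesPerArc;   // no gaps, slower lookup
        return directBytes <= listBytes * maxOversize;
    }

    public static void main(String[] args) {
        // Dense labels: direct addressing costs the same, so apply it.
        System.out.println(useDirectAddressing(List.of(97, 98, 99, 100), 16, 1.5)); // true
        // Sparse labels: the gaps would blow the budget, so fall back.
        System.out.println(useDirectAddressing(List.of(0, 100, 200, 255), 16, 1.5)); // false
    }
}
```

This is the "worst-case ~4x RAM" scenario in miniature: with 16-byte arc metadata and four arcs spread over a 256-label range, the direct-addressed table is 4096 bytes versus 64 for the packed list.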



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS-EA] Lucene-Solr-8.2-Linux (64bit/jdk-13-ea+26) - Build # 431 - Still Unstable!

2019-07-16 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-8.2-Linux/431/
Java: 64bit/jdk-13-ea+26 -XX:+UseCompressedOops -XX:+UseG1GC

354 tests failed.
FAILED:  org.apache.lucene.analysis.ja.TestExtendedMode.testRandomHugeStrings

Error Message:


Stack Trace:
java.lang.ExceptionInInitializerError
at 
__randomizedtesting.SeedInfo.seed([FAC3336878E387F8:62E054AB26953BB0]:0)
at 
org.apache.lucene.analysis.ja.dict.TokenInfoDictionary.getInstance(TokenInfoDictionary.java:62)
at 
org.apache.lucene.analysis.ja.JapaneseTokenizer.(JapaneseTokenizer.java:215)
at 
org.apache.lucene.analysis.ja.TestExtendedMode$1.createComponents(TestExtendedMode.java:41)
at org.apache.lucene.analysis.Analyzer.tokenStream(Analyzer.java:199)
at 
org.apache.lucene.analysis.BaseTokenStreamTestCase.checkResetException(BaseTokenStreamTestCase.java:427)
at 
org.apache.lucene.analysis.BaseTokenStreamTestCase.checkRandomData(BaseTokenStreamTestCase.java:546)
at 
org.apache.lucene.analysis.BaseTokenStreamTestCase.checkRandomData(BaseTokenStreamTestCase.java:474)
at 
org.apache.lucene.analysis.ja.TestExtendedMode.testRandomHugeStrings(TestExtendedMode.java:84)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:567)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:830)
Caused by: java.lang.RuntimeException: Cannot load TokenInfoDictionary.
at 

[jira] [Commented] (SOLR-13534) Dynamic loading of jars from a url

2019-07-16 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13534?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16886481#comment-16886481
 ] 

ASF subversion and git services commented on SOLR-13534:


Commit 4ccef38d48dbe414c43dd511ec3aa92db75b111a in lucene-solr's branch 
refs/heads/branch_8x from Chris M. Hostetter
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=4ccef38 ]

SOLR-13534: Fix test

Remove buggy 'port roulette' code that can easily fail if OS gives the selected 
port to a different process just before creating the server

Use jetty's built-in support for listening on an OS-selected port instead

Also increase timeouts to better account for slow/heavily loaded (ie:jenkins) 
VMs where SolrCore reloading may take longer than 10 seconds

(cherry picked from commit 19c78ddf98b1cef86f7a1c6d124811af8726b41d)
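The commit's approach — let the OS assign the port at bind time instead of probing for a free one and binding later — is the standard bind-to-port-0 pattern (Jetty's `ServerConnector` does this when its port is set to 0). A dependency-free illustration with `java.net.ServerSocket`:

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.net.ServerSocket;

// Bind to port 0 so the OS assigns a free port atomically, avoiding the
// "port roulette" race where a probed-free port is taken by another process
// just before the server binds it.
public class OsSelectedPort {
    public static int bindEphemeral() {
        try (ServerSocket socket = new ServerSocket(0)) {
            return socket.getLocalPort(); // the port the OS actually assigned
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println("OS assigned port " + bindEphemeral());
    }
}
```

In a real server the socket would of course stay open; the try-with-resources here only keeps the sketch self-contained.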


> Dynamic loading of jars from a url
> --
>
> Key: SOLR-13534
> URL: https://issues.apache.org/jira/browse/SOLR-13534
> Project: Solr
>  Issue Type: Improvement
>Reporter: Noble Paul
>Assignee: Noble Paul
>Priority: Major
> Fix For: 8.2
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Dynamic loading is possible from {{.system}} collection. It's much easier to 
> host the jars on a remote service and load it from there. This way the user 
> should have no problem in loading jars when the {{.system}} collection is not 
> available for some reason.
> The steps should look as follows
>  # get the hash of your jar file.  {{openssl dgst -sha512 }}
>  # upload it to your hosting service. Say the location is 
> {{[http://host:port/my-jar/location|http://hostport/]}}
>  # create a runtime lib entry for the collection as follows
> {code:java}
> curl http://localhost:8983/solr/techproducts/config -H 
> 'Content-type:application/json' -d '{
>"add-runtimelib": { "name":"jarblobname", 
> "sha512":"e94bb3990b39aacdabaa3eef7ca6102d96fa46766048da50269f25fd41164440a4e024d7a7fb0d5ec328cd8322bb65f5ba7886e076a8f224f78cb310fd45896d"
>  , "url" : "http://host:port/my-jar/location"}
> }'
> {code}
> to update the jar, just repeat the steps and use the {{update-runtimelib}} to 
> update the sha512 hash
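The hash from step 1 (`openssl dgst -sha512`) can equally be computed in Java with the stdlib — a sketch; the jar path passed on the command line is whatever file you intend to upload:

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

// Compute the hex SHA-512 digest of a file, matching `openssl dgst -sha512`,
// for use as the "sha512" field of the add-runtimelib payload.
public class JarSha512 {
    public static String sha512Hex(byte[] data) {
        try {
            byte[] digest = MessageDigest.getInstance("SHA-512").digest(data);
            StringBuilder hex = new StringBuilder(digest.length * 2);
            for (byte b : digest) {
                hex.append(String.format("%02x", b));
            }
            return hex.toString();
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException(e); // SHA-512 ships with every JDK
        }
    }

    public static void main(String[] args) throws Exception {
        // Pass your jar's path as args[0]; defaults to the empty input.
        byte[] data = args.length > 0
            ? Files.readAllBytes(Path.of(args[0]))
            : new byte[0];
        System.out.println(sha512Hex(data));
    }
}
```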






[jira] [Commented] (SOLR-13534) Dynamic loading of jars from a url

2019-07-16 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13534?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16886476#comment-16886476
 ] 

ASF subversion and git services commented on SOLR-13534:


Commit 19c78ddf98b1cef86f7a1c6d124811af8726b41d in lucene-solr's branch 
refs/heads/master from Chris M. Hostetter
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=19c78dd ]

SOLR-13534: Fix test

Remove buggy 'port roulette' code that can easily fail if OS gives the selected 
port to a different process just before creating the server

Use jetty's built-in support for listening on an OS-selected port instead

Also increase timeouts to better account for slow/heavily loaded (ie:jenkins) 
VMs where SolrCore reloading may take longer than 10 seconds


> Dynamic loading of jars from a url
> --
>
> Key: SOLR-13534
> URL: https://issues.apache.org/jira/browse/SOLR-13534
> Project: Solr
>  Issue Type: Improvement
>Reporter: Noble Paul
>Assignee: Noble Paul
>Priority: Major
> Fix For: 8.2
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Dynamic loading is possible from {{.system}} collection. It's much easier to 
> host the jars on a remote service and load it from there. This way the user 
> should have no problem in loading jars when the {{.system}} collection is not 
> available for some reason.
> The steps should look as follows
>  # get the hash of your jar file.  {{openssl dgst -sha512 }}
>  # upload it to your hosting service. Say the location is 
> {{[http://host:port/my-jar/location|http://hostport/]}}
>  # create a runtime lib entry for the collection as follows
> {code:java}
> curl http://localhost:8983/solr/techproducts/config -H 
> 'Content-type:application/json' -d '{
>"add-runtimelib": { "name":"jarblobname", 
> "sha512":"e94bb3990b39aacdabaa3eef7ca6102d96fa46766048da50269f25fd41164440a4e024d7a7fb0d5ec328cd8322bb65f5ba7886e076a8f224f78cb310fd45896d"
>  , "url" : "http://host:port/my-jar/location"}
> }'
> {code}
> to update the jar, just repeat the steps and use the {{update-runtimelib}} to 
> update the sha512 hash






[jira] [Updated] (SOLR-9961) RestoreCore needs the option to download files in parallel.

2019-07-16 Thread Mikhail Khludnev (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-9961?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Khludnev updated SOLR-9961:
---
Attachment: SOLR-9961.patch

> RestoreCore needs the option to download files in parallel.
> ---
>
> Key: SOLR-9961
> URL: https://issues.apache.org/jira/browse/SOLR-9961
> Project: Solr
>  Issue Type: Improvement
>  Components: Backup/Restore
>Affects Versions: 6.2.1
>Reporter: Timothy Potter
>Priority: Major
> Attachments: SOLR-9961.patch, SOLR-9961.patch, SOLR-9961.patch, 
> SOLR-9961.patch, SOLR-9961.patch
>
>
> My backup to cloud storage (Google cloud storage in this case, but I think 
> this is a general problem) takes 8 minutes ... the restore of the same core 
> takes hours. The restore loop in RestoreCore is serial and doesn't allow me 
> to parallelize the expensive part of this operation (the IO from the remote 
> cloud storage service). We need the option to parallelize the download (like 
> distcp). 
> Also, I tried downloading the same directory using gsutil and it was very 
> fast, like 2 minutes. So I know it's not the pipe that's limiting perf here.
> Here's a very rough patch that does the parallelization. We may also want to 
> consider a two-step approach: 1) download in parallel to a temp dir, 2) 
> perform all the of the checksum validation against the local temp dir. That 
> will save round trips to the remote cloud storage.
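The parallelization the patch describes can be sketched with a fixed-size executor (a sketch only: the file names and the `download` body are placeholders, not the RestoreCore API):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Sketch: fetch a backup's index files with N concurrent workers instead of
// the serial loop, waiting for all downloads before checksum validation.
public class ParallelRestoreSketch {
    public static List<String> downloadAll(List<String> fileNames, int threads) {
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        try {
            List<Callable<String>> tasks = new ArrayList<>();
            for (String name : fileNames) {
                tasks.add(() -> download(name));
            }
            List<String> done = new ArrayList<>();
            // invokeAll blocks until every task finishes, preserving order.
            for (Future<String> f : pool.invokeAll(tasks)) {
                done.add(f.get()); // rethrows any per-file failure
            }
            return done;
        } catch (Exception e) {
            throw new RuntimeException(e);
        } finally {
            pool.shutdown();
        }
    }

    // Placeholder for the expensive remote-storage read.
    private static String download(String name) {
        return name;
    }

    public static void main(String[] args) {
        System.out.println(downloadAll(List.of("_0.cfs", "_0.si", "segments_1"), 4));
    }
}
```

The thread count bounds concurrency against the remote store; the suggested two-step variant would have `download` write into a temp dir and defer all checksum work until the loop completes.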






[JENKINS] Lucene-Solr-8.x-Solaris (64bit/jdk1.8.0) - Build # 239 - Unstable!

2019-07-16 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-8.x-Solaris/239/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseG1GC

2 tests failed.
FAILED:  org.apache.solr.cloud.autoscaling.AutoScalingHandlerTest.testReadApi

Error Message:
expected:<2> but was:<3>

Stack Trace:
java.lang.AssertionError: expected:<2> but was:<3>
at 
__randomizedtesting.SeedInfo.seed([F93145040D2F6F52:AE18BEB1D6DD8D49]:0)
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:834)
at org.junit.Assert.assertEquals(Assert.java:645)
at org.junit.Assert.assertEquals(Assert.java:631)
at 
org.apache.solr.cloud.autoscaling.AutoScalingHandlerTest.testReadApi(AutoScalingHandlerTest.java:898)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)


FAILED:  
org.apache.solr.cloud.autoscaling.AutoScalingHandlerTest.testSuggestionsWithPayload

Error Message:


Stack Trace:
java.lang.AssertionError
at 

[jira] [Assigned] (SOLR-11556) Backup/Restore with multiple BackupRepository objects defined results in the wrong repo being used.

2019-07-16 Thread Mikhail Khludnev (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-11556?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Khludnev reassigned SOLR-11556:
---

Assignee: Mikhail Khludnev  (was: Timothy Potter)

> Backup/Restore with multiple BackupRepository objects defined results in the 
> wrong repo being used.
> ---
>
> Key: SOLR-11556
> URL: https://issues.apache.org/jira/browse/SOLR-11556
> Project: Solr
>  Issue Type: Bug
>  Components: Backup/Restore
>Affects Versions: 6.3
>Reporter: Timothy Potter
>Assignee: Mikhail Khludnev
>Priority: Major
> Attachments: SOLR-11556.patch
>
>
> I defined two repos for backup/restore, one local and one remote on GCS, e.g.
> {code}
> <backup>
>   <repository name="hdfs"
>       class="org.apache.solr.core.backup.repository.HdfsBackupRepository"
>       default="false">
>     ...
>   </repository>
>   <repository name="local"
>       class="org.apache.solr.core.backup.repository.LocalFileSystemRepository"
>       default="false">
>     <str name="location">/tmp/solr-backups</str>
>   </repository>
> </backup>
> {code}
> Since the CollectionHandler does not pass the "repository" param along, once 
> the BackupCmd gets the ZkNodeProps, it selects the wrong repo! 
> The error I'm seeing is:
> {code}
> 2017-10-26 17:07:27.326 ERROR 
> (OverseerThreadFactory-19-thread-1-processing-n:host:8983_solr) [   ] 
> o.a.s.c.OverseerCollectionMessageHandler Collection: product operation: 
> backup failed:java.nio.file.FileSystemNotFoundException: Provider "gs" not 
> installed
> at java.nio.file.Paths.get(Paths.java:147)
> at 
> org.apache.solr.core.backup.repository.LocalFileSystemRepository.resolve(LocalFileSystemRepository.java:82)
> at org.apache.solr.cloud.BackupCmd.call(BackupCmd.java:99)
> at 
> org.apache.solr.cloud.OverseerCollectionMessageHandler.processMessage(OverseerCollectionMessageHandler.java:224)
> at 
> org.apache.solr.cloud.OverseerTaskProcessor$Runner.run(OverseerTaskProcessor.java:463)
> at 
> org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:229)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:748)
> {code}
> Notice the Local backup repo is being selected in the BackupCmd even though I 
> passed repository=hdfs in my backup command, e.g.
> {code}
> curl 
> "http://localhost:8983/solr/admin/collections?action=BACKUP&name=foo&collection=foo&location=gs://tjp-solr-test/backups&repository=hdfs"
> {code} 
> I think the fix here is to include the repository param, see patch. I'll fix 
> for the next 7.x release and those on 6 can just apply the patch here.
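The bug and the patch reduce to a name-keyed lookup with a default: once the `repository` request param is forwarded into the overseer message, the command selects by name and only falls back to the default repo when no name was given. A sketch (class and field names are illustrative, not Solr's `BackupRepositoryFactory`):

```java
import java.util.Map;

// Sketch: resolve a backup repository from the request's "repository" param,
// falling back to the configured default only when the param is absent.
// When the param is dropped (the bug), every request silently hits the default.
public class RepoSelectionSketch {
    private final Map<String, String> reposByName; // name -> implementation class
    private final String defaultName;

    public RepoSelectionSketch(Map<String, String> reposByName, String defaultName) {
        this.reposByName = reposByName;
        this.defaultName = defaultName;
    }

    public String resolve(String requestedName) {
        String name = (requestedName != null) ? requestedName : defaultName;
        String repo = reposByName.get(name);
        if (repo == null) {
            throw new IllegalArgumentException("Unknown repository: " + name);
        }
        return repo;
    }

    public static void main(String[] args) {
        RepoSelectionSketch repos = new RepoSelectionSketch(
            Map.of("hdfs", "HdfsBackupRepository",
                   "local", "LocalFileSystemRepository"),
            "local");
        System.out.println(repos.resolve("hdfs")); // honored once the param is passed
        System.out.println(repos.resolve(null));   // fallback to the default repo
    }
}
```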






[jira] [Updated] (SOLR-13629) Remove trailing whitespace from analytics package

2019-07-16 Thread Neal Sidhwaney (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13629?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Neal Sidhwaney updated SOLR-13629:
--
Status: Open  (was: Patch Available)

> Remove trailing whitespace from analytics package
> -
>
> Key: SOLR-13629
> URL: https://issues.apache.org/jira/browse/SOLR-13629
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 8.1.1
>Reporter: Neal Sidhwaney
>Priority: Trivial
> Attachments: SOLR-13629.patch
>
>
> I'm making some changes to analytics and noticed that the guidelines ask to 
> create separate patches for formatting/whitespace changes.  This issue is 
> meant for the patch that removes trailing whitespace, preserving the newline 
> when a line contains only whitespace.






[jira] [Updated] (SOLR-13634) ResponseBuilderTest should be in same package as ReponseBuilder

2019-07-16 Thread Neal Sidhwaney (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13634?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Neal Sidhwaney updated SOLR-13634:
--
Status: Open  (was: Patch Available)

> ResponseBuilderTest should be in same package as ReponseBuilder
> ---
>
> Key: SOLR-13634
> URL: https://issues.apache.org/jira/browse/SOLR-13634
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Server
>Affects Versions: 8.1.1
>Reporter: Neal Sidhwaney
>Priority: Trivial
> Attachments: SOLR-13634.patch
>
>
> While playing around with the analytics package, I noticed ResponseBuilder is 
> in Java package org.apache.solr.handler.component, whereas 
> ResponseBuilderTest is in org.apache.solr.handler.  We should make them 
> consistent.  I'll send a patch to move ResponseBuilderTest into the same 
> package as ResponseBuilder.






[jira] [Commented] (SOLR-11266) V2 API returning wrong content-type

2019-07-16 Thread Gus Heck (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-11266?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16886363#comment-16886363
 ] 

Gus Heck commented on SOLR-11266:
-

This behavior has been a peeve of mine... I've always found it irritating that 
the content type is not appropriate for the content. This forces clients to be 
written to make hard coded assumptions about content type (the most common 
solution), do checks to find out what was requested on the request line (or 
included in the POST data from the form), or to maintain state across the 
request or do any one of a number of other non-standard things rather than 
simply choose a parser based on content type. And that's if one has good 
control over that communications code. Could be even more fun if the client 
wants to use 3rd party code that refuses to ignore the header. I'm not of the 
opinion that users looking at a response in a browser are our main target 
audience. Javascript, Java, PHP and Python code probably constitute 1000x more 
use cases than humans in browser window inspection, which is mostly an initial 
development thing. If plain text is desired, that should be produced as the 
result of an override. Looking back at the referenced issue, the number of 
folks rooting for application/json seemed way larger than those insisting on 
text/plain.

Standards are standards so that people can rely on them, and less inane 
glue-code is needed for systems to talk to each other. I never like to see 
software (be it open source or closed) half supporting or mutating or 
"extending" standards. I'm very +1 that we target returning sane content types 
that match the content actually returned for 9.x and provide a text/plain back 
compatibility option for the request params and also as a global config 
somewhere. It should not take special configuration to turn on "standards mode".

We should also be able to do Accept: negotiation and not require a url param 
relating to content type at all, at which point a browser that didn't accept 
json would likely fall back to text/plain...  

> V2 API returning wrong content-type
> ---
>
> Key: SOLR-11266
> URL: https://issues.apache.org/jira/browse/SOLR-11266
> Project: Solr
>  Issue Type: Bug
>  Components: v2 API
>Reporter: Ishan Chattopadhyaya
>Priority: Major
>
> The content-type of the returned value is wrong in many places. It should 
> return "application/json", but instead returns "text/plain".
> Here's an example:
> {code}
> [ishan@t430 ~] $ curl -v 
> "http://localhost:8983/api/collections/products/select?q=*:*&rows=0"
> *   Trying 127.0.0.1...
> * TCP_NODELAY set
> * Connected to localhost (127.0.0.1) port 8983 (#0)
> > GET /api/collections/products/select?q=*:*&rows=0 HTTP/1.1
> > Host: localhost:8983
> > User-Agent: curl/7.51.0
> > Accept: */*
> > 
> < HTTP/1.1 200 OK
> < Content-Type: text/plain;charset=utf-8
> < Content-Length: 184
> < 
> {
>   "responseHeader":{
> "zkConnected":true,
> "status":0,
> "QTime":1,
> "params":{
>   "q":"*:*",
>   "rows":"0"}},
>   "response":{"numFound":260,"start":0,"docs":[]
>   }}
> * Curl_http_done: called premature == 0
> * Connection #0 to host localhost left intact
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-13640) Invalid link in README.md file (lucene-solr)

2019-07-16 Thread Prince Manohar (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13640?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prince Manohar updated SOLR-13640:
--
Description: 
In the _*Development/IDEs*_ section of the README.md file (see 
[https://github.com/apache/lucene-solr#developmentides]), the links 
corresponding to _Eclipse_, _IntelliJ_, and _Netbeans_ are not working.

As discussed on the _user mailing list_, this is happening because Apache has 
taken down the _*MoinMoin wiki system*_, which has been migrated to 
Confluence.

I have fixed the URLs in the README file in the PR below:

 [https://github.com/apache/lucene-solr/pull/790]



> Invalid link in README.md file (lucene-solr)
> 
>
> Key: SOLR-13640
> URL: https://issues.apache.org/jira/browse/SOLR-13640
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Reporter: Prince Manohar
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> In the _*Development/IDEs*_ section of the README.md file (see 
> [https://github.com/apache/lucene-solr#developmentides]), the links 
> corresponding to _Eclipse_, _IntelliJ_, and _Netbeans_ are not working.
>  
> As discussed on the _user mailing list_, this is happening because Apache 
> has taken down the _*MoinMoin wiki system*_, which has been migrated to 
> Confluence.
> I have fixed the URLs in the README file in the PR below:
>  [https://github.com/apache/lucene-solr/pull/790]



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] [lucene-solr] princemanohar opened a new pull request #790: SOLR-13640 : Fix page linking to build project on IDE

2019-07-16 Thread GitBox
princemanohar opened a new pull request #790: SOLR-13640 : Fix page linking to 
build project on IDE
URL: https://github.com/apache/lucene-solr/pull/790
 
 
   
   
   
   # Description
   
   Please provide a short description of the changes you're making with this 
pull request.
   
   # Solution
   
   Please provide a short description of the approach taken to implement your 
solution.
   
   # Tests
   
   Please describe the tests you've developed or run to confirm this patch 
implements the feature or solves the problem.
   
   # Checklist
   
   Please review the following and check all that apply:
   
   - [] I have reviewed the guidelines for [How to 
Contribute](https://wiki.apache.org/solr/HowToContribute) and my code conforms 
to the standards described there to the best of my ability.
   - [] I have created a Jira issue and added the issue ID to my pull request 
title.
   - [] I am authorized to contribute this code to the ASF and have removed any 
code I do not have a license to distribute.
   - [] I have developed this patch against the `master` branch.
   - [] I have run `ant precommit` and the appropriate test suite.
   - [] I have added tests for my changes.
   - [] I have added documentation for the Ref Guide (for Solr changes only).
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-13640) Invalid link in README.md file (lucene-solr)

2019-07-16 Thread Prince Manohar (JIRA)
Prince Manohar created SOLR-13640:
-

 Summary: Invalid link in README.md file (lucene-solr)
 Key: SOLR-13640
 URL: https://issues.apache.org/jira/browse/SOLR-13640
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
  Components: documentation
Reporter: Prince Manohar


In the _*Development/IDEs*_ section of the README.md file (see 
[https://github.com/apache/lucene-solr#developmentides]), the links 
corresponding to _Eclipse_, _IntelliJ_, and _Netbeans_ are not working.

As discussed on the _user mailing list_, this is happening because Apache has 
taken down the _*MoinMoin wiki system*_, which has been migrated to 
Confluence.

I have fixed the URLs in the README file in the below PR.

 



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-8.2-Linux (64bit/jdk-11.0.3) - Build # 430 - Unstable!

2019-07-16 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-8.2-Linux/430/
Java: 64bit/jdk-11.0.3 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

319 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.spelling.suggest.TestAnalyzedSuggestions

Error Message:


Stack Trace:
java.lang.ExceptionInInitializerError
at __randomizedtesting.SeedInfo.seed([CFB33E6B0638792F]:0)
at 
org.apache.lucene.analysis.ja.dict.TokenInfoDictionary.getInstance(TokenInfoDictionary.java:62)
at 
org.apache.lucene.analysis.ja.JapaneseTokenizer.<init>(JapaneseTokenizer.java:215)
at 
org.apache.lucene.analysis.ja.JapaneseTokenizerFactory.create(JapaneseTokenizerFactory.java:150)
at 
org.apache.lucene.analysis.ja.JapaneseTokenizerFactory.create(JapaneseTokenizerFactory.java:82)
at 
org.apache.solr.analysis.TokenizerChain.createComponents(TokenizerChain.java:116)
at org.apache.lucene.analysis.Analyzer.tokenStream(Analyzer.java:199)
at 
org.apache.lucene.search.suggest.analyzing.AnalyzingSuggester.toAutomaton(AnalyzingSuggester.java:846)
at 
org.apache.lucene.search.suggest.analyzing.AnalyzingSuggester.build(AnalyzingSuggester.java:430)
at org.apache.lucene.search.suggest.Lookup.build(Lookup.java:190)
at org.apache.solr.spelling.suggest.Suggester.build(Suggester.java:161)
at 
org.apache.solr.handler.component.SpellCheckComponent.prepare(SpellCheckComponent.java:128)
at 
org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:279)
at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:199)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:2578)
at org.apache.solr.util.TestHarness.query(TestHarness.java:338)
at org.apache.solr.util.TestHarness.query(TestHarness.java:320)
at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:921)
at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:907)
at 
org.apache.solr.spelling.suggest.TestAnalyzedSuggestions.beforeClass(TestAnalyzedSuggestions.java:29)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:878)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:834)
Caused by: java.lang.RuntimeException: Cannot load TokenInfoDictionary.
at 
org.apache.lucene.analysis.ja.dict.TokenInfoDictionary$SingletonHolder.<clinit>(TokenInfoDictionary.java:71)
... 42 more
Caused by: org.apache.lucene.index.IndexFormatTooNewException: Format version 
is not supported (resource 
org.apache.lucene.store.InputStreamDataInput@6865b3c9): 7 (needs to be between 
6 and 6)
at 

Re: [JENKINS] Lucene-Solr-SmokeRelease-8.x - Build # 150 - Still Failing

2019-07-16 Thread Chris Hostetter


Ugh, sorry Steve -- thanks for being on top of this.

(I should have realized there was probably an open Jira for this already, 
but I didn't remember seeing any replies to other recent Jenkins failures 
... probably missed it over the July 4 weekend)


: Date: Tue, 16 Jul 2019 13:14:33 -0400
: From: Steve Rowe 
: Reply-To: dev@lucene.apache.org
: To: dev@lucene.apache.org
: Subject: Re: [JENKINS] Lucene-Solr-SmokeRelease-8.x - Build # 150 - Still
: Failing
: 
: https://issues.apache.org/jira/browse/INFRA-18701 <- report of the problem 
: to Infra, who don't seem interested (follow the linked issue to find other 
: projects reporting the same problem)
: 
: related: https://issues.apache.org/jira/browse/INFRA-18505 <- lucene1 VM 
: reconfiguration
: 
: --
: Steve
: 
: > On Jul 16, 2019, at 1:11 PM, Chris Hostetter  
wrote:
: > 
: > 
: > Same problem as the Lucene-Solr-SmokeRelease-8.2 jenkins job ... is the 
: > problem that java9 isn't installed on the apache jenkins VMs?
: > 
: > : prepare-release-no-sign:
: > : [mkdir] Created dir: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/lucene/build/smokeTestRelease/dist
: > :  [copy] Copying 502 files to 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/lucene/build/smokeTestRelease/dist/lucene
: > :  [copy] Copying 226 files to 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/lucene/build/smokeTestRelease/dist/solr
: > :[smoker] Java 1.8 JAVA_HOME=/home/jenkins/tools/java/latest1.8
: > :[smoker] Java 9 JAVA_HOME=/home/jenkins/tools/java/latest1.9
: > :[smoker] Traceback (most recent call last):
: > :[smoker]   File 
"/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/dev-tools/scripts/smokeTestRelease.py",
 line 1485, in <module>
: > :[smoker] main()
: > :[smoker]   File 
"/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/dev-tools/scripts/smokeTestRelease.py",
 line 1403, in main
: > :[smoker] c = parse_config()
: > :[smoker]   File 
"/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/dev-tools/scripts/smokeTestRelease.py",
 line 1266, in parse_config
: > :[smoker] c.java = make_java_config(parser, c.test_java9)
: > :[smoker]   File 
"/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/dev-tools/scripts/smokeTestRelease.py",
 line 1213, in make_java_config
: > :[smoker] run_java9 = _make_runner(java9_home, '9')
: > :[smoker]   File 
"/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/dev-tools/scripts/smokeTestRelease.py",
 line 1201, in _make_runner
: > :[smoker] shell=True, stderr=subprocess.STDOUT).decode('utf-8')
: > :[smoker]   File "/usr/lib/python3.4/subprocess.py", line 620, in 
check_output
: > :[smoker] raise CalledProcessError(retcode, process.args, 
output=output)
: > :[smoker] subprocess.CalledProcessError: Command 'export 
JAVA_HOME="/home/jenkins/tools/java/latest1.9" 
PATH="/home/jenkins/tools/java/latest1.9/bin:$PATH" 
JAVACMD="/home/jenkins/tools/java/latest1.9/bin/java"; java -version' returned 
non-zero exit status 127
: > : 
: > : BUILD FAILED
: > : 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/build.xml:462: 
exec returned: 1
: > : 
: > : Total time: 11 minutes 30 seconds
: > : Build step 'Invoke Ant' marked build as failure
: > : Setting JDK_1_9_LATEST__HOME=/home/jenkins/tools/java/latest1.9
: > : Setting JDK_1_9_LATEST__HOME=/home/jenkins/tools/java/latest1.9
: > : Email was triggered for: Failure - Any
: > : Sending email for trigger: Failure - Any
: > : Setting JDK_1_9_LATEST__HOME=/home/jenkins/tools/java/latest1.9
: > : Setting JDK_1_9_LATEST__HOME=/home/jenkins/tools/java/latest1.9
: > : Setting JDK_1_9_LATEST__HOME=/home/jenkins/tools/java/latest1.9
: > : Setting JDK_1_9_LATEST__HOME=/home/jenkins/tools/java/latest1.9
: > : 
: > 
: > -Hoss
: > http://www.lucidworks.com/ 
: > 
: > -
: > To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org 

: > For additional commands, e-mail: dev-h...@lucene.apache.org 

: 

-Hoss
http://www.lucidworks.com/

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [JENKINS] Lucene-Solr-SmokeRelease-8.x - Build # 150 - Still Failing

2019-07-16 Thread Steve Rowe
https://issues.apache.org/jira/browse/INFRA-18701 <- report of the problem to 
Infra, who don't seem interested (follow the linked issue to find other 
projects reporting the same problem)

related: https://issues.apache.org/jira/browse/INFRA-18505 <- lucene1 VM 
reconfiguration

--
Steve

> On Jul 16, 2019, at 1:11 PM, Chris Hostetter  wrote:
> 
> 
> Same problem as the Lucene-Solr-SmokeRelease-8.2 jenkins job ... is the 
> problem that java9 isn't installed on the apache jenkins VMs?
> 
> : prepare-release-no-sign:
> : [mkdir] Created dir: 
> /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/lucene/build/smokeTestRelease/dist
> :  [copy] Copying 502 files to 
> /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/lucene/build/smokeTestRelease/dist/lucene
> :  [copy] Copying 226 files to 
> /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/lucene/build/smokeTestRelease/dist/solr
> :[smoker] Java 1.8 JAVA_HOME=/home/jenkins/tools/java/latest1.8
> :[smoker] Java 9 JAVA_HOME=/home/jenkins/tools/java/latest1.9
> :[smoker] Traceback (most recent call last):
> :[smoker]   File 
> "/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/dev-tools/scripts/smokeTestRelease.py",
>  line 1485, in <module>
> :[smoker] main()
> :[smoker]   File 
> "/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/dev-tools/scripts/smokeTestRelease.py",
>  line 1403, in main
> :[smoker] c = parse_config()
> :[smoker]   File 
> "/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/dev-tools/scripts/smokeTestRelease.py",
>  line 1266, in parse_config
> :[smoker] c.java = make_java_config(parser, c.test_java9)
> :[smoker]   File 
> "/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/dev-tools/scripts/smokeTestRelease.py",
>  line 1213, in make_java_config
> :[smoker] run_java9 = _make_runner(java9_home, '9')
> :[smoker]   File 
> "/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/dev-tools/scripts/smokeTestRelease.py",
>  line 1201, in _make_runner
> :[smoker] shell=True, stderr=subprocess.STDOUT).decode('utf-8')
> :[smoker]   File "/usr/lib/python3.4/subprocess.py", line 620, in 
> check_output
> :[smoker] raise CalledProcessError(retcode, process.args, 
> output=output)
> :[smoker] subprocess.CalledProcessError: Command 'export 
> JAVA_HOME="/home/jenkins/tools/java/latest1.9" 
> PATH="/home/jenkins/tools/java/latest1.9/bin:$PATH" 
> JAVACMD="/home/jenkins/tools/java/latest1.9/bin/java"; java -version' 
> returned non-zero exit status 127
> : 
> : BUILD FAILED
> : 
> /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/build.xml:462:
>  exec returned: 1
> : 
> : Total time: 11 minutes 30 seconds
> : Build step 'Invoke Ant' marked build as failure
> : Setting JDK_1_9_LATEST__HOME=/home/jenkins/tools/java/latest1.9
> : Setting JDK_1_9_LATEST__HOME=/home/jenkins/tools/java/latest1.9
> : Email was triggered for: Failure - Any
> : Sending email for trigger: Failure - Any
> : Setting JDK_1_9_LATEST__HOME=/home/jenkins/tools/java/latest1.9
> : Setting JDK_1_9_LATEST__HOME=/home/jenkins/tools/java/latest1.9
> : Setting JDK_1_9_LATEST__HOME=/home/jenkins/tools/java/latest1.9
> : Setting JDK_1_9_LATEST__HOME=/home/jenkins/tools/java/latest1.9
> : 
> 
> -Hoss
> http://www.lucidworks.com/ 
> 
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org 
> 
> For additional commands, e-mail: dev-h...@lucene.apache.org 
> 


Re: [JENKINS] Lucene-Solr-SmokeRelease-8.x - Build # 150 - Still Failing

2019-07-16 Thread Chris Hostetter


Same problem as the Lucene-Solr-SmokeRelease-8.2 jenkins job ... is the 
problem that java9 isn't installed on the apache jenkins VMs?

: prepare-release-no-sign:
: [mkdir] Created dir: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/lucene/build/smokeTestRelease/dist
:  [copy] Copying 502 files to 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/lucene/build/smokeTestRelease/dist/lucene
:  [copy] Copying 226 files to 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/lucene/build/smokeTestRelease/dist/solr
:[smoker] Java 1.8 JAVA_HOME=/home/jenkins/tools/java/latest1.8
:[smoker] Java 9 JAVA_HOME=/home/jenkins/tools/java/latest1.9
:[smoker] Traceback (most recent call last):
:[smoker]   File 
"/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/dev-tools/scripts/smokeTestRelease.py",
 line 1485, in <module>
:[smoker] main()
:[smoker]   File 
"/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/dev-tools/scripts/smokeTestRelease.py",
 line 1403, in main
:[smoker] c = parse_config()
:[smoker]   File 
"/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/dev-tools/scripts/smokeTestRelease.py",
 line 1266, in parse_config
:[smoker] c.java = make_java_config(parser, c.test_java9)
:[smoker]   File 
"/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/dev-tools/scripts/smokeTestRelease.py",
 line 1213, in make_java_config
:[smoker] run_java9 = _make_runner(java9_home, '9')
:[smoker]   File 
"/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/dev-tools/scripts/smokeTestRelease.py",
 line 1201, in _make_runner
:[smoker] shell=True, stderr=subprocess.STDOUT).decode('utf-8')
:[smoker]   File "/usr/lib/python3.4/subprocess.py", line 620, in 
check_output
:[smoker] raise CalledProcessError(retcode, process.args, output=output)
:[smoker] subprocess.CalledProcessError: Command 'export 
JAVA_HOME="/home/jenkins/tools/java/latest1.9" 
PATH="/home/jenkins/tools/java/latest1.9/bin:$PATH" 
JAVACMD="/home/jenkins/tools/java/latest1.9/bin/java"; java -version' returned 
non-zero exit status 127
: 
: BUILD FAILED
: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/build.xml:462: 
exec returned: 1
: 
: Total time: 11 minutes 30 seconds
: Build step 'Invoke Ant' marked build as failure
: Setting JDK_1_9_LATEST__HOME=/home/jenkins/tools/java/latest1.9
: Setting JDK_1_9_LATEST__HOME=/home/jenkins/tools/java/latest1.9
: Email was triggered for: Failure - Any
: Sending email for trigger: Failure - Any
: Setting JDK_1_9_LATEST__HOME=/home/jenkins/tools/java/latest1.9
: Setting JDK_1_9_LATEST__HOME=/home/jenkins/tools/java/latest1.9
: Setting JDK_1_9_LATEST__HOME=/home/jenkins/tools/java/latest1.9
: Setting JDK_1_9_LATEST__HOME=/home/jenkins/tools/java/latest1.9
: 

-Hoss
http://www.lucidworks.com/
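For context on the failure quoted above: exit status 127 from a shell means 
the command could not be found, which supports the "java9 isn't installed" 
theory — the smoke tester points JAVACMD at 
/home/jenkins/tools/java/latest1.9/bin/java, and no binary exists there. A 
minimal reproduction (with a deliberately bogus path, not the real Jenkins 
one):

```python
import subprocess

# Run a command through the shell exactly as smokeTestRelease.py does,
# but with a JAVACMD path that does not exist on this machine.
result = subprocess.run(
    'JAVACMD="/nonexistent/java9/bin/java"; "$JAVACMD" -version',
    shell=True, capture_output=True, text=True)

print(result.returncode)  # 127: the shell could not find the command
```

Status 126 would instead mean the file exists but is not executable, so 127 
here points specifically at a missing JDK install rather than a permissions 
problem.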

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [JENKINS] Lucene-Solr-SmokeRelease-8.2 - Build # 5 - Still Failing

2019-07-16 Thread Chris Hostetter


Uh ... what?

:[smoker] subprocess.CalledProcessError: Command 'export 
JAVA_HOME="/home/jenkins/tools/java/latest1.9" 
PATH="/home/jenkins/tools/java/latest1.9/bin:$PATH" 
JAVACMD="/home/jenkins/tools/java/latest1.9/bin/java"; java -version' returned 
non-zero exit status 127

 

: prepare-release-no-sign:
: [mkdir] Created dir: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.2/lucene/build/smokeTestRelease/dist
:  [copy] Copying 502 files to 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.2/lucene/build/smokeTestRelease/dist/lucene
:  [copy] Copying 226 files to 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.2/lucene/build/smokeTestRelease/dist/solr
:[smoker] Java 1.8 JAVA_HOME=/home/jenkins/tools/java/latest1.8
:[smoker] Java 9 JAVA_HOME=/home/jenkins/tools/java/latest1.9
:[smoker] Traceback (most recent call last):
:[smoker]   File 
"/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.2/dev-tools/scripts/smokeTestRelease.py",
 line 1485, in 
:[smoker] main()
:[smoker]   File 
"/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.2/dev-tools/scripts/smokeTestRelease.py",
 line 1403, in main
:[smoker] c = parse_config()
:[smoker]   File 
"/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.2/dev-tools/scripts/smokeTestRelease.py",
 line 1266, in parse_config
:[smoker] c.java = make_java_config(parser, c.test_java9)
:[smoker]   File 
"/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.2/dev-tools/scripts/smokeTestRelease.py",
 line 1213, in make_java_config
:[smoker] run_java9 = _make_runner(java9_home, '9')
:[smoker]   File 
"/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.2/dev-tools/scripts/smokeTestRelease.py",
 line 1201, in _make_runner
:[smoker] shell=True, stderr=subprocess.STDOUT).decode('utf-8')
:[smoker]   File "/usr/lib/python3.4/subprocess.py", line 620, in 
check_output
:[smoker] raise CalledProcessError(retcode, process.args, output=output)
:[smoker] subprocess.CalledProcessError: Command 'export 
JAVA_HOME="/home/jenkins/tools/java/latest1.9" 
PATH="/home/jenkins/tools/java/latest1.9/bin:$PATH" 
JAVACMD="/home/jenkins/tools/java/latest1.9/bin/java"; java -version' returned 
non-zero exit status 127
: 
: BUILD FAILED
: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.2/build.xml:462: 
exec returned: 1
: 
: Total time: 12 minutes 8 seconds
: Build step 'Invoke Ant' marked build as failure
: Setting JDK_1_9_LATEST__HOME=/home/jenkins/tools/java/latest1.9
: Setting JDK_1_9_LATEST__HOME=/home/jenkins/tools/java/latest1.9
: Email was triggered for: Failure - Any
: Sending email for trigger: Failure - Any
: Setting JDK_1_9_LATEST__HOME=/home/jenkins/tools/java/latest1.9
: Setting JDK_1_9_LATEST__HOME=/home/jenkins/tools/java/latest1.9
: Setting JDK_1_9_LATEST__HOME=/home/jenkins/tools/java/latest1.9
: Setting JDK_1_9_LATEST__HOME=/home/jenkins/tools/java/latest1.9
: 

-Hoss
http://www.lucidworks.com/

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-Tests-master - Build # 3438 - Still Failing

2019-07-16 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-master/3438/

All tests passed

Build Log:
[...truncated 64753 lines...]
-ecj-javadoc-lint-tests:
[mkdir] Created dir: /tmp/ecj133492339
 [ecj-lint] Compiling 48 source files to /tmp/ecj133492339
 [ecj-lint] invalid Class-Path header in manifest of jar file: 
/x1/jenkins/.ivy2/cache/org.restlet.jee/org.restlet/jars/org.restlet-2.3.0.jar
 [ecj-lint] invalid Class-Path header in manifest of jar file: 
/x1/jenkins/.ivy2/cache/org.restlet.jee/org.restlet.ext.servlet/jars/org.restlet.ext.servlet-2.3.0.jar
 [ecj-lint] --
 [ecj-lint] 1. ERROR in 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/MockInitialContextFactory.java
 (at line 23)
 [ecj-lint] import javax.naming.NamingException;
 [ecj-lint]
 [ecj-lint] The type javax.naming.NamingException is not accessible
 [ecj-lint] --
 [ecj-lint] 2. ERROR in 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/MockInitialContextFactory.java
 (at line 28)
 [ecj-lint] public class MockInitialContextFactory implements 
InitialContextFactory {
 [ecj-lint]  ^
 [ecj-lint] The type MockInitialContextFactory must implement the inherited 
abstract method InitialContextFactory.getInitialContext(Hashtable)
 [ecj-lint] --
 [ecj-lint] 3. ERROR in 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/MockInitialContextFactory.java
 (at line 30)
 [ecj-lint] private final javax.naming.Context context;
 [ecj-lint]   
 [ecj-lint] The type javax.naming.Context is not accessible
 [ecj-lint] --
 [ecj-lint] 4. ERROR in 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/MockInitialContextFactory.java
 (at line 33)
 [ecj-lint] context = mock(javax.naming.Context.class);
 [ecj-lint] ^^^
 [ecj-lint] context cannot be resolved to a variable
 [ecj-lint] --
 [ecj-lint] 5. ERROR in 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/MockInitialContextFactory.java
 (at line 33)
 [ecj-lint] context = mock(javax.naming.Context.class);
 [ecj-lint]
 [ecj-lint] The type javax.naming.Context is not accessible
 [ecj-lint] --
 [ecj-lint] 6. ERROR in 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/MockInitialContextFactory.java
 (at line 36)
 [ecj-lint] when(context.lookup(anyString())).thenAnswer(invocation -> 
objects.get(invocation.getArgument(0)));
 [ecj-lint]  ^^^
 [ecj-lint] context cannot be resolved
 [ecj-lint] --
 [ecj-lint] 7. ERROR in 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/MockInitialContextFactory.java
 (at line 38)
 [ecj-lint] } catch (NamingException e) {
 [ecj-lint]  ^^^
 [ecj-lint] NamingException cannot be resolved to a type
 [ecj-lint] --
 [ecj-lint] 8. ERROR in 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/MockInitialContextFactory.java
 (at line 45)
 [ecj-lint] public javax.naming.Context getInitialContext(Hashtable env) {
 [ecj-lint]
 [ecj-lint] The type javax.naming.Context is not accessible
 [ecj-lint] --
 [ecj-lint] 9. ERROR in 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/MockInitialContextFactory.java
 (at line 46)
 [ecj-lint] return context;
 [ecj-lint]^^^
 [ecj-lint] context cannot be resolved to a variable
 [ecj-lint] --
 [ecj-lint] 9 problems (9 errors)

BUILD FAILED
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/build.xml:634: The 
following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/build.xml:101: The 
following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/solr/build.xml:651:
 The following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/solr/common-build.xml:479:
 The following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/lucene/common-build.xml:2015:
 The following error occurred while executing this line:

[jira] [Commented] (SOLR-13240) UTILIZENODE action results in an exception

2019-07-16 Thread Tim Owen (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16886282#comment-16886282
 ] 

Tim Owen commented on SOLR-13240:
-

Yes, it looks like the code fix has exposed other (autoscaling) tests that now 
fail; perhaps, as you suggest, they were relying on the previous sort order.
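The "Comparison method violates its general contract!" error below comes from 
Java's TimSort detecting a comparator that is not a valid total order. An 
illustrative sketch (not the actual Solr comparator) of such a violation, 
using a rock-paper-scissors comparator whose "greater than" relation forms a 
cycle:

```python
from functools import cmp_to_key

# (winner, loser) pairs: the "greater than" relation is cyclic.
beats = {("rock", "scissors"), ("scissors", "paper"), ("paper", "rock")}

def bad_cmp(a, b):
    # Non-transitive comparator: each item is "greater than" the next,
    # so no consistent total order exists.
    if a == b:
        return 0
    return 1 if (a, b) in beats else -1

# Every element compares greater than another, forming a cycle:
print(bad_cmp("rock", "scissors") > 0,
      bad_cmp("scissors", "paper") > 0,
      bad_cmp("paper", "rock") > 0)  # True True True

# Python's sort silently returns some arbitrary order here; Java's
# TimSort may instead detect the inconsistency and throw.
print(sorted(["rock", "paper", "scissors"], key=cmp_to_key(bad_cmp)))
```

This is why fixing such a comparator can legitimately change the resulting 
order: the old order was one arbitrary outcome of an inconsistent comparison, 
and tests that depended on it were depending on an accident.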

> UTILIZENODE action results in an exception
> --
>
> Key: SOLR-13240
> URL: https://issues.apache.org/jira/browse/SOLR-13240
> Project: Solr
>  Issue Type: Bug
>  Components: AutoScaling
>Affects Versions: 7.6
>Reporter: Hendrik Haddorp
>Priority: Major
> Attachments: SOLR-13240.patch, SOLR-13240.patch, SOLR-13240.patch, 
> SOLR-13240.patch, solr-solrj-7.5.0.jar
>
>
> When I invoke the UTILIZENODE action the REST call fails like this after it 
> moved a few replicas:
> {
>   "responseHeader":{
> "status":500,
> "QTime":40220},
>   "Operation utilizenode caused 
> exception:":"java.lang.IllegalArgumentException:java.lang.IllegalArgumentException:
>  Comparison method violates its general contract!",
>   "exception":{
> "msg":"Comparison method violates its general contract!",
> "rspCode":-1},
>   "error":{
> "metadata":[
>   "error-class","org.apache.solr.common.SolrException",
>   "root-error-class","org.apache.solr.common.SolrException"],
> "msg":"Comparison method violates its general contract!",
> "trace":"org.apache.solr.common.SolrException: Comparison method violates 
> its general contract!\n\tat 
> org.apache.solr.client.solrj.SolrResponse.getException(SolrResponse.java:53)\n\tat
>  
> org.apache.solr.handler.admin.CollectionsHandler.invokeAction(CollectionsHandler.java:274)\n\tat
>  
> org.apache.solr.handler.admin.CollectionsHandler.handleRequestBody(CollectionsHandler.java:246)\n\tat
>  
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:199)\n\tat
>  
> org.apache.solr.servlet.HttpSolrCall.handleAdmin(HttpSolrCall.java:734)\n\tat 
> org.apache.solr.servlet.HttpSolrCall.handleAdminRequest(HttpSolrCall.java:715)\n\tat
>  org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:496)\n\tat 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:377)\n\tat
>  
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:323)\n\tat
>  
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1634)\n\tat
>  
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:533)\n\tat
>  
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:146)\n\tat
>  
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)\n\tat
>  
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)\n\tat
>  
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:257)\n\tat
>  
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1595)\n\tat
>  
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:255)\n\tat
>  
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1317)\n\tat
>  
> org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:203)\n\tat
>  
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:473)\n\tat
>  
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1564)\n\tat
>  
> org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:201)\n\tat
>  
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1219)\n\tat
>  
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:144)\n\tat
>  
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:219)\n\tat
>  
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:126)\n\tat
>  
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)\n\tat
>  
> org.eclipse.jetty.rewrite.handler.RewriteHandler.handle(RewriteHandler.java:335)\n\tat
>  
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)\n\tat
>  org.eclipse.jetty.server.Server.handle(Server.java:531)\n\tat 
> org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:352)\n\tat 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:260)\n\tat
>  
> org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:281)\n\tat
>  org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:102)\n\tat 
> org.eclipse.jetty.io.ChannelEndPoint$2.run(ChannelEndPoint.java:118)\n\tat 
> org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.runTask(EatWhatYouKill.java:333)\n\tat
>  
> 

[jira] [Commented] (SOLR-11556) Backup/Restore with multiple BackupRepository objects defined results in the wrong repo being used.

2019-07-16 Thread Richard Goodman (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-11556?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16886277#comment-16886277
 ] 

Richard Goodman commented on SOLR-11556:


Hey,

With my current use case, we back up our indexes to a remote HDFS cluster, but 
we are slowly going to move this into AWS. We wanted more than one backup 
repository defined so that, while we were testing backups to AWS, we could 
still back up through our current pipeline.

I came across the exact same error as you _(currently running v7.4)_, applied 
the patch you submitted, and it solved the problem for me. So it looks like 
this didn't get chased up in newer versions.

> Backup/Restore with multiple BackupRepository objects defined results in the 
> wrong repo being used.
> ---
>
> Key: SOLR-11556
> URL: https://issues.apache.org/jira/browse/SOLR-11556
> Project: Solr
>  Issue Type: Bug
>  Components: Backup/Restore
>Affects Versions: 6.3
>Reporter: Timothy Potter
>Assignee: Timothy Potter
>Priority: Major
> Attachments: SOLR-11556.patch
>
>
> I defined two repos for backup/restore, one local and one remote on GCS, e.g.
> {code}
> <backup>
>   <repository name="hdfs" class="org.apache.solr.core.backup.repository.HdfsBackupRepository" default="false">
>     ...
>   </repository>
>   <repository name="local" class="org.apache.solr.core.backup.repository.LocalFileSystemRepository" default="false">
>     <str name="location">/tmp/solr-backups</str>
>   </repository>
> </backup>
> {code}
> Since the CollectionHandler does not pass the "repository" param along, once 
> the BackupCmd gets the ZkNodeProps, it selects the wrong repo! 
> The error I'm seeing is:
> {code}
> 2017-10-26 17:07:27.326 ERROR 
> (OverseerThreadFactory-19-thread-1-processing-n:host:8983_solr) [   ] 
> o.a.s.c.OverseerCollectionMessageHandler Collection: product operation: 
> backup failed:java.nio.file.FileSystemNotFoundException: Provider "gs" not 
> installed
> at java.nio.file.Paths.get(Paths.java:147)
> at 
> org.apache.solr.core.backup.repository.LocalFileSystemRepository.resolve(LocalFileSystemRepository.java:82)
> at org.apache.solr.cloud.BackupCmd.call(BackupCmd.java:99)
> at 
> org.apache.solr.cloud.OverseerCollectionMessageHandler.processMessage(OverseerCollectionMessageHandler.java:224)
> at 
> org.apache.solr.cloud.OverseerTaskProcessor$Runner.run(OverseerTaskProcessor.java:463)
> at 
> org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:229)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:748)
> {code}
> Notice the Local backup repo is being selected in the BackupCmd even though I 
> passed repository=hdfs in my backup command, e.g.
> {code}
> curl 
> "http://localhost:8983/solr/admin/collections?action=BACKUP&name=foo&collection=foo&location=gs://tjp-solr-test/backups&repository=hdfs"
> {code} 
> I think the fix here is to include the repository param, see patch. I'll fix 
> for the next 7.x release and those on 6 can just apply the patch here.
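The failure mode described above can be modeled outside Solr: when the repository request parameter is not passed along, a lookup by name silently falls back to the default repository, and the backup runs against the wrong one. This is an illustrative sketch with hypothetical names, not the actual CollectionsHandler/BackupCmd API.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class RepoLookup {
    // Hypothetical registry keyed by repository name, as configured in solr.xml.
    static final Map<String, String> REPOS = new LinkedHashMap<>();
    static {
        REPOS.put("local", "LocalFileSystemRepository"); // default fallback
        REPOS.put("hdfs", "HdfsBackupRepository");
    }

    // Mirrors the bug: a null name (param dropped en route) selects the default repo.
    static String select(String repoName) {
        if (repoName == null) {
            return REPOS.values().iterator().next(); // default repository
        }
        String clazz = REPOS.get(repoName);
        if (clazz == null) throw new IllegalArgumentException("Unknown repository: " + repoName);
        return clazz;
    }

    public static void main(String[] args) {
        // Param forwarded: the intended repo is used.
        if (!select("hdfs").equals("HdfsBackupRepository")) throw new AssertionError();
        // Param dropped (the SOLR-11556 bug): the local repo is chosen instead,
        // which later fails on gs:// paths with FileSystemNotFoundException.
        if (!select(null).equals("LocalFileSystemRepository")) throw new AssertionError();
        System.out.println("dropped repository param selects the default repo");
    }
}
```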



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11266) V2 API returning wrong content-type

2019-07-16 Thread Munendra S N (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-11266?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16886276#comment-16886276
 ] 

Munendra S N commented on SOLR-11266:
-

The current behavior of the V2 API is consistent with the v1 API, but when 
{{wt=json}} is passed explicitly the content-type is {{text/plain}} (in both 
versions); this is due to SOLR-1123 (expected behavior).

I'm not sure what should be done here - resolve this since the behavior is 
expected (per the changelog), or comment out the content-type override in the 
default configs?
[~ichattopadhyaya] please suggest

> V2 API returning wrong content-type
> ---
>
> Key: SOLR-11266
> URL: https://issues.apache.org/jira/browse/SOLR-11266
> Project: Solr
>  Issue Type: Bug
>  Components: v2 API
>Reporter: Ishan Chattopadhyaya
>Priority: Major
>
> The content-type of the returned value is wrong in many places. It should 
> return "application/json", but instead returns "text/plain".
> Here's an example:
> {code}
> [ishan@t430 ~] $ curl -v 
> "http://localhost:8983/api/collections/products/select?q=*:*&rows=0"
> *   Trying 127.0.0.1...
> * TCP_NODELAY set
> * Connected to localhost (127.0.0.1) port 8983 (#0)
> > GET /api/collections/products/select?q=*:*&rows=0 HTTP/1.1
> > Host: localhost:8983
> > User-Agent: curl/7.51.0
> > Accept: */*
> > 
> < HTTP/1.1 200 OK
> < Content-Type: text/plain;charset=utf-8
> < Content-Length: 184
> < 
> {
>   "responseHeader":{
> "zkConnected":true,
> "status":0,
> "QTime":1,
> "params":{
>   "q":"*:*",
>   "rows":"0"}},
>   "response":{"numFound":260,"start":0,"docs":[]
>   }}
> * Curl_http_done: called premature == 0
> * Connection #0 to host localhost left intact
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] [lucene-solr] munendrasn commented on a change in pull request #597: [SOLR-13272] feat(facet/interval): support json facet requests for interval facet

2019-07-16 Thread GitBox
munendrasn commented on a change in pull request #597: [SOLR-13272] 
feat(facet/interval): support json facet requests for interval facet
URL: https://github.com/apache/lucene-solr/pull/597#discussion_r303986476
 
 

 ##
 File path: solr/core/src/java/org/apache/solr/search/facet/FacetRange.java
 ##
 @@ -299,8 +323,127 @@ private void createRangeList() throws IOException {
   actual_end = null;
 }
   }
-  
-  
+
+  private List parseInterval(Object input) {
+//intervals :[{key:"set1",value:"[0,10]"}]
+//@todo apoorv handle exception, sort orders
 
 Review comment:
   could you please validate the `input`? What is the `sort orders` here?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] [lucene-solr] munendrasn commented on a change in pull request #597: [SOLR-13272] feat(facet/interval): support json facet requests for interval facet

2019-07-16 Thread GitBox
munendrasn commented on a change in pull request #597: [SOLR-13272] 
feat(facet/interval): support json facet requests for interval facet
URL: https://github.com/apache/lucene-solr/pull/597#discussion_r303988029
 
 

 ##
 File path: solr/core/src/test/org/apache/solr/search/facet/TestJsonFacets.java
 ##
 @@ -49,8 +49,8 @@
 //   TestCloudJSONFacetJoinDomain for random field faceting tests with domain 
modifications
 //   TestJsonFacetRefinement for refinement tests
 
-@LuceneTestCase.SuppressCodecs({"Lucene3x","Lucene40","Lucene41","Lucene42","Lucene45","Appending"})
-public class TestJsonFacets extends SolrTestCaseHS {
+@LuceneTestCase.SuppressCodecs({"Lucene3x","Lucene40","Lucene41","FST50","Direct","Lucene42","Lucene45","Appending","BlockTreeOrds","FSTOrd50"})
 
 Review comment:
   any particular reason for suppressing this change?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] [lucene-solr] munendrasn commented on a change in pull request #597: [SOLR-13272] feat(facet/interval): support json facet requests for interval facet

2019-07-16 Thread GitBox
munendrasn commented on a change in pull request #597: [SOLR-13272] 
feat(facet/interval): support json facet requests for interval facet
URL: https://github.com/apache/lucene-solr/pull/597#discussion_r303986989
 
 

 ##
 File path: solr/core/src/test/org/apache/solr/search/facet/TestJsonFacets.java
 ##
 @@ -3225,6 +3224,58 @@ public void testDomainErrors() throws Exception {
 
   }
 
+  @Test
+  public void testIntervalFacets() throws Exception {
 
 Review comment:
   Please add test cases to cover `other`, `include` and refinement


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] [lucene-solr] atris commented on issue #779: LUCENE-8762: Introduce Specialized Impacts For Doc + Freq

2019-07-16 Thread GitBox
atris commented on issue #779: LUCENE-8762: Introduce Specialized Impacts For 
Doc + Freq
URL: https://github.com/apache/lucene-solr/pull/779#issuecomment-511874550
 
 
   @jpountz Three Luceneutil runs with wikimedium2m:
   
   https://gist.github.com/atris/5057e3372b287873a4e840c5f4e65a0d
   https://gist.github.com/atris/76719423071662d5d25eed4bdb7bf4a3
   https://gist.github.com/atris/dd7b7336147da138ac340670c81e5452


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org




[jira] [Commented] (SOLR-13600) Basic Authentication for read role is not working

2019-07-16 Thread Ishan Chattopadhyaya (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13600?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16886222#comment-16886222
 ] 

Ishan Chattopadhyaya commented on SOLR-13600:
-

Recently, SOLR-13472 was fixed. You might want to check if you were affected by 
it.

> Basic Authentication for read role is not working
> -
>
> Key: SOLR-13600
> URL: https://issues.apache.org/jira/browse/SOLR-13600
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Authorization
>Affects Versions: 8.1.1
> Environment: DEV environment
>Reporter: Nitin Asati
>Priority: Major
>  Labels: security
>
> Hello Team,
> I have upgraded the SOLR instance from 7.x to 8.1.1 and my READ role users 
> are not able to search results. 
> Upon trying to access below URL, getting the error:
> [http://localhost:8984/solr/testcore/select?q=*%3A*|http://localhost:8984/solr/xcelerate/select?q=*%3A*]
> h2. HTTP ERROR 403
> Problem accessing /solr/xcelerate/select. Reason:
> Unauthorized request, Response code: 403
>  
> Below is the content of security.json file.
>  
> {
>  "authentication":{
>  "blockUnknown":true,
>  "class":"solr.BasicAuthPlugin",
>  "credentials":{
>  "solr":"IV0EHq1OnNrj6gvRCwvFwTrZ1+z1oBbnQdiVC3otuq0= 
> Ndd7LKvVBAaZIF0QAVi1ekCfAJXr1GGfLtRUXhgrF8c=",
>  "searchuser":"hzx9wjm6baNqx08LpfevT8dNaojdMqIJMAF8cXanL1o= 
> CLDitkrBjs2FbqhOZN9Ey9Qc+5xcOJHfQTbPMC2p1eU=",
>  "solradmin":"ovgoJKFnFo43fgt5Pd7bfXBwq3+vfCO3uZXVRUi7H0Q= 
> gKRUTDGkg5RtTIgXDiKFkefuaelAWU18KlRTAv4LfFQ="},
>  "realm":"My Solr users",
>  "forwardCredentials":false,
>  "":{"v":0}},
>  "authorization":{
>  "class":"solr.RuleBasedAuthorizationPlugin",
>  "permissions":[
>  {
>  "name":"all",
>  "role":"admin",
>  "index":1},
>  {
>  "name":"read",
>  "role":"search",
>  "index":2}],
>  "user-role":{
>  "solr":"admin",
>  "searchuser":["read"],
>  "solradmin":["admin"]},
>  "":{"v":0}}}
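Two things worth double-checking in the security.json above, independent of SOLR-13472: the "read" permission requires role "search", but searchuser is mapped to a role named "read", so it never satisfies that permission; and the RuleBasedAuthorizationPlugin evaluates permissions in order, so placing the catch-all "all" permission (role "admin") before "read" can deny read-only users even when the roles line up. A toy model of first-match evaluation (illustrative only, not the plugin's actual code):

```java
import java.util.List;
import java.util.Set;

public class FirstMatchAuthz {
    static class Permission {
        final String name, role;
        Permission(String name, String role) { this.name = name; this.role = role; }
    }

    // First permission whose name matches the request decides the outcome;
    // a permission named "all" matches every request.
    static boolean allowed(List<Permission> perms, String request, Set<String> userRoles) {
        for (Permission p : perms) {
            if (p.name.equals("all") || p.name.equals(request)) {
                return userRoles.contains(p.role);
            }
        }
        return true; // no matching rule: the request is unprotected
    }

    public static void main(String[] args) {
        List<Permission> allFirst  = List.of(new Permission("all", "admin"),
                                             new Permission("read", "search"));
        List<Permission> readFirst = List.of(new Permission("read", "search"),
                                             new Permission("all", "admin"));
        Set<String> searchUser = Set.of("search"); // user holding only "search"
        // "all" listed first: matched first, role "admin" required -> denied (403).
        if (allowed(allFirst, "read", searchUser)) throw new AssertionError();
        // "read" listed first: role "search" suffices -> allowed.
        if (!allowed(readFirst, "read", searchUser)) throw new AssertionError();
        System.out.println("permission order decides the outcome");
    }
}
```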



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-MacOSX (64bit/jdk-11.0.3) - Build # 5261 - Still Failing!

2019-07-16 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-MacOSX/5261/
Java: 64bit/jdk-11.0.3 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

All tests passed

Build Log:
[...truncated 2039 lines...]
   [junit4] JVM J1: stderr was not empty, see: 
/Users/jenkins/workspace/Lucene-Solr-master-MacOSX/lucene/build/core/test/temp/junit4-J1-20190716_132555_08913910815186731476399.syserr
   [junit4] >>> JVM J1 emitted unexpected output (verbatim) 
   [junit4] OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was 
deprecated in version 9.0 and will likely be removed in a future release.
   [junit4] <<< JVM J1: EOF 

   [junit4] JVM J0: stderr was not empty, see: 
/Users/jenkins/workspace/Lucene-Solr-master-MacOSX/lucene/build/core/test/temp/junit4-J0-20190716_132555_0896076270848643970962.syserr
   [junit4] >>> JVM J0 emitted unexpected output (verbatim) 
   [junit4] OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was 
deprecated in version 9.0 and will likely be removed in a future release.
   [junit4] <<< JVM J0: EOF 

[...truncated 293 lines...]
   [junit4] JVM J1: stderr was not empty, see: 
/Users/jenkins/workspace/Lucene-Solr-master-MacOSX/lucene/build/test-framework/test/temp/junit4-J1-20190716_133753_50916757450141272565088.syserr
   [junit4] >>> JVM J1 emitted unexpected output (verbatim) 
   [junit4] OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was 
deprecated in version 9.0 and will likely be removed in a future release.
   [junit4] <<< JVM J1: EOF 

[...truncated 3 lines...]
   [junit4] JVM J0: stderr was not empty, see: 
/Users/jenkins/workspace/Lucene-Solr-master-MacOSX/lucene/build/test-framework/test/temp/junit4-J0-20190716_133753_50917617310586697881386.syserr
   [junit4] >>> JVM J0 emitted unexpected output (verbatim) 
   [junit4] OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was 
deprecated in version 9.0 and will likely be removed in a future release.
   [junit4] <<< JVM J0: EOF 

[...truncated 1083 lines...]
   [junit4] JVM J0: stderr was not empty, see: 
/Users/jenkins/workspace/Lucene-Solr-master-MacOSX/lucene/build/analysis/common/test/temp/junit4-J0-20190716_133915_340146200408759655.syserr
   [junit4] >>> JVM J0 emitted unexpected output (verbatim) 
   [junit4] OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was 
deprecated in version 9.0 and will likely be removed in a future release.
   [junit4] <<< JVM J0: EOF 

[...truncated 3 lines...]
   [junit4] JVM J1: stderr was not empty, see: 
/Users/jenkins/workspace/Lucene-Solr-master-MacOSX/lucene/build/analysis/common/test/temp/junit4-J1-20190716_133915_34016663573349537412151.syserr
   [junit4] >>> JVM J1 emitted unexpected output (verbatim) 
   [junit4] OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was 
deprecated in version 9.0 and will likely be removed in a future release.
   [junit4] <<< JVM J1: EOF 

[...truncated 242 lines...]
   [junit4] JVM J0: stderr was not empty, see: 
/Users/jenkins/workspace/Lucene-Solr-master-MacOSX/lucene/build/analysis/icu/test/temp/junit4-J0-20190716_134319_6467494984778118810222.syserr
   [junit4] >>> JVM J0 emitted unexpected output (verbatim) 
   [junit4] OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was 
deprecated in version 9.0 and will likely be removed in a future release.
   [junit4] <<< JVM J0: EOF 

[...truncated 3 lines...]
   [junit4] JVM J1: stderr was not empty, see: 
/Users/jenkins/workspace/Lucene-Solr-master-MacOSX/lucene/build/analysis/icu/test/temp/junit4-J1-20190716_134319_64612369825891935505429.syserr
   [junit4] >>> JVM J1 emitted unexpected output (verbatim) 
   [junit4] OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was 
deprecated in version 9.0 and will likely be removed in a future release.
   [junit4] <<< JVM J1: EOF 

[...truncated 217 lines...]
   [junit4] JVM J1: stderr was not empty, see: 
/Users/jenkins/workspace/Lucene-Solr-master-MacOSX/lucene/build/analysis/kuromoji/test/temp/junit4-J1-20190716_134340_27718256002485302218139.syserr
   [junit4] >>> JVM J1 emitted unexpected output (verbatim) 
   [junit4] OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was 
deprecated in version 9.0 and will likely be removed in a future release.
   [junit4] <<< JVM J1: EOF 

[...truncated 3 lines...]
   [junit4] JVM J0: stderr was not empty, see: 
/Users/jenkins/workspace/Lucene-Solr-master-MacOSX/lucene/build/analysis/kuromoji/test/temp/junit4-J0-20190716_134340_2779348738682783998857.syserr
   [junit4] >>> JVM J0 emitted unexpected output (verbatim) 
   [junit4] OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was 
deprecated in version 9.0 and will likely be removed in a future release.
   [junit4] <<< JVM J0: EOF 

[...truncated 154 lines...]
   [junit4] JVM J0: stderr was not empty, see: 

[JENKINS] Lucene-Solr-NightlyTests-8.x - Build # 152 - Still Unstable

2019-07-16 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-8.x/152/

1 tests failed.
FAILED:  
org.apache.solr.cloud.autoscaling.HdfsAutoAddReplicasIntegrationTest.testSimple

Error Message:
Waiting for collection testSimple2 Timeout waiting to see state for 
collection=testSimple2 
:DocCollection(testSimple2//collections/testSimple2/state.json/26)={   
"pullReplicas":"0",   "replicationFactor":"2",   "shards":{ "shard1":{  
 "range":"8000-",   "state":"active",   "replicas":{
 "core_node3":{   
"dataDir":"hdfs://localhost:35131/solr_hdfs_home/testSimple2/core_node3/data/", 
  "base_url":"https://127.0.0.1:41177/solr;,   
"node_name":"127.0.0.1:41177_solr",   "type":"NRT",   
"force_set_state":"false",   
"ulogDir":"hdfs://localhost:35131/solr_hdfs_home/testSimple2/core_node3/data/tlog",
   "core":"testSimple2_shard1_replica_n1",   
"shared_storage":"true",   "state":"down"}, "core_node5":{  
 
"dataDir":"hdfs://localhost:35131/solr_hdfs_home/testSimple2/core_node5/data/", 
  "base_url":"https://127.0.0.1:43125/solr;,   
"node_name":"127.0.0.1:43125_solr",   "type":"NRT",   
"force_set_state":"false",   
"ulogDir":"hdfs://localhost:35131/solr_hdfs_home/testSimple2/core_node5/data/tlog",
   "core":"testSimple2_shard1_replica_n2",   
"shared_storage":"true",   "state":"active",   
"leader":"true"}}}, "shard2":{   "range":"0-7fff",   
"state":"active",   "replicas":{ "core_node7":{   
"dataDir":"hdfs://localhost:35131/solr_hdfs_home/testSimple2/core_node7/data/", 
  "base_url":"https://127.0.0.1:41177/solr;,   
"node_name":"127.0.0.1:41177_solr",   "type":"NRT",   
"force_set_state":"false",   
"ulogDir":"hdfs://localhost:35131/solr_hdfs_home/testSimple2/core_node7/data/tlog",
   "core":"testSimple2_shard2_replica_n4",   
"shared_storage":"true",   "state":"down"}, "core_node8":{  
 
"dataDir":"hdfs://localhost:35131/solr_hdfs_home/testSimple2/core_node8/data/", 
  "base_url":"https://127.0.0.1:43125/solr;,   
"node_name":"127.0.0.1:43125_solr",   "type":"NRT",   
"force_set_state":"false",   
"ulogDir":"hdfs://localhost:35131/solr_hdfs_home/testSimple2/core_node8/data/tlog",
   "core":"testSimple2_shard2_replica_n6",   
"shared_storage":"true",   "state":"active",   
"leader":"true",   "router":{"name":"compositeId"},   
"maxShardsPerNode":"2",   "autoAddReplicas":"true",   "nrtReplicas":"2",   
"tlogReplicas":"0"} Live Nodes: [127.0.0.1:42073_solr, 127.0.0.1:43125_solr] 
Last available state: 
DocCollection(testSimple2//collections/testSimple2/state.json/26)={   
"pullReplicas":"0",   "replicationFactor":"2",   "shards":{ "shard1":{  
 "range":"8000-",   "state":"active",   "replicas":{
 "core_node3":{   
"dataDir":"hdfs://localhost:35131/solr_hdfs_home/testSimple2/core_node3/data/", 
  "base_url":"https://127.0.0.1:41177/solr;,   
"node_name":"127.0.0.1:41177_solr",   "type":"NRT",   
"force_set_state":"false",   
"ulogDir":"hdfs://localhost:35131/solr_hdfs_home/testSimple2/core_node3/data/tlog",
   "core":"testSimple2_shard1_replica_n1",   
"shared_storage":"true",   "state":"down"}, "core_node5":{  
 
"dataDir":"hdfs://localhost:35131/solr_hdfs_home/testSimple2/core_node5/data/", 
  "base_url":"https://127.0.0.1:43125/solr;,   
"node_name":"127.0.0.1:43125_solr",   "type":"NRT",   
"force_set_state":"false",   
"ulogDir":"hdfs://localhost:35131/solr_hdfs_home/testSimple2/core_node5/data/tlog",
   "core":"testSimple2_shard1_replica_n2",   
"shared_storage":"true",   "state":"active",   
"leader":"true"}}}, "shard2":{   "range":"0-7fff",   
"state":"active",   "replicas":{ "core_node7":{   
"dataDir":"hdfs://localhost:35131/solr_hdfs_home/testSimple2/core_node7/data/", 
  "base_url":"https://127.0.0.1:41177/solr;,   
"node_name":"127.0.0.1:41177_solr",   "type":"NRT",   
"force_set_state":"false",   
"ulogDir":"hdfs://localhost:35131/solr_hdfs_home/testSimple2/core_node7/data/tlog",
   "core":"testSimple2_shard2_replica_n4",   
"shared_storage":"true",   "state":"down"}, "core_node8":{  
 
"dataDir":"hdfs://localhost:35131/solr_hdfs_home/testSimple2/core_node8/data/", 
  "base_url":"https://127.0.0.1:43125/solr;,   
"node_name":"127.0.0.1:43125_solr",   "type":"NRT",   
"force_set_state":"false",   

[jira] [Commented] (SOLR-13125) Optimize Queries when sorting by router.field

2019-07-16 Thread Gus Heck (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13125?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16886151#comment-16886151
 ] 

Gus Heck commented on SOLR-13125:
-

The idea behind this patch is interesting. Unless I misunderstand the intent, 
the idea is to short-circuit the response collection when the TRA collection 
names tell us that further responses will all return docs that are too far down 
the result list to ever be included. Unfortunately, I don't think this patch 
does that. Issues I see:
 * This patch overrides finishStage() instead of handleResponses(), which means 
that by the time your logic runs, all responses have already been received.
 * I don't see logic to handle values of the start parameter.
 * I'm also not sure I like the tests checking debug messages rather than 
actual code behavior. That could get out of sync.

In any case, it's unclear to me whether this can be handled in a search 
component without core changes. Even if you override handleResponses() instead, 
you can't stop SearchHandler from looping and attempting to take() the results 
of every request that was sent (unless you throw an exception, but that won't 
be good). What you would need to do is somehow influence the futures that Solr 
is waiting on to return early and empty once your request has been filled from 
the most recent collections (see 
org/apache/solr/handler/component/HttpShardHandler.java:281). Barring that, you 
could perhaps find a way to empty the pending queue, but that means you still 
have to wait for at least one uninteresting request to complete. The futures 
themselves would be waiting on the 
org/apache/solr/handler/component/HttpShardHandler.java:201 call to 
makeLoadBalancedRequest(), so I think this optimization requires the addition 
of an explicit short-circuit enabling hook. Possibly this could be a new method 
for SearchComponents to override, but we need to think about how that would 
play with the assumptions of existing code.
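The short-circuit being discussed can be sketched with a CompletionService: consume shard responses as they complete, stop once the limit is satisfied, and cancel the still-pending futures, which is exactly the explicit hook SearchHandler currently lacks. This is a standalone illustrative sketch; it ignores ordering by router.field and all of the real HttpShardHandler plumbing.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.*;

public class ShortCircuitFanOut {
    // Fan out one "shard request" per collection; stop taking results once we
    // have 'limit' docs, cancelling whatever is still pending.
    static List<Integer> query(List<Callable<List<Integer>>> shards, int limit)
            throws InterruptedException, ExecutionException {
        ExecutorService pool = Executors.newFixedThreadPool(shards.size());
        CompletionService<List<Integer>> cs = new ExecutorCompletionService<>(pool);
        List<Future<List<Integer>>> futures = new ArrayList<>();
        for (Callable<List<Integer>> c : shards) futures.add(cs.submit(c));
        List<Integer> docs = new ArrayList<>();
        try {
            for (int i = 0; i < shards.size() && docs.size() < limit; i++) {
                docs.addAll(cs.take().get()); // completion order, not submit order
            }
        } finally {
            for (Future<List<Integer>> f : futures) f.cancel(true); // the short-circuit hook
            pool.shutdownNow();
        }
        return docs.size() > limit ? docs.subList(0, limit) : docs;
    }

    public static void main(String[] args) throws Exception {
        Callable<List<Integer>> recent = () -> List.of(1, 2, 3);
        Callable<List<Integer>> old = () -> { Thread.sleep(5_000); return List.of(9); };
        // The limit is satisfied by the fast (recent) collection; the slow one
        // is cancelled instead of being waited on.
        List<Integer> hits = query(List.of(recent, old), 3);
        if (hits.size() != 3) throw new AssertionError();
        System.out.println("returned without waiting for the slow collection");
    }
}
```

In real Solr the harder part is the ordering constraint: the issuing node must wait for the newest collection first, not merely the fastest responder, before it can decide that remaining responses are safe to discard.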

> Optimize Queries when sorting by router.field
> -
>
> Key: SOLR-13125
> URL: https://issues.apache.org/jira/browse/SOLR-13125
> Project: Solr
>  Issue Type: Sub-task
>Reporter: mosh
>Priority: Minor
> Attachments: SOLR-13125-no-commit.patch, SOLR-13125.patch, 
> SOLR-13125.patch, SOLR-13125.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> We are currently testing TRA using Solr 7.7, having >300 shards in the alias, 
> with much growth in the coming months.
> The "hot" data(in our case, more recent) will be stored on stronger 
> nodes(SSD, more RAM, etc).
> A proposal of optimizing queries sorted by router.field(the field which TRA 
> uses to route the data to the correct collection) has emerged.
> Perhaps, in queries which are sorted by router.field, Solr could be smart 
> enough to wait for the more recent collections, and in case the limit was 
> reached cancel other queries(or just not block and wait for the results)?
> For example:
> When querying a TRA which with a filter on a different field than 
> router.field, but sorting by router.field desc, limit=100.
> Since this is a TRA, solr will issue queries for all the collections in the 
> alias.
> But to optimize this particular type of query, Solr could wait for the most 
> recent collection in the TRA, see whether the result set matches or exceeds 
> the limit. If so, the query could be returned to the user without waiting for 
> the rest of the shards. If not, the issuing node will block until the second 
> query returns, and so forth, until the limit of the request is reached.
> This might also be useful for deep paging, querying each collection and only 
> skipping to the next once there are no more results in the specified 
> collection.
> Thoughts or inputs are always welcome.
> This is just my two cents, and I'm always happy to brainstorm.
> Thanks in advance.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] [lucene-solr] atris commented on issue #769: LUCENE-8905: Better Error Handling For Illegal Arguments

2019-07-16 Thread GitBox
atris commented on issue #769: LUCENE-8905: Better Error Handling For Illegal 
Arguments
URL: https://github.com/apache/lucene-solr/pull/769#issuecomment-511831246
 
 
   > @atris I think the check you introduced always fails a query when there 
are no hits?
   
   @jpountz Thanks for highlighting that. Interesting that TopDocsCollector 
handles no hits and illegal argument cases in the same check.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] [lucene-solr] bruno-roustant commented on a change in pull request #780: SOLR-11866: Support efficient subset matching in query elevation rules

2019-07-16 Thread GitBox
bruno-roustant commented on a change in pull request #780: SOLR-11866: Support 
efficient subset matching in query elevation rules
URL: https://github.com/apache/lucene-solr/pull/780#discussion_r303909793
 
 

 ##
 File path: 
solr/core/src/java/org/apache/solr/handler/component/QueryElevationComponent.java
 ##
 @@ -701,28 +703,20 @@ protected ElevationProvider 
createElevationProvider(Map queryTerms, 
Appendable concatenatedTerms) {
+  protected void analyzeQuery(String query, Consumer termsConsumer) {
 try (TokenStream tokens = queryAnalyzer.tokenStream("", query)) {
   tokens.reset();
   CharTermAttribute termAtt = tokens.addAttribute(CharTermAttribute.class);
   while (tokens.incrementToken()) {
-if (queryTerms != null) {
-  queryTerms.add(termAtt.toString());
-}
-if (concatenatedTerms != null) {
-  concatenatedTerms.append(termAtt);
-}
+termsConsumer.accept(termAtt.toString());
 
 Review comment:
   I'll change to a Consumer of CharSequence.





[jira] [Commented] (LUCENE-8883) CHANGES.txt: Auto add issue categories on new releases

2019-07-16 Thread Adrien Grand (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8883?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16886114#comment-16886114
 ] 

Adrien Grand commented on LUCENE-8883:
--

I have a slight preference for having "Optimizations" as one category.

> CHANGES.txt: Auto add issue categories on new releases
> --
>
> Key: LUCENE-8883
> URL: https://issues.apache.org/jira/browse/LUCENE-8883
> Project: Lucene - Core
>  Issue Type: Task
>  Components: general/build
>Reporter: David Smiley
>Assignee: David Smiley
>Priority: Minor
> Attachments: LUCENE-8883.patch, LUCENE-8883.patch
>
>
> As I write this, looking at Solr's CHANGES.txt for 8.2 I see we have some 
> sections: "Upgrade Notes", "New Features", "Bug Fixes", and "Other Changes".  
> There is no "Improvements" so no surprise here, the New Features category 
> has issues that ought to be listed as such.  I think the order varies as well.  
> I propose that on new releases, the initial state of the next release in 
> CHANGES.txt have these sections.  They can easily be removed at the upcoming 
> release if there are no such sections, or they could stay empty.  It seems 
> addVersion.py is the code that sets this up and it could be enhanced.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)




[jira] [Commented] (LUCENE-8883) CHANGES.txt: Auto add issue categories on new releases

2019-07-16 Thread David Smiley (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8883?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16886112#comment-16886112
 ] 

David Smiley commented on LUCENE-8883:
--

Thanks for the review, Christine; I'll rename that variable.
Adrien, I can add Optimizations as well; I'm torn either way and will accept 
your preference.
Then I think I can commit this.




[GitHub] [lucene-solr] bruno-roustant commented on a change in pull request #780: SOLR-11866: Support efficient subset matching in query elevation rules

2019-07-16 Thread GitBox
bruno-roustant commented on a change in pull request #780: SOLR-11866: Support 
efficient subset matching in query elevation rules
URL: https://github.com/apache/lucene-solr/pull/780#discussion_r303910866
 
 

 ##
 File path: 
solr/core/src/java/org/apache/solr/handler/component/QueryElevationComponent.java
 ##
 @@ -857,31 +851,33 @@ public int size() {
* 
* The terms are tokenized with the query analyzer.
*/
-  protected class SubsetMatchElevationProvider implements ElevationProvider {
+  protected class DefaultElevationProvider implements ElevationProvider {
 
-private final SubsetMatcher subsetMatcher;
+private final TrieSubsetMatcher subsetMatcher;
 private final Map exactMatchElevationMap;
 
 /**
- * @param subsetMatcherBuilder The {@link SubsetMatcher.Builder} to build 
the {@link SubsetMatcher}.
+ * @param subsetMatcherBuilder The {@link TrieSubsetMatcher.Builder} to 
build the {@link TrieSubsetMatcher}.
  * @param elevationBuilderMap The map of elevation rules.
  */
-protected SubsetMatchElevationProvider(SubsetMatcher.Builder subsetMatcherBuilder,
-   Map elevationBuilderMap) {
+protected DefaultElevationProvider(TrieSubsetMatcher.Builder subsetMatcherBuilder,
+   Map 
elevationBuilderMap) {
   exactMatchElevationMap = new LinkedHashMap<>();
  Collection<String> queryTerms = new ArrayList<>();
-  StringBuilder concatenatedTerms = new StringBuilder();
+  Consumer<String> termsConsumer = queryTerms::add;
 
 Review comment:
  Yes, it makes a difference. The lambda instance is created once, as opposed 
to inside the loop, where a lambda instance would be created on each iteration 
because the captured collection is not static.
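A minimal plain-Java sketch of the allocation point made here (hypothetical names, not the actual QueryElevationComponent code):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

public class LambdaHoisting {
    // terms::add binds the (non-static) list instance, so evaluating the
    // method reference inside a loop may allocate a fresh Consumer object on
    // every iteration; hoisting it creates one instance that is reused.
    public static List<String> analyze(List<String> tokens) {
        List<String> terms = new ArrayList<>();
        Consumer<String> termsConsumer = terms::add; // created once, up front
        for (String token : tokens) {
            termsConsumer.accept(token); // no per-iteration allocation here
        }
        return terms;
    }
}
```

Note that the language spec only says a capturing method reference *may* require a new object per evaluation; in practice HotSpot allocates one each time.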






[JENKINS] Lucene-Solr-8.x-Windows (32bit/jdk1.8.0_201) - Build # 370 - Failure!

2019-07-16 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-8.x-Windows/370/
Java: 32bit/jdk1.8.0_201 -server -XX:+UseConcMarkSweepGC

All tests passed

Build Log:
[...truncated 16248 lines...]
   [junit4] JVM J0: stdout was not empty, see: 
C:\Users\jenkins\workspace\Lucene-Solr-8.x-Windows\solr\build\solr-core\test\temp\junit4-J0-20190716_111758_641395798375964897894.sysout
   [junit4] >>> JVM J0 emitted unexpected output (verbatim) 
   [junit4] java.lang.OutOfMemoryError: Java heap space
   [junit4] Dumping heap to 
C:\Users\jenkins\workspace\Lucene-Solr-8.x-Windows\heapdumps\java_pid10500.hprof
 ...
   [junit4] Heap dump file created [222063227 bytes in 2.520 secs]
   [junit4] <<< JVM J0: EOF 

[...truncated 9496 lines...]
BUILD FAILED
C:\Users\jenkins\workspace\Lucene-Solr-8.x-Windows\build.xml:634: The following 
error occurred while executing this line:
C:\Users\jenkins\workspace\Lucene-Solr-8.x-Windows\build.xml:586: Some of the 
tests produced a heap dump, but did not fail. Maybe a suppressed 
OutOfMemoryError? Dumps created:
* java_pid10500.hprof

Total time: 164 minutes 0 seconds
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
Setting 
ANT_1_8_2_HOME=C:\Users\jenkins\tools\hudson.tasks.Ant_AntInstallation\ANT_1.8.2
[WARNINGS] Skipping publisher since build result is FAILURE
Recording test results
Setting 
ANT_1_8_2_HOME=C:\Users\jenkins\tools\hudson.tasks.Ant_AntInstallation\ANT_1.8.2
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any
Setting 
ANT_1_8_2_HOME=C:\Users\jenkins\tools\hudson.tasks.Ant_AntInstallation\ANT_1.8.2
Setting 
ANT_1_8_2_HOME=C:\Users\jenkins\tools\hudson.tasks.Ant_AntInstallation\ANT_1.8.2
Setting 
ANT_1_8_2_HOME=C:\Users\jenkins\tools\hudson.tasks.Ant_AntInstallation\ANT_1.8.2
Setting 
ANT_1_8_2_HOME=C:\Users\jenkins\tools\hudson.tasks.Ant_AntInstallation\ANT_1.8.2
Setting 
ANT_1_8_2_HOME=C:\Users\jenkins\tools\hudson.tasks.Ant_AntInstallation\ANT_1.8.2
Setting 
ANT_1_8_2_HOME=C:\Users\jenkins\tools\hudson.tasks.Ant_AntInstallation\ANT_1.8.2


[GitHub] [lucene-solr] magibney commented on a change in pull request #677: SOLR-13257: support for stable replica routing preferences

2019-07-16 Thread GitBox
magibney commented on a change in pull request #677: SOLR-13257: support for 
stable replica routing preferences
URL: https://github.com/apache/lucene-solr/pull/677#discussion_r303906601
 
 

 ##
 File path: 
solr/core/src/java/org/apache/solr/handler/component/HttpShardHandlerFactory.java
 ##
 @@ -449,9 +556,83 @@ private static boolean hasReplicaType(Object o, String 
preferred) {
 }
   }
 
+  private final ReplicaListTransformerFactory randomRltFactory = (String 
configSpec, SolrQueryRequest request,
+  ReplicaListTransformerFactory fallback) -> 
shufflingReplicaListTransformer;
+  private ReplicaListTransformerFactory stableRltFactory;
+  private ReplicaListTransformerFactory defaultRltFactory;
+
+  /**
+   * Private class responsible for applying pairwise sort based on inherent 
replica attributes,
+   * and subsequently reordering any equivalent replica sets according to 
behavior specified
+   * by the baseReplicaListTransformer.
+   */
+  private static final class TopLevelReplicaListTransformer implements 
ReplicaListTransformer {
+
+private final NodePreferenceRulesComparator replicaComp;
+private final ReplicaListTransformer baseReplicaListTransformer;
+
+public TopLevelReplicaListTransformer(NodePreferenceRulesComparator 
replicaComp, ReplicaListTransformer baseReplicaListTransformer) {
+  this.replicaComp = replicaComp;
+  this.baseReplicaListTransformer = baseReplicaListTransformer;
+}
+
+@Override
+public void transform(List choices) {
 
 Review comment:
   True, random-with-seed should work, and I'm open to that. One use case that 
would benefit from a list-rotation-based implementation (as opposed to 
random-with-seed) would be if you wanted to set up a tiered or grouped routing 
system.
   
   For example, replication factor of 2, say you have two distinct types of 
users with different access patterns (that would result in different cache 
usage patterns). Rather than simply routing deterministically for a given 
user/request, you could choose one of exactly 2 routing params, bifurcating 
traffic depending on expected use. With list-rotation, you could specify 
`routingParam=0` or `routingParam=1` and get the desired behavior; 
random-with-seed might not work that way, depending on the exact seeds chosen.
   
   Granted you could achieve this tiered/grouped routing in other ways, but 
with a list-rotation-based implementation it would be trivial. The cost, as you 
say, would be some additional complexity (code-wise, not performance-wise) in 
`HttpShardHandlerFactory`.
   
   In fact, it's this type of use case that I had in mind when providing 
support for the ability to specify `dividend` directly (as opposed to always 
hashing the routing param, which would be similarly opaque and thus 
incompatible with a tiered/grouped routing strategy).
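The list-rotation idea can be sketched as follows; `route` and `dividend` are hypothetical names, not the actual `HttpShardHandlerFactory` API:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class RotationRouting {
    // Rotate a list of equivalent replicas by a caller-supplied dividend:
    // dividend=0 and dividend=1 deterministically map two user groups onto
    // two distinct, stable preferred replicas.
    public static <T> List<T> route(List<T> equivalentReplicas, int dividend) {
        List<T> ordered = new ArrayList<>(equivalentReplicas);
        if (!ordered.isEmpty()) {
            // Negative distance rotates left, so dividend N prefers replica N.
            Collections.rotate(ordered, -(dividend % ordered.size()));
        }
        return ordered;
    }
}
```

With a replication factor of 2, `dividend=0` prefers the first replica and `dividend=1` the second, giving the bifurcated traffic described above.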





[GitHub] [lucene-solr] atris commented on issue #779: LUCENE-8762: Introduce Specialized Impacts For Doc + Freq

2019-07-16 Thread GitBox
atris commented on issue #779: LUCENE-8762: Introduce Specialized Impacts For 
Doc + Freq
URL: https://github.com/apache/lucene-solr/pull/779#issuecomment-511813095
 
 
   @jpountz I have addressed your comments; please let me know if they look fine.
   
   luceneutil is running right now with 10k documents; I will post results ASAP





[jira] [Commented] (LUCENE-8911) Backport LUCENE-8778 (improved analysis SPI name handling) to 8.x

2019-07-16 Thread Tomoko Uchida (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8911?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16886104#comment-16886104
 ] 

Tomoko Uchida commented on LUCENE-8911:
---

Thank you. It seems that the mock tokenizer class which was added here does not 
generate a proper token stream under certain conditions. I will fix the mock 
class.
In any case, I agree with you that we should wait and see how the changes here 
behave.

> Backport LUCENE-8778 (improved analysis SPI name handling) to 8.x
> -
>
> Key: LUCENE-8911
> URL: https://issues.apache.org/jira/browse/LUCENE-8911
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/analysis
>Reporter: Tomoko Uchida
>Assignee: Tomoko Uchida
>Priority: Minor
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> In LUCENE-8907 I reverted LUCENE-8778 from the 8x branch.
> Can we backport it to 8x branch again, with transparent backwards 
> compatibility (by emulating the factory loading method of Lucene 8.1)?
> I am not so sure whether it would be better to backport the changes; 
> however, maybe it is good for Solr to have SOLR-13593 without waiting for 
> release 9.0.






[GitHub] [lucene-solr] dsmiley commented on a change in pull request #780: SOLR-11866: Support efficient subset matching in query elevation rules

2019-07-16 Thread GitBox
dsmiley commented on a change in pull request #780: SOLR-11866: Support 
efficient subset matching in query elevation rules
URL: https://github.com/apache/lucene-solr/pull/780#discussion_r303890649
 
 

 ##
 File path: 
solr/core/src/java/org/apache/solr/handler/component/QueryElevationComponent.java
 ##
 @@ -701,28 +703,20 @@ protected ElevationProvider 
createElevationProvider(Map queryTerms, 
Appendable concatenatedTerms) {
+  protected void analyzeQuery(String query, Consumer<String> termsConsumer) {
 try (TokenStream tokens = queryAnalyzer.tokenStream("", query)) {
   tokens.reset();
   CharTermAttribute termAtt = tokens.addAttribute(CharTermAttribute.class);
   while (tokens.incrementToken()) {
-if (queryTerms != null) {
-  queryTerms.add(termAtt.toString());
-}
-if (concatenatedTerms != null) {
-  concatenatedTerms.append(termAtt);
-}
+termsConsumer.accept(termAtt.toString());
 
 Review comment:
   the termAtt.toString() is a bit of a shame since as a CharSequence, the 
string appending consumer needn't get the String.  But it's not a big deal.
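To illustrate the remark, here is a sketch under the assumption that the consumer type becomes `Consumer<CharSequence>`; `StringBuilder` stands in for `CharTermAttribute`, which also implements `CharSequence`:

```java
import java.util.function.Consumer;

public class CharSequenceConsumer {
    // A Consumer<CharSequence> can append term text directly, without
    // materializing a String via toString() for every token.
    public static String concatenate(Iterable<? extends CharSequence> terms) {
        StringBuilder out = new StringBuilder();
        Consumer<CharSequence> appender = out::append; // no toString() needed
        for (CharSequence term : terms) {
            appender.accept(term);
        }
        return out.toString();
    }
}
```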





[GitHub] [lucene-solr] dsmiley commented on a change in pull request #780: SOLR-11866: Support efficient subset matching in query elevation rules

2019-07-16 Thread GitBox
dsmiley commented on a change in pull request #780: SOLR-11866: Support 
efficient subset matching in query elevation rules
URL: https://github.com/apache/lucene-solr/pull/780#discussion_r303891904
 
 

 ##
 File path: 
solr/core/src/java/org/apache/solr/handler/component/QueryElevationComponent.java
 ##
 @@ -896,13 +892,18 @@ public Elevation getElevationForQuery(String 
queryString) {
 }
 return exactMatchElevationMap.get(analyzeQuery(queryString));
   }
-  StringBuilder concatenatedTerms = hasExactMatchElevationRules ? new 
StringBuilder() : null;
  Collection<String> queryTerms = new ArrayList<>();
-  analyzeQuery(queryString, queryTerms, concatenatedTerms);
+  Consumer<String> termsConsumer = queryTerms::add;
+  StringBuilder concatTerms = null;
+  if (hasExactMatchElevationRules) {
+concatTerms = new StringBuilder();
+termsConsumer = termsConsumer.andThen(concatTerms::append);
 
 Review comment:
   Ha!  Love it
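The `andThen` composition in the diff above can be sketched in isolation (hypothetical method, not the component's actual code):

```java
import java.util.List;
import java.util.function.Consumer;

public class ComposedConsumers {
    // One consumer collects terms into the provided list; a second consumer
    // that concatenates them is chained on via Consumer.andThen, so each
    // token is delivered to both in a single pass.
    public static String collectAndConcat(List<String> tokens, List<String> queryTerms) {
        Consumer<String> termsConsumer = queryTerms::add;
        StringBuilder concatTerms = new StringBuilder();
        termsConsumer = termsConsumer.andThen(concatTerms::append);
        tokens.forEach(termsConsumer);
        return concatTerms.toString();
    }
}
```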





[GitHub] [lucene-solr] dsmiley commented on a change in pull request #780: SOLR-11866: Support efficient subset matching in query elevation rules

2019-07-16 Thread GitBox
dsmiley commented on a change in pull request #780: SOLR-11866: Support 
efficient subset matching in query elevation rules
URL: https://github.com/apache/lucene-solr/pull/780#discussion_r303891477
 
 

 ##
 File path: 
solr/core/src/java/org/apache/solr/handler/component/QueryElevationComponent.java
 ##
 @@ -857,31 +851,33 @@ public int size() {
* 
* The terms are tokenized with the query analyzer.
*/
-  protected class SubsetMatchElevationProvider implements ElevationProvider {
+  protected class DefaultElevationProvider implements ElevationProvider {
 
-private final SubsetMatcher subsetMatcher;
+private final TrieSubsetMatcher subsetMatcher;
 private final Map exactMatchElevationMap;
 
 /**
- * @param subsetMatcherBuilder The {@link SubsetMatcher.Builder} to build 
the {@link SubsetMatcher}.
+ * @param subsetMatcherBuilder The {@link TrieSubsetMatcher.Builder} to 
build the {@link TrieSubsetMatcher}.
  * @param elevationBuilderMap The map of elevation rules.
  */
-protected SubsetMatchElevationProvider(SubsetMatcher.Builder subsetMatcherBuilder,
-   Map elevationBuilderMap) {
+protected DefaultElevationProvider(TrieSubsetMatcher.Builder subsetMatcherBuilder,
+   Map 
elevationBuilderMap) {
   exactMatchElevationMap = new LinkedHashMap<>();
  Collection<String> queryTerms = new ArrayList<>();
-  StringBuilder concatenatedTerms = new StringBuilder();
+  Consumer<String> termsConsumer = queryTerms::add;
 
 Review comment:
   Is pulling the method reference outside the loop actually more efficient?  
Please point to info showing this.  I'm kinda sad if it is... I like the 
clean/simplicity of it used where needed.  @uschindler I have a feeling you know





[GitHub] [lucene-solr] dsmiley commented on a change in pull request #780: SOLR-11866: Support efficient subset matching in query elevation rules

2019-07-16 Thread GitBox
dsmiley commented on a change in pull request #780: SOLR-11866: Support 
efficient subset matching in query elevation rules
URL: https://github.com/apache/lucene-solr/pull/780#discussion_r303893047
 
 

 ##
 File path: solr/solr-ref-guide/src/the-query-elevation-component.adoc
 ##
 @@ -93,11 +93,19 @@ Elevated query results can be configured in an external 
XML file specified in th
   
  
   
+
+  
+
+  
 
 
 
 In this example, the query "foo bar" would first return documents 1, 2 and 3, 
then whatever normally appears for the same query. For the query "ipod", it 
would first return "MA147LL/A", and would make sure that "IW-02" is not in the 
result set.
 
Notice the `match` parameter with the value "subset" for the third rule. A 
query "bill bar foo" would trigger this rule because the rule defines a subset 
of terms to appear in the query, in any order. This query would elevate 
document 11 to the top.
The `match` parameter accepts either "exact" (the default) or "subset" values.
Subset matching is scalable; one can add many rules with the match="subset" 
parameter.
 
 Review comment:
   the match="subset" needs code style.





[GitHub] [lucene-solr] jpountz commented on a change in pull request #769: LUCENE-8905: Better Error Handling For Illegal Arguments

2019-07-16 Thread GitBox
jpountz commented on a change in pull request #769: LUCENE-8905: Better Error 
Handling For Illegal Arguments
URL: https://github.com/apache/lucene-solr/pull/769#discussion_r303890610
 
 

 ##
 File path: lucene/core/src/java/org/apache/lucene/search/TopDocsCollector.java
 ##
 @@ -136,12 +136,14 @@ public TopDocs topDocs(int start, int howMany) {
 // pq.size() or totalHits.
 int size = topDocsSize();
 
-// Don't bother to throw an exception, just return an empty TopDocs in case
-// the parameters are invalid or out of range.
-// TODO: shouldn't we throw IAE if apps give bad params here so they dont
-// have sneaky silent bugs?
-if (start < 0 || start >= size || howMany <= 0) {
-  return newTopDocs(null, start);
+
+if (start < 0 || start >= size) {
+  throw new IllegalArgumentException("Expected value of starting position 
is between 0 and " + size +
+  ", got " + start);
+}
+
+if (howMany <= 0) {
+  throw new IllegalArgumentException("Number of hits requested must be 
greater than 0");
 
 Review comment:
   can you add the value of `howMany` to the error message?
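A sketch of the suggested change, with the offending values included in both messages (hypothetical helper, not the final committed code):

```java
public class ArgChecks {
    // Validate TopDocs-style range arguments, reporting the actual values so
    // callers can see what they passed.
    public static void checkTopDocsArgs(int start, int size, int howMany) {
        if (start < 0 || start >= size) {
            throw new IllegalArgumentException(
                "Expected value of starting position between 0 and " + size + ", got " + start);
        }
        if (howMany <= 0) {
            throw new IllegalArgumentException(
                "Number of hits requested must be greater than 0, got " + howMany);
        }
    }
}
```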





[jira] [Resolved] (SOLR-13600) Basic Authentication for read role is not working

2019-07-16 Thread Jason Gerlowski (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13600?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gerlowski resolved SOLR-13600.

Resolution: Invalid

> Basic Authentication for read role is not working
> -
>
> Key: SOLR-13600
> URL: https://issues.apache.org/jira/browse/SOLR-13600
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Authorization
>Affects Versions: 8.1.1
> Environment: DEV environment
>Reporter: Nitin Asati
>Priority: Major
>  Labels: security
>
> Hello Team,
> I have upgraded the SOLR instance from 7.x to 8.1.1 and my READ role users 
> are not able to search results. 
> Upon trying to access below URL, getting the error:
> [http://localhost:8984/solr/testcore/select?q=*%3A*|http://localhost:8984/solr/xcelerate/select?q=*%3A*]
> h2. HTTP ERROR 403
> Problem accessing /solr/xcelerate/select. Reason:
> Unauthorized request, Response code: 403
>  
> Below is the content of security.json file.
>  
> {
>  "authentication":{
>  "blockUnknown":true,
>  "class":"solr.BasicAuthPlugin",
>  "credentials":{
>  "solr":"IV0EHq1OnNrj6gvRCwvFwTrZ1+z1oBbnQdiVC3otuq0= 
> Ndd7LKvVBAaZIF0QAVi1ekCfAJXr1GGfLtRUXhgrF8c=",
>  "searchuser":"hzx9wjm6baNqx08LpfevT8dNaojdMqIJMAF8cXanL1o= 
> CLDitkrBjs2FbqhOZN9Ey9Qc+5xcOJHfQTbPMC2p1eU=",
>  "solradmin":"ovgoJKFnFo43fgt5Pd7bfXBwq3+vfCO3uZXVRUi7H0Q= 
> gKRUTDGkg5RtTIgXDiKFkefuaelAWU18KlRTAv4LfFQ="},
>  "realm":"My Solr users",
>  "forwardCredentials":false,
>  "":\{"v":0}},
>  "authorization":{
>  "class":"solr.RuleBasedAuthorizationPlugin",
>  "permissions":[
>  {
>  "name":"all",
>  "role":"admin",
>  "index":1},
>  {
>  "name":"read",
>  "role":"search",
>  "index":2}],
>  "user-role":{
>  "solr":"admin",
>  "searchuser":["read"],
>  "solradmin":["admin"]},
>  "":\{"v":0}}}






[jira] [Commented] (SOLR-13600) Basic Authentication for read role is not working

2019-07-16 Thread Jason Gerlowski (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13600?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16886083#comment-16886083
 ] 

Jason Gerlowski commented on SOLR-13600:


The Solr JIRA is not a support portal.  We try to keep it clear of everything 
except confirmed bugs and proposed improvements.

If you're still looking for help with this issue, please start a thread on the 
solr-user mailing list or ask in our IRC channel.

(Before doing so, you might want to read some about what order permissions are 
evaluated in, and how that can affect authz results.  "all" rules should almost 
always come last in your security.json.)




[jira] [Commented] (LUCENE-8920) Reduce size of FSTs due to use of direct-addressing encoding

2019-07-16 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16886076#comment-16886076
 ] 

ASF subversion and git services commented on LUCENE-8920:
-

Commit d8b510bead86d4c6ec59063519894d207ee99d5e in lucene-solr's branch 
refs/heads/branch_8_2 from Michael Sokolov
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=d8b510b ]

LUCENE-8920: disable FST direct-addressing pending size reduction
  revert to FST version 6
  removed CHANGES entry


> Reduce size of FSTs due to use of direct-addressing encoding 
> -
>
> Key: LUCENE-8920
> URL: https://issues.apache.org/jira/browse/LUCENE-8920
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Mike Sokolov
>Priority: Major
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Some data can lead to worst-case ~4x RAM usage due to this optimization. 
> Several ideas were suggested to combat this on the mailing list:
> bq. I think we can improve the situation here by tracking, per-FST instance, 
> the size increase we're seeing while building (or perhaps do a preliminary 
> pass before building) in order to decide whether to apply the encoding. 
> bq. we could also make the encoding a bit more efficient. For instance I 
> noticed that arc metadata is pretty large in some cases (in the 10-20 bytes) 
> which make gaps very costly. Associating each label with a dense id and 
> having an intermediate lookup, ie. lookup label -> id and then id->arc offset 
> instead of doing label->arc directly could save a lot of space in some cases? 
> Also it seems that we are repeating the label in the arc metadata when 
> array-with-gaps is used, even though it shouldn't be necessary since the 
> label is implicit from the address?
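The label-to-id indirection suggested in the quote can be sketched like this (hypothetical structure, not Lucene's FST code):

```java
import java.util.HashMap;
import java.util.Map;

public class LabelIdArcLookup {
    // Two-step lookup: map each label to a dense id, then index a compact
    // per-id offset array, instead of a sparse label -> arc-offset table
    // whose gaps each cost a full arc-metadata slot.
    private final Map<Integer, Integer> labelToId = new HashMap<>();
    private final long[] idToArcOffset;

    public LabelIdArcLookup(int[] labels, long[] arcOffsets) {
        idToArcOffset = new long[labels.length];
        for (int id = 0; id < labels.length; id++) {
            labelToId.put(labels[id], id); // dense ids assigned in order
            idToArcOffset[id] = arcOffsets[id];
        }
    }

    // Returns the arc offset for a label, or -1 when the label is absent.
    public long arcOffset(int label) {
        Integer id = labelToId.get(label);
        return id == null ? -1 : idToArcOffset[id];
    }
}
```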






[jira] [Commented] (LUCENE-8911) Backport LUCENE-8778 (improved analysis SPI name handling) to 8.x

2019-07-16 Thread Ignacio Vera (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8911?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16886058#comment-16886058
 ] 

Ignacio Vera commented on LUCENE-8911:
--

I have my doubts about this change making 8.2. Elasticsearch CI has reported a 
failure that I think is related to this change:
{code:java}
ant test  -Dtestcase=TestFactories -Dtests.method=test 
-Dtests.seed=FEA8D71DFC111060 -Dtests.slow=true -Dtests.badapples=true 
-Dtests.locale=zh-TW -Dtests.timezone=Europe/Guernsey -Dtests.asserts=true 
-Dtests.file.encoding=ISO-8859-1{code}

> Backport LUCENE-8778 (improved analysis SPI name handling) to 8.x
> -
>
> Key: LUCENE-8911
> URL: https://issues.apache.org/jira/browse/LUCENE-8911
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/analysis
>Reporter: Tomoko Uchida
>Assignee: Tomoko Uchida
>Priority: Minor
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> In LUCENE-8907 I reverted LUCENE-8778 from the 8x branch.
> Can we backport it to 8x branch again, with transparent backwards 
> compatibility (by emulating the factory loading method of Lucene 8.1)?
> I am not sure whether it would be better to backport the changes; however, it 
> may be good for Solr to have SOLR-13593 without waiting for the 9.0 release.






[GitHub] [lucene-solr] jpountz commented on a change in pull request #779: LUCENE-8762: Introduce Specialized Impacts For Doc + Freq

2019-07-16 Thread GitBox
jpountz commented on a change in pull request #779: LUCENE-8762: Introduce 
Specialized Impacts For Doc + Freq
URL: https://github.com/apache/lucene-solr/pull/779#discussion_r303852373
 
 

 ##
 File path: 
lucene/core/src/java/org/apache/lucene/codecs/lucene50/Lucene50PostingsReader.java
 ##
 @@ -1761,6 +1763,198 @@ public long cost() {
 
   }
 
+  final class BlockImpactsDocsEnum extends ImpactsEnum {
+
+private final byte[] encoded;
+
+private final int[] docDeltaBuffer = new int[MAX_DATA_SIZE];
+private final int[] freqBuffer = new int[MAX_DATA_SIZE];
+
+private int docBufferUpto;
+
+private final Lucene50ScoreSkipReader skipper;
+
+final IndexInput docIn;
+
+final boolean indexHasPos;
+final boolean indexHasOffsets;
+final boolean indexHasPayloads;
+final boolean indexHasFreq;
+
+private int docFreq;  // number of docs in this posting list
+private int docUpto;  // how many docs we've read
+private int doc;  // doc we last read
+private int accum;// accumulator for doc deltas
+private int freq; // freq we last read
+
+// Where this term's postings start in the .doc file:
+private long docTermStartFP;
+
+// Where this term's postings start in the .pos file:
+private long posTermStartFP;
+
+// Where this term's payloads/offsets start in the .pay
+// file:
+private long payTermStartFP;
+
+private int nextSkipDoc = -1;
+
+private long seekTo = -1;
+
+public BlockImpactsDocsEnum(FieldInfo fieldInfo, IntBlockTermState termState) throws IOException {
+  indexHasOffsets = fieldInfo.getIndexOptions().compareTo(IndexOptions.DOCS_AND_FREQS_AND_POSITIONS_AND_OFFSETS) >= 0;
+  indexHasPayloads = fieldInfo.hasPayloads();
+  indexHasPos = fieldInfo.getIndexOptions().compareTo(IndexOptions.DOCS_AND_FREQS_AND_POSITIONS) >= 0;
+  indexHasFreq = fieldInfo.getIndexOptions().compareTo(IndexOptions.DOCS_AND_FREQS) >= 0;
+
+  this.docIn = Lucene50PostingsReader.this.docIn.clone();
+
+  encoded = new byte[MAX_ENCODED_SIZE];
+
+  docFreq = termState.docFreq;
+  docTermStartFP = termState.docStartFP;
+  posTermStartFP = termState.posStartFP;
+  payTermStartFP = termState.payStartFP;
+  docIn.seek(docTermStartFP);
+
+  doc = -1;
+  accum = 0;
+  docUpto = 0;
+  docBufferUpto = BLOCK_SIZE;
+
+  if (indexHasFreq == false) {
+Arrays.fill(freqBuffer, 1);
+  }
+
+  skipper = new Lucene50ScoreSkipReader(version,
+  docIn.clone(),
+  MAX_SKIP_LEVELS,
+  indexHasPos,
+  indexHasOffsets,
+  indexHasPayloads);
+  skipper.init(docTermStartFP+termState.skipOffset, docTermStartFP, posTermStartFP, payTermStartFP, docFreq);
+}
+
+@Override
+public int freq() {
+  return freq;
+}
+
+@Override
+public int docID() {
+  return doc;
+}
+
+private void refillDocs() throws IOException {
+  final int left = docFreq - docUpto;
+  assert left > 0;
+
+  if (left >= BLOCK_SIZE) {
+forUtil.readBlock(docIn, encoded, docDeltaBuffer);
+if (indexHasFreq) {
 
 Review comment:
   can you read freqs lazily like BlockDocsEnum?
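The lazy-decoding pattern the reviewer is asking for can be sketched like this. This is a simplified, hypothetical stand-in (the real BlockDocsEnum decodes blocks with ForUtil from an IndexInput, not from an array):

```java
// Sketch of lazy freq decoding: instead of decoding the freq block eagerly in
// refillDocs(), remember that a fresh block is pending and decode it only when
// freq() is first called for that block. Queries that never ask for freqs
// (e.g. pure boolean matching) then skip the decode entirely.
public class LazyFreqBlock {
    private final int[] encodedFreqs;   // stand-in for the on-disk block
    private final int[] freqBuffer;
    private boolean freqsDecoded = false;

    public LazyFreqBlock(int[] encodedFreqs) {
        this.encodedFreqs = encodedFreqs;
        this.freqBuffer = new int[encodedFreqs.length];
    }

    // Called from refillDocs(): only mark the decoded buffer as stale.
    public void refill() {
        freqsDecoded = false;
    }

    // Decode on first access; subsequent calls reuse the decoded buffer.
    public int freq(int index) {
        if (!freqsDecoded) {
            // stand-in for forUtil.readBlock(docIn, encoded, freqBuffer)
            System.arraycopy(encodedFreqs, 0, freqBuffer, 0, encodedFreqs.length);
            freqsDecoded = true;
        }
        return freqBuffer[index];
    }
}
```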


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] [lucene-solr] jpountz commented on a change in pull request #779: LUCENE-8762: Introduce Specialized Impacts For Doc + Freq

2019-07-16 Thread GitBox
jpountz commented on a change in pull request #779: LUCENE-8762: Introduce 
Specialized Impacts For Doc + Freq
URL: https://github.com/apache/lucene-solr/pull/779#discussion_r303851882
 
 

 ##
 File path: 
lucene/core/src/java/org/apache/lucene/codecs/lucene50/Lucene50PostingsReader.java
 ##
 @@ -234,7 +234,9 @@ public ImpactsEnum impacts(FieldInfo fieldInfo, 
BlockTermState state, int flags)
final boolean indexHasOffsets = fieldInfo.getIndexOptions().compareTo(IndexOptions.DOCS_AND_FREQS_AND_POSITIONS_AND_OFFSETS) >= 0;
 final boolean indexHasPayloads = fieldInfo.hasPayloads();
 
-if (indexHasPositions &&
   if (indexHasPositions == false || PostingsEnum.featureRequested(flags, PostingsEnum.POSITIONS) == false) {
+  return new BlockImpactsDocsEnum(fieldInfo, (IntBlockTermState) state);
+} else if (indexHasPositions &&
 
 Review comment:
   indentation looks wrong?





[jira] [Commented] (LUCENE-8884) Add Directory wrapper to track per-query IO counters

2019-07-16 Thread Adrien Grand (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8884?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16886043#comment-16886043
 ] 

Adrien Grand commented on LUCENE-8884:
--

I'm not seeing any attachement on this JIRA, did you forget to attach a patch?

> Add Directory wrapper to track per-query IO counters
> 
>
> Key: LUCENE-8884
> URL: https://issues.apache.org/jira/browse/LUCENE-8884
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/store
>Reporter: Michael McCandless
>Assignee: Michael McCandless
>Priority: Minor
>
> Lucene's IO abstractions ({{Directory, IndexInput/Output}}) make it really 
> easy to track counters of how many IOPs and net bytes are read for each 
> query, which is a useful metric to track/aggregate/alarm on in production or 
> dev benchmarks.
> At my day job we use these wrappers in our nightly benchmarks to catch any 
> accidental performance regressions.






[jira] [Comment Edited] (LUCENE-8894) Add APIs to tokenizer/charfilter/tokenfilter factories to get their SPI names from concrete classes

2019-07-16 Thread Tomoko Uchida (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8894?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16886041#comment-16886041
 ] 

Tomoko Uchida edited comment on LUCENE-8894 at 7/16/19 11:09 AM:
-

Hi [~ivera],
thanks for taking care of that: the changes for LUCENE-8874, LUCENE-8894, and 
LUCENE-8778 should be moved under Lucene 8.3.0. The branch_8x CHANGES.txt is 
correct.
It seems the Lucene 8.3.0 section does not exist yet in the master branch, so I 
will delay editing that for now.
Please let me know if I can/should create the 8.3.0 section in the master 
branch on my own.


was (Author: tomoko uchida):
Hi [~ivera],
thanks for taking care of that: the changes for LUCENE-8874, LUCENE-8894, and 
LUCENE-8778 should be moved under Lucene 8.3.0. The branch_8x CHANGES.txt is 
correct.
It seems the Lucene 8.3.0 section does not exist yet in the master branch, so I 
will delay editing that for now. 

> Add APIs to tokenizer/charfilter/tokenfilter factories to get their SPI names 
> from concrete classes
> ---
>
> Key: LUCENE-8894
> URL: https://issues.apache.org/jira/browse/LUCENE-8894
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/analysis
>Reporter: Tomoko Uchida
>Assignee: Tomoko Uchida
>Priority: Minor
> Fix For: master (9.0), 8.2
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Currently, reflection tricks are needed to obtain the SPI name (this is now 
> stored in static NAME fields in each factory class) from a concrete factory 
> class. While it is easy to implement that logic, it would be much better to 
> provide unified APIs to get the SPI name from a factory class. In other words, 
> the APIs would provide the "inverse" operation of the {{lookupClass(String)}} method.
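The "inverse" API the issue asks for can be sketched as a registry that maintains both directions of the mapping. This is a hypothetical illustration only; the actual Lucene API exposes the static NAME field of each factory class, and all names below are invented:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of an SPI registry offering both directions:
// name -> class (lookupClass) and its inverse, class -> name (lookupSPIName).
public class SpiRegistry {
    private final Map<String, Class<?>> nameToClass = new HashMap<>();
    private final Map<Class<?>, String> classToName = new HashMap<>();

    public void register(String name, Class<?> clazz) {
        nameToClass.put(name, clazz);
        classToName.put(clazz, name);
    }

    public Class<?> lookupClass(String name) {
        return nameToClass.get(name);
    }

    // The inverse operation: concrete factory class -> SPI name,
    // with no reflection tricks needed by the caller.
    public String lookupSPIName(Class<?> clazz) {
        return classToName.get(clazz);
    }
}
```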






[GitHub] [lucene-solr] jpountz commented on a change in pull request #786: LUCENE-8916: GraphTokenStreamFiniteStrings preserves all incoming attributes

2019-07-16 Thread GitBox
jpountz commented on a change in pull request #786: LUCENE-8916: 
GraphTokenStreamFiniteStrings preserves all incoming attributes
URL: https://github.com/apache/lucene-solr/pull/786#discussion_r303849314
 
 

 ##
 File path: 
lucene/core/src/test/org/apache/lucene/util/graph/TestGraphTokenStreamFiniteStrings.java
 ##
 @@ -44,13 +48,13 @@ private void assertTokenStream(TokenStream ts, String[] 
terms, int[] increments)
 assertNotNull(terms);
 assertNotNull(increments);
 assertEquals(terms.length, increments.length);
-BytesTermAttribute termAtt = ts.getAttribute(BytesTermAttribute.class);
+CharTermAttribute termAtt = ts.getAttribute(CharTermAttribute.class);
 PositionIncrementAttribute incrAtt = 
ts.getAttribute(PositionIncrementAttribute.class);
 int offset = 0;
 while (ts.incrementToken()) {
   // verify term and increment
   assert offset < terms.length;
-  assertEquals(terms[offset], termAtt.getBytesRef().utf8ToString());
+  assertEquals(terms[offset], termAtt.toString());
 
 Review comment:
   ok thanks, I was confusing BytesTermAttribute with TermToBytesRefAttribute





[jira] [Commented] (LUCENE-8894) Add APIs to tokenizer/charfilter/tokenfilter factories to get their SPI names from concrete classes

2019-07-16 Thread Tomoko Uchida (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8894?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16886041#comment-16886041
 ] 

Tomoko Uchida commented on LUCENE-8894:
---

Hi [~ivera],
thanks for taking care of that: the changes for LUCENE-8874, LUCENE-8894, and 
LUCENE-8778 should be moved under Lucene 8.3.0. The branch_8x CHANGES.txt is 
correct.
It seems the Lucene 8.3.0 section does not exist yet in the master branch, so I 
will delay editing that for now. 

> Add APIs to tokenizer/charfilter/tokenfilter factories to get their SPI names 
> from concrete classes
> ---
>
> Key: LUCENE-8894
> URL: https://issues.apache.org/jira/browse/LUCENE-8894
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/analysis
>Reporter: Tomoko Uchida
>Assignee: Tomoko Uchida
>Priority: Minor
> Fix For: master (9.0), 8.2
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Currently, reflection tricks are needed to obtain the SPI name (this is now 
> stored in static NAME fields in each factory class) from a concrete factory 
> class. While it is easy to implement that logic, it would be much better to 
> provide unified APIs to get the SPI name from a factory class. In other words, 
> the APIs would provide the "inverse" operation of the {{lookupClass(String)}} method.






[GitHub] [lucene-solr] jpountz commented on a change in pull request #786: LUCENE-8916: GraphTokenStreamFiniteStrings preserves all incoming attributes

2019-07-16 Thread GitBox
jpountz commented on a change in pull request #786: LUCENE-8916: 
GraphTokenStreamFiniteStrings preserves all incoming attributes
URL: https://github.com/apache/lucene-solr/pull/786#discussion_r303849314
 
 

 ##
 File path: 
lucene/core/src/test/org/apache/lucene/util/graph/TestGraphTokenStreamFiniteStrings.java
 ##
 @@ -44,13 +48,13 @@ private void assertTokenStream(TokenStream ts, String[] 
terms, int[] increments)
 assertNotNull(terms);
 assertNotNull(increments);
 assertEquals(terms.length, increments.length);
-BytesTermAttribute termAtt = ts.getAttribute(BytesTermAttribute.class);
+CharTermAttribute termAtt = ts.getAttribute(CharTermAttribute.class);
 PositionIncrementAttribute incrAtt = 
ts.getAttribute(PositionIncrementAttribute.class);
 int offset = 0;
 while (ts.incrementToken()) {
   // verify term and increment
   assert offset < terms.length;
-  assertEquals(terms[offset], termAtt.getBytesRef().utf8ToString());
+  assertEquals(terms[offset], termAtt.toString());
 
 Review comment:
   ok





[jira] [Commented] (LUCENE-8894) Add APIs to tokenizer/charfilter/tokenfilter factories to get their SPI names from concrete classes

2019-07-16 Thread Ignacio Vera (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8894?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16886022#comment-16886022
 ] 

Ignacio Vera commented on LUCENE-8894:
--

In master the entry in CHANGES.txt is under Lucene 9.0.0, but in branch_8x the 
entry is under Lucene 8.3.0; is that correct?

 

> Add APIs to tokenizer/charfilter/tokenfilter factories to get their SPI names 
> from concrete classes
> ---
>
> Key: LUCENE-8894
> URL: https://issues.apache.org/jira/browse/LUCENE-8894
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/analysis
>Reporter: Tomoko Uchida
>Assignee: Tomoko Uchida
>Priority: Minor
> Fix For: master (9.0), 8.2
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Currently, reflection tricks are needed to obtain the SPI name (this is now 
> stored in static NAME fields in each factory class) from a concrete factory 
> class. While it is easy to implement that logic, it would be much better to 
> provide unified APIs to get the SPI name from a factory class. In other words, 
> the APIs would provide the "inverse" operation of the {{lookupClass(String)}} method.






[jira] [Updated] (SOLR-13579) Create resource management API

2019-07-16 Thread Andrzej Bialecki (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13579?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrzej Bialecki  updated SOLR-13579:
-
Description: Resource management framework API supporting the goals 
outlined in SOLR-13578.

> Create resource management API
> --
>
> Key: SOLR-13579
> URL: https://issues.apache.org/jira/browse/SOLR-13579
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Andrzej Bialecki 
>Assignee: Andrzej Bialecki 
>Priority: Major
> Attachments: SOLR-13579.patch, SOLR-13579.patch
>
>
> Resource management framework API supporting the goals outlined in SOLR-13578.






[jira] [Updated] (SOLR-13579) Create resource management API

2019-07-16 Thread Andrzej Bialecki (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13579?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrzej Bialecki  updated SOLR-13579:
-
Issue Type: New Feature  (was: Sub-task)
Parent: (was: SOLR-13578)

> Create resource management API
> --
>
> Key: SOLR-13579
> URL: https://issues.apache.org/jira/browse/SOLR-13579
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Andrzej Bialecki 
>Assignee: Andrzej Bialecki 
>Priority: Major
> Attachments: SOLR-13579.patch, SOLR-13579.patch
>
>







[jira] [Commented] (LUCENE-8911) Backport LUCENE-8778 (improved analysis SPI name handling) to 8.x

2019-07-16 Thread Tomoko Uchida (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8911?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16886015#comment-16886015
 ] 

Tomoko Uchida commented on LUCENE-8911:
---

I will update lucene/MIGRATE.txt later on.

It seems it will take some time before the first RC is created, so we could 
merge this into the 8_2 branch again, but I hesitate a bit to ask the RM to do 
so. Let me know if it would be better to get this into 8.2 rather than waiting 
for 8.3 :)

> Backport LUCENE-8778 (improved analysis SPI name handling) to 8.x
> -
>
> Key: LUCENE-8911
> URL: https://issues.apache.org/jira/browse/LUCENE-8911
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/analysis
>Reporter: Tomoko Uchida
>Assignee: Tomoko Uchida
>Priority: Minor
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> In LUCENE-8907 I reverted LUCENE-8778 from the 8x branch.
> Can we backport it to 8x branch again, with transparent backwards 
> compatibility (by emulating the factory loading method of Lucene 8.1)?
> I am not sure whether it would be better to backport the changes; however, it 
> may be good for Solr to have SOLR-13593 without waiting for the 9.0 release.






[jira] [Updated] (SOLR-13558) Allow dynamic resizing of SolrCache-s

2019-07-16 Thread Andrzej Bialecki (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13558?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrzej Bialecki  updated SOLR-13558:
-
Issue Type: Improvement  (was: Sub-task)
Parent: (was: SOLR-13578)

> Allow dynamic resizing of SolrCache-s
> -
>
> Key: SOLR-13558
> URL: https://issues.apache.org/jira/browse/SOLR-13558
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Andrzej Bialecki 
>Assignee: Andrzej Bialecki 
>Priority: Major
> Attachments: SOLR-13558.patch
>
>
> Currently SolrCache limits are configured statically and can't be 
> reconfigured without cache re-initialization (core reload), which is costly. 
> In some situations it would help to be able to dynamically re-size the cache 
> based on the resource contention (such as the total heap size used for 
> caching across all cores in a node).
> Each cache implementation already knows how to evict its entries when it runs 
> into configured limits - what is missing is to expose this mechanism using a 
> uniform API.
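The uniform API described above, layered on a cache that already knows how to evict, can be sketched as follows. This is a minimal hypothetical illustration, not the SOLR-13558 patch; all names are invented:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch of dynamic re-sizing: the cache already evicts on overflow, so
// exposing setMaxSize() lets an external resource manager shrink or grow it
// at runtime without a core reload.
public class ResizableLruCache<K, V> {
    private int maxSize;
    private final LinkedHashMap<K, V> map;

    public ResizableLruCache(int maxSize) {
        this.maxSize = maxSize;
        // accessOrder = true gives LRU iteration order
        this.map = new LinkedHashMap<K, V>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
                return size() > ResizableLruCache.this.maxSize;
            }
        };
    }

    public void put(K key, V value) { map.put(key, value); }
    public V get(K key) { return map.get(key); }
    public int size() { return map.size(); }

    // The uniform resize entry point: shrink immediately by evicting
    // least-recently-used entries until within the new limit.
    public void setMaxSize(int newMax) {
        this.maxSize = newMax;
        while (map.size() > newMax) {
            map.remove(map.keySet().iterator().next());
        }
    }
}
```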






[jira] [Commented] (SOLR-13579) Create resource management API

2019-07-16 Thread Andrzej Bialecki (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13579?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16886014#comment-16886014
 ] 

Andrzej Bialecki  commented on SOLR-13579:
--

Updated patch, with one significant change (based on the work in SOLR-13558): 
allow arbitrary limit types, i.e. Object instead of Float. This way the API 
can support controllable parameters expressed as, e.g., booleans, enums, etc.

> Create resource management API
> --
>
> Key: SOLR-13579
> URL: https://issues.apache.org/jira/browse/SOLR-13579
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Andrzej Bialecki 
>Assignee: Andrzej Bialecki 
>Priority: Major
> Attachments: SOLR-13579.patch, SOLR-13579.patch
>
>







[jira] [Updated] (SOLR-13579) Create resource management API

2019-07-16 Thread Andrzej Bialecki (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13579?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrzej Bialecki  updated SOLR-13579:
-
Attachment: SOLR-13579.patch

> Create resource management API
> --
>
> Key: SOLR-13579
> URL: https://issues.apache.org/jira/browse/SOLR-13579
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Andrzej Bialecki 
>Assignee: Andrzej Bialecki 
>Priority: Major
> Attachments: SOLR-13579.patch, SOLR-13579.patch
>
>







[jira] [Updated] (LUCENE-8884) Add Directory wrapper to track per-query IO counters

2019-07-16 Thread Michael McCandless (JIRA)


 [ 
https://issues.apache.org/jira/browse/LUCENE-8884?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael McCandless updated LUCENE-8884:
---
Status: Open  (was: Open)

Here's an initial patch, adding {{IOTrackingDirectoryWrapper}}.

Whenever a given thread is "working" on a particular query it must first call 
{{setQueryForThread}} so the wrapper knows which query's counters to increment.

It tracks number of IOPs and how many total bytes were read.

It likely impacts search performance, so it should only be used during 
profiling/benchmarking.

> Add Directory wrapper to track per-query IO counters
> 
>
> Key: LUCENE-8884
> URL: https://issues.apache.org/jira/browse/LUCENE-8884
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/store
>Reporter: Michael McCandless
>Assignee: Michael McCandless
>Priority: Minor
>
> Lucene's IO abstractions ({{Directory, IndexInput/Output}}) make it really 
> easy to track counters of how many IOPs and net bytes are read for each 
> query, which is a useful metric to track/aggregate/alarm on in production or 
> dev benchmarks.
> At my day job we use these wrappers in our nightly benchmarks to catch any 
> accidental performance regressions.






[GitHub] [lucene-solr] bruno-roustant commented on a change in pull request #780: SOLR-11866: Support efficient subset matching in query elevation rules

2019-07-16 Thread GitBox
bruno-roustant commented on a change in pull request #780: SOLR-11866: Support 
efficient subset matching in query elevation rules
URL: https://github.com/apache/lucene-solr/pull/780#discussion_r303824411
 
 

 ##
 File path: 
solr/core/src/java/org/apache/solr/handler/component/QueryElevationComponent.java
 ##
 @@ -682,35 +686,48 @@ public String getDescription() {
* Creates the {@link ElevationProvider} to set during configuration 
loading. The same instance will be used later
* when elevating results for queries.
*
-   * @param queryAnalyzer to analyze and tokenize the query.
* @param elevationBuilderMap map of all {@link ElevatingQuery} and their 
corresponding {@link ElevationBuilder}.
* @return The created {@link ElevationProvider}.
*/
-  protected ElevationProvider createElevationProvider(Analyzer queryAnalyzer, 
Map elevationBuilderMap) {
-return new MapElevationProvider(elevationBuilderMap);
+  protected ElevationProvider createElevationProvider(Map elevationBuilderMap) {
+return new SubsetMatchElevationProvider(new TrieSubsetMatcher.Builder<>(), 
elevationBuilderMap);
   }
 
   
//-
   // Query analysis and tokenization
   
//-
 
   /**
-   * Analyzes the provided query string and returns a space concatenation of 
the analyzed tokens.
+   * Analyzes the provided query string and returns a concatenation of the 
analyzed tokens.
*/
   public String analyzeQuery(String query) {
-//split query terms with analyzer then join
-StringBuilder norm = new StringBuilder();
+StringBuilder concatenatedTerms = new StringBuilder();
+analyzeQuery(query, null, concatenatedTerms);
+return concatenatedTerms.toString();
+  }
+
+  /**
+   * Analyzes the provided query string, tokenizes the terms and add them to 
either the provided {@link Collection} or {@link Appendable}.
+   *
+   * @param queryTerms The {@link Collection} that receives the terms; or null 
if none.
+   * @param concatenatedTerms The {@link Appendable} that receives the terms; 
or null if none.
+   */
+  protected void analyzeQuery(String query, Collection queryTerms, 
Appendable concatenatedTerms) {
 
 Review comment:
   I pondered this signature for a while, because I wanted it to be clear but 
also performant where it is used, to avoid creating lots of lambdas in loops.
   I'll change back to the Consumer. It will create just one additional lambda 
instance per query to process, given that I'll declare the lambdas outside the 
loops.
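The allocation concern being discussed can be illustrated with a small hypothetical example (names invented, unrelated to the actual QueryElevationComponent code):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.function.Consumer;

// Sketch of hoisting a lambda out of a loop: the Consumer is created once
// and reused for every query, instead of allocating a fresh lambda instance
// on each iteration.
public class LambdaHoisting {
    public static List<String> collectTerms(List<String> queries) {
        List<String> terms = new ArrayList<>();
        Consumer<String> termConsumer = terms::add; // created once, not per query
        for (String query : queries) {
            for (String term : query.split("\\s+")) {
                termConsumer.accept(term);
            }
        }
        return terms;
    }
}
```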





[jira] [Assigned] (LUCENE-8884) Add Directory wrapper to track per-query IO counters

2019-07-16 Thread Michael McCandless (JIRA)


 [ 
https://issues.apache.org/jira/browse/LUCENE-8884?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael McCandless reassigned LUCENE-8884:
--

Assignee: Michael McCandless

> Add Directory wrapper to track per-query IO counters
> 
>
> Key: LUCENE-8884
> URL: https://issues.apache.org/jira/browse/LUCENE-8884
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/store
>Reporter: Michael McCandless
>Assignee: Michael McCandless
>Priority: Minor
>
> Lucene's IO abstractions ({{Directory, IndexInput/Output}}) make it really 
> easy to track counters of how many IOPs and net bytes are read for each 
> query, which is a useful metric to track/aggregate/alarm on in production or 
> dev benchmarks.
> At my day job we use these wrappers in our nightly benchmarks to catch any 
> accidental performance regressions.






[GitHub] [lucene-solr] romseygeek commented on a change in pull request #786: LUCENE-8916: GraphTokenStreamFiniteStrings preserves all incoming attributes

2019-07-16 Thread GitBox
romseygeek commented on a change in pull request #786: LUCENE-8916: 
GraphTokenStreamFiniteStrings preserves all incoming attributes
URL: https://github.com/apache/lucene-solr/pull/786#discussion_r303822204
 
 

 ##
 File path: 
lucene/core/src/test/org/apache/lucene/util/graph/TestGraphTokenStreamFiniteStrings.java
 ##
 @@ -44,13 +48,13 @@ private void assertTokenStream(TokenStream ts, String[] 
terms, int[] increments)
 assertNotNull(terms);
 assertNotNull(increments);
 assertEquals(terms.length, increments.length);
-BytesTermAttribute termAtt = ts.getAttribute(BytesTermAttribute.class);
+CharTermAttribute termAtt = ts.getAttribute(CharTermAttribute.class);
 PositionIncrementAttribute incrAtt = 
ts.getAttribute(PositionIncrementAttribute.class);
 int offset = 0;
 while (ts.incrementToken()) {
   // verify term and increment
   assert offset < terms.length;
-  assertEquals(terms[offset], termAtt.getBytesRef().utf8ToString());
+  assertEquals(terms[offset], termAtt.toString());
 
 Review comment:
   It needs to change from `BytesTermAttribute` to something else - I could 
change it to a `TermToBytesRefAttribute`, but `CharTermAttribute` seemed 
simpler and involves less byte-wrangling





[GitHub] [lucene-solr] jpountz commented on a change in pull request #786: LUCENE-8916: GraphTokenStreamFiniteStrings preserves all incoming attributes

2019-07-16 Thread GitBox
jpountz commented on a change in pull request #786: LUCENE-8916: 
GraphTokenStreamFiniteStrings preserves all incoming attributes
URL: https://github.com/apache/lucene-solr/pull/786#discussion_r303816286
 
 

 ##
 File path: 
lucene/core/src/test/org/apache/lucene/util/graph/TestGraphTokenStreamFiniteStrings.java
 ##
 @@ -44,13 +48,13 @@ private void assertTokenStream(TokenStream ts, String[] 
terms, int[] increments)
 assertNotNull(terms);
 assertNotNull(increments);
 assertEquals(terms.length, increments.length);
-BytesTermAttribute termAtt = ts.getAttribute(BytesTermAttribute.class);
+CharTermAttribute termAtt = ts.getAttribute(CharTermAttribute.class);
 PositionIncrementAttribute incrAtt = 
ts.getAttribute(PositionIncrementAttribute.class);
 int offset = 0;
 while (ts.incrementToken()) {
   // verify term and increment
   assert offset < terms.length;
-  assertEquals(terms[offset], termAtt.getBytesRef().utf8ToString());
+  assertEquals(terms[offset], termAtt.toString());
 
 Review comment:
   Was this change necessary, it should be equivalent right?





[GitHub] [lucene-solr] jpountz commented on issue #788: LUCENE-8920: disable FST direct-addressing pending size reduction

2019-07-16 Thread GitBox
jpountz commented on issue #788: LUCENE-8920: disable FST direct-addressing 
pending size reduction
URL: https://github.com/apache/lucene-solr/pull/788#issuecomment-511739179
 
 
   You might need to move CHANGES entries as well?






[jira] [Commented] (LUCENE-8778) Define analyzer SPI names as static final fields and document the names in Javadocs

2019-07-16 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8778?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16885969#comment-16885969
 ] 

ASF subversion and git services commented on LUCENE-8778:
----------------------------------------------------------

Commit b5e8dc3af4227401233289fdf7433be9d7440ca1 in lucene-solr's branch 
refs/heads/branch_8x from Tomoko Uchida
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=b5e8dc3 ]

LUCENE-8911: Backport LUCENE-8778 (improved analysis SPI name handling) to 8.x 
(#782)

This also keeps old names for backwards compatibility on 8.x


> Define analyzer SPI names as static final fields and document the names in 
> Javadocs
> -----------------------------------------------------------------------------
>
> Key: LUCENE-8778
> URL: https://issues.apache.org/jira/browse/LUCENE-8778
> Project: Lucene - Core
>  Issue Type: Task
>  Components: modules/analysis
>Reporter: Tomoko Uchida
>Assignee: Tomoko Uchida
>Priority: Minor
> Fix For: master (9.0)
>
> Attachments: LUCENE-8778-koreanNumber.patch, 
> ListAnalysisComponents.java, SPINamesGenerator.java, Screenshot from 
> 2019-04-26 02-17-48.png, Screenshot from 2019-05-25 23-25-24.png, 
> TestSPINames.java
>
>  Time Spent: 4.5h
>  Remaining Estimate: 0h
>
> Each built-in analysis component (factory of tokenizer / char filter / token 
> filter) has an SPI name, but currently this is not documented anywhere.
> The goals of this issue:
>  * Define SPI names as static final fields for each analysis component so that 
> users can get the component by name (via the {{NAME}} static field). This also 
> provides compile-time safety.
>  * Officially document the SPI names in Javadocs.
>  * Add proper source validation rules to the ant {{validate-source-patterns}} 
> target so that we can make sure that all analysis components have correct 
> field definitions and documentation,
> and,
>  * Look up SPI names via the new {{NAME}} fields instead of deriving them from 
> class names.
> (Just for quick reference) we now have:
>  * *19* Tokenizers ({{TokenizerFactory.availableTokenizers()}})
>  * *6* CharFilters ({{CharFilterFactory.availableCharFilters()}})
>  * *118* TokenFilters ({{TokenFilterFactory.availableTokenFilters()}})
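The two lookup strategies being contrasted can be sketched in miniature (hypothetical names, not Lucene's actual loader): the old behavior derives an SPI name from the class name, while the new behavior registers each factory under an explicit `NAME` field.

```python
class WhitespaceTokenizerFactory:
    # new behavior: the SPI name is an explicit, documented constant
    NAME = "whitespace"

def derived_name(cls) -> str:
    # old behavior: derive the SPI name from the class name by
    # stripping the "TokenizerFactory" suffix and lowercasing
    return cls.__name__[: -len("TokenizerFactory")].lower()

# a registry keyed by the explicit NAME field
registry = {WhitespaceTokenizerFactory.NAME: WhitespaceTokenizerFactory}

assert derived_name(WhitespaceTokenizerFactory) == "whitespace"
assert registry["whitespace"] is WhitespaceTokenizerFactory
```

Both strategies yield the same name here, but the explicit field survives class renames and can be referenced at compile time.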



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)




[GitHub] [lucene-solr] mocobeta merged pull request #782: LUCENE-8911: Backport LUCENE-8778 (improved analysis SPI name handling) to 8.x

2019-07-16 Thread GitBox
mocobeta merged pull request #782: LUCENE-8911: Backport LUCENE-8778 (improved 
analysis SPI name handling) to 8.x
URL: https://github.com/apache/lucene-solr/pull/782
 
 
   






[jira] [Commented] (LUCENE-8911) Backport LUCENE-8778 (improved analysis SPI name handling) to 8.x

2019-07-16 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8911?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16885968#comment-16885968
 ] 

ASF subversion and git services commented on LUCENE-8911:
----------------------------------------------------------

Commit b5e8dc3af4227401233289fdf7433be9d7440ca1 in lucene-solr's branch 
refs/heads/branch_8x from Tomoko Uchida
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=b5e8dc3 ]

LUCENE-8911: Backport LUCENE-8778 (improved analysis SPI name handling) to 8.x 
(#782)

This also keeps old names for backwards compatibility on 8.x


> Backport LUCENE-8778 (improved analysis SPI name handling) to 8.x
> -----------------------------------------------------------------
>
> Key: LUCENE-8911
> URL: https://issues.apache.org/jira/browse/LUCENE-8911
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/analysis
>Reporter: Tomoko Uchida
>Assignee: Tomoko Uchida
>Priority: Minor
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> In LUCENE-8907 I reverted LUCENE-8778 from the 8x branch.
> Can we backport it to the 8x branch again, with transparent backwards 
> compatibility (by emulating the factory loading method of Lucene 8.1)?
> I am not sure whether it would be better to backport the changes; however, it 
> may be good for Solr to have SOLR-13593 without waiting for release 9.0.







[jira] [Assigned] (LUCENE-8911) Backport LUCENE-8778 (improved analysis SPI name handling) to 8.x

2019-07-16 Thread Tomoko Uchida (JIRA)


 [ 
https://issues.apache.org/jira/browse/LUCENE-8911?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tomoko Uchida reassigned LUCENE-8911:
-------------------------------------

Assignee: Tomoko Uchida

> Backport LUCENE-8778 (improved analysis SPI name handling) to 8.x
> -----------------------------------------------------------------
>
> Key: LUCENE-8911
> URL: https://issues.apache.org/jira/browse/LUCENE-8911
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/analysis
>Reporter: Tomoko Uchida
>Assignee: Tomoko Uchida
>Priority: Minor
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> In LUCENE-8907 I reverted LUCENE-8778 from the 8x branch.
> Can we backport it to the 8x branch again, with transparent backwards 
> compatibility (by emulating the factory loading method of Lucene 8.1)?
> I am not sure whether it would be better to backport the changes; however, it 
> may be good for Solr to have SOLR-13593 without waiting for release 9.0.







[JENKINS] Lucene-Solr-SmokeRelease-8.2 - Build # 5 - Still Failing

2019-07-16 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-8.2/5/

No tests ran.

Build Log:
[...truncated 24963 lines...]
[asciidoctor:convert] asciidoctor: ERROR: about-this-guide.adoc: line 1: 
invalid part, must have at least one section (e.g., chapter, appendix, etc.)
[asciidoctor:convert] asciidoctor: ERROR: solr-glossary.adoc: line 1: invalid 
part, must have at least one section (e.g., chapter, appendix, etc.)
 [java] Processed 2587 links (2117 relative) to 3396 anchors in 259 files
 [echo] Validated Links & Anchors via: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.2/solr/build/solr-ref-guide/bare-bones-html/

-dist-changes:
 [copy] Copying 4 files to 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.2/solr/package/changes

package:

-unpack-solr-tgz:

-ensure-solr-tgz-exists:
[mkdir] Created dir: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.2/solr/build/solr.tgz.unpacked
[untar] Expanding: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.2/solr/package/solr-8.2.0.tgz
 into 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.2/solr/build/solr.tgz.unpacked

generate-maven-artifacts:

resolve:

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.2/lucene/top-level-ivy-settings.xml

[...the resolve / ivy-availability-check / ivy-configure cycle above repeated verbatim; truncated...]

[jira] [Commented] (SOLR-13565) Node level runtime libs loaded from remote urls

2019-07-16 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16885951#comment-16885951
 ] 

ASF subversion and git services commented on SOLR-13565:
---------------------------------------------------------

Commit a19f6450da2625dd6820ceebe05e5cf471f84030 in lucene-solr's branch 
refs/heads/jira/SOLR-13565 from Noble Paul
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=a19f645 ]

SOLR-13565: fixing tests


> Node level runtime libs loaded from remote urls
> -----------------------------------------------
>
> Key: SOLR-13565
> URL: https://issues.apache.org/jira/browse/SOLR-13565
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Noble Paul
>Assignee: Noble Paul
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Custom components to be loaded at the CoreContainer level.
> How to configure this?
> {code:json}
> curl -X POST -H 'Content-type:application/json' --data-binary '
> {
>   "add-runtimelib": {
>   "name": "lib-name" ,
>   "url" : "http://host:port/url/of/jar;,
>   "sha512":""
>   }
> }' http://localhost:8983/api/cluster
> {code}
> How to update your jars?
> {code:json}
> curl -X POST -H 'Content-type:application/json' --data-binary '
> {
>   "update-runtimelib": {
>   "name": "lib-name" ,
>   "url" : "http://host:port/url/of/jar;,
>   "sha512":""
>   }
> }' http://localhost:8983/api/cluster
> {code}
> This only loads the components used in the CoreContainer, and the Solr node 
> does not need to be restarted.
> The configuration lives in the file {{/clusterprops.json}} in ZK.
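The payload's sha512 field is the hex digest of the jar's bytes. A minimal sketch of computing it (hypothetical helper, not part of Solr; in practice you would read the jar file and paste the digest into the add-runtimelib call):

```python
import hashlib

def sha512_of(data: bytes) -> str:
    # hex-encoded SHA-512 digest of the jar's raw bytes,
    # as expected by the "sha512" field in the payload
    return hashlib.sha512(data).hexdigest()

digest = sha512_of(b"example jar bytes")
assert len(digest) == 128  # a SHA-512 hex digest is always 128 characters
```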







[JENKINS] Lucene-Solr-repro - Build # 3441 - Unstable

2019-07-16 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-repro/3441/

[...truncated 28 lines...]
[repro] Jenkins log URL: 
https://builds.apache.org/job/Lucene-Solr-NightlyTests-8.x/151/consoleText

[repro] Revision: ee4495f33bc433cba6582aaf8160cfc16009460f

[repro] Ant options: -Dtests.multiplier=2 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-8.x/test-data/enwiki.random.lines.txt
[repro] Repro line:  ant test  -Dtestcase=RollingRestartTest 
-Dtests.method=test -Dtests.seed=5E64C76CCB142C48 -Dtests.multiplier=2 
-Dtests.nightly=true -Dtests.slow=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-8.x/test-data/enwiki.random.lines.txt
 -Dtests.locale=sr-ME -Dtests.timezone=GMT0 -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8

[repro] git rev-parse --abbrev-ref HEAD
[repro] git rev-parse HEAD
[repro] Initial local git branch/revision: 
2d357c960c13ee3c1370bb1caa8bc3fc18e079bd
[repro] git fetch
[repro] git checkout ee4495f33bc433cba6582aaf8160cfc16009460f

[...truncated 2 lines...]
[repro] git merge --ff-only

[...truncated 1 lines...]
[repro] ant clean

[...truncated 6 lines...]
[repro] Test suites by module:
[repro]solr/core
[repro]   RollingRestartTest
[repro] ant compile-test

[...truncated 3577 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=5 
-Dtests.class="*.RollingRestartTest" -Dtests.showOutput=onerror 
-Dtests.multiplier=2 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-8.x/test-data/enwiki.random.lines.txt
 -Dtests.seed=5E64C76CCB142C48 -Dtests.multiplier=2 -Dtests.nightly=true 
-Dtests.slow=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-8.x/test-data/enwiki.random.lines.txt
 -Dtests.locale=sr-ME -Dtests.timezone=GMT0 -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8

[...truncated 6968 lines...]
[repro] Setting last failure code to 256

[repro] Failures:
[repro]   1/5 failed: org.apache.solr.cloud.RollingRestartTest
[repro] git checkout 2d357c960c13ee3c1370bb1caa8bc3fc18e079bd

[...truncated 2 lines...]
[repro] Exiting with code 256

[...truncated 6 lines...]


[GitHub] [lucene-solr] bruno-roustant commented on a change in pull request #780: SOLR-11866: Support efficient subset matching in query elevation rules

2019-07-16 Thread GitBox
bruno-roustant commented on a change in pull request #780: SOLR-11866: Support 
efficient subset matching in query elevation rules
URL: https://github.com/apache/lucene-solr/pull/780#discussion_r303794163
 
 

 ##
 File path: 
solr/core/src/java/org/apache/solr/handler/component/QueryElevationComponent.java
 ##
 @@ -425,10 +429,10 @@ protected ElevationProvider 
loadElevationProvider(XmlConfigFile config) throws I
 previousElevationBuilder.merge(elevationBuilder);
   }
 }
-return createElevationProvider(queryAnalyzer, elevationBuilderMap);
+return createElevationProvider(elevationBuilderMap);
 
 Review comment:
   Yes






[jira] [Commented] (LUCENE-8915) Allow RateLimiter To Have Dynamic Limits

2019-07-16 Thread Atri Sharma (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8915?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16885941#comment-16885941
 ] 

Atri Sharma commented on LUCENE-8915:
-------------------------------------

[~ab] Thanks, raised a PR doing the same.

 

[https://github.com/apache/lucene-solr/pull/789]

> Allow RateLimiter To Have Dynamic Limits
> ----------------------------------------
>
> Key: LUCENE-8915
> URL: https://issues.apache.org/jira/browse/LUCENE-8915
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Atri Sharma
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> RateLimiter does not allow dynamic configuration of the rate limit today. 
> This limits the kinds of applications the functionality can be applied to. 
> This Jira tracks: 1) allowing the rate limiter to change limits dynamically, 
> and 2) adding a RateLimiter subclass which exposes the same.
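The idea can be sketched in miniature (hypothetical class, not Lucene's actual RateLimiter API): a limiter whose MB/sec limit can be swapped while in use, so subsequent pause calculations pick up the new rate.

```python
import threading

class DynamicRateLimiter:
    """Toy rate limiter whose limit can be reconfigured at runtime."""

    def __init__(self, mb_per_sec: float):
        self._lock = threading.Lock()
        self._mb_per_sec = mb_per_sec

    def set_mb_per_sec(self, mb_per_sec: float) -> None:
        # dynamic reconfiguration: takes effect for subsequent calls
        with self._lock:
            self._mb_per_sec = mb_per_sec

    def seconds_to_pause(self, bytes_written: int) -> float:
        # how long writing `bytes_written` should take at the current limit
        with self._lock:
            return bytes_written / (self._mb_per_sec * 1024 * 1024)

limiter = DynamicRateLimiter(mb_per_sec=1.0)
slow = limiter.seconds_to_pause(1024 * 1024)  # 1 MB at 1 MB/s
limiter.set_mb_per_sec(4.0)                   # raise the limit on the fly
fast = limiter.seconds_to_pause(1024 * 1024)  # same write at 4 MB/s
assert slow == 1.0 and fast == 0.25
```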






