[jira] [Commented] (SOLR-13579) Create resource management API

2019-08-06 Thread Shalin Shekhar Mangar (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13579?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16901705#comment-16901705
 ] 

Shalin Shekhar Mangar commented on SOLR-13579:
--

Thanks [~ab]. This is looking good. I've done a first pass through the design 
and code. It took some time to wrap my head around it, and your jira comments 
describing the use-case and how it works really helped.

I have some initial comments:
# I think DefaultResourceManager has a bug. A pool can be created via 
createPool; it is scheduled immediately and added to the resourcePools map, 
keyed by the name of the resource pool. So presumably we can create multiple 
pools of the same type, which is as per the design. But the 
#registerComponent() method gets the pool for the given name and checks only 
that there are no other pools with the same type? AIUI, there are no checks to 
see if the given managed component is actually registered in the other pools of 
the same type. This can be easily demonstrated by changing the 
TestDefaultResourceManagerPool.testBasic method and adding another pool with 
the same type (see the sketch below).
# The package-info.java for the managed package could benefit from some of the 
design documentation you have added in this Jira.
# There is no v2 API for /admin/resources?
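
To make point 1 concrete, a rough sketch of the scenario (method names follow 
the patch as I understand it; the exact signatures here are my assumptions, 
not the real API):
{code:java}
// Hypothetical sketch, not actual test code: two pools of the same type.
void demonstrate(ResourceManager mgr, ManagedComponent component,
                 Map<String, Object> limits, Map<String, Object> params)
    throws Exception {
  mgr.createPool("cachePoolA", "cache", limits, params); // pool 1, type "cache"
  mgr.createPool("cachePoolB", "cache", limits, params); // pool 2, same type

  // registerComponent() resolves the pool by name and checks for a type clash
  // there, but nothing verifies that 'component' is not already registered in
  // cachePoolB, which has the same type.
  mgr.registerComponent("cachePoolA", component);
  mgr.registerComponent("cachePoolB", component);        // accepted silently?
}
{code}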

I'm going to do another pass and try it out and get back to you.

> Create resource management API
> --
>
> Key: SOLR-13579
> URL: https://issues.apache.org/jira/browse/SOLR-13579
> Project: Solr
>  Issue Type: New Feature
>Reporter: Andrzej Bialecki 
>Assignee: Andrzej Bialecki 
>Priority: Major
> Attachments: SOLR-13579.patch, SOLR-13579.patch, SOLR-13579.patch, 
> SOLR-13579.patch, SOLR-13579.patch, SOLR-13579.patch, SOLR-13579.patch
>
>
> Resource management framework API supporting the goals outlined in SOLR-13578.






Re: Separate dev mailing list for automated mails?

2019-08-06 Thread Tomoko Uchida
Hi

+1 for separate list(s) for JIRA/GitHub updates and Jenkins jobs.
While I myself have no trouble sorting the mails thanks to Gmail
filters, I know a user (an external dev) who unsubscribed from this
list. One reason was the volume of the mail flow :)

Tomoko

2019年8月7日(水) 8:17 Jan Høydahl :
>
> Hi
>
> The mail volume on dev@ is fairly high, between 2500-3500/month.
> To break down the numbers last month, see 
> https://lists.apache.org/trends.html?dev@lucene.apache.org:lte=1M:
>
> Top 10 participants:
> -GitBox: 420 emails
> -ASF subversion and git services (JIRA): 351 emails
> -Apache Jenkins Server: 261 emails
> -Policeman Jenkins Server: 234 emails
> -Munendra S N (JIRA): 134 emails
> -Joel Bernstein (JIRA): 84 emails
> -Tomoko Uchida (JIRA): 77 emails
> -Jan Høydahl (JIRA): 52 emails
> -Andrzej Bialecki (JIRA): 47 emails
> -Adrien Grand (JIRA): 46 emails
>
> I have especially noticed how every single GitHub PR review comment triggers 
> its own email instead of one email per review session.
> Also, every commit/push triggers an email since a bot adds a comment to JIRA 
> for it.
>
> Personally I think the ratio of notifications vs human emails is a bit too 
> high. I fear external devs who just want to follow the project may get 
> overwhelmed and unsubscribe.
> One suggestion is therefore to add a new list where detailed JIRA comments 
> and GitHub comments / reviews go. All committers should of course subscribe!
> I saw the ZooKeeper project has a notifications@ list for GitHub comments 
> and issues@ for JIRA comments (except the first [Created] email for a JIRA, 
> which also goes to dev@).
> The Maven project follows the same scheme, and they also send Jenkins mails 
> to the notifications@ list. The Cassandra project seems to divert all jira 
> comments to the commits@ list.
> The HBase project keeps only [Created]/[Resolved] mails on dev@, all other 
> Jira/GH mails on an issues@ list, and Jenkins mails on a separate builds@ 
> list.
>
> Is it time we did something similar? I propose a single new notifications@ 
> list for everything JIRA, GitHub and Jenkins, but keeping [Created|Resolved] 
> mails on dev@.
>
> --
> Jan Høydahl, search solution architect
> Cominvent AS - www.cominvent.com
>
>
>




[jira] [Commented] (SOLR-13593) Allow to specify analyzer components by their SPI names in schema definition

2019-08-06 Thread Tomoko Uchida (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16901689#comment-16901689
 ] 

Tomoko Uchida commented on SOLR-13593:
--

I updated the pull request. If both "name" and "class" appear on the same 
element at the same time, a SolrException is thrown and error logs are emitted.

I've also tested this manually: (1) started a local Solr core with a manually 
modified managed-schema that has field types using the "name" property, and 
(2) added types using "name" via the REST API as well. Both work for me, and 
this does not affect existing field types (those using "class"). The core can 
also be restarted without any problems after adding the types that use "name", 
so the regenerated & saved managed-schema works fine.
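
A request along these lines exercises step (2), POSTed to /solr/<core>/schema 
(field type and analyzer chosen purely for illustration):
{code:json}
{
  "add-field-type": {
    "name": "text_ws_spi",
    "class": "solr.TextField",
    "analyzer": {
      "tokenizer": { "name": "whitespace" },
      "filters": [ { "name": "lowercase" } ]
    }
  }
}
{code}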

I also created the service provider file for Solr's custom filters (it did not 
exist so far) so that they can be looked up by name.

// META-INF/services/org.apache.lucene.analysis.util.TokenFilterFactory
{code:java}
org.apache.solr.rest.schema.analysis.ManagedStopFilterFactory
org.apache.solr.rest.schema.analysis.ManagedSynonymFilterFactory
org.apache.solr.rest.schema.analysis.ManagedSynonymGraphFilterFactory
{code}
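
As a quick sanity check of the SPI lookup itself, a minimal sketch 
("lowercase" is a stock Lucene SPI name; the managed factories above become 
discoverable the same way once the services file is on the classpath):
{code:java}
import java.util.HashMap;
import org.apache.lucene.analysis.util.TokenFilterFactory;

public class SpiNameCheck {
  public static void main(String[] args) {
    // All SPI names visible on the classpath, e.g. "lowercase", "stop", ...
    System.out.println(TokenFilterFactory.availableTokenFilters());
    // Resolve a factory by its SPI name instead of its class name.
    TokenFilterFactory lower =
        TokenFilterFactory.forName("lowercase", new HashMap<>());
    System.out.println(lower.getClass().getName());
  }
}
{code}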
Let me know if there is anything else that would block this issue - I'd like 
to wait until this weekend and then merge the changes into the ASF repo, if 
there are no objections.

> Allow to specify analyzer components by their SPI names in schema definition
> 
>
> Key: SOLR-13593
> URL: https://issues.apache.org/jira/browse/SOLR-13593
> Project: Solr
>  Issue Type: Improvement
>  Components: Schema and Analysis
>Reporter: Tomoko Uchida
>Priority: Major
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Now each analysis factory has an explicitly documented SPI name, which is 
> stored in the static "NAME" field (LUCENE-8778).
>  Solr uses factories' simple class names in the schema definition (like 
> class="solr.WhitespaceTokenizerFactory"), but we should be able to also use 
> the more concise SPI names (like name="whitespace").
> e.g.:
> {code:xml}
> <fieldType name="text_general" class="solr.TextField" positionIncrementGap="100">
>   <analyzer>
>     <tokenizer class="solr.WhitespaceTokenizerFactory"/>
>     <filter class="solr.StopFilterFactory" ignoreCase="true" words="stopwords.txt" />
>     <filter class="solr.LowerCaseFilterFactory"/>
>   </analyzer>
> </fieldType>
> {code}
> would be
> {code:xml}
> <fieldType name="text_general" class="solr.TextField" positionIncrementGap="100">
>   <analyzer>
>     <tokenizer name="whitespace"/>
>     <filter name="stop" ignoreCase="true" words="stopwords.txt" />
>     <filter name="lowercase"/>
>   </analyzer>
> </fieldType>
> {code}






[jira] [Commented] (SOLR-13105) A visual guide to Solr Math Expressions and Streaming Expressions

2019-08-06 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13105?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16901642#comment-16901642
 ] 

ASF subversion and git services commented on SOLR-13105:


Commit 0451c6e5320f4350f8fc75baacdcc50d05151cf4 in lucene-solr's branch 
refs/heads/SOLR-13105-visual from Joel Bernstein
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=0451c6e ]

SOLR-13105: Add text to loading page 7


> A visual guide to Solr Math Expressions and Streaming Expressions
> -
>
> Key: SOLR-13105
> URL: https://issues.apache.org/jira/browse/SOLR-13105
> Project: Solr
>  Issue Type: New Feature
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>Priority: Major
> Attachments: Screen Shot 2019-01-14 at 10.56.32 AM.png, Screen Shot 
> 2019-02-21 at 2.14.43 PM.png, Screen Shot 2019-03-03 at 2.28.35 PM.png, 
> Screen Shot 2019-03-04 at 7.47.57 PM.png, Screen Shot 2019-03-13 at 10.47.47 
> AM.png, Screen Shot 2019-03-30 at 6.17.04 PM.png
>
>
> Visualization is now a fundamental element of Solr Streaming Expressions and 
> Math Expressions. This ticket will create a visual guide to Solr Math 
> Expressions and Solr Streaming Expressions that includes *Apache Zeppelin* 
> visualization examples.
> It will also cover using the JDBC expression to *analyze* and *visualize* 
> results from any JDBC compliant data source.
> Intro from the guide:
> {code:java}
> Streaming Expressions exposes the capabilities of Solr Cloud as composable 
> functions. These functions provide a system for searching, transforming, 
> analyzing and visualizing data stored in Solr Cloud collections.
> At a high level there are four main capabilities that will be explored in the 
> documentation:
> * Searching, sampling and aggregating results from Solr.
> * Transforming result sets after they are retrieved from Solr.
> * Analyzing and modeling result sets using probability and statistics and 
> machine learning libraries.
> * Visualizing result sets, aggregations and statistical models of the data.
> {code}
>  
> A few sample visualizations are attached to the ticket.
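
A small composed expression in the spirit the intro describes (collection and 
field names are invented for illustration):
{code:java}
rollup(
  search(logs, q="*:*", qt="/export", fl="day_s,status_s", sort="day_s asc"),
  over="day_s",
  count(*)
)
{code}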






[jira] [Commented] (SOLR-13105) A visual guide to Solr Math Expressions and Streaming Expressions

2019-08-06 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13105?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16901637#comment-16901637
 ] 

ASF subversion and git services commented on SOLR-13105:


Commit 893ca4b34c5bd29e2338eff42994a56e031b9ce6 in lucene-solr's branch 
refs/heads/SOLR-13105-visual from Joel Bernstein
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=893ca4b ]

SOLR-13105: Add text to loading page 6


> A visual guide to Solr Math Expressions and Streaming Expressions
> -
>
> Key: SOLR-13105
> URL: https://issues.apache.org/jira/browse/SOLR-13105
> Project: Solr
>  Issue Type: New Feature
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>Priority: Major
> Attachments: Screen Shot 2019-01-14 at 10.56.32 AM.png, Screen Shot 
> 2019-02-21 at 2.14.43 PM.png, Screen Shot 2019-03-03 at 2.28.35 PM.png, 
> Screen Shot 2019-03-04 at 7.47.57 PM.png, Screen Shot 2019-03-13 at 10.47.47 
> AM.png, Screen Shot 2019-03-30 at 6.17.04 PM.png
>
>
> Visualization is now a fundamental element of Solr Streaming Expressions and 
> Math Expressions. This ticket will create a visual guide to Solr Math 
> Expressions and Solr Streaming Expressions that includes *Apache Zeppelin* 
> visualization examples.
> It will also cover using the JDBC expression to *analyze* and *visualize* 
> results from any JDBC compliant data source.
> Intro from the guide:
> {code:java}
> Streaming Expressions exposes the capabilities of Solr Cloud as composable 
> functions. These functions provide a system for searching, transforming, 
> analyzing and visualizing data stored in Solr Cloud collections.
> At a high level there are four main capabilities that will be explored in the 
> documentation:
> * Searching, sampling and aggregating results from Solr.
> * Transforming result sets after they are retrieved from Solr.
> * Analyzing and modeling result sets using probability and statistics and 
> machine learning libraries.
> * Visualizing result sets, aggregations and statistical models of the data.
> {code}
>  
> A few sample visualizations are attached to the ticket.






[jira] [Commented] (SOLR-13105) A visual guide to Solr Math Expressions and Streaming Expressions

2019-08-06 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13105?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16901611#comment-16901611
 ] 

ASF subversion and git services commented on SOLR-13105:


Commit ae6287faff562df37369f04987172416d87b2744 in lucene-solr's branch 
refs/heads/SOLR-13105-visual from Joel Bernstein
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=ae6287f ]

SOLR-13105: Add text to loading page 5


> A visual guide to Solr Math Expressions and Streaming Expressions
> -
>
> Key: SOLR-13105
> URL: https://issues.apache.org/jira/browse/SOLR-13105
> Project: Solr
>  Issue Type: New Feature
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>Priority: Major
> Attachments: Screen Shot 2019-01-14 at 10.56.32 AM.png, Screen Shot 
> 2019-02-21 at 2.14.43 PM.png, Screen Shot 2019-03-03 at 2.28.35 PM.png, 
> Screen Shot 2019-03-04 at 7.47.57 PM.png, Screen Shot 2019-03-13 at 10.47.47 
> AM.png, Screen Shot 2019-03-30 at 6.17.04 PM.png
>
>
> Visualization is now a fundamental element of Solr Streaming Expressions and 
> Math Expressions. This ticket will create a visual guide to Solr Math 
> Expressions and Solr Streaming Expressions that includes *Apache Zeppelin* 
> visualization examples.
> It will also cover using the JDBC expression to *analyze* and *visualize* 
> results from any JDBC compliant data source.
> Intro from the guide:
> {code:java}
> Streaming Expressions exposes the capabilities of Solr Cloud as composable 
> functions. These functions provide a system for searching, transforming, 
> analyzing and visualizing data stored in Solr Cloud collections.
> At a high level there are four main capabilities that will be explored in the 
> documentation:
> * Searching, sampling and aggregating results from Solr.
> * Transforming result sets after they are retrieved from Solr.
> * Analyzing and modeling result sets using probability and statistics and 
> machine learning libraries.
> * Visualizing result sets, aggregations and statistical models of the data.
> {code}
>  
> A few sample visualizations are attached to the ticket.






[JENKINS] Lucene-Solr-Tests-master - Build # 3511 - Failure

2019-08-06 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-master/3511/

All tests passed

Build Log:
[...truncated 64643 lines...]
-ecj-javadoc-lint-tests:
[mkdir] Created dir: /tmp/ecj1990784082
 [ecj-lint] Compiling 48 source files to /tmp/ecj1990784082
 [ecj-lint] invalid Class-Path header in manifest of jar file: 
/home/jenkins/.ivy2/cache/org.restlet.jee/org.restlet/jars/org.restlet-2.3.0.jar
 [ecj-lint] invalid Class-Path header in manifest of jar file: 
/home/jenkins/.ivy2/cache/org.restlet.jee/org.restlet.ext.servlet/jars/org.restlet.ext.servlet-2.3.0.jar
 [ecj-lint] --
 [ecj-lint] 1. ERROR in 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/MockInitialContextFactory.java
 (at line 23)
 [ecj-lint] import javax.naming.NamingException;
 [ecj-lint]
 [ecj-lint] The type javax.naming.NamingException is not accessible
 [ecj-lint] --
 [ecj-lint] 2. ERROR in 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/MockInitialContextFactory.java
 (at line 28)
 [ecj-lint] public class MockInitialContextFactory implements 
InitialContextFactory {
 [ecj-lint]  ^
 [ecj-lint] The type MockInitialContextFactory must implement the inherited 
abstract method InitialContextFactory.getInitialContext(Hashtable)
 [ecj-lint] --
 [ecj-lint] 3. ERROR in 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/MockInitialContextFactory.java
 (at line 30)
 [ecj-lint] private final javax.naming.Context context;
 [ecj-lint]   
 [ecj-lint] The type javax.naming.Context is not accessible
 [ecj-lint] --
 [ecj-lint] 4. ERROR in 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/MockInitialContextFactory.java
 (at line 33)
 [ecj-lint] context = mock(javax.naming.Context.class);
 [ecj-lint] ^^^
 [ecj-lint] context cannot be resolved to a variable
 [ecj-lint] --
 [ecj-lint] 5. ERROR in 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/MockInitialContextFactory.java
 (at line 33)
 [ecj-lint] context = mock(javax.naming.Context.class);
 [ecj-lint]
 [ecj-lint] The type javax.naming.Context is not accessible
 [ecj-lint] --
 [ecj-lint] 6. ERROR in 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/MockInitialContextFactory.java
 (at line 36)
 [ecj-lint] when(context.lookup(anyString())).thenAnswer(invocation -> 
objects.get(invocation.getArgument(0)));
 [ecj-lint]  ^^^
 [ecj-lint] context cannot be resolved
 [ecj-lint] --
 [ecj-lint] 7. ERROR in 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/MockInitialContextFactory.java
 (at line 38)
 [ecj-lint] } catch (NamingException e) {
 [ecj-lint]  ^^^
 [ecj-lint] NamingException cannot be resolved to a type
 [ecj-lint] --
 [ecj-lint] 8. ERROR in 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/MockInitialContextFactory.java
 (at line 45)
 [ecj-lint] public javax.naming.Context getInitialContext(Hashtable env) {
 [ecj-lint]
 [ecj-lint] The type javax.naming.Context is not accessible
 [ecj-lint] --
 [ecj-lint] 9. ERROR in 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/MockInitialContextFactory.java
 (at line 46)
 [ecj-lint] return context;
 [ecj-lint]^^^
 [ecj-lint] context cannot be resolved to a variable
 [ecj-lint] --
 [ecj-lint] 9 problems (9 errors)

BUILD FAILED
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/build.xml:634: 
The following error occurred while executing this line:
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/build.xml:101: 
The following error occurred while executing this line:
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/solr/build.xml:651:
 The following error occurred while executing this line:
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/solr/common-build.xml:479:
 The following error occurred while executing this line:
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/lucene/common-build.xml:2015:
 The following error occurred while executing this line:

[JENKINS] Lucene-Solr-8.x-Windows (32bit/jdk1.8.0_201) - Build # 385 - Still Unstable!

2019-08-06 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-8.x-Windows/385/
Java: 32bit/jdk1.8.0_201 -server -XX:+UseSerialGC

5 tests failed.
FAILED:  
org.apache.solr.client.solrj.io.stream.StreamExpressionTest.testFileStreamDirectoryCrawl

Error Message:
expected: but was:

Stack Trace:
org.junit.ComparisonFailure: expected: but 
was:
at 
__randomizedtesting.SeedInfo.seed([A3E007B3A4ACDB10:709C95F68A3BCF5]:0)
at org.junit.Assert.assertEquals(Assert.java:115)
at org.junit.Assert.assertEquals(Assert.java:144)
at 
org.apache.solr.client.solrj.io.stream.StreamExpressionTest.testFileStreamDirectoryCrawl(StreamExpressionTest.java:3128)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)


FAILED:  
org.apache.solr.client.solrj.io.stream.StreamExpressionTest.testFileStreamDirectoryCrawl

Error Message:
expected: but was:

Stack Trace:
org.junit.ComparisonFailure: expected: but 
was:
at 
__randomizedtesting.SeedInfo.seed([A3E007B3A4ACD

[JENKINS] Lucene-Solr-NightlyTests-master - Build # 1922 - Still Failing

2019-08-06 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-master/1922/

No tests ran.

Build Log:
[...truncated 25 lines...]
ERROR: Failed to check out http://svn.apache.org/repos/asf/lucene/test-data
org.tmatesoft.svn.core.SVNException: svn: E175002: connection refused by the 
server
svn: E175002: OPTIONS request failed on '/repos/asf/lucene/test-data'
at 
org.tmatesoft.svn.core.internal.wc.SVNErrorManager.error(SVNErrorManager.java:112)
at 
org.tmatesoft.svn.core.internal.wc.SVNErrorManager.error(SVNErrorManager.java:96)
at 
org.tmatesoft.svn.core.internal.io.dav.http.HTTPConnection.request(HTTPConnection.java:765)
at 
org.tmatesoft.svn.core.internal.io.dav.http.HTTPConnection.request(HTTPConnection.java:352)
at 
org.tmatesoft.svn.core.internal.io.dav.http.HTTPConnection.request(HTTPConnection.java:340)
at 
org.tmatesoft.svn.core.internal.io.dav.DAVConnection.performHttpRequest(DAVConnection.java:910)
at 
org.tmatesoft.svn.core.internal.io.dav.DAVConnection.exchangeCapabilities(DAVConnection.java:702)
at 
org.tmatesoft.svn.core.internal.io.dav.DAVConnection.open(DAVConnection.java:113)
at 
org.tmatesoft.svn.core.internal.io.dav.DAVRepository.openConnection(DAVRepository.java:1035)
at 
org.tmatesoft.svn.core.internal.io.dav.DAVRepository.getLatestRevision(DAVRepository.java:164)
at 
org.tmatesoft.svn.core.internal.wc2.ng.SvnNgRepositoryAccess.getRevisionNumber(SvnNgRepositoryAccess.java:119)
at 
org.tmatesoft.svn.core.internal.wc2.SvnRepositoryAccess.getLocations(SvnRepositoryAccess.java:178)
at 
org.tmatesoft.svn.core.internal.wc2.ng.SvnNgRepositoryAccess.createRepositoryFor(SvnNgRepositoryAccess.java:43)
at 
org.tmatesoft.svn.core.internal.wc2.ng.SvnNgAbstractUpdate.checkout(SvnNgAbstractUpdate.java:831)
at 
org.tmatesoft.svn.core.internal.wc2.ng.SvnNgCheckout.run(SvnNgCheckout.java:26)
at 
org.tmatesoft.svn.core.internal.wc2.ng.SvnNgCheckout.run(SvnNgCheckout.java:11)
at 
org.tmatesoft.svn.core.internal.wc2.ng.SvnNgOperationRunner.run(SvnNgOperationRunner.java:20)
at 
org.tmatesoft.svn.core.internal.wc2.SvnOperationRunner.run(SvnOperationRunner.java:21)
at 
org.tmatesoft.svn.core.wc2.SvnOperationFactory.run(SvnOperationFactory.java:1239)
at org.tmatesoft.svn.core.wc2.SvnOperation.run(SvnOperation.java:294)
at 
hudson.scm.subversion.CheckoutUpdater$SubversionUpdateTask.perform(CheckoutUpdater.java:133)
at 
hudson.scm.subversion.WorkspaceUpdater$UpdateTask.delegateTo(WorkspaceUpdater.java:168)
at 
hudson.scm.subversion.WorkspaceUpdater$UpdateTask.delegateTo(WorkspaceUpdater.java:176)
at 
hudson.scm.subversion.UpdateUpdater$TaskImpl.perform(UpdateUpdater.java:134)
at 
hudson.scm.subversion.WorkspaceUpdater$UpdateTask.delegateTo(WorkspaceUpdater.java:168)
at 
hudson.scm.SubversionSCM$CheckOutTask.perform(SubversionSCM.java:1041)
at hudson.scm.SubversionSCM$CheckOutTask.invoke(SubversionSCM.java:1017)
at hudson.scm.SubversionSCM$CheckOutTask.invoke(SubversionSCM.java:990)
at hudson.FilePath$FileCallableWrapper.call(FilePath.java:3086)
at hudson.remoting.UserRequest.perform(UserRequest.java:212)
at hudson.remoting.UserRequest.perform(UserRequest.java:54)
at hudson.remoting.Request$2.run(Request.java:369)
at 
hudson.remoting.InterceptingExecutorService$1.call(InterceptingExecutorService.java:72)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:744)
Caused by: java.net.ConnectException: Connection refused
at java.net.PlainSocketImpl.socketConnect(Native Method)
at 
java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:345)
at 
java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
at 
java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
at java.net.Socket.connect(Socket.java:589)
at 
org.tmatesoft.svn.core.internal.util.SVNSocketConnection.run(SVNSocketConnection.java:57)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
... 4 more
java.net.ConnectException: Connection refused
at java.net.PlainSocketImpl.socketConnect(Native Method)
at 
java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:345)
at 
java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
at 
java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)

Separate dev mailing list for automated mails?

2019-08-06 Thread Jan Høydahl
Hi

The mail volume on dev@ is fairly high, between 2500-3500/month.
To break down the numbers last month, see 
https://lists.apache.org/trends.html?dev@lucene.apache.org:lte=1M:

Top 10 participants:
-GitBox: 420 emails
-ASF subversion and git services (JIRA): 351 emails
-Apache Jenkins Server: 261 emails
-Policeman Jenkins Server: 234 emails
-Munendra S N (JIRA): 134 emails
-Joel Bernstein (JIRA): 84 emails
-Tomoko Uchida (JIRA): 77 emails
-Jan Høydahl (JIRA): 52 emails
-Andrzej Bialecki (JIRA): 47 emails
-Adrien Grand (JIRA): 46 emails

I have especially noticed how every single GitHub PR review comment triggers 
its own email instead of one email per review session.
Also, every commit/push triggers an email since a bot adds a comment to JIRA 
for it.

Personally I think the ratio of notifications vs human emails is a bit too 
high. I fear external devs who just want to follow the project may get 
overwhelmed and unsubscribe.
One suggestion is therefore to add a new list where detailed JIRA comments and 
GitHub comments / reviews go. All committers should of course subscribe!
I saw the ZooKeeper project has a notifications@ list for GitHub comments and 
issues@ for JIRA comments (except the first [Created] email for a JIRA, which 
also goes to dev@).
The Maven project follows the same scheme, and they also send Jenkins mails to 
the notifications@ list. The Cassandra project seems to divert all jira 
comments to the commits@ list.
The HBase project keeps only [Created]/[Resolved] mails on dev@, all other 
Jira/GH mails on an issues@ list, and Jenkins mails on a separate builds@ list.

Is it time we did something similar? I propose a single new notifications@ 
list for everything JIRA, GitHub and Jenkins, but keeping [Created|Resolved] 
mails on dev@.

--
Jan Høydahl, search solution architect
Cominvent AS - www.cominvent.com





[jira] [Commented] (SOLR-13464) Sporadic Auth + Cloud test failures, probably due to lag in nodes reloading security config

2019-08-06 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13464?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16901523#comment-16901523
 ] 

ASF subversion and git services commented on SOLR-13464:


Commit 6fea853711773a134c7b04b40a31193af5dd77f8 in lucene-solr's branch 
refs/heads/branch_8x from Chris M. Hostetter
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=6fea853 ]

Harden BasicAuthIntegrationTest w/work around for SOLR-13464

(cherry picked from commit 878d332a0bd7374190a85a23d3a6241d930289f3)


> Sporadic Auth + Cloud test failures, probably due to lag in nodes reloading 
> security config
> ---
>
> Key: SOLR-13464
> URL: https://issues.apache.org/jira/browse/SOLR-13464
> Project: Solr
>  Issue Type: Bug
>Reporter: Hoss Man
>Priority: Major
>
> I've been investigating some sporadic and hard-to-reproduce test failures 
> related to authentication in cloud mode, and I *think* (but have not directly 
> verified) that the common cause is that after one uses one of the 
> {{/admin/auth...}} handlers to update some setting, there is an inherent and 
> unpredictable delay (due to ZK watches) until every node in the cluster has 
> had a chance to (re)load the new configuration and initialize the various 
> security plugins with the new settings.
> This means that if a test client does a POST to some node to add/change/remove 
> some authn/authz settings, and then immediately hits the exact same node (or 
> any other node) to test that the effects of those settings exist, there is no 
> guarantee that they will have taken effect yet.
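
The hardening commit above suggests the usual mitigation: poll until every 
node has observed the new config before asserting. A minimal sketch of that 
idea (the helper and names are illustrative, not the actual test code):
{code:java}
import java.util.List;
import java.util.function.ToIntFunction;

public final class AuthTestUtil {
  /** Poll each node until it reports at least the expected security config
   *  version, instead of asserting immediately and racing the ZK watch. */
  public static void waitForSecurityPropagation(List<String> nodeUrls,
                                                ToIntFunction<String> versionFetcher,
                                                int expectedVersion,
                                                long timeoutMs) throws InterruptedException {
    long deadline = System.currentTimeMillis() + timeoutMs;
    for (String url : nodeUrls) {
      while (versionFetcher.applyAsInt(url) < expectedVersion) {
        if (System.currentTimeMillis() > deadline) {
          throw new AssertionError("security config not reloaded on " + url);
        }
        Thread.sleep(100); // give the watch time to fire and plugins to re-init
      }
    }
  }
}
{code}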






[JENKINS] Lucene-Solr-Tests-master - Build # 3510 - Unstable

2019-08-06 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-master/3510/

1 tests failed.
FAILED:  
org.apache.solr.handler.dataimport.TestZKPropertiesWriter.testZKPropertiesWriter

Error Message:
Exception during query

Stack Trace:
java.lang.RuntimeException: Exception during query
at 
__randomizedtesting.SeedInfo.seed([53B43466FD987552:1B384811D1CF55A6]:0)
at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:996)
at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:956)
at 
org.apache.solr.handler.dataimport.TestZKPropertiesWriter.testZKPropertiesWriter(TestZKPropertiesWriter.java:137)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:834)
Caused by: java.lang.RuntimeException: REQUEST FAILED: xpath=//*[@numFound='1']
xml response was: 

true01*:*0202.2


request was:q=*:*&qt=&start=0&rows=20&version=2.2
at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:989)
... 40 more




Build Log:
[...truncated 21

[jira] [Commented] (SOLR-13464) Sporadic Auth + Cloud test failures, probably due to lag in nodes reloading security config

2019-08-06 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13464?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16901516#comment-16901516
 ] 

ASF subversion and git services commented on SOLR-13464:


Commit 878d332a0bd7374190a85a23d3a6241d930289f3 in lucene-solr's branch 
refs/heads/master from Chris M. Hostetter
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=878d332 ]

Harden BasicAuthIntegrationTest w/work around for SOLR-13464


> Sporadic Auth + Cloud test failures, probably due to lag in nodes reloading 
> security config
> ---
>
> Key: SOLR-13464
> URL: https://issues.apache.org/jira/browse/SOLR-13464
> Project: Solr
>  Issue Type: Bug
>Reporter: Hoss Man
>Priority: Major
>
> I've been investigating some sporadic and hard-to-reproduce test failures 
> related to authentication in cloud mode, and I *think* (but have not directly 
> verified) that the common cause is that after one uses one of the 
> {{/admin/auth...}} handlers to update some setting, there is an inherent and 
> unpredictable delay (due to ZK watches) until every node in the cluster has 
> had a chance to (re)load the new configuration and initialize the various 
> security plugins with the new settings.
> This means that if a test client does a POST to some node to add/change/remove 
> some authn/authz settings, and then immediately hits the exact same node (or 
> any other node) to test that the effects of those settings exist, there is no 
> guarantee that they will have taken effect yet.






[JENKINS] Lucene-Solr-NightlyTests-8.x - Build # 173 - Still Failing

2019-08-06 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-8.x/173/

No tests ran.

Build Log:
[...truncated 25 lines...]
ERROR: Failed to check out http://svn.apache.org/repos/asf/lucene/test-data
org.tmatesoft.svn.core.SVNException: svn: E175002: connection refused by the 
server
svn: E175002: OPTIONS request failed on '/repos/asf/lucene/test-data'
at 
org.tmatesoft.svn.core.internal.wc.SVNErrorManager.error(SVNErrorManager.java:112)
at 
org.tmatesoft.svn.core.internal.wc.SVNErrorManager.error(SVNErrorManager.java:96)
at 
org.tmatesoft.svn.core.internal.io.dav.http.HTTPConnection.request(HTTPConnection.java:765)
at 
org.tmatesoft.svn.core.internal.io.dav.http.HTTPConnection.request(HTTPConnection.java:352)
at 
org.tmatesoft.svn.core.internal.io.dav.http.HTTPConnection.request(HTTPConnection.java:340)
at 
org.tmatesoft.svn.core.internal.io.dav.DAVConnection.performHttpRequest(DAVConnection.java:910)
at 
org.tmatesoft.svn.core.internal.io.dav.DAVConnection.exchangeCapabilities(DAVConnection.java:702)
at 
org.tmatesoft.svn.core.internal.io.dav.DAVConnection.open(DAVConnection.java:113)
at 
org.tmatesoft.svn.core.internal.io.dav.DAVRepository.openConnection(DAVRepository.java:1035)
at 
org.tmatesoft.svn.core.internal.io.dav.DAVRepository.getLatestRevision(DAVRepository.java:164)
at 
org.tmatesoft.svn.core.internal.wc2.ng.SvnNgRepositoryAccess.getRevisionNumber(SvnNgRepositoryAccess.java:119)
at 
org.tmatesoft.svn.core.internal.wc2.SvnRepositoryAccess.getLocations(SvnRepositoryAccess.java:178)
at 
org.tmatesoft.svn.core.internal.wc2.ng.SvnNgRepositoryAccess.createRepositoryFor(SvnNgRepositoryAccess.java:43)
at 
org.tmatesoft.svn.core.internal.wc2.ng.SvnNgAbstractUpdate.checkout(SvnNgAbstractUpdate.java:831)
at 
org.tmatesoft.svn.core.internal.wc2.ng.SvnNgCheckout.run(SvnNgCheckout.java:26)
at 
org.tmatesoft.svn.core.internal.wc2.ng.SvnNgCheckout.run(SvnNgCheckout.java:11)
at 
org.tmatesoft.svn.core.internal.wc2.ng.SvnNgOperationRunner.run(SvnNgOperationRunner.java:20)
at 
org.tmatesoft.svn.core.internal.wc2.SvnOperationRunner.run(SvnOperationRunner.java:21)
at 
org.tmatesoft.svn.core.wc2.SvnOperationFactory.run(SvnOperationFactory.java:1239)
at org.tmatesoft.svn.core.wc2.SvnOperation.run(SvnOperation.java:294)
at 
hudson.scm.subversion.CheckoutUpdater$SubversionUpdateTask.perform(CheckoutUpdater.java:133)
at 
hudson.scm.subversion.WorkspaceUpdater$UpdateTask.delegateTo(WorkspaceUpdater.java:168)
at 
hudson.scm.subversion.WorkspaceUpdater$UpdateTask.delegateTo(WorkspaceUpdater.java:176)
at 
hudson.scm.subversion.UpdateUpdater$TaskImpl.perform(UpdateUpdater.java:134)
at 
hudson.scm.subversion.WorkspaceUpdater$UpdateTask.delegateTo(WorkspaceUpdater.java:168)
at 
hudson.scm.SubversionSCM$CheckOutTask.perform(SubversionSCM.java:1041)
at hudson.scm.SubversionSCM$CheckOutTask.invoke(SubversionSCM.java:1017)
at hudson.scm.SubversionSCM$CheckOutTask.invoke(SubversionSCM.java:990)
at hudson.FilePath$FileCallableWrapper.call(FilePath.java:3086)
at hudson.remoting.UserRequest.perform(UserRequest.java:212)
at hudson.remoting.UserRequest.perform(UserRequest.java:54)
at hudson.remoting.Request$2.run(Request.java:369)
at 
hudson.remoting.InterceptingExecutorService$1.call(InterceptingExecutorService.java:72)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:744)
Caused by: java.net.ConnectException: Connection refused
at java.net.PlainSocketImpl.socketConnect(Native Method)
at 
java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:345)
at 
java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
at 
java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
at java.net.Socket.connect(Socket.java:589)
at 
org.tmatesoft.svn.core.internal.util.SVNSocketConnection.run(SVNSocketConnection.java:57)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
... 4 more
java.net.ConnectException: Connection refused
at java.net.PlainSocketImpl.socketConnect(Native Method)
at 
java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:345)
at 
java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
at 
java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)

[jira] [Commented] (SOLR-13677) All Metrics Gauges should be unregistered by the objects that registered them

2019-08-06 Thread Noble Paul (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13677?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16901459#comment-16901459
 ] 

Noble Paul commented on SOLR-13677:
---

bq. the tag properly represents the object or group of objects with the same 
life-cycle - ...

I think this is a better solution. I was not sure about the implications of 
such a change. I will try to implement it, and you can review it.


> All Metrics Gauges should be unregistered by the objects that registered them
> -
>
> Key: SOLR-13677
> URL: https://issues.apache.org/jira/browse/SOLR-13677
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: metrics
>Reporter: Noble Paul
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The life cycle of metrics producers is managed by the core (mostly). So, if 
> the lifecycle of an object is different from that of the core itself, such 
> objects will never be unregistered from the metrics registry. This will lead 
> to memory leaks.






[jira] [Commented] (SOLR-13677) All Metrics Gauges should be unregistered by the objects that registered them

2019-08-06 Thread Andrzej Bialecki (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13677?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16901449#comment-16901449
 ] 

Andrzej Bialecki  commented on SOLR-13677:
--

bq. close is almost always called

:D There's no hard guarantee in any of these scenarios, just a degree of 
likelihood.

I think that the patch over-complicates things by keeping an explicit reference 
to the gauges. It should not be needed if we properly use the {{tag}} argument 
when registering gauges.

The {{tag}} attribute was added to solve a problem of non-deterministic 
ordering of gauge registration during core reload: the new core would already 
register some of the new gauges, but the old core would linger for a while, 
and when it tried to unregister gauges with the same names it would unregister 
the new ones instead of the old ones.

So the important thing about the {{tag}} argument in 
{{SolrMetricManager.registerGauge}} and {{SolrMetricManager.unregisterGauges}} 
is this: the tag represents an object or a group of objects with the same 
life-cycle. Until now the tag was generated by {{SolrCore}} and passed to all 
its components because they had the same lifecycle. Also, it was {{SolrCore}} 
(via {{SolrCoreMetricManager}}) that would call {{unregisterGauges}} on behalf 
of all its components.

Now, if the life-cycle of components is different from that of {{SolrCore}} 
then we need to make sure of two things:
* the {{tag}} properly represents the object or group of objects with the same 
life-cycle - so if e.g. SolrCache-s can be reloaded without reloading SolrCore 
then they should no longer use the same tag as their parent SolrCore. Instead 
they should generate their own tags.
* each component must now be responsible for unregistering its own gauges, as 
identified by its own tag. We can strongly encourage implementors to do this 
in each component's {{AutoCloseable.close()}}, but I don't see any easy way to 
actually enforce it.

This approach doesn't require keeping actual references to gauges in each 
component.
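
An illustrative sketch of this idea, using plain Dropwizard metrics for 
brevity (the actual {{SolrMetricManager}} signatures differ; this is not the 
proposed patch):
{code:java}
import com.codahale.metrics.Gauge;
import com.codahale.metrics.MetricRegistry;

public class TaggedComponent implements AutoCloseable {
  private final MetricRegistry registry;
  // the tag identifies this instance's life-cycle, not its parent core's
  private final String tag = Integer.toHexString(System.identityHashCode(this));

  public TaggedComponent(MetricRegistry registry) {
    this.registry = registry;
    // register gauges under names that embed the component's own tag
    registry.register("cache.size." + tag, (Gauge<Integer>) () -> 42);
  }

  @Override
  public void close() {
    // unregister exactly this component's gauges - no references kept
    registry.removeMatching((name, metric) -> name.endsWith("." + tag));
  }
}
{code}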

For convenience we could also extend the concept of {{tag}} so that it's 
multi-valued - eg. a cache would use its own tag and the parent SolrCore-s tag. 
This way both the cache and the SolrCore could each easily unregister gauges 
that they (or their parent) created.

> All Metrics Gauges should be unregistered by the objects that registered them
> -
>
> Key: SOLR-13677
> URL: https://issues.apache.org/jira/browse/SOLR-13677
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: metrics
>Reporter: Noble Paul
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The life cycle of metrics producers is managed by the core (mostly). So, if 
> the lifecycle of an object is different from that of the core itself, such 
> objects will never be unregistered from the metrics registry. This will lead 
> to memory leaks.






[jira] [Comment Edited] (SOLR-9658) Caches should have an optional way to clean if idle for 'x' mins

2019-08-06 Thread Andrzej Bialecki (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-9658?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16901430#comment-16901430
 ] 

Andrzej Bialecki  edited comment on SOLR-9658 at 8/6/19 8:25 PM:
-

The latest patch also adds support for {{maxIdleTime}} to {{FastLRUCache}}.
As with {{LFUCache}}, if there's a cleanup thread it will wake up every 
{{maxIdleTime}} to sweep and evict entries regardless of {{put}}-s.
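
In configuration this might look roughly as follows (a sketch based on this 
issue's description, not a tested config):
{code:xml}
<!-- evict entries not touched for 10 minutes, even with no new puts -->
<filterCache class="solr.FastLRUCache"
             size="512"
             autowarmCount="0"
             maxIdleTime="600"/>
{code}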


was (Author: ab):
The latest patch also adds support for {{maxIdleTime}} to {{FastLRUCache}}.

> Caches should have an optional way to clean if idle for 'x' mins
> 
>
> Key: SOLR-9658
> URL: https://issues.apache.org/jira/browse/SOLR-9658
> Project: Solr
>  Issue Type: New Feature
>Reporter: Noble Paul
>Assignee: Andrzej Bialecki 
>Priority: Major
> Attachments: SOLR-9658.patch, SOLR-9658.patch
>
>
> If a cache is idle for long, it consumes precious memory. It should be 
> configurable to clear the cache if it has not been accessed for 'x' secs. The 
> cache configuration can have an extra config, {{maxIdleTime}}; if we wish it 
> to be cleaned after 10 mins of inactivity, set {{maxIdleTime=600}}. 
> [~dragonsinth] would this be a solution for the memory leak you mentioned?






[jira] [Updated] (SOLR-9658) Caches should have an optional way to clean if idle for 'x' mins

2019-08-06 Thread Andrzej Bialecki (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-9658?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrzej Bialecki  updated SOLR-9658:

Fix Version/s: 8.3

> Caches should have an optional way to clean if idle for 'x' mins
> 
>
> Key: SOLR-9658
> URL: https://issues.apache.org/jira/browse/SOLR-9658
> Project: Solr
>  Issue Type: New Feature
>Reporter: Noble Paul
>Assignee: Andrzej Bialecki 
>Priority: Major
> Fix For: 8.3
>
> Attachments: SOLR-9658.patch, SOLR-9658.patch
>
>
> If a cache is idle for long, it consumes precious memory. It should be 
> configurable to clear the cache if it has not been accessed for 'x' secs. The 
> cache configuration can have an extra config, {{maxIdleTime}}; if we wish it 
> to be cleaned after 10 mins of inactivity, set {{maxIdleTime=600}}. 
> [~dragonsinth] would this be a solution for the memory leak you mentioned?






[jira] [Commented] (SOLR-9658) Caches should have an optional way to clean if idle for 'x' mins

2019-08-06 Thread Andrzej Bialecki (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-9658?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16901430#comment-16901430
 ] 

Andrzej Bialecki  commented on SOLR-9658:
-

The latest patch also adds support for {{maxIdleTime}} to {{FastLRUCache}}.

> Caches should have an optional way to clean if idle for 'x' mins
> 
>
> Key: SOLR-9658
> URL: https://issues.apache.org/jira/browse/SOLR-9658
> Project: Solr
>  Issue Type: New Feature
>Reporter: Noble Paul
>Assignee: Andrzej Bialecki 
>Priority: Major
> Attachments: SOLR-9658.patch, SOLR-9658.patch
>
>
> If a cache is idle for long, it consumes precious memory. It should be 
> configurable to clear the cache if it has not been accessed for 'x' secs. The 
> cache configuration can have an extra config, {{maxIdleTime}}; if we wish it 
> to be cleaned after 10 mins of inactivity, set {{maxIdleTime=600}}. 
> [~dragonsinth] would this be a solution for the memory leak you mentioned?






[jira] [Updated] (SOLR-9658) Caches should have an optional way to clean if idle for 'x' mins

2019-08-06 Thread Andrzej Bialecki (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-9658?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrzej Bialecki  updated SOLR-9658:

Attachment: SOLR-9658.patch

> Caches should have an optional way to clean if idle for 'x' mins
> 
>
> Key: SOLR-9658
> URL: https://issues.apache.org/jira/browse/SOLR-9658
> Project: Solr
>  Issue Type: New Feature
>Reporter: Noble Paul
>Assignee: Andrzej Bialecki 
>Priority: Major
> Attachments: SOLR-9658.patch, SOLR-9658.patch
>
>
> If a cache is idle for long, it consumes precious memory. It should be 
> configurable to clear the cache if it has not been accessed for 'x' secs. The 
> cache configuration can have an extra config, {{maxIdleTime}}; if we wish it 
> to be cleaned after 10 mins of inactivity, set {{maxIdleTime=600}}. 
> [~dragonsinth] would this be a solution for the memory leak you mentioned?






[jira] [Commented] (SOLR-13677) All Metrics Gauges should be unregistered by the objects that registered them

2019-08-06 Thread Noble Paul (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13677?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16901418#comment-16901418
 ] 

Noble Paul commented on SOLR-13677:
---

[~cpoerschke], the problem is not with adding those {{remember()}}/{{forget()}} 
methods; there is no way to ensure that those methods are invoked. We may be 
able to check that today, but who will ensure it for future components? The 
advantage of piggybacking on {{close()}} is that it is a well-known pattern, 
and close is almost always called.

> All Metrics Gauges should be unregistered by the objects that registered them
> -
>
> Key: SOLR-13677
> URL: https://issues.apache.org/jira/browse/SOLR-13677
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: metrics
>Reporter: Noble Paul
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The life cycle of metrics producers is managed by the core (mostly). So, if 
> the lifecycle of an object is different from that of the core itself, such 
> objects will never be unregistered from the metrics registry. This will lead 
> to memory leaks.






[JENKINS] Lucene-Solr-BadApples-Tests-master - Build # 437 - Failure

2019-08-06 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-BadApples-Tests-master/437/

1 tests failed.
FAILED:  org.apache.solr.search.facet.TestCloudJSONFacetJoinDomain.testRandom

Error Message:
Error from server at 
http://127.0.0.1:37249/solr/org.apache.solr.search.facet.TestCloudJSONFacetJoinDomain_collection:
 Error from server at null: Expected mime type application/octet-stream but got 
text/html.Error 500 Server Error 
 HTTP ERROR 500 Problem accessing 
/solr/org.apache.solr.search.facet.TestCloudJSONFacetJoinDomain_collection_shard1_replica_n1/select.
 Reason: Server ErrorCaused 
by:java.lang.AssertionError  at 
java.base/java.util.HashMap$TreeNode.moveRootToFront(HashMap.java:1896)  at 
java.base/java.util.HashMap$TreeNode.putTreeVal(HashMap.java:2061)  at 
java.base/java.util.HashMap.putVal(HashMap.java:633)  at 
java.base/java.util.HashMap.put(HashMap.java:607)  at 
org.apache.solr.search.LRUCache.put(LRUCache.java:201)  at 
org.apache.solr.search.SolrCacheHolder.put(SolrCacheHolder.java:46)  at 
org.apache.solr.search.SolrIndexSearcher.getDocListC(SolrIndexSearcher.java:1449)
  at 
org.apache.solr.search.SolrIndexSearcher.search(SolrIndexSearcher.java:568)  at 
org.apache.solr.handler.component.QueryComponent.doProcessUngroupedSearch(QueryComponent.java:1484)
  at 
org.apache.solr.handler.component.QueryComponent.process(QueryComponent.java:398)
  at 
org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:305)
  at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:199)
  at org.apache.solr.core.SolrCore.execute(SolrCore.java:2581)  at 
org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:780)  at 
org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:566)  at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:423)
  at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:350)
  at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1610)
  at 
org.apache.solr.client.solrj.embedded.JettySolrRunner$DebugFilter.doFilter(JettySolrRunner.java:165)
  at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1610)
  at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:540) 
 at 
org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:255)
  at 
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1711)
  at 
org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:255)
  at 
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1347)
  at 
org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:203)
  at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:480)  
at 
org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1678)
  at 
org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:201)
  at 
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1249)
  at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:144)  
at 
org.eclipse.jetty.server.handler.gzip.GzipHandler.handle(GzipHandler.java:703)  
at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132) 
 at org.eclipse.jetty.server.Server.handle(Server.java:505)  at 
org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:370)  at 
org.eclipse.jetty.server.HttpChannel.run(HttpChannel.java:311)  at 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:781)
  at 
org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:917)
  at java.base/java.lang.Thread.run(Thread.java:834)  http://eclipse.org/jetty";>Powered by Jetty:// 9.4.19.v20190610  
  

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at 
http://127.0.0.1:37249/solr/org.apache.solr.search.facet.TestCloudJSONFacetJoinDomain_collection:
 Error from server at null: Expected mime type application/octet-stream but got 
text/html. 


Error 500 Server Error

HTTP ERROR 500
Problem accessing 
/solr/org.apache.solr.search.facet.TestCloudJSONFacetJoinDomain_collection_shard1_replica_n1/select.
 Reason:
Server ErrorCaused by:java.lang.AssertionError
at 
java.base/java.util.HashMap$TreeNode.moveRootToFront(HashMap.java:1896)
at java.base/java.util.HashMap$TreeNode.putTreeVal(HashMap.java:2061)
at java.base/java.util.HashMap.putVal(HashMap.java:633)
at java.base/java.util.HashMap.put(HashMap.java:607)
at org.apache.solr.search.LRUCache.put(LRUCache.java:201)
at org.apache.solr.search.SolrCacheHolder.put(SolrCacheHolder.java:46)
at 
org.apache.solr.search.SolrIndexSearcher.getDocListC(SolrIndexSearcher.java:1449)
at 
org.apache.solr.search.SolrIndexSearcher.search(SolrIndexSearcher.java:568)
 

[JENKINS] Lucene-Solr-8.x-Linux (64bit/jdk1.8.0_201) - Build # 978 - Unstable!

2019-08-06 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-8.x-Linux/978/
Java: 64bit/jdk1.8.0_201 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  org.apache.solr.search.facet.TestCloudJSONFacetSKG.testRandom

Error Message:
Error from server at 
https://127.0.0.1:32971/solr/org.apache.solr.search.facet.TestCloudJSONFacetSKG_collection:
 Error from server at null: Expected mime type application/octet-stream but got 
text/html.Error 500 Server Error 
 HTTP ERROR 500 Problem accessing 
/solr/org.apache.solr.search.facet.TestCloudJSONFacetSKG_collection_shard2_replica_n2/select.
 Reason: Server ErrorCaused 
by:java.lang.AssertionError  at 
java.util.HashMap$TreeNode.moveRootToFront(HashMap.java:1849)  at 
java.util.HashMap$TreeNode.putTreeVal(HashMap.java:2014)  at 
java.util.HashMap.putVal(HashMap.java:638)  at 
java.util.HashMap.put(HashMap.java:612)  at 
org.apache.solr.search.LRUCache.put(LRUCache.java:201)  at 
org.apache.solr.search.SolrCacheHolder.put(SolrCacheHolder.java:46)  at 
org.apache.solr.search.SolrIndexSearcher.getDocListC(SolrIndexSearcher.java:1449)
  at 
org.apache.solr.search.SolrIndexSearcher.search(SolrIndexSearcher.java:568)  at 
org.apache.solr.handler.component.QueryComponent.doProcessUngroupedSearch(QueryComponent.java:1484)
  at 
org.apache.solr.handler.component.QueryComponent.process(QueryComponent.java:398)
  at 
org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:305)
  at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:199)
  at org.apache.solr.core.SolrCore.execute(SolrCore.java:2592)  at 
org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:780)  at 
org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:566)  at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:423)
  at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:350)
  at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1610)
  at 
org.apache.solr.client.solrj.embedded.JettySolrRunner$DebugFilter.doFilter(JettySolrRunner.java:165)
  at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1610)
  at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:540) 
 at 
org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:255)
  at 
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1711)
  at 
org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:255)
  at 
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1347)
  at 
org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:203)
  at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:480)  
at 
org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1678)
  at 
org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:201)
  at 
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1249)
  at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:144)  
at 
org.eclipse.jetty.server.handler.gzip.GzipHandler.handle(GzipHandler.java:703)  
at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132) 
 at org.eclipse.jetty.server.Server.handle(Server.java:505)  at 
org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:370)  at 
org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:267)  at 
org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:305)
  at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:103)  at 
org.eclipse.jetty.io.ssl.SslConnection$DecryptedEndPoint.onFillable(SslConnection.java:427)
  at org.eclipse.jetty.io.ssl.SslConnection.onFillable(SslConnection.java:321)  
at org.eclipse.jetty.io.ssl.SslConnection$2.succeeded(SslConnection.java:159)  
at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:103)  at 
org.eclipse.jetty.io.ChannelEndPoint$2.run(ChannelEndPoint.java:117)  at 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:781)
  at 
org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:917)
  at java.lang.Thread.run(Thread.java:748)  http://eclipse.org/jetty";>Powered by Jetty:// 9.4.19.v20190610  
  

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at 
https://127.0.0.1:32971/solr/org.apache.solr.search.facet.TestCloudJSONFacetSKG_collection:
 Error from server at null: Expected mime type application/octet-stream but got 
text/html. 


Error 500 Server Error

HTTP ERROR 500
Problem accessing 
/solr/org.apache.solr.search.facet.TestCloudJSONFacetSKG_collection_shard2_replica_n2/select.
 Reason:
Server ErrorCaused by:java.lang.AssertionError
at java.util.HashMap$TreeNode.moveRootToFront(HashM

[JENKINS] Lucene-Solr-master-Windows (64bit/jdk-11.0.3) - Build # 8072 - Still Unstable!

2019-08-06 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/8072/
Java: 64bit/jdk-11.0.3 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

7 tests failed.
FAILED:  
org.apache.solr.handler.admin.ZookeeperStatusHandlerTest.monitorZookeeper

Error Message:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at http://127.0.0.1:60499/solr: Server Error  request: 
http://127.0.0.1:60499/solr/admin/zookeeper/status?wt=json&version=1

Stack Trace:
java.util.concurrent.ExecutionException: 
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at http://127.0.0.1:60499/solr: Server Error

request: http://127.0.0.1:60499/solr/admin/zookeeper/status?wt=json&version=1
at 
__randomizedtesting.SeedInfo.seed([CE2F9AC11602F75:99053AF515810D98]:0)
at java.base/java.util.concurrent.FutureTask.report(FutureTask.java:122)
at java.base/java.util.concurrent.FutureTask.get(FutureTask.java:205)
at 
org.apache.solr.handler.admin.ZookeeperStatusHandlerTest.monitorZookeeper(ZookeeperStatusHandlerTest.java:76)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.

[jira] [Commented] (SOLR-13678) ZkStateReader.removeCollectionPropsWatcher can deadlock with concurrent zkCallback thread on props watcher

2019-08-06 Thread JIRA


[ 
https://issues.apache.org/jira/browse/SOLR-13678?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16901371#comment-16901371
 ] 

Tomás Fernández Löbbe commented on SOLR-13678:
--

Thanks Hoss. I'll try to take a look as soon as I can.

> ZkStateReader.removeCollectionPropsWatcher can deadlock with concurrent 
> zkCallback thread on props watcher
> --
>
> Key: SOLR-13678
> URL: https://issues.apache.org/jira/browse/SOLR-13678
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>Priority: Major
> Attachments: collectionpropswatcher-deadlock-jstack.txt
>
>
> while investigating an (unrelated) test bug in CollectionPropsTest I 
> discovered a deadlock situation that can occur when calling 
> {{ZkStateReader.removeCollectionPropsWatcher()}} if a zkCallback thread tries 
> to concurrently fire the watchers set on the collection props.
> {{ZkStateReader.removeCollectionPropsWatcher()}} is itself called when a 
> {{CollectionPropsWatcher.onStateChanged()}} impl returns "true" -- meaning 
> that IIUC any usage of {{CollectionPropsWatcher}} could potentially result in 
> this type of deadlock situation. 
> {noformat}
> "TEST-CollectionPropsTest.testReadWriteCached-seed#[D3C6921874D1CFEB]" #15 
> prio=5 os_prio=0 cpu=567.78ms elapsed=682.12s tid=0x7
> fa5e8343800 nid=0x3f61 waiting for monitor entry  [0x7fa62d222000]
>java.lang.Thread.State: BLOCKED (on object monitor)
> at 
> org.apache.solr.common.cloud.ZkStateReader.lambda$removeCollectionPropsWatcher$20(ZkStateReader.java:2001)
> - waiting to lock <0xe6207500> (a 
> java.util.concurrent.ConcurrentHashMap)
> at 
> org.apache.solr.common.cloud.ZkStateReader$$Lambda$617/0x0001006c1840.apply(Unknown
>  Source)
> at 
> java.util.concurrent.ConcurrentHashMap.compute(java.base@11.0.3/ConcurrentHashMap.java:1932)
> - locked <0xeb9156b8> (a 
> java.util.concurrent.ConcurrentHashMap$Node)
> at 
> org.apache.solr.common.cloud.ZkStateReader.removeCollectionPropsWatcher(ZkStateReader.java:1994)
> at 
> org.apache.solr.cloud.CollectionPropsTest.testReadWriteCached(CollectionPropsTest.java:125)
> ...
> "zkCallback-88-thread-2" #213 prio=5 os_prio=0 cpu=14.06ms elapsed=672.65s 
> tid=0x7fa6041bf000 nid=0x402f waiting for monitor ent
> ry  [0x7fa5b8f39000]
>java.lang.Thread.State: BLOCKED (on object monitor)
> at 
> java.util.concurrent.ConcurrentHashMap.compute(java.base@11.0.3/ConcurrentHashMap.java:1923)
> - waiting to lock <0xeb9156b8> (a 
> java.util.concurrent.ConcurrentHashMap$Node)
> at 
> org.apache.solr.common.cloud.ZkStateReader$PropsNotification.(ZkStateReader.java:2262)
> at 
> org.apache.solr.common.cloud.ZkStateReader.notifyPropsWatchers(ZkStateReader.java:2243)
> at 
> org.apache.solr.common.cloud.ZkStateReader$PropsWatcher.refreshAndWatch(ZkStateReader.java:1458)
> - locked <0xe6207500> (a 
> java.util.concurrent.ConcurrentHashMap)
> at 
> org.apache.solr.common.cloud.ZkStateReader$PropsWatcher.process(ZkStateReader.java:1440)
> at 
> org.apache.solr.common.cloud.SolrZkClient$ProcessWatchWithExecutor.lambda$process$1(SolrZkClient.java:838)
> at 
> org.apache.solr.common.cloud.SolrZkClient$ProcessWatchWithExecutor$$Lambda$253/0x0001004a4440.run(Unknown
>  Source)
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(java.base@11.0.3/Executors.java:515)
> at 
> java.util.concurrent.FutureTask.run(java.base@11.0.3/FutureTask.java:264)
> at 
> org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:209)
> at 
> org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor$$Lambda$140/0x000100308c40.run(Unknown
>  Source)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.3/ThreadPoolExecutor.java:1128)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.3/ThreadPoolExecutor.java:628)
> at java.lang.Thread.run(java.base@11.0.3/Thread.java:834)
> {noformat}
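
For readers unfamiliar with this failure mode, a minimal standalone sketch of 
the same inverse lock ordering (hypothetical code, not Solr's): one thread 
takes a monitor and then needs the map's bin lock via {{compute()}}, while 
another thread inside a {{compute()}} remapping function, already holding the 
bin lock, then wants the monitor.
{noformat}
import java.util.concurrent.ConcurrentHashMap;

public class ComputeDeadlockSketch {
  static final Object monitor = new Object();
  static final ConcurrentHashMap<String, String> map = new ConcurrentHashMap<>();

  public static void main(String[] args) {
    map.put("props", "v");

    Thread b = new Thread(() -> map.compute("props", (k, v) -> {
      sleep(100);                          // hold the bin lock while A grabs the monitor
      synchronized (monitor) { return v; } // ...then block on the monitor: deadlock
    }));

    Thread a = new Thread(() -> {
      sleep(50);
      synchronized (monitor) {             // take the monitor first
        map.compute("props", (k, v) -> v); // ...then block on the bin lock
      }
    });

    b.start();
    a.start();
  }

  static void sleep(long ms) {
    try { Thread.sleep(ms); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
  }
}
{noformat}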



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] [lucene-solr] cpoerschke commented on a change in pull request #300: SOLR-11831: Skip second grouping step if group.limit is 1 (aka Las Vegas Patch)

2019-08-06 Thread GitBox
cpoerschke commented on a change in pull request #300: SOLR-11831: Skip second 
grouping step if group.limit is 1 (aka Las Vegas Patch)
URL: https://github.com/apache/lucene-solr/pull/300#discussion_r311203284
 
 

 ##
 File path: 
solr/core/src/java/org/apache/solr/search/grouping/endresulttransformer/GroupedEndResultTransformer.java
 ##
 @@ -75,7 +75,13 @@ public void transform(Map result, 
ResponseBuilder rb, SolrDocumentSou
   SimpleOrderedMap groupResult = new SimpleOrderedMap<>();
   if (group.groupValue != null) {
 // use createFields so that fields having doc values are also 
supported
-List fields = 
groupField.createFields(group.groupValue.utf8ToString());
+final String groupValue;
+if (rb.getGroupingSpec().isSkipSecondGroupingStep()) {
+  groupValue = 
groupField.getType().indexedToReadable(group.groupValue.utf8ToString());
 
 Review comment:
   I struggled (again) to comprehend why this change is needed. 
https://github.com/cpoerschke/lucene-solr/commit/20129e7d3f7e12f442254e780e7da9a590a9036b
 proposes to factor out a local `bytesRefToString` functor with fairly detailed 
comments. What do you think?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13399) compositeId support for shard splitting

2019-08-06 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13399?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16901349#comment-16901349
 ] 

ASF subversion and git services commented on SOLR-13399:


Commit d8f99a9986835507d19b70edf0ff280416104788 in lucene-solr's branch 
refs/heads/branch_8x from Yonik Seeley
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=d8f99a9 ]

SOLR-13399: ability to use id field for compositeId histogram


> compositeId support for shard splitting
> ---
>
> Key: SOLR-13399
> URL: https://issues.apache.org/jira/browse/SOLR-13399
> Project: Solr
>  Issue Type: New Feature
>Reporter: Yonik Seeley
>Assignee: Yonik Seeley
>Priority: Major
> Fix For: 8.3
>
> Attachments: SOLR-13399.patch, SOLR-13399.patch, 
> SOLR-13399_testfix.patch, SOLR-13399_useId.patch
>
>
> Shard splitting does not currently have a way to automatically take into 
> account the actual distribution (number of documents) in each hash bucket 
> created by using compositeId hashing.
> We should probably add a parameter *splitByPrefix* to the *SPLITSHARD* 
> command that would look at the number of docs sharing each compositeId prefix 
> and use that to create roughly equal sized buckets by document count rather 
> than just assuming an equal distribution across the entire hash range.
> Like normal shard splitting, we should bias against splitting within hash 
> buckets unless necessary (since that leads to larger query fanout.) . Perhaps 
> this warrants a parameter that would control how much of a size mismatch is 
> tolerable before resorting to splitting within a bucket. 
> *allowedSizeDifference*?
> To more quickly calculate the number of docs in each bucket, we could index 
> the prefix in a different field.  Iterating over the terms for this field 
> would quickly give us the number of docs in each (i.e lucene keeps track of 
> the doc count for each term already.)  Perhaps the implementation could be a 
> flag on the *id* field... something like *indexPrefixes* and poly-fields that 
> would cause the indexing to be automatically done and alleviate having to 
> pass in an additional field during indexing and during the call to 
> *SPLITSHARD*.  This whole part is an optimization though and could be split 
> off into its own issue if desired.
>  
>  
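
A minimal sketch of the "iterate over the terms for this field" part, assuming 
a hypothetical indexed prefix field named {{id_prefix}} (Lucene already 
maintains the per-term doc count this relies on):
{noformat}
import java.io.IOException;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.MultiTerms;
import org.apache.lucene.index.Terms;
import org.apache.lucene.index.TermsEnum;
import org.apache.lucene.util.BytesRef;

public class PrefixHistogram {
  // Prints how many docs share each compositeId prefix, via per-term docFreq.
  public static void print(IndexReader reader) throws IOException {
    Terms terms = MultiTerms.getTerms(reader, "id_prefix"); // hypothetical field
    if (terms == null) return;
    TermsEnum te = terms.iterator();
    for (BytesRef term = te.next(); term != null; term = te.next()) {
      System.out.println(term.utf8ToString() + " -> " + te.docFreq() + " docs");
    }
  }
}
{noformat}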



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13399) compositeId support for shard splitting

2019-08-06 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13399?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16901348#comment-16901348
 ] 

ASF subversion and git services commented on SOLR-13399:


Commit 19ddcfd282f3b9eccc50da83653674e510229960 in lucene-solr's branch 
refs/heads/master from Yonik Seeley
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=19ddcfd2 ]

SOLR-13399: ability to use id field for compositeId histogram


> compositeId support for shard splitting
> ---
>
> Key: SOLR-13399
> URL: https://issues.apache.org/jira/browse/SOLR-13399
> Project: Solr
>  Issue Type: New Feature
>Reporter: Yonik Seeley
>Assignee: Yonik Seeley
>Priority: Major
> Fix For: 8.3
>
> Attachments: SOLR-13399.patch, SOLR-13399.patch, 
> SOLR-13399_testfix.patch, SOLR-13399_useId.patch
>
>
> Shard splitting does not currently have a way to automatically take into 
> account the actual distribution (number of documents) in each hash bucket 
> created by using compositeId hashing.
> We should probably add a parameter *splitByPrefix* to the *SPLITSHARD* 
> command that would look at the number of docs sharing each compositeId prefix 
> and use that to create roughly equal sized buckets by document count rather 
> than just assuming an equal distribution across the entire hash range.
> Like normal shard splitting, we should bias against splitting within hash 
> buckets unless necessary (since that leads to larger query fanout.) . Perhaps 
> this warrants a parameter that would control how much of a size mismatch is 
> tolerable before resorting to splitting within a bucket. 
> *allowedSizeDifference*?
> To more quickly calculate the number of docs in each bucket, we could index 
> the prefix in a different field.  Iterating over the terms for this field 
> would quickly give us the number of docs in each (i.e lucene keeps track of 
> the doc count for each term already.)  Perhaps the implementation could be a 
> flag on the *id* field... something like *indexPrefixes* and poly-fields that 
> would cause the indexing to be automatically done and alleviate having to 
> pass in an additional field during indexing and during the call to 
> *SPLITSHARD*.  This whole part is an optimization though and could be split 
> off into its own issue if desired.
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] [lucene-solr] cpoerschke commented on a change in pull request #300: SOLR-11831: Skip second grouping step if group.limit is 1 (aka Las Vegas Patch)

2019-08-06 Thread GitBox
cpoerschke commented on a change in pull request #300: SOLR-11831: Skip second 
grouping step if group.limit is 1 (aka Las Vegas Patch)
URL: https://github.com/apache/lucene-solr/pull/300#discussion_r311202450
 
 

 ##
 File path: 
solr/core/src/java/org/apache/solr/search/grouping/distributed/shardresultserializer/SearchGroupsResultTransformer.java
 ##
 @@ -142,4 +150,58 @@ private NamedList 
serializeSearchGroup(Collection> data, S
 return result;
   }
 
+  public static class SkipSecondStepSearchResultResultTransformer extends 
SearchGroupsResultTransformer {
+
+private static final String TOP_DOC_SOLR_ID_KEY = "topDocSolrId";
+private static final String TOP_DOC_SCORE_KEY = "topDocScore";
+private static final String SORTVALUES_KEY = "sortValues";
+
+private final SchemaField uniqueField;
+
+public SkipSecondStepSearchResultResultTransformer(SolrIndexSearcher 
searcher) {
+  super(searcher);
+  this.uniqueField = searcher.getSchema().getUniqueKeyField();
+}
+
+@Override
+protected Object[] getSortValues(Object groupDocs) {
+  NamedList groupInfo = (NamedList) groupDocs;
+  final ArrayList sortValues = (ArrayList) 
groupInfo.get(SORTVALUES_KEY);
+  return sortValues.toArray(new Comparable[sortValues.size()]);
+}
+
+@Override
+protected SearchGroup deserializeOneSearchGroup(SchemaField 
groupField, String groupValue,
+  SortField[] 
groupSortField, Object rawSearchGroupData) {
+  SearchGroup searchGroup = 
super.deserializeOneSearchGroup(groupField, groupValue, groupSortField, 
rawSearchGroupData);
+  NamedList groupInfo = (NamedList) rawSearchGroupData;
+  searchGroup.topDocLuceneId = DocIdSetIterator.NO_MORE_DOCS;
+  searchGroup.topDocScore = (float) groupInfo.get(TOP_DOC_SCORE_KEY);
+  searchGroup.topDocSolrId = groupInfo.get(TOP_DOC_SOLR_ID_KEY);
+  return searchGroup;
+}
+
+@Override
+protected Object serializeOneSearchGroup(SortField[] groupSortField, 
SearchGroup searchGroup) {
+  Document luceneDoc = null;
+  /** Use the lucene id to get the unique solr id so that it can be sent 
to the federator.
+   * The lucene id of a document is not unique across all shards i.e. 
different documents
+   * in different shards could have the same lucene id, whereas the solr 
id is guaranteed
+   * to be unique so this is what we need to return to the federator
+   **/
+  try {
+luceneDoc = searcher.doc(searchGroup.topDocLuceneId, 
Collections.singleton(uniqueField.getName()));
 
 Review comment:
   
https://github.com/cpoerschke/lucene-solr/commit/20129e7d3f7e12f442254e780e7da9a590a9036b
 proposes to introduce a `uniqueFieldNameAsSet` member to avoid successive 
calls allocating identical singleton sets.
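   
   As a fragment sketch (constructor shown for context), the suggested tweak 
amounts to:
   
   private final Set<String> uniqueFieldNameAsSet; // computed once
   
   public SkipSecondStepSearchResultResultTransformer(SolrIndexSearcher searcher) {
     super(searcher);
     this.uniqueField = searcher.getSchema().getUniqueKeyField();
     this.uniqueFieldNameAsSet = Collections.singleton(uniqueField.getName());
   }
   ...
   luceneDoc = searcher.doc(searchGroup.topDocLuceneId, uniqueFieldNameAsSet);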


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] [lucene-solr] cpoerschke commented on a change in pull request #300: SOLR-11831: Skip second grouping step if group.limit is 1 (aka Las Vegas Patch)

2019-08-06 Thread GitBox
cpoerschke commented on a change in pull request #300: SOLR-11831: Skip second 
grouping step if group.limit is 1 (aka Las Vegas Patch)
URL: https://github.com/apache/lucene-solr/pull/300#discussion_r311202125
 
 

 ##
 File path: 
solr/core/src/java/org/apache/solr/search/grouping/distributed/responseprocessor/SkipSecondStepSearchGroupShardResponseProcessor.java
 ##
 @@ -0,0 +1,116 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.solr.search.grouping.distributed.responseprocessor;
+
+import java.util.Collection;
+import java.util.HashMap;
+import java.util.Map;
+
+import org.apache.lucene.search.Sort;
+import org.apache.lucene.search.TotalHits;
+import org.apache.lucene.search.grouping.GroupDocs;
+import org.apache.lucene.search.grouping.SearchGroup;
+import org.apache.lucene.search.grouping.TopGroups;
+import org.apache.lucene.util.BytesRef;
+import org.apache.solr.handler.component.ResponseBuilder;
+import org.apache.solr.handler.component.ShardDoc;
+import org.apache.solr.handler.component.ShardRequest;
+import org.apache.solr.handler.component.ShardResponse;
+import org.apache.solr.search.SolrIndexSearcher;
+import org.apache.solr.search.grouping.GroupingSpecification;
+import 
org.apache.solr.search.grouping.distributed.shardresultserializer.SearchGroupsResultTransformer;
+
+public class SkipSecondStepSearchGroupShardResponseProcessor extends 
SearchGroupShardResponseProcessor {
+
+  @Override
+  protected SearchGroupsResultTransformer 
newSearchGroupsResultTransformer(SolrIndexSearcher solrIndexSearcher) {
+return new 
SearchGroupsResultTransformer.SkipSecondStepSearchResultResultTransformer(solrIndexSearcher);
+  }
+
+  @Override
+  protected SearchGroupsContainer newSearchGroupsContainer(ResponseBuilder rb) 
{
+return new 
SkipSecondStepSearchGroupsContainer(rb.getGroupingSpec().getFields());
+  }
+
+  @Override
+  public void process(ResponseBuilder rb, ShardRequest shardRequest) {
+super.process(rb, shardRequest);
+TopGroupsShardResponseProcessor.fillResultIds(rb);
+  }
+
+  protected static class SkipSecondStepSearchGroupsContainer extends 
SearchGroupsContainer {
+
+private final Map docIdToShard = new HashMap<>();
+
+public SkipSecondStepSearchGroupsContainer(String[] fields) {
+  super(fields);
+}
+
+@Override
+public void addSearchGroups(ShardResponse srsp, String field, 
Collection> searchGroups) {
+  super.addSearchGroups(srsp, field, searchGroups);
+  for (SearchGroup searchGroup : searchGroups) {
+assert(srsp.getShard() != null);
+docIdToShard.put(searchGroup.topDocSolrId, srsp.getShard());
+  }
+}
+
+@Override
+public void addMergedSearchGroups(ResponseBuilder rb, String groupField, 
Collection> mergedTopGroups ) {
+  // TODO: add comment or javadoc re: why this method is overridden as a 
no-op
+}
+
+@Override
+public void addSearchGroupToShards(ResponseBuilder rb, String groupField, 
Collection> mergedTopGroups) {
+  super.addSearchGroupToShards(rb, groupField, mergedTopGroups);
+
+  final GroupingSpecification groupingSpecification = rb.getGroupingSpec();
+  final Sort groupSort = 
groupingSpecification.getGroupSortSpec().getSort();
+
+  GroupDocs[] groups = new GroupDocs[mergedTopGroups.size()];
+
+  // This is the max score found in any document on any group
+  float maxScore = 0;
+  int index = 0;
+
+  for (SearchGroup group : mergedTopGroups) {
+maxScore = Math.max(maxScore, group.topDocScore);
+final String shard = docIdToShard.get(group.topDocSolrId);
+assert(shard != null);
+final ShardDoc sdoc = new ShardDoc();
+sdoc.score = group.topDocScore;
+sdoc.id = group.topDocSolrId;
+sdoc.shard = shard;
+
+groups[index++] = new GroupDocs<>(group.topDocScore,
+group.topDocScore,
+new TotalHits(1, TotalHits.Relation.EQUAL_TO), /* we don't know 
the actual number of hits in the group- we set it to 1 as we only keep track of 
the top doc */
+new ShardDoc[] { sdoc }, /* only top doc */
+group.groupValue,
+group.sortValues);
+  }
+  TopGroups 

[GitHub] [lucene-solr] cpoerschke commented on a change in pull request #300: SOLR-11831: Skip second grouping step if group.limit is 1 (aka Las Vegas Patch)

2019-08-06 Thread GitBox
cpoerschke commented on a change in pull request #300: SOLR-11831: Skip second 
grouping step if group.limit is 1 (aka Las Vegas Patch)
URL: https://github.com/apache/lucene-solr/pull/300#discussion_r311201828
 
 

 ##
 File path: 
solr/core/src/java/org/apache/solr/search/grouping/distributed/responseprocessor/SkipSecondStepSearchGroupShardResponseProcessor.java
 ##
 @@ -0,0 +1,116 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.solr.search.grouping.distributed.responseprocessor;
+
+import java.util.Collection;
+import java.util.HashMap;
+import java.util.Map;
+
+import org.apache.lucene.search.Sort;
+import org.apache.lucene.search.TotalHits;
+import org.apache.lucene.search.grouping.GroupDocs;
+import org.apache.lucene.search.grouping.SearchGroup;
+import org.apache.lucene.search.grouping.TopGroups;
+import org.apache.lucene.util.BytesRef;
+import org.apache.solr.handler.component.ResponseBuilder;
+import org.apache.solr.handler.component.ShardDoc;
+import org.apache.solr.handler.component.ShardRequest;
+import org.apache.solr.handler.component.ShardResponse;
+import org.apache.solr.search.SolrIndexSearcher;
+import org.apache.solr.search.grouping.GroupingSpecification;
+import 
org.apache.solr.search.grouping.distributed.shardresultserializer.SearchGroupsResultTransformer;
+
+public class SkipSecondStepSearchGroupShardResponseProcessor extends 
SearchGroupShardResponseProcessor {
+
+  @Override
+  protected SearchGroupsResultTransformer 
newSearchGroupsResultTransformer(SolrIndexSearcher solrIndexSearcher) {
+return new 
SearchGroupsResultTransformer.SkipSecondStepSearchResultResultTransformer(solrIndexSearcher);
+  }
+
+  @Override
+  protected SearchGroupsContainer newSearchGroupsContainer(ResponseBuilder rb) 
{
+return new 
SkipSecondStepSearchGroupsContainer(rb.getGroupingSpec().getFields());
+  }
+
+  @Override
+  public void process(ResponseBuilder rb, ShardRequest shardRequest) {
+super.process(rb, shardRequest);
+TopGroupsShardResponseProcessor.fillResultIds(rb);
+  }
+
+  protected static class SkipSecondStepSearchGroupsContainer extends 
SearchGroupsContainer {
+
+private final Map docIdToShard = new HashMap<>();
+
+public SkipSecondStepSearchGroupsContainer(String[] fields) {
+  super(fields);
+}
+
+@Override
+public void addSearchGroups(ShardResponse srsp, String field, 
Collection> searchGroups) {
+  super.addSearchGroups(srsp, field, searchGroups);
+  for (SearchGroup searchGroup : searchGroups) {
+assert(srsp.getShard() != null);
+docIdToShard.put(searchGroup.topDocSolrId, srsp.getShard());
+  }
+}
+
+@Override
+public void addMergedSearchGroups(ResponseBuilder rb, String groupField, 
Collection> mergedTopGroups ) {
+  // TODO: add comment or javadoc re: why this method is overridden as a 
no-op
+}
+
+@Override
+public void addSearchGroupToShards(ResponseBuilder rb, String groupField, 
Collection> mergedTopGroups) {
+  super.addSearchGroupToShards(rb, groupField, mergedTopGroups);
+
+  final GroupingSpecification groupingSpecification = rb.getGroupingSpec();
+  final Sort groupSort = 
groupingSpecification.getGroupSortSpec().getSort();
+
+  GroupDocs[] groups = new GroupDocs[mergedTopGroups.size()];
+
+  // This is the max score found in any document on any group
+  float maxScore = 0;
+  int index = 0;
+
+  for (SearchGroup group : mergedTopGroups) {
+maxScore = Math.max(maxScore, group.topDocScore);
+final String shard = docIdToShard.get(group.topDocSolrId);
+assert(shard != null);
+final ShardDoc sdoc = new ShardDoc();
+sdoc.score = group.topDocScore;
+sdoc.id = group.topDocSolrId;
+sdoc.shard = shard;
+
+groups[index++] = new GroupDocs<>(group.topDocScore,
 
 Review comment:
   Passing `Float.NaN` for the first (`score`) argument looks to be 
[possible](https://github.com/apache/lucene-solr/blob/releases/lucene-solr/8.2.0/lucene/grouping/src/java/org/apache/lucene/search/grouping/GroupDocs.java#L33-L35)
 and seems more accurate here?
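   
   With that change the construction would look roughly like (sketch of the 
quoted code above, with only the first argument changed):
   
   groups[index++] = new GroupDocs<>(Float.NaN, // per-group score not computed here
       group.topDocScore,                       // maxScore: score of the one kept doc
       new TotalHits(1, TotalHits.Relation.EQUAL_TO),
       new ShardDoc[] { sdoc },
       group.groupValue,
       group.sortValues);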


Thi

[GitHub] [lucene-solr] cpoerschke commented on issue #300: SOLR-11831: Skip second grouping step if group.limit is 1 (aka Las Vegas Patch)

2019-08-06 Thread GitBox
cpoerschke commented on issue #300: SOLR-11831: Skip second grouping step if 
group.limit is 1 (aka Las Vegas Patch)
URL: https://github.com/apache/lucene-solr/pull/300#issuecomment-518781007
 
 
   Thanks again @diegoceccarelli for rebasing on top of the current master! 
With the serialise/deserialiseOneGroup methods in place the 
`SearchGroupsResultTransformer` in particular is really nice to read and 
understand now, even and especially after some time away from the code.
   
   My 
https://github.com/cpoerschke/lucene-solr/commit/20129e7d3f7e12f442254e780e7da9a590a9036b
 commit has a couple of tweaks and suggestions, lumped together as one commit 
(sorry!) but do feel free to look and selectively pick as you like.
   
   I'll also go and annotate on the PR here the suggested tweaks that are _not_ 
related to code comprehension or style consistency. And then next (not today) 
I'm planning to look at the tests; the `.adoc` documentation changes are also 
on the known to-do list.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9658) Caches should have an optional way to clean if idle for 'x' mins

2019-08-06 Thread Andrzej Bialecki (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-9658?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16901327#comment-16901327
 ] 

Andrzej Bialecki  commented on SOLR-9658:
-

This patch implements the following:
 * adds {{maxIdleTime}} config param to {{LFUCache}} and {{LRUCache}}.
 * {{FastLRUCache}} is in the works but the eviction algorithm there is quite 
complicated and I'm not sure I fully understand it... Having said that, maybe 
we should do an additional full sweep and remove expired entries regardless of 
the existing algorithm.
 * entries are expired on {{put}}. If a cleanup thread is used in {{LFUCache}} 
then it also wakes up every {{maxIdleTime}} to expire old entries even if 
there's no {{put}}.
 * cache entries are now marked with epoch time (ns) instead of the generation 
counter. This allows us to evict entries based on real elapsed time, and using 
epoch time makes debugging somewhat easier at no additional cost compared to 
nano time.
 * unit tests
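
As a rough illustration of the {{maxIdleTime}} semantics, a minimal sketch 
(not the patch itself -- the real caches also keep their LRU/LFU bookkeeping 
and optional cleanup thread):
{noformat}
import java.util.Iterator;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.concurrent.TimeUnit;

// Entries carry a last-accessed timestamp and are expired on put(),
// mirroring the behavior described above.
class IdleEvictingCache<K, V> {
  private static class Entry<V> {
    final V value;
    long lastAccessNanos;
    Entry(V value, long now) { this.value = value; this.lastAccessNanos = now; }
  }

  private final Map<K, Entry<V>> map = new LinkedHashMap<>();
  private final long maxIdleNanos;

  IdleEvictingCache(long maxIdleTimeSec) {
    this.maxIdleNanos = TimeUnit.SECONDS.toNanos(maxIdleTimeSec);
  }

  synchronized V get(K key) {
    Entry<V> e = map.get(key);
    if (e == null) return null;
    e.lastAccessNanos = System.nanoTime(); // touch on access
    return e.value;
  }

  synchronized void put(K key, V value) {
    long now = System.nanoTime();
    Iterator<Entry<V>> it = map.values().iterator();
    while (it.hasNext()) {                 // expire idle entries on put
      if (now - it.next().lastAccessNanos > maxIdleNanos) it.remove();
    }
    map.put(key, new Entry<>(value, now));
  }
}
{noformat}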

> Caches should have an optional way to clean if idle for 'x' mins
> 
>
> Key: SOLR-9658
> URL: https://issues.apache.org/jira/browse/SOLR-9658
> Project: Solr
>  Issue Type: New Feature
>Reporter: Noble Paul
>Assignee: Andrzej Bialecki 
>Priority: Major
> Attachments: SOLR-9658.patch
>
>
> If a cache is idle for long, it consumes precious memory. It should be 
> configurable to clear the cache if it was not accessed for 'x' secs. The 
> cache configuration can have an extra config {{maxIdleTime}}. If we wish it 
> to be cleaned after 10 mins of inactivity, set it to {{maxIdleTime=600}}. 
> [~dragonsinth] would it be a solution for the memory leak you mentioned?



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-9658) Caches should have an optional way to clean if idle for 'x' mins

2019-08-06 Thread Andrzej Bialecki (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-9658?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrzej Bialecki  updated SOLR-9658:

Attachment: SOLR-9658.patch

> Caches should have an optional way to clean if idle for 'x' mins
> 
>
> Key: SOLR-9658
> URL: https://issues.apache.org/jira/browse/SOLR-9658
> Project: Solr
>  Issue Type: New Feature
>Reporter: Noble Paul
>Assignee: Andrzej Bialecki 
>Priority: Major
> Attachments: SOLR-9658.patch
>
>
> If a cache is idle for long, it consumes precious memory. It should be 
> configurable to clear the cache if it was not accessed for 'x' secs. The 
> cache configuration can have an extra config {{maxIdleTime}}. If we wish it 
> to be cleaned after 10 mins of inactivity, set it to {{maxIdleTime=600}}. 
> [~dragonsinth] would it be a solution for the memory leak you mentioned?



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-8947) Indexing fails with "too many tokens for field" when using custom term frequencies

2019-08-06 Thread Michael McCandless (JIRA)
Michael McCandless created LUCENE-8947:
--

 Summary: Indexing fails with "too many tokens for field" when 
using custom term frequencies
 Key: LUCENE-8947
 URL: https://issues.apache.org/jira/browse/LUCENE-8947
 Project: Lucene - Core
  Issue Type: Improvement
Affects Versions: 7.5
Reporter: Michael McCandless


We are using custom term frequencies (LUCENE-7854) to index per-token scoring 
signals; however, for one document that had many tokens, each carrying fairly 
large (~998,000) scoring signals, we hit this exception:
{noformat}
2019-08-05T21:32:37,048 [ERROR] (LuceneIndexing-3-thread-3) 
com.amazon.lucene.index.IndexGCRDocument: Failed to index doc: 
java.lang.IllegalArgumentException: too many tokens for field "foobar"
at 
org.apache.lucene.index.DefaultIndexingChain$PerField.invert(DefaultIndexingChain.java:825)
at 
org.apache.lucene.index.DefaultIndexingChain.processField(DefaultIndexingChain.java:430)
at 
org.apache.lucene.index.DefaultIndexingChain.processDocument(DefaultIndexingChain.java:394)
at 
org.apache.lucene.index.DocumentsWriterPerThread.updateDocuments(DocumentsWriterPerThread.java:297)
at 
org.apache.lucene.index.DocumentsWriter.updateDocuments(DocumentsWriter.java:450)
at org.apache.lucene.index.IndexWriter.updateDocuments(IndexWriter.java:1291)
at org.apache.lucene.index.IndexWriter.addDocuments(IndexWriter.java:1264)
{noformat}
This is happening in this code in {{DefaultIndexingChain.java}}:
{noformat}
  try {
    invertState.length = Math.addExact(invertState.length,
        invertState.termFreqAttribute.getTermFrequency());
  } catch (ArithmeticException ae) {
    throw new IllegalArgumentException("too many tokens for field \"" +
        field.name() + "\"");
  }{noformat}
This is where Lucene accumulates the total length (number of tokens) for the 
field. But does total length really make sense if you are using custom term 
frequencies to hold arbitrary scoring signals? Or maybe it does, if the user 
is using this as simple boosting, but then maybe we should allow this length 
to be a {{long}}?
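
For the record, the overflow is easy to reproduce with the numbers above: 
roughly 2,152 tokens carrying a term frequency of ~998,000 already exceed 
{{Integer.MAX_VALUE}}. A small standalone check (not the original failing 
document):
{noformat}
public class TermFreqOverflowCheck {
  public static void main(String[] args) {
    int length = 0;
    for (int i = 0; ; i++) {
      try {
        // same accumulation as invertState.length in DefaultIndexingChain
        length = Math.addExact(length, 998_000);
      } catch (ArithmeticException ae) {
        System.out.println("overflowed after " + i + " tokens"); // ~2151
        return;
      }
    }
  }
}
{noformat}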



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13677) All Metrics Gauges should be unregistered by the objects that registered them

2019-08-06 Thread Andrzej Bialecki (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13677?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16901175#comment-16901175
 ] 

Andrzej Bialecki  commented on SOLR-13677:
--

Also, {{GaugeRef}} can be an interface that {{GaugeWrapper}} implements; then 
there's no need to create even more objects.

> All Metrics Gauges should be unregistered by the objects that registered them
> -
>
> Key: SOLR-13677
> URL: https://issues.apache.org/jira/browse/SOLR-13677
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: metrics
>Reporter: Noble Paul
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The life cycle of Metrics producers are managed by the core (mostly). So, if 
> the lifecycle of the object is different from that of the core itself, these 
> objects will never be unregistered from the metrics registry. This will lead 
> to memory leaks



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13677) All Metrics Gauges should be unregistered by the objects that registered them

2019-08-06 Thread Andrzej Bialecki (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13677?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16901172#comment-16901172
 ] 

Andrzej Bialecki  commented on SOLR-13677:
--

Thanks Christine, this looks somewhat cleaner.

For historical reasons (a complicated refactoring of SolrInfoMBean, UI 
dependencies, JMX, etc) some of the methods one would expect from 
{{SolrMetricProducer}} ended up in {{SolrInfoBean}} instead, and this is also 
the interface that is passed to {{registerGauge}}. Unless we want to do a 
larger refactoring now, we could treat {{SolrInfoBean}} as the memory, and add 
the default methods for remembering and forgetting the gauges to this interface.

> All Metrics Gauges should be unregistered by the objects that registered them
> -
>
> Key: SOLR-13677
> URL: https://issues.apache.org/jira/browse/SOLR-13677
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: metrics
>Reporter: Noble Paul
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The life cycle of Metrics producers are managed by the core (mostly). So, if 
> the lifecycle of the object is different from that of the core itself, these 
> objects will never be unregistered from the metrics registry. This will lead 
> to memory leaks



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13240) UTILIZENODE action results in an exception

2019-08-06 Thread Lucene/Solr QA (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16901125#comment-16901125
 ] 

Lucene/Solr QA commented on SOLR-13240:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  5s{color} 
| {color:red} SOLR-13240 does not apply to master. Rebase required? Wrong 
Branch? See 
https://wiki.apache.org/solr/HowToContribute#Creating_the_patch_file for help. 
{color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | SOLR-13240 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12976814/SOLR-13240.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-SOLR-Build/522/console |
| Powered by | Apache Yetus 0.7.0   http://yetus.apache.org |


This message was automatically generated.



> UTILIZENODE action results in an exception
> --
>
> Key: SOLR-13240
> URL: https://issues.apache.org/jira/browse/SOLR-13240
> Project: Solr
>  Issue Type: Bug
>  Components: AutoScaling
>Affects Versions: 7.6
>Reporter: Hendrik Haddorp
>Priority: Major
> Attachments: SOLR-13240.patch, SOLR-13240.patch, SOLR-13240.patch, 
> SOLR-13240.patch, SOLR-13240.patch, solr-solrj-7.5.0.jar
>
>
> When I invoke the UTILIZENODE action the REST call fails like this after it 
> moved a few replicas:
> {
>   "responseHeader":{
> "status":500,
> "QTime":40220},
>   "Operation utilizenode caused 
> exception:":"java.lang.IllegalArgumentException:java.lang.IllegalArgumentException:
>  Comparison method violates its general contract!",
>   "exception":{
> "msg":"Comparison method violates its general contract!",
> "rspCode":-1},
>   "error":{
> "metadata":[
>   "error-class","org.apache.solr.common.SolrException",
>   "root-error-class","org.apache.solr.common.SolrException"],
> "msg":"Comparison method violates its general contract!",
> "trace":"org.apache.solr.common.SolrException: Comparison method violates 
> its general contract!\n\tat 
> org.apache.solr.client.solrj.SolrResponse.getException(SolrResponse.java:53)\n\tat
>  
> org.apache.solr.handler.admin.CollectionsHandler.invokeAction(CollectionsHandler.java:274)\n\tat
>  
> org.apache.solr.handler.admin.CollectionsHandler.handleRequestBody(CollectionsHandler.java:246)\n\tat
>  
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:199)\n\tat
>  
> org.apache.solr.servlet.HttpSolrCall.handleAdmin(HttpSolrCall.java:734)\n\tat 
> org.apache.solr.servlet.HttpSolrCall.handleAdminRequest(HttpSolrCall.java:715)\n\tat
>  org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:496)\n\tat 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:377)\n\tat
>  
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:323)\n\tat
>  
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1634)\n\tat
>  
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:533)\n\tat
>  
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:146)\n\tat
>  
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)\n\tat
>  
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)\n\tat
>  
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:257)\n\tat
>  
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1595)\n\tat
>  
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:255)\n\tat
>  
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1317)\n\tat
>  
> org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:203)\n\tat
>  
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:473)\n\tat
>  
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1564)\n\tat
>  
> org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:201)\n\tat
>  
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1219)\n\tat
>  
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:144)\n\tat
>  
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:219)\n\tat
>  
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:126)\n\tat
>  
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)\n\tat
>  
> org.eclipse.jetty.rewrite.handler.RewriteHandler.handle(RewriteHandler.java:335)\n\tat
>  
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)\n\tat
>  org.eclipse.jetty.server

[jira] [Commented] (LUCENE-8747) Allow access to submatches from Matches instances

2019-08-06 Thread Alan Woodward (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8747?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16901100#comment-16901100
 ] 

Alan Woodward commented on LUCENE-8747:
---

> Can we return a list of Matches in findNamedMatches?

Oh that's a much better idea, yes.  Patch updated.

> Allow access to submatches from Matches instances
> -
>
> Key: LUCENE-8747
> URL: https://issues.apache.org/jira/browse/LUCENE-8747
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Alan Woodward
>Assignee: Alan Woodward
>Priority: Major
> Attachments: LUCENE-8747.patch, LUCENE-8747.patch, LUCENE-8747.patch, 
> LUCENE-8747.patch, LUCENE-8747.patch
>
>
> A Matches object currently allows access to all matching terms from a query, 
> but the structure of the matching query is flattened out, so if you want to 
> find which subqueries have matched you need to iterate over all matches, 
> collecting queries as you go.  It should be easier to get this information 
> from the parent Matches object.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-8747) Allow access to submatches from Matches instances

2019-08-06 Thread Alan Woodward (JIRA)


 [ 
https://issues.apache.org/jira/browse/LUCENE-8747?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Woodward updated LUCENE-8747:
--
Attachment: LUCENE-8747.patch

> Allow access to submatches from Matches instances
> -
>
> Key: LUCENE-8747
> URL: https://issues.apache.org/jira/browse/LUCENE-8747
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Alan Woodward
>Assignee: Alan Woodward
>Priority: Major
> Attachments: LUCENE-8747.patch, LUCENE-8747.patch, LUCENE-8747.patch, 
> LUCENE-8747.patch, LUCENE-8747.patch
>
>
> A Matches object currently allows access to all matching terms from a query, 
> but the structure of the matching query is flattened out, so if you want to 
> find which subqueries have matched you need to iterate over all matches, 
> collecting queries as you go.  It should be easier to get this information 
> from the parent Matches object.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8747) Allow access to submatches from Matches instances

2019-08-06 Thread Jim Ferenczi (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8747?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16901091#comment-16901091
 ] 

Jim Ferenczi commented on LUCENE-8747:
--

Can we return a list of Matches in findNamedMatches? The set of strings is 
useful for testing purposes, but it should also be easy to extract any named 
Matches from a global Matches object.

> Allow access to submatches from Matches instances
> -
>
> Key: LUCENE-8747
> URL: https://issues.apache.org/jira/browse/LUCENE-8747
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Alan Woodward
>Assignee: Alan Woodward
>Priority: Major
> Attachments: LUCENE-8747.patch, LUCENE-8747.patch, LUCENE-8747.patch, 
> LUCENE-8747.patch
>
>
> A Matches object currently allows access to all matching terms from a query, 
> but the structure of the matching query is flattened out, so if you want to 
> find which subqueries have matched you need to iterate over all matches, 
> collecting queries as you go.  It should be easier to get this information 
> from the parent Matches object.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (LUCENE-8943) Incorrect IDF in MultiPhraseQuery and SpanOrQuery

2019-08-06 Thread Christoph Goller (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8943?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16901076#comment-16901076
 ] 

Christoph Goller edited comment on LUCENE-8943 at 8/6/19 1:54 PM:
--

{{Thanks for your quick response, Alan. I've been doing some thinking about 
adding up IDF values in the case of simple phrase queries, and I no longer think 
that is the way we should do it.}}

{{The problem is that we can get very high IDF values, i.e. values that are 
considerably higher than the maximum IDF value for a single term!}}

{{Consider an index with 10 million docs. The maximum IDF value (BM25) for a 
single term is 16.8. Assume we have 10 docs containing "wifi" and 10 docs 
containing "wi-fi" which is split by our tokenizer into 2 tokens. The IDF value 
for "wifi" will be 13.77. If we assume that "wi" and "fi" both occur only in 
"wi-fi" docs, we get an IDF of 27.5 for the "wi fi" phrase query which wee need 
in order to find our 10 "wi-fi" docs. If we search for wifi OR "wi fi" the docs 
containing "wi-fi" will score much higher!}}

{{Admittedly, it is easy to construct examples in which adding the IDF values 
of phrase parts yields values that are too high. The assumption of independence 
of phrase parts does not normally apply. But BM25 has a saturation for IDF 
values and adding up IDF values breaks it. This seems to be a serious 
drawback.}}

{{I propose to switch from combining IDF-values to calculating / approximating 
docFreq. For the OR-case SynonymQuery does this already. It uses the maximum. 
For the AND-case we could use something like}}

{{docFreqPhrase = (docFreq1 * docFreq2) / docCount}}

{{The intuition behind this is again independence of phrase parts. But by 
computing a docFreq we can guarantee the saturation for IDF.}}

{{For the "wi fi" example we get docFreqPhrase of 10^-5 leading to an IDF of 
16.8 (saturation) and the difference to the IDF of wifi is considerably smaller 
compared to adding up IDFs. If phrase parts are rare, we quickly run into 
saturation of the IDF. But we also get some reasonable values. Consider the 
phrase "New York". If we assume that 100,000 docs contain "new" and 10,000 docs 
contain "york". By applying the formula from above we get and IDF for the 
phrase "New York" of 11.5 which is roughly the number we get when we add up the 
IDFs of the parts (current Lucene behavior).}}

{{We could even have some simple adjustments for the fact that usually the 
independence assumption is not correct. For both the OR-case and the AND-case 
we could make values a little bit higher. The exact way for approximating 
docFreq for the AND-case and the OR-case could be defined in the Similarity and 
it could be configurable.}}

I also did some research with Google:

{{(multiword OR N-gram) AND BM25 AND IDF}}

Unfortunately, I did not find anything that helps.

{{Do you know about the benchmarks used to evaluate scoring in Lucene? Are 
there any phrase queries involved?}}
 {{Robert told me it’s very TREC-like, so probably no phrase queries?}}

{{In my opinion something like BM25 can only get us to a certain level of 
relevance. Of course, we have to get it right. IDF values of phrases / 
SpanQueries should not have such a big effect on the score simply because they 
get too high IDF-values. We have to do something reasonable. But for real 
break-through improvements we need something like query segmentation or even 
query interpretation and proximity of query terms in documents should have a 
high impact on the score. That's why I think it is important to integrate 
PhraseQueries and SpanQueries properly into BM25.}}
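As a quick sanity check on the numbers above (standalone arithmetic, not Lucene code), using the BM25 idf formula idf = log(1 + (docCount - docFreq + 0.5) / (docFreq + 0.5)) and the proposed docFreqPhrase = docFreq1 * docFreq2 / docCount:

{code:java}
public class IdfCheck {
  // BM25 idf as computed by Lucene's BM25Similarity.
  static double idf(double docFreq, double docCount) {
    return Math.log(1 + (docCount - docFreq + 0.5) / (docFreq + 0.5));
  }

  public static void main(String[] args) {
    double n = 10_000_000;                              // docCount
    System.out.println(idf(10, n));                     // "wifi", df=10            -> ~13.77
    System.out.println(idf(10, n) + idf(10, n));        // "wi" + "fi" IDFs summed  -> ~27.5
    System.out.println(idf(10.0 * 10 / n, n));          // docFreqPhrase = 1e-5     -> ~16.8 (saturated)
    System.out.println(idf(100_000.0 * 10_000 / n, n)); // "new york" docFreq = 100 -> ~11.5
  }
}
{code}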


was (Author: gol...@detego-software.de):
{{Thanks for your quick response, Alan. I've been doing some thinking about 
adding up IDF values in the case of simple phrase queries, and I no longer think 
that is the way we should do it.}}

{{The problem is that we can get very high IDF values, i.e. values that are 
considerably higher than the maximum IDF value for a single term!}}

{{Consider an index with 10 million docs. The maximum IDF value (BM25) for a 
single term is 16.8. Assume we have 10 docs containing "wifi" and 10 docs 
containing "wi-fi" which is split by our tokenizer into 2 tokens. The IDF value 
for "wifi" will be 13.77. If we assume that "wi" and "fi" both occur only in 
"wi-fi" docs, we get an IDF of 27.5 for the "wi fi" phrase query which wee need 
in order to find our 10 "wi-fi" docs. If we search for wifi OR "wi fi" the docs 
containing "wi-fi" will score much higher!}}

{{Admittedly, it is easy to construct examples in which adding the IDF values 
of phrase parts yields values that are too high. The assumption of independence 
of phrase parts does not normally apply. But BM25 has a saturation for IDF 
values and adding up IDF values breaks it. This seems to be a serious 
drawback.}}

{{I propose to switch from combining IDF-values to cal

[jira] [Comment Edited] (LUCENE-8943) Incorrect IDF in MultiPhraseQuery and SpanOrQuery

2019-08-06 Thread Christoph Goller (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8943?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16901076#comment-16901076
 ] 

Christoph Goller edited comment on LUCENE-8943 at 8/6/19 1:52 PM:
--

{{Thanks for your quick response, Alan. I've been doing some thinking about 
adding up IDF values in the case of simple phrase queries, and I no longer think 
that is the way we should do it.}}

{{The problem is that we can get very high IDF values, i.e. values that are 
considerably higher than the maximum IDF value for a single term!}}

{{Consider an index with 10 million docs. The maximum IDF value (BM25) for a 
single term is 16.8. Assume we have 10 docs containing "wifi" and 10 docs 
containing "wi-fi" which is split by our tokenizer into 2 tokens. The IDF value 
for "wifi" will be 13.77. If we assume that "wi" and "fi" both occur only in 
"wi-fi" docs, we get an IDF of 27.5 for the "wi fi" phrase query which wee need 
in order to find our 10 "wi-fi" docs. If we search for wifi OR "wi fi" the docs 
containing "wi-fi" will score much higher!}}

{{Admittedly, it is easy to construct examples in which adding the IDF values 
of phrase parts yields values that are too high. The assumption of independence 
of phrase parts does not normally apply. But BM25 has a saturation for IDF 
values and adding up IDF values breaks it. This seems to be a serious 
drawback.}}

{{I propose to switch from combining IDF-values to calculating / approximating 
docFreq. For the OR-case SynonymQuery does this already. It uses the maximum. 
For the AND-case we could use something like}}

{{docFreqPhrase = (docFreq1 * docFreq2) / docCount}}

{{The intuition behind this is again independence of phrase parts. But by 
computing a docFreq we can guarantee the saturation for IDF.}}

{{For the "wi fi" example we get docFreqPhrase of 10^-5 leading to an IDF of 
16.8 (saturation) and the difference to the IDF of wifi is considerably smaller 
compared to adding up IDFs. If phrase parts are rare, we quickly run into 
saturation of the IDF. But we also get some reasonable values. Consider the 
phrase "New York". If we assume that 100,000 docs contain "new" and 10,000 docs 
contain "york". By applying the formula from above we get and IDF for the 
phrase "New York" of 11.5 which is roughly the number we get when we add up the 
IDFs of the parts (current Lucene behavior).}}

{{We could even have some simple adjustments for the fact that usually the 
independence assumption is not correct. For both the OR-case and the AND-case 
we could make values a little bit higher. The exact way for approximating 
docFreq for the AND-case and the OR-case could be defined in the Similarity and 
it could be configurable.}}

{{I also did some research with Google: }}

{{(multiword OR N-gram) AND BM25 AND IDF}}


 Unfortunately, I did not find anything that helps.
 {{Do you know about the benchmarks used to evaluate scoring in Lucene? Are 
there any phrase queries involved?}}
 {{Robert told me it’s very TREC-like, so probably no phrase queries?}}

{{In my opinion something like BM25 can only get us to a certain level of 
relevance. Of course, we have to get it right. IDF values of phrases / 
SpanQueries should not have such a big effect on the score simply because they 
get too high IDF-values. We have to do something reasonable. But for real 
break-through improvements we need something like query segmentation or even 
query interpretation and proximity of query terms in documents should have a 
high impact on the score. That's why I think it is important to integrate 
PhraseQueries and SpanQueries properly into BM25.}}


was (Author: gol...@detego-software.de):
{{Thanks for your quick response, Alan. I've been doing some thinking about 
adding up IDF values in the case of simple phrase queries, and I no longer think 
that is the way we should do it.}}

{{The problem is that we can get very high IDF values, i.e. values that are 
considerably higher than the maximum IDF value for a single term!}}

{{Consider an index with 10 million docs. The maximum IDF value (BM25) for a 
single term is 16.8. Assume we have 10 docs containing "wifi" and 10 docs 
containing "wi-fi" which is split by our tokenizer into 2 tokens. The IDF value 
for "wifi" will be 13.77. If we assume that "wi" and "fi" both occur only in 
"wi-fi" docs, we get an IDF of 27.5 for the "wi fi" phrase query which wee need 
in order to find our 10 "wi-fi" docs. If we search for wifi OR "wi fi" the docs 
containing "wi-fi" will score much higher!}}

{{Admittedly, it is easy to construct examples in which adding the IDF values 
of phrase parts yields values that are too high. The assumption of independence 
of phrase parts does not normally apply. But BM25 has a saturation for IDF 
values and adding up IDF values breaks it. This seems to be a serious 
drawback.}}

{{I propose to switch from combining IDF-value

[jira] [Comment Edited] (LUCENE-8943) Incorrect IDF in MultiPhraseQuery and SpanOrQuery

2019-08-06 Thread Christoph Goller (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8943?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16901076#comment-16901076
 ] 

Christoph Goller edited comment on LUCENE-8943 at 8/6/19 1:52 PM:
--

{{Thanks for your quick response, Alan. I've been doing some thinking about 
adding up IDF values in the case of simple phrase queries, and I no longer think 
that is the way we should do it.}}

{{The problem is that we can get very high IDF values, i.e. values that are 
considerably higher than the maximum IDF value for a single term!}}

{{Consider an index with 10 million docs. The maximum IDF value (BM25) for a 
single term is 16.8. Assume we have 10 docs containing "wifi" and 10 docs 
containing "wi-fi" which is split by our tokenizer into 2 tokens. The IDF value 
for "wifi" will be 13.77. If we assume that "wi" and "fi" both occur only in 
"wi-fi" docs, we get an IDF of 27.5 for the "wi fi" phrase query which wee need 
in order to find our 10 "wi-fi" docs. If we search for wifi OR "wi fi" the docs 
containing "wi-fi" will score much higher!}}

{{Admittedly, it is easy to construct examples in which adding the IDF values 
of phrase parts yields values that are too high. The assumption of independence 
of phrase parts does not normally apply. But BM25 has a saturation for IDF 
values and adding up IDF values breaks it. This seems to be a serious 
drawback.}}

{{I propose to switch from combining IDF-values to calculating / approximating 
docFreq. For the OR-case SynonymQuery does this already. It uses the maximum. 
For the AND-case we could use something like}}

{{docFreqPhrase = (docFreq1 * docFreq2) / docCount}}

{{The intuition behind this is again independence of phrase parts. But by 
computing a docFreq we can guarantee the saturation for IDF.}}

{{For the "wi fi" example we get docFreqPhrase of 10^-5 leading to an IDF of 
16.8 (saturation) and the difference to the IDF of wifi is considerably smaller 
compared to adding up IDFs. If phrase parts are rare, we quickly run into 
saturation of the IDF. But we also get some reasonable values. Consider the 
phrase "New York". If we assume that 100,000 docs contain "new" and 10,000 docs 
contain "york". By applying the formula from above we get and IDF for the 
phrase "New York" of 11.5 which is roughly the number we get when we add up the 
IDFs of the parts (current Lucene behavior).}}

{{We could even have some simple adjustments for the fact that usually the 
independence assumption is not correct. For both the OR-case and the AND-case 
we could make values a little bit higher. The exact way for approximating 
docFreq for the AND-case and the OR-case could be defined in the Similarity and 
it could be configurable.}}

{{I also did some research with Google: (multiword OR N-gram) AND BM25 AND IDF}}
 Unfortunately, I did not find anything that helps.
 {{Do you know about the benchmarks used to evaluate scoring in Lucene? Are 
there any phrase queries involved?}}
 {{Robert told me it’s very TREC-like, so probably no phrase queries?}}

{{In my opinion something like BM25 can only get us to a certain level of 
relevance. Of course, we have to get it right. IDF values of phrases / 
SpanQueries should not have such a big effect on the score simply because they 
get too high IDF-values. We have to do something reasonable. But for real 
break-through improvements we need something like query segmentation or even 
query interpretation and proximity of query terms in documents should have a 
high impact on the score. That's why I think it is important to integrate 
PhraseQueries and SpanQueries properly into BM25.}}


was (Author: gol...@detego-software.de):
{{Thanks for your quick response, Alan. I've been doing some thinking about 
adding up IDF values in the case of simple phrase queries, and I no longer think 
that is the way we should do it.}}

{{The problem is that we can get very high IDF values, i.e. values that are 
considerably higher than the maximum IDF value for a single term!}}

{{Consider an index with 10 million docs. The maximum IDF value (BM25) for a 
single term is 16.8. Assume we have 10 docs containing "wifi" and 10 docs 
containing "wi-fi" which is split by our tokenizer into 2 tokens. The IDF value 
for "wifi" will be 13.77. If we assume that "wi" and "fi" both occur only in 
"wi-fi" docs, we get an IDF of 27.5 for the "wi fi" phrase query which wee need 
in order to find our 10 "wi-fi" docs. If we search for wifi OR "wi fi" the docs 
containing "wi-fi" will score much higher!}}

{{Admittedly, it is easy to construct examples in which adding the IDF values 
of phrase parts yields values that are too high. The assumption of independence 
of phrase parts does not normally apply. But BM25 has a saturation for IDF 
values and adding up IDF values breaks it. This seems to be a serious 
drawback.}}

{{I propose to switch from combining IDF-values to cal

[jira] [Commented] (LUCENE-8943) Incorrect IDF in MultiPhraseQuery and SpanOrQuery

2019-08-06 Thread Christoph Goller (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8943?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16901076#comment-16901076
 ] 

Christoph Goller commented on LUCENE-8943:
--

{{Thanks for your quick response, Alan. I've been doing some thinking about 
adding up IDF values in the case of simple phrase queries, and I no longer think 
that is the way we should do it.}}

{{The problem is that we can get very high IDF values, i.e. values that are 
considerably higher than the maximum IDF value for a single term!}}

{{Consider an index with 10 million docs. The maximum IDF value (BM25) for a 
single term is 16.8. Assume we have 10 docs containing "wifi" and 10 docs 
containing "wi-fi" which is split by our tokenizer into 2 tokens. The IDF value 
for "wifi" will be 13.77. If we assume that "wi" and "fi" both occur only in 
"wi-fi" docs, we get an IDF of 27.5 for the "wi fi" phrase query which wee need 
in order to find our 10 "wi-fi" docs. If we search for wifi OR "wi fi" the docs 
containing "wi-fi" will score much higher!}}

{{Admittedly, it is easy to construct examples in which adding the IDF values 
of phrase parts yields values that are too high. The assumption of independence 
of phrase parts does not normally apply. But BM25 has a saturation for IDF 
values and adding up IDF values breaks it. This seems to be a serious 
drawback.}}

{{I propose to switch from combining IDF-values to calculating / approximating 
docFreq. For the OR-case SynonymQuery does this already. It uses the maximum. 
For the AND-case we could use something like}}

{{docFreqPhrase = (docFreq1 * docFreq2) / docCount}}

{{The intuition behind this is again independence of phrase parts. But by 
computing a docFreq we can guarantee the saturation for IDF.}}

{{For the "wi fi" example we get docFreqPhrase of 10^-5 leading to an IDF of 
16.8 (saturation) and the difference to the IDF of wifi is considerably smaller 
compared to adding up IDFs. If phrase parts are rare, we quickly run into 
saturation of the IDF. But we also get some reasonable values. Consider the 
phrase "New York". If we assume that 100,000 docs contain "new" and 10,000 docs 
contain "york". By applying the formula from above we get and IDF for the 
phrase "New York" of 11.5 which is roughly the number we get when we add up the 
IDFs of the parts (current Lucene behavior).}}

{{We could even have some simple adjustments for the fact that usually the 
independence assumption is not correct. For both the OR-case and the AND-case 
we could make values a little bit higher. The exact way for approximating 
docFreq for the AND-case and the OR-case could be defined in the Similarity and 
it could be configurable.}}

{{I also did some research with Google: (multiword OR N-gram) AND BM25 AND IDF}}
{{Unfortunately, I did not find anything that helps. }}
{{Do you know about the benchmarks used to evaluate scoring in Lucene? Are 
there any phrase queries involved?}}
{{Robert told me it’s very TREC-like, so probably no phrase queries?}}

{{In my opinion something like BM25 can only get us to a certain level of 
relevance. Of course, we have to get it right. IDF values of phrases / 
SpanQueries should not have such a big effect on the score simply because they 
get too high IDF-values. We have to do something reasonable. But for real 
break-through improvements we need something like query segmentation or even 
query interpretation and proximity of query terms in documents should have a 
high impact on the score. That's why I think it is important to integrate 
PhraseQueries and SpanQueries properly into BM25.}}

> Incorrect IDF in MultiPhraseQuery and SpanOrQuery
> -
>
> Key: LUCENE-8943
> URL: https://issues.apache.org/jira/browse/LUCENE-8943
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/query/scoring
>Affects Versions: 8.0
>Reporter: Christoph Goller
>Priority: Major
>
> I recently stumbled across a very old bug in the IDF computation for 
> MultiPhraseQuery and SpanOrQuery.
> BM25Similarity and TFIDFSimilarity / ClassicSimilarity have a method for 
> combining IDF values from more than on term / TermStatistics.
> I mean the method:
> Explanation idfExplain(CollectionStatistics collectionStats, TermStatistics 
> termStats[])
> It simply adds up the IDFs from all termStats[].
> This method is used e.g. in PhraseQuery where it makes sense. If we assume 
> that for the phrase "New York" the occurrences of both words are independent, 
> we can multiply their probabilitis and since IDFs are logarithmic we add them 
> up. Seems to be a reasonable approximation. However, this method is also used 
> to add up the IDFs of all terms in a MultiPhraseQuery as can be seen in:
> Similarity.SimScorer getStats(IndexSearcher searcher)
> A MultiPhraseQuery is actually

[jira] [Comment Edited] (SOLR-13240) UTILIZENODE action results in an exception

2019-08-06 Thread Richard Goodman (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16901020#comment-16901020
 ] 

Richard Goodman edited comment on SOLR-13240 at 8/6/19 1:14 PM:


Okay, having had a look at this, this is what I understand from it:

With the 
[clusterstate|https://github.com/apache/lucene-solr/blob/releases/lucene-solr/7.4.0/solr/solrj/src/test/org/apache/solr/client/solrj/cloud/autoscaling/TestPolicy.java#L90-L130]
 that is being loaded, and the 
[policies|https://github.com/apache/lucene-solr/blob/releases/lucene-solr/7.4.0/solr/solrj/src/test/org/apache/solr/client/solrj/cloud/autoscaling/TestPolicy.java#L329-L338]
 being loaded, there is a violation of there being more than 1 replica of a 
shard on the same node _(in this case, node1)_. This is why the first stage 
moves a replica and verifies it has moved to {{node2}}.

What doesn't make sense to me is why it's then moving a replica to node3, 
because this then re-raises a violation of there being more than 1 replica of 
the same shard on the same node. So I don't get why it's doing it again, nor 
how it passes. If you spot something that I can't see, then let me know.

After that the policies change to allow more than 1, but less than 3 replicas 
of the same shard to be on the same node. This is where the test is failing, 
because with the new comparator, it will order replicas based on their name, 
and with the first iteration of moving replicas, it expects replica 3 
_({{r3}})_ to be moved first, however, it would in fact be replica 1 
_({{r1}})_. 

So again it would be just moving the stages around. I'm not sure why doing this 
changes which node it goes to; that is throwing me off a little bit.

I've attached a patch with updates to these tests, ran {{ant test 
-Dtestcase=TestPolicy}}, and it passed; let me know what you think.

 [^SOLR-13240.patch] 


was (Author: goodman):
Okay, having had a look at this, this is what I understand from it:

With the 
[clusterstate|https://github.com/apache/lucene-solr/blob/releases/lucene-solr/7.4.0/solr/solrj/src/test/org/apache/solr/client/solrj/cloud/autoscaling/TestPolicy.java#L90-L130]
 that is being loaded, and the 
[policies|https://github.com/apache/lucene-solr/blob/releases/lucene-solr/7.4.0/solr/solrj/src/test/org/apache/solr/client/solrj/cloud/autoscaling/TestPolicy.java#L329-L338]
 being loaded, there is a violation of there being more than 1 replica of a 
shard on the same node _(in this case, node1)_. This is why the first stage 
moves a replica and verifies it has moved to {{node2}}.

What doesn't make sense to me is why it's then moving a replica to node3, 
because this then re-raises a violation of there being more than 1 replica of 
the same shard on the same node. So I don't get why it's doing it again, nor 
how it passes. If you spot something that I can't see, then let me know.

After that the policies change to allow more than 1, but less than 3 replicas 
of the same shard to be on the same node. This is where the test is failing, 
because with the new comparator, it will order replicas based on their name, 
and with the first iteration of moving replicas, it expects replica 3 
_({{r3}})_ to be moved first, however, it would in fact be replica 1 
_({{r1}})_. 

So again it would be just moving the stages around. I'm not sure why doing this 
changes which node it goes to; that is throwing me off a little bit.

I've attached a patch with updates to these tests, ran {{ant test 
-Dtestcase=TestPolicy}}, and it passed; let me know what you think.



> UTILIZENODE action results in an exception
> --
>
> Key: SOLR-13240
> URL: https://issues.apache.org/jira/browse/SOLR-13240
> Project: Solr
>  Issue Type: Bug
>  Components: AutoScaling
>Affects Versions: 7.6
>Reporter: Hendrik Haddorp
>Priority: Major
> Attachments: SOLR-13240.patch, SOLR-13240.patch, SOLR-13240.patch, 
> SOLR-13240.patch, SOLR-13240.patch, solr-solrj-7.5.0.jar
>
>
> When I invoke the UTILIZENODE action the REST call fails like this after it 
> moved a few replicas:
> {
>   "responseHeader":{
> "status":500,
> "QTime":40220},
>   "Operation utilizenode caused 
> exception:":"java.lang.IllegalArgumentException:java.lang.IllegalArgumentException:
>  Comparison method violates its general contract!",
>   "exception":{
> "msg":"Comparison method violates its general contract!",
> "rspCode":-1},
>   "error":{
> "metadata":[
>   "error-class","org.apache.solr.common.SolrException",
>   "root-error-class","org.apache.solr.common.SolrException"],
> "msg":"Comparison method violates its general contract!",
> "trace":"org.apache.solr.common.SolrException: Comparison method violates 
> its general contract

[jira] [Updated] (SOLR-13240) UTILIZENODE action results in an exception

2019-08-06 Thread Richard Goodman (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13240?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Richard Goodman updated SOLR-13240:
---
Attachment: SOLR-13240.patch

> UTILIZENODE action results in an exception
> --
>
> Key: SOLR-13240
> URL: https://issues.apache.org/jira/browse/SOLR-13240
> Project: Solr
>  Issue Type: Bug
>  Components: AutoScaling
>Affects Versions: 7.6
>Reporter: Hendrik Haddorp
>Priority: Major
> Attachments: SOLR-13240.patch, SOLR-13240.patch, SOLR-13240.patch, 
> SOLR-13240.patch, SOLR-13240.patch, solr-solrj-7.5.0.jar
>
>
> When I invoke the UTILIZENODE action the REST call fails like this after it 
> moved a few replicas:
> {
>   "responseHeader":{
> "status":500,
> "QTime":40220},
>   "Operation utilizenode caused 
> exception:":"java.lang.IllegalArgumentException:java.lang.IllegalArgumentException:
>  Comparison method violates its general contract!",
>   "exception":{
> "msg":"Comparison method violates its general contract!",
> "rspCode":-1},
>   "error":{
> "metadata":[
>   "error-class","org.apache.solr.common.SolrException",
>   "root-error-class","org.apache.solr.common.SolrException"],
> "msg":"Comparison method violates its general contract!",
> "trace":"org.apache.solr.common.SolrException: Comparison method violates 
> its general contract!\n\tat 
> org.apache.solr.client.solrj.SolrResponse.getException(SolrResponse.java:53)\n\tat
>  
> org.apache.solr.handler.admin.CollectionsHandler.invokeAction(CollectionsHandler.java:274)\n\tat
>  
> org.apache.solr.handler.admin.CollectionsHandler.handleRequestBody(CollectionsHandler.java:246)\n\tat
>  
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:199)\n\tat
>  
> org.apache.solr.servlet.HttpSolrCall.handleAdmin(HttpSolrCall.java:734)\n\tat 
> org.apache.solr.servlet.HttpSolrCall.handleAdminRequest(HttpSolrCall.java:715)\n\tat
>  org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:496)\n\tat 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:377)\n\tat
>  
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:323)\n\tat
>  
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1634)\n\tat
>  
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:533)\n\tat
>  
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:146)\n\tat
>  
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)\n\tat
>  
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)\n\tat
>  
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:257)\n\tat
>  
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1595)\n\tat
>  
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:255)\n\tat
>  
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1317)\n\tat
>  
> org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:203)\n\tat
>  
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:473)\n\tat
>  
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1564)\n\tat
>  
> org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:201)\n\tat
>  
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1219)\n\tat
>  
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:144)\n\tat
>  
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:219)\n\tat
>  
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:126)\n\tat
>  
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)\n\tat
>  
> org.eclipse.jetty.rewrite.handler.RewriteHandler.handle(RewriteHandler.java:335)\n\tat
>  
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)\n\tat
>  org.eclipse.jetty.server.Server.handle(Server.java:531)\n\tat 
> org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:352)\n\tat 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:260)\n\tat
>  
> org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:281)\n\tat
>  org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:102)\n\tat 
> org.eclipse.jetty.io.ChannelEndPoint$2.run(ChannelEndPoint.java:118)\n\tat 
> org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.runTask(EatWhatYouKill.java:333)\n\tat
>  
> org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:310)\n\tat
>  
> org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWha

[jira] [Commented] (SOLR-13240) UTILIZENODE action results in an exception

2019-08-06 Thread Richard Goodman (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16901020#comment-16901020
 ] 

Richard Goodman commented on SOLR-13240:


Okay, having had a look at this, this is what I understand from it:

With the 
[clusterstate|https://github.com/apache/lucene-solr/blob/releases/lucene-solr/7.4.0/solr/solrj/src/test/org/apache/solr/client/solrj/cloud/autoscaling/TestPolicy.java#L90-L130]
 that is being loaded, and the 
[policies|https://github.com/apache/lucene-solr/blob/releases/lucene-solr/7.4.0/solr/solrj/src/test/org/apache/solr/client/solrj/cloud/autoscaling/TestPolicy.java#L329-L338]
 being loaded, there is a violation of there being more than 1 replica of a 
shard on the same node _(in this case, node1)_. This is why the first stage 
moves a replica and verifies it has moved to {{node2}}.

What doesn't make sense to me is why it's then moving a replica to node3, 
because this then re-raises a violation of there being more than 1 replica of 
the same shard on the same node. So I don't get why it's doing it again, nor 
how it passes. If you spot something that I can't see, then let me know.

After that the policies change to allow more than 1, but less than 3 replicas 
of the same shard to be on the same node. This is where the test is failing, 
because with the new comparator, it will order replicas based on their name, 
and with the first iteration of moving replicas, it expects replica 3 
_({{r3}})_ to be moved first, however, it would in fact be replica 1 
_({{r1}})_. 

So again it would be just moving the stages around. I'm not sure why doing this 
changes which node it goes to; that is throwing me off a little bit.

I've attached a patch with updates to these tests, ran {{ant test 
-Dtestcase=TestPolicy}}, and it passed; let me know what you think.
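As an aside on the underlying "general contract" exception: a toy sketch (placeholder types, not Solr's actual classes) of the name tie-break this patch introduces. A comparator built like this is transitive, so TimSort does not throw, and it is what makes {{r1}} sort ahead of {{r3}}:

{code:java}
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;

public class ReplicaOrderSketch {
  static class Replica {
    final String name;
    final int load; // stand-in for whatever primary criterion the policy sorts on
    Replica(String name, int load) { this.name = name; this.load = load; }
  }

  // Primary criterion first, replica name as a deterministic tie-break.
  static final Comparator<Replica> ORDER =
      Comparator.<Replica>comparingInt(r -> r.load).thenComparing(r -> r.name);

  public static void main(String[] args) {
    List<Replica> replicas = Arrays.asList(
        new Replica("r3", 1), new Replica("r1", 1), new Replica("r2", 1));
    replicas.sort(ORDER);
    replicas.forEach(r -> System.out.println(r.name)); // prints r1, r2, r3
  }
}
{code}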



> UTILIZENODE action results in an exception
> --
>
> Key: SOLR-13240
> URL: https://issues.apache.org/jira/browse/SOLR-13240
> Project: Solr
>  Issue Type: Bug
>  Components: AutoScaling
>Affects Versions: 7.6
>Reporter: Hendrik Haddorp
>Priority: Major
> Attachments: SOLR-13240.patch, SOLR-13240.patch, SOLR-13240.patch, 
> SOLR-13240.patch, SOLR-13240.patch, solr-solrj-7.5.0.jar
>
>
> When I invoke the UTILIZENODE action the REST call fails like this after it 
> moved a few replicas:
> {
>   "responseHeader":{
> "status":500,
> "QTime":40220},
>   "Operation utilizenode caused 
> exception:":"java.lang.IllegalArgumentException:java.lang.IllegalArgumentException:
>  Comparison method violates its general contract!",
>   "exception":{
> "msg":"Comparison method violates its general contract!",
> "rspCode":-1},
>   "error":{
> "metadata":[
>   "error-class","org.apache.solr.common.SolrException",
>   "root-error-class","org.apache.solr.common.SolrException"],
> "msg":"Comparison method violates its general contract!",
> "trace":"org.apache.solr.common.SolrException: Comparison method violates 
> its general contract!\n\tat 
> org.apache.solr.client.solrj.SolrResponse.getException(SolrResponse.java:53)\n\tat
>  
> org.apache.solr.handler.admin.CollectionsHandler.invokeAction(CollectionsHandler.java:274)\n\tat
>  
> org.apache.solr.handler.admin.CollectionsHandler.handleRequestBody(CollectionsHandler.java:246)\n\tat
>  
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:199)\n\tat
>  
> org.apache.solr.servlet.HttpSolrCall.handleAdmin(HttpSolrCall.java:734)\n\tat 
> org.apache.solr.servlet.HttpSolrCall.handleAdminRequest(HttpSolrCall.java:715)\n\tat
>  org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:496)\n\tat 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:377)\n\tat
>  
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:323)\n\tat
>  
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1634)\n\tat
>  
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:533)\n\tat
>  
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:146)\n\tat
>  
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)\n\tat
>  
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)\n\tat
>  
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:257)\n\tat
>  
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1595)\n\tat
>  
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:255)\n\tat
>  
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1317)\n\tat
>  
> org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:203)\n\tat
>  

[JENKINS] Lucene-Solr-Tests-8.x - Build # 362 - Unstable

2019-08-06 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-8.x/362/

1 tests failed.
FAILED:  org.apache.solr.cloud.SystemCollectionCompatTest.testBackCompat

Error Message:
Error from server at https://127.0.0.1:39889/solr/.system: Error reading input 
String Can't find resource 'schema.xml' in classpath or '/configs/.system', 
cwd=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-8.x/solr/build/solr-core/test/J0

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at https://127.0.0.1:39889/solr/.system: Error reading input String 
Can't find resource 'schema.xml' in classpath or '/configs/.system', 
cwd=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-8.x/solr/build/solr-core/test/J0
at 
__randomizedtesting.SeedInfo.seed([84B564FB5FBD8806:F440C7523F752170]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:656)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:262)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:245)
at 
org.apache.solr.client.solrj.impl.LBSolrClient.doRequest(LBSolrClient.java:368)
at 
org.apache.solr.client.solrj.impl.LBSolrClient.request(LBSolrClient.java:296)
at 
org.apache.solr.client.solrj.impl.BaseCloudSolrClient.sendRequest(BaseCloudSolrClient.java:1128)
at 
org.apache.solr.client.solrj.impl.BaseCloudSolrClient.requestWithRetryOnStaleState(BaseCloudSolrClient.java:897)
at 
org.apache.solr.client.solrj.impl.BaseCloudSolrClient.request(BaseCloudSolrClient.java:829)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:211)
at 
org.apache.solr.cloud.SystemCollectionCompatTest.setupSystemCollection(SystemCollectionCompatTest.java:104)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:972)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate

[jira] [Commented] (SOLR-13677) All Metrics Gauges should be unregistered by the objects that registered them

2019-08-06 Thread Christine Poerschke (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13677?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16900959#comment-16900959
 ] 

Christine Poerschke commented on SOLR-13677:


{code:java}
- public void registerGauge(SolrInfoBean info, String registry, Gauge<?> gauge, String tag, boolean force, String metricName, String... metricPath) {
+ public GaugeRef registerGauge(SolrInfoBean info, String registry, Gauge<?> gauge, String tag, boolean force, String metricName, String... metricPath) {
{code}
The above {{registerGauge}} method change to make it return a gauge reference 
encourages but does not ensure that the caller 'remembers' the reference so 
that it is later included in the unregister calls. There's also a small 
amount of repetition w.r.t. iterating over the {{myGauges}} collection and 
unregistering the elements.

I wonder if some sort of container or wrapper class might be helpful, i.e. 
{{SolrMetricManager.registerGauge}} would be sure to call 'remember' for the 
gauge and the close(?) method of the producer would call the 'forgetAll' 
method. What do you think?
{code:java}
+ class FooBar {
+   private final List<GaugeRef> gaugeRefs = new ArrayList<>();
+   void remember(GaugeRef gaugeRef) {
+     gaugeRefs.add(gaugeRef);
+   }
+   void forgetAll() {
+     for (GaugeRef gaugeRef : gaugeRefs) {
+       gaugeRef.unregister();
+     }
+     gaugeRefs.clear();
+   }
+ }
+ 
+ public void registerGauge(FooBar memory, SolrInfoBean info, String registry, Gauge<?> gauge, String tag, boolean force, String metricName, String... metricPath) {
+   memory.remember(registerGauge(info, registry, gauge, tag, force, metricName, metricPath));
+ }
+
+ private GaugeRef registerGauge(SolrInfoBean info, String registry, Gauge<?> gauge, String tag, boolean force, String metricName, String... metricPath) {
- public void registerGauge(SolrInfoBean info, String registry, Gauge<?> gauge, String tag, boolean force, String metricName, String... metricPath) {
...
{code}
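
For illustration, a sketch (hypothetical names, building on the {{FooBar}} idea above) of how a metrics producer could use such a container so that everything it registered gets unregistered when the producer closes:

{code:java}
class SomeMetricsProducer implements AutoCloseable {
  private final FooBar gauges = new FooBar();

  void initializeMetrics(SolrMetricManager manager, SolrInfoBean info,
                         String registry, String tag) {
    // The manager remembers the resulting GaugeRef in 'gauges' on our behalf.
    manager.registerGauge(gauges, info, registry, () -> computeValue(), tag,
                          true, "someMetric", "CACHE", "core");
  }

  @Override
  public void close() {
    gauges.forgetAll(); // unregisters every gauge this producer registered
  }

  private long computeValue() { return 42L; }
}
{code}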

> All Metrics Gauges should be unregistered by the objects that registered them
> -
>
> Key: SOLR-13677
> URL: https://issues.apache.org/jira/browse/SOLR-13677
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: metrics
>Reporter: Noble Paul
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The life cycle of Metrics producers is managed by the core (mostly). So, if 
> the lifecycle of the object is different from that of the core itself, these 
> objects will never be unregistered from the metrics registry. This will lead 
> to memory leaks.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (LUCENE-8883) CHANGES.txt: Auto add issue categories on new releases

2019-08-06 Thread David Smiley (JIRA)


 [ 
https://issues.apache.org/jira/browse/LUCENE-8883?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley resolved LUCENE-8883.
--
   Resolution: Fixed
Fix Version/s: 8.3

> CHANGES.txt: Auto add issue categories on new releases
> --
>
> Key: LUCENE-8883
> URL: https://issues.apache.org/jira/browse/LUCENE-8883
> Project: Lucene - Core
>  Issue Type: Task
>  Components: general/build
>Reporter: David Smiley
>Assignee: David Smiley
>Priority: Minor
> Fix For: 8.3
>
> Attachments: LUCENE-8883.patch, LUCENE-8883.patch, LUCENE-8883.patch
>
>
> As I write this, looking at Solr's CHANGES.txt for 8.2 I see we have some 
> sections: "Upgrade Notes", "New Features", "Bug Fixes", and "Other Changes".  
> There is no "Improvements" so no surprise here, the New Features category 
> has issues that ought to be listed as such.  I think the order varies as well.  
> I propose that on new releases, the initial state of the next release in 
> CHANGES.txt have these sections.  They can easily be removed at the upcoming 
> release if there are no such sections, or they could stay empty.  It seems 
> addVersion.py is the code that sets this up and it could be enhanced.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8883) CHANGES.txt: Auto add issue categories on new releases

2019-08-06 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8883?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16900940#comment-16900940
 ] 

ASF subversion and git services commented on LUCENE-8883:
-

Commit 8233981e7f0e5bddc25860b02421a559dd38ccb3 in lucene-solr's branch 
refs/heads/branch_8x from David Smiley
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=8233981 ]

LUCENE-8883: addVersion.py now adds categories to CHANGES.txt

(cherry picked from commit 742e6b7effe96977fa5372c0c4a8413528fd99cd)


> CHANGES.txt: Auto add issue categories on new releases
> --
>
> Key: LUCENE-8883
> URL: https://issues.apache.org/jira/browse/LUCENE-8883
> Project: Lucene - Core
>  Issue Type: Task
>  Components: general/build
>Reporter: David Smiley
>Assignee: David Smiley
>Priority: Minor
> Attachments: LUCENE-8883.patch, LUCENE-8883.patch, LUCENE-8883.patch
>
>
> As I write this, looking at Solr's CHANGES.txt for 8.2 I see we have some 
> sections: "Upgrade Notes", "New Features", "Bug Fixes", and "Other Changes".  
> There is no "Improvements" so no surprise here, the New Features category 
> has issues that ought to be listed as such.  I think the order varies as well.  
> I propose that on new releases, the initial state of the next release in 
> CHANGES.txt have these sections.  They can easily be removed at the upcoming 
> release if there are no such sections, or they could stay empty.  It seems 
> addVersion.py is the code that sets this up and it could be enhanced.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8883) CHANGES.txt: Auto add issue categories on new releases

2019-08-06 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8883?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16900938#comment-16900938
 ] 

ASF subversion and git services commented on LUCENE-8883:
-

Commit 742e6b7effe96977fa5372c0c4a8413528fd99cd in lucene-solr's branch 
refs/heads/master from David Smiley
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=742e6b7 ]

LUCENE-8883: addVersion.py now adds categories to CHANGES.txt


> CHANGES.txt: Auto add issue categories on new releases
> --
>
> Key: LUCENE-8883
> URL: https://issues.apache.org/jira/browse/LUCENE-8883
> Project: Lucene - Core
>  Issue Type: Task
>  Components: general/build
>Reporter: David Smiley
>Assignee: David Smiley
>Priority: Minor
> Attachments: LUCENE-8883.patch, LUCENE-8883.patch, LUCENE-8883.patch
>
>
> As I write this, looking at Solr's CHANGES.txt for 8.2 I see we have some 
> sections: "Upgrade Notes", "New Features", "Bug Fixes", and "Other Changes".  
> There is no "Improvements" so no surprise here, the New Features category 
> has issues that ought to be listed as such.  I think the order varies as well.  
> I propose that on new releases, the initial state of the next release in 
> CHANGES.txt have these sections.  They can easily be removed at the upcoming 
> release if there are no such sections, or they could stay empty.  It seems 
> addVersion.py is the code that sets this up and it could be enhanced.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-13240) UTILIZENODE action results in an exception

2019-08-06 Thread Richard Goodman (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16900929#comment-16900929
 ] 

Richard Goodman edited comment on SOLR-13240 at 8/6/19 11:48 AM:
-

Hi [~cpoerschke], sorry for the delay in the reply. As for 
[TestPolicy.testReplicaCountSuggestions|https://github.com/apache/lucene-solr/blob/releases/lucene-solr/8.1.1/solr/solrj/src/test/org/apache/solr/client/solrj/cloud/autoscaling/TestPolicy.java#L2015-L2031]
 your interpretation seems correct to me also, and so changing the expected 
result would fix this _(without approaching it as "let's just change it to 
what it's complaining about to make the test pass")_, because this patch 
enforces that kind of sorting.

As for the 
[TestPolicy.testNodeLostMultipleReplica|https://github.com/apache/lucene-solr/blob/releases/lucene-solr/8.1.1/solr/solrj/src/test/org/apache/solr/client/solrj/cloud/autoscaling/TestPolicy.java#L952-L1083]
 I'm having a look at this now, and so will update you with my findings.


was (Author: goodman):
Hi [~cpoerschke], sorry for the delay in the reply. As for 
[TestPolicy.testReplicaCountSuggestions|https://github.com/apache/lucene-solr/blob/releases/lucene-solr/8.1.1/solr/solrj/src/test/org/apache/solr/client/solrj/cloud/autoscaling/TestPolicy.java#L2015-L2031]
 your interpretation seems correct to me also, and so changing the expected 
result would fix this _(without approaching it as "let's just change it to 
what it's complaining about to make the test pass")_, because this patch 
enforces that kind of sorting.

As for the [TestPolicy.testNodeLostMultipleReplica] I'm having a look at this 
now, and so will update you with my findings.

> UTILIZENODE action results in an exception
> --
>
> Key: SOLR-13240
> URL: https://issues.apache.org/jira/browse/SOLR-13240
> Project: Solr
>  Issue Type: Bug
>  Components: AutoScaling
>Affects Versions: 7.6
>Reporter: Hendrik Haddorp
>Priority: Major
> Attachments: SOLR-13240.patch, SOLR-13240.patch, SOLR-13240.patch, 
> SOLR-13240.patch, solr-solrj-7.5.0.jar
>
>
> When I invoke the UTILIZENODE action the REST call fails like this after it 
> moved a few replicas:
> {
>   "responseHeader":{
> "status":500,
> "QTime":40220},
>   "Operation utilizenode caused 
> exception:":"java.lang.IllegalArgumentException:java.lang.IllegalArgumentException:
>  Comparison method violates its general contract!",
>   "exception":{
> "msg":"Comparison method violates its general contract!",
> "rspCode":-1},
>   "error":{
> "metadata":[
>   "error-class","org.apache.solr.common.SolrException",
>   "root-error-class","org.apache.solr.common.SolrException"],
> "msg":"Comparison method violates its general contract!",
> "trace":"org.apache.solr.common.SolrException: Comparison method violates 
> its general contract!\n\tat 
> org.apache.solr.client.solrj.SolrResponse.getException(SolrResponse.java:53)\n\tat
>  
> org.apache.solr.handler.admin.CollectionsHandler.invokeAction(CollectionsHandler.java:274)\n\tat
>  
> org.apache.solr.handler.admin.CollectionsHandler.handleRequestBody(CollectionsHandler.java:246)\n\tat
>  
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:199)\n\tat
>  
> org.apache.solr.servlet.HttpSolrCall.handleAdmin(HttpSolrCall.java:734)\n\tat 
> org.apache.solr.servlet.HttpSolrCall.handleAdminRequest(HttpSolrCall.java:715)\n\tat
>  org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:496)\n\tat 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:377)\n\tat
>  
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:323)\n\tat
>  
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1634)\n\tat
>  
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:533)\n\tat
>  
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:146)\n\tat
>  
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)\n\tat
>  
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)\n\tat
>  
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:257)\n\tat
>  
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1595)\n\tat
>  
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:255)\n\tat
>  
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1317)\n\tat
>  
> org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:203)\n\tat
>  
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:473)\n\tat
>  
> org.eclipse.jetty.server.session.SessionHandler.

[jira] [Commented] (SOLR-13240) UTILIZENODE action results in an exception

2019-08-06 Thread Richard Goodman (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16900929#comment-16900929
 ] 

Richard Goodman commented on SOLR-13240:


Hi [~cpoerschke], sorry for the delay in the reply. As for 
[TestPolicy.testReplicaCountSuggestions|https://github.com/apache/lucene-solr/blob/releases/lucene-solr/8.1.1/solr/solrj/src/test/org/apache/solr/client/solrj/cloud/autoscaling/TestPolicy.java#L2015-L2031]
 your interpretation seems correct to me also, and so changing the expected 
result would fix this _(without approaching it as "let's just change it to 
what it's complaining about to make the test pass")_, because this patch 
enforces that kind of sorting.

As for the [TestPolicy.testNodeLostMultipleReplica] I'm having a look at this 
now, and so will update you with my findings.

> UTILIZENODE action results in an exception
> --
>
> Key: SOLR-13240
> URL: https://issues.apache.org/jira/browse/SOLR-13240
> Project: Solr
>  Issue Type: Bug
>  Components: AutoScaling
>Affects Versions: 7.6
>Reporter: Hendrik Haddorp
>Priority: Major
> Attachments: SOLR-13240.patch, SOLR-13240.patch, SOLR-13240.patch, 
> SOLR-13240.patch, solr-solrj-7.5.0.jar
>
>
> When I invoke the UTILIZENODE action the REST call fails like this after it 
> moved a few replicas:
> {
>   "responseHeader":{
> "status":500,
> "QTime":40220},
>   "Operation utilizenode caused 
> exception:":"java.lang.IllegalArgumentException:java.lang.IllegalArgumentException:
>  Comparison method violates its general contract!",
>   "exception":{
> "msg":"Comparison method violates its general contract!",
> "rspCode":-1},
>   "error":{
> "metadata":[
>   "error-class","org.apache.solr.common.SolrException",
>   "root-error-class","org.apache.solr.common.SolrException"],
> "msg":"Comparison method violates its general contract!",
> "trace":"org.apache.solr.common.SolrException: Comparison method violates 
> its general contract!\n\tat 
> org.apache.solr.client.solrj.SolrResponse.getException(SolrResponse.java:53)\n\tat
>  
> org.apache.solr.handler.admin.CollectionsHandler.invokeAction(CollectionsHandler.java:274)\n\tat
>  
> org.apache.solr.handler.admin.CollectionsHandler.handleRequestBody(CollectionsHandler.java:246)\n\tat
>  
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:199)\n\tat
>  
> org.apache.solr.servlet.HttpSolrCall.handleAdmin(HttpSolrCall.java:734)\n\tat 
> org.apache.solr.servlet.HttpSolrCall.handleAdminRequest(HttpSolrCall.java:715)\n\tat
>  org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:496)\n\tat 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:377)\n\tat
>  
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:323)\n\tat
>  
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1634)\n\tat
>  
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:533)\n\tat
>  
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:146)\n\tat
>  
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)\n\tat
>  
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)\n\tat
>  
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:257)\n\tat
>  
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1595)\n\tat
>  
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:255)\n\tat
>  
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1317)\n\tat
>  
> org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:203)\n\tat
>  
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:473)\n\tat
>  
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1564)\n\tat
>  
> org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:201)\n\tat
>  
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1219)\n\tat
>  
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:144)\n\tat
>  
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:219)\n\tat
>  
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:126)\n\tat
>  
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)\n\tat
>  
> org.eclipse.jetty.rewrite.handler.RewriteHandler.handle(RewriteHandler.java:335)\n\tat
>  
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)\n\tat
>  org.eclipse.jetty.server.Server.handle(Server.java:531)\n\tat 
> org.eclipse.jetty.server

[JENKINS] Lucene-Solr-8.2-Linux (64bit/jdk-12.0.1) - Build # 526 - Still Unstable!

2019-08-06 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-8.2-Linux/526/
Java: 64bit/jdk-12.0.1 -XX:-UseCompressedOops -XX:+UseParallelGC

4 tests failed.
FAILED:  org.apache.solr.search.mlt.SimpleMLTQParserTest.doTest

Error Message:
Exception during query

Stack Trace:
java.lang.RuntimeException: Exception during query
at 
__randomizedtesting.SeedInfo.seed([A9C4229C0DE00686:E809A38605B153F]:0)
at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:947)
at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:907)
at 
org.apache.solr.search.mlt.SimpleMLTQParserTest.doTest(SimpleMLTQParserTest.java:82)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:567)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:835)
Caused by: java.lang.RuntimeException: REQUEST FAILED: 
xpath=//result/doc[1]/str[@name='id'][.='13']
xml response was: 

[XML markup stripped by the mail archive; the response contained responseHeader 
values 0 and 50, repeated numeric field values of 16, the doc text "The slim 
red fox jumped over the lazy brown dogs." (twice), and the values 
muLti-Default, 42, 2019-08-06T11:22:27.394Z, 1641116385825259520421]

[jira] [Commented] (LUCENE-8747) Allow access to submatches from Matches instances

2019-08-06 Thread Alan Woodward (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8747?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16900840#comment-16900840
 ] 

Alan Woodward commented on LUCENE-8747:
---

I spoke to [~jimczi] offline, and we agreed that adding a specific use-case to 
this might make sense, so I've updated the patch to include a new NamedMatches 
class, with a couple of static helper functions.  This class makes it possible 
to associate names with queries; if those queries are then combined into a 
larger, complex query, it is easy to retrieve the names of any queries that 
matched a particular document.
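
To make the intended usage concrete, here is a rough sketch of the API as I 
understand it from the patch — the helper names (wrapQuery, findNamedMatches) 
and exact signatures are assumptions and may differ in the final version:

{code:java}
import java.io.IOException;
import java.util.List;
import org.apache.lucene.index.LeafReaderContext;
import org.apache.lucene.index.Term;
import org.apache.lucene.search.*;

// Sketch only: wrap leaf queries with names, combine them into a larger
// query, then ask which named queries matched a particular document.
public class NamedMatchesSketch {
  static void printMatchedNames(IndexSearcher searcher, int leafDocId) throws IOException {
    Query fox = NamedMatches.wrapQuery("fox", new TermQuery(new Term("body", "fox")));
    Query dog = NamedMatches.wrapQuery("dog", new TermQuery(new Term("body", "dog")));
    Query combined = new BooleanQuery.Builder()
        .add(fox, BooleanClause.Occur.SHOULD)
        .add(dog, BooleanClause.Occur.SHOULD)
        .build();

    Weight w = searcher.createWeight(searcher.rewrite(combined),
        ScoreMode.COMPLETE_NO_SCORES, 1f);
    LeafReaderContext ctx = searcher.getIndexReader().leaves().get(0);
    Matches matches = w.matches(ctx, leafDocId); // doc id is leaf-relative
    if (matches != null) {
      List<NamedMatches> named = NamedMatches.findNamedMatches(matches);
      named.forEach(nm -> System.out.println("matched: " + nm.getName()));
    }
  }
}
{code}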

> Allow access to submatches from Matches instances
> -
>
> Key: LUCENE-8747
> URL: https://issues.apache.org/jira/browse/LUCENE-8747
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Alan Woodward
>Assignee: Alan Woodward
>Priority: Major
> Attachments: LUCENE-8747.patch, LUCENE-8747.patch, LUCENE-8747.patch, 
> LUCENE-8747.patch
>
>
> A Matches object currently allows access to all matching terms from a query, 
> but the structure of the matching query is flattened out, so if you want to 
> find which subqueries have matched you need to iterate over all matches, 
> collecting queries as you go.  It should be easier to get this information 
> from the parent Matches object.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-8747) Allow access to submatches from Matches instances

2019-08-06 Thread Alan Woodward (JIRA)


 [ 
https://issues.apache.org/jira/browse/LUCENE-8747?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Woodward updated LUCENE-8747:
--
Attachment: LUCENE-8747.patch

> Allow access to submatches from Matches instances
> -
>
> Key: LUCENE-8747
> URL: https://issues.apache.org/jira/browse/LUCENE-8747
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Alan Woodward
>Assignee: Alan Woodward
>Priority: Major
> Attachments: LUCENE-8747.patch, LUCENE-8747.patch, LUCENE-8747.patch, 
> LUCENE-8747.patch
>
>
> A Matches object currently allows access to all matching terms from a query, 
> but the structure of the matching query is flattened out, so if you want to 
> find which subqueries have matched you need to iterate over all matches, 
> collecting queries as you go.  It should be easier to get this information 
> from the parent Matches object.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13539) Atomic Update Multivalue remove does not work for field types UUID, Enums, Bool and Binary

2019-08-06 Thread JIRA


[ 
https://issues.apache.org/jira/browse/SOLR-13539?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16900830#comment-16900830
 ] 

Thomas Wöckinger commented on SOLR-13539:
-

I will have a look at your tests. If they are still failing on master, I will 
create a separate PR referencing your issue.

> Atomic Update Multivalue remove does not work for field types UUID, Enums, 
> Bool  and Binary
> ---
>
> Key: SOLR-13539
> URL: https://issues.apache.org/jira/browse/SOLR-13539
> Project: Solr
>  Issue Type: Bug
>  Components: UpdateRequestProcessors
>Affects Versions: 7.7.2, 8.1, 8.1.1
>Reporter: Thomas Wöckinger
>Priority: Critical
> Attachments: SOLR-13539.patch
>
>  Time Spent: 6h 50m
>  Remaining Estimate: 0h
>
> When using JavaBinCodec the values of collections are of type 
> ByteArrayUtf8CharSequence, existing field values are Strings so the remove 
> Operation does not have any effect.
>  This is related to following field types: UUID, Enums, Bool and Binary
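
For readers following along, a stripped-down illustration of the mismatch 
described above — this is not the actual AtomicUpdateDocumentMerger code, and 
the stand-in class below only mimics a CharSequence whose equals() does not 
match String:

{code:java}
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Simplified sketch of the bug: stored field values are Strings, while the
// values decoded from javabin arrive as a different CharSequence type, so
// List.remove(Object) finds nothing to remove.
public class RemoveMismatchDemo {
  // Stand-in for ByteArrayUtf8CharSequence: same text, different class.
  // StringBuilder does not override equals(), so equality is identity-based.
  static CharSequence utf8(String s) {
    return new StringBuilder(s);
  }

  public static void main(String[] args) {
    List<Object> existing = new ArrayList<>(Arrays.asList("a", "b", "c"));

    existing.remove(utf8("b"));             // no effect: not equal to the String "b"
    System.out.println(existing);           // [a, b, c]

    existing.remove(utf8("b").toString());  // fix: normalise to String first
    System.out.println(existing);           // [a, c]
  }
}
{code}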



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-8.x-Linux (64bit/jdk1.8.0_201) - Build # 976 - Still Unstable!

2019-08-06 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-8.x-Linux/976/
Java: 64bit/jdk1.8.0_201 -XX:-UseCompressedOops -XX:+UseParallelGC

1 tests failed.
FAILED:  
org.apache.solr.update.processor.DimensionalRoutedAliasUpdateProcessorTest.testTimeCat

Error Message:
took over 10 seconds after collection creation to update aliases

Stack Trace:
java.lang.AssertionError: took over 10 seconds after collection creation to 
update aliases
at 
__randomizedtesting.SeedInfo.seed([FFFD33D1DD7F953B:C60572629D5ED950]:0)
at org.junit.Assert.fail(Assert.java:88)
at 
org.apache.solr.update.processor.RoutedAliasUpdateProcessorTest.waitColAndAlias(RoutedAliasUpdateProcessorTest.java:77)
at 
org.apache.solr.update.processor.DimensionalRoutedAliasUpdateProcessorTest.testTimeCat(DimensionalRoutedAliasUpdateProcessorTest.java:219)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)




Build Log:
[...truncated 16209 lines...]
   [junit4] Suite: 
org.apac

[JENKINS] Lucene-Solr-SmokeRelease-8.x - Build # 171 - Still Failing

2019-08-06 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-8.x/171/

No tests ran.

Build Log:
[...truncated 24989 lines...]
[asciidoctor:convert] asciidoctor: ERROR: about-this-guide.adoc: line 1: 
invalid part, must have at least one section (e.g., chapter, appendix, etc.)
[asciidoctor:convert] asciidoctor: ERROR: solr-glossary.adoc: line 1: invalid 
part, must have at least one section (e.g., chapter, appendix, etc.)
 [java] Processed 2590 links (2119 relative) to 3408 anchors in 259 files
 [echo] Validated Links & Anchors via: 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/solr/build/solr-ref-guide/bare-bones-html/

-dist-changes:
 [copy] Copying 4 files to 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/solr/package/changes

package:

-unpack-solr-tgz:

-ensure-solr-tgz-exists:
[mkdir] Created dir: 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/solr/build/solr.tgz.unpacked
[untar] Expanding: 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/solr/package/solr-8.3.0.tgz
 into 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/solr/build/solr.tgz.unpacked

generate-maven-artifacts:

resolve:

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/lucene/top-level-ivy-settings.xml

resolve:

[...truncated: the ivy-availability-check / ivy-fail / ivy-configure block 
above repeats identically for each remaining module...]

[jira] [Commented] (SOLR-13539) Atomic Update Multivalue remove does not work for field types UUID, Enums, Bool and Binary

2019-08-06 Thread Tim Owen (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13539?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16900810#comment-16900810
 ] 

Tim Owen commented on SOLR-13539:
-

Thanks Thomas, yes, we're on 7.7.2 and having trouble, as we use atomic updates 
heavily. We applied the patch from SOLR-13538 and then your patch from your 
GitHub PR (excluding the unit tests), which fixes most things (thank you). I 
have attached a further patch we had to apply locally to make removeregex work 
(it looks like it was fixed for the single-value case, but multiple values were 
still failing); perhaps you could fold that fix into your larger change, or if 
not I can raise a separate ticket.

To be honest, this whole situation with the javabin change is getting 
confusing, with various partial fixes, and it's not clear to me which of them 
are on the 7.x branch. Right now, stock 7.7.2 is effectively broken. Thanks for 
your efforts to get this stable again.
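
For anyone wanting to reproduce the multivalued removeregex case, a minimal 
SolrJ sketch — collection, field, and value names are made up; with the bug 
present, the matching values are silently left in place:

{code:java}
import java.util.Collections;
import java.util.Map;
import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.common.SolrInputDocument;

// Atomic update that should strip every value of the multivalued "tags"
// field matching the regex; with this bug the removal has no effect.
public class RemoveRegexRepro {
  public static void main(String[] args) throws Exception {
    try (SolrClient client =
        new HttpSolrClient.Builder("http://localhost:8983/solr/test").build()) {
      SolrInputDocument doc = new SolrInputDocument();
      doc.addField("id", "doc1");
      // An operation map is the SolrJ way of expressing an atomic update.
      Map<String, Object> op = Collections.singletonMap("removeregex", "tmp_.*");
      doc.addField("tags", op);
      client.add(doc);
      client.commit();
    }
  }
}
{code}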

> Atomic Update Multivalue remove does not work for field types UUID, Enums, 
> Bool  and Binary
> ---
>
> Key: SOLR-13539
> URL: https://issues.apache.org/jira/browse/SOLR-13539
> Project: Solr
>  Issue Type: Bug
>  Components: UpdateRequestProcessors
>Affects Versions: 7.7.2, 8.1, 8.1.1
>Reporter: Thomas Wöckinger
>Priority: Critical
> Attachments: SOLR-13539.patch
>
>  Time Spent: 6h 50m
>  Remaining Estimate: 0h
>
> When using JavaBinCodec the values of collections are of type 
> ByteArrayUtf8CharSequence, existing field values are Strings so the remove 
> Operation does not have any effect.
>  This is related to following field types: UUID, Enums, Bool and Binary



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-8.x-Windows (64bit/jdk-12.0.1) - Build # 384 - Still Unstable!

2019-08-06 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-8.x-Windows/384/
Java: 64bit/jdk-12.0.1 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

18 tests failed.
FAILED:  
org.apache.solr.cloud.LegacyCloudClusterPropTest.testCreateCollectionSwitchLegacyCloud

Error Message:
IOException occurred when talking to server at: https://127.0.0.1:52193/solr

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: IOException occurred when 
talking to server at: https://127.0.0.1:52193/solr
at 
__randomizedtesting.SeedInfo.seed([C6FE304FA96A1E74:17F9C2CA0D659546]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:670)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:262)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:245)
at 
org.apache.solr.client.solrj.impl.LBSolrClient.doRequest(LBSolrClient.java:368)
at 
org.apache.solr.client.solrj.impl.LBSolrClient.request(LBSolrClient.java:296)
at 
org.apache.solr.client.solrj.impl.BaseCloudSolrClient.sendRequest(BaseCloudSolrClient.java:1128)
at 
org.apache.solr.client.solrj.impl.BaseCloudSolrClient.requestWithRetryOnStaleState(BaseCloudSolrClient.java:897)
at 
org.apache.solr.client.solrj.impl.BaseCloudSolrClient.request(BaseCloudSolrClient.java:829)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:211)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:228)
at 
org.apache.solr.cloud.LegacyCloudClusterPropTest.createAndTest(LegacyCloudClusterPropTest.java:87)
at 
org.apache.solr.cloud.LegacyCloudClusterPropTest.testCreateCollectionSwitchLegacyCloud(LegacyCloudClusterPropTest.java:79)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:567)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.e

[jira] [Updated] (SOLR-13539) Atomic Update Multivalue remove does not work for field types UUID, Enums, Bool and Binary

2019-08-06 Thread Tim Owen (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13539?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tim Owen updated SOLR-13539:

Attachment: SOLR-13539.patch

> Atomic Update Multivalue remove does not work for field types UUID, Enums, 
> Bool  and Binary
> ---
>
> Key: SOLR-13539
> URL: https://issues.apache.org/jira/browse/SOLR-13539
> Project: Solr
>  Issue Type: Bug
>  Components: UpdateRequestProcessors
>Affects Versions: 7.7.2, 8.1, 8.1.1
>Reporter: Thomas Wöckinger
>Priority: Critical
> Attachments: SOLR-13539.patch
>
>  Time Spent: 6h 50m
>  Remaining Estimate: 0h
>
> When using JavaBinCodec the values of collections are of type 
> ByteArrayUtf8CharSequence, existing field values are Strings so the remove 
> Operation does not have any effect.
>  This is related to following field types: UUID, Enums, Bool and Binary



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-Tests-master - Build # 3507 - Failure

2019-08-06 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-master/3507/

All tests passed

Build Log:
[...truncated 64522 lines...]
-ecj-javadoc-lint-src:
[mkdir] Created dir: /tmp/ecj327491426
 [ecj-lint] Compiling 69 source files to /tmp/ecj327491426
 [ecj-lint] invalid Class-Path header in manifest of jar file: 
/home/jenkins/.ivy2/cache/org.restlet.jee/org.restlet/jars/org.restlet-2.3.0.jar
 [ecj-lint] invalid Class-Path header in manifest of jar file: 
/home/jenkins/.ivy2/cache/org.restlet.jee/org.restlet.ext.servlet/jars/org.restlet.ext.servlet-2.3.0.jar
 [ecj-lint] --
 [ecj-lint] 1. ERROR in 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/JdbcDataSource.java
 (at line 28)
 [ecj-lint] import javax.naming.InitialContext;
 [ecj-lint]^^^
 [ecj-lint] The type javax.naming.InitialContext is not accessible
 [ecj-lint] --
 [ecj-lint] 2. ERROR in 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/JdbcDataSource.java
 (at line 29)
 [ecj-lint] import javax.naming.NamingException;
 [ecj-lint]
 [ecj-lint] The type javax.naming.NamingException is not accessible
 [ecj-lint] --
 [ecj-lint] 3. ERROR in 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/JdbcDataSource.java
 (at line 182)
 [ecj-lint] c = getFromJndi(initProps, jndiName);
 [ecj-lint] ^^^
 [ecj-lint] The method getFromJndi(Properties, String) from the type new 
Callable(){} refers to the missing type NamingException
 [ecj-lint] --
 [ecj-lint] 4. ERROR in 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/JdbcDataSource.java
 (at line 245)
 [ecj-lint] private Connection getFromJndi(final Properties initProps, 
final String jndiName) throws NamingException,
 [ecj-lint] 
 ^^^
 [ecj-lint] NamingException cannot be resolved to a type
 [ecj-lint] --
 [ecj-lint] 5. ERROR in 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/JdbcDataSource.java
 (at line 249)
 [ecj-lint] InitialContext ctx =  new InitialContext();
 [ecj-lint] ^^
 [ecj-lint] InitialContext cannot be resolved to a type
 [ecj-lint] --
 [ecj-lint] 6. ERROR in 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/JdbcDataSource.java
 (at line 249)
 [ecj-lint] InitialContext ctx =  new InitialContext();
 [ecj-lint]   ^^
 [ecj-lint] InitialContext cannot be resolved to a type
 [ecj-lint] --
 [ecj-lint] 6 problems (6 errors)

BUILD FAILED
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/build.xml:634: 
The following error occurred while executing this line:
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/build.xml:101: 
The following error occurred while executing this line:
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/solr/build.xml:651:
 The following error occurred while executing this line:
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/solr/common-build.xml:479:
 The following error occurred while executing this line:
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/lucene/common-build.xml:2009:
 The following error occurred while executing this line:
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/lucene/common-build.xml:2048:
 Compile failed; see the compiler error output for details.

Total time: 103 minutes 27 seconds
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
Recording test results
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[jira] [Commented] (LUCENE-8944) "I am authorized to contribute" wording in the Pull Request Template

2019-08-06 Thread JIRA


[ 
https://issues.apache.org/jira/browse/LUCENE-8944?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16900721#comment-16900721
 ] 

Jan Høydahl commented on LUCENE-8944:
-

+1 to the minor improvements in bullets 1-3.

Let's remove the "I'm authorized" box if it is not needed. I believe I have 
read that, legally, the very fact that a PR is submitted shows the submitter 
intends it to be ready for inclusion. The exception is larger pieces of work 
contributed by a company, where explicit CLAs may be needed.

Better to err on the side of asking legal an extra time if no one has a link 
to such an official policy.

> "I am authorized to contribute" wording in the Pull Request Template
> 
>
> Key: LUCENE-8944
> URL: https://issues.apache.org/jira/browse/LUCENE-8944
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Christine Poerschke
>Priority: Minor
>
> This ticket is to consider potential revisions to one of the checklist items 
> in the [pull request 
> template|https://github.com/apache/lucene-solr/blob/master/.github/PULL_REQUEST_TEMPLATE.md]
>  -- its current wording is:
> bq. \[ \] I am authorized to contribute this code to the ASF and have removed 
> any code I do not have a license to distribute.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org