[GitHub] [lucene-solr] dsmiley commented on issue #838: SOLR-13705 Double-checked locking bug is fixed.

2019-09-03 Thread GitBox
dsmiley commented on issue #838: SOLR-13705 Double-checked locking bug is fixed.
URL: https://github.com/apache/lucene-solr/pull/838#issuecomment-527737010
 
 
   Thanks for contributing!  Is there a reason you didn't check the 
authorization checkbox in the template?
   
   Your description mentions a mutable object, but I think it's the 
_immutability_ here that enables us to avoid a volatile.
   Speaking of which, I looked closer to see whether these lazily created 
objects are immutable. SSLConfigurations may have an issue.  It looks okay, but 
its only field is an SSLCredentialProviderFactory that _should_ be immutable, 
yet its own only field, a String providerChain, is not declared final.  That is 
required for "publication safety", I think.
   Furthermore, I think the immutability of these objects should be documented 
with a comment, as justification for the lack of volatile.
   
   Reference: 
https://wiki.sei.cmu.edu/confluence/display/java/LCK10-J.+Use+a+correct+form+of+the+double-checked+locking+idiom
   Reference:  
https://www.cs.umd.edu/~pugh/java/memoryModel/DoubleCheckedLocking.html
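For reference, a minimal sketch of the idiom under discussion. This is illustrative only, not Solr's actual code; the class and field names (LazyHolder, providerChain) are made up here. It shows the two ingredients the references above call out: a volatile field makes classic double-checked locking safe under the Java Memory Model, and final fields give safe publication of an immutable object even without volatile.

```java
// Illustrative only -- not Solr code. Demonstrates safe double-checked locking.
class LazyHolder {
    private static volatile LazyHolder instance; // volatile: required for safe DCL
    private final String providerChain;          // final: safely published once constructed

    private LazyHolder(String providerChain) {
        this.providerChain = providerChain;
    }

    static LazyHolder get() {
        LazyHolder local = instance;             // single volatile read on the fast path
        if (local == null) {
            synchronized (LazyHolder.class) {
                local = instance;
                if (local == null) {             // second check, under the lock
                    instance = local = new LazyHolder("default");
                }
            }
        }
        return local;
    }

    String providerChain() {
        return providerChain;
    }
}
```

Without the volatile, a second thread could observe a non-null reference whose fields are not yet visible; making every field final (as suggested above for providerChain) closes that gap for immutable objects.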


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-Tests-master - Build # 3676 - Unstable

2019-09-03 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-master/3676/

1 tests failed.
FAILED:  org.apache.solr.search.facet.TestCloudJSONFacetSKG.testRandom

Error Message:
No live SolrServers available to handle this 
request:[http://127.0.0.1:33121/solr/org.apache.solr.search.facet.TestCloudJSONFacetSKG_collection]

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: No live SolrServers available 
to handle this 
request:[http://127.0.0.1:33121/solr/org.apache.solr.search.facet.TestCloudJSONFacetSKG_collection]
at 
__randomizedtesting.SeedInfo.seed([5D445E53BDFA995:779860EA8ABF1FE6]:0)
at 
org.apache.solr.client.solrj.impl.LBSolrClient.request(LBSolrClient.java:345)
at 
org.apache.solr.client.solrj.impl.BaseCloudSolrClient.sendRequest(BaseCloudSolrClient.java:1128)
at 
org.apache.solr.client.solrj.impl.BaseCloudSolrClient.requestWithRetryOnStaleState(BaseCloudSolrClient.java:897)
at 
org.apache.solr.client.solrj.impl.BaseCloudSolrClient.request(BaseCloudSolrClient.java:829)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:207)
at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:987)
at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:1002)
at 
org.apache.solr.search.facet.TestCloudJSONFacetSKG.getNumFound(TestCloudJSONFacetSKG.java:669)
at 
org.apache.solr.search.facet.TestCloudJSONFacetSKG.verifySKGResults(TestCloudJSONFacetSKG.java:446)
at 
org.apache.solr.search.facet.TestCloudJSONFacetSKG.assertFacetSKGsAreCorrect(TestCloudJSONFacetSKG.java:392)
at 
org.apache.solr.search.facet.TestCloudJSONFacetSKG.assertFacetSKGsAreCorrect(TestCloudJSONFacetSKG.java:402)
at 
org.apache.solr.search.facet.TestCloudJSONFacetSKG.assertFacetSKGsAreCorrect(TestCloudJSONFacetSKG.java:402)
at 
org.apache.solr.search.facet.TestCloudJSONFacetSKG.assertFacetSKGsAreCorrect(TestCloudJSONFacetSKG.java:349)
at 
org.apache.solr.search.facet.TestCloudJSONFacetSKG.testRandom(TestCloudJSONFacetSKG.java:274)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)

[jira] [Assigned] (SOLR-13705) Double-checked Locking Should Not be Used

2019-09-03 Thread David Smiley (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-13705?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley reassigned SOLR-13705:
---

Assignee: David Smiley

> Double-checked Locking Should Not be Used
> -
>
> Key: SOLR-13705
> URL: https://issues.apache.org/jira/browse/SOLR-13705
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 8.2
>Reporter: Furkan KAMACI
>Assignee: David Smiley
>Priority: Major
> Fix For: 8.3
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Using double-checked locking for the lazy initialization of a primitive or 
> mutable object risks a second thread using an uninitialized or partially 
> initialized member while the first thread is still creating it, crashing the 
> program.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)




[jira] [Commented] (SOLR-13105) A visual guide to Solr Math Expressions and Streaming Expressions

2019-09-03 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-13105?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16921814#comment-16921814
 ] 

ASF subversion and git services commented on SOLR-13105:


Commit 7a9c429064f2203e62e8fe0c34314b1c24ad88b5 in lucene-solr's branch 
refs/heads/SOLR-13105-visual from Joel Bernstein
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=7a9c429 ]

SOLR-13105: Revamp simulations docs 12


> A visual guide to Solr Math Expressions and Streaming Expressions
> -
>
> Key: SOLR-13105
> URL: https://issues.apache.org/jira/browse/SOLR-13105
> Project: Solr
>  Issue Type: New Feature
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>Priority: Major
> Attachments: Screen Shot 2019-01-14 at 10.56.32 AM.png, Screen Shot 
> 2019-02-21 at 2.14.43 PM.png, Screen Shot 2019-03-03 at 2.28.35 PM.png, 
> Screen Shot 2019-03-04 at 7.47.57 PM.png, Screen Shot 2019-03-13 at 10.47.47 
> AM.png, Screen Shot 2019-03-30 at 6.17.04 PM.png
>
>
> Visualization is now a fundamental element of Solr Streaming Expressions and 
> Math Expressions. This ticket will create a visual guide to Solr Math 
> Expressions and Solr Streaming Expressions that includes *Apache Zeppelin* 
> visualization examples.
> It will also cover using the JDBC expression to *analyze* and *visualize* 
> results from any JDBC compliant data source.
> Intro from the guide:
> {code:java}
> Streaming Expressions exposes the capabilities of Solr Cloud as composable 
> functions. These functions provide a system for searching, transforming, 
> analyzing and visualizing data stored in Solr Cloud collections.
> At a high level there are four main capabilities that will be explored in the 
> documentation:
> * Searching, sampling and aggregating results from Solr.
> * Transforming result sets after they are retrieved from Solr.
> * Analyzing and modeling result sets using probability and statistics and 
> machine learning libraries.
> * Visualizing result sets, aggregations and statistical models of the data.
> {code}
>  
> A few sample visualizations are attached to the ticket.






[jira] [Commented] (SOLR-13105) A visual guide to Solr Math Expressions and Streaming Expressions

2019-09-03 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-13105?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16921810#comment-16921810
 ] 

ASF subversion and git services commented on SOLR-13105:


Commit 096723f943370960a84b3fa5efba08cc6a016552 in lucene-solr's branch 
refs/heads/SOLR-13105-visual from Joel Bernstein
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=096723f ]

SOLR-13105: Revamp simulations docs 11


> A visual guide to Solr Math Expressions and Streaming Expressions
> -
>
> Key: SOLR-13105
> URL: https://issues.apache.org/jira/browse/SOLR-13105
> Project: Solr
>  Issue Type: New Feature
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>Priority: Major
> Attachments: Screen Shot 2019-01-14 at 10.56.32 AM.png, Screen Shot 
> 2019-02-21 at 2.14.43 PM.png, Screen Shot 2019-03-03 at 2.28.35 PM.png, 
> Screen Shot 2019-03-04 at 7.47.57 PM.png, Screen Shot 2019-03-13 at 10.47.47 
> AM.png, Screen Shot 2019-03-30 at 6.17.04 PM.png
>
>
> Visualization is now a fundamental element of Solr Streaming Expressions and 
> Math Expressions. This ticket will create a visual guide to Solr Math 
> Expressions and Solr Streaming Expressions that includes *Apache Zeppelin* 
> visualization examples.
> It will also cover using the JDBC expression to *analyze* and *visualize* 
> results from any JDBC compliant data source.
> Intro from the guide:
> {code:java}
> Streaming Expressions exposes the capabilities of Solr Cloud as composable 
> functions. These functions provide a system for searching, transforming, 
> analyzing and visualizing data stored in Solr Cloud collections.
> At a high level there are four main capabilities that will be explored in the 
> documentation:
> * Searching, sampling and aggregating results from Solr.
> * Transforming result sets after they are retrieved from Solr.
> * Analyzing and modeling result sets using probability and statistics and 
> machine learning libraries.
> * Visualizing result sets, aggregations and statistical models of the data.
> {code}
>  
> A few sample visualizations are attached to the ticket.






[jira] [Commented] (SOLR-13105) A visual guide to Solr Math Expressions and Streaming Expressions

2019-09-03 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-13105?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16921806#comment-16921806
 ] 

ASF subversion and git services commented on SOLR-13105:


Commit a3d79765c1feda172d85bb7cddb0b19c642d136d in lucene-solr's branch 
refs/heads/SOLR-13105-visual from Joel Bernstein
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=a3d7976 ]

SOLR-13105: Revamp simulations docs 10


> A visual guide to Solr Math Expressions and Streaming Expressions
> -
>
> Key: SOLR-13105
> URL: https://issues.apache.org/jira/browse/SOLR-13105
> Project: Solr
>  Issue Type: New Feature
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>Priority: Major
> Attachments: Screen Shot 2019-01-14 at 10.56.32 AM.png, Screen Shot 
> 2019-02-21 at 2.14.43 PM.png, Screen Shot 2019-03-03 at 2.28.35 PM.png, 
> Screen Shot 2019-03-04 at 7.47.57 PM.png, Screen Shot 2019-03-13 at 10.47.47 
> AM.png, Screen Shot 2019-03-30 at 6.17.04 PM.png
>
>
> Visualization is now a fundamental element of Solr Streaming Expressions and 
> Math Expressions. This ticket will create a visual guide to Solr Math 
> Expressions and Solr Streaming Expressions that includes *Apache Zeppelin* 
> visualization examples.
> It will also cover using the JDBC expression to *analyze* and *visualize* 
> results from any JDBC compliant data source.
> Intro from the guide:
> {code:java}
> Streaming Expressions exposes the capabilities of Solr Cloud as composable 
> functions. These functions provide a system for searching, transforming, 
> analyzing and visualizing data stored in Solr Cloud collections.
> At a high level there are four main capabilities that will be explored in the 
> documentation:
> * Searching, sampling and aggregating results from Solr.
> * Transforming result sets after they are retrieved from Solr.
> * Analyzing and modeling result sets using probability and statistics and 
> machine learning libraries.
> * Visualizing result sets, aggregations and statistical models of the data.
> {code}
>  
> A few sample visualizations are attached to the ticket.






[GitHub] [lucene-solr] MarcusSorealheis opened a new pull request #853: Moving SolrCloud on the README with some cues.

2019-09-03 Thread GitBox
MarcusSorealheis opened a new pull request #853: Moving SolrCloud on the README 
with some cues.
URL: https://github.com/apache/lucene-solr/pull/853
 
 
   
   
   
   # Description
   
   Some haters don't scroll on documentation.
   
   # Solution
   
   Moving SolrCloud before standalone.
   
   # Tests
   
   No tests for the `README`.
   
   # Checklist
   
   Please review the following and check all that apply:
   
   - [ ] I have reviewed the guidelines for [How to 
Contribute](https://wiki.apache.org/solr/HowToContribute) and my code conforms 
to the standards described there to the best of my ability.
   - [ ] I have created a Jira issue and added the issue ID to my pull request 
title.
   - [ ] I am authorized to contribute this code to the ASF and have removed 
any code I do not have a license to distribute.
   - [ ] I have developed this patch against the `master` branch.
   - [ ] I have run `ant precommit` and the appropriate test suite.
   - [ ] I have added tests for my changes.
   - [ ] I have added documentation for the [Ref 
Guide](https://github.com/apache/lucene-solr/tree/master/solr/solr-ref-guide) 
(for Solr changes only).
   





[jira] [Updated] (SOLR-13737) Lead with SolrCloud

2019-09-03 Thread Marcus Eagan (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-13737?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcus Eagan updated SOLR-13737:

Status: Patch Available  (was: Open)

> Lead with SolrCloud
> ---
>
> Key: SOLR-13737
> URL: https://issues.apache.org/jira/browse/SOLR-13737
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Affects Versions: master (9.0)
>Reporter: Marcus Eagan
>Priority: Trivial
> Fix For: master (9.0)
>
>
> Based on some unnecessary and non-constructive criticism I have heard, 
> claiming that SolrCloud is an afterthought in 2019 (which is not true), I 
> decided it might be better to move it ahead of standalone Solr in the README.






[jira] [Updated] (SOLR-13737) Lead with SolrCloud

2019-09-03 Thread Marcus Eagan (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-13737?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcus Eagan updated SOLR-13737:

Status: Open  (was: Patch Available)

> Lead with SolrCloud
> ---
>
> Key: SOLR-13737
> URL: https://issues.apache.org/jira/browse/SOLR-13737
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Affects Versions: master (9.0)
>Reporter: Marcus Eagan
>Priority: Trivial
> Fix For: master (9.0)
>
>
> Based on some unnecessary and non-constructive criticism I have heard, 
> claiming that SolrCloud is an afterthought in 2019 (which is not true), I 
> decided it might be better to move it ahead of standalone Solr in the README.






[jira] [Created] (SOLR-13737) Lead with SolrCloud

2019-09-03 Thread Marcus Eagan (Jira)
Marcus Eagan created SOLR-13737:
---

 Summary: Lead with SolrCloud
 Key: SOLR-13737
 URL: https://issues.apache.org/jira/browse/SOLR-13737
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
  Components: documentation
Affects Versions: master (9.0)
Reporter: Marcus Eagan
 Fix For: master (9.0)


Based on some unnecessary and non-constructive criticism I have heard, claiming 
that SolrCloud is an afterthought in 2019 (which is not true), I decided it 
might be better to move it ahead of standalone Solr in the README.






[jira] [Commented] (LUCENE-8962) Can we merge small segments during refresh, for faster searching?

2019-09-03 Thread David Smiley (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-8962?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16921796#comment-16921796
 ] 

David Smiley commented on LUCENE-8962:
--

At Salesforce I worked on a custom merge policy that handles small segments 
better than TieredMergePolicy does.  What's disappointing about TMP is that it 
insists on merging getSegmentsPerTier() (10) segments, _even when they are 
small_ (below getFloorSegmentMB()).  Instead we wanted some "cheap merges" of a 
smaller number of segments (even as few as 3 for us) consisting solely of the 
small segments.  This cut our average segment count in half, although it cost 
us more I/O -- a trade-off we were happy with.  I'd like to open-source this, 
perhaps as a direct change to TMP, with defaults tuned to do a similar amount 
of I/O while averaging fewer segments.  The difficult part is doing simulations 
to prove out the theories.

Additionally, I worked on a custom MergeScheduler that executed those "cheap 
merges" synchronously (directly in the calling thread) while having the regular 
other merges pass through to the concurrent scheduler.  The rationale wasn't 
tied to NRT but I could see NRT benefiting from this if getting an NRT searcher 
calls out to the merge code (I don't know if it does).

Perhaps your use-case could benefit from this as well.  Unlike what you propose 
in the description, it doesn't involve changes/features to Lucene itself.  WDYT?
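As a toy illustration of the selection rule described above (this is not the actual Salesforce policy, and it deliberately ignores Lucene's real MergePolicy API, which works on SegmentInfos and returns MergeSpecifications): pick a "cheap merge" made up solely of segments below the floor size, requiring only a small minimum count such as 3 rather than a full tier of 10.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Toy sketch only: chooses which segments to merge cheaply, by index.
// The rule: merge a group consisting solely of small segments (below the
// floor size), even when there are far fewer than segmentsPerTier of them.
class CheapMergeChooser {
    static List<Integer> chooseCheapMerge(double[] segmentMB, double floorMB, int minSegments) {
        List<Integer> small = new ArrayList<>();
        for (int i = 0; i < segmentMB.length; i++) {
            if (segmentMB[i] < floorMB) {
                small.add(i);   // candidate: below the floor size
            }
        }
        // Only merge if enough tiny segments exist to be worth the extra I/O.
        return small.size() >= minSegments ? small : Collections.emptyList();
    }
}
```

With a floor of 16 MB and a minimum of 3, segment sizes {0.5, 200, 1, 0.2, 300} would yield a merge of the three tiny segments while leaving the two large ones alone; with only two tiny segments, no merge is proposed.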

> Can we merge small segments during refresh, for faster searching?
> -
>
> Key: LUCENE-8962
> URL: https://issues.apache.org/jira/browse/LUCENE-8962
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/index
>Reporter: Michael McCandless
>Priority: Major
>
> With near-real-time search we ask {{IndexWriter}} to write all in-memory 
> segments to disk and open an {{IndexReader}} to search them, and this is 
> typically a quick operation.
> However, when you use many threads for concurrent indexing, {{IndexWriter}} 
> will write many small segments during {{refresh}}, and this then adds 
> search-time cost as searching must visit all of these tiny segments.
> The merge policy would normally quickly coalesce these small segments if 
> given a little time ... so, could we somehow improve {{IndexWriter}}'s 
> refresh to optionally kick off the merge policy to merge segments below some 
> threshold before opening the near-real-time reader?  It'd be a bit tricky 
> because while we are waiting for merges, indexing may continue, and new 
> segments may be flushed, but those new segments shouldn't be included in the 
> point-in-time segments returned by refresh ...
> One could almost do this on top of Lucene today, with a custom merge policy, 
> and some hackity logic to have the merge policy target small segments just 
> written by refresh, but it's tricky to then open a near-real-time reader, 
> excluding newly flushed but including newly merged segments since the refresh 
> originally finished ...
> I'm not yet sure how best to solve this, so I wanted to open an issue for 
> discussion!






[jira] [Commented] (SOLR-13105) A visual guide to Solr Math Expressions and Streaming Expressions

2019-09-03 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-13105?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16921770#comment-16921770
 ] 

ASF subversion and git services commented on SOLR-13105:


Commit 2124e3fc362ce4c32af514587b6b8acda3436e54 in lucene-solr's branch 
refs/heads/SOLR-13105-visual from Joel Bernstein
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=2124e3f ]

SOLR-13105: Revamp simulations docs 9


> A visual guide to Solr Math Expressions and Streaming Expressions
> -
>
> Key: SOLR-13105
> URL: https://issues.apache.org/jira/browse/SOLR-13105
> Project: Solr
>  Issue Type: New Feature
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>Priority: Major
> Attachments: Screen Shot 2019-01-14 at 10.56.32 AM.png, Screen Shot 
> 2019-02-21 at 2.14.43 PM.png, Screen Shot 2019-03-03 at 2.28.35 PM.png, 
> Screen Shot 2019-03-04 at 7.47.57 PM.png, Screen Shot 2019-03-13 at 10.47.47 
> AM.png, Screen Shot 2019-03-30 at 6.17.04 PM.png
>
>
> Visualization is now a fundamental element of Solr Streaming Expressions and 
> Math Expressions. This ticket will create a visual guide to Solr Math 
> Expressions and Solr Streaming Expressions that includes *Apache Zeppelin* 
> visualization examples.
> It will also cover using the JDBC expression to *analyze* and *visualize* 
> results from any JDBC compliant data source.
> Intro from the guide:
> {code:java}
> Streaming Expressions exposes the capabilities of Solr Cloud as composable 
> functions. These functions provide a system for searching, transforming, 
> analyzing and visualizing data stored in Solr Cloud collections.
> At a high level there are four main capabilities that will be explored in the 
> documentation:
> * Searching, sampling and aggregating results from Solr.
> * Transforming result sets after they are retrieved from Solr.
> * Analyzing and modeling result sets using probability and statistics and 
> machine learning libraries.
> * Visualizing result sets, aggregations and statistical models of the data.
> {code}
>  
> A few sample visualizations are attached to the ticket.






[jira] [Commented] (SOLR-13727) V2Requests: HttpSolrClient replaces first instance of "/solr" with "/api" instead of using regex pattern

2019-09-03 Thread Lucene/Solr QA (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-13727?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16921766#comment-16921766
 ] 

Lucene/Solr QA commented on SOLR-13727:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  6s{color} 
| {color:red} SOLR-13727 does not apply to master. Rebase required? Wrong 
Branch? See 
https://wiki.apache.org/solr/HowToContribute#Creating_the_patch_file for help. 
{color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | SOLR-13727 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12979259/SOLR-13727.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-SOLR-Build/547/console |
| Powered by | Apache Yetus 0.7.0   http://yetus.apache.org |


This message was automatically generated.



> V2Requests: HttpSolrClient replaces first instance of "/solr" with "/api" 
> instead of using regex pattern
> 
>
> Key: SOLR-13727
> URL: https://issues.apache.org/jira/browse/SOLR-13727
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: clients - java, v2 API
>Affects Versions: 8.2
>Reporter: Megan Carey
>Priority: Major
>  Labels: easyfix, patch
> Attachments: SOLR-13727.patch
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> When the HttpSolrClient is formatting a V2Request, it needs to change the 
> endpoint from the default "/solr/..." to "/api/...". It does so by simply 
> calling String.replace, which replaces the first instance of "/solr" in the 
> URL with "/api".
>  
> In the case where the host's address starts with "solr" and the HTTP protocol 
> is prepended, this call changes the address of the request. Example:
> if baseUrl is "http://solr-host.com:8983/solr", this call changes it to 
> "http://api-host.com:8983/solr"
>  
> We should use a regex pattern to ensure that we're replacing the correct 
> portion of the URL.
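A hedged sketch of the failure mode and one possible fix. The method names here are ours, and the real Solr code path differs; the point is that replacing the first occurrence of "/solr" can match inside the host name (e.g. "//solr-host"), whereas anchoring the pattern to the end of the base URL only touches the trailing path segment (assuming the base URL ends in "/solr").

```java
// Illustrative only -- demonstrates the reported bug and an anchored-regex fix.
class V2UrlRewrite {
    // Buggy: the first "/solr" may be part of the host name, not the path.
    static String buggy(String baseUrl) {
        return baseUrl.replaceFirst("/solr", "/api");
    }

    // Safer: anchor the pattern so only a trailing "/solr" segment is replaced.
    static String fixed(String baseUrl) {
        return baseUrl.replaceFirst("/solr$", "/api");
    }
}
```

For "http://solr-host.com:8983/solr", the buggy form rewrites the host to "api-host.com", while the anchored form leaves the host intact and rewrites only the path.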






[JENKINS] Lucene-Solr-Tests-8.x - Build # 518 - Unstable

2019-09-03 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-8.x/518/

1 tests failed.
FAILED:  
org.apache.solr.cloud.autoscaling.sim.TestSimComputePlanAction.testNodeAdded

Error Message:
OverseerTriggerThread never caught up to the latest znodeVersion

Stack Trace:
java.util.concurrent.TimeoutException: OverseerTriggerThread never caught up to 
the latest znodeVersion
at 
__randomizedtesting.SeedInfo.seed([67E6B1C02ED91327:225E7B78C7ABB24]:0)
at org.apache.solr.util.TimeOut.waitFor(TimeOut.java:66)
at 
org.apache.solr.cloud.autoscaling.sim.SimSolrCloudTestCase.assertAutoscalingUpdateComplete(SimSolrCloudTestCase.java:98)
at 
org.apache.solr.cloud.autoscaling.sim.TestSimComputePlanAction.init(TestSimComputePlanAction.java:103)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:972)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947)
at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832)
at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883)
at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)




Build Log:
[...truncated 13643 lines...]
   [junit4] Suite: org.apache.solr.cloud.autoscaling.sim.TestSimComputePlanAction
   [junit4]   2> 348156 INFO  (SUITE-TestSimComputePlanAction-seed#[67E6B1C02ED91327]-worker) [ ] o.a.s.SolrTestCaseJ4 Created 

[jira] [Created] (LUCENE-8962) Can we merge small segments during refresh, for faster searching?

2019-09-03 Thread Michael McCandless (Jira)
Michael McCandless created LUCENE-8962:
--

 Summary: Can we merge small segments during refresh, for faster 
searching?
 Key: LUCENE-8962
 URL: https://issues.apache.org/jira/browse/LUCENE-8962
 Project: Lucene - Core
  Issue Type: Improvement
  Components: core/index
Reporter: Michael McCandless


With near-real-time search we ask {{IndexWriter}} to write all in-memory 
segments to disk and open an {{IndexReader}} to search them, and this is 
typically a quick operation.

However, when you use many threads for concurrent indexing, {{IndexWriter}} 
will write many small segments during {{refresh}}, and this then adds 
search-time cost as searching must visit all of these tiny segments.

The merge policy would normally quickly coalesce these small segments if given 
a little time ... so, could we somehow improve {{IndexWriter}}'s refresh to 
optionally kick off merge policy to merge segments below some threshold before 
opening the near-real-time reader?  It'd be a bit tricky because while we are 
waiting for merges, indexing may continue, and new segments may be flushed, but 
those new segments shouldn't be included in the point-in-time segments returned 
by refresh ...

One could almost do this on top of Lucene today, with a custom merge policy, 
and some hackity logic to have the merge policy target small segments just 
written by refresh, but it's tricky to then open a near-real-time reader, 
excluding newly flushed but including newly merged segments since the refresh 
originally finished ...

I'm not yet sure how best to solve this, so I wanted to open an issue for 
discussion!
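For concreteness, the selection step such a refresh-time merge pass would perform can be sketched without any Lucene dependencies (all names below are made up for illustration and are not Lucene APIs): pick the freshly flushed segments below a size threshold and hand only those to a merge, leaving bigger segments to the regular merge policy.

```java
import java.util.ArrayList;
import java.util.List;

/** Hypothetical sketch of the selection logic only; not actual Lucene code. */
public class RefreshMergeSelector {

    /** Stand-in for a freshly flushed segment: a name and an on-disk size. */
    public static class Segment {
        final String name;
        final long sizeBytes;

        public Segment(String name, long sizeBytes) {
            this.name = name;
            this.sizeBytes = sizeBytes;
        }
    }

    /**
     * Segments a refresh-time merge should coalesce: everything under the
     * threshold, but only when there are at least two of them, since merging
     * a single segment gains nothing.
     */
    public static List<Segment> selectSmallSegments(List<Segment> flushed, long thresholdBytes) {
        List<Segment> small = new ArrayList<>();
        for (Segment s : flushed) {
            if (s.sizeBytes < thresholdBytes) {
                small.add(s);
            }
        }
        return small.size() >= 2 ? small : new ArrayList<>();
    }
}
```

The hard part the ticket describes is not this selection, of course, but the bookkeeping around it: the refresh must wait for (or cut over to) the merged segment while excluding any segments flushed after the refresh started.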



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13655) Cut Over Collections.unmodifiableSet usages to Set.*

2019-09-03 Thread David Smiley (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-13655?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16921733#comment-16921733
 ] 

David Smiley commented on SOLR-13655:
-

This is a trivial master-only change since it requires Java 11.  Perhaps we 
should delay such +sweeping+ changes until master (9.0) is released?  We 
needn't back this one out, but let's at least put the brakes on other such 
things.  I am appreciative of your efforts, guys.
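For context on what the cutover looks like, here is a minimal before/after comparison (plain JDK, not Solr code). One behavioral caveat worth knowing: the Java 9+ factories reject null elements and duplicate arguments at construction time, while Collections.unmodifiableSet tolerates whatever the backing set holds.

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.HashSet;
import java.util.Set;

public class SetCutover {
    public static void main(String[] args) {
        // Java 8 idiom: wrap a mutable set in an unmodifiable view.
        Set<String> before = Collections.unmodifiableSet(new HashSet<>(Arrays.asList("a", "b")));

        // Java 9+ idiom: one call, truly immutable (not just a view).
        Set<String> after = Set.of("a", "b");

        // Set equality is element-based, so the two are equal.
        System.out.println(before.equals(after));

        // Caveat: Set.of throws IllegalArgumentException on duplicate
        // arguments (and NullPointerException on nulls) at construction time.
        try {
            Set.of("a", "a");
        } catch (IllegalArgumentException expected) {
            System.out.println("duplicates rejected");
        }
    }
}
```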

> Cut Over Collections.unmodifiableSet usages to Set.*
> 
>
> Key: SOLR-13655
> URL: https://issues.apache.org/jira/browse/SOLR-13655
> Project: Solr
>  Issue Type: Improvement
>Reporter: Atri Sharma
>Priority: Major
> Fix For: master (9.0)
>
>  Time Spent: 2h 10m
>  Remaining Estimate: 0h
>







[JENKINS] Lucene-Solr-NightlyTests-8.x - Build # 201 - Still Failing

2019-09-03 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-8.x/201/

No tests ran.

Build Log:
[...truncated 25 lines...]
ERROR: Failed to check out http://svn.apache.org/repos/asf/lucene/test-data
org.tmatesoft.svn.core.SVNException: svn: E175002: connection refused by the 
server
svn: E175002: OPTIONS request failed on '/repos/asf/lucene/test-data'
at org.tmatesoft.svn.core.internal.wc.SVNErrorManager.error(SVNErrorManager.java:112)
at org.tmatesoft.svn.core.internal.wc.SVNErrorManager.error(SVNErrorManager.java:96)
at org.tmatesoft.svn.core.internal.io.dav.http.HTTPConnection.request(HTTPConnection.java:765)
at org.tmatesoft.svn.core.internal.io.dav.http.HTTPConnection.request(HTTPConnection.java:352)
at org.tmatesoft.svn.core.internal.io.dav.http.HTTPConnection.request(HTTPConnection.java:340)
at org.tmatesoft.svn.core.internal.io.dav.DAVConnection.performHttpRequest(DAVConnection.java:910)
at org.tmatesoft.svn.core.internal.io.dav.DAVConnection.exchangeCapabilities(DAVConnection.java:702)
at org.tmatesoft.svn.core.internal.io.dav.DAVConnection.open(DAVConnection.java:113)
at org.tmatesoft.svn.core.internal.io.dav.DAVRepository.openConnection(DAVRepository.java:1035)
at org.tmatesoft.svn.core.internal.io.dav.DAVRepository.getLatestRevision(DAVRepository.java:164)
at org.tmatesoft.svn.core.internal.wc2.ng.SvnNgRepositoryAccess.getRevisionNumber(SvnNgRepositoryAccess.java:119)
at org.tmatesoft.svn.core.internal.wc2.SvnRepositoryAccess.getLocations(SvnRepositoryAccess.java:178)
at org.tmatesoft.svn.core.internal.wc2.ng.SvnNgRepositoryAccess.createRepositoryFor(SvnNgRepositoryAccess.java:43)
at org.tmatesoft.svn.core.internal.wc2.ng.SvnNgAbstractUpdate.checkout(SvnNgAbstractUpdate.java:831)
at org.tmatesoft.svn.core.internal.wc2.ng.SvnNgCheckout.run(SvnNgCheckout.java:26)
at org.tmatesoft.svn.core.internal.wc2.ng.SvnNgCheckout.run(SvnNgCheckout.java:11)
at org.tmatesoft.svn.core.internal.wc2.ng.SvnNgOperationRunner.run(SvnNgOperationRunner.java:20)
at org.tmatesoft.svn.core.internal.wc2.SvnOperationRunner.run(SvnOperationRunner.java:21)
at org.tmatesoft.svn.core.wc2.SvnOperationFactory.run(SvnOperationFactory.java:1239)
at org.tmatesoft.svn.core.wc2.SvnOperation.run(SvnOperation.java:294)
at hudson.scm.subversion.CheckoutUpdater$SubversionUpdateTask.perform(CheckoutUpdater.java:133)
at hudson.scm.subversion.WorkspaceUpdater$UpdateTask.delegateTo(WorkspaceUpdater.java:168)
at hudson.scm.subversion.WorkspaceUpdater$UpdateTask.delegateTo(WorkspaceUpdater.java:176)
at hudson.scm.subversion.UpdateUpdater$TaskImpl.perform(UpdateUpdater.java:134)
at hudson.scm.subversion.WorkspaceUpdater$UpdateTask.delegateTo(WorkspaceUpdater.java:168)
at hudson.scm.SubversionSCM$CheckOutTask.perform(SubversionSCM.java:1041)
at hudson.scm.SubversionSCM$CheckOutTask.invoke(SubversionSCM.java:1017)
at hudson.scm.SubversionSCM$CheckOutTask.invoke(SubversionSCM.java:990)
at hudson.FilePath$FileCallableWrapper.call(FilePath.java:3086)
at hudson.remoting.UserRequest.perform(UserRequest.java:212)
at hudson.remoting.UserRequest.perform(UserRequest.java:54)
at hudson.remoting.Request$2.run(Request.java:369)
at hudson.remoting.InterceptingExecutorService$1.call(InterceptingExecutorService.java:72)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:744)
Caused by: java.net.ConnectException: Connection refused
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:345)
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
at java.net.Socket.connect(Socket.java:589)
at org.tmatesoft.svn.core.internal.util.SVNSocketConnection.run(SVNSocketConnection.java:57)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
... 4 more
java.net.ConnectException: Connection refused
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:345)
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)

[jira] [Commented] (SOLR-9658) Caches should have an optional way to clean if idle for 'x' mins

2019-09-03 Thread David Smiley (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-9658?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16921723#comment-16921723
 ] 

David Smiley commented on SOLR-9658:


Do we as a project really want to implement our own caches or is it about time 
that we use other popular caches like 
[Caffeine|https://github.com/ben-manes/caffeine]?  Don't get me wrong, I love 
writing code, and working on caches is fun, but I'd rather we use one of our 
many already existing dependencies for this common task.  SOLR-8241 is about 
adding Caffeine to Solr.
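The {{maxIdleTime}} behavior being discussed is essentially expire-after-access, which Caffeine offers as a builder option. As a dependency-free illustration of the bookkeeping involved (a hypothetical class, not Solr or Caffeine code), with the clock injected so the idle check is deterministic:

```java
import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;
import java.util.function.LongSupplier;

/** Hypothetical sketch: evicts entries not accessed for maxIdleNanos. */
public class IdleEvictingCache<K, V> {

    private static final class Entry<V> {
        V value;
        long lastAccessNanos;
        Entry(V value, long now) { this.value = value; this.lastAccessNanos = now; }
    }

    private final Map<K, Entry<V>> map = new HashMap<>();
    private final long maxIdleNanos;
    private final LongSupplier clock; // injectable for tests; System::nanoTime in production

    public IdleEvictingCache(long maxIdleNanos, LongSupplier clock) {
        this.maxIdleNanos = maxIdleNanos;
        this.clock = clock;
    }

    public synchronized void put(K key, V value) {
        map.put(key, new Entry<>(value, clock.getAsLong()));
    }

    public synchronized V get(K key) {
        evictIdle();
        Entry<V> e = map.get(key);
        if (e == null) return null;
        e.lastAccessNanos = clock.getAsLong(); // touching an entry resets its idle timer
        return e.value;
    }

    public synchronized int size() {
        evictIdle();
        return map.size();
    }

    // Drop every entry whose last access is older than the idle limit.
    private void evictIdle() {
        long now = clock.getAsLong();
        for (Iterator<Entry<V>> it = map.values().iterator(); it.hasNext(); ) {
            if (now - it.next().lastAccessNanos > maxIdleNanos) it.remove();
        }
    }
}
```

A production-grade cache such as Caffeine amortizes the eviction sweep instead of scanning on every access, which is part of the argument for reusing an existing library rather than maintaining this bookkeeping ourselves.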

> Caches should have an optional way to clean if idle for 'x' mins
> 
>
> Key: SOLR-9658
> URL: https://issues.apache.org/jira/browse/SOLR-9658
> Project: Solr
>  Issue Type: New Feature
>Reporter: Noble Paul
>Assignee: Andrzej Bialecki 
>Priority: Major
> Fix For: 8.3
>
> Attachments: SOLR-9658.patch, SOLR-9658.patch, SOLR-9658.patch, 
> SOLR-9658.patch, SOLR-9658.patch, SOLR-9658.patch
>
>
> If a cache is idle for long, it consumes precious memory. It should be 
> configurable to clear the cache if it was not accessed for 'x' secs. The 
> cache configuration can have an extra config {{maxIdleTime}} . if we wish it 
> to the cleaned after 10 mins of inactivity set it to {{maxIdleTime=600}}. 
> [~dragonsinth] would it be a solution for the memory leak you mentioned?






[jira] [Closed] (SOLR-3486) The memory size of Solr caches should be configurable

2019-09-03 Thread David Smiley (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-3486?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley closed SOLR-3486.
--

> The memory size of Solr caches should be configurable
> -
>
> Key: SOLR-3486
> URL: https://issues.apache.org/jira/browse/SOLR-3486
> Project: Solr
>  Issue Type: Improvement
>  Components: search
>Reporter: Adrien Grand
>Priority: Minor
> Attachments: LFUMap.java, SOLR-3486.patch, SOLR-3486.patch
>
>
> It is currently possible to configure the sizes of Solr caches based on the 
> number of entries of the cache. The problem is that the memory size of cached 
> values may vary a lot over time (depending on IndexReader.maxDoc and the 
> queries that are run) although the JVM heap size does not.
> Having a configurable max size in bytes would also help optimize cache 
> utilization, making it possible to store more values provided that they have 
> a small memory footprint.






[jira] [Resolved] (SOLR-3486) The memory size of Solr caches should be configurable

2019-09-03 Thread David Smiley (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-3486?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley resolved SOLR-3486.

Resolution: Duplicate

> The memory size of Solr caches should be configurable
> -
>
> Key: SOLR-3486
> URL: https://issues.apache.org/jira/browse/SOLR-3486
> Project: Solr
>  Issue Type: Improvement
>  Components: search
>Reporter: Adrien Grand
>Priority: Minor
> Attachments: LFUMap.java, SOLR-3486.patch, SOLR-3486.patch
>
>
> It is currently possible to configure the sizes of Solr caches based on the 
> number of entries of the cache. The problem is that the memory size of cached 
> values may vary a lot over time (depending on IndexReader.maxDoc and the 
> queries that are run) although the JVM heap size does not.
> Having a configurable max size in bytes would also help optimize cache 
> utilization, making it possible to store more values provided that they have 
> a small memory footprint.






[jira] [Commented] (SOLR-13105) A visual guide to Solr Math Expressions and Streaming Expressions

2019-09-03 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-13105?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16921712#comment-16921712
 ] 

ASF subversion and git services commented on SOLR-13105:


Commit 1152ebf3786bafdf9216a75d82240ec26b4c1702 in lucene-solr's branch 
refs/heads/SOLR-13105-visual from Joel Bernstein
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=1152ebf ]

SOLR-13105: Revamp simulations docs 8


> A visual guide to Solr Math Expressions and Streaming Expressions
> -
>
> Key: SOLR-13105
> URL: https://issues.apache.org/jira/browse/SOLR-13105
> Project: Solr
>  Issue Type: New Feature
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>Priority: Major
> Attachments: Screen Shot 2019-01-14 at 10.56.32 AM.png, Screen Shot 
> 2019-02-21 at 2.14.43 PM.png, Screen Shot 2019-03-03 at 2.28.35 PM.png, 
> Screen Shot 2019-03-04 at 7.47.57 PM.png, Screen Shot 2019-03-13 at 10.47.47 
> AM.png, Screen Shot 2019-03-30 at 6.17.04 PM.png
>
>
> Visualization is now a fundamental element of Solr Streaming Expressions and 
> Math Expressions. This ticket will create a visual guide to Solr Math 
> Expressions and Solr Streaming Expressions that includes *Apache Zeppelin* 
> visualization examples.
> It will also cover using the JDBC expression to *analyze* and *visualize* 
> results from any JDBC compliant data source.
> Intro from the guide:
> {code:java}
> Streaming Expressions exposes the capabilities of Solr Cloud as composable 
> functions. These functions provide a system for searching, transforming, 
> analyzing and visualizing data stored in Solr Cloud collections.
> At a high level there are four main capabilities that will be explored in the 
> documentation:
> * Searching, sampling and aggregating results from Solr.
> * Transforming result sets after they are retrieved from Solr.
> * Analyzing and modeling result sets using probability and statistics and 
> machine learning libraries.
> * Visualizing result sets, aggregations and statistical models of the data.
> {code}
>  
> A few sample visualizations are attached to the ticket.






[GitHub] [lucene-solr] noblepaul commented on a change in pull request #850: SOLR-13727: Bug fix for V2Request handling in HttpSolrClient

2019-09-03 Thread GitBox
noblepaul commented on a change in pull request #850: SOLR-13727: Bug fix for 
V2Request handling in HttpSolrClient
URL: https://github.com/apache/lucene-solr/pull/850#discussion_r320470306
 
 

 ##
 File path: solr/solrj/src/java/org/apache/solr/client/solrj/impl/HttpSolrClient.java
 ##
 @@ -330,6 +333,12 @@ protected ModifiableSolrParams calculateQueryParams(Set<String> queryParamNames,
     }
     return queryModParams;
   }
+  
+  private String changeV2RequestEndpoint(String basePath) throws MalformedURLException {
 
 Review comment:
   this could be made a `public static` method


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-13727) V2Requests: HttpSolrClient replaces first instance of "/solr" with "/api" instead of using regex pattern

2019-09-03 Thread Mikhail Khludnev (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-13727?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Khludnev updated SOLR-13727:

Status: Patch Available  (was: Open)

> V2Requests: HttpSolrClient replaces first instance of "/solr" with "/api" 
> instead of using regex pattern
> 
>
> Key: SOLR-13727
> URL: https://issues.apache.org/jira/browse/SOLR-13727
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: clients - java, v2 API
>Affects Versions: 8.2
>Reporter: Megan Carey
>Priority: Major
>  Labels: easyfix, patch
> Attachments: SOLR-13727.patch
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> When the HttpSolrClient is formatting a V2Request, it needs to change the 
> endpoint from the default "/solr/..." to "/api/...". It does so by simply 
> calling String.replace, which replaces the first instance of "/solr" in the 
> URL with "/api".
>  
> In the case where the host's address starts with "solr" and the HTTP protocol 
> is appended, this call changes the address for the request. Example:
> if baseUrl is "http://solr-host.com:8983/solr", this call will change to 
> "http:/api-host.com:8983/solr"
>  
> We should use a regex pattern to ensure that we're replacing the correct 
> portion of the URL.
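A self-contained sketch of the failure mode and of a URL-aware fix (class and method names here are illustrative, not the actual HttpSolrClient code; the ticket's example output differs by a slash, but the class of bug is the same):

```java
import java.net.MalformedURLException;
import java.net.URL;

public class V2Endpoints {

    // Naive approach: replace the first "/solr" anywhere in the URL string.
    // When the host name itself starts with "solr", the first match is inside
    // "//solr-host.com", so the authority gets mangled instead of the path.
    static String naiveV2Endpoint(String baseUrl) {
        return baseUrl.replaceFirst("/solr", "/api");
    }

    // Safer approach: parse the URL and rewrite only the leading path segment.
    static String toV2Endpoint(String baseUrl) {
        try {
            URL url = new URL(baseUrl);
            String path = url.getPath().replaceFirst("^/solr", "/api");
            return new URL(url.getProtocol(), url.getHost(), url.getPort(), path).toString();
        } catch (MalformedURLException e) {
            throw new IllegalArgumentException("not a valid base URL: " + baseUrl, e);
        }
    }

    public static void main(String[] args) {
        String baseUrl = "http://solr-host.com:8983/solr";
        System.out.println(naiveV2Endpoint(baseUrl)); // http://api-host.com:8983/solr
        System.out.println(toV2Endpoint(baseUrl));    // http://solr-host.com:8983/api
    }
}
```

Anchoring the replacement to the path (rather than scanning the whole string) is what keeps hosts like `solr-host.com` intact.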






[jira] [Updated] (SOLR-13727) V2Requests: HttpSolrClient replaces first instance of "/solr" with "/api" instead of using regex pattern

2019-09-03 Thread Mikhail Khludnev (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-13727?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Khludnev updated SOLR-13727:

Attachment: SOLR-13727.patch
Status: Open  (was: Open)

> V2Requests: HttpSolrClient replaces first instance of "/solr" with "/api" 
> instead of using regex pattern
> 
>
> Key: SOLR-13727
> URL: https://issues.apache.org/jira/browse/SOLR-13727
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: clients - java, v2 API
>Affects Versions: 8.2
>Reporter: Megan Carey
>Priority: Major
>  Labels: easyfix, patch
> Attachments: SOLR-13727.patch
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> When the HttpSolrClient is formatting a V2Request, it needs to change the 
> endpoint from the default "/solr/..." to "/api/...". It does so by simply 
> calling String.replace, which replaces the first instance of "/solr" in the 
> URL with "/api".
>  
> In the case where the host's address starts with "solr" and the HTTP protocol 
> is appended, this call changes the address for the request. Example:
> if baseUrl is "http://solr-host.com:8983/solr", this call will change to 
> "http:/api-host.com:8983/solr"
>  
> We should use a regex pattern to ensure that we're replacing the correct 
> portion of the URL.






[GitHub] [lucene-solr] thomaswoeckinger commented on issue #755: SOLR-13592: Introduce EmbeddedSolrTestBase for better integration tests

2019-09-03 Thread GitBox
thomaswoeckinger commented on issue #755: SOLR-13592: Introduce 
EmbeddedSolrTestBase for better integration tests
URL: https://github.com/apache/lucene-solr/pull/755#issuecomment-527624546
 
 
   > Note that the javadocs on 
`org.apache.solr.SolrJettyTestBase#createNewSolrClient` are obsolete, as they 
still reference using an "embedded implementation", which is no longer true.
   
   I will remove the reference in the followup PR #665 . 





[jira] [Commented] (SOLR-13709) Race condition on core reload while core is still loading?

2019-09-03 Thread Erick Erickson (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-13709?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16921707#comment-16921707
 ] 

Erick Erickson commented on SOLR-13709:
---

The problem is that, from what I've seen so far, schema changes seem to be only 
one of the conditions under which getCoreDescriptor returns null when it 
shouldn't.

"load" may or may not have anything to do with CoreContainer.load(). A core can 
be created, for instance, by collection creation. There's a window after a core 
is created by something other than start-up but before its coreDescriptor is 
available, even though CoreContainer.load() has finished and presumably all the 
coreDescriptors are available; I believe that window affects more than just a 
schemaless update.

"fail" in this case means I beast the hell out of the test and have a program I 
wrote examine all the stdout files looking for various things, NPEs in this 
case, then try to see what the root cause is. 998/1000 times there's no NPE, 
but when there is, it always looks like getCoreDescriptor returns null.

Hmmm, with my proposed sequence of events, I'm starting to wonder if it could 
be something as stupid-simple as registering the watcher before the call to 
create the core returns...

And I totally realize your comments are what you had time for.

> Race condition on core reload while core is still loading?
> --
>
> Key: SOLR-13709
> URL: https://issues.apache.org/jira/browse/SOLR-13709
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>Assignee: Erick Erickson
>Priority: Major
> Attachments: apache_Lucene-Solr-Tests-8.x_449.log.txt
>
>
> A recent jenkins failure from {{TestSolrCLIRunExample}} seems to suggest that 
> there may be a race condition when attempting to re-load a SolrCore while the 
> core is currently in the process of (re)loading that can leave the SolrCore 
> in an unusable state.
> Details to follow...






[GitHub] [lucene-solr] diegoceccarelli commented on a change in pull request #300: SOLR-11831: Skip second grouping step if group.limit is 1 (aka Las Vegas Patch)

2019-09-03 Thread GitBox
diegoceccarelli commented on a change in pull request #300: SOLR-11831: Skip 
second grouping step if group.limit is 1 (aka Las Vegas Patch)
URL: https://github.com/apache/lucene-solr/pull/300#discussion_r320456939
 
 

 ##
 File path: solr/core/src/java/org/apache/solr/search/grouping/distributed/responseprocessor/SkipSecondStepSearchGroupShardResponseProcessor.java
 ##
 @@ -0,0 +1,116 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.solr.search.grouping.distributed.responseprocessor;
+
+import java.util.Collection;
+import java.util.HashMap;
+import java.util.Map;
+
+import org.apache.lucene.search.Sort;
+import org.apache.lucene.search.TotalHits;
+import org.apache.lucene.search.grouping.GroupDocs;
+import org.apache.lucene.search.grouping.SearchGroup;
+import org.apache.lucene.search.grouping.TopGroups;
+import org.apache.lucene.util.BytesRef;
+import org.apache.solr.handler.component.ResponseBuilder;
+import org.apache.solr.handler.component.ShardDoc;
+import org.apache.solr.handler.component.ShardRequest;
+import org.apache.solr.handler.component.ShardResponse;
+import org.apache.solr.search.SolrIndexSearcher;
+import org.apache.solr.search.grouping.GroupingSpecification;
+import org.apache.solr.search.grouping.distributed.shardresultserializer.SearchGroupsResultTransformer;
+
+public class SkipSecondStepSearchGroupShardResponseProcessor extends SearchGroupShardResponseProcessor {
+
+  @Override
+  protected SearchGroupsResultTransformer newSearchGroupsResultTransformer(SolrIndexSearcher solrIndexSearcher) {
+    return new SearchGroupsResultTransformer.SkipSecondStepSearchResultResultTransformer(solrIndexSearcher);
+  }
+
+  @Override
+  protected SearchGroupsContainer newSearchGroupsContainer(ResponseBuilder rb) {
+    return new SkipSecondStepSearchGroupsContainer(rb.getGroupingSpec().getFields());
+  }
+
+  @Override
+  public void process(ResponseBuilder rb, ShardRequest shardRequest) {
+    super.process(rb, shardRequest);
+    TopGroupsShardResponseProcessor.fillResultIds(rb);
+  }
+
+  protected static class SkipSecondStepSearchGroupsContainer extends SearchGroupsContainer {
+
+    private final Map<Object, String> docIdToShard = new HashMap<>();
+
+    public SkipSecondStepSearchGroupsContainer(String[] fields) {
+      super(fields);
+    }
+
+    @Override
+    public void addSearchGroups(ShardResponse srsp, String field, Collection<SearchGroup<BytesRef>> searchGroups) {
+      super.addSearchGroups(srsp, field, searchGroups);
+      for (SearchGroup<BytesRef> searchGroup : searchGroups) {
+        assert(srsp.getShard() != null);
+        docIdToShard.put(searchGroup.topDocSolrId, srsp.getShard());
+      }
+    }
+
+    @Override
+    public void addMergedSearchGroups(ResponseBuilder rb, String groupField, Collection<SearchGroup<BytesRef>> mergedTopGroups) {
+      // TODO: add comment or javadoc re: why this method is overridden as a no-op
 
 Review comment:
   I removed `addSearchGroupToShard` and pushed here. Test are still 
successful, thanks!





[GitHub] [lucene-solr] diegoceccarelli commented on issue #300: SOLR-11831: Skip second grouping step if group.limit is 1 (aka Las Vegas Patch)

2019-09-03 Thread GitBox
diegoceccarelli commented on issue #300: SOLR-11831: Skip second grouping step 
if group.limit is 1 (aka Las Vegas Patch)
URL: https://github.com/apache/lucene-solr/pull/300#issuecomment-527619910
 
 
   I just pushed all my changes. Tests are successful; precommit is failing at 
the moment due to "Rat problems". It seems to affect master as well; I'll 
check tomorrow. 
   Thanks for all the comments!





[GitHub] [lucene-solr] dsmiley commented on issue #755: SOLR-13592: Introduce EmbeddedSolrTestBase for better integration tests

2019-09-03 Thread GitBox
dsmiley commented on issue #755: SOLR-13592: Introduce EmbeddedSolrTestBase for 
better integration tests
URL: https://github.com/apache/lucene-solr/pull/755#issuecomment-527617990
 
 
   Note that the javadocs on 
`org.apache.solr.SolrJettyTestBase#createNewSolrClient` are obsolete, as they 
still reference using an "embedded implementation", which is no longer true.





[jira] [Updated] (SOLR-13722) A cluster-wide blob upload package option & avoid remote url

2019-09-03 Thread Noble Paul (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-13722?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul updated SOLR-13722:
--
Description: 
This ticket totally eliminates the need for an external service to host the 
jars, so a URL will no longer be required. An external URL leads to 
unreliability because the service may go offline, or it can be DDoSed if/when 
too many requests are sent to it.

 

 
 Add a jar to cluster as follows
{code:java}
curl -X POST -H 'Content-Type: application/octet-stream' --data-binary 
@myjar.jar http://localhost:8983/api/cluster/blob
{code}
This does the following operations
 * Upload this jar to all the live nodes in the system

 

Subsequently, when a node is started, it tries to get all the available blobs 
from one of the live nodes where it is available.

 

 

  was:
This ticket totally eliminates the need for an external service to host the 
jars. So a url will no longer be required.

 
 Add a jar to cluster as follows
{code:java}
curl -X POST -H 'Content-Type: application/octet-stream' --data-binary 
@myjar.jar http://localhost:8983/api/cluster/blob
{code}
This does the following operations
 * Upload this jar to all the live nodes in the system

 

Subsequently, when a node is started, it tries to get all the available blobs 
from one of the live nodes where it is available.

 

 


> A cluster-wide blob upload package option & avoid remote url
> 
>
> Key: SOLR-13722
> URL: https://issues.apache.org/jira/browse/SOLR-13722
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Noble Paul
>Assignee: Noble Paul
>Priority: Major
>  Labels: package
>
> This ticket totally eliminates the need for an external service to host the 
> jars, so a URL will no longer be required. An external URL leads to 
> unreliability because the service may go offline, or it can be DDoSed if/when 
> too many requests are sent to it.
>  
>  
>  Add a jar to cluster as follows
> {code:java}
> curl -X POST -H 'Content-Type: application/octet-stream' --data-binary 
> @myjar.jar http://localhost:8983/api/cluster/blob
> {code}
> This does the following operations
>  * Upload this jar to all the live nodes in the system
>  
> Subsequently, when a node is started, it tries to get all the available blobs 
> from one of the live nodes where it is available.
>  
>  






[jira] [Updated] (SOLR-13722) A cluster-wide blob upload package option & avoid remote url

2019-09-03 Thread Noble Paul (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-13722?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul updated SOLR-13722:
--
Summary: A cluster-wide blob upload package option & avoid remote url  
(was: A cluster-wide blob upload package option)

> A cluster-wide blob upload package option & avoid remote url
> 
>
> Key: SOLR-13722
> URL: https://issues.apache.org/jira/browse/SOLR-13722
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Noble Paul
>Assignee: Noble Paul
>Priority: Major
>  Labels: package
>
> This ticket eliminates the need for an external service to host the jars, so 
> a url will no longer be required.
>  
>  Add a jar to cluster as follows
> {code:java}
> curl -X POST -H 'Content-Type: application/octet-stream' --data-binary 
> @myjar.jar http://localhost:8983/api/cluster/blob
> {code}
> This does the following operations
>  * Upload this jar to all the live nodes in the system
>  
> Subsequently, when a node is started, it tries to get all the available blobs 
> from one of the live nodes where it is available.
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-13650) Support for named global classloaders

2019-09-03 Thread Noble Paul (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-13650?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul updated SOLR-13650:
--
Description: 
{code:json}
curl -X POST -H 'Content-type:application/json' --data-binary '
{
  "add-package": {
   "name": "my-package" ,
  "sha512":""
  }
}' http://localhost:8983/api/cluster

{code}
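The payload above carries a {{sha512}} value (left elided in the example). Assuming it is the hex SHA-512 digest of the package's jar bytes, one way to compute it is sketched below; this is an illustration, not Solr's implementation:

```java
import java.math.BigInteger;
import java.security.MessageDigest;

// Hedged sketch: hex SHA-512 digest of a byte array (e.g. the jar contents).
class Sha512 {
    static String hex(byte[] data) throws Exception {
        byte[] digest = MessageDigest.getInstance("SHA-512").digest(data);
        // Left-pad to 128 hex chars so leading zero bytes are preserved.
        return String.format("%0128x", new BigInteger(1, digest));
    }
}
```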
This means that Solr creates a globally accessible classloader with a name 
{{my-package}} which contains all the jars of that package. 
 A component should be able to use the package by using the {{"package" : 
"my-package"}}.
 eg:
{code:json}
curl -X POST -H 'Content-type:application/json' --data-binary '
{
  "create-searchcomponent": {
  "name": "my-searchcomponent" ,
  "class" : "my.path.to.ClassName",
 "package" : "my-package"
  }
}' http://localhost:8983/api/c/mycollection/config 
{code}
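The "globally accessible classloader with a name" idea above can be sketched with a small registry keyed by package name. This is a hypothetical model for illustration only (the class and method names are made up, not Solr's actual implementation):

```java
import java.net.URL;
import java.net.URLClassLoader;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch of a named-classloader registry.
class PackageLoaderRegistry {
    private final Map<String, URLClassLoader> loaders = new ConcurrentHashMap<>();

    // Register the jars of a package under a global name, e.g. "my-package".
    void addPackage(String name, URL[] jarUrls) {
        loaders.put(name, new URLClassLoader(jarUrls, getClass().getClassLoader()));
    }

    // A component declaring {"package": "my-package"} would resolve its
    // "class" attribute through that package's loader.
    Class<?> loadClass(String packageName, String className) throws ClassNotFoundException {
        URLClassLoader loader = loaders.get(packageName);
        if (loader == null) {
            throw new ClassNotFoundException("Unknown package: " + packageName);
        }
        return Class.forName(className, true, loader);
    }
}
```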

  was:
{code:json}
curl -X POST -H 'Content-type:application/json' --data-binary '
{
  "add-package": {
   "name": "my-package" ,
  "url" : "http://host:port/url/of/jar",
  "sha512":""
  }
}' http://localhost:8983/api/cluster

{code}

This means that Solr creates a globally accessible classloader with a name 
{{my-package}} which contains all the jars of that package. 
A component should be able to use the package by using the {{"package" : 
"my-package"}}.
eg:
{code:json}
curl -X POST -H 'Content-type:application/json' --data-binary '
{
  "create-searchcomponent": {
  "name": "my-searchcomponent" ,
  "class" : "my.path.to.ClassName",
 "package" : "my-package"
  }
}' http://localhost:8983/api/c/mycollection/config 
{code}


> Support for named global classloaders
> -
>
> Key: SOLR-13650
> URL: https://issues.apache.org/jira/browse/SOLR-13650
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Noble Paul
>Priority: Major
>  Labels: package
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> {code:json}
> curl -X POST -H 'Content-type:application/json' --data-binary '
> {
>   "add-package": {
>"name": "my-package" ,
>   "sha512":""
>   }
> }' http://localhost:8983/api/cluster
> {code}
> This means that Solr creates a globally accessible classloader with a name 
> {{my-package}} which contains all the jars of that package. 
>  A component should be able to use the package by using the {{"package" : 
> "my-package"}}.
>  eg:
> {code:json}
> curl -X POST -H 'Content-type:application/json' --data-binary '
> {
>   "create-searchcomponent": {
>   "name": "my-searchcomponent" ,
>   "class" : "my.path.to.ClassName",
>  "package" : "my-package"
>   }
> }' http://localhost:8983/api/c/mycollection/config 
> {code}



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] [lucene-solr] diegoceccarelli commented on a change in pull request #300: SOLR-11831: Skip second grouping step if group.limit is 1 (aka Las Vegas Patch)

2019-09-03 Thread GitBox
diegoceccarelli commented on a change in pull request #300: SOLR-11831: Skip 
second grouping step if group.limit is 1 (aka Las Vegas Patch)
URL: https://github.com/apache/lucene-solr/pull/300#discussion_r320441007
 
 

 ##
 File path: 
solr/core/src/java/org/apache/solr/search/grouping/distributed/responseprocessor/SkipSecondStepSearchGroupShardResponseProcessor.java
 ##
 @@ -0,0 +1,116 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.solr.search.grouping.distributed.responseprocessor;
+
+import java.util.Collection;
+import java.util.HashMap;
+import java.util.Map;
+
+import org.apache.lucene.search.Sort;
+import org.apache.lucene.search.TotalHits;
+import org.apache.lucene.search.grouping.GroupDocs;
+import org.apache.lucene.search.grouping.SearchGroup;
+import org.apache.lucene.search.grouping.TopGroups;
+import org.apache.lucene.util.BytesRef;
+import org.apache.solr.handler.component.ResponseBuilder;
+import org.apache.solr.handler.component.ShardDoc;
+import org.apache.solr.handler.component.ShardRequest;
+import org.apache.solr.handler.component.ShardResponse;
+import org.apache.solr.search.SolrIndexSearcher;
+import org.apache.solr.search.grouping.GroupingSpecification;
+import 
org.apache.solr.search.grouping.distributed.shardresultserializer.SearchGroupsResultTransformer;
+
+public class SkipSecondStepSearchGroupShardResponseProcessor extends 
SearchGroupShardResponseProcessor {
+
+  @Override
+  protected SearchGroupsResultTransformer 
newSearchGroupsResultTransformer(SolrIndexSearcher solrIndexSearcher) {
+return new 
SearchGroupsResultTransformer.SkipSecondStepSearchResultResultTransformer(solrIndexSearcher);
+  }
+
+  @Override
+  protected SearchGroupsContainer newSearchGroupsContainer(ResponseBuilder rb) 
{
+return new 
SkipSecondStepSearchGroupsContainer(rb.getGroupingSpec().getFields());
+  }
+
+  @Override
+  public void process(ResponseBuilder rb, ShardRequest shardRequest) {
+super.process(rb, shardRequest);
+TopGroupsShardResponseProcessor.fillResultIds(rb);
+  }
+
+  protected static class SkipSecondStepSearchGroupsContainer extends 
SearchGroupsContainer {
+
+private final Map<Object, String> docIdToShard = new HashMap<>();
+
+public SkipSecondStepSearchGroupsContainer(String[] fields) {
+  super(fields);
+}
+
+@Override
+public void addSearchGroups(ShardResponse srsp, String field, 
Collection<SearchGroup<BytesRef>> searchGroups) {
+  super.addSearchGroups(srsp, field, searchGroups);
+for (SearchGroup<BytesRef> searchGroup : searchGroups) {
+assert(srsp.getShard() != null);
+docIdToShard.put(searchGroup.topDocSolrId, srsp.getShard());
+  }
+}
+
+@Override
+public void addMergedSearchGroups(ResponseBuilder rb, String groupField, 
Collection<SearchGroup<BytesRef>> mergedTopGroups) {
+  // TODO: add comment or javadoc re: why this method is overridden as a 
no-op
 
 Review comment:
   (a) Good catch, `addSearchGroupToShard` only sets `searchGroupToShards`, which 
seems to be used only here: 

   
https://github.com/apache/lucene-solr/blob/1d85cd783863f75cea133fb9c452302214165a4d/solr/core/src/java/org/apache/solr/search/grouping/distributed/requestfactory/TopGroupsShardRequestFactory.java#L69
   
   and if we enable Las Vegas we are going to skip top groups, so we should be 
fine. 
   
   (b) I would still skip it explicitly instead of relying on an implicit signal 
that might change over time. 


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13727) V2Requests: HttpSolrClient replaces first instance of "/solr" with "/api" instead of using regex pattern

2019-09-03 Thread Yonik Seeley (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-13727?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16921677#comment-16921677
 ] 

Yonik Seeley commented on SOLR-13727:
-

Changes look good to me! I'll commit soon unless anyone else sees an issue with 
this approach.

> V2Requests: HttpSolrClient replaces first instance of "/solr" with "/api" 
> instead of using regex pattern
> 
>
> Key: SOLR-13727
> URL: https://issues.apache.org/jira/browse/SOLR-13727
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: clients - java, v2 API
>Affects Versions: 8.2
>Reporter: Megan Carey
>Priority: Major
>  Labels: easyfix, patch
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> When the HttpSolrClient is formatting a V2Request, it needs to change the 
> endpoint from the default "/solr/..." to "/api/...". It does so by simply 
> calling String.replace, which replaces the first instance of "/solr" in the 
> URL with "/api".
>  
> In the case where the host's address starts with "solr" and the HTTP protocol 
> is appended, this call changes the address for the request. Example:
> if baseUrl is "http://solr-host.com:8983/solr", this call will change it to 
> "http:/api-host.com:8983/solr"
>  
> We should use a regex pattern to ensure that we're replacing the correct 
> portion of the URL.
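The failure mode and one possible anchored-regex fix can be sketched as follows. This is illustrative only, not necessarily the actual patch; the host name is made up:

```java
// Sketch of the reported bug and a regex-based fix.
class V2UrlFix {
    // Buggy behavior described in the issue: the first "/solr" is replaced,
    // even when it appears inside the host name.
    static String buggy(String baseUrl) {
        return baseUrl.replaceFirst("/solr", "/api");
    }

    // Safer: only rewrite a "/solr" that is the trailing path segment.
    static String fixed(String baseUrl) {
        return baseUrl.replaceFirst("/solr$", "/api");
    }
}
```

With `baseUrl = "http://solr-host.com:8983/solr"`, the buggy version rewrites the host portion while the anchored version rewrites only the trailing path segment.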



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Failing tests

2019-09-03 Thread Chris Hostetter

FWIW: it looks like Uwe already created a new Jenkins JIRA for this, and 
there is already a new PR to (try and) fix the new problem.

I created a github issue for my "jenkins-reports" side project (which is 
what's consuming these RSS feeds and generating the reports) just so there 
could be a single place to track everything related to this (didn't make 
sense to create a SOLR jira for this) ...

https://github.com/hossman/jenkins-reports/issues/1


: Date: Sat, 24 Aug 2019 15:01:59 -
: From: Uwe Schindler 
: Reply-To: dev@lucene.apache.org
: To: dev@lucene.apache.org
: Subject: Re: Failing tests
: 
: Hi,
: 
: Jenkins was updated, the bug seems to be (partially) fixed, but the URL is
: no longer absolute. Not sure why.
: 
: I am on vacation, so could anybody check this and maybe reopen the jenkins
: issue, if there is still a regression?
: 
: Uwe
: 
: > : Just for yucks, I grepped the e-mails I’ve been sending out for the
: > : number of failing tests in the most recent 4 of Hoss’s rollups, see
: > : below.
: >
: > my reports aren't comprehensive for the past few weeks because of a bug in
: > Uwe's jenkins box that broke data collection...
: >
: > 
http://mail-archives.apache.org/mod_mbox/lucene-dev/201908.mbox/%3c010b01d551d0$6ae88b50$40b9a1f0$@thetaphi.de%3e
: >
: >
: > :
: > : The drop in the last few weeks is dramatic, hope it’s a trend…..
: > :
: > : e-mail-2018-06-11.txt:  989
: > : e-mail-2018-06-18.txt:  689
: > : e-mail-2018-06-25.txt:  555
: > : e-mail-2018-07-02.txt:  723
: > : e-mail-2018-07-09.txt:  793
: > : e-mail-2018-07-16.txt:  809
: > : e-mail-2018-07-23.txt:  953
: > : e-mail-2018-07-30.txt:  945
: > : e-mail-2018-08-06.txt:  830
: > : e-mail-2018-08-14.txt:  853
: > : e-mail-2018-08-20.txt:  547
: > : e-mail-2018-08-27.txt:  571
: > : e-mail-2018-09-03.txt:  512
: > : e-mail-2018-09-10.txt:  605
: > : e-mail-2018-09-18.txt:  624
: > : e-mail-2018-10-08.txt:  816
: > : e-mail-2018-12-24.txt: 1050
: > : e-mail-2019-01-08.txt:  655
: > : e-mail-2019-01-15.txt:  421
: > : e-mail-2019-02-12.txt:  347
: > : e-mail-2019-02-18.txt:  341
: > : e-mail-2019-03-04.txt:  279
: > : e-mail-2019-03-11.txt:  301
: > : e-mail-2019-03-18.txt:  275
: > : e-mail-2019-03-25.txt:  279
: > : e-mail-2019-04-01.txt:  288
: > : e-mail-2019-04-08.txt:  288
: > : e-mail-2019-04-15.txt:  259
: > : e-mail-2019-04-22.txt:  260
: > : e-mail-2019-05-06.txt:  238
: > : e-mail-2019-05-20.txt:  222
: > : e-mail-2019-06-03.txt:  199
: > : e-mail-2019-06-10.txt:  258
: > : e-mail-2019-06-17.txt:  530
: > : e-mail-2019-06-24.txt:  543
: > : e-mail-2019-07-01.txt:  585
: > : e-mail-2019-07-29.txt:  338
: > : e-mail-2019-08-05.txt:  252
: > : e-mail-2019-08-12.txt:  182
: > : e-mail-2019-08-19.txt:   80
: > : -
: > : To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
: > : For additional commands, e-mail: dev-h...@lucene.apache.org
: > :
: > :
: >
: > -Hoss
: > http://www.lucidworks.com/
: >
: > -
: > To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
: > For additional commands, e-mail: dev-h...@lucene.apache.org
: 
: 
: -
: To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
: For additional commands, e-mail: dev-h...@lucene.apache.org
: 
: 

-Hoss
http://www.lucidworks.com/

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[GitHub] [lucene-solr] diegoceccarelli commented on a change in pull request #300: SOLR-11831: Skip second grouping step if group.limit is 1 (aka Las Vegas Patch)

2019-09-03 Thread GitBox
diegoceccarelli commented on a change in pull request #300: SOLR-11831: Skip 
second grouping step if group.limit is 1 (aka Las Vegas Patch)
URL: https://github.com/apache/lucene-solr/pull/300#discussion_r320422541
 
 

 ##
 File path: 
solr/core/src/java/org/apache/solr/search/grouping/distributed/responseprocessor/SkipSecondStepSearchGroupShardResponseProcessor.java
 ##
 @@ -0,0 +1,116 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.solr.search.grouping.distributed.responseprocessor;
+
+import java.util.Collection;
+import java.util.HashMap;
+import java.util.Map;
+
+import org.apache.lucene.search.Sort;
+import org.apache.lucene.search.TotalHits;
+import org.apache.lucene.search.grouping.GroupDocs;
+import org.apache.lucene.search.grouping.SearchGroup;
+import org.apache.lucene.search.grouping.TopGroups;
+import org.apache.lucene.util.BytesRef;
+import org.apache.solr.handler.component.ResponseBuilder;
+import org.apache.solr.handler.component.ShardDoc;
+import org.apache.solr.handler.component.ShardRequest;
+import org.apache.solr.handler.component.ShardResponse;
+import org.apache.solr.search.SolrIndexSearcher;
+import org.apache.solr.search.grouping.GroupingSpecification;
+import 
org.apache.solr.search.grouping.distributed.shardresultserializer.SearchGroupsResultTransformer;
+
+public class SkipSecondStepSearchGroupShardResponseProcessor extends 
SearchGroupShardResponseProcessor {
+
+  @Override
+  protected SearchGroupsResultTransformer 
newSearchGroupsResultTransformer(SolrIndexSearcher solrIndexSearcher) {
+return new 
SearchGroupsResultTransformer.SkipSecondStepSearchResultResultTransformer(solrIndexSearcher);
+  }
+
+  @Override
+  protected SearchGroupsContainer newSearchGroupsContainer(ResponseBuilder rb) 
{
+return new 
SkipSecondStepSearchGroupsContainer(rb.getGroupingSpec().getFields());
+  }
+
+  @Override
+  public void process(ResponseBuilder rb, ShardRequest shardRequest) {
+super.process(rb, shardRequest);
+TopGroupsShardResponseProcessor.fillResultIds(rb);
+  }
+
+  protected static class SkipSecondStepSearchGroupsContainer extends 
SearchGroupsContainer {
+
+private final Map<Object, String> docIdToShard = new HashMap<>();
+
+public SkipSecondStepSearchGroupsContainer(String[] fields) {
+  super(fields);
+}
+
+@Override
+public void addSearchGroups(ShardResponse srsp, String field, 
Collection<SearchGroup<BytesRef>> searchGroups) {
+  super.addSearchGroups(srsp, field, searchGroups);
+for (SearchGroup<BytesRef> searchGroup : searchGroups) {
+assert(srsp.getShard() != null);
+docIdToShard.put(searchGroup.topDocSolrId, srsp.getShard());
+  }
+}
+
+@Override
+public void addMergedSearchGroups(ResponseBuilder rb, String groupField, 
Collection<SearchGroup<BytesRef>> mergedTopGroups) {
+  // TODO: add comment or javadoc re: why this method is overridden as a 
no-op
+}
+
+@Override
+public void addSearchGroupToShards(ResponseBuilder rb, String groupField, 
Collection<SearchGroup<BytesRef>> mergedTopGroups) {
+  super.addSearchGroupToShards(rb, groupField, mergedTopGroups);
+
+  final GroupingSpecification groupingSpecification = rb.getGroupingSpec();
+  final Sort groupSort = 
groupingSpecification.getGroupSortSpec().getSort();
+
+  GroupDocs<BytesRef>[] groups = new GroupDocs[mergedTopGroups.size()];
+
+  // This is the max score found in any document on any group
+  float maxScore = 0;
+  int index = 0;
+
+  for (SearchGroup<BytesRef> group : mergedTopGroups) {
+maxScore = Math.max(maxScore, group.topDocScore);
+final String shard = docIdToShard.get(group.topDocSolrId);
+assert(shard != null);
+final ShardDoc sdoc = new ShardDoc();
+sdoc.score = group.topDocScore;
+sdoc.id = group.topDocSolrId;
+sdoc.shard = shard;
+
+groups[index++] = new GroupDocs<>(group.topDocScore,
+group.topDocScore,
+new TotalHits(1, TotalHits.Relation.EQUAL_TO), /* we don't know 
the actual number of hits in the group- we set it to 1 as we only keep track of 
the top doc */
+new ShardDoc[] { sdoc }, /* only top doc */
+group.groupValue,
+group.sortValues);
+  }
+  

[GitHub] [lucene-solr] diegoceccarelli commented on a change in pull request #300: SOLR-11831: Skip second grouping step if group.limit is 1 (aka Las Vegas Patch)

2019-09-03 Thread GitBox
diegoceccarelli commented on a change in pull request #300: SOLR-11831: Skip 
second grouping step if group.limit is 1 (aka Las Vegas Patch)
URL: https://github.com/apache/lucene-solr/pull/300#discussion_r320422334
 
 

 ##
 File path: 
solr/core/src/java/org/apache/solr/search/grouping/distributed/responseprocessor/SkipSecondStepSearchGroupShardResponseProcessor.java
 ##
 @@ -0,0 +1,116 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.solr.search.grouping.distributed.responseprocessor;
+
+import java.util.Collection;
+import java.util.HashMap;
+import java.util.Map;
+
+import org.apache.lucene.search.Sort;
+import org.apache.lucene.search.TotalHits;
+import org.apache.lucene.search.grouping.GroupDocs;
+import org.apache.lucene.search.grouping.SearchGroup;
+import org.apache.lucene.search.grouping.TopGroups;
+import org.apache.lucene.util.BytesRef;
+import org.apache.solr.handler.component.ResponseBuilder;
+import org.apache.solr.handler.component.ShardDoc;
+import org.apache.solr.handler.component.ShardRequest;
+import org.apache.solr.handler.component.ShardResponse;
+import org.apache.solr.search.SolrIndexSearcher;
+import org.apache.solr.search.grouping.GroupingSpecification;
+import 
org.apache.solr.search.grouping.distributed.shardresultserializer.SearchGroupsResultTransformer;
+
+public class SkipSecondStepSearchGroupShardResponseProcessor extends 
SearchGroupShardResponseProcessor {
+
+  @Override
+  protected SearchGroupsResultTransformer 
newSearchGroupsResultTransformer(SolrIndexSearcher solrIndexSearcher) {
+return new 
SearchGroupsResultTransformer.SkipSecondStepSearchResultResultTransformer(solrIndexSearcher);
+  }
+
+  @Override
+  protected SearchGroupsContainer newSearchGroupsContainer(ResponseBuilder rb) 
{
+return new 
SkipSecondStepSearchGroupsContainer(rb.getGroupingSpec().getFields());
+  }
+
+  @Override
+  public void process(ResponseBuilder rb, ShardRequest shardRequest) {
+super.process(rb, shardRequest);
+TopGroupsShardResponseProcessor.fillResultIds(rb);
+  }
+
+  protected static class SkipSecondStepSearchGroupsContainer extends 
SearchGroupsContainer {
+
+private final Map<Object, String> docIdToShard = new HashMap<>();
+
+public SkipSecondStepSearchGroupsContainer(String[] fields) {
+  super(fields);
+}
+
+@Override
+public void addSearchGroups(ShardResponse srsp, String field, 
Collection<SearchGroup<BytesRef>> searchGroups) {
+  super.addSearchGroups(srsp, field, searchGroups);
+for (SearchGroup<BytesRef> searchGroup : searchGroups) {
+assert(srsp.getShard() != null);
+docIdToShard.put(searchGroup.topDocSolrId, srsp.getShard());
+  }
+}
+
+@Override
+public void addMergedSearchGroups(ResponseBuilder rb, String groupField, 
Collection<SearchGroup<BytesRef>> mergedTopGroups) {
+  // TODO: add comment or javadoc re: why this method is overridden as a 
no-op
+}
+
+@Override
+public void addSearchGroupToShards(ResponseBuilder rb, String groupField, 
Collection<SearchGroup<BytesRef>> mergedTopGroups) {
+  super.addSearchGroupToShards(rb, groupField, mergedTopGroups);
+
+  final GroupingSpecification groupingSpecification = rb.getGroupingSpec();
+  final Sort groupSort = 
groupingSpecification.getGroupSortSpec().getSort();
+
+  GroupDocs<BytesRef>[] groups = new GroupDocs[mergedTopGroups.size()];
+
+  // This is the max score found in any document on any group
+  float maxScore = 0;
+  int index = 0;
+
+  for (SearchGroup<BytesRef> group : mergedTopGroups) {
+maxScore = Math.max(maxScore, group.topDocScore);
+final String shard = docIdToShard.get(group.topDocSolrId);
+assert(shard != null);
+final ShardDoc sdoc = new ShardDoc();
+sdoc.score = group.topDocScore;
+sdoc.id = group.topDocSolrId;
+sdoc.shard = shard;
+
+groups[index++] = new GroupDocs<>(group.topDocScore,
 
 Review comment:
   OK, I double-checked the code and it looks safe, merged - thank you


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this 

[jira] [Commented] (SOLR-9505) Extra tests to confirm Atomic Update remove behaviour

2019-09-03 Thread Lucene/Solr QA (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-9505?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16921662#comment-16921662
 ] 

Lucene/Solr QA commented on SOLR-9505:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  6s{color} 
| {color:red} SOLR-9505 does not apply to master. Rebase required? Wrong 
Branch? See 
https://wiki.apache.org/solr/HowToContribute#Creating_the_patch_file for help. 
{color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | SOLR-9505 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12828195/SOLR-9505.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-SOLR-Build/546/console |
| Powered by | Apache Yetus 0.7.0   http://yetus.apache.org |


This message was automatically generated.



> Extra tests to confirm Atomic Update remove behaviour
> -
>
> Key: SOLR-9505
> URL: https://issues.apache.org/jira/browse/SOLR-9505
> Project: Solr
>  Issue Type: Test
>Affects Versions: 7.0
>Reporter: Tim Owen
>Priority: Minor
> Attachments: SOLR-9505.patch
>
>
> The behaviour of the Atomic Update {{remove}} operation in the code doesn't 
> match the description in the Confluence documentation, which has been 
> questioned already. From looking at the source code, and using curl to 
> confirm, the {{remove}} operation only removes the first occurrence of a 
> value from a multi-valued field, it does not remove all occurrences. The 
> {{removeregex}} operation does remove all, however.
> There are unit tests for Atomic Updates, but they didn't assert this 
> behaviour, so I've added some extra assertions to confirm that, and a couple 
> of extra tests including one that checks that {{removeregex}} does a Regex 
> match of the whole value, not just a find-anywhere operation.
> I think it's the documentation that needs clarifying - the code behaves as 
> expected (assuming {{remove}} was intended to work that way?)
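The semantics described above can be modelled with plain Java collections. This is an illustrative model only, not Solr source: `remove` drops only the first occurrence, while `removeregex` drops every value whose entire content matches the regex:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative model of the two atomic-update operations' semantics.
class AtomicRemoveSemantics {
    // "remove": deletes only the FIRST occurrence of the value.
    static List<String> remove(List<String> values, String v) {
        List<String> out = new ArrayList<>(values);
        out.remove(v);  // List.remove(Object) removes the first match only
        return out;
    }

    // "removeregex": deletes ALL values whose entire content matches the
    // regex. String.matches anchors to the whole value, mirroring the
    // whole-value match that the added tests assert.
    static List<String> removeRegex(List<String> values, String regex) {
        List<String> out = new ArrayList<>(values);
        out.removeIf(s -> s.matches(regex));
        return out;
    }
}
```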



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-BadApples-Tests-master - Build # 465 - Failure

2019-09-03 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-BadApples-Tests-master/465/

1 tests failed.
FAILED:  org.apache.lucene.codecs.TestCodecLoadingDeadlock.testDeadlock

Error Message:
Process did not exit after 60 secs?

Stack Trace:
java.lang.AssertionError: Process did not exit after 60 secs?
at 
__randomizedtesting.SeedInfo.seed([AC9CCD2E6495380:7A22DC6E013FE56]:0)
at org.junit.Assert.fail(Assert.java:88)
at 
org.apache.lucene.codecs.TestCodecLoadingDeadlock.testDeadlock(TestCodecLoadingDeadlock.java:82)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$2.evaluate(ThreadLeakControl.java:404)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSuite(RandomizedRunner.java:708)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.access$200(RandomizedRunner.java:138)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$2.run(RandomizedRunner.java:629)




Build Log:
[...truncated 1134 lines...]
   [junit4] Suite: org.apache.lucene.codecs.TestCodecLoadingDeadlock
   [junit4]   1> codec: FastDecompressionCompressingStoredFields, pf: Lucene50, 
dvf: Direct
   [junit4] FAILURE 60.2s J0 | TestCodecLoadingDeadlock.testDeadlock <<<
   [junit4]> Throwable #1: java.lang.AssertionError: Process did not exit 
after 60 secs?
   [junit4]>at 
__randomizedtesting.SeedInfo.seed([AC9CCD2E6495380:7A22DC6E013FE56]:0)
   [junit4]>at 
org.apache.lucene.codecs.TestCodecLoadingDeadlock.testDeadlock(TestCodecLoadingDeadlock.java:82)
   [junit4]>at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   [junit4]>at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
   [junit4]>at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   [junit4]>at 
java.base/java.lang.reflect.Method.invoke(Method.java:566)
   [junit4] Completed [255/524 (1!)] on J0 in 60.21s, 1 test, 1 failure <<< 
FAILURES!

[...truncated 63766 lines...]
-ecj-javadoc-lint-tests:
[mkdir] Created dir: /tmp/ecj518158574
 [ecj-lint] Compiling 48 source files to /tmp/ecj518158574
 [ecj-lint] invalid Class-Path header in manifest of jar file: 
/home/jenkins/.ivy2/cache/org.restlet.jee/org.restlet/jars/org.restlet-2.3.0.jar
 [ecj-lint] invalid Class-Path header in manifest of jar file: 
/home/jenkins/.ivy2/cache/org.restlet.jee/org.restlet.ext.servlet/jars/org.restlet.ext.servlet-2.3.0.jar
 [ecj-lint] --
 [ecj-lint] 1. ERROR in 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-Tests-master/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/MockInitialContextFactory.java
 (at line 23)
 [ecj-lint] import javax.naming.NamingException;
 [ecj-lint]
 [ecj-lint] The type javax.naming.NamingException is not accessible
 [ecj-lint] --
 [ecj-lint] 2. ERROR in 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-Tests-master/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/MockInitialContextFactory.java
 (at line 28)
 [ecj-lint] public class 

[jira] [Comment Edited] (SOLR-13709) Race condition on core reload while core is still loading?

2019-09-03 Thread Hoss Man (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-13709?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16921632#comment-16921632
 ] 

Hoss Man edited comment on SOLR-13709 at 9/3/19 6:27 PM:
-

-Commit 86e8c44be472556c8a905deb338cafa803ee6ee0 in lucene-solr's branch 
refs/heads/branch_8x from Chris M. Hostetter-
 -[ [https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=86e8c44] ]-

-SOLR-13709: Fixed distributed grouping when multiple 'fl' params are specified-

-(cherry picked from commit 83cd54f80157916b364bb5ebde20a66cbd5d3d93)-

EDIT: not actually relevant to this issue, sorry.


was (Author: jira-bot):
Commit 86e8c44be472556c8a905deb338cafa803ee6ee0 in lucene-solr's branch 
refs/heads/branch_8x from Chris M. Hostetter
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=86e8c44 ]

SOLR-13709: Fixed distributed grouping when multiple 'fl' params are specified

(cherry picked from commit 83cd54f80157916b364bb5ebde20a66cbd5d3d93)


> Race condition on core reload while core is still loading?
> --
>
> Key: SOLR-13709
> URL: https://issues.apache.org/jira/browse/SOLR-13709
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>Assignee: Erick Erickson
>Priority: Major
> Attachments: apache_Lucene-Solr-Tests-8.x_449.log.txt
>
>
> A recent jenkins failure from {{TestSolrCLIRunExample}} seems to suggest that 
> there may be a race condition when attempting to re-load a SolrCore while the 
> core is currently in the process of (re)loading that can leave the SolrCore 
> in an unusable state.
> Details to follow...
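The hazard hinted at above can be sketched generically. The following is an illustrative sketch only, not Solr's actual SolrCore code; the class and method names are invented. It shows why a reload racing with an in-flight load can observe half-initialized state, and one conventional guard: serializing both operations on a single reentrant lock.

```java
import java.util.concurrent.locks.ReentrantLock;

// Hypothetical sketch, NOT Solr's real code: serialize load() and reload()
// on one lock so a reload can never observe a half-initialized core.
public class CoreLifecycle {
    private final ReentrantLock lifecycleLock = new ReentrantLock();
    private volatile boolean loaded = false;

    public void load() {
        lifecycleLock.lock();
        try {
            // ... open searcher, register handlers, warm caches ...
            loaded = true;
        } finally {
            lifecycleLock.unlock();
        }
    }

    public void reload() {
        // Blocks until any in-flight load() finishes; without this guard a
        // concurrent reload could swap state mid-initialization.
        lifecycleLock.lock();
        try {
            if (!loaded) {
                load();  // ReentrantLock makes this nested acquisition safe
                return;
            }
            // ... build a fresh core and atomically swap it in ...
        } finally {
            lifecycleLock.unlock();
        }
    }

    public boolean isLoaded() {
        return loaded;
    }
}
```

Whether this matches the actual failure mode must wait for the promised details; the sketch only illustrates the general class of bug.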



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-13709) Race condition on core reload while core is still loading?

2019-09-03 Thread Hoss Man (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-13709?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16921619#comment-16921619
 ] 

Hoss Man edited comment on SOLR-13709 at 9/3/19 6:27 PM:
-

-Commit 83cd54f80157916b364bb5ebde20a66cbd5d3d93 in lucene-solr's branch 
refs/heads/master from Chris M. Hostetter-
 -[ [https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=83cd54f] ]-

-SOLR-13709: Fixed distributed grouping when multiple 'fl' params are specified-

EDIT: not actually relevant to this issue, sorry.


was (Author: jira-bot):
Commit 83cd54f80157916b364bb5ebde20a66cbd5d3d93 in lucene-solr's branch 
refs/heads/master from Chris M. Hostetter
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=83cd54f ]

SOLR-13709: Fixed distributed grouping when multiple 'fl' params are specified





[jira] [Commented] (SOLR-13717) Distributed Grouping breaks multi valued 'fl' param

2019-09-03 Thread Hoss Man (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-13717?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16921644#comment-16921644
 ] 

Hoss Man commented on SOLR-13717:
-

Gah, juggling too many tabs/issues at the same time.

Primary commits related to this issue...
* master: https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=83cd54f
** 83cd54f80157916b364bb5ebde20a66cbd5d3d93
* branch_8x: https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=86e8c44
** 86e8c44be472556c8a905deb338cafa803ee6ee0

> Distributed Grouping breaks multi valued 'fl' param
> ---
>
> Key: SOLR-13717
> URL: https://issues.apache.org/jira/browse/SOLR-13717
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>Assignee: Hoss Man
>Priority: Major
> Fix For: master (9.0), 8.3
>
> Attachments: SOLR-13717.patch, SOLR-13717.patch
>
>
> Co-worker discovered a bug with (distributed) grouping when multiple {{fl}} 
> params are specified.
> {{StoredFieldsShardRequestFactory}} has very old and brittle code that
> assumes there will be 0 or 1 {{fl}} params in the original request. It
> inspects that single param to see if it needs to append (via string concat)
> the uniqueKey field, which it requires in order to collate the returned
> stored fields into their respective (grouped) documents -- and it ignores
> any additional {{fl}} params that may exist in the original request when it
> does so.
> The net result is that only the uniqueKey field and whatever fields _are_
> specified in the first {{fl}} param are fetched from each shard and
> ultimately returned.
> The only workaround is to replace multiple {{fl}} params with a single {{fl}} 
> param containing a comma separated list of the requested fields.
> 
> Bug is trivial to reproduce with {{bin/solr -e cloud -noprompt}} by comparing 
> these requests, which should all be equivalent...
> {noformat}
> $ bin/post -c gettingstarted -out yes example/exampledocs/books.csv
> ...
> $ curl 
> 'http://localhost:8983/solr/gettingstarted/query?omitHeader=true&indent=true&fl=author,name,id&q=*:*&group=true&group.field=genre_s'
> {
>   "grouped":{
> "genre_s":{
>   "matches":10,
>   "groups":[{
>   "groupValue":"fantasy",
>   "doclist":{"numFound":8,"start":0,"maxScore":1.0,"docs":[
>   {
> "id":"0812521390",
> "name":["The Black Company"],
> "author":["Glen Cook"]}]
>   }},
> {
>   "groupValue":"scifi",
>   "doclist":{"numFound":2,"start":0,"docs":[
>   {
> "id":"0553293354",
> "name":["Foundation"],
> "author":["Isaac Asimov"]}]
>   }}]}}}
> $ curl 
> 'http://localhost:8983/solr/gettingstarted/query?omitHeader=true&indent=true&fl=author&fl=name,id&q=*:*&group=true&group.field=genre_s'
> {
>   "grouped":{
> "genre_s":{
>   "matches":10,
>   "groups":[{
>   "groupValue":"fantasy",
>   "doclist":{"numFound":8,"start":0,"maxScore":1.0,"docs":[
>   {
> "id":"0812521390",
> "author":["Glen Cook"]}]
>   }},
> {
>   "groupValue":"scifi",
>   "doclist":{"numFound":2,"start":0,"docs":[
>   {
> "id":"0553293354",
> "author":["Isaac Asimov"]}]
>   }}]}}}
> $ curl 
> 'http://localhost:8983/solr/gettingstarted/query?omitHeader=true&indent=true&fl=id&fl=author&fl=name&q=*:*&group=true&group.field=genre_s'
> {
>   "grouped":{
> "genre_s":{
>   "matches":10,
>   "groups":[{
>   "groupValue":"fantasy",
>   "doclist":{"numFound":8,"start":0,"maxScore":1.0,"docs":[
>   {
> "id":"0553573403"}]
>   }},
> {
>   "groupValue":"scifi",
>   "doclist":{"numFound":2,"start":0,"docs":[
>   {
> "id":"0553293354"}]
>   }}]}}}
> {noformat}
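A minimal sketch of the fix implied above: keep every {{fl}} value and append the uniqueKey as an additional value, rather than string-concatenating it onto the first param. This is illustrative only; the class and method names are invented, and the naive comma-split stands in for Solr's real {{fl}} parsing.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Illustrative sketch, not the actual StoredFieldsShardRequestFactory code.
public class FlParamMerge {
    /** Keep all original fl values; append uniqueKey only if none selects it. */
    static String[] withUniqueKey(String[] originalFls, String uniqueKey) {
        if (originalFls == null || originalFls.length == 0) {
            return new String[] { "*" };  // no fl at all: everything, key included
        }
        boolean hasKey = false;
        for (String fl : originalFls) {
            // Naive check; real code must parse the full fl specification.
            if (fl.contains("*") || Arrays.asList(fl.split(",")).contains(uniqueKey)) {
                hasKey = true;
            }
        }
        List<String> out = new ArrayList<>(Arrays.asList(originalFls));
        if (!hasKey) {
            out.add(uniqueKey);  // extra fl value, not a string concat
        }
        return out.toArray(new String[0]);
    }

    public static void main(String[] args) {
        // Multiple fl params, as in the second curl example above
        System.out.println(Arrays.toString(
                withUniqueKey(new String[] { "author", "name,id" }, "id")));
        // → [author, name,id]
    }
}
```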






[jira] [Commented] (SOLR-13717) Distributed Grouping breaks multi valued 'fl' param

2019-09-03 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-13717?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16921641#comment-16921641
 ] 

ASF subversion and git services commented on SOLR-13717:


Commit 96c9207f905ebfafdbe81a748a978c7e83d683df in lucene-solr's branch 
refs/heads/branch_8x from Chris M. Hostetter
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=96c9207 ]

CHANGES fixup: SOLR-13709 -> SOLR-13717

(cherry picked from commit d1a4d1352538a0d967a12686ca903453d10c48c9)





[jira] [Commented] (SOLR-13709) Race condition on core reload while core is still loading?

2019-09-03 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-13709?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16921642#comment-16921642
 ] 

ASF subversion and git services commented on SOLR-13709:


Commit d1a4d1352538a0d967a12686ca903453d10c48c9 in lucene-solr's branch 
refs/heads/master from Chris M. Hostetter
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=d1a4d13 ]

CHANGES fixup: SOLR-13709 -> SOLR-13717





[jira] [Commented] (SOLR-13709) Race condition on core reload while core is still loading?

2019-09-03 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-13709?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16921640#comment-16921640
 ] 

ASF subversion and git services commented on SOLR-13709:


Commit 96c9207f905ebfafdbe81a748a978c7e83d683df in lucene-solr's branch 
refs/heads/branch_8x from Chris M. Hostetter
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=96c9207 ]

CHANGES fixup: SOLR-13709 -> SOLR-13717

(cherry picked from commit d1a4d1352538a0d967a12686ca903453d10c48c9)





[jira] [Commented] (SOLR-13717) Distributed Grouping breaks multi valued 'fl' param

2019-09-03 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-13717?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16921643#comment-16921643
 ] 

ASF subversion and git services commented on SOLR-13717:


Commit d1a4d1352538a0d967a12686ca903453d10c48c9 in lucene-solr's branch 
refs/heads/master from Chris M. Hostetter
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=d1a4d13 ]

CHANGES fixup: SOLR-13709 -> SOLR-13717





[jira] [Updated] (SOLR-13717) Distributed Grouping breaks multi valued 'fl' param

2019-09-03 Thread Hoss Man (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-13717?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man updated SOLR-13717:

Fix Version/s: 8.3
   master (9.0)
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

Christine: thanks for the review, and for catching & fixing my test laziness.
Much cleaner.




[GitHub] [lucene-solr] yonik commented on issue #850: SOLR-13727: Bug fix for V2Request handling in HttpSolrClient

2019-09-03 Thread GitBox
yonik commented on issue #850: SOLR-13727: Bug fix for V2Request handling in 
HttpSolrClient
URL: https://github.com/apache/lucene-solr/pull/850#issuecomment-527578116
 
 
   Hmmm, I do see "files changed 2", but I also see "Commits 7".  I guess I'm 
getting confused by the merge commits.  I wonder if using the "Squash and 
merge" from github will keep a linear history?  I think I'll be safe and just 
cherry-pick the last two commits.
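The cherry-pick route mentioned above does keep a linear history. Here is a self-contained demonstration in a throwaway repository; the repo, branch, and file names are made up for illustration and have nothing to do with the actual PR.

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git checkout -qb master
git config user.email dev@example.com
git config user.name dev

echo base > f.txt
git add f.txt
git commit -qm "base"

# Feature branch with two fix commits (stand-ins for the PR's last two)
git checkout -qb feature
echo a >> f.txt; git commit -qam "fix part 1"
echo b >> f.txt; git commit -qam "fix part 2"

# Cherry-pick just those two commits back onto master: no merge commit
# is created, so the history stays linear.
git checkout -q master
git cherry-pick feature~1 feature > /dev/null
git log --oneline
```

The final log shows three linear commits (base plus the two picks) and no merge commit.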





[jira] [Commented] (SOLR-13709) Race condition on core reload while core is still loading?

2019-09-03 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-13709?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16921632#comment-16921632
 ] 

ASF subversion and git services commented on SOLR-13709:


Commit 86e8c44be472556c8a905deb338cafa803ee6ee0 in lucene-solr's branch 
refs/heads/branch_8x from Chris M. Hostetter
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=86e8c44 ]

SOLR-13709: Fixed distributed grouping when multiple 'fl' params are specified

(cherry picked from commit 83cd54f80157916b364bb5ebde20a66cbd5d3d93)





[jira] [Commented] (SOLR-13709) Race condition on core reload while core is still loading?

2019-09-03 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-13709?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16921619#comment-16921619
 ] 

ASF subversion and git services commented on SOLR-13709:


Commit 83cd54f80157916b364bb5ebde20a66cbd5d3d93 in lucene-solr's branch 
refs/heads/master from Chris M. Hostetter
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=83cd54f ]

SOLR-13709: Fixed distributed grouping when multiple 'fl' params are specified





[jira] [Resolved] (LUCENE-8403) Support 'filtered' term vectors - don't require all terms to be present

2019-09-03 Thread David Smiley (Jira)


 [ 
https://issues.apache.org/jira/browse/LUCENE-8403?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley resolved LUCENE-8403.
--
Resolution: Won't Fix

> Support 'filtered' term vectors - don't require all terms to be present
> ---
>
> Key: LUCENE-8403
> URL: https://issues.apache.org/jira/browse/LUCENE-8403
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Michael Braun
>Priority: Minor
> Attachments: LUCENE-8403.patch
>
>
> The genesis of this was a conversation and idea from [~dsmiley] several years 
> ago.
> In order to optimize term vector storage, we may not actually need all tokens 
> to be present in the term vectors - and if so, ideally our codec could just 
> opt not to store them.
> I attempted to fork the standard codec and override the TermVectorsFormat and 
> TermVectorsWriter to ignore storing certain Terms within a field. This 
> worked, however, CheckIndex checks that the terms present in the standard 
> postings are also present in the TVs, if TVs enabled. So this then doesn't 
> work as 'valid' according to CheckIndex.
> Can the TermVectorsFormat be made in such a way to support configuration of 
> tokens that should not be stored (benefits: less storage, more optimal 
> retrieval per doc)? Is this valuable to the wider community? Is there a way 
> we can design this to not break CheckIndex's contract while at the same time 
> lessening storage for unneeded tokens?



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Closed] (LUCENE-8403) Support 'filtered' term vectors - don't require all terms to be present

2019-09-03 Thread David Smiley (Jira)


 [ 
https://issues.apache.org/jira/browse/LUCENE-8403?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley closed LUCENE-8403.





[JENKINS] Solr-reference-guide-8.x - Build # 5587 - Failure

2019-09-03 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Solr-reference-guide-8.x/5587/

Log: 
Started by timer
[EnvInject] - Loading node environment variables.
Building remotely on websites (git-websites svn-websites) in workspace 
/home/jenkins/jenkins-slave/workspace/Solr-reference-guide-8.x
No credentials specified
 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url 
 > https://gitbox.apache.org/repos/asf/lucene-solr.git # timeout=10
Cleaning workspace
 > git rev-parse --verify HEAD # timeout=10
Resetting working tree
 > git reset --hard # timeout=10
 > git clean -fdx # timeout=10
Fetching upstream changes from 
https://gitbox.apache.org/repos/asf/lucene-solr.git
 > git --version # timeout=10
 > git fetch --tags --progress 
 > https://gitbox.apache.org/repos/asf/lucene-solr.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/branch_8x^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/branch_8x^{commit} # timeout=10
Checking out Revision 54685c5e7f5f84d28e02e42c583d5eb70588532d 
(refs/remotes/origin/branch_8x)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 54685c5e7f5f84d28e02e42c583d5eb70588532d
Commit message: "LUCENE-8960: Add LatLonDocValuesPointInPolygonQuery (#851)"
 > git rev-list --no-walk 54685c5e7f5f84d28e02e42c583d5eb70588532d # timeout=10
No emails were triggered.
[Solr-reference-guide-8.x] $ /bin/bash -xe /tmp/jenkins11196756677238353327.sh
+ gpg2 --keyserver hkp://pool.sks-keyservers.net --recv-keys 
409B6B1796C275462A1703113804BB82D39DC0E3 
7D2BAF1CF37B13E2069D6956105BD0E739499BDB
/tmp/jenkins11196756677238353327.sh: line 4: gpg2: command not found
+ command curl -sSL https://rvm.io/mpapis.asc
+ curl -sSL https://rvm.io/mpapis.asc
+ gpg --import -
gpg: key 3804BB82D39DC0E3: 47 signatures not checked due to missing keys
gpg: key 3804BB82D39DC0E3: "Michal Papis (RVM signing) " not 
changed
gpg: Total number processed: 1
gpg:  unchanged: 1
+ command curl -sSL https://rvm.io/pkuczynski.asc
+ gpg --import -
+ curl -sSL https://rvm.io/pkuczynski.asc
gpg: key 105BD0E739499BDB: "Piotr Kuczynski " not 
changed
gpg: Total number processed: 1
gpg:  unchanged: 1
+ bash dev-tools/scripts/jenkins.build.ref.guide.sh
+ set -e
+ RVM_PATH=/home/jenkins/.rvm
+ RUBY_VERSION=ruby-2.5.1
+ GEMSET=solr-refguide-gemset
+ curl -sSL https://get.rvm.io
+ bash -s -- --ignore-dotfiles stable
Turning on ignore dotfiles mode.
Downloading https://github.com/rvm/rvm/archive/1.29.9.tar.gz
Downloading 
https://github.com/rvm/rvm/releases/download/1.29.9/1.29.9.tar.gz.asc
gpg: Signature made Wed 10 Jul 2019 08:31:02 AM UTC
gpg:using RSA key 7D2BAF1CF37B13E2069D6956105BD0E739499BDB
gpg: Good signature from "Piotr Kuczynski " [unknown]
gpg: WARNING: This key is not certified with a trusted signature!
gpg:  There is no indication that the signature belongs to the owner.
Primary key fingerprint: 7D2B AF1C F37B 13E2 069D  6956 105B D0E7 3949 9BDB
GPG verified '/home/jenkins/.rvm/archives/rvm-1.29.9.tgz'
Upgrading the RVM installation in /home/jenkins/.rvm/
Upgrade of RVM in /home/jenkins/.rvm/ is complete.

Thanks for installing RVM 
Please consider donating to our open collective to help us maintain RVM.

  Donate: https://opencollective.com/rvm/donate


+ set +x
Running 'source /home/jenkins/.rvm/scripts/rvm'
Running 'rvm cleanup all'
Cleaning up rvm archives
Cleaning up rvm repos
Cleaning up rvm src
Cleaning up rvm log
Cleaning up rvm tmp
Cleaning up rvm gemsets
Cleaning up rvm links
Cleanup done.
Running 'rvm autolibs disable'
Running 'rvm install ruby-2.5.1'
Already installed ruby-2.5.1.
To reinstall use:

rvm reinstall ruby-2.5.1

Running 'rvm gemset create solr-refguide-gemset'
ruby-2.5.1 - #gemset created 
/home/jenkins/.rvm/gems/ruby-2.5.1@solr-refguide-gemset
ruby-2.5.1 - #generating solr-refguide-gemset wrappers...
Running 'rvm ruby-2.5.1@solr-refguide-gemset'
Using /home/jenkins/.rvm/gems/ruby-2.5.1 with gemset solr-refguide-gemset
Running 'gem install --force --version 3.5.0 jekyll'
Successfully installed jekyll-3.5.0
Parsing documentation for jekyll-3.5.0
Done installing documentation for jekyll after 3 seconds
1 gem installed
Running 'gem uninstall --all --ignore-dependencies asciidoctor'
Removing asciidoctor
Removing asciidoctor-safe
Successfully uninstalled asciidoctor-1.5.6.2
Running 'gem install --force --version 1.5.6.2 asciidoctor'
Successfully installed asciidoctor-1.5.6.2
Parsing documentation for asciidoctor-1.5.6.2
Installing ri documentation for asciidoctor-1.5.6.2
Done installing documentation for asciidoctor after 4 seconds
1 gem installed
Running 'gem install --force --version 2.1.0 jekyll-asciidoc'
Successfully installed jekyll-asciidoc-2.1.0
Parsing documentation for jekyll-asciidoc-2.1.0
Done installing documentation for jekyll-asciidoc after 0 seconds
1 gem installed
Running 'gem install --force --version 1.1.2 

[GitHub] [lucene-solr] megancarey commented on issue #850: SOLR-13727: Bug fix for V2Request handling in HttpSolrClient

2019-09-03 Thread GitBox
megancarey commented on issue #850: SOLR-13727: Bug fix for V2Request handling 
in HttpSolrClient
URL: https://github.com/apache/lucene-solr/pull/850#issuecomment-527567515
 
 
   @yonik The changed files contain only my changes for this bug fix, but does 
have other commits attached - would it be easier if I condensed the commit 
history to a single commit?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13709) Race condition on core reload while core is still loading?

2019-09-03 Thread Hoss Man (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-13709?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16921597#comment-16921597
 ] 

Hoss Man commented on SOLR-13709:
-

just to be clear, my primary concern when i created this issue was that it was 
evident from the test failure logs that core reloading (and as erick points 
out: potentially other core level ops) could occur in a race condition with the 
core itself loading.

my comments about {{SolrCores.getCoreDescriptor(String)}} and if/when/why/how  
it should block on attempts to access a core by name if/while that core was 
loading were based *solely* on the existing javadocs for that method.

if those javadocs are and have always been wrong, then trying to "fix" that 
method to match the javadocs isn't necessarily the best solution -- especially 
if doing so causes lots of other problems.  we can always just update the 
javadocs, making a note of when/why/how the value may be null, and audit the 
callers to ensure they are accounting for the possibility of null and handling 
that value in whatever way makes the most sense for the situation (throw NPE, 
throw a diff exception, fail a command, etc...)

i should point out, i have no idea if a "user level" Core RELOAD (or SWAP or 
UNLOAD) op (ie: something triggered externally via /admin/cores, or via 
overseer) also has this problem, or already accounts for the possibility that a 
core may not yet be loaded -- it may simply be that this particular ZkWatcher 
that registered by the core to watch the schema is itself broken, and should be 
checking some more explicit state to block and take no action until the core is 
fully loaded.

As far as testing...

[~erickerickson] - it's not really clear to me what/where/how you're currently 
trying to test this? ... as i mentioned, it's kind of a fluke that 
TestSolrCLIRunExample triggered this failure at all, and even when it did it 
didn't really "fail" in a reliable way that was obviously related to this 
specific bug.  

I would suggest that a more robust way to test this would be with a more 
targeted non-cloud test, using a custom plugin (searcher handler, component, 
whatever...) that spins up a background thread to trigger schema updates in ZK 
(so that the problematic watcher which does a core reload on schema changes 
will then fire) and then the custom component should "stall" for some amount of 
time (ideally {{await}}-ing on something instead of an arbitrary sleep, but i 
haven't thought it through enough to know what exact condition it could await 
on) to force a delay in the completion of the SolrCore loading.  Then your 
test just tries to initialize a SolrCore with a config that uses this custom 
plugin, and asserts that the SolrCore initializes fine *AND* that it 
(eventually) picks up the updated schema (via polling on the schema API?)

make sense?
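The await-instead-of-sleep coordination Hoss describes can be sketched without any Solr classes. Everything below (the `schemaChanged` latch, the `zkWatcher` thread name) is a hypothetical stand-in for the real plugin and watcher, not Solr API — it only shows how a latch makes the race window deterministic:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicBoolean;

// Standalone sketch: a "core load" stalls on a latch until a background
// "schema change" has fired, so the reload-during-load race is exercised
// deterministically instead of hoping an arbitrary sleep lines up.
public class Main {
  public static void main(String[] args) throws Exception {
    CountDownLatch schemaChanged = new CountDownLatch(1);
    AtomicBoolean reloadRanDuringLoad = new AtomicBoolean(false);

    Thread zkWatcher = new Thread(() -> {
      // Stand-in for the watcher that fires on a schema update in ZK.
      reloadRanDuringLoad.set(true);   // "reload" observed while core still loading
      schemaChanged.countDown();
    });
    zkWatcher.start();

    // The stalling plugin: await the condition rather than Thread.sleep(n).
    if (!schemaChanged.await(10, TimeUnit.SECONDS)) {
      throw new AssertionError("schema change never fired");
    }
    // "Core load" completes only after the racing op has run.
    if (!reloadRanDuringLoad.get()) {
      throw new AssertionError("expected the reload to race the core load");
    }
    zkWatcher.join();
    System.out.println("race window exercised deterministically");
  }
}
```

A real test would put the latch inside the custom plugin's inform/init path so the SolrCore cannot finish loading until the watcher has fired.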

> Race condition on core reload while core is still loading?
> --
>
> Key: SOLR-13709
> URL: https://issues.apache.org/jira/browse/SOLR-13709
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>Assignee: Erick Erickson
>Priority: Major
> Attachments: apache_Lucene-Solr-Tests-8.x_449.log.txt
>
>
> A recent jenkins failure from {{TestSolrCLIRunExample}} seems to suggest that 
> there may be a race condition when attempting to re-load a SolrCore while the 
> core is currently in the process of (re)loading that can leave the SolrCore 
> in an unusable state.
> Details to follow...



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] [lucene-solr] yonik commented on issue #850: SOLR-13727: Bug fix for V2Request handling in HttpSolrClient

2019-09-03 Thread GitBox
yonik commented on issue #850: SOLR-13727: Bug fix for V2Request handling in 
HttpSolrClient
URL: https://github.com/apache/lucene-solr/pull/850#issuecomment-527540401
 
 
   Hi Megan, I think this PR accidentally has changes from your previous PR as 
well.  Could you do one with just the described changes?





[jira] [Resolved] (SOLR-12475) Fix failing MaxSizeAutoCommitTest

2019-09-03 Thread Anshum Gupta (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-12475?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anshum Gupta resolved SOLR-12475.
-
Resolution: Fixed

Not adding a fix version because I don't think this was broken in any 
'released' version.

> Fix failing MaxSizeAutoCommitTest
> -
>
> Key: SOLR-12475
> URL: https://issues.apache.org/jira/browse/SOLR-12475
> Project: Solr
>  Issue Type: Bug
>Reporter: Anshum Gupta
>Assignee: Anshum Gupta
>Priority: Major
> Attachments: SOLR-12475.patch, SOLR-12475.patch, SOLR-12475.patch
>
>
> Investigate and fix the failing MaxSizeAutoCommitTest. 






[jira] [Closed] (SOLR-12475) Fix failing MaxSizeAutoCommitTest

2019-09-03 Thread Anshum Gupta (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-12475?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anshum Gupta closed SOLR-12475.
---

> Fix failing MaxSizeAutoCommitTest
> -
>
> Key: SOLR-12475
> URL: https://issues.apache.org/jira/browse/SOLR-12475
> Project: Solr
>  Issue Type: Bug
>Reporter: Anshum Gupta
>Assignee: Anshum Gupta
>Priority: Major
> Attachments: SOLR-12475.patch, SOLR-12475.patch, SOLR-12475.patch
>
>
> Investigate and fix the failing MaxSizeAutoCommitTest. 






[GitHub] [lucene-solr] atris commented on a change in pull request #823: LUCENE-8939: Introduce Shared Count Early Termination In Parallel Search

2019-09-03 Thread GitBox
atris commented on a change in pull request #823: LUCENE-8939: Introduce Shared 
Count Early Termination In Parallel Search
URL: https://github.com/apache/lucene-solr/pull/823#discussion_r320344708
 
 

 ##
 File path: lucene/core/src/java/org/apache/lucene/search/IndexSearcher.java
 ##
 @@ -467,9 +467,19 @@ public TopDocs searchAfter(ScoreDoc after, Query query, 
int numHits) throws IOEx
 
final CollectorManager<TopScoreDocCollector, TopDocs> manager = new CollectorManager<TopScoreDocCollector, TopDocs>() {
 
+  private HitsThresholdChecker hitsThresholdChecker;
   @Override
   public TopScoreDocCollector newCollector() throws IOException {
-return TopScoreDocCollector.create(cappedNumHits, after, 
TOTAL_HITS_THRESHOLD);
+
+if (hitsThresholdChecker == null) {
+  if (executor == null || leafSlices.length <= 1) {
+hitsThresholdChecker = 
HitsThresholdChecker.create(TOTAL_HITS_THRESHOLD);
+  } else {
+hitsThresholdChecker = 
HitsThresholdChecker.createShared(TOTAL_HITS_THRESHOLD);
+  }
+}
 
 Review comment:
   Fixed this by using `final` modifier





[GitHub] [lucene-solr] atris commented on issue #823: LUCENE-8939: Introduce Shared Count Early Termination In Parallel Search

2019-09-03 Thread GitBox
atris commented on issue #823: LUCENE-8939: Introduce Shared Count Early 
Termination In Parallel Search
URL: https://github.com/apache/lucene-solr/pull/823#issuecomment-527515974
 
 
   
   > In the unsorted index case, where we skip by impacts once we collect more 
than the 1000 by default, we are also still correct because we continue 
collecting in that slice, just skipping by impact based on that thread's 
private PQ bottom. We can make further improvements e.g. to share the global PQ 
bottom across all searcher threads, but that should come later.
   
   Yes, that is on the line. Next up will be a PR with shared global PQ :)
   
   > 
   > So net/net I think the change is correct, and should be a big performance 
gain for concurrent searching. Sorry for the confusion ;)
   
   No sweat, thank you for reviewing!
   





[GitHub] [lucene-solr] atris commented on a change in pull request #823: LUCENE-8939: Introduce Shared Count Early Termination In Parallel Search

2019-09-03 Thread GitBox
atris commented on a change in pull request #823: LUCENE-8939: Introduce Shared 
Count Early Termination In Parallel Search
URL: https://github.com/apache/lucene-solr/pull/823#discussion_r320343276
 
 

 ##
 File path: 
lucene/core/src/java/org/apache/lucene/search/TopScoreDocCollector.java
 ##
 @@ -191,32 +195,63 @@ public static TopScoreDocCollector create(int numHits, 
int totalHitsThreshold) {
* objects.
*/
   public static TopScoreDocCollector create(int numHits, ScoreDoc after, int 
totalHitsThreshold) {
+return create(numHits, after, 
HitsThresholdChecker.create(totalHitsThreshold));
+  }
+
+  static TopScoreDocCollector create(int numHits, ScoreDoc after, 
HitsThresholdChecker hitsThresholdChecker) {
 
 if (numHits <= 0) {
   throw new IllegalArgumentException("numHits must be > 0; please use 
TotalHitCountCollector if you just need the total hit count");
 }
 
-if (totalHitsThreshold < 0) {
-  throw new IllegalArgumentException("totalHitsThreshold must be >= 0, got 
" + totalHitsThreshold);
+if (hitsThresholdChecker == null) {
+  throw new IllegalArgumentException("hitsThresholdChecker must be non 
null");
 }
 
 if (after == null) {
-  return new SimpleTopScoreDocCollector(numHits, totalHitsThreshold);
+  return new SimpleTopScoreDocCollector(numHits, hitsThresholdChecker);
 } else {
-  return new PagingTopScoreDocCollector(numHits, after, 
totalHitsThreshold);
+  return new PagingTopScoreDocCollector(numHits, after, 
hitsThresholdChecker);
 }
   }
 
-  final int totalHitsThreshold;
+  /**
+   * Create a CollectorManager which uses a shared hit counter to maintain 
number of hits
+   */
+  public static CollectorManager<TopScoreDocCollector, TopDocs> createSharedManager(int numHits, FieldDoc after,
+                                                                                    int totalHitsThreshold) {
+return new CollectorManager<>() {
+
+  @Override
+  public TopScoreDocCollector newCollector() throws IOException {
+return TopScoreDocCollector.create(numHits, after, 
HitsThresholdChecker.createShared(totalHitsThreshold));
 
 Review comment:
   Fixed





[GitHub] [lucene-solr] atris commented on a change in pull request #823: LUCENE-8939: Introduce Shared Count Early Termination In Parallel Search

2019-09-03 Thread GitBox
atris commented on a change in pull request #823: LUCENE-8939: Introduce Shared 
Count Early Termination In Parallel Search
URL: https://github.com/apache/lucene-solr/pull/823#discussion_r320343227
 
 

 ##
 File path: lucene/core/src/java/org/apache/lucene/search/TopFieldCollector.java
 ##
 @@ -410,10 +423,34 @@ public static TopFieldCollector create(Sort sort, int 
numHits, FieldDoc after,
 throw new IllegalArgumentException("after.fields has " + 
after.fields.length + " values but sort has " + sort.getSort().length);
   }
 
-  return new PagingFieldCollector(sort, queue, after, numHits, 
totalHitsThreshold);
+  return new PagingFieldCollector(sort, queue, after, numHits, 
hitsThresholdChecker);
 }
   }
 
+  /**
+   * Create a CollectorManager which uses a shared hit counter to maintain 
number of hits
+   */
+  public static CollectorManager<TopFieldCollector, TopFieldDocs> createSharedManager(Sort sort, int numHits, FieldDoc after,
+                                                                                      int totalHitsThreshold) {
+return new CollectorManager<>() {
+
+  @Override
+  public TopFieldCollector newCollector() throws IOException {
+return create(sort, numHits, after, 
HitsThresholdChecker.createShared(totalHitsThreshold));
 
 Review comment:
   Yeah, rebasing error, thanks for highlighting -- fixed





[JENKINS] Lucene-Solr-Tests-8.x - Build # 516 - Unstable

2019-09-03 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-8.x/516/

1 tests failed.
FAILED:  
org.apache.solr.cloud.autoscaling.sim.TestSnapshotCloudManager.testSimulatorFromSnapshot

Error Message:
expected:<[/, /aliases.json, /autoscaling, /autoscaling.json, 
/autoscaling/events, /autoscaling/events/.auto_add_replicas, 
/autoscaling/events/.scheduled_maintenance, /autoscaling/nodeAdded, 
/autoscaling/nodeLost, /clusterprops.json, /collections, /collections/.system, 
/collections/.system/counter, /collections/.system/leader_elect, 
/collections/.system/leaders, /collections/.system/state.json, 
/collections/.system/terms, /collections/.system/terms/shard1, /configs, 
/configs/.system, /configs/.system/managed-schema, 
/configs/.system/schema.xml.bak, /configs/.system/solrconfig.xml, 
/configs/_default, /configs/_default/lang, 
/configs/_default/lang/contractions_ca.txt, 
/configs/_default/lang/contractions_fr.txt, 
/configs/_default/lang/contractions_ga.txt, 
/configs/_default/lang/contractions_it.txt, 
/configs/_default/lang/hyphenations_ga.txt, 
/configs/_default/lang/stemdict_nl.txt, /configs/_default/lang/stoptags_ja.txt, 
/configs/_default/lang/stopwords_ar.txt, 
/configs/_default/lang/stopwords_bg.txt, 
/configs/_default/lang/stopwords_ca.txt, 
/configs/_default/lang/stopwords_cz.txt, 
/configs/_default/lang/stopwords_da.txt, 
/configs/_default/lang/stopwords_de.txt, 
/configs/_default/lang/stopwords_el.txt, 
/configs/_default/lang/stopwords_en.txt, 
/configs/_default/lang/stopwords_es.txt, 
/configs/_default/lang/stopwords_et.txt, 
/configs/_default/lang/stopwords_eu.txt, 
/configs/_default/lang/stopwords_fa.txt, 
/configs/_default/lang/stopwords_fi.txt, 
/configs/_default/lang/stopwords_fr.txt, 
/configs/_default/lang/stopwords_ga.txt, 
/configs/_default/lang/stopwords_gl.txt, 
/configs/_default/lang/stopwords_hi.txt, 
/configs/_default/lang/stopwords_hu.txt, 
/configs/_default/lang/stopwords_hy.txt, 
/configs/_default/lang/stopwords_id.txt, 
/configs/_default/lang/stopwords_it.txt, 
/configs/_default/lang/stopwords_ja.txt, 
/configs/_default/lang/stopwords_lv.txt, 
/configs/_default/lang/stopwords_nl.txt, 
/configs/_default/lang/stopwords_no.txt, 
/configs/_default/lang/stopwords_pt.txt, 
/configs/_default/lang/stopwords_ro.txt, 
/configs/_default/lang/stopwords_ru.txt, 
/configs/_default/lang/stopwords_sv.txt, 
/configs/_default/lang/stopwords_th.txt, 
/configs/_default/lang/stopwords_tr.txt, 
/configs/_default/lang/userdict_ja.txt, /configs/_default/managed-schema, 
/configs/_default/params.json, /configs/_default/protwords.txt, 
/configs/_default/solrconfig.xml, /configs/_default/stopwords.txt, 
/configs/_default/synonyms.txt, /configs/conf, /configs/conf/schema.xml, 
/configs/conf/solrconfig.xml, /live_nodes, /overseer, /overseer/async_ids, 
/overseer/collection-map-completed, /overseer/collection-map-failure, 
/overseer/collection-map-running, /overseer/collection-queue-work, 
/overseer/queue, /overseer/queue-work, /overseer_elect, 
/overseer_elect/election, 
/overseer_elect/election/75325089451343879-127.0.0.1:35487_solr-n_00, 
/overseer_elect/election/75325089451343881-127.0.0.1:34635_solr-n_01, 
/overseer_elect/election/75325089451343885-127.0.0.1:33132_solr-n_02, 
/overseer_elect/leader, /security.json, /solr.xml]> but was:<[/, /aliases.json, 
/autoscaling, /autoscaling.json, /autoscaling/events, 
/autoscaling/events/.auto_add_replicas, 
/autoscaling/events/.scheduled_maintenance, 
/autoscaling/events/.scheduled_maintenance/qn-00, 
/autoscaling/nodeAdded, /autoscaling/nodeLost, /clusterprops.json, 
/collections, /collections/.system, /collections/.system/counter, 
/collections/.system/leader_elect, /collections/.system/leaders, 
/collections/.system/state.json, /collections/.system/terms, 
/collections/.system/terms/shard1, /configs, /configs/.system, 
/configs/.system/managed-schema, /configs/.system/schema.xml.bak, 
/configs/.system/solrconfig.xml, /configs/_default, /configs/_default/lang, 
/configs/_default/lang/contractions_ca.txt, 
/configs/_default/lang/contractions_fr.txt, 
/configs/_default/lang/contractions_ga.txt, 
/configs/_default/lang/contractions_it.txt, 
/configs/_default/lang/hyphenations_ga.txt, 
/configs/_default/lang/stemdict_nl.txt, /configs/_default/lang/stoptags_ja.txt, 
/configs/_default/lang/stopwords_ar.txt, 
/configs/_default/lang/stopwords_bg.txt, 
/configs/_default/lang/stopwords_ca.txt, 
/configs/_default/lang/stopwords_cz.txt, 
/configs/_default/lang/stopwords_da.txt, 
/configs/_default/lang/stopwords_de.txt, 
/configs/_default/lang/stopwords_el.txt, 
/configs/_default/lang/stopwords_en.txt, 
/configs/_default/lang/stopwords_es.txt, 
/configs/_default/lang/stopwords_et.txt, 
/configs/_default/lang/stopwords_eu.txt, 
/configs/_default/lang/stopwords_fa.txt, 
/configs/_default/lang/stopwords_fi.txt, 
/configs/_default/lang/stopwords_fr.txt, 
/configs/_default/lang/stopwords_ga.txt, 
/configs/_default/lang/stopwords_gl.txt, 

[jira] [Commented] (SOLR-13736) Reduce code duplication in TestPolicy.testNodeLostMultipleReplica

2019-09-03 Thread Lucene/Solr QA (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-13736?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16921517#comment-16921517
 ] 

Lucene/Solr QA commented on SOLR-13736:
---

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
46s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Release audit (RAT) {color} | 
{color:green}  1m 36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Check forbidden APIs {color} | 
{color:green}  1m 35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Validate source patterns {color} | 
{color:green}  1m 35s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m 
39s{color} | {color:green} solrj in the patch passed. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 11m 27s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | SOLR-13736 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12979222/SOLR-13736.patch |
| Optional Tests |  compile  javac  unit  ratsources  checkforbiddenapis  
validatesourcepatterns  |
| uname | Linux lucene1-us-west 4.15.0-54-generic #58-Ubuntu SMP Mon Jun 24 
10:55:24 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | ant |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-SOLR-Build/sourcedir/dev-tools/test-patch/lucene-solr-yetus-personality.sh
 |
| git revision | master / 5cbb33fa285 |
| ant | version: Apache Ant(TM) version 1.10.5 compiled on March 28 2019 |
| Default Java | LTS |
|  Test Results | 
https://builds.apache.org/job/PreCommit-SOLR-Build/545/testReport/ |
| modules | C: solr/solrj U: solr/solrj |
| Console output | 
https://builds.apache.org/job/PreCommit-SOLR-Build/545/console |
| Powered by | Apache Yetus 0.7.0   http://yetus.apache.org |


This message was automatically generated.



> Reduce code duplication in TestPolicy.testNodeLostMultipleReplica
> -
>
> Key: SOLR-13736
> URL: https://issues.apache.org/jira/browse/SOLR-13736
> Project: Solr
>  Issue Type: Test
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
>Priority: Minor
> Attachments: SOLR-13736.patch
>
>
> Splitting this refactor out from the SOLR-13240 changes in which it is 
> currently included.






[GitHub] [lucene-solr] mikemccand commented on issue #823: LUCENE-8939: Introduce Shared Count Early Termination In Parallel Search

2019-09-03 Thread GitBox
mikemccand commented on issue #823: LUCENE-8939: Introduce Shared Count Early 
Termination In Parallel Search
URL: https://github.com/apache/lucene-solr/pull/823#issuecomment-527503405
 
 
   > > > https://issues.apache.org/jira/browse/LUCENE-8681
   > > 
   > > 
   > > I am not sure I see how that solves the problem?
   > > The core change targeted in this PR is allowing `TOTAL_HITS_THRESHOLD` 
to be accurately counted across all slices. Today, we collect 
`TOTAL_HITS_THRESHOLD` per slice, which is not what the API definition is. Post 
this PR, we will collect `TOTAL_HITS_THRESHOLD` in aggregate. 
`TOTAL_HITS_THRESHOLD`'s definition does not guarantee any order of collection 
of hits in the concurrent case -- we inadvertently define one today by 
collecting the threshold number of hits per slice.
   > > RE: Proration, I believe that is a custom logic that can be added on top 
of this change. In any case, the proration logic also works on a bunch of 
static values + fudge factors, so it can go wrong and we might end up 
collecting fewer hits from a more valuable segment. To help prevent this 
scenario, I believe proration might also do well to build upon this PR and use 
the shared counter. But, I am unable to see why proration and accurate counting 
across slices are mutually exclusive.
   > > In any case, unlike proration, this PR does not propose any algorithmic 
changes to the way collection is done -- it simply reduces extra work done 
across slices that we do not even advertise today, so might be something that 
the user is unaware of.
   > 
   > To summarize my monologue, this PR is aimed at accurate counting of hits 
across all slices -- whereas proration targets a different use case of trying 
to "distributing" hits across slices based on some parameters.
   
   Thanks @atris, I was worried this change would alter the correctness of the 
top hits in the concurrent case based on thread scheduling, but I was wrong:
   
   In the sorted index case, we only early terminate a searcher thread (slice) 
once its thread-private PQ is full and the global (1000 default) hit count has 
been collected, so that way we know the competitive top hits from that segment 
will be merged/reduced in the end.
   
   In the unsorted index case, where we skip by impacts once we collect more 
than the 1000 by default, we are also still correct because we continue 
collecting in that slice, just skipping by impact based on that thread's 
private PQ bottom.  We can make further improvements e.g. to share the global 
PQ bottom across all searcher threads, but that should come later.
   
   So net/net I think the change is correct, and should be a big performance 
gain for concurrent searching.  Sorry for the confusion ;)
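The aggregate-counting behaviour discussed here can be illustrated with a standalone sketch. This is plain java.util.concurrent, not the Lucene `HitsThresholdChecker` API -- it only demonstrates why a counter shared across slices stops collection at the global threshold rather than at threshold-per-slice:

```java
import java.util.concurrent.atomic.AtomicLong;
import java.util.concurrent.atomic.LongAdder;

// Sketch: four "slices" share one global hit counter, so in aggregate
// exactly THRESHOLD hits are counted -- with thread-private counters each
// slice would count THRESHOLD on its own (4x the work).
public class Main {
  static final int THRESHOLD = 1000;
  static final int SLICES = 4;

  public static void main(String[] args) throws Exception {
    AtomicLong globalHits = new AtomicLong();   // shared across all slices
    LongAdder collected = new LongAdder();

    Thread[] slices = new Thread[SLICES];
    for (int i = 0; i < SLICES; i++) {
      slices[i] = new Thread(() -> {
        // Each slice "scores" docs until the *global* threshold is reached.
        while (globalHits.incrementAndGet() <= THRESHOLD) {
          collected.increment();
        }
      });
      slices[i].start();
    }
    for (Thread t : slices) t.join();

    // incrementAndGet hands out unique values, so exactly THRESHOLD of
    // them are <= THRESHOLD regardless of thread scheduling.
    if (collected.sum() != THRESHOLD) {
      throw new AssertionError("expected " + THRESHOLD + ", got " + collected.sum());
    }
    System.out.println("aggregate hits counted: " + collected.sum());
  }
}
```

As in the PR, each slice still keeps its own private priority queue for top hits; only the hit *count* check is shared.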





[GitHub] [lucene-solr] mikemccand commented on a change in pull request #823: LUCENE-8939: Introduce Shared Count Early Termination In Parallel Search

2019-09-03 Thread GitBox
mikemccand commented on a change in pull request #823: LUCENE-8939: Introduce 
Shared Count Early Termination In Parallel Search
URL: https://github.com/apache/lucene-solr/pull/823#discussion_r320305075
 
 

 ##
 File path: lucene/core/src/java/org/apache/lucene/search/TopFieldCollector.java
 ##
 @@ -410,10 +423,34 @@ public static TopFieldCollector create(Sort sort, int 
numHits, FieldDoc after,
 throw new IllegalArgumentException("after.fields has " + 
after.fields.length + " values but sort has " + sort.getSort().length);
   }
 
-  return new PagingFieldCollector(sort, queue, after, numHits, 
totalHitsThreshold);
+  return new PagingFieldCollector(sort, queue, after, numHits, 
hitsThresholdChecker);
 }
   }
 
+  /**
+   * Create a CollectorManager which uses a shared hit counter to maintain 
number of hits
+   */
+  public static CollectorManager<TopFieldCollector, TopFieldDocs> createSharedManager(Sort sort, int numHits, FieldDoc after,
+                                                                                      int totalHitsThreshold) {
+return new CollectorManager<>() {
+
+  @Override
+  public TopFieldCollector newCollector() throws IOException {
+return create(sort, numHits, after, 
HitsThresholdChecker.createShared(totalHitsThreshold));
 
 Review comment:
   Shouldn't we create the shared `HitsThresholdChecker` up front (at the top 
of the `createSharedManager` method), not here?  Else we are making a new 
shared instance for every segment slice (searcher thread), instead of sharing a 
single one for the whole query?
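The point of this review comment -- create the shared checker once, up front, rather than inside `newCollector()` -- can be shown with a hypothetical standalone sketch. `Checker` below is a stand-in, not the Lucene class:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.atomic.AtomicLong;

// Sketch of the bug: building the "shared" object per collector yields a
// distinct instance per slice, so nothing is actually shared; hoisting the
// creation gives every collector the same instance.
public class Main {
  static class Checker {
    final AtomicLong count = new AtomicLong();
  }

  // Buggy shape: a fresh Checker per newCollector() call.
  static List<Checker> perCollector(int slices) {
    List<Checker> out = new ArrayList<>();
    for (int i = 0; i < slices; i++) out.add(new Checker());  // new instance each time
    return out;
  }

  // Fixed shape: create once for the whole query, hand out the same instance.
  static List<Checker> hoisted(int slices) {
    Checker shared = new Checker();
    List<Checker> out = new ArrayList<>();
    for (int i = 0; i < slices; i++) out.add(shared);
    return out;
  }

  public static void main(String[] args) {
    List<Checker> buggy = perCollector(3);
    List<Checker> fixed = hoisted(3);
    if (buggy.get(0) == buggy.get(1)) throw new AssertionError("should be distinct");
    if (fixed.get(0) != fixed.get(1)) throw new AssertionError("should be one shared instance");
    // Hits recorded by one slice are visible to the others only when hoisted.
    fixed.get(0).count.addAndGet(5);
    System.out.println("shared count seen by slice 2: " + fixed.get(2).count.get());
  }
}
```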





[GitHub] [lucene-solr] mikemccand commented on a change in pull request #823: LUCENE-8939: Introduce Shared Count Early Termination In Parallel Search

2019-09-03 Thread GitBox
mikemccand commented on a change in pull request #823: LUCENE-8939: Introduce 
Shared Count Early Termination In Parallel Search
URL: https://github.com/apache/lucene-solr/pull/823#discussion_r320306102
 
 

 ##
 File path: 
lucene/core/src/java/org/apache/lucene/search/TopScoreDocCollector.java
 ##
 @@ -191,32 +195,63 @@ public static TopScoreDocCollector create(int numHits, 
int totalHitsThreshold) {
* objects.
*/
   public static TopScoreDocCollector create(int numHits, ScoreDoc after, int 
totalHitsThreshold) {
+return create(numHits, after, 
HitsThresholdChecker.create(totalHitsThreshold));
+  }
+
+  static TopScoreDocCollector create(int numHits, ScoreDoc after, 
HitsThresholdChecker hitsThresholdChecker) {
 
 if (numHits <= 0) {
   throw new IllegalArgumentException("numHits must be > 0; please use 
TotalHitCountCollector if you just need the total hit count");
 }
 
-if (totalHitsThreshold < 0) {
-  throw new IllegalArgumentException("totalHitsThreshold must be >= 0, got 
" + totalHitsThreshold);
+if (hitsThresholdChecker == null) {
+  throw new IllegalArgumentException("hitsThresholdChecker must be non 
null");
 }
 
 if (after == null) {
-  return new SimpleTopScoreDocCollector(numHits, totalHitsThreshold);
+  return new SimpleTopScoreDocCollector(numHits, hitsThresholdChecker);
 } else {
-  return new PagingTopScoreDocCollector(numHits, after, 
totalHitsThreshold);
+  return new PagingTopScoreDocCollector(numHits, after, 
hitsThresholdChecker);
 }
   }
 
-  final int totalHitsThreshold;
+  /**
+   * Create a CollectorManager which uses a shared hit counter to maintain 
number of hits
+   */
+  public static CollectorManager<TopScoreDocCollector, TopDocs> createSharedManager(int numHits, FieldDoc after,
+                                                                                    int totalHitsThreshold) {
+return new CollectorManager<>() {
+
+  @Override
+  public TopScoreDocCollector newCollector() throws IOException {
+return TopScoreDocCollector.create(numHits, after, 
HitsThresholdChecker.createShared(totalHitsThreshold));
 
 Review comment:
   Same here?





[jira] [Updated] (SOLR-9505) Extra tests to confirm Atomic Update remove behaviour

2019-09-03 Thread Mikhail Khludnev (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-9505?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Khludnev updated SOLR-9505:
---
Status: Patch Available  (was: Open)

> Extra tests to confirm Atomic Update remove behaviour
> -
>
> Key: SOLR-9505
> URL: https://issues.apache.org/jira/browse/SOLR-9505
> Project: Solr
>  Issue Type: Test
>Affects Versions: 7.0
>Reporter: Tim Owen
>Priority: Minor
> Attachments: SOLR-9505.patch
>
>
> The behaviour of the Atomic Update {{remove}} operation in the code doesn't 
> match the description in the Confluence documentation, which has been 
> questioned already. From looking at the source code, and using curl to 
> confirm, the {{remove}} operation only removes the first occurrence of a 
> value from a multi-valued field, it does not remove all occurrences. The 
> {{removeregex}} operation does remove all, however.
> There are unit tests for Atomic Updates, but they didn't assert this 
> behaviour, so I've added some extra assertions to confirm that, and a couple 
> of extra tests including one that checks that {{removeregex}} does a Regex 
> match of the whole value, not just a find-anywhere operation.
> I think it's the documentation that needs clarifying - the code behaves as 
> expected (assuming {{remove}} was intended to work that way?)
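
The behaviour described above mirrors plain java.util.List semantics: remove-first-occurrence versus remove-every-match. A small standalone illustration (JDK collections, not Solr code) of the distinction the extra tests assert:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Pattern;

public class RemoveSemantics {
    public static void main(String[] args) {
        // "remove" semantics: only the first occurrence goes away
        List<String> values = new ArrayList<>(List.of("aaa", "bbb", "aaa"));
        values.remove("aaa");
        System.out.println(values); // [bbb, aaa]

        // "removeregex" semantics: every matching value goes away; note that
        // Pattern.matches anchors the pattern to the whole value, i.e. a
        // whole-value match rather than a find-anywhere match
        List<String> values2 = new ArrayList<>(List.of("aaa", "bbb", "aaa"));
        values2.removeIf(v -> Pattern.matches("a+", v));
        System.out.println(values2); // [bbb]
    }
}
```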



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13709) Race condition on core reload while core is still loading?

2019-09-03 Thread Erick Erickson (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-13709?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16921469#comment-16921469
 ] 

Erick Erickson commented on SOLR-13709:
---

Making some progress on this; at least I'm getting local beasting to fail, 2 
out of 1,000 runs, so it takes 8-10 hours to try something. So far, 
CoreContainer.load() is completing properly and there are still failures. I 
still think blocking until CoreContainer.load() is complete is a good 
idea.

What I _think_ I'm seeing now is the following sequence:
 - CoreContainer.load() completes successfully
 - a core create operation is initiated (this happens relatively frequently in 
tests of course)
 - SolrCores.getCoreDescriptor is called before the core creation is complete, 
the coreDescriptor list gets updated fairly late in the core creation process.

Relatively early in the core creation process though, the core is added to 
pendingCoreOps, a list of cores that are in transition. My latest hypothesis is 
that it's during this interval that SolrCores.getCoreDescriptor is called and 
returns null. I have some debugging logging in place to test, and a loop in 
place to wait until a core moves out of pendingCoreOps before returning from 
SolrCores.getCoreDescriptor.

There's still a small window I think between the time 
CoreContainer.create(core) is called from some client and the entry gets _in_ 
the pendingCoreOps list. First I'll see if checking pendingCoreOps has an entry 
upon occasion for a core whose descriptor is being asked for, then see if I can 
close that window.

The other thing I'm seeing is that failures happen in several places and have 
several different stack traces. I think one that I saw was from metrics, 
another from update, etc. All are fairly consistent with my proposed steps, but 
then my other three hypotheses have been too.

I'll be traveling Thursday and Friday, then the week after is Activate so this 
may languish if I can't get some closure by Sunday.

I still have a problem with the fact that the ".system" collection is regularly 
asked for in SolrCores.getDescriptor, even when it's never going to be there. 
Anything I do in getCoreDescriptor that waits is susceptible to waiting on an 
event that'll never occur. Of course I can time-limit the wait, but the example 
of asking for the ".system" core just means that there may be another case. 
Waiting while any asked-for core is in pendingCoreOps is fine since that 
condition will end as soon as the core is loaded (or fails).
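
The time-limited wait sketched above can be expressed with a plain monitor; this is an illustrative standalone sketch (not Solr's actual SolrCores class, and the names are made up):

```java
import java.util.HashSet;
import java.util.Set;

// Illustrative sketch of a time-limited wait for a core to leave an
// "in transition" set, so a descriptor lookup doesn't return null
// mid-creation but also can't hang forever on a core (like ".system")
// that will never be loaded.
class PendingCoreOps {
    private final Set<String> pending = new HashSet<>();

    synchronized void startOp(String core) {
        pending.add(core);
    }

    synchronized void finishOp(String core) {
        pending.remove(core);
        notifyAll();
    }

    // Returns true if the core left pending before the timeout expired.
    synchronized boolean awaitNotPending(String core, long timeoutMs)
            throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (pending.contains(core)) {
            long remaining = deadline - System.currentTimeMillis();
            if (remaining <= 0) {
                return false; // time limit: never-loaded cores don't block callers
            }
            wait(remaining);
        }
        return true;
    }
}
```

The timeout is exactly the compromise described above: waiting while a core is in pendingCoreOps is safe because that state always ends, but a bounded wait protects callers asking about cores that were never created.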

> Race condition on core reload while core is still loading?
> --
>
> Key: SOLR-13709
> URL: https://issues.apache.org/jira/browse/SOLR-13709
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>Assignee: Erick Erickson
>Priority: Major
> Attachments: apache_Lucene-Solr-Tests-8.x_449.log.txt
>
>
> A recent jenkins failure from {{TestSolrCLIRunExample}} seems to suggest that 
> there may be a race condition when attempting to re-load a SolrCore while the 
> core is currently in the process of (re)loading that can leave the SolrCore 
> in an unusable state.
> Details to follow...



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9505) Extra tests to confirm Atomic Update remove behaviour

2019-09-03 Thread Jira


[ 
https://issues.apache.org/jira/browse/SOLR-9505?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16921463#comment-16921463
 ] 

Thomas Wöckinger commented on SOLR-9505:


The additional tests are working on the master branch; once SOLR-13592 and 
SOLR-13539 are merged to branch_8x they should work there too

> Extra tests to confirm Atomic Update remove behaviour
> -
>
> Key: SOLR-9505
> URL: https://issues.apache.org/jira/browse/SOLR-9505
> Project: Solr
>  Issue Type: Test
>Affects Versions: 7.0
>Reporter: Tim Owen
>Priority: Minor
> Attachments: SOLR-9505.patch
>
>
> The behaviour of the Atomic Update {{remove}} operation in the code doesn't 
> match the description in the Confluence documentation, which has been 
> questioned already. From looking at the source code, and using curl to 
> confirm, the {{remove}} operation only removes the first occurrence of a 
> value from a multi-valued field, it does not remove all occurrences. The 
> {{removeregex}} operation does remove all, however.
> There are unit tests for Atomic Updates, but they didn't assert this 
> behaviour, so I've added some extra assertions to confirm that, and a couple 
> of extra tests including one that checks that {{removeregex}} does a Regex 
> match of the whole value, not just a find-anywhere operation.
> I think it's the documentation that needs clarifying - the code behaves as 
> expected (assuming {{remove}} was intended to work that way?)



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-13734) JWTAuthPlugin to support multiple issuers

2019-09-03 Thread Jira


 [ 
https://issues.apache.org/jira/browse/SOLR-13734?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl updated SOLR-13734:
---
Issue Type: New Feature  (was: Task)

> JWTAuthPlugin to support multiple issuers
> -
>
> Key: SOLR-13734
> URL: https://issues.apache.org/jira/browse/SOLR-13734
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: security
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
>Priority: Major
>  Labels: JWT, authentication
>
> In some large enterprise environments, there is more than one [Identity 
> Provider|https://en.wikipedia.org/wiki/Identity_provider] to issue tokens for 
> users. The equivalent example from the public internet is logging in to a 
> website and choose between multiple pre-defined IdPs (such as Google, GitHub, 
> Facebook etc) in the Oauth2/OIDC flow.
> In the enterprise the IdPs could be public ones but most likely they will be 
> private IdPs in various networks inside the enterprise. Users will interact 
> with a search application, e.g. one providing enterprise wide search, and 
> will authenticate with one out of several IdPs depending on their local 
> affiliation. The search app will then request an access token (JWT) for the 
> user and issue requests to Solr using that token.
> The JWT plugin currently supports exactly one IdP. This JIRA will extend 
> support for multiple IdPs for access token validation only. To limit the 
> scope of this Jira, Admin UI login must still happen to the "primary" IdP. 
> Supporting multiple IdPs for Admin UI login can be done in followup issues.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13713) JWTAuthPlugin to support multiple JWKS endpoints

2019-09-03 Thread Jira


[ 
https://issues.apache.org/jira/browse/SOLR-13713?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16921448#comment-16921448
 ] 

Jan Høydahl commented on SOLR-13713:


See pull request [#852|https://github.com/apache/lucene-solr/pull/852] for 
proposed implementation. I have refactored the configuration of Issuer and how 
signature verification is called.

Will try to target the 8.3 release.
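
As a rough illustration of the configuration change (the URLs are placeholders; `jwkUrl` is the plugin's existing option, here accepting an array), security.json could then look something like:

```json
{
  "authentication": {
    "class": "solr.JWTAuthPlugin",
    "jwkUrl": [
      "https://idp1.example.com/.well-known/jwks.json",
      "https://idp2.example.com/.well-known/jwks.json"
    ]
  }
}
```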

> JWTAuthPlugin to support multiple JWKS endpoints
> 
>
> Key: SOLR-13713
> URL: https://issues.apache.org/jira/browse/SOLR-13713
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: security
>Affects Versions: 8.2
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
>Priority: Major
>  Labels: JWT
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Some [Identity Providers|https://en.wikipedia.org/wiki/Identity_provider] do 
> not expose all JWK keys used to sign access tokens through the main [JWKS 
> |https://auth0.com/docs/jwks] endpoint exposed through OIDC Discovery. For 
> instance Ping Federate can have multiple Token Providers, each exposing its 
> signing keys through separate JWKS endpoints. 
> To support these, the JWT plugin should optionally accept an array of URLs for 
> the {{jwkUrl}} configuration option. If an array is provided, then we'll 
> fetch all the JWKS and validate the JWT against all before we fail the 
> request.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13240) UTILIZENODE action results in an exception

2019-09-03 Thread Christine Poerschke (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-13240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16921447#comment-16921447
 ] 

Christine Poerschke commented on SOLR-13240:


{quote}...  two small refactors to surface the sequential nature of part of the 
test ...
{quote}
Just opened SOLR-13736 to split that out into a separate commit, hopefully 
making it easier to see here what the changes to the 'expected values' in 
the test are.

> UTILIZENODE action results in an exception
> --
>
> Key: SOLR-13240
> URL: https://issues.apache.org/jira/browse/SOLR-13240
> Project: Solr
>  Issue Type: Bug
>  Components: AutoScaling
>Affects Versions: 7.6
>Reporter: Hendrik Haddorp
>Priority: Major
> Attachments: SOLR-13240.patch, SOLR-13240.patch, SOLR-13240.patch, 
> SOLR-13240.patch, SOLR-13240.patch, SOLR-13240.patch, SOLR-13240.patch, 
> solr-solrj-7.5.0.jar
>
>
> When I invoke the UTILIZENODE action the REST call fails like this after it 
> moved a few replicas:
> {
>   "responseHeader":{
> "status":500,
> "QTime":40220},
>   "Operation utilizenode caused 
> exception:":"java.lang.IllegalArgumentException:java.lang.IllegalArgumentException:
>  Comparison method violates its general contract!",
>   "exception":{
> "msg":"Comparison method violates its general contract!",
> "rspCode":-1},
>   "error":{
> "metadata":[
>   "error-class","org.apache.solr.common.SolrException",
>   "root-error-class","org.apache.solr.common.SolrException"],
> "msg":"Comparison method violates its general contract!",
> "trace":"org.apache.solr.common.SolrException: Comparison method violates 
> its general contract!\n\tat 
> org.apache.solr.client.solrj.SolrResponse.getException(SolrResponse.java:53)\n\tat
>  
> org.apache.solr.handler.admin.CollectionsHandler.invokeAction(CollectionsHandler.java:274)\n\tat
>  
> org.apache.solr.handler.admin.CollectionsHandler.handleRequestBody(CollectionsHandler.java:246)\n\tat
>  
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:199)\n\tat
>  
> org.apache.solr.servlet.HttpSolrCall.handleAdmin(HttpSolrCall.java:734)\n\tat 
> org.apache.solr.servlet.HttpSolrCall.handleAdminRequest(HttpSolrCall.java:715)\n\tat
>  org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:496)\n\tat 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:377)\n\tat
>  
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:323)\n\tat
>  
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1634)\n\tat
>  
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:533)\n\tat
>  
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:146)\n\tat
>  
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)\n\tat
>  
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)\n\tat
>  
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:257)\n\tat
>  
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1595)\n\tat
>  
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:255)\n\tat
>  
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1317)\n\tat
>  
> org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:203)\n\tat
>  
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:473)\n\tat
>  
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1564)\n\tat
>  
> org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:201)\n\tat
>  
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1219)\n\tat
>  
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:144)\n\tat
>  
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:219)\n\tat
>  
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:126)\n\tat
>  
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)\n\tat
>  
> org.eclipse.jetty.rewrite.handler.RewriteHandler.handle(RewriteHandler.java:335)\n\tat
>  
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)\n\tat
>  org.eclipse.jetty.server.Server.handle(Server.java:531)\n\tat 
> org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:352)\n\tat 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:260)\n\tat
>  
> org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:281)\n\tat
>  org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:102)\n\tat 
> 
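
The "Comparison method violates its general contract!" in the trace above is what java.util.TimSort throws when it detects an inconsistent Comparator during sorting (here, inside the autoscaling suggestion code). A minimal self-contained example of such a contract violation (unrelated to Solr's actual comparator):

```java
import java.util.Comparator;

// A comparator that never returns 0 breaks the contract
// sgn(compare(x, y)) == -sgn(compare(y, x)): for equal inputs, both
// orderings return 1. TimSort may detect such inconsistencies while
// sorting a larger array and throw
// "Comparison method violates its general contract!".
public class BrokenComparator {
    static final Comparator<Integer> BROKEN = (a, b) -> (a - b >= 0) ? 1 : -1;

    public static void main(String[] args) {
        System.out.println(BROKEN.compare(5, 5)); // 1
        // Symmetry requires compare(5, 5) == -compare(5, 5), i.e. 0
        System.out.println(
            BROKEN.compare(5, 5) == -BROKEN.compare(5, 5) ? "ok" : "contract violated");
    }
}
```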

[jira] [Updated] (SOLR-13736) Reduce code duplication in TestPolicy.testNodeLostMultipleReplica

2019-09-03 Thread Christine Poerschke (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-13736?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Christine Poerschke updated SOLR-13736:
---
Attachment: SOLR-13736.patch

> Reduce code duplication in TestPolicy.testNodeLostMultipleReplica
> -
>
> Key: SOLR-13736
> URL: https://issues.apache.org/jira/browse/SOLR-13736
> Project: Solr
>  Issue Type: Test
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
>Priority: Minor
> Attachments: SOLR-13736.patch
>
>
> Splitting this refactor out from the SOLR-13240 changes in which it is 
> currently included.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-13736) Reduce code duplication in TestPolicy.testNodeLostMultipleReplica

2019-09-03 Thread Christine Poerschke (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-13736?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Christine Poerschke updated SOLR-13736:
---
Status: Patch Available  (was: Open)

> Reduce code duplication in TestPolicy.testNodeLostMultipleReplica
> -
>
> Key: SOLR-13736
> URL: https://issues.apache.org/jira/browse/SOLR-13736
> Project: Solr
>  Issue Type: Test
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
>Priority: Minor
> Attachments: SOLR-13736.patch
>
>
> Splitting this refactor out from the SOLR-13240 changes in which it is 
> currently included.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-13736) Reduce code duplication in TestPolicy.testNodeLostMultipleReplica

2019-09-03 Thread Christine Poerschke (Jira)
Christine Poerschke created SOLR-13736:
--

 Summary: Reduce code duplication in 
TestPolicy.testNodeLostMultipleReplica
 Key: SOLR-13736
 URL: https://issues.apache.org/jira/browse/SOLR-13736
 Project: Solr
  Issue Type: Test
Reporter: Christine Poerschke
Assignee: Christine Poerschke


Splitting this refactor out from the SOLR-13240 changes in which it is 
currently included.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] [lucene-solr] janhoy opened a new pull request #852: SOLR-13713 JWTAuthPlugin to support multiple jwks endpoints

2019-09-03 Thread GitBox
janhoy opened a new pull request #852: SOLR-13713 JWTAuthPlugin to support 
multiple jwks endpoints
URL: https://github.com/apache/lucene-solr/pull/852
 
 
   # Description
   
   See https://issues.apache.org/jira/browse/SOLR-13713
   
   # Solution
   
   Accept an array of urls in `jwkUrl` config parameter. Validate incoming JWT 
against all keys from all URLs.
   
   # Tests
   
   Added tests for checking config parsing and for validating signatures from 
multiple lists.
   
   # Checklist
   
   Please review the following and check all that apply:
   
   - [x] I have reviewed the guidelines for [How to 
Contribute](https://wiki.apache.org/solr/HowToContribute) and my code conforms 
to the standards described there to the best of my ability.
   - [x] I have created a Jira issue and added the issue ID to my pull request 
title.
   - [x] I am authorized to contribute this code to the ASF and have removed 
any code I do not have a license to distribute.
   - [x] I have developed this patch against the `master` branch.
   - [x] I have run `ant precommit` and the appropriate test suite.
   - [x] I have added tests for my changes.
   - [x] I have added documentation for the [Ref 
Guide](https://github.com/apache/lucene-solr/tree/master/solr/solr-ref-guide) 
(for Solr changes only).


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] [lucene-solr] atris commented on issue #823: LUCENE-8939: Introduce Shared Count Early Termination In Parallel Search

2019-09-03 Thread GitBox
atris commented on issue #823: LUCENE-8939: Introduce Shared Count Early 
Termination In Parallel Search
URL: https://github.com/apache/lucene-solr/pull/823#issuecomment-527464137
 
 
   @jpountz I updated the PR per your comments -- please take a look and let me 
know if it seems fine.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] [lucene-solr] thomaswoeckinger commented on issue #665: Fixes SOLR-13539

2019-09-03 Thread GitBox
thomaswoeckinger commented on issue #665: Fixes SOLR-13539
URL: https://github.com/apache/lucene-solr/pull/665#issuecomment-527455888
 
 
   > > As i already mentioned on #755, any plans on merging this back to 8.x?
   > 
   > Yep. Generally committers are encouraged to let things "cook" on master 
awhile before merging things back to release branches like `branch_8x`. This 
gives us more confidence that there's no weird flaky failures we've introduced. 
It's also more convenient to wait to merge commits back to other branches once 
all related commits have been put on master.
   
   Seems to be a good idea.
   > 
   > I'm waiting mostly on the latter, since I've run the tests a good bit. So 
once this gets merged (and I merge the patch that Tim Owen put on SOLR-13539), 
then I'll move everything back to `branch_8x` all at once.
   
   Great
   
   
   

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] [lucene-solr] thomaswoeckinger commented on a change in pull request #665: Fixes SOLR-13539

2019-09-03 Thread GitBox
thomaswoeckinger commented on a change in pull request #665: Fixes SOLR-13539
URL: https://github.com/apache/lucene-solr/pull/665#discussion_r320267334
 
 

 ##
 File path: solr/core/src/java/org/apache/solr/handler/loader/XMLLoader.java
 ##
 @@ -429,7 +434,18 @@ public SolrInputDocument readDoc(XMLStreamReader parser) 
throws XMLStreamExcepti
 break;
   } else if ("field".equals(parser.getLocalName())) {
 // should I warn in some text has been found too
-Object v = isNull ? null : text.toString();
+Object v;
 
 Review comment:
   > So, just to make sure I understand: what is "binary XML" here? Is this XML 
where some node in the XML doc has binary content? Is this normal XML where the 
whole XML doc/string has been encoded using some binary format for compression 
or quicker transmission?
   > 
   There are several ways to transport binary data over XML; I used the one 
that seems clearest and most readable from my point of view, which is also the 
approach proposed on xml.com: 
https://www.xml.com/pub/a/98/07/binary/binary.html
   
   > > this was the reason binary XML support was not working at least since 
6.6.2
   > 
   > Interesting. Is this something that Solr claimed to support or had support 
for at some point? Or this is a new ability that Solr has never had that you're 
adding here?
   
   In versions before 6.6.2, EmbeddedSolrServer used XMLCodec as its default 
codec. When people started using it more in their test cases there were some 
issues regarding enums, etc., so the codec was switched to JavaBinCodec, which 
was the default when using SolrJ anyway. So this feature was no longer needed; 
as far as I know, all the other codecs support transport of binary data. Now 
anyone who wants to use XML when talking to Solr can send binaries; it is also 
required for the new tests, which would fail otherwise.
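
The usual way to carry binary data in an XML text node, as in the xml.com article linked above, is Base64 encoding; a minimal standalone sketch of the round trip (plain JDK, not Solr's XMLLoader):

```java
import java.util.Base64;

public class BinaryInXml {
    public static void main(String[] args) {
        byte[] payload = {0x00, 0x01, (byte) 0xFF, 0x7F};

        // Writer side: encode the bytes so they survive as XML character data
        String encoded = Base64.getEncoder().encodeToString(payload);
        String xml = "<field name=\"blob\">" + encoded + "</field>";
        System.out.println(xml);

        // Reader side: extract the text node and decode it back to bytes
        String text = xml.substring(xml.indexOf('>') + 1, xml.lastIndexOf('<'));
        byte[] decoded = Base64.getDecoder().decode(text);
        System.out.println(decoded.length); // 4
    }
}
```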
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13735) DIH on SolrCloud more than 5 mins causes TimeoutException: Idle timeout expired: 300000/300000 ms

2019-09-03 Thread Mikhail Khludnev (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-13735?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16921416#comment-16921416
 ] 

Mikhail Khludnev commented on SOLR-13735:
-

{{2019-09-01 10:11:27.436 ERROR (qtp1650813924-22) [c:c_member_lots_a s:shard1}}
{{r:core_node3 x:c_collection_shard1_replica_n1] o.a.s.h.RequestHandlerBase}}
{{java.io.IOException: java.util.concurrent.TimeoutException: Idle timeout}}
{{expired: 300000/300000 ms}}
{{        at}}
{{org.eclipse.jetty.server.HttpInput$ErrorState.noContent(HttpInput.java:1080)}}
{{        at org.eclipse.jetty.server.HttpInput.read(HttpInput.java:313)}}
{{        at}}
{{org.apache.solr.servlet.ServletInputStreamWrapper.read(ServletInputStreamWrapper.java:74)}}
{{        at}}
{{org.apache.commons.io.input.ProxyInputStream.read(ProxyInputStream.java:100)}}
{{        at}}
{{org.apache.solr.common.util.FastInputStream.readWrappedStream(FastInputStream.java:79)}}
{{        at}}
{{org.apache.solr.common.util.FastInputStream.refill(FastInputStream.java:88)}}
{{        at}}
{{org.apache.solr.common.util.FastInputStream.peek(FastInputStream.java:60)}}
{{        at}}
{{org.apache.solr.handler.loader.JavabinLoader.parseAndLoadDocs(JavabinLoader.java:107)}}
{{        at}}
{{org.apache.solr.handler.loader.JavabinLoader.load(JavabinLoader.java:55)}}
{{        at}}
{{org.apache.solr.handler.UpdateRequestHandler$1.load(UpdateRequestHandler.java:97)}}

> DIH on SolrCloud more than 5 mins causes TimeoutException: Idle timeout 
> expired: 300000/300000 ms
> -
>
> Key: SOLR-13735
> URL: https://issues.apache.org/jira/browse/SOLR-13735
> Project: Solr
>  Issue Type: Sub-task
>  Components: contrib - DataImportHandler
>Reporter: Mikhail Khludnev
>Priority: Minor
>
> see mail thread linked.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13735) DIH on SolrCloud more than 5 mins causes TimeoutException: Idle timeout expired: 300000/300000 ms

2019-09-03 Thread Mikhail Khludnev (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-13735?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16921417#comment-16921417
 ] 

Mikhail Khludnev commented on SOLR-13735:
-

SOLR-9908 has a test stub to start with. 

> DIH on SolrCloud more than 5 mins causes TimeoutException: Idle timeout 
> expired: 300000/300000 ms
> -
>
> Key: SOLR-13735
> URL: https://issues.apache.org/jira/browse/SOLR-13735
> Project: Solr
>  Issue Type: Sub-task
>  Components: contrib - DataImportHandler
>Reporter: Mikhail Khludnev
>Priority: Minor
>
> see mail thread linked.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-13735) DIH on SolrCloud more than 5 mins causes TimeoutException: Idle timeout expired: 300000/300000 ms

2019-09-03 Thread Mikhail Khludnev (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-13735?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Khludnev updated SOLR-13735:

Description: see mail thread linked.

> DIH on SolrCloud more than 5 mins causes TimeoutException: Idle timeout 
> expired: 300000/300000 ms
> -
>
> Key: SOLR-13735
> URL: https://issues.apache.org/jira/browse/SOLR-13735
> Project: Solr
>  Issue Type: Sub-task
>  Components: contrib - DataImportHandler
>Reporter: Mikhail Khludnev
>Priority: Minor
>
> see mail thread linked.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-13735) DIH on SolrCloud more than 5 mins causes TimeoutException: Idle timeout expired: 300000/300000 ms

2019-09-03 Thread Mikhail Khludnev (Jira)
Mikhail Khludnev created SOLR-13735:
---

 Summary: DIH on SolrCloud more than 5 mins causes 
TimeoutException: Idle timeout expired: 300000/300000 ms
 Key: SOLR-13735
 URL: https://issues.apache.org/jira/browse/SOLR-13735
 Project: Solr
  Issue Type: Sub-task
  Components: contrib - DataImportHandler
Reporter: Mikhail Khludnev






--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] [lucene-solr] gerlowskija commented on a change in pull request #665: Fixes SOLR-13539

2019-09-03 Thread GitBox
gerlowskija commented on a change in pull request #665: Fixes SOLR-13539
URL: https://github.com/apache/lucene-solr/pull/665#discussion_r320246350
 
 

 ##
 File path: solr/core/src/java/org/apache/solr/handler/loader/XMLLoader.java
 ##
 @@ -429,7 +434,18 @@ public SolrInputDocument readDoc(XMLStreamReader parser) 
throws XMLStreamExcepti
 break;
   } else if ("field".equals(parser.getLocalName())) {
 // should I warn in some text has been found too
-Object v = isNull ? null : text.toString();
+Object v;
 
 Review comment:
   So, just to make sure I understand: what is "binary XML" here?  Is this XML 
where some node in the XML doc has binary content?  Is this normal XML where 
the whole XML doc/string has been encoded using some binary format for 
compression or quicker transmission?
   
   > this was the reason binary XML support was not working at least since 6.6.2
   
   Interesting.  Is this something that Solr claimed to support or had support 
for at some point?  Or this is a new ability that Solr has never had that 
you're adding here?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] [lucene-solr] gerlowskija commented on issue #665: Fixes SOLR-13539

2019-09-03 Thread GitBox
gerlowskija commented on issue #665: Fixes SOLR-13539
URL: https://github.com/apache/lucene-solr/pull/665#issuecomment-527440986
 
 
   > As i already mentioned on #755, any plans on merging this back to 8.x?
   
   Yep.  Generally committers are encouraged to let things "cook" on master 
awhile before merging things back to release branches like `branch_8x`.  This 
gives us more confidence that there's no weird flaky failures we've introduced. 
 It's also more convenient to wait to merge commits back to other branches once 
all related commits have been put on master.
   
   I'm waiting mostly on the latter, since I've run the tests a good bit.  So 
once this gets merged (and I merge the patch that Tim Owen put on SOLR-13539), 
then I'll move everything back to `branch_8x` all at once.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8961) CheckIndex: pre-exorcise document id salvage

2019-09-03 Thread Christine Poerschke (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-8961?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16921391#comment-16921391
 ] 

Christine Poerschke commented on LUCENE-8961:
-

Thanks [~jpountz] for your input.

The latest attached patch facilitates potential salvaging of terms by making 
the {{CheckIndex}} class extensible so that developers' own deriving classes 
could:
 * customise the checkIntegrity call
 * filter the fields being checked
 * intercept any (field,term) pairs e.g. for logging purposes

It seems to me to be a rather awkward change though and if out-of-the-box 
{{CheckIndex}} would not support id salvaging then a stand-alone tool just for 
that purpose might be a cleaner solution? Either way, I won't have bandwidth to 
pursue this further in the near future i.e. just sharing things 'as is' in case 
it might help others in the meantime.
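
The three extension points listed above amount to a template-method pattern. A minimal, self-contained sketch of that shape (all class and method names here are illustrative assumptions, not the actual {{CheckIndex}} API):

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of the hooks described above: customise the integrity
// check, filter the fields being checked, intercept (field, term) pairs.
class CheckIndexSketch {
    // Hook 1: deriving classes may customise the checkIntegrity call.
    protected void checkIntegrity(String segment) { /* default: no-op */ }

    // Hook 2: deriving classes may filter which fields are checked.
    protected boolean acceptField(String field) { return true; }

    // Hook 3: deriving classes may intercept (field, term) pairs, e.g. for logging.
    protected void onTerm(String field, String term) { }

    public void check(List<String> fields, List<String> terms) {
        checkIntegrity("_0");
        for (String f : fields) {
            if (!acceptField(f)) continue;
            for (String t : terms) {
                onTerm(f, t);
            }
        }
    }
}

// A deriving class that salvages document ids by intercepting
// only the terms of the "id" field.
class IdSalvagingCheck extends CheckIndexSketch {
    final List<String> salvagedIds = new ArrayList<>();

    @Override protected boolean acceptField(String field) { return "id".equals(field); }

    @Override protected void onTerm(String field, String term) { salvagedIds.add(term); }
}
```

This also illustrates why the change feels awkward: the salvage logic lives entirely in the subclass, so a stand-alone tool may indeed be the cleaner home for it.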

> CheckIndex: pre-exorcise document id salvage
> 
>
> Key: LUCENE-8961
> URL: https://issues.apache.org/jira/browse/LUCENE-8961
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Christine Poerschke
>Priority: Minor
> Attachments: LUCENE-8961.patch, LUCENE-8961.patch
>
>
> The 
> [CheckIndex|https://github.com/apache/lucene-solr/blob/releases/lucene-solr/8.2.0/lucene/core/src/java/org/apache/lucene/index/CheckIndex.java]
>  tool supports the exorcising of corrupt segments from an index.
> This ticket proposes to add an extra option which could first be used to 
> potentially salvage the document ids of the segment(s) about to be exorcised. 
> Re-ingestion for those documents could then be arranged so as to repair the 
> data damage caused by the exorcising.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-8961) CheckIndex: pre-exorcise document id salvage

2019-09-03 Thread Christine Poerschke (Jira)


 [ 
https://issues.apache.org/jira/browse/LUCENE-8961?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Christine Poerschke updated LUCENE-8961:

Attachment: LUCENE-8961.patch

> CheckIndex: pre-exorcise document id salvage
> 
>
> Key: LUCENE-8961
> URL: https://issues.apache.org/jira/browse/LUCENE-8961
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Christine Poerschke
>Priority: Minor
> Attachments: LUCENE-8961.patch, LUCENE-8961.patch
>
>
> The 
> [CheckIndex|https://github.com/apache/lucene-solr/blob/releases/lucene-solr/8.2.0/lucene/core/src/java/org/apache/lucene/index/CheckIndex.java]
>  tool supports the exorcising of corrupt segments from an index.
> This ticket proposes to add an extra option which could first be used to 
> potentially salvage the document ids of the segment(s) about to be exorcised. 
> Re-ingestion for those documents could then be arranged so as to repair the 
> data damage caused by the exorcising.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-NightlyTests-master - Build # 1949 - Still Failing

2019-09-03 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-master/1949/

9 tests failed.
FAILED:  org.apache.lucene.index.TestBinaryDocValuesUpdates.testTonsOfUpdates

Error Message:
this IndexWriter is closed

Stack Trace:
org.apache.lucene.store.AlreadyClosedException: this IndexWriter is closed
at org.apache.lucene.index.IndexWriter.ensureOpen(IndexWriter.java:681)
at 
org.apache.lucene.index.IndexFileDeleter.ensureOpen(IndexFileDeleter.java:346)
at 
org.apache.lucene.index.IndexFileDeleter.deleteFiles(IndexFileDeleter.java:669)
at 
org.apache.lucene.index.IndexFileDeleter.decRef(IndexFileDeleter.java:589)
at 
org.apache.lucene.index.FrozenBufferedUpdates.finishApply(FrozenBufferedUpdates.java:382)
at 
org.apache.lucene.index.FrozenBufferedUpdates.lambda$forceApply$0(FrozenBufferedUpdates.java:245)
at 
org.apache.lucene.index.FrozenBufferedUpdates.forceApply(FrozenBufferedUpdates.java:250)
at 
org.apache.lucene.index.FrozenBufferedUpdates.tryApply(FrozenBufferedUpdates.java:158)
at 
org.apache.lucene.index.IndexWriter.lambda$publishFrozenUpdates$3(IndexWriter.java:2575)
at 
org.apache.lucene.index.IndexWriter.processEvents(IndexWriter.java:5099)
at 
org.apache.lucene.index.IndexWriter.updateDocValues(IndexWriter.java:1770)
at 
org.apache.lucene.index.TestBinaryDocValuesUpdates.testTonsOfUpdates(TestBinaryDocValuesUpdates.java:1324)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[GitHub] [lucene-solr] diegoceccarelli commented on a change in pull request #300: SOLR-11831: Skip second grouping step if group.limit is 1 (aka Las Vegas Patch)

2019-09-03 Thread GitBox
diegoceccarelli commented on a change in pull request #300: SOLR-11831: Skip 
second grouping step if group.limit is 1 (aka Las Vegas Patch)
URL: https://github.com/apache/lucene-solr/pull/300#discussion_r320201290
 
 

 ##
 File path: 
solr/core/src/java/org/apache/solr/search/grouping/distributed/responseprocessor/SkipSecondStepSearchGroupShardResponseProcessor.java
 ##
 @@ -0,0 +1,116 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.solr.search.grouping.distributed.responseprocessor;
+
+import java.util.Collection;
+import java.util.HashMap;
+import java.util.Map;
+
+import org.apache.lucene.search.Sort;
+import org.apache.lucene.search.TotalHits;
+import org.apache.lucene.search.grouping.GroupDocs;
+import org.apache.lucene.search.grouping.SearchGroup;
+import org.apache.lucene.search.grouping.TopGroups;
+import org.apache.lucene.util.BytesRef;
+import org.apache.solr.handler.component.ResponseBuilder;
+import org.apache.solr.handler.component.ShardDoc;
+import org.apache.solr.handler.component.ShardRequest;
+import org.apache.solr.handler.component.ShardResponse;
+import org.apache.solr.search.SolrIndexSearcher;
+import org.apache.solr.search.grouping.GroupingSpecification;
+import 
org.apache.solr.search.grouping.distributed.shardresultserializer.SearchGroupsResultTransformer;
+
+public class SkipSecondStepSearchGroupShardResponseProcessor extends 
SearchGroupShardResponseProcessor {
+
+  @Override
+  protected SearchGroupsResultTransformer 
newSearchGroupsResultTransformer(SolrIndexSearcher solrIndexSearcher) {
+return new 
SearchGroupsResultTransformer.SkipSecondStepSearchResultResultTransformer(solrIndexSearcher);
+  }
+
+  @Override
+  protected SearchGroupsContainer newSearchGroupsContainer(ResponseBuilder rb) 
{
+return new 
SkipSecondStepSearchGroupsContainer(rb.getGroupingSpec().getFields());
+  }
+
+  @Override
+  public void process(ResponseBuilder rb, ShardRequest shardRequest) {
+super.process(rb, shardRequest);
+TopGroupsShardResponseProcessor.fillResultIds(rb);
+  }
+
+  protected static class SkipSecondStepSearchGroupsContainer extends 
SearchGroupsContainer {
+
+private final Map<Object, String> docIdToShard = new HashMap<>();
+
+public SkipSecondStepSearchGroupsContainer(String[] fields) {
+  super(fields);
+}
+
+@Override
+public void addSearchGroups(ShardResponse srsp, String field, 
Collection<SearchGroup<BytesRef>> searchGroups) {
+  super.addSearchGroups(srsp, field, searchGroups);
+  for (SearchGroup<BytesRef> searchGroup : searchGroups) {
+assert(srsp.getShard() != null);
+docIdToShard.put(searchGroup.topDocSolrId, srsp.getShard());
+  }
+}
+
+@Override
+public void addMergedSearchGroups(ResponseBuilder rb, String groupField, 
Collection<SearchGroup<BytesRef>> mergedTopGroups) {
+  // TODO: add comment or javadoc re: why this method is overridden as a 
no-op
+}
+
+@Override
+public void addSearchGroupToShards(ResponseBuilder rb, String groupField, 
Collection<SearchGroup<BytesRef>> mergedTopGroups) {
+  super.addSearchGroupToShards(rb, groupField, mergedTopGroups);
+
+  final GroupingSpecification groupingSpecification = rb.getGroupingSpec();
+  final Sort groupSort = 
groupingSpecification.getGroupSortSpec().getSort();
+
+  GroupDocs<BytesRef>[] groups = new GroupDocs[mergedTopGroups.size()];
+
+  // This is the max score found in any document on any group
+  float maxScore = 0;
+  int index = 0;
+
+  for (SearchGroup<BytesRef> group : mergedTopGroups) {
+maxScore = Math.max(maxScore, group.topDocScore);
+final String shard = docIdToShard.get(group.topDocSolrId);
+assert(shard != null);
+final ShardDoc sdoc = new ShardDoc();
+sdoc.score = group.topDocScore;
+sdoc.id = group.topDocSolrId;
+sdoc.shard = shard;
+
+groups[index++] = new GroupDocs<>(group.topDocScore,
+group.topDocScore,
+new TotalHits(1, TotalHits.Relation.EQUAL_TO), /* we don't know 
the actual number of hits in the group- we set it to 1 as we only keep track of 
the top doc */
+new ShardDoc[] { sdoc }, /* only top doc */
+group.groupValue,
+group.sortValues);
+  }
+  

[GitHub] [lucene-solr] diegoceccarelli commented on a change in pull request #300: SOLR-11831: Skip second grouping step if group.limit is 1 (aka Las Vegas Patch)

2019-09-03 Thread GitBox
diegoceccarelli commented on a change in pull request #300: SOLR-11831: Skip 
second grouping step if group.limit is 1 (aka Las Vegas Patch)
URL: https://github.com/apache/lucene-solr/pull/300#discussion_r320201261
 
 

 ##
 File path: 
solr/core/src/java/org/apache/solr/search/grouping/distributed/responseprocessor/SkipSecondStepSearchGroupShardResponseProcessor.java
 ##
 @@ -0,0 +1,116 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.solr.search.grouping.distributed.responseprocessor;
+
+import java.util.Collection;
+import java.util.HashMap;
+import java.util.Map;
+
+import org.apache.lucene.search.Sort;
+import org.apache.lucene.search.TotalHits;
+import org.apache.lucene.search.grouping.GroupDocs;
+import org.apache.lucene.search.grouping.SearchGroup;
+import org.apache.lucene.search.grouping.TopGroups;
+import org.apache.lucene.util.BytesRef;
+import org.apache.solr.handler.component.ResponseBuilder;
+import org.apache.solr.handler.component.ShardDoc;
+import org.apache.solr.handler.component.ShardRequest;
+import org.apache.solr.handler.component.ShardResponse;
+import org.apache.solr.search.SolrIndexSearcher;
+import org.apache.solr.search.grouping.GroupingSpecification;
+import 
org.apache.solr.search.grouping.distributed.shardresultserializer.SearchGroupsResultTransformer;
+
+public class SkipSecondStepSearchGroupShardResponseProcessor extends 
SearchGroupShardResponseProcessor {
+
+  @Override
+  protected SearchGroupsResultTransformer 
newSearchGroupsResultTransformer(SolrIndexSearcher solrIndexSearcher) {
+return new 
SearchGroupsResultTransformer.SkipSecondStepSearchResultResultTransformer(solrIndexSearcher);
+  }
+
+  @Override
+  protected SearchGroupsContainer newSearchGroupsContainer(ResponseBuilder rb) 
{
+return new 
SkipSecondStepSearchGroupsContainer(rb.getGroupingSpec().getFields());
+  }
+
+  @Override
+  public void process(ResponseBuilder rb, ShardRequest shardRequest) {
+super.process(rb, shardRequest);
+TopGroupsShardResponseProcessor.fillResultIds(rb);
+  }
+
+  protected static class SkipSecondStepSearchGroupsContainer extends 
SearchGroupsContainer {
+
+private final Map<Object, String> docIdToShard = new HashMap<>();
+
+public SkipSecondStepSearchGroupsContainer(String[] fields) {
+  super(fields);
+}
+
+@Override
+public void addSearchGroups(ShardResponse srsp, String field, 
Collection<SearchGroup<BytesRef>> searchGroups) {
+  super.addSearchGroups(srsp, field, searchGroups);
+  for (SearchGroup<BytesRef> searchGroup : searchGroups) {
+assert(srsp.getShard() != null);
+docIdToShard.put(searchGroup.topDocSolrId, srsp.getShard());
+  }
+}
+
+@Override
+public void addMergedSearchGroups(ResponseBuilder rb, String groupField, 
Collection<SearchGroup<BytesRef>> mergedTopGroups) {
+  // TODO: add comment or javadoc re: why this method is overridden as a 
no-op
+}
+
+@Override
+public void addSearchGroupToShards(ResponseBuilder rb, String groupField, 
Collection<SearchGroup<BytesRef>> mergedTopGroups) {
+  super.addSearchGroupToShards(rb, groupField, mergedTopGroups);
+
+  final GroupingSpecification groupingSpecification = rb.getGroupingSpec();
+  final Sort groupSort = 
groupingSpecification.getGroupSortSpec().getSort();
+
+  GroupDocs<BytesRef>[] groups = new GroupDocs[mergedTopGroups.size()];
+
+  // This is the max score found in any document on any group
+  float maxScore = 0;
+  int index = 0;
+
+  for (SearchGroup<BytesRef> group : mergedTopGroups) {
+maxScore = Math.max(maxScore, group.topDocScore);
+final String shard = docIdToShard.get(group.topDocSolrId);
+assert(shard != null);
+final ShardDoc sdoc = new ShardDoc();
+sdoc.score = group.topDocScore;
+sdoc.id = group.topDocSolrId;
+sdoc.shard = shard;
+
+groups[index++] = new GroupDocs<>(group.topDocScore,
 
 Review comment:
   Ok, after reviewing the code I agree that is OK and better to use `NaN`


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this 

[jira] [Commented] (SOLR-13734) JWTAuthPlugin to support multiple issuers

2019-09-03 Thread Jira


[ 
https://issues.apache.org/jira/browse/SOLR-13734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16921302#comment-16921302
 ] 

Jan Høydahl commented on SOLR-13734:


The proposed solution can be exemplified with this {{security.json}} snippet:
{code:javascript}
{
  "authentication": {
"class": "solr.JWTAuthPlugin",
    "wellKnownUrl": "https://idp1/.well-known/openid-configuration",
    "aud": "https://solr-cluster/",
    "scope": "solr:read solr:write solr:admin",
    "issuers": {
      "idp2": {
        "wellKnownUrl": "https://idp2/.well-known/openid-configuration",
        "iss": "https://idp2/",
        "jwkUrl": ["https://idp2/jwk-endpoint", "https://other.domain/jwk-endpoint"]
      },
      "idp3": {
        "wellKnownUrl": "https://idp3/.well-known/openid-configuration"
  }
}
  }
}
{code}
The new parameter is the *{{issuers}}* key which can take a dictionary of JSON 
objects, each representing an additional issuer (IdP). Each issuer 
configuration will support a small subset of the existing configuration options:
 * {{wellKnownUrl}} - discovery endpoint. This will often be the only parameter 
needed since the plugin will resolve JWK, 'iss' etc from it.
 * {{iss}} - to explicitly configure issuer id for this issuer. This is 
different from the name given in the "issuers" dictionary.
 * {{jwkUrl}} - to explicitly configure JWKS endpoint(s) supported by this 
issuer

All other settings, such as timeout values, scopes etc, will be configured as 
before, and the 'primary' issuer must still be configured with top-level 
properties as today.

The reasoning behind a named dictionary instead of an array for the new 
'issuers' property is to be able to address each issuer for later removal or 
modification using REST API. Such REST api support for multiple issuers will 
also be deferred to future JIRAs.

When a request comes in for authentication, the plugin will follow this flow:
 # When validating the 'iss' claim of the JWT, pass if it matches one of the 
configured issuers (today must match main 'iss')
 # When validating the signature of the JWT, use an extended 
{{JwksVerificationKeyResolver}} with the following logic
 ## Select an issuer based on the 'iss' claim of the incoming token
 ## Validate the JWT using the (cached) JWKs of that issuer

> JWTAuthPlugin to support multiple issuers
> -
>
> Key: SOLR-13734
> URL: https://issues.apache.org/jira/browse/SOLR-13734
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: security
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
>Priority: Major
>  Labels: JWT, authentication
>
> In some large enterprise environments, there is more than one [Identity 
> Provider|https://en.wikipedia.org/wiki/Identity_provider] to issue tokens for 
> users. The equivalent example from the public internet is logging in to a 
> website and choose between multiple pre-defined IdPs (such as Google, GitHub, 
> Facebook etc) in the Oauth2/OIDC flow.
> In the enterprise the IdPs could be public ones but most likely they will be 
> private IdPs in various networks inside the enterprise. Users will interact 
> with a search application, e.g. one providing enterprise wide search, and 
> will authenticate with one out of several IdPs depending on their local 
> affiliation. The search app will then request an access token (JWT) for the 
> user and issue requests to Solr using that token.
> The JWT plugin currently supports exactly one IdP. This JIRA will extend 
> support for multiple IdPs for access token validation only. To limit the 
> scope of this Jira, Admin UI login must still happen to the "primary" IdP. 
> Supporting multiple IdPs for Admin UI login can be done in followup issues.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] [lucene-solr] atris commented on a change in pull request #823: LUCENE-8939: Introduce Shared Count Early Termination In Parallel Search

2019-09-03 Thread GitBox
atris commented on a change in pull request #823: LUCENE-8939: Introduce Shared 
Count Early Termination In Parallel Search
URL: https://github.com/apache/lucene-solr/pull/823#discussion_r320184927
 
 

 ##
 File path: lucene/core/src/java/org/apache/lucene/search/IndexSearcher.java
 ##
 @@ -467,9 +467,19 @@ public TopDocs searchAfter(ScoreDoc after, Query query, 
int numHits) throws IOEx
 
 final CollectorManager manager = new 
CollectorManager() {
 
+  private HitsThresholdChecker hitsThresholdChecker;
   @Override
   public TopScoreDocCollector newCollector() throws IOException {
-return TopScoreDocCollector.create(cappedNumHits, after, 
TOTAL_HITS_THRESHOLD);
+
+if (hitsThresholdChecker == null) {
+  if (executor == null || leafSlices.length <= 1) {
+hitsThresholdChecker = 
HitsThresholdChecker.create(TOTAL_HITS_THRESHOLD);
+  } else {
+hitsThresholdChecker = 
HitsThresholdChecker.createShared(TOTAL_HITS_THRESHOLD);
+  }
+}
 
 Review comment:
   Meaning, in the constructor? We will have to create a named implementation 
of the `CollectorManager` for that
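   
   The difference between the two factory calls in the snippet could be illustrated with a self-contained sketch of the shared variant: every collector produced by one manager increments a single counter, so the threshold is observed globally across slices. Names are illustrative, not the real `HitsThresholdChecker` API:
   
```java
import java.util.concurrent.atomic.AtomicLong;

// Minimal sketch of a shared hits-threshold checker. Created once (e.g. in a
// named CollectorManager's constructor) and handed to every newCollector()
// call, so early termination can trigger once the GLOBAL count crosses the
// threshold, not just a per-slice count.
class SharedHitsThresholdChecker {
    private final long threshold;
    private final AtomicLong hitCount = new AtomicLong();

    SharedHitsThresholdChecker(long threshold) {
        this.threshold = threshold;
    }

    // Called by each collector as it scores hits on its slice.
    void incrementHitCount() {
        hitCount.incrementAndGet();
    }

    // Collectors consult this to decide whether they may stop counting.
    boolean isThresholdReached() {
        return hitCount.get() > threshold;
    }
}
```
   
   Building the checker in a named `CollectorManager` implementation's constructor, as suggested above, would avoid the lazy null-check in `newCollector()` entirely.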


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5498) Allow DIH to report its state to ZooKeeper

2019-09-03 Thread Mikhail Khludnev (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-5498?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16921280#comment-16921280
 ] 

Mikhail Khludnev commented on SOLR-5498:


Isn't it covered by ZkPropertiesWriter? 

> Allow DIH to report its state to ZooKeeper
> --
>
> Key: SOLR-5498
> URL: https://issues.apache.org/jira/browse/SOLR-5498
> Project: Solr
>  Issue Type: Improvement
>  Components: contrib - DataImportHandler
>Affects Versions: 4.5
>Reporter: Rafał Kuć
>Assignee: Shalin Shekhar Mangar
>Priority: Minor
> Fix For: 4.9, 6.0
>
> Attachments: SOLR-5498.patch, SOLR-5498_version.patch
>
>
> I thought it may be good to be able for DIH to be fully controllable by Solr 
> in SolrCloud. So when once instance fails another could be automatically 
> started and so on. This issue is the first small step there - it makes 
> SolrCloud report DIH state to ZooKeeper once it is started and remove its 
> state once it is stopped or indexing job failed. In non-cloud mode that 
> functionality is not used. 



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-13734) JWTAuthPlugin to support multiple issuers

2019-09-03 Thread Jira


 [ 
https://issues.apache.org/jira/browse/SOLR-13734?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl updated SOLR-13734:
---
Description: 
In some large enterprise environments, there is more than one [Identity 
Provider|https://en.wikipedia.org/wiki/Identity_provider] to issue tokens for 
users. The equivalent example from the public internet is logging in to a 
website and choose between multiple pre-defined IdPs (such as Google, GitHub, 
Facebook etc) in the Oauth2/OIDC flow.

In the enterprise the IdPs could be public ones but most likely they will be 
private IdPs in various networks inside the enterprise. Users will interact 
with a search application, e.g. one providing enterprise wide search, and will 
authenticate with one out of several IdPs depending on their local affiliation. 
The search app will then request an access token (JWT) for the user and issue 
requests to Solr using that token.

The JWT plugin currently supports exactly one IdP. This JIRA will extend 
support for multiple IdPs for access token validation only. To limit the scope 
of this Jira, Admin UI login must still happen to the "primary" IdP. Supporting 
multiple IdPs for Admin UI login can be done in followup issues.

  was:
In some large enterprise environments, there is more than one [Identity 
Provider|https://en.wikipedia.org/wiki/Identity_provider] to issue tokens for 
users. The classic example from the public internet is logging in to a 
site and choosing between multiple pre-defined IdPs (such as Google, GitHub, 
Facebook etc).

In the enterprise world the IdPs will not be these public providers but IdPs 
inside various networks inside the enterprise.

The JWT plugin currently supports exactly one IdP. This JIRA will in the first 
phase extend support for multiple IdPs for access token validation only, not 
Admin UI login, meaning there will be a "main IdP" and optionally multiple 
"additional IdPs". Admin UI login will be towards main IdP but validation of 
access tokens may be with any of the additional IdPs.


> JWTAuthPlugin to support multiple issuers
> -
>
> Key: SOLR-13734
> URL: https://issues.apache.org/jira/browse/SOLR-13734
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: security
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
>Priority: Major
>  Labels: JWT, authentication
>
> In some large enterprise environments, there is more than one [Identity 
> Provider|https://en.wikipedia.org/wiki/Identity_provider] to issue tokens for 
> users. The equivalent example from the public internet is logging in to a 
> website and choose between multiple pre-defined IdPs (such as Google, GitHub, 
> Facebook etc) in the Oauth2/OIDC flow.
> In the enterprise the IdPs could be public ones but most likely they will be 
> private IdPs in various networks inside the enterprise. Users will interact 
> with a search application, e.g. one providing enterprise wide search, and 
> will authenticate with one out of several IdPs depending on their local 
> affiliation. The search app will then request an access token (JWT) for the 
> user and issue requests to Solr using that token.
> The JWT plugin currently supports exactly one IdP. This JIRA will extend 
> support for multiple IdPs for access token validation only. To limit the 
> scope of this Jira, Admin UI login must still happen to the "primary" IdP. 
> Supporting multiple IdPs for Admin UI login can be done in followup issues.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-13734) JWTAuthPlugin to support multiple issuers

2019-09-03 Thread Jira
Jan Høydahl created SOLR-13734:
--

 Summary: JWTAuthPlugin to support multiple issuers
 Key: SOLR-13734
 URL: https://issues.apache.org/jira/browse/SOLR-13734
 Project: Solr
  Issue Type: Task
  Security Level: Public (Default Security Level. Issues are Public)
  Components: security
Reporter: Jan Høydahl
Assignee: Jan Høydahl


In some large enterprise environments, there is more than one [Identity 
Provider|https://en.wikipedia.org/wiki/Identity_provider] to issue tokens for 
users. The classic example from the public internet is logging in to a 
site and choosing between multiple pre-defined IdPs (such as Google, GitHub, 
Facebook etc).

In the enterprise world, the IdPs will not be these public providers but 
rather IdPs on various networks within the enterprise.

The JWT plugin currently supports exactly one IdP. In the first phase, this 
JIRA will extend support to multiple IdPs for access token validation only, 
not Admin UI login, meaning there will be a "main IdP" and optionally 
multiple "additional IdPs". Admin UI login will go through the main IdP, but 
access tokens may be validated against any of the additional IdPs.
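The main-IdP/additional-IdPs split could be modeled as a lookup from a token's `iss` claim to a per-issuer configuration. A minimal sketch, where the registry layout and field names are assumptions for illustration, not the actual JWTAuthPlugin configuration schema:

```python
# Hypothetical issuer registry; field names are illustrative assumptions,
# not Solr's actual JWTAuthPlugin configuration schema.
ISSUERS = {
    "https://main-idp.example.com": {"name": "main", "jwks_url": "https://main-idp.example.com/jwks"},
    "https://dept-idp.example.com": {"name": "dept", "jwks_url": "https://dept-idp.example.com/jwks"},
}

MAIN_ISSUER = "https://main-idp.example.com"  # Admin UI login stays with this one


def select_issuer(iss_claim: str) -> dict:
    """Pick the issuer configuration matching a token's 'iss' claim.

    Tokens from issuers that are not pre-configured are rejected, since
    only trusted IdPs may mint accepted access tokens.
    """
    try:
        return ISSUERS[iss_claim]
    except KeyError:
        raise PermissionError("token issuer not trusted: " + iss_claim)
```

With this shape, adding an IdP is a pure configuration change; the validation code path stays the same.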






[jira] [Resolved] (LUCENE-8960) Add LatLonDocValuesPointInPolygonQuery

2019-09-03 Thread Ignacio Vera (Jira)


 [ 
https://issues.apache.org/jira/browse/LUCENE-8960?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ignacio Vera resolved LUCENE-8960.
--
Fix Version/s: 8.3
 Assignee: Ignacio Vera
   Resolution: Fixed

> Add LatLonDocValuesPointInPolygonQuery
> --
>
> Key: LUCENE-8960
> URL: https://issues.apache.org/jira/browse/LUCENE-8960
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Ignacio Vera
>Assignee: Ignacio Vera
>Priority: Minor
> Fix For: 8.3
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Currently LatLonDocValuesField contains queries for bounding box and circle. 
> This issue adds a polygon query as well.
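A doc-values query like this evaluates each candidate document's point against the polygon at search time. The core per-point check can be sketched with classic ray casting; this toy version ignores dateline crossing, polygon holes, and the boundary edge cases that Lucene's real implementation handles:

```python
def point_in_polygon(lat: float, lon: float, polygon: list) -> bool:
    """Ray-casting test: does (lat, lon) fall inside a simple polygon?

    Illustrative sketch only; Lucene's implementation additionally
    handles holes, the dateline, and numeric edge cases.
    """
    inside = False
    n = len(polygon)
    for i in range(n):
        lat1, lon1 = polygon[i]
        lat2, lon2 = polygon[(i + 1) % n]
        # Count edges crossed by a ray extending east from the point:
        # an odd crossing count means the point is inside.
        if (lat1 > lat) != (lat2 > lat):
            cross_lon = lon1 + (lat - lat1) * (lon2 - lon1) / (lat2 - lat1)
            if lon < cross_lon:
                inside = not inside
    return inside
```

Since the doc-values variant re-runs this kind of check per matching document, it trades index-time structures (as in the points-based polygon query) for search-time work.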






[jira] [Commented] (LUCENE-8960) Add LatLonDocValuesPointInPolygonQuery

2019-09-03 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-8960?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16921235#comment-16921235
 ] 

ASF subversion and git services commented on LUCENE-8960:
-

Commit 54685c5e7f5f84d28e02e42c583d5eb70588532d in lucene-solr's branch 
refs/heads/branch_8x from Ignacio Vera
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=54685c5 ]

LUCENE-8960: Add LatLonDocValuesPointInPolygonQuery (#851)


> Add LatLonDocValuesPointInPolygonQuery
> --
>
> Key: LUCENE-8960
> URL: https://issues.apache.org/jira/browse/LUCENE-8960
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Ignacio Vera
>Priority: Minor
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Currently LatLonDocValuesField contains queries for bounding box and circle. 
> This issue adds a polygon query as well.






[jira] [Commented] (LUCENE-8960) Add LatLonDocValuesPointInPolygonQuery

2019-09-03 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-8960?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16921227#comment-16921227
 ] 

ASF subversion and git services commented on LUCENE-8960:
-

Commit 5cbb33fa28523d8dca2a6a409008eb1e81d0a815 in lucene-solr's branch 
refs/heads/master from Ignacio Vera
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=5cbb33f ]

LUCENE-8960: Add LatLonDocValuesPointInPolygonQuery (#851)




> Add LatLonDocValuesPointInPolygonQuery
> --
>
> Key: LUCENE-8960
> URL: https://issues.apache.org/jira/browse/LUCENE-8960
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Ignacio Vera
>Priority: Minor
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Currently LatLonDocValuesField contains queries for bounding box and circle. 
> This issue adds a polygon query as well.





