[jira] [Commented] (LUCENE-5189) Numeric DocValues Updates
[ https://issues.apache.org/jira/browse/LUCENE-5189?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16927490#comment-16927490 ] Mikhail Khludnev commented on LUCENE-5189: -- Given that LUCENE-8585 and LUCENE-8374 optimize for absent values, shouldn't we have an IW method to nuke DV at certain docs? > Numeric DocValues Updates > - > > Key: LUCENE-5189 > URL: https://issues.apache.org/jira/browse/LUCENE-5189 > Project: Lucene - Core > Issue Type: New Feature > Components: core/index >Reporter: Shai Erera >Assignee: Shai Erera >Priority: Major > Fix For: 4.6, 6.0 > > Attachments: LUCENE-5189-4x.patch, LUCENE-5189-4x.patch, > LUCENE-5189-no-lost-updates.patch, LUCENE-5189-renames.patch, > LUCENE-5189-segdv.patch, LUCENE-5189-updates-order.patch, > LUCENE-5189-updates-order.patch, LUCENE-5189.patch, LUCENE-5189.patch, > LUCENE-5189.patch, LUCENE-5189.patch, LUCENE-5189.patch, LUCENE-5189.patch, > LUCENE-5189.patch, LUCENE-5189.patch, LUCENE-5189.patch, LUCENE-5189.patch, > LUCENE-5189.patch, LUCENE-5189_process_events.patch, > LUCENE-5189_process_events.patch > > > In LUCENE-4258 we started to work on incremental field updates; however, the > amount of changes is immense and hard to follow/consume. The reason is that > we targeted postings, stored fields, DV etc., all from the get-go. > I'd like to start afresh here, with numeric-dv-field updates only. There are > a couple of reasons for that: > * NumericDV fields should be easier to update, if e.g. we write all the > values of all the documents in a segment for the updated field (similar to > how livedocs work, and previously norms). > * It's a fairly contained issue, attempting to handle just one data type to > update, yet requires many changes to core code which will also be useful for > updating other data types. > * It has value in and of itself, and we don't need to allow updating all the > data types in Lucene at once ... we can do that gradually. 
> I have some working patch already which I'll upload next, explaining the > changes. -- This message was sent by Atlassian Jira (v8.3.2#803003) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
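The per-segment idea in the description above (rewrite all values of the updated field for a whole segment, the way livedocs are rewritten) can be sketched as a toy model in plain Java. The class and method names below are illustrative only, not Lucene's API:

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.Map;

/**
 * Toy model of the per-segment approach: updating one doc's numeric value
 * rewrites the entire per-field value array, analogous to how livedocs
 * (and previously norms) are rewritten wholesale.
 */
public class SegmentNumericDV {
    private final Map<String, long[]> valuesByField = new HashMap<>();
    private final int maxDoc;

    SegmentNumericDV(int maxDoc) { this.maxDoc = maxDoc; }

    /** Update one doc by rewriting the field's full array for the segment. */
    void update(String field, int docId, long value) {
        long[] current = valuesByField.getOrDefault(field, new long[maxDoc]);
        long[] rewritten = Arrays.copyOf(current, maxDoc); // whole-segment rewrite
        rewritten[docId] = value;
        valuesByField.put(field, rewritten);               // swap in the new generation
    }

    long get(String field, int docId) {
        long[] vals = valuesByField.get(field);
        return vals == null ? 0L : vals[docId];
    }

    public static void main(String[] args) {
        SegmentNumericDV dv = new SegmentNumericDV(4);
        dv.update("price", 2, 42L);
        System.out.println(dv.get("price", 2)); // 42
        System.out.println(dv.get("price", 0)); // 0
    }
}
```

The point of the sketch is the design choice, not the data structure: an update never patches values in place; it produces a new full-segment copy of just that field, which keeps the contained, one-data-type scope the issue describes.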
[jira] [Updated] (SOLR-12490) Query DSL supports for further referring and exclusion in JSON facets
[ https://issues.apache.org/jira/browse/SOLR-12490?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mikhail Khludnev updated SOLR-12490: Summary: Query DSL supports for further referring and exclusion in JSON facets (was: introduce json.queries supports DSL for further referring and exclusion in JSON facets ) > Query DSL supports for further referring and exclusion in JSON facets > -- > > Key: SOLR-12490 > URL: https://issues.apache.org/jira/browse/SOLR-12490 > Project: Solr > Issue Type: Improvement > Components: Facet Module, faceting >Reporter: Mikhail Khludnev >Priority: Major > Labels: newdev > > It's spin off from the > [discussion|https://issues.apache.org/jira/browse/SOLR-9685?focusedCommentId=16508720=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16508720]. > > h2. Problem > # after SOLR-9685 we can tag separate clauses in hairish queries like > {{parent}}, {{bool}} > # we can {{domain.excludeTags}} > # we are looking for child faceting with exclusions, see SOLR-9510, SOLR-8998 > > # but we can refer only separate params in {{domain.filter}}, it's not > possible to refer separate clauses > see the first comment -- This message was sent by Atlassian Jira (v8.3.2#803003) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
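For context, the tag-and-exclude flow that already works (items 1–2 in the Problem list above) looks roughly like this as a JSON Request API body; the field and tag names here are illustrative:

```json
{
  "query": "*:*",
  "filter": [ "{!tag=COLOR}color:black" ],
  "facet": {
    "colors": {
      "type": "terms",
      "field": "color",
      "domain": { "excludeTags": "COLOR" }
    }
  }
}
```

What the issue asks for is the missing piece in item 4: the ability to refer to an individual tagged *clause* inside a larger query from {{domain.filter}}, not just to a whole separate param.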
[jira] [Commented] (SOLR-12490) introduce json.queries supports DSL for further referring and exclusion in JSON facets
[ https://issues.apache.org/jira/browse/SOLR-12490?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16926538#comment-16926538 ] Mikhail Khludnev commented on SOLR-12490: - I think we'd rather continue by adding yet another small cut. {code} { "query" : {...}, "params":{ "childFq":[{ "#color" :"color:black" }, { "#size" : "size:L" }] }, "facet":{ "sku_colors_in_prods":{ "type" : "terms", "field" : "color", "domain" : { "excludeTags":["top", "color"], "filter":[ "{!json_param}childFq" ] } } } } {code} Ideas are: * put JSON as a param value; the parser garbles it to a meaningless string, but it's still available via {{req.getJSON()}}. * the filter string invokes a new query parser which converts the JSON param via query DSL; need to decide how to keep the {{JsonQueryConverter}} counter. Shouldn't be a big deal. Right?
[jira] [Assigned] (SOLR-12490) introduce json.queries supports DSL for further referring and exclusion in JSON facets
[ https://issues.apache.org/jira/browse/SOLR-12490?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mikhail Khludnev reassigned SOLR-12490: --- Assignee: (was: Mikhail Khludnev)
[jira] [Created] (SOLR-13748) mm (min should match) param for {!bool} query parser
Mikhail Khludnev created SOLR-13748: --- Summary: mm (min should match) param for {!bool} query parser Key: SOLR-13748 URL: https://issues.apache.org/jira/browse/SOLR-13748 Project: Solr Issue Type: Sub-task Components: query parsers Reporter: Mikhail Khludnev
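The mm semantics being proposed here — a document satisfies the should-clauses only when at least mm of them match — can be illustrated with a plain-Java predicate count. This is an illustration of the semantics only, not Solr's {!bool} parser:

```java
import java.util.List;
import java.util.function.Predicate;

/** Illustration of mm (min-should-match) semantics over should-clauses. */
public class MinShouldMatchDemo {
    static <D> boolean matches(D doc, List<Predicate<D>> shouldClauses, int mm) {
        long hits = shouldClauses.stream().filter(p -> p.test(doc)).count();
        return hits >= mm; // the doc qualifies only if at least mm clauses match
    }

    public static void main(String[] args) {
        List<Predicate<String>> clauses = List.of(
                s -> s.contains("red"), s -> s.contains("L"), s -> s.contains("cotton"));
        System.out.println(matches("red L", clauses, 2));  // true: 2 of 3 clauses hit
        System.out.println(matches("cotton", clauses, 2)); // false: only 1 clause hits
    }
}
```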
[jira] [Resolved] (SOLR-3666) DataImportHandler status command in SolrCloud does not work properly
[ https://issues.apache.org/jira/browse/SOLR-3666?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mikhail Khludnev resolved SOLR-3666. Resolution: Won't Fix There's nothing like this in DIH now. > DataImportHandler status command in SolrCloud does not work properly > - > > Key: SOLR-3666 > URL: https://issues.apache.org/jira/browse/SOLR-3666 > Project: Solr > Issue Type: Bug > Components: contrib - DataImportHandler, SolrCloud >Affects Versions: 4.0-ALPHA >Reporter: Sauvik Sarkar >Priority: Major > > The dataimport?command=status command does not work correctly when invoked on > a node not running the DIH in a SolrCloud configuration. > The expectation is that no matter which node is importing, any other node > should be able to get the import status information.
[jira] [Updated] (SOLR-13738) UnifiedHighlighter
[ https://issues.apache.org/jira/browse/SOLR-13738?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mikhail Khludnev updated SOLR-13738: Summary: UnifiedHighlighter (was: RequestHandlerBase ... ClassCastException: class ... .lucene.search.IndexSearcher cannot be cast to class ... .solr.search.SolrIndexSearcher ...) > UnifiedHighlighter > -- > > Key: SOLR-13738 > URL: https://issues.apache.org/jira/browse/SOLR-13738 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) >Affects Versions: 8.2 >Reporter: Jochen Barth >Priority: Major > > Mikhail Khludnev said, this is a bug; > Here the complete error message for the Query below: > Just tested wirth 8.1.1: works. > {quote} > 2019-08-30 12:40:40.476 ERROR (qtp2116511124-65) [ x:Suchindex] > o.a.s.h.RequestHandlerBase java.lang.ClassCastException: class > org.apache.lucene.search.IndexSearcher cannot be cast to class > org.apache.solr.search.SolrIndexSearcher (or > g.apache.lucene.search.IndexSearcher and > org.apache.solr.search.SolrIndexSearcher are in unnamed module of loader > org.eclipse.jetty.webapp.WebAppClassLoader @5ed190be) > at > org.apache.solr.search.join.GraphQuery.createWeight(GraphQuery.java:115) > at > org.apache.lucene.search.uhighlight.FieldOffsetStrategy.createOffsetsEnumsWeightMatcher(FieldOffsetStrategy.java:137) > at > org.apache.lucene.search.uhighlight.FieldOffsetStrategy.createOffsetsEnumFromReader(FieldOffsetStrategy.java:74) > at > org.apache.lucene.search.uhighlight.MemoryIndexOffsetStrategy.getOffsetsEnum(MemoryIndexOffsetStrategy.java:110) > at > org.apache.lucene.search.uhighlight.FieldHighlighter.highlightFieldForDoc(FieldHighlighter.java:76) > at > org.apache.lucene.search.uhighlight.UnifiedHighlighter.highlightFieldsAsObjects(UnifiedHighlighter.java:641) > at > org.apache.lucene.search.uhighlight.UnifiedHighlighter.highlightFields(UnifiedHighlighter.java:510) > at > 
org.apache.solr.highlight.UnifiedSolrHighlighter.doHighlighting(UnifiedSolrHighlighter.java:149) > at > org.apache.solr.handler.component.HighlightComponent.process(HighlightComponent.java:171) > at > org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:305) > at > org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:199) > at org.apache.solr.core.SolrCore.execute(SolrCore.java:2578) > at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:780) > at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:566) > at > org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:423) > at > org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:350) > at > org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1602) > at > org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:540) > at > org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:146) > at > org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548) > at > org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132) > at > org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:257) > at > org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1711) > at > org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:255) > at > org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1347) > at > org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:203) > at > org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:480) > at > org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1678) > at > org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:201) > at > org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1249) > at > 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:144) > at > org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:220) > at > org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:152) > at > org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132) > at > org.eclipse.jetty.rewrite.handler.RewriteHandler.handle(RewriteHandler.java:335) > at > org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132) > at
[jira] [Updated] (SOLR-13738) UnifiedHighlighter can't highlight GraphQuery
[ https://issues.apache.org/jira/browse/SOLR-13738?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mikhail Khludnev updated SOLR-13738: Summary: UnifiedHighlighter can't highlight GraphQuery (was: UnifiedHighlighter) > UnifiedHighlighter can't highlight GraphQuery > - > > Key: SOLR-13738 > URL: https://issues.apache.org/jira/browse/SOLR-13738 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) >Affects Versions: 8.2 >Reporter: Jochen Barth >Priority: Major > > Mikhail Khludnev said, this is a bug; > Here the complete error message for the Query below: > Just tested wirth 8.1.1: works. > {quote} > 2019-08-30 12:40:40.476 ERROR (qtp2116511124-65) [ x:Suchindex] > o.a.s.h.RequestHandlerBase java.lang.ClassCastException: class > org.apache.lucene.search.IndexSearcher cannot be cast to class > org.apache.solr.search.SolrIndexSearcher (or > g.apache.lucene.search.IndexSearcher and > org.apache.solr.search.SolrIndexSearcher are in unnamed module of loader > org.eclipse.jetty.webapp.WebAppClassLoader @5ed190be) > at > org.apache.solr.search.join.GraphQuery.createWeight(GraphQuery.java:115) > at > org.apache.lucene.search.uhighlight.FieldOffsetStrategy.createOffsetsEnumsWeightMatcher(FieldOffsetStrategy.java:137) > at > org.apache.lucene.search.uhighlight.FieldOffsetStrategy.createOffsetsEnumFromReader(FieldOffsetStrategy.java:74) > at > org.apache.lucene.search.uhighlight.MemoryIndexOffsetStrategy.getOffsetsEnum(MemoryIndexOffsetStrategy.java:110) > at > org.apache.lucene.search.uhighlight.FieldHighlighter.highlightFieldForDoc(FieldHighlighter.java:76) > at > org.apache.lucene.search.uhighlight.UnifiedHighlighter.highlightFieldsAsObjects(UnifiedHighlighter.java:641) > at > org.apache.lucene.search.uhighlight.UnifiedHighlighter.highlightFields(UnifiedHighlighter.java:510) > at > org.apache.solr.highlight.UnifiedSolrHighlighter.doHighlighting(UnifiedSolrHighlighter.java:149) > at > 
org.apache.solr.handler.component.HighlightComponent.process(HighlightComponent.java:171) > at > org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:305) > at > org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:199) > at org.apache.solr.core.SolrCore.execute(SolrCore.java:2578) > at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:780) > at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:566) > at > org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:423) > at > org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:350) > at > org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1602) > at > org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:540) > at > org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:146) > at > org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548) > at > org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132) > at > org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:257) > at > org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1711) > at > org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:255) > at > org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1347) > at > org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:203) > at > org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:480) > at > org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1678) > at > org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:201) > at > org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1249) > at > org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:144) > at > 
org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:220) > at > org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:152) > at > org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132) > at > org.eclipse.jetty.rewrite.handler.RewriteHandler.handle(RewriteHandler.java:335) > at > org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132) > at org.eclipse.jetty.server.Server.handle(Server.java:505) >
[jira] [Created] (SOLR-13740) Assert returning ExtendedFileField
Mikhail Khludnev created SOLR-13740: --- Summary: Assert returning ExtendedFileField Key: SOLR-13740 URL: https://issues.apache.org/jira/browse/SOLR-13740 Project: Solr Issue Type: Test Security Level: Public (Default Security Level. Issues are Public) Components: Schema and Analysis Reporter: Mikhail Khludnev It works; I'll commit sometime later {code} diff --git a/solr/core/src/test/org/apache/solr/schema/ExternalFileFieldSortTest.java b/solr/core/src/test/org/apache/solr/schema/ExternalFileFieldSortTest.java index 632b413..4106e15 100644 --- a/solr/core/src/test/org/apache/solr/schema/ExternalFileFieldSortTest.java +++ b/solr/core/src/test/org/apache/solr/schema/ExternalFileFieldSortTest.java @@ -48,8 +48,9 @@ addDocuments(); assertQ("query", -req("q", "*:*", "sort", "eff asc"), +req("q", "*:*", "sort", "eff asc", "fl", "id,field(eff)"), "//result/doc[position()=1]/str[.='3']", +"//result/doc[position()=1]/float[@name='field(eff)' and .='0.001']", "//result/doc[position()=2]/str[.='1']", "//result/doc[position()=10]/str[.='8']"); } {code}
[jira] [Updated] (SOLR-13727) V2Requests: HttpSolrClient replaces first instance of "/solr" with "/api" instead of using regex pattern
[ https://issues.apache.org/jira/browse/SOLR-13727?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mikhail Khludnev updated SOLR-13727: Status: Patch Available (was: Open) > V2Requests: HttpSolrClient replaces first instance of "/solr" with "/api" > instead of using regex pattern > > > Key: SOLR-13727 > URL: https://issues.apache.org/jira/browse/SOLR-13727 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: clients - java, v2 API >Affects Versions: 8.2 >Reporter: Megan Carey >Priority: Major > Labels: easyfix, patch > Attachments: SOLR-13727.patch > > Time Spent: 40m > Remaining Estimate: 0h > > When the HttpSolrClient is formatting a V2Request, it needs to change the > endpoint from the default "/solr/..." to "/api/...". It does so by simply > calling String.replace, which replaces the first instance of "/solr" in the > URL with "/api". > > In the case where the host's address starts with "solr" and the HTTP protocol > is appended, this call changes the address for the request. Example: > if baseUrl is "http://solr-host.com:8983/solr", this call will change it to > "http://api-host.com:8983/solr" > > We should use a regex pattern to ensure that we're replacing the correct > portion of the URL.
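The failure mode is easy to reproduce with plain JDK string operations. The anchored-regex variant below is one possible fix for illustration, not the patch attached to the issue:

```java
public class V2PathRewriteDemo {
    public static void main(String[] args) {
        String baseUrl = "http://solr-host.com:8983/solr";

        // Naive replacement: the first "/solr" occurs inside the authority
        // ("//solr-host..."), so the host gets mangled instead of the path.
        String naive = baseUrl.replaceFirst("/solr", "/api");
        System.out.println(naive); // http://api-host.com:8983/solr

        // Anchoring the pattern to the end of the base URL rewrites only
        // the trailing context-path segment.
        String anchored = baseUrl.replaceFirst("/solr$", "/api");
        System.out.println(anchored); // http://solr-host.com:8983/api
    }
}
```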
[jira] [Updated] (SOLR-13727) V2Requests: HttpSolrClient replaces first instance of "/solr" with "/api" instead of using regex pattern
[ https://issues.apache.org/jira/browse/SOLR-13727?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mikhail Khludnev updated SOLR-13727: Attachment: SOLR-13727.patch Status: Open (was: Open)
[jira] [Updated] (SOLR-9505) Extra tests to confirm Atomic Update remove behaviour
[ https://issues.apache.org/jira/browse/SOLR-9505?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mikhail Khludnev updated SOLR-9505: --- Status: Patch Available (was: Open) > Extra tests to confirm Atomic Update remove behaviour > - > > Key: SOLR-9505 > URL: https://issues.apache.org/jira/browse/SOLR-9505 > Project: Solr > Issue Type: Test >Affects Versions: 7.0 >Reporter: Tim Owen >Priority: Minor > Attachments: SOLR-9505.patch > > > The behaviour of the Atomic Update {{remove}} operation in the code doesn't > match the description in the Confluence documentation, which has been > questioned already. From looking at the source code, and using curl to > confirm, the {{remove}} operation only removes the first occurrence of a > value from a multi-valued field, it does not remove all occurrences. The > {{removeregex}} operation does remove all, however. > There are unit tests for Atomic Updates, but they didn't assert this > behaviour, so I've added some extra assertions to confirm that, and a couple > of extra tests including one that checks that {{removeregex}} does a Regex > match of the whole value, not just a find-anywhere operation. > I think it's the documentation that needs clarifying - the code behaves as > expected (assuming {{remove}} was intended to work that way?) -- This message was sent by Atlassian Jira (v8.3.2#803003) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
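The remove-only-the-first-occurrence behaviour described above parallels {{java.util.List.remove(Object)}}, which this plain-Java snippet demonstrates; it's an analogy to Solr's atomic-update semantics, not Solr's update code:

```java
import java.util.ArrayList;
import java.util.List;

public class RemoveFirstDemo {
    public static void main(String[] args) {
        // A multi-valued field with a duplicated value.
        List<String> values = new ArrayList<>(List.of("red", "blue", "red"));

        // Like atomic-update "remove": only the first matching value goes away.
        values.remove("red");
        System.out.println(values); // [blue, red]

        // Removing *all* occurrences needs removeIf — the rough analogue of
        // "removeregex", which does remove every match.
        values.removeIf("red"::equals);
        System.out.println(values); // [blue]
    }
}
```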
[jira] [Commented] (SOLR-13735) DIH on SolrCloud more than 5 mins causes TimeoutException: Idle timeout expired: 300000/300000 ms
[ https://issues.apache.org/jira/browse/SOLR-13735?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16921416#comment-16921416 ] Mikhail Khludnev commented on SOLR-13735: - {{2019-09-01 10:11:27.436 ERROR (qtp1650813924-22) [c:c_member_lots_a s:shard1}} {{r:core_node3 x:c_collection_shard1_replica_n1] o.a.s.h.RequestHandlerBase}} {{java.io.IOException: java.util.concurrent.TimeoutException: Idle timeout}} {{expired: 30/30 ms}} {{ at}} {{org.eclipse.jetty.server.HttpInput$ErrorState.noContent(HttpInput.java:1080)}} {{ at org.eclipse.jetty.server.HttpInput.read(HttpInput.java:313)}} {{ at}} {{org.apache.solr.servlet.ServletInputStreamWrapper.read(ServletInputStreamWrapper.java:74)}} {{ at}} {{org.apache.commons.io.input.ProxyInputStream.read(ProxyInputStream.java:100)}} {{ at}} {{org.apache.solr.common.util.FastInputStream.readWrappedStream(FastInputStream.java:79)}} {{ at}} {{org.apache.solr.common.util.FastInputStream.refill(FastInputStream.java:88)}} {{ at}} {{org.apache.solr.common.util.FastInputStream.peek(FastInputStream.java:60)}} {{ at}} {{org.apache.solr.handler.loader.JavabinLoader.parseAndLoadDocs(JavabinLoader.java:107)}} {{ at}} {{org.apache.solr.handler.loader.JavabinLoader.load(JavabinLoader.java:55)}} {{ at}} {{org.apache.solr.handler.UpdateRequestHandler$1.load(UpdateRequestHandler.java:97)}} > DIH on SolrCloud more than 5 mins causes TimeoutException: Idle timeout > expired: 30/30 ms > - > > Key: SOLR-13735 > URL: https://issues.apache.org/jira/browse/SOLR-13735 > Project: Solr > Issue Type: Sub-task > Components: contrib - DataImportHandler >Reporter: Mikhail Khludnev >Priority: Minor > > see mail thread linked. -- This message was sent by Atlassian Jira (v8.3.2#803003) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
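If raising the limit is an acceptable workaround while the root cause is investigated, the Jetty connector idle timeout is exposed through a system property believed to be defined in Solr's bundled {{server/etc/jetty-http.xml}}; verify the property name and default against the running version before relying on it:

```shell
# solr.in.sh — raise Jetty's connector idle timeout (value here is illustrative)
SOLR_OPTS="$SOLR_OPTS -Dsolr.jetty.http.idleTimeout=1800000"
```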
[jira] [Commented] (SOLR-13735) DIH on SolrCloud more than 5 mins causes TimeoutException: Idle timeout expired: 300000/300000 ms
[ https://issues.apache.org/jira/browse/SOLR-13735?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16921417#comment-16921417 ] Mikhail Khludnev commented on SOLR-13735: - SOLR-9908 has a test stub to start with.
[jira] [Updated] (SOLR-13735) DIH on SolrCloud more than 5 mins causes TimeoutException: Idle timeout expired: 300000/300000 ms
[ https://issues.apache.org/jira/browse/SOLR-13735?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mikhail Khludnev updated SOLR-13735: Description: see mail thread linked.
[jira] [Created] (SOLR-13735) DIH on SolrCloud more than 5 mins causes TimeoutException: Idle timeout expired: 300000/300000 ms
Mikhail Khludnev created SOLR-13735: --- Summary: DIH on SolrCloud more than 5 mins causes TimeoutException: Idle timeout expired: 300000/300000 ms Key: SOLR-13735 URL: https://issues.apache.org/jira/browse/SOLR-13735 Project: Solr Issue Type: Sub-task Components: contrib - DataImportHandler Reporter: Mikhail Khludnev
[jira] [Commented] (SOLR-5498) Allow DIH to report its state to ZooKeeper
[ https://issues.apache.org/jira/browse/SOLR-5498?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16921280#comment-16921280 ] Mikhail Khludnev commented on SOLR-5498: Isn't it covered by ZkPropertiesWriter? > Allow DIH to report its state to ZooKeeper > -- > > Key: SOLR-5498 > URL: https://issues.apache.org/jira/browse/SOLR-5498 > Project: Solr > Issue Type: Improvement > Components: contrib - DataImportHandler >Affects Versions: 4.5 >Reporter: Rafał Kuć >Assignee: Shalin Shekhar Mangar >Priority: Minor > Fix For: 4.9, 6.0 > > Attachments: SOLR-5498.patch, SOLR-5498_version.patch > > > I thought it may be good to be able for DIH to be fully controllable by Solr > in SolrCloud. So when once instance fails another could be automatically > started and so on. This issue is the first small step there - it makes > SolrCloud report DIH state to ZooKeeper once it is started and remove its > state once it is stopped or indexing job failed. In non-cloud mode that > functionality is not used. -- This message was sent by Atlassian Jira (v8.3.2#803003) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Assigned] (SOLR-13720) Impossible to create effective ToParenBlockJoinQuery in custom QParser
[ https://issues.apache.org/jira/browse/SOLR-13720?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mikhail Khludnev reassigned SOLR-13720: --- Assignee: Mikhail Khludnev > Impossible to create effective ToParenBlockJoinQuery in custom QParser > -- > > Key: SOLR-13720 > URL: https://issues.apache.org/jira/browse/SOLR-13720 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) > Components: query parsers >Affects Versions: 8.2 >Reporter: Stanislav Livotov >Assignee: Mikhail Khludnev >Priority: Minor > Labels: noob > Fix For: 8.3 > > Attachments: SOLR-13720.patch > > > According to Solr [ducumentation|#SolrPlugins-QParserPlugin] QParser is > treated as a legal plugin. > > However, it is impossible to create an effective ToParentBlockJoin query > without copy-pasting(BitDocIdSetFilterWrapper class and getCachedFilter > method from BlockJoinParentQParser) or dirty hacks(like creating > org.apache.solr.search.join package with some accessor method to > package-private methods in plugin code and adding it in WEB-INF/lib directory > in order to be loaded by the same ClassLoader). > I don't see a truly clean way how to fix it, but at least we can help custom > plugin developers to create it a little bit easier by making > BlockJoinParentQParser#getCachedFilter public and > BlockJoinParentQParser#BitDocIdSetFilterWrapper and providing getter for > BitDocIdSetFilterWrapper#filter. > > > In order to create -- This message was sent by Atlassian Jira (v8.3.2#803003) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (SOLR-13720) Impossible to create effective ToParenBlockJoinQuery in custom QParser
[ https://issues.apache.org/jira/browse/SOLR-13720?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mikhail Khludnev updated SOLR-13720: Description: According to Solr [documentation|#SolrPlugins-QParserPlugin] QParser is treated as a legal plugin. However, it is impossible to create an effective ToParentBlockJoin query without copy-pasting(BitDocIdSetFilterWrapper class and getCachedFilter method from BlockJoinParentQParser) or dirty hacks(like creating org.apache.solr.search.join package with some accessor method to package-private methods in plugin code and adding it in WEB-INF/lib directory in order to be loaded by the same ClassLoader). I don't see a truly clean way how to fix it, but at least we can help custom plugin developers to create it a little bit easier by making BlockJoinParentQParser#getCachedFilter public and BlockJoinParentQParser#BitDocIdSetFilterWrapper and providing getter for BitDocIdSetFilterWrapper#filter. In order to create was: According to Solr [ducumentation|#SolrPlugins-QParserPlugin] QParser is treated as a legal plugin. However, it is impossible to create an effective ToParentBlockJoin query without copy-pasting(BitDocIdSetFilterWrapper class and getCachedFilter method from BlockJoinParentQParser) or dirty hacks(like creating org.apache.solr.search.join package with some accessor method to package-private methods in plugin code and adding it in WEB-INF/lib directory in order to be loaded by the same ClassLoader). I don't see a truly clean way how to fix it, but at least we can help custom plugin developers to create it a little bit easier by making BlockJoinParentQParser#getCachedFilter public and BlockJoinParentQParser#BitDocIdSetFilterWrapper and providing getter for BitDocIdSetFilterWrapper#filter. 
In order to create
[jira] [Updated] (SOLR-13720) Impossible to create effective ToParenBlockJoinQuery in custom QParser
[ https://issues.apache.org/jira/browse/SOLR-13720?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mikhail Khludnev updated SOLR-13720: Component/s: query parsers
[jira] [Updated] (SOLR-13720) Impossible to create effective ToParenBlockJoinQuery in custom QParser
[ https://issues.apache.org/jira/browse/SOLR-13720?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mikhail Khludnev updated SOLR-13720: Labels: noob (was: )
[jira] [Updated] (SOLR-13720) Impossible to create effective ToParenBlockJoinQuery in custom QParser
[ https://issues.apache.org/jira/browse/SOLR-13720?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mikhail Khludnev updated SOLR-13720: Resolution: Fixed Status: Resolved (was: Patch Available)
[jira] [Updated] (SOLR-13720) Impossible to create effective ToParenBlockJoinQuery in custom QParser
[ https://issues.apache.org/jira/browse/SOLR-13720?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mikhail Khludnev updated SOLR-13720: Affects Version/s: 8.2 (was: master (9.0))
[jira] [Updated] (SOLR-13720) Impossible to create effective ToParenBlockJoinQuery in custom QParser
[ https://issues.apache.org/jira/browse/SOLR-13720?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mikhail Khludnev updated SOLR-13720: Priority: Minor (was: Major)
[jira] [Updated] (SOLR-13720) Impossible to create effective ToParenBlockJoinQuery in custom QParser
[ https://issues.apache.org/jira/browse/SOLR-13720?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mikhail Khludnev updated SOLR-13720: Fix Version/s: 8.3 (was: master (9.0))
[jira] [Updated] (SOLR-13720) Impossible to create effective ToParenBlockJoinQuery in custom QParser
[ https://issues.apache.org/jira/browse/SOLR-13720?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mikhail Khludnev updated SOLR-13720: Issue Type: Improvement (was: Bug)
[jira] [Commented] (SOLR-13035) Utilize solr.data.home / solrDataHome in solr.xml to set all writable files in single directory
[ https://issues.apache.org/jira/browse/SOLR-13035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16918580#comment-16918580 ] Mikhail Khludnev commented on SOLR-13035: - Attached a patch against master with conflicts resolved. > Utilize solr.data.home / solrDataHome in solr.xml to set all writable files > in single directory > --- > > Key: SOLR-13035 > URL: https://issues.apache.org/jira/browse/SOLR-13035 > Project: Solr > Issue Type: Improvement > Reporter: Amrit Sarkar > Assignee: Shalin Shekhar Mangar > Priority: Major > Attachments: SOLR-13035.patch, SOLR-13035.patch, SOLR-13035.patch, > SOLR-13035.patch, SOLR-13035.patch, SOLR-13035.patch, > image-2019-08-28-23-57-39-826.png > > > The {{solr.data.home}} system property or {{solrDataHome}} in _solr.xml_ is > already available as per SOLR-6671. > The writable content in Solr is index files, core properties, and ZK data if > embedded ZooKeeper is started in SolrCloud mode. It would be great if all > writable content could live under the same directory, so as to have separate READ-ONLY > and WRITE-ONLY directories. > It could then also solve official docker Solr image issues: > https://github.com/docker-solr/docker-solr/issues/74 > https://github.com/docker-solr/docker-solr/issues/133
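For context, the property discussed in the issue can be supplied as a system property at startup; a minimal sketch (the path is illustrative, and exact behavior depends on the Solr version and the patch under discussion):

```shell
# Start Solr in cloud mode with index/core data redirected to a
# writable location, separate from the read-only install directory.
# (Illustrative path; run from a Solr installation directory.)
bin/solr start -c -Dsolr.data.home=/var/solr/data
```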
[jira] [Updated] (SOLR-13035) Utilize solr.data.home / solrDataHome in solr.xml to set all writable files in single directory
[ https://issues.apache.org/jira/browse/SOLR-13035?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mikhail Khludnev updated SOLR-13035: Attachment: SOLR-13035.patch
[jira] [Comment Edited] (SOLR-13035) Utilize solr.data.home / solrDataHome in solr.xml to set all writable files in single directory
[ https://issues.apache.org/jira/browse/SOLR-13035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16918114#comment-16918114 ] Mikhail Khludnev edited comment on SOLR-13035 at 8/29/19 12:20 PM: --- Hello [~sarkaramr...@gmail.com], I resolved the conflicts a little and launched the Windows script. Here's what I have.
At first I added -v -V and examined the output. This one makes sense: GC_LOG_OPTS = "-Xlog:gc*:file=\"data\logs\solr_gc.log\":time,up
This one looks odd: SOLR_DATA_HOME = data\data\data
Here's what I have in Solr Admin. Shouldn't the tmp folder be moved to the writable home? -Djava.io.tmpdir=lucene-solr\solr\server\tmp
That's odd: -Dsolr.data.home=data\data\data
These are ok: -Dsolr.log.dir=data\logs -Dsolr.solr.home=\lucene-solr\solr\server\solr -Xlog:gc*:file="data\logs\solr_gc.log":
!image-2019-08-28-23-57-39-826.png!
I'm not sure if this is the expected behavior. Also, notice the pid file created at solr/bin/solr-8983.port.
Also, the usage prompt seems vague to me: -w dir Solr will create all writable directories and files relative to this. solr.data.home, if relative, will be set as SOLR_VAR_ROOT\{this}/solr.data.home; solr.log.dir, if relative, will be set as SOLR_VAR_ROOT\{this}/solr.log.dir. These SOLR_VAR_ROOT\{this} references aren't obvious.
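To make the -w resolution described above concrete, here is a hypothetical sketch (the flag and variable names follow the patch's usage prompt, not any released Solr; the paths are assumptions):

```shell
# Hypothetical: -w sets the writable root (SOLR_VAR_ROOT), e.g.
#   bin/solr start -w /var/solr
# Relative solr.data.home / solr.log.dir values ("data", "logs")
# would then resolve beneath it, per the usage prompt:
SOLR_VAR_ROOT=/var/solr
SOLR_DATA_HOME="$SOLR_VAR_ROOT/data"
SOLR_LOG_DIR="$SOLR_VAR_ROOT/logs"
echo "$SOLR_DATA_HOME $SOLR_LOG_DIR"
```

Under this reading, the `data\data\data` value observed above would be the relative default being prepended more than once.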
[jira] [Commented] (SOLR-12291) Async prematurely reports completed status that causes severe shard loss
[ https://issues.apache.org/jira/browse/SOLR-12291?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16918455#comment-16918455 ] Mikhail Khludnev commented on SOLR-12291: - It does. I haven't thought about porting it to 7.x. Would you like me to do so? > Async prematurely reports completed status that causes severe shard loss > > > Key: SOLR-12291 > URL: https://issues.apache.org/jira/browse/SOLR-12291 > Project: Solr > Issue Type: Bug > Components: Backup/Restore, SolrCloud > Reporter: Varun Thacker > Assignee: Mikhail Khludnev > Priority: Major > Fix For: 8.1, master (9.0) > > Attachments: SOLR-12291.patch, SOLR-12291.patch, SOLR-12291.patch, > SOLR-12291.patch, SOLR-12291.patch, SOLR-12291.patch, SOLR-12291.patch, > SOLR-122911.patch > > > The OverseerCollectionMessageHandler sliceCmd assumes only one replica exists > on one node. > When multiple replicas of a slice are on the same node, we only track one > replica's async request. This happens because the async requestMap's key is > "node_name". > I discovered this when [~alabax] shared some logs of a restore issue, where > the second replica got added before the first replica had completed its > restorecore action. > While looking at the logs I noticed that the overseer never called > REQUESTSTATUS for the restorecore action, almost as if it had missed > tracking that particular async request.
[jira] [Commented] (SOLR-13035) Utilize solr.data.home / solrDataHome in solr.xml to set all writable files in single directory
[ https://issues.apache.org/jira/browse/SOLR-13035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16918114#comment-16918114 ] Mikhail Khludnev commented on SOLR-13035: - Hello [~sarkaramr...@gmail.com], I resolved the conflicts a little and launched the Windows script. Here's what I have.
At first I added -v -V and examined the output. This one makes sense: GC_LOG_OPTS = "-Xlog:gc*:file=\"data\logs\solr_gc.log\":time,up
This one looks odd: SOLR_DATA_HOME = data\data\data
Here's what I have in Solr Admin. Shouldn't the tmp folder be moved to the writable home? -Djava.io.tmpdir=lucene-solr\solr\server\tmp
That's odd: -Dsolr.data.home=data\data\data
These are ok: -Dsolr.log.dir=data\logs -Dsolr.solr.home=\lucene-solr\solr\server\solr -Xlog:gc*:file="data\logs\solr_gc.log":
!image-2019-08-28-23-57-39-826.png!
I'm not sure if this is the expected behavior.
Also, the usage prompt seems vague to me: -w dir Solr will create all writable directories and files relative to this. solr.data.home, if relative, will be set as SOLR_VAR_ROOT\{this}/solr.data.home; solr.log.dir, if relative, will be set as SOLR_VAR_ROOT\{this}/solr.log.dir. These SOLR_VAR_ROOT\{this} references aren't obvious.
[jira] [Updated] (SOLR-13035) Utilize solr.data.home / solrDataHome in solr.xml to set all writable files in single directory
[ https://issues.apache.org/jira/browse/SOLR-13035?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mikhail Khludnev updated SOLR-13035: Attachment: image-2019-08-28-23-57-39-826.png
[jira] [Commented] (SOLR-13720) Impossible to create effective ToParenBlockJoinQuery in custom QParser
[ https://issues.apache.org/jira/browse/SOLR-13720?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16916086#comment-16916086 ] Mikhail Khludnev commented on SOLR-13720: - Thank you, [~slivotov]. Would you mind adding tests that just check the existence of the public methods?
[jira] [Comment Edited] (SOLR-13720) Impossible to create effective ToParenBlockJoinQuery in custom QParser
[ https://issues.apache.org/jira/browse/SOLR-13720?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16916086#comment-16916086 ] Mikhail Khludnev edited comment on SOLR-13720 at 8/26/19 7:28 PM: -- Thank you, [~slivotov]. Would you mind adding tests that just check the existence of the public methods? was (Author: mkhludnev): Thank you, [~slivotov]. Would you mind to add tests just checking public methods existence.
[jira] [Reopened] (SOLR-13719) SolrClient.ping() in 8.2, using SolrJ
[ https://issues.apache.org/jira/browse/SOLR-13719?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mikhail Khludnev reopened SOLR-13719: - > SolrClient.ping() in 8.2, using SolrJ > - > > Key: SOLR-13719 > URL: https://issues.apache.org/jira/browse/SOLR-13719 > Project: Solr > Issue Type: Bug > Security Level: Public (Default Security Level. Issues are Public) > Components: clients - java > Affects Versions: 8.2 > Environment: Linux Mint 19, Java 8 > Reporter: Benjamin Wade Friedman > Priority: Trivial > Labels: beginner, easyfix, newbie > > I started a local SolrCloud instance with two nodes and two > replicas per node. I created one empty collection on each node. So I guess > I have two shards per collection. > > I tried to use the ping method in SolrJ to verify my connected client. When > I try to use it, it throws ... > > Caused by: org.apache.solr.common.SolrException: No collection param > specified on request and no default collection has been set: [] > at > org.apache.solr.client.solrj.impl.BaseCloudSolrClient.sendRequest(BaseCloudSolrClient.java:1071) > ~[solr-solrj-8.2.0.jar:8.2.0 31d7ec7bbfdcd2c4cc61d9d35e962165410b65fe - > ivera - 2019-07-19 15:11:07] > > I cannot pass a collection name to the ping request. And the > CloudSolrClient.Builder does not allow me to declare a default collection. > BaseCloudSolrClient.setDefaultCollection(String) > is effectively deprecated because CloudSolrClient no longer has a public > constructor. > > Can we add an argument to the Builder > constructor to accept a string for the default collection? Or a new setter > on the Builder?
[jira] [Commented] (SOLR-13718) SPLITSHARD using async can cause data loss
[ https://issues.apache.org/jira/browse/SOLR-13718?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16915713#comment-16915713 ] Mikhail Khludnev commented on SOLR-13718: - bq. seems like there's a considerable refactoring there around handling of these collection API responses It's probably due to SOLR-12291. Let me know if you need some comments. > SPLITSHARD using async can cause data loss > -- > > Key: SOLR-13718 > URL: https://issues.apache.org/jira/browse/SOLR-13718 > Project: Solr > Issue Type: Improvement > Security Level: Public (Default Security Level. Issues are Public) > Affects Versions: 7.7.2 > Reporter: Ishan Chattopadhyaya > Assignee: Ishan Chattopadhyaya > Priority: Major > Attachments: SOLR-13718.patch, solr.zip > > > When using SPLITSHARD with async, if there are underlying failures in the > SPLIT core command or other sub-commands of SPLITSHARD, then SPLITSHARD > succeeds and results in two empty sub-shards. > There are various potential failures with the SPLIT core command; here's a way to > reproduce using a Solr 6x index in Solr 7x. > Steps to reproduce (in Solr 7x): > {code} > 1. Import the attached configset, and create a collection. > 2. Move in the attached data directory (index created in Solr6x) in place of > the created collection's data directory. Do a collection RELOAD. > 3. Issue a *:* query, we see 5 documents. > 4. Issue a SPLITSHARD, and then issue *:*, we see 0 documents. > {code}
[jira] [Created] (SOLR-13716) Check that (negative) sharded join behavior is consistent
Mikhail Khludnev created SOLR-13716: --- Summary: Check that (negative) sharded join behavior is consistent Key: SOLR-13716 URL: https://issues.apache.org/jira/browse/SOLR-13716 Project: Solr Issue Type: Test Security Level: Public (Default Security Level. Issues are Public) Components: query parsers Reporter: Mikhail Khludnev The error might be inconsistent; make sure it's always the same. See the linked thread. -- This message was sent by Atlassian Jira (v8.3.2#803003) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Created] (SOLR-13715) Explore join and multithread facets
Mikhail Khludnev created SOLR-13715: --- Summary: Explore join and multithread facets Key: SOLR-13715 URL: https://issues.apache.org/jira/browse/SOLR-13715 Project: Solr Issue Type: Test Security Level: Public (Default Security Level. Issues are Public) Components: query parsers Reporter: Mikhail Khludnev A user reported an issue when joins and multithreaded facets are used together; see the linked mail thread. Cover JSON facets and scored joins as well. -- This message was sent by Atlassian Jira (v8.3.2#803003) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-13650) Support for named global classloaders
[ https://issues.apache.org/jira/browse/SOLR-13650?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16911092#comment-16911092 ] Mikhail Khludnev commented on SOLR-13650: - bq. "pakacage" Is it a typo in a refguide or on purpose? > Support for named global classloaders > - > > Key: SOLR-13650 > URL: https://issues.apache.org/jira/browse/SOLR-13650 > Project: Solr > Issue Type: Sub-task > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Noble Paul >Priority: Major > Time Spent: 20m > Remaining Estimate: 0h > > {code:json} > curl -X POST -H 'Content-type:application/json' --data-binary ' > { > "add-package": { >"name": "my-package" , > "url" : "http://host:port/url/of/jar;, > "sha512":"" > } > }' http://localhost:8983/api/cluster > {code} > This means that Solr creates a globally accessible classloader with a name > {{my-package}} which contains all the jars of that package. > A component should be able to use the package by using the {{"package" : > "my-package"}}. > eg: > {code:json} > curl -X POST -H 'Content-type:application/json' --data-binary ' > { > "create-searchcomponent": { > "name": "my-searchcomponent" , > "class" : "my.path.to.ClassName", > "package" : "my-package" > } > }' http://localhost:8983/api/c/mycollection/config > {code} -- This message was sent by Atlassian Jira (v8.3.2#803003) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-13663) XML Query Parser to Support SpanPositionRangeQuery
[ https://issues.apache.org/jira/browse/SOLR-13663?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16907613#comment-16907613 ] Mikhail Khludnev commented on SOLR-13663: - Sorry. * master https://gitbox.apache.org/repos/asf?p=lucene-solr.git;a=commit;h=70162d3fb1a03b1ecd14135aec79cd1ccb481636 * branch_8x https://gitbox.apache.org/repos/asf?p=lucene-solr.git;a=commit;h=ea8f921e71e628e2334ea2fdf2f3c95c6fe427a8 > XML Query Parser to Support SpanPositionRangeQuery > -- > > Key: SOLR-13663 > URL: https://issues.apache.org/jira/browse/SOLR-13663 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) > Components: query parsers >Affects Versions: 8.2 >Reporter: Alessandro Benedetti >Priority: Major > Attachments: SOLR-13663.patch, SOLR-13663.patch > > Time Spent: 40m > Remaining Estimate: 0h > > Currently the XML Query Parser support a vast array of span queries, > including the SpanFirstQuery, but it doesn't support the generic > SpanPositionRangeQuery. > < SpanPositionRange start="2" end="3"> > prejudice > > > Scope of this issue is to introduce the related builder and allow the > possibility to build such queries. > -- This message was sent by Atlassian JIRA (v7.6.14#76016) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
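[Editorial note: the XML example quoted in the SOLR-13663 issue description above is garbled by Jira's wiki markup. Based on the conventions of Lucene's XML query parser span builders, the intended query presumably looks like the following sketch; the fieldName value is an assumption, since the original field name was lost in the mangling.]

```xml
<SpanPositionRange fieldName="contents" start="2" end="3">
  <SpanTerm>prejudice</SpanTerm>
</SpanPositionRange>
```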
[jira] [Updated] (SOLR-13663) XML Query Parser to Support SpanPositionRangeQuery
[ https://issues.apache.org/jira/browse/SOLR-13663?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mikhail Khludnev updated SOLR-13663: Resolution: Fixed Fix Version/s: 8.3 Status: Resolved (was: Patch Available) > XML Query Parser to Support SpanPositionRangeQuery > -- > > Key: SOLR-13663 > URL: https://issues.apache.org/jira/browse/SOLR-13663 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) > Components: query parsers >Affects Versions: 8.2 >Reporter: Alessandro Benedetti >Assignee: Mikhail Khludnev >Priority: Major > Fix For: 8.3 > > Attachments: SOLR-13663.patch, SOLR-13663.patch > > Time Spent: 40m > Remaining Estimate: 0h > > Currently the XML Query Parser support a vast array of span queries, > including the SpanFirstQuery, but it doesn't support the generic > SpanPositionRangeQuery. > < SpanPositionRange start="2" end="3"> > prejudice > > > Scope of this issue is to introduce the related builder and allow the > possibility to build such queries. > -- This message was sent by Atlassian JIRA (v7.6.14#76016) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Assigned] (SOLR-13663) XML Query Parser to Support SpanPositionRangeQuery
[ https://issues.apache.org/jira/browse/SOLR-13663?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mikhail Khludnev reassigned SOLR-13663: --- Assignee: Mikhail Khludnev > XML Query Parser to Support SpanPositionRangeQuery > -- > > Key: SOLR-13663 > URL: https://issues.apache.org/jira/browse/SOLR-13663 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) > Components: query parsers >Affects Versions: 8.2 >Reporter: Alessandro Benedetti >Assignee: Mikhail Khludnev >Priority: Major > Attachments: SOLR-13663.patch, SOLR-13663.patch > > Time Spent: 40m > Remaining Estimate: 0h > > Currently the XML Query Parser support a vast array of span queries, > including the SpanFirstQuery, but it doesn't support the generic > SpanPositionRangeQuery. > < SpanPositionRange start="2" end="3"> > prejudice > > > Scope of this issue is to introduce the related builder and allow the > possibility to build such queries. > -- This message was sent by Atlassian JIRA (v7.6.14#76016) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (SOLR-13663) XML Query Parser to Support SpanPositionRangeQuery
[ https://issues.apache.org/jira/browse/SOLR-13663?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mikhail Khludnev updated SOLR-13663: Attachment: SOLR-13663.patch > XML Query Parser to Support SpanPositionRangeQuery > -- > > Key: SOLR-13663 > URL: https://issues.apache.org/jira/browse/SOLR-13663 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) > Components: query parsers >Affects Versions: 8.2 >Reporter: Alessandro Benedetti >Priority: Major > Attachments: SOLR-13663.patch, SOLR-13663.patch > > Time Spent: 40m > Remaining Estimate: 0h > > Currently the XML Query Parser support a vast array of span queries, > including the SpanFirstQuery, but it doesn't support the generic > SpanPositionRangeQuery. > < SpanPositionRange start="2" end="3"> > prejudice > > > Scope of this issue is to introduce the related builder and allow the > possibility to build such queries. > -- This message was sent by Atlassian JIRA (v7.6.14#76016) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-13663) XML Query Parser to Support SpanPositionRangeQuery
[ https://issues.apache.org/jira/browse/SOLR-13663?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16907047#comment-16907047 ] Mikhail Khludnev commented on SOLR-13663: - It touches only Lucene codebase, shouldn't it go into Lucene project? > XML Query Parser to Support SpanPositionRangeQuery > -- > > Key: SOLR-13663 > URL: https://issues.apache.org/jira/browse/SOLR-13663 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) > Components: query parsers >Affects Versions: 8.2 >Reporter: Alessandro Benedetti >Priority: Major > Attachments: SOLR-13663.patch > > Time Spent: 40m > Remaining Estimate: 0h > > Currently the XML Query Parser support a vast array of span queries, > including the SpanFirstQuery, but it doesn't support the generic > SpanPositionRangeQuery. > < SpanPositionRange start="2" end="3"> > prejudice > > > Scope of this issue is to introduce the related builder and allow the > possibility to build such queries. > -- This message was sent by Atlassian JIRA (v7.6.14#76016) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-13188) NullPointerException in org.apache.lucene.search.IndexSearcher.rewrite(IndexSearcher.java:667)
[ https://issues.apache.org/jira/browse/SOLR-13188?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16906005#comment-16906005 ] Mikhail Khludnev commented on SOLR-13188: - I suppose there should be NPE check somewhere around https://github.com/apache/lucene-solr/blob/07ca02b7375a9c2564aba4c905e880a32d16e1df/solr/core/src/java/org/apache/solr/search/join/BlockJoinParentQParser.java#L59 > NullPointerException in > org.apache.lucene.search.IndexSearcher.rewrite(IndexSearcher.java:667) > -- > > Key: SOLR-13188 > URL: https://issues.apache.org/jira/browse/SOLR-13188 > Project: Solr > Issue Type: Bug >Affects Versions: master (9.0) > Environment: h1. Steps to reproduce > * Use a Linux machine. > * Build commit {{ea2c8ba}} of Solr as described in the section below. > * Build the films collection as described below. > * Start the server using the command {{./bin/solr start -f -p 8983 -s > /tmp/home}} > * Request the URL given in the bug description. > h1. Compiling the server > {noformat} > git clone https://github.com/apache/lucene-solr > cd lucene-solr > git checkout ea2c8ba > ant compile > cd solr > ant server > {noformat} > h1. Building the collection > We followed [Exercise > 2|http://lucene.apache.org/solr/guide/7_5/solr-tutorial.html#exercise-2] from > the [Solr > Tutorial|http://lucene.apache.org/solr/guide/7_5/solr-tutorial.html]. 
The > attached file ({{home.zip}}) gives the contents of folder {{/tmp/home}} that > you will obtain by following the steps below: > {noformat} > mkdir -p /tmp/home > echo '' > > /tmp/home/solr.xml > {noformat} > In one terminal start a Solr instance in foreground: > {noformat} > ./bin/solr start -f -p 8983 -s /tmp/home > {noformat} > In another terminal, create a collection of movies, with no shards and no > replication, and initialize it: > {noformat} > bin/solr create -c films > curl -X POST -H 'Content-type:application/json' --data-binary '{"add-field": > {"name":"name", "type":"text_general", "multiValued":false, "stored":true}}' > http://localhost:8983/solr/films/schema > curl -X POST -H 'Content-type:application/json' --data-binary > '{"add-copy-field" : {"source":"*","dest":"_text_"}}' > http://localhost:8983/solr/films/schema > ./bin/post -c films example/films/films.json > {noformat} >Reporter: Marek >Priority: Minor > Labels: diffblue, newdev > Attachments: home.zip > > > Requesting the following URL causes Solr to return an HTTP 500 error response: > {noformat} > http://localhost:8983/solr/films/select?q={!parent%20fq={!collapse%20field=id}} > {noformat} > The error response seems to be caused by the following uncaught exception: > {noformat} > ERROR (qtp689401025-21) [ x:films] o.a.s.s.HttpSolrCall > null:java.lang.NullPointerException > at > org.apache.lucene.search.IndexSearcher.rewrite(IndexSearcher.java:667) > at > org.apache.lucene.search.join.QueryBitSetProducer.getBitSet(QueryBitSetProducer.java:73) > at > org.apache.solr.search.join.BlockJoinParentQParser$BitDocIdSetFilterWrapper.getDocIdSet(BlockJoinParentQParser.java:135) > at > org.apache.solr.search.SolrConstantScoreQuery$ConstantWeight.scorer(SolrConstantScoreQuery.java:99) > at org.apache.lucene.search.Weight.bulkScorer(Weight.java:177) > at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:649) > at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:443) > at 
> org.apache.solr.search.SolrIndexSearcher.buildAndRunCollectorChain(SolrIndexSearcher.java:200) > at > org.apache.solr.search.SolrIndexSearcher.getDocListNC(SolrIndexSearcher.java:1604) > at > org.apache.solr.search.SolrIndexSearcher.getDocListC(SolrIndexSearcher.java:1420) > at > org.apache.solr.search.SolrIndexSearcher.search(SolrIndexSearcher.java:567) > at > org.apache.solr.handler.component.QueryComponent.doProcessUngroupedSearch(QueryComponent.java:1434) > at > org.apache.solr.handler.component.QueryComponent.process(QueryComponent.java:373) > at > org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:298) > at > org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:199) > at org.apache.solr.core.SolrCore.execute(SolrCore.java:2559) > at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:711) > at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:516) > at > org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:394) > at > org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:340) > [...] > {noformat} > In org/apache/lucene/search/join/QueryBitSetProducer.java[73]
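[Editorial note: the NPE check suggested for BlockJoinParentQParser could follow a simple fail-fast pattern. The sketch below is generic and self-contained; the method name and the use of IllegalArgumentException are stand-ins (the real fix would presumably throw a SolrException with a 400 status so the bad request is reported instead of the opaque NullPointerException in IndexSearcher.rewrite).]

```java
// Generic sketch of the proposed defensive check: reject a null parent
// filter query with a descriptive error before it reaches rewrite().
public class NullQueryGuard {
    /** Stand-in for validating the parsed "which parents" filter query. */
    static String validateParentFilter(String parentFilterQuery) {
        if (parentFilterQuery == null) {
            // The real fix would throw SolrException(BAD_REQUEST, ...).
            throw new IllegalArgumentException(
                "'which' parameter must specify a parent filter query");
        }
        return parentFilterQuery;
    }

    public static void main(String[] args) {
        // Valid input passes through unchanged.
        System.out.println(validateParentFilter("type:parent"));
        // Null input now yields a clear error instead of a later NPE.
        try {
            validateParentFilter(null);
        } catch (IllegalArgumentException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```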
[jira] [Commented] (SOLR-13663) XML Query Parser to Support SpanPositionRangeQuery
[ https://issues.apache.org/jira/browse/SOLR-13663?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16905894#comment-16905894 ] Mikhail Khludnev commented on SOLR-13663: - +1 > XML Query Parser to Support SpanPositionRangeQuery > -- > > Key: SOLR-13663 > URL: https://issues.apache.org/jira/browse/SOLR-13663 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) > Components: query parsers >Affects Versions: 8.2 >Reporter: Alessandro Benedetti >Priority: Major > Attachments: SOLR-13663.patch > > Time Spent: 40m > Remaining Estimate: 0h > > Currently the XML Query Parser support a vast array of span queries, > including the SpanFirstQuery, but it doesn't support the generic > SpanPositionRangeQuery. > < SpanPositionRange start="2" end="3"> > prejudice > > > Scope of this issue is to introduce the related builder and allow the > possibility to build such queries. > -- This message was sent by Atlassian JIRA (v7.6.14#76016) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Comment Edited] (SOLR-13545) ContentStreamUpdateRequest no longer closes stream
[ https://issues.apache.org/jira/browse/SOLR-13545?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16892594#comment-16892594 ] Mikhail Khludnev edited comment on SOLR-13545 at 7/25/19 9:51 AM: -- note https://issues.apache.org/jira/browse/SOLR-13637?focusedCommentId=16892277=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16892277 was (Author: mkhludnev): spawn SOLR-13651 > ContentStreamUpdateRequest no longer closes stream > -- > > Key: SOLR-13545 > URL: https://issues.apache.org/jira/browse/SOLR-13545 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: SolrJ >Affects Versions: 7.4, 7.5, 7.6, 7.7, 7.7.1, 7.7.2, 8.0, 8.1, 8.1.1 > Environment: Windows - file locking may not cause a visible failure > on Linux? >Reporter: Colvin Cowie >Priority: Major > Fix For: 8.2 > > Attachments: ContentStreamUpdateRequestBug.java, SOLR-13545.patch, > SOLR-13545.patch, SOLR-13545.patch > > Time Spent: 1h 20m > Remaining Estimate: 0h > > Since the change made in SOLR-12142 _ContentStreamUpdateRequest_ no longer > closes the stream that it opens. Therefore if streaming a file, it cannot be > deleted until the process exits. > > {code:java} > @Override > public RequestWriter.ContentWriter getContentWriter(String expectedType) { > if (contentStreams == null || contentStreams.isEmpty() || > contentStreams.size() > 1) return null; > ContentStream stream = contentStreams.get(0); > return new RequestWriter.ContentWriter() { > @Override > public void write(OutputStream os) throws IOException { > IOUtils.copy(stream.getStream(), os); > } > @Override > public String getContentType() { > return stream.getContentType(); > } > }; > } > {code} > IOUtils.copy will not close the stream. Adding a close to the write(), is > enough to "fix" it for the test case I've attached, e.g. 
> > {code:java} > @Override > public void write(OutputStream os) throws IOException { > final InputStream innerStream = stream.getStream(); > try { > IOUtils.copy(innerStream, os); > } finally { > IOUtils.closeQuietly(innerStream); > } > } > {code} > > I don't know whether any other streaming classes have similar issues > > -- This message was sent by Atlassian JIRA (v7.6.14#76016) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Closed] (SOLR-13651) BasicAuthIntegrationTest.testBasicAuth failure caused by No contentStream in .doEdit(SecurityConfHandler.java:103)
[ https://issues.apache.org/jira/browse/SOLR-13651?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mikhail Khludnev closed SOLR-13651. --- > BasicAuthIntegrationTest.testBasicAuth failure caused by No contentStream in > .doEdit(SecurityConfHandler.java:103) > -- > > Key: SOLR-13651 > URL: https://issues.apache.org/jira/browse/SOLR-13651 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: Authentication >Affects Versions: 8.2 >Reporter: Mikhail Khludnev >Priority: Major > > Test fails on v2 auth request. -- This message was sent by Atlassian JIRA (v7.6.14#76016) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Resolved] (SOLR-13651) BasicAuthIntegrationTest.testBasicAuth failure caused by No contentStream in .doEdit(SecurityConfHandler.java:103)
[ https://issues.apache.org/jira/browse/SOLR-13651?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mikhail Khludnev resolved SOLR-13651. - Resolution: Duplicate Thanks, [~noble.paul] > BasicAuthIntegrationTest.testBasicAuth failure caused by No contentStream in > .doEdit(SecurityConfHandler.java:103) > -- > > Key: SOLR-13651 > URL: https://issues.apache.org/jira/browse/SOLR-13651 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: Authentication >Affects Versions: 8.2 >Reporter: Mikhail Khludnev >Priority: Major > > Test fails on v2 auth request. -- This message was sent by Atlassian JIRA (v7.6.14#76016) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-13651) BasicAuthIntegrationTest.testBasicAuth failure caused by No contentStream in .doEdit(SecurityConfHandler.java:103)
[ https://issues.apache.org/jira/browse/SOLR-13651?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16892599#comment-16892599 ] Mikhail Khludnev commented on SOLR-13651: - https://jenkins.thetaphi.de/job/Lucene-Solr-8.x-Linux/911/console Reproduces on {{branch_8x}} {code} [junit4] 2> 26528 INFO (TEST-BasicAuthIntegrationTest.testBasicAuth-seed#[B292FDDCA6F4D6F2]) [ ] o.a.h.i.e.RetryExec I/O exception (org.apache.http.NoHttpResponseException) caught when processing request to {s}->https://127.0.0.1:40505: The target server failed to respond [junit4] 2> 26528 INFO (TEST-BasicAuthIntegrationTest.testBasicAuth-seed#[B292FDDCA6F4D6F2]) [ ] o.a.h.i.e.RetryExec Retrying request to {s}->https://127.0.0.1:40505 [junit4] 2> 26588 INFO (qtp1361614499-740) [n:127.0.0.1:40505_solr ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/cluster/security/authentication params={} status=0 QTime=0 [junit4] 2> 26601 ERROR (qtp1952522099-600) [n:127.0.0.1:44583_solr ] o.a.s.h.RequestHandlerBase org.apache.solr.common.SolrException: No contentStream [junit4] 2>at org.apache.solr.handler.admin.SecurityConfHandler.doEdit(SecurityConfHandler.java:103) [junit4] 2>at org.apache.solr.handler.admin.SecurityConfHandler.handleRequestBody(SecurityConfHandler.java:85) [junit4] 2>at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:199) [junit4] 2>at org.apache.solr.api.ApiBag$ReqHandlerToApi.call(ApiBag.java:247) [junit4] 2>at org.apache.solr.api.V2HttpCall.handleAdmin(V2HttpCall.java:341) .. [junit4] 2> 26601 INFO (qtp1952522099-600) [n:127.0.0.1:44583_solr ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/cluster/security/authentication params={wt=javabin=2} status=400 QTime=0 .. 
[junit4] FAILURE 6.82s J2 | BasicAuthIntegrationTest.testBasicAuth <<< [junit4]> Throwable #1: java.lang.AssertionError: expected:<401> but was:<400> [junit4]>at __randomizedtesting.SeedInfo.seed([B292FDDCA6F4D6F2:EFC8BCE02A75588]:0) [junit4]>at org.apache.solr.security.BasicAuthIntegrationTest.testBasicAuth(BasicAuthIntegrationTest.java:151) ant test -Dtestcase=BasicAuthIntegrationTest -Dtests.method=testBasicAuth -Dtests.seed=B292FDDCA6F4D6F2 -Dtests.multiplier=3 -Dtests.slow=true -Dtests.badapples=true -Dtests.locale=et-EE -Dtests.timezone=Pacific/Easter -Dtests.asserts=true -Dtests.file.encoding=US-ASCII {code} > BasicAuthIntegrationTest.testBasicAuth failure caused by No contentStream in > .doEdit(SecurityConfHandler.java:103) > -- > > Key: SOLR-13651 > URL: https://issues.apache.org/jira/browse/SOLR-13651 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: Authentication >Affects Versions: 8.2 >Reporter: Mikhail Khludnev >Priority: Major > > Test fails on v2 auth request. -- This message was sent by Atlassian JIRA (v7.6.14#76016) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-13545) ContentStreamUpdateRequest no longer closes stream
[ https://issues.apache.org/jira/browse/SOLR-13545?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16892594#comment-16892594 ] Mikhail Khludnev commented on SOLR-13545: - spawn SOLR-13651 > ContentStreamUpdateRequest no longer closes stream > -- > > Key: SOLR-13545 > URL: https://issues.apache.org/jira/browse/SOLR-13545 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: SolrJ >Affects Versions: 7.4, 7.5, 7.6, 7.7, 7.7.1, 7.7.2, 8.0, 8.1, 8.1.1 > Environment: Windows - file locking may not cause a visible failure > on Linux? >Reporter: Colvin Cowie >Priority: Major > Fix For: 8.2 > > Attachments: ContentStreamUpdateRequestBug.java, SOLR-13545.patch, > SOLR-13545.patch, SOLR-13545.patch > > Time Spent: 1h 20m > Remaining Estimate: 0h > > Since the change made in SOLR-12142 _ContentStreamUpdateRequest_ no longer > closes the stream that it opens. Therefore if streaming a file, it cannot be > deleted until the process exits. > > {code:java} > @Override > public RequestWriter.ContentWriter getContentWriter(String expectedType) { > if (contentStreams == null || contentStreams.isEmpty() || > contentStreams.size() > 1) return null; > ContentStream stream = contentStreams.get(0); > return new RequestWriter.ContentWriter() { > @Override > public void write(OutputStream os) throws IOException { > IOUtils.copy(stream.getStream(), os); > } > @Override > public String getContentType() { > return stream.getContentType(); > } > }; > } > {code} > IOUtils.copy will not close the stream. Adding a close to the write(), is > enough to "fix" it for the test case I've attached, e.g. 
> > {code:java} > @Override > public void write(OutputStream os) throws IOException { > final InputStream innerStream = stream.getStream(); > try { > IOUtils.copy(innerStream, os); > } finally { > IOUtils.closeQuietly(innerStream); > } > } > {code} > > I don't know whether any other streaming classes have similar issues > > -- This message was sent by Atlassian JIRA (v7.6.14#76016) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Created] (SOLR-13651) BasicAuthIntegrationTest.testBasicAuth failure caused by No contentStream in .doEdit(SecurityConfHandler.java:103)
Mikhail Khludnev created SOLR-13651: --- Summary: BasicAuthIntegrationTest.testBasicAuth failure caused by No contentStream in .doEdit(SecurityConfHandler.java:103) Key: SOLR-13651 URL: https://issues.apache.org/jira/browse/SOLR-13651 Project: Solr Issue Type: Bug Security Level: Public (Default Security Level. Issues are Public) Components: Authentication Affects Versions: 8.2 Reporter: Mikhail Khludnev Test fails on v2 auth request. -- This message was sent by Atlassian JIRA (v7.6.14#76016) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-13545) ContentStreamUpdateRequest no longer closes stream
[ https://issues.apache.org/jira/browse/SOLR-13545?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16892208#comment-16892208 ] Mikhail Khludnev commented on SOLR-13545: - Reverting {{/ContentStreamUpdateRequest.java}} locally doesn't heal the test failure at branch_8x. I see that this failure happens on v2 auth request. > ContentStreamUpdateRequest no longer closes stream > -- > > Key: SOLR-13545 > URL: https://issues.apache.org/jira/browse/SOLR-13545 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: SolrJ >Affects Versions: 7.4, 7.5, 7.6, 7.7, 7.7.1, 7.7.2, 8.0, 8.1, 8.1.1 > Environment: Windows - file locking may not cause a visible failure > on Linux? >Reporter: Colvin Cowie >Priority: Major > Fix For: 8.2 > > Attachments: ContentStreamUpdateRequestBug.java, SOLR-13545.patch, > SOLR-13545.patch, SOLR-13545.patch > > Time Spent: 1h 20m > Remaining Estimate: 0h > > Since the change made in SOLR-12142 _ContentStreamUpdateRequest_ no longer > closes the stream that it opens. Therefore if streaming a file, it cannot be > deleted until the process exits. > > {code:java} > @Override > public RequestWriter.ContentWriter getContentWriter(String expectedType) { > if (contentStreams == null || contentStreams.isEmpty() || > contentStreams.size() > 1) return null; > ContentStream stream = contentStreams.get(0); > return new RequestWriter.ContentWriter() { > @Override > public void write(OutputStream os) throws IOException { > IOUtils.copy(stream.getStream(), os); > } > @Override > public String getContentType() { > return stream.getContentType(); > } > }; > } > {code} > IOUtils.copy will not close the stream. Adding a close to the write(), is > enough to "fix" it for the test case I've attached, e.g. 
> > {code:java} > @Override > public void write(OutputStream os) throws IOException { > final InputStream innerStream = stream.getStream(); > try { > IOUtils.copy(innerStream, os); > } finally { > IOUtils.closeQuietly(innerStream); > } > } > {code} > > I don't know whether any other streaming classes have similar issues > > -- This message was sent by Atlassian JIRA (v7.6.14#76016) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Comment Edited] (SOLR-13545) ContentStreamUpdateRequest no longer closes stream
[ https://issues.apache.org/jira/browse/SOLR-13545?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16891423#comment-16891423 ] Mikhail Khludnev edited comment on SOLR-13545 at 7/23/19 10:30 PM: --- Reproduced for me on branch_8x. I'll check it tomorrow. {{ant test -Dtestcase=BasicAuthIntegrationTest -Dtests.method=testBasicAuth -Dtests.seed=B292FDDCA6F4D6F2 -Dtests.multiplier=3 -Dtests.slow=true -Dtests.locale=et-EE -Dtests.timezone=Pacific/Easter -Dtests.asserts=true -Dtests.file.encoding=US-ASCII}} was (Author: mkhludnev): Reproduced for me. I'll check it tomorrow. > ContentStreamUpdateRequest no longer closes stream > -- > > Key: SOLR-13545 > URL: https://issues.apache.org/jira/browse/SOLR-13545 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: SolrJ >Affects Versions: 7.4, 7.5, 7.6, 7.7, 7.7.1, 7.7.2, 8.0, 8.1, 8.1.1 > Environment: Windows - file locking may not cause a visible failure > on Linux? >Reporter: Colvin Cowie >Priority: Major > Fix For: 8.2 > > Attachments: ContentStreamUpdateRequestBug.java, SOLR-13545.patch, > SOLR-13545.patch, SOLR-13545.patch > > Time Spent: 1h 20m > Remaining Estimate: 0h > > Since the change made in SOLR-12142 _ContentStreamUpdateRequest_ no longer > closes the stream that it opens. Therefore if streaming a file, it cannot be > deleted until the process exits. > > {code:java} > @Override > public RequestWriter.ContentWriter getContentWriter(String expectedType) { > if (contentStreams == null || contentStreams.isEmpty() || > contentStreams.size() > 1) return null; > ContentStream stream = contentStreams.get(0); > return new RequestWriter.ContentWriter() { > @Override > public void write(OutputStream os) throws IOException { > IOUtils.copy(stream.getStream(), os); > } > @Override > public String getContentType() { > return stream.getContentType(); > } > }; > } > {code} > IOUtils.copy will not close the stream. 
Adding a close to the write(), is > enough to "fix" it for the test case I've attached, e.g. > > {code:java} > @Override > public void write(OutputStream os) throws IOException { > final InputStream innerStream = stream.getStream(); > try { > IOUtils.copy(innerStream, os); > } finally { > IOUtils.closeQuietly(innerStream); > } > } > {code} > > I don't know whether any other streaming classes have similar issues > > -- This message was sent by Atlassian JIRA (v7.6.14#76016) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-13545) ContentStreamUpdateRequest no longer closes stream
[ https://issues.apache.org/jira/browse/SOLR-13545?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16891423#comment-16891423 ]

Mikhail Khludnev commented on SOLR-13545:
-----------------------------------------

Reproduced for me. I'll check it tomorrow.

> ContentStreamUpdateRequest no longer closes stream
> --------------------------------------------------
>
>                 Key: SOLR-13545
>                 URL: https://issues.apache.org/jira/browse/SOLR-13545
>             Project: Solr
>          Issue Type: Bug
>  Security Level: Public (Default Security Level. Issues are Public)
>      Components: SolrJ
>    Affects Versions: 7.4, 7.5, 7.6, 7.7, 7.7.1, 7.7.2, 8.0, 8.1, 8.1.1
>     Environment: Windows - file locking may not cause a visible failure on Linux?
>            Reporter: Colvin Cowie
>            Priority: Major
>             Fix For: 8.2
>
>         Attachments: ContentStreamUpdateRequestBug.java, SOLR-13545.patch, SOLR-13545.patch, SOLR-13545.patch
>
>          Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> Since the change made in SOLR-12142, _ContentStreamUpdateRequest_ no longer closes the stream that it opens. Therefore, if streaming a file, the file cannot be deleted until the process exits.
>
> {code:java}
> @Override
> public RequestWriter.ContentWriter getContentWriter(String expectedType) {
>   if (contentStreams == null || contentStreams.isEmpty() || contentStreams.size() > 1) return null;
>   ContentStream stream = contentStreams.get(0);
>   return new RequestWriter.ContentWriter() {
>     @Override
>     public void write(OutputStream os) throws IOException {
>       IOUtils.copy(stream.getStream(), os);
>     }
>
>     @Override
>     public String getContentType() {
>       return stream.getContentType();
>     }
>   };
> }
> {code}
> IOUtils.copy will not close the stream. Adding a close to the write() is enough to "fix" it for the test case I've attached, e.g.
>
> {code:java}
> @Override
> public void write(OutputStream os) throws IOException {
>   final InputStream innerStream = stream.getStream();
>   try {
>     IOUtils.copy(innerStream, os);
>   } finally {
>     IOUtils.closeQuietly(innerStream);
>   }
> }
> {code}
>
> I don't know whether any other streaming classes have similar issues.
[jira] [Assigned] (SOLR-9961) RestoreCore needs the option to download files in parallel.
[ https://issues.apache.org/jira/browse/SOLR-9961?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Mikhail Khludnev reassigned SOLR-9961:
--------------------------------------

    Assignee: Mikhail Khludnev

> RestoreCore needs the option to download files in parallel.
> -----------------------------------------------------------
>
>                 Key: SOLR-9961
>                 URL: https://issues.apache.org/jira/browse/SOLR-9961
>             Project: Solr
>          Issue Type: Improvement
>      Components: Backup/Restore
>    Affects Versions: 6.2.1
>            Reporter: Timothy Potter
>            Assignee: Mikhail Khludnev
>            Priority: Major
>         Attachments: SOLR-9961.patch, SOLR-9961.patch, SOLR-9961.patch, SOLR-9961.patch, SOLR-9961.patch
>
> My backup to cloud storage (Google Cloud Storage in this case, but I think this is a general problem) takes 8 minutes ... the restore of the same core takes hours. The restore loop in RestoreCore is serial and doesn't allow me to parallelize the expensive part of this operation (the IO from the remote cloud storage service). We need the option to parallelize the download (like distcp).
> Also, I tried downloading the same directory using gsutil and it was very fast, like 2 minutes. So I know it's not the pipe that's limiting perf here.
> Here's a very rough patch that does the parallelization. We may also want to consider a two-step approach: 1) download in parallel to a temp dir, 2) perform all of the checksum validation against the local temp dir. That will save round trips to the remote cloud storage.
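The parallelization the issue asks for can be sketched with a fixed thread pool over the serial restore loop. This is only an illustration in the spirit of the attached patch, not its contents: `restoreAll` and the `download` callback (a stand-in for the per-file copy from the remote backup repository) are hypothetical names.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.function.Consumer;

// Sketch: submit one download task per file to a bounded pool, then wait
// for all of them, so slow remote IO overlaps instead of running serially.
public final class ParallelRestoreSketch {
    public static void restoreAll(List<String> files, int threads,
                                  Consumer<String> download) {
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        try {
            List<Future<?>> futures = new ArrayList<>();
            for (String f : files) {
                futures.add(pool.submit(() -> download.accept(f)));
            }
            for (Future<?> fut : futures) {
                try {
                    fut.get(); // propagate any download failure
                } catch (InterruptedException | ExecutionException e) {
                    throw new RuntimeException("download failed", e);
                }
            }
        } finally {
            pool.shutdown();
        }
    }
}
```

The two-step variant mentioned above would point `download` at a local temp dir and run checksum validation there afterward, saving round trips to remote storage.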
[jira] [Commented] (SOLR-9961) RestoreCore needs the option to download files in parallel.
[ https://issues.apache.org/jira/browse/SOLR-9961?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16890741#comment-16890741 ]

Mikhail Khludnev commented on SOLR-9961:
----------------------------------------

bq. backups to HDFS and to S3 (via S3A)

[~TimOwen], beware of SOLR-11556.
[jira] [Commented] (SOLR-9961) RestoreCore needs the option to download files in parallel.
[ https://issues.apache.org/jira/browse/SOLR-9961?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16889991#comment-16889991 ]

Mikhail Khludnev commented on SOLR-9961:
----------------------------------------

[~TimOwen], I'm not able to measure it now; your observations would be much appreciated. Note that this is intended for cloud storage, where it might be more significant than on HDFS.
[jira] [Updated] (SOLR-11556) Backup/Restore with multiple BackupRepository objects defined results in the wrong repo being used.
[ https://issues.apache.org/jira/browse/SOLR-11556?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Mikhail Khludnev updated SOLR-11556:
------------------------------------
       Resolution: Fixed
    Fix Version/s: 8.3
           Status: Resolved  (was: Patch Available)

https://jenkins.thetaphi.de/job/Lucene-Solr-8.x-Linux/900/testReport/org.apache.solr.cloud.api.collections/TestLocalFSCloudBackupRestore/

> Backup/Restore with multiple BackupRepository objects defined results in the wrong repo being used.
> ---------------------------------------------------------------------------------------------------
>
>                 Key: SOLR-11556
>                 URL: https://issues.apache.org/jira/browse/SOLR-11556
>             Project: Solr
>          Issue Type: Bug
>      Components: Backup/Restore
>    Affects Versions: 6.3
>            Reporter: Timothy Potter
>            Assignee: Mikhail Khludnev
>            Priority: Major
>             Fix For: 8.3
>
>         Attachments: SOLR-11556.patch, SOLR-11556.patch, SOLR-11556.patch, SOLR-11556.patch
>
> I defined two repos for backup/restore, one local and one remote on GCS, e.g.
> {code}
> <backup>
>   <repository name="hdfs" class="org.apache.solr.core.backup.repository.HdfsBackupRepository" default="false">
>     ...
>   </repository>
>   <repository name="local" class="org.apache.solr.core.backup.repository.LocalFileSystemRepository" default="false">
>     <str name="location">/tmp/solr-backups</str>
>   </repository>
> </backup>
> {code}
> Since the CollectionHandler does not pass the "repository" param along, once the BackupCmd gets the ZkNodeProps, it selects the wrong repo!
> The error I'm seeing is:
> {code}
> 2017-10-26 17:07:27.326 ERROR (OverseerThreadFactory-19-thread-1-processing-n:host:8983_solr) [   ] o.a.s.c.OverseerCollectionMessageHandler Collection: product operation: backup failed:java.nio.file.FileSystemNotFoundException: Provider "gs" not installed
>     at java.nio.file.Paths.get(Paths.java:147)
>     at org.apache.solr.core.backup.repository.LocalFileSystemRepository.resolve(LocalFileSystemRepository.java:82)
>     at org.apache.solr.cloud.BackupCmd.call(BackupCmd.java:99)
>     at org.apache.solr.cloud.OverseerCollectionMessageHandler.processMessage(OverseerCollectionMessageHandler.java:224)
>     at org.apache.solr.cloud.OverseerTaskProcessor$Runner.run(OverseerTaskProcessor.java:463)
>     at org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:229)
>     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>     at java.lang.Thread.run(Thread.java:748)
> {code}
> Notice the local backup repo is being selected in the BackupCmd even though I passed repository=hdfs in my backup command, e.g.
> {code}
> curl "http://localhost:8983/solr/admin/collections?action=BACKUP&name=foo&collection=foo&location=gs://tjp-solr-test/backups&repository=hdfs"
> {code}
> I think the fix here is to include the repository param; see patch. I'll fix this for the next 7.x release, and those on 6.x can just apply the patch here.
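The failure mode described above, where a dropped request parameter makes the default repository win, can be sketched as a name lookup with a fallback. All names here are hypothetical illustrations, not the actual Solr BackupRepositoryFactory API:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of repository selection by name. The fix in the issue amounts to
// forwarding the "repository" request param so lookup() receives a non-null
// name; when the param is dropped, the default entry is silently chosen.
public final class RepoSelectionSketch {
    private final Map<String, String> reposByName = new HashMap<>();
    private final String defaultName;

    public RepoSelectionSketch(String defaultName) {
        this.defaultName = defaultName;
    }

    public void register(String name, String implClass) {
        reposByName.put(name, implClass);
    }

    // name == null models the bug: the caller failed to pass the param along.
    public String lookup(String name) {
        String effective = (name != null) ? name : defaultName;
        String implClass = reposByName.get(effective);
        if (implClass == null) {
            throw new IllegalArgumentException("Unknown repository: " + effective);
        }
        return implClass;
    }
}
```

With `repository=hdfs` forwarded, `lookup("hdfs")` picks the HDFS repo; with the param lost, `lookup(null)` falls back to the local repo, which then cannot resolve a `gs://` location.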
[jira] [Commented] (SOLR-9961) RestoreCore needs the option to download files in parallel.
[ https://issues.apache.org/jira/browse/SOLR-9961?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16889599#comment-16889599 ]

Mikhail Khludnev commented on SOLR-9961:
----------------------------------------

Linking a bunch of JIRAs suggesting that {{fs.hdfs.impl.disable.cache=true}} is the cure-all here, which I find hard to believe.
[jira] [Commented] (SOLR-11556) Backup/Restore with multiple BackupRepository objects defined results in the wrong repo being used.
[ https://issues.apache.org/jira/browse/SOLR-11556?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16887962#comment-16887962 ]

Mikhail Khludnev commented on SOLR-11556:
-----------------------------------------

I'm going to commit it soon. Let me know if there is something which might be a concern.
[jira] [Updated] (SOLR-11556) Backup/Restore with multiple BackupRepository objects defined results in the wrong repo being used.
[ https://issues.apache.org/jira/browse/SOLR-11556?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Mikhail Khludnev updated SOLR-11556:
------------------------------------
    Attachment: SOLR-11556.patch
[jira] [Updated] (SOLR-11556) Backup/Restore with multiple BackupRepository objects defined results in the wrong repo being used.
[ https://issues.apache.org/jira/browse/SOLR-11556?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Mikhail Khludnev updated SOLR-11556:
------------------------------------
    Attachment: SOLR-11556.patch
[jira] [Updated] (SOLR-11556) Backup/Restore with multiple BackupRepository objects defined results in the wrong repo being used.
[ https://issues.apache.org/jira/browse/SOLR-11556?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Mikhail Khludnev updated SOLR-11556:
------------------------------------
    Status: Patch Available  (was: Open)
[jira] [Updated] (SOLR-11556) Backup/Restore with multiple BackupRepository objects defined results in the wrong repo being used.
[ https://issues.apache.org/jira/browse/SOLR-11556?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Mikhail Khludnev updated SOLR-11556:
------------------------------------
    Attachment: SOLR-11556.patch
        Status: Open  (was: Open)
[jira] [Updated] (SOLR-9961) RestoreCore needs the option to download files in parallel.
[ https://issues.apache.org/jira/browse/SOLR-9961?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Mikhail Khludnev updated SOLR-9961:
-----------------------------------
    Attachment: SOLR-9961.patch
[jira] [Assigned] (SOLR-11556) Backup/Restore with multiple BackupRepository objects defined results in the wrong repo being used.
[ https://issues.apache.org/jira/browse/SOLR-11556?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Mikhail Khludnev reassigned SOLR-11556:
---------------------------------------
    Assignee: Mikhail Khludnev  (was: Timothy Potter)
[jira] [Updated] (SOLR-13630) Check if HdfsTestUtil.teardownClass() may shutdown HDFS fully
[ https://issues.apache.org/jira/browse/SOLR-13630?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Mikhail Khludnev updated SOLR-13630:
------------------------------------
    Status: Patch Available  (was: Open)

> Check if HdfsTestUtil.teardownClass() may shutdown HDFS fully
> -------------------------------------------------------------
>
>                 Key: SOLR-13630
>                 URL: https://issues.apache.org/jira/browse/SOLR-13630
>             Project: Solr
>          Issue Type: Sub-task
>            Reporter: Mikhail Khludnev
>            Priority: Major
>         Attachments: SOLR-13630.patch
>
> I want to check if it's feasible to stop all hdfs threads instead of ignoring them in lingering.
> Spoiler: -no sense-.
[jira] [Updated] (SOLR-13630) Check if HdfsTestUtil.teardownClass() may shutdown HDFS fully
[ https://issues.apache.org/jira/browse/SOLR-13630?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Mikhail Khludnev updated SOLR-13630:
------------------------------------
    Attachment: SOLR-13630.patch
[jira] [Updated] (SOLR-13630) Check if HdfsTestUtil.teardownClass() may shutdown HDFS fully
[ https://issues.apache.org/jira/browse/SOLR-13630?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Mikhail Khludnev updated SOLR-13630:
------------------------------------
    Summary: Check if HdfsTestUtil.teardownClass() may shutdown HDFS fully  (was: Check if possible to shutdown HDFS fully)
[jira] [Created] (SOLR-13630) Check if possible to shutdown HDFS fully
Mikhail Khludnev created SOLR-13630: --- Summary: Check if possible to shutdown HDFS fully Key: SOLR-13630 URL: https://issues.apache.org/jira/browse/SOLR-13630 Project: Solr Issue Type: Sub-task Reporter: Mikhail Khludnev I want to check if it's feasible to stop all hdfs threads instead of ignoring them in lingering. Spoiler: -no sense-. -- This message was sent by Atlassian JIRA (v7.6.14#76016) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Comment Edited] (SOLR-13587) Close BackupRepository after every usage
[ https://issues.apache.org/jira/browse/SOLR-13587?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16877708#comment-16877708 ] Mikhail Khludnev edited comment on SOLR-13587 at 7/3/19 10:39 AM: -- I suppose we need to go with it and enforce closing backup repositories. Need to think how to enforce it across tests. Here's why: HdfsBackupRepo should call FileSystem.newInstance(), then supply that instance to HDFSDirectories, and close it afterward. Otherwise, it will either open an (HD)FS on every file access (which is expected to be slow) or accidentally hit an already-closed FS, like in SOLR-9961. Concerns, opinions? was (Author: mkhludnev): I suppose we need to go with it and enforce closing backup repositories. Need to think how to enforce it across tests. Here's why: HdfsBackupRepo should call FileSystem.newInstance(), then supply that instance to HDFSDirectories, and close it afterward. Otherwise, it will either open an (HD)FS on every file access (which is expected to be slow) or accidentally hit an already-closed FS. Concerns, opinions? > Close BackupRepository after every usage > > > Key: SOLR-13587 > URL: https://issues.apache.org/jira/browse/SOLR-13587 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: Backup/Restore >Affects Versions: 8.1 >Reporter: Mikhail Khludnev >Assignee: Mikhail Khludnev >Priority: Major > Attachments: SOLR-13587.patch > > > Turns out BackupRepository is created every operation, but never closed. I > suppose it leads to necessity to have {{BadHdfsThreadsFilter}} in > {{TestHdfsCloudBackupRestore}}. Also, test need to repeat backup/restore > operation to make sure that closing hdfs filesystem doesn't break it see > SOLR-9961 for the case. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-13587) Close BackupRepository after every usage
[ https://issues.apache.org/jira/browse/SOLR-13587?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16877708#comment-16877708 ] Mikhail Khludnev commented on SOLR-13587: - I suppose we need to go with it and enforce closing backup repositories. Need to think how to enforce it across tests. Here's why: HdfsBackupRepo should call FileSystem.newInstance(), then supply that instance to HDFSDirectories, and close it afterward. Otherwise, it will either open an (HD)FS on every file access (which is expected to be slow) or accidentally hit an already-closed FS. Concerns, opinions? > Close BackupRepository after every usage > > > Key: SOLR-13587 > URL: https://issues.apache.org/jira/browse/SOLR-13587 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: Backup/Restore >Affects Versions: 8.1 >Reporter: Mikhail Khludnev >Assignee: Mikhail Khludnev >Priority: Major > Attachments: SOLR-13587.patch > > > Turns out BackupRepository is created every operation, but never closed. I > suppose it leads to necessity to have {{BadHdfsThreadsFilter}} in > {{TestHdfsCloudBackupRestore}}. Also, test need to repeat backup/restore > operation to make sure that closing hdfs filesystem doesn't break it see > SOLR-9961 for the case. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
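The ownership rule described above — the repository creates the filesystem handle once, hands it to the directories it opens, and closes it when the operation ends — can be sketched in plain Java. Everything below is hypothetical stand-in code (the class and method names are not Solr's actual BackupRepository API), just an illustration of the lifetime being proposed:

```java
import java.io.Closeable;
import java.util.concurrent.atomic.AtomicInteger;

public class RepoLifecycleSketch {
    // Stand-in for a shared filesystem handle such as an HDFS FileSystem.
    static class SharedFs implements Closeable {
        static final AtomicInteger openHandles = new AtomicInteger();
        SharedFs() { openHandles.incrementAndGet(); }
        @Override public void close() { openHandles.decrementAndGet(); }
    }

    // Hypothetical repository: owns exactly one FS for its whole lifetime,
    // analogous to calling FileSystem.newInstance() in the constructor.
    static class Repo implements Closeable {
        final SharedFs fs = new SharedFs();
        void copyFile(String name) { /* would read via this.fs, not a fresh FS per file */ }
        @Override public void close() { fs.close(); }
    }

    // One backup operation: the repository is closed when the operation ends,
    // which releases the FS it created. Returns the number of handles still open.
    public static int backup(String... files) {
        try (Repo repo = new Repo()) {
            for (String f : files) repo.copyFile(f);
        }
        return SharedFs.openHandles.get(); // 0 when ownership is respected
    }
}
```

The point of the sketch: neither a fresh FS per file nor a prematurely shared-and-closed FS can occur, because the handle's lifetime equals the repository's.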
[jira] [Resolved] (LUCENE-8902) Index-time join ToParentBlockJoinQuery query produces incorrect result with child wildcards
[ https://issues.apache.org/jira/browse/LUCENE-8902?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mikhail Khludnev resolved LUCENE-8902. -- Resolution: Not A Problem bq. Returns 2 docs ["id1", "id3"]. It should only return "id1" and not "id3" here. Very strange behavior. # Not at all. The child query matches id2's children, but since id2 is absent in the parent mask, they land on the next bit, which is id3. # I don't think child-free parents are supported, although I don't remember why. # Not having the last segment doc in the parent mask (id=id5) should cause an exception, IIRC. # Please follow JIRA usage rules: come to the mailing list first. > Index-time join ToParentBlockJoinQuery query produces incorrect result with > child wildcards > --- > > Key: LUCENE-8902 > URL: https://issues.apache.org/jira/browse/LUCENE-8902 > Project: Lucene - Core > Issue Type: Bug > Components: modules/join >Affects Versions: 8.1.1 >Reporter: Andrei >Priority: Major > > When I do a index-time join query on certain parent docs with a wildcard > query for child docs, sometimes I get the wrong answer. Example: > > ||Parent Doc||Children|| > |id=id0| none| > |id=id1| # program=P1| > |id=id2| # program=P1 > # program=P2| > |id=id3| none| > |id=id4| # program=P1| > |id=id5| # program=P1 > # program=P2| > So essentially I have 6 parent docs, doc 0 has no children, doc 1 has 1 > child, doc 2 has 2 children, etc. > 1. The following query gives the correct results: > BitSetProducer parentSet = new QueryBitSetProducer(new > TermInSetQuery("id", toSet("id0", "id1", "id2", "id3", > "id4", "id5"))); > Query q = new ToParentBlockJoinQuery(new TermInSetQuery("program", > toSet("P1", "P2")), parentSet, ScoreMode.None); > Returns the correct result (4 docs: ["id1", "id2", "id4", > "id5"] > > 2. 
This also gives correct result (same as above): > BitSetProducer parentSet = new QueryBitSetProducer(new > TermInSetQuery("id", toSet("id0", "id1", "id2", "id3", > "id4", "id5"))); > Query q = new ToParentBlockJoinQuery(new WildcardQuery(new > Term("program", "*")), parentSet, ScoreMode.None); > > 3. Also correct (same as above) > BitSetProducer parentSet = new QueryBitSetProducer(new > WildcardQuery(new Term("id", "*"))); > Query q = new ToParentBlockJoinQuery(new WildcardQuery(new > Term("program", "*")), parentSet, ScoreMode.None); > so far so good. > > 4. This one gives incorrect result: > BitSetProducer parentSet = new QueryBitSetProducer(new > TermInSetQuery("id", toSet("id0", "id1", "id3"))); > Query q = new ToParentBlockJoinQuery(new WildcardQuery(new > Term("program", "*")), parentSet, > org.apache.lucene.search.join.ScoreMode.None); > Returns 2 docs ["id1", "id3"]. It should only return "id1" and > not "id3" here. Very strange behavior. > > 5. Just asking for "id3" also incorrectly returns it: > BitSetProducer parentSet = new QueryBitSetProducer(new TermQuery(new > Term("id", "id3"))); > Query q = new ToParentBlockJoinQuery(new WildcardQuery(new > Term("program", "*")), parentSet, > org.apache.lucene.search.join.ScoreMode.None); > > 6. But as soon as I add "id2" to the parent query, it works again.. > BitSetProducer parentSet = new QueryBitSetProducer(new > TermInSetQuery("id", toSet( "id3", "id2"))); > Query q = new ToParentBlockJoinQuery(new WildcardQuery(new > Term("program", "*")), parentSet, > org.apache.lucene.search.join.ScoreMode.None); > Gives the correct result ["id2"] > > I am attaching the unit test that demonstrates this: > [https://pastebin.com/aJ1LDLCS] > I don't know if I am doing something wrong, or if there is an issue. > Thank you for looking into it. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
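The "lands on the next bit" resolution explained in the close comment is easy to reproduce with a plain java.util.BitSet. The doc-id layout below is a hypothetical block encoding of the six parents in the report (children stored before their parent, as block join requires); this mirrors the resolution rule only, it is not Lucene's actual implementation:

```java
import java.util.BitSet;

public class BlockJoinSketch {
    // Hypothetical block layout for the reported index (children precede parent):
    // doc 0 = id0; doc 1 = child, doc 2 = id1; docs 3-4 = children, doc 5 = id2;
    // doc 6 = id3; doc 7 = child, doc 8 = id4; docs 9-10 = children, doc 11 = id5.
    // A matching child resolves to the next set bit in the parent mask.
    public static int resolve(int[] parentDocs, int childDoc) {
        BitSet mask = new BitSet();
        for (int d : parentDocs) mask.set(d);
        return mask.nextSetBit(childDoc); // -1 if no parent bit follows the child
    }

    public static void main(String[] args) {
        // Full mask {0,2,5,6,8,11}: child doc 3 correctly resolves to doc 5 (id2).
        System.out.println(resolve(new int[] {0, 2, 5, 6, 8, 11}, 3)); // 5
        // Mask without doc 5 (id2): the same child lands on doc 6, i.e. id3.
        System.out.println(resolve(new int[] {0, 2, 6}, 3)); // 6
    }
}
```

With the full mask, id2's children resolve to id2; once id2 is excluded from the parent filter, those same children attach to the next masked parent, id3 — exactly the "incorrect" result in the report.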
[jira] [Closed] (SOLR-13598) ReplicationFactorTest.test failures. Expected rf=2 ... got 1
[ https://issues.apache.org/jira/browse/SOLR-13598?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mikhail Khludnev closed SOLR-13598. --- > ReplicationFactorTest.test failures. Expected rf=2 ... got 1 > - > > Key: SOLR-13598 > URL: https://issues.apache.org/jira/browse/SOLR-13598 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Mikhail Khludnev >Priority: Major > > It seems it occurs _mostly_ on Windows > Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/8030/ > Java: 64bit/jdk-11.0.3 -XX:+UseCompressedOops -XX:+UseSerialGC > 2 tests failed. > FAILED: org.apache.solr.cloud.ReplicationFactorTest.test > Error Message: > Expected rf=2 because batch should have succeeded on 2 replicas (only one > replica should be down) but got 1; -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Resolved] (SOLR-13598) ReplicationFactorTest.test failures. Expected rf=2 ... got 1
[ https://issues.apache.org/jira/browse/SOLR-13598?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mikhail Khludnev resolved SOLR-13598. - Resolution: Duplicate > ReplicationFactorTest.test failures. Expected rf=2 ... got 1 > - > > Key: SOLR-13598 > URL: https://issues.apache.org/jira/browse/SOLR-13598 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Mikhail Khludnev >Priority: Major > > It seems it occurs _mostly_ on Windows > Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/8030/ > Java: 64bit/jdk-11.0.3 -XX:+UseCompressedOops -XX:+UseSerialGC > 2 tests failed. > FAILED: org.apache.solr.cloud.ReplicationFactorTest.test > Error Message: > Expected rf=2 because batch should have succeeded on 2 replicas (only one > replica should be down) but got 1; -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-13598) ReplicationFactorTest.test failures. Expected rf=2 ... got 1
[ https://issues.apache.org/jira/browse/SOLR-13598?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16876931#comment-16876931 ] Mikhail Khludnev commented on SOLR-13598: - I don't plan to work on it anytime soon. > ReplicationFactorTest.test failures. Expected rf=2 ... got 1 > - > > Key: SOLR-13598 > URL: https://issues.apache.org/jira/browse/SOLR-13598 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Mikhail Khludnev >Priority: Major > > It seems it occurs _mostly_ on Windows > Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/8030/ > Java: 64bit/jdk-11.0.3 -XX:+UseCompressedOops -XX:+UseSerialGC > 2 tests failed. > FAILED: org.apache.solr.cloud.ReplicationFactorTest.test > Error Message: > Expected rf=2 because batch should have succeeded on 2 replicas (only one > replica should be down) but got 1; -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Created] (SOLR-13598) ReplicationFactorTest.test failures. Expected rf=2 ... got 1
Mikhail Khludnev created SOLR-13598: --- Summary: ReplicationFactorTest.test failures. Expected rf=2 ... got 1 Key: SOLR-13598 URL: https://issues.apache.org/jira/browse/SOLR-13598 Project: Solr Issue Type: Bug Security Level: Public (Default Security Level. Issues are Public) Reporter: Mikhail Khludnev It seems it occurs _mostly_ on Windows Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/8030/ Java: 64bit/jdk-11.0.3 -XX:+UseCompressedOops -XX:+UseSerialGC 2 tests failed. FAILED: org.apache.solr.cloud.ReplicationFactorTest.test Error Message: Expected rf=2 because batch should have succeeded on 2 replicas (only one replica should be down) but got 1; -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (SOLR-12490) introduce json.queries supports DSL for further referring and exclusion in JSON facets
[ https://issues.apache.org/jira/browse/SOLR-12490?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mikhail Khludnev updated SOLR-12490: Summary: introduce json.queries supports DSL for further referring and exclusion in JSON facets (was: referring/excluding clauses from JSON query DSL in JSON facets. ) > introduce json.queries supports DSL for further referring and exclusion in > JSON facets > --- > > Key: SOLR-12490 > URL: https://issues.apache.org/jira/browse/SOLR-12490 > Project: Solr > Issue Type: Improvement > Components: Facet Module, faceting >Reporter: Mikhail Khludnev >Assignee: Mikhail Khludnev >Priority: Major > Labels: newdev > > It's spin off from the > [discussion|https://issues.apache.org/jira/browse/SOLR-9685?focusedCommentId=16508720=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16508720]. > > h2. Problem > # after SOLR-9685 we can tag separate clauses in hairish queries like > {{parent}}, {{bool}} > # we can {{domain.excludeTags}} > # we are looking for child faceting with exclusions, see SOLR-9510, SOLR-8998 > > # but we can refer only separate params in {{domain.filter}}, it's not > possible to refer separate clauses > see the first comment -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (SOLR-12490) referring/excluding clauses from JSON query DSL in JSON facets.
[ https://issues.apache.org/jira/browse/SOLR-12490?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mikhail Khludnev updated SOLR-12490: Labels: newdev (was: ) > referring/excluding clauses from JSON query DSL in JSON facets. > > > Key: SOLR-12490 > URL: https://issues.apache.org/jira/browse/SOLR-12490 > Project: Solr > Issue Type: Improvement > Components: Facet Module, faceting >Reporter: Mikhail Khludnev >Priority: Major > Labels: newdev > > It's spin off from the > [discussion|https://issues.apache.org/jira/browse/SOLR-9685?focusedCommentId=16508720=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16508720]. > > h2. Problem > # after SOLR-9685 we can tag separate clauses in hairish queries like > {{parent}}, {{bool}} > # we can {{domain.excludeTags}} > # we are looking for child faceting with exclusions, see SOLR-9510, SOLR-8998 > > # but we can refer only separate params in {{domain.filter}}, it's not > possible to refer separate clauses > see the first comment -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Assigned] (SOLR-12490) referring/excluding clauses from JSON query DSL in JSON facets.
[ https://issues.apache.org/jira/browse/SOLR-12490?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mikhail Khludnev reassigned SOLR-12490: --- Assignee: Mikhail Khludnev > referring/excluding clauses from JSON query DSL in JSON facets. > > > Key: SOLR-12490 > URL: https://issues.apache.org/jira/browse/SOLR-12490 > Project: Solr > Issue Type: Improvement > Components: Facet Module, faceting >Reporter: Mikhail Khludnev >Assignee: Mikhail Khludnev >Priority: Major > Labels: newdev > > It's spin off from the > [discussion|https://issues.apache.org/jira/browse/SOLR-9685?focusedCommentId=16508720=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16508720]. > > h2. Problem > # after SOLR-9685 we can tag separate clauses in hairish queries like > {{parent}}, {{bool}} > # we can {{domain.excludeTags}} > # we are looking for child faceting with exclusions, see SOLR-9510, SOLR-8998 > > # but we can refer only separate params in {{domain.filter}}, it's not > possible to refer separate clauses > see the first comment -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Comment Edited] (SOLR-12490) referring/excluding clauses from JSON query DSL in JSON facets.
[ https://issues.apache.org/jira/browse/SOLR-12490?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16514420#comment-16514420 ] Mikhail Khludnev edited comment on SOLR-12490 at 7/2/19 10:18 AM: -- Here is fewer-impact approach induced by chat with [~osavrasov]. *The proposal is to introduce {{json.queries}}, it's like arbitrary {{json.param}} but it's translated with query DSL* {code} { "query" : { "#top":{ "parent": { "query": "sku-title:foo", "filters" : "$childFq", // non-json old style param reference "which": "scope:product" } } }, // like .param but parsed with query dsl syntax "queries":{ "childFq":[{ "#color" :"color:black" }, { "#size" : "size:L" }] }, "facet":{ "sku_colors_in_prods":{ "type" : "terms", "field" : "color", "domain" : { "excludeTags":["top", // we need to drop top-level parent query "color"],// excluding one child filter clause "filter":[ {"param":"childFq"} // referring to .queries.childFq ] }, "facet": { // counting products "prod_count":"uniqueBlock(_root_)" } } } } {code} was (Author: mkhludnev): Here is fewer-impact approach induced by chat with [~osavrasov]. The proposal is to introduce {{json.queries}}, it's like arbitrary {{json.param}} but it's translated with query DSL {code} { "query" : { "#top":{ "parent": { "query": "sku-title:foo", "filters" : "$childFq", // non-json old style param reference "which": "scope:product" } } }, // like .param but parsed with query dsl syntax "queries":{ "childFq":[{ "#color" :"color:black" }, { "#size" : "size:L" }] }, "facet":{ "sku_colors_in_prods":{ "type" : "terms", "field" : "color", "domain" : { "excludeTags":["top", // we need to drop top-level parent query "color"],// excluding one child filter clause "filter":[ {"param":"childFq"} // referring to .queries.childFq ] }, "facet": { // counting products "prod_count":"uniqueBlock(_root_)" } } } } {code} > referring/excluding clauses from JSON query DSL in JSON facets. 
> > > Key: SOLR-12490 > URL: https://issues.apache.org/jira/browse/SOLR-12490 > Project: Solr > Issue Type: Improvement > Components: Facet Module, faceting >Reporter: Mikhail Khludnev >Priority: Major > > It's spin off from the > [discussion|https://issues.apache.org/jira/browse/SOLR-9685?focusedCommentId=16508720=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16508720]. > > h2. Problem > # after SOLR-9685 we can tag separate clauses in hairish queries like > {{parent}}, {{bool}} > # we can {{domain.excludeTags}} > # we are looking for child faceting with exclusions, see SOLR-9510, SOLR-8998 > > # but we can refer only separate params in {{domain.filter}}, it's not > possible to refer separate clauses > see the first comment -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
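The core of the json.queries proposal above is the reference step: a {"param":"childFq"} entry inside domain.filter is expanded against the queries map. A tiny sketch of that expansion follows; the method names and data shapes are illustrative only, not Solr's actual JSON request parser:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

public class ParamRefSketch {
    // Illustrative expansion of domain.filter entries against json.queries:
    // an entry is either a literal query string, or a Map with a "param" key
    // naming a list of queries defined under json.queries.
    public static List<String> expandFilters(List<?> filters,
                                             Map<String, List<String>> queries) {
        List<String> out = new ArrayList<>();
        for (Object f : filters) {
            if (f instanceof Map) {
                String name = (String) ((Map<?, ?>) f).get("param");
                out.addAll(queries.getOrDefault(name, List.of()));
            } else {
                out.add((String) f);
            }
        }
        return out;
    }
}
```

So a filter list of [{"param":"childFq"}, "scope:sku"] with queries {"childFq": ["color:black", "size:L"]} expands to the three concrete clauses, which excludeTags can then drop individually.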
[jira] [Updated] (SOLR-12490) referring/excluding clauses from JSON query DSL in JSON facets.
[ https://issues.apache.org/jira/browse/SOLR-12490?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mikhail Khludnev updated SOLR-12490: Description: It's spin off from the [discussion|https://issues.apache.org/jira/browse/SOLR-9685?focusedCommentId=16508720=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16508720]. h2. Problem # after SOLR-9685 we can tag separate clauses in hairish queries like {{parent}}, {{bool}} # we can {{domain.excludeTags}} # we are looking for child faceting with exclusions, see SOLR-9510, SOLR-8998 # but we can refer only separate params in {{domain.filter}}, it's not possible to refer separate clauses see the first comment was: It's spin off from the [discussion|https://issues.apache.org/jira/browse/SOLR-9685?focusedCommentId=16508720=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16508720]. h2. Problem # after SOLR-9685 we can tag separate clauses in hairish queries like {{parent}}, {{bool}} # we can {{domain.excludeTags}} # we are looking for child faceting with exclusions, see SOLR-9510, SOLR-8998 # but we can refer only separate params in {{domain.filter}}, it's not possible to refer separate clauses h2. {color:#ff}Revoked{color} -Proposal- pls see the first comment instead. 
# -tag child clauses multiple times- {code} { "query" : { "#top":{ "parent": { "query": "sku-title:foo", "filters" : [ "scope:sku", { "#sku,color" : "color:black" }, // multiple tags { "#sku,size" : "size:L" } ], "which": "scope:product" } } } } {code} # -refer to sku clauses, either by- ## (1) {{domain.filter.tag}} -in addition to {{param}}, or- ## (2) {{domain.includeTags}} -mimicking- {{excludeTags}} {code} "facet":{ "sku_colors_in_prods":{ "type" : "terms", "field" : "color", "domain" : { "excludeTags":["top","color"], // we need to drop top-level parent query "filter":[ {"tag":"sku"} // (1) ], "includeTags":"sku" // (2) }, "facet":"uniqueBlock(_root_)" } } {code} WDYT, [~osavrasov], [~ysee...@gmail.com]? > referring/excluding clauses from JSON query DSL in JSON facets. > > > Key: SOLR-12490 > URL: https://issues.apache.org/jira/browse/SOLR-12490 > Project: Solr > Issue Type: Improvement > Components: Facet Module, faceting >Reporter: Mikhail Khludnev >Priority: Major > > It's spin off from the > [discussion|https://issues.apache.org/jira/browse/SOLR-9685?focusedCommentId=16508720=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16508720]. > > h2. Problem > # after SOLR-9685 we can tag separate clauses in hairish queries like > {{parent}}, {{bool}} > # we can {{domain.excludeTags}} > # we are looking for child faceting with exclusions, see SOLR-9510, SOLR-8998 > > # but we can refer only separate params in {{domain.filter}}, it's not > possible to refer separate clauses > see the first comment -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Closed] (SOLR-13081) In-Place Update doesn't work when route.field is defined
[ https://issues.apache.org/jira/browse/SOLR-13081?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mikhail Khludnev closed SOLR-13081. --- > In-Place Update doesn't work when route.field is defined > > > Key: SOLR-13081 > URL: https://issues.apache.org/jira/browse/SOLR-13081 > Project: Solr > Issue Type: Bug > Components: update >Reporter: Dr Oleg Savrasov >Assignee: Mikhail Khludnev >Priority: Major > Fix For: 8.1, master (9.0) > > Attachments: SOLR-13081.patch, SOLR-13081.patch, SOLR-13081.patch, > SOLR-13081.patch, SOLR-13081.patch > > > As soon as cloud collection is configured with route.field property, In-Place > Updates are not applied anymore. This happens because > AtomicUpdateDocumentMerger skips only id and version fields and doesn't > verify configured route.field. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Closed] (LUCENE-5375) ToChildBlockJoinQuery becomes crazy on wrong subquery
[ https://issues.apache.org/jira/browse/LUCENE-5375?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mikhail Khludnev closed LUCENE-5375. > ToChildBlockJoinQuery becomes crazy on wrong subquery > - > > Key: LUCENE-5375 > URL: https://issues.apache.org/jira/browse/LUCENE-5375 > Project: Lucene - Core > Issue Type: Bug > Components: modules/join >Affects Versions: 4.6 >Reporter: Dr Oleg Savrasov >Priority: Major > Labels: patch > Fix For: 4.6.1, 6.0 > > Attachments: LUCENE-5375.patch, SOLR-5553-1.patch, > SOLR-5553-insufficient_assertions.patch, SOLR-5553.patch > > Original Estimate: 24h > Remaining Estimate: 24h > > If user supplies wrong subquery to ToParentBlockJoinQuery it reasonably > throws IllegalStateException. > (http://lucene.apache.org/core/4_0_0/join/org/apache/lucene/search/join/ToParentBlockJoinQuery.html > 'The child documents must be orthogonal to the parent documents: the wrapped > child query must never return a parent document.'). However > ToChildBlockJoinQuery just goes crazy silently. I want to provide simple > patch for ToChildBlockJoinQuery with if-throw clause and test. > See > http://mail-archives.apache.org/mod_mbox/lucene-solr-user/201311.mbox/%3cf415ce3a-ebe5-4d15-adf1-c5ead32a1...@sheffield.ac.uk%3E -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (SOLR-13577) TestReplicationHandler.doTestIndexFetchOnMasterRestart failures
[ https://issues.apache.org/jira/browse/SOLR-13577?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mikhail Khludnev updated SOLR-13577: Resolution: Fixed Fix Version/s: 8.2 Status: Resolved (was: Patch Available) > TestReplicationHandler.doTestIndexFetchOnMasterRestart failures > --- > > Key: SOLR-13577 > URL: https://issues.apache.org/jira/browse/SOLR-13577 > Project: Solr > Issue Type: Test > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Mikhail Khludnev >Assignee: Mikhail Khludnev >Priority: Major > Fix For: 8.2 > > Attachments: 8016-consoleText.zip, SOLR-13577.patch, > SOLR-13577.patch, SOLR-13577.patch, SOLR-13577.patch, SOLR-13577.patch, > screenshot-1.png, still failed on Windows consoleText.zip > > > It's seems like clear test failures. Failed 6 times in a row at lines 682, 684 > {quote} > org.apache.solr.handler.TestReplicationHandler.doTestIndexFetchOnMasterRestart > Failing for the past 1 build (Since Failed#8011 ) > Took 6 sec. > Error Message > null > Stacktrace > java.lang.NumberFormatException: null > at > __randomizedtesting.SeedInfo.seed([6AB4ECC957E5CCA2:B243282DFC3E0EFE]:0) > at java.base/java.lang.Integer.parseInt(Integer.java:614) > at java.base/java.lang.Integer.parseInt(Integer.java:770) > at > org.apache.solr.handler.TestReplicationHandler.doTestIndexFetchOnMasterRestart(TestReplicationHandler.java:682) > org.apache.solr.handler.TestReplicationHandler.doTestIndexFetchOnMasterRestart > Failing for the past 3 builds (Since Failed#8011 ) > Took 7.5 sec. > Stacktrace > java.lang.AssertionError > at > __randomizedtesting.SeedInfo.seed([E88092B4017D2D3D:30775650AAA6EF61]:0) > at org.junit.Assert.fail(Assert.java:86) > at org.junit.Assert.assertTrue(Assert.java:41) > at org.junit.Assert.assertTrue(Assert.java:52) > at > org.apache.solr.handler.TestReplicationHandler.doTestIndexFetchOnMasterRestart(TestReplicationHandler.java:684) > {quote} > !screenshot-1.png! 
[jira] [Commented] (SOLR-9961) RestoreCore needs the option to download files in parallel.
[ https://issues.apache.org/jira/browse/SOLR-9961?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16876525#comment-16876525 ] Mikhail Khludnev commented on SOLR-9961: The design would be: * {{BackupRepositoryFactory}} holds a shared thread pool * the thread pool is optionally injected into each created {{BackupRepository}} * the Restore (Backup) operation(s) use a dedicated operation {{listAll(path, lambda)}} or {{forEach(list/file, lambda)}} * Repositories that accepted the thread pool invoke the lambda in threads * The lambda accepts a repository delegate and is expected to operate with it. This delegate reuses HDFS and closes/releases it after it's done. WDYT? > RestoreCore needs the option to download files in parallel. > --- > > Key: SOLR-9961 > URL: https://issues.apache.org/jira/browse/SOLR-9961 > Project: Solr > Issue Type: Improvement > Components: Backup/Restore >Affects Versions: 6.2.1 >Reporter: Timothy Potter >Priority: Major > Attachments: SOLR-9961.patch, SOLR-9961.patch, SOLR-9961.patch, > SOLR-9961.patch > > > My backup to cloud storage (Google cloud storage in this case, but I think > this is a general problem) takes 8 minutes ... the restore of the same core > takes hours. The restore loop in RestoreCore is serial and doesn't allow me > to parallelize the expensive part of this operation (the IO from the remote > cloud storage service). We need the option to parallelize the download (like > distcp). > Also, I tried downloading the same directory using gsutil and it was very > fast, like 2 minutes. So I know it's not the pipe that's limiting perf here. > Here's a very rough patch that does the parallelization. We may also want to > consider a two-step approach: 1) download in parallel to a temp dir, 2) > perform all the of the checksum validation against the local temp dir. That > will save round trips to the remote cloud storage. 
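The factory-held pool and the {{forEach(list, lambda)}} operation proposed in the comment above can be sketched as follows. Both the pool placement and the method signature are assumptions taken from that comment, not an existing Solr API:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.function.Consumer;

public class ParallelFetchSketch {
    // Shared pool, as the factory-held pool in the proposal (size arbitrary).
    // Daemon threads only, so this sketch never blocks JVM shutdown.
    static final ExecutorService POOL = Executors.newFixedThreadPool(4, r -> {
        Thread t = new Thread(r);
        t.setDaemon(true);
        return t;
    });

    // forEach(list, lambda): run the per-file lambda on the pool, wait for all,
    // and surface the first failure. Returns how many items were processed.
    public static int forEach(List<String> files, Consumer<String> op) {
        List<Future<?>> futures = new ArrayList<>();
        for (String f : files) futures.add(POOL.submit(() -> op.accept(f)));
        for (Future<?> fu : futures) {
            try {
                fu.get();
            } catch (Exception e) {
                throw new RuntimeException("parallel fetch failed", e);
            }
        }
        return files.size();
    }
}
```

In the real design the lambda would receive a repository delegate that owns its own filesystem handle per thread; here the Consumer stands in for that per-file download step.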
[jira] [Commented] (SOLR-13587) Close BackupRepository after every usage
[ https://issues.apache.org/jira/browse/SOLR-13587?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16876014#comment-16876014 ] Mikhail Khludnev commented on SOLR-13587: - [~krisden], Right. However, there's such a thing as [unexpected closing|https://issues.apache.org/jira/browse/SOLR-9961?focusedCommentId=15822297&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-15822297]. Are there any suggestions from HDFS folks in scope of SOLR-5007? > Close BackupRepository after every usage > > > Key: SOLR-13587 > URL: https://issues.apache.org/jira/browse/SOLR-13587 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: Backup/Restore >Affects Versions: 8.1 >Reporter: Mikhail Khludnev >Assignee: Mikhail Khludnev >Priority: Major > Attachments: SOLR-13587.patch > > > Turns out BackupRepository is created every operation, but never closed. I > suppose it leads to necessity to have {{BadHdfsThreadsFilter}} in > {{TestHdfsCloudBackupRestore}}. Also, test need to repeat backup/restore > operation to make sure that closing hdfs filesystem doesn't break it see > SOLR-9961 for the case. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (SOLR-13587) Close BackupRepository after every usage
[ https://issues.apache.org/jira/browse/SOLR-13587?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mikhail Khludnev updated SOLR-13587: Status: Patch Available (was: Open) > Close BackupRepository after every usage > > > Key: SOLR-13587 > URL: https://issues.apache.org/jira/browse/SOLR-13587 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: Backup/Restore >Affects Versions: 8.1 >Reporter: Mikhail Khludnev >Assignee: Mikhail Khludnev >Priority: Major > Attachments: SOLR-13587.patch > > > Turns out BackupRepository is created every operation, but never closed. I > suppose it leads to necessity to have {{BadHdfsThreadsFilter}} in > {{TestHdfsCloudBackupRestore}}. Also, test need to repeat backup/restore > operation to make sure that closing hdfs filesystem doesn't break it see > SOLR-9961 for the case. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-9952) S3BackupRepository
[ https://issues.apache.org/jira/browse/SOLR-9952?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16875742#comment-16875742 ] Mikhail Khludnev commented on SOLR-9952: I don't know how it works with HDFS, but the current restore code might be problematic with S3. It relies on the list operation, which might not see all directory files right after they have been written. So it might be necessary to add a file that contains the full list of files. > S3BackupRepository > -- > > Key: SOLR-9952 > URL: https://issues.apache.org/jira/browse/SOLR-9952 > Project: Solr > Issue Type: New Feature > Components: Backup/Restore >Reporter: Mikhail Khludnev >Priority: Major > Attachments: > 0001-SOLR-9952-Added-dependencies-for-hadoop-amazon-integ.patch, > 0002-SOLR-9952-Added-integration-test-for-checking-backup.patch, Running Solr > on S3.pdf, core-site.xml.template > > > I'd like to have a backup repository implementation that allows snapshotting to AWS > S3
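The concern above — a list operation that may lag behind writes — can be sidestepped by persisting an explicit manifest as the last step of the backup and driving the restore from it, never from a directory listing. A minimal sketch of the idea (class and method names are hypothetical, not Solr's actual BackupRepository API; the local filesystem stands in for S3):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Arrays;
import java.util.List;

// Sketch: write an explicit manifest at backup time so restore does not
// depend on an eventually consistent directory listing.
public class BackupManifest {
    static final String MANIFEST = "backup.manifest";

    // Called last during backup, after all index files are copied.
    static void writeManifest(Path backupDir, List<String> fileNames) throws IOException {
        Files.write(backupDir.resolve(MANIFEST), fileNames);
    }

    // Restore reads the manifest instead of listing the directory, so a
    // lagging list operation cannot hide freshly written files.
    static List<String> readManifest(Path backupDir) throws IOException {
        return Files.readAllLines(backupDir.resolve(MANIFEST));
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("backup");
        List<String> files = Arrays.asList("_0.cfs", "_0.si", "segments_1");
        writeManifest(dir, files);
        System.out.println(readManifest(dir));
    }
}
```

Writing the manifest last also gives the restore a cheap completeness check: if the manifest is absent, the backup never finished.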
[jira] [Commented] (SOLR-9961) RestoreCore needs the option to download files in parallel.
[ https://issues.apache.org/jira/browse/SOLR-9961?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16875740#comment-16875740 ] Mikhail Khludnev commented on SOLR-9961: Here's the question: what should hold the thread pool, the repository factory (a singleton) or the repository instance, which is created for every operation (a few times) and isn't closed yet (SOLR-13587)? > RestoreCore needs the option to download files in parallel. > --- > > Key: SOLR-9961 > URL: https://issues.apache.org/jira/browse/SOLR-9961 > Project: Solr > Issue Type: Improvement > Components: Backup/Restore >Affects Versions: 6.2.1 >Reporter: Timothy Potter >Priority: Major > Attachments: SOLR-9961.patch, SOLR-9961.patch, SOLR-9961.patch, > SOLR-9961.patch > > > My backup to cloud storage (Google cloud storage in this case, but I think > this is a general problem) takes 8 minutes ... the restore of the same core > takes hours. The restore loop in RestoreCore is serial and doesn't allow me > to parallelize the expensive part of this operation (the IO from the remote > cloud storage service). We need the option to parallelize the download (like > distcp). > Also, I tried downloading the same directory using gsutil and it was very > fast, like 2 minutes. So I know it's not the pipe that's limiting perf here. > Here's a very rough patch that does the parallelization. We may also want to > consider a two-step approach: 1) download in parallel to a temp dir, 2) > perform all of the checksum validation against the local temp dir. That > will save round trips to the remote cloud storage.
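The factory-owned option in the comment above can be sketched as follows: the singleton factory holds one long-lived pool, and the short-lived, per-operation repository instances borrow it, so creating a repository per operation spawns (and can leak) no threads. All names here are illustrative, not Solr's actual API:

```java
import java.util.concurrent.CompletionService;
import java.util.concurrent.ExecutorCompletionService;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Sketch: the factory (a singleton) owns the pool for the life of the node;
// per-operation repository instances submit work to it rather than owning one.
public class RepoPools {
    static final ExecutorService SHARED =
        Executors.newFixedThreadPool(4, r -> {
            Thread t = new Thread(r, "backup-io");
            t.setDaemon(true); // don't pin the JVM on shutdown
            return t;
        });

    // Stand-in for a per-operation repository fanning out IO tasks
    // (here each "task" just yields its input value).
    static int parallelSum(int[] work) throws Exception {
        CompletionService<Integer> cs = new ExecutorCompletionService<>(SHARED);
        for (int w : work) cs.submit(() -> w);
        int total = 0;
        for (int i = 0; i < work.length; i++) total += cs.take().get();
        return total;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(parallelSum(new int[]{1, 2, 3, 4}));
    }
}
```

The trade-off: a factory-held pool survives unclosed repository instances, but it must be sized for concurrent operations; a per-instance pool is simpler to reason about yet leaks threads exactly when repositories aren't closed, which is the bug tracked here.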
[jira] [Updated] (SOLR-13587) Close BackupRepository after every usage
[ https://issues.apache.org/jira/browse/SOLR-13587?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mikhail Khludnev updated SOLR-13587: Attachment: SOLR-13587.patch > Close BackupRepository after every usage > > > Key: SOLR-13587 > URL: https://issues.apache.org/jira/browse/SOLR-13587 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: Backup/Restore >Affects Versions: 8.1 >Reporter: Mikhail Khludnev >Assignee: Mikhail Khludnev >Priority: Major > Attachments: SOLR-13587.patch > > > Turns out BackupRepository is created every operation, but never closed. I > suppose it leads to necessity to have {{BadHdfsThreadsFilter}} in > {{TestHdfsCloudBackupRestore}}. Also, test need to repeat backup/restore > operation to make sure that closing hdfs filesystem doesn't break it see > SOLR-9961 for the case.
[jira] [Commented] (SOLR-13587) Close BackupRepository after every usage
[ https://issues.apache.org/jira/browse/SOLR-13587?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16875476#comment-16875476 ] Mikhail Khludnev commented on SOLR-13587: - Turns out even closing repositories doesn't prevent MiniDFSCluster from leaking. > Close BackupRepository after every usage > > > Key: SOLR-13587 > URL: https://issues.apache.org/jira/browse/SOLR-13587 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: Backup/Restore >Affects Versions: 8.1 >Reporter: Mikhail Khludnev >Assignee: Mikhail Khludnev >Priority: Major > > Turns out BackupRepository is created every operation, but never closed. I > suppose it leads to necessity to have {{BadHdfsThreadsFilter}} in > {{TestHdfsCloudBackupRestore}}. Also, test need to repeat backup/restore > operation to make sure that closing hdfs filesystem doesn't break it see > SOLR-9961 for the case.
[jira] [Updated] (SOLR-9961) RestoreCore needs the option to download files in parallel.
[ https://issues.apache.org/jira/browse/SOLR-9961?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mikhail Khludnev updated SOLR-9961: --- Attachment: SOLR-9961.patch > RestoreCore needs the option to download files in parallel. > --- > > Key: SOLR-9961 > URL: https://issues.apache.org/jira/browse/SOLR-9961 > Project: Solr > Issue Type: Improvement > Components: Backup/Restore >Affects Versions: 6.2.1 >Reporter: Timothy Potter >Priority: Major > Attachments: SOLR-9961.patch, SOLR-9961.patch, SOLR-9961.patch, > SOLR-9961.patch > > > My backup to cloud storage (Google cloud storage in this case, but I think > this is a general problem) takes 8 minutes ... the restore of the same core > takes hours. The restore loop in RestoreCore is serial and doesn't allow me > to parallelize the expensive part of this operation (the IO from the remote > cloud storage service). We need the option to parallelize the download (like > distcp). > Also, I tried downloading the same directory using gsutil and it was very > fast, like 2 minutes. So I know it's not the pipe that's limiting perf here. > Here's a very rough patch that does the parallelization. We may also want to > consider a two-step approach: 1) download in parallel to a temp dir, 2) > perform all of the checksum validation against the local temp dir. That > will save round trips to the remote cloud storage.
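The two-step approach described in the issue — download everything in parallel to a temp dir, then validate checksums against local files only — can be sketched roughly as below. This is not the actual patch; names are illustrative, a local copy stands in for the remote download, and CRC32 stands in for Lucene's codec checksums:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.zip.CRC32;

// Sketch: step 1 pulls all files concurrently into a temp dir; step 2
// checksums them locally, saving round trips to the remote store.
public class TwoStepRestore {
    static long crc(Path p) throws IOException {
        CRC32 c = new CRC32();
        c.update(Files.readAllBytes(p));
        return c.getValue();
    }

    // Step 1: copy each remote file concurrently (Files.copy stands in for the download).
    static void downloadAll(Path remote, Path temp, List<String> names) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        try {
            List<Future<?>> jobs = new ArrayList<>();
            for (String n : names)
                jobs.add(pool.submit(() -> Files.copy(remote.resolve(n), temp.resolve(n))));
            for (Future<?> j : jobs) j.get(); // surface any copy failure
        } finally {
            pool.shutdown();
        }
    }

    // Step 2: validate against expected checksums without touching the remote store.
    static boolean verify(Path temp, Map<String, Long> expected) throws IOException {
        for (Map.Entry<String, Long> e : expected.entrySet())
            if (crc(temp.resolve(e.getKey())) != e.getValue()) return false;
        return true;
    }

    public static void main(String[] args) throws Exception {
        Path remote = Files.createTempDirectory("remote"), temp = Files.createTempDirectory("temp");
        Files.write(remote.resolve("_0.cfs"), new byte[]{1, 2, 3});
        Map<String, Long> expected = Collections.singletonMap("_0.cfs", crc(remote.resolve("_0.cfs")));
        downloadAll(remote, temp, Collections.singletonList("_0.cfs"));
        System.out.println(verify(temp, expected));
    }
}
```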
[jira] [Comment Edited] (SOLR-9242) Collection level backup/restore should provide a param for specifying the repository implementation it should use
[ https://issues.apache.org/jira/browse/SOLR-9242?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16875008#comment-16875008 ] Mikhail Khludnev edited comment on SOLR-9242 at 6/29/19 7:46 AM: - I'm wondering why BackupRepository isn't ever closed. I suppose it causes HDFS resources to leak in TestHdfsCloudBackupRestore. Isn't it worth closing the repo every time? Or is it better to keep repo instances in BRFactory? I'm asking in the scope of SOLR-9961, where we need to spawn a thread pool once or per repo. UPD: raised SOLR-13587 was (Author: mkhludnev): I'm wondering why BackupRepository isn't ever closed. I suppose it causes HDFS resources to leak in TestHdfsCloudBackupRestore. Isn't it worth closing the repo every time? Or is it better to keep repo instances in BRFactory? I'm asking in the scope of SOLR-9961, where we need to spawn a thread pool once or per repo. > Collection level backup/restore should provide a param for specifying the > repository implementation it should use > - > > Key: SOLR-9242 > URL: https://issues.apache.org/jira/browse/SOLR-9242 > Project: Solr > Issue Type: Improvement >Reporter: Hrishikesh Gadre >Assignee: Varun Thacker >Priority: Major > Fix For: 6.2, 7.0 > > Attachments: 7726.log.gz, SOLR-9242.patch, SOLR-9242.patch, > SOLR-9242.patch, SOLR-9242.patch, SOLR-9242.patch, SOLR-9242_followup.patch, > SOLR-9242_followup2.patch > > > SOLR-7374 provides BackupRepository interface to enable storing Solr index > data to a configured file-system (e.g. HDFS, local file-system etc.). This > JIRA is to track the work required to extend this functionality at the > collection level.
[jira] [Created] (SOLR-13587) Close BackupRepository after every usage
Mikhail Khludnev created SOLR-13587: --- Summary: Close BackupRepository after every usage Key: SOLR-13587 URL: https://issues.apache.org/jira/browse/SOLR-13587 Project: Solr Issue Type: Bug Security Level: Public (Default Security Level. Issues are Public) Components: Backup/Restore Affects Versions: 8.1 Reporter: Mikhail Khludnev Assignee: Mikhail Khludnev Turns out BackupRepository is created for every operation, but never closed. I suppose this leads to the necessity of having {{BadHdfsThreadsFilter}} in {{TestHdfsCloudBackupRestore}}. Also, the test needs to repeat the backup/restore operation to make sure that closing the HDFS filesystem doesn't break it; see SOLR-9961 for the case.
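The fix proposed by this issue amounts to scoping the repository to one operation. If BackupRepository is made Closeable, each backup/restore can wrap it in try-with-resources so the underlying filesystem handle (e.g. HDFS) is released even on failure. A minimal sketch with illustrative names, where a counter stands in for the real filesystem handle:

```java
import java.io.Closeable;
import java.util.concurrent.atomic.AtomicInteger;

// Sketch of the per-operation lifecycle: open, use, and always close.
public class CloseablePattern {
    static final AtomicInteger OPEN = new AtomicInteger(); // tracks leaked handles

    static class Repo implements Closeable {
        Repo() { OPEN.incrementAndGet(); }                 // e.g. FileSystem.get(conf)
        void backup(String collection) { /* copy index files */ }
        @Override public void close() { OPEN.decrementAndGet(); } // e.g. fs.close()
    }

    static void runBackup(String collection) {
        try (Repo repo = new Repo()) { // closed per operation, never leaked
            repo.backup(collection);
        }
    }

    public static void main(String[] args) {
        runBackup("collection1");
        System.out.println("open handles after backup: " + OPEN.get());
    }
}
```

With this shape, a test filter like {{BadHdfsThreadsFilter}} should become unnecessary, since no HDFS client threads outlive the operation; whether closing and reopening the filesystem between repeated operations is safe is exactly what the test described here checks.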
[jira] [Updated] (SOLR-13577) TestReplicationHandler.doTestIndexFetchOnMasterRestart failures
[ https://issues.apache.org/jira/browse/SOLR-13577?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mikhail Khludnev updated SOLR-13577: Attachment: (was: SOLR-13577.patch) > TestReplicationHandler.doTestIndexFetchOnMasterRestart failures > --- > > Key: SOLR-13577 > URL: https://issues.apache.org/jira/browse/SOLR-13577 > Project: Solr > Issue Type: Test > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Mikhail Khludnev >Assignee: Mikhail Khludnev >Priority: Major > Attachments: 8016-consoleText.zip, SOLR-13577.patch, > SOLR-13577.patch, SOLR-13577.patch, SOLR-13577.patch, SOLR-13577.patch, > screenshot-1.png, still failed on Windows consoleText.zip > > > It's seems like clear test failures. Failed 6 times in a row at lines 682, 684 > {quote} > org.apache.solr.handler.TestReplicationHandler.doTestIndexFetchOnMasterRestart > Failing for the past 1 build (Since Failed#8011 ) > Took 6 sec. > Error Message > null > Stacktrace > java.lang.NumberFormatException: null > at > __randomizedtesting.SeedInfo.seed([6AB4ECC957E5CCA2:B243282DFC3E0EFE]:0) > at java.base/java.lang.Integer.parseInt(Integer.java:614) > at java.base/java.lang.Integer.parseInt(Integer.java:770) > at > org.apache.solr.handler.TestReplicationHandler.doTestIndexFetchOnMasterRestart(TestReplicationHandler.java:682) > org.apache.solr.handler.TestReplicationHandler.doTestIndexFetchOnMasterRestart > Failing for the past 3 builds (Since Failed#8011 ) > Took 7.5 sec. > Stacktrace > java.lang.AssertionError > at > __randomizedtesting.SeedInfo.seed([E88092B4017D2D3D:30775650AAA6EF61]:0) > at org.junit.Assert.fail(Assert.java:86) > at org.junit.Assert.assertTrue(Assert.java:41) > at org.junit.Assert.assertTrue(Assert.java:52) > at > org.apache.solr.handler.TestReplicationHandler.doTestIndexFetchOnMasterRestart(TestReplicationHandler.java:684) > {quote} > !screenshot-1.png! 
[jira] [Updated] (SOLR-13577) TestReplicationHandler.doTestIndexFetchOnMasterRestart failures
[ https://issues.apache.org/jira/browse/SOLR-13577?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mikhail Khludnev updated SOLR-13577: Attachment: SOLR-13577.patch > TestReplicationHandler.doTestIndexFetchOnMasterRestart failures > --- > > Key: SOLR-13577 > URL: https://issues.apache.org/jira/browse/SOLR-13577 > Project: Solr > Issue Type: Test > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Mikhail Khludnev >Assignee: Mikhail Khludnev >Priority: Major > Attachments: 8016-consoleText.zip, SOLR-13577.patch, > SOLR-13577.patch, SOLR-13577.patch, SOLR-13577.patch, SOLR-13577.patch, > screenshot-1.png, still failed on Windows consoleText.zip > > > It's seems like clear test failures. Failed 6 times in a row at lines 682, 684 > {quote} > org.apache.solr.handler.TestReplicationHandler.doTestIndexFetchOnMasterRestart > Failing for the past 1 build (Since Failed#8011 ) > Took 6 sec. > Error Message > null > Stacktrace > java.lang.NumberFormatException: null > at > __randomizedtesting.SeedInfo.seed([6AB4ECC957E5CCA2:B243282DFC3E0EFE]:0) > at java.base/java.lang.Integer.parseInt(Integer.java:614) > at java.base/java.lang.Integer.parseInt(Integer.java:770) > at > org.apache.solr.handler.TestReplicationHandler.doTestIndexFetchOnMasterRestart(TestReplicationHandler.java:682) > org.apache.solr.handler.TestReplicationHandler.doTestIndexFetchOnMasterRestart > Failing for the past 3 builds (Since Failed#8011 ) > Took 7.5 sec. > Stacktrace > java.lang.AssertionError > at > __randomizedtesting.SeedInfo.seed([E88092B4017D2D3D:30775650AAA6EF61]:0) > at org.junit.Assert.fail(Assert.java:86) > at org.junit.Assert.assertTrue(Assert.java:41) > at org.junit.Assert.assertTrue(Assert.java:52) > at > org.apache.solr.handler.TestReplicationHandler.doTestIndexFetchOnMasterRestart(TestReplicationHandler.java:684) > {quote} > !screenshot-1.png! 
[jira] [Updated] (SOLR-13577) TestReplicationHandler.doTestIndexFetchOnMasterRestart failures
[ https://issues.apache.org/jira/browse/SOLR-13577?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mikhail Khludnev updated SOLR-13577: Attachment: SOLR-13577.patch > TestReplicationHandler.doTestIndexFetchOnMasterRestart failures > --- > > Key: SOLR-13577 > URL: https://issues.apache.org/jira/browse/SOLR-13577 > Project: Solr > Issue Type: Test > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Mikhail Khludnev >Assignee: Mikhail Khludnev >Priority: Major > Attachments: 8016-consoleText.zip, SOLR-13577.patch, > SOLR-13577.patch, SOLR-13577.patch, SOLR-13577.patch, SOLR-13577.patch, > screenshot-1.png, still failed on Windows consoleText.zip > > > It's seems like clear test failures. Failed 6 times in a row at lines 682, 684 > {quote} > org.apache.solr.handler.TestReplicationHandler.doTestIndexFetchOnMasterRestart > Failing for the past 1 build (Since Failed#8011 ) > Took 6 sec. > Error Message > null > Stacktrace > java.lang.NumberFormatException: null > at > __randomizedtesting.SeedInfo.seed([6AB4ECC957E5CCA2:B243282DFC3E0EFE]:0) > at java.base/java.lang.Integer.parseInt(Integer.java:614) > at java.base/java.lang.Integer.parseInt(Integer.java:770) > at > org.apache.solr.handler.TestReplicationHandler.doTestIndexFetchOnMasterRestart(TestReplicationHandler.java:682) > org.apache.solr.handler.TestReplicationHandler.doTestIndexFetchOnMasterRestart > Failing for the past 3 builds (Since Failed#8011 ) > Took 7.5 sec. > Stacktrace > java.lang.AssertionError > at > __randomizedtesting.SeedInfo.seed([E88092B4017D2D3D:30775650AAA6EF61]:0) > at org.junit.Assert.fail(Assert.java:86) > at org.junit.Assert.assertTrue(Assert.java:41) > at org.junit.Assert.assertTrue(Assert.java:52) > at > org.apache.solr.handler.TestReplicationHandler.doTestIndexFetchOnMasterRestart(TestReplicationHandler.java:684) > {quote} > !screenshot-1.png! 
[jira] [Commented] (SOLR-13577) TestReplicationHandler.doTestIndexFetchOnMasterRestart failures
[ https://issues.apache.org/jira/browse/SOLR-13577?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16875215#comment-16875215 ] Mikhail Khludnev commented on SOLR-13577: - Thanks for the advice, [~hossman]. I decided to move waitToStop into JettyRunner; it turns out it's not easy. What is the best way to treat this method? > TestReplicationHandler.doTestIndexFetchOnMasterRestart failures > --- > > Key: SOLR-13577 > URL: https://issues.apache.org/jira/browse/SOLR-13577 > Project: Solr > Issue Type: Test > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Mikhail Khludnev >Assignee: Mikhail Khludnev >Priority: Major > Attachments: 8016-consoleText.zip, SOLR-13577.patch, > SOLR-13577.patch, SOLR-13577.patch, SOLR-13577.patch, screenshot-1.png, still > failed on Windows consoleText.zip > > > It's seems like clear test failures. Failed 6 times in a row at lines 682, 684 > {quote} > org.apache.solr.handler.TestReplicationHandler.doTestIndexFetchOnMasterRestart > Failing for the past 1 build (Since Failed#8011 ) > Took 6 sec. > Error Message > null > Stacktrace > java.lang.NumberFormatException: null > at > __randomizedtesting.SeedInfo.seed([6AB4ECC957E5CCA2:B243282DFC3E0EFE]:0) > at java.base/java.lang.Integer.parseInt(Integer.java:614) > at java.base/java.lang.Integer.parseInt(Integer.java:770) > at > org.apache.solr.handler.TestReplicationHandler.doTestIndexFetchOnMasterRestart(TestReplicationHandler.java:682) > org.apache.solr.handler.TestReplicationHandler.doTestIndexFetchOnMasterRestart > Failing for the past 3 builds (Since Failed#8011 ) > Took 7.5 sec.
> Stacktrace > java.lang.AssertionError > at > __randomizedtesting.SeedInfo.seed([E88092B4017D2D3D:30775650AAA6EF61]:0) > at org.junit.Assert.fail(Assert.java:86) > at org.junit.Assert.assertTrue(Assert.java:41) > at org.junit.Assert.assertTrue(Assert.java:52) > at > org.apache.solr.handler.TestReplicationHandler.doTestIndexFetchOnMasterRestart(TestReplicationHandler.java:684) > {quote} > !screenshot-1.png!