[jira] [Commented] (SOLR-8274) Add per-request MDC logging based on user-provided value.
[ https://issues.apache.org/jira/browse/SOLR-8274?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17136353#comment-17136353 ]

David Smiley commented on SOLR-8274:
------------------------------------

Note that a recent issue, SOLR-14566, proposes a particular way of doing this, and there's some interesting conversation there.

> Add per-request MDC logging based on user-provided value.
> ---------------------------------------------------------
>
>                 Key: SOLR-8274
>                 URL: https://issues.apache.org/jira/browse/SOLR-8274
>             Project: Solr
>          Issue Type: Improvement
>          Components: logging
>            Reporter: Jason Gerlowski
>            Priority: Minor
>         Attachments: SOLR-8274.patch
>
> *Problem 1*: Currently, there's no way (AFAIK) to find all log messages
> associated with a particular request.
> *Problem 2*: There's also no easy way for multi-tenant Solr setups to find
> all log messages associated with a particular customer/tenant.
> Both of these problems would be more manageable if Solr could be configured
> to record an MDC tag based on a header, or some other user-provided value.
> This would allow admins to group together logs about a single request. If
> the same header value is repeated across multiple requests, this
> functionality could also be used to group together arbitrary requests, such
> as those that come from a particular user.

--
This message was sent by Atlassian Jira
(v8.3.4#803005)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org
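The idea proposed in the issue can be sketched with a thread-local map acting as a minimal stand-in for org.slf4j.MDC (which is what a real implementation would use, so the tag can appear in Log4j2 pattern layouts). `RequestMdc` and the key name `requestTag` are hypothetical illustration names, not Solr APIs:

```java
import java.util.HashMap;
import java.util.Map;

// Minimal sketch of per-request tagging, assuming a servlet-filter-style hook
// that reads a user-provided header at the start of each request. A real
// implementation would use org.slf4j.MDC instead of this hand-rolled map.
public class RequestMdc {
    private static final ThreadLocal<Map<String, String>> CTX =
            ThreadLocal.withInitial(HashMap::new);

    // Record a user-provided value for the duration of the request.
    public static void put(String key, String value) {
        CTX.get().put(key, value);
    }

    public static String get(String key) {
        return CTX.get().get(key);
    }

    // Must be called when the request finishes, or the tag leaks to the
    // next request served by this thread.
    public static void clear() {
        CTX.get().clear();
    }
}
```

Every log line emitted between `put` and `clear` could then carry the tag, which is what lets an admin group all messages for one request (or, if the same header value is reused, one tenant).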
[jira] [Updated] (SOLR-14571) Index download speed while replicating is fixed at 5.1 in replication.html
[ https://issues.apache.org/jira/browse/SOLR-14571?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Florin Babes updated SOLR-14571:
--------------------------------
    Status: Open  (was: Patch Available)

> Index download speed while replicating is fixed at 5.1 in replication.html
> --------------------------------------------------------------------------
>
>                 Key: SOLR-14571
>                 URL: https://issues.apache.org/jira/browse/SOLR-14571
>             Project: Solr
>          Issue Type: Bug
>      Security Level: Public (Default Security Level. Issues are Public)
>          Components: Admin UI
>    Affects Versions: 8.0, master (9.0), 8.5.2
>            Reporter: Florin Babes
>            Priority: Trivial
>              Labels: AdminUI, Replication
>         Attachments: SOLR-14571.patch
>
> Hello,
> While checking ways to optimize the speed of replication, I've noticed that
> the index download speed is fixed at 5.1 in replication.html. Is there a
> reason for that?
[jira] [Updated] (SOLR-14571) Index download speed while replicating is fixed at 5.1 in replication.html
[ https://issues.apache.org/jira/browse/SOLR-14571?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Florin Babes updated SOLR-14571:
--------------------------------
    Status: Patch Available  (was: Open)

> Index download speed while replicating is fixed at 5.1 in replication.html
[jira] [Updated] (SOLR-14571) Index download speed while replicating is fixed at 5.1 in replication.html
[ https://issues.apache.org/jira/browse/SOLR-14571?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Florin Babes updated SOLR-14571:
--------------------------------
    Attachment: SOLR-14571.patch

> Index download speed while replicating is fixed at 5.1 in replication.html
[jira] [Updated] (SOLR-14571) Index download speed while replicating is fixed at 5.1 in replication.html
[ https://issues.apache.org/jira/browse/SOLR-14571?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Florin Babes updated SOLR-14571:
--------------------------------
    Attachment: (was: SOLR-14571.patch)

> Index download speed while replicating is fixed at 5.1 in replication.html
[jira] [Updated] (SOLR-14384) Stack SolrRequestInfo
[ https://issues.apache.org/jira/browse/SOLR-14384?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

David Smiley updated SOLR-14384:
--------------------------------
    Fix Version/s: 8.6
       Resolution: Fixed
           Status: Resolved  (was: Patch Available)

> Stack SolrRequestInfo
> ---------------------
>
>                 Key: SOLR-14384
>                 URL: https://issues.apache.org/jira/browse/SOLR-14384
>             Project: Solr
>          Issue Type: Improvement
>            Reporter: David Smiley
>            Assignee: David Smiley
>            Priority: Minor
>             Fix For: 8.6
>
>          Time Spent: 3h 10m
>  Remaining Estimate: 0h
>
> Sometimes SolrRequestInfo needs to be suspended/overridden with a new one
> that is used temporarily. Examples are in the {{[subquery]}} transformer,
> in the warming of caches, and in QuerySenderListener (another type of
> warming), and maybe others. This can be annoying to do correctly, and in at
> least one place it isn't done correctly. SolrRequestInfoSuspender shows
> some of the complexity. In this issue, [~dsmiley] proposes using a stack
> internally in SolrRequestInfo that is pushed and popped. It's not the only
> way to solve this, but it's one way.
> See linked issues for the context and discussion.
[jira] [Commented] (SOLR-14384) Stack SolrRequestInfo
[ https://issues.apache.org/jira/browse/SOLR-14384?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17136333#comment-17136333 ]

ASF subversion and git services commented on SOLR-14384:
--------------------------------------------------------

Commit 35bdf9b413512fa4b2e360df14991f27462ecb6f in lucene-solr's branch refs/heads/branch_8x from Nazerke Seidan
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=35bdf9b ]

SOLR-14384: SolrRequestInfo now stacks internally.

* "set" now MUST pair with a "clear"
* fixes SolrIndexSearcher.warm which should have re-instated previous SRI
* cleans up some SRI set/clear users

Closes #1527

(cherry picked from commit 2da71c2a405483e2cf5270dfc20cbd760cd66486)

> Stack SolrRequestInfo
[jira] [Commented] (SOLR-14384) Stack SolrRequestInfo
[ https://issues.apache.org/jira/browse/SOLR-14384?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17136323#comment-17136323 ]

ASF subversion and git services commented on SOLR-14384:
--------------------------------------------------------

Commit 2da71c2a405483e2cf5270dfc20cbd760cd66486 in lucene-solr's branch refs/heads/master from Nazerke Seidan
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=2da71c2 ]

SOLR-14384: SolrRequestInfo now stacks internally.

* "set" now MUST pair with a "clear"
* fixes SolrIndexSearcher.warm which should have re-instated previous SRI
* cleans up some SRI set/clear users

Closes #1527

> Stack SolrRequestInfo
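The commit message above describes the stacking behavior: every "set" pushes and every "clear" pops, so a temporarily overriding request (e.g. the {{[subquery]}} transformer or cache warming) can restore the outer request's info when it finishes. A minimal sketch of the pattern, with a plain Object standing in for SolrRequestInfo (the real class carries request state and extra validation, so this is an illustration only):

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Sketch of a thread-local stack with paired set/clear, the pattern this
// issue introduces. Not the actual Solr implementation.
public class RequestInfoStack {
    private static final ThreadLocal<Deque<Object>> STACK =
            ThreadLocal.withInitial(ArrayDeque::new);

    // Every set() MUST be paired with a clear(), typically in a finally block.
    public static void set(Object info) {
        STACK.get().push(info);
    }

    public static void clear() {
        STACK.get().pop();
    }

    // The currently active info: the most recent un-cleared set().
    public static Object get() {
        return STACK.get().peek();
    }
}
```

With this shape, a nested caller can push a temporary info, do its work, and pop, and the outer request's info becomes visible again automatically, which is what the SolrIndexSearcher.warm fix relies on.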
[GitHub] [lucene-solr] dsmiley closed pull request #1527: SOLR-14384 Stack SolrRequestInfo
dsmiley closed pull request #1527:
URL: https://github.com/apache/lucene-solr/pull/1527

----

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org
[jira] [Commented] (SOLR-14516) NPE during Realtime GET
[ https://issues.apache.org/jira/browse/SOLR-14516?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17136308#comment-17136308 ]

Noble Paul commented on SOLR-14516:
-----------------------------------

Ishan Chattopadhyaya, you are right. There are 2 places we need to fix this. As a generic JSON serializer, we must support sending a null value for a String. We should also fix this method?

{code:java}
public String stringValue() {
  if (fieldsData instanceof CharSequence || fieldsData instanceof Number) {
    return fieldsData.toString();
  } else {
    return null;
  }
}
{code}

> NPE during Realtime GET
> -----------------------
>
>                 Key: SOLR-14516
>                 URL: https://issues.apache.org/jira/browse/SOLR-14516
>             Project: Solr
>          Issue Type: Bug
>            Reporter: Noble Paul
>            Assignee: Noble Paul
>            Priority: Major
>             Fix For: 8.6
>
> The exact reason is unknown, but the following is the stacktrace:
>
> o.a.s.s.HttpSolrCall null:java.lang.NullPointerException
> 	at org.apache.solr.common.util.JsonTextWriter.writeStr(JsonTextWriter.java:83)
> 	at org.apache.solr.schema.StrField.write(StrField.java:101)
> 	at org.apache.solr.response.TextResponseWriter.writeVal(TextResponseWriter.java:124)
> 	at org.apache.solr.response.JSONWriter.writeSolrDocument(JSONWriter.java:106)
> 	at org.apache.solr.response.TextResponseWriter.writeSolrDocumentList(TextResponseWriter.java:170)
> 	at org.apache.solr.response.TextResponseWriter.writeVal(TextResponseWriter.java:147)
> 	at org.apache.solr.common.util.JsonTextWriter.writeNamedListAsMapWithDups(JsonTextWriter.java:386)
> 	at org.apache.solr.common.util.JsonTextWriter.writeNamedList(JsonTextWriter.java:292)
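The fix direction described in the comment, emitting a JSON null for a null String instead of dereferencing it, could look like the sketch below. This is a standalone illustration of the idea, not the actual JsonTextWriter code; `JsonStrWriter` is a hypothetical name:

```java
// Sketch of a null-tolerant string writer, assuming the fix discussed above:
// a generic JSON serializer should emit a JSON null rather than throw an NPE
// when the field's string value is null.
public class JsonStrWriter {
    public static String writeStr(String val) {
        if (val == null) {
            return "null";              // JSON null instead of an NPE
        }
        StringBuilder sb = new StringBuilder("\"");
        for (int i = 0; i < val.length(); i++) {
            char c = val.charAt(i);
            switch (c) {
                case '"':  sb.append("\\\""); break;
                case '\\': sb.append("\\\\"); break;
                case '\n': sb.append("\\n");  break;
                default:   sb.append(c);
            }
        }
        return sb.append('"').toString();
    }
}
```

The null check at the top is the whole point: the stack trace in the issue shows the NPE originating in `JsonTextWriter.writeStr`, which is exactly the spot that assumed a non-null value.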
[jira] [Created] (SOLR-14575) Solr restore is failing when basic authentication is enabled
Yaswanth created SOLR-14575:
-------------------------------

             Summary: Solr restore is failing when basic authentication is enabled
                 Key: SOLR-14575
                 URL: https://issues.apache.org/jira/browse/SOLR-14575
             Project: Solr
          Issue Type: Bug
      Security Level: Public (Default Security Level. Issues are Public)
          Components: Backup/Restore
    Affects Versions: 8.2
            Reporter: Yaswanth

Hi Team,

I was testing backup/restore for SolrCloud, and it's failing exactly when I try to restore a successfully backed-up collection.

I am using Solr 8.2 with basic authentication enabled and a collection created with 2 replicas. When I run the backup like

curl -u xxx:xxx -k 'https://x.x.x.x:8080/solr/admin/collections?action=BACKUP&name=test&collection=test&location=/solrdatabkup'

it worked fine, and I do see a folder created with the collection name under /solrdatabackup. But now when I delete the existing collection and then try running the restore API like

curl -u xxx:xxx -k 'https://x.x.x.x:8080/solr/admin/collections?action=RESTORE&name=test&collection=test&location=/solrdatabkup'

it throws an error like

{
  "responseHeader":{
    "status":500,
    "QTime":457},
  "Operation restore caused exception:":"org.apache.solr.common.SolrException:org.apache.solr.common.SolrException: ADDREPLICA failed to create replica",
  "exception":{
    "msg":"ADDREPLICA failed to create replica",
    "rspCode":500},
  "error":{
    "metadata":[
      "error-class","org.apache.solr.common.SolrException",
      "root-error-class","org.apache.solr.common.SolrException"],
    "msg":"ADDREPLICA failed to create replica",
    "trace":"org.apache.solr.common.SolrException: ADDREPLICA failed to create replica\n\tat org.apache.solr.client.solrj.SolrResponse.getException(SolrResponse.java:53)\n\tat 
org.apache.solr.handler.admin.CollectionsHandler.invokeAction(CollectionsHandler.java:280)\n\tat org.apache.solr.handler.admin.CollectionsHandler.handleRequestBody(CollectionsHandler.java:252)\n\tat org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:199)\n\tat org.apache.solr.servlet.HttpSolrCall.handleAdmin(HttpSolrCall.java:820)\n\tat org.apache.solr.servlet.HttpSolrCall.handleAdminRequest(HttpSolrCall.java:786)\n\tat org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:546)\n\tat org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:423)\n\tat org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:350)\n\tat org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1602)\n\tat org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:540)\n\tat org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:146)\n\tat org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)\n\tat org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)\n\tat org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:257)\n\tat org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1711)\n\tat org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:255)\n\tat org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1347)\n\tat org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:203)\n\tat org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:480)\n\tat org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1678)\n\tat org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:201)\n\tat org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1249)\n\tat org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:144)\n\tat 
org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:220)\n\tat org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:152)\n\tat org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)\n\tat org.eclipse.jetty.rewrite.handler.RewriteHandler.handle(RewriteHandler.java:335)\n\tat org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)\n\tat org.eclipse.jetty.server.Server.handle(Server.java:505)\n\tat org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:370)\n\tat org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:267)\n\tat org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:305)\n\tat org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:103)\n\tat org.eclipse.j
[jira] [Created] (SOLR-14574) Fix or suppress warnings in solr/core/src/test
Erick Erickson created SOLR-14574:
-------------------------------------

             Summary: Fix or suppress warnings in solr/core/src/test
                 Key: SOLR-14574
                 URL: https://issues.apache.org/jira/browse/SOLR-14574
             Project: Solr
          Issue Type: Sub-task
            Reporter: Erick Erickson
            Assignee: Erick Erickson

Just when I thought I was done, I ran testClasses.

I'm going to do this a little differently. Rather than doing a directory at a time, I'll just fix a bunch, push, fix a bunch more, and push, all on this Jira until I'm done.
[jira] [Created] (SOLR-14573) Fix or suppress warnings in solrj/src/test
Erick Erickson created SOLR-14573:
-------------------------------------

             Summary: Fix or suppress warnings in solrj/src/test
                 Key: SOLR-14573
                 URL: https://issues.apache.org/jira/browse/SOLR-14573
             Project: Solr
          Issue Type: Sub-task
            Reporter: Erick Erickson
            Assignee: Erick Erickson

Bah. The target testClasses shows over 1,000 _more_ warnings.

I'm going to do this a little differently. Rather than doing a directory at a time, I'll just fix a bunch, push, fix a bunch more, and push, all on this Jira until I'm done.
[jira] [Updated] (SOLR-14569) Configuring a shardHandlerFactory on the /select requestHandler results in HTTP 401 when searching on alias in secured Solr
[ https://issues.apache.org/jira/browse/SOLR-14569?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Isabelle Giguere updated SOLR-14569:
------------------------------------
    Summary: Configuring a shardHandlerFactory on the /select requestHandler results in HTTP 401 when searching on alias in secured Solr  (was: HTTP 401 when searching on alias in secured Solr)

> Configuring a shardHandlerFactory on the /select requestHandler results in
> HTTP 401 when searching on alias in secured Solr
> --------------------------------------------------------------------------
>
>                 Key: SOLR-14569
>                 URL: https://issues.apache.org/jira/browse/SOLR-14569
>             Project: Solr
>          Issue Type: Bug
>      Security Level: Public (Default Security Level. Issues are Public)
>          Components: Authentication
>    Affects Versions: master (9.0), 8.5
>         Environment: Unit test on master branch (9x) built on Windows 10 with Java 11
>                      Solr 8.5.0 instance running on CentOS 7.7 with Java 11
>            Reporter: Isabelle Giguere
>            Priority: Major
>         Attachments: SOLR-14569.patch, SOLR-14569.patch, curl_requests-responses.txt, security.json, security.json, solr.log, solr_conf.zip, updated_solr_conf.zip
>
> The issue was first noticed on an instance of Solr 8.5.0, after securing
> Solr with security.json.
> Searching on a single collection returns the expected results, but
> searching on an alias returns HTTP 401.
> *Note that this issue is not reproduced when the collections are created
> using the _default configuration.*
> The attached patch includes a unit test to query on an alias. *Fixed and
> updated as per [~gerlowskija]'s comments.*
> *Patch applies on master branch (9x).*
> The unit test is added to the test class that was originally part of the
> patch to fix SOLR-13510.
> I also attach:
> - our product-specific Solr configuration, modified to remove irrelevant plugins and fields
> - security.json with user 'admin' (pwd 'admin')
> -- Note that forwardCredentials true or false does not modify the behavior
> To test with this configuration:
> - Download and unzip Solr 8.5.0
> - Modify ./bin/solr.in.sh :
> -- ZK_HOST (optional)
> -- SOLR_AUTH_TYPE="basic"
> -- SOLR_AUTHENTICATION_OPTS="-Dbasicauth=admin:admin"
> - Upload security.json into Zookeeper
> -- ./bin/solr zk cp file:/path/to/security.json zk:/path/to/solr/security.json [-z :[/]]
> - Start Solr in cloud mode
> -- ./bin/solr -c
> - Upload the provided configuration
> -- ./bin/solr zk upconfig -z :[/] -n conf_en -d /path/to/folder/conf/
> - Create 2 collections using the uploaded configuration
> -- test1, test2
> - Create an alias grouping the 2 collections
> -- test = test1, test2
> - Query (/select?q=*:*) one collection
> -- results in a successful Solr response
> - Query the alias (/select?q=*:*)
> -- results in HTTP 401
> There is no need to add documents to observe the issue.
[jira] [Comment Edited] (SOLR-14569) HTTP 401 when searching on alias in secured Solr
[ https://issues.apache.org/jira/browse/SOLR-14569?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17136195#comment-17136195 ]

Isabelle Giguere edited comment on SOLR-14569 at 6/15/20, 11:21 PM:
--------------------------------------------------------------------

It took a lot of trial and error, but... I figured it out.

There's a shardHandlerFactory defined in some of the requestHandlers in solrconfig.xml. There's also the global shardHandlerFactory defined in solr.xml. Defining a specific shardHandlerFactory in the /select requestHandler somehow prevents the Solr auth info from being transferred to shard requests.

Ref: solrconfig.xml in the attached configuration(s)
{code}
5 5 5
{code}

If the shardHandlerFactory in the /select requestHandler is removed, the query on an alias magically works. Authentication info is sent to shards.

What kills me is why... I went through CHANGES.txt to find hints about why overriding the global shardHandlerFactory would cause any sort of failure. The only thing I could find was the mention that since Solr 8.0.0, "HttpShardHandlerFactory's defaultClient is now a Http2SolrClient". The parameters 'maxConnections' and 'maxConnectionsPerHost', which are not supported anymore and thus could have been a good reason for a failure, are found neither in solr.xml nor in solrconfig.xml.

A quick look at ShardHandlerFactory, HttpShardHandlerFactory, HttpShardHandler, RequestHandlerBase, and SearchHandler does not provide any obvious explanation either.

Actually, documentation clearly states that a shardHandlerFactory can be set for a requestHandler: https://lucene.apache.org/solr/guide/8_5/distributed-requests.html

So I'm changing the title of this ticket. There is something odd here. But at least there's a workaround: do not configure a specific shardHandlerFactory on a requestHandler (especially not the /select search handler).

was (Author: igiguere):
It took a lot of trial and error, but... I figured it out.

There's a shardHandlerFactory defined in some of the requestHandlers in solrconfig.xml. There's also the global shardHandlerFactory defined in solr.xml. Defining a specific shardHandlerFactory in the /select requestHandler somehow prevents the Solr auth info from being transferred to shard requests.

Ref: solrconfig.xml in the attached configuration(s)
{code}
5 5 5
{code}

If the shardHandlerFactory in the /select requestHandler is removed, the query on an alias magically works. Authentication info is sent to shards.

What kills me is why... I went through CHANGES.txt to find hints about why overriding the global shardHandlerFactory would cause any sort of failure. The only thing I could find was the mention that since Solr 8.0.0, "HttpShardHandlerFactory's defaultClient is now a Http2SolrClient". The parameters 'maxConnections' and 'maxConnectionsPerHost', which are not supported anymore and thus could have been a good reason for a failure, are found neither in solr.xml nor in solrconfig.xml.

A quick look at ShardHandlerFactory, HttpShardHandlerFactory, HttpShardHandler, RequestHandlerBase, and SearchHandler does not provide any obvious explanation either.

Sigh...

> HTTP 401 when searching on alias in secured Solr
[jira] [Commented] (SOLR-14569) HTTP 401 when searching on alias in secured Solr
[ https://issues.apache.org/jira/browse/SOLR-14569?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17136195#comment-17136195 ]

Isabelle Giguere commented on SOLR-14569:
-----------------------------------------

It took a lot of trial and error, but... I figured it out.

There's a shardHandlerFactory defined in some of the requestHandlers in solrconfig.xml. There's also the global shardHandlerFactory defined in solr.xml. Defining a specific shardHandlerFactory in the /select requestHandler somehow prevents the Solr auth info from being transferred to shard requests.

Ref: solrconfig.xml in the attached configuration(s)
{code}
5 5 5
{code}

If the shardHandlerFactory in the /select requestHandler is removed, the query on an alias magically works. Authentication info is sent to shards.

What kills me is why... I went through CHANGES.txt to find hints about why overriding the global shardHandlerFactory would cause any sort of failure. The only thing I could find was the mention that since Solr 8.0.0, "HttpShardHandlerFactory's defaultClient is now a Http2SolrClient". The parameters 'maxConnections' and 'maxConnectionsPerHost', which are not supported anymore and thus could have been a good reason for a failure, are found neither in solr.xml nor in solrconfig.xml.

A quick look at ShardHandlerFactory, HttpShardHandlerFactory, HttpShardHandler, RequestHandlerBase, and SearchHandler does not provide any obvious explanation either.

Sigh...

> HTTP 401 when searching on alias in secured Solr
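Jira's markup swallowed the XML tags of the solrconfig.xml snippet quoted in the comments above, leaving only the bare "5 5 5" values. For orientation, the general shape of a shardHandlerFactory configured directly on a requestHandler, as documented on the distributed-requests page cited above, looks roughly like this. The element names and the value 5 here are illustrative placeholders, not the reporter's actual attached configuration:

```xml
<!-- Sketch of a per-requestHandler shardHandlerFactory, the pattern that
     triggers the 401 on alias queries. Parameter names and values are
     illustrative, not the attached solr_conf.zip contents. -->
<requestHandler name="/select" class="solr.SearchHandler">
  <shardHandlerFactory class="HttpShardHandlerFactory">
    <int name="socketTimeout">5</int>
    <int name="connTimeout">5</int>
  </shardHandlerFactory>
</requestHandler>
```

The workaround in the comment amounts to deleting the inner shardHandlerFactory element so the handler falls back to the global one defined in solr.xml.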
[jira] [Commented] (LUCENE-8723) Bad interaction bewteen WordDelimiterGraphFilter, StopFilter and FlattenGraphFilter
[ https://issues.apache.org/jira/browse/LUCENE-8723?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17136193#comment-17136193 ]

Nicolás Lichtmaier commented on LUCENE-8723:
--------------------------------------------

This is slightly different, but probably the same bug. I just encountered this. This gives an exception, and you don't need to have assertions enabled. The code is:

{code}
Builder builder = CustomAnalyzer.builder();
builder.withTokenizer(StandardTokenizerFactory.class);
builder.addTokenFilter(WordDelimiterGraphFilterFactory.class, "preserveOriginal", "1");
builder.addTokenFilter(LowerCaseFilterFactory.class);
builder.addTokenFilter(StopFilterFactory.class);
builder.addTokenFilter(FlattenGraphFilterFactory.class);
Analyzer analyzer = builder.build();

TokenStream ts = analyzer.tokenStream("f", new StringReader("'MICROSOFT_KERBEROS_NAME_A' : undeclared\nidentifier\nkerb_w2k"));
ts.reset();
while(ts.incrementToken())
    ;
{code}

and the exception is:

{code}
Exception in thread "main" java.lang.IllegalArgumentException: Position length must be 1 or greater: got -7
	at org.apache.lucene.analysis.tokenattributes.PackedTokenAttributeImpl.setPositionLength(PackedTokenAttributeImpl.java:74)
	at org.apache.lucene.analysis.core.FlattenGraphFilter.releaseBufferedToken(FlattenGraphFilter.java:214)
	at org.apache.lucene.analysis.core.FlattenGraphFilter.incrementToken(FlattenGraphFilter.java:258)
	at com.wolfram.textsearch.AnalyzerBug.main(AnalyzerBug.java:34)
{code}

I'm using Lucene 8.3.

> Bad interaction bewteen WordDelimiterGraphFilter, StopFilter and
> FlattenGraphFilter
> ----------------------------------------------------------------
>
>                 Key: LUCENE-8723
>                 URL: https://issues.apache.org/jira/browse/LUCENE-8723
>             Project: Lucene - Core
>          Issue Type: Bug
>          Components: modules/analysis
>    Affects Versions: 7.7.1, 8.0, 8.3
>            Reporter: Nicolás Lichtmaier
>            Priority: Major
>
> I was debugging an issue (missing tokens after analysis) and when I enabled
> Java assertions I uncovered a bug when using WordDelimiterGraphFilter +
> StopFilter + FlattenGraphFilter.
> I could reproduce the issue in a small piece of code. This code gives an
> assertion failure when assertions are enabled (-ea java option):
> {code:java}
> Builder builder = CustomAnalyzer.builder();
> builder.withTokenizer(StandardTokenizerFactory.class);
> builder.addTokenFilter(WordDelimiterGraphFilterFactory.class, "preserveOriginal", "1");
> builder.addTokenFilter(StopFilterFactory.class);
> builder.addTokenFilter(FlattenGraphFilterFactory.class);
> Analyzer analyzer = builder.build();
>
> TokenStream ts = analyzer.tokenStream("*", new StringReader("x7in"));
> ts.reset();
> while(ts.incrementToken())
>     ;
> {code}
> This gives:
> {code}
> Exception in thread "main" java.lang.AssertionError: 2
> 	at org.apache.lucene.analysis.core.FlattenGraphFilter.releaseBufferedToken(FlattenGraphFilter.java:195)
> 	at org.apache.lucene.analysis.core.FlattenGraphFilter.incrementToken(FlattenGraphFilter.java:258)
> 	at com.wolfram.textsearch.AnalyzerError.main(AnalyzerError.java:32)
> {code}
> Maybe removing stop words after WordDelimiterGraphFilter is wrong, I don't
> know. However, it is the only way to process stop-words generated by that
> filter. In any case, it should not eat tokens or produce assertion errors.
[jira] [Commented] (SOLR-14566) Record "NOW" on "coordinator" log messages
[ https://issues.apache.org/jira/browse/SOLR-14566?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17136192#comment-17136192 ] Tomas Eduardo Fernandez Lobbe commented on SOLR-14566: -- Note that Solr already has something similar, the "requestID". It's currently part of the debug component, but maybe it can be moved somewhere else if needed for all requests, I don't know. See https://github.com/apache/lucene-solr/blob/master/solr/core/src/java/org/apache/solr/handler/component/DebugComponent.java#L159. Also, if the requestID generation there is not optimal, we can fix it there. > Record "NOW" on "coordinator" log messages > -- > > Key: SOLR-14566 > URL: https://issues.apache.org/jira/browse/SOLR-14566 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Jason Gerlowski >Assignee: Jason Gerlowski >Priority: Minor > Time Spent: 20m > Remaining Estimate: 0h > > Currently, in SolrCore.java we log each search request that comes through > each core as it is finishing. This includes the path, query-params, QTime, > and status. In the case of a distributed search both the "coordinator" node > and each of the per-shard requests produce a log message. > When Solr is fielding many identical queries, such as those created by a > healthcheck or dashboard, it can be hard when examining logs to link the > per-shard requests with the "coordinator" request that came in upstream. > One thing that would make this easier is if the {{NOW}} param added to > per-shard requests is also included in the log message from the > "coordinator". While {{NOW}} isn't unique strictly speaking, it often is in > practice, and along with the query-params would allow debuggers to associate > shard requests with coordinator requests a large majority of the time.
> An alternative approach would be to create a {{qid}} or {{query-uuid}} when > the coordinator starts its work that can be logged everywhere. This provides > a stronger expectation around uniqueness, but would require UUID generation > on the coordinator, which may be non-negligible work at high QPS (maybe? I > have no idea). It also loses the neatness of reusing data already present on > the shard requests. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
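The cost concern about UUID generation raised above can be checked empirically. A rough micro-sketch in plain Java (not Solr code; class and method names here are hypothetical) comparing `UUID.randomUUID()`, which draws from a `SecureRandom`, against a cheaper `ThreadLocalRandom`-based id that is still unique enough for correlating log lines:

```java
import java.util.UUID;
import java.util.concurrent.ThreadLocalRandom;

public class RequestIdCost {
    // Cheaper alternative to UUID.randomUUID(): two pseudo-random longs,
    // hex-encoded. Not cryptographically strong, but collisions are
    // vanishingly unlikely for log-correlation purposes.
    static String cheapId() {
        ThreadLocalRandom r = ThreadLocalRandom.current();
        return Long.toHexString(r.nextLong()) + Long.toHexString(r.nextLong());
    }

    public static void main(String[] args) {
        int n = 100_000;
        long t0 = System.nanoTime();
        for (int i = 0; i < n; i++) UUID.randomUUID();
        long secure = System.nanoTime() - t0;

        t0 = System.nanoTime();
        for (int i = 0; i < n; i++) cheapId();
        long cheap = System.nanoTime() - t0;

        // Timings vary by machine; randomUUID() is typically the slower of
        // the two because it goes through SecureRandom.
        System.out.println("UUID.randomUUID: " + secure / 1_000_000 + " ms for " + n);
        System.out.println("cheapId:         " + cheap / 1_000_000 + " ms for " + n);
    }
}
```

Either way, the per-request cost is small in absolute terms; the reuse-`NOW` approach simply avoids it entirely.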
[jira] [Updated] (LUCENE-8723) Bad interaction between WordDelimiterGraphFilter, StopFilter and FlattenGraphFilter
[ https://issues.apache.org/jira/browse/LUCENE-8723?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nicolás Lichtmaier updated LUCENE-8723: --- Affects Version/s: 8.3 > Bad interaction between WordDelimiterGraphFilter, StopFilter and > FlattenGraphFilter > --- > > Key: LUCENE-8723 > URL: https://issues.apache.org/jira/browse/LUCENE-8723 > Project: Lucene - Core > Issue Type: Bug > Components: modules/analysis >Affects Versions: 7.7.1, 8.0, 8.3 >Reporter: Nicolás Lichtmaier >Priority: Major > > I was debugging an issue (missing tokens after analysis) and when I enabled > Java assertions I uncovered a bug when using WordDelimiterGraphFilter + > StopFilter + FlattenGraphFilter. > I could reproduce the issue in a small piece of code. This code gives an > assertion failure when assertions are enabled (-ea java option): > {code:java} > Builder builder = CustomAnalyzer.builder(); > builder.withTokenizer(StandardTokenizerFactory.class); > builder.addTokenFilter(WordDelimiterGraphFilterFactory.class, > "preserveOriginal", "1"); > builder.addTokenFilter(StopFilterFactory.class); > builder.addTokenFilter(FlattenGraphFilterFactory.class); > Analyzer analyzer = builder.build(); > > TokenStream ts = analyzer.tokenStream("*", new StringReader("x7in")); > ts.reset(); > while(ts.incrementToken()) > ; > {code} > This gives: > {code} > Exception in thread "main" java.lang.AssertionError: 2 > at > org.apache.lucene.analysis.core.FlattenGraphFilter.releaseBufferedToken(FlattenGraphFilter.java:195) > at > org.apache.lucene.analysis.core.FlattenGraphFilter.incrementToken(FlattenGraphFilter.java:258) > at com.wolfram.textsearch.AnalyzerError.main(AnalyzerError.java:32) > {code} > Maybe removing stop words after WordDelimiterGraphFilter is wrong, I don't > know. However, it is the only way to process stop-words generated by that filter. > In any case, it should not eat tokens or produce assertions.
-- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[GitHub] [lucene-solr] tflobbe commented on a change in pull request #1567: LUCENE-9402: Let MultiCollector handle minCompetitiveScore
tflobbe commented on a change in pull request #1567: URL: https://github.com/apache/lucene-solr/pull/1567#discussion_r440492115 ## File path: lucene/core/src/java/org/apache/lucene/search/MultiCollector.java ## @@ -134,69 +134,110 @@ public LeafCollector getLeafCollector(LeafReaderContext context) throws IOExcept case 1: return leafCollectors.get(0); default: -return new MultiLeafCollector(leafCollectors, cacheScores); +return new MultiLeafCollector(leafCollectors, cacheScores, scoreMode() == ScoreMode.TOP_SCORES); } } private static class MultiLeafCollector implements LeafCollector { private final boolean cacheScores; private final LeafCollector[] collectors; -private int numCollectors; +private final float[] minScores; +private final boolean skipNonCompetitiveScores; -private MultiLeafCollector(List collectors, boolean cacheScores) { +private MultiLeafCollector(List collectors, boolean cacheScores, boolean skipNonCompetitive) { this.collectors = collectors.toArray(new LeafCollector[collectors.size()]); this.cacheScores = cacheScores; - this.numCollectors = this.collectors.length; + this.skipNonCompetitiveScores = skipNonCompetitive; + this.minScores = this.skipNonCompetitiveScores ? new float[this.collectors.length] : null; } @Override public void setScorer(Scorable scorer) throws IOException { if (cacheScores) { scorer = new ScoreCachingWrappingScorer(scorer); } - scorer = new FilterScorable(scorer) { -@Override -public void setMinCompetitiveScore(float minScore) { - // Ignore calls to setMinCompetitiveScore so that if we wrap two - // collectors and one of them wants to skip low-scoring hits, then - // the other collector still sees all hits. We could try to reconcile - // min scores and take the maximum min score across collectors, but - // this is very unlikely to be helpful in practice. 
+ if (skipNonCompetitiveScores) { +for (int i = 0; i < collectors.length; ++i) { + final LeafCollector c = collectors[i]; + assert c != null; Review comment: hmm I had the null check before, but I thought `setScorer` had to only be called before the `collect` calls because of the javadoc: `Called before successive calls to {@link #collect(int)}.`. I'll put the null checks back. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
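The bookkeeping the PR adds (a `minScores` slot per wrapped collector) can be sketched in plain Java, without Lucene types; this is an illustration of the idea only, with hypothetical names. When several collectors each report a min competitive score, the shared scorer may only skip documents below the *minimum* of those bounds, otherwise a collector that still wants low-scoring hits would miss them:

```java
public class MinCompetitive {
    // One slot per wrapped collector; each collector can only raise its own
    // bound, mirroring the setMinCompetitiveScore contract.
    private final float[] minScores;
    private float current = 0f;

    MinCompetitive(int collectors) {
        minScores = new float[collectors];
    }

    void setMinCompetitiveScore(int collector, float score) {
        minScores[collector] = Math.max(minScores[collector], score);
        // The shared bound is the smallest bound across all collectors:
        // skipping anything above it would hide hits from some collector.
        float min = Float.MAX_VALUE;
        for (float s : minScores) {
            min = Math.min(min, s);
        }
        current = min;
    }

    float sharedBound() {
        return current;
    }

    public static void main(String[] args) {
        MinCompetitive m = new MinCompetitive(2);
        m.setMinCompetitiveScore(0, 1.5f);
        System.out.println(m.sharedBound()); // 0.0: collector 1 still wants every hit
        m.setMinCompetitiveScore(1, 0.8f);
        System.out.println(m.sharedBound()); // 0.8
    }
}
```

This is why the feature is only enabled when `scoreMode() == ScoreMode.TOP_SCORES`: every wrapped collector must tolerate skipped hits before any skipping is safe.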
[GitHub] [lucene-solr] dsmiley opened a new pull request #1582: Remove some needless toAbsolutePath calls
dsmiley opened a new pull request #1582: URL: https://github.com/apache/lucene-solr/pull/1582 Continuation of #1546. Here, we avoid calling toAbsolutePath when it's not needed because we know the path is already absolute. Look carefully; there are a few other changes. I liken this to pointless null checks in code when you know the thing isn't null.
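For context, a minimal standalone illustration (not code from the PR): `Path.toAbsolutePath()` returns an equal path when the path is already absolute, so calling it in that case is pure noise, much like the null checks the author mentions. The path used here is an arbitrary example and assumes a POSIX filesystem:

```java
import java.nio.file.Path;
import java.nio.file.Paths;

public class AbsolutePathDemo {
    public static void main(String[] args) {
        Path p = Paths.get("/var/solr/data"); // already absolute on POSIX
        // isAbsolute() tells us the conversion below is a no-op.
        System.out.println(p.isAbsolute());
        // toAbsolutePath() on an absolute path yields an equal path,
        // so the call is redundant.
        System.out.println(p.toAbsolutePath().equals(p));
    }
}
```

(Only relative paths need `toAbsolutePath()`, which resolves them against the `user.dir` working directory.)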
[jira] [Commented] (SOLR-14571) Index download speed while replicating is fixed at 5.1 in replication.html
[ https://issues.apache.org/jira/browse/SOLR-14571?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17136170#comment-17136170 ] Lucene/Solr QA commented on SOLR-14571: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 6s{color} | {color:red} SOLR-14571 does not apply to master. Rebase required? Wrong Branch? See https://wiki.apache.org/solr/HowToContribute#Creating_the_patch_file for help. {color} | \\ \\ || Subsystem || Report/Notes || | JIRA Issue | SOLR-14571 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/13005726/SOLR-14571.patch | | Console output | https://builds.apache.org/job/PreCommit-SOLR-Build/763/console | | Powered by | Apache Yetus 0.7.0 http://yetus.apache.org | This message was automatically generated. > Index download speed while replicating is fixed at 5.1 in replication.html > -- > > Key: SOLR-14571 > URL: https://issues.apache.org/jira/browse/SOLR-14571 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: Admin UI >Affects Versions: 8.0, master (9.0), 8.5.2 >Reporter: Florin Babes >Priority: Trivial > Labels: AdminUI, Replication > Attachments: SOLR-14571.patch > > > Hello, > While checking ways to optimize the speed of replication I've noticed that > the index download speed is fixed at 5.1 in replication.html. There is a > reason for that? -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[jira] [Comment Edited] (SOLR-14569) HTTP 401 when searching on alias in secured Solr
[ https://issues.apache.org/jira/browse/SOLR-14569?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17136158#comment-17136158 ] Isabelle Giguere edited comment on SOLR-14569 at 6/15/20, 9:40 PM: --- Attached: - updated_solr_conf.zip : "Same" as solr_conf.zip, but with the deprecated Trie*Fields and Filters replaced by current equivalents. - curl_requests-responses.txt : copy of activity on console for the 2 requests shown in solr.log - solr.log : shows 2 requests to Solr where updated_solr_conf.zip was uploaded, and security and collections were setup as is the description A few lines to help reading solr.log : - line 1243: -- Start GET request on one collection - line 1323: -- Response : 200 - line 1391: -- Start GET request on alias - line 1746: -- POST request to core test1_shard1_replica_n1 - line 1803: -- POST request to core test2_shard1_replica_n1 - line 2974: -- Response : 401 - line 3311: -- Solr response with HTTP 401 Extra note: upgrading Lucene Match version to 8.5.0 still fails for the alias. was (Author: igiguere): Attached: - updated_solr_conf.zip : "Same" as solr_conf.zip, but with the deprecated Trie*Fields and Filters replaced by current equivalents. 
- curl_requests-responses.txt : copy of activity on console for the 2 requests shown in solr.log - solr.log : shows 2 requests to Solr where updated_solr_conf.zip was uploaded, and security and collections were setup as is the description A few lines to help reading solr.log : - line 1243: -- Start GET request on one collection - line 1323: -- Response : 200 - line 1391: -- Start GET request on alias - line 1746: -- POST request to core test1_shard1_replica_n1 - line 1803: -- POST request to core test2_shard1_replica_n1 - line 2974: -- Response : 401 - line 3311: -- Solr response with HTTP 401 > HTTP 401 when searching on alias in secured Solr > > > Key: SOLR-14569 > URL: https://issues.apache.org/jira/browse/SOLR-14569 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: Authentication >Affects Versions: master (9.0), 8.5 > Environment: Unit test on master branch (9x) built on Windows 10 with > Java 11 > Solr 8.5.0 instance running on CentOS 7.7 with Java 11 >Reporter: Isabelle Giguere >Priority: Major > Attachments: SOLR-14569.patch, SOLR-14569.patch, > curl_requests-responses.txt, security.json, security.json, solr.log, > solr_conf.zip, updated_solr_conf.zip > > > The issue was first noticed on an instance of Solr 8.5.0, after securing Solr > with security.json. > Searching on a single collection returns the expected results, but searching > on an alias returns HTTP 401. > *Note that this issue is not reproduced when the collections are created > using the _default configuration.* > The attached patch includes a unit test to query on an alias. *Fixed and > updated as per [~gerlowskija]' comments* > *Patch applies on master branch (9x)*. > The unit test is added to the test class that was originally part of the > patch to fix SOLR-13510. 
> I also attach: > - our product-specific Solr configuration, modified to remove irrelevant > plugins and fields > - security.json with user 'admin' (pwd 'admin') > -- Note that forwardCredentials true or false does not modify the behavior > To test with this configuration: > - Download and unzip Solr 8.5.0 > - Modify ./bin/solr.in.sh : > -- ZK_HOST (optional) > -- SOLR_AUTH_TYPE="basic" > -- SOLR_AUTHENTICATION_OPTS="-Dbasicauth=admin:admin" > - Upload security.json into Zookeeper > -- ./bin/solr zk cp > [file:/path/to/security.json|file:///path/to/security.json] > zk:/path/to/solr/security.json [-z :[/]] > - Start Solr in cloud mode > -- ./bin/solr -c > - Upload the provided configuration > - ./bin/solr zk upconfig -z :[/] -n conf_en -d > /path/to/folder/conf/ > - Create 2 collections using the uploaded configuration > -- test1, test2 > - Create an alias grouping the 2 collections > -- test = test1, test2 > - Query (/select?q=*:*) one collection > -- results in successful Solr response > - Query the alias (/select?q=*:*) > -- results in HTTP 401 > There is no need to add documents to observe the issue. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[jira] [Updated] (SOLR-14569) HTTP 401 when searching on alias in secured Solr
[ https://issues.apache.org/jira/browse/SOLR-14569?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Isabelle Giguere updated SOLR-14569: Attachment: curl_requests-responses.txt > HTTP 401 when searching on alias in secured Solr > > > Key: SOLR-14569 > URL: https://issues.apache.org/jira/browse/SOLR-14569 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: Authentication >Affects Versions: master (9.0), 8.5 > Environment: Unit test on master branch (9x) built on Windows 10 with > Java 11 > Solr 8.5.0 instance running on CentOS 7.7 with Java 11 >Reporter: Isabelle Giguere >Priority: Major > Attachments: SOLR-14569.patch, SOLR-14569.patch, > curl_requests-responses.txt, security.json, security.json, solr.log, > solr_conf.zip, updated_solr_conf.zip > > > The issue was first noticed on an instance of Solr 8.5.0, after securing Solr > with security.json. > Searching on a single collection returns the expected results, but searching > on an alias returns HTTP 401. > *Note that this issue is not reproduced when the collections are created > using the _default configuration.* > The attached patch includes a unit test to query on an alias. *Fixed and > updated as per [~gerlowskija]' comments* > *Patch applies on master branch (9x)*. > The unit test is added to the test class that was originally part of the > patch to fix SOLR-13510. 
> I also attach: > - our product-specific Solr configuration, modified to remove irrelevant > plugins and fields > - security.json with user 'admin' (pwd 'admin') > -- Note that forwardCredentials true or false does not modify the behavior > To test with this configuration: > - Download and unzip Solr 8.5.0 > - Modify ./bin/solr.in.sh : > -- ZK_HOST (optional) > -- SOLR_AUTH_TYPE="basic" > -- SOLR_AUTHENTICATION_OPTS="-Dbasicauth=admin:admin" > - Upload security.json into Zookeeper > -- ./bin/solr zk cp > [file:/path/to/security.json|file:///path/to/security.json] > zk:/path/to/solr/security.json [-z :[/]] > - Start Solr in cloud mode > -- ./bin/solr -c > - Upload the provided configuration > - ./bin/solr zk upconfig -z :[/] -n conf_en -d > /path/to/folder/conf/ > - Create 2 collections using the uploaded configuration > -- test1, test2 > - Create an alias grouping the 2 collections > -- test = test1, test2 > - Query (/select?q=*:*) one collection > -- results in successful Solr response > - Query the alias (/select?q=*:*) > -- results in HTTP 401 > There is no need to add documents to observe the issue. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[jira] [Commented] (SOLR-14569) HTTP 401 when searching on alias in secured Solr
[ https://issues.apache.org/jira/browse/SOLR-14569?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17136158#comment-17136158 ] Isabelle Giguere commented on SOLR-14569: - Attached: - updated_solr_conf.zip : "Same" as solr_conf.zip, but with the deprecated Trie*Fields and Filters replaced by current equivalents. - curl_requests-responses.txt : copy of activity on console for the 2 requests shown in solr.log - solr.log : shows 2 requests to Solr where updated_solr_conf.zip was uploaded, and security and collections were setup as is the description A few lines to help reading solr.log : - line 1243: -- Start GET request on one collection - line 1323: -- Response : 200 - line 1391: -- Start GET request on alias - line 1746: -- POST request to core test1_shard1_replica_n1 - line 1803: -- POST request to core test2_shard1_replica_n1 - line 2974: -- Response : 401 - line 3311: -- Solr response with HTTP 401 > HTTP 401 when searching on alias in secured Solr > > > Key: SOLR-14569 > URL: https://issues.apache.org/jira/browse/SOLR-14569 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: Authentication >Affects Versions: master (9.0), 8.5 > Environment: Unit test on master branch (9x) built on Windows 10 with > Java 11 > Solr 8.5.0 instance running on CentOS 7.7 with Java 11 >Reporter: Isabelle Giguere >Priority: Major > Attachments: SOLR-14569.patch, SOLR-14569.patch, > curl_requests-responses.txt, security.json, security.json, solr.log, > solr_conf.zip, updated_solr_conf.zip > > > The issue was first noticed on an instance of Solr 8.5.0, after securing Solr > with security.json. > Searching on a single collection returns the expected results, but searching > on an alias returns HTTP 401. > *Note that this issue is not reproduced when the collections are created > using the _default configuration.* > The attached patch includes a unit test to query on an alias. 
*Fixed and > updated as per [~gerlowskija]' comments* > *Patch applies on master branch (9x)*. > The unit test is added to the test class that was originally part of the > patch to fix SOLR-13510. > I also attach: > - our product-specific Solr configuration, modified to remove irrelevant > plugins and fields > - security.json with user 'admin' (pwd 'admin') > -- Note that forwardCredentials true or false does not modify the behavior > To test with this configuration: > - Download and unzip Solr 8.5.0 > - Modify ./bin/solr.in.sh : > -- ZK_HOST (optional) > -- SOLR_AUTH_TYPE="basic" > -- SOLR_AUTHENTICATION_OPTS="-Dbasicauth=admin:admin" > - Upload security.json into Zookeeper > -- ./bin/solr zk cp > [file:/path/to/security.json|file:///path/to/security.json] > zk:/path/to/solr/security.json [-z :[/]] > - Start Solr in cloud mode > -- ./bin/solr -c > - Upload the provided configuration > - ./bin/solr zk upconfig -z :[/] -n conf_en -d > /path/to/folder/conf/ > - Create 2 collections using the uploaded configuration > -- test1, test2 > - Create an alias grouping the 2 collections > -- test = test1, test2 > - Query (/select?q=*:*) one collection > -- results in successful Solr response > - Query the alias (/select?q=*:*) > -- results in HTTP 401 > There is no need to add documents to observe the issue. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[jira] [Updated] (SOLR-14569) HTTP 401 when searching on alias in secured Solr
[ https://issues.apache.org/jira/browse/SOLR-14569?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Isabelle Giguere updated SOLR-14569: Attachment: updated_solr_conf.zip > HTTP 401 when searching on alias in secured Solr > > > Key: SOLR-14569 > URL: https://issues.apache.org/jira/browse/SOLR-14569 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: Authentication >Affects Versions: master (9.0), 8.5 > Environment: Unit test on master branch (9x) built on Windows 10 with > Java 11 > Solr 8.5.0 instance running on CentOS 7.7 with Java 11 >Reporter: Isabelle Giguere >Priority: Major > Attachments: SOLR-14569.patch, SOLR-14569.patch, security.json, > security.json, solr.log, solr_conf.zip, updated_solr_conf.zip > > > The issue was first noticed on an instance of Solr 8.5.0, after securing Solr > with security.json. > Searching on a single collection returns the expected results, but searching > on an alias returns HTTP 401. > *Note that this issue is not reproduced when the collections are created > using the _default configuration.* > The attached patch includes a unit test to query on an alias. *Fixed and > updated as per [~gerlowskija]' comments* > *Patch applies on master branch (9x)*. > The unit test is added to the test class that was originally part of the > patch to fix SOLR-13510. 
> I also attach: > - our product-specific Solr configuration, modified to remove irrelevant > plugins and fields > - security.json with user 'admin' (pwd 'admin') > -- Note that forwardCredentials true or false does not modify the behavior > To test with this configuration: > - Download and unzip Solr 8.5.0 > - Modify ./bin/solr.in.sh : > -- ZK_HOST (optional) > -- SOLR_AUTH_TYPE="basic" > -- SOLR_AUTHENTICATION_OPTS="-Dbasicauth=admin:admin" > - Upload security.json into Zookeeper > -- ./bin/solr zk cp > [file:/path/to/security.json|file:///path/to/security.json] > zk:/path/to/solr/security.json [-z :[/]] > - Start Solr in cloud mode > -- ./bin/solr -c > - Upload the provided configuration > - ./bin/solr zk upconfig -z :[/] -n conf_en -d > /path/to/folder/conf/ > - Create 2 collections using the uploaded configuration > -- test1, test2 > - Create an alias grouping the 2 collections > -- test = test1, test2 > - Query (/select?q=*:*) one collection > -- results in successful Solr response > - Query the alias (/select?q=*:*) > -- results in HTTP 401 > There is no need to add documents to observe the issue. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[jira] [Updated] (SOLR-14569) HTTP 401 when searching on alias in secured Solr
[ https://issues.apache.org/jira/browse/SOLR-14569?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Isabelle Giguere updated SOLR-14569: Attachment: solr.log > HTTP 401 when searching on alias in secured Solr > > > Key: SOLR-14569 > URL: https://issues.apache.org/jira/browse/SOLR-14569 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: Authentication >Affects Versions: master (9.0), 8.5 > Environment: Unit test on master branch (9x) built on Windows 10 with > Java 11 > Solr 8.5.0 instance running on CentOS 7.7 with Java 11 >Reporter: Isabelle Giguere >Priority: Major > Attachments: SOLR-14569.patch, SOLR-14569.patch, security.json, > security.json, solr.log, solr_conf.zip, updated_solr_conf.zip > > > The issue was first noticed on an instance of Solr 8.5.0, after securing Solr > with security.json. > Searching on a single collection returns the expected results, but searching > on an alias returns HTTP 401. > *Note that this issue is not reproduced when the collections are created > using the _default configuration.* > The attached patch includes a unit test to query on an alias. *Fixed and > updated as per [~gerlowskija]' comments* > *Patch applies on master branch (9x)*. > The unit test is added to the test class that was originally part of the > patch to fix SOLR-13510. 
> I also attach: > - our product-specific Solr configuration, modified to remove irrelevant > plugins and fields > - security.json with user 'admin' (pwd 'admin') > -- Note that forwardCredentials true or false does not modify the behavior > To test with this configuration: > - Download and unzip Solr 8.5.0 > - Modify ./bin/solr.in.sh : > -- ZK_HOST (optional) > -- SOLR_AUTH_TYPE="basic" > -- SOLR_AUTHENTICATION_OPTS="-Dbasicauth=admin:admin" > - Upload security.json into Zookeeper > -- ./bin/solr zk cp > [file:/path/to/security.json|file:///path/to/security.json] > zk:/path/to/solr/security.json [-z :[/]] > - Start Solr in cloud mode > -- ./bin/solr -c > - Upload the provided configuration > - ./bin/solr zk upconfig -z :[/] -n conf_en -d > /path/to/folder/conf/ > - Create 2 collections using the uploaded configuration > -- test1, test2 > - Create an alias grouping the 2 collections > -- test = test1, test2 > - Query (/select?q=*:*) one collection > -- results in successful Solr response > - Query the alias (/select?q=*:*) > -- results in HTTP 401 > There is no need to add documents to observe the issue. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[GitHub] [lucene-solr] mayya-sharipova commented on pull request #1351: LUCENE-9280: Collectors to skip noncompetitive documents
mayya-sharipova commented on pull request #1351: URL: https://github.com/apache/lucene-solr/pull/1351#issuecomment-644394269 @mikemccand We should see speed-ups by default on sort for numeric fields, as long as these fields are indexed both with docValues and points, and a full total hits count is not needed. I will be submitting a PR for `luceneutil` for this use-case. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[GitHub] [lucene-solr] epugh opened a new pull request #1581: SOLR-14572 document missing SearchComponents
epugh opened a new pull request #1581: URL: https://github.com/apache/lucene-solr/pull/1581 # Description Learned about an existing SearchComponent from 2012. Went to look at the related Ref Guide page and saw it was missing, plus two other components. # Solution Add a table linking to the JavaDocs. Also point to the http://solr.cool website. # Tests ant build of the ref guide. # Checklist Please review the following and check all that apply: - [x] I have reviewed the guidelines for [How to Contribute](https://wiki.apache.org/solr/HowToContribute) and my code conforms to the standards described there to the best of my ability. - [x] I have created a Jira issue and added the issue ID to my pull request title. - [x] I have given Solr maintainers [access](https://help.github.com/en/articles/allowing-changes-to-a-pull-request-branch-created-from-a-fork) to contribute to my PR branch. (optional but recommended) - [x] I have developed this patch against the `master` branch. - [ ] I have run `ant precommit` and the appropriate test suite. - [ ] I have added tests for my changes. - [x] I have added documentation for the [Ref Guide](https://github.com/apache/lucene-solr/tree/master/solr/solr-ref-guide) (for Solr changes only).
[jira] [Created] (SOLR-14572) Ref Guide doesn't cover all SearchComponents on the Search Components page
David Eric Pugh created SOLR-14572: -- Summary: Ref Guide doesn't cover all SearchComponents on the Search Components page Key: SOLR-14572 URL: https://issues.apache.org/jira/browse/SOLR-14572 Project: Solr Issue Type: Improvement Security Level: Public (Default Security Level. Issues are Public) Components: documentation Affects Versions: 8.5.2 Reporter: David Eric Pugh I went to [https://lucene.apache.org/solr/guide/8_5/requesthandlers-and-searchcomponents-in-solrconfig.html] to find details about the {{ResponseLogComponent}}, which was previously unknown to me, and it wasn't listed. Poking around, I saw that two more {{SearchComponents}}, the {{PhrasesIdentificationComponent}} and {{RealTimeGetComponent}}, aren't mentioned. I'd like to add a new section to the page with an inventory of the SearchComponents that ship with Solr.
[jira] [Commented] (LUCENE-8574) ExpressionFunctionValues should cache per-hit value
[ https://issues.apache.org/jira/browse/LUCENE-8574?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17136136#comment-17136136 ] Haoyu Zhai commented on LUCENE-8574: I've checked the current release and couldn't see this patch merged, and I don't think any other changes introduce similar functionality (though I'm not sure). Should we merge this? > ExpressionFunctionValues should cache per-hit value > --- > > Key: LUCENE-8574 > URL: https://issues.apache.org/jira/browse/LUCENE-8574 > Project: Lucene - Core > Issue Type: Bug >Affects Versions: 7.5, 8.0 >Reporter: Michael McCandless >Assignee: Robert Muir >Priority: Major > Attachments: LUCENE-8574.patch > > Time Spent: 1h > Remaining Estimate: 0h > > The original version of {{ExpressionFunctionValues}} had a simple per-hit > cache, so that nested expressions that reference the same common variable > would compute the value for that variable the first time it was referenced > and then use that cached value for all subsequent invocations, within one > hit. I think it was accidentally removed in LUCENE-7609? > This is quite important if you have non-trivial expressions that reference > the same variable multiple times. > E.g. if I have these expressions: > {noformat} > x = c + d > c = b + 2 > d = b * 2{noformat} > Then evaluating x should only cause b's value to be computed once (for a > given hit), but today it's computed twice. The problem is combinatoric if b > then references another variable multiple times, etc. > I think to fix this we just need to restore the per-hit cache?
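The per-hit cache described in the issue can be sketched in plain Java. This is a toy stand-in for what `ExpressionFunctionValues` would do, not Lucene code; the names are hypothetical. Each variable's value is computed at most once per hit and then reused by every expression that references it, using the exact `x = c + d`, `c = b + 2`, `d = b * 2` example from the issue:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

public class PerHitCache {
    // Per-hit memo table: cleared (or recreated) for every new hit.
    private final Map<String, Double> cache = new HashMap<>();
    static int computations = 0; // counts real evaluations of "b"

    // Compute a named variable for the current hit, memoizing the result so
    // repeated references are free. (Plain get/put rather than
    // computeIfAbsent, since the recursive inner lookups would otherwise
    // modify the map mid-compute.)
    double value(String name, Function<PerHitCache, Double> fn) {
        Double v = cache.get(name);
        if (v == null) {
            v = fn.apply(this);
            cache.put(name, v);
        }
        return v;
    }

    // Pretend "b" is an expensive leaf value (e.g. a doc-values read).
    double b() {
        computations++;
        return 3.0;
    }

    public static void main(String[] args) {
        PerHitCache hit = new PerHitCache();
        // x = c + d, c = b + 2, d = b * 2
        double c = hit.value("c", h -> h.value("b", PerHitCache::b) + 2);
        double d = hit.value("d", h -> h.value("b", PerHitCache::b) * 2);
        double x = c + d;
        System.out.println(x);            // 11.0
        System.out.println(computations); // 1: b evaluated once, not twice
    }
}
```

Without the memo table, `b` would be evaluated once for `c` and again for `d`, which is exactly the combinatoric blow-up the issue describes.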
[jira] [Resolved] (LUCENE-9405) IndexWriter incorrectly calls closeMergeReaders twice when the merged segment is 100% deleted
[ https://issues.apache.org/jira/browse/LUCENE-9405?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Simon Willnauer resolved LUCENE-9405. - Fix Version/s: 8.6, master (9.0) Resolution: Fixed > IndexWriter incorrectly calls closeMergeReaders twice when the merged segment > is 100% deleted > - > > Key: LUCENE-9405 > URL: https://issues.apache.org/jira/browse/LUCENE-9405 > Project: Lucene - Core > Issue Type: Bug > Components: core/index >Reporter: Michael McCandless >Assignee: Simon Willnauer >Priority: Minor > Fix For: master (9.0), 8.6 > > Time Spent: 1h 10m > Remaining Estimate: 0h > > This is the first spinoff from a [controversial PR to add a new index-time > feature to Lucene to merge small segments during > commit|https://github.com/apache/lucene-solr/pull/1552]. This can > substantially reduce the number of small index segments to search. > See specifically [this discussion > there|https://github.com/apache/lucene-solr/pull/1552#discussion_r440298695]. > {{IndexWriter}} seems to be missing a {{success = true}} inside > {{mergeMiddle}} in the case where all segments being merged have 100% > deletions and the segments will simply be dropped. > In this case, in master today, I think we are incorrectly calling > {{closeMergedReaders}} twice, first with {{suppressExceptions = false}} and > second time with {{true}}. > There is a [dedicated test case here showing the > issue|https://github.com/apache/lucene-solr/commit/cab5ef5e6f2bdcda59fd669a298ec137af9d], > but that test case relies on changes in the controversial feature (added > {{MergePolicy.findFullFlushMerges}}). I think it should be possible to make > another test case show the bug without that controversial feature, and I am > unsure why our existing randomized tests have not uncovered this yet ...
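The double-close described above follows from the common try/finally "success flag" idiom. The sketch below is a schematic with invented names, not {{IndexWriter}}'s actual code: when an early-exit path forgets to set {{success = true}}, the finally block runs the cleanup a second time.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Schematic of the try/finally success-flag idiom the issue describes.
// All names are illustrative stand-ins, not IndexWriter's real methods.
public class SuccessFlagDemo {
    static final AtomicInteger closeCalls = new AtomicInteger();

    static void closeMergeReaders(boolean suppressExceptions) {
        closeCalls.incrementAndGet(); // count how often cleanup runs
    }

    // allDeleted=true models the "merged segment is 100% deleted" early exit.
    // setSuccessOnEarlyExit=false reproduces the missing `success = true`.
    static void mergeMiddle(boolean setSuccessOnEarlyExit, boolean allDeleted) {
        boolean success = false;
        try {
            if (allDeleted) {
                closeMergeReaders(false);         // first close, suppressExceptions = false
                success = setSuccessOnEarlyExit;  // the line the bug omits
                return;
            }
            // ... normal merge work would happen here ...
            success = true;
        } finally {
            if (!success) {
                closeMergeReaders(true);          // second close, suppressExceptions = true
            }
        }
    }

    public static void main(String[] args) {
        mergeMiddle(false, true);
        System.out.println("buggy path close calls: " + closeCalls.getAndSet(0));
        mergeMiddle(true, true);
        System.out.println("fixed path close calls: " + closeCalls.get());
    }
}
```

On the buggy path the readers are closed twice (once in the try block, once in finally); setting the flag before the early return collapses that to a single close, which matches the fix title "Ensure IndexWriter only closes merge readers once."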
[jira] [Commented] (LUCENE-9405) IndexWriter incorrectly calls closeMergeReaders twice when the merged segment is 100% deleted
[ https://issues.apache.org/jira/browse/LUCENE-9405?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17136131#comment-17136131 ] ASF subversion and git services commented on LUCENE-9405: - Commit cb8b9a9cf4a9a7eb798bcfdb4abb2f0f68efb760 in lucene-solr's branch refs/heads/branch_8x from Simon Willnauer [ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=cb8b9a9 ] LUCENE-9405: Ensure IndexWriter only closes merge readers once. (#1580) IndexWriter incorrectly calls closeMergeReaders twice when the merged segment is 100% deleted ie. would produce a fully deleted segment.
[GitHub] [lucene-solr] s1monw merged pull request #1580: LUCENE-9405: Ensure IndexWriter only closes merge readers once.
s1monw merged pull request #1580: URL: https://github.com/apache/lucene-solr/pull/1580 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[jira] [Commented] (LUCENE-9405) IndexWriter incorrectly calls closeMergeReaders twice when the merged segment is 100% deleted
[ https://issues.apache.org/jira/browse/LUCENE-9405?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17136130#comment-17136130 ] ASF subversion and git services commented on LUCENE-9405: - Commit 47cffbcdd8aa4895c32b0b7a64379fd9f6dd02d5 in lucene-solr's branch refs/heads/master from Simon Willnauer [ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=47cffbc ] LUCENE-9405: Ensure IndexWriter only closes merge readers once. (#1580) IndexWriter incorrectly calls closeMergeReaders twice when the merged segment is 100% deleted ie. would produce a fully deleted segment.
[GitHub] [lucene-solr] msokolov commented on a change in pull request #1580: LUCENE-9405: Ensure IndexWriter only closes merge readers once.
msokolov commented on a change in pull request #1580: URL: https://github.com/apache/lucene-solr/pull/1580#discussion_r440405613 ## File path: lucene/core/src/test/org/apache/lucene/index/TestIndexWriter.java ## @@ -4167,4 +4167,38 @@ public void testSegmentCommitInfoId() throws IOException { } } } + + public void testMergeZeroDocsMergeIsClosedOnce() throws IOException { Review comment: Thanks for the test.
[GitHub] [lucene-solr] s1monw commented on a change in pull request #1580: LUCENE-9405: Ensure IndexWriter only closes merge readers once.
s1monw commented on a change in pull request #1580: URL: https://github.com/apache/lucene-solr/pull/1580#discussion_r440405692 ## File path: lucene/CHANGES.txt ## @@ -266,6 +266,9 @@ Bug Fixes * LUCENE-9362: Fix equality check in ExpressionValueSource#rewrite. This fixes rewriting of inner value sources. (Dmitry Emets) +* LUCENE-9405: IndexWriter incorrectly calls closeMergeReaders twice when the merged segment is 100% deleted. + (Simon Willnauer, Mike Mccandless, Mike Sokolov) Review comment: I will keep you in anyway, @msokolov.
[GitHub] [lucene-solr] s1monw commented on a change in pull request #1580: LUCENE-9405: Ensure IndexWriter only closes merge readers once.
s1monw commented on a change in pull request #1580: URL: https://github.com/apache/lucene-solr/pull/1580#discussion_r440405474 ## File path: lucene/CHANGES.txt ## @@ -266,6 +266,9 @@ Bug Fixes * LUCENE-9362: Fix equality check in ExpressionValueSource#rewrite. This fixes rewriting of inner value sources. (Dmitry Emets) +* LUCENE-9405: IndexWriter incorrectly calls closeMergeReaders twice when the merged segment is 100% deleted. + (Simon Willnauer, Mike Mccandless, Mike Sokolov) Review comment: I added him, thanks for pointing this out.
[GitHub] [lucene-solr] msokolov commented on a change in pull request #1580: LUCENE-9405: Ensure IndexWriter only closes merge readers once.
msokolov commented on a change in pull request #1580: URL: https://github.com/apache/lucene-solr/pull/1580#discussion_r440405226 ## File path: lucene/CHANGES.txt ## @@ -266,6 +266,9 @@ Bug Fixes * LUCENE-9362: Fix equality check in ExpressionValueSource#rewrite. This fixes rewriting of inner value sources. (Dmitry Emets) +* LUCENE-9405: IndexWriter incorrectly calls closeMergeReaders twice when the merged segment is 100% deleted. + (Simon Willnauer, Mike Mccandless, Mike Sokolov) Review comment: yeah, and you can remove me - I am just a conduit!
[GitHub] [lucene-solr] mikemccand commented on a change in pull request #1580: LUCENE-9405: Ensure IndexWriter only closes merge readers once.
mikemccand commented on a change in pull request #1580: URL: https://github.com/apache/lucene-solr/pull/1580#discussion_r440403479 ## File path: lucene/CHANGES.txt ## @@ -266,6 +266,9 @@ Bug Fixes * LUCENE-9362: Fix equality check in ExpressionValueSource#rewrite. This fixes rewriting of inner value sources. (Dmitry Emets) +* LUCENE-9405: IndexWriter incorrectly calls closeMergeReaders twice when the merged segment is 100% deleted. + (Simon Willnauer, Mike Mccandless, Mike Sokolov) Review comment: Maybe also add @msfroh (Michael Froh)? -- I think he fixed this originally in the first PR.
[GitHub] [lucene-solr] s1monw opened a new pull request #1580: LUCENE-9405: Ensure IndexWriter only closes merge readers once.
s1monw opened a new pull request #1580: URL: https://github.com/apache/lucene-solr/pull/1580 IndexWriter incorrectly calls closeMergeReaders twice when the merged segment is 100% deleted ie. would produce a fully deleted segment.
[jira] [Assigned] (LUCENE-9405) IndexWriter incorrectly calls closeMergeReaders twice when the merged segment is 100% deleted
[ https://issues.apache.org/jira/browse/LUCENE-9405?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Simon Willnauer reassigned LUCENE-9405: --- Assignee: Simon Willnauer I will try to look at this
[GitHub] [lucene-solr] mikemccand commented on pull request #1552: LUCENE-8962: merge small segments on commit
mikemccand commented on pull request #1552: URL: https://github.com/apache/lucene-solr/pull/1552#issuecomment-644317157 OK I opened [this issue](https://issues.apache.org/jira/browse/LUCENE-9406) to explore how to know what specifically `IndexWriter` is doing for merge-on-commit and other expert actions, and [this issue](https://issues.apache.org/jira/browse/LUCENE-9405) for the controversial pre-existing missing `success = true` `IndexWriter` bug.
[jira] [Created] (LUCENE-9406) Make it simpler to track IndexWriter's events
Michael McCandless created LUCENE-9406: -- Summary: Make it simpler to track IndexWriter's events Key: LUCENE-9406 URL: https://issues.apache.org/jira/browse/LUCENE-9406 Project: Lucene - Core Issue Type: Improvement Components: core/index Reporter: Michael McCandless This is the second spinoff from a [controversial PR to add a new index-time feature to Lucene to merge small segments during commit|https://github.com/apache/lucene-solr/pull/1552]. That change can substantially reduce the number of small index segments to search. In that PR, there was a new proposed interface, {{IndexWriterEvents}}, giving the application a chance to track when {{IndexWriter}} kicked off merges during commit, how many, how long it waited, how often it gave up waiting, etc. Such telemetry from production usage is really helpful when tuning settings like which merges (e.g. a size threshold) to attempt on commit, and how long to wait during commit, etc. I am splitting out this issue to explore possible approaches to do this. E.g. [~simonw] proposed using a statistics class instead, but if I understood that correctly, I think that would put the role of aggregation inside {{IndexWriter}}, which is not ideal. Many interesting events, e.g. how many merges are being requested, how large are they, how long did they take to complete or fail, etc., can be gleaned by wrapping expert Lucene classes like {{MergePolicy}} and {{MergeScheduler}}. But for those events that cannot (e.g. {{IndexWriter}} stopped waiting for merges during commit), it would be very helpful to have some simple way to track so applications can better tune. It is also possible to subclass {{IndexWriter}} and override key methods, but I think that is inherently risky as {{IndexWriter}}'s protected methods are not considered to be a stable API, and the synchronization used by {{IndexWriter}} is confusing. 
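The wrapping approach the LUCENE-9406 description favors, gleaning merge events by decorating expert classes rather than aggregating inside {{IndexWriter}}, can be sketched as a decorator. The `Scheduler` interface below is an illustrative stand-in, not Lucene's real {{MergeScheduler}} API:

```java
import java.util.concurrent.atomic.AtomicLong;

// Illustrative decorator: telemetry lives in the wrapper, keeping the
// aggregation role out of the writer itself, as the issue suggests.
public class MergeTelemetryDemo {
    interface Scheduler { void merge(String segment); }

    static final class CountingScheduler implements Scheduler {
        private final Scheduler delegate;
        final AtomicLong merges = new AtomicLong();     // how many merges were requested
        final AtomicLong totalNanos = new AtomicLong(); // how long they took

        CountingScheduler(Scheduler delegate) { this.delegate = delegate; }

        public void merge(String segment) {
            long start = System.nanoTime();
            try {
                delegate.merge(segment);  // the real work stays in the wrapped scheduler
            } finally {
                merges.incrementAndGet(); // recorded even if the merge throws
                totalNanos.addAndGet(System.nanoTime() - start);
            }
        }
    }

    public static void main(String[] args) {
        CountingScheduler s = new CountingScheduler(segment -> { /* no-op merge */ });
        s.merge("_0");
        s.merge("_1");
        System.out.println("merges=" + s.merges.get());
        // prints: merges=2
    }
}
```

As the issue notes, this pattern covers events visible at the {{MergePolicy}}/{{MergeScheduler}} boundary; events internal to the writer (e.g. "stopped waiting for merges during commit") still need some dedicated hook.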
[jira] [Updated] (SOLR-14569) HTTP 401 when searching on alias in secured Solr
[ https://issues.apache.org/jira/browse/SOLR-14569?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Isabelle Giguere updated SOLR-14569: Description: The issue was first noticed on an instance of Solr 8.5.0, after securing Solr with security.json. Searching on a single collection returns the expected results, but searching on an alias returns HTTP 401. *Note that this issue is not reproduced when the collections are created using the _default configuration.* The attached patch includes a unit test to query on an alias. *Fixed and updated as per [~gerlowskija]'s comments* *Patch applies on master branch (9x)*. The unit test is added to the test class that was originally part of the patch to fix SOLR-13510. I also attach: - our product-specific Solr configuration, modified to remove irrelevant plugins and fields - security.json with user 'admin' (pwd 'admin') -- Note that forwardCredentials true or false does not modify the behavior To test with this configuration: - Download and unzip Solr 8.5.0 - Modify ./bin/solr.in.sh : -- ZK_HOST (optional) -- SOLR_AUTH_TYPE="basic" -- SOLR_AUTHENTICATION_OPTS="-Dbasicauth=admin:admin" - Upload security.json into Zookeeper -- ./bin/solr zk cp [file:/path/to/security.json|file:///path/to/security.json] zk:/path/to/solr/security.json [-z :[/]] - Start Solr in cloud mode -- ./bin/solr -c - Upload the provided configuration - ./bin/solr zk upconfig -z :[/] -n conf_en -d /path/to/folder/conf/ - Create 2 collections using the uploaded configuration -- test1, test2 - Create an alias grouping the 2 collections -- test = test1, test2 - Query (/select?q=*:*) one collection -- results in successful Solr response - Query the alias (/select?q=*:*) -- results in HTTP 401 There is no need to add documents to observe the issue.
[jira] [Updated] (SOLR-14569) HTTP 401 when searching on alias in secured Solr
[ https://issues.apache.org/jira/browse/SOLR-14569?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Isabelle Giguere updated SOLR-14569: Attachment: SOLR-14569.patch
[jira] [Updated] (SOLR-14569) HTTP 401 when searching on alias in secured Solr
[ https://issues.apache.org/jira/browse/SOLR-14569?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Isabelle Giguere updated SOLR-14569: Description: The issue was first noticed on an instance of Solr 8.5.0, after securing Solr with security.json. Searching on a single collection returns the expected results, but searching on an alias returns HTTP 401. *Note that this issue is not reproduced when the collections are created using the _default configuration.* The attached patch includes a unit test that reproduces the issue. *Patch applies on master branch (9x)*: Do not include in the regular build ! The test is failing to illustrate this issue. The unit test is added to the test class that was originally part of the patch to fix SOLR-13510. I also attach: - our product-specific Solr configuration, modified to remove irrelevant plugins and fields - security.json with user 'admin' (pwd 'admin') -- Note that forwardCredentials true or false does not modify the behavior To test with this configuration: - Download and unzip Solr 8.5.0 - Modify ./bin/solr.in.sh : -- ZK_HOST (optional) -- SOLR_AUTH_TYPE="basic" -- SOLR_AUTHENTICATION_OPTS="-Dbasicauth=admin:admin" - Upload security.json into Zookeeper -- ./bin/solr zk cp [file:/path/to/security.json|file:///path/to/security.json] zk:/path/to/solr/security.json [-z :[/]] - Start Solr in cloud mode -- ./bin/solr -c - Upload the provided configuration - ./bin/solr zk upconfig -z :[/] -n conf_en -d /path/to/folder/conf/ - Create 2 collections using the uploaded configuration -- test1, test2 - Create an alias grouping the 2 collections -- test = test1, test2 - Query (/select?q=*:*) one collection -- results in successful Solr response - Query the alias (/select?q=*:*) -- results in HTTP 401 There is no need to add documents to observe the issue.
[GitHub] [lucene-solr] s1monw commented on a change in pull request #1576: Alternative approach to LUCENE-8962
s1monw commented on a change in pull request #1576: URL: https://github.com/apache/lucene-solr/pull/1576#discussion_r440359230 ## File path: lucene/core/src/java/org/apache/lucene/index/MergePolicy.java ## @@ -399,8 +423,19 @@ public String segString(Directory dir) { } return b.toString(); } + +boolean await(long timeout, TimeUnit unit) { + for (OneMerge merge : merges) { +if (merge.await(timeout, unit) == false) { Review comment: In a real change that's correct; in a prototype like this it's really just there to visualize the idea. I didn't do this on purpose, to avoid discussing impl details. That's not the point of this; it's really just a PR to make commenting simpler.
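The review point about passing the same {{timeout}} to every merge is a classic multi-wait pitfall: N pending merges can block for N * timeout in total. The usual fix, sketched here with {{CountDownLatch}} standing in for {{OneMerge}} completion (an assumption for illustration, not the PR's code), converts the timeout to a single deadline and shrinks the remaining budget on each iteration:

```java
import java.util.List;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

// Deadline-based wait over several pending operations: the total wall-clock
// wait is bounded by the original timeout, not timeout-per-element.
public class AwaitDeadlineDemo {
    static boolean awaitAll(List<CountDownLatch> merges, long timeout, TimeUnit unit)
            throws InterruptedException {
        long deadline = System.nanoTime() + unit.toNanos(timeout);
        for (CountDownLatch merge : merges) {
            long remaining = deadline - System.nanoTime(); // shrink the budget each pass
            if (remaining <= 0 || merge.await(remaining, TimeUnit.NANOSECONDS) == false) {
                return false; // budget exhausted or this merge did not finish in time
            }
        }
        return true;
    }

    public static void main(String[] args) throws InterruptedException {
        CountDownLatch done = new CountDownLatch(0);    // models a finished merge
        CountDownLatch pending = new CountDownLatch(1); // models one that never finishes
        System.out.println(awaitAll(List.of(done, done), 50, TimeUnit.MILLISECONDS));
        System.out.println(awaitAll(List.of(done, pending), 50, TimeUnit.MILLISECONDS));
    }
}
```

As the reviewer and author agree, this matters only for a real change; the prototype's per-element timeout was just to visualize the idea.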
[jira] [Updated] (SOLR-11973) Fail compilation on precommit warnings
[ https://issues.apache.org/jira/browse/SOLR-11973?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Erick Erickson updated SOLR-11973: -- Summary: Fail compilation on precommit warnings (was: Selectively fail on precommit WARN messages) > Fail compilation on precommit warnings > -- > > Key: SOLR-11973 > URL: https://issues.apache.org/jira/browse/SOLR-11973 > Project: Solr > Issue Type: Improvement > Components: Build >Reporter: Erick Erickson >Assignee: Erick Erickson >Priority: Minor > > Not quite sure whether this qualifies as something for Solr or Lucene > I'm working gradually on getting precommit lint warnings out of the code > base. I'd like to selectively fail a subtree once it's clean. I played around > a bit with Robert's suggestions on the dev list but couldn't quite get it to > work, then decided I needed to focus on one thing at a time. > See SOLR-10809 for the first clean directory Real Soon Now. > Bonus points would be working out how to fail on deprecation warnings when > building Solr too, although that's farther off in the future. > Assigning to myself, but anyone who knows the build ins and outs _please_ > feel free to take it! -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[jira] [Comment Edited] (SOLR-14569) HTTP 401 when searching on alias in secured Solr
[ https://issues.apache.org/jira/browse/SOLR-14569?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17136056#comment-17136056 ] Isabelle Giguere edited comment on SOLR-14569 at 6/15/20, 5:45 PM: --- Hi [~gerlowskija] Good catches. Sorry. Trying to work fast... ;) I did encounter the issue, at first, with a valid security.json. The one I uploaded does have a typo in it. Correcting it (new upload). And I just tested again, to be sure, with valid security.json, and attached config. Result is still HTTP 401 for me (on CentOS 7.7, if that matters) I just ran the unit test again, with the right password. It passed ! It means there's something wrong with the configuration. But how can solrconfig.xml or schema.xml have an impact on authentication, or alias queries in general ? That doesn't make sense. There's no documented incompatibility that I'm aware of. was (Author: igiguere): Hi [~gerlowskija] Good catches. Sorry. Trying to work fast... ;) I did encounter the issue, at first, with a valid security.json. The one I uploaded does have a typo in it. Correcting it (new upload). And I just tested again, to be sure, with valid security.json, and attached config. Result is still HTTP 401 for me (on CentOS 7.7, if that matters) I'm running the unit test again, with the right password. If it passes... It means there's something wrong with the configuration. But how can solrconfig.xml or schema.xml have an impact on authentication, or alias queries in general ? That doesn't make sense. > HTTP 401 when searching on alias in secured Solr > > > Key: SOLR-14569 > URL: https://issues.apache.org/jira/browse/SOLR-14569 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. 
Issues are Public) > Components: Authentication >Affects Versions: master (9.0), 8.5 > Environment: Unit test on master branch (9x) built on Windows 10 with > Java 11 > Solr 8.5.0 instance running on CentOS 7.7 with Java 11 >Reporter: Isabelle Giguere >Priority: Major > Attachments: SOLR-14569.patch, security.json, security.json, > solr_conf.zip > > > The issue was first noticed on an instance of Solr 8.5.0, after securing Solr > with security.json. > Searching on a single collection returns the expected results, but searching > on an alias returns HTTP 401. > *Note that this issue is not reproduced when the collections are created > using the _default configuration.* > The attached patch includes a unit test that reproduces the issue. > *Patch applies on master branch (9x)*: Do not include in the regular build ! > The test is failing to illustrate this issue. > The unit test is added to the test class that was originally part of the > patch to fix SOLR-13510. > I also attach: > - our product-specific Solr configuration, modified to remove irrelevant > plugins and fields > - security.json with user 'admin' (pwd 'admin') > -- Note that forwardCredentials true or false does not modify the behavior > To test with this configuration: > - Download and unzip Solr 8.5.0 > - Modify ./bin/solr.in.sh : > -- ZK_HOST (optional) > -- SOLR_AUTH_TYPE="basic" > -- SOLR_AUTHENTICATION_OPTS="-Dbasicauth=admin:admin" > - Upload security.json into Zookeeper > -- ./bin/solr zk cp file:/path/to/security.json > zk:/path/to/solr/security.json [-z :[/]] > - Start Solr in cloud mode > -- ./bin/solr -c > - Upload the provided configuration > - ./bin/solr zk upconfig -z :[/] -n conf_en -d > /path/to/folder/conf/ > - Create 2 collections using the uploaded configuration > -- test1, test2 > - Create an alias grouping the 2 collections > -- test = test1, test2 > - Query (/select?q=\*:\*) one collection > -- results in successful Solr response > - Query the alias (/select?q=\*:\*) > -- results in 
HTTP 401 > There is no need to add documents to observe the issue. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[jira] [Comment Edited] (SOLR-14569) HTTP 401 when searching on alias in secured Solr
[ https://issues.apache.org/jira/browse/SOLR-14569?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17136056#comment-17136056 ] Isabelle Giguere edited comment on SOLR-14569 at 6/15/20, 5:43 PM: --- Hi [~gerlowskija] Good catches. Sorry. Trying to work fast... ;) I did encounter the issue, at first, with a valid security.json. The one I uploaded does have a typo in it. Correcting it (new upload). And I just tested again, to be sure, with valid security.json, and attached config. Result is still HTTP 401 for me (on CentOS 7.7, if that matters) I'm running the unit test again, with the right password. If it passes... It means there's something wrong with the configuration. But how can solrconfig.xml or schema.xml have an impact on authentication, or alias queries in general ? That doesn't make sense. was (Author: igiguere): Hi [~gerlowskija] Good catches. Sorry. Trying to work fast... ;) I did encounter the issue, at first, with a valid security.json. The one I uploaded does have a typo in it. Correcting it (new upload). And I just tested again, to be sure, with valid security.json, at attached config. I'm running the unit test again, with the right password. If it passes... It means there's something wrong with the configuration. But how can solrconfig.xml or schema.xml have an impact on authentication, or alias queries in general ? That doesn't make sense. > HTTP 401 when searching on alias in secured Solr > > > Key: SOLR-14569 > URL: https://issues.apache.org/jira/browse/SOLR-14569 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. 
Issues are Public) > Components: Authentication >Affects Versions: master (9.0), 8.5 > Environment: Unit test on master branch (9x) built on Windows 10 with > Java 11 > Solr 8.5.0 instance running on CentOS 7.7 with Java 11 >Reporter: Isabelle Giguere >Priority: Major > Attachments: SOLR-14569.patch, security.json, security.json, > solr_conf.zip > > > The issue was first noticed on an instance of Solr 8.5.0, after securing Solr > with security.json. > Searching on a single collection returns the expected results, but searching > on an alias returns HTTP 401. > *Note that this issue is not reproduced when the collections are created > using the _default configuration.* > The attached patch includes a unit test that reproduces the issue. > *Patch applies on master branch (9x)*: Do not include in the regular build ! > The test is failing to illustrate this issue. > The unit test is added to the test class that was originally part of the > patch to fix SOLR-13510. > I also attach: > - our product-specific Solr configuration, modified to remove irrelevant > plugins and fields > - security.json with user 'admin' (pwd 'admin') > -- Note that forwardCredentials true or false does not modify the behavior > To test with this configuration: > - Download and unzip Solr 8.5.0 > - Modify ./bin/solr.in.sh : > -- ZK_HOST (optional) > -- SOLR_AUTH_TYPE="basic" > -- SOLR_AUTHENTICATION_OPTS="-Dbasicauth=admin:admin" > - Upload security.json into Zookeeper > -- ./bin/solr zk cp file:/path/to/security.json > zk:/path/to/solr/security.json [-z :[/]] > - Start Solr in cloud mode > -- ./bin/solr -c > - Upload the provided configuration > - ./bin/solr zk upconfig -z :[/] -n conf_en -d > /path/to/folder/conf/ > - Create 2 collections using the uploaded configuration > -- test1, test2 > - Create an alias grouping the 2 collections > -- test = test1, test2 > - Query (/select?q=\*:\*) one collection > -- results in successful Solr response > - Query the alias (/select?q=\*:\*) > -- results in 
HTTP 401 > There is no need to add documents to observe the issue. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[jira] [Comment Edited] (SOLR-14569) HTTP 401 when searching on alias in secured Solr
[ https://issues.apache.org/jira/browse/SOLR-14569?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17136056#comment-17136056 ] Isabelle Giguere edited comment on SOLR-14569 at 6/15/20, 5:42 PM: --- Hi [~gerlowskija] Good catches. Sorry. Trying to work fast... ;) I did encounter the issue, at first, with a valid security.json. The one I uploaded does have a typo in it. Correcting it (new upload). And I just tested again, to be sure, with valid security.json, at attached config. I'm running the unit test again, with the right password. If it passes... It means there's something wrong with the configuration. But how can solrconfig.xml or schema.xml have an impact on authentication, or alias queries in general ? That doesn't make sense. was (Author: igiguere): Hi [~gerlowskija] Good catches. Sorry. Trying to work fast... ;) I did encounter the issue, at first, with a valid security.json. The one I uploaded does have a typo in it. Correcting it (new upload). I'm running the unit test again, with the right password. If it passes... It means there's something wrong with the configuration. But how can solrconfig.xml or schema.xml have an impact on authentication, or alias queries in general ? That doesn't make sense. > HTTP 401 when searching on alias in secured Solr > > > Key: SOLR-14569 > URL: https://issues.apache.org/jira/browse/SOLR-14569 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: Authentication >Affects Versions: master (9.0), 8.5 > Environment: Unit test on master branch (9x) built on Windows 10 with > Java 11 > Solr 8.5.0 instance running on CentOS 7.7 with Java 11 >Reporter: Isabelle Giguere >Priority: Major > Attachments: SOLR-14569.patch, security.json, security.json, > solr_conf.zip > > > The issue was first noticed on an instance of Solr 8.5.0, after securing Solr > with security.json. 
> Searching on a single collection returns the expected results, but searching > on an alias returns HTTP 401. > *Note that this issue is not reproduced when the collections are created > using the _default configuration.* > The attached patch includes a unit test that reproduces the issue. > *Patch applies on master branch (9x)*: Do not include in the regular build ! > The test is failing to illustrate this issue. > The unit test is added to the test class that was originally part of the > patch to fix SOLR-13510. > I also attach: > - our product-specific Solr configuration, modified to remove irrelevant > plugins and fields > - security.json with user 'admin' (pwd 'admin') > -- Note that forwardCredentials true or false does not modify the behavior > To test with this configuration: > - Download and unzip Solr 8.5.0 > - Modify ./bin/solr.in.sh : > -- ZK_HOST (optional) > -- SOLR_AUTH_TYPE="basic" > -- SOLR_AUTHENTICATION_OPTS="-Dbasicauth=admin:admin" > - Upload security.json into Zookeeper > -- ./bin/solr zk cp file:/path/to/security.json > zk:/path/to/solr/security.json [-z :[/]] > - Start Solr in cloud mode > -- ./bin/solr -c > - Upload the provided configuration > - ./bin/solr zk upconfig -z :[/] -n conf_en -d > /path/to/folder/conf/ > - Create 2 collections using the uploaded configuration > -- test1, test2 > - Create an alias grouping the 2 collections > -- test = test1, test2 > - Query (/select?q=\*:\*) one collection > -- results in successful Solr response > - Query the alias (/select?q=\*:\*) > -- results in HTTP 401 > There is no need to add documents to observe the issue. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[jira] [Commented] (SOLR-14569) HTTP 401 when searching on alias in secured Solr
[ https://issues.apache.org/jira/browse/SOLR-14569?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17136056#comment-17136056 ] Isabelle Giguere commented on SOLR-14569: - Hi [~gerlowskija] Good catches. Sorry. Trying to work fast... ;) I did encounter the issue, at first, with a valid security.json. The one I uploaded does have a typo in it. Correcting it (new upload). I'm running the unit test again, with the right password. If it passes... It means there's something wrong with the configuration. But how can solrconfig.xml or schema.xml have an impact on authentication, or alias queries in general ? That doesn't make sense. > HTTP 401 when searching on alias in secured Solr > > > Key: SOLR-14569 > URL: https://issues.apache.org/jira/browse/SOLR-14569 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: Authentication >Affects Versions: master (9.0), 8.5 > Environment: Unit test on master branch (9x) built on Windows 10 with > Java 11 > Solr 8.5.0 instance running on CentOS 7.7 with Java 11 >Reporter: Isabelle Giguere >Priority: Major > Attachments: SOLR-14569.patch, security.json, security.json, > solr_conf.zip > > > The issue was first noticed on an instance of Solr 8.5.0, after securing Solr > with security.json. > Searching on a single collection returns the expected results, but searching > on an alias returns HTTP 401. > *Note that this issue is not reproduced when the collections are created > using the _default configuration.* > The attached patch includes a unit test that reproduces the issue. > *Patch applies on master branch (9x)*: Do not include in the regular build ! > The test is failing to illustrate this issue. > The unit test is added to the test class that was originally part of the > patch to fix SOLR-13510. 
> I also attach: > - our product-specific Solr configuration, modified to remove irrelevant > plugins and fields > - security.json with user 'admin' (pwd 'admin') > -- Note that forwardCredentials true or false does not modify the behavior > To test with this configuration: > - Download and unzip Solr 8.5.0 > - Modify ./bin/solr.in.sh : > -- ZK_HOST (optional) > -- SOLR_AUTH_TYPE="basic" > -- SOLR_AUTHENTICATION_OPTS="-Dbasicauth=admin:admin" > - Upload security.json into Zookeeper > -- ./bin/solr zk cp file:/path/to/security.json > zk:/path/to/solr/security.json [-z :[/]] > - Start Solr in cloud mode > -- ./bin/solr -c > - Upload the provided configuration > - ./bin/solr zk upconfig -z :[/] -n conf_en -d > /path/to/folder/conf/ > - Create 2 collections using the uploaded configuration > -- test1, test2 > - Create an alias grouping the 2 collections > -- test = test1, test2 > - Query (/select?q=\*:\*) one collection > -- results in successful Solr response > - Query the alias (/select?q=\*:\*) > -- results in HTTP 401 > There is no need to add documents to observe the issue. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[jira] [Updated] (SOLR-14569) HTTP 401 when searching on alias in secured Solr
[ https://issues.apache.org/jira/browse/SOLR-14569?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Isabelle Giguere updated SOLR-14569: Attachment: security.json > HTTP 401 when searching on alias in secured Solr > > > Key: SOLR-14569 > URL: https://issues.apache.org/jira/browse/SOLR-14569 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: Authentication >Affects Versions: master (9.0), 8.5 > Environment: Unit test on master branch (9x) built on Windows 10 with > Java 11 > Solr 8.5.0 instance running on CentOS 7.7 with Java 11 >Reporter: Isabelle Giguere >Priority: Major > Attachments: SOLR-14569.patch, security.json, security.json, > solr_conf.zip > > > The issue was first noticed on an instance of Solr 8.5.0, after securing Solr > with security.json. > Searching on a single collection returns the expected results, but searching > on an alias returns HTTP 401. > *Note that this issue is not reproduced when the collections are created > using the _default configuration.* > The attached patch includes a unit test that reproduces the issue. > *Patch applies on master branch (9x)*: Do not include in the regular build ! > The test is failing to illustrate this issue. > The unit test is added to the test class that was originally part of the > patch to fix SOLR-13510. 
> I also attach: > - our product-specific Solr configuration, modified to remove irrelevant > plugins and fields > - security.json with user 'admin' (pwd 'admin') > -- Note that forwardCredentials true or false does not modify the behavior > To test with this configuration: > - Download and unzip Solr 8.5.0 > - Modify ./bin/solr.in.sh : > -- ZK_HOST (optional) > -- SOLR_AUTH_TYPE="basic" > -- SOLR_AUTHENTICATION_OPTS="-Dbasicauth=admin:admin" > - Upload security.json into Zookeeper > -- ./bin/solr zk cp file:/path/to/security.json > zk:/path/to/solr/security.json [-z :[/]] > - Start Solr in cloud mode > -- ./bin/solr -c > - Upload the provided configuration > - ./bin/solr zk upconfig -z :[/] -n conf_en -d > /path/to/folder/conf/ > - Create 2 collections using the uploaded configuration > -- test1, test2 > - Create an alias grouping the 2 collections > -- test = test1, test2 > - Query (/select?q=\*:\*) one collection > -- results in successful Solr response > - Query the alias (/select?q=\*:\*) > -- results in HTTP 401 > There is no need to add documents to observe the issue. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[jira] [Created] (LUCENE-9405) IndexWriter incorrectly calls closeMergeReaders twice when the merged segment is 100% deleted
Michael McCandless created LUCENE-9405: -- Summary: IndexWriter incorrectly calls closeMergeReaders twice when the merged segment is 100% deleted Key: LUCENE-9405 URL: https://issues.apache.org/jira/browse/LUCENE-9405 Project: Lucene - Core Issue Type: Bug Components: core/index Reporter: Michael McCandless This is the first spinoff from a [controversial PR to add a new index-time feature to Lucene to merge small segments during commit|https://github.com/apache/lucene-solr/pull/1552]. This can substantially reduce the number of small index segments to search. See specifically [this discussion there|https://github.com/apache/lucene-solr/pull/1552#discussion_r440298695]. {{IndexWriter}} seems to be missing a {{success = true}} inside {{mergeMiddle}} in the case where all segments being merged have 100% deletions and the segments will simply be dropped. In this case, in master today, I think we are incorrectly calling {{closeMergedReaders}} twice, first with {{suppressExceptions = false}} and second time with {{true}}. There is a [dedicated test case here showing the issue|https://github.com/apache/lucene-solr/commit/cab5ef5e6f2bdcda59fd669a298ec137af9d], but that test case relies on changes in the controversial feature (added {{MergePolicy.findFullFlushMerges}}). I think it should be possible to make another test case show the bug without that controversial feature, and I am unsure why our existing randomized tests have not uncovered this yet ... -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
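The bug described above follows the common success-flag/try-finally idiom. The following is a simplified, hypothetical sketch, not `IndexWriter` itself (the real control flow in `mergeMiddle` is more involved), showing how forgetting to set the flag on an early-return path makes the `finally` block run the failure cleanup even though the merge succeeded; the `applyFix` parameter exists only so the demo can show both behaviors.

```java
// Hypothetical demo class, not Lucene's IndexWriter.
public class SuccessFlagDemo {
  public int cleanupCalls = 0;

  // Stand-in for closeMergedReaders(suppressExceptions): just counts calls.
  private void closeMergedReaders(boolean suppressExceptions) {
    cleanupCalls++;
  }

  public void mergeMiddle(boolean allDocsDeleted, boolean applyFix) {
    boolean success = false;
    try {
      if (allDocsDeleted) {
        // commitMerge(...) dropped the 100%-deleted segments; nothing to close.
        if (applyFix) {
          success = true; // the line the issue says is missing on master
        }
        return;
      }
      // ... normal merge work would go here ...
      closeMergedReaders(false); // normal-path close
      success = true;
    } finally {
      if (success == false) {
        // failure-path cleanup; runs spuriously when the flag was not set
        closeMergedReaders(true);
      }
    }
  }
}
```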
[jira] [Updated] (SOLR-14570) Round Off in MM Edismax
[ https://issues.apache.org/jira/browse/SOLR-14570?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lucky Sharma updated SOLR-14570: Priority: Minor (was: Major) > Round Off in MM Edismax > > > Key: SOLR-14570 > URL: https://issues.apache.org/jira/browse/SOLR-14570 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Lucky Sharma >Priority: Minor > Time Spent: 10m > Remaining Estimate: 0h > > Currently in eDisMax, if we pass mm=75% and the query has 3 tokens, it > always takes the floor value, i.e. 1. > The request is to support rounded-off values, i.e. in the above case the > minimum number of clauses that should match would be 2 -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
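The floor-versus-round behavior discussed in this issue can be illustrated with a small standalone sketch. Note this is not Solr's actual implementation (the real computation lives in Solr's `SolrPluginUtils`); it only contrasts integer truncation with arithmetic rounding, using mm=50% of 3 clauses, a case where the two clearly disagree.

```java
// Hypothetical illustration of the two mm interpretations, not Solr code.
public class MinShouldMatch {

  // Truncating behavior: 50% of 3 clauses -> 1.
  public static int mmFloor(int optionalClauses, int percent) {
    return optionalClauses * percent / 100; // integer division truncates
  }

  // Rounding behavior: 50% of 3 clauses -> 2.
  public static int mmRound(int optionalClauses, int percent) {
    return (int) Math.round(optionalClauses * percent / 100.0);
  }
}
```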
[GitHub] [lucene-solr] dsmiley commented on a change in pull request #1576: Alternative approach to LUCENE-8962
dsmiley commented on a change in pull request #1576: URL: https://github.com/apache/lucene-solr/pull/1576#discussion_r440309711 ## File path: lucene/core/src/java/org/apache/lucene/index/MergePolicy.java ## @@ -399,8 +423,19 @@ public String segString(Directory dir) { } return b.toString(); } + +boolean await(long timeout, TimeUnit unit) { + for (OneMerge merge : merges) { +if (merge.await(timeout, unit) == false) { Review comment: This looks suspicious when there is more than one merge. Shouldn't the timeout decrease as time is used up by earlier merges? In practice, when is there more than one? I've been confused on this matter when I developed a custom MP/MS. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
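The timeout concern raised in this review comment can be sketched as a deadline-based loop: compute one absolute deadline up front, then give each successive merge only the budget that earlier merges have not already consumed. This is a hypothetical illustration; the names `Awaitable` and `awaitAll` are not Lucene API.

```java
import java.util.List;
import java.util.concurrent.TimeUnit;

public class DeadlineAwait {

  // Stand-in for OneMerge's await(timeout, unit).
  public interface Awaitable {
    boolean await(long timeout, TimeUnit unit) throws InterruptedException;
  }

  // Waits for all merges within a single shared budget, rather than
  // granting the full timeout to each merge in turn.
  public static boolean awaitAll(List<? extends Awaitable> merges,
                                 long timeout, TimeUnit unit) throws InterruptedException {
    final long deadline = System.nanoTime() + unit.toNanos(timeout);
    for (Awaitable merge : merges) {
      long remaining = deadline - System.nanoTime();
      // Time spent on earlier merges reduces what this one may use.
      if (remaining <= 0 || merge.await(remaining, TimeUnit.NANOSECONDS) == false) {
        return false;
      }
    }
    return true;
  }
}
```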
[GitHub] [lucene-solr] murblanc commented on a change in pull request #1579: SOLR-14541: Ensure classes that implement equals implement hashCode or suppress warnings
murblanc commented on a change in pull request #1579: URL: https://github.com/apache/lucene-solr/pull/1579#discussion_r440301532 ## File path: solr/solrj/src/java/org/apache/solr/common/cloud/DocCollection.java ## @@ -383,10 +382,12 @@ public boolean equals(Object that) { return super.equals(that) && Objects.equals(this.name, other.name) && this.znodeVersion == other.znodeVersion; } -// @Override -// public int hashCode() { -//throw new UnsupportedOperationException("TODO unimplemented DocCollection.hashCode"); -// } + @Override + public int hashCode() { +return Objects.hash(super.hashCode(), znodeVersion, name, replicationFactor, +numNrtReplicas, numTlogReplicas, numPullReplicas, maxShardsPerNode, +autoAddReplicas, policy, readOnly); Review comment: The superclass (`ZkNodeProps`) `hashCode()` hashes (Edit: in this PR it returns 0, but it should instead return the map hash value) the `props` map, so already takes care of producing a hash value based on `replicationFactor`, `numNrtReplicas`, `numTlogReplicas`, `numPullReplicas`, `maxShardsPerNode`, `autoAddReplicas`, `policy` and `readOnly` that were extracted from the map. These do not have to be added here again but it's **not** incorrect to have them. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
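The relationship between the subclass `equals()` and a contract-consistent `hashCode()` can be sketched with hypothetical stand-ins for `ZkNodeProps` and `DocCollection` (class and field names here are illustrative, not the real SolrJ types): hashing only the fields `equals()` consults is sufficient, while extra map-derived fields are redundant because the super hash already covers the map they came from.

```java
import java.util.Map;
import java.util.Objects;

// Stand-in for ZkNodeProps: equality and hash both delegate to the map.
class BaseProps {
  final Map<String, Object> props;
  BaseProps(Map<String, Object> props) { this.props = props; }

  @Override public boolean equals(Object o) {
    return o instanceof BaseProps && ((BaseProps) o).props.equals(props);
  }
  @Override public int hashCode() { return props.hashCode(); }
}

// Stand-in for DocCollection: adds name and znodeVersion to equality.
class DocColl extends BaseProps {
  final String name;
  final int znodeVersion;

  DocColl(Map<String, Object> props, String name, int znodeVersion) {
    super(props);
    this.name = name;
    this.znodeVersion = znodeVersion;
  }

  @Override public boolean equals(Object o) {
    return o instanceof DocColl && super.equals(o)
        && Objects.equals(name, ((DocColl) o).name)
        && znodeVersion == ((DocColl) o).znodeVersion;
  }

  @Override public int hashCode() {
    // Exactly the inputs equals() consults; map-derived fields would be
    // redundant because super.hashCode() already hashes the map.
    return Objects.hash(super.hashCode(), name, znodeVersion);
  }
}
```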
[GitHub] [lucene-solr] murblanc commented on a change in pull request #1579: SOLR-14541: Ensure classes that implement equals implement hashCode or suppress warnings
murblanc commented on a change in pull request #1579: URL: https://github.com/apache/lucene-solr/pull/1579#discussion_r440306014 ## File path: solr/solrj/src/java/org/apache/solr/common/util/ValidatingJsonMap.java ## @@ -348,10 +348,12 @@ public boolean equals(Object that) { return that instanceof Map && this.delegate.equals(that); } -// @Override -// public int hashCode() { -//throw new UnsupportedOperationException("TODO unimplemented ValidatingJsonMap.hashCode"); -// } + //TODO: Really uncertain about this. Hashing the map itself seems + // about as expensive as resolving with equals. Review comment: `hashCode()` and `equals()` are not used for the same type of computations; it's not one _or_ the other, so their relative costs are not necessarily an implementation criterion. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[GitHub] [lucene-solr] mikemccand commented on a change in pull request #1552: LUCENE-8962: merge small segments on commit
mikemccand commented on a change in pull request #1552: URL: https://github.com/apache/lucene-solr/pull/1552#discussion_r440298695 ## File path: lucene/core/src/java/org/apache/lucene/index/IndexWriter.java ## @@ -4483,6 +4593,7 @@ public int length() { // Merge would produce a 0-doc segment, so we do nothing except commit the merge to remove all the 0-doc segments that we "merged": assert merge.info.info.maxDoc() == 0; commitMerge(merge, mergeState); +success = true; Review comment: Phew, I did some git archaeology (thanks @msokolov for the pointers!) and uncovered the branch commit for this "merge small segments on commit" feature where we added this controversial `success = true`: https://github.com/apache/lucene-solr/commit/cab5ef5e6f2bdcda59fd669a298ec137af9d +1 to pull the bugfix out into its own issue; I will open one. The above commit has a dedicated test case, but the problem is that test case (in the above commit) relies on this new feature (it uses the new `MergePolicy.findFullFlushMerges`). So we would need a new test case based on clean `master` branch showing the bug ... it looks like a test that merged 100% deleted segments ought to then incorrectly double-call `closeMergedReaders` (first with `suppressExceptions = false` then again with `true`) due to this missing `success = true` so it really should be easy to reproduce. Though, actually I'm surprised none of our random testing uncovered this. Not sure I fully understand the bug yet :) I will open an issue! ## File path: lucene/core/src/java/org/apache/lucene/index/IndexWriter.java ## @@ -3257,6 +3320,52 @@ private long prepareCommitInternal() throws IOException { } finally { maybeCloseOnTragicEvent(); } + + if (mergeAwaitLatchRef != null) { +CountDownLatch mergeAwaitLatch = mergeAwaitLatchRef.get(); +// If we found and registered any merges above, within the flushLock, then we want to ensure that they +// complete execution. 
Note that since we released the lock, other merges may have been scheduled. We will +// block until the merges that we registered complete. As they complete, they will update toCommit to +// replace merged segments with the result of each merge. +config.getIndexWriterEvents().beginMergeOnCommit(); +mergeScheduler.merge(mergeSource, MergeTrigger.COMMIT); +long mergeWaitStart = System.nanoTime(); +int abandonedCount = 0; +long waitTimeMillis = (long) (config.getMaxCommitMergeWaitSeconds() * 1000.0); +try { + if (mergeAwaitLatch.await(waitTimeMillis, TimeUnit.MILLISECONDS) == false) { +synchronized (this) { + // Need to do this in a synchronized block, to make sure none of our commit merges are currently + // executing mergeFinished (since mergeFinished itself is called from within the IndexWriter lock). + // After we clear the value from mergeAwaitLatchRef, the merges we schedule will still execute as + // usual, but when they finish, they won't attempt to update toCommit or modify segment reference + // counts. + mergeAwaitLatchRef.set(null); + for (MergePolicy.OneMerge commitMerge : commitMerges) { +if (runningMerges.contains(commitMerge) || pendingMerges.contains(commitMerge)) { + abandonedCount++; +} + } +} + } +} catch (InterruptedException ie) { + throw new ThreadInterruptedException(ie); +} finally { + if (infoStream.isEnabled("IW")) { +infoStream.message("IW", String.format(Locale.ROOT, "Waited %.1f ms for commit merges", +(System.nanoTime() - mergeWaitStart)/1_000_000.0)); +infoStream.message("IW", "After executing commit merges, had " + toCommit.size() + " segments"); +if (abandonedCount > 0) { + infoStream.message("IW", "Abandoned " + abandonedCount + " commit merges after " + waitTimeMillis + " ms"); +} + } + if (abandonedCount > 0) { + config.getIndexWriterEvents().abandonedMergesOnCommit(abandonedCount); Review comment: OK let's remove this part and leave it for another day. I'll open a separate issue. 
## File path: lucene/core/src/java/org/apache/lucene/index/IndexWriterConfig.java ## @@ -109,6 +110,9 @@ /** Default value for whether calls to {@link IndexWriter#close()} include a commit. */ public final static boolean DEFAULT_COMMIT_ON_CLOSE = true; + + /** Default value for time to wait for merges on commit (when using a {@link MergePolicy} that implements findFullFlushMerges). */ + public static final double DEFAULT_MAX_COMMIT_MERGE_WAIT_SECONDS = 30.0; Review comment: > maybe 0 as a de
[GitHub] [lucene-solr] murblanc commented on a change in pull request #1579: SOLR-14541: Ensure classes that implement equals implement hashCode or suppress warnings
murblanc commented on a change in pull request #1579: URL: https://github.com/apache/lucene-solr/pull/1579#discussion_r440304468 ## File path: solr/solrj/src/java/org/apache/solr/common/cloud/ZkNodeProps.java ## @@ -171,8 +167,10 @@ public boolean getBool(String key, boolean b) { public boolean equals(Object that) { return that instanceof ZkNodeProps && ((ZkNodeProps)that).propMap.equals(this.propMap); } -// @Override -// public int hashCode() { -//throw new UnsupportedOperationException("TODO unimplemented ZkNodeProps.hashCode"); -// } + + //TODO: I'm very uncertain about this + @Override + public int hashCode() { +return 0; Review comment: A constant `hashCode()` return value is correct but does not _spread_ objects into buckets in any useful way. Here we could return `propMap.hashCode()` instead. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
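The suggestion in this comment, returning `propMap.hashCode()` instead of a constant, can be shown with a minimal standalone class. This is a sketch, not the real `ZkNodeProps`: a constant hash such as `return 0;` does satisfy the equals/hashCode contract, but it puts every instance in the same hash bucket, degrading `HashMap`/`HashSet` lookups to linear scans; delegating to the wrapped map stays contract-correct and spreads instances out.

```java
import java.util.Map;

// Hypothetical stand-in for ZkNodeProps.
public class NodeProps {
  private final Map<String, Object> propMap;

  public NodeProps(Map<String, Object> propMap) {
    this.propMap = propMap;
  }

  @Override
  public boolean equals(Object that) {
    return that instanceof NodeProps && ((NodeProps) that).propMap.equals(this.propMap);
  }

  @Override
  public int hashCode() {
    // Equal propMaps imply equal hashes, as the contract requires,
    // and unequal maps usually land in different buckets.
    return propMap.hashCode();
  }
}
```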
[jira] [Commented] (SOLR-13132) Improve JSON "terms" facet performance when sorted by relatedness
[ https://issues.apache.org/jira/browse/SOLR-13132?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17136007#comment-17136007 ] Michael Gibney commented on SOLR-13132: --- I just pushed some commits that should address many of the outstanding nocommits (mostly some minor refactoring and added javadocs). The thorniest issue remaining is I think that of caching (when to consult the filterCache for non-sweep collection). Previously, every bucket-based query (which, pre-sweep, was "all queries") consulted the filterCache – a serious problem for "terms" facets over high-cardinality fields. The code to address this in {{RelatednessAgg}} is carried over from work on SOLR-13108, roughly adapting the way the (undocumented?) {{cacheDf}} parameter is respected in {{FacetFieldProcessorByEnumTermsStream}}. The approach currently in {{RelatednessAgg}} was ported as closely as possible from the implementation in {{FacetFieldProcessorByEnumTermsStream}}, but differs by necessity (right?) in that the latter can use the {{DocsEnumState.minSetSizeCached}} over a single "slowAtomicReader()"-backed {{TermsEnum}}, whereas in {{RelatednessAgg}}, the terms may arrive out of order. The heuristic-based approach implemented in {{RelatednessAgg}} results from my assumption that forward-only {{TermsEnum}} would make the {{DocsEnumState}} approach a non-starter in the {{RelatednessAgg}} context. If I'm wrong or missing something here, that would be great, since I definitely would have preferred that approach, all else being equal! If on the other hand this assumption is valid, I can think of two possibilities: # Stick with caching everything. This would still be problematic for _non-sweep_ collection, but sweep collection should "solve" the problem by rendering it irrelevant for the default case. This would still be sub-optimal for refinement, but would probably be something we could deal with. In any case, this wouldn't make anything _worse_. ... 
or alternatively could _never_ consult filterCache, at least for {{TermQuery}}/{{FacetField}} and/or refinement requests? # Defer {{SKGSlotAcc.processSlot(...)}} until the end of the "collect" phase, before reading values (via {{SlotAcc.setValues(...)}}. Collection would "register" terms, which could be processed in a single index-order pass backed by a single {{TermsEnum}}. This would probably be out of scope for this issue, and I'm not sure it would work for, e.g., {{FacetFieldProcessorByHashDV}}, but I figured I'd mention it here anyway... > Improve JSON "terms" facet performance when sorted by relatedness > -- > > Key: SOLR-13132 > URL: https://issues.apache.org/jira/browse/SOLR-13132 > Project: Solr > Issue Type: Improvement > Components: Facet Module >Affects Versions: 7.4, master (9.0) >Reporter: Michael Gibney >Priority: Major > Attachments: SOLR-13132-with-cache-01.patch, > SOLR-13132-with-cache.patch, SOLR-13132.patch, SOLR-13132_testSweep.patch > > Time Spent: 1.5h > Remaining Estimate: 0h > > When sorting buckets by {{relatedness}}, JSON "terms" facet must calculate > {{relatedness}} for every term. > The current implementation uses a standard uninverted approach (either > {{docValues}} or {{UnInvertedField}}) to get facet counts over the domain > base docSet, and then uses that initial pass as a pre-filter for a > second-pass, inverted approach of fetching docSets for each relevant term > (i.e., {{count > minCount}}?) and calculating intersection size of those sets > with the domain base docSet. > Over high-cardinality fields, the overhead of per-term docSet creation and > set intersection operations increases request latency to the point where > relatedness sort may not be usable in practice (for my use case, even after > applying the patch for SOLR-13108, for a field with ~220k unique terms per > core, QTime for high-cardinality domain docSets were, e.g.: cardinality > 1816684=9000ms, cardinality 5032902=18000ms). 
> The attached patch brings the above example QTimes down to a manageable > ~300ms and ~250ms respectively. The approach calculates uninverted facet > counts over domain base, foreground, and background docSets in parallel in a > single pass. This allows us to take advantage of the efficiencies built into > the standard uninverted {{FacetFieldProcessorByArray[DV|UIF]}}), and avoids > the per-term docSet creation and set intersection overhead. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[GitHub] [lucene-solr] murblanc commented on a change in pull request #1579: SOLR-14541: Ensure classes that implement equals implement hashCode or suppress warnings
murblanc commented on a change in pull request #1579: URL: https://github.com/apache/lucene-solr/pull/1579#discussion_r440302683

## File path: solr/solrj/src/java/org/apache/solr/common/cloud/Replica.java
@@ -153,10 +152,12 @@ public boolean equals(Object o) {
     return name.equals(replica.name);
   }
-// @Override
-// public int hashCode() {
-//throw new UnsupportedOperationException("TODO unimplemented Replica.hashCode()");
-// }
+
+  @Override
+  public int hashCode() {
+    return Objects.hash(name, nodeName, collection);

Review comment: `equals()` does not compare `nodeName` and `collection`, so `hashCode()` should not be based on these values.
[GitHub] [lucene-solr] murblanc commented on a change in pull request #1579: SOLR-14541: Ensure classes that implement equals implement hashCode or suppress warnings
murblanc commented on a change in pull request #1579: URL: https://github.com/apache/lucene-solr/pull/1579#discussion_r440301532

## File path: solr/solrj/src/java/org/apache/solr/common/cloud/DocCollection.java
@@ -383,10 +382,12 @@ public boolean equals(Object that) {
     return super.equals(that) && Objects.equals(this.name, other.name) && this.znodeVersion == other.znodeVersion;
   }
-// @Override
-// public int hashCode() {
-//throw new UnsupportedOperationException("TODO unimplemented DocCollection.hashCode");
-// }
+  @Override
+  public int hashCode() {
+    return Objects.hash(super.hashCode(), znodeVersion, name, replicationFactor,
+        numNrtReplicas, numTlogReplicas, numPullReplicas, maxShardsPerNode,
+        autoAddReplicas, policy, readOnly);

Review comment: The superclass (`ZkNodeProps`) `hashCode()` hashes the `props` map, so it already takes care of producing a hash value based on `replicationFactor`, `numNrtReplicas`, `numTlogReplicas`, `numPullReplicas`, `maxShardsPerNode`, `autoAddReplicas`, `policy` and `readOnly`, which were extracted from the map. These do not have to be added here again, but it's **not** incorrect to have them.
[jira] [Updated] (SOLR-14571) Index download speed while replicating is fixed at 5.1 in replication.html
[ https://issues.apache.org/jira/browse/SOLR-14571?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Florin Babes updated SOLR-14571: Attachment: SOLR-14571.patch > Index download speed while replicating is fixed at 5.1 in replication.html > -- > > Key: SOLR-14571 > URL: https://issues.apache.org/jira/browse/SOLR-14571 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: Admin UI >Affects Versions: 8.0, master (9.0), 8.5.2 >Reporter: Florin Babes >Priority: Trivial > Labels: AdminUI, Replication > Attachments: SOLR-14571.patch > > > Hello, > While checking ways to optimize the speed of replication I've noticed that > the index download speed is fixed at 5.1 in replication.html. There is a > reason for that? -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[jira] [Updated] (SOLR-14571) Index download speed while replicating is fixed at 5.1 in replication.html
[ https://issues.apache.org/jira/browse/SOLR-14571?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Florin Babes updated SOLR-14571: Status: Patch Available (was: Open) > Index download speed while replicating is fixed at 5.1 in replication.html > -- > > Key: SOLR-14571 > URL: https://issues.apache.org/jira/browse/SOLR-14571 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: Admin UI >Affects Versions: 8.0, master (9.0), 8.5.2 >Reporter: Florin Babes >Priority: Trivial > Labels: AdminUI, Replication > Attachments: SOLR-14571.patch > > > Hello, > While checking ways to optimize the speed of replication I've noticed that > the index download speed is fixed at 5.1 in replication.html. There is a > reason for that? -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[GitHub] [lucene-solr] murblanc commented on a change in pull request #1579: SOLR-14541: Ensure classes that implement equals implement hashCode or suppress warnings
murblanc commented on a change in pull request #1579: URL: https://github.com/apache/lucene-solr/pull/1579#discussion_r440291350

## File path: solr/solrj/src/java/org/apache/solr/client/solrj/cloud/autoscaling/Policy.java
@@ -288,10 +287,11 @@ public boolean equals(Object o) {
     return getClusterPreferences().equals(policy.getClusterPreferences());
   }
-// @Override
-// public int hashCode() {
-//throw new UnsupportedOperationException("TODO unimplemented");
-// }
+
+  //TODO: Uncertain about this one
+  @Override
+  public int hashCode() {
+    return Objects.hash(zkVersion);

Review comment: `equals()` compares `policies`, `clusterPolicy` and `clusterPreferences`, but does not compare `zkVersion`. `hashCode()` should not use `zkVersion` in hash computation.
[GitHub] [lucene-solr] murblanc commented on a change in pull request #1579: SOLR-14541: Ensure classes that implement equals implement hashCode or suppress warnings
murblanc commented on a change in pull request #1579: URL: https://github.com/apache/lucene-solr/pull/1579#discussion_r440289119

## File path: solr/solrj/src/java/org/apache/solr/client/solrj/cloud/autoscaling/AutoScalingConfig.java
@@ -587,11 +585,11 @@ public boolean equals(Object o) {
     if (!getTriggerListenerConfigs().equals(that.getTriggerListenerConfigs())) return false;
     return getProperties().equals(that.getProperties());
   }
-// @Override
-// public int hashCode() {
-//throw new UnsupportedOperationException("TODO unimplemented");
-// }
+  @Override
+  public int hashCode() {
+    return Objects.hash(policy, zkVersion);

Review comment: `equals()` does not compare `zkVersion`, so `hashCode()` should not be based on it either.
[GitHub] [lucene-solr] murblanc commented on a change in pull request #1579: SOLR-14541: Ensure classes that implement equals implement hashCode or suppress warnings
murblanc commented on a change in pull request #1579: URL: https://github.com/apache/lucene-solr/pull/1579#discussion_r440287813

## File path: solr/solrj/src/java/org/apache/solr/client/solrj/cloud/autoscaling/AutoScalingConfig.java
@@ -305,10 +303,10 @@ public boolean equals(Object o) {
     return properties.equals(that.properties);
   }
-//@Override
-//public int hashCode() {
-// throw new UnsupportedOperationException("TODO unimplemented");
-//}
+@Override
+public int hashCode() {
+  return Objects.hash(name, actionClass);

Review comment: `equals()` is based on comparing `properties`, while `hashCode()` is based on `name` and `actionClass`. This can't guarantee that two objects that are `equals()` will have the same `hashCode()`, as required.
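The contract violation the review describes is easy to demonstrate with a hypothetical stand-in class (not the actual Solr code): when `equals()` compares one field but `hashCode()` hashes another, a `HashSet` can fail to find an element that is `equals()` to one it already contains:

```java
import java.util.HashSet;
import java.util.Objects;
import java.util.Set;

// Hypothetical demo of the bug pattern (NOT the real AutoScalingConfig class):
// equals() compares `properties` while hashCode() hashes `name`, violating the
// java.lang.Object contract that equal objects must have equal hash codes.
public class MismatchedAction {
    final String name;        // used by hashCode() only -- the bug
    final String properties;  // used by equals() only

    public MismatchedAction(String name, String properties) {
        this.name = name;
        this.properties = properties;
    }

    @Override
    public boolean equals(Object o) {
        return o instanceof MismatchedAction
            && ((MismatchedAction) o).properties.equals(this.properties);
    }

    @Override
    public int hashCode() {
        return Objects.hash(name); // ignores the field equals() actually compares
    }

    // Equal-by-equals() objects can hash to different buckets, so a HashSet
    // may report it does not contain an object equal to one it holds.
    public static boolean demoSetMisses() {
        MismatchedAction a = new MismatchedAction("a", "same-props");
        MismatchedAction b = new MismatchedAction("b", "same-props");
        Set<MismatchedAction> set = new HashSet<>();
        set.add(a);
        return a.equals(b) && a.hashCode() != b.hashCode() && !set.contains(b);
    }
}
```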
[GitHub] [lucene-solr] mikemccand commented on pull request #1552: LUCENE-8962: merge small segments on commit
mikemccand commented on pull request #1552: URL: https://github.com/apache/lucene-solr/pull/1552#issuecomment-644226072 Just for my sanity to keep track of all the exciting PRs here :) Here is the original PR (that was pushed, then reverted, then led to this PR): https://github.com/apache/lucene-solr/pull/1155 And here is @s1monw's new proposed simpler approach: https://github.com/apache/lucene-solr/pull/1576 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[GitHub] [lucene-solr] murblanc commented on a change in pull request #1579: SOLR-14541: Ensure classes that implement equals implement hashCode or suppress warnings
murblanc commented on a change in pull request #1579: URL: https://github.com/apache/lucene-solr/pull/1579#discussion_r440284972

## File path: solr/core/src/java/org/apache/solr/pkg/PackageAPI.java
@@ -211,7 +211,7 @@ public boolean equals(Object obj) {
   @Override
   public int hashCode() {
-    throw new UnsupportedOperationException("TODO unimplemented");
+    return Objects.hash(version, manifestSHA512);

Review comment: `equals()` above does not compare `manifestSHA512`. Unless `manifestSHA512` values are equal whenever `version` values are (in which case there is no need to add it here?), we could have two objects that are `equals()` yet have different `hashCode()`, which would be against the spec (see the `java.lang.Object.hashCode()` javadoc).
[jira] [Commented] (SOLR-14566) Record "NOW" on "coordinator" log messages
[ https://issues.apache.org/jira/browse/SOLR-14566?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17135976#comment-17135976 ] Robert Muir commented on SOLR-14566: IMO, there are two separate concerns: 1) propagating an existing id from any incoming request to any additional requests, and 2) creating a new one. As far as #1 goes, it means the user is already giving you an ID (e.g. from their search backend). And there is kind of an ad-hoc standard of putting these into a header such as x-request-id or X-Amzn-Trace-Id, and there is support out there (e.g. AWS ELB) to set it. If there's a little flask web service as part of the architecture, you can add something like https://pypi.org/project/flask-request-id-header/ to make use of it, as an example. So I think a header might be an improvement over a URL parameter, for better interoperability. #2 is less interesting to me (e.g. generating an ID if the incoming request isn't marked by one). A UUID is unfriendly for this anyway (e.g. starting the ID with a timestamp portion would make the IDs roughly sortable, which can be handy for ad-hoc debugging). > Record "NOW" on "coordinator" log messages > -- > > Key: SOLR-14566 > URL: https://issues.apache.org/jira/browse/SOLR-14566 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Jason Gerlowski >Assignee: Jason Gerlowski >Priority: Minor > Time Spent: 20m > Remaining Estimate: 0h > > Currently, in SolrCore.java we log each search request that comes through > each core as it is finishing. This includes the path, query-params, QTime, > and status. In the case of a distributed search both the "coordinator" node > and each of the per-shard requests produce a log message. > When Solr is fielding many identical queries, such as those created by a > healthcheck or dashboard, it can be hard when examining logs to link the > per-shard requests with the "coordinator" request that came in upstream. 
> One thing that would make this easier is if the {{NOW}} param added to > per-shard requests is also included in the log message from the > "coordinator". While {{NOW}} isn't unique strictly speaking, it often is in > practice, and along with the query-params would allow debuggers to associate > shard requests with coordinator requests a large majority of the time. > An alternative approach would be to create a {{qid}} or {{query-uuid}} when > the coordinator starts its work that can be logged everywhere. This provides > a stronger expectation around uniqueness, but would require UUID generation > on the coordinator, which may be non-negligible work at high QPS (maybe? I > have no idea). It also loses the neatness of reusing data already present on > the shard requests. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
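Concern #1 versus #2 from the comment above can be sketched in a few lines: reuse the incoming header value when present, and only mint a fresh id as a fallback. This is a hedged illustration only; the class name, method name, and UUID fallback are assumptions, not code from any Solr patch:

```java
import java.util.UUID;

// Illustrative sketch of the two concerns: (1) propagate an existing id,
// (2) create a new one only when the incoming request isn't marked.
public class RequestIds {

    /** Returns the incoming id if present, otherwise a freshly minted one. */
    public static String extractOrMint(String incomingHeaderValue) {
        if (incomingHeaderValue != null && !incomingHeaderValue.isEmpty()) {
            return incomingHeaderValue; // concern #1: reuse what the caller sent
        }
        return UUID.randomUUID().toString(); // concern #2: fall back to a new id
    }
}
```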
[jira] [Commented] (SOLR-14569) HTTP 401 when searching on alias in secured Solr
[ https://issues.apache.org/jira/browse/SOLR-14569?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17135974#comment-17135974 ] Jason Gerlowski commented on SOLR-14569: A quick attempt at reproducing didn't work for me. I took a few shortcuts - used an embedded ZK, turned on auth with "bin/solr auth enable", used my own solrconfig, etc, so maybe that's the problem. Trying a more faithful reproduction now. That said, I also tried out your attached unit test. It does fail with a 401, but I think there's a bug in the test. As written, your test case calls {{.setBasicAuthCredentials("reader", "reader")}} on the query it makes, but the way the security.json is set up, the correct password is "solr" for both the "reader" and the "solr" users. When I corrected that call to setBasicAuthCredentials, the test started passing for me. Including the updated snippet here:
{code}
@Test
public void aliasTest() throws Exception {
  try (Http2SolrClient client = new Http2SolrClient.Builder(cluster.getJettySolrRunner(0).getBaseUrl().toString())
      .build()) {
    // Query fails for alias
    for (int i = 0; i < 30; i++) {
      SolrRequest request = new QueryRequest(params("q", "*:*")).setBasicAuthCredentials("reader", "solr");
      SolrResponse response = request.process(client, ALIAS);
      assertNotNull(response);
      assertNotNull(response.getResponse());
      assertNotNull(response.getResponse().get("response"));
    }
  }
}
{code}
> HTTP 401 when searching on alias in secured Solr > > > Key: SOLR-14569 > URL: https://issues.apache.org/jira/browse/SOLR-14569 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. 
Issues are Public) > Components: Authentication >Affects Versions: master (9.0), 8.5 > Environment: Unit test on master branch (9x) built on Windows 10 with > Java 11 > Solr 8.5.0 instance running on CentOS 7.7 with Java 11 >Reporter: Isabelle Giguere >Priority: Major > Attachments: SOLR-14569.patch, security.json, solr_conf.zip > > > The issue was first noticed on an instance of Solr 8.5.0, after securing Solr > with security.json. > Searching on a single collection returns the expected results, but searching > on an alias returns HTTP 401. > *Note that this issue is not reproduced when the collections are created > using the _default configuration.* > The attached patch includes a unit test that reproduces the issue. > *Patch applies on master branch (9x)*: Do not include in the regular build ! > The test is failing to illustrate this issue. > The unit test is added to the test class that was originally part of the > patch to fix SOLR-13510. > I also attach: > - our product-specific Solr configuration, modified to remove irrelevant > plugins and fields > - security.json with user 'admin' (pwd 'admin') > -- Note that forwardCredentials true or false does not modify the behavior > To test with this configuration: > - Download and unzip Solr 8.5.0 > - Modify ./bin/solr.in.sh : > -- ZK_HOST (optional) > -- SOLR_AUTH_TYPE="basic" > -- SOLR_AUTHENTICATION_OPTS="-Dbasicauth=admin:admin" > - Upload security.json into Zookeeper > -- ./bin/solr zk cp file:/path/to/security.json > zk:/path/to/solr/security.json [-z :[/]] > - Start Solr in cloud mode > -- ./bin/solr -c > - Upload the provided configuration > - ./bin/solr zk upconfig -z :[/] -n conf_en -d > /path/to/folder/conf/ > - Create 2 collections using the uploaded configuration > -- test1, test2 > - Create an alias grouping the 2 collections > -- test = test1, test2 > - Query (/select?q=\*:\*) one collection > -- results in successful Solr response > - Query the alias (/select?q=\*:\*) > -- results in HTTP 401 > There 
is no need to add documents to observe the issue. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[jira] [Created] (SOLR-14571) Index download speed while replicating is fixed at 5.1 in replication.html
Florin Babes created SOLR-14571: --- Summary: Index download speed while replicating is fixed at 5.1 in replication.html Key: SOLR-14571 URL: https://issues.apache.org/jira/browse/SOLR-14571 Project: Solr Issue Type: Bug Security Level: Public (Default Security Level. Issues are Public) Components: Admin UI Affects Versions: 8.5.2, 8.0, master (9.0) Reporter: Florin Babes Hello, While checking ways to optimize the speed of replication I've noticed that the index download speed is fixed at 5.1 in replication.html. There is a reason for that? -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[GitHub] [lucene-solr] murblanc commented on a change in pull request #1579: SOLR-14541: Ensure classes that implement equals implement hashCode or suppress warnings
murblanc commented on a change in pull request #1579: URL: https://github.com/apache/lucene-solr/pull/1579#discussion_r440281749

## File path: solr/core/src/java/org/apache/solr/cloud/rule/Rule.java
@@ -365,7 +365,7 @@ public boolean equals(Object obj) {
   @Override
   public int hashCode() {
-    throw new UnsupportedOperationException("TODO unimplemented");
+    return Objects.hash(name, fuzzy);

Review comment: Given that `fuzzy` is not included in the `equals()` implementation just above, we could have two instances that are `equals()` but that have different values for `hashCode()`. I believe this is not correct.
[jira] [Comment Edited] (SOLR-14569) HTTP 401 when searching on alias in secured Solr
[ https://issues.apache.org/jira/browse/SOLR-14569?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17135915#comment-17135915 ] Jason Gerlowski edited comment on SOLR-14569 at 6/15/20, 3:45 PM: -- Hi Isabelle, thanks for reporting, sorry you're running into this. The first thing I noticed is that the uploaded security.json file isn't valid JSON. (There's a comma missing after the "credentials" property). Maybe this is just a typo in the file you uploaded, but it's possible it's contributing to your issue in some way. If you get a chance, try out your reproduction with a corrected security.json file. -It might also help us if you bumped up the log-level for some security classes and included the relevant log snippets here. Specifically the class "org.apache.solr.security.RuleBasedAuthorizationPluginBase" or the "org.apache.solr.security" package more generally. You can do this by editing log4j2.xml in your Solr install, or on the "Logging" panel in the Solr Admin UI.- EDIT: On second review, since the status code is a 401, this is likely caused by BasicAuth and not RuleBased-Authz. So the debug logging I was asking for probably won't be useful here. In the meantime I'll try to reproduce locally on my own. was (Author: gerlowskija): Hi Isabelle, thanks for reporting, sorry you're running into this. The first thing I noticed is that the uploaded security.json file isn't valid JSON. (There's a comma missing after the "credentials" property). Maybe this is just a typo in the file you uploaded, but it's possible it's contributing to your issue in some way. If you get a chance, try out your reproduction with a corrected security.json file. It might also help us if you bumped up the log-level for some security classes and included the relevant log snippets here. Specifically the class "org.apache.solr.security.RuleBasedAuthorizationPluginBase" or the "org.apache.solr.security" package more generally. 
You can do this by editing log4j2.xml in your Solr install, or on the "Logging" panel in the Solr Admin UI. In the meantime I'll try to reproduce locally on my own. > HTTP 401 when searching on alias in secured Solr > > > Key: SOLR-14569 > URL: https://issues.apache.org/jira/browse/SOLR-14569 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: Authentication >Affects Versions: master (9.0), 8.5 > Environment: Unit test on master branch (9x) built on Windows 10 with > Java 11 > Solr 8.5.0 instance running on CentOS 7.7 with Java 11 >Reporter: Isabelle Giguere >Priority: Major > Attachments: SOLR-14569.patch, security.json, solr_conf.zip > > > The issue was first noticed on an instance of Solr 8.5.0, after securing Solr > with security.json. > Searching on a single collection returns the expected results, but searching > on an alias returns HTTP 401. > *Note that this issue is not reproduced when the collections are created > using the _default configuration.* > The attached patch includes a unit test that reproduces the issue. > *Patch applies on master branch (9x)*: Do not include in the regular build ! > The test is failing to illustrate this issue. > The unit test is added to the test class that was originally part of the > patch to fix SOLR-13510. 
> I also attach: > - our product-specific Solr configuration, modified to remove irrelevant > plugins and fields > - security.json with user 'admin' (pwd 'admin') > -- Note that forwardCredentials true or false does not modify the behavior > To test with this configuration: > - Download and unzip Solr 8.5.0 > - Modify ./bin/solr.in.sh : > -- ZK_HOST (optional) > -- SOLR_AUTH_TYPE="basic" > -- SOLR_AUTHENTICATION_OPTS="-Dbasicauth=admin:admin" > - Upload security.json into Zookeeper > -- ./bin/solr zk cp file:/path/to/security.json > zk:/path/to/solr/security.json [-z :[/]] > - Start Solr in cloud mode > -- ./bin/solr -c > - Upload the provided configuration > - ./bin/solr zk upconfig -z :[/] -n conf_en -d > /path/to/folder/conf/ > - Create 2 collections using the uploaded configuration > -- test1, test2 > - Create an alias grouping the 2 collections > -- test = test1, test2 > - Query (/select?q=\*:\*) one collection > -- results in successful Solr response > - Query the alias (/select?q=\*:\*) > -- results in HTTP 401 > There is no need to add documents to observe the issue. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[jira] [Commented] (SOLR-14566) Record "NOW" on "coordinator" log messages
[ https://issues.apache.org/jira/browse/SOLR-14566?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17135960#comment-17135960 ] Jason Gerlowski commented on SOLR-14566: Interesting, I guess you could build something unique out of info about the request. That's a third approach here, and it might end up being cheaper than UUID generation. I still lean towards the {{NOW}} approach a bit, I think. It's clean in that our "correlation-key" is something that's already required on downstream nodes as-is. It doesn't need to be computed at all - we already have it for the downstream nodes. And it's semantically useful to debuggers as well, to get a sense for when the (upstream) request came in. (Though it's probably redundant with QTime in doing that). The only downside is that it's not strictly unique - but that should only really be a problem if you regularly see many identical queries come in within the same millisecond of one another. Will think on it a bit more and see what others chime in with. > Record "NOW" on "coordinator" log messages > -- > > Key: SOLR-14566 > URL: https://issues.apache.org/jira/browse/SOLR-14566 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Jason Gerlowski >Assignee: Jason Gerlowski >Priority: Minor > Time Spent: 20m > Remaining Estimate: 0h > > Currently, in SolrCore.java we log each search request that comes through > each core as it is finishing. This includes the path, query-params, QTime, > and status. In the case of a distributed search both the "coordinator" node > and each of the per-shard requests produce a log message. > When Solr is fielding many identical queries, such as those created by a > healthcheck or dashboard, it can be hard when examining logs to link > the per-shard requests with the "coordinator" request that came in upstream. 
> One thing that would make this easier is if the {{NOW}} param added to > per-shard requests is also included in the log message from the > "coordinator". While {{NOW}} isn't unique strictly speaking, it often is in > practice, and along with the query-params would allow debuggers to associate > shard requests with coordinator requests a large majority of the time. > An alternative approach would be to create a {{qid}} or {{query-uuid}} when > the coordinator starts its work that can be logged everywhere. This provides > a stronger expectation around uniqueness, but would require UUID generation > on the coordinator, which may be non-negligible work at high QPS (maybe? I > have no idea). It also loses the neatness of reusing data already present on > the shard requests. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
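If a generated {{qid}} were ever adopted, the "roughly sortable" property mentioned earlier in the thread is cheap to obtain by prefixing a fixed-width timestamp. A purely illustrative sketch under that assumption (not code from either issue):

```java
import java.security.SecureRandom;

// Illustrative: ids whose lexicographic order roughly tracks arrival order.
// A fixed-width hex millisecond timestamp comes first, so sorting log lines
// by id approximates sorting by time; the random suffix keeps ids unique
// within the same millisecond.
public class SortableQueryId {
    private static final SecureRandom RANDOM = new SecureRandom();

    public static String next() {
        long now = System.currentTimeMillis();
        long suffix = RANDOM.nextLong();
        // 16 hex chars of timestamp + 16 hex chars of randomness = 32 chars total
        return String.format("%016x%016x", now, suffix);
    }
}
```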
[GitHub] [lucene-solr] jpountz commented on a change in pull request #1567: LUCENE-9402: Let MultiCollector handle minCompetitiveScore
jpountz commented on a change in pull request #1567: URL: https://github.com/apache/lucene-solr/pull/1567#discussion_r440242303

File path: lucene/core/src/java/org/apache/lucene/search/MultiCollector.java

@@ -134,69 +134,110 @@ public LeafCollector getLeafCollector(LeafReaderContext context) throws IOException
       case 1:
         return leafCollectors.get(0);
       default:
-        return new MultiLeafCollector(leafCollectors, cacheScores);
+        return new MultiLeafCollector(leafCollectors, cacheScores, scoreMode() == ScoreMode.TOP_SCORES);
     }
   }

   private static class MultiLeafCollector implements LeafCollector {
     private final boolean cacheScores;
     private final LeafCollector[] collectors;
-    private int numCollectors;
+    private final float[] minScores;
+    private final boolean skipNonCompetitiveScores;

-    private MultiLeafCollector(List<LeafCollector> collectors, boolean cacheScores) {
+    private MultiLeafCollector(List<LeafCollector> collectors, boolean cacheScores, boolean skipNonCompetitive) {
       this.collectors = collectors.toArray(new LeafCollector[collectors.size()]);
       this.cacheScores = cacheScores;
-      this.numCollectors = this.collectors.length;
+      this.skipNonCompetitiveScores = skipNonCompetitive;
+      this.minScores = this.skipNonCompetitiveScores ? new float[this.collectors.length] : null;
     }

     @Override
     public void setScorer(Scorable scorer) throws IOException {
       if (cacheScores) {
         scorer = new ScoreCachingWrappingScorer(scorer);
       }
-      scorer = new FilterScorable(scorer) {
-        @Override
-        public void setMinCompetitiveScore(float minScore) {
-          // Ignore calls to setMinCompetitiveScore so that if we wrap two
-          // collectors and one of them wants to skip low-scoring hits, then
-          // the other collector still sees all hits. We could try to reconcile
-          // min scores and take the maximum min score across collectors, but
-          // this is very unlikely to be helpful in practice.
+      if (skipNonCompetitiveScores) {
+        for (int i = 0; i < collectors.length; ++i) {
+          final LeafCollector c = collectors[i];
+          assert c != null;

Review comment: I don't think that this assertion is right; the collector could be null if the collector already threw a CollectionTerminatedException (we don't disallow calling `setCollector` after collection started). This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
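The idea behind the patch under review is that when MultiCollector wraps several collectors in TOP_SCORES mode, the only threshold it can safely forward to the scorer is the minimum of the per-collector minimums (a hit that is still competitive for any one sub-collector must be collected). A standalone sketch of that reconciliation, not the actual Lucene implementation:

```java
// Illustrative sketch of reconciling per-collector minimum competitive
// scores: the threshold that is safe to forward to the scorer is the
// smallest of the per-collector minimums. Not the real MultiCollector code.
public class MinScoreReconcile {

    // minScores[i] holds the minimum competitive score requested by
    // sub-collector i (0 if it has not requested skipping yet).
    static float globalMinCompetitiveScore(float[] minScores) {
        if (minScores.length == 0) {
            return 0f;
        }
        float min = Float.POSITIVE_INFINITY;
        for (float s : minScores) {
            min = Math.min(min, s); // any higher threshold could drop a hit some collector still wants
        }
        return min;
    }
}
```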
[jira] [Commented] (SOLR-14524) Harden MultiThreadedOCPTest
[ https://issues.apache.org/jira/browse/SOLR-14524?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17135943#comment-17135943 ] Ilan Ginzburg commented on SOLR-14524: -- I believe this test is good as is, so would like to mark this Jira fixed. Is there a need to wait until SOLR-14546 is fixed to mark this one fixed? > Harden MultiThreadedOCPTest > --- > > Key: SOLR-14524 > URL: https://issues.apache.org/jira/browse/SOLR-14524 > Project: Solr > Issue Type: Test > Security Level: Public(Default Security Level. Issues are Public) > Components: SolrCloud >Affects Versions: master (9.0) >Reporter: Ilan Ginzburg >Assignee: Mike Drob >Priority: Minor > Labels: test > Fix For: master (9.0) > > Time Spent: 1h 50m > Remaining Estimate: 0h > > {{MultiThreadedOCPTest.test()}} fails occasionally in Jenkins because of > timing of tasks enqueue to the Collection API queue. > This test in {{testFillWorkQueue()}} enqueues a large number of tasks (115, > more than the 100 Collection API parallel executors) to the Collection API > queue for a collection COLL_A, then observes a short delay and enqueues a > task for another collection COLL_B. > It verifies that the COLL_B task (that does not require the same lock as the > COLL_A tasks) completes before the third COLL_A task. > Test failures happen because when enqueues are slowed down enough, the first > 3 tasks on COLL_A complete even before the COLL_B task gets enqueued! > In one sample failed Jenkins test execution, the COLL_B task enqueue happened > 1275ms after the enqueue of the first COLL_A, leaving plenty of time for a > few (and possibly all) COLL_A tasks to complete. > Fix will be along the lines of: > * Make the “blocking” COLL_A task longer to execute (currently 1 second) to > compensate for slow enqueues. > * Verify the COLL_B task (a 1ms task) finishes before the long running > COLL_A task does. 
This would be a good indication that even though the > collection queue was filled with tasks waiting for a busy lock, a non > competing task was picked and executed right away. > * Delay the enqueue of the COLL_B task to the end of processing of the first > COLL_A task. This would guarantee that COLL_B is enqueued once at least some > COLL_A tasks started processing at the Overseer. Possibly also verify that > the long running task of COLL_A didn't finish execution yet when the COLL_B > task is enqueued... > * It might be possible to set a (very) long duration for the slow task of > COLL_A (to be less vulnerable to execution delays) without requiring the test > to wait for that task to complete, but only wait for the COLL_B task to > complete (so the test doesn't run for too long). -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[jira] [Commented] (SOLR-14541) Ensure classes that implement equals implement hashCode or suppress warnings
[ https://issues.apache.org/jira/browse/SOLR-14541?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17135920#comment-17135920 ] Erick Erickson commented on SOLR-14541: --- The linked PR implements straw-man hashCode methods. Please pay particular attention to the ones flagged with TODO, as they seem odd. I have _not_ examined the code in detail; I'd appreciate comments, especially from the following people, who I believe are more familiar with the details of the implementations. I'll review the remaining ones this week and commit late this week absent objections. [~ab] [~jbernste] [~romseygeek] [~noble.paul] [~ichattopadhyaya] > Ensure classes that implement equals implement hashCode or suppress warnings > > > Key: SOLR-14541 > URL: https://issues.apache.org/jira/browse/SOLR-14541 > Project: Solr > Issue Type: Sub-task >Reporter: Erick Erickson >Assignee: Erick Erickson >Priority: Major > Attachments: 0001-SOLR-14541-add-hashCode-for-some-classes.patch, > 0002-SOLR-14541-add-hashCode-for-some-classes-in-autoscal.patch, > 0003-SOLR-14541-add-hashCode-or-remove-equals-for-some-cl.patch > > Time Spent: 10m > Remaining Estimate: 0h > > While looking at warnings, I found that the following classes generate this > warning: > *overrides equals, but neither it nor any superclass overrides hashCode > method* > I can suppress the warning, but this has been a source of errors in the past > so I'm reluctant to just do that blindly. > NOTE: The Lucene one should probably be its own Jira if it's going to have > hashCode implemented, but here for triage. > What I need for each method is for someone who has a clue about that > particular code to render an opinion that we can safely suppress the warning > or to provide a hashCode method. > Some of these have been here for a very long time and were implemented by > people no longer active... 
> lucene/suggest/src/java/org/apache/lucene/search/spell/LuceneLevenshteinDistance.java:39 > solr/solrj/src/java/org/apache/solr/common/cloud/ZkNodeProps.java:34 > solr/solrj/src/java/org/apache/solr/common/cloud/Replica.java:26 > solr/solrj/src/java/org/apache/solr/common/cloud/DocCollection.java:49 > solr/core/src/java/org/apache/solr/cloud/rule/Rule.java:277 > solr/core/src/java/org/apache/solr/pkg/PackageAPI.java:177 > solr/core/src/java/org/apache/solr/packagemanager/SolrPackageInstance.java:31 > > Noble Paul says it's OK to suppress warnings for these: > solr/solrj/src/java/org/apache/solr/client/solrj/cloud/autoscaling/VersionedData.java:31 > > solr/solrj/src/java/org/apache/solr/client/solrj/cloud/autoscaling/AutoScalingConfig.java:61 > > solr/solrj/src/java/org/apache/solr/client/solrj/cloud/autoscaling/AutoScalingConfig.java:150 > > solr/solrj/src/java/org/apache/solr/client/solrj/cloud/autoscaling/AutoScalingConfig.java:252 > > solr/solrj/src/java/org/apache/solr/client/solrj/cloud/autoscaling/AutoScalingConfig.java:45 > > solr/solrj/src/java/org/apache/solr/client/solrj/cloud/autoscaling/Policy.java:73 > > solr/solrj/src/java/org/apache/solr/client/solrj/cloud/autoscaling/Preference.java:32 > > solr/solrj/src/java/org/apache/solr/client/solrj/cloud/autoscaling/ReplicaInfo.java:39 > > Joel Bernstein says it's OK to suppress warnings for these: > > solr/solrj/src/java/org/apache/solr/client/solrj/cloud/autoscaling/ReplicaCount.java:27 > > solr/solrj/src/java/org/apache/solr/client/solrj/io/stream/expr/StreamExpression.java:25 > > solr/solrj/src/java/org/apache/solr/client/solrj/io/stream/expr/StreamExpressionNamedParameter.java:23 > > solr/solrj/src/java/org/apache/solr/client/solrj/io/stream/CloudSolrStream.java:467 > > solr/solrj/src/java/org/apache/solr/client/solrj/io/stream/DeepRandomStream.java:417 > > solr/solrj/src/java/org/apache/solr/client/solrj/io/stream/expr/StreamExpressionValue.java:22 > -- This message was sent by Atlassian Jira 
(v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[GitHub] [lucene-solr] ErickErickson opened a new pull request #1579: SOLR-14541: Ensure classes that implement equals implement hashCode or suppress warnings
ErickErickson opened a new pull request #1579: URL: https://github.com/apache/lucene-solr/pull/1579 I've created new hashCode methods for all of the classes that implement equals but not hashCode and removed the associated SuppressWarnings. I've marked some of them with TODOs to draw attention where I'm really uncertain what the right thing to do is, but I'd appreciate people looking at the others as well. gw check succeeds, but that's all I'm guaranteeing at present. I need to let this bake a while and come back to re-visit them in detail if people who know the particular code better don't give a thumbs-up. hashCode implementations can be tricky, and these are definitely straw-man. I'll add some more comments on the JIRA.
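For background on why the warning matters: a class whose equals compares certain fields must hash exactly those fields, or hash-based collections silently misbehave. A minimal illustration of the pattern the straw-man patch adds; {{Pair}} is a made-up class for this example, not one of the flagged Solr/Lucene classes:

```java
import java.util.Objects;

// Minimal illustration of keeping equals and hashCode in sync; Pair is a
// made-up example class, not one of the classes listed in the issue.
final class Pair {
    final String name;
    final int value;

    Pair(String name, int value) {
        this.name = name;
        this.value = value;
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof Pair)) return false;
        Pair p = (Pair) o;
        return value == p.value && Objects.equals(name, p.name);
    }

    // Hashes exactly the fields equals compares; omitting this override is
    // what triggers the "overrides equals but not hashCode" warning, and
    // breaks HashMap/HashSet lookups for equal-but-distinct instances.
    @Override
    public int hashCode() {
        return Objects.hash(name, value);
    }
}
```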
[jira] [Commented] (SOLR-14569) HTTP 401 when searching on alias in secured Solr
[ https://issues.apache.org/jira/browse/SOLR-14569?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17135915#comment-17135915 ] Jason Gerlowski commented on SOLR-14569: Hi Isabelle, thanks for reporting, sorry you're running into this. The first thing I noticed is that the uploaded security.json file isn't valid JSON. (There's a comma missing after the "credentials" property.) Maybe this is just a typo in the file you uploaded, but it's possible it's contributing to your issue in some way. If you get a chance, try out your reproduction with a corrected security.json file. It might also help us if you bumped up the log-level for some security classes and included the relevant log snippets here, specifically for the class "org.apache.solr.security.RuleBasedAuthorizationPluginBase" or the "org.apache.solr.security" package more generally. You can do this by editing log4j2.xml in your Solr install, or on the "Logging" panel in the Solr Admin UI. In the meantime I'll try to reproduce locally on my own. > HTTP 401 when searching on alias in secured Solr > > > Key: SOLR-14569 > URL: https://issues.apache.org/jira/browse/SOLR-14569 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: Authentication >Affects Versions: master (9.0), 8.5 > Environment: Unit test on master branch (9x) built on Windows 10 with > Java 11 > Solr 8.5.0 instance running on CentOS 7.7 with Java 11 >Reporter: Isabelle Giguere >Priority: Major > Attachments: SOLR-14569.patch, security.json, solr_conf.zip > > > The issue was first noticed on an instance of Solr 8.5.0, after securing Solr > with security.json. > Searching on a single collection returns the expected results, but searching > on an alias returns HTTP 401. > *Note that this issue is not reproduced when the collections are created > using the _default configuration.* > The attached patch includes a unit test that reproduces the issue. 
> *Patch applies on master branch (9x)*: Do not include in the regular build ! > The test is failing to illustrate this issue. > The unit test is added to the test class that was originally part of the > patch to fix SOLR-13510. > I also attach: > - our product-specific Solr configuration, modified to remove irrelevant > plugins and fields > - security.json with user 'admin' (pwd 'admin') > -- Note that forwardCredentials true or false does not modify the behavior > To test with this configuration: > - Download and unzip Solr 8.5.0 > - Modify ./bin/solr.in.sh : > -- ZK_HOST (optional) > -- SOLR_AUTH_TYPE="basic" > -- SOLR_AUTHENTICATION_OPTS="-Dbasicauth=admin:admin" > - Upload security.json into Zookeeper > -- ./bin/solr zk cp file:/path/to/security.json > zk:/path/to/solr/security.json [-z :[/]] > - Start Solr in cloud mode > -- ./bin/solr -c > - Upload the provided configuration > - ./bin/solr zk upconfig -z :[/] -n conf_en -d > /path/to/folder/conf/ > - Create 2 collections using the uploaded configuration > -- test1, test2 > - Create an alias grouping the 2 collections > -- test = test1, test2 > - Query (/select?q=\*:\*) one collection > -- results in successful Solr response > - Query the alias (/select?q=\*:\*) > -- results in HTTP 401 > There is no need to add documents to observe the issue. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[GitHub] [lucene-solr] murblanc commented on pull request #1504: SOLR-14462: cache more than one autoscaling session
murblanc commented on pull request #1504: URL: https://github.com/apache/lucene-solr/pull/1504#issuecomment-644156063 @noblepaul this PR seems to have fallen through the cracks... I'm looking at other aspects of Autoscaling issues and this being merged would make my life easier.
[jira] [Commented] (LUCENE-9395) ConstantValuesSource creates more than one DoubleValues unnecessarily
[ https://issues.apache.org/jira/browse/LUCENE-9395?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17135876#comment-17135876 ] Michael McCandless commented on LUCENE-9395: {quote}Hello! {{org.apache.lucene.search.DoubleValuesSource#getValues}} is called about once per segment, and so it's not an allocation hotspot. {quote} Hmm, it is true there are other possible allocation hot spots (e.g. anything per-hit, or per collected hit, etc.), but 1) some indices have a long tail of small segments (LUCENE-8962 is trying to help there), 2) if you use many dynamic expressions, you might be creating quite a few constant {{DoubleValues}} per segment, and 3) every little bit of allocation does add some (yes, tiny, but still some) load to GC. Likely most of these allocations "die young" and so the added cost is really small, I agree. {quote}There are tons of implementations of this method and similarly for {{org.apache.lucene.search.Weight#scorer}} as well which is also called about once per segment that typically allocate a bunch of stuff. {quote} It is true there are other places that allocate per-segment objects, but I do not think that is a valid argument against fixing this one? That is like saying "the world is already dirty so why should I pick up this trash lying on the sidewalk myself and throw it away?". {quote}I don't think it's worth bothering changing the code. {quote} Whose/what "bother" are you referring to here? We committers who would actually push the change? I would say the "bother" was really on [~hypothesisx86] who has already taken the initiative here to contribute a small improvement. Thank you [~hypothesisx86]! {quote}There is a very slight help for the GC (that I doubt you could even measure) and a very slight negative impact on complexity. {quote} +1. 
I also doubt it is measurable over the noise limit, but I don't think that for small improvements like this we really must be able to measure the gain as a blocker to improving. Say I find a change that removes one multiplication or addition per-hit at defaults. Likely I could not prove that change moves the needle; yet it is still an improvement that we should want to make (all else being equal). And we should not flaunt waste in Lucene's sources. Yes, there is a minuscule change in code complexity, but it's exceptionally tiny in my opinion. I think when a newish developer offers a small contribution, rather than replying with "let's not bother", we should strongly welcome and encourage it. This is how a healthy open-source community grows! > ConstantValuesSource creates more than one DoubleValues unnecessarily > -- > > Key: LUCENE-9395 > URL: https://issues.apache.org/jira/browse/LUCENE-9395 > Project: Lucene - Core > Issue Type: Improvement > Components: core/search >Affects Versions: 8.5.2 >Reporter: Tony Xu >Priority: Minor > Attachments: LUCENE-9395.patch > > > At my day job, we use ConstantValuesSource to represent default values or a > constant query-level feature by calling *_DoubleValuesSource.constant_*. I > realized under the hood the _*ConstantValuesSource.getDoubleValues*_ creates > a new _*DoubleValues*_ which simply returns the specified value each time it > is called. > Unless I missed something, I don't see a risk of creating one > _*DoubleValues*_ and using it as the return value of all _*getDoubleValues()*_ > calls, given that the constant _*DoubleValues*_ doesn't maintain any state. > We can also offer the user flexibility in how to initialize it. > 1) _*DoubleValuesSource.constant(double constant)*_ – we can eagerly > initialize a `DoubleValues` that returns the constant and make it the return > value of all _*getDoubleValues()*_ calls. 
> 2) _*DoubleValuesSource.constant(DoubleSupplier doubleSupplier)*_ – For lazy > evaluation, if the constant takes some time to compute and the user expects the > returned DVS will not be used in all code paths.
[jira] [Commented] (SOLR-13971) Velocity custom template RCE vulnerability
[ https://issues.apache.org/jira/browse/SOLR-13971?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17135869#comment-17135869 ] Szilard Antal commented on SOLR-13971: -- [~ichattopadhyaya] thanks for the quick information. > Velocity custom template RCE vulnerability > -- > > Key: SOLR-13971 > URL: https://issues.apache.org/jira/browse/SOLR-13971 > Project: Solr > Issue Type: Bug >Affects Versions: 5.0, 5.5.5, 6.0, 6.6.5, 7.0, 7.7, 8.0, 8.3 >Reporter: Ishan Chattopadhyaya >Assignee: Ishan Chattopadhyaya >Priority: Blocker > Fix For: 7.7.3, 8.4 > > Attachments: SOLR-13971.patch > > Time Spent: 1h 20m > Remaining Estimate: 0h > > We need to disable this. There is a zero day attack in the wild. 41 stars on > this github project: > # https://github.com/jas502n/solr_rce > # https://gist.github.com/s00py/a1ba36a3689fa13759ff910e179fc133 > We need to disable this in a way that cannot be re-enabled using the Config > API. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[GitHub] [lucene-solr] mikemccand commented on pull request #1351: LUCENE-9280: Collectors to skip noncompetitive documents
mikemccand commented on pull request #1351: URL: https://github.com/apache/lucene-solr/pull/1351#issuecomment-644139389 Do we expect the nightly benchmarks (`luceneutil`) to move from this? Does it speed up any non-relevance (e.g. fielded) sort by default?
[jira] [Commented] (SOLR-13971) Velocity custom template RCE vulnerability
[ https://issues.apache.org/jira/browse/SOLR-13971?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17135851#comment-17135851 ] Ishan Chattopadhyaya commented on SOLR-13971: - [~sziszo], unfortunately not. We currently support just 6.6, 7.7 and 8.x releases. > Velocity custom template RCE vulnerability > -- > > Key: SOLR-13971 > URL: https://issues.apache.org/jira/browse/SOLR-13971 > Project: Solr > Issue Type: Bug >Affects Versions: 5.0, 5.5.5, 6.0, 6.6.5, 7.0, 7.7, 8.0, 8.3 >Reporter: Ishan Chattopadhyaya >Assignee: Ishan Chattopadhyaya >Priority: Blocker > Fix For: 7.7.3, 8.4 > > Attachments: SOLR-13971.patch > > Time Spent: 1h 20m > Remaining Estimate: 0h > > We need to disable this. There is a zero day attack in the wild. 41 stars on > this github project: > # https://github.com/jas502n/solr_rce > # https://gist.github.com/s00py/a1ba36a3689fa13759ff910e179fc133 > We need to disable this in a way that cannot be re-enabled using the Config > API. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[jira] [Comment Edited] (SOLR-13971) Velocity custom template RCE vulnerability
[ https://issues.apache.org/jira/browse/SOLR-13971?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17135847#comment-17135847 ] Szilard Antal edited comment on SOLR-13971 at 6/15/20, 1:24 PM: [~ichattopadhyaya] do you plan to backport this fix to 5.5.5? was (Author: sziszo): [~ichattopadhyaya] are you planning to backport this fix to 5.5.5? > Velocity custom template RCE vulnerability > -- > > Key: SOLR-13971 > URL: https://issues.apache.org/jira/browse/SOLR-13971 > Project: Solr > Issue Type: Bug >Affects Versions: 5.0, 5.5.5, 6.0, 6.6.5, 7.0, 7.7, 8.0, 8.3 >Reporter: Ishan Chattopadhyaya >Assignee: Ishan Chattopadhyaya >Priority: Blocker > Fix For: 7.7.3, 8.4 > > Attachments: SOLR-13971.patch > > Time Spent: 1h 20m > Remaining Estimate: 0h > > We need to disable this. There is a zero day attack in the wild. 41 stars on > this github project: > # https://github.com/jas502n/solr_rce > # https://gist.github.com/s00py/a1ba36a3689fa13759ff910e179fc133 > We need to disable this in a way that cannot be re-enabled using the Config > API. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[jira] [Commented] (SOLR-13971) Velocity custom template RCE vulnerability
[ https://issues.apache.org/jira/browse/SOLR-13971?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17135847#comment-17135847 ] Szilard Antal commented on SOLR-13971: -- [~ichattopadhyaya] are you planning to backport this fix to 5.5.5? > Velocity custom template RCE vulnerability > -- > > Key: SOLR-13971 > URL: https://issues.apache.org/jira/browse/SOLR-13971 > Project: Solr > Issue Type: Bug >Affects Versions: 5.0, 5.5.5, 6.0, 6.6.5, 7.0, 7.7, 8.0, 8.3 >Reporter: Ishan Chattopadhyaya >Assignee: Ishan Chattopadhyaya >Priority: Blocker > Fix For: 7.7.3, 8.4 > > Attachments: SOLR-13971.patch > > Time Spent: 1h 20m > Remaining Estimate: 0h > > We need to disable this. There is a zero day attack in the wild. 41 stars on > this github project: > # https://github.com/jas502n/solr_rce > # https://gist.github.com/s00py/a1ba36a3689fa13759ff910e179fc133 > We need to disable this in a way that cannot be re-enabled using the Config > API. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[jira] [Commented] (SOLR-14516) NPE during Realtime GET
[ https://issues.apache.org/jira/browse/SOLR-14516?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17135807#comment-17135807 ] Erick Erickson commented on SOLR-14516: --- See the e-mail I just sent about using your IDE well (in the badapple report). IntelliJ will highlight any method that could return null and is dereferenced. It's worth looking at these when working on code to see whether the inspection reveals something that should be guarded against. Now that there aren't any warnings in the code, more of the tools' warnings like this will actually mean something. > NPE during Realtime GET > --- > > Key: SOLR-14516 > URL: https://issues.apache.org/jira/browse/SOLR-14516 > Project: Solr > Issue Type: Bug >Reporter: Noble Paul >Assignee: Noble Paul >Priority: Major > Fix For: 8.6 > > > The exact reason is unknown. But the following is the stacktrace:
> o.a.s.s.HttpSolrCall null:java.lang.NullPointerException
>   at org.apache.solr.common.util.JsonTextWriter.writeStr(JsonTextWriter.java:83)
>   at org.apache.solr.schema.StrField.write(StrField.java:101)
>   at org.apache.solr.response.TextResponseWriter.writeVal(TextResponseWriter.java:124)
>   at org.apache.solr.response.JSONWriter.writeSolrDocument(JSONWriter.java:106)
>   at org.apache.solr.response.TextResponseWriter.writeSolrDocumentList(TextResponseWriter.java:170)
>   at org.apache.solr.response.TextResponseWriter.writeVal(TextResponseWriter.java:147)
>   at org.apache.solr.common.util.JsonTextWriter.writeNamedListAsMapWithDups(JsonTextWriter.java:386)
>   at org.apache.solr.common.util.JsonTextWriter.writeNamedList(JsonTextWriter.java:292)
>   at ...
[jira] [Commented] (SOLR-14516) NPE during Realtime GET
[ https://issues.apache.org/jira/browse/SOLR-14516?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17135801#comment-17135801 ] Ishan Chattopadhyaya commented on SOLR-14516: - The underlying reason for this is that while populating a SolrInputDocument with docValues, the string docValues field gets a BytesRef (instead of a CharSequence). The Field class is unable to get a string representation of the field containing the BytesRef, and hence sends out a null to the response writers. > NPE during Realtime GET > --- > > Key: SOLR-14516 > URL: https://issues.apache.org/jira/browse/SOLR-14516 > Project: Solr > Issue Type: Bug >Reporter: Noble Paul >Assignee: Noble Paul >Priority: Major > Fix For: 8.6 > > > The exact reason is unknown. But the following is the stacktrace:
> o.a.s.s.HttpSolrCall null:java.lang.NullPointerException
>   at org.apache.solr.common.util.JsonTextWriter.writeStr(JsonTextWriter.java:83)
>   at org.apache.solr.schema.StrField.write(StrField.java:101)
>   at org.apache.solr.response.TextResponseWriter.writeVal(TextResponseWriter.java:124)
>   at org.apache.solr.response.JSONWriter.writeSolrDocument(JSONWriter.java:106)
>   at org.apache.solr.response.TextResponseWriter.writeSolrDocumentList(TextResponseWriter.java:170)
>   at org.apache.solr.response.TextResponseWriter.writeVal(TextResponseWriter.java:147)
>   at org.apache.solr.common.util.JsonTextWriter.writeNamedListAsMapWithDups(JsonTextWriter.java:386)
>   at org.apache.solr.common.util.JsonTextWriter.writeNamedList(JsonTextWriter.java:292)
>   at ...
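The failure mode described in the comment above - a string docValues field surfacing as raw bytes that the response writer cannot stringify - can be illustrated with a plain byte[] standing in for Lucene's BytesRef. This is a sketch of the problem only, not the actual Solr fix:

```java
import java.nio.charset.StandardCharsets;

// Sketch of the failure mode described above, using byte[] as a stand-in
// for Lucene's BytesRef: a string field value that arrives as raw UTF-8
// bytes must be decoded before a text response writer can print it;
// otherwise the writer ends up with null and throws the NPE in the trace.
public class DocValueDecode {

    static String toDisplayString(Object fieldValue) {
        if (fieldValue instanceof byte[]) { // stand-in for a BytesRef value
            return new String((byte[]) fieldValue, StandardCharsets.UTF_8);
        }
        return fieldValue == null ? null : fieldValue.toString();
    }
}
```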
[jira] [Commented] (SOLR-14516) NPE during Realtime GET
[ https://issues.apache.org/jira/browse/SOLR-14516?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17135797#comment-17135797 ] Ishan Chattopadhyaya commented on SOLR-14516: - Here is the way to reproduce it:
{code}
bin/solr -c
curl "localhost:8983/solr/admin/collections?action=CREATE&name=coll1&numShards=1"
curl -X POST -H 'Content-type:application/json' --data-binary '{"add-field":{"name":"myfield","type":"string","stored":false,"docValues":true }}' http://localhost:8983/solr/coll1/schema
curl -X POST -H 'Content-type:application/json' --data-binary '[{"id":1,"myfield":"abc"}]' http://localhost:8983/solr/coll1/update
curl "http://localhost:8983/solr/coll1/get?id=1"
{code}
If the last step is done quickly enough (before the autocommit kicks in), then we have an NPE (which you've suppressed here in this previous commit). > NPE during Realtime GET > --- > > Key: SOLR-14516 > URL: https://issues.apache.org/jira/browse/SOLR-14516 > Project: Solr > Issue Type: Bug >Reporter: Noble Paul >Assignee: Noble Paul >Priority: Major > Fix For: 8.6 > > > The exact reason is unknown. But the following is the stacktrace:
> o.a.s.s.HttpSolrCall null:java.lang.NullPointerException
>   at org.apache.solr.common.util.JsonTextWriter.writeStr(JsonTextWriter.java:83)
>   at org.apache.solr.schema.StrField.write(StrField.java:101)
>   at org.apache.solr.response.TextResponseWriter.writeVal(TextResponseWriter.java:124)
>   at org.apache.solr.response.JSONWriter.writeSolrDocument(JSONWriter.java:106)
>   at org.apache.solr.response.TextResponseWriter.writeSolrDocumentList(TextResponseWriter.java:170)
>   at org.apache.solr.response.TextResponseWriter.writeVal(TextResponseWriter.java:147)
>   at org.apache.solr.common.util.JsonTextWriter.writeNamedListAsMapWithDups(JsonTextWriter.java:386)
>   at org.apache.solr.common.util.JsonTextWriter.writeNamedList(JsonTextWriter.java:292)
>   at ...
[jira] [Commented] (SOLR-14516) NPE during Realtime GET
[ https://issues.apache.org/jira/browse/SOLR-14516?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17135794#comment-17135794 ] Ishan Chattopadhyaya commented on SOLR-14516: - This is just silently suppressing the NPE, without actually fixing the issue.
1. I'll post the fix for the actual issue shortly.
2. I think if a null trickles through to the response writers (which it should not), we should at least log a WARN for this.
Please revert the fix and let's do it afresh. > NPE during Realtime GET > --- > > Key: SOLR-14516 > URL: https://issues.apache.org/jira/browse/SOLR-14516 > Project: Solr > Issue Type: Bug >Reporter: Noble Paul >Assignee: Noble Paul >Priority: Major > Fix For: 8.6 > > > The exact reason is unknown. But the following is the stacktrace:
> o.a.s.s.HttpSolrCall null:java.lang.NullPointerException
>   at org.apache.solr.common.util.JsonTextWriter.writeStr(JsonTextWriter.java:83)
>   at org.apache.solr.schema.StrField.write(StrField.java:101)
>   at org.apache.solr.response.TextResponseWriter.writeVal(TextResponseWriter.java:124)
>   at org.apache.solr.response.JSONWriter.writeSolrDocument(JSONWriter.java:106)
>   at org.apache.solr.response.TextResponseWriter.writeSolrDocumentList(TextResponseWriter.java:170)
>   at org.apache.solr.response.TextResponseWriter.writeVal(TextResponseWriter.java:147)
>   at org.apache.solr.common.util.JsonTextWriter.writeNamedListAsMapWithDups(JsonTextWriter.java:386)
>   at org.apache.solr.common.util.JsonTextWriter.writeNamedList(JsonTextWriter.java:292)
>   at ...
[GitHub] [lucene-solr] MighTguY opened a new pull request #1578: SOLR-14570: Edismax round plugin
MighTguY opened a new pull request #1578: URL: https://github.com/apache/lucene-solr/pull/1578

# Description
Adds round-off support to the eDisMax minimum-should-match (mm) calculation: when mm.roundOff is true, the computed mm value for eDisMax queries is rounded instead of floored.

# Solution
This adds a flag; when the flag is true, the minimum number of clauses required is rounded off instead of always taking the floor value.

# Tests
I have run the test cases for SolrPluginUtils.calculateMinShouldMatch; tests are included in the testMinShouldMatchCalculatorWithRoundoff method.

# Checklist
Please review the following and check all that apply:
- [ ] I have reviewed the guidelines for [How to Contribute](https://wiki.apache.org/solr/HowToContribute) and my code conforms to the standards described there to the best of my ability.
- [ ] I have created a Jira issue and added the issue ID to my pull request title.
- [ ] I have given Solr maintainers [access](https://help.github.com/en/articles/allowing-changes-to-a-pull-request-branch-created-from-a-fork) to contribute to my PR branch. (optional but recommended)
- [ ] I have developed this patch against the `master` branch.
- [ ] I have run `ant precommit` and the appropriate test suite.
- [ ] I have added tests for my changes.
- [ ] I have added documentation for the [Ref Guide](https://github.com/apache/lucene-solr/tree/master/solr/solr-ref-guide) (for Solr changes only).

This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Created] (SOLR-14570) Round Off in MM Edismax
Lucky Sharma created SOLR-14570:
Summary: Round Off in MM Edismax
Key: SOLR-14570
URL: https://issues.apache.org/jira/browse/SOLR-14570
Project: Solr
Issue Type: Improvement
Security Level: Public (Default Security Level. Issues are Public)
Reporter: Lucky Sharma

Currently in eDisMax, if we pass mm=75% and we have 3 tokens in the query, it always takes the floor value, i.e. 1. This issue proposes supporting rounded-off values, i.e. in the above case the minimum number of clauses that should match would be 2.
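The floor-vs-round behaviour described here can be sketched as follows (Python pseudocode of the percentage branch of the mm calculation; the round_off flag is the proposed addition and all names are illustrative, not Solr's API):

```python
import math

def min_should_match(optional_clauses, percent, round_off=False):
    # Percentage branch of the mm calculation: take percent of the
    # optional clauses, floored today, conventionally rounded under
    # the proposal; never exceed the number of clauses available.
    raw = optional_clauses * percent / 100.0
    calc = math.floor(raw + 0.5) if round_off else math.floor(raw)
    return min(calc, optional_clauses)
```

For example, with mm=75% and 2 tokens the floor behaviour yields 1 while rounding yields 2; with 3 tokens both yield 2, since 75% of 3 is 2.25.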
[GitHub] [lucene-solr] mayya-sharipova commented on pull request #1351: LUCENE-9280: Collectors to skip noncompetitive documents
mayya-sharipova commented on pull request #1351: URL: https://github.com/apache/lucene-solr/pull/1351#issuecomment-644051324 @jpountz I am wondering if you have any further feedback for this PR?
[jira] [Commented] (SOLR-14516) NPE during Realtime GET
[ https://issues.apache.org/jira/browse/SOLR-14516?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17135654#comment-17135654 ] ASF subversion and git services commented on SOLR-14516:

Commit fabc70474891f6f3e485e5a3475ebf4d05138fc5 in lucene-solr's branch refs/heads/master from Noble Paul [ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=fabc704 ] SOLR-14516: NPE in JsonTextWriter

> NPE during Realtime GET
> -----------------------
>
> Key: SOLR-14516
> URL: https://issues.apache.org/jira/browse/SOLR-14516
> Project: Solr
> Issue Type: Bug
> Reporter: Noble Paul
> Assignee: Noble Paul
> Priority: Major
> Fix For: 8.6
>
> The exact reason is unknown. But the following is the stacktrace:
>
> o.a.s.s.HttpSolrCall null:java.lang.NullPointerException
> 	at org.apache.solr.common.util.JsonTextWriter.writeStr(JsonTextWriter.java:83)
> 	at org.apache.solr.schema.StrField.write(StrField.java:101)
> 	at org.apache.solr.response.TextResponseWriter.writeVal(TextResponseWriter.java:124)
> 	at org.apache.solr.response.JSONWriter.writeSolrDocument(JSONWriter.java:106)
> 	at org.apache.solr.response.TextResponseWriter.writeSolrDocumentList(TextResponseWriter.java:170)
> 	at org.apache.solr.response.TextResponseWriter.writeVal(TextResponseWriter.java:147)
> 	at org.apache.solr.common.util.JsonTextWriter.writeNamedListAsMapWithDups(JsonTextWriter.java:386)
> 	at org.apache.solr.common.util.JsonTextWriter.writeNamedList(JsonTextWriter.java:292)
[GitHub] [lucene-solr] janhoy edited a comment on pull request #1572: SOLR-14561 CoreAdminAPI's parameters instanceDir and dataDir are now validated
janhoy edited a comment on pull request #1572: URL: https://github.com/apache/lucene-solr/pull/1572#issuecomment-644006967

Ran the whole test suite and uncovered various tests that use "illegal" temp test folders, that now fail. That was expected. So the last commit ba0b544 addresses these tests:
* Give a way to whitelist all paths by setting `-Dsolr.allowPaths=*`
* Add a `CoreContainer.getAllowPaths()` method that tests use to allow individual folders (I like that better than letting tests set global sysprops)
* This also led to a small change in the path comparison - we now convert Path -> String -> Path to make sure paths are comparable, even Lucene's `FilterPath` class used in tests

To review, the easiest is probably to just load the last commit ba0b544.
[GitHub] [lucene-solr] janhoy commented on pull request #1572: SOLR-14561 CoreAdminAPI's parameters instanceDir and dataDir are now validated
janhoy commented on pull request #1572: URL: https://github.com/apache/lucene-solr/pull/1572#issuecomment-644006967

Ran the whole test suite and uncovered various tests that use "illegal" temp test folders, that now fail. That was expected. So the last commit ba0b544 addresses these tests:
* Give a way to whitelist all paths by setting `-Dsolr.allowPaths=*`
* Add a `CoreContainer.getAllowPaths()` method that tests use to allow individual folders (I like that better than letting tests set global sysprops)
* This also led to a small change in the path comparison - we now convert Path -> String -> Path to make sure paths are comparable, even Lucene's `FilterPath` class used in tests

To review, the easiest is probably to just load the last commit ba0b544.
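The Path -> String -> Path round-trip from the last bullet can be illustrated like this (a Python sketch of the idea; Java's `FilterPath` has no direct analogue in Python, so redundant path segments stand in for the "different Path implementation" problem):

```python
import os
from pathlib import Path

def comparable(p):
    # Round-trip through a string and normalize, so two paths that
    # denote the same location compare equal regardless of how each
    # instance was constructed.
    return Path(os.path.normpath(str(p)))
```

Without such normalization, equality checks and prefix checks between paths built by different code paths can disagree even though they point at the same directory.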
[GitHub] [lucene-solr] janhoy commented on a change in pull request #1572: SOLR-14561 CoreAdminAPI's parameters instanceDir and dataDir are now validated
janhoy commented on a change in pull request #1572: URL: https://github.com/apache/lucene-solr/pull/1572#discussion_r440028825

File path: solr/core/src/java/org/apache/solr/core/SolrPaths.java
@@ -128,4 +130,33 @@

      private static void logOnceInfo(String key, String msg) { log.info(msg); } }

      /**
       * Checks that the given path is relative to SOLR_HOME, SOLR_DATA_HOME, coreRootDirectory or one of the paths
       * specified in solr.xml's allowPaths element. The following paths will fail validation:
       * - Relative paths starting with ..
       * - Windows UNC paths (\\host\share\path)
       * - Absolute paths which are not below the list of allowed paths
       *
       * @param pathToAssert path to check
       * @param allowPaths list of paths that should be allowed prefixes
       * @throws SolrException if path is outside allowed paths
       */
      public static void assertPathAllowed(Path pathToAssert, Set<Path> allowPaths) throws SolrException {
        if (OS.isFamilyWindows() && pathToAssert.toString().startsWith("\\\\")) {

Review comment: Does anyone have a Windows box to test this on?
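The validation rules listed in the javadoc can be sketched as follows (Python, illustrative names, not Solr's implementation; it returns a violation message instead of throwing a SolrException so the behaviour is easy to inspect):

```python
def path_violation(path_str, allowed_prefixes):
    """Return a description of why path_str fails validation, or None."""
    # Windows UNC paths (\\host\share\path) are always rejected.
    if path_str.startswith("\\\\"):
        return "Windows UNC paths are rejected: " + path_str
    # Any '..' segment could escape the intended root.
    if ".." in path_str.replace("\\", "/").split("/"):
        return "paths containing '..' segments are rejected: " + path_str
    # Absolute paths must sit below one of the allowed roots.
    if path_str.startswith("/") and not any(
            path_str == p or path_str.startswith(p.rstrip("/") + "/")
            for p in allowed_prefixes):
        return "absolute path outside the allowed roots: " + path_str
    return None  # relative (in-tree) or explicitly allowed path
```

This string-prefix check is only a sketch; a real implementation would normalize both sides first (as the Path -> String -> Path comment above suggests) so that equivalent spellings of the same directory compare consistently.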