[GitHub] [lucene-solr] megancarey opened a new pull request #1592: SOLR-14579 First pass at dismantling Utils
megancarey opened a new pull request #1592: URL: https://github.com/apache/lucene-solr/pull/1592 # Description Eliminating warnings due to static Functions in the Utils class within Solr. # Solution Removed a few Function variables from the Utils class which were adding little value and causing warnings throughout the code. # Tests I ran `ant test` from the solr directory, as all of the changes in this PR are limited to Solr. # Checklist Please review the following and check all that apply: - [x] I have reviewed the guidelines for [How to Contribute](https://wiki.apache.org/solr/HowToContribute) and my code conforms to the standards described there to the best of my ability. - [x] I have created a Jira issue and added the issue ID to my pull request title. - [x] I have given Solr maintainers [access](https://help.github.com/en/articles/allowing-changes-to-a-pull-request-branch-created-from-a-fork) to contribute to my PR branch. (optional but recommended) - [x] I have developed this patch against the `master` branch. - [x] I have run `ant precommit` and the appropriate test suite. - [ ] I have added tests for my changes. - [ ] I have added documentation for the [Ref Guide](https://github.com/apache/lucene-solr/tree/master/solr/solr-ref-guide) (for Solr changes only). This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[GitHub] [lucene-solr] dsmiley commented on pull request #1572: SOLR-14561 CoreAdminAPI's parameters instanceDir and dataDir are now validated
dsmiley commented on pull request #1572: URL: https://github.com/apache/lucene-solr/pull/1572#issuecomment-645742371 @mkhludnev ? I recall you use Windows.
[jira] [Updated] (SOLR-14575) Solr restore is failing when basic authentication is enabled
[ https://issues.apache.org/jira/browse/SOLR-14575?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yaswanth updated SOLR-14575: Priority: Blocker (was: Major) > Solr restore is failing when basic authentication is enabled > > > Key: SOLR-14575 > URL: https://issues.apache.org/jira/browse/SOLR-14575 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: Backup/Restore >Affects Versions: 8.2 >Reporter: Yaswanth >Priority: Blocker > > Hi Team, > I was testing backup / restore for SolrCloud and it is failing exactly when I > am trying to restore a successfully backed-up collection. > I am using Solr 8.2 with basic authentication enabled and then creating a 2 > replica collection. When I run the backup like > curl -u xxx:xxx -k > 'https://x.x.x.x:8080/solr/admin/collections?action=BACKUP&name=test&collection=test&location=/solrdatabkup' > it worked fine and I do see a folder was created with the collection name > under /solrdatabackup > But now when I delete the existing collection and then try running the restore > API like > curl -u xxx:xxx -k > 'https://x.x.x.x:8080/solr/admin/collections?action=RESTORE&name=test&collection=test&location=/solrdatabkup' > it throws an error like > { > "responseHeader":{ > "status":500, > "QTime":457}, > "Operation restore caused > exception:":"org.apache.solr.common.SolrException:org.apache.solr.common.SolrException: > ADDREPLICA failed to create replica", > "exception":{ > "msg":"ADDREPLICA failed to create replica", > "rspCode":500}, > "error":{ > "metadata":[ > "error-class","org.apache.solr.common.SolrException", > "root-error-class","org.apache.solr.common.SolrException"], > "msg":"ADDREPLICA failed to create replica", > "trace":"org.apache.solr.common.SolrException: ADDREPLICA failed to create > replica\n\tat > 
org.apache.solr.client.solrj.SolrResponse.getException(SolrResponse.java:53)\n\tat > > org.apache.solr.handler.admin.CollectionsHandler.invokeAction(CollectionsHandler.java:280)\n\tat > > org.apache.solr.handler.admin.CollectionsHandler.handleRequestBody(CollectionsHandler.java:252)\n\tat > > org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:199)\n\tat > > org.apache.solr.servlet.HttpSolrCall.handleAdmin(HttpSolrCall.java:820)\n\tat > org.apache.solr.servlet.HttpSolrCall.handleAdminRequest(HttpSolrCall.java:786)\n\tat > org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:546)\n\tat > org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:423)\n\tat > > org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:350)\n\tat > > org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1602)\n\tat > > org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:540)\n\tat > > org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:146)\n\tat > > org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)\n\tat > > org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)\n\tat > > org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:257)\n\tat > > org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1711)\n\tat > > org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:255)\n\tat > > org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1347)\n\tat > > org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:203)\n\tat > > org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:480)\n\tat > > org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1678)\n\tat > > org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:201)\n\tat > > 
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1249)\n\tat > > org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:144)\n\tat > > org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:220)\n\tat > > org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:152)\n\tat > > org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)\n\tat > > org.eclipse.jetty.rewrite.handler.RewriteHandler.handle(RewriteHandler.java:335)\n\tat > > org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)\n\tat > org.eclipse.jetty.server.Server.handle(Server.java:505)\n\tat >
[jira] [Commented] (LUCENE-9390) Kuromoji tokenizer discards tokens if they start with a punctuation character
[ https://issues.apache.org/jira/browse/LUCENE-9390?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17138948#comment-17138948 ] Kazuaki Hiraga commented on LUCENE-9390: Hello, I might have just remembered why [~cm] and I talked about this option. If my memory is correct, the reason was _*position* and *start / end offset*_. For example, if we have the keyword *日本語と「記号」の話し* and apply the tokenizer with _discardPunctuation=true_, the token positions and offsets will be the following: ||Token||日本語||と||記号||の||話し|| |Offset|0,3|3,4|5,7|8,9|9,11| |Position|1|2|3|4|5| And the following are the results of tokenization that use a *char filter* and a *token filter.* Applying _PatternReplaceCharFilterFactory_ to remove some of the punctuation before running the tokenizer: ||Token||日本語||と||記号||の||話し|| |Offset|0,3|3,5|5,8|8,9|9,11| |Position|1|2|3|4|5| Applying _PatternReplaceFilterFactory_ after applying the tokenizer: ||Token||日本語||と||記号||の||話し|| |Offset|0,3|3,4|5,7|8,9|9,11| |Position|1|2|4|6|7| I cannot remember what I wanted to do at that time, but it seems that the former result, which uses the charFilter, is the reasonable one :) I might have preferred the start/end offsets that the tokenizer with _discardPunctuation=true_ generates, but since there's no good use case in my mind, I think removing this option is reasonable. > Kuromoji tokenizer discards tokens if they start with a punctuation character > - > > Key: LUCENE-9390 > URL: https://issues.apache.org/jira/browse/LUCENE-9390 > Project: Lucene - Core > Issue Type: Improvement >Reporter: Jim Ferenczi >Priority: Minor > Time Spent: 40m > Remaining Estimate: 0h > > This issue was first raised in Elasticsearch > [here|https://github.com/elastic/elasticsearch/issues/57614] > The unidic dictionary that is used by the Kuromoji tokenizer contains entries > that mix punctuations and other characters. 
For instance the following entry: > _(株),1285,1285,3690,名詞,一般,*,*,*,*,(株),カブシキガイシャ,カブシキガイシャ_ > can be found in the Noun.csv file. > Today, tokens that start with punctuations are automatically removed by > default (discardPunctuation is true). I think the code was written this way > because we expect punctuations to be separated from normal tokens but there > are exceptions in the original dictionary. Maybe we should check the entire > token when discarding punctuations ? > > -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
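The offset behavior in the tables above can be reproduced without Lucene. The sketch below (the token list and source string are taken from the comment; the scanning logic is illustrative, not the tokenizer's actual implementation) locates each surviving token in the original string, showing that discarding punctuation leaves holes in the offsets (4,5 for 「 and 7,8 for 」) while the positions the tokenizer assigns remain consecutive:

```java
public class OffsetSketch {
    public static void main(String[] args) {
        // Source text and the tokens that survive discardPunctuation=true,
        // per the first table in the comment above.
        String text = "日本語と「記号」の話し";
        String[] tokens = {"日本語", "と", "記号", "の", "話し"};

        int from = 0;
        for (String tok : tokens) {
            // Find each token at or after the end of the previous one;
            // punctuation that was discarded shows up as a gap in offsets.
            int start = text.indexOf(tok, from);
            int end = start + tok.length();
            System.out.println(tok + " " + start + "," + end);
            from = end;
        }
    }
}
```

The printed start/end pairs match the Offset row of the first table; the char-filter variant instead stretches the neighboring offsets (3,5 and 5,8) because the characters are removed before the tokenizer ever sees them.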
[jira] [Commented] (SOLR-14567) Fix or suppress remaining warnings in solrj
[ https://issues.apache.org/jira/browse/SOLR-14567?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17138944#comment-17138944 ] Megan Carey commented on SOLR-14567: No worries, I figured it was worth a shot to ask :) I've filed this Jira to remove it, and I'll aim to pick it up next week: SOLR-14579. > Fix or suppress remaining warnings in solrj > --- > > Key: SOLR-14567 > URL: https://issues.apache.org/jira/browse/SOLR-14567 > Project: Solr > Issue Type: Sub-task >Reporter: Erick Erickson >Assignee: Erick Erickson >Priority: Major > Fix For: 8.6 > > > This is another place where the number of warnings per directory is getting > too small to do individually, so I'll do them all in a bunch. > Note: this will exclude autoscaling.
[jira] [Updated] (SOLR-14579) Remove SolrJ 'Utils' generic map functions
[ https://issues.apache.org/jira/browse/SOLR-14579?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Megan Carey updated SOLR-14579: --- Summary: Remove SolrJ 'Utils' generic map functions (was: Remove SolrJ module 'Utils' generic map functions) > Remove SolrJ 'Utils' generic map functions > -- > > Key: SOLR-14579 > URL: https://issues.apache.org/jira/browse/SOLR-14579 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) >Affects Versions: master (9.0) >Reporter: Megan Carey >Priority: Minor > > Remove the map functions like `NEW_HASHMAP_FUN` from the Utils class in solrj > module to reduce warnings and improve code quality. > [https://github.com/apache/lucene-solr/blob/master/solr/solrj/src/java/org/apache/solr/common/util/Utils.java#L92]
[jira] [Created] (SOLR-14579) Remove SolrJ module 'Utils' generic map functions
Megan Carey created SOLR-14579: -- Summary: Remove SolrJ module 'Utils' generic map functions Key: SOLR-14579 URL: https://issues.apache.org/jira/browse/SOLR-14579 Project: Solr Issue Type: Improvement Security Level: Public (Default Security Level. Issues are Public) Affects Versions: master (9.0) Reporter: Megan Carey Remove the map functions like `NEW_HASHMAP_FUN` from the Utils class in solrj module to reduce warnings and improve code quality. [https://github.com/apache/lucene-solr/blob/master/solr/solrj/src/java/org/apache/solr/common/util/Utils.java#L92] -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
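To see why constants like `NEW_HASHMAP_FUN` generate warnings, here is a minimal sketch. The constant below is an illustrative stand-in, not the exact Solr code: a raw `Function` that ignores its argument and returns a fresh `HashMap`, which forces an unchecked cast (and therefore a warning, or a suppression) at every call site, while a direct constructor call is fully typed:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

public class UtilsSketch {
    // Hypothetical stand-in for the kind of constant being removed:
    // a raw Function that ignores its input and returns a new map.
    @SuppressWarnings("rawtypes")
    static final Function NEW_HASHMAP_FUN = o -> new HashMap<>();

    public static void main(String[] args) {
        // Before: the call site needs an unchecked cast from Object.
        @SuppressWarnings("unchecked")
        Map<String, Object> viaFun =
                (Map<String, Object>) NEW_HASHMAP_FUN.apply(null);
        viaFun.put("k", 1);

        // After: a plain constructor call (or HashMap::new where a
        // Supplier is wanted) is fully typed and warning-free.
        Map<String, Object> direct = new HashMap<>();
        direct.put("k", 1);

        System.out.println(viaFun.equals(direct));
    }
}
```

Both maps end up identical, so removing the shared constant changes nothing at runtime; it only eliminates the unchecked casts and suppressions scattered across call sites.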
[jira] [Commented] (SOLR-14578) Confusing Name in the docs and Test of Auto Add Trigger
[ https://issues.apache.org/jira/browse/SOLR-14578?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17138939#comment-17138939 ] Marcus Eagan commented on SOLR-14578: - [https://github.com/apache/lucene-solr/pull/1591] > Confusing Name in the docs and Test of Auto Add Trigger > --- > > Key: SOLR-14578 > URL: https://issues.apache.org/jira/browse/SOLR-14578 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) > Components: AutoScaling >Affects Versions: master (9.0) >Reporter: Marcus Eagan >Priority: Trivial > Time Spent: 10m > Remaining Estimate: 0h > > In the autoscaling docs, the names of two actions are the same, > which is confusing to users. > See: > {code:java} > { > "set-trigger": { > "name": ".auto_add_replicas", > "event": "nodeLost", > "waitFor": "5s", > "enabled": true, > "actions": [ > { > "name": "auto_add_replicas_plan", > "class": "solr.AutoAddReplicasPlanAction" > }, > { >"name": "auto_add_replicas_plan", // works?, but should be execute plan >"class": "solr.ExecutePlanAction" > } > ] > } > } > {code}
[GitHub] [lucene-solr] MarcusSorealheis opened a new pull request #1591: SOLR-14578: Update solrcloud-autoscaling-triggers.adoc and test
MarcusSorealheis opened a new pull request #1591: URL: https://github.com/apache/lucene-solr/pull/1591 # Description Fix the action name in the doc and test. # Solution Change the execute action to be `execute_plan`. # Tests Fixes an existing test. # Checklist Please review the following and check all that apply: - [ ] I have reviewed the guidelines for [How to Contribute](https://wiki.apache.org/solr/HowToContribute) and my code conforms to the standards described there to the best of my ability. - [ ] I have created a Jira issue and added the issue ID to my pull request title. - [ ] I have given Solr maintainers [access](https://help.github.com/en/articles/allowing-changes-to-a-pull-request-branch-created-from-a-fork) to contribute to my PR branch. (optional but recommended) - [ ] I have developed this patch against the `master` branch. - [ ] I have run `ant precommit` and the appropriate test suite. - [ ] I have added tests for my changes. - [ ] I have added documentation for the [Ref Guide](https://github.com/apache/lucene-solr/tree/master/solr/solr-ref-guide) (for Solr changes only).
[jira] [Created] (SOLR-14578) Confusing Name in the docs and Test of Auto Add Trigger
Marcus Eagan created SOLR-14578: --- Summary: Confusing Name in the docs and Test of Auto Add Trigger Key: SOLR-14578 URL: https://issues.apache.org/jira/browse/SOLR-14578 Project: Solr Issue Type: Improvement Security Level: Public (Default Security Level. Issues are Public) Components: AutoScaling Affects Versions: master (9.0) Reporter: Marcus Eagan In the autoscaling docs, the names of two actions are the same, which is confusing to users. See: {code:java} { "set-trigger": { "name": ".auto_add_replicas", "event": "nodeLost", "waitFor": "5s", "enabled": true, "actions": [ { "name": "auto_add_replicas_plan", "class": "solr.AutoAddReplicasPlanAction" }, { "name": "auto_add_replicas_plan", // works?, but should be execute plan "class": "solr.ExecutePlanAction" } ] } } {code}
[jira] [Updated] (SOLR-11973) Fail compilation on warnings
[ https://issues.apache.org/jira/browse/SOLR-11973?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Erick Erickson updated SOLR-11973: -- Summary: Fail compilation on warnings (was: Fail compilation on precommit warnings) > Fail compilation on warnings > > > Key: SOLR-11973 > URL: https://issues.apache.org/jira/browse/SOLR-11973 > Project: Solr > Issue Type: Improvement > Components: Build >Reporter: Erick Erickson >Assignee: Erick Erickson >Priority: Minor > > Not quite sure whether this qualifies as something for Solr or Lucene. > I'm working gradually on getting precommit lint warnings out of the code > base. I'd like to selectively fail a subtree once it's clean. I played around > a bit with Robert's suggestions on the dev list but couldn't quite get it to > work, then decided I needed to focus on one thing at a time. > See SOLR-10809 for the first clean directory Real Soon Now. > Bonus points would be working out how to fail on deprecation warnings when > building Solr too, although that's farther off in the future. > Assigning to myself, but anyone who knows the build ins and outs _please_ > feel free to take it!
[jira] [Commented] (SOLR-14523) Enhance gradle logging calls validation: eliminate getMessage()
[ https://issues.apache.org/jira/browse/SOLR-14523?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17138931#comment-17138931 ] Erick Erickson commented on SOLR-14523: --- [~asalamon74] Just making sure I'm not dropping the ball on this and we're both waiting on each other. No need to reply if you haven't gotten to it yet. > Enhance gradle logging calls validation: eliminate getMessage() > --- > > Key: SOLR-14523 > URL: https://issues.apache.org/jira/browse/SOLR-14523 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Andras Salamon >Assignee: Erick Erickson >Priority: Minor > Attachments: validation.patch > > > SOLR-14280 fixed a logging problem in SolrConfig by removing a few > getMessage() calls. We could enhance this solution by modifying gradle's > logging calls validation and forbid getMessage() calls during logging. We > should check the existing code and eliminate such calls. > It is possible to suppress the warning using {{//logok}}. > [~erickerickson] [~gerlowskija]
[GitHub] [lucene-solr] gerlowskija commented on a change in pull request #1579: SOLR-14541: Ensure classes that implement equals implement hashCode or suppress warnings
gerlowskija commented on a change in pull request #1579: URL: https://github.com/apache/lucene-solr/pull/1579#discussion_r441907028 ## File path: solr/solrj/src/java/org/apache/solr/client/solrj/io/stream/CloudSolrStream.java ## @@ -491,6 +490,11 @@ public boolean equals(Object o) { return this == o; } +@Override Review comment: As someone who has at least a passing understanding of this code, this and the other Streaming class implementations below get my +1.
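The diff under review pairs an identity-based `equals` with a matching `hashCode`, satisfying the `Object` contract the PR title refers to (objects that compare equal must have equal hash codes). A minimal self-contained sketch of that pattern, using a hypothetical `Node` class rather than the actual `CloudSolrStream` code:

```java
public class IdentityEqualsSketch {
    static class Node {
        @Override
        public boolean equals(Object o) {
            // Identity-based equality: only the same instance is equal.
            return this == o;
        }

        @Override
        public int hashCode() {
            // Identity equality pairs naturally with the identity hash:
            // the one object equal to `this` trivially shares its hash code.
            return System.identityHashCode(this);
        }
    }

    public static void main(String[] args) {
        Node a = new Node();
        Node b = new Node();
        System.out.println(a.equals(a) && a.hashCode() == a.hashCode());
        System.out.println(a.equals(b));
    }
}
```

Declaring `hashCode` explicitly (even when it matches `Object`'s default behavior) also silences the "overrides equals but not hashCode" warning that the linked Jira issue is working through.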
[jira] [Resolved] (SOLR-14532) Add iml file to gitignore
[ https://issues.apache.org/jira/browse/SOLR-14532?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jason Gerlowski resolved SOLR-14532. Fix Version/s: master (9.0) Resolution: Fixed Thanks for finding and fixing this Andras! Merged to master and closing out now. > Add iml file to gitignore > - > > Key: SOLR-14532 > URL: https://issues.apache.org/jira/browse/SOLR-14532 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Andras Salamon >Assignee: Jason Gerlowski >Priority: Trivial > Fix For: master (9.0) > > Attachments: SOLR-14532.patch, SOLR-14532.patch > > > If I execute {{gradlew idea}} in my {{lucene-solr-upstream}} directory, it > will create three files in the root directory: > {noformat} > lucene-solr-upstream.iml > lucene-solr-upstream.ipr > lucene-solr-upstream.iws > {noformat} > Git will ignore the {{ipr}} and the {{iws}} file, but it lists the iml file > as a new file. We should also ignore that one.
[jira] [Assigned] (SOLR-14532) Add iml file to gitignore
[ https://issues.apache.org/jira/browse/SOLR-14532?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jason Gerlowski reassigned SOLR-14532: -- Assignee: Jason Gerlowski
[jira] [Commented] (SOLR-14532) Add iml file to gitignore
[ https://issues.apache.org/jira/browse/SOLR-14532?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17138915#comment-17138915 ] ASF subversion and git services commented on SOLR-14532: Commit 0ea0358624c8eb4555f29d1469eca53e6fe2f3ba in lucene-solr's branch refs/heads/master from Jason Gerlowski [ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=0ea0358 ] SOLR-14532: Add *.iml files to gitignore Also clarifies our docs on importing the project into IntelliJ.
[jira] [Resolved] (SOLR-14577) NPE in terms query parser when field is not provided
[ https://issues.apache.org/jira/browse/SOLR-14577?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tomas Eduardo Fernandez Lobbe resolved SOLR-14577. -- Fix Version/s: 8.6 master (9.0) Assignee: Tomas Eduardo Fernandez Lobbe Resolution: Fixed > NPE in terms query parser when field is not provided > > > Key: SOLR-14577 > URL: https://issues.apache.org/jira/browse/SOLR-14577 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Tomas Eduardo Fernandez Lobbe >Assignee: Tomas Eduardo Fernandez Lobbe >Priority: Minor > Fix For: master (9.0), 8.6 > > Time Spent: 20m > Remaining Estimate: 0h > > Should be a 400 BAD REQUEST instead
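The fix pattern behind "should be a 400 BAD REQUEST instead of an NPE" is to validate the required parameter up front rather than letting a later dereference blow up as an opaque 500. A minimal sketch of the idea, using plain `IllegalArgumentException` as a hypothetical stand-in for Solr's bad-request exception (the helper name and message are illustrative, not the actual patch):

```java
public class RequiredParamSketch {
    // Validate a required request parameter before it is ever dereferenced;
    // a clear exception here maps to HTTP 400, whereas a NullPointerException
    // deep in the parser would surface as an HTTP 500.
    static String requireParam(String name, String value) {
        if (value == null || value.isEmpty()) {
            throw new IllegalArgumentException(
                    "Missing required parameter: " + name);
        }
        return value;
    }

    public static void main(String[] args) {
        try {
            requireParam("f", null); // e.g. {!terms f=...} with no field
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage());
        }
    }
}
```

The caller gets an actionable message naming the missing parameter instead of a stack trace from wherever the null value was first used.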
[jira] [Commented] (SOLR-14577) NPE in terms query parser when field is not provided
[ https://issues.apache.org/jira/browse/SOLR-14577?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17138903#comment-17138903 ] ASF subversion and git services commented on SOLR-14577: Commit bdcbf1019bc7773f2c08a4d31c7b6981765b2a46 in lucene-solr's branch refs/heads/branch_8x from Tomas Eduardo Fernandez Lobbe [ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=bdcbf10 ] SOLR-14577: Return BAD REQUEST when field is missing in terms QP (#1588)
[jira] [Commented] (SOLR-14507) Option to allow location override if solr.hdfs.home isn't set in backup repo
[ https://issues.apache.org/jira/browse/SOLR-14507?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17138898#comment-17138898 ] Lucene/Solr QA commented on SOLR-14507: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 2 new or modified test files. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 23s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} compile {color} | {color:green} 3m 16s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 3m 16s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} Release audit (RAT) {color} | {color:green} 3m 16s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} Check forbidden APIs {color} | {color:green} 3m 16s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} Validate source patterns {color} | {color:green} 3m 16s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 73m 51s{color} | {color:red} core in the patch failed. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 81m 19s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | solr.cloud.api.collections.TestLocalFSCloudBackupRestore | \\ \\ || Subsystem || Report/Notes || | JIRA Issue | SOLR-14507 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/13005888/SOLR-14507-2.patch | | Optional Tests | compile javac unit ratsources checkforbiddenapis validatesourcepatterns | | uname | Linux lucene2-us-west.apache.org 4.4.0-170-generic #199-Ubuntu SMP Thu Nov 14 01:45:04 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | ant | | Personality | /home/jenkins/jenkins-slave/workspace/PreCommit-SOLR-Build/sourcedir/dev-tools/test-patch/lucene-solr-yetus-personality.sh | | git revision | master / b01e249 | | ant | version: Apache Ant(TM) version 1.9.6 compiled on July 20 2018 | | Default Java | LTS | | unit | https://builds.apache.org/job/PreCommit-SOLR-Build/765/artifact/out/patch-unit-solr_core.txt | | Test Results | https://builds.apache.org/job/PreCommit-SOLR-Build/765/testReport/ | | modules | C: solr/core U: solr/core | | Console output | https://builds.apache.org/job/PreCommit-SOLR-Build/765/console | | Powered by | Apache Yetus 0.7.0 http://yetus.apache.org | This message was automatically generated. > Option to allow location override if solr.hdfs.home isn't set in backup repo > > > Key: SOLR-14507 > URL: https://issues.apache.org/jira/browse/SOLR-14507 > Project: Solr > Issue Type: Improvement > Components: Backup/Restore >Reporter: Haley Reeve >Priority: Major > Attachments: SOLR-14507-2.patch, SOLR-14507.patch > > > The Solr backup/restore API has an optional parameter for specifying the > directory to backup to. However, the HdfsBackupRepository class doesn't use > this location when creating the HDFS Filesystem object. Instead it uses the > solr.hdfs.home setting configured in solr.xml. 
This functionally means that > the backup location, which can be passed to the API call dynamically, is > limited by the static home directory defined in solr.xml. This requirement > means that if the solr.hdfs.home path and backup location don't share the > same URI scheme and hostname, the backup will fail, even if the backup could > otherwise have been written to the specified location successfully. > This request is to allow the option of using the location setting to > initialize the filesystem object.
[jira] [Commented] (SOLR-14577) NPE in terms query parser when field is not provided
[ https://issues.apache.org/jira/browse/SOLR-14577?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17138891#comment-17138891 ] ASF subversion and git services commented on SOLR-14577: Commit cfae052973a93756606c762c2b6cd7137499ab93 in lucene-solr's branch refs/heads/master from Tomas Eduardo Fernandez Lobbe [ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=cfae052 ] SOLR-14577: Return BAD REQUEST when field is missing in terms QP (#1588)
[GitHub] [lucene-solr] tflobbe merged pull request #1588: SOLR-14577: Return BAD REQUEST when field is missing in terms QP
tflobbe merged pull request #1588: URL: https://github.com/apache/lucene-solr/pull/1588 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[GitHub] [lucene-solr] janhoy commented on pull request #1572: SOLR-14561 CoreAdminAPI's parameters instanceDir and dataDir are now validated
janhoy commented on pull request #1572: URL: https://github.com/apache/lucene-solr/pull/1572#issuecomment-645659284 I think this is approaching committable state. Appreciate if someone with a good Windows box would run the full test suite on Windows. But I think I'll anyway merge to master and let Jenkins work on it for a few rounds. Then I'll backport to 8.x branch in good time before 8.6 branch cut.
[GitHub] [lucene-solr] s1monw commented on pull request #1552: LUCENE-8962: merge small segments on commit
s1monw commented on pull request #1552: URL: https://github.com/apache/lucene-solr/pull/1552#issuecomment-645658854 @msokolov @mikemccand @msfroh I merged https://github.com/apache/lucene-solr/pull/1585 and updated this PR to use it. I also went ahead and removed the IndexWriterEvents interface, cut over to use long instead of double as a config value for the time to wait and set the default to 0. I will let you folks look at it again. I am happy to help further.
[GitHub] [lucene-solr] dsmiley commented on a change in pull request #1572: SOLR-14561 CoreAdminAPI's parameters instanceDir and dataDir are now validated
dsmiley commented on a change in pull request #1572: URL: https://github.com/apache/lucene-solr/pull/1572#discussion_r441858120 ## File path: solr/core/src/java/org/apache/solr/core/CoreContainer.java ## @@ -1259,6 +1277,28 @@ public SolrCore create(String coreName, Path instancePath, Map p } } + /** + * Checks that the given path is relative to SOLR_HOME, SOLR_DATA_HOME, coreRootDirectory or one of the paths + * specified in solr.xml's allowPaths element. Delegates to {@link SolrPaths#assertPathAllowed(Path, Set)} + * @param pathToAssert path to check + * @throws SolrException if path is outside allowed paths + */ + public void assertPathAllowed(Path pathToAssert) throws SolrException { +SolrPaths.assertPathAllowed(pathToAssert, allowPaths); + } + + /** + * Return the file system paths that should be allowed for various API requests. Review comment: Javadoc is good; thanks. `com.google.common.annotations.VisibleForTesting`
[GitHub] [lucene-solr] janhoy commented on a change in pull request #1572: SOLR-14561 CoreAdminAPI's parameters instanceDir and dataDir are now validated
janhoy commented on a change in pull request #1572: URL: https://github.com/apache/lucene-solr/pull/1572#discussion_r441858025

## File path: solr/core/src/java/org/apache/solr/core/SolrPaths.java

@@ -128,4 +130,35 @@ private static void logOnceInfo(String key, String msg) {
      log.info(msg);
    }
  }
+
+  /**
+   * Checks that the given path is relative to SOLR_HOME, SOLR_DATA_HOME, coreRootDirectory or one of the paths
+   * specified in solr.xml's allowPaths element. The following paths will fail validation:
+   * <ul>
+   *   <li>Relative paths starting with ..</li>
+   *   <li>Windows UNC paths (\\host\share\path)</li>
+   *   <li>Absolute paths which are not below the list of allowed paths</li>
+   * </ul>
+   * @param pathToAssert path to check
+   * @param allowPaths list of paths that should be allowed prefixes
+   * @throws SolrException if path is outside allowed paths
+   */
+  public static void assertPathAllowed(Path pathToAssert, Set<Path> allowPaths) throws SolrException {
+    if (OS.isFamilyWindows() && pathToAssert.toString().startsWith("\\\\")) {
+      throw new SolrException(SolrException.ErrorCode.BAD_REQUEST,
+          "Path " + pathToAssert + " disallowed. UNC paths not supported. Please use drive letter instead.");
+    }
+    // Conversion Path -> String -> Path is to be able to compare against org.apache.lucene.mockfile.FilterPath instances
+    final Path path = Path.of(pathToAssert.toString()).normalize();
+    if (path.startsWith("..")) {
+      throw new SolrException(SolrException.ErrorCode.BAD_REQUEST,
+          "Path " + pathToAssert + " disallowed due to path traversal..");
+    }
+    if (!path.isAbsolute()) return; // All relative paths are accepted
+    if (allowPaths.contains(Paths.get("_ALL_"))) return; // Catch-all path "*"/"_ALL_" will allow all other paths

Review comment: This is the workaround I did after realizing that Windows `Path` class is not happy with `*` as a path. When parsing the value from solr.xml/sysprop, we detect `*` and store it as a Path `_ALL_`. Then in the assert method we check for that special path and skip further testing. The exceptions are UNC paths and `..` paths, which are still rejected (should they be?)
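The validation order discussed above (UNC check, normalize, traversal check, relative paths accepted, `_ALL_` catch-all, then prefix matching) can be sketched in platform-neutral `java.nio.file` code. This is a simplified stand-in, not the actual `SolrPaths` implementation — the names `isAllowed` and `allowOf` are invented for illustration:

```java
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.HashSet;
import java.util.Set;

public class PathAllowList {
    static final Path ALL = Paths.get("_ALL_"); // stand-in for the "*" wildcard

    // Sketch of the checks: reject UNC shares and ".." traversal, accept any
    // relative path, and require absolute paths to sit under an allowed prefix.
    static boolean isAllowed(String raw, Set<Path> allowPaths) {
        if (raw.startsWith("\\\\")) return false;      // Windows UNC share
        Path path = Paths.get(raw).normalize();
        if (path.startsWith("..")) return false;       // path traversal
        if (!path.isAbsolute()) return true;           // relative paths accepted
        if (allowPaths.contains(ALL)) return true;     // "*" / "_ALL_" catch-all
        for (Path allowed : allowPaths) {
            if (path.startsWith(allowed)) return true; // under an allowed prefix
        }
        return false;
    }

    // Convenience for building a one-entry allow-list.
    static Set<Path> allowOf(String dir) {
        Set<Path> s = new HashSet<>();
        s.add(Paths.get(dir));
        return s;
    }
}
```

Note how `normalize()` runs before the `..` check, so `a/../../b` is caught even though it doesn't literally start with dots.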
[GitHub] [lucene-solr] janhoy commented on a change in pull request #1572: SOLR-14561 CoreAdminAPI's parameters instanceDir and dataDir are now validated
janhoy commented on a change in pull request #1572: URL: https://github.com/apache/lucene-solr/pull/1572#discussion_r441856227 ## File path: solr/core/src/java/org/apache/solr/core/CoreContainer.java ## @@ -1259,6 +1277,28 @@ public SolrCore create(String coreName, Path instancePath, Map p } } + /** + * Checks that the given path is relative to SOLR_HOME, SOLR_DATA_HOME, coreRootDirectory or one of the paths + * specified in solr.xml's allowPaths element. Delegates to {@link SolrPaths#assertPathAllowed(Path, Set)} + * @param pathToAssert path to check + * @throws SolrException if path is outside allowed paths + */ + public void assertPathAllowed(Path pathToAssert) throws SolrException { +SolrPaths.assertPathAllowed(pathToAssert, allowPaths); + } + + /** + * Return the file system paths that should be allowed for various API requests. Review comment: @dsmiley see JavaDoc. I was hoping to keep this method private. Don't we have a special annotation that will allow access from test scope even if the method is not public?
[GitHub] [lucene-solr] janhoy commented on pull request #1572: SOLR-14561 CoreAdminAPI's parameters instanceDir and dataDir are now validated
janhoy commented on pull request #1572: URL: https://github.com/apache/lucene-solr/pull/1572#issuecomment-645645529 All tests now passing on macOS and hopefully Windows (running tests now in a slow VirtualBox).
[jira] [Commented] (SOLR-14391) Remove getDocSet's manual doc collection logic; remove ScoreFilter
[ https://issues.apache.org/jira/browse/SOLR-14391?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17138858#comment-17138858 ] David Smiley commented on SOLR-14391: - [~shalin] as 8.6 approaches; is this here a major concern (un-benchmarked change) that you think I should revert in 8.x? Other things seem more important so I haven't prioritized benchmarking this. Personally it seems low-risk to me. My objective in this issue was about tech-debt -- I'm eliminating uses of Filter bit by bit. [~ichattopadhyaya] I'm looking forward to seeing the benchmark suite you are working on. > Remove getDocSet's manual doc collection logic; remove ScoreFilter > -- > > Key: SOLR-14391 > URL: https://issues.apache.org/jira/browse/SOLR-14391 > Project: Solr > Issue Type: Task > Reporter: David Smiley > Assignee: David Smiley > Priority: Minor > Fix For: 8.6 > > Time Spent: 0.5h > Remaining Estimate: 0h > > {{SolrIndexSearcher.getDocSet(List)}} calls getProcessedFilter and > then basically loops over doc IDs, passing them through the filter, and > passes them to the Collector. This logic is redundant with what Lucene > searcher.search(query,collector) will ultimately do in BulkScorer, and so I > propose we remove all that code and delegate to Lucene. > Also, the top of this method looks to see if any query implements the > "ScoreFilter" marker interface (only implemented by CollapsingPostFilter) and > if so delegates to {{getDocSetScore}} method instead. That method has an > implementation close to what I propose getDocSet be changed to; so it can be > removed along with this ScoreFilter interface > searcher.search(query,collector).
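The refactor described in SOLR-14391 boils down to replacing a hand-rolled loop over matching doc IDs with a collector that the search machinery drives for you. A minimal stand-in using plain `java.util.BitSet` (not Lucene's actual `Collector` API — the names here are illustrative only):

```java
import java.util.BitSet;
import java.util.function.IntConsumer;

public class DocSetCollecting {
    // A tiny collector: the search loop calls collect(doc) once per match,
    // and the resulting BitSet plays the role of Solr's DocSet.
    static BitSet collect(int[] matchingDocs) {
        BitSet docSet = new BitSet();
        IntConsumer collector = docSet::set;   // what searcher.search(query, collector) would drive
        for (int doc : matchingDocs) {         // stands in for BulkScorer's iteration
            collector.accept(doc);
        }
        return docSet;
    }
}
```

The point of the issue is that the `for` loop here is exactly what Lucene's BulkScorer already does, so Solr's own copy of it can be deleted and only the collector kept.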
[GitHub] [lucene-solr] janhoy commented on a change in pull request #1572: SOLR-14561 CoreAdminAPI's parameters instanceDir and dataDir are now validated
janhoy commented on a change in pull request #1572: URL: https://github.com/apache/lucene-solr/pull/1572#discussion_r441854963

## File path: solr/core/src/java/org/apache/solr/core/SolrPaths.java

@@ -128,4 +130,33 @@ private static void logOnceInfo(String key, String msg) {
      log.info(msg);
    }
  }
+
+  /**
+   * Checks that the given path is relative to SOLR_HOME, SOLR_DATA_HOME, coreRootDirectory or one of the paths
+   * specified in solr.xml's allowPaths element. The following paths will fail validation:
+   * <ul>
+   *   <li>Relative paths starting with ..</li>
+   *   <li>Windows UNC paths (\\host\share\path)</li>
+   *   <li>Absolute paths which are not below the list of allowed paths</li>
+   * </ul>
+   * @param pathToAssert path to check
+   * @param allowPaths list of paths that should be allowed prefixes
+   * @throws SolrException if path is outside allowed paths
+   */
+  public static void assertPathAllowed(Path pathToAssert, Set<Path> allowPaths) throws SolrException {
+    if (OS.isFamilyWindows() && pathToAssert.toString().startsWith("\\\\")) {

Review comment: I have tested on Windows, validated that UNC is blocked, and modified the tests with separate ones running in Windows and non-Windows environments.
[GitHub] [lucene-solr] s1monw opened a new pull request #1590: LUCENE-9408: Ensure OneMerge#mergeFinished is only called once
s1monw opened a new pull request #1590: URL: https://github.com/apache/lucene-solr/pull/1590 In the case of an exception, it's possible that some OneMerge instances will be closed multiple times. This commit ensures that mergeFinished is really just called once instead of multiple times.
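The "call it exactly once" guarantee described in the PR is the classic idempotent-cleanup pattern. A hedged sketch (this is not the actual `OneMerge` code; the `finishCalls` counter exists only to make the behavior observable): an atomic flag flipped with `compareAndSet` ensures the cleanup body runs once even when error paths retry the close.

```java
import java.util.concurrent.atomic.AtomicBoolean;

public class OneMergeSketch {
    // Flag that can transition false -> true exactly once, even across threads.
    private final AtomicBoolean finished = new AtomicBoolean();
    int finishCalls = 0;

    void mergeFinished() {
        // Only the first caller wins the CAS; later (duplicate) calls are no-ops.
        if (finished.compareAndSet(false, true)) {
            finishCalls++; // real cleanup (closing readers, etc.) would go here
        }
    }
}
```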
[jira] [Commented] (SOLR-14574) Fix or suppress warnings in solr/core/src/test
[ https://issues.apache.org/jira/browse/SOLR-14574?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17138857#comment-17138857 ] ASF subversion and git services commented on SOLR-14574: Commit 102fc9d7e01966220927b90b13c72f3240890173 in lucene-solr's branch refs/heads/branch_8x from Erick Erickson [ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=102fc9d ] SOLR-14574: Fix or suppress warnings in solr/core/src/test (part 1) > Fix or suppress warnings in solr/core/src/test > -- > > Key: SOLR-14574 > URL: https://issues.apache.org/jira/browse/SOLR-14574 > Project: Solr > Issue Type: Sub-task > Reporter: Erick Erickson > Assignee: Erick Erickson > Priority: Major > > Just when I thought I was done I ran testClasses > I'm going to do this a little differently. Rather than do a directory at a > time, I'll just fix a bunch, push, fix a bunch more, push all on this Jira > until I'm done.
[jira] [Commented] (SOLR-14574) Fix or suppress warnings in solr/core/src/test
[ https://issues.apache.org/jira/browse/SOLR-14574?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17138856#comment-17138856 ] ASF subversion and git services commented on SOLR-14574: Commit b01e249c9ec724b6df120a5d731020cfe4de3fce in lucene-solr's branch refs/heads/master from Erick Erickson [ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=b01e249 ] SOLR-14574: Fix or suppress warnings in solr/core/src/test (part 1) > Fix or suppress warnings in solr/core/src/test > -- > > Key: SOLR-14574 > URL: https://issues.apache.org/jira/browse/SOLR-14574 > Project: Solr > Issue Type: Sub-task > Reporter: Erick Erickson > Assignee: Erick Erickson > Priority: Major > > Just when I thought I was done I ran testClasses > I'm going to do this a little differently. Rather than do a directory at a > time, I'll just fix a bunch, push, fix a bunch more, push all on this Jira > until I'm done.
[jira] [Commented] (LUCENE-9408) OneMerge#mergeFinish is called multiple times
[ https://issues.apache.org/jira/browse/LUCENE-9408?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17138847#comment-17138847 ] ASF subversion and git services commented on LUCENE-9408: - Commit 9524cc42338b9cd837d152205385c7093cc93c8a in lucene-solr's branch refs/heads/master from Simon Willnauer [ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=9524cc4 ] LUCENE-9408: roll back only called once enforcement > OneMerge#mergeFinish is called multiple times > - > > Key: LUCENE-9408 > URL: https://issues.apache.org/jira/browse/LUCENE-9408 > Project: Lucene - Core > Issue Type: Bug >Reporter: Simon Willnauer >Priority: Minor > > After enforcing calling this method only once a random test caused > OneMerge#mergeFinished to be called multiple times. > {noformat} > 21:06:59[junit4] Suite: org.apache.lucene.index.TestIndexFileDeleter > 21:06:59[junit4] 2> NOTE: reproduce with: ant test > -Dtestcase=TestIndexFileDeleter -Dtests.method=testExcInDeleteFile > -Dtests.seed=BCFF67862FF6529B -Dtests.slow=true -Dtests.badapples=true > -Dtests.locale=haw-US -Dtests.timezone=Etc/Zulu -Dtests.asserts=true > -Dtests.file.encoding=ISO-8859-1 > 21:06:59[junit4] FAILURE 0.04s J0 | > TestIndexFileDeleter.testExcInDeleteFile <<< > 21:06:59[junit4]> Throwable #1: java.lang.AssertionError > 21:06:59[junit4]> at > __randomizedtesting.SeedInfo.seed([BCFF67862FF6529B:518F81E5ACB0444E]:0) > 21:06:59[junit4]> at > org.apache.lucene.index.TestIndexFileDeleter.testExcInDeleteFile(TestIndexFileDeleter.java:525) > 21:06:59[junit4]> at > java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > 21:06:59[junit4]> at > java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > 21:06:59[junit4]> at > java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > 21:06:59[junit4]> at > java.base/java.lang.reflect.Method.invoke(Method.java:566) > 21:06:59[junit4]> at > 
java.base/java.lang.Thread.run(Thread.java:834) > {noformat} > mergeFinished should only be called once.
[jira] [Commented] (LUCENE-9408) OneMerge#mergeFinish is called multiple times
[ https://issues.apache.org/jira/browse/LUCENE-9408?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17138844#comment-17138844 ] ASF subversion and git services commented on LUCENE-9408: - Commit e3d0c1c0e6565d857500d34b9829751e23d9cf68 in lucene-solr's branch refs/heads/branch_8x from Simon Willnauer [ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=e3d0c1c ] LUCENE-9408: roll back only called once enforcement > OneMerge#mergeFinish is called multiple times > - > > Key: LUCENE-9408 > URL: https://issues.apache.org/jira/browse/LUCENE-9408 > Project: Lucene - Core > Issue Type: Bug >Reporter: Simon Willnauer >Priority: Minor > > After enforcing calling this method only once a random test caused > OneMerge#mergeFinished to be called multiple times. > {noformat} > 21:06:59[junit4] Suite: org.apache.lucene.index.TestIndexFileDeleter > 21:06:59[junit4] 2> NOTE: reproduce with: ant test > -Dtestcase=TestIndexFileDeleter -Dtests.method=testExcInDeleteFile > -Dtests.seed=BCFF67862FF6529B -Dtests.slow=true -Dtests.badapples=true > -Dtests.locale=haw-US -Dtests.timezone=Etc/Zulu -Dtests.asserts=true > -Dtests.file.encoding=ISO-8859-1 > 21:06:59[junit4] FAILURE 0.04s J0 | > TestIndexFileDeleter.testExcInDeleteFile <<< > 21:06:59[junit4]> Throwable #1: java.lang.AssertionError > 21:06:59[junit4]> at > __randomizedtesting.SeedInfo.seed([BCFF67862FF6529B:518F81E5ACB0444E]:0) > 21:06:59[junit4]> at > org.apache.lucene.index.TestIndexFileDeleter.testExcInDeleteFile(TestIndexFileDeleter.java:525) > 21:06:59[junit4]> at > java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > 21:06:59[junit4]> at > java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > 21:06:59[junit4]> at > java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > 21:06:59[junit4]> at > java.base/java.lang.reflect.Method.invoke(Method.java:566) > 21:06:59[junit4]> at > 
java.base/java.lang.Thread.run(Thread.java:834) > {noformat} > mergeFinished should only be called once.
[jira] [Created] (LUCENE-9408) OneMerge#mergeFinish is called multiple times
Simon Willnauer created LUCENE-9408: --- Summary: OneMerge#mergeFinish is called multiple times Key: LUCENE-9408 URL: https://issues.apache.org/jira/browse/LUCENE-9408 Project: Lucene - Core Issue Type: Bug Reporter: Simon Willnauer After enforcing calling this method only once a random test caused OneMerge#mergeFinished to be called multiple times. {noformat} 21:06:59[junit4] Suite: org.apache.lucene.index.TestIndexFileDeleter 21:06:59[junit4] 2> NOTE: reproduce with: ant test -Dtestcase=TestIndexFileDeleter -Dtests.method=testExcInDeleteFile -Dtests.seed=BCFF67862FF6529B -Dtests.slow=true -Dtests.badapples=true -Dtests.locale=haw-US -Dtests.timezone=Etc/Zulu -Dtests.asserts=true -Dtests.file.encoding=ISO-8859-1 21:06:59[junit4] FAILURE 0.04s J0 | TestIndexFileDeleter.testExcInDeleteFile <<< 21:06:59[junit4]> Throwable #1: java.lang.AssertionError 21:06:59[junit4]> at __randomizedtesting.SeedInfo.seed([BCFF67862FF6529B:518F81E5ACB0444E]:0) 21:06:59[junit4]> at org.apache.lucene.index.TestIndexFileDeleter.testExcInDeleteFile(TestIndexFileDeleter.java:525) 21:06:59[junit4]> at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 21:06:59[junit4]> at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 21:06:59[junit4]> at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 21:06:59[junit4]> at java.base/java.lang.reflect.Method.invoke(Method.java:566) 21:06:59[junit4]> at java.base/java.lang.Thread.run(Thread.java:834) {noformat} mergeFinished should only be called once.
[GitHub] [lucene-solr] mrsoong opened a new pull request #1589: SOLR-13195: added check for missing shards param in SearchHandler
mrsoong opened a new pull request #1589: URL: https://github.com/apache/lucene-solr/pull/1589 # Description The distributedProcess methods of the search pipeline never check whether a request has a shards parameter, even though a shards param is required if Solr is not running in SolrCloud mode. # Solution Added code in SearchHandler.getAndPrepShardHandler() to check for a shards param, and throw an exception if it's not defined. # Tests Added SearchHandlerTest.testDistribWithoutZk() which covers legacy distributed queries without a shards param # Checklist Please review the following and check all that apply: - [x] I have reviewed the guidelines for [How to Contribute](https://wiki.apache.org/solr/HowToContribute) and my code conforms to the standards described there to the best of my ability. - [x] I have created a Jira issue and added the issue ID to my pull request title. **(Existing issue)** - [x] I have given Solr maintainers [access](https://help.github.com/en/articles/allowing-changes-to-a-pull-request-branch-created-from-a-fork) to contribute to my PR branch. (optional but recommended) - [x] I have developed this patch against the `master` branch. - [x] I have run `ant precommit` and the appropriate test suite. - [x] I have added tests for my changes. - [ ] I have added documentation for the [Ref Guide](https://github.com/apache/lucene-solr/tree/master/solr/solr-ref-guide) (for Solr changes only).
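The validation the PR adds can be sketched outside of Solr. This is a hypothetical stand-in for the `SearchHandler` check (the method name `requireShards` and the use of `IllegalStateException` are illustrative; Solr would throw a `SolrException`): when the node is not ZooKeeper-aware, a distributed request without an explicit `shards` list is rejected rather than proceeding to an NPE or an empty fan-out.

```java
import java.util.Map;

public class ShardsParamCheck {
    // Outside SolrCloud (zkAware == false), a distributed request must name
    // its shards explicitly; in cloud mode the cluster state supplies them.
    static String[] requireShards(Map<String, String> params, boolean zkAware) {
        String shards = params.get("shards");
        if (!zkAware && (shards == null || shards.isEmpty())) {
            throw new IllegalStateException("distributed request requires the shards parameter");
        }
        return shards == null ? new String[0] : shards.split(",");
    }
}
```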
[jira] [Commented] (SOLR-14567) Fix or suppress remaining warnings in solrj
[ https://issues.apache.org/jira/browse/SOLR-14567?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17138836#comment-17138836 ] Erick Erickson commented on SOLR-14567: --- [~megancarey] At this point, I'm not fixing much of anything, just adding a zillion SuppressWarnings. When I started this there were over 8,000 unsuppressed warnings in the code, far too many to actually fix. Some trivial ones I _am_ fixing, things like adding the diamond operator to code like: {code:java} List<String> blah = new ArrayList(); {code} but that's about as deep as I'm going at this point. My hope is to get the warnings suppressed, then start failing compilations when new code generates warnings so people either have to explicitly suppress them or find a better way. I expect that process to be ongoing forever, but at least there'll be a chance to stop getting worse. And that'll expose things like NEW_HASHMAP_FUN... As for NEW_HASHMAP_FUN itself, I confess I haven't looked at it at all. If you'd like to raise a Jira about taking it out that would be fine. I won't be getting to that kind of work in the foreseeable future, so someone would need to volunteer to do the actual surgery, although I'd be happy to push it to the code base. > Fix or suppress remaining warnings in solrj > --- > > Key: SOLR-14567 > URL: https://issues.apache.org/jira/browse/SOLR-14567 > Project: Solr > Issue Type: Sub-task > Reporter: Erick Erickson > Assignee: Erick Erickson > Priority: Major > Fix For: 8.6 > > > This is another place where the number of warnings per directory is getting > too small to do individually, so I'll do them all in a bunch. > Note: this will exclude autoscaling. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (LUCENE-8962) Can we merge small segments during refresh, for faster searching?
[ https://issues.apache.org/jira/browse/LUCENE-8962?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17138827#comment-17138827 ] ASF subversion and git services commented on LUCENE-8962: - Commit d65dcb43728dd6bb64393226e24576525328cecc in lucene-solr's branch refs/heads/branch_8x from Simon Willnauer [ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=d65dcb4 ] LUCENE-8962: Allow waiting for all merges in a merge spec (#1585) This change adds infrastructure to allow straight forward waiting on one or more merges or an entire merge specification. This is a basis for LUCENE-8962. > Can we merge small segments during refresh, for faster searching? > - > > Key: LUCENE-8962 > URL: https://issues.apache.org/jira/browse/LUCENE-8962 > Project: Lucene - Core > Issue Type: Improvement > Components: core/index >Reporter: Michael McCandless >Priority: Major > Fix For: 8.6 > > Attachments: LUCENE-8962_demo.png, failed-tests.patch > > Time Spent: 16h 50m > Remaining Estimate: 0h > > With near-real-time search we ask {{IndexWriter}} to write all in-memory > segments to disk and open an {{IndexReader}} to search them, and this is > typically a quick operation. > However, when you use many threads for concurrent indexing, {{IndexWriter}} > will accumulate write many small segments during {{refresh}} and this then > adds search-time cost as searching must visit all of these tiny segments. > The merge policy would normally quickly coalesce these small segments if > given a little time ... so, could we somehow improve {{IndexWriter'}}s > refresh to optionally kick off merge policy to merge segments below some > threshold before opening the near-real-time reader? It'd be a bit tricky > because while we are waiting for merges, indexing may continue, and new > segments may be flushed, but those new segments shouldn't be included in the > point-in-time segments returned by refresh ... 
> One could almost do this on top of Lucene today, with a custom merge policy, > and some hackity logic to have the merge policy target small segments just > written by refresh, but it's tricky to then open a near-real-time reader, > excluding newly flushed but including newly merged segments since the refresh > originally finished ... > I'm not yet sure how best to solve this, so I wanted to open an issue for > discussion!
[jira] [Commented] (LUCENE-8962) Can we merge small segments during refresh, for faster searching?
[ https://issues.apache.org/jira/browse/LUCENE-8962?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17138821#comment-17138821 ] ASF subversion and git services commented on LUCENE-8962: - Commit 59efe22ac29c95f9ba85b7214fcf5e30cc979222 in lucene-solr's branch refs/heads/master from Simon Willnauer [ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=59efe22 ] LUCENE-8962: Allow waiting for all merges in a merge spec (#1585) This change adds infrastructure to allow straight forward waiting on one or more merges or an entire merge specification. This is a basis for LUCENE-8962. > Can we merge small segments during refresh, for faster searching? > - > > Key: LUCENE-8962 > URL: https://issues.apache.org/jira/browse/LUCENE-8962 > Project: Lucene - Core > Issue Type: Improvement > Components: core/index >Reporter: Michael McCandless >Priority: Major > Fix For: 8.6 > > Attachments: LUCENE-8962_demo.png, failed-tests.patch > > Time Spent: 16h 50m > Remaining Estimate: 0h > > With near-real-time search we ask {{IndexWriter}} to write all in-memory > segments to disk and open an {{IndexReader}} to search them, and this is > typically a quick operation. > However, when you use many threads for concurrent indexing, {{IndexWriter}} > will accumulate write many small segments during {{refresh}} and this then > adds search-time cost as searching must visit all of these tiny segments. > The merge policy would normally quickly coalesce these small segments if > given a little time ... so, could we somehow improve {{IndexWriter'}}s > refresh to optionally kick off merge policy to merge segments below some > threshold before opening the near-real-time reader? It'd be a bit tricky > because while we are waiting for merges, indexing may continue, and new > segments may be flushed, but those new segments shouldn't be included in the > point-in-time segments returned by refresh ... 
> One could almost do this on top of Lucene today, with a custom merge policy, > and some hackity logic to have the merge policy target small segments just > written by refresh, but it's tricky to then open a near-real-time reader, > excluding newly flushed but including newly merged segments since the refresh > originally finished ... > I'm not yet sure how best to solve this, so I wanted to open an issue for > discussion! -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[GitHub] [lucene-solr] s1monw merged pull request #1585: LUCENE-8962: Allow waiting for all merges in a merge spec
s1monw merged pull request #1585: URL: https://github.com/apache/lucene-solr/pull/1585 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[GitHub] [lucene-solr] jimczi commented on a change in pull request #1541: RegExp - add case insensitive matching option
jimczi commented on a change in pull request #1541: URL: https://github.com/apache/lucene-solr/pull/1541#discussion_r441813998 ## File path: lucene/core/src/java/org/apache/lucene/util/automaton/RegExp.java ## @@ -743,6 +792,30 @@ private Automaton toAutomatonInternal(Map automata, } return a; } + private Automaton toCaseInsensitiveChar(int codepoint, int maxDeterminizedStates) { +Automaton case1 = Automata.makeChar(codepoint); +int altCase = Character.isLowerCase(codepoint) ? Character.toUpperCase(codepoint) : Character.toLowerCase(codepoint); +Automaton result; +if (altCase != codepoint) { + result = Operations.union(case1, Automata.makeChar(altCase)); + result = MinimizationOperations.minimize(result, maxDeterminizedStates); +} else { + result = case1; +} +return result; + } Review comment: good catch, +1 for ASCII only for now, I guess it was too ambitious to handle unicode in the first run
[jira] [Commented] (LUCENE-9328) SortingGroupHead to reuse DocValues
[ https://issues.apache.org/jira/browse/LUCENE-9328?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17138772#comment-17138772 ] Mikhail Khludnev commented on LUCENE-9328: -- Recently, the following wrong statement was brought up in the mailing list {quote}From the DocValues documentation ([https://lucene.apache.org/solr/guide/8_3/docvalues.html]), it mentions that this approach promises to make lookups for faceting, sorting and grouping much faster. {quote} Until it's resolved, I propose to remove grouping from the list of beneficiaries. WDYT? > SortingGroupHead to reuse DocValues > --- > > Key: LUCENE-9328 > URL: https://issues.apache.org/jira/browse/LUCENE-9328 > Project: Lucene - Core > Issue Type: Improvement > Components: modules/grouping >Reporter: Mikhail Khludnev >Assignee: Mikhail Khludnev >Priority: Minor > Attachments: LUCENE-9328.patch, LUCENE-9328.patch, LUCENE-9328.patch, > LUCENE-9328.patch, LUCENE-9328.patch, LUCENE-9328.patch, LUCENE-9328.patch, > LUCENE-9328.patch, LUCENE-9328.patch, LUCENE-9328.patch, LUCENE-9328.patch, > LUCENE-9328.patch > > Time Spent: 1h 50m > Remaining Estimate: 0h > > That's why > https://issues.apache.org/jira/browse/LUCENE-7701?focusedCommentId=17084365=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-17084365
[jira] [Commented] (SOLR-14567) Fix or suppress remaining warnings in solrj
[ https://issues.apache.org/jira/browse/SOLR-14567?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17138751#comment-17138751 ] Megan Carey commented on SOLR-14567: [~erickerickson] - is there any chance you're considering removing some of the components of Utils.java in the SolrJ module? Some of the variables, like `NEW_HASHMAP_FUN`, are used in several places to satisfy the Map.computeIfAbsent mapping function. Because `NEW_HASHMAP_FUN` provides such a generic mapping function, it requires @SuppressWarnings not only in its initialization, but also anywhere it's used. Seems like a strange practice to me, but perhaps this is a commonplace solution? Class in question: [https://github.com/apache/lucene-solr/blob/master/solr/solrj/src/java/org/apache/solr/common/util/Utils.java] > Fix or suppress remaining warnings in solrj > --- > > Key: SOLR-14567 > URL: https://issues.apache.org/jira/browse/SOLR-14567 > Project: Solr > Issue Type: Sub-task >Reporter: Erick Erickson >Assignee: Erick Erickson >Priority: Major > Fix For: 8.6 > > > This is another place where the number of warnings per directory is getting > too small to do individually, so I'll do them all in a bunch. > Note: this will exclude autoscaling.
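The warning cascade Megan describes is easy to reproduce with plain `java.util`. The sketch below is a hypothetical reconstruction, not Solr's actual code (only the `NEW_HASHMAP_FUN` name is borrowed): a shared raw `Function` makes every `computeIfAbsent` call an unchecked invocation, so its result erases to `Object` and each call site needs an unchecked cast plus @SuppressWarnings, while an inline lambda needs neither.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

class ComputeIfAbsentDemo {
  // Shared, untyped mapping function in the style described above
  // (hypothetical reconstruction). Being raw, it taints every call site.
  @SuppressWarnings("rawtypes")
  static final Function NEW_HASHMAP_FUN = o -> new HashMap<>();

  // Using the shared function: the raw argument makes this an unchecked
  // invocation, so a cast and two suppressions are required.
  @SuppressWarnings({"unchecked", "rawtypes"})
  static Map<String, Integer> viaSharedFunction(Map<String, Map<String, Integer>> outer, String key) {
    return (Map<String, Integer>) outer.computeIfAbsent(key, NEW_HASHMAP_FUN);
  }

  // The warning-free alternative: an inline lambda lets the compiler infer
  // the types, so no cast or @SuppressWarnings is needed.
  static Map<String, Integer> viaLambda(Map<String, Map<String, Integer>> outer, String key) {
    return outer.computeIfAbsent(key, k -> new HashMap<>());
  }
}
```

Both variants behave identically at runtime; the difference is purely in what the compiler can check, which is why replacing the shared variables with local lambdas removes the suppressions.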
[jira] [Commented] (LUCENE-8574) ExpressionFunctionValues should cache per-hit value
[ https://issues.apache.org/jira/browse/LUCENE-8574?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17138710#comment-17138710 ] Haoyu Zhai commented on LUCENE-8574: Ah, yes, I will use a boolean instead of NaN. I was just verifying whether the patch works, so I quickly inserted a few lines of code without much thought. But how should we fix this issue correctly? The easy-fix patch doesn't seem to solve the problem. > ExpressionFunctionValues should cache per-hit value > --- > > Key: LUCENE-8574 > URL: https://issues.apache.org/jira/browse/LUCENE-8574 > Project: Lucene - Core > Issue Type: Bug >Affects Versions: 7.5, 8.0 >Reporter: Michael McCandless >Assignee: Robert Muir >Priority: Major > Attachments: LUCENE-8574.patch, unit_test.patch > > Time Spent: 1h > Remaining Estimate: 0h > > The original version of {{ExpressionFunctionValues}} had a simple per-hit > cache, so that nested expressions that reference the same common variable > would compute the value for that variable the first time it was referenced > and then use that cached value for all subsequent invocations, within one > hit. I think it was accidentally removed in LUCENE-7609? > This is quite important if you have non-trivial expressions that reference > the same variable multiple times. > E.g. if I have these expressions: > {noformat} > x = c + d > c = b + 2 > d = b * 2{noformat} > Then evaluating x should only cause b's value to be computed once (for a > given hit), but today it's computed twice. The problem is combinatoric if b > then references another variable multiple times, etc. > I think to fix this we just need to restore the per-hit cache?
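For context, the pitfall being discussed can be modeled without Lucene at all. This is a minimal sketch with made-up names, not the real `ExpressionFunctionValues` API: a cache keyed on "value != NaN" would recompute on every call whenever the expression legitimately evaluates to `NaN`, so a boolean "computed" flag is used instead.

```java
import java.util.function.DoubleSupplier;

class CachingValue {
  private final DoubleSupplier in;   // the underlying (possibly expensive) value source
  private int evaluations = 0;       // how often the source was actually computed
  private boolean computed = false;  // per-hit cache flag; reset on advance
  private double value;

  CachingValue(DoubleSupplier in) { this.in = in; }

  // Called when moving to the next hit: invalidate the per-hit cache.
  void advance() { computed = false; }

  // Compute the value at most once per hit, even if the result is NaN.
  double doubleValue() {
    if (!computed) {
      value = in.getAsDouble();
      evaluations++;
      computed = true;
    }
    return value;
  }

  int evaluations() { return evaluations; }
}
```

With a NaN sentinel in place of the boolean, the `evaluations` counter would keep climbing for a NaN-valued expression; with the flag it stays at one per hit.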
[jira] [Commented] (LUCENE-9322) Discussing a unified vectors format API
[ https://issues.apache.org/jira/browse/LUCENE-9322?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17138707#comment-17138707 ] Varun Thacker commented on LUCENE-9322: --- JDK {code:java} openjdk version "1.8.0_242" OpenJDK Runtime Environment (AdoptOpenJDK)(build 1.8.0_242-b08) OpenJDK 64-Bit Server VM (AdoptOpenJDK)(build 25.242-b08, mixed mode) {code} This is my first time trying out JMH. I took the encoding approach we used in VectorField vs the encoding approach taken by DenseVectorField (in SOLR-14397) and compared them. The VectorField approach to encoding is much faster than using Base64 encoding: {code:java} @Benchmark public void testVectorFieldEncoding() { float[] vector = new float[512]; for (int i=0; i<512; i++) { vector[i] = i + i/1000f; } for (int i=0; i<10_000; i++) { ByteBuffer buffer = ByteBuffer.allocate(Float.BYTES * vector.length); buffer.asFloatBuffer().put(vector); buffer.array(); } } {code} JMH output {code:java} Result: 123.116 ±(99.9%) 2.671 ops/s [Average] Statistics: (min, avg, max) = (95.557, 123.116, 143.097), stdev = 11.310 Confidence interval (99.9%): [120.445, 125.787] # Run complete. Total time: 00:08:07 Benchmark Mode Samples Score Score error Units o.e.MyBenchmark.testVectorFieldEncoding thrpt 200 123.116 2.671 ops/s {code} {code:java} @Benchmark public void testBase64Encoding() { float[] vector = new float[512]; for (int i=0; i<512; i++) { vector[i] = i + i/1000f; } for (int i=0; i<10_000; i++) { ByteBuffer buffer = ByteBuffer.allocate(Float.BYTES * vector.length); for (float value : vector) { buffer.putFloat(value); } buffer.rewind(); java.util.Base64.getEncoder().encode(buffer).array(); } } {code} JMH output {code:java} Result: 35.069 ±(99.9%) 0.745 ops/s [Average] Statistics: (min, avg, max) = (25.792, 35.069, 41.335), stdev = 3.154 Confidence interval (99.9%): [34.324, 35.814] # Run complete. 
Total time: 00:08:06 Benchmark Mode Samples Score Score error Units o.e.MyBenchmark.testBase64Encoding thrpt 200 35.069 0.745 ops/s {code} > Discussing a unified vectors format API > --- > > Key: LUCENE-9322 > URL: https://issues.apache.org/jira/browse/LUCENE-9322 > Project: Lucene - Core > Issue Type: New Feature >Reporter: Julie Tibshirani >Priority: Major > > Two different approximate nearest neighbor approaches are currently being > developed, one based on HNSW ([#LUCENE-9004]) and another based on coarse > quantization ([#LUCENE-9136]). Each prototype proposes to add a new format to > handle vectors. In LUCENE-9136 we discussed the possibility of a unified API > that could support both approaches. The two ANN strategies give different > trade-offs in terms of speed, memory, and complexity, and it’s likely that > we’ll want to support both. Vector search is also an active research area, > and it would be great to be able to prototype and incorporate new approaches > without introducing more formats. > To me it seems like a good time to begin discussing a unified API. The > prototype for coarse quantization > ([https://github.com/apache/lucene-solr/pull/1314]) could be ready to commit > soon (this depends on everyone's feedback of course). The approach is simple > and shows solid search performance, as seen > [here|https://github.com/apache/lucene-solr/pull/1314#issuecomment-608645326]. > I think this API discussion is an important step in moving that > implementation forward. > The goals of the API would be > # Support for storing and retrieving individual float vectors. > # Support for approximate nearest neighbor search -- given a query vector, > return the indexed vectors that are closest to it.
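The faster approach benchmarked above has a straightforward matching decoder. A minimal round-trip sketch (names are illustrative, not Lucene's or Solr's API) showing the bulk `FloatBuffer`-view encode together with its inverse:

```java
import java.nio.ByteBuffer;

class VectorCodecDemo {
  // Encode a float[] into raw bytes via a FloatBuffer view, as in the
  // benchmarked VectorField-style approach: one bulk put, no per-value loop.
  static byte[] encode(float[] vector) {
    ByteBuffer buffer = ByteBuffer.allocate(Float.BYTES * vector.length);
    buffer.asFloatBuffer().put(vector);
    return buffer.array();
  }

  // The matching decode: wrap the bytes and read them back through the same
  // kind of view. Byte order defaults to big-endian on both sides, so the
  // round trip is exact.
  static float[] decode(byte[] bytes) {
    float[] vector = new float[bytes.length / Float.BYTES];
    ByteBuffer.wrap(bytes).asFloatBuffer().get(vector);
    return vector;
  }
}
```

Beyond the CPU cost measured above, Base64 also inflates the payload by roughly a third, so the raw-bytes form wins on size as well.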
[GitHub] [lucene-solr] tflobbe opened a new pull request #1588: SOLR-14577: Return BAD REQUEST when field is missing in terms QP
tflobbe opened a new pull request #1588: URL: https://github.com/apache/lucene-solr/pull/1588 It will currently throw an NPE, resulting in a server error.
[jira] [Updated] (SOLR-14507) Option to allow location override if solr.hdfs.home isn't set in backup repo
[ https://issues.apache.org/jira/browse/SOLR-14507?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Haley Reeve updated SOLR-14507: --- Attachment: SOLR-14507-2.patch Status: Patch Available (was: Patch Available) > Option to allow location override if solr.hdfs.home isn't set in backup repo > > > Key: SOLR-14507 > URL: https://issues.apache.org/jira/browse/SOLR-14507 > Project: Solr > Issue Type: Improvement > Components: Backup/Restore >Reporter: Haley Reeve >Priority: Major > Attachments: SOLR-14507-2.patch, SOLR-14507.patch > > > The Solr backup/restore API has an optional parameter for specifying the > directory to backup to. However, the HdfsBackupRepository class doesn't use > this location when creating the HDFS Filesystem object. Instead it uses the > solr.hdfs.home setting configured in solr.xml. This functionally means that > the backup location, which can be passed to the API call dynamically, is > limited by the static home directory defined in solr.xml. This requirement > means that if the solr.hdfs.home path and backup location don't share the > same URI scheme and hostname, the backup will fail, even if the backup could > otherwise have been written to the specified location successfully. > This request is to allow the option of using the location setting to > initialize the filesystem object.
[jira] [Created] (SOLR-14577) NPE in terms query parser when field is not provided
Tomas Eduardo Fernandez Lobbe created SOLR-14577: Summary: NPE in terms query parser when field is not provided Key: SOLR-14577 URL: https://issues.apache.org/jira/browse/SOLR-14577 Project: Solr Issue Type: Bug Security Level: Public (Default Security Level. Issues are Public) Reporter: Tomas Eduardo Fernandez Lobbe Should be a 400 BAD REQUEST instead
[jira] [Commented] (SOLR-14507) Option to allow location override if solr.hdfs.home isn't set in backup repo
[ https://issues.apache.org/jira/browse/SOLR-14507?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17138636#comment-17138636 ] Haley Reeve commented on SOLR-14507: I've talked with [~krisden] about the approach, and modified my initial proposal. The new proposal adds an optional setting to the HdfsBackupRepository config. This setting is "solr.hdfs.allow.location.override"; if it is set to true and "solr.hdfs.home" is not defined for the repo, the location will be used to initialize the HDFS Filesystem object. Enabling this setting gives a user initiating a backup much more leeway over where the backup data is written, so it should be used carefully and is disabled by default. > Option to allow location override if solr.hdfs.home isn't set in backup repo > > > Key: SOLR-14507 > URL: https://issues.apache.org/jira/browse/SOLR-14507 > Project: Solr > Issue Type: Improvement > Components: Backup/Restore >Reporter: Haley Reeve >Priority: Major > Attachments: SOLR-14507.patch > > > The Solr backup/restore API has an optional parameter for specifying the > directory to backup to. However, the HdfsBackupRepository class doesn't use > this location when creating the HDFS Filesystem object. Instead it uses the > solr.hdfs.home setting configured in solr.xml. This functionally means that > the backup location, which can be passed to the API call dynamically, is > limited by the static home directory defined in solr.xml. This requirement > means that if the solr.hdfs.home path and backup location don't share the > same URI scheme and hostname, the backup will fail, even if the backup could > otherwise have been written to the specified location successfully. > This request is to allow the option of using the location setting to > initialize the filesystem object.
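The selection rule described in the comment can be sketched as plain logic over URIs. Everything below is hypothetical naming for illustration, not Solr's actual HdfsBackupRepository code: the point is only the precedence between the configured home, the override flag, and the per-request location.

```java
import java.net.URI;

class BackupUriDemo {
  // Sketch of the proposed rule: solr.hdfs.home wins when configured; only
  // when it is unset AND the override flag is enabled does the per-request
  // backup location initialize the filesystem.
  static URI filesystemUri(String hdfsHome, boolean allowLocationOverride, String location) {
    if (hdfsHome != null) {
      return URI.create(hdfsHome);
    }
    if (allowLocationOverride) {
      return URI.create(location);
    }
    throw new IllegalStateException(
        "solr.hdfs.home is required unless the location override is enabled");
  }
}
```

Keeping the configured home authoritative whenever it is set matches the comment's safety argument: the override widens where backups may be written, so it only takes effect when explicitly opted into and nothing else claims the decision.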
[jira] [Updated] (SOLR-14507) Option to allow location override if solr.hdfs.home isn't set in backup repo
[ https://issues.apache.org/jira/browse/SOLR-14507?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Haley Reeve updated SOLR-14507: --- Description: The Solr backup/restore API has an optional parameter for specifying the directory to backup to. However, the HdfsBackupRepository class doesn't use this location when creating the HDFS Filesystem object. Instead it uses the solr.hdfs.home setting configured in solr.xml. This functionally means that the backup location, which can be passed to the API call dynamically, is limited by the static home directory defined in solr.xml. This requirement means that if the solr.hdfs.home path and backup location don't share the same URI scheme and hostname, the backup will fail, even if the backup could otherwise have been written to the specified location successfully. This request is to allow the option of using the location setting to initialize the filesystem object. was: The Solr backup/restore API has an optional parameter for specifying the directory to backup to. However, the HdfsBackupRepository class doesn't use this location when creating the HDFS Filesystem object. Instead it uses the solr.hdfs.home setting configured in solr.xml. This functionally means that the backup location, which can be passed to the API call dynamically, is limited by the static home directory defined in solr.xml. This requirement means that if the solr.hdfs.home path and backup location don't share the same URI scheme and hostname, the backup will fail, even if the backup could otherwise have been written to the specified location successfully. If we had the option to pass the solr.hdfs.home path as part of the API call, it would remove this limitation on the backup location. 
> Option to allow location override if solr.hdfs.home isn't set in backup repo > > > Key: SOLR-14507 > URL: https://issues.apache.org/jira/browse/SOLR-14507 > Project: Solr > Issue Type: Improvement > Components: Backup/Restore >Reporter: Haley Reeve >Priority: Major > Attachments: SOLR-14507.patch > > > The Solr backup/restore API has an optional parameter for specifying the > directory to backup to. However, the HdfsBackupRepository class doesn't use > this location when creating the HDFS Filesystem object. Instead it uses the > solr.hdfs.home setting configured in solr.xml. This functionally means that > the backup location, which can be passed to the API call dynamically, is > limited by the static home directory defined in solr.xml. This requirement > means that if the solr.hdfs.home path and backup location don't share the > same URI scheme and hostname, the backup will fail, even if the backup could > otherwise have been written to the specified location successfully. > This request is to allow the option of using the location setting to > initialize the filesystem object.
[jira] [Updated] (SOLR-14507) Option to allow location override if solr.hdfs.home isn't set in backup repo
[ https://issues.apache.org/jira/browse/SOLR-14507?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Haley Reeve updated SOLR-14507: --- Summary: Option to allow location override if solr.hdfs.home isn't set in backup repo (was: Option to pass solr.hdfs.home in API backup/restore calls) > Option to allow location override if solr.hdfs.home isn't set in backup repo > > > Key: SOLR-14507 > URL: https://issues.apache.org/jira/browse/SOLR-14507 > Project: Solr > Issue Type: Improvement > Components: Backup/Restore >Reporter: Haley Reeve >Priority: Major > Attachments: SOLR-14507.patch > > > The Solr backup/restore API has an optional parameter for specifying the > directory to backup to. However, the HdfsBackupRepository class doesn't use > this location when creating the HDFS Filesystem object. Instead it uses the > solr.hdfs.home setting configured in solr.xml. This functionally means that > the backup location, which can be passed to the API call dynamically, is > limited by the static home directory defined in solr.xml. This requirement > means that if the solr.hdfs.home path and backup location don't share the > same URI scheme and hostname, the backup will fail, even if the backup could > otherwise have been written to the specified location successfully. > If we had the option to pass the solr.hdfs.home path as part of the API call, > it would remove this limitation on the backup location.
[GitHub] [lucene-solr] johtani commented on pull request #1577: LUCENE-9390: JapaneseTokenizer discards token that is all punctuation characters only
johtani commented on pull request #1577: URL: https://github.com/apache/lucene-solr/pull/1577#issuecomment-645467080 I added an NBest test case, and I also changed registerNode. However, there is no difference between changing it and not changing it... Am I missing a test case? For the NBest test case with discard punctuation, the tokenizer outputs a complicated token stream, so [I set `graphOffsetsAreCorrect` to `false`](https://github.com/apache/lucene-solr/blob/abf243c5cec331ec8419f0fd7c966dbce45f6b2d/lucene/analysis/kuromoji/src/test/org/apache/lucene/analysis/ja/TestJapaneseTokenizer.java#L967).
[GitHub] [lucene-solr] johtani commented on a change in pull request #1577: LUCENE-9390: JapaneseTokenizer discards token that is all punctuation characters only
johtani commented on a change in pull request #1577: URL: https://github.com/apache/lucene-solr/pull/1577#discussion_r441647603 ## File path: lucene/analysis/kuromoji/src/java/org/apache/lucene/analysis/ja/JapaneseTokenizer.java ## @@ -1917,4 +1917,15 @@ private static boolean isPunctuation(char ch) { return false; } } + + private static boolean isAllCharPunctuation(char[] ch, int offset, int length) { +boolean flag = true; +for (int i = offset; i < offset + length; i++) { + if (!isPunctuation(ch[i])) { +flag = false; +break; + } +} +return flag; Review comment: Fixed this. ## File path: lucene/analysis/kuromoji/src/java/org/apache/lucene/analysis/ja/JapaneseTokenizer.java ## @@ -1917,4 +1917,15 @@ private static boolean isPunctuation(char ch) { return false; } } + + private static boolean isAllCharPunctuation(char[] ch, int offset, int length) { +boolean flag = true; +for (int i = offset; i < offset + length; i++) { + if (!isPunctuation(ch[i])) { +flag = false; Review comment: Fixed this.
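For reference, the flag-and-break loop under review reads more idiomatically with an early return. This is an illustrative stand-in, not the committed code; in particular, `isPunctuation` below is a simplified version built on `Character`'s general categories, whereas the tokenizer's private helper checks specific Unicode blocks:

```java
class PunctuationDemo {
  // Simplified stand-in for JapaneseTokenizer's private isPunctuation:
  // treat space separators and all punctuation general categories as
  // punctuation. The real helper is more specific.
  static boolean isPunctuation(char ch) {
    switch (Character.getType(ch)) {
      case Character.SPACE_SEPARATOR:
      case Character.CONNECTOR_PUNCTUATION:
      case Character.DASH_PUNCTUATION:
      case Character.START_PUNCTUATION:
      case Character.END_PUNCTUATION:
      case Character.OTHER_PUNCTUATION:
      case Character.INITIAL_QUOTE_PUNCTUATION:
      case Character.FINAL_QUOTE_PUNCTUATION:
        return true;
      default:
        return false;
    }
  }

  // The reviewed change, rewritten with an early return instead of a
  // boolean flag plus break: bail out on the first non-punctuation char.
  static boolean isAllCharPunctuation(char[] ch, int offset, int length) {
    for (int i = offset; i < offset + length; i++) {
      if (!isPunctuation(ch[i])) {
        return false;
      }
    }
    return true;
  }
}
```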
[GitHub] [lucene-solr] mikemccand commented on a change in pull request #1543: LUCENE-9378: Disable compression on binary values whose length is less than 32.
mikemccand commented on a change in pull request #1543: URL: https://github.com/apache/lucene-solr/pull/1543#discussion_r441635457 ## File path: lucene/core/src/java/org/apache/lucene/codecs/lucene80/Lucene80DocValuesProducer.java ## @@ -762,6 +764,97 @@ public BytesRef binaryValue() throws IOException { // Decompresses blocks of binary values to retrieve content class BinaryDecoder { +private final LongValues addresses; +private final IndexInput compressedData; +// Cache of last uncompressed block +private long lastBlockId = -1; +private final ByteBuffer deltas; +private int numBytes; +private int uncompressedBlockLength; +private int avgLength; +private final byte[] uncompressedBlock; +private final BytesRef uncompressedBytesRef; +private final int docsPerChunk; +private final int docsPerChunkShift; + +public BinaryDecoder(LongValues addresses, IndexInput compressedData, int biggestUncompressedBlockSize, int docsPerChunkShift) { + super(); + this.addresses = addresses; + this.compressedData = compressedData; + // pre-allocate a byte array large enough for the biggest uncompressed block needed. 
+ this.uncompressedBlock = new byte[biggestUncompressedBlockSize]; + uncompressedBytesRef = new BytesRef(uncompressedBlock); + this.docsPerChunk = 1 << docsPerChunkShift; + this.docsPerChunkShift = docsPerChunkShift; + deltas = ByteBuffer.allocate((docsPerChunk + 1) * Integer.BYTES); + deltas.order(ByteOrder.LITTLE_ENDIAN); +} + +private void decodeBlock(int blockId) throws IOException { + long blockStartOffset = addresses.get(blockId); + compressedData.seek(blockStartOffset); + + final long token = compressedData.readVLong(); + uncompressedBlockLength = (int) (token >>> 4); + avgLength = uncompressedBlockLength >>> docsPerChunkShift; + numBytes = (int) (token & 0x0f); + switch (numBytes) { +case Integer.BYTES: + deltas.putInt(0, (int) 0); + compressedData.readBytes(deltas.array(), Integer.BYTES, docsPerChunk * Integer.BYTES); + break; +case Byte.BYTES: + compressedData.readBytes(deltas.array(), Byte.BYTES, docsPerChunk * Byte.BYTES); + break; +case 0: + break; +default: + throw new CorruptIndexException("Invalid number of bytes: " + numBytes, compressedData); + } + + if (uncompressedBlockLength == 0) { +uncompressedBytesRef.offset = 0; +uncompressedBytesRef.length = 0; + } else { +assert uncompressedBlockLength <= uncompressedBlock.length; +LZ4.decompress(compressedData, uncompressedBlockLength, uncompressedBlock); + } +} + +BytesRef decode(int docNumber) throws IOException { + int blockId = docNumber >> docsPerChunkShift; + int docInBlockId = docNumber % docsPerChunk; + assert docInBlockId < docsPerChunk; + + + // already read and uncompressed? + if (blockId != lastBlockId) { +decodeBlock(blockId); +lastBlockId = blockId; + } + + int startDelta = 0, endDelta = 0; + switch (numBytes) { +case Integer.BYTES: + startDelta = deltas.getInt(docInBlockId * Integer.BYTES); + endDelta = deltas.getInt((docInBlockId + 1) * Integer.BYTES); Review comment: Aha! Sneaky :) This is an automated message from the Apache Git Service. 
[GitHub] [lucene-solr] mikemccand commented on a change in pull request #1543: LUCENE-9378: Disable compression on binary values whose length is less than 32.
mikemccand commented on a change in pull request #1543: URL: https://github.com/apache/lucene-solr/pull/1543#discussion_r441634757 ## File path: lucene/core/src/java/org/apache/lucene/codecs/lucene80/Lucene80DocValuesConsumer.java ## @@ -404,32 +406,51 @@ private void flushData() throws IOException { // Write offset to this block to temporary offsets file totalChunks++; long thisBlockStartPointer = data.getFilePointer(); - -// Optimisation - check if all lengths are same Review comment: Ahhh OK thanks for the clarification.
[jira] [Commented] (SOLR-14572) Ref Guide doesn't cover all SearchComponents on the Search Components page
[ https://issues.apache.org/jira/browse/SOLR-14572?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17138535#comment-17138535 ] ASF subversion and git services commented on SOLR-14572: Commit 207efbceeb2fbf977f62516d7dcd9cae4c9d4e67 in lucene-solr's branch refs/heads/master from Eric Pugh [ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=207efbc ] SOLR-14572 document missing SearchComponents (#1581) * Add an example explaining how to use * fix up JavaDoc formatting * add missing SearchComponents that ship with Solr, and point to external site with components. * fix path * simplify page layout by consolidating to lists * add missing components that are documented elsewhere in refguide * try to get pathing to pass precommit * remove mention of solr.cool, in favour of a separate PR that handles it differently > Ref Guide doesn't cover all SearchComponents on the Search Components page > -- > > Key: SOLR-14572 > URL: https://issues.apache.org/jira/browse/SOLR-14572 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) > Components: documentation >Affects Versions: 8.5.2 >Reporter: David Eric Pugh >Priority: Minor > Time Spent: 1h 40m > Remaining Estimate: 0h > > I went to > [https://lucene.apache.org/solr/guide/8_5/requesthandlers-and-searchcomponents-in-solrconfig.html] > to find details about the previously unknown to me {{ResponseLogComponent}}, > and it wasn't listed. Poking around, I saw that two more > {{SearchComponents}}, the {{PhrasesIdentificationComponent}} and > {{RealTimeGetComponent}} aren't mentioned. > I'd like to add a new section to the page to add an inventory of Solr > components that ship.
[GitHub] [lucene-solr] epugh merged pull request #1581: SOLR-14572 document missing SearchComponents
epugh merged pull request #1581: URL: https://github.com/apache/lucene-solr/pull/1581 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[jira] [Commented] (SOLR-5894) Speed up high-cardinality facets with sparse counters
[ https://issues.apache.org/jira/browse/SOLR-5894?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17138482#comment-17138482 ] Toke Eskildsen commented on SOLR-5894: -- Caching of term counts has been shown by SOLR-2412 to help performance significantly for some distributed setups and is implemented by SOLR-13807. > Speed up high-cardinality facets with sparse counters > - > > Key: SOLR-5894 > URL: https://issues.apache.org/jira/browse/SOLR-5894 > Project: Solr > Issue Type: Improvement > Components: SearchComponents - other >Affects Versions: 4.7.1 >Reporter: Toke Eskildsen >Assignee: Toke Eskildsen >Priority: Minor > Labels: faceted-search, faceting, memory, performance > Attachments: SOLR-5894.patch, SOLR-5894.patch, SOLR-5894.patch, > SOLR-5894.patch, SOLR-5894.patch, SOLR-5894.patch, SOLR-5894.patch, > SOLR-5894.patch, SOLR-5894.patch, SOLR-5894_test.zip, SOLR-5894_test.zip, > SOLR-5894_test.zip, SOLR-5894_test.zip, SOLR-5894_test.zip, > author_7M_tags_1852_logged_queries_warmed.png, > sparse_200docs_fc_cutoff_20140403-145412.png, > sparse_500docs_20140331-151918_multi.png, > sparse_500docs_20140331-151918_single.png, > sparse_5051docs_20140328-152807.png > > > Multiple performance enhancements to Solr String faceting. > * Sparse counters, switching the constant time overhead of extracting top-X > terms with time overhead linear to result set size > * Counter re-use for reduced garbage collection and lower per-call overhead > * Optional counter packing, trading speed for space > * Improved distribution count logic, greatly improving the performance of > distributed faceting > * In-segment threaded faceting > * Regexp based white- and black-listing of facet terms > * Heuristic faceting for large result sets > Currently implemented for Solr 4.10. 
Source, detailed description and > directly usable WAR at http://tokee.github.io/lucene-solr/ > This project has grown beyond a simple patch and will require a fair amount > of co-operation with a committer to get into Solr. Splitting into smaller > issues is a possibility.
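The sparse-counter idea above can be illustrated with a minimal sketch (not the actual SOLR-5894 patch, which also covers counter packing, re-use, and distributed logic): keep a dense count array, but additionally record which ordinals were touched, so that collecting results costs time proportional to the result-set size rather than the field's cardinality. All names here are hypothetical.

```java
// Minimal sketch of sparse counting: counts[] is dense, touched[] remembers
// the ordinals that were actually incremented, so top-X extraction can visit
// only the touched ordinals instead of scanning all of counts[].
class SparseCounter {
    private final int[] counts;
    private final int[] touched;   // ordinals with a non-zero count
    private int numTouched = 0;

    SparseCounter(int cardinality) {
        counts = new int[cardinality];
        touched = new int[cardinality];
    }

    void increment(int ord) {
        if (counts[ord]++ == 0) {
            touched[numTouched++] = ord; // first touch: remember the ordinal
        }
    }

    // Work proportional to the hits, not the cardinality.
    int nonZeroOrdinals() {
        return numTouched;
    }

    int count(int ord) {
        return counts[ord];
    }
}
```

With a cardinality of a million and three increments, only two ordinals need visiting, which is the asymmetry the issue exploits.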
[GitHub] [lucene-solr] s1monw closed pull request #1576: Alternative approach to LUCENE-8962
s1monw closed pull request #1576: URL: https://github.com/apache/lucene-solr/pull/1576
[GitHub] [lucene-solr] s1monw commented on pull request #1576: Alternative approach to LUCENE-8962
s1monw commented on pull request #1576: URL: https://github.com/apache/lucene-solr/pull/1576#issuecomment-645405813 superseded by #1585
[GitHub] [lucene-solr] s1monw commented on pull request #1585: LUCENE-8962: Allow waiting for all merges in a merge spec
s1monw commented on pull request #1585: URL: https://github.com/apache/lucene-solr/pull/1585#issuecomment-645403443 @mikemccand @msokolov @msfroh @dsmiley I pushed a new commit to address your comments. Thank you!
[GitHub] [lucene-solr] s1monw commented on a change in pull request #1585: LUCENE-8962: Allow waiting for all merges in a merge spec
s1monw commented on a change in pull request #1585: URL: https://github.com/apache/lucene-solr/pull/1585#discussion_r441581487 ## File path: lucene/core/src/java/org/apache/lucene/index/IndexWriter.java ## @@ -4289,7 +4287,7 @@ private synchronized void mergeFinish(MergePolicy.OneMerge merge) { @SuppressWarnings("try") private synchronized void closeMergeReaders(MergePolicy.OneMerge merge, boolean suppressExceptions) throws IOException { final boolean drop = suppressExceptions == false; -try (Closeable finalizer = merge::mergeFinished) { +try (Closeable finalizer = () -> merge.mergeFinished(suppressExceptions==false)) { Review comment: done ## File path: lucene/core/src/java/org/apache/lucene/index/MergePolicy.java ## @@ -399,6 +427,23 @@ public String segString(Directory dir) { } return b.toString(); } + +/** + * Waits if necessary for at most the given time for all merges. + */ +boolean await(long timeout, TimeUnit unit) { + try { +CompletableFuture future = CompletableFuture.allOf(merges.stream() +.map(m -> m.completable).collect(Collectors.toList()).toArray(new CompletableFuture[0])); +future.get(timeout, unit); +return true; + } catch (InterruptedException e) { +Thread.interrupted(); Review comment: done ## File path: lucene/core/src/java/org/apache/lucene/index/MergePolicy.java ## @@ -399,6 +427,23 @@ public String segString(Directory dir) { } return b.toString(); } + +/** + * Waits if necessary for at most the given time for all merges. + */ +boolean await(long timeout, TimeUnit unit) { + try { +CompletableFuture future = CompletableFuture.allOf(merges.stream() Review comment: done thanks
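The `try (Closeable finalizer = ...)` change in the first hunk relies on a standard Java idiom: a lambda bound to a `Closeable` in a try-with-resources header runs after the block body, even when the body throws. A self-contained sketch of that idiom (class and method names are hypothetical, not Lucene's):

```java
import java.io.Closeable;
import java.io.IOException;
import java.util.concurrent.atomic.AtomicBoolean;

class FinalizerSketch {
    // Returns true if the "finalizer" ran; it always should, whether or not
    // the body of the try block throws.
    public static boolean runWithFinalizer(boolean throwInBody) {
        AtomicBoolean finished = new AtomicBoolean(false);
        // The lambda is the resource; its close() runs last, mirroring the
        // () -> merge.mergeFinished(...) pattern in the diff above.
        try (Closeable finalizer = () -> finished.set(true)) {
            if (throwInBody) {
                throw new RuntimeException("simulated merge failure");
            }
        } catch (RuntimeException | IOException e) {
            // swallowed here so the caller can still observe the finalizer's effect
        }
        return finished.get();
    }
}
```

The `@SuppressWarnings("try")` in the diff exists because the compiler warns when a try-with-resources resource is never referenced in the body, which is exactly the point of this pattern.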
[GitHub] [lucene-solr] s1monw commented on a change in pull request #1585: LUCENE-8962: Allow waiting for all merges in a merge spec
s1monw commented on a change in pull request #1585: URL: https://github.com/apache/lucene-solr/pull/1585#discussion_r441575828 ## File path: lucene/core/src/java/org/apache/lucene/index/MergePolicy.java ## @@ -196,6 +200,7 @@ final void setMergeThread(Thread owner) { * * @lucene.experimental */ public static class OneMerge { Review comment: yeah I want to do that anyway
[GitHub] [lucene-solr] mikemccand commented on a change in pull request #1585: LUCENE-8962: Allow waiting for all merges in a merge spec
mikemccand commented on a change in pull request #1585: URL: https://github.com/apache/lucene-solr/pull/1585#discussion_r441559509 ## File path: lucene/core/src/java/org/apache/lucene/index/MergePolicy.java ## @@ -196,6 +200,7 @@ final void setMergeThread(Thread owner) { * * @lucene.experimental */ public static class OneMerge { +private final CompletableFuture completable = new CompletableFuture<>(); Review comment: Could we rename the variable to `completed` or maybe `mergeCompleted`? ## File path: lucene/core/src/java/org/apache/lucene/index/MergePolicy.java ## @@ -196,6 +200,7 @@ final void setMergeThread(Thread owner) { * * @lucene.experimental */ public static class OneMerge { Review comment: Maybe we should (later, in dedicated PR) pull this out into its own java source? ## File path: lucene/core/src/java/org/apache/lucene/index/MergePolicy.java ## @@ -362,6 +366,30 @@ public void checkAborted() throws MergeAbortedException { public OneMergeProgress getMergeProgress() { return mergeProgress; } + +/** + * Waits for this merge to be completed + * @return true if the merge finished within the specified timeout + */ +boolean await(long timeout, TimeUnit timeUnit) { + try { +completable.get(timeout, timeUnit); +return true; + } catch (InterruptedException e) { +Thread.interrupted(); +return false; + } catch (ExecutionException | TimeoutException e) { +return false; + } +} + +boolean isDone() { Review comment: Maybe add javadoc about the lack of thread safety here? I.e. this could return `false` and shortly thereafter it becomes `true`. 
## File path: lucene/core/src/java/org/apache/lucene/index/MergePolicy.java ## @@ -362,6 +366,30 @@ public void checkAborted() throws MergeAbortedException { public OneMergeProgress getMergeProgress() { return mergeProgress; } + +/** + * Waits for this merge to be completed + * @return true if the merge finished within the specified timeout + */ +boolean await(long timeout, TimeUnit timeUnit) { + try { +completable.get(timeout, timeUnit); +return true; + } catch (InterruptedException e) { +Thread.interrupted(); +return false; + } catch (ExecutionException | TimeoutException e) { +return false; + } +} + +boolean isDone() { + return completable.isDone(); +} + +boolean isCommitted() { Review comment: Maybe rename to `isCompleted`? `committed` is overloaded term -- could try to mean its files got `fsync'`d :) ## File path: lucene/core/src/java/org/apache/lucene/index/MergePolicy.java ## @@ -362,6 +366,30 @@ public void checkAborted() throws MergeAbortedException { public OneMergeProgress getMergeProgress() { return mergeProgress; } + +/** + * Waits for this merge to be completed + * @return true if the merge finished within the specified timeout + */ +boolean await(long timeout, TimeUnit timeUnit) { + try { +completable.get(timeout, timeUnit); +return true; + } catch (InterruptedException e) { +Thread.interrupted(); +return false; + } catch (ExecutionException | TimeoutException e) { +return false; + } +} + +boolean isDone() { + return completable.isDone(); +} + +boolean isCommitted() { + return completable.getNow(Boolean.FALSE); Review comment: Javadoc here too?
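The `await` methods under review boil down to bounding a `CompletableFuture.get` with a timeout and mapping every failure mode to a boolean. A standalone sketch of the same pattern, assuming one future per merge; note this variant restores the interrupt flag with `Thread.currentThread().interrupt()`, a common alternative to the `Thread.interrupted()` call in the quoted diff:

```java
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

class AwaitSketch {
    // Waits for all futures for at most the given time; true only if every
    // one completed normally within the timeout.
    public static boolean await(List<CompletableFuture<Boolean>> merges, long timeout, TimeUnit unit) {
        try {
            // allOf combines one future per merge into a single future.
            CompletableFuture.allOf(merges.toArray(new CompletableFuture<?>[0])).get(timeout, unit);
            return true;
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt(); // restore the interrupt flag for callers
            return false;
        } catch (ExecutionException | TimeoutException e) {
            return false;
        }
    }
}
```

An already-completed future returns true immediately; a never-completed one makes the call block for the full timeout and return false.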
[GitHub] [lucene-solr] epugh commented on a change in pull request #1581: SOLR-14572 document missing SearchComponents
epugh commented on a change in pull request #1581: URL: https://github.com/apache/lucene-solr/pull/1581#discussion_r441528031 ## File path: solr/solr-ref-guide/src/requesthandlers-and-searchcomponents-in-solrconfig.adoc ## @@ -169,3 +169,14 @@ Many of the other useful components are described in sections of this Guide for * `TermVectorComponent`, described in the section <>. * `QueryElevationComponent`, described in the section <>. * `TermsComponent`, described in the section <>. +* `RealTimeGetComponent`, described in the section <>. +* `ClusteringComponent`, described in the section <>. +* `SuggestComponent`, described in the section <>. +* `AnalyticsComponent`, described in the section <>. + +Other components that ship with Solr include: + +* `ResponseLogComponent`, used to record which documents are returned to the user via the Solr log, described in the {solr-javadocs}solr-core/org/apache/solr/handler/component/ResponseLogComponent.html[ResponseLogComponent] javadocs. +* `PhrasesIdentificationComponent`, used to identify & score "phrases" found in the input string, based on shingles in indexed fields, described in the {solr-javadocs}solr-core/org/apache/solr/handler/component/PhrasesIdentificationComponent.html[PhrasesIdentificationComponent] javadocs. + +Lastly, you may be interested in some other components created by the community and listed on the https://solr.cool/#searchcomponents[Solr Cool] website. Review comment: Makes sense.
[jira] [Commented] (SOLR-14532) Add iml file to gitignore
[ https://issues.apache.org/jira/browse/SOLR-14532?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17138390#comment-17138390 ] Jason Gerlowski commented on SOLR-14532: Sure, will take a look this afternoon. > Add iml file to gitignore > - > > Key: SOLR-14532 > URL: https://issues.apache.org/jira/browse/SOLR-14532 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Andras Salamon >Priority: Trivial > Attachments: SOLR-14532.patch, SOLR-14532.patch > > > If I execute {{gradlew idea}} in my {{lucene-solr-upstream}} directory, it > will create three files in the root directory: > {noformat} > lucene-solr-upstream.iml > lucene-solr-upstream.ipr > lucene-solr-upstream.iws > {noformat} > Git will ignore the {{ipr}} and the {{iws}} file, but it lists the iml file > as a new file. We should also ignore that one.
[GitHub] [lucene-solr] danmfox commented on a change in pull request #1514: SOLR-13749: Change cross-collection join query syntax to {!join method=crossCollection ...}
danmfox commented on a change in pull request #1514: URL: https://github.com/apache/lucene-solr/pull/1514#discussion_r441494859 ## File path: solr/core/src/java/org/apache/solr/search/JoinQParserPlugin.java ## @@ -160,23 +176,40 @@ JoinParams parseJoin(QParser qparser) throws SyntaxError { } } + @Override + public void init(NamedList args) { +routerField = (String) args.get("routerField"); +solrUrlWhitelist = new HashSet<>(); +if (args.get("solrUrlWhitelist") != null) { + //noinspection unchecked + for (String s : (List) args.get("solrUrlWhitelist")) { +if (!StringUtils.isEmpty(s)) Review comment: Sounds good to me - I made this change while updating the property name.
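The `init` hunk above collects a URL allow-list from plugin configuration, skipping empty entries. A hedged sketch of that logic, with Solr's `NamedList` stood in for by a plain `Map` so the example is self-contained; the `solrUrlWhitelist` key is taken from the shown diff, and per the comment the property name was later changed, so treat it as illustrative:

```java
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

class AllowListSketch {
    // Collects the configured allow-list, skipping null/empty entries and
    // tolerating a missing key entirely.
    public static Set<String> parse(Map<String, Object> args) {
        Set<String> allowed = new HashSet<>();
        Object raw = args.get("solrUrlWhitelist"); // key as it appears in the diff above
        if (raw instanceof List) {
            for (Object o : (List<?>) raw) {
                if (o instanceof String && !((String) o).isEmpty()) {
                    allowed.add((String) o);
                }
            }
        }
        return allowed;
    }
}
```

Guarding both the key lookup and each entry keeps a partially filled or absent config from throwing during plugin initialization.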
[jira] [Commented] (SOLR-14532) Add iml file to gitignore
[ https://issues.apache.org/jira/browse/SOLR-14532?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17138379#comment-17138379 ] Andras Salamon commented on SOLR-14532: --- Uploaded a new patch which also contains the help text change suggested by [~erickerickson]. Can you please check it [~gerlowskija]?
[jira] [Updated] (SOLR-14532) Add iml file to gitignore
[ https://issues.apache.org/jira/browse/SOLR-14532?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andras Salamon updated SOLR-14532: -- Attachment: SOLR-14532.patch
[GitHub] [lucene-solr] someoneknown closed pull request #1587: Branch 8 5
someoneknown closed pull request #1587: URL: https://github.com/apache/lucene-solr/pull/1587
[GitHub] [lucene-solr] someoneknown opened a new pull request #1587: Branch 8 5
someoneknown opened a new pull request #1587: URL: https://github.com/apache/lucene-solr/pull/1587 # Description Please provide a short description of the changes you're making with this pull request. # Solution Please provide a short description of the approach taken to implement your solution. # Tests Please describe the tests you've developed or run to confirm this patch implements the feature or solves the problem. # Checklist Please review the following and check all that apply: - [ ] I have reviewed the guidelines for [How to Contribute](https://wiki.apache.org/solr/HowToContribute) and my code conforms to the standards described there to the best of my ability. - [ ] I have created a Jira issue and added the issue ID to my pull request title. - [ ] I have given Solr maintainers [access](https://help.github.com/en/articles/allowing-changes-to-a-pull-request-branch-created-from-a-fork) to contribute to my PR branch. (optional but recommended) - [ ] I have developed this patch against the `master` branch. - [ ] I have run `ant precommit` and the appropriate test suite. - [ ] I have added tests for my changes. - [ ] I have added documentation for the [Ref Guide](https://github.com/apache/lucene-solr/tree/master/solr/solr-ref-guide) (for Solr changes only).
[jira] [Commented] (LUCENE-8574) ExpressionFunctionValues should cache per-hit value
[ https://issues.apache.org/jira/browse/LUCENE-8574?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17138277#comment-17138277 ] Robert Muir commented on LUCENE-8574: - please, let's keep the boolean and not bring NaN into this. > ExpressionFunctionValues should cache per-hit value > --- > > Key: LUCENE-8574 > URL: https://issues.apache.org/jira/browse/LUCENE-8574 > Project: Lucene - Core > Issue Type: Bug >Affects Versions: 7.5, 8.0 >Reporter: Michael McCandless >Assignee: Robert Muir >Priority: Major > Attachments: LUCENE-8574.patch, unit_test.patch > > Time Spent: 1h > Remaining Estimate: 0h > > The original version of {{ExpressionFunctionValues}} had a simple per-hit > cache, so that nested expressions that reference the same common variable > would compute the value for that variable the first time it was referenced > and then use that cached value for all subsequent invocations, within one > hit. I think it was accidentally removed in LUCENE-7609? > This is quite important if you have non-trivial expressions that reference > the same variable multiple times. > E.g. if I have these expressions: > {noformat} > x = c + d > c = b + 2 > d = b * 2{noformat} > Then evaluating x should only cause b's value to be computed once (for a > given hit), but today it's computed twice. The problem is combinatoric if b > then references another variable multiple times, etc. > I think to fix this we just need to restore the per-hit cache?
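The per-hit cache described in LUCENE-8574 can be sketched independently of Lucene's actual `DoubleValues` API: remember the last doc id and reuse the computed value until the doc changes. Names below are hypothetical, and the counter exists only to make the caching observable:

```java
import java.util.function.DoubleSupplier;

class PerHitCache {
    private final DoubleSupplier source; // the expensive underlying variable, e.g. b
    private int cachedDoc = -1;
    private double cachedValue;
    int computations = 0; // how many times the underlying value was computed

    PerHitCache(DoubleSupplier source) {
        this.source = source;
    }

    // Computes the underlying value at most once per doc, no matter how many
    // nested expressions reference it for that doc.
    double valueFor(int doc) {
        if (doc != cachedDoc) {
            cachedValue = source.getAsDouble();
            cachedDoc = doc;
            computations++;
        }
        return cachedValue;
    }
}
```

With the issue's example (x = c + d, c = b + 2, d = b * 2), evaluating x for one hit touches b twice but computes it once; a new doc id triggers exactly one fresh computation.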
[GitHub] [lucene-solr] janhoy commented on pull request #1572: SOLR-14561 CoreAdminAPI's parameters instanceDir and dataDir are now validated
janhoy commented on pull request #1572: URL: https://github.com/apache/lucene-solr/pull/1572#issuecomment-645245313 The tests still don't pass on Windows, and I have found the reason. Will push a few more changes on Friday.
[jira] [Commented] (LUCENE-9359) SegmentInfos.readCommit should verify checksums in case of error
[ https://issues.apache.org/jira/browse/LUCENE-9359?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17138186#comment-17138186 ] ASF subversion and git services commented on LUCENE-9359: - Commit 745e13108cdda6177bf2c3ae02e4993a43d455d8 in lucene-solr's branch refs/heads/branch_8x from Adrien Grand [ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=745e131 ] LUCENE-9359: Avoid test failures when the extra file is a dir. > SegmentInfos.readCommit should verify checksums in case of error > > > Key: LUCENE-9359 > URL: https://issues.apache.org/jira/browse/LUCENE-9359 > Project: Lucene - Core > Issue Type: Improvement >Reporter: Adrien Grand >Priority: Minor > Fix For: 8.6 > > Time Spent: 20m > Remaining Estimate: 0h > > SegmentInfos.readCommit only calls checkFooter if reading the commit > succeeded. We should also call it in case of errors in order to be able to > distinguish hardware errors from Lucene bugs.
[jira] [Commented] (LUCENE-9359) SegmentInfos.readCommit should verify checksums in case of error
[ https://issues.apache.org/jira/browse/LUCENE-9359?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17138185#comment-17138185 ] ASF subversion and git services commented on LUCENE-9359: - Commit ea0ad3ec517d6d0944647ac497dbc014e6cac448 in lucene-solr's branch refs/heads/master from Adrien Grand [ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=ea0ad3e ] LUCENE-9359: Avoid test failures when the extra file is a dir.