[jira] [Resolved] (SOLR-14471) base replica selection strategy not applied to "last place" shards.preference matches
[ https://issues.apache.org/jira/browse/SOLR-14471?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Tomas Eduardo Fernandez Lobbe resolved SOLR-14471.
--------------------------------------------------
    Fix Version/s: 8.6
                   master (9.0)
       Resolution: Fixed

Merged. Thanks Michael!

> base replica selection strategy not applied to "last place" shards.preference
> matches
> -----------------------------------------------------------------------------
>
>                 Key: SOLR-14471
>                 URL: https://issues.apache.org/jira/browse/SOLR-14471
>             Project: Solr
>          Issue Type: Bug
>      Security Level: Public (Default Security Level. Issues are Public)
>          Components: SolrCloud
>    Affects Versions: master (9.0), 8.3
>            Reporter: Michael Gibney
>            Assignee: Tomas Eduardo Fernandez Lobbe
>            Priority: Minor
>             Fix For: master (9.0), 8.6
>
>          Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> When {{shards.preferences}} is specified, all inherently equivalent groups of
> replicas should fall back to being sorted by the {{replica.base}} strategy
> (either random or some variant of "stable"). This currently works for every
> group of "equivalent" replicas, with the exception of "last place" matches.
> This is easy to overlook, because usually it's the "first place" matches that
> will be selected for the purpose of actually executing distributed requests;
> but it's still a bug, and is especially problematic when "last place matches"
> == "first place matches" – e.g. when {{shards.preference}} specified matches
> _all_ available replicas.

--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org
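The fallback behavior the issue describes can be sketched in plain Java. This is a hypothetical illustration, not Solr's actual ReplicaListTransformer code: replicas are grouped by how well they match the shards.preference rules, and the base strategy (a seeded shuffle here, standing in for replica.base random ordering) must be applied within every group of equivalent replicas, including the last-place group that the bug left in its original order.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Map;
import java.util.Random;
import java.util.TreeMap;

// Hypothetical sketch of the intended ordering, not Solr's implementation.
public class ReplicaOrderSketch {
  public static List<String> order(List<String> replicas,
                                   Map<String, Integer> preferenceRank,
                                   long seed) {
    // Group replicas by preference rank; lower rank = better shards.preference match.
    TreeMap<Integer, List<String>> groups = new TreeMap<>();
    for (String r : replicas) {
      groups.computeIfAbsent(preferenceRank.get(r), k -> new ArrayList<>()).add(r);
    }
    // Apply the base ordering inside EVERY equivalence group -- including
    // the last-place one, which is where the reported bug skipped it.
    Random base = new Random(seed);
    List<String> out = new ArrayList<>();
    for (List<String> group : groups.values()) {
      Collections.shuffle(group, base);
      out.addAll(group);
    }
    return out;
  }
}
```

Note that when the preference matches all available replicas, there is exactly one group, which is simultaneously first place and last place; without the fix, no base ordering is applied at all in that case.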
[jira] [Commented] (SOLR-14471) base replica selection strategy not applied to "last place" shards.preference matches
[ https://issues.apache.org/jira/browse/SOLR-14471?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17107937#comment-17107937 ]

ASF subversion and git services commented on SOLR-14471:

Commit 43631e126e93308d034a2babd51765230f96f5e3 in lucene-solr's branch refs/heads/branch_8x from Michael Gibney
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=43631e1 ]

SOLR-14471: Fix last-place replica after shards.preference rules (#1507)

Properly apply base replica ordering to last-place shards.preference matches
[jira] [Commented] (SOLR-14471) base replica selection strategy not applied to "last place" shards.preference matches
[ https://issues.apache.org/jira/browse/SOLR-14471?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17107938#comment-17107938 ]

ASF subversion and git services commented on SOLR-14471:

Commit abaf16ea1bb2301e5b555f0bf1ea9d3bd7012761 in lucene-solr's branch refs/heads/branch_8x from Tomas Eduardo Fernandez Lobbe
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=abaf16e ]

SOLR-14471: Add CHANGES entry
[jira] [Commented] (SOLR-14471) base replica selection strategy not applied to "last place" shards.preference matches
[ https://issues.apache.org/jira/browse/SOLR-14471?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17107931#comment-17107931 ]

ASF subversion and git services commented on SOLR-14471:

Commit 4e564079fb2a160624bb30bec5caf9992d6717bb in lucene-solr's branch refs/heads/master from Tomas Eduardo Fernandez Lobbe
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=4e56407 ]

SOLR-14471: Add CHANGES entry
[jira] [Commented] (SOLR-14471) base replica selection strategy not applied to "last place" shards.preference matches
[ https://issues.apache.org/jira/browse/SOLR-14471?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17107929#comment-17107929 ]

ASF subversion and git services commented on SOLR-14471:

Commit 54dca800a9432a72b93723d40fa4abc9a8e11f14 in lucene-solr's branch refs/heads/master from Michael Gibney
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=54dca80 ]

SOLR-14471: Fix last-place replica after shards.preference rules (#1507)

Properly apply base replica ordering to last-place shards.preference matches
[GitHub] [lucene-solr] tflobbe merged pull request #1507: SOLR-14471: properly apply base replica ordering to last-place shards…
tflobbe merged pull request #1507:
URL: https://github.com/apache/lucene-solr/pull/1507

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org
[GitHub] [lucene-solr] tflobbe commented on a change in pull request #1517: SOLR-13289: Use the final collector's scoreMode
tflobbe commented on a change in pull request #1517:
URL: https://github.com/apache/lucene-solr/pull/1517#discussion_r425557943

##
## File path: solr/core/src/test/org/apache/solr/search/SolrIndexSearcherTest.java
##
@@ -189,12 +206,95 @@ public void testMinExactHitsWithMaxScoreRequested() throws IOException {
       cmd.setMinExactHits(2);
       cmd.setFlags(SolrIndexSearcher.GET_SCORES);
       cmd.setQuery(new TermQuery(new Term("field1_s", "foo")));
-      searcher.search(new QueryResult(), cmd);
+      QueryResult qr = new QueryResult();
+      searcher.search(qr, cmd);
       assertMatchesGraterThan(NUM_DOCS, qr);
       assertNotEquals(Float.NaN, qr.getDocList().maxScore());
       return null;
     });
   }
+
+  public void testMinExactWithFilters() throws Exception {
+    h.getCore().withSearcher(searcher -> {
+      // Sanity Check - No Filter
+      QueryCommand cmd = new QueryCommand();
+      cmd.setMinExactHits(1);
+      cmd.setLen(1);
+      cmd.setFlags(SolrIndexSearcher.NO_CHECK_QCACHE | SolrIndexSearcher.NO_SET_QCACHE);
+      cmd.setQuery(new TermQuery(new Term("field4_t", "0")));
+      QueryResult qr = new QueryResult();
+      searcher.search(qr, cmd);
+      assertMatchesGraterThan(NUM_DOCS, qr);
+      return null;
+    });
+
+    h.getCore().withSearcher(searcher -> {
+      QueryCommand cmd = new QueryCommand();
+      cmd.setMinExactHits(1);
+      cmd.setLen(1);
+      cmd.setFlags(SolrIndexSearcher.NO_CHECK_QCACHE | SolrIndexSearcher.NO_SET_QCACHE);
+      cmd.setQuery(new TermQuery(new Term("field4_t", "0")));
+      FunctionRangeQuery filterQuery = new FunctionRangeQuery(new ValueSourceRangeFilter(new IntFieldSource("field3_i_dvo"), "19", "19", true, true));
+      cmd.setFilterList(filterQuery);
+      filterQuery.setCache(false);
+      filterQuery.setCost(0);
+      assertNull(searcher.getProcessedFilter(null, cmd.getFilterList()).postFilter);
+      QueryResult qr = new QueryResult();
+      searcher.search(qr, cmd);
+      assertMatchesEqual(1, qr);
+      return null;
+    });
+  }
+
+  public void testMinExactWithPostFilters() throws Exception {
+    h.getCore().withSearcher(searcher -> {
+      // Sanity Check - No Filter
+      QueryCommand cmd = new QueryCommand();
+      cmd.setMinExactHits(1);
+      cmd.setLen(1);
+      cmd.setFlags(SolrIndexSearcher.NO_CHECK_QCACHE | SolrIndexSearcher.NO_SET_QCACHE);
+      cmd.setQuery(new TermQuery(new Term("field4_t", "0")));
+      QueryResult qr = new QueryResult();
+      searcher.search(qr, cmd);
+      assertMatchesGraterThan(NUM_DOCS, qr);
+      return null;
+    });
+
+    h.getCore().withSearcher(searcher -> {
+      QueryCommand cmd = new QueryCommand();
+      cmd.setMinExactHits(1);
+      cmd.setLen(1);
+      cmd.setFlags(SolrIndexSearcher.NO_CHECK_QCACHE | SolrIndexSearcher.NO_SET_QCACHE);
+      cmd.setQuery(new TermQuery(new Term("field4_t", "0")));
+      FunctionRangeQuery filterQuery = new FunctionRangeQuery(new ValueSourceRangeFilter(new IntFieldSource("field3_i_dvo"), "19", "19", true, true));

Review comment:
I didn't know about that Jira. I can look at having a Mock PostFilter here instead.
[GitHub] [lucene-solr] tflobbe commented on a change in pull request #1517: SOLR-13289: Use the final collector's scoreMode
tflobbe commented on a change in pull request #1517:
URL: https://github.com/apache/lucene-solr/pull/1517#discussion_r425557741

##
## File path: solr/core/src/test/org/apache/solr/search/SolrIndexSearcherTest.java
##
+      cmd.setFlags(SolrIndexSearcher.NO_CHECK_QCACHE | SolrIndexSearcher.NO_SET_QCACHE);

Review comment:
Just trying to make the test cover specifically what I was working on. It can definitely be done with a higher-level test or integration test.
[jira] [Created] (SOLR-14488) Making replica from leader configurable
Cao Manh Dat created SOLR-14488:
-------------------------------

             Summary: Making replica from leader configurable
                 Key: SOLR-14488
                 URL: https://issues.apache.org/jira/browse/SOLR-14488
             Project: Solr
          Issue Type: Improvement
     Security Level: Public (Default Security Level. Issues are Public)
            Reporter: Cao Manh Dat
            Assignee: Cao Manh Dat

Right now, users can't configure parameters related to the replicate-from-leader process, like {{commitReserveDuration}}, throttling, etc. The default 10s value of {{commitReserveDuration}} can make replicating from the leader fail constantly.
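For context, the reserve duration the issue refers to is already a user-facing knob in standalone (master/slave) replication, where it is set on the ReplicationHandler in solrconfig.xml. A sketch of that existing configuration follows; the one-minute value is illustrative, not a recommendation:

```xml
<!-- solrconfig.xml (standalone replication master); illustrative values -->
<requestHandler name="/replication" class="solr.ReplicationHandler">
  <lst name="master">
    <str name="replicateAfter">commit</str>
    <!-- Keep index commit points reachable for 1 minute so followers
         have time to finish copying files before they are deleted. -->
    <str name="commitReserveDuration">00:01:00</str>
  </lst>
</requestHandler>
```

The issue proposes exposing the analogous settings (reserve duration, throttling, etc.) for SolrCloud's replicate-from-leader path, where they are not currently configurable.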
[jira] [Comment Edited] (SOLR-12057) CDCR does not replicate to Collections with TLOG Replicas
[ https://issues.apache.org/jira/browse/SOLR-12057?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17107860#comment-17107860 ]

Jay edited comment on SOLR-12057 at 5/15/20, 2:32 AM:
------------------------------------------------------

[~sarkaramr...@gmail.com] Will this ticket be part of the future solr 8.x release?

> CDCR does not replicate to Collections with TLOG Replicas
> ---------------------------------------------------------
>
>                 Key: SOLR-12057
>                 URL: https://issues.apache.org/jira/browse/SOLR-12057
>             Project: Solr
>          Issue Type: Bug
>          Components: CDCR
>    Affects Versions: 7.2
>            Reporter: Webster Homer
>            Assignee: Varun Thacker
>            Priority: Major
>         Attachments: SOLR-12057.patch, SOLR-12057.patch, SOLR-12057.patch,
> SOLR-12057.patch, SOLR-12057.patch, SOLR-12057.patch, SOLR-12057.patch,
> cdcr-fail-with-tlog-pull.patch, cdcr-fail-with-tlog-pull.patch
>
> We created a collection using TLOG replicas in our QA clouds.
> We have a locally hosted solrcloud with 2 nodes; all our collections have 2
> shards. We use CDCR to replicate the collections from this environment to 2
> data centers hosted in Google cloud. This seems to work fairly well for our
> collections with NRT replicas. However the new TLOG collection has problems.
>
> The google cloud solrclusters have 4 nodes each (3 separate Zookeepers), with
> 2 shards per collection and 2 replicas per shard.
>
> We never see data show up in the cloud collections, but we do see tlog files
> show up on the cloud servers. I can see that all of the servers have cdcr
> started and buffers are disabled.
>
> The cdcr source configuration is:
>
> "requestHandler":{"/cdcr":{
>     "name":"/cdcr",
>     "class":"solr.CdcrRequestHandler",
>     "replica":[
>       {
>         "zkHost":"xxx-mzk01.sial.com:2181,xxx-mzk02.sial.com:2181,xxx-mzk03.sial.com:2181/solr",
>         "source":"b2b-catalog-material-180124T",
>         "target":"b2b-catalog-material-180124T"},
>       {
>         "zkHost":"-mzk01.sial.com:2181,-mzk02.sial.com:2181,-mzk03.sial.com:2181/solr",
>         "source":"b2b-catalog-material-180124T",
>         "target":"b2b-catalog-material-180124T"}],
>     "replicator":{
>       "threadPoolSize":4,
>       "schedule":500,
>       "batchSize":250},
>     "updateLogSynchronizer":{"schedule":6
>
> The target configurations in the 2 clouds are the same:
>
> "requestHandler":{"/cdcr":{
>     "name":"/cdcr",
>     "class":"solr.CdcrRequestHandler",
>     "buffer":{"defaultState":"disabled"}}}
>
> All of our collections have a timestamp field, index_date. In the source
> collection all the records have a date of 2/28/2018, but the target
> collections have a latest date of 1/26/2018.
>
> I don't see cdcr errors in the logs, but we use logstash to search them, and
> we're still perfecting that.
>
> We have a number of similar collections that behave correctly. This is the
> only collection that is a TLOG collection. It appears that CDCR doesn't
> support TLOG collections.
>
> It looks like the data is getting to the target servers. I see tlog files
> with the right timestamps. Looking at the timestamps on the documents in the
> collection, none of the data appears to have been loaded. In the solr.log I
> see lots of /cdcr messages: action=LASTPROCESSEDVERSION,
> action=COLLECTIONCHECKPOINT, and action=SHARDCHECKPOINT
>
> no errors
>
> Target collections autoCommit is set to 6 I tried sending a commit
> explicitly; no difference. cdcr is uploading data, but no new data appears in
> the collection.
[jira] [Commented] (SOLR-12057) CDCR does not replicate to Collections with TLOG Replicas
[ https://issues.apache.org/jira/browse/SOLR-12057?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17107860#comment-17107860 ]

Jay commented on SOLR-12057:
----------------------------

[~sarkaramr...@gmail.com] Will this ticket be part of the future solr 8.x release?
[jira] [Commented] (SOLR-14471) base replica selection strategy not applied to "last place" shards.preference matches
[ https://issues.apache.org/jira/browse/SOLR-14471?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17107859#comment-17107859 ]

Michael Gibney commented on SOLR-14471:
---------------------------------------

Thanks for the feedback, [~tflobbe]. Is this something you'd be willing to commit, and/or would it make sense to loop anyone else in on this? Please let me know if there's anything else I can do to help.
[GitHub] [lucene-solr] mbwaheed commented on a change in pull request #1430: SOLR-13101: SHARED replica's distributed indexing
mbwaheed commented on a change in pull request #1430:
URL: https://github.com/apache/lucene-solr/pull/1430#discussion_r425531924

##
## File path: solr/core/src/test/org/apache/solr/store/shared/SharedCoreIndexingBatchProcessorTest.java
##
@@ -0,0 +1,270 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.solr.store.shared;
+
+import java.util.concurrent.locks.ReentrantReadWriteLock;
+
+import org.apache.solr.common.SolrException;
+import org.apache.solr.common.cloud.ClusterState;
+import org.apache.solr.common.cloud.DocCollection;
+import org.apache.solr.common.cloud.ZkStateReader;
+import org.apache.solr.core.CoreContainer;
+import org.apache.solr.core.SolrCore;
+import org.apache.solr.store.blob.process.CorePuller;
+import org.apache.solr.store.blob.process.CorePusher;
+import org.apache.solr.store.shared.metadata.SharedShardMetadataController;
+import org.junit.After;
+import org.junit.Before;
+import org.junit.BeforeClass;
+import org.junit.Test;
+import org.mockito.Mockito;
+
+import static org.mockito.ArgumentMatchers.any;
+import static org.mockito.ArgumentMatchers.anyBoolean;
+import static org.mockito.Mockito.doThrow;
+import static org.mockito.Mockito.never;
+import static org.mockito.Mockito.verify;
+
+/**
+ * Unit tests for {@link SharedCoreIndexingBatchProcessor}
+ */
+public class SharedCoreIndexingBatchProcessorTest extends SolrCloudSharedStoreTestCase {
+
+  private static final String COLLECTION_NAME = "sharedCollection";
+  private static final String SHARD_NAME = "shard1";
+
+  private SolrCore core;
+  private CorePuller corePuller;
+  private CorePusher corePusher;
+  private ReentrantReadWriteLock corePullLock;
+  private SharedCoreIndexingBatchProcessor processor;
+
+  @BeforeClass
+  public static void setupCluster() throws Exception {
+    assumeWorkingMockito();
+    setupCluster(1);
+  }
+
+  @Before
+  public void setupTest() throws Exception {
+    assertEquals("wrong number of nodes", 1, cluster.getJettySolrRunners().size());
+    CoreContainer cc = cluster.getJettySolrRunner(0).getCoreContainer();
+
+    int maxShardsPerNode = 1;
+    int numReplicas = 1;
+    setupSharedCollectionWithShardNames(COLLECTION_NAME, maxShardsPerNode, numReplicas, SHARD_NAME);
+    DocCollection collection = cluster.getSolrClient().getZkStateReader().getClusterState().getCollection(COLLECTION_NAME);
+
+    assertEquals("wrong number of replicas", 1, collection.getReplicas().size());
+    core = cc.getCore(collection.getReplicas().get(0).getCoreName());
+
+    assertNotNull("core is null", core);
+
+    corePuller = Mockito.spy(new CorePuller());
+    corePusher = Mockito.spy(new CorePusher());
+    processor = new SharedCoreIndexingBatchProcessor(core, core.getCoreContainer().getZkController().getClusterState()) {
+      @Override
+      protected CorePuller getCorePuller() {
+        return corePuller;
+      }
+
+      @Override
+      protected CorePusher getCorePusher() {
+        return corePusher;
+      }
+    };
+    processor = Mockito.spy(processor);
+    corePullLock = core.getCoreContainer().getSharedStoreManager().getSharedCoreConcurrencyController().getCorePullLock(
+        COLLECTION_NAME, SHARD_NAME, core.getName());
+  }
+
+  @After
+  public void teardownTest() throws Exception {
+    if (core != null) {
+      core.close();
+    }
+    if (processor != null) {
+      processor.close();
+      assertEquals("read lock count is wrong", 0, corePullLock.getReadLockCount());
+    }
+    if (cluster != null) {
+      cluster.deleteAllCollections();
+    }
+  }
+
+  /**
+   * Tests that first add/delete starts an indexing batch.
+   */
+  @Test
+  public void testAddOrDeleteStart() throws Exception {
+    verify(processor, never()).startIndexingBatch();
+    processAddOrDelete();
+    verify(processor).startIndexingBatch();
+  }
+
+  /**
+   * Tests that two adds/deletes only start an indexing batch once.
+   */
+  @Test
+  public void testTwoAddOrDeleteOnlyStartOnce() throws Exception {
+    verify(processor, never()).startIndexingBatch();
+    processAddOrDelete();
+    verify(processor).startIndexingBatch();
+    processAddOrDelete();
+    verify(processor).startIndexingBatch();
[GitHub] [lucene-solr] mbwaheed commented on a change in pull request #1430: SOLR-13101: SHARED replica's distributed indexing
mbwaheed commented on a change in pull request #1430:
URL: https://github.com/apache/lucene-solr/pull/1430#discussion_r425526455

##
## File path: solr/core/src/test/org/apache/solr/store/shared/SharedCoreConcurrencyTest.java
##
@@ -595,7 +595,7 @@ private void configureTestSharedConcurrencyControllerForProcess(
   public void recordState(String collectionName, String shardName, String coreName, SharedCoreStage stage) {
     super.recordState(collectionName, shardName, coreName, stage);
     ConcurrentLinkedQueue coreConcurrencyStages = coreConcurrencyStagesMap.computeIfAbsent(coreName, k -> new ConcurrentLinkedQueue<>());
-    coreConcurrencyStages.add(Thread.currentThread().getId() + "." + stage.name());
+    coreConcurrencyStages.add(Thread.currentThread().getName() + "." + stage.name());

Review comment:
Sorry, this is somewhat irrelevant. Looking through one of the test run logs, I realized that if a thread has a name, the name is what gets logged (e.g. puller threads). This change is only to help make debugging this test easier.
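The point of the change above can be seen with a minimal sketch (hypothetical class and stage label, not code from the PR): a settable, human-readable thread name makes a recorded stage much easier to correlate with log output than a bare numeric thread id.

```java
// Hypothetical illustration of why thread names beat thread ids in debug traces.
public class ThreadNameSketch {
  public static String stageLabel(String stage) {
    // e.g. "core-puller-1.PULL_STARTED" instead of "47.PULL_STARTED"
    return Thread.currentThread().getName() + "." + stage;
  }
}
```

Solr's logging includes the thread name in each line, so stages recorded this way line up directly with the surrounding log entries.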
[GitHub] [lucene-solr] dsmiley commented on a change in pull request #1517: SOLR-13289: Use the final collector's scoreMode
dsmiley commented on a change in pull request #1517: URL: https://github.com/apache/lucene-solr/pull/1517#discussion_r425521045

## File path: solr/core/src/test/org/apache/solr/search/SolrIndexSearcherTest.java
## @@ -189,12 +206,95 @@ public void testMinExactHitsWithMaxScoreRequested() throws IOException {
     cmd.setMinExactHits(2);
     cmd.setFlags(SolrIndexSearcher.GET_SCORES);
     cmd.setQuery(new TermQuery(new Term("field1_s", "foo")));
-    searcher.search(new QueryResult(), cmd);
     QueryResult qr = new QueryResult();
     searcher.search(qr, cmd);
     assertMatchesGraterThan(NUM_DOCS, qr);
     assertNotEquals(Float.NaN, qr.getDocList().maxScore());
     return null;
   });
 }
+
+ public void testMinExactWithFilters() throws Exception {
+
+   h.getCore().withSearcher(searcher -> {
+     //Sanity Check - No Filter
+     QueryCommand cmd = new QueryCommand();
+     cmd.setMinExactHits(1);
+     cmd.setLen(1);
+     cmd.setFlags(SolrIndexSearcher.NO_CHECK_QCACHE | SolrIndexSearcher.NO_SET_QCACHE);
+     cmd.setQuery(new TermQuery(new Term("field4_t", "0")));
+     QueryResult qr = new QueryResult();
+     searcher.search(qr, cmd);
+     assertMatchesGraterThan(NUM_DOCS, qr);
+     return null;
+   });
+
+   h.getCore().withSearcher(searcher -> {
+     QueryCommand cmd = new QueryCommand();
+     cmd.setMinExactHits(1);
+     cmd.setLen(1);
+     cmd.setFlags(SolrIndexSearcher.NO_CHECK_QCACHE | SolrIndexSearcher.NO_SET_QCACHE);
+     cmd.setQuery(new TermQuery(new Term("field4_t", "0")));
+     FunctionRangeQuery filterQuery = new FunctionRangeQuery(new ValueSourceRangeFilter(new IntFieldSource("field3_i_dvo"), "19", "19", true, true));
+     cmd.setFilterList(filterQuery);
+     filterQuery.setCache(false);
+     filterQuery.setCost(0);
+     assertNull(searcher.getProcessedFilter(null, cmd.getFilterList()).postFilter);
+     QueryResult qr = new QueryResult();
+     searcher.search(qr, cmd);
+     assertMatchesEqual(1, qr);
+     return null;
+   });
+ }
+
+ public void testMinExactWithPostFilters() throws Exception {
+   h.getCore().withSearcher(searcher -> {
+     //Sanity Check - No Filter
+     QueryCommand cmd = new QueryCommand();
+     cmd.setMinExactHits(1);
+     cmd.setLen(1);
+     cmd.setFlags(SolrIndexSearcher.NO_CHECK_QCACHE | SolrIndexSearcher.NO_SET_QCACHE);
+     cmd.setQuery(new TermQuery(new Term("field4_t", "0")));
+     QueryResult qr = new QueryResult();
+     searcher.search(qr, cmd);
+     assertMatchesGraterThan(NUM_DOCS, qr);
+     return null;
+   });
+
+   h.getCore().withSearcher(searcher -> {
+     QueryCommand cmd = new QueryCommand();
+     cmd.setMinExactHits(1);
+     cmd.setLen(1);
+     cmd.setFlags(SolrIndexSearcher.NO_CHECK_QCACHE | SolrIndexSearcher.NO_SET_QCACHE);
+     cmd.setQuery(new TermQuery(new Term("field4_t", "0")));
+     FunctionRangeQuery filterQuery = new FunctionRangeQuery(new ValueSourceRangeFilter(new IntFieldSource("field3_i_dvo"), "19", "19", true, true));

Review comment: FYI today FunctionRangeQuery implements PostFilter but it soon won't: https://issues.apache.org/jira/browse/SOLR-14164

## File path: solr/core/src/test/org/apache/solr/search/SolrIndexSearcherTest.java
## @@ -189,12 +206,95 @@
+     cmd.setFlags(SolrIndexSearcher.NO_CHECK_QCACHE | SolrIndexSearcher.NO_SET_QCACHE);

Review comment: Curious; why are you writing tests in this low-level way vs testMinExactHitsDisabledByCollapse, which I wrote in a more common, higher-level style that is more succinct? Is it only for this NO_CHECK_QCACHE distinction? Can that be done simply by disabling the cache? Not a big deal but just want to know your point of view.

This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail:
[GitHub] [lucene-solr] mbwaheed commented on a change in pull request #1430: SOLR-13101: SHARED replica's distributed indexing
mbwaheed commented on a change in pull request #1430: URL: https://github.com/apache/lucene-solr/pull/1430#discussion_r425524750

## File path: solr/core/src/java/org/apache/solr/update/processor/DistributedZkUpdateProcessor.java
## @@ -1465,4 +1318,30 @@ private void zkCheck() {
     throw new SolrException(SolrException.ErrorCode.SERVICE_UNAVAILABLE, "Cannot talk to ZooKeeper - Updates are disabled.");
   }
+
+ private boolean isSharedCoreAddOrDeleteGoingToBeIndexedLocally() {
+   // forwardToLeader: if true, then the update is going to be forwarded to its rightful leader.
+   // The doc being added or deleted might not even belong to the current core's (req.getCore()) shard.
+   // isLeader: if true, then the current core (req.getCore()) is the leader of the shard to which the doc being added or deleted belongs.
+   // For SHARED replicas, only leader replicas do local indexing. Follower SHARED replicas do not do any local
+   // indexing; their only job is to forward the updates to the leader replica.
+   // isSubShardLeader: if true, then the current core (req.getCore()) is the leader of a sub shard being built.
+   // Sub shard leaders only buffer the updates locally and apply them towards the end of a successful

Review comment: yes.
[GitHub] [lucene-solr] mbwaheed commented on a change in pull request #1430: SOLR-13101: SHARED replica's distributed indexing
mbwaheed commented on a change in pull request #1430: URL: https://github.com/apache/lucene-solr/pull/1430#discussion_r425524682

## File path: solr/core/src/java/org/apache/solr/update/processor/DistributedZkUpdateProcessor.java
## @@ -1465,4 +1318,30 @@ private void zkCheck() {
     throw new SolrException(SolrException.ErrorCode.SERVICE_UNAVAILABLE, "Cannot talk to ZooKeeper - Updates are disabled.");
   }
+
+ private boolean isSharedCoreAddOrDeleteGoingToBeIndexedLocally() {
+   // forwardToLeader: if true, then the update is going to be forwarded to its rightful leader.
+   // The doc being added or deleted might not even belong to the current core's (req.getCore()) shard.
+   // isLeader: if true, then the current core (req.getCore()) is the leader of the shard to which the doc being added or deleted belongs.
+   // For SHARED replicas, only leader replicas do local indexing. Follower SHARED replicas do not do any local
+   // indexing; their only job is to forward the updates to the leader replica.

Review comment: correct.
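Read together, the three flags described in the comment above reduce to a small predicate. A minimal sketch of one plausible reading of that logic (the class and method names mirror the comment, not the actual Solr code; whether sub-shard buffering counts as "local indexing" is an assumption here):

```java
public class SharedReplicaIndexingDecision {
    /**
     * A SHARED-replica add/delete is processed locally only when the update is
     * not being forwarded elsewhere and this core leads either the shard the
     * doc belongs to or a sub-shard being built. Follower SHARED replicas
     * never index locally; they only forward updates to the leader.
     */
    static boolean isIndexedLocally(boolean forwardToLeader, boolean isLeader, boolean isSubShardLeader) {
        return !forwardToLeader && (isLeader || isSubShardLeader);
    }

    public static void main(String[] args) {
        // Follower replica: the update is forwarded, nothing indexed locally.
        System.out.println(isIndexedLocally(true, false, false));  // false
        // Shard leader receiving a doc for its own shard: indexed locally.
        System.out.println(isIndexedLocally(false, true, false));  // true
    }
}
```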
[jira] [Created] (SOLR-14487) collapse parser treats a zero field value as non-existent
David Smiley created SOLR-14487: --- Summary: collapse parser treats a zero field value as non-existent Key: SOLR-14487 URL: https://issues.apache.org/jira/browse/SOLR-14487 Project: Solr Issue Type: Bug Security Level: Public (Default Security Level. Issues are Public) Reporter: David Smiley When collapsing on an integer field (maybe others too?), a zero value is considered the same as the document having no value. I found this when trying to write a test [solr/core/src/test/org/apache/solr/search/TestCollapseQParserPlugin.java|https://github.com/apache/lucene-solr/pull/1517/files#diff-641a4dc7b08b4730c071153e28ad9d62] method "testMinExactHitsDisabledByCollapse" but was able to work around it using nullPolicy=expand. It's clearly not a general fix but only helped for that particular query/circumstance.
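For readers hitting the same zero-vs-missing confusion, the workaround mentioned above is to add nullPolicy=expand to the collapse local-params filter, so documents treated as having no value are kept as individual results rather than collapsed into one group. A sketch of building that filter string (the field name group_i is hypothetical, and this only illustrates the syntax, not a general fix):

```java
public class CollapseFilterExample {
    // Builds a {!collapse ...} filter-query string. With nullPolicy=expand,
    // docs with no value in the collapse field (and, per this bug, docs whose
    // value is zero) each remain a separate result instead of being grouped.
    static String collapseFilter(String field, String nullPolicy) {
        return "{!collapse field=" + field + " nullPolicy=" + nullPolicy + "}";
    }

    public static void main(String[] args) {
        System.out.println(collapseFilter("group_i", "expand"));
        // {!collapse field=group_i nullPolicy=expand}
    }
}
```

The resulting string would be passed as an fq parameter (e.g. via SolrQuery.addFilterQuery in SolrJ).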
[GitHub] [lucene-solr] mbwaheed commented on a change in pull request #1430: SOLR-13101: SHARED replica's distributed indexing
mbwaheed commented on a change in pull request #1430: URL: https://github.com/apache/lucene-solr/pull/1430#discussion_r425522694

## File path: solr/core/src/java/org/apache/solr/update/processor/DistributedZkUpdateProcessor.java
## @@ -184,6 +186,30 @@ public void processCommit(CommitUpdateCommand cmd) throws IOException {
     updateCommand = cmd;
+
+   // 1. SHARED replica has a hard requirement of processing each indexing batch with a hard commit (either explicit or
+   // implicit, HttpSolrCall#addCommitIfAbsent) because that is how, at the end of an indexing batch, the synchronous push
+   // to the shared store gets hold of the segment files on local disk. SHARED replica also does not support the notion of soft commit.
+   // Therefore, unlike the NRT replica type, we do not need to broadcast a commit to the leaders of all the shards of a collection.

Review comment: Correct. The former is not needed because an isolated commit is a no-op for a SHARED replica, and the latter is not supported because SHARED replica has a different plan around opening of searchers: https://issues.apache.org/jira/browse/SOLR-14339 I have updated the comments.
[GitHub] [lucene-solr] mbwaheed commented on a change in pull request #1430: SOLR-13101: SHARED replica's distributed indexing
mbwaheed commented on a change in pull request #1430: URL: https://github.com/apache/lucene-solr/pull/1430#discussion_r425517306

## File path: solr/core/src/java/org/apache/solr/update/processor/DistributedZkUpdateProcessor.java
## @@ -184,6 +186,30 @@ public void processCommit(CommitUpdateCommand cmd) throws IOException {
     updateCommand = cmd;
+
+   // 1. SHARED replica has a hard requirement of processing each indexing batch with a hard commit (either explicit or
+   // implicit, HttpSolrCall#addCommitIfAbsent) because that is how, at the end of an indexing batch, the synchronous push
+   // to the shared store gets hold of the segment files on local disk. SHARED replica also does not support the notion of soft commit.
+   // Therefore, unlike the NRT replica type, we do not need to broadcast a commit to the leaders of all the shards of a collection.
+   //
+   // 2. isLeader is computed fresh each time an AddUpdateCommand/DeleteUpdateCommand belonging to the indexing
+   // batch is processed. And finally it is recomputed in this method. It is possible that at the beginning of a batch
+   // this replica was a leader and did process some AddUpdateCommand/DeleteUpdateCommand, but before reaching this
+   // method lost the leadership. In that case we would still like to process the commit, otherwise the indexing batch can
+   // succeed without pushing the changes to the shared store (data loss). Therefore, we are not restricting the

Review comment: SHARED replica does not need leadership as such because it relies on optimistic concurrency when writing to the metadataSuffix znode. As long as a replica can match the metadataSuffix version it started indexing with, it will be correct. If the new leader has started indexing and has pushed before this replica, then this replica will fail. But if it pushes before the new leader can push, then the batch on the new leader will fail. In both cases indexing will be correct.
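The optimistic-concurrency argument above can be sketched independently of Solr: each writer records the version it saw when its batch started, and the final push succeeds only if no one else has advanced that version in the meantime. Below, an AtomicInteger stands in for the metadataSuffix znode version (all names are illustrative, not the actual Solr code):

```java
import java.util.concurrent.atomic.AtomicInteger;

public class OptimisticPushExample {
    // Stand-in for the metadataSuffix znode version in the shared store.
    static final AtomicInteger metadataVersion = new AtomicInteger(0);

    // A replica may push its batch only if the version it read at the start of
    // indexing is still current; otherwise another (new) leader pushed first,
    // and this batch must fail rather than overwrite newer data.
    static boolean tryPush(int versionSeenAtStart) {
        return metadataVersion.compareAndSet(versionSeenAtStart, versionSeenAtStart + 1);
    }

    public static void main(String[] args) {
        int oldLeaderStart = metadataVersion.get(); // both replicas read version 0
        int newLeaderStart = metadataVersion.get();
        System.out.println(tryPush(newLeaderStart)); // true: new leader pushes first
        System.out.println(tryPush(oldLeaderStart)); // false: stale old leader fails
    }
}
```

Exactly one of two racing pushes can win, which is why leadership itself is not required for correctness.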
[GitHub] [lucene-solr] mbwaheed commented on a change in pull request #1430: SOLR-13101: SHARED replica's distributed indexing
mbwaheed commented on a change in pull request #1430: URL: https://github.com/apache/lucene-solr/pull/1430#discussion_r425514716

## File path: solr/core/src/java/org/apache/solr/update/processor/DistributedZkUpdateProcessor.java
## @@ -114,16 +104,29 @@
   private RollupRequestReplicationTracker rollupReplicationTracker;
   private LeaderRequestReplicationTracker leaderReplicationTracker;
+
+ /**
+  * For {@link Replica.Type#SHARED} replica, it is necessary that we pull from the shared store at the start of
+  * an indexing batch (if the core is stale). And we push to the shared store at the end of a successfully committed
+  * indexing batch (we ensure that each batch has a hard commit). Details can be found in
+  * {@link org.apache.solr.store.shared.SharedCoreConcurrencyController}.
+  * In other words, we would like to call {@link SharedCoreIndexingBatchProcessor#startIndexingBatch()} at the start of
+  * an indexing batch and {@link SharedCoreIndexingBatchProcessor#finishIndexingBatch()} at the end of a successfully
+  * committed indexing batch.
+  * For that, we rely on first {@link #processAdd(AddUpdateCommand)} or {@link #processCommit(CommitUpdateCommand)}

Review comment: Thanks for catching. Actually processCommit was incorrectly mentioned in place of processDelete.
[GitHub] [lucene-solr] mbwaheed commented on a change in pull request #1430: SOLR-13101: SHARED replica's distributed indexing
mbwaheed commented on a change in pull request #1430: URL: https://github.com/apache/lucene-solr/pull/1430#discussion_r425514283

## File path: solr/core/src/java/org/apache/solr/update/processor/DistributedZkUpdateProcessor.java
## @@ -114,16 +104,29 @@
   private RollupRequestReplicationTracker rollupReplicationTracker;
   private LeaderRequestReplicationTracker leaderReplicationTracker;
+
+ /**
+  * For {@link Replica.Type#SHARED} replica, it is necessary that we pull from the shared store at the start of
+  * an indexing batch (if the core is stale). And we push to the shared store at the end of a successfully committed
+  * indexing batch (we ensure that each batch has a hard commit). Details can be found in
+  * {@link org.apache.solr.store.shared.SharedCoreConcurrencyController}.
+  * In other words, we would like to call {@link SharedCoreIndexingBatchProcessor#startIndexingBatch()} at the start of

Review comment: I never liked this place to run batch start/finish logic. I have added a TODO. The doc also refers to addOrDeleteGoingToBeIndexedLocally and hardCommitCompletedLocally a few lines later. startIndexingBatch and finishIndexingBatch are mentioned first because they are the real reason for the whole logic. They are not public because they don't need to be. But if/when we find a better place to start/finish a batch, we will likely make them public and call them directly.
[jira] [Commented] (LUCENE-9328) SortingGroupHead to reuse DocValues
[ https://issues.apache.org/jira/browse/LUCENE-9328?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17107817#comment-17107817 ] Lucene/Solr QA commented on LUCENE-9328:

(/) +1 overall

|| Vote || Subsystem || Runtime || Comment ||
|| || Prechecks || || ||
| +1 | test4tests | 0m 0s | The patch appears to include 3 new or modified test files. |
|| || master Compile Tests || || ||
| +1 | compile | 2m 2s | master passed |
|| || Patch Compile Tests || || ||
| +1 | compile | 1m 48s | the patch passed |
| +1 | javac | 1m 48s | the patch passed |
| +1 | Release audit (RAT) | 0m 41s | the patch passed |
| +1 | Check forbidden APIs | 0m 25s | the patch passed |
| +1 | Validate source patterns | 0m 25s | the patch passed |
|| || Other Tests || || ||
| +1 | unit | 3m 9s | core in the patch passed. |
| +1 | unit | 0m 9s | grouping in the patch passed. |
| +1 | unit | 0m 11s | join in the patch passed. |
| +1 | unit | 0m 11s | queries in the patch passed. |
| +1 | unit | 0m 36s | test-framework in the patch passed. |
| +1 | unit | 46m 7s | core in the patch passed. |
| | | 57m 13s | |

|| Subsystem || Report/Notes ||
| JIRA Issue | LUCENE-9328 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/13002982/LUCENE-9328.patch |
| Optional Tests | compile javac unit ratsources checkforbiddenapis validatesourcepatterns |
| uname | Linux lucene1-us-west 4.15.0-54-generic #58-Ubuntu SMP Mon Jun 24 10:55:24 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | ant |
| Personality | /home/jenkins/jenkins-slave/workspace/PreCommit-LUCENE-Build/sourcedir/dev-tools/test-patch/lucene-solr-yetus-personality.sh |
| git revision | master / 5eea9758c90 |
| ant | version: Apache Ant(TM) version 1.10.5 compiled on March 28 2019 |
| Default Java | LTS |
| Test Results | https://builds.apache.org/job/PreCommit-LUCENE-Build/274/testReport/ |
| modules | C: lucene/core lucene/grouping lucene/join lucene/queries lucene/test-framework solr/core U: . |
| Console output | https://builds.apache.org/job/PreCommit-LUCENE-Build/274/console |
| Powered by | Apache Yetus 0.7.0 http://yetus.apache.org |

This message was automatically generated.
> SortingGroupHead to reuse DocValues > --- > > Key: LUCENE-9328 > URL: https://issues.apache.org/jira/browse/LUCENE-9328 > Project: Lucene - Core > Issue Type: Improvement > Components: modules/grouping >Reporter: Mikhail Khludnev >Assignee: Mikhail Khludnev >Priority: Minor > Attachments: LUCENE-9328.patch, LUCENE-9328.patch, LUCENE-9328.patch, > LUCENE-9328.patch, LUCENE-9328.patch, LUCENE-9328.patch, LUCENE-9328.patch, > LUCENE-9328.patch, LUCENE-9328.patch, LUCENE-9328.patch, LUCENE-9328.patch, > LUCENE-9328.patch > > Time Spent: 1h 50m > Remaining Estimate: 0h > > That's why > https://issues.apache.org/jira/browse/LUCENE-7701?focusedCommentId=17084365=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-17084365
[jira] [Updated] (SOLR-14477) relatedness() values can be wrong when using 'prefix'
[ https://issues.apache.org/jira/browse/SOLR-14477?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chris M. Hostetter updated SOLR-14477: -- Attachment: SOLR-14477.patch Status: Open (was: Open)

Patch updated with TestCloudJSONFacetSKG inspired by work being done in SOLR-13132...
* beef up index to include single valued fields that also get randomized facet testing
* refactor test's (randomizable) {{TermFacet}} data struct to be a little more flexible, making it easier to add new options
* add randomization of "prefix" option
* for good measure, also add in randomization of:
** perSeg
** prelim_sort

So far this all seems to be beasting well ... but because TestCloudJSONFacetSKG works by firing off "verification" queries to check the results for each bucket, the test has some limitations in the indexes and option combos it creates – so before calling this issue "done" I also want to incorporate some of the other test work in progress in SOLR-13132 to randomize comparisons of the results for the same request when only the facet "method" option is varied, to give myself a little more peace of mind that this fix is complete.

> relatedness() values can be wrong when using 'prefix' > - > > Key: SOLR-14477 > URL: https://issues.apache.org/jira/browse/SOLR-14477 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Chris M. Hostetter >Assignee: Chris M. Hostetter >Priority: Major > Attachments: SOLR-14477.patch, SOLR-14477.patch > > > Another {{relatedness()}} bug found in json facets while working on > increased test coverage for SOLR-13132. > if the {{prefix}} option is used when doing a terms facet, then the > {{relatedness()}} calculations can be wrong in some situations -- most notably > when using {{limit:-1}}, but I'm pretty sure the bug also impacts the code > paths where the (first) {{sort}} (or {{prelim_sort}}) is computed against the > {{relatedness()}} values.
> Real world impacts of this bug should be relatively low since i can't really > think of any practical usecases for using {{relatedness()}} in conjunction > with {{prefix}}
[jira] [Commented] (LUCENE-9321) Port documentation task to gradle
[ https://issues.apache.org/jira/browse/LUCENE-9321?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17107802#comment-17107802 ] Tomoko Uchida commented on LUCENE-9321: --- Yes, I came up with the idea as I saw how the task works. Let me just try it - I'm not a designer, though I could make it better than vanilla HTML. (I won't make it in time for merging the branch, so will create a separate patch.) > Port documentation task to gradle > - > > Key: LUCENE-9321 > URL: https://issues.apache.org/jira/browse/LUCENE-9321 > Project: Lucene - Core > Issue Type: Sub-task > Components: general/build >Reporter: Tomoko Uchida >Assignee: Uwe Schindler >Priority: Major > Fix For: master (9.0) > > Attachments: screenshot-1.png > > Time Spent: 6h 40m > Remaining Estimate: 0h > > This is a placeholder issue for porting ant "documentation" task to gradle. > The generated documents should be able to be published on lucene.apache.org > web site on "as-is" basis.
[GitHub] [lucene-solr] megancarey commented on a change in pull request #1504: SOLR-14462: cache more than one autoscaling session
megancarey commented on a change in pull request #1504: URL: https://github.com/apache/lucene-solr/pull/1504#discussion_r425424095

## File path: solr/solrj/src/java/org/apache/solr/client/solrj/cloud/autoscaling/PolicyHelper.java
## @@ -382,45 +383,55 @@ static MapWriter loggingInfo(Policy policy, SolrCloudManager cloudManager, Sugge }
 public enum Status {
-   NULL,
-   //it is just created and not yet used or all operations on it has been completed fully
-   UNUSED,
-   COMPUTING, EXECUTING
+   COMPUTING, // A command is actively using and modifying the session to compute placements
+   EXECUTING // A command is not done yet processing its changes but no longer uses the session
 }

 /**
- * This class stores a session for sharing purpose. If a process creates a session to
- * compute operations,
- * 1) see if there is a session that is available in the cache,
- * 2) if yes, check if it is expired
- * 3) if it is expired, create a new session
- * 4) if it is not expired, borrow it
- * 5) after computing operations put it back in the cache
+ * This class stores sessions for sharing purposes. If a process requirees a session to

Review comment: Minor: "requirees" -> requires

## File path: solr/solrj/src/java/org/apache/solr/client/solrj/cloud/autoscaling/PolicyHelper.java
## @@ -429,87 +440,124 @@ private void release(SessionWrapper sessionWrapper) {
  * The session can be used by others while the caller is performing operations
  */
 private void returnSession(SessionWrapper sessionWrapper) {
-   TimeSource timeSource = sessionWrapper.session != null ? sessionWrapper.session.cloudManager.getTimeSource() : TimeSource.NANO_TIME;
+   boolean present;
    synchronized (lockObj) {
      sessionWrapper.status = Status.EXECUTING;
-     if (log.isDebugEnabled()) {
-       log.debug("returnSession, curr-time {} sessionWrapper.createTime {}, this.sessionWrapper.createTime {} ", time(timeSource, MILLISECONDS), sessionWrapper.createTime, this.sessionWrapper.createTime);
-     }
-     if (sessionWrapper.createTime == this.sessionWrapper.createTime) {
-       //this session was used for computing new operations and this can now be used for other
-       // computing
-       this.sessionWrapper = sessionWrapper;
+     present = sessionWrapperSet.contains(sessionWrapper);
-       //one thread who is waiting for this need to be notified.
-       lockObj.notify();
-     } else {
-       log.debug("create time NOT SAME {} ", SessionWrapper.DEFAULT_INSTANCE.createTime);
-       //else just ignore it
-     }
+     // wake up single thread waiting for a session return (ok if not woken up, wait is short)
+     lockObj.notify();
    }
+   // Logging
+   if (present) {
+     if (log.isDebugEnabled()) {
+       log.debug("returnSession {}", sessionWrapper.getCreateTime());
+     }
+   } else {
+     log.warn("returning unknown session {} ", sessionWrapper.getCreateTime());
+   }
 }

-public SessionWrapper get(SolrCloudManager cloudManager) throws IOException, InterruptedException {
+public SessionWrapper get(SolrCloudManager cloudManager, boolean allowWait) throws IOException, InterruptedException {
   TimeSource timeSource = cloudManager.getTimeSource();
+  long oldestUpdateTimeNs = TimeUnit.SECONDS.convert(timeSource.getTimeNs(), TimeUnit.NANOSECONDS) - SESSION_EXPIRY;
+  int zkVersion = cloudManager.getDistribStateManager().getAutoScalingConfig().getZkVersion();
   synchronized (lockObj) {
-    if (sessionWrapper.status == Status.NULL ||
-        sessionWrapper.zkVersion != cloudManager.getDistribStateManager().getAutoScalingConfig().getZkVersion() ||
-        TimeUnit.SECONDS.convert(timeSource.getTimeNs() - sessionWrapper.lastUpdateTime, TimeUnit.NANOSECONDS) > SESSION_EXPIRY) {
-      //no session available or the session is expired
+    // If nothing in the cache can possibly work, create a new session
+    if (!hasNonExpiredSession(zkVersion, oldestUpdateTimeNs)) {
       return createSession(cloudManager);
-    } else {
+    }
+
+    // Try to find a session available right away
+    SessionWrapper sw = getAvailableSession(zkVersion, oldestUpdateTimeNs);
+
+    if (sw != null) {
+      if (log.isDebugEnabled()) {
+        log.debug("reusing session {}", sw.getCreateTime());
+      }
+      return sw;
+    } else if (allowWait) {
+      // No session available, but if we wait a bit, maybe one can become available
+      // wait 1 to 10 secs in case a session is returned. Random to spread wakeup otherwise sessions not reused
+      long waitForMs = (long) (Math.random() * 9 * 1000 + 1000);
+
+      if (log.isDebugEnabled()) {
+        log.debug("No sessions
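The waiting branch in the diff above sleeps a random 1–10 seconds so that concurrently-waiting threads wake at different times and returned sessions actually get reused. The bound computation can be isolated and checked on its own (the method name and parameterization over the random draw are ours, for testability; the constants mirror the diff):

```java
import java.util.Random;

public class SessionWaitExample {
    // Wait between 1s (inclusive) and 10s (exclusive), as in the diff:
    // a random value in [0,1) scaled by 9000 ms, plus a 1000 ms floor.
    static long waitForMs(double random01) {
        return (long) (random01 * 9 * 1000 + 1000);
    }

    public static void main(String[] args) {
        long w = waitForMs(new Random().nextDouble());
        // Always within [1000, 10000) regardless of the draw.
        System.out.println(w >= 1000 && w < 10000); // true
    }
}
```

Randomizing the wakeup (rather than a fixed sleep) avoids a thundering herd where every waiter re-checks the cache at the same instant.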
[GitHub] [lucene-solr] madrob commented on pull request #1518: SOLR-14482: Fix compile-time warnings in solr/core/search/facet
madrob commented on pull request #1518: URL: https://github.com/apache/lucene-solr/pull/1518#issuecomment-628941414 Uhh... @ErickErickson I think you're missing your changes? There's nothing there under "Files Changed"
[GitHub] [lucene-solr] uschindler commented on pull request #1477: LUCENE-9321: Port markdown task to Gradle
uschindler commented on pull request #1477: URL: https://github.com/apache/lucene-solr/pull/1477#issuecomment-628929672 I pushed a first mockup using Groovy's `SimpleTemplateEngine`.
[jira] [Resolved] (SOLR-14478) Allow the diff Stream Evaluator to operate on the rows of a matrix
[ https://issues.apache.org/jira/browse/SOLR-14478?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Joel Bernstein resolved SOLR-14478. --- Resolution: Resolved > Allow the diff Stream Evaluator to operate on the rows of a matrix > -- > > Key: SOLR-14478 > URL: https://issues.apache.org/jira/browse/SOLR-14478 > Project: Solr > Issue Type: New Feature > Security Level: Public(Default Security Level. Issues are Public) > Components: streaming expressions >Reporter: Joel Bernstein >Assignee: Joel Bernstein >Priority: Major > Fix For: 8.6 > > Attachments: SOLR-14478.patch, Screen Shot 2020-05-14 at 8.54.07 > AM.png > > > Currently the *diff* function performs *serial differencing* on a numeric > vector. This ticket will allow the diff function to perform serial > differencing on all the rows of a *matrix*. This will make it easy to perform > *correlations* on a matrix of *differenced time series vectors* using math > expressions. > A screen shot is attached with *diff* working on a matrix of time series > data. The effect is powerful. It removes the trend from a matrix of time > series vectors in one simple function call.
[jira] [Resolved] (SOLR-14407) Handle shards.purpose in the postlogs tool
[ https://issues.apache.org/jira/browse/SOLR-14407?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Joel Bernstein resolved SOLR-14407. --- Resolution: Resolved > Handle shards.purpose in the postlogs tool > --- > > Key: SOLR-14407 > URL: https://issues.apache.org/jira/browse/SOLR-14407 > Project: Solr > Issue Type: New Feature > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Joel Bernstein >Assignee: Joel Bernstein >Priority: Major > Fix For: 8.6 > > Attachments: SOLR-14407.patch > > > This ticket will add the *purpose_ss* field to query type log records that > have a *shards.purpose* request parameter. This can be used to gather timing > and count information for the different parts of the distributed search.
[GitHub] [lucene-solr] ErickErickson opened a new pull request #1518: SOLR-14482: Fix compile-time warnings in solr/core/search/facet
ErickErickson opened a new pull request #1518: URL: https://github.com/apache/lucene-solr/pull/1518 The facet code has a _lot_ of classes declared in a file with a different name. This tries to fix all the warnings in solr/core/search/facet. gradlew check passes. Here for comments. Plus one log message I noticed that I'd checked in when I was debugging. Make any comments by EOD Friday; I'll be pushing this over the weekend otherwise, after verifying that it all works under ant precommit/test too and checking it over one more time.
[GitHub] [lucene-solr] tflobbe opened a new pull request #1517: SOLR-13289: Use the final collector's scoreMode
tflobbe opened a new pull request #1517: URL: https://github.com/apache/lucene-solr/pull/1517 Fixes a bug @dsmiley pointed out in [SOLR-13289](https://issues.apache.org/jira/browse/SOLR-13289?focusedCommentId=17103601&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-17103601)
[jira] [Commented] (LUCENE-9328) SortingGroupHead to reuse DocValues
[ https://issues.apache.org/jira/browse/LUCENE-9328?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17107668#comment-17107668 ] Mikhail Khludnev commented on LUCENE-9328: -- Finally, after sneaking in [~romseygeek]'s hack for {{leafContext}}, my patch passed {{TestGroupingSearch}}. I'm too jealous to choose between the two approaches. I feel like my approach is fragile to ValueSources and Comparators invoking {{dv.advance()}}. [^LUCENE-9328.patch] enforces the use of {{advanceExact()}}. Following this idea to the extreme might lead to revoking {{advance()}} from DocValues in favor of {{advanceExact()}}. Can we have more opinions/votes/ideas here? > SortingGroupHead to reuse DocValues > --- > > Key: LUCENE-9328 > URL: https://issues.apache.org/jira/browse/LUCENE-9328 > Project: Lucene - Core > Issue Type: Improvement > Components: modules/grouping >Reporter: Mikhail Khludnev >Assignee: Mikhail Khludnev >Priority: Minor > Attachments: LUCENE-9328.patch, LUCENE-9328.patch, LUCENE-9328.patch, > LUCENE-9328.patch, LUCENE-9328.patch, LUCENE-9328.patch, LUCENE-9328.patch, > LUCENE-9328.patch, LUCENE-9328.patch, LUCENE-9328.patch, LUCENE-9328.patch, > LUCENE-9328.patch > > Time Spent: 1h 50m > Remaining Estimate: 0h > > That's why > https://issues.apache.org/jira/browse/LUCENE-7701?focusedCommentId=17084365&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-17084365
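For readers following the {{advance()}} vs. {{advanceExact()}} discussion above, the two positioning contracts can be contrasted with a toy sketch. This is plain Python, not Lucene's DocValues API; the class and behavior are simplified illustrations only.

```python
# Toy contrast of the two DocValues positioning calls discussed above
# (not Lucene code): advance(target) jumps to the first docID >= target
# that has a value, while advanceExact(target) positions on the target
# doc itself and reports whether it has a value. Mixing consumers that
# call advance() with ones that expect to ask about specific docs is
# what makes iterator reuse fragile.
class ToyDocValues:
    NO_MORE_DOCS = 2**31 - 1

    def __init__(self, docs_with_values):
        self._docs = sorted(docs_with_values)

    def advance(self, target):
        # First docID >= target that has a value, or NO_MORE_DOCS.
        for d in self._docs:
            if d >= target:
                return d
        return self.NO_MORE_DOCS

    def advance_exact(self, target):
        # True iff the target doc itself has a value.
        return target in self._docs

dv = ToyDocValues([3, 7, 10])
assert dv.advance(5) == 7            # advance() skips past doc 5 entirely...
assert not dv.advance_exact(5)       # ...advanceExact() answers for doc 5 itself
assert dv.advance_exact(7)
assert dv.advance(11) == ToyDocValues.NO_MORE_DOCS
```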
[jira] [Commented] (LUCENE-9365) Fuzzy query has a false negative when prefix length == search term length
[ https://issues.apache.org/jira/browse/LUCENE-9365?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17107662#comment-17107662 ] Michael McCandless commented on LUCENE-9365: If that really fixes this corner case, then +1 to remove that minor optimization. Actually, +1 to remove the optimization entirely, even if it doesn't fix this corner case -- it's optimizing a rare corner case anyways, which is a bad tradeoff (added code complexity for rare gains). > Fuzzy query has a false negative when prefix length == search term length > -- > > Key: LUCENE-9365 > URL: https://issues.apache.org/jira/browse/LUCENE-9365 > Project: Lucene - Core > Issue Type: Bug > Components: core/query/scoring >Reporter: Mark Harwood >Priority: Major > > When using FuzzyQuery the search string `bba` does not match doc value `bbab` > with an edit distance of 1 and prefix length of 3. > In FuzzyQuery an automaton is created for the "suffix" part of the search > string which in this case is an empty string. > In this scenario maybe the FuzzyQuery should rewrite to a WildcardQuery of > the following form : > {code:java} > searchString + "?" > {code} > .. where there's an appropriate number of ? characters according to the edit > distance. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
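The corner case in the report above can be checked independently of Lucene. The sketch below (plain Python, not FuzzyQuery code) verifies that `bbab` really is within edit distance 1 of `bba`, and that the suggested wildcard rewrite with one `?` per edit would accept it; `?` here plays the same single-character role as in Lucene's WildcardQuery syntax.

```python
# Illustrates the reported false negative: with prefix length == len("bba")
# and max edit distance 1, the term "bbab" should still match "bba".
import fnmatch

def edit_distance(a: str, b: str) -> int:
    # Classic dynamic-programming Levenshtein distance.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[len(b)]

query, prefix_len, max_edits = "bba", 3, 1
term = "bbab"

# The term is within the edit budget and its fixed prefix matches exactly,
# so it should be accepted -- but the empty suffix automaton rejects it.
assert edit_distance(query, term) <= max_edits
assert term.startswith(query[:prefix_len])

# The suggested rewrite: a wildcard pattern query + "?" * maxEdits, where
# "?" matches exactly one character.
assert fnmatch.fnmatchcase(term, query + "?" * max_edits)
```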
[jira] [Commented] (LUCENE-9371) Make RegExp internal state more visible to support more rendering formats
[ https://issues.apache.org/jira/browse/LUCENE-9371?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17107659#comment-17107659 ] Michael McCandless commented on LUCENE-9371: +1 > Make RegExp internal state more visible to support more rendering formats > - > > Key: LUCENE-9371 > URL: https://issues.apache.org/jira/browse/LUCENE-9371 > Project: Lucene - Core > Issue Type: Improvement > Components: core/search >Reporter: Mark Harwood >Assignee: Mark Harwood >Priority: Minor > > This is a proposal to open up read-only access to the internal state of > RegExp objects. > The RegExp parser provides a useful parsed object model for regular > expressions. Today it offers three rendering functions: > 1) To Automaton (for query execution) > 2) To string (for machine-readable regular expressions) > 3) To StringTree (for debug purposes) > There are at least 2 other rendering functions that would be useful: > a) To "Explain" format (like the plain-English descriptions used in [regex > debugging tools|https://regex101.com/r/2DUzac/1]) > b) To Query (queries used to accelerate regex searches by providing an > approximation of the search terms and [hitting an ngram > index|https://github.com/wikimedia/search-extra/blob/master/docs/source_regex.md]) > To support these and other renderings/transformations it would be useful to > open read-only access to the fields held in RegExp objects - either through > making them public finals or offering getter access methods. This would free > the RegExp class from having to support all possible transformations. > -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[jira] [Updated] (LUCENE-9328) SortingGroupHead to reuse DocValues
[ https://issues.apache.org/jira/browse/LUCENE-9328?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mikhail Khludnev updated LUCENE-9328: - Attachment: LUCENE-9328.patch Status: Patch Available (was: Patch Available) > SortingGroupHead to reuse DocValues > --- > > Key: LUCENE-9328 > URL: https://issues.apache.org/jira/browse/LUCENE-9328 > Project: Lucene - Core > Issue Type: Improvement > Components: modules/grouping >Reporter: Mikhail Khludnev >Assignee: Mikhail Khludnev >Priority: Minor > Attachments: LUCENE-9328.patch, LUCENE-9328.patch, LUCENE-9328.patch, > LUCENE-9328.patch, LUCENE-9328.patch, LUCENE-9328.patch, LUCENE-9328.patch, > LUCENE-9328.patch, LUCENE-9328.patch, LUCENE-9328.patch, LUCENE-9328.patch, > LUCENE-9328.patch > > Time Spent: 1h 50m > Remaining Estimate: 0h > > That's why > https://issues.apache.org/jira/browse/LUCENE-7701?focusedCommentId=17084365=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-17084365 -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[GitHub] [lucene-solr] murblanc commented on a change in pull request #1510: SOLR-14473: Improve Overseer Javadoc
murblanc commented on a change in pull request #1510: URL: https://github.com/apache/lucene-solr/pull/1510#discussion_r425412381 ## File path: solr/core/src/java/org/apache/solr/cloud/ZkDistributedQueue.java ## @@ -51,9 +51,16 @@ import org.slf4j.LoggerFactory; /** - * A ZK-based distributed queue. Optimized for single-consumer, + * A ZK-based distributed queue. Optimized for single-consumer, * multiple-producer: if there are multiple consumers on the same ZK queue, - * the results should be correct but inefficient + * the results should be correct but inefficient. + * + * This implementation (with help from subclass {@link OverseerTaskQueue}) is used for the + * /overseer/collection-queue-work queue used for Collection and Config Set API calls to the Overseer. + * + * In order to enqueue a message into this queue, a {@link CreateMode#EPHEMERAL_SEQUENTIAL} response node is created Review comment: I'm not the only one not to know about these tags: ant precommit doesn't know either... This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
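The enqueue mechanism described in the javadoc hunk above (each message becomes a ZooKeeper node created with a `*_SEQUENTIAL` create mode) can be sketched in miniature. This is an in-memory stand-in, not ZooKeeper or Solr code; the `/overseer/queue/qn-` path prefix is illustrative only.

```python
# Minimal in-memory sketch of a ZooKeeper-style distributed queue: each
# enqueue creates a node whose name carries a monotonically increasing
# sequence suffix (what ZK's SEQUENTIAL create modes provide), and the
# consumer always takes the node with the lowest sequence number.
import itertools

class SequentialQueue:
    def __init__(self):
        self._seq = itertools.count()
        self._nodes = {}              # path -> payload

    def offer(self, payload):
        # Zero-padded suffix so lexicographic order == numeric order,
        # mirroring ZooKeeper's 10-digit sequence counters.
        path = "/overseer/queue/qn-%010d" % next(self._seq)
        self._nodes[path] = payload
        return path

    def poll(self):
        if not self._nodes:
            return None
        head = min(self._nodes)       # lowest sequence number = queue head
        return self._nodes.pop(head)

q = SequentialQueue()
q.offer("create-collection")
q.offer("delete-collection")
assert q.poll() == "create-collection"   # FIFO via sequence ordering
assert q.poll() == "delete-collection"
assert q.poll() is None
```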
[jira] [Commented] (SOLR-14486) Autoscaling simulation framework should stop using /clusterstate.json
[ https://issues.apache.org/jira/browse/SOLR-14486?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17107653#comment-17107653 ] Ilan Ginzburg commented on SOLR-14486: -- Thanks [~ab] for creating this. I will keep the sim changes in SOLR-12823 to a minimum (they will compile, but not necessarily have tests passing). If by then you have fixed this one, perfect; otherwise I'll get back to you to discuss options before I submit a PR there. Were you thinking of replacing the clusterstate.json version with some usage of the ZooKeeper transaction id (zxid), or do you see an implementation where individual state.json version numbers can be used? > Autoscaling simulation framework should stop using /clusterstate.json > - > > Key: SOLR-14486 > URL: https://issues.apache.org/jira/browse/SOLR-14486 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) > Components: AutoScaling >Reporter: Andrzej Bialecki >Assignee: Andrzej Bialecki >Priority: Major > > Spin-off from SOLR-12823. > In theory the simulation framework doesn't actually use this file, but > {{SimClusterStateProvider}} relies on its versioning to keep its internal > data structures in sync. This should be changed to use individual > DocCollection / state.json znodeVersion instead.
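The per-collection versioning idea raised in the question above can be sketched abstractly: instead of keying a cached cluster view off one global /clusterstate.json version, each collection's cached state is refreshed only when its own state.json znode version moves. The class and data below are illustrative assumptions, not SimClusterStateProvider code.

```python
# Sketch of per-collection version tracking: a cached view is reused
# while the collection's state.json znode version is unchanged, and
# refreshed only when that version moves.
class CollectionStateCache:
    def __init__(self):
        self._cache = {}   # collection name -> (znode_version, state)

    def get(self, name, fetch):
        """fetch() -> (znode_version, state) from the simulated source."""
        version, state = fetch()
        cached = self._cache.get(name)
        if cached is None or cached[0] != version:
            # Stale or missing: refresh from the source.
            self._cache[name] = (version, state)
        return self._cache[name][1]

cache = CollectionStateCache()
assert cache.get("c1", lambda: (1, {"replicas": 2})) == {"replicas": 2}
# Same znode version: the cached entry is reused.
assert cache.get("c1", lambda: (1, {"replicas": 999})) == {"replicas": 2}
# Version bump: the entry is refreshed.
assert cache.get("c1", lambda: (2, {"replicas": 3})) == {"replicas": 3}
```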
[jira] [Commented] (SOLR-13289) Support for BlockMax WAND
[ https://issues.apache.org/jira/browse/SOLR-13289?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17107615#comment-17107615 ] Gregg Donovan commented on SOLR-13289: -- We've been using [ExternalFileField|https://lucene.apache.org/solr/guide/8_5/working-with-external-files-and-processes.html#the-externalfilefield-type] for non-index ranking signals. Is it possible to use WAND with ExternalFileField, as is? Or would ExternalFileField need to be changed to provide max impacts per block? FeatureField could work, but ExternalFileField is quite useful for changing ranking signals without requiring a reindex. > Support for BlockMax WAND > - > > Key: SOLR-13289 > URL: https://issues.apache.org/jira/browse/SOLR-13289 > Project: Solr > Issue Type: New Feature >Reporter: Ishan Chattopadhyaya >Assignee: Tomas Eduardo Fernandez Lobbe >Priority: Major > Attachments: SOLR-13289.patch, SOLR-13289.patch > > Time Spent: 3h 40m > Remaining Estimate: 0h > > LUCENE-8135 introduced BlockMax WAND as a major speed improvement. Need to > expose this via Solr. When enabled, the numFound returned will not be exact. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
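The "max impacts per block" question above is the crux of block-max WAND: postings are split into blocks that record their maximum possible score, so whole blocks can be skipped once a competitive threshold is known. The toy below (invented data, not Lucene's impacts machinery) shows why a field type that cannot report per-block maxima cannot participate in the skipping.

```python
# Toy illustration of the block-max skipping idea (not Solr/Lucene code):
# each block carries its maximum score, and blocks whose max cannot beat
# the current threshold are skipped without scoring any document.
def collect_top_hit(blocks, threshold):
    """blocks: list of (max_score, [(doc, score), ...]).
    Returns (best_hit, docs_actually_scored)."""
    best, scored = None, 0
    for max_score, postings in blocks:
        if max_score <= threshold:
            continue  # entire block skipped: its max cannot be competitive
        for doc, score in postings:
            scored += 1
            if score > threshold and (best is None or score > best[1]):
                best = (doc, score)
    return best, scored

blocks = [
    (0.3, [(1, 0.2), (2, 0.3)]),   # skipped: block max below threshold
    (0.9, [(3, 0.5), (4, 0.9)]),
    (0.4, [(5, 0.4), (6, 0.1)]),   # skipped
]
best, scored = collect_top_hit(blocks, threshold=0.45)
assert best == (4, 0.9)
assert scored == 2   # only the middle block was evaluated
```

Without the per-block `max_score`, every document must be scored, which is the position ExternalFileField is in today.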
[GitHub] [lucene-solr] uschindler commented on a change in pull request #1477: LUCENE-9321: Port markdown task to Gradle
uschindler commented on a change in pull request #1477: URL: https://github.com/apache/lucene-solr/pull/1477#discussion_r425384992 ## File path: gradle/documentation/markdown.gradle ##
@@ -0,0 +1,190 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+import com.vladsch.flexmark.ast.Heading;
+import com.vladsch.flexmark.ext.abbreviation.AbbreviationExtension;
+import com.vladsch.flexmark.ext.attributes.AttributesExtension;
+import com.vladsch.flexmark.ext.autolink.AutolinkExtension;
+import com.vladsch.flexmark.html.HtmlRenderer;
+import com.vladsch.flexmark.parser.Parser;
+import com.vladsch.flexmark.parser.ParserEmulationProfile;
+import com.vladsch.flexmark.util.ast.Document;
+import com.vladsch.flexmark.util.data.MutableDataSet;
+import com.vladsch.flexmark.util.sequence.Escaping;
+
+buildscript {
+  repositories {
+    mavenCentral()
+  }
+
+  dependencies {
+    classpath "com.vladsch.flexmark:flexmark:${scriptDepVersions['flexmark']}"
+    classpath "com.vladsch.flexmark:flexmark-ext-abbreviation:${scriptDepVersions['flexmark']}"
+    classpath "com.vladsch.flexmark:flexmark-ext-attributes:${scriptDepVersions['flexmark']}"
+    classpath "com.vladsch.flexmark:flexmark-ext-autolink:${scriptDepVersions['flexmark']}"
+  }
+}
+
+def getListOfProjectsAsMarkdown = { prefix ->
+  def projects = allprojects.findAll{ it.path.startsWith(prefix) && it.tasks.findByName('renderSiteJavadoc') }
+    .sort(false, Comparator.comparing{ (it.name != 'core') as Boolean }
+      .thenComparing(Comparator.comparing{ (it.name == 'test-framework') as Boolean })
+      .thenComparing(Comparator.comparing{ it.path }));
+  return projects.collect{ project ->
+    def text = "**[${project.path.substring(prefix.length()).replace(':','-')}](${project.relativeDocPath}/index.html):** ${project.description}"
+    if (project.name == 'core') {
+      text = text.concat(' {style="font-size:larger; margin-bottom:.5em"}')
+    }
+    return '* ' + text;
+  }.join('\n')
+}
+
+configure(subprojects.findAll { it.path == ':lucene' || it.path == ':solr' }) {
+  task markdownToHtml(type: Copy) {
+    filteringCharset = 'UTF-8'
+    includeEmptyDirs = false
+    into project.docroot
+    rename(/\.md$/, '.html')
+    filter(MarkdownFilter)
+  }
+}
+
+configure(project(':lucene')) {
+  markdownToHtml {
+    from('.') {
+      include 'MIGRATE.md'
+      include 'JRE_VERSION_MIGRATION.md'
+      include 'SYSTEM_REQUIREMENTS.md'
+    }
+  }
+
+  task createDocumentationIndex {
+    def outputFile = file("${project.docroot}/index.html");
+    def defaultCodecFile = project(':lucene:core').file('src/java/org/apache/lucene/codecs/Codec.java')
+
+    inputs.file(defaultCodecFile)
+    outputs.file(outputFile)
+
+    doLast {
+      // static Codec defaultCodec = LOADER. lookup( "LuceneXXX" ) ;
+      def regex = ~/\bdefaultCodec\s*=\s*LOADER\s*\.\s*lookup\s*\(\s*"([^"]+)"\s*\)\s*;/
+      def matcher = regex.matcher(defaultCodecFile.getText('UTF-8'))
+      if (!matcher.find()) {
+        throw GradleException("Cannot determine default codec from file ${defaultCodecFile}")
+      }
+      def defaultCodecPackage = matcher.group(1).toLowerCase(Locale.ROOT)
+      def markdown = """
Review comment: Cool. So I will place the file next to the legacy XSL file used by Ant. Once Ant is retired, the latter can go away.
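The codec-detection regex quoted in the markdown.gradle hunk above can be exercised on its own. The sketch below re-checks it in Python; the sample source line and the codec name "Lucene84" are assumptions for illustration, not taken from the actual Codec.java.

```python
# Standalone check of the defaultCodec-extraction regex used by the
# createDocumentationIndex task (sample input is illustrative).
import re

regex = re.compile(r'\bdefaultCodec\s*=\s*LOADER\s*\.\s*lookup\s*\(\s*"([^"]+)"\s*\)\s*;')

sample = 'static Codec defaultCodec = LOADER.lookup("Lucene84");'
m = regex.search(sample)
assert m is not None
# The Groovy code lowercases with toLowerCase(Locale.ROOT); str.lower()
# is the Python analogue for this ASCII-only name.
default_codec_package = m.group(1).lower()
assert default_codec_package == "lucene84"

# Extra whitespace is tolerated, matching the \s* runs in the pattern:
assert regex.search('defaultCodec = LOADER . lookup ( "Lucene84" ) ;')
```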
[GitHub] [lucene-solr] dweiss commented on a change in pull request #1477: LUCENE-9321: Port markdown task to Gradle
dweiss commented on a change in pull request #1477: URL: https://github.com/apache/lucene-solr/pull/1477#discussion_r425375832 ## File path: gradle/documentation/markdown.gradle ## Review comment: https://docs.groovy-lang.org/latest/html/api/groovy/text/SimpleTemplateEngine.html Should work?
[jira] [Commented] (SOLR-12823) remove clusterstate.json in Lucene/Solr 8.0
[ https://issues.apache.org/jira/browse/SOLR-12823?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17107591#comment-17107591 ] Andrzej Bialecki commented on SOLR-12823: - [~murblanc] I created SOLR-14486 to fix the /clusterstate.json issue in the simulation API. > remove clusterstate.json in Lucene/Solr 8.0 > --- > > Key: SOLR-12823 > URL: https://issues.apache.org/jira/browse/SOLR-12823 > Project: Solr > Issue Type: Task >Reporter: Varun Thacker >Priority: Major > > clusterstate.json is an artifact of a pre 5.0 Solr release. We should remove > that in 8.0 > It stays empty unless you explicitly ask to create the collection with the > old "stateFormat" and there is no reason for one to create a collection with > the old stateFormat. > We should also remove the "stateFormat" argument in create collection > We should also remove MIGRATESTATEVERSION as well > > -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[jira] [Created] (SOLR-14486) Autoscaling simulation framework should stop using /clusterstate.json
Andrzej Bialecki created SOLR-14486: --- Summary: Autoscaling simulation framework should stop using /clusterstate.json Key: SOLR-14486 URL: https://issues.apache.org/jira/browse/SOLR-14486 Project: Solr Issue Type: Improvement Security Level: Public (Default Security Level. Issues are Public) Components: AutoScaling Reporter: Andrzej Bialecki Assignee: Andrzej Bialecki Spin-off from SOLR-12823. In theory the simulation framework doesn't actually use this file, but {{SimClusterStateProvider}} relies on its versioning to keep its internal data structures in sync. This should be changed to use individual DocCollection / state.json znodeVersion instead. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[GitHub] [lucene-solr] dweiss commented on pull request #1477: LUCENE-9321: Port markdown task to Gradle
dweiss commented on pull request #1477: URL: https://github.com/apache/lucene-solr/pull/1477#issuecomment-628834560 I don't mind either way. Can be per-project gradle file. Seems to fit there nicely.
[GitHub] [lucene-solr] uschindler commented on a change in pull request #1477: LUCENE-9321: Port markdown task to Gradle
uschindler commented on a change in pull request #1477: URL: https://github.com/apache/lucene-solr/pull/1477#discussion_r425371129 ## File path: gradle/documentation/markdown.gradle ## Review comment: That was my question. Can we include it in a way so the groovy variables are parsed?
[jira] [Comment Edited] (LUCENE-9321) Port documentation task to gradle
[ https://issues.apache.org/jira/browse/LUCENE-9321?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17107579#comment-17107579 ] Uwe Schindler edited comment on LUCENE-9321 at 5/14/20, 7:11 PM: - See markdown task. It extracts first H1 heading as title and adds a {{<title>}}. Just add a link in the same way: https://github.com/apache/lucene-solr/pull/1477/files#diff-c8b1706c3090e3bda0a1ea9c915f0c5aR183 was (Author: thetaphi): bq. See markdown task. It extracts first H1 heading as title and adds a {{<title>}}. Just add a link in the same way: https://github.com/apache/lucene-solr/pull/1477/files#diff-c8b1706c3090e3bda0a1ea9c915f0c5aR183 > Port documentation task to gradle > - > > Key: LUCENE-9321 > URL: https://issues.apache.org/jira/browse/LUCENE-9321 > Project: Lucene - Core > Issue Type: Sub-task > Components: general/build >Reporter: Tomoko Uchida >Assignee: Uwe Schindler >Priority: Major > Fix For: master (9.0) > > Attachments: screenshot-1.png > > Time Spent: 5h 50m > Remaining Estimate: 0h > > This is a placeholder issue for porting ant "documentation" task to gradle. > The generated documents should be able to be published on lucene.apache.org > web site on "as-is" basis.
[jira] [Commented] (LUCENE-9321) Port documentation task to gradle
[ https://issues.apache.org/jira/browse/LUCENE-9321?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17107579#comment-17107579 ] Uwe Schindler commented on LUCENE-9321: --- bq. See markdown task. It extracts first H1 heading as title and adds a {{<title>}}. Just add a link in the same way: https://github.com/apache/lucene-solr/pull/1477/files#diff-c8b1706c3090e3bda0a1ea9c915f0c5aR183 > Port documentation task to gradle > - > > Key: LUCENE-9321 > URL: https://issues.apache.org/jira/browse/LUCENE-9321 > Project: Lucene - Core > Issue Type: Sub-task > Components: general/build >Reporter: Tomoko Uchida >Assignee: Uwe Schindler >Priority: Major > Fix For: master (9.0) > > Attachments: screenshot-1.png > > Time Spent: 5h 50m > Remaining Estimate: 0h > > This is a placeholder issue for porting ant "documentation" task to gradle. > The generated documents should be able to be published on lucene.apache.org > web site on "as-is" basis.
[GitHub] [lucene-solr] uschindler commented on pull request #1477: LUCENE-9321: Port markdown task to Gradle
uschindler commented on pull request #1477: URL: https://github.com/apache/lucene-solr/pull/1477#issuecomment-628833244 > I guess this description will come in handy for maven artifact (pom) as well. Yes, there it's public and searchable in e.g., search.maven.org
[GitHub] [lucene-solr] uschindler commented on pull request #1477: LUCENE-9321: Port markdown task to Gradle
uschindler commented on pull request #1477: URL: https://github.com/apache/lucene-solr/pull/1477#issuecomment-628832740 The title and metadata of a project should IMHO be part of the project's gradle file. You just add a `description = "..."` next to the dependencies, and you're done. Is this OK with you, or do you want them central? In Ant this information is part of the root element of the `build.xml`.
[GitHub] [lucene-solr] madrob commented on a change in pull request #1509: SOLR-10810: Examine precommit lint WARNINGs in non-test code
madrob commented on a change in pull request #1509: URL: https://github.com/apache/lucene-solr/pull/1509#discussion_r425360430 ## File path: solr/core/src/java/org/apache/solr/handler/AnalysisRequestHandlerBase.java ## @@ -82,7 +82,7 @@ public void handleRequestBody(SolrQueryRequest req, SolrQueryResponse rsp) throw * * @throws Exception When analysis fails. */ - protected abstract NamedList doAnalysis(SolrQueryRequest req) throws Exception; + protected abstract NamedList doAnalysis(SolrQueryRequest req) throws Exception; Review comment: I think best practice is to use `NamedList<Object>` as the return type, and `NamedList<?>` as the argument type in methods, but I can't find a reference for it right now.
[GitHub] [lucene-solr] dweiss commented on pull request #1477: LUCENE-9321: Port markdown task to Gradle
dweiss commented on pull request #1477: URL: https://github.com/apache/lucene-solr/pull/1477#issuecomment-628820984 bq. Should I simply add a project.description to all .gradle files, so it's consistent with Ant? Hmm... What do you mean by adding project.description? We don't need to have a per-project build.gradle file (for example if there are no dependencies). If you're thinking of a project property to hold the description then this could be configured from a single file too -- whatever is more convenient, really. I guess this description will come in handy for the maven artifact (pom) as well.
[GitHub] [lucene-solr] dweiss commented on a change in pull request #1477: LUCENE-9321: Port markdown task to Gradle
dweiss commented on a change in pull request #1477: URL: https://github.com/apache/lucene-solr/pull/1477#discussion_r425355108 ## File path: gradle/documentation/markdown.gradle ## @@ -0,0 +1,190 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +import com.vladsch.flexmark.ast.Heading; +import com.vladsch.flexmark.ext.abbreviation.AbbreviationExtension; +import com.vladsch.flexmark.ext.attributes.AttributesExtension; +import com.vladsch.flexmark.ext.autolink.AutolinkExtension; +import com.vladsch.flexmark.html.HtmlRenderer; +import com.vladsch.flexmark.parser.Parser; +import com.vladsch.flexmark.parser.ParserEmulationProfile; +import com.vladsch.flexmark.util.ast.Document; +import com.vladsch.flexmark.util.data.MutableDataSet; +import com.vladsch.flexmark.util.sequence.Escaping; + +buildscript { + repositories { +mavenCentral() + } + + dependencies { +classpath "com.vladsch.flexmark:flexmark:${scriptDepVersions['flexmark']}" +classpath "com.vladsch.flexmark:flexmark-ext-abbreviation:${scriptDepVersions['flexmark']}" +classpath "com.vladsch.flexmark:flexmark-ext-attributes:${scriptDepVersions['flexmark']}" +classpath "com.vladsch.flexmark:flexmark-ext-autolink:${scriptDepVersions['flexmark']}" + } +} + +def getListOfProjectsAsMarkdown = { prefix -> + def projects = allprojects.findAll{ it.path.startsWith(prefix) && it.tasks.findByName('renderSiteJavadoc') } +.sort(false, Comparator.comparing{ (it.name != 'core') as Boolean } + .thenComparing(Comparator.comparing{ (it.name == 'test-framework') as Boolean }) + .thenComparing(Comparator.comparing{ it.path })); + return projects.collect{ project -> +def text = "**[${project.path.substring(prefix.length()).replace(':','-')}](${project.relativeDocPath}/index.html):** ${project.description}" +if (project.name == 'core') { + text = text.concat(' {style="font-size:larger; margin-bottom:.5em"}') +} +return '* ' + text; + }.join('\n') +} + +configure(subprojects.findAll { it.path == ':lucene' || it.path == ':solr' }) { + task markdownToHtml(type: Copy) { +filteringCharset = 'UTF-8' +includeEmptyDirs = false +into project.docroot +rename(/\.md$/, '.html') +filter(MarkdownFilter) + } +} + +configure(project(':lucene')) { + markdownToHtml { +from('.') { + include 
'MIGRATE.md' + include 'JRE_VERSION_MIGRATION.md' + include 'SYSTEM_REQUIREMENTS.md' +} + } + + task createDocumentationIndex { +def outputFile = file("${project.docroot}/index.html"); +def defaultCodecFile = project(':lucene:core').file('src/java/org/apache/lucene/codecs/Codec.java') + +inputs.file(defaultCodecFile) +outputs.file(outputFile) + +doLast { + // static Codec defaultCodec = LOADER. lookup( "LuceneXXX" ) ; + def regex = ~/\bdefaultCodec\s*=\s*LOADER\s*\.\s*lookup\s*\(\s*"([^"]+)"\s*\)\s*;/ + def matcher = regex.matcher(defaultCodecFile.getText('UTF-8')) + if (!matcher.find()) { +throw GradleException("Cannot determine default codec from file ${defaultCodecFile}") + } + def defaultCodecPackage = matcher.group(1).toLowerCase(Locale.ROOT) + def markdown = """ Review comment: Should we move it to an external file? Then we can have a proper suffix and editor support.
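The doLast block quoted above derives the default codec name by running a regex over Codec.java; the same extraction can be sketched in plain Java (the sample source line below is invented for illustration):

```java
import java.util.Locale;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class DefaultCodecExtractor {
    // Same pattern as the Gradle script: find defaultCodec = LOADER.lookup("LuceneXXX");
    static final Pattern CODEC = Pattern.compile(
        "\\bdefaultCodec\\s*=\\s*LOADER\\s*\\.\\s*lookup\\s*\\(\\s*\"([^\"]+)\"\\s*\\)\\s*;");

    static String extract(String source) {
        Matcher m = CODEC.matcher(source);
        if (!m.find()) {
            throw new IllegalStateException("Cannot determine default codec");
        }
        return m.group(1).toLowerCase(Locale.ROOT);
    }

    public static void main(String[] args) {
        // Hypothetical line as it might appear in Codec.java:
        String sample = "static Codec defaultCodec = LOADER.lookup(\"Lucene84\");";
        System.out.println(extract(sample)); // prints lucene84
    }
}
```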
[jira] [Commented] (SOLR-13132) Improve JSON "terms" facet performance when sorted by relatedness
[ https://issues.apache.org/jira/browse/SOLR-13132?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17107549#comment-17107549 ] Chris M. Hostetter commented on SOLR-13132: --- {quote}I did the work on this branch rather than at SOLR-14467 because the testing built out for this issue (SOLR-13132) was helpful, and it was more generally helpful to compare consistency... {quote} FWIW: I'm probably putting both of these issues on my backburner to focus on SOLR-14477 since that's not only an existing bug but compared to "allBuckets" seems like more of a "real world" situation someone _might_ encounter using {{relatedness()}} ... and as part of that issue I'm going to try to refactor some of the test improvements we've been doing here into master (both in terms of "compare the output of diff processors" but also to make it less cumbersome to add more randomized options/facets in TestCloudJSONFacetSKG).
> Over high-cardinality fields, the overhead of per-term docSet creation and > set intersection operations increases request latency to the point where > relatedness sort may not be usable in practice (for my use case, even after > applying the patch for SOLR-13108, for a field with ~220k unique terms per > core, QTime for high-cardinality domain docSets were, e.g.: cardinality > 1816684=9000ms, cardinality 5032902=18000ms). > The attached patch brings the above example QTimes down to a manageable > ~300ms and ~250ms respectively. The approach calculates uninverted facet > counts over domain base, foreground, and background docSets in parallel in a > single pass. This allows us to take advantage of the efficiencies built into > the standard uninverted {{FacetFieldProcessorByArray[DV|UIF]}}, and avoids > the per-term docSet creation and set intersection overhead. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
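The "single pass" approach the description ends with can be sketched as a toy model (this is an illustrative sketch, not the actual FacetFieldProcessor code): walk the docs once, updating counters for the base, foreground, and background docsets together, so no per-term docset is ever materialized or intersected:

```java
import java.util.BitSet;

public class SweepCountSketch {
    /**
     * Toy single-sweep counter: termOfDoc[doc] gives each doc's term ordinal.
     * One pass over all docs updates counts for the base, foreground, and
     * background docsets simultaneously -- no per-term docset creation or
     * set intersection is needed.
     */
    static long[][] sweepCounts(int[] termOfDoc, int numTerms,
                                BitSet base, BitSet fg, BitSet bg) {
        long[][] counts = new long[3][numTerms]; // rows: base, fg, bg
        for (int doc = 0; doc < termOfDoc.length; doc++) {
            int ord = termOfDoc[doc];
            if (base.get(doc)) counts[0][ord]++;
            if (fg.get(doc)) counts[1][ord]++;
            if (bg.get(doc)) counts[2][ord]++;
        }
        return counts;
    }

    public static void main(String[] args) {
        int[] termOfDoc = {0, 1, 0, 2, 1};
        BitSet base = new BitSet(); base.set(0, 5);      // all 5 docs
        BitSet fg = new BitSet(); fg.set(0); fg.set(3);  // docs 0 and 3
        long[][] c = sweepCounts(termOfDoc, 3, base, fg, base);
        System.out.println(c[0][0] + " " + c[1][0]);     // prints 2 1
    }
}
```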
[GitHub] [lucene-solr] murblanc commented on a change in pull request #1510: SOLR-14473: Improve Overseer Javadoc
murblanc commented on a change in pull request #1510: URL: https://github.com/apache/lucene-solr/pull/1510#discussion_r425337695 ## File path: solr/solrj/src/java/org/apache/solr/common/params/CollectionParams.java ## @@ -70,6 +70,19 @@ public boolean isHigherOrEqual(LockLevel that) { } } + /** + * (Mostly) Collection API actions that can be sent by nodes to the Overseer over the /overseer/collection-queue-work + * ZooKeeper queue. + * + * Some of these actions are also used over the cluster state update queue at /overseer/queue, and really Review comment: Now that I try to change it, I remember I tried initially but there's no class visibility from here to Overseer. Sometimes I wish Javadocs were more permissive: we are not linkers, we are humans.
[GitHub] [lucene-solr] murblanc commented on a change in pull request #1510: SOLR-14473: Improve Overseer Javadoc
murblanc commented on a change in pull request #1510: URL: https://github.com/apache/lucene-solr/pull/1510#discussion_r425336000 ## File path: solr/core/src/java/org/apache/solr/cloud/ZkDistributedQueue.java ## @@ -51,9 +51,16 @@ import org.slf4j.LoggerFactory; /** - * A ZK-based distributed queue. Optimized for single-consumer, + * A ZK-based distributed queue. Optimized for single-consumer, * multiple-producer: if there are multiple consumers on the same ZK queue, - * the results should be correct but inefficient + * the results should be correct but inefficient. + * + * This implementation (with help from subclass {@link OverseerTaskQueue}) is used for the Review comment: There's already a link to this class from OverseerTaskQueue's javadoc. Leaving it as is.
[jira] [Commented] (LUCENE-9321) Port documentation task to gradle
[ https://issues.apache.org/jira/browse/LUCENE-9321?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17107528#comment-17107528 ] Tomoko Uchida commented on LUCENE-9321: --- It's a bit off-topic... can we apply a minimal CSS to the index html (to adjust margins, font weights, and so on)? Would it be possible to insert a tag when generating the html file? > Port documentation task to gradle > - > > Key: LUCENE-9321 > URL: https://issues.apache.org/jira/browse/LUCENE-9321 > Project: Lucene - Core > Issue Type: Sub-task > Components: general/build >Reporter: Tomoko Uchida >Assignee: Uwe Schindler >Priority: Major > Fix For: master (9.0) > > Attachments: screenshot-1.png > > Time Spent: 5h 10m > Remaining Estimate: 0h > > This is a placeholder issue for porting ant "documentation" task to gradle. > The generated documents should be able to be published on lucene.apache.org > web site on "as-is" basis.
[jira] [Commented] (SOLR-14467) inconsistent server errors combining relatedness() with allBuckets:true
[ https://issues.apache.org/jira/browse/SOLR-14467?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17107525#comment-17107525 ] Chris M. Hostetter commented on SOLR-14467: --- {quote}I noticed that the test patch attached here only removes the TestCloudJSONFacetSKG.java file from master, and doesn't add any tests? {quote} Hmm, yeah - not sure how I botched that .. IIRC my "trivial patch to TestCloudJSONFacetSKG" to demonstrate the various errors was simply adding a hardcoded {{allBuckets:true,}} to the JSON output string in the {{TermFacet}} class. {quote}My initial intuition is that having stats for allBuckets represent the union across all buckets (whether returned or not) {quote} That is the only interpretation I have ever had/imagined for how allBuckets is meant to work, and jibes with how it behaves for count and "simple" stats... {noformat} $ bin/solr -e techproducts ... $ curl http://localhost:8983/solr/techproducts/query -d 'q=*:*&rows=0&indent=true&json.facet={ "categories": { "type": "terms", "field": "cat", "limit": 1, allBuckets: true, facet : { sum : "sum(price)" } } }' { "response":{"numFound":32,"start":0,"numFoundExact":true,"docs":[] }, "facets":{ "count":32, "categories":{ "allBuckets":{ "count":37, "sum":8563.560066223145}, "buckets":[{ "val":"electronics", "count":12, "sum":2772.3200187683105}]}}} {noformat} {quote}...but I think that would mean preventing any deferral of stats when allBuckets==true, which is not currently done. But I think that approach (if chosen) would be pretty straightforward: if allBuckets==true, instead of creating any otherAccs, simply add all accs to collectAcc using MultiAcc? {quote} Hmmm, I'm really not sure either way – but IIUC the whole point of SpecialSlotAcc is that it *does/can* do collection against all of the {{otherAccs}} using the {{otherAccsSlot}}. I suspect the key bug here is just in how the {{slotContext}} works when dealing with these "special" slots ...
SpecialSlotAcc may just need to ensure it passes down its own slot context that ignores the slot# it gets passed and instead wraps/hides usage of SpecialSlotAcc.otherAccsSlot/SpecialSlotAcc.collectAccSlot as needed? (which I gather may be along the lines of what you've recently tried adding in your recent SOLR-13132 commits? ... haven't dug in there yet). If that's really the underlying problem then it won't really matter if MultiAcc is used every time we have an {{allBuckets:true}} situation, because we'll still need that "slot mapping" logic for the special slot. Ultimately though, I still don't have a good conceptual grasp of what the "correct" {{relatedness()}} stats _should be_ for the {{allBuckets}} situation – relatedness values are fundamentally about the "domain" of the entire bucket relative to the foreground/background sets, so it doesn't really make sense to try and "merge" the values from multiple buckets. Ultimately it may not matter _what_ value we compute for {{relatedness()}} in an {{allBuckets:true}} bucket (my gut says we should just use the {{base}} DocSet/domain for the entire facet as the slotContext in SpecialSlotAcc, ... but I can imagine there might be other interpretations) as long as we don't "fail" with an error if someone tries.
> I haven't found a trivial way to manually reproduce this, but I have been able > to trigger the failures with a trivial patch to {{TestCloudJSONFacetSKG}} > which I will attach. > Based on the nature of the failures it looks like it may have something to do > with multiple segments of different sizes, and/or resizing the SlotAccs? > The relatedness() function doesn't have much (any?) existing tests in place > that leverage "allBuckets" so this is probably a bug that has always existed > -- it's possible it may be excessively cumbersome to fix and we might > need/want to just document that incompatibility and add some code to try and > detect if the user combines these options and if so fail with a 400 error?
[GitHub] [lucene-solr] murblanc commented on a change in pull request #1510: SOLR-14473: Improve Overseer Javadoc
murblanc commented on a change in pull request #1510: URL: https://github.com/apache/lucene-solr/pull/1510#discussion_r425332349 ## File path: solr/core/src/java/org/apache/solr/cloud/ZkDistributedQueue.java ## @@ -51,9 +51,16 @@ import org.slf4j.LoggerFactory; /** - * A ZK-based distributed queue. Optimized for single-consumer, + * A ZK-based distributed queue. Optimized for single-consumer, * multiple-producer: if there are multiple consumers on the same ZK queue, - * the results should be correct but inefficient + * the results should be correct but inefficient. + * + * This implementation (with help from subclass {@link OverseerTaskQueue}) is used for the + * /overseer/collection-queue-work queue used for Collection and Config Set API calls to the Overseer. + * + * In order to enqueue a message into this queue, a {@link CreateMode#EPHEMERAL_SEQUENTIAL} response node is created Review comment: Thanks, didn't know about these tags.
[jira] [Commented] (SOLR-14485) Fix or suppress 11 resource leak warnings in apache/solr/cloud
[ https://issues.apache.org/jira/browse/SOLR-14485?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17107467#comment-17107467 ] Lucene/Solr QA commented on SOLR-14485: --- | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 3 new or modified test files. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 1s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 4s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 4s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} Release audit (RAT) {color} | {color:green} 1m 4s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} Check forbidden APIs {color} | {color:green} 1m 4s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} Validate source patterns {color} | {color:green} 1m 4s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 46m 35s{color} | {color:green} core in the patch passed. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 50m 32s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | JIRA Issue | SOLR-14485 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/13002958/SOLR-14485-01.patch | | Optional Tests | compile javac unit ratsources checkforbiddenapis validatesourcepatterns | | uname | Linux lucene1-us-west 4.15.0-54-generic #58-Ubuntu SMP Mon Jun 24 10:55:24 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | ant | | Personality | /home/jenkins/jenkins-slave/workspace/PreCommit-SOLR-Build/sourcedir/dev-tools/test-patch/lucene-solr-yetus-personality.sh | | git revision | master / 08360a2997f | | ant | version: Apache Ant(TM) version 1.10.5 compiled on March 28 2019 | | Default Java | LTS | | Test Results | https://builds.apache.org/job/PreCommit-SOLR-Build/748/testReport/ | | modules | C: solr/core U: solr/core | | Console output | https://builds.apache.org/job/PreCommit-SOLR-Build/748/console | | Powered by | Apache Yetus 0.7.0 http://yetus.apache.org | This message was automatically generated. > Fix or suppress 11 resource leak warnings in apache/solr/cloud > -- > > Key: SOLR-14485 > URL: https://issues.apache.org/jira/browse/SOLR-14485 > Project: Solr > Issue Type: Sub-task >Reporter: Andras Salamon >Assignee: Erick Erickson >Priority: Minor > Attachments: SOLR-14485-01.patch, SOLR-14485-01.patch > > > There are 11 warnings in apache/solr/cloud: > {noformat} > [ecj-lint] 2. WARNING in > /Users/andrassalamon/src/lucene-solr-upstream/solr/core/src/java/org/apache/solr/cloud/RecoveryStrategy.java > (at line 644) > [ecj-lint] PeerSyncWithLeader peerSyncWithLeader = new > PeerSyncWithLeader(core, > [ecj-lint] ^^ > [ecj-lint] Resource leak: 'peerSyncWithLeader' is never closed > -- > [ecj-lint] 3. 
WARNING in > /Users/andrassalamon/src/lucene-solr-upstream/solr/core/src/java/org/apache/solr/cloud/SyncStrategy.java > (at line 182) > [ecj-lint] PeerSync peerSync = new PeerSync(core, syncWith, > core.getUpdateHandler().getUpdateLog().getNumRecordsToKeep(), true, > peerSyncOnlyWithActive, false); > [ecj-lint] > [ecj-lint] Resource leak: 'peerSync' is never closed > -- > [ecj-lint] 4. WARNING in > /Users/andrassalamon/src/lucene-solr-upstream/solr/core/src/java/org/apache/solr/cloud/autoscaling/sim/SimCloudManager.java > (at line 793) > [ecj-lint] throw new UnsupportedOperationException("must add at least 1 > node first"); > [ecj-lint] > ^^ > [ecj-lint] Resource leak: 'queryRequest' is not closed at this location > -- > [ecj-lint] 5. WARNING in > /Users/andrassalamon/src/lucene-solr-upstream/solr/core/src/java/org/apache/solr/cloud/autoscaling/sim/SimCloudManager.java > (at line 799) > [ecj-lint] throw new UnsupportedOperationException("must add at least 1 > node first"); > [ecj-lint] > ^^ > [ecj-lint] Resource leak: 'queryRequest' is not closed at this
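The ecj-lint warnings quoted above are typically resolved either by closing the resource with try-with-resources or, where the resource is intentionally left open, by suppressing the warning; a minimal sketch with a hypothetical Resource class (not Solr's PeerSync):

```java
public class ResourceLeakSketch {
    // Hypothetical closeable standing in for PeerSync, SnapshotCloudManager, etc.
    static class Resource implements AutoCloseable {
        boolean closed = false;
        @Override public void close() { closed = true; }
    }

    static Resource lastUsed;

    static void useResource() {
        // try-with-resources guarantees close() runs even on exceptions,
        // which silences the "Resource leak: ... is never closed" warning.
        try (Resource r = new Resource()) {
            lastUsed = r;
        }
    }

    public static void main(String[] args) {
        useResource();
        System.out.println(lastUsed.closed); // prints true
    }
}
```

Where the resource genuinely must stay open past the method, a `@SuppressWarnings("resource")` annotation with a short comment is the usual alternative.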
[GitHub] [lucene-solr] uschindler commented on a change in pull request #1477: LUCENE-9321: Port markdown task to Gradle
uschindler commented on a change in pull request #1477: URL: https://github.com/apache/lucene-solr/pull/1477#discussion_r425286729 ## File path: gradle/documentation/documentation.gradle ## @@ -34,4 +36,11 @@ configure(subprojects.findAll { it.path == ':lucene' || it.path == ':solr' }) { ext { docroot = "${project.buildDir}/documentation" } + + task copyDocumentationAssets(type: Copy) { +includeEmptyDirs = false +from('site/html') // lucene Review comment: I did this for the main task. At this place we can also do it, but that's less to maintain.
[jira] [Commented] (SOLR-14426) forbidden api error during precommit DateMathFunction
[ https://issues.apache.org/jira/browse/SOLR-14426?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17107452#comment-17107452 ] Houston Putman commented on SOLR-14426: --- I agree that static inner classes are a good fix for the analytics contrib changes, since, if my memory is correct, those should only be used within the same files where the classes are defined. > forbidden api error during precommit DateMathFunction > - > > Key: SOLR-14426 > URL: https://issues.apache.org/jira/browse/SOLR-14426 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: Build >Reporter: Mike Drob >Assignee: Mike Drob >Priority: Major > Fix For: master (9.0) > > Time Spent: 50m > Remaining Estimate: 0h > > When running `./gradlew precommit` I'll occasionally see > {code} > * What went wrong: > Execution failed for task ':solr:contrib:analytics:forbiddenApisMain'. > > de.thetaphi.forbiddenapis.ForbiddenApiException: Check for forbidden API > > calls failed while scanning class > > 'org.apache.solr.analytics.function.mapping.DateMathFunction' > > (DateMathFunction.java): java.lang.ClassNotFoundException: > > org.apache.solr.analytics.function.mapping.DateMathValueFunction (while > > looking up details about referenced class > > 'org.apache.solr.analytics.function.mapping.DateMathValueFunction') > {code} > `./gradlew clean` fixes this, but I don't understand what or why this > happens. Feels like a gradle issue?
[jira] [Created] (LUCENE-9371) Make RegExp internal state more visible to support more rendering formats
Mark Harwood created LUCENE-9371: Summary: Make RegExp internal state more visible to support more rendering formats Key: LUCENE-9371 URL: https://issues.apache.org/jira/browse/LUCENE-9371 Project: Lucene - Core Issue Type: Improvement Components: core/search Reporter: Mark Harwood Assignee: Mark Harwood This is a proposal to open up read-only access to the internal state of RegExp objects. The RegExp parser provides a useful parsed object model for regular expressions. Today it offers three rendering functions: 1) To Automaton (for query execution) 2) To string (for machine-readable regular expressions) 3) To StringTree (for debug purposes) There are at least 2 other rendering functions that would be useful: a) To "Explain" format (like the plain-English descriptions used in [regex debugging tools|https://regex101.com/r/2DUzac/1]) b) To Query (queries used to accelerate regex searches by providing an approximation of the search terms and [hitting an ngram index|https://github.com/wikimedia/search-extra/blob/master/docs/source_regex.md]) To support these and other renderings/transformations it would be useful to open read-only access to the fields held in RegExp objects - either by making them public final fields or by offering getter access methods. This would free the RegExp class from having to support all possible transformations.
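The "Explain" rendering described in the proposal becomes possible once the parsed tree is readable from outside; a toy model (a hypothetical Node class, not Lucene's actual RegExp internals) shows how an external renderer needs only read-level access to the tree's fields:

```java
import java.util.List;

public class ExplainSketch {
    // Hypothetical read-only regex AST node, standing in for RegExp's internal state.
    static class Node {
        final String kind;        // e.g. "CONCAT", "CHAR", "STAR"
        final String text;        // literal text for CHAR nodes
        final List<Node> children;
        Node(String kind, String text, List<Node> children) {
            this.kind = kind; this.text = text; this.children = children;
        }
    }

    // External renderer: needs only read access to the fields, so the AST
    // class itself does not have to know about every rendering format.
    static String explain(Node n) {
        switch (n.kind) {
            case "CHAR":   return "the character '" + n.text + "'";
            case "STAR":   return "zero or more of (" + explain(n.children.get(0)) + ")";
            case "CONCAT": return explain(n.children.get(0)) + ", then " + explain(n.children.get(1));
            default:       return "<unknown>";
        }
    }

    public static void main(String[] args) {
        // Models the regex a*b
        Node re = new Node("CONCAT", null, List.of(
            new Node("STAR", null, List.of(new Node("CHAR", "a", List.of()))),
            new Node("CHAR", "b", List.of())));
        System.out.println(explain(re));
        // prints: zero or more of (the character 'a'), then the character 'b'
    }
}
```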
[GitHub] [lucene-solr] uschindler commented on pull request #1477: LUCENE-9321: Port markdown task to Gradle
uschindler commented on pull request #1477: URL: https://github.com/apache/lucene-solr/pull/1477#issuecomment-628742753 Hi, I added a first implementation of `index.html`: 8dca5bb (Lucene only) It works quite well: - I added a method that collects all sub-projects with a specific path prefix (`:lucene:`) that have javadocs - The list is then formatted as markdown (identical to the XSL). The only missing thing is the project description (we have it in Ant, but it's missing in Gradle) - The top part of the documentation was also converted to Markdown and is using project variables with version numbers and the default codec - default codec is extracted like with ant, just simpler - inputs/outputs of the task are trivial Open questions: - Should I simply add a `project.description` to all `.gradle` files, so it's consistent with Ant? - Where to place the 'templated' markdown source code. The XSL is currently a separate file. But as markdown contains Groovy `${...}` I would like to leave it in the gradle file. Here is how it looks: ![image](https://user-images.githubusercontent.com/1005388/81959536-e5111800-960f-11ea-8522-31e8f9cc662b.png)
[jira] [Commented] (SOLR-14485) Fix or suppress 11 resource leak warnings in apache/solr/cloud
[ https://issues.apache.org/jira/browse/SOLR-14485?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17107385#comment-17107385 ] Andras Salamon commented on SOLR-14485: --- Uploaded a new patch (using the same filename) > Fix or suppress 11 resource leak warnings in apache/solr/cloud > -- > > Key: SOLR-14485 > URL: https://issues.apache.org/jira/browse/SOLR-14485 > Project: Solr > Issue Type: Sub-task >Reporter: Andras Salamon >Assignee: Erick Erickson >Priority: Minor > Attachments: SOLR-14485-01.patch, SOLR-14485-01.patch > > > There are 11 warnings in apache/solr/cloud: > {noformat} > [ecj-lint] 2. WARNING in > /Users/andrassalamon/src/lucene-solr-upstream/solr/core/src/java/org/apache/solr/cloud/RecoveryStrategy.java > (at line 644) > [ecj-lint] PeerSyncWithLeader peerSyncWithLeader = new > PeerSyncWithLeader(core, > [ecj-lint] ^^ > [ecj-lint] Resource leak: 'peerSyncWithLeader' is never closed > -- > [ecj-lint] 3. WARNING in > /Users/andrassalamon/src/lucene-solr-upstream/solr/core/src/java/org/apache/solr/cloud/SyncStrategy.java > (at line 182) > [ecj-lint] PeerSync peerSync = new PeerSync(core, syncWith, > core.getUpdateHandler().getUpdateLog().getNumRecordsToKeep(), true, > peerSyncOnlyWithActive, false); > [ecj-lint] > [ecj-lint] Resource leak: 'peerSync' is never closed > -- > [ecj-lint] 4. WARNING in > /Users/andrassalamon/src/lucene-solr-upstream/solr/core/src/java/org/apache/solr/cloud/autoscaling/sim/SimCloudManager.java > (at line 793) > [ecj-lint] throw new UnsupportedOperationException("must add at least 1 > node first"); > [ecj-lint] > ^^ > [ecj-lint] Resource leak: 'queryRequest' is not closed at this location > -- > [ecj-lint] 5. 
WARNING in > /Users/andrassalamon/src/lucene-solr-upstream/solr/core/src/java/org/apache/solr/cloud/autoscaling/sim/SimCloudManager.java > (at line 799) > [ecj-lint] throw new UnsupportedOperationException("must add at least 1 > node first"); > [ecj-lint] > ^^ > [ecj-lint] Resource leak: 'queryRequest' is not closed at this location > -- > [ecj-lint] 6. WARNING in > /Users/andrassalamon/src/lucene-solr-upstream/solr/core/src/java/org/apache/solr/cloud/autoscaling/sim/SimScenario.java > (at line 408) > [ecj-lint] SnapshotCloudManager snapshotCloudManager = new > SnapshotCloudManager(scenario.cluster, null); > [ecj-lint] > [ecj-lint] Resource leak: 'snapshotCloudManager' is never closed > -- > [ecj-lint] 7. WARNING in > /Users/andrassalamon/src/lucene-solr-upstream/solr/core/src/java/org/apache/solr/cloud/autoscaling/sim/SimScenario.java > (at line 743) > [ecj-lint] throw new IOException("currently only one listener can be set > per trigger. Trigger name: " + trigger); > [ecj-lint] > ^^ > [ecj-lint] Resource leak: 'listener' is not closed at this location > -- > [ecj-lint] 8. WARNING in > /Users/andrassalamon/src/lucene-solr-upstream/solr/core/src/java/org/apache/solr/cloud/autoscaling/sim/SimScenario.java > (at line 952) > [ecj-lint] SnapshotCloudManager snapshotCloudManager = new > SnapshotCloudManager(scenario.cluster, null); > [ecj-lint] > [ecj-lint] Resource leak: 'snapshotCloudManager' is never closed > -- > [ecj-lint] 9. WARNING in > /Users/andrassalamon/src/lucene-solr-upstream/solr/core/src/java/org/apache/solr/cloud/autoscaling/sim/SimScenario.java > (at line 991) > [ecj-lint] SimScenario scenario = new SimScenario(); > [ecj-lint] > [ecj-lint] Resource leak: 'scenario' is never closed > -- > [ecj-lint] 1. 
WARNING in > /Users/andrassalamon/src/lucene-solr-upstream/solr/core/src/test/org/apache/solr/cloud/ChaosMonkeyShardSplitTest.java > (at line 264) > [ecj-lint] Overseer overseer = new Overseer((HttpShardHandler) new > HttpShardHandlerFactory().getShardHandler(), updateShardHandler, > "/admin/cores", > [ecj-lint] > ^ > [ecj-lint] Resource leak: '' is never closed > -- > [ecj-lint] 2. WARNING in > /Users/andrassalamon/src/lucene-solr-upstream/solr/core/src/test/org/apache/solr/cloud/ZkNodePropsTest.java > (at line 48) > [ecj-lint] new JavaBinCodec().marshal(zkProps.getProperties(), baos); > [ecj-lint] ^^ > [ecj-lint] Resource leak: '' is never closed > -- > [ecj-lint] 3. WARNING in >
[jira] [Updated] (SOLR-14485) Fix or suppress 11 resource leak warnings in apache/solr/cloud
[ https://issues.apache.org/jira/browse/SOLR-14485?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andras Salamon updated SOLR-14485: -- Attachment: SOLR-14485-01.patch > Fix or suppress 11 resource leak warnings in apache/solr/cloud
[jira] [Commented] (SOLR-14478) Allow the diff Stream Evaluator to operate on the rows of a matrix
[ https://issues.apache.org/jira/browse/SOLR-14478?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17107384#comment-17107384 ] ASF subversion and git services commented on SOLR-14478: Commit afda69c01706073a581680808d99b5fa4951c13a in lucene-solr's branch refs/heads/branch_8x from Joel Bernstein [ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=afda69c ] SOLR-14407, SOLR-14478: Update CHANGES.txt > Allow the diff Stream Evaluator to operate on the rows of a matrix > -- > > Key: SOLR-14478 > URL: https://issues.apache.org/jira/browse/SOLR-14478 > Project: Solr > Issue Type: New Feature > Security Level: Public(Default Security Level. Issues are Public) > Components: streaming expressions >Reporter: Joel Bernstein >Assignee: Joel Bernstein >Priority: Major > Fix For: 8.6 > > Attachments: SOLR-14478.patch, Screen Shot 2020-05-14 at 8.54.07 > AM.png > > > Currently the *diff* function performs *serial differencing* on a numeric > vector. This ticket will allow the diff function to perform serial > differencing on all the rows of a *matrix*. This will make it easy to perform > *correlations* on a matrix of *differenced time series vectors* using math > expressions. > A screen shot is attached with *diff* working on a matrix of time series > data. The effect is powerful. It removes the trend from a matrix of time > series vectors in one simple function call. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
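The row-wise serial differencing described above can be sketched in plain Java. This is illustrative only: it mirrors the semantics of the streaming `diff` evaluator (out[i] = v[i+1] - v[i], applied to each row), not Solr's actual implementation, and the class and method names are hypothetical.

```java
class DiffSketch {
    // Serial differencing of a single numeric vector: out[i] = v[i+1] - v[i].
    static double[] diff(double[] v) {
        double[] out = new double[v.length - 1];
        for (int i = 0; i < out.length; i++) {
            out[i] = v[i + 1] - v[i];
        }
        return out;
    }

    // Row-wise extension: apply diff to every row of a matrix, e.g. to
    // detrend a matrix of time series vectors before correlating the rows.
    static double[][] diffRows(double[][] m) {
        double[][] out = new double[m.length][];
        for (int r = 0; r < m.length; r++) {
            out[r] = diff(m[r]);
        }
        return out;
    }
}
```

Each output row is one element shorter than its input row, which is why differencing removes a linear trend in a single call.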
[jira] [Commented] (SOLR-14407) Handle shards.purpose in the postlogs tool
[ https://issues.apache.org/jira/browse/SOLR-14407?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17107383#comment-17107383 ] ASF subversion and git services commented on SOLR-14407: Commit afda69c01706073a581680808d99b5fa4951c13a in lucene-solr's branch refs/heads/branch_8x from Joel Bernstein [ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=afda69c ] SOLR-14407, SOLR-14478: Update CHANGES.txt > Handle shards.purpose in the postlogs tool > --- > > Key: SOLR-14407 > URL: https://issues.apache.org/jira/browse/SOLR-14407 > Project: Solr > Issue Type: New Feature > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Joel Bernstein >Assignee: Joel Bernstein >Priority: Major > Fix For: 8.6 > > Attachments: SOLR-14407.patch > > > This ticket will add the *purpose_ss* field to query type log records that > have a *shards.purpose* request parameter. This can be used to gather timing > and count information for the different parts of the distributed search. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[jira] [Commented] (SOLR-14407) Handle shards.purpose in the postlogs tool
[ https://issues.apache.org/jira/browse/SOLR-14407?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17107380#comment-17107380 ] ASF subversion and git services commented on SOLR-14407: Commit 08360a2997f434b7bcd591b230e4f424454e1a07 in lucene-solr's branch refs/heads/master from Joel Bernstein [ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=08360a2 ] SOLR-14407, SOLR-14478: Update CHANGES.txt > Handle shards.purpose in the postlogs tool
[jira] [Commented] (SOLR-14478) Allow the diff Stream Evaluator to operate on the rows of a matrix
[ https://issues.apache.org/jira/browse/SOLR-14478?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17107381#comment-17107381 ] ASF subversion and git services commented on SOLR-14478: Commit 08360a2997f434b7bcd591b230e4f424454e1a07 in lucene-solr's branch refs/heads/master from Joel Bernstein [ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=08360a2 ] SOLR-14407, SOLR-14478: Update CHANGES.txt > Allow the diff Stream Evaluator to operate on the rows of a matrix
[jira] [Commented] (SOLR-14485) Fix or suppress 11 resource leak warnings in apache/solr/cloud
[ https://issues.apache.org/jira/browse/SOLR-14485?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17107369#comment-17107369 ] Andras Salamon commented on SOLR-14485: --- I also prefer try-with-resources; on the other hand, I don't like creating big new try-with-resources blocks when a single close() can solve the problem. It's definitely a matter of personal preference. In this Jira, I'd say that in SimScenario it's easier to just add snapshotCloudManager.close(); on the other hand, it would be better to use try-with-resources in RecoveryStrategy and SyncStrategy. I'll upload a new patch soon. > Fix or suppress 11 resource leak warnings in apache/solr/cloud
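The close() vs try-with-resources trade-off discussed above can be sketched with a hypothetical Closeable. The class and method names here are illustrative stand-ins, not Solr's actual types (such as PeerSync or SnapshotCloudManager):

```java
import java.io.Closeable;

class LeakFixSketch {
    // Stand-in for a resource that ecj-lint flags as "never closed".
    static class Resource implements Closeable {
        boolean closed;
        int use() { return 42; }
        @Override public void close() { closed = true; }
    }

    // Option 1: a single explicit close() call. Minimal diff against the
    // existing code, but the resource leaks if use() throws before close().
    static int withExplicitClose() {
        Resource r = new Resource();
        int v = r.use();
        r.close();
        return v;
    }

    // Option 2: try-with-resources. close() is guaranteed even when use()
    // throws, at the cost of slightly re-arranging the surrounding code.
    static int withTryWithResources() {
        try (Resource r = new Resource()) {
            return r.use();
        }
    }
}
```

Both silence the ecj-lint warning; the second is the more robust shape when the re-arrangement stays trivial.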
[jira] [Updated] (SOLR-14478) Allow the diff Stream Evaluator to operate on the rows of a matrix
[ https://issues.apache.org/jira/browse/SOLR-14478?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Joel Bernstein updated SOLR-14478: -- Fix Version/s: 8.6 > Allow the diff Stream Evaluator to operate on the rows of a matrix
[jira] [Updated] (SOLR-14407) Handle shards.purpose in the postlogs tool
[ https://issues.apache.org/jira/browse/SOLR-14407?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Joel Bernstein updated SOLR-14407: -- Fix Version/s: 8.6 > Handle shards.purpose in the postlogs tool
[jira] [Commented] (SOLR-14407) Handle shards.purpose in the postlogs tool
[ https://issues.apache.org/jira/browse/SOLR-14407?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17107358#comment-17107358 ] ASF subversion and git services commented on SOLR-14407: Commit 8e699e1fb03c213c8673b903815ac414517eefab in lucene-solr's branch refs/heads/branch_8x from Joel Bernstein [ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=8e699e1 ] SOLR-14407: Handle shards.purpose in the postlogs tool
[jira] [Commented] (SOLR-14478) Allow the diff Stream Evaluator to operate on the rows of a matrix
[ https://issues.apache.org/jira/browse/SOLR-14478?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17107359#comment-17107359 ] ASF subversion and git services commented on SOLR-14478: Commit 081f1ec530eec63f9bc29aa7b57eb08c602f2d73 in lucene-solr's branch refs/heads/branch_8x from Joel Bernstein [ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=081f1ec ] SOLR-14478: Allow the diff Stream Evaluator to operate on the rows of a matrix
[jira] [Commented] (LUCENE-9370) RegExpQuery should error for inappropriate use of \ character in input
[ https://issues.apache.org/jira/browse/LUCENE-9370?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17107356#comment-17107356 ] Mark Harwood commented on LUCENE-9370: -- PR [here|https://github.com/apache/lucene-solr/pull/1516] which also addresses a backslash bug introduced in Lucene-9336. > RegExpQuery should error for inappropriate use of \ character in input > -- > > Key: LUCENE-9370 > URL: https://issues.apache.org/jira/browse/LUCENE-9370 > Project: Lucene - Core > Issue Type: Bug > Components: core/search >Affects Versions: master (9.0) >Reporter: Mark Harwood >Priority: Minor > > The RegExp class is too lenient in parsing user input which can confuse or > mislead users and cause backwards compatibility issues as we enhance regex > support. > In normal regular expression syntax the backslash is used to: > * escape a reserved character like \. > * use certain unreserved characters in a shorthand context e.g. \d means > digits [0-9] > > The leniency bug in RegExp is that it adds an extra rule to this list - any > backslashed characters that don't satisfy the above rules are taken > literally. For example, there's no reason to put a backslash in front of the > letter "p" but we accept \p as the letter p. > Java's Pattern class will throw a parse exception given a meaningless > backslash like \p. > We should too. > In [Lucene-9336|https://issues.apache.org/jira/browse/LUCENE-9336] we added > support for commonly supported regex expressions like `\d`. Sadly this is a > breaking change because of the leniency that has allowed \d to be accepted as > the letter d without an exception. Users were likely silently missing results > they were hoping for and we made a BWC problem for ourselves in filling in > the gaps. > I propose we do like other RegEx parsers and error on inappropriate use of > backslashes. 
> This will be another breaking change so should target 9.0 -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
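The java.util.regex.Pattern behavior referenced above can be checked directly. A minimal sketch (the helper name is illustrative): Pattern compiles the defined shorthand \d but throws PatternSyntaxException for a meaningless escape like \p, which is the stricter behavior proposed for RegExp.

```java
import java.util.regex.Pattern;
import java.util.regex.PatternSyntaxException;

class BackslashDemo {
    // Returns true if Java's Pattern accepts the regex, false if it throws.
    static boolean compiles(String regex) {
        try {
            Pattern.compile(regex);
            return true;
        } catch (PatternSyntaxException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        // \d is a defined shorthand for digits [0-9]; it compiles fine.
        if (!compiles("\\d")) throw new AssertionError();
        // \p on its own is not a valid construct; Pattern rejects it rather
        // than silently treating it as a literal 'p'.
        if (compiles("\\p")) throw new AssertionError();
    }
}
```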
[GitHub] [lucene-solr] markharwood opened a new pull request #1516: Lucene 9370: Remove leniency on illegal backslashes in RegExp query
markharwood opened a new pull request #1516: URL: https://github.com/apache/lucene-solr/pull/1516 Illegal backslash syntax in user searches is now rejected with an error instead of being silently accepted. This echoes the [Java Pattern policy](https://docs.oracle.com/javase/8/docs/api/java/util/regex/Pattern.html#bs), which is designed to allow future expansion of the regex syntax without conflict. It also corrects any misconceptions users might have about predefined character classes we do not support. This PR also fixes a bug introduced in LUCENE-9336 where searches for `\\` would crash. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[jira] [Commented] (SOLR-14329) Add support to choose field for expand component from multiple collapse groups
[ https://issues.apache.org/jira/browse/SOLR-14329?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17107348#comment-17107348 ] Cassandra Targett commented on SOLR-14329: -- [~munendrasn], this issue changed the behavior of the {{expand.field}} parameter, but it seems that parameter is not documented at all. AFAIK, there is no reason for that gap besides just an oversight. Would you be able to add something to the Ref Guide for the parameter associated with this change for 8.6? > Add support to choose field for expand component from multiple collapse groups > -- > > Key: SOLR-14329 > URL: https://issues.apache.org/jira/browse/SOLR-14329 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Munendra S N >Assignee: Munendra S N >Priority: Major > Fix For: 8.6 > > Attachments: SOLR-14329.patch > > > SOLR-14073 fixed an NPE issue when multiple collapse groups are specified. > ExpandComponent can be used with collapse, but expand only supports a single > field. So, there should be a way to choose the collapse group for expand. The low-cost > collapse group should be given higher priority. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[jira] [Commented] (SOLR-14485) Fix or suppress 11 resource leak warnings in apache/solr/cloud
[ https://issues.apache.org/jira/browse/SOLR-14485?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17107339#comment-17107339 ] Erick Erickson commented on SOLR-14485: --- Actually, I had time today and did a quick review, and I have a question: why do some of the changes call close() while others use try-with-resources? I think try-with-resources is more robust, and I wondered what criteria you use for one over the other. Take RecoveryStrategy: I see that it would take a little bit of code re-arranging to use try-with-resources, but as long as the changes are trivial I think it's worth it. It's a judgement call, to be sure. > Fix or suppress 11 resource leak warnings in apache/solr/cloud
[jira] [Commented] (SOLR-14478) Allow the diff Stream Evaluator to operate on the rows of a matrix
[ https://issues.apache.org/jira/browse/SOLR-14478?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17107325#comment-17107325 ] ASF subversion and git services commented on SOLR-14478: Commit f1db56afafa60332b5180e5ab591d232d115c721 in lucene-solr's branch refs/heads/master from Joel Bernstein [ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=f1db56a ] SOLR-14478: Allow the diff Stream Evaluator to operate on the rows of a matrix
[jira] [Commented] (SOLR-14407) Handle shards.purpose in the postlogs tool
[ https://issues.apache.org/jira/browse/SOLR-14407?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17107324#comment-17107324 ] ASF subversion and git services commented on SOLR-14407: Commit fe2135963c250643cabb85864a74116fb38e57ed in lucene-solr's branch refs/heads/master from Joel Bernstein [ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=fe21359 ] SOLR-14407: Handle shards.purpose in the postlogs tool > Handle shards.purpose in the postlogs tool
[jira] [Commented] (SOLR-14462) Autoscaling placement wrong with concurrent collection creations
[ https://issues.apache.org/jira/browse/SOLR-14462?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17107302#comment-17107302 ] Ilan Ginzburg commented on SOLR-14462: -- In existing (9.0 master) code as well as in the PR, when a new Session is required, it is created in PolicyHelper.createSession() called from PolicyHelper.get(). The session is therefore created while holding the lockObj lock! When SolrCloud has a large number of collections/shards/replicas, session creation can take a few seconds, so parallel session creation is significantly delayed. It would be better not to hold the lock while creating the session. That lock should only be used to protect changes to SessionRef (and should be acquired after a Session is created, to register that session with the SessionRef). > Autoscaling placement wrong with concurrent collection creations > > > Key: SOLR-14462 > URL: https://issues.apache.org/jira/browse/SOLR-14462 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: AutoScaling >Affects Versions: master (9.0) >Reporter: Ilan Ginzburg >Assignee: Noble Paul >Priority: Major > Attachments: PolicyHelperNewLogs.txt, policylogs.txt > > > Under concurrent collection creation, wrong Autoscaling placement decisions > can lead to severely unbalanced clusters. > Sequential creation of the same collections is handled correctly and the > cluster is balanced. > *TL;DR;* under high load, the way sessions that cache future changes to > Zookeeper are managed causes placement decisions of multiple concurrent > Collection API calls to ignore each other, be based on identical “initial” > cluster state, possibly leading to identical placement decisions and as a > consequence cluster imbalance. 
> *Some context first* for those less familiar with how Autoscaling deals with > cluster state change: a PolicyHelper.Session is created with a snapshot of > the Zookeeper cluster state and is used to track already decided but not yet > persisted to Zookeeper cluster state changes so that Collection API commands > can make the right placement decisions. > A Collection API command either uses an existing cached Session (that > includes changes computed by previous command(s)) or creates a new Session > initialized from the Zookeeper cluster state (i.e. with only state changes > already persisted). > When a Collection API command requires a Session - and one is needed for any > cluster state update computation - if one exists but is currently in use, the > command can wait up to 10 seconds. If the session becomes available, it is > reused. Otherwise, a new one is created. > The Session lifecycle is as follows: it is created in COMPUTING state by a > Collection API command and is initialized with a snapshot of cluster state > from Zookeeper (does not require a Zookeeper read, this is running on > Overseer that maintains a cache of cluster state). The command has exclusive > access to the Session and can change the state of the Session. When the > command is done changing the Session, the Session is “returned” and its state > changes to EXECUTING while the command continues to run to persist the state > to Zookeeper and interact with the nodes, but no longer interacts with the > Session. Another command can then grab a Session in EXECUTING state, change > its state to COMPUTING to compute new changes taking into account previous > changes. When all commands having used the session have completed their work, > the session is “released” and destroyed (at this stage, Zookeeper contains > all the state changes that were computed using that Session). > The issue arises when multiple Collection API commands are executed at once. 
> A first Session is created and commands start using it one by one. In a > simple 1 shard 1 replica collection creation test run with 100 parallel > Collection API requests (see debug logs from PolicyHelper in file > policy.logs), this Session update phase (Session in COMPUTING status in > SessionWrapper) takes about 250-300ms (MacBook Pro). > This means that about 40 commands can run by using in turn the same Session > (45 in the sample run). The commands that have been waiting for too long time > out after 10 seconds, more or less all at the same time (at the rate at which > they have been received by the OverseerCollectionMessageHandler, approx one > per 100ms in the sample run) and most/all independently decide to create a > new Session. These new Sessions are based on Zookeeper state, they might or > might not include some of the changes from the first 40 commands (depending > on if these commands got their changes written to Zookeeper by the time
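The fix suggested in the comment above — doing the expensive session creation outside the lock and taking the lock only to publish the reference — is a standard concurrency pattern. A hedged Python sketch with hypothetical names (this is not PolicyHelper's actual code, just the shape of the proposal):

```python
import threading
import time

_session_lock = threading.Lock()  # guards only the shared reference, not creation
_session_ref = [None]

def create_session():
    # Stands in for the expensive cluster-state snapshot
    # (can take seconds on a cluster with many collections/shards/replicas).
    time.sleep(0.01)
    return {"snapshot": "cluster-state"}

def get_session():
    with _session_lock:
        if _session_ref[0] is not None:
            return _session_ref[0]
    # No session yet: do the expensive creation WITHOUT holding the lock,
    # so concurrent callers are not serialized behind a multi-second build.
    candidate = create_session()
    with _session_lock:
        # Another thread may have published first; keep whichever won.
        if _session_ref[0] is None:
            _session_ref[0] = candidate
        return _session_ref[0]
```

The trade-off is that two racing threads may both build a candidate and one gets discarded, but callers are never blocked for the duration of someone else's session creation.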
[jira] [Commented] (SOLR-14485) Fix or suppress 11 resource leak warnings in apache/solr/cloud
[ https://issues.apache.org/jira/browse/SOLR-14485?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17107297#comment-17107297 ] Andras Salamon commented on SOLR-14485: --- Sure thing. I was following the Oozie naming convention, will change to the Solr naming convention. :) > Fix or suppress 11 resource leak warnings in apache/solr/cloud > -- > > Key: SOLR-14485 > URL: https://issues.apache.org/jira/browse/SOLR-14485 > Project: Solr > Issue Type: Sub-task >Reporter: Andras Salamon >Assignee: Erick Erickson >Priority: Minor > Attachments: SOLR-14485-01.patch > > > There are 11 warnings in apache/solr/cloud: > {noformat} > [ecj-lint] 2. WARNING in > /Users/andrassalamon/src/lucene-solr-upstream/solr/core/src/java/org/apache/solr/cloud/RecoveryStrategy.java > (at line 644) > [ecj-lint] PeerSyncWithLeader peerSyncWithLeader = new > PeerSyncWithLeader(core, > [ecj-lint] ^^ > [ecj-lint] Resource leak: 'peerSyncWithLeader' is never closed > -- > [ecj-lint] 3. WARNING in > /Users/andrassalamon/src/lucene-solr-upstream/solr/core/src/java/org/apache/solr/cloud/SyncStrategy.java > (at line 182) > [ecj-lint] PeerSync peerSync = new PeerSync(core, syncWith, > core.getUpdateHandler().getUpdateLog().getNumRecordsToKeep(), true, > peerSyncOnlyWithActive, false); > [ecj-lint] > [ecj-lint] Resource leak: 'peerSync' is never closed > -- > [ecj-lint] 4. WARNING in > /Users/andrassalamon/src/lucene-solr-upstream/solr/core/src/java/org/apache/solr/cloud/autoscaling/sim/SimCloudManager.java > (at line 793) > [ecj-lint] throw new UnsupportedOperationException("must add at least 1 > node first"); > [ecj-lint] > ^^ > [ecj-lint] Resource leak: 'queryRequest' is not closed at this location > -- > [ecj-lint] 5. 
WARNING in > /Users/andrassalamon/src/lucene-solr-upstream/solr/core/src/java/org/apache/solr/cloud/autoscaling/sim/SimCloudManager.java > (at line 799) > [ecj-lint] throw new UnsupportedOperationException("must add at least 1 > node first"); > [ecj-lint] > ^^ > [ecj-lint] Resource leak: 'queryRequest' is not closed at this location > -- > [ecj-lint] 6. WARNING in > /Users/andrassalamon/src/lucene-solr-upstream/solr/core/src/java/org/apache/solr/cloud/autoscaling/sim/SimScenario.java > (at line 408) > [ecj-lint] SnapshotCloudManager snapshotCloudManager = new > SnapshotCloudManager(scenario.cluster, null); > [ecj-lint] > [ecj-lint] Resource leak: 'snapshotCloudManager' is never closed > -- > [ecj-lint] 7. WARNING in > /Users/andrassalamon/src/lucene-solr-upstream/solr/core/src/java/org/apache/solr/cloud/autoscaling/sim/SimScenario.java > (at line 743) > [ecj-lint] throw new IOException("currently only one listener can be set > per trigger. Trigger name: " + trigger); > [ecj-lint] > ^^ > [ecj-lint] Resource leak: 'listener' is not closed at this location > -- > [ecj-lint] 8. WARNING in > /Users/andrassalamon/src/lucene-solr-upstream/solr/core/src/java/org/apache/solr/cloud/autoscaling/sim/SimScenario.java > (at line 952) > [ecj-lint] SnapshotCloudManager snapshotCloudManager = new > SnapshotCloudManager(scenario.cluster, null); > [ecj-lint] > [ecj-lint] Resource leak: 'snapshotCloudManager' is never closed > -- > [ecj-lint] 9. WARNING in > /Users/andrassalamon/src/lucene-solr-upstream/solr/core/src/java/org/apache/solr/cloud/autoscaling/sim/SimScenario.java > (at line 991) > [ecj-lint] SimScenario scenario = new SimScenario(); > [ecj-lint] > [ecj-lint] Resource leak: 'scenario' is never closed > -- > [ecj-lint] 1. 
WARNING in > /Users/andrassalamon/src/lucene-solr-upstream/solr/core/src/test/org/apache/solr/cloud/ChaosMonkeyShardSplitTest.java > (at line 264) > [ecj-lint] Overseer overseer = new Overseer((HttpShardHandler) new > HttpShardHandlerFactory().getShardHandler(), updateShardHandler, > "/admin/cores", > [ecj-lint] > ^ > [ecj-lint] Resource leak: '' is never closed > -- > [ecj-lint] 2. WARNING in > /Users/andrassalamon/src/lucene-solr-upstream/solr/core/src/test/org/apache/solr/cloud/ZkNodePropsTest.java > (at line 48) > [ecj-lint] new JavaBinCodec().marshal(zkProps.getProperties(), baos); > [ecj-lint] ^^ > [ecj-lint] Resource leak: '' is never closed > -- > [ecj-lint] 3. WARNING in >
[GitHub] [lucene-solr] mocobeta commented on pull request #1488: LUCENE-9321: Refactor renderJavadoc to allow relative links with multiple Gradle tasks
mocobeta commented on pull request #1488: URL: https://github.com/apache/lucene-solr/pull/1488#issuecomment-628629711 > Also, thanks for eyeballing the outputs with your keen eye, Tomoko. If I correctly understand the meaning of the phrase, my fellows occasionally say similar things to me (in Japanese). Thanks. ~(=^‥^)/ This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[jira] [Updated] (SOLR-14478) Allow the diff Stream Evaluator to operate on the rows of a matrix
[ https://issues.apache.org/jira/browse/SOLR-14478?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Joel Bernstein updated SOLR-14478: -- Description: Currently the *diff* function performs *serial differencing* on a numeric vector. This ticket will allow the diff function to perform serial differencing on all the rows of a *matrix*. This will make it easy to perform *correlations* on a matrix of *differenced time series vectors* using math expressions. A screen shot is attached with *diff* working on a matrix of time series data. The effect is powerful. It removes the trend from a matrix of time series vectors in one simple function call. was: Currently the *diff* function performs *serial differencing* on a numeric vector. This ticket will allow the diff function to perform serial differencing on all the rows of a *matrix*. This will make it easy to perform *correlations* on a matrix of *differenced time series vectors* using math expressions. A screen shot is attached with *diff* working on a matrix of time series data. The effect is powerful. It removes the trend from a matrix of time series vectors in on simple function call. > Allow the diff Stream Evaluator to operate on the rows of a matrix > -- > > Key: SOLR-14478 > URL: https://issues.apache.org/jira/browse/SOLR-14478 > Project: Solr > Issue Type: New Feature > Security Level: Public(Default Security Level. Issues are Public) > Components: streaming expressions >Reporter: Joel Bernstein >Assignee: Joel Bernstein >Priority: Major > Attachments: SOLR-14478.patch, Screen Shot 2020-05-14 at 8.54.07 > AM.png > > > Currently the *diff* function performs *serial differencing* on a numeric > vector. This ticket will allow the diff function to perform serial > differencing on all the rows of a *matrix*. This will make it easy to perform > *correlations* on a matrix of *differenced time series vectors* using math > expressions. 
> A screen shot is attached with *diff* working on a matrix of time series > data. The effect is powerful. It removes the trend from a matrix of time > series vectors in one simple function call. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[jira] [Updated] (SOLR-14478) Allow the diff Stream Evaluator to operate on the rows of a matrix
[ https://issues.apache.org/jira/browse/SOLR-14478?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Joel Bernstein updated SOLR-14478: -- Description: Currently the *diff* function performs *serial differencing* on a numeric vector. This ticket will allow the diff function to perform serial differencing on all the rows of a *matrix*. This will make it easy to perform *correlations* on a matrix of *differenced time series vectors* using math expressions. A screen shot is attached with *diff* working on a matrix of time series data. The effect is powerful. It removes the trend from a matrix of time series vectors in on simple function call. was:Currently the *diff* function performs *serial differencing* on a numeric vector. This ticket will allow the diff function to perform serial differencing on all the rows of a *matrix*. This will make it easy to perform *correlations* on a matrix of *differenced time series vectors* using math expressions. > Allow the diff Stream Evaluator to operate on the rows of a matrix > -- > > Key: SOLR-14478 > URL: https://issues.apache.org/jira/browse/SOLR-14478 > Project: Solr > Issue Type: New Feature > Security Level: Public(Default Security Level. Issues are Public) > Components: streaming expressions >Reporter: Joel Bernstein >Assignee: Joel Bernstein >Priority: Major > Attachments: SOLR-14478.patch, Screen Shot 2020-05-14 at 8.54.07 > AM.png > > > Currently the *diff* function performs *serial differencing* on a numeric > vector. This ticket will allow the diff function to perform serial > differencing on all the rows of a *matrix*. This will make it easy to perform > *correlations* on a matrix of *differenced time series vectors* using math > expressions. > A screen shot is attached with *diff* working on a matrix of time series > data. The effect is powerful. It removes the trend from a matrix of time > series vectors in on simple function call. 
[jira] [Commented] (SOLR-14485) Fix or suppress 11 resource leak warnings in apache/solr/cloud
[ https://issues.apache.org/jira/browse/SOLR-14485?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17107273#comment-17107273 ] Erick Erickson commented on SOLR-14485: --- I'll take a look today/tomorrow. A very minor nit: please just name the patches with the JIRA, *SOLR-14485.patch* for instance. The Jira system will gracefully handle multiple patches with the same name, graying out all the older copies but still listing them. That just makes it easier to know which one's the most recent without me having to think ;) > Fix or suppress 11 resource leak warnings in apache/solr/cloud > -- > > Key: SOLR-14485 > URL: https://issues.apache.org/jira/browse/SOLR-14485 > Project: Solr > Issue Type: Sub-task >Reporter: Andras Salamon >Assignee: Erick Erickson >Priority: Minor > Attachments: SOLR-14485-01.patch > > > There are 11 warnings in apache/solr/cloud: > {noformat} > [ecj-lint] 2. WARNING in > /Users/andrassalamon/src/lucene-solr-upstream/solr/core/src/java/org/apache/solr/cloud/RecoveryStrategy.java > (at line 644) > [ecj-lint] PeerSyncWithLeader peerSyncWithLeader = new > PeerSyncWithLeader(core, > [ecj-lint] ^^ > [ecj-lint] Resource leak: 'peerSyncWithLeader' is never closed > -- > [ecj-lint] 3. WARNING in > /Users/andrassalamon/src/lucene-solr-upstream/solr/core/src/java/org/apache/solr/cloud/SyncStrategy.java > (at line 182) > [ecj-lint] PeerSync peerSync = new PeerSync(core, syncWith, > core.getUpdateHandler().getUpdateLog().getNumRecordsToKeep(), true, > peerSyncOnlyWithActive, false); > [ecj-lint] > [ecj-lint] Resource leak: 'peerSync' is never closed > -- > [ecj-lint] 4. WARNING in > /Users/andrassalamon/src/lucene-solr-upstream/solr/core/src/java/org/apache/solr/cloud/autoscaling/sim/SimCloudManager.java > (at line 793) > [ecj-lint] throw new UnsupportedOperationException("must add at least 1 > node first"); > [ecj-lint] > ^^ > [ecj-lint] Resource leak: 'queryRequest' is not closed at this location > -- > [ecj-lint] 5. 
WARNING in > /Users/andrassalamon/src/lucene-solr-upstream/solr/core/src/java/org/apache/solr/cloud/autoscaling/sim/SimCloudManager.java > (at line 799) > [ecj-lint] throw new UnsupportedOperationException("must add at least 1 > node first"); > [ecj-lint] > ^^ > [ecj-lint] Resource leak: 'queryRequest' is not closed at this location > -- > [ecj-lint] 6. WARNING in > /Users/andrassalamon/src/lucene-solr-upstream/solr/core/src/java/org/apache/solr/cloud/autoscaling/sim/SimScenario.java > (at line 408) > [ecj-lint] SnapshotCloudManager snapshotCloudManager = new > SnapshotCloudManager(scenario.cluster, null); > [ecj-lint] > [ecj-lint] Resource leak: 'snapshotCloudManager' is never closed > -- > [ecj-lint] 7. WARNING in > /Users/andrassalamon/src/lucene-solr-upstream/solr/core/src/java/org/apache/solr/cloud/autoscaling/sim/SimScenario.java > (at line 743) > [ecj-lint] throw new IOException("currently only one listener can be set > per trigger. Trigger name: " + trigger); > [ecj-lint] > ^^ > [ecj-lint] Resource leak: 'listener' is not closed at this location > -- > [ecj-lint] 8. WARNING in > /Users/andrassalamon/src/lucene-solr-upstream/solr/core/src/java/org/apache/solr/cloud/autoscaling/sim/SimScenario.java > (at line 952) > [ecj-lint] SnapshotCloudManager snapshotCloudManager = new > SnapshotCloudManager(scenario.cluster, null); > [ecj-lint] > [ecj-lint] Resource leak: 'snapshotCloudManager' is never closed > -- > [ecj-lint] 9. WARNING in > /Users/andrassalamon/src/lucene-solr-upstream/solr/core/src/java/org/apache/solr/cloud/autoscaling/sim/SimScenario.java > (at line 991) > [ecj-lint] SimScenario scenario = new SimScenario(); > [ecj-lint] > [ecj-lint] Resource leak: 'scenario' is never closed > -- > [ecj-lint] 1. 
WARNING in > /Users/andrassalamon/src/lucene-solr-upstream/solr/core/src/test/org/apache/solr/cloud/ChaosMonkeyShardSplitTest.java > (at line 264) > [ecj-lint] Overseer overseer = new Overseer((HttpShardHandler) new > HttpShardHandlerFactory().getShardHandler(), updateShardHandler, > "/admin/cores", > [ecj-lint] > ^ > [ecj-lint] Resource leak: '' is never closed > -- > [ecj-lint] 2. WARNING in >
[jira] [Updated] (SOLR-14478) Allow the diff Stream Evaluator to operate on the rows of a matrix
[ https://issues.apache.org/jira/browse/SOLR-14478?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Joel Bernstein updated SOLR-14478: -- Attachment: Screen Shot 2020-05-14 at 8.54.07 AM.png > Allow the diff Stream Evaluator to operate on the rows of a matrix > -- > > Key: SOLR-14478 > URL: https://issues.apache.org/jira/browse/SOLR-14478 > Project: Solr > Issue Type: New Feature > Security Level: Public(Default Security Level. Issues are Public) > Components: streaming expressions >Reporter: Joel Bernstein >Assignee: Joel Bernstein >Priority: Major > Attachments: SOLR-14478.patch, Screen Shot 2020-05-14 at 8.54.07 > AM.png > > > Currently the *diff* function performs *serial differencing* on a numeric > vector. This ticket will allow the diff function to perform serial > differencing on all the rows of a *matrix*. This will make it easy to perform > *correlations* on a matrix of *differenced time series vectors* using math > expressions. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[jira] [Assigned] (SOLR-14485) Fix or suppress 11 resource leak warnings in apache/solr/cloud
[ https://issues.apache.org/jira/browse/SOLR-14485?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Erick Erickson reassigned SOLR-14485: - Assignee: Erick Erickson > Fix or suppress 11 resource leak warnings in apache/solr/cloud > -- > > Key: SOLR-14485 > URL: https://issues.apache.org/jira/browse/SOLR-14485 > Project: Solr > Issue Type: Sub-task >Reporter: Andras Salamon >Assignee: Erick Erickson >Priority: Minor > Attachments: SOLR-14485-01.patch > > > There are 11 warnings in apache/solr/cloud: > {noformat} > [ecj-lint] 2. WARNING in > /Users/andrassalamon/src/lucene-solr-upstream/solr/core/src/java/org/apache/solr/cloud/RecoveryStrategy.java > (at line 644) > [ecj-lint] PeerSyncWithLeader peerSyncWithLeader = new > PeerSyncWithLeader(core, > [ecj-lint] ^^ > [ecj-lint] Resource leak: 'peerSyncWithLeader' is never closed > -- > [ecj-lint] 3. WARNING in > /Users/andrassalamon/src/lucene-solr-upstream/solr/core/src/java/org/apache/solr/cloud/SyncStrategy.java > (at line 182) > [ecj-lint] PeerSync peerSync = new PeerSync(core, syncWith, > core.getUpdateHandler().getUpdateLog().getNumRecordsToKeep(), true, > peerSyncOnlyWithActive, false); > [ecj-lint] > [ecj-lint] Resource leak: 'peerSync' is never closed > -- > [ecj-lint] 4. WARNING in > /Users/andrassalamon/src/lucene-solr-upstream/solr/core/src/java/org/apache/solr/cloud/autoscaling/sim/SimCloudManager.java > (at line 793) > [ecj-lint] throw new UnsupportedOperationException("must add at least 1 > node first"); > [ecj-lint] > ^^ > [ecj-lint] Resource leak: 'queryRequest' is not closed at this location > -- > [ecj-lint] 5. WARNING in > /Users/andrassalamon/src/lucene-solr-upstream/solr/core/src/java/org/apache/solr/cloud/autoscaling/sim/SimCloudManager.java > (at line 799) > [ecj-lint] throw new UnsupportedOperationException("must add at least 1 > node first"); > [ecj-lint] > ^^ > [ecj-lint] Resource leak: 'queryRequest' is not closed at this location > -- > [ecj-lint] 6. 
WARNING in > /Users/andrassalamon/src/lucene-solr-upstream/solr/core/src/java/org/apache/solr/cloud/autoscaling/sim/SimScenario.java > (at line 408) > [ecj-lint] SnapshotCloudManager snapshotCloudManager = new > SnapshotCloudManager(scenario.cluster, null); > [ecj-lint] > [ecj-lint] Resource leak: 'snapshotCloudManager' is never closed > -- > [ecj-lint] 7. WARNING in > /Users/andrassalamon/src/lucene-solr-upstream/solr/core/src/java/org/apache/solr/cloud/autoscaling/sim/SimScenario.java > (at line 743) > [ecj-lint] throw new IOException("currently only one listener can be set > per trigger. Trigger name: " + trigger); > [ecj-lint] > ^^ > [ecj-lint] Resource leak: 'listener' is not closed at this location > -- > [ecj-lint] 8. WARNING in > /Users/andrassalamon/src/lucene-solr-upstream/solr/core/src/java/org/apache/solr/cloud/autoscaling/sim/SimScenario.java > (at line 952) > [ecj-lint] SnapshotCloudManager snapshotCloudManager = new > SnapshotCloudManager(scenario.cluster, null); > [ecj-lint] > [ecj-lint] Resource leak: 'snapshotCloudManager' is never closed > -- > [ecj-lint] 9. WARNING in > /Users/andrassalamon/src/lucene-solr-upstream/solr/core/src/java/org/apache/solr/cloud/autoscaling/sim/SimScenario.java > (at line 991) > [ecj-lint] SimScenario scenario = new SimScenario(); > [ecj-lint] > [ecj-lint] Resource leak: 'scenario' is never closed > -- > [ecj-lint] 1. WARNING in > /Users/andrassalamon/src/lucene-solr-upstream/solr/core/src/test/org/apache/solr/cloud/ChaosMonkeyShardSplitTest.java > (at line 264) > [ecj-lint] Overseer overseer = new Overseer((HttpShardHandler) new > HttpShardHandlerFactory().getShardHandler(), updateShardHandler, > "/admin/cores", > [ecj-lint] > ^ > [ecj-lint] Resource leak: '' is never closed > -- > [ecj-lint] 2. 
WARNING in > /Users/andrassalamon/src/lucene-solr-upstream/solr/core/src/test/org/apache/solr/cloud/ZkNodePropsTest.java > (at line 48) > [ecj-lint] new JavaBinCodec().marshal(zkProps.getProperties(), baos); > [ecj-lint] ^^ > [ecj-lint] Resource leak: '' is never closed > -- > [ecj-lint] 3. WARNING in >
[jira] [Updated] (SOLR-14485) Fix or suppress 11 resource leak warnings in apache/solr/cloud
[ https://issues.apache.org/jira/browse/SOLR-14485?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andras Salamon updated SOLR-14485: -- Attachment: SOLR-14485-01.patch > Fix or suppress 11 resource leak warnings in apache/solr/cloud > -- > > Key: SOLR-14485 > URL: https://issues.apache.org/jira/browse/SOLR-14485 > Project: Solr > Issue Type: Sub-task >Reporter: Andras Salamon >Priority: Minor > Attachments: SOLR-14485-01.patch > > > There are 11 warnings in apache/solr/cloud: > {noformat} > [ecj-lint] 2. WARNING in > /Users/andrassalamon/src/lucene-solr-upstream/solr/core/src/java/org/apache/solr/cloud/RecoveryStrategy.java > (at line 644) > [ecj-lint] PeerSyncWithLeader peerSyncWithLeader = new > PeerSyncWithLeader(core, > [ecj-lint] ^^ > [ecj-lint] Resource leak: 'peerSyncWithLeader' is never closed > -- > [ecj-lint] 3. WARNING in > /Users/andrassalamon/src/lucene-solr-upstream/solr/core/src/java/org/apache/solr/cloud/SyncStrategy.java > (at line 182) > [ecj-lint] PeerSync peerSync = new PeerSync(core, syncWith, > core.getUpdateHandler().getUpdateLog().getNumRecordsToKeep(), true, > peerSyncOnlyWithActive, false); > [ecj-lint] > [ecj-lint] Resource leak: 'peerSync' is never closed > -- > [ecj-lint] 4. WARNING in > /Users/andrassalamon/src/lucene-solr-upstream/solr/core/src/java/org/apache/solr/cloud/autoscaling/sim/SimCloudManager.java > (at line 793) > [ecj-lint] throw new UnsupportedOperationException("must add at least 1 > node first"); > [ecj-lint] > ^^ > [ecj-lint] Resource leak: 'queryRequest' is not closed at this location > -- > [ecj-lint] 5. WARNING in > /Users/andrassalamon/src/lucene-solr-upstream/solr/core/src/java/org/apache/solr/cloud/autoscaling/sim/SimCloudManager.java > (at line 799) > [ecj-lint] throw new UnsupportedOperationException("must add at least 1 > node first"); > [ecj-lint] > ^^ > [ecj-lint] Resource leak: 'queryRequest' is not closed at this location > -- > [ecj-lint] 6. 
WARNING in > /Users/andrassalamon/src/lucene-solr-upstream/solr/core/src/java/org/apache/solr/cloud/autoscaling/sim/SimScenario.java > (at line 408) > [ecj-lint] SnapshotCloudManager snapshotCloudManager = new > SnapshotCloudManager(scenario.cluster, null); > [ecj-lint] > [ecj-lint] Resource leak: 'snapshotCloudManager' is never closed > -- > [ecj-lint] 7. WARNING in > /Users/andrassalamon/src/lucene-solr-upstream/solr/core/src/java/org/apache/solr/cloud/autoscaling/sim/SimScenario.java > (at line 743) > [ecj-lint] throw new IOException("currently only one listener can be set > per trigger. Trigger name: " + trigger); > [ecj-lint] > ^^ > [ecj-lint] Resource leak: 'listener' is not closed at this location > -- > [ecj-lint] 8. WARNING in > /Users/andrassalamon/src/lucene-solr-upstream/solr/core/src/java/org/apache/solr/cloud/autoscaling/sim/SimScenario.java > (at line 952) > [ecj-lint] SnapshotCloudManager snapshotCloudManager = new > SnapshotCloudManager(scenario.cluster, null); > [ecj-lint] > [ecj-lint] Resource leak: 'snapshotCloudManager' is never closed > -- > [ecj-lint] 9. WARNING in > /Users/andrassalamon/src/lucene-solr-upstream/solr/core/src/java/org/apache/solr/cloud/autoscaling/sim/SimScenario.java > (at line 991) > [ecj-lint] SimScenario scenario = new SimScenario(); > [ecj-lint] > [ecj-lint] Resource leak: 'scenario' is never closed > -- > [ecj-lint] 1. WARNING in > /Users/andrassalamon/src/lucene-solr-upstream/solr/core/src/test/org/apache/solr/cloud/ChaosMonkeyShardSplitTest.java > (at line 264) > [ecj-lint] Overseer overseer = new Overseer((HttpShardHandler) new > HttpShardHandlerFactory().getShardHandler(), updateShardHandler, > "/admin/cores", > [ecj-lint] > ^ > [ecj-lint] Resource leak: '' is never closed > -- > [ecj-lint] 2. 
WARNING in > /Users/andrassalamon/src/lucene-solr-upstream/solr/core/src/test/org/apache/solr/cloud/ZkNodePropsTest.java > (at line 48) > [ecj-lint] new JavaBinCodec().marshal(zkProps.getProperties(), baos); > [ecj-lint] ^^ > [ecj-lint] Resource leak: '' is never closed > -- > [ecj-lint] 3. WARNING in > /Users/andrassalamon/src/lucene-solr-upstream/solr/core/src/test/org/apache/solr/cloud/autoscaling/sim/TestSnapshotCloudManager.java > (at line 124) >
[jira] [Updated] (SOLR-14485) Fix or suppress 11 resource leak warnings in apache/solr/cloud
[ https://issues.apache.org/jira/browse/SOLR-14485?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andras Salamon updated SOLR-14485:
--
Status: Patch Available (was: Open)

> Fix or suppress 11 resource leak warnings in apache/solr/cloud
> --
>
> Key: SOLR-14485
> URL: https://issues.apache.org/jira/browse/SOLR-14485
> Project: Solr
> Issue Type: Sub-task
> Reporter: Andras Salamon
> Priority: Minor
> Attachments: SOLR-14485-01.patch
>
> There are 11 warnings in apache/solr/cloud:
> {noformat}
> [ecj-lint] 2. WARNING in /Users/andrassalamon/src/lucene-solr-upstream/solr/core/src/java/org/apache/solr/cloud/RecoveryStrategy.java (at line 644)
> [ecj-lint]     PeerSyncWithLeader peerSyncWithLeader = new PeerSyncWithLeader(core,
> [ecj-lint]     Resource leak: 'peerSyncWithLeader' is never closed
> --
> [ecj-lint] 3. WARNING in /Users/andrassalamon/src/lucene-solr-upstream/solr/core/src/java/org/apache/solr/cloud/SyncStrategy.java (at line 182)
> [ecj-lint]     PeerSync peerSync = new PeerSync(core, syncWith, core.getUpdateHandler().getUpdateLog().getNumRecordsToKeep(), true, peerSyncOnlyWithActive, false);
> [ecj-lint]     Resource leak: 'peerSync' is never closed
> --
> [ecj-lint] 4. WARNING in /Users/andrassalamon/src/lucene-solr-upstream/solr/core/src/java/org/apache/solr/cloud/autoscaling/sim/SimCloudManager.java (at line 793)
> [ecj-lint]     throw new UnsupportedOperationException("must add at least 1 node first");
> [ecj-lint]     Resource leak: 'queryRequest' is not closed at this location
> --
> [ecj-lint] 5. WARNING in /Users/andrassalamon/src/lucene-solr-upstream/solr/core/src/java/org/apache/solr/cloud/autoscaling/sim/SimCloudManager.java (at line 799)
> [ecj-lint]     throw new UnsupportedOperationException("must add at least 1 node first");
> [ecj-lint]     Resource leak: 'queryRequest' is not closed at this location
> --
> [ecj-lint] 6. WARNING in /Users/andrassalamon/src/lucene-solr-upstream/solr/core/src/java/org/apache/solr/cloud/autoscaling/sim/SimScenario.java (at line 408)
> [ecj-lint]     SnapshotCloudManager snapshotCloudManager = new SnapshotCloudManager(scenario.cluster, null);
> [ecj-lint]     Resource leak: 'snapshotCloudManager' is never closed
> --
> [ecj-lint] 7. WARNING in /Users/andrassalamon/src/lucene-solr-upstream/solr/core/src/java/org/apache/solr/cloud/autoscaling/sim/SimScenario.java (at line 743)
> [ecj-lint]     throw new IOException("currently only one listener can be set per trigger. Trigger name: " + trigger);
> [ecj-lint]     Resource leak: 'listener' is not closed at this location
> --
> [ecj-lint] 8. WARNING in /Users/andrassalamon/src/lucene-solr-upstream/solr/core/src/java/org/apache/solr/cloud/autoscaling/sim/SimScenario.java (at line 952)
> [ecj-lint]     SnapshotCloudManager snapshotCloudManager = new SnapshotCloudManager(scenario.cluster, null);
> [ecj-lint]     Resource leak: 'snapshotCloudManager' is never closed
> --
> [ecj-lint] 9. WARNING in /Users/andrassalamon/src/lucene-solr-upstream/solr/core/src/java/org/apache/solr/cloud/autoscaling/sim/SimScenario.java (at line 991)
> [ecj-lint]     SimScenario scenario = new SimScenario();
> [ecj-lint]     Resource leak: 'scenario' is never closed
> --
> [ecj-lint] 1. WARNING in /Users/andrassalamon/src/lucene-solr-upstream/solr/core/src/test/org/apache/solr/cloud/ChaosMonkeyShardSplitTest.java (at line 264)
> [ecj-lint]     Overseer overseer = new Overseer((HttpShardHandler) new HttpShardHandlerFactory().getShardHandler(), updateShardHandler, "/admin/cores",
> [ecj-lint]     Resource leak: '' is never closed
> --
> [ecj-lint] 2. WARNING in /Users/andrassalamon/src/lucene-solr-upstream/solr/core/src/test/org/apache/solr/cloud/ZkNodePropsTest.java (at line 48)
> [ecj-lint]     new JavaBinCodec().marshal(zkProps.getProperties(), baos);
> [ecj-lint]     Resource leak: '' is never closed
> --
> [ecj-lint] 3. WARNING in /Users/andrassalamon/src/lucene-solr-upstream/solr/core/src/test/org/apache/solr/cloud/autoscaling/sim/TestSnapshotCloudManager.java (at line 124)
>
[jira] [Created] (SOLR-14485) Fix or suppress 11 resource leak warnings in apache/solr/cloud
Andras Salamon created SOLR-14485:
-
Summary: Fix or suppress 11 resource leak warnings in apache/solr/cloud
Key: SOLR-14485
URL: https://issues.apache.org/jira/browse/SOLR-14485
Project: Solr
Issue Type: Sub-task
Reporter: Andras Salamon

There are 11 warnings in apache/solr/cloud:
{noformat}
[ecj-lint] 2. WARNING in /Users/andrassalamon/src/lucene-solr-upstream/solr/core/src/java/org/apache/solr/cloud/RecoveryStrategy.java (at line 644)
[ecj-lint]     PeerSyncWithLeader peerSyncWithLeader = new PeerSyncWithLeader(core,
[ecj-lint]     Resource leak: 'peerSyncWithLeader' is never closed
--
[ecj-lint] 3. WARNING in /Users/andrassalamon/src/lucene-solr-upstream/solr/core/src/java/org/apache/solr/cloud/SyncStrategy.java (at line 182)
[ecj-lint]     PeerSync peerSync = new PeerSync(core, syncWith, core.getUpdateHandler().getUpdateLog().getNumRecordsToKeep(), true, peerSyncOnlyWithActive, false);
[ecj-lint]     Resource leak: 'peerSync' is never closed
--
[ecj-lint] 4. WARNING in /Users/andrassalamon/src/lucene-solr-upstream/solr/core/src/java/org/apache/solr/cloud/autoscaling/sim/SimCloudManager.java (at line 793)
[ecj-lint]     throw new UnsupportedOperationException("must add at least 1 node first");
[ecj-lint]     Resource leak: 'queryRequest' is not closed at this location
--
[ecj-lint] 5. WARNING in /Users/andrassalamon/src/lucene-solr-upstream/solr/core/src/java/org/apache/solr/cloud/autoscaling/sim/SimCloudManager.java (at line 799)
[ecj-lint]     throw new UnsupportedOperationException("must add at least 1 node first");
[ecj-lint]     Resource leak: 'queryRequest' is not closed at this location
--
[ecj-lint] 6. WARNING in /Users/andrassalamon/src/lucene-solr-upstream/solr/core/src/java/org/apache/solr/cloud/autoscaling/sim/SimScenario.java (at line 408)
[ecj-lint]     SnapshotCloudManager snapshotCloudManager = new SnapshotCloudManager(scenario.cluster, null);
[ecj-lint]     Resource leak: 'snapshotCloudManager' is never closed
--
[ecj-lint] 7. WARNING in /Users/andrassalamon/src/lucene-solr-upstream/solr/core/src/java/org/apache/solr/cloud/autoscaling/sim/SimScenario.java (at line 743)
[ecj-lint]     throw new IOException("currently only one listener can be set per trigger. Trigger name: " + trigger);
[ecj-lint]     Resource leak: 'listener' is not closed at this location
--
[ecj-lint] 8. WARNING in /Users/andrassalamon/src/lucene-solr-upstream/solr/core/src/java/org/apache/solr/cloud/autoscaling/sim/SimScenario.java (at line 952)
[ecj-lint]     SnapshotCloudManager snapshotCloudManager = new SnapshotCloudManager(scenario.cluster, null);
[ecj-lint]     Resource leak: 'snapshotCloudManager' is never closed
--
[ecj-lint] 9. WARNING in /Users/andrassalamon/src/lucene-solr-upstream/solr/core/src/java/org/apache/solr/cloud/autoscaling/sim/SimScenario.java (at line 991)
[ecj-lint]     SimScenario scenario = new SimScenario();
[ecj-lint]     Resource leak: 'scenario' is never closed
--
[ecj-lint] 1. WARNING in /Users/andrassalamon/src/lucene-solr-upstream/solr/core/src/test/org/apache/solr/cloud/ChaosMonkeyShardSplitTest.java (at line 264)
[ecj-lint]     Overseer overseer = new Overseer((HttpShardHandler) new HttpShardHandlerFactory().getShardHandler(), updateShardHandler, "/admin/cores",
[ecj-lint]     Resource leak: '' is never closed
--
[ecj-lint] 2. WARNING in /Users/andrassalamon/src/lucene-solr-upstream/solr/core/src/test/org/apache/solr/cloud/ZkNodePropsTest.java (at line 48)
[ecj-lint]     new JavaBinCodec().marshal(zkProps.getProperties(), baos);
[ecj-lint]     Resource leak: '' is never closed
--
[ecj-lint] 3. WARNING in /Users/andrassalamon/src/lucene-solr-upstream/solr/core/src/test/org/apache/solr/cloud/autoscaling/sim/TestSnapshotCloudManager.java (at line 124)
[ecj-lint]     SnapshotCloudManager snapshotCloudManager = new SnapshotCloudManager(realManager, null);
[ecj-lint]     Resource leak: 'snapshotCloudManager' is never closed
{noformat}
-- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail:
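Most of these warnings share one fix pattern: the flagged object implements Closeable, so allocating it in a try-with-resources block guarantees close() runs on every code path (the alternative, where close() is known to be a no-op, is a targeted @SuppressWarnings("resource")). A minimal, self-contained sketch of the leak and the fix; TrackingCodec here is a hypothetical stand-in for a Closeable class like JavaBinCodec, not actual Solr code:

```java
import java.io.ByteArrayOutputStream;
import java.io.Closeable;

// Hypothetical stand-in for a Closeable codec such as JavaBinCodec;
// it records whether close() was ever called.
class TrackingCodec implements Closeable {
    boolean closed = false;

    void marshal(Object value, ByteArrayOutputStream out) {
        byte[] bytes = String.valueOf(value).getBytes();
        out.write(bytes, 0, bytes.length);
    }

    @Override
    public void close() {
        closed = true;
    }
}

public class ResourceLeakDemo {
    // The pattern ecj-lint flags: the codec is created, used, and abandoned.
    static TrackingCodec leaky(ByteArrayOutputStream out) {
        TrackingCodec codec = new TrackingCodec();
        codec.marshal("props", out);
        return codec; // close() was never called
    }

    // The fix: try-with-resources closes the codec on every path,
    // including exceptional ones.
    static TrackingCodec fixed(ByteArrayOutputStream out) {
        TrackingCodec codec = new TrackingCodec();
        try (TrackingCodec c = codec) {
            c.marshal("props", out);
        }
        return codec;
    }

    public static void main(String[] args) {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        System.out.println(leaky(out).closed); // false: leaked
        System.out.println(fixed(out).closed); // true: closed
    }
}
```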
[GitHub] [lucene-solr] magibney commented on a change in pull request #1507: SOLR-14471: properly apply base replica ordering to last-place shards…
magibney commented on a change in pull request #1507: URL: https://github.com/apache/lucene-solr/pull/1507#discussion_r425099741 ## File path: solr/solrj/src/test/org/apache/solr/client/solrj/routing/RequestReplicaListTransformerGeneratorTest.java ## @@ -88,6 +88,19 @@ public void replicaTypeAndReplicaBase() { ) ); +// Add a PULL replica so that there's a tie for "last place" +replicas.add( +new Replica( +"node4", Review comment: Ah yes; fixed, thanks for catching that! This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
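The contract this test exercises boils down to: replicas are first ranked by how many shards.preference rules they match, and the replica.base strategy must then reorder every group of equally ranked replicas, including the last group, which is the case SOLR-14471 fixed. A generic sketch of that two-level ordering, using hypothetical types rather than Solr's actual RequestReplicaListTransformer code:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.Comparator;
import java.util.List;
import java.util.function.ToIntFunction;
import java.util.function.UnaryOperator;

public class PreferenceSort {
    // Sort by preference rank (lower = better match), then apply the "base"
    // strategy inside every equivalence group -- last group included.
    static <T> List<T> sort(List<T> items, ToIntFunction<T> rank, UnaryOperator<List<T>> base) {
        List<T> byRank = new ArrayList<>(items);
        byRank.sort(Comparator.comparingInt(rank));
        List<T> result = new ArrayList<>();
        int i = 0;
        while (i < byRank.size()) {
            int j = i;
            while (j < byRank.size()
                    && rank.applyAsInt(byRank.get(j)) == rank.applyAsInt(byRank.get(i))) {
                j++;
            }
            // Base ordering applied per group, not just to leading groups.
            result.addAll(base.apply(new ArrayList<>(byRank.subList(i, j))));
            i = j;
        }
        return result;
    }

    public static void main(String[] args) {
        // Rank by tens digit; the "base" strategy here is simply reversal.
        UnaryOperator<List<Integer>> reverse = g -> {
            Collections.reverse(g);
            return g;
        };
        List<Integer> out = sort(List.of(21, 22, 11, 12), x -> x / 10, reverse);
        System.out.println(out); // [12, 11, 22, 21]
    }
}
```

Note that both the group of 11s and the trailing group of 21s come back base-ordered; the pre-fix bug was equivalent to skipping the `base.apply` call for the final group.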
[jira] [Commented] (SOLR-14484) NPE in ConcurrentUpdateHttp2SolrClient MDC logging
[ https://issues.apache.org/jira/browse/SOLR-14484?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17107235#comment-17107235 ] Lucene/Solr QA commented on SOLR-14484: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 0s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 52s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 52s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} Release audit (RAT) {color} | {color:green} 0m 52s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} Check forbidden APIs {color} | {color:green} 0m 52s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} Validate source patterns {color} | {color:green} 0m 52s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 4m 42s{color} | {color:green} solrj in the patch passed. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 8m 39s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | JIRA Issue | SOLR-14484 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/13002906/SOLR-14484-01.patch | | Optional Tests | compile javac unit ratsources checkforbiddenapis validatesourcepatterns | | uname | Linux lucene1-us-west 4.15.0-54-generic #58-Ubuntu SMP Mon Jun 24 10:55:24 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | ant | | Personality | /home/jenkins/jenkins-slave/workspace/PreCommit-SOLR-Build/sourcedir/dev-tools/test-patch/lucene-solr-yetus-personality.sh | | git revision | master / 010168c57b3 | | ant | version: Apache Ant(TM) version 1.10.5 compiled on March 28 2019 | | Default Java | LTS | | Test Results | https://builds.apache.org/job/PreCommit-SOLR-Build/747/testReport/ | | modules | C: solr/solrj U: solr/solrj | | Console output | https://builds.apache.org/job/PreCommit-SOLR-Build/747/console | | Powered by | Apache Yetus 0.7.0 http://yetus.apache.org | This message was automatically generated. > NPE in ConcurrentUpdateHttp2SolrClient MDC logging > -- > > Key: SOLR-14484 > URL: https://issues.apache.org/jira/browse/SOLR-14484 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) >Affects Versions: 8.4.1 >Reporter: Andras Salamon >Priority: Minor > Attachments: SOLR-14484-01.patch > > > {{client.getBaseURL()}} can be null in {{ConcurrentUpdateHttp2SolrClient}} > which can cause problems in MDC logging. > We had the following error in the stacktrace. 
We were using Solr 8.4.1 from > lily hbase-indexer which still uses log4j 1.2: > {noformat} > Error from server at http://127.0.0.1:45895/solr/collection1: > java.lang.NullPointerException > at java.util.Hashtable.put(Hashtable.java:459) > at org.apache.log4j.MDC.put0(MDC.java:150) > at org.apache.log4j.MDC.put(MDC.java:85) > at org.slf4j.impl.Log4jMDCAdapter.put(Log4jMDCAdapter.java:67) > at org.slf4j.MDC.put(MDC.java:147) > at > org.apache.solr.client.solrj.impl.ConcurrentUpdateHttp2SolrClient.addRunner(ConcurrentUpdateHttp2SolrClient.java:346) > at > org.apache.solr.client.solrj.impl.ConcurrentUpdateHttp2SolrClient.waitForEmptyQueue(ConcurrentUpdateHttp2SolrClient.java:565) > {noformat}
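The root cause visible in that trace: log4j 1.2 backs MDC with a java.util.Hashtable, and Hashtable rejects null values, so putting a null client.getBaseURL() into the MDC throws. A dependency-free sketch of the failure mode and a null-guarded put; safePut is a hypothetical helper illustrating the idea, not the actual SOLR-14484 patch:

```java
import java.util.Hashtable;
import java.util.Map;

public class MdcNullGuard {
    // Guarded put: skip null keys/values instead of crashing.
    // (Hypothetical helper, not the actual Solr fix.)
    static void safePut(Map<String, String> mdc, String key, String value) {
        if (key != null && value != null) {
            mdc.put(key, value);
        }
    }

    public static void main(String[] args) {
        // log4j 1.2's MDC is Hashtable-backed; Hashtable forbids null values.
        Hashtable<String, String> mdc = new Hashtable<>();
        String baseUrl = null; // what client.getBaseURL() can return

        boolean npe = false;
        try {
            mdc.put("ConcurrentUpdateHttp2SolrClient.url", baseUrl); // throws NPE
        } catch (NullPointerException e) {
            npe = true;
        }

        safePut(mdc, "ConcurrentUpdateHttp2SolrClient.url", baseUrl); // silently skipped
        System.out.println(npe + " " + mdc.isEmpty()); // true true
    }
}
```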
[GitHub] [lucene-solr] uschindler commented on pull request #1477: LUCENE-9321: Port markdown task to Gradle
uschindler commented on pull request #1477: URL: https://github.com/apache/lucene-solr/pull/1477#issuecomment-628571533 I can now proceed with this one as #1488 is done.
[jira] [Commented] (LUCENE-9278) Make javadoc folder structure follow Gradle project path
[ https://issues.apache.org/jira/browse/LUCENE-9278?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17107215#comment-17107215 ] ASF subversion and git services commented on LUCENE-9278: - Commit 010168c57b35e402da3d8776c03307af0785a3bd in lucene-solr's branch refs/heads/master from Uwe Schindler [ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=010168c ] LUCENE-9321, LUCENE-9278: Refactor renderJavadoc to allow relative links with multiple Gradle tasks (#1488) This also automatically collects linked projects by its dependencies, so we don't need to maintain all inter-project javadocs links. Co-authored-by: Dawid Weiss > Make javadoc folder structure follow Gradle project path > > > Key: LUCENE-9278 > URL: https://issues.apache.org/jira/browse/LUCENE-9278 > Project: Lucene - Core > Issue Type: Task > Components: general/build >Reporter: Tomoko Uchida >Assignee: Tomoko Uchida >Priority: Major > Fix For: master (9.0) > > Time Spent: 7h 10m > Remaining Estimate: 0h > > Current javadoc folder structure is derived from Ant project name. e.g.: > [https://lucene.apache.org/core/8_4_1/analyzers-icu/index.html] > [https://lucene.apache.org/solr/8_4_1/solr-solrj/index.html] > For Gradle build, it should also follow gradle project structure (path) > instead of ant one, to keep things simple to manage [1]. Hence, it will look > like this: > [https://lucene.apache.org/core/9_0_0/analysis/icu/index.html] > [https://lucene.apache.org/solr/9_0_0/solr/solrj/index.html] > [1] The change was suggested at the conversation between Dawid Weiss and I on > a github pr: [https://github.com/apache/lucene-solr/pull/1304]
[jira] [Commented] (LUCENE-9321) Port documentation task to gradle
[ https://issues.apache.org/jira/browse/LUCENE-9321?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17107214#comment-17107214 ] ASF subversion and git services commented on LUCENE-9321: - Commit 010168c57b35e402da3d8776c03307af0785a3bd in lucene-solr's branch refs/heads/master from Uwe Schindler [ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=010168c ] LUCENE-9321, LUCENE-9278: Refactor renderJavadoc to allow relative links with multiple Gradle tasks (#1488) This also automatically collects linked projects by its dependencies, so we don't need to maintain all inter-project javadocs links. Co-authored-by: Dawid Weiss > Port documentation task to gradle > - > > Key: LUCENE-9321 > URL: https://issues.apache.org/jira/browse/LUCENE-9321 > Project: Lucene - Core > Issue Type: Sub-task > Components: general/build >Reporter: Tomoko Uchida >Assignee: Uwe Schindler >Priority: Major > Fix For: master (9.0) > > Attachments: screenshot-1.png > > Time Spent: 4.5h > Remaining Estimate: 0h > > This is a placeholder issue for porting ant "documentation" task to gradle. > The generated documents should be able to be published on lucene.apache.org > web site on "as-is" basis.
[GitHub] [lucene-solr] uschindler merged pull request #1488: LUCENE-9321: Refactor renderJavadoc to allow relative links with multiple Gradle tasks
uschindler merged pull request #1488: URL: https://github.com/apache/lucene-solr/pull/1488
[jira] [Commented] (LUCENE-9370) RegExpQuery should error for inappropriate use of \ character in input
[ https://issues.apache.org/jira/browse/LUCENE-9370?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17107204#comment-17107204 ] Mark Harwood commented on LUCENE-9370: -- The [Java rules|https://docs.oracle.com/javase/8/docs/api/java/util/regex/Pattern.html#bs] for inappropriate backslashing are probably the ones to follow here: {quote}It is an error to use a backslash prior to any alphabetic character that does not denote an escaped construct; these are reserved for future extensions to the regular-expression language. A backslash may be used prior to a non-alphabetic character regardless of whether that character is part of an unescaped construct. {quote} > RegExpQuery should error for inappropriate use of \ character in input > -- > > Key: LUCENE-9370 > URL: https://issues.apache.org/jira/browse/LUCENE-9370 > Project: Lucene - Core > Issue Type: Bug > Components: core/search >Affects Versions: master (9.0) >Reporter: Mark Harwood >Priority: Minor > > The RegExp class is too lenient in parsing user input which can confuse or > mislead users and cause backwards compatibility issues as we enhance regex > support. > In normal regular expression syntax the backslash is used to: > * escape a reserved character like \. > * use certain unreserved characters in a shorthand context e.g. \d means > digits [0-9] > > The leniency bug in RegExp is that it adds an extra rule to this list - any > backslashed characters that don't satisfy the above rules are taken > literally. For example, there's no reason to put a backslash in front of the > letter "p" but we accept \p as the letter p. > Java's Pattern class will throw a parse exception given a meaningless > backslash like \p. > We should too. > In [Lucene-9336|https://issues.apache.org/jira/browse/LUCENE-9336] we added > support for commonly supported regex expressions like `\d`. 
Sadly this is a > breaking change because of the leniency that has allowed \d to be accepted as > the letter d without an exception. Users were likely silently missing results > they were hoping for and we made a BWC problem for ourselves in filling in > the gaps. > I propose we do like other RegEx parsers and error on inappropriate use of > backslashes. > This will be another breaking change so should target 9.0
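The Java behavior the issue cites is easy to check against java.util.regex directly: escaping a reserved character (\.) or using a defined shorthand (\d) is accepted, while a dangling alphabetic escape such as \p is a parse error rather than a literal 'p'. A small demonstration using only the JDK, no Lucene classes:

```java
import java.util.regex.Pattern;
import java.util.regex.PatternSyntaxException;

public class BackslashStrictness {
    public static void main(String[] args) {
        // \. escapes a reserved character: accepted, matches a literal dot.
        check(Pattern.matches("\\.", "."));
        // \d is a defined shorthand construct: accepted, matches a digit.
        check(Pattern.matches("\\d", "7"));
        // \p denotes no construct on its own; java.util.regex rejects it,
        // whereas the lenient Lucene RegExp treated it as the letter 'p'.
        boolean rejected = false;
        try {
            Pattern.compile("\\p");
        } catch (PatternSyntaxException e) {
            rejected = true;
        }
        check(rejected);
        System.out.println("ok");
    }

    private static void check(boolean condition) {
        if (!condition) throw new AssertionError();
    }
}
```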
[GitHub] [lucene-solr] uschindler commented on pull request #1488: LUCENE-9321: Refactor renderJavadoc to allow relative links with multiple Gradle tasks
uschindler commented on pull request #1488: URL: https://github.com/apache/lucene-solr/pull/1488#issuecomment-628558810 I just noticed we have no CHANGES entry for Gradle at all. We should maybe add one for all. So I will merge this without adding a new changes entry.
[GitHub] [lucene-solr] markharwood merged pull request #1515: Lucene-9336: Changes.txt addition for RegExp enhancements
markharwood merged pull request #1515: URL: https://github.com/apache/lucene-solr/pull/1515