I think this is assumed from the discussion above, but it's worth spelling out, because the releaseWizard suggests sending a formal "FAILED" email. For example, here is the one sent for the Solr 9.5.0 RC1 vote:
This vote has FAILED. Reason for fail is: re-spinning the release for SOLR-17149.

On Mon, Feb 5, 2024 at 1:55 PM Jason Gerlowski <gerlowsk...@gmail.com> wrote:

> It's probably cleanest to create a new ticket of type="bug", and then link
> back to SOLR-16879 - that way the "Fix Version" field can be unambiguous.
>
> On Mon, Feb 5, 2024 at 1:33 PM Pierre Salagnac <pierre.salag...@gmail.com> wrote:
>
>> Thanks for your input.
>> Unfortunately there is no parameter for that. This is hardcoded at 5.
>>
>> Yes, I'm already working on a fix.
>>
>> Sorry to hijack this thread for the 9.5 release... Is a new Jira required
>> for such an issue? I'm unclear with this, since the regression was
>> introduced in a version that is already released.
>>
>> On Mon, Feb 5, 2024 at 19:03, Jason Gerlowski <gerlowsk...@gmail.com> wrote:
>>
>>> Interesting - 9.4 has been out there since October, I'm surprised no one
>>> reported this earlier. But I guess there's always a lag for teams to
>>> upgrade to new versions...
>>>
>>> Is the number of "expensive" tasks configurable, such that there's a
>>> workaround for collections with many shards? Assuming not, this does sound
>>> serious enough to "fail" the VOTE as it'd mean that backup/restore is
>>> essentially broken for sufficiently large collections.
>>>
>>> In terms of SOLR-16879 - any chance you're willing to work on a fix Pierre?
>>>
>>> Best,
>>>
>>> Jason
>>>
>>> On Mon, Feb 5, 2024 at 12:52 PM Pierre Salagnac <pierre.salag...@gmail.com> wrote:
>>>
>>>> The regression was introduced in 9.4.
>>>>
>>>> On Mon, Feb 5, 2024 at 18:31, Pierre Salagnac <pierre.salag...@gmail.com> wrote:
>>>>
>>>>> Hi Jason,
>>>>>
>>>>> A regression was introduced in backup/restore for large collections.
>>>>> This was reported in a comment of SOLR-16879 [1].
>>>>> Should this be considered as a blocker for 9.5?
>>>>>
>>>>> [1] https://issues.apache.org/jira/browse/SOLR-16879?focusedCommentId=17813066&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-17813066
>>>>>
>>>>> On Mon, Feb 5, 2024 at 15:44, Jason Gerlowski <gerlowsk...@apache.org> wrote:
>>>>>
>>>>>> Please vote for release candidate 1 for Solr 9.5.0
>>>>>>
>>>>>> The artifacts can be downloaded from:
>>>>>> https://dist.apache.org/repos/dist/dev/solr/solr-9.5.0-RC1-rev-1fb7d127fc064b0bab8435a431d71a44050e654b
>>>>>>
>>>>>> You can run the smoke tester directly with this command:
>>>>>>
>>>>>> python3 -u dev-tools/scripts/smokeTestRelease.py \
>>>>>>   https://dist.apache.org/repos/dist/dev/solr/solr-9.5.0-RC1-rev-1fb7d127fc064b0bab8435a431d71a44050e654b
>>>>>>
>>>>>> You can build a release-candidate of the official docker images (full &
>>>>>> slim) using the following command:
>>>>>>
>>>>>> SOLR_DOWNLOAD_SERVER=https://dist.apache.org/repos/dist/dev/solr/solr-9.5.0-RC1-rev-1fb7d127fc064b0bab8435a431d71a44050e654b/solr && \
>>>>>>   docker build $SOLR_DOWNLOAD_SERVER/9.5.0/docker/Dockerfile.official-full \
>>>>>>     --build-arg SOLR_DOWNLOAD_SERVER=$SOLR_DOWNLOAD_SERVER \
>>>>>>     -t solr-rc:9.5.0-1 && \
>>>>>>   docker build $SOLR_DOWNLOAD_SERVER/9.5.0/docker/Dockerfile.official-slim \
>>>>>>     --build-arg SOLR_DOWNLOAD_SERVER=$SOLR_DOWNLOAD_SERVER \
>>>>>>     -t solr-rc:9.5.0-1-slim
>>>>>>
>>>>>> The vote will be open for at least 72 hours i.e. until 2024-02-08 15:00 UTC.
>>>>>>
>>>>>> [ ] +1 approve
>>>>>> [ ] +0 no opinion
>>>>>> [ ] -1 disapprove (and reason why)
>>>>>>
>>>>>> Here is my +1
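As a follow-up to the docker instructions quoted above: once the two images are built, a quick local sanity check before voting might look like the sketch below. The solr-rc:9.5.0-1 tag comes from the build command in the vote email; the port and the admin system-info endpoint are standard Solr, but the check itself is just a personal habit, not part of the official smoke-test instructions.

    # Start the full RC image in the background and give it a moment to boot
    docker run --rm -d --name solr-rc-check -p 8983:8983 solr-rc:9.5.0-1
    sleep 10

    # Hit the admin system-info endpoint; a JSON response means the node is up
    curl -sf "http://localhost:8983/solr/admin/info/system?wt=json" | head -c 300; echo

    # Repeat with solr-rc:9.5.0-1-slim if desired, then clean up
    docker stop solr-rc-check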
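And on the opening point: the "FAILED" text at the top of the thread is one of the templated emails the releaseWizard walks the release manager through. A minimal sketch of invoking it, assuming the script sits next to the smoke tester referenced in the vote email (I haven't double-checked the exact filename):

    # Run from the root of a Solr source checkout; the wizard is interactive
    # and produces the vote / result / "FAILED" email templates as you go.
    python3 -u dev-tools/scripts/releaseWizard.py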