+1.  There have been a few times I've attempted to run the verification
scripts.  They failed, but I was fairly confident the failures came from
my environment interacting badly with the verification script rather
than from a defect in the software itself, so I didn't take the time to
debug the script issues.  So even if there were a real issue, I doubt
the manual verification process would help me catch it.

Also, most devs seem to be on fairly consistent development
environments (Ubuntu or MacBook).  So rather than spending time enabling
many people to verify that Ubuntu works, we could spend that time
building extra CI environments that provide more coverage.
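
(For what it's worth, the isolation I have in mind would look roughly
like the sketch below.  It is only a sketch: the image, the script path
and the version arguments are illustrative, not the actual entry point.)

    import subprocess

    # Hypothetical sketch: run the source verification inside a clean
    # container so the host environment cannot leak into the script.
    def verify_in_container(version: str, rc: str) -> None:
        subprocess.run(
            [
                "docker", "run", "--rm", "ubuntu:22.04", "bash", "-c",
                # Placeholder command; the real script and its inputs
                # would be baked into (or mounted into) the image.
                f"./verify-release-candidate.sh {version} {rc}",
            ],
            check=True,
        )

    verify_in_container("15.0.0", "1")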

On Fri, Jan 19, 2024 at 1:49 PM Jacob Wujciak-Jens
<ja...@voltrondata.com.invalid> wrote:

> I concur: a minimally scoped verification script for the actual voting
> process, without any binary verification etc., should be created.
> Making it easier to verify a release will lower the burden to
> participate in the vote, which is good for the community and will even
> be necessary if we ever want to increase the release cadence as
> previously discussed.
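>
> Something along these lines might already cover the voting requirement
> (a rough sketch; the file name is illustrative, and a source build and
> test run would follow these checks):
>
>     import hashlib
>     import subprocess
>
>     tarball = "apache-arrow-15.0.0.tar.gz"
>
>     # Check the detached GPG signature against the committers' keys.
>     subprocess.run(["gpg", "--verify", tarball + ".asc", tarball],
>                    check=True)
>
>     # Check the sha512 checksum published alongside the tarball.
>     with open(tarball, "rb") as f:
>         digest = hashlib.sha512(f.read()).hexdigest()
>     expected = open(tarball + ".sha512").read().split()[0]
>     assert digest == expected, "checksum mismatch"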
>
> In my opinion it would also mean that the binaries are no longer part
> of the release, which would help in situations like the release of
> Python 3.12 just after Arrow 14.0.0, when lots of users ran into
> issues because there were no 14.0.0 wheels for 3.12.
>
> While it would still be nice to make reproducing CI errors easier by
> having better ways to restart a failed script, this is of much lower
> importance than improving the release process.
>
> Jacob
>
> On Fri, Jan 19, 2024 at 7:38 PM Andrew Lamb <al...@influxdata.com> wrote:
>
> > I would second the notion that manually running tests that are
> > already covered by CI as part of the release process is of (very)
> > limited value.
> >
> > While we do the same thing (compile and run some tests) as part of
> > the Rust release, this has never caught any serious defect I am aware
> > of, and we only run a subset of tests (e.g. not the tests for
> > integration with other Arrow versions).
> >
> > I think reducing the burden of releases would benefit everyone.
> >
> > Andrew
> >
> > On Fri, Jan 19, 2024 at 1:08 PM Antoine Pitrou <anto...@python.org> wrote:
> >
> > >
> > > Well, if the main objective is to just follow the ASF Release
> > > guidelines, then our verification process can be simplified
> > > drastically.
> > >
> > > The ASF indeed just requires:
> > > """
> > > Every ASF release MUST contain one or more source packages, which MUST
> > > be sufficient for a user to build and test the release provided they
> > > have access to the appropriate platform and tools. A source release
> > > SHOULD not contain compiled code.
> > > """
> > >
> > > So, basically, if the source tarball is enough to compile Arrow on a
> > > single platform with a single set of tools, then we're ok. :-)
> > >
> > > The rest is just an additional burden that we voluntarily inflict
> > > on ourselves.  *Ideally*, our continuous integration should be able
> > > to detect any build-time or runtime issue on supported platforms.
> > > There have been rare cases where issues were detected at release
> > > time thanks to the release verification script, but these are a
> > > tiny fraction of all the issues routinely detected in the form of
> > > CI failures.  So there doesn't seem to be a reason to believe that
> > > manual release verification brings significant benefits compared to
> > > regular CI.
> > >
> > > Regards
> > >
> > > Antoine.
> > >
> > >
> > > On 19/01/2024 at 11:42, Raúl Cumplido wrote:
> > > > Hi,
> > > >
> > > > One of the challenges we have when doing a release is
> > > > verification and voting.
> > > >
> > > > Currently the Arrow verification process is quite long, tedious,
> > > > and error-prone.
> > > >
> > > > I would like to use this email to get feedback and user requests in
> > > > order to improve the process.
> > > >
> > > > Several things are already on my mind:
> > > >
> > > > One thing that is quite annoying is that any flaky failure makes
> > > > us restart the process and possibly requires downloading
> > > > everything again.  It would be great to have some kind of retry
> > > > mechanism that lets us resume from where the run failed instead of
> > > > redoing the previously successful jobs.
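> > > >
> > > > For example (just a sketch, the names are made up), the script
> > > > could record completed steps in a state file and skip them when
> > > > re-run:
> > > >
> > > >     import json, os
> > > >
> > > >     STATE = "verify-state.json"  # steps that already succeeded
> > > >
> > > >     def run_steps(steps):
> > > >         done = json.load(open(STATE)) if os.path.exists(STATE) else []
> > > >         for name, fn in steps:  # steps: list of (name, callable)
> > > >             if name in done:
> > > >                 continue  # succeeded on a previous run, skip it
> > > >             fn()  # raises on failure; state file keeps progress
> > > >             done.append(name)
> > > >             with open(STATE, "w") as f:
> > > >                 json.dump(done, f)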
> > > >
> > > > We do have a bunch of flags to run specific parts, but using
> > > > them requires knowledge and time to go over the different options,
> > > > so the UX could be improved.
> > > >
> > > > Based on the ASF release policy [1], in order to cast a +1 vote
> > > > we have to validate the source code packages, but we are not
> > > > required to validate binaries locally.  Several binaries are
> > > > currently tested using Docker images, and they are already tested
> > > > and validated on CI.  Our release verification documentation
> > > > currently instructs reviewers to perform binary validation.  I
> > > > plan to update the documentation and move it to the official docs
> > > > instead of the wiki [2].
> > > >
> > > > I would appreciate input on the topic so we can improve the
> > > > current process.
> > > >
> > > > Thanks everyone,
> > > > Raúl
> > > >
> > > > [1] https://www.apache.org/legal/release-policy.html#release-approval
> > > > [2] https://cwiki.apache.org/confluence/display/ARROW/How+to+Verify+Release+Candidates
