Please do!

On Tue, 1 Oct 2024 at 20:54, Wei-Chiu Chuang <weic...@apache.org> wrote:

> Hi, I'm late to the party, but I'd like to build and test this release with
> Ozone and HBase.
>
> On Tue, Oct 1, 2024 at 2:12 AM Mukund Madhav Thakur
> <mtha...@cloudera.com.invalid> wrote:
>
> > Thanks @Dongjoon Hyun <dongjoon.h...@gmail.com> for trying out the RC
> > and finding this bug. This has to be fixed.
> > It would be great if others could give the RC a try so that we learn of
> > any issues earlier.
> >
> > Thanks
> > Mukund
> >
> > On Tue, Oct 1, 2024 at 2:21 AM Steve Loughran
> <ste...@cloudera.com.invalid
> > >
> > wrote:
> >
> > > ok, we will have to consider that a -1
> > >
> > > Interestingly, we haven't seen that on any of our internal QE; maybe
> > > none of the requests were overlapping.
> > >
> > > I was just looking towards a +0 because of
> > >
> > > https://issues.apache.org/jira/browse/HADOOP-19295
> > >
> > > *Unlike the v1 SDK, PUT/POST of data now shares the same timeout as all
> > > other requests, and on a slow network connection requests time out.
> > > Furthermore, large file uploads can generate the same failure
> > > condition because the competing block uploads reduce the bandwidth for
> > > the others.*
> > >
> > > I'll describe more on the JIRA. The fix is straightforward: set a much
> > > longer timeout, such as 15 minutes. It does mean that problems with
> > > other calls will take that long to time out.
> > >
> > > Note that in previous releases that request timeout *did not* apply to
> > > the big upload. This behaviour has been reverted.
> > >
> > > This is not a regression from 3.4.0; it had the same problem, just
> > > nobody had noticed. That's what comes of doing a lot of the testing
> > > within AWS, and the people testing outside it (me) not trying to
> > > upload files > 1GB. I have now.
> > >
> > > Anyway, I do not consider that a -1, because it isn't a regression and
> > > it's straightforward to work around in a site configuration.
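For anyone hitting the timeout before a fixed release is out, a sketch of the kind of site-configuration workaround meant here. The property is the S3A connector's request-timeout option and the 15-minute value follows the suggestion above, but treat both as assumptions to verify against your release's S3A documentation:

```xml
<!-- core-site.xml (sketch): raise the shared request timeout so large
     block uploads on slow links do not trip it. Verify the property
     name against your Hadoop version's S3A docs before relying on it. -->
<property>
  <name>fs.s3a.connection.request.timeout</name>
  <value>15m</value>
</property>
```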
> > >
> > > Other than that, my findings were:
> > > -Pnative breaks enforcer on macOS (build only; the fix is to upgrade
> > > the enforcer version)
> > >
> > > -native code probes on my Ubuntu Raspberry Pi 5 (don't laugh -this is
> > > the most powerful computer I personally own) warn about a missing link
> > > in the native checks.
> > >  I haven't yet set up OpenSSL bindings for s3a and abfs to see if they
> > > actually work.
> > >
> > >   [hadoopq] 2024-09-27 19:52:16,544 WARN crypto.OpensslCipher: Failed
> to
> > > load OpenSSL Cipher.
> > >   [hadoopq] java.lang.UnsatisfiedLinkError: EVP_CIPHER_CTX_block_size
> > >   [hadoopq]     at
> org.apache.hadoop.crypto.OpensslCipher.initIDs(Native
> > > Method)
> > >   [hadoopq]     at
> > > org.apache.hadoop.crypto.OpensslCipher.<clinit>(OpensslCipher.java:90)
> > >   [hadoopq]     at
> > > org.apache.hadoop.util.NativeLibraryChecker.main(NativeLibraryChecker.
> > >
> > > Yours looks like it is. A pity -but thank you for the testing. Give it
> > > a couple more days to see if people report any other issues.
> > >
> > > Mukund has been doing all the work on this; I'll see how much I can do
> > > myself to share the joy.
> > >
> > > On Sun, 29 Sept 2024 at 06:24, Dongjoon Hyun <dongj...@apache.org>
> > wrote:
> > >
> > > > Unfortunately, it turns out to be a regression in addition to a
> > breaking
> > > > change.
> > > >
> > > > In short, HADOOP-19098 (or more) makes Hadoop 3.4.1 fail even when
> > > > users give disjoint ranges.
> > > >
> > > > I filed a Hadoop JIRA issue and a PR. Please take a look at that.
> > > >
> > > > - HADOOP-19291. `CombinedFileRange.merge` should not convert disjoint
> > > > ranges into overlapped ones
> > > > - https://github.com/apache/hadoop/pull/7079
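For anyone auditing their own read patterns against the new check, a minimal Java sketch of the disjoint-vs-overlapping distinction, with ranges as (offset, length) pairs. This is an illustration only, not the actual `CombinedFileRange.merge` or Hadoop validation code:

```java
// Hedged illustration (not the Hadoop implementation): two ranges overlap
// when, after sorting by offset, one starts before the previous one ends.
import java.util.Arrays;
import java.util.Comparator;

public class RangeCheck {
    /** Returns true if any two (offset, length) ranges overlap. */
    static boolean hasOverlap(long[][] ranges) {
        long[][] sorted = ranges.clone();
        Arrays.sort(sorted, Comparator.comparingLong((long[] r) -> r[0]));
        for (int i = 1; i < sorted.length; i++) {
            // overlap: this range starts before the previous one ends
            if (sorted[i][0] < sorted[i - 1][0] + sorted[i - 1][1]) {
                return true;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        long[][] disjoint = {{0, 100}, {150, 100}};   // gap between them
        long[][] overlapping = {{0, 100}, {50, 100}}; // second starts inside first
        System.out.println(hasOverlap(disjoint));     // false
        System.out.println(hasOverlap(overlapping));  // true
    }
}
```

Note that merely adjacent ranges (e.g. (0, 100) and (100, 50)) are still disjoint under this definition, which is why a merge step that turns disjoint inputs into overlapping combined ranges is a bug rather than expected behaviour.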
> > > >
> > > > I believe this is a Hadoop release blocker from both the Apache ORC
> > > > and Apache Parquet projects' perspective.
> > > >
> > > > Dongjoon.
> > > >
> > > > On 2024/09/29 03:16:18 Dongjoon Hyun wrote:
> > > > > Thank you for 3.4.1 RC2.
> > > > >
> > > > > HADOOP-19098 (Vector IO: consistent specified rejection of
> > > > > overlapping ranges) seems to be a hard breaking change in 3.4.1.
> > > > >
> > > > > Do you think we can have an option to handle the overlapping ranges
> > > > > in the Hadoop layer, instead of introducing a breaking change to
> > > > > users in a maintenance release?
> > > > >
> > > > > Dongjoon.
> > > > >
> > > > > On 2024/09/25 20:13:48 Mukund Madhav Thakur wrote:
> > > > > > Apache Hadoop 3.4.1
> > > > > >
> > > > > >
> > > > > > With help from Steve I have put together a release candidate
> (RC2)
> > > for
> > > > > > Hadoop 3.4.1.
> > > > > >
> > > > > >
> > > > > > We would like anyone who can to verify the tarballs, especially
> > > > > >
> > > > > > anyone who can try the arm64 binaries, as we want to include them too.
> > > > > >
> > > > > >
> > > > > > The RC is available at:
> > > > > >
> > > > > > https://dist.apache.org/repos/dist/dev/hadoop/hadoop-3.4.1-RC2/
> > > > > >
> > > > > >
> > > > > > The git tag is release-3.4.1-RC2, commit
> > > > > > b3a4b582eeb729a0f48eca77121dd5e2983b2004
> > > > > >
> > > > > >
> > > > > > The maven artifacts are staged at
> > > > > >
> > > > > >
> > > >
> > https://repository.apache.org/content/repositories/orgapachehadoop-1426
> > > > > >
> > > > > >
> > > > > > You can find my public key at:
> > > > > >
> > > > > > https://dist.apache.org/repos/dist/release/hadoop/common/KEYS
> > > > > >
> > > > > >
> > > > > > Change log
> > > > > >
> > > > > >
> > > >
> > >
> >
> https://dist.apache.org/repos/dist/dev/hadoop/hadoop-3.4.1-RC2/CHANGELOG.md
> > > > > >
> > > > > >
> > > > > > Release notes
> > > > > >
> > > > > >
> > > >
> > >
> >
> https://dist.apache.org/repos/dist/dev/hadoop/hadoop-3.4.1-RC2/RELEASENOTES.md
> > > > > >
> > > > > >
> > > > > > This is off branch-3.4.
> > > > > >
> > > > > >
> > > > > > Key changes include
> > > > > >
> > > > > >
> > > > > > * Bulk Delete API.
> > > https://issues.apache.org/jira/browse/HADOOP-18679
> > > > > >
> > > > > > * Fixes and enhancements in Vectored IO API.
> > > > > >
> > > > > > * Improvements in Hadoop Azure connector.
> > > > > >
> > > > > > * Fixes and improvements post upgrade to AWS V2 SDK in
> > S3AConnector.
> > > > > >
> > > > > > * This release includes Arm64 binaries. Could anyone with
> > > > > >
> > > > > >   compatible systems please validate these?
> > > > > >
> > > > > >
> > > > > > Note, because the arm64 binaries are built separately on a different
> > > > > >
> > > > > > platform and JVM, their jar files may not match those of the x86
> > > > > >
> > > > > > release -and therefore the maven artifacts. I don't think this is
> > > > > >
> > > > > > an issue (the ASF actually releases source tarballs; the binaries are
> > > > > >
> > > > > > there for help only, though with the maven repo that's a bit blurred).
> > > > > >
> > > > > >
> > > > > > The only way to be consistent would be to untar the x86.tar.gz,
> > > > > >
> > > > > > overwrite its binaries with the arm stuff, retar, sign, and push
> > > > > >
> > > > > > out for the vote. Even automating that would be risky.
> > > > > >
> > > > > >
> > > > > > Please try the release and vote. The vote will run for 5 days.
> > > > > >
> > > > > >
> > > > > >
> > > > > > Thanks,
> > > > > >
> > > > > > Mukund
> > > > > >
> > > > >
> > > > >
> ---------------------------------------------------------------------
> > > > > To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
> > > > > For additional commands, e-mail: common-dev-h...@hadoop.apache.org
> > > > >
> > > > >
> > > >
> > > >
> > >
> >
>
