Thank you for the information, Steve.

Dongjoon.

On 2024/09/25 09:40:25 Steve Loughran wrote:
> Mukund was apparently building the next RC at the weekend, before Anuj
> Modi at Microsoft found a regression. They also found some other
> problems related to scale testing (HADOOP-19279), so it's been some
> last-minute abfs stabilisation.
> 
> Assuming we can get the RC out this week and all is good, we can start
> voting this week; if no problems surface, we ship next week.
> 
> Dongjoon, you are free to build and test yourself, especially on Java 17.
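> 
> For anyone doing the same, a rough sketch of a local build and test run,
> assuming a standard Maven setup (see BUILDING.txt in the source tree for
> the full options; the JDK path below is illustrative):
> 
>   # point JAVA_HOME at a JDK 17 install
>   export JAVA_HOME=/usr/lib/jvm/java-17-openjdk
>   # build everything, skipping tests first
>   mvn clean install -DskipTests
>   # then run the test suites, continuing past failures
>   mvn test -fae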
> 
> I'm also thinking of doing a maintenance release of 3.3.x.
> 
> On Tue, 24 Sept 2024 at 16:39, Dongjoon Hyun <dongj...@apache.org> wrote:
> 
> > Hi, is there any schedule to resume the Apache Hadoop 3.4.1 release?
> >
> > Dongjoon.
> >
> > On 2024/08/16 15:17:28 Steve Loughran wrote:
> > > Afraid I have to say -1 to this iteration, but I promise I'll help
> > > address the issues.
> > >
> > > First, I've cherry-picked a few final changes from branch-3.4,
> > > including this major one:
> > >
> > >   HADOOP-19153. hadoop-common exports logback as a transitive
> > >   dependency (#6999)
> > >
> > > This broke parquet hadoop-test runs unless an explicit exclusion was
> > > added downstream; fixing it will help others upgrade.
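> > >
> > > For anyone stuck on an affected release in the meantime, the
> > > downstream workaround is the usual Maven exclusion; a sketch (the
> > > logback coordinates are the standard ones, the hadoop version is
> > > whatever you consume):
> > >
> > >   <dependency>
> > >     <groupId>org.apache.hadoop</groupId>
> > >     <artifactId>hadoop-common</artifactId>
> > >     <version>3.4.0</version>
> > >     <exclusions>
> > >       <exclusion>
> > >         <groupId>ch.qos.logback</groupId>
> > >         <artifactId>logback-classic</artifactId>
> > >       </exclusion>
> > >       <exclusion>
> > >         <groupId>ch.qos.logback</groupId>
> > >         <artifactId>logback-core</artifactId>
> > >       </exclusion>
> > >     </exclusions>
> > >   </dependency>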
> > >
> > > Second, the cherry-pick branch test run showed parquet JDK
> > > incompatibilities:
> > >
> > > https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6997/1/testReport/
> > >
> > > java.lang.NoSuchMethodError: java.nio.ByteBuffer.position(I)Ljava/nio/ByteBuffer;
> > >     at org.apache.hadoop.thirdparty.protobuf.IterableByteBufferInputStream.read(IterableByteBufferInputStream.java:143)
> > >     at org.apache.hadoop.thirdparty.protobuf.CodedInputStream$StreamDecoder.read(CodedInputStream.java:2080)
> > >     at org.apache.hadoop.thirdparty.protobuf.CodedInputStream$StreamDecoder.tryRefillBuffer(CodedInputStream.java:2831)
> > >     at org.apache.hadoop.thirdparty.protobuf.CodedInputStream$StreamDecoder.refillBuffer(CodedInputStream.java:2777)
> > >     at org.apache.hadoop.thirdparty.protobuf.CodedInputStream$StreamDecoder.readRawByte(CodedInputStream.java:2859)
> > >     at org.apache.hadoop.thirdparty.protobuf.CodedInputStream$StreamDecoder.readRawVarint64SlowPath(CodedInputStream.java:2648)
> > >     at org.apache.hadoop.thirdparty.protobuf.CodedInputStream$StreamDecoder.readRawVarint64(CodedInputStream.java:2641)
> > >     at org.apache.hadoop.thirdparty.protobuf.CodedInputStream$StreamDecoder.readSInt64(CodedInputStream.java:2497)
> > >     at org.apache.hadoop.hdfs.protocol.BlockListAsLongs$BufferDecoder$1.next(BlockListAsLongs.java:419)
> > >     at org.apache.hadoop.hdfs.protocol.BlockListAsLongs$BufferDecoder$1.next(BlockListAsLongs.java:397)
> > >     at org.apache.hadoop.hdfs.protocol.BlockListAsLongs$BufferDecoder.getBlockListAsLongs(BlockListAsLongs.java:375)
> > >     at org.apache.hadoop.hdfs.protocol.TestBlockListAsLongs.checkReport(TestBlockListAsLongs.java:156)
> > >     at org.apache.hadoop.hdfs.protocol.TestBlockListAsLongs.testFuzz(TestBlockListAsLongs.java:139)
> > >     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> > >     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> > >     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> > >     at java.lang.reflect.Method.invoke(Method.java:498)
> > >     at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
> > >     at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> > >     at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
> > >     at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> > >     at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
> > >     at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
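> > >
> > > For context, a short sketch of the underlying incompatibility (my
> > > reading of it, not taken from the protobuf source): since JDK 9,
> > > ByteBuffer.position(int) has a covariant override returning
> > > ByteBuffer rather than the Java 8 Buffer. Code compiled against a
> > > newer JDK without --release 8 links against the new descriptor,
> > > which doesn't exist on a Java 8 runtime:
> > >
> > >   import java.nio.Buffer;
> > >   import java.nio.ByteBuffer;
> > >
> > >   public class ByteBufferCompat {
> > >     public static void main(String[] args) {
> > >       ByteBuffer buf = ByteBuffer.allocate(16);
> > >       // Compiled on JDK 9+ without --release 8, this call is
> > >       // recorded as ByteBuffer.position(I)Ljava/nio/ByteBuffer;
> > >       // and fails with NoSuchMethodError on a Java 8 JVM, where
> > >       // only Buffer.position(I)Ljava/nio/Buffer; exists.
> > >       buf.position(8);
> > >       // The Java-8-safe pattern: call through the supertype.
> > >       ((Buffer) buf).position(4);
> > >       System.out.println(buf.position());
> > >     }
> > >   }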
> > >
> > > This is addressed by "HADOOP-19163. Use protobuf-java 3.25.3", which
> > > bumped up the shaded version and then modified the hadoop
> > > dependencies to match.
> > >
> > >
> > > I'm going to release the thirdparty jar 1.3.0 with the relevant
> > > updates, then upgrade 3.4.1+ to use it once that's out (I'll have the
> > > pending PRs up):
> > >
> > > HADOOP-19252. Release Hadoop Third-Party 1.3.0
> > > https://issues.apache.org/jira/browse/HADOOP-19252
> > >
> > > The other thing we should all look at is making sure we are current
> > > with all the dependency updates in trunk, *without doing any last
> > > minute update of jar versions at all*.
> > >
> > > I've got a PR up for the kafka update:
> > > https://github.com/apache/hadoop/pull/7000 ; I'll merge it if yetus
> > > doesn't complain.
> > >
> > >
> > >
> > >
> > > On Thu, 8 Aug 2024 at 19:06, Mukund Madhav Thakur
> > > <mtha...@cloudera.com.invalid> wrote:
> > >
> > > > Apache Hadoop 3.4.1
> > > >
> > > >
> > > >
> > > > With Steve's help, I have put together a release candidate (RC1)
> > > > for Hadoop 3.4.1.
> > > >
> > > >
> > > >
> > > > What we would like is for anyone who can to verify the tarballs,
> > > > especially anyone who can try the arm64 binaries, as we want to
> > > > include them too.
> > > >
> > > >
> > > >
> > > > The RC is available at:
> > > >
> > > > https://dist.apache.org/repos/dist/dev/hadoop/hadoop-3.4.1-RC1/
> > > >
> > > >
> > > >
> > > > The git tag is release-3.4.1-RC1, commit
> > > > 247daf0f827adc96a3847bb40e0fec3fc85f33bd
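> > > >
> > > > To check out and verify the tag locally, something like this should
> > > > do (plain git, nothing Hadoop-specific):
> > > >
> > > >   git fetch --tags https://github.com/apache/hadoop.git
> > > >   git checkout release-3.4.1-RC1
> > > >   git rev-parse HEAD    # should print the commit id above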
> > > >
> > > >
> > > >
> > > > The maven artifacts are staged at
> > > > https://repository.apache.org/content/repositories/orgapachehadoop-1417
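> > > >
> > > > For downstream testing against the staged artifacts, one way is to
> > > > add the staging repository to your pom (a sketch; the id is just a
> > > > local label):
> > > >
> > > >   <repositories>
> > > >     <repository>
> > > >       <id>hadoop-3.4.1-rc1</id>
> > > >       <url>https://repository.apache.org/content/repositories/orgapachehadoop-1417</url>
> > > >     </repository>
> > > >   </repositories>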
> > > >
> > > >
> > > >
> > > > You can find my public key at:
> > > >
> > > > https://dist.apache.org/repos/dist/release/hadoop/common/KEYS
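> > > >
> > > > A minimal verification pass over a download might look like the
> > > > following (the tarball name is illustrative; use whichever artifact
> > > > you pull from the RC directory):
> > > >
> > > >   gpg --import KEYS
> > > >   gpg --verify hadoop-3.4.1.tar.gz.asc hadoop-3.4.1.tar.gz
> > > >   shasum -a 512 -c hadoop-3.4.1.tar.gz.sha512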
> > > >
> > > >
> > > >
> > > > Change log:
> > > > https://dist.apache.org/repos/dist/dev/hadoop/hadoop-3.4.1-RC1/CHANGELOG.md
> > > >
> > > >
> > > >
> > > > Release notes:
> > > > https://dist.apache.org/repos/dist/dev/hadoop/hadoop-3.4.1-RC1/RELEASENOTES.md
> > > >
> > > >
> > > >
> > > > This is off branch-3.4.
> > > >
> > > >
> > > >
> > > > Key changes include:
> > > >
> > > >
> > > >
> > > > * Bulk Delete API (https://issues.apache.org/jira/browse/HADOOP-18679);
> > > >   see the usage sketch after this list.
> > > >
> > > > * Fixes and enhancements in the Vectored IO API.
> > > >
> > > > * Improvements in the Hadoop Azure connector.
> > > >
> > > > * Fixes and improvements after the upgrade to the AWS V2 SDK in the
> > > >   S3A connector.
> > > >
> > > > * This release includes Arm64 binaries; please can anyone with
> > > >   compatible systems validate these.
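> > > >
> > > > For anyone who hasn't seen the bulk delete API yet, a rough usage
> > > > sketch based on the HADOOP-18679 design (check the interface
> > > > javadocs in the RC for the exact signatures; the paths here are
> > > > illustrative):
> > > >
> > > >   import java.util.Arrays;
> > > >   import java.util.List;
> > > >   import java.util.Map;
> > > >   import org.apache.hadoop.conf.Configuration;
> > > >   import org.apache.hadoop.fs.BulkDelete;
> > > >   import org.apache.hadoop.fs.FileSystem;
> > > >   import org.apache.hadoop.fs.Path;
> > > >
> > > >   public class BulkDeleteExample {
> > > >     public static void main(String[] args) throws Exception {
> > > >       Path base = new Path("s3a://bucket/data");
> > > >       FileSystem fs = base.getFileSystem(new Configuration());
> > > >       // createBulkDelete() comes from the BulkDeleteSource
> > > >       // interface, which FileSystem implements in 3.4.1.
> > > >       try (BulkDelete bd = fs.createBulkDelete(base)) {
> > > >         List<Path> files = Arrays.asList(
> > > >             new Path(base, "part-0000"),
> > > >             new Path(base, "part-0001"));
> > > >         // Returns the paths which could not be deleted, with an
> > > >         // error string for each; an empty list means all succeeded.
> > > >         List<Map.Entry<Path, String>> failures = bd.bulkDelete(files);
> > > >         failures.forEach(e ->
> > > >             System.err.println(e.getKey() + ": " + e.getValue()));
> > > >       }
> > > >     }
> > > >   }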
> > > >
> > > >
> > > >
> > > > Note: because the arm64 binaries are built separately, on a
> > > > different platform and JVM, their jar files may not match those of
> > > > the x86 release, and therefore the maven artifacts. I don't think
> > > > this is an issue (the ASF actually releases source tarballs; the
> > > > binaries are there as a convenience only, though with the maven
> > > > repo that's a bit blurred).
> > > >
> > > >
> > > >
> > > > The only way to be fully consistent would be to untar the
> > > > x86.tar.gz, overwrite its binaries with the arm ones, retar, sign,
> > > > and push that out for the vote. Even automating that would be
> > > > risky.
> > > >
> > > >
> > > >
> > > > As this is just a first try to get this out, there might be issues.
> > > > Please try the release and let me know. Also let me know if you
> > > > would like to add something to 3.4.1.
> > > >
> > > >
> > > > I also found two issues in hadoop-yarn-ui while building the arm
> > > > binaries:
> > > >
> > > > https://issues.apache.org/jira/browse/YARN-11712
> > > >
> > > > https://issues.apache.org/jira/browse/YARN-11713
> > > >
> > > >
> > > >
> > > > Thanks,
> > > >
> > > > Mukund
> > > >
> > >
> >
> 
