TL;DR: I think 3.4.1 RC2 is okay from Ozone's perspective. Most of
the issues were introduced in 3.4.0 and are not too terrible.
I am planning to adopt Hadoop 3.4.0 for Ozone 2.0, so incompatibilities
are acceptable.

Thanks for the tips.

Yes, I had to bump the hadoop-thirdparty version as well. That is probably
because Ozone uses Hadoop RPC directly and therefore has a direct
dependency on hadoop-thirdparty.
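
For anyone following along, the bump is roughly the following in the pom
(the property names here are illustrative; Ozone's pom may spell them
differently):

    <properties>
      <!-- Illustrative: bump Hadoop and hadoop-thirdparty in lockstep. -->
      <hadoop.version>3.4.1</hadoop.version>
      <!-- 1.3.0 is the hadoop-thirdparty release Steve mentions below. -->
      <hadoop-thirdparty.version>1.3.0</hadoop-thirdparty.version>
    </properties>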

I was able to compile and pass most tests, but there were a few hurdles.

For some strange reason, Ozone's contract tests failed with a
java.lang.NoSuchMethodError. I suspect it's HADOOP-18996
<https://issues.apache.org/jira/browse/HADOOP-18996>.
In any case, I resolved it by declaring a direct dependency on AssertJ
(sketch below), so this is fine.
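
For the record, the workaround is just a test-scoped dependency along these
lines (the version shown is a placeholder; it should match the AssertJ
version Hadoop 3.4.1's test utilities were built against):

    <dependency>
      <groupId>org.assertj</groupId>
      <artifactId>assertj-core</artifactId>
      <!-- Placeholder version; align with Hadoop's assertj-core version. -->
      <version>3.24.2</version>
      <scope>test</scope>
    </dependency>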

There is another java.lang.NoSuchMethodError in Ozone's
TestDelegationToken. I suspect it's due to HADOOP-17317
<https://issues.apache.org/jira/browse/HADOOP-17317>, which updated
dnsjava: Hadoop is on dnsjava 3.6.1 while Ozone is on dnsjava 2.1.9.
I think Ozone should update dnsjava anyway. It would mean Ozone can no
longer support Hadoop 2.10 in secure mode, but I think that's okay. If
Hadoop 2 support is still needed, I can create a build profile for Hadoop 2
(see the sketch below).
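
Something like the sketch below; the profile id and the version pins are
hypothetical:

    <profile>
      <id>hadoop2</id>
      <properties>
        <!-- Hypothetical pins: keep the old dnsjava for Hadoop 2.10
             secure-mode compatibility. -->
        <hadoop.version>2.10.2</hadoop.version>
        <dnsjava.version>2.1.9</dnsjava.version>
      </properties>
    </profile>

It would be selected with -Phadoop2 at build time.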

HADOOP-18502 <https://issues.apache.org/jira/browse/HADOOP-18502> caused a
behavior change in Hadoop metrics. Looking at the description, I think it's
a reasonable fix, so I updated the Ozone test accordingly.

Ozone's Docker-based acceptance tests are failing because of HADOOP-17524
<https://issues.apache.org/jira/browse/HADOOP-17524>, which removed the
EventCounter log4j appender. This is an incompatible change, but it looks
easy to deal with:

Testing audit parser | FAIL |
(log excerpt from
https://github.com/jojochuang/ozone/actions/runs/11151108809/job/30994606577)

log4j:ERROR Could not instantiate class [org.apache.hadoop.log.metrics.EventCounter].
java.lang.ClassNotFoundException: org.apache.hadoop.log.metrics.EventCounter
  at java.base/jdk.internal.loader.BuiltinClassLoader.loadClass(BuiltinClassLoader.java:641)
  at java.base/jdk.internal.loader.ClassLoaders$AppClassLoader.loadClass(ClassLoaders.java:188)
  at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:520)
  at java.base/java.lang.Class.forName0(Native Method)
  at java.base/java.lang.Class.forName(Class.java:375)
  at org.apache.log4j.helpers.Loader.loadClass(Loader.java:159)
  at org.apache.log4j.helpers.OptionConverter.instantiateByClassName(OptionConverter.java:299)
  at org.apache.log4j.helpers.OptionConverter.instantiateByKey(OptionConverter.java:122)
  at org.apache.log4j.PropertyConfigurator.parseAppender(PropertyConfigurator.java:728)
  at org.apache.log4j.PropertyConfigurator.parseCategory(PropertyConfigurator.java:711)
  at org.apache.log4j.PropertyConfigurator.configureRootCategory(PropertyCon...
[ Message content over the limit has been removed. ]
  ...4j.LoggerFactory.bind(LoggerFactory.java:199)
  at org.slf4j.LoggerFactory.performInitialization(LoggerFactory.java:186)
  at org.slf4j.LoggerFactory.getProvider(LoggerFactory.java:496)
  at org.slf4j.LoggerFactory.getILoggerFactory(LoggerFactory.java:482)
  at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:431)
  at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:457)
  at org.apache.hadoop.ozone.audit.parser.common.DatabaseHelper.<clinit>(DatabaseHelper.java:54)
  at org.apache.hadoop.ozone.audit.parser.handler.QueryCommandHandler.call(QueryCommandHandler.java:54)
  at org.apache.hadoop.ozone.audit.parser.handler.QueryCommandHandler.call(QueryCommandHandler.java:34)
  at picocli.CommandLine.executeUserObject(CommandLine.java:2041)
  at picocli.CommandLine.access$1500(CommandLine.java:148)
  at picocli.CommandLine$RunLast.executeUserObjectOfLastSubcommandWithSameParent(CommandLine.java:2461)
  at picocli.CommandLine$RunLast.handle(CommandLine.java:2453)
  at picocli.CommandLine$RunLast.handle(CommandLine.java:2415)
  at picocli.CommandLine$AbstractParseResultHandler.execute(CommandLine.java:2273)
  at picocli.CommandLine$RunLast.execute(CommandLine.java:2417)
  at picocli.CommandLine.execute(CommandLine.java:2170)
  at org.apache.hadoop.hdds.cli.GenericCli.execute(GenericCli.java:100)
  at org.apache.hadoop.hdds.cli.GenericCli.run(GenericCli.java:91)
  at org.apache.hadoop.ozone.audit.parser.AuditParser.main(AuditParser.java:54)
log4j:ERROR Could not instantiate appender named "EventCounter".
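
The fix on the Ozone side is presumably just to drop the EventCounter
references from the log4j.properties files used by the acceptance tests.
As a sketch, in the stock Hadoop configuration the offending lines looked
like the ones below; the exact lines in Ozone's copies may differ:

    # Remove "EventCounter" from the root logger list...
    log4j.rootLogger=${hadoop.root.logger}, EventCounter
    # ...and delete its appender definition, since HADOOP-17524 removed
    # org.apache.hadoop.log.metrics.EventCounter from hadoop-common.
    log4j.appender.EventCounter=org.apache.hadoop.log.metrics.EventCounter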

On Wed, Oct 2, 2024 at 12:06 PM Steve Loughran <ste...@cloudera.com.invalid>
wrote:

> you using the hadoop thirdparty jar? there is a 1.3.0 release out
>
> On Wed, 2 Oct 2024 at 17:01, Wei-Chiu Chuang <weic...@apache.org> wrote:
>
> > The HBase project is adding support for Hadoop 3.4.0, and I had to add a
> > few changes on top of that to make the HBase shading pass (license issues
> > due to transitive dependencies and so on). Those are quite common when
> > updating to a new Hadoop version.
> >
> > But apart from that, it builds and the unit tests passed:
> > https://github.com/apache/hbase/pull/6331. There was one failure, but it
> > passes locally for me.
> > One more thing to add: HBase master now requires JDK 17 or higher to
> > build. That just works out of the box.
> >
> > Ozone is a separate story.
> > https://github.com/jojochuang/ozone/actions/runs/11134281812/job/30942713712
> > I had to make a code change due to Ozone's use of Hadoop's non-public
> > static variables. So that's okay.
> > I am having trouble with the unit tests (the Docker-based acceptance
> > tests don't work yet due to the lack of Hadoop 3.4.1 images), due to
> > mixed versions of protobuf (or so I thought).
> >
> > There are failures like this that look similar to HADOOP-9845
> > <https://issues.apache.org/jira/browse/HADOOP-9845>, so I suspect it's
> > due to the protobuf version being updated from 3.7 to 3.25. I guess I can
> > update Ozone's protobuf version to match what's in hadoop-thirdparty.
> >
> > com.google.protobuf.ServiceException: java.lang.UnsupportedOperationException: This is supposed to be overridden by subclasses.
> >   at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:264)
> >   at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:132)
> >   at com.sun.proxy.$Proxy94.submitRequest(Unknown Source)
> >   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> >   at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> >   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> >   at java.lang.reflect.Method.invoke(Method.java:498)
> >   at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:437)
> >   at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:170)
> >   at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:162)
> >   at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:100)
> >   at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:366)
> >   at com.sun.proxy.$Proxy94.submitRequest(Unknown Source)
> >   at org.apache.hadoop.ozone.om.protocolPB.Hadoop3OmTransport.submitRequest(Hadoop3OmTransport.java:80)
> >   at org.apache.hadoop.ozone.om.protocolPB.OzoneManagerProtocolClientSideTranslatorPB.submitRequest(OzoneManagerProtocolClientSideTranslatorPB.java:338)
> >   at org.apache.hadoop.ozone.om.protocolPB.OzoneManagerProtocolClientSideTranslatorPB.getServiceInfo(OzoneManagerProtocolClientSideTranslatorPB.java:1863)
> >   at org.apache.hadoop.ozone.client.rpc.RpcClient.<init>(RpcClient.java:273)
> >   at org.apache.hadoop.ozone.client.OzoneClientFactory.getClientProtocol(OzoneClientFactory.java:248)
> >   at org.apache.hadoop.ozone.client.OzoneClientFactory.getClientProtocol(OzoneClientFactory.java:231)
> >   at org.apache.hadoop.ozone.client.OzoneClientFactory.getRpcClient(OzoneClientFactory.java:151)
> >   at org.apache.hadoop.ozone.om.OmTestManagers.<init>(OmTestManagers.java:124)
> >   at org.apache.hadoop.ozone.om.OmTestManagers.<init>(OmTestManagers.java:83)
> >   at org.apache.hadoop.ozone.security.acl.TestOzoneNativeAuthorizer.setup(TestOzoneNativeAuthorizer.java:147)
> >   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> >   at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> >   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> >   at java.lang.reflect.Method.invoke(Method.java:498)
> >   at org.junit.platform.commons.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:728)
> >   at org.junit.jupiter.engine.execution.MethodInvocation.proceed(MethodInvocation.java:60)
> >   at org.junit.jupiter.engine.execution.InvocationInterceptorChain$ValidatingInvocation.proceed(InvocationInterceptorChain.java:131)
> >   at org.junit.jupiter.engine.extension.TimeoutExtension.intercept(TimeoutExtension.java:156)
> >   at org.junit.jupiter.engine.extension.TimeoutExtension.interceptLifecycleMethod(TimeoutExtension.java:128)
> >   at org.junit.jupiter.engine.extension.TimeoutExtension.interceptBeforeAllMethod(TimeoutExtension.java:70)
> >   at org.junit.jupiter.engine.execution.InterceptingExecutableInvoker$ReflectiveInterceptorCall.lambda$ofVoidMethod$0(InterceptingExecutableInvoker.java:103)
> >   at org.junit.jupiter.engine.execution.InterceptingExecutableInvoker.lambda$invoke$0(InterceptingExecutableInvoker.java:93)
> >   at org.junit.jupiter.engine.execution.InvocationInterceptorChain$InterceptedInvocation.proceed(InvocationInterceptorChain.java:106)
> >   at org.junit.jupiter.engine.execution.InvocationInterceptorChain.proceed(InvocationInterceptorChain.java:64)
> >   at org.junit.jupiter.engine.execution.InvocationInterceptorChain.chainAndInvoke(InvocationInterceptorChain.java:45)
> >   at org.junit.jupiter.engine.execution.InvocationInterceptorChain.invoke(InvocationInterceptorChain.java:37)
> >   at org.junit.jupiter.engine.execution.InterceptingExecutableInvoker.invoke(InterceptingExecutableInvoker.java:92)
> >   at org.junit.jupiter.engine.execution.InterceptingExecutableInvoker.invoke(InterceptingExecutableInvoker.java:86)
> >   at org.junit.jupiter.engine.descriptor.ClassBasedTestDescriptor.lambda$invokeBeforeAllMethods$13(ClassBasedTestDescriptor.java:412)
> >   at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
> >   at org.junit.jupiter.engine.descriptor.ClassBasedTestDescriptor.invokeBeforeAllMethods(ClassBasedTestDescriptor.java:410)
> >   at org.junit.jupiter.engine.descriptor.ClassBasedTestDescriptor.before(ClassBasedTestDescriptor.java:216)
> >   at org.junit.jupiter.engine.descriptor.ClassBasedTestDescriptor.before(ClassBasedTestDescriptor.java:85)
> >   at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$6(NodeTestTask.java:148)
> >   at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
> >   at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:141)
> >   at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137)
> >   at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$9(NodeTestTask.java:139)
> >   at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
> >   at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:138)
> >   at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:95)
> >   at java.util.ArrayList.forEach(ArrayList.java:1259)
> >   at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.invokeAll(SameThreadHierarchicalTestExecutorService.java:41)
> >   at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$6(NodeTestTask.java:155)
> >   at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
> >   at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:141)
> >   at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137)
> >   at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$9(NodeTestTask.java:139)
> >   at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
> >   at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:138)
> >   at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:95)
> >   at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.submit(SameThreadHierarchicalTestExecutorService.java:35)
> >   at org.junit.platform.engine.support.hierarchical.HierarchicalTestExecutor.execute(HierarchicalTestExecutor.java:57)
> >   at org.junit.platform.engine.support.hierarchical.HierarchicalTestEngine.execute(HierarchicalTestEngine.java:54)
> >   at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:198)
> >   at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:169)
> >   at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:93)
> >   at org.junit.platform.launcher.core.EngineExecutionOrchestrator.lambda$execute$0(EngineExecutionOrchestrator.java:58)
> >   at org.junit.platform.launcher.core.EngineExecutionOrchestrator.withInterceptedStreams(EngineExecutionOrchestrator.java:141)
> >   at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:57)
> >   at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:103)
> >   at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:85)
> >   at org.junit.platform.launcher.core.DelegatingLauncher.execute(DelegatingLauncher.java:47)
> >   at org.junit.platform.launcher.core.SessionPerRequestLauncher.execute(SessionPerRequestLauncher.java:63)
> >   at com.intellij.junit5.JUnit5IdeaTestRunner.startRunnerWithArgs(JUnit5IdeaTestRunner.java:57)
> >   at com.intellij.rt.junit.IdeaTestRunner$Repeater$1.execute(IdeaTestRunner.java:38)
> >   at com.intellij.rt.execution.junit.TestsRepeater.repeat(TestsRepeater.java:11)
> >   at com.intellij.rt.junit.IdeaTestRunner$Repeater.startRunnerWithArgs(IdeaTestRunner.java:35)
> >   at com.intellij.rt.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:232)
> >   at com.intellij.rt.junit.JUnitStarter.main(JUnitStarter.java:55)
> > Caused by: java.lang.UnsupportedOperationException: This is supposed to be overridden by subclasses.
> >   at org.apache.hadoop.thirdparty.protobuf.GeneratedMessageV3.getUnknownFields(GeneratedMessageV3.java:280)
> >   at org.apache.hadoop.ipc.protobuf.RpcHeaderProtos$RpcRequestHeaderProto.getSerializedSize(RpcHeaderProtos.java:2381)
> >   at org.apache.hadoop.thirdparty.protobuf.AbstractMessageLite.writeDelimitedTo(AbstractMessageLite.java:88)
> >   at org.apache.hadoop.ipc.Client$Connection.<init>(Client.java:428)
> >   at org.apache.hadoop.ipc.Client.lambda$getConnection$1(Client.java:1633)
> >   at java.util.concurrent.ConcurrentHashMap.computeIfAbsent(ConcurrentHashMap.java:1660)
> >   at org.apache.hadoop.ipc.Client.getConnection(Client.java:1632)
> >   at org.apache.hadoop.ipc.Client.call(Client.java:1473)
> >   at org.apache.hadoop.ipc.Client.call(Client.java:1426)
> >   at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:250)
> >   ... 82 more
> >
> > On Wed, Oct 2, 2024 at 7:51 AM Steve Loughran <ste...@cloudera.com>
> wrote:
> >
> >>
> >> Please do!
> >>
> >> On Tue, 1 Oct 2024 at 20:54, Wei-Chiu Chuang <weic...@apache.org>
> wrote:
> >>
> >>> Hi, I'm late to the party, but I'd like to build and test this release
> >>> with Ozone and HBase.
> >>>
> >>> On Tue, Oct 1, 2024 at 2:12 AM Mukund Madhav Thakur
> >>> <mtha...@cloudera.com.invalid> wrote:
> >>>
> >>> > Thanks @Dongjoon Hyun <dongjoon.h...@gmail.com> for trying out the RC
> >>> > and finding this bug. This has to be fixed.
> >>> > It would be great if others could give the RC a try so that we learn
> >>> > of any issues earlier.
> >>> >
> >>> > Thanks
> >>> > Mukund
> >>> >
> >>> > On Tue, Oct 1, 2024 at 2:21 AM Steve Loughran
> >>> <ste...@cloudera.com.invalid
> >>> > >
> >>> > wrote:
> >>> >
> >>> > > ok, we will have to consider that a -1
> >>> > >
> >>> > > Interestingly, we haven't seen that on any of our internal QE;
> >>> > > maybe none of the requests were overlapping.
> >>> > >
> >>> > > I was just looking towards a +0 because of
> >>> > >
> >>> > > https://issues.apache.org/jira/browse/HADOOP-19295
> >>> > >
> >>> > > *Unlike the v1 SDK, PUT/POST of data now shares the same timeout as
> >>> > > all other requests, and on a slow network connection requests time
> >>> > > out. Furthermore, large file uploads can generate the same failure
> >>> > > condition because the competing block uploads reduce the bandwidth
> >>> > > for the others.*
> >>> > >
> >>> > > I'll describe more on the JIRA. The fix is straightforward: set a
> >>> > > much longer timeout, such as 15 minutes. It will mean that problems
> >>> > > with other calls will take that same long time to time out.
> >>> > >
> >>> > > Note that in previous releases that request timeout *did not* apply
> >>> > > to the big upload. This has been reverted.
> >>> > >
> >>> > > This is not a regression from 3.4.0; it had the same problem, just
> >>> > > nobody had noticed. That's what comes from doing a lot of the
> >>> > > testing within AWS, and from the other people doing the testing (me)
> >>> > > not trying to upload files > 1GB. I have now.
> >>> > >
> >>> > > Anyway, I do not consider that a -1, because it wasn't a regression
> >>> > > and it's straightforward to work around in a site configuration.
> >>> > >
> >>> > > Other than that, my findings were:
> >>> > > - -Pnative breaks enforcer on macOS (build only; the fix is to
> >>> > > upgrade the enforcer version)
> >>> > >
> >>> > > - the native code probes on my Ubuntu Raspberry Pi 5 (don't laugh,
> >>> > > this is the most powerful computer I personally own) warn about a
> >>> > > missing link in the native checks.
> >>> > > I haven't yet set up OpenSSL bindings for s3a and abfs to see if
> >>> > > they actually work.
> >>> > >
> >>> > >   [hadoopq] 2024-09-27 19:52:16,544 WARN crypto.OpensslCipher: Failed to load OpenSSL Cipher.
> >>> > >   [hadoopq] java.lang.UnsatisfiedLinkError: EVP_CIPHER_CTX_block_size
> >>> > >   [hadoopq]     at org.apache.hadoop.crypto.OpensslCipher.initIDs(Native Method)
> >>> > >   [hadoopq]     at org.apache.hadoop.crypto.OpensslCipher.<clinit>(OpensslCipher.java:90)
> >>> > >   [hadoopq]     at org.apache.hadoop.util.NativeLibraryChecker.main(NativeLibraryChecker.
> >>> > >
> >>> > > Yours looks like it is. Pity, but thank you for the testing. Give
> >>> > > it a couple more days to see if people report any other issues.
> >>> > >
> >>> > > Mukund has been doing all the work on this; I'll see how much I can
> >>> do
> >>> > > myself to share the joy.
> >>> > >
> >>> > > On Sun, 29 Sept 2024 at 06:24, Dongjoon Hyun <dongj...@apache.org>
> >>> > wrote:
> >>> > >
> >>> > > > Unfortunately, it turns out to be a regression in addition to a
> >>> > > > breaking change.
> >>> > > >
> >>> > > > In short, HADOOP-19098 (or more) makes Hadoop 3.4.1 fail even when
> >>> > > > users give disjoint ranges.
> >>> > > >
> >>> > > > I filed a Hadoop JIRA issue and a PR. Please take a look at that.
> >>> > > >
> >>> > > > - HADOOP-19291. `CombinedFileRange.merge` should not convert
> >>> > > > disjoint ranges into overlapped ones
> >>> > > > - https://github.com/apache/hadoop/pull/7079
> >>> > > >
> >>> > > > I believe this is a Hadoop release blocker from both the Apache
> >>> > > > ORC and Apache Parquet projects' perspective.
> >>> > > >
> >>> > > > Dongjoon.
> >>> > > >
> >>> > > > On 2024/09/29 03:16:18 Dongjoon Hyun wrote:
> >>> > > > > Thank you for 3.4.1 RC2.
> >>> > > > >
> >>> > > > > HADOOP-19098 (Vector IO: consistent specified rejection of
> >>> > > > > overlapping ranges) seems to be a hard breaking change at 3.4.1.
> >>> > > > >
> >>> > > > > Do you think we can have an option to handle the overlapping
> >>> > > > > ranges in the Hadoop layer instead of introducing a breaking
> >>> > > > > change to users in a maintenance release?
> >>> > > > >
> >>> > > > > Dongjoon.
> >>> > > > >
> >>> > > > > On 2024/09/25 20:13:48 Mukund Madhav Thakur wrote:
> >>> > > > > > Apache Hadoop 3.4.1
> >>> > > > > >
> >>> > > > > >
> >>> > > > > > With help from Steve I have put together a release candidate
> >>> > > > > > (RC2) for Hadoop 3.4.1.
> >>> > > > > >
> >>> > > > > >
> >>> > > > > > What we would like is for anyone who can to verify the
> >>> > > > > > tarballs, especially anyone who can try the arm64 binaries,
> >>> > > > > > as we want to include them too.
> >>> > > > > >
> >>> > > > > >
> >>> > > > > >
> >>> > > > > > The RC is available at:
> >>> > > > > > https://dist.apache.org/repos/dist/dev/hadoop/hadoop-3.4.1-RC2/
> >>> > > > > >
> >>> > > > > >
> >>> > > > > > The git tag is release-3.4.1-RC2, commit
> >>> > > > > > b3a4b582eeb729a0f48eca77121dd5e2983b2004
> >>> > > > > >
> >>> > > > > >
> >>> > > > > > The maven artifacts are staged at
> >>> > > > > > https://repository.apache.org/content/repositories/orgapachehadoop-1426
> >>> > > > > >
> >>> > > > > >
> >>> > > > > > You can find my public key at:
> >>> > > > > > https://dist.apache.org/repos/dist/release/hadoop/common/KEYS
> >>> > > > > >
> >>> > > > > >
> >>> > > > > > Change log
> >>> > > > > > https://dist.apache.org/repos/dist/dev/hadoop/hadoop-3.4.1-RC2/CHANGELOG.md
> >>> > > > > >
> >>> > > > > >
> >>> > > > > > Release notes
> >>> > > > > > https://dist.apache.org/repos/dist/dev/hadoop/hadoop-3.4.1-RC2/RELEASENOTES.md
> >>> > > > > >
> >>> > > > > >
> >>> > > > > > This is off branch-3.4.
> >>> > > > > >
> >>> > > > > >
> >>> > > > > > Key changes include
> >>> > > > > >
> >>> > > > > >
> >>> > > > > > * Bulk Delete API. https://issues.apache.org/jira/browse/HADOOP-18679
> >>> > > > > >
> >>> > > > > > * Fixes and enhancements in Vectored IO API.
> >>> > > > > >
> >>> > > > > > * Improvements in Hadoop Azure connector.
> >>> > > > > >
> >>> > > > > > * Fixes and improvements post upgrade to AWS v2 SDK in the
> >>> > > > > >   S3A connector.
> >>> > > > > >
> >>> > > > > > * This release includes Arm64 binaries. Please can anyone with
> >>> > > > > >   compatible systems validate these.
> >>> > > > > >
> >>> > > > > >
> >>> > > > > > Note: because the arm64 binaries are built separately on a
> >>> > > > > > different platform and JVM, their jar files may not match those
> >>> > > > > > of the x86 release, and therefore the maven artifacts. I don't
> >>> > > > > > think this is an issue (the ASF actually releases source
> >>> > > > > > tarballs; the binaries are there for help only, though with the
> >>> > > > > > maven repo that's a bit blurred).
> >>> > > > > >
> >>> > > > > >
> >>> > > > > > The only way to be consistent would be to actually untar the
> >>> > > > > > x86.tar.gz, overwrite its binaries with the arm stuff, retar,
> >>> > > > > > sign and push out for the vote. Even automating that would be
> >>> > > > > > risky.
> >>> > > > > >
> >>> > > > > >
> >>> > > > > > Please try the release and vote. The vote will run for 5 days.
> >>> > > > > >
> >>> > > > > >
> >>> > > > > >
> >>> > > > > > Thanks,
> >>> > > > > >
> >>> > > > > > Mukund
> >>> > > > > >
> >>> > > > >
> >>> > > > >
> >>> > > >
> >>> > > >
> >>> > >
> >>> >
> >>>
> >>
>
