and I've done the ByteBuffer flip fix for the vectored read code:
https://github.com/apache/hadoop/pull/7725

but .flip() is used in so many other places that the fix isn't enough, given
that the overloaded flip() method appears in a lot of the JDK 8 releases.

So how about "this releases uses the overloaded ByteByffer.flip() method,
which is absent from very old JDK releases. If you encounter the error "
java.lang.NoSuchMethodError:
java.nio.ByteBuffer.flip()Ljava/nio/ByteBuffer;" you need to upgrade to a
more recent version of java 8 or later.
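
For reference, the workaround pattern is simply to call flip() through the
Buffer supertype. A minimal sketch (the class and helper names below are made
up for illustration; only the cast is what the actual fix does):

    import java.nio.Buffer;
    import java.nio.ByteBuffer;

    public class FlipCompat {
      /**
       * Flip a buffer in a way that stays binary compatible with Java 8.
       * Casting to Buffer makes the compiled call site reference
       * Buffer.flip()Ljava/nio/Buffer;, which every Java 8 build provides,
       * rather than the ByteBuffer-returning flip() that triggers the
       * NoSuchMethodError on older runtimes.
       */
      static ByteBuffer flipForRead(ByteBuffer buf) {
        ((Buffer) buf).flip();
        return buf;
      }

      public static void main(String[] args) {
        ByteBuffer buf = ByteBuffer.allocate(16);
        buf.put((byte) 1).put((byte) 2);
        flipForRead(buf);                    // buffer is now ready for reading
        System.out.println(buf.remaining()); // prints 2
      }
    }

Building with javac --release 8 also avoids the problem, since the call site
is then resolved against the Java 8 class library.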




On Wed, 4 Jun 2025 at 16:45, Steve Loughran <ste...@cloudera.com> wrote:

> parquet stuff is unrelated, filed
> https://github.com/apache/parquet-java/issues/3237 . still my fault
> though.
>
> overall, I'm going to give a quick -1 with the commitment to provide one
> of the patches.
>
> my comments in markdown format:
>
>
> ## release docs index.html
>
> the summaries in index.html are from 3.4.1, so we need to update them to
> 3.4.2 with the key features and issues fixed.
>
>
> Call out the Avro upgrade in the release notes: this is incompatible but
> required for security reasons.
>
> > We have upgraded to Avro 1.11.4, which is incompatible with previous
> > versions. This is critical for security reasons: everyone needs to upgrade
> > their own uses of Avro too.
>
> A side-effect of excluding the AWS SDK from the release is that the
> hadoop-aws module does not
> declare a transitive dependency on the AWS SDK. This means that if you use
> hadoop-aws, you must explicitly include the AWS SDK in your project.
>
> I'd like to see this (somehow) returned; at the very least we should
> explicitly add it into the hadoop-cloud-connectors dependencies so you can
> get it that way.
>
> otherwise it's a bit trickier to get right
>
> doc changes
>
> > This release was qualified against the 2.29.52 release, which at the
> > time of qualification was the last release compatible with third-party
> > stores. This regression may have been fixed in later releases.
>
>       <dependency>
>         <groupId>software.amazon.awssdk</groupId>
>         <artifactId>bundle</artifactId>
>         <version>2.29.52</version>
>         <exclusions>
>           <exclusion>
>             <groupId>*</groupId>
>             <artifactId>*</artifactId>
>           </exclusion>
>         </exclusions>
>       </dependency>
>
>
>
> > The minimum version of the AWS SDK bundle.jar which can be used in this
> > release is 2.29.52; later versions may work, but no later versions have
> > been qualified. Upgrade with care, especially when working with
> > third-party stores.
>
> > see: https://issues.apache.org/jira/browse/HADOOP-19490
>
> ## java8
>
> my setup has an old java8 release
>
> java -version
> openjdk version "1.8.0_362"
> OpenJDK Runtime Environment (Zulu 8.68.0.21-CA-macos-aarch64) (build
> 1.8.0_362-b09)
> OpenJDK 64-Bit Server VM (Zulu 8.68.0.21-CA-macos-aarch64) (build
> 25.362-b09, mixed mode)
>
> it is getting that recurrent ByteBuffer flip error, from the overloaded
> version of ByteBuffer.flip(); this can happen even with recent Java 8
> releases.
>
> We cast the ByteBuffer to (java.nio.Buffer) before calling flip().
>
> [INFO] -------------------------------------------------------
> [INFO] Running org.apache.parquet.hadoop.TestParquetReader
> Exception in thread "Thread-8" java.lang.NoSuchMethodError:
> java.nio.ByteBuffer.flip()Ljava/nio/ByteBuffer;
>         at
> org.apache.hadoop.fs.RawLocalFileSystem$AsyncHandler.completed(RawLocalFileSystem.java:428)
>         at
> org.apache.hadoop.fs.RawLocalFileSystem$AsyncHandler.completed(RawLocalFileSystem.java:362)
>         at sun.nio.ch.Invoker.invokeUnchecked(Invoker.java:126)
>         at
> sun.nio.ch.SimpleAsynchronousFileChannelImpl$2.run(SimpleAsynchronousFileChannelImpl.java:335)
>         at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>         at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>         at java.lang.Thread.run(Thread.java:750)
>
> this can actually be fixed trivially, casting to java.nio.Buffer first:
> ((Buffer) buff).flip()
>
> I'll do this
>
> On Tue, 3 Jun 2025 at 16:04, Steve Loughran <ste...@cloudera.com> wrote:
>
>>
>> I'm sort of leaning towards a -1, though the regression is actually in
>> 3.4.0, surfacing as a failure to release buffers in localfs IO.
>>
>>
>> [ERROR] org.apache.parquet.hadoop.TestParquetReader.testRangeFiltering[2]
>> -- Time elapsed: 0.033 s <<< ERROR!
>> org.apache.parquet.bytes.TrackingByteBufferAllocator$LeakedByteBufferException:
>> 4 ByteBuffer object(s) is/are remained unreleased after closing this
>> allocator.
>>         at
>> org.apache.parquet.bytes.TrackingByteBufferAllocator.close(TrackingByteBufferAllocator.java:160)
>>         at
>> org.apache.parquet.hadoop.TestParquetReader.closeAllocator(TestParquetReader.java:175)
>>         at
>> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native
>> Method)
>>         at
>> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>>         at
>> java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>>         at java.base/java.lang.reflect.Method.invoke(Method.java:566)
>>         at
>> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
>>         at
>> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>> Caused by:
>> org.apache.parquet.bytes.TrackingByteBufferAllocator$ByteBufferAllocationStacktraceException:
>> Set org.apache.parquet.bytes.TrackingByteBufferAllocator.DEBUG = true for
>> more info
>>
>>
>>    1. This is very much the vector io stuff.
>>    2. It's not a regression since 3.4.0
>>    3. but it's indicative of a memory leak. Surprised and Annoyed I
>>    hadn't spotted this before...the tests went in last year.
>>
>>
>> On Tue, 3 Jun 2025 at 10:55, Steve Loughran <ste...@cloudera.com> wrote:
>>
>>>
>>> Ahmar, no need for PRs there; it's a "commit then review" repo, as in
>>> other people can revert if you break things.
>>>
>>> will do the merge and test
>>>
>>> On Mon, 2 Jun 2025 at 16:51, Suhail, Ahmar <ahma...@amazon.co.uk.invalid>
>>> wrote:
>>>
>>>> Steve -  I created a PR on hadoop-release-support with my properties
>>>> file: https://github.com/apache/hadoop-release-support/pull/4
>>>>
>>>>
>>>> Masatake - Yes, sounds good, will update the documentation for the new
>>>> RC if it is created.
>>>>
>>>>
>>>> Mukund - I realised too late that I messed up the commit for that one.
>>>> Will figure out how to fix..
>>>>
>>>> ________________________________
>>>> From: Steve Loughran <ste...@cloudera.com>
>>>> Sent: Monday, June 2, 2025 4:25:00 PM
>>>> To: Ahmar Suhail
>>>> Cc: common-...@hadoop.apache.org; hdfs-dev@hadoop.apache.org;
>>>> mapreduce-...@hadoop.apache.org; yarn-...@hadoop.apache.org
>>>> Subject: RE: [EXTERNAL] [VOTE] Release Apache Hadoop 3.4.2
>>>>
>>>>
>>>>
>>>> can you put up the relevant changes to the hadoop-release-support
>>>> module; I'd like to use it as part of my validation and I'm assuming you
>>>> have that src/releases/release-info-3.4.2.properties file
>>>>
>>>> On Wed, 28 May 2025 at 13:25, Ahmar Suhail <ah...@apache.org> wrote:
>>>> Hey all,
>>>>
>>>> The first release candidate for Hadoop 3.4.2 is now available for
>>>> voting.
>>>>
>>>> There are a couple of things to note:
>>>>
>>>> 1/ No Arm64 artifacts. This is due to previously reported issues:
>>>> https://issues.apache.org/jira/projects/YARN/issues/YARN-11712 and
>>>> https://issues.apache.org/jira/projects/YARN/issues/YARN-11713, which
>>>> mean that the build fails on arm64.
>>>>
>>>> 2/ Relevant for anyone testing S3A: We've removed the AWS SDK bundle
>>>> from hadoop-3.4.2.tar.gz. This is because the SDK bundle is now ~600MB,
>>>> which makes the size of tar > 1GB, and it can no longer be uploaded to
>>>> SVN.
>>>> For S3A, download SDK bundle v2.29.52 from
>>>> https://mvnrepository.com/artifact/software.amazon.awssdk/bundle/2.29.52
>>>> and drop it into /share/hadoop/common/lib. Release notes will be updated
>>>> with these instructions.
>>>>
>>>>
>>>> The RC is available at:
>>>>
>>>> https://dist.apache.org/repos/dist/dev/hadoop/hadoop-3.4.2-RC1/
>>>>
>>>> The git tag is release-3.4.2-RC1, commit
>>>> 09870840ec35b48cd107972eb24d25e8aece04c9
>>>>
>>>> The maven artifacts are staged at:
>>>>
>>>> https://repository.apache.org/content/repositories/orgapachehadoop-1437
>>>>
>>>>
>>>> You can find my public key (02085AFB652F796A3B01D11FD737A6F52281FA98)
>>>> at:
>>>>
>>>> https://dist.apache.org/repos/dist/release/hadoop/common/KEYS
>>>>
>>>>
>>>> This release has been created off of branch-3.4. Key changes include:
>>>>
>>>> * S3A: Integration with S3 Analytics Accelerator input stream
>>>> * S3A: Support for S3 conditional writes
>>>> * ABFS: Deprecation of WASB driver
>>>> * ABFS: Support for Non-Hierarchical Namespace Accounts on ABFS Driver
>>>>
>>>>
>>>> This is my first attempt at managing a release, so please do test the
>>>> release and let me know in case of any issues.
>>>>
>>>> Thanks,
>>>> Ahmar
>>>>
>>>
