[
https://issues.apache.org/jira/browse/BEAM-3649?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Ismaël Mejía updated BEAM-3649:
-------------------------------
Component/s: io-java-hadoop-file-system  (was: io-java-hadoop-format)
> HadoopSeekableByteChannel breaks when backing InputStream doesn't support ByteBuffers
> --------------------------------------------------------------------------------------
>
> Key: BEAM-3649
> URL: https://issues.apache.org/jira/browse/BEAM-3649
> Project: Beam
> Issue Type: Bug
> Components: io-java-hadoop-file-system
> Affects Versions: 2.0.0, 2.1.0, 2.2.0
> Reporter: Guillaume Balaine
> Priority: Minor
> Fix For: Not applicable
>
>
> This happened last summer, when I wanted to use S3A as the backing HDFS
> access implementation.
> This happens because the channel ends up calling FSDataInputStream#read(ByteBuffer):
> [https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FSDataInputStream.java#L145]
> while the stream returned by S3AFileSystem does not implement ByteBufferReadable,
> so that call fails (see the sketch below):
> https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java
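> As a quick illustration (not Beam's code, just the caller's view of the Hadoop API):
> FSDataInputStream#read(ByteBuffer) only delegates when the wrapped stream implements
> ByteBufferReadable, and otherwise throws UnsupportedOperationException, which is what
> happens with S3AInputStream.
> {code:java}
> import java.io.IOException;
> import java.nio.ByteBuffer;
> import org.apache.hadoop.fs.FSDataInputStream;
>
> final class ReproSketch {
>   // `in` is an FSDataInputStream opened through the hadoop-aws (s3a://) filesystem.
>   static void read(FSDataInputStream in) throws IOException {
>     ByteBuffer dst = ByteBuffer.allocate(8192);
>     try {
>       in.read(dst); // fine on HDFS, whose DFSInputStream implements ByteBufferReadable
>     } catch (UnsupportedOperationException e) {
>       // thrown for s3a://, because S3AInputStream does not implement ByteBufferReadable
>     }
>   }
> }
> {code}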
> I fixed it by manually incrementing the read position and copying through a backing
> byte array instead of reading into the ByteBuffer directly:
> [https://github.com/Igosuki/beam/commit/3838f0db43b6422833a045d1f097f6d7643219f1]
> I know the direct S3 filesystem implementation is the preferred path, but this setup
> is possible and likely trips up a lot of developers.
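> A minimal sketch of that workaround, assuming a helper around the channel's
> FSDataInputStream (class, field and method names below are illustrative, not the ones
> used in the commit):
> {code:java}
> import java.io.IOException;
> import java.nio.ByteBuffer;
> import org.apache.hadoop.fs.ByteBufferReadable;
> import org.apache.hadoop.fs.FSDataInputStream;
>
> final class ByteBufferReads {
>   /** Reads into dst, falling back to an array copy when the stream is not ByteBufferReadable. */
>   static int readSafely(FSDataInputStream in, ByteBuffer dst) throws IOException {
>     if (in.getWrappedStream() instanceof ByteBufferReadable) {
>       // Fast path: the underlying stream (e.g. HDFS) supports ByteBuffer reads.
>       return in.read(dst);
>     }
>     // Fallback for streams like S3AInputStream: read into a temporary array and copy
>     // it into the buffer. The stream advances its own position on read(byte[], ...),
>     // and put() advances the ByteBuffer's position.
>     byte[] scratch = new byte[dst.remaining()];
>     int bytesRead = in.read(scratch, 0, scratch.length);
>     if (bytesRead > 0) {
>       dst.put(scratch, 0, bytesRead);
>     }
>     return bytesRead; // -1 at end of stream, matching channel read semantics
>   }
> }
> {code}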
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)