[jira] [Closed] (BEAM-3649) HadoopSeekableByteChannel breaks when backing InputStream doesn't support ByteBuffers
[ https://issues.apache.org/jira/browse/BEAM-3649?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ismaël Mejía closed BEAM-3649.
------------------------------

> HadoopSeekableByteChannel breaks when backing InputStream doesn't support ByteBuffers
> -------------------------------------------------------------------------------------
>
>                 Key: BEAM-3649
>                 URL: https://issues.apache.org/jira/browse/BEAM-3649
>             Project: Beam
>          Issue Type: Bug
>          Components: io-java-hadoop
>    Affects Versions: 2.0.0, 2.1.0, 2.2.0
>            Reporter: Guillaume Balaine
>            Priority: Minor
>             Fix For: Not applicable
>
> This happened last summer, when I wanted to use S3A as the backing HDFS access implementation.
> This is because, while this method is called:
> [https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FSDataInputStream.java#L145]
> this class does not implement ByteBufferReadable:
> https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java
> I fixed it by manually incrementing the read position and copying the backing array instead of buffering.
> [https://github.com/Igosuki/beam/commit/3838f0db43b6422833a045d1f097f6d7643219f1]
> I know the S3 direct implementation is the preferred path, but this is possible, and likely happens to a lot of developers.

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
[jira] [Closed] (BEAM-3649) HadoopSeekableByteChannel breaks when backing InputStream doesn't support ByteBuffers
[ https://issues.apache.org/jira/browse/BEAM-3649?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Guillaume Balaine closed BEAM-3649.
-----------------------------------
       Resolution: Fixed
    Fix Version/s: 2.4.0

Fixed by BEAM-2790

> HadoopSeekableByteChannel breaks when backing InputStream doesn't support ByteBuffers
> -------------------------------------------------------------------------------------
>
>                 Key: BEAM-3649
>                 URL: https://issues.apache.org/jira/browse/BEAM-3649
>             Project: Beam
>          Issue Type: Bug
>          Components: io-java-hadoop
>    Affects Versions: 2.0.0, 2.1.0, 2.2.0
>            Reporter: Guillaume Balaine
>            Priority: Minor
>             Fix For: 2.4.0
>
> This happened last summer, when I wanted to use S3A as the backing HDFS access implementation.
> This is because, while this method is called:
> [https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FSDataInputStream.java#L145]
> this class does not implement ByteBufferReadable:
> https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java
> I fixed it by manually incrementing the read position and copying the backing array instead of buffering.
> [https://github.com/Igosuki/beam/commit/3838f0db43b6422833a045d1f097f6d7643219f1]
> I know the S3 direct implementation is the preferred path, but this is possible, and likely happens to a lot of developers.
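The workaround the reporter describes — falling back to a plain byte[] read and copying into the ByteBuffer when the underlying stream does not support direct ByteBuffer reads — can be sketched roughly as below. This is a hedged illustration, not the actual Beam patch; the class and method names (ByteBufferFallback, readFallback) are hypothetical, and a plain InputStream stands in for Hadoop's FSDataInputStream.

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

// Hypothetical sketch of the fallback described in the issue: when the
// backing stream cannot read directly into a ByteBuffer (i.e. it does not
// implement Hadoop's ByteBufferReadable), read into a temporary byte[]
// and copy it into the buffer, which advances the position manually.
public class ByteBufferFallback {

  // Reads up to dst.remaining() bytes from 'in' into 'dst'.
  // Returns the number of bytes read, or -1 at end of stream.
  static int readFallback(InputStream in, ByteBuffer dst) throws IOException {
    byte[] tmp = new byte[dst.remaining()];
    int n = in.read(tmp, 0, tmp.length);
    if (n > 0) {
      dst.put(tmp, 0, n); // put() advances dst.position() by n
    }
    return n;
  }

  public static void main(String[] args) throws IOException {
    InputStream in =
        new ByteArrayInputStream("hello".getBytes(StandardCharsets.UTF_8));
    ByteBuffer buf = ByteBuffer.allocate(8);
    int n = readFallback(in, buf);
    System.out.println(n + " " + buf.position()); // prints "5 5"
  }
}
```

In a channel implementation such a fallback would typically be tried only after the direct read(ByteBuffer) path throws UnsupportedOperationException, so streams that do support ByteBufferReadable keep the zero-copy path.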