[
https://issues.apache.org/jira/browse/HADOOP-4619?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12648873#action_12648873
]
Pete Wyckoff commented on HADOOP-4619:
--------------------------------------
It is still an improvement: it doubles the maximum file size that can be
written on 32-bit machines from 2 GB to 4 GB.
It would be nice if we could easily detect overflow, but there is no OFF_MAX
constant to check against.
> hdfs_write infinite loop when dfs fails and cannot write files > 2 GB
> ---------------------------------------------------------------------
>
> Key: HADOOP-4619
> URL: https://issues.apache.org/jira/browse/HADOOP-4619
> Project: Hadoop Core
> Issue Type: Bug
> Components: libhdfs
> Affects Versions: 0.19.0, 0.20.0
> Reporter: Pete Wyckoff
> Assignee: Pete Wyckoff
> Fix For: 0.19.1, 0.20.0
>
> Attachments: HADOOP-4619.txt, HADOOP-4619.txt, HADOOP-4619.txt,
> HADOOP-4619.txt, HADOOP-4619.txt, HADOOP-4619.txt, HADOOP-4619.txt
>
>
> 1. hdfs_write does not check the hdfsWrite return code, so a -1 (error)
> return is ignored and the loop never terminates.
> 2. hdfs_write uses an int for the overall file length, capping writable
> files at 2 GB.
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.