[
https://issues.apache.org/jira/browse/HADOOP-4619?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12649063#action_12649063
]
Brian Bockelman commented on HADOOP-4619:
-----------------------------------------
+1 on the patch now.
I suspect it's best that we follow the correct Unix way here; things were put
there for a reason. Raghu, if you want large-file support on your 32-bit box,
you need to compile with -D_FILE_OFFSET_BITS=64 in your CFLAGS.
> hdfs_write infinite loop when dfs fails and cannot write files > 2 GB
> ---------------------------------------------------------------------
>
> Key: HADOOP-4619
> URL: https://issues.apache.org/jira/browse/HADOOP-4619
> Project: Hadoop Core
> Issue Type: Bug
> Components: libhdfs
> Affects Versions: 0.19.0, 0.20.0
> Reporter: Pete Wyckoff
> Assignee: Pete Wyckoff
> Fix For: 0.19.1, 0.20.0
>
> Attachments: HADOOP-4619.txt, HADOOP-4619.txt, HADOOP-4619.txt,
> HADOOP-4619.txt, HADOOP-4619.txt, HADOOP-4619.txt, HADOOP-4619.txt
>
>
> 1. hdfs_write does not check the hdfsWrite return code, so a -1 (error)
> return is ignored and the write loop never terminates.
> 2. hdfs_write uses an int for the overall file length, so it cannot write
> files larger than 2 GB.