    [ http://issues.apache.org/jira/browse/HADOOP-738?page=comments#action_12452016 ]

Doug Cutting commented on HADOOP-738:
-------------------------------------
> I suggest we make the CRC files visible.

Either someone needs to change the description of this issue, or this
discussion needs to move to a new issue. As I keep saying, the fixes
proposed don't fix the described problem: when local CRC files are
present, 'dfs -put' fails. I have not verified this, but if it is the
case, it is a bug that should be fixed. Disabling CRC files for some
folks, while a reasonable thing to do, won't fix this. Neither will
changing their names.

> multi-terabyte search indexes [... not typical ...]

Aren't these prototypical for Hadoop? Isn't that why we're building
Hadoop? Should such uses require special configuration?

> [...] CRC errors are not a dominant concern

If you have ECC memory, which you do, but not all folks do.

> dfs get or copyToLocal should not copy crc file
> -----------------------------------------------
>
>                 Key: HADOOP-738
>                 URL: http://issues.apache.org/jira/browse/HADOOP-738
>             Project: Hadoop
>          Issue Type: Bug
>          Components: dfs
>    Affects Versions: 0.8.0
>         Environment: all
>            Reporter: Milind Bhandarkar
>         Assigned To: Milind Bhandarkar
>             Fix For: 0.9.0
>         Attachments: hadoop-crc.patch
>
>
> Currently, when we -get or -copyToLocal a directory from DFS, all the
> files, including crc files, are also copied. When we -put or
> -copyFromLocal again, since the crc files already exist on DFS, this
> put fails. The solution is not to copy checksum files when copying to
> local. Patch is forthcoming.
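
To make the described failure concrete, here is a rough sketch of the
round trip through the FileSystem API. The paths are made up and I have
not run this, so treat it as illustration of the report, not a verified
test case:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class CrcRoundTrip {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem dfs = FileSystem.get(conf);

        Path remote = new Path("/user/test/data");   // made-up DFS directory
        Path local = new Path("/tmp/data");          // made-up local directory

        // -get / -copyToLocal: today this also writes the hidden
        // .<name>.crc checksum files into the local directory.
        dfs.copyToLocalFile(remote, local);

        // -put / -copyFromLocal to a fresh directory: writing each data
        // file creates its .<name>.crc on DFS as a side effect, so the
        // subsequent copy of the local .<name>.crc collides with a file
        // that already exists, and the put fails.
        dfs.copyFromLocalFile(local, new Path("/user/test/copy"));
      }
    }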
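
And for reference, the shape of fix the description calls for: skip the
checksum files when walking the directory on the way down. This is only
a sketch with made-up method names (isCrcFile, copyToLocal), not the
attached hadoop-crc.patch, and it assumes the FileSystem calls
(isDirectory, listPaths, mkdirs, copyToLocalFile) as they stand today:

    import java.io.IOException;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class CrcSkippingCopy {
      // DFS names the checksum file for f as .f.crc in the same directory.
      static boolean isCrcFile(Path p) {
        String name = p.getName();
        return name.startsWith(".") && name.endsWith(".crc");
      }

      // Recursive copy-to-local that leaves checksum files behind.
      static void copyToLocal(FileSystem dfs, FileSystem localFs,
                              Path src, Path dst) throws IOException {
        if (dfs.isDirectory(src)) {
          localFs.mkdirs(dst);              // preserve empty directories
          Path[] children = dfs.listPaths(src);
          for (int i = 0; i < children.length; i++) {
            copyToLocal(dfs, localFs, children[i],
                        new Path(dst, children[i].getName()));
          }
        } else if (!isCrcFile(src)) {
          dfs.copyToLocalFile(src, dst);
        }
      }
    }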
