[
https://issues.apache.org/jira/browse/HDFS-13476?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16443156#comment-16443156
]
feng xu edited comment on HDFS-13476 at 4/18/18 8:38 PM:
-
By the way, java.io.File.exists() is not sufficient to determine whether a file
exists, because the underlying fs.getBooleanAttributes() call
(http://grepcode.com/file/repository.grepcode.com/java/root/jdk/openjdk/6-b14/java/io/File.java#File.0fs)
can fail for reasons other than the file being absent.
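To illustrate the point: File.exists() collapses "the file is absent" and "the stat failed (e.g. EACCES)" into a single false. A minimal sketch of a check that keeps the two cases apart using java.nio (the class name and paths below are illustrative, not from HDFS):

```java
import java.io.File;
import java.io.IOException;
import java.nio.file.AccessDeniedException;
import java.nio.file.Files;
import java.nio.file.NoSuchFileException;
import java.nio.file.Paths;
import java.nio.file.attribute.BasicFileAttributes;

public class ExistsCheck {
    // File.exists() returns false both when the file is absent AND when the
    // underlying attribute lookup fails for another reason (such as EACCES),
    // so the caller cannot tell the two situations apart.
    static boolean naiveExists(String p) {
        return new File(p).exists();
    }

    // Reading attributes through java.nio surfaces the actual error:
    // NoSuchFileException means genuinely absent, AccessDeniedException
    // means present but unreadable -- which is not the same as "missing".
    static boolean robustExists(String p) throws IOException {
        try {
            Files.readAttributes(Paths.get(p), BasicFileAttributes.class);
            return true;
        } catch (NoSuchFileException e) {
            return false;              // genuinely absent
        } catch (AccessDeniedException e) {
            throw e;                   // present but denied: do not report "missing"
        }
    }
}
```

With this distinction, an access-denied replica would not be mistaken for a deleted one.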
> HDFS (Hadoop/HDP 2.7.3.2.6.4.0-91) reports CORRUPT files
>
>
> Key: HDFS-13476
> URL: https://issues.apache.org/jira/browse/HDFS-13476
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: datanode
> Affects Versions: 2.7.4
> Reporter: feng xu
> Priority: Critical
>
> We have security software that runs on the local file system (ext4), and it
> denies particular users access to particular HDFS folders based on a security
> policy. For example, the policy always gives the user hdfs full permission,
> but denies the user yarn access to /dir1. If the user yarn tries to access a
> file under the HDFS folder /dir1, the security software denies the access and
> the file system call returns EACCES through errno.
> This used to work, because data corruption was determined by the block
> scanner (https://blog.cloudera.com/blog/2016/12/hdfs-datanode-scanners-and-disk-checker-explained/).
> On HDP 2.7.3.2.6.4.0-91, HDFS reports a lot of data corruption because the
> security policy denies file access to HDFS on the local file system. We
> debugged HDFS and found that BlockSender() directly calls the following
> statements, which may cause the problem:
> datanode.notifyNamenodeDeletedBlock(block, replica.getStorageUuid());
> datanode.data.invalidate(block.getBlockPoolId(), new Block[]{block.getLocalBlock()});
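The failure mode above can be reproduced in isolation: if every IOException while opening the block file is treated as a missing replica, an EACCES from the security layer triggers the same invalidation path as real corruption. A self-contained sketch (class, enum, and method names are illustrative; only the conflation itself is from the ticket):

```java
import java.io.FileInputStream;
import java.io.IOException;
import java.nio.file.AccessDeniedException;
import java.nio.file.Files;
import java.nio.file.NoSuchFileException;
import java.nio.file.Paths;

public class CorruptionMisreport {
    enum Verdict { OK, MISSING, ACCESS_DENIED }

    // Coarse handling analogous to what the ticket describes: any failure to
    // open the block file on the local file system is treated as a missing or
    // corrupt replica, so EACCES and ENOENT look identical to the caller.
    static Verdict coarseCheck(String blockFile) {
        try (FileInputStream in = new FileInputStream(blockFile)) {
            return Verdict.OK;
        } catch (IOException e) {
            return Verdict.MISSING;  // access denial misreported as corruption
        }
    }

    // A finer check via java.nio, which reports AccessDeniedException
    // separately from NoSuchFileException, so a denied replica would not be
    // invalidated as if it had been deleted.
    static Verdict finerCheck(String blockFile) throws IOException {
        try {
            Files.newInputStream(Paths.get(blockFile)).close();
            return Verdict.OK;
        } catch (NoSuchFileException e) {
            return Verdict.MISSING;
        } catch (AccessDeniedException e) {
            return Verdict.ACCESS_DENIED;  // present but unreadable: not corrupt
        }
    }
}
```

Under the coarse check, a block made unreadable by the security policy would be reported deleted and invalidated; under the finer check it is flagged as access-denied instead.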
> In the meantime, the block scanner is not triggered because of the
> undocumented property dfs.datanode.disk.check.min.gap. However, the problem
> is still there even if we disable dfs.datanode.disk.check.min.gap by setting
> it to 0.
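For reference, the setting the reporter tried would go in hdfs-site.xml; this fragment only restates the value given in the ticket (0, to disable the minimum gap between disk checks):

```xml
<!-- hdfs-site.xml: disable the minimum gap between DataNode disk checks,
     as attempted in the ticket (the problem persisted regardless) -->
<property>
  <name>dfs.datanode.disk.check.min.gap</name>
  <value>0</value>
</property>
```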
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org