[
https://issues.apache.org/jira/browse/HDFS-13476?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16443023#comment-16443023
]
feng xu commented on HDFS-13476:
--------------------------------
2018-04-18 12:40:48,466 ERROR datanode.DataNode (DataXceiver.java:run(278)) - 4381-fxu-centos7:50010:DataXceiver error processing READ_BLOCK operation src: /10.3.43.81:51424 dst: /10.3.43.81:50010
java.io.FileNotFoundException: BlockId 1073741896 is not valid.
        at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.getBlockFile(FsDatasetImpl.java:739)
        at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.getBlockFile(FsDatasetImpl.java:730)
        at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.getMetaDataInputStream(FsDatasetImpl.java:232)
        at org.apache.hadoop.hdfs.server.datanode.BlockSender.<init>(BlockSender.java:299)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:547)
        at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opReadBlock(Receiver.java:116)
        at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:71)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:251)
        at java.lang.Thread.run(Thread.java:745)
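The FileNotFoundException above is raised whenever the DataNode cannot open the block file, regardless of whether the file is actually absent or merely unreadable. A minimal sketch of the distinction the report is after (a hypothetical helper, not actual HDFS code; class and method names are invented for illustration):

```java
import java.io.File;
import java.io.IOException;

// Hypothetical illustration (not HDFS code): distinguish a block file that is
// genuinely missing from one that exists but cannot be read, e.g. because a
// local security layer returned EACCES. The report argues that only the first
// case should be treated as an invalid/corrupt block.
public class BlockAccessCheck {

    public static String classify(File blockFile) {
        if (!blockFile.exists()) {
            return "MISSING";       // safe to treat as an invalid block
        }
        if (!blockFile.canRead()) {
            return "ACCESS_DENIED"; // permission problem; do not invalidate
        }
        return "READABLE";
    }

    public static void main(String[] args) throws IOException {
        // A path that should not exist: expected to classify as MISSING.
        System.out.println(classify(new File("/no/such/block/path/blk_1073741896")));

        // A freshly created temp file is readable by its owner: READABLE.
        File tmp = File.createTempFile("blk_", ".data");
        tmp.deleteOnExit();
        System.out.println(classify(tmp));
    }
}
```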
> HDFS (Hadoop/HDP 2.7.3.2.6.4.0-91) reports CORRUPT files
> --------------------------------------------------------
>
> Key: HDFS-13476
> URL: https://issues.apache.org/jira/browse/HDFS-13476
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: datanode
> Affects Versions: 2.7.4
> Reporter: feng xu
> Priority: Critical
>
> We have security software that runs on the local file system (ext4) and
> denies particular users access to particular HDFS folders based on a
> security policy. For example, the policy always gives the user hdfs full
> permission but denies the user yarn access to /dir1. If the user yarn tries
> to access a file under the HDFS folder /dir1, the security software denies
> the access and the file system call returns EACCES through errno.
> This used to work because data corruption was determined by the block
> scanner (https://blog.cloudera.com/blog/2016/12/hdfs-datanode-scanners-and-disk-checker-explained/).
> On HDP 2.7.3.2.6.4.0-91, HDFS reports a lot of data corruption because the
> security policy denies file access in HDFS at the local file system level.
> We debugged HDFS and found that BlockSender() directly calls the following
> statements, which may cause the problem:
> datanode.notifyNamenodeDeletedBlock(block, replica.getStorageUuid());
> datanode.data.invalidate(block.getBlockPoolId(), new Block[]{block.getLocalBlock()});
> Meanwhile, the block scanner is not triggered because of the undocumented
> property dfs.datanode.disk.check.min.gap. However, the problem is still
> there if we disable dfs.datanode.disk.check.min.gap by setting it to 0.
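Disabling the minimum gap between disk checks, as described above, would look like this in hdfs-site.xml (a sketch based on the property name given in the report; the stock default is believed to be 15m):

```xml
<property>
  <name>dfs.datanode.disk.check.min.gap</name>
  <value>0</value>
  <!-- Minimum gap between two successive checks of the same DataNode volume;
       0 removes the throttle so the disk checker can run back-to-back. -->
</property>
```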
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)