[
https://issues.apache.org/jira/browse/HDFS-15543?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Hemanth Boyina reassigned HDFS-15543:
-------------------------------------
Assignee: Hemanth Boyina
> RBF: Write should be allowed when a subcluster is unavailable, for RANDOM
> mount points with fault tolerance enabled.
> ----------------------------------------------------------------------------------------------------------------
>
> Key: HDFS-15543
> URL: https://issues.apache.org/jira/browse/HDFS-15543
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: rbf
> Affects Versions: 3.1.1
> Environment: FI_MultiDestination_client]# *hdfs dfsrouteradmin -ls /test_ec*
> *Mount Table Entries:*
> Source       Destinations                              Owner  Group     Mode       Quota/Usage
> */test_ec*   *hacluster->/tes_ec,hacluster1->/tes_ec*  test   ficommon  rwxr-xr-x  [NsQuota: -/-, SsQuota: -/-]
> Reporter: Harshakiran Reddy
> Assignee: Hemanth Boyina
> Priority: Major
>
> A RANDOM mount point should allow creating new files even when one
> subcluster is down, since fault tolerance is enabled. But here it fails.
> *File write threw the following exception:*
> 2020-08-26 19:13:21,839 WARN hdfs.DataStreamer: Abandoning blk_1073743375_2551
> 2020-08-26 19:13:21,877 WARN hdfs.DataStreamer: Excluding datanode
> DatanodeInfoWithStorage[DISK]
> 2020-08-26 19:13:21,878 WARN hdfs.DataStreamer: DataStreamer Exception
> java.io.IOException: Unable to create new block.
> at
> org.apache.hadoop.hdfs.DataStreamer.nextBlockOutputStream(DataStreamer.java:1758)
> at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:718)
> 2020-08-26 19:13:21,879 WARN hdfs.DataStreamer: Could not get block
> locations. Source file "/test_ec/f1._COPYING_" - Aborting...block==null
> put: Could not get block locations. Source file "/test_ec/f1._COPYING_" -
> Aborting...block==null
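
For reference, a fault-tolerant RANDOM mount point like the `/test_ec` entry shown in the environment section would be set up roughly as follows. This is a sketch: the `hacluster`/`hacluster1` nameservice names and the `/tes_ec` destination path are taken from the mount table above, and the commands assume a running Router with the standard `dfsrouteradmin` tool.

```shell
# Sketch: create a mount point with two destinations, RANDOM destination
# order, and fault tolerance enabled (names taken from the report above).
hdfs dfsrouteradmin -add /test_ec hacluster,hacluster1 /tes_ec \
    -order RANDOM -faulttolerant

# Verify the resulting mount table entry.
hdfs dfsrouteradmin -ls /test_ec
```

With `-faulttolerant` set, a write that fails against one subcluster is expected to be retried against the remaining destination rather than aborted, which is the behavior this issue reports as broken.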
--
This message was sent by Atlassian Jira
(v8.3.4#803005)