[
https://issues.apache.org/jira/browse/HDFS-15543?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Harshakiran Reddy updated HDFS-15543:
-------------------------------------
Description:
A RANDOM mount point should allow creating new files even when one subcluster is
down, provided fault tolerance is enabled. But here it fails.
FI_MultiDestination_client]# hdfs dfsrouteradmin -ls /test_ec
*Mount Table Entries:*
Source    Destinations                              Owner  Group     Mode       Quota/Usage
/test_ec  *hacluster->/tes_ec,hacluster1->/tes_ec*  test   ficommon  rwxr-xr-x  [NsQuota: -/-, SsQuota: -/-]
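For reference, a multi-destination mount entry like the one listed above would typically be created as follows. This is a sketch based on the listing: the nameservice names (hacluster, hacluster1) and paths are taken from the output above, and the `-order RANDOM` and `-faulttolerant` options are assumed to be available in this RBF build.

```shell
# Create the mount point /test_ec backed by both subclusters,
# choosing a destination at random for new files, with fault
# tolerance enabled so writes survive a subcluster outage.
hdfs dfsrouteradmin -add /test_ec hacluster,hacluster1 /tes_ec \
    -order RANDOM -faulttolerant

# Verify the resulting mount table entry.
hdfs dfsrouteradmin -ls /test_ec
```

With `-faulttolerant` set, a write to /test_ec is expected to fall back to the remaining subcluster when one destination is unavailable, which is the behavior this issue reports as broken.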
*File write threw the exception:*
2020-08-26 19:13:21,839 WARN hdfs.DataStreamer: Abandoning blk_1073743375_2551
2020-08-26 19:13:21,877 WARN hdfs.DataStreamer: Excluding datanode
DatanodeInfoWithStorage[DISK]
2020-08-26 19:13:21,878 WARN hdfs.DataStreamer: DataStreamer Exception
java.io.IOException: Unable to create new block.
at
org.apache.hadoop.hdfs.DataStreamer.nextBlockOutputStream(DataStreamer.java:1758)
at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:718)
2020-08-26 19:13:21,879 WARN hdfs.DataStreamer: Could not get block locations.
Source file "/test_ec/f1._COPYING_" - Aborting...block==null
put: Could not get block locations. Source file "/test_ec/f1._COPYING_" -
Aborting...block==null
was:
A RANDOM mount point should allow creating new files even when one subcluster is
down, provided fault tolerance is enabled. But here it fails.
*File write threw the exception:*
2020-08-26 19:13:21,839 WARN hdfs.DataStreamer: Abandoning blk_1073743375_2551
2020-08-26 19:13:21,877 WARN hdfs.DataStreamer: Excluding datanode
DatanodeInfoWithStorage[DISK]
2020-08-26 19:13:21,878 WARN hdfs.DataStreamer: DataStreamer Exception
java.io.IOException: Unable to create new block.
at
org.apache.hadoop.hdfs.DataStreamer.nextBlockOutputStream(DataStreamer.java:1758)
at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:718)
2020-08-26 19:13:21,879 WARN hdfs.DataStreamer: Could not get block locations.
Source file "/test_ec/f1._COPYING_" - Aborting...block==null
put: Could not get block locations. Source file "/test_ec/f1._COPYING_" -
Aborting...block==null
Environment: (was: FI_MultiDestination_client]# *hdfs dfsrouteradmin -ls /test_ec*
*Mount Table Entries:*
Source      Destinations                              Owner  Group     Mode       Quota/Usage
*/test_ec*  *hacluster->/tes_ec,hacluster1->/tes_ec*  test   ficommon  rwxr-xr-x  [NsQuota: -/-, SsQuota: -/-]
)
> RBF: Write should be allowed when a subcluster is unavailable for RANDOM mount
> points with fault tolerance enabled.
> ----------------------------------------------------------------------------------------------------------------
>
> Key: HDFS-15543
> URL: https://issues.apache.org/jira/browse/HDFS-15543
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: rbf
> Affects Versions: 3.1.1
> Reporter: Harshakiran Reddy
> Assignee: Hemanth Boyina
> Priority: Major
>
> A RANDOM mount point should allow creating new files even when one subcluster is
> down, provided fault tolerance is enabled. But here it fails.
> FI_MultiDestination_client]# hdfs dfsrouteradmin -ls /test_ec
> *Mount Table Entries:*
> Source    Destinations                              Owner  Group     Mode       Quota/Usage
> /test_ec  *hacluster->/tes_ec,hacluster1->/tes_ec*  test   ficommon  rwxr-xr-x  [NsQuota: -/-, SsQuota: -/-]
> *File write threw the exception:*
> 2020-08-26 19:13:21,839 WARN hdfs.DataStreamer: Abandoning blk_1073743375_2551
> 2020-08-26 19:13:21,877 WARN hdfs.DataStreamer: Excluding datanode
> DatanodeInfoWithStorage[DISK]
> 2020-08-26 19:13:21,878 WARN hdfs.DataStreamer: DataStreamer Exception
> java.io.IOException: Unable to create new block.
> at
> org.apache.hadoop.hdfs.DataStreamer.nextBlockOutputStream(DataStreamer.java:1758)
> at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:718)
> 2020-08-26 19:13:21,879 WARN hdfs.DataStreamer: Could not get block
> locations. Source file "/test_ec/f1._COPYING_" - Aborting...block==null
> put: Could not get block locations. Source file "/test_ec/f1._COPYING_" -
> Aborting...block==null
--
This message was sent by Atlassian Jira
(v8.3.4#803005)
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]