Daisuke Kobayashi created HDFS-14315:
----------------------------------------
Summary: Add more detailed log message when decreasing replication factor < max in snapshots
Key: HDFS-14315
URL: https://issues.apache.org/jira/browse/HDFS-14315
Project: Hadoop HDFS
Issue Type: Improvement
Components: hdfs
Affects Versions: 3.1.2
Reporter: Daisuke Kobayashi
When changing the replication factor of a file, one of the following three types of log message appears in the namenode log, but a more detailed message would be useful, especially when the file is referenced by one or more snapshots.
{noformat}
Decreasing replication from X to Y for FILE
Increasing replication from X to Y for FILE
Replication remains unchanged at X for FILE
{noformat}
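The following is a minimal, illustrative sketch of the kind of detail being proposed: when the requested value is overridden by the maximum factor recorded in snapshots, the log line spells out both values. It is not the actual patch; the real change lives in FSDirAttrOp#unprotectedSetReplication, and the class, method, and variable names below are hypothetical.
{code:java}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

/** Sketch only: mirrors the shape of the proposed namenode log messages. */
public class ReplicationLogSketch {
  private static final Logger LOG =
      LoggerFactory.getLogger(ReplicationLogSketch.class);

  static void logReplicationChange(String path, short oldReplication,
      short requestedReplication, short snapshotMaxReplication) {
    // The effective factor can never drop below the largest factor that any
    // snapshot still references.
    short effective =
        (short) Math.max(requestedReplication, snapshotMaxReplication);
    String detail = effective != requestedReplication
        ? ". Requested value is " + requestedReplication + ", but "
            + effective + " is the maximum in snapshots"
        : "";

    if (oldReplication > effective) {
      LOG.info("Decreasing replication from {} to {} for {}{}",
          oldReplication, effective, path, detail);
    } else if (oldReplication < effective) {
      LOG.info("Increasing replication from {} to {} for {}{}",
          oldReplication, effective, path, detail);
    } else {
      LOG.info("Replication remains unchanged at {} for {}{}",
          oldReplication, path, detail);
    }
  }
}
{code}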
I have added additional log messages as well as further test scenarios to
org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshotReplication#testReplicationWithSnapshot.
The test sequence is as follows (a standalone sketch of the sequence is shown after step 5):
1) Create a file with replication factor 1 (the cluster has 5 datanodes).
2) Take a snapshot and increase the factor by 1; repeat this loop until the factor reaches 5.
3) Run setrep back to 3; the target replication is capped at 4, which is the maximum factor recorded in the snapshots:
{noformat}
2019-02-25 17:17:26,800 [IPC Server handler 9 on default port 55726] INFO
namenode.FSDirectory (FSDirAttrOp.java:unprotectedSetReplication(408)) -
Decreasing replication from 5 to 4 for /TestSnapshot/sub1/file1. Requested
value is 3, but 4 is the maximum in snapshots
{noformat}
4) Run setrep to 3 again; the replication remains unchanged, as follows:
{noformat}
2019-02-25 17:17:26,804 [IPC Server handler 6 on default port 55726] INFO
namenode.FSDirectory (FSDirAttrOp.java:unprotectedSetReplication(420)) -
Replication remains unchanged at 4 for /TestSnapshot/sub1/file1 . Requested
value is 3, but 4 is the maximum in snapshots.
{noformat}
5) Make sure the number of replicas stored on the datanodes remains 4.
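The sketch below walks through the same five steps against a MiniDFSCluster, using the hadoop-hdfs test utilities (MiniDFSCluster, DFSTestUtil). It is a standalone approximation of the scenario, not the actual TestSnapshotReplication code; the path and snapshot names are taken from the log output above and otherwise illustrative.
{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DFSTestUtil;
import org.apache.hadoop.hdfs.DistributedFileSystem;
import org.apache.hadoop.hdfs.MiniDFSCluster;

/** Sketch only: approximates the test sequence described above. */
public class SnapshotReplicationSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    MiniDFSCluster cluster =
        new MiniDFSCluster.Builder(conf).numDataNodes(5).build();
    try {
      DistributedFileSystem hdfs = cluster.getFileSystem();
      Path dir = new Path("/TestSnapshot/sub1");
      Path file = new Path(dir, "file1");

      // 1) Create the file with replication factor 1 on a 5-datanode cluster.
      DFSTestUtil.createFile(hdfs, file, 1024L, (short) 1, 0L);
      hdfs.allowSnapshot(dir);

      // 2) Take a snapshot, then bump the factor by 1; repeat up to 5.
      //    The snapshots end up recording factors 1 through 4.
      for (short rep = 2; rep <= 5; rep++) {
        hdfs.createSnapshot(dir, "s" + rep);
        hdfs.setReplication(file, rep);
      }

      // 3) Request replication 3. The namenode only drops to 4 because 4 is
      //    the maximum factor still referenced by a snapshot ("Decreasing
      //    replication from 5 to 4 ...").
      hdfs.setReplication(file, (short) 3);

      // 4) Request 3 again ("Replication remains unchanged at 4 ...").
      hdfs.setReplication(file, (short) 3);

      // 5) The replicas actually stored on the datanodes stay at 4.
      DFSTestUtil.waitReplication(hdfs, file, (short) 4);
    } finally {
      cluster.shutdown();
    }
  }
}
{code}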