[ https://issues.apache.org/jira/browse/HDFS-17456?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

fuchaohong updated HDFS-17456:
------------------------------
    Description: 
In our production environment, the NameNode web UI showed that DataNode space 
had been used up, but the DataNode machines actually still had plenty of free 
space. After troubleshooting, we found that the DataNode's dfsUsed statistic is 
updated incorrectly when appending to a file. The following table shows dfsUsed 
after each append of 100 bytes.
|*Actual*|*Expected*|
|0|0|
|100|100|
|300|200|
|600|300|
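
The mismatch above is easy to reproduce with a small accounting model: if each 
append charges the full new on-disk block length to dfsUsed instead of only the 
newly appended bytes, the pre-existing data is counted again on every append. 
Below is a minimal, self-contained Java sketch of that accounting (the class 
and field names are hypothetical, not the actual FsDatasetImpl/FsVolumeImpl 
code); it reproduces the numbers in the table above.
{code:java}
/**
 * Hypothetical sketch of dfsUsed accounting around append.
 * "Buggy" re-adds the whole block length on every append;
 * "correct" adds only the newly appended delta.
 */
public class DfsUsedAppendSketch {

    static long blockLen = 0;        // on-disk length of the block being appended to
    static long dfsUsedBuggy = 0;    // charges the full block length on each append
    static long dfsUsedCorrect = 0;  // charges only the newly appended bytes

    static void append(long bytes) {
        long oldLen = blockLen;
        blockLen += bytes;

        dfsUsedBuggy += blockLen;            // old data counted again: 100, 300, 600
        dfsUsedCorrect += blockLen - oldLen; // only the delta:         100, 200, 300
    }

    public static void main(String[] args) {
        System.out.printf("%-10s %-10s%n", "Actual", "Expected");
        System.out.printf("%-10d %-10d%n", dfsUsedBuggy, dfsUsedCorrect);
        for (int i = 0; i < 3; i++) {
            append(100);
            System.out.printf("%-10d %-10d%n", dfsUsedBuggy, dfsUsedCorrect);
        }
    }
}
{code}
Once only the delta (new block length minus the on-disk length before the 
append) is charged, dfsUsed tracks the Expected column of the table.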

  was:
In our production environment, we found that the NameNode web UI showed that 
DataNode space had been used up, but the DataNode machines actually still had 
plenty of free space. After troubleshooting, the DataNode's dfsUsed statistic 
turned out to be updated incorrectly when appending to a file. The following 
table shows dfsUsed after each append of 100 bytes.
|*Actual*|*Expected*|
|0|0|
|100|100|
|300|200|
|600|300|


> Fix incorrect DataNode dfsUsed statistics when appending to a file.
> --------------------------------------------------------------------
>
>                 Key: HDFS-17456
>                 URL: https://issues.apache.org/jira/browse/HDFS-17456
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: hdfs
>    Affects Versions: 3.3.3
>            Reporter: fuchaohong
>            Priority: Major
>
> In our production environment, the NameNode web UI showed that DataNode space 
> had been used up, but the DataNode machines actually still had plenty of free 
> space. After troubleshooting, we found that the DataNode's dfsUsed statistic is 
> updated incorrectly when appending to a file. The following table shows dfsUsed 
> after each append of 100 bytes.
> |*Actual*|*Expected*|
> |0|0|
> |100|100|
> |300|200|
> |600|300|



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
