[
https://issues.apache.org/jira/browse/HDDS-3721?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Arpit Agarwal updated HDDS-3721:
--------------------------------
Labels: Triaged (was: )
> Implement getContentSummary to provide replicated size properly to dfs -du
> command
> ----------------------------------------------------------------------------------
>
> Key: HDDS-3721
> URL: https://issues.apache.org/jira/browse/HDDS-3721
> Project: Hadoop Distributed Data Store
> Issue Type: Improvement
> Reporter: Istvan Fajth
> Assignee: Istvan Fajth
> Priority: Major
> Labels: Triaged
>
> Currently, when you run the hdfs dfs -du command against a path on Ozone, it
> uses the default implementation from the FileSystem class in the Hadoop
> project, which does not take the replication factor into account.
> DistributedFileSystem and a couple of other FileSystem implementations
> override it to calculate the fully replicated size properly.
> Currently the output looks like this for a folder whose files have a
> replication factor of 3:
> {code}
> hdfs dfs -du -s -h o3fs://perfbucket.volume.ozone1/terasort/datagen
> 931.3 G 931.3 G o3fs://perfbucket.volume.ozone1/terasort/datagen
> {code}
> In Ozone's case the command should also report the replicated size as the
> second number, so roughly 2.7 TB in this case.
> To get there, we should implement getContentSummary and calculate the
> replicated size properly in the response.
--
This message was sent by Atlassian Jira
(v8.3.4#803005)
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]