[
https://issues.apache.org/jira/browse/HDDS-8402?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17711129#comment-17711129
]
Ethan Rose commented on HDDS-8402:
----------------------------------
h2. Background
Ozone estimates disk usage by adding the bytes written for each file to a
cached value saved to a file called {{scmUsed}} on each volume. To account for
RocksDB WAL and other things, this value is periodically refreshed using the
{{du}} command. See {{DU#constructCommand}} in the code for the exact command
being run; it should be {{du -sk}}. The interval after which {{du}} is re-run is
controlled by {{hdds.datanode.du.refresh.period}}, which defaults to 1 hour but
can be lowered for testing.
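For example, a shorter refresh period for testing could be set in ozone-site.xml
like this (just a sketch; the duration-with-unit value format is an assumption based
on how other Ozone time configs are usually written, so please verify it for your version):
{code:java}
<!-- Hypothetical test setting: refresh the cached usage via du every 10 minutes
     instead of the default 1 hour. -->
<property>
  <name>hdds.datanode.du.refresh.period</name>
  <value>10m</value>
</property>
{code}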
h2. Things to check
* As [~sadanand_shenoy] pointed out, setting this config will only take effect
for new data. If the disk had already passed the usage threshold before the
config was turned on, Ozone will *not* move the existing data to bring usage
below the threshold.
** I think there is a datanode-level volume balancer that has been WIP for a
while, but I am not sure of its current progress.
* The df output you shared, with the header line added back, would be:
{code:java}
$ df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sdi        7.3T  6.9T   35G 100% /mnt/disk/7
{code}
df says the disk is 100% full, but roughly 365GB of space is unaccounted for
when {{Used}} and {{Avail}} are added up against {{Size}}. I am curious what
Ozone sees in this case, so if you could share the contents of the {{scmUsed}}
file and the output of {{du -sk}} for this volume, that would be helpful.
Also, could you tell us which version of Ozone you are running?
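In case it helps, something like the following should gather that information (a
sketch only; the exact location of the {{scmUsed}} file under the volume root can
vary by version, so I am using {{find}} rather than guessing the path, and I am
assuming the {{ozone}} CLI is on the PATH):
{code:java}
# Locate and print the cached usage value(s) for the affected volume
find /mnt/disk/7/ozone -name scmUsed -print -exec cat {} \;

# What du itself reports for the volume (this is what the periodic refresh uses)
du -sk /mnt/disk/7/ozone

# Ozone version information
ozone version
{code}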
> hdds.datanode.dir.du.reserved is not respected
> ----------------------------------------------
>
> Key: HDDS-8402
> URL: https://issues.apache.org/jira/browse/HDDS-8402
> Project: Apache Ozone
> Issue Type: Bug
> Reporter: Mohanad Elsafty
> Priority: Major
>
> hdds.datanode.dir.du.reserved is not being respected by some of the datanodes
> in my cluster.
> I already have the patch from https://issues.apache.org/jira/browse/HDDS-6577 in
> my codebase, but the config is still not being respected.
>
> {code:java}
> <property>
> <name>hdds.datanode.dir</name>
> <value>/mnt/disk/0/ozone,/mnt/disk/1/ozone,/mnt/disk/2/ozone,/mnt/disk/3/ozone,/mnt/disk/4/ozone,/mnt/disk/5/ozone,/mnt/disk/6/ozone,/mnt/disk/7/ozone,/mnt/disk/8/ozone,/mnt/disk/9/ozone,/mnt/disk/10/ozone</value>
> </property>
> <property>
> <name>hdds.datanode.dir.du.reserved</name>
> <value>/mnt/disk/0/ozone:50GB,/mnt/disk/1/ozone:50GB,/mnt/disk/2/ozone:50GB,/mnt/disk/3/ozone:50GB,/mnt/disk/4/ozone:50GB,/mnt/disk/5/ozone:50GB,/mnt/disk/6/ozone:50GB,/mnt/disk/7/ozone:50GB,/mnt/disk/8/ozone:50GB,/mnt/disk/9/ozone:50GB,/mnt/disk/10/ozone:50GB
> </value>
> </property> {code}
> {code:java}
> $ df -h
> /dev/sdi 7.3T 6.9T 35G 100% /mnt/disk/7
> {code}
>
> I have several similar cases to this one. Could it be due to the open
> databases that need extra space on the disk?
>