[ 
https://issues.apache.org/jira/browse/HDDS-4965?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDDS-4965:
----------------------------------
    Description: 
Steps to repro
{code}
 ozone sh volume create o3://ozone1/vol1
 ozone sh bucket create o3://ozone1/vol1/bucket1

# run teragen to generate data
yarn jar 
/opt/cloudera/parcels/CDH/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar 
teragen -Dmapreduce.job.maps=10 -Dmapreduce.map.memory.mb=4096 
-Dmapreduce.reduce.memory.mb=4096 1000000000 o3fs://bucket1.vol1.ozone1/teragen1

# delete, skip trash
hdfs dfs -rm -r -skipTrash o3fs://bucket1.vol1.ozone1/teragen1

# forcefully close containers
for i in {1..1000}; do sudo -u hdfs ozone admin container close ${i}; done
{code}

Files are deleted from the namespace, but the space is not released.
All containers are CLOSED with numberOfKeys=0, and OM's keyTable is empty.

The metadata of each leftover container occupies ~140 MB of space. Over time, a 
DN can accumulate tens or even hundreds of GB of wasted space.
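As a hedged sketch of how the leftover state might be confirmed and the waste estimated: the snippet below filters container records for the CLOSED/zero-key combination and multiplies the count by the ~140 MB figure above. The JSON lines are illustrative sample data, not real `ozone admin container list` output, and the field names are assumptions; on a live cluster you would pipe the actual command output in instead.

```shell
#!/bin/sh
# Illustrative sample records (assumed field names, not real cluster output).
sample='{"containerID":1,"state":"CLOSED","numberOfKeys":0}
{"containerID":2,"state":"OPEN","numberOfKeys":5}
{"containerID":3,"state":"CLOSED","numberOfKeys":0}'

# Count containers that are CLOSED yet hold no keys (the leftover case).
leftover=$(printf '%s\n' "$sample" \
  | grep '"state":"CLOSED"' \
  | grep -c '"numberOfKeys":0')

# Rough waste estimate at ~140 MB of metadata per leftover container.
echo "leftover closed, empty containers: $leftover"
echo "approx wasted metadata: $((leftover * 140)) MB"
```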

  was:
Steps to repro
{code}
 ozone sh volume create o3://ozone1/vol1
 ozone sh bucket create o3://ozone1/vol1/bucket1

# run teragen to generate data
yarn jar 
/opt/cloudera/parcels/CDH/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar 
teragen -Dmapreduce.job.maps=10 -Dmapreduce.map.memory.mb=4096 
-Dmapreduce.reduce.memory.mb=4096 1000000000 o3fs://bucket1.vol1.ozone1/teragen1

# delete, skip trash
hdfs dfs -rm -r -skipTrash o3fs://bucket1.vol1.ozone1/teragen1

# forcefully close containers

for i in {1..1000}; do sudo -u hdfs ozone admin container close ${i}; done
{code}

Files are deleted from the namespace, but the space is not released.
All containers are CLOSED, and OM's keyTable is empty.

The metadata of each leftover container occupies ~140 MB of space. Over time, a 
DN can accumulate tens or even hundreds of GB of wasted space.


> Container metadata not removed after deleting Ozone files
> ---------------------------------------------------------
>
>                 Key: HDDS-4965
>                 URL: https://issues.apache.org/jira/browse/HDDS-4965
>             Project: Apache Ozone
>          Issue Type: Bug
>            Reporter: Wei-Chiu Chuang
>            Priority: Major
>
> Steps to repro
> {code}
>  ozone sh volume create o3://ozone1/vol1
>  ozone sh bucket create o3://ozone1/vol1/bucket1
> # run teragen to generate data
> yarn jar 
> /opt/cloudera/parcels/CDH/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar 
> teragen -Dmapreduce.job.maps=10 -Dmapreduce.map.memory.mb=4096 
> -Dmapreduce.reduce.memory.mb=4096 1000000000 
> o3fs://bucket1.vol1.ozone1/teragen1
> # delete, skip trash
> hdfs dfs -rm -r -skipTrash o3fs://bucket1.vol1.ozone1/teragen1
> # forcefully close containers
> for i in {1..1000}; do sudo -u hdfs ozone admin container close ${i}; done
> {code}
> Files are deleted from the namespace, but the space is not released.
> All containers are CLOSED with numberOfKeys=0, and OM's keyTable is empty.
> The metadata of each leftover container occupies ~140 MB of space. Over time, 
> a DN can accumulate tens or even hundreds of GB of wasted space.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
