From: "Yan, Zheng" <uker...@gmail.com>
Date: Tuesday, July 31, 2018 at 8:14 PM
To: "Kamble, Nitin A" <nitin.kam...@teradata.com>
Cc: "arya...@intermedia.net" <arya...@intermedia.net>, John Spray 
<jsp...@redhat.com>, ceph-users <ceph-users@lists.ceph.com>
Subject: Re: [ceph-users] Force cephfs delayed deletion

________________________________
On Wed, Aug 1, 2018 at 6:43 AM Kamble, Nitin A 
<nitin.kam...@teradata.com> wrote:
Hi John,

I am running ceph Luminous 12.2.1 release on the storage nodes with v4.4.114 
kernel on the cephfs clients.

3 client nodes are running 3 instances of a test program.
The test program is doing this repeatedly in a loop:

  *   sequentially write a 256GB file on cephfs
  *   delete the file
Do the clients write to the same file? I mean, the same file name in a directory.

No. Each client node has a separate work directory on cephfs, and each client 
app writes to a separate file. There is no sharing of file data across clients.

Thanks,
Nitin


‘ceph df’ shows that the space is not getting freed from cephfs after the 
deletes, and the cephfs space utilization (number of objects, space used, and 
% utilization) keeps growing continuously.

I double-checked, and no process is holding an open handle to the deleted files.

When the test program is stopped, the writing workload stops and then the 
cephfs space utilization starts going down as expected.

It looks like the cephfs write load is not leaving enough opportunity for the 
file deletions issued by the clients to actually be processed. The behavior is 
consistent and easy to reproduce.

I tried playing with these advanced MDS config parameters:

  *   mds_max_purge_files
  *   mds_max_purge_ops
  *   mds_max_purge_ops_per_pg
  *   mds_purge_queue_busy_flush_period

But it is not helping with the workload.
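
For reference, a minimal sketch of how they can be changed at runtime; "myhost" 
is a placeholder for the actual MDS name, and the values are purely 
illustrative, not recommendations:

ceph tell mds.myhost injectargs '--mds_max_purge_files 512 --mds_max_purge_ops 16384 --mds_max_purge_ops_per_pg 2.0'

The same settings can also be put under the [mds] section of ceph.conf on the 
MDS hosts so that they survive a restart.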

Is this a known issue? And is there a workaround to give more priority to the 
object purge operations?

Thanks in advance,
Nitin

From: ceph-users <ceph-users-boun...@lists.ceph.com> 
on behalf of Alexander Ryabov <arya...@intermedia.net>
Date: Thursday, July 19, 2018 at 8:09 AM
To: John Spray <jsp...@redhat.com>
Cc: "ceph-users@lists.ceph.com" <ceph-users@lists.ceph.com>
Subject: Re: [ceph-users] Force cephfs delayed deletion


>Also, since I see this is a log directory, check that you don't have some 
>processes that are holding their log files open even after they're unlinked.

Thank you very much - that was the case.

lsof /mnt/logs | grep deleted



After dealing with these, the space was reclaimed in about 2-3 minutes.





________________________________
From: John Spray <jsp...@redhat.com>
Sent: Thursday, July 19, 2018 17:24
To: Alexander Ryabov
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Force cephfs delayed deletion

On Thu, Jul 19, 2018 at 1:58 PM Alexander Ryabov 
<arya...@intermedia.net> wrote:

Hello,

I see that free space is not released after files are removed on CephFS.

I'm using Luminous with replica=3, without any snapshots etc., and with default 
settings.



From client side:
$ du -sh /mnt/logs/
4.1G /mnt/logs/
$ df -h /mnt/logs/
Filesystem   Size  Used Avail Use% Mounted on
h1,h2:/logs  125G   87G   39G  70% /mnt/logs

These stats are after a couple of large files were removed in the /mnt/logs dir, 
but that only dropped the Used space a little.

Check what version of the client you're using -- some older clients had bugs 
that would hold references to deleted files and prevent them from being purged. 
 If you find that the space starts getting freed when you unmount the client, 
this is likely to be because of a client bug.
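
A quick way to gather the relevant versions (a sketch; "myhost" is a placeholder 
for the MDS name, and the second command has to run on the MDS host because it 
talks to the local admin socket):

uname -r                           # on each client: version of the kernel CephFS client
ceph daemon mds.myhost session ls  # on the MDS host: client sessions and the metadata they report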

Also, since I see this is a log directory, check that you don't have some 
processes that are holding their log files open even after they're unlinked.
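
One way to spot those from the client is lsof's +L1 option, which lists open 
files whose link count has dropped below 1, i.e. files that are unlinked but 
still held open (a sketch, assuming lsof is installed on the client):

lsof +L1 /mnt/logs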

John



Running the 'sync' command also changes nothing.

From server side:
# ceph  df
GLOBAL:
    SIZE     AVAIL      RAW USED     %RAW USED
    124G     39226M       88723M         69.34
POOLS:
    NAME                ID     USED       %USED     MAX AVAIL     OBJECTS
    cephfs_data         1      28804M     76.80         8703M        7256
    cephfs_metadata     2        236M      2.65         8703M         101


Why is there such a large difference between 'du' and 'USED'?
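
(Back-of-the-envelope, assuming replica=3 as stated and that /logs holds 
essentially all of the data in cephfs_data: (28804 MB + 236 MB) * 3 ≈ 87120 MB, 
which roughly matches both the 88723 MB RAW USED below and the 87G "Used" that 
df reports on the client. Meanwhile du sees only ~4.1 GB of live data against 
~28 GB in the data pool, so roughly 24 GB appears to belong to files that are 
deleted, or still held open, but not yet purged.)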

I've found that it could be due to 'delayed delete' 
http://docs.ceph.com/docs/luminous/dev/delayed-delete/

And previously it seems this could be tuned by adjusting the "mds max purge 
files" and "mds max purge ops" options:

http://lists.ceph.com/pipermail/ceph-users-ceph.com/2016-October/013679.html

But there no longer seem to be such options in 
http://docs.ceph.com/docs/luminous/cephfs/mds-config-ref/
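
Presumably they can still be inspected on a running Luminous MDS through its 
admin socket (a sketch; "myhost" is a placeholder for the MDS name and the 
command has to run on the MDS host):

ceph daemon mds.myhost config show | grep purge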



So the question is: how do I purge the deleted data and reclaim the free space?

Thank you.



_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
