Looking at the osp.*.sync* values I see

osp.rsos-OST0000-osc-MDT0000.sync_changes=14174002
osp.rsos-OST0000-osc-MDT0000.sync_in_flight=0
osp.rsos-OST0000-osc-MDT0000.sync_in_progress=4096
osp.rsos-OST0000-osc-MDT0000.destroys_in_flight=14178098
and it takes about 10 seconds between changes of those values. So is there any other tunable I can tweak on either the OSS or MDS side?

On 2/6/20 6:58 AM, Andreas Dilger wrote:
> On Feb 4, 2020, at 07:23, Åke Sandgren <[email protected]> wrote:
>>
>> When I create a large number of files on an OST and then remove them,
>> the used inode count on the OST decreases very slowly; it takes several
>> hours for it to go from 3M to the correct ~10k.
>>
>> (I'm running the io500 test suite.)
>>
>> Is there something I can do to make it release them faster?
>> Right now it has gone from 3M to 1.5M in 6 hours (lfs df -i).
>
> Is this the object count or the file count? Are you possibly using a lot of
> stripes on the files being deleted that is multiplying the work needed?
>
>> These are SSD-based OSTs, in case it matters.
>
> The MDS controls the destroy of the OST objects, so there is a rate
> limit, but ~700/s seems low to me, especially for SSD OSTs.
>
> You could check "lctl get_param osp.*.sync*" on the MDS to see how
> many destroys are pending. Also, increasing osp.*.max_rpcs_in_flight
> on the MDS might speed this up? It should default to 32 per OST on
> the MDS vs. the default of 8 for clients.
>
> Cheers, Andreas
> --
> Andreas Dilger
> Principal Lustre Architect
> Whamcloud

--
Ake Sandgren, HPC2N, Umea University, S-90187 Umea, Sweden
Internet: [email protected]   Phone: +46 90 7866134   Fax: +46 90-580 14
Mobile: +46 70 7716134   WWW: http://www.hpc2n.umu.se
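
To make the suggestion above concrete, a minimal sketch of the commands on the MDS; the parameter names are the ones quoted in Andreas's reply, and the value 64 is only an illustrative bump above the stated per-OST default of 32, not a recommendation:

  # on the MDS: how many object destroys are queued / in flight per OST
  lctl get_param osp.*.sync_changes osp.*.destroys_in_flight

  # current per-OST RPC concurrency toward the OSTs (default 32 on the MDS)
  lctl get_param osp.*.max_rpcs_in_flight

  # illustrative increase; whether it helps depends on how fast the OSTs can absorb destroys
  lctl set_param osp.*.max_rpcs_in_flight=64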
