Looks like you've got some really large changelogs built up. Did you have Robinhood, or some other changelog consumer, running at some point that has since stalled?
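As a sketch of how to check (commands assume Lustre 2.5 syntax, run on the MDS; the device name is taken from the lfs df output below, and the consumer id cl1 is just a placeholder for whatever shows up):

```shell
# List registered changelog consumers on the MDT and the index each
# has acknowledged; a consumer whose index stopped advancing is stalled,
# and records newer than the slowest consumer cannot be purged.
lctl get_param mdd.lustre-MDT0000.changelog_users

# If a consumer (e.g. cl1) is stalled and no longer needed,
# deregistering it allows the accumulated changelog records
# to be cancelled, freeing MDT space.
lctl --device lustre-MDT0000 changelog_deregister cl1
```

If Robinhood is still supposed to be running, restart it and let it catch up instead of deregistering its changelog user.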
-Ben Evans

On 10/16/15, 7:24 AM, "lustre-discuss on behalf of Torsten Harenberg" <[email protected] on behalf of [email protected]> wrote:

>Dear all,
>
>I just noticed that the metadata device of our ~120 TB Lustre system is
>getting full - see the monitoring plot attached.
>
>However, the amount of data stored is more or less the same (usually
>around ~70% of its capacity).
>
>I am trying to understand what that means. Can a full MGTMDT be
>translated to "too many files stored"? Or is some kind of clean-up needed?
>
>The file system was created with
>
>mkfs.lustre --fsname=lustre --mgs --mdt --backfstype=ext4
>--failnode=132.195.124.201@tcp --verbose /dev/mapper/MGTMDT
>
>with Lustre 2.1.5.
>
>Recently, the system was updated to 2.5.3.
>
>lfs df output is as follows:
>
>[root@fugg1 lustre]# lfs df /lustre
>UUID                    1K-blocks         Used    Available Use% Mounted on
>lustre-MDT0000_UUID     805164976    700243760     51234128  93% /lustre[MDT:0]
>lustre-OST0000_UUID    8585168804   5456372044   2699295668  67% /lustre[OST:0]
>lustre-OST0001_UUID    8585168804   5243270580   2912396160  64% /lustre[OST:1]
>lustre-OST0002_UUID    8585168804   5479924236   2675742656  67% /lustre[OST:2]
>lustre-OST0003_UUID    8585168804   5364347704   2791318664  66% /lustre[OST:3]
>lustre-OST0004_UUID    8585168804   5420547256   2735119548  66% /lustre[OST:4]
>lustre-OST0005_UUID    8585168804   5407878880   2747789400  66% /lustre[OST:5]
>lustre-OST0006_UUID    8585168804   5689191524   2466475596  70% /lustre[OST:6]
>lustre-OST0007_UUID    8585168804   5541360168   2614306556  68% /lustre[OST:7]
>lustre-OST0008_UUID    8585168804   5448642208   2707025228  67% /lustre[OST:8]
>lustre-OST0009_UUID    8585168804   5369793176   2785874116  66% /lustre[OST:9]
>lustre-OST000a_UUID    8585168804   5461624660   2694043120  67% /lustre[OST:10]
>lustre-OST000b_UUID    8585168804   5330093508   2825574244  65% /lustre[OST:11]
>lustre-OST000c_UUID    8585168804   5562546324   2593121768  68% /lustre[OST:12]
>lustre-OST000d_UUID    8585168804   5559035748   2596632340  68% /lustre[OST:13]
>lustre-OST000e_UUID    8585168804   5373777460   2781890244  66% /lustre[OST:14]
>
>filesystem summary:  128777532060  81708405476  40626605308  67% /lustre
>
>[root@fugg1 lustre]# lfs df -i /lustre
>UUID                      Inodes      IUsed      IFree IUse% Mounted on
>lustre-MDT0000_UUID    536870912  104832073  432038839   20% /lustre[MDT:0]
>lustre-OST0000_UUID     16777216    6536109   10241107   39% /lustre[OST:0]
>lustre-OST0001_UUID     16777216    6165467   10611749   37% /lustre[OST:1]
>lustre-OST0002_UUID     16777216    6709225   10067991   40% /lustre[OST:2]
>lustre-OST0003_UUID     16777216    6427098   10350118   38% /lustre[OST:3]
>lustre-OST0004_UUID     16777216    6607051   10170165   39% /lustre[OST:4]
>lustre-OST0005_UUID     16777216    6725912   10051304   40% /lustre[OST:5]
>lustre-OST0006_UUID     16777216    6933544    9843672   41% /lustre[OST:6]
>lustre-OST0007_UUID     16777216    6978005    9799211   42% /lustre[OST:7]
>lustre-OST0008_UUID     16777216    6252495   10524721   37% /lustre[OST:8]
>lustre-OST0009_UUID     16777216    6873340    9903876   41% /lustre[OST:9]
>lustre-OST000a_UUID     16777216    7091045    9686171   42% /lustre[OST:10]
>lustre-OST000b_UUID     16777216    6337078   10440138   38% /lustre[OST:11]
>lustre-OST000c_UUID     16777216    5993369   10783847   36% /lustre[OST:12]
>lustre-OST000d_UUID     16777216    6357097   10420119   38% /lustre[OST:13]
>lustre-OST000e_UUID     16777216    6087599   10689617   36% /lustre[OST:14]
>
>filesystem summary:    536870912  104832073  432038839   20% /lustre
>
>[root@fugg1 lustre]#
>
>Thanks for any hint!
>
>Best regards
>
>  Torsten
>
>--
><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><>
><>                                                              <>
><> Dr. Torsten Harenberg     [email protected]  <>
><> Bergische Universitaet                                       <>
><> FB C - Physik             Tel.: +49 (0)202 439-3521          <>
><> Gaussstr. 20              Fax : +49 (0)202 439-2811          <>
><> 42097 Wuppertal           @CERN: Bat. 1-1-049                <>
><>                                                              <>
><><><><><><><><  Of course it runs NetBSD  http://www.netbsd.org

_______________________________________________
lustre-discuss mailing list
[email protected]
http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org
