There was a bug (sorry, I don't recall which) that would leave the llog
files present and full of records that should have been cleared; a
remount was the solution. I don't recall the details, but I'm sure you'd
be able to find the LU ticket after some searching.
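
A quick way to check whether changelog records are piling up is to look
at the registered consumers and how far behind the current index they
are, for example:

  # lists the current changelog index and each registered consumer
  lctl get_param mdd.*.changelog_users

(That is just a generic check, not specific to whichever LU ticket this
was.)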
On Sat, Oct 15, 2016 at 3:49 PM, Pawel Dziekonski <dzi...@wcss.pl> wrote:
> We had the same problem on 2.5.3. Robinhood was supposed to
> consume the changelog but it wasn't - don't know why. Simply
> disabling the changelog was not enough; we had to remount the
> MDT. We did it by simply failing over to the other MDS node
> (HA pair).
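>
> For reference, the changelog consumer can be deregistered with
> something like the following (a sketch only - the MDT device name
> and the consumer id "cl1" are placeholders; the real id shows up
> in changelog_users):
>
>   # drop the consumer so the accumulated records can be purged
>   lctl --device lustre-MDT0000 changelog_deregister cl1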
> The other issue we had with the MDT was the size of inodes -
> they are (or were at that time) created at 512 bytes by default,
> and with a larger stripe count the lfsck and xattr data no longer
> fit in that single inode, so they spill over and start consuming
> additional disk space. You have to create the inodes with the
> proper size; then all of that data is kept in the inode itself
> and does not occupy extra disk space. AFAIK this has been a known
> issue since 2.x. Unfortunately the only solution was to reformat
> the MDT offline
> (via failover for
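>
> If it comes to a reformat, something along these lines gives the
> MDT larger inodes (a sketch only - the device, fsname, index,
> MGS nid and the 1024-byte inode size are placeholders to adjust
> for your own setup):
>
>   mkfs.lustre --reformat --mdt --fsname=testfs --index=0 \
>       --mgsnode=mgs@tcp0 --mkfsoptions="-I 1024" /dev/mdtdev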
> On Fri, 14 Oct 2016 at 06:46:59 -0400, Jessica Otey wrote:
> > All,
> > My colleagues in Chile now believe that both of their 2.5.3 file
> > systems are experiencing this same problem with the MDTs filling up
> > with files. We have also come across a report from another user from
> > early 2015 describing the same issue, also with a 2.5.3 system.
> > See: https://urldefense.proofpoint.com/v2/url?u=https-3A__www.
> > We are confident that these files are not related to the changelog.
> > Does anyone have any other suggestions as to what the cause of this
> > problem could be?
> > I'm intrigued that the Lustre version involved in all 3 reports is
> > 2.5.3. Could this be a bug?
> > Thanks,
> > Jessica
> > > On Thu, Sep 29, 2016 at 8:58 AM, Jessica Otey <jo...@nrao.edu> wrote:
> > >
> > > Hello all,
> > > I write on behalf of my colleagues in Chile, who are experiencing
> > > a bizarre problem with their MDT, namely, it is filling up with 4
> > > MB files. There is no issue with the number of inodes, of which
> > > there are hundreds of millions unused.
> > >
> > > [root@jaopost-mds ~]# tune2fs -l /dev/sdb2 | grep -i inode
> > > device /dev/sdb2 mounted by lustre
> > > Filesystem features:      has_journal ext_attr resize_inode
> > >   dir_index filetype needs_recovery flex_bg dirdata sparse_super
> > >   large_file huge_file uninit_bg dir_nlink quota
> > > Inode count:              239730688
> > > Free inodes:              223553405
> > > Inodes per group:         32768
> > > Inode blocks per group:   4096
> > > First inode:              11
> > > Inode size:               512
> > > Journal inode:            8
> > > Journal backup:           inode blocks
> > > User quota inode:         3
> > > Group quota inode:        4
> > >
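> > > For what it is worth, one way to see exactly which files are
> > > taking up the space is to mount the MDT read-only as ldiskfs
> > > during a maintenance window (with the MDT target stopped) and
> > > look for the large objects - a rough sketch, with example
> > > device and mount point:
> > >
> > >   mount -t ldiskfs -o ro /dev/sdb2 /mnt/mdt-ldiskfs
> > >   find /mnt/mdt-ldiskfs -type f -size +1M -exec ls -lh {} \; | head
> > >   umount /mnt/mdt-ldiskfs
> > >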
> > > Has anyone ever encountered such a problem? The only thing
> > > unusual about this cluster is that it is using 2.5.3 MDS/OSSes
> > > while still using 1.8.9 clients - something I didn't actually
> > > believe was possible, as I thought the last version to work
> > > effectively with 1.8.9 clients was 2.4.3. However, for all I
> > > know, the version gap may have nothing to do with this
> > > phenomenon.
> > >
> > > Any and all advice is appreciated. Any general information on
> > > the structure of the MDT is also welcome, as such info is in
> > > short supply on the internet.
> > >
> > > Thanks,
> > > Jessica
> > >
> Pawel Dziekonski <pawel.dziekon...@wcss.pl>
> Wroclaw Centre for Networking & Supercomputing, HPC Department
> phone: +48 71 320 37 39, fax: +48 71 322 57 97,