Re: [lustre-discuss] Any way to dump Lustre quota data?
Interesting. So the files under qmt don't appear to be useful for this, but the ones under quota_slave do appear to have what I want, though it looks like I'll have to pull the data from every OST and sum them together myself. That actually isn't too bad, and it can give me more useful information. Thanks!

Kevin

On Thu, Sep 5, 2019 at 10:00 AM Jeff Johnson wrote:

> Kevin,
>
> There are files in /proc/fs/lustre/qmt/yourfsname-QMT/ that you can pull
> it all from based on UID and GID. Look for files like md-0x0/glb-usr and
> dt-0x0/glb-usr, and files in
> /proc/fs/lustre/osd-zfs/yourfsname-MDT/quota_slave.
>
> I'm not in front of a keyboard (I'm cooking breakfast), but I'll follow up
> with the exact files. You can cat them and maybe find what you're looking
> for.
>
> —Jeff
>
> On Thu, Sep 5, 2019 at 05:07 Kevin M. Hildebrand wrote:
>
>> Is there any way to dump the Lustre quota data in its entirety, rather
>> than having to call 'lfs quota' individually for each user, group, and
>> project?
>>
>> I'm currently doing this on a regular basis so we can keep graphs of how
>> users and groups behave over time, but it's problematic for two reasons:
>>
>> 1. Getting a comprehensive list of users and groups to iterate over is
>> difficult. Sure, I can use the passwd/group files, but if a user has been
>> deleted there may still be files owned by a now-orphaned userid or
>> groupid, which I won't see. We may also have thousands of users in the
>> passwd file that don't have files on a particular Lustre filesystem, and
>> doing lfs quota calls for those users wastes time.
>>
>> 2. Calling lfs quota hundreds of times for each of the users, groups,
>> and projects takes a while, which reduces my ability to collect the data
>> at the frequency I want. Ideally I'd like to be able to collect every
>> minute or so.
>>
>> I have two different Lustre installations, one running 2.8.0 with
>> ldiskfs, the other running 2.10.8 with ZFS.
>>
>> Thanks,
>> Kevin
>>
>> --
>> Kevin Hildebrand
>> University of Maryland
>> Division of IT
>>
>> ___
>> lustre-discuss mailing list
>> lustre-discuss@lists.lustre.org
>> http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org
>
> --
> Jeff Johnson
> Co-Founder
> Aeon Computing
>
> jeff.john...@aeoncomputing.com
> www.aeoncomputing.com
> t: 858-412-3810 x1001  f: 858-412-3845
> m: 619-204-9061
>
> 4170 Morena Boulevard, Suite C - San Diego, CA 92117
>
> High-Performance Computing / Lustre Filesystems / Scale-out Storage
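The per-OST summing Kevin describes can be sketched roughly in Python. The `acct_user` filename, the `/proc` glob, and the YAML-ish record layout below are assumptions based on typical quota_slave accounting dumps, not something confirmed in this thread; check the actual files on your Lustre version before relying on it.

```python
import glob
import re
from collections import defaultdict

def parse_acct(text):
    """Parse records of the (assumed) form:
      - id: 1000
        usage: { inodes: 12, kbytes: 4096 }
    into {id: (inodes, kbytes)}."""
    usage = {}
    pattern = r"id:\s*(\d+)\s*\n\s*usage:\s*\{\s*inodes:\s*(\d+),\s*kbytes:\s*(\d+)"
    for m in re.finditer(pattern, text):
        ident, inodes, kbytes = map(int, m.groups())
        usage[ident] = (inodes, kbytes)
    return usage

def sum_usage(paths):
    """Sum per-ID usage across the given accounting files (one per target)."""
    totals = defaultdict(lambda: [0, 0])
    for path in paths:
        with open(path) as f:
            for ident, (inodes, kbytes) in parse_acct(f.read()).items():
                totals[ident][0] += inodes   # total inode count for this ID
                totals[ident][1] += kbytes   # total kbytes for this ID
    return dict(totals)

# On a ZFS-backed server this might be invoked as (path is an assumption):
# sum_usage(glob.glob("/proc/fs/lustre/osd-zfs/*/quota_slave/acct_user"))
```

Reading every target's accounting file in one sweep sidesteps both problems from the original post: orphaned IDs show up because they are enumerated from the data itself, and users with no files simply never appear.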
Re: [lustre-discuss] Any way to dump Lustre quota data?
Kevin,

There are files in /proc/fs/lustre/qmt/yourfsname-QMT/ that you can pull it all from based on UID and GID. Look for files like md-0x0/glb-usr and dt-0x0/glb-usr, and files in /proc/fs/lustre/osd-zfs/yourfsname-MDT/quota_slave.

I'm not in front of a keyboard (I'm cooking breakfast), but I'll follow up with the exact files. You can cat them and maybe find what you're looking for.

—Jeff

On Thu, Sep 5, 2019 at 05:07 Kevin M. Hildebrand wrote:

> Is there any way to dump the Lustre quota data in its entirety, rather
> than having to call 'lfs quota' individually for each user, group, and
> project?
>
> I'm currently doing this on a regular basis so we can keep graphs of how
> users and groups behave over time, but it's problematic for two reasons:
>
> 1. Getting a comprehensive list of users and groups to iterate over is
> difficult. Sure, I can use the passwd/group files, but if a user has been
> deleted there may still be files owned by a now-orphaned userid or
> groupid, which I won't see. We may also have thousands of users in the
> passwd file that don't have files on a particular Lustre filesystem, and
> doing lfs quota calls for those users wastes time.
>
> 2. Calling lfs quota hundreds of times for each of the users, groups, and
> projects takes a while, which reduces my ability to collect the data at
> the frequency I want. Ideally I'd like to be able to collect every minute
> or so.
>
> I have two different Lustre installations, one running 2.8.0 with ldiskfs,
> the other running 2.10.8 with ZFS.
>
> Thanks,
> Kevin
>
> --
> Kevin Hildebrand
> University of Maryland
> Division of IT

--
Jeff Johnson
Co-Founder
Aeon Computing

jeff.john...@aeoncomputing.com
www.aeoncomputing.com
t: 858-412-3810 x1001  f: 858-412-3845
m: 619-204-9061

4170 Morena Boulevard, Suite C - San Diego, CA 92117

High-Performance Computing / Lustre Filesystems / Scale-out Storage
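A rough sketch of pulling per-ID limits out of one of the glb-usr dumps Jeff mentions. The record shape below is an assumption (the exact layout of these files varies across Lustre versions, and Jeff himself promises to follow up with the exact files), so treat the regex as a template to adjust, not a known format.

```python
import re

def parse_glb(text):
    """Parse per-ID limit records from a glb-usr/glb-grp style dump.
    The record shape is an ASSUMPTION, e.g.:
      - id: 1000
        limits: { hard: 1048576, soft: 524288, granted: 4096, time: 0 }
    Returns {id: {"hard": ..., "soft": ..., "granted": ...}}."""
    out = {}
    pattern = (r"id:\s*(\d+)\s*\n\s*limits:\s*\{\s*hard:\s*(\d+),"
               r"\s*soft:\s*(\d+),\s*granted:\s*(\d+)")
    for m in re.finditer(pattern, text):
        ident, hard, soft, granted = map(int, m.groups())
        out[ident] = {"hard": hard, "soft": soft, "granted": granted}
    return out

# On the MDS this might be fed with something like (per the paths above;
# the exact file names are still to be confirmed):
#   parse_glb(open("/proc/fs/lustre/qmt/yourfsname-QMT/dt-0x0/glb-usr").read())
```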
[lustre-discuss] Any way to dump Lustre quota data?
Is there any way to dump the Lustre quota data in its entirety, rather than having to call 'lfs quota' individually for each user, group, and project?

I'm currently doing this on a regular basis so we can keep graphs of how users and groups behave over time, but it's problematic for two reasons:

1. Getting a comprehensive list of users and groups to iterate over is difficult. Sure, I can use the passwd/group files, but if a user has been deleted there may still be files owned by a now-orphaned userid or groupid, which I won't see. We may also have thousands of users in the passwd file that don't have files on a particular Lustre filesystem, and doing lfs quota calls for those users wastes time.

2. Calling lfs quota hundreds of times for each of the users, groups, and projects takes a while, which reduces my ability to collect the data at the frequency I want. Ideally I'd like to be able to collect every minute or so.

I have two different Lustre installations, one running 2.8.0 with ldiskfs, the other running 2.10.8 with ZFS.

Thanks,
Kevin

--
Kevin Hildebrand
University of Maryland
Division of IT
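For context, the slow per-user polling described above might look like the sketch below. The mountpoint is a placeholder, and the field order of the `lfs quota -q` data line is an assumption to verify against your `lfs` version; the point is only to show why hundreds of subprocess calls per collection interval add up.

```python
import pwd
import subprocess

def parse_quota_line(line):
    """Parse one data line of `lfs quota` output, e.g.:
      /lustre  409600  0  0  -  1523  0  0  -
    A trailing '*' marks an exceeded limit. Field order is an assumption;
    verify against the output of your `lfs` version."""
    f = line.split()
    return {"filesystem": f[0],
            "kbytes": int(f[1].rstrip("*")),
            "files": int(f[5].rstrip("*"))}

def quota_for_user(user, mountpoint):
    """Shell out to `lfs quota -q -u <user> <mountpoint>` -- one fork and
    one RPC round trip per user, which is what makes the loop slow."""
    out = subprocess.run(["lfs", "quota", "-q", "-u", user, mountpoint],
                         capture_output=True, text=True, check=True)
    return parse_quota_line(out.stdout.strip().splitlines()[0])

def all_user_quotas(mountpoint):
    # Iterating the passwd database misses orphaned UIDs entirely and
    # wastes a call on every user with no files on this filesystem --
    # exactly the two problems described above.
    return {p.pw_name: quota_for_user(p.pw_name, mountpoint)
            for p in pwd.getpwall()}
```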