Kevin,

There are files in /proc/fs/lustre/qmt/yourfsname-QMT0000/ that you can pull it all from, broken out by UID and GID. Look for files like md-0x0/glb-usr and dt-0x0/glb-usr, plus the files in /proc/fs/lustre/osd-zfs/yourfsname-MDT0000/quota_slave.
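As a rough sketch of what scraping one of those glb-* files could look like: the YAML-ish sample below is an assumed layout, not verified output (it varies by Lustre version), so treat the field names and the parse_glb helper as hypothetical until you cat the real files.

```python
import re

# Assumed example of what `cat .../md-0x0/glb-usr` might print; the real
# layout varies by Lustre version, so this sample is hypothetical.
SAMPLE = """\
global_pool0_md_usr
- id:      0
  limits:  { hard:       0, soft:       0, granted:       0, time:  604800 }
- id:      1000
  limits:  { hard:  500000, soft:  400000, granted:  123456, time:       0 }
"""

def parse_glb(text):
    """Return {id: {hard, soft, granted, time}} from a glb-* style dump."""
    entries = {}
    current_id = None
    for line in text.splitlines():
        m = re.match(r"-\s*id:\s*(\d+)", line.strip())
        if m:
            current_id = int(m.group(1))
            continue
        m = re.search(r"limits:\s*\{(.*)\}", line)
        if m and current_id is not None:
            # "hard: 0, soft: 0, ..." -> {"hard": 0, "soft": 0, ...}
            entries[current_id] = {
                k.strip(): int(v)
                for k, v in (pair.split(":") for pair in m.group(1).split(","))
            }
    return entries

print(parse_glb(SAMPLE))
```

One cat-and-parse pass per quota type would replace the whole per-user lfs quota loop, since every ID with a record shows up in the dump, orphaned or not.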
I'm not in front of a keyboard, I'm cooking breakfast, but I'll follow up with the exact files. You can cat them and maybe find what you're looking for.

—Jeff

On Thu, Sep 5, 2019 at 05:07 Kevin M. Hildebrand <ke...@umd.edu> wrote:

> Is there any way to dump the Lustre quota data in its entirety, rather
> than having to call 'lfs quota' individually for each user, group, and
> project?
>
> I'm currently doing this on a regular basis so we can keep graphs of how
> users and groups behave over time, but it's problematic for two reasons:
>
> 1. Getting a comprehensive list of users and groups to iterate over is
> difficult: sure, I can use the passwd/group files, but if a user has been
> deleted there may still be files owned by a now-orphaned userid or groupid
> which I won't see. We may also have thousands of users in the passwd file
> that don't have files on a particular Lustre filesystem, and doing lfs
> quota calls for those users wastes time.
>
> 2. Calling lfs quota hundreds of times for each of the users, groups, and
> projects takes a while. This reduces my ability to collect the data at the
> frequency I want. Ideally I'd like to be able to collect every minute or
> so.
>
> I have two different Lustre installations, one running 2.8.0 with ldiskfs,
> the other running 2.10.8 with ZFS.
>
> Thanks,
> Kevin
>
> --
> Kevin Hildebrand
> University of Maryland
> Division of IT
>
> _______________________________________________
> lustre-discuss mailing list
> lustre-discuss@lists.lustre.org
> http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org

--
Jeff Johnson
Co-Founder
Aeon Computing
jeff.john...@aeoncomputing.com
www.aeoncomputing.com
t: 858-412-3810 x1001  f: 858-412-3845  m: 619-204-9061
4170 Morena Boulevard, Suite C - San Diego, CA 92117
High-Performance Computing / Lustre Filesystems / Scale-out Storage