Re: [lustre-discuss] Lustre project quotas and project IDs

2023-03-23 Thread Passerini Marco
Hi,


I think in our situation we would need to have multiple project IDs assigned to
the same UID, and in some cases they would need to be assigned to a GID instead,
so it might get a little complex.

The possibility of using a custom string instead of a project ID number would 
be best for us, I believe. I don't think we would rely on the fallback option 
you suggest.


Regards,

Marco Passerini


From: Andreas Dilger 
Sent: Wednesday, March 22, 2023 5:02:36 PM
To: Passerini Marco
Cc: lustre-discuss@lists.lustre.org
Subject: Re: [lustre-discuss] Lustre project quotas and project IDs

Of course, my preference would be a contribution to improving the name-projid
mapping in the "lfs project" command under LU-13335, so that it would also help
other Lustre users manage their project IDs.

One proposal I made in LU-13335, on which I would welcome feedback, was that if
a name or projid does not exist in /etc/projid, the lfs tool would fall back to
doing a name/UID lookup in /etc/passwd (or another database as configured in
/etc/nsswitch.conf).

This would avoid the need to duplicate the full UID database in /etc/projid for
the common case of projid = uid, and would allow using LDAP, NIS, AD, sssd, etc.
for projid lookups without those services needing explicit support for a projid
database.

This behavior could optionally be configured with a "commented-out" directive 
at the start of /etc/projid, like:

 #lfs fallback: passwd

or "group" or "none".  If all the projects are defined in the passwd database, 
then potentially just this one line is needed in /etc/projid, or not at all if 
"passwd" is the default fallback.

Would this meet your need for using an external database, while still allowing 
your development efforts to produce a solution that helps the Lustre community?

Of course, at some point it would be desirable to have a dedicated projid
database supported by glibc, but that would take much more time and effort to
implement and deploy, while the passwd/group fallback can be handled internally
by the lfs command.

Cheers, Andreas

On Mar 17, 2023, at 04:10, Passerini Marco  wrote:



Hi Andreas,


I'm talking on the order of ~10,000s of project IDs.

I've been thinking the same as you, that is, doing PROJID = 1M + UID, etc.
However, in our case it might be better to rely on some scripting and an
external DB to keep track of the latest added ID, so that we could increment
the highest value by 1 when creating a new ID. The highest value could also be
looked up in:


/proc/fs/lustre/osd-ldiskfs/myfs-MDT/quota_slave_dt/acct_project
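As a rough illustration (not a tested script), the highest ID in that file could
be pulled out with something like the following, assuming the accounting output
lists entries as "- id: <number>" and with the MDT name filled in:

 lctl get_param -n osd-*.*-MDT0000.quota_slave_dt.acct_project | \
     awk '/^- id:/ { if ($3 + 0 > max) max = $3 + 0 } END { print max + 1 }'

This would print a candidate "next" project ID; an external DB could still be
useful for reserving IDs ahead of time, as you describe.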

Regards,

Marco Passerini


From: Andreas Dilger 
Sent: Thursday, March 16, 2023 11:35:16 PM
To: Passerini Marco
Cc: lustre-discuss@lists.lustre.org
Subject: Re: [lustre-discuss] Lustre project quotas and project IDs

On Mar 16, 2023, at 04:50, Passerini Marco <marco.passer...@cscs.ch> wrote:

By trial and error, I found that, when using project quotas, the maximum ID 
available is 4294967294. Is this correct?

Yes, the "-1" ID is reserved for error conditions.

If I assign quotas to a lot of project IDs, is performance expected to degrade
compared to having just a few, or does it stay constant?

Probably if you have millions or billions of different IDs there would be some 
performance loss, at a minimum just because the quota files will consume a lot 
of disk space and memory to manage.  I don't think we've done specific scaling 
testing for the number of project IDs, but it has worked well for the 
"expected" number of different IDs at production sites (in the 10,000s).

I've recommended to a few sites that want to have a "unified" quota to use e.g. 
PROJID=UID for user directories, PROJID=1M + UID for scratch, and PROJID=2M+N 
for independent projects, just to make the PROJIDs easily identified (at least 
until someone implements LU-13335 to do projid<->name mapping).
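To illustrate (the paths, user, and project number are only examples), such a
scheme maps onto the existing "lfs project" tooling roughly like this:

 # user home: PROJID = UID
 lfs project -p "$(id -u alice)" -r -s /lustre/home/alice
 # scratch: PROJID = 1M + UID
 lfs project -p "$((1000000 + $(id -u alice)))" -r -s /lustre/scratch/alice
 # independent project N: PROJID = 2M + N
 lfs project -p "$((2000000 + 42))" -r -s /lustre/projects/widget

where -s sets the inherit flag on the directory and -r applies it recursively.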

How many IDs were you thinking of using?

Cheers, Andreas
--
Andreas Dilger
Lustre Principal Architect
Whamcloud







___
lustre-discuss mailing list
lustre-discuss@lists.lustre.org
http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org


Re: [lustre-discuss] Lustre project quotas and project IDs

2023-03-17 Thread Passerini Marco
Hi Andreas,


I'm talking on the order of ~10,000s of project IDs.

I've been thinking the same as you, that is, doing PROJID = 1M + UID, etc.
However, in our case it might be better to rely on some scripting and an
external DB to keep track of the latest added ID, so that we could increment
the highest value by 1 when creating a new ID. The highest value could also be
looked up in:


/proc/fs/lustre/osd-ldiskfs/myfs-MDT/quota_slave_dt/acct_project

Regards,

Marco Passerini


From: Andreas Dilger 
Sent: Thursday, March 16, 2023 11:35:16 PM
To: Passerini Marco
Cc: lustre-discuss@lists.lustre.org
Subject: Re: [lustre-discuss] Lustre project quotas and project IDs

On Mar 16, 2023, at 04:50, Passerini Marco <marco.passer...@cscs.ch> wrote:

By trial and error, I found that, when using project quotas, the maximum ID 
available is 4294967294. Is this correct?

Yes, the "-1" ID is reserved for error conditions.

If I assign quotas to a lot of project IDs, is performance expected to degrade
compared to having just a few, or does it stay constant?

Probably if you have millions or billions of different IDs there would be some 
performance loss, at a minimum just because the quota files will consume a lot 
of disk space and memory to manage.  I don't think we've done specific scaling 
testing for the number of project IDs, but it has worked well for the 
"expected" number of different IDs at production sites (in the 10,000s).

I've recommended to a few sites that want to have a "unified" quota to use e.g. 
PROJID=UID for user directories, PROJID=1M + UID for scratch, and PROJID=2M+N 
for independent projects, just to make the PROJIDs easily identified (at least 
until someone implements LU-13335 to do projid<->name mapping).

How many IDs were you thinking of using?

Cheers, Andreas
--
Andreas Dilger
Lustre Principal Architect
Whamcloud







___
lustre-discuss mailing list
lustre-discuss@lists.lustre.org
http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org


[lustre-discuss] Lustre project quotas and project IDs

2023-03-16 Thread Passerini Marco
By trial and error, I found that, when using project quotas, the maximum ID 
available is 4294967294. Is this correct?


If I assign quotas to a lot of project IDs, is performance expected to degrade
compared to having just a few, or does it stay constant?


Regards,

Marco Passerini

___
lustre-discuss mailing list
lustre-discuss@lists.lustre.org
http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org


[lustre-discuss] Monitoring Lustre IOPS on OSTs

2023-01-23 Thread Passerini Marco
Hi,


I'd like to monitor the IOPS on the Lustre OSTs.


I have stats like this:


[root@xxx04 ~]# lctl get_param obdfilter.xxx-OST.stats
obdfilter.xxx-OST.stats=
snapshot_time             348287.096066602 secs.nsecs
start_time                0.0 secs.nsecs
elapsed_time              348287.096066602 secs.nsecs
read_bytes                3075 samples [bytes] 0 1134592 2891776 2568826650624
write_bytes               1312266 samples [bytes] 1 4194304 5424381521966 4303559271585613940
read                      3075 samples [usecs] 0 489 31040 3417458
write                     1312266 samples [usecs] 1 1630262 6193336309 3686004830368845
setattr                   20373 samples [usecs] 1 61 133314 1090612
punch                     4600 samples [usecs] 3 65 51155 732959
sync                      4 samples [usecs] 443 997 2853 2250847
destroy                   30777 samples [usecs] 11 42634 20978596 33571893182
create                    993 samples [usecs] 1 29995 534579 1518818913
statfs                    135519 samples [usecs] 0 24 610182 4231650
get_info                  108 samples [usecs] 1 3092 6218 17971060
set_info                  10047 samples [usecs] 1 22 83418 770166

From the docs https://wiki.lustre.org/Lustre_Monitoring_and_Statistics_Guide I
see:

"""
For read_bytes and write_bytes:

First number = number of times (samples) the OST has handled a read or write.

"""


I guess this can be considered OST IOPS?
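If that is the intent, a crude way to get an ops/sec figure is to sample the
counters twice and divide by the interval; a minimal sketch (summed over the
OSTs on this server, interval chosen arbitrarily):

 S1=$(lctl get_param -n "obdfilter.*.stats" |
      awk '/^(read|write)_bytes /{n+=$2} END{print n}')
 sleep 10
 S2=$(lctl get_param -n "obdfilter.*.stats" |
      awk '/^(read|write)_bytes /{n+=$2} END{print n}')
 echo "approx read+write IOPS: $(( (S2 - S1) / 10 ))"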


Regards,

Marco Passerini
___
lustre-discuss mailing list
lustre-discuss@lists.lustre.org
http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org


Re: [lustre-discuss] Restrict who can assign OST pools to directories

2022-11-18 Thread Passerini Marco
>so that it becomes difficult (though not impossible) for users to use :).
>Users don't have access to the MDS to get the entire list of pools defined.


Users can see what pools have been assigned to existing directories with "lfs
getstripe" though... so it's not very secure!


Regards,

Marco Passerini


From: Raj 
Sent: Monday, November 7, 2022 6:23:48 PM
To: Andreas Dilger
Cc: Passerini Marco; lustre-discuss@lists.lustre.org
Subject: Re: [lustre-discuss] Restrict who can assign OST pools to directories

Marco, one other idea is to give an unfriendly pool name that users can't
guess, like "myfs.mkpilaxluia" instead of myfs.flash or myfs.ssd, so that it
becomes difficult (though not impossible) for users to use :). Users don't have
access to the MDS to get the entire list of pools defined.
Thanks,
Raj
Thanks,
Raj

On Mon, Nov 7, 2022 at 4:28 AM Andreas Dilger via lustre-discuss
 wrote:
>
> Unfortunately, this is not possible today, though I don't think it would be 
> too hard for someone to implement this by copying "enable_remote_dir_gid" and 
> similar checks on the MDS.
>
> In Lustre 2.14 and later, it is possible to set an OST pool quota that can 
> restrict users from creating too many files in a pool.  This doesn't directly 
> prevent them from setting the pool on a directory (though I guess this 
> _could_ be checked), but they would get an EDQUOT error when trying to create 
> in that directory, and quickly tire of trying to use it.
>
> Cheers, Andreas
>
> On Nov 4, 2022, at 05:57, Passerini Marco  wrote:
>
> Hi,
>
> Is there a way in Lustre to restrict who can assign OST pools to directories? 
> Specifically, can we limit the following command so that it can only be run
> by root?
>
> lfs setstripe --pool myfs.mypool test_dir
>
> I would need something similar to what can be done for remote directories:
> lctl set_param mdt.*.enable_remote_dir_gid=1
>
> Regards,
> Marco Passerini
>
>
> Cheers, Andreas
> --
> Andreas Dilger
> Lustre Principal Architect
> Whamcloud
>
>
>
>
>
>
>
> ___
> lustre-discuss mailing list
> lustre-discuss@lists.lustre.org
> http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org
___
lustre-discuss mailing list
lustre-discuss@lists.lustre.org
http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org


[lustre-discuss] Restrict who can assign OST pools to directories

2022-11-04 Thread Passerini Marco
Hi,


Is there a way in Lustre to restrict who can assign OST pools to directories? 
Specifically, can we limit the following command so that it can only be run by
root?


lfs setstripe --pool myfs.mypool test_dir


I would need something similar to what can be done for remote directories:

lctl set_param mdt.*.enable_remote_dir_gid=1


Regards,

Marco Passerini
___
lustre-discuss mailing list
lustre-discuss@lists.lustre.org
http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org


[lustre-discuss] Changelogs and jobstats with multitenancy and multiple cluster mounts

2021-11-24 Thread Passerini Marco
Hi,


Considering the following scenarios:

1) a single Lustre filesystem mounted on several clusters

2) a Lustre filesystem configured for multi-tenancy, exporting different
filesystems on different VLANs, which are mounted on several clusters


How does this interact with Lustre changelogs and with Lustre jobstats?


From what I saw, Lustre changelogs, in the case of multi-tenancy, only work if
the mounting client has visibility of the filesystem "root", not just a
"sub"-filesystem. Is it possible to enable them for each exported filesystem,
in a more fine-grained way?


About jobstats: if there are multiple (e.g. Slurm) clusters which share the
same filesystem, there might be conflicts between job IDs. Is there a way to
identify which cluster a job ID belongs to?
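For context, jobstats are keyed by whatever the clients put into their jobid; a
hedged sketch of folding a cluster tag into it, assuming the jobid_name format
string (with %j expanding the variable named by jobid_var) is available in your
release:

 # on the clients of cluster A
 lctl set_param jobid_var=SLURM_JOB_ID
 lctl set_param jobid_name="clusterA.%j"

With something like this, the same Slurm job ID coming from different clusters
would show up under different jobstats keys.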

Regards,
Marco Passerini
___
lustre-discuss mailing list
lustre-discuss@lists.lustre.org
http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org


[lustre-discuss] Lustre src.rpm build on centos7 failing

2021-09-29 Thread Passerini Marco
Hi,


I'm trying to build lustre-2.12.7-1.src.rpm on a CentOS 7.9.2009 virtual
machine.

I managed to compile the client, but I have some problems with the server 
compilation.


I have the following dependencies installed:


kernel-devel
epel-release
libyaml-devel
zlib-devel
libselinux-devel
kernel-debuginfo-common-x86_64
kernel-debuginfo
"@Development tools"
corosync
corosynclib-devel
corosync-debuginfo
pacemaker
pacemaker-debuginfo
net-snmp-devel
python-docutils
libyaml-devel
zfs-release.el7_9
libzfs4
zfs-debuginfo
zfs-release
libzfs4-devel
libzpool4
zfs
zfs-dkms
zfs-dracut
zfs-test



If I try to build for ldiskfs, I get this error:




rpmbuild --rebuild  --without o2ib --without krb5 --without lustre-tests --with 
ldiskfs --without zfs lustre-2.12.7-1.src.rpm

###
make[3]: Leaving directory `/root/rpmbuild/BUILD/lustre-2.12.7/lustre/osc'
make[2]: Leaving directory `/root/rpmbuild/BUILD/lustre-2.12.7/lustre/osc'
make[2]: Entering directory `/root/rpmbuild/BUILD/lustre-2.12.7/lustre'
make[3]: Entering directory `/root/rpmbuild/BUILD/lustre-2.12.7/lustre'
make[3]: Nothing to be done for `install-exec-am'.
make[3]: Nothing to be done for `install-data-am'.
make[3]: Leaving directory `/root/rpmbuild/BUILD/lustre-2.12.7/lustre'
make[2]: Leaving directory `/root/rpmbuild/BUILD/lustre-2.12.7/lustre'
make[1]: Leaving directory `/root/rpmbuild/BUILD/lustre-2.12.7/lustre'
+ basemodpath=/root/rpmbuild/BUILDROOT/lustre-2.12.7-1.x86_64/lib/modules/3.10.0-1160.42.2.el7.x86_64/extra/lustre
+ mkdir -p /root/rpmbuild/BUILDROOT/lustre-2.12.7-1.x86_64/lib/modules/3.10.0-1160.42.2.el7.x86_64/extra/lustre-osd-ldiskfs/fs
+ mv /root/rpmbuild/BUILDROOT/lustre-2.12.7-1.x86_64/lib/modules/3.10.0-1160.42.2.el7.x86_64/extra/lustre/fs/osd_ldiskfs.ko /root/rpmbuild/BUILDROOT/lustre-2.12.7-1.x86_64/lib/modules/3.10.0-1160.42.2.el7.x86_64/extra/lustre-osd-ldiskfs/fs/osd_ldiskfs.ko
mv: cannot stat '/root/rpmbuild/BUILDROOT/lustre-2.12.7-1.x86_64/lib/modules/3.10.0-1160.42.2.el7.x86_64/extra/lustre/fs/osd_ldiskfs.ko': No such file or directory
error: Bad exit status from /var/tmp/rpm-tmp.IpMMWA (%install)
###


If I try to build for ZFS, I get this instead:


rpmbuild --rebuild  --without o2ib --without krb5 --without lustre-tests 
--without ldiskfs --with zfs lustre-2.12.7-1.src.rpm

###
Making all in .
In file included from /var/lib/dkms/zfs/2.0.6/source/include/sys/arc.h:32:0,
                 from /root/rpmbuild/BUILD/lustre-2.12.7/lustre/osd-zfs/osd_internal.h:51,
                 from /root/rpmbuild/BUILD/lustre-2.12.7/lustre/osd-zfs/osd_handler.c:52:
/var/lib/dkms/zfs/2.0.6/source/include/sys/zfs_context.h:45:23: fatal error: sys/types.h: No such file or directory
 #include <sys/types.h>
                       ^
compilation terminated.
make[6]: *** [/root/rpmbuild/BUILD/lustre-2.12.7/lustre/osd-zfs/osd_handler.o] Error 1
make[5]: *** [/root/rpmbuild/BUILD/lustre-2.12.7/lustre/osd-zfs] Error 2
make[5]: *** Waiting for unfinished jobs
make[4]: *** [/root/rpmbuild/BUILD/lustre-2.12.7/lustre] Error 2
make[3]: *** [_module_/root/rpmbuild/BUILD/lustre-2.12.7] Error 2
make[2]: *** [modules] Error 2
make[1]: *** [all-recursive] Error 1
make: *** [all] Error 2
error: Bad exit status from /var/tmp/rpm-tmp.AChW7c (%build)
###


I do have the file, but it seems its path is not included in the compilation:

[root@marco-oss tmp]# locate sys/types.h
/usr/include/libspl/sys/types.h
/usr/include/sys/types.h
/usr/src/debug/glibc-2.17-c758a686/posix/sys/types.h
/usr/src/debug/zfs-2.0.6/lib/libspl/include/sys/types.h
/usr/src/zfs-2.0.6/include/os/freebsd/spl/sys/types.h
/usr/src/zfs-2.0.6/include/os/linux/spl/sys/types.h
/usr/src/zfs-2.0.6/lib/libspl/include/sys/types.h

I found similar errors in older emails, but I struggled to find a clear 
solution to this.  Could you give me advice on how to proceed? Thanks in 
advance!
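Not a definitive fix, but one way to narrow this down might be to configure the
unpacked source by hand and point it explicitly at the kernel and ZFS trees, so
the failing compile command and its include paths are easier to inspect. The
--with-linux/--with-zfs/--disable-ldiskfs options should be double-checked
against configure --help for 2.12.7, and the kernel-devel path below assumes
the standard /usr/src/kernels location; the ZFS path is from the locate output
above:

 cd /root/rpmbuild/BUILD/lustre-2.12.7
 ./configure --disable-ldiskfs \
     --with-linux=/usr/src/kernels/3.10.0-1160.42.2.el7.x86_64 \
     --with-zfs=/usr/src/zfs-2.0.6
 make V=1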



Regards,
Marco Passerini


___
lustre-discuss mailing list
lustre-discuss@lists.lustre.org
http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org