Re: [lustre-discuss] how to unsquash users

2022-05-20 Thread Julien Rey via lustre-discuss
Hello Sebastien, Thanks for your clear explanation. I managed to "unsquash" my directory's ownership with a simple chown and without having to re-mount the Lustre filesystem. I understand that I should activate the "admin" and "trusted" properties on the default nodemap to turn it off
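The fix described above can be sketched as follows. This is a hedged reconstruction, not the exact commands from the thread: the property names (admin, trusted) come from the message, while the verification parameters are standard nodemap tunables.

```shell
# Sketch: re-enable identity pass-through on the default nodemap, which
# covers clients not matched by any defined NID range.
# "admin"   -> do not squash root on these clients
# "trusted" -> let these clients use their real (unmapped) UIDs/GIDs
lctl nodemap_modify --name default --property admin --value 1
lctl nodemap_modify --name default --property trusted --value 1

# Nodemap changes are made on the MGS and propagate to the servers after
# a short delay; verify the effective values with:
lctl get_param nodemap.default.admin_nodemap nodemap.default.trusted_nodemap
```

Because nodemap only remaps how clients see ownership (see Sebastien's explanation below), a plain chown afterwards is enough to repair any files that were created with squashed ownership.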

Re: [lustre-discuss] how to unsquash users

2022-05-20 Thread Sebastien Buisson via lustre-discuss
Hi, It looks like you did not set properties on the default nodemap, which applies to your machine since it is not in the 10.0.1.[35-38] range. When in use, the nodemap feature does not change the UID/GID of files as stored on the servers; it only changes (maps) the way clients see them.

Re: [lustre-discuss] Avoiding system cache when using ssd pfl extent

2022-05-20 Thread Åke Sandgren
On 5/20/22 09:53, Andreas Dilger via lustre-discuss wrote: To elaborate a bit on Patrick's answer, there is no mechanism to do this on the *client*, because the performance difference between client RAM and server storage is still fairly significant, especially if the application is doing

[lustre-discuss] lustre_rsync with growing statuslog

2022-05-20 Thread Robert Redl
Dear Lustre Experts, for a few weeks we have been keeping two Lustre systems synchronized using lustre_rsync. That works fine, but the statuslog file is growing. It is currently about 500MB in size. Updating it is apparently slowing down the whole process. Is it only important to keep the

[lustre-discuss] how to unsquash users

2022-05-20 Thread Julien Rey via lustre-discuss
Hello everyone, We are running Lustre 2.12.7 and today I tried to set up a few nodemaps so as to restrict access to a subdirectory (/webservices/seamless) to a single user (seamless, uid/gid 3669) from a range of machines (10.0.1.[35-38]). Here's what I did so far: lctl
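The setup described above (truncated in the preview) would typically look like the following. This is an illustrative sketch only: the nodemap name "seamless" and the @tcp network suffix are assumptions, and the uid/gid, NID range, and fileset path are taken from the message.

```shell
# Create a nodemap and attach the client NID range to it.
lctl nodemap_add seamless
lctl nodemap_add_range --name seamless --range 10.0.1.[35-38]@tcp

# Map all users on these clients to the "seamless" identity (uid/gid 3669).
lctl nodemap_modify --name seamless --property squash_uid --value 3669
lctl nodemap_modify --name seamless --property squash_gid --value 3669

# Confine these clients to the subdirectory as their filesystem root.
lctl set_param -P nodemap.seamless.fileset=/webservices/seamless

# Turn on the nodemap feature globally (affects all clients, including
# those falling into the "default" nodemap).
lctl nodemap_activate 1
```

Note the last step: activating nodemap also puts previously unrestricted clients under the default nodemap, which is what caused the "squashed" ownership discussed elsewhere in this thread.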

Re: [lustre-discuss] Avoiding system cache when using ssd pfl extent

2022-05-20 Thread Andreas Dilger via lustre-discuss
Ake, in this particular case I can answer your question in detail. Before SFAOS 12.1 (IIRC) the /sys/block/*/queue/rotational setting is set from userspace at mount time via a udev script, and the Lustre detection of "rotational=0" could be racy. Newer versions of SFAOS (12.1+) set the
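The rotational flag Andreas mentions can be inspected and pinned from userspace. The device name below is a placeholder, and the udev rule is only an illustration of the mechanism he describes (a udev script setting the flag at mount/discovery time), not a copy of the SFAOS script.

```shell
# Check how the block layer currently classifies the backing device:
#   0 = non-rotational (SSD/NVMe), 1 = rotational (HDD)
cat /sys/block/sdX/queue/rotational

# Illustrative udev rule forcing the flag when the device appears,
# e.g. in /etc/udev/rules.d/99-nonrot.rules (device name is a placeholder):
#   ACTION=="add|change", KERNEL=="sdX", ATTR{queue/rotational}="0"
```

Setting the flag via udev rather than at mount time avoids the race Andreas describes, where Lustre reads the flag before the userspace script has updated it.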

Re: [lustre-discuss] lustre_rsync with growing statuslog

2022-05-20 Thread Andreas Dilger via lustre-discuss
On May 20, 2022, at 06:33, Robert Redl <robert.r...@lmu.de> wrote: Dear Lustre Experts, for a few weeks we have been keeping two Lustre systems synchronized using lustre_rsync. That works fine, but the statuslog file is growing. It is currently about 500MB in size. Updating it is apparently
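For context, a typical lustre_rsync invocation looks like the sketch below. All paths, the MDT name, and the changelog user are placeholders; the statuslog is the file Robert reports growing, and it records replication state so that later runs can resume incrementally.

```shell
# Replicate changes from a Lustre filesystem to a target using the MDT
# changelog. A changelog user (here "cl1") must have been registered
# beforehand with: lctl --device lustre-MDT0000 changelog_register
lustre_rsync --source=/mnt/lustre --target=/mnt/backup \
             --mdt=lustre-MDT0000 --user=cl1 \
             --statuslog=/var/tmp/replicate.log --verbose
```

Whether the statuslog can safely be truncated or rotated between runs is exactly the question posed here; the preview of Andreas's reply is cut off before the answer.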

Re: [lustre-discuss] Avoiding system cache when using ssd pfl extent

2022-05-20 Thread Andreas Dilger via lustre-discuss
To elaborate a bit on Patrick's answer, there is no mechanism to do this on the *client*, because the performance difference between client RAM and server storage is still fairly significant, especially if the application is doing sub-page read or write operations. However, on the *server* the
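Andreas's point about the *server* side is truncated above, but the standard OSS tunables for controlling server page-cache use are the obdfilter cache parameters sketched below. This is a hedged illustration of server-side cache control in general, not necessarily the specific mechanism his reply goes on to recommend.

```shell
# Disable the OSS read cache and write-through cache so OST bulk I/O
# bypasses the server page cache:
lctl set_param obdfilter.*.read_cache_enable=0
lctl set_param obdfilter.*.writethrough_cache_enable=0

# Alternatively, keep caching only for small files below a threshold:
lctl set_param obdfilter.*.readcache_max_filesize=32M
```

On the client side, as the message notes, no equivalent knob is offered because client RAM is still much faster than server storage, particularly for sub-page reads and writes.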