See the Lustre Operations Manual for the options for setting the JobID. You can 
compose it from fields like "%u" for the UID, set it per process group/session, 
or set a fixed value for the whole node.  For containers, you could set it for 
the session when the container starts, and it should then be inherited by all 
processes in the container.
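
A rough sketch of what this can look like (exact parameter semantics and
per-session JobID support depend on the Lustre release, so please check the
manual for your version; the JobID values below are just placeholders):

    # Fixed JobID for the whole node:
    lctl set_param jobid_var=nodelocal
    lctl set_param jobid_name=submit_node

    # Compose the JobID from fields, e.g. executable name plus UID:
    lctl set_param jobid_name="%e.%u"

    # Per-session JobID (on releases that support jobid_this_session):
    # set it once when the container/session starts, and child processes
    # should inherit it.
    lctl set_param jobid_this_session="container.$UID"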

Cheers, Andreas

> On Dec 10, 2021, at 08:00, Iannetti, Gabriele <g.ianne...@gsi.de> wrote:
> 
> Dear Lustre community,
> 
> on our submit nodes, users are transparently placed inside Singularity containers when they log in.
> Jobs submitted from those sessions are likewise launched transparently inside 
> a container by the slurmd agent.
> Lustre is also mounted within the container.
> 
> Since `jobid_var=procname_uid` is set on the submit nodes, we get mangled 
> output for the jobid field:
> 
> jobid="loop7"
> jobid="loop7..0"
> jobid="loop7.0"
> jobid="loop7.00"
> jobid="loop7000"
> 
> Loop devices are used in Singularity to facilitate the mounting of container 
> filesystems from SIF images.
> 
> Is there anything we can configure in Singularity or Lustre to pass the UID 
> of the user who started the container, 
> or is the Singularity container runtime not supported by Jobstats?
> 
> Best
> Gabriele
> 
_______________________________________________
lustre-discuss mailing list
lustre-discuss@lists.lustre.org
http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org
