The group and passwd files are copied from the master (well, only some entries), but in any case they match.
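For what it's worth, here is roughly how one can double-check that the
node entries really match.  This is just a sketch: it assumes root ssh
access to the node (which may not apply in every perceus setup), with
n0000 as the node name from the examples below.

    # List any node entries that don't appear verbatim on the master.
    # Empty output means the node files are a clean subset of the master's.
    for f in /etc/passwd /etc/group; do
        echo "checking $f on n0000 ..."
        ssh n0000 cat "$f" | grep -vxFf "$f"
    done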
I was fearing you'd tell me about the public keys... :-(  The answer to
that is that I don't mount the home directories on the nodes by default,
and I wasn't planning to do so, period.  In fact, in some sense having
the password files and keys available on the nodes kind of defeats the
security safeguards built into xcpu, especially if one uses the
xcpufs -u option.  It would be far better to do as we discussed before:
let the scheduler assign the permissions.  In the meantime (until the
bjs port is available), this may still be the only way to do it...

Another option would be to somehow make available just the users' public
keys, collected in some directory structure, and modify the
init_unix_users() routine in xcpufs.c accordingly.  (A rough sketch of
what I mean is in the P.S. at the end of this message.)

What do you think about this?

Daniel

On 11/3/08, Abhishek Kulkarni <[EMAIL PROTECTED]> wrote:
>
> Can you make sure the group and passwd files on the node match those
> on the master?
>
> xcpufs also looks for the public key of the user in its home
> directory, as specified in the /etc/passwd file.  So make sure that
> the key is readable by xcpufs on the node.
>
>
> On Mon, 2008-11-03 at 12:42 -0500, Daniel Gruner wrote:
> > Immediately after booting:
> >
> > [EMAIL PROTECTED] xcpufs]# xgetent group n0000
> > xgetent: n0000: Error 5: unknown user
> > [EMAIL PROTECTED] xcpufs]# xgetent passwd n0000
> > xgetent: n0000: Error 5: unknown user
> >
> > which is consistent with what I mentioned, i.e. that the '-u' flag
> > to xcpufs didn't do anything.
> >
> > After doing xgroupset and xuserset manually:
> >
> > [EMAIL PROTECTED] xcpufs]# xgetent group n0000
> >
> > Group Database From Node: n0000
> > danny:500
> > root:0
> > xcpu-admin:65530
> >
> > [EMAIL PROTECTED] xcpufs]# xgetent passwd n0000
> >
> > Password Database From Node: n0000
> > danny:500:500
> > root:0:0
> > xcpu-admin:65530:65530
> >
> > Daniel
> >
> > On 11/3/08, Abhishek Kulkarni <[EMAIL PROTECTED]> wrote:
> > >
> > > On Mon, 2008-11-03 at 11:52 -0500, Daniel Gruner wrote:
> > > > I have modified the perceus scripts so that the /etc/group and
> > > > /etc/passwd files exist on the nodes before xcpufs is run.
> > > > However, even if I run "xcpufs -u" the group/user membership is
> > > > not set, as you suggest it should be.  I am using a statically
> > > > linked version of xcpufs on the nodes (freshly compiled, not
> > > > the one that comes with perceus).
> > > >
> > > > How might one go about debugging this?
> > >
> > > What is the output of:
> > >
> > >   xgetent group <nodename>
> > >   xgetent passwd <nodename>
> > >
> > > from the perceus master?
> > >
> > > > Thanks,
> > > > Daniel
> > > >
> > > > On 11/2/08, Abhishek Kulkarni <[EMAIL PROTECTED]> wrote:
> > > > >
> > > > > > Sorry, but I don't follow.  Are you talking about perceus
> > > > > > modules?  I didn't think we wanted standard passwd/group
> > > > > > files sent to the nodes, which is what perceus normally
> > > > > > does.
> > > > >
> > > > > Yes, I was talking about Perceus modules.  "groupfile" and
> > > > > "passwdfile" are modules which just copy the group and passwd
> > > > > file (respectively) from the perceus master to the slaves.
> > > > >
> > > > > > I can't even find where xcpu stores the information for
> > > > > > xgroupset/xuserset!  (I guess this shows that it is not
> > > > > > trivial to get into the xcpu code...)  Also, what do you
> > > > > > mean by the "-u" switch for xcpu?
> > > > > > Oh, I just looked at xcpufs.c and I see it there - it is
> > > > > > not in the man page for xcpufs, though...  I guess this
> > > > > > would do the trick if we simply want all users and groups
> > > > > > to be authenticated on the nodes at all times.
> > > > >
> > > > > The group and user information is stored by xcpufs in a
> > > > > userpool structure in-memory.  The -u switch is to
> > > > > automatically add all the users and groups to the pool.  It
> > > > > would do the trick only if the users and/or groups you want
> > > > > to be authenticated against are present on the slave nodes.
> > > > >
> > > > > > I would much prefer to have the batch queuing systems do
> > > > > > this on a job-by-job basis, sort of like the node ownership
> > > > > > setting that bjs does on bproc clusters, since this would
> > > > > > prevent people from running interactively on the nodes that
> > > > > > are owned by someone else.
> > > > >
> > > > > Yes, that's the idea.
> > > > >
> > > > > > Daniel
> > > > > >
> > > > > > On Fri, Oct 31, 2008 at 7:32 PM, Abhishek Kulkarni
> > > > > > <[EMAIL PROTECTED]> wrote:
> > > > > >>
> > > > > >> I believe the xcpu module is activated in the "init"
> > > > > >> provisionary stage and the groupfile/passwdfile modules
> > > > > >> get activated in the "ready" stage.  So the way to do this
> > > > > >> would be to make the groupfile and passwdfile modules run
> > > > > >> before the xcpu module, and start xcpu with the "-u"
> > > > > >> switch.
> > > > > >>
> > > > > >> Or better yet: with the new -u switch in xuserset and
> > > > > >> xgroupset you could add all the users from the master
> > > > > >> node.
> > > > > >>
> > > > > >> Thanks,
> > > > > >> -- Abhishek
> > > > > >>
> > > > > >> On Fri, 2008-10-31 at 14:54 -0400, Daniel Gruner wrote:
> > > > > >>> Hi
> > > > > >>>
> > > > > >>> I was wondering if anybody has scripts for a perceus-xcpu
> > > > > >>> installation that will automatically add groups and users
> > > > > >>> to freshly booted nodes.  It appears to me that all the
> > > > > >>> perceus scripts in /etc/perceus/nodescripts run on the
> > > > > >>> node itself, and not on the master node, which is where
> > > > > >>> one needs to execute the xgroupset and xuserset commands.
> > > > > >>> Any help would be appreciated.
> > > > > >>>
> > > > > >>> Thanks,
> > > > > >>> Daniel
> > > > > >>>
> > > > > >>> p.s. Is anybody planning an xcpu get-together for SC08?
> > > > > >>> I think it would be great...
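P.S. Here is the rough sketch of the key-collection idea I mentioned
above.  It is only a sketch of what I mean, not working code: the
/var/xcpu/keys layout, the uid >= 500 cutoff, and the id_rsa.pub
filename are all assumptions on my part (I don't remember offhand
exactly which key file xcpufs looks for in a user's home directory),
and init_unix_users() would still have to be modified to scan such a
directory instead of the home directories listed in /etc/passwd.

    #!/bin/sh
    # Collect each user's public key from their home directory on the
    # master into one flat directory, one file per user.
    # Assumed layout: /var/xcpu/keys/<username>.pub
    KEYDIR=/var/xcpu/keys
    mkdir -p "$KEYDIR"
    getent passwd | while IFS=: read name pw uid gid gecos home shell; do
        [ "$uid" -ge 500 ] || continue   # regular users only (local convention)
        if [ -r "$home/.ssh/id_rsa.pub" ]; then
            cp "$home/.ssh/id_rsa.pub" "$KEYDIR/$name.pub"
        fi
    done

A perceus module could then push that one directory to the nodes, the
same way groupfile/passwdfile push /etc/group and /etc/passwd, without
ever exposing the home directories themselves.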
