> users files and processes, zones are the cleanest way
> to do this.
> However, if you are adverse to using zones for some
> reason, this can

I am not averse to it. Zones were my first thought. But I just don't see them as 
viable or cost-effective. One of the OEMs has 30,000 accounts, the CGI cluster 
has 8 machines (growing by 1 per month), and the FTP cluster has 3. FTP 
generally sees about 400-500 logins.

Now, I might be wrong, but this is what I think the problem with zones is:

I cannot simply spin up a zone _when the SSH login happens_, i.e., dynamically 
as the user connects. That means I have to pre-create zones. Creating 30,000 
zones, because I will not know who will use them, clearly means I need 30 servers 
in the SSH cluster (if I can even get 1,000 idle zones on a Supermicro! That's 
22,000 LWPs, according to someone's post).

Even if I add new provisioning commands so that a user has to "sign up" for SSH 
access (not sure that will pass approval, but still), I would still have a lot 
of zones just sitting idle, and a lot of machines.

I won't know for sure, but starting with 3 SSH servers seems reasonable, since 
FTP uses 3 (and everyone here has either 54Mbit DSL or 100Mbit fiber, after 
all). There won't be as much bandwidth, but more processes. Still not as many 
as on CGI, though.



> Create zfs datasets for each user and put inheritable
> ACLs on them to
> only allow the file owner to access/see them. Mark

I did extensive tests with the idea of each user having their own ZFS dataset. 
It is just not practical, even with automount and mirror mounts.


> also want to put
> quotas and reservations on the datasets. Mark Maybee

That refers to "zfs set quota", which applies to datasets. However, we DO use 
userquota, which is fantastic.



> Then, change the privilege set to only allow them to
> view their own
> processes. Glenn Brunette posted a good write up on

This, also mentioned by Rob McMahon, is most excellent. It takes care of the 
process issue completely. It leaves only the disk access as a problem.
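
For anyone searching the archives later: if I understand Glenn's write-up 
correctly, the mechanism is the proc_info basic privilege. Removing it from a 
user's default privilege set stops ps/prstat from showing other users' 
processes. A sketch of the Solaris 10 configuration (system-wide; the per-user 
entry and username are illustrative):

```
# /etc/security/policy.conf -- drop proc_info from every user's basic set
PRIV_DEFAULT=basic,!proc_info

# or per user, in /etc/user_attr:
someuser::::defaultpriv=basic,!proc_info
```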


I amused myself by writing a kernel module to replace "sys_chdir" with my own, 
using:

[code]
    if (path && *path && (getuid() == 1072)) {
        cmn_err(CE_NOTE, "lund: chdir directory (%s) %d", path, getuid());

        if ((error = lookupname(path, UIO_USERSPACE, FOLLOW,
            NULLVPP, &vp)) != 0) {
            cmn_err(CE_NOTE, "lookupname failed");
            return (set_errno(error));
        }

        if (!strncmp("/export/", vp->v_path, 8) &&
             strncmp("/export/user/www/jp/r/e/domain/1/5/ac0010115",
                     vp->v_path, 47)) {
            /*
             * If it is INSIDE /export, but doesn't start with their
             * home-dir, we deny it.
             */
            cmn_err(CE_NOTE, "lund: denied '%s'", vp->v_path);
            VN_RELE(vp);            /* lookupname returned a held vnode */
            return (set_errno(ENOENT));
        }
        VN_RELE(vp);
    }
[/code]

Which works like a charm. However, I don't see any way to get a user's 'home 
directory' from kernel context. It just isn't something the kernel generally 
cares about, so I don't think it is in the user-> structures. (But I probably 
should replace getuid() with a lookup).

And I suspect I can't call getpwuid() from the module, since it uses PAM, and 
LDAP. Even though it should all be cached from the login.
-- 
This message posted from opensolaris.org
_______________________________________________
sysadmin-discuss mailing list
[email protected]
http://mail.opensolaris.org/mailman/listinfo/sysadmin-discuss
