Mike,

Forgot to mention that some of this is already done in the HPC world.
Typically jobs are submitted via a web interface (very controlled) and
all jobs are run as if owned by the owner of that web connection.  We
run some bio-genome codes on our large cluster like this: the
accountable person owns the web codes, and the jobs that run are
tightly restricted by the web interface - all under that person's
named account.  If someone goes bad, he is the one responsible!
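
For illustration only, here is a rough Python sketch of that kind of
gateway - a controlled front end that accepts only whitelisted codes and
submits everything under the single accountable account.  The account
name, code list, and the sudo/qsub plumbing are all placeholders, not
our actual setup.

# hypothetical submission gateway (account name, code list, commands are placeholders)
import shlex
import subprocess

ACCOUNTABLE_USER = "genomeweb"             # the named, accountable account (assumed)
ALLOWED_CODES = {"blastall", "hmmsearch"}  # the only codes the web interface exposes (assumed)

def submit(code, input_file):
    """Validate a web request and submit it as a batch job owned by ACCOUNTABLE_USER."""
    if code not in ALLOWED_CODES:
        raise ValueError("code not permitted by the web interface")
    # Run qsub as the accountable user via sudo, so every job is traceable
    # to that one named account.
    job = "echo %s %s | qsub -N webjob" % (shlex.quote(code), shlex.quote(input_file))
    subprocess.run(["sudo", "-u", ACCOUNTABLE_USER, "sh", "-c", job], check=True)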

stephen



Mike Mettke wrote:
> 
> All,
> 
> here's an idea I've been toying with:
> 
> An OSCAR cluster in the end provides a service to a customer, such as
> running parallel programs. Currently, for customer authentication we
> import the associated /etc/passwd file, and for customer data we mount
> the user filespace via NFS. This requires a lot of updating and, most
> importantly, reflects the view that the customer network is homogeneous
> with OSCAR.
> 
> Now, in my practical experience, I've rarely seen organizations that
> have a single, fully homogeneous network. For historical and political
> reasons, I usually see 2 or 3 networks or clusters in a single
> organization. Your mileage may vary, however.
> 
> The expense associated with an OSCAR cluster creates some pressure to
> make its computing power available to multiple customer clusters.
> Importing the user authentication data is usually difficult, since the
> user ID spaces of the clusters overlap rather than being disjoint.
> Opening up the data space is difficult as well, since data on the
> customer clusters is sometimes not well enough compartmentalized,
> leading to a choice between access to all data or none at all. In the
> end, OSCAR shouldn't have to care about the restrictions associated
> with each customer's data set, as long as all OSCAR jobs and their data
> are compartmentalized against each other.
> 
> How about:
> 
> 1. OSCAR runs all jobs anonymously, e.g. as "oscaruser"?
> Maybe we can create an array of 10,000 oscarusers, say
> oscaruser1...oscaruser10000, and randomly pick one ID to execute under.
> This would provide job-to-job data and user ID separation.
> 
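Just to make the idea concrete, a toy sketch of the random pick - the
pool size, lock directory, and naming are taken from the paragraph above
or simply assumed:

# hypothetical helper that picks a free ID from the oscaruser1..oscaruser10000 pool
import os
import random

POOL_SIZE = 10000
LOCK_DIR = "/var/lock/oscarusers"   # assumed location for per-job reservations

def pick_oscaruser():
    """Return an unused oscaruserN name, reserving it with a lock file."""
    os.makedirs(LOCK_DIR, exist_ok=True)
    for n in random.sample(range(1, POOL_SIZE + 1), POOL_SIZE):
        name = "oscaruser%d" % n
        try:
            # O_EXCL makes the reservation atomic, so two jobs can't grab the same ID.
            fd = os.open(os.path.join(LOCK_DIR, name), os.O_CREAT | os.O_EXCL)
            os.close(fd)
            return name
        except FileExistsError:
            continue
    raise RuntimeError("all oscaruser IDs are currently in use")

def release_oscaruser(name):
    """Give the ID back to the pool once the job has finished."""
    os.remove(os.path.join(LOCK_DIR, name))
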
> 2. This does mean all necessary user data is copied for each job from
> the customer clusters into OSCAR, and the associated results have to be
> copied back. PBS has a mechanism for this via its "stage file" option.
> If that seems wasteful, remember that NFS is in the end doing exactly
> the same thing: copying the necessary data from host A to host B.
> 
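In the same spirit, a sketch of driving that staging from a small
wrapper; it assumes the -W stagein=/stageout= syntax of PBS/Torque qsub,
and the host names and paths are invented:

# hypothetical wrapper that submits a job with PBS file staging
import subprocess

def submit_with_staging(script, input_path, output_path, customer_host="hosta.domain3.com"):
    """Copy input in from the customer cluster and copy the results back afterwards."""
    stagein = "input.dat@%s:%s" % (customer_host, input_path)     # local_file@host:remote_file
    stageout = "results.dat@%s:%s" % (customer_host, output_path)
    subprocess.run(
        ["qsub", "-W", "stagein=" + stagein, "-W", "stageout=" + stageout, script],
        check=True,
    )
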
> 3. The headnode would have to have some authentication mechanism, based
> on host and/or domain names and possibly users, such as (a matching
> sketch follows this list):
> 
> *.domain1.com
> *.domain2.com
> hosta.domain3.com
> hostb.domain4.com wilma
> hostc.domain5.com fred
> 
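And a sketch of what checking against such a list could look like -
wildcard host patterns and the optional user column are interpreted
exactly as in the example above; the file location is an assumption:

# hypothetical check of a submitting host/user against the access list above
from fnmatch import fnmatch

def load_rules(path="/etc/oscar/access"):       # assumed file name
    """Each line: a host pattern, optionally followed by an allowed user name."""
    rules = []
    with open(path) as f:
        for line in f:
            fields = line.split()
            if fields:
                rules.append((fields[0], fields[1] if len(fields) > 1 else None))
    return rules

def is_allowed(host, user, rules):
    """True if some rule matches the host and either names no user or this user."""
    for host_pattern, allowed_user in rules:
        if fnmatch(host, host_pattern) and allowed_user in (None, user):
            return True
    return False

So, for example, is_allowed("hostx.domain1.com", "barney", rules) would
match the *.domain1.com line, while hostb.domain4.com would only be
accepted for wilma.
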
> What do you think?
> 
> just wondering,
> Mike
> 
> --
> Wireless Advanced Technologies Lab
> Bell Labs - Lucent Technologies
> 

-- 
------------------------------------------------------------------------
Stephen L. Scott, Ph.D.                 voice: 865-574-3144
Oak Ridge National Laboratory           fax:   865-574-0680
P. O. Box 2008, Bldg. 6012, MS-6367     [EMAIL PROTECTED]
Oak Ridge, TN 37831-6367                http://www.csm.ornl.gov/~sscott/
------------------------------------------------------------------------

