Many big HPC sites have a similar setup (rough sketch below):
Home directories, backed up
Lustre scratch, never backed up
An HSM environment with tape as the back end
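
For illustration, a minimal sketch of that split from the user's side; the paths (/home, /p/lscratcha, /archive) and the small Python helper are hypothetical, not any particular site's layout:

#!/usr/bin/env python3
# Hypothetical illustration of the home / scratch / archive split discussed
# in this thread; paths and names are made up for the example.
import shutil
from pathlib import Path

HOME = Path("/home/user1")            # NFS-backed: quotas, snapshots, backups
SCRATCH = Path("/p/lscratcha/user1")  # Lustre scratch: fast, large, NOT backed up
ARCHIVE = Path("/archive/user1")      # HSM/tape-backed archive (e.g. an HPSS front end)

def archive_results(run_name: str) -> None:
    """Copy a finished run's results from scratch into the tape-backed archive."""
    src = SCRATCH / run_name
    dst = ARCHIVE / run_name
    dst.parent.mkdir(parents=True, exist_ok=True)
    shutil.copytree(src, dst)

if __name__ == "__main__":
    # Jobs write their output under scratch; the user archives what matters.
    archive_results("run_2010_08_10")

The point is just that anything worth keeping has to be moved off scratch explicitly, since only the home directories (and the archive) are protected.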

Sent from my iPad

On Aug 10, 2010, at 8:02 PM, "D. Marc Stearman" <[email protected]> wrote:

> At LLNL, we have all of the users' home directories on NFS servers, and we
> use Lustre for scratch space/scientific data.
> 
> When a user logs in to any of our clusters they have a normal home
> directory with all the features of a modern-day NAS appliance: quotas
> (16-32GB), snapshots, etc., and we back up the users' home directory
> data.  The NAS volumes may be tens of terabytes.
> 
> The Lustre space, which is many petabytes, is organized into scratch
> file systems mounted on all of our clusters under /p/lscratch*.
> 
> So a user may log in to their home directory at /home/user1 and write all
> of their scientific data to /p/lscratcha/user1.  We also do not back up
> our Lustre scratch space; rather, we leave it to the user to copy
> important data to our HPSS tape archive.
> 
> -Marc
> 
> ----
> D. Marc Stearman
> Lustre Operations Lead
> [email protected]
> 925.423.9670
> Pager: 1.888.203.0641
> 
> 
> 
> 
> On Aug 10, 2010, at 12:19 PM, David Noriega wrote:
> 
>> We just got our Lustre system online, and as we continue to play with
>> it, I need some help supporting my argument that we should have two
>> file servers: one NFS server to host the users' home directories, and
>> the Lustre file server to host the space their jobs run in. My
>> manager's concern is that this will confuse users, which I don't think
>> is entirely valid for anyone using a cluster, but any technical details
>> supporting a two-file-server solution would be helpful.
>> 
>> Thanks
>> David
>> 
>> -- 
>> Personally, I liked the university. They gave us money and facilities,
>> we didn't have to produce anything! You've never been out of college!
>> You don't know what it's like out there! I've worked in the private
>> sector. They expect results. -Ray Ghostbusters
> 
_______________________________________________
Lustre-discuss mailing list
[email protected]
http://lists.lustre.org/mailman/listinfo/lustre-discuss
