On 24/12/14 04:12, Prentice Bisbal wrote:

> Anyway, another person in the conversation felt that this would be bad,
> because if someone was running a job that would hammer the filesystem,
> it would make the filesystem unresponsive, and keep other people from
> logging in and doing work.
I don't believe we've ever seen this issue with GPFS, and we have some people running some pretty pathological codes for I/O (including OpenFOAM, which is plain insane, and some of the bioinformatics codes that want to do single-byte synchronous I/O).

I think the worst issue we've had was with an OpenFOAM user who ran us out of inodes - they created many millions of directories, each with 4 files in them. But we killed the job, added metadata disks online to extend the inodes, and then educated the user. It's not something unique to GPFS though (and could probably be harder to recover from on other filesystems).

All the best,
Chris

-- 
Christopher Samuel    Senior Systems Administrator
VLSCI - Victorian Life Sciences Computation Initiative
Email: [email protected]    Phone: +61 (0)3 903 55545
http://www.vlsci.org.au/    http://twitter.com/vlsci
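[Editor's note: for readers who haven't run into it, "single-byte synchronous I/O" means the application forces each individual byte to stable storage before issuing the next write. Below is a minimal Python sketch of that pattern contrasted with an ordinary buffered write; the path is made up and this is an illustration of the general pattern, not any particular code's actual I/O loop.]

  import os

  # Hypothetical path on a shared filesystem; purely illustrative.
  path = "/gpfs/scratch/example.dat"

  # Pathological pattern: every one-byte write must reach stable
  # storage before the next can proceed, so writing 1 MB costs
  # roughly a million synchronous round trips to the fileservers.
  fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_SYNC, 0o644)
  for _ in range(1024 * 1024):
      os.write(fd, b"x")   # one byte per synchronous write
  os.close(fd)

  # Sane equivalent: buffer in userspace so the filesystem sees one
  # large write instead of a million tiny ones.
  with open(path, "wb") as f:
      f.write(b"x" * 1024 * 1024)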
