How about using cgroups? Put each user's login shell in a cgroup that
cannot use more than X% of the machine's CPUs; the limit then applies to
every process spawned under that user's shell.
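
A rough sketch of how that could look on CentOS 6 (assuming the
libcgroup package is installed so the cgconfig and cgred services are
available; "@stud" is the group from the limits.conf quoted below, and
the 8-out-of-16-CPUs quota is just an example figure):

 # /etc/cgconfig.conf - define a "students" cgroup with a hard CPU cap
 group students {
     cpu {
         cpu.cfs_period_us = 100000;  # 100 ms accounting period
         cpu.cfs_quota_us  = 800000;  # at most 8 CPUs' worth per period
     }
 }

 # /etc/cgrules.conf - put every process of the @stud group in that cgroup
 @stud        cpu        students/

 # enable and start the services
 chkconfig cgconfig on && chkconfig cgred on
 service cgconfig start && service cgred start

Note this caps the @stud group as a whole; a per-user cap would need one
cgroup per student (more setup), and cpu.cfs_quota_us needs a RHEL 6
kernel recent enough to have CFS bandwidth control - otherwise
cpu.shares gives a softer, proportional limit instead of a hard cap.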

--guy

On 04/21/2017 03:07 PM, Josh Roden wrote:
Hi

server setup:
------------------
CentOS 6
32GB RAM
16 CPUs
70 students max

I am using /etc/security/limits.conf to keep the students from choking
the whole server, but sometimes one student writes a very bad program
that keeps re-running itself - so fast that "killall" and "pkill -9 -u"
can't kill the processes before new copies are spawned...

Here are my definitions in limits.conf:

 @stud           hard    cpu             8
 @stud           hard    nproc           256
 @stud           hard    nofile          1024
 @stud           -       maxlogins       6

I can't reduce the CPU-time limit below 8 minutes because Eclipse gets
killed every hour or so.
My problem seems to be that a student can run up to 256 processes, each
using 100% of a single CPU, and we only have 16 CPUs.

Thanks for any suggestions.
Josh


_______________________________________________
Linux-il mailing list
Linux-il@cs.huji.ac.il
http://mailman.cs.huji.ac.il/mailman/listinfo/linux-il


