As promised, here are my vsched findings. My setup is util-vserver 0.30.195 and vs 1.9.3.



The token-bucket scheduler principle is pretty well explained here:

http://www.linux-vserver.org/index.php?page=Linux-VServer-Paper-06


vsched takes the following arguments:

   --fill-rate

        The number of tokens placed in the bucket at each interval.

   --interval

        How often (in jiffies) the above number of tokens is placed in
        the bucket. Through some googling I had found references saying
        a jiffy is about 10ms, and that is true for 2.4 kernels (HZ=100),
        but on most 2.6 kernels HZ is 1000, i.e. a jiffy is 1ms, which is
        why it seemed shorter to me. A jiffy is 1/HZ seconds and does not
        depend on the CPU speed.

   --tokens

        The bucket starts out with this many tokens. Tokens_max takes
        precedence here, so it cannot be higher than tokens_max.

   --tokens_min

        When the bucket runs empty, the context is put on hold _until_
        at least this many tokens are back in the bucket.

   --tokens_max

        The size of the bucket. When tokens aren't being used, the bucket
        keeps filling up, but only to this value, so in effect this is
        your CPU burst parameter.

   --cpu_mask

        This is obsolete, but I've found the current vsched is a little
        picky and will segfault if you omit parameters, so I always
        specified 0 here.


According to the VServer paper, "At each timer tick, a running process consumes exactly one token from the bucket". Here "running" means actually needing the CPU, as opposed to "running" in the sense of merely existing. Most processes are not running most of the time; e.g. an httpd waiting on a socket isn't running, even though ps would list it.
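
To make the parameter semantics concrete, here is a toy simulation of how I understand the bucket to work. This is just my mental model written out in bash (the variable names and the "context always wants the CPU" assumption are mine), not the actual kernel code:

#!/bin/bash
# Toy token bucket -- one loop iteration = one timer tick (one jiffy).
# Assumes the context always wants the CPU; values match the example below.
fill_rate=30; interval=100; tokens=100; tokens_min=30; tokens_max=200

on_hold=0
for ((tick = 1; tick <= 1000; tick++)); do
    # every $interval ticks, $fill_rate tokens are added, capped at $tokens_max
    if (( tick % interval == 0 )); then
        tokens=$(( tokens + fill_rate ))
        (( tokens > tokens_max )) && tokens=$tokens_max
    fi

    # a held context stays off the CPU until it has refilled to $tokens_min
    (( on_hold && tokens >= tokens_min )) && on_hold=0

    # otherwise it burns one token for each tick it spends on the CPU
    if (( ! on_hold )); then
        tokens=$(( tokens - 1 ))
        (( tokens <= 0 )) && { tokens=0; on_hold=1; }   # empty -> on hold
    fi
done
echo "after 1000 ticks: tokens=$tokens, on_hold=$on_hold"

If you run it with the numbers above, the context ends up on the CPU for roughly 30 ticks out of every 100, which is the fill_rate/interval share you would expect.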



Since a running process consumes one token per tick, a token corresponds to 1/HZ seconds of CPU time regardless of CPU speed (how much actual work that buys obviously does depend on the CPU; my tests were on a 2.8GHz Xeon), and that is quite a bit. Typing "python" on the command line (which is a huge operation IMHO) consumed 17 tokens in my tests. Having 100000 tokens in your bucket is probably sufficient for a medium-size compile job.



Here are some guidelines. All this is very much unscientific and without a lot of testing or theory behind it, so if someone has better guidelines, please pitch in.


When trying to come up with a good setting in my environment (basically hosting), I was looking for values that would not cripple the snappiness of the server, but prevent people from being stupid (e.g. cat /dev/zero | bzip2 | bzip2 | bzip2 > /dev/null).

The fill interval should be short enough not to be noticeable, so something like 100 jiffies. The fill rate should be relatively small, something like 30 tokens. Tokens_min seems like it should simply equal the fill rate. Tokens_max should be generous so that people can do short CPU-intensive things when they need them, so something like 10000 tokens.
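
For what it's worth, here is the back-of-the-envelope arithmetic behind those numbers (assuming a 2.6 kernel with HZ=1000, i.e. 1ms jiffies; adjust if your HZ is different):

# steady-state CPU share ~= fill_rate / interval:
echo "scale=2; 30/100" | bc      # => .30, i.e. about 30% of one CPU

# refill happens every interval jiffies: 100 * 1ms = every 100ms

# burst: a full 10000-token bucket drains at (1 - 30/100) = 0.7 tokens per
# tick while the context runs flat out, so it lasts roughly:
echo "10000/0.7" | bc            # => ~14285 ticks, i.e. about 14 seconds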

You can see current token stats by looking at

/proc/virtual/<xid>/sched

on the mother server. (If fill_rate shows up as 115 no matter what you do, see my earlier vsched posting on this list.)
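
For example, to get a feel for what a particular command costs, I just eyeball the token count before and after. (The xid and guest name below are made up; "vserver <name> exec" is just one way to run something inside the guest.)

xid=49                                  # substitute your context id
cat /proc/virtual/$xid/sched            # note the current token count
vserver myguest exec python -c 'pass'   # run something inside the guest
cat /proc/virtual/$xid/sched            # the difference is what it cost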


You can also use vsched to pace any CPU-intensive command, e.g.:

vcontext --create --        \
  vsched --fill-rate 30     \
         --interval 100     \
         --tokens 100       \
         --tokens_min 30    \
         --tokens_max 200   \
         --cpu_mask 0       -- /bin/my_cpu_hog


While playing with this stuff I've run into situations where a context has no tokens left, at which point you cannot even kill the processes in it. Don't panic - you can always reenter the context and call vsched with new parameters.
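
Something along these lines should do it. (The xid is made up, and I'm writing the vcontext --migrate usage from memory, so double-check it against vcontext --help.)

vcontext --migrate --xid 49 --             \
  vsched --fill-rate 500  --interval 100   \
         --tokens 5000    --tokens_min 30  \
         --tokens_max 10000 --cpu_mask 0   -- /bin/true

The scheduler parameters belong to the context, so the more generous values take effect for everything already running in it, and you can kill things off normally again.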


I think that's about it.

HTH,

Grisha