On 2013-04-08 23:37, Markus Koeberl wrote:
> Check the I/O scheduler settings on your virtual machine and on the
> server. The default cfq is not the fastest for this use case. Especially
> if you are using a battery-backed hardware RAID, you may want to test
> with both set to noop.
>
> You can check the current scheduler with:
> cat /sys/block/{DEVICE-NAME}/queue/scheduler

Ok, this is interesting. We recently started using virtual fileservers, but have not made any changes to the I/O scheduler. Is noop the recommended setup for OpenAFS fileservers on VMware? I just checked one of our virtual fileservers:
# cat /sys/block/sde/queue/scheduler
noop anticipatory deadline [cfq]
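For anyone else wanting to experiment: the scheduler can be switched per device at runtime, although the change is lost on reboot. A minimal sketch, assuming the sde device from the example above (adjust for your own setup):

```shell
# Switch the active scheduler to noop at runtime (root required).
# The setting is per-device and does not survive a reboot.
echo noop > /sys/block/sde/queue/scheduler

# Verify: the active scheduler is shown in square brackets.
cat /sys/block/sde/queue/scheduler
```

To make the change persistent, one option is passing elevator=noop on the kernel command line via the bootloader configuration, which sets the default scheduler for all block devices at boot.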

By the way, we mount the disks as "raw devices" in VMware, as this gives the best performance.

> Also check whether you have disabled atime for the /vicepX filesystem.

I know the noatime option can improve performance, but can it safely be used on /vicepX partitions? And would we see a measurable performance benefit from it?
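In case it helps others, here is a sketch of how noatime would be applied; the device name, filesystem type, and mount point below are placeholders, not from our actual setup:

```shell
# Example /etc/fstab entry for a /vicep partition with atime updates disabled:
#   /dev/sde1  /vicepa  ext3  defaults,noatime  0  2

# Apply to an already-mounted partition without a reboot:
mount -o remount,noatime /vicepa
```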

/Staffan
_______________________________________________
OpenAFS-info mailing list
[email protected]
https://lists.openafs.org/mailman/listinfo/openafs-info
