On Monday 08 April 2013 20:52:23 [email protected] wrote:
> > On Sunday 07 April 2013 20:55:41 [email protected] wrote:
> >
> > Check the I/O scheduler settings on your virtual machine and on the
> > server. The default cfq is not the fastest for this use case. Especially
> > if you are using a battery-backed HW RAID, you may test with both set to
> > noop.
> 
> Where can I find that? I'm currently using VirtualBox, although I have
> previously used VMware Server. I also use some HW clients. I have tested
> the server on hardware earlier, too, but none of that solved the
> order-of-magnitude problem. That's why I returned to virtualization; it's
> otherwise much more convenient during the development phase.

You can get it with:
cat /sys/block/{DEVICE-NAME}/queue/scheduler
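As a sketch of how to read and change it (the device name "sda" is an assumption; adjust to your system):

```shell
# Hypothetical helper: print the active I/O scheduler for a device.
# The kernel marks the active one with brackets, e.g. "noop deadline [cfq]".
active_scheduler() {
    grep -o '\[[^]]*\]' "/sys/block/$1/queue/scheduler" | tr -d '[]'
}

# Example: active_scheduler sda
# Switch to noop at runtime (root required, not persistent across reboots):
# echo noop > /sys/block/sda/queue/scheduler
```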

Also check that you have disabled atime for the /vicepX filesystem.
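For example, an fstab entry could look like this (the device, ext4, and the /vicepa mount point are assumptions; adjust to your setup):

```
# /etc/fstab entry with atime updates disabled for the fileserver partition
/dev/sdb1  /vicepa  ext4  defaults,noatime  0  2
```

You can also try it without a reboot via `mount -o remount,noatime /vicepa`.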

I use Xen and VirtualBox always with block devices (LVM, iSCSI, ZFS). I guess 
there will also be a huge slowdown if you use file-backed disk images.


> > Also try to use memcache on the client for the test!
> 
> Hmmm... For some reason, I can't seem to turn memcache on on my client
> machine. The other parameters work, but with -memcache it hangs at the
> (GDM) login.

My Debian notebook works fine with:
OPTIONS="-memcache" in /etc/openafs/afs.conf and 
/afs:/var/cache/openafs:100000 in /etc/openafs/cacheinfo
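For reference, the cacheinfo line is colon-separated: the AFS mount point, the disk cache directory, and the cache size in 1 KB blocks (so 100000 is roughly 100 MB). As I understand it, with -memcache the cache lives in RAM and the directory is not used, but the size field still sets the cache size:

```
# /etc/openafs/cacheinfo — <AFS mount point>:<cache directory>:<size in 1 KB blocks>
/afs:/var/cache/openafs:100000
```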

