[CentOS-virt] Virtuozzo GFS

2008-05-31 Thread James Thompson
I have just finished deploying two Dell PowerEdge 1950s with CentOS 5.1 and
Virtuozzo 4. GFS is up and running and Virtuozzo is configured for
shared-storage clustering. Everything works adequately, but I am wondering if
anyone else has experienced load issues like the ones I am seeing. I have
three VEs/VMs running, two on one node and one on the other. Two of the VEs
(one on each node) are doing very little: one is just idling with Apache and
MySQL and the other runs rsync every six hours. The third is running Zimbra.
Every so often the load on the node running the Zimbra VE will spike as high
as 2 or 3, then settle back down to around 0.8 or 0.9 a short while later.
During these spikes the node not running Zimbra will also see its load climb
from an idle level of about 0.4 to as high as 1.7. When running top I notice
that dlm_send and dlm_recv jump to the top of the process list fairly
frequently while these spikes occur.
Has anyone else run into this kind of load pattern with GFS, and what have
you done to deal with it? We are hoping to deploy a bit more densely on this
setup, so I'd like to make any performance adjustments I can at this stage.
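
For anyone who wants to watch the same thing, a rough (untested) sketch along
these lines would print the load average next to the CPU ticks the dlm
threads burn each interval - it assumes the kernel threads really do show up
in /proc as dlm_send and dlm_recv:

#!/usr/bin/env python
# Rough sketch: every 5 seconds, print the 1-minute load average together
# with how many CPU ticks (utime+stime) the DLM kernel threads used in that
# interval, so load spikes can be correlated with DLM activity.
# Assumes the threads appear in /proc with comm names "dlm_send"/"dlm_recv".
import os, time

NAMES = ("dlm_send", "dlm_recv")

def dlm_cpu_ticks():
    ticks = {}
    for pid in filter(str.isdigit, os.listdir("/proc")):
        try:
            fields = open("/proc/%s/stat" % pid).read().split()
        except (IOError, OSError):  # process vanished between listdir and open
            continue
        comm = fields[1].strip("()")
        if comm in NAMES:
            ticks[comm] = ticks.get(comm, 0) + int(fields[13]) + int(fields[14])
    return ticks

prev = dlm_cpu_ticks()
while True:
    time.sleep(5)
    cur = dlm_cpu_ticks()
    load1 = open("/proc/loadavg").read().split()[0]
    report = ", ".join(["%s +%d" % (n, cur.get(n, 0) - prev.get(n, 0)) for n in NAMES])
    print("load %s  |  %s ticks" % (load1, report))
    prev = cur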


Thanks,
James Thompson


Re: [CentOS-virt] Virtuozzo GFS

2008-05-31 Thread Scott Dowdle
James,

- James Thompson [EMAIL PROTECTED] wrote:
> I have just finished deploying two Dell PowerEdge 1950s with CentOS
> 5.1 and Virtuozzo 4. GFS is up and running and Virtuozzo is configured
> for shared-storage clustering. Everything works adequately, but I am
> wondering if anyone else has experienced load issues like the ones I am
> seeing. I have three VEs/VMs running, two on one node and one on the
> other. Two of the VEs (one on each node) are doing very little: one is
> just idling with Apache and MySQL and the other runs rsync every six
> hours. The third is running Zimbra. Every so often the load on the node
> running the Zimbra VE will spike as high as 2 or 3, then settle back
> down to around 0.8 or 0.9 a short while later. During these spikes the
> node not running Zimbra will also see its load climb from an idle level
> of about 0.4 to as high as 1.7. When running top I notice that dlm_send
> and dlm_recv jump to the top of the process list fairly frequently
> while these spikes occur.
>
> Has anyone else run into this kind of load pattern with GFS, and what
> have you done to deal with it? We are hoping to deploy a bit more
> densely on this setup, so I'd like to make any performance adjustments
> I can at this stage.
>
> Thanks,
> James Thompson

A load of 2-3 isn't much at all... so I don't think I'd call that much of a 
spike.  I have run OpenVZ at work and on a hobby server.  In both cases I have 
about 7 containers... one of them being Zimbra.  The other 6 containers are 
fairly busy, so the two machines see a decent amount of load.  I am NOT using 
GFS though.  What are dlm_send and dlm_recv part of?  GFS?

TYL,
-- 
Scott Dowdle
704 Church Street
Belgrade, MT 59714
(406)388-0827 [home]
(406)994-3931 [work]


Re: [CentOS-virt] Virtuozzo GFS

2008-05-31 Thread James Thompson
I believe dlm_send and dlm_recv belong to the distributed lock manager (DLM)
that GFS uses, so locks are bouncing back and forth between the servers. The
Zimbra VE is pretty consistent in its internal load, and what struck me was
that Virtuozzo's PIM interface flags the increased load (leading me to
believe it is abnormal). Perhaps I am misreading the load figures, though. On
a busy machine, what would be considered a normal load? Does it differ based
on the number of processors and cores in the system? I came across this
article, which makes me think my concern may be misplaced:
http://www.linuxjournal.com/article/9001
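
As a sanity check on how I am reading the numbers, here is a quick (untested)
sketch along the lines of what that article suggests - dividing the load
averages by the number of logical CPUs:

#!/usr/bin/env python
# Sketch: normalise the load averages by the number of logical CPUs, since a
# load of 2-3 on a multi-core PowerEdge 1950 means something rather different
# from the same figure on a single-core box.

def cpu_count():
    # Count 'processor' entries rather than relying on newer Python modules.
    return sum(1 for line in open("/proc/cpuinfo") if line.startswith("processor"))

ncpus = cpu_count()
loads = [float(x) for x in open("/proc/loadavg").read().split()[:3]]

for label, value in zip(("1min", "5min", "15min"), loads):
    print("%s load %.2f -> %.2f per core (%d cores)" % (label, value, value / ncpus, ncpus))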

Can anyone who is using GFS comment on how it has performed for them in
virtualization scenarios? When I was initially working with just the
iSCSI-mounted disks, operations were only slightly less snappy than local
file operations. With GFS things are noticeably more sluggish, which I expect
is normal given the multi-server locking, but has anyone worked on optimizing
GFS performance in a virtualization environment?
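
If it helps compare notes, a crude (untested) sketch like this one, pointed
first at a local directory and then at a directory on the GFS mount, would
put a number on the small-file overhead - the paths on the command line are
just placeholders, not my actual mounts:

#!/usr/bin/env python
# Crude micro-benchmark sketch: time create/write/stat/unlink cycles on many
# small files in a directory, so a local filesystem and a GFS mount can be
# compared. Run it once per directory, e.g.: ./bench.py /tmp /path/to/gfs
import os, sys, time

def small_file_loop(directory, count=500):
    start = time.time()
    for i in range(count):
        path = os.path.join(directory, "bench_%d.tmp" % i)
        f = open(path, "w")
        f.write("x" * 1024)  # 1 KiB per file
        f.close()
        os.stat(path)
        os.unlink(path)
    return time.time() - start

if __name__ == "__main__":
    for directory in sys.argv[1:]:
        elapsed = small_file_loop(directory)
        print("%s: %.2f s for 500 small-file create/stat/unlink cycles" % (directory, elapsed))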


Thanks,
James Thompson

On Sat, May 31, 2008 at 11:56 AM, Scott Dowdle [EMAIL PROTECTED]
wrote:

> A load of 2-3 isn't much at all... so I don't think I'd call that much
> of a spike.  [...]  I am NOT using GFS though.  What are dlm_send and
> dlm_recv part of?  GFS?



Re: [CentOS-virt] Virtuozzo GFS

2008-05-31 Thread Bradley Falzon
On Sat, 2008-05-31 at 16:19 -0400, James Thompson wrote:
> With GFS things are noticeably more sluggish, which I expect is normal
> given the multi-server locking [...]

James,

I haven't personally used GFS before, but the sysstat package might be
able to help you out. From what I can see, there are two likely causes of
the load:

CPU:
We've run Zimbra in VMware Server with an average load of around 2-3
(exactly what you're seeing) using full virtualisation via Intel VT (make
sure your Dell 1950 has this turned on in the BIOS - it's not on by
default). In that case 'top' on the hypervisor would show the load to be
mostly CPU.
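
A quick way to check for the flag from the host (sketch only - it shows what
the CPU advertises, not necessarily what the BIOS has enabled):

#!/usr/bin/env python
# Sketch: look for the hardware virtualisation CPU flags in /proc/cpuinfo.
# vmx = Intel VT, svm = AMD-V. Note this only shows what the CPU advertises;
# whether the BIOS has VT enabled still needs checking in the BIOS setup,
# since some boards report the flag regardless.

flags = set()
for line in open("/proc/cpuinfo"):
    if line.startswith("flags"):
        flags.update(line.split(":", 1)[1].split())

if "vmx" in flags:
    print("CPU advertises Intel VT (vmx)")
elif "svm" in flags:
    print("CPU advertises AMD-V (svm)")
else:
    print("no hardware virtualisation flags found")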

IO:
Use the sysstat RPM (iostat) to have a look at iowait times and narrow it
down to an IO problem; here you'd see 'top' reporting a high load average
while the CPUs look fine. We ran iSCSI over a 50 Mbit link as a quick test,
and while formatting, the load average shot up to ~5; the CPUs and
everything else looked fine, and iowait quickly showed us the box was
waiting on IO the whole time.
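
If you don't want to install anything extra, a rough iowait check can be done
straight from /proc/stat (a sketch - the same numbers iostat/sar would give
you, just cruder):

#!/usr/bin/env python
# Sketch: sample the aggregate "cpu" line in /proc/stat twice and report what
# fraction of the interval was spent busy (user+nice+system) vs. in iowait.
import time

def cpu_times():
    # /proc/stat "cpu" fields: user nice system idle iowait irq softirq ...
    return [int(x) for x in open("/proc/stat").readline().split()[1:]]

a = cpu_times()
time.sleep(5)
b = cpu_times()
delta = [y - x for x, y in zip(a, b)]
total = float(sum(delta))

busy = (delta[0] + delta[1] + delta[2]) / total * 100
iowait = delta[4] / total * 100
print("busy %.1f%%  iowait %.1f%%  over 5 seconds" % (busy, iowait))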

Hope that might help; you've probably checked these already, though.

Does Virtuozzo support paravirtualised guests? That might be the next step
to check out if you haven't already...

___
CentOS-virt mailing list
CentOS-virt@centos.org
http://lists.centos.org/mailman/listinfo/centos-virt