[one-users] OpenNebula Drain CPU

2012-07-01 Thread Alberto Zuin - Liste

Hello List,
I installed my servers in the datacenter last Friday: I have a Cloud 
Controller on an old quad-core Xeon (a Dell PowerEdge 860) with 6 GB 
RAM and 2x2TB SATA disks, which I use only for storage via MooseFS and 
to run the OpenNebula daemon; I also have 2 KVM nodes whose VMs use 
raw disk images on MooseFS.
I plan to move the data and some VMs from 2 other old PE860s onto this 
OpenNebula infrastructure and, after that, use those 2 servers to add 
MooseFS nodes to the storage cluster to enable HA.
At the moment I have 10 VMs running on this infrastructure, all ready 
but with no load other than incoming email (no DNS, no web sites).
Without OpenNebula running, I have a load average under 1; the CPU is 
70% idle and 25% waiting for disk.
This morning (yesterday was OK), with OpenNebula running, I have a load 
average of about 9-10; the CPU is 0% idle, 0% waiting for disk, 60% 
system and 35% nice. RAM is fine: 1 GB used, 1 GB free and 4 GB cached; 
no swap at all.
I checked the system for possible rootkits etc., I checked the logs 
(debug level 3), and I tried moving one.db out of the way and 
restarting OpenNebula, but the load stays high. The Nokogiri gem is 
installed. Sunstone, radacctd, econe, etc. are not running, only the 
One daemon.
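For reference, this is roughly how I reset the database (assuming a 
system-wide installation where one.db lives under /var/lib/one; the 
path differs for self-contained installs):

    one stop
    mv /var/lib/one/one.db /var/lib/one/one.db.bak   # keep a backup
    one start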
When I start OpenNebula and run netstat, I see many connections in 
TIME_WAIT state to MooseFS.
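To give an idea of the numbers, I count them with a one-liner like 
this (column 5 of netstat -tan is the remote address:port):

    netstat -tan | awk '$6 == "TIME_WAIT" {print $5}' | sort | uniq -c | sort -rn | head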


In this situation I don't understand whether it is a MooseFS problem 
or an OpenNebula problem, but without OpenNebula running all the KVM 
VMs work well, with no other problems. What can I do to debug (and 
solve) this?
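So far the only extra data I know how to collect is something like 
this (assuming strace is installed and oned is the daemon's process 
name):

    ps -eo pid,pcpu,comm --sort=-pcpu | head   # which processes eat the CPU
    strace -c -f -p $(pgrep -x oned)           # Ctrl-C after ~30s prints a syscall summary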


Thanks,
Alberto

--

Alberto Zuin
via Mare, 36/A
36030 Lugo di Vicenza (VI)
Italy
P.I. 04310790284
Tel. +39.0499271575
Fax. +39.0492106654
Cell. +39.3286268626
www.azns.it - albe...@azns.it

___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] contribution of LVM2 transfer manager driver for opennebula 3.4.x

2012-07-01 Thread Rolandas Naujikas

On 2012-06-30 19:14, Shankhadeep Shome wrote:

Will this work with 3.6? Is there any reason clustered lvm is required over


I haven't tried 3.6 yet, but as seen in the release notes for the beta 
version there are no changes in the transfer manager driver part, so it 
should work (but please test before using it in production).
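A minimal way to test would be something like this (the datastore name 
and attribute values here are only an example):

    $ cat lvm2-test.ds
    NAME   = lvm2_test
    DS_MAD = fs
    TM_MAD = lvm2
    $ onedatastore create lvm2-test.ds

and then deploy a throwaway VM with a disk on that datastore.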


I have no experience with normal LVM on shared disks (SAN). I suspect 
that in this case the mv script could probably be improved by not 
copying from one node to another in some cases (and/or by not creating 
the file copy on STOP).
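For example, something along these lines in the mv script (only a 
sketch of the idea, not a tested driver; the SHARED flag is an 
assumption, to be read from the driver's own configuration):

    #!/bin/bash
    # mv SRC DST VM_ID DS_ID, where SRC and DST look like host:/path
    SRC_HOST=${1%%:*}
    DST_HOST=${2%%:*}
    # on shared LVM (SAN), or when the VM stays on the same host,
    # both nodes already see the volume, so there is nothing to copy
    if [ "$SHARED" = "yes" ] || [ "$SRC_HOST" = "$DST_HOST" ]; then
        exit 0
    fi
    # ...otherwise fall through to the usual copy over ssh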


Regards, Rolandas Naujikas


Normal LVM with shared disks? I've been using GlusterFS for shared 
storage, but it's been unstable and I've lost data under high I/O.

Shank

On Tue, Jun 26, 2012 at 6:58 AM, Rolandas Naujikas 
rolandas.nauji...@mif.vu.lt wrote:


Hi,

Because Xen on Debian 6.0 doesn't support tap:aio: and because LVM 
disks are faster, I wrote a modified transfer manager driver for 
OpenNebula 3.4.x that uses LVM volumes on the local disks of the 
virtualization hosts.

There are 3 kinds:

*) lvm2 - works with both shared and non-shared filesystem datastores 
(for the system datastore there is a parameter in lvm.conf that tells 
whether it is shared or not).

*) lvm2ssh - the same, but with the shared-datastore detection code 
removed.

*) lvm2shared - the same, but assuming the datastore is always shared.

All of them are available at http://mif.vu.lt/~rolnas/one/one34/tm/
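To enable one of them, copy the driver directory into the tm remotes 
and add its name to the drivers list in oned.conf, roughly like this 
(paths and the existing driver list depend on your installation):

    # on the front-end, as oneadmin
    cp -r lvm2 /var/lib/one/remotes/tm/
    # then in /etc/one/oned.conf, add lvm2 to TM_MAD, e.g.:
    # TM_MAD = [ executable = "one_tm",
    #            arguments  = "-t 15 -d dummy,lvm,shared,ssh,lvm2" ]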

Regards, Rolandas Naujikas

P.S. This driver was in almost-working condition in OpenNebula 3.2, 
but was lost in OpenNebula 3.4. The lvm driver shipped with OpenNebula 
3.4.1 is not for the system datastore and is different by design.

___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org