Re: [ovirt-users] Engine setup failed with the latest master

2015-04-21 Thread Yedidyah Bar David
- Original Message -
 From: knarra kna...@redhat.com
 To: users@ovirt.org
 Sent: Tuesday, April 21, 2015 9:43:04 AM
 Subject: [ovirt-users] Engine setup failed with the latest master
 
 Hi Everyone,
 
  I have the latest master installed on my system. When I ran yum
 update followed by engine-setup, engine-setup was not successful.

You mean this is an upgrade, from some older version of master to a
newer one?

Can you state exact versions?

 
 Following are the errors I see during engine-setup.
 
 
 [INFO  ] Rolling back database schema
   [ INFO  ] Clearing Engine database engine_20150402155955
   [ INFO  ] Restoring Engine database engine_20150402155955
   [ INFO  ] Restoring file
 '/var/lib/ovirt-engine/backups/engine-2015042802.LxRI8N.dump' to
 database localhost:engine_20150402155955.
   [ ERROR ] Errors while restoring engine_20150402155955 database,
 please check the log file for details

That's a different problem, happening during rollback. If possible, please
post relevant parts of the log, so that we can see why that failed.
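For example, something along these lines usually pulls out the interesting part
(the log file name is the one from your output below; the grep pattern is only a
suggestion):

  grep -n -i -E -B 2 -A 10 'pg_restore|error|fatal' \
      /var/log/ovirt-engine/setup/ovirt-engine-setup-20150421110722-raf0k9.log | less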

 [ INFO  ] Stage: Clean up
 Log file is located at
 /var/log/ovirt-engine/setup/ovirt-engine-setup-20150421110722-raf0k9.log
 [ INFO  ] Generating answer file
 '/var/lib/ovirt-engine/setup/answers/20150421113049-setup.conf'
   [ INFO  ] Stage: Pre-termination
   [ INFO  ] Stage: Termination
   [ ERROR ] Execution of setup failed
 
 I see the following in the logs.
 
 http://fpaste.org/213674/29598543/

What OS?

Please post the output of:

rpm -qa | egrep -i 'openssl|m2crypto'

/usr/share/ovirt-engine/bin/pki-pkcs12-extract.sh --name=engine --passin=mypass --cert=-
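If the M2Crypto/OpenSSL combination turns out to be the suspect, it can also help to
look at the engine certificate with openssl directly; roughly (the keystore path and
the default passphrase below are assumptions - adjust to your setup):

  # inspect the engine certificate: subject and expiry dates
  openssl pkcs12 -in /etc/pki/ovirt-engine/keys/engine.p12 -passin pass:mypass -nokeys | \
      openssl x509 -noout -subject -dates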

Best,
-- 
Didi
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] storage issue's with oVirt 3.5.1 + Nexenta NFS

2015-04-21 Thread InterNetX - Juergen Gotteswinter


On 21.04.2015 at 16:09, Maikel vd Mosselaar wrote:
 Hi Fred,
 
 
 This is one of the nodes from yesterday around 01:00 (20-04-15). The
 issue started around 01:00.
 https://bpaste.net/raw/67542540a106
 
 The VDSM logs are very big, so I am unable to paste a bigger part of the
 logfile. What is the maximum allowed attachment size on the mailing list?

 dmesg on one of the nodes (despite this message the storage is still
 accessible):
 https://bpaste.net/raw/67da167aa300
 
Flaky Network? NFS / Lockd Processes saturated @ Nexenta?
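A quick way to check that from the client side is to watch NFS retransmissions and
per-mount latency on the nodes while the problem is happening; roughly (both tools
come with nfs-utils, adjust if your version lacks them):

  # climbing retrans counters point at network or server trouble
  nfsstat -rc
  # per-mount RTT and queue statistics, sampled every 5 seconds, 3 samples
  nfsiostat 5 3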

 
 
 Kind regards,
 
 Maikel
 
 On 04/21/2015 02:32 PM, Fred Rolland wrote:
 Hi,

 Can you please attach VDSM logs ?

 Thanks,

 Fred

 - Original Message -
 From: Maikel vd Mosselaar m.vandemossel...@smoose.nl
 To: users@ovirt.org
 Sent: Monday, April 20, 2015 3:25:38 PM
 Subject: [ovirt-users] storage issue's with oVirt 3.5.1 + Nexenta NFS

 Hi,

 We are running oVirt 3.5.1 with 3 nodes and a separate engine.

 All on CentOS 6.6:
 3 x nodes
 1 x engine

 1 x storage nexenta with NFS

 For multiple weeks we have been experiencing issues where our nodes cannot
 access the storage at random moments (at least that's what the nodes
 think).

 When the nodes complain about unavailable storage, the load rises to over
 200 on all three nodes, which makes all running VMs inaccessible. During
 this process the oVirt event viewer shows some I/O storage error messages;
 when this happens, random VMs get paused and are not resumed anymore (this
 happens almost every time, but not all the VMs get paused).

 During the event we tested the accessibility of the storage from the nodes
 and it looks like it is working normally; at least we can do a normal ls
 on the storage without any delay in showing the contents.

 We tried multiple things that we thought were causing this issue, but
 nothing has worked so far.
 * rebooting storage / nodes / engine.
 * disabling offsite rsync backups.
 * moved the biggest VMs with the highest load to a different platform
 outside of oVirt.
 * checked the wsize and rsize on the NFS mounts; storage and nodes are
 correct according to the NFS troubleshooting page on ovirt.org.

 The environment is running in production, so we are not free to test
 everything.

 I can provide log files if needed.

 Kind Regards,

 Maikel


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Options not being passed fence_ipmilan, Ovirt3.5 on Centos 7.1 hosts

2015-04-21 Thread Mike Lindsay
Hi All,

I have a bit of an issue with a new install of Ovirt 3.5 (our 3.4 cluster
is working fine) in a 4 node cluster.

When I test fencing (or cause a kernel panic that triggers a fence), the
fencing fails. On investigation it appears that the fencing options are not
being passed to the fencing script (fence_ipmilan in this case):

Fence options in the GUI (as entered): lanplus, ipport=623,
power_wait=4, privlvl=operator

from vdsm.log on the fence proxy node:

Thread-818296::DEBUG::2015-04-21 12:39:39,136::API::1209::vds::(fenceNode)
fenceNode(addr=x.x.x.x,port=,agent=ipmilan,user=stonith,passwd=,action=status,secure=False,options=
power_wait=4
Thread-818296::DEBUG::2015-04-21 12:39:39,137::utils::739::root::(execCmd)
/usr/sbin/fence_ipmilan (cwd None)
Thread-818296::DEBUG::2015-04-21 12:39:39,295::utils::759::root::(execCmd)
FAILED: err = 'Failed: Unable to obtain correct plug status or plug is
not available\n\n\n'; rc = 1
Thread-818296::DEBUG::2015-04-21 12:39:39,296::API::1164::vds::(fence) rc 1
inp agent=fence_ipmilan
Thread-818296::DEBUG::2015-04-21 12:39:39,296::API::1235::vds::(fenceNode)
rc 1 in agent=fence_ipmilan
Thread-818296::DEBUG::2015-04-21
12:39:39,297::stompReactor::163::yajsonrpc.StompServer::(send) Sending
response


from engine.log on the engine:
2015-04-21 12:39:38,843 INFO
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(ajp--127.0.0.1-8702-4) Correlation ID: null, Call Stack: null, Custom
Event ID: -1, Message: Host mpc-ovirt-node03 from cluster Default was
chosen as a proxy to execute Status command on Host mpc-ovirt-node04.
2015-04-21 12:39:38,845 INFO  [org.ovirt.engine.core.bll.FenceExecutor]
(ajp--127.0.0.1-8702-4) Using Host mpc-ovirt-node03 from cluster Default as
proxy to execute Status command on Host
2015-04-21 12:39:38,885 INFO  [org.ovirt.engine.core.bll.FenceExecutor]
(ajp--127.0.0.1-8702-4) Executing Status Power Management command, Proxy
Host:mpc-ovirt-node03, Agent:ipmilan, Target Host:, Management IP:x.x.x.x,
User:stonith, Options: power_wait=4, ipport=623, privlvl=operator,lanplus,
Fencing policy:null
2015-04-21 12:39:38,921 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.FenceVdsVDSCommand]
(ajp--127.0.0.1-8702-4) START, FenceVdsVDSCommand(HostName =
mpc-ovirt-node03, HostId = 5613a489-589d-4e89-ab01-3642795eedb8,
targetVdsId = dbfa4e85-3e97-4324-b222-bf40a491db08, action = Status, ip =
x.x.x.x, port = , type = ipmilan, user = stonith, password = **,
options = ' power_wait=4, ipport=623, privlvl=operator,lanplus', policy =
'null'), log id: 774f328
2015-04-21 12:39:39,338 WARN
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(ajp--127.0.0.1-8702-4) Correlation ID: null, Call Stack: null, Custom
Event ID: -1, Message: Power Management test failed for Host
mpc-ovirt-node04.Done
2015-04-21 12:39:39,339 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.FenceVdsVDSCommand]
(ajp--127.0.0.1-8702-4) FINISH, FenceVdsVDSCommand, return: Test Succeeded,
unknown, log id: 774f328
2015-04-21 12:39:39,340 WARN  [org.ovirt.engine.core.bll.FenceExecutor]
(ajp--127.0.0.1-8702-4) Fencing operation failed with proxy host
5613a489-589d-4e89-ab01-3642795eedb8, trying another proxy...
2015-04-21 12:39:39,594 INFO
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(ajp--127.0.0.1-8702-4) Correlation ID: null, Call Stack: null, Custom
Event ID: -1, Message: Host mpc-ovirt-node01 from cluster Default was
chosen as a proxy to execute Status command on Host mpc-ovirt-node04.
2015-04-21 12:39:39,595 INFO  [org.ovirt.engine.core.bll.FenceExecutor]
(ajp--127.0.0.1-8702-4) Using Host mpc-ovirt-node01 from cluster Default as
proxy to execute Status command on Host
2015-04-21 12:39:39,598 INFO  [org.ovirt.engine.core.bll.FenceExecutor]
(ajp--127.0.0.1-8702-4) Executing Status Power Management command, Proxy
Host:mpc-ovirt-node01, Agent:ipmilan, Target Host:, Management IP:x.x.x.x,
User:stonith, Options: power_wait=4, ipport=623, privlvl=operator,lanplus,
Fencing policy:null
2015-04-21 12:39:39,634 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.FenceVdsVDSCommand]
(ajp--127.0.0.1-8702-4) START, FenceVdsVDSCommand(HostName =
mpc-ovirt-node01, HostId = c3e8be6e-ac54-4861-b774-17ba5cc66dc6,
targetVdsId = dbfa4e85-3e97-4324-b222-bf40a491db08, action = Status, ip =
x.x.x.x, port = , type = ipmilan, user = stonith, password = **,
options = ' power_wait=4, ipport=623, privlvl=operator,lanplus', policy =
'null'), log id: 6369eb1
2015-04-21 12:39:40,056 WARN
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(ajp--127.0.0.1-8702-4) Correlation ID: null, Call Stack: null, Custom
Event ID: -1, Message: Power Management test failed for Host
mpc-ovirt-node04.Done
2015-04-21 12:39:40,057 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.FenceVdsVDSCommand]
(ajp--127.0.0.1-8702-4) FINISH, FenceVdsVDSCommand, return: Test Succeeded,
unknown, log id: 6369eb1
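One way to narrow this down is to run the fence agent by hand on the proxy node with
the same options the GUI should be passing, and compare with what vdsm actually built.
A sketch only - long option names differ between fence-agents versions, so check
fence_ipmilan -h first, and the address/credentials below are placeholders:

  fence_ipmilan --ip=x.x.x.x --ipport=623 --lanplus \
      --username=stonith --password=SECRET \
      --privlvl=operator --power-wait=4 --action=status --verbose

If the manual run works with the full option set, the problem is on the vdsm/engine
side - which matches the vdsm.log excerpt above, where the options string only
contains power_wait=4.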


For verification I temporarily replaced 

[ovirt-users] Nested KVM on AMD

2015-04-21 Thread Winfried de Heiden

  
  
Hi all,

For testing purposes I installed vdsm-hook-nestedvt:

rpm -qi vdsm-hook-nestedvt.noarch

Name        : vdsm-hook-nestedvt        Relocations: (not relocatable)
Version     : 4.16.10                   Vendor: (none)
Release     : 8.gitc937927.el6          Build Date: ma 12 jan 2015 13:21:31 CET
Install Date: do 16 apr 2015 15:37:00 CEST   Build Host: fc21-vm03.phx.ovirt.org
Group       : Applications/System       Source RPM: vdsm-4.16.10-8.gitc937927.el6.src.rpm
Size        : 1612                      License: GPLv2+
Signature   : RSA/SHA1, wo 21 jan 2015 15:32:39 CET, Key ID ab8c4f9dfe590cb7
URL         : http://www.ovirt.org/wiki/Vdsm
Summary     : Nested Virtualization support for VDSM
Description :
If the nested virtualization is enabled in your kvm module
this hook will expose it to the guests.

Installation looks fine on the oVirt (AMD CPU) KVM host:

[root@bigvirt ~]# cat /sys/module/kvm_amd/parameters/nested
1

and in the oVirt manager "50_nestedvt" shows up in Host Hooks.
However, when trying to install "oVirt Node Hypervisor 3.5" it warns
"No virtualization Hardware was detected". Also, the svm flag
is not shown on the guest machine.
Am I missing something? Why is nested KVM not working?

Winfried

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Nested KVM on AMD

2015-04-21 Thread Artyom Lukianov
I enabled nested virtualization via:
1) echo "options kvm-amd nested=1" > /etc/modprobe.d/kvm-amd.conf
2) modprobe -r kvm-amd
3) modprobe kvm-amd

After that I can see the svm flag on the VM's CPU, but for some reason I still receive
the same error "No virtualization Hardware was detected" when I try to deploy the VM in
the engine as a host.

The same scenario for Intel CPUs works fine and I can use nested
virtualization:
1) echo "options kvm-intel nested=1" > /etc/modprobe.d/kvm-intel.conf
2) modprobe -r kvm-intel
3) modprobe kvm-intel

So it looks like a problem in host deploy with AMD CPUs; maybe Alon Bar-Lev can
help with it.
Also check your kernel: it must be >= 3.10.
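For what it's worth, a quick way to check each layer before blaming host-deploy (just
a sketch, using the obvious places to look):

  # on the physical AMD host: is nested support enabled in kvm_amd?
  cat /sys/module/kvm_amd/parameters/nested
  # inside the guest that should become a host: is svm exposed to it?
  grep -c svm /proc/cpuinfo
  lscpu | grep -i virtualization
  # and does libvirt inside that guest see it?
  virsh -r capabilities | grep -i svm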

- Original Message -
From: Winfried de Heiden w...@dds.nl
To: users@ovirt.org
Sent: Tuesday, April 21, 2015 10:45:06 AM
Subject: [ovirt-users] Nested KVM on AMD

Hi all, 

For testing purposes I installed vdsm-hook-nestedvt: 

rpm -qi vdsm-hook-nestedvt.noarch 

Name : vdsm-hook-nestedvt Relocations: (not relocatable) 
Version : 4.16.10 Vendor: (none) 
Release : 8.gitc937927.el6 Build Date: ma 12 jan 2015 13:21:31 CET 
Install Date: do 16 apr 2015 15:37:00 CEST Build Host: fc21-vm03.phx.ovirt.org 
Group : Applications/System Source RPM: vdsm-4.16.10-8.gitc937927.el6.src.rpm 
Size : 1612 License: GPLv2+ 
Signature : RSA/SHA1, wo 21 jan 2015 15:32:39 CET, Key ID ab8c4f9dfe590cb7 
URL : http://www.ovirt.org/wiki/Vdsm 
Summary : Nested Virtualization support for VDSM 
Description : 
If the nested virtualization is enabled in your kvm module 
this hook will expose it to the guests. 

Installation looks fine on the oVirt (AMD cpu) KVM-host: 

[root@bigvirt ~]# cat /sys/module/kvm_amd/parameters/nested 
1 

and in oVirt manager 50_nestedvt will show up in Host Hooks. However, trying 
to install oVirt Node Hypervisor 3.5 it will warn No virtualization Hardware 
was detected. Also, the svm flag is not shown on the guest machine. 
Am I missing something? Why is nested kvm not working? 

Winfried 


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] storage issue's with oVirt 3.5.1 + Nexenta NFS

2015-04-21 Thread InterNetX - Juergen Gotteswinter
Hi,

How about load, latency, or strange dmesg messages on the Nexenta? Are you
using bonded Gbit networking? If yes, which mode?

Cheers,

Juergen

On 20.04.2015 at 14:25, Maikel vd Mosselaar wrote:
 Hi,

 We are running oVirt 3.5.1 with 3 nodes and a separate engine.
 
 All on CentOS 6.6:
 3 x nodes
 1 x engine
 
 1 x storage nexenta with NFS
 
 For multiple weeks we have been experiencing issues where our nodes cannot
 access the storage at random moments (at least that's what the nodes
 think).

 When the nodes complain about unavailable storage, the load rises to over
 200 on all three nodes, which makes all running VMs inaccessible. During
 this process the oVirt event viewer shows some I/O storage error messages;
 when this happens, random VMs get paused and are not resumed anymore (this
 happens almost every time, but not all the VMs get paused).

 During the event we tested the accessibility of the storage from the nodes
 and it looks like it is working normally; at least we can do a normal ls
 on the storage without any delay in showing the contents.

 We tried multiple things that we thought were causing this issue, but
 nothing has worked so far.
 * rebooting storage / nodes / engine.
 * disabling offsite rsync backups.
 * moved the biggest VMs with the highest load to a different platform
 outside of oVirt.
 * checked the wsize and rsize on the NFS mounts; storage and nodes are
 correct according to the NFS troubleshooting page on ovirt.org.

 The environment is running in production, so we are not free to test
 everything.
 
 I can provide log files if needed.
 
 Kind Regards,
 
 Maikel
 
 
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Move/Migrate Storage Domain to new devices

2015-04-21 Thread Aharon Canan
Hi 

Did you try to copy the template to the new storage domain?
Under the Template tab -> Disks sub-tab -> Copy.

Regards, 
__ 
Aharon Canan 

- Original Message -

 From: Dael Maselli dael.mase...@lnf.infn.it
 To: users@ovirt.org
 Sent: Monday, April 20, 2015 5:48:03 PM
 Subject: [ovirt-users] Move/Migrate Storage Domain to new devices

 Hi,

 I have a data storage domain that uses one FC LUN. I need to move all
 data to a new storage server.

 I tried moving single disks to a new storage domain, but some cannot be
 moved - I think because they are thin-cloned from a template.

 When I worked with LVM I used to do a simple pvmove, leaving the VG
 intact; is there something similar (online or in maintenance) in oVirt?
 Can I just do a pvmove from the SPM host, or is that going to destroy everything?

 Thank you very much.

 Regards,
 Dael Maselli.

 --
 ___

 Dael Maselli --- INFN-LNF Computing Service -- +39.06.9403.2214
 ___

 * http://www.lnf.infn.it/~dmaselli/ *
 ___

 Democracy is two wolves and a lamb voting on what to have for lunch
 ___

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Nested KVM on AMD

2015-04-21 Thread Winfried de Heiden

  
  
Hi all,

So, the kernel might be the problem; I'm running CentOS 6 with a 2.6 kernel:

[root@bigvirt qemu]# uname -r
2.6.32-504.12.2.el6.x86_64

It seems I need to upgrade to CentOS 7 first.

Winfried

On 21-04-15 at 10:06, Artyom Lukianov wrote:


  I enabled nested virtualization via:
1) echo "options kvm-amd nested=1" > /etc/modprobe.d/kvm-amd.conf
2) modprobe -r kvm-amd
3) modprobe kvm-amd

After that I can see the svm flag on the VM's CPU, but for some reason I still receive the same error "No virtualization Hardware was detected" when I try to deploy the VM in the engine as a host.

The same scenario for Intel CPUs works fine and I can use nested virtualization:
1) echo "options kvm-intel nested=1" > /etc/modprobe.d/kvm-intel.conf
2) modprobe -r kvm-intel
3) modprobe kvm-intel

So it looks like a problem in host deploy with AMD CPUs; maybe Alon Bar-Lev can help with it.
Also check your kernel: it must be >= 3.10.

- Original Message -
From: "Winfried de Heiden" w...@dds.nl
To: users@ovirt.org
Sent: Tuesday, April 21, 2015 10:45:06 AM
Subject: [ovirt-users] Nested KVM on AMD

Hi all, 

For testing purposes I installed vdsm-hook-nestedvt: 

rpm -qi vdsm-hook-nestedvt.noarch 

Name : vdsm-hook-nestedvt Relocations: (not relocatable) 
Version : 4.16.10 Vendor: (none) 
Release : 8.gitc937927.el6 Build Date: ma 12 jan 2015 13:21:31 CET 
Install Date: do 16 apr 2015 15:37:00 CEST Build Host: fc21-vm03.phx.ovirt.org 
Group : Applications/System Source RPM: vdsm-4.16.10-8.gitc937927.el6.src.rpm 
Size : 1612 License: GPLv2+ 
Signature : RSA/SHA1, wo 21 jan 2015 15:32:39 CET, Key ID ab8c4f9dfe590cb7 
URL : http://www.ovirt.org/wiki/Vdsm 
Summary : Nested Virtualization support for VDSM 
Description : 
If the nested virtualization is enabled in your kvm module 
this hook will expose it to the guests. 

Installation looks fine on the oVirt (AMD cpu) KVM-host: 

[root@bigvirt ~]# cat /sys/module/kvm_amd/parameters/nested 
1 

and in oVirt manager "50_nestedvt" will show up in Host Hooks. However, trying to install "oVirt Node Hypervisor 3.5" it will warn "No virtualization Hardware was detected". Also, the svm flag is not shown on the guest machine. 
Am I missing something? Why is nested kvm not working? 

Winfried 


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Agenda for April 22 oVirt Weekly Sync

2015-04-21 Thread Sandro Bonazzola
Hi,
I would like to add the following topics to the oVirt Weekly Sync agenda:

- oVirt 3.5.2 GA - Go / No Go
- oVirt 3.6.0 Feature submission deadline
- oVirt Infra security hardening status
- Action items status from previous sync meeting
- Schedule / Volunteers for office hours series

Thanks,

-- 
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] storage issue's with oVirt 3.5.1 + Nexenta NFS

2015-04-21 Thread InterNetX - Juergen Gotteswinter
On 21.04.2015 at 16:19, Maikel vd Mosselaar wrote:
 Hi Juergen,

 The load on the nodes rises to well over 200 during the event. Load on the
 Nexenta stays normal and there is nothing strange in its logs.

ZFS + NFS could still be the root cause of this. Is your pool configuration
raidzX or mirror, with or without a ZIL? Is the sync parameter of the exported
ZFS dataset kept at its default (standard)?

http://christopher-technicalmusings.blogspot.de/2010/09/zfs-and-nfs-performance-with-zil.html

Since oVirt is very sensitive to storage latency (it throws VMs into an
unresponsive or unknown state), it might be worth trying "zfs set
sync=disabled pool/volume" to see if this changes things. But be aware
that this makes the NFS export vulnerable to data loss in case of
power loss etc., comparable to async NFS on Linux.

If disabling the sync setting helps, and you don't use a separate ZIL
flash drive yet, adding one would very likely help to get rid of this.
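If you want to try that, roughly (the dataset name is a placeholder for whatever
backs the NFS export; keep the data-loss caveat above in mind, and sync=standard
puts it back):

  # on the Nexenta: check the current setting, then disable sync writes for a test window
  zfs get sync pool/volume
  zfs set sync=disabled pool/volume
  # revert after the test
  zfs set sync=standard pool/volume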

Also, if you run a subscribed version of Nexenta, it might be helpful to
involve their support.

Do you see any messages about high latency in the oVirt events panel?

 
 For our storage interfaces on our nodes we use bonding in mode 4
 (802.3ad) 2x 1Gb. The nexenta has 4x 1Gb bond in mode 4 also.

This should be fine, as long as no node uses mode 0 / round-robin, which
would lead to out-of-order TCP packets. The interfaces themselves don't
show any drops or errors - on the VM hosts as well as on the switch itself?

Jumbo Frames?
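To rule that part out, something like this on each node (plus the equivalent counters
on the switch) is usually enough - bond0 is a placeholder for your storage bond:

  # bond mode, LACP partner state and link failure counts
  cat /proc/net/bonding/bond0
  # RX/TX error and drop counters
  ip -s link show bond0
  # MTU must match end-to-end if jumbo frames are in use
  ip link show bond0 | grep mtu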

 
 Kind regards,
 
 Maikel
 
 
 On 04/21/2015 02:51 PM, InterNetX - Juergen Gotteswinter wrote:
 Hi,

 How about load, latency, or strange dmesg messages on the Nexenta? Are you
 using bonded Gbit networking? If yes, which mode?

 Cheers,

 Juergen

 On 20.04.2015 at 14:25, Maikel vd Mosselaar wrote:
 Hi,

 We are running oVirt 3.5.1 with 3 nodes and a separate engine.

 All on CentOS 6.6:
 3 x nodes
 1 x engine

 1 x storage nexenta with NFS

 For multiple weeks we have been experiencing issues where our nodes cannot
 access the storage at random moments (at least that's what the nodes
 think).

 When the nodes complain about unavailable storage, the load rises to over
 200 on all three nodes, which makes all running VMs inaccessible. During
 this process the oVirt event viewer shows some I/O storage error messages;
 when this happens, random VMs get paused and are not resumed anymore (this
 happens almost every time, but not all the VMs get paused).

 During the event we tested the accessibility of the storage from the nodes
 and it looks like it is working normally; at least we can do a normal ls
 on the storage without any delay in showing the contents.

 We tried multiple things that we thought were causing this issue, but
 nothing has worked so far.
 * rebooting storage / nodes / engine.
 * disabling offsite rsync backups.
 * moved the biggest VMs with the highest load to a different platform
 outside of oVirt.
 * checked the wsize and rsize on the NFS mounts; storage and nodes are
 correct according to the NFS troubleshooting page on ovirt.org.

 The environment is running in production, so we are not free to test
 everything.

 I can provide log files if needed.

 Kind Regards,

 Maikel



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] oVirt 3.6 new feature: NUMA aware KSM support

2015-04-21 Thread David Maroshi
Greetings users and developers,

I just put up a feature page for NUMA-aware KSM support.

In summary,
===
The KSM service is optimizing shared memory pages across all NUMA nodes. The 
consequences is: a shared memory pages (controlled by KSM) to be read from many 
CPUs across NUMA nodes. Hence degrading (could totally eliminate) the NUMA 
performance gain. The optimal behavior is for KSM feature to optimize memory 
usage per NUMA nodes. This feature is implemented in KSM since RHEL 6.5. We 
want to introduce a new feature to oVirt that will allow host administrator to 
control NUMA aware KSM policy. 
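For reference, the kernel knob this feature builds on can already be poked by hand on
a RHEL 6.5+ host (a sketch only - the kernel requires KSM to be stopped and its pages
unmerged before the setting can be changed, and ksmtuned may overwrite it):

  # current behaviour: 1 = merge pages across NUMA nodes, 0 = per-node merging only
  cat /sys/kernel/mm/ksm/merge_across_nodes
  # stop KSM and unmerge everything, then switch to per-node merging
  echo 2 > /sys/kernel/mm/ksm/run
  echo 0 > /sys/kernel/mm/ksm/merge_across_nodes
  echo 1 > /sys/kernel/mm/ksm/run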

You're welcome to review the feature page:
http://www.ovirt.org/NumaAwareKsmSupport
The feature page responds to RFE
https://bugzilla.redhat.com/show_bug.cgi?id=840114

Your comments appreciated.

Thanks
Joy and happiness
Dudi

-- 
Dudi Maroshi
Software Engineer
Red Hat Enterprise Virtualization project 
http://www.redhat.com/en/technologies/virtualization
maro...@redhat.com

I am perfectly imperfect.
There is only one absolute truth, and that is: there is no one absolute truth.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Why my Over Allocation Ratio is negative??

2015-04-21 Thread Adam Litke

On 20/04/15 17:29 +0200, Arman Khalatyan wrote:

In my oVirt GUI (version 3.5.2-1.el6 prerelease) I can see the following:

Size: 20479 GB
Available: 11180 GB
Used: 9299 GB
Allocated: 290 GB
Over Allocation Ratio: -52%

What does it mean??


I think it just means that you haven't overcommitted your storage yet
(i.e. even if you fully allocated space for all of your current VMs
you'd still have roughly half of your storage free).

--
Adam Litke
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] storage issue's with oVirt 3.5.1 + Nexenta NFS

2015-04-21 Thread Maikel vd Mosselaar

Hi Fred,


This is one of the nodes from yesterday around 01:00 (20-04-15). The 
issue started around 01:00.

https://bpaste.net/raw/67542540a106

The VDSM logs are very big, so I am unable to paste a bigger part of the
logfile. What is the maximum allowed attachment size on the mailing list?

dmesg on one of the nodes (despite this message the storage is still
accessible):

https://bpaste.net/raw/67da167aa300



Kind regards,

Maikel

On 04/21/2015 02:32 PM, Fred Rolland wrote:

Hi,

Can you please attach VDSM logs ?

Thanks,

Fred

- Original Message -

From: Maikel vd Mosselaar m.vandemossel...@smoose.nl
To: users@ovirt.org
Sent: Monday, April 20, 2015 3:25:38 PM
Subject: [ovirt-users] storage issue's with oVirt 3.5.1 + Nexenta NFS

Hi,

We are running oVirt 3.5.1 with 3 nodes and a separate engine.

All on CentOS 6.6:
3 x nodes
1 x engine

1 x storage nexenta with NFS

For multiple weeks we have been experiencing issues where our nodes cannot
access the storage at random moments (at least that's what the nodes think).

When the nodes complain about unavailable storage, the load rises to over
200 on all three nodes, which makes all running VMs inaccessible. During
this process the oVirt event viewer shows some I/O storage error messages;
when this happens, random VMs get paused and are not resumed anymore (this
happens almost every time, but not all the VMs get paused).

During the event we tested the accessibility of the storage from the nodes
and it looks like it is working normally; at least we can do a normal ls
on the storage without any delay in showing the contents.

We tried multiple things that we thought were causing this issue, but
nothing has worked so far.
* rebooting storage / nodes / engine.
* disabling offsite rsync backups.
* moved the biggest VMs with the highest load to a different platform
outside of oVirt.
* checked the wsize and rsize on the NFS mounts; storage and nodes are
correct according to the NFS troubleshooting page on ovirt.org.

The environment is running in production, so we are not free to test
everything.

I can provide log files if needed.

Kind Regards,

Maikel




___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] storage issue's with oVirt 3.5.1 + Nexenta NFS

2015-04-21 Thread Maikel vd Mosselaar

Hi Juergen,

The load on the nodes rises to well over 200 during the event. Load on the
Nexenta stays normal and there is nothing strange in its logs.


For our storage interfaces on our nodes we use bonding in mode 4 
(802.3ad) 2x 1Gb. The nexenta has 4x 1Gb bond in mode 4 also.


Kind regards,

Maikel


On 04/21/2015 02:51 PM, InterNetX - Juergen Gotteswinter wrote:

Hi,

How about load, latency, or strange dmesg messages on the Nexenta? Are you
using bonded Gbit networking? If yes, which mode?

Cheers,

Juergen

On 20.04.2015 at 14:25, Maikel vd Mosselaar wrote:

Hi,

We are running oVirt 3.5.1 with 3 nodes and a separate engine.

All on CentOS 6.6:
3 x nodes
1 x engine

1 x storage nexenta with NFS

For multiple weeks we have been experiencing issues where our nodes cannot
access the storage at random moments (at least that's what the nodes think).

When the nodes complain about unavailable storage, the load rises to over
200 on all three nodes, which makes all running VMs inaccessible. During
this process the oVirt event viewer shows some I/O storage error messages;
when this happens, random VMs get paused and are not resumed anymore (this
happens almost every time, but not all the VMs get paused).

During the event we tested the accessibility of the storage from the nodes
and it looks like it is working normally; at least we can do a normal ls
on the storage without any delay in showing the contents.

We tried multiple things that we thought were causing this issue, but
nothing has worked so far.
* rebooting storage / nodes / engine.
* disabling offsite rsync backups.
* moved the biggest VMs with the highest load to a different platform
outside of oVirt.
* checked the wsize and rsize on the NFS mounts; storage and nodes are
correct according to the NFS troubleshooting page on ovirt.org.

The environment is running in production, so we are not free to test
everything.

I can provide log files if needed.

Kind Regards,

Maikel




___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] storage issue's with oVirt 3.5.1 + Nexenta NFS

2015-04-21 Thread Fred Rolland
Hi,

Can you please attach VDSM logs ?

Thanks,

Fred

- Original Message -
 From: Maikel vd Mosselaar m.vandemossel...@smoose.nl
 To: users@ovirt.org
 Sent: Monday, April 20, 2015 3:25:38 PM
 Subject: [ovirt-users] storage issue's with oVirt 3.5.1 + Nexenta NFS
 
 Hi,
 
 We are running oVirt 3.5.1 with 3 nodes and a separate engine.
 
 All on CentOS 6.6:
 3 x nodes
 1 x engine
 
 1 x storage nexenta with NFS
 
 For multiple weeks we have been experiencing issues where our nodes cannot
 access the storage at random moments (at least that's what the nodes think).

 When the nodes complain about unavailable storage, the load rises to over
 200 on all three nodes, which makes all running VMs inaccessible. During
 this process the oVirt event viewer shows some I/O storage error messages;
 when this happens, random VMs get paused and are not resumed anymore (this
 happens almost every time, but not all the VMs get paused).

 During the event we tested the accessibility of the storage from the nodes
 and it looks like it is working normally; at least we can do a normal ls
 on the storage without any delay in showing the contents.

 We tried multiple things that we thought were causing this issue, but
 nothing has worked so far.
 * rebooting storage / nodes / engine.
 * disabling offsite rsync backups.
 * moved the biggest VMs with the highest load to a different platform
 outside of oVirt.
 * checked the wsize and rsize on the NFS mounts; storage and nodes are
 correct according to the NFS troubleshooting page on ovirt.org.

 The environment is running in production, so we are not free to test
 everything.
 
 I can provide log files if needed.
 
 Kind Regards,
 
 Maikel
 
 
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Is it possible to limit the number and speed of paralel STORAGE migrations?

2015-04-21 Thread Ernest Beinrohr
oVirt uses dd and qemu-img for live storage migration. Is it possible to limit
the number of concurrent live storage moves, or to limit the bandwidth used?



I'd like to move about 30 disks to another storage domain during the night, but
each takes about 30 minutes, and if more than one runs at a time it chokes my
storage.


--
Ernest Beinrohr, AXON PRO
Ing http://www.beinrohr.sk/ing.php, RHCE 
http://www.beinrohr.sk/rhce.php, RHCVA 
http://www.beinrohr.sk/rhce.php, LPIC 
http://www.beinrohr.sk/lpic.php, VCA http://www.beinrohr.sk/vca.php,

+421-2-62410360 +421-903-482603
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Feature: Hosted engine VM management

2015-04-21 Thread Roy Golan

On 04/21/2015 06:36 PM, Roy Golan wrote:

Hi all,

Upcoming in 3.6 is an enhancement for managing the hosted engine VM.

In short, we want to:

* Allow editing the Hosted engine VM, storage domain, disks, networks etc
* Have a shared configuration for the hosted engine VM
* Have a backup for the hosted engine VM configuration

please review and comment on the wiki below:

http://www.ovirt.org/Hosted_engine_VM_management

Thanks,
Roy


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Is it possible to limit the number and speed of paralel STORAGE migrations?

2015-04-21 Thread Dan Yasny
Why not just script them to migrate one after the other? The CLI is nice
and simple, and the SDK is even nicer.
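For example, a minimal shell skeleton that does the moves strictly one at a time;
move_one_disk is a placeholder for whatever ovirt-shell or SDK call you end up using,
and the comment marks where you would poll until the disk is no longer locked:

  #!/bin/bash
  move_one_disk() {
      # placeholder: put the real ovirt-shell or SDK invocation for a single disk move here
      echo "moving disk $1 to storage domain $2"
  }

  TARGET_SD="new-storage-domain"           # placeholder name
  for disk in disk01 disk02 disk03; do     # placeholder disk aliases
      move_one_disk "$disk" "$TARGET_SD"
      # poll the engine here and wait until the disk leaves the 'locked' state
      # before starting the next move
  done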

On Tue, Apr 21, 2015 at 11:29 AM, Ernest Beinrohr 
ernest.beinr...@axonpro.sk wrote:

  oVirt uses dd and qemu-img for live storage migration. Is it possible to limit
 the number of concurrent live storage moves, or to limit the bandwidth used?


 I'd like to move about 30 disks to another storage domain during the night, but
 each takes about 30 minutes, and if more than one runs at a time it chokes my
 storage.

 --
  Ernest Beinrohr, AXON PRO
 Ing http://www.beinrohr.sk/ing.php, RHCE
 http://www.beinrohr.sk/rhce.php, RHCVA http://www.beinrohr.sk/rhce.php,
 LPIC http://www.beinrohr.sk/lpic.php, VCA
 http://www.beinrohr.sk/vca.php,
 +421-2-62410360 +421-903-482603



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Feature: Hosted engine VM management

2015-04-21 Thread Roy Golan

Hi all,

Upcoming in 3.6 is an enhancement for managing the hosted engine VM.

In short, we want to:

* Allow editing the Hosted engine VM, storage domain, disks, networks etc
* Have a shared configuration for the hosted engine VM
* Have a backup for the hosted engine VM configuration

please review and comment on the wiki below:

http://www.ovirt.org/Hosted_engine_VM_management

Thanks,
Roy
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] storage issue's with oVirt 3.5.1 + Nexenta NFS

2015-04-21 Thread Maikel vd Mosselaar

Hi,

We are running oVirt 3.5.1 with 3 nodes and a separate engine.

All on CentOS 6.6:
3 x nodes
1 x engine

1 x storage nexenta with NFS

For multiple weeks we have been experiencing issues where our nodes cannot
access the storage at random moments (at least that's what the nodes think).

When the nodes complain about unavailable storage, the load rises to over
200 on all three nodes, which makes all running VMs inaccessible. During
this process the oVirt event viewer shows some I/O storage error messages;
when this happens, random VMs get paused and are not resumed anymore (this
happens almost every time, but not all the VMs get paused).

During the event we tested the accessibility of the storage from the nodes
and it looks like it is working normally; at least we can do a normal ls
on the storage without any delay in showing the contents.

We tried multiple things that we thought were causing this issue, but
nothing has worked so far.
* rebooting storage / nodes / engine.
* disabling offsite rsync backups.
* moved the biggest VMs with the highest load to a different platform
outside of oVirt.
* checked the wsize and rsize on the NFS mounts; storage and nodes are
correct according to the NFS troubleshooting page on ovirt.org.

The environment is running in production, so we are not free to test
everything.


I can provide log files if needed.

Kind Regards,

Maikel


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users