Re: [ovirt-users] VM failover with ovirt3.5

2015-01-08 Thread Yue, Cong
The patch works for my case1. Thanks!

Thanks,
Cong


-Original Message-
From: Jiri Moskovcak [mailto:jmosk...@redhat.com]
Sent: Tuesday, January 06, 2015 1:44 AM
To: Artyom Lukianov; Yue, Cong
Cc: cong yue; stira...@redhat.com; users@ovirt.org; Yedidyah Bar David; Sandro 
Bonazzola
Subject: Re: [ovirt-users] VM failover with ovirt3.5

On 01/06/2015 09:34 AM, Artyom Lukianov wrote:
 Case 1:
 In vdsm.log I can see this one:
 Thread-674407::ERROR::2015-01-05 12:09:43,264::migration::259::vm.Vm::(run) 
 vmId=`0d3adb5c-0960-483c-9d73-5e256a519f2f`::Failed to migrate
 Traceback (most recent call last):
   File "/usr/share/vdsm/virt/migration.py", line 245, in run
     self._startUnderlyingMigration(time.time())
   File "/usr/share/vdsm/virt/migration.py", line 324, in _startUnderlyingMigration
     None, maxBandwidth)
   File "/usr/share/vdsm/virt/vm.py", line 670, in f
     ret = attr(*args, **kwargs)
   File "/usr/lib/python2.7/site-packages/vdsm/libvirtconnection.py", line 111, in wrapper
     ret = f(*args, **kwargs)
   File "/usr/lib64/python2.7/site-packages/libvirt.py", line 1264, in migrateToURI2
     if ret == -1: raise libvirtError('virDomainMigrateToURI2() failed', dom=self)
 libvirtError: operation aborted: migration job: canceled by client
 I see that this kind of failure can happen because the migration time exceeded the 
 configured maximum time for migrations, but anyway we need help from the devs, so I 
 added some to CC.


- agent did everything correctly and as Artyom says, the migration is
aborted by vdsm:

The migration took 260 seconds which is exceeding the configured maximum
time for migrations of 256 seconds. The migration will be aborted.

- there is a configuration option in vdsm conf you can tweak to increase
the timeout:

<snip>
    'migration_max_time_per_gib_mem', '64',
        'The maximum time in seconds per GiB memory a migration may take '
        'before the migration will be aborted by the source host. '
        'Setting this value to 0 will disable this feature.'
</snip>

So as you can see, in your case it's 4 GiB * 64 seconds = 256 seconds.
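Jirka's arithmetic can be sketched as a tiny calculation (an illustration only; the real abort check lives in vdsm's migration monitor, and this helper name is made up):

```python
def migration_abort_timeout(vm_mem_gib, max_time_per_gib_mem=64):
    """Seconds a migration may run before the source host aborts it.

    Mirrors the 'migration_max_time_per_gib_mem' option quoted above;
    per its description, a value of 0 disables the feature.
    """
    if max_time_per_gib_mem == 0:
        return None  # timeout disabled
    return vm_mem_gib * max_time_per_gib_mem

# The HE VM in this thread has 4 GiB of memory, so with the default of 64
# the limit is 256 s -- which the 260 s migration exceeded:
print(migration_abort_timeout(4))        # 256
print(migration_abort_timeout(4, 128))   # 512 after doubling the option
```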


--Jirka

 Case 2:
 An HA vm migrates only in case of some failure on host3, so if your host_3 is 
 OK the vm will continue to run on it.


 - Original Message -
 From: Cong Yue cong_...@alliedtelesis.com
 To: Artyom Lukianov aluki...@redhat.com
 Cc: cong yue yuecong1...@gmail.com, stira...@redhat.com, users@ovirt.org
 Sent: Monday, January 5, 2015 7:38:08 PM
 Subject: RE: [ovirt-users] VM failover with ovirt3.5

 I collected the agent.log and vdsm.log in 2 cases.

 Case1 HE VM failover trial
 What I did:
 1, make all hosts be engine up
 2, set host1 to local maintenance mode. On host1, there is the HE VM.
 3, Then the HE VM tries to migrate, but it finally fails. This can be seen 
 in agent.log_hosted_engine_1
 As the log is very large, I uploaded it to Google Drive. The link is
 https://drive.google.com/drive/#folders/0B9Pi5vvimKTdNU82bWVpZDhDQmM/0B9Pi5vvimKTdRGJhUXUwejNGRHc
 The logs are for the 3 hosts in my environment.

 Case2 non-HE VM failover trial
 1, make all hosts be engine up
 2, set host2 to local maintenance mode. On host3, there is one vm with HA 
 enabled. Also, for the cluster, "Enable HA reservation" is checked and "Resilience 
 policy" is set to "Migrating Virtual Machines"
 3, But the vm on top of host3 does not migrate at all.
 The logs are uploaded to Google Drive at
 https://drive.google.com/drive/#folders/0B9Pi5vvimKTdNU82bWVpZDhDQmM/0B9Pi5vvimKTdd3MzTXZBbmxpNmc


 Thanks,
 Cong




 -Original Message-
 From: Artyom Lukianov [mailto:aluki...@redhat.com]
 Sent: Sunday, January 04, 2015 3:22 AM
 To: Yue, Cong
 Cc: cong yue; stira...@redhat.com; users@ovirt.org
 Subject: Re: [ovirt-users] VM failover with ovirt3.5

 Can you provide vdsm logs:
 1) for HE vm case
 2) for not HE vm case
 Thanks

 - Original Message -
 From: Cong Yue cong_...@alliedtelesis.com
 To: Artyom Lukianov aluki...@redhat.com
 Cc: cong yue yuecong1...@gmail.com, stira...@redhat.com, users@ovirt.org
 Sent: Thursday, January 1, 2015 2:32:18 AM
 Subject: Re: [ovirt-users] VM failover with ovirt3.5

 Thanks for the advice. I applied the patch for clientIF.py as
 - port = config.getint('addresses', 'management_port')
 + port = config.get('addresses', 'management_port')

 Now there is no fatal error in beam.log, and migration starts to happen 
 when I set the host where the HE VM is to local maintenance mode. But it 
 finally fails with the following log. Also, the HE VM cannot be live 
 migrated in my environment.
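As an aside, the difference between the two calls in that patch can be reproduced with a plain ConfigParser (a sketch only; vdsm wraps its own config object, and the section/option names are taken from the quoted diff):

```python
from configparser import ConfigParser  # vdsm itself runs on Python 2's ConfigParser

cfg = ConfigParser()
cfg.read_string("[addresses]\nmanagement_port = 54321\n")

# getint() coerces the raw value to an int; get() returns the string as-is.
as_int = cfg.getint('addresses', 'management_port')
as_str = cfg.get('addresses', 'management_port')

print(type(as_int).__name__, as_int)   # int 54321
print(type(as_str).__name__, as_str)   # str 54321
```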

 MainThread::INFO::2014-12-31
 19:08:06,197::states::759::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(consume)
 Continuing to monitor migration
 MainThread::INFO::2014-12-31
 19:08:06,430::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
 Current state EngineMigratingAway (score: 2000)
 MainThread::INFO::2014-12-31
 

Re: [ovirt-users] Using gluster on other hosts?

2015-01-08 Thread Sahina Bose


On 01/08/2015 09:41 PM, Will K wrote:

That's what I did, but it didn't work for me.

1. use the 192.168.x interface to setup gluster. I used hostname in 
/etc/hosts.

2. setup oVirt using the switched network hostnames, let's say 10.10.10.x
3. oVirt and all that comes up fine.
4. When try to create a storage domain, it only shows the 10.10.10.x 
hostnames available.



Tried to add a brick and I would get something like
Host gfs2 is not in 'Peer in Cluster' state  (while node2 is the 
hostname and gfs2 is the 192.168 name)



Which version of glusterfs do you have?

Kaushal, will this work in glusterfs3.6 and above?




Running `gluster peer probe gfs2` or `gluster peer probe 
192.168.x.x` didn't work:

peer probe: failed: Probe returned with unknown errno 107

Running the probe again with the switched-network hostname or IP worked fine. 
Maybe it is not possible with the current GlusterFS version?

http://www.gluster.org/community/documentation/index.php/Features/SplitNetwork


Will


On Thursday, January 8, 2015 3:43 AM, Sahina Bose sab...@redhat.com 
wrote:




On 01/08/2015 12:07 AM, Will K wrote:

Hi

I would like to see if anyone has a good suggestion.

I have two physical hosts with 1Gb connections to switched networks. 
The hosts also have 10Gb interfaces connected directly with Twinax 
cable, like a copper crossover cable.  The idea was to use the 10Gb link as a 
private network for GlusterFS until the day we want to grow out of 
this 2-node setup.


GlusterFS was set up with the 10Gb ports using non-routable IPs and 
hostnames in /etc/hosts, for example, gfs1 192.168.1.1 and gfs2 
192.168.1.2.  I'm following the example from 
http://community.redhat.com/blog/2014/10/up-and-running-with-ovirt-3-5/ 
. Currently I'm only using a Gluster volume on node1, but a `gluster 
peer probe` test worked fine with node2 through the 10Gb connection.


oVirt engine was set up on physical host1 with hosted engine.  Now, 
when I try to create a new Gluster storage domain, I can only see the 
host node1 available.


Is there any way I can set up oVirt on node1 and node2, while using 
gfs1 and gfs2 for GlusterFS? Or some way to take advantage of the 
10Gb connection?


If I understand right, you have 2 interfaces on each of your hosts, 
and you want oVirt to communicate via one interface and glusterfs to use 
the other?


While adding the hosts to oVirt you could use ip1 and then, while 
creating the volume, add the brick using the other IP address.

For instance, gluster volume create volname 192.168.1.2:/bricks/b1

Currently, there's no way to specify the IP address to use while 
adding a brick from oVirt UI (we're working on this for 3.6), but you 
could do this from the gluster CLI commands. This would then be 
detected in the oVirt UI.
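To make the split concrete, the dual-name scheme discussed in this thread could look like this in /etc/hosts on each node (the 10.10.10.x addresses for node1/node2 are assumed for illustration; only the gfs1/gfs2 entries come from the thread):

```
# /etc/hosts -- illustrative sketch using the names from this thread
# oVirt is added with the switched-network names; bricks use the direct link
10.10.10.1    node1
10.10.10.2    node2
192.168.1.1   gfs1
192.168.1.2   gfs2
```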






Thanks
W



___
Users mailing list
Users@ovirt.org  mailto:Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users






___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Setting Base DN for LDAP authentication

2015-01-08 Thread jdeloro
Hello,

I'm trying to configure LDAP authentication with oVirt 3.5 and 
ovirt-engine-extension-aaa-ldap. I chose the simple bind transport example. But 
the given examples are missing the explicit specification of a base dn. Could 
you please advise me how this can be done?

My current configuration:

[jd@om01 ovirt-engine]$ cat aaa/company-ldap.properties 
include = openldap.properties

vars.server = ldap.company.de

vars.user = cn=system,dc=company,dc=de
vars.password = password

pool.default.serverset.single.server = ${global:vars.server}
pool.default.auth.simple.bindDN = ${global:vars.user}
pool.default.auth.simple.password = ${global:vars.password}

[jd@om01 ovirt-engine]$ cat company-ldap-authn.properties 
ovirt.engine.extension.name = company-ldap-authn
ovirt.engine.extension.bindings.method = jbossmodule
ovirt.engine.extension.binding.jbossmodule.module = 
org.ovirt.engine-extensions.aaa.ldap
ovirt.engine.extension.binding.jbossmodule.class = 
org.ovirt.engineextensions.aaa.ldap.AuthnExtension
ovirt.engine.extension.provides = org.ovirt.engine.api.extensions.aaa.Authn
ovirt.engine.aaa.authn.profile.name = company-ldap
ovirt.engine.aaa.authn.authz.plugin = company-ldap-authz
config.profile.file.1 = /etc/ovirt-engine/aaa/company-ldap.properties

[jd@om01 ovirt-engine]$ cat company-ldap-authz.properties 
ovirt.engine.extension.name = company-ldap-authz
ovirt.engine.extension.bindings.method = jbossmodule
ovirt.engine.extension.binding.jbossmodule.module = 
org.ovirt.engine-extensions.aaa.ldap
ovirt.engine.extension.binding.jbossmodule.class = 
org.ovirt.engineextensions.aaa.ldap.AuthzExtension
ovirt.engine.extension.provides = org.ovirt.engine.api.extensions.aaa.Authz
config.profile.file.1 = /etc/ovirt-engine/aaa/company-ldap.properties

[jd@om01 ovirt-engine]$ ldapsearch -H ldap://ldap.company.de -D 
cn=system,dc=company,dc=de -W -b dc=company,dc=de cn=jdeloro
# extended LDIF
#
# LDAPv3
# base dc=company,dc=de with scope subtree
# filter: cn=jdeloro
# requesting: ALL
#

# jdeloro, users, admins, company.de
dn: cn=jdeloro,ou=users,ou=admins,dc=company,dc=de
[... and many more lines ...]

I could not use namingContexts from the RootDSE because this results in a base DN of 
dc=de instead of dc=company,dc=de.
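(Not authoritative, but if I remember the aaa-ldap extension's README correctly, a base DN can be forced per search sequence with a `search.<name>.search-request.baseDN` entry in the profile; please verify the exact key against the README shipped in the ovirt-engine-extension-aaa-ldap package before relying on it:)

```
# appended to aaa/company-ldap.properties -- key name recalled from the
# extension's README, verify locally before use
search.default.search-request.baseDN = dc=company,dc=de
```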

Kind regards

Jannick
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] VM Affinity groups ovirt 3.4.4

2015-01-08 Thread Artyom Lukianov
At present we have only an affinity filter and an affinity weight module, so you 
can see affinity groups at work only when you start or manually migrate vms. If 
you have a hard positive affinity group that includes two vms, and the first vm 
was started on the first host, the second vm must also start on the first host. I 
don't know if the devs will also add a balancing module for affinity groups, but 
you can open a PRD bug for this one.
Thanks
Thanks

- Original Message -
From: Gary Lloyd g.ll...@keele.ac.uk
To: users@ovirt.org
Sent: Thursday, January 8, 2015 1:12:01 PM
Subject: [ovirt-users] VM Affinity groups ovirt 3.4.4

Hi we have recently updated our production environment to ovirt 3.4.4 . 

I have created a positive enforcing VM Affinity Group with 2 vms in one of our 
clusters, but they don't seem to be moving (currently on different hosts). Is 
there something else I need to activate? 

Thanks 

Gary Lloyd 
-- 
IT Services 
Keele University 
--- 

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Failing to connect to console via vnc (was: HostedEngine Deployment Woes)

2015-01-08 Thread Yedidyah Bar David
- Original Message -
 From: Mikola Rose mr...@power-soft.com
 To: Yedidyah Bar David d...@redhat.com
 Cc: users@ovirt.org
 Sent: Thursday, January 8, 2015 2:03:08 AM
 Subject: Re: [ovirt-users] HostedEngine Deployment Woes
 
 
 I am also Seeing this in the messages.log
 
 Jan  7 13:55:39 pws-hv15 vdsm vds ERROR unexpected error#012Traceback (most
 recent call last):#012  File /usr/share/vdsm/BindingXMLRPC.py, line 1070,
 in wrapper#012res = f(*args, **kwargs)#012  File
 /usr/share/vdsm/BindingXMLRPC.py, line 314, in vmSetTicket#012return
 vm.setTicket(password, ttl, existingConnAction, params)#012  File
 /usr/share/vdsm/API.py, line 610, in setTicket#012return
 v.setTicket(password, ttl, existingConnAction, params)#012  File
 /usr/share/vdsm/vm.py, line 4560, in setTicket#012graphics =
 _domParseStr(self._dom.XMLDesc(0)).childNodes[0]. \#012AttributeError:
 'NoneType' object has no attribute 'XMLDesc'
 Jan  7 13:55:39 pws-hv15 sanlock[2152]: 2015-01-07 13:55:39-0800 6412 [2152]:
 cmd 9 target pid 7899 not found
 Jan  7 13:55:40 pws-hv15 vdsm vm.Vm WARNING
 vmId=`edc69fdd-6188-4dce-b1dc-d5ed03f914e5`::_readPauseCode unsupported by
 libvirt vm

When? When you try to connect with virsh?

Note that you need to create a temporary password with:
hosted-engine --add-console-password

 
 
 On Jan 7, 2015, at 1:58 PM, Mikola Rose
 mr...@power-soft.commailto:mr...@power-soft.com wrote:
 
 Also tried;
 [root@pws-hv15 ~]# virsh -c qemu+tls://pws-hv15.power-soft.net/system console
 HostedEngine
 Connected to domain HostedEngine
 Escape character is ^]
 
 
 no prompts or interaction after the Escape character line

iirc 'console' connects you to the serial console, not vnc, no?

 
 
 On Jan 7, 2015, at 1:42 PM, Mikola Rose
 mr...@power-soft.commailto:mr...@power-soft.com wrote:
 
 I tried realvnc

How exactly?

You need to connect to the host where it runs.

BTW, do you have there other VMs? If you migrated the engine vm there
after you already had other VMs, it will not get console '0' but some
other.

 
 The log file doesn’t really tell me what's going on, from what I can see
 
 2015-01-07 21:35:15.381+: starting up
 LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/bin:/usr/sbin:/sbin:/bin
 QEMU_AUDIO_DRV=none /usr/libexec/qemu-kvm -name HostedEngine -S -M rhel6.5.0
 -cpu Westmere -enable-kvm -m 4096 -realtime mlock=off -smp
 2,sockets=2,cores=1,threads=1 -uuid edc69fdd-6188-4dce-b1dc-d5ed03f914e5
 -smbios type=1,manufacturer=Red Hat,product=RHEV
 Hypervisor,version=6Server-6.6.0.2.el6,serial=4C4C4544-0036-5210-8036-B1C04F4A3032_d4:ae:52:d1:91:7a,uuid=edc69fdd-6188-4dce-b1dc-d5ed03f914e5
 -nodefconfig -nodefaults -chardev
 socket,id=charmonitor,path=/var/lib/libvirt/qemu/HostedEngine.monitor,server,nowait
 -mon chardev=charmonitor,id=monitor,mode=control -rtc
 base=2015-01-07T21:35:15,driftfix=slew -no-reboot -no-shutdown -device
 piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device
 virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x4 -device
 virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x5 -drive
 file=/rhel-server-6.6-x86_64-dvd.iso,if=none,media=cdrom,id=drive-ide0-1-0,readonly=on,format=raw,serial=
 -device
 ide-drive,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0,bootindex=1
 -drive
 file=/var/run/vdsm/storage/9b103e2f-d35b-4b56-a380-a374c21219d1/bdfd32e0-8116-4d29-baa2-6f6a98d981c6/9d2b9c8d-9005-4cb9-836b-a3848a24b4c4,if=none,id=drive-virtio-disk0,format=raw,serial=bdfd32e0-8116-4d29-baa2-6f6a98d981c6,cache=none,werror=stop,rerror=stop,aio=threads
 -device
 virtio-blk-pci,scsi=off,bus=pci.0,addr=0x6,drive=drive-virtio-disk0,id=virtio-disk0
 -netdev tap,fd=26,id=hostnet0,vhost=on,vhostfd=27 -device
 virtio-net-pci,netdev=hostnet0,id=net0,mac=00:16:3e:64:dd:63,bus=pci.0,addr=0x3
 -chardev
 socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/edc69fdd-6188-4dce-b1dc-d5ed03f914e5.com.redhat.rhevm.vdsm,server,nowait
 -device
 virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.rhevm.vdsm
 -chardev
 socket,id=charchannel1,path=/var/lib/libvirt/qemu/channels/edc69fdd-6188-4dce-b1dc-d5ed03f914e5.org.qemu.guest_agent.0,server,nowait
 -device
 virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=channel1,name=org.qemu.guest_agent.0
 -chardev pty,id=charconsole0 -device
 virtconsole,chardev=charconsole0,id=console0 -vnc 0:0,password -vga cirrus
 -msg timestamp=on
 char device redirected to /dev/pts/1

Sorry, no idea.

Please try getting some more info from the vnc client.
I personally use tightvnc on linux and verified that it can connect to
a hosted-engine console.

I am also changing the subject to attract others...

 
 On Jan 7, 2015, at 12:18 AM, Yedidyah Bar David
 d...@redhat.commailto:d...@redhat.com wrote:
 
 - Original Message -
 From: Mikola Rose mr...@power-soft.commailto:mr...@power-soft.com
 To: Yedidyah Bar David d...@redhat.commailto:d...@redhat.com
 Cc: users@ovirt.orgmailto:users@ovirt.org
 Sent: Wednesday, January 7, 

Re: [ovirt-users] Using gluster on other hosts?

2015-01-08 Thread Sahina Bose


On 01/08/2015 12:07 AM, Will K wrote:

Hi

I would like to see if anyone has good suggestion.

I have two physical hosts with 1Gb connections to switched networks. 
The hosts also have 10Gb interfaces connected directly with Twinax 
cable, like a copper crossover cable.  The idea was to use the 10Gb link as a 
private network for GlusterFS until the day we want to grow out of 
this 2-node setup.


GlusterFS was set up with the 10Gb ports using non-routable IPs and 
hostnames in /etc/hosts, for example, gfs1 192.168.1.1 and gfs2 
192.168.1.2.  I'm following the example from 
http://community.redhat.com/blog/2014/10/up-and-running-with-ovirt-3-5/ 
. Currently I'm only using a Gluster volume on node1, but a `gluster 
peer probe` test worked fine with node2 through the 10Gb connection.


oVirt engine was set up on physical host1 with hosted engine.  Now, 
when I try to create a new Gluster storage domain, I can only see the 
host node1 available.


Is there any way I can set up oVirt on node1 and node2, while using 
gfs1 and gfs2 for GlusterFS? Or some way to take advantage of the 
10Gb connection?


If I understand right, you have 2 interfaces on each of your hosts, and 
you want oVirt to communicate via one interface and glusterfs to use the other?


While adding the hosts to oVirt you could use ip1 and then, while 
creating the volume, add the brick using the other IP address.

For instance, gluster volume create volname 192.168.1.2:/bricks/b1

Currently, there's no way to specify the IP address to use while adding 
a brick from oVirt UI (we're working on this for 3.6), but you could do 
this from the gluster CLI commands. This would then be detected in the 
oVirt UI.





Thanks
W


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Failed to create volume in OVirt with gluster

2015-01-08 Thread Punit Dambiwal
Hi Martin,

The steps are below :-

1. Set up the ovirt engine on one server...
2. Installed centos 7 on 4 host node servers...
3. I am using host nodes (compute+storage)... now I have added all 4 nodes
to the engine...
4. Created the gluster volume from the GUI...

Network :-
eth0 :- public network (1G)
eth1+eth2=bond0= VM public network (1G)
eth3+eth4=bond1=ovirtmgmt+storage (10G private network)

every hostnode has 24 bricks=24*4(distributed replicated)

Thanks,
Punit


On Thu, Jan 8, 2015 at 3:20 PM, Martin Pavlík mpav...@redhat.com wrote:

 Hi Punit,

 can you please provide also errors from /var/log/vdsm/vdsm.log and
 /var/log/vdsm/vdsmd.log

 it would be really helpful if you provided exact steps how to reproduce
 the problem.

 regards

 Martin Pavlik - rhev QE
  On 08 Jan 2015, at 03:06, Punit Dambiwal hypu...@gmail.com wrote:
 
  Hi,
 
  I try to add gluster volume but it failed...
 
  Ovirt :- 3.5
  VDSM :- vdsm-4.16.7-1.gitdb83943.el7
  KVM :- 1.5.3 - 60.el7_0.2
  libvirt-1.1.1-29.el7_0.4
  Glusterfs :- glusterfs-3.5.3-1.el7
 
  Engine Logs :-
 
  2015-01-08 09:57:52,569 INFO
 [org.ovirt.engine.core.bll.lock.InMemoryLockManager]
 (DefaultQuartzScheduler_Worker-16) Failed to acquire lock and wait lock
 EngineLock [exclusiveLocks= key: 0001-0001-0001-0001-0300
 value: GLUSTER
  , sharedLocks= ]
  2015-01-08 09:57:52,609 INFO
 [org.ovirt.engine.core.bll.lock.InMemoryLockManager]
 (DefaultQuartzScheduler_Worker-16) Failed to acquire lock and wait lock
 EngineLock [exclusiveLocks= key: 0001-0001-0001-0001-0300
 value: GLUSTER
  , sharedLocks= ]
  2015-01-08 09:57:55,582 INFO
 [org.ovirt.engine.core.bll.lock.InMemoryLockManager]
 (DefaultQuartzScheduler_Worker-16) Failed to acquire lock and wait lock
 EngineLock [exclusiveLocks= key: 0001-0001-0001-0001-0300
 value: GLUSTER
  , sharedLocks= ]
  2015-01-08 09:57:55,591 INFO
 [org.ovirt.engine.core.bll.lock.InMemoryLockManager]
 (DefaultQuartzScheduler_Worker-16) Failed to acquire lock and wait lock
 EngineLock [exclusiveLocks= key: 0001-0001-0001-0001-0300
 value: GLUSTER
  , sharedLocks= ]
  2015-01-08 09:57:55,596 INFO
 [org.ovirt.engine.core.bll.lock.InMemoryLockManager]
 (DefaultQuartzScheduler_Worker-16) Failed to acquire lock and wait lock
 EngineLock [exclusiveLocks= key: 0001-0001-0001-0001-0300
 value: GLUSTER
  , sharedLocks= ]
  2015-01-08 09:57:55,633 INFO
 [org.ovirt.engine.core.bll.lock.InMemoryLockManager]
 (DefaultQuartzScheduler_Worker-16) Failed to acquire lock and wait lock
 EngineLock [exclusiveLocks= key: 0001-0001-0001-0001-0300
 value: GLUSTER
  , sharedLocks= ]
  ^C
 
 


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Failed to create volume in OVirt with gluster

2015-01-08 Thread Kanagaraj

Do you see any errors in the UI?

Also please provide the engine.log and vdsm.log from when the failure occurred.

Thanks,
Kanagaraj

On 01/08/2015 02:25 PM, Punit Dambiwal wrote:

Hi Martin,

The steps are below :-

1. Set up the ovirt engine on one server...
2. Installed centos 7 on 4 host node servers...
3. I am using host nodes (compute+storage)... now I have added all 4 
nodes to the engine...

4. Created the gluster volume from the GUI...

Network :-
eth0 :- public network (1G)
eth1+eth2=bond0= VM public network (1G)
eth3+eth4=bond1=ovirtmgmt+storage (10G private network)

every hostnode has 24 bricks=24*4(distributed replicated)

Thanks,
Punit


On Thu, Jan 8, 2015 at 3:20 PM, Martin Pavlík mpav...@redhat.com 
mailto:mpav...@redhat.com wrote:


Hi Punit,

can you please provide also errors from /var/log/vdsm/vdsm.log and
/var/log/vdsm/vdsmd.log

it would be really helpful if you provided exact steps how to
reproduce the problem.

regards

Martin Pavlik - rhev QE
 On 08 Jan 2015, at 03:06, Punit Dambiwal hypu...@gmail.com
mailto:hypu...@gmail.com wrote:

 Hi,

 I try to add gluster volume but it failed...

 Ovirt :- 3.5
 VDSM :- vdsm-4.16.7-1.gitdb83943.el7
 KVM :- 1.5.3 - 60.el7_0.2
 libvirt-1.1.1-29.el7_0.4
 Glusterfs :- glusterfs-3.5.3-1.el7

 Engine Logs :-

 2015-01-08 09:57:52,569 INFO
[org.ovirt.engine.core.bll.lock.InMemoryLockManager]
(DefaultQuartzScheduler_Worker-16) Failed to acquire lock and wait
lock EngineLock [exclusiveLocks= key:
0001-0001-0001-0001-0300 value: GLUSTER
 , sharedLocks= ]
 2015-01-08 09:57:52,609 INFO
[org.ovirt.engine.core.bll.lock.InMemoryLockManager]
(DefaultQuartzScheduler_Worker-16) Failed to acquire lock and wait
lock EngineLock [exclusiveLocks= key:
0001-0001-0001-0001-0300 value: GLUSTER
 , sharedLocks= ]
 2015-01-08 09:57:55,582 INFO
[org.ovirt.engine.core.bll.lock.InMemoryLockManager]
(DefaultQuartzScheduler_Worker-16) Failed to acquire lock and wait
lock EngineLock [exclusiveLocks= key:
0001-0001-0001-0001-0300 value: GLUSTER
 , sharedLocks= ]
 2015-01-08 09:57:55,591 INFO
[org.ovirt.engine.core.bll.lock.InMemoryLockManager]
(DefaultQuartzScheduler_Worker-16) Failed to acquire lock and wait
lock EngineLock [exclusiveLocks= key:
0001-0001-0001-0001-0300 value: GLUSTER
 , sharedLocks= ]
 2015-01-08 09:57:55,596 INFO
[org.ovirt.engine.core.bll.lock.InMemoryLockManager]
(DefaultQuartzScheduler_Worker-16) Failed to acquire lock and wait
lock EngineLock [exclusiveLocks= key:
0001-0001-0001-0001-0300 value: GLUSTER
 , sharedLocks= ]
 2015-01-08 09:57:55,633 INFO
[org.ovirt.engine.core.bll.lock.InMemoryLockManager]
(DefaultQuartzScheduler_Worker-16) Failed to acquire lock and wait
lock EngineLock [exclusiveLocks= key:
0001-0001-0001-0001-0300 value: GLUSTER
 , sharedLocks= ]
 ^C






___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] VM Affinity groups ovirt 3.4.4

2015-01-08 Thread Gary Lloyd
Hi we have recently updated our production environment to ovirt 3.4.4 .

I have created a positive enforcing VM Affinity Group with 2 vms in one of
our clusters, but they don't seem to be moving (currently on different
hosts). Is there something else I need to activate ?

Thanks

*Gary Lloyd*
--
IT Services
Keele University
---
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] VM failover with ovirt3.5

2015-01-08 Thread Artyom Lukianov
So, the behavior for a non-HE HA vm is:
1) If the vm crashes (for some reason), it is restarted automatically on another host 
in the same cluster.
2) If something happens to the host where the HA vm runs (network problem, power 
outage), the vm drops to an unknown state, and if you want the engine to start this 
vm on another host, you need to click "Confirm Host has been Rebooted" in the 
problematic host's menu. When you confirm this, the engine will start the vm on 
another host and also release the SPM role from the problematic host (if it is the SPM).
 

- Original Message -
From: Cong Yue cong_...@alliedtelesis.com
To: Artyom Lukianov aluki...@redhat.com
Cc: cong yue yuecong1...@gmail.com, stira...@redhat.com, users@ovirt.org, 
Jiri Moskovcak jmosk...@redhat.com, Yedidyah Bar David d...@redhat.com, 
Sandro Bonazzola sbona...@redhat.com
Sent: Wednesday, January 7, 2015 3:00:26 AM
Subject: RE: [ovirt-users] VM failover with ovirt3.5

For case 1, I got the advice that I need to change the 
'migration_max_time_per_gib_mem' value inside vdsm.conf. I am doing it, and 
when I get the result, I will share it with you. Thanks.
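For reference, a minimal sketch of that tweak in /etc/vdsm/vdsm.conf (the [vars] section name is my assumption from vdsm's default config layout; vdsmd needs a restart to pick it up):

```
[vars]
# default is 64 seconds per GiB of guest memory; 0 disables the abort timeout
migration_max_time_per_gib_mem = 128
```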

For case 2, do you mean I tested normal VM failover the wrong way? Even though 
I shut down host 3 forcibly, the vm on top of it does not fail over.
What is your advice for this?

Thanks,
Cong



-Original Message-
From: Artyom Lukianov [mailto:aluki...@redhat.com]
Sent: Tuesday, January 06, 2015 12:34 AM
To: Yue, Cong
Cc: cong yue; stira...@redhat.com; users@ovirt.org; Jiri Moskovcak; Yedidyah 
Bar David; Sandro Bonazzola
Subject: Re: [ovirt-users] VM failover with ovirt3.5

Case 1:
In vdsm.log I can see this one:
Thread-674407::ERROR::2015-01-05 12:09:43,264::migration::259::vm.Vm::(run) 
vmId=`0d3adb5c-0960-483c-9d73-5e256a519f2f`::Failed to migrate
Traceback (most recent call last):
  File "/usr/share/vdsm/virt/migration.py", line 245, in run
    self._startUnderlyingMigration(time.time())
  File "/usr/share/vdsm/virt/migration.py", line 324, in _startUnderlyingMigration
    None, maxBandwidth)
  File "/usr/share/vdsm/virt/vm.py", line 670, in f
    ret = attr(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/libvirtconnection.py", line 111, in wrapper
    ret = f(*args, **kwargs)
  File "/usr/lib64/python2.7/site-packages/libvirt.py", line 1264, in migrateToURI2
    if ret == -1: raise libvirtError('virDomainMigrateToURI2() failed', dom=self)
libvirtError: operation aborted: migration job: canceled by client
I see that this kind of failure can happen because the migration time exceeded the 
configured maximum time for migrations, but anyway we need help from the devs, so I 
added some to CC.

Case 2:
An HA vm migrates only in case of some failure on host3, so if your host_3 is OK 
the vm will continue to run on it.


- Original Message -
From: Cong Yue cong_...@alliedtelesis.com
To: Artyom Lukianov aluki...@redhat.com
Cc: cong yue yuecong1...@gmail.com, stira...@redhat.com, users@ovirt.org
Sent: Monday, January 5, 2015 7:38:08 PM
Subject: RE: [ovirt-users] VM failover with ovirt3.5

I collected the agent.log and vdsm.log in 2 cases.

Case1 HE VM failover trial
What I did:
1, make all hosts be engine up
2, set host1 to local maintenance mode. On host1, there is the HE VM.
3, Then the HE VM tries to migrate, but it finally fails. This can be seen 
in agent.log_hosted_engine_1
As the log is very large, I uploaded it to Google Drive. The link is
https://drive.google.com/drive/#folders/0B9Pi5vvimKTdNU82bWVpZDhDQmM/0B9Pi5vvimKTdRGJhUXUwejNGRHc
The logs are for the 3 hosts in my environment.

Case2 non-HE VM failover trial
1, make all hosts be engine up
2, set host2 to local maintenance mode. On host3, there is one vm with HA 
enabled. Also, for the cluster, "Enable HA reservation" is checked and "Resilience 
policy" is set to "Migrating Virtual Machines"
3, But the vm on top of host3 does not migrate at all.
The logs are uploaded to Google Drive at
https://drive.google.com/drive/#folders/0B9Pi5vvimKTdNU82bWVpZDhDQmM/0B9Pi5vvimKTdd3MzTXZBbmxpNmc


Thanks,
Cong




-Original Message-
From: Artyom Lukianov [mailto:aluki...@redhat.com]
Sent: Sunday, January 04, 2015 3:22 AM
To: Yue, Cong
Cc: cong yue; stira...@redhat.com; users@ovirt.org
Subject: Re: [ovirt-users] VM failover with ovirt3.5

Can you provide vdsm logs:
1) for HE vm case
2) for not HE vm case
Thanks

- Original Message -
From: Cong Yue cong_...@alliedtelesis.com
To: Artyom Lukianov aluki...@redhat.com
Cc: cong yue yuecong1...@gmail.com, stira...@redhat.com, users@ovirt.org
Sent: Thursday, January 1, 2015 2:32:18 AM
Subject: Re: [ovirt-users] VM failover with ovirt3.5

Thanks for the advice. I applied the patch for clientIF.py as
- port = config.getint('addresses', 'management_port')
+ port = config.get('addresses', 'management_port')

Now there is no fatal error in beam.log, also migration can start to happen 
when I set the host where HE VM is to be local maintenance mode. But it finally 
fail with the following log. Also HE VM can not be done with live 

Re: [ovirt-users] Possibility to regenerate answer file on primary node

2015-01-08 Thread Christopher Young
Would anyone be able to provide any insight here? I can include any 
logs if needed, as I'm at a loss trying to get this 2nd hosted engine up and 
running.

On Tue, Jan 6, 2015 at 7:37 PM, Christopher Young mexigaba...@gmail.com
wrote:

 For added information (a portion from the ovirt-hosted-setup log):

 --
 2015-01-06 19:18:23 DEBUG otopi.context context._executeMethod:138 Stage
 customization METHOD
 otopi.plugins.ovirt_hosted_engine_setup.core.titles.Plugin._vm_start
 2015-01-06 19:18:23 DEBUG otopi.context context._executeMethod:144
 condition False
 2015-01-06 19:18:23 DEBUG otopi.context context._executeMethod:138 Stage
 customization METHOD
 otopi.plugins.ovirt_hosted_engine_setup.vm.configurevm.Plugin._customization
 2015-01-06 19:18:23 DEBUG otopi.context context.dumpEnvironment:490
 ENVIRONMENT DUMP - BEGIN
 2015-01-06 19:18:23 DEBUG otopi.context context.dumpEnvironment:500 ENV
 OVEHOSTED_VM/vmBoot=str:'disk'
 2015-01-06 19:18:23 DEBUG otopi.context context.dumpEnvironment:504
 ENVIRONMENT DUMP - END
 2015-01-06 19:18:23 DEBUG otopi.context context._executeMethod:138 Stage
 customization METHOD
 otopi.plugins.ovirt_hosted_engine_setup.vm.boot_disk.Plugin._customization
 2015-01-06 19:18:23 DEBUG otopi.context context._executeMethod:144
 condition False
 2015-01-06 19:18:23 DEBUG otopi.context context._executeMethod:138 Stage
 customization METHOD
 otopi.plugins.ovirt_hosted_engine_setup.vdsmd.cpu.Plugin._customization
 2015-01-06 19:18:23 DEBUG
 otopi.plugins.ovirt_hosted_engine_setup.vdsmd.cpu
 cpu._getCompatibleCpuModels:68 Attempting to load the caps vdsm module
 2015-01-06 19:18:23 DEBUG
 otopi.plugins.ovirt_hosted_engine_setup.vdsmd.cpu cpu._customization:137
 Compatible CPU models are: [u'model_Nehalem', u'model_Conroe',
 u'model_coreduo', u'model_core2duo', u'model_Penryn', u'model_Westmere',
 u'model_n270']
 2015-01-06 19:18:23 DEBUG otopi.plugins.otopi.dialog.human
 dialog.__logString:215 DIALOG:SEND The following CPU types
 are supported by this host:
 2015-01-06 19:18:23 DEBUG otopi.plugins.otopi.dialog.human
 dialog.__logString:215 DIALOG:SEND- model_Westmere:
 Intel Westmere Family
 2015-01-06 19:18:23 DEBUG otopi.plugins.otopi.dialog.human
 dialog.__logString:215 DIALOG:SEND- model_Nehalem:
 Intel Nehalem Family
 2015-01-06 19:18:23 DEBUG otopi.plugins.otopi.dialog.human
 dialog.__logString:215 DIALOG:SEND- model_Penryn: Intel
 Penryn Family
 2015-01-06 19:18:23 DEBUG otopi.plugins.otopi.dialog.human
 dialog.__logString:215 DIALOG:SEND- model_Conroe: Intel
 Conroe Family
 2015-01-06 19:18:23 DEBUG otopi.context context._executeMethod:152 method
 exception
 Traceback (most recent call last):
   File /usr/lib/python2.7/site-packages/otopi/context.py, line 142, in
 _executeMethod
 method['method']()
   File
 /usr/share/ovirt-hosted-engine-setup/scripts/../plugins/ovirt-hosted-engine-setup/vdsmd/cpu.py,
 line 194, in _customization
 ohostedcons.VDSMEnv.VDSM_CPU
 RuntimeError: Invalid CPU type specified: None
 2015-01-06 19:18:23 ERROR otopi.context context._executeMethod:161 Failed
 to execute stage 'Environment customization': Invalid CPU type specified:
 None
 2015-01-06 19:18:23 DEBUG otopi.context context.dumpEnvironment:490
 ENVIRONMENT DUMP - BEGIN
 2015-01-06 19:18:23 DEBUG otopi.context context.dumpEnvironment:500 ENV
 BASE/error=bool:'True'
 2015-01-06 19:18:23 DEBUG otopi.context context.dumpEnvironment:500 ENV
 BASE/exceptionInfo=list:'[(type 'exceptions.RuntimeError',
 RuntimeError('Invalid CPU type specified: None',), traceback object at
 0x39782d8)]'
 2015-01-06 19:18:23 DEBUG otopi.context context.dumpEnvironment:504
 ENVIRONMENT DUMP - END
 2015-01-06 19:18:23 INFO otopi.context context.runSequence:417 Stage:
 Clean up
 2015-01-06 19:18:23 DEBUG otopi.context context.runSequence:421 STAGE
 cleanup
 --

 On Tue, Jan 6, 2015 at 7:26 PM, Christopher Young mexigaba...@gmail.com
 wrote:

 I believe that I'm stuck with trying to follow the deployment via the
 popular guide at:

 http://community.redhat.com/blog/2014/11/up-and-running-with-ovirt-3-5-part-two/

 I'm trying to get my second hosted-engine deployed and seem to keep
 running into errors with '--deploy' on the second node.

 ---
   --== NETWORK CONFIGURATION ==--

   Please indicate a pingable gateway IP address [x.x.x.x]:
   The following CPU types are supported by this host:
  - model_Westmere: Intel Westmere Family
  - model_Nehalem: Intel Nehalem Family
  - model_Penryn: Intel Penryn Family
  - model_Conroe: Intel Conroe Family
 [ ERROR ] Failed to execute stage 'Environment customization': Invalid
 CPU type specified: None

 ---
 It seems like the answer file that is copied from the first node is
 either not being copied or is incorrect.  In any case, I was wondering if
 there was a way to verify this as well as 

Re: [ovirt-users] VM failover with ovirt3.5

2015-01-08 Thread Yue, Cong
Thanks for the advice.

For
2) If something happens to the host where the HA VM runs (network problem, power 
outage), the VM drops to an unknown state; if you want the engine to start 
this VM on another host, you need to click "Confirm Host has been Rebooted" 
under the problematic host's menu. When you confirm this, the engine will start 
the VM on another host and also release the SPM role from the problematic 
host (if it is the SPM).

Is there any way to make the VM failover (move to another host in the 
same cluster) happen automatically? Sometimes the administrator may not be 
able to recognize a sudden power outage immediately.

Also, how can I test this in my environment? Please kindly advise.
1) If the VM crashes (for some reason), it is restarted automatically on another 
host in the same cluster.


Thanks,
Cong

-Original Message-
From: Artyom Lukianov [mailto:aluki...@redhat.com]
Sent: Thursday, January 08, 2015 8:46 AM
To: Yue, Cong
Cc: cong yue; stira...@redhat.com; users@ovirt.org; Jiri Moskovcak; Yedidyah 
Bar David; Sandro Bonazzola
Subject: Re: [ovirt-users] VM failover with ovirt3.5

So, the behavior for a non-hosted-engine HA VM is:
1) If the VM crashes (for some reason), it is restarted automatically on another 
host in the same cluster.
2) If something happens to the host where the HA VM runs (network problem, power 
outage), the VM drops to an unknown state; if you want the engine to start this 
VM on another host, you need to click "Confirm Host has been Rebooted" under the 
problematic host's menu. When you confirm this, the engine will start the VM on 
another host and also release the SPM role from the problematic host (if it is 
the SPM).


- Original Message -
From: Cong Yue cong_...@alliedtelesis.com
To: Artyom Lukianov aluki...@redhat.com
Cc: cong yue yuecong1...@gmail.com, stira...@redhat.com, users@ovirt.org, 
Jiri Moskovcak jmosk...@redhat.com, Yedidyah Bar David d...@redhat.com, 
Sandro Bonazzola sbona...@redhat.com
Sent: Wednesday, January 7, 2015 3:00:26 AM
Subject: RE: [ovirt-users] VM failover with ovirt3.5

For case 1, I got the advice that I need to change the 
'migration_max_time_per_gib_mem' value inside vdsm.conf. I am doing that, and 
when I get the result, I will also share it with you. Thanks.
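For context, the abort threshold discussed here follows simple arithmetic: 'migration_max_time_per_gib_mem' (64 s by default) multiplied by the guest's memory in GiB gives the cutoff, which is why a 4 GiB guest's migration is aborted after 256 s. A minimal sketch of that calculation (the function name is illustrative, not vdsm's internal code):

```python
# Sketch of the vdsm migration-abort arithmetic described in this thread.
def migration_timeout(mem_gib, max_time_per_gib=64):
    """Seconds a migration may take before the source host aborts it.

    max_time_per_gib corresponds to 'migration_max_time_per_gib_mem' in
    vdsm.conf; setting it to 0 disables the abort entirely.
    """
    if max_time_per_gib == 0:
        return None  # feature disabled
    return max_time_per_gib * mem_gib

# A 4 GiB guest gets 256 s by default -- the failed migration took ~260 s.
default_limit = migration_timeout(4)        # 256
raised_limit = migration_timeout(4, 128)    # 512 after doubling the value
print(default_limit, raised_limit)
```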

For case 2, do you mean I tested normal VM failover the wrong way? For now, 
although I shut down host 3 forcibly, the VM on top of it does not fail over.
What is your advice for this?

Thanks,
Cong



-Original Message-
From: Artyom Lukianov [mailto:aluki...@redhat.com]
Sent: Tuesday, January 06, 2015 12:34 AM
To: Yue, Cong
Cc: cong yue; stira...@redhat.com; users@ovirt.org; Jiri Moskovcak; Yedidyah 
Bar David; Sandro Bonazzola
Subject: Re: [ovirt-users] VM failover with ovirt3.5

Case 1:
In vdsm.log I can see this one:
Thread-674407::ERROR::2015-01-05 12:09:43,264::migration::259::vm.Vm::(run) 
vmId=`0d3adb5c-0960-483c-9d73-5e256a519f2f`::Failed to migrate
Traceback (most recent call last):
  File "/usr/share/vdsm/virt/migration.py", line 245, in run
    self._startUnderlyingMigration(time.time())
  File "/usr/share/vdsm/virt/migration.py", line 324, in _startUnderlyingMigration
    None, maxBandwidth)
  File "/usr/share/vdsm/virt/vm.py", line 670, in f
    ret = attr(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/libvirtconnection.py", line 111, in wrapper
    ret = f(*args, **kwargs)
  File "/usr/lib64/python2.7/site-packages/libvirt.py", line 1264, in migrateToURI2
    if ret == -1: raise libvirtError ('virDomainMigrateToURI2() failed', dom=self)
libvirtError: operation aborted: migration job: canceled by client
I see that this kind of thing can happen because the migration time exceeded the 
configured maximum time for migrations, but anyway we need help from the devs; I 
added some to CC.

Case 2:
An HA VM must migrate only in case of some failure on host3, so if your host3 is 
OK the VM will continue to run on it.


- Original Message -
From: Cong Yue cong_...@alliedtelesis.com
To: Artyom Lukianov aluki...@redhat.com
Cc: cong yue yuecong1...@gmail.com, stira...@redhat.com, users@ovirt.org
Sent: Monday, January 5, 2015 7:38:08 PM
Subject: RE: [ovirt-users] VM failover with ovirt3.5

I collected the agent.log and vdsm.log in 2 cases.

Case 1: HE VM failover trial
What I did:
1. Make all hosts engine-up.
2. Set host1 to local maintenance mode. Host1 is running the HE VM.
3. The HE VM then tries to migrate, but it finally fails. This can be seen in 
agent.log_hosted_engine_1.
As the logs are very large, I uploaded them to Google Drive. The link is:
https://drive.google.com/drive/#folders/0B9Pi5vvimKTdNU82bWVpZDhDQmM/0B9Pi5vvimKTdRGJhUXUwejNGRHc
The logs are for the 3 hosts in my environment.

Case 2: non-HE VM failover trial
1. Make all hosts engine-up.
2. Set host2 to local maintenance mode. On host3, there is one VM with HA 
enabled. Also, for the cluster, "Enable HA reservation" is on and the resilience 
policy is set to "migrating virtual machines".
3. But the VM on top of host3 does not migrate at all.
The logs are uploaded to Google Drive as

Re: [ovirt-users] Possibility to regenerate answer file on primary node

2015-01-08 Thread Simone Tiraboschi


- Original Message -
 From: Christopher Young mexigaba...@gmail.com
 To: users@ovirt.org
 Sent: Thursday, January 8, 2015 5:36:44 PM
 Subject: Re: [ovirt-users] Possibility to regenerate answer file on primary   
 node
 
 Would anyone be able to provide me any insight here? I can include any logs
 if needed as I'm at a loss trying to get this 2nd hosted engine up and
 running.
 
 On Tue, Jan 6, 2015 at 7:37 PM, Christopher Young  mexigaba...@gmail.com 
 wrote:
 
 
 
 For added information (a portion from the ovirt-hosted-setup log):
 
 --
 2015-01-06 19:18:23 DEBUG otopi.context context._executeMethod:138 Stage
 customization METHOD
 otopi.plugins.ovirt_hosted_engine_setup.core.titles.Plugin._vm_start
 2015-01-06 19:18:23 DEBUG otopi.context context._executeMethod:144 condition
 False
 2015-01-06 19:18:23 DEBUG otopi.context context._executeMethod:138 Stage
 customization METHOD
 otopi.plugins.ovirt_hosted_engine_setup.vm.configurevm.Plugin._customization
 2015-01-06 19:18:23 DEBUG otopi.context context.dumpEnvironment:490
 ENVIRONMENT DUMP - BEGIN
 2015-01-06 19:18:23 DEBUG otopi.context context.dumpEnvironment:500 ENV
 OVEHOSTED_VM/vmBoot=str:'disk'
 2015-01-06 19:18:23 DEBUG otopi.context context.dumpEnvironment:504
 ENVIRONMENT DUMP - END
 2015-01-06 19:18:23 DEBUG otopi.context context._executeMethod:138 Stage
 customization METHOD
 otopi.plugins.ovirt_hosted_engine_setup.vm.boot_disk.Plugin._customization
 2015-01-06 19:18:23 DEBUG otopi.context context._executeMethod:144 condition
 False
 2015-01-06 19:18:23 DEBUG otopi.context context._executeMethod:138 Stage
 customization METHOD
 otopi.plugins.ovirt_hosted_engine_setup.vdsmd.cpu.Plugin._customization
 2015-01-06 19:18:23 DEBUG otopi.plugins.ovirt_hosted_engine_setup.vdsmd.cpu
 cpu._getCompatibleCpuModels:68 Attempting to load the caps vdsm module
 2015-01-06 19:18:23 DEBUG otopi.plugins.ovirt_hosted_engine_setup.vdsmd.cpu
 cpu._customization:137 Compatible CPU models are: [u'model_Nehalem',
 u'model_Conroe', u'model_coreduo', u'model_core2duo', u'model_Penryn',
 u'model_Westmere', u'model_n270']
 2015-01-06 19:18:23 DEBUG otopi.plugins.otopi.dialog.human
 dialog.__logString:215 DIALOG:SEND The following CPU types are supported by
 this host:
 2015-01-06 19:18:23 DEBUG otopi.plugins.otopi.dialog.human
 dialog.__logString:215 DIALOG:SEND - model_Westmere: Intel Westmere Family
 2015-01-06 19:18:23 DEBUG otopi.plugins.otopi.dialog.human
 dialog.__logString:215 DIALOG:SEND - model_Nehalem: Intel Nehalem Family
 2015-01-06 19:18:23 DEBUG otopi.plugins.otopi.dialog.human
 dialog.__logString:215 DIALOG:SEND - model_Penryn: Intel Penryn Family
 2015-01-06 19:18:23 DEBUG otopi.plugins.otopi.dialog.human
 dialog.__logString:215 DIALOG:SEND - model_Conroe: Intel Conroe Family
 2015-01-06 19:18:23 DEBUG otopi.context context._executeMethod:152 method
 exception
 Traceback (most recent call last):
   File "/usr/lib/python2.7/site-packages/otopi/context.py", line 142, in _executeMethod
     method['method']()
   File "/usr/share/ovirt-hosted-engine-setup/scripts/../plugins/ovirt-hosted-engine-setup/vdsmd/cpu.py", line 194, in _customization
     ohostedcons.VDSMEnv.VDSM_CPU
 RuntimeError: Invalid CPU type specified: None
 2015-01-06 19:18:23 ERROR otopi.context context._executeMethod:161 Failed to
 execute stage 'Environment customization': Invalid CPU type specified: None
 2015-01-06 19:18:23 DEBUG otopi.context context.dumpEnvironment:490
 ENVIRONMENT DUMP - BEGIN
 2015-01-06 19:18:23 DEBUG otopi.context context.dumpEnvironment:500 ENV
 BASE/error=bool:'True'
 2015-01-06 19:18:23 DEBUG otopi.context context.dumpEnvironment:500 ENV
 BASE/exceptionInfo=list:'[(<type 'exceptions.RuntimeError'>,
 RuntimeError('Invalid CPU type specified: None',), <traceback object at
 0x39782d8>)]'
 2015-01-06 19:18:23 DEBUG otopi.context context.dumpEnvironment:504
 ENVIRONMENT DUMP - END
 2015-01-06 19:18:23 INFO otopi.context context.runSequence:417 Stage: Clean
 up
 2015-01-06 19:18:23 DEBUG otopi.context context.runSequence:421 STAGE cleanup
 --
 
 On Tue, Jan 6, 2015 at 7:26 PM, Christopher Young  mexigaba...@gmail.com 
 wrote:
 
 
 
 I believe that I'm stuck with trying to follow the deployment via the popular
 guide at:
 http://community.redhat.com/blog/2014/11/up-and-running-with-ovirt-3-5-part-two/
 
 I'm trying to get my second hosted-engine deployed and seem to keep running
 into errors with '--deploy' on the second node.
 
 ---
 --== NETWORK CONFIGURATION ==--
 
 Please indicate a pingable gateway IP address [x.x.x.x]:
 The following CPU types are supported by this host:
 - model_Westmere: Intel Westmere Family
 - model_Nehalem: Intel Nehalem Family
 - model_Penryn: Intel Penryn Family
 - model_Conroe: Intel Conroe Family
 [ ERROR ] Failed to execute stage 'Environment customization': Invalid CPU
 type specified: None
 
 ---
 It seems like the answer file that is copied from the first node is either
 not being copied or is incorrect. In any case, I was wondering 
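One way to narrow this down (hedged: the answer-file path and the --config-append option are assumptions about oVirt 3.5 hosted-engine setup, not confirmed in this thread) is to check whether the answer file on the second node defines a CPU type at all, and if not, feed it the first node's copy explicitly:

```
# Assumed location of the hosted-engine answer file:
grep -i cpu /etc/ovirt-hosted-engine/answers.conf

# Re-run the deploy, explicitly appending the answer file copied from node 1:
hosted-engine --deploy --config-append=/path/to/answers.conf
```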

Re: [ovirt-users] Issues with vm start up

2015-01-08 Thread VONDRA Alain
Hi,
Thanks A LOT Koen, you saved me with your solution 
Regards






Alain VONDRA
IT Systems Operations Manager
Administrative and Financial Department
+33 1 44 39 77 76
UNICEF France
3 rue Duguay Trouin  75006 PARIS
www.unicef.fr


From: users-boun...@ovirt.org [mailto:users-boun...@ovirt.org] On behalf of 
Koen Vanoppen
Sent: Monday, December 8, 2014 07:49
To: users@ovirt.org
Subject: Re: [ovirt-users] Issues with vm start up

We also had an issue when starting our VMs. This was due to the NUMA option 
under the host section. I attached a procedure I created for work. Maybe it is 
also useful for your case...

2014-12-02 5:58 GMT+01:00 Shanil S xielessha...@gmail.com:
Hi Omer,
We have opened this as a bug, you can view the new bug here 
https://bugzilla.redhat.com/show_bug.cgi?id=1169625.

--
Regards
Shanil

On Tue, Dec 2, 2014 at 8:42 AM, Shanil S xielessha...@gmail.com wrote:
Hi Omer,

Thanks for your reply. We will open it as a bug.

--
Regards
Shanil

On Mon, Dec 1, 2014 at 4:38 PM, Omer Frenkel ofren...@redhat.com wrote:


- Original Message -
 From: Shanil S xielessha...@gmail.com
 To: Omer Frenkel ofren...@redhat.com, users@ovirt.org, Juan Hernandez 
 jhern...@redhat.com
 Sent: Monday, December 1, 2014 12:39:12 PM
 Subject: Re: [ovirt-users] Issues with vm start up

 Hi Omer,

 Thanks for your reply.

 We are deploying those VMs through templates, and all the templates have 
 first boot from hard disk and second from CD-ROM. We are using the CD-ROM in 
 the cloud-init section to insert the cloud-init data. Why do we need this 
 boot order (the VM boot order, or the one in the cloud-init section) when the 
 template's first boot device is the hard disk?



if the template's boot order has hard-disk it should be ok,
not sure why it seems for this vm the boot order is only cd.
does this happen to any vm created from this template?
can you check the boot order is correct in the edit vm dialog?

if the boot order is ok in the template,
and it happens to any new vm from this template,
you should open a bug so we could investigate this.


 --
 Regards
 Shanil

 On Mon, Dec 1, 2014 at 3:37 PM, Omer Frenkel ofren...@redhat.com wrote:

 
 
  - Original Message -
   From: Shanil S xielessha...@gmail.com
   To: Shahar Havivi shah...@redhat.com
   Cc: users@ovirt.org, Juan Hernandez jhern...@redhat.com, Omer 
  Frenkel ofren...@redhat.com
   Sent: Thursday, November 27, 2014 10:32:54 AM
   Subject: Re: [ovirt-users] Issues with vm start up
  
   Hi Omer,
  
   I have attached the engine.log and vdsm.log. Please check it. The vm id
  is
   - 7c7aa9dedb57248f5da291117164f0d7
  
 
  sorry Shanil for the delay,
  it looks like in the VM settings you have chosen to boot only from CD?
  this is why the VM doesn't boot; the cloud-init disk is a settings disk, not
  a boot disk.
  please go to update VM, change the boot order to use hard-disk, and boot
  the VM.
 
  let me know if it helps,
  Omer.
 
   --
   Regards
   Shanil
  
   On Thu, Nov 27, 2014 at 1:46 PM, Shahar Havivi shah...@redhat.com
  wrote:
  
    Try to remove content and run the VM,
    i.e. remove the runcmd: section or some of it. Try to use the XML without
    CDATA; maybe you can pinpoint the problem that way...
   
   
On 27.11.14 10:03, Shanil S wrote:
 Hi All,


 I am using oVirt version 3.5 and having some issues with VM startup with
 cloud-init, using the API in run-once mode.

 Below are the steps I follow:

 1. Create the VM via the API from a pre-created template.
 2. Start the VM in run-once mode and push the cloud-init data from the API.
 3. The VM gets stuck, and the console displays the following:
    Booting from DVD/CD.. ...
    Boot failed: could not read from CDROM (code 004)

 I am using the following XML for this operation:

 <action>
   <vm>
     <os>
       <boot dev='cdrom'/>
     </os>
     <initialization>
       <cloud_init>
         <host>
           <address>test</address>
         </host>
         <network_configuration>
           <nics>
             <nic>
               <interface>virtIO</interface>
               <name>eth0</name>
               <boot_protocol>static</boot_protocol>
               <mac address=''/>
               <network>
                 <ip address='' netmask='' gateway=''/>
               </network>
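Since the archive mangled the run-once XML above, here is a hedged sketch that rebuilds an equivalent body programmatically with Python's xml.etree, using boot dev 'hd' per Omer's later advice; the helper name, VM hostname, and addresses are placeholders, not part of the original request:

```python
import xml.etree.ElementTree as ET

def run_once_body(boot_dev, hostname, ip, netmask, gateway):
    # Rebuild the <action> body from the thread; oVirt's schema has more
    # fields, this covers only what the quoted request used.
    action = ET.Element('action')
    vm = ET.SubElement(action, 'vm')
    os_el = ET.SubElement(vm, 'os')
    ET.SubElement(os_el, 'boot', dev=boot_dev)  # 'hd' per Omer's advice
    cloud_init = ET.SubElement(ET.SubElement(vm, 'initialization'), 'cloud_init')
    ET.SubElement(ET.SubElement(cloud_init, 'host'), 'address').text = hostname
    nics = ET.SubElement(ET.SubElement(cloud_init, 'network_configuration'), 'nics')
    nic = ET.SubElement(nics, 'nic')
    ET.SubElement(nic, 'interface').text = 'virtIO'
    ET.SubElement(nic, 'name').text = 'eth0'
    ET.SubElement(nic, 'boot_protocol').text = 'static'
    ET.SubElement(ET.SubElement(nic, 'network'), 'ip',
                  address=ip, netmask=netmask, gateway=gateway)
    return ET.tostring(action, encoding='unicode')

body = run_once_body('hd', 'vm01', '10.0.0.5', '255.255.255.0', '10.0.0.1')
print(body)
```

The resulting string would be POSTed to the engine's /api/vms/{id}/start endpoint as in the thread.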

Re: [ovirt-users] Using gluster on other hosts?

2015-01-08 Thread Will K
That's what I did, but it didn't work for me.
1. Use the 192.168.x interface to set up gluster. I used hostnames in /etc/hosts.
2. Set up oVirt using the switched-network hostnames, let's say 10.10.10.x.
3. oVirt and all that comes up fine.
4. When I try to create a storage domain, it only shows the 10.10.10.x 
hostnames as available.

When I tried to add a brick I would get something like
    Host gfs2 is not in 'Peer in Cluster' state
(node2 is the hostname and gfs2 is the 192.168 name).
Running `gluster peer probe gfs2` or `gluster peer probe 192.168.x.x` didn't 
work:
    peer probe: failed: Probe returned with unknown errno 107
Running probe again with the switched-network hostname or IP worked fine. Maybe 
it is not possible with the current GlusterFS version?
http://www.gluster.org/community/documentation/index.php/Features/SplitNetwork


Will

On Thursday, January 8, 2015 3:43 AM, Sahina Bose sab...@redhat.com wrote:

 On 01/08/2015 12:07 AM, Will K wrote:
  
  Hi 
  I would like to see if anyone has good suggestion. 
  I have two physical hosts with 1GB connections to switched networks. The 
hosts also have 10GB interfaces connected directly with a Twinax cable, like a 
copper crossover cable. The idea was to use the 10GB link as a private network 
for GlusterFS until the day we grow out of this 2-node setup.
  
  GlusterFS was set up with the 10GB ports using non-routable IPs and hostnames 
in /etc/hosts, for example, gfs1 192.168.1.1 and gfs2 192.168.1.2. I'm 
following the example from 
community.redhat.com/blog/2014/10/up-and-running-with-ovirt-3-5/ . Currently 
I'm only using a Gluster volume on node1, but the `gluster peer probe` test 
worked fine with node2 through the 10GB connection.
  
  The oVirt engine was set up on physical host1 as a hosted engine. Now, when I 
try to create a new Gluster storage domain, I can only see host node1 
available.

  Is there any way I can set up oVirt on node1 and node2, while using gfs1 and 
gfs2 for GlusterFS? Or some way to take advantage of the 10GB connection?
   
 
 If I understand right, you have 2 interfaces on each of your hosts, and you 
want oVirt to communicate via one interface and glusterfs to use the other?

 While adding the hosts to oVirt you could use ip1, and then, while creating the 
volume, add the brick using the other IP address.
 For instance: gluster volume create volname 192.168.1.2:/bricks/b1

 Currently, there's no way to specify the IP address to use while adding a 
brick from the oVirt UI (we're working on this for 3.6), but you could do this 
with the gluster CLI commands. This would then be detected in the oVirt UI.
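The suggested split can be sketched as a short CLI sequence (a sketch only: the volume name and brick paths are placeholders, and the gfs1/gfs2 storage-network hostnames and `replica 2` count are taken from this thread's two-node setup):

```
# Probe and create over the 192.168.x storage-network names from /etc/hosts
gluster peer probe gfs2
gluster volume create datavol replica 2 gfs1:/bricks/b1 gfs2:/bricks/b1
gluster volume start datavol
# oVirt, which knows the hosts by their 10.10.10.x names, should then
# detect the volume in the UI.
```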
 
 
 
  
  Thanks W
   
  
 ___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users 
 
 



[ovirt-users] Community ads on StackOverflow, 1st half 2015

2015-01-08 Thread Brian Proffitt
We have a new community ad set to go for the next StackOverflow campaign for 
the first half of the year. Allon Mureinik has posted it at:

http://meta.stackoverflow.com/a/283016/2422776

Now we just need some upvotes there to have the ad approved. The current 
threshold is +6. If you are a member of the StackOverflow network, we could use 
the support!

Thanks!

BKP


-- 
Brian Proffitt

Community Liaison
oVirt
Open Source and Standards, Red Hat - http://community.redhat.com
Phone: +1 574 383 9BKP
IRC: bkp @ OFTC
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users