Re: [ovirt-users] Problem Upgrading 3.4.4 - 3.5

2014-12-29 Thread InterNetX - Juergen Gotteswinter
Hello both of you,

thanks for your detailed explanations and support; I'm still thinking about
which way I will go. I'm tending to try the dirty way in a lab setup first, to
see what happens.

Will post updates when I have more :)

Cheers,

Juergen

 It seems that somebody manually deleted the constraint
 fk_event_subscriber_event_notification_methods from your database.
 Therefore, the first line that attempts to drop this constraint in
 03_05_0050_event_notification_methods.sql:  ALTER TABLE event_subscriber
 DROP CONSTRAINT fk_event_subscriber_event_notification_methods;
 fails.

 uhm, interesting. Could this be caused by the deinstallation of dwh
 reporting?

 How exactly did you do that?


 very good question, that was a few months ago. I would guess with rpm -e
 before an engine upgrade (if I remember correctly, there was one ovirt
 release where dwh was missing for el6).


 Note that partial cleanup is not supported yet [1].

 checking right after that mail :)


 Can you please post all of /var/log/ovirt-engine/setup/* ?

 sure, sending you the dl link in a private mail, since I am not sure if
 I sed'ed out all private things
 
 Based on these logs, it seems to me that:
 
 1. At some point you upgraded to a snapshot of master (then-3.4), installing
 ovirt-engine-3.4.0-0.12.master.20140228075627.el6.
 
 2. This package had an older version of the script
 dbscripts/upgrade/03_04_0600_event_notification_methods.sql .
 
 3. Therefore, when you now try to upgrade, engine-setup tries to run the
 newer version, and fails. Why? Because it keeps in the database the checksum
 of every upgrade script it runs, and does not re-run scripts with the same
 checksum. But in your case the checksums are different, so it does try to run
 it. It fails, because the older version already dropped the table
 event_notification_methods.
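 
 (A hedged aside: you can inspect what engine-setup recorded by querying the
 same schema_version table used in the UPDATE statement further down, e.g.:
 
 SELECT version, script, checksum FROM schema_version WHERE version='03040600';
 
 The 'script' column name is an assumption; verify against your own schema.)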
 
 How to fix this?
 
 First, note that upgrades between dev/beta/rc/etc. versions are not supported.
 So the official answer is to remove everything and start from scratch. Or, 
 if you
 have good backups of the latest 3.3 version you had, restore to that one and 
 then
 upgrade to 3.4 and then 3.5.
 
 If you want to try and force an upgrade, you can do the following, but note 
 that
 it might fail elsewhere, or even fail in some future upgrade:
 
 1. Following a 'git log' of this file, it seems to me that the only change it
 went through between the version you installed and the one in final 3.4, is 
 [1].
 It seems you can apply the relevant part of this change by running:
 
 ALTER TABLE event_subscriber ADD COLUMN notification_method CHARACTER 
 VARYING(32) DEFAULT 'EMAIL' CHECK (notification_method IN ('EMAIL', 
 'SNMP_TRAP'));
 
 2. After you do that, you can convince engine-setup that you already ran the
 version of the script you now have, by running:
 
 update schema_version set checksum='feabc7bc7bb7ff749f075be48538c92e' where 
 version='03040600';
 
 Backup everything before you start.
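 
 For example (a hedged sketch - the file names are placeholders; engine-backup
 ships with the engine, pg_dump is plain PostgreSQL):
 
 engine-backup --mode=backup --file=engine-backup.tar.bz2 --log=engine-backup.log
 su - postgres -c "pg_dump -F c -f /tmp/engine.dump engine"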
 
 No guarantee. Use at your own risk.
 
 As I said, better remove everything and setup again clean or restore your
 latest backup of a supported version and upgrade from that one.
 
 Good luck. Please report back :-) Thanks,
 
 [1] http://gerrit.ovirt.org/25393
 

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] vdsm noipspoof.py vdsm hook problem

2014-12-29 Thread InterNetX - Juergen Gotteswinter
Hi,

I am trying to get the noipspoof.py hook up and running, which works
fine so far if I only feed it a single IP. When trying to add two or more,
as described in the source (comma-separated), the GUI tells me that
this isn't expected / nice and won't let me do it.

I already tried modifying the regex, which made the engine accept a
2nd/3rd IP (comma-separated), but it seems that something else is
going wrong when parsing this.

VDSM throws this:

vdsm vm.Vm ERROR vmId=`4c9cb160-2283-4769-a69c-434e6c992c2b`::The vm
start process failed#012Traceback (most recent call last):#012  File
/usr/share/vdsm/virt/vm.py, line 2266, in _startUnderlyingVm#012
self._run()#012  File /usr/share/vdsm/virt/vm.py, line 3332, in
_run#012domxml = hooks.before_vm_start(self._buildCmdLine(),
self.conf)#012  File /usr/share/vdsm/hooks.py, line 142, in
before_vm_start#012return _runHooksDir(domxml, 'before_vm_start',
vmconf=vmconf)#012  File /usr/share/vdsm/hooks.py, line 110, in
_runHooksDir#012raise HookError()#012HookError


The VM fails to start; the engine tries this on every available host (which,
not surprisingly, fails too).

Anyone have any ideas / patches / hints on how to modify this hook?
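
For reference, a minimal sketch of the multi-IP handling I have in mind
(hedged - this is not the shipped hook; the 'hooking' helper module and the
libvirt no-ip-spoofing filter are what vdsm hooks normally use, but verify
against your vdsm version):

#!/usr/bin/env python
import os
import hooking  # vdsm's hook helper module

if 'noipspoof' in os.environ:
    domxml = hooking.read_domxml()
    # one <parameter name='IP'> element per comma-separated address
    ips = [ip.strip() for ip in os.environ['noipspoof'].split(',')]
    for iface in domxml.getElementsByTagName('interface'):
        filterref = domxml.createElement('filterref')
        filterref.setAttribute('filter', 'no-ip-spoofing')
        for ip in ips:
            param = domxml.createElement('parameter')
            param.setAttribute('name', 'IP')
            param.setAttribute('value', ip)
            filterref.appendChild(param)
        iface.appendChild(filterref)
    hooking.write_domxml(domxml)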


Thanks

Juergen
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] VM failover with ovirt3.5

2014-12-29 Thread Artyom Lukianov
Can you also provide the output of hosted-engine --vm-status please? Last time
it was useful, because I do not see anything unusual.
Thanks

- Original Message -
From: Cong Yue cong_...@alliedtelesis.com
To: Artyom Lukianov aluki...@redhat.com
Cc: Simone Tiraboschi stira...@redhat.com, users@ovirt.org
Sent: Monday, December 29, 2014 7:15:24 AM
Subject: Re: [ovirt-users] VM failover with ovirt3.5

Also, I changed the maintenance mode to local on another host. But the VM on
this host also cannot be migrated. The logs are as follows.

[root@compute2-2 ~]# hosted-engine --set-maintenance --mode=local
[root@compute2-2 ~]# tail -f /var/log/ovirt-hosted-engine-ha/agent.log
MainThread::INFO::2014-12-28
21:09:04,184::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.94 (id: 1, score: 2400)
MainThread::INFO::2014-12-28
21:09:14,603::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state EngineDown (score: 2400)
MainThread::INFO::2014-12-28
21:09:14,603::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.94 (id: 1, score: 2400)
MainThread::INFO::2014-12-28
21:09:24,903::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state EngineDown (score: 2400)
MainThread::INFO::2014-12-28
21:09:24,904::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.94 (id: 1, score: 2400)
MainThread::INFO::2014-12-28
21:09:35,026::states::437::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(consume)
Engine vm is running on host 10.0.0.94 (id 1)
MainThread::INFO::2014-12-28
21:09:35,236::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state EngineDown (score: 2400)
MainThread::INFO::2014-12-28
21:09:35,236::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.94 (id: 1, score: 2400)
MainThread::INFO::2014-12-28
21:09:45,604::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state EngineDown (score: 2400)
MainThread::INFO::2014-12-28
21:09:45,604::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.94 (id: 1, score: 2400)
MainThread::INFO::2014-12-28
21:09:55,691::state_decorators::124::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(check)
Local maintenance detected
MainThread::INFO::2014-12-28
21:09:55,701::brokerlink::111::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(notify)
Trying: notify time=1419829795.7 type=state_transition
detail=EngineDown-LocalMaintenance hostname='compute2-2'
MainThread::INFO::2014-12-28
21:09:55,761::brokerlink::120::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(notify)
Success, was notification of state_transition
(EngineDown-LocalMaintenance) sent? sent
MainThread::INFO::2014-12-28
21:09:55,990::states::208::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(score)
Score is 0 due to local maintenance mode
MainThread::INFO::2014-12-28
21:09:55,990::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state LocalMaintenance (score: 0)
MainThread::INFO::2014-12-28
21:09:55,991::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.94 (id: 1, score: 2400)
^C
You have new mail in /var/spool/mail/root
[root@compute2-2 ~]# ps -ef | grep qemu
root 18420  2777  0 21:10 pts/0    00:00:00 grep --color=auto qemu
qemu 29809 1  0 Dec19 ?01:17:20 /usr/libexec/qemu-kvm
-name testvm2-2 -S -machine rhel6.5.0,accel=kvm,usb=off -cpu Nehalem
-m 500 -realtime mlock=off -smp
1,maxcpus=16,sockets=16,cores=1,threads=1 -uuid
c31e97d0-135e-42da-9954-162b5228dce3 -smbios
type=1,manufacturer=oVirt,product=oVirt
Node,version=7-0.1406.el7.centos.2.5,serial=4C4C4544-0059-3610-8033-B4C04F395931,uuid=c31e97d0-135e-42da-9954-162b5228dce3
-no-user-config -nodefaults -chardev
socket,id=charmonitor,path=/var/lib/libvirt/qemu/testvm2-2.monitor,server,nowait
-mon chardev=charmonitor,id=monitor,mode=control -rtc
base=2014-12-19T20:17:17,driftfix=slew 
-no-kvm-pit-reinjection
-no-hpet -no-shutdown -boot strict=on -device
piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device
virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x4 -device
virtio-serial-pci,id=virtio-serial0,max_ports=16,bus=pci.0,addr=0x5
-drive if=none,id=drive-ide0-1-0,readonly=on,format=raw,serial=
-device ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0
-drive 

Re: [ovirt-users] Backup and Restore of VMs

2014-12-29 Thread Nathanaël Blanchet


On 29/12/2014 12:10, Nathanaël Blanchet wrote:

Hello,

Thank you for the script, yes, it is clearer now.
However, there is something I misunderstand; my reasoning may be
stupid, just tell me.
It is specifically about the backup process, precisely when the disk is
attached to the VM... At this moment, an external process should do this
step. If we consider using the dd command to make a byte-for-byte copy
of the snapshot disk, why not directly attach this cloned raw
virtual disk to the new OVF-cloned VM instead of creating a new
provisioned disk?
But you might instead consider doing a file copy during the backup process
(rsync-like), which implies formatting the newly created disk and many
additional steps, such as creating Logical Volumes if needed, etc...
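
(For illustration, the kind of byte-for-byte copy I mean - a hedged sketch,
the paths are hypothetical:

dd if=/rhev/data-center/mnt/<storage>/<image-id>/<volume-id> of=/backup/vm-disk.raw bs=1M
)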

Can anybody help me with understanding this step?
Thank you.

On 28/12/2014 10:02, Liron Aravot wrote:

Hi All,
I've uploaded an example script (oVirt python-sdk) that contains
examples for the steps
described at
http://www.ovirt.org/Features/Backup-Restore_API_Integration


let me know how it works out for you -
https://github.com/laravot/backuprestoreapi


- Original Message -

From: Liron Aravot lara...@redhat.com
To: Soeren Malchow soeren.malc...@mcon.net
Cc: Vered Volansky ve...@redhat.com, Users@ovirt.org
Sent: Wednesday, December 24, 2014 12:20:36 PM
Subject: Re: [ovirt-users] Backup and Restore of VMs

Hi guys,
I'm currently working on a complete example of the steps that appear in -
http://www.ovirt.org/Features/Backup-Restore_API_Integration

I will share it with you as soon as I'm done with it.

thanks,
Liron

- Original Message -

From: Soeren Malchow soeren.malc...@mcon.net
To: Vered Volansky ve...@redhat.com
Cc: Users@ovirt.org
Sent: Wednesday, December 24, 2014 11:58:01 AM
Subject: Re: [ovirt-users] Backup and Restore of VMs

Dear Vered,

at some point we have to start, and right now we are getting closer. Even
with the documentation it is sometimes hard to find the correct place to
start, especially without specific examples (and I have decades of
experience now).

With the backup plugin that came from Lucas Vandroux we have a starting
point right now, and we will continue from here and try to work with him
on this.


Regards
Soeren


-Original Message-
From: users-boun...@ovirt.org [mailto:users-boun...@ovirt.org] On 
Behalf Of

Blaster
Sent: Tuesday, December 23, 2014 5:49 PM
To: Vered Volansky
Cc: Users@ovirt.org
Subject: Re: [ovirt-users] Backup and Restore of VMs

Sounds like a Chicken/Egg problem.



On 12/23/2014 12:03 AM, Vered Volansky wrote:

Well, real world is community...
Maybe change the name of the thread in order to make this clearer for
someone from the community who might be able to help.
Maybe something like:
Request for sharing real-world examples of VM backups.

We obviously use it as part of development, but I don't have what you're
asking for.
If you try it yourself and stumble onto questions in the process, please
ask the list and we'll do our best to help.

Best Regards,
Vered

- Original Message -

From: Blaster blas...@556nato.com
To: Vered Volansky ve...@redhat.com
Cc: Users@ovirt.org
Sent: Tuesday, December 23, 2014 5:56:13 AM
Subject: Re: [ovirt-users] Backup and Restore of VMs


Vered,

It sounds like Soeren already knows about that page. His issue,
as well as the issue of others judging by comments on here, is
that there aren't any real-world examples of how the API is used.



On Dec 22, 2014, at 9:26 AM, Vered Volansky ve...@redhat.com 
wrote:



Please take a look at:
http://www.ovirt.org/Features/Backup-Restore_API_Integration

Specifically:
http://www.ovirt.org/Features/Backup-Restore_API_Integration#Full_VM_Backups

Regards,
Vered

- Original Message -

From: Soeren Malchow soeren.malc...@mcon.net
To: Users@ovirt.org
Sent: Friday, December 19, 2014 1:44:38 PM
Subject: [ovirt-users] Backup and Restore of VMs



Dear all,



ovirt: 3.5

gluster: 3.6.1

OS: CentOS 7 (except ovirt hosted engine = centos 6.6)



I spent quite a while researching backup and restore for VMs;
so far I have come up with this as a start for us:


- API calls to create scheduled snapshots of virtual machines. This
is for short-term storage and to guard against accidental deletion
within the VM, but not against storage corruption.


- Since we are using a gluster backend, gluster snapshots. I wasn't
able to really test this so far, since the LV needs to be thin
provisioned and we did not do that in the setup.


For the API calls we have the problem that we cannot find any
existing scripts or anything like that to do those snapshots (and
I/we are not developers enough to write them) - see the sketch below.
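
(A minimal sketch with the oVirt Python SDK v3, assuming an admin@internal
login and a VM named 'myvm' - engine URL, credentials and VM name are all
placeholders; run it from cron for a simple schedule:

from ovirtsdk.api import API
from ovirtsdk.xml import params

api = API(url='https://engine.example.com/api',
          username='admin@internal', password='secret',
          insecure=True)  # insecure=True skips CA validation; use ca_file in production
vm = api.vms.get(name='myvm')
# create a snapshot of the VM's disks
vm.snapshots.add(params.Snapshot(description='nightly-backup'))
api.disconnect()
)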



As additional information, we have a ZFS-based storage with
deduplication that we use for other backup purposes, which does a
great job especially because of the deduplication (we can store
generations of backups without problems); this storage can be NFS
exported and used as 

Re: [ovirt-users] Unable to reinstall hosts after network removal.

2014-12-29 Thread Arman Khalatyan
My setup has 3 networks: 1x IB, 1x 10Gbit + 1Gbit for ovirt-management.
The ovirt network does not have any trouble, it is always there.
I was trying to rename or remove my IB network, which was used for VM
migrations.
I was using the web GUI, which removed the IB0 network without problems. After
removal the hosts were ok. Then I put them into maintenance mode. To refresh
iptables rules I did a reinstall; the reinstall then failed with a message
that IB0 is not attached to any interface. But the IB0 interface is not
possible to attach: it is already deleted and not visible in any network
dialog.
After creating an interface with the same name, everything is online now.
My current interface list is the following:
 virsh -r net-list
Name                 State      Autostart     Persistent
----------------------------------------------------------
;vdsmdummy;          active     no            no
vdsm-cls10G          active     yes           yes
vdsm-IB0             active     yes           yes
vdsm-ovirtmgmt       active     yes           yes

On this host IB0 is not attached to any interface; I wondered if it
should show up in the net list?

I think the GUI does not rename/remove the interface from the DB. Some
constraint still keeps IB0 in the DB.



***

Dr. Arman Khalatyan eScience -SuperComputing Leibniz-Institut für
Astrophysik Potsdam (AIP) An der Sternwarte 16, 14482 Potsdam, Germany

***


On Sun, Dec 28, 2014 at 10:31 AM, Moti Asayag masa...@redhat.com wrote:



 - Original Message -
  From: Arman Khalatyan arm2...@gmail.com
  To: users users@ovirt.org
  Sent: Wednesday, December 24, 2014 1:22:43 PM
  Subject: [ovirt-users] Unable to reinstall hosts after network removal.
 
  Hello,
  I have a little trouble with ovirt 3.5 on CentOS6.6:
  I was removing all networks from all hosts.

 Did you use the setup networks dialog from the UI in order to remove those
 networks ?
 Or have you removed those networks from the host directly (where you
 should have used:
 1. virsh net-destroy 'the-network-name'
 2. virsh net-undefine 'the-network-name'
 )

 can you report the output of 'virsh -r net-list'  ?

 Then, after removing the network from the data center, the hosts became unusable.

 What was the host's status prior to removing its networks ? Was it up ?

 Every time after reinstall, the host claims that the network is not
 configured, but it is already removed from the network tab in the DC.

 What is the missing network name ? Is it 'ovirtmgmt' ?

 Where does it get the old configuration from? The old interfaces are also
 restored
 every time on the reinstalled hosts.

 The hosts, via vdsm, report their network configuration via the
 'getCapabilities' verb
 of vdsm. You can try running it on the host:

 vdsClient -s 0 getVdsCaps

 and examine the nics / networks / bridges / vlans / bonds elements.

  Which DB table is in charge of dc-networks?
 

 The retrieved information from vdsm is reported to the 'vds_interface' table.
 The dc networks are stored in 'networks' table and networks attached to
 clusters are stored in network_cluster table.

 I wouldn't recommend deleting entries from the tables directly. There
 are
 certain constraints which shouldn't be violated, e.g. the management
 network
 'ovirtmgmt' is blocked for removal from the engine.

  Thanks,
  Arman.
 
 
  ***
  Dr. Arman Khalatyan eScience -SuperComputing Leibniz-Institut für
 Astrophysik
  Potsdam (AIP) An der Sternwarte 16, 14482 Potsdam, Germany
  ***
 
  ___
  Users mailing list
  Users@ovirt.org
  http://lists.ovirt.org/mailman/listinfo/users
 

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Unable to reinstall hosts after network removal.

2014-12-29 Thread Lior Vernia
I think pressing the Refresh Capabilities button in the Hosts main tab
might help after you remove the IB interface - this forces the engine to
update its DB state to match what's reported by the hypervisor. Then I would
expect things to work better...

On 29/12/14 15:59, Arman Khalatyan wrote:
 My setup has 3 networks: 1x IB, 1x 10Gbit + 1Gbit for ovirt-management.
 The ovirt network does not have any trouble, it is always there.
 I was trying to rename or remove my IB network, which was used for VM
 migrations.
 I was using the web GUI, which removed the IB0 network without problems. After
 removal the hosts were ok. Then I put them into maintenance mode. To
 refresh iptables rules I did a reinstall; the reinstall then failed with a
 message that IB0 is not attached to any interface. But the IB0 interface is
 not possible to attach: it is already deleted and not visible in any network
 dialog.
 After creating an interface with the same name, everything is online now.
 My current interface list is the following:
  virsh -r net-list
 Name                 State      Autostart     Persistent
 ----------------------------------------------------------
 ;vdsmdummy;          active     no            no
 vdsm-cls10G          active     yes           yes
 vdsm-IB0             active     yes           yes
 vdsm-ovirtmgmt       active     yes           yes
 
 On this host IB0 is not attached to any interface; I wondered if it
 should show up in the net list?
 
 I think the GUI does not rename/remove the interface from the DB. Some
 constraint still keeps IB0 in the DB.
 
 
 
 ***
 
  Dr. Arman Khalatyan  eScience -SuperComputing 
  Leibniz-Institut für Astrophysik Potsdam (AIP)
  An der Sternwarte 16, 14482 Potsdam, Germany  
 
 ***
 
 
 On Sun, Dec 28, 2014 at 10:31 AM, Moti Asayag masa...@redhat.com wrote:
 
 
 
 - Original Message -
  From: Arman Khalatyan arm2...@gmail.com
  To: users users@ovirt.org
  Sent: Wednesday, December 24, 2014 1:22:43 PM
  Subject: [ovirt-users] Unable to reinstall hosts after network removal.
 
  Hello,
  I have a little trouble with ovirt 3.5 on CentOS6.6:
  I was removing all networks from all hosts.
 
 Did you use the setup networks dialog from the UI in order to remove
 those networks ?
 Or have you removed those networks from the host directly (where you
  should have used:
 1. virsh net-destroy 'the-network-name'
 2. virsh net-undefine 'the-network-name'
 )
 
 can you report the output of 'virsh -r net-list'  ?
 
  Then, after removing the network from the data center, the hosts became unusable.
 
 What was the host's status prior to removing its networks ? Was it up ?
 
  Every time after reinstall, the host claims that the network is not
  configured, but it is already removed from the network tab in the DC.
 
 What is the missing network name ? Is it 'ovirtmgmt' ?
 
  Where does it get the old configuration from? The old interfaces are also
  restored
  every time on the reinstalled hosts.
 
  The hosts, via vdsm, report their network configuration via the
 'getCapabilities' verb
 of vdsm. You can try running it on the host:
 
 vdsClient -s 0 getVdsCaps
 
  and examine the nics / networks / bridges / vlans / bonds elements.
 
  Which DB table is in charge of dc-networks?
 
 
  The retrieved information from vdsm is reported to the 'vds_interface' table.
 The dc networks are stored in 'networks' table and networks attached to
 clusters are stored in network_cluster table.
 
  I wouldn't recommend deleting entries from the tables directly.
  There are
  certain constraints which shouldn't be violated, e.g. the management
  network
  'ovirtmgmt' is blocked for removal from the engine.
 
  Thanks,
  Arman.
 
 
  ***
  Dr. Arman Khalatyan eScience -SuperComputing Leibniz-Institut für 
 Astrophysik
  Potsdam (AIP) An der Sternwarte 16, 14482 Potsdam, Germany
  ***
 
  ___
  Users mailing list
  Users@ovirt.org
  http://lists.ovirt.org/mailman/listinfo/users
 
 
 
 
 
 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users
 
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] VM failover with ovirt3.5

2014-12-29 Thread Yue, Cong
Thanks and the --vm-status log is as follows:
[root@compute2-2 ~]# hosted-engine --vm-status


--== Host 1 status ==--

Status up-to-date  : True
Hostname   : 10.0.0.94
Host ID: 1
Engine status  : {health: good, vm: up,
detail: up}
Score  : 2400
Local maintenance  : False
Host timestamp : 1008087
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=1008087 (Mon Dec 29 11:25:51 2014)
host-id=1
score=2400
maintenance=False
state=EngineUp


--== Host 2 status ==--

Status up-to-date  : True
Hostname   : 10.0.0.93
Host ID: 2
Engine status  : {reason: vm not running on
this host, health: bad, vm: down, detail: unknown}
Score  : 0
Local maintenance  : True
Host timestamp : 859142
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=859142 (Mon Dec 29 08:25:08 2014)
host-id=2
score=0
maintenance=True
state=LocalMaintenance


--== Host 3 status ==--

Status up-to-date  : True
Hostname   : 10.0.0.92
Host ID: 3
Engine status  : {reason: vm not running on
this host, health: bad, vm: down, detail: unknown}
Score  : 0
Local maintenance  : True
Host timestamp : 853615
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=853615 (Mon Dec 29 08:25:57 2014)
host-id=3
score=0
maintenance=True
state=LocalMaintenance
You have new mail in /var/spool/mail/root
[root@compute2-2 ~]#

Could you please explain how VM failover works inside ovirt? Is there any other 
debug option I can enable to check the problem?

Thanks,
Cong


On 2014/12/29, at 1:39, Artyom Lukianov aluki...@redhat.com wrote:

Can you also provide the output of hosted-engine --vm-status please? Last time
it was useful, because I do not see anything unusual.
Thanks

- Original Message -
From: Cong Yue cong_...@alliedtelesis.com
To: Artyom Lukianov aluki...@redhat.com
Cc: Simone Tiraboschi stira...@redhat.com, users@ovirt.org
Sent: Monday, December 29, 2014 7:15:24 AM
Subject: Re: [ovirt-users] VM failover with ovirt3.5

Also, I changed the maintenance mode to local on another host. But the VM on
this host also cannot be migrated. The logs are as follows.

[root@compute2-2 ~]# hosted-engine --set-maintenance --mode=local
[root@compute2-2 ~]# tail -f /var/log/ovirt-hosted-engine-ha/agent.log
MainThread::INFO::2014-12-28
21:09:04,184::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.94 (id: 1, score: 2400)
MainThread::INFO::2014-12-28
21:09:14,603::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state EngineDown (score: 2400)
MainThread::INFO::2014-12-28
21:09:14,603::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.94 (id: 1, score: 2400)
MainThread::INFO::2014-12-28
21:09:24,903::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state EngineDown (score: 2400)
MainThread::INFO::2014-12-28
21:09:24,904::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.94 (id: 1, score: 2400)
MainThread::INFO::2014-12-28
21:09:35,026::states::437::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(consume)
Engine vm is running on host 10.0.0.94 (id 1)
MainThread::INFO::2014-12-28
21:09:35,236::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state EngineDown (score: 2400)
MainThread::INFO::2014-12-28
21:09:35,236::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.94 (id: 1, score: 2400)
MainThread::INFO::2014-12-28
21:09:45,604::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state EngineDown (score: 2400)
MainThread::INFO::2014-12-28
21:09:45,604::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.94 (id: 1, score: 2400)
MainThread::INFO::2014-12-28
21:09:55,691::state_decorators::124::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(check)
Local maintenance detected
MainThread::INFO::2014-12-28

Re: [ovirt-users] VM failover with ovirt3.5

2014-12-29 Thread Artyom Lukianov
I see that the HE VM runs on the host with IP 10.0.0.94, and the two other hosts are in
Local Maintenance state, so the VM will not migrate to either of them. Can you try
disabling local maintenance on all hosts in the HE environment, then enabling local
maintenance on the host where the HE VM runs, and also provide the output of
hosted-engine --vm-status.
Failover works in the following way:
1) if the host running the HE VM has a score lower by 800 than some other host in the HE
environment, the HE VM will migrate to the host with the best score
2) if something happens to the VM (kernel panic, crash of a service...), the agent will
restart the HE VM on another host in the HE environment with a positive score
3) if the host with the HE VM is put into local maintenance, the VM will migrate to another
host with a positive score
Thanks.
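
For example (the same commands as used earlier in this thread; --mode=none
clears the local maintenance flag):

hosted-engine --set-maintenance --mode=none    # on every host, clear local maintenance
hosted-engine --set-maintenance --mode=local   # then on the host running the HE vm
hosted-engine --vm-status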

- Original Message -
From: Cong Yue cong_...@alliedtelesis.com
To: Artyom Lukianov aluki...@redhat.com
Cc: Simone Tiraboschi stira...@redhat.com, users@ovirt.org
Sent: Monday, December 29, 2014 6:30:42 PM
Subject: Re: [ovirt-users] VM failover with ovirt3.5

Thanks and the --vm-status log is as follows:
[root@compute2-2 ~]# hosted-engine --vm-status


--== Host 1 status ==--

Status up-to-date  : True
Hostname   : 10.0.0.94
Host ID: 1
Engine status  : {health: good, vm: up,
detail: up}
Score  : 2400
Local maintenance  : False
Host timestamp : 1008087
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=1008087 (Mon Dec 29 11:25:51 2014)
host-id=1
score=2400
maintenance=False
state=EngineUp


--== Host 2 status ==--

Status up-to-date  : True
Hostname   : 10.0.0.93
Host ID: 2
Engine status  : {reason: vm not running on
this host, health: bad, vm: down, detail: unknown}
Score  : 0
Local maintenance  : True
Host timestamp : 859142
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=859142 (Mon Dec 29 08:25:08 2014)
host-id=2
score=0
maintenance=True
state=LocalMaintenance


--== Host 3 status ==--

Status up-to-date  : True
Hostname   : 10.0.0.92
Host ID: 3
Engine status  : {reason: vm not running on
this host, health: bad, vm: down, detail: unknown}
Score  : 0
Local maintenance  : True
Host timestamp : 853615
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=853615 (Mon Dec 29 08:25:57 2014)
host-id=3
score=0
maintenance=True
state=LocalMaintenance
You have new mail in /var/spool/mail/root
[root@compute2-2 ~]#

Could you please explain how VM failover works inside ovirt? Is there any other 
debug option I can enable to check the problem?

Thanks,
Cong


On 2014/12/29, at 1:39, Artyom Lukianov aluki...@redhat.com wrote:

Can you also provide the output of hosted-engine --vm-status please? Last time
it was useful, because I do not see anything unusual.
Thanks

- Original Message -
From: Cong Yue cong_...@alliedtelesis.com
To: Artyom Lukianov aluki...@redhat.com
Cc: Simone Tiraboschi stira...@redhat.com, users@ovirt.org
Sent: Monday, December 29, 2014 7:15:24 AM
Subject: Re: [ovirt-users] VM failover with ovirt3.5

Also, I changed the maintenance mode to local on another host. But the VM on
this host also cannot be migrated. The logs are as follows.

[root@compute2-2 ~]# hosted-engine --set-maintenance --mode=local
[root@compute2-2 ~]# tail -f /var/log/ovirt-hosted-engine-ha/agent.log
MainThread::INFO::2014-12-28
21:09:04,184::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.94 (id: 1, score: 2400)
MainThread::INFO::2014-12-28
21:09:14,603::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state EngineDown (score: 2400)
MainThread::INFO::2014-12-28
21:09:14,603::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.94 (id: 1, score: 2400)
MainThread::INFO::2014-12-28
21:09:24,903::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state EngineDown (score: 2400)
MainThread::INFO::2014-12-28
21:09:24,904::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.94 (id: 1, score: 2400)
MainThread::INFO::2014-12-28
21:09:35,026::states::437::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(consume)
Engine vm is running on host 

Re: [ovirt-users] 答复: bond mode balance-alb

2014-12-29 Thread Nikolai Sednev
I'd like to add that the floating MAC of balance-tlb (mode 5) and the ARP
negotiation of balance-alb (mode 6) will hurt latency and performance;
these modes should be avoided.
Mode zero (balance-rr) should also be avoided: it is the only mode that
allows a single TCP/IP stream to utilize more than one interface, and hence
creates additional jitter, latency and performance impacts, as
frames/packets are sent and arrive on different interfaces, while it is
preferable to balance per flow. Unless your data center carries
L2-only traffic, I really don't see any use for mode zero.
Cisco routers have a feature called IP CEF, which is turned on by
default and balances traffic per TCP/IP flow instead of per packet; it is
used for better per-flow load-balancing decisions. If it is turned
off, per-packet load balancing is enforced, causing a heavy performance
impact on the router's CPU and memory, as a decision has to be made at the
per-packet level; the higher the bit rate, the harder the impact on the
router's resources, especially for small packets.
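
A hedged illustration of a per-flow bonding setup on EL6 (the device name and
the LACP choice are assumptions - adjust to your NICs and switch):

# /etc/sysconfig/network-scripts/ifcfg-bond0
DEVICE=bond0
ONBOOT=yes
BOOTPROTO=none
BONDING_OPTS="mode=802.3ad miimon=100 xmit_hash_policy=layer2+3"

xmit_hash_policy=layer2+3 keeps each flow on a single slave, so packets of one
TCP/IP stream are never reordered across interfaces.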


Thanks in advance. 

Best regards, 
Nikolai 
 
Nikolai Sednev 
Senior Quality Engineer at Compute team 
Red Hat Israel 
34 Jerusalem Road, 
Ra'anana, Israel 43501 

Tel: +972 9 7692043 
Mobile: +972 52 7342734 
Email: nsed...@redhat.com 
IRC: nsednev 

- Original Message -

From: users-requ...@ovirt.org 
To: users@ovirt.org 
Sent: Monday, December 29, 2014 6:53:59 AM 
Subject: Users Digest, Vol 39, Issue 163 

Send Users mailing list submissions to 
users@ovirt.org 

To subscribe or unsubscribe via the World Wide Web, visit 
http://lists.ovirt.org/mailman/listinfo/users 
or, via email, send a message with subject or body 'help' to 
users-requ...@ovirt.org 

You can reach the person managing the list at 
users-ow...@ovirt.org 

When replying, please edit your Subject line so it is more specific 
than Re: Contents of Users digest... 


Today's Topics: 

1. Re: Problem after update ovirt to 3.5 (Juan Jose) 
2. Re: 答复: bond mode balance-alb (Dan Kenigsberg) 
3. Re: VM failover with ovirt3.5 (Yue, Cong) 


-- 

Message: 1 
Date: Sun, 28 Dec 2014 20:08:37 +0100 
From: Juan Jose jj197...@gmail.com 
To: Simone Tiraboschi stira...@redhat.com 
Cc: users@ovirt.org users@ovirt.org 
Subject: Re: [ovirt-users] Problem after update ovirt to 3.5 
Message-ID: 
cadre9wytndmpnsyjjzxa3zbykzhyb5da03wq17dtlfubbta...@mail.gmail.com 
Content-Type: text/plain; charset=utf-8 

Many thanks Simone, 

Juanjo. 

On Tue, Dec 16, 2014 at 1:48 PM, Simone Tiraboschi stira...@redhat.com 
wrote: 

 
 
 - Original Message - 
  From: Juan Jose jj197...@gmail.com 
  To: Yedidyah Bar David d...@redhat.com, sbona...@redhat.com 
  Cc: users@ovirt.org 
  Sent: Tuesday, December 16, 2014 1:03:17 PM 
  Subject: Re: [ovirt-users] Problem after update ovirt to 3.5 
  
  Hello everybody, 
  
  It was the firewall: after upgrading my engine, the NFS configuration had 
  disappeared. I configured it again as Red Hat says and now it works 
  properly again. 
  
  Many thanks again for the indications. 
 
  We already had a patch for it [1]; 
  it will be released next month with oVirt 3.5.1 
 
 [1] http://gerrit.ovirt.org/#/c/32874/ 
 
  Juanjo. 
  
  On Mon, Dec 15, 2014 at 2:32 PM, Yedidyah Bar David  d...@redhat.com  
  wrote: 
  
  
  - Original Message - 
   From: Juan Jose  jj197...@gmail.com  
   To: users@ovirt.org 
   Sent: Monday, December 15, 2014 3:17:15 PM 
   Subject: [ovirt-users] Problem after update ovirt to 3.5 
   
   Hello everybody, 
   
   After upgrading my engine to oVirt 3.5, I also upgraded one of my hosts 
   to oVirt 3.5. After that, everything seemed to have gone well, apparently. 
   
   But within a few seconds my ISO domain is disconnected and it is impossible 
   to Activate it. I'm attaching my engine.log. The error below is shown each 
   time I try to Activate the ISO domain. Before the upgrade it was working 
   without problems: 
   
   2014-12-15 13:25:07,607 ERROR 
   [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
   (org.ovirt.thread.pool-8-thread-5) [460733dd] Correlation ID: null, 
 Call 
   Stack: null, Custom Event ID: -1, Message: Failed to connect Host 
 host1 to 
   the Storage Domains ISO_DOMAIN. 
   2014-12-15 13:25:07,608 INFO 
   
 [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] 
   (org.ovirt.thread.pool-8-thread-5) [460733dd] FINISH, 
   ConnectStorageServerVDSCommand, return: 
   {81c0a853-715c-4478-a812-6a74808fc482=477}, log id: 3590969e 
   2014-12-15 13:25:07,615 ERROR 
   [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
   (org.ovirt.thread.pool-8-thread-5) [460733dd] Correlation ID: null, 
 Call 
   Stack: null, Custom Event ID: -1, Message: The error message 

Re: [ovirt-users] VM failover with ovirt3.5

2014-12-29 Thread Yue, Cong
Thanks for the detailed explanation. Do you mean only the HE VM can fail over? I want
to try it with a VM on any host, to check whether a VM can fail over to
another host automatically, like VMware or XenServer.
I will try as you advised and provide the log for your further advice.

Thanks,
Cong



 On 2014/12/29, at 8:43, Artyom Lukianov aluki...@redhat.com wrote:

 I see that the HE VM runs on the host with IP 10.0.0.94, and the two other hosts are in 
 Local Maintenance state, so the VM will not migrate to either of them. Can you try 
 disabling local maintenance on all hosts in the HE environment, then enabling 
 local maintenance on the host where the HE VM runs, and also provide the output of 
 hosted-engine --vm-status.
 Failover works in the following way:
 1) if the host running the HE VM has a score lower by 800 than some other host in the 
 HE environment, the HE VM will migrate to the host with the best score
 2) if something happens to the VM (kernel panic, crash of a service...), the agent will 
 restart the HE VM on another host in the HE environment with a positive score
 3) if the host with the HE VM is put into local maintenance, the VM will migrate to 
 another host with a positive score
 Thanks.

 - Original Message -
 From: Cong Yue cong_...@alliedtelesis.com
 To: Artyom Lukianov aluki...@redhat.com
 Cc: Simone Tiraboschi stira...@redhat.com, users@ovirt.org
 Sent: Monday, December 29, 2014 6:30:42 PM
 Subject: Re: [ovirt-users] VM failover with ovirt3.5

 Thanks and the --vm-status log is as follows:
 [root@compute2-2 ~]# hosted-engine --vm-status


 --== Host 1 status ==--

 Status up-to-date  : True
 Hostname   : 10.0.0.94
 Host ID: 1
 Engine status  : {health: good, vm: up,
 detail: up}
 Score  : 2400
 Local maintenance  : False
 Host timestamp : 1008087
 Extra metadata (valid at timestamp):
 metadata_parse_version=1
 metadata_feature_version=1
 timestamp=1008087 (Mon Dec 29 11:25:51 2014)
 host-id=1
 score=2400
 maintenance=False
 state=EngineUp


 --== Host 2 status ==--

 Status up-to-date  : True
 Hostname   : 10.0.0.93
 Host ID: 2
 Engine status  : {reason: vm not running on
 this host, health: bad, vm: down, detail: unknown}
 Score  : 0
 Local maintenance  : True
 Host timestamp : 859142
 Extra metadata (valid at timestamp):
 metadata_parse_version=1
 metadata_feature_version=1
 timestamp=859142 (Mon Dec 29 08:25:08 2014)
 host-id=2
 score=0
 maintenance=True
 state=LocalMaintenance


 --== Host 3 status ==--

 Status up-to-date  : True
 Hostname   : 10.0.0.92
 Host ID: 3
 Engine status  : {reason: vm not running on
 this host, health: bad, vm: down, detail: unknown}
 Score  : 0
 Local maintenance  : True
 Host timestamp : 853615
 Extra metadata (valid at timestamp):
 metadata_parse_version=1
 metadata_feature_version=1
 timestamp=853615 (Mon Dec 29 08:25:57 2014)
 host-id=3
 score=0
 maintenance=True
 state=LocalMaintenance
 You have new mail in /var/spool/mail/root
 [root@compute2-2 ~]#

 Could you please explain how VM failover works inside ovirt? Is there any 
 other debug option I can enable to check the problem?

 Thanks,
 Cong


 On 2014/12/29, at 1:39, Artyom Lukianov aluki...@redhat.com wrote:

 Can you also provide the output of hosted-engine --vm-status please? Last 
 time it was useful, because I do not see anything unusual.
 Thanks

 - Original Message -
 From: Cong Yue cong_...@alliedtelesis.com
 To: Artyom Lukianov aluki...@redhat.com
 Cc: Simone Tiraboschi stira...@redhat.com, users@ovirt.org
 Sent: Monday, December 29, 2014 7:15:24 AM
 Subject: Re: [ovirt-users] VM failover with ovirt3.5

 Also, I changed the maintenance mode to local on another host. But the VM on 
 this host also cannot be migrated. The logs are as follows.

 [root@compute2-2 ~]# hosted-engine --set-maintenance --mode=local
 [root@compute2-2 ~]# tail -f /var/log/ovirt-hosted-engine-ha/agent.log
 MainThread::INFO::2014-12-28
 21:09:04,184::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
 Best remote host 10.0.0.94 (id: 1, score: 2400)
 MainThread::INFO::2014-12-28
 21:09:14,603::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
 Current state EngineDown (score: 2400)
 MainThread::INFO::2014-12-28
 21:09:14,603::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
 Best remote host 10.0.0.94 (id: 1, score: 2400)
 MainThread::INFO::2014-12-28
 

Re: [ovirt-users] VM failover with ovirt3.5

2014-12-29 Thread Nikolai Sednev
Hi, 
Your guest VM has to be defined as Highly Available: 

Highly Available 

Select this check box if the virtual machine is to be highly available. For 
example, in cases of host maintenance or failure, the virtual machine is 
automatically moved to or re-launched on another host. If the host is manually 
shut down by the system administrator, the virtual machine is not automatically 
moved to another host. 
Note that this option is unavailable if the Migration Options setting in the 
Hosts tab is set to either Allow manual migration only or No migration. For a 
virtual machine to be highly available, it must be possible for the Manager to 
migrate the virtual machine to other available hosts as necessary. 
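
(A hedged sketch with the oVirt Python SDK v3 of setting this flag from a
script - the engine URL, credentials and VM name are placeholders:

from ovirtsdk.api import API
from ovirtsdk.xml import params

api = API(url='https://engine.example.com/api',
          username='admin@internal', password='secret', insecure=True)
vm = api.vms.get(name='myvm')
# mark the VM highly available; priority influences restart order
vm.set_high_availability(params.HighAvailability(enabled=True, priority=1))
vm.update()
api.disconnect()
)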


Thanks in advance. 

Best regards, 
Nikolai 
 
Nikolai Sednev 
Senior Quality Engineer at Compute team 
Red Hat Israel 
34 Jerusalem Road, 
Ra'anana, Israel 43501 

Tel: +972 9 7692043 
Mobile: +972 52 7342734 
Email: nsed...@redhat.com 
IRC: nsednev 

- Original Message -

From: users-requ...@ovirt.org 
To: users@ovirt.org 
Sent: Monday, December 29, 2014 7:50:07 PM 
Subject: Users Digest, Vol 39, Issue 169 

Send Users mailing list submissions to 
users@ovirt.org 

To subscribe or unsubscribe via the World Wide Web, visit 
http://lists.ovirt.org/mailman/listinfo/users 
or, via email, send a message with subject or body 'help' to 
users-requ...@ovirt.org 

You can reach the person managing the list at 
users-ow...@ovirt.org 

When replying, please edit your Subject line so it is more specific 
than Re: Contents of Users digest... 


Today's Topics: 

1. Re: VM failover with ovirt3.5 (Yue, Cong) 


-- 

Message: 1 
Date: Mon, 29 Dec 2014 09:49:58 -0800 
From: Yue, Cong cong_...@alliedtelesis.com 
To: Artyom Lukianov aluki...@redhat.com 
Cc: users@ovirt.org users@ovirt.org 
Subject: Re: [ovirt-users] VM failover with ovirt3.5 
Message-ID: 11a51118-8b03-41fe-8fd0-c81ac8897...@alliedtelesis.com 
Content-Type: text/plain; charset=us-ascii 

Thanks for the detailed explanation. Do you mean only the HE VM can fail over? I want 
to try it with a VM on any host, to check whether a VM can fail over to 
another host automatically, like VMware or XenServer. 
I will try as you advised and provide the log for your further advice. 

Thanks, 
Cong 



 On 2014/12/29, at 8:43, Artyom Lukianov aluki...@redhat.com wrote: 
 
 I see that the HE VM runs on the host with IP 10.0.0.94, and the two other hosts are in 
 Local Maintenance state, so the VM will not migrate to either of them. Can you try 
 disabling local maintenance on all hosts in the HE environment, then enabling 
 local maintenance on the host where the HE VM runs, and also provide the output of 
 hosted-engine --vm-status. 
 Failover works in the following way: 
 1) if the host running the HE VM has a score lower by 800 than some other host in the 
 HE environment, the HE VM will migrate to the host with the best score 
 2) if something happens to the VM (kernel panic, crash of a service...), the agent will 
 restart the HE VM on another host in the HE environment with a positive score 
 3) if the host with the HE VM is put into local maintenance, the VM will migrate to 
 another host with a positive score 
 Thanks. 
 
 - Original Message - 
 From: Cong Yue cong_...@alliedtelesis.com 
 To: Artyom Lukianov aluki...@redhat.com 
 Cc: Simone Tiraboschi stira...@redhat.com, users@ovirt.org 
 Sent: Monday, December 29, 2014 6:30:42 PM 
 Subject: Re: [ovirt-users] VM failover with ovirt3.5 
 
 Thanks and the --vm-status log is as follows: 
 [root@compute2-2 ~]# hosted-engine --vm-status 
 
 
 --== Host 1 status ==-- 
 
 Status up-to-date : True 
 Hostname : 10.0.0.94 
 Host ID : 1 
 Engine status : {health: good, vm: up, 
 detail: up} 
 Score : 2400 
 Local maintenance : False 
 Host timestamp : 1008087 
 Extra metadata (valid at timestamp): 
 metadata_parse_version=1 
 metadata_feature_version=1 
 timestamp=1008087 (Mon Dec 29 11:25:51 2014) 
 host-id=1 
 score=2400 
 maintenance=False 
 state=EngineUp 
 
 
 --== Host 2 status ==-- 
 
 Status up-to-date : True 
 Hostname : 10.0.0.93 
 Host ID : 2 
 Engine status : {reason: vm not running on 
 this host, health: bad, vm: down, detail: unknown} 
 Score : 0 
 Local maintenance : True 
 Host timestamp : 859142 
 Extra metadata (valid at timestamp): 
 metadata_parse_version=1 
 metadata_feature_version=1 
 timestamp=859142 (Mon Dec 29 08:25:08 2014) 
 host-id=2 
 score=0 
 maintenance=True 
 state=LocalMaintenance 
 
 
 --== Host 3 status ==-- 
 
 Status up-to-date : True 
 Hostname : 10.0.0.92 
 Host ID : 3 
 Engine status : {reason: vm not running on 
 this host, health: bad, vm: down, detail: unknown} 
 Score : 0 
 Local maintenance : True 
 Host timestamp : 853615 
 Extra metadata (valid at timestamp): 
 metadata_parse_version=1 
 metadata_feature_version=1 
 timestamp=853615 (Mon Dec 29 08:25:57 2014) 
 host-id=3 
 score=0 
 maintenance=True 
 state=LocalMaintenance 
 You have new mail in 

Re: [ovirt-users] Users Digest, Vol 39, Issue 171

2014-12-29 Thread Nikolai Sednev
Hi, 
Can you please provide the engine.log from /var/log/ovirt-engine/engine.log and 
try the following: 


1. Revert all three hosts to maintenance-mode=none. 
2. Check that the engine is up and running. 
3. Put one of the hosts that is not running the engine into local maintenance 
mode. 
4. Put the host that is running the engine into local maintenance mode. 
5. Check that the engine migrated to the one and only remaining host, which had 
not been put into maintenance mode at all. 

Can you also provide your engine version - is it 3.4 something? 


Thanks in advance. 

Best regards, 
Nikolai 
 
Nikolai Sednev 
Senior Quality Engineer at Compute team 
Red Hat Israel 
34 Jerusalem Road, 
Ra'anana, Israel 43501 

Tel: +972 9 7692043 
Mobile: +972 52 7342734 
Email: nsed...@redhat.com 
IRC: nsednev 

- Original Message -

From: users-requ...@ovirt.org 
To: users@ovirt.org 
Sent: Monday, December 29, 2014 8:29:36 PM 
Subject: Users Digest, Vol 39, Issue 171 

Send Users mailing list submissions to 
users@ovirt.org 

To subscribe or unsubscribe via the World Wide Web, visit 
http://lists.ovirt.org/mailman/listinfo/users 
or, via email, send a message with subject or body 'help' to 
users-requ...@ovirt.org 

You can reach the person managing the list at 
users-ow...@ovirt.org 

When replying, please edit your Subject line so it is more specific 
than Re: Contents of Users digest... 


Today's Topics: 

1. Re: VM failover with ovirt3.5 (Yue, Cong) 


-- 

Message: 1 
Date: Mon, 29 Dec 2014 10:29:04 -0800 
From: Yue, Cong cong_...@alliedtelesis.com 
To: Artyom Lukianov aluki...@redhat.com 
Cc: users@ovirt.org users@ovirt.org 
Subject: Re: [ovirt-users] VM failover with ovirt3.5 
Message-ID: 21d302cf-ad6f-4e8c-a373-52adac1c1...@alliedtelesis.com 
Content-Type: text/plain; charset=utf-8 

I disabled local maintenance mode for all hosts, and then set only the host 
where the HE VM resides to local maintenance mode. The logs are as follows. During 
the migration of the HE VM, a fatal error was shown. By the way, live migration also 
does not work for the HE VM, while other VMs can live-migrate. 

--- 
[root@compute2-3 ~]# hosted-engine --set-maintenance --mode=local 
You have new mail in /var/spool/mail/root 
[root@compute2-3 ~]# tail -f /var/log/ovirt-hosted-engine-ha/agent.log 
MainThread::INFO::2014-12-29 
13:16:12,435::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
 
Best remote host 10.0.0.92 (id: 3, score: 2400) 
MainThread::INFO::2014-12-29 
13:16:22,711::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
 
Current state EngineUp (score: 2400) 
MainThread::INFO::2014-12-29 
13:16:22,711::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
 
Best remote host 10.0.0.92 (id: 3, score: 2400) 
MainThread::INFO::2014-12-29 
13:16:32,978::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
 
Current state EngineUp (score: 2400) 
MainThread::INFO::2014-12-29 
13:16:32,978::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
 
Best remote host 10.0.0.93 (id: 2, score: 2400) 
MainThread::INFO::2014-12-29 
13:16:43,272::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
 
Current state EngineUp (score: 2400) 
MainThread::INFO::2014-12-29 
13:16:43,272::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
 
Best remote host 10.0.0.93 (id: 2, score: 2400) 
MainThread::INFO::2014-12-29 
13:16:53,316::states::394::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(consume)
 
Engine vm running on localhost 
MainThread::INFO::2014-12-29 
13:16:53,562::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
 
Current state EngineUp (score: 2400) 
MainThread::INFO::2014-12-29 
13:16:53,562::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
 
Best remote host 10.0.0.93 (id: 2, score: 2400) 
MainThread::INFO::2014-12-29 
13:17:03,600::state_decorators::124::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(check)
 
Local maintenance detected 
MainThread::INFO::2014-12-29 
13:17:03,611::brokerlink::111::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(notify)
 
Trying: notify time=1419877023.61 type=state_transition 
detail=EngineUp-LocalMaintenanceMigrateVm hostname='compute2-3' 
MainThread::INFO::2014-12-29 
13:17:03,672::brokerlink::120::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(notify)
 
Success, was notification of state_transition 
(EngineUp-LocalMaintenanceMigrateVm) sent? sent 
MainThread::INFO::2014-12-29 
13:17:03,911::states::208::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(score)
 
Score 

Re: [ovirt-users] 答复: bond mode balance-alb

2014-12-29 Thread Jorick Astrego

On 12/29/2014 12:56 AM, Dan Kenigsberg wrote:
 On Fri, Dec 26, 2014 at 12:39:45PM -0600, Blaster wrote:
 On 12/23/2014 2:55 AM, Dan Kenigsberg wrote:
 Bug 1094842 - Bonding modes 0, 5 and 6 should be avoided for VM networks
 https://bugzilla.redhat.com/show_bug.cgi?id=1094842#c0

Sorry, no mode 0. So only mode 2 or 3 for your environment

Kind regards,

Jorick



Met vriendelijke groet, With kind regards,

Jorick Astrego

Netbulae Virtualization Experts 



Tel: 053 20 30 270  i...@netbulae.euStaalsteden 4-3A
KvK 08198180
Fax: 053 20 30 271  www.netbulae.eu 7547 TA Enschede
BTW NL821234584B01



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] HostedEngine Deployment Woes

2014-12-29 Thread Mikola Rose

Hi List Members;

I have been struggling with deploying the oVirt hosted engine. I keep running
into a timeout during the Misc configuration stage. Any suggestions on how I
can troubleshoot this?

Redhat 2.6.32-504.3.3.el6.x86_64

Installed Packages
ovirt-host-deploy.noarch                        1.2.5-1.el6ev    @rhel-6-server-rhevm-3.4-rpms
ovirt-host-deploy-java.noarch                   1.2.5-1.el6ev    @rhel-6-server-rhevm-3.4-rpms
ovirt-hosted-engine-ha.noarch                   1.1.6-3.el6ev    @rhel-6-server-rhevm-3.4-rpms
ovirt-hosted-engine-setup.noarch                1.1.5-1.el6ev    @rhel-6-server-rhevm-3.4-rpms
rhevm-setup-plugin-ovirt-engine.noarch          3.4.4-2.2.el6ev  @rhel-6-server-rhevm-3.4-rpms
rhevm-setup-plugin-ovirt-engine-common.noarch   3.4.4-2.2.el6ev  @rhel-6-server-rhevm-3.4-rpms


  Please confirm installation settings (Yes, No)[No]: Yes
[ INFO  ] Stage: Transaction setup
[ INFO  ] Stage: Misc configuration
[ INFO  ] Stage: Package installation
[ INFO  ] Stage: Misc configuration
[ INFO  ] Configuring libvirt
[ INFO  ] Configuring VDSM
[ INFO  ] Starting vdsmd
[ INFO  ] Waiting for VDSM hardware info
[ INFO  ] Waiting for VDSM hardware info
[ INFO  ] Connecting Storage Domain
[ INFO  ] Connecting Storage Pool
[ INFO  ] Verifying sanlock lockspace initialization
[ INFO  ] sanlock lockspace already initialized
[ INFO  ] sanlock metadata already initialized
[ INFO  ] Creating VM Image
[ INFO  ] Disconnecting Storage Pool
[ INFO  ] Start monitoring domain
[ ERROR ] Failed to execute stage 'Misc configuration': The read operation 
timed out
[ INFO  ] Stage: Clean up
[ INFO  ] Generating answer file '/etc/ovirt-hosted-engine/answers.conf'
[ INFO  ] Stage: Pre-termination
[ INFO  ] Stage: Termination



2014-12-29 14:53:41 DEBUG 
otopi.plugins.ovirt_hosted_engine_setup.sanlock.lockspace lockspace._misc:133 
Ensuring lease for lockspace hosted-engine, host id 1 is acquired (file: 
/rhev/data-center/mnt/192.168.0.75:_Volumes_Raid1/8094d528-7aa2-4c28-839f-73d7c8bcfebb/ha_agent/hosted-engine.lockspace)
2014-12-29 14:53:41 INFO 
otopi.plugins.ovirt_hosted_engine_setup.sanlock.lockspace lockspace._misc:144 
sanlock lockspace already initialized
2014-12-29 14:53:41 INFO 
otopi.plugins.ovirt_hosted_engine_setup.sanlock.lockspace lockspace._misc:157 
sanlock metadata already initialized
2014-12-29 14:53:41 DEBUG otopi.context context._executeMethod:138 Stage misc 
METHOD otopi.plugins.ovirt_hosted_engine_setup.vm.image.Plugin._misc
2014-12-29 14:53:41 INFO otopi.plugins.ovirt_hosted_engine_setup.vm.image 
image._misc:162 Creating VM Image
2014-12-29 14:53:41 DEBUG otopi.plugins.ovirt_hosted_engine_setup.vm.image 
image._misc:163 createVolume
2014-12-29 14:53:42 DEBUG otopi.plugins.ovirt_hosted_engine_setup.vm.image 
image._misc:184 Created volume d8e7eed4-c763-4b3d-8a71-35f2d692a73d, request 
was:
- image: 9043e535-ea94-41f8-98df-6fdbfeb107c3
- volume: e6a9291d-ac21-4a95-b43c-0d6e552baaa2
2014-12-29 14:53:42 DEBUG otopi.ovirt_hosted_engine_setup.tasks tasks.wait:48 
Waiting for existing tasks to complete
2014-12-29 14:53:43 DEBUG otopi.ovirt_hosted_engine_setup.tasks tasks.wait:48 
Waiting for existing tasks to complete
2014-12-29 14:53:43 DEBUG otopi.context context._executeMethod:138 Stage misc 
METHOD otopi.plugins.ovirt_hosted_engine_setup.vm.boot_disk.Plugin._misc
2014-12-29 14:53:43 DEBUG otopi.context context._executeMethod:144 condition 
False
2014-12-29 14:53:43 DEBUG otopi.context context._executeMethod:138 Stage misc 
METHOD 
otopi.plugins.ovirt_hosted_engine_setup.storage.storage.Plugin._disconnect_pool
2014-12-29 14:53:43 INFO 
otopi.plugins.ovirt_hosted_engine_setup.storage.storage 
storage._disconnect_pool:971 Disconnecting Storage Pool
2014-12-29 14:53:43 DEBUG otopi.ovirt_hosted_engine_setup.tasks tasks.wait:48 
Waiting for existing tasks to complete
2014-12-29 14:53:43 DEBUG 
otopi.plugins.ovirt_hosted_engine_setup.storage.storage storage._spmStop:602 
spmStop
2014-12-29 14:53:43 DEBUG 
otopi.plugins.ovirt_hosted_engine_setup.storage.storage storage._spmStop:611
2014-12-29 14:53:43 DEBUG 
otopi.plugins.ovirt_hosted_engine_setup.storage.storage 

Re: [ovirt-users] Unable to reinstall hosts after network removal.

2014-12-29 Thread Moti Asayag


- Original Message -
 From: Arman Khalatyan arm2...@gmail.com
 To: Moti Asayag masa...@redhat.com
 Cc: users users@ovirt.org
 Sent: Monday, December 29, 2014 3:59:46 PM
 Subject: Re: [ovirt-users] Unable to reinstall hosts after network removal.
 
  My setup has 3 networks: 1x IB, 1x 10Gbit + 1Gbit for ovirt-management.
  The ovirt network does not have any trouble, it is always there.
  I was trying to rename or remove my IB network, which was used for VM
  migrations.
  I was using the web GUI, which removed the IB0 network without problems. After
  removal the hosts were ok. 

Have you checked the Save network configuration option in the setup networks
dialog (iirc it should have been checked by default)?

Could you also attach /var/log/ovirt-engine/engine.log from the engine server,
and /var/log/vdsm/vdsm.log and /var/log/vdsm/supervdsm.log from the node, so
we can see which request was sent to vdsm and its result?

 Then I put them into maintenance mode. To refresh the iptables rules I did a
 reinstall. The reinstall then failed with a message that IB0 is not attached
 to any interface. But it is not possible to attach the IB0 interface: it is
 already deleted and not visible in any network dialog.
 After creating an interface with the same name, everything is online now.
 My current network list is the following:
  # virsh -r net-list
  Name                 State     Autostart   Persistent
  -----------------------------------------------------
  ;vdsmdummy;          active    no          no
  vdsm-cls10G          active    yes         yes
  vdsm-IB0             active    yes         yes
  vdsm-ovirtmgmt       active    yes         yes
 
 On this host IB0 is not attached to any interface; I wondered whether it
 should show up in the net list?
 
 I think the GUI does not rename/remove the network from the DB. Some
 constraint still keeps IB0 in the DB.
 
 
 
 ***
 
 Dr. Arman Khalatyan eScience -SuperComputing Leibniz-Institut für
 Astrophysik Potsdam (AIP) An der Sternwarte 16, 14482 Potsdam, Germany
 
 ***
 
 
 On Sun, Dec 28, 2014 at 10:31 AM, Moti Asayag masa...@redhat.com wrote:
 
 
 
  - Original Message -
   From: Arman Khalatyan arm2...@gmail.com
   To: users users@ovirt.org
   Sent: Wednesday, December 24, 2014 1:22:43 PM
   Subject: [ovirt-users] Unable to reinstall hosts after network removal.
  
   Hello,
   I have a little trouble with ovirt 3.5 on CentOS 6.6:
   I removed all networks from all hosts.
 
  Did you use the 'Setup Networks' dialog from the UI in order to remove those
  networks?
  Or did you remove those networks from the host directly? In that case you
  should have used (see the example right below):
  1. virsh net-destroy 'the-network-name'
  2. virsh net-undefine 'the-network-name'
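 
  For example, a minimal removal sequence (a sketch; the libvirt network name
  'vdsm-IB0' is an assumption taken from your net-list output, and on a vdsm
  host virsh may ask for credentials):
 
   virsh net-destroy vdsm-IB0      # stop the running network
   virsh net-undefine vdsm-IB0     # drop its persistent definition
   virsh -r net-list --all         # verify it is gone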
 
  Can you report the output of 'virsh -r net-list'?
 
   Then, after removing the network from the data center, the hosts became unusable.
 
  What was the host's status prior to removing its networks? Was it up?
 
   Every time after a reinstall the host claims that the network is not
   configured, but it is already removed from the network tab in the DC.
 
  What is the missing network name? Is it 'ovirtmgmt'?
 
   Where does it get the old configuration from? The old interfaces are also
   restored every time on the reinstalled hosts.
 
  The hosts report their network configuration via vdsm's 'getCapabilities'
  verb. You can try running it on the host:
 
  vdsClient -s 0 getVdsCaps
 
  and examine the nics / networks / bridges / vlans / bonds elements.
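 
  To narrow the output down to the network-related parts, something like this
  may help (a sketch; 'vdsClient -s 0 getVdsCaps' is the real verb, while the
  grep patterns are only a convenience and may need adjusting to the exact
  output format):
 
   vdsClient -s 0 getVdsCaps | grep -A3 -E "networks|bridges|nics|bonds|vlans"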
 
   Which DB table is in charge of dc-networks?
  
 
  The information retrieved from vdsm is reported to the 'vds_interface'
  table. The DC networks are stored in the 'networks' table, and networks
  attached to clusters are stored in the 'network_cluster' table.
 
  I wouldn't recommend deleting entries from these tables directly. There are
  certain constraints which shouldn't be violated, e.g. the management
  network 'ovirtmgmt' is blocked from removal by the engine.
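 
  If you just want to inspect what the engine currently knows, read-only
  queries are safe enough (a sketch; the database name 'engine' and local
  postgres access are assumptions based on a default engine setup):
 
   su - postgres -c "psql engine -c 'SELECT id, name FROM networks;'"
   su - postgres -c "psql engine -c 'SELECT * FROM network_cluster;'"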
 
   Thanks,
   Arman.
  
  
   ***
   Dr. Arman Khalatyan eScience -SuperComputing Leibniz-Institut für
  Astrophysik
   Potsdam (AIP) An der Sternwarte 16, 14482 Potsdam, Germany
   ***
  
   ___
   Users mailing list
   Users@ovirt.org
   http://lists.ovirt.org/mailman/listinfo/users
  
 
 
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] VM failover with ovirt3.5

2014-12-29 Thread Artyom Lukianov
The HE VM is migrated only by ovirt-ha-agent and not by the engine, but the
FatalError is more interesting; can you provide the vdsm.log for this one,
please?
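
One way to pull the relevant lines out on the host (the log path is the vdsm
default; the grep pattern is only a guess at the error text):

  grep -B2 -A10 -i "fatal" /var/log/vdsm/vdsm.log | tail -80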

- Original Message -
From: Cong Yue cong_...@alliedtelesis.com
To: Artyom Lukianov aluki...@redhat.com
Cc: Simone Tiraboschi stira...@redhat.com, users@ovirt.org
Sent: Monday, December 29, 2014 8:29:04 PM
Subject: Re: [ovirt-users] VM failover with ovirt3.5

I disabled local maintenance mode for all hosts, and then set only the host
where the HE VM runs to local maintenance mode. The logs are as follows. During
the migration of the HE VM, a fatal error shows up. By the way, the HE VM also
cannot do live migration, while other VMs can.

---
[root@compute2-3 ~]# hosted-engine --set-maintenance --mode=local
You have new mail in /var/spool/mail/root
[root@compute2-3 ~]# tail -f /var/log/ovirt-hosted-engine-ha/agent.log
MainThread::INFO::2014-12-29
13:16:12,435::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.92 (id: 3, score: 2400)
MainThread::INFO::2014-12-29
13:16:22,711::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state EngineUp (score: 2400)
MainThread::INFO::2014-12-29
13:16:22,711::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.92 (id: 3, score: 2400)
MainThread::INFO::2014-12-29
13:16:32,978::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state EngineUp (score: 2400)
MainThread::INFO::2014-12-29
13:16:32,978::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.93 (id: 2, score: 2400)
MainThread::INFO::2014-12-29
13:16:43,272::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state EngineUp (score: 2400)
MainThread::INFO::2014-12-29
13:16:43,272::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.93 (id: 2, score: 2400)
MainThread::INFO::2014-12-29
13:16:53,316::states::394::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(consume)
Engine vm running on localhost
MainThread::INFO::2014-12-29
13:16:53,562::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state EngineUp (score: 2400)
MainThread::INFO::2014-12-29
13:16:53,562::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.93 (id: 2, score: 2400)
MainThread::INFO::2014-12-29
13:17:03,600::state_decorators::124::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(check)
Local maintenance detected
MainThread::INFO::2014-12-29
13:17:03,611::brokerlink::111::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(notify)
Trying: notify time=1419877023.61 type=state_transition
detail=EngineUp-LocalMaintenanceMigrateVm hostname='compute2-3'
MainThread::INFO::2014-12-29
13:17:03,672::brokerlink::120::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(notify)
Success, was notification of state_transition
(EngineUp-LocalMaintenanceMigrateVm) sent? sent
MainThread::INFO::2014-12-29
13:17:03,911::states::208::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(score)
Score is 0 due to local maintenance mode
MainThread::INFO::2014-12-29
13:17:03,912::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state LocalMaintenanceMigrateVm (score: 0)
MainThread::INFO::2014-12-29
13:17:03,912::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.93 (id: 2, score: 2400)
MainThread::INFO::2014-12-29
13:17:03,960::brokerlink::111::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(notify)
Trying: notify time=1419877023.96 type=state_transition
detail=LocalMaintenanceMigrateVm-EngineMigratingAway
hostname='compute2-3'
MainThread::INFO::2014-12-29
13:17:03,980::brokerlink::120::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(notify)
Success, was notification of state_transition
(LocalMaintenanceMigrateVm-EngineMigratingAway) sent? sent
MainThread::INFO::2014-12-29
13:17:04,218::states::66::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_penalize_memory)
Penalizing score by 400 due to low free memory
MainThread::INFO::2014-12-29
13:17:04,218::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state EngineMigratingAway (score: 2000)
MainThread::INFO::2014-12-29
13:17:04,219::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.93 (id: 2, score: 2400)
MainThread::ERROR::2014-12-29

Re: [ovirt-users] VM failover with ovirt3.5

2014-12-29 Thread Artyom Lukianov
If you want to enable failover for some VM, you can go to the VM's properties,
under 'High Availability', and enable the 'Highly Available' checkbox. But the
HE VM is already automatically highly available.
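
The same can also be done through the REST API (a sketch; the engine FQDN,
the credentials and the VM id are placeholders to fill in):

  curl -k -u 'admin@internal:password' -X PUT \
    -H 'Content-Type: application/xml' \
    -d '<vm><high_availability><enabled>true</enabled><priority>50</priority></high_availability></vm>' \
    'https://engine.example.com/api/vms/<vm-id>'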

- Original Message -
From: Cong Yue cong_...@alliedtelesis.com
To: Artyom Lukianov aluki...@redhat.com
Cc: Simone Tiraboschi stira...@redhat.com, users@ovirt.org
Sent: Monday, December 29, 2014 7:49:58 PM
Subject: Re: [ovirt-users] VM failover with ovirt3.5

Thanks for the detailed explanation. Do you mean only the HE VM can fail over?
I want to test a VM on an arbitrary host to check whether it can fail over to
another host automatically, like in VMware or XenServer. I will try as you
advised and provide the log for your further advice.

Thanks,
Cong



 On 2014/12/29, at 8:43, Artyom Lukianov aluki...@redhat.com wrote:

 I see that the HE VM runs on the host with IP 10.0.0.94, and the two other
 hosts are in Local Maintenance state, so the VM will not migrate to either of
 them. Can you try disabling local maintenance on all hosts in the HE
 environment and then enabling local maintenance on the host where the HE VM
 runs (example commands right below), and also provide the output of
 hosted-engine --vm-status.
 Failover works in the following way:
 1) if the host running the HE VM has a score lower by 800 than some other
 host in the HE environment, the HE VM will migrate to the host with the
 best score
 2) if something happens to the VM (kernel panic, crash of a service...), the
 agent will restart the HE VM on another host in the HE environment with a
 positive score
 3) if the host with the HE VM is put into local maintenance, the VM will
 migrate to another host with a positive score
 Thanks.
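 
 For example (maintenance modes 'none' and 'local' as accepted by the 3.5
 hosted-engine tool; run on the hosts indicated in the comments):
 
  # on every host: clear local maintenance
  hosted-engine --set-maintenance --mode=none
  # only on the host currently running the HE VM:
  hosted-engine --set-maintenance --mode=local
  # then watch the result:
  hosted-engine --vm-status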

 - Original Message -
 From: Cong Yue cong_...@alliedtelesis.com
 To: Artyom Lukianov aluki...@redhat.com
 Cc: Simone Tiraboschi stira...@redhat.com, users@ovirt.org
 Sent: Monday, December 29, 2014 6:30:42 PM
 Subject: Re: [ovirt-users] VM failover with ovirt3.5

 Thanks and the --vm-status log is as follows:
 [root@compute2-2 ~]# hosted-engine --vm-status


 --== Host 1 status ==--

 Status up-to-date  : True
 Hostname   : 10.0.0.94
 Host ID: 1
 Engine status  : {"health": "good", "vm": "up",
 "detail": "up"}
 Score  : 2400
 Local maintenance  : False
 Host timestamp : 1008087
 Extra metadata (valid at timestamp):
 metadata_parse_version=1
 metadata_feature_version=1
 timestamp=1008087 (Mon Dec 29 11:25:51 2014)
 host-id=1
 score=2400
 maintenance=False
 state=EngineUp


 --== Host 2 status ==--

 Status up-to-date  : True
 Hostname   : 10.0.0.93
 Host ID: 2
 Engine status  : {"reason": "vm not running on
 this host", "health": "bad", "vm": "down", "detail": "unknown"}
 Score  : 0
 Local maintenance  : True
 Host timestamp : 859142
 Extra metadata (valid at timestamp):
 metadata_parse_version=1
 metadata_feature_version=1
 timestamp=859142 (Mon Dec 29 08:25:08 2014)
 host-id=2
 score=0
 maintenance=True
 state=LocalMaintenance


 --== Host 3 status ==--

 Status up-to-date  : True
 Hostname   : 10.0.0.92
 Host ID: 3
 Engine status  : {"reason": "vm not running on
 this host", "health": "bad", "vm": "down", "detail": "unknown"}
 Score  : 0
 Local maintenance  : True
 Host timestamp : 853615
 Extra metadata (valid at timestamp):
 metadata_parse_version=1
 metadata_feature_version=1
 timestamp=853615 (Mon Dec 29 08:25:57 2014)
 host-id=3
 score=0
 maintenance=True
 state=LocalMaintenance
 You have new mail in /var/spool/mail/root
 [root@compute2-2 ~]#

 Could you please explain how VM failover works inside ovirt? Is there any 
 other debug option I can enable to check the problem?

 Thanks,
 Cong


 On 2014/12/29, at 1:39, Artyom Lukianov aluki...@redhat.com wrote:

 Can you also provide the output of hosted-engine --vm-status please? Last
 time it was useful, and I do not see anything unusual so far.
 Thanks

 - Original Message -
 From: Cong Yue cong_...@alliedtelesis.com
 To: Artyom Lukianov aluki...@redhat.com
 Cc: Simone Tiraboschi stira...@redhat.com, users@ovirt.org
 Sent: Monday, December 29, 2014 7:15:24 AM
 Subject: Re: [ovirt-users] VM failover with ovirt3.5

 I also changed the maintenance mode to local on another host. But the VM on
 this host cannot be migrated either. The logs are as follows.

 [root@compute2-2 ~]# hosted-engine --set-maintenance --mode=local
 [root@compute2-2 ~]# tail -f /var/log/ovirt-hosted-engine-ha/agent.log
 MainThread::INFO::2014-12-28
 21:09:04,184::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
 Best