Re: [ovirt-users] Backup and Restore of VMs

2014-12-28 Thread Liron Aravot
Hi All,
I've uploaded an example script (oVirt python-sdk) that contains examples of
the steps
described at http://www.ovirt.org/Features/Backup-Restore_API_Integration

Let me know how it works out for you -
https://github.com/laravot/backuprestoreapi
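
For reference, a minimal sketch of the first step on that page (taking a live
snapshot with the oVirt 3.x Python SDK); the engine URL, credentials and VM
name below are placeholders, and the repository above has the complete flow:

import time

from ovirtsdk.api import API
from ovirtsdk.xml import params

# Placeholder connection details -- adapt to your engine.
api = API(url='https://engine.example.com/api',
          username='admin@internal',
          password='PASSWORD',
          insecure=True)  # skips CA validation; fine for a lab, not production

vm = api.vms.get(name='myvm')  # placeholder VM name

# Step 1 of the feature page: take a live snapshot to get a consistent image.
snap = vm.snapshots.add(params.Snapshot(description='backup-snapshot'))

# Wait for the snapshot to leave the 'locked' state before attaching its
# disks to the backup VM (the next steps on the feature page).
while vm.snapshots.get(id=snap.get_id()).get_snapshot_status() == 'locked':
    time.sleep(5)

api.disconnect()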

 

- Original Message -
 From: Liron Aravot lara...@redhat.com
 To: Soeren Malchow soeren.malc...@mcon.net
 Cc: Vered Volansky ve...@redhat.com, Users@ovirt.org
 Sent: Wednesday, December 24, 2014 12:20:36 PM
 Subject: Re: [ovirt-users] Backup and Restore of VMs
 
 Hi guys,
 I'm currently working on a complete example of the steps that appear in -
 http://www.ovirt.org/Features/Backup-Restore_API_Integration
 
 I'll share it with you as soon as I'm done with it.
 
 thanks,
 Liron
 
 - Original Message -
  From: Soeren Malchow soeren.malc...@mcon.net
  To: Vered Volansky ve...@redhat.com
  Cc: Users@ovirt.org
  Sent: Wednesday, December 24, 2014 11:58:01 AM
  Subject: Re: [ovirt-users] Backup and Restore of VMs
  
  Dear Vered,
  
  at some point we have to start, and right now we are getting closer; even
  with the documentation it is sometimes hard to find the correct place to
  start, especially without specific examples (and I have decades of
  experience now)
  
  with the backup plugin that came from Lucas Vandroux we have a starting
  point right now, and we will continue from here and try to work with him on
  this.
  
  Regards
  Soeren
  
  
  -Original Message-
  From: users-boun...@ovirt.org [mailto:users-boun...@ovirt.org] On Behalf Of
  Blaster
  Sent: Tuesday, December 23, 2014 5:49 PM
  To: Vered Volansky
  Cc: Users@ovirt.org
  Subject: Re: [ovirt-users] Backup and Restore of VMs
  
  Sounds like a chicken-and-egg problem.
  
  
  
  On 12/23/2014 12:03 AM, Vered Volansky wrote:
   Well, real world is community...
   Maybe change the name of the thread in order to make this clearer for
   someone from the community who might be able to help.
   Maybe something like:
   Request for sharing real world example of VM backups.
  
   We obviously use it as part of development, but I don't have what you're
   asking for.
   If you try it yourself and stumble onto questions in the process, please
   ask the list and we'll do our best to help.
  
   Best Regards,
   Vered
  
   - Original Message -
   From: Blaster blas...@556nato.com
   To: Vered Volansky ve...@redhat.com
   Cc: Users@ovirt.org
   Sent: Tuesday, December 23, 2014 5:56:13 AM
   Subject: Re: [ovirt-users] Backup and Restore of VMs
  
  
   Vered,
  
   It sounds like Soeren already knows about that page.  His issue, as well
   as the issue of others judging by comments on here, seems to be that
   there aren’t any real-world examples of how the API is used.
  
  
  
   On Dec 22, 2014, at 9:26 AM, Vered Volansky ve...@redhat.com wrote:
  
   Please take a look at:
   http://www.ovirt.org/Features/Backup-Restore_API_Integration
  
   Specifically:
   http://www.ovirt.org/Features/Backup-Restore_API_Integration#Full_VM
   _Backups
  
   Regards,
   Vered
  
   - Original Message -
   From: Soeren Malchow soeren.malc...@mcon.net
   To: Users@ovirt.org
   Sent: Friday, December 19, 2014 1:44:38 PM
   Subject: [ovirt-users] Backup and Restore of VMs
  
  
  
   Dear all,
  
  
  
   ovirt: 3.5
  
   gluster: 3.6.1
  
   OS: CentOS 7 (except ovirt hosted engine = centos 6.6)
  
  
  
   I've spent quite a while researching backup and restore for VMs; so far
   I have come up with this as a start for us:
  
  
  
   - API calls to create scheduled snapshots of virtual machines. This
   is for short-term storage and to guard against accidental deletion
   within the VM, but not against storage corruption.
  
  
  
   - Since we are using a gluster backend, gluster snapshots. I haven't
   been able to really test this so far, since the LV needs to be thin
   provisioned and we did not do that in the setup.
  
  
  
   For the API calls we have the problem that we cannot find any
   existing scripts or anything like that to do those snapshots (and
   I/we are not developer enough to write them ourselves).
  
  
  
   As additional information, we have a ZFS based storage with
   deduplication that we use for other backup purposes, which does a
   great job especially because of the deduplication (we can store
   generations of backups without problems). This storage can be NFS
   exported and used as a backup repository.
  
  
  
   Are there any backup and restore procedures you guys are using for
   backup and restore that work for you, and can you point me in the
   right direction?

   I am a little bit lost right now and would appreciate any help.
  
  
  
   Regards
  
   Soeren
  
   ___
   Users mailing list
   Users@ovirt.org
   http://lists.ovirt.org/mailman/listinfo/users
  
   ___
   Users mailing list
   Users@ovirt.org
   http://lists.ovirt.org/mailman/listinfo/users
  
   

Re: [ovirt-users] Stucked VM Migration and now only run once

2014-12-28 Thread Oved Ourfali
Can you provide the engine and host logs?
Also, please specify when the migration happened, and in addition when you
tried to run the VM.
It will help us understand the flow in the logs.

Thanks,
Oved

- Original Message -
 From: Kurt Woitschach kurt.woitschach-muel...@tngtech.com
 To: users@ovirt.org
 Sent: Saturday, December 27, 2014 9:22:35 PM
 Subject: [ovirt-users] Stucked VM Migration and now only run once
 
 Hi all,
 
 we have a problem with a VM that can only be started in run-once mode.
 
 After a temporary network disconnect on the hosting node, the VM (and
 some others) was down. When I tried to start it regularly, it showed a
 currently being migrated status.
 I could only start it with run-once.
 
 Reboot didn't make a change.
 
 Any ideas?
 
 
 Greets
 Kurt
 
 --
 
 
 Kurt Woitschach-Müller  kurt.woitschach-muel...@tngtech.com * +49-1743180076
 TNG Technology Consulting GmbH, Betastr. 13a, 85774 Unterföhring
 Geschäftsführer: Henrik Klagges, Gerhard Müller, Christoph Stock
 Sitz: Unterföhring * Amtsgericht München * HRB 135082
 
 
 
 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users
 
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Unable to reinstall hosts after network removal.

2014-12-28 Thread Moti Asayag


- Original Message -
 From: Arman Khalatyan arm2...@gmail.com
 To: users users@ovirt.org
 Sent: Wednesday, December 24, 2014 1:22:43 PM
 Subject: [ovirt-users] Unable to reinstall hosts after network removal.
 
 Hello,
 I have a little trouble with ovirt 3.5 on CentOS 6.6:
 I removed all networks from all hosts.

Did you use the setup networks dialog from the UI in order to remove those
networks?
Or have you removed those networks from the host directly (where you should
have used:
1. virsh net-destroy 'the-network-name'
2. virsh net-undefine 'the-network-name'
)

Can you report the output of 'virsh -r net-list'?
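
If scripting the cleanup is easier, the libvirt Python bindings can do the
same thing -- a sketch, assuming the libvirt-python package on the host and a
placeholder network name:

import libvirt

conn = libvirt.open('qemu:///system')      # local libvirt, read-write
for net in conn.listAllNetworks():         # same view as 'virsh -r net-list --all'
    print('%s active=%s' % (net.name(), net.isActive()))

net = conn.networkLookupByName('the-network-name')  # placeholder name
if net.isActive():
    net.destroy()                          # virsh net-destroy
net.undefine()                             # virsh net-undefine
conn.close()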

 Then, after removing the network from the data center, the hosts became unusable.

What was the host's status prior to removing its networks ? Was it up ?

 Every time after reinstall, the host claims that the network is not
 configured, but it is already removed from the network tab in the DC.

What is the missing network name ? Is it 'ovirtmgmt' ?

 Where does it get the old configuration from? The old interfaces are also
 restored every time on the reinstalled hosts.

The hosts report their network configuration via the 'getCapabilities' verb
of vdsm. You can try running it on the host:

vdsClient -s 0 getVdsCaps 

and examine the nics / networks / bridges / vlans / bonds elements.
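
If Python is handier than vdsClient, the same report can be pulled with vdsm's
bundled vdscli helper -- a sketch, assuming it is run on the host itself and
that the response follows vdsm's usual {'status': ..., 'info': ...} shape:

from vdsm import vdscli

server = vdscli.connect()              # connects to the local vdsm
caps = server.getVdsCapabilities()     # the verb behind vdsClient's getVdsCaps
info = caps['info']
for section in ('nics', 'networks', 'bridges', 'vlans', 'bondings'):
    print('%s: %s' % (section, sorted(info.get(section, {}))))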

 Which DB table is in charge of dc-networks?
 

The information retrieved from vdsm is stored in the 'vds_interface' table.
The DC networks are stored in the 'networks' table, and networks attached to
clusters are stored in the 'network_cluster' table.

I wouldn't recommend deleting entries from these tables directly. There are
certain constraints which shouldn't be violated, e.g. the management network
'ovirtmgmt' is blocked for removal from the engine.
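
If you only want to look, a read-only query is harmless -- a sketch, assuming
local access to the 'engine' PostgreSQL database (the credentials normally
live under /etc/ovirt-engine/engine.conf.d/) and a placeholder password:

import psycopg2

conn = psycopg2.connect(dbname='engine', user='engine',
                        password='PASSWORD', host='localhost')
cur = conn.cursor()
cur.execute('SELECT id, name FROM networks')  # the DC-level networks table
for row in cur.fetchall():
    print(row)
cur.close()
conn.close()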

 Thanks,
 Arman.
 
 
 ***
 Dr. Arman Khalatyan eScience -SuperComputing Leibniz-Institut für Astrophysik
 Potsdam (AIP) An der Sternwarte 16, 14482 Potsdam, Germany
 ***
 
 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users
 
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] VM failover with ovirt3.5

2014-12-28 Thread Artyom Lukianov
I see that you set local maintenance on host3, which does not have the engine
VM on it, so there is nothing to migrate from this host.
If you set local maintenance on host1, the VM will migrate to another host
with a positive score.
Thanks

- Original Message -
From: Cong Yue cong_...@alliedtelesis.com
To: Simone Tiraboschi stira...@redhat.com
Cc: users@ovirt.org
Sent: Saturday, December 27, 2014 6:58:32 PM
Subject: Re: [ovirt-users] VM failover with ovirt3.5

Hi

I had a try with hosted-engine --set-maintenance --mode=local on
compute2-1, which is host 3 in my cluster. From the log, it shows
maintenance mode is detected, but migration does not happen.

The logs are as follows. Is there any other config I need to check?

[root@compute2-1 vdsm]# hosted-engine --vm-status


--== Host 1 status ==--

Status up-to-date  : True
Hostname   : 10.0.0.94
Host ID: 1
Engine status  : {"health": "good", "vm": "up",
"detail": "up"}
Score  : 2400
Local maintenance  : False
Host timestamp : 836296
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=836296 (Sat Dec 27 11:42:39 2014)
host-id=1
score=2400
maintenance=False
state=EngineUp


--== Host 2 status ==--

Status up-to-date  : True
Hostname   : 10.0.0.93
Host ID: 2
Engine status  : {"reason": "vm not running on
this host", "health": "bad", "vm": "down", "detail": "unknown"}
Score  : 2400
Local maintenance  : False
Host timestamp : 687358
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=687358 (Sat Dec 27 08:42:04 2014)
host-id=2
score=2400
maintenance=False
state=EngineDown


--== Host 3 status ==--

Status up-to-date  : True
Hostname   : 10.0.0.92
Host ID: 3
Engine status  : {"reason": "vm not running on
this host", "health": "bad", "vm": "down", "detail": "unknown"}
Score  : 0
Local maintenance  : True
Host timestamp : 681827
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=681827 (Sat Dec 27 08:42:40 2014)
host-id=3
score=0
maintenance=True
state=LocalMaintenance
[root@compute2-1 vdsm]# tail -f /var/log/ovirt-hosted-engine-ha/agent.log
MainThread::INFO::2014-12-27
08:42:41,109::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.94 (id: 1, score: 2400)
MainThread::INFO::2014-12-27
08:42:51,198::state_decorators::124::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(check)
Local maintenance detected
MainThread::INFO::2014-12-27
08:42:51,420::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state LocalMaintenance (score: 0)
MainThread::INFO::2014-12-27
08:42:51,420::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.94 (id: 1, score: 2400)
MainThread::INFO::2014-12-27
08:43:01,507::state_decorators::124::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(check)
Local maintenance detected
MainThread::INFO::2014-12-27
08:43:01,773::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state LocalMaintenance (score: 0)
MainThread::INFO::2014-12-27
08:43:01,773::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.94 (id: 1, score: 2400)
MainThread::INFO::2014-12-27
08:43:11,859::state_decorators::124::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(check)
Local maintenance detected
MainThread::INFO::2014-12-27
08:43:12,072::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state LocalMaintenance (score: 0)
MainThread::INFO::2014-12-27
08:43:12,072::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.94 (id: 1, score: 2400)



[root@compute2-3 ~]# tail -f /var/log/ovirt-hosted-engine-ha/agent.log
MainThread::INFO::2014-12-27
11:36:28,855::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.93 (id: 2, score: 2400)
MainThread::INFO::2014-12-27
11:36:39,130::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state EngineUp (score: 2400)
MainThread::INFO::2014-12-27
11:36:39,130::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.93 (id: 2, score: 2400)
MainThread::INFO::2014-12-27

Re: [ovirt-users] Stucked VM Migration and now only run once

2014-12-28 Thread Arik Hadas
You wrote that reboot didn't help, but is it the host that you rebooted? Because
an engine restart will release the migration's lock and you'll be able to run the
VM normally for sure.

Since you managed to run the VM using run-once while it was locked, I guess
you're using ovirt 3.3.1/3.3.2/3.3.3/3.3.4, right?
We have fixed several flows in which the migration's lock was not released since
then, so I suggest upgrading the system.
If it happens with any other version, please provide the logs Oved mentioned 
and specify which version of engine you're using.

Thanks,
Arik

- Original Message -
 Can you provide the engine and host logs?
 Also, please specify when the migration happened, and in addition when you
 tried to run the VM.
 It will help us understand the flow in the logs.
 
 Thanks,
 Oved
 
 - Original Message -
  From: Kurt Woitschach kurt.woitschach-muel...@tngtech.com
  To: users@ovirt.org
  Sent: Saturday, December 27, 2014 9:22:35 PM
  Subject: [ovirt-users] Stucked VM Migration and now only run once
  
  Hi all,
  
  we have a problem with a VM that can only be started in run-once mode.
  
  After a temporary network disconnect on the hosting node, the VM (and
  some others) was down. When I tried to start it regularly, it showed a
  currently being migrated status.
  I could only start it with run-once.
  
  Reboot didn't make a change.
  
  Any ideas?
  
  
  Greets
  Kurt
  
  --
  
  
  Kurt Woitschach-Müller  kurt.woitschach-muel...@tngtech.com *
  +49-1743180076
  TNG Technology Consulting GmbH, Betastr. 13a, 85774 Unterföhring
  Geschäftsführer: Henrik Klagges, Gerhard Müller, Christoph Stock
  Sitz: Unterföhring * Amtsgericht München * HRB 135082
  
  
  
  ___
  Users mailing list
  Users@ovirt.org
  http://lists.ovirt.org/mailman/listinfo/users
  
 
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] feedback-on-oVirt-engine-3.5.0.1-1.el6

2014-12-28 Thread Itamar Heim

On 12/26/2014 05:33 AM, bingozhou2013 wrote:

Dear Sir,
When I try to install oVirt-engine 3.5 on CentOS 6.6, the below
error is shown:
-- Finished Dependency Resolution
Error: Package: ovirt-engine-backend-3.5.0.1-1.el6.noarch (ovirt-3.5)
Requires: novnc
  You could try using --skip-broken to work around the problem
  You could try running: rpm -Va --nofiles --nodigest
I have added the EPEL source and installed ovirt-release35.rpm, but it
still shows "Requires: novnc". Please help me to check this. Thank you
very much!

bingozhou2013


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users



http://lists.ovirt.org/pipermail/users/2014-December/030275.html
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Problem after update ovirt to 3.5

2014-12-28 Thread Juan Jose
Many thanks Simone,

Juanjo.

On Tue, Dec 16, 2014 at 1:48 PM, Simone Tiraboschi stira...@redhat.com
wrote:



 - Original Message -
  From: Juan Jose jj197...@gmail.com
  To: Yedidyah Bar David d...@redhat.com, sbona...@redhat.com
  Cc: users@ovirt.org
  Sent: Tuesday, December 16, 2014 1:03:17 PM
  Subject: Re: [ovirt-users] Problem after update ovirt to 3.5
 
  Hello everybody,
 
  It was the firewall; after upgrading my engine the NFS configuration had
  disappeared. I have configured it again as Red Hat says and now it works
  properly again.
 
  Many thanks again for the indications.

 We already had a patch for it [1];
 it will be released next month with oVirt 3.5.1.

 [1] http://gerrit.ovirt.org/#/c/32874/

  Juanjo.
 
  On Mon, Dec 15, 2014 at 2:32 PM, Yedidyah Bar David  d...@redhat.com 
  wrote:
 
 
  - Original Message -
   From: Juan Jose  jj197...@gmail.com 
   To: users@ovirt.org
   Sent: Monday, December 15, 2014 3:17:15 PM
   Subject: [ovirt-users] Problem after update ovirt to 3.5
  
   Hello everybody,
  
   After upgrading my engine to oVirt 3.5, I also upgraded one of my hosts
   to oVirt 3.5. After that it seemed that all had gone well, apparently.

   But within a few seconds my ISO domain is disconnected and it is impossible
   to Activate it. I'm attaching my engine.log. The below error is shown each
   time I try to Activate the ISO domain. Before the upgrade it was working
   without problems:

   2014-12-15 13:25:07,607 ERROR
   [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
   (org.ovirt.thread.pool-8-thread-5) [460733dd] Correlation ID: null,
 Call
   Stack: null, Custom Event ID: -1, Message: Failed to connect Host
 host1 to
   the Storage Domains ISO_DOMAIN.
   2014-12-15 13:25:07,608 INFO
  
 [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand]
   (org.ovirt.thread.pool-8-thread-5) [460733dd] FINISH,
   ConnectStorageServerVDSCommand, return:
   {81c0a853-715c-4478-a812-6a74808fc482=477}, log id: 3590969e
   2014-12-15 13:25:07,615 ERROR
   [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
   (org.ovirt.thread.pool-8-thread-5) [460733dd] Correlation ID: null,
 Call
   Stack: null, Custom Event ID: -1, Message: The error message for
 connection
   ovirt-engine.siee.local:/var/lib/exports/iso-20140303082312 returned by
   VDSM
   was: Problem while trying to mount target
   2014-12-15 13:25:07,616 ERROR
   [org.ovirt.engine.core.bll.storage.NFSStorageHelper]
   (org.ovirt.thread.pool-8-thread-5) [460733dd] The connection with
 details
   ovirt-engine.siee.local:/var/lib/exports/iso-20140303082312 failed
 because
   of error code 477 and error message is: problem while trying to mount
   target
  
   If any other information is required, please tell me.
 
  Is the ISO domain on the engine host?
 
  Please check there iptables and /etc/exports, /etc/exports.d.
 
  Please post the setup (upgrade) log, check /var/log/ovirt-engine/setup.
 
  Thanks,
  --
  Didi
 
  ___
  Users mailing list
  Users@ovirt.org
  http://lists.ovirt.org/mailman/listinfo/users
 

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Re: bond mode balance-alb

2014-12-28 Thread Dan Kenigsberg
On Fri, Dec 26, 2014 at 12:39:45PM -0600, Blaster wrote:
 On 12/23/2014 2:55 AM, Dan Kenigsberg wrote:
 Bug 1094842 - Bonding modes 0, 5 and 6 should be avoided for VM networks
 https://bugzilla.redhat.com/show_bug.cgi?id=1094842#c0
 
 Dan,
 
 What is bad about these modes that oVirt can't use them?

I can only quote jpirko's words from the link above:

Do not use tlb or alb in bridge, never! It does not work, that's it. The reason
is it mangles source macs in xmit frames and arps. When it is possible, just
use mode 4 (lacp). That should be always possible because all enterprise
switches support that. Generally, for 99% of use cases, you *should* use
mode 4. There is no reason to use other modes.

 
 I just tested mode 4, and LACP with Fedora 20 appears to not be
 compatible with the LAG mode on my Dell 2824.
 
 Would there be any issues with bringing two NICs into the VM and doing
 balance-alb at the guest level?
 
 
 
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] VM failover with ovirt3.5

2014-12-28 Thread Yue, Cong
I checked it again and confirmed there is one guest VM running on top of
this host. The log is as follows:

[root@compute2-1 vdsm]# ps -ef | grep qemu
qemu      2983   846  0 Dec19 ?        00:00:00 [supervdsmServer] <defunct>
root      5489  3053  0 20:49 pts/0    00:00:00 grep --color=auto qemu
qemu 26128 1  0 Dec19 ?01:09:19 /usr/libexec/qemu-kvm
-name testvm2 -S -machine rhel6.5.0,accel=kvm,usb=off -cpu Nehalem -m
500 -realtime mlock=off -smp 1,maxcpus=16,sockets=16,cores=1,threads=1
-uuid e46bca87-4df5-4287-844b-90a26fccef33 -smbios
type=1,manufacturer=oVirt,product=oVirt
Node,version=7-0.1406.el7.centos.2.5,serial=4C4C4544-0030-3310-8059-B8C04F585231,uuid=e46bca87-4df5-4287-844b-90a26fccef33
-no-user-config -nodefaults -chardev
socket,id=charmonitor,path=/var/lib/libvirt/qemu/testvm2.monitor,server,nowait
-mon chardev=charmonitor,id=monitor,mode=control -rtc
base=2014-12-19T20:18:01,driftfix=slew -no-kvm-pit-reinjection
-no-kvm-pit-reinjection
-no-hpet -no-shutdown -boot strict=on -device
piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device
virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x4 -device
virtio-serial-pci,id=virtio-serial0,max_ports=16,bus=pci.0,addr=0x5
-drive if=none,id=drive-ide0-1-0,readonly=on,format=raw,serial=
-device ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0
-drive 
file=/rhev/data-center/0002-0002-0002-0002-01e4/1dc71096-27c4-4256-b2ac-bd7265525c69/images/b4b5426b-95e3-41af-b286-da245891cdaf/0f688d49-97e3-4f1d-84d4-ac1432d903b3,if=none,id=drive-virtio-disk0,format=qcow2,serial=b4b5426b-95e3-41af-b286-da245891cdaf,cache=none,werror=stop,rerror=stop,aio=threads
-device 
virtio-blk-pci,scsi=off,bus=pci.0,addr=0x6,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1
-netdev tap,fd=26,id=hostnet0,vhost=on,vhostfd=27 -device
virtio-net-pci,netdev=hostnet0,id=net0,mac=00:1a:4a:db:94:01,bus=pci.0,addr=0x3
-chardev 
socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/e46bca87-4df5-4287-844b-90a26fccef33.com.redhat.rhevm.vdsm,server,nowait
-device 
virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.rhevm.vdsm
-chardev 
socket,id=charchannel1,path=/var/lib/libvirt/qemu/channels/e46bca87-4df5-4287-844b-90a26fccef33.org.qemu.guest_agent.0,server,nowait
-device 
virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=channel1,name=org.qemu.guest_agent.0
-chardev spicevmc,id=charchannel2,name=vdagent -device
virtserialport,bus=virtio-serial0.0,nr=3,chardev=charchannel2,id=channel2,name=com.redhat.spice.0
-spice 
tls-port=5900,addr=10.0.0.92,x509-dir=/etc/pki/vdsm/libvirt-spice,tls-channel=main,tls-channel=display,tls-channel=inputs,tls-channel=cursor,tls-channel=playback,tls-channel=record,tls-channel=smartcard,tls-channel=usbredir,seamless-migration=on
-k en-us -vga qxl -global qxl-vga.ram_size=67108864 -global
qxl-vga.vram_size=33554432 -incoming tcp:[::]:49152 -device
virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x7
[root@compute2-1 vdsm]# tail -f /var/log/ovirt-hosted-engine-ha/agent.log
MainThread::INFO::2014-12-28
20:49:27,315::state_decorators::124::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(check)
Local maintenance detected
MainThread::INFO::2014-12-28
20:49:27,646::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state LocalMaintenance (score: 0)
MainThread::INFO::2014-12-28
20:49:27,646::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.94 (id: 1, score: 2400)
MainThread::INFO::2014-12-28
20:49:37,732::state_decorators::124::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(check)
Local maintenance detected
MainThread::INFO::2014-12-28
20:49:37,961::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state LocalMaintenance (score: 0)
MainThread::INFO::2014-12-28
20:49:37,961::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.94 (id: 1, score: 2400)
MainThread::INFO::2014-12-28
20:49:48,048::state_decorators::124::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(check)
Local maintenance detected
MainThread::INFO::2014-12-28
20:49:48,319::states::208::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(score)
Score is 0 due to local maintenance mode
MainThread::INFO::2014-12-28
20:49:48,319::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state LocalMaintenance (score: 0)
MainThread::INFO::2014-12-28
20:49:48,319::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.94 (id: 1, score: 2400)

Thanks,
Cong


On 2014/12/28, at 3:46, Artyom Lukianov