Re: [ovirt-users] Storage QOS

2015-07-16 Thread Matthew Lagoe
Nope, no luck :(

 

I've been trying to get it working for a few months and even opened a case
(01376573), where I was told it was broken; that's why I was asking whether
anyone else has been able to get it working. It seems strange to me that the
feature would be completely broken, since it was one of the main features of a
recent release. I imagine there is some workaround out there, somewhere.

 

From: Roy Golan [mailto:rgo...@redhat.com] 
Sent: Thursday, July 16, 2015 03:58 AM
To: Matthew Lagoe
Subject: Re: [ovirt-users] Storage QOS

 

On 07/16/2015 01:49 AM, Matthew Lagoe wrote:

Has anyone been able to get storage qos to work with 3.5? 

 

I set up a policy with a total of 150 IOPS and a total of 100 MBps, but it's
still unconstrained when I run a bench test at ~1200 IOPS and 800 MBps. Is
there anything else I have to do besides setting the QoS policy in Data
Centers and assigning it to the disk profile?
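One hedged way to check whether the limits actually reached libvirt: if the policy is being applied, the VM's domain XML (from `virsh -r dumpxml <vm>` on the host) should contain an `<iotune>` element per disk. A minimal sketch, using a hypothetical XML sample in place of real virsh output:

```python
import xml.etree.ElementTree as ET

# Hypothetical sample of what `virsh -r dumpxml <vm>` could contain once a
# storage QoS policy (150 IOPS, 100 MiB/s) has reached libvirt. On a real
# host, capture the XML from virsh instead of using this string.
SAMPLE = """
<domain>
  <devices>
    <disk type='file' device='disk'>
      <iotune>
        <total_iops_sec>150</total_iops_sec>
        <total_bytes_sec>104857600</total_bytes_sec>
      </iotune>
    </disk>
  </devices>
</domain>
"""

def iotune_limits(domain_xml):
    """Return one {tag: value} dict per disk; empty dict = no <iotune>."""
    root = ET.fromstring(domain_xml)
    limits = []
    for disk in root.iter("disk"):
        iotune = disk.find("iotune")
        if iotune is None:
            limits.append({})  # no limits reached libvirt for this disk
        else:
            limits.append({el.tag: int(el.text) for el in iotune})
    return limits

print(iotune_limits(SAMPLE))
```

An empty dict for a disk would suggest the engine never pushed the QoS policy down to the host, which points at the engine/VDSM side rather than at qemu.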






___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Sorry for getting to this mail so late. Have you gotten storage QoS working
already?



Re: [ovirt-users] Cannot run specific VM in one node

2015-07-16 Thread Omer Frenkel


- Original Message -
 From: Diego Remolina dijur...@gmail.com
 To: Users@ovirt.org
 Sent: Thursday, July 16, 2015 7:45:43 AM
 Subject: [ovirt-users] Cannot run specific VM in one node
 
 Hi,
 
 Was wondering if I can get some help with this particular situation. I
 have two ovirt cluster nodes. I had a VM running in node2 and tried to
 move it to node1. The move failed and the machine was created and
 paused in both nodes. I tried stopping migration, shutting down the
 machine, etc but none of that worked.
 
 So I decided to simply look for the process number and I killed it for
 that VM. After that, I was not able to get the VM to run in any of the
 nodes, so I rebooted them both.
 
 At this point, the vm will *not* start in node2 at all. When I try to
 start it, it just sits there and if I do:
 
 virsh -r list
 
 from the command line, the output says the vm state is shut off.
 
 I am able to use Run Once to fire up the VM on node 1, but I cannot
 migrate it to node2.
 
 How can I clear this problematic state for node 2?

Please attach the engine and VDSM logs from the time of the failure.

 
 Thanks,
 
 Diego


Re: [ovirt-users] Problem to start VMs on ovirt

2015-07-16 Thread Omer Frenkel


- Original Message -
 From: Miguel Angel Costas migacos2...@gmail.com
 To: users@ovirt.org
 Sent: Monday, July 13, 2015 3:00:33 PM
 Subject: [ovirt-users] Problem to start VMs on ovirt
 
 Dear all,
 
 I have installed the latest version of oVirt (on CentOS 6), and every time
 I try to start a VM this message appears: "VM lalala is down with error.
 Exit message: internal error: Cannot parse sensitivity level in s0."
 
 I searched the Internet and blogs for this error but could not find a
 solution. Could you help me solve the problem?
 

Please attach the engine and VDSM logs from the time of the failure.

 Best Regards
 


[ovirt-users] VM has been paused due to unknown storage error

2015-07-16 Thread Konstantinos Christidis

Hello oVirt users,

I am facing a serious problem with my GlusterFS storage and the virtual
machines that have *bootable* disks on it.


All my VMs that have GlusterFS disks occasionally (1-2 times/hour) become
paused with the following error: "VM vm02.mytld has been paused due to
unknown storage error."


Engine log:
INFO  [org.ovirt.engine.core.vdsbroker.VmAnalyzer]
(DefaultQuartzScheduler_Worker-69) [] VM
'247bb0f3-1a77-44e4-a404-3271eaee94be'(vm02.mytld) moved from 'Up' -->
'Paused'
INFO
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(DefaultQuartzScheduler_Worker-69) [] Correlation ID: null, Call Stack:
null, Custom Event ID: -1, Message: VM vm02.mytld has been paused.
ERROR
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(DefaultQuartzScheduler_Worker-69) [] Correlation ID: null, Call Stack:
null, Custom Event ID: -1, Message: VM vm02.mytld has been paused due to
unknown storage error


My iSCSI VMs, some of which have mounted (non-bootable) disks from the same
GlusterFS storage, do NOT suffer from this issue, as far as I can tell.


My installation (oVirt 3.6 / CentOS 7) is a fairly typical one: a
GlusterFS-enabled cluster with 4 hosts, 2-3 networks, and 6-7 VMs.
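To quantify how often each VM is hit, one option is to scan engine.log for the exact audit message quoted above. A small sketch, using a hypothetical log excerpt modeled on those lines:

```python
import re
from collections import Counter

# Hypothetical engine.log excerpt; on a real setup, read
# /var/log/ovirt-engine/engine.log instead of this string.
LOG = """\
INFO  ... Message: VM vm02.mytld has been paused.
ERROR ... Message: VM vm02.mytld has been paused due to unknown storage error
ERROR ... Message: VM vm03.mytld has been paused due to unknown storage error
"""

PAUSE_RE = re.compile(r"VM (\S+) has been paused due to unknown storage error")

def pause_counts(log_text):
    """Count 'unknown storage error' pauses per VM name."""
    return Counter(m.group(1) for m in PAUSE_RE.finditer(log_text))

print(pause_counts(LOG))
```

A per-VM count makes it easier to see whether only the GlusterFS-booted VMs are affected, as described above.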


Thanks,

K.


[ovirt-users] oVirt 3.6 - deep dive sessions

2015-07-16 Thread Barak Azulay
oVirt 3.6 will be released soon.

Deep-dive sessions on various features and aspects of this release will be
scheduled over the upcoming weeks.
This is your chance to get an early look at this exciting new release.

Stay tuned and join these sessions.

Thanks
Barak Azulay


Re: [ovirt-users] IPV6 on vms

2015-07-16 Thread Dan Kenigsberg
On Thu, Jul 16, 2015 at 12:29:32AM +0200, Adam Popik wrote:
 Hello,
 I have 2 oVirt nodes (v3.5.3, all on CentOS 7) with the oVirt engine on a
 separate machine, a few VLANs, shared storage, etc. The installation is
 mostly standard. Virtual machines with IPv4 only work well, without
 problems. But I have a lot of issues with IPv6 on virtual machines. When I
 power on a virtual machine it works well (IPv4 and IPv6), but when I
 migrate the VM, IPv6 simply stops working. From time to time it works well,
 but I don't know why. My bridges are bonded interfaces with VLANs.


Can you explain what stops working in IPv6? Can you ping6 VMs on the
same bridge from the migrated VM? Can you do the same to VMs on the original
host?
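Those questions amount to a small ping6 test matrix. A trivial sketch that only generates the commands to run from inside the migrated VM (the hostnames are placeholders, not names from this thread):

```python
# Build the ping6 checks suggested above: reach peers on the same bridge on
# the destination host, then peers still on the original host. All hostnames
# below are hypothetical placeholders.
def ping6_commands(targets, count=3):
    """Return one 'ping6 -c N <target>' command string per target."""
    return ["ping6 -c %d %s" % (count, t) for t in targets]

same_bridge = ["vm-a.example", "vm-b.example"]   # VMs on the destination host
original_host = ["vm-c.example"]                 # VMs on the source host
for cmd in ping6_commands(same_bridge + original_host):
    print(cmd)
```

Comparing which of these succeed before and after migration narrows the failure down to either the bridge on the destination host or the path back to the source host.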


 I searched the Internet for this issue and found some information about
 issues with multicast on the bridge (I tried setting a few parameters, but
 without positive results).
 Is there any solution for these IPv6 issues?
 Greetings
 PS. Sorry for my broken English.
 Adam


Re: [ovirt-users] Cannot run specific VM in one node

2015-07-16 Thread Diego Remolina
Hi Joop,

There is currently no split-brain in my Gluster file systems. The
virtualization setup is a two-node hypervisor pair (ysmha01 and ysmha02),
but I have a 3-node Gluster cluster where one node has no bricks
(10.0.1.6 = ysmha01, 10.0.1.7 = ysmha02, and 10.0.1.5 with no bricks) and
only helps establish quorum; see below:

[root@ysmha01 ~]# gluster volume status engine
Status of volume: engine
Gluster process PortOnline  Pid
--
Brick 10.0.1.6:/bricks/she/brick49152   Y   4620
NFS Server on localhost 2049Y   4637
Self-heal Daemon on localhost   N/A Y   4648
NFS Server on 10.0.1.5  N/A N   N/A
Self-heal Daemon on 10.0.1.5N/A Y   14563

Task Status of Volume engine
--
There are no active volume tasks

[root@ysmha01 ~]# gluster volume heal engine info split-brain
Gathering list of split brain entries on volume engine has been successful

Brick 10.0.1.7:/bricks/she/brick
Number of entries: 0

Brick 10.0.1.6:/bricks/she/brick
Number of entries: 0
[root@ysmha01 ~]# gluster volume heal vmstorage info split-brain
Gathering list of split brain entries on volume vmstorage has been successful

Brick 10.0.1.7:/bricks/vmstorage/brick
Number of entries: 0

Brick 10.0.1.6:/bricks/vmstorage/brick
Number of entries: 0
[root@ysmha01 ~]# gluster volume heal export info split-brain
Gathering list of split brain entries on volume export has been successful

Brick 10.0.1.7:/bricks/hdds/brick
Number of entries: 0

Brick 10.0.1.6:/bricks/hdds/brick
Number of entries: 0
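For what it's worth, the per-brick counts above can be tallied with a short script. A sketch that parses `gluster volume heal <vol> info split-brain` output (a sample mirroring the listing above is embedded):

```python
import re

# Sample `gluster volume heal <vol> info split-brain` output, shaped like
# the listings above; on a real host, feed in the command's stdout instead.
OUTPUT = """\
Brick 10.0.1.7:/bricks/she/brick
Number of entries: 0

Brick 10.0.1.6:/bricks/she/brick
Number of entries: 0
"""

def total_split_brain_entries(text):
    """Sum all 'Number of entries' counts across bricks."""
    return sum(int(n) for n in re.findall(r"Number of entries:\s*(\d+)", text))

print(total_split_brain_entries(OUTPUT))  # 0 means no split-brain entries
```

Running this over all three volumes (engine, vmstorage, export) confirms in one number what the listings above show brick by brick.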

Diego

On Thu, Jul 16, 2015 at 8:25 AM, Joop jvdw...@xs4all.nl wrote:
 On 16-7-2015 14:20, Diego Remolina wrote:
 I have two virtualization/storage servers, ysmha01 and ysmha02 running
 Ovirt hosted engine on top of glusterfs storage. I have two Windows
 server vms called ysmad01 and ysmad02. The current problem is that
 ysmad02 will *not* start on ysmha02 any more.

 I might have missed it but did you check for a split-brain situation
 since you're using a 2-node gluster?

 Regards,

 Joop





Re: [ovirt-users] VM has been paused due to unknown storage error

2015-07-16 Thread Amit Aviram
Hi again Konstantinos.

Can you please attach the full VDSM  Engine logs so we can understand the 
reason for your problem?

Thanks.

- Original Message -
From: Konstantinos Christidis kochr...@ekt.gr
To: users@ovirt.org
Sent: Thursday, July 16, 2015 11:29:26 AM
Subject: [ovirt-users] VM has been paused due to unknown storage error

Hello oVirt users,

I am facing a serious problem regarding my GlusterFS storage and virtual 
machines that have *bootable* disks on this storage.

All my VMs that have GlusterFS disks are occasionally (1-2 times/hour) 
becoming paused with the following Error: VM vm02.mytld has been paused 
due to unknown storage error.

Engine Log
INFO  [org.ovirt.engine.core.vdsbroker.VmAnalyzer] 
(DefaultQuartzScheduler_Worker-69) [] VM 
'247bb0f3-1a77-44e4-a404-3271eaee94be'(vm02.mytld) moved from 'Up' -- 
'Paused'
INFO 
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
(DefaultQuartzScheduler_Worker-69) [] Correlation ID: null, Call Stack: 
null, Custom Event ID: -1, Message: VM vm02.mytld has been paused.
ERROR 
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
(DefaultQuartzScheduler_Worker-69) [] Correlation ID: null, Call Stack: 
null, Custom Event ID: -1, Message: VM vm02.mytld has been paused due to 
unknown storage error

My iSCSI VM's, some of which may have mounted (not bootable) disks from 
the same GlusterFS storage, do NOT suffer from this issue AFAIK.

My installation (oVirt 3.6/CentOS 7) is pretty much a typical one, with 
a GlusterFS enabled cluster with 4 hosts, 2-3 networks, and 6-7 VMs..

Thanks,

K.


Re: [ovirt-users] Cannot run specific VM in one node

2015-07-16 Thread Joop
On 16-7-2015 14:20, Diego Remolina wrote:
 I have two virtualization/storage servers, ysmha01 and ysmha02 running
 Ovirt hosted engine on top of glusterfs storage. I have two Windows
 server vms called ysmad01 and ysmad02. The current problem is that
 ysmad02 will *not* start on ysmha02 any more.

I might have missed it but did you check for a split-brain situation
since you're using a 2-node gluster?

Regards,

Joop





[ovirt-users] oVirt in the 16th edition of the International Free Software Forum (FISL)

2015-07-16 Thread Douglas Schilling Landgraf

Report available at:
http://dougsland.livejournal.com/124552.html

--
Cheers
Douglas


Re: [ovirt-users] VM has been paused due to unknown storage error

2015-07-16 Thread m...@ohnewald.net
Check the VDSM logs on your nodes. I bet you'll find something about I/O
errors...


Also check your GlusterFS logs; maybe you'll find some problems there, too.

Mario



Am 16.07.15 um 10:29 schrieb Konstantinos Christidis:

Hello oVirt users,

I am facing a serious problem regarding my GlusterFS storage and virtual
machines that have *bootable* disks on this storage.

All my VMs that have GlusterFS disks are occasionally (1-2 times/hour)
becoming paused with the following Error: VM vm02.mytld has been paused
due to unknown storage error.

Engine Log
INFO  [org.ovirt.engine.core.vdsbroker.VmAnalyzer]
(DefaultQuartzScheduler_Worker-69) [] VM
'247bb0f3-1a77-44e4-a404-3271eaee94be'(vm02.mytld) moved from 'Up' --
'Paused'
INFO
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(DefaultQuartzScheduler_Worker-69) [] Correlation ID: null, Call Stack:
null, Custom Event ID: -1, Message: VM vm02.mytld has been paused.
ERROR
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(DefaultQuartzScheduler_Worker-69) [] Correlation ID: null, Call Stack:
null, Custom Event ID: -1, Message: VM vm02.mytld has been paused due to
unknown storage error

My iSCSI VM's, some of which may have mounted (not bootable) disks from
the same GlusterFS storage, do NOT suffer from this issue AFAIK.

My installation (oVirt 3.6/CentOS 7) is pretty much a typical one, with
a GlusterFS enabled cluster with 4 hosts, 2-3 networks, and 6-7 VMs..

Thanks,

K.


[ovirt-users] Problem with Mac Spoof Filter

2015-07-16 Thread InterNetX - Juergen Gotteswinter
Hi,

it seems like the setting EnableMACAntiSpoofingFilterRules only applies to
the main IP of a VM; additional IP addresses on alias interfaces (eth0:x)
are not included in the generated ebtables ruleset.

Is there any workaround/setting to allow more than one IP per NIC without
completely disabling this filter?
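For illustration only, here is a sketch of what a manually maintained ebtables whitelist for a vNIC with several IPv4 addresses could look like. This is NOT the ruleset oVirt generates, and the vNIC name and addresses are placeholders:

```python
# Build ebtables commands that ACCEPT the listed source IPs on a vNIC and
# DROP any other IPv4 source (spoofed addresses). Illustrative sketch only;
# "vnet0" and the 192.0.2.x addresses are hypothetical placeholders.
def ebtables_whitelist(vnic, ips):
    rules = [
        "ebtables -A FORWARD -i %s -p IPv4 --ip-src %s -j ACCEPT" % (vnic, ip)
        for ip in ips
    ]
    # Anything not whitelisted above falls through to this DROP.
    rules.append("ebtables -A FORWARD -i %s -p IPv4 -j DROP" % vnic)
    return rules

for rule in ebtables_whitelist("vnet0", ["192.0.2.10", "192.0.2.11"]):
    print(rule)
```

Such manual rules would live outside oVirt's management and could be overwritten when the filter regenerates its ruleset, so this is a stopgap at best.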

Thanks,

Juergen


Re: [ovirt-users] Cannot run specific VM in one node

2015-07-16 Thread Diego Remolina
These are the links to the files; if there is a better/preferred way to post
them, let me know:

https://www.dropbox.com/s/yziky6f9nk3e8aw/engine.log.xz?dl=0
https://www.dropbox.com/s/qsweiizwxk37qzg/vdsm.log.4.xz?dl=0

A bit more of an explanation on the infrastructure:

I have two virtualization/storage servers, ysmha01 and ysmha02 running
Ovirt hosted engine on top of glusterfs storage. I have two Windows
server vms called ysmad01 and ysmad02. The current problem is that
ysmad02 will *not* start on ysmha02 any more.


Timeline

My problems started at around 8:30 PM on 7/15/2015 while migrating
everything to ysmha01 after having patched and rebooted the server.

I got things back up at around 10:30 PM after rebooting the servers, etc.,
with the hosted engine running on ysmha02. I got ysmad01 running on
ysmha01, but ysmad02 just would not start at all on ysmha02. I did a
Run Once and set ysmad02 to start on ysmha01, and that works.

When attempting to start or migrate ysmad02 on ysmha02, if I do a
"virsh -r list" on ysmha02, I just see the state as "shut off" and the
VM simply does not run on that hypervisor.
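When watching both hosts, that `virsh -r list` check can be scripted. A sketch that parses the listing (the sample below mirrors the "shut off" state described above; the Id column and HostedEngine row are illustrative):

```python
# Parse `virsh -r list --all` style output and report one VM's state.
# SAMPLE is a hypothetical listing shaped like virsh output; on a real host,
# feed in the actual command output instead.
SAMPLE = """\
 Id    Name                           State
----------------------------------------------------
 1     HostedEngine                   running
 -     ysmad02                        shut off
"""

def vm_state(listing, name):
    """Return the state string for the named VM, or None if not listed."""
    for line in listing.splitlines():
        parts = line.split(None, 2)  # Id, Name, then the rest as the state
        if len(parts) == 3 and parts[1] == name:
            return parts[2].strip()
    return None

print(vm_state(SAMPLE, "ysmad02"))
```

Seeing "shut off" on the target host while the engine believes the VM is starting is exactly the mismatch described above, which is why the engine and VDSM logs are needed to find where the start request dies.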

Diego



On Thu, Jul 16, 2015 at 3:01 AM, Omer Frenkel ofren...@redhat.com wrote:


 - Original Message -
 From: Diego Remolina dijur...@gmail.com
 To: Users@ovirt.org
 Sent: Thursday, July 16, 2015 7:45:43 AM
 Subject: [ovirt-users] Cannot run specific VM in one node

 Hi,

 Was wondering if I can get some help with this particular situation. I
 have two ovirt cluster nodes. I had a VM running in node2 and tried to
 move it to node1. The move failed and the machine was created and
 paused in both nodes. I tried stopping migration, shutting down the
 machine, etc but none of that worked.

 So I decided to simply look for the process number and I killed it for
 that VM. After that, I was not able to get the VM to run in any of the
 nodes, so I rebooted them both.

 At this point, the vm will *not* start in node2 at all. When I try to
 start it, it just sits there and if I do:

 virsh -r list

 from the command line, the output says the vm state is shut off.

 I am able to user Run Once to fire up the VM in node 1, but I cannot
 migrate it to node2.

 How can I clear this problematic state for node 2?

 please attach engine + vdsm logs for the time of the failure


 Thanks,

 Diego


[ovirt-users] reinstall hosted-engine with ovirt 3.5?

2015-07-16 Thread Alastair Neil
Due to a moment of idiocy I accidentally upgraded my hosted-engine VM to
Fedora 22, and now ovirt-engine will not start. I was able to get PostgreSQL
up and running, so I was able to make a backup of the engine. As far as I
know, oVirt 3.5 is not supported on F22, so my options seem limited:

1. Update to the 3.6 prerelease.
2. Reinstall the VM; if I were doing this I would use CentOS 7.


My preference would be a fresh install of the hosted engine. I am guessing
the way to go about this would be to shut down the HE broker and agent
daemons on all the nodes, possibly clean the metadata, and then do a
hosted-engine deploy as though migrating from an external engine.

Can anyone comment on whether this is reasonable?

Thanks,

-Alastair


Re: [ovirt-users] VM has been paused due to unknown storage error

2015-07-16 Thread Konstantinos Christidis

Hello Mario,

On 16/07/2015 4:12 PM, m...@ohnewald.net wrote:
Check your vdsm Logs on your nodes. I bet you find something about I/O 
errors i guess...

Yes, there are many I/O errors:
libvirtEventLoop::INFO::2015-07-16 
22:30:02,237::vm::3609::virt.vm::(onIOError) 
vmId=`bb46929c-0b4e-4f01-868a-7e7638fa943b`::abnormal vm stop device 
virtio-disk0 error eother
libvirtEventLoop::INFO::2015-07-16 
22:30:02,237::vm::4889::virt.vm::(_logGuestCpuStatus) 
vmId=`bb46929c-0b4e-4f01-868a-7e7638fa943b`::CPU stopped: onIOError

Full vdsm log - https://paste.fedoraproject.org/245148/43707759/


and GlusterFS errors:
W [MSGID: 114031] [client-rpc-fops.c:2973:client3_3_lookup_cbk] 
0-distributed_vol-client-0: remote operation failed: Transport endpoint 
is not connected. Path: / (----0001) 
[Transport endpoint is not connected]
W [fuse-bridge.c:2273:fuse_writev_cbk] 0-glusterfs-fuse: 362694: WRITE 
= -1 (Transport endpoint is not connected)



K.



Also check your glusterfs logs. Maybe you can find some problems, too.

Mario



Am 16.07.15 um 10:29 schrieb Konstantinos Christidis:

Hello oVirt users,

I am facing a serious problem regarding my GlusterFS storage and virtual
machines that have *bootable* disks on this storage.

All my VMs that have GlusterFS disks are occasionally (1-2 times/hour)
becoming paused with the following Error: VM vm02.mytld has been paused
due to unknown storage error.

Engine Log
INFO  [org.ovirt.engine.core.vdsbroker.VmAnalyzer]
(DefaultQuartzScheduler_Worker-69) [] VM
'247bb0f3-1a77-44e4-a404-3271eaee94be'(vm02.mytld) moved from 'Up' --
'Paused'
INFO
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(DefaultQuartzScheduler_Worker-69) [] Correlation ID: null, Call Stack:
null, Custom Event ID: -1, Message: VM vm02.mytld has been paused.
ERROR
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(DefaultQuartzScheduler_Worker-69) [] Correlation ID: null, Call Stack:
null, Custom Event ID: -1, Message: VM vm02.mytld has been paused due to
unknown storage error

My iSCSI VM's, some of which may have mounted (not bootable) disks from
the same GlusterFS storage, do NOT suffer from this issue AFAIK.

My installation (oVirt 3.6/CentOS 7) is pretty much a typical one, with
a GlusterFS enabled cluster with 4 hosts, 2-3 networks, and 6-7 VMs..

Thanks,

K.


Re: [ovirt-users] hosted_engine, fencing does not work

2015-07-16 Thread martirosov.d

martirosov.d wrote on 2015-07-14 17:24:

Николаев Алексей wrote on 2015-07-10 10:45:

oVirt no longer supports CentOS 6 for hypervisors; CentOS 6 is supported
only for the oVirt Engine.
I think the problem should first be reproduced on CentOS 7.1. If it
persists, the logs from the hypervisors (/var/log/vdsm/vdsm.log) and the
oVirt Engine log (/var/log/ovirt-engine/engine.log) will be needed.

NOTE: THE LOGS MAY CONTAIN CONFIDENTIAL INFORMATION ABOUT YOUR SYSTEM.

10.07.2015, 03:06, martirosov.d martiroso...@emk.ru:


Hi.

oVirt 3.5

I have two servers (node1 and node2) that are running CentOS 6.6.

1. The engine is on node2. I disconnect the network on node1; after some
time the engine restarts node1 and everything works fine.
2. The engine is on node1. I disconnect the network on node1; the engine
moves to node2, but node1 does not get reset, although attempts are made.

The log messages:

2015-Jul-03, 10:42 Host node1 is not responding. It will stay in
Connecting state for a grace period of 120 seconds and after that an

attempt to fence the host will be issued.
2015-Jul-03, 10:39 Host node1 is not responding. It will stay in
Connecting state for a grace period of 120 seconds and after that an

attempt to fence the host will be issued.
2015-Jul-03, 10:36 Host node1 is not responding. It will stay in
Connecting state for a grace period of 120 seconds and after that an

attempt to fence the host will be issued.
2015-Jul-03, 10:34 User admin@internal logged in.
2015-Jul-03, 10:33 Host node1 became non responsive. It has no power

management configured. Please check the host status, manually reboot
it,
and click Confirm Host Has Been Rebooted
2015-Jul-03, 10:33 Host node2 from cluster emk-cluster was chosen as
a
proxy to execute Status command on Host node1.
2015-Jul-03, 10:33 Host node1 is non responsive.

Manual fencing of node1 works from node2:
# fence_ipmilan -A password -i 10.64.1.103 -l admin -p *** -o status
Getting status of IPMI:10.64.1.103...Chassis power = On
Done

The Power Management test on node2 succeeded in oVirt Manager.

i.e., if I turn off the host on which the engine runs, then after moving to
a healthy host the engine does not fence the problematic host, and the VMs
that were on the problematic host do not migrate.
If the problem is not on the host running the engine, then it works: the
host is fenced and the VMs migrate.



Hello.

I installed CentOS 7.1 on the hypervisors and on the engine, from
ovirt-release35.rpm.

The situation repeated itself: I disconnected the network on node2, which
was running the engine and a test VM.

The engine migrated to node1, but it did not fence node2, and both the test
VM and host node2 remained unavailable.

The error is the same.

The engine and node1 logs are attached.


If I do Confirm 'Host has been Rebooted' on the problematic host, the VMs
then migrate off it; after that, Maintenance -> Activate, and the host is
fenced.
