[ovirt-users] Unable to start vdsm, upgrade 4.0 to 4.1

2019-04-12 Thread Todd Barton
Looking for some help/suggestions to correct an issue I'm having.  I have a 3 
host HA setup running a hosted-engine and gluster storage.  The hosts are 
identical hardware configurations and have been running for several years very 
solidly. I was performing an upgrade to 4.1. The first host went fine. The second 
upgrade didn't go well: on reboot, the server went into a kernel panic and I had 
to load the previous kernel to diagnose.



I couldn't get it out of the panic and had to revert the system to the previous 
kernel, which was a big PITA. I updated it to current and verified the 
ovirt/vdsm installation. Everything seemed to be OK, but vdsm won't start. 
Gluster is working fine. It appears I have an authentication issue with 
libvirt. I'm getting the message "libvirt: XML-RPC error : authentication 
failed: authentication failed", which seems to be the core issue.



I've looked at all the past reports of this issue and tried their resolutions, 
but I can't get it to work. For example, when I run vdsm-tool configure --force 
I get this...





Checking configuration status...



abrt is already configured for vdsm

lvm is configured for vdsm

libvirt is already configured for vdsm

SUCCESS: ssl configured to true. No conflicts

Current revision of multipath.conf detected, preserving



Running configure...

Reconfiguration of abrt is done.

Traceback (most recent call last):

  File "/usr/bin/vdsm-tool", line 219, in main

    return tool_command[cmd]["command"](*args)

  File "/usr/lib/python2.7/site-packages/vdsm/tool/__init__.py", line 38, in 
wrapper

    func(*args, **kwargs)

  File "/usr/lib/python2.7/site-packages/vdsm/tool/configurator.py", line 141, 
in configure

    _configure(c)

  File "/usr/lib/python2.7/site-packages/vdsm/tool/configurator.py", line 88, 
in _configure

    getattr(module, 'configure', lambda: None)()

  File "/usr/lib/python2.7/site-packages/vdsm/tool/configurators/passwd.py", 
line 68, in configure

    configure_passwd()

  File "/usr/lib/python2.7/site-packages/vdsm/tool/configurators/passwd.py", 
line 98, in configure_passwd

    raise RuntimeError("Set password failed: %s" % (err,))

RuntimeError: Set password failed: ['saslpasswd2: invalid parameter supplied']
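
From the traceback, the failing step is setting libvirt's SASL password for the 
vdsm@ovirt user. Roughly the same thing can be tried by hand to narrow it down 
(a sketch; it assumes the stock EL7 SASL database at /etc/libvirt/passwd.db):

  # list the users in libvirt's SASL database
  sasldblistusers2 -f /etc/libvirt/passwd.db
  # set the vdsm@ovirt password manually, reading it from stdin,
  # which mirrors what vdsm-tool's passwd configurator attempts
  echo -n 'temp-password' | saslpasswd2 -p -a libvirt vdsm@ovirt

If saslpasswd2 fails the same way when run by hand, the problem is in the 
cyrus-sasl/libvirt setup rather than in vdsm itself.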



...any help would be greatly appreciated. I'm not a Linux/oVirt expert by any 
means, but I desperately need to get this setup back to being stable. This 
happened many months ago and I gave up on fixing it then, but I really need to 
get this back online again.



Thank you 



Todd Barton
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/354QKTKZTQGJIXYM5Q4RDOFKLZK5ORBE/


[ovirt-users] Tuning Gluster Writes

2019-04-12 Thread Alex McWhirter
I have 8 machines acting as gluster servers. They each have 12 drives in 
RAID 50 (3 sets of 4 drives in RAID 5, then striped (RAID 0) together as 
one).


They connect to the compute hosts and to each other over LACP'd 10Gb 
connections split across two Cisco Nexus switches with vPC.


Gluster has the following options set:

performance.write-behind-window-size: 4MB
performance.flush-behind: on
performance.stat-prefetch: on
server.event-threads: 4
client.event-threads: 8
performance.io-thread-count: 32
network.ping-timeout: 30
cluster.granular-entry-heal: enable
performance.strict-o-direct: on
storage.owner-gid: 36
storage.owner-uid: 36
features.shard: on
cluster.shd-wait-qlength: 1
cluster.shd-max-threads: 8
cluster.locking-scheme: granular
cluster.data-self-heal-algorithm: full
cluster.server-quorum-type: server
cluster.quorum-type: auto
cluster.eager-lock: enable
network.remote-dio: off
performance.low-prio-threads: 32
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
auth.allow: *
user.cifs: off
transport.address-family: inet
nfs.disable: off
performance.client-io-threads: on
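
(Any of these can be changed and read back per volume; the volume name below is 
a placeholder:)

  gluster volume set myvol performance.write-behind-window-size 4MB
  gluster volume get myvol performance.write-behind-window-size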


I have the following sysctl values on gluster client and servers, using 
libgfapi, MTU 9K


net.core.rmem_max = 134217728
net.core.wmem_max = 134217728
net.ipv4.tcp_rmem = 4096 87380 134217728
net.ipv4.tcp_wmem = 4096 65536 134217728
net.core.netdev_max_backlog = 30
net.ipv4.tcp_moderate_rcvbuf = 1
net.ipv4.tcp_no_metrics_save = 1
net.ipv4.tcp_congestion_control = htcp

Reads with this setup are perfect: benchmarked in a VM at about 770MB/s 
sequential, with disk access times of < 1ms. Writes, on the other hand, are 
all over the place. They peak around 320MB/s sequential write, which is 
what I expect, but it seems as if there is some blocking going on.


During the write test I will hit 320MB/s briefly, then 0MB/s as disk 
access times shoot to over 3000ms, then back to 320MB/s. It averages out 
to about 110MB/s.


Gluster version is 3.12.15; oVirt is 4.2.7.5.

Any ideas on what I could tune to eliminate or minimize that blocking?
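
For reference, the in-VM sequential test can be reproduced with an fio run 
along these lines (path, size, and queue depth here are assumptions):

  fio --name=seqwrite --filename=/root/fio-test --rw=write --bs=1M \
      --size=4G --direct=1 --ioengine=libaio --iodepth=16 \
      --runtime=120 --time_based --group_reporting

Adding --write_bw_log=seqwrite gives a per-interval bandwidth log, which should 
show whether the 0MB/s dips line up with flushes.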
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/Z7F72BKYKAGICERZETSA4KCLQYR3AORR/


[ovirt-users] Re: Global maintenance and fencing of hosts

2019-04-12 Thread Alex K
On Fri, Apr 12, 2019, 12:53 Andreas Elvers <
andreas.elvers+ovirtfo...@solutions.work> wrote:

> I am wondering whether global maintenance inhibits fencing of
> non-responsive hosts. Is this so?
>
> Background: I plan on migrating the engine from one cluster to another. I
> understand this means to backup/restore the engine. While migrating the
> engine it is shut down and all VMs will continue running. This is good.
> When starting the engine in the new location, I really don't want the
> engine to fence any host on its own, because of reasons I can not yet know.
>
> So is global maintenance enough to suppress fencing, or do I have to
> deactivate fencing on all hosts?
>
Global maintenance disables fencing.
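
For reference, setting and checking it from one of the HA hosts is (sketch):

  hosted-engine --set-maintenance --mode=global
  hosted-engine --vm-status

and --mode=none re-enables HA afterwards.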

> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/3KRJRAWBFC4QAXVYXUGXVA3324USBHBN/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/6N7W56J63WQY76QWTLFQPLOWZO5OPOOV/


[ovirt-users] Re: HostedEngine cleaned up

2019-04-12 Thread Alex K
On Fri, Apr 12, 2019, 12:16  wrote:

> Adding to what me and my colleague shared
>
> I am able to locate the disk images of the VMs, I copied some of them and
> tried to boot them from another standalone kvm host, however booting the
> disk images wasn't succesful as it landed on a rescue mode. The strange
> part is that the VM disk images are 64MB in size, which doesn't seem to be
> normal for a disk image(see the below command extract).
>
> [root@gohan 019a7072-43d5-44b5-bb86-7a7327f02087]# pwd
>
> /gluster_bricks/data/data/659de125-5671-4777-b27e-974aec0a4c9c/images/019a7072-43d5-44b5-bb86-7a7327f02087
> [root@gohan 019a7072-43d5-44b5-bb86-7a7327f02087]# ll -h
> total 66M
> -rw-rw. 2 vdsm kvm  64M Mar 22 13:48
> f5f97478-6ccb-48bc-93b7-2fd5939f40bf
> -rw-rw. 2 vdsm kvm 1.0M Mar  4 12:10
> f5f97478-6ccb-48bc-93b7-2fd5939f40bf.lease
> -rw-r--r--. 2 vdsm kvm  317 Mar 22 11:06
> f5f97478-6ccb-48bc-93b7-2fd5939f40bf.meta
>
I'd suggest mounting the gluster volume and getting the disk image from there, 
instead of directly from the brick.
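
A sketch of that, using the server and paths from your output (so treat the 
volume name as an assumption); note that if the volume has sharding enabled, a 
file seen directly on the brick is only its first shard (64MB by default), 
which would explain the size:

  mkdir -p /mnt/gv
  mount -t glusterfs gohan:/data /mnt/gv
  qemu-img info /mnt/gv/659de125-5671-4777-b27e-974aec0a4c9c/images/019a7072-43d5-44b5-bb86-7a7327f02087/f5f97478-6ccb-48bc-93b7-2fd5939f40bf

qemu-img info on the fuse mount should report the real virtual and on-disk sizes.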

>
>
> Please share insights on how I can reconstruct the disk image so that it
> can become bootable on the kvm host.
>
> Thanks in advance for the reply.
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/ZIQNXSY5GAKY2KFOUEH3SMFXGQKIX7V4/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/NNFPHGVXU74HOXK5B4IUBMMVZ7KEF3O3/


[ovirt-users] oVirt 4.3.2.1-1.el7 Errors at VM boot

2019-04-12 Thread Wood Peter
Hi all,

A few weeks ago I did a clean install of the latest oVirt-4.3.2 and
imported some VMs from oVirt-3. Three nodes running oVirt Node and oVirt
Engine installed on a separate system.

I noticed that sometimes some VMs will boot successfully but the Web UI
will still show "Powering UP" for days after the VM has been up. I can
power the VM down and back up, and it may then update the Web UI status to
UP.

While debugging the above issue I noticed that some VMs will trigger errors
during boot. I can power on a VM on one node and see the errors below start
happening every 4-5 seconds; power down the VM and the errors stop; then
power up the VM on a different node without a problem. Another VM, though,
may trigger the errors on that same node.

Everything is very inconsistent. I can't find a pattern. I tried different
VMs, different nodes, and I'm getting mixed results. Hopefully the errors
will give some clue.

Here is what I'm seeing scrolling every 4-5 seconds:

-
On oVirt Node:

==> vdsm.log <==
2019-04-12 10:50:31,543-0700 ERROR (jsonrpc/3) [jsonrpc.JsonRpcServer]
Internal server error (__init__:350)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line 345,
in _handle_request
res = method(**params)
  File "/usr/lib/python2.7/site-packages/vdsm/rpc/Bridge.py", line 194, in
_dynamicMethod
result = fn(*methodArgs)
  File "", line 2, in getAllVmStats
  File "/usr/lib/python2.7/site-packages/vdsm/common/api.py", line 50, in
method
ret = func(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/API.py", line 1388, in
getAllVmStats
statsList = self._cif.getAllVmStats()
  File "/usr/lib/python2.7/site-packages/vdsm/clientIF.py", line 567, in
getAllVmStats
return [v.getStats() for v in self.vmContainer.values()]
  File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 1766, in
getStats
oga_stats = self._getGuestStats()
  File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 1967, in
_getGuestStats
stats = self.guestAgent.getGuestInfo()
  File "/usr/lib/python2.7/site-packages/vdsm/virt/guestagent.py", line
505, in getGuestInfo
del qga['appsList']
KeyError: 'appsList'

==> mom.log <==
2019-04-12 10:50:31,547 - mom.VdsmRpcBase - ERROR - Command
Host.getAllVmStats with args {} failed:
(code=-32603, message=Internal JSON-RPC error: {'reason': "'appsList'"})

--
On oVirt Engine

2019-04-12 10:50:35,692-07 WARN
[org.ovirt.engine.core.vdsbroker.vdsbroker.GetAllVmStatsVDSCommand]
(EE-ManagedThreadFactory-engineScheduled-Thread-53) [] Unexpected return
value: Status [code=-32603, message=Internal JSON-RPC error: {'reason':
"'appsList'"}]
2019-04-12 10:50:35,693-07 ERROR
[org.ovirt.engine.core.vdsbroker.vdsbroker.GetAllVmStatsVDSCommand]
(EE-ManagedThreadFactory-engineScheduled-Thread-53) [] Failed in
'GetAllVmStatsVDS' method
2019-04-12 10:50:35,693-07 ERROR
[org.ovirt.engine.core.vdsbroker.vdsbroker.GetAllVmStatsVDSCommand]
(EE-ManagedThreadFactory-engineScheduled-Thread-53) [] Command
'GetAllVmStatsVDSCommand(HostName = sdod-ovnode-03,
VdsIdVDSCommandParametersBase:{hostId='12e38ad3-6327-4c94-8be4-88912d283729'})'
execution failed: VDSGenericException: VDSErrorException: Failed to
GetAllVmStatsVDS, error = Internal JSON-RPC error: {'reason':
"'appsList'"}, code = -32603

Thank you,
-- Peter
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/EE22ADRNIFF6UPK2GUXH7G27N4AICASB/


[ovirt-users] Re: Migrate self-hosted engine between cluster

2019-04-12 Thread Andreas Elvers
This one is probably saving my weekend. Thanks a lot for your great work.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/2IJDVBDF5ZOUL2MJMXBSSBVF2ZBZKNIG/


[ovirt-users] Re: oVirt 4.3.2 Error: genev_sys_6081 is not present in the system

2019-04-12 Thread Dominik Holler
On Fri, 12 Apr 2019 12:31:15 -
"Dee Slaw"  wrote:

> Hello, I've installed oVirt 4.3.2 and the problem is that it logs these messages:
> 
> VDSM ovirt-04 command Get Host Statistics failed: Internal JSON-RPC error:
> {'reason': '[Errno 19] genev_sys_6081 is not present in the system'} in
> Open Virtualization Manager.
> 
> 

This might happen if genev_sys_6081 flickers.

> 
> It also keeps on logging in /var/log/messages:
> 
> Mar 22 11:51:03 ovirt-04 NetworkManager[8725]:  [1553244663.9861] device
> (genev_sys_6081): carrier: link connected
> Mar 22 11:51:03 ovirt-04 NetworkManager[8725]:  [1553244663.9864] 
> manager:
> (genev_sys_6081): new Generic device 
> (/org/freedesktop/NetworkManager/Devices/34160)
> Mar 22 11:51:03 ovirt-04 NetworkManager[8725]:  [1553244663.9866] device
> (genev_sys_6081): enslaved to non-master-type device ovs-system; ignoring
> Mar 22 11:51:03 ovirt-04 NetworkManager[8725]:  [1553244663.9906] device
> (genev_sys_6081): enslaved to non-master-type device ovs-system; ignoring
> Mar 22 11:51:03 ovirt-04 kernel: device genev_sys_6081 left promiscuous mode
> Mar 22 11:51:04 ovirt-04 kernel: device genev_sys_6081 entered promiscuous 
> mode
> Mar 22 11:51:04 ovirt-04 NetworkManager[8725]:  [1553244664.0038] device
> (genev_sys_6081): carrier: link connected
> Mar 22 11:51:04 ovirt-04 NetworkManager[8725]:  [1553244664.0042] 
> manager:
> (genev_sys_6081): new Generic device 
> (/org/freedesktop/NetworkManager/Devices/34161)
> Mar 22 11:51:04 ovirt-04 NetworkManager[8725]:  [1553244664.0044] device
> (genev_sys_6081): enslaved to non-master-type device ovs-system; ignoring
> Mar 22 11:51:04 ovirt-04 NetworkManager[8725]:  [1553244664.0082] device
> (genev_sys_6081): enslaved to non-master-type device ovs-system; ignoring
> Mar 22 11:51:04 ovirt-04 kernel: device genev_sys_6081 left promiscuous mode
> Mar 22 11:51:04 ovirt-04 kernel: device genev_sys_6081 entered promiscuous 
> mode
> 
> 
> 
> Also I can see the following in /var/log/openvswitch:
> 
> 2019-03-22T08:53:12.413Z|131047|bridge|WARN|could not add network device 
> ovn-034a1c-0 to
> ofproto (File exists)
> 2019-03-22T08:53:12.431Z|131048|tunnel|WARN|ovn-a21088-0: attempting to add 
> tunnel port
> with same config as port 'ovn-0836d7-0' (::->192.168.10.24, key=flow,
> legacy_l2, dp
> port=2)
> 
> 2019-03-22T08:53:12.466Z|131055|connmgr|WARN|accept failed (Too many open 
> files)
> 2019-03-22T08:53:12.466Z|131056|unixctl|WARN|punix:/var/run/openvswitch/ovs-vswitchd.9103.ctl:
> accept failed: Too many open files
> 

Are there too many files opened on the host?
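
A quick way to check against the limit (assuming ovs-vswitchd is the process 
hitting it, as the log suggests):

  pid=$(pidof ovs-vswitchd)
  ls /proc/$pid/fd | wc -l              # currently open file descriptors
  grep 'open files' /proc/$pid/limits   # the soft/hard limit for the process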

> 
> 
> In ovsdb-server.log:
> 
> 2019-03-22T03:24:12.583Z|05792|jsonrpc|WARN|unix#28684: receive error: 
> Connection reset by
> peer
> 2019-03-22T03:24:12.583Z|05793|reconnect|WARN|unix#28684: connection dropped 
> (Connection
> reset by peer)
> 
> 
> 
> How do I fix this issue with geneve tunnels?
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/PVJJPK7MQOFLXJEUUWYDBQGJD3332ALF/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/T3NVKTZOKCPQY46V7X4EHOV7HVDDA5QO/


[ovirt-users] Re: Live storage migration is failing in 4.2.8

2019-04-12 Thread Nir Soffer
On Fri, Apr 12, 2019, 12:07 Ladislav Humenik 
wrote:

> Hello, we have recently updated a few ovirts from 4.2.5 to version 4.2.8
> (actually 9 ovirt engine nodes), where live storage migration
> stopped working and leaves an auto-generated snapshot behind.
>
> If we power the guest VM down, the migration works as expected. Is there
> a known bug for this? Shall we open a new one?
>
> Setup:
> ovirt - Dell PowerEdge R630
>  - CentOS Linux release 7.6.1810 (Core)
>  - ovirt-engine-4.2.8.2-1.el7.noarch
>  - kernel-3.10.0-957.10.1.el7.x86_64
> hypervisors- Dell PowerEdge R640
>  - CentOS Linux release 7.6.1810 (Core)
>  - kernel-3.10.0-957.10.1.el7.x86_64
>  - vdsm-4.20.46-1.el7.x86_64
>  - libvirt-5.0.0-1.el7.x86_64
>

This is a known issue in libvirt < 5.2.

How did you get this version on CentOS 7.6?

On my CentOS 7.6 I have libvirt 4.5, which is not affected by this issue.

Nir

>  - qemu-kvm-ev-2.12.0-18.el7_6.3.1.x86_64
> storage domain  - netapp NFS share
>
>
> logs are attached
>
> --
> Ladislav Humenik
>
> System administrator
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/VSKUEPUOPJDSRWYYMZEKAVTZ62YP6UK2/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/3B3TLAJ7QPC6LLPBZYRD7WXUJZXQE5P6/


[ovirt-users] Re: Migrate self-hosted engine between cluster

2019-04-12 Thread Simone Tiraboschi
On Fri, Apr 12, 2019 at 11:47 AM Andreas Elvers <
andreas.elvers+ovirtfo...@solutions.work> wrote:

> I am in the process of migrating the engine to a new cluster. I hope I
> will accomplish it this weekend. Fingers crossed.
>
> What you need to know:
>
> The migration is really a backup and restore process.
>
> 1. You create a backup of the engine.
> 2. Place the cluster into global maintenance and shutdown the engine.
> 3. Then you create a new engine on the other cluster and before
> engine-setup you restore the engine backup.
>

Please note that we now have
  hosted-engine --deploy --restore-from-file=backup.tar.gz
and this is fully automated.
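
So the whole move is roughly (file names are placeholders):

  # on the old engine VM
  engine-backup --mode=backup --file=engine-backup.tar.gz --log=backup.log
  # on a host in the target cluster
  hosted-engine --deploy --restore-from-file=engine-backup.tar.gz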


> 4. Start the new engine.
>
> From here I'm not yet sure what should happen in what order:
>
> - re-install the nodes with the old engine to remove the Engine HA Setup
> from that host.
> - re-install nodes on the new cluster to receive the Engine HA Setup.
>
> There is documentation about backup and restore of the engine here:
> https://ovirt.org/documentation/self-hosted/chap-Backing_up_and_Restoring_an_EL-Based_Self-Hosted_Environment.html
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/H4IDZJHKCLOVCDEN4ZRJI4KSPNPLWP5H/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/QJVURECZ5TPF4YJC6IBZLJ7VJLRAGBRS/


[ovirt-users] Re: HostedEngine cleaned up

2019-04-12 Thread Simone Tiraboschi
On Fri, Apr 12, 2019 at 11:16 AM  wrote:

> Adding to what me and my colleague shared
>
> I am able to locate the disk images of the VMs. I copied some of them and
> tried to boot them from another standalone kvm host; however, booting the
> disk images wasn't successful, as they landed in rescue mode. The strange
> part is that the VM disk images are 64MB in size, which doesn't seem to be
> normal for a disk image (see the command extract below).
>
> [root@gohan 019a7072-43d5-44b5-bb86-7a7327f02087]# pwd
>
> /gluster_bricks/data/data/659de125-5671-4777-b27e-974aec0a4c9c/images/019a7072-43d5-44b5-bb86-7a7327f02087
> [root@gohan 019a7072-43d5-44b5-bb86-7a7327f02087]# ll -h
> total 66M
> -rw-rw. 2 vdsm kvm  64M Mar 22 13:48
> f5f97478-6ccb-48bc-93b7-2fd5939f40bf
> -rw-rw. 2 vdsm kvm 1.0M Mar  4 12:10
> f5f97478-6ccb-48bc-93b7-2fd5939f40bf.lease
> -rw-r--r--. 2 vdsm kvm  317 Mar 22 11:06
> f5f97478-6ccb-48bc-93b7-2fd5939f40bf.meta
>
>
> Please share insights on how I can reconstruct the disk image so that it
> can become bootable on the kvm host.
>
> Thanks in advance for the reply.
>

I'd suggest double-checking all the gluster logs, because 64M doesn't seem
reasonable there.


>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/ZIQNXSY5GAKY2KFOUEH3SMFXGQKIX7V4/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/XCKZCHA2K4BXGDY7KRRFDERNFFU3EMT4/


[ovirt-users] Re: Cannot allocate and run VM from VM-Pool. There are no available VMs in the VM-Pool

2019-04-12 Thread nicolas
Are the VMs from the pool 'up'? If so, no assignment can be done unless 
they are powered off.


On 2019-04-12 14:31, Florian Rädler wrote:

I am getting the following error after a pool was generated and
migrated to another host.

START_POOL failed [Cannot allocate and run VM from VM-Pool.
There are no available VMs in the VM-Pool.]

No user is connected to any of the running VMs. What can I do to solve
this problem?

-

 Show mandatory disclosures [1]

 Further information on data processing in the DB Group can be found
here: http://www.deutschebahn.com/de/konzern/datenschutz [2]

Links:
--
[1] http://www.deutschebahn.com/pflichtangaben/20190408
[2] http://www.deutschebahn.com/de/konzern/datenschutz

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct:
https://www.ovirt.org/community/about/community-guidelines/
List Archives:
https://lists.ovirt.org/archives/list/users@ovirt.org/message/UL2KJKETQYFZR4HTVJI42IIAKHHJ2NWW/

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/WNB2EQKIFRDUF5RO4CZCIJ75OH75E7LU/


[ovirt-users] Cannot allocate and run VM from VM-Pool. There are no available VMs in the VM-Pool

2019-04-12 Thread Florian Rädler
I am getting the following error after a pool was generated and migrated to 
another host.

START_POOL failed [Cannot allocate and run VM from VM-Pool. There are 
no available VMs in the VM-Pool.]

No user is connected to any of the running VMs. What can I do to solve this 
problem?



Show mandatory disclosures

Further information on data processing in the DB Group can be found here: 
http://www.deutschebahn.com/de/konzern/datenschutz
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/UL2KJKETQYFZR4HTVJI42IIAKHHJ2NWW/


[ovirt-users] [ANN] oVirt 4.3.3 Fourth Release Candidate is now available

2019-04-12 Thread Sandro Bonazzola
The oVirt Project is pleased to announce the availability of the oVirt
4.3.3 Fourth Release Candidate, as of April 12th, 2019.

This update is a release candidate of the third in a series of
stabilization updates to the 4.3 series.
This is pre-release software. This pre-release should not be used in
production.

This release is available now on x86_64 architecture for:
* Red Hat Enterprise Linux 7.6 or later
* CentOS Linux (or similar) 7.6 or later

This release supports Hypervisor Hosts on x86_64 and ppc64le architectures
for:
* Red Hat Enterprise Linux 7.6 or later
* CentOS Linux (or similar) 7.6 or later
* oVirt Node 4.3 (available for x86_64 only)

Experimental tech preview for x86_64 and s390x architectures for Fedora 28
is also included.

See the release notes [1] for installation / upgrade instructions and
a list of new features and bugs fixed.

Notes:
- oVirt Appliance will be available soon
- oVirt Node will be available soon[2]

Additional Resources:
* Read more about the oVirt 4.3.3 release highlights:
http://www.ovirt.org/release/4.3.3/
* Get more oVirt Project updates on Twitter: https://twitter.com/ovirt
* Check out the latest project news on the oVirt blog:
http://www.ovirt.org/blog/

[1] http://www.ovirt.org/release/4.3.3/
[2] http://resources.ovirt.org/pub/ovirt-4.3-pre/iso/

-- 

SANDRO BONAZZOLA

MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV

Red Hat EMEA 

sbona...@redhat.com

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/GLPD2HYVANPNVVPEV6GCTUAKOPXIO25E/


[ovirt-users] oVirt 4.3.2 Error: genev_sys_6081 is not present in the system

2019-04-12 Thread Dee Slaw
Hello, I've installed oVirt 4.3.2 and the problem is that it logs these messages:

VDSM ovirt-04 command Get Host Statistics failed: Internal JSON-RPC error:
{'reason': '[Errno 19] genev_sys_6081 is not present in the system'} in
Open Virtualization Manager.



It also keeps on logging in /var/log/messages:

Mar 22 11:51:03 ovirt-04 NetworkManager[8725]:  [1553244663.9861] device
(genev_sys_6081): carrier: link connected
Mar 22 11:51:03 ovirt-04 NetworkManager[8725]:  [1553244663.9864] manager:
(genev_sys_6081): new Generic device 
(/org/freedesktop/NetworkManager/Devices/34160)
Mar 22 11:51:03 ovirt-04 NetworkManager[8725]:  [1553244663.9866] device
(genev_sys_6081): enslaved to non-master-type device ovs-system; ignoring
Mar 22 11:51:03 ovirt-04 NetworkManager[8725]:  [1553244663.9906] device
(genev_sys_6081): enslaved to non-master-type device ovs-system; ignoring
Mar 22 11:51:03 ovirt-04 kernel: device genev_sys_6081 left promiscuous mode
Mar 22 11:51:04 ovirt-04 kernel: device genev_sys_6081 entered promiscuous mode
Mar 22 11:51:04 ovirt-04 NetworkManager[8725]:  [1553244664.0038] device
(genev_sys_6081): carrier: link connected
Mar 22 11:51:04 ovirt-04 NetworkManager[8725]:  [1553244664.0042] manager:
(genev_sys_6081): new Generic device 
(/org/freedesktop/NetworkManager/Devices/34161)
Mar 22 11:51:04 ovirt-04 NetworkManager[8725]:  [1553244664.0044] device
(genev_sys_6081): enslaved to non-master-type device ovs-system; ignoring
Mar 22 11:51:04 ovirt-04 NetworkManager[8725]:  [1553244664.0082] device
(genev_sys_6081): enslaved to non-master-type device ovs-system; ignoring
Mar 22 11:51:04 ovirt-04 kernel: device genev_sys_6081 left promiscuous mode
Mar 22 11:51:04 ovirt-04 kernel: device genev_sys_6081 entered promiscuous mode



Also I can see the following in /var/log/openvswitch:

2019-03-22T08:53:12.413Z|131047|bridge|WARN|could not add network device 
ovn-034a1c-0 to
ofproto (File exists)
2019-03-22T08:53:12.431Z|131048|tunnel|WARN|ovn-a21088-0: attempting to add 
tunnel port
with same config as port 'ovn-0836d7-0' (::->192.168.10.24, key=flow,
legacy_l2, dp
port=2)

2019-03-22T08:53:12.466Z|131055|connmgr|WARN|accept failed (Too many open files)
2019-03-22T08:53:12.466Z|131056|unixctl|WARN|punix:/var/run/openvswitch/ovs-vswitchd.9103.ctl:
accept failed: Too many open files



In ovsdb-server.log:

2019-03-22T03:24:12.583Z|05792|jsonrpc|WARN|unix#28684: receive error: 
Connection reset by
peer
2019-03-22T03:24:12.583Z|05793|reconnect|WARN|unix#28684: connection dropped 
(Connection
reset by peer)



How do I fix this issue with geneve tunnels?
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/PVJJPK7MQOFLXJEUUWYDBQGJD3332ALF/


[ovirt-users] Re: [Gluster-users] Gluster snapshot fails

2019-04-12 Thread Strahil Nikolov
I hope this is the last update on the issue: I opened a bug at 
https://bugzilla.redhat.com/show_bug.cgi?id=1699309

Best regards,
Strahil Nikolov

On Friday, April 12, 2019, 7:32:41 AM GMT-4, Strahil Nikolov wrote:
 
Hi All,
I have tested gluster snapshots without the systemd .automount units, and it 
works as follows:

[root@ovirt1 system]# gluster snapshot create isos-snap-2019-04-11 isos  
description TEST
snapshot create: success: Snap isos-snap-2019-04-11_GMT-2019.04.12-11.18.24 
created successfully

[root@ovirt1 system]# gluster snapshot list
isos-snap-2019-04-11_GMT-2019.04.12-11.18.24
[root@ovirt1 system]# gluster snapshot info 
isos-snap-2019-04-11_GMT-2019.04.12-11.18.24
Snapshot  : isos-snap-2019-04-11_GMT-2019.04.12-11.18.24
Snap UUID : 70d5716e-4633-43d4-a562-8e29a96b0104
Description   : TEST
Created   : 2019-04-12 11:18:24
Snap Volumes:

    Snap Volume Name  : 584e88eab0374c0582cc544a2bc4b79e
    Origin Volume name    : isos
    Snaps taken for isos  : 1
    Snaps available for isos  : 255
    Status    : Stopped


Best Regards,
Strahil Nikolov

On Friday, April 12, 2019, 4:32:18 AM GMT-4, Strahil Nikolov wrote:
 
Hello All,
it seems that "systemd-1" comes from the automount unit, and not from the 
systemd mount unit.
[root@ovirt1 system]# systemctl cat gluster_bricks-isos.automount
# /etc/systemd/system/gluster_bricks-isos.automount
[Unit]
Description=automount for gluster brick ISOS

[Automount]
Where=/gluster_bricks/isos

[Install]
WantedBy=multi-user.target
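
Since glusterd apparently resolves the brick's device from the mount source, 
and autofs reports that source as "systemd-1", one possible workaround (a 
sketch, untested; the device path is taken from the findmnt output later in 
this thread, and the vdo.service name is an assumption) is a plain .mount unit 
ordered after VDO instead of the automount:

# /etc/systemd/system/gluster_bricks-isos.mount
[Unit]
Description=mount for gluster brick ISOS
Requires=vdo.service
After=vdo.service

[Mount]
What=/dev/mapper/gluster_vg_md0-gluster_lv_isos
Where=/gluster_bricks/isos
Type=xfs
Options=noatime,nodiratime

[Install]
WantedBy=multi-user.target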



Best Regards,
Strahil Nikolov

On Friday, April 12, 2019, 4:12:31 AM GMT-4, Strahil Nikolov wrote:
 
Hello All,
I have tried to enable debug logging to see the reason for the issue. Here is 
the relevant glusterd.log:
[2019-04-12 07:56:54.526508] E [MSGID: 106077] 
[glusterd-snapshot.c:1882:glusterd_is_thinp_brick] 0-management: Failed to get 
pool name for device systemd-1
[2019-04-12 07:56:54.527509] E [MSGID: 106121] 
[glusterd-snapshot.c:2523:glusterd_snapshot_create_prevalidate] 0-management: 
Failed to pre validate
[2019-04-12 07:56:54.527525] E [MSGID: 106024] 
[glusterd-snapshot.c:2547:glusterd_snapshot_create_prevalidate] 0-management: 
Snapshot is supported only for thin provisioned LV. Ensure that all bricks of 
isos are thinly provisioned LV.
[2019-04-12 07:56:54.527539] W [MSGID: 106029] 
[glusterd-snapshot.c:8613:glusterd_snapshot_prevalidate] 0-management: Snapshot 
create pre-validation failed
[2019-04-12 07:56:54.527552] W [MSGID: 106121] 
[glusterd-mgmt.c:147:gd_mgmt_v3_pre_validate_fn] 0-management: Snapshot 
Prevalidate Failed
[2019-04-12 07:56:54.527568] E [MSGID: 106121] 
[glusterd-mgmt.c:1015:glusterd_mgmt_v3_pre_validate] 0-management: Pre 
Validation failed for operation Snapshot on local node
[2019-04-12 07:56:54.527583] E [MSGID: 106121] 
[glusterd-mgmt.c:2377:glusterd_mgmt_v3_initiate_snap_phases] 0-management: Pre 
Validation Failed

Here is the output of lvscan & lvs:
[root@ovirt1 ~]# lvscan
  ACTIVE    '/dev/gluster_vg_md0/my_vdo_thinpool' [9.86 TiB] inherit
  ACTIVE    '/dev/gluster_vg_md0/gluster_lv_data' [500.00 GiB] inherit
  ACTIVE    '/dev/gluster_vg_md0/gluster_lv_isos' [50.00 GiB] inherit
  ACTIVE    '/dev/gluster_vg_ssd/my_ssd_thinpool' [168.59 GiB] inherit
  ACTIVE    '/dev/gluster_vg_ssd/gluster_lv_engine' [40.00 GiB] inherit
  ACTIVE    '/dev/centos_ovirt1/swap' [6.70 GiB] inherit
  ACTIVE    '/dev/centos_ovirt1/home' [1.00 GiB] inherit
  ACTIVE    '/dev/centos_ovirt1/root' [60.00 GiB] inherit
[root@ovirt1 ~]# lvs --noheadings -o pool_lv



  my_vdo_thinpool
  my_vdo_thinpool

  my_ssd_thinpool

[root@ovirt1 ~]# ssh ovirt2 "lvscan;lvs --noheadings -o pool_lv"
  ACTIVE    '/dev/gluster_vg_md0/my_vdo_thinpool' [<9.77 TiB] inherit
  ACTIVE    '/dev/gluster_vg_md0/gluster_lv_data' [500.00 GiB] inherit
  ACTIVE    '/dev/gluster_vg_md0/gluster_lv_isos' [50.00 GiB] inherit
  ACTIVE    '/dev/gluster_vg_ssd/my_ssd_thinpool' [<161.40 GiB] inherit
  ACTIVE    '/dev/gluster_vg_ssd/gluster_lv_engine' [40.00 GiB] inherit
  ACTIVE    '/dev/centos_ovirt2/root' [15.00 GiB] inherit
  ACTIVE    '/dev/centos_ovirt2/home' [1.00 GiB] inherit
  ACTIVE    '/dev/centos_ovirt2/swap' [16.00 GiB] inherit



  my_vdo_thinpool
  my_vdo_thinpool

  my_ssd_thinpool

[root@ovirt1 ~]# ssh ovirt3 "lvscan;lvs --noheadings -o pool_lv"
  ACTIVE    '/dev/gluster_vg_sda3/gluster_thinpool_sda3' [41.00 GiB] 
inherit
  ACTIVE    '/dev/gluster_vg_sda3/gluster_lv_data' [15.00 GiB] inherit
  ACTIVE    '/dev/gluster_vg_sda3/gluster_lv_isos' [15.00 GiB] inherit
  ACTIVE    '/dev/gluster_vg_sda3/gluster_lv_engine' [15.00 GiB] inherit
  ACTIVE    '/dev/centos_ovirt3/root' [20.00 GiB] inherit
  ACTIVE    '/dev/centos_ovirt3/home' [1.00 

[ovirt-users] Re: [Gluster-users] Gluster snapshot fails

2019-04-12 Thread Strahil Nikolov
Hi All,
I have tested gluster snapshots without the systemd .automount units, and it 
works as follows:

[root@ovirt1 system]# gluster snapshot create isos-snap-2019-04-11 isos  
description TEST
snapshot create: success: Snap isos-snap-2019-04-11_GMT-2019.04.12-11.18.24 
created successfully

[root@ovirt1 system]# gluster snapshot list
isos-snap-2019-04-11_GMT-2019.04.12-11.18.24
[root@ovirt1 system]# gluster snapshot info 
isos-snap-2019-04-11_GMT-2019.04.12-11.18.24
Snapshot  : isos-snap-2019-04-11_GMT-2019.04.12-11.18.24
Snap UUID : 70d5716e-4633-43d4-a562-8e29a96b0104
Description   : TEST
Created   : 2019-04-12 11:18:24
Snap Volumes:

    Snap Volume Name  : 584e88eab0374c0582cc544a2bc4b79e
    Origin Volume name    : isos
    Snaps taken for isos  : 1
    Snaps available for isos  : 255
    Status    : Stopped


Best Regards,
Strahil Nikolov

On Friday, April 12, 2019, 4:32:18 AM GMT-4, Strahil Nikolov wrote:
 
Hello All,
it seems that "systemd-1" comes from the automount unit, and not from the 
systemd mount unit.
[root@ovirt1 system]# systemctl cat gluster_bricks-isos.automount
# /etc/systemd/system/gluster_bricks-isos.automount
[Unit]
Description=automount for gluster brick ISOS

[Automount]
Where=/gluster_bricks/isos

[Install]
WantedBy=multi-user.target



Best Regards,
Strahil Nikolov

On Friday, April 12, 2019, 4:12:31 AM GMT-4, Strahil Nikolov wrote:
 
Hello All,
I have tried to enable debug logging to see the reason for the issue. Here is 
the relevant glusterd.log:
[2019-04-12 07:56:54.526508] E [MSGID: 106077] 
[glusterd-snapshot.c:1882:glusterd_is_thinp_brick] 0-management: Failed to get 
pool name for device systemd-1
[2019-04-12 07:56:54.527509] E [MSGID: 106121] 
[glusterd-snapshot.c:2523:glusterd_snapshot_create_prevalidate] 0-management: 
Failed to pre validate
[2019-04-12 07:56:54.527525] E [MSGID: 106024] 
[glusterd-snapshot.c:2547:glusterd_snapshot_create_prevalidate] 0-management: 
Snapshot is supported only for thin provisioned LV. Ensure that all bricks of 
isos are thinly provisioned LV.
[2019-04-12 07:56:54.527539] W [MSGID: 106029] 
[glusterd-snapshot.c:8613:glusterd_snapshot_prevalidate] 0-management: Snapshot 
create pre-validation failed
[2019-04-12 07:56:54.527552] W [MSGID: 106121] 
[glusterd-mgmt.c:147:gd_mgmt_v3_pre_validate_fn] 0-management: Snapshot 
Prevalidate Failed
[2019-04-12 07:56:54.527568] E [MSGID: 106121] 
[glusterd-mgmt.c:1015:glusterd_mgmt_v3_pre_validate] 0-management: Pre 
Validation failed for operation Snapshot on local node
[2019-04-12 07:56:54.527583] E [MSGID: 106121] 
[glusterd-mgmt.c:2377:glusterd_mgmt_v3_initiate_snap_phases] 0-management: Pre 
Validation Failed

Here is the output of lvscan & lvs:
[root@ovirt1 ~]# lvscan
  ACTIVE    '/dev/gluster_vg_md0/my_vdo_thinpool' [9.86 TiB] inherit
  ACTIVE    '/dev/gluster_vg_md0/gluster_lv_data' [500.00 GiB] inherit
  ACTIVE    '/dev/gluster_vg_md0/gluster_lv_isos' [50.00 GiB] inherit
  ACTIVE    '/dev/gluster_vg_ssd/my_ssd_thinpool' [168.59 GiB] inherit
  ACTIVE    '/dev/gluster_vg_ssd/gluster_lv_engine' [40.00 GiB] inherit
  ACTIVE    '/dev/centos_ovirt1/swap' [6.70 GiB] inherit
  ACTIVE    '/dev/centos_ovirt1/home' [1.00 GiB] inherit
  ACTIVE    '/dev/centos_ovirt1/root' [60.00 GiB] inherit
[root@ovirt1 ~]# lvs --noheadings -o pool_lv



  my_vdo_thinpool
  my_vdo_thinpool

  my_ssd_thinpool

[root@ovirt1 ~]# ssh ovirt2 "lvscan;lvs --noheadings -o pool_lv"
  ACTIVE    '/dev/gluster_vg_md0/my_vdo_thinpool' [<9.77 TiB] inherit
  ACTIVE    '/dev/gluster_vg_md0/gluster_lv_data' [500.00 GiB] inherit
  ACTIVE    '/dev/gluster_vg_md0/gluster_lv_isos' [50.00 GiB] inherit
  ACTIVE    '/dev/gluster_vg_ssd/my_ssd_thinpool' [<161.40 GiB] inherit
  ACTIVE    '/dev/gluster_vg_ssd/gluster_lv_engine' [40.00 GiB] inherit
  ACTIVE    '/dev/centos_ovirt2/root' [15.00 GiB] inherit
  ACTIVE    '/dev/centos_ovirt2/home' [1.00 GiB] inherit
  ACTIVE    '/dev/centos_ovirt2/swap' [16.00 GiB] inherit



  my_vdo_thinpool
  my_vdo_thinpool

  my_ssd_thinpool

[root@ovirt1 ~]# ssh ovirt3 "lvscan;lvs --noheadings -o pool_lv"
  ACTIVE    '/dev/gluster_vg_sda3/gluster_thinpool_sda3' [41.00 GiB] 
inherit
  ACTIVE    '/dev/gluster_vg_sda3/gluster_lv_data' [15.00 GiB] inherit
  ACTIVE    '/dev/gluster_vg_sda3/gluster_lv_isos' [15.00 GiB] inherit
  ACTIVE    '/dev/gluster_vg_sda3/gluster_lv_engine' [15.00 GiB] inherit
  ACTIVE    '/dev/centos_ovirt3/root' [20.00 GiB] inherit
  ACTIVE    '/dev/centos_ovirt3/home' [1.00 GiB] inherit
  ACTIVE    '/dev/centos_ovirt3/swap' [8.00 GiB] inherit



  gluster_thinpool_sda3
  gluster_thinpool_sda3
  gluster_thinpool_sda3


I am mounting my bricks via systemd, as I have issues with bricks being 

[ovirt-users] Re: Live storage migration is failing in 4.2.8

2019-04-12 Thread Benny Zlotnik
2019-04-12 10:39:25,643+0200 ERROR (jsonrpc/0) [virt.vm]
(vmId='71f27df0-f54f-4a2e-a51c-e61aa26b370d') Unable to start
replication for vda to {'domainID':
'244dfdfb-2662-4103-9d39-2b13153f2047', 'volumeInfo': {'path':
u'/rhev/data-center/mnt/removed-IP-address:_bs01aF1C1v1/244dfdfb-2662-4103-9d39-2b13153f2047/images/9a66bf0f-1333-4931-ad58-f6f1aa1143be/5c2738a4-4279-4cc3-a0de-6af1095f8879',
'type': 'file'}, 'diskType': 'file', 'format': 'cow', 'cache': 'none',
'volumeID': '5c2738a4-4279-4cc3-a0de-6af1095f8879', 'imageID':
'9a66bf0f-1333-4931-ad58-f6f1aa1143be', 'poolID':
'b1a475aa-c084-46e5-b65a-bf4a47143c88', 'device': 'disk', 'path':
u'/rhev/data-center/mnt/removed-IP-address:_bs01aF1C1v1/244dfdfb-2662-4103-9d39-2b13153f2047/images/9a66bf0f-1333-4931-ad58-f6f1aa1143be/5c2738a4-4279-4cc3-a0de-6af1095f8879',
'propagateErrors': 'off', 'volumeChain': [{'domainID':
'244dfdfb-2662-4103-9d39-2b13153f2047', 'leaseOffset': 0, 'path':
u'/rhev/data-center/mnt/removed-IP-address:_bs01aF1C1v1/244dfdfb-2662-4103-9d39-2b13153f2047/images/9a66bf0f-1333-4931-ad58-f6f1aa1143be/cbe93bfe-9df0-4f12-a44e-9e8fa6ec24f2',
'volumeID': u'cbe93bfe-9df0-4f12-a44e-9e8fa6ec24f2', 'leasePath':
u'/rhev/data-center/mnt/removed-IP-address:_bs01aF1C1v1/244dfdfb-2662-4103-9d39-2b13153f2047/images/9a66bf0f-1333-4931-ad58-f6f1aa1143be/cbe93bfe-9df0-4f12-a44e-9e8fa6ec24f2.lease',
'imageID': '9a66bf0f-1333-4931-ad58-f6f1aa1143be'}, {'domainID':
'244dfdfb-2662-4103-9d39-2b13153f2047', 'leaseOffset': 0, 'path':
u'/rhev/data-center/mnt/removed-IP-address:_bs01aF1C1v1/244dfdfb-2662-4103-9d39-2b13153f2047/images/9a66bf0f-1333-4931-ad58-f6f1aa1143be/5c2738a4-4279-4cc3-a0de-6af1095f8879',
'volumeID': u'5c2738a4-4279-4cc3-a0de-6af1095f8879', 'leasePath':
u'/rhev/data-center/mnt/removed-IP-address:_bs01aF1C1v1/244dfdfb-2662-4103-9d39-2b13153f2047/images/9a66bf0f-1333-4931-ad58-f6f1aa1143be/5c2738a4-4279-4cc3-a0de-6af1095f8879.lease',
'imageID': '9a66bf0f-1333-4931-ad58-f6f1aa1143be'}]} (vm:4710)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 4704,
in diskReplicateStart
self._startDriveReplication(drive)
  File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 4843,
in _startDriveReplication
self._dom.blockCopy(drive.name, destxml, flags=flags)
  File "/usr/lib/python2.7/site-packages/vdsm/virt/virdomain.py", line 98, in f
ret = attr(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/common/libvirtconnection.py",
line 130, in wrapper
ret = f(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/common/function.py",
line 92, in wrapper
return func(inst, *args, **kwargs)
  File "/usr/lib64/python2.7/site-packages/libvirt.py", line 728, in blockCopy
ret = libvirtmod.virDomainBlockCopy(self._o, disk, destxml, params, flags)
TypeError: block params must be a dictionary


It looks like a bug in libvirt [1].

[1] - https://bugzilla.redhat.com/show_bug.cgi?id=1687114

On Fri, Apr 12, 2019 at 12:06 PM Ladislav Humenik
 wrote:
>
> Hello, we have recently updated a few ovirts from 4.2.5 to version 4.2.8
> (actually 9 ovirt engine nodes), where live storage migration
> stopped working and leaves an auto-generated snapshot behind.
>
> If we power the guest VM down, the migration works as expected. Is there
> a known bug for this? Shall we open a new one?
>
> Setup:
> ovirt - Dell PowerEdge R630
>  - CentOS Linux release 7.6.1810 (Core)
>  - ovirt-engine-4.2.8.2-1.el7.noarch
>  - kernel-3.10.0-957.10.1.el7.x86_64
> hypervisors- Dell PowerEdge R640
>  - CentOS Linux release 7.6.1810 (Core)
>  - kernel-3.10.0-957.10.1.el7.x86_64
>  - vdsm-4.20.46-1.el7.x86_64
>  - libvirt-5.0.0-1.el7.x86_64
>  - qemu-kvm-ev-2.12.0-18.el7_6.3.1.x86_64
> storage domain  - netapp NFS share
>
>
> logs are attached
>
> --
> Ladislav Humenik
>
> System administrator
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/VSKUEPUOPJDSRWYYMZEKAVTZ62YP6UK2/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/YVDEMZED7TSZNRIV3CURBI3YUKUXV5ZT/


[ovirt-users] Global maintenance and fencing of hosts

2019-04-12 Thread Andreas Elvers
I am wondering whether global maintenance inhibits fencing of non-responsive 
hosts. Is this so? 

Background: I plan on migrating the engine from one cluster to another. I 
understand this means to backup/restore the engine. While migrating the engine 
it is shut down and all VMs will continue running. This is good. When starting 
the engine in the new location, I really don't want the engine to fence any 
host on its own, because of reasons I can not yet know.

So is global maintenance enough to suppress fencing, or do I have to deactivate 
fencing on all hosts? 
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/3KRJRAWBFC4QAXVYXUGXVA3324USBHBN/


[ovirt-users] Re: Migrate self-hosted engine between cluster

2019-04-12 Thread Andreas Elvers
I am in the process of migrating the engine to a new cluster. I hope I will 
accomplish it this weekend. Fingers crossed.

What you need to know:

The migration is really a backup and restore process. 

1. You create a backup of the engine.
2. Place the cluster into global maintenance and shutdown the engine. 
3. Then you create a new engine on the other cluster and before engine-setup 
you restore the engine backup.
4. Start the new engine.

From here I'm not yet sure what should happen in what order:

- re-install the nodes with the old engine to remove the Engine HA Setup from 
that host.
- re-install nodes on the new cluster to receive the Engine HA Setup.

There is documentation about backup and restore of the engine here: 
https://ovirt.org/documentation/self-hosted/chap-Backing_up_and_Restoring_an_EL-Based_Self-Hosted_Environment.html
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/H4IDZJHKCLOVCDEN4ZRJI4KSPNPLWP5H/


[ovirt-users] Re: spam

2019-04-12 Thread Sandro Bonazzola
On Tue, Apr 9, 2019 at 13:13, Jorick Astrego wrote:

> We get a lot of spam lately, anything that can be done about this?
>
> I see the list is powered by Mailman
>
>
> https://wikitech.wikimedia.org/wiki/Lists.wikimedia.org#Fighting_spam_in_mailman
>

Opening a ticket to infra



>
>
>
>
>
> Met vriendelijke groet, With kind regards,
>
> Jorick Astrego
>
> *Netbulae Virtualization Experts *
> --
> Tel: 053 20 30 270 i...@netbulae.eu Staalsteden 4-3A KvK 08198180
> Fax: 053 20 30 271 www.netbulae.eu 7547 TA Enschede BTW NL821234584B01
> --
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/5CEVNY337NJAT7JMRPX4D34WQN6JUPTE/
>


-- 

SANDRO BONAZZOLA

MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV

Red Hat EMEA 

sbona...@redhat.com

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/LP34QK2WGD5HBTMYFYA6ZELODHPZBCKX/


[ovirt-users] Re: HostedEngine cleaned up

2019-04-12 Thread tau
Adding to what me and my colleague shared

I am able to locate the disk images of the VMs. I copied some of them and tried 
to boot them from another standalone kvm host; however, booting the disk images 
wasn't successful, as they landed in rescue mode. The strange part is that the 
VM disk images are 64MB in size, which doesn't seem to be normal for a disk 
image (see the command extract below).

[root@gohan 019a7072-43d5-44b5-bb86-7a7327f02087]# pwd
/gluster_bricks/data/data/659de125-5671-4777-b27e-974aec0a4c9c/images/019a7072-43d5-44b5-bb86-7a7327f02087
[root@gohan 019a7072-43d5-44b5-bb86-7a7327f02087]# ll -h
total 66M
-rw-rw. 2 vdsm kvm  64M Mar 22 13:48 f5f97478-6ccb-48bc-93b7-2fd5939f40bf
-rw-rw. 2 vdsm kvm 1.0M Mar  4 12:10 
f5f97478-6ccb-48bc-93b7-2fd5939f40bf.lease
-rw-r--r--. 2 vdsm kvm  317 Mar 22 11:06 
f5f97478-6ccb-48bc-93b7-2fd5939f40bf.meta


Please share insights on how I can reconstruct the disk image so that it can 
become bootable on the kvm host.

Thanks in advance for the reply. 

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/ZIQNXSY5GAKY2KFOUEH3SMFXGQKIX7V4/


[ovirt-users] Live storage migration is failing in 4.2.8

2019-04-12 Thread Ladislav Humenik
Hello, we have recently updated a few ovirts from 4.2.5 to version 4.2.8 
(actually 9 ovirt engine nodes), where live storage migration 
stopped working and leaves an auto-generated snapshot behind.


If we power the guest VM down, the migration works as expected. Is there 
a known bug for this? Shall we open a new one?


Setup:
ovirt - Dell PowerEdge R630
        - CentOS Linux release 7.6.1810 (Core)
        - ovirt-engine-4.2.8.2-1.el7.noarch
        - kernel-3.10.0-957.10.1.el7.x86_64
hypervisors    - Dell PowerEdge R640
        - CentOS Linux release 7.6.1810 (Core)
        - kernel-3.10.0-957.10.1.el7.x86_64
        - vdsm-4.20.46-1.el7.x86_64
        - libvirt-5.0.0-1.el7.x86_64
        - qemu-kvm-ev-2.12.0-18.el7_6.3.1.x86_64
storage domain  - netapp NFS share


logs are attached

--
Ladislav Humenik

System administrator

2019-04-12 10:39:25,503+0200 INFO  (jsonrpc/0) [api.virt] START 
diskReplicateStart(srcDisk={'device': 'disk', 'poolID': 
'b1a475aa-c084-46e5-b65a-bf4a47143c88', 'volumeID': 
'5c2738a4-4279-4cc3-a0de-6af1095f8879', 'domainID': 
'e5bb3e8a-a9c6-4581-8c6a-67d4ee7609f5', 'imageID': 
'9a66bf0f-1333-4931-ad58-f6f1aa1143be'}, dstDisk={'device': 'disk', 'poolID': 
'b1a475aa-c084-46e5-b65a-bf4a47143c88', 'volumeID': 
'5c2738a4-4279-4cc3-a0de-6af1095f8879', 'domainID': 
'244dfdfb-2662-4103-9d39-2b13153f2047', 'imageID': 
'9a66bf0f-1333-4931-ad58-f6f1aa1143be'}) from=:::10.76.98.4,57566, 
flow_id=97b620d9-6e65-4573-9fdf-5b119764fbb7, 
vmId=71f27df0-f54f-4a2e-a51c-e61aa26b370d (api:46)
2019-04-12 10:39:25,513+0200 INFO  (jsonrpc/0) [vdsm.api] START 
prepareImage(sdUUID='244dfdfb-2662-4103-9d39-2b13153f2047', 
spUUID='b1a475aa-c084-46e5-b65a-bf4a47143c88', 
imgUUID='9a66bf0f-1333-4931-ad58-f6f1aa1143be', 
leafUUID='5c2738a4-4279-4cc3-a0de-6af1095f8879', allowIllegal=False) 
from=:::10.76.98.4,57566, flow_id=97b620d9-6e65-4573-9fdf-5b119764fbb7, 
task_id=78dde3c9-74fb-4588-8cfa-117f0bbe2d2d (api:46)
2019-04-12 10:39:25,630+0200 INFO  (jsonrpc/0) [storage.StorageDomain] Fixing 
permissions on 
/rhev/data-center/mnt/removed-IP-address:_bs01aF1C1v1/244dfdfb-2662-4103-9d39-2b13153f2047/images/9a66bf0f-1333-4931-ad58-f6f1aa1143be/cbe93bfe-9df0-4f12-a44e-9e8fa6ec24f2
 (fileSD:623)
2019-04-12 10:39:25,631+0200 INFO  (jsonrpc/0) [storage.StorageDomain] Fixing 
permissions on 
/rhev/data-center/mnt/removed-IP-address:_bs01aF1C1v1/244dfdfb-2662-4103-9d39-2b13153f2047/images/9a66bf0f-1333-4931-ad58-f6f1aa1143be/5c2738a4-4279-4cc3-a0de-6af1095f8879
 (fileSD:623)
2019-04-12 10:39:25,632+0200 INFO  (jsonrpc/0) [storage.StorageDomain] Creating 
domain run directory 
u'/var/run/vdsm/storage/244dfdfb-2662-4103-9d39-2b13153f2047' (fileSD:577)
2019-04-12 10:39:25,632+0200 INFO  (jsonrpc/0) [storage.fileUtils] Creating 
directory: /var/run/vdsm/storage/244dfdfb-2662-4103-9d39-2b13153f2047 mode: 
None (fileUtils:197)
2019-04-12 10:39:25,632+0200 INFO  (jsonrpc/0) [storage.StorageDomain] Creating 
symlink from 
/rhev/data-center/mnt/removed-IP-address:_bs01aF1C1v1/244dfdfb-2662-4103-9d39-2b13153f2047/images/9a66bf0f-1333-4931-ad58-f6f1aa1143be
 to 
/var/run/vdsm/storage/244dfdfb-2662-4103-9d39-2b13153f2047/9a66bf0f-1333-4931-ad58-f6f1aa1143be
 (fileSD:580)
2019-04-12 10:39:25,637+0200 INFO  (jsonrpc/0) [vdsm.api] FINISH prepareImage 
return={'info': {'path': 
u'/rhev/data-center/mnt/removed-IP-address:_bs01aF1C1v1/244dfdfb-2662-4103-9d39-2b13153f2047/images/9a66bf0f-1333-4931-ad58-f6f1aa1143be/5c2738a4-4279-4cc3-a0de-6af1095f8879',
 'type': 'file'}, 'path': 
u'/rhev/data-center/mnt/removed-IP-address:_bs01aF1C1v1/244dfdfb-2662-4103-9d39-2b13153f2047/images/9a66bf0f-1333-4931-ad58-f6f1aa1143be/5c2738a4-4279-4cc3-a0de-6af1095f8879',
 'imgVolumesInfo': [{'domainID': '244dfdfb-2662-4103-9d39-2b13153f2047', 
'leaseOffset': 0, 'path': 
u'/rhev/data-center/mnt/removed-IP-address:_bs01aF1C1v1/244dfdfb-2662-4103-9d39-2b13153f2047/images/9a66bf0f-1333-4931-ad58-f6f1aa1143be/cbe93bfe-9df0-4f12-a44e-9e8fa6ec24f2',
 'volumeID': u'cbe93bfe-9df0-4f12-a44e-9e8fa6ec24f2', 'leasePath': 
u'/rhev/data-center/mnt/removed-IP-address:_bs01aF1C1v1/244dfdfb-2662-4103-9d39-2b13153f2047/images/9a66bf0f-1333-4931-ad58-f6f1aa1143be/cbe93bfe-9df0-4f12-a44e-9e8fa6ec24f2.lease',
 'imageID': '9a66bf0f-1333-4931-ad58-f6f1aa1143be'}, {'domainID': 
'244dfdfb-2662-4103-9d39-2b13153f2047', 'leaseOffset': 0, 'path': 
u'/rhev/data-center/mnt/removed-IP-address:_bs01aF1C1v1/244dfdfb-2662-4103-9d39-2b13153f2047/images/9a66bf0f-1333-4931-ad58-f6f1aa1143be/5c2738a4-4279-4cc3-a0de-6af1095f8879',
 'volumeID': u'5c2738a4-4279-4cc3-a0de-6af1095f8879', 'leasePath': 
u'/rhev/data-center/mnt/removed-IP-address:_bs01aF1C1v1/244dfdfb-2662-4103-9d39-2b13153f2047/images/9a66bf0f-1333-4931-ad58-f6f1aa1143be/5c2738a4-4279-4cc3-a0de-6af1095f8879.lease',
 'imageID': '9a66bf0f-1333-4931-ad58-f6f1aa1143be'}]} 
from=:::10.76.98.4,57566, flow_id=97b620d9-6e65-4573-9fdf-5b119764fbb7, 
task_id=78dde3c9-74fb-4588-8cfa-117f0bbe2d2d (api:52)
2019-04-12 10:39:25,637+0200 

[ovirt-users] Re: [Gluster-users] Gluster snapshot fails

2019-04-12 Thread Strahil Nikolov
Hello All,
it seems that "systemd-1" comes from the automount unit, and not from the 
systemd mount unit.
[root@ovirt1 system]# systemctl cat gluster_bricks-isos.automount
# /etc/systemd/system/gluster_bricks-isos.automount
[Unit]
Description=automount for gluster brick ISOS

[Automount]
Where=/gluster_bricks/isos

[Install]
WantedBy=multi-user.target



Best Regards,
Strahil Nikolov

On Friday, April 12, 2019, 4:12:31 AM GMT-4, Strahil Nikolov wrote:
 
Hello All,
I have tried to enable debug logging to see the reason for the issue. Here is 
the relevant glusterd.log:
[2019-04-12 07:56:54.526508] E [MSGID: 106077] 
[glusterd-snapshot.c:1882:glusterd_is_thinp_brick] 0-management: Failed to get 
pool name for device systemd-1
[2019-04-12 07:56:54.527509] E [MSGID: 106121] 
[glusterd-snapshot.c:2523:glusterd_snapshot_create_prevalidate] 0-management: 
Failed to pre validate
[2019-04-12 07:56:54.527525] E [MSGID: 106024] 
[glusterd-snapshot.c:2547:glusterd_snapshot_create_prevalidate] 0-management: 
Snapshot is supported only for thin provisioned LV. Ensure that all bricks of 
isos are thinly provisioned LV.
[2019-04-12 07:56:54.527539] W [MSGID: 106029] 
[glusterd-snapshot.c:8613:glusterd_snapshot_prevalidate] 0-management: Snapshot 
create pre-validation failed
[2019-04-12 07:56:54.527552] W [MSGID: 106121] 
[glusterd-mgmt.c:147:gd_mgmt_v3_pre_validate_fn] 0-management: Snapshot 
Prevalidate Failed
[2019-04-12 07:56:54.527568] E [MSGID: 106121] 
[glusterd-mgmt.c:1015:glusterd_mgmt_v3_pre_validate] 0-management: Pre 
Validation failed for operation Snapshot on local node
[2019-04-12 07:56:54.527583] E [MSGID: 106121] 
[glusterd-mgmt.c:2377:glusterd_mgmt_v3_initiate_snap_phases] 0-management: Pre 
Validation Failed

Here is the output of lvscan & lvs:
[root@ovirt1 ~]# lvscan
  ACTIVE    '/dev/gluster_vg_md0/my_vdo_thinpool' [9.86 TiB] inherit
  ACTIVE    '/dev/gluster_vg_md0/gluster_lv_data' [500.00 GiB] inherit
  ACTIVE    '/dev/gluster_vg_md0/gluster_lv_isos' [50.00 GiB] inherit
  ACTIVE    '/dev/gluster_vg_ssd/my_ssd_thinpool' [168.59 GiB] inherit
  ACTIVE    '/dev/gluster_vg_ssd/gluster_lv_engine' [40.00 GiB] inherit
  ACTIVE    '/dev/centos_ovirt1/swap' [6.70 GiB] inherit
  ACTIVE    '/dev/centos_ovirt1/home' [1.00 GiB] inherit
  ACTIVE    '/dev/centos_ovirt1/root' [60.00 GiB] inherit
[root@ovirt1 ~]# lvs --noheadings -o pool_lv



  my_vdo_thinpool
  my_vdo_thinpool

  my_ssd_thinpool

[root@ovirt1 ~]# ssh ovirt2 "lvscan;lvs --noheadings -o pool_lv"
  ACTIVE    '/dev/gluster_vg_md0/my_vdo_thinpool' [<9.77 TiB] inherit
  ACTIVE    '/dev/gluster_vg_md0/gluster_lv_data' [500.00 GiB] inherit
  ACTIVE    '/dev/gluster_vg_md0/gluster_lv_isos' [50.00 GiB] inherit
  ACTIVE    '/dev/gluster_vg_ssd/my_ssd_thinpool' [<161.40 GiB] inherit
  ACTIVE    '/dev/gluster_vg_ssd/gluster_lv_engine' [40.00 GiB] inherit
  ACTIVE    '/dev/centos_ovirt2/root' [15.00 GiB] inherit
  ACTIVE    '/dev/centos_ovirt2/home' [1.00 GiB] inherit
  ACTIVE    '/dev/centos_ovirt2/swap' [16.00 GiB] inherit



  my_vdo_thinpool
  my_vdo_thinpool

  my_ssd_thinpool

[root@ovirt1 ~]# ssh ovirt3 "lvscan;lvs --noheadings -o pool_lv"
  ACTIVE    '/dev/gluster_vg_sda3/gluster_thinpool_sda3' [41.00 GiB] 
inherit
  ACTIVE    '/dev/gluster_vg_sda3/gluster_lv_data' [15.00 GiB] inherit
  ACTIVE    '/dev/gluster_vg_sda3/gluster_lv_isos' [15.00 GiB] inherit
  ACTIVE    '/dev/gluster_vg_sda3/gluster_lv_engine' [15.00 GiB] inherit
  ACTIVE    '/dev/centos_ovirt3/root' [20.00 GiB] inherit
  ACTIVE    '/dev/centos_ovirt3/home' [1.00 GiB] inherit
  ACTIVE    '/dev/centos_ovirt3/swap' [8.00 GiB] inherit



  gluster_thinpool_sda3
  gluster_thinpool_sda3
  gluster_thinpool_sda3


I am mounting my bricks via systemd , as I have issues with bricks being 
started before VDO.
[root@ovirt1 ~]# findmnt /gluster_bricks/isos
TARGET   SOURCE FSTYPE OPTIONS
/gluster_bricks/isos systemd-1  autofs 
rw,relatime,fd=31,pgrp=1,timeout=0,minproto=5,maxproto=5,direct,pipe_ino=21843
/gluster_bricks/isos /dev/mapper/gluster_vg_md0-gluster_lv_isos xfs    
rw,noatime,nodiratime,seclabel,attr2,inode64,noquota
[root@ovirt1 ~]# ssh ovirt2 "findmnt /gluster_bricks/isos "
TARGET   SOURCE FSTYPE OPTIONS
/gluster_bricks/isos systemd-1  autofs 
rw,relatime,fd=26,pgrp=1,timeout=0,minproto=5,maxproto=5,direct,pipe_ino=14279
/gluster_bricks/isos /dev/mapper/gluster_vg_md0-gluster_lv_isos xfs    
rw,noatime,nodiratime,seclabel,attr2,inode64,noquota
[root@ovirt1 ~]# ssh ovirt3 "findmnt /gluster_bricks/isos "
TARGET   SOURCE  FSTYPE OPTIONS
/gluster_bricks/isos systemd-1   autofs 

[ovirt-users] Re: [Gluster-users] Gluster snapshot fails

2019-04-12 Thread Strahil Nikolov
 Hello All,
I have enabled debug logging to try to find the reason for the issue. Here is
the relevant glusterd.log:
[2019-04-12 07:56:54.526508] E [MSGID: 106077] 
[glusterd-snapshot.c:1882:glusterd_is_thinp_brick] 0-management: Failed to get 
pool name for device systemd-1
[2019-04-12 07:56:54.527509] E [MSGID: 106121] 
[glusterd-snapshot.c:2523:glusterd_snapshot_create_prevalidate] 0-management: 
Failed to pre validate
[2019-04-12 07:56:54.527525] E [MSGID: 106024] 
[glusterd-snapshot.c:2547:glusterd_snapshot_create_prevalidate] 0-management: 
Snapshot is supported only for thin provisioned LV. Ensure that all bricks of 
isos are thinly provisioned LV.
[2019-04-12 07:56:54.527539] W [MSGID: 106029] 
[glusterd-snapshot.c:8613:glusterd_snapshot_prevalidate] 0-management: Snapshot 
create pre-validation failed
[2019-04-12 07:56:54.527552] W [MSGID: 106121] 
[glusterd-mgmt.c:147:gd_mgmt_v3_pre_validate_fn] 0-management: Snapshot 
Prevalidate Failed
[2019-04-12 07:56:54.527568] E [MSGID: 106121] 
[glusterd-mgmt.c:1015:glusterd_mgmt_v3_pre_validate] 0-management: Pre 
Validation failed for operation Snapshot on local node
[2019-04-12 07:56:54.527583] E [MSGID: 106121] 
[glusterd-mgmt.c:2377:glusterd_mgmt_v3_initiate_snap_phases] 0-management: Pre 
Validation Failed
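
The first error is the root of the failure: glusterd_is_thinp_brick tries to
resolve the brick's backing device to an LVM thin pool, but the device it
picks up for an autofs-managed mount point is the placeholder "systemd-1".
A rough, hand-run equivalent of that check (an assumption inferred from the
log above, not the actual glusterd code) would be:

# Sketch of the failing check; /gluster_bricks/isos is the brick from this thread.
brick=/gluster_bricks/isos
# With an active automount, the first SOURCE reported is "systemd-1", not the LV:
dev=$(findmnt -n -o SOURCE --target "$brick" | head -n 1)
echo "$dev"
# Asking LVM for a thin pool on "systemd-1" cannot succeed, hence
# "Failed to get pool name for device systemd-1":
lvs --noheadings -o pool_lv "$dev"

Note that the lvscan/lvs output below shows the brick LVs really do sit in
thin pools, so the prevalidation failure is a device-name resolution problem,
not actual thick provisioning.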

Here is the output of lvscan & lvs:
[root@ovirt1 ~]# lvscan
  ACTIVE    '/dev/gluster_vg_md0/my_vdo_thinpool' [9.86 TiB] inherit
  ACTIVE    '/dev/gluster_vg_md0/gluster_lv_data' [500.00 GiB] inherit
  ACTIVE    '/dev/gluster_vg_md0/gluster_lv_isos' [50.00 GiB] inherit
  ACTIVE    '/dev/gluster_vg_ssd/my_ssd_thinpool' [168.59 GiB] inherit
  ACTIVE    '/dev/gluster_vg_ssd/gluster_lv_engine' [40.00 GiB] inherit
  ACTIVE    '/dev/centos_ovirt1/swap' [6.70 GiB] inherit
  ACTIVE    '/dev/centos_ovirt1/home' [1.00 GiB] inherit
  ACTIVE    '/dev/centos_ovirt1/root' [60.00 GiB] inherit
[root@ovirt1 ~]# lvs --noheadings -o pool_lv



  my_vdo_thinpool
  my_vdo_thinpool

  my_ssd_thinpool

[root@ovirt1 ~]# ssh ovirt2 "lvscan;lvs --noheadings -o pool_lv"
  ACTIVE    '/dev/gluster_vg_md0/my_vdo_thinpool' [<9.77 TiB] inherit
  ACTIVE    '/dev/gluster_vg_md0/gluster_lv_data' [500.00 GiB] inherit
  ACTIVE    '/dev/gluster_vg_md0/gluster_lv_isos' [50.00 GiB] inherit
  ACTIVE    '/dev/gluster_vg_ssd/my_ssd_thinpool' [<161.40 GiB] inherit
  ACTIVE    '/dev/gluster_vg_ssd/gluster_lv_engine' [40.00 GiB] inherit
  ACTIVE    '/dev/centos_ovirt2/root' [15.00 GiB] inherit
  ACTIVE    '/dev/centos_ovirt2/home' [1.00 GiB] inherit
  ACTIVE    '/dev/centos_ovirt2/swap' [16.00 GiB] inherit



  my_vdo_thinpool
  my_vdo_thinpool

  my_ssd_thinpool

[root@ovirt1 ~]# ssh ovirt3 "lvscan;lvs --noheadings -o pool_lv"
  ACTIVE    '/dev/gluster_vg_sda3/gluster_thinpool_sda3' [41.00 GiB] inherit
  ACTIVE    '/dev/gluster_vg_sda3/gluster_lv_data' [15.00 GiB] inherit
  ACTIVE    '/dev/gluster_vg_sda3/gluster_lv_isos' [15.00 GiB] inherit
  ACTIVE    '/dev/gluster_vg_sda3/gluster_lv_engine' [15.00 GiB] inherit
  ACTIVE    '/dev/centos_ovirt3/root' [20.00 GiB] inherit
  ACTIVE    '/dev/centos_ovirt3/home' [1.00 GiB] inherit
  ACTIVE    '/dev/centos_ovirt3/swap' [8.00 GiB] inherit



  gluster_thinpool_sda3
  gluster_thinpool_sda3
  gluster_thinpool_sda3


I am mounting my bricks via systemd automount units, as I had issues with
bricks being started before VDO was ready.
[root@ovirt1 ~]# findmnt /gluster_bricks/isos
TARGET   SOURCE FSTYPE OPTIONS
/gluster_bricks/isos systemd-1  autofs 
rw,relatime,fd=31,pgrp=1,timeout=0,minproto=5,maxproto=5,direct,pipe_ino=21843
/gluster_bricks/isos /dev/mapper/gluster_vg_md0-gluster_lv_isos xfs    
rw,noatime,nodiratime,seclabel,attr2,inode64,noquota
[root@ovirt1 ~]# ssh ovirt2 "findmnt /gluster_bricks/isos "
TARGET   SOURCE FSTYPE OPTIONS
/gluster_bricks/isos systemd-1  autofs 
rw,relatime,fd=26,pgrp=1,timeout=0,minproto=5,maxproto=5,direct,pipe_ino=14279
/gluster_bricks/isos /dev/mapper/gluster_vg_md0-gluster_lv_isos xfs    
rw,noatime,nodiratime,seclabel,attr2,inode64,noquota
[root@ovirt1 ~]# ssh ovirt3 "findmnt /gluster_bricks/isos "
TARGET   SOURCE  FSTYPE OPTIONS
/gluster_bricks/isos systemd-1   autofs 
rw,relatime,fd=35,pgrp=1,timeout=0,minproto=5,maxproto=5,direct,pipe_ino=17770
/gluster_bricks/isos /dev/mapper/gluster_vg_sda3-gluster_lv_isos xfs    
rw,noatime,nodiratime,seclabel,attr2,inode64,logbsize=256k,sunit=512,swidth=1024,noquota
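
findmnt returns two rows per brick: the autofs placeholder first, then the
real xfs mount, and the placeholder is apparently what glusterd sees. An
alternative to a full .mount unit would be a plain fstab entry ordered after
VDO (a sketch; x-systemd.requires needs a reasonably recent systemd, and the
device path is copied from the output above):

# /etc/fstab entry (hypothetical) that avoids the autofs indirection:
/dev/mapper/gluster_vg_md0-gluster_lv_isos  /gluster_bricks/isos  xfs  noatime,nodiratime,x-systemd.requires=vdo.service  0 0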


[root@ovirt1 ~]# grep "gluster_bricks" /proc/mounts
systemd-1 /gluster_bricks/data autofs 
rw,relatime,fd=22,pgrp=1,timeout=0,minproto=5,maxproto=5,direct,pipe_ino=21513 
0 0
systemd-1 /gluster_bricks/engine autofs