[ovirt-users] VM get stuck randomly

2016-03-12 Thread Christophe TREFOIS
Dear all,

For a couple of weeks now I have had a problem where, at random, one VM (not 
always the same one) becomes completely unresponsive.
We usually find out because our Icinga server reports the host as down.

Upon inspection, we find we can’t open a console to the VM, nor can we log in.

In the oVirt engine, the VM appears to be “up”. The only odd thing is that RAM 
usage shows 0% and CPU usage shows 100% or 75%, depending on the number of cores.
The only way to recover is to force the VM off by issuing shutdown twice from 
the engine.

Could you please help me start debugging this?
I can provide any logs, but I’m not sure which ones would help, because I 
couldn’t see anything marked ERROR in the VDSM logs on the host.
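For reference, a first pass over the host-side logs could look like the sketch 
below; the log paths are the usual oVirt host defaults, and the VM name is a 
placeholder:

# Scan the VDSM log for warnings as well as errors
grep -E 'WARN|ERROR' /var/log/vdsm/vdsm.log | tail -n 50

# Check the per-VM QEMU log (replace "myvm" with the VM's name)
tail -n 50 /var/log/libvirt/qemu/myvm.log

# Ask libvirt (read-only) what state it thinks the guests are in
virsh -r list --all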

The host is running:

OS Version:         RHEL - 7 - 1.1503.el7.centos.2.8
Kernel Version:     3.10.0 - 229.14.1.el7.x86_64
KVM Version:        2.1.2 - 23.el7_1.8.1
LIBVIRT Version:    libvirt-1.2.8-16.el7_1.4
VDSM Version:       vdsm-4.16.26-0.el7.centos
SPICE Version:      0.12.4 - 9.el7_1.3
GlusterFS Version:  glusterfs-3.7.5-1.el7

We use a locally exported Gluster volume as the storage domain (i.e., the 
storage is on the same machine, exposed via Gluster). No replica.
We run around 50 VMs on that host.

Thank you for your help with this,

—
Christophe
  



Re: [ovirt-users] QEMU GlusterFS support in oVirt

2016-03-12 Thread Samuli Heinonen

> On 12 Mar 2016, at 17:04, Nir Soffer  wrote:
> 
> On Sat, Mar 12, 2016 at 1:55 PM, Samuli Heinonen  
> wrote:
>> Hello all,
>> 
>> It seems that oVirt 3.6 is still using FUSE to access GlusterFS storage
>> domains instead of the QEMU driver (libgfapi). As far as I know, libgfapi
>> support should be available in the Libvirt and QEMU packages provided in
>> CentOS 7.
> 
> We started to work on this during 3.6 development, but the work was
> suspended because libvirt and qemu do not support multiple gluster
> servers [1]. This means that if your single server is down, you will
> not be able to connect to gluster.

Since this is only used to fetch volume information when connecting to a 
Gluster volume, I don’t think it should be treated as a blocking issue. If we 
lose one storage server, that’s a problem we have to fix as soon as possible 
anyway. Even then, the hypervisors are already connected to the other storage 
servers and there is no need to fetch volume information again. There are also 
other ways to work around this, such as having a separate hostname that is used 
only to fetch volume information and that can be repointed to a server that’s 
up, as in the sketch below.
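A minimal sketch of that hostname workaround, assuming a hypothetical dedicated 
name gluster-volfile.example.com used only for the initial volfile fetch (all 
names and addresses here are made up):

# Round-robin A records for the volfile hostname; repoint them
# when a storage server goes down.
dig +short gluster-volfile.example.com
192.0.2.11
192.0.2.12

# For the current FUSE mounts, glusterfs already supports an explicit
# fallback list for the volfile fetch:
mount -t glusterfs -o backup-volfile-servers=192.0.2.12:192.0.2.13 \
    gluster-volfile.example.com:/data /mnt/data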

It would be great if libgfapi could be made available, even as a selectable 
option, so that it could be tested in a real-world situation.

> Recently Niels suggested that we use DNS for this purpose - if the DNS
> returns multiple servers, libgfapi should be able to fail over to one of
> these servers, so connecting with a single server address should be as
> good as multiple server support in libvirt or qemu.
> 
> The changes needed to support this are not big, as you can see in [2],
> [3]. However, the work was not completed and I don't know if it will be
> completed for 4.0.

Thank you for these links. I’ll see if this is something we can try out in our 
test environment.

Best regards,
Samuli

> 
>> Are there any workarounds to use libgfapi with oVirt before it’s
>> officially available?
> 
> I don't know about any.
> 
> [1] https://bugzilla.redhat.com/1247521
> [2] https://gerrit.ovirt.org/44061
> [3] https://gerrit.ovirt.org/33768
> 
> Nir



Re: [ovirt-users] QEMU GlusterFS support in oVirt

2016-03-12 Thread Niels de Vos
On Sat, Mar 12, 2016 at 05:04:16PM +0200, Nir Soffer wrote:
> On Sat, Mar 12, 2016 at 1:55 PM, Samuli Heinonen  
> wrote:
> > Hello all,
> >
> > It seems that oVirt 3.6 is still using FUSE to access GlusterFS storage
> > domains instead of the QEMU driver (libgfapi). As far as I know, libgfapi
> > support should be available in the Libvirt and QEMU packages provided in
> > CentOS 7.
> 
> We started to work on this during 3.6 development, but the work was
> suspended because libvirt and qemu do not support multiple gluster
> servers [1]. This means that if your single server is down, you will
> not be able to connect to gluster.
> 
> Recently Niels suggested that we use DNS for this purpose - if the DNS
> returns multiple servers, libgfapi should be able to fail over to one of
> these servers, so connecting with a single server address should be as
> good as multiple server support in libvirt or qemu.

And in case the local oVirt Node is part of the Gluster Trusted Storage
Pool (aka running GlusterD), qemu can use "localhost" to connect to the
storage too. It is only the initial connection that would benefit from
the added fail-over by multiple hosts. Once the connection is
established, qemu/libgfapi will connect to all the bricks that
participate in the volume. That means that only starting or attaching a
new disk to a running VM is impacted when the gluster:// URL is used
with a storage server that is down. In case oVirt/VDSM knows what
storage servers are up, it could even select one of those and not use a
server that is down.
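For illustration, this is roughly what the libgfapi access path looks like from 
the qemu side; the volume name "data" and the image path are hypothetical, and 
qemu must be built with gluster support:

# Probe an image over libgfapi rather than through a FUSE mount
qemu-img info gluster://localhost/data/images/vm-disk.qcow2

# The same URL form works for a guest's drive
qemu-system-x86_64 ... \
    -drive file=gluster://localhost/data/images/vm-disk.qcow2,format=qcow2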

I've left a similar note in [1]; maybe it encourages starting with a
"single host" solution. Extending it to multiple hostnames should then
be pretty simple, and it allows us to start further testing and doing
the other integration bits.

And in case someone cares about (raw) sparse files (not possible over
FUSE until Linux 4.5), glusterfs-3.8 will provide a huge improvement. A
qemu patch utilizing it is under review at [4].

HTH,
Niels


> The changes needed to support this are not big, as you can see in [2],
> [3]. However, the work was not completed and I don't know if it will be
> completed for 4.0.
> 
> > Are there any workarounds to use libgfapi with oVirt before it's
> > officially available?
> 
> I don't know about any.
> 
> [1] https://bugzilla.redhat.com/1247521
> [2] https://gerrit.ovirt.org/44061
> [3] https://gerrit.ovirt.org/33768

[4] http://lists.nongnu.org/archive/html/qemu-block/2016-03/msg00288.html

> 
> Nir




Re: [ovirt-users] QEMU GlusterFS support in oVirt

2016-03-12 Thread Nir Soffer
On Sat, Mar 12, 2016 at 1:55 PM, Samuli Heinonen  wrote:
> Hello all,
>
> It seems that oVirt 3.6 is still using FUSE to access GlusterFS storage
> domains instead of the QEMU driver (libgfapi). As far as I know, libgfapi
> support should be available in the Libvirt and QEMU packages provided in
> CentOS 7.

We started to work on this during 3.6 development, but the work was suspended 
because libvirt and qemu do not support multiple gluster servers [1]. This 
means that if your single server is down, you will not be able to connect to 
gluster.

Recently Niels suggested that we use DNS for this purpose - if the DNS returns 
multiple servers, libgfapi should be able to fail over to one of these servers, 
so connecting with a single server address should be as good as multiple server 
support in libvirt or qemu.

The changes needed to support this are not big, as you can see in [2], [3]. 
However, the work was not completed and I don't know if it will be completed 
for 4.0.

> Are there any workarounds to use libgfapi with oVirt before it’s
> officially available?

I don't know about any.

[1] https://bugzilla.redhat.com/1247521
[2] https://gerrit.ovirt.org/44061
[3] https://gerrit.ovirt.org/33768

Nir


[ovirt-users] Disks Snapshot

2016-03-12 Thread Marcelo Leandro
Good morning,

I have a question: when I take a snapshot, a new LV (logical volume) is 
created. However, when I delete the snapshot, the LV is not removed. Is that 
expected?
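A minimal sketch of how one could check this, using the VG name (the storage 
domain UUID) visible in the symlink targets below:

# List the LVs in the storage domain VG; after the engine finishes
# deleting a snapshot, its LV should disappear from this list.
lvs c2dc0101-748e-4a7b-9913-47993eaa52bd

# Include the tags VDSM stores on each LV to map LVs back to images
lvs -o lv_name,lv_size,lv_tags c2dc0101-748e-4a7b-9913-47993eaa52bd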

[root@srv-qemu03 93633835-d709-4ebb-9317-903e62064c43]# ls
27a8bca3-f984-4f67-9dd2-9e2fc5a5f366  7d9b6ed0-1125-4215-ab76-37bcda3f6c2d
3fba372c-4c39-4843-be9e-b358b196331d  b47f58e0-d576-49be-b8aa-f30581a0373a
5097df27-c676-4ee7-af89-ecdaed2c77be  c598bb22-a386-4908-bfa1-7c44bd764c96
5aaf9ce9-d7ad-4607-aab9-2e239ebaed51
[root@srv-qemu03 93633835-d709-4ebb-9317-903e62064c43]# ls -l
total 0
lrwxrwxrwx. 1 vdsm kvm 78 Mar 12 09:28
27a8bca3-f984-4f67-9dd2-9e2fc5a5f366 ->
/dev/c2dc0101-748e-4a7b-9913-47993eaa52bd/27a8bca3-f984-4f67-9dd2-9e2fc5a5f366
lrwxrwxrwx. 1 vdsm kvm 78 Mar 12 09:31
3fba372c-4c39-4843-be9e-b358b196331d ->
/dev/c2dc0101-748e-4a7b-9913-47993eaa52bd/3fba372c-4c39-4843-be9e-b358b196331d
lrwxrwxrwx. 1 vdsm kvm 78 Mar 12 08:44
5097df27-c676-4ee7-af89-ecdaed2c77be ->
/dev/c2dc0101-748e-4a7b-9913-47993eaa52bd/5097df27-c676-4ee7-af89-ecdaed2c77be
lrwxrwxrwx. 1 vdsm kvm 78 Mar 12 09:23
5aaf9ce9-d7ad-4607-aab9-2e239ebaed51 ->
/dev/c2dc0101-748e-4a7b-9913-47993eaa52bd/5aaf9ce9-d7ad-4607-aab9-2e239ebaed51
lrwxrwxrwx. 1 vdsm kvm 78 Mar 12 09:12
7d9b6ed0-1125-4215-ab76-37bcda3f6c2d ->
/dev/c2dc0101-748e-4a7b-9913-47993eaa52bd/7d9b6ed0-1125-4215-ab76-37bcda3f6c2d
lrwxrwxrwx. 1 vdsm kvm 78 Nov 27 22:30
b47f58e0-d576-49be-b8aa-f30581a0373a ->
/dev/c2dc0101-748e-4a7b-9913-47993eaa52bd/b47f58e0-d576-49be-b8aa-f30581a0373a
lrwxrwxrwx. 1 vdsm kvm 78 Mar 11 22:01
c598bb22-a386-4908-bfa1-7c44bd764c96 ->
/dev/c2dc0101-748e-4a7b-9913-47993eaa52bd/c598bb22-a386-4908-bfa1-7c44bd764c96



Snapshot disks:
[root@srv-qemu03 93633835-d709-4ebb-9317-903e62064c43]# qemu-img info
27a8bca3-f984-4f67-9dd2-9e2fc5a5f366
image: 27a8bca3-f984-4f67-9dd2-9e2fc5a5f366
file format: qcow2
virtual size: 112G (120259084288 bytes)
disk size: 0
cluster_size: 65536
backing file: 
../93633835-d709-4ebb-9317-903e62064c43/b47f58e0-d576-49be-b8aa-f30581a0373a
backing file format: raw
Format specific information:
compat: 0.10
refcount bits: 16


[root@srv-qemu03 93633835-d709-4ebb-9317-903e62064c43]# qemu-img info
3fba372c-4c39-4843-be9e-b358b196331d
image: 3fba372c-4c39-4843-be9e-b358b196331d
file format: qcow2
virtual size: 112G (120259084288 bytes)
disk size: 0
cluster_size: 65536
backing file: 
../93633835-d709-4ebb-9317-903e62064c43/b47f58e0-d576-49be-b8aa-f30581a0373a
backing file format: raw
Format specific information:
compat: 0.10
refcount bits: 16

[root@srv-qemu03 93633835-d709-4ebb-9317-903e62064c43]# qemu-img info
5aaf9ce9-d7ad-4607-aab9-2e239ebaed51
image: 5aaf9ce9-d7ad-4607-aab9-2e239ebaed51
file format: qcow2
virtual size: 112G (120259084288 bytes)
disk size: 0
cluster_size: 65536
backing file: 
../93633835-d709-4ebb-9317-903e62064c43/b47f58e0-d576-49be-b8aa-f30581a0373a
backing file format: raw
Format specific information:
compat: 0.10
refcount bits: 16


Base disk:
[root@srv-qemu03 93633835-d709-4ebb-9317-903e62064c43]# qemu-img info
b47f58e0-d576-49be-b8aa-f30581a0373a
image: b47f58e0-d576-49be-b8aa-f30581a0373a
file format: raw
virtual size: 112G (120259084288 bytes)
disk size: 0
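
As an aside, the whole backing chain can be inspected in one command; a small 
sketch using the images above:

# Print info for the active layer and every backing file beneath it
qemu-img info --backing-chain 27a8bca3-f984-4f67-9dd2-9e2fc5a5f366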


Thanks.


[ovirt-users] QEMU GlusterFS support in oVirt

2016-03-12 Thread Samuli Heinonen
Hello all,

It seems that oVirt 3.6 is still using FUSE to access GlusterFS storage domains 
instead of the QEMU driver (libgfapi). As far as I know, libgfapi support 
should be available in the Libvirt and QEMU packages provided in CentOS 7. Are 
there any workarounds to use libgfapi with oVirt before it’s officially 
available?

Best regards,
Samuli Heinonen