Re: [ovirt-users] Katello integration

2015-06-08 Thread Sven Kieske

Hi,

I just wanted to say that this is a very cool and
useful feature!

Sadly, CentOS still does not provide
the errata information, because Red Hat
has no data source that CentOS can use (at least
that's my latest information from the CentOS devel list).
-- 
Kind regards

Sven Kieske

System administrator
Mittwald CM Service GmbH & Co. KG
Königsberger Straße 6
32339 Espelkamp
T: +49-5772-293-100
F: +49-5772-293-333
https://www.mittwald.de
Managing director: Robert Meyer
Tax no.: 331/5721/1033, VAT ID: DE814773217, HRA 6640, Local Court Bad Oeynhausen
General partner: Robert Meyer Verwaltungs GmbH, HRB 13260, Local Court Bad Oeynhausen
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] [Gluster-users] HA storage based on two nodes with one point of failure

2015-06-08 Thread Юрий Полторацкий
2015-06-08 8:32 GMT+03:00 Ravishankar N ravishan...@redhat.com:



 On 06/08/2015 02:38 AM, Юрий Полторацкий wrote:

 Hi,

 I have set up a lab with the config listed below and got an unexpected
 result. Can someone please tell me where I went wrong?

 I am testing oVirt. The Data Center has two clusters: the first is a computing
 cluster with three nodes (node1, node2, node3); the second is a storage cluster
 (node5, node6) based on GlusterFS (replica 2).

 I want the storage to be HA. I have read here
 https://access.redhat.com/documentation/en-US/Red_Hat_Storage/3/html/Administration_Guide/sect-Managing_Split-brain.html
 next:
 For a replicated volume with two nodes and one brick on each machine, if
 the server-side quorum is enabled and one of the nodes goes offline, the
 other node will also be taken offline because of the quorum configuration.
 As a result, the high availability provided by the replication is
 ineffective. To prevent this situation, a dummy node can be added to the
 trusted storage pool which does not contain any bricks. This ensures that
 even if one of the nodes which contains data goes offline, the other node
 will remain online. Note that if the dummy node and one of the data nodes
 go offline, the brick on the other node will also be taken offline, and
 will result in data unavailability.

 So, I have added my Engine (not self-hosted) as a dummy node without a
 brick and have configured quorum as listed below:
 cluster.quorum-type: fixed
 cluster.quorum-count: 1
 cluster.server-quorum-type: server
 cluster.server-quorum-ratio: 51%


 Then I ran a VM and dropped the network link from node6; after about an
 hour I switched the link back, and after a while got a split-brain. But why?
 No one could have written to the brick on node6: the VM was
 running on node3 and node1 was the SPM.



 It could have happened that after node6 came up, the client(s) saw a
 temporary disconnect of node5 and a write happened at that time. When
 node5 is connected again, we have AFR xattrs on both nodes blaming each
 other, causing split-brain. For a replica 2 setup, it is best to set the
 client-quorum to auto instead of fixed. What this means is that the first
 node of the replica must always be up for writes to be permitted. If the
 first node goes down, the volume becomes read-only.

Yes, at first I tested with client-quorum set to auto, but my VMs were
paused when the first node went down, and that is not acceptable.

OK, I understood: there is no way to have fault-tolerant storage with
only two servers using GlusterFS. I have to get another one.

Thanks.
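
For anyone wanting to apply Ravi's suggestion on an existing volume, the switch to
client-side quorum "auto" would look roughly like this (a sketch, assuming the volume
name vol3 used elsewhere in this thread):

# let the client-side quorum be computed automatically
# (in replica 2 this means the first brick must be up for writes)
gluster volume set vol3 cluster.quorum-type auto
# the explicit quorum-count is then no longer needed
gluster volume reset vol3 cluster.quorum-count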


 For better availability, it would be better to use a replica 3 volume
 (again with client-quorum set to auto). If you are using glusterfs
 3.7, you can also consider using the arbiter configuration [1] for replica
 3.

 [1]
 https://github.com/gluster/glusterfs/blob/master/doc/features/afr-arbiter-volumes.md

 Thanks,
 Ravi


  Gluster's log from node6:
 Jun 07 15:35:06 node6.virt.local etc-glusterfs-glusterd.vol[28491]:
 [2015-06-07 12:35:06.106270] C [MSGID: 106002]
 [glusterd-server-quorum.c:356:glusterd_do_volume_quorum_action]
 0-management: Server quorum lost for volume vol3. Stopping local bricks.
 Jun 07 16:30:06 node6.virt.local etc-glusterfs-glusterd.vol[28491]:
 [2015-06-07 13:30:06.261505] C [MSGID: 106003]
 [glusterd-server-quorum.c:351:glusterd_do_volume_quorum_action]
 0-management: Server quorum regained for volume vol3. Starting local bricks.


 gluster volume heal vol3 info
 Brick node5.virt.local:/storage/brick12/
 /5d0bb2f3-f903-4349-b6a5-25b549affe5f/dom_md/ids - Is in split-brain

 Number of entries: 1

 Brick node6.virt.local:/storage/brick13/
 /5d0bb2f3-f903-4349-b6a5-25b549affe5f/dom_md/ids - Is in split-brain

 Number of entries: 1


 gluster volume info vol3

 Volume Name: vol3
 Type: Replicate
 Volume ID: 69ba8c68-6593-41ca-b1d9-40b3be50ac80
 Status: Started
 Number of Bricks: 1 x 2 = 2
 Transport-type: tcp
 Bricks:
 Brick1: node5.virt.local:/storage/brick12
 Brick2: node6.virt.local:/storage/brick13
 Options Reconfigured:
 storage.owner-gid: 36
 storage.owner-uid: 36
 cluster.server-quorum-type: server
 cluster.quorum-type: fixed
 network.remote-dio: enable
 cluster.eager-lock: enable
 performance.stat-prefetch: off
 performance.io-cache: off
 performance.read-ahead: off
 performance.quick-read: off
 auth.allow: *
 user.cifs: disable
 nfs.disable: on
 performance.readdir-ahead: on
 cluster.quorum-count: 1
 cluster.server-quorum-ratio: 51%



 On 06.06.2015 12:09, Юрий Полторацкий wrote:

 Hi,

  I want to build HA storage based on two servers. I want the storage to
 remain available in RW mode if one of them goes down.

  If I use replica 2, then split-brain can occur. To avoid this I
 would use a quorum. As I understand it, I can use quorum on the client
 side, on the server side, or on both. I want to add a dummy node without a
 brick and make such a config:

 cluster.quorum-type: fixed
 

Re: [ovirt-users] Comma Separated Value doesn't accept by custom properties

2015-06-08 Thread Dan Kenigsberg
On Mon, Jun 08, 2015 at 11:15:45AM +0800, Punit Dambiwal wrote:
 Hi Dan,
 
 The results are below :-
 
 [root@mgmt ~]# engine-config -g UserDefinedVMProperties
 UserDefinedVMProperties:  version: 3.0
 UserDefinedVMProperties:  version: 3.1
 UserDefinedVMProperties:  version: 3.2
 UserDefinedVMProperties:  version: 3.3
 UserDefinedVMProperties:  version: 3.4
 UserDefinedVMProperties: noipspoof=^[0-9.]*$ version: 3.5
 [root@mgmt ~]#
 
 I used the following command to set this :-
 
 [root@mgmt ~]# engine-config -s
 UserDefinedVMProperties=noipspoof=^[0-9.]*$

Oops, please add a comma character inside the brackets. If it works,
please mark
https://gerrit.ovirt.org/42031
as verified.
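
For reference, the corrected property would presumably be set along these lines (a
sketch; the comma goes inside the character class, and the cluster level should match
your setup):

[root@mgmt ~]# engine-config -s 'UserDefinedVMProperties=noipspoof=^[0-9.,]*$' --cver=3.5
[root@mgmt ~]# service ovirt-engine restart   # typically required for the new value to take effect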
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Move VirtIO disk to External LUN disk

2015-06-08 Thread Daniel Helgenberger


On 06.06.2015 17:48, Juan Carlos YJ. Lin wrote:
 I have a performance issue with the VirtIO disks and would like to move them to an external
 LUN on the iSCSI storage
What version are you using? I assume here 3.5.

Do you want the disk image moved, or the *content* of the image moved to
your iSCSI LUN so the VM boots directly from the LUN? The first
is quite easy, and I assume this is what you mean:
  to use the iSCSI passthrough
 How can I do that?
1. Connect your LUN to your hosts.

Then, in the GUI:

2. Create a new iSCSI storage domain on it.
3. Shut down the affected VM.
4. Navigate to the Disks tab, select the affected disks, right-click ->
Move Disk (or select several and use the button). In the pop-up window,
select the target storage domain (your new iSCSI LUN).

The SPM host will now do the moving using the qemu-img or dd command,
depending on your oVirt version. Note that if the source domain was NFS,
your images are converted from files to block storage, with all the pros
and cons.
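
For reference, the copy that the SPM performs is conceptually similar to the following
(a rough sketch with placeholder paths in angle brackets, not the exact vdsm invocation):

# convert/copy a file-based (NFS) image onto the block device backing the new iSCSI volume
qemu-img convert -p -f raw -O raw \
    /rhev/data-center/<pool-id>/<nfs-domain-id>/images/<image-id>/<volume-id> \
    /dev/<iscsi-domain-vg>/<new-volume-id>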

Hope that helps,


 Juan Carlos Lin
 Unisoft S.A.
 +595-993-288330




-- 
Daniel Helgenberger
m box bewegtbild GmbH

P: +49/30/2408781-22
F: +49/30/2408781-10

ACKERSTR. 19
D-10115 BERLIN


www.m-box.de  www.monkeymen.tv

Managing directors: Martin Retschitzegger / Michaela Göllner
Commercial register: Amtsgericht Charlottenburg / HRB 112767
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Comma Separated Value doesn't accept by custom properties

2015-06-08 Thread Punit Dambiwal
Hi Dan,

I can now save the comma-separated IP addresses... but the problem still
persists:

1. With one network interface... I can add multiple IP addresses, comma
separated...
2. But if I add another network interface to this VM... the VM cannot boot up...
3. What do I need to do if I want to assign multiple IP addresses on multiple
network interfaces?

Thanks,
Punit

On Mon, Jun 8, 2015 at 4:54 PM, Dan Kenigsberg dan...@redhat.com wrote:

 On Mon, Jun 08, 2015 at 11:15:45AM +0800, Punit Dambiwal wrote:
  Hi Dan,
 
  The results are below :-
 
  [root@mgmt ~]# engine-config -g UserDefinedVMProperties
  UserDefinedVMProperties:  version: 3.0
  UserDefinedVMProperties:  version: 3.1
  UserDefinedVMProperties:  version: 3.2
  UserDefinedVMProperties:  version: 3.3
  UserDefinedVMProperties:  version: 3.4
  UserDefinedVMProperties: noipspoof=^[0-9.]*$ version: 3.5
  [root@mgmt ~]#
 
  I used the following command to set this :-
 
  [root@mgmt ~]# engine-config -s
  UserDefinedVMProperties=noipspoof=^[0-9.]*$

 Oops, please add a comma character inside the brackets. If it works,
 please mark
 https://gerrit.ovirt.org/42031
 as verified.

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Questions about Spice Proxy

2015-06-08 Thread David Jaša
Hi Kevin,

The port that the proxy listens on is up to you; 3128 and 8080 are the
usual ports. What you need to set up on the proxy is to allow connections to
ports 5900-6123 of the hosts (the hosts' display-network NICs if you use a display
network).
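
If you do end up using squid for this, a minimal configuration along these lines should
be enough (a sketch assuming squid's standard ACL syntax; adjust the listening port and
the destination port range to your environment):

http_port 3128
# allow CONNECT tunnelling only to the SPICE/VNC port range on the hosts
acl spice_ports port 5900-6123
acl CONNECT method CONNECT
http_access allow CONNECT spice_ports
http_access deny all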

David

On Pá, 2015-06-05 at 10:38 +0200, Kevin COUSIN wrote:
 Hi list,
 
 I am trying to set up a SPICE proxy with HAProxy. It works fine for the WebUI. I
 defined the proxy with engine-config -s
 SpiceProxyDefault="http://mygreatproxy.tld:8080". Do the connections go
 to the nodes on the same port (here 8080, or 3128 as in the documentation), or
 to the engine? Do I need to install a squid proxy instead of HAProxy?
 
 Regards
 
 
 
Kevin C.
 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] [Gluster-users] HA storage based on two nodes with one point of failure

2015-06-08 Thread Ravishankar N



On 06/08/2015 02:38 AM, Юрий Полторацкий wrote:

Hi,

I have set up a lab with the config listed below and got an unexpected
result. Can someone please tell me where I went wrong?


I am testing oVirt. The Data Center has two clusters: the first is a
computing cluster with three nodes (node1, node2, node3); the second is a
storage cluster (node5, node6) based on GlusterFS (replica 2).


I want the storage to be HA. I have read here 
https://access.redhat.com/documentation/en-US/Red_Hat_Storage/3/html/Administration_Guide/sect-Managing_Split-brain.html 
next:
For a replicated volume with two nodes and one brick on each machine,
if the server-side quorum is enabled and one of the nodes goes
offline, the other node will also be taken offline because of the
quorum configuration. As a result, the high availability provided by
the replication is ineffective. To prevent this situation, a dummy
node can be added to the trusted storage pool which does not contain
any bricks. This ensures that even if one of the nodes which contains
data goes offline, the other node will remain online. Note that if the
dummy node and one of the data nodes go offline, the brick on the other
node will also be taken offline, and will result in data
unavailability.


So, I have added my Engine (not self-hosted) as a dummy node without 
a brick and have configured quorum as listed below:

cluster.quorum-type: fixed
cluster.quorum-count: 1
cluster.server-quorum-type: server
cluster.server-quorum-ratio: 51%


Then I ran a VM and dropped the network link from node6;
after about an hour I switched the link back, and after a while
got a split-brain. But why? No one could have written to the brick on node6:
the VM was running on node3 and node1 was the SPM.





It could have happened that after node6 came up, the client(s) saw a
temporary disconnect of node5 and a write happened at that time. When
node5 is connected again, we have AFR xattrs on both nodes blaming
each other, causing split-brain. For a replica 2 setup, it is best to
set the client-quorum to auto instead of fixed. What this means is that
the first node of the replica must always be up for writes to be
permitted. If the first node goes down, the volume becomes read-only.
For better availability, it would be better to use a replica 3 volume
(again with client-quorum set to auto). If you are using glusterfs
3.7, you can also consider using the arbiter configuration [1] for
replica 3.


[1] 
https://github.com/gluster/glusterfs/blob/master/doc/features/afr-arbiter-volumes.md
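
For illustration, creating such an arbiter volume from scratch would look roughly like
this (a sketch; node7 and its brick path are placeholders, and glusterfs >= 3.7 is
assumed):

gluster volume create vol3 replica 3 arbiter 1 \
    node5.virt.local:/storage/brick12 \
    node6.virt.local:/storage/brick13 \
    node7.virt.local:/storage/arbiter-brick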


Thanks,
Ravi



Gluster's log from node6:
Jun 07 15:35:06 node6.virt.local etc-glusterfs-glusterd.vol[28491]: 
[2015-06-07 12:35:06.106270] C [MSGID: 106002] 
[glusterd-server-quorum.c:356:glusterd_do_volume_quorum_action] 
0-management: Server quorum lost for volume vol3. Stopping local bricks.
Jun 07 16:30:06 node6.virt.local etc-glusterfs-glusterd.vol[28491]: 
[2015-06-07 13:30:06.261505] C [MSGID: 106003] 
[glusterd-server-quorum.c:351:glusterd_do_volume_quorum_action] 
0-management: Server quorum regained for volume vol3. Starting local 
bricks.



gluster volume heal vol3 info
Brick node5.virt.local:/storage/brick12/
/5d0bb2f3-f903-4349-b6a5-25b549affe5f/dom_md/ids - Is in split-brain

Number of entries: 1

Brick node6.virt.local:/storage/brick13/
/5d0bb2f3-f903-4349-b6a5-25b549affe5f/dom_md/ids - Is in split-brain

Number of entries: 1


gluster volume info vol3

Volume Name: vol3
Type: Replicate
Volume ID: 69ba8c68-6593-41ca-b1d9-40b3be50ac80
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: node5.virt.local:/storage/brick12
Brick2: node6.virt.local:/storage/brick13
Options Reconfigured:
storage.owner-gid: 36
storage.owner-uid: 36
cluster.server-quorum-type: server
cluster.quorum-type: fixed
network.remote-dio: enable
cluster.eager-lock: enable
performance.stat-prefetch: off
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
auth.allow: *
user.cifs: disable
nfs.disable: on
performance.readdir-ahead: on
cluster.quorum-count: 1
cluster.server-quorum-ratio: 51%



On 06.06.2015 12:09, Юрий Полторацкий wrote:

Hi,

I want to build HA storage based on two servers. I want the storage to
remain available in RW mode if one of them goes down.


If I use replica 2, then split-brain can occur. To avoid this I
would use a quorum. As I understand it, I can use quorum on the
client side, on the server side, or on both. I want to add a dummy node
without a brick and make such a config:


cluster.quorum-type: fixed
cluster.quorum-count: 1
cluster.server-quorum-type: server
cluster.server-quorum-ratio: 51%
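
For reference, these options would normally be applied with gluster volume set, roughly
like this (a sketch, assuming a volume named vol3 as elsewhere in the thread; the
server-quorum ratio is a cluster-wide option and is set on "all"):

gluster volume set vol3 cluster.quorum-type fixed
gluster volume set vol3 cluster.quorum-count 1
gluster volume set vol3 cluster.server-quorum-type server
gluster volume set all cluster.server-quorum-ratio 51%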

I expect that the client will have access in RW mode as long as one brick
is alive. On the other hand, if the server-side quorum is not met, then the
bricks will be RO.


Say, HOST1 with a brick BRICK1, HOST2 with a brick BRICK2, and HOST3 
without a brick.


Once HOST1 loses its network connection, then on this node the server quorum
will not be met 

Re: [ovirt-users] Comma Separated Value doesn't accept by custom properties

2015-06-08 Thread Dan Kenigsberg
On Mon, Jun 08, 2015 at 05:43:40PM +0800, Punit Dambiwal wrote:
 Hi Dan,
 
 I can now save the comma-separated IP addresses... but the problem still
 persists:
 
 1. With one network interface... I can add multiple IP addresses, comma
 separated...
 2. But if I add another network interface to this VM... the VM cannot boot up...
 3. What do I need to do if I want to assign multiple IP addresses on multiple
 network interfaces?

Can you share your vdsm.log of this boot failure?

Regards,
Dan.
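
For reference, vdsm.log normally lives on the host at /var/log/vdsm/vdsm.log; grabbing
the tail around the failed boot, for example, would look like this (adjust the line
count as needed):

tail -n 500 /var/log/vdsm/vdsm.log > vdsm-boot-failure.txt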
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users