Re: [ovirt-users] Ovirt-Engine HA

2016-08-19 Thread Yaniv Kaul
On Fri, Aug 19, 2016 at 8:11 PM, Scott  wrote:

> Hi Sandvik,
>
> I believe this ultimately is what the self hosted engine is planned to
> solve. It's up to you as to whether you feel comfortable with that idea
> currently.
>

Indeed. And many in the community are using it.


> Outside of that, I would point you to the Red Hat Cluster Suite in a
> RHEV/RHEL environment. The high availability component of that is based on
> Linux-HA but I have only used Red Hat's implementation.
>
> Lastly, the poor man's solution, if this isn't critical infrastructure,
> would be a hot standby server. Keep the packages in sync between the two
> however you feel is appropriate. Run engine-backup as frequently as needed
> and copy it to the second server. Have your engine address as a secondary
> address with appropriate DNS and move it between them in a failover (or
> use a CNAME). Restore the backup in a failover and run engine-setup. I
> think you get the idea.
>

I don't like this approach very much - you always forget some
configuration. (For example, we hit an issue with the Kerberos database
later on when migrating.)
You are also 'wasting' a server.
Y.


> Personally I use hosted engine in all but my main production cluster. That
> one will use it once I've upgraded those hosts to RHEL 7.
>
> Hope that helps.
>
> Scott
>
> On Fri, Aug 19, 2016, 11:40 AM Sandvik Agustin 
> wrote:
>
>> Hi Ovirt Users,
>>
>> Do you have any documentation, tutorial, howto or workaround for setting
>> up ovirt-engine HA? I've tried to Google this but I can't find any.
>>
>> Let's say I want to have two engines: engine1 is currently managing two
>> hypervisors, and engine2 is my reserve engine, so I can still manage the
>> VMs running inside my hypervisors if engine1 ever fails.
>>
>> TIA :)
>> Sandvik
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
>
>


Re: [ovirt-users] Ovirt-Engine HA

2016-08-19 Thread Sandvik Agustin
Hi Scott,

Thanks for the quick reply. By "self hosted engine", are you talking about
the hosted-engine? I tried hosted-engine, I think twice, and I'm having
problems with VLANs: I can't manipulate the NICs of the hosted engine.

I think I need to read more about this "self hosted engine".

TIA
Sandvik

On Sat, Aug 20, 2016 at 1:11 AM, Scott  wrote:

> Hi Sandvik,
>
> I believe this ultimately is what the self hosted engine is planned to
> solve. It's up to you as to whether you feel comfortable with that idea
> currently.
>
> Outside of that, I would point you to the Red Hat Cluster Suite in a
> RHEV/RHEL environment. The high availability component of that is based on
> Linux-HA but I have only used Red Hat's implementation.
>
> Lastly, the poor man's solution, if this isn't critical infrastructure,
> would be a hot standby server. Keep the packages in sync between the two
> however you feel is appropriate. Run engine-backup as frequently as needed
> and copy it to the second server. Have your engine address as a secondary
> address with appropriate DNS and move it between them in a failover (or
> use a CNAME). Restore the backup in a failover and run engine-setup. I
> think you get the idea.
>
> Personally I use hosted engine in all but my main production cluster. That
> one will use it once I've upgraded those hosts to RHEL 7.
>
> Hope that helps.
>
> Scott
>
> On Fri, Aug 19, 2016, 11:40 AM Sandvik Agustin 
> wrote:
>
>> Hi Ovirt Users,
>>
>> Do you have any documentation, tutorial, howto or workaround for setting
>> up ovirt-engine HA? I've tried to Google this but I can't find any.
>>
>> Let's say I want to have two engines: engine1 is currently managing two
>> hypervisors, and engine2 is my reserve engine, so I can still manage the
>> VMs running inside my hypervisors if engine1 ever fails.
>>
>> TIA :)
>> Sandvik
>


Re: [ovirt-users] HostedEngine NIC setup files

2016-08-19 Thread Hanson

My mistake. Needed NM_CONTROLLED=no added.
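For anyone hitting the same thing: a minimal ifcfg for the engine VM with that fix applied might look like this (the addresses are placeholders, not taken from this thread):

```ini
# /etc/sysconfig/network-scripts/ifcfg-eth0 -- inside the hosted-engine VM
DEVICE=eth0
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=static
IPADDR=192.0.2.10        # placeholder address
NETMASK=255.255.255.0
# Keep NetworkManager from managing (and re-writing) this interface,
# so the static configuration survives reboots:
NM_CONTROLLED=no
```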


On 08/19/2016 12:58 PM, Hanson wrote:

Hi Guys,

I have edited /etc/sysconfig/network-scripts/ifcfg-eth0 and ifcfg-eth1
for the various subnets we needed.


Somewhere along the line, when the hosted-engine boots, eth1 comes up
but eth0 does not. If I log in via the interface that did come up and
run "ifup eth0", it comes up.


It is set to ONBOOT=yes in the config.

I know that on the nodes these files are overwritten on boot. Where
should I be editing them for the hosted-engine?



Thanks,

Hanson





Re: [ovirt-users] Ovirt-Engine HA

2016-08-19 Thread Scott
Hi Sandvik,

I believe this ultimately is what the self hosted engine is planned to
solve. It's up to you as to whether you feel comfortable with that idea
currently.

Outside of that, I would point you to the Red Hat Cluster Suite in a
RHEV/RHEL environment. The high availability component of that is based on
Linux-HA but I have only used Red Hat's implementation.

Lastly, the poor man's solution, if this isn't critical infrastructure,
would be a hot standby server. Keep the packages in sync between the two
however you feel is appropriate. Run engine-backup as frequently as needed
and copy it to the second server. Have your engine address as a secondary
address with appropriate DNS and move it between them in a failover (or use
a CNAME). Restore the backup in a failover and run engine-setup. I think
you get the idea.

Personally I use hosted engine in all but my main production cluster. That
one will use it once I've upgraded those hosts to RHEL 7.

Hope that helps.

Scott

On Fri, Aug 19, 2016, 11:40 AM Sandvik Agustin 
wrote:

> Hi Ovirt Users,
>
> Do you have any documentation, tutorial, howto or workaround for setting
> up ovirt-engine HA? I've tried to Google this but I can't find any.
>
> Let's say I want to have two engines: engine1 is currently managing two
> hypervisors, and engine2 is my reserve engine, so I can still manage the
> VMs running inside my hypervisors if engine1 ever fails.
>
> TIA :)
> Sandvik


[ovirt-users] HostedEngine NIC setup files

2016-08-19 Thread Hanson

Hi Guys,

I have edited /etc/sysconfig/network-scripts/ifcfg-eth0 and ifcfg-eth1
for the various subnets we needed.


Somewhere along the line, when the hosted-engine boots, eth1 comes up
but eth0 does not. If I log in via the interface that did come up and
run "ifup eth0", it comes up.


It is set to ONBOOT=yes in the config.

I know that on the nodes these files are overwritten on boot. Where
should I be editing them for the hosted-engine?



Thanks,

Hanson



[ovirt-users] Ovirt-Engine HA

2016-08-19 Thread Sandvik Agustin
Hi Ovirt Users,

Do you have any documentation, tutorial, howto or workaround for setting
up ovirt-engine HA? I've tried to Google this but I can't find any.

Let's say I want to have two engines: engine1 is currently managing two
hypervisors, and engine2 is my reserve engine, so I can still manage the
VMs running inside my hypervisors if engine1 ever fails.

TIA :)
Sandvik


[ovirt-users] qemu-kvm-ev-2.3.0-31.el7_2.21.1 available for testing on x86_64, ppc64le and aarch64

2016-08-19 Thread Sandro Bonazzola
qemu-kvm-ev-2.3.0-31.el7_2.21.1 has been tagged for testing for CentOS Virt
SIG and is already in testing repositories.
It's now available for x86_64, ppc64le and aarch64.
Please help by testing and providing feedback, thanks.
We plan to move it to the stable repo around Wednesday next week.
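For anyone who wants to test on CentOS 7, pulling in the build looks roughly like this (the release package is the usual Virt SIG one, but treat the exact testing repo id as an assumption and verify it first):

```shell
# Install the CentOS Virt SIG release package, which configures the
# qemu-kvm-ev repositories:
yum -y install centos-release-qemu-ev

# Pull the build from the testing repo; the repo id below is an
# assumption -- check `yum repolist all | grep -i qemu` first:
yum -y --enablerepo=centos-qemu-ev-test update qemu-kvm-ev
```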

ChangeLog since previous release:

* Fri Aug 19 2016 Sandro Bonazzola - ev-2.3.0-31.el7_2.21
  - Removing RH branding from package name
* Tue Aug 02 2016 Miroslav Rezanina - rhev-2.3.0-31.el7_2.21
  - kvm-block-iscsi-avoid-potential-overflow-of-acb-task-cdb.patch [bz#1358997]
  - Resolves: bz#1358997 (CVE-2016-5126 qemu-kvm-rhev: Qemu: block: iscsi:
    buffer overflow in iscsi_aio_ioctl [rhel-7.2.z])
* Wed Jul 27 2016 Miroslav Rezanina - rhev-2.3.0-31.el7_2.20
  - kvm-virtio-error-out-if-guest-exceeds-virtqueue-size.patch [bz#1359731]
  - Resolves: bz#1359731 (EMBARGOED CVE-2016-5403 qemu-kvm-rhev: Qemu: virtio:
    unbounded memory allocation on host via guest leading to DoS [rhel-7.2.z])
* Wed Jul 20 2016 Miroslav Rezanina - rhev-2.3.0-31.el7_2.19
  - kvm-qemu-sockets-use-qapi_free_SocketAddress-in-cleanup.patch [bz#1354090]
  - kvm-tap-use-an-exit-notifier-to-call-down_script.patch [bz#1354090]
  - kvm-slirp-use-exit-notifier-for-slirp_smb_cleanup.patch [bz#1354090]
  - kvm-net-do-not-use-atexit-for-cleanup.patch [bz#1354090]
  - Resolves: bz#1354090 (Boot guest with vhostuser server mode, QEMU prompt
    'Segmentation fault' after executing '(qemu)system_powerdown')
* Fri Jul 08 2016 Miroslav Rezanina - rhev-2.3.0-31.el7_2.18
  - kvm-vhost-user-disable-chardev-handlers-on-close.patch [bz#1351892]
  - kvm-char-clean-up-remaining-chardevs-when-leaving.patch [bz#1351892]
  - kvm-sockets-add-helpers-for-creating-SocketAddress-from-.patch [bz#1351892]
  - kvm-socket-unlink-unix-socket-on-remove.patch [bz#1351892]
  - kvm-char-do-not-use-atexit-cleanup-handler.patch [bz#1351892]
  - Resolves: bz#1351892 (vhost-user: A socket file is not deleted after VM's
    port is detached.)
* Tue Jun 28 2016 Miroslav Rezanina - rhev-2.3.0-31.el7_2.17
  - kvm-vhost-user-set-link-down-when-the-char-device-is-clo.patch [bz#1348593]
  - kvm-vhost-user-fix-use-after-free.patch [bz#1348593]
  - kvm-vhost-user-test-fix-up-rhel6-build.patch [bz#1348593]
  - kvm-vhost-user-test-fix-migration-overlap-test.patch [bz#1348593]
  - kvm-vhost-user-test-fix-chardriver-race.patch [bz#1348593]
  - kvm-vhost-user-test-use-unix-port-for-migration.patch [bz#1348593]
  - kvm-vhost-user-test-fix-crash-with-glib-2.36.patch [bz#1348593]
  - kvm-vhost-user-test-use-correct-ROM-to-speed-up-and-avoi.patch [bz#1348593]
  - kvm-tests-append-i386-tests.patch [bz#1348593]
  - kvm-vhost-user-add-ability-to-know-vhost-user-backend-di.patch [bz#1348593]
  - kvm-qemu-char-add-qemu_chr_disconnect-to-close-a-fd-acce.patch [bz#1348593]
  - kvm-vhost-user-disconnect-on-start-failure.patch [bz#1348593]
  - kvm-vhost-net-do-not-crash-if-backend-is-not-present.patch [bz#1348593]
  - kvm-vhost-net-save-restore-vhost-user-acked-features.patch [bz#1348593]
  - kvm-vhost-net-save-restore-vring-enable-state.patch [bz#1348593]
  - kvm-test-start-vhost-user-reconnect-test.patch [bz#1348593]
  - Resolves: bz#1348593 (No recovery after vhost-user process restart)

-- 
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com


Re: [ovirt-users] 3.5 / 3.6 stuck

2016-08-19 Thread Fernando Fuentes
Erik,

Awesome!

Regards,

--
Fernando Fuentes
ffuen...@txweather.org
http://www.txweather.org



On Thu, Aug 18, 2016, at 08:22 PM, Erik Brakke wrote:
> Fernando, thanks! That did it!
> - added the ovirt 3.6 repo to the host
> - tried to update vdsm but had complaints about no glusterfs >= 3.7.x
>   available
> - glusterfs 3.7.11 seems to be the latest with FC22 packages, so added
>   that as a repo
> - vdsm updated as expected
> - reinstalled host from GUI
> - fired right up
>
> Ya saved me!
>
> On Wed, Aug 17, 2016 at 10:23 PM Fernando Fuentes
>  wrote:
>> __
>> Erik,
>>
>> I had a similar issue in the past. I think that updating vdsm on
>> your host can do the trick.
>> Put your host in maintenance mode and upgrade it to 3.6.
>>
>> Make sure you install the 3.6 repos.
>>
>> Good luck!
>>
>> Regards,
>>
>> --
>> Fernando Fuentes
>> ffuen...@txweather.org
>> http://www.txweather.org
>>
>>
>>
>> On Wed, Aug 17, 2016, at 09:45 PM, Erik Brakke wrote:
>>> Hello,
>>> I changed my cluster and data center compatibility from 3.5 to 3.6
>>> with a 3.5 host.  D'oh!
>>> Engine: 3.6 on FC22
>>> Host: 3.5 on FC22 (local storage)
>>>
>>> - Can I change the data center and cluster back to 3.5?  How?
>>> - I tried to upgrade host to 3.6 in the GUI.  Maintenance->Reinstall
>>>   did not work, and Upgrade option is greyed out.  Is there a way to
>>>   upgrade host to 3.6?
>>>
>>> Thanks
>>
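Condensed into commands, the sequence Erik describes might look like this (the oVirt 3.6 release RPM URL is the standard one; the GlusterFS repo itself is distro-specific, so that step is left as a comment):

```shell
# On the FC22 host, after putting it into Maintenance in the engine GUI:

# 1. Add the oVirt 3.6 repository
yum -y install http://resources.ovirt.org/pub/yum-repo/ovirt-release36.rpm

# 2. If vdsm complains about glusterfs >= 3.7.x, add a GlusterFS 3.7
#    repository for your distribution first (3.7.11 was the latest with
#    FC22 packages at the time of this thread).

# 3. Update vdsm, then use "Reinstall" on the host from the engine GUI
yum -y update "vdsm*"
```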


[ovirt-users] Prefered Host to start VM instead of Pinned Host

2016-08-19 Thread Matt .
Hi Guys,

Is it an idea to have an option - not the first boot option - to set a
preferred host for a VM to start on?

If you remove this host, it should also not complain about a pinned
VM, as it would fall back to "any host in cluster" that way.

It's nice for static VMs on hosts that might be started on other
hosts when the preferred host is gone, dead or whatever.

Cheers,

Matt


Re: [ovirt-users] Cannot start hosted engine after last 4.0.1 packages

2016-08-19 Thread Matt .
As I have an OOB network, I changed my routing to see if it wanted to
upgrade packages; it turned out I was already on 4.0.2 when this
occurred.

The HE started in some strange way on one host. That host was even Non
Operational, but at least it had the ovirtmgmt network; I had changed
something there earlier which had never been an issue before.

I will now investigate whether I can change the maintenance states
again, because they seem to fail.
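For reference, the HA maintenance state is driven per-host with the hosted-engine CLI; these are its standard options (a general crib, not specific to this failure):

```shell
# Status of the HA agents and the engine VM as seen from this host
hosted-engine --vm-status

# Global maintenance: all HA agents stop monitoring/restarting the engine VM
hosted-engine --set-maintenance --mode=global

# Local maintenance: only this host withdraws from hosting the engine VM
hosted-engine --set-maintenance --mode=local

# Back to normal HA operation
hosted-engine --set-maintenance --mode=none
```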

2016-08-18 20:59 GMT+02:00 Matt . :
> It takes a while and this is the status output of systemctl after it,
> see attachment.
>
>
>
> 2016-08-18 17:40 GMT+02:00 Simone Tiraboschi :
>> On Thu, Aug 18, 2016 at 5:28 PM, Matt .  wrote:
>>> Sure, It's in the attachment as screenshot.
>>
>> can you please try with
>>  systemctl restart sanlock --force
>> ?
>>
>>> 2016-08-18 17:13 GMT+02:00 Simone Tiraboschi :
 On Thu, Aug 18, 2016 at 5:05 PM, Matt .  wrote:
> There is no storage issue - there is plenty of space, around 900G - and
> the same issue has shown up in the past.
>
> What is strange is that I cannot change the maintenance status of the
> hosts: whatever status I set, it is not changed, and the statuses seem
> to vary from Global to Local.
>
> So I'm stuck.

 can you please post the output of
  sanlock client status
 ?

> 2016-08-18 16:41 GMT+02:00 Nir Soffer :
>> On Thu, Aug 18, 2016 at 5:33 PM, Matt .  wrote:
>>> OK, it seems to be a sanlock-related issue which I cannot solve:
>>>
>>> vdsm reports "Failed to acquire lock: No space left on device" when I
>>> restart vdsmd, and even after removing the __DIRECT_IO_TEST__ file it
>>> is re-created on a vdsmd restart.
>>
>> __DIRECT_IO_TEST__ is an empty file, used to check whether vdsm can do
>> direct I/O to this storage; there is no need to remove it.
>>
>> "No space left on device" seems to be your problem, you should make some
>> space if you want to use this storage domain.
>>
>>>
>>> As I'm not able to change the maintenance mode on the hosts I need to
>>> figure out something else.
>>>
>>>
>>>
>>>
>>>
>>> 2016-08-18 10:40 GMT+02:00 Matt . :
 OK nice to know, I never saw that message before.

 I cannot do anything anymore with hosted engine. It doesn't change the
 maintenance state, it doesn't start the VM, and this happens on all
 hosts. Nothing else changed other than the ovirt* and vdsm* package
 upgrades.

 The HE storage is also mounted and the agent.log shows its
 information as normal.

 Showing logs is kinda an issue as I'm bound to ILO/Drac at the moment.




 2016-08-18 10:18 GMT+02:00 Simone Tiraboschi :
> On Thu, Aug 18, 2016 at 10:06 AM, Matt .  
> wrote:
>> Hello,
>>
>> I'm having issues after the last 4.0.1 package just before 4.0.2 came
>> out. My engine was running great, hosts started the vm also and the
>> hosted-engine command was working fine.
>>
>> Now I get an "rpcxml 3.6 deprecated" warning which I never got before.
>> See the attachment for this output.
>
> That one is just a warning; we are going to fix the root cause
> in the next version. It's absolutely harmless.
>
>> I checked whether IPv6 was disabled, but it isn't, as it is what is
>> needed for this.
>>
>> Any clue ?
>
> If you have any issue, it's somewhere else.
> What are the symptoms?
>
>> Thanks!
>>
>> Matt
>>


Re: [ovirt-users] Dedicated NICs for gluster network

2016-08-19 Thread Nicolas Ecarnot

Le 19/08/2016 à 13:43, Sahina Bose a écrit :



Or are you adding the 3 nodes to your existing cluster? If so, I
suggest you try adding this to a new cluster

OK, I tried and succeeded in creating a new cluster.
In this new cluster, I was ABLE to add the first new host, using
its mgmt DNS name.
This first host still has to have its NICs configured, and (using
Chrome or FF) opening the network settings window stalls the
browser (I even tried restarting the engine, to no avail). Thus, I
cannot set up this first node's NICs.

Thus, I cannot add any further hosts, because oVirt relies on a
first host to validate the following ones.



Network team should be able to help you here.



OK, there was no way I could continue like this (browser crash), so I
tried the following and succeeded:

- remove the newly created host and cluster
- create a new DATACENTER
- create a new cluster in this DC
- add the first new host : OK
- add the 2 other new hosts : OK

Now, I can smoothly configure their NICs.

Doing all this, I saw that oVirt detected there already was an existing
gluster cluster and volume, and integrated them into oVirt.


Then, I was able to create a new storage domain in this new DC and
cluster, using one of the hosts' *gluster* FQDNs. It went nicely.


BUT, when viewing the volume tab and brick details, the displayed brick
names are the host DNS names, and NOT the host GLUSTER DNS names.


I'm worried about this, which is confirmed by what I read in the logs:

2016-08-19 14:46:30,484 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc] (DefaultQuartzScheduler_Worker-100) [107dc2e3] Could not associate brick 'serv-vm-al04-data.sdis.isere.fr:/gluster/data/brick04' of volume '35026521-e76e-4774-8ddf-0a701b9eb40c' with correct network as no gluster network found in cluster '1c8e75a0-af3f-4e97-a8fb-2f7ef3ed9f30'

2016-08-19 14:46:30,492 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc] (DefaultQuartzScheduler_Worker-100) [107dc2e3] Could not associate brick 'serv-vm-al05-data.sdis.isere.fr:/gluster/data/brick04' of volume '35026521-e76e-4774-8ddf-0a701b9eb40c' with correct network as no gluster network found in cluster '1c8e75a0-af3f-4e97-a8fb-2f7ef3ed9f30'

2016-08-19 14:46:30,500 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc] (DefaultQuartzScheduler_Worker-100) [107dc2e3] Could not associate brick 'serv-vm-al06-data.sdis.isere.fr:/gluster/data/brick04' of volume '35026521-e76e-4774-8ddf-0a701b9eb40c' with correct network as no gluster network found in cluster '1c8e75a0-af3f-4e97-a8fb-2f7ef3ed9f30'


[oVirt shell (connected)]# list clusters

id : 0001-0001-0001-0001-0045
name   : cluster51
description: Cluster d'alerte de test

id : 1c8e75a0-af3f-4e97-a8fb-2f7ef3ed9f30
name   : cluster52
description: Cluster d'alerte de test

[oVirt shell (connected)]#

"cluster52" is the recent cluster, and I do have a dedicated gluster
network, marked as a gluster network, in the correct DC and cluster.

The only points are that:
- Each host has its name ("serv-vm-al04") and a second name for gluster
("serv-vm-al04-data").
- Using blahblahblah-data is correct from a gluster point of view.
- Maybe oVirt is disturbed by not being able to ping the gluster FQDNs
(not routed) and is then throwing this error?


--
Nicolas ECARNOT


Re: [ovirt-users] Dedicated NICs for gluster network

2016-08-19 Thread Sahina Bose
On Fri, Aug 19, 2016 at 2:33 PM, Nicolas Ecarnot 
wrote:

> Le 19/08/2016 à 09:55, Sahina Bose a écrit :
>
>
>
> On Fri, Aug 19, 2016 at 12:29 PM, Nicolas Ecarnot 
> wrote:
>
>> Hello,
>>
>> I'm digging out this thread because I now had the time to work on this
>> subject, and I'm stuck.
>>
>> This oVirt setup has a standalone engine, and 3 hosts.
>> These 3 hosts are hypervisors and gluster nodes, each using one NIC for
>> all the traffic, that is a very bad idea. (Well, it's working, but not
>> recommended).
>>
>> I added 3 OTHER nodes, and so far, I only created the gluster setup and
>> created a replica-3 volume.
>> Each of these new nodes now have one NIC for management, one NIC for
>> gluster, and other NICs for other things.
>> Each NIC has an IP + DNS name in its dedicated VLAN : one for mgmt and
>> one for gluster.
>> The mgmt subnet is routed, though the gluster subnet is not.
>> Every node can ping each other, either using the mgmt or the gluster
>> subnets.
>>
>> The creation of the gluster subnet and volume went very well and seems to
>> be perfect.
>>
>> Now, in the oVirt web gui, I'm trying to add these nodes as oVirt hosts.
>> I'm using their mgmt DNS names, and I'm getting :
>> "Error while executing action: Server  is already part of another
>> cluster."
>>
>
> Did you peer probe the gluster cluster prior to adding the nodes to oVirt?
>
>
> Yes, and using their "gluster subnet" names.
> It went fine.
>
> What's the output of "gluster peer status"
>
>
> [root@serv-vm-al04 log]# gluster peer status
>
> Number of Peers: 2
>
> Hostname: serv-vm-al05-data.sdis.isere.fr
>
> Uuid: eddb3c6d-2e98-45ca-bd1f-6d2153bbb60e
>
> State: Peer in Cluster (Connected)
>
> Hostname: serv-vm-al06-data.sdis.isere.fr
>
> Uuid: cafefdf3-ffc3-4589-abf6-6ca76905593b
>
> State: Peer in Cluster (Connected)
>
>
> On the two other nodes, the same command output is OK.
>


>
>
> If I understand correctly:
> node1 - mgmt.ip.1 & gluster.ip.1
> node2 - mgmt.ip.2 & gluster.ip.2
> node3 - mgmt.ip.3 & gluster.ip.3
>
> Right
>
>
> Did you create a network and assign "gluster" role to it in the cluster?
>
> I created a gluster network, but did not assign the gluster role so far,
> as my former 3 hosts had no dedicated NIC not ip for that.
> I planned to assign this role once my 3 new hosts were part of the game.
>
> Were you able to add the first node to cluster
>
> No
>
> , and got this error on second node addition ?
>
> I had the error when trying to add the first node.
>
> From the error, it looks like oVirt does not understand the peer list
> returned from gluster is a match with node being added.
>
> Sounds correct
>
> Please provide the log snippet of the failure (from engine.log as well as
> vdsm.log on node)
>
> See attached file
>



I couldn't view the attached log files for some reason, but the issue is
that you're adding a node which is already part of one gluster cluster to
an existing, different cluster. That will not work, even from the gluster
CLI.


>
>
>> I found no idea when googling, except something related to gluster (you
>> bet!), telling this may be related to the fact that there is already a
>> volume, managed with a different name.
>>
>> Obviously, using a different name and IP is what I needed!
>> I used "transport.socket.bind-address" to make sure the gluster traffic
>> will only use the dedicated NICs.
>>
>>
>> Well, I also tried to created a storage domain relying on the freshly
>> created gluster volume, but as this subnet is not routed, it is not
>> reachable from the manager nor the existing SPM.
>>
>
> The existing SPM - isn't it one of the the 3 new nodes being added?
>
> No, the SPM is one of the 3 former and still existing hosts.
>
> Or are you adding the 3 nodes to your existing cluster? If so, I suggest
> you try adding this to a new cluster
>
> OK, I tried and succeeded in creating a new cluster.
> In this new cluster, I was ABLE to add the first new host, using its mgmt
> DNS name.
> This first host still has to have its NICs configured, and (using Chrome
> or FF) opening the network settings window stalls the browser (I even
> tried restarting the engine, to no avail). Thus, I cannot set up this
> first node's NICs.
>
> Thus, I cannot add any further hosts, because oVirt relies on a first
> host to validate the following ones.
>


Network team should be able to help you here.


>
> --
> Nicolas ECARNOT
>


Re: [ovirt-users] HostedEngine with HA

2016-08-19 Thread Carlos Rodrigues
On Fri, 2016-08-19 at 12:24 +0200, Simone Tiraboschi wrote:
> On Fri, Aug 19, 2016 at 12:07 PM, Carlos Rodrigues 
> wrote:
> > 
> > On Fri, 2016-08-19 at 10:47 +0100, Carlos Rodrigues wrote:
> > > 
> > > On Fri, 2016-08-19 at 11:36 +0200, Simone Tiraboschi wrote:
> > > > 
> > > > 
> > > > 
> > > > 
> > > > On Fri, Aug 19, 2016 at 11:29 AM, Carlos Rodrigues wrote:
> > > > > 
> > > > > 
> > > > > After night, the OVF_STORE it was created:
> > > > > 
> > > > 
> > > > It's quite strange that it took so long but now it looks fine.
> > > > 
> > > > If the ISO_DOMAIN that I see in your screenshot is served by
> > > > the
> > > > engine VM itself, I suggest to remove it and export from an
> > > > external
> > > > server.
> > > > Serving the ISO storage domain from the engine VM itself is not a
> > > > good idea, since when the engine VM is down you can experience
> > > > long delays before getting the engine VM restarted due to the
> > > > unavailable storage domain.
> > > 
> > > Ok, thank you for the advice.
> > > 
> > > Now everything is apparently OK. I'll do more tests with HA and
> > > I'll report any issues.
> > > 
> > > Thank you for your support.
> > > 
> > > Regards,
> > > Carlos Rodrigues
> > > 
> > 
> > I shut down the network of the host running the engine VM and expected
> > the other host to fence it and start the engine VM, but I don't see any
> > fence action; the "free" host keeps trying to start the VM but gets a
> > sanlock error:
> > 
> > Aug 19 11:03:03 ied-blade11.install.eurotux.local kernel: qemu-kvm:
> > sending ioctl 5326 to a partition!
> > Aug 19 11:03:03 ied-blade11.install.eurotux.local kernel: qemu-kvm:
> > sending ioctl 80200204 to a partition!
> > Aug 19 11:03:03 ied-blade11.install.eurotux.local kvm[7867]: 1
> > guest
> > now active
> > Aug 19 11:03:03 ied-blade11.install.eurotux.local sanlock[884]:
> > 2016-
> > 08-19 11:03:03+0100 1023 [903]: r3 paxos_acquire owner 1 delta 1 9
> > 245502 alive
> > Aug 19 11:03:03 ied-blade11.install.eurotux.local sanlock[884]:
> > 2016-
> > 08-19 11:03:03+0100 1023 [903]: r3 acquire_token held error -243
> > Aug 19 11:03:03 ied-blade11.install.eurotux.local sanlock[884]:
> > 2016-
> > 08-19 11:03:03+0100 1023 [903]: r3 cmd_acquire 2,9,7862
> > acquire_token
> > -243 lease owned by other host
> > Aug 19 11:03:03 ied-blade11.install.eurotux.local libvirtd[1369]:
> > resource busy: Failed to acquire lock: error -243
> > Aug 19 11:03:03 ied-blade11.install.eurotux.local kernel:
> > ovirtmgmt:
> > port 2(vnet0) entered disabled state
> > Aug 19 11:03:03 ied-blade11.install.eurotux.local kernel: device
> > vnet0
> > left promiscuous mode
> > Aug 19 11:03:03 ied-blade11.install.eurotux.local kernel:
> > ovirtmgmt:
> > port 2(vnet0) entered disabled state
> > Aug 19 11:03:03 ied-blade11.install.eurotux.local kvm[7885]: 0
> > guests
> > now active
> > Aug 19 11:03:03 ied-blade11.install.eurotux.local systemd-
> > machined[7863]: Machine qemu-4-HostedEngine terminated.
> 
> Maybe you hit this one:
> https://bugzilla.redhat.com/show_bug.cgi?id=1322849
> 
> 
> Can you please check it as described in comment 28 and eventually
> apply the workaround in comment 18?
> 

Apparently the host-id is OK. I don't need to apply the workaround.


> 
> 
> > 
> > > 
> > > > 
> > > > > 
> > > > > Regards,
> > > > > Carlos Rodrigues
> > > > > 
> > > > > On Fri, 2016-08-19 at 08:29 +0200, Simone Tiraboschi wrote:
> > > > > > 
> > > > > > 
> > > > > > 
> > > > > > 
> > > > > > On Thu, Aug 18, 2016 at 6:38 PM, Carlos Rodrigues wrote:
> > > > > > > 
> > > > > > > 
> > > > > > > On Thu, 2016-08-18 at 17:45 +0200, Simone Tiraboschi
> > > > > > > wrote:
> > > > > > > > 
> > > > > > > > 
> > > > > > > > On Thu, Aug 18, 2016 at 5:43 PM, Carlos Rodrigues wrote:
> > > > > > > > > 
> > > > > > > > > 
> > > > > > > > > 
> > > > > > > > > I increased the hosted_engine disk space to 160G. How do
> > > > > > > > > I force it to create the OVF_STORE?
> > > > > > > > 
> > > > > > > > I think that restarting the engine on the engine VM will
> > > > > > > > trigger it, although I'm not sure that it was a size issue.
> > > > > > > 
> > > > > > > I found the OVF_STORE on another storage domain with
> > > > > > > "Domain Type" "Data (Master)".
> > > > > > > 
> > > > > > > 
> > > > > > 
> > > > > > Each storage domain has its own OVF_STORE volumes; you should
> > > > > > get them also on the hosted-engine storage domain.
> > > > > > Not really sure about how to trigger it again; adding Roy here.
> > > > > > 

Re: [ovirt-users] HostedEngine with HA

2016-08-19 Thread Simone Tiraboschi
On Fri, Aug 19, 2016 at 12:07 PM, Carlos Rodrigues  wrote:
> On Fri, 2016-08-19 at 10:47 +0100, Carlos Rodrigues wrote:
>> On Fri, 2016-08-19 at 11:36 +0200, Simone Tiraboschi wrote:
>> >
>> >
>> >
>> > On Fri, Aug 19, 2016 at 11:29 AM, Carlos Rodrigues wrote:
>> > >
>> > > After night, the OVF_STORE it was created:
>> > >
>> >
>> > It's quite strange that it took so long but now it looks fine.
>> >
>> > If the ISO_DOMAIN that I see in your screenshot is served by the
>> > engine VM itself, I suggest to remove it and export from an
>> > external
>> > server.
>> > Serving the ISO storage domain from the engine VM itself is not a
>> > good idea, since when the engine VM is down you can experience long
>> > delays before getting the engine VM restarted due to the
>> > unavailable storage domain.
>>
>> Ok, thank you for the advice.
>>
>> Now everything is apparently OK. I'll do more tests with HA and I'll
>> report any issues.
>>
>> Thank you for your support.
>>
>> Regards,
>> Carlos Rodrigues
>>
>
> I shut down the network of the host running the engine VM and expected the
> other host to fence it and start the engine VM, but I don't see any fence
> action; the "free" host keeps trying to start the VM but gets a sanlock
> error:
>
> Aug 19 11:03:03 ied-blade11.install.eurotux.local kernel: qemu-kvm:
> sending ioctl 5326 to a partition!
> Aug 19 11:03:03 ied-blade11.install.eurotux.local kernel: qemu-kvm:
> sending ioctl 80200204 to a partition!
> Aug 19 11:03:03 ied-blade11.install.eurotux.local kvm[7867]: 1 guest
> now active
> Aug 19 11:03:03 ied-blade11.install.eurotux.local sanlock[884]: 2016-
> 08-19 11:03:03+0100 1023 [903]: r3 paxos_acquire owner 1 delta 1 9
> 245502 alive
> Aug 19 11:03:03 ied-blade11.install.eurotux.local sanlock[884]: 2016-
> 08-19 11:03:03+0100 1023 [903]: r3 acquire_token held error -243
> Aug 19 11:03:03 ied-blade11.install.eurotux.local sanlock[884]: 2016-
> 08-19 11:03:03+0100 1023 [903]: r3 cmd_acquire 2,9,7862 acquire_token
> -243 lease owned by other host
> Aug 19 11:03:03 ied-blade11.install.eurotux.local libvirtd[1369]:
> resource busy: Failed to acquire lock: error -243
> Aug 19 11:03:03 ied-blade11.install.eurotux.local kernel: ovirtmgmt:
> port 2(vnet0) entered disabled state
> Aug 19 11:03:03 ied-blade11.install.eurotux.local kernel: device vnet0
> left promiscuous mode
> Aug 19 11:03:03 ied-blade11.install.eurotux.local kernel: ovirtmgmt:
> port 2(vnet0) entered disabled state
> Aug 19 11:03:03 ied-blade11.install.eurotux.local kvm[7885]: 0 guests
> now active
> Aug 19 11:03:03 ied-blade11.install.eurotux.local systemd-
> machined[7863]: Machine qemu-4-HostedEngine terminated.

Maybe you hit this one:
https://bugzilla.redhat.com/show_bug.cgi?id=1322849

Can you please check it as described in comment 28 and, if needed,
apply the workaround from comment 18?
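For reference, a quick way to confirm you are hitting the lease problem is to grep the sanlock journal for error -243. A minimal sketch (the log lines below are a captured excerpt, and the file path is arbitrary):

```shell
# Sketch: detect the "lease owned by other host" condition (sanlock error
# -243) in a captured journal excerpt; in practice you would pipe
# `journalctl -u sanlock` (or /var/log/messages) into the same grep.
cat > /tmp/sanlock-excerpt.log <<'EOF'
Aug 19 11:03:03 host sanlock[884]: r3 acquire_token held error -243
Aug 19 11:03:03 host sanlock[884]: r3 cmd_acquire 2,9,7862 acquire_token -243 lease owned by other host
EOF
if grep -q 'lease owned by other host' /tmp/sanlock-excerpt.log; then
    echo "engine VM lease is still held by another host"
fi
```

On the hosts themselves, `sanlock client status` and `hosted-engine --vm-status` show who currently holds the lockspace and what the HA agents think.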




Re: [ovirt-users] HostedEngine with HA

2016-08-19 Thread Carlos Rodrigues
On Fri, 2016-08-19 at 10:47 +0100, Carlos Rodrigues wrote:
> On Fri, 2016-08-19 at 11:36 +0200, Simone Tiraboschi wrote:
> > 
> > 
> > 
> > On Fri, Aug 19, 2016 at 11:29 AM, Carlos Rodrigues  > m>
> > wrote:
> > > 
> > > After the night, the OVF_STORE was created:
> > > 
> > 
> > It's quite strange that it took so long, but now it looks fine.
> > 
> > If the ISO_DOMAIN that I see in your screenshot is served by the
> > engine VM itself, I suggest removing it and exporting it from an
> > external server.
> > Serving the ISO storage domain from the engine VM itself is not a
> > good idea, since when the engine VM is down you can experience long
> > delays before getting the engine VM restarted due to the unavailable
> > storage domain.
> 
> Ok, thank you for the advice.
>
> Now apparently all is ok. I'll do more tests with HA, and I'll tell
> you about any issues.
> 
> Thank you for your support.
> 
> Regards,
> Carlos Rodrigues
> 

I shut down the network of the host running the engine VM and expected
the other host to fence it and start the engine VM, but I don't see any
fence action, and the "free" host keeps trying to start the VM but gets
a sanlock error:

Aug 19 11:03:03 ied-blade11.install.eurotux.local kernel: qemu-kvm:
sending ioctl 5326 to a partition!
Aug 19 11:03:03 ied-blade11.install.eurotux.local kernel: qemu-kvm:
sending ioctl 80200204 to a partition!
Aug 19 11:03:03 ied-blade11.install.eurotux.local kvm[7867]: 1 guest
now active
Aug 19 11:03:03 ied-blade11.install.eurotux.local sanlock[884]: 2016-
08-19 11:03:03+0100 1023 [903]: r3 paxos_acquire owner 1 delta 1 9
245502 alive
Aug 19 11:03:03 ied-blade11.install.eurotux.local sanlock[884]: 2016-
08-19 11:03:03+0100 1023 [903]: r3 acquire_token held error -243
Aug 19 11:03:03 ied-blade11.install.eurotux.local sanlock[884]: 2016-
08-19 11:03:03+0100 1023 [903]: r3 cmd_acquire 2,9,7862 acquire_token
-243 lease owned by other host
Aug 19 11:03:03 ied-blade11.install.eurotux.local libvirtd[1369]:
resource busy: Failed to acquire lock: error -243
Aug 19 11:03:03 ied-blade11.install.eurotux.local kernel: ovirtmgmt:
port 2(vnet0) entered disabled state
Aug 19 11:03:03 ied-blade11.install.eurotux.local kernel: device vnet0
left promiscuous mode
Aug 19 11:03:03 ied-blade11.install.eurotux.local kernel: ovirtmgmt:
port 2(vnet0) entered disabled state
Aug 19 11:03:03 ied-blade11.install.eurotux.local kvm[7885]: 0 guests
now active
Aug 19 11:03:03 ied-blade11.install.eurotux.local systemd-
machined[7863]: Machine qemu-4-HostedEngine terminated.



Re: [ovirt-users] HostedEngine with HA

2016-08-19 Thread Carlos Rodrigues
On Fri, 2016-08-19 at 11:36 +0200, Simone Tiraboschi wrote:
> 
> 
> On Fri, Aug 19, 2016 at 11:29 AM, Carlos Rodrigues 
> wrote:
> > After the night, the OVF_STORE was created:
> > 
> 
> It's quite strange that it took so long, but now it looks fine.
> 
> If the ISO_DOMAIN that I see in your screenshot is served by the
> engine VM itself, I suggest removing it and exporting it from an
> external server.
> Serving the ISO storage domain from the engine VM itself is not a
> good idea, since when the engine VM is down you can experience long
> delays before getting the engine VM restarted due to the unavailable
> storage domain.

Ok, thank you for the advice.

Now apparently all is ok. I'll do more tests with HA, and I'll tell
you about any issues.

Thank you for your support.

Regards,
Carlos Rodrigues


[ovirt-users] [ANN] oVirt 4.0.2 Post Release update is now available

2016-08-19 Thread Sandro Bonazzola
The oVirt Project is pleased to announce the availability of an oVirt 4.0.2
post release update, as of August 19th, 2016.

This update includes a new build of oVirt Engine fixing:
Bug 1367483 - Installing a 3.6 host to rhev-m 4.0 results in 'ovirtmgmt'
network as out of sync

See the release notes [1] for installation / upgrade instructions and a
list of new features and bugs fixed.

Notes:
* A new oVirt Live ISO is available. [2]
* A new oVirt Engine Appliance is already available.
* Mirrors[3] might need up to one day to synchronize.

Additional Resources:
* Read more about the oVirt 4.0.2 release highlights:
http://www.ovirt.org/release/4.0.2/
* Get more oVirt Project updates on Twitter: https://twitter.com/ovirt
* Check out the latest project news on the oVirt blog:
http://www.ovirt.org/blog/

[1] http://www.ovirt.org/release/4.0.2/
[2] http://resources.ovirt.org/pub/ovirt-4.0/iso/
[3] http://www.ovirt.org/Repository_mirrors#Current_mirrors
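For anyone updating an existing 4.0.x setup, the usual minor-update flow applies (a sketch only; always check the release notes linked above first):

```shell
# On the engine machine: pull the updated setup packages, then re-run setup.
yum update "ovirt-engine-setup*"
engine-setup
# Afterwards, update each host from the 4.0 repositories as usual.
```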

-- 
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] info about support status for hyper converged setup

2016-08-19 Thread Sahina Bose
Gluster hyperconverged will be integrated and fully supported in 4.1, but
it is already available as a preview since 3.6.8.

I think there are a couple of trackers around this; one you can look
at for a list of upcoming features/fixes:
https://bugzilla.redhat.com/showdependencytree.cgi?id=1277939_resolved=1

On Wed, Aug 17, 2016 at 6:04 PM, Gianluca Cecchi 
wrote:

> Is it now in version 4.0.2 (or previous 3.6.x) fully supported? Or still
> in testing?
> As described here:
> http://www.ovirt.org/develop/release-management/features/
> engine/self-hosted-engine-hyper-converged-gluster-support/
>
> Only with Gluster I presume...
> Is there any bug tracker to see a list of all potential
> problems/limitations or features not supported?
>
> Thanks,
> Gianluca
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Dedicated NICs for gluster network

2016-08-19 Thread Nicolas Ecarnot

Hello,

I'm digging out this thread because I've now had the time to work on this
subject, and I'm stuck.


This oVirt setup has a standalone engine, and 3 hosts.
These 3 hosts are hypervisors and gluster nodes, each using one NIC for
all the traffic, which is a very bad idea (well, it's working, but not
recommended).


I added 3 OTHER nodes, and so far, I only created the gluster setup and 
created a replica-3 volume.
Each of these new nodes now have one NIC for management, one NIC for 
gluster, and other NICs for other things.
Each NIC has an IP + DNS name in its dedicated VLAN: one for mgmt and
one for gluster.

The mgmt subnet is routed, though the gluster subnet is not.
Every node can ping the others, using either the mgmt or the gluster
subnet.


The creation of the gluster subnet and volume went very well and seems 
to be perfect.


Now, in the oVirt web gui, I'm trying to add these nodes as oVirt hosts.
I'm using their mgmt DNS names, and I'm getting :
"Error while executing action: Server  is already part of 
another cluster."


Googling turned up nothing, except something related to gluster (you
bet!) suggesting this may be due to the fact that there is already a
volume, managed with a different name.


Obviously, using a different name and IP is what I needed!
I used "transport.socket.bind-address" to make sure the gluster traffic 
will only use the dedicated NICs.
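For what it's worth, on my nodes that option lives in /etc/glusterfs/glusterd.vol (excerpt below; 10.0.1.11 is a made-up address on the gluster VLAN, and glusterd needs a restart after the change):

```
volume management
    type mgmt/glusterd
    option working-directory /var/lib/glusterd
    option transport.socket.bind-address 10.0.1.11
    # ... other default options unchanged ...
end-volume
```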



Well, I also tried to create a storage domain relying on the freshly
created gluster volume, but as this subnet is not routed, it is not
reachable from either the manager or the existing SPM.



I'm feeling I'm missing something here, so your help is warmly welcome.


Nicolas ECARNOT

PS : CentOS 7.2 everywhere, oVirt 3.6.7

On 27/11/2015 at 20:00, Ivan Bulatovic wrote:

Hi Nicolas,

what works for me in 3.6 is creating a new network for gluster within
oVirt, marking it for gluster use only, optionally setting bonded
interface upon NIC's that are dedicated for gluster traffic and
providing it with an IP address without configuring a gateway, and then
modifying /etc/hosts so that hostnames are resolvable between nodes.
Every node should have two hostnames, one for ovirtmgmt network that is
resolvable via DNS (or via /etc/hosts), and the other for gluster
network that is resolvable purely via /etc/hosts (every node should
contain entries for themselves and for each gluster node).
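For example (hypothetical names and addresses), each node's /etc/hosts would look something like:

```
# ovirtmgmt network - also resolvable via DNS
192.168.1.11   node1.example.com  node1
192.168.1.12   node2.example.com  node2
192.168.1.13   node3.example.com  node3
# gluster-only VLAN - /etc/hosts only
10.0.1.11      node1-gluster
10.0.1.12      node2-gluster
10.0.1.13      node3-gluster
```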

Peers should be probed via their gluster hostnames, while ensuring that
gluster peer status contains only addresses and hostnames that are
dedicated for gluster on each node. Same goes for adding bricks,
creating a volume etc.
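As a sketch with the same hypothetical names, the peering and volume creation would then be done entirely over the gluster hostnames:

```shell
gluster peer probe node2-gluster
gluster peer probe node3-gluster
gluster peer status   # should only ever show the *-gluster names/addresses

gluster volume create data replica 3 \
    node1-gluster:/gluster/brick1/data \
    node2-gluster:/gluster/brick1/data \
    node3-gluster:/gluster/brick1/data
gluster volume start data
```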

This way, no communication (except gluster one) should be allowed
through gluster dedicated vlan. To be on the safe side, we can also
force gluster to listen only on dedicated interfaces via the
transport.socket.bind-address option (haven't tried this one, will do).

Separation of gluster (or in the future any storage network), live
migration network, vm and management network is always a good thing.
Perhaps we could manage failover of those networks within oVirt, i.e. in
case the LM network is down, use the gluster network for LM and vice versa.
Cool candidate for an RFE, but first we need this supported within
gluster itself. This may prove useful when there are not enough NICs
available to do a bond beneath every defined network. But we can still
separate traffic and provide failover by selecting multiple networks
without actually doing any load balancing between the two.

As Nathanaël mentioned, marking network for gluster use is only
available in 3.6. I'm also interested in whether there is a better way
around this procedure, or perhaps a way of enhancing it.

Kind regards,

Ivan

On 11/27/2015 05:47 PM, Nathanaël Blanchet wrote:

Hello Nicolas,

Did you have a look to this :
http://www.ovirt.org/Features/Select_Network_For_Gluster ?
But it is only available from >=3.6...

On 27/11/2015 at 17:02, Nicolas Ecarnot wrote:

Hello,

[Here : oVirt 3.5.3, 3 x CentOS 7.0 hosts with replica-3 gluster SD
on the hosts].

On the switches, I have created a dedicated VLAN to isolate the
GlusterFS traffic, but I'm not using it yet.
I was thinking of creating a dedicated IP for each node's gluster
NIC, and a DNS record by the way ("my_nodes_name_GL"), but I fear that
using this hostname or this IP in the oVirt GUI host network interface
tab would lead oVirt to think this is a different host.

Not being sure this fear is clearly described, let's say:
- On each node, I create a second ip+(dns record in the soa) used by
gluster, plugged on the correct VLAN
- in oVirt gui, in the host network setting tab, the interface will
be seen, with its ip, but reverse-dns-related to a different hostname.
Here, I fear oVirt might check this reverse DNS and declare this NIC
belongs to another host.

I would also prefer not to use a reverse pointing to the name of the
host management IP, as this is evil and I'm a good guy.

On your side, how do you cope with a dedicated storage network 

Re: [ovirt-users] HostedEngine with HA

2016-08-19 Thread Simone Tiraboschi
On Thu, Aug 18, 2016 at 6:38 PM, Carlos Rodrigues  wrote:

> On Thu, 2016-08-18 at 17:45 +0200, Simone Tiraboschi wrote:
>
> On Thu, Aug 18, 2016 at 5:43 PM, Carlos Rodrigues 
> wrote:
>
>
> I increased the hosted_engine disk space to 160G. How do I force it to
> create the OVF_STORE?
>
>
> I think that restarting the engine on the engine VM will trigger it
> although I'm not sure that it was a size issue.
>
>
> I found the OVF_STORE on another storage domain with "Domain Type" "Data
> (Master)".
>
>
Each storage domain has its own OVF_STORE volumes; you should get them also
on the hosted-engine storage domain.
Not really sure about how to trigger it again; adding Roy here.



>
>
>
>
> Regards,
> Carlos Rodrigues
>
> On Thu, 2016-08-18 at 12:14 +0100, Carlos Rodrigues wrote:
>
>
> On Thu, 2016-08-18 at 12:34 +0200, Simone Tiraboschi wrote:
>
>
>
> On Thu, Aug 18, 2016 at 12:11 PM, Carlos Rodrigues  m>
> wrote:
>
>
>
>
> On Thu, 2016-08-18 at 11:53 +0200, Simone Tiraboschi wrote:
>
>
>
>
>
>
> On Thu, Aug 18, 2016 at 11:50 AM, Carlos Rodrigues  x.
> com>
> wrote:
>
>
>
>
> On Thu, 2016-08-18 at 11:42 +0200, Simone Tiraboschi wrote:
>
>
>
>
> On Thu, Aug 18, 2016 at 11:25 AM, Carlos Rodrigues  ro
> tux.
> com> wrote:
>
>
>
>
>
> On Thu, 2016-08-18 at 11:04 +0200, Simone Tiraboschi
> wrote:
>
>
>
>
>
> On Thu, Aug 18, 2016 at 10:36 AM, Carlos Rodrigues
>  euro
> tux.com>
> wrote:
>
>
>
>
>
>
> On Thu, 2016-08-18 at 10:27 +0200, Simone Tiraboschi
> wrote:
>
>
>
>
>
>
> On Thu, Aug 18, 2016 at 10:22 AM, Carlos Rodrigues
>  eurotux.
> com>
> wrote:
>
>
>
>
>
>
>
> On Thu, 2016-08-18 at 08:54 +0200, Simone
> Tiraboschi
> wrote:
>
>
>
>
>
>
>
> On Tue, Aug 16, 2016 at 12:53 PM, Carlos
> Rodrigues  mar@euro
> tux.
> com>
> wrote:
>
>
>
>
>
>
>
>
> On Sun, 2016-08-14 at 14:22 +0300, Roy Golan
> wrote:
>
>
>
>
>
>
>
>
>
>
> On 12 August 2016 at 20:23, Carlos
> Rodrigues
>  r@eurotu
> x.co
> m>
> wrote:
>
>
>
>
>
>
>
>
> Hello,
>
> I have one cluster with two hosts with power management correctly
> configured, and one virtual machine with HostedEngine over shared
> storage with Fibre Channel.
>
> When I shut down the network of the host with the HostedEngine VM,
> should it be possible for the HostedEngine VM to migrate automatically
> to another host?
>
> migrate on which network?
>
>
>
>
>
>
>
>
> What is the expected behaviour in this HA scenario?
>
> After a few minutes your VM will be shut down by the High Availability
> agent, as it can't see the network, and started on another host.
>
>
>
> I'm testing this scenario: after shutting down the network, the agent
> should shut down the engine VM and start it on another host, but after
> a couple of minutes nothing happens, and on the host with network we
> get the following messages:
>
> Aug 16 11:44:08 ied-blade11.install.eurotux.local ovirt-ha-agent[2779]:
> ovirt-ha-agent
> ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config ERROR
> Unable to get vm.conf from OVF_STORE, falling back to initial vm.conf
>
> I think the HA agent is trying to get the VM configuration but somehow
> it can't get vm.conf to start the VM.
>
>
> No, this is a different issue.
> In 3.6 we added a feature to let the engine manage also the engine VM
> itself; ovirt-ha-agent will pick up the latest engine VM configuration
> from the OVF_STORE, which is managed by the engine.
> If something goes wrong, ovirt-ha-agent could fall back to the initial
> (bootstrap time) vm.conf. This will normally happen till you add your
> first regular storage domain and the engine imports the engine VM.
>
>
> But I already have my first storage domain and the engine storage
> domain, and the engine VM is already imported.
>
> I'm using version 4.0.
>
>
> This seems like an issue; can you please share your
> /var/log/ovirt-hosted-engine-ha/agent.log?
>
>
> I sent it in attachment.
>
>
> Nothing strange here;
> do you see a couple of disks with alias OVF_STORE on
> the
> hosted-
> engine
> storage domain if you check it from the engine?
>
>
> Do you mean any disk label?
> I don't have any:
>
> [root@ied-blade11 ~]#  ls /dev/disk/by-label/
> ls: cannot access /dev/disk/by-label/: No such file or
> directory
>
>
> No, I mean: go to the engine web UI, select the hosted-engine storage
> domain, and check the disks there.
>
>
> No, the alias is virtio-disk0.
>
>
> And this is the engine VM disk, so the issue is why the engine still
> has to create the OVF_STORE.
> Can you please share your engine.log from the engine VM?
>
>
> Go in attachment.
>
>
> The creation of the OVF_STORE disk failed but it's not that clear
> why:
>
> 2016-08-17 08:43:33,538 ERROR
> [org.ovirt.engine.core.bll.storage.ovfstore.CreateOvfVolumeForStorageDomainCommand]
> (DefaultQuartzScheduler6) [6f1f1fd4] Ending command
>