Re: [ovirt-users] Upgrading HC from 4.0 to 4.1

2017-07-05 Thread Gianluca Cecchi
On Wed, Jul 5, 2017 at 7:42 AM, Sahina Bose  wrote:

>
>
>> ...
>>
>> then the commands I need to run would be:
>>
>> gluster volume reset-brick export 
>> ovirt01.localdomain.local:/gluster/brick3/export
>> start
>> gluster volume reset-brick export 
>> ovirt01.localdomain.local:/gluster/brick3/export
>> gl01.localdomain.local:/gluster/brick3/export commit force
>>
>> Correct?
>>
>
> Yes, correct. gl01.localdomain.local should resolve correctly on all 3
> nodes.
>


It fails at first step:

 [root@ovirt01 ~]# gluster volume reset-brick export
ovirt01.localdomain.local:/gluster/brick3/export start
volume reset-brick: failed: Cannot execute command. The cluster is
operating at version 30712. reset-brick command reset-brick start is
unavailable in this version.
[root@ovirt01 ~]#

It seems somehow related to this upgrade note for the commercial Red Hat
Gluster Storage solution:
https://access.redhat.com/documentation/en-US/Red_Hat_Storage/3.1/html/Installation_Guide/chap-Upgrading_Red_Hat_Storage.html

So it seems I have to run a command of the form:

gluster volume set all cluster.op-version X

with X > 30712

It seems that the latest version of the commercial Red Hat Gluster Storage is
3.1, and its op-version is indeed 30712.

So the question is which particular op-version I have to set, and whether the
command can be run online without causing disruption.
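
For reference, a sketch of how I suppose the check and the bump would look
(the 31000 value is my assumption for the 3.10 series, to be verified against
the installed build before running anything):

# op-version the cluster is currently operating at
gluster volume get all cluster.op-version
# highest op-version supported by all peers (should work on 3.10)
gluster volume get all cluster.max-op-version
# older builds also record it in /var/lib/glusterd/glusterd.info (operating-version=...)

# raise it once, cluster-wide
gluster volume set all cluster.op-version 31000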

Thanks,
Gianluca
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Upgrading HC from 4.0 to 4.1

2017-07-04 Thread Sahina Bose
On Wed, Jul 5, 2017 at 3:10 AM, Gianluca Cecchi 
wrote:

> On Tue, Jul 4, 2017 at 2:57 PM, Gianluca Cecchi wrote:
>
>>
>>> No, it's not. One option is to update glusterfs packages to 3.10.
>>>
>>
>> Is it supported throughout oVirt to use CentOS Storage SIG packages
>> instead of ovirt provided ones? I imagine you mean it, correct?
>>
>> If this is a case, would I have to go with Gluster 3.9 (non LTS)
>> https://lists.centos.org/pipermail/centos-announce/2017-Janu
>> ary/022249.html
>>
>> Or Gluster 3.10 (LTS)
>> https://lists.centos.org/pipermail/centos-announce/2017-March/022337.html
>>
>> I suppose the latter...
>> Any problem then with updates of oVirt itself, eg going through 4.1.2 to
>> 4.1.3?
>>
>> Thanks
>> Gianluca
>>
>>>
>>> Is 3.9 version of Gluster packages provided when updating to upcoming
>>> 4.1.3, perhaps?
>>>
>>
> Never mind, I will verify. At the end this is a test system.
> I put the nodes in maintenance one by one and then installed glusterfs
> 3.10 with;
>
> yum install centos-release-gluster
> yum update
>
> All were able to self heal then and I see the 4 storage domains (engine,
> data, iso, export) up and running.
> See some notes at the end of the e-mail.
> Now I'm ready to test the change of gluster network traffic.
>
> In my case the current hostnames that are also matching the ovirtmgmt
> network are ovirt0N.localdomain.com with N=1,2,3
>
> On my vlan2, defined as gluster network role in the cluster, I have
> defined (on each node /etc/hosts file) the hostnames
>
> 10.10.2.102 gl01.localdomain.local gl01
> 10.10.2.103 gl02.localdomain.local gl02
> 10.10.2.104 gl03.localdomain.local gl03
>
> I need more details about command to run:
>
> Currently I have
>
> [root@ovirt03 ~]# gluster peer status
> Number of Peers: 2
>
> Hostname: ovirt01.localdomain.local
> Uuid: e9717281-a356-42aa-a579-a4647a29a0bc
> State: Peer in Cluster (Connected)
> Other names:
> 10.10.2.102
>
> Hostname: ovirt02.localdomain.local
> Uuid: b89311fe-257f-4e44-8e15-9bff6245d689
> State: Peer in Cluster (Connected)
> Other names:
> 10.10.2.103
>
> Suppose I start form export volume, that has these info:
>
> [root@ovirt03 ~]# gluster volume info export
>
> Volume Name: export
> Type: Replicate
> Volume ID: b00e5839-becb-47e7-844f-6ce6ce1b7153
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 1 x (2 + 1) = 3
> Transport-type: tcp
> Bricks:
> Brick1: ovirt01.localdomain.local:/gluster/brick3/export
> Brick2: ovirt02.localdomain.local:/gluster/brick3/export
> Brick3: ovirt03.localdomain.local:/gluster/brick3/export (arbiter)
> ...
>
> then the commands I need to run would be:
>
> gluster volume reset-brick export 
> ovirt01.localdomain.local:/gluster/brick3/export
> start
> gluster volume reset-brick export 
> ovirt01.localdomain.local:/gluster/brick3/export
> gl01.localdomain.local:/gluster/brick3/export commit force
>
> Correct?
>

Yes, correct. gl01.localdomain.local should resolve correctly on all 3
nodes.


> Is it sufficient to run it on a single node? And then on the same node, to
> run also for the other bricks of the same volume:
>

Yes, it is sufficient to run it on a single node. You can run the reset-brick
commands for all bricks from the same node.


>
> gluster volume reset-brick export 
> ovirt02.localdomain.local:/gluster/brick3/export
> start
> gluster volume reset-brick export 
> ovirt02.localdomain.local:/gluster/brick3/export
> gl02.localdomain.local:/gluster/brick3/export commit force
>
> and
>
> gluster volume reset-brick export 
> ovirt03.localdomain.local:/gluster/brick3/export
> start
> gluster volume reset-brick export 
> ovirt03.localdomain.local:/gluster/brick3/export
> gl03.localdomain.local:/gluster/brick3/export commit force
>
> Correct? Do I have to wait self-heal after each commit command, before
> proceeding with the other ones?
>

Ideally, gluster should recognize this as the same brick as before, and a heal
will not be needed. Please confirm that this is indeed the case before
proceeding.
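
A quick way to confirm, from any of the nodes (a sketch, using the export
volume discussed above):

gluster volume status export     # the brick should show up online under its new gl0X name
gluster volume heal export info  # each brick should report "Number of entries: 0"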


>
> Thanks in advance for input so that I can test it.
>
> Gianluca
>
>
> NOTE: during the update of gluster packages from 3.8 to 3.10 I got these:
>
> warning: /var/lib/glusterd/vols/engine/engine.ovirt01.localdomain.
> local.gluster-brick1-engine.vol saved as /var/lib/glusterd/vols/engine/
> engine.ovirt01.localdomain.local.gluster-brick1-engine.vol.rpmsave
> warning: /var/lib/glusterd/vols/engine/engine.ovirt02.localdomain.
> local.gluster-brick1-engine.vol saved as /var/lib/glusterd/vols/engine/
> engine.ovirt02.localdomain.local.gluster-brick1-engine.vol.rpmsave
> warning: /var/lib/glusterd/vols/engine/engine.ovirt03.localdomain.
> local.gluster-brick1-engine.vol saved as /var/lib/glusterd/vols/engine/
> engine.ovirt03.localdomain.local.gluster-brick1-engine.vol.rpmsave
> warning: /var/lib/glusterd/vols/engine/trusted-engine.tcp-fuse.vol saved
> as /var/lib/glusterd/vols/engine/trusted-engine.tcp-fuse.vol.rpmsave
> warning: /var/lib/glusterd/vols/engine/engine.tcp-fuse.vol saved as
> /var/lib/glusterd/vo

Re: [ovirt-users] Upgrading HC from 4.0 to 4.1

2017-07-04 Thread Gianluca Cecchi
On Tue, Jul 4, 2017 at 2:57 PM, Gianluca Cecchi 
wrote:

>
>> No, it's not. One option is to update glusterfs packages to 3.10.
>>
>
> Is it supported throughout oVirt to use CentOS Storage SIG packages
> instead of ovirt provided ones? I imagine you mean it, correct?
>
> If this is a case, would I have to go with Gluster 3.9 (non LTS)
> https://lists.centos.org/pipermail/centos-announce/2017-
> January/022249.html
>
> Or Gluster 3.10 (LTS)
> https://lists.centos.org/pipermail/centos-announce/2017-March/022337.html
>
> I suppose the latter...
> Any problem then with updates of oVirt itself, eg going through 4.1.2 to
> 4.1.3?
>
> Thanks
> Gianluca
>
>>
>> Is 3.9 version of Gluster packages provided when updating to upcoming
>> 4.1.3, perhaps?
>>
>
Never mind, I will verify. After all, this is a test system.
I put the nodes in maintenance one by one and then installed glusterfs 3.10
with:

yum install centos-release-gluster
yum update

All were able to self heal then and I see the 4 storage domains (engine,
data, iso, export) up and running.
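
(By "able to self heal" I mean checks like this from any node, over the four
volumes - a quick sketch:

for v in engine data iso export; do
  echo "== $v =="
  gluster volume heal $v info | grep "Number of entries"
done

all counters at 0 before putting the next node into maintenance.)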
See some notes at the end of the e-mail.
Now I'm ready to test the change of gluster network traffic.

In my case the current hostnames, which also match the ovirtmgmt network, are
ovirt0N.localdomain.com with N=1,2,3.

On my vlan2, defined with the gluster network role in the cluster, I have
defined (in each node's /etc/hosts file) the following hostnames:

10.10.2.102 gl01.localdomain.local gl01
10.10.2.103 gl02.localdomain.local gl02
10.10.2.104 gl03.localdomain.local gl03

I need more details about the commands to run.

Currently I have:

[root@ovirt03 ~]# gluster peer status
Number of Peers: 2

Hostname: ovirt01.localdomain.local
Uuid: e9717281-a356-42aa-a579-a4647a29a0bc
State: Peer in Cluster (Connected)
Other names:
10.10.2.102

Hostname: ovirt02.localdomain.local
Uuid: b89311fe-257f-4e44-8e15-9bff6245d689
State: Peer in Cluster (Connected)
Other names:
10.10.2.103

Suppose I start from the export volume, which has this info:

[root@ovirt03 ~]# gluster volume info export

Volume Name: export
Type: Replicate
Volume ID: b00e5839-becb-47e7-844f-6ce6ce1b7153
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: ovirt01.localdomain.local:/gluster/brick3/export
Brick2: ovirt02.localdomain.local:/gluster/brick3/export
Brick3: ovirt03.localdomain.local:/gluster/brick3/export (arbiter)
...

then the commands I need to run would be:

gluster volume reset-brick export
ovirt01.localdomain.local:/gluster/brick3/export start
gluster volume reset-brick export
ovirt01.localdomain.local:/gluster/brick3/export
gl01.localdomain.local:/gluster/brick3/export commit force

Correct?

Is it sufficient to run it on a single node? And then, on the same node, also
run it for the other bricks of the same volume:

gluster volume reset-brick export
ovirt02.localdomain.local:/gluster/brick3/export start
gluster volume reset-brick export
ovirt02.localdomain.local:/gluster/brick3/export
gl02.localdomain.local:/gluster/brick3/export commit force

and

gluster volume reset-brick export
ovirt03.localdomain.local:/gluster/brick3/export start
gluster volume reset-brick export
ovirt03.localdomain.local:/gluster/brick3/export
gl03.localdomain.local:/gluster/brick3/export commit force

Correct? Do I have to wait for self-heal to complete after each commit
command, before proceeding with the other ones?
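
In other words, something like this run from a single node (just a
consolidation of the commands above into a sketch, not a tested procedure -
checking heal status between bricks as a precaution):

VOL=export
for N in 1 2 3; do
  OLD="ovirt0${N}.localdomain.local:/gluster/brick3/${VOL}"
  NEW="gl0${N}.localdomain.local:/gluster/brick3/${VOL}"
  gluster volume reset-brick "$VOL" "$OLD" start
  gluster volume reset-brick "$VOL" "$OLD" "$NEW" commit force
  gluster volume heal "$VOL" info | grep "Number of entries"   # expect 0 everywhere
done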

Thanks in advance for input so that I can test it.

Gianluca


NOTE: during the update of gluster packages from 3.8 to 3.10 I got these:

warning:
/var/lib/glusterd/vols/engine/engine.ovirt01.localdomain.local.gluster-brick1-engine.vol
saved as
/var/lib/glusterd/vols/engine/engine.ovirt01.localdomain.local.gluster-brick1-engine.vol.rpmsave
warning:
/var/lib/glusterd/vols/engine/engine.ovirt02.localdomain.local.gluster-brick1-engine.vol
saved as
/var/lib/glusterd/vols/engine/engine.ovirt02.localdomain.local.gluster-brick1-engine.vol.rpmsave
warning:
/var/lib/glusterd/vols/engine/engine.ovirt03.localdomain.local.gluster-brick1-engine.vol
saved as
/var/lib/glusterd/vols/engine/engine.ovirt03.localdomain.local.gluster-brick1-engine.vol.rpmsave
warning: /var/lib/glusterd/vols/engine/trusted-engine.tcp-fuse.vol saved as
/var/lib/glusterd/vols/engine/trusted-engine.tcp-fuse.vol.rpmsave
warning: /var/lib/glusterd/vols/engine/engine.tcp-fuse.vol saved as
/var/lib/glusterd/vols/engine/engine.tcp-fuse.vol.rpmsave
warning:
/var/lib/glusterd/vols/data/data.ovirt01.localdomain.local.gluster-brick2-data.vol
saved as
/var/lib/glusterd/vols/data/data.ovirt01.localdomain.local.gluster-brick2-data.vol.rpmsave
warning:
/var/lib/glusterd/vols/data/data.ovirt02.localdomain.local.gluster-brick2-data.vol
saved as
/var/lib/glusterd/vols/data/data.ovirt02.localdomain.local.gluster-brick2-data.vol.rpmsave
warning:
/var/lib/glusterd/vols/data/data.ovirt03.localdomain.local.gluster-brick2-data.vol
saved as
/var/lib/glusterd/vols/data/data.ovirt03.localdomain.local.gluster-brick2-data.vol.rpmsave
warning: /var/lib/glusterd/vol

Re: [ovirt-users] Upgrading HC from 4.0 to 4.1

2017-07-04 Thread Gianluca Cecchi
On Tue, Jul 4, 2017 at 12:45 PM, Sahina Bose  wrote:

>
>
> On Tue, Jul 4, 2017 at 3:18 PM, Gianluca Cecchi wrote:
>
>>
>>
>> On Mon, Jul 3, 2017 at 12:48 PM, Sahina Bose  wrote:
>>
>>>
>>>
>>> On Sun, Jul 2, 2017 at 12:21 AM, Doug Ingham  wrote:
>>>

 Only problem I would like to manage is that I have gluster network
> shared with ovirtmgmt one.
> Can I move it now with these updated packages?
>

 Are the gluster peers configured with the same hostnames/IPs as your
 hosts within oVirt?

 Once they're configured on the same network, separating them might be a
 bit difficult. Also, the last time I looked, oVirt still doesn't support
 managing HCI oVirt/Gluster nodes running each service on a different
 interface (see below).

 In theory, the procedure would involve stopping all of the Gluster
 processes on all of the peers, updating the peer addresses in the gluster
 configs on all of the nodes, then restarting glusterd & the bricks. I've
 not tested this however, and it's not a "supported" procedure. I've no idea
 how oVirt would deal with these changes either.

>>>
>>> Which version of glusterfs do you have running now? With glusterfs>=
>>> 3.9, there's a reset-brick command that can help you do this.
>>>
>>
>> At this moment on my oVirt nodes I have gluster packages as provided by
>> 4.1.2 repos, so:
>>
>> glusterfs-3.8.13-1.el7.x86_64
>> glusterfs-api-3.8.13-1.el7.x86_64
>> glusterfs-cli-3.8.13-1.el7.x86_64
>> glusterfs-client-xlators-3.8.13-1.el7.x86_64
>> glusterfs-fuse-3.8.13-1.el7.x86_64
>> glusterfs-geo-replication-3.8.13-1.el7.x86_64
>> glusterfs-libs-3.8.13-1.el7.x86_64
>> glusterfs-server-3.8.13-1.el7.x86_64
>> vdsm-gluster-4.19.15-1.el7.centos.noarch
>>
>> Is 3.9 version of Gluster packages provided when updating to upcoming
>> 4.1.3, perhaps?
>>
>
> No, it's not. One option is to update glusterfs packages to 3.10.
>

Is it supported by oVirt to use CentOS Storage SIG packages instead of the
oVirt-provided ones? I imagine that's what you mean, correct?

If this is the case, would I have to go with Gluster 3.9 (non LTS)
https://lists.centos.org/pipermail/centos-announce/2017-January/022249.html

Or Gluster 3.10 (LTS)
https://lists.centos.org/pipermail/centos-announce/2017-March/022337.html

I suppose the latter...
Any problem then with updates of oVirt itself, e.g. going from 4.1.2 to
4.1.3?

Thanks
Gianluca



>
> There's an RFE open to add this to GUI. For now, this has to be done from
> command line of one of the gluster nodes.
>

OK. Depending on the answer about which Gluster version to use, I will try it.
In the meantime I have completed steps 1 and 2, and I'm going to read the
referenced docs for the reset-brick command.

Thanks
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Upgrading HC from 4.0 to 4.1

2017-07-04 Thread Sahina Bose
On Tue, Jul 4, 2017 at 3:18 PM, Gianluca Cecchi 
wrote:

>
>
> On Mon, Jul 3, 2017 at 12:48 PM, Sahina Bose  wrote:
>
>>
>>
>> On Sun, Jul 2, 2017 at 12:21 AM, Doug Ingham  wrote:
>>
>>>
>>> Only problem I would like to manage is that I have gluster network
 shared with ovirtmgmt one.
 Can I move it now with these updated packages?

>>>
>>> Are the gluster peers configured with the same hostnames/IPs as your
>>> hosts within oVirt?
>>>
>>> Once they're configured on the same network, separating them might be a
>>> bit difficult. Also, the last time I looked, oVirt still doesn't support
>>> managing HCI oVirt/Gluster nodes running each service on a different
>>> interface (see below).
>>>
>>> In theory, the procedure would involve stopping all of the Gluster
>>> processes on all of the peers, updating the peer addresses in the gluster
>>> configs on all of the nodes, then restarting glusterd & the bricks. I've
>>> not tested this however, and it's not a "supported" procedure. I've no idea
>>> how oVirt would deal with these changes either.
>>>
>>
>> Which version of glusterfs do you have running now? With glusterfs>= 3.9,
>> there's a reset-brick command that can help you do this.
>>
>
> At this moment on my oVirt nodes I have gluster packages as provided by
> 4.1.2 repos, so:
>
> glusterfs-3.8.13-1.el7.x86_64
> glusterfs-api-3.8.13-1.el7.x86_64
> glusterfs-cli-3.8.13-1.el7.x86_64
> glusterfs-client-xlators-3.8.13-1.el7.x86_64
> glusterfs-fuse-3.8.13-1.el7.x86_64
> glusterfs-geo-replication-3.8.13-1.el7.x86_64
> glusterfs-libs-3.8.13-1.el7.x86_64
> glusterfs-server-3.8.13-1.el7.x86_64
> vdsm-gluster-4.19.15-1.el7.centos.noarch
>
> Is 3.9 version of Gluster packages provided when updating to upcoming
> 4.1.3, perhaps?
>

No, it's not. One option is to update glusterfs packages to 3.10.


>
>
>
>>
>> It's possible to move to the new interface for gluster.
>>
>> The procedure would be:
>>
>> 1. Create a network with "gluster" network role.
>> 2. On each host, use "Setup networks" to associate the gluster network on
>> the desired interface. (This would ensure that the engine will peer probe
>> this interface's IP address as well, so that it can be used to identify the
>> host in the brick definition)
>> 3. For each of the volume's bricks - change the definition of the brick,
>> so that the new IP address is used. Ensure that there's no pending heal
>> (i.e. gluster volume heal info should list 0 entries) before you start
>> this (see https://gluster.readthedocs.io/en/latest/release-notes/3.9.0/ -
>> Introducing reset-brick command)
>>
>> gluster volume reset-brick VOLNAME <HOSTNAME>:<BRICKPATH> start
>> gluster volume reset-brick VOLNAME <HOSTNAME>:<BRICKPATH>
>> <NEWHOSTNAME>:<BRICKPATH> commit force
>>
>>
>>
>
> So do you think I can use any other commands with oVirt 4.1.2 and gluster
> 3.8?
> Can I safely proceed with steps 1 and 2? When I set up a gluster network
> and associate it with one host, what exactly are the implications? Will I
> disrupt anything, or is it only seen as an option for having gluster traffic
> go over it...?
>

Steps 1 & 2 will ensure that the IP address associated with the gluster
network is peer probed. It does not ensure that brick communication happens
using that interface. This happens only when the brick is identified using
that IP as well. (Step 3)
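
A quick way to verify this from any gluster node once steps 1 and 2 are done
(a sketch):

gluster peer status                       # the gluster-network IPs should appear under "Other names"
gluster volume info export | grep Brick   # the Brick lines keep the old hostnames until step 3 is run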


>
> BTW: How would I complete the webadmin GUI part of step 3? I don't see an
> "edit" brick functionality; I only see "Add" and "Replace Brick"...
>

There's an RFE open to add this to GUI. For now, this has to be done from
command line of one of the gluster nodes.


>
> Thanks,
> Gianluca
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Upgrading HC from 4.0 to 4.1

2017-07-04 Thread Gianluca Cecchi
On Mon, Jul 3, 2017 at 12:48 PM, Sahina Bose  wrote:

>
>
> On Sun, Jul 2, 2017 at 12:21 AM, Doug Ingham  wrote:
>
>>
>> Only problem I would like to manage is that I have gluster network shared
>>> with ovirtmgmt one.
>>> Can I move it now with these updated packages?
>>>
>>
>> Are the gluster peers configured with the same hostnames/IPs as your
>> hosts within oVirt?
>>
>> Once they're configured on the same network, separating them might be a
>> bit difficult. Also, the last time I looked, oVirt still doesn't support
>> managing HCI oVirt/Gluster nodes running each service on a different
>> interface (see below).
>>
>> In theory, the procedure would involve stopping all of the Gluster
>> processes on all of the peers, updating the peer addresses in the gluster
>> configs on all of the nodes, then restarting glusterd & the bricks. I've
>> not tested this however, and it's not a "supported" procedure. I've no idea
>> how oVirt would deal with these changes either.
>>
>
> Which version of glusterfs do you have running now? With glusterfs>= 3.9,
> there's a reset-brick command that can help you do this.
>

At this moment on my oVirt nodes I have gluster packages as provided by
4.1.2 repos, so:

glusterfs-3.8.13-1.el7.x86_64
glusterfs-api-3.8.13-1.el7.x86_64
glusterfs-cli-3.8.13-1.el7.x86_64
glusterfs-client-xlators-3.8.13-1.el7.x86_64
glusterfs-fuse-3.8.13-1.el7.x86_64
glusterfs-geo-replication-3.8.13-1.el7.x86_64
glusterfs-libs-3.8.13-1.el7.x86_64
glusterfs-server-3.8.13-1.el7.x86_64
vdsm-gluster-4.19.15-1.el7.centos.noarch

Is 3.9 version of Gluster packages provided when updating to upcoming
4.1.3, perhaps?



>
> It's possible to move to the new interface for gluster.
>
> The procedure would be:
>
> 1. Create a network with "gluster" network role.
> 2. On each host, use "Setup networks" to associate the gluster network on
> the desired interface. (This would ensure that the engine will peer probe
> this interface's IP address as well, so that it can be used to identify the
> host in the brick definition)
> 3. For each of the volume's bricks - change the definition of the brick,
> so that the new IP address is used. Ensure that there's no pending heal
> (i.e. gluster volume heal info should list 0 entries) before you start
> this (see https://gluster.readthedocs.io/en/latest/release-notes/3.9.0/ -
> Introducing reset-brick command)
>
> gluster volume reset-brick VOLNAME <HOSTNAME>:<BRICKPATH> start
> gluster volume reset-brick VOLNAME <HOSTNAME>:<BRICKPATH>
> <NEWHOSTNAME>:<BRICKPATH> commit force
>
>
>

So do you think I can use any other commands with oVirt 4.1.2 and gluster
3.8?
Can I safely proceed with steps 1 and 2? When I set up a gluster network and
associate it with one host, what exactly are the implications? Will I
disrupt anything, or is it only seen as an option for having gluster traffic
go over it...?

BTW: How would I complete the webadmin GUI part of step 3? I don't see an
"edit" brick functionality; I only see "Add" and "Replace Brick"...

Thanks,
Gianluca
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Upgrading HC from 4.0 to 4.1

2017-07-03 Thread Sahina Bose
On Sun, Jul 2, 2017 at 12:21 AM, Doug Ingham  wrote:

>
> Only problem I would like to manage is that I have gluster network shared
>> with ovirtmgmt one.
>> Can I move it now with these updated packages?
>>
>
> Are the gluster peers configured with the same hostnames/IPs as your hosts
> within oVirt?
>
> Once they're configured on the same network, separating them might be a
> bit difficult. Also, the last time I looked, oVirt still doesn't support
> managing HCI oVirt/Gluster nodes running each service on a different
> interface (see below).
>
> In theory, the procedure would involve stopping all of the Gluster
> processes on all of the peers, updating the peer addresses in the gluster
> configs on all of the nodes, then restarting glusterd & the bricks. I've
> not tested this however, and it's not a "supported" procedure. I've no idea
> how oVirt would deal with these changes either.
>

Which version of glusterfs do you have running now? With glusterfs>= 3.9,
there's a reset-brick command that can help you do this.

It's possible to move to the new interface for gluster.

The procedure would be:

1. Create a network with "gluster" network role.
2. On each host, use "Setup networks" to associate the gluster network on
the desired interface. (This would ensure that the engine will peer probe
this interface's IP address as well, so that it can be used to identify the
host in the brick definition)
3. For each of the volume's bricks - change the definition of the brick, so
that the new IP address is used. Ensure that there's no pending heal (i.e.
gluster volume heal info should list 0 entries) before you start this (see
https://gluster.readthedocs.io/en/latest/release-notes/3.9.0/ - Introducing
reset-brick command)

gluster volume reset-brick VOLNAME <HOSTNAME>:<BRICKPATH> start
gluster volume reset-brick VOLNAME <HOSTNAME>:<BRICKPATH>
<NEWHOSTNAME>:<BRICKPATH> commit force




>
>
> To properly separate my own storage & management networks from the
> beginning, I configured each host with 2 IPs on different subnets and a
> different hostname corresponding to each IP. For example, "v0" points to
> the management interface of the first node, and "s0" points to the storage
> interface.
>
> oVirt's problem is that, whilst it can see the pre-configured bricks and
> volumes on each host, it can't create any new bricks or volumes because it
> wants to use the same hostnames it uses to manage the hosts. It also means
> that it can't fence the hosts correctly, as it doesn't understand that "v0"
> & "s0" are the same host.
> This isn't a problem for me though, as I don't need to manage my Gluster
> instances via the GUI, and automatic fencing can be done via the IPMI
> interfaces.
>
> Last I read, this is a recognised problem, but a fix isn't expected to
> arrive any time soon.
>
> --
> Doug
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Upgrading HC from 4.0 to 4.1

2017-07-02 Thread Gianluca Cecchi
On Sat, Jul 1, 2017 at 8:51 PM, Doug Ingham  wrote:

>
> Only problem I would like to manage is that I have gluster network shared
>> with ovirtmgmt one.
>> Can I move it now with these updated packages?
>>
>
> Are the gluster peers configured with the same hostnames/IPs as your hosts
> within oVirt?
>

Yes.
From the gluster point of view:

[root@ovirt01 ~]# gluster peer status
Number of Peers: 2

Hostname: ovirt03.localdomain.local
Uuid: ec81a04c-a19c-4d31-9d82-7543cefe79f3
State: Peer in Cluster (Connected)

Hostname: ovirt02.localdomain.local
Uuid: b89311fe-257f-4e44-8e15-9bff6245d689
State: Peer in Cluster (Connected)
[root@ovirt01 ~]#

[root@ovirt02 ~]# gluster peer status
Number of Peers: 2

Hostname: ovirt03.localdomain.local
Uuid: ec81a04c-a19c-4d31-9d82-7543cefe79f3
State: Peer in Cluster (Connected)

Hostname: ovirt01.localdomain.local
Uuid: e9717281-a356-42aa-a579-a4647a29a0bc
State: Peer in Cluster (Connected)


In oVirt, the hosts are defined with Hostname/IP field as
ovirt01.localdomain.local
192.168.150.103
ovirt03.localdomain.local

I don't remember why for the second host I used its IP instead of its
hostname... possibly I used the IP as a test when adding it, because I wanted
to cross-check the "Name" and the "Hostname/IP" columns of the Hosts tab (the
host from which I executed gdeploy was added with "Name" hosted_engine_1; I
see that field is editable... but not the "Hostname/IP" one, obviously).
The node from which I initially executed the gdeploy job in 4.0.5 was
ovirt01.localdomain.local

BTW: during the upgrade of the hosts there was an error with ansible1.9:

Error: ansible1.9 conflicts with ansible-2.3.1.0-1.el7.noarch

So the solution for updating the nodes after enabling the oVirt 4.1 repo was:
yum remove ansible gdeploy
yum install ansible gdeploy
yum update

Probably the ansible and gdeploy packages are not needed any more after the
initial deploy, though they can come in handy in case of maintenance of the
config files.



> Once they're configured on the same network, separating them might be a
> bit difficult. Also, the last time I looked, oVirt still doesn't support
> managing HCI oVirt/Gluster nodes running each service on a different
> interface (see below).
>
> In theory, the procedure would involve stopping all of the Gluster
> processes on all of the peers, updating the peer addresses in the gluster
> configs on all of the nodes, then restarting glusterd & the bricks. I've
> not tested this however, and it's not a "supported" procedure. I've no idea
> how oVirt would deal with these changes either.
>

I could try, creating a snapshot before, as the oVirt hosts are vsphere VMs
themselves.
But what about this new network configuration inside oVirt? Should I
configure it as a gluster network within the cluster "logical networks" tab,
with an IP for each host configured within oVirt, or should I leave the
network unmanaged by oVirt altogether?



>
>
> To properly separate my own storage & management networks from the
> beginning, I configured each host with 2 IPs on different subnets and a
> different hostname corresponding to each IP. For example, "v0" points to
> the management interface of the first node, and "s0" points to the storage
> interface.
>
Do you have the dedicated gluster network configured as "gluster network"
in oVirt?



Gianluca
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Upgrading HC from 4.0 to 4.1

2017-07-01 Thread Doug Ingham
> Only problem I would like to manage is that I have gluster network shared
> with ovirtmgmt one.
> Can I move it now with these updated packages?
>

Are the gluster peers configured with the same hostnames/IPs as your hosts
within oVirt?

Once they're configured on the same network, separating them might be a bit
difficult. Also, the last time I looked, oVirt still doesn't support
managing HCI oVirt/Gluster nodes running each service on a different
interface (see below).

In theory, the procedure would involve stopping all of the Gluster
processes on all of the peers, updating the peer addresses in the gluster
configs on all of the nodes, then restarting glusterd & the bricks. I've
not tested this however, and it's not a "supported" procedure. I've no idea
how oVirt would deal with these changes either.


To properly separate my own storage & management networks from the
beginning, I configured each host with 2 IPs on different subnets and a
different hostname corresponding to each IP. For example, "v0" points to
the management interface of the first node, and "s0" points to the storage
interface.

oVirt's problem is that, whilst it can see the pre-configured bricks and
volumes on each host, it can't create any new bricks or volumes because it
wants to use the same hostnames it uses to manage the hosts. It also means
that it can't fence the hosts correctly, as it doesn't understand that "v0"
& "s0" are the same host.
This isn't a problem for me though, as I don't need to manage my Gluster
instances via the GUI, and automatic fencing can be done via the IPMI
interfaces.

Last I read, this is a recognised problem, but a fix isn't expected to arrive
any time soon.

-- 
Doug
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Upgrading HC from 4.0 to 4.1

2017-07-01 Thread Gianluca Cecchi
On Fri, Jun 30, 2017 at 1:11 PM, Gianluca Cecchi 
wrote:

> Hello,
> I'm going to try to update to 4.1 an HC environment, currently on 4.0 with
> 3 nodes in CentOS 7.3 and one of them configured as arbiter
>
> Any particular caveat in HC?
> Are the steps below, normally used for Self Hosted Engine environments the
> only ones to consider?
>
> - update repos on the 3 hosts and on the engine vm
> - global maintenance
> - update engine
> - update also os packages of engine vm
> - shutdown engine vm
> - disable global maintenance
> - verify engine vm boots and functionality is ok
>

I went from 4.0.5 to 4.1.2.
All the steps above went well.
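
Roughly, the engine-side part boils down to commands like these (a sketch
based on the documented procedure; the repo RPM URL is the standard 4.1
release package):

# on one of the hosts
hosted-engine --set-maintenance --mode=global

# on the engine VM
yum install http://resources.ovirt.org/pub/yum-repo/ovirt-release41.rpm
yum update "ovirt-*-setup*"
engine-setup
yum update          # remaining OS packages of the engine VM
shutdown -h now

# back on a host: exit global maintenance, the HA agents restart the engine VM
hosted-engine --set-maintenance --mode=none
hosted-engine --vm-status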

Then
> - update hosts: preferred way will be from the gui itself that takes care
> of moving VMs, maintenance and such or to proceed manually?
>
> Is there a preferred order with which I have to update the hosts, after
> updating the engine? Arbiter for first or as the latest or not important at
> all?
>
> Any possible problem having disaligned versions of glusterfs packages
> until I complete all the 3 hosts? Any known bugs passing from 4.0 to 4.1
> and related glusterfs components?
>
> Thanks in advance,
> Gianluca
>

I basically followed the instructions here:
https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.1/html/self-hosted_engine_guide/upgrading_the_self-hosted_engine

and had no problems.
As a sequence, I first updated the two hosts that contain the actual
gluster data, one by one, and finally the arbiter one.
I was only unsure about what to select on this page when you put a host into
maintenance from the GUI:
https://drive.google.com/file/d/0BwoPbcrMv8mvSDRKQVo4QzVvbTQ/view?usp=sharing

I selected only the option to stop Gluster service.
What about the other one? A contextual help shown when you mouse over the 2
options could perhaps be useful.

I expected the engine VM to migrate to the newly updated hosts, but it
didn't happen. I don't know if I'm confusing it with another scenario...
I had the engine vm running on the arbiter node that was the last one to be
updated, so I manually moved the engine vm to one of the already upgraded
hosts when it was its turn.

In the end I was also able to upgrade the cluster and DC from 4.0 to 4.1.

The only problem I would like to manage is that I have the gluster network
shared with the ovirtmgmt one.
Can I move it now with these updated packages?

BTW: my environment is based on a single NUC6i5 with 32GB of RAM, where I
have ESXi 6.0U2 installed. The 3 oVirt HCI hosts are 3 vSphere VMs, so the
engine VM is an L2 guest.
But performance is quite good after installing haveged on it. I don't know if
it could be useful to install haveged also on the hypervisors...
I already set up a second NIC for the oVirt hosts: it is a host-only network
adapter from the ESXi point of view, so it lives in the memory of the ESXi
hypervisor.
In oVirt I configured 4 VLANs on this new NIC (vlan1,2,3,4).
So it could be fine to have glusterfs configured on one of these vlans,
instead of the ovirtmgmt one.

Thanks in advance for any suggestion,

Gianluca
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Upgrading HC from 4.0 to 4.1

2017-06-30 Thread Gianluca Cecchi
Hello,
I'm going to try to update an HC environment to 4.1; it is currently on 4.0
with 3 nodes on CentOS 7.3, one of them configured as arbiter.

Any particular caveat in HC?
Are the steps below, normally used for Self-Hosted Engine environments, the
only ones to consider?

- update repos on the 3 hosts and on the engine vm
- global maintenance
- update engine
- update also os packages of engine vm
- shutdown engine vm
- disable global maintenance
- verify engine vm boots and functionality is ok
Then
- update hosts: is the preferred way from the GUI itself, which takes care
of moving VMs, maintenance and such, or should I proceed manually?

Is there a preferred order in which I have to update the hosts after
updating the engine? Arbiter first, last, or is it not important at all?

Any possible problems having misaligned versions of glusterfs packages until
I complete all 3 hosts? Any known bugs when passing from 4.0 to 4.1 with the
related glusterfs components?

Thanks in advance,
Gianluca
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users