On Sun, Jul 2, 2017 at 2:08 AM, Mike DePaulo wrote:
> Hi everyone,
> I have ovirt 4.1.1/4.1.2 running on 3 hosts with a gluster hosted engine.
> I was working on setting up a network for gluster storage and
> migration. The addresses for it will be 10.0.20.x, rather than
> 192.168.1.x for the management network. However, I switched gluster
> storage and migration back over to the management network.
> I updated and rebooted one of my hosts (death-star, 10.0.20.52) and on
> reboot, the glusterd service would start, but wouldn't seem to work.
> The engine webgui reported that its bricks were down, and commands
> like this would fail:
> [root@death-star glusterfs]# gluster pool list
> pool list: failed
> [root@death-star glusterfs]# gluster peer status
> peer status: failed
> Upon further investigation, I had under /var/lib/glusterd/peers/ the 2
> existing UUID files, plus a new 3rd one:
> [root@death-star peers]# cat 10.0.20.53
> I moved that file out of there, restarted glusterd, and now gluster is
> working again.
> I am guessing that this is a bug. Let me know if I should attach other
> log files; I am not sure which ones.
> And yes, 10.0.20.53 is the IP of one of the other hosts.
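For anyone hitting the same symptom, the recovery described in the quoted report can be rehearsed as below. This is a sketch, not a supported procedure: PEERS_DIR defaults to a throwaway demo directory so it can be dry-run safely; the real directory is /var/lib/glusterd/peers, and you would only touch it with glusterd stopped. The stray file name (10.0.20.53) is taken from the report; the uuid-peer-* names are placeholders for the real peer UUID files.

```shell
# Demo of quarantining a stray peer file instead of deleting it outright.
PEERS_DIR="${PEERS_DIR:-/tmp/peers-demo}"
BACKUP_DIR="${BACKUP_DIR:-/tmp/peers-backup}"
mkdir -p "$PEERS_DIR" "$BACKUP_DIR"
# Simulate the state from the report: 2 expected peer UUID files plus a 3rd stray one.
touch "$PEERS_DIR/uuid-peer-1" "$PEERS_DIR/uuid-peer-2" "$PEERS_DIR/10.0.20.53"
# Move the unexpected entry aside so it can be inspected later:
mv "$PEERS_DIR/10.0.20.53" "$BACKUP_DIR/"
ls "$PEERS_DIR"   # only the two expected peer UUID files remain
# On a real host, follow with: systemctl restart glusterd && gluster peer status
```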
I'm trying to accomplish the same thing.
See also the comments in my answer today:
So in the end you rolled back?
Can you list in detail the modifications and operating steps you carried out
on the hosts before trying to restart with the new network config?
Did you try to set the new network as gluster role in oVirt?
I'm using 4 volumes at the moment: data, engine, iso, export. Based on some
analysis I'm doing right now, one should modify at least these files for each
vol_name accordingly under /var/lib/glusterd on the 3 hosts:
Plus, one has to rename 3 of these files themselves. Suppose the hostnames are
ovirtN.localdomain.local and that you decide to assign the hostname
glovirtN.localdomain.local to the interfaces on the new gluster network; then
they should become:
One also has to change these files on each node (together with the related
UUID files of the other two nodes):
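A quick way to locate every affected file under /var/lib/glusterd is to grep for the old hostname and rewrite it. The sketch below defaults to a demo copy of the tree so it can be tried safely; pointing GLUSTERD_DIR at the real /var/lib/glusterd (with glusterd stopped, and after a backup) is entirely at your own risk, and the ovirt1/glovirt1 names follow the naming scheme above.

```shell
# Demo: rewrite old management-network hostnames to the new gluster-network ones.
GLUSTERD_DIR="${GLUSTERD_DIR:-/tmp/glusterd-demo}"
mkdir -p "$GLUSTERD_DIR/vols/data"
# Simulate one config file that references the old hostname:
echo "hostname1=ovirt1.localdomain.local" > "$GLUSTERD_DIR/vols/data/info-demo"
# List every file mentioning the old name...
grep -rl 'ovirt1.localdomain.local' "$GLUSTERD_DIR"
# ...and rewrite it to the new gluster-network name (run this once only:
# the new name contains the old one as a substring, so a second pass would
# match again).
grep -rl 'ovirt1.localdomain.local' "$GLUSTERD_DIR" \
  | xargs sed -i 's/ovirt1\.localdomain\.local/glovirt1.localdomain.local/g'
grep -r 'glovirt1' "$GLUSTERD_DIR"
```

Note this only covers in-file references; the file renames mentioned above still have to be done by hand.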
I see no problems with the migration network change, though. I did it by
changing the role checkbox under Cluster --> Default --> Logical Networks
subpane --> Manage Networks.
You have to assign an IP to the interface for every host in Hosts -->
Network Interfaces --> Setup Host Networks.
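After assigning the addresses, every host also needs to resolve the new glovirtN names. A minimal sketch of the kind of entries involved, written to a demo file (real deployments would use DNS or /etc/hosts on each node; 10.0.20.52/.53 come from the thread, 10.0.20.51 and the name-to-IP pairing are assumptions):

```shell
# Demo hosts entries for the new 10.0.20.x gluster network.
HOSTS_FILE="${HOSTS_FILE:-/tmp/hosts-demo}"
cat > "$HOSTS_FILE" <<'EOF'
10.0.20.51 glovirt1.localdomain.local glovirt1
10.0.20.52 glovirt2.localdomain.local glovirt2
10.0.20.53 glovirt3.localdomain.local glovirt3
EOF
grep glovirt2 "$HOSTS_FILE"   # spot-check one mapping
```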
Users mailing list