On Fri, Jul 29, 2016 at 6:31 AM, Wee Sritippho <we...@forest.go.th> wrote:
> On 28/7/2559 15:54, Simone Tiraboschi wrote:
> On Thu, Jul 28, 2016 at 10:41 AM, Wee Sritippho <we...@forest.go.th> wrote:
>> On 21/7/2559 16:53, Simone Tiraboschi wrote:
>> On Thu, Jul 21, 2016 at 11:43 AM, Wee Sritippho <we...@forest.go.th>
>> wrote:
>>> Can I just follow
>>> http://www.ovirt.org/documentation/how-to/hosted-engine/#upgrade-hosted-engine
>>> until step 3 and do everything else via GUI?
>> Yes, absolutely.
>> Hi, I upgraded a host (host02) via the GUI and now its score is 0. I restarted
>> the services but the result is still the same. I'm kinda lost now. What should I
>> do next?
> Can you please attach ovirt-ha-agent logs?
> Yes, here are the logs:
> https://app.box.com/s/b4urjty8dsuj98n3ywygpk3oh5o7pbsh

Thanks Wee,
your issue is here:
The hosted-engine storage domain is already mounted on
with a path that is not supported anymore: the right path should be

Did you manually try to avoid the single-entry-point issue of the
GlusterFS volume by using host01.ovirt.forest.go.th:_hosted__engine on
one host and host02.ovirt.forest.go.th:_hosted__engine on the other?
This can cause a lot of confusion, since the code cannot detect that
the storage domain is the same, and you can end up with it mounted
twice in different locations, plus a lot of other issues.
The correct solution to that issue was this one:

Now, to fix it in your environment you have to hack a bit.
As a first step, edit /etc/ovirt-hosted-engine/hosted-engine.conf on
all your hosted-engine hosts to ensure that the storage field always
points to the same entry point (host01, for instance).
Then on each host you can add something like:
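The snippet from the original message is not included here. As a
hedged sketch only (the exact paths, the mnt_options key, and the
backup-volfile-servers option are my assumptions about what was
suggested, not confirmed by this thread), the idea is to keep a single
storage entry point and list the other Gluster server as a backup via
the mount options instead of a second path:

```shell
# /etc/ovirt-hosted-engine/hosted-engine.conf -- sketch, values assumed
# Use the SAME entry point on every hosted-engine host:
storage=host01.ovirt.forest.go.th:/hosted_engine
# Put the other Gluster server(s) in the mount options, not in the path:
mnt_options=backup-volfile-servers=host02.ovirt.forest.go.th
```

With this, the client still has a fallback server if host01 is down,
but every host mounts the domain under one and the same path.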

Then check how your storage connection is represented in the
storage_server_connections table of the engine DB and make sure that
the connection refers to the entry point you used in
hosted-engine.conf on all your hosts; lastly, you also have to set the
value of mount_options there.
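As an illustrative sketch of that check (run on the engine VM; the
column names are assumed from the engine schema, and the sample values
are hypothetical, so adapt them to what you actually see):

```shell
# Inspect the stored connection (sketch; engine DB name and columns assumed):
su - postgres -c "psql engine -c \
  \"SELECT id, connection, mount_options FROM storage_server_connections;\""

# If the connection does not match the entry point chosen in
# hosted-engine.conf, align it with an UPDATE like the one below
# (commented out on purpose -- hypothetical values, verify before running):
# UPDATE storage_server_connections
#    SET connection    = 'host01.ovirt.forest.go.th:/hosted_engine',
#        mount_options = 'backup-volfile-servers=host02.ovirt.forest.go.th'
#  WHERE connection LIKE '%hosted_engine%';
```

Take a DB backup first, and make the change with the engine stopped.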

Please also tune the value of network.ping-timeout for your GlusterFS
volume to avoid this:
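(The log excerpt referenced above is not included here.) As a sketch,
tuning that option looks like the following; the volume name
hosted_engine and the value 30 are assumptions, so pick a timeout that
fits your network:

```shell
# Sketch: shorten/adjust the Gluster client ping timeout so a dead brick
# is detected in a sane time (volume name and value are assumptions):
gluster volume set hosted_engine network.ping-timeout 30

# Verify the value currently in effect:
gluster volume get hosted_engine network.ping-timeout
```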

> --
> Wee