>> >> 1.2 All bricks healed (gluster volume heal data info summary) and no
>> >> split-brain
>> >
>> >
>> >
>> > gluster volume heal data info
>> >
>> > Brick node-msk-gluster203:/opt/gluster/data
>> > Status: Connected
>> > Number of entries: 0
>> >
>> > Brick node-msk-gluster205:/opt/glu
Thx for your help, Strahil! Hmmm, I see that DNS resolution failed for a hostname without an FQDN. I'll try to fix it.

19.03.2019, 09:43, "Strahil":
Hi Alexei,
>> 1.2 All bricks healed (gluster volume heal data info summary) and no split-brain
>
> gluster volume heal data info
>
> Brick node-msk-gluster203
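For anyone hitting the same symptom, a quick way to compare how the short name and the FQDN resolve on a gluster node (a minimal sketch; the FQDN, the IP address and the /etc/hosts workaround are hypothetical examples, not values from this thread):

# How does the peer name resolve on this host?
getent hosts node-msk-gluster205
getent hosts node-msk-gluster205.example.com   # hypothetical FQDN

# What does gluster itself record for the peer?
gluster peer status

# One possible workaround is a static /etc/hosts entry (IP is hypothetical):
echo '10.0.0.205  node-msk-gluster205.example.com node-msk-gluster205' >> /etc/hosts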
Hi Alexei,
>> 1.2 All bricks healed (gluster volume heal data info summary) and no
>> split-brain
>
>
>
> gluster volume heal data info
>
> Brick node-msk-gluster203:/opt/gluster/data
> Status: Connected
> Number of entries: 0
>
> Brick node-msk-gluster205:/opt/gluster/data
>
>
>
>
>
Thx for the answer!

18.03.2019, 14:52, "Strahil Nikolov":
> Hi Alexei,
> In order to debug it check the following:
> 1. Check gluster:
> 1.1 All bricks up ?

All peers up. Gluster version is 3.12.15

[root@node-msk-gluster203 ~]# gluster peer status
Number of Peers: 2

Hostname: node-msk-gluster205.
Uuid: 188
Hi Alexei,
In order to debug it check the following:
1. Check gluster:
1.1 All bricks up ?
1.2 All bricks healed (gluster volume heal data info summary) and no split-brain
2. Go to the problematic host and check the mount point is there
2.1. Check permissions (should be vdsm:kvm) and fix with chown
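The checklist above, reduced to commands (a sketch only; the volume name "data" comes from this thread, while the mount path under /rhev/data-center is an assumption about a typical oVirt host):

# 1.1 All bricks and peers up?
gluster volume status data
gluster peer status

# 1.2 All bricks healed, no split-brain?
gluster volume heal data info summary
gluster volume heal data info split-brain

# 2. On the problematic host: is the domain actually mounted?
mount | grep glusterfs

# 2.1 Ownership should be vdsm:kvm; fix it if not (path is an example)
ls -ld /rhev/data-center/mnt/glusterSD/*
# chown -R vdsm:kvm /rhev/data-center/mnt/glusterSD/<server>:_data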
Hi all! I have a very similar problem after updating one of the two nodes to version 4.3.1. Node node77-02 lost connection to the gluster volume named DATA, but not to the volume with the hosted engine.

node77-02 /var/log/messages:
Mar 18 13:40:00 node77-02 journal: ovirt-ha-agent ovirt_hosted_engine_ha.agent.ho
Hi Simone,
I have noticed that my Engine's root disk is 'vda', just like in standalone KVM.
I have the feeling that was not the case before.
Can someone check a default engine and post the output of lsblk?
Thanks in advance.
Best Regards,
Strahil Nikolov

On Mar 15, 2019 12:46, Strahil Nikolov wrote:
On Fri, Mar 15, 2019 at 8:12 AM Strahil Nikolov wrote:
Ok,
I have managed to recover again and no issues are detected this time. I guess
this case is quite rare and nobody has experienced that.
> Hi,
> can you please explain how you fixed it?
I have set again to global maintenance, defined the Ho
On Fri, Mar 15, 2019 at 8:12 AM Strahil Nikolov
wrote:
> Ok,
>
> I have managed to recover again and no issues are detected this time.
> I guess this case is quite rare and nobody has experienced that.
>
Hi,
can you please explain how you fixed it?
>
> Best Regards,
> Strahil Nikolov
>
> В сря
Ok,
I have managed to recover again and no issues are detected this time. I guess
this case is quite rare and nobody has experienced that.
Best Regards,
Strahil Nikolov
On Wednesday, March 13, 2019, 13:03:38 GMT+2, Strahil Nikolov
wrote:
Dear Simone,
it seems that there is some kin
Dear Simone,
it seems that there is some kind of problem, as the OVF got updated with the wrong
configuration:
[root@ovirt2 ~]# ls -l
/rhev/data-center/mnt/glusterSD/ovirt1.localdomain:_engine/808423f9-8a5c-40cd-bc9f-2568c85b8c74/images/{441abdc8-6cb1-49a4-903f-a1ec0ed88429,94ade632-6ecc-4901-8cec-8e
Dear Simone,
it should be 60 min, but I checked several hours after that and it didn't
update it.
[root@engine ~]# engine-config -g OvfUpdateIntervalInMinutes
OvfUpdateIntervalInMinutes: 60 version: general
How can I make a backup of the VM config, as you have noticed the local copy
in /
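One way to keep a copy of the current definition before experimenting (a sketch; it assumes the agent's local copy lives at /var/run/ovirt-hosted-engine-ha/vm.conf and that libvirt knows the VM as "HostedEngine", both of which may differ per version):

# Save the agent's local vm.conf somewhere persistent
cp /var/run/ovirt-hosted-engine-ha/vm.conf /root/vm.conf.$(date +%F)

# Also keep the libvirt XML of the engine VM (read-only connection)
virsh -r dumpxml HostedEngine > /root/HostedEngine.$(date +%F).xml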
On Tue, Mar 12, 2019 at 9:48 AM Strahil Nikolov
wrote:
> Latest update - the system is back and running normally.
> After a day (or maybe a little more), the OVF is OK:
>
Normally it should try every 60 minutes.
Can you please execute
engine-config -g OvfUpdateIntervalInMinutes
on your engine VM
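A sketch of querying and, if needed, changing that interval on the engine VM (the set/restart steps are my assumption about the usual engine-config workflow, not something prescribed in this thread):

# Current OVF update interval in minutes
engine-config -g OvfUpdateIntervalInMinutes

# Example: lower it to 15 minutes, then restart the engine so it takes effect
engine-config -s OvfUpdateIntervalInMinutes=15
systemctl restart ovirt-engine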
Latest update - the system is back and running normally. After a day (or maybe
a little more), the OVF is OK:
[root@ovirt1 ~]# ls -l
/rhev/data-center/mnt/glusterSD/ovirt1.localdomain\:_engine/808423f9-8a5c-40cd-bc9f-2568c85b8c74/images/{441abdc8-6cb1-49a4-903f-a1ec0ed88429,94ade632-6ecc-4901-8ce
Hello again,
Latest update: the engine is up and running (or at least the login portal).
[root@ovirt1 ~]# hosted-engine --check-liveliness
Hosted Engine is up!
I have found online the xml for the network:
[root@ovirt1 ~]# cat ovirtmgmt_net.xml
vdsm-ovirtmgmt
Sadly, I had to create a symbol
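If a copy of that network XML is available, re-registering it with libvirt is roughly this (a sketch; the file name comes from this thread, and on an oVirt host virsh may ask for the vdsm SASL credentials):

# Define the ovirtmgmt network from the saved XML and start it
virsh net-define ovirtmgmt_net.xml
virsh net-start vdsm-ovirtmgmt
virsh net-list --all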
Hi Simone,
and thanks for your help.
So far I found out that there is some problem with the local copy of the
HostedEngine config (see attached part of vdsm.log).
I have found an older xml configuration (in an old vdsm.log) and defining
the VM works, but powering it on reports:
[root@ovirt1
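The recovery attempt described there, reduced to commands (a sketch; "HostedEngine.xml" is a hypothetical file holding the XML recovered from the old vdsm.log):

# Register the recovered definition and try to power it on
virsh define HostedEngine.xml
virsh start HostedEngine

# Inspect the definition and the qemu log if the start fails
virsh dumpxml HostedEngine
tail -n 50 /var/log/libvirt/qemu/HostedEngine.log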
On Fri, Mar 8, 2019 at 12:49 PM Strahil Nikolov
wrote:
> Hi Simone,
>
> sadly it seems that starting the engine from an alternative config is not
> working.
> Virsh reports that the VM is defined , but shut down and the dumpxml
> doesn't show any disks - maybe this is normal for oVirt (I have nev
On Thu, Mar 7, 2019 at 2:54 PM Strahil Nikolov
wrote:
>
>
>
> >The OVF_STORE volume is going to get periodically recreated by the engine
> so at least you need a running engine.
>
> >In order to avoid this kind of issue we have two OVF_STORE disks, in your
> case:
>
> >MainThread::INFO::2019-03-0
>The OVF_STORE volume is going to get periodically recreated by the engine so
>at least you need a running engine.
>In order to avoid this kind of issue we have two OVF_STORE disks, in your case:
>MainThread::INFO::2019-03-06
>06:50:02,391::ovf_store::120::ovirt_hosted_engine_ha.lib.ovf.ovf
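Since the OVF_STORE volume is a tar archive holding the VMs' OVF files, its content can be checked directly on the storage domain (a sketch; the path is a placeholder built from the listings quoted in this thread, so substitute the real UUIDs from the agent log above):

# Placeholder path - substitute the real storage domain / image / volume UUIDs
OVF_STORE=/rhev/data-center/mnt/glusterSD/ovirt1.localdomain:_engine/<sd_uuid>/images/<img_uuid>/<vol_uuid>

# List the archive and extract the HostedEngine OVF for inspection
tar -tvf "$OVF_STORE"
mkdir -p /tmp/ovf_check && tar -xvf "$OVF_STORE" -C /tmp/ovf_check
ls -l /tmp/ovf_check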
On Thu, Mar 7, 2019 at 9:19 AM Strahil Nikolov
wrote:
> Hi Simone,
>
> I think I found the problem - ovirt-ha cannot extract the file containing
> the needed data .
> In my case it is completely empty:
>
>
> [root@ovirt1 ~]# ll
> /rhev/data-center/mnt/glusterSD/ovirt1.localdomain:_engine/808423f9
Hi Simone,
I think I found the problem - ovirt-ha cannot extract the file containing the
needed data. In my case it is completely empty:
[root@ovirt1 ~]# ll
/rhev/data-center/mnt/glusterSD/ovirt1.localdomain:_engine/808423f9-8a5c-40cd-bc9f-2568c85b8c74/images/94ade632-6ecc-4901-8cec-8e39f3d69cb
On Wed, Mar 6, 2019 at 3:09 PM Strahil Nikolov
wrote:
> Hi Simone,
>
> thanks for your reply.
>
> >Are you really sure that the issue was on the ping?
> >on storage errors the broker restarts itself and while the broker is
> >restarting the agent cannot ask the broker to trigger the gateway monito
On Wed, Mar 6, 2019 at 6:13 AM Strahil wrote:
> Hi guys,
>
> After updating to 4.3.1 I had an issue where the ovirt-ha-broker was
> complaining that it couldn't ping the gateway.
>
Are you really sure that the issue was on the ping?
on storage errors the broker restarts itself and while the broke
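A few commands that help tell a real gateway problem apart from a broker restart (a sketch; paths and unit names assume a standard hosted-engine host):

# State of the HA services and of the hosted engine
systemctl status ovirt-ha-broker ovirt-ha-agent
hosted-engine --vm-status

# Recent broker messages, including the ping (gateway) submonitor
journalctl -u ovirt-ha-broker --since "1 hour ago"
grep -iE 'ping|gateway' /var/log/ovirt-hosted-engine-ha/broker.log | tail

# Manual check of the default gateway
ping -c 3 "$(ip route | awk '/^default/ {print $3; exit}')"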