On Fri, Oct 9, 2020 at 7:12 PM Martin Perina wrote:
>
>
> Could you please share with us all the logs from the engine gathered by
> logcollector? We will try to find a clue as to what's wrong in your env ...
>
> Thanks,
> Martin
>
>
I will try to collect.
In the meantime I've found that SSH could be in
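For reference, those logs are gathered with ovirt-log-collector on the engine
machine (a sketch; --no-hypervisors is from memory, so check
ovirt-log-collector --help for your version):

  # Run as root on the engine host; bundles engine, DB and host logs into a tarball
  ovirt-log-collector

  # If reaching the hosts over SSH is itself the problem, skip them
  ovirt-log-collector --no-hypervisors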
A few things to consider:
what is your RAID situation per host? If you're using mdadm-based soft
RAID, you need to make sure your drives support power loss data
protection. This is mostly a feature of enterprise drives.
Essentially it ensures the drives reserve enough energy to flush their
volatile write cache to persistent media if power is lost.
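A quick way to check the soft-RAID side (a sketch; /dev/sdX is a placeholder
for one of the member drives, and actual power-loss protection is usually only
stated in the vendor datasheet):

  # Is mdadm software RAID in use, and are the arrays healthy?
  cat /proc/mdstat

  # Is the drive's volatile write cache enabled? (-W without a value only queries)
  hdparm -W /dev/sdX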
Based on the logs you shared, it looks like a network issue - but it could
always be something else.
If you ever experience a situation like that, please share the logs
immediately and CC the Gluster mailing list, in order to get assistance with
finding the root cause.
Best Regards,
Strahil
On Fri, Oct 9, 2020 at 4:58 PM Martin Perina wrote:
> Hi Gianluca,
>
> could you please check selinux context of
> /var/log/ovirt-engine/ansible-runner-service.log to see if you are not
> affected by https://bugzilla.redhat.com/show_bug.cgi?id=1880171#c5 ?
>
> Thanks,
> Martin
>
Thanks for
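For anyone hitting the same issue, checking the file's SELinux context and
resetting it to the policy default looks roughly like this (a sketch; whether
restorecon is the actual fix depends on the bug, see the Bugzilla comment
linked above):

  # Show the current SELinux context of the log file
  ls -Z /var/log/ovirt-engine/ansible-runner-service.log

  # Reset it to the policy default as root; -v prints any change made
  restorecon -v /var/log/ovirt-engine/ansible-runner-service.log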
On Thu, Oct 8, 2020 at 5:13 PM Gianluca Cecchi
wrote:
>
>
> On Thu, Oct 8, 2020 at 5:08 PM Gianluca Cecchi
> wrote:
>
>> On Thu, Oct 8, 2020 at 4:59 PM Dana Elfassy wrote:
>>
>>> And also please attach the content of the file found at:
>>> /etc/ansible-runner-service/config.yaml
>>>
>>> On
Hmm, I'm not sure. I just created GlusterFS volumes on LVM volumes, changed
ownership to vdsm.kvm and applied the virt group. Then I added them to oVirt
as storage for VMs.
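For what it's worth, that setup corresponds to roughly these commands on a
Gluster node (a sketch; the volume name 'data' is taken from the volume info
below, and uid/gid 36 is vdsm:kvm on oVirt hosts):

  # Apply the predefined 'virt' option group shipped with Gluster
  gluster volume set data group virt

  # Have Gluster create files owned by vdsm:kvm (uid/gid 36)
  gluster volume set data storage.owner-uid 36
  gluster volume set data storage.owner-gid 36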
Hi Strahil,
I remember that after creating the volume I applied the virt group to it.
Volume info:
Volume Name: data
Type: Replicate
Volume ID: 05842cd6-7f16-4329-9ffd-64a0b4366fbe
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
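To double-check that the virt group and ownership options really took effect
on the volume, something along these lines works (a sketch; the grep pattern
just picks out a few options those settings touch):

  gluster volume get data all | grep -Ei 'quick-read|remote-dio|owner'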
Hi,
due to a bug in our oVirt-integrated backup system we now have some VMs
with snapshots in an illegal state.
It seems that there's an inconsistency between the DB and the real
status of the images on disk.
Let me show an example:
engine=# select
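A query along these lines lists the images the engine marks illegal (a sketch
against the engine DB; imagestatus = 4 is ILLEGAL in the engine's ImageStatus
enum, but verify the column names against your schema version):

  engine=# select image_guid, image_group_id, vm_snapshot_id, imagestatus
             from images where imagestatus = 4;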
Running "# engine-setup":
[ ERROR ] Failed to execute stage 'Closing up': Failed to start service
'ovirt-imageio'
[ INFO ] Stage: Clean up
Log file is located at
/var/log/ovirt-engine/setup/ovirt-engine-setup-20201009105350-7q7pbo.log
[ INFO ] Generating answer file
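When engine-setup fails to start a service like this, the service's own status
and journal usually show the reason (ovirt-imageio is the unit name from the
error above), along with the setup log file mentioned in the output:

  systemctl status ovirt-imageio
  journalctl -u ovirt-imageio --no-pager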