Thanks! That was a very interesting conversation :-)
strict-o-direct just allows the app to decide whether direct I/O is used, and yes,
that could be a reason for your data loss.
The good thing is that the setting is part of the virt group and there is an
"Optimize for Virt" button somewhere in the UI. Still, I prefer the manual
approach of building
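If the option turns out to be off, it can be enabled per volume from the gluster CLI. A minimal sketch, assuming the volume is called "data" as in the volume info shared later in the thread:

[root@ovirt1 ~]# gluster volume set data performance.strict-o-direct on
[root@ovirt1 ~]# gluster volume set data network.remote-dio disable   # often adjusted together with strict-o-direct; verify against your virt group defaults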
Thanks. I will get rid of multipath.
I did not set performance.strict-o-direct specifically, only changed the
ownership of the volume to vdsm.kvm and applied the virt group.
Now I see performance.strict-o-direct was off. Could it be the reason for the
data loss?
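For reference, the current value of that option can be checked with something like this (volume name "data" is assumed from the info later in the thread):

[root@ovirt1 ~]# gluster volume get data performance.strict-o-direct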
Direct I/O is enabled in oVirt by
One recommendation is to get rid of the multipath for your SSD.
Replica 3 volumes are quite resilient and I'm really surprised it happened to
you.
For the multipath stuff, you can create something like this:
[root@ovirt1 ~]# cat /etc/multipath/conf.d/blacklist.conf
blacklist {
    wwid <WWID-of-your-local-SSD>   # placeholder - use the WWID reported by 'multipath -ll'
}
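To find the WWID to put in that blacklist, something like the following can be used (the device name /dev/sda is an assumption, adjust it to your SSD):

[root@ovirt1 ~]# multipath -ll                              # lists current multipath maps and their WWIDs
[root@ovirt1 ~]# /usr/lib/udev/scsi_id -g -u -d /dev/sda    # prints the WWID of the given device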
Hi Nikolov,
Thanks for the very interesting answer :-)
I do not use any RAID controller. I was hoping GlusterFS would take care of
fault tolerance but apparently it failed.
I have one Samsung 1TB SSD drive in each server for VM storage. I see it is of
type "multipath". There is XFS
Hi Jaroslaw,
That point was from someone else. I don't think that Gluster has such a weak
point. The only weak point I have seen is the infrastructure it relies on top of
and, of course, the built-in limitations it has.
You need to verify the following:
- mount options are important (see the sketch below). Using
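As an illustration of the mount-options point, oVirt GlusterFS storage domains are commonly given backup volfile servers so that a single node outage does not block mounting. A sketch with hypothetical hostnames and mount point (in oVirt this string usually goes into the storage domain's "Mount Options" field):

[root@ovirt1 ~]# mount -t glusterfs -o backup-volfile-servers=ovirt2:ovirt3 ovirt1:/data /mnt/data   # mount point is just an example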
Thanks Alex. I actually think that the issue was caused by a power loss at the
switch's power socket.
Thanks Strahil
The data center is remote, so I will definitely ask the lab guys to ensure the
switch is connected to a battery-backed power socket.
So Gluster's weak point is actually the switch in the network? Can it have
difficulty finding out which version of the data is correct after the
A few things to consider:
what is your RAID situation per host? If you're using mdadm-based software
RAID, you need to make sure your drives support power loss data
protection. This is mostly only a feature on enterprise drives.
Essentially it ensures the drives reserve enough energy to flush their
volatile write cache to persistent storage when power is lost.
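For drives without power loss protection, one mitigation sometimes used is to check and, if necessary, disable the volatile write cache. A hedged sketch for a SATA drive (the device name /dev/sda is an assumption):

[root@ovirt1 ~]# hdparm -W /dev/sda      # show whether the volatile write cache is enabled
[root@ovirt1 ~]# hdparm -W 0 /dev/sda    # disable it (trades write performance for safety on power loss)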
Based on the logs you shared, it looks like a network issue - but it could
always be something else.
If you ever experience something like that again, please share the logs
immediately and CC the gluster mailing list, in order to get assistance with
the root cause.
Best Regards,
Strahil
Hmm, I'm not sure. I just created GlusterFS volumes on LVM volumes, changed
ownership to vdsm.kvm and applied the virt group. Then I added them to oVirt as
storage for VMs.
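For comparison, a typical manual sequence for such a volume looks roughly like the following sketch (hostnames, brick paths and the volume name are assumptions, not taken from the thread):

[root@ovirt1 ~]# gluster volume create data replica 3 ovirt1:/gluster_bricks/data/data ovirt2:/gluster_bricks/data/data ovirt3:/gluster_bricks/data/data
[root@ovirt1 ~]# gluster volume set data group virt            # applies the virt option group
[root@ovirt1 ~]# gluster volume set data storage.owner-uid 36  # vdsm
[root@ovirt1 ~]# gluster volume set data storage.owner-gid 36  # kvm
[root@ovirt1 ~]# gluster volume start data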
Hi Strahil,
I remember that after creating the volume I applied the virt group to it.
Volume info:
Volume Name: data
Type: Replicate
Volume ID: 05842cd6-7f16-4329-9ffd-64a0b4366fbe
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Hi Jaroslaw,
it's more important to find the root cause of the data loss, as this is
definitely not supposed to happen (I have myself been through several power
outages without issues).
Did you keep the logs?
For now, check if your gluster settings (gluster volume info VOL) match the
settings in the virt group.
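A hedged way to do that comparison on one of the nodes (volume name "data" from the info above; the group file path is the usual GlusterFS location):

[root@ovirt1 ~]# cat /var/lib/glusterd/groups/virt   # option group the volume is expected to match
[root@ovirt1 ~]# gluster volume get data all         # current effective settings for the volume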
are you using JBOD bricks or do you have some sort of RAID for each of
the bricks?
Are you using sharding?
-wk
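For the sharding question, this can be checked per volume; the volume name "data" is assumed from the info shared earlier:

[root@ovirt1 ~]# gluster volume get data features.shard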
On 10/8/2020 6:11 AM, Jarosław Prokopowski wrote:
Hi Jayme, there is a UPS but the outages happened anyway. We also have a Raritan
KVM but it is not supported by oVirt.
The setup is 6 hosts - two sets of 3 hosts, each set using one replica 3 volume.
BTW what would be the best gluster volume solution for 6+ hosts?
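For illustration only, one common layout for six hosts is a 2 x 3 distributed-replicate volume, i.e. two replica 3 sets combined into one namespace. A hypothetical sketch (hostnames, brick paths and the volume name are placeholders):

[root@host1 ~]# gluster volume create vmstore replica 3 \
    host1:/gluster_bricks/vmstore/vmstore host2:/gluster_bricks/vmstore/vmstore host3:/gluster_bricks/vmstore/vmstore \
    host4:/gluster_bricks/vmstore/vmstore host5:/gluster_bricks/vmstore/vmstore host6:/gluster_bricks/vmstore/vmstore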
IMO this is best handled at the hardware level with a UPS and battery/flash-backed
controllers. Can you share more details about your oVirt setup? How
many servers are you working with and are you using replica 3 or replica 3
arbiter?
On Thu, Oct 8, 2020 at 9:15 AM Jarosław Prokopowski
wrote:
> Hi