Oh yes, it is important to know that ahead of time. Not so nice if they drop drivers.
Fortunately, for now my PERC H710 (LSI MegaRAID SAS 2208) is still supported by the
megaraid_sas Linux module in RHEL 8.
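A quick way to confirm this before upgrading is to check whether the controller's PCI ID appears in the target kernel's megaraid_sas alias list; a minimal sketch, assuming the commonly reported ID 1000:005b for the SAS2208 (verify yours with `lspci -nn`):

```shell
# Does the running kernel's megaraid_sas module claim a given PCI ID?
# 1000:005b is the commonly reported ID for the LSI SAS2208 (PERC H710);
# treat it as an assumption and confirm with: lspci -nn | grep -i megaraid
id_supported() {
    # $1 = modinfo alias lines, $2 = "vendor:device" in hex
    vendor=${2%%:*}; device=${2##*:}
    printf '%s\n' "$1" | grep -qi "v0000${vendor}d0000${device}"
}

aliases=$(modinfo megaraid_sas 2>/dev/null | awk '/^alias/ {print $2}')
if id_supported "$aliases" "1000:005b"; then
    echo "PERC H710 (SAS2208) claimed by megaraid_sas"
else
    echo "not found in this kernel's megaraid_sas aliases"
fi
```

Running the same check inside a RHEL/CentOS 8 rescue environment (or against its kernel-modules package) shows whether the controller survived the driver cull before committing to the upgrade.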
Upgrading to oVirt 4.4 is really difficult. I had to schedule downtime for it to work
correctly.
After you deploy
Emy,
I was wondering how much improvement, if any, I'd see with Gluster storage
after moving to oVirt 4.4/CentOS 8.x (but I have not made the switch yet myself).
You should keep in mind that your PERC controllers aren't supported by
CentOS 8 out of the box; support for many older controllers was dropped.
Yo,
I found the problem.
The 3.x kernel in CentOS 7.8 is really too old and does not know how to handle
newer SSD disks or RAID controllers with the latest BIOS updates applied.
Booting the latest Arch Linux ISO image with kernel 5.7.6, or CentOS 8.2 with
kernel 4.18, increased performance.
Thank you for the information provided.
Yep, MTU is working OK with jumbo frames on all Gluster nodes.
In the next few days, if I have time, I will try oVirt 4.4 with Gluster 7.x
versus oVirt 4.4 with NFS to compare performance.
I might even try Ceph with oVirt 4.4.
On 29 June 2020 at 4:14:33 GMT+03:00, jury cat wrote:
>If I destroy the brick, I might upgrade to oVirt 4.4 and CentOS 8.2.
>Do you think upgrading to oVirt 4.4 with GlusterFS improves performance,
>or am I better off with NFS?
Actually, only you can find out, as we cannot know the workload of your VMs.
oVirt uses the default shard size of 64MB, and I don't think that qualifies as
'small file' at all.
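As a back-of-the-envelope check on why 64MB shards are not "small files", a minimal sketch; the 100 GiB image size is an arbitrary example and the volume name in the comment is a placeholder:

```shell
# With sharding, a VM image is stored as image_size / shard_size pieces.
SHARD_MB=64
IMAGE_GB=100
SHARDS=$(( IMAGE_GB * 1024 / SHARD_MB ))
echo "$SHARDS shards for a ${IMAGE_GB} GiB image"   # 1600 shards
# Verify the live value on a volume (volume name is a placeholder):
# gluster volume get myvol features.shard-block-size
```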
There are a lot of tunables for optimizing Gluster, and I can admit it's not an
easy task.
Deadline is good for databases, but with SSDs you should test performance with
multiqueue (blk-mq) enabled.
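A minimal sketch of how to read and switch the scheduler; the bracketed entry is the active one, the sample lines are illustrative, and `sda` is a placeholder device:

```shell
# The active scheduler is the bracketed entry, e.g.:
#   noop [deadline] cfq            (CentOS 7, single-queue)
#   [mq-deadline] kyber bfq none   (blk-mq kernels)
active_sched() {
    # extract the bracketed entry from a scheduler sysfs line
    printf '%s\n' "$1" | sed -n 's/.*\[\(.*\)\].*/\1/p'
}
active_sched "noop [deadline] cfq"          # prints: deadline
# On a live host (sda is a placeholder):
#   active_sched "$(cat /sys/block/sda/queue/scheduler)"
#   echo mq-deadline > /sys/block/sda/queue/scheduler   # switch (needs root)
```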
I've tried various methods to improve Gluster performance on similar
hardware and never had much luck. Small-file workloads were particularly
troublesome. I ended up switching high-performance VMs to NFS storage, and
performance with NFS improved greatly in my use case.
On Sun, Jun 28, 2020 at 6:42
> Hello ,
Hello, and thank you for the reply. Below are the answers to your questions.
>
> Let me ask some questions:
> 1. What is the scheduler for your PV ?
On the RAID controller device where the SSD disks are in RAID 0 (device sda), it
is set to "deadline". But on the LVM logical volume
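The answer above cuts off at the LV's own queue settings; a minimal sketch of how to find them (the LV path in the comment is a placeholder, not from this thread):

```shell
# LVs are device-mapper nodes (dm-N); their I/O queue settings live under
# /sys/block/dm-N/queue. Resolve the dm name from a device path:
dm_of() { basename "$(readlink -f "$1" 2>/dev/null || printf '%s' "$1")"; }

dm_of /dev/dm-3                      # prints: dm-3
# On a real host, use your VG/LV names:
# cat /sys/block/$(dm_of /dev/mapper/myvg-mylv)/queue/scheduler
```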
Hello,
Let me ask some questions:
1. What is the scheduler for your PV?
2. Have you aligned your PV during setup with 'pvcreate --dataalignment
alignment_value device'?
3. What is your tuned profile? Do you use rhgs-random-io from
ftp://ftp.redhat.com/redhat/linux/enterprise/7Server/en/RHS/ ?
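For anyone working through these questions, a minimal sketch of how the three items can be checked; the stripe size, disk count, and device names are placeholder assumptions, not values from this thread:

```shell
# A common rule of thumb for striped RAID under LVM:
#   PV data alignment = stripe (chunk) size * number of data disks.
# These numbers are placeholders, not values from this thread.
STRIPE_KB=256      # RAID chunk size per disk
DATA_DISKS=4       # data-bearing disks in the array
ALIGN_KB=$(( STRIPE_KB * DATA_DISKS ))
echo "dataalignment: ${ALIGN_KB}K"                    # 1024K here
# 1. scheduler:   cat /sys/block/sda/queue/scheduler
# 2. alignment:   pvcreate --dataalignment ${ALIGN_KB}K /dev/sdX
# 3. profile:     tuned-adm active
```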