See below:
> On Aug 27, 2020, at 3:19 PM, info--- via Users wrote:
>
> Thank you. Reboot of the engine and afterwards the backup server helped :-)
Good deal.
> Should I revert some of my previous changes? Reduce the write window size?
> - gluster volume set vmstore performance.read-ahead on
>
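If you do decide to revert, gluster can reset a single option back to its
default (volume name taken from your example):

gluster volume reset vmstore performance.read-ahead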
Thank you. Reboot of the engine and afterwards the backup server helped :-)
Before the change -> 53 minutes for 10% and the restore broke at 98% ...
- [2020-08-26:20:45:30] Restore Imaging: 1/1 - 10% Imaging in progress
- [2020-08-26:20:41:22] Restore Imaging: 1/1 - 9% Imaging in progress...
-
Looks like you’ve got a POSIX or NFS mount there? Is your gluster storage
domain of type GlusterFS? And make sure you restarted ovirt-engine after
enabling LibgfApiSupported, before stopping and restarting the VM.
An active libgfapi mount looks like:
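Something along these lines (a rough sketch from memory; hostname, volume
name and image path are placeholders):

<disk type='network' device='disk'>
  <driver name='qemu' type='raw' cache='none'/>
  <source protocol='gluster' name='vmstore/<image-id>'>
    <host name='gluster1.example.com' port='24007' transport='tcp'/>
  </source>
  <target dev='sda' bus='scsi'/>
</disk>

whereas a FUSE-backed disk shows type='file' with a path under
/rhev/data-center/.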
On Wed, Aug 26, 2020 at 8:19 PM info--- via Users wrote:
> I enabled libgfapi and powered off / on the VM.
>
> - engine-config --all
> - LibgfApiSupported: true version: 4.3
>
> How can I see that this is active on the VM? The disk looks the same as
> before.
>
> - virsh dumpxml 15
>
>
I enabled libgfapi and powered off / on the VM.
- engine-config --all
- LibgfApiSupported: true version: 4.3
How can I see that this is active on the VM? The disk looks the same as
before.
- virsh dumpxml 15
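One way to double-check on the host (a guess on my side; the grep pattern
is just illustrative): with libgfapi active, the qemu process refers to a
gluster:// URL instead of a FUSE path under /rhev/data-center:

ps -ef | grep [q]emu-kvm | grep -o 'gluster://[^ ,]*'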
Here is the volume setup:
Libgfapi bypasses the user space -> kernel -> user space context switching
that FUSE imposes, so it gets better performance.
I can't find the previous communication, so can you share your volume
settings again?
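For example (assuming the volume is still called vmstore):

gluster volume info vmstore

should list the bricks and every option that was explicitly set on it.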
Best Regards,
Strahil Nikolov
On Sunday, 23 August 2020, 21:45:22 GMT+3, info
Setting cluster.choose-local to on helped a lot to improve the read
performance. Write performance is still bad.
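For reference, that presumably went in with something like (volume name as
used earlier in the thread):

gluster volume set vmstore cluster.choose-local on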
Am I right that this then looks more like a GlusterFS issue and not
something that needs to be changed in oVirt (libgfapi) or on the VMs?
Changing tcp offloading did not make any difference.
Thank you.
ping -M do -s 8972 another-gluster-node is working.
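(8972 bytes of ICMP payload plus 28 bytes of IP/ICMP headers make a full
9000-byte packet, so this confirms jumbo frames pass end to end without
fragmentation.)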
The rest I'll check on the weekend.
Thank you. They are all thin-provisioned. I have something to do at the
weekend then: change them to preallocated. It looks like that means stopping
the VM, then downloading and uploading the disk to change the format.
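I guess the offline conversion would be something like (just a sketch,
filenames are placeholders; -S 0 makes qemu-img write zeros out explicitly
so the raw image ends up fully allocated):

qemu-img convert -p -f qcow2 -O raw -S 0 vm-disk.qcow2 vm-disk-prealloc.raw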
Are you using fully allocated VM disks?
On 20 August 2020, 0:53:40 GMT+03:00, info--- via Users
wrote:
>Additional info: I'm running Nextcloud on a VM.
>- syncing (downloading) 1 file of 300 MB to the client is fast
>- syncing (downloading) 300 files totaling 1 MB to the client is very
>slow
On 19 August 2020, 22:39:22 GMT+03:00, info--- via Users
wrote:
>Thank you for the quick reply.
>
>- I/O scheduler hosts -> changed
>echo noop > /sys/block/sdb/queue/scheduler
>echo noop > /sys/block/sdc/queue/scheduler
On reboot it will be reverted. Test this way, and if you notice improvement ...
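One way to make the scheduler persistent (an assumption on my side, not
tested here; the rule file name and disk match are examples):

# /etc/udev/rules.d/60-io-scheduler.rules
ACTION=="add|change", KERNEL=="sd[bc]", ATTR{queue/scheduler}="noop"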
Additional info: I'm running Nextcloud on a VM.
- syncing (downloading) 1 file of 300 MB to the client is fast
- syncing (downloading) 300 files totaling 1 MB to the client is very slow
Thank you for the quick reply.
- I/O scheduler hosts -> changed
echo noop > /sys/block/sdb/queue/scheduler
echo noop > /sys/block/sdc/queue/scheduler
- CPU states -> can you explain this a bit more?
cat /dev/cpu_dma_latency
F
hexdump -C /dev/cpu_dma_latency
46 00 00 00
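For what it's worth (my own reading, not from the thread): the file returns
a 32-bit little-endian integer in microseconds, so 46 00 00 00 is 0x46 = 70
microseconds:

printf '%d\n' 0x46    # -> 70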
Check and tune the following (some quick checks below):
- I/O scheduler on the host (usually noop/none are good for writes,
(mq-)deadline for reads)
- CPU C-states
- Tuned profile, there are some 'dirty' settings that will avoid I/O locks
- MTU size and tcp offloading (some users report enabled is better
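Some quick ways to inspect each of those (device and NIC names are examples,
adjust to your hosts):

cat /sys/block/sdb/queue/scheduler   # active scheduler shown in brackets
tuned-adm active                     # currently applied tuned profile
ip -o link | grep -o 'mtu [0-9]*'    # MTU per interface
ethtool -k eth0 | grep -i offload    # tso/gso/gro offload settings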