[ovirt-users] Re: Glusterfs and vm's

2021-05-08 Thread WK
would appreciate the URL. -wk ___ Users mailing list -- users@ovirt.org To unsubscribe send an email to users-le...@ovirt.org Privacy Statement: https://www.ovirt.org/privacy-policy.html oVirt Code of Conduct: https://www.ovirt.org/community/about/community-g

[ovirt-users] Re: CentOS 8 is dead

2020-12-08 Thread WK
On 12/8/2020 12:20 PM, Michael Watters wrote: This was one of my fears regarding the IBM acquisition.  I guess we can't complain too much, it's not like anybody *pays* for CentOS.  :) yes, but "we" do provide feedback and bug reports from a LOT of different environments which directly helps

[ovirt-users] Re: Gluster volume slower then raid1 zpool speed

2020-11-25 Thread WK
MB/s range -wk On 11/25/2020 2:29 AM, Harry O wrote: Unfortunately I didn't get any improvement by upgrading the network. Bare metal (zfs raid1 zvol): dd if=/dev/zero of=/gluster_bricks/test1.img bs=1G count=1 oflag=dsync 1+0 records in 1+0 records out 1073741824 bytes (1.1 GB, 1.0 GiB
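The quoted dd test measures synchronous write throughput directly on the brick. A minimal sketch of the comparison being discussed, assuming the gluster volume is FUSE-mounted at /mnt/gluster (the mount path is an assumption for illustration, not from the thread):

```shell
# Baseline: synchronous 1 GiB write straight to the local brick (ZFS zvol)
dd if=/dev/zero of=/gluster_bricks/test1.img bs=1G count=1 oflag=dsync

# Same test through the gluster FUSE mount; the difference between the two
# results is the replication plus network overhead
dd if=/dev/zero of=/mnt/gluster/test1.img bs=1G count=1 oflag=dsync

# Remove the test files afterwards
rm -f /gluster_bricks/test1.img /mnt/gluster/test1.img
```

With oflag=dsync every write is flushed before dd reports completion, so the numbers reflect sustained synchronous throughput rather than page-cache speed.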

[ovirt-users] Re: Gluster volume slower then raid1 zpool speed

2020-11-23 Thread WK
com/documentation/en-us/red_hat_enterprise_linux/7/html/networking_guide/ch-configure_network_teaming You *will* see an immediate improvement. MTU 9000 (jumbo frames) can also help a bit. Of course 10G or better networking would be optimal. -wk
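The jumbo-frames advice above can be sketched as follows; the interface name and peer address are placeholders, and every device on the storage network (NICs, switch ports, peers) must agree on the MTU or traffic will silently fragment or drop:

```shell
# Raise the MTU to 9000 (jumbo frames) on the storage NIC
# (eth1 is an example interface name)
ip link set dev eth1 mtu 9000

# Verify jumbo frames pass end-to-end to a storage peer:
# 8972 = 9000 - 20 (IP header) - 8 (ICMP header); -M do forbids fragmentation
ping -M do -s 8972 -c 3 192.168.10.2
```

If the ping fails with "message too long", some hop on the path is still at MTU 1500.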

[ovirt-users] Re: How to make oVirt + GlusterFS bulletproof

2020-10-08 Thread WK
are you using JBOD bricks or do you have some sort of RAID for each of the bricks? Are you using sharding? -wk On 10/8/2020 6:11 AM, Jarosław Prokopowski wrote: Hi Jayme, there is UPS but anyway the outages happened. We have also Raritan KVM but it is not supported by oVirt. The setup is 6
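For context on the sharding question above, a hedged sketch of checking and enabling it (the volume name is a placeholder). Sharding splits large VM images into fixed-size pieces so that self-heal after an outage copies only dirty shards instead of whole multi-GB images:

```shell
# Check whether sharding is enabled on the volume (myvol is an example name)
gluster volume get myvol features.shard

# Enable sharding with a 64 MB shard size; this only affects files created
# after the option is set -- existing images are not retroactively sharded
gluster volume set myvol features.shard on
gluster volume set myvol features.shard-block-size 64MB
```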

[ovirt-users] Re: CEPH - Opinions and ROI

2020-10-02 Thread WK
Yes, we manage a number of Distributed Storage systems including MooseFS, Ceph, DRBD and of course Gluster (since 3.3). Each has a specific use. For small customer-specific VM host clusters, which is the majority of what we do, Gluster is by far the safest and easiest to deploy/understand

[ovirt-users] Re: CVE-2018-3639 - Important - oVirt - Speculative Store Bypass

2018-05-23 Thread WK
2) Do the existing libvirt/qemu patches prevent a user "root" or "otherwise" in a VM from snooping on other VMs and/or the host? Sincerely, -wk

Re: [ovirt-users] oVirt 4.1.9 and Spectre-Meltdown checks

2018-01-26 Thread WK
Updated info: https://newsroom.intel.com/wp-content/uploads/sites/11/2018/01/microcode-update-guidance.pdf Looks like Intel is now committing to support Sandy/Ivy Bridge. No mention of Westmere or earlier as of yet  :-( On 1/26/2018 10:13 AM, WK wrote: That cpu  is X5690. That is Westmere

Re: [ovirt-users] oVirt 4.1.9 and Spectre-Meltdown checks

2018-01-26 Thread WK
(i.e. CPUs from the last 5 years) with a vague mention of other CPUs on a 'customer' need basis. Westmere is circa 2010 and came out before Sandy/Ivy Bridge, so we don't know when or if they will be fixed, but probably only after the Sandy/Ivy Bridges get theirs. -wk On 1/26/2018 1:50 AM

Re: [ovirt-users] Rebuilding my infra..

2018-01-08 Thread WK
core2duo with a 40-60 GB SSD for most cases. The formula is something like 4k of arbiter space for every "file". For 10-80 VM disk images that would be really minimal. -wk
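The 4k-per-file formula above can be turned into a quick back-of-the-envelope calculation. Note that with sharding enabled each VM image becomes many shard files, so the file count (and therefore arbiter space) grows accordingly; the image count, image size, and shard size below are assumptions for illustration:

```shell
# Rough arbiter sizing: ~4 KiB of arbiter space per file.
# With sharding, count shards rather than whole images.
vm_images=80          # example fleet size
image_size_gb=100     # example image size
shard_mb=64           # example shard size

shards_per_image=$(( image_size_gb * 1024 / shard_mb ))
total_files=$(( vm_images * shards_per_image ))
arbiter_kb=$(( total_files * 4 ))

echo "Approx arbiter space needed: $(( arbiter_kb / 1024 )) MiB"
```

Without sharding the same 80 images are only 80 files, i.e. a few hundred KiB of arbiter metadata, which is why the thread calls it "really minimal".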

Re: [ovirt-users] hyperconverged question

2017-09-01 Thread WK
One person has reported that his VMs went read-only during that period, but others have not reported that. -wk

Re: [ovirt-users] Ceph

2014-05-03 Thread WK
and ceph where appropriate. /2cents What are the oVirt situations where Gluster works better and, conversely, what are the uses where Ceph would work better? -wk