Has anyone looked at OpenNebula?
https://opennebula.io/
It seems to be libvirt-based.
-wk
On 2/5/22 5:03 AM, marcel d'heureuse wrote:
Hi,
We will take a look at Proxmox.
Hyper-V is also EOL if Server 2022 is the standard.
Best regards,
Marcel
On 5 February 2022 13:40:30 CET, Thomas wrote:
Would appreciate the URL.
-wk
On 12/8/2020 12:20 PM, Michael Watters wrote:
This was one of my fears regarding the IBM acquisition. I guess we
can't complain too much, it's not like anybody *pays* for CentOS. :)
Yes, but "we" do provide feedback and bug reports from a LOT of
different environments, which directly helps.
MB/s range
-wk
On 11/25/2020 2:29 AM, Harry O wrote:
Unfortunately I didn't get any improvement by upgrading the network.
Bare metal (zfs raid1 zvol):
dd if=/dev/zero of=/gluster_bricks/test1.img bs=1G count=1 oflag=dsync
1+0 records in
1+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, …
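For what it's worth, a single 1 GB dsync dd mostly measures one big
synchronous write. Something like fio gives a more repeatable picture;
a minimal sketch, assuming fio is installed (the path, sizes, and queue
depth below are examples only):

fio --name=seqwrite --filename=/gluster_bricks/fio-test.img \
    --rw=write --bs=1M --size=1G \
    --ioengine=libaio --direct=1 --iodepth=4 --numjobs=1
# clean up the test file afterwards
rm /gluster_bricks/fio-test.img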
See the RHEL 7 networking guide on network teaming:
https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/networking_guide/ch-configure_network_teaming
You *will* see an immediate improvement.
MTU 9000 (jumbo frames) can also help a bit.
Of course 10G or better networking would be optimal.
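On a RHEL/CentOS 7 host the guide above does this with nmcli; a minimal
sketch, assuming NetworkManager-managed NICs (em1/em2 and the peer IP
are placeholders):

nmcli con add type team con-name team0 ifname team0 \
    team.config '{"runner": {"name": "loadbalance"}}'
nmcli con add type team-slave con-name team0-port1 ifname em1 master team0
nmcli con add type team-slave con-name team0-port2 ifname em2 master team0
# Jumbo frames (non-persistent form shown; every switch port on the path
# must also allow MTU 9000):
ip link set dev team0 mtu 9000
# Verify jumbo frames end to end: 8972 = 9000 minus 28 bytes of headers
ping -M do -s 8972 <peer-ip>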
-wk
are you using JBOD bricks or do you have some sort of RAID for each of
the bricks?
Are you using sharding?
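For anyone checking their own volumes, the gluster CLI shows whether
sharding is on; a quick sketch, with <volname> as a placeholder:

gluster volume get <volname> features.shard
# Enabling it only shards files created afterwards, not existing images:
gluster volume set <volname> features.shard on
gluster volume set <volname> features.shard-block-size 64MB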
-wk
On 10/8/2020 6:11 AM, Jarosław Prokopowski wrote:
Hi Jayme, there is a UPS but the outages happened anyway. We also have a Raritan
KVM, but it is not supported by oVirt.
The setup is 6
Yes, we manage a number of Distributed Storage systems including
MooseFS, Ceph, DRBD and of course Gluster (since 3.3). Each has a
specific use.
For small customer-specific VM host clusters, which is the majority of
what we do, Gluster is by far the safest and easiest to
deploy/understand.
2) Do the existing libvirt/qemu patches prevent a user "root" or
"otherwise" in a VM from snooping on other VMs and/or the host?
Sincerely,
-wk
Updated info:
https://newsroom.intel.com/wp-content/uploads/sites/11/2018/01/microcode-update-guidance.pdf
Looks like Intel is now committing to support Sandy/Ivy Bridge.
No mention of Westmere or earlier as of yet :-(
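To see which microcode revision a host is actually running:

grep -m1 microcode /proc/cpuinfo
# the kernel also logs microcode loads at boot
dmesg | grep -i microcode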
On 1/26/2018 10:13 AM, WK wrote:
That CPU is an X5690, which is Westmere. Intel's guidance so far only
covers newer processors (i.e. CPUs from the last 5 years), with a vague
mention of other CPUs on a "customer need" basis.
Westmere is circa 2010 and came out before Sandy/Ivy Bridge so we don't
know when or if they will be fixed, but probably only after the
Sandy/Ivy Bridges get theirs.
-wk
On 1/26/2018 1:50 AM
An arbiter node can be as modest as a Core 2 Duo with a 40-60 GB SSD for most
cases. The formula is something like 4 KB of arbiter space for every
"file". For 10-80 VM disk images that would be really minimal.
-wk
One person has reported
that his VMs went read-only during that period, but others have not
reported that.
-wk
and Ceph where
appropriate.
/2cents
What are the oVirt situations where Gluster works better, and conversely,
what are the uses where Ceph would work better?
-wk