Christopher Law writes:
> Can anyone explain why I can't export a Virtual Machine with a TPM? Is
> this something to do with the TPM data? I'm exporting the VM to some
> cold storage. What's the deal here? How can I export it and keep the
> TPM data, or do I have to disable the TPM on it first?
>
> oVirt E
Milan, thanks for the info; I suspected as much.
Yes, it's a Windows 11 VM, so the TPM is locked in the on position while the
Windows 11 OS is selected.
I'll keep an eye on the bug and see if I can work around it for now.
I suspect it will be a problem if the TPM is used for something like BitLocker,
potential
Hi Aviv
We are still observing this issue. The dwh database is growing very rapidly. So
far we have been unable to find what is causing so much growth. These are the
top tables consuming disk space.
I added an entry in the root crontab to vacuum the database, but it did not work.
public.host_interface_samples_his
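For what it's worth, a plain VACUUM only marks dead rows for reuse; it does not shrink the files on disk (VACUUM FULL does, but it takes an exclusive lock on the table), so a cron vacuum alone may not reduce the reported size. As a starting point, a query along these lines could confirm which tables are actually consuming the space; this is just a sketch, run inside the DWH database (the default name ovirt_engine_history is an assumption about your setup):

```sql
-- Run inside the DWH database (default name: ovirt_engine_history).
-- Lists the ten largest tables by total on-disk size, indexes included.
SELECT relname,
       pg_size_pretty(pg_total_relation_size(relid)) AS total_size
FROM pg_catalog.pg_statio_user_tables
ORDER BY pg_total_relation_size(relid) DESC
LIMIT 10;
```

If the *_samples_* tables dominate, it may also be worth checking the DWH sampling/retention settings before resorting to VACUUM FULL.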
Good morning all,
Thank you in advance.
My current environment is oVirt 4.5.2.4-1.el8, running on CentOS 7.9.
I'm looking for advice on resolving two issues that have just come to light:
*Issue #1*: When I attempt to create a template from the oVirt GUI, I click
the "Make Template" button, ente
Unfortunately, I could not find anything else that would indicate why the
host<->hosted-engine network is broken on oVirt 4.5.
I did attempt to get a reference installation (the oVirt Node
installation,
https://resources.ovirt.org/pub/ovirt-4.5/iso/ovirt-node-ng-installer/ovirt-node-ng-inst
Good evening everyone.
Guys, I have two machines that use oVirt. I managed to set up CephFS as a
storage domain and everything is working perfectly, but each of these two
machines has 4 NVMe drives, and I would like to know if there is any
possibility or any way to use their local storage? I don't care
Hello,
I was asked the other day what file system oVirt uses to store virtual machine
images and how it differed from VMware VMFS. I had to admit that I didn't quite
understand this aspect of oVirt. I've had a look around but haven't found an
answer. I was wondering if someone could explain at
Hello Albert,
thanks, and sorry for the late response.
On Sept. 26, vdsmd went down on the host again. (I didn't monitor it properly,
so only now did I realize it was down. :)
This time I could not find any segvs in the logs.
When did it start?
We used 4.3 until now, and then upgraded first to 4.5.1
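When vdsmd dies without any obvious segv in the logs, one cheap thing to do is scan the vdsm log in bulk for crash-like lines. Below is a minimal sketch of that kind of filter; the sample log lines and the pattern list are purely illustrative (not taken from your host), and on a real host you would read /var/log/vdsm/vdsm.log instead of the embedded string:

```python
import re

# Hypothetical sample lines standing in for /var/log/vdsm/vdsm.log content;
# on a real host, read the actual log file instead.
SAMPLE_LOG = """\
2023-09-26 03:14:01 INFO  (periodic) [vdsm.api] ping
2023-09-26 03:15:22 ERROR (MainThread) [root] Traceback (most recent call last):
2023-09-26 03:15:22 ERROR (MainThread) [root]   OSError: [Errno 32] Broken pipe
2023-09-26 03:16:40 INFO  (jsonrpc/3) [api.host] getStats succeeded
"""

# Patterns that usually indicate a crash rather than a routine error;
# adjust for whatever your logs actually contain.
CRASH_PATTERNS = re.compile(r"Traceback|segfault|SIGSEGV|core dumped",
                            re.IGNORECASE)

def find_crash_lines(text):
    """Return the log lines that match any crash-like pattern."""
    return [line for line in text.splitlines() if CRASH_PATTERNS.search(line)]

if __name__ == "__main__":
    for line in find_crash_lines(SAMPLE_LOG):
        print(line)
```

Correlating the timestamps of any hits with when systemd reports the service stopped usually narrows down whether it crashed or was killed.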
Good evening everyone.
I'm implementing oVirt in my infrastructure using Ceph as storage (I'm using 10
Gigabit interfaces). I managed to bring up the storage correctly, reaching rates
of 1.2 gigabytes per second via NFS. When I went to deploy the hosted-engine via
NFS, everything went fine, but wh