Hello,
we are subject to PCI-DSS and I have some questions. We currently have oVirt
set up in our environment, with two datacenters:
- one with a hosted-engine cluster on Gluster (hyperconverged), which
represents the "LAN" part
- one with a cluster on Gluster storage, which is the DMZ
Does anyone have ideas? At the very least, could you please share your opinion?
Regards,
Regards,
___
Hello,
I used the vm_backup script as provided here:
https://github.com/oVirt/ovirt-engine-sdk/blob/master/sdk/examples/vm_backup.py
I understand the process to back up a VM, but I'm stuck at getting the
logical_name of the disk once the snapshot disk is attached to a VM.
I checked the flow like this:
Even after a long wait, the logical_name is not populated in the engine DB,
and I have no idea how to get it populated.
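Here is a minimal sketch of what I'm doing with the Python SDK (ovirtsdk4);
the engine URL, credentials and IDs are placeholders, not our real values:

import time
import ovirtsdk4 as sdk

# Placeholder connection details.
connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='password',
    ca_file='ca.pem',
)

# Service for the disk attachments of the backup VM (placeholder ID).
attachments_service = (
    connection.system_service()
    .vms_service()
    .vm_service('BACKUP_VM_ID')
    .disk_attachments_service()
)

# Poll the attachments until the engine reports a logical_name for the
# attached snapshot disk (placeholder ID), or give up after 5 minutes.
logical_name = None
deadline = time.time() + 300
while logical_name is None and time.time() < deadline:
    for attachment in attachments_service.list():
        if attachment.disk.id == 'SNAPSHOT_DISK_ID' and attachment.logical_name:
            logical_name = attachment.logical_name
    time.sleep(10)

print('logical_name:', logical_name)
connection.close()

In my case the loop always times out: logical_name stays None for the
snapshot disk.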
___
I found an interesting article about how it works:
https://ovirt.org/develop/release-management/features/storage/reportguestdiskslogicaldevicename/
___
It seems that the logical_name is left unpopulated only for snapshot disk
attachments. When I attach a normal disk (i.e. a disk I created in oVirt) via
the web UI, the logical_name is populated. I can't really find out why, but it
seems related to snapshot disks.
___
> On Sun, May 27, 2018 at 5:33 AM, Punaatua PK wrote:
>
>
> If https is enabled, the webhook uses the https url to communicate. What
> does "gluster-eventsapi status" on any of the gluster nodes return?
>
[root@test ~]# gluster-eventsapi status
Webhooks:
http://engi
Hello,
we are in the same situation as you. We tried vProtect, but in the end we used
the script provided in the Python SDK examples:
https://github.com/oVirt/ovirt-engine-sdk/tree/master/sdk/examples
https://github.com/oVirt/ovirt-engine-sdk/blob/master/sdk/examples/vm_backup.py
We adapted it. It
Hello,
I have a problem when I try to hotplug a disk to a VM. Here is the situation.
We use a VM (let's call it backupVM) which is responsible for backing up our
VMs by:
- making a snapshot of the VM we want to back up
- attaching the snapshot disk to backupVM
- making the copy using dd (see the sketch below)
- unplugging the snapshot disk
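For the copy step, here is a minimal sketch of what backupVM runs; the device
path and backup target are placeholders, assuming the attached snapshot disk
shows up as /dev/vdb:

import subprocess

# Placeholders: the device is the logical_name the engine reports for
# the attached snapshot disk; the target is the backup file to write.
device = '/dev/vdb'
target = '/backup/vm01.raw'

# Stream the whole block device into the backup file; conv=sparse keeps
# the output sparse wherever the source reads as zeros.
subprocess.run(
    ['dd', 'if=' + device, 'of=' + target, 'bs=4M', 'conv=sparse'],
    check=True,
)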
Hello,
when I tried to attach a snapshot disk to my backupVM, oVirt refused to attach
it. In the engine.log I can see this:
2018-08-04 19:20:04,472-10 ERROR
[org.ovirt.engine.core.bll.storage.disk.AttachDiskToVmCommand] (default
task-33) [646ad7a1-4501-4648-a937-bbee0abaec46] Command
'
Hi Raz,
yes, I saw this bug, but I was focusing on the operation that leads to it,
which is not the same as mine. I'm not very familiar with Bugzilla.
Is there no solution for this bug at the moment?
___
Hello Raz,
yes, I saw this bug, but the problem doesn't seem to be the same. I searched
Google for "qemu + write lock" but didn't find anything.
Do you know which command is launched by this operation? qemu-img?
___
Hello Raz,
after looking into the RHEV documentation, I found the "Engine Vacuum Tool".
It "maintains PostgreSQL databases by updating tables and removing dead rows,
allowing disk space to be reused".
Maybe it will delete the old entries?
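If I read the documentation right, a full vacuum would be run on the engine
machine like this (untested on our setup, so take it as a sketch):

[root@engine ~]# engine-vacuum -f

A full vacuum takes exclusive locks on the tables it rewrites, so it is best
run in a maintenance window.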
___
Hello,
we have the same problem. It seems that the ovirt-imageio-proxy is the
bottleneck in our setup. We use vProtect to back up VMs via the API.
We asked the vProtect team for support, and we are currently tuning some
kernel parameters.
We have a full 10G network in our setup.
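For reference, this is the kind of TCP buffer tuning we are experimenting with
via sysctl; the values are illustrative, not a recommendation:

net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216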
Here are the package versions we run:
ovirt-engine-4.2.8.2-1.el7.noarch
ovirt-imageio-proxy-1.4.6-1.el7.noarch
ovirt-imageio-daemon-1.4.6-1.el7.noarch
___
Hello,
we have the following setup:
=> 1 datacenter on site A with 1 cluster composed of 3 hosts and a
self-hosted engine
=> 1 datacenter on site B (let's call it TATA) with 1 cluster composed of 3
hosts using GlusterFS. This datacenter is managed by the self-hosted
engine on s
Hello Simone,
first of all, thank you for your answer.
OK for the storage domain; I have already done this before.
The storage domain is a GlusterFS domain, and the GlusterFS volume is composed
of 3 hosts which are managed from the 1st engine.
I wanted to detach the hosts, VMs, and storage domain fro
Hello,
we currently have a self-hosted engine on Gluster with 3 hosts. We want to
move the engine to a single machine on a standalone KVM host.
We did the following steps on our test platform:
- Create a VM on a standalone KVM host
- Put the self-hosted engine into global maintenance
- Shut the self-hoste
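For the engine data itself, a minimal sketch of the backup step, assuming the
standard engine-backup flow and placeholder file names:

[root@engine ~]# engine-backup --mode=backup --file=engine.backup --log=engine-backup.log

The resulting file is then restored on the new engine VM with
engine-backup --mode=restore.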