Due to the urgency of the case, I fetched the backup copy from the weekend and
proceeded to push the missing data to the VM (the VM is a git repo). I lost a
few notes, though not much damage was done...
I'm starting to feel uncomfortable with this solution though and might
switch (at least the production VMs) to
Hi Alex,
We had a bigger problem recently which involved the error you mention. I
sent it to the mailing list and you can find the final solution we chose at
[1]. Not the cleanest solution, of course, but we managed to recover all
VMs... I think in your case the relevant part is the one that
Is glusterd running on the server: goku.sanren.**
There's an error:
Failed to get volume info: Command execution failed
error: Connection failed. Please check if gluster daemon is operational
Please check the volume status using "gluster volume status engine"
and if all looks ok, attach the mount
Hi,
We're using ovirt-engine-sdk-python 4.1.6 on oVirt 4.1.9, currently
we're trying to delete some snapshots via a script like this:
sys_serv = conn.system_service()
vms_service = sys_serv.vms_service()
vm_service = vms_service.vm_service(vmid)
snaps_service =
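The snippet is cut off above; in ovirt-engine-sdk-python 4.x the chain would typically continue with `vm_service.snapshots_service()`. Below is a sketch of a deletion loop that might follow. It is duck-typed, assuming only the SDK's documented methods (`list()`, `snapshot_service(id)`, `remove()`); the policy of skipping the "Active VM" entry (the active layer, which cannot be removed) is the usual one, but everything else here is illustrative:

```python
def delete_snapshots(snaps_service, skip_description="Active VM"):
    """Remove every snapshot except the active layer; return removed ids.

    `snaps_service` is expected to behave like the object returned by
    vm_service.snapshots_service() in ovirt-engine-sdk-python 4.x.
    """
    removed = []
    for snap in snaps_service.list():
        if snap.description == skip_description:
            continue  # the "Active VM" entry is the live layer, not a snapshot
        snaps_service.snapshot_service(snap.id).remove()
        removed.append(snap.id)
    return removed
```

In a real script you would call this between `conn = sdk.Connection(...)` and `conn.close()`, and probably wait for each removal job to finish before issuing the next one.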
Better late than never - thank you all for the input, it was very useful!
With the ability to measure the collapsed form of a volume chain in
the qcow2 format, we managed to simplify and improve the process of
creating an OVA significantly. What we do now is:
1. Measure the collapsed qcow2 volumes
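The measurement in step 1 is what `qemu-img measure` provides: with `--output json` it reports a `required` size (bytes needed for the converted image) and a `fully-allocated` upper bound. A small sketch of parsing that output to size the disk's slot in the OVA (the sample numbers in the usage below are invented):

```python
import json

def collapsed_size(measure_output: str) -> int:
    """Parse the JSON emitted by
    `qemu-img measure -O qcow2 --output json <top-volume>`
    and return the byte count needed for the collapsed qcow2 image."""
    data = json.loads(measure_output)
    # "required" is the size needed for the converted image;
    # "fully-allocated" is the worst case if every cluster gets written.
    return data["required"]
```

Usage: `collapsed_size('{"required": 1073741824, "fully-allocated": 1074135040}')` returns the `required` value, which is what you would reserve in the tarball before converting into it.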
On Wed, Jul 11, 2018 at 11:30:19AM +0300, Arik Hadas wrote:
> 4. Mount each reserved place for a disk as a loopback device and convert
> the volume-chain directly to it [1]
The nbdkit tar plugin can overwrite a single file inside a tarball, entirely
in userspace and without root.
Hi Sahina,
Yes, the glusterd daemon was not running. I have started it and am now able
to add a glusterfs storage domain. Thank you so much for your help.
Oops! I allocated 50GiB for this storage domain, but it requires 60GiB.
On Wed, Jul 11, 2018 at 11:47 AM, Sahina Bose wrote:
> Is glusterd
On Wed, Jul 11, 2018 at 9:33 AM, Sakhi Hadebe wrote:
> Hi,
>
> Below are the versions of packages installed. Please find the logs
> attached.
> Qemu:
> ipxe-roms-qemu-20170123-1.git4e85b27.el7_4.1.noarch
> libvirt-daemon-driver-qemu-3.9.0-14.el7_5.6.x86_64
>
Hi,
I got hold of a 10ZiG V1200-P for testing VDI under oVirt/RHEV, as I'd read that
they support SPICE. However, I haven't found how to connect it successfully to
a VM as yet. Has anyone had any experience with these using oVirt/RHEV? I
haven't been able to find any manuals or support info on
Thank you all for your help, I have managed to deploy the engine
successfully. It was quite a lesson.
On Wed, Jul 11, 2018 at 11:55 AM, Sakhi Hadebe wrote:
> Hi Sahina,
>
> Yes, the glusterd daemon was not running. I have started it and am now able to
> add a glusterfs storage domain. Thank you
On Thu, Jul 5, 2018 at 9:19 PM, Niyazi Elvan wrote:
> Hi All,
>
> I have started testing oVirt 4.2 and focused on OVN recently. I was
> wondering whether there is a plan to manage L2->L7 ACLs through the oVirt web
> UI.
> If not, how could ACLs be managed other than with command-line tools?
>
Hello,
I'm trying to mount an NFS data share from an EMC VNXe3200 export.
Unfortunately, the NFS share cannot be mounted; there is a permission error.
(Indeed, after the same issue on another NFS ISO repo, I changed the exported
directory's user/group to 36:36, and the oVirt NFS mount worked fine.)
So, I think
Never mind, it looks like these don't support the SPICE protocol. Will try an
IGEL thin client.
Cam
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
Hi,
did you share it correctly to **all** your oVirt hosts, with FQDNs? Rock-solid,
stable DNS?
Could you mount it on these hosts manually as root? It needs to be exported
with no_root_squash.
On the host side: mount it, cd into it, chown 36:36 . , then umount it.
Try with oVirt again.
- or better, use iSCSI
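For reference, the advice above translates to an export line roughly like the following (a sketch only: the export path and host names are placeholders, and the one-time chown 36:36 is still done manually on a host as described):

```
# /etc/exports on the NFS server
/export/data  host1.example.com(rw,sync,no_root_squash)
/export/data  host2.example.com(rw,sync,no_root_squash)
```

After editing, `exportfs -ra` re-reads the file; without no_root_squash, root on the host is mapped to an anonymous uid and the chown will fail.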
On Jul 11, 2018 17:36, jeanbaptiste coupiac wrote: Hello, I try to mount an NFS Data share from a EMC VNXe3200 export...
Hi all,
I have a VM stuck in the state "Migrating to". I restarted ovirt-engine and
rebooted all hosts, with no success. I'm running oVirt 4.2.4.5-1.el7 on CentOS
7.5 hosts with vdsm-4.20.32-1.el7.x86_64. How can I clean this up?
Thank you and all the best,
Simone
Hi all, I'm trying to deploy oVirt to manage multiple 'data centers' (really
just very basic deployments of single hosts using local storage, deployed in
varying geographic locations), where each host/resource pool is
(unfortunately) only accessible via NAT.
I've set up port-forwards
We've been using IGEL thin clients for years in our RHEV environment. The
Windows-based ones suck and don't have a SPICE client pre-installed, but the
Linux ones work really well.
CC
On Thu, Jul 12, 2018 at 1:43 AM wrote:
> Never mind, it looks like these don't support the SPICE protocol. Will try an