[ovirt-users] Re: VMs stuck at Migrating status in Dashboard- Trying to cancel migration will yield "cannot cancel migration for non-migrating vm"

2019-09-15 Thread paul.christian.suba
Hi,

The issue started before the host was intentionally rebooted,
but I did mark it as rebooted after I restarted the host.

I observed that the VM status shown in the Dashboard GUI did not reflect the
actual status of the VMs as seen from the host CLI.

It seems that restarting the ovirt-engine service in the hosted engine (HE) solved the problem.

When I place any host in maintenance mode, I no longer see the error below, and
VMs now migrate without problems.
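
For reference, this was a plain service restart; a sketch, assuming the usual
hosted-engine flow of enabling global maintenance first so the HA agent does
not react to the engine briefly going down:

hosted-engine --set-maintenance --mode=global   # on a host
systemctl restart ovirt-engine                  # inside the engine VM
hosted-engine --set-maintenance --mode=none     # on a host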

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/KMYWNVMELUJRKBX7BAYQKPPSTZHA6NOT/


[ovirt-users] Find MAC for VM

2019-09-15 Thread Colin Coe
Hi all

I've got the following Python snippet which works fine under 4.3 but I need
it to work under 4.1 as well.
---
nics_service = vms_service.vm_service(vm.id).nics_service()
nic = nics_service.list(search='name=nic1')[0]
---

Under 4.1, it barfs on "nics_service.list(search='name=nic1')[0]".

What do I need to do to make it work under 4.1?
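
A workaround I'm considering (untested, and assuming the problem is that the
4.1 SDK simply doesn't support server-side search on this sub-collection) is
to list all NICs and filter client-side:
---
nics_service = vms_service.vm_service(vm.id).nics_service()
# List all NICs and match by name on the client; nic is None if no
# NIC named 'nic1' exists.
nic = next((n for n in nics_service.list() if n.name == 'nic1'), None)
---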

Thanks
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/AIULSYOC4SNJKS7CFJAFQIFSLVFICICQ/


[ovirt-users] Re: ovirt-imagio-proxy upload speed slow

2019-09-15 Thread Nir Soffer
On Sun, Sep 15, 2019 at 8:37 PM Mikael Öhman  wrote:

> > What do you mean by "small block sizes"?
>
> inside a VM, or directly on the mounted glusterfs;
> dd if=/dev/zero of=tmpfile bs=1M count=100 oflag=direct
> of course, a terrible way to write data, but also things like compiling
> software inside one of the VMs was terribly slow, 5-10x slower than bare
> hardware, with the machine almost entirely idle.
>
> Uploading disk images never got above 30MB/s.
> (and I did try all the options I could find: using upload_disk.py on one of
> the hosts, even through a unix socket or with the -d option, tweaking the
> buffer size, all of which made no difference).
> Adding an NFS volume and uploading to it, I reached over 200MB/s.
>
> I tried tuning a few parameters on glusterfs but saw no improvements until
> I got to network.remote-dio, which made everything listed above really fast.
>
> > Note that network.remote-dio is not the recommended configuration
> > for ovirt, in particular on a hyperconverged setup, where it can be
> > harmful by delaying sanlock I/O.
> >
> >
> > https://github.com/oVirt/ovirt-site/blob/4a9b28aac48870343c5ea4d1e83a63c10a1cbfa2/source/documentation/admin-guide/chap-Working_with_Gluster_Storage.md#options-set-on-gluster-storage-volumes-to-store-virtual-machine-images
> > (Patch in discussion)
>
> Oh, I had seen this page, thanks. Is "remote-dio=enabled" harmful as in
> things breaking, or just worse performance?
>

I think the issue is that writes to storage are delayed until you call fsync()
on the node. The kernel may then try to flush too much data at once, which may
cause delays in other processes.

Many years ago we had such issues causing sanlock failures to renew leases,
which can end in vdsm being terminated. This is the reason we always use
direct I/O when copying disks. If you have a hyperconverged setup and use
network.remote-dio, you may hit the same problems.

Sahina worked on a bug that was related to network.remote-dio; I hope she can
add more details.


> I was a bit reluctant to turn it on, but after seeing it was part of the
> virt group I thought it must be safe.
>

I think it should be safe for general virt usage, but you may need to tweak
some host settings to avoid large delays when using fsync().
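
One example of such a knob (my assumption, not something verified in this
thread) is lowering the kernel's dirty-page thresholds, so writeback starts
earlier and a later fsync() has less data to flush at once; the values here
are illustrative only:

sysctl -w vm.dirty_background_bytes=67108864   # start background writeback at 64 MiB
sysctl -w vm.dirty_bytes=268435456             # throttle writers at 256 MiB dirty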


> Perhaps some of the other options like "performance.strict-o-direct on"
> would solve my performance issues in a nicer way (I will test it out first
> thing on Monday)
>

This is not likely to improve performance, but it seems to be required if you
use network.remote-dio = off. Without it, direct I/O does not behave in a
predictable way and may cause failures in qemu, vdsm and ovirt-imageio.
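
A minimal sketch of that combination (the volume name is a placeholder; see
the linked page for the full set of options the virt group applies):

gluster volume set <volname> network.remote-dio off
gluster volume set <volname> performance.strict-o-direct on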

Nir
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/TXULQKGQ6CZDUW4D3XBTLKTS4I3TNSC4/


[ovirt-users] Re: ovirt-imagio-proxy upload speed slow

2019-09-15 Thread Mikael Öhman
> What do you mean by "small block sizes"?

inside a VM, or directly on the mounted glusterfs;
dd if=/dev/zero of=tmpfile bs=1M count=100 oflag=direct
of course, a terrible way to write data, but also things like compiling
software inside one of the VMs was terribly slow, 5-10x slower than bare
hardware, with the machine almost entirely idle.

Uploading disk images never got above 30MB/s.
(and I did try all the options I could find: using upload_disk.py on one of the
hosts, even through a unix socket or with the -d option, tweaking the buffer
size, all of which made no difference).
Adding an NFS volume and uploading to it, I reached over 200MB/s.

I tried tuning a few parameters on glusterfs but saw no improvements until I 
got to network.remote-dio, which made everything listed above really fast.

> Note that network.remote-dio is not the recommended configuration
> for ovirt, in particular on a hyperconverged setup, where it can be harmful
> by delaying sanlock I/O.
> 
> https://github.com/oVirt/ovirt-site/blob/4a9b28aac48870343c5ea4d1e83a63c10a1cbfa2/source/documentation/admin-guide/chap-Working_with_Gluster_Storage.md#options-set-on-gluster-storage-volumes-to-store-virtual-machine-images
> (Patch in discussion)

Oh, I had seen this page, thanks. Is "remote-dio=enabled" harmful as in things
breaking, or just worse performance?
I was a bit reluctant to turn it on, but after seeing it was part of the virt
group I thought it must be safe.
Perhaps some of the other options like "performance.strict-o-direct on" would
solve my performance issues in a nicer way (I will test it out first thing on
Monday)

Thanks (I'm not much of a filesystem guy, thanks for putting up with my 
ignorance)
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/WFKVVKHYSLARJWED7KONQSN4RIG5FXSM/


[ovirt-users] Re: ovirt-imagio-proxy upload speed slow

2019-09-15 Thread Nir Soffer
On Fri, Sep 13, 2019 at 3:54 PM Mikael Öhman  wrote:

> Perhaps future sysadmins will learn from my mistake.
> I also saw very poor upload speeds (~30MB/s) no matter what I tried. I
> went through the whole route with unix sockets and whatnot.
>
> But, in the end, it just turned out that the glusterfs itself was the
> bottleneck; abysmal performance for small block sizes.
>

What do you mean by "small block sizes"?

> I found the list of performance tweaks that RHEL suggests. In
> particular, it was the "network.remote-dio=on" setting that made all the
> difference. Almost 10x faster.
>

Note that network.remote-dio is not the recommended configuration
for ovirt, in particular on a hyperconverged setup, where it can be harmful
by delaying sanlock I/O.

https://github.com/oVirt/ovirt-site/blob/4a9b28aac48870343c5ea4d1e83a63c10a1cbfa2/source/documentation/admin-guide/chap-Working_with_Gluster_Storage.md#options-set-on-gluster-storage-volumes-to-store-virtual-machine-images
(Patch in discussion)

Nir
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/6RIRLB6KIDEBYSIJCRJQFZ4RVHOHQAC7/


[ovirt-users] Re: CentOS 6 in oVirt

2019-09-15 Thread Nir Soffer
On Wed, Sep 11, 2019, 10:07 Dominik Holler  wrote:

> Hello,
> are the official CentOS 6 cloud images known to work in oVirt?
> I tried, but they seem not to work in oVirt [1], but on plain libvirt.
> Are there any pitfalls known during using them in oVirt?
>

Why do you want to use prehistoric images?

> Thanks
> Dominik
>
>
> [1]
>   https://gerrit.ovirt.org/#/c/101117/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/YGC3UXJMWGJZAJ3RVJ3PTL3MGKMLHDGD/


[ovirt-users] Re: Managed Block Storage/Ceph: Experiences from Catastrophic Hardware failure

2019-09-15 Thread Benny Zlotnik
>* Would ovirt have been able to deal with clearing the rbd locks, or
did I miss a trick somewhere to resolve this situation without manually
going through each device and clearing the lock?

Unfortunately there is no trick on ovirt's side.

>* Might it be possible for ovirt to detect when the rbd images are
locked for writing and prevent launching?

Since rbd paths are provided via cinderlib, a higher-level interface, ovirt
does not have knowledge of implementation details like this.

On Thu, Sep 12, 2019 at 11:27 PM Dan Poltawski 
wrote:

> Yesterday we had a catastrophic hardware failure with one of our nodes
> using ceph and the experimental cinderlib integration.
>
> Unfortunately the ovirt cluster did not recover from the situation well, and
> it took some manual intervention to resolve. I thought I'd share what happened
> and how we resolved it in case there is any best practice to share, or bugs
> worth creating, to help others in a similar situation. We are early in our use
> of ovirt, so it's quite possible we have things incorrectly configured.
>
> Our setup: We have two nodes, hosted engine on iSCSI, about 40vms all
> using managed block storage mounting the rbd volumes directly. I hadn't
> configured power management (perhaps this is the fundamental problem).
>
> Yesterday a hardware fault caused one of the nodes to crash and stay
> down awaiting user input in POST screens, taking 20 vms with it.
>
> The hosted engine was fortunately on the 'good' node and detected that
> the node had become unresponsive, but noted 'Host cannot be fenced
> automatically because power management for the host is disabled.'.
>
> At this point, knowing that one node was dead, I wanted to bring up the
> failed vms on the good node. However, the vms were appearing in an
> unknown state and I couldn't do any operations on them. It wasn't clear
> to me what the best course of action would be. I am not sure if there is
> a way to mark the node as failed?
>
> In my urgency to try and resolve the situation I managed to get the
> failed node started back up; shortly after it came up, the
> engine detected that all the vms were down. I put the failed host into
> maintenance mode and tried to start the failed vms.
>
> Unfortunately the failed vms did not start up cleanly - it turned out
> that they still had rbd locks preventing writing from the failed node.
>
> To finally get the vms to start, I then manually went through every
> vm's managed block, found the id, found the lock, and removed it:
> rbd lock list rbd/volume-{id}
> rbd lock remove rbd/volume-{id} 'auto {lockid}' {lockername}
>
> Some overall thoughts I had:
> * I'm not sure what the best course of action is to notify the engine
> about a catastrophic hardware failure? If power management was
> configured, I suppose it would've removed the power and marked them all
> down?
>
> * Would ovirt have been able to deal with clearing the rbd locks, or
> did I miss a trick somewhere to resolve this situation without manually
> going through each device and clearing the lock?
>
> * Might it be possible for ovirt to detect when the rbd images are
> locked for writing and prevent launching?
>
> regards,
>
> Dan
>
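
To spell out the manual workaround above for the archives (a rough sketch;
the volume id is a placeholder, and the lock id and locker name come from the
'rbd lock list' output):

rbd lock list rbd/volume-<volume-id>                              # prints each lock's id and locker (e.g. client.4157)
rbd lock remove rbd/volume-<volume-id> 'auto <lock-id>' <locker>  # removes the stale lock held by the failed node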
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/FJCBWWTWYTHHID3KYL67KEK63H6F2HWT/


[ovirt-users] Re: How do I make delta clones ? OVM (aka Xen) vs Ovirt

2019-09-15 Thread Yedidyah Bar David
On Tue, Sep 10, 2019 at 2:18 AM Tim Tuck  wrote:
>
> Hi all,
>
> I'm new to Ovirt, coming from an OVM (aka Xen) environment, and I'm
> trying to work out how to do similar things.
>
> In my OVM environment I have VMs that are 100GB in size. These VMs are
> templates that are used for running customer workshops so we create and
> destroy them quite often.
>
> In OVM I can clone 10 x 100GB VMs in about 1 minute; the cloning
> process just creates a "delta" VM rather than copying the entire 100GB.
> Those VMs can then be run, and the on-disk size of the delta VM grows as
> it runs and deviates from the master.
>
> The same goes for templates, creating a 100GB VM from a template takes
> only a few seconds. Creating a template from a VM only takes a few seconds.
>
> My problem is that I can't seem to do any of this using Ovirt.
>
> It appears that there is no way to tell the cloning process "I want 10"
> like I can in OVM - am I missing something here?

That's definitely not my expertise, but I think most people in your
position use "VM pools". Did you look into that?
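
(An untested sketch with the Python SDK, just to show the shape of it; the
connection, cluster name and template name are placeholders. A pool stamps
out N VMs from a template in one call:)
---
import ovirtsdk4.types as types

# Create a pool of 10 VMs based on an existing template.
pools_service = connection.system_service().vm_pools_service()
pools_service.add(
    types.VmPool(
        name='workshop',
        cluster=types.Cluster(name='Default'),
        template=types.Template(name='workshop-template'),
        size=10,  # number of VMs in the pool
    ),
)
---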

>
> In my tests I find that even if a "snapshot" is made, the entire VM is
> copied rather than creating a delta.
>
> Creating a VM from a template is no different in that it copies the
> entire VM.

Are you sure? Does this include the VM's disk(s)? IIRC from the few
times I tried that, the cloned VM uses the same disk, perhaps using a
snapshot of it or something like that - as I said, that's not my
expertise. This might depend, though, on storage type, options, etc.
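
(If it helps, a hedged sketch of where that choice shows up in the Python SDK:
when adding a VM from a template, the 'clone' flag controls whether the disks
are fully copied or created as a thin copy that depends on the template; the
names here are placeholders:)
---
# Create a VM from a template without copying the template's disks.
vms_service = connection.system_service().vms_service()
vms_service.add(
    types.Vm(
        name='workshop-01',
        cluster=types.Cluster(name='Default'),
        template=types.Template(name='workshop-template'),
    ),
    clone=False,  # thin, copy-on-write disks; clone=True forces a full copy
)
---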

>
> So...
>
> Is it possible for Ovirt to create multiple clones on request ?
>
> Can Ovirt do "delta" copies rather than copy the entire VM ?
>
> If Ovirt can do these things, how do I set it up to to do that ?
>
> Any help appreciated.
>
> thanks
>
> Tim
>
>



-- 
Didi
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/LNQG5ZD7U526JL3ZUPC2TNW5AY4XG4OO/