Re: [ovirt-users] ova conversion errors: where to search?

2017-02-18 Thread Shahar Havivi
also make sure that the ova file is owned by vdsm:kvm (36:36)
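
Something like this on the host should do it (assuming the OVA was copied to
/var/tmp/myvm.ova; that path is just an example, adjust it to yours):

  # make the OVA file readable by the vdsm user (uid 36) and kvm group (gid 36)
  chown 36:36 /var/tmp/myvm.ova
  chmod 0640 /var/tmp/myvm.ova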

On Sat, Feb 18, 2017 at 1:42 AM, Tomáš Golembiovský wrote:
> Hi,
> On Fri, 17 Feb 2017 17:54:28 +0100
> Gianluca Cecchi  wrote:
> > Hello,
> > testing some VMware ova imports.
> > In a test I'm doing, the conversion fails; in engine.log I see:
> >
> > 2017-02-17 17:39:02,992+01 INFO
> >  [org.ovirt.engine.core.bll.exportimport.ConvertVmCallback]
> > (DefaultQuartzScheduler4) [59adde47] Conversion of VM from external
> > environment failed: Job u'cdb31877-3ebb-400d-88da-1f645b1261ae' process
> > failed exit-code: 1
> >
> > Where can I find more details about the reason for the failure? Other files
> > on the engine, or should I look on the host side?
> There are two places you should look. One is vdsm.log on the host performing
> the import. If you have oVirt 4.1, another place to look is
> /var/log/vdsm/import, where the import logs are stored.
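> For example, on the host that ran the import (the file names below are only
> illustrative; the actual log names include the job/VM id and a timestamp):
>
>   ls -lt /var/log/vdsm/import/          # per-job conversion logs (oVirt 4.1+)
>   less /var/log/vdsm/import/<newest log file>
>   grep -i error /var/log/vdsm/vdsm.log  # general vdsm log on the same host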
> > If I have to import a Windows VM OVA, which ISO should I attach with
> > the "Attach VirtIO-Drivers" checkbox? Is there an "official" ISO? I
> > downloaded oVirt-toolsSetup-4.1-3.fc24.iso but I don't know whether it is
> > the right ISO for this.
> Install the virtio-win package somewhere. Inside the /usr/share/virtio-win
> directory you will find the virtio-win-*.iso file.
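> E.g. on a Fedora/RHEL box where the virtio-win package is available (just a
> sketch):
>
>   yum install virtio-win
>   ls /usr/share/virtio-win/virtio-win*.iso
>
> Then upload that ISO to your ISO domain and select it next to the
> "Attach VirtIO-Drivers" checkbox.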
> > Thanks,
> > Gianluca
> --
> Tomáš Golembiovský 

Re: [ovirt-users] Best setup for nfs domain and 1gbs adapters on hosts

2017-02-18 Thread Karli Sjöberg

On 18 Feb. 2017 8:56 AM, Gianluca Cecchi wrote:
> On Feb 17, 2017 7:22 PM, "Karli Sjöberg"  wrote:
>> On 17 Feb. 2017 6:30 PM, Gianluca Cecchi wrote:
>>> Hello,
>>> I'm going to set up an environment with 2 hosts, each with 2 adapters 
>>> connecting to the storage domain(s). This will be a test environment, 
>>> not a production one.
>>> The storage domain(s) will be NFS, provided by a Netapp system.
>>> The hosts have 4 x 1Gb/s adapters; I plan to use 2 for ovirtmgmt and 
>>> VMs (through bonding and VLANs) and to dedicate the other 2 adapters to 
>>> NFS domain connectivity.
>>> What would be the best setup to get both HA on the connection and also 
>>> use the full 2Gb/s under normal load?
>>> Is it better to create multiple storage domains (and multiple SVMs on the 
>>> NetApp side) or only one?
>>> What would be a suitable bonding mode for the adapters? I normally use 
>>> 802.3ad provided by the switches, but I'm not sure whether in this 
>>> configuration I can use both network adapters for the overall load of the 
>>> different VMs that I would have in place...
>>> Thanks in advance for every suggestion,
>>> Gianluca
>> Hey G!
>> If it were me doing this, I would make one 4x1Gb/s 802.3ad bond on the filer 
>> and the hosts, to KISS. Then, if bandwidth is a concern, I would set up two 
>> VLANs for the storage interfaces, with addresses on separate subnets (one 
>> address per subnet on the filer; 10.0.0.(2,3) and 10.0.1.(2,3) on the hosts), 
>> and then on the filer set up only two NFS exports across which you provision 
>> your VMs as evenly as possible. This way the network load spreads evenly over 
>> all interfaces, for the simplest config and best fault tolerance, while 
>> keeping storage traffic at a max of 2Gb/s. You only need one SVM with several 
>> addresses to achieve this. We have our VMware environment set up similarly 
>> towards our NetApp, and we also have our oVirt environment set up like this, 
>> but towards a different NFS storage, with great success.
>> /K
> Thanks for your answer, K!
> So you mean to make a single bond out of all 4 network adapters and put all 
> the networks on it, including ovirtmgmt and such, through VLANs?
> How do you configure 802.3ad on 4 adapters? How many switches do you have to 
> connect to from these 4 adapters? Or do you use round-robin bonding (but I 
> presume that bond mode is not supported in oVirt)?
> Thanks!

Well, in our case we have two clustered switches from C-company, so two NICs 
go to each. And then, yeah, different VLANs for every network on top of the 
same bond. Works like a charm :)
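
In oVirt you would normally build the bond and the VLANs through the "Setup 
Host Networks" dialog rather than by hand; the sketch below is only meant to 
show roughly what ends up on the host (interface names, VLAN IDs and addresses 
are made up, adjust to your environment):

  # /etc/sysconfig/network-scripts/ifcfg-bond0
  DEVICE=bond0
  BONDING_OPTS="mode=802.3ad miimon=100 xmit_hash_policy=layer3+4"
  ONBOOT=yes

  # one such file per physical NIC (em1..em4)
  DEVICE=em1
  MASTER=bond0
  SLAVE=yes
  ONBOOT=yes

  # storage VLAN 100, first subnet (ifcfg-bond0.100)
  DEVICE=bond0.100
  VLAN=yes
  IPADDR=10.0.0.2
  PREFIX=24
  ONBOOT=yes

  # storage VLAN 101, second subnet (ifcfg-bond0.101)
  DEVICE=bond0.101
  VLAN=yes
  IPADDR=10.0.1.2
  PREFIX=24
  ONBOOT=yes

  # verify that LACP negotiated over all four links
  cat /proc/net/bonding/bond0

With that in place, the two NFS storage domains are mounted from the filer's 
two addresses, one per subnet/VLAN, so both storage paths get used.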
