On 2012-7-22 19:51, Ayal Baron wrote:
----- Original Message -----
On 07/20/2012 09:19 PM, Trey Dockendorf wrote:
On Fri, Jul 20, 2012 at 11:32 AM, Itamar Heim <[email protected]> wrote:
On 07/20/2012 07:21 PM, Trey Dockendorf wrote:
On Fri, Jul 20, 2012 at 3:52 AM, Itamar Heim <[email protected]> wrote:
On 07/20/2012 02:08 AM, Trey Dockendorf wrote:
On Thu, Jul 19, 2012 at 4:00 AM, Matthew Booth <[email protected]> wrote:
On 18/07/12 23:52, Itamar Heim wrote:
On 07/18/2012 06:00 PM, Trey Dockendorf wrote:
I'm attempting to fine-tune the process of getting my KVM/Libvirt managed VMs over into my new oVirt infrastructure, and the virt-v2v import is failing in the WUI with "Failed to read VM 'dh-imager01' OVF, it may be corrupted". I've attached both engine and vdsm logs that are a snapshot from when I ran the virt-v2v command until I saw the failure under Events.
matt - any thoughts?
Nothing springs to mind immediately, but it sounds like v2v is producing an invalid OVF. If somebody can diagnose what the problem with the OVF is, I can fix v2v.
Matt
virt-v2v command used...
# virt-v2v -i libvirtxml -o rhev -os dc-vmarchitect.tamu.edu:/exportdomain dh-imager01.xml
dh-imager01_sys.qcow2: 100% [====================] 0h00m37s
virt-v2v: dh-imager01 configured with virtio drivers.
The XML has been modified numerous times based on past mailing list comments to have VNC and network information removed, but still the same failure. I've attached the latest XML that was used in the log's failure as dh-imager01.xml. I've also tried passing the bridge device (ovirtmgmt) in the above command, with the same failure results.
Node and Engine are both CentOS 6.2, with vdsm-4.10.0-4 and ovirt-engine-3.1 respectively.
Please let me know what other configuration information could be helpful to debug / troubleshoot this.
Are there any other methods besides a virt-v2v migration that would allow me to use my previous KVM VMs within oVirt?
Thanks
- Trey
--
Matthew Booth, RHCA, RHCSS
Red Hat Engineering, Virtualisation Team
GPG ID: D33C3490
GPG FPR: 3733 612D 2D05 5458 8A8A 1600 3441 EA19 D33C 3490
Attached is the virt-v2v generated OVF that's in my NFS export domain.

Any other means to get KVM/libvirt/virt-manager based VMs into oVirt? Possibly something as crude as provisioning new VMs with oVirt, then replacing the virtual hard drives?
this would work - just create the VM on an NFS storage domain with a disk the same size as the original, and copy over the disk you had.
a bit trickier for iscsi, so i'd do this with nfs.
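e.g. a rough sketch (the mount point and uuid path components are illustrative placeholders - the real path depends on your storage domain and disk uuids):

# qemu-img convert -f qcow2 -O raw dh-imager01_sys.qcow2 dh-imager01_sys.raw
# cp dh-imager01_sys.raw /rhev/data-center/mnt/<nfs-server>:<export-path>/<sd-uuid>/images/<image-uuid>/<volume-uuid>

the first step converts the source qcow2 to raw; the second overwrites the placeholder disk oVirt created with the converted image.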
Thanks
- Trey
Why is it trickier with iSCSI? Currently the only Data Center I have functioning in oVirt only has iSCSI storage available.
with iscsi, you will have to create the disks as pre-allocated, and use dd to overwrite them.
NFS doesn't have to be pre-allocated.
and since you are using pre-allocated, you need to use the RAW format, IIRC.
Currently most of my KVM VMs are qcow2, so converting them to raw would not be a problem. However, why is dd necessary? Why can't I overwrite the <image_name>.img with my *.img file? Since I've used mostly qcow2 in my time with KVM/libvirt, I may lack some understanding of how to correctly handle raw images.
in both cases there aren't any .img files.
you can convert your qcow2 to raw before copying them over to iscsi or nfs using qemu-img convert.
it is not strictly necessary, but it will save you from failing on small details between the two formats.
using the export domain is safest, even though it doubles the amount of IO.
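e.g. for the iscsi case, a sketch (the /dev path is an illustrative placeholder for the LV oVirt creates per pre-allocated disk):

# qemu-img convert -f qcow2 -O raw dh-imager01_sys.qcow2 dh-imager01_sys.raw
# dd if=dh-imager01_sys.raw of=/dev/<storage-domain-uuid>/<volume-uuid> bs=1M

qemu-img convert can also write straight to the block device as the target, skipping the intermediate file.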
Would a qcow2 image with preallocation=metadata be possible on an iSCSI data store?
ayal?
nope. metadata preallocation means that each logical block has a corresponding physical block.
Ayal, when you say "logical block" and "physical block" here, what do they correspond to on a Linux system? I'd guess the physical block is the SCSI LUN and the logical block is the LVM volume, right?
With files this is fine, as you can seek wherever you want and the file will remain sparse. With block devices this makes little sense: the second the guest accesses a block which is mapped to an unallocated physical block, we'd have to allocate all the area up to that point.
(btw, qemu-img will fail if you try to create such an image on a block device)
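e.g. attempting it against a block device (path illustrative) is expected to fail per the above, while the same options against a plain file succeed:

# qemu-img create -f qcow2 -o preallocation=metadata /dev/<vg>/<lv> 20G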
--
Shu Ming <[email protected]>
IBM China Systems and Technology Laboratory
_______________________________________________
Users mailing list
[email protected]
http://lists.ovirt.org/mailman/listinfo/users