Hey Abhi,
I am not sure where an ISO comes into play.  I did not work with any ISOs
in my flow.  Maybe the OVA is generated in a different way?  Do you have
control over the way the OVA is created?

I was using the following command to export a VM from VMware using the
'ovftool', which would produce my OVA file:

```
ovftool -o --powerOffSource --noSSLVerify --acceptAllEulas \
  --maxVirtualHardwareVersion=<max_virtual_hardware_version> -tt=OVA \
  -n=<clean_vm_name> \
  "vi://<username>:<password>@<endpoint>?moref=vim.VirtualMachine:<vm_id>" \
  <file_path>
```

- <clean_vm_name> is the name according to the ACS naming conventions.
- <username>, <password> and <endpoint> are pretty obvious I think.
- <file_path> is where you want the file to be saved to.
- <vm_id> is the ID of the VM in VMware (described below).
- When migrating from a more recent version of VMware to an older one, you
will need to specify the <max_virtual_hardware_version> setting to reflect
the destination VMware version.  The possible settings and the VMware
versions they support are listed below (a small scripted sketch of the
export call follows the list).
    10 - Supports: ESXi 5.5, Fusion 6.x, Workstation 10.x, Player 6.x
    9 - Supports: ESXi 5.1, Fusion 5.x, Workstation 9.x, Player 5.x
    8 - Supports: ESXi 5.0, Fusion 4.x, Workstation 8.x, Player 4.x
    7 - Supports: ESXi/ESX 4.x, Fusion 3.x, Fusion 2.x, Workstation 7.x,
Workstation 6.5.x, Player 3.x, Server 2.x
    6 - Supports: Workstation 6.0.x
    4 - Supports: ACE 2.x, ESX 3.x, Fusion 1.x, Player 2.x
    3 and 4 - Supports: ACE 1.x, Lab Manager 2.x, Player 1.x, Server 1.x,
Workstation 5.x, Workstation 4.x
    3 - Supports: ESX 2.x, GSX Server 3.x
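
If you want to drive that export step from a script, here is a minimal
sketch using Python's subprocess module.  The function name and defaults are
just placeholders, and it assumes 'ovftool' is on the PATH; it is not the
actual code from my tool.

```
import subprocess

def export_vm_to_ova(username, password, endpoint, vm_id,
                     clean_vm_name, file_path, max_hw_version="10"):
    # Build the same ovftool call as above.  If the password contains special
    # characters it may need URL-encoding in the vi:// locator.
    source = ("vi://%s:%s@%s?moref=vim.VirtualMachine:%s"
              % (username, password, endpoint, vm_id))
    cmd = [
        "ovftool", "-o", "--powerOffSource", "--noSSLVerify",
        "--acceptAllEulas",
        "--maxVirtualHardwareVersion=%s" % max_hw_version,
        "-tt=OVA",
        "-n=%s" % clean_vm_name,
        source,
        file_path,
    ]
    # Raises CalledProcessError if ovftool exits with a non-zero status.
    subprocess.check_call(cmd)
```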


For me, since I was discovering the entire source VMware environment and
listing all possible VMs which could be migrated, I had to find a way to
bulk discover VMs.  I did that with MORTypes via the 'pysphere' Python
package.  The MORTypes are a bit complicated to work with, but they are a
LOT faster, so it was worth it in my case.  I used the following to get the
<vm_id> listed above.  It is worth noting that this was the only way I could
consistently export 'ANY' VM from VMware without things like VM naming
conventions or cluster placement causing problems.

```
# 'vmware' is a connected pysphere VIServer instance, e.g.:
#   from pysphere import VIServer, MORTypes
#   vmware = VIServer()
#   vmware.connect(endpoint, username, password)
mors = vmware._get_managed_objects(MORTypes.VirtualMachine).keys()
props = {
    MORTypes.VirtualMachine: ['name', 'config.files.vmPathName'],
}
result = vmware._get_object_properties_bulk(mors, props)
for vm_item in result:
    vm_id = vm_item.Obj
```

This export process consistently gave me an OVA in the following format:
vm-name.ova
|-- vm-name.ovf
|-- vm-name-disk1.vmdk
|-- vm-name-disk2.vmdk
|-- vm-name-disk3.vmdk
|-- manifest.txt(?)

With these files, I would make a directory per disk to simplify modifying
the OVF and creating an OVA for each disk.

vm-name-disk1
|-- vm-name.ovf (modified)
|-- vm-name-disk1.vmdk

vm-name-disk2
|-- vm-name.ovf (modified)
|-- vm-name-disk2.vmdk

vm-name-disk3
|-- vm-name.ovf (modified)
|-- vm-name-disk3.vmdk

I would then re-tar each directory back into an OVA file to produce the
following (a rough Python sketch of the extract and repack steps is included
after the list):

vm-name-disk1.ova (ACS Template)
vm-name-disk2.ova (Data Volume)
vm-name-disk3.ova (Data Volume)
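
The OVF rewriting itself is the fiddly part (see the gotchas further down in
this thread), but the mechanical extract and repack steps are simple.  Here
is a rough sketch using Python's standard 'tarfile' module; the helper names
are just placeholders and this is not the exact code from my migration tool.
The one detail that matters is that the (modified) OVF descriptor must be the
first member of the archive, and if you keep the manifest its digests must
still match the modified OVF, so it may be simpler to drop it.

```
import os
import tarfile

def extract_ova(ova_path, dest_dir):
    # An OVA is just an uncompressed tar; unpack it and return the members.
    with tarfile.open(ova_path) as tar:
        tar.extractall(dest_dir)
        return tar.getnames()

def repack_ova(ova_path, ovf_path, vmdk_path):
    # Re-create a single-disk OVA.  The OVF descriptor goes in first,
    # followed by the one disk it still references.
    with tarfile.open(ova_path, "w") as tar:
        tar.add(ovf_path, arcname=os.path.basename(ovf_path))
        tar.add(vmdk_path, arcname=os.path.basename(vmdk_path))

# e.g. repack_ova("vm-name-disk1/vm-name-disk1.ova",
#                 "vm-name-disk1/vm-name.ovf",
#                 "vm-name-disk1/vm-name-disk1.vmdk")
```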

I then uploaded all of these to ACS via a local file server and launched
the VM from the Template OVA once ready.  After the VM had successfully
launched, I would then attach each of the Data Volumes which I had
previously uploaded.
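
For completeness, the ACS side can also be scripted against the normal
CloudStack APIs (registerTemplate for the root disk OVA, uploadVolume for
each data disk OVA, then attachVolume once the VM is running).  This is only
a rough sketch using the third-party 'cs' Python client; the endpoint, keys,
IDs and URLs are placeholders, not values from my tool.

```
from cs import CloudStack  # third-party CloudStack API client

acs = CloudStack(endpoint="http://acs-mgmt:8080/client/api",
                 key="API_KEY", secret="SECRET_KEY")

# Register the root disk OVA (served from the local file server) as a template.
acs.registerTemplate(name="vm-name", displaytext="vm-name root disk",
                     format="OVA", hypervisor="VMware",
                     url="http://fileserver/vm-name-disk1.ova",
                     zoneid="ZONE_ID", ostypeid="OS_TYPE_ID")

# Upload each data disk OVA as a data volume.
acs.uploadVolume(name="vm-name-disk2", format="OVA",
                 url="http://fileserver/vm-name-disk2.ova", zoneid="ZONE_ID")

# Once the VM deployed from the template is running, attach the data volume.
acs.attachVolume(id="VOLUME_ID", virtualmachineid="VM_ID")
```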

Given that my flow was slightly different (I never used an ISO), I am not
sure the following is the cause of your current problem, but I suspect it is.

I had no problems exporting and importing VMs whose drives used the SCSI
controller in VMware; however, I was never able to get VMs with an IDE
controller to work.  Drives on an IDE controller always gave me the same
problem: they would hang at boot because the "root partition could not be
found".  Because I was under time pressure and because my client did not use
drives with IDE controllers, I did not attempt to automate the recovery for
this problem.  I suspect manual intervention is required to modify the MBR
to make the disk bootable.

Not sure this was helpful, but hopefully it gets you farther down your road
of troubleshooting this.

Cheers,

*Will Stevens*
CTO

<https://goo.gl/NYZ8KK>

On Wed, May 10, 2017 at 6:56 AM, Abhinandan Prateek <
abhinandan.prat...@shapeblue.com> wrote:

> Hi Will,
>
>   We have hit some roadblocks. The main issue here is that cloudstack sees
> the VM as a set of disks, while an OVA contains the VM definition, including
> instructions on pre-boot steps, delays and maybe more. So even if we are
> able to reliably get disks from the OVA and orchestrate these with vCenter,
> we may still end up with a non-booting VM. Following is the flow:
>
> 1. Split the OVA into disks and iso, assume boot disk.
> 2. Boot disk is the parent template and rest of the disks and iso are
> child templates, created in cloudstack.
> 3. Map disk offerings to disks; cloudstack then orchestrates the boot disk
> and additional disks as a vCenter VM.
> 4. Attach the ISO.
>
> This works with some limitations. The cloudstack VMs exported as OVA work,
> but some of the appliances that I tested it with show errors on vCenter, and
> checking the console reveals a booting problem. Have you faced similar
> issues? How do we go about these? I am not sure if you are in a position to
> share parts of the work that might be relevant. We have kept our PR private
> as it is still in the works, but if it is useful we can share it. Basically
> it is based on a previous similar effort.
>
> Regards,
> -abhi
>
>
>
>
> abhinandan.prat...@shapeblue.com
> www.shapeblue.com
> 53 Chandos Place, Covent Garden, London  WC2N 4HSUK
> @shapeblue
>
>
>
> On 04/05/17, 3:06 PM, "Abhinandan Prateek" <abhinandan.prateek@shapeblue.
> com> wrote:
>
> >
> >The template-generation-related actions are better done on the SSVM, as
> >they will be dealing with moving various data/boot disks. Vim (VMware
> >infrastructure management) works within a vCenter context and as such
> >cannot be used inside the SSVM. Vim gives a much better validation of OVF
> >as it can make compatibility checks with vCenter. Currently it is part of
> >the vmware hypervisor plugin. Due to these dependencies I ended up parsing
> >and generating OVF using the standard DOM API.
> >Due to the nature of OVF we also ended up making several assumptions, like
> >the one that says the first disk is the boot disk. The few OVF files that I
> >have seem to work for now (other than one OVA that had the second disk as
> >the boot disk). Though the OVF file does contain the OS of the VM, weirdly
> >it does not link it to the disk that has it.
> >
> >
> >Regards,
> >-abhi
> >
> >
> >
> >On 03/05/17, 5:57 PM, "Will Stevens" <williamstev...@gmail.com> wrote:
> >
> >>Cool. Let me know if you have questions.
> >>
> >>My instinct is that we probably want to keep the OVA manipulation in the
> >>context of VMware since I don't believe it will be used outside that
> >>context. Trying to manipulate the OVF files with generic tools may prove to
> >>be more complicated to manage going forward as it is almost guaranteed to
> >>require some 'hacks' to make it work. If we can avoid those by using the
> >>vim jars, it may be worth it. I have not reviewed anything on the vim jars
> >>side, so I don't know how good of an option that is.
> >>
> >>Are there key benefits to not using the vim jars?
> >>
> >>Cheers,
> >>
> >>Will
> >>
> >>On May 3, 2017 3:34 AM, "Abhinandan Prateek" <
> >>abhinandan.prat...@shapeblue.com> wrote:
> >>
> >>> Hi Will,
> >>>
> >>>    I am improving the multiple disk OVA feature. As part of the revamp I
> >>> am moving some OVF manipulation code out of the vmware hypervisor plugin
> >>> context into the secondary storage component. The existing code was using
> >>> vim25 and managed objects to query and rewrite the OVF file. I have
> >>> rewritten that using the standard Java w3c DOM parser.
> >>>
> >>>    The overall flow is mostly similar, as below:
> >>> 1. Decompress the OVA and read the OVF file. The OVF file gives
> >>> information about the various disks.
> >>> 2. Create the regular cloudstack template out of the boot disk and
> >>> rewrite the OVF file, minus the information about the other disks.
> >>> 3. For each additional disk, create data disk templates and capture the
> >>> relationship in the DB.
> >>> 4. This can then be followed by creating the multi-disk cloudstack VM.
> >>>
> >>> Essentially I am rewriting the original OVF file after removing the File
> >>> and Disk information that refers to the other disks.  Given that VMware
> >>> is picky, I think it will require some more cleanup and massaging.
> >>> Your inputs will definitely help.
> >>>
> >>> Overall I think the two pieces, the tool that you have and the cloudstack
> >>> multi-disk OVA functionality, can nicely complement each other. Will post
> >>> my learnings here.
> >>>
> >>> Thanks and regards,
> >>> -abhi
> >>>
> >>>
> >>>
> >>>
> >>> On 02/05/17, 6:05 PM, "williamstev...@gmail.com on behalf of Will
> >>> Stevens" <williamstev...@gmail.com on behalf of wstev...@cloudops.com>
> >>> wrote:
> >>>
> >>> >Hey Abhinandan,
> >>> >First, can you give us a bit more context regarding what you are doing
> >>> >so we can highlight potential areas to watch out for?  I have done some
> >>> >OVF parsing/modification and there are a bunch of gotchas to be aware
> >>> >of.  I will try to outline some of the ones I found.  I have not tried
> >>> >to use the vim25.jar, so I can't really help on that front.
> >>> >
> >>> >In my use case, I was exporting VMs via the ovftool from a source VMware
> >>> >environment, and I was migrating them to an ACS managed VMware
> >>> >environment.  In doing so, I also wanted to support VMs with multiple
> >>> >disks using a Root volume and multiple Data volumes, as well as change
> >>> >the nic type (vmxnet3), assign static IPs, etc...  I have not had the
> >>> >time to open source my migration tool, but it is on my todo list.
> >>> >
> >>> >My general flow was:
> >>> >- Export the VM with ovftool
> >>> >- Extract the resulting OVA into its parts (OVF, VMDKs, Manifest)
> >>> >- Duplicate the OVF file, once per VMDK
> >>> >- Modify an OVF file to be specific for each of the VMDKs (one OVF per
> >>> >VMDK)
> >>> >- Take each VMDK and the corresponding OVF and recompress them back into
> >>> >an OVA
> >>> >- Treat the first OVA as a template and the rest as data disks
> >>> >
> >>> >My initial (naive) approach was to just treat the OVF as a well-behaved
> >>> >XML file and use standard XML libs (in my case in Python) to parse and
> >>> >manipulate the OVF file.  This approach had a few pitfalls which I will
> >>> >outline here.
> >>> >
> >>> >VMware is VERY picky about the format of the OVF file; if the file is
> >>> >not perfect, VMware won't import it (or at least the VM won't launch).
> >>> >There were two main items which caused me issues.
> >>> >
> >>> >a) The <Envelope> tag MUST have all of the namespace definitions even
> >>> >if they are not used in the file.  This is something that most XML
> >>> >parsers are confused by.  Most XML parsers will only include the
> >>> >namespaces used in the file when the file is saved.  I had to ensure
> >>> >that the resulting OVF files had all of the original namespace
> >>> >definitions for the file to import correctly.  If I remember correctly,
> >>> >they even had to be in the right order.  I did this by changing the
> >>> >resulting file after saving it with the XML lib.
> >>> >
> >>> >b) VMware uses namespaces which actually collide with each other.  For
> >>> >example, both the default namespace and the 'ovf' namespace share the
> >>> >same URL.  Again, XML libraries don't understand this, so I had to
> >>> >manage that manually.  Luckily, the way VMware handles these namespaces
> >>> >is relatively consistent, so I was able to find a workaround.
> >>> >Basically, the default namespace will apply to all of the elements, and
> >>> >the 'ovf' namespace will be applied only in the attributes.  Because of
> >>> >this I was able to just use the 'ovf' namespace and then, after
> >>> >exporting the file, I did a find and replace from '<ovf:' and '</ovf:'
> >>> >to '<' and '</' respectively.
> >>> >
> >>> >Those are the main gotchas which I encountered.
> >>> >
> >>> >I put the OVA Split function I wrote into a Gist [1] (for now) for your
> >>> >reference in case reviewing the code is helpful.  I was under a lot of
> >>> >time pressure when building this tool, so I have a bunch of cleanup to
> >>> >do before I release it as open source, but I can rush it out and clean
> >>> >it up after release if you are solving the same(ish) problem and my code
> >>> >will be useful.
> >>> >
> >>> >[1] https://gist.github.com/swill/f6b54762ffcce85772535a490a9c8cbe
> >>> >
> >>> >I hope this is helpful in your case.
> >>> >
> >>> >Cheers,
> >>> >
> >>> >*Will STEVENS*
> >>> >Lead Developer
> >>> >
> >>> ><https://goo.gl/NYZ8KK>
> >>> >
> >>> >On Tue, May 2, 2017 at 3:49 AM, Abhinandan Prateek <
> >>> >abhinandan.prat...@shapeblue.com> wrote:
> >>> >
> >>> >> Hello,
> >>> >>
> >>> >> I am looking at vim25.jar to put together ovftool-like functionality,
> >>> >> especially around parsing and generating OVF files. vim25.jar is
> >>> >> already included as a non-oss dependency and used by the vmware
> >>> >> hypervisor plugin. I see that some OVF parsing capabilities are
> >>> >> present in this jar, but it seems to be tied to a host
> >>> >> connection/context. Can anyone who has used this tell me if I can use
> >>> >> it as a standalone OVF manipulation API? Any pointer to a good
> >>> >> resource on that would be nice.
> >>> >>
> >>> >> Regards,
> >>> >> -abhi
> >>> >>
> >>> >>
> >>> >>
> >>> >> abhinandan.prat...@shapeblue.com
> >>> >> www.shapeblue.com
> >>> >> 53 Chandos Place, Covent Garden, London  WC2N 4HSUK
> >>> >> @shapeblue
> >>> >>
> >>> >>
> >>> >>
> >>> >>
> >>>
> >
>
