Re: [Libguestfs] OpenStack output workflow

2018-10-12 Thread Richard W.M. Jones
On Thu, Oct 04, 2018 at 04:08:48PM +0200, Fabien Dupont wrote:
> New code tries SIGTERM first, with a grace period of 30 seconds:
> https://github.com/ManageIQ/manageiq-content/pull/433.

Yes that looks much better, thanks.

Rich.

-- 
Richard Jones, Virtualization Group, Red Hat http://people.redhat.com/~rjones
Read my programming and virtualization blog: http://rwmj.wordpress.com
virt-p2v converts physical machines to virtual machines.  Boot with a
live CD or over the network (PXE) and turn machines into KVM guests.
http://libguestfs.org/virt-v2v



Re: [Libguestfs] OpenStack output workflow

2018-10-04 Thread Fabien Dupont
New code tries SIGTERM first, with a grace period of 30 seconds:
https://github.com/ManageIQ/manageiq-content/pull/433.

On Wed, Sep 26, 2018 at 6:10 PM Richard W.M. Jones wrote:

> On Wed, Sep 26, 2018 at 04:57:19PM +0200, Fabien Dupont wrote:
> > It's not virt-v2v-wrapper that kills virt-v2v, it's ManageIQ. We have the
> > PID from virt-v2v-wrapper state file. What would be the preferred way
> > to interrupt it ?
>
> It's not too nice to send kill -9 to virt-v2v because it means none of
> the at-exit handlers get to run, so it will leave temporary files all
> over the place.  It's better to send an ordinary kill signal
> (eg. SIGTERM).  If virt-v2v doesn't exit after some grace period,
> eg. 30 seconds, then it's a bug, but maybe you could then send
> SIGKILL.
>
> This is actually another thing which a temporary systemd unit will
> solve for us:
> https://www.freedesktop.org/software/systemd/man/systemd.kill.html
>
> Rich.
>
> --
> Richard Jones, Virtualization Group, Red Hat
> http://people.redhat.com/~rjones
> Read my programming and virtualization blog: http://rwmj.wordpress.com
> virt-top is 'top' for virtual machines.  Tiny program with many
> powerful monitoring features, net stats, disk stats, logging, etc.
> http://people.redhat.com/~rjones/virt-top
>


-- 

*Fabien Dupont*

PRINCIPAL SOFTWARE ENGINEER

Red Hat - Solutions Engineering

fab...@redhat.com M: +33 (0) 662 784 971 <+33662784971>


Re: [Libguestfs] OpenStack output workflow

2018-09-27 Thread Fabien Dupont
Thanks Rich. I'll change the code for IMS 1.1. And I like that systemd
idea more and more.

On Wed, Sep 26, 2018 at 6:10 PM Richard W.M. Jones wrote:

> On Wed, Sep 26, 2018 at 04:57:19PM +0200, Fabien Dupont wrote:
> > It's not virt-v2v-wrapper that kills virt-v2v, it's ManageIQ. We have the
> > PID from virt-v2v-wrapper state file. What would be the preferred way
> > to interrupt it ?
>
> It's not too nice to send kill -9 to virt-v2v because it means none of
> the at-exit handlers get to run, so it will leave temporary files all
> over the place.  It's better to send an ordinary kill signal
> (eg. SIGTERM).  If virt-v2v doesn't exit after some grace period,
> eg. 30 seconds, then it's a bug, but maybe you could then send
> SIGKILL.
>
> This is actually another thing which a temporary systemd unit will
> solve for us:
> https://www.freedesktop.org/software/systemd/man/systemd.kill.html
>
> Rich.
>
> --
> Richard Jones, Virtualization Group, Red Hat
> http://people.redhat.com/~rjones
> Read my programming and virtualization blog: http://rwmj.wordpress.com
> virt-top is 'top' for virtual machines.  Tiny program with many
> powerful monitoring features, net stats, disk stats, logging, etc.
> http://people.redhat.com/~rjones/virt-top
>


-- 

*Fabien Dupont*

PRINCIPAL SOFTWARE ENGINEER

Red Hat - Solutions Engineering

fab...@redhat.com M: +33 (0) 662 784 971 <+33662784971>


Re: [Libguestfs] OpenStack output workflow

2018-09-26 Thread Richard W.M. Jones
On Wed, Sep 26, 2018 at 04:57:19PM +0200, Fabien Dupont wrote:
> It's not virt-v2v-wrapper that kills virt-v2v, it's ManageIQ. We have the
> PID from virt-v2v-wrapper state file. What would be the preferred way
> to interrupt it ?

It's not too nice to send kill -9 to virt-v2v because it means none of
the at-exit handlers get to run, so it will leave temporary files all
over the place.  It's better to send an ordinary kill signal
(eg. SIGTERM).  If virt-v2v doesn't exit after some grace period,
eg. 30 seconds, then it's a bug, but maybe you could then send
SIGKILL.

This is actually another thing which a temporary systemd unit will
solve for us:
https://www.freedesktop.org/software/systemd/man/systemd.kill.html
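
For what it's worth, a minimal sketch of what a launcher could do here
(Python; the function name and polling loop are my own, and the 30-second
grace period just follows the suggestion above):

  import os
  import signal
  import time

  def stop_virt_v2v(pid, grace_period=30):
      """Ask virt-v2v to exit cleanly; escalate to SIGKILL only as a last resort."""
      os.kill(pid, signal.SIGTERM)        # lets the at-exit handlers remove temp files
      deadline = time.time() + grace_period
      while time.time() < deadline:
          try:
              os.kill(pid, 0)             # signal 0 only probes whether the PID still exists
          except ProcessLookupError:
              return True                 # exited within the grace period
          time.sleep(1)
      os.kill(pid, signal.SIGKILL)        # should not normally be needed; report it as a bug
      return False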

Rich.

-- 
Richard Jones, Virtualization Group, Red Hat http://people.redhat.com/~rjones
Read my programming and virtualization blog: http://rwmj.wordpress.com
virt-top is 'top' for virtual machines.  Tiny program with many
powerful monitoring features, net stats, disk stats, logging, etc.
http://people.redhat.com/~rjones/virt-top



Re: [Libguestfs] OpenStack output workflow

2018-09-26 Thread Fabien Dupont
On Wed, Sep 26, 2018 at 4:25 PM Richard W.M. Jones wrote:

> On Wed, Sep 26, 2018 at 02:40:54PM +0200, Fabien Dupont wrote:
> > [Adding Tomas Golembiovsky]
> >
> > Well, that's mainly IMS related challenges. We're working on
> > OpenStack output support and migration throttling and this implies
> > changes to virt-v2v-wrapper.  This is then the opportunity to think
> > about virt-v2v-wrapper maintenance and feature set. It has been
> > created in the first place to simplify interaction with virt-v2v
> > from ManageIQ.
>
> Stepping back here, the upstream community on this mailing list have
> no idea what you mean by the terms "IMS", "virt-v2v-wrapper" and
> "ManageIQ".  I'll try to explain briefly:
>
> * ManageIQ (http://manageiq.org/) = a kind of scriptable, universal
>   management tool for cloudy things, using Ansible.
>
> * Ansible = automates remote management of machines using ssh.
>
> * IMS = an internal Red Hat project to add virt-v2v support to
>   ManageIQ.
>
> * virt-v2v-wrapper = a wrapper around virt-v2v which allows it to be
>   called from Ansible.  The reason we need this is because Ansible
>   doesn't support managing long-running processes like virt-v2v, so we
>   need to have another component which provides an API which can be
>   queried remotely, while tending to the long-running virt-v2v behind
>   the scenes.  [Tomáš: Got a link to the code?  I can't find it right now]
>

Thanks for adding the explanation. FWIW, Ansible is not involved in IMS.
There is some rogue code out there
(https://github.com/fdupont-redhat/ims-v2v-engine_ansible)
doing things with Ansible, because I like experimenting.

Virt-v2v-wrapper code is available here:
https://github.com/oVirt/ovirt-ansible-v2v-conversion-host/blob/master/files/virt-v2v-wrapper.py


> > The first challenge we faced is the interaction with virt-v2v. It's
> > highly versatile and proposes a lot of options for input and
> > output. The downside of it is that over time is becomes more and
> > more difficult to know them all.
>
> The options are all documented in the manual.  I have thought for a
> while that we need to tackle the virt-v2v manual: It's too big, and
> unapproachable.  Really I think we need to split it into topic
> sections and rewrite parts of it.  Unfortunately I've not had time to
> do that so far.
>
> > And all the error messages are made for human beings, not machines,
> > so providing feedback through a launcher, such as virt-v2v-wrapper,
> > is difficult.
>
> This is indeed an issue.  Pino recently added enhanced support for the
> ‘--machine-readable’ option which should address some problems:
>
>
> https://github.com/libguestfs/libguestfs/commit/afa8111b751ed33e1989e6d9bb03928cefa17917
>
> If this change still doesn't fully address the issues with automating
> virt-v2v then please let us know what specifically can be improved
> here.
>

Thanks for the link. virt-v2v-wrapper uses a standalone call with
--machine-readable to get the capabilities of virt-v2v. It's used to
check for --mac option support.
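
A rough sketch of that capability probe (the exact token printed for the
--mac capability is an assumption here, not taken from the virt-v2v docs):

  import subprocess

  def virt_v2v_capabilities():
      # virt-v2v --machine-readable prints one capability per line on stdout
      out = subprocess.run(["virt-v2v", "--machine-readable"],
                           check=True, capture_output=True, text=True).stdout
      return set(out.split())

  supports_mac = "mac-option" in virt_v2v_capabilities()   # hypothetical token name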

[...]
> > For progress, the only way to know what happens is to run virt-v2v
> > in debug mode (-v -x) and parse the (very extensive)
> > output. Virt-v2v-wrapper does it for us in IMS, but it is merely a
> > workaround.
>
> Right, this is indeed another problem which we should address.  I
> thought we had an RFE filed for this, but I cannot find it.  At the
> moment the workaround you mention is very ugly and clunky, but AFAIK
> it does work.
>

You're right, it works. We'll test whether --machine-readable improves the
machine-to-machine experience, and file RFEs if needed.


> > I'd expect a conversion tool to provide a comprehensive progress,
> > such as "I'm converting VM 'my_vm' and more specifically disk X/Y
> > (XX%). Total conversion progress is XX%". Of course, I'd also expect
> > a machine readable output (JSON, CSV, YAML…). Debug mode ensures we
> > have all the data in case of failure, so I don't say remove it, but
> > simply add specialized outputs.
>
> We can discuss debug output vs progress output and formats to use
> separately when fixing the above, but yes, point taken.
>
> > The third challenge was to clean up in case of virt-v2v failure. For
> > example, when it fails converting a disk to RHV, it doesn't clean
> > the finished and unfinished disks.
>
> This is a bug (https://bugzilla.redhat.com/show_bug.cgi?id=1616226).
> It's been on my to-do list for quite a while , but I haven't got to
> it, so patches welcome ...
>

Thanks for the BZ reference. IIUC, this will clean up the disk being converted
at the time of interruption. What about the already converted disks of
a multi-disk VM? IMO, they should also be removed.


> > Virt-v2v-wrapper was initially written by RHV team (Tomas) for RHV
> > migrations, so it sounded fair(ish). But, extending the outputs to
> > OpenStack, we'll have to deal with leftovers in OpenStack too. Maybe
> > a cleanup on failure option would be a good idea, with a default to
> > 

Re: [Libguestfs] OpenStack output workflow

2018-09-26 Thread Richard W.M. Jones
On Wed, Sep 26, 2018 at 02:40:54PM +0200, Fabien Dupont wrote:
> [Adding Tomas Golembiovsky]
>
> Well, that's mainly IMS related challenges. We're working on
> OpenStack output support and migration throttling and this implies
> changes to virt-v2v-wrapper.  This is then the opportunity to think
> about virt-v2v-wrapper maintenance and feature set. It has been
> created in the first place to simplify interaction with virt-v2v
> from ManageIQ.

Stepping back here, the upstream community on this mailing list have
no idea what you mean by the terms "IMS", "virt-v2v-wrapper" and
"ManageIQ".  I'll try to explain briefly:

* ManageIQ (http://manageiq.org/) = a kind of scriptable, universal
  management tool for cloudy things, using Ansible.

* Ansible = automates remote management of machines using ssh.

* IMS = an internal Red Hat project to add virt-v2v support to
  ManageIQ.

* virt-v2v-wrapper = a wrapper around virt-v2v which allows it to be
  called from Ansible.  The reason we need this is because Ansible
  doesn't support managing long-running processes like virt-v2v, so we
  need to have another component which provides an API which can be
  queried remotely, while tending to the long-running virt-v2v behind
  the scenes.  [Tomáš: Got a link to the code?  I can't find it right now]

> The first challenge we faced is the interaction with virt-v2v. It's
> highly versatile and proposes a lot of options for input and
> output. The downside of it is that over time is becomes more and
> more difficult to know them all.

The options are all documented in the manual.  I have thought for a
while that we need to tackle the virt-v2v manual: It's too big, and
unapproachable.  Really I think we need to split it into topic
sections and rewrite parts of it.  Unfortunately I've not had time to
do that so far.

> And all the error messages are made for human beings, not machines,
> so providing feedback through a launcher, such as virt-v2v-wrapper,
> is difficult.

This is indeed an issue.  Pino recently added enhanced support for the
‘--machine-readable’ option which should address some problems:

  
https://github.com/libguestfs/libguestfs/commit/afa8111b751ed33e1989e6d9bb03928cefa17917

If this change still doesn't fully address the issues with automating
virt-v2v then please let us know what specifically can be improved
here.

[...]
> For progress, the only way to know what happens is to run virt-v2v
> in debug mode (-v -x) and parse the (very extensive)
> output. Virt-v2v-wrapper does it for us in IMS, but it is merely a
> workaround.

Right, this is indeed another problem which we should address.  I
thought we had an RFE filed for this, but I cannot find it.  At the
moment the workaround you mention is very ugly and clunky, but AFAIK
it does work.

> I'd expect a conversion tool to provide a comprehensive progress,
> such as "I'm converting VM 'my_vm' and more specifically disk X/Y
> (XX%). Total conversion progress is XX%". Of course, I'd also expect
> a machine readable output (JSON, CSV, YAML…). Debug mode ensures we
> have all the data in case of failure, so I don't say remove it, but
> simply add specialized outputs.

We can discuss debug output vs progress output and formats to use
separately when fixing the above, but yes, point taken.

> The third challenge was to clean up in case of virt-v2v failure. For
> example, when it fails converting a disk to RHV, it doesn't clean
> the finished and unfinished disks.

This is a bug (https://bugzilla.redhat.com/show_bug.cgi?id=1616226).
It's been on my to-do list for quite a while, but I haven't got to
it, so patches welcome ...

> Virt-v2v-wrapper was initially written by RHV team (Tomas) for RHV
> migrations, so it sounded fair(ish). But, extending the outputs to
> OpenStack, we'll have to deal with leftovers in OpenStack too. Maybe
> a cleanup on failure option would be a good idea, with a default to
> false to not break existing behaviour.

The issue of cleaning up disks in general is a hard one to solve.

With the OpenStack backend we try our best as long as virt-v2v
exits on a normal failure path:

  
https://github.com/libguestfs/libguestfs/blob/e2bafffce24cd8c0436bf887ee166a3ae2257bbb/v2v/output_openstack.ml#L370-L384

However there are always going to be cases where that is not possible
(eg. virt-v2v segfaults or is kill -9'd or whatever), and in that case
I envisaged for OpenStack some sort of external garbage collector.  To
this end, disks which have not been finalized are given a special
description so it should be possible to find them after a full
migration has completed:

  
https://github.com/libguestfs/libguestfs/blob/e2bafffce24cd8c0436bf887ee166a3ae2257bbb/v2v/output_openstack.ml#L386-L392
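
As an illustration only, such a garbage collector could look roughly like
this with openstacksdk (the marker string is a placeholder, not the actual
description virt-v2v writes; check output_openstack.ml for the real text):

  import openstack

  def collect_unfinalized_v2v_volumes(marker="virt-v2v temporary volume"):
      """Delete Cinder volumes left behind by an interrupted conversion."""
      conn = openstack.connect()          # credentials from clouds.yaml or OS_* variables
      for vol in conn.block_storage.volumes(details=True):
          if marker in (vol.description or "") and vol.status == "available":
              conn.block_storage.delete_volume(vol)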

IIRC virt-v2v-wrapper is sending kill -9 to virt-v2v, which it should
not do.

> The fourth challenge is to limit the resources allocated to virt-v2v
> during conversion, because concurrent conversions may have a huge
> impact on conversion host performance. 

Re: [Libguestfs] OpenStack output workflow

2018-09-26 Thread Fabien Dupont
[Adding Tomas Golembiovsky]

On Wed, Sep 26, 2018 at 12:11 PM Richard W.M. Jones wrote:

>
> Rather than jumping to a solution, can you explain what the problem
> is that you're trying to solve?
>
> You need to do X, you tried virt-v2v, it doesn't do X, etc.
>

Well, these are mainly IMS-related challenges. We're working on OpenStack
output support and migration throttling, and this implies changes to
virt-v2v-wrapper. This is also the opportunity to think about virt-v2v-wrapper
maintenance and feature set. It was created in the first place to simplify
interaction with virt-v2v from ManageIQ.

The first challenge we faced is the interaction with virt-v2v. It's highly
versatile and offers a lot of options for input and output. The downside is
that over time it becomes more and more difficult to know them all. And all
the error messages are made for human beings, not machines, so providing
feedback through a launcher, such as virt-v2v-wrapper, is difficult.

The second challenge was monitoring the virt-v2v process liveness and
progress. For liveness, virt-v2v-wrapper stores the PID and checks that it's
still present; when it is absent, it checks the return code for success (0)
or failure (!= 0), and any other launcher could do the same. For progress,
the only way to know what happens is to run virt-v2v in debug mode (-v -x)
and parse the (very extensive) output. Virt-v2v-wrapper does that for us in
IMS, but it is merely a workaround. I'd expect a conversion tool to provide a
comprehensive progress report, such as "I'm converting VM 'my_vm' and more
specifically disk X/Y (XX%). Total conversion progress is XX%". Of course,
I'd also expect a machine-readable output (JSON, CSV, YAML…). Debug mode
ensures we have all the data in case of failure, so I don't say remove it,
but simply add specialized outputs.
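
For the liveness part, a minimal sketch of what any launcher could do (the
state-file layout shown here is hypothetical, not the wrapper's actual
format):

  import json
  import os

  def conversion_status(state_file):
      """Return 'running', 'succeeded' or 'failed' from a wrapper-style state file."""
      with open(state_file) as f:
          state = json.load(f)            # assumed layout: {"pid": ..., "return_code": ...}
      try:
          os.kill(state["pid"], 0)        # signal 0 only checks that the process exists
          return "running"
      except ProcessLookupError:
          return "succeeded" if state.get("return_code") == 0 else "failed"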

The third challenge was to clean up in case of virt-v2v failure. For example,
when it fails converting a disk to RHV, it doesn't clean up the finished and
unfinished disks. Virt-v2v-wrapper was initially written by the RHV team
(Tomas) for RHV migrations, so that sounded fair(ish). But, extending the
outputs to OpenStack, we'll have to deal with leftovers in OpenStack too.
Maybe a cleanup-on-failure option would be a good idea, defaulting to false
so as not to break existing behaviour.

The fourth challenge is to limit the resources allocated to virt-v2v during
conversion, because concurrent conversions may have a huge impact on
conversion host performance. In the case of an oVirt host, this can impact
the virtual machines that run on it. This is not covered yet by the wrapper,
but the implementation will likely be based on Linux cgroups and tc.

The wrapper also adds an interesting feature: both virt-v2v and
virt-v2v-wrapper run daemonized and we can asynchronously poll the progress.
This is really key for IMS (and maybe for others): it allows us to start as
many conversions in parallel as needed and to monitor them. Currently, the
Python code forks and detaches itself, after providing the path to the state
file. In the discussion about cgroups, it was mentioned that systemd units
could be used, and that ties in with the daemonization, as systemd-run allows
running processes under systemd and in their own slice, on which cgroup
limits can be set.
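
As an illustration, something along these lines would combine the
daemonization and the resource limits (CPUQuota and MemoryMax are standard
systemd properties; the unit and slice names and the limit values are made
up):

  import subprocess

  def start_conversion(job_id, v2v_args, cpu_quota="200%", memory_max="4G"):
      """Run virt-v2v as a transient systemd unit in its own slice, with cgroup limits."""
      cmd = [
          "systemd-run",
          "--unit", "v2v-%s" % job_id,
          "--slice", "v2v.slice",
          "-p", "CPUQuota=%s" % cpu_quota,
          "-p", "MemoryMax=%s" % memory_max,   # MemoryLimit on cgroup-v1 hosts
          "virt-v2v",
      ] + list(v2v_args)
      subprocess.run(cmd, check=True)     # systemd-run returns once the unit has been started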

Regarding the evolution of virt-v2v-wrapper that I'm going to describe, let
me state that this is my personal view and it commits only myself.

I would like to see the machine-to-machine interaction, logging and cleanup
in virt-v2v itself, because it is valuable to everyone, not only IMS.

I would also like to convert virt-v2v-wrapper into a conversion API and
scheduler service. The idea is that it would provide an as-a-Service endpoint
for conversions, allowing creation of conversion jobs (POST), fetching of
their status (GET), cancellation of a conversion (DELETE) and changing of the
limits (PATCH). In the background, a basic scheduler would simply ensure that
all the jobs are running. Each virt-v2v process would be run as a systemd
unit (journald could capture the debug output), so that it is independent
from the API and scheduler processes.
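
Just to make the idea concrete, a very rough sketch of such an endpoint
(Flask used purely for brevity; the routes, payloads and the scheduler hook
are all hypothetical):

  from uuid import uuid4
  from flask import Flask, jsonify, request

  app = Flask(__name__)
  jobs = {}   # job_id -> state; a real service would persist this

  @app.route("/conversions", methods=["POST"])
  def create_conversion():
      job_id = str(uuid4())
      jobs[job_id] = {"state": "queued", "request": request.get_json(), "limits": {}}
      # a background scheduler would pick this up and start virt-v2v as a systemd unit
      return jsonify(id=job_id), 201

  @app.route("/conversions/<job_id>", methods=["GET"])
  def get_conversion(job_id):
      return jsonify(jobs[job_id])

  @app.route("/conversions/<job_id>", methods=["DELETE"])
  def cancel_conversion(job_id):
      jobs[job_id]["state"] = "cancelling"   # the scheduler would SIGTERM the unit
      return "", 204

  @app.route("/conversions/<job_id>", methods=["PATCH"])
  def update_limits(job_id):
      jobs[job_id]["limits"].update(request.get_json())
      return jsonify(jobs[job_id])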

I know that I can propose patches for changes to virt-v2v, or at least file
RFEs in Bugzilla (my developer skills and breadth of programming languages
are limited). For the evolved wrapper, my main concern is its housing and
maintenance. It doesn't work only for oVirt, so having its lifecycle tied to
oVirt doesn't seem relevant in the long term. In fact, it can work with any
virt-v2v output, so my personal opinion is that it should live in the
virt-v2v ecosystem and follow its lifecycle. As for its maintenance, we still
have to figure out who will be responsible for it, i.e. who will be able to
dedicate time to it.

Rich.
>
> --
> Richard Jones, Virtualization Group, Red Hat
> http://people.redhat.com/~rjones
> Read my programming and virtualization blog: http://rwmj.wordpress.com
> virt-top is 'top' for virtual machines.  Tiny program with many
> 

Re: [Libguestfs] OpenStack output workflow

2018-09-26 Thread Richard W.M. Jones


Rather than jumping to a solution, can you explain what the problem
is that you're trying to solve?

You need to do X, you tried virt-v2v, it doesn't do X, etc.

Rich.

-- 
Richard Jones, Virtualization Group, Red Hat http://people.redhat.com/~rjones
Read my programming and virtualization blog: http://rwmj.wordpress.com
virt-top is 'top' for virtual machines.  Tiny program with many
powerful monitoring features, net stats, disk stats, logging, etc.
http://people.redhat.com/~rjones/virt-top



Re: [Libguestfs] OpenStack output workflow

2018-09-26 Thread Fabien Dupont
On Wed, Sep 26, 2018 at 11:39 AM Richard W.M. Jones wrote:

> On Wed, Sep 26, 2018 at 09:57:22AM +0200, Fabien Dupont wrote:
> > Hi,
> >
> > There has been discussion about the OpenStack output and Richard asked
> for
> > a public thread on this list, so here it is.
> >
> > For v2v from VMware to RHV, there is a Python script that does some extra
> > steps to create the virtual machine after the disks have been converted.
> We
> > want to have the same behavior for OpenStack, i.e. have virt-v2v create
> the
> > instance once the volumes have been created.
>
> Note that for RHV we create *but do not start* the virtual machine.
>
> In fact virt-v2v doesn't start the virtual machine on any output, with
> the exception of the ‘--qemu-boot’ flag (which we remove in RHEL since
> it's essentially a debugging feature).
>
> So I don't necessarily accept the premise that virt-v2v should start
> the VM on OpenStack.  One reason not to is that the VM might not have
> been running on the source, and converting a VM should not change its
> state from shutdown to running for what I think are fairly obvious
> reasons.
>
> Complicating this is that OpenStack itself doesn't seem to have a
> concept of a VM which is created but not running (in this way it is
> different from libvirt and RHV).
>
> We currently create Cinder volume(s) with the VM disk data, plus image
> properties attached to those volume(s), plus other volume properties
> [NB: in Cinder properties and image properties are different things]
> which is sufficient for someone else to start the instance (see
> virt-v2v(1) man page for exactly how to start it).
>

I do agree that we ask virt-v2v to do one more thing compared to RHV, which
is to start the VM. But virt-v2v doesn't really start the VM: it creates it,
then OpenStack starts it once created. I think we can fairly consider that a
user converting a VM, not only disks, from VMware to OpenStack will know
that, and I think we should emphasize it in the OpenStack output
documentation.

Also, I think it would be a nice option for RHV to have a -oo start-vm option
that allows starting the VM after conversion. But I might be pushing too
much ;)

> For that, I've written a Python script [1] that takes a JSON file (sample
> > here [2]) as input. I expect this JSON input to be generated by virt-v2v
> > openstack output module, from the command line options and the volumes
> ids
> > generated during conversion.
> >
> > Here are the options I think we should have for the OpenStack output:
> >
> > -o openstack
> > -oo os-auth-url='http://controller.example.com:5000/v3'
> > -oo os-user-domain-name='Default'
> > -oo os-project-name='v2v-project'
> > -oo os-username='admin'
> > -oo os-password='secret'
> > -oo server-id='01234567-89ab-cdef-0123-456789abcdef'
> > -oo destination_project_id='01234567-89ab-cdef-0123-456789abcdef'
> > -oo volume_type_id='01234567-89ab-cdef-0123-456789abcdef'
> > -oo flavor_id='01234567-89ab-cdef-0123-456789abcdef'
> > -oo
> >
> security_groups_ids='01234567-89ab-cdef-0123-456789abcdef,01234567-89ab-cdef-0123-456789abcdef'
> > --mac 01:23:45:67:89:ab:network:01234567-89ab-cdef-0123-456789abcdef
> >
> > You'll see that the --mac option is not specific to OpenStack, but it
> shows
> > how it would look like with a network id. And it should be passed to the
> > post-conversion script.
> >
> > The translation to JSON is pretty straight forward and should not be
> > difficult. We simply have to agree on the JSON keys we expect and the
> where
> > the new -oo keys go. Also, the script is quite simple and relies on
> > OpenStack Python SDK, which is also used by the OpenStack CLI, so no
> > additional dependencies are required and it should be easy to maintain.
> >
> > [1]
> >
> https://gist.github.com/fdupont-redhat/934b3efb6d66a991a80149235066d7d7#file-post_conversion-py
> > [2]
> >
> https://gist.github.com/fdupont-redhat/934b3efb6d66a991a80149235066d7d7#file-test-migration-json
>
> I'm still confused about how this fits with virt-v2v, even
> conceptually.
>
> Why don't you just run virt-v2v with the options you want, then
> examine the resulting Cinder volumes, extract the properties and image
> properties and run the VM using those properties?
>
> Did you look at a converted VM and see the properties and image
> properties that we are setting?
>

That would mean moving that part into ManageIQ or virt-v2v-wrapper. But I
don't see why virt-v2v-wrapper is not part of libguestfs/virt-v2v, as it is
not limited to RHV conversions anymore. It adds an API-like interface to
virt-v2v, as well as monitoring capabilities that are really valuable. I'm
thinking about an evolution of virt-v2v-wrapper, and I will probably start a
new thread for that.

Rich.
>
> --
> Richard Jones, Virtualization Group, Red Hat
> http://people.redhat.com/~rjones
> Read my programming and virtualization blog: http://rwmj.wordpress.com
> libguestfs lets you edit virtual machines.  Supports shell scripting,
> bindings from many 

Re: [Libguestfs] OpenStack output workflow

2018-09-26 Thread Richard W.M. Jones
On Wed, Sep 26, 2018 at 09:57:22AM +0200, Fabien Dupont wrote:
> Hi,
> 
> There has been discussion about the OpenStack output and Richard asked for
> a public thread on this list, so here it is.
> 
> For v2v from VMware to RHV, there is a Python script that does some extra
> steps to create the virtual machine after the disks have been converted. We
> want to have the same behavior for OpenStack, i.e. have virt-v2v create the
> instance once the volumes have been created.

Note that for RHV we create *but do not start* the virtual machine.

In fact virt-v2v doesn't start the virtual machine on any output, with
the exception of the ‘--qemu-boot’ flag (which we remove in RHEL since
it's essentially a debugging feature).

So I don't necessarily accept the premise that virt-v2v should start
the VM on OpenStack.  One reason not to is that the VM might not have
been running on the source, and converting a VM should not change its
state from shutdown to running for what I think are fairly obvious
reasons.

Complicating this is that OpenStack itself doesn't seem to have a
concept of a VM which is created but not running (in this way it is
different from libvirt and RHV).

We currently create Cinder volume(s) with the VM disk data, plus image
properties attached to those volume(s), plus other volume properties
[NB: in Cinder properties and image properties are different things]
which is sufficient for someone else to start the instance (see
virt-v2v(1) man page for exactly how to start it).

> For that, I've written a Python script [1] that takes a JSON file (sample
> here [2]) as input. I expect this JSON input to be generated by virt-v2v
> openstack output module, from the command line options and the volumes ids
> generated during conversion.
> 
> Here are the options I think we should have for the OpenStack output:
> 
> -o openstack
> -oo os-auth-url='http://controller.example.com:5000/v3'
> -oo os-user-domain-name='Default'
> -oo os-project-name='v2v-project'
> -oo os-username='admin'
> -oo os-password='secret'
> -oo server-id='01234567-89ab-cdef-0123-456789abcdef'
> -oo destination_project_id='01234567-89ab-cdef-0123-456789abcdef'
> -oo volume_type_id='01234567-89ab-cdef-0123-456789abcdef'
> -oo flavor_id='01234567-89ab-cdef-0123-456789abcdef'
> -oo
> security_groups_ids='01234567-89ab-cdef-0123-456789abcdef,01234567-89ab-cdef-0123-456789abcdef'
> --mac 01:23:45:67:89:ab:network:01234567-89ab-cdef-0123-456789abcdef
> 
> You'll see that the --mac option is not specific to OpenStack, but it shows
> how it would look like with a network id. And it should be passed to the
> post-conversion script.
> 
> The translation to JSON is pretty straight forward and should not be
> difficult. We simply have to agree on the JSON keys we expect and the where
> the new -oo keys go. Also, the script is quite simple and relies on
> OpenStack Python SDK, which is also used by the OpenStack CLI, so no
> additional dependencies are required and it should be easy to maintain.
> 
> [1]
> https://gist.github.com/fdupont-redhat/934b3efb6d66a991a80149235066d7d7#file-post_conversion-py
> [2]
> https://gist.github.com/fdupont-redhat/934b3efb6d66a991a80149235066d7d7#file-test-migration-json

I'm still confused about how this fits with virt-v2v, even
conceptually.

Why don't you just run virt-v2v with the options you want, then
examine the resulting Cinder volumes, extract the properties and image
properties and run the VM using those properties?

Did you look at a converted VM and see the properties and image
properties that we are setting?
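
For reference, inspecting those properties with openstacksdk looks roughly
like this (whether volume_image_metadata is exposed depends on the Cinder
API version, so treat it as a sketch):

  import openstack

  conn = openstack.connect()                          # auth from clouds.yaml or OS_* variables
  vol = conn.block_storage.get_volume("VOLUME-UUID")  # one of the volumes virt-v2v created
  print(vol.description)                              # description set by virt-v2v
  print(vol.metadata)                                 # volume properties
  print(vol.volume_image_metadata)                    # image properties attached to the volume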

Rich.

-- 
Richard Jones, Virtualization Group, Red Hat http://people.redhat.com/~rjones
Read my programming and virtualization blog: http://rwmj.wordpress.com
libguestfs lets you edit virtual machines.  Supports shell scripting,
bindings from many languages.  http://libguestfs.org


[Libguestfs] OpenStack output workflow

2018-09-26 Thread Fabien Dupont
Hi,

There has been discussion about the OpenStack output and Richard asked for
a public thread on this list, so here it is.

For v2v from VMware to RHV, there is a Python script that does some extra
steps to create the virtual machine after the disks have been converted. We
want to have the same behavior for OpenStack, i.e. have virt-v2v create the
instance once the volumes have been created.

For that, I've written a Python script [1] that takes a JSON file (sample
here [2]) as input. I expect this JSON input to be generated by the virt-v2v
OpenStack output module, from the command-line options and the volume IDs
generated during conversion.

Here are the options I think we should have for the OpenStack output:

-o openstack
-oo os-auth-url='http://controller.example.com:5000/v3'
-oo os-user-domain-name='Default'
-oo os-project-name='v2v-project'
-oo os-username='admin'
-oo os-password='secret'
-oo server-id='01234567-89ab-cdef-0123-456789abcdef'
-oo destination_project_id='01234567-89ab-cdef-0123-456789abcdef'
-oo volume_type_id='01234567-89ab-cdef-0123-456789abcdef'
-oo flavor_id='01234567-89ab-cdef-0123-456789abcdef'
-oo security_groups_ids='01234567-89ab-cdef-0123-456789abcdef,01234567-89ab-cdef-0123-456789abcdef'
--mac 01:23:45:67:89:ab:network:01234567-89ab-cdef-0123-456789abcdef

You'll see that the --mac option is not specific to OpenStack, but it shows
what it would look like with a network id. And it should be passed to the
post-conversion script.

The translation to JSON is pretty straightforward and should not be
difficult. We simply have to agree on the JSON keys we expect and where the
new -oo keys go. Also, the script is quite simple and relies on the OpenStack
Python SDK, which is also used by the OpenStack CLI, so no additional
dependencies are required and it should be easy to maintain. (A rough sketch
of the instance-creation step follows the links below.)

[1]
https://gist.github.com/fdupont-redhat/934b3efb6d66a991a80149235066d7d7#file-post_conversion-py
[2]
https://gist.github.com/fdupont-redhat/934b3efb6d66a991a80149235066d7d7#file-test-migration-json
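
To illustrate the instance-creation step, here is a rough openstacksdk
sketch; the parameter names mirror the proposed -oo options, and the real
script [1] and JSON schema [2] may well differ:

  import openstack

  def create_instance(name, volume_ids, flavor_id, network_id, security_group_names):
      """Boot an instance from already-converted Cinder volumes."""
      conn = openstack.connect()          # auth matching os-auth-url/os-username/... above
      bdm = []
      for index, vol_id in enumerate(volume_ids):
          bdm.append({
              "uuid": vol_id,
              "source_type": "volume",
              "destination_type": "volume",
              "boot_index": "0" if index == 0 else "-1",   # first converted disk boots
              "delete_on_termination": False,
          })
      return conn.compute.create_server(
          name=name,
          flavor_id=flavor_id,
          networks=[{"uuid": network_id}],
          security_groups=[{"name": sg} for sg in security_group_names],  # Nova wants names, not ids
          block_device_mapping=bdm,       # maps to Nova's block_device_mapping_v2
      )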

-- 

*Fabien Dupont*

PRINCIPAL SOFTWARE ENGINEER

Red Hat - Solutions Engineering

fab...@redhat.com M: +33 (0) 662 784 971 <+33662784971>
