Re: [ovirt-users] Slow conversion from VMware in 4.1

2018-02-14 Thread Luca 'remix_tj' Lorenzetto
On Tue, Feb 6, 2018 at 11:19 AM, Richard W.M. Jones  wrote:
> On Tue, Feb 06, 2018 at 11:11:37AM +0100, Luca 'remix_tj' Lorenzetto wrote:
>> On 6 Feb 2018 at 10:52 AM, "Yaniv Kaul" wrote:
>>
>>
>> I assume its network interfaces are a bottleneck as well. Certainly if
>> they are 1G.
>> Y.
>>
>>
>> That's not the case: vCenter uses 10G, as do all the involved hosts.
>>
>> We first suspected the network was the culprit, but investigation has
>> cleared it. Network usage is under 40% with 4 ongoing migrations.
>
> The problem is two-fold and is common to all vCenter transformations:
>
> (1) A single https connection is used and each block of data that is
> requested is processed serially.
>
> (2) vCenter has to forward each request to the ESXi hypervisor.
>
> (1) + (2) => most time is spent waiting on the lengthy round trips for
> each requested block of data.
>
> This is why overlapping multiple parallel conversions works and
> (although each conversion is just as slow) improves throughput,
> because you're filling in the long idle gaps by serving other
> conversions.
>
[cut]

FYI, it was a CPU utilization issue. Now that vCenter has a lower
average CPU usage, migration times have halved and returned to the
original estimates.

Thanks, Richard, for the info about virt-v2v; we improved our knowledge
of this tool :-)

Luca

-- 
"It is absurd to employ men of excellent intelligence to do
calculations that could be entrusted to anyone if machines were used"
Gottfried Wilhelm von Leibnitz, Philosopher and Mathematician (1646-1716)

"The Internet is the largest library in the world.
But the problem is that the books are all scattered on the floor"
John Allen Paulos, Mathematician (1945-present)

Luca 'remix_tj' Lorenzetto, http://www.remixtj.net , 
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Slow conversion from VMware in 4.1

2018-02-06 Thread Richard W.M. Jones
On Tue, Feb 06, 2018 at 11:11:37AM +0100, Luca 'remix_tj' Lorenzetto wrote:
> On 6 Feb 2018 at 10:52 AM, "Yaniv Kaul" wrote:
> 
> 
> I assume its network interfaces are a bottleneck as well. Certainly if
> they are 1G.
> Y.
> 
> 
> That's not the case: vCenter uses 10G, as do all the involved hosts.
> 
> We first suspected the network was the culprit, but investigation has
> cleared it. Network usage is under 40% with 4 ongoing migrations.

The problem is two-fold and is common to all vCenter transformations:

(1) A single https connection is used and each block of data that is
requested is processed serially.

(2) vCenter has to forward each request to the ESXi hypervisor.

(1) + (2) => most time is spent waiting on the lengthy round trips for
each requested block of data.

This is why overlapping multiple parallel conversions works and
(although each conversion is just as slow) improves throughput,
because you're filling in the long idle gaps by serving other
conversions.

This is also why other methods perform so much better.  VMX over SSH
uses a single connection but connects directly to the ESXi hypervisor,
so cause (2) is eliminated.  VMX over NFS eliminates VMware servers
entirely and can make multiple parallel requests, eliminating (1) and
(2).  VDDK [in ideal circumstances] can mount the FC storage directly
on the conversion host meaning the ordinary network is not even used
and all requests travel over the SAN.
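
The overlap trick described above can be sketched as a small shell
wrapper. This is only an illustration: the vpx:// URL, output options,
guest list and function name are invented placeholders, not commands
taken from this thread.

```shell
# Sketch: overlap several conversions so each one's long vCenter round
# trips are filled by serving the others. URL and paths are placeholders.
build_v2v_cmd() {
    guest=$1
    # Build (but do not run) the conversion command for one guest.
    echo "virt-v2v -ic 'vpx://vmwareuser@vcenter/DC/Cluster/Host?no_verify=1'" \
         "-o local -os /var/tmp $guest"
}

# One guest name per line in guests.txt; -P 4 keeps 4 conversions running:
#   xargs -P 4 -I{} sh -c "$(build_v2v_cmd '{}')" < guests.txt
```

Each individual conversion stays just as slow; only the aggregate
throughput improves.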

Rich.

-- 
Richard Jones, Virtualization Group, Red Hat http://people.redhat.com/~rjones
Read my programming and virtualization blog: http://rwmj.wordpress.com
virt-df lists disk usage of guests without needing to install any
software inside the virtual machine.  Supports Linux and Windows.
http://people.redhat.com/~rjones/virt-df/


Re: [ovirt-users] Slow conversion from VMware in 4.1

2018-02-06 Thread Luca 'remix_tj' Lorenzetto
On 6 Feb 2018 at 10:52 AM, "Yaniv Kaul" wrote:


I assume its network interfaces are a bottleneck as well. Certainly if
they are 1G.
Y.


That's not the case: vCenter uses 10G, as do all the involved hosts.

We first suspected the network was the culprit, but investigation has
cleared it. Network usage is under 40% with 4 ongoing migrations.

Luca


Re: [ovirt-users] Slow conversion from VMware in 4.1

2018-02-06 Thread Yaniv Kaul
On Feb 6, 2018 11:06 AM, "Luca 'remix_tj' Lorenzetto" <
lorenzetto.l...@gmail.com> wrote:

On Mon, Feb 5, 2018 at 11:13 PM, Richard W.M. Jones 
wrote:
> http://libguestfs.org/virt-v2v.1.html#vmware-vcenter-resources
>
> You should be able to run multiple conversions in parallel
> to improve throughput.
>
> The only long-term solution is to use a different method such as VMX
> over SSH.  vCenter is just fundamentally bad.

4 conversions in parallel work, but each one is very slow. I think
I have to blame the vCenter CPU, which is stuck at 100%.


I assume its network interfaces are a bottleneck as well. Certainly if
they are 1G.
Y.


Thank you for the directions and suggestions,

Luca

--
Luca 'remix_tj' Lorenzetto, http://www.remixtj.net , <
lorenzetto.l...@gmail.com>


Re: [ovirt-users] Slow conversion from VMware in 4.1

2018-02-06 Thread Luca 'remix_tj' Lorenzetto
On Mon, Feb 5, 2018 at 11:13 PM, Richard W.M. Jones  wrote:
> http://libguestfs.org/virt-v2v.1.html#vmware-vcenter-resources
>
> You should be able to run multiple conversions in parallel
> to improve throughput.
>
> The only long-term solution is to use a different method such as VMX
> over SSH.  vCenter is just fundamentally bad.

4 conversions in parallel work, but each one is very slow. I think
I have to blame the vCenter CPU, which is stuck at 100%.

Thank you for the directions and suggestions,

Luca

-- 
Luca 'remix_tj' Lorenzetto, http://www.remixtj.net , 


Re: [ovirt-users] Slow conversion from VMware in 4.1

2018-02-05 Thread Richard W.M. Jones
On Mon, Feb 05, 2018 at 10:57:58PM +0100, Luca 'remix_tj' Lorenzetto wrote:
> On Fri, Feb 2, 2018 at 12:52 PM, Richard W.M. Jones  wrote:
> > There is a section about this in the virt-v2v man page.  I'm on
> > a train at the moment but you should be able to find it.  Try to
> > run many conversions, at least 4 or 8 would be good places to start.
> 
> Hello Richard,
> 
> I read the man page but found nothing explicit about resource usage.
> Anyway, digging into our setup I found that vCenter's CPU usage, even
> when "low", is 95%.
> I think our Windows admins should take care of this.

http://libguestfs.org/virt-v2v.1.html#vmware-vcenter-resources

You should be able to run multiple conversions in parallel
to improve throughput.

The only long-term solution is to use a different method such as VMX
over SSH.  vCenter is just fundamentally bad.

Rich.

-- 
Richard Jones, Virtualization Group, Red Hat http://people.redhat.com/~rjones
Read my programming and virtualization blog: http://rwmj.wordpress.com
Fedora Windows cross-compiler. Compile Windows programs, test, and
build Windows installers. Over 100 libraries supported.
http://fedoraproject.org/wiki/MinGW


Re: [ovirt-users] Slow conversion from VMware in 4.1

2018-02-05 Thread Luca 'remix_tj' Lorenzetto
On Fri, Feb 2, 2018 at 12:52 PM, Richard W.M. Jones  wrote:
> There is a section about this in the virt-v2v man page.  I'm on
> a train at the moment but you should be able to find it.  Try to
> run many conversions, at least 4 or 8 would be good places to start.

Hello Richard,

I read the man page but found nothing explicit about resource usage.
Anyway, digging into our setup I found that vCenter's CPU usage, even
when "low", is 95%.
I think our Windows admins should take care of this.

Luca

-- 
Luca 'remix_tj' Lorenzetto, http://www.remixtj.net , 


Re: [ovirt-users] Slow conversion from VMware in 4.1

2018-02-02 Thread Richard W.M. Jones
On Fri, Feb 02, 2018 at 12:20:14PM +0100, Luca 'remix_tj' Lorenzetto wrote:
> Hello Richard,
> 
> unfortunately upgrading virt-v2v is not an option. It would be nice,
> but the integration with vdsm is not yet ready for that option.
> 
> On Thu, Jan 25, 2018 at 11:06 AM, Richard W.M. Jones  
> wrote:
> [cut]
> > I don't know why it slowed down, but I'm pretty sure it's got nothing
> > to do with the version of oVirt/RHV.  Especially in the initial phase
> > where it's virt-v2v reading the guest from vCenter.  Something must
> > have changed or be different in the test and production environments.
> >
> 
> > Are you converting the same guests?  virt-v2v is data-driven, so
> > different guests require different operations, and those can take
> > different amounts of time to run.
> >
> 
> I'm not migrating the same guests; I'm migrating different guests, but
> most of them share the same OS baseline.
> Most of these VMs are from the same RHEL 7 template and have little
> difference in data (a few gigs).
> 
> Do you know what the performance impact on vCenter is? I'd like to
> tune vCenter as well as possible to improve the migration time.

There is a section about this in the virt-v2v man page.  I'm on
a train at the moment but you should be able to find it.  Try to
run many conversions, at least 4 or 8 would be good places to start.

> We have to migrate ~300 guests, and our maintenance window is very
> short. We don't want to continue the migration for months.

The SSH or VDDK methods would be far faster, but if you can't upgrade
you're stuck with HTTPS to vCenter.

Rich.

-- 
Richard Jones, Virtualization Group, Red Hat http://people.redhat.com/~rjones
Read my programming and virtualization blog: http://rwmj.wordpress.com
virt-p2v converts physical machines to virtual machines.  Boot with a
live CD or over the network (PXE) and turn machines into KVM guests.
http://libguestfs.org/virt-v2v


Re: [ovirt-users] Slow conversion from VMware in 4.1

2018-02-02 Thread Luca 'remix_tj' Lorenzetto
Hello Richard,

unfortunately upgrading virt-v2v is not an option. It would be nice,
but the integration with vdsm is not yet ready for that option.

On Thu, Jan 25, 2018 at 11:06 AM, Richard W.M. Jones  wrote:
[cut]
> I don't know why it slowed down, but I'm pretty sure it's got nothing
> to do with the version of oVirt/RHV.  Especially in the initial phase
> where it's virt-v2v reading the guest from vCenter.  Something must
> have changed or be different in the test and production environments.
>

> Are you converting the same guests?  virt-v2v is data-driven, so
> different guests require different operations, and those can take
> different amounts of time to run.
>

I'm not migrating the same guests; I'm migrating different guests, but
most of them share the same OS baseline.
Most of these VMs are from the same RHEL 7 template and have little
difference in data (a few gigs).

Do you know what the performance impact on vCenter is? I'd like to
tune vCenter as well as possible to improve the migration time.

We have to migrate ~300 guests, and our maintenance window is very
short. We don't want to continue the migration for months.

Luca

-- 
Luca 'remix_tj' Lorenzetto, http://www.remixtj.net , 


Re: [ovirt-users] Slow conversion from VMware in 4.1

2018-01-25 Thread Richard W.M. Jones
On Thu, Jan 25, 2018 at 10:53:28AM +0100, Luca 'remix_tj' Lorenzetto wrote:
> On Thu, Jan 25, 2018 at 10:08 AM, Richard W.M. Jones  
> wrote:
> > There's got to be some difference between your staging environment and
> > your production environment, and I'm pretty sure it has nothing to do
> > with the version of oVirt.
> >
> > Are you running virt-v2v inside a virtual machine, and previously you
> > ran it on bare-metal?  Or did you disable nested KVM?  That seems like
> > the most likely explanation for the difference (although I'm surprised
> > that the difference is so large).
> >
> > Rich.
> >
> 
> Hello Rich,
> 
> I'm running virt-v2v through the import option of oVirt.

Unfortunately the ‘-i vmx’ method is not yet supported when using the
oVirt UI.  However it will work from the command line[0] if you just
upgrade virt-v2v using the RHEL 7.5 preview repo I linked to before.

‘-i vmx’ will be by far the fastest way to transfer guests available
currently, (unless you want to get into VDDK which currently requires
a lot of fiddly setup[1]).
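
As a hedged sketch of what such a command-line conversion might look
like: the ESXi hostname, datastore path, output storage and helper
function below are invented placeholders, and the exact flags depend on
the virt-v2v version installed.

```shell
# Hypothetical '-i vmx' conversion pulled over SSH straight from the
# ESXi host, skipping vCenter. Host and paths are placeholders.
build_vmx_ssh_cmd() {
    vmx_path=$1    # e.g. /vmfs/volumes/datastore1/guest/guest.vmx
    # Build (but do not run) the command line; recent virt-v2v versions
    # express the SSH transport as '-it ssh'.
    echo "virt-v2v -i vmx -it ssh" \
         "ssh://root@esxi.example.com$vmx_path" \
         "-o local -os /var/tmp"
}
```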

> [root@kvm01 ~]# rpm -qa virt-v2v
> virt-v2v-1.36.3-6.el7_4.3.x86_64
> [root@kvm01 ~]# rpm -qa libguestfs
> libguestfs-1.36.3-6.el7_4.3.x86_64
> [root@kvm01 ~]# rpm -qa "redhat-virtualization-host-image-update*"
> redhat-virtualization-host-image-update-placeholder-4.1-8.1.el7.noarch
> redhat-virtualization-host-image-update-4.1-20171207.0.el7_4.noarch
> 
> (yes, I'm running RHV, but I think this shouldn't change the behaviour)
> 
> I don't set anything on the command line; I set only the
> source and destination through the API. So virt-v2v is coordinated
> via vdsm and runs on the bare-metal host.
> 
> The network distance is "0", because vCenter, the source VMware hosts,
> the KVM hosts and the oVirt hosts lie on the same network. The only
> note is that vCenter is itself a VM, running in the ESX environment.
> 
> Network interfaces on both source and destination are 10Gbit, but
> there may be a little slowdown on the vCenter side because it has to
> get the data from the ESX datastore and forward it to the oVirt host.

I don't know why it slowed down, but I'm pretty sure it's got nothing
to do with the version of oVirt/RHV.  Especially in the initial phase
where it's virt-v2v reading the guest from vCenter.  Something must
have changed or be different in the test and production environments.

Are you converting the same guests?  virt-v2v is data-driven, so
different guests require different operations, and those can take
different amount of time to run.

Rich.

[0] http://libguestfs.org/virt-v2v.1.html#input-from-vmware-vmx
[1] http://libguestfs.org/virt-v2v.1.html#input-from-vddk

-- 
Richard Jones, Virtualization Group, Red Hat http://people.redhat.com/~rjones
Read my programming and virtualization blog: http://rwmj.wordpress.com
virt-builder quickly builds VMs from scratch
http://libguestfs.org/virt-builder.1.html


Re: [ovirt-users] Slow conversion from VMware in 4.1

2018-01-25 Thread Luca 'remix_tj' Lorenzetto
On Thu, Jan 25, 2018 at 10:08 AM, Richard W.M. Jones  wrote:
> There's got to be some difference between your staging environment and
> your production environment, and I'm pretty sure it has nothing to do
> with the version of oVirt.
>
> Are you running virt-v2v inside a virtual machine, and previously you
> ran it on bare-metal?  Or did you disable nested KVM?  That seems like
> the most likely explanation for the difference (although I'm surprised
> that the difference is so large).
>
> Rich.
>

Hello Rich,

I'm running virt-v2v through the import option of oVirt.

[root@kvm01 ~]# rpm -qa virt-v2v
virt-v2v-1.36.3-6.el7_4.3.x86_64
[root@kvm01 ~]# rpm -qa libguestfs
libguestfs-1.36.3-6.el7_4.3.x86_64
[root@kvm01 ~]# rpm -qa "redhat-virtualization-host-image-update*"
redhat-virtualization-host-image-update-placeholder-4.1-8.1.el7.noarch
redhat-virtualization-host-image-update-4.1-20171207.0.el7_4.noarch

(yes, I'm running RHV, but I think this shouldn't change the behaviour)

I don't set anything on the command line; I set only the
source and destination through the API. So virt-v2v is coordinated
via vdsm and runs on the bare-metal host.

The network distance is "0", because vCenter, the source VMware hosts,
the KVM hosts and the oVirt hosts lie on the same network. The only
note is that vCenter is itself a VM, running in the ESX environment.

Network interfaces on both source and destination are 10Gbit, but
there may be a little slowdown on the vCenter side because it has to
get the data from the ESX datastore and forward it to the oVirt host.

Just for reference, this is the virt-v2v command I found with ps on a
host during a conversion (it may not be the one that generated the
output I reported before, but they are all the same):

/usr/bin/virt-v2v -v -x -ic
vpx://vmwareuser%40domain@vcenter/DC/Cluster/Host?no_verify=1 -o vdsm
-of raw -oa preallocated --vdsm-image-uuid
9ef9a0fd-b9e0-4adb-a05a-70560eca553d --vdsm-vol-uuid
8fc08042-34ec-4018-a4d4-622fda51f4e8 --password-file
/var/run/vdsm/v2v/34afd77c-edbd-459e-a221-0df56c42274b.tmp
--vdsm-vm-uuid 34afd77c-edbd-459e-a221-0df56c42274b --vdsm-ovf-output
/var/run/vdsm/v2v --machine-readable -os
/rhev/data-center/e8263fb4-114d-4706-b1c0-5defcd15d16b/9ba693b0-7588-411f-b97c-ec2de619d2f8
vmtoconvert


Luca

-- 
Luca 'remix_tj' Lorenzetto, http://www.remixtj.net , 


Re: [ovirt-users] Slow conversion from VMware in 4.1

2018-01-25 Thread Richard W.M. Jones
On Thu, Jan 25, 2018 at 09:08:49AM +, Richard W.M. Jones wrote:
> On Wed, Jan 24, 2018 at 11:49:13PM +0100, Luca 'remix_tj' Lorenzetto wrote:
> > Hello,
> > 
> > I've started my migrations from VMware today. I had successfully
> > migrated over 200 VMs from VMware to another cluster based on 4.0
> > using our home-made scripts interacting with the APIs. All the
> > migrated VMs are running RHEL 6 or 7, with no SELinux.
> > 
> > We understood a lot about the requirements and we also recorded some
> > metrics about migration times. In July, with 4.0 as the destination,
> > we were migrating a ~30 GB VM in ~40 minutes.
> > That was an acceptable time, considering that about 50% of our VMs
> > sit around that size.
> > 
> > Today we started migrating to the production cluster, which is,
> > instead, running 4.1.8. With the same scripts, the same API calls,
> > and a VM of about 50 GB, we expected to have the VM running in the
> > new cluster after 70 minutes, more or less.
> > 
> > Instead, the migration is taking more than 2 hours, and not because
> > of slow conversion by qemu-img, given that we're transferring an
> > entire disk via HTTP.
> > Looking at the log, it seems that the activities executed before
> > qemu-img took more than 2000 seconds. For example, it appears that
> > dracut took more than 14 minutes, which is in my opinion a bit long.
> 
> There's got to be some difference between your staging environment and
> your production environment, and I'm pretty sure it has nothing to do
> with the version of oVirt.
> 
> Are you running virt-v2v inside a virtual machine, and previously you
> ran it on bare-metal?  Or did you disable nested KVM?  That seems like
> the most likely explanation for the difference (although I'm surprised
> that the difference is so large).

Another factor would be the network "distance" between virt-v2v and
VMware.  More hops?  Slower network interfaces?

Also you don't mention which version of virt-v2v you're using, but if
it's new enough then you should use ‘-i vmx’ conversions, either
directly from NFS, or over SSH from the ESXi hypervisor.  That will be
far quicker than conversions over HTTPS from vCenter (I mean, orders
of magnitude quicker).
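
A sketch of the NFS variant, assuming the ESXi datastore is exported
over NFS and already mounted on the conversion host. The mount point,
guest path and helper function are placeholders, not taken from this
thread.

```shell
# Hypothetical '-i vmx' conversion reading the .vmx directly from an
# NFS-mounted datastore, bypassing the VMware servers entirely.
build_vmx_nfs_cmd() {
    # Build (but do not run) the conversion command line.
    echo "virt-v2v -i vmx /mnt/datastore1/guest/guest.vmx" \
         "-o local -os /var/tmp"
}
```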

The RHEL 7.5 preview repo which supports this is:

  https://www.redhat.com/archives/libguestfs/2017-November/msg6.html

Rich.

> > Is there any option to get a quicker conversion? We would also
> > accept running some tasks in the guests before the conversion.
> > 
> > We have to migrate ~300 VMs in 2.5 months, and we're only at 11
> > after 7 hours (and today an exception allowed us to start 4 hours
> > in advance, but usually our maintenance window is significantly
> > shorter).
> > 
> > This is a filtered log reporting only the rows where we can see how
> > much time has passed:
> > 
> > [   0.0] Opening the source -i libvirt -ic
> > vpx://vmwareuser%40domain@vcenter/DC/Cluster/Host?no_verify=1
> > vmtoconvert
> > [   6.1] Creating an overlay to protect the source from being modified
> > [   7.4] Initializing the target -o vdsm -os
> > /rhev/data-center/e8263fb4-114d-4706-b1c0-5defcd15d16b/a118578a-4cf2-4e0c-ac47-20e9f0321da1
> > --vdsm-image-uuid 1a93e503-ce57-4631-8dd2-eeeae45866ca --vdsm-vol-uuid
> > 88d92582-0f53-43b0-89ff-af1c17ea8618 --vdsm-vm-uuid
> > 1434e14f-e228-41c1-b769-dcf48b258b12 --vdsm-ovf-output
> > /var/run/vdsm/v2v
> > [   7.4] Opening the overlay
> > [00034ms] /usr/libexec/qemu-kvm \
> > [0.00] Initializing cgroup subsys cpu
> > [0.00] Initializing cgroup subsys cpuacct
> > [0.00] Linux version 3.10.0-693.11.1.el7.x86_64
> > (mockbu...@x86-041.build.eng.bos.redhat.com) (gcc version 4.8.5
> > 20150623 (Red Hat 4.8.5-16) (GCC) ) #1 SMP Fri Oct 27 05:39:05 EDT
> > 2017
> > [0.00] Command line: panic=1 console=ttyS0 edd=off
> > udevtimeout=6000 udev.event-timeout=6000 no_timer_check printk.time=1
> > cgroup_disable=memory usbcore.nousb cryptomgr.notests tsc=reliable
> > 8250.nr_uarts=1 root=/dev/sdb selinux=0 guestfs_verbose=1
> > guestfs_network=1 TERM=linux guestfs_identifier=v2v
> > [0.00] e820: BIOS-provided physical RAM map:
> > [0.00] BIOS-e820: [mem 0x-0x0009f7ff] usable
> > [0.00] BIOS-e820: [mem 0x0009f800-0x0009] 
> > reserved
> > [0.00] BIOS-e820: [mem 0x000f-0x000f] 
> > reserved
> > [0.00] BIOS-e820: [mem 0x0010-0x7cfddfff] usable
> > [0.00] BIOS-e820: [mem 0x7cfde000-0x7cff] 
> > reserved
> > [0.00] BIOS-e820: [mem 0xfeffc000-0xfeff] 
> > reserved
> > [0.00] BIOS-e820: [mem 0xfffc-0x] 
> > reserved
> > [0.00] NX (Execute Disable) protection: active
> > [0.00] SMBIOS 2.8 present.
> > [0.00] Hypervisor detected: KVM
> > [0.00] e820: last_pfn = 0x7cfde max_arch_pfn = 0x4
> > [0.00] 

Re: [ovirt-users] Slow conversion from VMware in 4.1

2018-01-25 Thread Richard W.M. Jones
On Wed, Jan 24, 2018 at 11:49:13PM +0100, Luca 'remix_tj' Lorenzetto wrote:
> Hello,
> 
> I've started my migrations from VMware today. I had successfully
> migrated over 200 VMs from VMware to another cluster based on 4.0
> using our home-made scripts interacting with the APIs. All the
> migrated VMs are running RHEL 6 or 7, with no SELinux.
> 
> We understood a lot about the requirements and we also recorded some
> metrics about migration times. In July, with 4.0 as the destination,
> we were migrating a ~30 GB VM in ~40 minutes.
> That was an acceptable time, considering that about 50% of our VMs
> sit around that size.
> 
> Today we started migrating to the production cluster, which is,
> instead, running 4.1.8. With the same scripts, the same API calls,
> and a VM of about 50 GB, we expected to have the VM running in the
> new cluster after 70 minutes, more or less.
> 
> Instead, the migration is taking more than 2 hours, and not because
> of slow conversion by qemu-img, given that we're transferring an
> entire disk via HTTP.
> Looking at the log, it seems that the activities executed before
> qemu-img took more than 2000 seconds. For example, it appears that
> dracut took more than 14 minutes, which is in my opinion a bit long.

There's got to be some difference between your staging environment and
your production environment, and I'm pretty sure it has nothing to do
with the version of oVirt.

Are you running virt-v2v inside a virtual machine, and previously you
ran it on bare-metal?  Or did you disable nested KVM?  That seems like
the most likely explanation for the difference (although I'm surprised
that the difference is so large).

Rich.

> Is there any option to get a quicker conversion? We would also
> accept running some tasks in the guests before the conversion.
> 
> We have to migrate ~300 VMs in 2.5 months, and we're only at 11
> after 7 hours (and today an exception allowed us to start 4 hours in
> advance, but usually our maintenance window is significantly
> shorter).
> 
> This is a filtered log reporting only the rows where we can see how
> much time has passed:
> 
> [   0.0] Opening the source -i libvirt -ic
> vpx://vmwareuser%40domain@vcenter/DC/Cluster/Host?no_verify=1
> vmtoconvert
> [   6.1] Creating an overlay to protect the source from being modified
> [   7.4] Initializing the target -o vdsm -os
> /rhev/data-center/e8263fb4-114d-4706-b1c0-5defcd15d16b/a118578a-4cf2-4e0c-ac47-20e9f0321da1
> --vdsm-image-uuid 1a93e503-ce57-4631-8dd2-eeeae45866ca --vdsm-vol-uuid
> 88d92582-0f53-43b0-89ff-af1c17ea8618 --vdsm-vm-uuid
> 1434e14f-e228-41c1-b769-dcf48b258b12 --vdsm-ovf-output
> /var/run/vdsm/v2v
> [   7.4] Opening the overlay
> [00034ms] /usr/libexec/qemu-kvm \
> [0.00] Initializing cgroup subsys cpu
> [0.00] Initializing cgroup subsys cpuacct
> [0.00] Linux version 3.10.0-693.11.1.el7.x86_64
> (mockbu...@x86-041.build.eng.bos.redhat.com) (gcc version 4.8.5
> 20150623 (Red Hat 4.8.5-16) (GCC) ) #1 SMP Fri Oct 27 05:39:05 EDT
> 2017
> [0.00] Command line: panic=1 console=ttyS0 edd=off
> udevtimeout=6000 udev.event-timeout=6000 no_timer_check printk.time=1
> cgroup_disable=memory usbcore.nousb cryptomgr.notests tsc=reliable
> 8250.nr_uarts=1 root=/dev/sdb selinux=0 guestfs_verbose=1
> guestfs_network=1 TERM=linux guestfs_identifier=v2v
> [0.00] e820: BIOS-provided physical RAM map:
> [0.00] BIOS-e820: [mem 0x-0x0009f7ff] usable
> [0.00] BIOS-e820: [mem 0x0009f800-0x0009] reserved
> [0.00] BIOS-e820: [mem 0x000f-0x000f] reserved
> [0.00] BIOS-e820: [mem 0x0010-0x7cfddfff] usable
> [0.00] BIOS-e820: [mem 0x7cfde000-0x7cff] reserved
> [0.00] BIOS-e820: [mem 0xfeffc000-0xfeff] reserved
> [0.00] BIOS-e820: [mem 0xfffc-0x] reserved
> [0.00] NX (Execute Disable) protection: active
> [0.00] SMBIOS 2.8 present.
> [0.00] Hypervisor detected: KVM
> [0.00] e820: last_pfn = 0x7cfde max_arch_pfn = 0x4
> [0.00] x86 PAT enabled: cpu 0, old 0x7040600070406, new 
> 0x7010600070106
> [0.00] found SMP MP-table at [mem 0x000f72f0-0x000f72ff]
> mapped at [880f72f0]
> [0.00] Using GB pages for direct mapping
> [0.00] RAMDISK: [mem 0x7ccb2000-0x7cfc]
> [0.00] Early table checksum verification disabled
> [0.00] ACPI: RSDP 000f70d0 00014 (v00 BOCHS )
> [0.00] ACPI: RSDT 7cfe14d5 0002C (v01 BOCHS  BXPCRSDT
> 0001 BXPC 0001)
> [0.00] ACPI: FACP 7cfe13e9 00074 (v01 BOCHS  BXPCFACP
> 0001 BXPC 0001)
> [0.00] ACPI: DSDT 7cfe0040 013A9 (v01 BOCHS  BXPCDSDT
> 0001 BXPC 0001)
> [0.00] ACPI: FACS 7cfe 00040
> [0.00] ACPI: APIC 7cfe145d 00078 (v01