Re: [PATCH 0/6] migration/postcopy: enable compress during postcopy

2019-11-07 Thread Dr. David Alan Gilbert
* Wei Yang (richardw.y...@linux.intel.com) wrote:
> On Thu, Nov 07, 2019 at 09:15:44AM +, Dr. David Alan Gilbert wrote:
> >* Wei Yang (richardw.y...@linux.intel.com) wrote:
> >> On Wed, Nov 06, 2019 at 08:11:44PM +, Dr. David Alan Gilbert wrote:
> >> >* Wei Yang (richardw.y...@linux.intel.com) wrote:
> >> >> This patch set tries to enable compression during postcopy.
> >> >> 
> >> >> Postcopy requires placing a whole host page at once, while the
> >> >> migration thread migrates memory in target-page-sized chunks. So
> >> >> postcopy has to collect all the target pages of one host page before
> >> >> placing it via userfaultfd.
> >> >> 
> >> >> To enable compression during postcopy, there are two problems to solve:
> >> >> 
> >> >> 1. Target pages arrive in random order
> >> >> 2. The target pages of one host page must arrive without being
> >> >>    interleaved with target pages from other host pages
> >> >> 
> >> >> The first one is handled by counting the number of target pages that
> >> >> have arrived instead of checking for the arrival of the last target page.
> >> >> 
> >> >> The second one is handled by:
> >> >> 
> >> >> 1. Flushing the compress threads after each host page
> >> >> 2. Waiting for the decompress threads before placing a host page
> >> >> 
> >> >> With these two changes combined, compression can be enabled during
> >> >> postcopy.
> >> >
> >> >What have you tested this with? 2MB huge pages I guess?
> >> >
> >> 
> >> I tried with this qemu option:
> >> 
> >>    -object memory-backend-file,id=mem1,mem-path=/dev/hugepages/guest2,size=4G \
> >>    -device pc-dimm,id=dimm1,memdev=mem1
> >> 
> >> /dev/hugepages/guest2 is a file under hugetlbfs
> >> 
> >>hugetlbfs on /dev/hugepages type hugetlbfs (rw,relatime,pagesize=2M)
> >
> >OK, yes that should be fine.
> >I suspect that on Power/ARM, where normal memory has 16/64k pages, the
> >cost of the flush will mean compression is more expensive in postcopy
> >mode; but it still makes it possible.
> >
> 
> I don't quite get your point about it being more expensive. Do you mean
> more expensive on ARM/Power?

Yes; you're doing a flush at the end of each host page. On x86 without
hugepages you don't do anything, but on ARM/Power you'll need to do a
flush at the end of each of their normal pages - so that's a bit more
expensive.
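
Concretely, the per-host-page flush would look roughly like this on the
send side (a simplified sketch, not the posted patch; the exact call site
and condition in migration/ram.c are assumptions):

    /*
     * After the last target page of a host page has been queued,
     * make the compress threads finish and flush their output, so
     * compressed pages of different host pages never interleave on
     * the wire.
     */
    if (migrate_postcopy_ram() && migrate_use_compression()) {
        flush_compressed_data(rs);    /* existing helper in migration/ram.c */
    }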

> If the solution looks good to you, I will prepare v2.

Yes; I think it is OK.

Dave

> >Dave
> >
> >> >Dave
> >> >
> >> >> Wei Yang (6):
> >> >>   migration/postcopy: reduce memset when it is zero page and
> >> >> matches_target_page_size
> >> >>   migration/postcopy: wait for decompress thread in precopy
> >> >>   migration/postcopy: count target page number to decide the
> >> >> place_needed
> >> >>   migration/postcopy: set all_zero to true on the first target page
> >> >>   migration/postcopy: enable random order target page arrival
> >> >>   migration/postcopy: enable compress during postcopy
> >> >> 
> >> >>  migration/migration.c | 11 
> >> >>  migration/ram.c   | 65 ++-
> >> >>  2 files changed, 45 insertions(+), 31 deletions(-)
> >> >> 
> >> >> -- 
> >> >> 2.17.1
> >> >> 
> >> >--
> >> >Dr. David Alan Gilbert / dgilb...@redhat.com / Manchester, UK
> >> 
> >> -- 
> >> Wei Yang
> >> Help you, Help me
> >--
> >Dr. David Alan Gilbert / dgilb...@redhat.com / Manchester, UK
> 
> -- 
> Wei Yang
> Help you, Help me
--
Dr. David Alan Gilbert / dgilb...@redhat.com / Manchester, UK




Re: [PATCH 0/6] migration/postcopy: enable compress during postcopy

2019-11-07 Thread Wei Yang
On Thu, Nov 07, 2019 at 09:15:44AM +, Dr. David Alan Gilbert wrote:
>* Wei Yang (richardw.y...@linux.intel.com) wrote:
>> On Wed, Nov 06, 2019 at 08:11:44PM +, Dr. David Alan Gilbert wrote:
>> >* Wei Yang (richardw.y...@linux.intel.com) wrote:
>> >> This patch set tries to enable compression during postcopy.
>> >> 
>> >> Postcopy requires placing a whole host page at once, while the
>> >> migration thread migrates memory in target-page-sized chunks. So
>> >> postcopy has to collect all the target pages of one host page before
>> >> placing it via userfaultfd.
>> >> 
>> >> To enable compression during postcopy, there are two problems to solve:
>> >> 
>> >> 1. Target pages arrive in random order
>> >> 2. The target pages of one host page must arrive without being
>> >>    interleaved with target pages from other host pages
>> >> 
>> >> The first one is handled by counting the number of target pages that
>> >> have arrived instead of checking for the arrival of the last target page.
>> >> 
>> >> The second one is handled by:
>> >> 
>> >> 1. Flushing the compress threads after each host page
>> >> 2. Waiting for the decompress threads before placing a host page
>> >> 
>> >> With these two changes combined, compression can be enabled during
>> >> postcopy.
>> >
>> >What have you tested this with? 2MB huge pages I guess?
>> >
>> 
>> I tried with this qemu option:
>> 
>>    -object memory-backend-file,id=mem1,mem-path=/dev/hugepages/guest2,size=4G \
>>    -device pc-dimm,id=dimm1,memdev=mem1
>> 
>> /dev/hugepages/guest2 is a file under hugetlbfs
>> 
>>hugetlbfs on /dev/hugepages type hugetlbfs (rw,relatime,pagesize=2M)
>
>OK, yes that should be fine.
>I suspect that on Power/ARM, where normal memory has 16/64k pages, the
>cost of the flush will mean compression is more expensive in postcopy
>mode; but it still makes it possible.
>

I don't quite get your point about it being more expensive. Do you mean
more expensive on ARM/Power?

If the solution looks good to you, I will prepare v2.

>Dave
>
>> >Dave
>> >
>> >> Wei Yang (6):
>> >>   migration/postcopy: reduce memset when it is zero page and
>> >> matches_target_page_size
>> >>   migration/postcopy: wait for decompress thread in precopy
>> >>   migration/postcopy: count target page number to decide the
>> >> place_needed
>> >>   migration/postcopy: set all_zero to true on the first target page
>> >>   migration/postcopy: enable random order target page arrival
>> >>   migration/postcopy: enable compress during postcopy
>> >> 
>> >>  migration/migration.c | 11 
>> >>  migration/ram.c   | 65 ++-
>> >>  2 files changed, 45 insertions(+), 31 deletions(-)
>> >> 
>> >> -- 
>> >> 2.17.1
>> >> 
>> >--
>> >Dr. David Alan Gilbert / dgilb...@redhat.com / Manchester, UK
>> 
>> -- 
>> Wei Yang
>> Help you, Help me
>--
>Dr. David Alan Gilbert / dgilb...@redhat.com / Manchester, UK

-- 
Wei Yang
Help you, Help me



Re: [PATCH 0/6] migration/postcopy: enable compress during postcopy

2019-11-07 Thread Dr. David Alan Gilbert
* Wei Yang (richardw.y...@linux.intel.com) wrote:
> On Wed, Nov 06, 2019 at 08:11:44PM +, Dr. David Alan Gilbert wrote:
> >* Wei Yang (richardw.y...@linux.intel.com) wrote:
> >> This patch set tries to enable compression during postcopy.
> >> 
> >> Postcopy requires placing a whole host page at once, while the
> >> migration thread migrates memory in target-page-sized chunks. So
> >> postcopy has to collect all the target pages of one host page before
> >> placing it via userfaultfd.
> >> 
> >> To enable compression during postcopy, there are two problems to solve:
> >> 
> >> 1. Target pages arrive in random order
> >> 2. The target pages of one host page must arrive without being
> >>    interleaved with target pages from other host pages
> >> 
> >> The first one is handled by counting the number of target pages that
> >> have arrived instead of checking for the arrival of the last target page.
> >> 
> >> The second one is handled by:
> >> 
> >> 1. Flushing the compress threads after each host page
> >> 2. Waiting for the decompress threads before placing a host page
> >> 
> >> With these two changes combined, compression can be enabled during
> >> postcopy.
> >
> >What have you tested this with? 2MB huge pages I guess?
> >
> 
> I tried with this qemu option:
> 
>    -object memory-backend-file,id=mem1,mem-path=/dev/hugepages/guest2,size=4G \
>    -device pc-dimm,id=dimm1,memdev=mem1
> 
> /dev/hugepages/guest2 is a file under hugetlbfs
> 
>hugetlbfs on /dev/hugepages type hugetlbfs (rw,relatime,pagesize=2M)

OK, yes that should be fine.
I suspect that on Power/ARM, where normal memory has 16/64k pages, the
cost of the flush will mean compression is more expensive in postcopy
mode; but it still makes it possible.

Dave

> >Dave
> >
> >> Wei Yang (6):
> >>   migration/postcopy: reduce memset when it is zero page and
> >> matches_target_page_size
> >>   migration/postcopy: wait for decompress thread in precopy
> >>   migration/postcopy: count target page number to decide the
> >> place_needed
> >>   migration/postcopy: set all_zero to true on the first target page
> >>   migration/postcopy: enable random order target page arrival
> >>   migration/postcopy: enable compress during postcopy
> >> 
> >>  migration/migration.c | 11 
> >>  migration/ram.c   | 65 ++-
> >>  2 files changed, 45 insertions(+), 31 deletions(-)
> >> 
> >> -- 
> >> 2.17.1
> >> 
> >--
> >Dr. David Alan Gilbert / dgilb...@redhat.com / Manchester, UK
> 
> -- 
> Wei Yang
> Help you, Help me
--
Dr. David Alan Gilbert / dgilb...@redhat.com / Manchester, UK




Re: [PATCH 0/6] migration/postcopy: enable compress during postcopy

2019-11-06 Thread Wei Yang
On Wed, Nov 06, 2019 at 08:11:44PM +, Dr. David Alan Gilbert wrote:
>* Wei Yang (richardw.y...@linux.intel.com) wrote:
>> This patch set tries to enable compression during postcopy.
>> 
>> Postcopy requires placing a whole host page at once, while the
>> migration thread migrates memory in target-page-sized chunks. So
>> postcopy has to collect all the target pages of one host page before
>> placing it via userfaultfd.
>> 
>> To enable compression during postcopy, there are two problems to solve:
>> 
>> 1. Target pages arrive in random order
>> 2. The target pages of one host page must arrive without being
>>    interleaved with target pages from other host pages
>> 
>> The first one is handled by counting the number of target pages that
>> have arrived instead of checking for the arrival of the last target page.
>> 
>> The second one is handled by:
>> 
>> 1. Flushing the compress threads after each host page
>> 2. Waiting for the decompress threads before placing a host page
>> 
>> With these two changes combined, compression can be enabled during
>> postcopy.
>
>What have you tested this with? 2MB huge pages I guess?
>

I tried with this qemu option:

   -object memory-backend-file,id=mem1,mem-path=/dev/hugepages/guest2,size=4G \
   -device pc-dimm,id=dimm1,memdev=mem1

/dev/hugepages/guest2 is a file under hugetlbfs

   hugetlbfs on /dev/hugepages type hugetlbfs (rw,relatime,pagesize=2M)

>Dave
>
>> Wei Yang (6):
>>   migration/postcopy: reduce memset when it is zero page and
>> matches_target_page_size
>>   migration/postcopy: wait for decompress thread in precopy
>>   migration/postcopy: count target page number to decide the
>> place_needed
>>   migration/postcopy: set all_zero to true on the first target page
>>   migration/postcopy: enable random order target page arrival
>>   migration/postcopy: enable compress during postcopy
>> 
>>  migration/migration.c | 11 
>>  migration/ram.c   | 65 ++-
>>  2 files changed, 45 insertions(+), 31 deletions(-)
>> 
>> -- 
>> 2.17.1
>> 
>--
>Dr. David Alan Gilbert / dgilb...@redhat.com / Manchester, UK

-- 
Wei Yang
Help you, Help me



Re: [PATCH 0/6] migration/postcopy: enable compress during postcopy

2019-11-06 Thread Dr. David Alan Gilbert
* Wei Yang (richardw.y...@linux.intel.com) wrote:
> This patch set tries to enable compression during postcopy.
> 
> Postcopy requires placing a whole host page at once, while the
> migration thread migrates memory in target-page-sized chunks. So
> postcopy has to collect all the target pages of one host page before
> placing it via userfaultfd.
> 
> To enable compression during postcopy, there are two problems to solve:
> 
> 1. Target pages arrive in random order
> 2. The target pages of one host page must arrive without being
>    interleaved with target pages from other host pages
> 
> The first one is handled by counting the number of target pages that
> have arrived instead of checking for the arrival of the last target page.
> 
> The second one is handled by:
> 
> 1. Flushing the compress threads after each host page
> 2. Waiting for the decompress threads before placing a host page
> 
> With these two changes combined, compression can be enabled during
> postcopy.
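
The counting approach described above boils down to roughly the following
on the receive side (a simplified sketch against ram_load_postcopy(), not
the posted patch; the exact control flow is an assumption):

    /*
     * Count the target pages that have arrived for the current host
     * page instead of relying on the last target page arriving last,
     * then drain the decompress threads before placing the host page.
     */
    target_pages++;
    if (target_pages == block->page_size / TARGET_PAGE_SIZE) {
        place_needed = true;
        target_pages = 0;
    }

    if (place_needed) {
        wait_for_decompress_done();    /* existing helper in ram.c */
        ret = postcopy_place_page(mis, place_dest, place_source, block);
    }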

What have you tested this with? 2MB huge pages I guess?

Dave

> Wei Yang (6):
>   migration/postcopy: reduce memset when it is zero page and
> matches_target_page_size
>   migration/postcopy: wait for decompress thread in precopy
>   migration/postcopy: count target page number to decide the
> place_needed
>   migration/postcopy: set all_zero to true on the first target page
>   migration/postcopy: enable random order target page arrival
>   migration/postcopy: enable compress during postcopy
> 
>  migration/migration.c | 11 
>  migration/ram.c   | 65 ++-
>  2 files changed, 45 insertions(+), 31 deletions(-)
> 
> -- 
> 2.17.1
> 
--
Dr. David Alan Gilbert / dgilb...@redhat.com / Manchester, UK




Re: [PATCH 0/6] migration/postcopy: enable compress during postcopy

2019-10-18 Thread Wei Yang
On Fri, Oct 18, 2019 at 09:50:05AM -0700, no-re...@patchew.org wrote:
>Patchew URL: https://patchew.org/QEMU/20191018004850.9888-1-richardw.y...@linux.intel.com/
>
>
>
>Hi,
>
>This series failed the docker-mingw@fedora build test. Please find the
>testing commands and their output below. If you have Docker installed, you
>can probably reproduce it locally.
>
>=== TEST SCRIPT BEGIN ===
>#! /bin/bash
>export ARCH=x86_64
>make docker-image-fedora V=1 NETWORK=1
>time make docker-test-mingw@fedora J=14 NETWORK=1
>=== TEST SCRIPT END ===
>
>  CC  aarch64-softmmu/hw/timer/allwinner-a10-pit.o
>In file included from /tmp/qemu-test/src/migration/ram.c:29:
>/tmp/qemu-test/src/migration/ram.c: In function 'ram_load_postcopy':
>/tmp/qemu-test/src/migration/ram.c:4177:56: error: cast from pointer to 
>integer of different size [-Werror=pointer-to-int-cast]
> void *place_dest = (void *)QEMU_ALIGN_DOWN((unsigned long)host,
>^

Sounds like I should use uintptr_t.

I will change it in the next version.
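
Something like this (a sketch only; the second argument to QEMU_ALIGN_DOWN
is truncated in the log above, so "align" below is just a stand-in for the
existing alignment value):

    void *place_dest = (void *)QEMU_ALIGN_DOWN((uintptr_t)host, align);

On LLP64 targets such as 64-bit mingw, unsigned long is 32 bits while
pointers are 64 bits, which is what triggers -Werror=pointer-to-int-cast;
uintptr_t is guaranteed to be wide enough to hold a pointer value.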

>/tmp/qemu-test/src/include/qemu/osdep.h:268:33: note: in definition of macro 
>'QEMU_ALIGN_DOWN'
> #define QEMU_ALIGN_DOWN(n, m) ((n) / (m) * (m))
> ^
>cc1: all warnings being treated as errors
>make[1]: *** [/tmp/qemu-test/src/rules.mak:69: migration/ram.o] Error 1
>make[1]: *** Waiting for unfinished jobs
>  CC  x86_64-softmmu/target/i386/arch_dump.o
>  CC  aarch64-softmmu/hw/usb/tusb6010.o
>---
>  CC  aarch64-softmmu/hw/arm/xlnx-zynqmp.o
>In file included from /tmp/qemu-test/src/migration/ram.c:29:
>/tmp/qemu-test/src/migration/ram.c: In function 'ram_load_postcopy':
>/tmp/qemu-test/src/migration/ram.c:4177:56: error: cast from pointer to 
>integer of different size [-Werror=pointer-to-int-cast]
> void *place_dest = (void *)QEMU_ALIGN_DOWN((unsigned long)host,
>^
>/tmp/qemu-test/src/include/qemu/osdep.h:268:33: note: in definition of macro 
>'QEMU_ALIGN_DOWN'
> #define QEMU_ALIGN_DOWN(n, m) ((n) / (m) * (m))
> ^
>cc1: all warnings being treated as errors
>make[1]: *** [/tmp/qemu-test/src/rules.mak:69: migration/ram.o] Error 1
>make[1]: *** Waiting for unfinished jobs
>make: *** [Makefile:482: aarch64-softmmu/all] Error 2
>make: *** Waiting for unfinished jobs
>make: *** [Makefile:482: x86_64-softmmu/all] Error 2
>Traceback (most recent call last):
>  File "./tests/docker/docker.py", line 662, in 
>sys.exit(main())
>---
>raise CalledProcessError(retcode, cmd)
>subprocess.CalledProcessError: Command '['sudo', '-n', 'docker', 'run', 
>'--label', 'com.qemu.instance.uuid=90570434880344249cff701baa188163', '-u', 
>'1001', '--security-opt', 'seccomp=unconfined', '--rm', '-e', 'TARGET_LIST=', 
>'-e', 'EXTRA_CONFIGURE_OPTS=', '-e', 'V=', '-e', 'J=14', '-e', 'DEBUG=', '-e', 
>'SHOW_ENV=', '-e', 'CCACHE_DIR=/var/tmp/ccache', '-v', 
>'/home/patchew/.cache/qemu-docker-ccache:/var/tmp/ccache:z', '-v', 
>'/var/tmp/patchew-tester-tmp-dh8p6f27/src/docker-src.2019-10-18-12.47.19.4164:/var/tmp/qemu:z,ro',
> 'qemu:fedora', '/var/tmp/qemu/run', 'test-mingw']' returned non-zero exit 
>status 2.
>filter=--filter=label=com.qemu.instance.uuid=90570434880344249cff701baa188163
>make[1]: *** [docker-run] Error 1
>make[1]: Leaving directory `/var/tmp/patchew-tester-tmp-dh8p6f27/src'
>make: *** [docker-run-test-mingw@fedora] Error 2
>
>real    2m45.691s
>user    0m8.390s
>
>
>The full log is available at
>http://patchew.org/logs/20191018004850.9888-1-richardw.y...@linux.intel.com/testing.docker-mingw@fedora/?type=message.
>---
>Email generated automatically by Patchew [https://patchew.org/].
>Please send your feedback to patchew-de...@redhat.com

-- 
Wei Yang
Help you, Help me



Re: [PATCH 0/6] migration/postcopy: enable compress during postcopy

2019-10-18 Thread no-reply
Patchew URL: https://patchew.org/QEMU/20191018004850.9888-1-richardw.y...@linux.intel.com/



Hi,

This series failed the docker-mingw@fedora build test. Please find the
testing commands and their output below. If you have Docker installed, you
can probably reproduce it locally.

=== TEST SCRIPT BEGIN ===
#! /bin/bash
export ARCH=x86_64
make docker-image-fedora V=1 NETWORK=1
time make docker-test-mingw@fedora J=14 NETWORK=1
=== TEST SCRIPT END ===

  CC  aarch64-softmmu/hw/timer/allwinner-a10-pit.o
In file included from /tmp/qemu-test/src/migration/ram.c:29:
/tmp/qemu-test/src/migration/ram.c: In function 'ram_load_postcopy':
/tmp/qemu-test/src/migration/ram.c:4177:56: error: cast from pointer to integer 
of different size [-Werror=pointer-to-int-cast]
 void *place_dest = (void *)QEMU_ALIGN_DOWN((unsigned long)host,
^
/tmp/qemu-test/src/include/qemu/osdep.h:268:33: note: in definition of macro 
'QEMU_ALIGN_DOWN'
 #define QEMU_ALIGN_DOWN(n, m) ((n) / (m) * (m))
 ^
cc1: all warnings being treated as errors
make[1]: *** [/tmp/qemu-test/src/rules.mak:69: migration/ram.o] Error 1
make[1]: *** Waiting for unfinished jobs
  CC  x86_64-softmmu/target/i386/arch_dump.o
  CC  aarch64-softmmu/hw/usb/tusb6010.o
---
  CC  aarch64-softmmu/hw/arm/xlnx-zynqmp.o
In file included from /tmp/qemu-test/src/migration/ram.c:29:
/tmp/qemu-test/src/migration/ram.c: In function 'ram_load_postcopy':
/tmp/qemu-test/src/migration/ram.c:4177:56: error: cast from pointer to integer 
of different size [-Werror=pointer-to-int-cast]
 void *place_dest = (void *)QEMU_ALIGN_DOWN((unsigned long)host,
^
/tmp/qemu-test/src/include/qemu/osdep.h:268:33: note: in definition of macro 
'QEMU_ALIGN_DOWN'
 #define QEMU_ALIGN_DOWN(n, m) ((n) / (m) * (m))
 ^
cc1: all warnings being treated as errors
make[1]: *** [/tmp/qemu-test/src/rules.mak:69: migration/ram.o] Error 1
make[1]: *** Waiting for unfinished jobs
make: *** [Makefile:482: aarch64-softmmu/all] Error 2
make: *** Waiting for unfinished jobs
make: *** [Makefile:482: x86_64-softmmu/all] Error 2
Traceback (most recent call last):
  File "./tests/docker/docker.py", line 662, in 
sys.exit(main())
---
raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['sudo', '-n', 'docker', 'run', 
'--label', 'com.qemu.instance.uuid=90570434880344249cff701baa188163', '-u', 
'1001', '--security-opt', 'seccomp=unconfined', '--rm', '-e', 'TARGET_LIST=', 
'-e', 'EXTRA_CONFIGURE_OPTS=', '-e', 'V=', '-e', 'J=14', '-e', 'DEBUG=', '-e', 
'SHOW_ENV=', '-e', 'CCACHE_DIR=/var/tmp/ccache', '-v', 
'/home/patchew/.cache/qemu-docker-ccache:/var/tmp/ccache:z', '-v', 
'/var/tmp/patchew-tester-tmp-dh8p6f27/src/docker-src.2019-10-18-12.47.19.4164:/var/tmp/qemu:z,ro',
 'qemu:fedora', '/var/tmp/qemu/run', 'test-mingw']' returned non-zero exit 
status 2.
filter=--filter=label=com.qemu.instance.uuid=90570434880344249cff701baa188163
make[1]: *** [docker-run] Error 1
make[1]: Leaving directory `/var/tmp/patchew-tester-tmp-dh8p6f27/src'
make: *** [docker-run-test-mingw@fedora] Error 2

real    2m45.691s
user    0m8.390s


The full log is available at
http://patchew.org/logs/20191018004850.9888-1-richardw.y...@linux.intel.com/testing.docker-mingw@fedora/?type=message.
---
Email generated automatically by Patchew [https://patchew.org/].
Please send your feedback to patchew-de...@redhat.com