Re: [Qemu-devel] [PATCH RFC] migration: set cpu throttle value by workload

2017-02-24 Thread Dr. David Alan Gilbert
* Chao Fan (fanc.f...@cn.fujitsu.com) wrote:
> On Fri, Jan 27, 2017 at 12:07:27PM +, Dr. David Alan Gilbert wrote:
> >* Chao Fan (fanc.f...@cn.fujitsu.com) wrote:
> >> [quoted test setup, results table and "info migrate" output snipped;
> >> identical to Chao Fan's message of 2017-01-17 below]

Re: [Qemu-devel] [PATCH RFC] migration: set cpu throttle value by workload

2017-02-05 Thread Chao Fan
On Fri, Jan 27, 2017 at 12:07:27PM +, Dr. David Alan Gilbert wrote:
>* Chao Fan (fanc.f...@cn.fujitsu.com) wrote:
>> [quoted test setup, results table and "info migrate" output snipped;
>> identical to the message of 2017-01-17 below]

Re: [Qemu-devel] [PATCH RFC] migration: set cpu throttle value by workload

2017-01-27 Thread Dr. David Alan Gilbert
* Chao Fan (fanc.f...@cn.fujitsu.com) wrote:
> [quoted test setup, results table and "info migrate" output snipped;
> identical to the message of 2017-01-17 below]

Re: [Qemu-devel] [PATCH RFC] migration: set cpu throttle value by workload

2017-01-17 Thread Chao Fan
Hi all,

This is a test for this RFC patch.

Start vm as following:
cmdline="./x86_64-softmmu/qemu-system-x86_64 -m 2560 \
-drive if=none,file=/nfs/img/fedora.qcow2,format=qcow2,id=foo \
-netdev tap,id=hn0,queues=1 \
-device virtio-net-pci,id=net-pci0,netdev=hn0 \
-device virtio-blk,drive=foo \
-enable-kvm -M pc -cpu host \
-vnc :3 \
-monitor stdio"

Keep the benchmark program himeno[*] running in the guest (modified from
the original source; the code is in the attached file, built with the
MIDDLE grid size). It is heavy on both CPU and memory. Then migrate the
guest. The source host and target host are connected to the same switch.

"before" means the upstream version, "after" means applying this patch.
"idpr" means "inst_dirty_pages_rate", a new variable in this RFC PATCH.
"count" is "dirty sync count" in "info migrate".
"time" is "total time" in "info migrate".
"ct pct" is "cpu throttle percentage" in "info migrate".

 
|count|     before     |          after          |
|     |time(s) |ct pct |time(s) |  idpr  | ct pct |
|-----|--------|-------|--------|--------|--------|
|  1  |     3  |   0   |     4  |    x   |    0   |
|  2  |    53  |   0   |    53  | 14237  |    0   |
|  3  |    97  |   0   |    95  |  3142  |    0   |
|  4  |   109  |   0   |   105  | 11085  |    0   |
|  5  |   117  |   0   |   113  | 12894  |    0   |
|  6  |   125  |  20   |   121  | 13549  |   67   |
|  7  |   133  |  20   |   130  | 13550  |   67   |
|  8  |   141  |  20   |   136  | 13587  |   67   |
|  9  |   149  |  30   |   144  | 13553  |   99   |
| 10  |   156  |  30   |   152  |  1474  |   99   |
| 11  |   164  |  30   |   152  |  1706  |   99   |
| 12  |   172  |  40   |   153  |     0  |   99   |
| 13  |   180  |  40   |   153  |     0  |    x   |
| 14  |   188  |  40   |        completed        |
| 15  |   195  |  50   |
| 16  |   203  |  50   |
| 17  |   211  |  50   |
| 18  |   219  |  60   |
| 19  |   227  |  60   |
| 20  |   235  |  60   |
| 21  |   242  |  70   |
| 22  |   250  |  70   |
| 23  |   258  |  70   |
| 24  |   266  |  80   |
| 25  |   274  |  80   |
| 26  |   281  |  80   |
| 27  |   289  |  90   |
| 28  |   297  |  90   |
| 29  |   305  |  90   |
| 30  |   315  |  99   |
| 31  |   320  |  99   |
| 32  |   320  |  99   |
| 33  |   321  |  99   |
| 34  |   321  |  99   |
|     completed       |


And the "info migrate" when completed:

before:
capabilities: xbzrle: off rdma-pin-all: off auto-converge: on
zero-blocks: off compress: off events: off postcopy-ram: off x-colo: off 
Migration status: completed
total time: 321091 milliseconds
downtime: 573 milliseconds
setup: 40 milliseconds
transferred ram: 10509346 kbytes
throughput: 268.13 mbps
remaining ram: 0 kbytes
total ram: 2638664 kbytes
duplicate: 362439 pages
skipped: 0 pages
normal: 2621414 pages
normal bytes: 10485656 kbytes
dirty sync count: 34

after:
capabilities: xbzrle: off rdma-pin-all: off auto-converge: on
zero-blocks: off compress: off events: off postcopy-ram: off x-colo: off 
Migration status: completed
total time: 152652 milliseconds
downtime: 290 milliseconds
setup: 47 milliseconds
transferred ram: 4997452 kbytes
throughput: 268.20 mbps
remaining ram: 0 kbytes
total ram: 2638664 kbytes
duplicate: 359598 pages
skipped: 0 pages
normal: 1246136 pages
normal bytes: 4984544 kbytes
dirty sync count: 13

It's clear that the total time is much better (321 s vs 153 s).
The guest began cpu throttling at the 6th dirty sync, but by that point
the guest was already dirtying pages very quickly, so the default
cpu throttle percentages (initial 20, increment 10) are too small for
this workload. I just use (inst_dirty_pages_rate / 200) to calculate the
cpu throttle value. This is an ad-hoc formula, not backed by any theory.

Of course, on the other hand, the higher the cpu throttle percentage, the
more slowly the guest runs. But looking at the results, with this patch
applied the guest spent 23 s with a throttle percentage of 67 (total time
121 s to 144 s) and 9 s at 99 (144 s to completion). In the upstream
version, the guest spent 73 s at throttle percentages of 70, 80 and 90
(dirty sync count 21 to 30, total time 242 s to 315 s) and 6 s at 99
(count 30 to completion, 315 s to 321 s). So I think the impact on guest
performance with my patch is smaller than with the upstream version.
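
As a back-of-the-envelope check (my reading of the numbers, assuming the
existing auto-converge increment logic and its 99 cap, neither of which
this patch changes):

    idpr at the 6th sync   = 13549
    13549 / 200            = 67   -> throttling starts at 67
    next increase          = 67 + 67 = 134, clipped to the 99 cap

which matches the "ct pct" column above: 67 for a few syncs, then 99.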

Any comments will be welcome.

[*]http://accc.riken.jp/en/supercom/himenobmt/

Thanks,

Chao Fan

Re: [Qemu-devel] [PATCH RFC] migration: set cpu throttle value by workload

2017-01-11 Thread Cao jin
Hi,
We have been waiting on this topic for a long time. We are interested in
improving migration performance, and we think a dynamic throttle value,
rather than a fixed increment, could help in certain conditions such as
heavy workloads. Your comments would be important to us; thanks in
advance.

-- 
Sincerely,
Cao jin

On 12/29/2016 05:16 PM, Chao Fan wrote:
> [the original RFC patch (commit message and diff) is quoted here in
> full; snipped, see Chao Fan's 2016-12-29 posting below]

Re: [Qemu-devel] [PATCH RFC] migration: set cpu throttle value by workload

2016-12-29 Thread Chao Fan
Hi all,

There are a few things to explain about this RFC patch.

On Thu, Dec 29, 2016 at 05:16:19PM +0800, Chao Fan wrote:
>This RFC PATCH is my demo about the new feature, here is my POC mail:
>https://lists.gnu.org/archive/html/qemu-devel/2016-12/msg00646.html
>
>When migration_bitmap_sync executed, get the time and read bitmap to
>calculate how many dirty pages born between two sync.
>Use inst_dirty_pages / (time_now - time_prev) / ram_size to get
>inst_dirty_pages_rate. Then map from the inst_dirty_pages_rate
>to cpu throttle value. I have no idea how to map it. So I just do
>that in a simple way. The mapping way is just a guess and should
>be improved.
>
>This is just a demo. There are more methods.
>1.In another file, calculate the inst_dirty_pages_rate every second
>  or two seconds or another fixed time. Then set the cpu throttle
>  value according to the inst_dirty_pages_rate
>2.When inst_dirty_pages_rate gets a threshold, begin cpu throttle
>  and set the throttle value.
>
>Any comments will be welcome.
>
>Signed-off-by: Chao Fan 
>---
> include/qemu/bitmap.h | 17 +
> migration/ram.c   | 49 +
> 2 files changed, 66 insertions(+)
>
>diff --git a/include/qemu/bitmap.h b/include/qemu/bitmap.h
>index 63ea2d0..dc99f9b 100644
>--- a/include/qemu/bitmap.h
>+++ b/include/qemu/bitmap.h
>@@ -235,4 +235,21 @@ static inline unsigned long *bitmap_zero_extend(unsigned long *old,
> return new;
> }
> 
>+static inline unsigned long bitmap_weight(const unsigned long *src, long nbits)

This function is imported from the kernel; it counts the set bits, which
here gives the number of dirty pages.

>+{
>+unsigned long i, count = 0, nlong = nbits / BITS_PER_LONG;
>+
>+if (small_nbits(nbits)) {
>+return hweight_long(*src & BITMAP_LAST_WORD_MASK(nbits));
>+}
>+for (i = 0; i < nlong; i++) {
>+count += hweight_long(src[i]);
>+}
>+if (nbits % BITS_PER_LONG) {
>+count += hweight_long(src[i] & BITMAP_LAST_WORD_MASK(nbits));
>+}
>+
>+return count;
>+}
>+
> #endif /* BITMAP_H */
>diff --git a/migration/ram.c b/migration/ram.c
>index a1c8089..f96e3e3 100644
>--- a/migration/ram.c
>+++ b/migration/ram.c
>@@ -44,6 +44,7 @@
> #include "exec/ram_addr.h"
> #include "qemu/rcu_queue.h"
> #include "migration/colo.h"
>+#include "hw/boards.h"
> 
> #ifdef DEBUG_MIGRATION_RAM
> #define DPRINTF(fmt, ...) \
>@@ -599,6 +600,9 @@ static int64_t num_dirty_pages_period;
> static uint64_t xbzrle_cache_miss_prev;
> static uint64_t iterations_prev;
> 
>+static int64_t dirty_pages_time_prev;
>+static int64_t dirty_pages_time_now;
>+
> static void migration_bitmap_sync_init(void)
> {
> start_time = 0;
>@@ -606,6 +610,49 @@ static void migration_bitmap_sync_init(void)
> num_dirty_pages_period = 0;
> xbzrle_cache_miss_prev = 0;
> iterations_prev = 0;
>+
>+dirty_pages_time_prev = 0;
>+dirty_pages_time_now = 0;
>+}
>+
>+static void migration_inst_rate(void)
>+{
>+RAMBlock *block;
>+MigrationState *s = migrate_get_current();
>+int64_t inst_dirty_pages_rate, inst_dirty_pages = 0;
>+int64_t i;
>+unsigned long *num;
>+unsigned long len = 0;
>+
>+dirty_pages_time_now = qemu_clock_get_ms(QEMU_CLOCK_REALTIME);

We do this whenever a bitmap sync is executed. Sampling the dirty pages
and the time every second, or at some other fixed interval, would also
work, but I have no idea which is better. A rough sketch of the
fixed-interval variant is below.
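
For reference, a minimal sketch of that fixed-interval variant could look
like this (hypothetical code, not part of this patch; it assumes it sits
in migration/ram.c next to migration_inst_rate() and that calling it from
a timer callback is safe):

static QEMUTimer *inst_rate_timer;

static void inst_rate_timer_cb(void *opaque)
{
    /* Same calculation as in this patch, just driven by a 1 s timer
     * instead of by migration_bitmap_sync(). */
    migration_inst_rate();
    timer_mod(inst_rate_timer,
              qemu_clock_get_ms(QEMU_CLOCK_REALTIME) + 1000);
}

static void inst_rate_timer_start(void)
{
    inst_rate_timer = timer_new_ms(QEMU_CLOCK_REALTIME,
                                   inst_rate_timer_cb, NULL);
    timer_mod(inst_rate_timer,
              qemu_clock_get_ms(QEMU_CLOCK_REALTIME) + 1000);
}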

>+if (dirty_pages_time_prev != 0) {
>+rcu_read_lock();
>+DirtyMemoryBlocks *blocks = atomic_rcu_read(
>+ &ram_list.dirty_memory[DIRTY_MEMORY_MIGRATION]);
>+QLIST_FOREACH_RCU(block, &ram_list.blocks, next) {
>+if (len == 0) {
>+len = block->offset;
>+}
>+len += block->used_length;
>+}
>+ram_addr_t idx = (len >> TARGET_PAGE_BITS) / DIRTY_MEMORY_BLOCK_SIZE;
>+if (((len >> TARGET_PAGE_BITS) % DIRTY_MEMORY_BLOCK_SIZE) != 0) {
>+idx++;
>+}
>+for (i = 0; i < idx; i++) {
>+num = blocks->blocks[i];
>+inst_dirty_pages += bitmap_weight(num, DIRTY_MEMORY_BLOCK_SIZE);
>+}
>+rcu_read_unlock();
>+
>+inst_dirty_pages_rate = inst_dirty_pages * TARGET_PAGE_SIZE *
>+1024 * 1024 * 1000 /

The time we get is in milliseconds, so the page count is multiplied by
1000 to convert the rate to per-second.

The two *1024 factors are only there to keep the magnitude up; otherwise
inst_dirty_pages is so small that the integer rate would be 0.
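
For example, plugging in rough numbers from the test above (my own
back-of-the-envelope figures, not measured output):

    suppose ~70000 pages are dirtied between two syncs ~8 s apart:
    70000 * 4096 (TARGET_PAGE_SIZE)   ~= 273 MiB dirtied
    273 / 2560 (ram_size in MiB)      ~= 0.107 of guest RAM
    0.107 / 8                         ~= 0.0134 of RAM per second
    0.0134 * 1024 * 1024              ~= 14000, about the idpr values
                                         shown in the table above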

>+(dirty_pages_time_now - dirty_pages_time_prev) /
>+current_machine->ram_size;
>+s->parameters.cpu_throttle_initial = inst_dirty_pages_rate / 200;
>+s->parameters.cpu_throttle_increment = inst_dirty_pages_rate / 200;

The 200 here is just a guess, because I don't know how to map
inst_dirty_pages_rate to a throttle value, so I simply filled in a number.

I think there are better ways to do this mapping; one possibility is
sketched below.
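
If it helps the discussion, one possibility (again only a guess on my
side, not part of this patch) is to clamp the mapped value between the
current default and the 99 cap that shows up in the "ct pct" column:

static int map_idpr_to_throttle(int64_t inst_dirty_pages_rate)
{
    int64_t pct = inst_dirty_pages_rate / 200;

    /* Never start below the current cpu_throttle_initial default (20),
     * and never ask for more than the 99 observed as the cap above. */
    if (pct < 20) {
        pct = 20;
    }
    if (pct > 99) {
        pct = 99;
    }
    return pct;
}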

[Qemu-devel] [PATCH RFC] migration: set cpu throttle value by workload

2016-12-29 Thread Chao Fan
This RFC PATCH is my demo of the new feature; here is my POC mail:
https://lists.gnu.org/archive/html/qemu-devel/2016-12/msg00646.html

When migration_bitmap_sync is executed, record the time and read the
bitmap to calculate how many dirty pages were born between the two syncs.
Use inst_dirty_pages / (time_now - time_prev) / ram_size to get
inst_dirty_pages_rate, then map inst_dirty_pages_rate to a cpu throttle
value. I have no idea how to do that mapping, so I just do it in a
simple way; the mapping is only a guess and should be improved.
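
Restating the calculation with units (my annotation; the code itself is
in the diff below):

    inst_dirty_pages * TARGET_PAGE_SIZE   -> bytes dirtied in the period
    * 1000 / (time_now - time_prev)       -> bytes per second (times are in ms)
    / ram_size                            -> fraction of guest RAM per second
    * 1024 * 1024                         -> scaled up so integer division
                                             does not truncate it to 0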

This is just a demo; there are other possible methods.
1. In another file, calculate inst_dirty_pages_rate every second, every
   two seconds, or at some other fixed interval, then set the cpu
   throttle value according to inst_dirty_pages_rate.
2. When inst_dirty_pages_rate crosses a threshold, begin cpu throttling
   and set the throttle value.

Any comments will be welcome.

Signed-off-by: Chao Fan 
---
 include/qemu/bitmap.h | 17 +++++++++++++++++
 migration/ram.c       | 49 +++++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 66 insertions(+)

diff --git a/include/qemu/bitmap.h b/include/qemu/bitmap.h
index 63ea2d0..dc99f9b 100644
--- a/include/qemu/bitmap.h
+++ b/include/qemu/bitmap.h
@@ -235,4 +235,21 @@ static inline unsigned long *bitmap_zero_extend(unsigned long *old,
     return new;
 }
 
+static inline unsigned long bitmap_weight(const unsigned long *src, long nbits)
+{
+    unsigned long i, count = 0, nlong = nbits / BITS_PER_LONG;
+
+    if (small_nbits(nbits)) {
+        return hweight_long(*src & BITMAP_LAST_WORD_MASK(nbits));
+    }
+    for (i = 0; i < nlong; i++) {
+        count += hweight_long(src[i]);
+    }
+    if (nbits % BITS_PER_LONG) {
+        count += hweight_long(src[i] & BITMAP_LAST_WORD_MASK(nbits));
+    }
+
+    return count;
+}
+
 #endif /* BITMAP_H */
diff --git a/migration/ram.c b/migration/ram.c
index a1c8089..f96e3e3 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -44,6 +44,7 @@
 #include "exec/ram_addr.h"
 #include "qemu/rcu_queue.h"
 #include "migration/colo.h"
+#include "hw/boards.h"
 
 #ifdef DEBUG_MIGRATION_RAM
 #define DPRINTF(fmt, ...) \
@@ -599,6 +600,9 @@ static int64_t num_dirty_pages_period;
 static uint64_t xbzrle_cache_miss_prev;
 static uint64_t iterations_prev;
 
+static int64_t dirty_pages_time_prev;
+static int64_t dirty_pages_time_now;
+
 static void migration_bitmap_sync_init(void)
 {
     start_time = 0;
@@ -606,6 +610,49 @@ static void migration_bitmap_sync_init(void)
     num_dirty_pages_period = 0;
     xbzrle_cache_miss_prev = 0;
     iterations_prev = 0;
+
+    dirty_pages_time_prev = 0;
+    dirty_pages_time_now = 0;
+}
+
+static void migration_inst_rate(void)
+{
+    RAMBlock *block;
+    MigrationState *s = migrate_get_current();
+    int64_t inst_dirty_pages_rate, inst_dirty_pages = 0;
+    int64_t i;
+    unsigned long *num;
+    unsigned long len = 0;
+
+    dirty_pages_time_now = qemu_clock_get_ms(QEMU_CLOCK_REALTIME);
+    if (dirty_pages_time_prev != 0) {
+        rcu_read_lock();
+        DirtyMemoryBlocks *blocks = atomic_rcu_read(
+                 &ram_list.dirty_memory[DIRTY_MEMORY_MIGRATION]);
+        QLIST_FOREACH_RCU(block, &ram_list.blocks, next) {
+            if (len == 0) {
+                len = block->offset;
+            }
+            len += block->used_length;
+        }
+        ram_addr_t idx = (len >> TARGET_PAGE_BITS) / DIRTY_MEMORY_BLOCK_SIZE;
+        if (((len >> TARGET_PAGE_BITS) % DIRTY_MEMORY_BLOCK_SIZE) != 0) {
+            idx++;
+        }
+        for (i = 0; i < idx; i++) {
+            num = blocks->blocks[i];
+            inst_dirty_pages += bitmap_weight(num, DIRTY_MEMORY_BLOCK_SIZE);
+        }
+        rcu_read_unlock();
+
+        inst_dirty_pages_rate = inst_dirty_pages * TARGET_PAGE_SIZE *
+                                1024 * 1024 * 1000 /
+                                (dirty_pages_time_now - dirty_pages_time_prev) /
+                                current_machine->ram_size;
+        s->parameters.cpu_throttle_initial = inst_dirty_pages_rate / 200;
+        s->parameters.cpu_throttle_increment = inst_dirty_pages_rate / 200;
+    }
+    dirty_pages_time_prev = dirty_pages_time_now;
 }
 
 static void migration_bitmap_sync(void)
@@ -629,6 +676,8 @@ static void migration_bitmap_sync(void)
     trace_migration_bitmap_sync_start();
     memory_global_dirty_log_sync();
 
+    migration_inst_rate();
+
     qemu_mutex_lock(&migration_bitmap_mutex);
     rcu_read_lock();
     QLIST_FOREACH_RCU(block, &ram_list.blocks, next) {
-- 
2.9.3