Re: [Qemu-block] Block Migration and CPU throttling

2018-02-16 Thread Dr. David Alan Gilbert
* Peter Lieven (p...@kamp.de) wrote:
> 
> > Am 07.02.2018 um 19:29 schrieb Dr. David Alan Gilbert :
> > 
> > * Peter Lieven (p...@kamp.de) wrote:
> >> Am 12.12.2017 um 18:05 schrieb Dr. David Alan Gilbert:
> >>> * Peter Lieven (p...@kamp.de) wrote:
>  Am 21.09.2017 um 14:36 schrieb Dr. David Alan Gilbert:
> > * Peter Lieven (p...@kamp.de) wrote:
> >> Am 19.09.2017 um 16:41 schrieb Dr. David Alan Gilbert:
> >>> * Peter Lieven (p...@kamp.de) wrote:
>  Am 19.09.2017 um 16:38 schrieb Dr. David Alan Gilbert:
> > * Peter Lieven (p...@kamp.de) wrote:
> >> Hi,
> >> 
> >> I just noticed that CPU throttling and Block Migration don't work 
> >> together very well.
> >> During block migration the throttling heuristic detects that we
> >> obviously make no progress in RAM transfer. But the reason is the
> >> running block migration, not a too-high dirty-page rate.
> >> 
> >> The result is that any VM is throttled by 99% during block
> >> migration.
> > Hmm that's unfortunate; do you have a bandwidth set lower than your
> > actual network connection? I'm just wondering if it's actually going
> > between the block and RAM iterative sections or getting stuck in one.
>  It also happens if source and dest are on the same machine and speed
>  is set to 100G.
> >>> But does it happen if they're not and the speed is set low?
> >> Yes, it does. I noticed it in our test environment between different
> >> nodes with a 10G link in between. But it's totally clear why it
> >> happens. During block migration we transfer all dirty memory pages in
> >> each round (if there is moderate memory load), but all dirty pages
> >> are obviously more than 50% of the RAM transferred in that round;
> >> more exactly, 100%. And the current logic triggers on this condition.
> >> 
> >> I think I will go forward and send a patch which disables auto 
> >> converge during
> >> block migration bulk stage.
> > Yes, that's fair;  it probably would also make sense to throttle the RAM
> > migration during the block migration bulk stage, since the chances are
> > it's not going to get far.  (I think in the nbd setup, the main
> > migration process isn't started until the end of bulk).
>  Catching up with the idea of delaying ram migration until block bulk has 
>  completed.
>  What do you think is the easiest way to achieve this?
> >>> 
> >>> 
> >>> I think the answer depends on whether we think this is a 'special' or we
> >>> need a new general purpose mechanism.
> >>> 
> >>> If it was really general then we'd probably want to split the iterative
> >>> stage in two somehow, and only do RAM in the second half.
> >>> 
> >>> But I'm not sure it's worth it; I suspect the easiest way is:
> >>> 
> >>>a) Add a counter in migration/ram.c or in the RAM state somewhere
> >>>b) Make ram_save_inhibit increment the counter
> >>>c) Check the counter at the head of ram_save_iterate and just exit
> >>>  if it's non-0
> >>>d) Call ram_save_inhibit from block_save_setup
> >>>e) Then release it when you've finished the bulk stage
> >>> 
> >>> Make sure you still count the RAM in the pending totals, otherwise
> >>> migration might think it's finished a bit early.
> >> 
> >> Is there any pitfall I don't see, or is it as easy as this?
> > 
Hmm, looks promising, doesn't it; might need an include or two tidied
> > up, but looks worth a try.   Just be careful that there are no cases
> > where block migration can't transfer data in that state, otherwise we'll
> > keep coming back to here and spewing empty sections.
> 
> I already tested it and it actually works.

OK.

> What would you expect to be cleaned up before it would be a proper patch?

It's simple enough so not much; maybe add a trace for when it does the
exit just to make it easier to watch; I hadn't realised ram.c already
included migration/block.h for the !bulk case.

> Are there any implications with RDMA

Hmm; don't think so; it's mainly concerned with how the RAM is
transferred; and the ram_control_* hooks are called after your test,
and on the load side only once it's read the flags and got a block.

> and/or post copy migration?

Again I don't think so;  I think block migration always shows the
outstanding amount of block storage, and so it won't flip into postcopy
mode until the bulk of block migration is done.
Also the 'ram_save_complete' does its own scan of blocks and doesn't
share the iterate code, so it won't be affected by your change.

> Is block migration possible at all with those?

Yes I think so in both cases; with RDMA I'm pretty sure it is, and I
think people have had it running with postcopy as well.   The postcopy
case isn't great (because the actual block storage isn't 

Re: [Qemu-block] Block Migration and CPU throttling

2018-02-07 Thread Peter Lieven

> Am 07.02.2018 um 19:29 schrieb Dr. David Alan Gilbert :
> 
> * Peter Lieven (p...@kamp.de) wrote:
>> Am 12.12.2017 um 18:05 schrieb Dr. David Alan Gilbert:
>>> * Peter Lieven (p...@kamp.de) wrote:
 Am 21.09.2017 um 14:36 schrieb Dr. David Alan Gilbert:
> * Peter Lieven (p...@kamp.de) wrote:
>> Am 19.09.2017 um 16:41 schrieb Dr. David Alan Gilbert:
>>> * Peter Lieven (p...@kamp.de) wrote:
 Am 19.09.2017 um 16:38 schrieb Dr. David Alan Gilbert:
> * Peter Lieven (p...@kamp.de) wrote:
>> Hi,
>> 
>> I just noticed that CPU throttling and Block Migration don't work 
>> together very well.
>> During block migration the throttling heuristic detects that we
>> obviously make no progress in RAM transfer. But the reason is the
>> running block migration, not a too-high dirty-page rate.
>> 
>> The result is that any VM is throttled by 99% during block migration.
> Hmm that's unfortunate; do you have a bandwidth set lower than your
> actual network connection? I'm just wondering if it's actually going
> between the block and RAM iterative sections or getting stuck in one.
 It also happens if source and dest are on the same machine and speed
 is set to 100G.
>>> But does it happen if they're not and the speed is set low?
>> Yes, it does. I noticed it in our test environment between different
>> nodes with a 10G link in between. But it's totally clear why it
>> happens. During block migration we transfer all dirty memory pages in
>> each round (if there is moderate memory load), but all dirty pages
>> are obviously more than 50% of the RAM transferred in that round;
>> more exactly, 100%. And the current logic triggers on this condition.
>> 
>> I think I will go forward and send a patch which disables auto converge 
>> during
>> block migration bulk stage.
> Yes, that's fair;  it probably would also make sense to throttle the RAM
> migration during the block migration bulk stage, since the chances are
> it's not going to get far.  (I think in the nbd setup, the main
> migration process isn't started until the end of bulk).
 Catching up with the idea of delaying ram migration until block bulk has 
 completed.
 What do you think is the easiest way to achieve this?
>>> 
>>> 
> >>> I think the answer depends on whether we think this is a 'special' or we
>>> need a new general purpose mechanism.
>>> 
>>> If it was really general then we'd probably want to split the iterative
>>> stage in two somehow, and only do RAM in the second half.
>>> 
>>> But I'm not sure it's worth it; I suspect the easiest way is:
>>> 
>>>a) Add a counter in migration/ram.c or in the RAM state somewhere
>>>b) Make ram_save_inhibit increment the counter
>>>c) Check the counter at the head of ram_save_iterate and just exit
>>>  if it's none 0
>>>d) Call ram_save_inhibit from block_save_setup
>>>e) Then release it when you've finished the bulk stage
>>> 
>>> Make sure you still count the RAM in the pending totals, otherwise
>>> migration might think it's finished a bit early.
>> 
>> Is there any pitfall I don't see, or is it as easy as this?
> 
Hmm, looks promising, doesn't it; might need an include or two tidied
> up, but looks worth a try.   Just be careful that there are no cases
> where block migration can't transfer data in that state, otherwise we'll
> keep coming back to here and spewing empty sections.

I already tested it and it actually works.

What would you expect to be cleaned up before it would be a proper patch?

Are there any implications with RDMA and/or post copy migration?
Is block migration possible at all with those?

Peter



Re: [Qemu-block] Block Migration and CPU throttling

2018-02-07 Thread Dr. David Alan Gilbert
* Peter Lieven (p...@kamp.de) wrote:
> Am 12.12.2017 um 18:05 schrieb Dr. David Alan Gilbert:
> > * Peter Lieven (p...@kamp.de) wrote:
> > > Am 21.09.2017 um 14:36 schrieb Dr. David Alan Gilbert:
> > > > * Peter Lieven (p...@kamp.de) wrote:
> > > > > Am 19.09.2017 um 16:41 schrieb Dr. David Alan Gilbert:
> > > > > > * Peter Lieven (p...@kamp.de) wrote:
> > > > > > > Am 19.09.2017 um 16:38 schrieb Dr. David Alan Gilbert:
> > > > > > > > * Peter Lieven (p...@kamp.de) wrote:
> > > > > > > > > Hi,
> > > > > > > > > 
> > > > > > > > > I just noticed that CPU throttling and Block Migration don't 
> > > > > > > > > work together very well.
> > > > > > > > > During block migration the throttling heuristic detects that
> > > > > > > > > we obviously make no progress in RAM transfer. But the reason
> > > > > > > > > is the running block migration, not a too-high dirty-page
> > > > > > > > > rate.
> > > > > > > > > 
> > > > > > > > > The result is that any VM is throttled by 99% during block
> > > > > > > > > migration.
> > > > > > > > Hmm that's unfortunate; do you have a bandwidth set lower than
> > > > > > > > your actual network connection? I'm just wondering if it's
> > > > > > > > actually going between the block and RAM iterative sections or
> > > > > > > > getting stuck in one.
> > > > > > > It also happens if source and dest are on the same machine and
> > > > > > > speed is set to 100G.
> > > > > > But does it happen if they're not and the speed is set low?
> > > > > Yes, it does. I noticed it in our test environment between different
> > > > > nodes with a 10G link in between. But it's totally clear why it
> > > > > happens. During block migration we transfer all dirty memory pages
> > > > > in each round (if there is moderate memory load), but all dirty
> > > > > pages are obviously more than 50% of the RAM transferred in that
> > > > > round; more exactly, 100%. And the current logic triggers on this
> > > > > condition.
> > > > > 
> > > > > I think I will go forward and send a patch which disables auto 
> > > > > converge during
> > > > > block migration bulk stage.
> > > > Yes, that's fair;  it probably would also make sense to throttle the RAM
> > > > migration during the block migration bulk stage, since the chances are
> > > > it's not going to get far.  (I think in the nbd setup, the main
> > > > migration process isn't started until the end of bulk).
> > > Catching up with the idea of delaying ram migration until block bulk has 
> > > completed.
> > > What do you think is the easiest way to achieve this?
> > 
> > 
> > I think the answer depends on whether we think this is a 'special' or we
> > need a new general purpose mechanism.
> > 
> > If it was really general then we'd probably want to split the iterative
> > stage in two somehow, and only do RAM in the second half.
> > 
> > But I'm not sure it's worth it; I suspect the easiest way is:
> > 
> > a) Add a counter in migration/ram.c or in the RAM state somewhere
> > b) Make ram_save_inhibit increment the counter
> > c) Check the counter at the head of ram_save_iterate and just exit
> >   if it's non-0
> > d) Call ram_save_inhibit from block_save_setup
> > e) Then release it when you've finished the bulk stage
> > 
> > Make sure you still count the RAM in the pending totals, otherwise
> > migration might think it's finished a bit early.
> 
> Is there any pitfall I don't see, or is it as easy as this?

Hmm, looks promising, doesn't it; might need an include or two tidied
up, but looks worth a try.   Just be careful that there are no cases
where block migration can't transfer data in that state, otherwise we'll
keep coming back to here and spewing empty sections.

Dave

> diff --git a/migration/ram.c b/migration/ram.c
> index cb1950f..c67bcf1 100644
> --- a/migration/ram.c
> +++ b/migration/ram.c
> @@ -2255,6 +2255,13 @@ static int ram_save_iterate(QEMUFile *f, void *opaque)
>      int64_t t0;
>      int done = 0;
> 
> +    if (blk_mig_bulk_active()) {
> +        /* Avoid transferring RAM during bulk phase of block migration as
> +         * the bulk phase will usually take a lot of time and transferring
> +         * RAM updates again and again is pointless. */
> +        goto out;
> +    }
> +
>      rcu_read_lock();
>      if (ram_list.version != rs->last_version) {
>          ram_state_reset(rs);
> @@ -2301,6 +2308,7 @@ static int ram_save_iterate(QEMUFile *f, void *opaque)
>       */
>      ram_control_after_iterate(f, RAM_CONTROL_ROUND);
> 
> +out:
>      qemu_put_be64(f, RAM_SAVE_FLAG_EOS);
>      ram_counters.transferred += 8;
> 
> 
> Peter
> 
--
Dr. David Alan Gilbert / dgilb...@redhat.com / Manchester, UK



Re: [Qemu-block] Block Migration and CPU throttling

2018-02-06 Thread Peter Lieven

Am 12.12.2017 um 18:05 schrieb Dr. David Alan Gilbert:

* Peter Lieven (p...@kamp.de) wrote:

Am 21.09.2017 um 14:36 schrieb Dr. David Alan Gilbert:

* Peter Lieven (p...@kamp.de) wrote:

Am 19.09.2017 um 16:41 schrieb Dr. David Alan Gilbert:

* Peter Lieven (p...@kamp.de) wrote:

Am 19.09.2017 um 16:38 schrieb Dr. David Alan Gilbert:

* Peter Lieven (p...@kamp.de) wrote:

Hi,

I just noticed that CPU throttling and Block Migration don't work together very 
well.
During block migration the throttling heuristic detects that we obviously
make no progress in RAM transfer. But the reason is the running block
migration, not a too-high dirty-page rate.

The result is that any VM is throttled by 99% during block migration.

Hmm that's unfortunate; do you have a bandwidth set lower than your
actual network connection? I'm just wondering if it's actually going
between the block and RAM iterative sections or getting stuck in one.

It also happens if source and dest are on the same machine and speed is
set to 100G.

But does it happen if they're not and the speed is set low?

Yes, it does. I noticed it in our test environment between different
nodes with a 10G link in between. But it's totally clear why it happens.
During block migration we transfer all dirty memory pages in each round
(if there is moderate memory load), but all dirty pages are obviously
more than 50% of the RAM transferred in that round; more exactly, 100%.
And the current logic triggers on this condition.

I think I will go forward and send a patch which disables auto converge during
block migration bulk stage.

Yes, that's fair;  it probably would also make sense to throttle the RAM
migration during the block migration bulk stage, since the chances are
it's not going to get far.  (I think in the nbd setup, the main
migration process isn't started until the end of bulk).

Catching up with the idea of delaying ram migration until block bulk has 
completed.
What do you think is the easiest way to achieve this?



I think the answer depends on whether we think this is a 'special' or we
need a new general purpose mechanism.

If it was really general then we'd probably want to split the iterative
stage in two somehow, and only do RAM in the second half.

But I'm not sure it's worth it; I suspect the easiest way is:

a) Add a counter in migration/ram.c or in the RAM state somewhere
b) Make ram_save_inhibit increment the counter
c) Check the counter at the head of ram_save_iterate and just exit
  if it's non-0
d) Call ram_save_inhibit from block_save_setup
e) Then release it when you've finished the bulk stage

Make sure you still count the RAM in the pending totals, otherwise
migration might think it's finished a bit early.


Is there any pitfall I don't see, or is it as easy as this?

diff --git a/migration/ram.c b/migration/ram.c
index cb1950f..c67bcf1 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -2255,6 +2255,13 @@ static int ram_save_iterate(QEMUFile *f, void *opaque)
     int64_t t0;
     int done = 0;

+    if (blk_mig_bulk_active()) {
+        /* Avoid transferring RAM during bulk phase of block migration as
+         * the bulk phase will usually take a lot of time and transferring
+         * RAM updates again and again is pointless. */
+        goto out;
+    }
+
     rcu_read_lock();
     if (ram_list.version != rs->last_version) {
         ram_state_reset(rs);
@@ -2301,6 +2308,7 @@ static int ram_save_iterate(QEMUFile *f, void *opaque)
      */
     ram_control_after_iterate(f, RAM_CONTROL_ROUND);

+out:
     qemu_put_be64(f, RAM_SAVE_FLAG_EOS);
     ram_counters.transferred += 8;


Peter




Re: [Qemu-block] Block Migration and CPU throttling

2017-12-13 Thread Peter Lieven
Am 12.12.2017 um 18:05 schrieb Dr. David Alan Gilbert:
> * Peter Lieven (p...@kamp.de) wrote:
>> Am 21.09.2017 um 14:36 schrieb Dr. David Alan Gilbert:
>>> * Peter Lieven (p...@kamp.de) wrote:
 Am 19.09.2017 um 16:41 schrieb Dr. David Alan Gilbert:
> * Peter Lieven (p...@kamp.de) wrote:
>> Am 19.09.2017 um 16:38 schrieb Dr. David Alan Gilbert:
>>> * Peter Lieven (p...@kamp.de) wrote:
 Hi,

 I just noticed that CPU throttling and Block Migration don't work 
 together very well.
 During block migration the throttling heuristic detects that we
 obviously make no progress in RAM transfer. But the reason is the
 running block migration, not a too-high dirty-page rate.

 The result is that any VM is throttled by 99% during block migration.
>>> Hmm that's unfortunate; do you have a bandwidth set lower than your
>>> actual network connection? I'm just wondering if it's actually going
>>> between the block and RAM iterative sections or getting stuck in one.
>> It also happens if source and dest are on the same machine and speed
>> is set to 100G.
> But does it happen if they're not and the speed is set low?
 Yes, it does. I noticed it in our test environment between different
 nodes with a 10G link in between. But it's totally clear why it
 happens. During block migration we transfer all dirty memory pages in
 each round (if there is moderate memory load), but all dirty pages are
 obviously more than 50% of the RAM transferred in that round; more
 exactly, 100%. And the current logic triggers on this condition.

 I think I will go forward and send a patch which disables auto converge 
 during
 block migration bulk stage.
>>> Yes, that's fair;  it probably would also make sense to throttle the RAM
>>> migration during the block migration bulk stage, since the chances are
>>> it's not going to get far.  (I think in the nbd setup, the main
>>> migration process isn't started until the end of bulk).
>> Catching up with the idea of delaying ram migration until block bulk has 
>> completed.
>> What do you think is the easiest way to achieve this?
> 
>
> I think the answer depends on whether we think this is a 'special' or we
> need a new general purpose mechanism.
>
> If it was really general then we'd probably want to split the iterative
> stage in two somehow, and only do RAM in the second half.
>
> But I'm not sure it's worth it; I suspect the easiest way is:
>
>a) Add a counter in migration/ram.c or in the RAM state somewhere
>b) Make ram_save_inhibit increment the counter
>c) Check the counter at the head of ram_save_iterate and just exit
>  if it's non-0
>d) Call ram_save_inhibit from block_save_setup
>e) Then release it when you've finished the bulk stage
>
> Make sure you still count the RAM in the pending totals, otherwise
> migration might think it's finished a bit early.

I will look into this for 2.12. Thanks for the cookbook.

Peter




Re: [Qemu-block] Block Migration and CPU throttling

2017-12-12 Thread Dr. David Alan Gilbert
* Peter Lieven (p...@kamp.de) wrote:
> Am 21.09.2017 um 14:36 schrieb Dr. David Alan Gilbert:
> > * Peter Lieven (p...@kamp.de) wrote:
> > > Am 19.09.2017 um 16:41 schrieb Dr. David Alan Gilbert:
> > > > * Peter Lieven (p...@kamp.de) wrote:
> > > > > Am 19.09.2017 um 16:38 schrieb Dr. David Alan Gilbert:
> > > > > > * Peter Lieven (p...@kamp.de) wrote:
> > > > > > > Hi,
> > > > > > > 
> > > > > > > I just noticed that CPU throttling and Block Migration don't work 
> > > > > > > together very well.
> > > > > > > During block migration the throttling heuristic detects that we
> > > > > > > obviously make no progress in RAM transfer. But the reason is
> > > > > > > the running block migration, not a too-high dirty-page rate.
> > > > > > > 
> > > > > > > The result is that any VM is throttled by 99% during block
> > > > > > > migration.
> > > > > > Hmm that's unfortunate; do you have a bandwidth set lower than your
> > > > > > actual network connection? I'm just wondering if it's actually going
> > > > > > between the block and RAM iterative sections or getting stuck in one.
> > > > > It also happens if source and dest are on the same machine and speed
> > > > > is set to 100G.
> > > > But does it happen if they're not and the speed is set low?
> > > Yes, it does. I noticed it in our test environment between different
> > > nodes with a 10G link in between. But it's totally clear why it
> > > happens. During block migration we transfer all dirty memory pages in
> > > each round (if there is moderate memory load), but all dirty pages are
> > > obviously more than 50% of the RAM transferred in that round; more
> > > exactly, 100%. And the current logic triggers on this condition.
> > > 
> > > I think I will go forward and send a patch which disables auto converge 
> > > during
> > > block migration bulk stage.
> > Yes, that's fair;  it probably would also make sense to throttle the RAM
> > migration during the block migration bulk stage, since the chances are
> > it's not going to get far.  (I think in the nbd setup, the main
> > migration process isn't started until the end of bulk).
> 
> Catching up with the idea of delaying ram migration until block bulk has 
> completed.
> What do you think is the easiest way to achieve this?



I think the answer depends on whether we think this is a 'special' or we
need a new general purpose mechanism.

If it was really general then we'd probably want to split the iterative
stage in two somehow, and only do RAM in the second half.

But I'm not sure it's worth it; I suspect the easiest way is:

   a) Add a counter in migration/ram.c or in the RAM state somewhere
   b) Make ram_save_inhibit increment the counter
   c) Check the counter at the head of ram_save_iterate and just exit
  if it's non-0
   d) Call ram_save_inhibit from block_save_setup
   e) Then release it when you've finished the bulk stage

Make sure you still count the RAM in the pending totals, otherwise
migration might think it's finished a bit early.

Dave

> Peter
--
Dr. David Alan Gilbert / dgilb...@redhat.com / Manchester, UK



Re: [Qemu-block] Block Migration and CPU throttling

2017-10-12 Thread Peter Lieven

Am 21.09.2017 um 14:36 schrieb Dr. David Alan Gilbert:

* Peter Lieven (p...@kamp.de) wrote:

Am 19.09.2017 um 16:41 schrieb Dr. David Alan Gilbert:

* Peter Lieven (p...@kamp.de) wrote:

Am 19.09.2017 um 16:38 schrieb Dr. David Alan Gilbert:

* Peter Lieven (p...@kamp.de) wrote:

Hi,

I just noticed that CPU throttling and Block Migration don't work together very 
well.
During block migration the throttling heuristic detects that we obviously
make no progress in RAM transfer. But the reason is the running block
migration, not a too-high dirty-page rate.

The result is that any VM is throttled by 99% during block migration.

Hmm that's unfortunate; do you have a bandwidth set lower than your
actual network connection? I'm just wondering if it's actually going
between the block and RAM iterative sections or getting stuck in one.

It also happens if source and dest are on the same machine and speed is
set to 100G.

But does it happen if they're not and the speed is set low?

Yes, it does. I noticed it in our test environment between different
nodes with a 10G link in between. But it's totally clear why it happens.
During block migration we transfer all dirty memory pages in each round
(if there is moderate memory load), but all dirty pages are obviously
more than 50% of the RAM transferred in that round; more exactly, 100%.
And the current logic triggers on this condition.

I think I will go forward and send a patch which disables auto converge during
block migration bulk stage.

Yes, that's fair;  it probably would also make sense to throttle the RAM
migration during the block migration bulk stage, since the chances are
it's not going to get far.  (I think in the nbd setup, the main
migration process isn't started until the end of bulk).


Catching up with the idea of delaying ram migration until block bulk has 
completed.
What do you think is the easiest way to achieve this?

Peter



Re: [Qemu-block] Block Migration and CPU throttling

2017-09-21 Thread Peter Lieven

Am 21.09.2017 um 14:36 schrieb Dr. David Alan Gilbert:

* Peter Lieven (p...@kamp.de) wrote:

Am 19.09.2017 um 16:41 schrieb Dr. David Alan Gilbert:

* Peter Lieven (p...@kamp.de) wrote:

Am 19.09.2017 um 16:38 schrieb Dr. David Alan Gilbert:

* Peter Lieven (p...@kamp.de) wrote:

Hi,

I just noticed that CPU throttling and Block Migration don't work together very 
well.
During block migration the throttling heuristic detects that we obviously
make no progress in RAM transfer. But the reason is the running block
migration, not a too-high dirty-page rate.

The result is that any VM is throttled by 99% during block migration.

Hmm that's unfortunate; do you have a bandwidth set lower than your
actual network connection? I'm just wondering if it's actually going
between the block and RAM iterative sections or getting stuck in one.

It also happens if source and dest are on the same machine and speed is
set to 100G.

But does it happen if they're not and the speed is set low?

Yes, it does. I noticed it in our test environment between different
nodes with a 10G link in between. But it's totally clear why it happens.
During block migration we transfer all dirty memory pages in each round
(if there is moderate memory load), but all dirty pages are obviously
more than 50% of the RAM transferred in that round; more exactly, 100%.
And the current logic triggers on this condition.

I think I will go forward and send a patch which disables auto converge during
block migration bulk stage.

Yes, that's fair;  it probably would also make sense to throttle the RAM
migration during the block migration bulk stage, since the chances are
it's not going to get far.  (I think in the nbd setup, the main
migration process isn't started until the end of bulk).


Exactly, but I would move this to a different patch. I would not start ram 
migration at all before
the bulk phase has completed.

Peter




Re: [Qemu-block] Block Migration and CPU throttling

2017-09-21 Thread Dr. David Alan Gilbert
* Peter Lieven (p...@kamp.de) wrote:
> Am 19.09.2017 um 16:41 schrieb Dr. David Alan Gilbert:
> > * Peter Lieven (p...@kamp.de) wrote:
> >> Am 19.09.2017 um 16:38 schrieb Dr. David Alan Gilbert:
> >>> * Peter Lieven (p...@kamp.de) wrote:
>  Hi,
> 
>  I just noticed that CPU throttling and Block Migration don't work 
>  together very well.
>  During block migration the throttling heuristic detects that we
>  obviously make no progress in RAM transfer. But the reason is the
>  running block migration, not a too-high dirty-page rate.
> 
>  The result is that any VM is throttled by 99% during block migration.
> >>> Hmm that's unfortunate; do you have a bandwidth set lower than your
> >>> actual network connection? I'm just wondering if it's actually going
> >>> between the block and RAM iterative sections or getting stuck in one.
> >> It also happens if source and dest are on the same machine and speed
> >> is set to 100G.
> > But does it happen if they're not and the speed is set low?
> 
> Yes, it does. I noticed it in our test environment between different
> nodes with a 10G link in between. But it's totally clear why it happens.
> During block migration we transfer all dirty memory pages in each round
> (if there is moderate memory load), but all dirty pages are obviously
> more than 50% of the RAM transferred in that round; more exactly, 100%.
> And the current logic triggers on this condition.
> 
> I think I will go forward and send a patch which disables auto converge during
> block migration bulk stage.

Yes, that's fair;  it probably would also make sense to throttle the RAM
migration during the block migration bulk stage, since the chances are
it's not going to get far.  (I think in the nbd setup, the main
migration process isn't started until the end of bulk).

Dave

> Thanks for your feedback,
> Peter
> 
--
Dr. David Alan Gilbert / dgilb...@redhat.com / Manchester, UK



Re: [Qemu-block] Block Migration and CPU throttling

2017-09-20 Thread Peter Lieven
Am 19.09.2017 um 16:41 schrieb Paolo Bonzini:
> On 19/09/2017 15:36, Peter Lieven wrote:
>> Hi,
>>
>> I just noticed that CPU throttling and Block Migration don't work
>> together very well.
>> During block migration the throttling heuristic detects that we
>> obviously make no progress in RAM transfer. But the reason is the
>> running block migration, not a too-high dirty-page rate.
>>
>> The result is that any VM is throttled by 99% during block migration.
>>
>> I wonder what the best way would be to fix this. I came up with the
>> following ideas so far:
>>
>> - disable throttling while block migration is in bulk stage
>> - check if the absolute number of num_dirty_pages_period crosses a
>>   threshold, not just whether it's greater than 50% of transferred bytes
>> - check if migration_dirty_pages > 0. This slows down throttling, but
>>   does not avoid it completely.
> If you can use nbd-server and drive-mirror for block migration (libvirt
> would do it), then you will use multiple sockets and be able to migrate
> block and RAM at the same time.
>
> Otherwise, disabling throttling during the bulk stage is the one that
> seems nicest and most promising.

Okay, but this can be done independently of the nbd approach.
If someone uses classic block migration and auto-converge, his
vserver will freeze.

I will send a patch to fix that.

Thanks,
Peter
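Paolo's alternative, as used by libvirt, moves the disk over a separate NBD connection so block data and RAM don't compete on one migration socket. A rough QMP sketch follows; the port, host names, and device name are illustrative, not taken from the thread:

```
# destination: start an NBD server and export the target disk
{"execute": "nbd-server-start",
 "arguments": {"addr": {"type": "inet",
                        "data": {"host": "0.0.0.0", "port": "10809"}}}}
{"execute": "nbd-server-add",
 "arguments": {"device": "drive0", "writable": true}}

# source: mirror the disk into that export, then migrate RAM only
{"execute": "drive-mirror",
 "arguments": {"device": "drive0", "sync": "full", "mode": "existing",
               "target": "nbd:dst-host:10809:exportname=drive0"}}
{"execute": "migrate", "arguments": {"uri": "tcp:dst-host:4444"}}
```

Because the RAM migration stream then carries no block data, the throttling heuristic sees real RAM progress and the mis-throttling described above does not arise.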




Re: [Qemu-block] Block Migration and CPU throttling

2017-09-20 Thread Peter Lieven
Am 19.09.2017 um 16:41 schrieb Dr. David Alan Gilbert:
> * Peter Lieven (p...@kamp.de) wrote:
>> Am 19.09.2017 um 16:38 schrieb Dr. David Alan Gilbert:
>>> * Peter Lieven (p...@kamp.de) wrote:
 Hi,

 I just noticed that CPU throttling and Block Migration don't work together 
 very well.
During block migration the throttling heuristic detects that we
obviously make no progress in RAM transfer. But the reason is the
running block migration, not a too-high dirty-page rate.

The result is that any VM is throttled by 99% during block migration.
>>> Hmm that's unfortunate; do you have a bandwidth set lower than your
>>> actual network connection? I'm just wondering if it's actually going
>>> between the block and RAM iterative sections or getting stuck in one.
>> It also happens if source and dest are on the same machine and speed
>> is set to 100G.
> But does it happen if they're not and the speed is set low?

Yes, it does. I noticed it in our test environment between different
nodes with a 10G link in between. But it's totally clear why it happens.
During block migration we transfer all dirty memory pages in each round
(if there is moderate memory load), but all dirty pages are obviously
more than 50% of the RAM transferred in that round; more exactly, 100%.
And the current logic triggers on this condition.

I think I will go forward and send a patch which disables auto converge during
block migration bulk stage.

Thanks for your feedback,
Peter




Re: [Qemu-block] Block Migration and CPU throttling

2017-09-19 Thread Paolo Bonzini
On 19/09/2017 15:36, Peter Lieven wrote:
> Hi,
> 
> I just noticed that CPU throttling and Block Migration don't work
> together very well.
> During block migration the throttling heuristic detects that we
> obviously make no progress
> in ram transfer. But the reason is the running block migration and not a
> too high dirty pages rate.
> 
> The result is that any VM is throttled by 99% during block migration.
> 
> I wonder what the best way would be to fix this. I came up with the
> following ideas so far:
> 
> - disable throttling while block migration is in bulk stage
> - check if the absolute number of num_dirty_pages_period crosses a threshold
>   and not if it's just greater than 50% of transferred bytes
> - check if migration_dirty_pages > 0. This slows down throttling, but
> does not avoid it completely.

If you can use nbd-server and drive-mirror for block migration (libvirt
would do it), then you will use multiple sockets and be able to migrate
block and RAM at the same time.

Otherwise, disabling throttling during the bulk stage is the one that
seems nicest and most promising.

Paolo
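
For reference, the NBD + drive-mirror flow Paolo mentions looks roughly like
this at the QMP level (a sketch from memory; host, port, export and device
names are placeholders, and libvirt issues the equivalent commands for you):

```
# On the destination (target QEMU started with an empty image of the same size):
{"execute": "nbd-server-start",
 "arguments": {"addr": {"type": "inet",
                        "data": {"host": "0.0.0.0", "port": "10809"}}}}
{"execute": "nbd-server-add",
 "arguments": {"device": "drive-virtio-disk0", "writable": true}}

# On the source: mirror the disk over its own socket ...
{"execute": "drive-mirror",
 "arguments": {"device": "drive-virtio-disk0",
               "target": "nbd:dest-host:10809:exportname=drive-virtio-disk0",
               "sync": "full", "mode": "existing"}}

# ... and once BLOCK_JOB_READY arrives, migrate RAM without -b:
{"execute": "migrate", "arguments": {"uri": "tcp:dest-host:4444"}}
```

Because the disk copy runs on a separate connection, the RAM migration's
progress accounting never sees the block traffic, so the auto-converge
heuristic is not confused in the first place.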



Re: [Qemu-block] Block Migration and CPU throttling

2017-09-19 Thread Dr. David Alan Gilbert
* Peter Lieven (p...@kamp.de) wrote:
> Am 19.09.2017 um 16:38 schrieb Dr. David Alan Gilbert:
> > * Peter Lieven (p...@kamp.de) wrote:
> > > Hi,
> > > 
> > > I just noticed that CPU throttling and Block Migration don't work 
> > > together very well.
> > > During block migration the throttling heuristic detects that we obviously 
> > > make no progress
> > > in ram transfer. But the reason is the running block migration and not a 
> > > too high dirty pages rate.
> > > 
> > > The result is that any VM is throttled by 99% during block migration.
> > Hmm that's unfortunate; do you have a bandwidth set lower than your
> > actual network connection? I'm just wondering if it's actually going
> > between the block and RAM iterative sections or getting stuck in one.
> 
> It happens also if source and dest are on the same machine and speed is set 
> to 100G.

But does it happen if they're not and the speed is set low?

Dave

> Peter
> 
--
Dr. David Alan Gilbert / dgilb...@redhat.com / Manchester, UK



Re: [Qemu-block] Block Migration and CPU throttling

2017-09-19 Thread Peter Lieven

Am 19.09.2017 um 16:38 schrieb Dr. David Alan Gilbert:
> * Peter Lieven (p...@kamp.de) wrote:
>> Hi,
>>
>> I just noticed that CPU throttling and Block Migration don't work together
>> very well.
>> During block migration the throttling heuristic detects that we obviously
>> make no progress
>> in ram transfer. But the reason is the running block migration and not a
>> too high dirty pages rate.
>>
>> The result is that any VM is throttled by 99% during block migration.
> Hmm that's unfortunate; do you have a bandwidth set lower than your
> actual network connection? I'm just wondering if it's actually going
> between the block and RAM iterative sections or getting stuck in one.

It happens also if source and dest are on the same machine and speed is set to
100G.

Peter




Re: [Qemu-block] Block Migration and CPU throttling

2017-09-19 Thread Dr. David Alan Gilbert
* Peter Lieven (p...@kamp.de) wrote:
> Hi,
> 
> I just noticed that CPU throttling and Block Migration don't work together 
> very well.
> During block migration the throttling heuristic detects that we obviously 
> make no progress
> in ram transfer. But the reason is the running block migration and not a too 
> high dirty pages rate.
> 
> The result is that any VM is throttled by 99% during block migration.

Hmm that's unfortunate; do you have a bandwidth set lower than your
actual network connection? I'm just wondering if it's actually going
between the block and RAM iterative sections or getting stuck in one.

> I wonder what the best way would be to fix this. I came up with the
> following ideas so far:
> 
> - disable throttling while block migration is in bulk stage

mig_throttle_guest_down is currently in migration/ram.c
so however we do it, we've got to add an 'inhibit' that
block.c could set (I suggest a counter so that it could
be set by a few things).

> - check if the absolute number of num_dirty_pages_period crosses a threshold
>   and not if it's just greater than 50% of transferred bytes
> - check if migration_dirty_pages > 0. This slows down throttling, but does 
> not avoid it completely.

An interesting question is whether you want to inhibit in the non-bulk
stage if IO writes are happening too quickly to keep up.

Dave

> 
> Peter
--
Dr. David Alan Gilbert / dgilb...@redhat.com / Manchester, UK



[Qemu-block] Block Migration and CPU throttling

2017-09-19 Thread Peter Lieven

Hi,

I just noticed that CPU throttling and Block Migration don't work together very 
well.
During block migration the throttling heuristic detects that we obviously make 
no progress
in ram transfer. But the reason is the running block migration and not a too 
high dirty pages rate.

The result is that any VM is throttled by 99% during block migration.

I wonder what the best way would be to fix this. I came up with the following
ideas so far:

- disable throttling while block migration is in bulk stage
- check if the absolute number of num_dirty_pages_period crosses a threshold
  and not if it's just greater than 50% of transferred bytes
- check if migration_dirty_pages > 0. This slows down throttling, but does not 
avoid it completely.

Peter