* Peter Lieven (p...@kamp.de) wrote:
> Am 05.03.2018 um 12:45 schrieb Stefan Hajnoczi:
> > On Thu, Feb 22, 2018 at 12:13:50PM +0100, Peter Lieven wrote:
> >> I stumbled across the MAX_INFLIGHT_IO field that was introduced in 2015 
> >> and was curious why 512MB was chosen as the readahead. I ask because I 
> >> found that the source VM gets very unresponsive I/O-wise while the 
> >> initial 512MB are read, and it furthermore seems to stay unresponsive 
> >> if we choose a high migration speed and have fast storage on the 
> >> destination VM.
> >>
> >> In our environment I changed this value to 16MB, which works much more 
> >> smoothly. I wonder if we should make this a user-configurable value, or 
> >> at least define a separate rate limit for the block transfer in the 
> >> bulk stage?
> > I don't know if benchmarks were run when choosing the value.  From the
> > commit description it sounds like the main purpose was to limit the
> > amount of memory that can be consumed.
> >
> > 16 MB also fulfills that criterion :), but why is the source VM more
> > responsive with a lower value?
> >
> > Perhaps the issue is queue depth on the storage device - the block
> > migration code enqueues up to 512 MB worth of reads, and guest I/O has
> > to wait?
> 
> That is my guess. Especially if the destination storage is faster, we 
> basically always have 512 I/Os in flight on the source storage.
> 
> Does anyone mind if we reduce that value to 16MB, or do we need a better 
> mechanism?
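
To make that concrete, here's a minimal sketch (not the real
migration/block.c code; every name in it is an illustrative stand-in)
of why the in-flight cap in the bulk phase is also the read queue
depth that guest I/O has to compete with:

/*
 * Hedged sketch only: the bulk phase submits fixed-size read chunks
 * until it hits a cap on buffers in flight.  Every buffer counted
 * here is an outstanding read on the source image, so the cap is
 * both a memory bound and a queue-depth bound.
 */
#include <stdio.h>

enum { CHUNK_BYTES = 1 << 20 };    /* bulk reads are ~1 MiB chunks */

static unsigned inflight;          /* submitted, not yet completed */

static void submit_read(void)      /* stand-in for the real AIO submit */
{
    inflight++;
}

static void fill_read_queue(unsigned max_inflight)
{
    while (inflight < max_inflight) {
        submit_read();
    }
    printf("cap %u buffers -> %u MiB readahead, queue depth %u\n",
           max_inflight, max_inflight * CHUNK_BYTES >> 20, inflight);
}

int main(void)
{
    fill_read_queue(512);   /* current MAX_INFLIGHT_IO: up to 512 MiB outstanding */
    inflight = 0;
    fill_read_queue(16);    /* Peter's value: at most 16 MiB outstanding */
    return 0;
}

With the cap at 512 the guest's own requests queue behind up to 512 MB
of migration reads on fast storage; at 16 they queue behind at most 16 MB.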

We've got migration parameters these days; you could connect it to one
of those fairly easily, I think.
Try: grep -i 'cpu[-_]throttle[-_]initial'  for an example of one that's
already there.
Then you can set it to whatever you like.
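
As a rough sketch only (the struct and the x_block_inflight_limit field
below are invented for illustration, and all of the qapi/HMP plumbing a
real parameter needs is left out), the bulk-phase check could then
consult the parameter instead of the hard-coded constant:

#include <stdbool.h>
#include <stdio.h>

typedef struct MigParams {            /* stand-in for MigrationParameters */
    unsigned cpu_throttle_initial;    /* existing parameter, for comparison */
    unsigned x_block_inflight_limit;  /* hypothetical new parameter (buffers) */
} MigParams;

static MigParams current_params = {
    .cpu_throttle_initial = 20,
    .x_block_inflight_limit = 16,     /* the default could stay at 512 */
};

/* Would replace the "< MAX_INFLIGHT_IO" test in the bulk-phase loop. */
static bool block_mig_may_submit(unsigned inflight_buffers)
{
    return inflight_buffers < current_params.x_block_inflight_limit;
}

int main(void)
{
    printf("10 in flight, may submit: %d\n", block_mig_may_submit(10));
    printf("16 in flight, may submit: %d\n", block_mig_may_submit(16));
    return 0;
}

Once such a parameter existed it could be changed at runtime with
migrate_set_parameter from the monitor, the same way cpu-throttle-initial
is set today.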

Dave

> Peter
--
Dr. David Alan Gilbert / dgilb...@redhat.com / Manchester, UK
