Re: [PATCH Kernel v22 0/8] Add UAPIs to support migration for VFIO devices

2020-05-29 Thread Kirti Wankhede




On 5/29/2020 4:29 AM, Alex Williamson wrote:

On Wed, 27 May 2020 09:48:22 +0100
"Dr. David Alan Gilbert"  wrote:

* Yan Zhao (yan.y.z...@intel.com) wrote:

BTW, for viommu, the downtime data is as below, under the same network
condition and guest memory size, and no running dirty data/memory produced
by device.
(1) viommu off
single-round dirty query: downtime ~100ms


Fine.


(2) viommu on
single-round dirty query: downtime 58s


Youch.


Double Youch!  But we believe this is because we're getting the dirty
bitmap one IOMMU leaf page at a time, right?  We've enabled the kernel
to get a dirty bitmap across multiple mappings, but QEMU isn't yet
taking advantage of it.  Do I have this correct?  Thanks,



That's correct.

Thanks,
Kirti
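
An illustration of the point confirmed above: with viommu on, QEMU currently issues one dirty-bitmap query per IOMMU leaf mapping, while the kernel-side VFIO_IOMMU_DIRTY_PAGES interface can already return a bitmap for a larger contiguous IOVA range in a single call. A minimal sketch of the difference; get_dirty_bitmap() is a stand-in for the ioctl wrapper and the 4 KiB leaf size is an assumption, neither comes from the posted series:

    /* Hypothetical helper standing in for a VFIO_IOMMU_DIRTY_PAGES
     * GET_BITMAP call on the container; error handling omitted. */
    int get_dirty_bitmap(int container_fd, uint64_t iova, uint64_t size,
                         unsigned long *bitmap);

    /* Per-leaf query, roughly the viommu path today: one call per 4 KiB
     * mapping, so the cost scales with the number of mappings. */
    for (uint64_t iova = start; iova < start + len; iova += 4096)
        get_dirty_bitmap(container_fd, iova, 4096, bitmap);

    /* Ranged query the kernel already supports: one call for the whole
     * contiguous range, amortizing the per-call overhead. */
    get_dirty_bitmap(container_fd, start, len, bitmap);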



Re: [PATCH Kernel v22 0/8] Add UAPIs to support migration for VFIO devices

2020-05-29 Thread Dr. David Alan Gilbert
* Alex Williamson (alex.william...@redhat.com) wrote:
> On Thu, 28 May 2020 04:01:02 -0400
> Yan Zhao  wrote:
> 
> > > > > This is my understanding of the protocol as well, when the device is
> > > > > running, pending_bytes might drop to zero if no internal state has
> > > > > changed and may be non-zero on the next iteration due to device
> > > > > activity.  When the device is not running, pending_bytes reporting zero
> > > > > indicates the device is done, there is no further state to transmit.
> > > > > Does that meet your need/expectation?
> > > > >  
> > > > (1) on one side, as in vfio_save_pending(),
> > > > vfio_save_pending()
> > > > {
> > > > ...
> > > > ret = vfio_update_pending(vbasedev);
> > > > ...
> > > > *res_precopy_only += migration->pending_bytes;
> > > > ...
> > > > }
> > > > the pending_bytes tells the migration thread how much data is still held on
> > > > the device side.
> > > > the device data includes
> > > > device internal data + running device dirty data + device state.
> > > > 
> > > > so the pending_bytes should include device state as well, right?
> > > > if so, the pending_bytes should never reach 0 if there's any device
> > > > state to be sent after device is stopped.  
> > > 
> > > I hadn't expected the pending-bytes to include a fixed offset for device
> > > state (If you mean a few registers etc) - I'd expect pending to drop
> > > possibly to zero;  the heuristic as to when to switch from iteration to
> > > stop, is based on the total pending across all iterated devices; so it's
> > > got to be allowed to drop otherwise you'll never transition to stop.
> > >   
> > ok. got it.
> 
> Yeah, as I understand it, a device is not required to participate in
> reporting data available while (_SAVING | _RUNNING), there will always
> be an iteration while the device is !_RUNNING.  Therefore if you have
> fixed device state that you're always going to send, it should only be
> sent once when called during !_RUNNING.  The iterative phase should be
> used where you have a good chance to avoid re-sending data at the
> stop-and-copy phase.  Thanks,

Right.

Dave

> Alex
--
Dr. David Alan Gilbert / dgilb...@redhat.com / Manchester, UK
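
For readers following the (_SAVING | _RUNNING) versus !_RUNNING distinction above: the protocol has an iterative pre-copy phase while the device keeps running, followed by a stop-and-copy pass once _RUNNING is cleared, and only in the latter does pending_bytes == 0 mean "done". A rough pseudo-C sketch; the helper names and the abbreviated flag spellings are illustrative, not the UAPI's:

    /* Pre-copy: device state is _SAVING | _RUNNING.  pending_bytes may go
     * to zero and come back; a device may even defer everything to the end. */
    set_device_state(dev, _SAVING | _RUNNING);
    while (precopy_phase_active())
        save_iteration(dev);                 /* optional participation */

    /* Stop-and-copy: _RUNNING cleared, state is _SAVING only.  Now a
     * pending_bytes of zero really means there is no more device state. */
    set_device_state(dev, _SAVING);
    while (read_pending_bytes(dev) != 0)
        save_iteration(dev);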




Re: [PATCH Kernel v22 0/8] Add UAPIs to support migration for VFIO devices

2020-05-28 Thread Yan Zhao
On Thu, May 28, 2020 at 04:59:06PM -0600, Alex Williamson wrote:
> On Wed, 27 May 2020 09:48:22 +0100
> "Dr. David Alan Gilbert"  wrote:
> > * Yan Zhao (yan.y.z...@intel.com) wrote:
> > > BTW, for viommu, the downtime data is as below, under the same network
> > > condition and guest memory size, and no running dirty data/memory produced
> > > by device.
> > > (1) viommu off
> > > single-round dirty query: downtime ~100ms   
> > 
> > Fine.
> > 
> > > (2) viommu on
> > > single-round dirty query: downtime 58s   
> > 
> > Youch.
> 
> Double Youch!  But we believe this is because we're getting the dirty
> bitmap one IOMMU leaf page at a time, right?  We've enabled the kernel
> to get a dirty bitmap across multiple mappings, but QEMU isn't yet
> taking advantage of it.  Do I have this correct?  Thanks,
>
Yes, I think so, but I haven't looked into it yet.

Thanks
Yan



Re: [PATCH Kernel v22 0/8] Add UAPIs to support migration for VFIO devices

2020-05-28 Thread Alex Williamson
On Wed, 27 May 2020 09:48:22 +0100
"Dr. David Alan Gilbert"  wrote:
> * Yan Zhao (yan.y.z...@intel.com) wrote:
> > BTW, for viommu, the downtime data is as below, under the same network
> > condition and guest memory size, and no running dirty data/memory produced
> > by device.
> > (1) viommu off
> > single-round dirty query: downtime ~100ms   
> 
> Fine.
> 
> > (2) viommu on
> > single-round dirty query: downtime 58s   
> 
> Youch.

Double Youch!  But we believe this is because we're getting the dirty
bitmap one IOMMU leaf page at a time, right?  We've enabled the kernel
to get a dirty bitmap across multiple mappings, but QEMU isn't yet
taking advantage of it.  Do I have this correct?  Thanks,

Alex




Re: [PATCH Kernel v22 0/8] Add UAPIs to support migration for VFIO devices

2020-05-28 Thread Alex Williamson
On Thu, 28 May 2020 04:01:02 -0400
Yan Zhao  wrote:

> > > > This is my understanding of the protocol as well, when the device is
> > > > running, pending_bytes might drop to zero if no internal state has
> > > > changed and may be non-zero on the next iteration due to device
> > > > activity.  When the device is not running, pending_bytes reporting zero
> > > > indicates the device is done, there is no further state to transmit.
> > > > Does that meet your need/expectation?
> > > >  
> > > (1) on one side, as in vfio_save_pending(),
> > > vfio_save_pending()
> > > {
> > > ...
> > > ret = vfio_update_pending(vbasedev);
> > > ...
> > > *res_precopy_only += migration->pending_bytes;
> > > ...
> > > }
> > > the pending_bytes tells the migration thread how much data is still held on
> > > the device side.
> > > the device data includes
> > > device internal data + running device dirty data + device state.
> > > 
> > > so the pending_bytes should include device state as well, right?
> > > if so, the pending_bytes should never reach 0 if there's any device
> > > state to be sent after device is stopped.  
> > 
> > I hadn't expected the pending-bytes to include a fixed offset for device
> > state (If you mean a few registers etc) - I'd expect pending to drop
> > possibly to zero;  the heuristic as to when to switch from iteration to
> > stop, is based on the total pending across all iterated devices; so it's
> > got to be allowed to drop otherwise you'll never transition to stop.
> >   
> ok. got it.

Yeah, as I understand it, a device is not required to participate in
reporting data available while (_SAVING | _RUNNING), there will always
be an iteration while the device is !_RUNNING.  Therefore if you have
fixed device state that you're always going to send, it should only be
sent once when called during !_RUNNING.  The iterative phase should be
used where you have a good chance to avoid re-sending data at the
stop-and-copy phase.  Thanks,

Alex




Re: [PATCH Kernel v22 0/8] Add UAPIs to support migration for VFIO devices

2020-05-28 Thread Yan Zhao
> > > This is my understanding of the protocol as well, when the device is
> > > running, pending_bytes might drop to zero if no internal state has
> > > changed and may be non-zero on the next iteration due to device
> > > activity.  When the device is not running, pending_bytes reporting zero
> > > indicates the device is done, there is no further state to transmit.
> > > Does that meet your need/expectation?
> > >
> > (1) on one side, as in vfio_save_pending(),
> > vfio_save_pending()
> > {
> > ...
> > ret = vfio_update_pending(vbasedev);
> > ...
> > *res_precopy_only += migration->pending_bytes;
> > ...
> > }
> > the pending_bytes tells the migration thread how much data is still held on
> > the device side.
> > the device data includes
> > device internal data + running device dirty data + device state.
> > 
> > so the pending_bytes should include device state as well, right?
> > if so, the pending_bytes should never reach 0 if there's any device
> > state to be sent after device is stopped.
> 
> I hadn't expected the pending-bytes to include a fixed offset for device
> state (If you mean a few registers etc) - I'd expect pending to drop
> possibly to zero;  the heuristic as to when to switch from iteration to
> stop, is based on the total pending across all iterated devices; so it's
> got to be allowed to drop otherwise you'll never transition to stop.
> 
ok. got it.

> > (2) on the other side,
> > along side we updated the pending_bytes in vfio_save_pending() and
> > enter into the vfio_save_iterate(), if we repeatedly update
> > pending_bytes in vfio_save_iterate(), it would enter into a scenario
> > like
> > 
> > initially pending_bytes=500M.
> > vfio_save_iterate() -->
> >   round 1: transmitted 500M.
> >   round 2: update pending bytes, pending_bytes=50M (50M dirty data).
> >   round 3: update pending bytes, pending_bytes=50M.
> >   ...
> >   round N: update pending bytes, pending_bytes=50M.
> > 
> > If there're two vfio devices, the vfio_save_iterate() for the second device
> > may never get a chance to be called because there's always pending_bytes
> > produced by the first device, even if the size is small.
> 
> And between RAM and the vfio devices?

yes, is that right?

Thanks
Yan
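
The rounds example and Dave's reply boil down to one heuristic: the decision to stop the VM is taken on the total pending across RAM and every iterated device, so a device's pending_bytes has to be allowed to fall, and re-reading it on every vfio_save_iterate() call can keep a second device (or RAM) from ever looking "small enough". A minimal sketch of that heuristic, with downtime_threshold, ram_pending and the devices[] array as illustrative placeholders:

    /* Illustrative switch-over check; only pending_bytes and
     * vfio_update_pending() are names taken from the thread. */
    uint64_t total_pending = ram_pending;

    for (int i = 0; i < ndevices; i++)
        total_pending += devices[i].pending_bytes;  /* via vfio_update_pending() */

    if (total_pending > downtime_threshold)
        run_precopy_iteration();   /* keep iterating; RAM and each device get a turn */
    else
        enter_stop_and_copy();     /* totals low enough to take the downtime */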



Re: [PATCH Kernel v22 0/8] Add UAPIs to support migration for VFIO devices

2020-05-27 Thread Dr. David Alan Gilbert
* Yan Zhao (yan.y.z...@intel.com) wrote:
> On Tue, May 26, 2020 at 02:19:39PM -0600, Alex Williamson wrote:
> > On Mon, 25 May 2020 18:50:54 +0530
> > Kirti Wankhede  wrote:
> > 
> > > On 5/25/2020 12:29 PM, Yan Zhao wrote:
> > > > On Tue, May 19, 2020 at 10:58:04AM -0600, Alex Williamson wrote:  
> > > >> Hi folks,
> > > >>
> > > >> My impression is that we're getting pretty close to a workable
> > > >> implementation here with v22 plus respins of patches 5, 6, and 8.  We
> > > >> also have a matching QEMU series and a proposal for a new i40e
> > > >> consumer, as well as I assume GVT-g updates happening internally at
> > > >> Intel.  I expect all of the latter needs further review and discussion,
> > > >> but we should be at the point where we can validate these proposed
> > > >> kernel interfaces.  Therefore I'd like to make a call for reviews so
> > > >> that we can get this wrapped up for the v5.8 merge window.  I know
> > > >> Connie has some outstanding documentation comments and I'd like to make
> > > >> sure everyone has an opportunity to check that their comments have been
> > > >> addressed and we don't discover any new blocking issues.  Please send
> > > >> your Acked-by/Reviewed-by/Tested-by tags if you're satisfied with this
> > > >> interface and implementation.  Thanks!
> > > >>  
> > > > hi Alex
> > > > after porting gvt/i40e vf migration code to kernel/qemu v23, we spotted
> > > > two bugs.
> > > > 1. "Failed to get dirty bitmap for iova: 0xfe011000 size: 0x3fb0 err: 22"
> > > > This is a qemu bug that the dirty bitmap query range is not the same
> > > > as the dma map range. It can be fixed in qemu. and I just have a little
> > > > concern for kernel to have this restriction.
> > > >   
> > > 
> > > I never saw this unaligned size in my testing. In this case if you can
> > > provide vfio_* event traces, that will be helpful.
> > 
> > Yeah, I'm curious why we're hitting such a call path, I think we were
> > designing this under the assumption we wouldn't see these.  I also
> that's because the algorithm for getting the dirty bitmap query range still does
> not exactly match the one for the dma map range in vfio_dma_map().
> 
> 
> > wonder if we really need to enforce the dma mapping range for getting
> > the dirty bitmap with the current implementation (unmap+dirty obviously
> > still has the restriction).  We do shift the bitmap in place for
> > alignment, but I'm not sure why we couldn't shift it back and only
> > clear the range that was reported.  Kirti, do you see other issues?  I
> > think a patch to lift that restriction is something we could plan to
> > include after the initial series is included and before we've committed
> > to the uapi at the v5.8 release.
> >  
> > > > 2. migration abortion, reporting
> > > > "qemu-system-x86_64-lm: vfio_load_state: Error allocating buffer
> > > > qemu-system-x86_64-lm: error while loading state section id 49(vfio)
> > > > qemu-system-x86_64-lm: load of migration failed: Cannot allocate memory"
> > > > 
> > > > It's still a qemu bug and we can fix it by
> > > > "
> > > > if (migration->pending_bytes == 0) {
> > > > +qemu_put_be64(f, 0);
> > > > +qemu_put_be64(f, VFIO_MIG_FLAG_END_OF_STATE);
> > > > "  
> > > 
> > > In which function in QEMU do you have to add this?
> > 
> > I think this is relative to QEMU path 09/ where Yan had the questions
> > below on v16 and again tried to get answers to them on v22:
> > 
> > https://lore.kernel.org/qemu-devel/20200520031323.GB10369@joy-OptiPlex-7040/
> > 
> > Kirti, please address these questions.
> > 
> > > > and actually there are some extra concerns about this part, as reported in
> > > > [1][2].
> > > > 
> > > > [1] data_size should be read ahead of data_offset
> > > > https://lists.gnu.org/archive/html/qemu-devel/2020-05/msg02795.html.
> > > > [2] should not repeatedly update pending_bytes in vfio_save_iterate()
> > > > https://lists.gnu.org/archive/html/qemu-devel/2020-05/msg02796.html.
> > > > 
> > > > but as those errors are all in qemu, and we have finished basic tests in
> > > > both gvt & i40e, we're fine with the kernel part interface in general 
> > > > now.
> > > > (except for my concern [1], which needs to update kernel patch 1)
> > > >   
> > > 
> > >  >> what if pending_bytes is not 0, but vendor driver just does not want to
> > >  >> send data in this iteration? isn't it right to get data_size first before
> > >  >> getting data_offset?
> > > 
> > > If vendor driver doesn't want to send data but still has data in staging 
> > > buffer, vendor driver still can control to send pending_bytes for this 
> > > iteration as 0 as this is a trap field.
> > > 
> > > I would defer this to Alex.
> > 
> > This is my understanding of the protocol as well, when the device is
> > running, pending_bytes might drop to zero if no internal state has
> > changed and may be non-zero on the next iteration due to device
> > 

Re: [PATCH Kernel v22 0/8] Add UAPIs to support migration for VFIO devices

2020-05-27 Thread Yan Zhao
On Tue, May 26, 2020 at 02:19:39PM -0600, Alex Williamson wrote:
> On Mon, 25 May 2020 18:50:54 +0530
> Kirti Wankhede  wrote:
> 
> > On 5/25/2020 12:29 PM, Yan Zhao wrote:
> > > On Tue, May 19, 2020 at 10:58:04AM -0600, Alex Williamson wrote:  
> > >> Hi folks,
> > >>
> > >> My impression is that we're getting pretty close to a workable
> > >> implementation here with v22 plus respins of patches 5, 6, and 8.  We
> > >> also have a matching QEMU series and a proposal for a new i40e
> > >> consumer, as well as I assume GVT-g updates happening internally at
> > >> Intel.  I expect all of the latter needs further review and discussion,
> > >> but we should be at the point where we can validate these proposed
> > >> kernel interfaces.  Therefore I'd like to make a call for reviews so
> > >> that we can get this wrapped up for the v5.8 merge window.  I know
> > >> Connie has some outstanding documentation comments and I'd like to make
> > >> sure everyone has an opportunity to check that their comments have been
> > >> addressed and we don't discover any new blocking issues.  Please send
> > >> your Acked-by/Reviewed-by/Tested-by tags if you're satisfied with this
> > >> interface and implementation.  Thanks!
> > >>  
> > > hi Alex
> > > after porting gvt/i40e vf migration code to kernel/qemu v23, we spotted
> > > two bugs.
> > > 1. "Failed to get dirty bitmap for iova: 0xfe011000 size: 0x3fb0 err: 22"
> > > This is a qemu bug that the dirty bitmap query range is not the same
> > > as the dma map range. It can be fixed in qemu. and I just have a little
> > > concern for kernel to have this restriction.
> > >   
> > 
> > I never saw this unaligned size in my testing. In this case if you can
> > provide vfio_* event traces, that will be helpful.
> 
> Yeah, I'm curious why we're hitting such a call path, I think we were
> designing this under the assumption we wouldn't see these.  I also
that's because the algorithm for getting the dirty bitmap query range still does
not exactly match the one for the dma map range in vfio_dma_map().


> wonder if we really need to enforce the dma mapping range for getting
> the dirty bitmap with the current implementation (unmap+dirty obviously
> still has the restriction).  We do shift the bitmap in place for
> alignment, but I'm not sure why we couldn't shift it back and only
> clear the range that was reported.  Kirti, do you see other issues?  I
> think a patch to lift that restriction is something we could plan to
> include after the initial series is included and before we've committed
> to the uapi at the v5.8 release.
>  
> > > 2. migration abortion, reporting
> > > "qemu-system-x86_64-lm: vfio_load_state: Error allocating buffer
> > > qemu-system-x86_64-lm: error while loading state section id 49(vfio)
> > > qemu-system-x86_64-lm: load of migration failed: Cannot allocate memory"
> > > 
> > > It's still a qemu bug and we can fix it by
> > > "
> > > if (migration->pending_bytes == 0) {
> > > +qemu_put_be64(f, 0);
> > > +qemu_put_be64(f, VFIO_MIG_FLAG_END_OF_STATE);
> > > "  
> > 
> > In which function in QEMU do you have to add this?
> 
> I think this is relative to QEMU path 09/ where Yan had the questions
> below on v16 and again tried to get answers to them on v22:
> 
> https://lore.kernel.org/qemu-devel/20200520031323.GB10369@joy-OptiPlex-7040/
> 
> Kirti, please address these questions.
> 
> > > and actually there are some extra concerns about this part, as reported in
> > > [1][2].
> > > 
> > > [1] data_size should be read ahead of data_offset
> > > https://lists.gnu.org/archive/html/qemu-devel/2020-05/msg02795.html.
> > > [2] should not repeatedly update pending_bytes in vfio_save_iterate()
> > > https://lists.gnu.org/archive/html/qemu-devel/2020-05/msg02796.html.
> > > 
> > > but as those errors are all in qemu, and we have finished basic tests in
> > > both gvt & i40e, we're fine with the kernel part interface in general now.
> > > (except for my concern [1], which needs to update kernel patch 1)
> > >   
> > 
> >  >> what if pending_bytes is not 0, but vendor driver just does not want to
> >  >> send data in this iteration? isn't it right to get data_size first before
> >  >> getting data_offset?
> > 
> > If vendor driver doesn't want to send data but still has data in staging 
> > buffer, vendor driver still can control to send pending_bytes for this 
> > iteration as 0 as this is a trap field.
> > 
> > I would defer this to Alex.
> 
> This is my understanding of the protocol as well, when the device is
> running, pending_bytes might drop to zero if no internal state has
> changed and may be non-zero on the next iteration due to device
> activity.  When the device is not running, pending_bytes reporting zero
> indicates the device is done, there is no further state to transmit.
> Does that meet your need/expectation?
>
(1) on one side, as in vfio_save_pending(),
vfio_save_pending()
{
...
ret = 

Re: [PATCH Kernel v22 0/8] Add UAPIs to support migration for VFIO devices

2020-05-26 Thread Alex Williamson
On Mon, 25 May 2020 18:50:54 +0530
Kirti Wankhede  wrote:

> On 5/25/2020 12:29 PM, Yan Zhao wrote:
> > On Tue, May 19, 2020 at 10:58:04AM -0600, Alex Williamson wrote:  
> >> Hi folks,
> >>
> >> My impression is that we're getting pretty close to a workable
> >> implementation here with v22 plus respins of patches 5, 6, and 8.  We
> >> also have a matching QEMU series and a proposal for a new i40e
> >> consumer, as well as I assume GVT-g updates happening internally at
> >> Intel.  I expect all of the latter needs further review and discussion,
> >> but we should be at the point where we can validate these proposed
> >> kernel interfaces.  Therefore I'd like to make a call for reviews so
> >> that we can get this wrapped up for the v5.8 merge window.  I know
> >> Connie has some outstanding documentation comments and I'd like to make
> >> sure everyone has an opportunity to check that their comments have been
> >> addressed and we don't discover any new blocking issues.  Please send
> >> your Acked-by/Reviewed-by/Tested-by tags if you're satisfied with this
> >> interface and implementation.  Thanks!
> >>  
> > hi Alex
> > after porting gvt/i40e vf migration code to kernel/qemu v23, we spotted
> > two bugs.
> > 1. "Failed to get dirty bitmap for iova: 0xfe011000 size: 0x3fb0 err: 22"
> > This is a qemu bug that the dirty bitmap query range is not the same
> > as the dma map range. It can be fixed in qemu. and I just have a little
> > concern for kernel to have this restriction.
> >   
> 
> I never saw this unaligned size in my testing. In this case if you can 
> > provide vfio_* event traces, that will be helpful.

Yeah, I'm curious why we're hitting such a call path, I think we were
designing this under the assumption we wouldn't see these.  I also
wonder if we really need to enforce the dma mapping range for getting
the dirty bitmap with the current implementation (unmap+dirty obviously
still has the restriction).  We do shift the bitmap in place for
alignment, but I'm not sure why we couldn't shift it back and only
clear the range that was reported.  Kirti, do you see other issues?  I
think a patch to lift that restriction is something we could plan to
include after the initial series is included and before we've committed
to the uapi at the v5.8 release.
 
> > 2. migration abortion, reporting
> > "qemu-system-x86_64-lm: vfio_load_state: Error allocating buffer
> > qemu-system-x86_64-lm: error while loading state section id 49(vfio)
> > qemu-system-x86_64-lm: load of migration failed: Cannot allocate memory"
> > 
> > It's still a qemu bug and we can fix it by
> > "
> > if (migration->pending_bytes == 0) {
> > +qemu_put_be64(f, 0);
> > +qemu_put_be64(f, VFIO_MIG_FLAG_END_OF_STATE);
> > "  
> 
> In which function in QEMU do you have to add this?

I think this is relative to QEMU path 09/ where Yan had the questions
below on v16 and again tried to get answers to them on v22:

https://lore.kernel.org/qemu-devel/20200520031323.GB10369@joy-OptiPlex-7040/

Kirti, please address these questions.

> > and actually there are some extra concerns about this part, as reported in
> > [1][2].
> > 
> > [1] data_size should be read ahead of data_offset
> > https://lists.gnu.org/archive/html/qemu-devel/2020-05/msg02795.html.
> > [2] should not repeatedly update pending_bytes in vfio_save_iterate()
> > https://lists.gnu.org/archive/html/qemu-devel/2020-05/msg02796.html.
> > 
> > but as those errors are all in qemu, and we have finished basic tests in
> > both gvt & i40e, we're fine with the kernel part interface in general now.
> > (except for my concern [1], which needs to update kernel patch 1)
> >   
> 
>  >> what if pending_bytes is not 0, but vendor driver just does not want to
>  >> send data in this iteration? isn't it right to get data_size first before
>  >> getting data_offset?
> 
> If vendor driver doesn't want to send data but still has data in staging 
> buffer, vendor driver still can control to send pending_bytes for this 
> iteration as 0 as this is a trap field.
> 
> I would defer this to Alex.

This is my understanding of the protocol as well, when the device is
running, pending_bytes might drop to zero if no internal state has
changed and may be non-zero on the next iteration due to device
activity.  When the device is not running, pending_bytes reporting zero
indicates the device is done, there is no further state to transmit.
Does that meet your need/expectation?

> > so I wonder which way in your mind is better, to give our reviewed-by to
> > the kernel part now, or hold until next qemu fixes?
> > and as performance data from gvt is requested from your previous mail, is
> > that still required before the code is accepted?

The QEMU series does not need to be perfect, I kind of expect we might
see a few iterations of that beyond the kernel portion being accepted.
We should have the QEMU series to the point that we've resolved any
uapi issues though, which it 

Re: [PATCH Kernel v22 0/8] Add UAPIs to support migration for VFIO devices

2020-05-25 Thread Kirti Wankhede




On 5/25/2020 12:29 PM, Yan Zhao wrote:

On Tue, May 19, 2020 at 10:58:04AM -0600, Alex Williamson wrote:

Hi folks,

My impression is that we're getting pretty close to a workable
implementation here with v22 plus respins of patches 5, 6, and 8.  We
also have a matching QEMU series and a proposal for a new i40e
consumer, as well as I assume GVT-g updates happening internally at
Intel.  I expect all of the latter needs further review and discussion,
but we should be at the point where we can validate these proposed
kernel interfaces.  Therefore I'd like to make a call for reviews so
that we can get this wrapped up for the v5.8 merge window.  I know
Connie has some outstanding documentation comments and I'd like to make
sure everyone has an opportunity to check that their comments have been
addressed and we don't discover any new blocking issues.  Please send
your Acked-by/Reviewed-by/Tested-by tags if you're satisfied with this
interface and implementation.  Thanks!


hi Alex
after porting gvt/i40e vf migration code to kernel/qemu v23, we spotted
two bugs.
1. "Failed to get dirty bitmap for iova: 0xfe011000 size: 0x3fb0 err: 22"
This is a qemu bug that the dirty bitmap query range is not the same
as the dma map range. It can be fixed in qemu. and I just have a little
concern for kernel to have this restriction.



I never saw this unaligned size in my testing. In this case if you can 
provide vfio_* event traces, that will be helpful.



2. migration abortion, reporting
"qemu-system-x86_64-lm: vfio_load_state: Error allocating buffer
qemu-system-x86_64-lm: error while loading state section id 49(vfio)
qemu-system-x86_64-lm: load of migration failed: Cannot allocate memory"

It's still a qemu bug and we can fix it by
"
if (migration->pending_bytes == 0) {
+qemu_put_be64(f, 0);
+qemu_put_be64(f, VFIO_MIG_FLAG_END_OF_STATE);
"


In which function in QEMU do you have to add this?


and actually there are some extra concerns about this part, as reported in
[1][2].

[1] data_size should be read ahead of data_offset
https://lists.gnu.org/archive/html/qemu-devel/2020-05/msg02795.html.
[2] should not repeatedly update pending_bytes in vfio_save_iterate()
https://lists.gnu.org/archive/html/qemu-devel/2020-05/msg02796.html.

but as those errors are all in qemu, and we have finished basic tests in
both gvt & i40e, we're fine with the kernel part interface in general now.
(except for my concern [1], which needs to update kernel patch 1)



>> what if pending_bytes is not 0, but vendor driver just does not want to
>> send data in this iteration? isn't it right to get data_size first before
>> getting data_offset?

If the vendor driver doesn't want to send data but still has data in its staging
buffer, it can still choose to report pending_bytes as 0 for this
iteration, since pending_bytes is a trap field.


I would defer this to Alex.


so I wonder which way in your mind is better, to give our reviewed-by to
the kernel part now, or hold until next qemu fixes?
and as performance data from gvt is requested from your previous mail, is
that still required before the code is accepted?

BTW, we have also conducted some basic tests when viommu is on, and found out
errors like
"qemu-system-x86_64-dt: vtd_iova_to_slpte: detected slpte permission error 
(iova=0x0, level=0x3, slpte=0x0, write=1)
qemu-system-x86_64-dt: vtd_iommu_translate: detected translation failure 
(dev=00:03:00, iova=0x0)
qemu-system-x86_64-dt: New fault is not recorded due to compression of faults".



I saw these errors, I'm looking into it.

Thanks,
Kirti
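
Concern [1] referenced above is purely about read order on the migration region. A sketch of the two orders being debated; read_field() is a placeholder for reading the named field from the region, and this is an illustration of the question rather than a statement of what the series defines:

    /* Order Yan questions: touching data_offset first may make the vendor
     * driver stage a buffer even when nothing will be sent this round. */
    off = read_field(vbasedev, data_offset);
    len = read_field(vbasedev, data_size);

    /* Order Yan suggests in [1]: read data_size first and skip the
     * data_offset read (and the staging it implies) when it is zero. */
    len = read_field(vbasedev, data_size);
    if (len != 0)
        off = read_field(vbasedev, data_offset);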



Re: [PATCH Kernel v22 0/8] Add UAPIs to support migration for VFIO devices

2020-05-25 Thread Yan Zhao
On Tue, May 19, 2020 at 10:58:04AM -0600, Alex Williamson wrote:
> Hi folks,
> 
> My impression is that we're getting pretty close to a workable
> implementation here with v22 plus respins of patches 5, 6, and 8.  We
> also have a matching QEMU series and a proposal for a new i40e
> consumer, as well as I assume GVT-g updates happening internally at
> Intel.  I expect all of the latter needs further review and discussion,
> but we should be at the point where we can validate these proposed
> kernel interfaces.  Therefore I'd like to make a call for reviews so
> that we can get this wrapped up for the v5.8 merge window.  I know
> Connie has some outstanding documentation comments and I'd like to make
> sure everyone has an opportunity to check that their comments have been
> addressed and we don't discover any new blocking issues.  Please send
> your Acked-by/Reviewed-by/Tested-by tags if you're satisfied with this
> interface and implementation.  Thanks!
>
hi Alex
after porting gvt/i40e vf migration code to kernel/qemu v23, we spotted
two bugs.
1. "Failed to get dirty bitmap for iova: 0xfe011000 size: 0x3fb0 err: 22"
   This is a qemu bug that the dirty bitmap query range is not the same
   as the dma map range. It can be fixed in qemu. and I just have a little
   concern for kernel to have this restriction.

2. migration abortion, reporting
"qemu-system-x86_64-lm: vfio_load_state: Error allocating buffer
qemu-system-x86_64-lm: error while loading state section id 49(vfio)
qemu-system-x86_64-lm: load of migration failed: Cannot allocate memory"

It's still a qemu bug and we can fix it by
"
if (migration->pending_bytes == 0) {
+qemu_put_be64(f, 0);
+qemu_put_be64(f, VFIO_MIG_FLAG_END_OF_STATE);
"
and actually there are some extra concerns about this part, as reported in
[1][2].

[1] data_size should be read ahead of data_offset
https://lists.gnu.org/archive/html/qemu-devel/2020-05/msg02795.html.
[2] should not repeatedly update pending_bytes in vfio_save_iterate()
https://lists.gnu.org/archive/html/qemu-devel/2020-05/msg02796.html.

but as those errors are all in qemu, and we have finished basic tests in
both gvt & i40e, we're fine with the kernel part interface in general now.
(except for my concern [1], which needs to update kernel patch 1)

so I wonder which way in your mind is better, to give our reviewed-by to
the kernel part now, or hold until next qemu fixes?
and as performance data from gvt is requested from your previous mail, is
that still required before the code is accepted?

BTW, we have also conducted some basic tests when viommu is on, and found out
errors like 
"qemu-system-x86_64-dt: vtd_iova_to_slpte: detected slpte permission error 
(iova=0x0, level=0x3, slpte=0x0, write=1)
qemu-system-x86_64-dt: vtd_iommu_translate: detected translation failure 
(dev=00:03:00, iova=0x0)
qemu-system-x86_64-dt: New fault is not recorded due to compression of faults".

Thanks
Yan
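
The two-line fix quoted above implies that when pending_bytes is 0 the save path currently returns without writing anything for the device, so the destination never sees the data_size / VFIO_MIG_FLAG_END_OF_STATE pair it expects, and stream parsing goes wrong; the "Error allocating buffer" abort is consistent with that. A sketch of the guard the diff suggests; the function name and surrounding structure are assumptions, since "which function in QEMU" is exactly the question left open in the thread:

    /* Hypothetical placement; only the two qemu_put_be64() lines and the
     * field names come from Yan's proposed fix. */
    static int vfio_save_device_data(QEMUFile *f, VFIODevice *vbasedev)
    {
        VFIOMigration *migration = vbasedev->migration;

        if (migration->pending_bytes == 0) {
            /* Nothing to send this time: still emit a zero size and the
             * end-of-state marker so the destination stays in sync. */
            qemu_put_be64(f, 0);
            qemu_put_be64(f, VFIO_MIG_FLAG_END_OF_STATE);
            return 0;
        }

        /* ... otherwise write data_size, the data, and END_OF_STATE ... */
        return 0;
    }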







Re: [PATCH Kernel v22 0/8] Add UAPIs to support migration for VFIO devices

2020-05-21 Thread Kirti Wankhede




On 5/21/2020 12:34 PM, Yan Zhao wrote:

On Thu, May 21, 2020 at 12:39:48PM +0530, Kirti Wankhede wrote:



On 5/21/2020 10:38 AM, Yan Zhao wrote:

On Wed, May 20, 2020 at 10:46:12AM -0600, Alex Williamson wrote:

On Wed, 20 May 2020 19:10:07 +0530
Kirti Wankhede  wrote:


On 5/20/2020 8:25 AM, Yan Zhao wrote:

On Tue, May 19, 2020 at 10:58:04AM -0600, Alex Williamson wrote:

Hi folks,

My impression is that we're getting pretty close to a workable
implementation here with v22 plus respins of patches 5, 6, and 8.  We
also have a matching QEMU series and a proposal for a new i40e
consumer, as well as I assume GVT-g updates happening internally at
Intel.  I expect all of the latter needs further review and discussion,
but we should be at the point where we can validate these proposed
kernel interfaces.  Therefore I'd like to make a call for reviews so
that we can get this wrapped up for the v5.8 merge window.  I know
Connie has some outstanding documentation comments and I'd like to make
sure everyone has an opportunity to check that their comments have been
addressed and we don't discover any new blocking issues.  Please send
your Acked-by/Reviewed-by/Tested-by tags if you're satisfied with this
interface and implementation.  Thanks!

hi Alex and Kirti,
after porting to qemu v22 and kernel v22, it is found out that
it can not even pass basic live migration test with error like

"Failed to get dirty bitmap for iova: 0xca000 size: 0x3000 err: 22"


Thanks for testing Yan.
I think the last-moment change below caused this failure

https://lore.kernel.org/kvm/1589871178-8282-1-git-send-email-kwankh...@nvidia.com/

   > if (dma->iova > iova + size)
   > break;

Surprisingly with my basic testing with 2G sys mem QEMU didn't raise
abort on g_free, but I do hit this with large sys mem.
With above change, that function iterated through next vfio_dma as well.
Check should be as below:

-   if (dma->iova > iova + size)
+   if (dma->iova > iova + size -1)



Or just:

if (dma->iova >= iova + size)

Thanks,
Alex



   break;

Another fix is in QEMU.
https://lists.gnu.org/archive/html/qemu-devel/2020-05/msg04751.html

   > > +range->bitmap.size = ROUND_UP(pages, 64) / 8;
   >
   > ROUND_UP(npages/8, sizeof(u64))?
   >

If npages < 8, npages/8 is 0 and ROUND_UP(0, 8) returns 0.

Changing it as below

-range->bitmap.size = ROUND_UP(pages / 8, sizeof(uint64_t));
+range->bitmap.size = ROUND_UP(pages, sizeof(__u64) * BITS_PER_BYTE) /
+ BITS_PER_BYTE;

I'm updating patches with these fixes and Cornelia's suggestion soon.

Due to shortage of time I may not be able to address all the concerns
raised on previous versions of QEMU; I'm trying to make the QEMU side code
available for testing for others with latest kernel changes. Don't
worry, I will revisit comments on QEMU patches. Right now first priority
is to test kernel UAPI and prepare kernel patches for 5.8




hi Kirti
by updating kernel/qemu to v23, still met below two types of errors.
just basic migration test.
(the guest VM size is 2G for all reported bugs).

"Failed to get dirty bitmap for iova: 0xfe011000 size: 0x3fb0 err: 22"



size doesn't look correct here, below check should be failing.
  range.size & (iommu_pgsize - 1)


or

"qemu-system-x86_64-lm: vfio_load_state: Error allocating buffer
qemu-system-x86_64-lm: error while loading state section id 49(vfio)
qemu-system-x86_64-lm: load of migration failed: Cannot allocate memory"




Above error is from:
 buf = g_try_malloc0(data_size);
 if (!buf) {
 error_report("%s: Error allocating buffer ", __func__);
 return -ENOMEM;
 }

Seems you are running out of memory?


no. my host memory is about 60G.
just migrate with command "migrate -d xxx" without speed limit.
FYI.



Traces are added in migration code so enabling vfio_* traces at source 
and destination qemu commandline helps to debug and analyze any 
migration related errors.


Thanks,
Kirti
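
The bitmap-size arithmetic discussed above is easiest to check with a tiny page count. A standalone demonstration; the ROUND_UP macro, includes and main() are local scaffolding, and only the two expressions being compared come from the thread:

    #include <stdint.h>
    #include <stdio.h>

    #define BITS_PER_BYTE 8
    #define ROUND_UP(x, y) ((((x) + (y) - 1) / (y)) * (y))

    int main(void)
    {
        uint64_t pages = 3;  /* fewer than 8 pages, the case called out above */

        /* Suggested ROUND_UP(npages/8, sizeof(u64)): 3/8 == 0, rounds to 0 bytes. */
        printf("%llu\n",
               (unsigned long long)ROUND_UP(pages / 8, sizeof(uint64_t)));

        /* Fixed form: round the page count up to a multiple of 64 bits first,
         * then convert to bytes, giving 8 bytes, i.e. one whole __u64. */
        printf("%llu\n",
               (unsigned long long)(ROUND_UP(pages, sizeof(uint64_t) * BITS_PER_BYTE)
                                    / BITS_PER_BYTE));
        return 0;
    }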






Re: [PATCH Kernel v22 0/8] Add UAPIs to support migration for VFIO devices

2020-05-21 Thread Kirti Wankhede




On 5/21/2020 12:34 PM, Yan Zhao wrote:

On Thu, May 21, 2020 at 12:39:48PM +0530, Kirti Wankhede wrote:



On 5/21/2020 10:38 AM, Yan Zhao wrote:

On Wed, May 20, 2020 at 10:46:12AM -0600, Alex Williamson wrote:

On Wed, 20 May 2020 19:10:07 +0530
Kirti Wankhede  wrote:


On 5/20/2020 8:25 AM, Yan Zhao wrote:

On Tue, May 19, 2020 at 10:58:04AM -0600, Alex Williamson wrote:

Hi folks,

My impression is that we're getting pretty close to a workable
implementation here with v22 plus respins of patches 5, 6, and 8.  We
also have a matching QEMU series and a proposal for a new i40e
consumer, as well as I assume GVT-g updates happening internally at
Intel.  I expect all of the latter needs further review and discussion,
but we should be at the point where we can validate these proposed
kernel interfaces.  Therefore I'd like to make a call for reviews so
that we can get this wrapped up for the v5.8 merge window.  I know
Connie has some outstanding documentation comments and I'd like to make
sure everyone has an opportunity to check that their comments have been
addressed and we don't discover any new blocking issues.  Please send
your Acked-by/Reviewed-by/Tested-by tags if you're satisfied with this
interface and implementation.  Thanks!

hi Alex and Kirti,
after porting to qemu v22 and kernel v22, it is found out that
it can not even pass basic live migration test with error like

"Failed to get dirty bitmap for iova: 0xca000 size: 0x3000 err: 22"


Thanks for testing Yan.
I think the last-moment change below caused this failure

https://lore.kernel.org/kvm/1589871178-8282-1-git-send-email-kwankh...@nvidia.com/

   > if (dma->iova > iova + size)
   > break;

Surprisingly with my basic testing with 2G sys mem QEMU didn't raise
abort on g_free, but I do hit this with large sys mem.
With above change, that function iterated through next vfio_dma as well.
Check should be as below:

-   if (dma->iova > iova + size)
+   if (dma->iova > iova + size -1)



Or just:

if (dma->iova >= iova + size)

Thanks,
Alex



   break;

Another fix is in QEMU.
https://lists.gnu.org/archive/html/qemu-devel/2020-05/msg04751.html

   > > +range->bitmap.size = ROUND_UP(pages, 64) / 8;
   >
   > ROUND_UP(npages/8, sizeof(u64))?
   >

If npages < 8, npages/8 is 0 and ROUND_UP(0, 8) returns 0.

Changing it as below

-range->bitmap.size = ROUND_UP(pages / 8, sizeof(uint64_t));
+range->bitmap.size = ROUND_UP(pages, sizeof(__u64) * BITS_PER_BYTE) /
+ BITS_PER_BYTE;

I'm updating patches with these fixes and Cornelia's suggestion soon.

Due to shortage of time I may not be able to address all the concerns
raised on previous versions of QEMU; I'm trying to make the QEMU side code
available for testing for others with latest kernel changes. Don't
worry, I will revisit comments on QEMU patches. Right now first priority
is to test kernel UAPI and prepare kernel patches for 5.8




hi Kirti
by updating kernel/qemu to v23, still met below two types of errors.
just basic migration test.
(the guest VM size is 2G for all reported bugs).

"Failed to get dirty bitmap for iova: 0xfe011000 size: 0x3fb0 err: 22"



size doesn't look correct here, below check should be failing.
  range.size & (iommu_pgsize - 1)


or

"qemu-system-x86_64-lm: vfio_load_state: Error allocating buffer
qemu-system-x86_64-lm: error while loading state section id 49(vfio)
qemu-system-x86_64-lm: load of migration failed: Cannot allocate memory"




Above error is from:
 buf = g_try_malloc0(data_size);
 if (!buf) {
 error_report("%s: Error allocating buffer ", __func__);
 return -ENOMEM;
 }

Seems you are running out of memory?


no. my host memory is about 60G.
just migrate with command "migrate -d xxx" without speed limit.
FYI.



Probably you will have to figure out why g_try_malloc0() is failing. 
What is data_size when it fails?


Thanks,
Kirti
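
On the "What is data_size when it fails?" question above: since the host has plenty of memory, the interesting question is whether g_try_malloc0() is being handed a garbage size. One way to see it is to trace and bound-check data_size right before the allocation; a debug-only sketch around the quoted snippet, where the 1 GiB bound and the extra messages are arbitrary choices, not part of the posted series:

    /* Debug instrumentation only; error_report() and g_try_malloc0() are the
     * calls already used in the quoted code.  PRIx64 comes from <inttypes.h>. */
    error_report("%s: incoming data_size=0x%" PRIx64, __func__, data_size);

    if (data_size > (1ULL << 30)) {
        /* An implausible size here points at stream framing (for example the
         * missing END_OF_STATE/size pair discussed earlier), not real memory
         * pressure on a 60G host. */
        error_report("%s: implausible data_size", __func__);
        return -EINVAL;
    }

    buf = g_try_malloc0(data_size);
    if (!buf) {
        error_report("%s: Error allocating buffer ", __func__);
        return -ENOMEM;
    }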



Re: [PATCH Kernel v22 0/8] Add UAPIs to support migration for VFIO devices

2020-05-21 Thread Yan Zhao
On Thu, May 21, 2020 at 12:39:48PM +0530, Kirti Wankhede wrote:
> 
> 
> On 5/21/2020 10:38 AM, Yan Zhao wrote:
> > On Wed, May 20, 2020 at 10:46:12AM -0600, Alex Williamson wrote:
> > > On Wed, 20 May 2020 19:10:07 +0530
> > > Kirti Wankhede  wrote:
> > > 
> > > > On 5/20/2020 8:25 AM, Yan Zhao wrote:
> > > > > On Tue, May 19, 2020 at 10:58:04AM -0600, Alex Williamson wrote:
> > > > > > Hi folks,
> > > > > > 
> > > > > > My impression is that we're getting pretty close to a workable
> > > > > > implementation here with v22 plus respins of patches 5, 6, and 8.  
> > > > > > We
> > > > > > also have a matching QEMU series and a proposal for a new i40e
> > > > > > consumer, as well as I assume GVT-g updates happening internally at
> > > > > > Intel.  I expect all of the latter needs further review and 
> > > > > > discussion,
> > > > > > but we should be at the point where we can validate these proposed
> > > > > > kernel interfaces.  Therefore I'd like to make a call for reviews so
> > > > > > that we can get this wrapped up for the v5.8 merge window.  I know
> > > > > > Connie has some outstanding documentation comments and I'd like to 
> > > > > > make
> > > > > > sure everyone has an opportunity to check that their comments have 
> > > > > > been
> > > > > > addressed and we don't discover any new blocking issues.  Please 
> > > > > > send
> > > > > > your Acked-by/Reviewed-by/Tested-by tags if you're satisfied with 
> > > > > > this
> > > > > > interface and implementation.  Thanks!
> > > > > hi Alex and Kirti,
> > > > > after porting to qemu v22 and kernel v22, it is found out that
> > > > > it can not even pass basic live migration test with error like
> > > > > 
> > > > > "Failed to get dirty bitmap for iova: 0xca000 size: 0x3000 err: 22"
> > > > 
> > > > Thanks for testing Yan.
> > > > I think the last-moment change below caused this failure
> > > > 
> > > > https://lore.kernel.org/kvm/1589871178-8282-1-git-send-email-kwankh...@nvidia.com/
> > > > 
> > > >   > if (dma->iova > iova + size)
> > > >   > break;
> > > > 
> > > > Surprisingly with my basic testing with 2G sys mem QEMU didn't raise
> > > > abort on g_free, but I do hit this with large sys mem.
> > > > With above change, that function iterated through next vfio_dma as well.
> > > > Check should be as below:
> > > > 
> > > > -   if (dma->iova > iova + size)
> > > > +   if (dma->iova > iova + size -1)
> > > 
> > > 
> > > Or just:
> > > 
> > >   if (dma->iova >= iova + size)
> > > 
> > > Thanks,
> > > Alex
> > > 
> > > 
> > > >   break;
> > > > 
> > > > Another fix is in QEMU.
> > > > https://lists.gnu.org/archive/html/qemu-devel/2020-05/msg04751.html
> > > > 
> > > >   > > +range->bitmap.size = ROUND_UP(pages, 64) / 8;
> > > >   >
> > > >   > ROUND_UP(npages/8, sizeof(u64))?
> > > >   >
> > > > 
> > > > If npages < 8, npages/8 is 0 and ROUND_UP(0, 8) returns 0.
> > > > 
> > > > Changing it as below
> > > > 
> > > > -range->bitmap.size = ROUND_UP(pages / 8, sizeof(uint64_t));
> > > > +range->bitmap.size = ROUND_UP(pages, sizeof(__u64) *
> > > > BITS_PER_BYTE) /
> > > > + BITS_PER_BYTE;
> > > > 
> > > > I'm updating patches with these fixes and Cornelia's suggestion soon.
> > > > 
> > > > Due to shortage of time I may not be able to address all the concerns
> > > > raised on previous versions of QEMU; I'm trying to make the QEMU side code
> > > > available for testing for others with latest kernel changes. Don't
> > > > worry, I will revisit comments on QEMU patches. Right now first priority
> > > > is to test kernel UAPI and prepare kernel patches for 5.8
> > > > 
> > > 
> > hi Kirti
> > by updating kernel/qemu to v23, still met below two types of errors.
> > just basic migration test.
> > (the guest VM size is 2G for all reported bugs).
> > 
> > "Failed to get dirty bitmap for iova: 0xfe011000 size: 0x3fb0 err: 22"
> > 
> 
> size doesn't look correct here, below check should be failing.
>  range.size & (iommu_pgsize - 1)
> 
> > or
> > 
> > "qemu-system-x86_64-lm: vfio_load_state: Error allocating buffer
> > qemu-system-x86_64-lm: error while loading state section id 49(vfio)
> > qemu-system-x86_64-lm: load of migration failed: Cannot allocate memory"
> > 
> > 
> 
> Above error is from:
> buf = g_try_malloc0(data_size);
> if (!buf) {
> error_report("%s: Error allocating buffer ", __func__);
> return -ENOMEM;
> }
> 
> Seems you are running out of memory?
>
no. my host memory is about 60G.
just migrate with command "migrate -d xxx" without speed limit.
FYI.

Yan



Re: [PATCH Kernel v22 0/8] Add UAPIs to support migration for VFIO devices

2020-05-21 Thread Kirti Wankhede




On 5/21/2020 10:38 AM, Yan Zhao wrote:

On Wed, May 20, 2020 at 10:46:12AM -0600, Alex Williamson wrote:

On Wed, 20 May 2020 19:10:07 +0530
Kirti Wankhede  wrote:


On 5/20/2020 8:25 AM, Yan Zhao wrote:

On Tue, May 19, 2020 at 10:58:04AM -0600, Alex Williamson wrote:

Hi folks,

My impression is that we're getting pretty close to a workable
implementation here with v22 plus respins of patches 5, 6, and 8.  We
also have a matching QEMU series and a proposal for a new i40e
consumer, as well as I assume GVT-g updates happening internally at
Intel.  I expect all of the latter needs further review and discussion,
but we should be at the point where we can validate these proposed
kernel interfaces.  Therefore I'd like to make a call for reviews so
that we can get this wrapped up for the v5.8 merge window.  I know
Connie has some outstanding documentation comments and I'd like to make
sure everyone has an opportunity to check that their comments have been
addressed and we don't discover any new blocking issues.  Please send
your Acked-by/Reviewed-by/Tested-by tags if you're satisfied with this
interface and implementation.  Thanks!
  

hi Alex and Kirti,
after porting to qemu v22 and kernel v22, it is found out that
it can not even pass basic live migration test with error like

"Failed to get dirty bitmap for iova: 0xca000 size: 0x3000 err: 22"
   


Thanks for testing Yan.
I think the last-moment change below caused this failure

https://lore.kernel.org/kvm/1589871178-8282-1-git-send-email-kwankh...@nvidia.com/

  >  if (dma->iova > iova + size)
  >  break;

Surprisingly with my basic testing with 2G sys mem QEMU didn't raise
abort on g_free, but I do hit this with large sys mem.
With above change, that function iterated through next vfio_dma as well.
Check should be as below:

-   if (dma->iova > iova + size)
+   if (dma->iova > iova + size -1)



Or just:

if (dma->iova >= iova + size)

Thanks,
Alex



  break;

Another fix is in QEMU.
https://lists.gnu.org/archive/html/qemu-devel/2020-05/msg04751.html

  > > +range->bitmap.size = ROUND_UP(pages, 64) / 8;
  >
  > ROUND_UP(npages/8, sizeof(u64))?
  >

If npages < 8, npages/8 is 0 and ROUND_UP(0, 8) returns 0.

Changing it as below

-range->bitmap.size = ROUND_UP(pages / 8, sizeof(uint64_t));
+range->bitmap.size = ROUND_UP(pages, sizeof(__u64) * BITS_PER_BYTE) /
+ BITS_PER_BYTE;

I'm updating patches with these fixes and Cornelia's suggestion soon.

Due to shortage of time I may not be able to address all the concerns
raised on previous versions of QEMU; I'm trying to make the QEMU side code
available for testing for others with latest kernel changes. Don't
worry, I will revisit comments on QEMU patches. Right now first priority
is to test kernel UAPI and prepare kernel patches for 5.8




hi Kirti
by updating kernel/qemu to v23, still met below two types of errors.
just basic migration test.
(the guest VM size is 2G for all reported bugs).

"Failed to get dirty bitmap for iova: 0xfe011000 size: 0x3fb0 err: 22"



size doesn't look correct here, below check should be failing.
 range.size & (iommu_pgsize - 1)


or

"qemu-system-x86_64-lm: vfio_load_state: Error allocating buffer
qemu-system-x86_64-lm: error while loading state section id 49(vfio)
qemu-system-x86_64-lm: load of migration failed: Cannot allocate memory"




Above error is from:
buf = g_try_malloc0(data_size);
if (!buf) {
error_report("%s: Error allocating buffer ", __func__);
return -ENOMEM;
}

Seems you are running out of memory?

Thanks,
Kirti



Re: [PATCH Kernel v22 0/8] Add UAPIs to support migration for VFIO devices

2020-05-20 Thread Yan Zhao
On Wed, May 20, 2020 at 10:46:12AM -0600, Alex Williamson wrote:
> On Wed, 20 May 2020 19:10:07 +0530
> Kirti Wankhede  wrote:
> 
> > On 5/20/2020 8:25 AM, Yan Zhao wrote:
> > > On Tue, May 19, 2020 at 10:58:04AM -0600, Alex Williamson wrote:  
> > >> Hi folks,
> > >>
> > >> My impression is that we're getting pretty close to a workable
> > >> implementation here with v22 plus respins of patches 5, 6, and 8.  We
> > >> also have a matching QEMU series and a proposal for a new i40e
> > >> consumer, as well as I assume GVT-g updates happening internally at
> > >> Intel.  I expect all of the latter needs further review and discussion,
> > >> but we should be at the point where we can validate these proposed
> > >> kernel interfaces.  Therefore I'd like to make a call for reviews so
> > >> that we can get this wrapped up for the v5.8 merge window.  I know
> > >> Connie has some outstanding documentation comments and I'd like to make
> > >> sure everyone has an opportunity to check that their comments have been
> > >> addressed and we don't discover any new blocking issues.  Please send
> > >> your Acked-by/Reviewed-by/Tested-by tags if you're satisfied with this
> > >> interface and implementation.  Thanks!
> > >>  
> > > hi Alex and Kirti,
> > > after porting to qemu v22 and kernel v22, it is found out that
> > > it can not even pass basic live migration test with error like
> > > 
> > > "Failed to get dirty bitmap for iova: 0xca000 size: 0x3000 err: 22"
> > >   
> > 
> > Thanks for testing Yan.
> > I think the last-moment change below caused this failure
> > 
> > https://lore.kernel.org/kvm/1589871178-8282-1-git-send-email-kwankh...@nvidia.com/
> > 
> >  >  if (dma->iova > iova + size)
> >  >  break;  
> > 
> > Surprisingly with my basic testing with 2G sys mem QEMU didn't raise 
> > abort on g_free, but I do hit this with large sys mem.
> > With above change, that function iterated through next vfio_dma as well. 
> > Check should be as below:
> > 
> > -   if (dma->iova > iova + size)
> > +   if (dma->iova > iova + size -1)
> 
> 
> Or just:
> 
>   if (dma->iova >= iova + size)
> 
> Thanks,
> Alex
> 
> 
> >  break;
> > 
> > Another fix is in QEMU.
> > https://lists.gnu.org/archive/html/qemu-devel/2020-05/msg04751.html
> > 
> >  > > +range->bitmap.size = ROUND_UP(pages, 64) / 8;  
> >  >
> >  > ROUND_UP(npages/8, sizeof(u64))?
> >  >  
> > 
> > If npages < 8, npages/8 is 0 and ROUND_UP(0, 8) returns 0.
> > 
> > Changing it as below
> > 
> > -range->bitmap.size = ROUND_UP(pages / 8, sizeof(uint64_t));
> > +range->bitmap.size = ROUND_UP(pages, sizeof(__u64) * 
> > BITS_PER_BYTE) /
> > + BITS_PER_BYTE;
> > 
> > I'm updating patches with these fixes and Cornelia's suggestion soon.
> > 
> > Due to shortage of time I may not be able to address all the concerns
> > raised on previous versions of QEMU; I'm trying to make the QEMU side code
> > available for testing for others with latest kernel changes. Don't 
> > worry, I will revisit comments on QEMU patches. Right now first priority 
> > is to test kernel UAPI and prepare kernel patches for 5.8
> > 
>
hi Kirti
by updating kernel/qemu to v23, still met below two types of errors.
just basic migration test.
(the guest VM size is 2G for all reported bugs).

"Failed to get dirty bitmap for iova: 0xfe011000 size: 0x3fb0 err: 22"

or 

"qemu-system-x86_64-lm: vfio_load_state: Error allocating buffer
qemu-system-x86_64-lm: error while loading state section id 49(vfio)
qemu-system-x86_64-lm: load of migration failed: Cannot allocate memory"


Thanks
Yan



Re: [PATCH Kernel v22 0/8] Add UAPIs to support migration for VFIO devices

2020-05-20 Thread Alex Williamson
On Wed, 20 May 2020 19:10:07 +0530
Kirti Wankhede  wrote:

> On 5/20/2020 8:25 AM, Yan Zhao wrote:
> > On Tue, May 19, 2020 at 10:58:04AM -0600, Alex Williamson wrote:  
> >> Hi folks,
> >>
> >> My impression is that we're getting pretty close to a workable
> >> implementation here with v22 plus respins of patches 5, 6, and 8.  We
> >> also have a matching QEMU series and a proposal for a new i40e
> >> consumer, as well as I assume GVT-g updates happening internally at
> >> Intel.  I expect all of the latter needs further review and discussion,
> >> but we should be at the point where we can validate these proposed
> >> kernel interfaces.  Therefore I'd like to make a call for reviews so
> >> that we can get this wrapped up for the v5.8 merge window.  I know
> >> Connie has some outstanding documentation comments and I'd like to make
> >> sure everyone has an opportunity to check that their comments have been
> >> addressed and we don't discover any new blocking issues.  Please send
> >> your Acked-by/Reviewed-by/Tested-by tags if you're satisfied with this
> >> interface and implementation.  Thanks!
> >>  
> > hi Alex and Kirti,
> > after porting to qemu v22 and kernel v22, it is found out that
> > it can not even pass basic live migration test with error like
> > 
> > "Failed to get dirty bitmap for iova: 0xca000 size: 0x3000 err: 22"
> >   
> 
> Thanks for testing Yan.
> I think the last-moment change below caused this failure
> 
> https://lore.kernel.org/kvm/1589871178-8282-1-git-send-email-kwankh...@nvidia.com/
> 
>  >if (dma->iova > iova + size)
>  >break;  
> 
> Surprisingly with my basic testing with 2G sys mem QEMU didn't raise 
> abort on g_free, but I do hit this with large sys mem.
> With above change, that function iterated through next vfio_dma as well. 
> Check should be as below:
> 
> -   if (dma->iova > iova + size)
> +   if (dma->iova > iova + size -1)


Or just:

if (dma->iova >= iova + size)

Thanks,
Alex


>  break;
> 
> Another fix is in QEMU.
> https://lists.gnu.org/archive/html/qemu-devel/2020-05/msg04751.html
> 
>  > > +range->bitmap.size = ROUND_UP(pages, 64) / 8;  
>  >
>  > ROUND_UP(npages/8, sizeof(u64))?
>  >  
> 
> If npages < 8, npages/8 is 0 and ROUND_UP(0, 8) returns 0.
> 
> Changing it as below
> 
> -range->bitmap.size = ROUND_UP(pages / 8, sizeof(uint64_t));
> +range->bitmap.size = ROUND_UP(pages, sizeof(__u64) * 
> BITS_PER_BYTE) /
> + BITS_PER_BYTE;
> 
> I'm updating patches with these fixes and Cornelia's suggestion soon.
> 
> Due to shortage of time I may not be able to address all the concerns
> raised on previous versions of QEMU; I'm trying to make the QEMU side code
> available for testing for others with latest kernel changes. Don't 
> worry, I will revisit comments on QEMU patches. Right now first priority 
> is to test kernel UAPI and prepare kernel patches for 5.8
> 
> Thanks,
> Kirti
> 




Re: [PATCH Kernel v22 0/8] Add UAPIs to support migration for VFIO devices

2020-05-20 Thread Kirti Wankhede




On 5/20/2020 8:25 AM, Yan Zhao wrote:

On Tue, May 19, 2020 at 10:58:04AM -0600, Alex Williamson wrote:

Hi folks,

My impression is that we're getting pretty close to a workable
implementation here with v22 plus respins of patches 5, 6, and 8.  We
also have a matching QEMU series and a proposal for a new i40e
consumer, as well as I assume GVT-g updates happening internally at
Intel.  I expect all of the latter needs further review and discussion,
but we should be at the point where we can validate these proposed
kernel interfaces.  Therefore I'd like to make a call for reviews so
that we can get this wrapped up for the v5.8 merge window.  I know
Connie has some outstanding documentation comments and I'd like to make
sure everyone has an opportunity to check that their comments have been
addressed and we don't discover any new blocking issues.  Please send
your Acked-by/Reviewed-by/Tested-by tags if you're satisfied with this
interface and implementation.  Thanks!


hi Alex and Kirti,
after porting to qemu v22 and kernel v22, it is found out that
it can not even pass basic live migration test with error like

"Failed to get dirty bitmap for iova: 0xca000 size: 0x3000 err: 22"



Thanks for testing Yan.
I think the last-moment change below caused this failure

https://lore.kernel.org/kvm/1589871178-8282-1-git-send-email-kwankh...@nvidia.com/

>if (dma->iova > iova + size)
>break;

Surprisingly with my basic testing with 2G sys mem QEMU didn't raise 
abort on g_free, but I do hit this with large sys mem.
With above change, that function iterated through next vfio_dma as well. 
Check should be as below:


-   if (dma->iova > iova + size)
+   if (dma->iova > iova + size -1)
break;

Another fix is in QEMU.
https://lists.gnu.org/archive/html/qemu-devel/2020-05/msg04751.html

> > +range->bitmap.size = ROUND_UP(pages, 64) / 8;
>
> ROUND_UP(npages/8, sizeof(u64))?
>

If npages < 8, npages/8 is 0 and ROUND_UP(0, 8) returns 0.

Changing it as below

-range->bitmap.size = ROUND_UP(pages / 8, sizeof(uint64_t));
+range->bitmap.size = ROUND_UP(pages, sizeof(__u64) * BITS_PER_BYTE) /
+ BITS_PER_BYTE;

I'm updating patches with these fixes and Cornelia's suggestion soon.

Due to shortage of time I may not be able to address all the concerns
raised on previous versions of QEMU; I'm trying to make the QEMU side code
available for testing for others with latest kernel changes. Don't 
worry, I will revisit comments on QEMU patches. Right now first priority 
is to test kernel UAPI and prepare kernel patches for 5.8


Thanks,
Kirti
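
The one-character change above is an inclusive-versus-exclusive bound on a half-open interval. With the request covering [iova, iova + size), a mapping that starts exactly at iova + size lies entirely outside it, so the walk must stop there; with ">" alone the loop visits one vfio_dma too many, which matches the extra iteration (and the g_free abort seen only with larger system memory) Kirti describes. A standalone way to see it, not kernel code:

    #include <stdbool.h>
    #include <stdint.h>

    /* Does mapping [dma_iova, dma_iova + dma_size) overlap the request
     * [iova, iova + size)?  Both ranges are half-open. */
    static bool dma_overlaps(uint64_t dma_iova, uint64_t dma_size,
                             uint64_t iova, uint64_t size)
    {
        return dma_iova < iova + size && iova < dma_iova + dma_size;
    }

    /* In the loop above this becomes: break once dma->iova >= iova + size
     * (equivalently dma->iova > iova + size - 1).  With the original
     * "dma->iova > iova + size" test, a mapping starting exactly at
     * iova + size is still processed even though it cannot overlap. */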



Re: [PATCH Kernel v22 0/8] Add UAPIs to support migration for VFIO devices

2020-05-19 Thread Yan Zhao
On Tue, May 19, 2020 at 10:58:04AM -0600, Alex Williamson wrote:
> Hi folks,
> 
> My impression is that we're getting pretty close to a workable
> implementation here with v22 plus respins of patches 5, 6, and 8.  We
> also have a matching QEMU series and a proposal for a new i40e
> consumer, as well as I assume GVT-g updates happening internally at
> Intel.  I expect all of the latter needs further review and discussion,
> but we should be at the point where we can validate these proposed
> kernel interfaces.  Therefore I'd like to make a call for reviews so
> that we can get this wrapped up for the v5.8 merge window.  I know
> Connie has some outstanding documentation comments and I'd like to make
> sure everyone has an opportunity to check that their comments have been
> addressed and we don't discover any new blocking issues.  Please send
> your Acked-by/Reviewed-by/Tested-by tags if you're satisfied with this
> interface and implementation.  Thanks!
> 
hi Alex and Kirti,
after porting to qemu v22 and kernel v22, I found that it cannot even
pass a basic live migration test; it fails with an error like

"Failed to get dirty bitmap for iova: 0xca000 size: 0x3000 err: 22"

Thanks
Yan

> 
> On Mon, 18 May 2020 11:26:29 +0530
> Kirti Wankhede  wrote:
> 
> > Hi,
> > 
> > This patch set adds:
> > * IOCTL VFIO_IOMMU_DIRTY_PAGES to get the dirty pages bitmap with
> >   respect to the IOMMU container rather than per device. All pages pinned by
> >   the vendor driver through the vfio_pin_pages external API have to be marked
> >   dirty during migration. When an IOMMU-capable device is present in the
> >   container and all pages are pinned and mapped, then all pages are marked
> >   dirty.
> >   When there are CPU writes, CPU dirty page tracking can identify the dirtied
> >   pages, but any page pinned by the vendor driver can also be written by the
> >   device. As of now there is no device with hardware support for dirty page
> >   tracking, so all pinned pages should be considered dirty.
> >   This ioctl is also used to start/stop dirty page tracking for pinned and
> >   unpinned pages while migration is active.
> > 
> > * Updated IOCTL VFIO_IOMMU_UNMAP_DMA to get the dirty pages bitmap before
> >   unmapping an IO virtual address range.
> >   With a vIOMMU, during the pre-copy phase of migration, while CPUs are still
> >   running, an IO virtual address unmap can happen while the device still holds
> >   references to the guest pfns. Those pages should be reported as dirty before
> >   the unmap, so that the VFIO userspace application can copy the content of
> >   those pages from source to destination.
> > 
> > * Patch 8 detects whether the driver of an IOMMU-capable device is smart
> >   enough to report pages to be marked dirty by pinning pages using the
> >   vfio_pin_pages() API.
> > 
> > 
> > Yet TODO:
> > Since there is no device which has hardware support for system memory
> > dirty bitmap tracking, right now there is no other API from the vendor driver
> > to the VFIO IOMMU module to report dirty pages. In the future, when such
> > hardware support is implemented, an API will be required so that the vendor
> > driver can report dirty pages to the VFIO module during the migration phases.
> > 
> > Adding the revision history from the previous QEMU patch set to show the KABI
> > changes done so far
> > 
> > v21 -> v22
> > - Fixed issue raised by Alex :
> > https://lore.kernel.org/kvm/20200515163307.72951...@w520.home/
> > 
> > v20 -> v21
> > - Added check in GET_BITMAP ioctl for vfio_dma boundaries.
> > - Updated unmap ioctl function - as suggested by Alex.
> > - Updated comments in DIRTY_TRACKING ioctl definition - as suggested by
> >   Cornelia.
> > 
> > v19 -> v20
> > - Fixed ioctl to get dirty bitmap to get bitmap of multiple vfio_dmas
> > - Fixed unmap ioctl to get dirty bitmap of multiple vfio_dmas.
> > - Removed flag definition from migration capability.
> > 
> > v18 -> v19
> > - Updated migration capability with supported page sizes bitmap for dirty
> >   page tracking and  maximum bitmap size supported by kernel module.
> > - Added patch to calculate and cache pgsize_bitmap when iommu->domain_list
> >   is updated.
> > - Removed extra buffers added in previous version for bitmap manipulation
> >   and optimised the code.
> > 
> > v17 -> v18
> > - Add migration capability to the capability chain for VFIO_IOMMU_GET_INFO
> >   ioctl
> > - Updated UMAP_DMA ioctl to return bitmap of multiple vfio_dma
> > 
> > v16 -> v17
> > - Fixed errors reported by kbuild test robot  on i386
> > 
> > v15 -> v16
> > - Minor edits and nit picks (Auger Eric)
> > - On copying bitmap to user, re-populated bitmap only for pinned pages,
> >   excluding unmapped pages and CPU dirtied pages.
> > - Patches are on tag: next-20200318 and 1-3 patches from Yan's series
> >   https://lkml.org/lkml/2020/3/12/1255
> > 
> > v14 -> v15
> > - Minor edits and nit picks.
> > - In the verification of user allocated bitmap memory, added check of
> >maximum size.
> > - Patches are on tag: next-20200318 and 1-3 patches 

Re: [PATCH Kernel v22 0/8] Add UAPIs to support migration for VFIO devices

2020-05-19 Thread Alex Williamson
Hi folks,

My impression is that we're getting pretty close to a workable
implementation here with v22 plus respins of patches 5, 6, and 8.  We
also have a matching QEMU series and a proposal for a new i40e
consumer, as well as I assume GVT-g updates happening internally at
Intel.  I expect all of the latter needs further review and discussion,
but we should be at the point where we can validate these proposed
kernel interfaces.  Therefore I'd like to make a call for reviews so
that we can get this wrapped up for the v5.8 merge window.  I know
Connie has some outstanding documentation comments and I'd like to make
sure everyone has an opportunity to check that their comments have been
addressed and we don't discover any new blocking issues.  Please send
your Acked-by/Reviewed-by/Tested-by tags if you're satisfied with this
interface and implementation.  Thanks!

Alex

On Mon, 18 May 2020 11:26:29 +0530
Kirti Wankhede  wrote:

> Hi,
> 
> This patch set adds:
> * IOCTL VFIO_IOMMU_DIRTY_PAGES to get the dirty pages bitmap with
>   respect to the IOMMU container rather than per device. All pages pinned by
>   the vendor driver through the vfio_pin_pages external API have to be marked
>   dirty during migration. When an IOMMU-capable device is present in the
>   container and all pages are pinned and mapped, then all pages are marked
>   dirty.
>   When there are CPU writes, CPU dirty page tracking can identify the dirtied
>   pages, but any page pinned by the vendor driver can also be written by the
>   device. As of now there is no device with hardware support for dirty page
>   tracking, so all pinned pages should be considered dirty.
>   This ioctl is also used to start/stop dirty page tracking for pinned and
>   unpinned pages while migration is active (a userspace usage sketch follows
>   this list).
> 
> * Updated IOCTL VFIO_IOMMU_UNMAP_DMA to get the dirty pages bitmap before
>   unmapping an IO virtual address range.
>   With a vIOMMU, during the pre-copy phase of migration, while CPUs are still
>   running, an IO virtual address unmap can happen while the device still holds
>   references to the guest pfns. Those pages should be reported as dirty before
>   the unmap, so that the VFIO userspace application can copy the content of
>   those pages from source to destination (see the second sketch after this list).
> 
> * Patch 8 detects whether the driver of an IOMMU-capable device is smart
>   enough to report pages to be marked dirty by pinning pages using the
>   vfio_pin_pages() API.
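
For reference, a minimal userspace sketch of the dirty page tracking flow
described in the first bullet above, assuming the structure and flag names
proposed in this series (illustrative only until the UAPI is merged; error
handling omitted):

/*
 * Userspace sketch of the proposed dirty page tracking flow, assuming the
 * VFIO_IOMMU_DIRTY_PAGES ioctl and structures from this series.  The
 * container fd, iova, size and pgsize are assumed to come from the
 * caller's existing VFIO setup; values and error handling are omitted.
 */
#include <stdlib.h>
#include <sys/ioctl.h>
#include <linux/vfio.h>

static void fetch_dirty_bitmap(int container_fd, __u64 iova, __u64 size,
			       __u64 pgsize)
{
	struct vfio_iommu_type1_dirty_bitmap start = {
		.argsz = sizeof(start),
		.flags = VFIO_IOMMU_DIRTY_PAGES_FLAG_START,
	};
	struct vfio_iommu_type1_dirty_bitmap stop = {
		.argsz = sizeof(stop),
		.flags = VFIO_IOMMU_DIRTY_PAGES_FLAG_STOP,
	};
	struct vfio_iommu_type1_dirty_bitmap *dbitmap;
	struct vfio_iommu_type1_dirty_bitmap_get *range;
	__u64 pages = (size + pgsize - 1) / pgsize;
	/* One bit per page, rounded up to a multiple of 64 bits (see the
	 * sizing discussion earlier in the thread). */
	__u64 bitmap_bytes = ((pages + 63) / 64) * sizeof(__u64);
	size_t argsz = sizeof(*dbitmap) + sizeof(*range);

	/* Start dirty page tracking for the whole container. */
	ioctl(container_fd, VFIO_IOMMU_DIRTY_PAGES, &start);

	/* ... pre-copy iterations run while tracking is active ... */

	/* Query the dirty bitmap for one mapped IOVA range. */
	dbitmap = calloc(1, argsz);
	range = (struct vfio_iommu_type1_dirty_bitmap_get *)&dbitmap->data;
	dbitmap->argsz = argsz;
	dbitmap->flags = VFIO_IOMMU_DIRTY_PAGES_FLAG_GET_BITMAP;
	range->iova = iova;
	range->size = size;
	range->bitmap.pgsize = pgsize;
	range->bitmap.size = bitmap_bytes;
	range->bitmap.data = calloc(1, bitmap_bytes);

	ioctl(container_fd, VFIO_IOMMU_DIRTY_PAGES, dbitmap);

	/* ... walk range->bitmap.data and transmit the dirty pages ... */

	free(range->bitmap.data);
	free(dbitmap);

	/* Stop tracking once the stop-and-copy phase is complete. */
	ioctl(container_fd, VFIO_IOMMU_DIRTY_PAGES, &stop);
}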
> 
> 
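And a companion sketch for the updated VFIO_IOMMU_UNMAP_DMA described in
the second bullet, under the same assumptions:

/*
 * Unmap an IOVA range and fetch its dirty bitmap in the same call, so no
 * pfns dirtied by the device are lost across the unmap.  Names are those
 * proposed in this series; error handling omitted.
 */
#include <stdlib.h>
#include <sys/ioctl.h>
#include <linux/vfio.h>

static void unmap_and_get_dirty(int container_fd, __u64 iova, __u64 size,
				__u64 pgsize)
{
	__u64 pages = (size + pgsize - 1) / pgsize;
	__u64 bitmap_bytes = ((pages + 63) / 64) * sizeof(__u64);
	size_t argsz = sizeof(struct vfio_iommu_type1_dma_unmap) +
		       sizeof(struct vfio_bitmap);
	struct vfio_iommu_type1_dma_unmap *unmap = calloc(1, argsz);
	struct vfio_bitmap *bitmap = (struct vfio_bitmap *)&unmap->data;

	unmap->argsz = argsz;
	unmap->flags = VFIO_DMA_UNMAP_FLAG_GET_DIRTY_BITMAP;
	unmap->iova = iova;
	unmap->size = size;
	bitmap->pgsize = pgsize;
	bitmap->size = bitmap_bytes;
	bitmap->data = calloc(1, bitmap_bytes);

	ioctl(container_fd, VFIO_IOMMU_UNMAP_DMA, unmap);

	/* ... transmit the pages flagged in bitmap->data ... */

	free(bitmap->data);
	free(unmap);
}
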
> Yet TODO:
> Since there is no device which has hardware support for system memory
> dirty bitmap tracking, right now there is no other API from the vendor driver
> to the VFIO IOMMU module to report dirty pages. In the future, when such
> hardware support is implemented, an API will be required so that the vendor
> driver can report dirty pages to the VFIO module during the migration phases.
> 
> Adding the revision history from the previous QEMU patch set to show the KABI
> changes done so far
> 
> v21 -> v22
> - Fixed issue raised by Alex :
> https://lore.kernel.org/kvm/20200515163307.72951...@w520.home/
> 
> v20 -> v21
> - Added check in GET_BITMAP ioctl for vfio_dma boundaries.
> - Updated unmap ioctl function - as suggested by Alex.
> - Updated comments in DIRTY_TRACKING ioctl definition - as suggested by
>   Cornelia.
> 
> v19 -> v20
> - Fixed ioctl to get dirty bitmap to get bitmap of multiple vfio_dmas
> - Fixed unmap ioctl to get dirty bitmap of multiple vfio_dmas.
> - Removed flag definition from migration capability.
> 
> v18 -> v19
> - Updated migration capability with supported page sizes bitmap for dirty
>   page tracking and  maximum bitmap size supported by kernel module.
> - Added patch to calculate and cache pgsize_bitmap when iommu->domain_list
>   is updated.
> - Removed extra buffers added in previous version for bitmap manipulation
>   and optimised the code.
> 
> v17 -> v18
> - Add migration capability to the capability chain for VFIO_IOMMU_GET_INFO
>   ioctl
> - Updated UMAP_DMA ioctl to return bitmap of multiple vfio_dma
> 
> v16 -> v17
> - Fixed errors reported by kbuild test robot  on i386
> 
> v15 -> v16
> - Minor edits and nit picks (Auger Eric)
> - On copying bitmap to user, re-populated bitmap only for pinned pages,
>   excluding unmapped pages and CPU dirtied pages.
> - Patches are on tag: next-20200318 and 1-3 patches from Yan's series
>   https://lkml.org/lkml/2020/3/12/1255
> 
> v14 -> v15
> - Minor edits and nit picks.
> - In the verification of user allocated bitmap memory, added check of
>maximum size.
> - Patches are on tag: next-20200318 and 1-3 patches from Yan's series
>   https://lkml.org/lkml/2020/3/12/1255
> 
> v13 -> v14
> - Added struct vfio_bitmap to kabi. updated structure
>   vfio_iommu_type1_dirty_bitmap_get and vfio_iommu_type1_dma_unmap.
> - All small changes suggested by Alex.
> - Patches are on tag: next-20200318 and 1-3 patches from Yan's series
>   https://lkml.org/lkml/2020/3/12/1255
> 
> v12 -> v13
> - Changed bitmap allocation in vfio_iommu_type1 to per vfio_dma
> - Changed VFIO_IOMMU_DIRTY_PAGES ioctl behaviour to be