Re: [EXT] Re: [PATCH v7 3/5] virtio-iommu: Call iommu notifier for attach/detach

2020-03-18 Thread Jean-Philippe Brucker
On Wed, Mar 18, 2020 at 12:42:25PM +0100, Auger Eric wrote:
> Hi Jean,
> 
> On 3/18/20 12:20 PM, Bharat Bhushan wrote:
> > 
> > 
> >> -Original Message-
> >> From: Jean-Philippe Brucker 
> >> Sent: Wednesday, March 18, 2020 4:48 PM
> >> To: Bharat Bhushan 
> >> Cc: Auger Eric ; Peter Maydell
> >> ; kevin.t...@intel.com; Tomasz Nowicki [C]
> >> ; m...@redhat.com; drjo...@redhat.com;
> >> pet...@redhat.com; qemu-devel@nongnu.org; alex.william...@redhat.com;
> >> qemu-...@nongnu.org; Bharat Bhushan ;
> >> linuc.dec...@gmail.com; eric.auger@gmail.com
> >> Subject: [EXT] Re: [PATCH v7 3/5] virtio-iommu: Call iommu notifier for
> >> attach/detach
> >>
> >> On Wed, Mar 18, 2020 at 03:47:44PM +0530, Bharat Bhushan wrote:
> >>> Hi Jean,
> >>>
> >>> On Tue, Mar 17, 2020 at 9:29 PM Jean-Philippe Brucker
> >>>  wrote:
> >>>>
> >>>> On Tue, Mar 17, 2020 at 02:46:55PM +0530, Bharat Bhushan wrote:
> >>>>> Hi Jean,
> >>>>>
> >>>>> On Tue, Mar 17, 2020 at 2:23 PM Jean-Philippe Brucker
> >>>>>  wrote:
> >>>>>>
> >>>>>> On Tue, Mar 17, 2020 at 12:40:39PM +0530, Bharat Bhushan wrote:
> >>>>>>> Hi Jean,
> >>>>>>>
> >>>>>>> On Mon, Mar 16, 2020 at 3:41 PM Jean-Philippe Brucker
> >>>>>>>  wrote:
> >>>>>>>>
> >>>>>>>> Hi Bharat,
> >>>>>>>>
> >>>>>>>> Could you Cc me on your next posting?  Unfortunately I don't
> >>>>>>>> have much hardware for testing this at the moment, but I
> >>>>>>>> might be able to help a little on the review.
> >>>>>>>>
> >>>>>>>> On Mon, Mar 16, 2020 at 02:40:00PM +0530, Bharat Bhushan wrote:
> >>>>>>>>>>>>> First issue is: your guest can use 4K page and your
> >>>>>>>>>>>>> host can use 64KB pages. In that case VFIO_DMA_MAP
> >>>>>>>>>>>>> will fail with -EINVAL. We must devise a way to pass the host
> >> settings to the VIRTIO-IOMMU device.
> >>>>>>>>>>>>>
> >>>>>>>>>>>>> Even with 64KB pages, it did not work for me. I have
> >>>>>>>>>>>>> obviously not the storm of VFIO_DMA_MAP failures but
> >>>>>>>>>>>>> I have some, most probably due to some wrong notifications
> >> somewhere. I will try to investigate on my side.
> >>>>>>>>>>>>>
> >>>>>>>>>>>>> Did you test with VFIO on your side?
> >>>>>>>>>>>>
> >>>>>>>>>>>> I did not tried with different page sizes, only tested with 4K 
> >>>>>>>>>>>> page
> >> size.
> >>>>>>>>>>>>
> >>>>>>>>>>>> Yes it works, I tested with two n/w device assigned
> >>>>>>>>>>>> to VM, both interfaces works
> >>>>>>>>>>>>
> >>>>>>>>>>>> First I will try with 64k page size.
> >>>>>>>>>>>
> >>>>>>>>>>> 64K page size does not work for me as well,
> >>>>>>>>>>>
> >>>>>>>>>>> I think we are not passing correct page_size_mask here
> >>>>>>>>>>> (config.page_size_mask is set to TARGET_PAGE_MASK (
> >>>>>>>>>>> which is
> >>>>>>>>>>> 0xf000))
> >>>>>>>>>> I guess you mean with guest using 4K and host using 64K.
> >>>>>>>>>>>
> >>>>>>>>>>> We need to set this correctly as per host page size, correct?
> >>>>>>>>>> Yes that's correct. We need to put in place a control
> >>>>>>>>>> path to retrieve the page settings on host through VFIO to inform 
> >>>>>>&

Re: [EXT] Re: [PATCH v7 3/5] virtio-iommu: Call iommu notifier for attach/detach

2020-03-18 Thread Auger Eric
Hi Jean,

On 3/18/20 12:20 PM, Bharat Bhushan wrote:
> 
> 
>> -Original Message-
>> From: Jean-Philippe Brucker 
>> Sent: Wednesday, March 18, 2020 4:48 PM
>> To: Bharat Bhushan 
>> Cc: Auger Eric ; Peter Maydell
>> ; kevin.t...@intel.com; Tomasz Nowicki [C]
>> ; m...@redhat.com; drjo...@redhat.com;
>> pet...@redhat.com; qemu-devel@nongnu.org; alex.william...@redhat.com;
>> qemu-...@nongnu.org; Bharat Bhushan ;
>> linuc.dec...@gmail.com; eric.auger@gmail.com
>> Subject: [EXT] Re: [PATCH v7 3/5] virtio-iommu: Call iommu notifier for
>> attach/detach
>>
>> On Wed, Mar 18, 2020 at 03:47:44PM +0530, Bharat Bhushan wrote:
>>> Hi Jean,
>>>
>>> On Tue, Mar 17, 2020 at 9:29 PM Jean-Philippe Brucker
>>>  wrote:
>>>>
>>>> On Tue, Mar 17, 2020 at 02:46:55PM +0530, Bharat Bhushan wrote:
>>>>> Hi Jean,
>>>>>
>>>>> On Tue, Mar 17, 2020 at 2:23 PM Jean-Philippe Brucker
>>>>>  wrote:
>>>>>>
>>>>>> On Tue, Mar 17, 2020 at 12:40:39PM +0530, Bharat Bhushan wrote:
>>>>>>> Hi Jean,
>>>>>>>
>>>>>>> On Mon, Mar 16, 2020 at 3:41 PM Jean-Philippe Brucker
>>>>>>>  wrote:
>>>>>>>>
>>>>>>>> Hi Bharat,
>>>>>>>>
>>>>>>>> Could you Cc me on your next posting?  Unfortunately I don't
>>>>>>>> have much hardware for testing this at the moment, but I
>>>>>>>> might be able to help a little on the review.
>>>>>>>>
>>>>>>>> On Mon, Mar 16, 2020 at 02:40:00PM +0530, Bharat Bhushan wrote:
>>>>>>>>>>>>> First issue is: your guest can use 4K page and your
>>>>>>>>>>>>> host can use 64KB pages. In that case VFIO_DMA_MAP
>>>>>>>>>>>>> will fail with -EINVAL. We must devise a way to pass the host
>> settings to the VIRTIO-IOMMU device.
>>>>>>>>>>>>>
>>>>>>>>>>>>> Even with 64KB pages, it did not work for me. I have
>>>>>>>>>>>>> obviously not the storm of VFIO_DMA_MAP failures but
>>>>>>>>>>>>> I have some, most probably due to some wrong notifications
>> somewhere. I will try to investigate on my side.
>>>>>>>>>>>>>
>>>>>>>>>>>>> Did you test with VFIO on your side?
>>>>>>>>>>>>
>>>>>>>>>>>> I did not tried with different page sizes, only tested with 4K page
>> size.
>>>>>>>>>>>>
>>>>>>>>>>>> Yes it works, I tested with two n/w device assigned
>>>>>>>>>>>> to VM, both interfaces works
>>>>>>>>>>>>
>>>>>>>>>>>> First I will try with 64k page size.
>>>>>>>>>>>
>>>>>>>>>>> 64K page size does not work for me as well,
>>>>>>>>>>>
>>>>>>>>>>> I think we are not passing correct page_size_mask here
>>>>>>>>>>> (config.page_size_mask is set to TARGET_PAGE_MASK (
>>>>>>>>>>> which is
>>>>>>>>>>> 0xf000))
>>>>>>>>>> I guess you mean with guest using 4K and host using 64K.
>>>>>>>>>>>
>>>>>>>>>>> We need to set this correctly as per host page size, correct?
>>>>>>>>>> Yes that's correct. We need to put in place a control
>>>>>>>>>> path to retrieve the page settings on host through VFIO to inform the
>> virtio-iommu device.
>>>>>>>>>>
>>>>>>>>>> Besides this issue, did you try with 64kB on host and guest?
>>>>>>>>>
>>>>>>>>> I tried Followings
>>>>>>>>>   - 4k host and 4k guest  - it works with v7 version
>>>>>>>>>   - 64k host and 64k guest - it does not work with v7
>>>>>>>>> hard-coded config.page_size_mask to 0xff

RE: [EXT] Re: [PATCH v7 3/5] virtio-iommu: Call iommu notifier for attach/detach

2020-03-18 Thread Bharat Bhushan



> -Original Message-
> From: Jean-Philippe Brucker 
> Sent: Wednesday, March 18, 2020 4:48 PM
> To: Bharat Bhushan 
> Cc: Auger Eric ; Peter Maydell
> ; kevin.t...@intel.com; Tomasz Nowicki [C]
> ; m...@redhat.com; drjo...@redhat.com;
> pet...@redhat.com; qemu-devel@nongnu.org; alex.william...@redhat.com;
> qemu-...@nongnu.org; Bharat Bhushan ;
> linuc.dec...@gmail.com; eric.auger....@gmail.com
> Subject: [EXT] Re: [PATCH v7 3/5] virtio-iommu: Call iommu notifier for
> attach/detach
> 
> On Wed, Mar 18, 2020 at 03:47:44PM +0530, Bharat Bhushan wrote:
> > Hi Jean,
> >
> > On Tue, Mar 17, 2020 at 9:29 PM Jean-Philippe Brucker
> >  wrote:
> > >
> > > On Tue, Mar 17, 2020 at 02:46:55PM +0530, Bharat Bhushan wrote:
> > > > Hi Jean,
> > > >
> > > > On Tue, Mar 17, 2020 at 2:23 PM Jean-Philippe Brucker
> > > >  wrote:
> > > > >
> > > > > On Tue, Mar 17, 2020 at 12:40:39PM +0530, Bharat Bhushan wrote:
> > > > > > Hi Jean,
> > > > > >
> > > > > > On Mon, Mar 16, 2020 at 3:41 PM Jean-Philippe Brucker
> > > > > >  wrote:
> > > > > > >
> > > > > > > Hi Bharat,
> > > > > > >
> > > > > > > Could you Cc me on your next posting?  Unfortunately I don't
> > > > > > > have much hardware for testing this at the moment, but I
> > > > > > > might be able to help a little on the review.
> > > > > > >
> > > > > > > On Mon, Mar 16, 2020 at 02:40:00PM +0530, Bharat Bhushan wrote:
> > > > > > > > > >>> First issue is: your guest can use 4K page and your
> > > > > > > > > >>> host can use 64KB pages. In that case VFIO_DMA_MAP
> > > > > > > > > >>> will fail with -EINVAL. We must devise a way to pass the 
> > > > > > > > > >>> host
> settings to the VIRTIO-IOMMU device.
> > > > > > > > > >>>
> > > > > > > > > >>> Even with 64KB pages, it did not work for me. I have
> > > > > > > > > >>> obviously not the storm of VFIO_DMA_MAP failures but
> > > > > > > > > >>> I have some, most probably due to some wrong notifications
> somewhere. I will try to investigate on my side.
> > > > > > > > > >>>
> > > > > > > > > >>> Did you test with VFIO on your side?
> > > > > > > > > >>
> > > > > > > > > >> I did not tried with different page sizes, only tested 
> > > > > > > > > >> with 4K page
> size.
> > > > > > > > > >>
> > > > > > > > > >> Yes it works, I tested with two n/w device assigned
> > > > > > > > > >> to VM, both interfaces works
> > > > > > > > > >>
> > > > > > > > > >> First I will try with 64k page size.
> > > > > > > > > >
> > > > > > > > > > 64K page size does not work for me as well,
> > > > > > > > > >
> > > > > > > > > > I think we are not passing correct page_size_mask here
> > > > > > > > > > (config.page_size_mask is set to TARGET_PAGE_MASK (
> > > > > > > > > > which is
> > > > > > > > > > 0xf000))
> > > > > > > > > I guess you mean with guest using 4K and host using 64K.
> > > > > > > > > >
> > > > > > > > > > We need to set this correctly as per host page size, 
> > > > > > > > > > correct?
> > > > > > > > > Yes that's correct. We need to put in place a control
> > > > > > > > > path to retrieve the page settings on host through VFIO to 
> > > > > > > > > inform the
> virtio-iommu device.
> > > > > > > > >
> > > > > > > > > Besides this issue, did you try with 64kB on host and guest?
> > > > > > > >
> > > > > > > > I tried Followings
> > > > > > > >   - 

Re: [PATCH v7 3/5] virtio-iommu: Call iommu notifier for attach/detach

2020-03-18 Thread Jean-Philippe Brucker
On Wed, Mar 18, 2020 at 03:47:44PM +0530, Bharat Bhushan wrote:
> Hi Jean,
> 
> On Tue, Mar 17, 2020 at 9:29 PM Jean-Philippe Brucker
>  wrote:
> >
> > On Tue, Mar 17, 2020 at 02:46:55PM +0530, Bharat Bhushan wrote:
> > > Hi Jean,
> > >
> > > On Tue, Mar 17, 2020 at 2:23 PM Jean-Philippe Brucker
> > >  wrote:
> > > >
> > > > On Tue, Mar 17, 2020 at 12:40:39PM +0530, Bharat Bhushan wrote:
> > > > > Hi Jean,
> > > > >
> > > > > On Mon, Mar 16, 2020 at 3:41 PM Jean-Philippe Brucker
> > > > >  wrote:
> > > > > >
> > > > > > Hi Bharat,
> > > > > >
> > > > > > Could you Cc me on your next posting?  Unfortunately I don't have 
> > > > > > much
> > > > > > hardware for testing this at the moment, but I might be able to 
> > > > > > help a
> > > > > > little on the review.
> > > > > >
> > > > > > On Mon, Mar 16, 2020 at 02:40:00PM +0530, Bharat Bhushan wrote:
> > > > > > > > >>> First issue is: your guest can use 4K page and your host 
> > > > > > > > >>> can use 64KB
> > > > > > > > >>> pages. In that case VFIO_DMA_MAP will fail with -EINVAL. We 
> > > > > > > > >>> must devise
> > > > > > > > >>> a way to pass the host settings to the VIRTIO-IOMMU device.
> > > > > > > > >>>
> > > > > > > > >>> Even with 64KB pages, it did not work for me. I have 
> > > > > > > > >>> obviously not the
> > > > > > > > >>> storm of VFIO_DMA_MAP failures but I have some, most 
> > > > > > > > >>> probably due to
> > > > > > > > >>> some wrong notifications somewhere. I will try to 
> > > > > > > > >>> investigate on my side.
> > > > > > > > >>>
> > > > > > > > >>> Did you test with VFIO on your side?
> > > > > > > > >>
> > > > > > > > >> I did not tried with different page sizes, only tested with 
> > > > > > > > >> 4K page size.
> > > > > > > > >>
> > > > > > > > >> Yes it works, I tested with two n/w device assigned to VM, 
> > > > > > > > >> both interfaces works
> > > > > > > > >>
> > > > > > > > >> First I will try with 64k page size.
> > > > > > > > >
> > > > > > > > > 64K page size does not work for me as well,
> > > > > > > > >
> > > > > > > > > I think we are not passing correct page_size_mask here
> > > > > > > > > (config.page_size_mask is set to TARGET_PAGE_MASK ( which is
> > > > > > > > > 0xf000))
> > > > > > > > I guess you mean with guest using 4K and host using 64K.
> > > > > > > > >
> > > > > > > > > We need to set this correctly as per host page size, correct?
> > > > > > > > Yes that's correct. We need to put in place a control path to 
> > > > > > > > retrieve
> > > > > > > > the page settings on host through VFIO to inform the 
> > > > > > > > virtio-iommu device.
> > > > > > > >
> > > > > > > > Besides this issue, did you try with 64kB on host and guest?
> > > > > > >
> > > > > > > I tried Followings
> > > > > > >   - 4k host and 4k guest  - it works with v7 version
> > > > > > >   - 64k host and 64k guest - it does not work with v7
> > > > > > > hard-coded config.page_size_mask to 0x and it 
> > > > > > > works
> > > > > >
> > > > > > You might get this from the iova_pgsize bitmap returned by
> > > > > > VFIO_IOMMU_GET_INFO. The virtio config.page_size_mask is global so 
> > > > > > there
> > > > > > is the usual problem of aggregating consistent properties, but I'm
> > > > > > guessing using the host page size as a granule here is safe enough.
> > > > > >
> > > > > > If it is a problem, we can add a PROBE property for page size mask,
> > > > > > allowing to define per-endpoint page masks. I have kernel patches
> > > > > > somewhere to do just that.
> > > > >
> > > > > I do not see we need page size mask per endpoint.
> > > > >
> > > > > While I am trying to understand what "page-size-mask" guest will work 
> > > > > with
> > > > >
> > > > > - 4K page size host and 4k page size guest
> > > > >   config.page_size_mask = 0x000 will work
> > > > >
> > > > > - 64K page size host and 64k page size guest
> > > > >   config.page_size_mask = 0xfff will work
> > > > >
> > > > > - 64K page size host and 4k page size guest
> > > > >1) config.page_size_mask = 0x000 will also not work as
> > > > > VFIO in host expect iova and size to be aligned to 64k (PAGE_SIZE in
> > > > > host)
> > > > >2) config.page_size_mask = 0xfff will not work, iova
> > > > > initialization (in guest) expect minimum page-size supported by h/w to
> > > > > be equal to 4k (PAGE_SIZE in guest)
> > > > >Should we look to relax this in iova allocation code?
> > > >
> > > > Oh right, that's not great. Maybe the BUG_ON() can be removed, I'll ask 
> > > > on
> > > > the list.
> > >
> > > yes, the BUG_ON in iova_init.
> > > I tried with removing same and it worked, but not analyzed side effects.
> >
> > It might break the assumption from device drivers that mapping a page is
> > safe. For example they call alloc_page() followed by dma_map_page(). In
> > our situation dma-iommu.c will oblige and create one 64k mapping to one 4k
> > physical page. As a result 

Re: [PATCH v7 3/5] virtio-iommu: Call iommu notifier for attach/detach

2020-03-18 Thread Bharat Bhushan
Hi Jean,

On Tue, Mar 17, 2020 at 9:29 PM Jean-Philippe Brucker
 wrote:
>
> On Tue, Mar 17, 2020 at 02:46:55PM +0530, Bharat Bhushan wrote:
> > Hi Jean,
> >
> > On Tue, Mar 17, 2020 at 2:23 PM Jean-Philippe Brucker
> >  wrote:
> > >
> > > On Tue, Mar 17, 2020 at 12:40:39PM +0530, Bharat Bhushan wrote:
> > > > Hi Jean,
> > > >
> > > > On Mon, Mar 16, 2020 at 3:41 PM Jean-Philippe Brucker
> > > >  wrote:
> > > > >
> > > > > Hi Bharat,
> > > > >
> > > > > Could you Cc me on your next posting?  Unfortunately I don't have much
> > > > > hardware for testing this at the moment, but I might be able to help a
> > > > > little on the review.
> > > > >
> > > > > On Mon, Mar 16, 2020 at 02:40:00PM +0530, Bharat Bhushan wrote:
> > > > > > > >>> First issue is: your guest can use 4K page and your host can 
> > > > > > > >>> use 64KB
> > > > > > > >>> pages. In that case VFIO_DMA_MAP will fail with -EINVAL. We 
> > > > > > > >>> must devise
> > > > > > > >>> a way to pass the host settings to the VIRTIO-IOMMU device.
> > > > > > > >>>
> > > > > > > >>> Even with 64KB pages, it did not work for me. I have 
> > > > > > > >>> obviously not the
> > > > > > > >>> storm of VFIO_DMA_MAP failures but I have some, most probably 
> > > > > > > >>> due to
> > > > > > > >>> some wrong notifications somewhere. I will try to investigate 
> > > > > > > >>> on my side.
> > > > > > > >>>
> > > > > > > >>> Did you test with VFIO on your side?
> > > > > > > >>
> > > > > > > >> I did not tried with different page sizes, only tested with 4K 
> > > > > > > >> page size.
> > > > > > > >>
> > > > > > > >> Yes it works, I tested with two n/w device assigned to VM, 
> > > > > > > >> both interfaces works
> > > > > > > >>
> > > > > > > >> First I will try with 64k page size.
> > > > > > > >
> > > > > > > > 64K page size does not work for me as well,
> > > > > > > >
> > > > > > > > I think we are not passing correct page_size_mask here
> > > > > > > > (config.page_size_mask is set to TARGET_PAGE_MASK ( which is
> > > > > > > > 0xf000))
> > > > > > > I guess you mean with guest using 4K and host using 64K.
> > > > > > > >
> > > > > > > > We need to set this correctly as per host page size, correct?
> > > > > > > Yes that's correct. We need to put in place a control path to 
> > > > > > > retrieve
> > > > > > > the page settings on host through VFIO to inform the virtio-iommu 
> > > > > > > device.
> > > > > > >
> > > > > > > Besides this issue, did you try with 64kB on host and guest?
> > > > > >
> > > > > > I tried Followings
> > > > > >   - 4k host and 4k guest  - it works with v7 version
> > > > > >   - 64k host and 64k guest - it does not work with v7
> > > > > > hard-coded config.page_size_mask to 0x and it 
> > > > > > works
> > > > >
> > > > > You might get this from the iova_pgsize bitmap returned by
> > > > > VFIO_IOMMU_GET_INFO. The virtio config.page_size_mask is global so 
> > > > > there
> > > > > is the usual problem of aggregating consistent properties, but I'm
> > > > > guessing using the host page size as a granule here is safe enough.
> > > > >
> > > > > If it is a problem, we can add a PROBE property for page size mask,
> > > > > allowing to define per-endpoint page masks. I have kernel patches
> > > > > somewhere to do just that.
> > > >
> > > > I do not see we need page size mask per endpoint.
> > > >
> > > > While I am trying to understand what "page-size-mask" guest will work 
> > > > with
> > > >
> > > > - 4K page size host and 4k page size guest
> > > >   config.page_size_mask = 0x000 will work
> > > >
> > > > - 64K page size host and 64k page size guest
> > > >   config.page_size_mask = 0xfff will work
> > > >
> > > > - 64K page size host and 4k page size guest
> > > >1) config.page_size_mask = 0x000 will also not work as
> > > > VFIO in host expect iova and size to be aligned to 64k (PAGE_SIZE in
> > > > host)
> > > >2) config.page_size_mask = 0xfff will not work, iova
> > > > initialization (in guest) expect minimum page-size supported by h/w to
> > > > be equal to 4k (PAGE_SIZE in guest)
> > > >Should we look to relax this in iova allocation code?
> > >
> > > Oh right, that's not great. Maybe the BUG_ON() can be removed, I'll ask on
> > > the list.
> >
> > yes, the BUG_ON in iova_init.
> > I tried with removing same and it worked, but not analyzed side effects.
>
> It might break the assumption from device drivers that mapping a page is
> safe. For example they call alloc_page() followed by dma_map_page(). In
> our situation dma-iommu.c will oblige and create one 64k mapping to one 4k
> physical page. As a result the endpoint can access the neighbouring 60k of
> memory.
>
> This isn't too terrible. After all, even when the page sizes match, device
> drivers can call dma_map_single() on sub-page buffers, which will also let
> the endpoint access a whole page. The solution, if you don't trust the
> endpoint, is to use bounce 

Re: [PATCH v7 3/5] virtio-iommu: Call iommu notifier for attach/detach

2020-03-17 Thread Jean-Philippe Brucker
On Tue, Mar 17, 2020 at 02:46:55PM +0530, Bharat Bhushan wrote:
> Hi Jean,
> 
> On Tue, Mar 17, 2020 at 2:23 PM Jean-Philippe Brucker
>  wrote:
> >
> > On Tue, Mar 17, 2020 at 12:40:39PM +0530, Bharat Bhushan wrote:
> > > Hi Jean,
> > >
> > > On Mon, Mar 16, 2020 at 3:41 PM Jean-Philippe Brucker
> > >  wrote:
> > > >
> > > > Hi Bharat,
> > > >
> > > > Could you Cc me on your next posting?  Unfortunately I don't have much
> > > > hardware for testing this at the moment, but I might be able to help a
> > > > little on the review.
> > > >
> > > > On Mon, Mar 16, 2020 at 02:40:00PM +0530, Bharat Bhushan wrote:
> > > > > > >>> First issue is: your guest can use 4K page and your host can 
> > > > > > >>> use 64KB
> > > > > > >>> pages. In that case VFIO_DMA_MAP will fail with -EINVAL. We 
> > > > > > >>> must devise
> > > > > > >>> a way to pass the host settings to the VIRTIO-IOMMU device.
> > > > > > >>>
> > > > > > >>> Even with 64KB pages, it did not work for me. I have obviously 
> > > > > > >>> not the
> > > > > > >>> storm of VFIO_DMA_MAP failures but I have some, most probably 
> > > > > > >>> due to
> > > > > > >>> some wrong notifications somewhere. I will try to investigate 
> > > > > > >>> on my side.
> > > > > > >>>
> > > > > > >>> Did you test with VFIO on your side?
> > > > > > >>
> > > > > > >> I did not tried with different page sizes, only tested with 4K 
> > > > > > >> page size.
> > > > > > >>
> > > > > > >> Yes it works, I tested with two n/w device assigned to VM, both 
> > > > > > >> interfaces works
> > > > > > >>
> > > > > > >> First I will try with 64k page size.
> > > > > > >
> > > > > > > 64K page size does not work for me as well,
> > > > > > >
> > > > > > > I think we are not passing correct page_size_mask here
> > > > > > > (config.page_size_mask is set to TARGET_PAGE_MASK ( which is
> > > > > > > 0xf000))
> > > > > > I guess you mean with guest using 4K and host using 64K.
> > > > > > >
> > > > > > > We need to set this correctly as per host page size, correct?
> > > > > > Yes that's correct. We need to put in place a control path to 
> > > > > > retrieve
> > > > > > the page settings on host through VFIO to inform the virtio-iommu 
> > > > > > device.
> > > > > >
> > > > > > Besides this issue, did you try with 64kB on host and guest?
> > > > >
> > > > > I tried Followings
> > > > >   - 4k host and 4k guest  - it works with v7 version
> > > > >   - 64k host and 64k guest - it does not work with v7
> > > > > hard-coded config.page_size_mask to 0x and it 
> > > > > works
> > > >
> > > > You might get this from the iova_pgsize bitmap returned by
> > > > VFIO_IOMMU_GET_INFO. The virtio config.page_size_mask is global so there
> > > > is the usual problem of aggregating consistent properties, but I'm
> > > > guessing using the host page size as a granule here is safe enough.
> > > >
> > > > If it is a problem, we can add a PROBE property for page size mask,
> > > > allowing to define per-endpoint page masks. I have kernel patches
> > > > somewhere to do just that.
> > >
> > > I do not see we need page size mask per endpoint.
> > >
> > > While I am trying to understand what "page-size-mask" guest will work with
> > >
> > > - 4K page size host and 4k page size guest
> > >   config.page_size_mask = 0x000 will work
> > >
> > > - 64K page size host and 64k page size guest
> > >   config.page_size_mask = 0xfff will work
> > >
> > > - 64K page size host and 4k page size guest
> > >1) config.page_size_mask = 0x000 will also not work as
> > > VFIO in host expect iova and size to be aligned to 64k (PAGE_SIZE in
> > > host)
> > >2) config.page_size_mask = 0xfff will not work, iova
> > > initialization (in guest) expect minimum page-size supported by h/w to
> > > be equal to 4k (PAGE_SIZE in guest)
> > >Should we look to relax this in iova allocation code?
> >
> > Oh right, that's not great. Maybe the BUG_ON() can be removed, I'll ask on
> > the list.
> 
> yes, the BUG_ON in iova_init.
> I tried with removing same and it worked, but not analyzed side effects.

It might break the assumption from device drivers that mapping a page is
safe. For example they call alloc_page() followed by dma_map_page(). In
our situation dma-iommu.c will oblige and create one 64k mapping to one 4k
physical page. As a result the endpoint can access the neighbouring 60k of
memory.
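
As a hedged sketch of the driver pattern being described (standard DMA-API
calls; the function name and device pointer are illustrative only):

/* One 4K page is allocated and DMA-mapped; with a 64K IOMMU granule,
 * dma-iommu.c rounds the IOMMU mapping up to 64K, so the endpoint can
 * also reach the neighbouring 60K of memory. */
static dma_addr_t map_one_page(struct device *dev)
{
    struct page *p = alloc_page(GFP_KERNEL);

    if (!p)
        return DMA_MAPPING_ERROR;
    return dma_map_page(dev, p, 0, PAGE_SIZE, DMA_TO_DEVICE);
}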

This isn't too terrible. After all, even when the page sizes match, device
drivers can call dma_map_single() on sub-page buffers, which will also let
the endpoint access a whole page. The solution, if you don't trust the
endpoint, is to use bounce buffers.

But I suspect it's not as simple as removing the BUG_ON(), we'll need to
go over dma-iommu.c first. And it seems like assigning endpoints to guest
userspace won't work either in this config. In vfio_dma_do_map():

mask = ((uint64_t)1 << __ffs(vfio_pgsize_bitmap(iommu))) - 1;
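
A small self-contained sketch of what that mask implies, assuming a host
IOMMU whose smallest supported page size is 64K (the exact surrounding
check in vfio_iommu_type1.c may differ in detail):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint64_t pgsize_bitmap = 1ULL << 16;   /* assumed: host supports 64K only */
    /* same construction as the vfio_dma_do_map() line quoted above */
    uint64_t mask = (1ULL << __builtin_ctzll(pgsize_bitmap)) - 1;
    uint64_t iova = 0x1000, size = 0x1000; /* a request that is only 4K-aligned */

    if ((iova | size) & mask)
        printf("VFIO_DMA_MAP would fail with -EINVAL (mask=0x%llx)\n",
               (unsigned long long)mask);
    return 0;
}

So DMA_MAP requests that are only 4K-aligned, as a 4K guest will issue,
cannot satisfy a 64K host granule.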


Re: [PATCH v7 3/5] virtio-iommu: Call iommu notifier for attach/detach

2020-03-17 Thread Bharat Bhushan
Hi Jean,

On Tue, Mar 17, 2020 at 2:23 PM Jean-Philippe Brucker
 wrote:
>
> On Tue, Mar 17, 2020 at 12:40:39PM +0530, Bharat Bhushan wrote:
> > Hi Jean,
> >
> > On Mon, Mar 16, 2020 at 3:41 PM Jean-Philippe Brucker
> >  wrote:
> > >
> > > Hi Bharat,
> > >
> > > Could you Cc me on your next posting?  Unfortunately I don't have much
> > > hardware for testing this at the moment, but I might be able to help a
> > > little on the review.
> > >
> > > On Mon, Mar 16, 2020 at 02:40:00PM +0530, Bharat Bhushan wrote:
> > > > > >>> First issue is: your guest can use 4K page and your host can use 
> > > > > >>> 64KB
> > > > > >>> pages. In that case VFIO_DMA_MAP will fail with -EINVAL. We must 
> > > > > >>> devise
> > > > > >>> a way to pass the host settings to the VIRTIO-IOMMU device.
> > > > > >>>
> > > > > >>> Even with 64KB pages, it did not work for me. I have obviously 
> > > > > >>> not the
> > > > > >>> storm of VFIO_DMA_MAP failures but I have some, most probably due 
> > > > > >>> to
> > > > > >>> some wrong notifications somewhere. I will try to investigate on 
> > > > > >>> my side.
> > > > > >>>
> > > > > >>> Did you test with VFIO on your side?
> > > > > >>
> > > > > >> I did not tried with different page sizes, only tested with 4K 
> > > > > >> page size.
> > > > > >>
> > > > > >> Yes it works, I tested with two n/w device assigned to VM, both 
> > > > > >> interfaces works
> > > > > >>
> > > > > >> First I will try with 64k page size.
> > > > > >
> > > > > > 64K page size does not work for me as well,
> > > > > >
> > > > > > I think we are not passing correct page_size_mask here
> > > > > > (config.page_size_mask is set to TARGET_PAGE_MASK ( which is
> > > > > > 0xf000))
> > > > > I guess you mean with guest using 4K and host using 64K.
> > > > > >
> > > > > > We need to set this correctly as per host page size, correct?
> > > > > Yes that's correct. We need to put in place a control path to retrieve
> > > > > the page settings on host through VFIO to inform the virtio-iommu 
> > > > > device.
> > > > >
> > > > > Besides this issue, did you try with 64kB on host and guest?
> > > >
> > > > I tried Followings
> > > >   - 4k host and 4k guest  - it works with v7 version
> > > >   - 64k host and 64k guest - it does not work with v7
> > > > hard-coded config.page_size_mask to 0x and it works
> > >
> > > You might get this from the iova_pgsize bitmap returned by
> > > VFIO_IOMMU_GET_INFO. The virtio config.page_size_mask is global so there
> > > is the usual problem of aggregating consistent properties, but I'm
> > > guessing using the host page size as a granule here is safe enough.
> > >
> > > If it is a problem, we can add a PROBE property for page size mask,
> > > allowing to define per-endpoint page masks. I have kernel patches
> > > somewhere to do just that.
> >
> > I do not see we need page size mask per endpoint.
> >
> > While I am trying to understand what "page-size-mask" guest will work with
> >
> > - 4K page size host and 4k page size guest
> >   config.page_size_mask = 0x000 will work
> >
> > - 64K page size host and 64k page size guest
> >   config.page_size_mask = 0xfff will work
> >
> > - 64K page size host and 4k page size guest
> >1) config.page_size_mask = 0x000 will also not work as
> > VFIO in host expect iova and size to be aligned to 64k (PAGE_SIZE in
> > host)
> >2) config.page_size_mask = 0xfff will not work, iova
> > initialization (in guest) expect minimum page-size supported by h/w to
> > be equal to 4k (PAGE_SIZE in guest)
> >Should we look to relax this in iova allocation code?
>
> Oh right, that's not great. Maybe the BUG_ON() can be removed, I'll ask on
> the list.

Yes, the BUG_ON() in iova_init.
I tried removing it and it worked, but I have not analyzed the side effects.

>
> In the meantime, 64k granule is the right value to advertise to the guest
> in this case.
> Did you try 64k guest 4k host?

no, will try.

Thanks
-Bharat

>
> Thanks,
> Jean



Re: [PATCH v7 3/5] virtio-iommu: Call iommu notifier for attach/detach

2020-03-17 Thread Jean-Philippe Brucker
On Tue, Mar 17, 2020 at 12:40:39PM +0530, Bharat Bhushan wrote:
> Hi Jean,
> 
> On Mon, Mar 16, 2020 at 3:41 PM Jean-Philippe Brucker
>  wrote:
> >
> > Hi Bharat,
> >
> > Could you Cc me on your next posting?  Unfortunately I don't have much
> > hardware for testing this at the moment, but I might be able to help a
> > little on the review.
> >
> > On Mon, Mar 16, 2020 at 02:40:00PM +0530, Bharat Bhushan wrote:
> > > > >>> First issue is: your guest can use 4K page and your host can use 
> > > > >>> 64KB
> > > > >>> pages. In that case VFIO_DMA_MAP will fail with -EINVAL. We must 
> > > > >>> devise
> > > > >>> a way to pass the host settings to the VIRTIO-IOMMU device.
> > > > >>>
> > > > >>> Even with 64KB pages, it did not work for me. I have obviously not 
> > > > >>> the
> > > > >>> storm of VFIO_DMA_MAP failures but I have some, most probably due to
> > > > >>> some wrong notifications somewhere. I will try to investigate on my 
> > > > >>> side.
> > > > >>>
> > > > >>> Did you test with VFIO on your side?
> > > > >>
> > > > >> I did not tried with different page sizes, only tested with 4K page 
> > > > >> size.
> > > > >>
> > > > >> Yes it works, I tested with two n/w device assigned to VM, both 
> > > > >> interfaces works
> > > > >>
> > > > >> First I will try with 64k page size.
> > > > >
> > > > > 64K page size does not work for me as well,
> > > > >
> > > > > I think we are not passing correct page_size_mask here
> > > > > (config.page_size_mask is set to TARGET_PAGE_MASK ( which is
> > > > > 0xf000))
> > > > I guess you mean with guest using 4K and host using 64K.
> > > > >
> > > > > We need to set this correctly as per host page size, correct?
> > > > Yes that's correct. We need to put in place a control path to retrieve
> > > > the page settings on host through VFIO to inform the virtio-iommu 
> > > > device.
> > > >
> > > > Besides this issue, did you try with 64kB on host and guest?
> > >
> > > I tried Followings
> > >   - 4k host and 4k guest  - it works with v7 version
> > >   - 64k host and 64k guest - it does not work with v7
> > > hard-coded config.page_size_mask to 0x and it works
> >
> > You might get this from the iova_pgsize bitmap returned by
> > VFIO_IOMMU_GET_INFO. The virtio config.page_size_mask is global so there
> > is the usual problem of aggregating consistent properties, but I'm
> > guessing using the host page size as a granule here is safe enough.
> >
> > If it is a problem, we can add a PROBE property for page size mask,
> > allowing to define per-endpoint page masks. I have kernel patches
> > somewhere to do just that.
> 
> I do not see we need page size mask per endpoint.
> 
> While I am trying to understand what "page-size-mask" guest will work with
> 
> - 4K page size host and 4k page size guest
>   config.page_size_mask = 0x000 will work
> 
> - 64K page size host and 64k page size guest
>   config.page_size_mask = 0xfff will work
> 
> - 64K page size host and 4k page size guest
>1) config.page_size_mask = 0x000 will also not work as
> VFIO in host expect iova and size to be aligned to 64k (PAGE_SIZE in
> host)
>2) config.page_size_mask = 0xfff will not work, iova
> initialization (in guest) expect minimum page-size supported by h/w to
> be equal to 4k (PAGE_SIZE in guest)
>Should we look to relax this in iova allocation code?

Oh right, that's not great. Maybe the BUG_ON() can be removed, I'll ask on
the list.
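
For context, the constraint in question is the granule test in
init_iova_domain() (drivers/iommu/iova.c), roughly as follows in kernels of
that era:

void init_iova_domain(struct iova_domain *iovad, unsigned long granule,
                      unsigned long start_pfn)
{
    /*
     * The IOVA granule must be a power of two no larger than the CPU
     * page size, so a 64K granule trips this check in a 4K-page guest.
     */
    BUG_ON((granule > PAGE_SIZE) || !is_power_of_2(granule));
    ...
}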

In the meantime, 64k granule is the right value to advertise to the guest
in this case. Did you try 64k guest 4k host?

Thanks,
Jean



Re: [PATCH v7 3/5] virtio-iommu: Call iommu notifier for attach/detach

2020-03-17 Thread Auger Eric
Hi Bharat,

On 3/17/20 8:10 AM, Bharat Bhushan wrote:
> Hi Jean,
> 
> On Mon, Mar 16, 2020 at 3:41 PM Jean-Philippe Brucker
>  wrote:
>>
>> Hi Bharat,
>>
>> Could you Cc me on your next posting?  Unfortunately I don't have much
>> hardware for testing this at the moment, but I might be able to help a
>> little on the review.
>>
>> On Mon, Mar 16, 2020 at 02:40:00PM +0530, Bharat Bhushan wrote:
>>> First issue is: your guest can use 4K page and your host can use 64KB
>>> pages. In that case VFIO_DMA_MAP will fail with -EINVAL. We must devise
>>> a way to pass the host settings to the VIRTIO-IOMMU device.
>>>
>>> Even with 64KB pages, it did not work for me. I have obviously not the
>>> storm of VFIO_DMA_MAP failures but I have some, most probably due to
>>> some wrong notifications somewhere. I will try to investigate on my 
>>> side.
>>>
>>> Did you test with VFIO on your side?
>>
>> I did not tried with different page sizes, only tested with 4K page size.
>>
>> Yes it works, I tested with two n/w device assigned to VM, both 
>> interfaces works
>>
>> First I will try with 64k page size.
>
> 64K page size does not work for me as well,
>
> I think we are not passing correct page_size_mask here
> (config.page_size_mask is set to TARGET_PAGE_MASK ( which is
> 0xf000))
 I guess you mean with guest using 4K and host using 64K.
>
> We need to set this correctly as per host page size, correct?
 Yes that's correct. We need to put in place a control path to retrieve
 the page settings on host through VFIO to inform the virtio-iommu device.

 Besides this issue, did you try with 64kB on host and guest?
>>>
>>> I tried Followings
>>>   - 4k host and 4k guest  - it works with v7 version
>>>   - 64k host and 64k guest - it does not work with v7
>>> hard-coded config.page_size_mask to 0x and it works
>>
>> You might get this from the iova_pgsize bitmap returned by
>> VFIO_IOMMU_GET_INFO. The virtio config.page_size_mask is global so there
>> is the usual problem of aggregating consistent properties, but I'm
>> guessing using the host page size as a granule here is safe enough.
>>
>> If it is a problem, we can add a PROBE property for page size mask,
>> allowing to define per-endpoint page masks. I have kernel patches
>> somewhere to do just that.
> 
> I do not see we need page size mask per endpoint.
The physical devices can be protected by different physical IOMMUs, and
those IOMMUs may have different page size support.
> 
> While I am trying to understand what "page-size-mask" guest will work with
> 
> - 4K page size host and 4k page size guest
>   config.page_size_mask = 0x000 will work
> 
> - 64K page size host and 64k page size guest
>   config.page_size_mask = 0xfff will work
I guess not all the page sizes should be exposed by the virtio-iommu
device, only 4K and 64K.

If host supports 4K we should expose 4K and bigger
If host supports 64K we should expose page sizes of 64KB and bigger

The guest will be forced to use what is exposed and that should work.
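
A rough sketch of that policy on the QEMU side (assuming the VirtIOIOMMU
state from hw/virtio/virtio-iommu.c and using qemu_real_host_page_size as
the source of the host granule; the real control path still has to retrieve
the host-supported page sizes through VFIO):

static void virtio_iommu_set_host_granule(VirtIOIOMMU *s)
{
    uint64_t granule = qemu_real_host_page_size;

    /* expose the host granule and every larger power-of-two page size */
    s->config.page_size_mask = ~(granule - 1);  /* 4K  -> 0xfffffffffffff000 */
                                                /* 64K -> 0xffffffffffff0000 */
}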

What is missing is a way to retrieve the host supported page size
bitmask. I can try to help you on that if you want to.

Maybe we should first try to upstream vhost support and then VFIO?

Thanks

Eric
> 
> - 64K page size host and 4k page size guest
>1) config.page_size_mask = 0x000 will also not work as
> VFIO in host expect iova and size to be aligned to 64k (PAGE_SIZE in
> host)
>2) config.page_size_mask = 0xfff will not work, iova
> initialization (in guest) expect minimum page-size supported by h/w to
> be equal to 4k (PAGE_SIZE in guest)
>Should we look to relax this in iova allocation code?
> 
> Thanks
> -Bharat
> 
> 
>>
>> Thanks,
>> Jean
> 




Re: [PATCH v7 3/5] virtio-iommu: Call iommu notifier for attach/detach

2020-03-17 Thread Bharat Bhushan
Hi Jean,

On Mon, Mar 16, 2020 at 3:41 PM Jean-Philippe Brucker
 wrote:
>
> Hi Bharat,
>
> Could you Cc me on your next posting?  Unfortunately I don't have much
> hardware for testing this at the moment, but I might be able to help a
> little on the review.
>
> On Mon, Mar 16, 2020 at 02:40:00PM +0530, Bharat Bhushan wrote:
> > > >>> First issue is: your guest can use 4K page and your host can use 64KB
> > > >>> pages. In that case VFIO_DMA_MAP will fail with -EINVAL. We must 
> > > >>> devise
> > > >>> a way to pass the host settings to the VIRTIO-IOMMU device.
> > > >>>
> > > >>> Even with 64KB pages, it did not work for me. I have obviously not the
> > > >>> storm of VFIO_DMA_MAP failures but I have some, most probably due to
> > > >>> some wrong notifications somewhere. I will try to investigate on my 
> > > >>> side.
> > > >>>
> > > >>> Did you test with VFIO on your side?
> > > >>
> > > >> I did not tried with different page sizes, only tested with 4K page 
> > > >> size.
> > > >>
> > > >> Yes it works, I tested with two n/w device assigned to VM, both 
> > > >> interfaces works
> > > >>
> > > >> First I will try with 64k page size.
> > > >
> > > > 64K page size does not work for me as well,
> > > >
> > > > I think we are not passing correct page_size_mask here
> > > > (config.page_size_mask is set to TARGET_PAGE_MASK ( which is
> > > > 0xf000))
> > > I guess you mean with guest using 4K and host using 64K.
> > > >
> > > > We need to set this correctly as per host page size, correct?
> > > Yes that's correct. We need to put in place a control path to retrieve
> > > the page settings on host through VFIO to inform the virtio-iommu device.
> > >
> > > Besides this issue, did you try with 64kB on host and guest?
> >
> > I tried Followings
> >   - 4k host and 4k guest  - it works with v7 version
> >   - 64k host and 64k guest - it does not work with v7
> > hard-coded config.page_size_mask to 0x and it works
>
> You might get this from the iova_pgsize bitmap returned by
> VFIO_IOMMU_GET_INFO. The virtio config.page_size_mask is global so there
> is the usual problem of aggregating consistent properties, but I'm
> guessing using the host page size as a granule here is safe enough.
>
> If it is a problem, we can add a PROBE property for page size mask,
> allowing to define per-endpoint page masks. I have kernel patches
> somewhere to do just that.

I do not see why we need a page size mask per endpoint.

Meanwhile, I am trying to understand which "page-size-mask" the guest will work with:

- 4K page size host and 4k page size guest
  config.page_size_mask = 0x000 will work

- 64K page size host and 64k page size guest
  config.page_size_mask = 0xfff will work

- 64K page size host and 4k page size guest
   1) config.page_size_mask = 0x000 will also not work, as VFIO in the
host expects the iova and size to be aligned to 64k (PAGE_SIZE in the host)
   2) config.page_size_mask = 0xfff will not work; the iova
initialization (in the guest) expects the minimum page size supported by
the h/w to be equal to 4k (PAGE_SIZE in the guest)
   Should we look to relax this in iova allocation code?
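
For reference, the masks elided above presumably follow the usual pattern
page_size_mask = ~(granule - 1), i.e. the granule and every larger power of
two:

uint64_t mask_4k  = ~(uint64_t)(4096 - 1);    /* 0xfffffffffffff000 */
uint64_t mask_64k = ~(uint64_t)(65536 - 1);   /* 0xffffffffffff0000 */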

Thanks
-Bharat


>
> Thanks,
> Jean



Re: [PATCH v7 3/5] virtio-iommu: Call iommu notifier for attach/detach

2020-03-16 Thread Jean-Philippe Brucker
Hi Bharat,

Could you Cc me on your next posting?  Unfortunately I don't have much
hardware for testing this at the moment, but I might be able to help a
little on the review.

On Mon, Mar 16, 2020 at 02:40:00PM +0530, Bharat Bhushan wrote:
> > >>> First issue is: your guest can use 4K page and your host can use 64KB
> > >>> pages. In that case VFIO_DMA_MAP will fail with -EINVAL. We must devise
> > >>> a way to pass the host settings to the VIRTIO-IOMMU device.
> > >>>
> > >>> Even with 64KB pages, it did not work for me. I have obviously not the
> > >>> storm of VFIO_DMA_MAP failures but I have some, most probably due to
> > >>> some wrong notifications somewhere. I will try to investigate on my 
> > >>> side.
> > >>>
> > >>> Did you test with VFIO on your side?
> > >>
> > >> I did not tried with different page sizes, only tested with 4K page size.
> > >>
> > >> Yes it works, I tested with two n/w device assigned to VM, both 
> > >> interfaces works
> > >>
> > >> First I will try with 64k page size.
> > >
> > > 64K page size does not work for me as well,
> > >
> > > I think we are not passing correct page_size_mask here
> > > (config.page_size_mask is set to TARGET_PAGE_MASK ( which is
> > > 0xf000))
> > I guess you mean with guest using 4K and host using 64K.
> > >
> > > We need to set this correctly as per host page size, correct?
> > Yes that's correct. We need to put in place a control path to retrieve
> > the page settings on host through VFIO to inform the virtio-iommu device.
> >
> > Besides this issue, did you try with 64kB on host and guest?
> 
> I tried Followings
>   - 4k host and 4k guest  - it works with v7 version
>   - 64k host and 64k guest - it does not work with v7
> hard-coded config.page_size_mask to 0x and it works

You might get this from the iova_pgsize bitmap returned by
VFIO_IOMMU_GET_INFO. The virtio config.page_size_mask is global so there
is the usual problem of aggregating consistent properties, but I'm
guessing using the host page size as a granule here is safe enough. 
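
A minimal sketch of reading that bitmap with the standard VFIO type1 ioctl
(the container file descriptor is assumed to be set up already; error
handling trimmed):

#include <stdint.h>
#include <sys/ioctl.h>
#include <linux/vfio.h>

/* return the host IOMMU page-size bitmap, or 0 if it is unavailable */
static uint64_t host_iommu_pgsizes(int container_fd)
{
    struct vfio_iommu_type1_info info = { .argsz = sizeof(info) };

    if (ioctl(container_fd, VFIO_IOMMU_GET_INFO, &info) ||
        !(info.flags & VFIO_IOMMU_INFO_PGSIZES)) {
        return 0;
    }
    return info.iova_pgsizes;   /* e.g. bit 16 set when the granule is 64K */
}

The lowest set bit then gives the granule to advertise in
config.page_size_mask.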

If it is a problem, we can add a PROBE property for page size mask,
allowing to define per-endpoint page masks. I have kernel patches
somewhere to do just that.

Thanks,
Jean



Re: [PATCH v7 3/5] virtio-iommu: Call iommu notifier for attach/detach

2020-03-16 Thread Bharat Bhushan
Hi Eric,

On Mon, Mar 16, 2020 at 1:15 PM Bharat Bhushan  wrote:
>
> Hi Eric,
>
> On Mon, Mar 16, 2020 at 1:02 PM Auger Eric  wrote:
> >
> > Hi Bharat,
> >
> > On 3/16/20 7:41 AM, Bharat Bhushan wrote:
> > > Hi Eric,
> > >
> > > On Fri, Mar 13, 2020 at 8:11 PM Auger Eric  wrote:
> > >>
> > >> Hi Bharat
> > >>
> > >> On 3/13/20 8:48 AM, Bharat Bhushan wrote:
> > >>> iommu-notifier are called when a device is attached
> > >> IOMMU notifiers
> > >>> or detached to as address-space.
> > >>> This is needed for VFIO.
> > >> and vhost for detach
> > >>>
> > >>> Signed-off-by: Bharat Bhushan 
> > >>> ---
> > >>>  hw/virtio/virtio-iommu.c | 47 
> > >>>  1 file changed, 47 insertions(+)
> > >>>
> > >>> diff --git a/hw/virtio/virtio-iommu.c b/hw/virtio/virtio-iommu.c
> > >>> index e51344a53e..2006f72901 100644
> > >>> --- a/hw/virtio/virtio-iommu.c
> > >>> +++ b/hw/virtio/virtio-iommu.c
> > >>> @@ -49,6 +49,7 @@ typedef struct VirtIOIOMMUEndpoint {
> > >>>  uint32_t id;
> > >>>  VirtIOIOMMUDomain *domain;
> > >>>  QLIST_ENTRY(VirtIOIOMMUEndpoint) next;
> > >>> +VirtIOIOMMU *viommu;
> > >> This needs specal care on post-load. When migrating the EPs, only the id
> > >> is migrated. On post-load you need to set viommu as it is done for
> > >> domain. migration is allowed with vhost.
> > >
> > > ok, I have not tried vhost/migration. Below change set viommu when
> > > reconstructing endpoint.
> >
> >
> > Yes I think this should be OK.
> >
> > By the end I did the series a try with vhost/vfio. with vhost it works
> > (not with recent kernel though, but the issue may be related to kernel).
> > With VFIO however it does not for me.
> >
> > First issue is: your guest can use 4K page and your host can use 64KB
> > pages. In that case VFIO_DMA_MAP will fail with -EINVAL. We must devise
> > a way to pass the host settings to the VIRTIO-IOMMU device.
> >
> > Even with 64KB pages, it did not work for me. I have obviously not the
> > storm of VFIO_DMA_MAP failures but I have some, most probably due to
> > some wrong notifications somewhere. I will try to investigate on my side.
> >
> > Did you test with VFIO on your side?
>
> I did not tried with different page sizes, only tested with 4K page size.
>
> Yes it works, I tested with two n/w device assigned to VM, both interfaces 
> works
>
> First I will try with 64k page size.

A 64K page size does not work for me either.

I think we are not passing the correct page_size_mask here
(config.page_size_mask is set to TARGET_PAGE_MASK, which is
0xf000).

We need to set this correctly as per host page size, correct?

Thanks
-Bharat

>
> Thanks
> -Bharat
>
> >
> > Thanks
> >
> > Eric
> > >
> > > @@ -984,6 +973,7 @@ static gboolean reconstruct_endpoints(gpointer
> > > key, gpointer value,
> > >
> > >  QLIST_FOREACH(iter, >endpoint_list, next) {
> > >  iter->domain = d;
> > > +   iter->viommu = s;
> > >  g_tree_insert(s->endpoints, GUINT_TO_POINTER(iter->id), iter);
> > >  }
> > >  return false; /* continue the domain traversal */
> > >
> > >>>  } VirtIOIOMMUEndpoint;
> > >>>
> > >>>  typedef struct VirtIOIOMMUInterval {
> > >>> @@ -155,8 +156,44 @@ static void 
> > >>> virtio_iommu_notify_unmap(IOMMUMemoryRegion *mr, hwaddr iova,
> > >>>  memory_region_notify_iommu(mr, 0, entry);
> > >>>  }
> > >>>
> > >>> +static gboolean virtio_iommu_mapping_unmap(gpointer key, gpointer 
> > >>> value,
> > >>> +   gpointer data)
> > >>> +{
> > >>> +VirtIOIOMMUInterval *interval = (VirtIOIOMMUInterval *) key;
> > >>> +IOMMUMemoryRegion *mr = (IOMMUMemoryRegion *) data;
> > >>> +
> > >>> +virtio_iommu_notify_unmap(mr, interval->low,
> > >>> +  interval->high - interval->low + 1);
> > >>> +
> > >>> +return false;
> > >>> +}
> > >>> +
> > >>> +static gboolean virtio_iommu_mapping_map(gpointer key, gpointer value,
> > >>> + gpointer data)
> > >>> +{
> > >>> +VirtIOIOMMUMapping *mapping = (VirtIOIOMMUMapping *) value;
> > >>> +VirtIOIOMMUInterval *interval = (VirtIOIOMMUInterval *) key;
> > >>> +IOMMUMemoryRegion *mr = (IOMMUMemoryRegion *) data;
> > >>> +
> > >>> +virtio_iommu_notify_map(mr, interval->low, mapping->phys_addr,
> > >>> +interval->high - interval->low + 1);
> > >>> +
> > >>> +return false;
> > >>> +}
> > >>> +
> > >>>  static void 
> > >>> virtio_iommu_detach_endpoint_from_domain(VirtIOIOMMUEndpoint *ep)
> > >>>  {
> > >>> +VirtioIOMMUNotifierNode *node;
> > >>> +VirtIOIOMMU *s = ep->viommu;
> > >>> +VirtIOIOMMUDomain *domain = ep->domain;
> > >>> +
> > >>> +QLIST_FOREACH(node, >notifiers_list, next) {
> > >>> +if (ep->id == node->iommu_dev->devfn) {
> > >>> +g_tree_foreach(domain->mappings, 
> > >>> virtio_iommu_mapping_unmap,
> > >>> +   >iommu_dev->iommu_mr);
> > >> I 

Re: [PATCH v7 3/5] virtio-iommu: Call iommu notifier for attach/detach

2020-03-16 Thread Bharat Bhushan
Hi Eric,

On Mon, Mar 16, 2020 at 2:35 PM Auger Eric  wrote:
>
> Hi Bharat,
>
> On 3/16/20 9:58 AM, Bharat Bhushan wrote:
> > Hi Eric,
> >
> > On Mon, Mar 16, 2020 at 1:15 PM Bharat Bhushan  
> > wrote:
> >>
> >> Hi Eric,
> >>
> >> On Mon, Mar 16, 2020 at 1:02 PM Auger Eric  wrote:
> >>>
> >>> Hi Bharat,
> >>>
> >>> On 3/16/20 7:41 AM, Bharat Bhushan wrote:
>  Hi Eric,
> 
>  On Fri, Mar 13, 2020 at 8:11 PM Auger Eric  wrote:
> >
> > Hi Bharat
> >
> > On 3/13/20 8:48 AM, Bharat Bhushan wrote:
> >> iommu-notifier are called when a device is attached
> > IOMMU notifiers
> >> or detached to as address-space.
> >> This is needed for VFIO.
> > and vhost for detach
> >>
> >> Signed-off-by: Bharat Bhushan 
> >> ---
> >>  hw/virtio/virtio-iommu.c | 47 
> >>  1 file changed, 47 insertions(+)
> >>
> >> diff --git a/hw/virtio/virtio-iommu.c b/hw/virtio/virtio-iommu.c
> >> index e51344a53e..2006f72901 100644
> >> --- a/hw/virtio/virtio-iommu.c
> >> +++ b/hw/virtio/virtio-iommu.c
> >> @@ -49,6 +49,7 @@ typedef struct VirtIOIOMMUEndpoint {
> >>  uint32_t id;
> >>  VirtIOIOMMUDomain *domain;
> >>  QLIST_ENTRY(VirtIOIOMMUEndpoint) next;
> >> +VirtIOIOMMU *viommu;
> > This needs specal care on post-load. When migrating the EPs, only the id
> > is migrated. On post-load you need to set viommu as it is done for
> > domain. migration is allowed with vhost.
> 
>  ok, I have not tried vhost/migration. Below change set viommu when
>  reconstructing endpoint.
> >>>
> >>>
> >>> Yes I think this should be OK.
> >>>
> >>> By the end I did the series a try with vhost/vfio. with vhost it works
> >>> (not with recent kernel though, but the issue may be related to kernel).
> >>> With VFIO however it does not for me.
> >>>
> >>> First issue is: your guest can use 4K page and your host can use 64KB
> >>> pages. In that case VFIO_DMA_MAP will fail with -EINVAL. We must devise
> >>> a way to pass the host settings to the VIRTIO-IOMMU device.
> >>>
> >>> Even with 64KB pages, it did not work for me. I have obviously not the
> >>> storm of VFIO_DMA_MAP failures but I have some, most probably due to
> >>> some wrong notifications somewhere. I will try to investigate on my side.
> >>>
> >>> Did you test with VFIO on your side?
> >>
> >> I did not tried with different page sizes, only tested with 4K page size.
> >>
> >> Yes it works, I tested with two n/w device assigned to VM, both interfaces 
> >> works
> >>
> >> First I will try with 64k page size.
> >
> > 64K page size does not work for me as well,
> >
> > I think we are not passing correct page_size_mask here
> > (config.page_size_mask is set to TARGET_PAGE_MASK ( which is
> > 0xf000))
> I guess you mean with guest using 4K and host using 64K.
> >
> > We need to set this correctly as per host page size, correct?
> Yes that's correct. We need to put in place a control path to retrieve
> the page settings on host through VFIO to inform the virtio-iommu device.
>
> Besides this issue, did you try with 64kB on host and guest?

I tried the following:
  - 4k host and 4k guest  - it works with v7 version
  - 64k host and 64k guest - it does not work with v7
hard-coded config.page_size_mask to 0x and it works

Thanks
-Bharat

>
> Thanks
>
> Eric
> >
> > Thanks
> > -Bharat
> >
> >>
> >> Thanks
> >> -Bharat
> >>
> >>>
> >>> Thanks
> >>>
> >>> Eric
> 
>  @@ -984,6 +973,7 @@ static gboolean reconstruct_endpoints(gpointer
>  key, gpointer value,
> 
>   QLIST_FOREACH(iter, >endpoint_list, next) {
>   iter->domain = d;
>  +   iter->viommu = s;
>   g_tree_insert(s->endpoints, GUINT_TO_POINTER(iter->id), iter);
>   }
>   return false; /* continue the domain traversal */
> 
> >>  } VirtIOIOMMUEndpoint;
> >>
> >>  typedef struct VirtIOIOMMUInterval {
> >> @@ -155,8 +156,44 @@ static void 
> >> virtio_iommu_notify_unmap(IOMMUMemoryRegion *mr, hwaddr iova,
> >>  memory_region_notify_iommu(mr, 0, entry);
> >>  }
> >>
> >> +static gboolean virtio_iommu_mapping_unmap(gpointer key, gpointer 
> >> value,
> >> +   gpointer data)
> >> +{
> >> +VirtIOIOMMUInterval *interval = (VirtIOIOMMUInterval *) key;
> >> +IOMMUMemoryRegion *mr = (IOMMUMemoryRegion *) data;
> >> +
> >> +virtio_iommu_notify_unmap(mr, interval->low,
> >> +  interval->high - interval->low + 1);
> >> +
> >> +return false;
> >> +}
> >> +
> >> +static gboolean virtio_iommu_mapping_map(gpointer key, gpointer value,
> >> + gpointer data)
> >> +{
> >> +VirtIOIOMMUMapping *mapping = (VirtIOIOMMUMapping *) value;
> >> +

Re: [PATCH v7 3/5] virtio-iommu: Call iommu notifier for attach/detach

2020-03-16 Thread Auger Eric
Hi Bharat,

On 3/16/20 9:58 AM, Bharat Bhushan wrote:
> Hi Eric,
> 
> On Mon, Mar 16, 2020 at 1:15 PM Bharat Bhushan  
> wrote:
>>
>> Hi Eric,
>>
>> On Mon, Mar 16, 2020 at 1:02 PM Auger Eric  wrote:
>>>
>>> Hi Bharat,
>>>
>>> On 3/16/20 7:41 AM, Bharat Bhushan wrote:
 Hi Eric,

 On Fri, Mar 13, 2020 at 8:11 PM Auger Eric  wrote:
>
> Hi Bharat
>
> On 3/13/20 8:48 AM, Bharat Bhushan wrote:
>> iommu-notifier are called when a device is attached
> IOMMU notifiers
>> or detached to as address-space.
>> This is needed for VFIO.
> and vhost for detach
>>
>> Signed-off-by: Bharat Bhushan 
>> ---
>>  hw/virtio/virtio-iommu.c | 47 
>>  1 file changed, 47 insertions(+)
>>
>> diff --git a/hw/virtio/virtio-iommu.c b/hw/virtio/virtio-iommu.c
>> index e51344a53e..2006f72901 100644
>> --- a/hw/virtio/virtio-iommu.c
>> +++ b/hw/virtio/virtio-iommu.c
>> @@ -49,6 +49,7 @@ typedef struct VirtIOIOMMUEndpoint {
>>  uint32_t id;
>>  VirtIOIOMMUDomain *domain;
>>  QLIST_ENTRY(VirtIOIOMMUEndpoint) next;
>> +VirtIOIOMMU *viommu;
> This needs specal care on post-load. When migrating the EPs, only the id
> is migrated. On post-load you need to set viommu as it is done for
> domain. migration is allowed with vhost.

 ok, I have not tried vhost/migration. Below change set viommu when
 reconstructing endpoint.
>>>
>>>
>>> Yes I think this should be OK.
>>>
>>> By the end I did the series a try with vhost/vfio. with vhost it works
>>> (not with recent kernel though, but the issue may be related to kernel).
>>> With VFIO however it does not for me.
>>>
>>> First issue is: your guest can use 4K page and your host can use 64KB
>>> pages. In that case VFIO_DMA_MAP will fail with -EINVAL. We must devise
>>> a way to pass the host settings to the VIRTIO-IOMMU device.
>>>
>>> Even with 64KB pages, it did not work for me. I have obviously not the
>>> storm of VFIO_DMA_MAP failures but I have some, most probably due to
>>> some wrong notifications somewhere. I will try to investigate on my side.
>>>
>>> Did you test with VFIO on your side?
>>
>> I did not tried with different page sizes, only tested with 4K page size.
>>
>> Yes it works, I tested with two n/w device assigned to VM, both interfaces 
>> works
>>
>> First I will try with 64k page size.
> 
> 64K page size does not work for me as well,
> 
> I think we are not passing correct page_size_mask here
> (config.page_size_mask is set to TARGET_PAGE_MASK ( which is
> 0xf000))
I guess you mean with guest using 4K and host using 64K.
> 
> We need to set this correctly as per host page size, correct?
Yes that's correct. We need to put in place a control path to retrieve
the page settings on host through VFIO to inform the virtio-iommu device.

Besides this issue, did you try with 64kB on host and guest?

Thanks

Eric
> 
> Thanks
> -Bharat
> 
>>
>> Thanks
>> -Bharat
>>
>>>
>>> Thanks
>>>
>>> Eric

 @@ -984,6 +973,7 @@ static gboolean reconstruct_endpoints(gpointer
 key, gpointer value,

  QLIST_FOREACH(iter, >endpoint_list, next) {
  iter->domain = d;
 +   iter->viommu = s;
  g_tree_insert(s->endpoints, GUINT_TO_POINTER(iter->id), iter);
  }
  return false; /* continue the domain traversal */

>>  } VirtIOIOMMUEndpoint;
>>
>>  typedef struct VirtIOIOMMUInterval {
>> @@ -155,8 +156,44 @@ static void 
>> virtio_iommu_notify_unmap(IOMMUMemoryRegion *mr, hwaddr iova,
>>  memory_region_notify_iommu(mr, 0, entry);
>>  }
>>
>> +static gboolean virtio_iommu_mapping_unmap(gpointer key, gpointer value,
>> +   gpointer data)
>> +{
>> +VirtIOIOMMUInterval *interval = (VirtIOIOMMUInterval *) key;
>> +IOMMUMemoryRegion *mr = (IOMMUMemoryRegion *) data;
>> +
>> +virtio_iommu_notify_unmap(mr, interval->low,
>> +  interval->high - interval->low + 1);
>> +
>> +return false;
>> +}
>> +
>> +static gboolean virtio_iommu_mapping_map(gpointer key, gpointer value,
>> + gpointer data)
>> +{
>> +VirtIOIOMMUMapping *mapping = (VirtIOIOMMUMapping *) value;
>> +VirtIOIOMMUInterval *interval = (VirtIOIOMMUInterval *) key;
>> +IOMMUMemoryRegion *mr = (IOMMUMemoryRegion *) data;
>> +
>> +virtio_iommu_notify_map(mr, interval->low, mapping->phys_addr,
>> +interval->high - interval->low + 1);
>> +
>> +return false;
>> +}
>> +
>>  static void 
>> virtio_iommu_detach_endpoint_from_domain(VirtIOIOMMUEndpoint *ep)
>>  {
>> +VirtioIOMMUNotifierNode *node;
>> +VirtIOIOMMU *s = ep->viommu;
>> +VirtIOIOMMUDomain *domain = 

Re: [PATCH v7 3/5] virtio-iommu: Call iommu notifier for attach/detach

2020-03-16 Thread Bharat Bhushan
Hi Eric,

On Mon, Mar 16, 2020 at 1:02 PM Auger Eric  wrote:
>
> Hi Bharat,
>
> On 3/16/20 7:41 AM, Bharat Bhushan wrote:
> > Hi Eric,
> >
> > On Fri, Mar 13, 2020 at 8:11 PM Auger Eric  wrote:
> >>
> >> Hi Bharat
> >>
> >> On 3/13/20 8:48 AM, Bharat Bhushan wrote:
> >>> iommu-notifier are called when a device is attached
> >> IOMMU notifiers
> >>> or detached to as address-space.
> >>> This is needed for VFIO.
> >> and vhost for detach
> >>>
> >>> Signed-off-by: Bharat Bhushan 
> >>> ---
> >>>  hw/virtio/virtio-iommu.c | 47 
> >>>  1 file changed, 47 insertions(+)
> >>>
> >>> diff --git a/hw/virtio/virtio-iommu.c b/hw/virtio/virtio-iommu.c
> >>> index e51344a53e..2006f72901 100644
> >>> --- a/hw/virtio/virtio-iommu.c
> >>> +++ b/hw/virtio/virtio-iommu.c
> >>> @@ -49,6 +49,7 @@ typedef struct VirtIOIOMMUEndpoint {
> >>>  uint32_t id;
> >>>  VirtIOIOMMUDomain *domain;
> >>>  QLIST_ENTRY(VirtIOIOMMUEndpoint) next;
> >>> +VirtIOIOMMU *viommu;
> >> This needs special care on post-load. When migrating the EPs, only the id
> >> is migrated. On post-load you need to set viommu as is done for the
> >> domain. Migration is allowed with vhost.
> >
> > ok, I have not tried vhost/migration. The change below sets viommu when
> > reconstructing the endpoint.
>
>
> Yes I think this should be OK.
>
> In the end I gave the series a try with vhost/VFIO. With vhost it works
> (not with a recent kernel though, but the issue may be related to the kernel).
> With VFIO however it does not work for me.
>
> The first issue is: your guest can use 4K pages while your host uses 64KB
> pages. In that case VFIO_DMA_MAP will fail with -EINVAL. We must devise
> a way to pass the host settings to the VIRTIO-IOMMU device.
>
> Even with 64KB pages, it did not work for me. I obviously don't get the
> storm of VFIO_DMA_MAP failures, but I still get some, most probably due to
> wrong notifications somewhere. I will try to investigate on my side.
>
> Did you test with VFIO on your side?

I did not try with different page sizes, only tested with 4K page size.

Yes it works, I tested with two n/w devices assigned to the VM, both interfaces work

First I will try with 64k page size.

Thanks
-Bharat

>
> Thanks
>
> Eric
> >
> > @@ -984,6 +973,7 @@ static gboolean reconstruct_endpoints(gpointer
> > key, gpointer value,
> >
> >  QLIST_FOREACH(iter, &d->endpoint_list, next) {
> >  iter->domain = d;
> > +   iter->viommu = s;
> >  g_tree_insert(s->endpoints, GUINT_TO_POINTER(iter->id), iter);
> >  }
> >  return false; /* continue the domain traversal */
> >
> >>>  } VirtIOIOMMUEndpoint;
> >>>
> >>>  typedef struct VirtIOIOMMUInterval {
> >>> @@ -155,8 +156,44 @@ static void 
> >>> virtio_iommu_notify_unmap(IOMMUMemoryRegion *mr, hwaddr iova,
> >>>  memory_region_notify_iommu(mr, 0, entry);
> >>>  }
> >>>
> >>> +static gboolean virtio_iommu_mapping_unmap(gpointer key, gpointer value,
> >>> +   gpointer data)
> >>> +{
> >>> +VirtIOIOMMUInterval *interval = (VirtIOIOMMUInterval *) key;
> >>> +IOMMUMemoryRegion *mr = (IOMMUMemoryRegion *) data;
> >>> +
> >>> +virtio_iommu_notify_unmap(mr, interval->low,
> >>> +  interval->high - interval->low + 1);
> >>> +
> >>> +return false;
> >>> +}
> >>> +
> >>> +static gboolean virtio_iommu_mapping_map(gpointer key, gpointer value,
> >>> + gpointer data)
> >>> +{
> >>> +VirtIOIOMMUMapping *mapping = (VirtIOIOMMUMapping *) value;
> >>> +VirtIOIOMMUInterval *interval = (VirtIOIOMMUInterval *) key;
> >>> +IOMMUMemoryRegion *mr = (IOMMUMemoryRegion *) data;
> >>> +
> >>> +virtio_iommu_notify_map(mr, interval->low, mapping->phys_addr,
> >>> +interval->high - interval->low + 1);
> >>> +
> >>> +return false;
> >>> +}
> >>> +
> >>>  static void virtio_iommu_detach_endpoint_from_domain(VirtIOIOMMUEndpoint 
> >>> *ep)
> >>>  {
> >>> +VirtioIOMMUNotifierNode *node;
> >>> +VirtIOIOMMU *s = ep->viommu;
> >>> +VirtIOIOMMUDomain *domain = ep->domain;
> >>> +
> >>> +QLIST_FOREACH(node, &s->notifiers_list, next) {
> >>> +if (ep->id == node->iommu_dev->devfn) {
> >>> +g_tree_foreach(domain->mappings, virtio_iommu_mapping_unmap,
> >>> +   &node->iommu_dev->iommu_mr);
> >> I understand this should do the job for domain removal
> >
> > I did not get the comment; are you saying we should do this on domain removal?
> see my reply on 2/5
>
> Note the above code should be moved after the check of !ep->domain below

ohh yes, will move

Thanks
-Bharat
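
For reference, combining the hunks quoted above with the suggested
reordering, the detach helper would look roughly like this; it is a sketch
assembled from the code in this thread, not the final version:

static void virtio_iommu_detach_endpoint_from_domain(VirtIOIOMMUEndpoint *ep)
{
    VirtioIOMMUNotifierNode *node;
    VirtIOIOMMU *s = ep->viommu;
    VirtIOIOMMUDomain *domain = ep->domain;

    if (!ep->domain) {
        /* The endpoint is not attached, nothing to unmap. */
        return;
    }

    /* Tell the registered notifiers (VFIO, vhost) to drop this endpoint's
     * mappings before it is detached from the domain. */
    QLIST_FOREACH(node, &s->notifiers_list, next) {
        if (ep->id == node->iommu_dev->devfn) {
            g_tree_foreach(domain->mappings, virtio_iommu_mapping_unmap,
                           &node->iommu_dev->iommu_mr);
        }
    }

    /* ... existing detach logic (remove ep from the domain's list) ... */
}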

> >
> >>> +}
> >>> +}
> >>> +
> >>>  if (!ep->domain) {
> >>>  return;
> >>>  }
> >>> @@ -178,6 +215,7 @@ static VirtIOIOMMUEndpoint 
> >>> *virtio_iommu_get_endpoint(VirtIOIOMMU *s,
> >>>  }
> >>>  ep = g_malloc0(sizeof(*ep));
> >>>  ep->id = ep_id;
> >>> +ep->viommu = s;
> >>>  

Re: [PATCH v7 3/5] virtio-iommu: Call iommu notifier for attach/detach

2020-03-16 Thread Auger Eric
Hi Bharat,

On 3/16/20 7:41 AM, Bharat Bhushan wrote:
> Hi Eric,
> 
> On Fri, Mar 13, 2020 at 8:11 PM Auger Eric  wrote:
>>
>> Hi Bharat
>>
>> On 3/13/20 8:48 AM, Bharat Bhushan wrote:
>>> iommu-notifier are called when a device is attached
>> IOMMU notifiers
>>> or detached to as address-space.
>>> This is needed for VFIO.
>> and vhost for detach
>>>
>>> Signed-off-by: Bharat Bhushan 
>>> ---
>>>  hw/virtio/virtio-iommu.c | 47 
>>>  1 file changed, 47 insertions(+)
>>>
>>> diff --git a/hw/virtio/virtio-iommu.c b/hw/virtio/virtio-iommu.c
>>> index e51344a53e..2006f72901 100644
>>> --- a/hw/virtio/virtio-iommu.c
>>> +++ b/hw/virtio/virtio-iommu.c
>>> @@ -49,6 +49,7 @@ typedef struct VirtIOIOMMUEndpoint {
>>>  uint32_t id;
>>>  VirtIOIOMMUDomain *domain;
>>>  QLIST_ENTRY(VirtIOIOMMUEndpoint) next;
>>> +VirtIOIOMMU *viommu;
>> This needs special care on post-load. When migrating the EPs, only the id
>> is migrated. On post-load you need to set viommu as is done for the
>> domain. Migration is allowed with vhost.
> 
> ok, I have not tried vhost/migration. The change below sets viommu when
> reconstructing the endpoint.


Yes I think this should be OK.

In the end I gave the series a try with vhost/VFIO. With vhost it works
(not with a recent kernel though, but the issue may be related to the kernel).
With VFIO however it does not work for me.

The first issue is: your guest can use 4K pages while your host uses 64KB
pages. In that case VFIO_DMA_MAP will fail with -EINVAL. We must devise
a way to pass the host settings to the VIRTIO-IOMMU device.

Even with 64KB pages, it did not work for me. I obviously don't get the
storm of VFIO_DMA_MAP failures, but I still get some, most probably due to
wrong notifications somewhere. I will try to investigate on my side.

Did you test with VFIO on your side?

Thanks

Eric
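
To make the -EINVAL failure concrete: VFIO_IOMMU_MAP_DMA requires the IOVA,
the user virtual address and the size to be multiples of a page size the
host IOMMU supports, so a 4K mapping requested by the guest cannot be
satisfied when the host minimum is 64K. A rough user-space sketch of such a
rejected request, assuming an already opened and configured VFIO container
fd:

#include <stdint.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/vfio.h>

/* Sketch only: map a single 4K page through VFIO. On a host whose IOMMU
 * minimum page size is 64K, this is expected to fail with -EINVAL because
 * iova and size are not multiples of a supported page size. */
static int map_one_4k_page(int container_fd, uint64_t iova, uint64_t vaddr)
{
    struct vfio_iommu_type1_dma_map map;

    memset(&map, 0, sizeof(map));
    map.argsz = sizeof(map);
    map.flags = VFIO_DMA_MAP_FLAG_READ | VFIO_DMA_MAP_FLAG_WRITE;
    map.vaddr = vaddr;
    map.iova  = iova;
    map.size  = 0x1000;   /* 4K: smaller than the host's 64K granule */

    return ioctl(container_fd, VFIO_IOMMU_MAP_DMA, &map);
}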
> 
> @@ -984,6 +973,7 @@ static gboolean reconstruct_endpoints(gpointer
> key, gpointer value,
> 
>  QLIST_FOREACH(iter, &d->endpoint_list, next) {
>  iter->domain = d;
> +   iter->viommu = s;
>  g_tree_insert(s->endpoints, GUINT_TO_POINTER(iter->id), iter);
>  }
>  return false; /* continue the domain traversal */
> 
>>>  } VirtIOIOMMUEndpoint;
>>>
>>>  typedef struct VirtIOIOMMUInterval {
>>> @@ -155,8 +156,44 @@ static void 
>>> virtio_iommu_notify_unmap(IOMMUMemoryRegion *mr, hwaddr iova,
>>>  memory_region_notify_iommu(mr, 0, entry);
>>>  }
>>>
>>> +static gboolean virtio_iommu_mapping_unmap(gpointer key, gpointer value,
>>> +   gpointer data)
>>> +{
>>> +VirtIOIOMMUInterval *interval = (VirtIOIOMMUInterval *) key;
>>> +IOMMUMemoryRegion *mr = (IOMMUMemoryRegion *) data;
>>> +
>>> +virtio_iommu_notify_unmap(mr, interval->low,
>>> +  interval->high - interval->low + 1);
>>> +
>>> +return false;
>>> +}
>>> +
>>> +static gboolean virtio_iommu_mapping_map(gpointer key, gpointer value,
>>> + gpointer data)
>>> +{
>>> +VirtIOIOMMUMapping *mapping = (VirtIOIOMMUMapping *) value;
>>> +VirtIOIOMMUInterval *interval = (VirtIOIOMMUInterval *) key;
>>> +IOMMUMemoryRegion *mr = (IOMMUMemoryRegion *) data;
>>> +
>>> +virtio_iommu_notify_map(mr, interval->low, mapping->phys_addr,
>>> +interval->high - interval->low + 1);
>>> +
>>> +return false;
>>> +}
>>> +
>>>  static void virtio_iommu_detach_endpoint_from_domain(VirtIOIOMMUEndpoint 
>>> *ep)
>>>  {
>>> +VirtioIOMMUNotifierNode *node;
>>> +VirtIOIOMMU *s = ep->viommu;
>>> +VirtIOIOMMUDomain *domain = ep->domain;
>>> +
> >>> +QLIST_FOREACH(node, &s->notifiers_list, next) {
>>> +if (ep->id == node->iommu_dev->devfn) {
>>> +g_tree_foreach(domain->mappings, virtio_iommu_mapping_unmap,
>>> +   >iommu_dev->iommu_mr);
>> I understand this should do the job for domain removal
> 
> I did not get the comment; are you saying we should do this on domain removal?
see my reply on 2/5

Note the above code should be moved after the check of !ep->domain below
> 
>>> +}
>>> +}
>>> +
>>>  if (!ep->domain) {
>>>  return;
>>>  }
>>> @@ -178,6 +215,7 @@ static VirtIOIOMMUEndpoint 
>>> *virtio_iommu_get_endpoint(VirtIOIOMMU *s,
>>>  }
>>>  ep = g_malloc0(sizeof(*ep));
>>>  ep->id = ep_id;
>>> +ep->viommu = s;
>>>  trace_virtio_iommu_get_endpoint(ep_id);
>>>  g_tree_insert(s->endpoints, GUINT_TO_POINTER(ep_id), ep);
>>>  return ep;
>>> @@ -272,6 +310,7 @@ static int virtio_iommu_attach(VirtIOIOMMU *s,
>>>  {
>>>  uint32_t domain_id = le32_to_cpu(req->domain);
>>>  uint32_t ep_id = le32_to_cpu(req->endpoint);
>>> +VirtioIOMMUNotifierNode *node;
>>>  VirtIOIOMMUDomain *domain;
>>>  VirtIOIOMMUEndpoint *ep;
>>>
>>> @@ -299,6 +338,14 @@ static int virtio_iommu_attach(VirtIOIOMMU *s,
>>>
>>>  ep->domain = domain;
>>>
>>> +/* 

Re: [PATCH v7 3/5] virtio-iommu: Call iommu notifier for attach/detach

2020-03-16 Thread Bharat Bhushan
Hi Eric,

On Fri, Mar 13, 2020 at 8:11 PM Auger Eric  wrote:
>
> Hi Bharat
>
> On 3/13/20 8:48 AM, Bharat Bhushan wrote:
> > iommu-notifier are called when a device is attached
> IOMMU notifiers
> > or detached to as address-space.
> > This is needed for VFIO.
> and vhost for detach
> >
> > Signed-off-by: Bharat Bhushan 
> > ---
> >  hw/virtio/virtio-iommu.c | 47 
> >  1 file changed, 47 insertions(+)
> >
> > diff --git a/hw/virtio/virtio-iommu.c b/hw/virtio/virtio-iommu.c
> > index e51344a53e..2006f72901 100644
> > --- a/hw/virtio/virtio-iommu.c
> > +++ b/hw/virtio/virtio-iommu.c
> > @@ -49,6 +49,7 @@ typedef struct VirtIOIOMMUEndpoint {
> >  uint32_t id;
> >  VirtIOIOMMUDomain *domain;
> >  QLIST_ENTRY(VirtIOIOMMUEndpoint) next;
> > +VirtIOIOMMU *viommu;
> This needs special care on post-load. When migrating the EPs, only the id
> is migrated. On post-load you need to set viommu as is done for the
> domain. Migration is allowed with vhost.

ok, I have not tried vhost/migration. The change below sets viommu when
reconstructing the endpoint.

@@ -984,6 +973,7 @@ static gboolean reconstruct_endpoints(gpointer
key, gpointer value,

 QLIST_FOREACH(iter, &d->endpoint_list, next) {
 iter->domain = d;
+   iter->viommu = s;
 g_tree_insert(s->endpoints, GUINT_TO_POINTER(iter->id), iter);
 }
 return false; /* continue the domain traversal */
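
For readability, the whole callback with this fix-up applied would read
roughly as below; the local variable declarations are reconstructed from the
usual GTraverseFunc pattern and are not part of the quoted hunk:

static gboolean reconstruct_endpoints(gpointer key, gpointer value,
                                      gpointer data)
{
    VirtIOIOMMU *s = (VirtIOIOMMU *)data;
    VirtIOIOMMUDomain *d = (VirtIOIOMMUDomain *)value;
    VirtIOIOMMUEndpoint *iter;

    QLIST_FOREACH(iter, &d->endpoint_list, next) {
        iter->domain = d;
        iter->viommu = s;   /* restore the back-pointer lost across migration */
        g_tree_insert(s->endpoints, GUINT_TO_POINTER(iter->id), iter);
    }
    return false; /* continue the domain traversal */
}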

> >  } VirtIOIOMMUEndpoint;
> >
> >  typedef struct VirtIOIOMMUInterval {
> > @@ -155,8 +156,44 @@ static void 
> > virtio_iommu_notify_unmap(IOMMUMemoryRegion *mr, hwaddr iova,
> >  memory_region_notify_iommu(mr, 0, entry);
> >  }
> >
> > +static gboolean virtio_iommu_mapping_unmap(gpointer key, gpointer value,
> > +   gpointer data)
> > +{
> > +VirtIOIOMMUInterval *interval = (VirtIOIOMMUInterval *) key;
> > +IOMMUMemoryRegion *mr = (IOMMUMemoryRegion *) data;
> > +
> > +virtio_iommu_notify_unmap(mr, interval->low,
> > +  interval->high - interval->low + 1);
> > +
> > +return false;
> > +}
> > +
> > +static gboolean virtio_iommu_mapping_map(gpointer key, gpointer value,
> > + gpointer data)
> > +{
> > +VirtIOIOMMUMapping *mapping = (VirtIOIOMMUMapping *) value;
> > +VirtIOIOMMUInterval *interval = (VirtIOIOMMUInterval *) key;
> > +IOMMUMemoryRegion *mr = (IOMMUMemoryRegion *) data;
> > +
> > +virtio_iommu_notify_map(mr, interval->low, mapping->phys_addr,
> > +interval->high - interval->low + 1);
> > +
> > +return false;
> > +}
> > +
> >  static void virtio_iommu_detach_endpoint_from_domain(VirtIOIOMMUEndpoint 
> > *ep)
> >  {
> > +VirtioIOMMUNotifierNode *node;
> > +VirtIOIOMMU *s = ep->viommu;
> > +VirtIOIOMMUDomain *domain = ep->domain;
> > +
> > +QLIST_FOREACH(node, &s->notifiers_list, next) {
> > +if (ep->id == node->iommu_dev->devfn) {
> > +g_tree_foreach(domain->mappings, virtio_iommu_mapping_unmap,
> > +   &node->iommu_dev->iommu_mr);
> I understand this should do the job for domain removal

I did not get the comment; are you saying we should do this on domain removal?

> > +}
> > +}
> > +
> >  if (!ep->domain) {
> >  return;
> >  }
> > @@ -178,6 +215,7 @@ static VirtIOIOMMUEndpoint 
> > *virtio_iommu_get_endpoint(VirtIOIOMMU *s,
> >  }
> >  ep = g_malloc0(sizeof(*ep));
> >  ep->id = ep_id;
> > +ep->viommu = s;
> >  trace_virtio_iommu_get_endpoint(ep_id);
> >  g_tree_insert(s->endpoints, GUINT_TO_POINTER(ep_id), ep);
> >  return ep;
> > @@ -272,6 +310,7 @@ static int virtio_iommu_attach(VirtIOIOMMU *s,
> >  {
> >  uint32_t domain_id = le32_to_cpu(req->domain);
> >  uint32_t ep_id = le32_to_cpu(req->endpoint);
> > +VirtioIOMMUNotifierNode *node;
> >  VirtIOIOMMUDomain *domain;
> >  VirtIOIOMMUEndpoint *ep;
> >
> > @@ -299,6 +338,14 @@ static int virtio_iommu_attach(VirtIOIOMMU *s,
> >
> >  ep->domain = domain;
> >
> > +/* Replay existing address space mappings on the associated memory 
> > region */
> maybe use the "domain" terminology here.

ok,

Thanks
-Bharat

> > +QLIST_FOREACH(node, &s->notifiers_list, next) {
> > +if (ep_id == node->iommu_dev->devfn) {
> > +g_tree_foreach(domain->mappings, virtio_iommu_mapping_map,
> > +   &node->iommu_dev->iommu_mr);
> > +}
> > +}
> > +
> >  return VIRTIO_IOMMU_S_OK;
> >  }
> >
> >
> Thanks
>
> Eric
>



Re: [PATCH v7 3/5] virtio-iommu: Call iommu notifier for attach/detach

2020-03-13 Thread Auger Eric
Hi Bharat

On 3/13/20 8:48 AM, Bharat Bhushan wrote:
> iommu-notifier are called when a device is attached
IOMMU notifiers
> or detached to as address-space.
> This is needed for VFIO.
and vhost for detach
> 
> Signed-off-by: Bharat Bhushan 
> ---
>  hw/virtio/virtio-iommu.c | 47 
>  1 file changed, 47 insertions(+)
> 
> diff --git a/hw/virtio/virtio-iommu.c b/hw/virtio/virtio-iommu.c
> index e51344a53e..2006f72901 100644
> --- a/hw/virtio/virtio-iommu.c
> +++ b/hw/virtio/virtio-iommu.c
> @@ -49,6 +49,7 @@ typedef struct VirtIOIOMMUEndpoint {
>  uint32_t id;
>  VirtIOIOMMUDomain *domain;
>  QLIST_ENTRY(VirtIOIOMMUEndpoint) next;
> +VirtIOIOMMU *viommu;
This needs special care on post-load. When migrating the EPs, only the id
is migrated. On post-load you need to set viommu as is done for the
domain. Migration is allowed with vhost.
>  } VirtIOIOMMUEndpoint;
>  
>  typedef struct VirtIOIOMMUInterval {
> @@ -155,8 +156,44 @@ static void virtio_iommu_notify_unmap(IOMMUMemoryRegion 
> *mr, hwaddr iova,
>  memory_region_notify_iommu(mr, 0, entry);
>  }
>  
> +static gboolean virtio_iommu_mapping_unmap(gpointer key, gpointer value,
> +   gpointer data)
> +{
> +VirtIOIOMMUInterval *interval = (VirtIOIOMMUInterval *) key;
> +IOMMUMemoryRegion *mr = (IOMMUMemoryRegion *) data;
> +
> +virtio_iommu_notify_unmap(mr, interval->low,
> +  interval->high - interval->low + 1);
> +
> +return false;
> +}
> +
> +static gboolean virtio_iommu_mapping_map(gpointer key, gpointer value,
> + gpointer data)
> +{
> +VirtIOIOMMUMapping *mapping = (VirtIOIOMMUMapping *) value;
> +VirtIOIOMMUInterval *interval = (VirtIOIOMMUInterval *) key;
> +IOMMUMemoryRegion *mr = (IOMMUMemoryRegion *) data;
> +
> +virtio_iommu_notify_map(mr, interval->low, mapping->phys_addr,
> +interval->high - interval->low + 1);
> +
> +return false;
> +}
> +
>  static void virtio_iommu_detach_endpoint_from_domain(VirtIOIOMMUEndpoint *ep)
>  {
> +VirtioIOMMUNotifierNode *node;
> +VirtIOIOMMU *s = ep->viommu;
> +VirtIOIOMMUDomain *domain = ep->domain;
> +
> +QLIST_FOREACH(node, &s->notifiers_list, next) {
> +if (ep->id == node->iommu_dev->devfn) {
> +g_tree_foreach(domain->mappings, virtio_iommu_mapping_unmap,
> +   &node->iommu_dev->iommu_mr);
I understand this should do the job for domain removal
> +}
> +}
> +
>  if (!ep->domain) {
>  return;
>  }
> @@ -178,6 +215,7 @@ static VirtIOIOMMUEndpoint 
> *virtio_iommu_get_endpoint(VirtIOIOMMU *s,
>  }
>  ep = g_malloc0(sizeof(*ep));
>  ep->id = ep_id;
> +ep->viommu = s;
>  trace_virtio_iommu_get_endpoint(ep_id);
>  g_tree_insert(s->endpoints, GUINT_TO_POINTER(ep_id), ep);
>  return ep;
> @@ -272,6 +310,7 @@ static int virtio_iommu_attach(VirtIOIOMMU *s,
>  {
>  uint32_t domain_id = le32_to_cpu(req->domain);
>  uint32_t ep_id = le32_to_cpu(req->endpoint);
> +VirtioIOMMUNotifierNode *node;
>  VirtIOIOMMUDomain *domain;
>  VirtIOIOMMUEndpoint *ep;
>  
> @@ -299,6 +338,14 @@ static int virtio_iommu_attach(VirtIOIOMMU *s,
>  
>  ep->domain = domain;
>  
> +/* Replay existing address space mappings on the associated memory 
> region */
maybe use the "domain" terminology here.
> +QLIST_FOREACH(node, &s->notifiers_list, next) {
> +if (ep_id == node->iommu_dev->devfn) {
> +g_tree_foreach(domain->mappings, virtio_iommu_mapping_map,
> +   &node->iommu_dev->iommu_mr);
> +}
> +}
> +
>  return VIRTIO_IOMMU_S_OK;
>  }
>  
> 
Thanks

Eric
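
For readers following the notifier path: the notify helpers used in the
hunks above boil down to building an IOMMUTLBEntry and passing it to
memory_region_notify_iommu(). Below is a sketch of the map side, modelled on
how other QEMU IOMMUs do it rather than taken verbatim from this series:

static void virtio_iommu_notify_map(IOMMUMemoryRegion *mr, hwaddr iova,
                                    hwaddr paddr, hwaddr size)
{
    IOMMUTLBEntry entry;

    entry.target_as = &address_space_memory;
    entry.iova = iova;
    entry.translated_addr = paddr;
    entry.addr_mask = size - 1;   /* assumes size is a power of two */
    entry.perm = IOMMU_RW;

    /* Fires the IOMMU notifiers (VFIO, vhost) registered on this region. */
    memory_region_notify_iommu(mr, 0, entry);
}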