Re: [vfio-users] VFIO 128 byte TLP size?

2018-03-11 Thread Alex Williamson
On Fri, 9 Mar 2018 18:03:14 +0100
Oliver Heid  wrote:

> Does VFIO somehow limit PCIe TLP size to 128 bytes on virtualized 
> devices? In our case MaxPayload = MaxReadReq = 256 bytes in PCI config, 
> and we use 4k-aligned base addresses and do not cross 4k boundaries, but 
> the actual packet size is only 128B. Any idea how to get 256-byte TLPs?

See:

https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/drivers/vfio/pci/vfio_pci_config.c?id=523184972b282cd9ca17a76f6ca4742394856818
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/drivers/vfio/pci/vfio_pci_config.c?id=cf0d53ba4947aad6e471491d5b20a567cbe92e56

And to tune the host MPS settings:

https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/Documentation/admin-guide/kernel-parameters.txt#n3058
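
For reference, here's a minimal sketch for checking what the device is
actually programmed to (an illustration, not code from the commits
above; the sysfs path is an assumption, substitute your own device's
BDF, and reading past offset 0x40 generally requires root).  It walks
the capability list and decodes MaxPayload/MaxReadReq from the PCIe
Device Control register:

/* Sketch: decode MaxPayload/MaxReadReq from the PCIe Device Control
 * register.  Path is assumed to be a sysfs config-space file, e.g.
 * /sys/bus/pci/devices/0000:01:00.0/config; run as root to read the
 * space beyond the standard header. */
#include <stdio.h>
#include <stdint.h>

int main(int argc, char **argv)
{
    uint8_t cfg[256] = { 0 };
    unsigned int pos;
    FILE *f;

    if (argc != 2) {
        fprintf(stderr, "usage: %s /sys/bus/pci/devices/<BDF>/config\n",
                argv[0]);
        return 1;
    }

    f = fopen(argv[1], "rb");
    if (!f || fread(cfg, 1, sizeof(cfg), f) < 64) {
        perror(argv[1]);
        return 1;
    }
    fclose(f);

    /* Walk the capability list (first pointer at 0x34) looking for
     * the PCI Express capability, ID 0x10. */
    for (pos = cfg[0x34] & ~3; pos && pos + 9 < sizeof(cfg);
         pos = cfg[pos + 1] & ~3) {
        if (cfg[pos] == 0x10) {
            /* Device Control is at cap + 8: bits 7:5 encode Max
             * Payload Size, bits 14:12 Max Read Request Size, both
             * as 128 << n. */
            uint16_t devctl = cfg[pos + 8] | (cfg[pos + 9] << 8);

            printf("MaxPayload %d bytes, MaxReadReq %d bytes\n",
                   128 << ((devctl >> 5) & 7),
                   128 << ((devctl >> 12) & 7));
            return 0;
        }
    }
    fprintf(stderr, "no PCI Express capability found\n");
    return 1;
}

lspci -vv reports the same fields under DevCtl if you'd rather not
compile anything; this is mostly useful for scripting a check across
the whole hierarchy.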

Suggestions for improvement welcome.  Thanks,

Alex



Re: [vfio-users] vfio-users Digest, Vol 32, Issue 3

2018-03-11 Thread Alex Williamson
On Sat, 10 Mar 2018 22:17:33 +
"Patrick O'Callaghan"  wrote:

> On Sat, 2018-03-10 at 18:40 +, Steve Glaser wrote:
> > Max Payload Size determines both what a Function can send and what a 
> > Function accepts. As such, it is almost always set to the lowest common 
> > denominator of everything in the PCIe Hierarchy.  
> ...
> 
> 
> Don't reply to digests. Even if you fix the subject line the threading
> will still be screwed up. Some mail clients (e.g. Evolution) allow you
> to reply to a specific message *within* a digest and avoid this
> problem. Otherwise, it's just better not to use digests at all. They
> are a remnant from the days of UUCP and don't convey any advantage in
> modern systems.
> 

OTOH, I'd rather see a mangled, informative reply from a digest user
than none at all.  Thanks,

Alex



Re: [vfio-users] VFIO_IOMMU_MAP_DMA succeeds only on second try?

2018-03-11 Thread Alex Williamson
On Mon, 5 Mar 2018 12:37:24 +0100
Oliver Heid  wrote:

> Do I miss something here? I want to allow RW access of a peripheral 
> device to a memory region via
> 
>      struct vfio_iommu_type1_dma_map dma_map = { .argsz = sizeof(dma_map) };
> 
>      __u32* mem = (__u32*)mmap(NULL,size,PROT_READ|PROT_WRITE, 
> MAP_SHARED|MAP_ANONYMOUS|MAP_LOCKED, -1, 0);
> 
>      dma_map.argsz = sizeof(dma_map);
>      dma_map.flags = VFIO_DMA_MAP_FLAG_READ | VFIO_DMA_MAP_FLAG_WRITE;
>      dma_map.vaddr = (__u64)mem;
>      dma_map.iova  = 0;
>      dma_map.size  = size;
> 
>      ioctl(container, VFIO_IOMMU_MAP_DMA, &dma_map);
> 
> Any idea why the VFIO_IOMMU_MAP_DMA ioctl fails with EINVAL, but then a 
> second, identical call succeeds? Does it actually succeed then? QEMU 
> re-tries if the first attempt fails with EBUSY, but not with EINVAL.

I don't think the QEMU -EBUSY behavior is related; that path handles
previous mappings and may no longer even be necessary.  One of my unit
tests[1] does a similar map with no issues.  Your code excerpt doesn't
check whether mmap succeeds, so for all we know mmap is failing too.
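
For what it's worth, here's a sketch of that excerpt with the error
checking filled in (map_dma() and its arguments are hypothetical; the
container is assumed to already be open with the type1 IOMMU set, as
in the test below, and size page-aligned):

/* Sketch: the same mapping sequence with errors actually checked. */
#include <errno.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <linux/vfio.h>

static int map_dma(int container, size_t size)
{
    struct vfio_iommu_type1_dma_map dma_map = { .argsz = sizeof(dma_map) };
    void *mem;

    mem = mmap(NULL, size, PROT_READ | PROT_WRITE,
               MAP_SHARED | MAP_ANONYMOUS | MAP_LOCKED, -1, 0);
    if (mem == MAP_FAILED) {
        /* MAP_LOCKED makes this sensitive to RLIMIT_MEMLOCK */
        fprintf(stderr, "mmap: %s\n", strerror(errno));
        return -1;
    }

    dma_map.flags = VFIO_DMA_MAP_FLAG_READ | VFIO_DMA_MAP_FLAG_WRITE;
    dma_map.vaddr = (uintptr_t)mem;
    dma_map.iova  = 0;
    dma_map.size  = size;

    if (ioctl(container, VFIO_IOMMU_MAP_DMA, &dma_map)) {
        fprintf(stderr, "VFIO_IOMMU_MAP_DMA: %s\n", strerror(errno));
        munmap(mem, size);
        return -1;
    }

    return 0;
}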

> The latest 4.15.7 kernel (4.15.6 too IIRC) does not recognize the AMD 
> Ryzen 3 1200 as VFIO_TYPE1_IOMMU, so I am using version 4.15.4.

This doesn't make sense; anything implementing the kernel's IOMMU API
is compatible with type1, and that includes anything supporting
AMD-Vi.  There is no whitelist or processor-specific support
selection.  There are no iommu or vfio changes between the kernels
you're quoting, so perhaps you could do a bisect.  Thanks,

Alex

[1] https://github.com/awilliam/tests/blob/master/vfio-iommu-map-unmap.c
