Hi,
--verbose please. Do you see the same hang? Does the patch fix it?
> --- a/drivers/gpu/drm/ttm/ttm_execbuf_util.c
> +++ b/drivers/gpu/drm/ttm/ttm_execbuf_util.c
> @@ -97,8 +97,9 @@ int ttm_eu_reserve_buffers(struct ww_acq
> struct list_head *dups, bool del_lru)
On Mon, Sep 09, 2019 at 10:18:57AM +0800, Jason Wang wrote:
>
> On 2019/9/8 7:05 PM, Michael S. Tsirkin wrote:
> > On Thu, Sep 05, 2019 at 08:27:36PM +0800, Jason Wang wrote:
> > > This is a rework on the commit 7f466032dc9e ("vhost: access vq
> > > metadata through kernel virtual address").
> > >
On 2019/9/9 10:18 AM, Jason Wang wrote:
On an older Sandy Bridge CPU without SMAP support, TX PPS doesn't show
any difference.
Why isn't Kaby Lake with SMAP off the same as Sandy Bridge?
I don't know, I guess it was because the atomic is cheaper there.
Sorry, I meant the atomic costs less on Kaby Lake.
On 2019/9/7 11:03 PM, Jason Gunthorpe wrote:
On Fri, Sep 06, 2019 at 06:02:35PM +0800, Jason Wang wrote:
On 2019/9/5 9:59 PM, Jason Gunthorpe wrote:
On Thu, Sep 05, 2019 at 08:27:34PM +0800, Jason Wang wrote:
Hi:
Per request from Michael and Jason, the metadata acceleration is
reverted in
On 2019/9/8 7:05 PM, Michael S. Tsirkin wrote:
On Thu, Sep 05, 2019 at 08:27:36PM +0800, Jason Wang wrote:
This is a rework on the commit 7f466032dc9e ("vhost: access vq
metadata through kernel virtual address").
It was noticed that the copy_to/from_user() friends that were used to
access
Add a gfp_t parameter to the iommu_ops::map function.
Remove the needless locking in the AMD iommu driver.
The iommu_ops::map function (or the iommu_map function which calls it)
was always supposed to be sleepable (according to Joerg's comment in
this thread:
With or without locking it doesn't make sense for two writers to be
writing to the same IOVA range at the same time. Even with locking we
still have a race condition: whoever gets the lock first wins, so we
still can't be sure what the result will be. With locking the result
will be more sane, it will
Convert the AMD iommu driver to the dma-iommu api. Remove the iova
handling and reserve region code from the AMD iommu driver.
Signed-off-by: Tom Murphy
---
drivers/iommu/Kconfig | 1 +
drivers/iommu/amd_iommu.c | 677 --
2 files changed, 68
Use the dev->coherent_dma_mask when allocating in the dma-iommu ops api.
Signed-off-by: Tom Murphy
Reviewed-by: Robin Murphy
Reviewed-by: Christoph Hellwig
---
drivers/iommu/dma-iommu.c | 12 +++-
1 file changed, 7 insertions(+), 5 deletions(-)
diff --git a/drivers/iommu/dma-iommu.c
Handle devices which defer their attach to the iommu in the dma-iommu api
Signed-off-by: Tom Murphy
Reviewed-by: Robin Murphy
---
drivers/iommu/dma-iommu.c | 27 ++-
1 file changed, 26 insertions(+), 1 deletion(-)
diff --git a/drivers/iommu/dma-iommu.c
Convert the AMD iommu driver to the dma-iommu api. Remove the iova
handling and reserve region code from the AMD iommu driver.
Change-log:
V6:
-add more details to the description of patch
001-iommu-amd-Remove-unnecessary-locking-from-AMD-iommu-.patch
-rename handle_deferred_device to
iovec addresses coming from vhost are assumed to be
pre-validated, but in fact can be speculated to a value
out of range.
Userspace addresses are later validated with array_index_nospec so we
can be sure kernel info does not leak through these addresses, but
vhost must also not leak userspace info
On Thu, Sep 05, 2019 at 08:27:36PM +0800, Jason Wang wrote:
> This is a rework on the commit 7f466032dc9e ("vhost: access vq
> metadata through kernel virtual address").
>
> It was noticed that the copy_to/from_user() friends that were used to
> access virtqueue metadata tend to be very expensive