Re: [PATCH v4 3/5] vsock/virtio: fix locking in virtio_transport_inc_tx_pkt()

2019-07-18 Thread Stefano Garzarella
On Wed, Jul 17, 2019 at 4:51 PM Michael S. Tsirkin wrote: > > On Wed, Jul 17, 2019 at 01:30:28PM +0200, Stefano Garzarella wrote: > > fwd_cnt and last_fwd_cnt are protected by rx_lock, so we should use > > the same spinlock also in the TX path. > > > > Move also buf_alloc under the same

Re: [PATCH v4 4/5] vhost/vsock: split packets to send using multiple buffers

2019-07-18 Thread Stefano Garzarella
On Wed, Jul 17, 2019 at 4:55 PM Michael S. Tsirkin wrote: > > On Wed, Jul 17, 2019 at 01:30:29PM +0200, Stefano Garzarella wrote: > > If the packets to be sent to the guest are bigger than the buffer > > available, we can split them, using multiple buffers and fixing > > the length in the packet

Re: [PATCH v4 5/5] vsock/virtio: change the maximum packet size allowed

2019-07-18 Thread Stefano Garzarella
On Wed, Jul 17, 2019 at 5:00 PM Michael S. Tsirkin wrote: > > On Wed, Jul 17, 2019 at 01:30:30PM +0200, Stefano Garzarella wrote: > > Since now we are able to split packets, we can avoid limiting > > their sizes to VIRTIO_VSOCK_DEFAULT_RX_BUF_SIZE. > > Instead, we can use

Re: [PATCH v4 4/5] vhost/vsock: split packets to send using multiple buffers

2019-07-18 Thread Michael S. Tsirkin
On Thu, Jul 18, 2019 at 09:50:14AM +0200, Stefano Garzarella wrote: > On Wed, Jul 17, 2019 at 4:55 PM Michael S. Tsirkin wrote: > > > > On Wed, Jul 17, 2019 at 01:30:29PM +0200, Stefano Garzarella wrote: > > > If the packets to be sent to the guest are bigger than the buffer > > > available, we can

Re: [PATCH v4 4/5] vhost/vsock: split packets to send using multiple buffers

2019-07-18 Thread Stefano Garzarella
On Thu, Jul 18, 2019 at 10:13 AM Michael S. Tsirkin wrote: > On Thu, Jul 18, 2019 at 09:50:14AM +0200, Stefano Garzarella wrote: > > On Wed, Jul 17, 2019 at 4:55 PM Michael S. Tsirkin wrote: > > > On Wed, Jul 17, 2019 at 01:30:29PM +0200, Stefano Garzarella wrote: > > > > If the packets to be sent

Re: [RFC v2] vhost: introduce mdev based hardware vhost backend

2019-07-18 Thread Jason Wang
On 2019/7/10 3:22 PM, Jason Wang wrote: Yeah, that's a major concern. If it's true, is it something that's not acceptable? I think not, but I don't know if anyone else cares about this. And I do see some new RFC for VFIO to add more DMA APIs. Are there any pointers? I don't remember

Re: [PATCH v4 4/5] vhost/vsock: split packets to send using multiple buffers

2019-07-18 Thread Michael S. Tsirkin
On Thu, Jul 18, 2019 at 11:37:30AM +0200, Stefano Garzarella wrote: > On Thu, Jul 18, 2019 at 10:13 AM Michael S. Tsirkin wrote: > > On Thu, Jul 18, 2019 at 09:50:14AM +0200, Stefano Garzarella wrote: > > > On Wed, Jul 17, 2019 at 4:55 PM Michael S. Tsirkin > > > wrote: > > > > On Wed, Jul 17,

[PATCH v3 1/2] mm/balloon_compaction: avoid duplicate page removal

2019-07-18 Thread Michael S. Tsirkin
From: Wei Wang A #GP is reported in the guest when requesting balloon inflation via virtio-balloon. The reason is that the virtio-balloon driver has removed the page from its internal page list (via balloon_page_pop), but balloon_page_enqueue_one also calls "list_del" to do the removal. This is

[PATCH v3 2/2] balloon: fix up comments

2019-07-18 Thread Michael S. Tsirkin
Lots of comments bitrotted. Fix them up. Fixes: 418a3ab1e778 (mm/balloon_compaction: List interfaces) Signed-off-by: Michael S. Tsirkin --- mm/balloon_compaction.c | 73 +++-- 1 file changed, 41 insertions(+), 32 deletions(-) diff --git

Re: [PATCH v4 5/5] vsock/virtio: change the maximum packet size allowed

2019-07-18 Thread Michael S. Tsirkin
On Thu, Jul 18, 2019 at 09:52:41AM +0200, Stefano Garzarella wrote: > On Wed, Jul 17, 2019 at 5:00 PM Michael S. Tsirkin wrote: > > > > On Wed, Jul 17, 2019 at 01:30:30PM +0200, Stefano Garzarella wrote: > > > Since now we are able to split packets, we can avoid limiting > > > their sizes to

Re: [PATCH] virtio-net: parameterize min ring num_free for virtio receive

2019-07-18 Thread Michael S. Tsirkin
On Thu, Jul 18, 2019 at 12:55:50PM +, ? jiang wrote: > This change makes ring buffer reclaim threshold num_free configurable > for better performance, while it's hard-coded as 1/2 * queue now. > According to our test with qemu + dpdk, packet dropping happens when > the guest is not able to

RE: [PATCH v3 2/2] balloon: fix up comments

2019-07-18 Thread Wang, Wei W
On Thursday, July 18, 2019 8:24 PM, Michael S. Tsirkin wrote: > /* > * balloon_page_alloc - allocates a new page for insertion into the balloon > - * page list. > + * page list. > * > - * Driver must call it to properly allocate a new enlisted balloon

Re: [PATCH v3 2/2] balloon: fix up comments

2019-07-18 Thread Michael S. Tsirkin
On Thu, Jul 18, 2019 at 01:47:40PM +, Wang, Wei W wrote: > On Thursday, July 18, 2019 8:24 PM, Michael S. Tsirkin wrote: > > /* > > * balloon_page_alloc - allocates a new page for insertion into the balloon > > - * page list. > > + * page list. > > * > >

Re: [PATCH] virtio-net: parameterize min ring num_free for virtio receive

2019-07-18 Thread Jason Wang
On 2019/7/18 9:04 PM, Michael S. Tsirkin wrote: On Thu, Jul 18, 2019 at 12:55:50PM +, ? jiang wrote: This change makes ring buffer reclaim threshold num_free configurable for better performance, while it's hard-coded as 1/2 * queue now. According to our test with qemu + dpdk, packet

[PATCH v4 1/2] mm/balloon_compaction: avoid duplicate page removal

2019-07-18 Thread Michael S. Tsirkin
From: Wei Wang A #GP is reported in the guest when requesting balloon inflation via virtio-balloon. The reason is that the virtio-balloon driver has removed the page from its internal page list (via balloon_page_pop), but balloon_page_enqueue_one also calls "list_del" to do the removal. This is

[PATCH v4 2/2] balloon: fix up comments

2019-07-18 Thread Michael S. Tsirkin
Lots of comments bitrotted. Fix them up. Fixes: 418a3ab1e778 (mm/balloon_compaction: List interfaces) Reviewed-by: Wei Wang Signed-off-by: Michael S. Tsirkin --- fixes since v3: tweaks suggested by Wei mm/balloon_compaction.c | 71 ++--- 1 file

Re: [PATCH] virtio-net: parameterize min ring num_free for virtio receive

2019-07-18 Thread Michael S. Tsirkin
On Thu, Jul 18, 2019 at 10:01:05PM +0800, Jason Wang wrote: > > On 2019/7/18 9:04 PM, Michael S. Tsirkin wrote: > > On Thu, Jul 18, 2019 at 12:55:50PM +, ? jiang wrote: > > > This change makes ring buffer reclaim threshold num_free configurable > > > for better performance, while it's hard

Re: [PATCH] virtio-net: parameterize min ring num_free for virtio receive

2019-07-18 Thread Michael S. Tsirkin
On Thu, Jul 18, 2019 at 10:42:47AM -0400, Michael S. Tsirkin wrote: > On Thu, Jul 18, 2019 at 10:01:05PM +0800, Jason Wang wrote: > > > > On 2019/7/18 9:04 PM, Michael S. Tsirkin wrote: > > > On Thu, Jul 18, 2019 at 12:55:50PM +, ? jiang wrote: > > > > This change makes ring buffer reclaim

[PATCH v5 2/2] balloon: fix up comments

2019-07-18 Thread Michael S. Tsirkin
Lots of comments bitrotted. Fix them up. Fixes: 418a3ab1e778 (mm/balloon_compaction: List interfaces) Reviewed-by: Wei Wang Signed-off-by: Michael S. Tsirkin Reviewed-by: Ralph Campbell Acked-by: Nadav Amit --- mm/balloon_compaction.c | 67 +++-- 1 file

[PATCH v5 1/2] mm/balloon_compaction: avoid duplicate page removal

2019-07-18 Thread Michael S. Tsirkin
From: Wei Wang A #GP is reported in the guest when requesting balloon inflation via virtio-balloon. The reason is that the virtio-balloon driver has removed the page from its internal page list (via balloon_page_pop), but balloon_page_enqueue_one also calls "list_del" to do the removal. This is

[PATCH v3 0/9] x86: Concurrent TLB flushes

2019-07-18 Thread Nadav Amit via Virtualization
[ Cover-letter is identical to v2, including benchmark results, excluding the change log. ] Currently, local and remote TLB flushes are not performed concurrently, which introduces unnecessary overhead - each INVLPG can take 100s of cycles. This patch-set allows TLB flushes to be run

[PATCH v3 4/9] x86/mm/tlb: Flush remote and local TLBs concurrently

2019-07-18 Thread Nadav Amit via Virtualization
To improve TLB shootdown performance, flush the remote and local TLBs concurrently. Introduce flush_tlb_multi() that does so. Introduce paravirtual versions of flush_tlb_multi() for KVM, Xen and hyper-v (Xen and hyper-v are only compile-tested). While the updated smp infrastructure is capable of

Re: [PATCH] virtio-net: parameterize min ring num_free for virtio receive

2019-07-18 Thread Jason Wang
On 2019/7/18 10:43 PM, Michael S. Tsirkin wrote: On Thu, Jul 18, 2019 at 10:42:47AM -0400, Michael S. Tsirkin wrote: On Thu, Jul 18, 2019 at 10:01:05PM +0800, Jason Wang wrote: On 2019/7/18 9:04 PM, Michael S. Tsirkin wrote: On Thu, Jul 18, 2019 at 12:55:50PM +, ? jiang wrote: This change

[PATCH AUTOSEL 5.2 009/171] drm/bochs: Fix connector leak during driver unload

2019-07-18 Thread Sasha Levin
From: Sam Bobroff [ Upstream commit 3c6b8625dde82600fd03ad1fcba223f1303ee535 ] When unloading the bochs-drm driver, a warning message is printed by drm_mode_config_cleanup() because a reference is still held to one of the drm_connector structs. Correct this by calling

[PATCH AUTOSEL 5.2 005/171] drm/virtio: set seqno for dma-fence

2019-07-18 Thread Sasha Levin
From: Chia-I Wu [ Upstream commit efe2bf965522bf0796d413b47a2abbf81d471d6f ] This is motivated by having meaningful ftrace events, but it also fixes use cases where dma_fence_is_later is called, such as in sync_file_merge. In other drivers, fence creation and cmdbuf submission normally happen

[PATCH AUTOSEL 5.2 054/171] drm/virtio: Add memory barriers for capset cache.

2019-07-18 Thread Sasha Levin
From: David Riley [ Upstream commit 9ff3a5c88e1f1ab17a31402b96d45abe14aab9d7 ] After data is copied to the cache entry, atomic_set is used to indicate that the data in the entry is valid without appropriate memory barriers. Similarly, the read side was missing the corresponding memory barriers.

[PATCH AUTOSEL 5.1 007/141] drm/bochs: Fix connector leak during driver unload

2019-07-18 Thread Sasha Levin
From: Sam Bobroff [ Upstream commit 3c6b8625dde82600fd03ad1fcba223f1303ee535 ] When unloading the bochs-drm driver, a warning message is printed by drm_mode_config_cleanup() because a reference is still held to one of the drm_connector structs. Correct this by calling

[PATCH AUTOSEL 5.1 004/141] drm/virtio: set seqno for dma-fence

2019-07-18 Thread Sasha Levin
From: Chia-I Wu [ Upstream commit efe2bf965522bf0796d413b47a2abbf81d471d6f ] This is motivated by having meaningful ftrace events, but it also fixes use cases where dma_fence_is_later is called, such as in sync_file_merge. In other drivers, fence creation and cmdbuf submission normally happen

[PATCH AUTOSEL 5.1 039/141] drm/virtio: Add memory barriers for capset cache.

2019-07-18 Thread Sasha Levin
From: David Riley [ Upstream commit 9ff3a5c88e1f1ab17a31402b96d45abe14aab9d7 ] After data is copied to the cache entry, atomic_set is used to indicate that the data in the entry is valid without appropriate memory barriers. Similarly, the read side was missing the corresponding memory barriers.

[PATCH AUTOSEL 4.19 027/101] drm/virtio: Add memory barriers for capset cache.

2019-07-18 Thread Sasha Levin
From: David Riley [ Upstream commit 9ff3a5c88e1f1ab17a31402b96d45abe14aab9d7 ] After data is copied to the cache entry, atomic_set is used to indicate that the data in the entry is valid without appropriate memory barriers. Similarly, the read side was missing the corresponding memory barriers.

[PATCH AUTOSEL 4.14 15/60] drm/virtio: Add memory barriers for capset cache.

2019-07-18 Thread Sasha Levin
From: David Riley [ Upstream commit 9ff3a5c88e1f1ab17a31402b96d45abe14aab9d7 ] After data is copied to the cache entry, atomic_set is used to indicate that the data in the entry is valid without appropriate memory barriers. Similarly, the read side was missing the corresponding memory barriers.

[PATCH AUTOSEL 4.9 13/45] drm/virtio: Add memory barriers for capset cache.

2019-07-18 Thread Sasha Levin
From: David Riley [ Upstream commit 9ff3a5c88e1f1ab17a31402b96d45abe14aab9d7 ] After data is copied to the cache entry, atomic_set is used to indicate that the data in the entry is valid without appropriate memory barriers. Similarly, the read side was missing the corresponding memory barriers.

[PATCH AUTOSEL 4.4 10/35] drm/virtio: Add memory barriers for capset cache.

2019-07-18 Thread Sasha Levin
From: David Riley [ Upstream commit 9ff3a5c88e1f1ab17a31402b96d45abe14aab9d7 ] After data is copied to the cache entry, atomic_set is used to indicate that the data in the entry is valid without appropriate memory barriers. Similarly, the read side was missing the corresponding memory barriers.