On Thu, May 16, 2019 at 04:25:33PM +0100, Stefan Hajnoczi wrote:
> On Fri, May 10, 2019 at 02:58:36PM +0200, Stefano Garzarella wrote:
> > +struct virtio_vsock_buf {
>
> Please add a comment describing the purpose of this struct and to
> differentiate its use from struc
On Thu, May 16, 2019 at 04:32:18PM +0100, Stefan Hajnoczi wrote:
> On Fri, May 10, 2019 at 02:58:37PM +0200, Stefano Garzarella wrote:
> > When the socket is released, we should free all packets
> > queued in the per-socket list in order to avoid a memory
> > leak.
> >
When the socket is released, we should free all packets
queued in the per-socket list in order to avoid a memory
leak.
Signed-off-by: Stefano Garzarella
---
This patch was in the series "[PATCH v2 0/8] vsock/virtio: optimizations
to increase the throughput" [1]. As Stefan suggested, I
On Sun, Jun 02, 2019 at 06:03:34PM -0700, David Miller wrote:
> From: Stefano Garzarella
> Date: Fri, 31 May 2019 15:39:51 +0200
>
> > @@ -434,7 +434,9 @@ void virtio_transport_set_buffer_size(struct vsock_sock
> > *vsk, u64 val)
> > if (val > vvs->
On Thu, May 30, 2019 at 07:59:14PM +0800, Jason Wang wrote:
>
> On 2019/5/30 6:10 PM, Stefano Garzarella wrote:
> > On Thu, May 30, 2019 at 05:46:18PM +0800, Jason Wang wrote:
> > > On 2019/5/29 6:58 PM, Stefano Garzarella wrote:
> > > > On Wed, May 29, 2019 at 1
140.68 153.65 152.64 516.622
[1] https://github.com/stefano-garzarella/iperf/
Stefano Garzarella (5):
vsock/virtio: limit the memory used per-socket
vsock/virtio: fix locking for fwd_cnt and buf_alloc
vsock/virtio: reduce credit update messages
vhost/vsock: split packets to send usi
fwd_cnt is written under rx_lock, so we should read it using
the same spinlock, even when we are in the TX path.
Also move buf_alloc under rx_lock and add the missing locking
where we modify it.
Signed-off-by: Stefano Garzarella
---
include/linux/virtio_vsock.h| 2 +-
net/vmw_vsock
If the packets to be sent to the guest are bigger than the available
buffer, we can split them across multiple buffers, fixing up
the length in the packet header.
This is safe since virtio-vsock supports only stream sockets.
Signed-off-by: Stefano Garzarella
---
drivers/vhost/vsock.c
of other sockets.
This patch mitigates this issue by copying the payload of small
packets (< 128 bytes) into the buffer of the last packet queued, in
order to avoid wasting memory.
Signed-off-by: Stefano Garzarella
---
drivers/vhost/vsock.c | 2 +
include/linux/virtio_vsoc
In order to reduce the number of credit update messages,
we send them only when the space available seen by the
transmitter is less than VIRTIO_VSOCK_MAX_PKT_BUF_SIZE.
Signed-off-by: Stefano Garzarella
---
include/linux/virtio_vsock.h| 1 +
net/vmw_vsock/virtio_transport_common.c
Since we are now able to split packets, we can avoid limiting
their sizes to VIRTIO_VSOCK_DEFAULT_RX_BUF_SIZE.
Instead, we can use VIRTIO_VSOCK_MAX_PKT_BUF_SIZE as the max
packet size.
Signed-off-by: Stefano Garzarella
---
net/vmw_vsock/virtio_transport_common.c | 4 ++--
1 file changed, 2
On Wed, May 29, 2019 at 09:28:52PM -0700, David Miller wrote:
> From: Stefano Garzarella
> Date: Tue, 28 May 2019 12:56:20 +0200
>
> > @@ -68,7 +68,13 @@ struct virtio_vsock {
> >
> > static struct virtio_vsock *virtio_vsock_get(void)
> > {
> > -
On Thu, May 30, 2019 at 05:46:18PM +0800, Jason Wang wrote:
>
> On 2019/5/29 6:58 PM, Stefano Garzarella wrote:
> > On Wed, May 29, 2019 at 11:22:40AM +0800, Jason Wang wrote:
> > > On 2019/5/28 6:56 PM, Stefano Garzarella wrote:
> > > > We flush all pending works b
On Wed, May 29, 2019 at 11:22:40AM +0800, Jason Wang wrote:
>
> On 2019/5/28 6:56 PM, Stefano Garzarella wrote:
> > We flush all pending works before calling vdev->config->reset(vdev),
> > but other works can be queued before the vdev->config->del_vqs(vdev),
>
d of the .remove() to avoid use after free.
- Patch 4 also frees used buffers in the virtqueues during the .remove().
Stefano Garzarella (4):
vsock/virtio: fix locking around 'the_virtio_vsock'
vsock/virtio: stop workers during the .remove()
vsock/virtio: fix flush of works during the .remove()
vs
Before this patch, we only freed unused buffers, but there may
still be used buffers to be freed.
Signed-off-by: Stefano Garzarella
---
net/vmw_vsock/virtio_transport.c | 18 ++
1 file changed, 14 insertions(+), 4 deletions(-)
diff --git a/net/vmw_vsock/virtio_transport.c b/net
We flush all pending works before calling vdev->config->reset(vdev),
but other works can be queued before the vdev->config->del_vqs(vdev),
so we add another flush after it, to avoid a use-after-free.
Suggested-by: Michael S. Tsirkin
Signed-off-by: Stefano Garzarella
---
n
config->del_vqs(vdev).
Suggested-by: Stefan Hajnoczi
Suggested-by: Michael S. Tsirkin
Signed-off-by: Stefano Garzarella
---
net/vmw_vsock/virtio_transport.c | 49 +++-
1 file changed, 48 insertions(+), 1 deletion(-)
diff --git a/net/vmw_vsock/virtio_transport
the end of the function.
Signed-off-by: Stefano Garzarella
---
net/vmw_vsock/virtio_transport.c | 19 +--
1 file changed, 13 insertions(+), 6 deletions(-)
diff --git a/net/vmw_vsock/virtio_transport.c b/net/vmw_vsock/virtio_transport.c
index 96ab344f17bb..d3ba7747aa73 100644
On Fri, May 31, 2019 at 05:56:39PM +0800, Jason Wang wrote:
> On 2019/5/31 4:18 PM, Stefano Garzarella wrote:
> > On Thu, May 30, 2019 at 07:59:14PM +0800, Jason Wang wrote:
> > > On 2019/5/30 6:10 PM, Stefano Garzarella wrote:
> > > > On Thu, May 30, 2019 at 05:4
On Wed, May 15, 2019 at 10:48:44AM +0800, Jason Wang wrote:
>
> On 2019/5/15 12:35 AM, Stefano Garzarella wrote:
> > On Tue, May 14, 2019 at 11:25:34AM +0800, Jason Wang wrote:
> > > On 2019/5/14 1:23 AM, Stefano Garzarella wrote:
> > > > On Mon, May 13, 2019 at 0
On Fri, May 10, 2019 at 03:20:08PM -0700, David Miller wrote:
> From: Stefano Garzarella
> Date: Fri, 10 May 2019 14:58:37 +0200
>
> > @@ -827,12 +827,20 @@ static bool virtio_transport_close(struct vsock_sock
> > *vsk)
> >
> > void virtio_transpo
On Mon, May 13, 2019 at 05:58:53PM +0800, Jason Wang wrote:
>
> On 2019/5/10 8:58 PM, Stefano Garzarella wrote:
> > Since virtio-vsock was introduced, the buffers filled by the host
> > and pushed to the guest using the vring, are directly queued in
> > a per-sock
On Mon, May 13, 2019 at 06:01:52PM +0800, Jason Wang wrote:
>
> On 2019/5/10 8:58 PM, Stefano Garzarella wrote:
> > In order to increase host -> guest throughput with large packets,
> > we can use 64 KiB RX buffers.
> >
> > Signed-off-by: Stefano Garzar
On Tue, May 14, 2019 at 11:25:34AM +0800, Jason Wang wrote:
>
> On 2019/5/14 1:23 AM, Stefano Garzarella wrote:
> > On Mon, May 13, 2019 at 05:58:53PM +0800, Jason Wang wrote:
> > > On 2019/5/10 8:58 PM, Stefano Garzarella wrote:
> > > > Since virtio-vsock w
On Sun, May 12, 2019 at 12:57:48PM -0400, Michael S. Tsirkin wrote:
> On Fri, May 10, 2019 at 02:58:36PM +0200, Stefano Garzarella wrote:
> > Since virtio-vsock was introduced, the buffers filled by the host
> > and pushed to the guest using the vring, are directly queued in
> &
On Mon, May 13, 2019 at 05:33:40PM +0800, Jason Wang wrote:
>
> On 2019/5/10 8:58 PM, Stefano Garzarella wrote:
> > While I was testing this new series (v2) I discovered a huge use of memory
> > and a memory leak in the virtio-vsock driver in the guest when I sent
> > 1-b
On Wed, May 15, 2019 at 10:50:43AM +0800, Jason Wang wrote:
>
> On 2019/5/15 12:20 AM, Stefano Garzarella wrote:
> > On Tue, May 14, 2019 at 11:38:05AM +0800, Jason Wang wrote:
> > > On 2019/5/14 1:51 AM, Stefano Garzarella wrote:
> > > > On Mon, May 13, 2019 at 0
In order to increase host -> guest throughput with large packets,
we can use 64 KiB RX buffers.
Signed-off-by: Stefano Garzarella
---
include/linux/virtio_vsock.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/include/linux/virtio_vsock.h b/include/linux/virtio_vsoc
If the packets to be sent to the guest are bigger than the available
buffer, we can split them across multiple buffers, fixing up
the length in the packet header.
This is safe since virtio-vsock supports only stream sockets.
Signed-off-by: Stefano Garzarella
---
drivers/vhost/vsock.c
fwd_cnt is written under rx_lock, so we should read it using
the same spinlock, even when we are in the TX path.
Also move buf_alloc under rx_lock and add the missing locking
where we modify it.
Signed-off-by: Stefano Garzarella
---
include/linux/virtio_vsock.h| 2 +-
net/vmw_vsock
In order to reduce the number of credit update messages,
we send them only when the space available seen by the
transmitter is less than VIRTIO_VSOCK_MAX_PKT_BUF_SIZE.
Signed-off-by: Stefano Garzarella
---
include/linux/virtio_vsock.h| 1 +
net/vmw_vsock/virtio_transport_common.c
326.42 333.32
512K  157.29 153.35 152.22 546.52 533.24 315.55 302.27
[1] https://github.com/stefano-garzarella/iperf/
Stefano Garzarella (8):
vsock/virtio: limit the memory used per-socket
vsock/virtio: free packets during the socket release
vsock/virtio:
, paying the cost of an
extra memory copy. When the buffer is completely full, we do a
"zero-copy", moving the buffer directly into the per-socket list.
Signed-off-by: Stefano Garzarella
---
drivers/vhost/vsock.c | 2 +
include/linux/virtio_vsock.h| 8 +++
net
The RX buffer size determines the memory consumption of the
vsock/virtio guest driver, so we make it tunable through
a module parameter.
The allowed sizes are between 4 KB and 64 KB, in order to remain
compatible with old host drivers.
Suggested-by: Stefan Hajnoczi
Signed-off-by: Stefano Garzarella
Since we are now able to split packets, we can avoid limiting
their sizes to VIRTIO_VSOCK_DEFAULT_RX_BUF_SIZE.
Instead, we can use VIRTIO_VSOCK_MAX_PKT_BUF_SIZE as the max
packet size.
Signed-off-by: Stefano Garzarella
---
net/vmw_vsock/virtio_transport_common.c | 4 ++--
1 file changed, 2
When the socket is released, we should free all packets
queued in the per-socket list in order to avoid a memory
leak.
Signed-off-by: Stefano Garzarella
---
net/vmw_vsock/virtio_transport_common.c | 8
1 file changed, 8 insertions(+)
diff --git a/net/vmw_vsock
On Wed, May 15, 2019 at 04:24:00PM +0100, Stefan Hajnoczi wrote:
> On Tue, May 07, 2019 at 02:25:43PM +0200, Stefano Garzarella wrote:
> > Hi Jorge,
> >
> > On Mon, May 06, 2019 at 01:19:55PM -0700, Jorge Moreira Broche wrote:
> > > > On Wed, May 01, 2019 at
On Mon, May 13, 2019 at 08:46:19PM +0800, Jason Wang wrote:
>
> On 2019/5/13 6:05 PM, Jason Wang wrote:
> >
> > On 2019/5/10 8:58 PM, Stefano Garzarella wrote:
> > > The RX buffer size determines the memory consumption of the
> > > vsock/virtio guest d
On Tue, May 14, 2019 at 11:38:05AM +0800, Jason Wang wrote:
>
> On 2019/5/14 1:51 AM, Stefano Garzarella wrote:
> > On Mon, May 13, 2019 at 06:01:52PM +0800, Jason Wang wrote:
> > > On 2019/5/10 8:58 PM, Stefano Garzarella wrote:
> > > > In order to increase ho
On Wed, May 01, 2019 at 03:08:31PM -0400, Stefan Hajnoczi wrote:
> On Tue, Apr 30, 2019 at 05:30:01PM -0700, Jorge E. Moreira wrote:
> > Avoid a race in which static variables in net/vmw_vsock/af_vsock.c are
> > accessed (while handling interrupts) before they are initialized.
> >
> >
> > [
ck_core_exit() does not need to be changed to fix the
> bug I found, but not changing it means the exit function is not
> symmetric to the init function.
>
> @Stefano
> Taking the mutex from virtio_vsock_init() could work too (I haven't
> tried it yet), but it's unnecessary, all that needs
ops virtio_net psmouse drm net_failover pata_acpi
> virtio_blk failover floppy
>
> Fixes: 22b5c0b63f32 ("vsock/virtio: fix kernel panic after device hot-unplug")
> Reported-by: Alexandru Herghelegiu
> Signed-off-by: Adalbert Lazăr
> Co-developed-by:
Hi Adalbert,
thanks for catching this issue, I have a comment below.
On Tue, Mar 05, 2019 at 08:01:45PM +0200, Adalbert Lazăr wrote:
> Previous to commit 22b5c0b63f32 ("vsock/virtio: fix kernel panic after device
> hot-unplug"),
> vsock_core_init() was called from virtio_vsock_probe(). Now,
>
In order to reduce the number of credit update messages,
we send them only when the space available seen by the
transmitter is less than VIRTIO_VSOCK_MAX_PKT_BUF_SIZE.
Signed-off-by: Stefano Garzarella
---
include/linux/virtio_vsock.h| 1 +
net/vmw_vsock/virtio_transport_common.c
26.7
256K  7.7  8.4  24.9
512K  7.7  8.5  25.0
Thanks,
Stefano
[1] https://www.spinics.net/lists/netdev/msg531783.html
[2] https://github.com/stefano-garzarella/iperf/
Stefano Garzarella (4):
vsock/virtio: reduce credit update messages
vhost/vso
In order to increase host -> guest throughput with large packets,
we can use 64 KiB RX buffers.
Signed-off-by: Stefano Garzarella
---
include/linux/virtio_vsock.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/include/linux/virtio_vsock.h b/include/linux/virtio_vsoc
Since we are now able to split packets, we can avoid limiting
their sizes to VIRTIO_VSOCK_DEFAULT_RX_BUF_SIZE.
Instead, we can use VIRTIO_VSOCK_MAX_PKT_BUF_SIZE as the max
packet size.
Signed-off-by: Stefano Garzarella
---
net/vmw_vsock/virtio_transport_common.c | 4 ++--
1 file changed, 2
If the packets to be sent to the guest are bigger than the available
buffer, we can split them across multiple buffers, fixing up
the length in the packet header.
This is safe since virtio-vsock supports only stream sockets.
Signed-off-by: Stefano Garzarella
---
drivers/vhost/vsock.c | 35
On Thu, Apr 04, 2019 at 02:04:10PM -0400, Michael S. Tsirkin wrote:
> On Thu, Apr 04, 2019 at 06:47:15PM +0200, Stefano Garzarella wrote:
> > On Thu, Apr 04, 2019 at 11:52:46AM -0400, Michael S. Tsirkin wrote:
> > > I simply love it that you have analysed the individual impact o
On Fri, Apr 05, 2019 at 09:13:56AM +0100, Stefan Hajnoczi wrote:
> On Thu, Apr 04, 2019 at 12:58:36PM +0200, Stefano Garzarella wrote:
> > @@ -139,8 +139,18 @@ vhost_transport_do_send_pkt(struct vhost_vsock *vsock,
> > break;
> > }
>
On Thu, Apr 04, 2019 at 11:52:46AM -0400, Michael S. Tsirkin wrote:
> I simply love it that you have analysed the individual impact of
> each patch! Great job!
Thanks! I followed Stefan's suggestions!
>
> For comparison's sake, it could be IMHO beneficial to add a column
> with
On Thu, Apr 04, 2019 at 03:14:10PM +0100, Stefan Hajnoczi wrote:
> On Thu, Apr 04, 2019 at 12:58:34PM +0200, Stefano Garzarella wrote:
> > This series tries to increase the throughput of virtio-vsock with slight
> > changes:
> > - patch 1/4: reduces the number of credit
On Thu, Apr 04, 2019 at 08:15:39PM +0100, Stefan Hajnoczi wrote:
> On Thu, Apr 04, 2019 at 12:58:35PM +0200, Stefano Garzarella wrote:
> > @@ -256,6 +257,7 @@ virtio_transport_stream_do_dequeue(struct vsock_sock
> > *vsk,
> > struct virtio_vsock_sock *vvs = vsk-&
_core_exit() in the
virtio_vsock respectively in module_init and module_exit functions,
that cannot be invoked until there are open sockets.
Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=1609699
Reported-by: Yan Fu
Signed-off-by: Stefano Garzarella
Acked-by: Stefan Hajnoczi
---
net
to this approach for now.
The vsock_core proto_ops expect a valid pointer to the transport device, so we
can't call vsock_core_exit() until there are open sockets.
v2 -> v3:
- Rebased on master
v1 -> v2:
- Fixed commit message of patch 1.
- Added Reviewed-by, Acked-by tags by Stefan
S
When the virtio transport device disappears, we should reset all
connected sockets in order to inform the users.
Signed-off-by: Stefano Garzarella
Reviewed-by: Stefan Hajnoczi
---
net/vmw_vsock/virtio_transport.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/net/vmw_vsock
On Mon, Apr 08, 2019 at 10:57:44AM -0400, Michael S. Tsirkin wrote:
> On Mon, Apr 08, 2019 at 04:55:31PM +0200, Stefano Garzarella wrote:
> > > Anyway, any change to this behavior requires compatibility so new guest
> > > drivers work with old vhost_vsock.ko. Therefore we
On Fri, Apr 05, 2019 at 09:24:47AM +0100, Stefan Hajnoczi wrote:
> On Thu, Apr 04, 2019 at 12:58:37PM +0200, Stefano Garzarella wrote:
> > Since now we are able to split packets, we can avoid limiting
> > their sizes to VIRTIO_VSOCK_DEFAULT_RX_BUF_SIZE.
> >
On Mon, Apr 08, 2019 at 10:37:23AM +0100, Stefan Hajnoczi wrote:
> On Fri, Apr 05, 2019 at 12:07:47PM +0200, Stefano Garzarella wrote:
> > On Fri, Apr 05, 2019 at 09:24:47AM +0100, Stefan Hajnoczi wrote:
> > > On Thu, Apr 04, 2019 at 12:58:37PM +0200, Stefano Garzarella wrote:
On Mon, Apr 08, 2019 at 02:43:28PM +0800, Jason Wang wrote:
>
> On 2019/4/4 6:58 PM, Stefano Garzarella wrote:
> > This series tries to increase the throughput of virtio-vsock with slight
> > changes:
> > - patch 1/4: reduces the number of cre
On Wed, Jul 03, 2019 at 05:53:58PM +0800, Jason Wang wrote:
>
> On 2019/6/28 8:36 PM, Stefano Garzarella wrote:
> > Some callbacks used by the upper layers can run while we are in the
> > .remove(). A potential use-after-free can happen, because we free
> > the_virt
On Wed, Jul 03, 2019 at 10:14:53AM +0100, Stefan Hajnoczi wrote:
> On Mon, Jul 01, 2019 at 07:03:57PM +0200, Stefano Garzarella wrote:
> > On Mon, Jul 01, 2019 at 04:11:13PM +0100, Stefan Hajnoczi wrote:
> > > On Fri, Jun 28, 2019 at 02:36:56PM +0200, Stefano Garzarella wro
config->del_vqs(vdev).
Suggested-by: Stefan Hajnoczi
Suggested-by: Michael S. Tsirkin
Signed-off-by: Stefano Garzarella
---
net/vmw_vsock/virtio_transport.c | 51 +++-
1 file changed, 50 insertions(+), 1 deletion(-)
diff --git a/net/vmw_vsock/virtio_transport
config->reset(vdev), so we can safely move the workers' flush.
Before the vdev->config->del_vqs(vdev), workers can be scheduled
by VQ callbacks, so we must flush them after del_vqs(), to avoid
a use-after-free of the 'vsock' object.
Suggested-by: Michael S. Tsirkin
Signed-off-by: Stefano Ga
cat /dev/urandom | nc-vsock 3 6321 > /dev/null &
cat /dev/urandom | nc-vsock 3 7321 > /dev/null &
sleep 2
echo "device_del v1" | nc 127.0.0.1 1234
sleep 1
echo "device_add vhost-vsock-pci,id=v1,guest-cid=3" | nc 127.0.0.1 1234
dev->priv, because after the vdev->config->del_vqs() we are sure
that they have ended and will no longer be invoked.
We also take the mutex during the .remove() to prevent .probe() from
running while we are resetting the device.
Signed-off-by: Stefano Garzarella
---
net/vmw_vsock/virtio_tran
Patch 1: use RCU to protect 'the_virtio_vsock' pointer
- Patch 2: no changes
- Patch 3: flush works only at the end of .remove()
- Removed patch 4 because virtqueue_detach_unused_buf() returns all the buffers
allocated.
v1: https://patchwork.kernel.org/cover/10964733/
Stefano Garzarella (3):
vsock
Hi,
as Jason suggested some months ago, I took a closer look at the virtio-net driver to
understand if we can reuse some parts also in the virtio-vsock driver, since we
have similar challenges (mergeable buffers, page allocation, small
packets, etc.).
Initially, I would add the skbuff in the
On Thu, Jul 11, 2019 at 03:37:00PM +0800, Jason Wang wrote:
>
> On 2019/7/10 11:37 PM, Stefano Garzarella wrote:
> > Hi,
> > as Jason suggested some months ago, I looked better at the virtio-net
> > driver to
> > understand if we can reuse some parts also in the virt
On Thu, Jul 11, 2019 at 03:52:21PM -0400, Michael S. Tsirkin wrote:
> On Thu, Jul 11, 2019 at 01:41:34PM +0200, Stefano Garzarella wrote:
> > On Thu, Jul 11, 2019 at 03:37:00PM +0800, Jason Wang wrote:
> > >
> > > On 2019/7/10 11:37 PM, Stefano Garzarella wrote:
&
On Thu, Jul 04, 2019 at 11:58:00AM +0800, Jason Wang wrote:
>
> On 2019/7/3 6:41 PM, Stefano Garzarella wrote:
> > On Wed, Jul 03, 2019 at 05:53:58PM +0800, Jason Wang wrote:
> > > On 2019/6/28 8:36 PM, Stefano Garzarella wrote:
> > > > Some callbacks used by
On Mon, Jul 01, 2019 at 04:11:13PM +0100, Stefan Hajnoczi wrote:
> On Fri, Jun 28, 2019 at 02:36:56PM +0200, Stefano Garzarella wrote:
> > During the review of "[PATCH] vsock/virtio: Initialize core virtio vsock
> > before registering the driver", Stefan pointed
On Mon, Jul 15, 2019 at 05:16:09PM +0800, Jason Wang wrote:
>
> > > > > > > > struct sk_buff *virtskb_receive_small(struct virtskb *vs, ...);
> > > > > > > > struct sk_buff *virtskb_receive_big(struct virtskb *vs, ...);
> > > > > > > > struct
On Fri, Jul 12, 2019 at 06:14:39PM +0800, Jason Wang wrote:
>
> On 2019/7/12 6:00 PM, Stefano Garzarella wrote:
> > On Thu, Jul 11, 2019 at 03:52:21PM -0400, Michael S. Tsirkin wrote:
> > > On Thu, Jul 11, 2019 at 01:41:34PM +0200, Stefano Garzarella wrote:
> > > &
On Tue, Jul 16, 2019 at 06:01:33AM -0400, Michael S. Tsirkin wrote:
> On Tue, Jul 16, 2019 at 11:40:24AM +0200, Stefano Garzarella wrote:
> > On Mon, Jul 15, 2019 at 01:50:28PM -0400, Michael S. Tsirkin wrote:
> > > On Mon, Jul 15, 2019 at 09:44:16AM +0200, Stefano Garzarella wro
On Mon, Jul 15, 2019 at 01:50:28PM -0400, Michael S. Tsirkin wrote:
> On Mon, Jul 15, 2019 at 09:44:16AM +0200, Stefano Garzarella wrote:
> > On Fri, Jul 12, 2019 at 06:14:39PM +0800, Jason Wang wrote:
[...]
> > >
> > >
> > > I think it's just a branch, f
On Thu, Jun 13, 2019 at 04:57:15PM +0800, Jason Wang wrote:
>
> On 2019/6/6 4:11 PM, Stefano Garzarella wrote:
> > On Fri, May 31, 2019 at 05:56:39PM +0800, Jason Wang wrote:
> > > On 2019/5/31 4:18 PM, Stefano Garzarella wrote:
> > > > On Thu, May 30, 2019 at 0
On Mon, Jun 10, 2019 at 02:09:45PM +0100, Stefan Hajnoczi wrote:
> On Tue, May 28, 2019 at 12:56:19PM +0200, Stefano Garzarella wrote:
> > During the review of "[PATCH] vsock/virtio: Initialize core virtio vsock
> > before registering the driver", Stefan pointed
On Tue, Aug 20, 2019 at 09:32:03AM +0100, Stefan Hajnoczi wrote:
> On Thu, Aug 01, 2019 at 05:25:40PM +0200, Stefano Garzarella wrote:
> > When VMCI transport is used, if the guest closes a connection,
> > all data is gone and EOF is returned, so we should skip the read
>
On Tue, Aug 20, 2019 at 09:28:28AM +0100, Stefan Hajnoczi wrote:
> On Thu, Aug 01, 2019 at 05:25:41PM +0200, Stefano Garzarella wrote:
> > +/* Wait for the remote to close the connection */
> > +void vsock_wait_remote_close(int fd)
> > +{
> > + struct epoll_event ev
On Mon, Jul 29, 2019 at 10:04:29AM -0400, Michael S. Tsirkin wrote:
> On Wed, Jul 17, 2019 at 01:30:26PM +0200, Stefano Garzarella wrote:
> > Since virtio-vsock was introduced, the buffers filled by the host
> > and pushed to the guest using the vring, are directly queued in
> &
On Tue, Sep 03, 2019 at 12:39:19AM -0400, Michael S. Tsirkin wrote:
> On Mon, Sep 02, 2019 at 11:57:23AM +0200, Stefano Garzarella wrote:
> > >
> > > Assuming we miss nothing and buffers < 4K are broken,
> > > I think we need to add this to the spec, possibly
On Tue, Sep 03, 2019 at 12:38:02AM -0400, Michael S. Tsirkin wrote:
> On Wed, Jul 17, 2019 at 01:30:27PM +0200, Stefano Garzarella wrote:
> > In order to reduce the number of credit update messages,
> > we send them only when the space available seen by the
> > tra
On Tue, Sep 03, 2019 at 03:38:16AM -0400, Michael S. Tsirkin wrote:
> The comment we have is just repeating what the code does.
> Include the *reason* for the condition instead.
>
> Cc: Stefano Garzarella
> Signed-off-by: Michael S. Tsirkin
> ---
> net/vmw_vsock/virtio_t
On Tue, Sep 03, 2019 at 03:52:24AM -0400, Michael S. Tsirkin wrote:
> On Tue, Sep 03, 2019 at 09:45:54AM +0200, Stefano Garzarella wrote:
> > On Tue, Sep 03, 2019 at 12:39:19AM -0400, Michael S. Tsirkin wrote:
> > > On Mon, Sep 02, 2019 at 11:57:23AM +0200, Stefan
On Mon, Sep 02, 2019 at 09:39:12AM +0100, Stefan Hajnoczi wrote:
> On Sun, Sep 01, 2019 at 02:56:44AM -0400, Michael S. Tsirkin wrote:
> >
> > OK let me try to clarify. The idea is this:
> >
> > Let's say we queue a buffer of 4K, and we copy if len < 128 bytes. This
> > means that in the worst
On Sun, Sep 01, 2019 at 06:17:58AM -0400, Michael S. Tsirkin wrote:
> On Sun, Sep 01, 2019 at 04:26:19AM -0400, Michael S. Tsirkin wrote:
> > On Thu, Aug 01, 2019 at 03:36:16PM +0200, Stefano Garzarella wrote:
> > > On Thu, Aug 01, 2019 at 09:21:15AM -0400, Michael S. Tsirkin wro
On Mon, Jul 29, 2019 at 03:10:15PM -0400, Michael S. Tsirkin wrote:
> On Mon, Jul 29, 2019 at 06:50:56PM +0200, Stefano Garzarella wrote:
> > On Mon, Jul 29, 2019 at 06:19:03PM +0200, Stefano Garzarella wrote:
> > > On Mon, Jul 29, 2019 at 11:49:02AM -0400, Michael S. Tsirkin wro
On Mon, Jul 29, 2019 at 09:59:23AM -0400, Michael S. Tsirkin wrote:
> On Wed, Jul 17, 2019 at 01:30:25PM +0200, Stefano Garzarella wrote:
> > This series tries to increase the throughput of virtio-vsock with slight
> > changes.
> > While I was testing the v2 of this series I d
On Tue, Jul 30, 2019 at 11:55:09AM -0400, Michael S. Tsirkin wrote:
> On Tue, Jul 30, 2019 at 11:54:53AM -0400, Michael S. Tsirkin wrote:
> > On Tue, Jul 30, 2019 at 05:43:29PM +0200, Stefano Garzarella wrote:
> > > This series tries to increase the throughput of virtio
fwd_cnt and last_fwd_cnt are protected by rx_lock, so we should use
the same spinlock, even when we are in the TX path.
Also move buf_alloc under the same lock.
Signed-off-by: Stefano Garzarella
Reviewed-by: Stefan Hajnoczi
Acked-by: Michael S. Tsirkin
---
include/linux/virtio_vsock.h
If the packets to be sent to the guest are bigger than the available
buffer, we can split them across multiple buffers, fixing up
the length in the packet header.
This is safe since virtio-vsock supports only stream sockets.
Signed-off-by: Stefano Garzarella
Reviewed-by: Stefan Hajnoczi
Acked
174.70
32K   147.06 144.74 146.02 282.48
64K   145.25 143.99 141.62 406.40
128K  149.34 146.96 147.49 489.34
256K  156.35 149.81 152.21 536.37
512K  151.65 150.74 151.52 519.93
[1] https://github.com/stefano-garzarella/iperf/
Stefano Garzarella (5):
of other sockets.
This patch mitigates this issue by copying the payload of small
packets (< 128 bytes) into the buffer of the last packet queued, in
order to avoid wasting memory.
Signed-off-by: Stefano Garzarella
Reviewed-by: Stefan Hajnoczi
Acked-by: Michael S. Tsirkin
---
drivers/vhost/vsoc
In order to reduce the number of credit update messages,
we send them only when the space available seen by the
transmitter is less than VIRTIO_VSOCK_MAX_PKT_BUF_SIZE.
Signed-off-by: Stefano Garzarella
Reviewed-by: Stefan Hajnoczi
Acked-by: Michael S. Tsirkin
---
include/linux/virtio_vsock.h