> >
> > Hi Dan,
> >
> > Thank you for the review. Please see my reply inline.
> >
> > >
> > > Hi Pankaj,
> > >
> > > Some minor file placement comments below.
> >
> > Sure.
> >
> > >
> > > On Thu, Apr 25, 2019 at 10:02 PM Pankaj Gupta wrote:
> > > >
> > > > This patch adds the virtio-pmem driver for KVM guests.
> > > >
> > > > This patch adds the 'DAXDEV_SYNC' flag, which is set
> > > > for an nd_region doing synchronous flush. The flag is
> > > > later used to disable MAP_SYNC functionality in the
> > > > ext4 & xfs filesystems for devices that don't support
> > > > synchronous flush.
> > > >
> > > > Signed-off-by: Pankaj Gupta
On Fri, May 10, 2019 at 5:45 PM Pankaj Gupta wrote:
>
>
>
> > >
> > > This patch adds the 'DAXDEV_SYNC' flag, which is set
> > > for an nd_region doing synchronous flush. The flag is
> > > later used to disable MAP_SYNC functionality in the
> > > ext4 & xfs filesystems for devices that don't support
> > > synchronous flush.
> >
> > This patch adds the 'DAXDEV_SYNC' flag, which is set
> > for an nd_region doing synchronous flush. The flag is
> > later used to disable MAP_SYNC functionality in the
> > ext4 & xfs filesystems for devices that don't support
> > synchronous flush.
> >
> > Signed-off-by: Pankaj Gupta
> > ---
> > drivers/dax
> >
> > Hi Michael & Dan,
> >
> > Please review/ack the patch series from the LIBNVDIMM & VIRTIO side.
> > We have reviews on the ext4 & xfs patches (4, 5 & 6) and on patch 2.
> > Still need your ack on the nvdimm patches (1 & 3) & the virtio patch (2).
>
> I was planning to merge these via the nvdimm tree, not ack them.
On Wed, May 8, 2019 at 4:19 AM Pankaj Gupta wrote:
>
>
> Hi Dan,
>
> Thank you for the review. Please see my reply inline.
>
> >
> > Hi Pankaj,
> >
> > Some minor file placement comments below.
>
> Sure.
>
> >
> > On Thu, Apr 25, 2019 at 10:02 PM Pankaj Gupta wrote:
> > >
> > > This patch adds the virtio-pmem driver for KVM guests.
From: Stefano Garzarella
Date: Fri, 10 May 2019 14:58:37 +0200
> @@ -827,12 +827,20 @@ static bool virtio_transport_close(struct vsock_sock *vsk)
>
> void virtio_transport_release(struct vsock_sock *vsk)
> {
> + struct virtio_vsock_sock *vvs = vsk->trans;
> + struct virtio_vsock_bu
On Fri, May 10, 2019 at 8:53 AM Pankaj Gupta wrote:
>
> This patch adds the 'DAXDEV_SYNC' flag, which is set
> for an nd_region doing synchronous flush. The flag is
> later used to disable MAP_SYNC functionality in the
> ext4 & xfs filesystems for devices that don't support
> synchronous flush.
>
> Signed-off-by: Pankaj Gupta
On Fri, May 10, 2019 at 8:52 AM Pankaj Gupta wrote:
>
> Hi Michael & Dan,
>
> Please review/ack the patch series from the LIBNVDIMM & VIRTIO side.
> We have reviews on the ext4 & xfs patches (4, 5 & 6) and on patch 2.
> Still need your ack on the nvdimm patches (1 & 3) & the virtio patch (2).
I was planning to merge these via the nvdimm tree, not ack them.
On Fri, May 10, 2019 at 09:21:56PM +0530, Pankaj Gupta wrote:
> Hi Michael & Dan,
>
> Please review/ack the patch series from the LIBNVDIMM & VIRTIO side.
Thanks!
Hope to do this early next week.
> We have reviews on the ext4 & xfs patches (4, 5 & 6) and on patch 2.
> Still need your ack on the nvdimm patches (1 & 3) & the virtio patch (2).
This patch adds the 'DAXDEV_SYNC' flag, which is set
for an nd_region doing synchronous flush. The flag is
later used to disable MAP_SYNC functionality in the
ext4 & xfs filesystems for devices that don't support
synchronous flush.
Signed-off-by: Pankaj Gupta
---
drivers/dax/bus.c | 2 +-
drivers/dax/sup
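The diffstat above is truncated, but the mechanism is simple enough to
sketch. Below is a hypothetical provider-side fragment; is_nvdimm_sync(),
DAXDEV_F_SYNC and a flags argument to alloc_dax() are assumptions based on
this series' description, not quotes from the truncated diff.

/*
 * Hedged sketch, not the actual patch: a region driver passes a sync
 * flag to alloc_dax() only when the backing nd_region flushes
 * synchronously, so dax_synchronous() later reflects the capability.
 */
#include <linux/dax.h>
#include <linux/libnvdimm.h>

static struct dax_device *example_create_dax(void *drv_data,
		const char *name, const struct dax_operations *ops,
		struct nd_region *nd_region)
{
	unsigned long flags = 0;

	if (is_nvdimm_sync(nd_region))		/* assumed helper */
		flags = DAXDEV_F_SYNC;		/* assumed flag name */

	/* virtio-pmem would pass flags == 0, so dax_synchronous()
	 * returns false for it and MAP_SYNC gets refused. */
	return alloc_dax(drv_data, name, ops, flags);
}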
This patch introduces the 'daxdev_mapping_supported' helper,
which checks whether 'MAP_SYNC' is supported for the filesystem
mapping. It also checks whether the corresponding dax_device is
synchronous. The virtio pmem device is asynchronous and does
not support VM_SYNC.
Suggested-by: Jan Kara
Signed-off-by: Pankaj Gupta
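The preview cuts off before the helper body; a minimal sketch of what such
a helper might look like, given the description above (the exact body is an
assumption, not the patch itself):

#include <linux/dax.h>
#include <linux/fs.h>
#include <linux/mm.h>

/* Sketch: a MAP_SYNC (VM_SYNC) mapping is only supportable when the
 * file is DAX and the backing dax_device flushes synchronously. */
static inline bool daxdev_mapping_supported(struct vm_area_struct *vma,
					    struct dax_device *dax_dev)
{
	if (!(vma->vm_flags & VM_SYNC))
		return true;	/* nothing special requested */
	if (!IS_DAX(file_inode(vma->vm_file)))
		return false;	/* MAP_SYNC requires DAX */
	return dax_synchronous(dax_dev);
}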
Don't support 'MAP_SYNC' with non-DAX files or with DAX files
backed by an asynchronous dax_device. Virtio pmem provides an
asynchronous host page cache flush mechanism, so we don't
support 'MAP_SYNC' with virtio pmem and xfs.
Signed-off-by: Pankaj Gupta
Reviewed-by: Darrick J. Wong
---
fs/xfs/xfs_file.c | 9
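Building on the helper from patch 2, the xfs side is presumably a single
early check in the ->mmap handler; a sketch follows (the surrounding
handler body is reconstructed from context, not quoted from the truncated
diff):

/* Sketch of the ->mmap hook: refuse MAP_SYNC early when the
 * backing dax_device cannot honor it. */
static int xfs_file_mmap(struct file *filp, struct vm_area_struct *vma)
{
	struct dax_device *dax_dev =
		xfs_find_daxdev_for_inode(file_inode(filp));

	/* MAP_SYNC on a non-DAX file or an async dax_device fails. */
	if (!daxdev_mapping_supported(vma, dax_dev))
		return -EOPNOTSUPP;

	file_accessed(filp);
	vma->vm_ops = &xfs_file_vm_ops;
	if (IS_DAX(file_inode(filp)))
		vma->vm_flags |= VM_HUGEPAGE;
	return 0;
}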
Don't support 'MAP_SYNC' with non-DAX files or with DAX files
backed by an asynchronous dax_device. Virtio pmem provides an
asynchronous host page cache flush mechanism, so we don't
support 'MAP_SYNC' with virtio pmem and ext4.
Signed-off-by: Pankaj Gupta
Reviewed-by: Jan Kara
---
fs/ext4/file.c | 10 ++
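The ext4 hook mirrors the xfs one sketched above. From userspace the effect
shows up as mmap() failing with EOPNOTSUPP; a small hypothetical probe (the
mount path is only an example, and older glibc may need <linux/mman.h> for
MAP_SYNC):

/* Userspace sketch: probe whether a file supports MAP_SYNC. On a
 * virtio-pmem backed mount this mmap() should fail with EOPNOTSUPP
 * once these patches are applied. */
#define _GNU_SOURCE
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
	int fd = open("/mnt/pmem/testfile", O_RDWR);	/* example path */
	if (fd < 0) {
		perror("open");
		return 1;
	}
	void *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
		       MAP_SHARED_VALIDATE | MAP_SYNC, fd, 0);
	if (p == MAP_FAILED)
		printf("MAP_SYNC rejected: %s\n", strerror(errno));
	else
		puts("MAP_SYNC accepted: flushes are synchronous");
	close(fd);
	return 0;
}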
This patch adds the virtio-pmem driver for KVM guests.
The guest reads the persistent memory range information from
Qemu over VIRTIO and registers it on the nvdimm_bus. It also
creates an nd_region object with the persistent memory
range information so that the existing 'nvdimm/pmem' driver
can reserve this into the system memory map.
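The description suggests a probe path of roughly this shape: read the
range from the device's config space, then hand it to libnvdimm. A
condensed, hypothetical sketch (struct virtio_pmem and its fields are
assumed, and error handling is trimmed):

#include <linux/libnvdimm.h>
#include <linux/virtio.h>
#include <linux/virtio_config.h>

/* Condensed sketch of the probe flow described above. */
static int virtio_pmem_probe(struct virtio_device *vdev)
{
	struct nd_region_desc ndr_desc = {};
	struct resource res;
	struct virtio_pmem *vpmem;	/* assumed driver state struct */

	vpmem = devm_kzalloc(&vdev->dev, sizeof(*vpmem), GFP_KERNEL);
	if (!vpmem)
		return -ENOMEM;
	vdev->priv = vpmem;

	/* Persistent memory range advertised by Qemu in config space. */
	virtio_cread(vdev, struct virtio_pmem_config, start, &vpmem->start);
	virtio_cread(vdev, struct virtio_pmem_config, size, &vpmem->size);

	vpmem->nvdimm_bus = nvdimm_bus_register(&vdev->dev, &vpmem->nd_desc);
	if (!vpmem->nvdimm_bus)
		return -ENXIO;

	res.start = vpmem->start;
	res.end = vpmem->start + vpmem->size - 1;
	ndr_desc.res = &res;
	ndr_desc.provider_data = vdev;

	/* The existing nvdimm/pmem driver binds to this region. */
	if (!nvdimm_pmem_region_create(vpmem->nvdimm_bus, &ndr_desc)) {
		nvdimm_bus_unregister(vpmem->nvdimm_bus);
		return -ENXIO;
	}
	return 0;
}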
This patch adds functionality to perform a flush from the guest
to the host over VIRTIO. We register a callback based on the
'nd_region' type: the virtio_pmem driver requires this special
flush function, while for the rest of the region types we register
the existing flush function. Report an error returned by host
fsync failure to userspace.
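A sketch of how such a flush might be issued over a virtqueue, following
the description above; the request/response layout, the completion
handling and the omitted virtqueue locking are simplifying assumptions:

#include <linux/completion.h>
#include <linux/scatterlist.h>
#include <linux/slab.h>
#include <linux/virtio.h>

/* Sketch: send a flush request to the host and report its fsync
 * status back to the caller. */
static int virtio_pmem_flush(struct nd_region *nd_region)
{
	struct virtio_device *vdev = nd_region->provider_data;
	struct virtio_pmem *vpmem = vdev->priv;	/* assumed state struct */
	struct virtio_pmem_request *req;	/* assumed request layout */
	struct scatterlist *sgs[2], sg_req, sg_resp;
	int err;

	req = kmalloc(sizeof(*req), GFP_KERNEL);
	if (!req)
		return -ENOMEM;
	req->req.type = cpu_to_le32(VIRTIO_PMEM_REQ_TYPE_FLUSH);
	init_completion(&req->host_acked);

	sg_init_one(&sg_req, &req->req, sizeof(req->req));
	sg_init_one(&sg_resp, &req->resp, sizeof(req->resp));
	sgs[0] = &sg_req;	/* out: request to the host */
	sgs[1] = &sg_resp;	/* in: host's fsync status */

	err = virtqueue_add_sgs(vpmem->req_vq, sgs, 1, 1, req, GFP_ATOMIC);
	if (!err) {
		virtqueue_kick(vpmem->req_vq);
		/* The vq interrupt handler completes host_acked. */
		wait_for_completion(&req->host_acked);
		/* Propagate a host-side fsync failure as -EIO. */
		err = le32_to_cpu(req->resp.ret) ? -EIO : 0;
	}
	kfree(req);
	return err;
}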
Hi Michael & Dan,
Please review/ack the patch series from the LIBNVDIMM & VIRTIO side.
We have reviews on the ext4 & xfs patches (4, 5 & 6) and on patch 2.
Still need your ack on the nvdimm patches (1 & 3) & the virtio patch (2).
Changes from v7 are only in patches (2 & 3) and do not
affect the existing reviews. Request to
On Fri, 10 May 2019 00:11:12 +0200
Halil Pasic wrote:
> On Thu, 9 May 2019 12:11:06 +0200
> Cornelia Huck wrote:
>
> > On Wed, 8 May 2019 23:22:10 +0200
> > Halil Pasic wrote:
> >
> > > On Wed, 8 May 2019 15:18:10 +0200 (CEST)
> > > Sebastian Ott wrote:
> >
> > > > > @@ -1063,6 +1163,
On Tue, 7 May 2019 15:58:12 +0200
Christian Borntraeger wrote:
> On 05.05.19 13:15, Cornelia Huck wrote:
> > On Sat, 4 May 2019 16:03:40 +0200
> > Halil Pasic wrote:
> >
> >> On Fri, 3 May 2019 16:04:48 -0400
> >> "Michael S. Tsirkin" wrote:
> >>
> >>> On Fri, May 03, 2019 at 11:17:24AM +0
Since virtio-vsock was introduced, the buffers filled by the host
and pushed to the guest using the vring have been queued directly in
a per-socket list, avoiding a copy.
These buffers are preallocated by the guest with a fixed
size (4 KB).
The maximum amount of memory used by each socket should be
controlled by the credit mechanism.
While I was testing this new series (v2) I discovered huge memory
usage and a memory leak in the virtio-vsock driver in the guest when I
sent 1-byte packets to the guest.
These issues have been present since the introduction of the virtio-vsock
driver. I added patches 1 and 2 to fix them in this series.
In order to reduce the number of credit update messages,
we send them only when the space available, as seen by the
transmitter, is less than VIRTIO_VSOCK_MAX_PKT_BUF_SIZE.
Signed-off-by: Stefano Garzarella
---
include/linux/virtio_vsock.h | 1 +
net/vmw_vsock/virtio_transport_common.c |
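Concretely, the condition presumably sits in the dequeue path, comparing
the space the transmitter still believes it has against the threshold; a
sketch follows (field names follow struct virtio_vsock_sock; last_fwd_cnt
is the field this patch appears to add, per the one-line virtio_vsock.h
change):

/* Kernel-side sketch: decide whether a credit update is worth
 * sending after the reader has consumed data. */
static void example_maybe_send_credit(struct vsock_sock *vsk)
{
	struct virtio_vsock_sock *vvs = vsk->trans;
	u32 free_space;

	spin_lock_bh(&vvs->rx_lock);
	/* Space the transmitter still believes is available. */
	free_space = vvs->buf_alloc - (vvs->fwd_cnt - vvs->last_fwd_cnt);
	spin_unlock_bh(&vvs->rx_lock);

	/* Stay quiet while the peer can still fit a maximum-sized
	 * packet; otherwise bring its view up to date. */
	if (free_space < VIRTIO_VSOCK_MAX_PKT_BUF_SIZE)
		virtio_transport_send_credit_update(vsk,
				VIRTIO_VSOCK_TYPE_STREAM, NULL);
}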
If the packets to be sent to the guest are bigger than the available
buffers, we can split them across multiple buffers, fixing up
the length in the packet header.
This is safe since virtio-vsock supports only stream sockets.
Signed-off-by: Stefano Garzarella
---
drivers/vhost/vsock.c
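On the host side this amounts to clamping each copy to the guest buffer
and rewriting the header's length field per fragment; a hypothetical
sketch (fill_one_guest_buffer() stands in for the real vring copy logic):

/* Sketch of the host-side split: copy one guest-buffer-sized
 * fragment per iteration and patch the header length so the guest
 * credits/consumes exactly that fragment. Safe for stream sockets,
 * which have no message boundaries. */
static void example_send_pkt_split(struct virtio_vsock_pkt *pkt,
				   size_t buf_len)
{
	size_t sent = 0;

	while (sent < pkt->len) {
		size_t payload_len =
			min_t(size_t, pkt->len - sent, buf_len);

		/* The header travels with every fragment. */
		pkt->hdr.len = cpu_to_le32(payload_len);
		fill_one_guest_buffer(&pkt->hdr, pkt->buf + sent,
				      payload_len);	/* assumed helper */
		sent += payload_len;
	}
}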
fwd_cnt is written under rx_lock, so we should read it while
holding the same spinlock on the TX path as well.
Also move buf_alloc under rx_lock and add missing locking
where we modify it.
Signed-off-by: Stefano Garzarella
---
include/linux/virtio_vsock.h | 2 +-
net/vmw_vsock/virtio_
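In practice that means the TX path snapshots the counters under rx_lock
too; a sketch under that assumption:

/* Sketch: snapshot the RX-side counters under the same lock the
 * RX path uses to update them, even though we are transmitting. */
static void example_snapshot_credit(struct virtio_vsock_sock *vvs,
				    u32 *fwd_cnt, u32 *buf_alloc)
{
	spin_lock_bh(&vvs->rx_lock);	/* rx_lock also guards fwd_cnt */
	*fwd_cnt = vvs->fwd_cnt;
	*buf_alloc = vvs->buf_alloc;	/* now also under rx_lock */
	spin_unlock_bh(&vvs->rx_lock);
}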
In order to increase host -> guest throughput with large packets,
we can use 64 KiB RX buffers.
Signed-off-by: Stefano Garzarella
---
include/linux/virtio_vsock.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/include/linux/virtio_vsock.h b/include/linux/virtio_vsock.h
index
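The diff is cut off above; presumably the whole change is a one-line bump
of the default (the old 4 KB value matches the cover letter's description
of the preallocated buffers):

-#define VIRTIO_VSOCK_DEFAULT_RX_BUF_SIZE	(1024 * 4)
+#define VIRTIO_VSOCK_DEFAULT_RX_BUF_SIZE	(1024 * 64)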
The RX buffer size determines the memory consumption of the
vsock/virtio guest driver, so we make it tunable through
a module parameter.
The sizes allowed are between 4 KB and 64 KB, in order to be
compatible with old host drivers.
Suggested-by: Stefan Hajnoczi
Signed-off-by: Stefano Garzarella
---
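A sketch of what the tunable might look like (the parameter name and the
clamping helper are assumptions):

/* Sketch: make the RX buffer size a module parameter, clamped to
 * the 4 KiB..64 KiB range for compatibility with old hosts. */
static u32 rx_buf_size = VIRTIO_VSOCK_DEFAULT_RX_BUF_SIZE;
module_param(rx_buf_size, uint, 0444);
MODULE_PARM_DESC(rx_buf_size, "RX buffer size (4 KiB - 64 KiB)");

static u32 example_rx_buf_size(void)
{
	/* Out-of-range values silently fall back to the bounds. */
	return clamp(rx_buf_size, 4096U, 65536U);
}

Loading the module with e.g. rx_buf_size=16384 would then trade guest
memory for throughput.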
Since we are now able to split packets, we can avoid limiting
their sizes to VIRTIO_VSOCK_DEFAULT_RX_BUF_SIZE.
Instead, we can use VIRTIO_VSOCK_MAX_PKT_BUF_SIZE as the max
packet size.
Signed-off-by: Stefano Garzarella
---
net/vmw_vsock/virtio_transport_common.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
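Given the diffstat (2 insertions, 2 deletions), the change is presumably
just swapping the limit constant where the outgoing packet size is
clamped:

-	if (pkt_len > VIRTIO_VSOCK_DEFAULT_RX_BUF_SIZE)
-		pkt_len = VIRTIO_VSOCK_DEFAULT_RX_BUF_SIZE;
+	if (pkt_len > VIRTIO_VSOCK_MAX_PKT_BUF_SIZE)
+		pkt_len = VIRTIO_VSOCK_MAX_PKT_BUF_SIZE;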
When the socket is released, we should free all packets
queued in the per-socket list in order to avoid a memory
leak.
Signed-off-by: Stefano Garzarella
---
net/vmw_vsock/virtio_transport_common.c | 8
1 file changed, 8 insertions(+)
diff --git a/net/vmw_vsock/virtio_transport_common.c
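This matches the virtio_transport_release() hunk quoted earlier in the
thread; a sketch of the purge (the buffer type and the free helper are
assumed from this series, not from the truncated diff):

/* Sketch: on release, drop everything still sitting in the
 * per-socket RX list so the buffers are not leaked. */
void virtio_transport_release(struct vsock_sock *vsk)
{
	struct virtio_vsock_sock *vvs = vsk->trans;
	struct virtio_vsock_buf *buf;	/* assumed buffer type */

	spin_lock_bh(&vvs->rx_lock);
	while (!list_empty(&vvs->rx_queue)) {
		buf = list_first_entry(&vvs->rx_queue,
				       struct virtio_vsock_buf, list);
		list_del(&buf->list);
		virtio_transport_free_buf(buf);	/* assumed free helper */
	}
	spin_unlock_bh(&vvs->rx_lock);
	/* ... existing close/cleanup path continues here ... */
}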
On Fri, 10 May 2019 09:43:08 +0200
Pierre Morel wrote:
> On 09/05/2019 20:26, Halil Pasic wrote:
> > On Thu, 9 May 2019 14:01:01 +0200
> > Pierre Morel wrote:
> >
> >> On 08/05/2019 16:31, Pierre Morel wrote:
> >>> On 26/04/2019 20:32, Halil Pasic wrote:
> This will come in handy soon when