Hi,
Patch is currently under review.
From my end, it was tested and proved to solve the problem.
To follow up you may need to check qemu-devel@nongnu.org from time to time.
Marcel, any feedback?
Yuval
On Mon, 13 Mar 2023 at 18:56, Red Hat Product Security
wrote:
> Hello!
>
> INC2534320
Make sure that the number of page table entries the driver
reports does not exceed one page table size.
Reported-by: Soul Chen
Signed-off-by: Yuval Shaia
---
v0 -> v1:
* Take ring-state into account
* Add Reported-by
---
hw/rdma/vmw/pvrdma_main.c | 16 +++-
1 f
Make sure that the number of page table entries the driver
reports does not exceed one page table size.
Signed-off-by: Yuval Shaia
---
hw/rdma/vmw/pvrdma_main.c | 8
1 file changed, 8 insertions(+)
diff --git a/hw/rdma/vmw/pvrdma_main.c b/hw/rdma/vmw/pvrdma_main.c
index 4fc6712025
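For context, a minimal sketch of the kind of bound check this patch describes. This is not QEMU's actual code; the page size value and the `validate_num_pages` helper name are illustrative assumptions:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical sketch: the driver-reported entry count must not exceed
 * what fits in a single page table page. Names are illustrative. */
#define TARGET_PAGE_SIZE 4096

static inline int validate_num_pages(uint32_t num_pages)
{
    /* One page of the page directory holds this many 64-bit entries. */
    const uint32_t max_entries = TARGET_PAGE_SIZE / sizeof(uint64_t);

    return num_pages != 0 && num_pages <= max_entries;
}
```

The point is that the count comes from guest-controlled memory, so the device must clamp it before iterating over the directory.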
Can anyone else pick this one?
Thanks,
Yuval
On Wed, 7 Dec 2022 at 17:05, Claudio Fontana wrote:
> On 4/5/22 12:31, Marcel Apfelbaum wrote:
> > Hi Yuval,
> > Thank you for the changes.
> >
> > On Sun, Apr 3, 2022 at 11:54 AM Yuval Shaia
> wrote:
> >
Signed-off-by: Yuval Shaia
---
hw/rdma/vmw/pvrdma_main.c | 9 +
1 file changed, 5 insertions(+), 4 deletions(-)
diff --git a/hw/rdma/vmw/pvrdma_main.c b/hw/rdma/vmw/pvrdma_main.c
index 91206dbb8e..aae382af59 100644
--- a/hw/rdma/vmw/pvrdma_main.c
+++ b/hw/rdma/vmw/pvrdma_main.c
Guest driver might execute HW commands when shared buffers are not yet
allocated.
This could happen on purpose (malicious guest) or because of some other
guest/host address mapping error.
We need to protect against such a case.
Fixes: CVE-2022-1050
Reported-by: Raven
Signed-off-by: Yuval Shaia
Reported-by: Raven
Signed-off-by: Yuval Shaia
---
v1 -> v2:
* Commit message changes
---
hw/rdma/vmw/pvrdma_cmd.c | 6 ++
hw/rdma/vmw/pvrdma_main.c | 9 +
2 files changed, 11 insertions(+), 4 deletions(-)
diff --git a/hw/rdma/vmw/pvrdma_cmd.c b/hw/rdma/vmw/pvrdma_cmd.c
in
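A minimal sketch of the guard this commit message describes: refuse to execute a HW command while the shared buffers are unmapped. This is not the pvrdma code; the struct fields and function names are illustrative assumptions:

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical device state: the shared response buffer stays NULL
 * until the guest driver has mapped it. */
struct dev_state {
    void *rsp_slot;   /* shared response buffer, NULL until mapped */
};

static int exec_cmd(struct dev_state *dev)
{
    if (!dev->rsp_slot) {   /* buffers not yet allocated: reject */
        return -1;
    }
    /* ... dispatch the command and write the response ... */
    return 0;
}
```

A malicious guest can trigger the command doorbell before setting up the shared region, so the NULL check must come before any dereference.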
Guest driver might execute HW commands when shared buffers are not yet
allocated.
This might happen on purpose (malicious guest) or because of some other
guest/host address mapping error.
We need to protect against such a case.
Reported-by: Mauro Matteo Cascella
Signed-off-by: Yuval Shaia
---
hw/rdma/vmw
addr, len);
> + addr, pci_len);
> return NULL;
> }
>
> -if (len != plen) {
> -rdma_pci_dma_unmap(dev, p, len);
> +if (pci_len != len) {
> + rdma_pci_dma_unmap(dev, p, pci_len);
> return NULL;
> }
>
> -trace_rdma_pci_dma_map(addr, p, len);
> +trace_rdma_pci_dma_map(addr, p, pci_len);
>
> return p;
> }
>
Reviewed-by: Yuval Shaia
Tested-by: Yuval Shaia
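A self-contained sketch of the pattern in the hunk above: if the DMA layer mapped fewer bytes than requested, treat it as a failure and undo the partial mapping. The stub functions and the backing array are illustrative assumptions, not QEMU's API:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Stand-in backing store for the stubbed DMA-map call. */
static uint8_t backing[64];

/* Stub: may shrink *plen when the request crosses the region end,
 * mimicking a partial mapping. */
static void *dma_map_stub(uint64_t addr, size_t *plen)
{
    if (addr >= sizeof(backing)) {
        return NULL;
    }
    if (addr + *plen > sizeof(backing)) {
        *plen = sizeof(backing) - addr; /* partial mapping */
    }
    return &backing[addr];
}

static void *safe_dma_map(uint64_t addr, size_t len)
{
    size_t pci_len = len;
    void *p = dma_map_stub(addr, &pci_len);

    if (!p) {
        return NULL;
    }
    if (pci_len != len) {    /* partial map: reject, as in the patch */
        /* the unmap of the partial region would go here */
        return NULL;
    }
    return p;
}
```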
> --
> 2.33.1
>
>
>
_ring_init(PvrdmaRing *ring, const char *name,
> PCIDevice *dev,
> qatomic_set(>ring_state->cons_head, 0);
> */
> ring->npages = npages;
> -ring->pages = g_malloc(npages * sizeof(void *));
> + ring->pages = g_malloc0(npages * sizeof(void *))
,11 @@ static int init_dev_ring(PvrdmaRing *ring,
> PvrdmaRingState **ring_state,
> uint64_t *dir, *tbl;
> int rc = 0;
>
> +if (!num_pages) {
> +rdma_error_report("Ring pages count must be strictly positive");
> + return -EINVAL;
> +}
> +
Reviewed-by: Yuval Shaia
Tested-by: Yuval Shaia
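An illustrative sketch (plain C, not QEMU code) of why the patch above zero-allocates the ring page array and rejects a zero page count: if initialization fails midway, cleanup can safely free every slot because unused slots are NULL:

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>

/* Hypothetical ring, modeled loosely on the PvrdmaRing discussion. */
struct ring {
    unsigned npages;
    void **pages;
};

static int ring_init(struct ring *r, unsigned npages)
{
    if (!npages) {          /* reject zero-sized rings, as in the patch */
        return -1;
    }
    r->npages = npages;
    r->pages = calloc(npages, sizeof(void *)); /* zeroed, like g_malloc0 */
    return r->pages ? 0 : -1;
}

static void ring_fini(struct ring *r)
{
    for (unsigned i = 0; i < r->npages; i++) {
        free(r->pages[i]); /* free(NULL) is a no-op, so partial init is safe */
    }
    free(r->pages);
}
```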
On Wed, 16 Jun 2021 at 14:06, Marcel Apfelbaum
wrote:
> From: Marcel Apfelbaum
>
> Ensure mremap boundaries not trusting the guest kernel to
> pass the correct buffer length.
>
> Fixes: CVE-2021-3582
> Reported-by
nt32_t pvrdma_idx_ring_has_data(const struct pvrdma_ring
> *r,
> -uint32_t max_elems, uint32_t
> *out_head)
> -{
> - const uint32_t tail = qatomic_read(>prod_tail);
> - const uint32_t head = qatomic_read(>cons_head);
> -
> - if (pvrdma_idx_valid(tail, max_elems) &&
> - pvrdma_idx_valid(head, max_elems)) {
> - *out_head = head & (max_elems - 1);
> - return tail != head;
> - }
> - return PVRDMA_INVALID_IDX;
> -}
> -
> -#endif /* __PVRDMA_RING_H__ */
> diff --git a/scripts/update-linux-headers.sh
> b/scripts/update-linux-headers.sh
> index fa6f2b6272b7..1050e361694f 100755
> --- a/scripts/update-linux-headers.sh
> +++ b/scripts/update-linux-headers.sh
> @@ -215,8 +215,7 @@ sed -e '1h;2,$H;$!d;g' -e 's/[^};]*pvrdma[^(|
> ]*([^)]*);//g' \
> "$linux/drivers/infiniband/hw/vmw_pvrdma/pvrdma_verbs.h" > \
> "$tmp_pvrdma_verbs";
>
> -for i in "$linux/drivers/infiniband/hw/vmw_pvrdma/pvrdma_ring.h" \
> - "$linux/drivers/infiniband/hw/vmw_pvrdma/pvrdma_dev_api.h" \
> +for i in "$linux/drivers/infiniband/hw/vmw_pvrdma/pvrdma_dev_api.h" \
> "$tmp_pvrdma_verbs"; do \
> cp_portable "$i" \
>
> "$output/include/standard-headers/drivers/infiniband/hw/vmw_pvrdma/"
> --
> 2.26.2
>
>
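Based on the removed helper quoted above, here is a sketch of how a shared ring index can be validated before use: the indexes live in guest-writable memory, so the device must range-check them and mask before dereferencing. This assumes `max_elems` is a power of two; the names are illustrative, not the pvrdma header's:

```c
#include <assert.h>
#include <stdint.h>

#define RING_INVALID_IDX (~0U)

/* Valid indexes occupy two "laps" of the ring: [0, 2 * max_elems). */
static inline int idx_valid(uint32_t idx, uint32_t max_elems)
{
    return (idx & ~((max_elems << 1) - 1)) == 0;
}

static uint32_t ring_has_data(uint32_t prod_tail, uint32_t cons_head,
                              uint32_t max_elems, uint32_t *out_head)
{
    if (idx_valid(prod_tail, max_elems) && idx_valid(cons_head, max_elems)) {
        *out_head = cons_head & (max_elems - 1);  /* mask before use */
        return prod_tail != cons_head;
    }
    return RING_INVALID_IDX;  /* guest gave an out-of-range index */
}
```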
Thanks!
I guess somewhere in the kernel there is a clean and generic
implementation of such a ring that VM folks could utilize instead of
writing their own.
Tested-by: Yuval Shaia
Reviewed-by: Yuval Shaia
Thanks,
Reviewed-by: Yuval Shaia
On Tue, 3 Nov 2020 at 03:53, Chen Qun wrote:
> After the WITH_QEMU_LOCK_GUARD macro is added, the compiler cannot identify
> that the statements in the macro must be executed. As a result, some
> variables
> assignment statements in t
I guess we are expected to see this back soon, right?
Ignore my r-b and t-b for v1; I did not encounter the build errors, this
one is okay too.
For the hw/rdma stuff:
Reviewed-by: Yuval Shaia
Tested-by: Yuval Shaia
Thanks,
Yuval
>
> hw/hyperv/hyperv.c | 15 ++---
For the hw/rdma stuff:
Reviewed-by: Yuval Shaia
Tested-by: Yuval Shaia
Thanks,
Yuval
On Wed, 1 Apr 2020 at 19:20, Simran Singhal
wrote:
> Replace manual lock()/unlock() calls with lock guard macros
> (QEMU_LOCK_GUARD/WITH_QEMU_LOCK_GUARD).
>
> Signed-off-by: Simran Singhal
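For readers unfamiliar with the lock-guard pattern being discussed, here is a conceptual sketch built on the GCC/Clang `cleanup` attribute. It is only loosely similar to QEMU's `QEMU_LOCK_GUARD`, not its actual implementation; the macro and helper names are assumptions:

```c
#include <assert.h>
#include <pthread.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static int counter;

/* Cleanup hook: runs automatically when the guard goes out of scope. */
static void unlock_cleanup(pthread_mutex_t **m)
{
    pthread_mutex_unlock(*m);
}

/* Scope-bound guard: locks now, unlocks on every exit path. */
#define LOCK_GUARD(m) \
    __attribute__((cleanup(unlock_cleanup))) pthread_mutex_t *guard_ = \
        (pthread_mutex_lock(m), (m))

static int increment(void)
{
    LOCK_GUARD(&lock);   /* no manual unlock needed before return */
    return ++counter;
}
```

The appeal is exactly what the thread notes: the compiler, not the programmer, guarantees the unlock on every return path, which also changes what the compiler can prove about variable assignments inside the guarded region.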
The list mutex should be destroyed when the list itself gets destroyed.
Reported-by: Peter Maydell
Signed-off-by: Yuval Shaia
---
hw/rdma/rdma_utils.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/hw/rdma/rdma_utils.c b/hw/rdma/rdma_utils.c
index 73f279104c..698ed4716c 100644
--- a/hw/rdma
On Tue, 24 Mar 2020 at 13:55, Peter Maydell
wrote:
> On Tue, 24 Mar 2020 at 11:25, Yuval Shaia
> wrote:
> > As I already said, current code makes sure it will not happen;
> > however, it is better that the API ensures this and does not trust callers.
>
> I agree with th
On Tue, 24 Mar 2020 at 13:25, Peter Maydell
wrote:
> On Tue, 24 Mar 2020 at 11:18, Marcel Apfelbaum
> wrote:
> >
> > Hi Peter,Yuval
> >
> > On 3/24/20 1:05 PM, Peter Maydell wrote:
> > > So I think we require that the user of a protected-qlist
> > > ensures that there are no more users of it
On Tue, 24 Mar 2020 at 13:18, Marcel Apfelbaum
wrote:
> Hi Peter,Yuval
>
> On 3/24/20 1:05 PM, Peter Maydell wrote:
> > On Tue, 24 Mar 2020 at 10:54, Yuval Shaia
> wrote:
> >> To protect from the case that users of the protected_qlist are still
> >> using th
To protect from the case that users of the protected_qlist are still
using the qlist, let's lock it before destroying it.
Reported-by: Coverity (CID 1421951)
Signed-off-by: Yuval Shaia
---
hw/rdma/rdma_utils.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/hw/rdma/rdma_utils.c b/hw/rdma
On Tue, 24 Mar 2020 at 11:56, Yuval Shaia wrote:
>
>
> On Mon, 23 Mar 2020 at 12:32, Peter Maydell
> wrote:
>
>> On Sun, 10 Mar 2019 at 09:25, Yuval Shaia wrote:
>> >
>> > When QP is destroyed the backend QP is destroyed as well. This ensures
>
On Mon, 23 Mar 2020 at 12:32, Peter Maydell
wrote:
> On Sun, 10 Mar 2019 at 09:25, Yuval Shaia wrote:
> >
> > When QP is destroyed the backend QP is destroyed as well. This ensures
> > we clean all received buffer we posted to it.
> > However, a contexts of the
ge to make the code cleaner - make one
copy of the function rdma_backend_create_mr and leave the redundant
guest_start argument in the legacy code.
Signed-off-by: Yuval Shaia
---
hw/rdma/rdma_backend.c | 21 ++---
hw/rdma/rdma_backend.h | 5 -
hw/rdma/rdma_rm.c | 13 ++--
ng in data-path by
eliminating the need to translate emulated mr_id to backend device mr_id.
v0 -> v1:
* Accept comment from Marcel
Yuval Shaia (2):
hw/rdma: Cosmetic change - no need for two sge arrays
hw/rdma: Skip data-path mr_id translation
hw/rdma/rdma_b
The function build_host_sge_array uses two sge arrays, one for input and
one for output.
Since the size of the two arrays is the same, the function can write
directly to the given source array (i.e. input/output argument).
Signed-off-by: Yuval Shaia
---
hw/rdma/rdma_backend.c | 40
On Mon, 16 Mar 2020 at 15:30, Marcel Apfelbaum
wrote:
> Hi Yuval,
>
> On 3/7/20 2:56 PM, Yuval Shaia wrote:
> > The function build_host_sge_array uses two sge arrays, one for input and
> > one for output.
> > Since the size of the two arrays is the same, the func
+31,7 @@ int pvrdma_ring_init(PvrdmaRing *ring, const char *name,
> PCIDevice *dev,
> int i;
> int rc = 0;
>
> -strncpy(ring->name, name, MAX_RING_NAME_SZ);
> -ring->name[MAX_RING_NAME_SZ - 1] = 0;
> +pstrcpy(ring->name, MAX_RING_NAME_SZ, name);
> ring->dev = dev;
> ring->ring_state = ring_state;
> ring->max_elems = max_elems;
> --
> 2.24.1
>
>
Thanks,
Reviewed-by: Yuval Shaia
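A minimal re-implementation sketch of the `pstrcpy()` semantics the patch above switches to (always NUL-terminate, truncating if needed), replacing the untruncated `strncpy` pattern. This mirrors QEMU's documented behavior but is not its source:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Copy str into buf, always leaving buf NUL-terminated. */
static void pstrcpy_sketch(char *buf, size_t buf_size, const char *str)
{
    if (buf_size == 0) {
        return;
    }
    strncpy(buf, str, buf_size - 1);
    buf[buf_size - 1] = '\0'; /* guaranteed termination */
}
```

The bare `strncpy(dst, src, n)` leaves `dst` unterminated whenever `strlen(src) >= n`, which is what tripped the sanitizer build mentioned in the thread.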
Thanks,
Reviewed-by: Yuval Shaia
On Wed, 18 Mar 2020 at 15:49, Julia Suvorova wrote:
> ring->name is defined as 'char name[MAX_RING_NAME_SZ]'. Replace untruncated
> strncpy with QEMU function.
> This case prevented QEMU from compiling with --enable-sanitizers.
>
> Signed-off-
The function build_host_sge_array uses two sge arrays, one for input and
one for output.
Since the size of the two arrays is the same, the function can write
directly to the given source array (i.e. input/output argument).
Signed-off-by: Yuval Shaia
---
hw/rdma/rdma_backend.c | 40
ng in data-path by
eliminating the need to translate emulated mr_id to backend device mr_id.
Yuval Shaia (2):
hw/rdma: Cosmetic change - no need for two sge arrays
hw/rdma: Skip data-path mr_id translation
hw/rdma/rdma_backend.c | 61 +-
hw/rdma/rdma_back
ge to make the code cleaner - make one
copy of the function rdma_backend_create_mr and leave the redundant
guest_start argument in the legacy code.
Signed-off-by: Yuval Shaia
---
hw/rdma/rdma_backend.c | 23 ++-
hw/rdma/rdma_backend.h | 5 -
hw/rdma/rdma_rm.c | 13 ++
Use gmail account for maintainer tasks.
Signed-off-by: Yuval Shaia
---
MAINTAINERS | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/MAINTAINERS b/MAINTAINERS
index 5e5e3e52d6..4297b54fcb 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -2640,7 +2640,7 @@ F: tests/test
2 +-
> block/qcow2-cache.c| 2 +-
> contrib/vhost-user-gpu/vugbm.c | 2 +-
> exec.c | 6 +++---
> hw/intc/s390_flic_kvm.c| 2 +-
> hw/ppc/mac_newworld.c | 2 +-
> hw/ppc/spapr_pci.c | 2 +-
> hw/rdma/vmw
or **errp)
> out:
> if (rc) {
> pvrdma_fini(pdev);
> -error_append_hint(errp, "Device failed to load\n");
> +rdma_error_report("Device failed to load");
Reviewed-by: Yuval Shaia
> }
> }
>
>
>
On Wed, Sep 04, 2019 at 03:03:20AM +0530, Sukrit Bhatnagar wrote:
> On Thu, 29 Aug 2019 at 18:23, Yuval Shaia wrote:
> >
> > On Wed, Aug 28, 2019 at 07:53:28PM +0530, Sukrit Bhatnagar wrote:
> > > vmstate_pvrdma describes the PCI and MSIX states as well as the d
ckend device, and finally calls load_dsr()
> > to perform other mappings and ring init operations.
> >
> > Cc: Marcel Apfelbaum
> > Cc: Yuval Shaia
> > Signed-off-by: Sukrit Bhatnagar
> > ---
> > hw/rdma/vmw/pvrdma_main.c | 77 +++
On Sat, Aug 31, 2019 at 10:31:57PM +0300, Marcel Apfelbaum wrote:
> Hi Yuval,
>
> On 8/18/19 4:21 PM, Yuval Shaia wrote:
> > The virtual address that is provided by the guest in post_send and
> > post_recv operations is related to the guest address space. This address
On Sat, Aug 31, 2019 at 10:28:18PM +0300, Marcel Apfelbaum wrote:
>
>
> On 8/18/19 4:21 PM, Yuval Shaia wrote:
> > The function reg_mr_iova is an enhanced version of ibv_reg_mr function
> > that can help to easily register and use guest's MRs.
> >
> > Add che
gt;
> Cc: Marcel Apfelbaum
> Cc: Yuval Shaia
> Signed-off-by: Sukrit Bhatnagar
> ---
> hw/rdma/vmw/pvrdma_main.c | 17 +
> 1 file changed, 9 insertions(+), 8 deletions(-)
>
> diff --git a/hw/rdma/vmw/pvrdma_main.c b/hw/rdma/vmw/pvrdma_main.c
> index adc
f unregistering gid entries from the
> backend device in the source host.
>
> pvrdma_post_load() maps to dsr using the loaded dma address, registers
> each loaded gid into the backend device, and finally calls load_dsr()
> to perform other mappings and ring init operations.
>
&g
gration-support does not
include QP migration. This means that support for live migration *during*
traffic is not yet supported.
>
> Cc: Marcel Apfelbaum
> Cc: Yuval Shaia
> Signed-off-by: Sukrit Bhatnagar
> ---
> hw/rdma/vmw/pvrdma_main.c | 77 +
On Wed, Aug 28, 2019 at 07:53:26PM +0530, Sukrit Bhatnagar wrote:
> This series enables the migration of various GIDs used by the device.
> This is in addition to the successful migration of PCI and MSIX states
> as well as various DMA addresses and ring page information.
>
> We have a
The function reg_mr_iova is an enhanced version of ibv_reg_mr function
that can help to easily register and use guest's MRs.
Add check in 'configure' phase to detect if we have libibverbs with this
support.
Signed-off-by: Yuval Shaia
---
configure | 28
1 file
is needed to detect if the library installed in
the host supports this function
patch #2 enhance the data-path ops by utilizing the new function
Yuval Shaia (2):
configure: Check if we can use ibv_reg_mr_iova
hw/rdma: Utilize ibv_reg_mr_iova for memory registration
configure
is done in data-path affects performance.
An enhanced version of MR registration introduced here
https://patchwork.kernel.org/patch/11044467/ can be used so that the
guest virtual address space for this MR is known to the HCA in host.
This will save the data-path adjustment.
Signed-off-by: Yuval Shaia
On Thu, Aug 15, 2019 at 02:12:44PM +0200, Stephen Kitt wrote:
> On Thu, 15 Aug 2019 13:57:05 +0300, Yuval Shaia
> wrote:
>
> > On Sun, Aug 11, 2019 at 09:42:47PM +0200, Stephen Kitt wrote:
> > > This was broken by the cherry-pick in 41dd30f. Fix by handling err
On Sun, Jul 21, 2019 at 05:18:01AM +0530, Sukrit Bhatnagar wrote:
> In v2, we had successful migration of PCI and MSIX states as well as
> various DMA addresses and ring page information.
> This series enables the migration of various GIDs used by the device.
>
> We have switched to a setup
On Sun, Aug 11, 2019 at 09:42:47PM +0200, Stephen Kitt wrote:
> This was broken by the cherry-pick in 41dd30f. Fix by handling errors
> as in the rest of the function: "goto out" instead of "return rc".
>
> Signed-off-by: Stephen Kitt
> ---
> hw/rdma/vmw/pvrdma_cmd.c | 2 +-
> 1 file changed, 1
et_path, optarg, SOCKET_PATH_MAX);
> +strncpy(unix_socket_path, optarg, SOCKET_PATH_MAX - 1);
> break;
Reviewed-by: Yuval Shaia
>
> case 'p':
> --
> 2.22.0.428.g6d5b264208
>
et_path, optarg, SOCKET_PATH_MAX);
> +strncpy(unix_socket_path, optarg, SOCKET_PATH_MAX - 1);
Oops,
Thanks!
Reviewed-by: Yuval Shaia
> break;
>
> case 'p':
> --
> 2.22.0.428.g6d5b264208
>
> Also, there might be a considerable amount of pages in the rings,
> which will have dma map operations when the init functions are
> called.
> If this takes noticeable time, it might be better to have lazy
> load instead.
Yeah, makes sense, but I hope we will not get to this.
>
>
> > >
> > > @Marcel, @Yuval: As David has suggested, what if we just read the dma
> > > addresses in pvrdma_load(), and let the load_dsr() do the mapping?
> > > In pvrdma_regs_write(), we can check if dev->dsr_info.dma is already set,
> > > so
> > > that its value is not overwritten.
> >
> > Have
On Sat, Jun 29, 2019 at 06:15:21PM +0530, Sukrit Bhatnagar wrote:
> On Fri, 28 Jun 2019 at 16:56, Dr. David Alan Gilbert
> wrote:
> >
> > * Yuval Shaia (yuval.sh...@oracle.com) wrote:
> > > On Fri, Jun 21, 2019 at 08:15:41PM +0530, Sukrit Bhatnagar wrote:
DSR, command slot and response slot upon
> loading the addresses in the pvrdma_load function.
>
> Cc: Marcel Apfelbaum
> Cc: Yuval Shaia
> Signed-off-by: Sukrit Bhatnagar
> ---
> hw/rdma/vmw/pvrdma_main.c | 56 +++
> 1 file changed, 56 i
DSR, command slot and response slot upon
> loading the addresses in the pvrdma_load function.
>
> Cc: Marcel Apfelbaum
> Cc: Yuval Shaia
> Signed-off-by: Sukrit Bhatnagar
> ---
> hw/rdma/vmw/pvrdma_main.c | 56 +++
> 1 file changed, 56 i
On Fri, May 24, 2019 at 08:24:30AM +0300, Marcel Apfelbaum wrote:
>
> Hi Yuval,
>
> On 5/5/19 1:55 PM, Yuval Shaia wrote:
> > Any GID change in guest must be propogate to host. This is already done
> > by firing QMP event to managment system such as libvirt which in turn
.
Fix it by adding support to update the RoCE device's Ethernet function
IP list from qemu via netlink.
Signed-off-by: Yuval Shaia
---
v0 -> v1:
* Fix spelling mistakes pointed out by Eric Blake
---
configure | 6
hw/rdma/rdma_backend.c |
On Mon, May 06, 2019 at 10:09:29AM -0500, Eric Blake wrote:
> On 5/5/19 5:55 AM, Yuval Shaia wrote:
> > Any GID change in guest must be propogate to host. This is already done
>
> s/propogate to/propagated to the/
>
> > by firing QMP event to managment system such as libv
this function for legacy devices */
> virtio_queue_update_rings(vdev, vdev->queue_sel);
> @@ -303,11 +299,13 @@ static void virtio_mmio_write(void *opaque, hwaddr
> offset, uint64_t value,
> case VIRTIO_MMIO_DEVICE_FEATURES:
> case VIRTIO_MMIO_QUEUE_NUM_MAX:
> case VIRTIO_MMIO_INTERRUPT_STATUS:
> -DPRINTF("write to readonly register\n");
> +qemu_log_mask(LOG_GUEST_ERROR,
> + "%s: write to readonly register\n",
> + __func__);
> break;
>
> default:
> -DPRINTF("bad register offset\n");
> +qemu_log_mask(LOG_GUEST_ERROR, "%s: bad register offset\n",
> __func__);
> }
> }
>
> @@ -327,7 +325,7 @@ static void virtio_mmio_update_irq(DeviceState *opaque,
> uint16_t vector)
> return;
> }
> level = (atomic_read(>isr) != 0);
> -DPRINTF("virtio_mmio setting IRQ %d\n", level);
> +trace_virtio_mmio_setting_irq(level);
> qemu_set_irq(proxy->irq, level);
> }
Reviewed-by: Yuval Shaia
>
> --
> 2.13.2
>
it by adding support to update the RoCE device's Ethernet function
IP list from qemu via netlink.
Signed-off-by: Yuval Shaia
---
configure | 6
hw/rdma/rdma_backend.c | 74 +-
2 files changed, 79 insertions(+), 1 deletion(-)
diff --git
This is a trivial cleanup patch.
Signed-off-by: Yuval Shaia
---
hw/rdma/rdma_backend.c | 7 ---
1 file changed, 7 deletions(-)
diff --git a/hw/rdma/rdma_backend.c b/hw/rdma/rdma_backend.c
index d1660b6474..05f6b03221 100644
--- a/hw/rdma/rdma_backend.c
+++ b/hw/rdma/rdma_backend.c
On Wed, May 01, 2019 at 08:42:35PM +0800, LI, BO XUAN wrote:
>On Wed, May 1, 2019 at 4:58 PM Yuval Shaia <[1]yuval.sh...@oracle.com>
>wrote:
>
> On Wed, May 01, 2019 at 04:10:39PM +0800, Boxuan Li wrote:
> > Signed-off-by: Boxuan Li <
On Wed, May 01, 2019 at 04:10:39PM +0800, Boxuan Li wrote:
> Signed-off-by: Boxuan Li
> ---
> v2: Instead of using conditional debugs, convert DPRINTF to traces
> ---
> hw/virtio/trace-events | 13 +
> hw/virtio/virtio-mmio.c | 35 ---
> 2 files
On Mon, Apr 22, 2019 at 01:45:27PM -0300, Jason Gunthorpe wrote:
> On Fri, Apr 19, 2019 at 01:16:06PM +0200, Hannes Reinecke wrote:
> > On 4/15/19 12:35 PM, Yuval Shaia wrote:
> > > On Thu, Apr 11, 2019 at 07:02:15PM +0200, Cornelia Huck wrote:
> > > > On T
On Mon, Apr 22, 2019 at 09:00:34AM +0300, Leon Romanovsky wrote:
> On Fri, Apr 19, 2019 at 01:16:06PM +0200, Hannes Reinecke wrote:
> > On 4/15/19 12:35 PM, Yuval Shaia wrote:
> > > On Thu, Apr 11, 2019 at 07:02:15PM +0200, Cornelia Huck wrote:
> > > > On T
On Fri, Apr 19, 2019 at 01:16:06PM +0200, Hannes Reinecke wrote:
> On 4/15/19 12:35 PM, Yuval Shaia wrote:
> > On Thu, Apr 11, 2019 at 07:02:15PM +0200, Cornelia Huck wrote:
> > > On Thu, 11 Apr 2019 14:01:54 +0300
> > > Yuval Shaia wrote:
> > >
> >
virtqueue *req_vq;
> +
> + /* nvdimm bus registers virtio pmem device */
> + struct nvdimm_bus *nvdimm_bus;
> + struct nvdimm_bus_descriptor nd_desc;
> +
> + /* List to store deferred work if virtqueue is full */
> + struct list_head req_list;
> +
> + /* Synchronize virtqueue data */
> + spinlock_t pmem_lock;
> +
> + /* Memory region information */
> + uint64_t start;
> + uint64_t size;
> +};
> +
> +void host_ack(struct virtqueue *vq);
> +int async_pmem_flush(struct nd_region *nd_region, struct bio *bio);
> +#endif
> diff --git a/include/uapi/linux/virtio_ids.h b/include/uapi/linux/virtio_ids.h
> index 6d5c3b2d4f4d..32b2f94d1f58 100644
> --- a/include/uapi/linux/virtio_ids.h
> +++ b/include/uapi/linux/virtio_ids.h
> @@ -43,5 +43,6 @@
> #define VIRTIO_ID_INPUT18 /* virtio input */
> #define VIRTIO_ID_VSOCK19 /* virtio vsock transport */
> #define VIRTIO_ID_CRYPTO 20 /* virtio crypto */
> +#define VIRTIO_ID_PMEM 27 /* virtio pmem */
>
> #endif /* _LINUX_VIRTIO_IDS_H */
> diff --git a/include/uapi/linux/virtio_pmem.h
> b/include/uapi/linux/virtio_pmem.h
> new file mode 100644
> index ..fa3f7d52717a
> --- /dev/null
> +++ b/include/uapi/linux/virtio_pmem.h
> @@ -0,0 +1,10 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +
> +#ifndef _UAPI_LINUX_VIRTIO_PMEM_H
> +#define _UAPI_LINUX_VIRTIO_PMEM_H
> +
> +struct virtio_pmem_config {
> + __le64 start;
> + __le64 size;
> +};
> +#endif
I suggest fixing the minor formatting error above.
With this:
Reviewed-by: Yuval Shaia
> --
> 2.20.1
>
>
On Mon, Apr 15, 2019 at 06:07:52PM -0700, Bart Van Assche wrote:
> On 4/11/19 4:01 AM, Yuval Shaia wrote:
> > +++ b/drivers/infiniband/hw/virtio/Kconfig
> > @@ -0,0 +1,6 @@
> > +config INFINIBAND_VIRTIO_RDMA
> > + tristate "VirtIO Paravirtualized RDMA Drive
On Thu, Apr 11, 2019 at 07:02:15PM +0200, Cornelia Huck wrote:
> On Thu, 11 Apr 2019 14:01:54 +0300
> Yuval Shaia wrote:
>
> > Data center backends use more and more RDMA or RoCE devices and more and
> > more software runs in virtualized environment.
> > There is a ne
On Fri, Apr 12, 2019 at 03:21:56PM +0530, Devesh Sharma wrote:
> On Thu, Apr 11, 2019 at 11:11 PM Yuval Shaia wrote:
> >
> > On Thu, Apr 11, 2019 at 08:34:20PM +0300, Yuval Shaia wrote:
> > > On Thu, Apr 11, 2019 at 05:24:08PM +, Jason Gunthorpe wrote:
> > >
On Thu, Apr 11, 2019 at 05:40:26PM +, Jason Gunthorpe wrote:
> On Thu, Apr 11, 2019 at 08:34:20PM +0300, Yuval Shaia wrote:
> > On Thu, Apr 11, 2019 at 05:24:08PM +, Jason Gunthorpe wrote:
> > > On Thu, Apr 11, 2019 at 07:02:15PM +0200, Cornelia Huck wrote:
> > &g
> > +
> > + wake_up(>acked);
> > +
> > + printk("%s\n", __func__);
>
> Cool:-)
>
> this line should be for debug?
Yes
>
> Zhu Yanjun
>
On Thu, Apr 11, 2019 at 08:34:20PM +0300, Yuval Shaia wrote:
> On Thu, Apr 11, 2019 at 05:24:08PM +, Jason Gunthorpe wrote:
> > On Thu, Apr 11, 2019 at 07:02:15PM +0200, Cornelia Huck wrote:
> > > On Thu, 11 Apr 2019 14:01:54 +0300
> > > Yuval Shaia wrote:
> >
On Thu, Apr 11, 2019 at 05:24:08PM +, Jason Gunthorpe wrote:
> On Thu, Apr 11, 2019 at 07:02:15PM +0200, Cornelia Huck wrote:
> > On Thu, 11 Apr 2019 14:01:54 +0300
> > Yuval Shaia wrote:
> >
> > > Data center backends use more and more RDMA or RoCE devices an
Signed-off-by: Yuval Shaia
---
drivers/infiniband/Kconfig| 1 +
drivers/infiniband/hw/Makefile| 1 +
drivers/infiniband/hw/virtio/Kconfig | 6 +
drivers/infiniband/hw/virtio/Makefile | 4 +
drivers/infiniband/hw/virtio/virtio_rdma.h
Data center backends use more and more RDMA or RoCE devices and more and
more software runs in virtualized environment.
There is a need for a standard to enable RDMA/RoCE on Virtual Machines.
Virtio is the optimal solution since it is the de-facto para-virtualization
technology and also because the
Signed-off-by: Yuval Shaia
---
hw/virtio/virtio-net-pci.c | 18 ++-
include/hw/virtio/virtio-net-pci.h | 35 ++
2 files changed, 37 insertions(+), 16 deletions(-)
create mode 100644 include/hw/virtio/virtio-net-pci.h
diff --git a/hw/virtio/virtio
Signed-off-by: Yuval Shaia
---
hw/Kconfig | 1 +
hw/rdma/Kconfig | 4 +
hw/rdma/Makefile.objs | 2 +
hw/rdma/virtio/virtio-rdma-ib.c | 287
hw/rdma/virtio/virtio-rdma-ib.h
> +
> +static int virtio_pmem_probe(struct virtio_device *vdev)
> +{
> + int err = 0;
> + struct resource res;
> + struct virtio_pmem *vpmem;
> + struct nvdimm_bus *nvdimm_bus;
> + struct nd_region_desc ndr_desc = {};
> + int nid = dev_to_node(>dev);
> + struct
On Wed, Apr 10, 2019 at 02:24:26PM +0200, Cornelia Huck wrote:
> On Wed, 10 Apr 2019 09:38:22 +0530
> Pankaj Gupta wrote:
>
> > This patch adds virtio-pmem driver for KVM guest.
> >
> > Guest reads the persistent memory range information from
> > Qemu over VIRTIO and registers it on nvdimm_bus.
On Wed, Apr 03, 2019 at 09:19:38PM +0300, Yuval Shaia wrote:
> On Wed, Apr 03, 2019 at 02:33:39PM +0300, Kamal Heib wrote:
> > This series implements the SRQ (Shared Receive Queue) for the pvrdma
> > device, It also includes all the needed functions and definitions for
On Sun, Apr 07, 2019 at 11:13:15AM +0300, Kamal Heib wrote:
>
>
> On 4/3/19 9:05 PM, Yuval Shaia wrote:
> > On Wed, Apr 03, 2019 at 02:33:40PM +0300, Kamal Heib wrote:
> >> Add the required functions and definitions to support shared receive
> >> q
On Wed, Apr 03, 2019 at 08:40:13AM -0400, Pankaj Gupta wrote:
>
> > Subject: Re: [Qemu-devel] [PATCH v4 2/5] virtio-pmem: Add virtio pmem driver
> >
> > On Wed, Apr 03, 2019 at 04:10:15PM +0530, Pankaj Gupta wrote:
> > > This patch adds virtio-pmem driver for KVM guest.
> > >
> > > Guest reads
On Wed, Apr 03, 2019 at 02:33:39PM +0300, Kamal Heib wrote:
> This series implements the SRQ (Shared Receive Queue) for the pvrdma
> device, It also includes all the needed functions and definitions for
> support SRQ in the backend and resource management layers.
>
> Changes from v2->3:
> - Patch
q_handle;
> +comp_ctx->cqe.wr_id = wqe->hdr.wr_id;
> +comp_ctx->cqe.qp = 0;
> +comp_ctx->cqe.opcode = IBV_WC_RECV;
> +
> +if (wqe->hdr.num_sge > dev->dev_attr.max_sge) {
> +rdma_error_report("Invalid
->total_chunks - cmd->send_chunks - 1,
> cmd->is_srq);
> if (rc) {
> return rc;
> }
> @@ -467,9 +481,9 @@ static int create_qp(PVRDMADev *dev, union pvrdma_cmd_req
> *req,
>cmd->max_send_wr, cmd->max_send_sge,
>cmd->send_cq_handle, cmd->max_recv_wr,
>cmd->max_recv_sge, cmd->recv_cq_handle, rings,
> - >qpn);
> + >qpn, cmd->is_srq, cmd->srq_handle);
> if (rc) {
> -destroy_qp_rings(rings);
> +destroy_qp_rings(rings, cmd->is_srq);
> return rc;
> }
>
> @@ -531,10 +545,9 @@ static int destroy_qp(PVRDMADev *dev, union
> pvrdma_cmd_req *req,
> return -EINVAL;
> }
>
> -rdma_rm_dealloc_qp(>rdma_dev_res, cmd->qp_handle);
> -
> ring = (PvrdmaRing *)qp->opaque;
> -destroy_qp_rings(ring);
> +destroy_qp_rings(ring, qp->is_srq);
> +rdma_rm_dealloc_qp(>rdma_dev_res, cmd->qp_handle);
>
> return 0;
> }
Reviewed-by: Yuval Shaia
> --
> 2.20.1
>
>
above this */
> @@ -89,6 +90,12 @@ typedef struct RdmaRmQP {
> enum ibv_qp_state qp_state;
> } RdmaRmQP;
>
> +typedef struct RdmaRmSRQ {
> +RdmaBackendSRQ backend_srq;
> +uint32_t recv_cq_handle;
> +void *opaque;
> +} RdmaRmSRQ;
> +
> typedef struct RdmaRmGid {
> union ibv_gid gid;
> int backend_gid_index;
> @@ -129,6 +136,7 @@ struct RdmaDeviceResources {
> RdmaRmResTbl qp_tbl;
> RdmaRmResTbl cq_tbl;
> RdmaRmResTbl cqe_ctx_tbl;
> +RdmaRmResTbl srq_tbl;
> GHashTable *qp_hash; /* Keeps mapping between real and emulated */
> QemuMutex lock;
> RdmaRmStats stats;
For some reason v3 is omitted from subject, weird.
Anyway, it looks like you took care of the comment raised.
With that - patch lgtm.
Reviewed-by: Yuval Shaia
> --
> 2.20.1
>
>
ma_rm.c b/hw/rdma/rdma_rm.c
> index bac3b2f4a6c3..b683506b8616 100644
> --- a/hw/rdma/rdma_rm.c
> +++ b/hw/rdma/rdma_rm.c
> @@ -37,6 +37,8 @@ void rdma_dump_device_counters(Monitor *mon,
> RdmaDeviceResources *dev_res)
> dev_res->stats.tx_err);
>
On Wed, Apr 03, 2019 at 04:10:15PM +0530, Pankaj Gupta wrote:
> This patch adds virtio-pmem driver for KVM guest.
>
> Guest reads the persistent memory range information from
> Qemu over VIRTIO and registers it on nvdimm_bus. It also
> creates a nd_region object with the persistent memory
> range
Hi,
I guess you read the basic instructions from here:
https://www.qemu.org/contribute/.
What kind of project are you looking for?
What is the scope of the project? I.e., some minor bug fixes, some
minor enhancements, a major project, etc.
Do you have a contribution to some other open source
> >>
> >> @@ -525,16 +539,21 @@ static int destroy_qp(PVRDMADev *dev, union
> >> pvrdma_cmd_req *req,
> >> struct pvrdma_cmd_destroy_qp *cmd = >destroy_qp;
> >> RdmaRmQP *qp;
> >> PvrdmaRing *ring;
> >> +uint8_t is_srq = 0;
> >>
> >> qp = rdma_rm_get_qp(>rdma_dev_res,
On Tue, Mar 26, 2019 at 02:54:29PM +0200, Kamal Heib wrote:
> This series implements the SRQ (Shared Receive Queue) for the pvrdma
> device, It also includes all the needed functions and definitions for
> support SRQ in the backend and resource management layers.
>
> Changes from v1->v2:
> -
On Tue, Mar 26, 2019 at 02:54:33PM +0200, Kamal Heib wrote:
> Implement the pvrdma device commands for supporting SRQ
>
> Signed-off-by: Kamal Heib
> ---
> hw/rdma/vmw/pvrdma_cmd.c| 147
> hw/rdma/vmw/pvrdma_main.c | 16
>
On Tue, Mar 26, 2019 at 02:54:31PM +0200, Kamal Heib wrote:
> Adding the required functions and definitions for support managing the
> shared receive queues (SRQs).
>
> Signed-off-by: Kamal Heib
> ---
> hw/rdma/rdma_rm.c | 83 ++
> hw/rdma/rdma_rm.h
On Tue, Mar 26, 2019 at 02:54:32PM +0200, Kamal Heib wrote:
> Modify create/destroy QP to support shared receive queue.
>
> Signed-off-by: Kamal Heib
> ---
> hw/rdma/rdma_backend.c | 9 --
> hw/rdma/rdma_backend.h | 6 ++--
> hw/rdma/rdma_rm.c| 23 +--
>
On Tue, Mar 26, 2019 at 02:54:30PM +0200, Kamal Heib wrote:
> Add the required function and definitions for support shared receive
s/function/functions
s/for/to (but not sure about that though)
> queues (SRQs) in the backend layer.
>
> Signed-off-by: Kamal Heib
> ---
> hw/rdma/rdma_backend.c
Signed-off-by: Yuval Shaia
---
hw/net/virtio-net.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
index 7e2c2a6f6a..ffe0872fff 100644
--- a/hw/net/virtio-net.c
+++ b/hw/net/virtio-net.c
@@ -2281,7 +2281,7 @@ static void
pvrdma_qp_ops.c
> index 508d8fca3c9b..5b9786efbe4b 100644
> --- a/hw/rdma/vmw/pvrdma_qp_ops.c
> +++ b/hw/rdma/vmw/pvrdma_qp_ops.c
> @@ -114,7 +114,7 @@ static void pvrdma_qp_ops_comp_handler(void *ctx, struct
> ibv_wc *wc)
>
> static void complete_with_error(uint32_t vendor_err, void *ctx)
> {
> -struct ibv_wc wc = {0};
> +struct ibv_wc wc = {};
>
> wc.status = IBV_WC_GENERAL_ERR;
> wc.vendor_err = vendor_err;
> --
Reviewed-by: Yuval Shaia
> 2.20.1
>
c = rdma_rm_query_qp(>rdma_dev_res, >backend_dev,
> cmd->qp_handle,
>(struct ibv_qp_attr *)>attrs, cmd->attr_mask,
> --
Reviewed-by: Yuval Shaia
> 2.20.1
>
>