Re: [PATCH] vdpa_sim: Fix DMA mask

2020-10-27 Thread Jason Wang


On 2020/10/28 1:59 AM, Laurent Vivier wrote:

Since commit f959dcd6ddfd
("dma-direct: Fix potential NULL pointer dereference")
an error is reported when we load vdpa_sim and virtio-vdpa:

[  129.351207] net eth0: Unexpected TXQ (0) queue failure: -12

It seems that dma_mask is not initialized.

This patch initializes dma_mask and calls dma_set_mask_and_coherent()
to fix the problem.

Full log:

[  128.548628] [ cut here ]
[  128.553268] WARNING: CPU: 23 PID: 1105 at kernel/dma/mapping.c:149 
dma_map_page_attrs+0x14c/0x1d0
[  128.562139] Modules linked in: virtio_net net_failover failover virtio_vdpa 
vdpa_sim vringh vhost_iotlb vdpa xt_CHECKSUM xt_MASQUERADE xt_conntrack 
ipt_REJECT nf_reject_ipv4 nft_compat nft_counter nft_chain_nat nf_nat 
nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 nf_tables nfnetlink tun bridge stp 
llc iscsi_tcp libiscsi_tcp libiscsi scsi_transport_iscsi rfkill intel_rapl_msr 
intel_rapl_common isst_if_common sunrpc skx_edac nfit libnvdimm 
x86_pkg_temp_thermal intel_powerclamp coretemp kvm_intel ipmi_ssif kvm mgag200 
i2c_algo_bit irqbypass drm_kms_helper crct10dif_pclmul crc32_pclmul syscopyarea 
ghash_clmulni_intel iTCO_wdt sysfillrect iTCO_vendor_support sysimgblt rapl 
fb_sys_fops dcdbas intel_cstate drm acpi_ipmi ipmi_si mei_me dell_smbios 
intel_uncore ipmi_devintf mei i2c_i801 dell_wmi_descriptor wmi_bmof pcspkr 
lpc_ich i2c_smbus ipmi_msghandler acpi_power_meter ip_tables xfs libcrc32c 
sd_mod t10_pi sg ahci libahci libata megaraid_sas tg3 crc32c_intel wmi 
dm_mirror dm_region_hash dm_log
[  128.562188]  dm_mod
[  128.651334] CPU: 23 PID: 1105 Comm: NetworkManager Tainted: G SI 
  5.10.0-rc1+ #59
[  128.659939] Hardware name: Dell Inc. PowerEdge R440/04JN2K, BIOS 2.8.1 
06/30/2020
[  128.667419] RIP: 0010:dma_map_page_attrs+0x14c/0x1d0
[  128.672384] Code: 1c 25 28 00 00 00 0f 85 97 00 00 00 48 83 c4 10 5b 5d 41 5c 41 
5d c3 4c 89 da eb d7 48 89 f2 48 2b 50 18 48 89 d0 eb 8d 0f 0b <0f> 0b 48 c7 c0 
ff ff ff ff eb c3 48 89 d9 48 8b 40 40 e8 2d a0 aa
[  128.691131] RSP: 0018:ae0f0151f3c8 EFLAGS: 00010246
[  128.696357] RAX: c06b7400 RBX: 05fa RCX: 
[  128.703488] RDX: 0040 RSI: cee3c7861200 RDI: 9e2bc16cd000
[  128.710620] RBP:  R08: 0002 R09: 
[  128.717754] R10: 0002 R11:  R12: 9e472cb291f8
[  128.724886] R13: 9e2bc14da780 R14: 9e472bc2 R15: 9e2bc1b14940
[  128.732020] FS:  7f887bae23c0() GS:9e4ac01c() 
knlGS:
[  128.740105] CS:  0010 DS:  ES:  CR0: 80050033
[  128.745852] CR2: 562bc09de998 CR3: 0003c156c006 CR4: 007706e0
[  128.752982] DR0:  DR1:  DR2: 
[  128.760114] DR3:  DR6: fffe0ff0 DR7: 0400
[  128.767247] PKRU: 5554
[  128.769961] Call Trace:
[  128.772418]  virtqueue_add+0x81e/0xb00
[  128.776176]  virtqueue_add_inbuf_ctx+0x26/0x30
[  128.780625]  try_fill_recv+0x3a2/0x6e0 [virtio_net]
[  128.785509]  virtnet_open+0xf9/0x180 [virtio_net]
[  128.790217]  __dev_open+0xe8/0x180
[  128.793620]  __dev_change_flags+0x1a7/0x210
[  128.797808]  dev_change_flags+0x21/0x60
[  128.801646]  do_setlink+0x328/0x10e0
[  128.805227]  ? __nla_validate_parse+0x121/0x180
[  128.809757]  ? __nla_parse+0x21/0x30
[  128.813338]  ? inet6_validate_link_af+0x5c/0xf0
[  128.817871]  ? cpumask_next+0x17/0x20
[  128.821535]  ? __snmp6_fill_stats64.isra.54+0x6b/0x110
[  128.826676]  ? __nla_validate_parse+0x47/0x180
[  128.831120]  __rtnl_newlink+0x541/0x8e0
[  128.834962]  ? __nla_reserve+0x38/0x50
[  128.838713]  ? security_sock_rcv_skb+0x2a/0x40
[  128.843158]  ? netlink_deliver_tap+0x2c/0x1e0
[  128.847518]  ? netlink_attachskb+0x1d8/0x220
[  128.851793]  ? skb_queue_tail+0x1b/0x50
[  128.855641]  ? fib6_clean_node+0x43/0x170
[  128.859652]  ? _cond_resched+0x15/0x30
[  128.863406]  ? kmem_cache_alloc_trace+0x3a3/0x420
[  128.868110]  rtnl_newlink+0x43/0x60
[  128.871602]  rtnetlink_rcv_msg+0x12c/0x380
[  128.875701]  ? rtnl_calcit.isra.39+0x110/0x110
[  128.880147]  netlink_rcv_skb+0x50/0x100
[  128.883987]  netlink_unicast+0x1a5/0x280
[  128.887913]  netlink_sendmsg+0x23d/0x470
[  128.891839]  sock_sendmsg+0x5b/0x60
[  128.895331]  sys_sendmsg+0x1ef/0x260
[  128.899255]  ? copy_msghdr_from_user+0x5c/0x90
[  128.903702]  ___sys_sendmsg+0x7c/0xc0
[  128.907369]  ? dev_forward_change+0x130/0x130
[  128.911731]  ? sysctl_head_finish.part.29+0x24/0x40
[  128.916616]  ? new_sync_write+0x11f/0x1b0
[  128.920628]  ? mntput_no_expire+0x47/0x240
[  128.924727]  __sys_sendmsg+0x57/0xa0
[  128.928309]  do_syscall_64+0x33/0x40
[  128.931887]  entry_SYSCALL_64_after_hwframe+0x44/0xa9
[  128.936937] RIP: 0033:0x7f88792e3857
[  128.940518] Code: c3 66 90 41 54 41 89 d4 55 48 89 f5 53 89 fb 48 83 ec 10 e8 0b 
ed ff ff 44 89 e2 48 89 ee 89 df 41 89 c0 b8 2e 00 00 

Re: [PATCH 07/17] vhost scsi: support delayed IO vq creation

2020-10-27 Thread Jason Wang


On 2020/10/27 1:47 PM, Mike Christie wrote:

On 10/25/20 10:51 PM, Jason Wang wrote:


On 2020/10/22 8:34 AM, Mike Christie wrote:

Each vhost-scsi device will need an evt and ctl queue, but the number
of IO queues depends on whatever the user has configured in userspace.
This patch has vhost-scsi create the evt, ctl and one IO vq at device
open time. We then create the other IO vqs when userspace starts to
set them up. We still waste some memory on the vq and scsi vq structs,
but we don't waste memory on the iovec-related arrays, and for later
patches we know which queues are used from the dev->nvqs value.

Signed-off-by: Mike Christie 
---
  drivers/vhost/scsi.c | 19 +++
  1 file changed, 15 insertions(+), 4 deletions(-)



I'm not familiar with SCSI, but I wonder if it could behave like vhost-net.

E.g. userspace should know the number of virtqueues, so it can just
open and close multiple vhost-scsi file descriptors.




One hiccup I'm hitting is that we might end up creating about 3x more 
vqs than we need. The problem is that for scsi each vhost device has:


vq=0: special control vq
vq=1: event vq
vq=2 and above: SCSI CMD/IO vqs. We want to create N of these.

Today we do:

Userspace does open(/dev/vhost-scsi)
    vhost_dev_init(create 128 vqs and then later we set up and use
N of them);


Qemu does ioctl(VHOST_SET_OWNER)
    vhost_dev_set_owner()

For N vqs userspace does:
    // virtqueue setup related ioctls

Qemu does ioctl(VHOST_SCSI_SET_ENDPOINT)
    - match LIO/target port to vhost_dev


So we could change that to:

For N IO vqs userspace does
    open(/dev/vhost-scsi)
    vhost_dev_init(create IO, evt, and ctl);

for N IO vqs Qemu does:
    ioctl(VHOST_SET_OWNER)
    vhost_dev_set_owner()

for N IO vqs Qemu does:
    // virtqueue setup related ioctls

for N IO vqs Qemu does:
    ioctl(VHOST_SCSI_SET_ENDPOINT)
    - match LIO/target port to vhost_dev and assemble the 
multiple vhost_dev device.
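
(For illustration, a rough userspace sketch of the per-IO-vq flow just
outlined. This is hypothetical code, not actual QEMU; the helper name is
made up and the per-vring setup ioctls are elided:)

/* Hypothetical sketch of the proposed flow: one /dev/vhost-scsi fd per IO vq.
 * The per-vring setup ioctls (VHOST_SET_VRING_NUM/ADDR/KICK/CALL) are elided. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/vhost.h>

static int open_vhost_scsi_vq(const char *wwpn, unsigned short tpgt)
{
	struct vhost_scsi_target target;
	int fd = open("/dev/vhost-scsi", O_RDWR);

	if (fd < 0)
		return -1;

	/* Take ownership of this vhost device. */
	if (ioctl(fd, VHOST_SET_OWNER, NULL) < 0)
		goto err;

	/* ... per-virtqueue setup ioctls for this fd would go here ... */

	/* Match the LIO/target port to this vhost_dev. */
	memset(&target, 0, sizeof(target));
	snprintf(target.vhost_wwpn, sizeof(target.vhost_wwpn), "%s", wwpn);
	target.vhost_tpgt = tpgt;
	if (ioctl(fd, VHOST_SCSI_SET_ENDPOINT, &target) < 0)
		goto err;

	return fd;
err:
	close(fd);
	return -1;
}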


The problem is that we have to set up some of the evt/ctl-specific
parts at open() time, when vhost_dev_init does vhost_poll_init for
example.


- At open time, we don't know if this vhost_dev is going to be part of
a multiple-vhost_dev device or a single one, so we need to create at
least 3 of them.
- If it is a multiple-vhost_dev device, we don't know if it's the first
device being created for the device or the N'th, so we don't know if the
dev's vqs will be used for IO or ctls/evts, so we have to create all 3.


When we get the first VHOST_SCSI_SET_ENDPOINT call for a new-style
multiple-vhost_dev device, we can use that dev's evt/ctl vqs for
event/control requests. When we get the other VHOST_SCSI_SET_ENDPOINT
calls for the multiple-vhost_dev device, those devs' evt/ctl vqs will be
ignored and we will only use their IO vqs. So we end up with a lot of
extra vqs.



Right, so in this case we can probably use this patch to address the
issue. If the evt/ctl vqs are not used, we won't even create them.






One other question/issue I have is that qemu can open the
/dev/vhost-scsi device itself, or it allows tools like libvirtd to open
the device and pass in the fd to use.



It allows libvirt to open and pass fds to qemu. This is how multi-queue
virtio-net is done: libvirt is in charge of opening multiple file
descriptors and passing them to qemu.



For the latter case, would we continue to have those tools pass in the 
leading fd, then have qemu do the other num_queues - 1 
open(/dev/vhost-scsi) calls? Or do these apps that pass in the fd need 
to know about all of the fds for some management reason?



Usually qemu is running without privileges, so it depends on the
management layer to open the device.


Note that I'm not objecting to your proposal, I just want to see if it
could be done in an easier way. During the development of multiqueue
virtio-net, something similar to what you've done was proposed, but we
ended up with the multiple vhost-net fd model, which keeps the kernel
code unchanged.


Thanks




Re: [PATCH 6/8] drm: atomic: use krealloc_array()

2020-10-27 Thread Daniel Vetter
On Tue, Oct 27, 2020 at 01:17:23PM +0100, Bartosz Golaszewski wrote:
> From: Bartosz Golaszewski 
> 
> Use the helper that checks for overflows internally instead of manually
> calculating the size of the new array.
> 
> Signed-off-by: Bartosz Golaszewski 

Acked-by: Daniel Vetter 

I don't expect conflicts with this going through some other tree, so
please make that happen. Or resend once I can apply this to drm trees.

Thanks, Daniel

> ---
>  drivers/gpu/drm/drm_atomic.c | 3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
> 
> diff --git a/drivers/gpu/drm/drm_atomic.c b/drivers/gpu/drm/drm_atomic.c
> index 58527f151984..09ad6a2ec17b 100644
> --- a/drivers/gpu/drm/drm_atomic.c
> +++ b/drivers/gpu/drm/drm_atomic.c
> @@ -960,7 +960,8 @@ drm_atomic_get_connector_state(struct drm_atomic_state *state,
>   struct __drm_connnectors_state *c;
>   int alloc = max(index + 1, config->num_connector);
>  
> - c = krealloc(state->connectors, alloc * sizeof(*state->connectors), GFP_KERNEL);
> + c = krealloc_array(state->connectors, alloc,
> +sizeof(*state->connectors), GFP_KERNEL);
>   if (!c)
>   return ERR_PTR(-ENOMEM);
>  
> -- 
> 2.29.1
> 

-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch


Re: [PATCH 5/8] edac: ghes: use krealloc_array()

2020-10-27 Thread Borislav Petkov
On Tue, Oct 27, 2020 at 01:17:22PM +0100, Bartosz Golaszewski wrote:
> From: Bartosz Golaszewski 
> 
> Use the helper that checks for overflows internally instead of manually
> calculating the size of the new array.
> 
> Signed-off-by: Bartosz Golaszewski 
> ---
>  drivers/edac/ghes_edac.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/drivers/edac/ghes_edac.c b/drivers/edac/ghes_edac.c
> index a918ca93e4f7..6d1ddecbf0da 100644
> --- a/drivers/edac/ghes_edac.c
> +++ b/drivers/edac/ghes_edac.c
> @@ -207,8 +207,8 @@ static void enumerate_dimms(const struct dmi_header *dh, void *arg)
>   if (!hw->num_dimms || !(hw->num_dimms % 16)) {
>   struct dimm_info *new;
>  
> - new = krealloc(hw->dimms, (hw->num_dimms + 16) * sizeof(struct dimm_info),
> - GFP_KERNEL);
> + new = krealloc_array(hw->dimms, hw->num_dimms + 16,
> +  sizeof(struct dimm_info), GFP_KERNEL);
>   if (!new) {
>   WARN_ON_ONCE(1);
>   return;
> -- 

Sure, why not.

Acked-by: Borislav Petkov 

-- 
Regards/Gruss,
Boris.

https://people.kernel.org/tglx/notes-about-netiquette


Re: [PATCH 1/8] mm: slab: provide krealloc_array()

2020-10-27 Thread Vlastimil Babka

On 10/27/20 1:17 PM, Bartosz Golaszewski wrote:

From: Bartosz Golaszewski 

When allocating an array of elements, users should check for
multiplication overflow or preferably use one of the provided helpers
like: kmalloc_array().

There's no krealloc_array() counterpart but there are many users who use
regular krealloc() to reallocate arrays. Let's provide an actual
krealloc_array() implementation.

Signed-off-by: Bartosz Golaszewski 


Makes sense.
Acked-by: Vlastimil Babka 


---
  include/linux/slab.h | 11 +++
  1 file changed, 11 insertions(+)

diff --git a/include/linux/slab.h b/include/linux/slab.h
index dd6897f62010..0e6683affee7 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -592,6 +592,17 @@ static inline void *kmalloc_array(size_t n, size_t size, gfp_t flags)
return __kmalloc(bytes, flags);
  }
  
+static __must_check inline void *
+krealloc_array(void *p, size_t new_n, size_t new_size, gfp_t flags)
+{
+   size_t bytes;
+
+   if (unlikely(check_mul_overflow(new_n, new_size, &bytes)))
+   return NULL;
+
+   return krealloc(p, bytes, flags);
+}
+
  /**
   * kcalloc - allocate memory for an array. The memory is set to zero.
   * @n: number of elements.
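
(For reference, a minimal usage sketch of the new helper; the struct and
function names below are hypothetical, not from this series:)

/* Hypothetical caller (not from this series): grow an array with the new
 * helper. krealloc_array() returns NULL if new_count * sizeof(*entries)
 * overflows or if the underlying krealloc() fails. */
static struct entry *grow_entries(struct entry *entries, size_t new_count)
{
	return krealloc_array(entries, new_count, sizeof(*entries), GFP_KERNEL);
}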





[PATCH] vdpa_sim: Fix DMA mask

2020-10-27 Thread Laurent Vivier
Since commit f959dcd6ddfd
("dma-direct: Fix potential NULL pointer dereference")
an error is reported when we load vdpa_sim and virtio-vdpa:

[  129.351207] net eth0: Unexpected TXQ (0) queue failure: -12

It seems that dma_mask is not initialized.

This patch initializes dma_mask and calls dma_set_mask_and_coherent()
to fix the problem.
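
(For reference, a minimal sketch of the kind of change described above,
assuming the mask is set on &vdpasim->vdpa.dev during vdpasim_create();
the error label and exact placement are assumptions, not necessarily the
actual patch:)

	struct device *dev = &vdpasim->vdpa.dev;

	/* dma-direct now dereferences dev->dma_mask, so it must point at
	 * valid storage; reuse the coherent mask field as other buses do. */
	dev->dma_mask = &dev->coherent_dma_mask;

	/* The simulator has no real addressing limit, so use a 64-bit mask
	 * (dma_set_mask_and_coherent() is from <linux/dma-mapping.h>). */
	if (dma_set_mask_and_coherent(dev, DMA_BIT_MASK(64)))
		goto err_iommu;	/* assumed error label in vdpasim_create() */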

Full log:

[  128.548628] [ cut here ]
[  128.553268] WARNING: CPU: 23 PID: 1105 at kernel/dma/mapping.c:149 
dma_map_page_attrs+0x14c/0x1d0
[  128.562139] Modules linked in: virtio_net net_failover failover virtio_vdpa 
vdpa_sim vringh vhost_iotlb vdpa xt_CHECKSUM xt_MASQUERADE xt_conntrack 
ipt_REJECT nf_reject_ipv4 nft_compat nft_counter nft_chain_nat nf_nat 
nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 nf_tables nfnetlink tun bridge stp 
llc iscsi_tcp libiscsi_tcp libiscsi scsi_transport_iscsi rfkill intel_rapl_msr 
intel_rapl_common isst_if_common sunrpc skx_edac nfit libnvdimm 
x86_pkg_temp_thermal intel_powerclamp coretemp kvm_intel ipmi_ssif kvm mgag200 
i2c_algo_bit irqbypass drm_kms_helper crct10dif_pclmul crc32_pclmul syscopyarea 
ghash_clmulni_intel iTCO_wdt sysfillrect iTCO_vendor_support sysimgblt rapl 
fb_sys_fops dcdbas intel_cstate drm acpi_ipmi ipmi_si mei_me dell_smbios 
intel_uncore ipmi_devintf mei i2c_i801 dell_wmi_descriptor wmi_bmof pcspkr 
lpc_ich i2c_smbus ipmi_msghandler acpi_power_meter ip_tables xfs libcrc32c 
sd_mod t10_pi sg ahci libahci libata megaraid_sas tg3 crc32c_intel wmi 
dm_mirror dm_region_hash dm_log
[  128.562188]  dm_mod
[  128.651334] CPU: 23 PID: 1105 Comm: NetworkManager Tainted: G SI 
  5.10.0-rc1+ #59
[  128.659939] Hardware name: Dell Inc. PowerEdge R440/04JN2K, BIOS 2.8.1 
06/30/2020
[  128.667419] RIP: 0010:dma_map_page_attrs+0x14c/0x1d0
[  128.672384] Code: 1c 25 28 00 00 00 0f 85 97 00 00 00 48 83 c4 10 5b 5d 41 
5c 41 5d c3 4c 89 da eb d7 48 89 f2 48 2b 50 18 48 89 d0 eb 8d 0f 0b <0f> 0b 48 
c7 c0 ff ff ff ff eb c3 48 89 d9 48 8b 40 40 e8 2d a0 aa
[  128.691131] RSP: 0018:ae0f0151f3c8 EFLAGS: 00010246
[  128.696357] RAX: c06b7400 RBX: 05fa RCX: 
[  128.703488] RDX: 0040 RSI: cee3c7861200 RDI: 9e2bc16cd000
[  128.710620] RBP:  R08: 0002 R09: 
[  128.717754] R10: 0002 R11:  R12: 9e472cb291f8
[  128.724886] R13: 9e2bc14da780 R14: 9e472bc2 R15: 9e2bc1b14940
[  128.732020] FS:  7f887bae23c0() GS:9e4ac01c() 
knlGS:
[  128.740105] CS:  0010 DS:  ES:  CR0: 80050033
[  128.745852] CR2: 562bc09de998 CR3: 0003c156c006 CR4: 007706e0
[  128.752982] DR0:  DR1:  DR2: 
[  128.760114] DR3:  DR6: fffe0ff0 DR7: 0400
[  128.767247] PKRU: 5554
[  128.769961] Call Trace:
[  128.772418]  virtqueue_add+0x81e/0xb00
[  128.776176]  virtqueue_add_inbuf_ctx+0x26/0x30
[  128.780625]  try_fill_recv+0x3a2/0x6e0 [virtio_net]
[  128.785509]  virtnet_open+0xf9/0x180 [virtio_net]
[  128.790217]  __dev_open+0xe8/0x180
[  128.793620]  __dev_change_flags+0x1a7/0x210
[  128.797808]  dev_change_flags+0x21/0x60
[  128.801646]  do_setlink+0x328/0x10e0
[  128.805227]  ? __nla_validate_parse+0x121/0x180
[  128.809757]  ? __nla_parse+0x21/0x30
[  128.813338]  ? inet6_validate_link_af+0x5c/0xf0
[  128.817871]  ? cpumask_next+0x17/0x20
[  128.821535]  ? __snmp6_fill_stats64.isra.54+0x6b/0x110
[  128.826676]  ? __nla_validate_parse+0x47/0x180
[  128.831120]  __rtnl_newlink+0x541/0x8e0
[  128.834962]  ? __nla_reserve+0x38/0x50
[  128.838713]  ? security_sock_rcv_skb+0x2a/0x40
[  128.843158]  ? netlink_deliver_tap+0x2c/0x1e0
[  128.847518]  ? netlink_attachskb+0x1d8/0x220
[  128.851793]  ? skb_queue_tail+0x1b/0x50
[  128.855641]  ? fib6_clean_node+0x43/0x170
[  128.859652]  ? _cond_resched+0x15/0x30
[  128.863406]  ? kmem_cache_alloc_trace+0x3a3/0x420
[  128.868110]  rtnl_newlink+0x43/0x60
[  128.871602]  rtnetlink_rcv_msg+0x12c/0x380
[  128.875701]  ? rtnl_calcit.isra.39+0x110/0x110
[  128.880147]  netlink_rcv_skb+0x50/0x100
[  128.883987]  netlink_unicast+0x1a5/0x280
[  128.887913]  netlink_sendmsg+0x23d/0x470
[  128.891839]  sock_sendmsg+0x5b/0x60
[  128.895331]  sys_sendmsg+0x1ef/0x260
[  128.899255]  ? copy_msghdr_from_user+0x5c/0x90
[  128.903702]  ___sys_sendmsg+0x7c/0xc0
[  128.907369]  ? dev_forward_change+0x130/0x130
[  128.911731]  ? sysctl_head_finish.part.29+0x24/0x40
[  128.916616]  ? new_sync_write+0x11f/0x1b0
[  128.920628]  ? mntput_no_expire+0x47/0x240
[  128.924727]  __sys_sendmsg+0x57/0xa0
[  128.928309]  do_syscall_64+0x33/0x40
[  128.931887]  entry_SYSCALL_64_after_hwframe+0x44/0xa9
[  128.936937] RIP: 0033:0x7f88792e3857
[  128.940518] Code: c3 66 90 41 54 41 89 d4 55 48 89 f5 53 89 fb 48 83 ec 10 
e8 0b ed ff ff 44 89 e2 48 89 ee 89 df 41 89 c0 b8 2e 00 00 00 0f 05 <48> 3d 00 
f0 ff ff 77 35 44 89 c7 

Re: [PATCH 3/8] vhost: vringh: use krealloc_array()

2020-10-27 Thread Joe Perches
On Tue, 2020-10-27 at 17:58 +0100, Bartosz Golaszewski wrote:
> On Tue, Oct 27, 2020 at 5:50 PM Joe Perches  wrote:
> > 
> > On Tue, 2020-10-27 at 11:28 -0400, Michael S. Tsirkin wrote:
> > > On Tue, Oct 27, 2020 at 01:17:20PM +0100, Bartosz Golaszewski wrote:
> > > > From: Bartosz Golaszewski 
> > > > 
> > > > Use the helper that checks for overflows internally instead of manually
> > > > calculating the size of the new array.
> > > > 
> > > > Signed-off-by: Bartosz Golaszewski 
> > > 
> > > No problem with the patch, it does introduce some symmetry in the code.
> > 
> > Perhaps more symmetry by using kmemdup
> > ---
> >  drivers/vhost/vringh.c | 23 ++-
> >  1 file changed, 10 insertions(+), 13 deletions(-)
> > 
> > diff --git a/drivers/vhost/vringh.c b/drivers/vhost/vringh.c
> > index 8bd8b403f087..99222a3651cd 100644
> > --- a/drivers/vhost/vringh.c
> > +++ b/drivers/vhost/vringh.c
> > @@ -191,26 +191,23 @@ static int move_to_indirect(const struct vringh *vrh,
> >  static int resize_iovec(struct vringh_kiov *iov, gfp_t gfp)
> >  {
> > struct kvec *new;
> > -   unsigned int flag, new_num = (iov->max_num & ~VRINGH_IOV_ALLOCATED) * 2;
> > +   size_t new_num = (iov->max_num & ~VRINGH_IOV_ALLOCATED) * 2;
> > +   size_t size;
> > 
> > if (new_num < 8)
> > new_num = 8;
> > 
> > -   flag = (iov->max_num & VRINGH_IOV_ALLOCATED);
> > -   if (flag)
> > -   new = krealloc(iov->iov, new_num * sizeof(struct iovec), gfp);
> > -   else {
> > -   new = kmalloc_array(new_num, sizeof(struct iovec), gfp);
> > -   if (new) {
> > -   memcpy(new, iov->iov,
> > -  iov->max_num * sizeof(struct iovec));
> > -   flag = VRINGH_IOV_ALLOCATED;
> > -   }
> > -   }
> > +   if (unlikely(check_mul_overflow(new_num, sizeof(struct iovec), &size)))
> > +   return -ENOMEM;
> > +
> 
> The whole point of using helpers such as kmalloc_array() is not doing
> these checks manually.

There are tradeoffs in readability for the overflow check, versus not
mistyping or doing the multiplication of iov->max_num * sizeof(struct iovec)
twice.

Just FYI:

the realloc doesn't do a multiplication overflow test as written, so the
suggestion is slightly more resistant to defects.

   



Re: [PATCH 3/8] vhost: vringh: use krealloc_array()

2020-10-27 Thread Joe Perches
On Tue, 2020-10-27 at 11:28 -0400, Michael S. Tsirkin wrote:
> On Tue, Oct 27, 2020 at 01:17:20PM +0100, Bartosz Golaszewski wrote:
> > From: Bartosz Golaszewski 
> > 
> > Use the helper that checks for overflows internally instead of manually
> > calculating the size of the new array.
> > 
> > Signed-off-by: Bartosz Golaszewski 
> 
> No problem with the patch, it does introduce some symmetry in the code.

Perhaps more symmetry by using kmemdup
---
 drivers/vhost/vringh.c | 23 ++-
 1 file changed, 10 insertions(+), 13 deletions(-)

diff --git a/drivers/vhost/vringh.c b/drivers/vhost/vringh.c
index 8bd8b403f087..99222a3651cd 100644
--- a/drivers/vhost/vringh.c
+++ b/drivers/vhost/vringh.c
@@ -191,26 +191,23 @@ static int move_to_indirect(const struct vringh *vrh,
 static int resize_iovec(struct vringh_kiov *iov, gfp_t gfp)
 {
struct kvec *new;
-   unsigned int flag, new_num = (iov->max_num & ~VRINGH_IOV_ALLOCATED) * 2;
+   size_t new_num = (iov->max_num & ~VRINGH_IOV_ALLOCATED) * 2;
+   size_t size;
 
if (new_num < 8)
new_num = 8;
 
-   flag = (iov->max_num & VRINGH_IOV_ALLOCATED);
-   if (flag)
-   new = krealloc(iov->iov, new_num * sizeof(struct iovec), gfp);
-   else {
-   new = kmalloc_array(new_num, sizeof(struct iovec), gfp);
-   if (new) {
-   memcpy(new, iov->iov,
-  iov->max_num * sizeof(struct iovec));
-   flag = VRINGH_IOV_ALLOCATED;
-   }
-   }
+   if (unlikely(check_mul_overflow(new_num, sizeof(struct iovec), &size)))
+   return -ENOMEM;
+
+   if (iov->max_num & VRINGH_IOV_ALLOCATED)
+   new = krealloc(iov->iov, size, gfp);
+   else
+   new = kmemdup(iov->iov, size, gfp);
if (!new)
return -ENOMEM;
iov->iov = new;
-   iov->max_num = (new_num | flag);
+   iov->max_num = new_num | VRINGH_IOV_ALLOCATED;
return 0;
 }
 
 



Re: [PATCH 3/8] vhost: vringh: use krealloc_array()

2020-10-27 Thread Michael S. Tsirkin
On Tue, Oct 27, 2020 at 01:17:20PM +0100, Bartosz Golaszewski wrote:
> From: Bartosz Golaszewski 
> 
> Use the helper that checks for overflows internally instead of manually
> calculating the size of the new array.
> 
> Signed-off-by: Bartosz Golaszewski 

No problem with the patch, it does introduce some symmetry in the code.

Acked-by: Michael S. Tsirkin 



> ---
>  drivers/vhost/vringh.c | 3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
> 
> diff --git a/drivers/vhost/vringh.c b/drivers/vhost/vringh.c
> index 8bd8b403f087..08a0e1c842df 100644
> --- a/drivers/vhost/vringh.c
> +++ b/drivers/vhost/vringh.c
> @@ -198,7 +198,8 @@ static int resize_iovec(struct vringh_kiov *iov, gfp_t 
> gfp)
>  
>   flag = (iov->max_num & VRINGH_IOV_ALLOCATED);
>   if (flag)
> - new = krealloc(iov->iov, new_num * sizeof(struct iovec), gfp);
> + new = krealloc_array(iov->iov, new_num,
> +  sizeof(struct iovec), gfp);
>   else {
>   new = kmalloc_array(new_num, sizeof(struct iovec), gfp);
>   if (new) {
> -- 
> 2.29.1



Re: [PATCH 8/8] dma-buf: use krealloc_array()

2020-10-27 Thread Christian König

On 27.10.20 at 13:17, Bartosz Golaszewski wrote:

From: Bartosz Golaszewski 

Use the helper that checks for overflows internally instead of manually
calculating the size of the new array.

Signed-off-by: Bartosz Golaszewski 


Acked-by: Christian König 


---
  drivers/dma-buf/sync_file.c | 4 ++--
  1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/dma-buf/sync_file.c b/drivers/dma-buf/sync_file.c
index 5a5a1da01a00..2925ea03eef0 100644
--- a/drivers/dma-buf/sync_file.c
+++ b/drivers/dma-buf/sync_file.c
@@ -270,8 +270,8 @@ static struct sync_file *sync_file_merge(const char *name, struct sync_file *a,
fences[i++] = dma_fence_get(a_fences[0]);
  
  	if (num_fences > i) {

-   nfences = krealloc(fences, i * sizeof(*fences),
- GFP_KERNEL);
+   nfences = krealloc_array(fences, i,
+sizeof(*fences), GFP_KERNEL);
if (!nfences)
goto err;
  

