Re: [PATCH] userfaultfd: Write protect when virtual memory range has no page table entry
On Mon, Mar 22, 2021 at 03:00:37PM +0200, Mike Rapoport wrote:
> On Mon, Mar 22, 2021 at 11:14:37AM +0100, Michal Hocko wrote:
> > Let's CC Andrea and Mike
> >
> > On Fri 19-03-21 22:24:28, Bui Quang Minh wrote:
> > > userfaultfd_writeprotect() uses change_protection() to clear the write
> > > bit in page table entries (pte/pmd). So, a later write to this virtual
> > > address range causes a page fault, which is then handled by the
> > > userspace program. However, change_protection() has no effect when
> > > there are no page table entries associated with that virtual memory
> > > range (a newly mapped memory range). As a result, a later access to
> > > that memory range causes allocating a page table entry with the write
> > > bit still set (due to the VM_WRITE flag in vma->vm_flags).
> > >
> > > Add checks for VM_UFFD_WP in vma->vm_flags when allocating a new page
> > > table entry in the missing page table entry page fault path.
> >
> > From the above it is not really clear whether this is a usability
> > problem or a bug of the interface.
>
> I'd say it's a usability/documentation clarity issue.
> Userspace can register an area with
>
> 	UFFDIO_REGISTER_MODE_MISSING | UFFDIO_REGISTER_MODE_WP
>
> and then it will be notified either when the page table has no entry for
> a virtual address or when there is a write to a write-protected address.

Yes, you are right. I saw a patch from Peter to linux-man and saw that

	"When there is only UFFDIO_REGISTER_MODE_WP registered, the
	userspace will not receive any message when a missing page is
	written"

It's my mistake that I didn't look at the documentation carefully when
playing around.

Thanks,
Quang Minh.
[PATCH] userfaultfd: Write protect when virtual memory range has no page table entry
userfaultfd_writeprotect() uses change_protection() to clear the write bit
in page table entries (pte/pmd). So, a later write to this virtual address
range causes a page fault, which is then handled by the userspace program.
However, change_protection() has no effect when there are no page table
entries associated with that virtual memory range (a newly mapped memory
range). As a result, a later access to that memory range causes allocating
a page table entry with the write bit still set (due to the VM_WRITE flag
in vma->vm_flags).

Add checks for VM_UFFD_WP in vma->vm_flags when allocating a new page
table entry in the missing page table entry page fault path.

Signed-off-by: Bui Quang Minh
---
 mm/huge_memory.c | 12 ++++++++++++
 mm/memory.c      | 10 ++++++++++
 2 files changed, 22 insertions(+)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index ae907a9c2050..9bb16a55a48c 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -636,6 +636,11 @@ static vm_fault_t __do_huge_pmd_anonymous_page(struct vm_fault *vmf,
 		entry = mk_huge_pmd(page, vma->vm_page_prot);
 		entry = maybe_pmd_mkwrite(pmd_mkdirty(entry), vma);
+		if (userfaultfd_wp(vma)) {
+			entry = pmd_wrprotect(entry);
+			entry = pmd_mkuffd_wp(entry);
+		}
+
 		page_add_new_anon_rmap(page, vma, haddr, true);
 		lru_cache_add_inactive_or_unevictable(page, vma);
 		pgtable_trans_huge_deposit(vma->vm_mm, vmf->pmd, pgtable);
@@ -643,6 +648,13 @@ static vm_fault_t __do_huge_pmd_anonymous_page(struct vm_fault *vmf,
 		update_mmu_cache_pmd(vma, vmf->address, vmf->pmd);
 		add_mm_counter(vma->vm_mm, MM_ANONPAGES, HPAGE_PMD_NR);
 		mm_inc_nr_ptes(vma->vm_mm);
+
+		if (userfaultfd_huge_pmd_wp(vma, *vmf->pmd)) {
+			spin_unlock(vmf->ptl);
+			count_vm_event(THP_FAULT_ALLOC);
+			count_memcg_event_mm(vma->vm_mm, THP_FAULT_ALLOC);
+			return handle_userfault(vmf, VM_UFFD_WP);
+		}
 		spin_unlock(vmf->ptl);
 		count_vm_event(THP_FAULT_ALLOC);
 		count_memcg_event_mm(vma->vm_mm, THP_FAULT_ALLOC);
diff --git a/mm/memory.c b/mm/memory.c
index 5efa07fb6cdc..b835746545bf 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3564,6 +3564,11 @@ static vm_fault_t do_anonymous_page(struct vm_fault *vmf)
 	if (vma->vm_flags & VM_WRITE)
 		entry = pte_mkwrite(pte_mkdirty(entry));
+	if (userfaultfd_wp(vma)) {
+		entry = pte_wrprotect(entry);
+		entry = pte_mkuffd_wp(entry);
+	}
+
 	vmf->pte = pte_offset_map_lock(vma->vm_mm, vmf->pmd, vmf->address,
 			&vmf->ptl);
 	if (!pte_none(*vmf->pte)) {
@@ -3590,6 +3595,11 @@ static vm_fault_t do_anonymous_page(struct vm_fault *vmf)
 	/* No need to invalidate - it was non-present before */
 	update_mmu_cache(vma, vmf->address, vmf->pte);
+
+	if (userfaultfd_pte_wp(vma, *vmf->pte)) {
+		pte_unmap_unlock(vmf->pte, vmf->ptl);
+		return handle_userfault(vmf, VM_UFFD_WP);
+	}
 unlock:
 	pte_unmap_unlock(vmf->pte, vmf->ptl);
 	return ret;
-- 
2.25.1
[PATCH] bpf: Check for integer overflow when using roundup_pow_of_two()
On 32-bit architectures, roundup_pow_of_two() can return 0 when the
argument has the uppermost bit set, due to the resulting 1UL << 32. Add a
check for this case.

Fixes: d5a3b1f ("bpf: introduce BPF_MAP_TYPE_STACK_TRACE")
Signed-off-by: Bui Quang Minh
---
 kernel/bpf/stackmap.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/kernel/bpf/stackmap.c b/kernel/bpf/stackmap.c
index aea96b638473..bfafbf115bf3 100644
--- a/kernel/bpf/stackmap.c
+++ b/kernel/bpf/stackmap.c
@@ -115,6 +115,8 @@ static struct bpf_map *stack_map_alloc(union bpf_attr *attr)

 	/* hash table size must be power of 2 */
 	n_buckets = roundup_pow_of_two(attr->max_entries);
+	if (!n_buckets)
+		return ERR_PTR(-E2BIG);

 	cost = n_buckets * sizeof(struct stack_map_bucket *) + sizeof(*smap);
 	cost += n_buckets * (value_size + sizeof(struct stack_map_bucket));
-- 
2.17.1
Re: [PATCH] bpf: Fix integer overflow in argument calculation for bpf_map_area_alloc
On Wed, Jan 27, 2021 at 11:23:41AM +0700, Bui Quang Minh wrote:
> > * Seems like there are quite a few similar calls scattered around
> > (cpumap, etc.). Did you audit these as well?
>
> I spotted another bug after re-auditing. In hashtab, there are 2 places
> using the same calls
>
> static struct bpf_map *htab_map_alloc(union bpf_attr *attr)
> {
> 	/* ... snip ... */
> 	if (htab->n_buckets == 0 ||
> 	    htab->n_buckets > U32_MAX / sizeof(struct bucket))
> 		goto free_htab;
>
> 	htab->buckets = bpf_map_area_alloc(htab->n_buckets *
> 					   sizeof(struct bucket),
> 					   htab->map.numa_node);
> }
>
> This is safe because of the above check.
>
> static int prealloc_init(struct bpf_htab *htab)
> {
> 	u32 num_entries = htab->map.max_entries;
>
> 	htab->elems = bpf_map_area_alloc(htab->elem_size * num_entries,
> 					 htab->map.numa_node);
> }
>
> This is not safe since there is no limit check on elem_size.

So sorry, but I rechecked and saw that this bug in hashtab has already
been fixed with commit e1868b9e36d0ca

Thank you,
Quang Minh.
Re: [PATCH] bpf: Fix integer overflow in argument calculation for bpf_map_area_alloc
On Tue, Jan 26, 2021 at 09:36:57AM +0000, Lorenz Bauer wrote:
> On Tue, 26 Jan 2021 at 08:26, Bui Quang Minh wrote:
> >
> > On 32-bit architectures, the result of sizeof() is a 32-bit integer,
> > so the expression becomes a multiplication between two 32-bit
> > integers, which can potentially lead to integer overflow. As a
> > result, bpf_map_area_alloc() allocates less memory than needed.
> >
> > Fix this by casting one operand to u64.
>
> Some quick thoughts:
>
> * Should this have a Fixes tag?

Ok, I will add a Fixes tag in a later version of the patch.

> * Seems like there are quite a few similar calls scattered around
> (cpumap, etc.). Did you audit these as well?

I spotted another bug after re-auditing. In hashtab, there are 2 places
using the same calls

static struct bpf_map *htab_map_alloc(union bpf_attr *attr)
{
	/* ... snip ... */
	if (htab->n_buckets == 0 ||
	    htab->n_buckets > U32_MAX / sizeof(struct bucket))
		goto free_htab;

	htab->buckets = bpf_map_area_alloc(htab->n_buckets *
					   sizeof(struct bucket),
					   htab->map.numa_node);
}

This is safe because of the above check.

static int prealloc_init(struct bpf_htab *htab)
{
	u32 num_entries = htab->map.max_entries;

	htab->elems = bpf_map_area_alloc(htab->elem_size * num_entries,
					 htab->map.numa_node);
}

This is not safe since there is no limit check on elem_size.

In cpumap,

static struct bpf_map *cpu_map_alloc(union bpf_attr *attr)
{
	cmap->cpu_map = bpf_map_area_alloc(cmap->map.max_entries *
					   sizeof(struct bpf_cpu_map_entry *),
					   cmap->map.numa_node);
}

I think this is safe because max_entries is not permitted to be larger
than NR_CPUS.

In stackmap, there is a place that I'm not very sure about

static int prealloc_elems_and_freelist(struct bpf_stack_map *smap)
{
	u32 elem_size = sizeof(struct stack_map_bucket) +
			smap->map.value_size;

	smap->elems = bpf_map_area_alloc(elem_size * smap->map.max_entries,
					 smap->map.numa_node);
}

This is called after another bpf_map_area_alloc() in stack_map_alloc().
In the first bpf_map_area_alloc(), the argument is calculated in a u64
variable, so if there is an integer overflow in the second one, the
first one must have been called with size > 4GB. I think the first one
will probably fail (I am not sure about the actual limit of vmalloc()),
so the second one might not be called.

Overall, I think this pattern is error prone; maybe we should use
typecasting in all similar calls, or add a comment on why we don't. As
I see it, typecasting is not expensive, and if we cast the sizeof()
operand, the change only affects 32-bit architectures.

> * I'd prefer a calloc style version of bpf_map_area_alloc although
> that might conflict with Fixes tag.

Yes, I think the calloc style would prevent this kind of integer
overflow bug.

Thank you,
Quang Minh.
[PATCH] bpf: Fix integer overflow in argument calculation for bpf_map_area_alloc
On 32-bit architectures, the result of sizeof() is a 32-bit integer, so
the expression becomes a multiplication between two 32-bit integers,
which can potentially lead to integer overflow. As a result,
bpf_map_area_alloc() allocates less memory than needed.

Fix this by casting one operand to u64.

Signed-off-by: Bui Quang Minh
---
 kernel/bpf/devmap.c | 4 ++--
 net/core/sock_map.c | 2 +-
 2 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/kernel/bpf/devmap.c b/kernel/bpf/devmap.c
index f6e9c68afdd4..e849c3e8a49f 100644
--- a/kernel/bpf/devmap.c
+++ b/kernel/bpf/devmap.c
@@ -92,7 +92,7 @@ static struct hlist_head *dev_map_create_hash(unsigned int entries,
 	int i;
 	struct hlist_head *hash;

-	hash = bpf_map_area_alloc(entries * sizeof(*hash), numa_node);
+	hash = bpf_map_area_alloc((u64) entries * sizeof(*hash), numa_node);
 	if (hash != NULL)
 		for (i = 0; i < entries; i++)
 			INIT_HLIST_HEAD(&hash[i]);
@@ -143,7 +143,7 @@ static int dev_map_init_map(struct bpf_dtab *dtab, union bpf_attr *attr)

 		spin_lock_init(&dtab->index_lock);
 	} else {
-		dtab->netdev_map = bpf_map_area_alloc(dtab->map.max_entries *
+		dtab->netdev_map = bpf_map_area_alloc((u64) dtab->map.max_entries *
 						      sizeof(struct bpf_dtab_netdev *),
 						      dtab->map.numa_node);
 		if (!dtab->netdev_map)
diff --git a/net/core/sock_map.c b/net/core/sock_map.c
index 64b5ec14ff50..7a42016a981d 100644
--- a/net/core/sock_map.c
+++ b/net/core/sock_map.c
@@ -44,7 +44,7 @@ static struct bpf_map *sock_map_alloc(union bpf_attr *attr)
 	bpf_map_init_from_attr(&stab->map, attr);
 	raw_spin_lock_init(&stab->lock);

-	stab->sks = bpf_map_area_alloc(stab->map.max_entries *
+	stab->sks = bpf_map_area_alloc((u64) stab->map.max_entries *
 				       sizeof(struct sock *),
 				       stab->map.numa_node);
 	if (!stab->sks) {
-- 
2.17.1
Re: [PATCH v2] can: mcba_usb: Fix memory leak when cancelling urb
On Tue, Jan 12, 2021 at 01:42:33PM +0700, Minh Bùi Quang wrote:
> On Mon, Jan 11, 2021 at 9:31 PM Bui Quang Minh wrote:
> >
> > On Mon, Jan 11, 2021 at 01:00:31PM +0100, Oliver Neukum wrote:
> > > Am Montag, den 11.01.2021, 10:49 +0000 schrieb Bui Quang Minh:
> > > > In mcba_usb_read_bulk_callback(), when we don't resubmit or fail
> > > > to resubmit the urb, we need to deallocate the transfer buffer
> > > > that is allocated in mcba_usb_start().
> > > >
> > > > Reported-by: syzbot+57281c762a3922e14...@syzkaller.appspotmail.com
> > > > Signed-off-by: Bui Quang Minh
> > > > ---
> > > > v1: add memory leak fix when not resubmitting urb
> > > > v2: add memory leak fix when failing to resubmit urb
> > > >
> > > >  drivers/net/can/usb/mcba_usb.c | 11 ++++++++---
> > > >  1 file changed, 8 insertions(+), 3 deletions(-)
> > > >
> > > > diff --git a/drivers/net/can/usb/mcba_usb.c b/drivers/net/can/usb/mcba_usb.c
> > > > index df54eb7d4b36..30236e640116 100644
> > > > --- a/drivers/net/can/usb/mcba_usb.c
> > > > +++ b/drivers/net/can/usb/mcba_usb.c
> > > > @@ -584,6 +584,8 @@ static void mcba_usb_read_bulk_callback(struct urb *urb)
> > > >  	case -EPIPE:
> > > >  	case -EPROTO:
> > > >  	case -ESHUTDOWN:
> > > > +		usb_free_coherent(urb->dev, urb->transfer_buffer_length,
> > > > +				  urb->transfer_buffer, urb->transfer_dma);
> > > >  		return;
> > >
> > > Can you call usb_free_coherent() in what can be hard IRQ context?
> >
> > You are right, I dug in the code and saw some comments that on some
> > architectures, usb_free_coherent() cannot be called in hard IRQ
> > context. I see that usb_free_coherent() is called in
> > write_bulk_callback too. I will send a patch that uses usb_anchor to
> > keep track of these urbs and cleans up the transfer buffer later in
> > disconnect().
>
> Hi, I have sent a version 3 patch. However, I found out that
> usb_free_coherent() is ok in this situation. In usb_free_coherent(), if
> the buffer is allocated via dma_alloc_coherent() in
> usb_alloc_coherent(), dma_free_coherent() is called. In
> dma_free_coherent(), ops->free() may be called in some cases, which may
> contain calls to vunmap() that are not permitted in interrupt context.
> However, in usb_alloc_coherent(), the buffer can be allocated from a
> dma pool if the size is less than 2048, and the buffer size in mcba_usb
> is obviously less than 2048. As a result, usb_free_coherent() will at
> most fall in the path that calls dma_pool_free(), which is safe. Am I
> right?

Hi, I'm CC'ing the CAN network driver maintainers so we can discuss the
patch properly. I'm so sorry for spamming emails.

Thanks,
Quang Minh.
[PATCH v3] can: mcba_usb: Fix memory leak when cancelling urb
In mcba_usb_read_bulk_callback(), when we don't resubmit or fail to
resubmit the urb, we need to deallocate the transfer buffer that is
allocated in mcba_usb_start().

On some architectures, usb_free_coherent() cannot be called in interrupt
context, so create a usb_anchor to keep track of these urbs and free
them later.

Reported-by: syzbot+57281c762a3922e14...@syzkaller.appspotmail.com
Signed-off-by: Bui Quang Minh
---
v1: add memory leak fix when not resubmitting urb
v2: add memory leak fix when failing to resubmit urb
v3: remove usb_free_coherent() calls in interrupt context

 drivers/net/can/usb/mcba_usb.c | 48 ++++++++++++++++++++++++---------
 1 file changed, 39 insertions(+), 9 deletions(-)

diff --git a/drivers/net/can/usb/mcba_usb.c b/drivers/net/can/usb/mcba_usb.c
index df54eb7d4b36..8025db484a22 100644
--- a/drivers/net/can/usb/mcba_usb.c
+++ b/drivers/net/can/usb/mcba_usb.c
@@ -77,6 +77,7 @@ struct mcba_priv {
 	struct net_device *netdev;
 	struct usb_anchor tx_submitted;
 	struct usb_anchor rx_submitted;
+	struct usb_anchor urbs_cleanup;
 	struct can_berr_counter bec;
 	bool usb_ka_first_pass;
 	bool can_ka_first_pass;
@@ -220,14 +221,17 @@ static void mcba_usb_write_bulk_callback(struct urb *urb)
 {
 	struct mcba_usb_ctx *ctx = urb->context;
 	struct net_device *netdev;
+	struct mcba_priv *priv;

 	WARN_ON(!ctx);

 	netdev = ctx->priv->netdev;
+	priv = netdev_priv(netdev);

-	/* free up our allocated buffer */
-	usb_free_coherent(urb->dev, urb->transfer_buffer_length,
-			  urb->transfer_buffer, urb->transfer_dma);
+	/* On some architectures, usb_free_coherent() cannot be called in
+	 * interrupt context, queue the urb for later cleanup
+	 */
+	usb_anchor_urb(urb, &priv->urbs_cleanup);

 	if (ctx->can) {
 		if (!netif_device_present(netdev))
@@ -291,8 +295,11 @@ static netdev_tx_t mcba_usb_xmit(struct mcba_priv *priv,
 failed:
 	usb_unanchor_urb(urb);
-	usb_free_coherent(priv->udev, MCBA_USB_TX_BUFF_SIZE, buf,
-			  urb->transfer_dma);
+
+	/* On some architectures, usb_free_coherent() cannot be called in
+	 * interrupt context, queue the urb for later cleanup
+	 */
+	usb_anchor_urb(urb, &priv->urbs_cleanup);

 	if (err == -ENODEV)
 		netif_device_detach(priv->netdev);
@@ -584,7 +591,7 @@ static void mcba_usb_read_bulk_callback(struct urb *urb)
 	case -EPIPE:
 	case -EPROTO:
 	case -ESHUTDOWN:
-		return;
+		goto free;

 	default:
 		netdev_info(netdev, "Rx URB aborted (%d)\n", urb->status);
@@ -615,11 +622,20 @@ static void mcba_usb_read_bulk_callback(struct urb *urb)

 	retval = usb_submit_urb(urb, GFP_ATOMIC);
-	if (retval == -ENODEV)
-		netif_device_detach(netdev);
-	else if (retval)
+	if (retval < 0) {
 		netdev_err(netdev, "failed resubmitting read bulk urb: %d\n",
 			   retval);
+		if (retval == -ENODEV)
+			netif_device_detach(netdev);
+		goto free;
+	}
+
+	return;
+free:
+	/* On some architectures, usb_free_coherent() cannot be called in
+	 * interrupt context, queue the urb for later cleanup
+	 */
+	usb_anchor_urb(urb, &priv->urbs_cleanup);
 }

 /* Start USB device */
@@ -706,6 +722,17 @@ static int mcba_usb_open(struct net_device *netdev)
 	return 0;
 }

+static void mcba_urb_cleanup(struct mcba_priv *priv)
+{
+	struct urb *urb;
+
+	while ((urb = usb_get_from_anchor(&priv->urbs_cleanup))) {
+		usb_free_coherent(urb->dev, urb->transfer_buffer_length,
+				  urb->transfer_buffer, urb->transfer_dma);
+		usb_free_urb(urb);
+	}
+}
+
 static void mcba_urb_unlink(struct mcba_priv *priv)
 {
 	usb_kill_anchored_urbs(&priv->rx_submitted);
@@ -723,6 +750,7 @@ static int mcba_usb_close(struct net_device *netdev)

 	/* Stop polling */
 	mcba_urb_unlink(priv);
+	mcba_urb_cleanup(priv);

 	close_candev(netdev);
 	can_led_event(netdev, CAN_LED_EVENT_STOP);
@@ -812,6 +840,7 @@ static int mcba_usb_probe(struct usb_interface *intf,

 	init_usb_anchor(&priv->rx_submitted);
 	init_usb_anchor(&priv->tx_submitted);
+	init_usb_anchor(&priv->urbs_cleanup);

 	usb_set_intfdata(intf, priv);

@@ -877,6 +906,7 @@ static void mcba_usb_disconnect(struct usb_interface *intf)
 	unregister_candev(priv->netdev);

 	mcba_urb_unlink(priv);
+	mcba_urb_cleanup(priv);

 	free_candev(priv->netdev);
 }
-- 
2.17.1
Re: [PATCH v2] can: mcba_usb: Fix memory leak when cancelling urb
On Mon, Jan 11, 2021 at 01:00:31PM +0100, Oliver Neukum wrote:
> Am Montag, den 11.01.2021, 10:49 +0000 schrieb Bui Quang Minh:
> > In mcba_usb_read_bulk_callback(), when we don't resubmit or fail to
> > resubmit the urb, we need to deallocate the transfer buffer that is
> > allocated in mcba_usb_start().
> >
> > Reported-by: syzbot+57281c762a3922e14...@syzkaller.appspotmail.com
> > Signed-off-by: Bui Quang Minh
> > ---
> > v1: add memory leak fix when not resubmitting urb
> > v2: add memory leak fix when failing to resubmit urb
> >
> >  drivers/net/can/usb/mcba_usb.c | 11 ++++++++---
> >  1 file changed, 8 insertions(+), 3 deletions(-)
> >
> > diff --git a/drivers/net/can/usb/mcba_usb.c b/drivers/net/can/usb/mcba_usb.c
> > index df54eb7d4b36..30236e640116 100644
> > --- a/drivers/net/can/usb/mcba_usb.c
> > +++ b/drivers/net/can/usb/mcba_usb.c
> > @@ -584,6 +584,8 @@ static void mcba_usb_read_bulk_callback(struct urb *urb)
> >  	case -EPIPE:
> >  	case -EPROTO:
> >  	case -ESHUTDOWN:
> > +		usb_free_coherent(urb->dev, urb->transfer_buffer_length,
> > +				  urb->transfer_buffer, urb->transfer_dma);
> >  		return;
>
> Can you call usb_free_coherent() in what can be hard IRQ context?

You are right, I dug in the code and saw some comments that on some
architectures, usb_free_coherent() cannot be called in hard IRQ context.
I see that usb_free_coherent() is called in write_bulk_callback too. I
will send a patch that uses usb_anchor to keep track of these urbs and
cleans up the transfer buffer later in disconnect().

Thank you for your review,
Quang Minh.
[PATCH v2] can: mcba_usb: Fix memory leak when cancelling urb
In mcba_usb_read_bulk_callback(), when we don't resubmit or fail to
resubmit the urb, we need to deallocate the transfer buffer that is
allocated in mcba_usb_start().

Reported-by: syzbot+57281c762a3922e14...@syzkaller.appspotmail.com
Signed-off-by: Bui Quang Minh
---
v1: add memory leak fix when not resubmitting urb
v2: add memory leak fix when failing to resubmit urb

 drivers/net/can/usb/mcba_usb.c | 11 ++++++++---
 1 file changed, 8 insertions(+), 3 deletions(-)

diff --git a/drivers/net/can/usb/mcba_usb.c b/drivers/net/can/usb/mcba_usb.c
index df54eb7d4b36..30236e640116 100644
--- a/drivers/net/can/usb/mcba_usb.c
+++ b/drivers/net/can/usb/mcba_usb.c
@@ -584,6 +584,8 @@ static void mcba_usb_read_bulk_callback(struct urb *urb)
 	case -EPIPE:
 	case -EPROTO:
 	case -ESHUTDOWN:
+		usb_free_coherent(urb->dev, urb->transfer_buffer_length,
+				  urb->transfer_buffer, urb->transfer_dma);
 		return;

 	default:
@@ -615,11 +617,14 @@ static void mcba_usb_read_bulk_callback(struct urb *urb)

 	retval = usb_submit_urb(urb, GFP_ATOMIC);
-	if (retval == -ENODEV)
-		netif_device_detach(netdev);
-	else if (retval)
+	if (retval < 0) {
 		netdev_err(netdev, "failed resubmitting read bulk urb: %d\n",
 			   retval);
+		usb_free_coherent(urb->dev, urb->transfer_buffer_length,
+				  urb->transfer_buffer, urb->transfer_dma);
+		if (retval == -ENODEV)
+			netif_device_detach(netdev);
+	}
 }

 /* Start USB device */
-- 
2.17.1
Re: memory leak in mcba_usb_probe
#syz test: https://github.com/minhbq-99/linux.git 080e743dff190ee8ebec63a13ac33fe8b7e4fc9e
[PATCH] can: mcba_usb: Fix memory leak when cancelling urb
In mcba_usb_read_bulk_callback(), when we don't resubmit the urb, we
need to deallocate the transfer buffer that is allocated in
mcba_usb_start().

Reported-by: syzbot+57281c762a3922e14...@syzkaller.appspotmail.com
Signed-off-by: Bui Quang Minh
---
 drivers/net/can/usb/mcba_usb.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/drivers/net/can/usb/mcba_usb.c b/drivers/net/can/usb/mcba_usb.c
index df54eb7d4b36..7375c384cbd2 100644
--- a/drivers/net/can/usb/mcba_usb.c
+++ b/drivers/net/can/usb/mcba_usb.c
@@ -584,6 +584,8 @@ static void mcba_usb_read_bulk_callback(struct urb *urb)
 	case -EPIPE:
 	case -EPROTO:
 	case -ESHUTDOWN:
+		usb_free_coherent(urb->dev, urb->transfer_buffer_length,
+				  urb->transfer_buffer, urb->transfer_dma);
 		return;

 	default:
-- 
2.17.1
Re: [PATCH] USB: dummy-hcd: Fix uninitialized array use in init()
On Sat, Dec 05, 2020 at 10:15:11AM -0500, Alan Stern wrote:
> On Sat, Dec 05, 2020 at 07:47:01PM +0700, Minh Bùi Quang wrote:
> > On Fri, Dec 4, 2020 at 23:12 Alan Stern wrote:
> > > Does this initialization end up using less memory than an explicit
> > > memset() call?
> >
> > You mean speed?
>
> No, I mean memory space.
>
> A memset call requires a certain amount of instruction space (to push
> the arguments and make the call) but no static data space.
> Initialization requires some instruction space (to copy the data) and
> static data space as well (to hold the data that is to be copied).
>
> Alan Stern

Thank you for your clarification, I didn't think about it before.

As I checked when compiling the code, with MAX_NUM_UDC=32 the
initialization becomes

	xor    eax,eax
	mov    ecx,0x40
	rep stos DWORD PTR es:[rdi],eax

With MAX_NUM_UDC=2, the initialization becomes

	mov    QWORD PTR [rbp-0x30],0x0
	mov    QWORD PTR [rbp-0x28],0x0

As I see it, the initialization does not require additional static data
space. Am I right?

Thanks,
Quang Minh
[PATCH] USB: dummy-hcd: Fix uninitialized array use in init()
This error path

	err_add_pdata:
		for (i = 0; i < mod_data.num; i++)
			kfree(dum[i]);

can be triggered when not all of dum's elements are initialized.

Fix this by initializing all of dum's elements to NULL.

Signed-off-by: Bui Quang Minh
---
 drivers/usb/gadget/udc/dummy_hcd.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/usb/gadget/udc/dummy_hcd.c b/drivers/usb/gadget/udc/dummy_hcd.c
index 0eeaead..a2cf009 100644
--- a/drivers/usb/gadget/udc/dummy_hcd.c
+++ b/drivers/usb/gadget/udc/dummy_hcd.c
@@ -2734,7 +2734,7 @@ static int __init init(void)
 {
 	int retval = -ENOMEM;
 	int i;
-	struct dummy *dum[MAX_NUM_UDC];
+	struct dummy *dum[MAX_NUM_UDC] = {};

 	if (usb_disabled())
 		return -ENODEV;
-- 
2.7.4