CC: [email protected] TO: Yishai Hadas <[email protected]> CC: Leon Romanovsky <[email protected]>
tree:   https://git.kernel.org/pub/scm/linux/kernel/git/leon/linux-rdma.git rdma-next
head:   fb60ce5917ddd769e17552ae421d056c0bb151a5
commit: 745698890f68cf5cf038ff00af2e400371aa1e5c [73/76] IB/core: Improve ODP to use hmm_range_fault()
:::::: branch date: 16 hours ago
:::::: commit date: 16 hours ago
config: ia64-randconfig-s032-20200909 (attached as .config)
compiler: ia64-linux-gcc (GCC) 9.3.0
reproduce:
        wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # apt-get install sparse
        # sparse version: v0.6.2-191-g10164920-dirty
        # https://git.kernel.org/pub/scm/linux/kernel/git/leon/linux-rdma.git/commit/?id=745698890f68cf5cf038ff00af2e400371aa1e5c
        git remote add leon-rdma https://git.kernel.org/pub/scm/linux/kernel/git/leon/linux-rdma.git
        git fetch --no-tags leon-rdma rdma-next
        git checkout 745698890f68cf5cf038ff00af2e400371aa1e5c
        # save the attached .config to linux build tree
        COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-9.3.0 make.cross C=1 CF='-fdiagnostic-prefix -D__CHECK_ENDIAN__' ARCH=ia64

If you fix the issue, kindly add the following tag as appropriate:
Reported-by: kernel test robot <[email protected]>

sparse warnings: (new ones prefixed by >>)

>> drivers/infiniband/core/umem_odp.c:347:5: sparse: sparse: context imbalance in 'ib_umem_odp_map_dma_and_lock' - wrong count at exit

vim +/ib_umem_odp_map_dma_and_lock +347 drivers/infiniband/core/umem_odp.c

8ada2c1c0c1d75a Shachar Raindel   2014-12-11  327 
8ada2c1c0c1d75a Shachar Raindel   2014-12-11  328 /**
745698890f68cf5 Yishai Hadas      2020-08-19  329  * ib_umem_odp_map_dma_and_lock - DMA map userspace memory in an ODP MR and lock it.
8ada2c1c0c1d75a Shachar Raindel   2014-12-11  330  *
745698890f68cf5 Yishai Hadas      2020-08-19  331  * Maps the range passed in the argument to DMA addresses.
745698890f68cf5 Yishai Hadas      2020-08-19  332  * The DMA addresses of the mapped pages is updated in umem_odp->dma_list.
745698890f68cf5 Yishai Hadas      2020-08-19  333  * Upon success the ODP MR will be locked to let caller complete its device
745698890f68cf5 Yishai Hadas      2020-08-19  334  * page table update.
8ada2c1c0c1d75a Shachar Raindel   2014-12-11  335  *
8ada2c1c0c1d75a Shachar Raindel   2014-12-11  336  * Returns the number of pages mapped in success, negative error code
8ada2c1c0c1d75a Shachar Raindel   2014-12-11  337  * for failure.
b5231b019d76521 Jason Gunthorpe   2018-09-16  338  * @umem_odp: the umem to map and pin
8ada2c1c0c1d75a Shachar Raindel   2014-12-11  339  * @user_virt: the address from which we need to map.
8ada2c1c0c1d75a Shachar Raindel   2014-12-11  340  * @bcnt: the minimal number of bytes to pin and map. The mapping might be
8ada2c1c0c1d75a Shachar Raindel   2014-12-11  341  *        bigger due to alignment, and may also be smaller in case of an error
8ada2c1c0c1d75a Shachar Raindel   2014-12-11  342  *        pinning or mapping a page. The actual pages mapped is returned in
8ada2c1c0c1d75a Shachar Raindel   2014-12-11  343  *        the return value.
8ada2c1c0c1d75a Shachar Raindel   2014-12-11  344  * @access_mask: bit mask of the requested access permissions for the given
8ada2c1c0c1d75a Shachar Raindel   2014-12-11  345  *               range.
8ada2c1c0c1d75a Shachar Raindel   2014-12-11  346  */
745698890f68cf5 Yishai Hadas      2020-08-19 @347 int ib_umem_odp_map_dma_and_lock(struct ib_umem_odp *umem_odp, u64 user_virt,
745698890f68cf5 Yishai Hadas      2020-08-19  348 				 u64 bcnt, u64 access_mask)
745698890f68cf5 Yishai Hadas      2020-08-19  349 			__acquires(&umem_odp->umem_mutex)
8ada2c1c0c1d75a Shachar Raindel   2014-12-11  350 {
8ada2c1c0c1d75a Shachar Raindel   2014-12-11  351 	struct task_struct *owning_process = NULL;
f27a0d50a4bc286 Jason Gunthorpe   2018-09-16  352 	struct mm_struct *owning_mm = umem_odp->umem.owning_mm;
745698890f68cf5 Yishai Hadas      2020-08-19  353 	int pfn_index, dma_index, ret = 0, start_idx;
745698890f68cf5 Yishai Hadas      2020-08-19  354 	unsigned int page_shift, hmm_order, pfn_start_idx;
745698890f68cf5 Yishai Hadas      2020-08-19  355 	unsigned long num_pfns, current_seq;
745698890f68cf5 Yishai Hadas      2020-08-19  356 	struct hmm_range range = {};
745698890f68cf5 Yishai Hadas      2020-08-19  357 	unsigned long timeout;
8ada2c1c0c1d75a Shachar Raindel   2014-12-11  358 
8ada2c1c0c1d75a Shachar Raindel   2014-12-11  359 	if (access_mask == 0)
8ada2c1c0c1d75a Shachar Raindel   2014-12-11  360 		return -EINVAL;
8ada2c1c0c1d75a Shachar Raindel   2014-12-11  361 
d2183c6f1958e6b Jason Gunthorpe   2019-05-20  362 	if (user_virt < ib_umem_start(umem_odp) ||
d2183c6f1958e6b Jason Gunthorpe   2019-05-20  363 	    user_virt + bcnt > ib_umem_end(umem_odp))
8ada2c1c0c1d75a Shachar Raindel   2014-12-11  364 		return -EFAULT;
8ada2c1c0c1d75a Shachar Raindel   2014-12-11  365 
d2183c6f1958e6b Jason Gunthorpe   2019-05-20  366 	page_shift = umem_odp->page_shift;
8ada2c1c0c1d75a Shachar Raindel   2014-12-11  367 
f27a0d50a4bc286 Jason Gunthorpe   2018-09-16  368 	/*
f27a0d50a4bc286 Jason Gunthorpe   2018-09-16  369 	 * owning_process is allowed to be NULL, this means somehow the mm is
f27a0d50a4bc286 Jason Gunthorpe   2018-09-16  370 	 * existing beyond the lifetime of the originating process.. Presumably
f27a0d50a4bc286 Jason Gunthorpe   2018-09-16  371 	 * mmget_not_zero will fail in this case.
f27a0d50a4bc286 Jason Gunthorpe   2018-09-16  372 	 */
f25a546e65292b3 Jason Gunthorpe   2019-11-12  373 	owning_process = get_pid_task(umem_odp->tgid, PIDTYPE_PID);
4438ee3f130c9de Moni Shoua        2019-02-17  374 	if (!owning_process || !mmget_not_zero(owning_mm)) {
8ada2c1c0c1d75a Shachar Raindel   2014-12-11  375 		ret = -EINVAL;
8ada2c1c0c1d75a Shachar Raindel   2014-12-11  376 		goto out_put_task;
8ada2c1c0c1d75a Shachar Raindel   2014-12-11  377 	}
8ada2c1c0c1d75a Shachar Raindel   2014-12-11  378 
745698890f68cf5 Yishai Hadas      2020-08-19  379 	range.notifier = &umem_odp->notifier;
745698890f68cf5 Yishai Hadas      2020-08-19  380 	range.start = ALIGN_DOWN(user_virt, 1UL << page_shift);
745698890f68cf5 Yishai Hadas      2020-08-19  381 	range.end = ALIGN(user_virt + bcnt, 1UL << page_shift);
745698890f68cf5 Yishai Hadas      2020-08-19  382 	pfn_start_idx = (range.start - ib_umem_start(umem_odp)) >> PAGE_SHIFT;
745698890f68cf5 Yishai Hadas      2020-08-19  383 	num_pfns = (range.end - range.start) >> PAGE_SHIFT;
745698890f68cf5 Yishai Hadas      2020-08-19  384 	range.default_flags = HMM_PFN_REQ_FAULT;
745698890f68cf5 Yishai Hadas      2020-08-19  385 
9beae1ea89305a9 Lorenzo Stoakes   2016-10-13  386 	if (access_mask & ODP_WRITE_ALLOWED_BIT)
745698890f68cf5 Yishai Hadas      2020-08-19  387 		range.default_flags |= HMM_PFN_REQ_WRITE;
9beae1ea89305a9 Lorenzo Stoakes   2016-10-13  388 
745698890f68cf5 Yishai Hadas      2020-08-19  389 	range.hmm_pfns = &(umem_odp->pfn_list[pfn_start_idx]);
745698890f68cf5 Yishai Hadas      2020-08-19  390 	timeout = jiffies + msecs_to_jiffies(HMM_RANGE_DEFAULT_TIMEOUT);
8ada2c1c0c1d75a Shachar Raindel   2014-12-11  391 
745698890f68cf5 Yishai Hadas      2020-08-19  392 retry:
745698890f68cf5 Yishai Hadas      2020-08-19  393 	current_seq = range.notifier_seq =
745698890f68cf5 Yishai Hadas      2020-08-19  394 		mmu_interval_read_begin(&umem_odp->notifier);
8ada2c1c0c1d75a Shachar Raindel   2014-12-11  395 
d8ed45c5dcd455f Michel Lespinasse 2020-06-08  396 	mmap_read_lock(owning_mm);
745698890f68cf5 Yishai Hadas      2020-08-19  397 	ret = hmm_range_fault(&range);
d8ed45c5dcd455f Michel Lespinasse 2020-06-08  398 	mmap_read_unlock(owning_mm);
745698890f68cf5 Yishai Hadas      2020-08-19  399 	if (unlikely(ret)) {
745698890f68cf5 Yishai Hadas      2020-08-19  400 		if (ret == -EBUSY && !time_after(jiffies, timeout))
745698890f68cf5 Yishai Hadas      2020-08-19  401 			goto retry;
745698890f68cf5 Yishai Hadas      2020-08-19  402 		goto out_put_mm;
b02394aa75e3942 Moni Shoua        2018-11-08  403 	}
8ada2c1c0c1d75a Shachar Raindel   2014-12-11  404 
745698890f68cf5 Yishai Hadas      2020-08-19  405 	start_idx = (range.start - ib_umem_start(umem_odp)) >> page_shift;
745698890f68cf5 Yishai Hadas      2020-08-19  406 	dma_index = start_idx;
745698890f68cf5 Yishai Hadas      2020-08-19  407 
b5231b019d76521 Jason Gunthorpe   2018-09-16  408 	mutex_lock(&umem_odp->umem_mutex);
745698890f68cf5 Yishai Hadas      2020-08-19  409 	if (mmu_interval_read_retry(&umem_odp->notifier, current_seq)) {
745698890f68cf5 Yishai Hadas      2020-08-19  410 		mutex_unlock(&umem_odp->umem_mutex);
745698890f68cf5 Yishai Hadas      2020-08-19  411 		goto retry;
403cd12e2cf759e Artemy Kovalyov   2017-04-05  412 	}
403cd12e2cf759e Artemy Kovalyov   2017-04-05  413 
745698890f68cf5 Yishai Hadas      2020-08-19  414 	for (pfn_index = 0; pfn_index < num_pfns;
745698890f68cf5 Yishai Hadas      2020-08-19  415 	     pfn_index += 1 << (page_shift - PAGE_SHIFT), dma_index++) {
745698890f68cf5 Yishai Hadas      2020-08-19  416 		/*
745698890f68cf5 Yishai Hadas      2020-08-19  417 		 * Since we asked for hmm_range_fault() to populate pages,
745698890f68cf5 Yishai Hadas      2020-08-19  418 		 * it shouldn't return an error entry on success.
745698890f68cf5 Yishai Hadas      2020-08-19  419 		 */
745698890f68cf5 Yishai Hadas      2020-08-19  420 		WARN_ON(range.hmm_pfns[pfn_index] & HMM_PFN_ERROR);
745698890f68cf5 Yishai Hadas      2020-08-19  421 		WARN_ON(!(range.hmm_pfns[pfn_index] & HMM_PFN_VALID));
745698890f68cf5 Yishai Hadas      2020-08-19  422 		hmm_order = hmm_pfn_to_map_order(range.hmm_pfns[pfn_index]);
745698890f68cf5 Yishai Hadas      2020-08-19  423 		/* If a hugepage was detected and ODP wasn't set for, the umem
745698890f68cf5 Yishai Hadas      2020-08-19  424 		 * page_shift will be used, the opposite case is an error.
745698890f68cf5 Yishai Hadas      2020-08-19  425 		 */
745698890f68cf5 Yishai Hadas      2020-08-19  426 		if (hmm_order + PAGE_SHIFT < page_shift) {
745698890f68cf5 Yishai Hadas      2020-08-19  427 			ret = -EINVAL;
745698890f68cf5 Yishai Hadas      2020-08-19  428 			pr_debug("%s: un-expected hmm_order %d, page_shift %d\n",
745698890f68cf5 Yishai Hadas      2020-08-19  429 				 __func__, hmm_order, page_shift);
8ada2c1c0c1d75a Shachar Raindel   2014-12-11  430 			break;
b02394aa75e3942 Moni Shoua        2018-11-08  431 		}
403cd12e2cf759e Artemy Kovalyov   2017-04-05  432 
745698890f68cf5 Yishai Hadas      2020-08-19  433 		ret = ib_umem_odp_map_dma_single_page(
745698890f68cf5 Yishai Hadas      2020-08-19  434 			umem_odp, dma_index, hmm_pfn_to_page(range.hmm_pfns[pfn_index]),
745698890f68cf5 Yishai Hadas      2020-08-19  435 			access_mask);
8ada2c1c0c1d75a Shachar Raindel   2014-12-11  436 		if (ret < 0) {
745698890f68cf5 Yishai Hadas      2020-08-19  437 			pr_debug("ib_umem_odp_map_dma_single_page failed with error %d\n", ret);
8ada2c1c0c1d75a Shachar Raindel   2014-12-11  438 			break;
8ada2c1c0c1d75a Shachar Raindel   2014-12-11  439 		}
8ada2c1c0c1d75a Shachar Raindel   2014-12-11  440 	}
745698890f68cf5 Yishai Hadas      2020-08-19  441 	/* upon success lock should stay on hold for the callee */
745698890f68cf5 Yishai Hadas      2020-08-19  442 	if (!ret)
745698890f68cf5 Yishai Hadas      2020-08-19  443 		ret = dma_index - start_idx;
8ada2c1c0c1d75a Shachar Raindel   2014-12-11  444 	else
745698890f68cf5 Yishai Hadas      2020-08-19  445 		mutex_unlock(&umem_odp->umem_mutex);
8ada2c1c0c1d75a Shachar Raindel   2014-12-11  446 
745698890f68cf5 Yishai Hadas      2020-08-19  447 out_put_mm:
8ada2c1c0c1d75a Shachar Raindel   2014-12-11  448 	mmput(owning_mm);
8ada2c1c0c1d75a Shachar Raindel   2014-12-11  449 out_put_task:
f27a0d50a4bc286 Jason Gunthorpe   2018-09-16  450 	if (owning_process)
8ada2c1c0c1d75a Shachar Raindel   2014-12-11  451 		put_task_struct(owning_process);
8ada2c1c0c1d75a Shachar Raindel   2014-12-11  452 	return ret;
8ada2c1c0c1d75a Shachar Raindel   2014-12-11  453 }
745698890f68cf5 Yishai Hadas      2020-08-19  454 EXPORT_SYMBOL(ib_umem_odp_map_dma_and_lock);
8ada2c1c0c1d75a Shachar Raindel   2014-12-11  455 

---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/[email protected]
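For context on the warning: sparse tracks lock context through annotations such as `__acquires()` and `__releases()`, and a function annotated `__acquires()` is expected to hold the lock on every exit path. Because ib_umem_odp_map_dma_and_lock() keeps umem_mutex held on success but drops it on the error paths (lines 410 and 445), sparse reports "context imbalance ... wrong count at exit". A minimal userspace sketch of that conditional-lock shape (the annotation macros are stubbed as no-ops here so an ordinary compiler accepts them; under sparse they expand to `__attribute__((context(...)))`, and the function and variable names are illustrative, not the kernel's):

```c
#include <pthread.h>

/* No-op stand-ins for sparse's context annotations. */
#define __acquires(x)
#define __releases(x)

static pthread_mutex_t umem_mutex = PTHREAD_MUTEX_INITIALIZER;

/*
 * Same shape as ib_umem_odp_map_dma_and_lock(): take the lock, keep it
 * held on success but drop it on failure. sparse cannot express "held
 * only when the return value is >= 0", so the __acquires() annotation
 * is violated on the error path -- hence "wrong count at exit".
 */
static int map_and_lock(int simulate_error) __acquires(&umem_mutex)
{
	pthread_mutex_lock(&umem_mutex);
	if (simulate_error) {
		pthread_mutex_unlock(&umem_mutex); /* lock NOT held at exit */
		return -22; /* -EINVAL */
	}
	return 1; /* "pages mapped"; lock stays held for the caller */
}
```

A caller of such a function must pair every successful return with a later `mutex_unlock()`. For the sparse report itself, one common option is to balance the count with explicit `__acquire()`/`__release()` markers on the conditional paths, or to accept the warning as a known limitation for functions whose lock state at exit depends on the return value.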
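Separately, the `retry:` loop in the listing is the standard mmu_interval_notifier collision-retry scheme: sample a sequence count, run hmm_range_fault() without holding umem_mutex, then take the mutex and start over if an invalidation fired in between. A toy single-threaded model of that begin/retry protocol (all names below are illustrative stand-ins, not the real kernel API):

```c
/* Toy sequence counter standing in for mmu_interval_read_begin() /
 * mmu_interval_read_retry(). */
static unsigned long notifier_seq;

static unsigned long read_begin(void) { return notifier_seq; }
static int read_retry(unsigned long seen) { return notifier_seq != seen; }
static void invalidate(void) { notifier_seq++; } /* "notifier fired" */

/*
 * Mimics the fault path's shape: sample the sequence, do the faulting
 * work locklessly, then check for a racing invalidation (under the
 * lock, in the real code) and restart if one occurred.
 * Returns the number of passes taken.
 */
static int fault_range(int race_on_first_pass)
{
	unsigned long seq;
	int passes = 0;

retry:
	seq = read_begin();          /* current_seq = mmu_interval_read_begin() */
	passes++;
	/* ... hmm_range_fault() would run here, mmap lock held ... */
	if (race_on_first_pass && passes == 1)
		invalidate();        /* concurrent invalidation during fault */
	/* mutex_lock(&umem_mutex) would be taken here */
	if (read_retry(seq))         /* mmu_interval_read_retry() */
		goto retry;          /* results may be stale: redo the fault */
	return passes;
}
```

With no race the fault completes in one pass; a racing invalidation forces exactly one extra pass here, while the real code bounds retries with the `timeout` check on the `-EBUSY` path.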
_______________________________________________
kbuild mailing list -- [email protected]
To unsubscribe send an email to [email protected]
