TO: "Thomas Hellström" <[email protected]>
CC: [email protected]
CC: Linux Memory Management List <[email protected]>
CC: Matthew Auld <[email protected]>
tree:   https://git.kernel.org/pub/scm/linux/kernel/git/next/linux-next.git master
head:   711428e8f370eff043ae549572b8141987861583
commit: 950505cabe517ad40759cae6f88f33f0bdfbb7c8 [417/3223] drm/i915: Asynchronous migration selftest
:::::: branch date: 11 hours ago
:::::: commit date: 3 weeks ago
config: x86_64-randconfig-m031-20220131 (https://download.01.org/0day-ci/archive/20220201/[email protected]/config)
compiler: gcc-9 (Debian 9.3.0-22) 9.3.0

If you fix the issue, kindly add following tag as appropriate
Reported-by: kernel test robot <[email protected]>
Reported-by: Dan Carpenter <[email protected]>

New smatch warnings:
drivers/gpu/drm/i915/gem/selftests/i915_gem_migrate.c:381 igt_async_migrate() error: uninitialized symbol 'err'.

Old smatch warnings:
drivers/gpu/drm/i915/gem/i915_gem_object.h:194 __i915_gem_object_lock() error: we previously assumed 'ww' could be null (see line 183)

vim +/err +381 drivers/gpu/drm/i915/gem/selftests/i915_gem_migrate.c

950505cabe517a Thomas Hellström 2022-01-10  301  
950505cabe517a Thomas Hellström 2022-01-10  302  /*
950505cabe517a Thomas Hellström 2022-01-10  303   * This subtest tests that unbinding at migration is indeed performed
950505cabe517a Thomas Hellström 2022-01-10  304   * async. We launch a spinner and a number of migrations depending on
950505cabe517a Thomas Hellström 2022-01-10  305   * that spinner to have terminated. Before each migration we bind a
950505cabe517a Thomas Hellström 2022-01-10  306   * vma, which should then be async unbound by the migration operation.
950505cabe517a Thomas Hellström 2022-01-10  307   * If we are able to schedule migrations without blocking while the
950505cabe517a Thomas Hellström 2022-01-10  308   * spinner is still running, those unbinds are indeed async and non-
950505cabe517a Thomas Hellström 2022-01-10  309   * blocking.
950505cabe517a Thomas Hellström 2022-01-10  310   *
950505cabe517a Thomas Hellström 2022-01-10  311   * Note that each async bind operation is awaiting the previous migration
950505cabe517a Thomas Hellström 2022-01-10  312   * due to the moving fence resulting from the migration.
950505cabe517a Thomas Hellström 2022-01-10  313   */
950505cabe517a Thomas Hellström 2022-01-10  314  static int igt_async_migrate(struct intel_gt *gt)
950505cabe517a Thomas Hellström 2022-01-10  315  {
950505cabe517a Thomas Hellström 2022-01-10  316  	struct intel_engine_cs *engine;
950505cabe517a Thomas Hellström 2022-01-10  317  	enum intel_engine_id id;
950505cabe517a Thomas Hellström 2022-01-10  318  	struct i915_ppgtt *ppgtt;
950505cabe517a Thomas Hellström 2022-01-10  319  	struct igt_spinner spin;
950505cabe517a Thomas Hellström 2022-01-10  320  	int err;
950505cabe517a Thomas Hellström 2022-01-10  321  
950505cabe517a Thomas Hellström 2022-01-10  322  	ppgtt = i915_ppgtt_create(gt, 0);
950505cabe517a Thomas Hellström 2022-01-10  323  	if (IS_ERR(ppgtt))
950505cabe517a Thomas Hellström 2022-01-10  324  		return PTR_ERR(ppgtt);
950505cabe517a Thomas Hellström 2022-01-10  325  
950505cabe517a Thomas Hellström 2022-01-10  326  	if (igt_spinner_init(&spin, gt)) {
950505cabe517a Thomas Hellström 2022-01-10  327  		err = -ENOMEM;
950505cabe517a Thomas Hellström 2022-01-10  328  		goto out_spin;
950505cabe517a Thomas Hellström 2022-01-10  329  	}
950505cabe517a Thomas Hellström 2022-01-10  330  
950505cabe517a Thomas Hellström 2022-01-10  331  	for_each_engine(engine, gt, id) {
950505cabe517a Thomas Hellström 2022-01-10  332  		struct ttm_operation_ctx ctx = {
950505cabe517a Thomas Hellström 2022-01-10  333  			.interruptible = true
950505cabe517a Thomas Hellström 2022-01-10  334  		};
950505cabe517a Thomas Hellström 2022-01-10  335  		struct dma_fence *spin_fence;
950505cabe517a Thomas Hellström 2022-01-10  336  		struct intel_context *ce;
950505cabe517a Thomas Hellström 2022-01-10  337  		struct i915_request *rq;
950505cabe517a Thomas Hellström 2022-01-10  338  		struct i915_deps deps;
950505cabe517a Thomas Hellström 2022-01-10  339  
950505cabe517a Thomas Hellström 2022-01-10  340  		ce = intel_context_create(engine);
950505cabe517a Thomas Hellström 2022-01-10  341  		if (IS_ERR(ce)) {
950505cabe517a Thomas Hellström 2022-01-10  342  			err = PTR_ERR(ce);
950505cabe517a Thomas Hellström 2022-01-10  343  			goto out_ce;
950505cabe517a Thomas Hellström 2022-01-10  344  		}
950505cabe517a Thomas Hellström 2022-01-10  345  
950505cabe517a Thomas Hellström 2022-01-10  346  		/*
950505cabe517a Thomas Hellström 2022-01-10  347  		 * Use MI_NOOP, making the spinner non-preemptible. If there
950505cabe517a Thomas Hellström 2022-01-10  348  		 * is a code path where we fail async operation due to the
950505cabe517a Thomas Hellström 2022-01-10  349  		 * running spinner, we will block and fail to end the
950505cabe517a Thomas Hellström 2022-01-10  350  		 * spinner resulting in a deadlock. But with a non-
950505cabe517a Thomas Hellström 2022-01-10  351  		 * preemptible spinner, hangcheck will terminate the spinner
950505cabe517a Thomas Hellström 2022-01-10  352  		 * for us, and we will later detect that and fail the test.
950505cabe517a Thomas Hellström 2022-01-10  353  		 */
950505cabe517a Thomas Hellström 2022-01-10  354  		rq = igt_spinner_create_request(&spin, ce, MI_NOOP);
950505cabe517a Thomas Hellström 2022-01-10  355  		intel_context_put(ce);
950505cabe517a Thomas Hellström 2022-01-10  356  		if (IS_ERR(rq)) {
950505cabe517a Thomas Hellström 2022-01-10  357  			err = PTR_ERR(rq);
950505cabe517a Thomas Hellström 2022-01-10  358  			goto out_ce;
950505cabe517a Thomas Hellström 2022-01-10  359  		}
950505cabe517a Thomas Hellström 2022-01-10  360  
950505cabe517a Thomas Hellström 2022-01-10  361  		i915_deps_init(&deps, GFP_KERNEL);
950505cabe517a Thomas Hellström 2022-01-10  362  		err = i915_deps_add_dependency(&deps, &rq->fence, &ctx);
950505cabe517a Thomas Hellström 2022-01-10  363  		spin_fence = dma_fence_get(&rq->fence);
950505cabe517a Thomas Hellström 2022-01-10  364  		i915_request_add(rq);
950505cabe517a Thomas Hellström 2022-01-10  365  		if (err)
950505cabe517a Thomas Hellström 2022-01-10  366  			goto out_ce;
950505cabe517a Thomas Hellström 2022-01-10  367  
950505cabe517a Thomas Hellström 2022-01-10  368  		err = __igt_lmem_pages_migrate(gt, &ppgtt->vm, &deps, &spin,
950505cabe517a Thomas Hellström 2022-01-10  369  					       spin_fence);
950505cabe517a Thomas Hellström 2022-01-10  370  		i915_deps_fini(&deps);
950505cabe517a Thomas Hellström 2022-01-10  371  		dma_fence_put(spin_fence);
950505cabe517a Thomas Hellström 2022-01-10  372  		if (err)
950505cabe517a Thomas Hellström 2022-01-10  373  			goto out_ce;
950505cabe517a Thomas Hellström 2022-01-10  374  	}
950505cabe517a Thomas Hellström 2022-01-10  375  
950505cabe517a Thomas Hellström 2022-01-10  376  out_ce:
950505cabe517a Thomas Hellström 2022-01-10  377  	igt_spinner_fini(&spin);
950505cabe517a Thomas Hellström 2022-01-10  378  out_spin:
950505cabe517a Thomas Hellström 2022-01-10  379  	i915_vm_put(&ppgtt->vm);
950505cabe517a Thomas Hellström 2022-01-10  380  
950505cabe517a Thomas Hellström 2022-01-10 @381  	return err;
950505cabe517a Thomas Hellström 2022-01-10  382  }
950505cabe517a Thomas Hellström 2022-01-10  383  

---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/[email protected]
_______________________________________________
kbuild mailing list -- [email protected]
To unsubscribe send an email to [email protected]
