CC: kbuild-...@lists.01.org
CC: "Darrick J. Wong" <darrick.w...@oracle.com>
TO: "Darrick J. Wong" <darrick.w...@oracle.com>
tree:   https://git.kernel.org/pub/scm/linux/kernel/git/djwong/xfs-linux.git stale-exposure
head:   2d84e06a8db67be32cea287104a09084df78d4ee
commit: 3edd6be92db73576b2bd71d4c2b1a623245a2336 [45/47] xfs: measure all contiguous previous extents for prealloc size
:::::: branch date: 30 hours ago
:::::: commit date: 30 hours ago
config: i386-allyesconfig (attached as .config)
compiler: gcc-7 (Ubuntu 7.5.0-6ubuntu2) 7.5.0

If you fix the issue, kindly add the following tags as appropriate:
Reported-by: kbuild test robot <l...@intel.com>
Reported-by: Dan Carpenter <dan.carpen...@oracle.com>

smatch warnings:
fs/xfs/xfs_iomap.c:438 xfs_iomap_prealloc_size() warn: should 'plen << 1' be a 64 bit type?

# https://git.kernel.org/pub/scm/linux/kernel/git/djwong/xfs-linux.git/commit/?id=3edd6be92db73576b2bd71d4c2b1a623245a2336
        git remote add djwong-xfs https://git.kernel.org/pub/scm/linux/kernel/git/djwong/xfs-linux.git
        git remote update djwong-xfs
        git checkout 3edd6be92db73576b2bd71d4c2b1a623245a2336

vim +438 fs/xfs/xfs_iomap.c

76a4202a388690e Brian Foster      2013-03-18  353  
055388a3188f566 Dave Chinner      2011-01-04  354  /*
51446f5ba44874d Christoph Hellwig 2016-09-19  355   * If we are doing a write at the end of the file and there are no allocations
51446f5ba44874d Christoph Hellwig 2016-09-19  356   * past this one, then extend the allocation out to the file system's write
51446f5ba44874d Christoph Hellwig 2016-09-19  357   * iosize.
51446f5ba44874d Christoph Hellwig 2016-09-19  358   *
055388a3188f566 Dave Chinner      2011-01-04  359   * If we don't have a user specified preallocation size, dynamically increase
055388a3188f566 Dave Chinner      2011-01-04  360   * the preallocation size as the size of the file grows. Cap the maximum size
055388a3188f566 Dave Chinner      2011-01-04  361   * at a single extent or less if the filesystem is near full. The closer the
055388a3188f566 Dave Chinner      2011-01-04  362   * filesystem is to full, the smaller the maximum prealocation.
51446f5ba44874d Christoph Hellwig 2016-09-19  363   *
51446f5ba44874d Christoph Hellwig 2016-09-19  364   * As an exception we don't do any preallocation at all if the file is smaller
51446f5ba44874d Christoph Hellwig 2016-09-19  365   * than the minimum preallocation and we are using the default dynamic
51446f5ba44874d Christoph Hellwig 2016-09-19  366   * preallocation scheme, as it is likely this is the only write to the file that
51446f5ba44874d Christoph Hellwig 2016-09-19  367   * is going to be done.
51446f5ba44874d Christoph Hellwig 2016-09-19  368   *
51446f5ba44874d Christoph Hellwig 2016-09-19  369   * We clean up any extra space left over when the file is closed in
51446f5ba44874d Christoph Hellwig 2016-09-19  370   * xfs_inactive().
055388a3188f566 Dave Chinner      2011-01-04  371   */
055388a3188f566 Dave Chinner      2011-01-04  372  STATIC xfs_fsblock_t
055388a3188f566 Dave Chinner      2011-01-04  373  xfs_iomap_prealloc_size(
a1e16c26660b301 Dave Chinner      2013-02-11  374  	struct xfs_inode	*ip,
66ae56a53f0e341 Christoph Hellwig 2019-02-18  375  	int			whichfork,
51446f5ba44874d Christoph Hellwig 2016-09-19  376  	loff_t			offset,
51446f5ba44874d Christoph Hellwig 2016-09-19  377  	loff_t			count,
b2b1712a640824e Christoph Hellwig 2017-11-03  378  	struct xfs_iext_cursor	*icur)
055388a3188f566 Dave Chinner      2011-01-04  379  {
3edd6be92db7357 Darrick J. Wong   2020-05-20  380  	struct xfs_iext_cursor	ncur = *icur;
3edd6be92db7357 Darrick J. Wong   2020-05-20  381  	struct xfs_bmbt_irec	prev, got;
51446f5ba44874d Christoph Hellwig 2016-09-19  382  	struct xfs_mount	*mp = ip->i_mount;
66ae56a53f0e341 Christoph Hellwig 2019-02-18  383  	struct xfs_ifork	*ifp = XFS_IFORK_PTR(ip, whichfork);
51446f5ba44874d Christoph Hellwig 2016-09-19  384  	xfs_fileoff_t		offset_fsb = XFS_B_TO_FSBT(mp, offset);
3c58b5f809eda8a Brian Foster      2013-03-18  385  	int64_t			freesp;
76a4202a388690e Brian Foster      2013-03-18  386  	xfs_fsblock_t		qblocks;
51446f5ba44874d Christoph Hellwig 2016-09-19  387  	xfs_fsblock_t		alloc_blocks = 0;
3edd6be92db7357 Darrick J. Wong   2020-05-20  388  	xfs_extlen_t		plen;
3edd6be92db7357 Darrick J. Wong   2020-05-20  389  	int			shift = 0;
3edd6be92db7357 Darrick J. Wong   2020-05-20  390  	int			qshift = 0;
51446f5ba44874d Christoph Hellwig 2016-09-19  391  
51446f5ba44874d Christoph Hellwig 2016-09-19  392  	if (offset + count <= XFS_ISIZE(ip))
51446f5ba44874d Christoph Hellwig 2016-09-19  393  		return 0;
51446f5ba44874d Christoph Hellwig 2016-09-19  394  
3274d00801007cc Christoph Hellwig 2019-10-28  395  	if (!(mp->m_flags & XFS_MOUNT_ALLOCSIZE) &&
5da8a07c79e8a1c Christoph Hellwig 2019-10-28  396  	    (XFS_ISIZE(ip) < XFS_FSB_TO_B(mp, mp->m_allocsize_blocks)))
51446f5ba44874d Christoph Hellwig 2016-09-19  397  		return 0;
055388a3188f566 Dave Chinner      2011-01-04  398  
51446f5ba44874d Christoph Hellwig 2016-09-19  399  	/*
51446f5ba44874d Christoph Hellwig 2016-09-19  400  	 * If an explicit allocsize is set, the file is small, or we
51446f5ba44874d Christoph Hellwig 2016-09-19  401  	 * are writing behind a hole, then use the minimum prealloc:
51446f5ba44874d Christoph Hellwig 2016-09-19  402  	 */
3274d00801007cc Christoph Hellwig 2019-10-28  403  	if ((mp->m_flags & XFS_MOUNT_ALLOCSIZE) ||
51446f5ba44874d Christoph Hellwig 2016-09-19  404  	    XFS_ISIZE(ip) < XFS_FSB_TO_B(mp, mp->m_dalign) ||
3edd6be92db7357 Darrick J. Wong   2020-05-20  405  	    !xfs_iext_prev_extent(ifp, &ncur, &prev) ||
656152e552e5cbe Christoph Hellwig 2016-11-24  406  	    prev.br_startoff + prev.br_blockcount < offset_fsb)
5da8a07c79e8a1c Christoph Hellwig 2019-10-28  407  		return mp->m_allocsize_blocks;
51446f5ba44874d Christoph Hellwig 2016-09-19  408  
51446f5ba44874d Christoph Hellwig 2016-09-19  409  	/*
51446f5ba44874d Christoph Hellwig 2016-09-19  410  	 * Determine the initial size of the preallocation. We are beyond the
51446f5ba44874d Christoph Hellwig 2016-09-19  411  	 * current EOF here, but we need to take into account whether this is
51446f5ba44874d Christoph Hellwig 2016-09-19  412  	 * a sparse write or an extending write when determining the
51446f5ba44874d Christoph Hellwig 2016-09-19  413  	 * preallocation size.  Hence we need to look up the extent that ends
51446f5ba44874d Christoph Hellwig 2016-09-19  414  	 * at the current write offset and use the result to determine the
51446f5ba44874d Christoph Hellwig 2016-09-19  415  	 * preallocation size.
51446f5ba44874d Christoph Hellwig 2016-09-19  416  	 *
51446f5ba44874d Christoph Hellwig 2016-09-19  417  	 * If the extent is a hole, then preallocation is essentially disabled.
3edd6be92db7357 Darrick J. Wong   2020-05-20  418  	 * Otherwise we take the size of the preceding data extents as the basis
3edd6be92db7357 Darrick J. Wong   2020-05-20  419  	 * for the preallocation size. Note that we don't care if the previous
3edd6be92db7357 Darrick J. Wong   2020-05-20  420  	 * extents are written or not.
3edd6be92db7357 Darrick J. Wong   2020-05-20  421  	 *
3edd6be92db7357 Darrick J. Wong   2020-05-20  422  	 * If the size of the extents is greater than half the maximum extent
3edd6be92db7357 Darrick J. Wong   2020-05-20  423  	 * length, then use the current offset as the basis. This ensures that
3edd6be92db7357 Darrick J. Wong   2020-05-20  424  	 * for large files the preallocation size always extends to MAXEXTLEN
3edd6be92db7357 Darrick J. Wong   2020-05-20  425  	 * rather than falling short due to things like stripe unit/width
3edd6be92db7357 Darrick J. Wong   2020-05-20  426  	 * alignment of real extents.
51446f5ba44874d Christoph Hellwig 2016-09-19  427  	 */
3edd6be92db7357 Darrick J. Wong   2020-05-20  428  	plen = prev.br_blockcount;
3edd6be92db7357 Darrick J. Wong   2020-05-20  429  	while (xfs_iext_prev_extent(ifp, &ncur, &got)) {
3edd6be92db7357 Darrick J. Wong   2020-05-20  430  		if (plen > MAXEXTLEN / 2 ||
3edd6be92db7357 Darrick J. Wong   2020-05-20  431  		    isnullstartblock(got.br_startblock) ||
3edd6be92db7357 Darrick J. Wong   2020-05-20  432  		    got.br_startoff + got.br_blockcount != prev.br_startoff ||
3edd6be92db7357 Darrick J. Wong   2020-05-20  433  		    got.br_startblock + got.br_blockcount != prev.br_startblock)
3edd6be92db7357 Darrick J. Wong   2020-05-20  434  			break;
3edd6be92db7357 Darrick J. Wong   2020-05-20  435  		plen += got.br_blockcount;
3edd6be92db7357 Darrick J. Wong   2020-05-20  436  		prev = got;
3edd6be92db7357 Darrick J. Wong   2020-05-20  437  	}
3edd6be92db7357 Darrick J. Wong   2020-05-20 @438  	alloc_blocks = plen << 1;
3edd6be92db7357 Darrick J. Wong   2020-05-20  439  	if (alloc_blocks > MAXEXTLEN)
51446f5ba44874d Christoph Hellwig 2016-09-19  440  		alloc_blocks = XFS_B_TO_FSB(mp, offset);
3c58b5f809eda8a Brian Foster      2013-03-18  441  	if (!alloc_blocks)
3c58b5f809eda8a Brian Foster      2013-03-18  442  		goto check_writeio;
76a4202a388690e Brian Foster      2013-03-18  443  	qblocks = alloc_blocks;
055388a3188f566 Dave Chinner      2011-01-04  444  
c9bdbdc0741d909 Brian Foster      2013-03-18  445  	/*
c9bdbdc0741d909 Brian Foster      2013-03-18  446  	 * MAXEXTLEN is not a power of two value but we round the prealloc down
c9bdbdc0741d909 Brian Foster      2013-03-18  447  	 * to the nearest power of two value after throttling. To prevent the
c9bdbdc0741d909 Brian Foster      2013-03-18  448  	 * round down from unconditionally reducing the maximum supported prealloc
c9bdbdc0741d909 Brian Foster      2013-03-18  449  	 * size, we round up first, apply appropriate throttling, round down and
c9bdbdc0741d909 Brian Foster      2013-03-18  450  	 * cap the value to MAXEXTLEN.
c9bdbdc0741d909 Brian Foster      2013-03-18  451  	 */
c9bdbdc0741d909 Brian Foster      2013-03-18  452  	alloc_blocks = XFS_FILEOFF_MIN(roundup_pow_of_two(MAXEXTLEN),
c9bdbdc0741d909 Brian Foster      2013-03-18  453  				       alloc_blocks);
055388a3188f566 Dave Chinner      2011-01-04  454  
0d485ada404b361 Dave Chinner      2015-02-23  455  	freesp = percpu_counter_read_positive(&mp->m_fdblocks);
055388a3188f566 Dave Chinner      2011-01-04  456  	if (freesp < mp->m_low_space[XFS_LOWSP_5_PCNT]) {
055388a3188f566 Dave Chinner      2011-01-04  457  		shift = 2;
055388a3188f566 Dave Chinner      2011-01-04  458  		if (freesp < mp->m_low_space[XFS_LOWSP_4_PCNT])
055388a3188f566 Dave Chinner      2011-01-04  459  			shift++;
055388a3188f566 Dave Chinner      2011-01-04  460  		if (freesp < mp->m_low_space[XFS_LOWSP_3_PCNT])
055388a3188f566 Dave Chinner      2011-01-04  461  			shift++;
055388a3188f566 Dave Chinner      2011-01-04  462  		if (freesp < mp->m_low_space[XFS_LOWSP_2_PCNT])
055388a3188f566 Dave Chinner      2011-01-04  463  			shift++;
055388a3188f566 Dave Chinner      2011-01-04  464  		if (freesp < mp->m_low_space[XFS_LOWSP_1_PCNT])
055388a3188f566 Dave Chinner      2011-01-04  465  			shift++;
055388a3188f566 Dave Chinner      2011-01-04  466  	}
76a4202a388690e Brian Foster      2013-03-18  467  
76a4202a388690e Brian Foster      2013-03-18  468  	/*
f074051ff550f9f Brian Foster      2014-07-24  469  	 * Check each quota to cap the prealloc size, provide a shift value to
f074051ff550f9f Brian Foster      2014-07-24  470  	 * throttle with and adjust amount of available space.
76a4202a388690e Brian Foster      2013-03-18  471  	 */
76a4202a388690e Brian Foster      2013-03-18  472  	if (xfs_quota_need_throttle(ip, XFS_DQ_USER, alloc_blocks))
f074051ff550f9f Brian Foster      2014-07-24  473  		xfs_quota_calc_throttle(ip, XFS_DQ_USER, &qblocks, &qshift,
f074051ff550f9f Brian Foster      2014-07-24  474  					&freesp);
76a4202a388690e Brian Foster      2013-03-18  475  	if (xfs_quota_need_throttle(ip, XFS_DQ_GROUP, alloc_blocks))
f074051ff550f9f Brian Foster      2014-07-24  476  		xfs_quota_calc_throttle(ip, XFS_DQ_GROUP, &qblocks, &qshift,
f074051ff550f9f Brian Foster      2014-07-24  477  					&freesp);
76a4202a388690e Brian Foster      2013-03-18  478  	if (xfs_quota_need_throttle(ip, XFS_DQ_PROJ, alloc_blocks))
f074051ff550f9f Brian Foster      2014-07-24  479  		xfs_quota_calc_throttle(ip, XFS_DQ_PROJ, &qblocks, &qshift,
f074051ff550f9f Brian Foster      2014-07-24  480  					&freesp);
76a4202a388690e Brian Foster      2013-03-18  481  
76a4202a388690e Brian Foster      2013-03-18  482  	/*
76a4202a388690e Brian Foster      2013-03-18  483  	 * The final prealloc size is set to the minimum of free space available
76a4202a388690e Brian Foster      2013-03-18  484  	 * in each of the quotas and the overall filesystem.
76a4202a388690e Brian Foster      2013-03-18  485  	 *
76a4202a388690e Brian Foster      2013-03-18  486  	 * The shift throttle value is set to the maximum value as determined by
76a4202a388690e Brian Foster      2013-03-18  487  	 * the global low free space values and per-quota low free space values.
76a4202a388690e Brian Foster      2013-03-18  488  	 */
9bb54cb56ae8498 Dave Chinner      2018-06-07  489  	alloc_blocks = min(alloc_blocks, qblocks);
9bb54cb56ae8498 Dave Chinner      2018-06-07  490  	shift = max(shift, qshift);
76a4202a388690e Brian Foster      2013-03-18  491  
055388a3188f566 Dave Chinner      2011-01-04  492  	if (shift)
055388a3188f566 Dave Chinner      2011-01-04  493  		alloc_blocks >>= shift;
c9bdbdc0741d909 Brian Foster      2013-03-18  494  	/*
c9bdbdc0741d909 Brian Foster      2013-03-18  495  	 * rounddown_pow_of_two() returns an undefined result if we pass in
c9bdbdc0741d909 Brian Foster      2013-03-18  496  	 * alloc_blocks = 0.
c9bdbdc0741d909 Brian Foster      2013-03-18  497  	 */
c9bdbdc0741d909 Brian Foster      2013-03-18  498  	if (alloc_blocks)
c9bdbdc0741d909 Brian Foster      2013-03-18  499  		alloc_blocks = rounddown_pow_of_two(alloc_blocks);
c9bdbdc0741d909 Brian Foster      2013-03-18  500  	if (alloc_blocks > MAXEXTLEN)
c9bdbdc0741d909 Brian Foster      2013-03-18  501  		alloc_blocks = MAXEXTLEN;
4d559a3bcb7383f Dave Chinner      2013-01-21  502  
4d559a3bcb7383f Dave Chinner      2013-01-21  503  	/*
4d559a3bcb7383f Dave Chinner      2013-01-21  504  	 * If we are still trying to allocate more space than is
4d559a3bcb7383f Dave Chinner      2013-01-21  505  	 * available, squash the prealloc hard. This can happen if we
4d559a3bcb7383f Dave Chinner      2013-01-21  506  	 * have a large file on a small filesystem and the above
4d559a3bcb7383f Dave Chinner      2013-01-21  507  	 * lowspace thresholds are smaller than MAXEXTLEN.
4d559a3bcb7383f Dave Chinner      2013-01-21  508  	 */
e78c420bfc2608b Brian Foster      2013-02-22  509  	while (alloc_blocks && alloc_blocks >= freesp)
4d559a3bcb7383f Dave Chinner      2013-01-21  510  		alloc_blocks >>= 4;
3c58b5f809eda8a Brian Foster      2013-03-18  511  check_writeio:
5da8a07c79e8a1c Christoph Hellwig 2019-10-28  512  	if (alloc_blocks < mp->m_allocsize_blocks)
5da8a07c79e8a1c Christoph Hellwig 2019-10-28  513  		alloc_blocks = mp->m_allocsize_blocks;
19cb7e3854c9afe Brian Foster      2013-03-18  514  	trace_xfs_iomap_prealloc_size(ip, alloc_blocks, shift,
5da8a07c79e8a1c Christoph Hellwig 2019-10-28  515  				      mp->m_allocsize_blocks);
055388a3188f566 Dave Chinner      2011-01-04  516  	return alloc_blocks;
055388a3188f566 Dave Chinner      2011-01-04  517  }
055388a3188f566 Dave Chinner      2011-01-04  518  

---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/kbuild-...@lists.01.org
.config.gz (application/gzip attachment)
_______________________________________________
kbuild mailing list -- kbuild@lists.01.org
To unsubscribe send an email to kbuild-le...@lists.01.org