Hi Chao,

On Fri, Mar 19, 2021 at 10:15:18AM +0800, Chao Yu wrote:
> On 2021/3/6 12:04, Gao Xiang wrote:
...
> > +		    (*last_block + 1 != current_block || !*eblks)) {
>
> Xiang,
>
> I found the below function while checking bi_max_vecs usage in f2fs:
>
> /**
>  * bio_full - check if the bio is full
>  * @bio: bio to check
>  * @len: length of one segment to be added
>  *
>  * Return true if @bio is full and one segment with @len bytes can't be
>  * added to the bio, otherwise return false
>  */
> static inline bool bio_full(struct bio *bio, unsigned len)
> {
> 	if (bio->bi_vcnt >= bio->bi_max_vecs)
> 		return true;
>
> 	if (bio->bi_iter.bi_size > UINT_MAX - len)
> 		return true;
>
> 	return false;
> }
>
> Could you please check whether it would be better to use bio_full()
> rather than the left-space-in-bio counter maintained by erofs itself?
> Something like:
>
> 	if (bio && (bio_full(bio, PAGE_SIZE) ||
> 	    /* not continuous */
> 	    *last_block + 1 != current_block))
>
> I'm thinking we need to decouple from bio implementation details as
> much as possible, to avoid regressions whenever the bio used/max size
> definition is updated, though I have no idea how to fix the f2fs case.

Thanks for your suggestion. I'm not quite sure I understand the idea...

The original problem was that when EROFS called bio_alloc(), the number
of requested bvecs also partially stood for the remaining blocks of the
current on-disk extent, to limit the read length. But after the recent
bio behavior change, bi_max_vecs can be increased internally by the
block layer (e.g. 1 --> 4), so bi_max_vecs is no longer what we expect
(I mean, the passed-in value), which could cause an out-of-bound read
request or a hang. That's why I decided to record the remaining block
count manually (and never rely on bio statistics anymore...)

Also, btw, AFAIK, Jianan is still investigating using iomap instead
(mainly to resolve the tail-packing inline path). And I'm also busy
with the big pcluster and LZMA features for the next cycle. So I think
we might leave it as-is for now, and it would be replaced with iomap in
the future.

Thanks,
Gao Xiang

>
> Let me know if you have any other concerns.
>
> Thanks,