"Liam R. Howlett" <[email protected]> writes:

> [...snip...]
>
>> +static u64 kvm_gmem_get_attributes(struct inode *inode, pgoff_t index)
>> +{
>> +	struct maple_tree *mt = &GMEM_I(inode)->attributes;
>> +	void *entry = mtree_load(mt, index);
>> +
>> +	/*
>> +	 * The lock _must_ be held for lookups, as some maple tree operations,
>> +	 * e.g. append, are unsafe (return inaccurate information) with respect
>> +	 * to concurrent RCU-protected lookups.
>> +	 */
>
> Can you please elaborate how you see inaccurate information and which
> information is inaccurate?
>
> Your comment is incorrect and misleading as append will not be used in
> rcu mode.  Note that you have not set this tree up in rcu mode.
>
My bad, and thanks for clarifying the usage of RCU mode.

>> +	lockdep_assert(mt_lock_is_held(mt));
>> +

In the next revision I'll remove this lockdep assertion and set the tree
up in RCU mode, so kvm_gmem_get_memory_attributes() gets a stable result
without requiring the lock.

The other lookups, which use mt_for_each() in
kvm_gmem_range_has_attributes() and kvm_gmem_get_invalidate_filter(),
will retain the lockdep assertion since they iterate over multiple
ranges. Those are called from paths that must hold the lock to exclude
other operations anyway, so the lockdep requirement costs nothing extra.

>> +	return WARN_ON_ONCE(!entry) ? 0 : xa_to_value(entry);
>> +}
>> +
>>
>> [...snip...]
>>
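Concretely, I'm thinking of something along these lines for the next
revision (a sketch only, the exact shape may differ once I rebase):

```c
/*
 * Sketch, assuming the attributes tree is initialized in RCU mode at
 * creation time, e.g.:
 *
 *	mt_init_flags(&GMEM_I(inode)->attributes, MT_FLAGS_USE_RCU);
 *
 * mtree_load() takes rcu_read_lock() internally, so with the tree in
 * RCU mode the single-index lookup needs no external locking and the
 * lockdep assertion can simply be dropped:
 */
static u64 kvm_gmem_get_attributes(struct inode *inode, pgoff_t index)
{
	void *entry = mtree_load(&GMEM_I(inode)->attributes, index);

	return WARN_ON_ONCE(!entry) ? 0 : xa_to_value(entry);
}
```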
