Le 25/02/2020 à 19:18, Maxime Villard a écrit :
> Le 23/02/2020 à 23:19, Andrew Doran a écrit :
>> On Fri, Feb 21, 2020 at 02:14:31PM +0100, Kamil Rytarowski wrote:
>>> On 22.12.2019 20:47, Andrew Doran wrote:
>>>> Module Name:    src
>>>> Committed By:    ad
>>>> Date:        Sun Dec 22 19:47:35 UTC 2019
>>>> Modified Files:
>>>>     src/external/cddl/osnet/dist/uts/common/fs/zfs: zfs_ctldir.c
>>>>     src/sys/kern: vfs_mount.c vfs_subr.c vfs_syscalls.c
>>>>     src/sys/miscfs/genfs: genfs_vfsops.c
>>>>     src/sys/nfs: nfs_export.c
>>>>     src/sys/sys: mount.h vnode.h vnode_impl.h
>>>>     src/sys/ufs/lfs: ulfs_vfsops.c
>>>>     src/sys/ufs/ufs: ufs_vfsops.c ufs_wapbl.c
>>>> Log Message:
>>>> Make mntvnode_lock per-mount, and address false sharing of struct mount.
>>> This change broke kUBSan syzbot.
>>> The sanitizer is now very noisy as struct mount requires 64 byte alignment.
>>> http://netbsd.org/~kamil/kubsan/mount-alignment.txt
>> I had a look this weekend.  This is down to KMEM_SIZE messing up the
>> alignment, so is a DIAGNOSTIC thing.  The align_offset parameter to
>> pool_cache() would be a nice easy way to solve this but it seems someone
>> killed that off, so I'll need to give this some more thought.
>> Andrew
> kmem guarantees 8-byte alignment, but no more. Changing the backend
> allocators to enforce stricter alignment may still not yield aligned buffers,
> because kmem is allowed to modify the buffers for debugging purposes, as
> with KMEM_SIZE.
> If you want a buffer aligned to a specific value, don't use kmem; use a
> pool(_cache) with COHERENCY_UNIT in "align" instead.
> If the goal is to have kmem really enforce COHERENCY_UNIT alignment, then this
> should be documented and the debugging features should be adapted to respect
> that constraint.
> "align_offset" got removed because it increased complexity in subr_pool for
> no reason (two users in all of the kernel, one was actually a bug).

Can we revert the "__aligned(COHERENCY_UNIT)" for now? There is no particular
hurry to fix the underlying bug, but the KUBSAN instance has been down for more
than two months because of this, and that needs to be addressed.

Similarly, the KASAN instance is currently crashing hard on:
Tens of thousands of times each day. This has been the case for two weeks,
and it too needs to be addressed.

