On 04/03/2018 05:59 PM, Andrey Konovalov wrote:
>>
>>
>>> void check_memory_region(unsigned long addr, size_t size, bool write,
>>> unsigned long ret_ip)
>>> {
>>> + u8 tag;
>>> + u8 *shadow_first, *shadow_last, *shadow;
>>> + void *untagged_addr;
>>> +
>>> + tag = get_tag((const void *)addr);
>>> +
>>> + /* Ignore accesses for pointers tagged with 0xff (native kernel
>>> + * pointer tag) to suppress false positives caused by kmap.
>>> + *
>>> + * Some kernel code was written to account for archs that don't keep
>>> + * high memory mapped all the time, but rather map and unmap
>>> + * particular pages when needed. Instead of storing a pointer to the
>>> + * kernel memory, this code saves the address of the page structure
>>> + * and offset within that page for later use. Those pages are then
>>> + * mapped and unmapped with kmap/kunmap when necessary and
>>> + * virt_to_page is used to get the virtual address of the page. For
>>> + * arm64 (that keeps the high memory mapped all the time), kmap is
>>> + * turned into a page_address call.
>>> + *
>>> + * The issue is that with use of the page_address + virt_to_page
>>> + * sequence the top byte value of the original pointer gets lost
>>> + * (gets set to 0xff).
>>> + */
>>> + if (tag == 0xff)
>>> + return;
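
For illustration, the round trip the comment describes looks roughly like
this (struct saved_ref, save_for_later() and use_later() are made-up names
for this sketch, not actual kernel interfaces):

        #include <linux/highmem.h>
        #include <linux/mm.h>

        struct saved_ref {
                struct page *page;
                unsigned long offset;
        };

        /* Producer: remembers page + offset instead of the pointer itself. */
        static void save_for_later(struct saved_ref *ref, void *obj)
        {
                /* struct page has no room for the pointer's top-byte tag,
                 * so the tag assigned by the allocator is dropped here. */
                ref->page = virt_to_page(obj);
                ref->offset = offset_in_page(obj);
        }

        /* Consumer: rebuilds the pointer later. */
        static void *use_later(struct saved_ref *ref)
        {
                /* On arm64 kmap() ends up as page_address(), which hands
                 * back a pointer with the native 0xff tag, not the
                 * original one. */
                return kmap(ref->page) + ref->offset;
        }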
>>
>> You can save the tag somewhere in the page struct and make page_address()
>> return a tagged address.
>>
>> I'm not sure, but it might even be possible to squeeze the tag into
>> page->flags on some configurations; see include/linux/page-flags-layout.h.
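
A rough sketch of what that could look like, assuming a free byte exists in
page->flags on the given config (page_tag(), page_set_tag(), set_tag() and
the PAGE_TAG_SHIFT placement are all hypothetical here, subject to the
page-flags-layout.h caveat above):

        #include <linux/mm.h>
        #include <linux/page-flags.h>

        /* Hand-waved placement: 8 bits above the flag bits; whether these
         * bits are actually free depends on the configuration. */
        #define PAGE_TAG_SHIFT  __NR_PAGEFLAGS
        #define PAGE_TAG_MASK   (0xffUL << PAGE_TAG_SHIFT)

        static inline u8 page_tag(const struct page *page)
        {
                return (page->flags & PAGE_TAG_MASK) >> PAGE_TAG_SHIFT;
        }

        static inline void page_set_tag(struct page *page, u8 tag)
        {
                page->flags = (page->flags & ~PAGE_TAG_MASK) |
                              ((unsigned long)tag << PAGE_TAG_SHIFT);
        }

        /* Install a tag into bits 63:56 of a kernel pointer (TBI). */
        static inline void *set_tag(const void *addr, u8 tag)
        {
                u64 a = (u64)addr & ~(0xffUL << 56);

                return (void *)(a | ((u64)tag << 56));
        }

        /* page_address() would then re-apply the saved tag, e.g.: */
        #define tagged_page_address(page) \
                set_tag(lowmem_page_address(page), page_tag(page))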
>
> One page can contain multiple objects with different tags, so we would
> need to save the tag for each of them.
What do you mean? Slab page? The per-page tag is needed only for
!PageSlab pages. For slab pages we have kmalloc/kmem_cache_alloc(),
which already return a properly tagged address.
But the page allocator returns a pointer to struct page; one has to call
page_address(page) to use that page. Returning an 'ignore-me'-tagged
address from page_address() makes this whole class of bugs invisible to
KHWASAN. That is a serious downside compared to classic KASAN, which can
detect misuses of the page allocator API.
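
Concretely, this is the kind of bug that becomes invisible once
page_address() hands back a 0xff-tagged pointer (an illustrative snippet,
not code from the patch):

        #include <linux/gfp.h>
        #include <linux/mm.h>

        void example(void)
        {
                struct page *page = alloc_pages(GFP_KERNEL, 0);
                u8 *p = page_address(page);

                __free_pages(page, 0);

                /* Use-after-free through a page-allocator pointer: classic
                 * KASAN catches this via the shadow poisoned on free, but
                 * KHWASAN skips the check entirely because 'p' carries the
                 * native 0xff tag. */
                p[0] = 0xaa;
        }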
>>
>>
>>> void *kasan_slab_alloc(struct kmem_cache *cache, void *object, gfp_t flags)
>>> {
>>> + if (!READ_ONCE(khwasan_enabled))
>>> + return object;
>>
>> ...
>>
>>> void *kasan_kmalloc(struct kmem_cache *cache, const void *object,
>>> size_t size, gfp_t flags)
>>> {
>>
>>> + if (!READ_ONCE(khwasan_enabled))
>>> + return (void *)object;
>>> +
>>
>> ...
>>
>>> void *kasan_kmalloc_large(const void *ptr, size_t size, gfp_t flags)
>>> {
>>
>> ...
>>
>>> +
>>> + if (!READ_ONCE(khwasan_enabled))
>>> + return (void *)ptr;
>>> +
>>
>> I don't see any possible way of khwasan_enabled being 0 here.
>
> Can't kmem_cache_alloc be called for the temporary caches that are
> used before the slab allocator and kasan are initialized?
kasan_init() runs before the allocators are initialized.
The slab allocator obviously has to be initialized before it can be used.
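
For reference, the boot ordering in question (function names paraphrased
from the series; exactly where khwasan_enabled gets set to 1 is up to the
patches, this is just the rough sequence on arm64):

        start_kernel()
          setup_arch()
            kasan_init()        /* shadow mapped; khwasan_enabled set to 1 */
          mm_init()
            kmem_cache_init()   /* slab allocator becomes usable only here */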