Comment #7 on issue 348 by [email protected]: asan does not always
detect access beyond mmaped page
https://code.google.com/p/address-sanitizer/issues/detail?id=348
Thanks for commenting. I understand about priorities etc. Just a few notes
about limits and other points:
I've measured my running firefox:
$ cat /proc/`pidof iceweasel`/maps | wc -l
852
For ~850 mappings, adding 2 red-zone pages each would be 1700 pages = < 7MB
of address space and 1700 additional VMAs. I think that is rather low on
64-bit, given that it is only address space, not memory. It also fits
comfortably within the default per-process VMA limit.
Even for all running processes _combined_ on my machine (x86_64, Debian
testing, a usual desktop), if such a combined process could run:
# cat /proc/*/maps | wc -l
15053
that would result in only < 120MB of address space for red-zone pages and
~30000 additional VMAs, which to me still counts as low overhead on 64-bit
and also fits within the default per-process VMA limit.
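As a back-of-the-envelope check, the arithmetic behind those numbers looks
like this (assuming 4 KiB pages; the mapping count is the one measured
above, the calculation itself is just illustrative):

```shell
# Rough overhead estimate for two PROT_NONE red-zone pages per mapping,
# assuming 4 KiB pages and the 852 mappings observed for iceweasel above.
maps=852
pages=$((maps * 2))              # one red-zone page on each side
bytes=$((pages * 4096))
echo "$pages extra pages, $((bytes / 1024 / 1024)) MiB of address space"
```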
So if a user writes a special program that creates a lot of mappings,
he/she should probably already be aware of the
/proc/sys/vm/max_map_count
knob, and _that_ is the knob to tune in order to run such a program with
the additional mmap sanitization.
So in ASAN I'd do it unconditionally.
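For reference, checking and raising that knob looks like this (the value
262144 is just an illustrative choice, not a recommendation):

```shell
# Read the current per-process mapping limit (Linux).
cat /proc/sys/vm/max_map_count
# Raise it system-wide (root required); the value here is illustrative.
sysctl -w vm.max_map_count=262144
```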
~~~~
> IMO we'd better not change the behavior of mmap(), because there's a
> number of corner cases when the users don't want that

Could you please provide an example?
To me it looks like if the mmap call was made without MAP_FIXED, the kernel
is free to place the mapping anywhere, so discovering corner cases where
clients incorrectly rely on placement would only be a good thing.
And if the original mmap was called with MAP_FIXED, just don't add a red
zone around it.
> If we do that solely for libc (i.e. if __NR_mmap remains unintercepted)
> we'll have all sorts of mixed {mmap,munmap}_{with,without}_redzone()
> calls for the same pages that'll end up corrupting something.
On munmap, look up which region the range belongs to; if the original
region (which could later have been changed by a MAP_FIXED mapping into it,
but anyway) was originally mmapped with a red zone, and we are unmapping a
part adjacent to the red zone (on either side), just adjust the red zone
accordingly.
And if the kernel's __NR_munmap is called directly with correct args,
bypassing the interceptor, it will just unmap some region inside the
original larger red-zoned mapping, so the red zones will stay mapped; since
they are PROT_NONE, they will still catch later incorrect read/write
accesses to them. The only problem here is that the red-zone mappings leak
when munmap is called directly.
Wouldn't that work?