On Mon, Jan 2, 2012 at 4:49 PM, Colin McCabe <[email protected]> wrote:
>
> The problem is that there's no way for the programmer to distinguish
> "the data that really needs to be shared" from the data that shouldn't
> be shared between threads.  Even in C/C++, all you can do is insert
> padding and hope for the best.  And you'll never know how much to
> insert, because it's architecture specific.
>
>
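
Right, and even then the line size is only a guess.  With gcc the usual
trick is something like this sketch (64 is common on x86, but some
PPC/ARM parts use 128, so the constant is still architecture-specific):

/* Guessed cache line size -- there is no portable way to get this
 * at compile time, which is exactly the problem. */
#define CACHE_LINE_GUESS 64

struct counters {
    unsigned long a_count __attribute__((aligned(CACHE_LINE_GUESS)));
    unsigned long b_count __attribute__((aligned(CACHE_LINE_GUESS)));
};
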
For large chunks of data anyway, you can just go directly to mmap() for
memory and know it's on a completely different page from other
allocations.  Doesn't solve all cases, of course.
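
Untested sketch of the mmap() route, Linux-flavored (the flag is
spelled MAP_ANON on some BSDs):

#include <stddef.h>
#include <sys/mman.h>

/* Grab len bytes straight from the kernel.  The mapping starts on
 * its own page boundary and shares no page with malloc's heap, so
 * no unrelated allocation can land on the same cache line. */
static void *alloc_pages(size_t len)
{
    void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    return p == MAP_FAILED ? NULL : p;
}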


> malloc doesn't allow you to specify which threads will be accessing
> the data.  It's quite possible that the memory you get back from
> malloc will be on the same cache line as another allocation that a
> different thread got back.  malloc implementations that use per-thread
> storage, like tcmalloc, can help a little bit here.  But even then,
> some allocations will be accessed by multiple threads, and they may
> experience false sharing with allocations that are intended to be used
> by only one thread.


This one is a really sore point for sure.  I really wish there were a
standard malloc interface where threads could allocate from pools that
nominally belong to other threads.  Sometimes you find yourself in a
situation where one thread has to do the allocating, but you know in
advance that the memory will "belong" to a different, specific thread
for most of its life.  Being able to hint these (and other related)
conditions to something like tcmalloc would be nice.
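
Purely hypothetical, but the hint I have in mind would look roughly
like the stub below -- malloc_for_thread() doesn't exist in tcmalloc
or any other allocator I know of:

#include <pthread.h>
#include <stdlib.h>

/* HYPOTHETICAL: no real allocator exposes this.  A real version
 * would hand back memory from the pool nominally owned by 'owner';
 * this stub just falls back to plain malloc so the sketch compiles. */
static void *malloc_for_thread(size_t size, pthread_t owner)
{
    (void)owner;    /* the ownership hint a real allocator would use */
    return malloc(size);
}

struct work { int op; void *payload; };

/* Producer side: allocate a work item out of the consumer's pool,
 * so the consumer gets locality and cheap frees later on. */
static struct work *make_work(pthread_t consumer)
{
    return malloc_for_thread(sizeof(struct work), consumer);
}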


> Threads do have an advantage in that you won't be loading the ELF
> binary twice, which will save some memory.  However, even when using
> multiple processes, shared libraries will only be loaded once.
>

Usually even with procs, the read-only text segments of the ELF
binaries should be shared as well, right?

-- Brandon
_______________________________________________
libev mailing list
[email protected]
http://lists.schmorp.de/cgi-bin/mailman/listinfo/libev
