I know that, but as you can read in my very first email, I was planning on
running I threads, with I = number of cores, where each thread has one
event loop. My question has nothing to do with the threads-vs-events
debate: Marc is claiming that running I *processes* instead of I threads
is faster thanks to MMU effects, and I'm asking for clarification.

Sent from my Android phone.
On 22 Dec 2011 14:44, "Brandon Black" <[email protected]> wrote:

>
> On Thu, Dec 22, 2011 at 1:05 AM, Hongli Lai <[email protected]> wrote:
>
>> 2. Suppose the system has two cores and N = 4, so two processes or two
>> threads will be scheduled on a single core. A context switch to
>> another thread on the same core should be cheaper because 1) the MMU
>> register does not have to be swapped and 2) no existing TLB entries have
>> to be invalidated.
>>
>>
> In the general case for real-world software, running N heavily loaded
> threads on one CPU core is going to be less efficient than using a single
> thread/process with a non-blocking event loop on that one CPU core.  That's
> the point of the (fairly well settled, IMHO) debate between
> many-blocking-threads-per-core and one-eventloop-thread-per-core.  So your
> point (2) doesn't really fall in favor of threads because you're talking
> about a scenario they're not optimal for to begin with.  The debate here is
> really about one process per core versus one thread per core (and the fine
> details of the differences between processes and threads, and how
> scheduling them is more or less efficient on a given CPU and OS given
> roughly nprocs == nthreads == ncores).
>
> -- Brandon
>
_______________________________________________
libev mailing list
[email protected]
http://lists.schmorp.de/cgi-bin/mailman/listinfo/libev
