On 27/05/2011 5:07 AM, Rafael Ávila de Espíndola wrote:
>> They weren't approaching the same problem we are. They were just trying
>> to make "conventional threads go fast"; I agree they probably wound up
>> at the right place with 1:1 always (though weirdly, their IO model was
>> part of what forced many OS vendors to *make* 1:1 go fast). If we
>> weren't trying to make tasks a ubiquitous isolation (and IO
>> multiplexing) concept, I'd probably agree with you here. But I think we
>> are pursuing different goals.

> I know I have lost this, but I think I should at least make a historical
> correction. NPTL walked all over linuxthreads the day it came out. And
> that was with software that had been coded to use linuxthreads.

I never meant to suggest otherwise; I was there for it too (at RHAT when it was designed). I only meant to say that the Java "teach everyone to use thread-per-IO-request" model was part of the pressure (from customers) to make NPTL in the first place, and make it go so fast.

(That and, of course, the fact that linuxthreads had all manner of semantic bugs and limitations, and didn't actually implement posix threads correctly.)

Read the original NPTL proposal if you don't believe me:

--snip--

    Software Scalability

        Another use of threads is to solve sub-problems of the user
        application in separate execution contexts. In Java
        environments threads are used to implement the programming
        environment due to missing asynchronous operations. The result
        is the same: enormous amounts of threads can be created. The
        new implementation should ideally have no fixed limits on the
        number of threads or any other object.

--snip--

> In the same way that NPTL != linuxthreads with m=n, implementing a 1:1
> mapping will not be that trivial for us either. In fact, it will
> probably be more work, as gcc didn't have to change at all for the NPTL
> switch.

The situations are not analogous. I realize that the words "M:N" strike fear and loathing into the hearts of anyone who was around for that painful fight (decades long, every OS in the world involved). I respect that reaction. I'm not suggesting the conclusion reached during that struggle was at all wrong. When you are talking about two ways of implementing the same semantics (posix threads as used by C) then 1:1 is faster than M:N. This is demonstrable.

But we are not implementing posix threads. Our tasks have different semantics. I'm not suggesting M:N instead of 1:1; it's more that I'm suggesting 1:1:N, where there's 1 kernel thread to 1 user-space thread, but N things running on that thread that are semantically different entities than C-style threads. Tasks respect our language rules, not posix thread semantics, specifically with respect to inter-task IO and cancellation, which are central aspects of their operation.
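To make the 1:1:N picture concrete, here's a minimal sketch (in modern Rust, not anything the 2011 runtime actually did) of N cooperative tasks multiplexed on one OS thread, where cancellation is honoured only at scheduling points. The names (Task, Scheduler) and the round-robin policy are illustrative assumptions, not a real design:

```rust
// Hypothetical sketch of "N tasks on 1 thread": a single OS thread runs a
// round-robin loop over cooperative tasks, and cancellation takes effect
// only at scheduling points, per language rules rather than via
// pthread_cancel. All names here are illustrative only.

use std::collections::VecDeque;

// A task is a resumable unit of work: each call to `step` runs it up to
// its next yield point, returning true once it has finished.
struct Task {
    id: usize,
    step: Box<dyn FnMut() -> bool>,
    cancelled: bool,
}

struct Scheduler {
    run_queue: VecDeque<Task>,
}

impl Scheduler {
    fn new() -> Self {
        Scheduler { run_queue: VecDeque::new() }
    }

    fn spawn(&mut self, id: usize, step: Box<dyn FnMut() -> bool>) {
        self.run_queue.push_back(Task { id, step, cancelled: false });
    }

    // Cancellation just sets a flag; the task is reaped at its next
    // scheduling point rather than being killed mid-operation.
    fn cancel(&mut self, id: usize) {
        for task in &mut self.run_queue {
            if task.id == id {
                task.cancelled = true;
            }
        }
    }

    // The scheduler loop is the extra logic that the one kernel thread
    // runs between tasks; a bare pthread gives you none of this.
    fn run(&mut self) -> Vec<String> {
        let mut events = Vec::new();
        while let Some(mut task) = self.run_queue.pop_front() {
            if task.cancelled {
                events.push(format!("task {} cancelled", task.id));
                continue;
            }
            if (task.step)() {
                events.push(format!("task {} finished", task.id));
            } else {
                self.run_queue.push_back(task); // yield: back of the queue
            }
        }
        events
    }
}

fn main() {
    let mut sched = Scheduler::new();
    for id in 0..3 {
        let mut remaining = 2 + id; // each task needs a few time slices
        sched.spawn(id, Box::new(move || {
            remaining -= 1;
            remaining == 0
        }));
    }
    sched.cancel(2); // honoured at task 2's next scheduling point
    for event in sched.run() {
        println!("{}", event);
    }
}
```

The point of the sketch is the `run` loop: the tasks are not threads, and the logic that enforces their semantics (cancellation checks, requeueing at yield points) lives in code the thread jumps into between tasks.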

The situation is more analogous to that other fine library, GCD. In GCD, a block is not a thread; the fact that blocks run on threads is not an argument to assign 1 block to 1 thread. Your argument is like saying "M:N is slow for posix threads, so we must use 1 thread per block". You could -- and might well do so in environments where threads are cheap enough -- but the 'block' abstraction is different from the concept of a thread, runs by different rules, so you'll have to have a certain amount of abstraction-management code that the thread jumps into at defined lifecycle points anyway. If you made it "just a thread" you'd be erasing the block as a separate concept. Kernel threads don't run that logic for you; it's not part of the thread abstraction.
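The GCD analogy can be sketched the same way: many blocks, one worker thread, and queue-management code running between them. This is a hedged illustration in Rust, not GCD's real API; the names (SerialQueue, dispatch) are assumptions made for the example:

```rust
// Hypothetical sketch of the block/thread split: many "blocks" submitted
// to one serial queue, executed in order by a single worker thread. The
// names (SerialQueue, dispatch) are illustrative, not GCD's actual API.

use std::sync::mpsc;
use std::thread;

type Block = Box<dyn FnOnce() + Send>;

struct SerialQueue {
    tx: mpsc::Sender<Block>,
    worker: thread::JoinHandle<()>,
}

impl SerialQueue {
    fn new() -> Self {
        let (tx, rx) = mpsc::channel::<Block>();
        // One thread, many blocks. The loop below is the queue-management
        // code the thread runs between blocks, exactly the logic you would
        // lose if each block were "just a thread".
        let worker = thread::spawn(move || {
            for block in rx {
                block(); // a defined lifecycle point: one block at a time
            }
        });
        SerialQueue { tx, worker }
    }

    fn dispatch(&self, block: Block) {
        self.tx.send(block).expect("worker thread has exited");
    }

    fn shutdown(self) {
        let SerialQueue { tx, worker } = self;
        drop(tx); // closing the channel ends the worker's loop
        worker.join().unwrap();
    }
}

fn main() {
    let q = SerialQueue::new();
    let (done_tx, done_rx) = mpsc::channel();
    for i in 0..3 {
        let done = done_tx.clone();
        // Each closure is a "block": a unit of work, not a thread.
        q.dispatch(Box::new(move || {
            done.send(i).unwrap();
        }));
    }
    drop(done_tx);
    q.shutdown();
    // A serial queue preserves submission order even though the blocks
    // never had threads of their own.
    let ran: Vec<i32> = done_rx.iter().collect();
    println!("{:?}", ran);
}
```

Here the blocks share one thread and never see thread identity at all; nothing about the design changes if the queue is later backed by a pool of threads, which is the sense in which the block abstraction is separate from the thread abstraction.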

Is that clearer?

-Graydon
_______________________________________________
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev