You think Linux is not well-engineered? That claim undercuts the rest of your argument. There are components that run *on top of* Linux (and similar Unix-like systems) that are poorly engineered, X.org chief among them, but that doesn't make the Linux kernel poorly engineered. Making unsupported claims that Linux is both poorly designed and transient is a quick way to have your opinions disregarded. This is not the place for such a discussion, and I've never seen any substantive evidence to back up that train of thought; I don't think anyone has.
Lazily committed RAM is useful, and the logic behind "resource X is plentiful, therefore use it liberally" is the same logic that gave us Windows Vista. Hey, RAM is getting so big these days, why not use more of it! There are edge cases that can cause problems with lazy commit, but do you really believe there aren't worse problems with a non-lazy-commit system? The grass isn't greener on the other side, unless you live in an edge case... and that's a strange place to live.

On Sun, Jul 7, 2013 at 4:01 PM, james <[email protected]> wrote:

> On 05/07/2013 23:05, Daniel Micay wrote:
>> On Fri, Jul 5, 2013 at 5:43 PM, james <[email protected]> wrote:
>>> On 05/07/2013 08:37, Graydon Hoare wrote:
>>>> I agree that it's higher than it seems it "needs to be". But it will
>>>> always be unnecessary overhead on x64; it really makes no sense there.
>>>> The address space is enormous and it's all lazily committed.
>>>
>>> I don't think you can rely on 'lazily committed'. Not on a system that
>>> is properly engineered anyway.
>>
>> You can rely on it, it's the standard behaviour on Linux. The actual
>> consumed memory will be equal to the size of the pages that have been
>> touched.
>
> And that was entirely my point. As a Solaris programmer I thought the
> lazy assignment and overcommit was a stupid idea on AIX, and it's still a
> stupid idea on Linux. There is no good reason for an operating system to
> fail a process for accessing memory that it has been told it can have,
> nor for a process to be terminated because some other process uses
> memory. It's not well-engineered, quite the opposite. And it's
> particularly pointless given that space is so cheap. It's not as if you
> have to actually wire up backing store, or even wire up page tables - but
> you do have to make sure that enough resources are available to do so
> later on demand without running out. Preferably using swap that doesn't
> deadlock when you try to use it. I don't think Linux is well engineered,
> and I don't think the development process (if that word actually applies)
> will ever fix its deficiencies. Even if it is standard on Linux, it's a
> really poor idea to assume that the whole world does now, and will
> continue to, use Linux as the only base for doing useful computing.
>
> _______________________________________________
> Rust-dev mailing list
> [email protected]
> https://mail.mozilla.org/listinfo/rust-dev
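For anyone who wants to see what "lazily committed" means in practice, here is a minimal sketch. It is not from the original thread; it assumes a 64-bit Linux box with the default overcommit policy and the Linux-specific /proc/self/statm file. It reserves 8 GiB of anonymous address space with mmap and then writes to only 16 pages; the resident set grows only by the pages actually touched, which is the behaviour Daniel describes above.

/* Sketch: reserve far more address space than RAM, touch a few pages,
 * and watch resident memory stay tiny. Linux-specific. */
#define _DEFAULT_SOURCE
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

static void print_rss(const char *label)
{
    long size, resident;
    FILE *f = fopen("/proc/self/statm", "r");   /* fields are in pages */
    if (f && fscanf(f, "%ld %ld", &size, &resident) == 2)
        printf("%s: resident = %ld pages\n", label, resident);
    if (f)
        fclose(f);
}

int main(void)
{
    size_t reserve = 8UL << 30;                 /* 8 GiB of address space */
    size_t page = (size_t)sysconf(_SC_PAGESIZE);

    /* Reserve address space; no physical pages are committed yet. */
    unsigned char *p = mmap(NULL, reserve, PROT_READ | PROT_WRITE,
                            MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED) {
        perror("mmap");
        return 1;
    }

    print_rss("after mmap (nothing touched)");

    /* Touch 16 pages out of roughly two million: only these become resident. */
    for (size_t i = 0; i < 16; i++)
        p[i * page] = 1;

    print_rss("after touching 16 pages");

    munmap(p, reserve);
    return 0;
}

(And for what it's worth, this behaviour is tunable on a stock kernel: setting the vm.overcommit_memory sysctl to 2 switches Linux to strict commit accounting, which refuses allocations it cannot back with swap plus a configurable share of RAM.)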
--
Sincerely,
Josh
