I wasn't aware that Linus Torvalds possessed time-travel technology. Either way, to say that Linux, OS X and the Windows kernel are the same but with minuscule differences is a pretty broad statement.

On 8 Jul 2013 13:06, "Bennie Kloosteman" <[email protected]> wrote:
> "You think Linux is not well-engineered?"
>
> Nope. It's the same piece of 1970s crap that all the other popular OSes
> use, with trivial differences people make a big deal about. You really
> think the difference between Vista and Linux is the kernel, when you
> complain about X.org? XP, Vista and Windows 8 all basically have the same
> kernel; some OSes people loved, others they hated, but the difference is
> never the kernel. Load up a copy of MinWin and you can see.
>
> That said, it doesn't matter. The fact that runtimes like Rust create
> tasks, and services like nginx manage their own threads and memory (and
> even OSes do this with file systems), basically means the kernel just
> becomes a big device driver and the apps do what they like; but the
> kernel really should have been doing those things long ago. Instead it
> was optimized for the almost-batch-processing applications designed in
> the 70s, where throughput was more important than latency. Lazily
> committed RAM is very useful for batch processing.
>
> That said, I fired up uClinux for a project a few years ago (with no VM
> kernel support on x86_64, so no copy-on-write for fork and no lazy
> allocation either), ran some heavy-load, web-service-like tests, and it
> was so much faster, despite uClinux having many older algorithms and
> techniques.
>
> Ben
>
> On Mon, Jul 8, 2013 at 5:21 AM, Josh Leverette <[email protected]> wrote:
>
>> You think Linux is not well-engineered? That statement just took the
>> wind out of your sails. There are components that run *on top of* Linux
>> (and similar Unix-like systems) that are poorly engineered, X.org chief
>> among them, but that doesn't make the Linux kernel poorly engineered.
>> Making intangible claims about Linux being both poorly designed and
>> transient is a quick way to make your opinions fall into disregard. This
>> is not the place for such a discussion, and I've never seen any
>> substantive evidence to back up that train of thought.
>> In fact, I don't think anyone has seen substantive evidence to back up
>> that train of thought.
>>
>> Lazily committed RAM is useful, and the logic behind "resource X is
>> plentiful, therefore use it liberally" is the same logic that resulted
>> in something we now know as Windows Vista. Hey, RAM is getting so big
>> these days, why not use more of it!
>>
>> There are edge cases that can cause problems with lazy commit, but do
>> you really believe there aren't worse problems with a non-lazy-commit
>> system? The grass isn't greener on the other side, unless you live in an
>> edge case... and that's a strange place to live.
>>
>> On Sun, Jul 7, 2013 at 4:01 PM, james <[email protected]> wrote:
>>
>>> On 05/07/2013 23:05, Daniel Micay wrote:
>>>
>>>> On Fri, Jul 5, 2013 at 5:43 PM, james <[email protected]> wrote:
>>>>
>>>>> On 05/07/2013 08:37, Graydon Hoare wrote:
>>>>>
>>>>>> I agree that it's higher than it seems it "needs to be". But it will
>>>>>> always be unnecessary overhead on x64; it really makes no sense
>>>>>> there. The address space is enormous and it's all lazily committed.
>>>>>
>>>>> I don't think you can rely on 'lazily committed'. Not on a system
>>>>> that is properly engineered anyway.
>>>>
>>>> You can rely on it, it's the standard behaviour on Linux. The actual
>>>> consumed memory will be equal to the size of the pages that have been
>>>> touched.
>>>
>>> And that was entirely my point. As a Solaris programmer I thought the
>>> lazy assignment and overcommit was a stupid idea on AIX, and it's still
>>> a stupid idea on Linux. There is no good reason for an operating system
>>> to fail a process for accessing memory that it has been told it can
>>> have, nor for a process to be terminated because some other process
>>> uses memory. It's not well-engineered; quite the opposite. And it's
>>> particularly pointless given that space is so cheap.
>>> It's not as if you have to actually wire up backing store, or even
>>> wire up page tables, but you do have to make sure that enough resources
>>> are available so that you can do so later on demand and won't run out.
>>> Preferably using swap that doesn't deadlock when you try to use it. I
>>> don't think Linux is well engineered, and I don't think the development
>>> process (if that word actually applies) will ever fix its deficiencies.
>>> Even if it is standard on Linux, it's a really poor idea to assume that
>>> the whole world does now, and will continue to, use Linux as the only
>>> base for doing useful computing.
>>
>> --
>> Sincerely,
>> Josh
_______________________________________________
Rust-dev mailing list
[email protected]
https://mail.mozilla.org/listinfo/rust-dev
