On Sun, Jul 7, 2013 at 5:01 PM, james <[email protected]> wrote:
> On 05/07/2013 23:05, Daniel Micay wrote:
>
>> On Fri, Jul 5, 2013 at 5:43 PM, james <[email protected]> wrote:
>>
>>> On 05/07/2013 08:37, Graydon Hoare wrote:
>>>
>>>> I agree that it's higher than it seems it "needs to be". But it will
>>>> always be unnecessary overhead on x64; it really makes no sense there.
>>>> The address space is enormous and it's all lazily committed.
>>>
>>> I don't think you can rely on 'lazily committed'.  Not on a system
>>> that is properly engineered anyway.
>>
>> You can rely on it, it's the standard behaviour on Linux. The actual
>> consumed memory will be equal to the size of the pages that have been
>> touched.
>
> And that was entirely my point.  As a Solaris programmer I thought the
> lazy assignment and overcommit was a stupid idea on AIX, and it's still
> a stupid idea on Linux.  There is no good reason for an operating system
> to fail a process for accessing memory that it has been told it can
> have, nor for a process to be terminated because some other process uses
> memory.  It's not well engineered; quite the opposite.  And it's
> particularly pointless given that space is so cheap.  It's not as if you
> have to actually wire up backing store, or even wire up page tables -
> but you do have to make sure that enough resources are available to do
> so later on demand without running out.  Preferably using swap that
> doesn't deadlock when you try to use it.  I don't think Linux is well
> engineered, and I don't think its development process (if that word
> actually applies) will ever fix its deficiencies.  Even if overcommit is
> standard on Linux, it's a really poor idea to assume that the whole
> world uses Linux now, and will continue to use it, as the only base for
> doing useful computing.
>

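The lazy commitment described above is easy to observe directly. Here's a
minimal sketch, assuming Linux with 4 KiB pages and the `libc` crate: it
reserves 1 GiB of address space with mmap but writes to only 16 pages, so
VmSize reflects the full reservation while VmRSS stays tiny.

    use std::ptr;

    fn main() {
        // Reserve 1 GiB of address space; nothing is committed yet.
        const LEN: usize = 1 << 30;
        let addr = unsafe {
            libc::mmap(
                ptr::null_mut(),
                LEN,
                libc::PROT_READ | libc::PROT_WRITE,
                libc::MAP_PRIVATE | libc::MAP_ANONYMOUS,
                -1,
                0,
            )
        };
        assert_ne!(addr, libc::MAP_FAILED);

        // Touch 16 pages: only ~64 KiB of the 1 GiB becomes resident
        // (assumes 4 KiB pages).
        let page = 4096;
        for i in 0..16 {
            unsafe { *(addr as *mut u8).add(i * page) = 1 };
        }

        // Compare reserved address space (VmSize) with resident memory
        // (VmRSS); this check is Linux-only.
        let status = std::fs::read_to_string("/proc/self/status").unwrap();
        for line in status.lines() {
            if line.starts_with("VmSize") || line.starts_with("VmRSS") {
                println!("{}", line);
            }
        }

        unsafe { libc::munmap(addr, LEN) };
    }

Whether the untouched pages are charged against a commit limit depends on
/proc/sys/vm/overcommit_memory; in the default heuristic mode a
reservation like this succeeds outright.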
I'm fine with taking a 20-500% performance hit on what you regard as
well-engineered operating systems. We shouldn't be taking an unnecessary
performance hit on the poorly engineered ones, where processes can
actually use large stacks, and data structures like hash tables and
vectors that depend on exponential reallocation, without wasting memory.
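
To sketch the reallocation point concretely (plain Rust, with a
Linux-only /proc check): a vector can reserve capacity for a hundred
million u64 values, roughly 800 MB of address space, yet only the pages
actually written become resident, so capacity doubling costs address
space rather than physical memory.

    fn main() {
        // Reserve ~800 MB of address space up front; on an
        // overcommitting kernel, none of it is resident yet.
        let mut v: Vec<u64> = Vec::with_capacity(100_000_000);

        // Write only 1000 elements: roughly 8 KiB is actually touched.
        for i in 0..1000 {
            v.push(i);
        }

        // Compare address-space size with resident memory (Linux-only).
        let status = std::fs::read_to_string("/proc/self/status").unwrap();
        for line in status.lines() {
            if line.starts_with("VmSize") || line.starts_with("VmRSS") {
                println!("{}", line);
            }
        }

        println!("len = {}, capacity = {}", v.len(), v.capacity());
    }

The same reasoning covers large stacks: reserving a big stack up front
costs only address space until frames are actually touched.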
_______________________________________________
Rust-dev mailing list
[email protected]
https://mail.mozilla.org/listinfo/rust-dev
