On 12/06/2011 02:02 PM, Terrence J. Doyle wrote:
Michal Hlavinka wrote:
On 12/05/2011 11:13 PM, Glenn Fowler wrote:

thanks for the report

this is an artifact of the underlying sfio buffered io library

the sfgetr() function that reads one record used to allow unlimited
line length
that worked fine until someone used it on a > 1TiB file that
contained no newline
and it proceeded to choke the system trying to allocate a > 1TiB buffer

we then added a firm limit (modifiable, but not via an sf*() function)
that was too small: 256*1024 = 256KiB for 64 bit systems
currently the default limit for 64 bit systems is 4*1024*1024 = 4MiB
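
for the curious, the failure mode has roughly this shape -- a sketch,
not the actual sfio source, and RECORD_LIMIT stands in for sfio's
internal (differently named) cap:

    /* grow a buffer until the record delimiter shows up; with no cap,
     * a newline-free 1TiB file forces a 1TiB allocation */
    #include <stdio.h>
    #include <stdlib.h>

    #define RECORD_LIMIT (4 * 1024 * 1024)  /* the current 64 bit default */

    char *getrecord(FILE *f, size_t *len)
    {
        size_t cap = 8192, n = 0;
        char *buf = malloc(cap);
        int c;

        if (!buf)
            return NULL;
        while ((c = getc(f)) != EOF)
        {
            if (n + 1 >= cap)   /* need room for this byte + terminator */
            {
                if (cap >= RECORD_LIMIT)    /* the firm limit kicks in here */
                {
                    free(buf);
                    return NULL;
                }
                cap *= 2;
                char *p = realloc(buf, cap);
                if (!p)
                {
                    free(buf);
                    return NULL;
                }
                buf = p;
            }
            buf[n++] = c;
            if (c == '\n')
                break;
        }
        if (n == 0)
        {
            free(buf);
            return NULL;
        }
        buf[n] = 0;
        *len = n;
        return buf;
    }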

we may have to revisit this given enough feedback
but the upshot is that the seemingly attractive idea of "no limits" is
not always a good one
proof of concept: try this with bash on a system that you can reboot
with a physical button:

    bash -c 'read line < /dev/zero'

btw, this test on the latest (to be released before the new year) ksh
exposed a byte-at-a-time read(2) loop that should be optimized to
buffer-at-a-time
this will be fixed in the next release
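
for reference, the difference is roughly this (function names made up):

    #include <unistd.h>

    /* byte-at-a-time: one read(2) syscall per byte -- the behavior exposed */
    ssize_t slurp_slow(int fd)
    {
        char c;
        ssize_t n = 0;
        while (read(fd, &c, 1) == 1)
            n++;
        return n;
    }

    /* buffer-at-a-time: one read(2) syscall per 64KiB -- the intended fix */
    ssize_t slurp_fast(int fd)
    {
        char buf[64 * 1024];
        ssize_t n = 0, r;
        while ((r = read(fd, buf, sizeof buf)) > 0)
            n += r;
        return n;
    }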

discussion/opinions on "too small" vs "too big" vs "shouldn't be any"
for the sfgetr() record limit welcome

I know both no-limit and limit can cause problems, but I think no-limit
is the better option. I know a few users who use ksh for things I could
not imagine (or even consider sane). With limits I know I can expect
questions from them sooner or later: "Hey, I have 64GiB of RAM, why
can't I read a 3GiB file? I don't care if it's a bad idea or not. It was
working for ages." So I vote for the no-limits option :) Anyway, 4MiB is
way too small for "these people".

Michal

In the real world there are always limits. Ignoring them can lead to
security problems like the denial-of-service demonstrated in Glenn's
bash example. I'm in favor of the safer approach. Besides, as I
demonstrated in my last posting, there is a (hopefully safe) way around
this limit.

I disagree. In any C application I can use as much memory as I want, so why should ksh be limited? If you want to limit a process, you can already do that via ulimit. There's no reason to add an extra limit inside ksh itself.

There's also one big problem with limits: choosing their values. It can make sense to set the limit to 100 MiB on a "usual desktop machine", but the same limit makes no sense on a server with 128 GiB of RAM. And when a limit is reached, the application behaves differently, so extra code is needed to handle those places; finding them in a huge old code base is very, very difficult. I know at least one company whose server-management software, which manages hundreds of servers, is written as complex ksh scripts (heavily demanding of CPU/RAM) with over 400,000 lines of code. Good luck auditing that code for compatibility with limits.
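
For illustration, ulimit is just a shell front end for setrlimit(2); a cap like the following (512 MiB is only an example value) contains a runaway process without any special code in ksh:

    /* limit the process address space the way "ulimit -v" does, so a
     * runaway allocation fails cleanly instead of taking the box down */
    #include <sys/resource.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        struct rlimit rl;

        rl.rlim_cur = rl.rlim_max = 512UL << 20;    /* 512 MiB, example value */
        if (setrlimit(RLIMIT_AS, &rl) != 0)
        {
            perror("setrlimit");
            return 1;
        }
        /* from here on, an over-sized malloc() simply returns NULL */
        if (malloc((size_t)1 << 31) == NULL)        /* 2 GiB -- refused */
            printf("big allocation refused, as intended\n");
        return 0;
    }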

Anyway, I don't like mechanisms that don't scale, and limits are one of them.