On 12/05/2011 11:13 PM, Glenn Fowler wrote:

thanks for the report

this is an artifact of the underlying sfio buffered io library

the sfgetr() function that reads one record used to allow unlimited line length
that worked fine until someone used it on a > 1TiB file that contained no newline
and it proceeded to choke the system trying to allocate a > 1TiB buffer

we then added a firm limit (modifiable, but not via an sf*() function)
that turned out to be too small: 256*1024 = 256KiB on 64 bit systems
currently the default limit for 64 bit systems is 4*1024*1024 = 4MiB
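
to make the idea concrete, here is a minimal sketch of a bounded record
reader in plain stdio (not the actual sfio internals; MAX_RECORD is a
hypothetical stand-in for sfio's internal, non-sf*() limit):

        /* sketch of the firm-limit idea -- not the actual sfio code;
         * MAX_RECORD is a hypothetical stand-in for sfio's internal limit */
        #include <stdio.h>
        #include <stdlib.h>

        #define MAX_RECORD (4*1024*1024)  /* current 64 bit default cited above */

        /* read one newline-terminated record, growing the buffer as
         * needed, but refuse to grow past MAX_RECORD so a newline-free
         * multi-TiB input fails cleanly instead of choking the system */
        static char* getrecord(FILE* f)
        {
            size_t size = 8192;
            size_t len = 0;
            char*  buf = malloc(size);
            int    c;

            if (!buf)
                return 0;
            while ((c = getc(f)) != EOF)
            {
                if (len + 1 >= size)  /* need room for c and the final NUL */
                {
                    if (size >= MAX_RECORD)
                    {
                        free(buf);    /* record too long -- give up */
                        return 0;
                    }
                    size *= 2;
                    if (size > MAX_RECORD)
                        size = MAX_RECORD;
                    char* nbuf = realloc(buf, size);
                    if (!nbuf)
                    {
                        free(buf);
                        return 0;
                    }
                    buf = nbuf;
                }
                buf[len++] = (char)c;
                if (c == '\n')
                    break;
            }
            if (len == 0)             /* EOF with nothing read */
            {
                free(buf);
                return 0;
            }
            buf[len] = 0;
            return buf;
        }

        int main(void)
        {
            char* s;

            while ((s = getrecord(stdin)))
            {
                fputs(s, stdout);
                free(s);
            }
            return 0;
        }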

we may have to revisit this given enough feedback
but the upshot is that the seemingly good idea of "no limits" is not always a 
good idea
proof of concept: try this with bash on a system that you can reboot with a 
physical button:

        bash -c 'read line < /dev/zero'

btw, this test on the latest (to be released before the new year) ksh
exposed a byte-at-a-time read(2) loop that should be optimized to
buffer-at-a-time
this will be fixed in the next release
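
roughly, the difference between the two loops looks like this (an
illustrative sketch, not the actual ksh/sfio code):

        #include <string.h>
        #include <unistd.h>

        /* byte-at-a-time: one read(2) syscall per input byte,
         * the pattern the /dev/zero test exposed */
        ssize_t line_slow(int fd, char* buf, size_t max)
        {
            size_t n = 0;
            char   c;

            while (n < max && read(fd, &c, 1) == 1)
            {
                buf[n++] = c;
                if (c == '\n')
                    break;
            }
            return (ssize_t)n;
        }

        /* buffer-at-a-time: one read(2) fills a whole buffer, then a
         * memory scan finds the newline; a real shell must also save
         * the bytes after the newline for the next call, which is the
         * bookkeeping a buffered io layer like sfio provides */
        ssize_t line_fast(int fd, char* buf, size_t max)
        {
            ssize_t n = read(fd, buf, max);

            if (n > 0)
            {
                char* nl = memchr(buf, '\n', (size_t)n);
                if (nl)
                    n = (nl - buf) + 1;  /* bytes past nl would need keeping */
            }
            return n;
        }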

discussion/opinions on "too small" vs "too big" vs "shouldn't be any"
for the sfgetr() record limit welcome

I know both no-limit and limit can cause problems, but I think no-limit is the better option. I know a few users who use ksh for things I could not imagine (or even consider sane). With limits I know I can expect questions from them sooner or later: "Hey, I have 64GiB RAM, why can't I read a 3GiB file? I don't care if it's a wrong idea or not. It was working for ages." So I vote for the no-limits option :) Anyway, 4MiB is way too small for "these people".

Michal