On Mon, 05 Dec 2011 23:13:44 +0100, Glenn Fowler <[email protected]>
wrote:
thanks for the report
this is an artifact of the underlying sfio buffered io library
the sfgetr() function that reads one record used to allow unlimited line
length
that worked fine until someone used it on a > 1TiB file that contained
no newline
and it proceeded to choke the system trying to allocate a > 1TiB buffer
we then added a firm limit (modifiable, but not via an sf*() function)
that was too small: 256*1024 = 256KiB for 64 bit systems
currently the default limit for 64 bit systems is 4*1024*1024 = 4MiB
we may have to revisit this given enough feedback
but the upshot is that the seemingly good idea of "no limits" is not
always a good idea
proof of concept: try this with bash on a system that you can reboot
with a physical button:
bash -c 'read line < /dev/zero'
I see what you mean, but I was curious whether bash actually blocks/crashes
the system. At least under current Ubuntu it marches along for a while and
then the bash process simply aborts with an error message to the effect
that it can't allocate 2^64 bytes. Maybe bash is using a strategy similar
to what Phong Vo is referring to in his mail:
8<-----------------
Message: 4
Date: Tue, 6 Dec 2011 11:56:40 -0500
From: Phong Vo <[email protected]>
Subject: Re: [ast-users] ksh93: possible bug in `read' builtin?
To: [email protected], [email protected]
Message-ID: <[email protected]>
Content-Type: text/plain; charset=us-ascii
Sfio is not doing this today but it is an easy extension to cause
a stream with a discipline to raise an event via the discipline
event handler whenever it calls malloc to extend the sfgetr buffer.
Then, the application can dynamically choose to either continue
the extension or abort the operation.
8<-----------------
So with bash there seems to be no real harm done, or am I missing your point?
joerg
_______________________________________________
ast-users mailing list
[email protected]
https://mailman.research.att.com/mailman/listinfo/ast-users