"Clem Taylor" <[EMAIL PROTECTED]> wrote:
 > I'm trying to reduce the amount of forking in my startup scripts, so I
 > started converting various: "var=$(cat filename)" to: "read var <
 > filename".
 > 
 > The ash read builtin doesn't like reading from /proc, only the first
 > character ends up in the variable. read is only reading one character
 > at a time from /proc and the second read call is returning 0.
 > 
 > Should the 'read' builtin be reading a buffer at a time instead of
 > just one character?
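(for reference, the conversion in question -- a minimal sketch, with a scratch file of my own choosing.  command substitution forks a subshell plus /bin/cat; `read` is a builtin and does not fork:)

```shell
#!/bin/sh
# compare the two forms: both end up with the line's contents,
# but only the first one forks.
tmp=$(mktemp)                  # scratch file for the demo
printf 'hello world\n' > "$tmp"

var=$(cat "$tmp")              # fork + exec cat; trailing newline stripped
read var2 < "$tmp"             # builtin, no fork; newline also stripped

[ "$var" = "$var2" ] && echo "same: $var"
rm -f "$tmp"
```

(note `read` also trims leading/trailing IFS whitespace and does backslash processing unless you use `-r`, so the two forms aren't identical for all inputs.)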

ash is certainly doing character-at-a-time reads.  looking at the
bash code, it appears bash does buffered input when it can.  (though
it's not clear to me it will always get a whole line -- i.e., i
think it would break if the /proc entry were quite long.)  i
assume you've tried this with bash, and it works?  it's
non-trivial to fix, since the current algorithm in ash handles
the -n and -t options trivially.
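(to illustrate: with one read(2) per character, -n just bounds the
loop, and -t would just wrap each single-byte read in a timeout.  a
sketch in shell, emulating the byte-at-a-time algorithm with dd --
the function name is my own invention, not ash's code:)

```shell
#!/bin/sh
# read_n_sketch N: read at most N characters from stdin, stopping at
# newline or EOF -- one read(2) per character, as ash does internally.
read_n_sketch() {
    n=$1 out=
    while [ "$n" -gt 0 ]; do
        c=$(dd bs=1 count=1 2>/dev/null)   # one byte per syscall
        [ -n "$c" ] || break               # empty on newline or EOF
        out="$out$c"
        n=$((n - 1))
    done
    printf '%s\n' "$out"
}

printf 'abcdef\n' | read_n_sketch 3   # prints "abc"
```

(a buffered implementation, by contrast, has to track leftover bytes
for both options, which is where the non-triviality comes in.)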

i'm actually a little surprised that buffered reads can work in
all cases (for bash).  what if the shell buffers more than the
read builtin needs?  if it reads too much, what happens to the
extra input that another program should have gotten?  (but i'm
not fully caffeinated yet -- maybe it's obvious.)
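(concretely, here's the property a buffered `read` has to preserve:
leftover input must remain available to the next consumer of the same
file descriptor.  a small test, with a scratch file of my own:)

```shell
#!/bin/sh
# `read` takes the first line; however it does so internally, `cat`
# (sharing the same fd) must still see the second line.  a buffering
# shell can get this right on regular files by lseek()ing back past
# its buffer -- which may be what bash does; on a pipe there's no
# seeking back, so byte-at-a-time is the safe approach.
tmp=$(mktemp)
printf 'first line\nsecond line\n' > "$tmp"
{
    read firstvar   # must consume exactly "first line" and its newline
    cat             # prints only "second line"
} < "$tmp"
rm -f "$tmp"
```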

(also, it's because /proc entries are artificial and implemented in
a primitive way that your script breaks -- it's not ash's fault. 
if you were reading a real file, ash would work just fine.  but i
suspect you realize that.)

paul
=---------------------
 paul fox, [EMAIL PROTECTED]
_______________________________________________
busybox mailing list
[email protected]
http://busybox.net/cgi-bin/mailman/listinfo/busybox