On 5 May 2012 02:36, Eliot Miranda <eliot.mira...@gmail.com> wrote:
>
> On Fri, May 4, 2012 at 5:24 PM, Igor Stasenko <siguc...@gmail.com> wrote:
>>
>> On 5 May 2012 00:56, Eliot Miranda <eliot.mira...@gmail.com> wrote:
>> >
>> > On Fri, May 4, 2012 at 3:28 PM, Igor Stasenko <siguc...@gmail.com> wrote:
>> >>
>> >> On 5 May 2012 00:21, Sean P. DeNigris <s...@clipperadams.com> wrote:
>> >> >
>> >> > Sean P. DeNigris wrote
>> >> >>
>> >> >> PipeableOSProcess>>#upToEnd eventually calls
>> >> >> AttachableFileStream>>#upToEnd, which tries to perform a buffered
>> >> >> read by "self nextInto: 1000" (which eventually calls
>> >> >> primitiveFileRead, which calls sqFileReadIntoAt, which calls fread
>> >> >> with a count argument of 1000).
>> >> >>
>> >> >
>> >> > After further investigation, it seems to me that blocking #upToEnd is
>> >> > functionally the same as #upToEndOfFile, because its test to stop
>> >> > reading data is StandardFileStream>>atEnd, which calls feof().
>> >> > Therefore, if there is no EOF, it will keep reading until the pipe is
>> >> > out of data, and then hang in fread on the following iteration.
>> >> >
>> >> IMO
>> >>   stdin upToEnd
>> >> makes no sense. One should not expect the data arriving on stdin to
>> >> have any notion of an "end", and therefore should never use this method
>> >> on such a stream.
>> >
>> > Um, no. See below.
>> >
>> >> stdin/stdout are unbounded (endless) streams, and using things like
>> >> feof() and other calls of that sort should be discouraged, since it is
>> >> the same as asking "infinity atEnd".
>> >
>> > Um, no. One can redirect a file to stdin. One can type EOF to stdin.
>> > EOF definitely *does* make sense for stdin.
>>
>> What, like putc(EOF)?
>
> No. One types an EOF character to the shell (see stty) and the shell
> responds by closing the pipe to the process. The process then detects an
> eof condition once it has read all the data from the pipe. There are no
> EOF characters in a stream on unix.
>
Whatever; that is only a shell convention. The shell just closes the stream
when it handles that input from the user. In the general case (between two
unrelated processes) you cannot assume anything.
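
To make the mechanics above concrete, here is a minimal C sketch (an
illustration only, not code from the thread or from OSProcess; it assumes a
POSIX system) of what "EOF on a pipe" actually is: the writer closes its end
of the pipe, and once the reader has drained the buffered data, read(2)
returns 0.
---
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    int fds[2];                      /* fds[0] = read end, fds[1] = write end */
    if (pipe(fds) == -1) { perror("pipe"); exit(1); }

    pid_t pid = fork();
    if (pid == -1) { perror("fork"); exit(1); }

    if (pid == 0) {                  /* child: the writer */
        close(fds[0]);               /* not reading in this process */
        const char *msg = "hello from the write end\n";
        write(fds[1], msg, strlen(msg));
        close(fds[1]);               /* closing the write end is what delivers EOF */
        _exit(0);
    }

    /* parent: the reader */
    close(fds[1]);                   /* must close its copy of the write end,
                                        otherwise read() below would block forever */
    char buf[128];
    ssize_t n;
    while ((n = read(fds[0], buf, sizeof buf)) > 0)
        fwrite(buf, 1, (size_t)n, stdout);

    /* n == 0 here: the pipe is drained and the write end is closed */
    printf("read() returned 0: end of input\n");
    close(fds[0]);
    waitpid(pid, NULL, 0);
    return 0;
}
---
There is no EOF character in the data; the "end" is the condition "write end
closed and buffer consumed", which is exactly what the two sides of this
argument interpret differently.
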
>> But that is a convention between the two ends. If the other end does not
>> recognize the EOF character as an "end of input" signal, it will keep
>> waiting for more data. Since these streams are naturally binary, I would
>> be really surprised if some characters were reserved for special
>> purposes.
>>
>> > stdin, stdout and stderr are merely well-defined stream names. They can
>> > be bound to arbitrary streams, infinite or otherwise. In unix shells
>> > piping is built using dup with fork & exec to arrange that some program
>> > reads and writes to specific pipe files in a full pipe.
>>
>> Right. But taking your example, imagine that I used dup() and one fork
>> keeps writing to the stream while the other closes its own copy. Does the
>> receiving side receive any "EOF" signal? I doubt it.
>
> Um, that's not how it is used. One process (e.g. the parent) holds the
> write end of the pipe and it can close that end. The other process (e.g.
> the child) holds the read end and it can detect eof when it consumes all
> available input.
>
But how can you detect the end of the available input if the other side
never closes its own end? And it is free to do so. That is why I say that
using upToEnd on stdin is bad practice: I can always redirect an endless
stream as input to stdin, and then your program will run in an endless loop.

>>
>> Here is the excerpt from the feof man page:
>>   The function feof() tests the end-of-file indicator for the stream
>>   pointed to by stream, returning non-zero if it is set. The end-of-file
>>   indicator may be cleared by explicitly calling clearerr(), or as a
>>   side-effect of other operations, e.g. fseek().
>>
>> which means that eof is actually nothing more than an error captured
>> while attempting to read more from the stream, nicely converted to "eof"
>> by the higher-level abstraction (the f* C functions).
>
> Right.
>
>> But if you look at the basic infrastructure supported by the kernel
>> (read(), write()), there is no notion of "eof" for descriptors. All you
>> can get is an error while attempting to read from or write to a
>> descriptor, and then you can decide how to handle that error: either by
>> treating it as end-of-file, or by signaling an exception, etc.
>
> So for files eof is an attempt to read beyond end-of-file, but for pipes
> and socket streams there is also a notion of eof, which is when all data
> has been read and the write side of the pipe/socket stream is closed.
> See pipe(2) & socketpair(2).

But it is again about handling an error status. It even says so in the man
page:
---
A pipe whose read or write end has been closed is considered widowed.
Writing on such a pipe causes the writing process to receive a SIGPIPE
signal. Widowing a pipe is the only way to deliver end-of-file to a
reader: after the reader consumes any buffered data, reading a widowed
pipe returns a zero count.
---
But as you can see, for the system there is still no notion of end-of-file.
It is, again, a convention: you can assume that upon receiving SIGPIPE (as a
writer) or a zero-count read (as a reader) you have reached the end of
input, except in cases where such a condition is unexpected, like reading
from /dev/random :) One application can treat such a condition as end of
input, while another can treat it as "something went wrong with the process
delivering data to me" and will attempt to reconnect.

--
Best regards,
Igor Stasenko.
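
For completeness, a second minimal sketch (again an illustration, not code
from the thread; it assumes POSIX and ignores SIGPIPE so that write(2)
reports EPIPE instead of terminating the process) showing both halves of the
pipe(2) passage quoted above: the reader of a widowed pipe gets a zero
count, while the writer gets SIGPIPE/EPIPE.
---
#include <errno.h>
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
    signal(SIGPIPE, SIG_IGN);        /* turn the signal into an EPIPE error */

    int fds[2];
    if (pipe(fds) == -1) { perror("pipe"); exit(1); }

    /* Reader's point of view: widow the pipe by closing the write end.
       With no data buffered, the very first read() returns 0. */
    close(fds[1]);
    char buf[16];
    ssize_t n = read(fds[0], buf, sizeof buf);
    printf("read on widowed pipe returned %zd (0 means end-of-file)\n", n);
    close(fds[0]);

    /* Writer's point of view: close the read end and try to write. */
    if (pipe(fds) == -1) { perror("pipe"); exit(1); }
    close(fds[0]);
    if (write(fds[1], "x", 1) == -1 && errno == EPIPE)
        printf("write on widowed pipe failed with EPIPE "
               "(would have been SIGPIPE by default)\n");
    close(fds[1]);
    return 0;
}
---
Whether a program treats these conditions as "end of input" or as an error
to recover from is exactly the policy question being debated in the thread.
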