On Sat, Apr 25, 2015 at 12:23 PM, Martin Ling <martin-sig...@earth.li> wrote:
> On Fri, Apr 24, 2015 at 07:04:49PM -0400, Cody P Schafer wrote:
>>
>> So in posix I'd do something like the following (buffer fullness &
>> error checking omitted):
>>
>> int b_fd = open_and_configure_port_as_blocking();
>> uint8_t buf[1024];
>> size_t used = 0;
>> for (;;) {
>>     ssize_t r = read(b_fd, buf + used, sizeof(buf) - used);
>>     /* scan for 'X', if found break */
>>     if (memchr(buf + used, 'X', r))
>>         break;
>>     used += r;
>> }
>
> I get the impression here that you're relying not just on the read()
> blocking indefinitely until the first byte appears, but also on
> returning not long after that, despite having read much fewer than the
> 1024 bytes in the initial request.
>
> POSIX requires the former, but merely permits the latter.
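
To make my assumptions explicit, open_and_configure_port_as_blocking()
above is roughly the following sketch (the device path and baud rate are
placeholders, error checking is omitted as before; the parts that matter
are the blocking open, i.e. no O_NONBLOCK, and VMIN=1 / VTIME=0):

    #include <fcntl.h>
    #include <termios.h>
    #include <unistd.h>

    static int open_and_configure_port_as_blocking(void)
    {
        /* Blocking open: note the absence of O_NONBLOCK. */
        int fd = open("/dev/ttyUSB0", O_RDWR | O_NOCTTY);

        struct termios tio;
        tcgetattr(fd, &tio);
        cfmakeraw(&tio);            /* raw mode: no canonical line handling */
        cfsetispeed(&tio, B115200);
        cfsetospeed(&tio, B115200);
        tio.c_cc[VMIN] = 1;         /* read() blocks until at least 1 byte... */
        tio.c_cc[VTIME] = 0;        /* ...and no interbyte/overall timeout */
        tcsetattr(fd, TCSANOW, &tio);
        return fd;
    }

With that setup, the read() loop above blocks only until the first byte
shows up and then returns whatever has arrived so far.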
For normal files, I suppose that all depends on how one reads:
http://pubs.opengroup.org/onlinepubs/9699919799/

    If O_NONBLOCK is clear, read() shall block the calling thread until
    some data becomes available.

But for serial devices, I've got min = 1 & time = 0. The idea here is to
get _any_ data as soon as it is available (reading more than one byte is
possible, but not required). I'm not looking for any timeouts. I'm just
trying to get bytes as soon as they are available, in an efficient
manner.

> A different implementation would be entirely justified in blocking
> until 1024 bytes had been received, unless you set the termios VTIME
> value to specify a timeout. You're depending on the fact that the
> system you're using happens to have a default there that suits you (I
> think 0.1 seconds is common), or that the serial driver has an
> internal timeout that has the same effect.

0.1 seconds doesn't appear to be common. Linux's default is VTIME=0,
VMIN=1, which causes terminals to match the behavior of regular files.
FreeBSD (from some source code grepping) appears to do the same. I don't
have any other posix-like systems around, but I'd be surprised if they
went with uncommon behavior.

> So this code as shown can't be relied on, even on POSIX. And the
> "block until first byte, then start the timeout" semantics aren't
> available on other systems (e.g. Windows), so it's not a portable
> idiom.

I think you're misunderstanding what I'm after, as I've mentioned above.
What I'm looking for is easily obtainable on posix-like systems (and is
often the default). The key point is that I'm not looking for any added
timeouts.

> Also to get those semantics, you need to open the port without
> O_NONBLOCK, which has the side effect that it's impossible to then
> make any non-blocking read/write calls later without closing and
> reopening the port.
>
> Rather than wanting the primitives to work like those on a given
> system in a given mode, I think it's much clearer to express what you
> actually want:
>
> 1. A blocking wait until a byte is received, followed by
> 2. A blocking read of bytes received up to some timeout after that.
>
> And the way to express that in libserialport seems logical enough to
> me:
>
> 1. sp_wait(&set_with_rx_ready_event, 0);
> 2. sp_blocking_read(buf, length, your_preferred_timeout);
>
> Which is almost what you have in your following example:
>
>> With libserialport to get the same behaviour I need to:
>>
>> struct sp_port *p = open_and_configure_port();
>> uint8_t buf[1024];
>> size_t used = 0;
>> struct sp_event_set *ev;
>> sp_new_event_set(&ev);
>> sp_add_port_events(ev, p, SP_EVENT_RX_READY);
>> for (;;) {
>>     sp_wait(ev, 0);
>>     int r = sp_nonblocking_read(p, buf + used, sizeof(buf) - used);
>>     /* scan for 'X', if found break */
>>     if (memchr(buf + used, 'X', r))
>>         break;
>>     used += r;
>> }
>
> ...except you should change the nonblocking read for a blocking one
> with whatever short timeout value you want to use.

As mentioned above, this is not what I was after.

>
>> Because sp_blocking_read() doesn't return early (like posix read
>> does).
>
> It does, but you have to tell it what timeout to use, rather than
> relying on some system/driver default.
>
> And if you want to wait indefinitely for a character first, you need
> to do that as a separate call, which seems reasonable to me.
>
> I like the principle of least astonishment. Your first example depends
> on a lot that is implicit in the port setup or the system details. The
> second is very clear what it wants. I think that's the best way to do
> things in a cross-platform API.
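
To be concrete about the shape you're suggesting, here is my reading of
steps 1 and 2 above (the helper name and the 100 ms timeout are
placeholders, error checking omitted):

    #include <stddef.h>
    #include <stdint.h>
    #include <libserialport.h>

    static int wait_then_read(struct sp_port *p, uint8_t *buf, size_t len)
    {
        struct sp_event_set *ev;
        sp_new_event_set(&ev);
        sp_add_port_events(ev, p, SP_EVENT_RX_READY);

        sp_wait(ev, 0);                  /* 1. block until data is available */
        int r = sp_blocking_read(p, buf, len, 100);  /* 2. read for <= 100 ms */

        sp_free_event_set(ev);
        return r;
    }

That works, but step 2 still bakes an arbitrary timeout into the read,
which is exactly the thing I'm trying to avoid.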
You're right, I'm used to my read() functions having certain semantics
available. And on posix systems I've used, they always have those
semantics.

While it's unfortunate that windows doesn't have similarly sane defaults
(as
https://msdn.microsoft.com/en-us/library/windows/desktop/aa365467%28v=vs.85%29.aspx
appears to imply, search for COMMTIMEOUTS), that isn't necessarily an
excuse not to provide those sane defaults via a cross-platform library.
The windows docs for COMMTIMEOUTS even imply that exactly the posix-like
VTIME=0, VMIN=1 behavior is available (
https://msdn.microsoft.com/en-us/library/windows/desktop/aa363190(v=vs.85).aspx
, see the "Remarks" section).

Given all this, it still isn't clear why using events should be
required.
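
My reading of that Remarks section is that the setup would be something
like the following untested sketch (the field values follow my reading
of the docs; since ReadTotalTimeoutConstant can't actually be infinite,
a very large value has to stand in for "no timeout"):

    #include <windows.h>

    /* Per the COMMTIMEOUTS Remarks: with ReadIntervalTimeout and
     * ReadTotalTimeoutMultiplier set to MAXDWORD and ReadTotalTimeoutConstant
     * between 0 and MAXDWORD, ReadFile() returns immediately if bytes are
     * already buffered, and otherwise waits for the first byte to arrive. */
    static BOOL set_posix_like_timeouts(HANDLE h)
    {
        COMMTIMEOUTS t = {0};
        t.ReadIntervalTimeout         = MAXDWORD;
        t.ReadTotalTimeoutMultiplier  = MAXDWORD;
        t.ReadTotalTimeoutConstant    = MAXDWORD - 1; /* effectively forever */
        t.WriteTotalTimeoutMultiplier = 0;
        t.WriteTotalTimeoutConstant   = 0;
        return SetCommTimeouts(h, &t);
    }

If that reading is right, a cross-platform library could offer the
VMIN=1 / VTIME=0 behaviour on windows as well.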