On Sun, Apr 26, 2015 at 3:15 PM, Martin Ling <martin-sig...@earth.li> wrote:
> On Sat, Apr 25, 2015 at 06:09:16PM -0400, Cody P Schafer wrote:
>>
>> This assumes I'm using something that is normal serial hardware.
>>
>> Many devices that present as serial ports aren't really serial ports.
>> USB devices are especially prone to this.
>> Further, even if a real serial port is on the other end, nothing
>> guarantees that device isn't implementing it's own version of waiting
>> that I have no control over.
>>
>> As a result, the buffer will often contain more than 1 byte.
>
> That's true, but if your VMIN=1 code is going to read a byte at a time
> whenever it's plugged into a "real" serial port, then I don't see why
> it's an approach we should be encouraging, if the motivation is efficiency.
>
> What are you actually trying to optimise for?
>
> Rather than efficiency, you seem to be focused on minimising latency:
> getting bytes as soon as they are available.  That's fundamentally
> mutually exclusive with maximising efficiency.

That really depends on one's definition of efficiency. For many serial
protocols there is only a single outstanding "request" (of sorts). As
a result, additional latency translates directly into decreased
throughput, which means my transfer has to run longer.
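To make that concrete with hypothetical numbers (not measurements from my setup): if each request/response exchange moves 64 bytes and the wire time per exchange is ~0.6 ms, an extra 16 ms of buffering latency per exchange cuts throughput by well over an order of magnitude:

```python
# Hypothetical numbers for a protocol with a single outstanding request:
# each exchange moves 64 payload bytes; wire time ~0.6 ms at ~1 Mbit/s.
payload_bytes = 64
wire_time_s = 0.0006

for extra_latency_s in (0.0, 0.016):     # no timer vs. a 16 ms latency timer
    rate = payload_bytes / (wire_time_s + extra_latency_s)
    print(f"extra latency {extra_latency_s * 1000:4.1f} ms -> {rate:9.0f} bytes/s")
```

With only one request in flight, every millisecond of added latency is a millisecond the wire sits idle.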

In any case: it doesn't make sense to argue that "you're already not
perfectly efficient in # of syscalls/user-time, so there is no need to
try to be efficient in # of syscalls/user-time at all".

> You say you don't want any timeouts, but think about *why* you get
> multiple bytes at a time with a USB serial adapter. It's because there's
> a timeout in effect. The driver only polls the USB device every N
> milliseconds, so whenever it does so you get all the bytes that arrived
> within that time. On FTDI devices for example, the default latency timer
> value is 16ms.

As I mentioned: "even if a real serial port is on the other end".

Note:
 - we don't have control over that latency
 - breaking the use of posix-like blocking doesn't get rid of that latency

Some things don't have a "real serial port" on the other end at all
(like a Cortex-M3 implementing USB serial in software and shipping
entire frames with responses back to me). Serial is just the operating
system API. Arguing from physical characteristics may have made sense
years ago, but it doesn't make sense today.

> You seem happy with this arbitrary timeout giving you a gain on
> efficiency, by getting you multiple bytes on each read. But you don't
> want to improve efficiency any further by setting what timeout you're
> actually willing to tolerate.

I'm absolutely not happy with timeouts I can't control. What gave you that idea?

That said: some timeouts exist because they are necessary (USB only
has so much bus bandwidth, there is header overhead, and it really
makes sense not to trigger a bunch of single-byte transfers).

What I'm absolutely not happy with is adding unnecessary (& arbitrary) timeouts.

> If you really want to minimise latency, you should either avoid using a
> USB adapter entirely, or tune the latency timer down to the minimum in
> the USB device driver settings. Either way, you should then be getting
> only a single byte per loop cycle, so there's no advantage to using
> anything other than a single-byte read call. If you're not already
> getting at most byte per loop cycle, then extra syscall overhead in
> reading each one is not your problem.
>
> If you really want to maximise efficiency, then you need to avoid making
> the OS return every byte as soon as it's available. Make large read
> requests with a big buffer and a long timeout, and let the OS get on
> with things.

I'm not interested in having my programs take X times more wall-clock
time to run.

> If you want a balance somewhere in between, then use sp_blocking_read()
> with a short timeout that you're willing to tolerate. And you'll then
> also get consistent behaviour regardless of what serial device you use,
> rather than depending on the device happening to have a suitable timeout.

In my particular case, I've got a USB device that just happens to
pretend to be a serial port. Again, arguments about physical
characteristics don't apply here.

>> I'm not currently seeking any behavior other than VMIN=1. While I
>> could see other values being convenient for protocol implementations,
>> as you mention portability for that would be tricky (or deeper than
>> adjusting port flags, one would have to actually compose a single
>> lib operation on some platforms into more than 1 system call).
>
> Note that some library calls already involve more than one system call.
> For one thing, sp_blocking_read() on Unix platforms works by running
> select() for the block and then read() to get the data. This is
> necessary because the port has to be opened in O_NONBLOCK mode to be
> able to support nonblocking operations as well.
>
> This also means that if we add a function for the behaviour you want, it
> would still be doing a wait followed by a nonblocking read internally.

Perhaps on Windows it would. On Linux, "read at least X" can be done
almost entirely in the kernel (by setting VMIN).
My point is that if there were a higher-level API, the library could
choose OS-specific implementations where available (and where they
made sense), and otherwise fall back to a generic method built on the
existing API. It's a bit difficult to implement those types of things
external to libserialport (though probably not impossible if one
extracts the OS-specific representation using the provided functions).
Anyhow, this is all secondary.
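As a sketch of those VMIN=1 semantics, demonstrated on a pty rather than a real serial port (and via Python's stdlib termios bindings rather than C, purely for illustration):

```python
import os
import pty
import termios
import time
import tty

# VMIN=1 / VTIME=0: read() blocks until at least one byte is available,
# then returns everything currently buffered -- no timeout involved.
master, slave = pty.openpty()
tty.setraw(slave)                        # raw mode (no echo/line processing)

attrs = termios.tcgetattr(slave)
attrs[6][termios.VMIN] = 1               # block until >= 1 byte is available
attrs[6][termios.VTIME] = 0              # no interbyte/overall timeout
termios.tcsetattr(slave, termios.TCSANOW, attrs)

os.write(master, b"several bytes at once")
time.sleep(0.05)                         # let the data land in the tty buffer
data = os.read(slave, 4096)              # a single read returns all of it
print(data)
```

The single blocking read wakes as soon as the first byte arrives and drains whatever else is already buffered, which is exactly the behavior under discussion.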

>> The VMIN=1 case, on the other hand seems like it'd be rather possible.
>
> I agree that it's possible to add a function that provides these
> semantics. I don't currently see a good reason to. It would be exactly
> equivalent to sp_wait() for RX followed by sp_nonblocking_read().
>
> Your goal seems to be to minimise latency. But if you're getting more
> than one byte received per loop cycle, then the time cost of making
> multiple syscalls to read them will be absolutely negligible compared to
> the latency you're already getting through the rest of the system. So I
> don't see the point in trying to optimise the code for this scenario.

I like optimizing for a lot of things, and you're right, latency is
one of them. So are throughput, efficiency, etc.

Wanting to optimize any of them doesn't mean we can simply ignore the others.

And past those things, the reason I looked for a function that
provided this behavior in the first place is that it matches the
pattern I use when not working through libserialport: "normal"
blocking reads rather than `read_exactly()` :) , and avoiding
events/poll/select where they aren't needed.
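(For comparison, the generic wait-then-nonblocking-read fallback mentioned earlier really is only a few lines. A hypothetical sketch, not a libserialport API; `read_at_least_one` is a made-up name, shown here on a plain pipe standing in for a serial fd:)

```python
import os
import select

def read_at_least_one(fd, bufsize=4096):
    """Block until fd is readable, then return whatever is buffered
    (at least one byte). A generic stand-in for VMIN=1 semantics."""
    select.select([fd], [], [])          # wait for at least one byte
    return os.read(fd, bufsize)          # drain what is currently buffered

r, w = os.pipe()
os.write(w, b"whole response frame")
print(read_at_least_one(r))
```

This is the shape of the portable fallback; an OS-specific implementation (VMIN on Linux) can skip the extra wait syscall entirely.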

Further aside: if I wanted to use events, I'd probably go ahead and
use an event loop, at which point I'd wonder why I even bothered with
a portable serial port library in the first place (I'd need to pull
the OS representation of the serial port out of libserialport to hand
to the event loop).
