I am experimenting with my own firmware on a USB development board. Currently, it does full-speed bulk transfers with the Linux generic usb-serial driver.
It looks like I lose data when the device sends data faster than the application consumes it; in other words, there is no flow control. I have some questions about this; hopefully someone on this list knows the answers.

* The USB specification seems to provide a built-in notion of flow control: a device can always throttle the host by responding to OUT transactions with NAK, and the host can always throttle a device by simply not issuing IN transactions. Is this a correct reading of the standard?

* The Linux generic usb-serial driver never throttles the device in this sense. It resubmits a new read URB immediately after every received packet, even when the tty layer has no room left to store the data, so data can be lost. Would it make sense to postpone resubmitting the read URB until the tty layer has room available? Some of the other drivers (digi_acceleport, for example) appear to do this in response to a throttle message from the tty layer.

* Is there perhaps a better way to achieve what I want? My goal is a simple, reliable, flow-controlled data stream over a bulk endpoint, without writing my own device driver. (A rough user-space sketch of what I have in mind is at the end of this message.)

Thanks for any help,
Joris.
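To make the host-side throttling idea from the first point concrete, here is a minimal, untested sketch using libusb-0.1. The vendor/product IDs, interface number and endpoint address (0x81) are placeholders for my board, not real values. The point is that the host only issues IN transactions while a read is actually pending, so the device is throttled whenever the application is busy.

/* Untested sketch: read a flow-controlled bulk stream with libusb-0.1.
 * VID/PID, interface 0 and endpoint 0x81 are placeholders for my board.
 * Compile with: gcc -o bulkread bulkread.c -lusb
 */
#include <stdio.h>
#include <usb.h>

#define MY_VID 0x1234   /* placeholder vendor ID  */
#define MY_PID 0x5678   /* placeholder product ID */
#define EP_IN  0x81     /* placeholder bulk IN endpoint */

static usb_dev_handle *open_device(void)
{
    struct usb_bus *bus;
    struct usb_device *dev;

    usb_init();
    usb_find_busses();
    usb_find_devices();

    for (bus = usb_get_busses(); bus; bus = bus->next)
        for (dev = bus->devices; dev; dev = dev->next)
            if (dev->descriptor.idVendor == MY_VID &&
                dev->descriptor.idProduct == MY_PID)
                return usb_open(dev);
    return NULL;
}

int main(void)
{
    usb_dev_handle *handle;
    char buf[64];   /* one full-speed bulk packet */
    int n;

    handle = open_device();
    if (!handle) {
        fprintf(stderr, "device not found\n");
        return 1;
    }

    /* The generic usb-serial driver may already be bound to the
     * interface; on Linux it can be detached with this non-portable
     * call before claiming the interface ourselves. */
#ifdef LIBUSB_HAS_DETACH_KERNEL_DRIVER_NP
    usb_detach_kernel_driver_np(handle, 0);
#endif

    if (usb_claim_interface(handle, 0) < 0) {
        fprintf(stderr, "could not claim interface\n");
        return 1;
    }

    /* No IN transactions happen between reads, so the device sees
     * back-pressure whenever the application is not ready for data. */
    for (;;) {
        n = usb_bulk_read(handle, EP_IN, buf, sizeof(buf), 1000);
        if (n > 0)
            fwrite(buf, 1, n, stdout);   /* "handle" the data */
        /* n < 0 may just be a timeout because the device had nothing
         * to send; a real application would distinguish timeouts from
         * hard errors here. */
    }

    usb_release_interface(handle, 0);
    usb_close(handle);
    return 0;
}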