> All members of the EZ-USB family (i.e. AN21xx, FX and FX2) come with a
> default device identity (descriptor set) and a full default implementation
> of the EP0 standard requests. For many simple applications like the
> usbtest firmware this is basically all you need besides the real
> functional work.
Yes, and I can agree with your comment about that making it a lot easier
to get test firmware up. Plus, since the default devices have nontrivial
configuration -- they have several altsettings -- they support better
testing of control requests. Of course it's also nice that the test
firmware is so small! Given that, it might not be very useful to try
having the same code running on multiple devices ... except to simplify
usermode tools that'll manage that firmware.

> IIRC the a3load is meant to load stuff into external memory, right?

Yes, such as for two-stage loading. My point was just that it's possible
to have code (including EP0 support) that works on all types of EZ-USB.

>>> - always send max packet size (64 bytes for FS), continuous data stream
>>
>> Eventually we'd want to test all maxpacket sizes, but that sort of
>> stuff can IMO wait for a while. After all, we're running into problems
>> with urb queueing even before we start to look at the data ... :)
>
> Good point. But unfortunately this means we have to provide our own
> descriptors - thus we need to implement the whole EP0 thing - see above.

I think we can't really get away from needing EP0 implementations in the
general case. Given what you said later, I think you agree, but would be
glad to defer such work.

>>> - sending occasional short packets (1...63) at random
>>> - sending max size packets, but with a zero-length packet inserted at some rate
>>
>> Not at random ... at most, according to some pseudo-random sequence known
>> in advance to both host and device. They should know in advance what
>> packet will be coming.
>
> I was thinking about putting the actual length in the first byte...

Can't hurt, but eventually I want to see test cases where that's not
enough. For example, we know what should happen if the host issues a read
for exactly N != maxpacket bytes, and the device sends fewer (or more)
bytes. Those cases should get tested; we need to know that all the HCDs
fail in the same ways.
>> We'll need to test short packet I/O ... so something as simple as a data
>> stream option to make the packet sizes go MAX, MAX-1, ... 2, 1, 0 would
>> be enough to test with. (Instead of a default that sends only MAX.)
>
> ... but you are right: it's sufficient and easier if we just send a
> defined sequence of lengths like this.

I'd do both, actually ... :)

> Idea is to let the firmware use the frame counter as timestamp to
> indicate when the IN packet was sourced. The host has the same
> synchronized timebase, making it easy to measure latencies.

Such measurements can be a related focus. Given those basic time
measurements, it'll be practical for motivated people to put together
charts'n'graphs showing interesting behaviors.

>> As for requesting faults (like endpoint halts), I had thought those would
>> be done with control requests.
>
> ...
>
> If we want correct set/clear stall behavior triggered by the corresponding
> EP0 request, we need our own EP0 implementation - see above.

Yes. Though I won't start worrying about such things until after we get
all the HCDs behaving smoothly (no system lockups or strange errors) on
the basic I/O tests (which is all the tests we have now).

>>> My loopback has the IN and OUT ep's both double buffered ...
>>>
>>> The whole thing is interrupt driven and the transfers between flip buffers
>>> and the ringbuffer are done via DMA. As long as the host keeps the reads
>>> and writes nearly balanced you'll never get a NAK from the device. I've
>>> measured 1.19e6 bytes/sec sustained throughput with usb-ohci and queueing.
>>
>> Well, that sounds exactly like what I want to know works for _all_ host
>> controllers!! But I'd make sure the host unbalances the reads and writes,
>> in some test modes, to make sure interesting behaviors get triggered!
>
> IMHO the easiest way to do so would be two chardevs in userland, one
> connected to the IN queue, the other one to the OUT queue.
> Just start dd'ing
> stuff from/to these devs and then: use different blocksizes, ^Z the reader
> or writer, use different scheduling priorities, put on some VM pressure,
> add concurrent high interrupt load (ping -f over a 100TX link turned out
> to be a good test case).

The problem with that test mode is it's not a hands-off test. I think
that "hands-off" is important, since I want us to be able to run
overnight (or longer!) tests.

> In general, I'm wondering whether it wouldn't be better not to have the
> testcases implemented in the driver module. Just forwarding the raw
> datastream to some chardev with ioctl might be an alternative - and people
> could write testcases in userland.

We've already got one ioctl, why would we need another one? :)

I don't have anything against tests from userland, but the things I'm
most concerned with wouldn't be as visible from there as they would be to
tests that only work inside the kernel. To some extent I want to have
unit tests for kernel APIs.

Of course, as I've said before, lots of things need to be tested, and
this particular set of tools shouldn't be the only one.

>>> And if you think about running the test firmware downloaded to arbitrary
>>> ezusb-family based devices, there are further concerns even for a
>>> particular device:
>>
>> Most of which we should design to avoid. There's a lot that can be
>> done in just 4 KBytes (in byte-efficient ASM!), and running on every
>> possible hardware would mean revision-specific bug workarounds as
>> well as "how did vendor X use it" issues. Some modes are mainstream,
>> and others we can/should ignore.
>
> ...
>
> Maybe I'm overdoing it here wrt. what you can expect from consumer-grade
> devices - my FX sits in an application with safety concerns, so we have
> to be paranoid (and 3rd-party firmware wouldn't be happy there because it
> doesn't know how to keep the watchdog from resetting the hardware once
> every second ;-)

Well then, those devices shouldn't be recommended for testing!
:) But there are devices that don't have such concerns.

>> The reason to try handling lots of devices is to make it easier for
>> people to use this without needing to get hardware-developer kits
>> (such as those at http://csx.jp/~fex/ezusb/buy-en.html) ... easier to
>> just go to the corner store and get a usb serial adapter to re-purpose,
>> but not the only choice. (I did once see an FX2-based kit, but I lost
>> its URL.)
>
> Ok, if it's about people buying cheap hardware, dedicated - probably
> permanently - to burn^H^H^H^Htest them, we could relax somewhat about
> those concerns. But I don't see why we would need a lot of people doing
> these tests.

Mostly we want to make sure it's easy for people to run these tests
because they're likely to be the closest thing we have to diagnostics for
Linux users. In the same way people are told "run memtest" to see if
their memory has problems, we'd sometimes benefit from being able to tell
people to "run USB tests" to see if their host has problems.

Plus, the easier it is for anyone to run those tests, the more realistic
it becomes to expect that they'll get serious use. It's the sort of
thing that a distro vendor should consider, as well as folk submitting
patches to usbcore or maintaining HCDs. New Linux platforms/archs can
use that sort of testing help too.

- Dave

_______________________________________________
[EMAIL PROTECTED]
To unsubscribe, use the last form field at:
https://lists.sourceforge.net/lists/listinfo/linux-usb-devel