Sébastien Bourdeauducq wrote:
> Didn't work. Also tried deglitching (patch attached) without any
> visible improvement.

I'll have to give it a try. Meanwhile, I looked for evidence with the
regular SoC that "clock compression" events do indeed happen. First,
a view of what packets normally look like:

http://downloads.qi-hardware.com/people/werner/m1/usb/usb-D1rx_active-D15rx_pending.png

D1 is rx_active. It looks a bit glitchy, but that's Rigol's fault.
D15 is rx_pending. The analog channels CH1 and CH2 are on USBB_VP and
USBB_VM, so they're nice clean digital signals, even if they don't
always look like that. Not sure why CH2 has so much ringing. The
packet shown is a NAK.

We can see that rx_pending normally rises after the last bit, just
about when the SE0 begins. rx_active drops upon seeing the end of
the SE0.
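The timing relationship above can be sketched in a few lines. This is a hypothetical illustration, not the Milkymist gateware: it scans a stream of (D+, D-) samples, one per bit time, for the USB end-of-packet sequence (two bit times of SE0, both lines low, followed by one J state). The function name and sample encoding are my own for the sketch.

```python
# Hypothetical sketch: locate a USB full-speed end-of-packet (EOP)
# in a stream of (D+, D-) line-state samples, one sample per bit time.
# An EOP is SE0 (0, 0) for two bit times, then a J state (D+ high,
# D- low).  rx_active would drop at the end of the SE0.

def find_eop(samples):
    """Return the index of the first bit after a complete EOP
    (SE0, SE0, J), or None if no EOP is present."""
    for i in range(len(samples) - 2):
        se0_a = samples[i] == (0, 0)
        se0_b = samples[i + 1] == (0, 0)
        j = samples[i + 2] == (1, 0)  # full-speed J: D+ high, D- low
        if se0_a and se0_b and j:
            return i + 3
    return None

# A handshake packet such as a NAK ends with SE0, SE0, J:
packet_tail = [(1, 0), (0, 1), (0, 0), (0, 0), (1, 0)]
print(find_eop(packet_tail))  # -> 5
```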

Nothing obviously wrong here, though I wonder if we couldn't make
rx_pending happen a little sooner. Every nanosecond counts :-)

Now, the same signal with a bit of patience and a pattern trigger set
to rx_active & rx_pending:

http://downloads.qi-hardware.com/people/werner/m1/usb/usb-drift-1.png

rx_pending now rises one bit time earlier. rx_active behaves normally.
So it seems that the sample clock has jumped one bit time.
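For scale, a back-of-the-envelope sketch of how fast such a slip accumulates. The numbers below are assumed for illustration, not measurements from the M1: if the sample clock runs fast by some fraction, the sampling point drifts by that fraction of a bit time per bit, so a one-bit compression needs roughly 1/error bits to build up.

```python
# Illustrative arithmetic (assumed numbers, not M1 measurements):
# a fractional clock error of e drifts the sampling point by e bit
# times per bit, so one full bit of slip accumulates after 1/e bits.

def bits_per_slip(clock_error_ppm):
    """Bits needed for the sampling point to drift one full bit time,
    given a fractional clock error in parts per million."""
    return 1e6 / clock_error_ppm

# E.g. a 2500 ppm (0.25%) mismatch slips one bit every 400 bits:
print(bits_per_slip(2500))  # -> 400.0
```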

I then added a duration condition to the pattern trigger and searched
for the longest event. Two bits is the worst case I could find this
way:

http://downloads.qi-hardware.com/people/werner/m1/usb/usb-drift-2.png

So the problem is real. I don't have a good estimate of the rate at
which clock compression happens, but it seems to be quite frequent.
I.e., if I set the duration such that it also catches 1 bit jumps,
the scope triggers faster than I can count.

- Werner
_______________________________________________
http://lists.milkymist.org/listinfo.cgi/devel-milkymist.org
IRC: #milkymist@Freenode
