On 1/1/07, Dieter <[EMAIL PROTECTED]> wrote:

Good point about minimizing other PCI traffic.  So it would be preferred
to use a disk controller in the chipset (or PCIe?), run the system
under test in single user mode, etc.

Exactly.


I haven't debugged a PCI bus, but back in the dark ages when I was doing
this sort of thing, I often ran into cases where I ran out of resolution
in the logic analyzer, and the logic states displayed were logically impossible.
I had to switch over to the oscilloscope and find the glitch, or the bogus
voltage level that the logic analyzer doesn't show you.  So if the
hardware can sense any of this non-binary stuff, it would be very useful
to provide it to the analyzer.

In the future, perhaps we can design another board that is capable of
sensing a few intermediate levels, not a full-blown ADC.  But that
would be a separate board, not OGD1, and right now, all we have is
OGD1.

> > tim> The easiest interface (and arguably the most cross-platform) would be
> > tim> a simple terminal interface on the serial line.  Fire up minicom or
> > tim> hyperterminal or whatever...  to 8N1 @ 115200 and go.
> > tim>
> > tim> Easy to program, easy to use.  No GUI needed.
> >
> > The command interface could probably be CLI.  While the data display
> > could also be text based, you can display a lot more data using
> > graphics, so the data display should probably be graphical.
>
> Remember that 19200 bits/sec isn't very fast.  I'd hate to slow it
> down by using an ASCII interface.

A CLI or text-based display does not imply a 19200 bits/sec limit.
I can send text to an xterm window way faster than 19200 bits/sec.

The limit is on the RS-232 connection to the tracer.  I had thought
the suggestion was to put the CLI into the tracer.  I much prefer
putting the CLI into some software on the workstation.


The data capture running on the OGD can only talk to the analyzer
software at RS-232 speeds (officially capped at 20,000 bits/sec,
although most systems these days run much faster), but the analyzer
software is likely to have a faster path to the display: most likely
either a video card in the same machine or an Ethernet connection to a
machine with the display.
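To put rough numbers on why the serial hop is the bottleneck, here's some
back-of-the-envelope arithmetic (assuming 8N1 framing, i.e. 10 bits on the
wire per payload byte, and a made-up 8-byte trace record; neither number is
a settled design):

```python
# Rough serial-throughput arithmetic.
# Assumptions: 8N1 framing = start + 8 data + stop = 10 bits per byte,
# and a hypothetical 8-byte record per captured bus event.
baud = 115200                     # a common fast RS-232 rate
bytes_per_sec = baud // 10        # 11520 payload bytes/sec
record_size = 8                   # bytes per trace record (assumption)
records = 1_000_000               # a million captured events
seconds = records * record_size / bytes_per_sec
print(round(seconds))             # ~694 s, i.e. over 11 minutes
```

So even at 115200 baud, shipping a big trace takes minutes, while the local
path from the analyzer software to the display is effectively free.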

The path from the software to the display is much faster than we need.
It's LOCAL.


My suggestion is to do the time-honored "build one to throw away".
A CLI would be quick to get running, and using something like
gnuplot for graphical output would be easy as well.
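To show how little glue the gnuplot route takes: the throwaway display tool
could just dump (time, value) pairs as plain text in gnuplot's column
format.  A sketch (the two-column layout is an assumption, not a decided
format):

```python
import io

def dump_for_gnuplot(records, out):
    """Write (time, value) pairs, one per line, as whitespace-separated
    columns.  Plot the result with, e.g.:
        gnuplot> plot "trace.txt" using 1:2 with steps
    """
    for t, v in records:
        out.write(f"{t} {v}\n")

buf = io.StringIO()
dump_for_gnuplot([(0, 1), (5, 0), (9, 1)], buf)
print(buf.getvalue(), end="")
```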

With certain distributed applications I've written, I have made the
protocol between nodes ASCII CLI based.  That was a great way to get
things tested and off the ground before I had a GUI.  Of course, in
this case, the ASCII protocol wasn't a throughput burden, because all
of the high-traffic data was sent binary.  All ASCII did was make it a
hell of a lot easier to debug.

With the tracer, I don't think we need this, however.

> > Question is what format to use for data transfer.  Can we assume an
> > 8 bit data path with hardware (e.g. RTS/CTS) flow control?  If so,
> > then data transfers will be faster.  Or do we need to allow for a
> > 7 bit data path, and reserve control-s and control-q for flow control?
>
> We're in complete control over this.  We don't even need flow control.

Flow control is your friend.  Flow control avoids many headaches.
We need flow control.  But that's easy: UARTs have flow control built in.

I'm not saying that we shouldn't use it.  I'm just saying that we
could still make it work without if we had to.
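For what it's worth, turning on 8N1 with RTS/CTS on the workstation side is
only a few lines of termios work.  A minimal sketch, exercised here on a
pseudo-terminal pair since no real tracer is attached (the baud rate is a
placeholder):

```python
import pty
import termios

def configure_8n1_rtscts(fd, baud=termios.B19200):
    # 8 data bits, no parity, 1 stop bit, RTS/CTS hardware flow control.
    iflag, oflag, cflag, lflag, ispeed, ospeed, cc = termios.tcgetattr(fd)
    cflag &= ~(termios.PARENB | termios.CSTOPB | termios.CSIZE)
    cflag |= termios.CS8 | termios.CRTSCTS
    termios.tcsetattr(fd, termios.TCSANOW,
                      [iflag, oflag, cflag, lflag, baud, baud, cc])

# Demonstrate on a pty; a real program would open the serial device instead.
master, slave = pty.openpty()
configure_8n1_rtscts(slave)
cflag = termios.tcgetattr(slave)[2]
print(bool(cflag & termios.CS8), bool(cflag & termios.PARENB))
```

On a real port you'd open `/dev/ttyS0` (or whatever the tracer is plugged
into) instead of the pty.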

But if someone is using this across some cu/telnet/ssh/whatever link,
are they likely to have an 8-bit clean line?

It doesn't matter.  It's just a serial line from the tracer to a
workstation, and on the workstation is an application that talks
directly (well, via the OS driver) to the serial port.  As long as we
stick to 8-bit bytes, we're set.  No one will be using cu/telnet/ssh.
They might try using tip or some terminal program.  That would talk to
the serial port.  But they'd be disappointed with the "garbage"
characters they got.  :)

No one should ever try using a terminal program on the serial port to
talk to the tracer.  If they want an ASCII interface, they need an
application to talk to the serial port and provide an ASCII interface.

We could split the application into a "get the data from the OGD" part and
a "display the data" part.  The oddball link could go between these two parts,
reducing complexity in the OGD.  The downside is that you'd need a support
computer with the system under test, but we're likely to have that anyway.

Always assume there's a support computer, and assume it has a serial port.

The "get the data from the OGD" part could be a simple CLI program.
It talks to the OGD over RS-232, and outputs data to stdout.
For working remotely, just redirect stdout to a disk file,
then transfer the file via ftp, uucp, or whatever.
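A sketch of what that capture program might write to stdout: fixed-size
binary records, which are trivial to redirect to a file and seek through
later.  (The 32-bit timestamp + 32-bit sample layout is purely an
assumption for illustration, not a decided format.)

```python
import io
import struct

# Hypothetical record layout: little-endian 32-bit timestamp, 32-bit sample.
RECORD = struct.Struct("<II")

def write_record(out, timestamp, sample):
    # Fixed-size records make the file easy to index: record N is at N * 8.
    out.write(RECORD.pack(timestamp, sample))

def read_records(inp):
    # Yield (timestamp, sample) tuples until EOF.
    while chunk := inp.read(RECORD.size):
        yield RECORD.unpack(chunk)

buf = io.BytesIO()
write_record(buf, 100, 0xDEADBEEF)
write_record(buf, 101, 0xCAFEF00D)
buf.seek(0)
print(list(read_records(buf)))   # [(100, 3735928559), (101, 3405705229)]
```

In real use the writer's `out` would be stdout (`capture > trace.bin`) and
the reader's `inp` would be the saved file.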

The "display the data" part could start out as a CLI program and
later be rewritten as a GUI.

Consider what you type and see in the CLI to be "unrelated" to what
data goes back and forth between your workstation and the tracer.


Once the file format is designed, a "fake" data file could be
generated, and the display application could be developed using that.
So the display application developer doesn't need access to the
hardware.
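For instance, a tiny generator could fake a plausible trace file for the
display developer to work against (again, the 8-byte record layout is a
placeholder, not a decided format):

```python
import io
import random
import struct

def make_fake_trace(out, n=1000, seed=42):
    # Monotonic timestamps with random gaps, random 32-bit "bus" samples.
    # Seeded so the fake file is reproducible across runs.
    rng = random.Random(seed)
    t = 0
    for _ in range(n):
        t += rng.randint(1, 10)
        out.write(struct.pack("<II", t, rng.getrandbits(32)))

with open("fake_trace.bin", "wb") as f:
    make_fake_trace(f, n=500)
```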

I hadn't thought of that.  There should be a way to dump the entire
trace to a file and then examine it later.  Now, THERE's an instance
when some simple compression would come in handy.
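Even stock zlib does well on this kind of data, since an idle or repetitive
bus produces highly redundant records.  A quick sketch (the repeated 8-byte
record is a stand-in for a real trace):

```python
import struct
import zlib

# An "idle bus" trace: the same 8-byte record repeated many times.
record = struct.pack("<II", 100, 0xDEADBEEF)
trace = record * 10_000            # 80,000 bytes raw

packed = zlib.compress(trace)
restored = zlib.decompress(packed)
assert restored == trace           # lossless round trip
print(len(trace), len(packed))     # compressed size is a tiny fraction
```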

--
Timothy Miller
http://www.cse.ohio-state.edu/~millerti
_______________________________________________
Open-graphics mailing list
[email protected]
http://lists.duskglow.com/mailman/listinfo/open-graphics
List service provided by Duskglow Consulting, LLC (www.duskglow.com)
