BY: Rob MacAulay
DATE: 2007-07-20 08:23
SUBJECT: System level testing

Dear Suraj,

I have been having a quick play around with your Ruby-VPI tools.
First of all, I'll just say that you seem to have been able to
implement a remarkable amount using a fairly lightweight framework -
always a good sign!

I have a few general queries that are fairly broad in scope - so
broad, in fact, that you may ignore them if you wish! I know you are
busy with another release at the moment.

1) is it possible to use the ruby models of other parts of the design?

It is currently possible to use a ruby prototype for the block under
test, but what about a larger system?

It might, for example, be useful to implement testbench models in ruby.

There would seem to be two problems here:

1.a Currently Ruby-VPI is in control, and then passes control over
to the single verilog instantiation.

If there are multiple ruby models, can they live in the master
Ruby-VPI thread?

If so, I suppose they would need to be registered with the 'main'
control instance, which would need to call a 'simulate' method in
each independent module.
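To make the idea concrete, here is a hand-rolled sketch of that registration scheme in plain Ruby. The names (ModelRegistry, #step) are illustrative only and not part of Ruby-VPI's actual API; the only contract is that each model responds to the 'simulate' method suggested above.

```ruby
# Hypothetical sketch: a master controller owns several Ruby models
# and steps each of them on every simulation cycle.

class ModelRegistry
  def initialize
    @models = []
  end

  # Each registered model only needs to respond to #simulate(cycle).
  def register(model)
    @models << model
    model
  end

  # Called once per time step by the master Ruby-VPI thread.
  def step(cycle)
    @models.each { |m| m.simulate(cycle) }
  end
end

# A trivial testbench model sharing the master thread.
class Counter
  attr_reader :value
  def initialize; @value = 0; end
  def simulate(_cycle); @value += 1; end
end

registry = ModelRegistry.new
counter  = registry.register(Counter.new)
3.times { |c| registry.step(c) }
counter.value  # => 3
```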

1.b At the moment there is an all-or-nothing PROTOTYPE switch - this
would need finer-grained control, such as a PROTOTYPE level
(or see later)
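One possible shape for that finer-grained switch, sketched in plain Ruby: let the PROTOTYPE environment variable name the modules to prototype rather than acting as a global on/off. The helper name and the comma-separated convention are my own invention, not anything Ruby-VPI ships.

```ruby
# Hypothetical finer-grained PROTOTYPE control: the environment
# variable lists which modules should use their Ruby prototype,
# with 'all' preserving the old all-or-nothing behaviour.

def prototype?(module_name, env = ENV)
  setting = env['PROTOTYPE'].to_s
  return false if setting.empty?
  return true  if setting == 'all'   # old all-or-nothing switch
  setting.split(',').map(&:strip).include?(module_name)
end

prototype?('alu', 'PROTOTYPE' => 'alu,fifo')  # => true
prototype?('bus', 'PROTOTYPE' => 'alu,fifo')  # => false
prototype?('bus', 'PROTOTYPE' => 'all')       # => true
```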

2) How does one cope with multiple clocks?

I guess this might be achieved by advancing time on a granularity
determined by the fastest clock.

Incidentally, where is the time advance increment set? I haven't
spotted it anywhere so far.
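For what it's worth, one common refinement of the fastest-clock idea is to advance time by the GCD of all the clock half-periods and toggle whichever clocks are due. A standalone sketch in plain Ruby, independent of Ruby-VPI's actual scheduling:

```ruby
# Sketch of one way to drive several clocks: advance simulation time
# by the GCD of all half-periods, toggling each clock that is due.

Clock = Struct.new(:half_period, :level, :next_edge)

def run_clocks(half_periods, until_time)
  clocks = half_periods.map { |hp| Clock.new(hp, 0, hp) }
  step   = half_periods.reduce { |a, b| a.gcd(b) }
  edges  = Hash.new(0)   # half_period => number of edges seen
  t = 0
  while t < until_time
    t += step
    clocks.each do |c|
      next unless t >= c.next_edge
      c.level = 1 - c.level
      c.next_edge += c.half_period
      edges[c.half_period] += 1
    end
  end
  edges
end

# A 10ns-period clock and a 20ns-period clock, run for 20ns:
run_clocks([5, 10], 20)  # => {5=>4, 10=>2}
```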

3) Is it possible to have multiple verilog builds?

I guess this would be done by having multiple targets in the main
Rakefile, so that one could have rtl, post-layout SDF with min and
max timings, etc. These would replace the cver target.
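Something along these lines, perhaps - a hypothetical Rakefile fragment, where the task names and the compile_and_run helper are purely illustrative and not part of Ruby-VPI's shipped build files:

```ruby
# Hypothetical Rakefile sketch: one target per build flavour.
# compile_and_run is an assumed helper, not a real Ruby-VPI method.

SOURCES = FileList['rtl/*.v']

desc 'RTL simulation (replaces the single cver target)'
task :rtl do
  compile_and_run SOURCES
end

desc 'Post-layout simulation with minimum SDF timing'
task :sdf_min do
  compile_and_run SOURCES, :sdf => 'layout/design.sdf', :timing => :min
end

desc 'Post-layout simulation with maximum SDF timing'
task :sdf_max do
  compile_and_run SOURCES, :sdf => 'layout/design.sdf', :timing => :max
end
```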

4) At the moment, the use of the xUnit and RSpec frameworks is
really only oriented towards block-level testing.
(well, that's my impression at the moment, though I haven't really
played enough yet..)

What are your thoughts on system-level tests?

5) Is there a methodology for hierarchical application of
tests/specifications ?

What I'm thinking of here is that teams may develop IP blocks, along
with their associated RSpec and RUnit tests. Eventually, however,
these blocks need to be assembled into a chip.

It would be useful to have some means whereby these tests could be
incorporated into the system level tests. There seems to be a
mechanism for this in RSpec, where one can specify behaviours as
being shared.

However, this may not be so simple here, since one may have some
block connected to a peripheral-type bus, which is in turn driven by
a system bus, which is in turn driven by code running on a
processor. Hence the tests must be 'propagated' upwards to run in an
environment that may be different from the original.
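The core of the 'propagation' idea can be shown without any RSpec at all - a hand-rolled sketch in plain Ruby (not RSpec's shared-behaviour syntax), where the block-level checks are written once against an abstract bus interface and then re-run through a different environment. All class names here are invented for illustration:

```ruby
# The shared checks are written once, against anything that can
# respond to #write and #read.
SHARED_BLOCK_CHECKS = lambda do |bus|
  bus.write(0xAB)
  raise 'readback mismatch' unless bus.read == 0xAB
end

# Block-level environment: drives the peripheral bus directly.
class DirectBus
  def initialize; @queue = []; end
  def write(v); @queue << v; end
  def read;     @queue.shift; end
end

# System-level environment: the same checks, reached through a bridge
# (standing in for system bus -> peripheral bus -> block).
class BridgedBus
  def initialize(inner); @inner = inner; end
  def write(v); @inner.write(v); end
  def read;     @inner.read;     end
end

SHARED_BLOCK_CHECKS.call(DirectBus.new)                  # block level
SHARED_BLOCK_CHECKS.call(BridgedBus.new(DirectBus.new))  # system level
```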


Plus, here are a few things you might like to check out:

A) I note that in RSpec 1.05 #context has been replaced by #describe,
and #specify by #it

B) You might have a look at RushCheck, which can be used along with
RSpec.

http://rushcheck.rubyforge.org/index.html

RushCheck automatically generates a set of stimuli, or test cases,
according to various rules, which you can specify.

RushCheck is an implementation in Ruby of QuickCheck, a test
generator written in Haskell.

In fact, QuickCheck was originally developed as part of a tool for
describing hardware, called Lava.

http://www.cs.chalmers.se/~rjmh/QuickCheck/
http://www.cs.chalmers.se/~koen/Lava/index.html
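The generate-stimuli-from-rules idea can be hand-rolled in a few lines of plain Ruby (this is the general property-based-testing pattern, not RushCheck's actual API): draw random test cases from a constrained generator and check a property over each one.

```ruby
# Hand-rolled property check: random stimuli under a stated rule,
# with the first counterexample returned on failure.

def check_property(trials = 100, seed = 42)
  rng = Random.new(seed)   # fixed seed, so failures are reproducible
  trials.times do
    # Rule: 8-bit operands, as a bus interface might constrain them.
    a = rng.rand(256)
    b = rng.rand(256)
    # Property under test: masking to 8 bits wraps like modulo 256.
    sum = (a + b) & 0xFF
    return [false, [a, b]] unless sum == (a + b) % 256
  end
  [true, nil]
end

check_property  # => [true, nil]
```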

C) Have a look at Teal and Truss:

http://www.trusster.com/

Notice anything familiar?

The Teal VPI interface seems to cover a lot of the ground that
Ruby-VPI does.

I'd say it seems to be interfaced a little less elegantly, but it
does provide support for running multiple testbench threads, which
are used in the Truss framework. The latter is basically a
replacement for a verilog testbench - it is used to hook together
stimulators written in C++ to the device under test.

This is all fine and dandy, but what interests me is how to provide
better verification, not just to provide a better framework for
doing verification (if you see what I mean..)

Hence my interest in somehow promoting your test methodology up to
system level.

Anyway, enough ramblings for now..

Best Regards

Rob MacAulay
