On 8/21/07, Calvin Wong (cawong) <[EMAIL PROTECTED]> wrote:
> There may also be cases where the IP you are instantiating has a smaller
> timescale than the one you are currently working with.  That means you
> may have to decrease the timescale on the advance_time method, which in
> turn will increase the number of events posted to the Verilog simulator
> as well.

Interesting.  I hadn't considered a case where multiple timescales
would be in use.  What actually happens in this situation?  Does the
simulator partition the simulation into different "timescale domains"
and simulate them independently?

Or does it define the length of a time step to be the smallest
timescale in use?  In that case, would Verilog code that uses a larger
timescale have to wait through more time steps for each of its delays?
For example:

`timescale 1ns / 1ns
module very_fast_domain;
  initial forever #1 $display("very_fast: %d", $time);
endmodule

`timescale 1ms / 1ms
module normal_speed_domain;
  initial forever #1 $display("normal_speed: %d", $time);
endmodule

Here, would you expect to see one million very_fast messages (one per
nanosecond) for every normal_speed message (one per millisecond)?

> > The main reason for this assumption is that it makes the threading
> > model simpler:
> >
> >  1. All threads execute in parallel within the *same* time step.
> >
> >  2. We only advance the entire simulation to the next time step
> >     when *all* threads are finished with the current time step.
> >
> > In your threading model, the rules are:
> >
> >  1. All threads (effectively) execute in parallel within the
> >     *same* time step.
> >
>
> Each thread executes within its own timestep.  The scheduler only
> ensures that each thread executes in the proper order.  In cases where
> two threads need to respond to the same triggering event, those two
> threads can operate in parallel until another blocking event, such as
> atXEdge or a simTimeWait, occurs.

I disagree because any thread has the ability to move the simulation
time forward without having to wait for the other threads.  For
example, consider this scenario:

  1. Simulation time is 5.

  2. Thread 1 is executing and is paused halfway.

  3. Thread 2 calls advance_time() and becomes paused.

  4. Simulation time is now 6.

  5. Thread 1 resumes execution (the second half is now executing in
     time 6 whereas the first half executed in time 5).

  6. Thread 2 resumes execution.
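
To make that interleaving concrete, here is a minimal, self-contained
Ruby sketch of the same six steps.  It does not touch Ruby-VPI at all:
the global $sim_time and the advance_time method below are stand-ins
for the real simulation time and the advance_time call we have been
discussing, and the Queue hand-offs exist only to force the ordering
above.

# plain-Ruby mock (not Ruby-VPI): $sim_time and advance_time are
# stand-ins, and the two queues just force the ordering of steps 1-6
$sim_time = 5
$sim_lock = Mutex.new

def advance_time
  # any thread may bump the global simulation time on its own
  $sim_lock.synchronize { $sim_time += 1 }
end

midway  = Queue.new   # "thread 1 is paused halfway"
resumed = Queue.new   # "thread 2 has advanced the time"

thread_1 = Thread.new do
  puts "thread 1, first half,  time = #{$sim_time}"   # prints 5
  midway.push :paused        # step 2: pause halfway
  resumed.pop                # wait until thread 2 has run
  puts "thread 1, second half, time = #{$sim_time}"   # prints 6
end

thread_2 = Thread.new do
  midway.pop                 # wait until thread 1 is paused
  advance_time               # steps 3 and 4: time becomes 6
  resumed.push :go
end

[thread_1, thread_2].each(&:join)

The output shows thread 1's second half running at time 6 even though,
from its point of view, it never left time step 5 -- which is exactly
the race I am worried about.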

> >  2. Any thread may advance the entire simulation to the next time
> >     step, without having to wait for the other threads to finish.
> >     This may cause race conditions where some threads initially
> >     see one picture of the simulation database (the current time
> >     step) and later see another picture (a future time step)--all
> >     the while thinking that they are executing inside the same
> >     time step.
> >
> > Any thoughts on this?
>
> To answer the race condition issue, the Verilog simulator itself should
> resolve any race conditions.  In our testbench environment, all signal
> changes always occur with some time delay after the atXEdge statement.
> This ensures that other threads will sample the same value at the same
> edge.

I see this race condition as a design flaw of the threading model rather
than a problem in the runtime scheduling & execution of threads.  (See
the scenario above.)

> To handle multiple DUTs, one can employ the same idea of using the
> global thread manager.  Each prototype will just have to execute
> the simulation in a thread using the common atXEdge, atPosEdge,
> atNegEdge, and wait routines.

Good.  I arrived at the same conclusion yesterday when I discovered
that, due to the threading model, the feign! method can be eliminated
from the prototype.  As a result, I can simply load multiple
prototypes, each with its own set of VPI::process() blocks, into the
simulation and just let them run.
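
To sketch what I mean (this is not the real Ruby-VPI API -- just a
plain-Ruby mock in which a hypothetical global manager stands in for
the thread manager and collects the blocks that VPI::process would
normally register):

# hypothetical mock, not Ruby-VPI: one global manager collects process
# blocks from several prototypes and runs each block in its own thread
class MockThreadManager
  def initialize
    @processes = []
  end

  # stand-in for a VPI::process { ... } registration
  def process(name, &body)
    @processes << [name, body]
  end

  def run
    @processes.map { |name, body| Thread.new(name, &body) }.each(&:join)
  end
end

MANAGER = MockThreadManager.new

# the prototype for the first DUT registers its own process blocks...
MANAGER.process("dut1.driver")  { |n| puts "#{n} running" }
MANAGER.process("dut1.monitor") { |n| puts "#{n} running" }

# ...and the prototype for the second DUT does the same
MANAGER.process("dut2.driver")  { |n| puts "#{n} running" }

MANAGER.run

The point is that no prototype needs to know about the other DUTs; the
common atXEdge, atPosEdge, atNegEdge, and wait routines you describe do
all of the coordination.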

> The main thrust behind using this model is that I was trying to use
> Ruby as an extension of Verilog while keeping most of the Verilog
> behavioral framework intact.

Understood.

> I was hesitant to learn Ruby at first, coming from a Perl background, but
> after 2 days I consider myself pretty proficient at it.  It's a really good
> object-oriented language and I hope more people will adopt it.

I'm in the opposite situation right now, coming from a Ruby background
and having to learn Perl -- while being just as hesitant, of course.
;-)
