On Mon, 2009-07-13 at 08:58 +0200, [email protected] wrote:
> Quoting Thomas Sailer <[email protected]>:
> > I will convert these blocks to sensitized processes, and report back
> > with the results.
> 
> VITAL simulations are *very* slow with GHDL because there is no optimized
> version of the VITAL library.  Feel free to post your improvements.

That's not the problem in this case: it's a pre-synthesis testbench with
just a few explicitly instantiated VITAL gates that are mostly inactive.

I have changed the VITAL models to use sensitized processes for the
signal/wire delays. I now have 31 non-sensitized processes remaining.

But grt still spends 54% of the cycles iterating over the
non-sensitized processes in State_Wait, only to decide that they do not
yet need to be resumed.
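To make the cost concrete, here is a sketch (in Python, for illustration only; grt itself is Ada, and I have not read its actual source) of what the profile suggests is happening: every cycle, the runtime walks the full list of non-sensitized processes just to decide that most of them need not resume yet.

```python
def naive_resume_scan(processes, now):
    """Linear scan over all non-sensitized processes each cycle.

    Even completely idle processes are visited on every iteration,
    only to conclude that their wakeup time has not arrived yet --
    which is where the 54% of cycles would go.
    """
    resumed = []
    for proc in processes:  # O(N) per cycle, N = all non-sensitized processes
        if proc["wake_time"] <= now:
            resumed.append(proc["name"])
    return resumed
```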

No VITAL-specific procedure even shows up in the profile (i.e. each
uses less than 0.01%).

So the root problem, IMO, is that the data structures used in grt cause
non-sensitized processes to consume simulation time even when they're
idle (i.e. waiting for some event). grt should instead keep, with every
signal, a list of non-sensitized processes to wake up in case of an
event (as it already does for sensitized processes, except that this
list must be updated dynamically at runtime), plus a sorted list of
wakeup times. Each wait would then be somewhat more expensive, since it
has to update those two data structures, but there would be no need to
continually iterate over the list of non-sensitized processes, so
inactive non-sensitized processes would consume no CPU at all.
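The proposed bookkeeping could be sketched roughly like this (again Python for illustration; all names here are my own, not grt's, and the wait/event plumbing is simplified):

```python
import heapq
from collections import defaultdict

class WaitScheduler:
    """Sketch of the proposed scheme: each wait registers the process
    on the signals it waits on, and, if it has a timeout, pushes the
    wakeup time into a min-heap. Idle processes then cost nothing
    until a signal event or their timeout actually fires."""

    def __init__(self):
        self.waiters = defaultdict(set)  # signal -> processes waiting on it
        self.timeouts = []               # min-heap of (wake_time, process)

    def wait(self, process, signals=(), timeout=None):
        # wait becomes somewhat more expensive: both structures
        # must be updated dynamically at runtime.
        for sig in signals:
            self.waiters[sig].add(process)
        if timeout is not None:
            heapq.heappush(self.timeouts, (timeout, process))

    def signal_event(self, sig):
        # Wake only the processes registered on this signal; nobody
        # else is even looked at. (A real implementation would also
        # cancel any pending timeout for the woken processes, e.g.
        # by lazy deletion -- omitted here for brevity.)
        return self.waiters.pop(sig, set())

    def advance_to(self, now):
        # Pop only the timeouts that have actually expired; the heap
        # top tells us immediately whether anything needs resuming.
        woken = set()
        while self.timeouts and self.timeouts[0][0] <= now:
            _, process = heapq.heappop(self.timeouts)
            woken.add(process)
        return woken
```

With this, the per-cycle cost is proportional to the number of processes that actually wake up, not to the total number of non-sensitized processes.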

I would try to implement this myself if my Ada knowledge were not
nonexistent.

Tom

PS: another 26% goes into grt__stack2__allocate, which is called mostly
by std_ulogic_vector concatenation and numeric_std. Roughly 10% goes
into grt__signals__find_next_time; the rest is mostly std_logic_1164
and numeric_std stuff.



_______________________________________________
Ghdl-discuss mailing list
[email protected]
https://mail.gna.org/listinfo/ghdl-discuss
