Hi all,

I have a simulation sending packets through a system. It works very
well until we start to use some Xilinx primitives. If I break the
system down and leave out the module that contains the Xilinx
primitives, the simulation runs at least an order of magnitude faster,
and elapsed (wall-clock) time increases linearly with simulated time.
Once we instantiate code that contains Xilinx primitives, doubling the
simulated time (i.e. doubling the number of packets sent) increases
the elapsed time by about a factor of 6.

For example: sending 128 packets takes about 2 minutes of wall-clock
time, 256 packets => 12 minutes, 512 packets => ~72 minutes, and 1024
packets takes longer than I was willing to wait (jobs were killed
after running for more than 350 minutes).
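For what it's worth, the numbers above imply a consistent superlinear growth rate; a quick sketch (plain Python, using only the timings reported above) to quantify it:

```python
import math

# (packets sent, elapsed minutes) as reported above.
runs = [(128, 2.0), (256, 12.0), (512, 72.0)]

# Ratio of elapsed times between consecutive doublings of the workload.
ratios = [t2 / t1 for (_, t1), (_, t2) in zip(runs, runs[1:])]
print(ratios)  # each doubling multiplies elapsed time by ~6

# A factor of 6 per doubling corresponds to elapsed time ~ N^log2(6).
exponent = math.log(ratios[0], 2)
print(round(exponent, 2))  # ~2.58, i.e. between quadratic and cubic
```

So the runtime looks roughly O(N^2.6) in the packet count, which is why the 1024-packet job never finished in reasonable time.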

So my question is, has anyone else observed this behavior?

Is there any way for me to profile the simulation and find where it is
spending its time?
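(In case it helps anyone answering: one approach I'm considering is gprof. A rough sketch, assuming GHDL was built with the GCC backend, which lets you pass compiler/linker flags through with -Wc/-Wl; the entity name tb_top and file names here are placeholders, not from my actual design:)

```shell
# Analyze as usual (file names are placeholders).
ghdl -a design.vhd tb_top.vhd

# Elaborate with -pg passed to both the compiler and the linker
# so the resulting binary records profiling data.
ghdl -e -Wc,-pg -Wl,-pg tb_top

# Run the simulation; a gmon.out file is written on exit.
./tb_top --stop-time=1ms

# Produce a flat profile and call graph from the recorded data.
gprof tb_top gmon.out > profile.txt
```

I don't know whether this works with the mcode backend, which JIT-compiles the design rather than producing a standalone binary.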


Best Regards, Thomas D
-- 
Sent from Tom's Fortress of Solitude!

_______________________________________________
Ghdl-discuss mailing list
Ghdl-discuss@gna.org
https://mail.gna.org/listinfo/ghdl-discuss
