On Tue, 2016-08-09 at 11:00 +1000, Thomas Dejanovic wrote:
> Hi all,
> 
> I have a simulation sending packets through a system. It works very
> well until we start to use some Xilinx primitives. If I break the
> system down and leave out the module that contains Xilinx primitives,
> the simulation runs at least an order of magnitude faster, and elapsed
> time increases linearly with simulated time. Once we instantiate code
> that contains Xilinx primitives, doubling the simulated time (i.e.
> doubling the number of packets sent) increases the elapsed wall-clock
> time by about a factor of 6.

How far can you simplify the design and still observe this behaviour?

For example, can you reduce that module to instantiate a single Xilinx
primitive, and hammer that with a loop in a minimal testbench?

Such a testcase would be useful.
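Something along these lines might do as a starting point — a minimal sketch only, assuming FDRE (a flip-flop from the unisim library) stands in for the suspect primitive; swap in whichever primitive your module actually uses, and adjust the loop count to get a measurable run time:

```vhdl
-- Minimal testbench sketch: one Xilinx primitive, hammered in a loop.
-- Assumptions: unisim is compiled and visible to GHDL, and FDRE is a
-- placeholder for the primitive you suspect.
library ieee;
use ieee.std_logic_1164.all;
library unisim;
use unisim.vcomponents.all;

entity tb_single_prim is
end entity;

architecture sim of tb_single_prim is
  signal clk : std_logic := '0';
  signal d, q : std_logic := '0';
begin
  dut : FDRE
    generic map (INIT => '0')
    port map (C => clk, CE => '1', R => '0', D => d, Q => q);

  stim : process
  begin
    for i in 0 to 10_000_000 loop
      clk <= '0';
      d   <= not d;
      wait for 5 ns;
      clk <= '1';
      wait for 5 ns;
    end loop;
    wait;  -- stop the simulation
  end process;
end architecture;
```

Timing a run of this, then one with the loop count doubled, should show whether elapsed time scales linearly or super-linearly with the single primitive in place.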

-- Brian


_______________________________________________
Ghdl-discuss mailing list
Ghdl-discuss@gna.org
https://mail.gna.org/listinfo/ghdl-discuss
