On 16 Mar 10 11:00 PM, Kevin Steffensen wrote:

> 
> I'm currently writing a makefile for simulating a large processor
> system. I want to take advantage of my processor being multi-core and
> being able to compile several files at once using 'make -j'.
> 
> I ran into the following problem: when compiling different entities into
> the same library, the object .o files get written correctly, while the
> library summary .cf files are overwritten by the last compiled entity.
> This causes a simple 'make' to work fine but a 'make -j' to fail. The
> attached tarball has an example of this.
> Any opinions on how to solve this? 
> 
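
For concreteness, the race looks roughly like this hypothetical fragment
(file names invented, and assuming the GCC back end, where 'ghdl -a' emits
one .o per source file but all units of a library share a single .cf index
such as work-obj93.cf):

# Both recipes update work/work-obj93.cf; under 'make -j' the two
# analyses can run concurrently, and the last writer wins.
alu.o: alu.vhd
	ghdl -a --workdir=work alu.vhd

regfile.o: regfile.vhd
	ghdl -a --workdir=work regfile.vhd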

Use different target libraries, with makefile targets organized by target
library.  Most large designs can be partitioned into subsets that present
interfaces in a hierarchy, which serves as an abstraction mechanism.  The
same mechanism lets multiple designers work on a big design easily and can
give project management visibility.  The granularity depends on the number
of people you have working on it and/or the number of different roles.
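
A minimal sketch of that layout (library, file, and target names are all
invented): each library gets one serial recipe, so 'make -j' parallelizes
across libraries while each library's .cf index is only ever written by
one job at a time.

GHDL ?= ghdl

CORE_SRCS = alu.vhd regfile.vhd
IO_SRCS   = uart.vhd gpio.vhd

all: core.done io.done

# One recipe per library: files within a library are analyzed
# sequentially (list them in dependency order), while separate
# libraries run in parallel.  Recipe lines must be tab-indented.
core.done: $(CORE_SRCS)
	mkdir -p core
	$(GHDL) -a --work=core --workdir=core $(CORE_SRCS)
	touch $@

io.done: $(IO_SRCS)
	mkdir -p io
	$(GHDL) -a --work=io --workdir=io $(IO_SRCS)
	touch $@

clean:
	rm -rf core io *.done

If io uses units from core, make io.done depend on core.done and add
-Pcore to its ghdl invocation so the analyzer can find the other library.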

I'd expect the lack of multi-core simulation to be a bigger issue.  Run
times can generally be expected to outweigh compile times.

I once worked on a Verilog design for a processor where we got an amazing 2
seconds per vector (instruction).  We'd of course throw stimulus (vectors)
back and forth across hierarchical boundaries and do a lot of independent
testing, with test benches arbitrating stimulus consumption (and verifying
results).  The same thing can be done in VHDL.  You can also see the impetus
for faster simulation or co-simulation: a project can get stretched out long
enough in time that it becomes obsolete.

