Agreed.

On Wed, Jan 19, 2011 at 1:50 PM, Ali Saidi <sa...@umich.edu> wrote:

>
> I would not complain if the build times went up slightly, as long as I
> didn't need 8GB of RAM to do a -j 6 build. ;)
>
> Ali
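
For illustration, here is a back-of-the-envelope way to pick a -j value from
available memory. The 1.5 GB per-job figure is an invented assumption for the
sketch, not a measured number for the m5 ISA files:

```python
# Back-of-the-envelope sketch: cap build parallelism so peak memory stays
# bounded.  The per-job figure below is an assumption for illustration,
# not a measurement of what the big generated ISA files actually need.
mem_total_gb = 8.0   # machine RAM
per_job_gb = 1.5     # assumed peak footprint of one compiler process
jobs = max(1, int(mem_total_gb // per_job_gb))
print(jobs)          # 5 with these numbers
```

With smaller translation units, per_job_gb drops and the same machine can
sustain a higher -j without swapping.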
>
>
>
> On Wed, 19 Jan 2011 09:02:13 -0800, nathan binkert <n...@binkert.org>
> wrote:
>
>> I don't think anyone would have a problem if you did it, no?  I've
>> done many things because they annoyed *me*.  The question is whether
>> it's worth it.  For someone who rebuilds ISAs all the time, I can
>> imagine it's worth it even if it did increase overall build time
>> slightly.  I think a neutral build time, or even a slight increase,
>> is a fair trade-off if it means your machine can work on a file in a
>> better way.  My guess is that the major concern was compiling the
>> same thing many times.  One thing to consider while you're doing this
>> is precompiled headers.  I'm pretty sure gcc supports them, and in
>> this case they could make a difference.  (Of course, we need the
>> build to keep working without PCH.)
>>
>>  Nate
>>
>> On Wed, Jan 19, 2011 at 2:21 AM, Gabe Black <gbl...@eecs.umich.edu>
>> wrote:
>>
>>>        I know we've talked about this before, but another reason for
>>> breaking up the ISA generated files occurred to me while waiting for
>>> X86_SE to build. On a machine with a moderate amount of memory,
>>> compiling, say, 8-way parallel works just fine: the memory footprint
>>> fits, and there's enough work to keep the CPUs busy. If you happen to
>>> hit 2 or 3 (or 4 or 5) of the monster ISA files at once, though, the
>>> memory requirements go up substantially and you can blow out your
>>> memory. This has happened to me in the past, and I've ended up
>>> swapping so badly that I had to ssh in from another machine to kill
>>> the build because X stopped responding. Needless to say, build time
>>> suffers as well. Breaking these really big files into smaller chunks
>>> would keep the memory ceiling of a build at a reasonable level
>>> throughout. Otherwise, once you hit that bottleneck, the only way
>>> past it is to reduce parallelism for the whole build, which makes
>>> large parts of it suboptimal. Of course, this all sounds nice but is
>>> a pain, or at least a lot of work, to implement, and there was a
>>> sentiment (quite possibly justified) that the shorter compile times *
>>> more compiles comes out in the wash. I think this issue, however, is
>>> harder to ignore and a good reason to give this more thought.
>>>
>>> Gabe
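
To make the chunking idea concrete, here is a rough sketch of how a code
generator might shard one huge generated source into several smaller
translation units. The names and chunk size are purely illustrative, not how
the ISA parser actually works:

```python
# Illustrative sketch: shard generated function bodies across several .cc
# files so no single translation unit dominates compiler memory.  Names
# and the chunk size are made up; this is not the real ISA parser.
def split_chunks(bodies, max_per_file):
    """Group generated code bodies into fixed-size per-file chunks."""
    return [bodies[i:i + max_per_file]
            for i in range(0, len(bodies), max_per_file)]

bodies = ["void decode_%d() {}" % i for i in range(10)]
for n, chunk in enumerate(split_chunks(bodies, 4)):
    print("decoder_%d.cc: %d bodies" % (n, len(chunk)))
```

With, say, 10 bodies and 4 per file, this yields three files of 4, 4, and 2
bodies, so the scheduler can interleave small compiles instead of stacking
several giant ones.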
>>> _______________________________________________
>>> m5-dev mailing list
>>> m5-dev@m5sim.org
>>> http://m5sim.org/mailman/listinfo/m5-dev
>>>
>>>
>