> It seems that the parallel make fails on 8 GB machines.

I think your first sentence overstates the determinism of the problem a bit.
I ran a normal, default build on an 8 GB machine last week and had no problem.
There must be an environmental problem, but I don't think we fully
understand it yet.

The aggressiveness of the make parallelism is set in core/sqf/sqenvcom.sh,
which derives the parallel factor from the number of CPUs on your machine:

# Set default build parallelism
# Can be overridden on make commandline
cpucnt=$(grep processor /proc/cpuinfo | wc -l)
#     no number means unlimited, and will swamp the system
export MAKEFLAGS="-j$cpucnt"
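
As the comment says, you can override this on the make command line, since
command-line flags take precedence over MAKEFLAGS. A minimal example
(standard GNU make behavior, nothing Trafodion-specific):

# Cap this run at 4 parallel jobs, regardless of what sqenvcom.sh exported
make -j4

# Or override the environment variable for a single invocation
MAKEFLAGS=-j4 make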

If that calculation comes out wrong on your machine, that could cause a
problem. Note also that it only looks at CPU count, not memory, so on a
machine with many cores and little RAM the default may simply be too
aggressive.
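
On the -l question in Gunnar's mail below: GNU make's -l (--max-load)
option stops spawning new jobs while the load average is above the given
value, so it could serve as a safety valve alongside -j. Here is a rough
sketch of a memory-aware default for sqenvcom.sh; it is untested against
the Trafodion build, and the 2 GB-per-job figure is a guess rather than a
measured number:

# Hypothetical alternative to the MAKEFLAGS calculation above:
# allow one job per 2 GB of RAM, but never more jobs than CPUs
cpucnt=$(grep -c processor /proc/cpuinfo)
# MemTotal in /proc/meminfo is reported in kB; convert to whole GB
memgb=$(awk '/MemTotal/ {print int($2 / 1024 / 1024)}' /proc/meminfo)
jobs=$(( memgb / 2 ))
[ "$jobs" -gt "$cpucnt" ] && jobs=$cpucnt
[ "$jobs" -lt 1 ] && jobs=1
# Cap the load average at the CPU count as a second safety valve
export MAKEFLAGS="-j$jobs -l$cpucnt"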

--Steve


> -----Original Message-----
> From: Gunnar Tapper [mailto:[email protected]]
> Sent: Monday, March 7, 2016 9:35 AM
> To: [email protected]
> Subject: Parallel Make Failures
>
> Hi,
>
> It seems that the parallel make fails on 8 GB machines. At least, Nitin and
> I both ran into make failures that did not appear when running serial make.
> I've also seen similar failures when building the code on 12 GB machines.
>
> Based on previous discussions, the Trafodion Contributor Guide recommends
> rerunning make a few times if you run into issues.
>
> I mostly wonder if there's a way to reduce the aggressiveness of the make
> in general. Could we, for example, come up with a table that correlates
> system size to a value for the -l option, or something similar?
>
> --
> Thanks,
>
> Gunnar
> *If you think you can you can, if you think you can't you're right.*