On Thu, 2010-07-15 at 08:21 -0400, Jeff Squyres wrote:
> On Jul 15, 2010, at 8:22 AM, nadia.derbey wrote:
>
> > So the solution is:
> > 1. leave the intermediate event_type declared as an int.
> > 2. then:
> > . either cast it to ibv_event_type when c
--output-filename is global to the job: even if it
is given on several lines of an application context, with different
values, the last value is the one that is actually used as the output
file prefix.
Regards,
Nadia
--
nadia.derbey <nadia.der...@bull.net>
Hi,
When using the carto/file module with a syntactically incorrect carto
file, we get stuck in opal_carto_base_select().
The attached trivial patch fixes the issue.
Regards,
Nadia
--
nadia.derbey <nadia.der...@bull.net>
Fix a hang in carto_base_select if carto_module_init fails
Hi list,
I'm hitting a limitation in paffinity/hwloc with CPU numbers >= 64.
In opal/mca/paffinity/hwloc/paffinity_hwloc_module.c, module_set() is
the routine that sets the calling process affinity to the mask given as
parameter. Note that "mask" is an opal_paffinity_base_cpu_set_t (so we
allow
Hi,
In v1.5, when mpirun is called with both the "-bind-to-core" and
"-npersocket" options, and the npersocket value leads to less procs than
sockets allocated on one node, we get a segfault
Testing environment:
openmpi v1.5
2 nodes with four 8-core sockets each
mpirun -n 10 -bind-to-core
Hi,
If a job is launched using "srun --resv-ports --cpu_bind:..." and Slurm
is configured with:
TaskPlugin=task/affinity
TaskPluginParam=Cpusets
each rank of that job is in a cpuset that contains a single CPU.
Now, if we use carto on top of this, the following happens in