So the default is slots = "CPU"s/2, but there is some leeway in what is
considered to be a CPU? So far I have not found the actual formula
documented anywhere.
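(As a worked example of what I mean: on a node where lscpu reports 4 CPUs
with 2 threads per core, counting one slot per physical core would give
4/2 = 2 slots, which would match slots = "CPU"s/2 - but that is a guess on
my part, not something I have seen documented.)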
Thanks,
David Mathog
On Mon, Jun 8, 2020 at 11:37 AM Ralph Castain via users
wrote:
>
> Note that you can also resolve it
Byte Order: Little Endian
CPU(s): 4
On-line CPU(s) list: 0-3
Thread(s) per core: 2
$ gcc --version
gcc (GCC) 8.3.1 20190507 (Red Hat 8.3.1-4)
Thanks,
David Mathog
On 30-Apr-2019 11:39, George Bosilca wrote:
Depending on the alignment of the different types there might be small
holes in the low-level headers we exchange between processes. It should
not be a concern for users.
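As a minimal illustration of the kind of alignment hole in question (a
made-up struct for the example, not the actual Open MPI header layout):

#include <stdio.h>
#include <stddef.h>

/* On most ABIs there are 3 padding bytes after 'tag', because 'len'
 * must be 4-byte aligned.  If the whole struct is shipped as raw bytes,
 * those padding bytes are never written by the program, so valgrind
 * flags them as uninitialized even though nothing is actually wrong. */
struct hdr {
    char tag;   /* 1 byte, followed by 3 bytes of compiler padding */
    int  len;   /* 4 bytes, 4-byte aligned */
};

int main(void)
{
    printf("sizeof(struct hdr) = %zu (payload is only 5 bytes)\n",
           sizeof(struct hdr));
    printf("offsetof(len)      = %zu\n", offsetof(struct hdr, len));
    return 0;
}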
valgrind should not stop on the first detected issue except
if --exit-on-first-error is set.
The suppression file supplied with the release does not prevent that.
How do I work around this?
Thank you,
David Mathog
mat...@caltech.edu
Manager, Sequence Analysis Facility, Biology Division, Caltech
-m32
on the gcc command lines. Generally one does that by something like:
export CFLAGS=-m32
before running configure to generate Makefiles.
Regards,
David Mathog
mat...@caltech.edu
Manager, Sequence Analysis Facility, Biology Division, Caltech
along with a 32 bit version of OpenMPI. At least that
approach has worked so far for me.
Regards,
David Mathog
mat...@caltech.edu
Manager, Sequence Analysis Facility, Biology Division, Caltech
#define EMM_FLT4(a) (((__uni16)(a)).f4)
fprintf(stderr, "DEBUG xEV values %f %f %f %f\n",
        EMM_FLT4(xEv)[0], EMM_FLT4(xEv)[1],
        EMM_FLT4(xEv)[2], EMM_FLT4(xEv)[3]);
fflush(stderr);
Thanks!
David Mathog
mat...@caltech.edu
Manager, Sequence Analysis Facility, Biology Division, Caltech
What happens when emms is not invoked after an MMX run
can be very strange. Grasping at straws here though, presumably both
the OS and MPI (if it does this at all) preserve the state of all
registers when swapping processes around on a machine.
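For reference, the rule I have in mind (my own sketch, nothing taken from
the code in question) is that _mm_empty()/emms must run after the MMX
section and before any x87 floating point, because the MMX registers
alias the x87 register stack:

#include <stdio.h>
#include <mmintrin.h>   /* MMX intrinsics: __m64, _mm_add_pi32, _mm_empty */

int main(void)
{
    /* Integer work in the MMX registers. */
    __m64 a   = _mm_set_pi32(3, 4);
    __m64 b   = _mm_set_pi32(10, 20);
    __m64 sum = _mm_add_pi32(a, b);     /* packed {13, 24}           */
    int   lo  = _mm_cvtsi64_si32(sum);  /* low 32 bits -> 24         */

    /* Clear the MMX state (emms).  Skipping this and then doing x87
     * floating point is what produces the "very strange" results.   */
    _mm_empty();

    double d = lo * 0.5;                /* ordinary FP is safe again */
    printf("lo = %d, d = %f\n", lo, d);
    return 0;
}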
Thanks,
David Mathog
mat...@caltech.edu
Manager, Sequence Analysis Facility, Biology Division, Caltech
machine (that's why the -m32 is there) or a 32
bit machine gives the same result.
If any of you have seen something like this before and can suggest a way
to proceed I would be very grateful.
Thanks,
David Mathog
mat...@caltech.edu
Manager, Sequence Analysis Facility, Biology Division, Caltech
t does now.
Or are we the only site where quick high priority jobs must run on the
same nodes where long term low priority jobs are also running?
Thanks,
David Mathog
mat...@caltech.edu
Manager, Sequence Analysis Facility, Biology Division, Caltech
Is it possible using mpirun to specify the nice value for each program
run on the worker nodes? It looks like some MPI implementations allow
this, but "mpirun --help" doesn't say anything about it.
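The only fallback I can think of (just a sketch of mine, not an mpirun
feature) is to have each worker renice itself right after MPI_Init:

#include <mpi.h>
#include <stdio.h>
#include <sys/resource.h>   /* setpriority(), PRIO_PROCESS */

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Drop this process to nice 19 on whatever node it landed on. */
    if (setpriority(PRIO_PROCESS, 0, 19) != 0)
        perror("setpriority");

    /* ... the normal low-priority work goes here ... */
    printf("rank %d running at low priority\n", rank);

    MPI_Finalize();
    return 0;
}

Presumably launching through nice would also work, e.g.
"mpirun -np 4 nice -n 19 ./worker", since mpirun execs whatever command
line it is given - but a proper option would be cleaner.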
Thanks,
David Mathog
mat...@caltech.edu
Manager, Sequence Analysis Facility, Biology Division, Caltech
valgrind.
Is there a suppression file for these versions that will shut down all
messages under PMPI_Init but still allow the messages from the program
being tested?
Thanks,
David Mathog
mat...@caltech.edu
Manager, Sequence Analysis Facility, Biology Division, Caltech
preceding loop be replaced by
MPI_Get_Counts
(allocate memory as needed)
MPI_Gatherv
although I guess even that wouldn't be very efficient with memory,
because there would usually be huge holes in the recv buffer.
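(I realize there is no MPI_Get_Counts as such; what I have in mind is an
MPI_Gather of the per-rank counts followed by MPI_Gatherv, roughly like
this, with made-up variable names:)

#include <mpi.h>
#include <stdlib.h>

/* Gather variable-length int arrays from all ranks onto rank 0.
 * mydata/mycount are this rank's contribution (hypothetical names). */
void gather_variable(int *mydata, int mycount, MPI_Comm comm)
{
    int rank, nproc;
    MPI_Comm_rank(comm, &rank);
    MPI_Comm_size(comm, &nproc);

    /* Step 1: rank 0 learns how much each rank will send. */
    int *counts = NULL, *displs = NULL, *recvbuf = NULL;
    if (rank == 0) counts = malloc(nproc * sizeof(int));
    MPI_Gather(&mycount, 1, MPI_INT, counts, 1, MPI_INT, 0, comm);

    /* Step 2: rank 0 builds packed displacements and one big buffer. */
    if (rank == 0) {
        displs = malloc(nproc * sizeof(int));
        int total = 0;
        for (int i = 0; i < nproc; i++) { displs[i] = total; total += counts[i]; }
        recvbuf = malloc((total > 0 ? total : 1) * sizeof(int));
    }

    /* Step 3: one collective moves all of the data. */
    MPI_Gatherv(mydata, mycount, MPI_INT,
                recvbuf, counts, displs, MPI_INT, 0, comm);

    /* ... use recvbuf on rank 0 ... */
    free(counts); free(displs); free(recvbuf);
}

With displacements computed from the gathered counts, as above, the
receive buffer can at least be packed rather than containing holes.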
Thanks,
David Mathog
mat...@caltech.edu
Manager, Sequence Analysis Facility, Biology Division, Caltech
two approaches? That is, does MPI_Bcast really
broadcast, daisy chain, or use other similar methods to reduce bandwidth
use when distributing its message, or does it just go ahead and run
MPI_Send in a loop anyway, but hide the details from the programmer?
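For concreteness, the two formulations I am comparing (just a sketch;
which algorithm the library actually uses internally is exactly what I am
asking about):

#include <mpi.h>

#define NBYTES 1024

/* Version 1: one collective call; the library chooses the algorithm
 * (tree, pipeline, flat loop, ...). */
void send_to_all_bcast(char *buf, int root, MPI_Comm comm)
{
    MPI_Bcast(buf, NBYTES, MPI_CHAR, root, comm);
}

/* Version 2: the hand-rolled point-to-point equivalent. */
void send_to_all_loop(char *buf, int root, MPI_Comm comm)
{
    int rank, nproc;
    MPI_Comm_rank(comm, &rank);
    MPI_Comm_size(comm, &nproc);

    if (rank == root) {
        for (int i = 0; i < nproc; i++)
            if (i != root)
                MPI_Send(buf, NBYTES, MPI_CHAR, i, 0, comm);
    } else {
        MPI_Recv(buf, NBYTES, MPI_CHAR, root, 0, comm, MPI_STATUS_IGNORE);
    }
}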
Thanks,
David Mathog
mat...@caltech.edu
0.0, 48.9, 5.1, 0.0]
Top is even less help, it just shows the worker process on each node at
>98% CPU.
Thanks,
David Mathog
mat...@caltech.edu
Manager, Sequence Analysis Facility, Biology Division, Caltech
Ashley Pittman wrote:
> For a much simpler approach you could also use these two environment
> variables; this is on my current system, which is 1.5 based, YMMV of course.
>
> OMPI_COMM_WORLD_LOCAL_RANK
> OMPI_COMM_WORLD_LOCAL_SIZE
That is simpler. It works on OMPI 1.4.3 too:
cat >/usr/common/bin
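In case it helps anyone else, from C they are just a getenv() away
(minimal sketch; the -1 fallback is my own choice):

#include <stdio.h>
#include <stdlib.h>

/* Node-local rank (0..local_size-1), or -1 if not launched under mpirun. */
static int local_rank(void)
{
    const char *s = getenv("OMPI_COMM_WORLD_LOCAL_RANK");
    return s ? atoi(s) : -1;
}

static int local_size(void)
{
    const char *s = getenv("OMPI_COMM_WORLD_LOCAL_SIZE");
    return s ? atoi(s) : -1;
}

int main(void)
{
    printf("local rank %d of %d on this node\n", local_rank(), local_size());
    return 0;
}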
ted mapping:
Verify that you have mapped the allocated resources properly using the
--host or --hostfile specification.
------
Thanks,
David Mathog
mat...@caltech.edu
Manager, Sequence Analysis Facility, Biology Division, Caltech
list of
rank numbers of all the local workers. The array llist must be of size
lmax.
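Something along these lines is what I have in mind (entirely hypothetical;
the helper name and llist/lmax are just from my description above), built
on MPI_Get_processor_name plus MPI_Allgather:

#include <mpi.h>
#include <stdlib.h>
#include <string.h>

/* Fill llist (size lmax) with the world ranks running on this node.
 * Returns the number of local workers found, or -1 if lmax is too small. */
int get_local_ranks(int *llist, int lmax, MPI_Comm comm)
{
    int nproc, namelen, n = 0;
    char name[MPI_MAX_PROCESSOR_NAME];

    MPI_Comm_size(comm, &nproc);
    memset(name, 0, sizeof(name));
    MPI_Get_processor_name(name, &namelen);

    /* Every rank shares its node name with every other rank. */
    char *all = malloc((size_t)nproc * MPI_MAX_PROCESSOR_NAME);
    MPI_Allgather(name, MPI_MAX_PROCESSOR_NAME, MPI_CHAR,
                  all,  MPI_MAX_PROCESSOR_NAME, MPI_CHAR, comm);

    for (int i = 0; i < nproc; i++) {
        if (strcmp(all + (size_t)i * MPI_MAX_PROCESSOR_NAME, name) == 0) {
            if (n >= lmax) { n = -1; break; }   /* llist too small */
            llist[n++] = i;
        }
    }
    free(all);
    return n;
}

On an MPI-3 implementation, MPI_Comm_split_type with MPI_COMM_TYPE_SHARED
would give the same grouping more directly.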
Thanks,
David Mathog
mat...@caltech.edu
Manager, Sequence Analysis Facility, Biology Division, Caltech
ugly fast,
especially if they don't all start at the same time.
Thanks,
David Mathog
mat...@caltech.edu
Manager, Sequence Analysis Facility, Biology Division, Caltech
compilers should generate
faster code than gcc, and I would prefer to use them if possible.
Thanks,
David Mathog
mat...@caltech.edu
Manager, Sequence Analysis Facility, Biology Division, Caltech
m_iFuncStatSortFlags is not
accessible from Statistics::FuncStat_struct::operator<(const
Statistics::FuncStat_struct&) const.
1 Error(s) detected.
This error didn't go away when -DVT_OMP was removed, nor when -xopenmp
was also taken away, and so the final score is: working OpenMPI 1.4.1
on So