Hi Diego,
I don't know what CPU/compiler you are using and what -r8
option means, but DISPLACEMENTS(2) and DISPLACEMENTS(3) are
incorrect if an integer is 4 bytes and a real is 8 bytes.
In this case there is usually a gap between ip and RP.
See the description of datatype alignment in the MPI Standard.
Hi Siegmar,
mpiexec and java run as distinct processes. Your JRE message
says the java process raised SIGSEGV, so you should trace the java
process, not the mpiexec process. Moreover, your JRE message
says the crash happened outside the Java Virtual Machine, in
native code, so the usual Java program debugger
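One hedged way to do that tracing (standard gdb usage; the process selection is illustrative, not from the thread):

```shell
pgrep -f java            # find the java process, not mpiexec
gdb -p <java-pid>        # attach a native debugger to it
(gdb) continue           # run until the SIGSEGV is raised
(gdb) backtrace          # native stack at the crash site
```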
Hi Siegmar,
> I think that it must have to do with MPI, because everything
> works fine on Linux and my Java program works fine with an older
> MPI version (openmpi-1.8.2a1r31804) as well.
Yes. I also think it must have to do with MPI.
But on the java process side, not the mpiexec process side.
When you
Hi Siegmar,
The attached JRE log shows very important information.
When the JRE loads the MPI class, the JNI_OnLoad function in
libmpi_java.so (part of the Open MPI library; written in C) is called,
and probably the mca_base_var_cache_files function passes NULL
to the asprintf function. I don't know how this situation
Siegmar, Oscar,
I suspect that the problem is calling mca_base_var_register
without initializing OPAL in JNI_OnLoad.
ompi/mpi/java/c/mpi_MPI.c:
jint JNI_OnLoad(JavaVM *vm, void *reserved)
{
libmpi = dlopen("libmpi."
Hi Siegmar,
A heterogeneous environment is not officially supported.
The README of Open MPI master says:
--enable-heterogeneous
Enable support for running on heterogeneous clusters (e.g., machines
with different endian representations). Heterogeneous support is
disabled by default because it
> truth is there are *very* limited resources (both human and hardware)
> maintaining heterogeneous
> support, but that does not mean heterogeneous support should not be
> used, nor that bug reports
> will be ignored.
>
> Cheers,
>
> Gilles
>
> On 2
Hi,
> > >>> I have installed openmpi-1.8.2rc2 with gcc-4.9.0 on Solaris
> > >>> 10 Sparc and I receive a bus error, if I run a small program.
I've finally reproduced the bus error in my SPARC environment.
#0 0x00db4740 (__waitpid_nocancel + 0x44)
ion about a similar issue in the
> datatype engine: on the SPARC architecture, 64-bit integers must be
> aligned on a 64-bit boundary or you get a bus error.
>
> Takahiro, you can confirm this by printing the value of data when the
> signal is raised.
>
> George.
>
>
Siegmar, Ralph,
I'm sorry for responding so late since last week.
Ralph fixed the problem in r32459 and it was merged to v1.8
in r32474. But v1.8 needs an additional custom patch
because the db/dstore source code differs between trunk
and v1.8.
I'm preparing and testing the custom
Hi Ralph,
Your commit r32459 fixed the bus error by correcting
opal/dss/dss_copy.c. It's OK for trunk because mca_dstore_hash
calls dss to copy data. But it's insufficient for v1.8 because
mca_db_hash doesn't call dss and copies data itself.
The attached patch is the minimal change needed to fix it in
Hi,
> How to cross compile *openmpi *for* arm *on* x86_64 pc.*
>
> *Kindly provide configure options for above...*
You should pass your ARM target triplet to the --host option.
As an example, here are my configure options for Open MPI, built on
x86_64 to run on sparc64:
--prefix=...
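To give the question an illustrative shape, a cross-compiling configure line generally looks like the following (the triplet, compiler names, and prefix are hypothetical, not the poster's actual options):

```shell
# Hypothetical cross-compile of Open MPI for ARM on an x86_64 build machine:
./configure --host=arm-linux-gnueabihf \
            --build=x86_64-pc-linux-gnu \
            --prefix=/opt/openmpi-arm \
            CC=arm-linux-gnueabihf-gcc \
            CXX=arm-linux-gnueabihf-g++
```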
$ ls
> >> > a.out armhelloworld helloworld.c openmpi-1.10.3
> >> openmpi-1.10.3.tar.gz
> >> >
> >> > But, while I run it using mpirun on the target board as below
> >> >
> >> > root@OpenWrt:/# mpirun --allow-run-as-root -np 1 armhelloworld
Hello,
> I would like to know: can I run a Java MPI program on my Pi without
> configuring it myself? I see the Java bindings are on, perhaps?
> Java bindings: yes
Yes. This line shows the Java bindings are available.
You can compile your Java program with the mpijavac command.
mpijavac
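A hypothetical compile-and-run sequence with these commands (the class name is illustrative):

```shell
mpijavac Hello.java        # compiles against the installed mpi.jar
mpirun -np 4 java Hello    # launches four Java ranks
```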
Hi,
Open MPI 2.1.3 and 2.1.4 have a bug in shared-memory communication.
The Open MPI community is preparing 2.1.5 to fix it.
https://github.com/open-mpi/ompi/pull/5536
Could you try this patch?
https://github.com/open-mpi/ompi/commit/6086b52719ed02725dfa5e91c0d12c3c66a8e168
Or, use the
Hi,
Fujitsu has resumed work on running MTT on our machines.
Could you give me a username/password to upload the report to the server?
I suppose the username is "fujitsu".
Our machines are not connected to the Internet directly.
Is there a document for uploading the report outside the MTT
> instead. That's where 95% of ongoing development is occurring.
>
> If all goes well, I plan to sit down with Howard + Ralph next week and try
> converting Cisco's Perl config to use the Python client.
>
>
> > On Mar 12, 2018, at 10:23 PM, Kawashima, Takahiro
ing it up for OMPI testing) give
> us a presentation on it? I'll add it to the agenda. The initial hurdle of
> getting started has prevented me from taking the leap, but maybe next week
> is a good opportunity to do so.
>
> On Tue, Mar 13, 2018 at 9:14 PM, Kawashima, Takahiro <
> t-k
Siegmar,
Thanks for your report, but this is a known issue and will be fixed in Open
MPI v3.0.2.
https://github.com/open-mpi/ompi/pull/5029
If you want it now in the v3.0.x series, try the latest nightly snapshot.
https://www.open-mpi.org/nightly/v3.0.x/
Regards,
Takahiro Kawashima,
MPI
Hi Siegmar,
According to the man page of cc in Oracle Developer Studio 12.6,
no atomics support runtime library is linked by default.
You'll probably need the -xatomic=studio, -xatomic=gcc, or -latomic option.
I'm not sure whether the option should be added by Open MPI configure or by
users.
XPMEM has moved to GitLab.
https://gitlab.com/hjelmn/xpmem
Thanks,
Takahiro Kawashima,
Fujitsu
> Hello Bert,
>
> What OS are you running on your notebook?
>
> If you are running Linux, and you have root access to your system, then
> you should be able to resolve the Open SHMEM support issue by
Siegmar,
I think you are using Java 11, which changed the default HTML version of the output.
I'll take a look. But downloading OpenJDK 11 takes time...
Probably the following patch will resolve the issue.
diff --git
What version of Java are you using?
Could you type "java -version" and show the output?
Takahiro Kawashima,
Fujitsu
> today I've tried to build openmpi-v4.0.x-201810090241-2124192 and
> openmpi-master-201810090329-e9e4d2a on my "SUSE Linux Enterprise Server
> 12.3 (x86_64)" with Sun C 5.15, gcc
I confirmed the patch resolved the issue with OpenJDK 11.
I've created a PR for the master branch.
I'll also create PRs for release branches.
Thanks for your bug report!
https://github.com/open-mpi/ompi/pull/5870
Takahiro Kawashima,
Fujitsu
> Siegmar,
>
> I think you are using Java 11 and it
Hello Adam,
IMB had a bug related to Reduce_scatter.
https://github.com/intel/mpi-benchmarks/pull/11
I'm not sure whether this bug is the cause, but you can try the patch.
https://github.com/intel/mpi-benchmarks/commit/841446d8cf4ca1f607c0f24b9a424ee39ee1f569
Thanks,
Takahiro Kawashima,
Fujitsu
Benson,
I think OpenJDK 9 is enough. I tested Open MPI Java bindings with OpenJDK 8, 9,
10, and 11 several months ago.
Compiling your MPI Java program requires mpi.jar, which should have been
installed in $OMPI_INSTALL_DIR/lib. Your compilation error message
probably indicates that mpi.jar is not
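mpijavac essentially wraps javac with mpi.jar on the classpath; a hedged manual equivalent (the class name is illustrative) would be:

```shell
javac -cp $OMPI_INSTALL_DIR/lib/mpi.jar Hello.java
```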