Re: [OMPI users] SENDRECV + MPI_TYPE_CREATE_STRUCT

2014-10-02 Thread Kawashima, Takahiro
Hi Diego, I don't know what CPU/compiler you are using or what the -r8 option means, but DISPLACEMENTS(2) and DISPLACEMENTS(3) are incorrect if an integer is 4 bytes and a real is 8 bytes. In this case there is usually a gap between ip and RP. See the description of datatype alignment in the MPI Standard.

Re: [OMPI users] which info is needed for SIGSEGV in Java for openmpi-dev-124-g91e9686 on Solaris

2014-10-22 Thread Kawashima, Takahiro
Hi Siegmar, mpiexec and java run as distinct processes. Your JRE message says the java process raised SIGSEGV, so you should trace the java process, not the mpiexec process. Moreover, your JRE message says the crash happened outside the Java Virtual Machine, in native code. So a usual Java program debugger

Re: [OMPI users] which info is needed for SIGSEGV in Java for openmpi-dev-124-g91e9686 on Solaris

2014-10-23 Thread Kawashima, Takahiro
Hi Siegmar, > I think that it must have to do with MPI, because everything > works fine on Linux and my Java program works fine with an older > MPI version (openmpi-1.8.2a1r31804) as well. Yes, I also think it must have to do with MPI. But on the java process side, not the mpiexec process side. When you

Re: [OMPI users] which info is needed for SIGSEGV in Java for openmpi-dev-124-g91e9686 on Solaris

2014-10-23 Thread Kawashima, Takahiro
Hi Siegmar, The attached JRE log shows very important information. When the JRE loads the MPI class, the JNI_OnLoad function in libmpi_java.so (an Open MPI library, written in C) is called, and probably the mca_base_var_cache_files function passes NULL to the asprintf function. I don't know how this situation

Re: [OMPI users] which info is needed for SIGSEGV in Java for openmpi-dev-124-g91e9686 on Solaris

2014-10-26 Thread Kawashima, Takahiro
Siegmar, Oscar, I suspect that the problem is calling mca_base_var_register without initializing OPAL in JNI_OnLoad. ompi/mpi/java/c/mpi_MPI.c: jint JNI_OnLoad(JavaVM *vm, void *reserved) { libmpi = dlopen("libmpi."

Re: [OMPI users] processes hang with openmpi-dev-602-g82c02b4

2014-12-23 Thread Kawashima, Takahiro
Hi Siegmar, A heterogeneous environment is not officially supported. The README of Open MPI master says: --enable-heterogeneous Enable support for running on heterogeneous clusters (e.g., machines with different endian representations). Heterogeneous support is disabled by default because it

Re: [OMPI users] processes hang with openmpi-dev-602-g82c02b4

2014-12-24 Thread Kawashima, Takahiro
> truth is there are *very* limited resources (both human and hardware) > maintaining heterogeneous > support, but that does not mean heterogeneous support should not be > used, nor that bug report > will be ignored. > > Cheers, > > Gilles > > On 2

Re: [OMPI users] bus error with openmpi-1.8.2rc2 on Solaris 10 Sparc

2014-08-08 Thread Kawashima, Takahiro
Hi, > > >>> I have installed openmpi-1.8.2rc2 with gcc-4.9.0 on Solaris > > >>> 10 Sparc and I receive a bus error, if I run a small program. I've finally reproduced the bus error in my SPARC environment. #0 0x00db4740 (__waitpid_nocancel + 0x44)

Re: [OMPI users] [OMPI devel] bus error with openmpi-1.8.2rc2 on Solaris 10 Sparc

2014-08-08 Thread Kawashima, Takahiro
ion about a similar issue in the > datatype engine: on the SPARC architecture the 64 bits integers must be > aligned on a 64bits boundary or you get a bus error. > > Takahiro you can confirm this by printing the value of data when signal is > raised. > > George. > > &

Re: [OMPI users] bus error with openmpi-1.8.2rc4r32485 and gcc-4.9.0

2014-08-11 Thread Kawashima, Takahiro
Siegmar, Ralph, I'm sorry to respond so late since last week. Ralph fixed the problem in r32459 and it was merged to v1.8 in r32474. But in v1.8 an additional custom patch is needed because the db/dstore source code differs between trunk and v1.8. I'm preparing and testing the custom

Re: [OMPI users] bus error with openmpi-1.8.2rc4r32485 and gcc-4.9.0

2014-08-11 Thread Kawashima, Takahiro
Hi Ralph, Your commit r32459 fixed the bus error by correcting opal/dss/dss_copy.c. It's OK for trunk because mca_dstore_hash calls dss to copy data. But it's insufficient for v1.8 because mca_db_hash doesn't call dss and copies data itself. The attached patch is the minimum patch to fix it in

Re: [OMPI users] openmpi-1.10.3 cross compile configure options for arm-openwrt-linux-muslgnueabi on x86_64-linux-gnu

2016-10-18 Thread Kawashima, Takahiro
Hi, > How to cross compile openmpi for arm on x86_64 pc. > > Kindly provide configure options for above... You should pass your arm architecture name to the --host option. Example of my configure options for Open MPI, run on sparc64, built on x86_64: --prefix=...
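A cross-compile invocation along the lines described above might look like the following sketch (the prefix path and exact compiler names are illustrative assumptions, not taken from the original mail; the target triplet comes from the subject line):

```shell
# Cross-compile Open MPI on an x86_64 build machine for an ARM/OpenWrt target.
# --host names the machine the binaries will run on; --build names the
# machine doing the compilation. The cross toolchain
# (arm-openwrt-linux-muslgnueabi-gcc etc.) must already be in PATH.
./configure \
    --prefix=/opt/openmpi-arm \
    --host=arm-openwrt-linux-muslgnueabi \
    --build=x86_64-linux-gnu \
    CC=arm-openwrt-linux-muslgnueabi-gcc \
    CXX=arm-openwrt-linux-muslgnueabi-g++
make && make install
```

The resulting tree under --prefix must then be copied to (or mounted at) the same path on the target board.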

Re: [OMPI users] openmpi-1.10.3 cross compile configure options for arm-openwrt-linux-muslgnueabi on x86_64-linux-gnu

2016-10-18 Thread Kawashima, Takahiro
$ ls > >> > a.out armhelloworld helloworld.c openmpi-1.10.3 > >> openmpi-1.10.3.tar.gz > >> > > >> > But ,while i run using mpirun on target board as below > >> > > >> > root@OpenWrt:/# mpirun --allow-run-as-root -np 1 armhelloworld

Re: [OMPI users] running mpi-java program on arm-architektur raspberrypi 3 model b

2017-11-15 Thread Kawashima, Takahiro
Hello, > i would like to know, can i run java-mpi program with my pi without > configuration myself? i see java bindings is on perhaps? >Java bindings: yes Yes. This line shows the Java bindings are available. You can compile your Java program using the mpijavac command. mpijavac
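The mpijavac workflow mentioned above can be sketched as follows (the class name and rank count are illustrative, not from the original mail):

```shell
# Compile an MPI Java program with the wrapper shipped by Open MPI;
# it puts mpi.jar on the classpath automatically.
mpijavac Hello.java

# Launch it under mpirun; "java" must resolve to the same JVM that the
# Java bindings were built against.
mpirun -np 4 java Hello
```

Both commands assume Open MPI was configured with --enable-mpi-java (so that "Java bindings: yes" appears in the configure summary).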

Re: [OMPI users] ARM HPC Compiler 18.4.0 / OpenMPI 2.1.4 Hang for IMB All Reduce Test on 4 Ranks

2018-08-15 Thread Kawashima, Takahiro
Hi, Open MPI 2.1.3 and 2.1.4 have a bug in shared-memory communication. The Open MPI community is preparing 2.1.5 to fix it. https://github.com/open-mpi/ompi/pull/5536 Could you try this patch? https://github.com/open-mpi/ompi/commit/6086b52719ed02725dfa5e91c0d12c3c66a8e168 Or, use the

[MTT users] MTT username/password and report upload

2018-03-12 Thread Kawashima, Takahiro
Hi, Fujitsu has resumed the work of running MTT on our machines. Could you give me a username/password to upload the report to the server? I suppose the username is "fujitsu". Our machines are not connected to the Internet directly. Is there a document for uploading the report outside the MTT

Re: [MTT users] MTT username/password and report upload

2018-03-13 Thread Kawashima, Takahiro
, > instead. That's where 95% of ongoing development is occurring. > > If all goes well, I plan to sit down with Howard + Ralph next week and try > converting Cisco's Perl config to use the Python client. > > > > On Mar 12, 2018, at 10:23 PM, Kawashima, Takahiro

Re: [MTT users] MTT username/password and report upload

2018-03-14 Thread Kawashima, Takahiro
ing it up for OMPI testing) give > us a presentation on it? I'll add it to the agenda. The initial hurdle of > getting started has prevented me from taking the leap, but maybe next week > is a good opportunity to do so. > > On Tue, Mar 13, 2018 at 9:14 PM, Kawashima, Takahiro < > t-k

Re: [OMPI users] error building openmpi-3.0.1 on Linux with gcc or Sun C and Java-10

2018-04-12 Thread Kawashima, Takahiro
Siegmar, Thanks for your report, but it is a known issue and will be fixed in Open MPI v3.0.2. https://github.com/open-mpi/ompi/pull/5029 If you want it now in the v3.0.x series, try the latest nightly snapshot. https://www.open-mpi.org/nightly/v3.0.x/ Regards, Takahiro Kawashima, MPI

Re: [OMPI users] error building openmpi-master-201810310352-a1e85b0 on Linux with Sun C

2018-10-31 Thread Kawashima, Takahiro
Hi Siegmar, According to the man page of cc in Oracle Developer Studio 12.6, no atomics support runtime library is linked by default. You'll probably need the -xatomic=studio, -xatomic=gcc, or -latomic option. I'm not sure whether the option should be added by the Open MPI configure or by users.

Re: [OMPI users] [Open MPI Announce] Open MPI 4.0.0 Released

2018-11-13 Thread Kawashima, Takahiro
XPMEM moved to GitLab. https://gitlab.com/hjelmn/xpmem Thanks, Takahiro Kawashima, Fujitsu > Hello Bert, > > What OS are you running on your notebook? > > If you are running Linux, and you have root access to your system, then > you should be able to resolve the Open SHMEM support issue by

Re: [OMPI users] error building Java api for openmpi-v4.0.x-201810090241-2124192 and openmpi-master-201810090329-e9e4d2a

2018-10-09 Thread Kawashima, Takahiro
Siegmar, I think you are using Java 11, which changed the default output HTML version. I'll take a look, but downloading OpenJDK 11 takes time... Probably the following patch will resolve the issue. diff --git

Re: [OMPI users] error building Java api for openmpi-v4.0.x-201810090241-2124192 and openmpi-master-201810090329-e9e4d2a

2018-10-09 Thread Kawashima, Takahiro
What version of Java are you using? Could you type "java -version" and show the output? Takahiro Kawashima, Fujitsu > today I've tried to build openmpi-v4.0.x-201810090241-2124192 and > openmpi-master-201810090329-e9e4d2a on my "SUSE Linux Enterprise Server > 12.3 (x86_64)" with Sun C 5.15, gcc

Re: [OMPI users] error building Java api for openmpi-v4.0.x-201810090241-2124192 and openmpi-master-201810090329-e9e4d2a

2018-10-09 Thread Kawashima, Takahiro
I confirmed the patch resolves the issue with OpenJDK 11. I've created a PR for the master branch and will also create PRs for the release branches. Thanks for your bug report! https://github.com/open-mpi/ompi/pull/5870 Takahiro Kawashima, Fujitsu > Siegmar, > > I think you are using Java 11 and it

Re: [OMPI users] OpenMPI v4.0.0 signal 11 (Segmentation fault)

2019-02-20 Thread Kawashima, Takahiro
Hello Adam, IMB had a bug related to Reduce_scatter. https://github.com/intel/mpi-benchmarks/pull/11 I'm not sure this bug is the cause but you can try the patch. https://github.com/intel/mpi-benchmarks/commit/841446d8cf4ca1f607c0f24b9a424ee39ee1f569 Thanks, Takahiro Kawashima, Fujitsu

Re: [OMPI users] Error when Building an MPI Java program

2019-04-11 Thread Kawashima, Takahiro
Benson, I think OpenJDK 9 is enough; I tested the Open MPI Java bindings with OpenJDK 8, 9, 10, and 11 several months ago. Compiling your MPI Java program requires mpi.jar, which should have been installed in $OMPI_INSTALL_DIR/lib. Probably your compilation error message indicates mpi.jar is not