Dear Steven, dear users,

thanks for providing this tool and so much useful information. This mailing 
list and of course the wiki have always helped me find quick solutions, but 
now I have stumbled on a problem I am not able to solve. I have been using 
Meep in serial mode for some time; on an Ubuntu machine I conveniently 
installed it from the repositories.
Now I have access to a multicore RHEL system, and for a couple of days I have 
been trying to get meep-mpi working. Here is what I did:

Mainly I followed the instructions from this link:
http://www.doe.carleton.ca/~kmedri/research/centosmeepinstall.html

I obtained the binaries from EPEL and from here:
http://www.elders.princeton.edu/data/PU_IAS/5/en/os/x86_64/Workstation/

HDF5 was compiled from source:
CC=/path/to/openmpi/bin/mpicc ./configure --prefix=/path/to/hdf5 
--enable-parallel 
make
make install
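To double-check that this build really ended up with parallel support (the paths below are placeholders for my install prefix), I queried the compiler wrapper and the settings file recorded at build time:

```shell
# Ask the parallel HDF5 compiler wrapper for its build configuration;
# it should report "Parallel HDF5: yes".
/path/to/hdf5/bin/h5pcc -showconfig | grep -i parallel

# Alternatively, inspect the settings file that HDF5 installs:
grep -i parallel /path/to/hdf5/lib/libhdf5.settings
```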

Same for meep:
./configure --with-mpi --with-hdf5=/path/to/hdf5/

But make check already spits out errors:
PASS: bench
PASS: bragg_transmission
FAIL: convergence_cyl_waveguide
PASS: cylindrical
PASS: flux
PASS: harmonics
PASS: integrate
FAIL: known_results
PASS: one_dimensional
PASS: physical
FAIL: symmetry
FAIL: three_d
PASS: two_dimensional
PASS: 2D_convergence
PASS: h5test
PASS: pml
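I assume the individual test binaries built by make check under tests/ can also be launched directly under MPI to get more detail on a failure, e.g.:

```shell
# Run one failing test by hand (assuming the binaries in tests/
# are standalone MPI programs, as built by `make check`):
cd tests
mpirun -np 2 ./symmetry
```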

Nevertheless I installed Meep, and I am able to run some of the examples, but 
not all of them:

mpirun -np 12 meep-mpi bend-flux.ctl

librdmacm: couldn't read ABI version.
librdmacm: assuming: 4
CMA: unable to get RDMA device list
librdmacm: couldn't read ABI version.
librdmacm: assuming: 4
CMA: unable to get RDMA device list
--------------------------------------------------------------------------
[[8627,1],1]: A high-performance Open MPI point-to-point messaging module
was unable to find any relevant network interfaces:

Module: OpenFabrics (openib)
  Host: host

Another transport will be used instead, although this may result in
lower performance.
--------------------------------------------------------------------------
[the librdmacm / CMA messages above repeat for each of the remaining processes]
Using MPI version 2.1, 12 processes
-----------
Initializing structure...
Working in 2D dimensions.
Computational cell is 16 x 32 x 0 with resolution 10
     block, center = (-2,-11.5,0)
          size (12,1,1e+20)
          axes (1,0,0), (0,1,0), (0,0,1)
          dielectric constant epsilon diagonal = (12,12,12)
     block, center = (3.5,2,0)
          size (1,28,1e+20)
          axes (1,0,0), (0,1,0), (0,0,1)
          dielectric constant epsilon diagonal = (12,12,12)
time for set_epsilon = 0.00918102 s
-----------
meep: incorrect dataset size (0 vs. 8800) in load_dft_hdf5 
./bend-flux-refl-flux.h5:ey_dft
[the same error repeats on the other ranks]
--------------------------------------------------------------------------
MPI_ABORT was invoked on rank 4 in communicator MPI_COMM_WORLD
with errorcode 1.

NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
You may or may not see output from other processes, depending on
exactly when Open MPI kills them.
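One thing I am not sure about: the error shows the run reloading the reference flux from bend-flux-refl-flux.h5, so my guess is that a stale or empty file left behind by an earlier aborted run (or by a failed parallel write) could explain the 0 vs. 8800 size mismatch. I will retry from a clean state:

```shell
# Remove any leftover flux file from a previous aborted run before
# retrying; the run should then regenerate it from scratch.
rm -f ./bend-flux-refl-flux.h5
mpirun -np 12 meep-mpi bend-flux.ctl
```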


I get the same results with a serial HDF5 from an RPM package. All tests fail 
with an Open MPI that I compiled manually with --without-openib.
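As a runtime alternative to rebuilding Open MPI, I assume the standard MCA parameters apply here, so the openib transport can be excluded per run; this should suppress the OpenFabrics warning, though I am not sure it silences the librdmacm messages:

```shell
# Exclude the InfiniBand (openib) transport at runtime; Open MPI then
# falls back to shared memory / TCP for communication.
mpirun --mca btl ^openib -np 12 meep-mpi bend-flux.ctl
```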

Any hints? Thanks for the time...



_______________________________________________
meep-discuss mailing list
[email protected]
http://ab-initio.mit.edu/cgi-bin/mailman/listinfo/meep-discuss
