[Meep-discuss] MEEP-MPI taking same time as before

2017-07-19 Thread Jitendra

Hello,
I installed meep-mpi manually by building almost everything from source. 
However, my simulation time has not decreased, although the subpixel 
averaging time is cut by more than half. Moreover, the GUILE warning now 
appears four times. Does this mean that meep-mpi is not installed properly?


My machine has a 4-core Intel i3-3120M, 2.50 GHz, 4 GB RAM, and a 64-bit 
Linux 4.8.0-58-generic Ubuntu 16.04 LTS OS.
No previous versions of meep, meep-mpi or meep-openmpi were installed. I 
got a huge number of warnings in the make step of parallel HDF5, h5utils 
and meep, but no errors.
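
(A quick check that is easy to overlook, offered as a hedged aside rather than a diagnosis: mobile i3 CPUs of that generation typically have 2 physical cores with Hyper-Threading, so -np 4 may be oversubscribing the physical cores. On Linux one way to verify is:

lscpu | grep -E '^CPU\(s\)|Thread\(s\) per core|Core\(s\) per socket'
nproc    # counts hardware threads, not physical cores
)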


The steps that I followed are attached in a .txt file and also given 
below:-

For g77
sudo apt-get install gfortran

For f77
sudo apt-get install fort77

For BLAS
sudo apt-get install libblas-dev checkinstall
sudo apt-get install libblas-doc checkinstall

For LAPACK
sudo apt-get install liblapack-dev checkinstall
sudo apt-get install liblapack-doc checkinstall

For Harminv
Downloaded from- http://ab-initio.mit.edu/wiki/index.php/Harminv
./configure
make
sudo make install

For Guile
Checked whether a version of Guile was already installed on the 
system; no previous version was found.

sudo apt-get -f install *guile-2.0

For libctl
Downloaded from- http://ab-initio.mit.edu/wiki/index.php/Libctl
./configure LIBS="-lm"
make
sudo make install

For MPICH
Downloaded from- http://www.mpich.org/downloads/
./configure
make
sudo make install
OR
sudo apt-get install mpich

For GNU M4
sudo apt-get -f install m4

For Parallel-HDF5
Downloaded from- 
https://support.hdfgroup.org/ftp/HDF5/releases/hdf5-1.10/hdf5-1.10.1/src/
./configure --prefix=/usr/local/hdf5 --enable-parallel CC=mpicc 
CXX=mpich

make
sudo make install

For H5utils
Downloaded from- http://ab-initio.mit.edu/wiki/index.php/H5utils
export LDFLAGS="-L/usr/local/hdf5/lib"
export CDFLAGS="-I/usr/local/hdf5/include"
./configure CFLAGS=-I/usr/include/mpich
make
sudo make install

For GNU GSL
Downloaded from- http://infinity.kmeacollege.ac.in/gnu/gsl/
./configure
make
sudo make install

For zlib
sudo apt-get install libpng-dev

For MEEP
Downloaded from- http://ab-initio.mit.edu/wiki/index.php/Meep_download
export LIBS="-L/usr/local/hdf5/lib"
export CPPFLAGS="-I/usr/local/hdf5/include"
./configure --with-mpi
make
sudo make install
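
(A sanity check worth doing after a from-source build like this; a sketch, not part of the original post, assuming an MPICH-style mpicc wrapper and that meep-mpi ended up on the PATH:

# confirm the compiler wrapper and the launcher come from the same MPI installation
which mpicc mpirun
mpicc -show        # MPICH-style wrappers print the underlying compiler and flags
mpirun --version

# a parallel-enabled meep should report all ranks in a single banner, e.g.
# "Using MPI version 2.1, 4 processes"
mpirun -np 4 meep-mpi TE011.ctl 2>&1 | head -n 5
)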


The code on which I am trying meep-mpi is attached in a .ctl file and 
also given below:-


(reset-meep)

(set-param! resolution 10)

(define-param ra 10)
(define-param ha 20)

(define-param rm 12)
(define-param hm 24)

(set! geometry-lattice (make lattice (size (+ rm rm 1) (+ rm rm 1) (+ hm 2))))


(set! geometry(list
		(make cylinder(center 0 0 0)(axis (vector3 0 0 1))(height hm)(radius 
rm)(material metal))
		(make cylinder(center 0 0 0)(axis (vector3 0 0 1))(height ha)(radius 
ra)(material air))

))

(define-param fcen 0.0659)
(define-param df 0.01)

(set! sources (list
(make source(src (make gaussian-src (frequency fcen) (fwidth 
df)))
 (center 0 0 0)(size 0 0 0)(component 
Hz))
))

(set! symmetries (list (make mirror-sym (direction Z))))

(init-fields)
(define (f_e r ex ey ez)
(sqrt (+ (* ex ex) (* ey ey) (* ez ez)))
)
(define (electric-output) (output-field-function "electric-function" 
(list Ex Ey Ez) f_e))


(define (f_m r hx hy hz)
(sqrt (+ (* hx hx) (* hy hy) (* hz hz)))
)
(define (magnetic-output) (output-field-function "magnetic-function" 
(list Hx Hy Hz) f_m))


(run-sources+ 200 (at-beginning output-epsilon)(after-sources (harminv 
Hz (vector3 0 0 0) fcen df))(at-end electric-output magnetic-output))


(define freq (car (map harminv-freq-re harminv-results)))
(print "frequency:" (* 299.79245 freq) " GHz\n")

Executed via:- mpirun -np 4 meep-mpi TE011.ctl
The results are good; it is just that the simulation time has not decreased 
for me with meep-mpi.

--This warning message appears four times after execution--
Some deprecated features have been used.  Set the environment
variable GUILE_WARN_DEPRECATED to "detailed" and rerun the
program to get more information.  Set it to "no" to suppress
this message.
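
(Side note, a sketch rather than a diagnosis: seeing that Guile message once per rank is consistent with four Guile-based processes starting up. The warning text itself says how to silence it, e.g. from a Bourne-style shell:

GUILE_WARN_DEPRECATED=no mpirun -np 4 meep-mpi TE011.ctl
# depending on the launcher, the variable may need to be exported explicitly
# to all ranks, e.g. with Open MPI's -x or MPICH's -genv option
)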


Any sort of help is highly appreciated.

Jitendra










[Meep-discuss] meep-mpi on latest version of ubuntu

2015-08-27 Thread Sophia Fox

Hello,

Our IT department says they are having problems installing 
meep-mpi on Ubuntu 14.04.3 LTS. I was just wondering whether anyone 
else is having problems with this?


Sophia



___
meep-discuss mailing list
meep-discuss@ab-initio.mit.edu
http://ab-initio.mit.edu/cgi-bin/mailman/listinfo/meep-discuss

[Meep-discuss] meep-mpi doesn't print anything in terminal

2014-04-10 Thread Ehsan Saei
Dear all,

I am trying to run a ctl file using meep-mpi on an Ubuntu machine with the following 
command line:

mpirun -np 8 meep-mpi test.ctl > test.out

but it doesn't print anything in the terminal. What other option is required for 
printing the progress in the terminal?

thanks in advance,
Ehsan 
  ___
meep-discuss mailing list
meep-discuss@ab-initio.mit.edu
http://ab-initio.mit.edu/cgi-bin/mailman/listinfo/meep-discuss

Re: [Meep-discuss] meep-mpi doesn't print anything in terminal

2014-04-10 Thread Filip Dominec
Dear Ehsan,
on Linux, you can use the 'tee' command:

mpirun -np 8 meep-mpi test.ctl | tee test.out

Did it help?
Filip
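
(For reference, a sketch of the difference, assuming a POSIX shell: plain redirection sends stdout only to the file, while tee duplicates it to both the terminal and the file.

mpirun -np 8 meep-mpi test.ctl > test.out        # progress goes only into test.out
mpirun -np 8 meep-mpi test.ctl | tee test.out    # progress shown on screen and saved
)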

2014-04-10 8:39 GMT+02:00, Ehsan Saei e.s...@hotmail.com:
 Dear all,

 I try to run a ctl file using meep-mpi  on a ubuntu machine with the
 following command line:

 mpirun -np 8 meep-mpi test.ctl > test.out

 but It doesn't print anything in terminal. What other option is required for
 printing the progress in terminal?

 thanks in advance,
 Ehsan


___
meep-discuss mailing list
meep-discuss@ab-initio.mit.edu
http://ab-initio.mit.edu/cgi-bin/mailman/listinfo/meep-discuss


[Meep-discuss] meep mpi harminv problem

2014-03-12 Thread Oosten, D. van (Dries)
Hi guys,

I have been struggling with the following issue. When I use harminv to find 
eigenmodes in meep, it is often unreliable when I use it through mpirun. This 
is especially the case when the workstation we run meep on is under heavy load. 
It seems to me that the processes get killed by mpirun before they can give their 
results. Could this be the case, and if so, what tests can I run to track this 
problem down?

Thanks in advance!

best,
Dries
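
(One simple test along the lines asked about above, a hedged sketch: run the same ctl once serially and once under mpirun and diff the resonance lines. It assumes the harminv results are printed with meep's usual "harminv" prefix, as seen elsewhere on this list, and the -np value is just an example:

meep test.ctl | grep harminv > serial.out
mpirun -np 8 meep-mpi test.ctl | grep harminv > mpi.out
diff serial.out mpi.out    # frequencies/Q should agree if MPI is not the culprit
)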
___
meep-discuss mailing list
meep-discuss@ab-initio.mit.edu
http://ab-initio.mit.edu/cgi-bin/mailman/listinfo/meep-discuss


Re: [Meep-discuss] meep mpi harminv problem

2014-03-12 Thread Oosten, D. van (Dries)
Sorry guys, but just to clarify, mpirun often says things like

--
mpirun has exited due to process rank 5 with PID 8389 on
node workstation exiting without calling finalize. This may
have caused other processes in the application to be
terminated by signals sent by mpirun (as reported here).
--

does that clarify things for anyone?



From: Oosten, D. van (Dries)
Sent: Wednesday, March 12, 2014 8:20 PM
To: meep-discuss@ab-initio.mit.edu
Subject: meep mpi harminv problem

Hi guys,

I have been struggling with the following issue. When I use harminv to find 
eigenmodes in meep, it is often unreliable when I use it through mpirun. This 
is especially the case when the workstation we run meep on is under heavy load. 
It seems to me that the process get killed by mpirun before they can give their 
results. Could this be the case and if so, what tests can I run to track this 
problem down?

Thanks in advance!

best,
Dries

___
meep-discuss mailing list
meep-discuss@ab-initio.mit.edu
http://ab-initio.mit.edu/cgi-bin/mailman/listinfo/meep-discuss


[Meep-discuss] MEEP-MPI

2013-01-14 Thread Rita Ribeiro
Hi,

I have installed python meep-mpi on Ubuntu 12.10, and when I execute the
command


mpirun -np 4 python name.py

instead of getting one result I get 4. So each core is running all the code
instead of splitting it across the 4 cores.

Best,
RitaR.
___
meep-discuss mailing list
meep-discuss@ab-initio.mit.edu
http://ab-initio.mit.edu/cgi-bin/mailman/listinfo/meep-discuss

Re: [Meep-discuss] MEEP-MPI

2013-01-14 Thread Wu
Hi RitaR,

I did not use meep-python, but here is my general suggestion:
check when MPI is initialized.

Best,
Chr. Wu



Am 14.01.2013 12:08, schrieb Rita Ribeiro:
 Hi,
 
 I had installed python meep -mpi in Ubuntu 12.10 and when i execute  the
 command 
 
 
 mpirun -np 4 python name.py
 
 instead of getting one result i get 4. So each core is running all the
 code instead of splitting it for the 4 cores.
 
 Best,
 RitaR.
 
 
 ___
 meep-discuss mailing list
 meep-discuss@ab-initio.mit.edu
 http://ab-initio.mit.edu/cgi-bin/mailman/listinfo/meep-discuss
 




___
meep-discuss mailing list
meep-discuss@ab-initio.mit.edu
http://ab-initio.mit.edu/cgi-bin/mailman/listinfo/meep-discuss

Re: [Meep-discuss] MEEP-MPI

2013-01-14 Thread Filip Dominec
Hi, first of all a trivial question: do you import meep-mpi or just meep?
F.

2013/1/14, Rita Ribeiro ritbe...@gmail.com:
 Hi,

 I had installed python meep -mpi in Ubuntu 12.10 and when i execute  the
 command


 mpirun -np 4 python name.py

 instead of getting one result i get 4. So each core is running all the code
 instead of splitting it for the 4 cores.

 Best,
 RitaR.


___
meep-discuss mailing list
meep-discuss@ab-initio.mit.edu
http://ab-initio.mit.edu/cgi-bin/mailman/listinfo/meep-discuss


Re: [Meep-discuss] MEEP-MPI

2013-01-14 Thread Zi-Lan Deng
Maybe the MPI version that you installed meep with is not the same as the
MPI version used by mpirun.

-- 
Best Regards,
Zilan
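
(A hedged way to check for exactly that kind of mismatch; the module and path names below are placeholders, adjust to your installation:

# which MPI runtime does the mpirun on your PATH belong to?
mpirun --version
# which MPI library is the python-meep extension actually linked against?
python -c 'import meep_mpi; print(meep_mpi.__file__)'    # hypothetical module name
ldd /path/to/site-packages/meep_mpi/_meep*.so | grep -i mpi
)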
___
meep-discuss mailing list
meep-discuss@ab-initio.mit.edu
http://ab-initio.mit.edu/cgi-bin/mailman/listinfo/meep-discuss

[Meep-discuss] meep-mpi install pass in make check but fail in simple test

2011-08-03 Thread Maicon Faria
I have a fresh meep-mpi with Open MPI that is working correctly; I also
compiled HDF5 with parallel enabled. meep-mpi passes all tests on make
check but fails on a simple example like:

///
maicon@lcepof01:~$ cat bend-flux.ctl
; From the Meep tutorial: transmission around a 90-degree waveguide
; bend in 2d.

(define-param sx 16) ; size of cell in X direction
(define-param sy 32) ; size of cell in Y direction
(set! geometry-lattice (make lattice (size sx sy no-size)))

(define-param pad 4) ; padding distance between waveguide and cell edge
(define-param w 1) ; width of waveguide

(define wvg-ycen (* -0.5 (- sy w (* 2 pad)))) ; y center of horiz. wvg
(define wvg-xcen (* 0.5 (- sx w (* 2 pad)))) ; x center of vert. wvg

(define-param no-bend? false) ; if true, have straight waveguide, not bend

(set! geometry
  (if no-bend?
  (list
   (make block
 (center 0 wvg-ycen)
 (size infinity w infinity)
 (material (make dielectric (epsilon 12)))))
  (list
   (make block
 (center (* -0.5 pad) wvg-ycen)
 (size (- sx pad) w infinity)
 (material (make dielectric (epsilon 12))))
   (make block
 (center wvg-xcen (* 0.5 pad))
 (size w (- sy pad) infinity)
 (material (make dielectric (epsilon 12)))))))

(define-param fcen 0.15) ; pulse center frequency
(define-param df 0.1)  ; pulse width (in frequency)
(set! sources (list
   (make source
 (src (make gaussian-src (frequency fcen) (fwidth df)))
 (component Ez)
 (center (+ 1 (* -0.5 sx)) wvg-ycen)
 (size 0 w))))

(set! pml-layers (list (make pml (thickness 1.0))))
(set-param! resolution 10)

(define-param nfreq 100) ; number of frequencies at which to compute flux
(define trans ; transmitted flux
  (add-flux fcen df nfreq
(if no-bend?
(make flux-region
  (center (- (/ sx 2) 1.5) wvg-ycen) (size 0 (* w 2)))
(make flux-region
  (center wvg-xcen (- (/ sy 2) 1.5)) (size (* w 2) 0)))))
(define refl ; reflected flux
  (add-flux fcen df nfreq
(make flux-region
  (center (+ (* -0.5 sx) 1.5) wvg-ycen) (size 0 (* w 2)))))

; for normal run, load negated fields to subtract incident from refl. fields
(if (not no-bend?) (load-minus-flux "refl-flux" refl))

(run-sources+
 (stop-when-fields-decayed 50 Ez
   (if no-bend?
   (vector3 (- (/ sx 2) 1.5) wvg-ycen)
   (vector3 wvg-xcen (- (/ sy 2) 1.5)))
   1e-3)
 (at-beginning output-epsilon))

; for normalization run, save flux fields for refl. plane
(if no-bend? (save-flux "refl-flux" refl))

(display-fluxes trans refl)
-//


When I execute meep-mpi I get

///
maicon@lcepof01:~$ mpirun -np 1 --machinefile .mpi_hostfile
/opt/meep/bin/meep-mpi bend-flux.ctl
Using MPI version 2.1, 1 processes
---
Initializing structure...
Working in 2D dimensions.
Computational cell is 16 x 32 x 0 with resolution 10
 block, center = (-2,-11.5,0)
  size (12,1,1e+20)
  axes (1,0,0), (0,1,0), (0,0,1)
  dielectric constant epsilon diagonal = (12,12,12)
 block, center = (3.5,2,0)
  size (1,28,1e+20)
  axes (1,0,0), (0,1,0), (0,0,1)
  dielectric constant epsilon diagonal = (12,12,12)
meep: incorrect dataset size (0 vs. 8800) in load_dft_hdf5
./bend-flux-refl-flux.h5:ey_dft
time for set_epsilon = 0.305609 s
---
--
MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_WORLD
with errorcode 1.

NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
You may or may not see output from other processes, depending on
exactly when Open MPI kills them.
--

Some deprecated features have been used.  Set the environment
variable GUILE_WARN_DEPRECATED to "detailed" and rerun the
program to get more information.  Set it to "no" to suppress
this message.
--
mpirun has exited due to process rank 0 with PID 6679 on
node lcepof01 exiting without calling finalize. This may
have caused other processes in the application to be
terminated by signals sent by mpirun (as reported here).
--///

Does anybody have a clue about what is going wrong?

Thanks !
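
(A guess rather than a confirmed diagnosis, judging from the ctl quoted above: load-minus-flux reads bend-flux-refl-flux.h5, which only exists, with valid contents, after the straight-waveguide normalization run has written it via save-flux. A sketch of the two-step sequence, reusing the path from the command above:

rm -f bend-flux-refl-flux.h5                                      # clear any stale or empty file from a failed run
mpirun -np 1 /opt/meep/bin/meep-mpi no-bend?=true bend-flux.ctl   # normalization run, writes bend-flux-refl-flux.h5
mpirun -np 1 /opt/meep/bin/meep-mpi bend-flux.ctl                 # bend run, now load-minus-flux can find it
)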
___
meep-discuss mailing list

[Meep-discuss] meep-mpi fails during make check

2011-03-25 Thread Martin
Dear Steven, dear users,

thanks for providing this tool and so much useful information. This mailing 
list and of course the wiki have always helped me find quick solutions, but now I 
have stumbled on a problem I am not able to solve. I have been using meep in serial 
mode for some time, running on an Ubuntu machine where I conveniently installed meep 
from the repositories.
Now I have got access to a multicore RHEL system, and for a couple of days I have 
been trying to make meep-mpi work. Here is what I did:

Mainly I followed the instructions from this link:
http://www.doe.carleton.ca/~kmedri/research/centosmeepinstall.html

I obtained the binaries from Epel and from here:
http://www.elders.princeton.edu/data/PU_IAS/5/en/os/x86_64/Workstation/

HDF5 was compiled from source:
CC=/path/to/openmpi/bin/mpicc ./configure --prefix=/path/to/hdf5 
--enable-parallel 
make
make install

Same for meep:
./configure --with-mpi --with-hdf5=/path/to/hdf5/

But make check already spits out errors:
PASS: bench
PASS: bragg_transmission
FAIL: convergence_cyl_waveguide
PASS: cylindrical
PASS: flux
PASS: harmonics
PASS: integrate
FAIL: known_results
PASS: one_dimensional
PASS: physical
FAIL: symmetry
FAIL: three_d
PASS: two_dimensional
PASS: 2D_convergence
PASS: h5test
PASS: pml

Nevertheless I installed meep and I'm able to calculate some of the examples, 
but not all of them:

mpirun -np 12 meep-mpi bend-flux.ctl

librdmacm: couldn't read ABI version.
librdmacm: assuming: 4
CMA: unable to get RDMA device list
librdmacm: couldn't read ABI version.
librdmacm: assuming: 4
CMA: unable to get RDMA device list
--
[[8627,1],1]: A high-performance Open MPI point-to-point messaging module
was unable to find any relevant network interfaces:

Module: OpenFabrics (openib)
  Host: host

Another transport will be used instead, although this may result in
lower performance.
--
librdmacm: couldn't read ABI version.
librdmacm: assuming: 4
CMA: unable to get RDMA device list
librdmacm: couldn't read ABI version.
librdmacm: assuming: 4
CMA: unable to get RDMA device list
librdmacm: couldn't read ABI version.
librdmacm: assuming: 4
CMA: unable to get RDMA device list
librdmacm: couldn't read ABI version.
librdmacm: assuming: 4
CMA: unable to get RDMA device list
librdmacm: couldn't read ABI version.
librdmacm: assuming: 4
CMA: unable to get RDMA device list
librdmacm: couldn't read ABI version.
librdmacm: assuming: 4
CMA: unable to get RDMA device list
librdmacm: couldn't read ABI version.
librdmacm: assuming: 4
CMA: unable to get RDMA device list
librdmacm: couldn't read ABI version.
librdmacm: assuming: 4
CMA: unable to get RDMA device list
librdmacm: couldn't read ABI version.
librdmacm: assuming: 4
CMA: unable to get RDMA device list
librdmacm: couldn't read ABI version.
librdmacm: assuming: 4
CMA: unable to get RDMA device list
Using MPI version 2.1, 12 processes
---
Initializing structure...
Working in 2D dimensions.
Computational cell is 16 x 32 x 0 with resolution 10
 block, center = (-2,-11.5,0)
  size (12,1,1e+20)
  axes (1,0,0), (0,1,0), (0,0,1)
  dielectric constant epsilon diagonal = (12,12,12)
 block, center = (3.5,2,0)
  size (1,28,1e+20)
  axes (1,0,0), (0,1,0), (0,0,1)
  dielectric constant epsilon diagonal = (12,12,12)
time for set_epsilon = 0.00918102 s
---
meep: meep: incorrect dataset size (0 vs. 8800) in load_dft_hdf5 
./bend-flux-refl-flux.h5:ey_dft
meep: incorrect dataset size (0 vs. 8800) in load_dft_hdf5 
./bend-flux-refl-flux.h5:ey_dft
meep: incorrect dataset size (0 vs. 8800) in load_dft_hdf5 
./bend-flux-refl-flux.h5:ey_dft
meep: incorrect dataset size (0 vs. 8800) in load_dft_hdf5 
./bend-flux-refl-flux.h5:ey_dft
meep: incorrect dataset size (0 vs. 8800) in load_dft_hdf5 
./bend-flux-refl-flux.h5:ey_dft
meep: incorrect dataset size (0 vs. 8800) in load_dft_hdf5 
./bend-flux-refl-flux.h5:ey_dft
meep: incorrect dataset size (0 vs. 8800) in load_dft_hdf5 
./bend-flux-refl-flux.h5:ey_dft
meep: incorrect dataset size (0 vs. 8800) in load_dft_hdf5 
./bend-flux-refl-flux.h5:ey_dft
meep: incorrect dataset size (0 vs. 8800) in load_dft_hdf5 
./bend-flux-refl-flux.h5:ey_dft
meep: incorrect dataset size (0 vs. 8800) in load_dft_hdf5 
./bend-flux-refl-flux.h5:ey_dft
meep: incorrect dataset size (0 vs. 8800) in load_dft_hdf5 
./bend-flux-refl-flux.h5:ey_dft
--
MPI_ABORT was invoked on rank 4 in communicator MPI_COMM_WORLD
with errorcode 1.

NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
You may or may not see output from other processes, depending on
exactly when Open MPI kills them.


I get the same results with a serial HDF5 from a rpm package. All tests are 
failing with a manually compiled openmpi --without-openib.

Any hints? Thanks for the 
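
(Not a fix for the dataset error itself, but a hedged note on the librdmacm / OpenFabrics warnings at the top of that output: they mean Open MPI is probing for InfiniBand hardware. On a machine without working InfiniBand they can usually be silenced by excluding that transport, which is an Open MPI option rather than a Meep one:

mpirun -np 12 --mca btl ^openib meep-mpi bend-flux.ctl
)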

Re: [Meep-discuss] meep-mpi

2011-01-27 Thread David Lively
Hi Stefan,

You'll need to run meep using the mpi launcher (typically mpirun or
mpirun_rsh). On our system, (using mvapich) I use a script like so:

/usr/mpi/gcc/mvapich2-1.4.1/bin/mpirun_rsh -ssh -np 128 -hostfile $MPI_FOLDER/machines $MEEP_INSTALL_FOLDER/bin/meep-mpi $1

You'll need to make the following adjustments:

Replace /usr/mpi/gcc/mvapich2-1.4.1/bin/mpirun_rsh with the path to your
own mpirun or mpirun_rsh command (whichever you are using, depends on your
MPI configuration). If this is already in your path you can probably remove
the /usr// and just use the name of the executable.

Replace -np 128 with the number of processes you really want to run. This
is typically (number of machines)*(number of processor cores).

Replace $MPI_FOLDER/machines with the path to the list of machines that
participate in your cluster.

Replace $MEEP_INSTALL_FOLDER/bin/meep-mpi with the path to meep_mpi in
your machine.

In my installation, I have configured the $MPI_FOLDER and
$MEEP_INSTALL_FOLDER environment variables with the paths to the machines
file and the folder where meep-mpi is installed, respectively.

I placed this command in a shell script called go. When I need to start a
run, I enter

go mySim.ctl

and off it goes. I also placed that script in a folder accessible to all of
the users that need to run meep, included it in their path, and added the
$MPI_FOLDER and $MEEP_INSTALL_FOLDER variables to their shell startup
script.
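
(Spelled out, the "go" wrapper described above might look roughly like this; the mvapich path and the two environment variables are taken from the message itself, everything else is a sketch:

#!/bin/sh
# go -- launch a Meep control file on the cluster; usage: go mySim.ctl
/usr/mpi/gcc/mvapich2-1.4.1/bin/mpirun_rsh -ssh -np 128 \
    -hostfile "$MPI_FOLDER/machines" \
    "$MEEP_INSTALL_FOLDER/bin/meep-mpi" "$1"
)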

Good luck.


On Wed, Jan 26, 2011 at 5:28 PM, Stefan Kapser stefan.kap...@ph.tum.dewrote:

 Dear meep-users,
 I would like to use meep-mpi for my computations now and have a cluster
 with meep-mpi installed. Do I have to add any additional command in the
 libctl code as I would have to in C++ (initialize mpi(argc, argv);)?
 Thanks a lot,
 Stefan

 ___
 meep-discuss mailing list
 meep-discuss@ab-initio.mit.edu
 http://ab-initio.mit.edu/cgi-bin/mailman/listinfo/meep-discuss

___
meep-discuss mailing list
meep-discuss@ab-initio.mit.edu
http://ab-initio.mit.edu/cgi-bin/mailman/listinfo/meep-discuss

[Meep-discuss] meep-mpi

2011-01-26 Thread Stefan Kapser

Dear meep-users,
I would like to use meep-mpi for my computations now and have a cluster 
with meep-mpi installed. Do I have to add any additional command in the 
libctl code as I would have to in C++ (initialize mpi(argc, argv);)?

Thanks a lot,
Stefan

___
meep-discuss mailing list
meep-discuss@ab-initio.mit.edu
http://ab-initio.mit.edu/cgi-bin/mailman/listinfo/meep-discuss


Re: [Meep-discuss] Meep-mpi: hard limit of ~ 2e9 points?

2011-01-03 Thread Georg Wachter
Hello,

While it is good to see that many users are interested in adaptive
resolutions for meep, what I meant was actually Is there any interest *by
the developers* to implement such a feature somewhen?.

Mischa wrote:

you could consider doing a coordinate transformation that magnifies the
 region that you are interested in, and compensate for that by choosing an
 appropriately modified epsilon profile:
 http://www.mail-archive.com/meep-discuss@ab-initio.mit.edu/msg03029.html


The exploitation of the scaling of Maxwell's equations that you suggest is very
interesting; I hadn't thought about that!
Would it also work with the current implementation of dispersive materials?
(Since only \sigma can be a function of position?)

gpipc wrote:
 what resolution you need?


I am not yet sure I am finished with the convergence study. Right now it
looks like this:
My nanostructure has a characteristic radius of curvature of ~ 50 nm, but is
much larger than that (um range). Since I'm only interested in an estimate
of the effects of  beam propagation near the structure, I obtain reasonable
looking results for quite coarse resolutions of ~ 12.5 nm. The maximum I can
afford is ~ 4 nm, calculating on a cluster, with a simulation box of ~ 5*5*3
um and dispersive materials.

I get decent results for a 20 nm diameter nanosphere in two
 dimensions (so it is actually a cylinder) with 0.5 nm resolution, but they
 do not seem to be enough for 3d (I am trying a calculation with 0.25 nm
 resolution now). I compare the FDTD scattering efficiencies with Mie
 theory.


I would think needing a better resolution for 3d than for 2d is the
intuitive result, no?

I personally would be extremely careful with such high resolutions. At 0.25
nm you are in the range where you have a single atom per pixel/voxel.
Maxwell's Equations (in matter) are a macroscopic theory formulated for
fields which should be considered averaged over many atom distances. I would
not expect any classical electromagnetic theory to accurately describe
experiments on this length scale.

Georg



On Wed, Dec 29, 2010 at 1:02 PM, gpipc gp...@cup.uni-muenchen.de wrote:

 On Wed, 29 Dec 2010 01:36:54 +0100, Georg Wachter georgwach...@gmail.com
 wrote:

 
  PS.: I would really like to see a possibility for having regions of
  different resolution in meep. This is The One Feature (for me) that
  commercial competitors have over Meep or any free electromagnetism
  software.
  Is there any interest that this might be implemented soonishly?


 I, for one, would be interested. But I must admit that I do not have any
 clue on how much effort it could cost developers to code this.

 --
 
 Giovanni Piredda
 Postdoc - AK Hartschuh

 Phone: ++49 - (0) 89/2180-77601
 Fax.: ++49 – (0) 89/2180-77188
 Room: E2.062
 
 Message sent by Cup Webmail (Roundcube)


___
meep-discuss mailing list
meep-discuss@ab-initio.mit.edu
http://ab-initio.mit.edu/cgi-bin/mailman/listinfo/meep-discuss

Re: [Meep-discuss] Meep-mpi: hard limit of ~ 2e9 points?

2010-12-31 Thread gpipc
On Wed, 29 Dec 2010 01:36:54 +0100, Georg Wachter georgwach...@gmail.com
wrote:
 Hello,
 
 For the simulation of small metal structures with meep, I unfortunately
 need
 a ridiculously high resolution, 


Hi Georg,
since we are working with a similar FDTD problem (simulation of small
metal structures) and the same program - Meep - would you mind telling a
bit more precisely what resolution you need? I am doing some
experimentation with scattering from a nanosphere and I have got the
impression that to accurately simulate a *spectrally* sharper resonance one
needs a higher *spatial* resolution; does this make any sense? The only bit
of information that I have at the moment regarding this is that if in a
Drude model for a metal I decrease the damping, then I need a better
spatial resolution to simulate with the same accuracy the scattering at
the
plasmon resonance. I haven't yet tried with structures different from a
sphere. Also, I have noticed that for the same model of the metal (same
Drude- Lorentz parameters) a 3d simulation requires higher resolution than
a 2d simulation.
Right now I get decent results for a 20 nm diameter nanosphere in two
dimensions (so it is actually a cylinder) with 0.5 nm resolution, but they
do not seem to be enough for 3d (I am trying a calculation with 0.25 nm
resolution now). I compare the FDTD scattering efficiencies with Mie
theory.

The point of all of this is figuring out what is the coarsest resolution
which gives decent results, and doing the actual simulations with that.


Giovanni





-- 

Giovanni Piredda
Postdoc - AK Hartschuh

Phone: ++49 - (0) 89/2180-77601
Fax.: ++49 – (0) 89/2180-77188
Room: E2.062

Message sent by Cup Webmail (Roundcube)


___
meep-discuss mailing list
meep-discuss@ab-initio.mit.edu
http://ab-initio.mit.edu/cgi-bin/mailman/listinfo/meep-discuss

Re: [Meep-discuss] Meep-mpi: hard limit of ~ 2e9 points?

2010-12-29 Thread gpipc
On Wed, 29 Dec 2010 01:36:54 +0100, Georg Wachter georgwach...@gmail.com
wrote:

 
 PS.: I would really like to see a possibility for having regions of
 different resolution in meep. This is The One Feature (for me) that
 commercial competitors have over Meep or any free electromagnetism
 software.
 Is there any interest that this might be implemented soonishly?


I, for one, would be interested. But I must admit that I do not have any
clue on how much effort it could cost developers to code this.

-- 

Giovanni Piredda
Postdoc - AK Hartschuh

Phone: ++49 - (0) 89/2180-77601
Fax.: ++49 – (0) 89/2180-77188
Room: E2.062

Message sent by Cup Webmail (Roundcube)


___
meep-discuss mailing list
meep-discuss@ab-initio.mit.edu
http://ab-initio.mit.edu/cgi-bin/mailman/listinfo/meep-discuss

[Meep-discuss] Meep-mpi: hard limit of ~ 2e9 points?

2010-12-28 Thread Georg Wachter
Hello,

For the simulation of small metal structures with meep, I unfortunately need
a ridiculously high resolution, leading to a lot of points in my simulation
box (around 2.2e9). On trying to run a 512-core job, I ran into a strange
error:

meep_highres.e489139:meep: Cannot split -2080123546 grid points into 512
parts

In the source code, it says [structure.cpp - void structure::check_chunks()
]:
  // FIXME: should use 'long long' else will fail if grid > 2e9 points
So it appears to be due to the range of the integer variable.
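
(A quick sanity check of that interpretation, assuming a shell with 64-bit arithmetic such as bash: the reported negative count is exactly the 32-bit wrap-around of a ~2.2e9-point grid.

printf '%d\n' $(( -2080123546 + 4294967296 ))    # prints 2214843750, i.e. ~2.2e9 grid points
)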

Is this an easy fix, or does it have side effects? (You can tell that I'm a
FORTRAN user.)

Best regards,
  Georg

PS.: I would really like to see a possibility for having regions of
different resolution in meep. This is The One Feature (for me) that
commercial competitors have over Meep or any free electromagnetism software.
Is there any interest that this might be implemented soonishly?
___
meep-discuss mailing list
meep-discuss@ab-initio.mit.edu
http://ab-initio.mit.edu/cgi-bin/mailman/listinfo/meep-discuss

[Meep-discuss] meep-mpi error

2010-10-26 Thread David Lively
When starting meep-mpi on a new installation (Scientific Linux), I receive
the following errors:

libibverbs: Warning: RLIMIT_MEMLOCK is 32768 bytes.
This will severely limit memory registrations.
Fatal error in MPI_Init:
Other MPI error, error stack:
MPIR_Init_thread(311)...: Initialization failed
MPID_Init(191)..: channel initialization failed
MPIDI_CH3_Init(163).:
MPIDI_CH3I_RDMA_init(146)...:
rdma_get_control_parameters(545):
get_hca_type(324)...:
hcaNameToType(231)..: Unable to get InfiniBand device Rate.


Anyone have any suggestions? Will the 32KB limit really keep Meep from
starting at all?
___
meep-discuss mailing list
meep-discuss@ab-initio.mit.edu
http://ab-initio.mit.edu/cgi-bin/mailman/listinfo/meep-discuss

[Meep-discuss] meep-mpi 3d ring transmission problem

2010-07-27 Thread markos calderon
Hi everyone,

Currently, I'm working on setting up a cluster using Amazon EC2. We are using
meep-mpi with OpenMPI. We are doing some testing with the example exercises, and
we are having some issues with the following exercise: *3d_ring_transmission.ctl*

Our problem is that the simulation time does not improve as we increase the
number of nodes in the cluster; I mean, with 2, 4, 6, ... nodes, the time
increases instead of decreasing. We don't know exactly what is happening. We
suspect that because the exercise does a lot of writing ((to-appended hz
(at-every 0.5 output-hfield-z))), it takes more time on account of the Amazon
EC2 node virtualization, but we are not sure about this. The final output is
20 GB.

Also, I paste two outputs from runs with 2 and 4 nodes. These outputs are just
the beginning, not the whole thing. As you can see in the outputs, the time
between steps is mostly better with 4 nodes than with 2, but not always: at
timesteps 1, 41, 81, ... the time is lower with 2 nodes than with 4 nodes. And
in the end, the total time is better with 2 nodes =/

I'm not from the FDTD simulation field; my background is computer science, but
I would like to understand this a little better. My questions are: is my
assumption about the virtualization correct? And can we keep the (at-every 0.5)
output in RAM and only write it to the hard drive at the end?
The command to run the ctl is:
meep no-ring?=true 3d_ring_transmission.ctl | tee transmission0.out

thanks!
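
(On the RAM question: a hedged workaround, not something Meep does automatically, is to run inside a RAM-backed tmpfs such as /dev/shm and copy the HDF5 output out afterwards. This only helps if the output, 20 GB here, actually fits in the node's memory; all paths below are placeholders:

mkdir -p /dev/shm/meep-run && cd /dev/shm/meep-run
mpirun -np 4 meep-mpi no-ring?=true /path/to/3d_ring_transmission.ctl | tee transmission0.out
cp ./*.h5 transmission0.out /path/to/permanent/storage/
)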
3d_ring_transmission.ctl

;Simulation of a 3D ring resonator in add-drop configuration

;OUTPUT: Transmission flux from thru port and drop port

;General parameters
(define-param nc  3.03) ;core refractive index
(define-param ns  1.67) ;substrate refractive index
(define-param r1  2) ;inner radius of ring in microns
(define-param r2  2.5) ;outer radius of ring
(define-param w  0.55) ;waveguide width in microns
(define-param gap  0.2) ; gap between ring and straight waveguide in microns
(define-param pad  0.75) ;padding distance in microns
(define-param h 0.405) ; height of waveguides in microns
(define-param hs  0.6) ;height of substrate
(define-param dpml  1) ;pml thickness

;Cell structure definition
(define sx (* 2 (+ r2 gap w pad dpml))) ; X direction cell size
;(define sy (* 2 (+ r2 pad dpml))) ; Y direction cell size
(define sy (* 2 (+ r2 pad ))) ; Y direction cell size
(define sz (* 2 (+ hs dpml))) ; Z direction cell size
(set! geometry-lattice (make lattice (size sx sy sz)))

;FLAG to denote complete structure = false, waveguide only = true
(define-param no-ring? true) ;default simulate waveguides only

;Construct waveguide and ring resonator
(define subs-width (* 2 (+ r2 gap w pad)))
(define subs-length (* 2 (+ r2 pad)))

(if no-ring?
(begin
  (set! geometry (list
  ;substrate
  (make block
(center 0 0 (* -1 (/ hs 2))) ; center of substrate
;(size (+ subs-width dpml) (+ subs-length
dpml) hs) ; dimension of substrate is (wxLxh) = 2*(r2+gap+w+pad) x
2*(r2+pad) x hs
(size  subs-width  subs-length hs)
(material (make dielectric (index ns
  ;waveguides
  (make block
(center (+ r2 gap (/ w 2)) 0 (/ h 2))
;right waveguide
;(size  w (+ subs-length dpml) h)
(size  w subs-length h)
(material (make dielectric (index nc
  (make block
(center  (* -1 (+ r2 gap (/ w 2))) 0 (/ h 2))
;(size w (+ subs-length dpml) h)
(size  w subs-length h)
(material (make dielectric (index nc
  )))
;else condition
(begin
  (set! geometry (list
  ;substrate
  (make block
(center 0 0 (* -1 (/ hs 2))) ; center of substrate
;(size (+ subs-width dpml) (+ subs-length
dpml) hs) ; dimension of substrate is (wxLxh) = 2*(r2+gap+w+pad) x
2*(r2+pad) x hs
(size  subs-width  subs-length hs)
(material (make dielectric (index ns
  ;waveguides
  (make block
(center (+ r2 gap (/ w 2)) 0  (/ h 2))
;right waveguide
;(size  w (+ subs-length dpml)h h)
(size  w subs-length h)
(material (make dielectric (index nc
  (make block
(center  (* -1 (+ r2 gap (/ w 2))) 0 (/ h 2))
;(size w (+ subs-length dpml) h)
(size  w subs-length h)
(material (make dielectric (index nc
  ;ring resonator 

Re: [Meep-discuss] meep-mpi scaling perf with more than 2 processors

2010-07-22 Thread gdemesy
 a))

(define-param mod-pores(*  0.05 a))



(define-param sx  (* 1. a) )

(define-param sy  (* 1. a) )

(define-param sz  (+ hsuper hsubs hpc))

(define-param dpml a)



(define-param kx (/  0. a) )

(define-param ky (/ -0.2 a) )



(define-param kpara (sqrt (+ (* kx kx) (* ky ky) ) ) )

(define-param Exampl (* -1 ky (/ 1 kpara) ) )

(define-param Eyampl (*kx (/ 1 kpara) ) )



(define-param theta (asin (/ kx fcen) ) )

(define-param phi   (atan (/ ky kx) ) )

(define-param theta_deg (* 180. (/ 1 pi) (asin (/ kx fcen))) )





(define szpml (+ sz (* 2. dpml)))

(set! geometry-lattice (make lattice (size sx sy szpml)))

(set! pml-layers (list (make pml (thickness dpml) (direction Z))))

(set! ensure-periodicity true)

(set! k-point (vector3 kx ky 0))

(define (my-amp-func p) (* (exp (* 0+2i pi kx (vector3-x p)))  (exp (*  
0+2i pi ky (vector3-y p)))   ) )




(set! sources (list

   (make source

 (src (make gaussian-src (frequency fcen) (fwidth df)))

 (component Ex)

 (center 0. 0. (+ zpcmax (* 0.9 hsuper)) ) (size sx sy 0)

 (amp-func my-amp-func)

 (amplitude Exampl ))

)

)

(run-until runtime0)

_

Best regards,


Guillaume


meep-discuss-requ...@ab-initio.mit.edu wrote:



Message: 1
Date: Sat, 17 Jul 2010 04:20:02 -0700
From: Alex McLeod alexmcl...@sbcglobal.net
Subject: Re: [Meep-discuss] meep-discuss Digest, Vol 53, Issue 12
To: meep-discuss@ab-initio.mit.edu

Nizamov, Guillaume,

I can speak from my own off-hand experience using meep-mpi on our
cluster, whose technical details I list below:

vulcan.lbl.gov (1936 PE)
Dell PowerEdge R610 Cluster
242 dual-socket, quad-core Intel 2.4Ghz Nehalem processor nodes
5808GB aggregate memory
48TB Bluearc NFS storage
60TB DDN S2A6620 Lustre storage
Qlogic QDR Infiniband interconnect
18.5 TF (theoretical peak)

We have compiled meep-mpi with OpenMPI-intel-1.4.1 and against HDF5
1.8.4p1-intel-serial.  When running massive 3D volume calculations
with PMLs on all boundaries and with frequent heavy HDF5 I/O, I
achieve the fastest calculation speeds with around 16 processors while
using 4 processors per node.  In all cases I observe pretty even
memory distribution, so long as the simulation volume in voxel units
divides evenly by 16.

The HDF5 I/O actually slows the overall calculation by a factor of 2
on account of overhead associated with HDF5 calls.  For our use case,
we found this overhead to be even greater with parallel HDF5, which is
evidently optimized for writing of datasets far larger than we have
the capacity to compute with FDTD.  So, we have stuck with serial
HDF5.  In the complete absence of HDF5 I/O, we found meep-mpi to show
near optimal scaling out to 32 processors or more.

Guillaume, what benchmark are you running exactly?  I.e., are you
using HDF5 output, and if so, how frequently and over what volumes, or
any additional field computations, flux volumes, etc.?

Best,
Alex


Alexander S. McLeod
B.A. Physics and Astrophysics - University of California at Berkeley
Simulation Engineer - Theory Group, Molecular Foundry (LBNL)
Site Lead - Network for Computational Nanotechnology at Berkeley / MIT
asmcl...@lbl.gov    707-853-0716


On Jul 16, 2010, at 5:54 AM, meep-discuss-requ...@ab-initio.mit.edu
wrote:


From: gdem...@physics.utoronto.ca
Date: July 16, 2010 5:54:32 AM PDT
To: Nizamov Shawkat nizamov.shaw...@gmail.com
Cc: meep-discuss@ab-initio.mit.edu
Subject: Re: [Meep-discuss] meep-mpi scaling perf with more than 2
processors


Hi Nizamov,

Thanks for your comments. I should mention the fact that the previous
job corresponds to the normalization run, where you only have free space +
PMLs. I have only one source plane term and a set of Bloch
conditions. I
really don't know how meep splits the domain into chunks, but I was
figuring that this was done along the propagation direction.
You are right, I may have to look

Re: [Meep-discuss] meep-mpi scaling perf with more than 2 processors

2010-07-16 Thread Nizamov Shawkat
 In your case, have you witnessed this kind of unbalanced behavior (unbalanced 
 memory, I
 mean)?

Sorry, I do not remember exact details.

Let's see once again:

1817525   0  353m 221m 6080 R  99.8  1.4   1:10.41  1  meep-mpi
1817425   0  354m 222m 6388 R 100.2  1.4   1:10.41  6  meep-mpi
1817225   0 1140m 1.0g 7016 R  99.8  6.3   1:10.41  2  meep-mpi
1817325   0 1140m 1.0g 6804 R  99.5  6.3   1:10.40  4  meep-mpi

Tasks: 228 total,   5 running, 222 sleeping,   0 stopped,   1 zombie
Cpu1  : 23.9%us, 76.1%sy,  0.0%ni,  0.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu6  : 23.3%us, 76.7%sy,  0.0%ni,  0.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu2  : 99.7%us,  0.3%sy,  0.0%ni,  0.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu4  : 99.7%us,  0.3%sy,  0.0%ni,  0.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st

Well, it may be that the simulation space is divided unevenly. In that
case the results seem quite natural: the bigger simulation volumes
(cpu2 and cpu4) run at their full speed, while the 3-4 times smaller volumes
(cpu1 and cpu6) complete their simulation steps circa 3 times faster
and waste the rest of the time waiting for the two other cores.

If this interpretation is correct, then there is nothing wrong with
your setup, and:

1) it would mean that the splitting of the overall simulation volume into
separate per-core simulation volumes was not performed optimally by
meep. Any meep developer care to comment? I remember that the splitting
algorithm takes the structure into account and optimizes the chunk
volumes accordingly. E.g., cores 1 and 6 may actually be
simulating the slab volume, while cores 2 and 4 are
calculating the free space/PML. Try without the slab to see whether in that
case the distribution is even.

2) scaling might be much better when you further increase the number
of cores, because the simulation volume may be divided more evenly. Can
you try it?

Actually, it would be interesting to compare how the simulation volume is
divided for different numbers of processor cores, with and without the slab;
this may give a clue as to how the splitting works. Another option is to
look at the sources :)
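
(For the comparison just suggested, a minimal way to capture the per-rank memory split while a run is going, assuming Linux procps top:

top -b -d 10 -n 3 | grep meep-mpi    # three snapshots, 10 s apart, of every meep-mpi rank
)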

With best regards
Shawkat Nizamov

___
meep-discuss mailing list
meep-discuss@ab-initio.mit.edu
http://ab-initio.mit.edu/cgi-bin/mailman/listinfo/meep-discuss


Re: [Meep-discuss] meep-mpi scaling perf with more than 2 processors

2010-07-16 Thread gdemesy

Hi Nizamov,

Thanks for your comments. I should mention the fact that the previous
job corresponds to the normalization run, where you only have free space +
PMLs. I have only one source plane term and a set of Bloch conditions. I
really don't know how meep splits the domain into chunks, but I was
figuring that this was done along the propagation direction.
You are right, I may have to look at the source :\

Below are the results for 8 procs.

Tasks: 237 total,   9 running, 227 sleeping,   0 stopped,   1 zombie
Cpu1  : 55.3%us, 44.7%sy,  0.0%ni,  0.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu2  : 54.4%us, 45.6%sy,  0.0%ni,  0.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu5  : 52.8%us, 47.2%sy,  0.0%ni,  0.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu6  : 52.6%us, 47.4%sy,  0.0%ni,  0.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu8  : 40.4%us, 59.6%sy,  0.0%ni,  0.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu15 : 54.0%us, 46.0%sy,  0.0%ni,  0.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu3  : 99.1%us,  0.9%sy,  0.0%ni,  0.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu4  : 99.1%us,  0.9%sy,  0.0%ni,  0.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Mem:  16411088k total, 11423856k used,  4987232k free,  256k buffers
Swap:0k total,0k used,0k free,   275088k cached

   PID USER  PR  NI  VIRT  RES  SHR S %CPU %MEMTIME+  COMMAND
  5907 gdemesy   25   0  946m 751m 6840 R 104.9  4.7   2:06.09 meep-mpi
  5909 gdemesy   25   0  949m 755m 7000 R 104.9  4.7   2:06.11 meep-mpi
  5908 gdemesy   25   0  946m 751m 6856 R 104.6  4.7   2:06.12 meep-mpi
  5906 gdemesy   25   0  946m 751m 7068 R 102.1  4.7   2:06.02 meep-mpi
  5902 gdemesy   25   0  949m 755m 7036 R 101.8  4.7   2:06.02 meep-mpi
  5903 gdemesy   25   0  946m 751m 6892 R 101.8  4.7   2:06.02 meep-mpi
  5905 gdemesy   25   0 2798m 2.5g 6992 R 101.8 16.3   2:06.02 meep-mpi
  5904 gdemesy   25   0 2794m 2.5g 7096 R 101.5 16.2   2:06.02 meep-mpi

Again, my 10Gb load is not evenly split... And the run is even longer  
than with 4 processors.
If we modify the structure, say by removing Bloch conditions, the load  
is again unevenly dispatched:


Cpu3  : 99.4%us,  0.6%sy,  0.0%ni,  0.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu4  : 99.4%us,  0.6%sy,  0.0%ni,  0.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu0  : 47.3%us, 52.7%sy,  0.0%ni,  0.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu1  : 47.9%us, 52.1%sy,  0.0%ni,  0.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu2  : 47.6%us, 52.4%sy,  0.0%ni,  0.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu5  : 47.9%us, 52.1%sy,  0.0%ni,  0.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu6  : 48.1%us, 51.9%sy,  0.0%ni,  0.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu7  : 47.1%us, 52.9%sy,  0.0%ni,  0.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Mem:  16411088k total,  8093036k used,  8318052k free,  256k buffers
  PID USER  PR  NI  VIRT  RES  SHR S %CPU %MEMTIME+  COMMAND
 9116 gdemesy   25   0 2330m 2.1g 6224 R 100.9 13.3   0:35.67 meep-mpi
 9117 gdemesy   25   0 2332m 2.1g 6152 R 100.9 13.3   0:35.66 meep-mpi
 9119 gdemesy   25   0  561m 366m 6088 R 101.3  2.3   0:35.67 meep-mpi
 9118 gdemesy   25   0  561m 366m 6204 R 100.9  2.3   0:35.66 meep-mpi
 9120 gdemesy   25   0  558m 363m 5788 R 100.9  2.3   0:35.66 meep-mpi
 9114 gdemesy   25   0  563m 368m 6088 R 100.6  2.3   0:35.66 meep-mpi
 9115 gdemesy   25   0  561m 366m 6164 R 100.6  2.3   0:35.66 meep-mpi
 9121 gdemesy   25   0  560m 365m 5776 R 100.6  2.3   0:35.65 meep-mpi

Now let's add the slab to this dummy job... Doesn't change much:
Cpu11 : 99.7%us,  0.3%sy,  0.0%ni,  0.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu4  : 99.7%us,  0.3%sy,  0.0%ni,  0.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu0  : 46.8%us, 53.2%sy,  0.0%ni,  0.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu2  : 47.3%us, 52.7%sy,  0.0%ni,  0.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu7  : 43.9%us, 56.1%sy,  0.0%ni,  0.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu9  : 47.1%us, 52.9%sy,  0.0%ni,  0.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu13 : 47.4%us, 52.6%sy,  0.0%ni,  0.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu14 : 48.2%us, 51.8%sy,  0.0%ni,  0.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Mem:  16411088k total,  8586420k used,  7824668k free,  256k buffers
  PID USER  PR  NI  VIRT  RES  SHR S %CPU %MEMTIME+  COMMAND
 9732 gdemesy   25   0  561m 367m 6884 R 100.3  2.3   1:19.13 meep-mpi
 9733 gdemesy   25   0 2571m 2.3g 7020 R 100.3 14.8   1:19.12 meep-mpi
 9731 gdemesy   25   0  563m 368m 6636 R 100.0  2.3   1:19.12 meep-mpi
 9734 gdemesy   25   0 2573m 2.3g 6832 R 100.0 14.8   1:19.12 meep-mpi
 9735 gdemesy   25   0  561m 367m 6900 R 100.0  2.3   1:19.12 meep-mpi
 9736 gdemesy   25   0  561m 367m 6624 R 100.0  2.3   1:19.11 meep-mpi
 9737 gdemesy   25   0  560m 365m 6152 R 100.0  2.3   1:19.12 meep-mpi
 9738 gdemesy   25   0  558m 363m 6116 R 100.0  2.3   1:19.13 meep-mpi

Thanks for your help anyway... I will keep you posted if I manage to  
get better perf.


Best,

Guillaume


[Meep-discuss] meep-mpi scaling perf with more than 2 processors

2010-07-15 Thread gdemesy

Dear Meep users and developer,

I'm getting strange scaling performance using meep-mpi compiled with  
IntelMPI on our cluster. When I go from 1 to 2 processors, I'm getting  
an almost ideal scaling (i.e. runtime is divided by almost 2 as shown  
below for various problem sizes), but the scaling becomes very weak  
when using more than 2 processors. I should say that the meep-mpi results  
agree with the ones I am getting on my PC with meep-serial (in other  
words, our compilation seems all right).


nb_proc   runtime-res=20   runtime-res=40   runtime-res=60   runtime-res=80
1         20.5             135              449              1086
2         11.47            73               230              551
4         11.52            68               219              530
8         12.9             67               222              528

Let's go for some more details with a job size of ~3Gb (3D stuff). I  
am showing below the stats obtained when requesting 4 processors:

mpirun -np 4 meep-mpi res=100 runtime0=2 norm-run?=true slab3D.ctl

-
Mem:  16411088k total,  4015216k used, 12395872k free,  256k buffers
Swap:0k total,0k used,0k free,   283692k cached
PIDPR  NI  VIRT  RES  SHR S %CPU %MEMTIME+P  COMMAND
1817525   0  353m 221m 6080 R  99.8  1.4   1:10.41  1  meep-mpi
1817425   0  354m 222m 6388 R 100.2  1.4   1:10.41  6  meep-mpi
1817225   0 1140m 1.0g 7016 R  99.8  6.3   1:10.41  2  meep-mpi
1817325   0 1140m 1.0g 6804 R  99.5  6.3   1:10.40  4  meep-mpi

Tasks: 228 total,   5 running, 222 sleeping,   0 stopped,   1 zombie
Cpu1  : 23.9%us, 76.1%sy,  0.0%ni,  0.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu6  : 23.3%us, 76.7%sy,  0.0%ni,  0.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu2  : 99.7%us,  0.3%sy,  0.0%ni,  0.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu4  : 99.7%us,  0.3%sy,  0.0%ni,  0.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
[...]
-

So what we see here is that while the processors are all running flat  
out, for CPU 1 and 6 (which are the two running processes light on  
memory) only 1/4 of the time is in user code, and 3/4 is in system  
time -- normally I/O, but here probably MPI communications. It  
explains why I don't get shorter runtimes with more than 2 processors.


So we have a fairly clear load-balance issue; Have you experienced  
this kind of situation? I was wondering if there may be meep-mpi  
parameters I can set to affect the domain decomposition into chunks in  
a helpful way.


I can send more details if needed.

Thanks in advance!

Best regards,

Guillaume Demésy


This message was sent using IMP, the Internet Messaging Program.



___
meep-discuss mailing list
meep-discuss@ab-initio.mit.edu
http://ab-initio.mit.edu/cgi-bin/mailman/listinfo/meep-discuss


Re: [Meep-discuss] meep-mpi scaling perf with more than 2 processors

2010-07-15 Thread Nizamov Shawkat
1) You didn't provide any details on the layout of your cluster. It is
hard to guess whether you have several dual-core nodes with 16 GB memory
each, or whether they are 6-core nodes and you are running on only one of them.
2) I recall that using a 4 (or was it 8?) core Athlon in a single PC
there was no acceleration beyond 3 cores. In my case the limitation
was most probably memory bandwidth, but that is not your case: I would
in any case see all my cores running at almost 100%.
3) Are you sure that you are accounting for the actual simulation and not
the initialization? Populating the memory with epsilons does not
parallelize evenly: every core populates only its own part of the simulation
space. If that part is uniformly filled it completes fast; if it contains
some structure, especially if subpixel averaging is turned on, it may take
a much longer time, during which the other cores simply wait.
Print some debug information like "structure initialization",
"simulation started", etc. and compare the timing distribution (see the
sketch below). From runtime0=2 I conclude that your simulation time is
actually rather short.
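
(A sketch of the timing check suggested in point 3; it relies on meep's own log lines, e.g. the "time for set_epsilon = ... s" line that also appears elsewhere in this thread, so the grep pattern is an assumption about your output:

mpirun -np 4 meep-mpi res=100 runtime0=2 norm-run?=true slab3D.ctl \
    | grep -E 'set_epsilon|s/step|finished'
)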

With best regards,
Shawkat Nizamov

2010/7/15, gdem...@physics.utoronto.ca gdem...@physics.utoronto.ca:
 Dear Meep users and developer,

 I'm getting strange scaling performance using meep-mpi compiled with
 IntelMPI on our cluster. When I go from 1 to 2 processors, I'm getting
 an almost ideal scaling (i.e. runtime is divided by almost 2 as shown
 below for various problem sizes), but the scaling becomes very weak
 when using more than 2 processors. I should say that meep-mpi results
 agree with the one I am getting on my PC with meep-serial (in other
 words, our compilation seems all right).

 nb_proc   runtime-res=20   runtime-res=40   runtime-res=60   runtime-res=80
 1         20.5             135              449              1086
 2         11.47            73               230              551
 4         11.52            68               219              530
 8         12.9             67               222              528

 Let's go for some more details with a job size of ~3Gb (3D stuff). I
 am showing below the stats obtained when requesting 4 processors:
 mpirun -np 4 meep-mpi res=100 runtime0=2 norm-run?=true slab3D.ctl

 -
 Mem:  16411088k total,  4015216k used, 12395872k free,  256k buffers
 Swap:0k total,0k used,0k free,   283692k cached
  PIDPR  NI  VIRT  RES  SHR S %CPU %MEMTIME+P  COMMAND
 1817525   0  353m 221m 6080 R  99.8  1.4   1:10.41  1  meep-mpi
 1817425   0  354m 222m 6388 R 100.2  1.4   1:10.41  6  meep-mpi
 1817225   0 1140m 1.0g 7016 R  99.8  6.3   1:10.41  2  meep-mpi
 1817325   0 1140m 1.0g 6804 R  99.5  6.3   1:10.40  4  meep-mpi

 Tasks: 228 total,   5 running, 222 sleeping,   0 stopped,   1 zombie
 Cpu1  : 23.9%us, 76.1%sy,  0.0%ni,  0.0%id,  0.0%wa,  0.0%hi,  0.0%si,
 0.0%st
 Cpu6  : 23.3%us, 76.7%sy,  0.0%ni,  0.0%id,  0.0%wa,  0.0%hi,  0.0%si,
 0.0%st
 Cpu2  : 99.7%us,  0.3%sy,  0.0%ni,  0.0%id,  0.0%wa,  0.0%hi,  0.0%si,
 0.0%st
 Cpu4  : 99.7%us,  0.3%sy,  0.0%ni,  0.0%id,  0.0%wa,  0.0%hi,  0.0%si,
 0.0%st
 [...]
 -

 So what we see here is that while the processors are all running flat
 out, for CPU 1 and 6 (which are the two running processes light on
 memory) only 1/4 of the time is in user code, and 3/4 is in system
 time -- normally I/O, but here probably MPI communications. It
 explains why I don't get shorter runtimes with more than 2 processors.

 So we have a fairly clear load-balance issue; Have you experienced
 this kind of situation? I was wondering if there may be meep-mpi
 parameters I can set to affect the domain decomposition into chunks in
 a helpful way.

 I can send more details if needed.

 Thanks in advance!

 Best regards,

 Guillaume Demésy

 
 This message was sent using IMP, the Internet Messaging Program.



 ___
 meep-discuss mailing list
 meep-discuss@ab-initio.mit.edu
 http://ab-initio.mit.edu/cgi-bin/mailman/listinfo/meep-discuss


___
meep-discuss mailing list
meep-discuss@ab-initio.mit.edu
http://ab-initio.mit.edu/cgi-bin/mailman/listinfo/meep-discuss


[Meep-discuss] meep-mpi on mac

2010-07-06 Thread Ajith R
Dear Steven,
I have installed meep-mpi on a Mac using fink.
It works fine with a single core.
But when I try to run it in parallel, it shows an error message.
I couldn't understand the problem. The error message is given below.
Please help me.
Thanking you in advance,
Ajith



Administrators-Mac-Pro:1 admin$ mpirun -np 8 meep-mpi a.ctl | tee a.out
[Administrators-Mac-Pro.local:01265] Error: unknown option --bootproxy
input in flex scanner failed
[Administrators-Mac-Pro.local:01263] [0,0,0] ORTE_ERROR_LOG: Timeout
in file 
/SourceCache/openmpi/openmpi-8/openmpi/orte/mca/pls/base/pls_base_orted_cmds.c
at line 275
[Administrators-Mac-Pro.local:01263] [0,0,0] ORTE_ERROR_LOG: Timeout
in file /SourceCache/openmpi/openmpi-8/openmpi/orte/mca/pls/rsh/pls_rsh_module.c
at line 1158
[Administrators-Mac-Pro.local:01263] [0,0,0] ORTE_ERROR_LOG: Timeout
in file /SourceCache/openmpi/openmpi-8/openmpi/orte/mca/errmgr/hnp/errmgr_hnp.c
at line 90
[Administrators-Mac-Pro.local:01263] ERROR: A daemon on node
Administrators-Mac-Pro.local failed to start as expected.
[Administrators-Mac-Pro.local:01263] ERROR: There may be more
information available from
[Administrators-Mac-Pro.local:01263] ERROR: the remote shell (see above).
[Administrators-Mac-Pro.local:01263] ERROR: The daemon exited
unexpectedly with status 2.
[Administrators-Mac-Pro.local:01263] [0,0,0] ORTE_ERROR_LOG: Timeout
in file 
/SourceCache/openmpi/openmpi-8/openmpi/orte/mca/pls/base/pls_base_orted_cmds.c
at line 188
[Administrators-Mac-Pro.local:01263] [0,0,0] ORTE_ERROR_LOG: Timeout
in file /SourceCache/openmpi/openmpi-8/openmpi/orte/mca/pls/rsh/pls_rsh_module.c
at line 1190
--
mpirun was unable to cleanly terminate the daemons for this job.
Returned value Timeout instead of ORTE_SUCCESS.
--
Administrators-Mac-Pro:1 admin$

___
meep-discuss mailing list
meep-discuss@ab-initio.mit.edu
http://ab-initio.mit.edu/cgi-bin/mailman/listinfo/meep-discuss


Re: [Meep-discuss] meep-mpi on mac

2010-07-06 Thread Nizamov Shawkat
My guess is that mpirun tries to set up the connection to all nodes
(every cpu core is counted as a single node) and fails. OpenMPI may
use different methods for connecting to nodes: ssh, rsh, shared
memory, etc. In your case it looks like it is trying rsh; this is AFAIK the
default OpenMPI behaviour. Read the OpenMPI docs about running with e.g.
shared memory (should be fastest) or over ssh (but then you have to
install and configure ssh). Or you can also install rsh.
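
(A first thing to check, a hedged suggestion rather than a confirmed diagnosis: whether the mpirun being used matches the MPI that fink's meep-mpi was built against; the /sw prefix below is fink's default and may differ on your system.

which -a mpirun            # list every mpirun on the PATH, e.g. /usr/bin vs /sw/bin
/sw/bin/mpirun -np 8 meep-mpi a.ctl | tee a.out
)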

Hope it helps,
Shawkat Nizamov

2010/7/6, Ajith R ajiths...@gmail.com:
 Dear Steven,
 I have installed meep-mpi on mac using fink.
 It works fine with single core.
 But when I try to run it parallel, it shows error message.
 I couldn't understand the problem. The error message is given below.
 Please help me.
 Thanking you in advance,
 Ajith



 Administrators-Mac-Pro:1 admin$ mpirun -np 8 meep-mpi a.ctl | tee a.out
 [Administrators-Mac-Pro.local:01265] Error: unknown option --bootproxy
 input in flex scanner failed
 [Administrators-Mac-Pro.local:01263] [0,0,0] ORTE_ERROR_LOG: Timeout
 in file
 /SourceCache/openmpi/openmpi-8/openmpi/orte/mca/pls/base/pls_base_orted_cmds.c
 at line 275
 [Administrators-Mac-Pro.local:01263] [0,0,0] ORTE_ERROR_LOG: Timeout
 in file
 /SourceCache/openmpi/openmpi-8/openmpi/orte/mca/pls/rsh/pls_rsh_module.c
 at line 1158
 [Administrators-Mac-Pro.local:01263] [0,0,0] ORTE_ERROR_LOG: Timeout
 in file
 /SourceCache/openmpi/openmpi-8/openmpi/orte/mca/errmgr/hnp/errmgr_hnp.c
 at line 90
 [Administrators-Mac-Pro.local:01263] ERROR: A daemon on node
 Administrators-Mac-Pro.local failed to start as expected.
 [Administrators-Mac-Pro.local:01263] ERROR: There may be more
 information available from
 [Administrators-Mac-Pro.local:01263] ERROR: the remote shell (see above).
 [Administrators-Mac-Pro.local:01263] ERROR: The daemon exited
 unexpectedly with status 2.
 [Administrators-Mac-Pro.local:01263] [0,0,0] ORTE_ERROR_LOG: Timeout
 in file
 /SourceCache/openmpi/openmpi-8/openmpi/orte/mca/pls/base/pls_base_orted_cmds.c
 at line 188
 [Administrators-Mac-Pro.local:01263] [0,0,0] ORTE_ERROR_LOG: Timeout
 in file
 /SourceCache/openmpi/openmpi-8/openmpi/orte/mca/pls/rsh/pls_rsh_module.c
 at line 1190
 --
 mpirun was unable to cleanly terminate the daemons for this job.
 Returned value Timeout instead of ORTE_SUCCESS.
 --
 Administrators-Mac-Pro:1 admin$

 ___
 meep-discuss mailing list
 meep-discuss@ab-initio.mit.edu
 http://ab-initio.mit.edu/cgi-bin/mailman/listinfo/meep-discuss


___
meep-discuss mailing list
meep-discuss@ab-initio.mit.edu
http://ab-initio.mit.edu/cgi-bin/mailman/listinfo/meep-discuss


Re: [Meep-discuss] meep-mpi and NFS: output/simulation error

2009-09-22 Thread Steven G. Johnson

On Sep 14, 2009, at 8:33 AM, Paul Muellner wrote:
Thank you for the hint with openMPI. We changed our MPI installation  
from mpich to openMPI and recompiled meep-mpi. Now everything seems  
to work fine.


Just for all other users that want to switch from mpich to openMPI:
You have to type MPICXX=mpiCC when using openMPI (instead of  
MPICXX=mpicc for mpich). It took us a whole day to find out...


I'm glad it's working for you!  Note that the suggestion to set MPICXX  
is documented in the installation manual:


	Note that the configure script attempts to automatically detect how
to compile MPI programs, but this may fail if you have an unusual
version of MPI or if you have several versions of MPI installed and
you want to select a particular one. You can control the version of
MPI selected by setting the MPICXX variable to the name of the
compiler to use.
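For example, a minimal sketch (prefix omitted, wrapper name as discussed above)
of forcing the OpenMPI C++ wrapper when configuring Meep:

./configure --with-mpi MPICXX=mpiCC
make
sudo make install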



Steven

___
meep-discuss mailing list
meep-discuss@ab-initio.mit.edu
http://ab-initio.mit.edu/cgi-bin/mailman/listinfo/meep-discuss

Re: [Meep-discuss] meep-mpi and NFS: output/simulation error

2009-09-14 Thread Paul Muellner
Dear Steven, dear other meep users!

Thank you for the hint with openMPI. We changed our MPI installation from mpich 
to openMPI and recompiled meep-mpi. Now everything seems to work fine.

Just for all other users that want to switch from mpich to openMPI:
You have to type MPICXX=mpiCC when using openMPI (instead of MPICXX=mpicc for 
mpich). It took us a whole day to find out...

Thanks and best regards,
Paul & Roman
 Original Message 
 Date: Wed, 9 Sep 2009 11:38:40 -0400
 From: Steven G. Johnson stevenj@gmail.com
 To: meep-discuss Discuss meep-discuss@ab-initio.mit.edu
 Subject: Re: [Meep-discuss] meep-mpi and NFS: output/simulation error

 
 On Sep 9, 2009, at 10:43 AM, Paul Muellner wrote:
  We upgraded our meep installation from 0.20.3 (from debian package)  
  to 1.0.3 (self compiled with mpich and parallel HDF5, all  
  dependencies from debian package system).
 
 Did the same inputs work okay with 0.20.3?
 
 This might be a problem with MPI-IO in your MPI implementation...  I  
 would try OpenMPI, which I've generally found to be higher quality  
 than MPICH.
 
 Steven
 


___
meep-discuss mailing list
meep-discuss@ab-initio.mit.edu
http://ab-initio.mit.edu/cgi-bin/mailman/listinfo/meep-discuss


Re: [Meep-discuss] meep-mpi and mvapich

2009-06-21 Thread Steven G. Johnson

Meep should compile with any standard-conforming MPI implementation.

Don't worry about compiler warnings; can you give an example of a  
compiler error?


On Jun 20, 2009, at 1:52 AM, liu wrote:

  I want to know, has anyone installed meep-mpi with mvapich?
  When I installed meep-mpi, the system already had mvapich installed, so I
used mvapich to build meep-mpi. configure was no problem, but when I typed
the make command, a lot of warnings and errors appeared.

  Does mvapich support meep-mpi?





___
meep-discuss mailing list
meep-discuss@ab-initio.mit.edu
http://ab-initio.mit.edu/cgi-bin/mailman/listinfo/meep-discuss


[Meep-discuss] meep-mpi and mvapich

2009-06-19 Thread liu
Hi,
  I want to know, has anyone installed meep-mpi with mvapich?
  When I installed meep-mpi, the system already had mvapich installed, so I used
mvapich to build meep-mpi. configure was no problem, but when I typed the make
command, a lot of warnings and errors appeared.
  Does mvapich support meep-mpi?

___
meep-discuss mailing list
meep-discuss@ab-initio.mit.edu
http://ab-initio.mit.edu/cgi-bin/mailman/listinfo/meep-discuss

Re: [Meep-discuss] Meep-mpi installation help

2009-05-18 Thread Andrew Galdes

Apologies, a correction to the ldd command output - I had tested the wrong file.

 ldd /usr/lib64/libguile.so
   libgmp.so.3 => /usr/lib64/libgmp.so.3 (0x2b4507097000)
   libcrypt.so.1 => /lib64/libcrypt.so.1 (0x2b45071cf000)
   libm.so.6 => /lib64/libm.so.6 (0x2b4507308000)
   libltdl.so.3 => /usr/lib64/libltdl.so.3 (0x2b450745e000)
   libpthread.so.0 => /lib64/libpthread.so.0 (0x2b4507565000)
   libc.so.6 => /lib64/libc.so.6 (0x2b450767c000)
   /lib64/ld-linux-x86-64.so.2 (0x4000)
   libdl.so.2 => /lib64/libdl.so.2 (0x2b45078ae000)

-AG


Andrew Galdes wrote:

Hi Steven,

Thanks. The prefix/--prefix was correct in the real run. I have
moved on now. The configure completes OK. However, I have another
problem:


For the record (for others who might ask the same original question as
I have), the way to get past ./configure not working was to recompile
the guile and guile-devel packages. But moving on,


make
...
/usr/lib64/libguile.so: file not recognized: File format not recognized
collect2: ld returned 1 exit status
make[3]: *** [meep_mpi] Error 1
make[3]: Leaving directory `/tmp/meep-1.0/libctl'
make[2]: *** [all] Error 2
make[2]: Leaving directory `/tmp/meep-1.0/libctl'
make[1]: *** [all-recursive] Error 1
make[1]: Leaving directory `/tmp/meep-1.0'
make: *** [all] Error 2
corvus meep-1.0 ldd /usr/lib64/libltdl.so
   libdl.so.2 => /lib64/libdl.so.2 (0x2b96909e6000)
   libc.so.6 => /lib64/libc.so.6 (0x2b9690aea000)
   /lib64/ld-linux-x86-64.so.2 (0x4000)

You can see that libguile is not recognised, but in the output above you
can see that the ldd command can read that file. I don't know how useful
that little test is, though. Any ideas?


Again, SLES 10, 64-bit.

Thanks all,
-Andrew G


Steven G. Johnson wrote:


The meep or meep-mpi executables require libctl.  If you 
configure --without-libctl, it will only install the C++-callable 
library.


(Note that it should be --prefix=, not prefix=)

This line doesn't complete and ends with the "guile-config is
broken" error:

./configure prefix=/opt/shared/meep/1.0 --with-mpi


Again, you need to look in the config.log file to see exactly what 
the error was when configure tried linking a small test program with 
guile.


Steven







___
meep-discuss mailing list
meep-discuss@ab-initio.mit.edu
http://ab-initio.mit.edu/cgi-bin/mailman/listinfo/meep-discuss


Re: [Meep-discuss] Meep-mpi installation help

2009-05-18 Thread Steven G. Johnson


On May 18, 2009, at 7:10 PM, Andrew Galdes wrote:
/opt/shared/openmpi/1.2.6/gnu/lib/libopen-pal.so -lnsl -lutil /opt/shared/gnu/gcc/3.4.6/lib/../lib64/libstdc++.so -L/lib/../lib64 -L/usr/lib/../lib64 -lc -lgcc_s /usr/lib64/libctl.a -L/usr/lib64 /usr/lib64/libguile.so /usr/lib64/libguile-ltdl.so -L/builddir/build/BUILD/guile-1.6.7/libguile/.libs -L/opt/shared/guile-devel/lib /opt/shared/guile-devel/lib/libguile.so /usr/lib64/libgmp.so -lcrypt /usr/lib64/libltdl.so -ldl -lhdf5 -lz -lm -pthread -Wl,-rpath -Wl,/opt/shared/openmpi/1.2.6/gnu/lib -Wl,-rpath -Wl,/usr/lib64/gcc/x86_64-suse-linux/4.1.2 -Wl,-rpath -Wl,/opt/shared/gnu/gcc/3.4.6/lib/../lib64 -Wl,-rpath -Wl,/opt/shared/guile-devel/lib -Wl,-rpath -Wl,/opt/shared/openmpi/1.2.6/gnu/lib -Wl,-rpath -Wl,/usr/lib64/gcc/x86_64-suse-linux/4.1.2 -Wl,-rpath -Wl,/opt/shared/gnu/gcc/3.4.6/lib/../lib64 -Wl,-rpath -Wl,/opt/shared/guile-devel/lib



The fact that it is linking two completely separate versions of guile  
seriously concerns me.  You should only have one version of Guile  
installed on your system or you are asking for trouble.


I strongly recommend installing the guile-devel package that comes  
with your system and goes with the guile package that comes with your  
system.


Regarding your other difficulties, try adding the flag --without-gcc-arch
to the configure options, to see if the misidentified -mpentiumpro flag is
causing trouble.
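Concretely, that would be something like the following sketch, reusing the
options from earlier in this thread:

./configure --prefix=/opt/shared/meep/1.0 --with-mpi --without-gcc-arch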


Steven

___
meep-discuss mailing list
meep-discuss@ab-initio.mit.edu
http://ab-initio.mit.edu/cgi-bin/mailman/listinfo/meep-discuss


Re: [Meep-discuss] Meep-mpi installation help

2009-05-18 Thread Steven G. Johnson

On May 18, 2009, at 9:20 PM, Andrew Galdes wrote:
libtool: link: mpic++ -O3 -fstrict-aliasing -o meep_mpi meep.o structure.o meep_wrap.o main.o geom.o ctl-io.o -pthread ../src/.libs/libmeep_mpi.a -lhdf5 -L/tmp/build/gcc-3.4.6/x86_64-unknown-linux-gnu/libstdc++-v3/src -L/tmp/build/gcc-3.4.6/x86_64-unknown-linux-gnu/libstdc++-v3/src/.libs -L/tmp/build/gcc-3.4.6/gcc /opt/shared/openmpi/1.2.6/gnu/lib/libmpi_cxx.so /usr/lib64/gcc/x86_64-suse-linux/4.1.2/libstdc++.so /opt/shared/openmpi/1.2.6/gnu/lib/libmpi.so /opt/shared/openmpi/1.2.6/gnu/lib/libopen-rte.so /opt/shared/openmpi/1.2.6/gnu/lib/libopen-pal.so -lnsl -lutil /opt/shared/gnu/gcc/3.4.6/lib/../lib64/libstdc++.so -L/lib/../lib64 -L/usr/lib/../lib64 -lc -lgcc_s /usr/local/lib/libctl.a -L/usr/lib64 /usr/lib64/libguile.so /usr/lib64/libguile-ltdl.so -L/builddir/build/BUILD/guile-1.6.7/libguile/.libs /usr/local/lib/libguile.so /usr/lib/libltdl.so -L/usr/local/lib /usr/lib64/libgmp.so -lcrypt /usr/lib64/libltdl.so -ldl /usr/lib/libhdf5.so -lz -lm -pthread -Wl,-rpath -Wl,/opt/shared/openmpi/1.2.6/gnu/lib -Wl,-rpath -Wl,/usr/lib64/gcc/x86_64-suse-linux/4.1.2 -Wl,-rpath -Wl,/opt/shared/gnu/gcc/3.4.6/lib/../lib64 -Wl,-rpath -Wl,/opt/shared/openmpi/1.2.6/gnu/lib -Wl,-rpath -Wl,/usr/lib64/gcc/x86_64-suse-linux/4.1.2 -Wl,-rpath -Wl,/opt/shared/gnu/gcc/3.4.6/lib/../lib64
/usr/lib64/libguile.so: file not recognized: File format not recognized



What does

mpic++ --version

report?


___
meep-discuss mailing list
meep-discuss@ab-initio.mit.edu
http://ab-initio.mit.edu/cgi-bin/mailman/listinfo/meep-discuss


Re: [Meep-discuss] Meep-mpi installation help

2009-05-17 Thread Steven G. Johnson


On May 17, 2009, at 7:44 PM, Andrew Galdes wrote:

I'm compiling with GCC 4.1.2 and OpenMPI 1.2.5

The following configure line completes but no meep-mpi is created
as a result of the make command. Does the --without-libctl cancel
out the --with-mpi option?

./configure prefix=/opt/shared/meep/1.0 --with-mpi --without-libctl


The meep or meep-mpi executables require libctl.  If you configure  
--without-libctl, it will only install the C++-callable library.


(Note that it should be --prefix=, not prefix=)

This line doesn't complete and ends with the "guile-config is
broken" error:

./configure prefix=/opt/shared/meep/1.0 --with-mpi


Again, you need to look in the config.log file to see exactly what the  
error was when configure tried linking a small test program with guile.


Steven


___
meep-discuss mailing list
meep-discuss@ab-initio.mit.edu
http://ab-initio.mit.edu/cgi-bin/mailman/listinfo/meep-discuss


Re: [Meep-discuss] Meep-mpi installation help

2009-05-17 Thread Andrew Galdes

Hi Steven,

Thanks. The prefix/--prefix was correct in the real run. I have moved
on now. The configure completes OK. However, I have another problem:


For the record (for others who might ask the same original question as I
have), the way to get past ./configure not working was to recompile
the guile and guile-devel packages. But moving on,


make
...
/usr/lib64/libguile.so: file not recognized: File format not recognized
collect2: ld returned 1 exit status
make[3]: *** [meep_mpi] Error 1
make[3]: Leaving directory `/tmp/meep-1.0/libctl'
make[2]: *** [all] Error 2
make[2]: Leaving directory `/tmp/meep-1.0/libctl'
make[1]: *** [all-recursive] Error 1
make[1]: Leaving directory `/tmp/meep-1.0'
make: *** [all] Error 2
corvus meep-1.0 ldd /usr/lib64/libltdl.so
   libdl.so.2 => /lib64/libdl.so.2 (0x2b96909e6000)
   libc.so.6 => /lib64/libc.so.6 (0x2b9690aea000)
   /lib64/ld-linux-x86-64.so.2 (0x4000)

You can see that libguile is not recognised, but in the output above you
can see that the ldd command can read that file. I don't know how useful
that little test is, though. Any ideas?


Again, SLES 10, 64-bit.

Thanks all,
-Andrew G


Steven G. Johnson wrote:


The meep or meep-mpi executables require libctl.  If you configure 
--without-libctl, it will only install the C++-callable library.


(Note that it should be --prefix=, not prefix=)

This line doesn't complete and ends with the "guile-config is broken"
error:

./configure prefix=/opt/shared/meep/1.0 --with-mpi


Again, you need to look in the config.log file to see exactly what the 
error was when configure tried linking a small test program with guile.


Steven




___
meep-discuss mailing list
meep-discuss@ab-initio.mit.edu
http://ab-initio.mit.edu/cgi-bin/mailman/listinfo/meep-discuss


Re: [Meep-discuss] Meep-mpi installation help

2009-05-16 Thread Steven G. Johnson

On May 14, 2009, at 9:08 PM, Andrew Galdes wrote:

checking for guile-config... yes
checking if linking to guile works... no
configure: error: guile-config is broken


You can look in the config.log file to find the exact error message  
that caused linking to guile to fail.   (Note that the config.log file  
contains a dump of the output from every test, so it is quite long,  
but by searching for strings like -lguile you should be able to find  
the place where it tried to link a small program to Guile using the  
flags provided by guile-config, and failed).
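As a sketch, one way to pull the relevant failure out of config.log and to see
the flags that guile-config hands to the linker (the guile-config subcommands
below assume Guile 1.6/1.8):

grep -n lguile config.log | head    # locate the failed test link
guile-config link                   # linker flags configure uses for Guile
guile-config compile                # compiler flags for the Guile headers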


What MPI implementation are you using?

Steven

___
meep-discuss mailing list
meep-discuss@ab-initio.mit.edu
http://ab-initio.mit.edu/cgi-bin/mailman/listinfo/meep-discuss


Re: [Meep-discuss] Meep-mpi installation help

2009-05-15 Thread matt





Hi Andrew,

Check this thread regarding the guile-config error:
http://www.mail-archive.com/meep-discuss@ab-initio.mit.edu/msg00185.html

I don't believe there are packages available for the latest meep 
version.


Best,
Matt





On Fri, 15 May 2009, Andrew Galdes wrote:


Hello all,

I would like some assistance with installing the MPI version of Meep. The 
serial version configures and compiles fine but with the --with-mpi option 
the configure fails:


Using Intel compiler with OpenMPI.

./configure prefix=/opt/shared/meep/1.0 --with-mpi
...
checking for deflate in -lz... yes
checking for H5Pcreate in -lhdf5... yes
checking hdf5.h usability... yes
checking hdf5.h presence... yes
checking for hdf5.h... yes
checking for H5Pset_mpi... no
checking for H5Pset_fapl_mpio... no
Looks like we have got 8 processors
checking for guile-config... yes
checking if linking to guile works... no
configure: error: guile-config is broken

System info:
SLES 10, 64bit, Kernel 2.6.16.46-0.12-smp

RPM guile packages installed:
guile-1.8.1-72
guile-devel-1.8.1-72

Guile config version:
guile-config - Guile version 1.8.1

Any help is appreciated. Also does anyone have an RPM of 64bit binaries for 
meep-mpi, or know where to get it?


-Andrew G





___
meep-discuss mailing list
meep-discuss@ab-initio.mit.edu
http://ab-initio.mit.edu/cgi-bin/mailman/listinfo/meep-discuss


[Meep-discuss] Meep-mpi installation help

2009-05-14 Thread Andrew Galdes

Hello all,

I would like some assistance with installing the MPI version of Meep. 
The serial version configures and compiles fine but with the 
--with-mpi option the configure fails:


Using Intel compiler with OpenMPI.

./configure prefix=/opt/shared/meep/1.0 --with-mpi
...
checking for deflate in -lz... yes
checking for H5Pcreate in -lhdf5... yes
checking hdf5.h usability... yes
checking hdf5.h presence... yes
checking for hdf5.h... yes
checking for H5Pset_mpi... no
checking for H5Pset_fapl_mpio... no
Looks like we have got 8 processors
checking for guile-config... yes
checking if linking to guile works... no
configure: error: guile-config is broken

System info:
SLES 10, 64bit, Kernel 2.6.16.46-0.12-smp

RPM guile packages installed:
guile-1.8.1-72
guile-devel-1.8.1-72

Guile config version:
guile-config - Guile version 1.8.1

Any help is appreciated. Also does anyone have an RPM of 64bit binaries 
for meep-mpi, or know where to get it?


-Andrew G


___
meep-discuss mailing list
meep-discuss@ab-initio.mit.edu
http://ab-initio.mit.edu/cgi-bin/mailman/listinfo/meep-discuss


Re: [Meep-discuss] Meep-MPI on Ubuntu 8.10

2009-03-16 Thread Steven G. Johnson

On Mar 15, 2009, at 2:16 PM, Mohamed El-Beheiry wrote:
I have an 8 core machine (2 quad cores) running Ubuntu 8.10 64-bit.   
I am quite new at using MPI (with MPICH), but when I run:


mpiexec -n 8 meep-mpi test.ctl

I get the following output:

Using MPI version 1.2, 1 processes


Probably you compiled Meep with a different version than the one you  
are running with.  For example, if you compiled Meep against MPICH  
(i.e. using the mpiCC from MPICH) but are running with mpiexec from  
LAM or OpenMPI.  Or vice versa.
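A quick sanity check (a sketch; -show is the MPICH wrapper option, --showme the
OpenMPI one) is to confirm that mpiexec and the compiler wrapper used to build
Meep come from the same installation:

which mpiexec mpiCC
mpiexec --version        # or: mpirun --version
mpiCC -show              # MPICH wrappers print the underlying compile line
mpiCC --showme           # OpenMPI wrappers do the same with --showme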


Steven
___
meep-discuss mailing list
meep-discuss@ab-initio.mit.edu
http://ab-initio.mit.edu/cgi-bin/mailman/listinfo/meep-discuss

[Meep-discuss] Meep-MPI on Ubuntu 8.10

2009-03-15 Thread Mohamed El-Beheiry
Hello Meep users and Steven,



I have an 8 core machine (2 quad cores) running Ubuntu 8.10 64-bit.  I am quite 
new at using MPI (with MPICH), but when I run:



mpiexec -n 8 meep-mpi test.ctl



I get the following output:



Using MPI version 1.2, 1 processes

Using MPI version 1.2, 1 processes

Using MPI version 1.2, 1 processes

Using MPI version 1.2, 1 processes

Using MPI version 1.2, 1 processes

Using MPI version 1.2, 1 processes

Using MPI version 1.2, 1 processes

Using MPI version 1.2, 1 processes




And my simulations run for a much longer time than it would take if I
ran Meep on a single processor.  Furthermore, it does not appear there
is any parallel calculation taking place; each processor simply repeats
the same step.


Does anyone know what may be wrong?

Thanks,
Moe



___
meep-discuss mailing list
meep-discuss@ab-initio.mit.edu
http://ab-initio.mit.edu/cgi-bin/mailman/listinfo/meep-discuss

Re: [Meep-discuss] meep-mpi problem

2008-07-24 Thread Steven G. Johnson

On Jul 24, 2008, at 3:58 AM, 조환희 wrote:

I installed MPICH and executed parallel meep.
But this message appeared on the screen.

“MPI Application rank 0 killed before MPI_Init() with signal 9”

I did not understand what this error message means.


Meep with MPI works fine for me; maybe there is something the matter  
with your MPI installation?  I use OpenMPI rather than MPICH myself,  
as OpenMPI is a more modern and easy-to-use free MPI.


Steven
___
meep-discuss mailing list
meep-discuss@ab-initio.mit.edu
http://ab-initio.mit.edu/cgi-bin/mailman/listinfo/meep-discuss

Re: [Meep-discuss] meep-mpi and file output

2008-06-11 Thread Steven G. Johnson

On Jun 10, 2008, at 2:10 PM, Andreas Unger wrote:
 (define defaultport (current-output-port))
 (define port1 (open-output-file "filename.txt"))
 (set-current-output-port port1)

 (display-fluxes fluxInside ScatOutside AbsInside)

 (set-current-output-port defaultport)
 (close-output-port port1)

 This procedure opens a file, sets the output port to this file, and prints
 the fluxes with display-fluxes. Then it sets the output port back to the
 default and closes the file. It also works well with meep-mpi, and it can
 also be used to print other things to a file.

Andreas, that code has something of a race condition with MPI, because  
it calls (open-output-file ...) on every process.

You can use the variable print-ok?, which is true only on the master  
process, to decide whether to create a file.  (Or you can use
(= (meep-am-master) 1), which amounts to the same thing.)

The only catch is that you need to call display-fluxes from all the  
processes (although it will only print from the master process),  
because hidden inside display-fluxes is a collective operation (it has  
to sum the flux over all processes).  So, you would do something like  
this:

(define defaultport (current-output-port))
(define port1 (if print-ok? (open-output-file "filename.txt") false))
(if port1 (set-current-output-port port1))
(display-fluxes fluxInside ScatOutside AbsInside)
(set-current-output-port defaultport)
(if port1 (close-output-port port1))


However, I would never do any of this myself.  What I always do is to  
just pipe *all* of the output to a file, and then grep for what I want  
afterwards.  i.e.

meep foo.ctl > foo.out
grep flux1: foo.out > flux.out

The same thing works with MPI, too.  This way, you can decide  
afterwards what data you want to plot, etcetera.  This is the whole  
point of the way Meep's text output works (it's why the flux output  
lines are all prefixed with flux1: etc.)
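Under MPI the same workflow is just, as a sketch (the process count and the
flux1: prefix are illustrative):

mpirun -np 4 meep-mpi foo.ctl > foo.out
grep flux1: foo.out > flux.out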

Regards,
Steven G. Johnson


___
meep-discuss mailing list
meep-discuss@ab-initio.mit.edu
http://ab-initio.mit.edu/cgi-bin/mailman/listinfo/meep-discuss


[Meep-discuss] meep-mpi and file output

2008-06-10 Thread Robert Rehammar
Dear Meep users,

I am running meep-mpi and now I would like to output the result from a
flux-plane computation to a file. How do I do this? I would prefer not to
use display-fluxes, since I already use it for some other flux planes and
I have a set of scripts matched to post-process that output, which would
get confused if more data were added there. So ideally I would like to do
something like the Scheme construct
(define p (open-output-file "out.txt"))
and then print to this. But first I do not know how to handle the
flux-planes data structure (that is, how to output it without
display-fluxes), and further I do not know how to get only one process to
print. Can I use (meep-my-rank)? Are all processes aware of the content of
a particular flux plane?

Regards,
Robert Rehammar

-- 
Robert Rehammar
PhD-Student
Applied Physics, Chalmers University of Technology
Department of Physics, Göteborg University
SE-421 96 Göteborg
Sweden

Tel +46 (0)31 772 3156
Fax +46 (0)31 416 984
Cel +46 (0)738 328834
Web fy.chalmers.se/~e9ravn


___
meep-discuss mailing list
meep-discuss@ab-initio.mit.edu
http://ab-initio.mit.edu/cgi-bin/mailman/listinfo/meep-discuss


[Meep-discuss] meep-mpi installation on nonstandard directory

2008-05-20 Thread Seong Kyu Kim
Dear meep users:

I still have problems installing meep-mpi on a cluster Linux PC on which I do
not have root privilege.
Therefore, I tried to install into my local directory (/home/skkim/local); the
system has its MPI executables in /usr/local/mpich.
Several days ago, I posted difficulties with the hdf5 and meep installations.
Some of them are now solved.
For hdf5, I gave up installing hdf5-1.8.0 and installed hdf5-1.6.7 instead.
That solved the problems with hdf5.
When I was installing meep, configure did not find the directory containing
harminv.pc, so I added
PKG_CONFIG_PATH=$HOME/local/lib/pkgconfig during configuration. That seemed to
solve the problem.
Meep was barely installed, with a couple of warning notices (such as cannot
find the GNU GSL libraries, cannot find FFTW). However, when I tried to run a
ctl file (mpirun -np 4 meep-mpi ex1.ctl > ex1.out)
it did not run properly but gave the following message:
(Warning: Command line arguments for program should be given after the program
name.
Assuming that ex1.ctl is a command line argument for the program.
Missing: program name
Program meep-mpi either does not exist, is not
executable, or is an erroneous argument to mpirun.)

At this point, I thought the MPICH was too old and gave up on it.

I downloaded OpenMPI and installed it into /home/skkim/openmpi, using the usual
sequence of commands:
./configure --prefix=$HOME/openmpi
make
make install

It looked like the installation went OK.

After I inserted /home/skkim/openmpi/bin into the path (in front of the default
root paths), I tried to install hdf5-1.6.7:
./configure --prefix=$HOME/local --enable-parallel CC=mpicc
But it gave the following error message:
checking build system type... x86_64-unknown-linux-gnu
checking host system type... x86_64-unknown-linux-gnu
checking shell variables initial values... done
checking if basename works... yes
checking if xargs works... yes
checking for cached host... none
checking for sizeof hsize_t and hssize_t... large
checking for config x86_64-unknown-linux-gnu... no
checking for config x86_64-unknown-linux-gnu... no
checking for config unknown-linux-gnu... no
checking for config unknown-linux-gnu... no
checking for config x86_64-linux-gnu... no
checking for config x86_64-linux-gnu... no
checking for config x86_64-unknown... no
checking for config linux-gnu... found
compiler 'mpicc' is GNU gcc-3.4.6
checking for config ./config/site-specific/host-fri.cnm.utexas.edu... no
checking for config ./config/site-specific/host-cnm.utexas.edu... no
checking for config ./config/site-specific/host-utexas.edu... no
checking for config ./config/site-specific/host-edu... no
checking for gcc... mpicc
checking for C compiler default output file name... a.out
checking whether the C compiler works... configure: error: cannot run C 
compiled programs.
If you meant to cross compile, use `--host'.
See `config.log' for more details.

I attach the log file.

Can anybody give me suggestions?
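One common cause of a "cannot run C compiled programs" failure with an MPI
installed under a home directory, offered here only as an assumption to check
against config.log, is that the dynamic linker cannot find the OpenMPI
libraries at run time. A sketch of the environment setup, assuming the
$HOME/openmpi prefix used above, before re-running configure:

export PATH=$HOME/openmpi/bin:$PATH
export LD_LIBRARY_PATH=$HOME/openmpi/lib:$LD_LIBRARY_PATH
./configure --prefix=$HOME/local --enable-parallel CC=mpicc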

config.log
Description: Binary data
___
meep-discuss mailing list
meep-discuss@ab-initio.mit.edu
http://ab-initio.mit.edu/cgi-bin/mailman/listinfo/meep-discuss

[Meep-discuss] meep-mpi crash when using pipe instead of redirect with flux output function

2008-02-27 Thread Tjeerd Pinkert
Hello Steven,

I wrote earlier about meep-mpi crashing when using a flux output function.
I found out when this happens and attached a simple simulation together
with a script file that runs it on my machine: the second way of calling
will always cause a crash, while the first seems to be stable. (See the
shell script, and notice that the first uses a redirect while the second
uses a pipe to send stdout to a file.)

This behaviour also applies to the standard built-in flux output.
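For reference, the two invocation patterns being compared are roughly the
following sketch (control-file name and process count are placeholders):

mpirun -np 2 meep-mpi flux.ctl > flux.out        # redirect: reported stable
mpirun -np 2 meep-mpi flux.ctl | tee flux.out    # pipe through tee: reported to crash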

Is this a bug in meep-mpi? Or should I write to the people who make tee?

Yours, Tjeerd Pinkert


fluxcrash.tar.bz2
Description: application/bzip
___
meep-discuss mailing list
meep-discuss@ab-initio.mit.edu
http://ab-initio.mit.edu/cgi-bin/mailman/listinfo/meep-discuss

[Meep-discuss] meep-mpi

2007-12-17 Thread Robert Rehammar
Dear meep-mpi users,

I have been using meep for some time now with good results. Now I want
to scale up to a cluster and use meep-mpi. I my code I use (maybe ugly
hacks, I'm not sure) modules via commands like:
(use-modules (modules square2dpc))
in my script and
(define-module (modules square2dpc))
(module-use! (current-module) (resolve-module '(guile-user)))
in the module file.

Now this does not seem to work well with meep-mpi. I have a PBS file
looking like:
...
#Preparation work
cd ~/job/firstTrial/

# Time the program
time mpiexec ~/bin/meep-mpi sqrpc2.ctl > ~/job/firstTrial/of1 2> ~/job/firstTrial/of2

but this dies with (of2):
ERROR: no code for module (modules square2dpc)

Some deprecated features have been used.  Set the environment
variable GUILE_WARN_DEPRECATED to "detailed" and rerun the
program to get more information.  Set it to "no" to suppress
this message.
mpiexec: Warning: task 0 exited with status 1.

Does anyone have an idea of how to work this out? Of course I could put my
module code in the main script, but I would really like to be able to
work like this; it's very convenient.
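One thing worth checking, purely as an assumption not confirmed in this thread,
is whether Guile's module search path includes the directory that holds
modules/square2dpc.scm on the nodes where the job actually runs. A sketch of a
PBS fragment that exports it before launching (the -x forwarding flag belongs
to OpenMPI's mpirun, if that is the launcher in use):

cd ~/job/firstTrial/
export GUILE_LOAD_PATH=~/job/firstTrial:$GUILE_LOAD_PATH

time mpiexec ~/bin/meep-mpi sqrpc2.ctl > ~/job/firstTrial/of1 2> ~/job/firstTrial/of2
# with OpenMPI: mpirun -x GUILE_LOAD_PATH ~/bin/meep-mpi sqrpc2.ctl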

Best regards,
Robert Rehammar

-- 
Robert Rehammar
PhD-Student
Applied Physics, Chalmers University of Technology
Department of Physics, Göteborg University
SE-421 96 Göteborg
Sweden

Tel +46 (0)31 772 3156
Fax +46 (0)31 416 984
Cel +46 (0)738 328834
Web fy.chalmers.se/~e9ravn


___
meep-discuss mailing list
meep-discuss@ab-initio.mit.edu
http://ab-initio.mit.edu/cgi-bin/mailman/listinfo/meep-discuss


Re: [Meep-discuss] meep-mpi and read-line

2007-08-15 Thread matt



This was a very simple problem; guile 1.6.7 and 1.8.2 behave slightly 
differently.

read-line works with
(use-modules (ice-9 rdelim))

Regards,
Matt



On Mon, 13 Aug 2007, matt wrote:



 Hello,

 I have a meep script which opens a text file and sets some parameters
 according to values in that file.  To do this I am using a function
 called (read-line).

 In non-parallelized meep, this is not a problem, but when I try to do
 this with meep-mpi I get the following error:

 mpirun -np 4 meep-mpi test.ctl
 Using MPI version 2.0, 4 processes
 ERROR: Unbound variable: read-line
 rank 1 in job 57  cariddi_60917   caused collective abort of all ranks
   exit status of rank 1: killed by signal 9

 Is this intentional?  If so, is there some alternative I can try?

 Best Regards,
 Matt




___
meep-discuss mailing list
meep-discuss@ab-initio.mit.edu
http://ab-initio.mit.edu/cgi-bin/mailman/listinfo/meep-discuss


Re: [Meep-discuss] meep-mpi and read-line

2007-08-15 Thread Steven G. Johnson
On Wed, 15 Aug 2007, matt wrote:
 This was a very simple problem; guile 1.6.7 and 1.8.2 behave slightly
 differently.

 read-line works with
 (use-modules (ice-9 rdelim))

Yes, this is documented in the Guile manual:

http://www.gnu.org/software/guile/manual/html_node/Line_002fDelimited.html

Steven

___
meep-discuss mailing list
meep-discuss@ab-initio.mit.edu
http://ab-initio.mit.edu/cgi-bin/mailman/listinfo/meep-discuss


[Meep-discuss] meep-mpi and read-line

2007-08-13 Thread matt


Hello,

I have a meep script which opens a text file and sets some parameters 
according to values in that file.  To do this I am using a function 
called (read-line).

In non-parallelized meep, this is not a problem, but when I try to do 
this with meep-mpi I get the following error:

 mpirun -np 4 meep-mpi test.ctl
Using MPI version 2.0, 4 processes
ERROR: Unbound variable: read-line
rank 1 in job 57  cariddi_60917   caused collective abort of all ranks
   exit status of rank 1: killed by signal 9

Is this intentional?  If so, is there some alternative I can try?

Best Regards,
Matt


___
meep-discuss mailing list
meep-discuss@ab-initio.mit.edu
http://ab-initio.mit.edu/cgi-bin/mailman/listinfo/meep-discuss


Re: [Meep-discuss] meep-mpi runtime error

2006-11-02 Thread Steven G. Johnson

On Wed, 1 Nov 2006, 강희진 wrote:

I had installed parallel Meep.
 
I got an error message when I execute the sample code on 2 processors.


Remove the output-epsilon command from your .ctl file, to see if it is a 
problem with the HDF5 output.


If that fixes the problem, probably you need to recompile and reinstall 
HDF5 with MPI parallel-I/O enabled.  (And then reconfigure, recompile, and 
reinstall Meep.)
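A compact sketch of that rebuild (prefix and paths illustrative; --enable-parallel
is HDF5's parallel I/O switch):

./configure --prefix=/usr/local/hdf5 --enable-parallel CC=mpicc
make
sudo make install

export CPPFLAGS=-I/usr/local/hdf5/include LDFLAGS=-L/usr/local/hdf5/lib
./configure --with-mpi      # re-run Meep's configure so it picks up parallel HDF5
make
sudo make install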


Steven
___
meep-discuss mailing list
meep-discuss@ab-initio.mit.edu
http://ab-initio.mit.edu/cgi-bin/mailman/listinfo/meep-discuss

Re: [Meep-discuss] meep-mpi runtime error

2006-10-31 Thread K. Choi
Can I get the test.ctl file used for your test run?

On 11/1/06, 강희진 [EMAIL PROTECTED] wrote:






Dear Steven,

I had installed parallel Meep.

I got an error message when I execute the sample code on 2 processors.

=
$ mpirun -np 2 meep-mpi test.ctl > test.out

Using MPI version 1.2, 2 processes
---
Initializing structure...
Working in 2D dimensions.
     block, center = (0,0,0)
          size (1e+20,1,1e+20)
          axes (1,0,0), (0,1,0), (0,0,1)
          dielectric constant epsilon = 12
time for set_epsilon = 0.0297339 s
---
creating output file ./test-eps-00.00.h5...
MPI_Recv: process in local group is dead (rank 0, MPI_COMM_WORLD)
Rank (0, MPI_COMM_WORLD): Call stack within LAM:
Rank (0, MPI_COMM_WORLD): - MPI_Recv()
Rank (0, MPI_COMM_WORLD): - MPI_Allreduce()
Rank (0, MPI_COMM_WORLD): - main()

Some deprecated features have been used. Set the environment
variable GUILE_WARN_DEPRECATED to "detailed" and rerun the
program to get more information. Set it to "no" to suppress
this message.
=
But when I execute the sample code on 1 processor, that's OK:
=
$ mpirun -np 1 meep-mpi test.ctl > test.out
Using MPI version 1.2, 1 processes
---
Initializing structure...
Working in 2D dimensions.
     block, center = (0,0,0)
          size (1e+20,1,1e+20)
          axes (1,0,0), (0,1,0), (0,0,1)
          dielectric constant epsilon = 12
time for set_epsilon = 0.0913239 s
---
creating output file ./test-eps-00.00.h5...
Meep progress: 13.35/200.0 = 6.7% done in 4.0s, 56.1s to go
on time step 311 (time=15.55), 0.0128754 s/step
Meep progress: 30.05/200.0 = 15.0% done in 8.0s, 45.3s to go
on time step 640 (time=32), 0.0121734 s/step
Meep progress: 46.55/200.0 = 23.3% done in 12.0s, 39.6s to go
on time step 979 (time=48.95), 0.0118113 s/step
Meep progress: 63.5/200.0 = 31.8% done in 16.1s, 34.5s to go
on time step 1303 (time=65.15), 0.012351 s/step
Meep progress: 81.65/200.0 = 40.8% done in 20.1s, 29.1s to go
on time step 1683 (time=84.15), 0.0105425 s/step
Meep progress: 101.25/200.0 = 50.6% done in 24.1s, 23.5s to go
on time step 2049 (time=102.45), 0.0109509 s/step
Meep progress: 118.4/200.0 = 59.2% done in 28.1s, 19.3s to go
on time step 2420 (time=121), 0.0108062 s/step
Meep progress: 136.55/200.0 = 68.3% done in 32.1s, 14.9s to go
on time step 2778 (time=138.9), 0.0111751 s/step
Meep progress: 153.5/200.0 = 76.8% done in 36.1s, 10.9s to go
on time step 3113 (time=155.65), 0.0119554 s/step
Meep progress: 170.9/200.0 = 85.4% done in 40.1s, 6.8s to go
on time step 3468 (time=173.4), 0.011293 s/step
Meep progress: 187.55/200.0 = 93.8% done in 44.1s, 2.9s to go
on time step 3795 (time=189.75), 0.0122399 s/step
creating output file ./test-ez-000200.00.h5...
run 0 finished at t = 200.0 (4000 timesteps)
=
I don't know why this error occurred.
Please give me advice...

-- Ki-young Choi 
___
meep-discuss mailing list
meep-discuss@ab-initio.mit.edu
http://ab-initio.mit.edu/cgi-bin/mailman/listinfo/meep-discuss