Re: [Meep-discuss] Assorted issues/questions with parallel c++ Meep

2014-02-13 Thread Tran Quyet Thang
John Ball ballman2010@... writes:

 
 
 
 
 
 
 Hello all,

 I'm trying to run my C++ Meep script in parallel. I've found little
documentation on the subject, so I'm hoping to make a record here on the
mailing list of how to do it, as well as to clear up some of my own confusion
and questions about the issue.
 
 
 My original, bland, serial C++ compilation command comes straight from the
Meep C++ tutorial page:

 g++ `pkg-config --cflags meep` main.cpp -o SIM `pkg-config --libs meep`

 where I've used

 export PKG_CONFIG_PATH=/usr/local/apps/meep/lib/pkgconfig

 so that pkg-config knows where in the world the meep.pc file is.

 Then I can simply run the compiled code with:

 ./SIM
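
 For completeness, a minimal main.cpp in the spirit of the Meep C++ tutorial
(the geometry, source, and output below are placeholders, not my actual
simulation) looks something like the sketch below; since pkg-config supplies
the include path, a plain #include <meep.hpp> is enough here:

   #include <meep.hpp>
   using namespace meep;

   // Placeholder material function: vacuum everywhere.
   double eps(const vec &p) { (void)p; return 1.0; }

   int main(int argc, char **argv) {
     initialize mpi(argc, argv);               // harmless in the serial build
     double resolution = 20;                   // pixels per unit length
     grid_volume v = vol2d(5, 10, resolution); // 5x10 2d computational cell
     structure s(v, eps, pml(1.0));            // 1-unit-thick PML all around
     fields f(&s);

     double freq = 0.3, fwidth = 0.1;
     gaussian_src_time src(freq, fwidth);
     f.add_point_source(Ez, src, vec(1.1, 2.3)); // pulsed point source

     while (f.time() < f.last_source_time())
       f.step();

     f.output_hdf5(Ez, v.surroundings());      // write the final Ez field
     return 0;
   }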
 
 In parallel, the equivalent process I've settled on is as follows:

 First, I've changed the #include statement at the beginning of main.cpp to
point to the header file from the parallel install (not sure if this is
necessary, but it works):

 #include "/usr/local/apps/meep/1.2.1/mpi/lib/include/meep.hpp"
 
 To compile:
 
 mpic++ `pkg-config --cflags meep_mpi` par_main.cpp -o PAR_SIM `pkg-config --libs meep_mpi`
 
 where I've told pkg-config to instead look for meep_mpi.pc:
 
 export PKG_CONFIG_PATH=/usr/local/apps/meep/1.2.1/mpi/lib/pkgconfig
 
 To run this, I send the following command to the job scheduler:

 (...)/mpirun -np $N ./PAR_SIM
 
 where I choose N depending on the kind of node(s) I'm submitting to.
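
 As far as I can tell, the C++ source itself needs no MPI-specific changes
beyond what the tutorial example already contains: the initialize object at
the top of main() appears to handle MPI_Init/MPI_Finalize when Meep is built
against MPI. The one thing I've added (just a sketch, in case it helps
someone) is master_printf, so that status output isn't repeated by every rank:

   #include <meep.hpp>
   using namespace meep;

   int main(int argc, char **argv) {
     initialize mpi(argc, argv);  // MPI_Init here; MPI_Finalize in its destructor

     // Only the master process prints, so the log appears once, not N times.
     master_printf("Running on %d process(es)\n", count_processors());

     // ... structure/fields setup identical to the serial version ...
     return 0;
   }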
 
 This runs fine. Now I'm going to talk about performance:

 When submitting a particular job to a single 16-core node with 72 GB of
memory, if I set N=1, the memory usage is 30 GB and the simulation runs at
about 8.7 sec/step; the job took about 35 minutes. When I instead set N=8, the
memory usage is 62 GB and it runs at about 2.8 sec/step; the total simulation
takes about 12 minutes.

 So! Are these numbers to be expected? A ~3x speedup going from 1 to 8 cores
is less than I'd hoped for, but perhaps reasonable. What concerns me more,
though, is the memory: while I expected some increase in memory usage, I did
not expect a twofold increase when going from 1 to 8 cores. I want to verify
that this behavior is normal and that I'm not misusing the code or screwing up
its compilation somehow.
 
 Finally, just asking for some advice: I could feasibly break the job up and,
instead of using a single 16-core, 72 GB node as above, use, for example, nine
dual-core, 8 GB nodes. My guess is that doing so would increase the overhead
due to network communication between the nodes. However, what about memory
usage? Does anyone have experience with this? Furthermore, are there any tips
or best practices for configuring the simulation and/or the environment to
maximize throughput?
 
 Thanks in advance!
 
 
 
 
 
 
 

It is well known that FDTD simulation is memory-bandwidth bound: it scales
well with increasing memory bandwidth, in contrast with raw processing (FLOPS)
power.

As you increase the number of cores in an SMP configuration, the total
available memory bandwidth of the system does not increase; the memory
controller(s) are simply utilized (saturated) more effectively. That is why a
cluster with fewer processing cores but more memory controllers gives a better
speedup (versus cost) in FDTD than a single multi-core CPU. In other words,
many-core CPUs are a poor choice for FDTD calculations. In fact, my current
FDTD server is a dual-CPU machine with 4 cores per CPU, but with ~100 GB/s of
memory bandwidth to feed the calculation.
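
As a rough back-of-the-envelope check (assuming each time step has to stream
through roughly the whole ~30 GB of field/material data at least once, and
that the extra memory reported for N=8 is mostly shared pages counted once per
process), your own numbers point the same way:

  N = 1:  ~30 GB per step / 8.7 s per step  ~  3.4 GB/s sustained
  N = 8:  ~30 GB per step / 2.8 s per step  ~ 11   GB/s sustained

The real traffic per step is some small multiple of this (fields are both read
and written), so the 8-process run is plausibly already pushing against the
node's practical memory bandwidth - which would also explain the ~3x rather
than ~8x speedup.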

In my experience with mpi-meep, relatively little memory overhead was
encountered, especially in larger simulations; perhaps you (double-)counted
the shared memory space. The Unix free and top commands give a good estimate
of the real free memory. Could you post the output of free and top here?



___
meep-discuss mailing list
meep-discuss@ab-initio.mit.edu
http://ab-initio.mit.edu/cgi-bin/mailman/listinfo/meep-discuss

Re: [Meep-discuss] Field divergence in coordinate transforming (stretching)

2013-11-08 Thread Tran Quyet Thang
tran quyet thang hereiam2005@... writes:

 
 
 Dear Professor Johnson and Meep users,

 It has been mentioned multiple times that a capability similar to a variable
mesh size can be emulated in Meep by applying the coordinate-transformation
technique formulated in "Coordinate Transformation & Invariance in
Electromagnetism".
 
 However, my attempts at applying the technique with a scaling factor larger
than one have always resulted in field divergence.

 The script below tries to scale the whole computational domain, filled with
air, by a factor of 2 in all directions. According to the note, that
corresponds to an s factor of 2, i.e. an epsilon and mu of 0.5.
 
 
 
  (set-param! resolution 200)
  (define-param fcen 1.0)
  (define-param df   1.0)

  (set! geometry-lattice (make lattice (size 0.2 0.2 0.2)))
  ;(set! pml-layers (list (make pml (thickness 0.5))))

  (set! geometry (list
         (make block (center 0 0 0) (size infinity infinity infinity)
             (material (make dielectric
                 (epsilon 0.5)
                 (mu 0.5))))))

  (set! sources (list
         (make source
             (src (make gaussian-src (frequency fcen) (fwidth df)))
             (component Ez) (center 0 0 0))))

  (define (field-Ez)
     (print "Field: " (get-field-point Ez (vector3 0 0 0)) "\n"))

  (run-until 100
      ;(after-sources
          (at-every 0.1 field-Ez)
      ;)
  )
 
 
 The field, however, quickly diverges.

 Any attempt to stretch the coordinates would involve stretching the air
surrounding the devices, which makes the field diverge.

 Could anyone provide assistance? Thank you so much.

 Yours sincerely,
 Tran Quyet Thang
 Ajou University
 
 
 

Hi everyone, 

A quick update: lowering the Courant factor so that the stability condition is
satisfied fixes the problem, and the field no longer diverges.
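
For the record, the condition I ran into (as stated in the Meep documentation,
if I read it correctly) is

  S < n_min / sqrt(number of dimensions),

where S is the Courant factor (0.5 by default) and n_min is the smallest
refractive index anywhere in the cell. With epsilon = mu = 0.5 the index is
n = sqrt(0.5 * 0.5) = 0.5, so in 3D one needs S < 0.5/sqrt(3), roughly 0.29;
the default S = 0.5 violates this, hence the divergence. Setting, for example,
(set! Courant 0.25) before run-until restores stability.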

Thanks, everyone! Hopefully this will be useful to future Meep users.

Best regards.




___
meep-discuss mailing list
meep-discuss@ab-initio.mit.edu
http://ab-initio.mit.edu/cgi-bin/mailman/listinfo/meep-discuss

[Meep-discuss] Field divergence in coordinate transforming (stretching)

2013-11-07 Thread tran quyet thang
Dear Professor Johnson and Meep users,

It has been mentioned multiple times that a capability similar to a variable
mesh size can be emulated in Meep by applying the coordinate-transformation
technique formulated in "Coordinate Transformation & Invariance in
Electromagnetism".

However, my attempts at applying the technique with a scaling factor larger
than one have always resulted in field divergence.

The script below tries to scale the whole computational domain, filled with
air, by a factor of 2 in all directions. According to the note, that
corresponds to an s factor of 2, i.e. an epsilon and mu of 0.5.
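
For completeness, the transformation rule I am using (as I understand the
note) is that under a coordinate change with Jacobian J, the materials become

  eps' = J eps J^T / det(J),   mu' = J mu J^T / det(J).

For a uniform stretch x' = 2x in all three directions, J = 2*I and det(J) = 8,
so eps' = (4/8) eps = eps/2, and likewise mu' = mu/2. Starting from vacuum
this gives eps' = mu' = 0.5, which is what the script below uses.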



 (set-param! resolution 200)

 (define-param fcen 1.0)
 (define-param df   1.0)

 (set! geometry-lattice (make lattice (size 0.2 0.2 0.2)))

 ;(set! pml-layers (list (make pml (thickness 0.5))))

 (set! geometry (list
        (make block (center 0 0 0) (size infinity infinity infinity)
            (material (make dielectric
                (epsilon 0.5)
                (mu 0.5))))))

 (set! sources (list
        (make source
            (src (make gaussian-src (frequency fcen) (fwidth df)))
            (component Ez) (center 0 0 0))))

(define (field-Ez)
   (print "Field: " (get-field-point Ez (vector3 0 0 0)) "\n"))

(run-until 100
    ;(after-sources
        (at-every 0.1 field-Ez)
    ;)
)


The field, however, quickly diverges.

Any attempt to stretch the coordinates would involve stretching the air
surrounding the devices, which makes the field diverge.

Could anyone provide assistance? Thank you so much.

Yours sincerely,
Tran Quyet Thang
Ajou University

___
meep-discuss mailing list
meep-discuss@ab-initio.mit.edu
http://ab-initio.mit.edu/cgi-bin/mailman/listinfo/meep-discuss