Dear libMesh developers and users,

I created a BoundaryMesh from a VolumeMesh, and an ExplicitSystem to do 
calculations only on the BoundaryMesh. When running the simulation in 
parallel, there seems to be a load imbalance: the ratios of max to min over 
all processors range from 1.8 to 2282 for Vector operations when using 40 
cores. See the attached output.txt for -log_summary output on 24, 32 and 40 
cores.

The load imbalance might come from using the same mesh partitioner for both 
the VolumeMesh and the BoundaryMesh. To check this, I output the processor_id 
of every node on the BoundaryMesh and the VolumeMesh, and found that the two 
meshes have the same partitioning pattern on the surface. On the BoundaryMesh, 
some processors own relatively small fractions of the surface while others own 
relatively large ones, which could lead to load imbalance (see the attached 
picture). In the worst case, a processor that owns only interior nodes and 
none on the boundary will sit idle during the BoundaryMesh calculation.
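
For concreteness, the per-node ownership check can be done with a loop along 
these lines (a minimal sketch; it assumes a replicated mesh, so every rank 
sees all nodes, and runs on rank 0 only to avoid duplicated output):

// Sketch: dump the owning processor of every node on the boundary mesh.
if (boundary_mesh.processor_id() == 0)
  {
    MeshBase::node_iterator it = boundary_mesh.nodes_begin();
    MeshBase::node_iterator it_end = boundary_mesh.nodes_end();
    for (; it != it_end; ++it)
      libMesh::out << "node " << (*it)->id()
                   << " -> proc " << (*it)->processor_id() << std::endl;
  }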

I'm wondering:
1) Is there a way to define a separate mesh partitioner for the BoundaryMesh 
that partitions it more uniformly across the processors?
2) Are there any examples or existing code I could look at to achieve this?

Thanks in advance for your time.

Regards,
Xikai

P.S. I tried adding a boundary_mesh.partition() call, but it generates the 
same partitioning pattern as before:

LibMeshInit init (argc, argv);

// Read the volume mesh.
Mesh mesh(init.comm());
mesh.read("meshin");
int dim = mesh.mesh_dimension();

// Extract the surface of the volume mesh as a (dim-1)-dimensional mesh.
BoundaryMesh boundary_mesh(mesh.comm(), dim-1);
mesh.get_boundary_info().sync(boundary_mesh);

// Has no visible effect: the pattern still matches the volume partitioning.
boundary_mesh.partition();
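
One thing I'm considering is invoking a partitioner on the boundary mesh 
explicitly, along the lines of the sketch below. This uses the public 
Partitioner::partition(mesh, n_parts) interface with MetisPartitioner as an 
example (any other Partitioner subclass, e.g. LinearPartitioner, could be 
substituted); whether this actually overrides the partitioning inherited from 
sync() is part of my question:

#include "libmesh/metis_partitioner.h"

// Sketch: repartition the boundary mesh independently of the volume mesh.
// Note that this would break the processor correspondence between the two
// meshes, so volume<->boundary data transfer may need extra communication.
MetisPartitioner boundary_partitioner;
boundary_partitioner.partition(boundary_mesh, boundary_mesh.n_processors());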
                                          
Running /home/xikai/projects/infinitepoisson/boundary_integral-opt -in /home/xikai/projects/infinitepoisson/sphere_tet_approx_size0_1.e -r 3 -eps 1e-6


Volume mesh info, 3D spatial, 3D element
 Mesh Information:
  elem_dimensions()={3}
  spatial_dimension()=3
  n_nodes()=5077
    n_local_nodes()=305
  n_elem()=25963
    n_local_elem()=1084
    n_active_elem()=25963
  n_subdomains()=1
  n_partitions()=24
  n_processors()=24
  n_threads()=1
  processor_id()=0


Boundary mesh info, 3D spatial, 2D element
 Mesh Information:
  elem_dimensions()={2}
  spatial_dimension()=3
  n_nodes()=1487
    n_local_nodes()=90
  n_elem()=2970
    n_local_elem()=143
    n_active_elem()=2970
  n_subdomains()=1
  n_partitions()=24
  n_processors()=24
  n_threads()=1
  processor_id()=0


Boundary mesh info, 3D spatial, 2D element. Uniformly refine 3 times
 Mesh Information:
  elem_dimensions()={2}
  spatial_dimension()=3
  n_nodes()=95042
    n_local_nodes()=4717
  n_elem()=252450
    n_local_elem()=12155
    n_active_elem()=190080
  n_subdomains()=1
  n_partitions()=24
  n_processors()=24
  n_threads()=1
  processor_id()=0


 -------------------------------------------------------------------------
| Processor id:   0                                                       |
| Num Processors: 24                                                      |
| Time:           Wed Jun  3 17:38:07 2015                                |
| OS:             Linux                                                   |
| HostName:       n341                                                    |
| OS Release:     2.6.18-348.1.1.el5                                      |
| OS Version:     #1 SMP Tue Jan 22 16:19:19 EST 2013                     |
| Machine:        x86_64                                                  |
| Username:       xikai                                                   |
| Configuration:  ../configure  '--with-methods=opt oprof dbg'            |
|  '--prefix=/home/xikai/projects/moose/scripts/../libmesh/installed'     |
|  '--enable-silent-rules'                                                |
|  '--enable-unique-id'                                                   |
|  '--disable-warnings'                                                   |
|  '--disable-cxx11'                                                      |
|  '--enable-unique-ptr'                                                  |
|  '--enable-openmp'                                                      |
|  'METHODS=opt oprof dbg'                                                |
|  'PETSC_DIR=/home/xikai/projects/moose/petsc/openmpi_petsc-3.5.2/icc-opt'|
 -------------------------------------------------------------------------
 
------------------------------------------------------------------------------------------------------------
| Boundary Integration Performance: Alive time=2745.72, Active time=1954.15                                  |
------------------------------------------------------------------------------------------------------------
| Event                         nCalls     Total Time  Avg Time    Total Time  Avg Time    % of Active Time  |
|                                          w/o Sub     w/o Sub     With Sub    With Sub    w/o S    With S   |
|------------------------------------------------------------------------------------------------------------|
|                                                                                                            |
| Phi1 initialization           1          0.0070      0.007023    0.0070      0.007023    0.00     0.00     |
| integration                   1          1954.1429   1954.142903 1954.1429   1954.142903 100.00   100.00   |
------------------------------------------------------------------------------------------------------------
| Totals:                       2          1954.1499                                       100.00            |
------------------------------------------------------------------------------------------------------------

Running /home/xikai/projects/infinitepoisson/boundary_integral-opt -in /home/xikai/projects/infinitepoisson/sphere_tet_approx_size0_1.e -r 3 -eps 1e-6 -log_summary


Volume mesh info, 3D spatial, 3D element
 Mesh Information:
  elem_dimensions()={3}
  spatial_dimension()=3
  n_nodes()=5077
    n_local_nodes()=305
  n_elem()=25963
    n_local_elem()=1084
    n_active_elem()=25963
  n_subdomains()=1
  n_partitions()=24
  n_processors()=24
  n_threads()=1
  processor_id()=0


Boundary mesh info, 3D spatial, 2D element
 Mesh Information:
  elem_dimensions()={2}
  spatial_dimension()=3
  n_nodes()=1487
    n_local_nodes()=90
  n_elem()=2970
    n_local_elem()=143
    n_active_elem()=2970
  n_subdomains()=1
  n_partitions()=24
  n_processors()=24
  n_threads()=1
  processor_id()=0


Boundary mesh info, 3D spatial, 2D element. Uniformly refine 3 times
 Mesh Information:
  elem_dimensions()={2}
  spatial_dimension()=3
  n_nodes()=95042
    n_local_nodes()=4717
  n_elem()=252450
    n_local_elem()=12155
    n_active_elem()=190080
  n_subdomains()=1
  n_partitions()=24
  n_processors()=24
  n_threads()=1
  processor_id()=0


 -------------------------------------------------------------------------
| Processor id:   0                                                       |
| Num Processors: 24                                                      |
| Time:           Thu Jun  4 15:53:03 2015                                |
| OS:             Linux                                                   |
| HostName:       n293                                                    |
| OS Release:     2.6.18-348.1.1.el5                                      |
| OS Version:     #1 SMP Tue Jan 22 16:19:19 EST 2013                     |
| Machine:        x86_64                                                  |
| Username:       xikai                                                   |
| Configuration:  ../configure  '--with-methods=opt oprof dbg'            |
|  '--prefix=/home/xikai/projects/moose/scripts/../libmesh/installed'     |
|  '--enable-silent-rules'                                                |
|  '--enable-unique-id'                                                   |
|  '--disable-warnings'                                                   |
|  '--disable-cxx11'                                                      |
|  '--enable-unique-ptr'                                                  |
|  '--enable-openmp'                                                      |
|  'METHODS=opt oprof dbg'                                                |
|  'PETSC_DIR=/home/xikai/projects/moose/petsc/openmpi_petsc-3.5.2/icc-opt'|
 -------------------------------------------------------------------------
 
------------------------------------------------------------------------------------------------------------
| Boundary Integration Performance: Alive time=2621.97, Active time=2620.32                                  |
------------------------------------------------------------------------------------------------------------
| Event                         nCalls     Total Time  Avg Time    Total Time  Avg Time    % of Active Time  |
|                                          w/o Sub     w/o Sub     With Sub    With Sub    w/o S    With S   |
|------------------------------------------------------------------------------------------------------------|
|                                                                                                            |
| Phi1 initialization           1          0.0059      0.005911    0.0059      0.005911    0.00     0.00     |
| integration                   1          2282.5104   2282.510416 2282.5104   2282.510416 87.11    87.11    |
| output                        1          333.8928    333.892788  333.8928    333.892788  12.74    12.74    |
| read-in-mesh                  1          1.9726      1.972646    1.9726      1.972646    0.08     0.08     |
| refine-mesh                   3          1.8797      0.626576    1.8797      0.626576    0.07     0.07     |
| sync-boundary-mesh            1          0.0579      0.057870    0.0579      0.057870    0.00     0.00     |
------------------------------------------------------------------------------------------------------------
| Totals:                       8          2620.3194                                       100.00            |
------------------------------------------------------------------------------------------------------------

************************************************************************************************************************
***             WIDEN YOUR WINDOW TO 120 CHARACTERS.  Use 'enscript -r -fCourier9' to print this document            ***
************************************************************************************************************************

---------------------------------------------- PETSc Performance Summary: ----------------------------------------------

/home/xikai/projects/infinitepoisson/boundary_integral-opt on a arch-linux2-c-opt named n293 with 24 processors, by xikai Thu Jun  4 15:53:03 2015
Using Petsc Release Version 3.5.2, Sep, 08, 2014 

                         Max       Max/Min        Avg      Total 
Time (sec):           2.622e+03      1.00001   2.622e+03
Objects:              2.500e+01      1.00000   2.500e+01
Flops:                1.162e+04      3.48201   7.920e+03  1.901e+05
Flops/sec:            4.430e+00      3.48201   3.021e+00  7.250e+01
MPI Messages:         1.350e+02      1.13445   1.267e+02  3.040e+03
MPI Message Lengths:  2.504e+06      3.42430   1.341e+04  4.077e+07
MPI Reductions:       4.200e+01      1.00000

Flop counting convention: 1 flop = 1 real number operation of type (multiply/divide/add/subtract)
                            e.g., VecAXPY() for real vectors of length N --> 2N flops
                            and VecAXPY() for complex vectors of length N --> 8N flops

Summary of Stages:   ----- Time ------  ----- Flops -----  --- Messages ---  -- Message Lengths --  -- Reductions --
                        Avg     %Total     Avg     %Total   counts   %Total     Avg         %Total   counts   %Total 
 0:      Main Stage: 2.6220e+03 100.0%  1.9008e+05 100.0%  3.040e+03 100.0%  1.341e+04      100.0%  4.100e+01  97.6% 

------------------------------------------------------------------------------------------------------------------------
See the 'Profiling' chapter of the users' manual for details on interpreting output.
Phase summary info:
   Count: number of times phase was executed
   Time and Flops: Max - maximum over all processors
                   Ratio - ratio of maximum to minimum over all processors
   Mess: number of messages sent
   Avg. len: average message length (bytes)
   Reduct: number of global reductions
   Global: entire computation
   Stage: stages of a computation. Set stages with PetscLogStagePush() and PetscLogStagePop().
      %T - percent time in this phase         %F - percent flops in this phase
      %M - percent messages in this phase     %L - percent message lengths in this phase
      %R - percent reductions in this phase
   Total Mflop/s: 10e-6 * (sum of flops over all processors)/(max time over all processors)
------------------------------------------------------------------------------------------------------------------------
Event                Count      Time (sec)     Flops                             --- Global ---  --- Stage ---   Total
                   Max Ratio  Max     Ratio   Max  Ratio  Mess   Avg len Reduct  %T %F %M %L %R  %T %F %M %L %R Mflop/s
------------------------------------------------------------------------------------------------------------------------

--- Event Stage 0: Main Stage

VecCopy                1 1.0 3.8314e-0418.3 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecSet                 9 1.0 6.3014e-0412.4 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecAssemblyBegin       5 1.0 6.5352e-01 2.7 0.00e+00 0.0 2.2e+03 1.8e+04 1.5e+01  0  0 72 98 36   0  0 72 98 37     0
VecAssemblyEnd         5 1.0 3.8396e-01985.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecScatterBegin        2 1.0 4.1968e-02 6.2 0.00e+00 0.0 1.3e+02 6.8e+03 1.0e+00  0  0  4  2  2   0  0  4  2  2     0
VecScatterEnd          1 1.0 9.9802e-0476.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatZeroEntries         2 1.0 1.2207e-04 5.3 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
------------------------------------------------------------------------------------------------------------------------

Memory usage is given in bytes:

Object Type          Creations   Destructions     Memory  Descendants' Mem.
Reports information only for process 0.

--- Event Stage 0: Main Stage

              Vector    11             11      1929024     0
      Vector Scatter     3              3         2820     0
           Index Set     5              5         7224     0
   IS L to G Mapping     2              2        43412     0
              Matrix     3              3        82112     0
              Viewer     1              0            0     0
========================================================================================================================
Average time to get PetscTime(): 9.53674e-08
Average time for MPI_Barrier(): 0.000119019
Average time for zero size MPI_Send(): 6.57638e-06
#PETSc Option Table entries:
-eps 1e-6
-in /home/xikai/projects/infinitepoisson/sphere_tet_approx_size0_1.e
-log_summary
-r 3
#End of PETSc Option Table entries
Compiled without FORTRAN kernels
Compiled with full precision matrices (default)
sizeof(short) 2 sizeof(int) 4 sizeof(long) 8 sizeof(void*) 8 sizeof(PetscScalar) 8 sizeof(PetscInt) 4
Configure options: 
--prefix=/home/xikai/projects/moose/petsc/openmpi_petsc-3.5.2/icc-opt 
--download-hypre=1 --with-debugging=no --with-pic=1 --with-shared-libraries=1 
--with-cc=mpicc --with-cxx=mpicxx --with-fc=mpif90 --download-fblaslapack=1 
--download-metis=1 --download-parmetis=1 --download-superlu_dist=1 CC=mpicc 
CXX=mpicxx FC=mpif90 F77=mpif77 F90=mpif90 CFLAGS="-fPIC -fopenmp" 
CXXFLAGS="-fPIC -fopenmp" FFLAGS="-fPIC -fopenmp" FCFLAGS="-fPIC -fopenmp" 
F90FLAGS="-fPIC -fopenmp" F77FLAGS="-fPIC -fopenmp" 
PETSC_DIR=/home/xikai/projects/src/petsc-3.5.2
-----------------------------------------
Libraries compiled on Mon Jun  1 22:05:10 2015 on login6 
Machine characteristics: Linux-2.6.18-348.1.1.el5-x86_64-with-redhat-5.11-Final
Using PETSc directory: /home/xikai/projects/src/petsc-3.5.2
Using PETSc arch: arch-linux2-c-opt
-----------------------------------------

Using C compiler: mpicc -fPIC -fopenmp -fPIC -O3  ${COPTFLAGS} ${CFLAGS}
Using Fortran compiler: mpif90 -fPIC -fopenmp -fPIC -O3   ${FOPTFLAGS} ${FFLAGS} 
-----------------------------------------

Using include paths: 
-I/home/xikai/projects/src/petsc-3.5.2/arch-linux2-c-opt/include 
-I/home/xikai/projects/src/petsc-3.5.2/include 
-I/home/xikai/projects/src/petsc-3.5.2/include 
-I/home/xikai/projects/src/petsc-3.5.2/arch-linux2-c-opt/include 
-I/opt/soft/openmpi16-1.6.5-intel12-1/include
-----------------------------------------

Using C linker: mpicc
Using Fortran linker: mpif90
Using libraries: 
-Wl,-rpath,/home/xikai/projects/src/petsc-3.5.2/arch-linux2-c-opt/lib 
-L/home/xikai/projects/src/petsc-3.5.2/arch-linux2-c-opt/lib -lpetsc 
-Wl,-rpath,/home/xikai/projects/src/petsc-3.5.2/arch-linux2-c-opt/lib 
-L/home/xikai/projects/src/petsc-3.5.2/arch-linux2-c-opt/lib -lHYPRE 
-Wl,-rpath,/opt/soft/openmpi16-1.6.5-intel12-1/lib 
-L/opt/soft/openmpi16-1.6.5-intel12-1/lib 
-Wl,-rpath,/opt/intel/composer_xe_2011_sp1.12.361/compiler/lib/intel64 
-L/opt/intel/composer_xe_2011_sp1.12.361/compiler/lib/intel64 
-Wl,-rpath,/usr/lib/gcc/x86_64-redhat-linux/4.1.2 
-L/usr/lib/gcc/x86_64-redhat-linux/4.1.2 -lmpi_cxx -lsuperlu_dist_3.3 -lflapack 
-lfblas -lparmetis -lmetis -lX11 -lpthread -lssl -lcrypto -lmpi_f90 -lmpi_f77 
-lm -lifport -lifcoremt -lm -lmpi_cxx -ldl 
-Wl,-rpath,/opt/soft/openmpi16-1.6.5-intel12-1/lib 
-L/opt/soft/openmpi16-1.6.5-intel12-1/lib -lmpi -lnuma -lrt -lnsl -lutil 
-Wl,-rpath,/opt/soft/openmpi16-1.6.5-intel12-1/lib 
-L/opt/soft/openmpi16-1.6.5-intel12-1/lib 
-Wl,-rpath,/opt/intel/composer_xe_2011_sp1.12.361/compiler/lib/intel64 
-L/opt/intel/composer_xe_2011_sp1.12.361/compiler/lib/intel64 
-Wl,-rpath,/opt/intel/composer_xe_2011_sp1.12.361/compiler/lib/intel64 
-L/opt/intel/composer_xe_2011_sp1.12.361/compiler/lib/intel64 
-Wl,-rpath,/opt/intel/composer_xe_2011_sp1.12.361/compiler/lib/intel64 
-L/opt/intel/composer_xe_2011_sp1.12.361/compiler/lib/intel64 
-Wl,-rpath,/usr/lib/gcc/x86_64-redhat-linux/4.1.2 
-L/usr/lib/gcc/x86_64-redhat-linux/4.1.2 
-Wl,-rpath,/opt/intel/composer_xe_2011_sp1.12.361/compiler/lib/intel64 
-L/opt/intel/composer_xe_2011_sp1.12.361/compiler/lib/intel64 -limf -lsvml 
-lipgo -ldecimal -liomp5 -lcilkrts -lstdc++ -lgcc_s -lirc -lpthread -lirc_s 
-Wl,-rpath,/opt/soft/openmpi16-1.6.5-intel12-1/lib 
-L/opt/soft/openmpi16-1.6.5-intel12-1/lib 
-Wl,-rpath,/opt/intel/composer_xe_2011_sp1.12.361/compiler/lib/intel64 
-L/opt/intel/composer_xe_2011_sp1.12.361/compiler/lib/intel64 
-Wl,-rpath,/opt/intel/composer_xe_2011_sp1.12.361/compiler/lib/intel64 
-L/opt/intel/composer_xe_2011_sp1.12.361/compiler/lib/intel64 
-Wl,-rpath,/opt/intel/composer_xe_2011_sp1.12.361/compiler/lib/intel64 
-L/opt/intel/composer_xe_2011_sp1.12.361/compiler/lib/intel64 
-Wl,-rpath,/usr/lib/gcc/x86_64-redhat-linux/4.1.2 
-L/usr/lib/gcc/x86_64-redhat-linux/4.1.2 
-Wl,-rpath,/opt/intel/composer_xe_2011_sp1.12.361/compiler/lib/intel64 
-L/opt/intel/composer_xe_2011_sp1.12.361/compiler/lib/intel64 -ldl  
-----------------------------------------

Running /home/xikai/projects/infinitepoisson/boundary_integral-opt -in /home/xikai/projects/infinitepoisson/sphere_tet_approx_size0_1.e -r 3 -eps 1e-6 -log_summary


Volume mesh info, 3D spatial, 3D element
 Mesh Information:
  elem_dimensions()={3}
  spatial_dimension()=3
  n_nodes()=5077
    n_local_nodes()=238
  n_elem()=25963
    n_local_elem()=830
    n_active_elem()=25963
  n_subdomains()=1
  n_partitions()=32
  n_processors()=32
  n_threads()=1
  processor_id()=0


Boundary mesh info, 3D spatial, 2D element
 Mesh Information:
  elem_dimensions()={2}
  spatial_dimension()=3
  n_nodes()=1487
    n_local_nodes()=74
  n_elem()=2970
    n_local_elem()=114
    n_active_elem()=2970
  n_subdomains()=1
  n_partitions()=32
  n_processors()=32
  n_threads()=1
  processor_id()=0


Boundary mesh info, 3D spatial, 2D element. Uniformly refine 3 times
 Mesh Information:
  elem_dimensions()={2}
  spatial_dimension()=3
  n_nodes()=95042
    n_local_nodes()=3777
  n_elem()=252450
    n_local_elem()=9690
    n_active_elem()=190080
  n_subdomains()=1
  n_partitions()=32
  n_processors()=32
  n_threads()=1
  processor_id()=0


 -------------------------------------------------------------------------
| Processor id:   0                                                       |
| Num Processors: 32                                                      |
| Time:           Wed Jun  3 19:40:30 2015                                |
| OS:             Linux                                                   |
| HostName:       n218                                                    |
| OS Release:     2.6.18-348.1.1.el5                                      |
| OS Version:     #1 SMP Tue Jan 22 16:19:19 EST 2013                     |
| Machine:        x86_64                                                  |
| Username:       xikai                                                   |
| Configuration:  ../configure  '--with-methods=opt oprof dbg'            |
|  '--prefix=/home/xikai/projects/moose/scripts/../libmesh/installed'     |
|  '--enable-silent-rules'                                                |
|  '--enable-unique-id'                                                   |
|  '--disable-warnings'                                                   |
|  '--disable-cxx11'                                                      |
|  '--enable-unique-ptr'                                                  |
|  '--enable-openmp'                                                      |
|  'METHODS=opt oprof dbg'                                                |
|  'PETSC_DIR=/home/xikai/projects/moose/petsc/openmpi_petsc-3.5.2/icc-opt'|
 -------------------------------------------------------------------------
 
------------------------------------------------------------------------------------------------------------
| Boundary Integration Performance: Alive time=1657.57, Active time=1534.82                                  |
------------------------------------------------------------------------------------------------------------
| Event                         nCalls     Total Time  Avg Time    Total Time  Avg Time    % of Active Time  |
|                                          w/o Sub     w/o Sub     With Sub    With Sub    w/o S    With S   |
|------------------------------------------------------------------------------------------------------------|
|                                                                                                            |
| Phi1 initialization           1          0.0063      0.006283    0.0063      0.006283    0.00     0.00     |
| integration                   1          1534.8146   1534.814642 1534.8146   1534.814642 100.00   100.00   |
------------------------------------------------------------------------------------------------------------
| Totals:                       2          1534.8209                                       100.00            |
------------------------------------------------------------------------------------------------------------

************************************************************************************************************************
***             WIDEN YOUR WINDOW TO 120 CHARACTERS.  Use 'enscript -r -fCourier9' to print this document            ***
************************************************************************************************************************

---------------------------------------------- PETSc Performance Summary: ----------------------------------------------

/home/xikai/projects/infinitepoisson/boundary_integral-opt on a arch-linux2-c-opt named n218 with 32 processors, by xikai Wed Jun  3 19:40:31 2015
Using Petsc Release Version 3.5.2, Sep, 08, 2014 

                         Max       Max/Min        Avg      Total 
Time (sec):           1.662e+03      1.00000   1.662e+03
Objects:              2.500e+01      1.00000   2.500e+01
Flops:                7.720e+03      3.90293   5.940e+03  1.901e+05
Flops/sec:            4.644e+00      3.90293   3.573e+00  1.143e+02
MPI Messages:         1.650e+02      1.16197   1.559e+02  4.988e+03
MPI Message Lengths:  1.953e+06      3.41823   9.194e+03  4.586e+07
MPI Reductions:       4.200e+01      1.00000

Flop counting convention: 1 flop = 1 real number operation of type (multiply/divide/add/subtract)
                            e.g., VecAXPY() for real vectors of length N --> 2N flops
                            and VecAXPY() for complex vectors of length N --> 8N flops

Summary of Stages:   ----- Time ------  ----- Flops -----  --- Messages ---  -- Message Lengths --  -- Reductions --
                        Avg     %Total     Avg     %Total   counts   %Total     Avg         %Total   counts   %Total 
 0:      Main Stage: 1.6623e+03 100.0%  1.9008e+05 100.0%  4.988e+03 100.0%  9.194e+03      100.0%  4.100e+01  97.6% 

------------------------------------------------------------------------------------------------------------------------
See the 'Profiling' chapter of the users' manual for details on interpreting output.
Phase summary info:
   Count: number of times phase was executed
   Time and Flops: Max - maximum over all processors
                   Ratio - ratio of maximum to minimum over all processors
   Mess: number of messages sent
   Avg. len: average message length (bytes)
   Reduct: number of global reductions
   Global: entire computation
   Stage: stages of a computation. Set stages with PetscLogStagePush() and PetscLogStagePop().
      %T - percent time in this phase         %F - percent flops in this phase
      %M - percent messages in this phase     %L - percent message lengths in this phase
      %R - percent reductions in this phase
   Total Mflop/s: 10e-6 * (sum of flops over all processors)/(max time over all processors)
------------------------------------------------------------------------------------------------------------------------
Event                Count      Time (sec)     Flops                             --- Global ---  --- Stage ---   Total
                   Max Ratio  Max     Ratio   Max  Ratio  Mess   Avg len Reduct  %T %F %M %L %R  %T %F %M %L %R Mflop/s
------------------------------------------------------------------------------------------------------------------------

--- Event Stage 0: Main Stage

VecCopy                1 1.0 2.7704e-04 1.7 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecSet                 9 1.0 1.0738e-03 2.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecAssemblyBegin       5 1.0 8.2777e-02 1.8 0.00e+00 0.0 3.8e+03 1.2e+04 1.5e+01  0  0 76 98 36   0  0 76 98 37     0
VecAssemblyEnd         5 1.0 1.4602e-0275.7 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecScatterBegin        2 1.0 1.9009e-02 3.4 0.00e+00 0.0 1.8e+02 5.1e+03 1.0e+00  0  0  4  2  2   0  0  4  2  2     0
VecScatterEnd          1 1.0 1.9073e-05 1.9 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatZeroEntries         2 1.0 7.3814e-04 1.5 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
------------------------------------------------------------------------------------------------------------------------

Memory usage is given in bytes:

Object Type          Creations   Destructions     Memory  Descendants' Mem.
Reports information only for process 0.

--- Event Stage 0: Main Stage

              Vector    11             11      1851312     0
      Vector Scatter     3              3         2820     0
           Index Set     5              5         6772     0
   IS L to G Mapping     2              2        35172     0
              Matrix     3              3        65348     0
              Viewer     1              0            0     0
========================================================================================================================
Average time to get PetscTime(): 1.19209e-07
Average time for MPI_Barrier(): 4.26292e-05
Average time for zero size MPI_Send(): 9.21637e-06
#PETSc Option Table entries:
-eps 1e-6
-in /home/xikai/projects/infinitepoisson/sphere_tet_approx_size0_1.e
-log_summary
-r 3
#End of PETSc Option Table entries
Compiled without FORTRAN kernels
Compiled with full precision matrices (default)
sizeof(short) 2 sizeof(int) 4 sizeof(long) 8 sizeof(void*) 8 sizeof(PetscScalar) 8 sizeof(PetscInt) 4
Configure options: 
--prefix=/home/xikai/projects/moose/petsc/openmpi_petsc-3.5.2/icc-opt 
--download-hypre=1 --with-debugging=no --with-pic=1 --with-shared-libraries=1 
--with-cc=mpicc --with-cxx=mpicxx --with-fc=mpif90 --download-fblaslapack=1 
--download-metis=1 --download-parmetis=1 --download-superlu_dist=1 CC=mpicc 
CXX=mpicxx FC=mpif90 F77=mpif77 F90=mpif90 CFLAGS="-fPIC -fopenmp" 
CXXFLAGS="-fPIC -fopenmp" FFLAGS="-fPIC -fopenmp" FCFLAGS="-fPIC -fopenmp" 
F90FLAGS="-fPIC -fopenmp" F77FLAGS="-fPIC -fopenmp" 
PETSC_DIR=/home/xikai/projects/src/petsc-3.5.2
-----------------------------------------
Libraries compiled on Mon Jun  1 22:05:10 2015 on login6 
Machine characteristics: Linux-2.6.18-348.1.1.el5-x86_64-with-redhat-5.11-Final
Using PETSc directory: /home/xikai/projects/src/petsc-3.5.2
Using PETSc arch: arch-linux2-c-opt
-----------------------------------------

Using C compiler: mpicc -fPIC -fopenmp -fPIC -O3  ${COPTFLAGS} ${CFLAGS}
Using Fortran compiler: mpif90 -fPIC -fopenmp -fPIC -O3   ${FOPTFLAGS} ${FFLAGS} 
-----------------------------------------

Using include paths: 
-I/home/xikai/projects/src/petsc-3.5.2/arch-linux2-c-opt/include 
-I/home/xikai/projects/src/petsc-3.5.2/include 
-I/home/xikai/projects/src/petsc-3.5.2/include 
-I/home/xikai/projects/src/petsc-3.5.2/arch-linux2-c-opt/include 
-I/opt/soft/openmpi16-1.6.5-intel12-1/include
-----------------------------------------

Using C linker: mpicc
Using Fortran linker: mpif90
Using libraries: 
-Wl,-rpath,/home/xikai/projects/src/petsc-3.5.2/arch-linux2-c-opt/lib 
-L/home/xikai/projects/src/petsc-3.5.2/arch-linux2-c-opt/lib -lpetsc 
-Wl,-rpath,/home/xikai/projects/src/petsc-3.5.2/arch-linux2-c-opt/lib 
-L/home/xikai/projects/src/petsc-3.5.2/arch-linux2-c-opt/lib -lHYPRE 
-Wl,-rpath,/opt/soft/openmpi16-1.6.5-intel12-1/lib 
-L/opt/soft/openmpi16-1.6.5-intel12-1/lib 
-Wl,-rpath,/opt/intel/composer_xe_2011_sp1.12.361/compiler/lib/intel64 
-L/opt/intel/composer_xe_2011_sp1.12.361/compiler/lib/intel64 
-Wl,-rpath,/usr/lib/gcc/x86_64-redhat-linux/4.1.2 
-L/usr/lib/gcc/x86_64-redhat-linux/4.1.2 -lmpi_cxx -lsuperlu_dist_3.3 -lflapack 
-lfblas -lparmetis -lmetis -lX11 -lpthread -lssl -lcrypto -lmpi_f90 -lmpi_f77 
-lm -lifport -lifcoremt -lm -lmpi_cxx -ldl 
-Wl,-rpath,/opt/soft/openmpi16-1.6.5-intel12-1/lib 
-L/opt/soft/openmpi16-1.6.5-intel12-1/lib -lmpi -lnuma -lrt -lnsl -lutil 
-Wl,-rpath,/opt/soft/openmpi16-1.6.5-intel12-1/lib 
-L/opt/soft/openmpi16-1.6.5-intel12-1/lib 
-Wl,-rpath,/opt/intel/composer_xe_2011_sp1.12.361/compiler/lib/intel64 
-L/opt/intel/composer_xe_2011_sp1.12.361/compiler/lib/intel64 
-Wl,-rpath,/opt/intel/composer_xe_2011_sp1.12.361/compiler/lib/intel64 
-L/opt/intel/composer_xe_2011_sp1.12.361/compiler/lib/intel64 
-Wl,-rpath,/opt/intel/composer_xe_2011_sp1.12.361/compiler/lib/intel64 
-L/opt/intel/composer_xe_2011_sp1.12.361/compiler/lib/intel64 
-Wl,-rpath,/usr/lib/gcc/x86_64-redhat-linux/4.1.2 
-L/usr/lib/gcc/x86_64-redhat-linux/4.1.2 
-Wl,-rpath,/opt/intel/composer_xe_2011_sp1.12.361/compiler/lib/intel64 
-L/opt/intel/composer_xe_2011_sp1.12.361/compiler/lib/intel64 -limf -lsvml 
-lipgo -ldecimal -liomp5 -lcilkrts -lstdc++ -lgcc_s -lirc -lpthread -lirc_s 
-Wl,-rpath,/opt/soft/openmpi16-1.6.5-intel12-1/lib 
-L/opt/soft/openmpi16-1.6.5-intel12-1/lib 
-Wl,-rpath,/opt/intel/composer_xe_2011_sp1.12.361/compiler/lib/intel64 
-L/opt/intel/composer_xe_2011_sp1.12.361/compiler/lib/intel64 
-Wl,-rpath,/opt/intel/composer_xe_2011_sp1.12.361/compiler/lib/intel64 
-L/opt/intel/composer_xe_2011_sp1.12.361/compiler/lib/intel64 
-Wl,-rpath,/opt/intel/composer_xe_2011_sp1.12.361/compiler/lib/intel64 
-L/opt/intel/composer_xe_2011_sp1.12.361/compiler/lib/intel64 
-Wl,-rpath,/usr/lib/gcc/x86_64-redhat-linux/4.1.2 
-L/usr/lib/gcc/x86_64-redhat-linux/4.1.2 
-Wl,-rpath,/opt/intel/composer_xe_2011_sp1.12.361/compiler/lib/intel64 
-L/opt/intel/composer_xe_2011_sp1.12.361/compiler/lib/intel64 -ldl  
-----------------------------------------

Running /home/xikai/projects/infinitepoisson/boundary_integral-opt -in /home/xikai/projects/infinitepoisson/sphere_tet_approx_size0_1.e -r 3 -eps 1e-6 -log_summary


Volume mesh info, 3D spatial, 3D element
 Mesh Information:
  elem_dimensions()={3}
  spatial_dimension()=3
  n_nodes()=5077
    n_local_nodes()=238
  n_elem()=25963
    n_local_elem()=830
    n_active_elem()=25963
  n_subdomains()=1
  n_partitions()=32
  n_processors()=32
  n_threads()=1
  processor_id()=0


Boundary mesh info, 3D spatial, 2D element
 Mesh Information:
  elem_dimensions()={2}
  spatial_dimension()=3
  n_nodes()=1487
    n_local_nodes()=74
  n_elem()=2970
    n_local_elem()=114
    n_active_elem()=2970
  n_subdomains()=1
  n_partitions()=32
  n_processors()=32
  n_threads()=1
  processor_id()=0


Boundary mesh info, 3D spatial, 2D element. Uniformly refine 3 times
 Mesh Information:
  elem_dimensions()={2}
  spatial_dimension()=3
  n_nodes()=95042
    n_local_nodes()=3777
  n_elem()=252450
    n_local_elem()=9690
    n_active_elem()=190080
  n_subdomains()=1
  n_partitions()=32
  n_processors()=32
  n_threads()=1
  processor_id()=0


 -------------------------------------------------------------------------
| Processor id:   0                                                       |
| Num Processors: 32                                                      |
| Time:           Thu Jun  4 11:21:31 2015                                |
| OS:             Linux                                                   |
| HostName:       n315                                                    |
| OS Release:     2.6.18-348.1.1.el5                                      |
| OS Version:     #1 SMP Tue Jan 22 16:19:19 EST 2013                     |
| Machine:        x86_64                                                  |
| Username:       xikai                                                   |
| Configuration:  ../configure  '--with-methods=opt oprof dbg'            |
|  '--prefix=/home/xikai/projects/moose/scripts/../libmesh/installed'     |
|  '--enable-silent-rules'                                                |
|  '--enable-unique-id'                                                   |
|  '--disable-warnings'                                                   |
|  '--disable-cxx11'                                                      |
|  '--enable-unique-ptr'                                                  |
|  '--enable-openmp'                                                      |
|  'METHODS=opt oprof dbg'                                                |
|  'PETSC_DIR=/home/xikai/projects/moose/petsc/openmpi_petsc-3.5.2/icc-opt'|
 -------------------------------------------------------------------------
 
------------------------------------------------------------------------------------------------------------
| Boundary Integration Performance: Alive time=1723.31, Active time=1668.43                                  |
------------------------------------------------------------------------------------------------------------
| Event                         nCalls     Total Time  Avg Time    Total Time  Avg Time    % of Active Time  |
|                                          w/o Sub     w/o Sub     With Sub    With Sub    w/o S    With S   |
|------------------------------------------------------------------------------------------------------------|
|                                                                                                            |
| Phi1 initialization           1          0.0065      0.006457    0.0065      0.006457    0.00     0.00     |
| integration                   1          1668.4258   1668.425758 1668.4258   1668.425758 100.00   100.00   |
------------------------------------------------------------------------------------------------------------
| Totals:                       2          1668.4322                                       100.00            |
------------------------------------------------------------------------------------------------------------

************************************************************************************************************************
***             WIDEN YOUR WINDOW TO 120 CHARACTERS.  Use 'enscript -r -fCourier9' to print this document            ***
************************************************************************************************************************

---------------------------------------------- PETSc Performance Summary: ----------------------------------------------

/home/xikai/projects/infinitepoisson/boundary_integral-opt on a arch-linux2-c-opt named n315 with 32 processors, by xikai Thu Jun  4 11:21:31 2015
Using Petsc Release Version 3.5.2, Sep, 08, 2014 

                         Max       Max/Min        Avg      Total 
Time (sec):           1.728e+03      1.00000   1.728e+03
Objects:              2.500e+01      1.00000   2.500e+01
Flops:                7.720e+03      3.90293   5.940e+03  1.901e+05
Flops/sec:            4.467e+00      3.90293   3.437e+00  1.100e+02
MPI Messages:         1.650e+02      1.16197   1.559e+02  4.988e+03
MPI Message Lengths:  1.953e+06      3.41823   9.194e+03  4.586e+07
MPI Reductions:       4.200e+01      1.00000

Flop counting convention: 1 flop = 1 real number operation of type (multiply/divide/add/subtract)
                            e.g., VecAXPY() for real vectors of length N --> 2N flops
                            and VecAXPY() for complex vectors of length N --> 8N flops

Summary of Stages:   ----- Time ------  ----- Flops -----  --- Messages ---  -- Message Lengths --  -- Reductions --
                        Avg     %Total     Avg     %Total   counts   %Total     Avg         %Total   counts   %Total 
 0:      Main Stage: 1.7284e+03 100.0%  1.9008e+05 100.0%  4.988e+03 100.0%  9.194e+03      100.0%  4.100e+01  97.6% 

------------------------------------------------------------------------------------------------------------------------
See the 'Profiling' chapter of the users' manual for details on interpreting output.
Phase summary info:
   Count: number of times phase was executed
   Time and Flops: Max - maximum over all processors
                   Ratio - ratio of maximum to minimum over all processors
   Mess: number of messages sent
   Avg. len: average message length (bytes)
   Reduct: number of global reductions
   Global: entire computation
   Stage: stages of a computation. Set stages with PetscLogStagePush() and PetscLogStagePop().
      %T - percent time in this phase         %F - percent flops in this phase
      %M - percent messages in this phase     %L - percent message lengths in this phase
      %R - percent reductions in this phase
   Total Mflop/s: 10e-6 * (sum of flops over all processors)/(max time over all processors)
------------------------------------------------------------------------------------------------------------------------
Event                Count      Time (sec)     Flops                             --- Global ---  --- Stage ---   Total
                   Max Ratio  Max     Ratio   Max  Ratio  Mess   Avg len Reduct  %T %F %M %L %R  %T %F %M %L %R Mflop/s
------------------------------------------------------------------------------------------------------------------------

--- Event Stage 0: Main Stage

VecCopy                1 1.0 3.0184e-04 1.7 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecSet                 9 1.0 1.1532e-03 2.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecAssemblyBegin       5 1.0 1.0217e-01 1.4 0.00e+00 0.0 3.8e+03 1.2e+04 1.5e+01  0  0 76 98 36   0  0 76 98 37     0
VecAssemblyEnd         5 1.0 1.5508e-0297.8 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecScatterBegin        2 1.0 2.8235e-02 7.5 0.00e+00 0.0 1.8e+02 5.1e+03 1.0e+00  0  0  4  2  2   0  0  4  2  2     0
VecScatterEnd          1 1.0 1.7881e-05 2.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatZeroEntries         2 1.0 8.2612e-04 1.7 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
------------------------------------------------------------------------------------------------------------------------

Memory usage is given in bytes:

Object Type          Creations   Destructions     Memory  Descendants' Mem.
Reports information only for process 0.

--- Event Stage 0: Main Stage

              Vector    11             11      1851312     0
      Vector Scatter     3              3         2820     0
           Index Set     5              5         6772     0
   IS L to G Mapping     2              2        35172     0
              Matrix     3              3        65348     0
              Viewer     1              0            0     0
========================================================================================================================
Average time to get PetscTime(): 9.53674e-08
Average time for MPI_Barrier(): 2.43664e-05
Average time for zero size MPI_Send(): 9.15676e-06
#PETSc Option Table entries:
-eps 1e-6
-in /home/xikai/projects/infinitepoisson/sphere_tet_approx_size0_1.e
-log_summary
-r 3
#End of PETSc Option Table entries
Compiled without FORTRAN kernels
Compiled with full precision matrices (default)
sizeof(short) 2 sizeof(int) 4 sizeof(long) 8 sizeof(void*) 8 sizeof(PetscScalar) 8 sizeof(PetscInt) 4
Configure options: 
--prefix=/home/xikai/projects/moose/petsc/openmpi_petsc-3.5.2/icc-opt 
--download-hypre=1 --with-debugging=no --with-pic=1 --with-shared-libraries=1 
--with-cc=mpicc --with-cxx=mpicxx --with-fc=mpif90 --download-fblaslapack=1 
--download-metis=1 --download-parmetis=1 --download-superlu_dist=1 CC=mpicc 
CXX=mpicxx FC=mpif90 F77=mpif77 F90=mpif90 CFLAGS="-fPIC -fopenmp" 
CXXFLAGS="-fPIC -fopenmp" FFLAGS="-fPIC -fopenmp" FCFLAGS="-fPIC -fopenmp" 
F90FLAGS="-fPIC -fopenmp" F77FLAGS="-fPIC -fopenmp" 
PETSC_DIR=/home/xikai/projects/src/petsc-3.5.2
-----------------------------------------
Libraries compiled on Mon Jun  1 22:05:10 2015 on login6 
Machine characteristics: Linux-2.6.18-348.1.1.el5-x86_64-with-redhat-5.11-Final
Using PETSc directory: /home/xikai/projects/src/petsc-3.5.2
Using PETSc arch: arch-linux2-c-opt
-----------------------------------------

Using C compiler: mpicc -fPIC -fopenmp -fPIC -O3  ${COPTFLAGS} ${CFLAGS}
Using Fortran compiler: mpif90 -fPIC -fopenmp -fPIC -O3   ${FOPTFLAGS} ${FFLAGS} 
-----------------------------------------

Using include paths: 
-I/home/xikai/projects/src/petsc-3.5.2/arch-linux2-c-opt/include 
-I/home/xikai/projects/src/petsc-3.5.2/include 
-I/home/xikai/projects/src/petsc-3.5.2/include 
-I/home/xikai/projects/src/petsc-3.5.2/arch-linux2-c-opt/include 
-I/opt/soft/openmpi16-1.6.5-intel12-1/include
-----------------------------------------

Using C linker: mpicc
Using Fortran linker: mpif90
Using libraries: 
-Wl,-rpath,/home/xikai/projects/src/petsc-3.5.2/arch-linux2-c-opt/lib 
-L/home/xikai/projects/src/petsc-3.5.2/arch-linux2-c-opt/lib -lpetsc 
-Wl,-rpath,/home/xikai/projects/src/petsc-3.5.2/arch-linux2-c-opt/lib 
-L/home/xikai/projects/src/petsc-3.5.2/arch-linux2-c-opt/lib -lHYPRE 
-Wl,-rpath,/opt/soft/openmpi16-1.6.5-intel12-1/lib 
-L/opt/soft/openmpi16-1.6.5-intel12-1/lib 
-Wl,-rpath,/opt/intel/composer_xe_2011_sp1.12.361/compiler/lib/intel64 
-L/opt/intel/composer_xe_2011_sp1.12.361/compiler/lib/intel64 
-Wl,-rpath,/usr/lib/gcc/x86_64-redhat-linux/4.1.2 
-L/usr/lib/gcc/x86_64-redhat-linux/4.1.2 -lmpi_cxx -lsuperlu_dist_3.3 -lflapack 
-lfblas -lparmetis -lmetis -lX11 -lpthread -lssl -lcrypto -lmpi_f90 -lmpi_f77 
-lm -lifport -lifcoremt -lm -lmpi_cxx -ldl 
-Wl,-rpath,/opt/soft/openmpi16-1.6.5-intel12-1/lib 
-L/opt/soft/openmpi16-1.6.5-intel12-1/lib -lmpi -lnuma -lrt -lnsl -lutil 
-Wl,-rpath,/opt/soft/openmpi16-1.6.5-intel12-1/lib 
-L/opt/soft/openmpi16-1.6.5-intel12-1/lib 
-Wl,-rpath,/opt/intel/composer_xe_2011_sp1.12.361/compiler/lib/intel64 
-L/opt/intel/composer_xe_2011_sp1.12.361/compiler/lib/intel64 
-Wl,-rpath,/opt/intel/composer_xe_2011_sp1.12.361/compiler/lib/intel64 
-L/opt/intel/composer_xe_2011_sp1.12.361/compiler/lib/intel64 
-Wl,-rpath,/opt/intel/composer_xe_2011_sp1.12.361/compiler/lib/intel64 
-L/opt/intel/composer_xe_2011_sp1.12.361/compiler/lib/intel64 
-Wl,-rpath,/usr/lib/gcc/x86_64-redhat-linux/4.1.2 
-L/usr/lib/gcc/x86_64-redhat-linux/4.1.2 
-Wl,-rpath,/opt/intel/composer_xe_2011_sp1.12.361/compiler/lib/intel64 
-L/opt/intel/composer_xe_2011_sp1.12.361/compiler/lib/intel64 -limf -lsvml 
-lipgo -ldecimal -liomp5 -lcilkrts -lstdc++ -lgcc_s -lirc -lpthread -lirc_s 
-Wl,-rpath,/opt/soft/openmpi16-1.6.5-intel12-1/lib 
-L/opt/soft/openmpi16-1.6.5-intel12-1/lib 
-Wl,-rpath,/opt/intel/composer_xe_2011_sp1.12.361/compiler/lib/intel64 
-L/opt/intel/composer_xe_2011_sp1.12.361/compiler/lib/intel64 
-Wl,-rpath,/opt/intel/composer_xe_2011_sp1.12.361/compiler/lib/intel64 
-L/opt/intel/composer_xe_2011_sp1.12.361/compiler/lib/intel64 
-Wl,-rpath,/opt/intel/composer_xe_2011_sp1.12.361/compiler/lib/intel64 
-L/opt/intel/composer_xe_2011_sp1.12.361/compiler/lib/intel64 
-Wl,-rpath,/usr/lib/gcc/x86_64-redhat-linux/4.1.2 
-L/usr/lib/gcc/x86_64-redhat-linux/4.1.2 
-Wl,-rpath,/opt/intel/composer_xe_2011_sp1.12.361/compiler/lib/intel64 
-L/opt/intel/composer_xe_2011_sp1.12.361/compiler/lib/intel64 -ldl  
-----------------------------------------

Running /home/xikai/projects/infinitepoisson/boundary_integral-opt -in /home/xikai/projects/infinitepoisson/sphere_tet_approx_size0_1.e -r 3 -eps 1e-6 -log_summary


Volume mesh info, 3D spatial, 3D element
 Mesh Information:
  elem_dimensions()={3}
  spatial_dimension()=3
  n_nodes()=5077
    n_local_nodes()=194
  n_elem()=25963
    n_local_elem()=648
    n_active_elem()=25963
  n_subdomains()=1
  n_partitions()=40
  n_processors()=40
  n_threads()=1
  processor_id()=0


Boundary mesh info, 3D spatial, 2D element
 Mesh Information:
  elem_dimensions()={2}
  spatial_dimension()=3
  n_nodes()=1487
    n_local_nodes()=66
  n_elem()=2970
    n_local_elem()=102
    n_active_elem()=2970
  n_subdomains()=1
  n_partitions()=40
  n_processors()=40
  n_threads()=1
  processor_id()=0


Boundary mesh info, 3D spatial, 2D element. Uniformly refine 3 times
 Mesh Information:
  elem_dimensions()={2}
  spatial_dimension()=3
  n_nodes()=95042
    n_local_nodes()=3377
  n_elem()=252450
    n_local_elem()=8670
    n_active_elem()=190080
  n_subdomains()=1
  n_partitions()=40
  n_processors()=40
  n_threads()=1
  processor_id()=0


 -------------------------------------------------------------------------
| Processor id:   0                                                       |
| Num Processors: 40                                                      |
| Time:           Wed Jun  3 19:43:49 2015                                |
| OS:             Linux                                                   |
| HostName:       n342                                                    |
| OS Release:     2.6.18-348.1.1.el5                                      |
| OS Version:     #1 SMP Tue Jan 22 16:19:19 EST 2013                     |
| Machine:        x86_64                                                  |
| Username:       xikai                                                   |
| Configuration:  ../configure  '--with-methods=opt oprof dbg'            |
|  '--prefix=/home/xikai/projects/moose/scripts/../libmesh/installed'     |
|  '--enable-silent-rules'                                                |
|  '--enable-unique-id'                                                   |
|  '--disable-warnings'                                                   |
|  '--disable-cxx11'                                                      |
|  '--enable-unique-ptr'                                                  |
|  '--enable-openmp'                                                      |
|  'METHODS=opt oprof dbg'                                                |
|  'PETSC_DIR=/home/xikai/projects/moose/petsc/openmpi_petsc-3.5.2/icc-opt'|
 -------------------------------------------------------------------------
 
------------------------------------------------------------------------------------------------------------
| Boundary Integration Performance: Alive time=1636.71, Active time=1360.56                                  |
------------------------------------------------------------------------------------------------------------
| Event                         nCalls     Total Time  Avg Time    Total Time  Avg Time    % of Active Time  |
|                                          w/o Sub     w/o Sub     With Sub    With Sub    w/o S    With S   |
|------------------------------------------------------------------------------------------------------------|
|                                                                                                            |
| Phi1 initialization           1          0.0056      0.005641    0.0056      0.005641    0.00     0.00     |
| integration                   1          1360.5583   1360.558321 1360.5583   1360.558321 100.00   100.00   |
------------------------------------------------------------------------------------------------------------
| Totals:                       2          1360.5640                                       100.00            |
------------------------------------------------------------------------------------------------------------

************************************************************************************************************************
***             WIDEN YOUR WINDOW TO 120 CHARACTERS.  Use 'enscript -r -fCourier9' to print this document            ***
************************************************************************************************************************

---------------------------------------------- PETSc Performance Summary: ----------------------------------------------

/home/xikai/projects/infinitepoisson/boundary_integral-opt on a arch-linux2-c-opt named n342 with 40 processors, by xikai Wed Jun  3 19:43:49 2015
Using Petsc Release Version 3.5.2, Sep, 08, 2014 

                         Max       Max/Min        Avg      Total 
Time (sec):           1.642e+03      1.00004   1.642e+03
Objects:              2.500e+01      1.00000   2.500e+01
Flops:                7.768e+03      0.00000   4.752e+03  1.901e+05
Flops/sec:            4.732e+00      0.00000   2.895e+00  1.158e+02
MPI Messages:         1.860e+02      4.22727   1.617e+02  6.466e+03
MPI Message Lengths:  2.292e+06   1514.08454   8.157e+03  5.274e+07
MPI Reductions:       4.200e+01      1.00000

Flop counting convention: 1 flop = 1 real number operation of type (multiply/divide/add/subtract)
                            e.g., VecAXPY() for real vectors of length N --> 2N flops
                            and VecAXPY() for complex vectors of length N --> 8N flops

Summary of Stages:   ----- Time ------  ----- Flops -----  --- Messages ---  -- Message Lengths --  -- Reductions --
                        Avg     %Total     Avg     %Total   counts   %Total     Avg         %Total   counts   %Total 
 0:      Main Stage: 1.6418e+03 100.0%  1.9008e+05 100.0%  6.466e+03 100.0%  8.157e+03      100.0%  4.100e+01  97.6% 

------------------------------------------------------------------------------------------------------------------------
See the 'Profiling' chapter of the users' manual for details on interpreting output.
Phase summary info:
   Count: number of times phase was executed
   Time and Flops: Max - maximum over all processors
                   Ratio - ratio of maximum to minimum over all processors
   Mess: number of messages sent
   Avg. len: average message length (bytes)
   Reduct: number of global reductions
   Global: entire computation
   Stage: stages of a computation. Set stages with PetscLogStagePush() and PetscLogStagePop().
      %T - percent time in this phase         %F - percent flops in this phase
      %M - percent messages in this phase     %L - percent message lengths in this phase
      %R - percent reductions in this phase
   Total Mflop/s: 10e-6 * (sum of flops over all processors)/(max time over all processors)
------------------------------------------------------------------------------------------------------------------------
Event                Count      Time (sec)     Flops                             --- Global ---  --- Stage ---   Total
                   Max Ratio  Max     Ratio   Max  Ratio  Mess   Avg len Reduct  %T %F %M %L %R  %T %F %M %L %R Mflop/s
------------------------------------------------------------------------------------------------------------------------

--- Event Stage 0: Main Stage

VecCopy                1 1.0 2.4796e-0418.9 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecSet                 9 1.0 9.1410e-0412.7 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecAssemblyBegin       5 1.0 7.8920e-02 1.5 0.00e+00 0.0 4.9e+03 1.0e+04 1.5e+01  0  0 76 98 36   0  0 76 98 37     0
VecAssemblyEnd         5 1.0 1.3820e-021207.6 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecScatterBegin        2 1.0 2.4763e-02310.0 0.00e+00 0.0 2.2e+02 4.4e+03 1.0e+00  0  0  3  2  2   0  0  3  2  2     0
VecScatterEnd          1 1.0 3.0112e-04 0.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatZeroEntries         2 1.0 7.4911e-0417.9 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
------------------------------------------------------------------------------------------------------------------------

Memory usage is given in bytes:

Object Type          Creations   Destructions     Memory  Descendants' Mem.
Reports information only for process 0.

--- Event Stage 0: Main Stage

              Vector    11             11      1817712     0
      Vector Scatter     3              3         2820     0
           Index Set     5              5         6500     0
   IS L to G Mapping     2              2        31524     0
              Matrix     3              3        55256     0
              Viewer     1              0            0     0
========================================================================================================================
Average time to get PetscTime(): 9.53674e-08
Average time for MPI_Barrier(): 0.000213194
Average time for zero size MPI_Send(): 7.30157e-06
#PETSc Option Table entries:
-eps 1e-6
-in /home/xikai/projects/infinitepoisson/sphere_tet_approx_size0_1.e
-log_summary
-r 3
#End of PETSc Option Table entries
Compiled without FORTRAN kernels
Compiled with full precision matrices (default)
sizeof(short) 2 sizeof(int) 4 sizeof(long) 8 sizeof(void*) 8 sizeof(PetscScalar) 8 sizeof(PetscInt) 4
Configure options: 
--prefix=/home/xikai/projects/moose/petsc/openmpi_petsc-3.5.2/icc-opt 
--download-hypre=1 --with-debugging=no --with-pic=1 --with-shared-libraries=1 
--with-cc=mpicc --with-cxx=mpicxx --with-fc=mpif90 --download-fblaslapack=1 
--download-metis=1 --download-parmetis=1 --download-superlu_dist=1 CC=mpicc 
CXX=mpicxx FC=mpif90 F77=mpif77 F90=mpif90 CFLAGS="-fPIC -fopenmp" 
CXXFLAGS="-fPIC -fopenmp" FFLAGS="-fPIC -fopenmp" FCFLAGS="-fPIC -fopenmp" 
F90FLAGS="-fPIC -fopenmp" F77FLAGS="-fPIC -fopenmp" 
PETSC_DIR=/home/xikai/projects/src/petsc-3.5.2
-----------------------------------------
Libraries compiled on Mon Jun  1 22:05:10 2015 on login6 
Machine characteristics: Linux-2.6.18-348.1.1.el5-x86_64-with-redhat-5.11-Final
Using PETSc directory: /home/xikai/projects/src/petsc-3.5.2
Using PETSc arch: arch-linux2-c-opt
-----------------------------------------

Using C compiler: mpicc -fPIC -fopenmp -fPIC -O3  ${COPTFLAGS} ${CFLAGS}
Using Fortran compiler: mpif90 -fPIC -fopenmp -fPIC -O3   ${FOPTFLAGS} ${FFLAGS} 
-----------------------------------------

Using include paths: 
-I/home/xikai/projects/src/petsc-3.5.2/arch-linux2-c-opt/include 
-I/home/xikai/projects/src/petsc-3.5.2/include 
-I/home/xikai/projects/src/petsc-3.5.2/include 
-I/home/xikai/projects/src/petsc-3.5.2/arch-linux2-c-opt/include 
-I/opt/soft/openmpi16-1.6.5-intel12-1/include
-----------------------------------------

Using C linker: mpicc
Using Fortran linker: mpif90
Using libraries: 
-Wl,-rpath,/home/xikai/projects/src/petsc-3.5.2/arch-linux2-c-opt/lib 
-L/home/xikai/projects/src/petsc-3.5.2/arch-linux2-c-opt/lib -lpetsc 
-Wl,-rpath,/home/xikai/projects/src/petsc-3.5.2/arch-linux2-c-opt/lib 
-L/home/xikai/projects/src/petsc-3.5.2/arch-linux2-c-opt/lib -lHYPRE 
-Wl,-rpath,/opt/soft/openmpi16-1.6.5-intel12-1/lib 
-L/opt/soft/openmpi16-1.6.5-intel12-1/lib 
-Wl,-rpath,/opt/intel/composer_xe_2011_sp1.12.361/compiler/lib/intel64 
-L/opt/intel/composer_xe_2011_sp1.12.361/compiler/lib/intel64 
-Wl,-rpath,/usr/lib/gcc/x86_64-redhat-linux/4.1.2 
-L/usr/lib/gcc/x86_64-redhat-linux/4.1.2 -lmpi_cxx -lsuperlu_dist_3.3 -lflapack 
-lfblas -lparmetis -lmetis -lX11 -lpthread -lssl -lcrypto -lmpi_f90 -lmpi_f77 
-lm -lifport -lifcoremt -lm -lmpi_cxx -ldl 
-Wl,-rpath,/opt/soft/openmpi16-1.6.5-intel12-1/lib 
-L/opt/soft/openmpi16-1.6.5-intel12-1/lib -lmpi -lnuma -lrt -lnsl -lutil 
-Wl,-rpath,/opt/soft/openmpi16-1.6.5-intel12-1/lib 
-L/opt/soft/openmpi16-1.6.5-intel12-1/lib 
-Wl,-rpath,/opt/intel/composer_xe_2011_sp1.12.361/compiler/lib/intel64 
-L/opt/intel/composer_xe_2011_sp1.12.361/compiler/lib/intel64 
-Wl,-rpath,/opt/intel/composer_xe_2011_sp1.12.361/compiler/lib/intel64 
-L/opt/intel/composer_xe_2011_sp1.12.361/compiler/lib/intel64 
-Wl,-rpath,/opt/intel/composer_xe_2011_sp1.12.361/compiler/lib/intel64 
-L/opt/intel/composer_xe_2011_sp1.12.361/compiler/lib/intel64 
-Wl,-rpath,/usr/lib/gcc/x86_64-redhat-linux/4.1.2 
-L/usr/lib/gcc/x86_64-redhat-linux/4.1.2 
-Wl,-rpath,/opt/intel/composer_xe_2011_sp1.12.361/compiler/lib/intel64 
-L/opt/intel/composer_xe_2011_sp1.12.361/compiler/lib/intel64 -limf -lsvml 
-lipgo -ldecimal -liomp5 -lcilkrts -lstdc++ -lgcc_s -lirc -lpthread -lirc_s 
-Wl,-rpath,/opt/soft/openmpi16-1.6.5-intel12-1/lib 
-L/opt/soft/openmpi16-1.6.5-intel12-1/lib 
-Wl,-rpath,/opt/intel/composer_xe_2011_sp1.12.361/compiler/lib/intel64 
-L/opt/intel/composer_xe_2011_sp1.12.361/compiler/lib/intel64 
-Wl,-rpath,/opt/intel/composer_xe_2011_sp1.12.361/compiler/lib/intel64 
-L/opt/intel/composer_xe_2011_sp1.12.361/compiler/lib/intel64 
-Wl,-rpath,/opt/intel/composer_xe_2011_sp1.12.361/compiler/lib/intel64 
-L/opt/intel/composer_xe_2011_sp1.12.361/compiler/lib/intel64 
-Wl,-rpath,/usr/lib/gcc/x86_64-redhat-linux/4.1.2 
-L/usr/lib/gcc/x86_64-redhat-linux/4.1.2 
-Wl,-rpath,/opt/intel/composer_xe_2011_sp1.12.361/compiler/lib/intel64 
-L/opt/intel/composer_xe_2011_sp1.12.361/compiler/lib/intel64 -ldl  
-----------------------------------------
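
The VecAssemblyEnd max/min ratio of 1207.6 above (and 2282.7 in the 40-core 
log below) means at least one rank finishes its boundary assembly almost 
instantly, which fits the picture of processors that own interior elements 
but little or none of the surface. To make the boundary work show up as its 
own section of -log_summary, it could be wrapped in a separate PETSc logging 
stage. A minimal sketch (the stage name and variable are placeholders of 
mine, not from the code above):

PetscLogStage boundary_stage;
PetscLogStageRegister("Boundary Integration", &boundary_stage);

PetscLogStagePush(boundary_stage);
// ... BoundaryMesh assembly and integration ...
PetscLogStagePop();

The event rows would then be split between the Main Stage and the boundary 
stage, which makes the imbalanced calls easier to attribute.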

Running /home/xikai/projects/infinitepoisson/boundary_integral-opt -in /home/xikai/projects/infinitepoisson/sphere_tet_approx_size0_1.e -r 3 -eps 1e-6 -log_summary


Volume mesh info, 3D spatial, 3D element
 Mesh Information:
  elem_dimensions()={3}
  spatial_dimension()=3
  n_nodes()=5077
    n_local_nodes()=194
  n_elem()=25963
    n_local_elem()=648
    n_active_elem()=25963
  n_subdomains()=1
  n_partitions()=40
  n_processors()=40
  n_threads()=1
  processor_id()=0


Boundary mesh info, 3D spatial, 2D element
 Mesh Information:
  elem_dimensions()={2}
  spatial_dimension()=3
  n_nodes()=1487
    n_local_nodes()=66
  n_elem()=2970
    n_local_elem()=102
    n_active_elem()=2970
  n_subdomains()=1
  n_partitions()=40
  n_processors()=40
  n_threads()=1
  processor_id()=0


Boundary mesh info, 3D spatial, 2D element. Uniformly refine 3 times
 Mesh Information:
  elem_dimensions()={2}
  spatial_dimension()=3
  n_nodes()=95042
    n_local_nodes()=3377
  n_elem()=252450
    n_local_elem()=8670
    n_active_elem()=190080
  n_subdomains()=1
  n_partitions()=40
  n_processors()=40
  n_threads()=1
  processor_id()=0
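
The n_local_nodes()/n_local_elem() figures above are for rank 0 only. To 
measure the spread over all 40 ranks directly, the local element count can 
be reduced over the mesh communicator. A minimal check (variable names are 
placeholders of mine):

// boundary_mesh is the sync()'d BoundaryMesh; comm().max()/min() are
// libMesh's in-place all-reduces, so every rank gets the result.
dof_id_type n_local = boundary_mesh.n_local_elem();
dof_id_type n_max = n_local;
dof_id_type n_min = n_local;
boundary_mesh.comm().max(n_max);
boundary_mesh.comm().min(n_min);
if (boundary_mesh.processor_id() == 0)
  libMesh::out << "boundary elems per rank: max=" << n_max
               << ", min=" << n_min << std::endl;

A max/min ratio here well above 1 would confirm that the surface 
partitioning itself, rather than the solver, is the source of the imbalance.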


 -------------------------------------------------------------------------
| Processor id:   0                                                       |
| Num Processors: 40                                                      |
| Time:           Thu Jun  4 11:33:06 2015                                |
| OS:             Linux                                                   |
| HostName:       n229                                                    |
| OS Release:     2.6.18-348.1.1.el5                                      |
| OS Version:     #1 SMP Tue Jan 22 16:19:19 EST 2013                     |
| Machine:        x86_64                                                  |
| Username:       xikai                                                   |
| Configuration:  ../configure  '--with-methods=opt oprof dbg'            |
|  '--prefix=/home/xikai/projects/moose/scripts/../libmesh/installed'     |
|  '--enable-silent-rules'                                                |
|  '--enable-unique-id'                                                   |
|  '--disable-warnings'                                                   |
|  '--disable-cxx11'                                                      |
|  '--enable-unique-ptr'                                                  |
|  '--enable-openmp'                                                      |
|  'METHODS=opt oprof dbg'                                                |
|  'PETSC_DIR=/home/xikai/projects/moose/petsc/openmpi_petsc-3.5.2/icc-opt'|
 -------------------------------------------------------------------------
 
------------------------------------------------------------------------------------------------------------
| Boundary Integration Performance: Alive time=1645.88, Active time=1343.57                                  |
 ------------------------------------------------------------------------------------------------------------
| Event                         nCalls     Total Time  Avg Time    Total Time  Avg Time    % of Active Time  |
|                                          w/o Sub     w/o Sub     With Sub    With Sub    w/o S    With S   |
|------------------------------------------------------------------------------------------------------------|
|                                                                                                            |
| Phi1 initialization           1          0.0057      0.005669    0.0057      0.005669    0.00     0.00     |
| integration                   1          1343.5634   1343.563395 1343.5634   1343.563395 100.00   100.00   |
 ------------------------------------------------------------------------------------------------------------
| Totals:                       2          1343.5691                                       100.00            |
 ------------------------------------------------------------------------------------------------------------
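
The single "integration" event covers 100% of the active time, so this table 
cannot show what the slow ranks are doing inside it. Assuming the table 
above comes from a PerfLog object, finer-grained events could be pushed 
around the element loop and the communication separately. A sketch (the 
event labels are placeholders of mine):

#include "libmesh/perf_log.h"

PerfLog perf_log("Boundary Integration");

perf_log.push("element loop");
// ... loop over local BoundaryMesh elements ...
perf_log.pop("element loop");

perf_log.push("vector assembly");
// ... close() / scatter of the boundary vectors ...
perf_log.pop("vector assembly");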

************************************************************************************************************************
***             WIDEN YOUR WINDOW TO 120 CHARACTERS.  Use 'enscript -r -fCourier9' to print this document            ***
************************************************************************************************************************

---------------------------------------------- PETSc Performance Summary: ----------------------------------------------

/home/xikai/projects/infinitepoisson/boundary_integral-opt on a arch-linux2-c-opt named n229 with 40 processors, by xikai Thu Jun  4 11:33:06 2015
Using Petsc Release Version 3.5.2, Sep, 08, 2014 

                         Max       Max/Min        Avg      Total 
Time (sec):           1.651e+03      1.00001   1.651e+03
Objects:              2.500e+01      1.00000   2.500e+01
Flops:                7.768e+03      0.00000   4.752e+03  1.901e+05
Flops/sec:            4.706e+00      0.00000   2.879e+00  1.152e+02
MPI Messages:         1.860e+02      4.22727   1.617e+02  6.466e+03
MPI Message Lengths:  2.292e+06   1514.08454   8.157e+03  5.274e+07
MPI Reductions:       4.200e+01      1.00000
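
(The Max/Min ratio for Flops is printed as 0.00000 because the minimum is 
zero: at least one of the 40 ranks performs no flops at all. The average of 
4.752e+03 flops over 40 ranks reproduces the total, 4.752e+03 * 40 = 
1.901e+05, while the busiest rank does 7.768e+03, so the flop work is 
concentrated on a subset of the processors.)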

Flop counting convention: 1 flop = 1 real number operation of type (multiply/divide/add/subtract)
                            e.g., VecAXPY() for real vectors of length N --> 2N flops
                            and VecAXPY() for complex vectors of length N --> 8N flops

Summary of Stages:   ----- Time ------  ----- Flops -----  --- Messages ---  -- Message Lengths --  -- Reductions --
                        Avg     %Total     Avg     %Total   counts   %Total     Avg         %Total   counts   %Total 
 0:      Main Stage: 1.6507e+03 100.0%  1.9008e+05 100.0%  6.466e+03 100.0%  8.157e+03      100.0%  4.100e+01  97.6% 

------------------------------------------------------------------------------------------------------------------------
See the 'Profiling' chapter of the users' manual for details on interpreting output.
Phase summary info:
   Count: number of times phase was executed
   Time and Flops: Max - maximum over all processors
                   Ratio - ratio of maximum to minimum over all processors
   Mess: number of messages sent
   Avg. len: average message length (bytes)
   Reduct: number of global reductions
   Global: entire computation
   Stage: stages of a computation. Set stages with PetscLogStagePush() and PetscLogStagePop().
      %T - percent time in this phase         %F - percent flops in this phase
      %M - percent messages in this phase     %L - percent message lengths in this phase
      %R - percent reductions in this phase
   Total Mflop/s: 10e-6 * (sum of flops over all processors)/(max time over all processors)
------------------------------------------------------------------------------------------------------------------------
Event                Count      Time (sec)     Flops                             --- Global ---  --- Stage ---   Total
                   Max Ratio  Max     Ratio   Max  Ratio  Mess   Avg len Reduct  %T %F %M %L %R  %T %F %M %L %R Mflop/s
------------------------------------------------------------------------------------------------------------------------

--- Event Stage 0: Main Stage

VecCopy                1 1.0 4.1580e-04 22.9 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecSet                 9 1.0 7.6604e-04 10.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecAssemblyBegin       5 1.0 9.8375e-02 1.8 0.00e+00 0.0 4.9e+03 1.0e+04 1.5e+01  0  0 76 98 36   0  0 76 98 37     0
VecAssemblyEnd         5 1.0 2.0681e-02 2282.7 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecScatterBegin        2 1.0 2.6303e-02 279.3 0.00e+00 0.0 2.2e+02 4.4e+03 1.0e+00  0  0  3  2  2   0  0  3  2  2     0
VecScatterEnd          1 1.0 3.1590e-04 331.2 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatZeroEntries         2 1.0 7.7796e-04 19.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
------------------------------------------------------------------------------------------------------------------------

Memory usage is given in bytes:

Object Type          Creations   Destructions     Memory  Descendants' Mem.
Reports information only for process 0.

--- Event Stage 0: Main Stage

              Vector    11             11      1817712     0
      Vector Scatter     3              3         2820     0
           Index Set     5              5         6500     0
   IS L to G Mapping     2              2        31524     0
              Matrix     3              3        55256     0
              Viewer     1              0            0     0
========================================================================================================================
Average time to get PetscTime(): 1.90735e-07
Average time for MPI_Barrier(): 0.000348997
Average time for zero size MPI_Send(): 2.09272e-05
#PETSc Option Table entries:
-eps 1e-6
-in /home/xikai/projects/infinitepoisson/sphere_tet_approx_size0_1.e
-log_summary
-r 3
#End of PETSc Option Table entries
[Compiler, configure-option, include-path, and library information identical 
to the corresponding section of the log above.]
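
On the question of a separate partitioner for the BoundaryMesh: since the 
sync()'d BoundaryMesh starts out with the volume mesh's processor ids, one 
thing to try is running an explicit partitioner on it afterwards instead of 
the plain boundary_mesh.partition() call. A minimal, untested sketch, 
assuming a METIS-enabled libMesh build:

#include "libmesh/metis_partitioner.h"

// Repartition the boundary mesh independently of the volume mesh.
MetisPartitioner metis;
metis.partition(boundary_mesh, boundary_mesh.n_processors());

// Keep any later prepare_for_use() from repartitioning it again.
boundary_mesh.skip_partitioning(true);

If the surface decomposition still mirrors the volume one after this, the 
processor ids are presumably being reassigned somewhere else, which would 
be worth knowing in any case.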
