From: Peter Brune [mailto:[email protected]]
Sent: Tuesday, April 22, 2014 10:16 AM
To: Fischer, Greg A.
Cc: [email protected]
Subject: Re: [petsc-users] SNES: approximating the Jacobian with computed residuals?
On Tue, Apr 22, 2014 at 8:48 AM, Fischer, Greg A. <[email protected]> wrote:
Hello PETSc-users,
I'm using the SNES component with the NGMRES method in my application. I'm
using a matrix-free context for the Jacobian and the MatMFFDComputeJacobian()
function in my FormJacobian routine. My understanding is that this effectively
approximates the Jacobian using the equation at the bottom of Page 103 in the
PETSc User's Manual. This works, but the expense of computing two function
evaluations in each SNES iteration nearly wipes out the performance
improvements over Picard iteration.
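For intuition, the matrix-free approximation in question applies the Jacobian to a vector by finite differences, J(u) a ≈ (F(u + h·a) − F(u)) / h, costing one extra residual evaluation per application. A minimal self-contained sketch with a toy residual (not PETSc code; `F`, `jacobian_action`, and the step `h` are illustrative choices, not PETSc internals):

```python
def F(u):
    # Toy nonlinear residual: F_i(u) = u_i**2 - 2
    return [x * x - 2.0 for x in u]

def jacobian_action(F, u, a, h=1e-7):
    # Matrix-free Jacobian-vector product via forward differences:
    #   J(u) a ~= (F(u + h*a) - F(u)) / h
    # In practice F(u) is already available from the solver, so each
    # action costs one additional residual evaluation.
    Fu = F(u)
    Fp = F([ui + h * ai for ui, ai in zip(u, a)])
    return [(fp - f0) / h for fp, f0 in zip(Fp, Fu)]

u = [1.0, 2.0]
a = [1.0, 0.0]
# The analytic Jacobian is diag(2*u), so J a should be close to [2.0, 0.0].
res = jacobian_action(F, u, a)
print(res)
```

PETSc's MatMFFD machinery does the same thing with a more careful choice of the differencing parameter.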
Try -snes_type anderson. It's less stable than NGMRES, but requires one
function evaluation per iteration. The manual is out of date. I guess it's
time to fix that. It's interesting that the cost of matrix assembly and a
linear solve is around the same as that of a function evaluation. Output from
-log_summary would help in the diagnosis.
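To see why Anderson mixing gets away with one evaluation per iteration: it recombines *stored* iterates and residuals by a small least-squares solve, rather than differencing the function again. A self-contained sketch (my own simplified Type-II formulation for a fixed point x = g(x), not PETSc's implementation):

```python
import numpy as np

def anderson(g, x0, m=3, tol=1e-10, maxit=50):
    """Anderson mixing for the fixed point x = g(x).
    One evaluation of g per iteration; the acceleration reuses the
    stored residual history instead of extra function evaluations."""
    x = np.atleast_1d(np.asarray(x0, float))
    gx = g(x)
    f = gx - x                      # fixed-point residual
    Gf, Ff = [gx], [f]
    for k in range(maxit):
        mk = min(m, len(Ff) - 1)
        if mk > 0:
            # Columns span the residual differences f_k - f_{k-j};
            # gamma minimizes || f_k - dF @ gamma ||_2.
            dF = np.column_stack([Ff[-1] - Ff[-1 - j] for j in range(1, mk + 1)])
            dG = np.column_stack([Gf[-1] - Gf[-1 - j] for j in range(1, mk + 1)])
            gamma, *_ = np.linalg.lstsq(dF, Ff[-1], rcond=None)
            x = Gf[-1] - dG @ gamma   # mixed iterate from stored history
        else:
            x = Gf[-1]                # plain Picard step to seed the history
        gx = g(x)
        f = gx - x
        Gf.append(gx)
        Ff.append(f)
        if np.linalg.norm(f) < tol:
            break
    return x, k + 1

x, its = anderson(np.cos, 0.0)   # fixed point of cos is ~0.739085
print(x, its)
```

The least-squares solve over the history is cheap (mk unknowns), so the per-iteration cost is dominated by the single g evaluation.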
I tried the -snes_type anderson option, and it seems to require even more
function evaluations than the Picard iterations. I've attached -log_summary
output. This seems strange, because I can use the NLKAIN code
(http://nlkain.sourceforge.net/) to fairly good effect, and I've read that it's
related to Anderson mixing. Would it be useful to adjust the parameters?
I've also attached -log_summary output for NGMRES. Does anything jump out as
being amiss?
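If parameter tuning is worth a try, a shorter history and some damping are the usual knobs. A hypothetical run (I'm citing the option names from memory against the 3.4 series; `./app` is a stand-in for the real executable, and the names should be verified with -help):

```shell
# Sketch of a tuning run; -snes_anderson_m sets the history depth
# (13 was used in the attached run) and -snes_anderson_beta the
# mixing/damping parameter (1.0 would be undamped).
./app -snes_type anderson -snes_anderson_m 5 -snes_anderson_beta 0.5 \
      -snes_anderson_monitor -log_summary
```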
Based on my (limited) understanding of the Oosterlee/Washio SIAM paper ("Krylov
Subspace Acceleration of Nonlinear Multigrid..."), they seem to suggest that
it's possible to approximate the Jacobian with a series of previously-computed
residuals (eq 2.14), rather than additional function evaluations in each
iteration. Is this correct? If so, could someone point me to a reference that
demonstrates how to do this with PETSc?
What indication do you have that the Jacobian is calculated at all in the
NGMRES method? The two function evaluations are related to computing the
quantities labeled F(u_M) and F(u_A) in O/W. We already use the Jacobian
approximation for the minimization problem (2.14).
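In my notation (a sketch of how I read (2.14), not verbatim from the paper), the accelerated iterate combines stored iterates u_i, with coefficients determined entirely from the stored residuals:

```latex
% Accelerated iterate as a combination of stored iterates u_i:
%   u_A = u_M + \sum_i \alpha_i \, (u_i - u_M)
% with the coefficients chosen from the stored residuals F(u_i):
\min_{\alpha} \; \Bigl\| F(u_M) + \sum_i \alpha_i \bigl( F(u_i) - F(u_M) \bigr) \Bigr\|_2
```

No new function evaluations enter this minimization; the two evaluations per iteration come from forming F(u_M) and F(u_A) themselves.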
- Peter
Thanks for the clarification.
-Greg
Or, perhaps a better question: are there other ways to reduce the computational
burden associated with estimating the Jacobian?
Thanks,
Greg
---------------------------------------------- PETSc Performance Summary: ----------------------------------------------
./XXXXXXXXXXX on a arch-linux2-c-debug named bl1313.suse.pgh.wec.com with 1
processor, by fischega Tue Apr 22 11:35:07 2014
Using Petsc Release Version 3.4.4, Mar, 13, 2014
Max Max/Min Avg Total
Time (sec): 2.045e+01 1.00000 2.045e+01
Objects: 6.572e+03 1.00000 6.572e+03
Flops: 1.004e+08 1.00000 1.004e+08 1.004e+08
Flops/sec: 4.909e+06 1.00000 4.909e+06 4.909e+06
Memory: 5.495e+05 1.00000 5.495e+05
MPI Messages: 0.000e+00 0.00000 0.000e+00 0.000e+00
MPI Message Lengths: 0.000e+00 0.00000 0.000e+00 0.000e+00
MPI Reductions: 5.940e+04 1.00000
Flop counting convention: 1 flop = 1 real number operation of type
(multiply/divide/add/subtract)
e.g., VecAXPY() for real vectors of length N --> 2N
flops
and VecAXPY() for complex vectors of length N -->
8N flops
Summary of Stages: ----- Time ------ ----- Flops ----- --- Messages --- --
Message Lengths -- -- Reductions --
Avg %Total Avg %Total counts %Total
Avg %Total counts %Total
0: Main Stage: 2.0453e+01 100.0% 1.0041e+08 100.0% 0.000e+00 0.0%
0.000e+00 0.0% 5.940e+04 100.0%
------------------------------------------------------------------------------------------------------------------------
See the 'Profiling' chapter of the users' manual for details on interpreting
output.
Phase summary info:
Count: number of times phase was executed
Time and Flops: Max - maximum over all processors
Ratio - ratio of maximum to minimum over all processors
Mess: number of messages sent
Avg. len: average message length (bytes)
Reduct: number of global reductions
Global: entire computation
Stage: stages of a computation. Set stages with PetscLogStagePush() and
PetscLogStagePop().
%T - percent time in this phase %f - percent flops in this phase
%M - percent messages in this phase %L - percent message lengths in
this phase
%R - percent reductions in this phase
Total Mflop/s: 10e-6 * (sum of flops over all processors)/(max time over all
processors)
------------------------------------------------------------------------------------------------------------------------
##########################################################
# #
# WARNING!!! #
# #
# This code was compiled with a debugging option, #
# To get timing results run ./configure #
# using --with-debugging=no, the performance will #
# be generally two or three times faster. #
# #
##########################################################
Event Count Time (sec) Flops
--- Global --- --- Stage --- Total
Max Ratio Max Ratio Max Ratio Mess Avg len Reduct
%T %f %M %L %R %T %f %M %L %R Mflop/s
------------------------------------------------------------------------------------------------------------------------
--- Event Stage 0: Main Stage
ThreadCommRunKer 1 1.0 4.0531e-06 1.0 0.00e+00 0.0 0.0e+00 0.0e+00
0.0e+00 0 0 0 0 0 0 0 0 0 0 0
ThreadCommBarrie 1 1.0 2.1458e-06 1.0 0.00e+00 0.0 0.0e+00 0.0e+00
0.0e+00 0 0 0 0 0 0 0 0 0 0 0
VecMax 2752 1.0 1.0545e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00
0.0e+00 0 0 0 0 0 0 0 0 0 0 0
VecMDot 5303 1.0 1.0083e-01 1.0 5.98e+07 1.0 0.0e+00 0.0e+00
0.0e+00 0 60 0 0 0 0 60 0 0 0 593
VecNorm 2752 1.0 1.3784e-02 1.0 2.82e+06 1.0 0.0e+00 0.0e+00
0.0e+00 0 3 0 0 0 0 3 0 0 0 204
VecScale 2685 1.0 1.2508e-01 1.0 1.34e+06 1.0 0.0e+00 0.0e+00
0.0e+00 1 1 0 0 0 1 1 0 0 0 11
VecCopy 24232 1.0 9.2041e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00
0.0e+00 0 0 0 0 0 0 0 0 0 0 0
VecSet 7916 1.0 2.5002e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00
0.0e+00 0 0 0 0 0 0 0 0 0 0 0
VecAXPY 5370 1.0 2.6417e-01 1.0 5.50e+06 1.0 0.0e+00 0.0e+00
0.0e+00 1 5 0 0 0 1 5 0 0 0 21
VecMAXPY 2618 1.0 6.2283e-02 1.0 2.95e+07 1.0 0.0e+00 0.0e+00
0.0e+00 0 29 0 0 0 0 29 0 0 0 474
SNESSolve 67 1.0 1.3316e+01 1.0 1.00e+08 1.0 0.0e+00 0.0e+00
4.8e+04 65100 0 0 81 65100 0 0 81 8
SNESFunctionEval 2752 1.0 1.1888e+01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00
0.0e+00 58 0 0 0 0 58 0 0 0 0 0
------------------------------------------------------------------------------------------------------------------------
Memory usage is given in bytes:
Object Type Creations Destructions Memory Descendants' Mem.
Reports information only for process 0.
--- Event Stage 0: Main Stage
Vector 5164 5164 28291200 0
Vector Scatter 134 134 86296 0
MatMFFD 67 0 0 0
Matrix 67 0 0 0
Distributed Mesh 134 134 894584 0
Bipartite Graph 268 268 216544 0
Index Set 335 335 393960 0
IS L to G Mapping 67 67 176612 0
SNES 67 67 103984 0
SNESLineSearch 67 67 57352 0
DMSNES 67 67 45024 0
Viewer 1 0 0 0
Krylov Solver 67 67 87904 0
Preconditioner 67 67 65928 0
========================================================================================================================
Average time to get PetscTime(): 0
#PETSc Option Table entries:
-log_summary
-snes_anderson_m 13
-snes_anderson_monitor
-snes_type anderson
#End of PETSc Option Table entries
Compiled without FORTRAN kernels
Compiled with full precision matrices (default)
sizeof(short) 2 sizeof(int) 4 sizeof(long) 8 sizeof(void*) 8
sizeof(PetscScalar) 8 sizeof(PetscInt) 4
Configure run at: Fri Mar 21 14:15:12 2014
Configure options: VERBOSE=1
--with-mpi-dir=/home/fischega/devel/XXXXXXXXXXX/install/openmpi-1.6.5
--with-blas-lapack-dir=/tools/intel/mkl --with-batch=0
--prefix=/home/fischega/devel/XXXXXXXXXXX/install/petsc-3.4.4
-----------------------------------------
Libraries compiled on Fri Mar 21 14:15:12 2014 on susedev1
Machine characteristics: Linux-2.6.16.60-0.21-smp-x86_64-with-SuSE-10-x86_64
Using PETSc directory: /local/fischega/build/petsc-3.4.4
Using PETSc arch: arch-linux2-c-debug
-----------------------------------------
Using C compiler:
/home/fischega/devel/XXXXXXXXXXX/install/openmpi-1.6.5/bin/mpicc -fPIC -wd1572
-g ${COPTFLAGS} ${CFLAGS}
Using Fortran compiler:
/home/fischega/devel/XXXXXXXXXXX/install/openmpi-1.6.5/bin/mpif90 -fPIC -g
${FOPTFLAGS} ${FFLAGS}
-----------------------------------------
Using include paths:
-I/local/fischega/build/petsc-3.4.4/arch-linux2-c-debug/include
-I/local/fischega/build/petsc-3.4.4/include
-I/local/fischega/build/petsc-3.4.4/include
-I/local/fischega/build/petsc-3.4.4/arch-linux2-c-debug/include
-I/usr/X11/include
-I/home/fischega/devel/XXXXXXXXXXX/install/openmpi-1.6.5/include
-----------------------------------------
Using C linker: /home/fischega/devel/XXXXXXXXXXX/install/openmpi-1.6.5/bin/mpicc
Using Fortran linker:
/home/fischega/devel/XXXXXXXXXXX/install/openmpi-1.6.5/bin/mpif90
Using libraries:
-Wl,-rpath,/local/fischega/build/petsc-3.4.4/arch-linux2-c-debug/lib
-L/local/fischega/build/petsc-3.4.4/arch-linux2-c-debug/lib -lpetsc
-Wl,-rpath,/tools/intel/mkl/lib/intel64 -L/tools/intel/mkl/lib/intel64
-lmkl_intel_lp64 -lmkl_sequential -lmkl_core -lpthread -lm
-Wl,-rpath,/usr/X11/lib64 -L/usr/X11/lib64 -lX11 -lpthread
-Wl,-rpath,/tools/lsf/7.0.6.EC/7.0/linux2.6-glibc2.3-x86_64/lib
-L/tools/lsf/7.0.6.EC/7.0/linux2.6-glibc2.3-x86_64/lib
-Wl,-rpath,/home/fischega/devel/XXXXXXXXXXX/install/openmpi-1.6.5/lib
-L/home/fischega/devel/XXXXXXXXXXX/install/openmpi-1.6.5/lib
-Wl,-rpath,/tools/intel/cce/11.1.046/lib/intel64
-L/tools/intel/cce/11.1.046/lib/intel64
-Wl,-rpath,/tools/intel/cce/11.1.046/ipp/em64t/lib
-L/tools/intel/cce/11.1.046/ipp/em64t/lib
-Wl,-rpath,/tools/intel/cce/11.1.046/mkl/lib/em64t
-L/tools/intel/cce/11.1.046/mkl/lib/em64t
-Wl,-rpath,/tools/intel/cce/11.1.046/tbb/em64t/cc4.1.0_libc2.4_kernel2.6.16.21/lib
-L/tools/intel/cce/11.1.046/tbb/em64t/cc4.1.0_libc2.4_kernel2.6.16.21/lib
-Wl,-rpath,/tools/intel/fce/11.1.046/lib/intel64
-L/tools/intel/fce/11.1.046/lib/intel64
-Wl,-rpath,/tools/intel/fce/11.1.046/mkl/lib/em64t
-L/tools/intel/fce/11.1.046/mkl/lib/em64t
-Wl,-rpath,/usr/lib64/gcc/x86_64-suse-linux/4.1.2
-L/usr/lib64/gcc/x86_64-suse-linux/4.1.2 -Wl,-rpath,/usr/x86_64-suse-linux/lib
-L/usr/x86_64-suse-linux/lib -lmpi_f90 -lmpi_f77 -lm -lm -lifport -lifcore -lm
-lm -lm -lmpi_cxx -lstdc++ -lmpi_cxx -lstdc++ -ldl -lmpi -losmcomp -lrdmacm
-libverbs -lrt -lnsl -lutil -lbat -llsf -lnuma -limf -lsvml -lipgo -ldecimal
-lirc -lgcc_s -lpthread -lirc_s -ldl
-----------------------------------------
---------------------------------------------- PETSc Performance Summary: ----------------------------------------------
./XXXXXXXXXXX on a arch-linux2-c-debug named bl1313.suse.pgh.wec.com with 1
processor, by fischega Tue Apr 22 11:32:09 2014
Using Petsc Release Version 3.4.4, Mar, 13, 2014
Max Max/Min Avg Total
Time (sec): 1.852e+01 1.00000 1.852e+01
Objects: 4.622e+03 1.00000 4.622e+03
Flops: 3.129e+07 1.00000 3.129e+07 3.129e+07
Flops/sec: 1.690e+06 1.00000 1.690e+06 1.690e+06
Memory: 5.505e+05 1.00000 5.505e+05
MPI Messages: 0.000e+00 0.00000 0.000e+00 0.000e+00
MPI Message Lengths: 0.000e+00 0.00000 0.000e+00 0.000e+00
MPI Reductions: 2.664e+04 1.00000
Flop counting convention: 1 flop = 1 real number operation of type
(multiply/divide/add/subtract)
e.g., VecAXPY() for real vectors of length N --> 2N
flops
and VecAXPY() for complex vectors of length N -->
8N flops
Summary of Stages: ----- Time ------ ----- Flops ----- --- Messages --- --
Message Lengths -- -- Reductions --
Avg %Total Avg %Total counts %Total
Avg %Total counts %Total
0: Main Stage: 1.8520e+01 100.0% 3.1293e+07 100.0% 0.000e+00 0.0%
0.000e+00 0.0% 2.664e+04 100.0%
------------------------------------------------------------------------------------------------------------------------
See the 'Profiling' chapter of the users' manual for details on interpreting
output.
Phase summary info:
Count: number of times phase was executed
Time and Flops: Max - maximum over all processors
Ratio - ratio of maximum to minimum over all processors
Mess: number of messages sent
Avg. len: average message length (bytes)
Reduct: number of global reductions
Global: entire computation
Stage: stages of a computation. Set stages with PetscLogStagePush() and
PetscLogStagePop().
%T - percent time in this phase %f - percent flops in this phase
%M - percent messages in this phase %L - percent message lengths in
this phase
%R - percent reductions in this phase
Total Mflop/s: 10e-6 * (sum of flops over all processors)/(max time over all
processors)
------------------------------------------------------------------------------------------------------------------------
##########################################################
# #
# WARNING!!! #
# #
# This code was compiled with a debugging option, #
# To get timing results run ./configure #
# using --with-debugging=no, the performance will #
# be generally two or three times faster. #
# #
##########################################################
Event Count Time (sec) Flops
--- Global --- --- Stage --- Total
Max Ratio Max Ratio Max Ratio Mess Avg len Reduct
%T %f %M %L %R %T %f %M %L %R Mflop/s
------------------------------------------------------------------------------------------------------------------------
--- Event Stage 0: Main Stage
ThreadCommRunKer 1 1.0 5.0068e-06 1.0 0.00e+00 0.0 0.0e+00 0.0e+00
0.0e+00 0 0 0 0 0 0 0 0 0 0 0
ThreadCommBarrie 1 1.0 2.1458e-06 1.0 0.00e+00 0.0 0.0e+00 0.0e+00
0.0e+00 0 0 0 0 0 0 0 0 0 0 0
VecMax 802 1.0 3.0303e-03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00
0.0e+00 0 0 0 0 0 0 0 0 0 0 0
VecMDot 1462 1.0 1.7657e-02 1.0 9.59e+06 1.0 0.0e+00 0.0e+00
0.0e+00 0 31 0 0 0 0 31 0 0 0 543
VecNorm 4488 1.0 1.8884e-02 1.0 4.59e+06 1.0 0.0e+00 0.0e+00
0.0e+00 0 15 0 0 0 0 15 0 0 0 243
VecScale 735 1.0 3.5746e-02 1.0 3.76e+05 1.0 0.0e+00 0.0e+00
0.0e+00 0 1 0 0 0 0 1 0 0 0 11
VecCopy 14298 1.0 5.2569e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00
0.0e+00 0 0 0 0 0 0 0 0 0 0 0
VecSet 4016 1.0 1.2653e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00
0.0e+00 0 0 0 0 0 0 0 0 0 0 0
VecAXPY 6012 1.0 2.9981e-01 1.0 6.16e+06 1.0 0.0e+00 0.0e+00
0.0e+00 2 20 0 0 0 2 20 0 0 0 21
VecWAXPY 735 1.0 4.6608e-03 1.0 3.76e+05 1.0 0.0e+00 0.0e+00
0.0e+00 0 1 0 0 0 0 1 0 0 0 81
VecMAXPY 735 1.0 1.0615e-02 1.0 4.53e+06 1.0 0.0e+00 0.0e+00
0.0e+00 0 14 0 0 0 0 14 0 0 0 426
VecReduceArith 5145 1.0 1.6298e-02 1.0 5.26e+06 1.0 0.0e+00 0.0e+00
0.0e+00 0 17 0 0 0 0 17 0 0 0 323
VecReduceComm 2205 1.0 4.3511e-03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00
0.0e+00 0 0 0 0 0 0 0 0 0 0 0
SNESSolve 67 1.0 7.4085e+00 1.0 3.13e+07 1.0 0.0e+00 0.0e+00
1.6e+04 40100 0 0 59 40100 0 0 59 4
SNESFunctionEval 1537 1.0 6.6015e+00 1.0 0.00e+00 0.0 0.0e+00 0.0e+00
0.0e+00 36 0 0 0 0 36 0 0 0 0 0
SNESLineSearch 735 1.0 3.2174e+00 1.0 2.63e+06 1.0 0.0e+00 0.0e+00
7.4e+02 17 8 0 0 3 17 8 0 0 3 1
------------------------------------------------------------------------------------------------------------------------
Memory usage is given in bytes:
Object Type Creations Destructions Memory Descendants' Mem.
Reports information only for process 0.
--- Event Stage 0: Main Stage
Vector 3214 3214 17402400 0
Vector Scatter 134 134 86296 0
MatMFFD 67 0 0 0
Matrix 67 0 0 0
Distributed Mesh 134 134 894584 0
Bipartite Graph 268 268 216544 0
Index Set 335 335 393960 0
IS L to G Mapping 67 67 176612 0
SNES 67 67 103984 0
SNESLineSearch 67 67 57352 0
DMSNES 67 67 45024 0
Viewer 1 0 0 0
Krylov Solver 67 67 87904 0
Preconditioner 67 67 65928 0
========================================================================================================================
Average time to get PetscTime(): 0
#PETSc Option Table entries:
-log_summary
-snes_ngmres_m 12
-snes_ngmres_monitor
-snes_type ngmres
#End of PETSc Option Table entries
Compiled without FORTRAN kernels
Compiled with full precision matrices (default)
sizeof(short) 2 sizeof(int) 4 sizeof(long) 8 sizeof(void*) 8
sizeof(PetscScalar) 8 sizeof(PetscInt) 4
Configure run at: Fri Mar 21 14:15:12 2014
Configure options: VERBOSE=1
--with-mpi-dir=/home/fischega/devel/XXXXXXXXXXX/install/openmpi-1.6.5
--with-blas-lapack-dir=/tools/intel/mkl --with-batch=0
--prefix=/home/fischega/devel/XXXXXXXXXXX/install/petsc-3.4.4
-----------------------------------------
Libraries compiled on Fri Mar 21 14:15:12 2014 on susedev1
Machine characteristics: Linux-2.6.16.60-0.21-smp-x86_64-with-SuSE-10-x86_64
Using PETSc directory: /local/fischega/build/petsc-3.4.4
Using PETSc arch: arch-linux2-c-debug
-----------------------------------------
Using C compiler:
/home/fischega/devel/XXXXXXXXXXX/install/openmpi-1.6.5/bin/mpicc -fPIC -wd1572
-g ${COPTFLAGS} ${CFLAGS}
Using Fortran compiler:
/home/fischega/devel/XXXXXXXXXXX/install/openmpi-1.6.5/bin/mpif90 -fPIC -g
${FOPTFLAGS} ${FFLAGS}
-----------------------------------------
Using include paths:
-I/local/fischega/build/petsc-3.4.4/arch-linux2-c-debug/include
-I/local/fischega/build/petsc-3.4.4/include
-I/local/fischega/build/petsc-3.4.4/include
-I/local/fischega/build/petsc-3.4.4/arch-linux2-c-debug/include
-I/usr/X11/include
-I/home/fischega/devel/XXXXXXXXXXX/install/openmpi-1.6.5/include
-----------------------------------------
Using C linker: /home/fischega/devel/XXXXXXXXXXX/install/openmpi-1.6.5/bin/mpicc
Using Fortran linker:
/home/fischega/devel/XXXXXXXXXXX/install/openmpi-1.6.5/bin/mpif90
Using libraries:
-Wl,-rpath,/local/fischega/build/petsc-3.4.4/arch-linux2-c-debug/lib
-L/local/fischega/build/petsc-3.4.4/arch-linux2-c-debug/lib -lpetsc
-Wl,-rpath,/tools/intel/mkl/lib/intel64 -L/tools/intel/mkl/lib/intel64
-lmkl_intel_lp64 -lmkl_sequential -lmkl_core -lpthread -lm
-Wl,-rpath,/usr/X11/lib64 -L/usr/X11/lib64 -lX11 -lpthread
-Wl,-rpath,/tools/lsf/7.0.6.EC/7.0/linux2.6-glibc2.3-x86_64/lib
-L/tools/lsf/7.0.6.EC/7.0/linux2.6-glibc2.3-x86_64/lib
-Wl,-rpath,/home/fischega/devel/XXXXXXXXXXX/install/openmpi-1.6.5/lib
-L/home/fischega/devel/XXXXXXXXXXX/install/openmpi-1.6.5/lib
-Wl,-rpath,/tools/intel/cce/11.1.046/lib/intel64
-L/tools/intel/cce/11.1.046/lib/intel64
-Wl,-rpath,/tools/intel/cce/11.1.046/ipp/em64t/lib
-L/tools/intel/cce/11.1.046/ipp/em64t/lib
-Wl,-rpath,/tools/intel/cce/11.1.046/mkl/lib/em64t
-L/tools/intel/cce/11.1.046/mkl/lib/em64t
-Wl,-rpath,/tools/intel/cce/11.1.046/tbb/em64t/cc4.1.0_libc2.4_kernel2.6.16.21/lib
-L/tools/intel/cce/11.1.046/tbb/em64t/cc4.1.0_libc2.4_kernel2.6.16.21/lib
-Wl,-rpath,/tools/intel/fce/11.1.046/lib/intel64
-L/tools/intel/fce/11.1.046/lib/intel64
-Wl,-rpath,/tools/intel/fce/11.1.046/mkl/lib/em64t
-L/tools/intel/fce/11.1.046/mkl/lib/em64t
-Wl,-rpath,/usr/lib64/gcc/x86_64-suse-linux/4.1.2
-L/usr/lib64/gcc/x86_64-suse-linux/4.1.2 -Wl,-rpath,/usr/x86_64-suse-linux/lib
-L/usr/x86_64-suse-linux/lib -lmpi_f90 -lmpi_f77 -lm -lm -lifport -lifcore -lm
-lm -lm -lmpi_cxx -lstdc++ -lmpi_cxx -lstdc++ -ldl -lmpi -losmcomp -lrdmacm
-libverbs -lrt -lnsl -lutil -lbat -llsf -lnuma -limf -lsvml -lipgo -ldecimal
-lirc -lgcc_s -lpthread -lirc_s -ldl
-----------------------------------------