You should look for error messages in the output of QE (if any). What you reported is an MPI warning, and it is related to the way your MPI environment is installed, not to QE.
Paolo

On Wed, Aug 17, 2016 at 2:14 PM, SELLAVEL E ca15m006 <[email protected]> wrote:

> I installed QE on the following machine:
>
> Linux c1hn1 3.0.13-0.27-default #1 SMP Wed Feb 15 13:33:49 UTC 2012
> (d73692b) x86_64 x86_64 x86_64 GNU/Linux
>
> While running calculations in parallel, the following error message
> comes up. Kindly help in this regard.
>
> --------------------------------------------------------------------------
> WARNING: It appears that your OpenFabrics subsystem is configured to only
> allow registering part of your physical memory. This can cause MPI jobs to
> run with erratic performance, hang, and/or crash.
>
> This may be caused by your OpenFabrics vendor limiting the amount of
> physical memory that can be registered. You should investigate the
> relevant Linux kernel module parameters that control how much physical
> memory can be registered, and increase them to allow registering all
> physical memory on your machine.
>
> See this Open MPI FAQ item for more information on these Linux kernel
> module parameters:
>
> http://www.open-mpi.org/faq/?category=openfabrics#ib-locked-pages
>
> Local host:          a1n32
> Registerable memory: 32768 MiB
> Total memory:        65512 MiB
>
> Your MPI job will continue, but may be behave poorly and/or hang.
> --------------------------------------------------------------------------
> [a1n32:11971] 3 more processes have sent help message
> help-mpi-btl-openib.txt / reg mem limit low
> [a1n32:11971] Set MCA parameter "orte_base_help_aggregate" to 0 to see all
> help / error messages
>
> _______________________________________________
> Pw_forum mailing list
> [email protected]
> http://pwscf.org/mailman/listinfo/pw_forum

--
Paolo Giannozzi, Dip. Scienze Matematiche Informatiche e Fisiche,
Univ. Udine, via delle Scienze 208, 33100 Udine, Italy
Phone +39-0432-558216, fax +39-0432-558222
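For readers hitting the same warning: per the Open MPI FAQ item linked above, on Mellanox mlx4 HCAs the maximum registerable memory is roughly page_size * 2^(log_num_mtt + log_mtts_per_seg), where the two exponents are mlx4_core kernel module parameters. A minimal sketch of the arithmetic, assuming hypothetical parameter values chosen to reproduce the 32768 MiB limit reported in the warning (on a real node, read the actual values from /sys/module/mlx4_core/parameters/):

```shell
#!/bin/sh
# Hypothetical values for illustration; real nodes may differ.
# On a live system: cat /sys/module/mlx4_core/parameters/log_num_mtt
log_num_mtt=20
log_mtts_per_seg=3
page_size=4096   # bytes; check with `getconf PAGE_SIZE`

# Max registerable memory = page_size * 2^(log_num_mtt + log_mtts_per_seg)
reg_mib=$(( page_size * (1 << (log_num_mtt + log_mtts_per_seg)) / 1024 / 1024 ))
echo "max registerable memory: ${reg_mib} MiB"

# To register all 65512 MiB of RAM, the product must exceed total memory;
# e.g. raising log_num_mtt by one doubles the limit to 65536 MiB. This is
# typically done in a modprobe config file such as /etc/modprobe.d/mlx4.conf:
#   options mlx4_core log_num_mtt=21 log_mtts_per_seg=3
# followed by reloading the module or rebooting the node.
```

The exact parameter names and the doubling factor needed depend on the HCA generation and driver, so treat the values above as a worked example of the FAQ's formula rather than a drop-in fix; your cluster administrator should set them consistently across nodes.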
