Yes, I am aware of this redirection issue. It is actually related to npool. For example, for the scf run I used

  mpirun -np 16 pw.x -npool 8 < file.in > file.out

If I run efg.x the same way:

  mpirun -np 16 efg.x -npool 8 < efg.in > efg.out

I get 7 "error reading inputpp namelist" errors, and I guess (I didn't check) that with '-npool 4' I would get 3 errors. I also tried

  mpirun -np 16 efg.x -npool 1 < efg.in > efg.out

This time there was no "error reading inputpp namelist" error, but I still get the davcio error. Probably with a different npool, efg.x distributes the calculation differently than the previous scf run did.
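For what it's worth, a sketch of the invocations avoiding stdin redirection entirely, using the -in/-inp option mentioned in the manual. This assumes efg.x accepts the same input-file option as pw.x (I have not verified that), and that npool matches between the scf run and the efg run so the wavefunction files are distributed the same way:

```shell
# scf run: pass the input file as an argument instead of redirecting stdin,
# so every MPI process can see the input regardless of the MPI implementation
mpirun -np 16 pw.x -npool 8 -in file.in > file.out

# efg run: same -npool as the scf run, so davcio finds the files where
# the scf run left them (assumption: efg.x supports -in like pw.x does)
mpirun -np 16 efg.x -npool 8 -in efg.in > efg.out
```

The point of matching -npool is that the per-pool data files written during scf are read back with the same process-to-pool assignment.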
Lilong

On Fri, Mar 18, 2005 at 06:58:07PM +0100, Paolo Giannozzi wrote:
> On Friday 18 March 2005 05:15, Lilong Li wrote:
>
> > I also tried to run 16 processors for efg.x, but it complains from the
> > beginning "error reading inputpp namelist". I guess this is described
> > in the manual, too many processors for simple jobs.
>
> it should work anyway, even if it doesn't do anything in parallel.
> Does it work with the same data on one processor? if it does,
> consider the following:
>
> > Some implementations of the MPI library may have problems
> > with input redirection in parallel. If this happens, use the option
> > -in (or -inp or -input), followed by the input file name.
> > Example: pw.x -in input -npool 4 > output.
>
> Paolo
> --
> Paolo Giannozzi             e-mail: giannozz at nest.sns.it
> Scuola Normale Superiore    Phone: +39/050-509876, Fax: -563513
> Piazza dei Cavalieri 7      I-56126 Pisa, Italy
> _______________________________________________
> Pw_forum mailing list
> Pw_forum at pwscf.org
> http://www.democritos.it/mailman/listinfo/pw_forum
