I am using a cluster with a queueing system to do calculations with PWscf. Because of the heavy I/O load, I have to distribute $TMP_DIR across the local disks of each node. It is therefore not practical for me to collect all the files in $TMP_DIR and then re-distribute them for a subsequent run when time_max is reached in the previous one. I guess the necessary files are the .save directory or the recover files, but the recover files are written per CPU, if I am not mistaken. Could anyone familiar with this code tell me which files are needed to continue a run? In that case I could collect only the necessary files and then re-distribute them for the subsequent run.
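For concreteness, the staging step I have in mind looks roughly like the sketch below. This is only an illustration, assuming the .save directory is what needs to be carried over; the node names, paths, and the use of rsync/ssh are placeholders for whatever my queueing system provides, not a tested recipe.

import subprocess

# Hypothetical settings -- adjust to the actual queueing system and paths.
NODES = ["node01", "node02", "node03"]      # compute nodes assigned to the restart run
TMP_DIR = "/scratch/pwscf_tmp"              # per-node local scratch ($TMP_DIR)
SAVE_DIR = TMP_DIR + "/pwscf.save"          # prefix.save directory written by pw.x (assumed prefix "pwscf")
STAGING = "/home/user/pwscf_restart"        # shared filesystem visible between runs

def collect(source_node):
    """Copy the .save directory from the node that wrote it to shared storage."""
    subprocess.run(
        ["rsync", "-a", source_node + ":" + SAVE_DIR, STAGING + "/"],
        check=True,
    )

def redistribute():
    """Push the saved data back into each node's local $TMP_DIR before restarting."""
    for node in NODES:
        subprocess.run(["ssh", node, "mkdir", "-p", TMP_DIR], check=True)
        subprocess.run(
            ["rsync", "-a", STAGING + "/pwscf.save", node + ":" + TMP_DIR + "/"],
            check=True,
        )

if __name__ == "__main__":
    collect("node01")   # node that held the master copy in the previous run (assumption)
    redistribute()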
Thanks.
Konzern
