Question #690973 on Yade changed: https://answers.launchpad.net/yade/+question/690973
yang yi posted a new comment:
To Bruno Chareyre (bruno-chareyre): Thank you very much. I tested -j1 and -j8 on my PC, and yes, the speed is the same. I also tested the command "export OMP_NUM_THREADS=1" and got the same result as you did. Let me restate my understanding of what you mean by parallelism here: there would be 98 copies of the program, one copy per core. Is that understanding correct?

If it is, I have the following question. I understand that my program as a whole cannot run in parallel. What I would like is for the per-step particle computations inside a single run to be parallel, for example the calculation of the pressure (contact forces) between the particles. Because there is a huge number of particles, this calculation is the expensive part. If I give the run more cores, can this part be sped up?

Thank you for your second suggestion. Unfortunately, the training itself cannot be parallelized: the aim of training is to drive the parameters toward their optimal values, and the next episode must be based on the result of the current one.

Thank you very much.
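
P.S. For reference, a minimal check (plain Python, nothing Yade-specific; OMP_NUM_THREADS and -jN are the settings discussed above) of what thread budget the process actually sees:

    import os
    import multiprocessing

    # OMP_NUM_THREADS (if set) caps the OpenMP threads available to the force
    # loop; as I understand it, the -jN command-line switch sets the same limit.
    print("OMP_NUM_THREADS =", os.environ.get("OMP_NUM_THREADS", "<not set>"))
    print("CPU cores visible to this process:", multiprocessing.cpu_count())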
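
And a rough sketch, with hypothetical names only (not my actual training code), of why the episodes have to run one after another:

    # Each episode consumes the parameters produced by the previous one,
    # so episode N+1 cannot start before episode N has finished.

    def run_episode(params):
        # Stand-in for one simulated episode; returns some feedback signal.
        return [p * 0.1 for p in params]

    def update(params, feedback, lr=0.5):
        # Stand-in for the parameter update performed after each episode.
        return [p - lr * f for p, f in zip(params, feedback)]

    params = [1.0, -2.0, 0.5]              # hypothetical initial parameters
    for episode in range(10):
        feedback = run_episode(params)     # depends on the current parameters
        params = update(params, feedback)  # the next episode must wait for this
    print("final parameters:", params)

Since each update has to finish before the next episode can begin, only the per-episode particle computation could benefit from extra cores.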

