Dear all,

In our application we observe that memory consumption depends strongly on the number of OS threads supplied. Two otherwise identical runs with "--hpx:threads 4" and "--hpx:threads 8" consume 40 GB and above 90 GB of RAM respectively (reproducible on different hardware). It is possible that the additional threads alter the execution order and trigger a race condition in our code that leaks enormous amounts of memory. But is there a possible explanation for this memory consumption within HPX itself?
Regarding the application: from a simple for loop we schedule a large number of tasks (approximately 16 million) that have no dependencies on each other. Some of them may schedule a continuation task when they finish, resulting in approximately 30 million tasks in total. Most data should be held on the heap; the required stack size of a single task should not exceed a few hundred bytes. We do not supply a configuration file or any command line parameters beyond --hpx:threads. The application runs on a single node, without remote calls of any kind. A rough sketch of the scheduling loop is appended at the end of this mail.

Thank you for your time.

Kilian Werner
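Appendix: a minimal sketch of the scheduling pattern described above, assuming plain hpx::async launches with an optional future::then continuation and the classic hpx_main/hpx::init setup. The work functions (do_work, do_followup), the 50% continuation ratio, and the exact header paths are placeholders rather than our actual code and may need adjusting for the HPX version in use.

// Sketch only, not the real application: ~16 million independent tasks
// launched from a plain for loop, some attaching one continuation.
#include <hpx/hpx_init.hpp>
#include <hpx/include/async.hpp>
#include <hpx/include/lcos.hpp>

#include <cstddef>
#include <utility>
#include <vector>

// Placeholder payloads; the real work is application specific and keeps
// its data on the heap.
int do_work(std::size_t i) { return static_cast<int>(i % 7); }
int do_followup(int partial) { return partial + 1; }

int hpx_main(int argc, char* argv[])
{
    constexpr std::size_t num_tasks = 16000000;    // ~16 million tasks

    std::vector<hpx::future<int>> results;
    results.reserve(num_tasks);

    for (std::size_t i = 0; i != num_tasks; ++i)
    {
        hpx::future<int> f = hpx::async(do_work, i);

        // Some tasks attach one continuation, giving ~30 million tasks in
        // total (the 50% ratio here is an arbitrary placeholder).
        if (i % 2 == 0)
        {
            f = std::move(f).then([](hpx::future<int> prev) {
                return do_followup(prev.get());
            });
        }
        results.push_back(std::move(f));
    }

    hpx::wait_all(results);
    return hpx::finalize();
}

int main(int argc, char* argv[])
{
    // Only --hpx:threads is passed on the command line, as described above.
    return hpx::init(argc, argv);
}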
