Hello everybody,

I am Francesco Zappa, a PhD student at the University of Jena. I am using a 
Cactus-based code and running simulations on the Hawk cluster. Unfortunately, 
Hawk's recent policy imposes severe restrictions on the number of files that 
can be produced at the same time. The problem is that running several MPI 
processes generates many HDF5 output files, and I would like to have them 
packed together somehow. I have tried the option


CarpetIOHDF5::one_file_per_proc = "yes"


which works fine for 2D data files, but it does not seem to affect the 3D data 
files (which have the form <variable>.file_<process>.h5).
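
As a stopgap, I have been thinking of packing the per-process files in 
post-processing with a small script. A rough sketch is below (it assumes 
h5py and simply copies every dataset into a single file, keeping each 
per-process piece in its own group); it only illustrates the kind of 
packing I would like, not a proper Carpet-level solution.

import glob
import h5py

def pack_3d_output(variable, outfile):
    """Pack <variable>.file_<process>.h5 into a single HDF5 file."""
    infiles = sorted(glob.glob(variable + ".file_*.h5"))
    with h5py.File(outfile, "w") as dst:
        for fname in infiles:
            # One group per input file, so the per-process pieces stay separate.
            grp = dst.create_group(fname)
            with h5py.File(fname, "r") as src:
                for name in src:
                    # Copy each top-level dataset/group verbatim.
                    src.copy(name, grp)

# e.g. pack_3d_output("rho", "rho.packed.h5")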


I am aware that there exists a patch for the Carpet thorn to have these files 
packed together somehow. Could you please help me with this issue?

Best regards,


Francesco Zappa

-------------------------------------

Friedrich-Schiller-Universität Jena
Theoretisch-Physikalisches Institut
Fröbelstieg 1, Office 219
Phone: 0049-3641-9-47133
D-07743 Jena
_______________________________________________
Users mailing list
[email protected]
http://lists.einsteintoolkit.org/mailman/listinfo/users