Hello,
I have an unstructured mesh with 2 million points in a single HDF5/XDMF file that I am trying to visualize on our cluster. The goal is to perform remote parallel rendering on our cluster nodes (16 processors and 64 GB of memory per node) using a client-server model.

Currently I load this file on a single node with 16 processes, apply the D3 filter to it, and then apply a Clip filter. The dataset contains time-steps, and when I animate it over time, the memory usage on the remote compute node increases with each time-step, eventually causing 'pvserver' to crash.

However, if I instead partition the data with D3 and write it out as '*.pvtu' files, then restart ParaView, read the '*.pvtu' files back in, and run the same rendering tests, performance scales well and memory usage stays low.

Is this something to do with the XDMF reader? Is the D3 filter not being applied correctly?

Thanks,
Srijith Rajamohan
Computational Scientist
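For reference, a minimal pvpython sketch of the workaround described above (partition once with D3, write the result out as '*.pvtu', then work from the partitioned files in a fresh session). The file names here are placeholders, not the actual dataset:

```python
# Hypothetical pvpython sketch of the D3 -> .pvtu workaround.
# Run with: mpiexec -n 16 pvbatch this_script.py
# 'mesh.xmf' and 'mesh_partitioned.pvtu' are placeholder names.
from paraview.simple import XDMFReader, D3, Clip, SaveData

# Read the original HDF5/XDMF dataset
reader = XDMFReader(FileNames=['mesh.xmf'])

# Redistribute the unstructured grid across the 16 pvserver ranks
partitioned = D3(Input=reader)

# Write the partitioned data as parallel unstructured grid files
SaveData('mesh_partitioned.pvtu', proxy=partitioned)

# In a later session, read the .pvtu files directly and clip them,
# skipping the XDMF reader and D3 entirely:
#   pvtu = OpenDataFile('mesh_partitioned.pvtu')
#   clipped = Clip(Input=pvtu)
```

This is only a sketch of the workflow in the post, not a fix for the growing memory usage itself.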
