Hello,
I'm trying to parallelize a process using pvbatch and MPI on a
MultiBlock data set, and therefore the VTK composite pipeline.
I made a sample Python script that is representative of what I have to do:
--------------------------------------------------------------------------------------------------
from paraview.simple import *

r = servermanager.sources.XMLMultiBlockDataReader()
r.FileName = "input.vtm"

# Defining a sample fake data processing
nbTs = 1000
ts = {}
for tIndex in range(0, nbTs):
    ts[tIndex] = servermanager.filters.Transform()
    if tIndex == 0:
        ts[tIndex].Input = r
    else:
        ts[tIndex].Input = ts[tIndex - 1]
    ts[tIndex].Transform.Scale = [1.01, 1.01, 1.01]

w = servermanager.writers.XMLMultiBlockDataWriter()
w.Input = ts[nbTs - 1]
w.FileName = "output.vtm"

w.UpdatePipeline()
--------------------------------------------------------------------------------------------------
I launch it with "mpeg -np 4 pvbatch myscript.py".
Everything runs fine, but it takes longer with MPI than with a plain
"pvbatch myscript.py".
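The timing I compared is the wall time of the final update, measured
with something like this sketch (reusing the writer "w" from the script
above):
--------------------------------------------------------------------------------------------------
import time

t0 = time.time()
w.UpdatePipeline()  # the final update of the pipeline, as in the script
print("UpdatePipeline took %.1f s" % (time.time() - t0))
--------------------------------------------------------------------------------------------------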
By monitoring RAM, I noticed that the data seems to be loaded once per
MPI process, and that (maybe) all the MPI processes do exactly the same
job, computing the whole dataset four times.
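To see what each rank is doing, something like this sketch should print
the rank id on every process (I'm assuming vtkProcessModule is reachable
through servermanager, as below):
--------------------------------------------------------------------------------------------------
from paraview import servermanager

# Ask the process module which MPI partition (rank) this process is.
pm = servermanager.vtkProcessModule.GetProcessModule()
rank = pm.GetPartitionId()
nranks = pm.GetNumberOfLocalPartitions()
print("This is rank %d of %d" % (rank, nranks))
--------------------------------------------------------------------------------------------------
With "mpiexec -np 4 pvbatch" I would expect four distinct rank ids; each
rank loading the full "input.vtm" would match the RAM usage I observe.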
Why aren't the blocks of my MultiBlock data set distributed across the
MPI processes?
What am I doing wrong?
Many thanks for any help,
Yves