Aha, then your problem is that the way readers handle parallel data has changed, 
and you didn't set CAN_HANDLE_PIECES (or whatever the new name is).

[pause]

outInfo->Set(CAN_HANDLE_PIECE_REQUEST(), 1);

is the new way. If you don't set this, then your reader only gets created on 
rank 0.
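
For reference, a minimal sketch of where that key is set in a VTK 6.2-era reader. The class name `vtkMyReader` is illustrative, and depending on the VTK version the key may need to be qualified as `vtkAlgorithm::CAN_HANDLE_PIECE_REQUEST()`; this is a sketch, not the exact plugin code:

```cpp
#include "vtkInformation.h"
#include "vtkInformationVector.h"
#include "vtkAlgorithm.h"

// In the reader's RequestInformation (not RequestData): advertise that this
// reader can satisfy piece requests, so the executive invokes it on every rank.
int vtkMyReader::RequestInformation(vtkInformation* vtkNotUsed(request),
                                    vtkInformationVector** vtkNotUsed(inputVector),
                                    vtkInformationVector* outputVector)
{
  vtkInformation* outInfo = outputVector->GetInformationObject(0);
  outInfo->Set(vtkAlgorithm::CAN_HANDLE_PIECE_REQUEST(), 1);
  return 1;
}
```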


JB

-----Original Message-----
From: Schlottke, Michael [mailto:[email protected]] 
Sent: 14 April 2015 09:09
To: Biddiscombe, John A.
Cc: Utkarsh Ayachit; ParaView
Subject: Re: [Paraview] MPI-aware reader plugin only has rank 0

> are you sure you don't mean that only printf/std::cout from rank 0 is 
> visible?
I also thought that it might be a visibility issue, thus I opened a file with 
std::ofstream on each rank with the rank id encoded in the filename. Only one 
file ever gets created, though, and it is the one with “0” in the name.
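
That per-rank marker-file trick can be as small as the following (the helper name is hypothetical; the rank would come from MPI_Comm_rank or vtkMultiProcessController):

```cpp
#include <fstream>
#include <string>

// Debug aid (hypothetical helper): drop one marker file per rank so you can
// see on disk which ranks actually entered the reader's RequestData.
void WriteRankMarker(int rank)
{
    std::ofstream("reader_rank_" + std::to_string(rank) + ".txt") << "alive\n";
}
```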

> but in actual fact the other pvservers are fine. Create a sphere and check if 
> it has N pieces.
I did that and visualized it by vtkProcessId. The number of ids indeed matches 
the number of ranks, so I guess nothing fundamental is wrong with the MPI use 
within ParaView. I just can’t fathom why the reader plugin does not run in 
parallel. Just to be sure, I added a call 

MPI_Barrier(MPI_COMM_WORLD);

in RequestData and indeed, ParaView gets stuck there, as apparently the 
collective call is never issued from any rank != 0.
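
Once piece requests are enabled and every rank's RequestData is invoked, a reader typically uses the requested piece number and piece count to read only its share of the file. A standalone sketch of one common block-partitioning scheme (plain C++, no VTK; the helper name is hypothetical):

```cpp
#include <utility>

// Hypothetical helper: given this rank's piece number, the total number of
// pieces, and the number of blocks in the file, return the half-open range
// [first, last) of block indices this piece should read. Blocks are split as
// evenly as possible; the first (numBlocks % numPieces) pieces get one extra.
// e.g. 10 blocks over 4 pieces -> [0,3), [3,6), [6,8), [8,10)
std::pair<int, int> BlockRangeForPiece(int piece, int numPieces, int numBlocks)
{
    int base = numBlocks / numPieces;
    int extra = numBlocks % numPieces;
    int first = piece * base + (piece < extra ? piece : extra);
    int count = base + (piece < extra ? 1 : 0);
    return {first, first + count};
}
```

Pieces beyond the number of blocks simply get an empty range, which is the usual way to keep the collective structure of the pipeline intact on every rank.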

Michael
