Hi Olaf,
I am having some difficulty reproducing this issue. It seems to scale fine
for me. I have some suggestions for improving workflow and performance:
1. You can simply use pvbatch for this if you remove the Connect() command
from the script. I was running it as:
mpiexec -n 2 ~/Work/ParaView
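Spelled out, the suggested workflow might look like the following. The binary path and script name here are placeholders, not the actual ones from this thread:

```shell
# Hypothetical invocation: run the script directly under MPI with pvbatch
# instead of connecting to a separately started server.
# 'script.py' stands in for your script, with the Connect() call removed
# (pvbatch already executes inside the parallel server processes).
mpiexec -n 2 pvbatch script.py
```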
Hmmm. These results are surprising to me. Is there a difference between the
1 process run and the 2 processes run? Is one compiled to use Mesa and the
other one accelerated OpenGL? Rendering should not jump like that. Also,
how many cores are on this system? I am surprised that processing goes up
w
Dear Berk,
I tried your suggestion and coloured the result with the
ProcessIdScalars Filter. I can see the partitioning and it also makes
sense, so there should not be any major load imbalance. I added some
timing information to the code and attach the changed program. It is evident
that the data in
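As an aside, the kind of per-stage timing instrumentation mentioned above can be done with plain Python. This is only a sketch with a hypothetical helper name, not the actual code from the attachment:

```python
import time

def timed(label, fn, *args, **kwargs):
    """Run fn, print how long it took, and return its result."""
    start = time.perf_counter()
    result = fn(*args, **kwargs)
    elapsed = time.perf_counter() - start
    print(f"{label}: {elapsed:.3f} s")
    return result

# Example: time a stand-in workload on each process.
total = timed("sum", sum, range(1_000_000))
```

Printing the label together with the elapsed time from every rank makes it easy to compare the same stage across processes and spot where the imbalance, if any, appears.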
Hi Olaf,
From your previous message, I am assuming that you are using vtr files. In
this case, the processing should scale. If you can make some example files
available, I can verify this. Feel free to e-mail them to me directly or I
can download them somewhere if they are too big. The two potentia
Dear Paraview developers and users,
I tried to run paraview in parallel using a python script. I compiled a
server including OpenMPI support and support for MESA off-screen
rendering, and started the server using mpirun. Then I connected from a
Python script (see attachment). I could see that there
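For readers following along, the server-side half of the setup described above might be started roughly like this. The process count, hostname, and the offscreen flag are assumptions about the configuration, not taken from the attachment:

```shell
# Hypothetical command line: start a 2-process parallel server with
# Mesa offscreen rendering enabled.
mpirun -np 2 pvserver --use-offscreen-rendering
# The Python script then connects to it via paraview.simple's
# Connect("<hostname>") before building the pipeline.
```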