Re: [Paraview] capability of ParaView, Catalyst in distributed computing environment ...

2016-08-04 Thread u . utku . turuncoglu
Hi Andy,

Thanks for your help. You can find the Python script attached. BTW, the
"coprocessorinitializewithpython" method is called only by the subset of
processors that are supposed to do the co-processing, so I think the usage
of vtkCPPythonScriptPipeline is correct in this case. It is weird, but it
works under Mac OS, and the allinputsgridwriter.py script works on both
Linux and Mac OS.

extern "C" void coprocessorinitializewithpython_(int *fcomm, const char*
pythonScriptName, const char strarr[][255], int *size) {
  if (pythonScriptName != NULL) {
if (!g_coprocessor) {
  g_coprocessor = vtkCPProcessor::New();
  MPI_Comm handle = MPI_Comm_f2c(*fcomm);
  vtkMPICommunicatorOpaqueComm *Comm = new
vtkMPICommunicatorOpaqueComm();
  g_coprocessor->Initialize(*Comm);
  vtkSmartPointer pipeline =
vtkSmartPointer::New();
  pipeline->Initialize(pythonScriptName);
  g_coprocessor->AddPipeline(pipeline);
  //pipeline->FastDelete();
}

if (!g_coprocessorData) {
  g_coprocessorData = vtkCPDataDescription::New();
  // must be input port for all model components and for all dimensions
  for (int i = 0; i < *size; i++) {
g_coprocessorData->AddInput(strarr[i]);
std::cout << "adding input port [" << i << "] = " << strarr[i] <<
std::endl;
  }
}
  }
}

Regards,

--ufuk

> Can you share your Python script? Another thought is that your Python
> script was added to each process instead of the subset of processes that
> are supposed to do the calculation on it. For example, the Python script
> that is supposed to generate the image should only be added through a
> vtkCPPythonScriptPipeline on those 8 processes.
>
> On Thu, Aug 4, 2016 at 2:48 AM, Ufuk Utku Turuncoglu (BE) <
> u.utku.turunco...@be.itu.edu.tr> wrote:
>
>> Hi,
>>
>> After getting help from the list, i finished the initial implementation
>> of
>> the code but in this case i have a strange experience with Catalyst. The
>> prototype code is working with allinputsgridwriter.py script and could
>> write multi-piece dataset in VTK format without any problem. In this
>> case,
>> the code also handles four different input ports to get data in
>> different
>> grid structure and dimensions (2d/3d).
>>
>> The main problem is that if i try to use the same code to output a png
>> file after creating iso-surface from single 3d field (141x115x14 =
>> 227K),
>> it is hanging. In this case, if i check the utilization of the
>> processors
>> (on Linux, Centos 7.1,
>>
>> 12064 turuncu   20   0 1232644 216400  77388 R 100.0  0.7  10:44.17 main.x
>> 12068 turuncu   20   0 1672156 483712  70420 R 100.0  1.5 10:44.17 main.x
>> 12069 turuncu   20   0 1660620 266716  70500 R 100.0  0.8 10:44.26 main.x
>> 12070 turuncu   20   0 1660412 267204  71204 R 100.0  0.8 10:44.22 main.x
>> 12071 turuncu   20   0 1659988 266644  71360 R 100.0  0.8 10:44.18 main.x
>> 12065 turuncu   20   0 1220328 202224  77620 R  99.7  0.6 10:44.08 main.x
>> 12066 turuncu   20   0 1220236 204696  77444 R  99.7  0.6 10:44.16 main.x
>> 12067 turuncu   20   0 1219644 199116  77152 R  99.7  0.6 10:44.18 main.x
>> 12078 turuncu   20   0 1704272 286924 102940 S  10.6  0.9 1:12.91 main.x
>> 12074 turuncu   20   0 1704488 287668 103456 S  10.0  0.9 1:08.50 main.x
>> 12072 turuncu   20   0 170 287488 103316 S   9.6  0.9 1:09.09 main.x
>> 12076 turuncu   20   0 1704648 287268 102848 S   9.6  0.9 1:10.16 main.x
>> 12073 turuncu   20   0 1704132 284128 103384 S   9.3  0.9 1:05.27 main.x
>> 12077 turuncu   20   0 1706236 286228 103380 S   9.3  0.9 1:05.49 main.x
>> 12079 turuncu   20   0 1699944 278800 102864 S   9.3  0.9 1:05.87 main.x
>> 12075 turuncu   20   0 1704356 284408 103436 S   8.6  0.9 1:07.03 main.x
>>
>> they seems normal because the co-processing component only works on a
>> subset of the resource (8 processor, has utilization around 99 percent).
>> The GPU utilization (from nvidia-smi command) is
>>
>> +------------------------------------------------------------------------------+
>> | NVIDIA-SMI 352.79                    Driver Version: 352.79                   |
>> |-------------------------------+----------------------+----------------------+
>> | GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
>> | Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
>> |===============================+======================+======================|
>> |   0  Quadro K5200        Off  | :42:00.0          On |                  Off |
>> | 26%   42C    P8    14W / 150W |    227MiB /  8191MiB |      0%      Default |
>> +-------------------------------+----------------------+----------------------+
>>
>> +------------------------------------------------------------------------------+
>> | Processes:                                                        GPU Memory |
>> |  GPU       PID  Type  Process name                                     Usage |
>> |==============================================================================|
>> |    0      1937     G  /usr/bin/Xorg                                    81MiB |
>> |    0      3817     G  /usr/bin/gnome-shell                            110MiB |
>> |    0      9551     G  paraview                                         16MiB |
>> +------------------------------------------------------------------------------+

Re: [Paraview] capability of ParaView, Catalyst in distributed computing environment ...

2016-08-04 Thread Andy Bauer
Can you share your Python script? Another thought is that your Python
script was added to each process instead of the subset of processes that
are supposed to do the calculation on it. For example, the Python script
that is supposed to generate the image should only be added through a
vtkCPPythonScriptPipeline on those 8 processes.
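
As a rough, untested sketch of that idea: assuming the adaptor already has a
sub-communicator (called "coprocComm" here, a placeholder name) that is
MPI_COMM_NULL on every rank except the 8 co-processing ranks, the pipeline
would only be created and added on those ranks:

#include <mpi.h>
#include <vtkCPProcessor.h>
#include <vtkCPPythonScriptPipeline.h>
#include <vtkSmartPointer.h>

// Sketch only: add the image-generating Python pipeline on the
// co-processing ranks and do nothing everywhere else.
void AddImagePipeline(vtkCPProcessor* processor, MPI_Comm coprocComm,
                      const char* scriptName)
{
  if (coprocComm == MPI_COMM_NULL)
    {
    return; // this rank is not part of the co-processing group
    }
  vtkSmartPointer<vtkCPPythonScriptPipeline> pipeline =
    vtkSmartPointer<vtkCPPythonScriptPipeline>::New();
  pipeline->Initialize(scriptName);  // path to the Catalyst Python script
  processor->AddPipeline(pipeline);
}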

On Thu, Aug 4, 2016 at 2:48 AM, Ufuk Utku Turuncoglu (BE) <
u.utku.turunco...@be.itu.edu.tr> wrote:

> Hi,
>
> After getting help from the list, i finished the initial implementation of
> the code but in this case i have a strange experience with Catalyst. The
> prototype code is working with allinputsgridwriter.py script and could
> write multi-piece dataset in VTK format without any problem. In this case,
> the code also handles four different input ports to get data in different
> grid structure and dimensions (2d/3d).
>
> The main problem is that if i try to use the same code to output a png
> file after creating iso-surface from single 3d field (141x115x14 = 227K),
> it is hanging. In this case, if i check the utilization of the processors
> (on Linux, Centos 7.1,
>
> 12064 turuncu   20   0 1232644 216400  77388 R 100.0  0.7  10:44.17 main.x
> 12068 turuncu   20   0 1672156 483712  70420 R 100.0  1.5 10:44.17 main.x
> 12069 turuncu   20   0 1660620 266716  70500 R 100.0  0.8 10:44.26 main.x
> 12070 turuncu   20   0 1660412 267204  71204 R 100.0  0.8 10:44.22 main.x
> 12071 turuncu   20   0 1659988 266644  71360 R 100.0  0.8 10:44.18 main.x
> 12065 turuncu   20   0 1220328 202224  77620 R  99.7  0.6 10:44.08 main.x
> 12066 turuncu   20   0 1220236 204696  77444 R  99.7  0.6 10:44.16 main.x
> 12067 turuncu   20   0 1219644 199116  77152 R  99.7  0.6 10:44.18 main.x
> 12078 turuncu   20   0 1704272 286924 102940 S  10.6  0.9 1:12.91 main.x
> 12074 turuncu   20   0 1704488 287668 103456 S  10.0  0.9 1:08.50 main.x
> 12072 turuncu   20   0 170 287488 103316 S   9.6  0.9 1:09.09 main.x
> 12076 turuncu   20   0 1704648 287268 102848 S   9.6  0.9 1:10.16 main.x
> 12073 turuncu   20   0 1704132 284128 103384 S   9.3  0.9 1:05.27 main.x
> 12077 turuncu   20   0 1706236 286228 103380 S   9.3  0.9 1:05.49 main.x
> 12079 turuncu   20   0 1699944 278800 102864 S   9.3  0.9 1:05.87 main.x
> 12075 turuncu   20   0 1704356 284408 103436 S   8.6  0.9 1:07.03 main.x
>
> they seems normal because the co-processing component only works on a
> subset of the resource (8 processor, has utilization around 99 percent).
> The GPU utilization (from nvidia-smi command) is
>
> +------------------------------------------------------------------------------+
> | NVIDIA-SMI 352.79                    Driver Version: 352.79                   |
> |-------------------------------+----------------------+----------------------+
> | GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
> | Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
> |===============================+======================+======================|
> |   0  Quadro K5200        Off  | :42:00.0          On |                  Off |
> | 26%   42C    P8    14W / 150W |    227MiB /  8191MiB |      0%      Default |
> +-------------------------------+----------------------+----------------------+
>
> +------------------------------------------------------------------------------+
> | Processes:                                                        GPU Memory |
> |  GPU       PID  Type  Process name                                     Usage |
> |==============================================================================|
> |    0      1937     G  /usr/bin/Xorg                                    81MiB |
> |    0      3817     G  /usr/bin/gnome-shell                            110MiB |
> |    0      9551     G  paraview                                         16MiB |
> +------------------------------------------------------------------------------+
>
> So, the GPU is not overloaded in this case. I tested the code with two
> different version of ParaView (5.0.0 and 5.1.0). The results are same for
> both cases even if i create the co-processing Python scripts with same
> version of the ParaView that is used to compile the code. I also tried to
> use 2d field (141x115) but the result is also same and the code is still
> hanging. The different machine (MacOS+ParaView 5.0.0) works without
> problem. There might be a issue of Linux or installation but i am not sure
> and it was working before. Is there any flag or tool that allows to analyze
> Paraview deeply to find the source of the problem.
>
> Regards,
>
> --ufuk
>
>
>
>
>
> ___
> Powered by www.kitware.com
>
> Visit other Kitware open-source projects at http://www.kitware.com/opensou
> rce/opensource.html
>
> Please keep messages on-topic and check the ParaView Wiki at:
> http://paraview.org/Wiki/ParaView
>
> Search the list archives at: http://markmail.org/search/?q=ParaView
>
> Follow this link to subscribe/unsubscribe:
> http://public.kitware.com/mailman/listinfo/paraview
>
___
Powered by www.kitware.com

Visit other Kitware open-source projects at 

Re: [Paraview] capability of ParaView, Catalyst in distributed computing environment ...

2016-08-04 Thread Ufuk Utku Turuncoglu (BE)

Hi,

After getting help from the list, I finished the initial implementation 
of the code, but now I am having a strange experience with Catalyst. 
The prototype code works with the allinputsgridwriter.py script and 
can write multi-piece datasets in VTK format without any problem. It 
also handles four different input ports to receive data with different 
grid structures and dimensions (2d/3d).


The main problem is that if I try to use the same code to output a PNG 
file after creating an iso-surface from a single 3d field (141x115x14 = 
227K points), it hangs. If I check the utilization of the processors 
(on Linux, CentOS 7.1), I see:


12064 turuncu   20   0 1232644 216400  77388 R 100.0  0.7  10:44.17 main.x
12068 turuncu   20   0 1672156 483712  70420 R 100.0  1.5 10:44.17 main.x
12069 turuncu   20   0 1660620 266716  70500 R 100.0  0.8 10:44.26 main.x
12070 turuncu   20   0 1660412 267204  71204 R 100.0  0.8 10:44.22 main.x
12071 turuncu   20   0 1659988 266644  71360 R 100.0  0.8 10:44.18 main.x
12065 turuncu   20   0 1220328 202224  77620 R  99.7  0.6 10:44.08 main.x
12066 turuncu   20   0 1220236 204696  77444 R  99.7  0.6 10:44.16 main.x
12067 turuncu   20   0 1219644 199116  77152 R  99.7  0.6 10:44.18 main.x
12078 turuncu   20   0 1704272 286924 102940 S  10.6  0.9 1:12.91 main.x
12074 turuncu   20   0 1704488 287668 103456 S  10.0  0.9 1:08.50 main.x
12072 turuncu   20   0 170 287488 103316 S   9.6  0.9 1:09.09 main.x
12076 turuncu   20   0 1704648 287268 102848 S   9.6  0.9 1:10.16 main.x
12073 turuncu   20   0 1704132 284128 103384 S   9.3  0.9 1:05.27 main.x
12077 turuncu   20   0 1706236 286228 103380 S   9.3  0.9 1:05.49 main.x
12079 turuncu   20   0 1699944 278800 102864 S   9.3  0.9 1:05.87 main.x
12075 turuncu   20   0 1704356 284408 103436 S   8.6  0.9 1:07.03 main.x

this looks normal, because the co-processing component only works on a 
subset of the resources (8 processors, each at around 99 percent 
utilization). The GPU utilization (from the nvidia-smi command) is:


+------------------------------------------------------------------------------+
| NVIDIA-SMI 352.79                    Driver Version: 352.79                   |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  Quadro K5200        Off  | :42:00.0          On |                  Off |
| 26%   42C    P8    14W / 150W |    227MiB /  8191MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+

+------------------------------------------------------------------------------+
| Processes:                                                        GPU Memory |
|  GPU       PID  Type  Process name                                     Usage |
|==============================================================================|
|    0      1937     G  /usr/bin/Xorg                                    81MiB |
|    0      3817     G  /usr/bin/gnome-shell                            110MiB |
|    0      9551     G  paraview                                         16MiB |
+------------------------------------------------------------------------------+

So, the GPU is not overloaded in this case. I tested the code with two 
different versions of ParaView (5.0.0 and 5.1.0). The results are the same 
for both, even if I create the co-processing Python scripts with the same 
version of ParaView that is used to compile the code. I also tried a 2d 
field (141x115), but the result is the same and the code still hangs. A 
different machine (Mac OS + ParaView 5.0.0) works without problems. There 
might be an issue with the Linux machine or its installation, but I am not 
sure, and it was working before. Is there any flag or tool that would allow 
me to analyze ParaView more deeply to find the source of the problem?


Regards,

--ufuk




___
Powered by www.kitware.com

Visit other Kitware open-source projects at 
http://www.kitware.com/opensource/opensource.html

Please keep messages on-topic and check the ParaView Wiki at: 
http://paraview.org/Wiki/ParaView

Search the list archives at: http://markmail.org/search/?q=ParaView

Follow this link to subscribe/unsubscribe:
http://public.kitware.com/mailman/listinfo/paraview


Re: [Paraview] capability of ParaView, Catalyst in distributed computing environment ...

2016-06-20 Thread Andy Bauer
Hi,

Glad to hear that this is working for you and thanks for sharing how you
did it! This is definitely a corner case that few Catalyst users/developers
will ever care about so I'm glad that the Catalyst API is flexible enough
to handle this.

Best,
Andy

On Sun, Jun 19, 2016 at 4:07 AM,  wrote:

> Hi Andy,
>
> I used first approach and fix the issue using following customised
> coprocessorinitializewithpython function. In this case, i converted type
> of communicator coming from Fortran to C using MPI_Comm_f2c call. Now, the
> code works without any problem. Thanks for your kindly help.
>
> extern "C" void my_coprocessorinitializewithpython_(int *fcomm, const
> char* pythonScriptName, const char strarr[][255], int *size) {
>   if (pythonScriptName != NULL) {
> if (!g_coprocessor) {
>   g_coprocessor = vtkCPProcessor::New();
>   MPI_Comm handle = MPI_Comm_f2c(*fcomm);
>   vtkMPICommunicatorOpaqueComm *Comm = new
> vtkMPICommunicatorOpaqueComm(&handle);
>   g_coprocessor->Initialize(*Comm);
>   vtkSmartPointer<vtkCPPythonScriptPipeline> pipeline =
> vtkSmartPointer<vtkCPPythonScriptPipeline>::New();
>   pipeline->Initialize(pythonScriptName);
>   g_coprocessor->AddPipeline(pipeline);
>   //pipeline->FastDelete();
> }
>
> if (!g_coprocessorData) {
>   g_coprocessorData = vtkCPDataDescription::New();
>   // must be input port for all model components and for all dimensions
>   for (int i = 0; i < *size; i++) {
> g_coprocessorData->AddInput(strarr[i]);
> std::cout << "adding input port [" << i << "] = " << strarr[i] <<
> std::endl;
>   }
> }
>   }
> }
>
> Regards,
>
> --ufuk
>
> > Hi Ufuk,
> >
> > I can think of two potential fixes:
> >
> >- Use the vtkCPProcessor:: Initialize(vtkMPICommunicatorOpaqueComm&
> >comm) method to initialize each process with the proper MPI
> > communicator.
> >Note that vtkMPICommunicatorOpaqueComm is defined in
> >vtkMPICommunicator.cxx. A similar example to this is available in the
> >Examples/Catalyst/MPISubCommunicatorExample directory.
> >- Call vtkCPProcessor::Initialize() on all processes with your global
> >communicator and then create a vtkMPIController partitioned the way
> you
> >want and set that to be the "global" communicator through
> >vtkMPIController::SetGlobalController().
> >
> > Please let us know if either of these methods work for you.
> >
> > Also, what code are you working on and is it a publicly available code?
> If
> > you show your implementation I may have some in-depth suggestions for
> > improvements.
> >
> > Best,
> >
> > Andy
> >
> >
> >
> > On Fri, Jun 17, 2016 at 4:17 AM, 
> wrote:
> >
> >> Hi All,
> >>
> >> I was working on the issue recently and i am very close to having
> >> prototype code but i had some difficulties in initialization of the
> >> co-processing component with coprocessorinitializewithpython call. In my
> >> case, two model components and adaptor have its own processing source
> >> (or
> >> MPI_COMM_WORLD). For example, MPI processor 0, 1, 2, 3 are used by 1st
> >> model, 4, 5, 6, 7 are used by 2nd model code and 8, 9, 10, 11 is used by
> >> adaptor. The code basically handles transferring the grid information
> >> and
> >> data to adaptor. So, the problem is that if i try to call my custom
> >> coprocessorinitializewithpython call in adaptor (only in 8, 9, 10, 11)
> >> then it hangs in g_coprocessor->Initialize(); (see code at the end of
> >> the
> >> mail) step but if i call coprocessorinitializewithpython in the main
> >> code
> >> that uses all the available processor (between 0 and 11) and it runs
> >> without any problem. It seems that there is a restriction in the
> >> ParaView
> >> side (expecially vtkCPProcessor::Initialize() that can be found in
> >> CoProcessing/Catalyst/vtkCPProcessor.cxx) but i am not sure. Do you have
> >> any suggestion about that? Do you think that is it possible to fix it
> >> easily. Of corse the adaptor code could use all the processor but it is
> >> better to have its own dedicated resource that might have GPU support in
> >> those specific servers or processors. I am relatively new to VTK and it
> >> might be difficult for me to fix it and i need your guidance to start.
> >>
> >> Best Regards,
> >>
> >> --ufuk
> >>
> >> vtkCPProcessor* g_coprocessor;
> >>
> >> extern "C" void my_coprocessorinitializewithpython_(const char*
> >> pythonScriptName, const char strarr[][255], int *size) {
> >>   if (pythonScriptName != NULL) {
> >> if (!g_coprocessor) {
> >>   g_coprocessor = vtkCPProcessor::New();
> >>   g_coprocessor->Initialize();
> >>   vtkSmartPointer<vtkCPPythonScriptPipeline> pipeline =
> >> vtkSmartPointer<vtkCPPythonScriptPipeline>::New();
> >>   pipeline->Initialize(pythonScriptName);
> >>   g_coprocessor->AddPipeline(pipeline);
> >>   //pipeline->FastDelete();
> >> }
> >>
> >> if (!g_coprocessorData) {
> >>   g_coprocessorData = vtkCPDataDescription::New();
> >>   // must be 

Re: [Paraview] capability of ParaView, Catalyst in distributed computing environment ...

2016-06-19 Thread u . utku . turuncoglu
Hi Andy,

I used the first approach and fixed the issue with the following customised
coprocessorinitializewithpython function. In this case, I converted the
communicator handle coming from Fortran to its C type using an MPI_Comm_f2c
call. Now the code works without any problem. Thanks for your kind help.

extern "C" void my_coprocessorinitializewithpython_(int *fcomm, const
char* pythonScriptName, const char strarr[][255], int *size) {
  if (pythonScriptName != NULL) {
if (!g_coprocessor) {
  g_coprocessor = vtkCPProcessor::New();
  MPI_Comm handle = MPI_Comm_f2c(*fcomm);
  vtkMPICommunicatorOpaqueComm *Comm = new
vtkMPICommunicatorOpaqueComm();
  g_coprocessor->Initialize(*Comm);
  vtkSmartPointer pipeline =
vtkSmartPointer::New();
  pipeline->Initialize(pythonScriptName);
  g_coprocessor->AddPipeline(pipeline);
  //pipeline->FastDelete();
}

if (!g_coprocessorData) {
  g_coprocessorData = vtkCPDataDescription::New();
  // must be input port for all model components and for all dimensions
  for (int i = 0; i < *size; i++) {
g_coprocessorData->AddInput(strarr[i]);
std::cout << "adding input port [" << i << "] = " << strarr[i] <<
std::endl;
  }
}
  }
}
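
For context, a rough sketch (not from the original mail) of the
per-time-step calls that usually pair with this initialization; the
function names here are placeholders, and only the vtkCPProcessor /
vtkCPDataDescription calls are the actual Catalyst API:

#include <vtkCPDataDescription.h>
#include <vtkCPInputDataDescription.h>
#include <vtkCPProcessor.h>
#include <vtkDataObject.h>

extern vtkCPProcessor* g_coprocessor;
extern vtkCPDataDescription* g_coprocessorData;

// Ask Catalyst whether any pipeline wants data at this time step.
int RequestCoProcessing(double time, int timeStep)
{
  g_coprocessorData->SetTimeData(time, timeStep);
  return g_coprocessor->RequestDataDescription(g_coprocessorData);
}

// Hand a grid (built elsewhere in the adaptor) to one of the named input
// ports and run the pipelines.
void RunCoProcessing(const char* inputName, vtkDataObject* grid)
{
  vtkCPInputDataDescription* idd =
    g_coprocessorData->GetInputDescriptionByName(inputName);
  if (idd != NULL && idd->GetIfGridIsNecessary())
    {
    idd->SetGrid(grid);
    }
  g_coprocessor->CoProcess(g_coprocessorData);
}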

Regards,

--ufuk

> Hi Ufuk,
>
> I can think of two potential fixes:
>
>- Use the vtkCPProcessor:: Initialize(vtkMPICommunicatorOpaqueComm&
>comm) method to initialize each process with the proper MPI
> communicator.
>Note that vtkMPICommunicatorOpaqueComm is defined in
>vtkMPICommunicator.cxx. A similar example to this is available in the
>Examples/Catalyst/MPISubCommunicatorExample directory.
>- Call vtkCPProcessor::Initialize() on all processes with your global
>communicator and then create a vtkMPIController partitioned the way you
>want and set that to be the "global" communicator through
>vtkMPIController::SetGlobalController().
>
> Please let us know if either of these methods work for you.
>
> Also, what code are you working on and is it a publicly available code? If
> you show your implementation I may have some in-depth suggestions for
> improvements.
>
> Best,
>
> Andy
>
>
>
> On Fri, Jun 17, 2016 at 4:17 AM,  wrote:
>
>> Hi All,
>>
>> I was working on the issue recently and i am very close to having
>> prototype code but i had some difficulties in initialization of the
>> co-processing component with coprocessorinitializewithpython call. In my
>> case, two model components and adaptor have its own processing source
>> (or
>> MPI_COMM_WORLD). For example, MPI processor 0, 1, 2, 3 are used by 1st
>> model, 4, 5, 6, 7 are used by 2nd model code and 8, 9, 10, 11 is used by
>> adaptor. The code basically handles transferring the grid information
>> and
>> data to adaptor. So, the problem is that if i try to call my custom
>> coprocessorinitializewithpython call in adaptor (only in 8, 9, 10, 11)
>> then it hangs in g_coprocessor->Initialize(); (see code at the end of
>> the
>> mail) step but if i call coprocessorinitializewithpython in the main
>> code
>> that uses all the available processor (between 0 and 11) and it runs
>> without any problem. It seems that there is a restriction in the
>> ParaView
>> side (expecially vtkCPProcessor::Initialize() that can be found in
>> CoProcessing/Catalyst/vtkCPProcessor.cxx) but i am not sure. Do you have
>> any suggestion about that? Do you think that is it possible to fix it
>> easily. Of corse the adaptor code could use all the processor but it is
>> better to have its own dedicated resource that might have GPU support in
>> those specific servers or processors. I am relatively new to VTK and it
>> might be difficult for me to fix it and i need your guidance to start.
>>
>> Best Regards,
>>
>> --ufuk
>>
>> vtkCPProcessor* g_coprocessor;
>>
>> extern "C" void my_coprocessorinitializewithpython_(const char*
>> pythonScriptName, const char strarr[][255], int *size) {
>>   if (pythonScriptName != NULL) {
>> if (!g_coprocessor) {
>>   g_coprocessor = vtkCPProcessor::New();
>>   g_coprocessor->Initialize();
>>   vtkSmartPointer<vtkCPPythonScriptPipeline> pipeline =
>> vtkSmartPointer<vtkCPPythonScriptPipeline>::New();
>>   pipeline->Initialize(pythonScriptName);
>>   g_coprocessor->AddPipeline(pipeline);
>>   //pipeline->FastDelete();
>> }
>>
>> if (!g_coprocessorData) {
>>   g_coprocessorData = vtkCPDataDescription::New();
>>   // must be input port for all model components and for all
>> dimensions
>>   for (int i = 0; i < *size; i++) {
>> g_coprocessorData->AddInput(strarr[i]);
>> std::cout << "adding input port [" << i << "] = " << strarr[i]
>> <<
>> std::endl;
>>   }
>> }
>>   }
>> }
>>
>> ___
>> Powered by www.kitware.com
>>
>> Visit other Kitware open-source projects at
>> http://www.kitware.com/opensource/opensource.html
>>
>> Please keep messages on-topic and check the ParaView Wiki at:
>> 

Re: [Paraview] capability of ParaView, Catalyst in distributed computing environment ...

2016-06-17 Thread Andy Bauer
Hi Ufuk,

I can think of two potential fixes:

   - Use the vtkCPProcessor::Initialize(vtkMPICommunicatorOpaqueComm&
   comm) method to initialize each process with the proper MPI communicator.
   Note that vtkMPICommunicatorOpaqueComm is defined in
   vtkMPICommunicator.cxx. A similar example to this is available in the
   Examples/Catalyst/MPISubCommunicatorExample directory.
   - Call vtkCPProcessor::Initialize() on all processes with your global
   communicator and then create a vtkMPIController partitioned the way you
   want and set that to be the "global" communicator through
   vtkMPIController::SetGlobalController().
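
A rough sketch of the first option, loosely following the
MPISubCommunicatorExample (the rank range used for the split is
illustrative only):

#include <mpi.h>
#include <vtkCPProcessor.h>
#include <vtkMPI.h>  // declares vtkMPICommunicatorOpaqueComm

// Sketch: put the adaptor ranks (here assumed to be 8-11) into their own
// communicator and initialize Catalyst with it on those ranks alone.
vtkCPProcessor* InitializeCatalystOnSubset()
{
  int rank = 0;
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);
  int color = (rank >= 8 && rank <= 11) ? 1 : MPI_UNDEFINED;

  MPI_Comm subComm = MPI_COMM_NULL;
  MPI_Comm_split(MPI_COMM_WORLD, color, rank, &subComm);
  if (subComm == MPI_COMM_NULL)
    {
    return NULL; // not an adaptor rank, nothing to initialize
    }

  vtkMPICommunicatorOpaqueComm comm(&subComm);
  vtkCPProcessor* processor = vtkCPProcessor::New();
  processor->Initialize(comm);
  return processor;
}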

Please let us know if either of these methods work for you.

Also, what code are you working on and is it a publicly available code? If
you show your implementation I may have some in-depth suggestions for
improvements.

Best,

Andy



On Fri, Jun 17, 2016 at 4:17 AM,  wrote:

> Hi All,
>
> I have been working on the issue recently and I am very close to having
> prototype code, but I had some difficulties initializing the co-processing
> component with the coprocessorinitializewithpython call. In my case, the two
> model components and the adaptor each have their own processing resources
> (or MPI_COMM_WORLD). For example, MPI processors 0, 1, 2, 3 are used by the
> 1st model, 4, 5, 6, 7 are used by the 2nd model code, and 8, 9, 10, 11 are
> used by the adaptor. The code basically handles transferring the grid
> information and data to the adaptor. The problem is that if I call my custom
> coprocessorinitializewithpython in the adaptor (only on 8, 9, 10, 11), then
> it hangs in the g_coprocessor->Initialize(); step (see the code at the end
> of the mail), but if I call coprocessorinitializewithpython in the main code
> that uses all the available processors (between 0 and 11), it runs without
> any problem. It seems that there is a restriction on the ParaView side
> (especially vtkCPProcessor::Initialize(), which can be found in
> CoProcessing/Catalyst/vtkCPProcessor.cxx), but I am not sure. Do you have
> any suggestions about that? Do you think it is possible to fix it easily?
> Of course the adaptor code could use all the processors, but it is better
> for it to have its own dedicated resources, which might have GPU support on
> those specific servers or processors. I am relatively new to VTK and it
> might be difficult for me to fix this, so I need your guidance to start.
>
> Best Regards,
>
> --ufuk
>
> vtkCPProcessor* g_coprocessor;
>
> extern "C" void my_coprocessorinitializewithpython_(const char*
> pythonScriptName, const char strarr[][255], int *size) {
>   if (pythonScriptName != NULL) {
> if (!g_coprocessor) {
>   g_coprocessor = vtkCPProcessor::New();
>   g_coprocessor->Initialize();
>   vtkSmartPointer<vtkCPPythonScriptPipeline> pipeline =
> vtkSmartPointer<vtkCPPythonScriptPipeline>::New();
>   pipeline->Initialize(pythonScriptName);
>   g_coprocessor->AddPipeline(pipeline);
>   //pipeline->FastDelete();
> }
>
> if (!g_coprocessorData) {
>   g_coprocessorData = vtkCPDataDescription::New();
>   // must be input port for all model components and for all dimensions
>   for (int i = 0; i < *size; i++) {
> g_coprocessorData->AddInput(strarr[i]);
> std::cout << "adding input port [" << i << "] = " << strarr[i] <<
> std::endl;
>   }
> }
>   }
> }
>
> ___
> Powered by www.kitware.com
>
> Visit other Kitware open-source projects at
> http://www.kitware.com/opensource/opensource.html
>
> Please keep messages on-topic and check the ParaView Wiki at:
> http://paraview.org/Wiki/ParaView
>
> Search the list archives at: http://markmail.org/search/?q=ParaView
>
> Follow this link to subscribe/unsubscribe:
> http://public.kitware.com/mailman/listinfo/paraview
>
___
Powered by www.kitware.com

Visit other Kitware open-source projects at 
http://www.kitware.com/opensource/opensource.html

Please keep messages on-topic and check the ParaView Wiki at: 
http://paraview.org/Wiki/ParaView

Search the list archives at: http://markmail.org/search/?q=ParaView

Follow this link to subscribe/unsubscribe:
http://public.kitware.com/mailman/listinfo/paraview


Re: [Paraview] capability of ParaView, Catalyst in distributed computing environment ...

2016-05-22 Thread Gallagher, Timothy P
If you can control which communicator Paraview uses, you might want to look at 
using MPI_INTERCOMM_MERGE, which will take the union of the two disjoint sets 
described by an intercommunicator. So with that, you would create a new 
communicator that has all of your processors as members.
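
A bare-bones sketch of that idea, assuming the two codes already share an
intercommunicator (error handling omitted):

#include <mpi.h>

// Illustrative only: collapse an existing intercommunicator into a single
// intracommunicator spanning both codes, which Catalyst could then use.
MPI_Comm MergeCodes(MPI_Comm interComm, int thisCodeIsSecond)
{
  MPI_Comm merged = MPI_COMM_NULL;
  // "high" = 1 orders this code's ranks after the other code's ranks.
  MPI_Intercomm_merge(interComm, thisCodeIsSecond ? 1 : 0, &merged);
  return merged;
}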

Paraview should be able to use that merged communicator seamlessly because it 
is a single communicator with all of the processor members attached. Meanwhile, 
your individual codes still use their own MPI_COMM_WORLD and the 
intercommunicator when they need to share information. 

That way, the codes do not need to change. Your adapter does the work of 
dealing with the merge process and managing the merged communicator.

I haven't tested anything like this of course. Just putting the idea out there.

Tim


From: u.utku.turunco...@be.itu.edu.tr <u.utku.turunco...@be.itu.edu.tr>
Sent: Sunday, May 22, 2016 7:59 AM
To: Andy Bauer
Cc: Ufuk Utku Turuncoglu; Gallagher, Timothy P; paraview@paraview.org
Subject: Re: [Paraview] capability of ParaView,  Catalyst in distributed 
computing environment ...

Thanks for the information. Currently, i am working on two component case
and the initial results show that grid and data information belong to each
model component must be accessible by all the MPI processes (defined in
global MPI_COMM_WORLD) in adaptor side. This makes the implementation very
complex when the 2d decomposition configuration of both model components
(which run in a specific subset of processors) are considered. In this
case, it seems that the easiest way is to interpolate/ redistribute the
data of both components into common grid or creating new 2d decomposition
in adaptor. Another possibility might be to implement MPI sections
specific for each model component (basically having two distinct
MPI_COMM_WORLD inside of global one) to access grid and fields in the
adaptor side but in this case i am not sure ParaView could handle these
kind of information or not. Anyway, it seems that it is a challanging
problem and probably it would be good to have this feature. I'll keep to
continue to try different implementations to test different ideas and keep
you posted about it. In the mean time, if you have any other idea, let me
know.

Regards,

--ufuk

> It may be possible to do this with Catalyst. I would guess that nearly all
> of the complex work would need to be done in the adaptor to integrate this
> properly though.
>
> On Wed, May 18, 2016 at 11:17 AM, <u.utku.turunco...@be.itu.edu.tr> wrote:
>
>> Yes, you are right. In this case, there will be two separate
>> MPI_COMM_WORLD. Plus, one that covers all the resources (let's say that
>> global MPI_COMM_WORLD). Actually, this kind of setup is very common for
>> multi-physics applications such as fluid-structure interaction. So, is
>> it
>> possible to tight this kind of environment with Catalyst? I am not
>> expert
>> about Catalyst but it seems that there might be a problem in the
>> rendering
>> stage even underlying grids and fields are defined without any problem.
>>
>> Regards,
>>
>> --ufuk
>>
>> > I'm not sure if this is exactly what the original user is referring
>> to,
>> > but it is possible to have two separate codes communicate using MPI
>> > through the dynamic processes in MPI-2. Essentially, one program
>> starts
>> up
>> > on N processors and begins running and gets an MPI_COMM_WORLD. It then
>> > spawns another executable on M different processors and that new
>> > executable will call MPI_INIT and also get its own MPI_COMM_WORLD. So
>> you
>> > have two, disjoint MPI_COMM_WORLD's that get linked together through a
>> > newly created intercommunicator.
>> >
>> >
>> > I've used this to couple a structural mechanics code to our fluid
>> dynamics
>> > solver for example. It sounds like that is similar to what is being
>> done
>> > here.
>> >
>> >
>> > How that would interact with coprocessing is beyond my knowledge
>> though.
>> > It does sound like an interesting problem and one I would be very
>> curious
>> > to find out the details.
>> >
>> >
>> > Tim
>> >
>> >
>> > 
>> > From: ParaView <paraview-boun...@paraview.org> on behalf of Andy Bauer
>> > <andy.ba...@kitware.com>
>> > Sent: Wednesday, May 18, 2016 10:52 AM
>> > To: Ufuk Utku Turuncoglu (BE)
>> > Cc: paraview@paraview.org
>> > Subject: Re: [Paraview] capability of ParaView, Catalyst in
>> distributed
>> > computing environment ...
>> >

Re: [Paraview] capability of ParaView, Catalyst in distributed computing environment ...

2016-05-22 Thread u . utku . turuncoglu
Thanks for the information. Currently, I am working on the two-component
case, and the initial results show that the grid and data information
belonging to each model component must be accessible by all the MPI
processes (defined in the global MPI_COMM_WORLD) on the adaptor side. This
makes the implementation very complex when the 2d decomposition
configuration of both model components (which run on specific subsets of
the processors) is considered. In this case, it seems that the easiest way
is to interpolate/redistribute the data of both components onto a common
grid, or to create a new 2d decomposition in the adaptor. Another
possibility might be to implement MPI sections specific to each model
component (basically having two distinct communicators inside the global
one) to access the grids and fields on the adaptor side, but in this case I
am not sure whether ParaView can handle that kind of setup. Anyway, it
seems to be a challenging problem, and it would probably be good to have
this feature. I'll keep trying different implementations to test different
ideas and keep you posted. In the meantime, if you have any other ideas,
let me know.

Regards,

--ufuk

> It may be possible to do this with Catalyst. I would guess that nearly all
> of the complex work would need to be done in the adaptor to integrate this
> properly though.
>
> On Wed, May 18, 2016 at 11:17 AM, <u.utku.turunco...@be.itu.edu.tr> wrote:
>
>> Yes, you are right. In this case, there will be two separate
>> MPI_COMM_WORLD. Plus, one that covers all the resources (let's say that
>> global MPI_COMM_WORLD). Actually, this kind of setup is very common for
>> multi-physics applications such as fluid-structure interaction. So, is
>> it
>> possible to tight this kind of environment with Catalyst? I am not
>> expert
>> about Catalyst but it seems that there might be a problem in the
>> rendering
>> stage even underlying grids and fields are defined without any problem.
>>
>> Regards,
>>
>> --ufuk
>>
>> > I'm not sure if this is exactly what the original user is referring
>> to,
>> > but it is possible to have two separate codes communicate using MPI
>> > through the dynamic processes in MPI-2. Essentially, one program
>> starts
>> up
>> > on N processors and begins running and gets an MPI_COMM_WORLD. It then
>> > spawns another executable on M different processors and that new
>> > executable will call MPI_INIT and also get its own MPI_COMM_WORLD. So
>> you
>> > have two, disjoint MPI_COMM_WORLD's that get linked together through a
>> > newly created intercommunicator.
>> >
>> >
>> > I've used this to couple a structural mechanics code to our fluid
>> dynamics
>> > solver for example. It sounds like that is similar to what is being
>> done
>> > here.
>> >
>> >
>> > How that would interact with coprocessing is beyond my knowledge
>> though.
>> > It does sound like an interesting problem and one I would be very
>> curious
>> > to find out the details.
>> >
>> >
>> > Tim
>> >
>> >
>> > 
>> > From: ParaView <paraview-boun...@paraview.org> on behalf of Andy Bauer
>> > <andy.ba...@kitware.com>
>> > Sent: Wednesday, May 18, 2016 10:52 AM
>> > To: Ufuk Utku Turuncoglu (BE)
>> > Cc: paraview@paraview.org
>> > Subject: Re: [Paraview] capability of ParaView, Catalyst in
>> distributed
>> > computing environment ...
>> >
>> > Hi,
>> >
>> > I'm a bit confused. MPI_COMM_WORLD is the global communicator and as
>> far
>> > as I'm aware, can't be modified which means there can't be two
>> different
>> > communicators.
>> >
>> > Catalyst can be set to use a specific MPI communicator and that's been
>> > done by at least one code (Code_Saturne). I think they have a
>> multiphysics
>> > simulation as well.
>> >
>> > Cheers,
>> > Andy
>> >
>> > On Wed, May 18, 2016 at 5:22 AM, Ufuk Utku Turuncoglu (BE)
>> > <u.utku.turunco...@be.itu.edu.tr<mailto:u.utku.turunco...@be.itu.edu.tr
>> >>
>> > wrote:
>> > Hi All,
>> >
>> > I just wonder about the capability of ParaView, Catalyst in
>> distributed
>> > computing environment. I have little bit experience in in-situ
>> > visualization but it is hard for me to see the big picture at this
>> point.
>> > So, i decided to ask to the user list to get some suggestion from the

Re: [Paraview] capability of ParaView, Catalyst in distributed computing environment ...

2016-05-20 Thread Andy Bauer
It may be possible to do this with Catalyst. I would guess that nearly all
of the complex work would need to be done in the adaptor to integrate this
properly though.
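
For what it's worth, one common way to deal with the ranks-without-data
situation (a sketch only, not a tested recipe) is to give every rank a
multiblock container for each input port and simply leave the block empty
on the ranks that do not own that component:

#include <vtkCPInputDataDescription.h>
#include <vtkMultiBlockDataSet.h>
#include <vtkNew.h>
#include <vtkUnstructuredGrid.h>

// Sketch: every rank hands Catalyst a grid object for the "model1" port,
// but only ranks that own model1 data fill the block; the rest contribute
// an empty piece, which downstream filters generally tolerate.
void SetModel1Grid(vtkCPInputDataDescription* idd,
                   vtkUnstructuredGrid* localPiece) // NULL on non-owner ranks
{
  vtkNew<vtkMultiBlockDataSet> grid;
  grid->SetNumberOfBlocks(1);
  if (localPiece != NULL)
    {
    grid->SetBlock(0, localPiece);
    }
  idd->SetGrid(grid.GetPointer());
}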

On Wed, May 18, 2016 at 11:17 AM, <u.utku.turunco...@be.itu.edu.tr> wrote:

> Yes, you are right. In this case, there will be two separate
> MPI_COMM_WORLD. Plus, one that covers all the resources (let's say that
> global MPI_COMM_WORLD). Actually, this kind of setup is very common for
> multi-physics applications such as fluid-structure interaction. So, is it
> possible to tight this kind of environment with Catalyst? I am not expert
> about Catalyst but it seems that there might be a problem in the rendering
> stage even underlying grids and fields are defined without any problem.
>
> Regards,
>
> --ufuk
>
> > I'm not sure if this is exactly what the original user is referring to,
> > but it is possible to have two separate codes communicate using MPI
> > through the dynamic processes in MPI-2. Essentially, one program starts
> up
> > on N processors and begins running and gets an MPI_COMM_WORLD. It then
> > spawns another executable on M different processors and that new
> > executable will call MPI_INIT and also get its own MPI_COMM_WORLD. So you
> > have two, disjoint MPI_COMM_WORLD's that get linked together through a
> > newly created intercommunicator.
> >
> >
> > I've used this to couple a structural mechanics code to our fluid
> dynamics
> > solver for example. It sounds like that is similar to what is being done
> > here.
> >
> >
> > How that would interact with coprocessing is beyond my knowledge though.
> > It does sound like an interesting problem and one I would be very curious
> > to find out the details.
> >
> >
> > Tim
> >
> >
> > 
> > From: ParaView <paraview-boun...@paraview.org> on behalf of Andy Bauer
> > <andy.ba...@kitware.com>
> > Sent: Wednesday, May 18, 2016 10:52 AM
> > To: Ufuk Utku Turuncoglu (BE)
> > Cc: paraview@paraview.org
> > Subject: Re: [Paraview] capability of ParaView, Catalyst in distributed
> > computing environment ...
> >
> > Hi,
> >
> > I'm a bit confused. MPI_COMM_WORLD is the global communicator and as far
> > as I'm aware, can't be modified which means there can't be two different
> > communicators.
> >
> > Catalyst can be set to use a specific MPI communicator and that's been
> > done by at least one code (Code_Saturne). I think they have a
> multiphysics
> > simulation as well.
> >
> > Cheers,
> > Andy
> >
> > On Wed, May 18, 2016 at 5:22 AM, Ufuk Utku Turuncoglu (BE)
> > <u.utku.turunco...@be.itu.edu.tr<mailto:u.utku.turunco...@be.itu.edu.tr
> >>
> > wrote:
> > Hi All,
> >
> > I just wonder about the capability of ParaView, Catalyst in distributed
> > computing environment. I have little bit experience in in-situ
> > visualization but it is hard for me to see the big picture at this point.
> > So, i decided to ask to the user list to get some suggestion from the
> > experts. Hypothetically, lets assume that we have two simulation code
> that
> > are coupled together (i.e. fluid-structure interaction) and both of them
> > have their own MPI_COMM_WORLD and run on different processors (model1
> runs
> > on MPI rank 0,1,2,3 and model2 runs on 4,5,6,7). What is the correct
> > design to create integrated in-situ visualization analysis (both model
> > contributes to same visualization pipeline) in this case? Do you know any
> > implementation that is similar to this design? At least, is it possible?
> >
> > In this case, the adaptor code will need to access to two different
> > MPI_COMM_WORLD and it could run on all processor (from 0 to 7) or its own
> > MPI_COMM_WORLD (i.e. MPI ranks 8,9,10,11). Also, the both simulation code
> > have its own grid and field definitions (might be handled via defining
> > different input ports). Does it create a problem in Paraview, Catalyst
> > side, if the multiblock dataset is used to define the grids of the
> > components in adaptor. I am asking because some MPI processes (belongs to
> > adaptor code) will not have data for specific model component due to the
> > domain decomposition implementation of the individual models. For
> example,
> > MPI rank 4,5,6,7 will not have data for model1 (that runs on MPI rank
> > 0,1,2,3) and 0,1,2,3 will not have data for model2 (that runs on MPI rank
> > 4,5,6,7). To that end, do i need to collect all the data from the
> > components? If this is the case, how can

Re: [Paraview] capability of ParaView, Catalyst in distributed computing environment ...

2016-05-18 Thread Gallagher, Timothy P
I'm not sure if this is exactly what the original user is referring to, but it 
is possible to have two separate codes communicate using MPI through the 
dynamic processes in MPI-2. Essentially, one program starts up on N processors 
and begins running and gets an MPI_COMM_WORLD. It then spawns another 
executable on M different processors and that new executable will call MPI_INIT 
and also get its own MPI_COMM_WORLD. So you have two disjoint MPI_COMM_WORLDs 
that get linked together through a newly created intercommunicator.
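
For reference, a condensed sketch of the spawning side of that setup (the
executable name and process count are placeholders):

#include <mpi.h>

// Illustrative only: the parent code spawns a second executable on nRanks
// processes and gets back an intercommunicator linking the two
// MPI_COMM_WORLDs. The children recover it with MPI_Comm_get_parent().
MPI_Comm SpawnSecondCode(const char* exe, int nRanks)
{
  MPI_Comm interComm = MPI_COMM_NULL;
  MPI_Comm_spawn(const_cast<char*>(exe), MPI_ARGV_NULL, nRanks,
                 MPI_INFO_NULL, 0 /* root */, MPI_COMM_WORLD,
                 &interComm, MPI_ERRCODES_IGNORE);
  return interComm;
}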


I've used this to couple a structural mechanics code to our fluid dynamics 
solver for example. It sounds like that is similar to what is being done here.


How that would interact with coprocessing is beyond my knowledge though. It 
does sound like an interesting problem and one I would be very curious to find 
out the details.


Tim



From: ParaView <paraview-boun...@paraview.org> on behalf of Andy Bauer 
<andy.ba...@kitware.com>
Sent: Wednesday, May 18, 2016 10:52 AM
To: Ufuk Utku Turuncoglu (BE)
Cc: paraview@paraview.org
Subject: Re: [Paraview] capability of ParaView, Catalyst in distributed 
computing environment ...

Hi,

I'm a bit confused. MPI_COMM_WORLD is the global communicator and as far as I'm 
aware, can't be modified which means there can't be two different communicators.

Catalyst can be set to use a specific MPI communicator and that's been done by 
at least one code (Code_Saturne). I think they have a multiphysics simulation 
as well.

Cheers,
Andy

On Wed, May 18, 2016 at 5:22 AM, Ufuk Utku Turuncoglu (BE) 
<u.utku.turunco...@be.itu.edu.tr<mailto:u.utku.turunco...@be.itu.edu.tr>> wrote:
Hi All,

I just wonder about the capability of ParaView, Catalyst in distributed 
computing environment. I have little bit experience in in-situ visualization 
but it is hard for me to see the big picture at this point. So, i decided to 
ask to the user list to get some suggestion from the experts. Hypothetically, 
lets assume that we have two simulation code that are coupled together (i.e. 
fluid-structure interaction) and both of them have their own MPI_COMM_WORLD and 
run on different processors (model1 runs on MPI rank 0,1,2,3 and model2 runs on 
4,5,6,7). What is the correct design to create integrated in-situ visualization 
analysis (both model contributes to same visualization pipeline) in this case? 
Do you know any implementation that is similar to this design? At least, is it 
possible?

In this case, the adaptor code will need to access to two different 
MPI_COMM_WORLD and it could run on all processor (from 0 to 7) or its own 
MPI_COMM_WORLD (i.e. MPI ranks 8,9,10,11). Also, the both simulation code have 
its own grid and field definitions (might be handled via defining different 
input ports). Does it create a problem in Paraview, Catalyst side, if the 
multiblock dataset is used to define the grids of the components in adaptor. I 
am asking because some MPI processes (belongs to adaptor code) will not have 
data for specific model component due to the domain decomposition 
implementation of the individual models. For example, MPI rank 4,5,6,7 will not 
have data for model1 (that runs on MPI rank 0,1,2,3) and 0,1,2,3 will not have 
data for model2 (that runs on MPI rank 4,5,6,7). To that end, do i need to 
collect all the data from the components? If this is the case, how can i handle 
2d decomposition problem? Because, the adaptor code has no any common grid 
structure that fits for all the model components.

Regards,

Ufuk Turuncoglu
___
Powered by www.kitware.com<http://www.kitware.com>

Visit other Kitware open-source projects at 
http://www.kitware.com/opensource/opensource.html

Please keep messages on-topic and check the ParaView Wiki at: 
http://paraview.org/Wiki/ParaView

Search the list archives at: http://markmail.org/search/?q=ParaView

Follow this link to subscribe/unsubscribe:
http://public.kitware.com/mailman/listinfo/paraview

___
Powered by www.kitware.com

Visit other Kitware open-source projects at 
http://www.kitware.com/opensource/opensource.html

Please keep messages on-topic and check the ParaView Wiki at: 
http://paraview.org/Wiki/ParaView

Search the list archives at: http://markmail.org/search/?q=ParaView

Follow this link to subscribe/unsubscribe:
http://public.kitware.com/mailman/listinfo/paraview


Re: [Paraview] capability of ParaView, Catalyst in distributed computing environment ...

2016-05-18 Thread u . utku . turuncoglu
Yes, you are right. In this case, there will be two separate
MPI_COMM_WORLDs, plus one that covers all the resources (let's call it the
global MPI_COMM_WORLD). Actually, this kind of setup is very common for
multi-physics applications such as fluid-structure interaction. So, is it
possible to tie this kind of environment together with Catalyst? I am not
an expert on Catalyst, but it seems that there might be a problem in the
rendering stage even if the underlying grids and fields are defined without
any problem.

Regards,

--ufuk

> I'm not sure if this is exactly what the original user is referring to,
> but it is possible to have two separate codes communicate using MPI
> through the dynamic processes in MPI-2. Essentially, one program starts up
> on N processors and begins running and gets an MPI_COMM_WORLD. It then
> spawns another executable on M different processors and that new
> executable will call MPI_INIT and also get its own MPI_COMM_WORLD. So you
> have two, disjoint MPI_COMM_WORLD's that get linked together through a
> newly created intercommunicator.
>
>
> I've used this to couple a structural mechanics code to our fluid dynamics
> solver for example. It sounds like that is similar to what is being done
> here.
>
>
> How that would interact with coprocessing is beyond my knowledge though.
> It does sound like an interesting problem and one I would be very curious
> to find out the details.
>
>
> Tim
>
>
> 
> From: ParaView <paraview-boun...@paraview.org> on behalf of Andy Bauer
> <andy.ba...@kitware.com>
> Sent: Wednesday, May 18, 2016 10:52 AM
> To: Ufuk Utku Turuncoglu (BE)
> Cc: paraview@paraview.org
> Subject: Re: [Paraview] capability of ParaView, Catalyst in distributed
> computing environment ...
>
> Hi,
>
> I'm a bit confused. MPI_COMM_WORLD is the global communicator and as far
> as I'm aware, can't be modified which means there can't be two different
> communicators.
>
> Catalyst can be set to use a specific MPI communicator and that's been
> done by at least one code (Code_Saturne). I think they have a multiphysics
> simulation as well.
>
> Cheers,
> Andy
>
> On Wed, May 18, 2016 at 5:22 AM, Ufuk Utku Turuncoglu (BE)
> <u.utku.turunco...@be.itu.edu.tr<mailto:u.utku.turunco...@be.itu.edu.tr>>
> wrote:
> Hi All,
>
> I just wonder about the capability of ParaView, Catalyst in distributed
> computing environment. I have little bit experience in in-situ
> visualization but it is hard for me to see the big picture at this point.
> So, i decided to ask to the user list to get some suggestion from the
> experts. Hypothetically, lets assume that we have two simulation code that
> are coupled together (i.e. fluid-structure interaction) and both of them
> have their own MPI_COMM_WORLD and run on different processors (model1 runs
> on MPI rank 0,1,2,3 and model2 runs on 4,5,6,7). What is the correct
> design to create integrated in-situ visualization analysis (both model
> contributes to same visualization pipeline) in this case? Do you know any
> implementation that is similar to this design? At least, is it possible?
>
> In this case, the adaptor code will need to access to two different
> MPI_COMM_WORLD and it could run on all processor (from 0 to 7) or its own
> MPI_COMM_WORLD (i.e. MPI ranks 8,9,10,11). Also, the both simulation code
> have its own grid and field definitions (might be handled via defining
> different input ports). Does it create a problem in Paraview, Catalyst
> side, if the multiblock dataset is used to define the grids of the
> components in adaptor. I am asking because some MPI processes (belongs to
> adaptor code) will not have data for specific model component due to the
> domain decomposition implementation of the individual models. For example,
> MPI rank 4,5,6,7 will not have data for model1 (that runs on MPI rank
> 0,1,2,3) and 0,1,2,3 will not have data for model2 (that runs on MPI rank
> 4,5,6,7). To that end, do i need to collect all the data from the
> components? If this is the case, how can i handle 2d decomposition
> problem? Because, the adaptor code has no any common grid structure that
> fits for all the model components.
>
> Regards,
>
> Ufuk Turuncoglu
> ___
> Powered by www.kitware.com<http://www.kitware.com>
>
> Visit other Kitware open-source projects at
> http://www.kitware.com/opensource/opensource.html
>
> Please keep messages on-topic and check the ParaView Wiki at:
> http://paraview.org/Wiki/ParaView
>
> Search the list archives at: http://markmail.org/search/?q=ParaView
>
> Follow this link to subscribe/unsubscribe:
> http://public.kitware.com/mailman/listinfo/parav

Re: [Paraview] capability of ParaView, Catalyst in distributed computing environment ...

2016-05-18 Thread Andy Bauer
Hi,

I'm a bit confused. MPI_COMM_WORLD is the global communicator and as far as
I'm aware, can't be modified which means there can't be two different
communicators.

Catalyst can be set to use a specific MPI communicator and that's been done
by at least one code (Code_Saturne). I think they have a multiphysics
simulation as well.

Cheers,
Andy

On Wed, May 18, 2016 at 5:22 AM, Ufuk Utku Turuncoglu (BE) <
u.utku.turunco...@be.itu.edu.tr> wrote:

> Hi All,
>
> I just wonder about the capability of ParaView, Catalyst in distributed
> computing environment. I have little bit experience in in-situ
> visualization but it is hard for me to see the big picture at this point.
> So, i decided to ask to the user list to get some suggestion from the
> experts. Hypothetically, lets assume that we have two simulation code that
> are coupled together (i.e. fluid-structure interaction) and both of them
> have their own MPI_COMM_WORLD and run on different processors (model1 runs
> on MPI rank 0,1,2,3 and model2 runs on 4,5,6,7). What is the correct design
> to create integrated in-situ visualization analysis (both model contributes
> to same visualization pipeline) in this case? Do you know any
> implementation that is similar to this design? At least, is it possible?
>
> In this case, the adaptor code will need to access to two different
> MPI_COMM_WORLD and it could run on all processor (from 0 to 7) or its own
> MPI_COMM_WORLD (i.e. MPI ranks 8,9,10,11). Also, the both simulation code
> have its own grid and field definitions (might be handled via defining
> different input ports). Does it create a problem in Paraview, Catalyst
> side, if the multiblock dataset is used to define the grids of the
> components in adaptor. I am asking because some MPI processes (belongs to
> adaptor code) will not have data for specific model component due to the
> domain decomposition implementation of the individual models. For example,
> MPI rank 4,5,6,7 will not have data for model1 (that runs on MPI rank
> 0,1,2,3) and 0,1,2,3 will not have data for model2 (that runs on MPI rank
> 4,5,6,7). To that end, do i need to collect all the data from the
> components? If this is the case, how can i handle 2d decomposition problem?
> Because, the adaptor code has no any common grid structure that fits for
> all the model components.
>
> Regards,
>
> Ufuk Turuncoglu
> ___
> Powered by www.kitware.com
>
> Visit other Kitware open-source projects at
> http://www.kitware.com/opensource/opensource.html
>
> Please keep messages on-topic and check the ParaView Wiki at:
> http://paraview.org/Wiki/ParaView
>
> Search the list archives at: http://markmail.org/search/?q=ParaView
>
> Follow this link to subscribe/unsubscribe:
> http://public.kitware.com/mailman/listinfo/paraview
>
___
Powered by www.kitware.com

Visit other Kitware open-source projects at 
http://www.kitware.com/opensource/opensource.html

Please keep messages on-topic and check the ParaView Wiki at: 
http://paraview.org/Wiki/ParaView

Search the list archives at: http://markmail.org/search/?q=ParaView

Follow this link to subscribe/unsubscribe:
http://public.kitware.com/mailman/listinfo/paraview


[Paraview] capability of ParaView, Catalyst in distributed computing environment ...

2016-05-18 Thread Ufuk Utku Turuncoglu (BE)

Hi All,

I am wondering about the capability of ParaView/Catalyst in a distributed 
computing environment. I have a little experience with in-situ 
visualization, but it is hard for me to see the big picture at this 
point, so I decided to ask the user list for suggestions from the 
experts. Hypothetically, let's assume that we have two simulation 
codes that are coupled together (e.g. fluid-structure interaction) and 
both of them have their own MPI_COMM_WORLD and run on different 
processors (model1 runs on MPI ranks 0,1,2,3 and model2 runs on 4,5,6,7). 
What is the correct design for creating an integrated in-situ visualization 
analysis (both models contributing to the same visualization pipeline) in 
this case? Do you know of any implementation that is similar to this 
design? At the very least, is it possible?


In this case, the adaptor code will need access to the two different 
MPI_COMM_WORLDs, and it could run on all processors (from 0 to 7) or on 
its own communicator (i.e. MPI ranks 8,9,10,11). Also, each simulation 
code has its own grid and field definitions (which might be handled by 
defining different input ports). Does it create a problem on the ParaView/ 
Catalyst side if a multiblock dataset is used to define the grids of 
the components in the adaptor? I am asking because some MPI processes 
(belonging to the adaptor code) will not have data for a specific model 
component due to the domain decomposition implementation of the 
individual models. For example, MPI ranks 4,5,6,7 will not have data for 
model1 (which runs on MPI ranks 0,1,2,3), and ranks 0,1,2,3 will not have 
data for model2 (which runs on MPI ranks 4,5,6,7). To that end, do I need 
to collect all the data from the components? If so, how can I handle the 
2d decomposition problem? The adaptor code has no common grid structure 
that fits all the model components.


Regards,

Ufuk Turuncoglu
___
Powered by www.kitware.com

Visit other Kitware open-source projects at 
http://www.kitware.com/opensource/opensource.html

Please keep messages on-topic and check the ParaView Wiki at: 
http://paraview.org/Wiki/ParaView

Search the list archives at: http://markmail.org/search/?q=ParaView

Follow this link to subscribe/unsubscribe:
http://public.kitware.com/mailman/listinfo/paraview