OK, if I understand correctly, you have 6 nodes with 4 processors per node.  
You want to launch a job with 4 processes per node (one per processor).  When 
you do that, the first 4 processes are assigned to one node, and all try to 
create their own tile on the same node because ParaView assumes that you have 
arranged the first 6 processes on the display nodes.
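To make the two placement policies concrete, here is a small shell sketch (purely illustrative, not part of MPI itself; nodes are named by the last octet of the addresses in the hostfile later in this thread) of how ranks land on six 4-slot nodes under the default fill-up placement versus round-robin placement:

```shell
#!/bin/sh
# Illustrative sketch only: map an MPI rank to a node under two policies.
# Six nodes, four slots each, in hostfile order (last octets of the IPs).
nodes="136 134 132 135 133 131"

byslot() {  # default: fill all 4 slots of a node before moving to the next
  rank=$1
  set -- $nodes
  shift $(( rank / 4 ))
  echo "$1"
}

bynode() {  # interleaved: deal ranks out round-robin across the nodes
  rank=$1
  set -- $nodes
  shift $(( rank % 6 ))
  echo "$1"
}

byslot 0   # ranks 0-3 all land on 136 under the default policy
bynode 0   # round-robin puts rank 0 on 136, rank 1 on 134, ...
```

Under the default policy ranks 0 through 3 all map to 136, which is why all four try to open a display on the first node.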

Each implementation of MPI has its own way of specifying how to assign the 
processes in your job.  You have not said, but it looks like you are using 
OpenMPI.  You might want to try the -bynode flag to mpirun.  This flag is 
supposed to interleave the process allocation amongst nodes.  Thus, 136 should 
get processes 0, 6, 12, and 18; 134 should get processes 1, 7, 13, and 19; and so 
on.
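For example, the invocation might look something like this (a sketch only: it reuses the `--mca` options, hostfile, and pvserver path from the command quoted below, adds -bynode, and assumes your 3x2 tile layout — it is a command-line fragment, not something tested here):

```shell
# Sketch: 24 ranks dealt round-robin (-bynode) over the 6-node hostfile,
# with a 3x2 tile layout; paths and port are from the original command.
mpirun -np 24 -bynode --mca btl ^openib,udapl --mca btl_tcp_if_exclude lo \
    --hostfile /home/imagine/ParaView/hosts /bin/env DISPLAY=:0 \
    ~/OpenFOAM/ThirdParty/ParaView3.3-cvs/platforms/linux64Gcc/bin/pvserver \
    --server-port=1100 -tdx=3 -tdy=2
```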

-Ken


On 2/3/09 8:04 AM, "Camilo Marin" <[email protected]> wrote:

Hi Moreland,

As I said, when I tried running ParaView on the tiled display with only 6 
processors, everything worked right. I could see each model loaded in ParaView 
perfectly on the tiled display, and even glgears ran correctly on each screen 
when I tested it. Now I have corrected ParaView's host file and it looks like this:

192.168.51.136 slots=4
192.168.51.134 slots=4
192.168.51.132 slots=4
192.168.51.135 slots=4
192.168.51.133 slots=4
192.168.51.131 slots=4

Each of the six computers comprising the cluster has 4 slots available, for a 
total of 24 processors running in parallel. 192.168.51.136 is the first 
screen, 192.168.51.134 the second, and so on... When I run the command with -np 
24, the mapping on each screen is incorrect, as if 136 were no longer 
associated with process 0 and so on. So in the end you see the model, but the 
correspondence between the tiles and the processes is incorrect. I don't know if I am 
specifying the arguments tdx and tdy incorrectly, as tdx=3 and tdy=2 and 3*2=6, which 
is different from 24... Maybe there is something I am overlooking that is causing 
the incorrect mapping.


2009/2/2 Utkarsh Ayachit <[email protected]>
I knew I should have looked it up before responding :), I stand corrected.

On Mon, Feb 2, 2009 at 1:00 PM, Moreland, Kenneth <[email protected]> wrote:
> Sorry, but that's not true at all.  It is in fact encouraged to have more
> processors than tiles when driving a tiled display.  All the processors,
> even the non-display ones, will be involved in the processing and parallel
> rendering work.
>
> -Ken
>
>
> On 1/30/09 11:43 AM, "Utkarsh Ayachit" <[email protected]> wrote:
>
> The number of processes must match the number of tiles (num of tiles =
> tdx * tdy). In your case you have 8 processes but only 1*2 = 2 tiles?
>
> Utkarsh
>
> On Fri, Jan 30, 2009 at 10:21 AM, Camilo Marin
> <[email protected]> wrote:
>> Hi all,
>>
>>
>> We are trying to configure and run Paraview 3.4.0 on a tiled display with
>> the following command:
>>
>> mpirun -np 8 --mca btl ^openib,udapl --mca btl_tcp_if_exclude lo
>> --hostfile
>> /home/imagine/ParaView/hosts /bin/env DISPLAY=:0
>> ~/OpenFOAM/ThirdParty/ParaView3.3-cvs/platforms/linux64Gcc/bin/pvserver
>> --server-port=1100 -tdx=1 -tdy=2
>>
>> Then when we connect through the ParaView client, it doesn't show on the
>> two
>> displays we requested.
>>
>> So, is there a guideline or some kind of command/configuration we are
>> missing so it can be displayed as wished?
>>
>>
>>
>> Thanks in advance.
>>
>>
>> _______________________________________________
>> ParaView mailing list
>> [email protected]
>> http://www.paraview.org/mailman/listinfo/paraview
>>
>>
>
>
>
>
>    ****      Kenneth Moreland
>     ***      Sandia National Laboratories
> ***********
> *** *** ***  email: [email protected]
> **  ***  **  phone: (505) 844-8919
>     ***      web:   http://www.cs.unm.edu/~kmorel 
>
>





