[OMPI users] Using hostfile with default hostfile

2010-10-27 Thread Stefan Kuhne
Hello,

my cluster has a configured default hostfile.

When I use a different hostfile for a single job, I get:

cluster-admin@Head:~/Cluster/hello$ mpirun --hostfile ../Cluster.hosts ./hello
--------------------------------------------------------------------------
There are no allocated resources for the application
  ./hello
that match the requested mapping:
  ../Cluster.hosts

Verify that you have mapped the allocated resources properly using the
--host or --hostfile specification.

...
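For reference, the hostfile is just a plain text file with one node per line and an optional slot count; mine looks roughly like this (host names and slot counts here are placeholders):

node01 slots=2
node02 slots=2
node03 slots=2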

Any ideas?

Regards,
Stefan Kuhne





Re: [OMPI users] Running simple MPI program

2010-10-23 Thread Stefan Kuhne
On 23.10.2010 18:58, Brandon Fulcher wrote:

Hello,

> So I checked the OMPI package details on both machines, they each are
> running Open MPI 1.3. . . but then I noticed that the packages are
> different versions.   Basically, the slave is running the previous
> Ubuntu release, and the master is running the current one. Both have the
> most recent packages for their release. . .but perhaps that is enough of
> a difference?
> 
I've found that Ubuntu 9.10 has an openmpi-1.3 package that ships Open MPI
version 1.3. Ubuntu 10.04 also has an openmpi-1.3 package, but it actually
contains Open MPI version 1.4.
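A quick way to compare the actual runtime version on each machine (assuming the ompi_info tool that ships with the packages):

user@head:~$ ompi_info | grep "Open MPI:"

The first line of output is the exact library version, which makes a packaging mismatch like this easy to spot.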

Regards,
Stefan Kuhne





Re: [OMPI users] MPE logging GUI

2010-07-19 Thread Stefan Kuhne
On 19.07.2010 16:32, Anthony Chan wrote:

Hello Anthony,
> 
> Just curious, is there any reason you are looking for another
> tool to view slog2 file ?
> 
I'm looking for a clearer tool.
I find Jumpshot a little overloaded.

Regards,
Stefan Kuhne





[OMPI users] MPE logging GUI

2010-07-19 Thread Stefan Kuhne
Hello,

does anybody know a tool other than Jumpshot for viewing an MPE log file?
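For context, my current workflow is roughly the following (the file name is a placeholder; clog2TOslog2 and jumpshot are the converter and viewer that ship with MPE2):

user@head:~$ clog2TOslog2 hello.clog2    # convert the MPE log to slog2
user@head:~$ jumpshot hello.slog2        # open the timeline in Jumpshot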

Regards,
Stefan Kuhne





Re: [OMPI users] default hostfile (Ubuntu-9.10)

2010-05-19 Thread Stefan Kuhne
On 18.05.2010 15:46, Ralph Castain wrote:

Hello,

> Starting in the 1.3 series, you have to tell OMPI where to find the
> default hostfile. So put this in your default MCA param file:
> 
> orte_default_hostfile=
> 
> That should fix it.
> 
yes, that fixes it.
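For anyone who finds this thread later, a minimal sketch of the setting (the paths below are the ones the Ubuntu packages use; adjust them for your installation):

# /etc/openmpi/openmpi-mca-params.conf (system-wide)
# or ~/.openmpi/mca-params.conf (per user)
orte_default_hostfile = /etc/openmpi/openmpi-default-hostfile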

Thanks,
Stefan Kuhne





Re: [OMPI users] default hostfile (Ubuntu-9.10)

2010-05-18 Thread Stefan Kuhne
On 18.05.2010 15:09, Ralph Castain wrote:

Hello,

> Could you tell us what version of OMPI you are using?
> 
it's openmpi-1.3.2.

Regards,
Stefan Kuhne





[OMPI users] default hostfile (Ubuntu-9.10)

2010-05-18 Thread Stefan Kuhne
Hello,

I manage a small HPC cluster.
The default hostfile seems to be located in /etc/openmpi,
but when I list my hosts in it, mpirun doesn't use it.

How can I use a default hostfile?

Regards,
Stefan Kuhne





Re: [OMPI users] OFED-1.5rc1 with OpenMPI and IB

2009-12-07 Thread Stefan Kuhne
Stefan Kuhne wrote:
> Stefan Kuhne wrote:
> 
Hello,

>> I'll try it on Monday.
>>
> with:
> user@head:~$ ulimit -l
> unlimited
> user@head:~$
> 
> it works.
> 
it works in ssh and FreeNX, but a terminal on a real X11 session still reports 64.
And I need X11 to test an MPE issue.

Regards,
Stefan Kuhne





Re: [OMPI users] OFED-1.5rc1 with OpenMPI and IB

2009-11-16 Thread Stefan Kuhne

Stefan Kuhne wrote:

Hello,

> How can I tell whether IB is really being used?
> My switch shows no packets.
> (I use: --mca btl openib,self)
>
my "hello" example was a bad one.
A real MPI program does use openib.

Regards,
Stefan Kuhne

Re: [OMPI users] OFED-1.5rc1 with OpenMPI and IB

2009-11-16 Thread Stefan Kuhne

Stefan Kuhne wrote:

Hello,

> I'll try it on Monday.
>
With:

user@head:~$ ulimit -l
unlimited
user@head:~$

it works.

How can I tell whether IB is really being used?
My switch shows no packets.
(I use: --mca btl openib,self)
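One idea, assuming the btl_base_verbose MCA parameter is available in this version (it logs which BTL components are selected at startup):

user@head:~$ mpirun --mca btl openib,self --mca btl_base_verbose 30 ./hello

Since only openib and self are enabled, any job that runs across nodes must be using IB for inter-node traffic, or it would fail outright.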

Regards,
Stefan Kuhne

Re: [OMPI users] OFED-1.5rc1 with OpenMPI and IB

2009-11-14 Thread Stefan Kuhne
Jeff Squyres wrote:
> On Nov 13, 2009, at 1:06 AM, Stefan Kuhne wrote:
> 
Hello,

>> user@head:~$ ulimit -l
>> 64
>> 
> This should really be unlimited.  See:
> 
> http://www.open-mpi.org/faq/?category=openfabrics#ib-locked-pages

With an error message like that I would have found it, but with my error
message there was no chance.

I'll try it on Monday.
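For reference, the FAQ entry boils down to raising the locked-memory limit, e.g. in /etc/security/limits.conf (a sketch; the exact file and mechanism vary by distribution, and it only affects sessions started after the change):

* soft memlock unlimited
* hard memlock unlimited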

Regards,
Stefan Kuhne





Re: [OMPI users] OFED-1.5rc1 with OpenMPI and IB

2009-11-13 Thread Stefan Kuhne

Jeff Squyres wrote:

Hello,

> Can you submit all the information requested here:
>
> http://www.open-mpi.org/community/help/
>
OpenFabrics version: 1.5 RC1 (downloaded from www.openfabrics.org)

Distro: Ubuntu 9.04 (2.6.28-11-generic)

SM: Infinicon InfinIO 9024 Switch

ibv_devinfo:
user@head:~$ ibv_devinfo
hca_id: mthca0
        fw_ver:                 3.5.0
        node_guid:              0006:6a00:b000:476b
        sys_image_guid:         0006:6a00:b000:476e
        vendor_id:              0x02c9
        vendor_part_id:         23108
        hw_ver:                 0xA1
        board_id:               MT_003001
        phys_port_cnt:          2
                port:   1
                        state:          PORT_ACTIVE (4)
                        max_mtu:        2048 (4)
                        active_mtu:     2048 (4)
                        sm_lid:         1
                        port_lid:       2
                        port_lmc:       0x00

                port:   2
                        state:          PORT_DOWN (1)
                        max_mtu:        2048 (4)
                        active_mtu:     512 (2)
                        sm_lid:         0
                        port_lid:       0
                        port_lmc:       0x00

user@head:~$

ifconfig:
user@head:~$ ifconfig ib0
ib0   Link encap:UNSPEC  HWaddr 80-00-04-04-FE-80-00-00-00-00-00-00-00-00-00-00
      inet addr:192.168.100.207  Bcast:192.168.100.255  Mask:255.255.255.0
      inet6 addr: fe80::206:6a00:b000:476c/64 Scope:Link
      UP BROADCAST RUNNING MULTICAST  MTU:65520  Metric:1
      RX packets:0 errors:0 dropped:0 overruns:0 frame:0
      TX packets:0 errors:0 dropped:7 overruns:0 carrier:0
      collisions:0 txqueuelen:256
      RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

user@head:~$

ulimit -l:
user@head:~$ ulimit -l
64
user@head:~$

I hope this is what you need.

Regards,
Stefan Kuhne

[OMPI users] OFED-1.5rc1 with OpenMPI and IB

2009-11-12 Thread Stefan Kuhne
Hello,

I'm trying to set up a small HPC cluster for educational use.
InfiniBand is working, at least insofar as I can ping over IB.
When I try to run an MPI program I get:

user@head:~/Cluster/hello$ mpirun --hostfile ../Cluster.hosts hello
--------------------------------------------------------------------------
WARNING: There was an error initializing an OpenFabrics device.

   Local host:   head
   Local device: mthca0
--------------------------------------------------------------------------
Here is job  0 of  1 on head
user@head:~/Cluster/hello$
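(The hello program here is the usual MPI hello world; a minimal sketch of it, reconstructed from the output rather than the exact source:)

#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank, size, len;
    char name[MPI_MAX_PROCESSOR_NAME];

    MPI_Init(&argc, &argv);               /* start the MPI runtime */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank); /* this process's rank */
    MPI_Comm_size(MPI_COMM_WORLD, &size); /* total number of processes */
    MPI_Get_processor_name(name, &len);   /* host this process runs on */
    printf("Here is job %d of %d on %s\n", rank, size, name);
    MPI_Finalize();
    return 0;
}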

How can I get more information about this error?

Regards,
Stefan Kuhne



