Hi Ralph, I just copied your code from the trunk, and it seems to work well. But "mpirun ls" still hangs at orte_plm_select; according to ompi_info, there is no available component for plm. I don't think my configure options are the problem, since I use the same options to build under Linux, where both the rsh and slurm components are available. Maybe it is a problem with my environment, but I don't know how to find out what conditions are needed to include the rsh component.
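For reference, here is roughly how I have been checking the built components, and the no-build approach Ralph describes below — the exact `--enable-mca-no-build` syntax is my reading of Open MPI's standard configure options, so treat it as a sketch:

```shell
# Inspect which MCA components Open MPI actually built/loaded for a framework;
# on Linux this prints lines like "MCA plm: rsh (MCA v2.0, ...)",
# but on my VxWorks build nothing is listed for plm.
ompi_info | grep plm
ompi_info | grep paffinity

# Ralph's suggestion: skip building an entire framework's components at
# configure time (flag name assumed from Open MPI's configure options).
./configure --enable-mca-no-build=paffinity
```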
2010/6/30 Ralph Castain <r...@open-mpi.org>:
> You may be working off of an old version of OMPI - I updated opal_paffinity
> awhile ago to no longer require that a component be selected. Then you can
> no-build all paffinity components if you like and the system will still
> start up okay.
>
> I don't believe this was moved over to the 1.4 release branch - afraid you
> would have to use a developer's trunk tarball or svn checkout. It -might- be
> in the 1.5.0 release candidates, though (I haven't looked).
>
> On Jun 29, 2010, at 9:36 PM, 张晶 wrote:
>
>> Hi all,
>>
>> I tried to run Open MPI on VxWorks. Now I can run programs such as
>> ompi_info, but running "mpirun ls" fails with this error:
>> --------------------------------------------------------------------------
>> It looks like opal_init failed for some reason; your parallel process is
>> likely to abort. There are many reasons that a parallel process can
>> fail during opal_init; some of which are due to configuration or
>> environment problems. This failure appears to be an internal failure;
>> here's some additional information (which may only be relevant to an
>> Open MPI developer):
>>
>>   opal_paffinity_base_select failed
>>   --> Returned value -13 instead of OPAL_SUCCESS
>> --------------------------------------------------------------------------
>>
>> Using ompi_info, I can't find any available component for paffinity. It
>> seems the Linux paffinity component isn't available. Since paffinity is
>> not a requirement in Open MPI, I wonder whether I can disable paffinity
>> when running mpirun?
>> --
>> Jing Zhang
>> _______________________________________________
>> devel mailing list
>> de...@open-mpi.org
>> http://www.open-mpi.org/mailman/listinfo.cgi/devel

--
张晶