On Wed, 4 Nov 2015 15:14:58 +0000
Emil Eriksen <[email protected]> wrote:
> Hi Jan,
>
> Thank you very much. Running the correct demos, everything works. I
> downloaded a tarball with the demos from
>
> http://fenicsproject.org/documentation/tutorial/index.html
>
> which is apparently not up to date.
Yes, this is known and will hopefully be solved soon.

Jan

> \emher
>
> ________________________________________
> From: Jan Blechta [[email protected]]
> Sent: 4 November 2015 11:45
> To: Emil Eriksen
> Cc: [email protected]
> Subject: Re: [FEniCS-support] Ubuntu 14.04 LTS - MPI Errors
>
> On Tue, 3 Nov 2015 20:32:40 +0000
> Emil Eriksen <[email protected]> wrote:
>
> > Hi all,
> >
> > I have some problems running examples that use MPI, e.g. the
> > "d1_p2D.py" example. Examples not using MPI, e.g. the
> > "alg_newton_np.py" example, run just fine.
>
> These files are from the outdated, unmaintained FEniCS tutorial. Start
> instead with the documented demos included in the DOLFIN source tree
> or on the FEniCS website:
> http://fenicsproject.org/documentation/dolfin/1.6.0/python/demo/index.html
>
> Jan
>
> >
> > Trying to isolate the problem, I booted an Ubuntu 14.04 LTS live
> > installation from USB and updated all packages:
> >
> >     sudo apt-get update
> >     sudo apt-get upgrade
> >     sudo apt-get dist-upgrade
> >
> > Next I added the official FEniCS repository
> >
> >     sudo add-apt-repository ppa:fenics-packages/fenics
> >
> > and installed FEniCS using the Synaptic Package Manager. Now
> > "alg_newton_np.py" runs, but "d1_p2D.py" gives the error listed
> > below. Running
> >
> >     ompi_info
> >
> > I can see that I have version 1.6.5 of Open MPI (the latest version
> > in the default Ubuntu repo). Do you have any idea of what I might be
> > doing wrong?
> >
> > Cheers,
> > emher
> >
> > Reading DOLFIN parameters from file "dolfin_parameters.xml".
> > [ubuntu:11850] mca: base: component_find: unable to open
> > /usr/lib/openmpi/lib/openmpi/mca_paffinity_hwloc: perhaps a missing
> > symbol, or compiled for a different version of Open MPI? (ignored)
> > [ubuntu:11850] mca: base: component_find: unable to open
> > /usr/lib/openmpi/lib/openmpi/mca_carto_auto_detect: perhaps a missing
> > symbol, or compiled for a different version of Open MPI? (ignored)
> > [ubuntu:11850] mca: base: component_find: unable to open
> > /usr/lib/openmpi/lib/openmpi/mca_carto_file: perhaps a missing
> > symbol, or compiled for a different version of Open MPI? (ignored)
> > [ubuntu:11850] mca: base: component_find: unable to open
> > /usr/lib/openmpi/lib/openmpi/mca_shmem_mmap: perhaps a missing
> > symbol, or compiled for a different version of Open MPI? (ignored)
> > [ubuntu:11850] mca: base: component_find: unable to open
> > /usr/lib/openmpi/lib/openmpi/mca_shmem_posix: perhaps a missing
> > symbol, or compiled for a different version of Open MPI? (ignored)
> > [ubuntu:11850] mca: base: component_find: unable to open
> > /usr/lib/openmpi/lib/openmpi/mca_shmem_sysv: perhaps a missing
> > symbol, or compiled for a different version of Open MPI? (ignored)
> > --------------------------------------------------------------------------
> > It looks like opal_init failed for some reason; your parallel
> > process is likely to abort. There are many reasons that a parallel
> > process can fail during opal_init; some of which are due to
> > configuration or environment problems. This failure appears to be
> > an internal failure; here's some additional information (which may
> > only be relevant to an Open MPI developer):
> >
> >   opal_shmem_base_select failed
> >   --> Returned value -1 instead of OPAL_SUCCESS
> > --------------------------------------------------------------------------
> > [ubuntu:11850] [[INVALID],INVALID] ORTE_ERROR_LOG: Error in file
> > runtime/orte_init.c at line 79
> > --------------------------------------------------------------------------
> > It looks like MPI_INIT failed for some reason; your parallel process
> > is likely to abort. There are many reasons that a parallel process
> > can fail during MPI_INIT; some of which are due to configuration or
> > environment problems. This failure appears to be an internal
> > failure; here's some additional information (which may only be
> > relevant to an Open MPI developer):
> >
> >   ompi_mpi_init: orte_init failed
> >   --> Returned "Error" (-1) instead of "Success" (0)
> > --------------------------------------------------------------------------
> > *** An error occurred in MPI_Init_thread
> > *** on a NULL communicator
> > *** MPI_ERRORS_ARE_FATAL: your MPI job will now abort
> > [ubuntu:11850] Local abort before MPI_INIT completed successfully;
> > not able to aggregate error messages, and not able to guarantee that
> > all other processes were killed!
>
> _______________________________________________
> fenics-support mailing list
> [email protected]
> http://fenicsproject.org/mailman/listinfo/fenics-support
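[Editor's note: the repeated "perhaps a missing symbol, or compiled for a different version of Open MPI?" warnings in the log typically mean that MCA plugin files from one Open MPI build are being picked up by a different Open MPI version. As a diagnostic sketch, one can list which MCA plugins are actually present in a plugin directory and compare them against the version `ompi_info` reports. The helper name and the hard-coded path below are illustrative only; the path is taken from the error log in the thread, and this script is not part of FEniCS or Open MPI.]

```python
import os

def list_mca_plugins(plugin_dir):
    """Return the sorted base names of MCA component libraries
    (files whose names start with 'mca_') found in plugin_dir.

    Returns an empty list if the directory does not exist."""
    if not os.path.isdir(plugin_dir):
        return []
    return sorted(
        os.path.splitext(name)[0]
        for name in os.listdir(plugin_dir)
        if name.startswith("mca_")
    )

if __name__ == "__main__":
    # Open MPI plugin directory on Ubuntu 14.04, as seen in the error
    # log above; each listed component can then be checked against the
    # version printed by `ompi_info`.
    for plugin in list_mca_plugins("/usr/lib/openmpi/lib/openmpi"):
        print(plugin)
```

If the components listed here (e.g. `mca_shmem_mmap`, `mca_carto_file`) come from a different Open MPI installation than the one `ompi_info` reports, purging and reinstalling the Open MPI packages is the usual remedy.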
