I don't know whether it would make sense to send someone (or whether someone is
already supposed to go), but they are planning the next Open MPI developer
meeting, and since we have so much going on with Open MPI, I thought it would
make sense to forward this email.
Thanks,
Hello,
Some of the latest modifications to the SM BTL make a direct reference to ORTE
instead of the equivalent at the OMPI level.
The attached patch fixes that problem.
Thanks,
btl_sm_component_c.patch
Description: btl_sm_component_c.patch
This patch will actually apply correctly, not the first one. Sorry about that.
btl_sm_component_c.patch
Description: btl_sm_component_c.patch
On Feb 22, 2013, at 11:57 AM, "Vallee, Geoffroy R." <valle...@ornl.gov> wrote:
> Hello,
>
> Some of the latest modifications to the SM BTL make a direct reference to ORTE
> instead of the equivalent at the OMPI level.
Well apparently not… another try… sorry for the extra noise.
btl_sm_component_c.patch
Description: btl_sm_component_c.patch
On Feb 22, 2013, at 12:08 PM, "Vallee, Geoffroy R." <valle...@ornl.gov> wrote:
> This patch will actually apply correctly, not the first one. Sorry about that.
> … to typedef ompi_local_rank_t. I've committed the complete fix.
>
> Thanks
> Ralph
>
>
> On Feb 22, 2013, at 9:15 AM, "Vallee, Geoffroy R." <valle...@ornl.gov> wrote:
>
>> Well apparently not… another try… sorry for the extra noise.
>>
>>
>>
>
Hi,
Small patch that removes the use of an ORTE constant that is not justified; the
OPAL one should be used instead.
Thanks,
ompi_info_support.patch
Description: ompi_info_support.patch
Hi,
I found a very unexpected behavior with r29217:
% cat ~/.openmpi/mca-params.conf
#pml_base_verbose=0
pml_base_verbose=0
% mpicc -o helloworld helloworld.c
Then if I update the mca-params.conf to have two identical entries, I get
segfaults:
% cat ~/.openmpi/mca-params.conf
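The listing of the updated file is truncated in the archive; a hypothetical reconstruction of a configuration with two identical active entries (contents assumed, not recovered from the message) would be:

```
# hypothetical ~/.openmpi/mca-params.conf with two identical entries
pml_base_verbose=0
pml_base_verbose=0
```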
Hi,
Instead of references to the RTE layer, there are a few direct references to
ORTE symbols in the current OMPI layer. The attached patches fix the problem.
Thanks,
proc_c.patch
Description: proc_c.patch
comm_c.patch
Description: comm_c.patch
Too bad all this happened so fast; otherwise ORNL would at least have
participated in the call to understand what is going to happen (since we have an
RTE module that we maintain). Any chance we could have a summary?
Thanks,
On May 1, 2014, at 2:40 PM, Ralph Castain wrote:
>> On Sep 1, 2016, at 2:56 PM, Vallee, Geoffroy R. <valle...@ornl.gov> wrote:
>>
>> Hello,
>>
>> I get the following problem when we compile OpenMPI-2.0.0 (it seems to be
>> specific to 2.x; the problem did not appear with 1.10.x) with PGI:
>>
>
Hello,
I get the following problem when we compile OpenMPI-2.0.0 (it seems to be
specific to 2.x; the problem did not appear with 1.10.x) with PGI:
CCLD opal_wrapper
../../../opal/.libs/libopen-pal.so: undefined reference to `opal_atomic_sc_64'
../../../opal/.libs/libopen-pal.so: undefined
> … going to test 2.0.2.rc3 ASAP and try to get PGI 16.4 coverage added in
>
> -Paul
>
> On Thu, Sep 1, 2016 at 12:48 PM, Jeff Squyres (jsquyres) <jsquy...@cisco.com>
> wrote:
> Please send all the information on the build support page and open an issue
> at github. Thanks.
>
look.
>
>
>> On Sep 1, 2016, at 9:20 PM, Paul Hargrove <phhargr...@lbl.gov> wrote:
>>
>> I failed to get PGI 16.x working at all (licence issue, I think).
>> So, I can neither confirm nor refute Geoffroy's reported problems.
>>
>> -Paul
>
Hi,
I am running some tests on a PPC platform that is using LSF and I see the
following problem every time I launch a job that runs on 2 nodes or more:
[crest1:49998] *** Process received signal ***
[crest1:49998] Signal: Segmentation fault (11)
[crest1:49998] Signal code: Address not mapped
Hi,
HWLOC 2.0.x support was brought up during the call. FYI, I am currently using
(and still testing) hwloc 2.0.1 as an external library with master and I did
not face any major problem; I only had to fix minor things, mainly for putting
the HWLOC topology in a shared memory segment. Let me
> … https://github.com/open-mpi/ompi/pull/4677.
>
> If all those issues are now moot, great. I really haven't followed up much
> since I made the initial PR; I'm happy to have someone else take it over...
>
>
>> On May 22, 2018, at 11:46 AM, Vallee, Geoffroy R. <valle...@ornl
Hi,
Sorry for the slow feedback, but hopefully I now have what I need to give
feedback in a more timely manner...
I tested the RC on Summitdev at ORNL
(https://www.olcf.ornl.gov/for-users/system-user-guides/summitdev-quickstart-guide/)
by running a simple test (I will be running more tests for
Hi,
I do not see a 3.1.1rc2 but instead a final 3.1.1; is that expected? Anyway, I
tested the 3.1.1 tarball on 8 Summit nodes with NetPIPE and IMB. I did not see
any problems, and the performance numbers look good.
Thanks
From: Barrett, Brian via devel
Date: July 1,
Hi,
I tested on Summitdev here at ORNL and here are my comments (but I only have a
limited set of data for Summitdev, so my feedback is somewhat limited):
- netpipe/mpi is showing a slightly lower bandwidth than the 3.x series (I do
not believe it is a problem).
- I am facing a problem with UCX,
Hi,
I tested the RC on Summitdev at ORNL and everything is looking fine.
Thanks,
> On Aug 15, 2018, at 6:16 PM, Barrett, Brian via devel
> wrote:
>
> The first release candidate for the 3.1.2 release is posted at
> https://www.open-mpi.org/software/ompi/v3.1/
>
> Major changes include
FYI, that segfault problem did not occur when I tested 3.1.2rc1.
Thanks,
> On Aug 17, 2018, at 10:28 AM, Pavel Shamis wrote:
>
> It looks to me like an MXM-related failure?
>
> On Thu, Aug 16, 2018 at 1:51 PM Vallee, Geoffroy R. wrote:
> Hi,
>
> I ran some tests o
ng.
>
> I'm assuming the MXM failure has been around for a while, and the correct way
> to fix it is to upgrade to a newer Open MPI and/or use UCX.
>
>
>> On Aug 17, 2018, at 11:01 AM, Vallee, Geoffroy R. wrote:
>>
>> FYI, that segfault problem did no