> On May 18, 2016, at 6:59 PM, Jeff Squyres (jsquyres)
> wrote:
>
> On May 18, 2016, at 6:16 PM, Ryan Novosielski wrote:
>>
>> I’m pretty sure this is no longer relevant (having read Roland’s messages
>> about it from a couple of years ago now). Can
Hi Siegmar,
Sorry for the delay; I seem to have missed this one.
It looks like there's an error in the way the native methods process
Java exceptions. The code correctly builds up an exception message for
cases where the MPI C layer returns non-success, but not if the problem occurred
in one of
Works perfectly for me, so I believe this must be an environment issue. I am
using gcc 6.0.0 on CentOS 7 (x86):
$ mpirun -n 1 -host bend001 --slot-list 0:0-1,1:0-1 --report-bindings
./simple_spawn
[bend001:17599] MCW rank 0 bound to socket 0[core 0[hwt 0-1]], socket 0[core
1[hwt 0-1]],
Hi Ralph and Gilles,
the program breaks only if I combine "--host" and "--slot-list". Perhaps this
information is helpful. I am using a different machine now, so you can see that
the problem is not restricted to "loki".
pc03 spawn 115 ompi_info | grep -e "OPAL repo revision:" -e "C compiler
> On May 24, 2016, at 6:21 AM, Siegmar Gross
> wrote:
>
> Hi Ralph,
>
> I copy the relevant lines to this place, so that it is easier to see what
> happens. "a.out" is your program, which I compiled with mpicc.
>
> >> loki spawn 153 ompi_info | grep -e
For most commercial applications, e.g., Ansys Fluent, Abaqus, NASTRAN, and
PAM-CRASH, IBM Platform MPI is bundled with each application and is the default
MPI when running parallel simulations. Depending on which Abaqus release
you're using, your choices are IBM Platform MPI or Intel MPI. I don't
Hi Ralph,
I copy the relevant lines to this place, so that it is easier to see what
happens. "a.out" is your program, which I compiled with mpicc.
>> loki spawn 153 ompi_info | grep -e "OPAL repo revision:" -e "C compiler
>> absolute:"
>> OPAL repo revision: v1.10.2-201-gd23dda8
>> C
On May 24, 2016, at 7:19 AM, Siegmar Gross
wrote:
>
> I don't see a difference for my spawned processes, because both functions will
> "wait" until all pending operations have finished before the object is
> destroyed. Nevertheless, perhaps my small
Doesn't Abaqus do its own environment setup? I.e., I'm *guessing* that you
should be able to set up your environment startup files (e.g., $HOME/.bashrc) to
point your PATH / LD_LIBRARY_PATH to whichever MPI implementation you
want, and Abaqus will do whatever it needs to a) be
> On May 24, 2016, at 4:19 AM, Siegmar Gross
> wrote:
>
> Hi Ralph,
>
> thank you very much for your answer and your example program.
>
> On 05/23/16 17:45, Ralph Castain wrote:
>> I cannot replicate the problem - both scenarios work fine for me. I’m not
Hi Ralph,
thank you very much for your answer and your example program.
On 05/23/16 17:45, Ralph Castain wrote:
I cannot replicate the problem - both scenarios work fine for me. I’m not
convinced your test code is correct, however, as you call Comm_free on the
inter-communicator but didn’t call
Megdich Islem writes:
> Yes, Empire does the fluid structure coupling. It couples OpenFoam (fluid
> analysis) and Abaqus (structural analysis).
> Do all the software packages need to have the same MPI architecture in order
> to communicate?
I doubt it's doing that, and
Ralph Castain writes:
> Nobody ever filed a PR to update the branch with the patch - looks
> like you never responded to confirm that George’s proposed patch was
> acceptable.
I've never seen anything asking me about it, but I'm not an OMPI
developer in a position to review
Yes, Empire does the fluid structure coupling. It couples OpenFoam (fluid
analysis) and Abaqus (structural analysis).
Do all the software packages need to have the same MPI architecture in order to
communicate?
Regards,
Islem
On Tuesday, May 24, 2016, at 1:02 AM, Gilles Gouaillardet