From the behaviour you describe, this problem is most likely not caused by
deal.II but by the MPI setup or the cluster configuration. The code runs
fine on any single node and only fails when certain nodes are combined,
which points to an MPI or system-level issue rather than a library problem.
You load the dealii_9.7.0_intel module and source Intel oneAPI, but the job
is launched with /usr/bin/mpiexec, which belongs to OpenMPI 4.1.5; in my
opinion this mismatch alone can produce node-dependent failures. The MPI
launcher and the MPI library the code was compiled against must come from
the same implementation, and you can check which library the executable is
actually linked against with ldd, as in the sketch below.
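For example, assuming your binary is called ./my_app (replace it with the
actual executable name), the following should show whether the linked MPI
library and the launcher belong to the same implementation:

  # Which MPI shared library is the executable linked against?
  ldd ./my_app | grep -i mpi

  # Which MPI implementation does the system launcher belong to?
  which mpiexec
  mpiexec --version

If ldd points to Intel MPI libraries from the oneAPI tree while mpiexec
--version reports Open MPI, the two do not match, and you should launch
with the mpirun/mpiexec that ships with the sourced oneAPI environment
rather than /usr/bin/mpiexec.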
It is also quite possible that node3 and node4 have a different network or
communication setup than node1 and node2, for example a different
interconnect or different MPI transport settings, which would explain why
some node combinations work while others fail. A quick way to test this is
to force MPI to use plain TCP for communication; if the job then runs on
every node combination, the issue lies in the cluster configuration and not
in deal.II.
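As a rough sketch (the exact options depend on which MPI you end up using;
the lines below assume Open MPI 4.x and Intel MPI, respectively):

  # Open MPI: avoid the high-speed fabric, use shared memory + TCP only
  mpirun --mca pml ob1 --mca btl self,vader,tcp -np 64 ./my_app

  # Intel MPI: select the TCP provider instead of the default fabric
  export I_MPI_FABRICS=shm:ofi
  export FI_PROVIDER=tcp
  mpirun -np 64 ./my_app

If this runs cleanly across node1 + node2, the high-speed interconnect (or
its configuration on those nodes) is the likely culprit.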
In short, the right order is to first make the MPI build and the MPI
launcher consistent, and then launch the job through the hostfile that PBS
provides, as in the snippet below.
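For reference, a minimal PBS sketch; the resource request, the oneAPI path,
and the executable name are placeholders that you should adapt to your
setup:

  #!/bin/bash
  #PBS -l select=2:ncpus=32:mpiprocs=32
  #PBS -l walltime=01:00:00
  cd $PBS_O_WORKDIR

  # Use the same MPI stack the deal.II module was built with (Intel oneAPI
  # here; the path is an assumption, use whatever your site provides)
  source /opt/intel/oneapi/setvars.sh

  # $PBS_NODEFILE lists exactly the hosts PBS allocated to this job
  mpirun -np 64 -machinefile $PBS_NODEFILE ./my_app

Note that Intel MPI's mpirun usually detects PBS and reads $PBS_NODEFILE on
its own, so passing -machinefile explicitly mainly makes the intent visible
and guards against picking up a stale host list.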

On Sun, 18 Jan, 2026, 12:44 Praveen C, <[email protected]> wrote:

> Have you tried running some hello world example?
>
> What happens if you put
>
> mpirun -np 64 hostname
>
> in your PBS script?
>
> best
> praveen
>
> On Sun, Jan 18, 2026 at 12:23 PM ME20D503 NEWTON <[email protected]>
> wrote:
>
>> Hello deal.II community
>>
>> I am working with the deal.II finite element library and recently
>> transitioned from workstation based simulations to an HPC environment. My
>> simulations are 3D problems with a large number of degrees of freedom, so I
>> am using MPI parallelisation.
>>
>> I am facing an issue while running an MPI-based deal.II application on
>> our institute’s HPC cluster, and I would appreciate your guidance.
>>
>> *Software details:*
>>
>>    - deal.II version: 9.7.0
>>    - deal.II module: dealii_9.7.0_intel
>>    - MPI launcher available on the system: /usr/bin/mpiexec (OpenMPI 4.1.5)
>>    - Intel oneAPI environment is sourced in the job script
>>    - Scheduler: PBS Pro (version 23.06.06)
>>
>> *HPC node configuration (from pbsnodes):*
>>
>>    - node1: 32 CPUs, 125 GB RAM
>>    - node2: 32 CPUs, 126 GB RAM
>>    - node3: 32 CPUs, 504 GB RAM
>>    - node4: 32 CPUs, 504 GB RAM
>>
>> *Observed behaviour:*
>>
>>    - The code runs correctly on any single node.
>>    - The code runs correctly when using node3 + node4 together.
>>    - The code fails when using node1 + node2 together, or with other mixed
>>      node combinations.
>>
>> *PBS job script and error file are attached for your reference.*
>>
>>
>> *Question:*
>> Does this behaviour indicate a known issue related to MPI launcher usage,
>> node allocation, or deal.II configuration on PBS-based clusters? Any
>> guidance on how such node-combination-dependent failures should be
>> diagnosed from the deal.II side would be very helpful.
>>
>> Thank you for your time and support.
>>
>> Best regards,
>> Newton
>>
