Hi,
I have solved the problem by installing only the petsc-openmpi
packages:
~]# rpm -qa | grep petsc
petsc-openmpi-3.16.4-3.fc36.x86_64
petsc-openmpi-devel-3.16.4-3.fc36.x86_64
python3-petsc-openmpi-3.16.4-3.fc36.x86_64
~]#
~$ cat e.c
#include
int main(int argc,
Hi all.
On 23/08/22 16:33, Rafel Amer Ramon wrote:
Hi,
the files rules and variables are in /usr/lib64/petsc/conf, so I have set
~$ export PETSC_DIR=/usr
~$ cat Makefile
e: e.o
${CLINKER} -o e e.o ${PETSC_LIB}
include ${PETSC_DIR}/lib64/petsc/conf/variables
include
Hm - I don't see this.
[root@pj01 ~]# ldd /usr/lib64/libpetsc64.so
linux-vdso.so.1 (0x7ffd10f66000)
libflexiblas64.so.3 => /lib64/libflexiblas64.so.3 (0x7f9e1fe0)
libcgns.so.4.3 => /lib64/libcgns.so.4.3 (0x7f9e1fd19000)
[root@pj01 ~]# yum search petsc |grep devel
Last metadata expiration check: 0:01:57 ago on Tue 23 Aug 2022 10:23:51 AM CDT.
petsc-devel.i686 : Portable Extensible Toolkit for Scientific Computation
(developer files)
petsc-devel.x86_64 : Portable Extensible Toolkit for Scientific Computation
On Tue, Aug 23, 2022 at 10:33 AM Rafel Amer Ramon
wrote:
> Hi,
>
> the files rules and variables are in /usr/lib64/petsc/conf, so I have set
>
> ~$ export PETSC_DIR=/usr
>
> ~$ cat Makefile
>
> e: e.o
> ${CLINKER} -o e e.o ${PETSC_LIB}
>
> include ${PETSC_DIR}/lib64/petsc/conf/variables
>
1) Run the trivial PETSc program with 2 MPI ranks and -info (send the output)
2) do a ldd on the PETSc program and the pure MPI program to see if they are
all using the same libraries
3) add an MPI_Init() immediately before the PetscInitialize() and
immediately after the
Hi,
the files rules and variables are in /usr/lib64/petsc/conf, so I
have set
~$ export PETSC_DIR=/usr
~$ cat Makefile
e: e.o
${CLINKER} -o e e.o ${PETSC_LIB}
include ${PETSC_DIR}/lib64/petsc/conf/variables
include
On Tue, Aug 23, 2022 at 9:48 AM Rafel Amer Ramon wrote:
>
> Hi,
>
> yes, I can compile and run it:
>
> ~$ mpicc -o pi-mpi pi-mpi.c -lm
> ~$ mpirun -np 16 --hostfile ~/hosts ./pi-mpi
> Number of intervals: 2048
> Result: 3.1411043724
> Accuracy: 0.0004882812
> Time: 0.0314383930
> ~$
>
>
>
Hi,
yes, I can compile and run it:
~$ mpicc -o pi-mpi pi-mpi.c -lm
~$ mpirun -np 16 --hostfile ~/hosts ./pi-mpi
Number of intervals: 2048
Result: 3.1411043724
Accuracy: 0.0004882812
Time: 0.0314383930
~$
Best regards,
Rafel
Can you run anything in parallel? Say the small sample code that calculates
pi?
https://www.cs.usfca.edu/~mmalensek/cs220/schedule/code/week09/pi-mpi.c.html
Thanks,
Matt
On Tue, Aug 23, 2022 at 6:34 AM Rafel Amer Ramon wrote:
>
> Hi,
>
> mpicc and mpirun are from the package
On Mon, Aug 22, 2022 at 12:12 PM Patrick Alken
wrote:
> Thank you, I have read that document. I have changed my criteria to:
>
>
> if (j >= first && (j - first) < m) {
>
As Barry says, this should be
if ((j >= first) && (j < last)) {
> /*diagonal*/
>
> else
>
> /*off-diagonal*/
>
>
> It
You can try the following. Save your shell matrix to disk with the option
-eps_view_mat0 binary:amatrix.bin
then repeat the computation with ex4.c loading this file.
Note that this should be done for a small problem size, because the conversion
from a shell matrix implies a matrix-vector product
Hi Jose,
Thanks for your reply.
It represents a linear operator. In my shell matrix, I am computing the
non-linear residuals twice with perturbed flow variables. The matrix-vector
product is computed as:
A*v = (R(q+eps*v) - R(q-eps*v))/(2*eps)
R is the non-linear residual. q is my flow
Hi,
mpicc and mpirun are from the package
openmpi-devel-4.1.2-3.fc36.x86_64
and the petsc64 library is linked with
/usr/lib64/openmpi/lib/libmpi.so.40
~# ldd /lib64/libpetsc64.so.3.16.4
linux-vdso.so.1 (0x7fff5becc000)
The relative residual norms that are printed at the end are too large. For NHEP
problems, they should be below the tolerance. Don't know what is happening.
Does your shell matrix represent a linear (constant) operator? Or does it
change slightly depending on the input vector?
> El 23 ago
Hi Jose,
I think the previous problem comes from my side. I have some uninitialized
values in my part of the code that computes the non-linear residuals, so it
produces a NaN when it tries to compute the matrix-vector product using finite
differences. This might make SLEPc/PETSc do unexpected